Build OS features in Software, not Hardware

Francois-Rene Rideau rideau@nef.ens.fr
Thu, 5 Dec 1996 12:35:47 +0100 (MET)


Just a few comments on Eric's fine replies...
First of all,
for hardware design topics,
please follow up to the MISC mailing list
(http://pisa.rockefeller.edu:8080/MISC/).

>>>: Eric W. Biederman
>>: Alaric B. Williams
>: Eric W. Biederman

>> I mean a compiler that does all the checking
>> for illegal memory accesses and other crashmongers!
>
> Compilers can not guarantee a program won't crash.
>
Surely, given an expressive enough language,
no compiler can tell, for an arbitrary program,
whether it will crash or not.
   Happily, computers are not used to run random arbitrary programs!
Well, there's genetic programming and Core War, fair enough.
When you program,
* you know what you want, so you know what a correct program is,
 and, even more easily, what a non-crashing program is;
* you write your program specifically so you can prove to yourself
 that the program is correct;
* if programming under contract, you are expected to have done the above,
 even though the customer has no way to check it,
 except perhaps by having you fill in red tape.
Well, instead of letting all this knowledge and these proofs
stay forever untold and unchecked,
it is possible to use a language that can express them all!
A computer language could very well express all the requirements
for the program, and logically check a proof that they are fulfilled.
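   For instance, here is a minimal sketch in Scheme (the name safe-ref
and its run-time check are purely hypothetical illustrations) of making
such knowledge explicit as a stated requirement; a real proof-checking
language would discharge the check statically instead of testing it at
run time:

    ;; The requirement "i is a valid index into vec" is stated
    ;; explicitly, so it can be verified, instead of staying in the
    ;; programmer's head while the access silently crashes.
    (define (safe-ref vec i)
      (if (and (integer? i) (<= 0 i (- (vector-length vec) 1)))
          (vector-ref vec i)
          (error "requirement violated: index out of range" i)))

    (safe-ref (vector 'a 'b 'c) 1)   ; => b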

> I suspect the proper place for `proofs' is in the macros themselves.
I'm not sure what you mean.

> In truth the
> only languages that I have seen that don't allow illegal memory
> accesses are languages that can't _express_ them!  And even that isn't
> sufficient.
Why isn't it sufficient?


>> my general argument is that
>> specialised instructions in few bytes can provide amazing
>> speed/compactness wins in the cases they come in useful. Smarter
>> compilers can detect semantics equivalent to what this instruction
>> does more reliably. The choice of truly useful special instructions
>> is important, of course! We currently don't need an instruction that
>> resolves a series of CAT scan cross sections into a 3D image,
>> although such a CPU would be popular with medical visualisation
>> systems!
>
> I doubt it.  It is quite unlikely: it would be quite expensive (since
> it has a small market), and slower than a high-end processor with an
> optimized compiled program (probably a less well optimized i/o system
> or some other thing).  With hardware the motto is: make the common case
> fast.  Also there is the fact that everything you do in hardware takes
> up silicon area on a chip.  Currently I believe the number of
> transistors per microprocessor is increasing faster than silicon area
> per wafer.  What is definitely true is that high performance processor
> design is pushing the increase in microprocessor performance _faster_
> than the increase in transistor speed.
>
Yes, the problem is that we can't neglect *price*:
it costs a lot to develop, *produce*, *debug* and maintain
specific hardware that has a limited lifetime and market,
so you shouldn't do it unless you are sure
you'll produce tons of similar specialized chips
that need to run faster than software can.
   Even then, the best way to solve the hardware problem
is to turn hardware into software, by using
reliable and efficient hardware compilers.
Perhaps DPGAs can solve that problem (see www.ai.mit.edu),
by providing cheap but efficient programmable hardware.
I'm waiting for more evidence from them, though.

> Code size is not really an issue.  It is at most a difference in a
> factor of 2 between risc and cisc.  The only group that I have seen
> claim smaller code size are FORTH programmers, and studies have shown
> it isn't a bytecode versus assembly thing.  If the advantage really
> exists it is a programming philosophy inspired by that language, or the
> fact that it attracts better programmers :)
Yeah.
But I think the advantages of FORTH
are shared by all functional programming languages:
factoring programs into a composition of higher-order functions,
rather than spelling out their ground-level expansion in a dumb formalism.
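   As a trivial Scheme illustration, one higher-order function factors
out the loop that sum and product would otherwise each spell out at
ground level:

    ;; The iteration pattern is written once...
    (define (fold op init lst)
      (if (null? lst)
          init
          (fold op (op (car lst) init) (cdr lst))))

    ;; ...and each particular program is just a composition.
    (define (sum lst)     (fold + 0 lst))
    (define (product lst) (fold * 1 lst))

    (sum '(1 2 3 4))       ; => 10
    (product '(1 2 3 4))   ; => 24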

> The kind of instruction that could prove important are simple
> instructions that make the basic set of operations more powerful.
> Though with current technology things that make branches less frequent
> are very important.  What I suspect would be useful is if there was
> found a basic set of operations from which both basic integer
> operations, `Multimedia operations', and floating point operations
> could all be synthesized at no more than the current cost.  Thus
> simplifying the processor and making more space for optimization.
>
Floating point operations are not needed.
All calculations can be handled with integer operations
emulating fixed-point operations.
IEEE floating point is used to provide a wide dynamic range
(the exponent) together with high precision (the mantissa);
but most of the time, the range doesn't change (much) within a loop,
and that much precision is not needed
(or else, much more precision is needed),
so that range management could (mostly) be factored out of loops,
and the cost of precision could be reduced.
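   As a rough Scheme sketch of the fixed-point claim (the scale of
2^16 fractional bits is an arbitrary assumption), plain integers do
the work that floating point would otherwise be invoked for:

    ;; Represent x by the integer closest to x * 2^16.
    (define scale 65536)
    (define (fx x)       (inexact->exact (round (* x scale))))
    (define (fx->real n) (exact->inexact (/ n scale)))

    ;; Addition is ordinary integer addition; multiplication just
    ;; needs one rescaling step.
    (define (fx+ a b) (+ a b))
    (define (fx* a b) (quotient (* a b) scale))

    (fx->real (fx* (fx 1.5) (fx 2.25)))   ; => 3.375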
   Doing just the needed operation would simplify the hardware,
and hence speed things up a lot. If normalizing numbers to IEEE
format is a frequent operation, some specialized circuitry
could be designed that would be invoked explicitly, and only when needed.
   "Multimedia operations": what's that? Mixing sound waves?
FFTs for J/MPEG? Scaling text fonts?
All additions and multiplications!
All extremely parallelizable stuff!
If you want to reduce cost,
then instead of building special purpose coprocessors
that would be cost a lot and sit idle most of the time,
use general purpose MISC processors in parallel!
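   For instance, mixing two buffers of 16-bit sound samples is nothing
but an independent, clipped addition per sample, which is why it
spreads so well over parallel processors (a toy Scheme sketch):

    ;; Each output sample depends only on the two matching input
    ;; samples, so every sample can be computed in parallel.
    (define (clip x)  (max -32768 (min 32767 x)))
    (define (mix a b) (map (lambda (x y) (clip (+ x y))) a b))

    (mix '(1000 -20000 30000) '(500 -20000 10000))
    ; => (1500 -32768 32767)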

>>>[Humans are occasionally better than computers at optimizing;
>>> computers are much more regular]
Imagination vs regularity is precisely the point
about humans vs computers,
and why each is the perfect complement of the other.
Having humans unnecessarily do computer work is a crime against mankind,
and trying to replace humans with computers is nonsense.

> when it is a critical inner loop that _must_ be
> optimized (especially starting with the compiler's optimized code),
> there are some significant performance improvements you can make:
> both applying known optimizations that your compiler doesn't make, and
> after finding the bottlenecks, rearranging your data structures so you
> can run the inner loop faster.
Rather than requiring the programmer to hand-code the optimization,
which only works in one very special case,
I'd like to emphasize the possibility for the programmer
to *teach the compiler a new optimization technique*,
in a reflective compiler architecture.
   The advantage is that you tell your idea to the compiler,
and don't have to do the computer's work of applying it to each special case
while taking thousands of parameters into account,
like the interaction of the "optimization" with other optimizations,
which could turn it into a pessimization in some cases, etc.
   Of course, in some cases it is not worth teaching the compiler,
because it would take a long time, the case won't show up very often,
and it is simple enough that hand optimization is better.
But meta-programming is always better than ground-programming
when working for the long term, and should be made possible.
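   A toy Scheme sketch of what "teaching the compiler" could look like
(the optimize pass and its single rule are purely hypothetical, nothing
like the eventual Tunes interface): the idea "(+ x 0) is just x" is
stated once, and the compiler, not the programmer, applies it at every
place in the program tree where it is valid:

    ;; One rule, applied everywhere, instead of one hand edit per site.
    (define (optimize expr)
      (cond ((and (pair? expr)
                  (eq? (car expr) '+)
                  (= (length expr) 3)
                  (equal? (caddr expr) 0))
             (optimize (cadr expr)))          ; rule: (+ x 0) => x
            ((pair? expr) (map optimize expr))
            (else expr)))

    (optimize '(f (+ (* a b) 0) (+ c 0)))
    ; => (f (* a b) c)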


>>> What is needed in a language is a combination of what compilers do
>>> well.  Constant propagation and in general expression simplification
>>> and optimization.  And allowing people to do what they do well: write
>>> algorithms.  And especially attempt a mechanism that allows those
>>> algorithms to be applied to multiple types.
>
>> Yup. To rephrase, some goals of language designs:
>
>>  - it's easy to convert it into other languages (efficient compilers)
>>  - it's similar to how we think about the problem or solution thereof
>>  - it doesn't require the programmer to be unnecessarily specific -
>>    inflexible type rules, unnecessary ordering of statements, etc.
>
I'd replace the first point by "easy to compute on physical computers".
The third point is a particular case of the second.
The first and second are divergent goals.
   Most people do not solve this problem,
but make a static compromise in language design,
so that they can't have a generic unified language for all purposes.
   Tunes will not make any static compromise,
but will allow dynamic adaptation of the compiler to *both* goals,
through reflective programming.

>>> How proving and type safeness or in general assertion proving can be
>>> put into the above frame work, is a little bit tricky.  However I
>>> believe a clever use of macros could handle that case.  Also it might
>>> be wise to have a language with very simple types at the bottom and
>>> use the macros to build up the frame system.
>>
>
>>> Nicer syntax should be an added feature that simply compiles to a
>>> trivial syntax, instead of built into the language.
>[a lot about macros].
>
Yes, I agree that the base language should be minimal
and semantically simple,
and that extensions should be used to add features.
I agree that a reflective mechanism
to define new syntactic constructs and their semantics
should be used to define these extensions.
   But I'm not convinced that macros, as they are currently used
(let's take the best existing ones, those of Scheme),
are the solution to the problem:
they are used as a kind of low-level imperative meta-programming
without any semantic check.
I'd rather have a system to define the semantics of new syntactic
constructs in non-imperative ways,
so that the implementation of those syntactic constructs
could itself be subject to transformation and optimization.
   I'm not saying that this is not feasible using macros
to define defining-macros, given a powerful enough macro system.
Bootstrapping such a system will be a hard task, though...
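   For reference, here is roughly what the best existing macros give
you in Scheme: a new construct defined by syntax-rules pattern
rewriting. The expansion itself is declarative, but nothing checks
what the new construct *means*, which is exactly the missing piece:

    ;; A new syntactic construct: (while test body ...).
    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (if test
               (begin body ... (loop)))))))

    (define i 0)
    (while (< i 3)
      (display i)
      (set! i (+ i 1)))   ; displays 012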

== Fare' -- rideau@ens.fr -- Franc,ois-Rene' Rideau -- DDa(.ng-Vu~ Ba^n ==
Join the TUNES project for a computing system based on computing freedom !
                TUNES is a Useful, Not Expedient System
URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"