Build OS features in Software, not Hardware (was Re: Ne
Eric W. Biederman
ebiederm@cse.unl.edu
04 Dec 1996 22:50:24 -0600
>>>>> "Alaric" == Alaric B Williams <alaric@abwillms.demon.co.uk> writes:
Alaric> On 3 Dec 96 at 16:10, Eric W. Biederman wrote:
Alaric> Francois-Rene Rideau <rideau@ens.fr> wrote:
[snip MMU]
Alaric> Trap handlers in the OS don't take up much space, and I don't expect a
Alaric> runtime compiler to be a small piece of software.
>>
>> Tunes will definitely need a regular compilation facility. However
>> FORTHs regularly include a very simple runtime compiler (not
>> especially optimizing), and these are routinely included in embedded
>> environments.
Alaric> I mean a compiler that does all the checking for illegal memory accesses and
Alaric> other crashmongers!
Compilers cannot guarantee a program won't crash. I suspect the
proper place for `proofs' is in the macros themselves. In truth the
only languages I have seen that don't allow illegal memory
accesses are languages that can't _express_ them! And even that isn't
sufficient.
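For instance, a checked definition form can be built as an ordinary
macro. A toy sketch (my `defchecked' and its argument convention are
invented for illustration, not a Tunes design):

; DEFCHECKED defines a function that asserts a predicate on each
; argument before running the body, so the checking is layered on
; by a macro instead of being built into the base language.
; ARGS is a list of (variable predicate) pairs.
(defmacro defchecked (name args &body body)
  `(defun ,name ,(mapcar #'first args)
     ,@(mapcar (lambda (arg)
                 `(assert (,(second arg) ,(first arg))))
               args)
     ,@body))

; HALF accepts only integers; (half "3") signals an error at
; runtime, and a smarter compiler could discharge the assertion
; statically when it can prove it.
(defchecked half ((x integerp))
  (/ x 2))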
>> [snip]
Alaric> It could probably execute those four or so instructions in less time
Alaric> than my 486 can do an AAM, but it still bloats it.
>>
>> Ah but the compiler on the 486 isn't usually bright enough to use AAM
>> so it's hardly an issue.
Alaric> Ok, perhaps AAM is a bad example, but my general argument is that
Alaric> specialised instructions in a few bytes can provide amazing
Alaric> speed/compactness wins in the cases where they come in useful. Smarter
Alaric> compilers can detect semantics equivalent to what this instruction
Alaric> does more reliably. The choice of truly useful special instructions
Alaric> is important, of course! We currently don't need an instruction that
Alaric> resolves a series of CAT scan cross sections into a 3D image,
Alaric> although such a CPU would be popular with medical visualisation
Alaric> systems!
I doubt it. It is quite likely it would be quite expensive (since
it has a small market), and slower than a high end processor running an
optimized compiled program (probably with a less well optimized i/o system
or some other weak spot). With hardware the motto is: make the common case
fast. Also there is the fact that everything you do in hardware takes
up silicon area on a chip. Currently I believe the number of
transistors per microprocessor is increasing faster than silicon area
per wafer. What is definitely true is that high performance processor
design is pushing the increase in microprocessor performance _faster_
than the increase in transistor speed.
Computer Architecture is a discipline of hard compromises
(what can be done in the here and now), and is the one part of
computer science that has been very rigorously studied. The textbook
Computer Architecture: A Quantitative Approach bears looking at.
Code size is not really an issue. It is at most a difference of a
factor of 2 between RISC and CISC. The only group that I have seen
claim smaller code size are FORTH programmers, and studies have shown
it isn't a bytecode versus assembly thing. If the advantage really
exists it is a programming philosophy inspired by that language, or the
fact that it attracts better programmers :)
The kind of instructions that could prove important are simple
instructions that make the basic set of operations more powerful.
Though with current technology, things that make branches less frequent
are very important. What I suspect would be useful is if there were
found a basic set of operations from which basic integer
operations, `multimedia operations', and floating point operations
could all be synthesized at no more than the current cost, thus
simplifying the processor and making more space for optimization.
>> Well that's easy to prove: most humans can't program, much less in
>> assembly. Usually it's more the case that compilers generate more
>> consistent code quality than most programmers, while programmers at
>> any single spot can usually optimize better, though much more slowly.
Alaric> Can sometimes optimize better! Not always - especially with large
Alaric> pieces of code... indeed, by definition, if you sat at it long enough,
Alaric> you could use the compiler's rules to derive an optimised binary,
Alaric> then sit and stare at it until you had a good idea the compiler can't
Alaric> - but that's /hard/!
Exactly. But when it is a critical inner loop that _must_ be
optimized (especially starting from the compiler's optimized code),
there are significant performance improvements you can make:
both applying known optimizations that your compiler doesn't make, and,
after finding the bottlenecks, rearranging your data structures so you
can run the inner loop faster.
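A concrete sketch of both kinds of hand work (a toy example of my
own, assuming a Common Lisp compiler that will open-code declared
float arithmetic, as the better ones do):

; Before: generic arithmetic, so the compiler has to dispatch on
; the number types at every iteration.
(defun sum-squares (v)
  (let ((acc 0))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))

; After: the data structure is rearranged into a specialized float
; array, and declarations let the compiler open-code the inner loop.
(defun sum-squares-fast (v)
  (declare (type (simple-array single-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((acc 0.0))
    (declare (type single-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))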
>> A trouble with trusting compilers too much, especially with high level
>> languages, is that sometimes they can be so stupid :( Give me a
>> compiler that can optimize bubble sort into mergesort or quicksort
>> (without specific code for that case) and I'll change my mind.
Alaric> See my other emission (probably on the Tunes list now) about a
Alaric> fundamental theory of computation :-)
>> What is needed in a language is a combination of what compilers do
>> well: constant propagation and, in general, expression simplification
>> and optimization. And allowing people to do what they do well: write
>> algorithms. And especially attempt a mechanism that allows those
>> algorithms to be applied to multiple types.
Alaric> Yup. To rephrase, some goals of language designs:
Alaric> - it's easy to convert it into other languages (efficient compilers)
Alaric> - it's similar to how we think about the problem or solution thereof
Alaric> - it doesn't require the programmer to be unnecessarily specific -
Alaric> inflexible type rules, unnecessary ordering of statements, etc.
>> How proving and type safety, or in general assertion proving, can be
>> put into the above framework is a little bit tricky. However I
>> believe a clever use of macros could handle that case. Also it might
>> be wise to have a language with very simple types at the bottom and
>> use the macros to build up the frame system.
Alaric> Yup; a simple basic language makes static analysis systems easier,
Alaric> since we can factor out the syntactic sugar!
>> Nicer syntax should be an added feature that simply compiles to a
>> trivial syntax, instead of being built into the language.
Alaric> Uhuh. Like operator overloading:
Alaric> a+b -> +(a,b)
Mostly. The real benefit comes not at the direct level where the
transformation
a+b -> (+ a b) ; lisp/scheme syntax
is done, but at the level where you come to write macros.
Their purpose is of course to allow writing our own languages in
whatever base language we come up with. Suppose I want to write a
backwards version of `if'; call it `bif'.
The syntax of bif is to be:
(bif condition form_to_execute_on_false form_to_execute_on_true)
example:
(bif (>= x 3)
     (progn (print x) (print "is less than three!"))
     (progn (print x) (print "is three or more!")))
; I'm only familiar with how Common Lisp does this :(
(defmacro bif (condition false_form true_form)
  `(if ,condition ,true_form ,false_form))
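A quick check of the expansion, using the standard macroexpand-1:

(macroexpand-1 '(bif (>= x 3) (print "false branch") (print "true branch")))
; => (IF (>= X 3) (PRINT "true branch") (PRINT "false branch"))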
Or equivalently to highlight that it is just a normal function that
returns a list, and could do a lot of other things if it so chose
(jump off the empire state building, eat pizza, ...).
(defmacro bif (condition false_form true_form)
  (list 'if condition true_form false_form))
Or if I wanted to be really perverse I could write:
(defmacro bif (condition false_form true_form)
  `(progn (print "bif is about to execute")
          (if ,condition ,true_form ,false_form)
          (print "bif has executed")))
; note this version also discards the value of the IF and returns
; the value of the last PRINT instead.
Now none of what I did is truly safe in Common Lisp, because someone
could come along and, say, redefine `if' or `print' _after_ my macro,
making my code expand into things I didn't intend. However, if there
is some kind of syntactic closure as in Scheme (when I have time I
need to look at it more), the print and if can be bound to the
definitions of if and print at the macro's definition, so you can redefine
print and if with impunity (say, to throw an unknown function error) and
bif will still work correctly (even if they were macros themselves).
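To make the hazard concrete (the helper `emit' below is invented; I
use a stand-in because redefining `print' itself is off limits in
conforming Common Lisp):

; The expansion of BIF2 mentions EMIT by name; which EMIT runs is
; decided where bif2 is used, not where the macro was defined.
(defun emit (x) (print x))

(defmacro bif2 (condition false_form true_form)
  `(progn (emit "bif2 running")
          (if ,condition ,true_form ,false_form)))

; Someone later redefines EMIT ...
(defun emit (x) (error "emit was hijacked: ~a" x))

; ... and every use of BIF2 now errors, though BIF2 never changed.
(bif2 (> 2 3) 'was-false 'was-true)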
Also note that macros can manipulate scoping.
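For instance (the anaphoric `aif' below is a standard Lisp trick,
nothing Tunes-specific):

; AIF deliberately creates a binding, IT, that the caller's forms
; can see: the macro has reached into the caller's scope.
(defmacro aif (test true_form &optional false_form)
  `(let ((it ,test))
     (if it ,true_form ,false_form)))

(aif (assoc 'b '((a . 1) (b . 2)))
     (cdr it)      ; IT was bound by AIF, not by us
     'not-found)   ; => 2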
The fact that defining complex macros is much simpler in trivial
grammars, to the point of looking like simple code replacement (when,
the Scheme people say, you are doing something much deeper than simple
source code replacement), is a major win in comprehensibility.
How a syntactically safe macro would look in a complex syntactic language
like C++ is an interesting question. Templates and inline functions
cover two of the common cases, but don't touch half the power.
Just for completeness it should be noted that ANSI FORTH
has equivalent functionality to the Scheme macros with syntactic
closures, but there are no standard tools for building complex macros.
These `macros' have a slight edge in power, as they can get access to
the unprocessed source code stream and so can transform very complex
syntaxes if they choose.
Alaric> Then we get on with the compilation of a procedure call.
After the language has gone through macro expansion, yes.
[snip]
Alaric> Yup! That's the kind of approach the Self guys seem to be following.
Alaric> I mentioned Self in my other email, so I won't put the URL in
Alaric> again...
I have at least glanced at Self. But to date I haven't had a chance
to make deep investigations into very many of the complex syntactic
languages: Ada, Self, Modula 1, 2, 3 ...
Eric