Date: Tue, 13 May 1997 13:15:24 -0500
| From: David Gadbois <email@example.com>
| | From: "Dwight Hughes" <firstname.lastname@example.org>
| | Date: Tue, 13 May 1997 11:24:45 -0500
| |
| | There are many reasons to have a lean, mean, lower-level, lighter
| | weight Lisp for the kernel -- not just for programming the OS, but
| | for everyone that will write a device driver, a real time control
| | program, a new GC, a signal processor, ....
| I really don't understand this "small is good" valorization. The real
| beauty part of the lisp machines is that there is no boundary between
| the "kernel" and "application" code. Sure, you have to be more
| careful writing device drivers, the scheduler, the storage system,
| etc., but I think the LispOS should cater to making it easier to write
| ambitious applications rather than making it easier to write the
| hardware support. The payoff in applications for doing the hard work
| to make the system as a whole correct, consistent, and usable vastly
| outweighs the one-time costs of system implementation.
As I pointed out, I have no intention of leaving CL out of the system.
Nor would I want a baby-Lisp in the core. What I *do* want is one
able to deal directly and naturally with the lower-level data and
system constructs necessary. How are you thinking of creating
continuations as threads using CL? How do you program a new, highly
efficient GC using CL? "Kernel" to me is the core functionality that
makes everything go - the GC, virtual memory, basic hardware i/o, OOFS
primitives,... -- the "virtual machine", if you will. It is unlikely
to be a static heap of code -- but it will need to be a more or less
permanent resident in RAM for performance reasons *and* it is
extremely useful to base a system on a few well-chosen constructs
and primitives. For one, portability between machine architectures
is greatly improved (less to reimplement, less entanglement of
low-level design issues in the higher level portions of the system),
but my favorite advantage is "leverage" -- if the entire system
uses only a few basic primitives or mechanisms then anything you
do to improve the performance of these directly improves the whole
system -- this is a big win. You also wouldn't want your "kernel"
code spread all over the system -- you would take a severe performance
hit from constantly moving bits and pieces between RAM and cache.
There would be virtually no locality of reference, and the various
pieces of the "kernel" would live in memory pages scattered all over,
along with whatever code happened to be next to them on disk. The
costs of a piggy implementation will be felt every time you boot
your system.
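To make the continuations-as-threads question concrete, here is a rough
analogue in Python: cooperative "threads" built from suspendable
continuations, with generators standing in for the first-class
continuations that standard CL does not provide. The scheduler and task
names are hypothetical, purely for illustration -- a sketch of the
mechanism, not anyone's actual implementation.

```python
# Cooperative "threads" via continuation-like suspension points.
# Generators here play the role of resumable continuations; names
# (scheduler, worker) are made up for this sketch.
from collections import deque

def scheduler(tasks):
    """Round-robin: resume each task until it suspends or finishes."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # resume the task's continuation
            ready.append(task)         # it suspended: reschedule it
        except StopIteration:
            pass                       # it finished: drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # suspend, handing control back

print(scheduler([worker("a", 2), worker("b", 3)]))
```

The point is that the scheduler needs the ability to capture and resume
a computation mid-flight -- exactly the kind of sub-primitive a
lower-level core Lisp would have to expose.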
| Also, for example, the Linux kernel can fit on a 3.5 inch floppy.
| (Well, it can if you don't include too many drivers.) But in order to
| do anything actually useful with it, you need 200 MB worth of stuff in
| /usr. A 30 MB Genera world load still has a lot more functionality
| and is vastly more usable. By making the kernel small, you make
| everything else huge. What benefit is there to that?
This is not really a measure of what is or is not done in the "kernel",
but of integration of functionality and ability to share code between
various parts of the system. In Unix, each utility must reinvent
the various pieces of the wheel -- there is no way to reuse code
through inheritance, and all the useful functions in one program are
unavailable to the next, or impractical to use due to the vagaries
of C. Because of this sharing, a good Smalltalk system can come on
two or three floppies (including source code) -- and its "kernel",
the VM, is relatively simple.
(Not that it equals Genera, but you get the idea.)
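A minimal sketch of the kind of sharing described above, with
hypothetical class and method names: common machinery is written once
in a base class, and each "utility" inherits it rather than reinventing
it, the way every Unix tool carries its own copy of the same plumbing.

```python
# Shared machinery written once, reused by inheritance.
# All names here are invented for illustration.
class Tool:
    def __init__(self, lines):
        self.lines = lines

    def matching(self, pattern):
        # "grep-like" core, written once and inherited by every tool
        return [ln for ln in self.lines if pattern in ln]

class Counter(Tool):
    def count(self, pattern):
        return len(self.matching(pattern))      # reuse, not reinvention

class Printer(Tool):
    def show(self, pattern):
        return "\n".join(self.matching(pattern))

lines = ["foo bar", "baz", "foo qux"]
print(Counter(lines).count("foo"))
print(Printer(lines).show("foo"))
```

Improve `matching` once and both tools get faster -- the same "leverage"
argument made above for basing the kernel on a few well-chosen primitives.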
About now I should be chanting "I'm not worthy, I'm not worthy" but
what the hell, I've never let that stop me before <G>.