Tue, 13 May 1997 16:25:06 -0500 (CDT)
From: email@example.com (Henry G. Baker)
Date: Tue, 13 May 1997 13:53:46 -0700 (PDT)
> By making the kernel small, you make everything else huge. What
> benefit is there to that?
I don't see how this follows. There isn't a kernel
vs. everything-else philosophy here. You start with a small,
efficient, easily portable core, and start layering stuff on top of
it. It grows organically, and upon occasion, may 'reflect' upon
its own lower-level implementation.
Perhaps we are talking past each other here. What I am envisioning is
a system with what are usually considered to be low-level primitives
that look and act just like functions and macros at the higher levels.
Example: Low-level functions like READ-PHYSICAL-MEMORY-ADDRESS,
VMA-FROM-PMA, SET-VMA-PROTECTION, MAP-VMA-TO-PMA, etc., are just Lisp
functions that can look like any other Lisp function. Maybe we'll
stick a few %'s in front of the name to indicate that if you use them
wrong, you lose. And, of course, you'll need some way to have the
compiler emit the right code for them, maybe via in-line
assembler notation and an indication of a special calling convention.
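To make this concrete, here is a sketch of what defining one of those
primitives might look like. Everything here is hypothetical -- the
DEFINE-PRIMITIVE form, its options, and the in-line code notation are
invented for illustration; a real system would need its own compiler
hook for this (compare the VOP machinery in CMUCL-family compilers):

;; Hypothetical sketch: a "dangerous" low-level primitive that the
;; compiler open-codes.  At the language level,
;; %READ-PHYSICAL-MEMORY-ADDRESS is an ordinary Lisp function; the
;; imagined DEFINE-PRIMITIVE form tells the compiler to emit a single
;; load instruction in-line instead of a full function call.
(define-primitive %read-physical-memory-address (pma)
  (:arguments ((pma fixnum)))           ; raw physical address
  (:results   ((word fixnum)))
  (:calling-convention :none)           ; no frame, no GC safe point
  (:code                                ; in-line assembler notation
    (load word (physical pma))))

;; From ordinary code it is called like any other function:
;;   (%read-physical-memory-address #x000F0000)

The % prefix is pure convention: nothing stops you from calling it,
but the name warns you that the compiler trusts you completely.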
But if an application program really wanted to grovel over a DMA
buffer without the higher-level abstractions copying it a zillion
times, at least it could. Conversely, if the code that does page
replacement wanted to use some whizzy, distributed neural net to
figure out what to swap out, it could, as long as it made sure all the
necessary resources were nailed down.
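The DMA case might look something like the sketch below. Again, all
of the names are hypothetical: %READ-PHYSICAL-MEMORY-ADDRESS is the
kind of primitive listed earlier, and WITH-WIRED-PAGES is an imagined
macro that keeps the buffer's pages nailed down for the duration of
its body:

;; Hypothetical sketch: an application groveling over a DMA buffer in
;; place, with zero intermediate copies.  The buffer is addressed by
;; its physical address; WITH-WIRED-PAGES guarantees the pager won't
;; yank the pages out from under us mid-scan.
(defun checksum-dma-buffer (buffer-pma nwords)
  (with-wired-pages (buffer-pma nwords)
    (let ((sum 0))
      (dotimes (i nwords sum)
        (setq sum (logxor sum
                          (%read-physical-memory-address
                           (+ buffer-pma (* i 4)))))))))

Nothing about this function is privileged; it is just Lisp code that
happens to call functions whose names start with %.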
What I object to is any kind of barrier between subsystems that is
enforced other than simply by convention. Sure, you could shoot
yourself in the foot more easily, or even ravage whole continents, but
you'd also be able to spend much less time reconfiguring the barriers.