Francois-Rene Rideau fare@tunes.org
Mon, 6 Sep 1999 19:27:20 +0200

Note: I removed prof. Shapiro from the Cc: list,
because I doubt he wants to participate in this conversation;
but I've added him in Bcc: so as to give him a chance to do so.

>: Eric W. Biederman

I think you miss the points I am trying to make:
a) design comparisons are more meaningfully done "at constant functionality"
b) we must separate the binary-level concerns from the source-level ones
c) modularity, typing, proofs, etc, are important at the source-level
d) efficiency, on the other hand, is important at the binary-level

> 1)  Micro kernels look good for hard realtime systems:
>     A) The source code is smaller and easier to prove and verify,
>        and predict.
Is it the micro-kernel approach that simplifies things,
or is it rather the fact that you have smaller and simpler overall systems?
At constant functionality, I don't see quite what microkernels bring;
small and easy things have been done without any "kernel" at all for decades
on embedded controllers, with just mix-and-match libraries of executive code.
I know people who've programmed real-time systems in a microkernel (Chorus),
and it certainly was no particular joy; conversely, it was perhaps
no particular pain either; the pain was largely in the real-time aspect itself.
Insofar as the microkernel design was relevant, it didn't help.

>     B) Mach doesn't fit my definition of a useable microkernel because
>        it still has hardware drivers in the kernel.
What's the interest of having drivers outside the kernel?
In a "reliable" system, the most important thing is to know what class
of failures exactly you're safe against. What kind of failures do you
think that microkernels eliminate? Are they important failures?
If it's illegal memory accesses that you're expecting your kernel to catch,
can't they be caught by classical compile-time or run-time type-checking?
If it's DMA and IO trespassing that you fear, instead of MMU-catchable errors,
how will MMU-based protection help with it in any way?
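To illustrate the point about type-checking (a sketch in OCaml, standing in for the ML-family languages mentioned later; the buffer and function names are invented for the example): an out-of-bounds write is caught by the language's run-time check as a typed exception, with no MMU involved, and the error is contained rather than silently corrupting neighboring data.

```ocaml
(* A "driver" buffer: in C, writing past the end would silently
   corrupt neighboring kernel data; here the language run-time
   raises a typed exception instead. *)
let buffer = Bytes.make 16 '\000'

(* Attempt a write; report whether it was legal. The stray access
   is caught and contained as an ordinary, handleable value. *)
let write_byte buf i v =
  try Bytes.set buf i v; true
  with Invalid_argument _ -> false

let () =
  assert (write_byte buffer 5 'x');        (* legal write succeeds *)
  assert (not (write_byte buffer 100 'x')) (* illegal write is caught *)
```

The same discipline can often be enforced at compile-time (bounds proofs, typed references), at which point even the run-time check disappears.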

> 2) Monolithic kernels look good for general systems:
>    A) Functionality is exported at a higher level, so mistakes in the initial
>       abstraction can be corrected. vs. Microkernels which must get it right
>       the first time.
Again, assuming there is a kernel (i.e. strict user vs system separation),
it is important to think "at constant functionality".
At constant user-level functionality, monolithic and micro-kernel systems
have, by definition, the exact same exported user-level functionality.
So the difference is in the internals, and ease of correct development.
Now, what do microkernels change? They force you to introduce a lot of _rigid_
and _low-level_ inter-module interfaces. Does that help? Not in any way
I can think of. On the contrary, it strongly impedes the dynamic evolution
of the system. As DEK once said, "Premature optimization is the root
of all evil". Any useful modular interface can be done in a much more flexible
and high-level way in a suitable language (again, see SPIN, Fox, etc).
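To make the claim concrete, here is a sketch of what such a flexible, high-level inter-module interface might look like in OCaml, in the spirit of the SML module systems alluded to (all names are invented for the example): the interface is a typed signature, and glue such as wrapping an implementation with new behavior is ordinary code, not a fixed binary protocol between address-space-separated servers.

```ocaml
(* Hypothetical device interface: a typed signature, not a rigid
   low-level message protocol. *)
module type BLOCK_DEVICE = sig
  val read_block  : int -> bytes
  val write_block : int -> bytes -> unit
end

(* A trivial RAM-backed implementation. *)
module Ram_disk : BLOCK_DEVICE = struct
  let store : (int, bytes) Hashtbl.t = Hashtbl.create 16
  let read_block n =
    try Hashtbl.find store n with Not_found -> Bytes.make 512 '\000'
  let write_block n b = Hashtbl.replace store n b
end

(* A functor wraps ANY implementation with extra behavior (here, a
   write counter) -- dynamic, high-level glue that a frozen binary
   interface between servers would make painful. *)
module Counted (D : BLOCK_DEVICE) = struct
  include D
  let writes = ref 0
  let write_block n b = incr writes; D.write_block n b
end

module Dev = Counted (Ram_disk)

let () =
  Dev.write_block 0 (Bytes.of_string "boot");
  assert (Bytes.to_string (Dev.read_block 0) = "boot");
  assert (!Dev.writes = 1)
```

The compiler checks every module against the signature; no context switch or marshalling is imposed where a direct call suffices.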

Again, the only interest I can think of in microkernels is to enable
black-box third-party modules in a proprietary software setting.
Proprietary software is evil. Let the beast die. Can you show me one
single thing of interest other than that in micro-kernels, that
cannot be done in a safer and more efficient way by using a suitable
module system in whatever high-level language is used to program the system?

> 3) No kernel design looks decent on paper but
>    A) The definition leaves off what level of interface
>       you are exporting/supporting.
Indeed. No-kernel is not the end-all, only the begin-all, of system design;
it's merely about acknowledging this basic principle: that modularity
is a high-level concept, and that it should be taken care of by
the high-level language in which to write and interface the system.
That's why C is not such a good systems programming language,
and why Modula-3, SML, and even Perl, are better choices.

>    B) A single address space loses some fault tolerance over current systems.
>      In particular a bug in the prover, can let through software
>       that can let a stray pointer take down the system.
If your proof-checker has a bug, you've got more important problems
than stray pointers. But this is just as valid for kernel-based systems:
issues of meta-tool correctness are completely independent from issues of
kernel design; _at same functionality_, proof-checkers, type-checkers,
compilers, etc, play the same role in both kernel-based and kernel-less
systems.
>       And the bug can be caused by a 1 time hardware malfunction. . .
If you don't trust your hardware, then maybe you've got bigger problems
than kernel design. And you shouldn't trust any single result given
by your system. Consider buying redundant hardware, radiation-hardened
electronics, high-performance CPU coolers, etc.

I admit there is an issue with error containment, but it goes both ways:
errors that give wrong results should be propagated and revealed,
instead of going silently unnoticed until it is too late.
Once again, the problem is to know what failures you're going to consider,
and what good properties you'd like your implementation to keep in
presence of these failures; see safety, soundness, totality, liveness, etc,
in my paper about the notion of implementation <http://tunes.org/~fare/tmp/>.
Just saying "we'll catch more errors", without having a model for
what errors you're catching and how you're catching them, is a fallacy.

And then again, you have to consider same-functionality systems.
If and when you identify your potential points of failure, then
the no-kernel approach does _allow_ for use of arbitrary hardware
"protection" around it, if adequate.

>    C) How to support paging, process control and many of the traditional
>       kernel functions is unclear.
Why unclear? Just provide functions in the usual way
your modular high-level language (Modula-3, SML, LISP, etc) allows you to!
Just because C makes it a pain in the ass to interface the system
(or anything) doesn't mean it has to be so.
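For instance (a hypothetical OCaml sketch, with all names invented), a toy pager can be just another module: the page table is ordinary data, and the "page fault" path is a plain function call installed by ordinary code, not a privileged trap with a special calling convention.

```ocaml
(* Hypothetical sketch: a pager as a plain module. *)
type frame = int

let page_table : (int, frame) Hashtbl.t = Hashtbl.create 64
let next_frame = ref 0

(* Translate a virtual page to a frame; on a "fault" (unmapped page),
   the handler is just this ordinary code path: allocate on first touch. *)
let translate (vpage : int) : frame =
  match Hashtbl.find_opt page_table vpage with
  | Some f -> f                       (* hit: already mapped *)
  | None ->                           (* miss: demand-allocate *)
      let f = !next_frame in
      incr next_frame;
      Hashtbl.add page_table vpage f;
      f

let () =
  let a = translate 7 in
  assert (translate 7 = a);   (* repeated access: same frame *)
  assert (translate 8 <> a)   (* distinct page: distinct frame *)
```

Process control, scheduling, etc, admit the same treatment: they are functions and data structures like any others, exported through the language's module system.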

>    D) Very aggressive optimizers are notoriously buggy.
a) they are no less buggy with a kernel than without;
b) they are no more needed in absence of kernel than in presence thereof
  (think "same functionality"; think about non-aggressively optimized
  dynamic glue around aggressively optimized static code).
c) they are among the things for which correctness proofs
  are ultimately both doable and desirable.

>    E) On current hardware the main cost of a context switch
>       is losing cache, and tlb state.  Monolithic kernels handle this
>       quite well.
Micro-kernels multiply this cost by a number of context switches
proportional to the number of "servers" involved, i.e. proportional
to the extent to which the micro-kernel "philosophy" was followed.
No-kernel removes this cost altogether except _when strictly necessary_.

>    F) The no kernel argument is one for moving correctness proving
>       from hardware to software.  As hardware can do things in parallel,
>       and software is restricted
>       to serial operation this is not a clear win.
Hardware is restricted to proving things at run-time.
Software can prove things at compile-time; it enables meta-computation.
Hardware costs a lot to change. Change in software is costless in comparison.
And software can take advantage of whatever the hardware does
for useful computation, instead of using it only for its sometimes wasteful
straightforward intended use (e.g. the FPU used for memory block moves;
the MMU used for GC barriers). As for hardware that does type checking
in parallel, bring me back the LISP machines...

So the question is really, given the hardware, to decide
what software architecture to use. No-kernel is about rejecting
needless obstacles to software flexibility.

>    G) I obviously think the no kernel design hasn't been proven doable yet.
I obviously think that people have been doing it for a very long time
without even thinking about it, since it's so natural.
The microkernel vs. monolithic kernel controversy is mostly a debate around
inadequate concepts, resulting from the corrupting polarization
of OS research around (for and against) nonsensical dogmas
(that themselves stem from the dominance of low-level languages,
that in turn are linked to the proprietary software barriers
to high-level programming and meta-programming systems).
People who say "a strict static system vs user separation is irrelevant
in my design" have been doing a no kernel design for decades.
LISP machines of old were such.
Operating systems for personal computers in the 80s were such.
So are newer Java-based platforms.
Oh, and that does not mean that security is not a concern.
Java, for all its warts, at least reminds us that security
is a high-level concept, not a low-level one.

Best regards,

[ "Faré" | VN: Ðặng-Vũ Bân | Join the TUNES project!   http://www.tunes.org/  ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics  | Project for  a Free Reflective  Computing System ]
Arbitrary limits to programs are evil,
particularly when they go either enforced or unenforced.