"no kernel" operating system design
Mon, 30 Mar 1998 14:25:21 +0200 (CEST)
I'm replying to you in English, to allow forwarding.
May I post these two messages to the tunes-list?
Thanks for your post. All in all, a great text.
The parts I skip, I have nothing to add to or subtract from.
Here are a few remarks:
> The main purpose of an operating system, is to arbitrate and abstract the
> usage of system resources between all applications. [...]
I'd just say "arbitrate (hence also abstract)".
Below is a recent message from the LispOS mailing list:
> "An operating system is a collection of things that don't fit into a language.
> There shouldn't be one."
> 'Design Principles Behind Smalltalk', Daniel Ingalls, Byte August 1981
> Wise words, IMHO. 16+ years later, I still agree with him.
> Martin Rodgers
I think that the statement "an operating system is a collection of
things that don't fit into a language" actually goes back further than
Dan Ingalls -- perhaps Dijkstra (??) said it?? I agree with Dan's
ironic conclusion, however!
The 'classic' definition of an OS is that it deals with 'resources' rather
than 'values'. A 'resource' is something that can't be copied and
'shared' like a 'value'. (The phrase 'sharing a resource' is a
contradiction in terms; of course, the whole point of an OS is to
guarantee that the resource is used mutually exclusively in time.)
When languages acquire 'linear'/'unique' types as 'first-class'
elements, then the distinction between language and OS can disappear.
H. Baker (ftp://ftp.netcom.com/pub/hb/hbaker/home.html)
is the kind of guy whose every sentence is insightful.
I think that his ideas about linear types have been proven
usable by systems like Haskell and Clean;
we may very well use them for the language in which to describe
the OS interface, so that we gain resource sharing for "free":
the very type system enforces the correct sharing.
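To make the point concrete, here is a minimal sketch in Rust (all names hypothetical), whose move semantics approximate the linear types Baker describes: a resource handle that cannot be copied, only threaded through the program or consumed.

```rust
// A non-copyable handle models a resource that cannot be 'shared' like a value.
struct DiskHandle { id: u32 }

// Using the resource consumes the handle and hands it back, so the handle
// is threaded linearly through the program.
fn write_block(h: DiskHandle, _data: &[u8]) -> DiskHandle {
    h
}

// Releasing consumes the handle for good.
fn release(_h: DiskHandle) {}

fn demo() -> u32 {
    let h = DiskHandle { id: 0 };
    let h = write_block(h, b"hello");
    let id = h.id;
    release(h);
    // write_block(h, b"again"); // rejected by the compiler: `h` was moved
    id
}
```

The commented-out line is exactly the "incorrect sharing" that the type system rules out: once the handle is consumed, no code path can use the resource again.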
> The operating system puts the machine into a state suitable for
> applications, and provides them with function calls that allow them to
> use the underlying hardware without worrying about how it works
I generally disagree with the kernel/application layering.
Any software component provides an abstraction for what it does,
and allows users to not worry about its internals in-depth
(well, except that the calling conventions may be much more
than what the superficial "C" prototype shows).
> 1) The " monolithic " kernel approach [...]
> The kernel image is easy to boot
> It is easy to program too
If there is any ease of use, it's superficial:
* the kernel is only easy to boot once you have compiled it;
if you want any non-standard feature or tuning,
you're off to recompile and reboot -- bad!
* monolithic design encourages the kind of mess found in Linux,
where too many things depend on too many others, so that local changes
imply global corrections (to be done manually in C), and break things.
> 2) The " microkernel " kernel approach
As I see things,
Microkernels start from the (Right) idea
of having modular high-level system design,
and end with the (Wrong) idea
of having modular low-level system implementation.
So they have system programmers manually emulate
an asynchronous parallel actor model
with coarse-grained C polling processes,
instead of a real fine-grained actor language.
The discrepancy between the model and the implementation
induces lots of overhead, which gets worked around with
lots of stupid compromises, yielding a two-level programming system:
Performance gets so bad that most "basic resources" must be
statically special-cased in the "microkernel" anyway.
As a result, everything gets both more complicated and slower!
> A new approach : the " no-kernel " idea.
> Implement an open, designed for the future, resource abstraction
> subsystem, to which different operating systems can be connected,
> Globally designed : making it possible to add new types of resources when
> they appear, without needing the above system layers to be aware of it.
> All system components communicate with each other the same way.
"Globally designed" is unclear.
You should say "Based on a dynamically extensible global resource set",
or even split that in two parts, with "dynamically extensible"
and the "global resource registry" being independent.
The "same" way worries me. If this means
"there is a generic algebra in which all interfaces can be described", fine.
If this means "we force a stubborn low-level syscall convention on everyone,
who must then spend half their time marshalling and unmarshalling", no.
> No 'miscellaneous' type of devices. New types are added and handled the
> same way the others are.
Yup. This means an *algebra* of types
instead of a random finite heap of types.
The whole idea of having algebras (abstract structures,
as made explicit by a set of generators and a set of equations)
instead of unstructured enumerations is what makes the difference
between computer *science* and petty computer twiddling.
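A hypothetical sketch in Rust of what such an algebra of device types could look like: an open interface (the trait) plus combinators (the generators), instead of a closed enumeration with a 'miscellaneous' catch-all case.

```rust
// The carrier of the algebra: anything that behaves as a device.
trait Device {
    fn name(&self) -> String;
}

struct Keyboard;
impl Device for Keyboard {
    fn name(&self) -> String { "keyboard".into() }
}

// A generator of the algebra: multiplex two devices into one device.
struct Mux<A: Device, B: Device>(A, B);
impl<A: Device, B: Device> Device for Mux<A, B> {
    fn name(&self) -> String {
        format!("mux({}, {})", self.0.name(), self.1.name())
    }
}
// New device kinds are new impls; no central enumeration needs editing.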
> Hotswappable device drivers : drivers can provide minimal functionality
> and be replaced by a larger one on the fly, later in the boot process,
> if needed.
Yup. linear types and continuation passing make such tricks
expressible in the type system.
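A hypothetical sketch of the hot-swap trick: upgrading *consumes* the minimal driver, so the type system itself guarantees the old driver is never used after the swap.

```rust
struct MinimalConsole;            // e.g. BIOS text output, early in boot
struct FullConsole { cols: u32 }  // richer driver loaded later

impl MinimalConsole {
    // `self` is taken by value: the minimal driver is gone after this call.
    fn upgrade(self) -> FullConsole {
        FullConsole { cols: 80 }
    }
}
```

Any later reference to the consumed `MinimalConsole` is a compile-time error, which is exactly the guarantee one wants from a driver replaced on the fly.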
> Inheritance : a resource can base itself on another and extend its
> functionality, just like deriving a C++ class.
Yes, except that a C++ class is perhaps a bad example in this respect,
since inheritance doesn't mix well with recursive types.
Better to talk about "OO" languages in general,
and about subtyping instead of inheritance.
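A small sketch of resource extension as subtyping (here a supertrait bound in Rust) rather than C++-style implementation inheritance; the names are hypothetical.

```rust
trait Storage {
    fn read(&self, block: u64) -> Vec<u8>;
}

// A CachedStorage *is a* Storage: usable anywhere a Storage is expected.
trait CachedStorage: Storage {
    fn flush(&mut self);
}

struct RamDisk;
impl Storage for RamDisk {
    fn read(&self, _block: u64) -> Vec<u8> { vec![0; 512] }
}
impl CachedStorage for RamDisk {
    fn flush(&mut self) {} // nothing to write back for RAM
}

// Generic code depends only on the base interface.
fn first_byte<S: Storage>(s: &S) -> u8 { s.read(0)[0] }
```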
> Small memory footprint.
Indeed. All the typing of interfaces can be resolved at compile/link time.
The dynamic loader may get more complicated,
but it can delegate certification of safe code to an external checker,
and known kernel modules can be pre-checked:
any typing/debugging information needs only be externally available
by the meta-system; it needn't be available at runtime in the target system.
All in all, the compulsory application-independent runtime footprint
can be reduced to zero; of course, the more drivers and functionality
you add, the bigger the system will get.
And if the system is its own development system, then it must include
all the debugging/typing information;
but then, space is no longer as important,
and this information can be swapped out.
Speed might require non-portable low-level machine-specific interfaces
(notably if the target machine has a bizarre memory model, as is common
in embedded hardware), but that's ok if the compiler can handle that
automatically (see asm() statements in GCC).
What about orthogonal persistence of data?
> Design :
> The whole system is built as independent modules. Every module describes
> the external functionality it needs, to be able to execute. These are not
> to be thought as Linux-like " kernel modules ". They are separated binary
> entities, without symbolic linking between them, that request and provide
> functionality, through an interface common to all.
I only half-agree here.
The high-level design of the system should indeed be as modular as possible,
and this includes the ability to distribute modules as independent binaries.
But this doesn't mean the low-level implementation should have
static run-time boundaries, whose crossing costs a lot.
Instead, I think it is essential that optimizations (inlining and more)
should be done across module boundaries,
so that we don't have the same problem of
performance vs modularity oppositions as seen in existing micro-kernels.
These optimizations may be performed at compile-time, at link-time,
at load-time, or at run-time, but they should be done nevertheless,
lest people start programming again in the same dreaded two-level model.
We may even separate the consistency checking part
(verify that provided modules fulfill requirements),
and the resolution policy part (find a way to match modules).
The consistency checking is simple typechecking;
the resolution policy may be script-directed,
or goal directed, or any combination, or whatever.
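This split can be sketched as follows (a hypothetical toy, with interfaces reduced to names): part (1) is plain typechecking of the module graph, part (2) is a separate, replaceable policy.

```rust
use std::collections::HashSet;

struct Module {
    name: &'static str,
    provides: Vec<&'static str>,
    requires: Vec<&'static str>,
}

// (1) Consistency checking: every requirement of every loaded module
// is covered by some provided interface.
fn consistent(loaded: &[Module]) -> bool {
    let provided: HashSet<&str> =
        loaded.iter().flat_map(|m| m.provides.iter().copied()).collect();
    loaded.iter().all(|m| m.requires.iter().all(|r| provided.contains(r)))
}

// (2) Resolution policy: which module satisfies a requirement?
// Here, trivially "first candidate wins"; it could just as well be
// script-directed, goal-directed, or administrator-defined.
fn resolve(need: &str, candidates: &[Module]) -> Option<&'static str> {
    candidates
        .iter()
        .find(|m| m.provides.iter().any(|p| *p == need))
        .map(|m| m.name)
}
```

The point of the separation is that the checker stays simple and trustworthy while the policy can be swapped out freely.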
> The generic and global driver interface allows [...]
[goal directed module connection]
I just *love* the idea. Tried to do it in Tunes 0.0.0.25,
but I didn't have a suitable language to do it (m4 macros -- yech).
Only you ignore a serious problem: that there may be many
(and even infinitely many) different ways to solve a given problem.
Even with the console problem, any combination of the many available
keyboards, mice and displays (including dummy drivers,
and driver filters/multiplexers) could do, so WHICH to choose at boot time?
There just *ought* to be more than the correctness/construction rules
that explain how to build valid combinations of modules;
there ought to be policy meta-rules that direct the resolution.
However, we CAN and MUST separate rules/mechanisms from meta-rules/policies,
and we may be able to reduce meta-rules to be as few as possible,
written in a declarative style, user/administrator-definable, etc.
By comparison, most current systems hardwire
the policies together with the mechanism in the drivers.
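A minimal sketch of what a declarative policy meta-rule could look like: among the many valid candidates, pick by an administrator-assigned preference score, rather than hardwiring the choice into the drivers themselves. The scoring scheme is a hypothetical example.

```rust
// candidates: (driver name, administrator-assigned preference score)
fn pick<'a>(candidates: &[(&'a str, u32)]) -> Option<&'a str> {
    candidates
        .iter()
        .max_by_key(|(_, score)| *score)
        .map(|(name, _)| *name)
}
```

Because the rule is data-driven, the user or administrator can redefine it without touching any mechanism code.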
> The dependency manager provides itself the " locate module "
> functionality for the initial bootup, being able to locate module binary
> images stuck next to itself, for example ; when a filesystem module has
> been initialised, it can provide that functionality as well, allowing
> modules to be loaded from files on the disk. [...]
Very good. However, I'm not sure casual readers
will understand this formulation.
Let's rather say:
« The dependency manager may use functionality provided
by the modules it loaded as soon as they are loaded,
so as to load more modules.
E.g. as soon as a filesystem is available
(either from disk or from network),
modules may be loaded from that filesystem.
There is no longer a clear-cut initialization phase vs run-time phase,
but rather, a dynamically self-extending system. »
[Examples of distributed or client/server computing]
> When a memory shortage happens or if the operating system does garbage
> collection, the dependency manager can unload all modules on which no
> other depends anymore. It can for example get rid of all modules that are
> only necessary during the boot process, once the system is up,
> freeing memory which is usually lost in the monolithic approach.
Or rather, monolithic systems require lots of special case code
to deal with that (when they do at all, as in Linux).
Having code/modules as first-class garbage-collectable objects
simplifies the initialization problem away, and brings much more.
Note that linear types again are helpful, since as applied to code,
they give for free the concept of initialization/one shot routines.
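Rust's `FnOnce` is a small sketch of exactly this "linear code" idea: the type system lets an initialization closure be called at most once.

```rust
// An init routine taken as FnOnce: calling it consumes it.
fn run_init<F: FnOnce() -> u32>(init: F) -> u32 {
    let x = init();
    // init(); // a second call would be rejected by the compiler
    x
}
```

A boot-time-only routine typed this way needs no runtime flag saying "already initialized"; the guarantee is static, and the code can be garbage-collected after its one use.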
> Ideally, boot and core modules code, and the dependency manager, are
> written both in assembler, or the language with best performance on the
> given architecture ; and a portable language, for easy porting to new
> platforms, where that code can later be replaced by assembler.
> Architecture-specific modules are written in assembler where feasible.
> Most modules and the rest of the system are written in the most efficient
> portable language, for which the compiler is available on every supported
> platform, namely C.
Only, the C type system is not expressive enough for most things we need,
so we'll have to use another language to express interface types.
Of course, we then have to trust C routines to match the declared
high-level type; but so is the case with assembler;
we can help by having the C header files and stubs
automatically extracted from our type language.
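A toy sketch of that extraction step (the interface description format is hypothetical): the C prototype is generated from the high-level description, instead of trusting a hand-written header to match it.

```rust
// Emit a C prototype from a high-level interface description.
fn c_prototype(name: &str, ret: &str, args: &[&str]) -> String {
    format!("{} {}({});", ret, name, args.join(", "))
}
```

A real extractor would of course map richer types down to C representations, but even this shape removes one class of manual-mismatch errors.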
> Ideally too, all driver messages would be grouped outside of them so that
> they can easily be translated.
Or rather, if we use some declarative programming,
we can decouple from programs not only the text of messages,
but the whole way they are interfaced to the user;
instead of logging and printing strings,
drivers would log and print *events*, passed in a high-level format,
and arbitrarily handled by the user interface.
This also allows for much better, context-dependent
translation into human languages (instead of stubborn string-based
translation whose meaning always mismatches the context, because
of discrepancies between languages).
Another great advantage is compression of message logs,
since they can be stored in compressed high-level format
instead of ASCII strings.
> Their binary format does not vary [...]
Well, binary format handling is another of those functionalities
that the module manager can use as soon as they are made available.
Let's just say we don't need a specific format for such and such use,
and will try to stay as generic as possible.
For such things as ROMs, modules could come pre-loaded,
whether they are stuck next to the dependency manager or elsewhere.
There remains the need to describe this interface.
*THAT* will be the hardest part.
We'll need a typesystem that includes the notion of tagging objects
with the resources they require/provide.
> Appendix 1 : Example of resource listing local to every module
If what counts is the *computational* type
(i.e. interface as seen from user programs)
of provided/required computational resources,
then we should be careful not to identify
the resource with its driver.
> Appendix 2 : Example of boot sequence
Very detailed. Perhaps a bit too much (you also skip too many blank lines).
## Faré | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ##
## FR: François-René Rideau | TUNES is a Useful, Not Expedient System ##
## Reflection&Cybernethics | Project for a Free Reflective Computing System ##
Science is not a body of knowledge;
it is a collective, cooperative process arising from the practice,
by a multitude of individuals, of a personal scientific attitude
of hypothesizing and doubting.