General OS design
Francois-Rene Rideau
rideau@clipper
Fri, 16 Dec 94 18:42:00 MET
Hey, there. Please, everyone, say what you think about all this: whether
you agree with me (the kernel-less, decentralized, but secure design), or
with Mike's (a monolithic, hugeware, centralized, insecure kernel, as I see
it); or whether you have other suggestions showing that we are both wrong,
and how to correct our ideas so that they become right.
Yes, I'm talking to *you* who are not saying anything. This is an
important issue; please make your voice heard.
[Sorry, Mike, for perhaps over-emphasizing your design's flaws; I
thought we agreed last time we talked; we should talk again soon,
then]
>>> portable high-level system. Nothing should prevent us from
>>> eventually redesigning the LLL (i.e. the set of primitives)
>>> and still using "our system", so that a simple recompile
>>> will have the same applications running !
>>
>> I'll agree in theory, although I'm not sure what you have left in
>> "our system" if we remove the primitives....
>
> Bingo, Nothing! You must have primitives for a migratable design. I
> hope people aren't confusing portability with run-time migratability.
Of course you have primitives. But these are LLL-dependent.
We can *never be sure* we won't ever upgrade our LLL (or support OSF's
ANDF, etc). Or are you so sure of yourself that you think you'll manage, on
the first try, to design something that nobody will ever be able to beat ?
I hope not.
We won't have to upgrade the LLL very often (well, perhaps we will during
alpha testing); perhaps once every two or three years. But we may *have
to*. Or perhaps some ANSI or de facto standard LLL will appear that we may
have to support ! Who knows. Always be ready. Never rest on your laurels.
> OSF and Taligent are among many who want to design portable software and
> standards. Take your code, COMPILE it without modification to run on a
> number of different systems. During compilation the low-level stuff is
> instantiated/created to support the higher level definition (code) of the
> application. But you still have multiple versions of the code for
> multiple platforms.
Nope. I dunno about Taligent, but with OSF, the code is compiled to an
intermediate form, and the same intermediate code goes to every
architecture, where it is compiled to native code and linked before being
executed. Of course, this allows portability but not migratability; so it
is not enough for us.
> If we say the LLL can be upgraded/modified
> etc then we are falling into the trap I'm trying to get out of, backwards
> and forwards compatibility of software.
Why ? No one forces you to forget older modules, or to translate them.
Remember we have a distributed system. There might be a "good virus" that
would spread and translate everything to the new standard, etc. I see no
problem in upgrading. Of course, we mustn't change everything every other
day; but upgrading is a *must*: you can never do *everything* right on the
first try; software evolves, and we must be able to follow.
> The LLL should be fixed, period.
Fixed as much as possible. But our LLL should not be the dogma of a new
computer religion. As long as our LLL is useful, we keep it; when it can
be replaced by something enough better, it will be. The only thing is that
we must strive to design it the best we can; so indeed, the better we
succeed, the longer the LLL will survive.
> But if you start having many versions of our LLL language we have done
> little to advance a unified coding/distribution standard.
I'm not saying everyone should have his own LLL. The LLL is standard.
I'm saying everyone *may* have a different version if it better suits his
needs (and those of enough people that supporting it is worthwhile). And
above all, we should consider the *possibility* of upgrading it.
I'm for a free world. Freedom allows selection of the fittest. By some
authoritarian means, you may force some good thing to happen, but you break
the dynamics, and won't be able to evolve.
We don't want another MS-DOS for some virtual machine !
> I don't want my computer saying "Sorry, I don't know how to use X".
And it never will. When it receives some object that needs
object/module X, it will go looking for object/module X on the network
automatically. Some configurable, programmable prefetch and caching
mechanisms will help reduce the lag from such object requests.
We can never provide every module anyone may need. There will always be
new things to do. We are not meant to be the only ones writing software;
that's the fascist policy of Apple Computer, Inc. We do not want to be some
centralized programming center. We want computing freedom.
>> Where do objects fit in with our grand design, anyway? Toolboxes are
>> neat, but they aren't objects. Where does the high-level OOP stuff come
>> in, or is it just an illusion maintained by the compilers?
>
> There is no such thing as an OO computer.
It all depends on what you call OO. But yes, there are OO computers.
See SELF, Smalltalk, etc.
When I said I wanted an "OO" OS, I meant that I want a unified abstraction,
powerful enough to express any programming we need, that the system may
*securely* manage. This means that the unit in which the system thinks is
the "object", or "(lambda) term", or "foobar", or whatever you want to call
it; but it's not a raw byte; it's something that can express any kind of
high-level (or low-level, when you prefer) abstraction. Why have the system
do it ? Because otherwise, we have no security, and will be forced to build
another system on top of it (or from scratch).
Why do you think I'm not happy with environments written on top of
MS-DOS, Windows, MacOS, or Unix ?
> Every computer runs machine language (please don't flame me for ignoring
> neural nets and such!).
Do you know about LISP machines ? Or FORTH machines ? Yes, they run
machine language. But that does not mean the language is like the machine
languages *you* know. Let's define *useful abstractions*. Then we'll
implement them the best we can, and let compiler/interpreter technology
translate them into good native code for whatever hardware we have.
>>> Why shouldn't I be able to assign
>>> "click_event_handler := my_click_handler",
>>> or pass a function as a parameter to a generic sorting algorithm ?
>>> Have you never seen Scheme, Lisp, FORTH, ML, Miranda, BETA, SELF ?
>>> Have you never heard about functional programming ?
>>
>> Are you sure you want to do this in the LLL? (That's what this
>> discussion's about, right folks?) It sounds like you perhaps would like
>> to abolish the LLL altogether, and move right into the high-level stuff.
Whatever the LLL looks like, we need some portable binary encoding to allow
portability; to allow migration, the LLL must also be high-level enough.
See the CAML-light bytecode interpreter, or the one from CLISP, or from the
latest CMU Lisp, for some portable LLLs which include some high-level stuff.
[Caml-light sources, and binaries for DOS and Macintosh, are available on
ftp.inria.fr; for the others, I dunno exactly.]
>> I will agree that SELF and BETA are great languages for high-level stuff,
>> but I can't figure out how you're going to implement all this.
I'm considering threaded code for the standard in-memory representation,
with some compressed encoding for files. Word-threaded, word-code, and
byte-code interpreters can also be useful implementations when
instruction-fetching speed is less important than code compactness.
> Exactly. We need to separate the low-level goals from the high-level
> goals.
No ! As JVS says, the two are closely linked to each other. It's as
crazy to design the LLL without taking into account its HLL use as to
design an HLL without taking into account how it may be implemented on
actual machines.
> I'm still getting the feeling the OO thing is trying too hard to
> permeate all levels.
I'm feeling the opposite.
> First we need to come up with a list of goals, then specifications, then
> implementational strategy, etc..... We couldn't do that.
Yes.
> So instead let's suggest ways of actually doing something. Both myself
> and Johan have stated techniques for actually getting something running.
But is that "something" what we want ?
> Everyone, please refrain from saying anything sucks until you have a better
> version, otherwise all our time is wasted.
I have nothing to say against your LLL implementation, but I'm against the
place you give the LLL in the system. Again: we're not building an OS
to run on top of our LLL, but an LLL to run below our OS. The goal is the
high-level OS; the LLL is the means.
> The bottom line of every design is the CPU snooping around in memory
> executing machine instructions. Please give me examples of our
> distribution code that could accomplish all our stated goals. Saying
> that OO is cool and that the low-level stuff is implementation dependent
> or can be modified is skirting the issue. Give me real world code and
> data structures! Otherwise we are building a foundation in sand!
Ok.
Our binary encoding format has some header that identifies the file
format, its possible version, and the nature of the object inside, together
with some annotation fields containing size, CRC, author/originator
authentication, etc. Then comes some compressed table of contents, with a
list of imported and exported objects, with their global names, and some
indicative resource-allocation information. Lastly, you have some
custom-compressed data that contains the actual code.
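As a sketch only (the field widths and names are my own assumptions;
nothing here is a settled format), the header might look like this in C:

    #include <stdint.h>

    struct tunes_header {
        uint8_t  magic[4];        /* identifies the TUNES file format  */
        uint16_t format_version;  /* possible version of the encoding  */
        uint16_t object_kind;     /* nature of the object inside       */
        uint32_t size;            /* annotation: total payload size    */
        uint32_t crc;             /* annotation: payload checksum      */
        uint32_t author_id;       /* annotation: originator/authentication */
        uint32_t toc_offset;      /* compressed table of contents:
                                     imports, exports, global names,
                                     resource-allocation hints         */
        uint32_t code_offset;     /* custom-compressed code            */
    };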
The LLL will be some kind of threaded code, with compressed pseudo-pointers
(their size adapted to the total number of objects accessible inside the
module; savings of 1/8th of a bit by use of 16-bit/8-bit unsigned division).
The LLL loader module will expand those.
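Here is what the expansion step might look like, assuming (my assumption,
purely for illustration) that pseudo-pointers are small indices into the
module's object table:

    #include <stdint.h>
    #include <stddef.h>

    /* Expansion pass of the (hypothetical) loader: each compressed
       pseudo-pointer is an index into the module's object table,
       rewritten here into a real address. */
    void expand_pseudo_pointers(const uint16_t *compressed, size_t n,
                                void *const *object_table,
                                void **expanded)
    {
        size_t i;
        for (i = 0; i < n; i++)
            expanded[i] = object_table[compressed[i]];
    }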
Is that ok as a data structure ?
Now, the in-memory representation for i386 code may be 16-bit or 32-bit
threaded code, or byte-code, according to the LLL flavor chosen (and the
needed speed). Objects using different calling conventions may easily
communicate through standard gates.
Objects are allocated either in a GC-able heap or outside it. You may
recognize which by testing a simple bit pattern of the pointer. The parity
bit may be used to differentiate pointers from integers (thus losing one --
or more -- bit of precision; this trick is very commonly used in LISPs, and
in caml-light); or the same bit pattern may differentiate integers and
constant pointers from GC-able pointers that may be relocated automatically.
In any case, we lose some bit in integers (unless we have enough registers
to completely isolate integers from pointers). Of course, we use some
cooperative scheme for GC (and thus for multithreading; but real-time
multithreading still works, as it won't use the GC-able memory anyway). So
between PAUSEs, we can use full integer precision and pointer arithmetic.
But at PAUSEs, values must be ok (else the system may crash).
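For illustration, the parity-bit trick looks like this in C -- a sketch of
the general LISP technique, not the chosen TUNES encoding (it relies on the
usual arithmetic right shift for signed integers):

    #include <stdint.h>
    #include <assert.h>

    typedef uintptr_t value;   /* one machine word: integer or pointer */

    static value    tag_int(intptr_t n) { return ((uintptr_t)n << 1) | 1; }
    static intptr_t untag_int(value v)  { return (intptr_t)v >> 1; }
    static int      is_int(value v)     { return (int)(v & 1); }

    static value tag_ptr(void *p)
    {
        assert(((uintptr_t)p & 1) == 0);  /* pointers are word-aligned */
        return (uintptr_t)p;
    }
    static void *untag_ptr(value v)     { return (void *)v; }

    int main(void)
    {
        value v = tag_int(-42);           /* integers lose one bit     */
        assert(is_int(v) && untag_int(v) == -42);
        assert(!is_int(tag_ptr(&v)) &&
               untag_ptr(tag_ptr(&v)) == (void *)&v);
        return 0;
    }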
All that is an example. That's how I see it for now, but my mind is
not yet made up on all the details.
>> Okay, I think this has been your best display of your idea of a
>> kernel-less system. Some of these concepts seem a tad bit strange,
>> however. You want a memory driver, yet you need to work with memory in
>> order to load/use it. You want a CPU driver, yet it would need to use
>> the CPU to run it. If you can explain how this would work, and the
>> overhead wouldn't be too high, this sounds like a good idea. (BTW, what
>> kind of messages could you send to a CPU object? Surely there would be
>> some primitives defined there, no?)
>
> Please enlighten me as well...
At the high level, the system is completely modular. You may change any
component, as long as you can ensure system integrity: object-dependent
protocols (including typing, bounds checking, or even correctness proofs)
must be respected for two objects to connect. Once the connection is
established, it is inlined and thus fast. This object connector itself
needs a protocol, which I will name the meta-protocol. The meta-protocol
itself is an upgradable (or downgradable) object/module. Easy.
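To show the shape of the idea (every name below is a hypothetical one of
mine, not a spec), the meta-protocol's job can be caricatured in C as:

    #include <string.h>
    #include <stddef.h>

    struct protocol {                /* what an object offers/demands  */
        const char *name;            /* e.g. "heap-manager/1"          */
    };

    struct object {
        struct protocol exported;    /* protocol this object satisfies */
        void *entry;                 /* entry point handed out on connect */
    };

    /* The meta-protocol, reduced to its skeleton: connect two objects
       only if their protocols match.  A real version would also check
       typing, bounds, proofs, security, then inline the connection. */
    static void *mp_connect(struct object *obj,
                            const struct protocol *wanted)
    {
        if (strcmp(obj->exported.name, wanted->name) != 0)
            return NULL;             /* refuse: integrity not ensured  */
        return obj->entry;           /* accept: caller may bind/inline */
    }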
How will I implement the meta-protocol, you will ask ? Well, as always,
to found the system, we need some bootstrapping code. But *any* system
needs some, doesn't it ? Or did you expect to poke your binary code directly
into (FLASH|((E)E)P)ROM ? And even then, you need some external device to do
it !
So, there is *no problem* with complete modularity. Any distinct set
of features is managed by distinct objects. Objects export all the
information necessary to connect to them (protocols, restrictions,
semantics, security checking). Seeing an object (and satisfying the
meta-protocol) is the necessary and sufficient condition for being able
to connect to it.
You will then have page-level memory managers, used by static page
allocators, themselves used by dynamic page allocators, used by segment
allocators or heap managers, etc, etc. You connect directly to the one you
need (as long as you can see them); there is *no* centralized kernel to go
through; only more or less standard and more or less specialized
meta-servers.
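A sketch of that layering in C, with interfaces invented for the example:
each layer is an ordinary object connected to the one below it, and a
client connects straight to whichever manager it can see.

    #include <stddef.h>

    struct page_manager {
        void *(*get_page)(struct page_manager *self);
    };

    struct heap_manager {
        struct page_manager *pages;      /* lower-level object it uses */
        void *(*alloc)(struct heap_manager *self, size_t size);
    };

    static void *get_buffer(struct heap_manager *heap)
    {
        return heap->alloc(heap, 128);   /* no central kernel between  */
    }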
>> Okay, so everything has to be modular. But does this really mean we
>> can't have primitives? Not constant primitives, but ones you can use?
>> (Gotta have something to compile the HLL into....)
Of course we have primitives, but they depend on standard objects, not
on a kernel. Everything may evolve smoothly. Everything is free.
>> Possible, although this would be dangerous if people started making
>> too many calling conventions. I guess you really need to explain what
>> your concept of an object is here, since in my mind right now, I can only
>> see a bunch of question marks shooting s'more question marks back and
>> forth.
See above (about "OO" OSes).
> A big chunk of memory belongs to my CPU; who manages it? Let's say your
> HLL has an object which somehow has thwarted all security and come into
> control of it all. My computer is humming along when my neighbor with new
> version 5.7 of Tunes HLL wants to use my resources (i.e. memory) and has
> a newer better version of the HLL memory manager that his code relies on
> (the new version has enhanced functionality which is necessary).
> What happens? How does his memory manager interact with mine? Does it take
> over? Then EXACTLY how? I need to hear the mechanisms. You can't
> continue passing the buck onto a library, without explaining at least how
> objects interact.
When you say the "better" version has new features, it means that the
specs are not the same (even if there is some backward-compatibility
support), so the object is not considered the same at all by the system.
When the system detects the difference, it will try to load the newer
object, and verify that this object is trusted enough, and that its use may
be founded by access to your hardware through access to lower-level objects.
Either it succeeds, and then the program may run on your host; or it
doesn't, and then the program may not, and will run on another host (for
instance, his, as he uses the module !!!).
If the object is trusted enough, the meta-protocol *may* cache it, and
provide it to objects that need a better memory manager, or even a normal
one, if it decides to eliminate the redundancy by eliminating the worse
memory manager. Existing applications linked to the older memory manager
will still work; but applications that don't explicitly ask for it will no
longer be linked to it, even if they get unlinked and then relinked.
> My friend in Katmandu is using a modified version of
> Tunes (remember how wonderful OO is) and another guy "Down-Under"
> is working with yet another modified version (of course not modified the
> same way). They both link up on internet. How do they communicate?
> Again please give me exact data formats. Because both versions of Tunes
> could have mutated, they both should at least be able to have a common
> communications format. Is that not a standard?
They communicate using the standard, uniquely identifiable TUNES file
format. The file then contains info about the modules needed to understand
the objects inside. Modules are uniquely identified, so a modified,
incompatible version will have another number. If you need to use a version
you don't have, you'll have to load it as above, which can be done
automatically or not.
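In other words (names and types are mine, purely illustrative, with stubs
standing in for the real module table and network loader):

    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t module_id;     /* globally unique; an incompatible
                                       modification gets a new id      */
    struct module { module_id id; };

    /* Hypothetical stubs for the local table and the network loader. */
    static struct module *find_local(module_id id)   { (void)id; return NULL; }
    static struct module *fetch_remote(module_id id) { (void)id; return NULL; }

    static struct module *resolve(module_id id)
    {
        struct module *m = find_local(id);
        if (m == NULL)
            m = fetch_remote(id);   /* automatic loading, as above; may
                                       still fail if untrusted         */
        return m;
    }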
> And if you say each must support a wide range of formats, just in case,
> then you are again skirting the interoperability thing and migration.
That's the opposite. Allowing a wide range of formats *is* allowing
interoperability and migration. The world is wide, and programmers always
do incompatible things until some efficient, freely usable standard arises.
So in the meantime, a lot of formats exist. We must be able to process any
of them. I'm against any kind of centralized computing. Programmers must
be free to add anything directly and efficiently, or they'll extend the
protocol indirectly and inefficiently (or just throw away our system, and
they'd be right).
> My LLL should run on any computer, anywhere, anytime; No questions asked.
Who said the contrary ? Not me. I say it should not be the only thing to
run anywhere. Any useful software should be able to.
> It's not acceptable to have the computer say "I can't understand Fred
> because I don't have the proper communications drivers"
Just load them. You can have them together with Fred's stuff. That's what
a meta-protocol is useful for (and when there is an upgrade, of course we
offer bridges between old and new protocols). Now, if there's a copyright
issue, you may have to pay for and register the driver. But hey, our system
is not meant to break business, but to allow free computing, including
business, but of course non-profit computing too.
> Standards are not arbitrary; otherwise you don't have a standard.
All standards are arbitrary. Do you think ASCII is a gift of God, that
He revealed it to us as the One Truth, that no other choice would ever
have been possible, and that those who don't use it should be burnt as
heretics ? Be serious: all standards are arbitrary.
This does not mean that they are not useful. Indeed, they may be quite
useful, and sometimes they're necessary. But they are only conventions.
Sometimes conventions are useless, as nobody really uses them.
Sometimes conventions are old and unadapted, and must be changed, extended,
or merged with other conventions. Sometimes they prove harmful and must
be abandoned. There is never one known truth, but a quest for more truth.
> You've just gotten yourself into the big forwards/backwards compatibility
> problem.
The only way to escape this problem is to forbid any kind of progress.
That's not what we want; and even if we did, nobody can stop progress. You
can stop your own and be lost; you can slow the whole world's progress; but
progress is the survival of the fittest. If you don't evolve and adapt,
you're dead and forgotten.
>>> and we'll provide the
>>> necessary modules for that (or if you're using some old processor which
>>> had a native module for the old LLL, you can use it directly).
>
> You must think smaller. I have the feeling you're assuming 50 megs of
> RAM on every system. Try designing for 64K. Would your system still run?
> I want to run across a wide range of platforms (from Crays to embedded).
I'm not assuming anything at all. I'm taking advantage of everything there
is. A smaller machine will have fewer modules, and be able to do fewer
things. A bigger one will be able to run bigger programs that need more
modules. Why force the Crays to run only the same things as the Apple ][s ?
Let them live. Don't impose some downward equalization (that's communism).
> I still have the text, make some suggestions and they'll be chiseled in
> stone!
Ok, so for "Object", say: "a unifying concept that encompasses just about
any computing abstraction. Objects are further managed according to their
useful properties, and not according to some unadapted, arbitrary
distinctions".
>>> Let the system be *open*.
>>
>> Applause.... =)
>
> I'd love for it to be. But please explain how to do that and still make
> processes migrate to a variety of platforms efficiently.
Efficiency is a relative concept. When we migrate processes, it's because
we expect to gain time by moving load from overused machines to underused
ones, and communication from distant objects to near ones. To achieve the
best migratability, we need *small* objects, with well-defined and, if
possible, predictable behavior as to load and communication. That is,
we need modularity. And allowing any kind of new module *is* being open.
We can't migrate in a closed system, and we needn't: a closed system can
be statically scheduled. An open, dynamic one cannot. Our virtual machines
must be *small*, with very few primitives or built-in data structures
(e.g. stacks), but a simple mechanism to point at other objects.
> Also explain
> how to do that quickly onto an embedded system, or a massively parallel
> machine, etc.
The quicker the link, the finer the grain at which we can distribute.
The basic heuristics are simple (a cost sketch in C follows the list):
* when a machine is loaded, it looks for some machine that is less loaded.
* the farther and the slower the link, the higher the link cost.
* before you actually migrate, you allocate the resources, so that no
uncontrolled massive migration waves hit the systems.
* systems are (dynamically) hierarchically organized, with ever more global
resource servers at each level.
* the higher the cost, the surer you must be before you actually move.
* only profiled objects (profiling can be done dynamically and/or
statically) may be migrated, so that we may compute the effects of
migration on load (because migration involves both pointwise and continuous
communication bandwidth).
* you try to migrate objects that communicate heavily to machines that would
reduce the communication overhead.
* you try to migrate objects that communicate lightly to machines that are
less loaded.
* you try to migrate objects to places that best fit their needed resources
(e.g. inactive objects to disk, very active objects to fast hosts).
* never migrate an object to somewhere it couldn't run.
* you don't migrate an object to an unsecure place.
* you don't accept unsecure migrating foreign objects.
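Here, as promised above, is a sketch of those heuristics boiled down to one
cost comparison, in C; the weights, names, and formula are all mine, meant
to show the shape of the decision, not a worked-out policy:

    struct link    { double latency, bandwidth; };
    struct host    { double load; int secure; };
    struct object  { double traffic;     /* profiled communication     */
                     double size; };     /* profiled resource needs    */

    /* Migrate only to a secure host that can run the object at all,
       and only when the expected gain clearly beats the moving cost. */
    static int should_migrate(const struct object *o,
                              const struct host *here,
                              const struct host *there,
                              const struct link *l,
                              double margin)       /* "be surer" factor */
    {
        if (!there->secure)
            return 0;                              /* unsecure place    */
        double gain = (here->load - there->load)   /* load relief       */
                    - o->traffic * l->latency;     /* new comm. penalty */
        double cost = o->size / l->bandwidth;      /* cost of the move  */
        return gain > cost * margin;               /* higher cost means
                                                      demand more gain  */
    }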
> I think this open thing is thwarting our attempts to make
> this thing work fast.
Why should it be at all ? I think the opposite.
> A system that crawls will never be used. A system
> that needs tens of megs and fast cpu's will have a more limited market.
You're the only one who talked about that. Modules should be *small*.
If you're wondering about the overhead of meta-protocols, well, I'm sure
the full ones should run only on big machines (which will be object
meta-servers), in the same way as you don't compile big programs on small
machines but on big ones (unless you really need to).
That's what a distributed OS is all about !!!
>>>> Fat binaries encourage obesity. I'm for a pure LLL distribution.
>>> And how will you distribute the latest hyper-fast bitblt routines ?
>>> Or 3D game engine ? In LLL ? Let me laugh. Highly optimized code is needed
>
> BitBlt is usually done by an engine, easily programmed by the LLL.
No ! A really fast BitBlt must be done in optimized assembly.
> A 3-D game engine is, you said it, an engine, being passed relatively high
> level commands by the LLL. The engine would be bundled with the kernel;
Are you crazy ? Do you mean the kernel team (I'm against any
kind of kernel) should do all the programming in the world ?
That can't be. Whatever you write, there will still be more things
to write !!! If you provide a 3-D game engine, someone will want a
better one with more features, or a 4-D game engine, or whatever.
Again, be aware that you're no God, and no Prophet. We're here to
give people a free computing world, not a totalitarian, centralistic
one a la Apple Computer or Microsoft.
> you turn on the system and it's already there, appearing just like any
> other object.
Hey, not everybody wants a 3-D engine. It's no use on those 64KB machines
you were talking about ! Or on a machine far away from any display, etc.
Will a machine without a sound device be forced to have a sound driver too ?
> The difference is the engine and drivers are in the machine
> and are hidden. I've heard this approach critiqued before, but again,
> give me a better solution.
Easy one: no kernel, only a bootstrap program that loads modules. All
modules are loadable and unloadable.
> FAT binaries with PGP signatures is not an
> option, if you let machine code float around between systems you are
> asking for big trouble. When your good security (PGoodP) is not good enough!
It's the only solution. That's not asking for trouble at all.
If your language is truly low-level, I don't see how assembly could be more
dangerous. But it can be much more efficient for that 1% of code that needs
to be very quick, and would be darn slow to optimize that much with a
compiler, if it is even possible given available compiler technology.
As for PGP, that's Pretty Good Privacy. I'm not saying it should be
the only possible solution, but it's pretty good, and I don't see what
else you propose.
-- , , _ v ~ ^ --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
-- ' / . --
MOOSE project member. OSL developer.  | | /
Dreams about The Universal (Distributed) Database. --- --- //
Snail mail: 6, rue Augustin Thierry 75019 PARIS FRANCE /|\ /|\ //
Phone: 033 1 42026735 /|\ /|\ /