New OS idea
Francois-Rene Rideau
rideau@nef.ens.fr
27 Nov 1996 23:10:36 +0100
On comp.os.misc,comp.programming,comp.object,comp.sys.powerpc, thus spake
>>: Sam Falvo
>: G Sumner Hayes <sumner+@CMU.EDU>
> Let me add two things to the above list: designing an OS without
> protected memory these days is foolish. Designing an OS without
> preemptive multitasking these days is foolish.
>
Yes and No.
Let me explain.
YES:
Surely, *from the outermost, user-programmer point of view*,
the system should be made of concurrent objects
safely isolated from each other's bugs/attacks.
If that's what you mean, then I 100% back you.
NO:
If you mean that a hardware-based implementation is required
for the above agreed features, I utterly disagree.
I see absolutely NO REASON
why this would have to be done in hardware,
without any software cooperation,
through MMU-based memory protection
and interrupt-driven task preemption.
Actually, I see ALL REASONS
why it CAN'T be done in hardware, only in software!
Obvious reason: the hardware can only check
a statically bounded small number of cases (finite expressivity)
while bugs/attacks come in unboundedly many cases (infinite expressivity).
A hardware implementation only slows down the machine
and makes software more complicated,
as it must track the hardware's bloatedness,
without solving the real problems:
it addresses only gross approximations of them.
Simpler hardware (MISC chips)
and finer software (software correctness proofs)
could do the job much better, faster, and cheaper.
Unlike a full hardware implementation,
minimal hardware support might be welcome where useful,
according to the usual 10%/90% rule.
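To make this concrete, here is a minimal sketch in OCAML (my own
illustration, not anyone's actual system): a software-enforced invariant
that no MMU could express, since the hardware only sees loads and stores
to permitted addresses, never the meaning of the data.

    (* The abstract type keeps the balance non-negative; client code
       cannot violate the invariant, by construction, and well-typed
       callers pay nothing at run-time for the guarantee. *)
    module Account : sig
      type t
      val make : unit -> t
      val deposit : t -> int -> unit
      val withdraw : t -> int -> bool  (* refuses to go negative *)
      val balance : t -> int
    end = struct
      type t = { mutable balance : int }
      let make () = { balance = 0 }
      let deposit a n = if n > 0 then a.balance <- a.balance + n
      let withdraw a n =
        if n > 0 && n <= a.balance
        then (a.balance <- a.balance - n; true)
        else false
      let balance a = a.balance
    end

An MMU can only tell "this page is yours" from "this page isn't";
it cannot tell a correct update from a corrupting one.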
Of course, this means a revolution
in current software development mentality and technology.
So what?
Are we talking about current OSes or Future OSes?
Within forty years,
we'll have reached the limit of
emulating the Von Neumann architecture
with ever huger, ever finer hardware.
Heat dissipation and quantum effects
will force people to rethink it.
I'm confident that using clusters of cheapo MISC chips in parallel,
programming them with a proof-friendly high-level language
(like Clean or OCAML, unlike C/C++),
will emerge as the obvious solution,
all the more so since MISC can go further than anything else in raw speed.
Of course, I'm also confident
that zillions of dollars will go on
being wasted on the traditional approach for quite a long time,
until at last the facts become obvious to managers.
Actually, I'm even confident that those zillions will eventually
add up to more than what would have been needed to end
world hunger, had they been spent more intelligently.
Well, the world's not perfect.
>> To give you an idea, the kernel, with one system library, and no
>> device drivers, occupies less than 10K of RAM. With the object model,
>> I'm expecting less than 20K of RAM. This is a FAR cry from the 90K or
>> so for the Amiga's exec.library (then again, I hand-crafted everything
>> in assembly. However, the API itself is very platform independent, so
>> it IS portable if you're willing to re-write the OS in C or
>> whatever).
>
> Sounds good; how useful is it as a general-purpose OS? Will you have
> compatibility layers with POSIX? Win32? MacOS? Anything that
> substantial amounts of code exist for? Does it serve its purpose as
> an abstraction layer between the hardware and the software?
> Networking? Sound? Video? What is the security model? Does it have
> an extensible design allowing it to take advantage of 64-bit (or
> 128-bit, in the future) machines?
>
Yeah. The deep problem is to provide a USEFUL OS,
or a proof of concept to make further OSes more useful,
not just to write a toy OS.
I doubt any of us has the guts to bring a useful OS to the world,
when free Linux (and its future adaptations)
can compete with any traditional approach already.
That leaves proving new OS concepts.
I don't think there's any point in proving that tiny OSes can exist:
there is already the commercial QNX (its 8K kernel fits a 486's primary cache!)
and the free VSTa (a bit larger, though that may be due to the -g compile option).
Anyway, if you're to use any existing software or development tool
on current hardware, you can't escape bloatware:
how large are the POSIX libraries already?
The network drivers, libraries, and glue?
A networkable GUI?
So the few KBs you saved in the kernel won't be worth your time.
There's no reason to save memory on a specific application
if you're going to run generic software
built with current bloat-friendly development tools.
The only reason to spend time
reducing the resource consumption of a specific application
is to sell that application en masse on specialized hardware,
in which case FORTH-based development tools are already available
that will beat anything in size and reliability.
I don't mean that software should be bloated,
but that making software smaller should be a side-effect
of using finer development tools,
not of wasting lots of time with current ones.
If you want to fight bloat, build new development tools, not a new OS.
*My* project is to prove, following the trail shown by Turing himself,
that the key to better computer systems lies
not in ever-more-bloated hardware,
but in better software design.
I'd like the whole system to be built that way,
but I first need to build the development tools.
I wish to prove that a system can provide safe concurrent objects
without using *any* of those bloated hardware mechanisms that
slow down current CPUs tenfold: large instruction sets,
complicated memory protection, cache mechanisms to emulate
the Von Neumann architecture.
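As a minimal sketch of what I mean (assuming OCAML, with names of my own
choosing): cooperative multitasking and isolation with no MMU and no
interrupts. Each task is a closure that can only touch the state it was
explicitly given; the type system, not the hardware, guarantees that no
task scribbles over another's data.

    let tasks : (unit -> unit) Queue.t = Queue.create ()
    let spawn f = Queue.add f tasks

    (* Run tasks cooperatively, one at a time; a real system would have
       the compiler insert yield points instead of trusting the code. *)
    let run () =
      while not (Queue.is_empty tasks) do
        (Queue.take tasks) ()
      done

    let () =
      let counter = ref 0 in  (* reachable only by the tasks below *)
      spawn (fun () -> incr counter);
      spawn (fun () -> Printf.printf "counter = %d\n" !counter);
      run ()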
I wish to prove that a dynamically reflective system
is much more efficient, responsive, maintainable, debuggable,
provably safe or correct, interfaceable, adaptable, portable, etc,
than current static languages glued together with contorted shell scripts.
I wish to prove that interfaces are more easily adapted
to the users' interactive needs
when they are decoupled from the meaningful code.
Hardware note: in the silicon of one bloated CISC or RISC chip,
you could fit tens of times more horsepower using
parallel MISC technology units.
MISC CPUs are so small that you could put one per memory chip,
instead of using levels of cache memory between one active chip
and many passive ones.
This would also remove the power hog and slowdown
due to inter-chip memory busses!
Instead of emulating one fast but huge and bloated Von Neumann machine
with lots of memory,
you could have hundreds of not-as-fast-but-fair-enough-and-much-cheaper
units running in parallel!
MISC technology is ready, and only lacks financial support
or academic interest,
suffering from its originality wrt traditional hardware design.
Software note: proof-friendly high-level languages exist,
and their implementations are faster than implementations
of traditional low-level languages,
relative to the funds invested
in making optimizing compilers for either.
Plus, no one says that space/speed-critical code (e.g. device drivers)
must necessarily be written in those languages,
or that no proof-capable (and proof-requiring) low-level language
could be provided to the user to write the 1% of optimized code/metacode
where the compiler doesn't get something right.
Plus, semantically clean languages are a trifle
to parallelize, port, analyze, prove safe,
prove correct, and manipulate for high-level optimizations,
compared to traditional dirty languages. And of course
they can do "OO" more cleanly than any traditional "OO" language,
and make safe multithreaded/parallel/concurrent programming a breeze.
See Clean for an easily parallelizable/provable clean language,
CMU CL or OCAML for strong optimizing compilers for functional languages,
RScheme for a real-time system programming functional language,
Coq for a proof system for functional languages, etc.
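For instance (a trivial sketch in OCAML): a pure function has no side
effects, so the applications below are independent and may be evaluated
in any order, or in parallel, without changing the result; and proving
properties about it is plain equational reasoning, not state tracking.

    let square x = x * x
    let results = List.map square [1; 2; 3; 4]  (* = [1; 4; 9; 16] *)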
Reflection note:
all development systems are reflective;
only, this aspect is too often ignored and made cumbersome.
Because reflection is not a recognized part of the system,
code generation is made a slow, static affair where newly generated
code cannot easily reuse information accumulated through
previous dynamic interaction,
and people end up using lots of special-purpose languages
that communicate unsafely through unstructured text/binary streams.
An openly reflective system can end this chaos,
by ensuring that structured objects are passed between
computations that actually follow these structures
through semantics-preserving transformations.
This means the end of the need to explicitly generate/parse,
check/blindly-trust, and convert/ignore file formats,
which eats most CPU time,
introduces unsafety, inconsistencies, and errors,
and ruins real-time/interactive response.
[Imagine a chip-design tool
working on huge multimegabyte ASCII text documents,
instead of directly manipulating suitable internal structures!]
This means the end of underpowered special-purpose languages
that only do what the designer intended, not what the user needs.
This means that objects can freely communicate,
that any service is made universally available
and programmable/controllable, etc.
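To illustrate the contrast (a sketch in OCAML, with names of my own
choosing), compare passing a structured value directly with squeezing it
through a text stream that every tool pair must generate, parse, and
blindly trust:

    type shape = Circle of float | Square of float

    (* Structured communication: the receiver pattern-matches directly;
       there is no parsing step, and the compiler rejects unhandled cases. *)
    let area = function
      | Circle r -> 3.14159 *. r *. r
      | Square s -> s *. s

    (* The stream-based alternative: print to text, then re-parse. *)
    let to_text = function
      | Circle r -> Printf.sprintf "circle %f" r
      | Square s -> Printf.sprintf "square %f" s
    let of_text s =
      match String.split_on_char ' ' s with
      | ["circle"; r] -> Circle (float_of_string r)
      | ["square"; x] -> Square (float_of_string x)
      | _ -> failwith "unrecognized format"  (* the unsafety in person *)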
And it *IS* possible to do all that efficiently,
thanks to partial evaluation, a generalization of code generation
that can likewise be done dynamically at run-time,
so that you can both have dynamic meta-objects
and avoid actually going through several levels of interpretation loops.
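The textbook example of partial evaluation, sketched in OCAML: specialize
the generic power function on a known exponent, and the interpretation
overhead unfolds away at specialization time, leaving straight-line code.

    let rec power n x = if n = 0 then 1 else x * power (n - 1) x

    (* What specializing power on n = 3 yields, morally: no test and
       no recursive call remains at run-time. *)
    let power3 x = x * (x * (x * 1))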
Interface note:
The current trend in software development is to make it interface-based,
generating stubs for the meaningful part from the meaningless
media-dependent shape.
This makes software unportable to different interfaces
(mind the blind and other people who can't use a screen),
and encourages neglect of the meaningful parts of software.
Instead of generating meaningless code stubs from meaningless interfaces,
we'd rather generate full interfaces (not stubs) from meaningful code.
Because properly annotated code is meaningful, this is possible,
while the inverse isn't.
Different interface generators could automatically generate
interfaces for the MacOS, Windows, X/Motif, X/Xt, X/Athena,
raw VGA, text-graphics, or raw-terminal lovers/users.
And why restrict ourselves to vision-based interfaces?
Sound-based, touch-based (smell-based?) interfaces
could be automatically generated, too.
Of course, interface generators could be parametrized,
tweaked, etc, to best suit the users' needs, tastes, and habits.
It's easy to give an outer shape once you have the inner meaning;
it's impossible to guess the inner meaning from the outer shape.
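As a toy sketch of the approach (in OCAML, all names hypothetical):
describe the meaningful data once, and let interface generators derive
each presentation from it. Here one generator targets a raw terminal;
a GUI, speech, or braille generator could consume the same description.

    type field = { label : string; value : string }
    type form  = { title : string; fields : field list }

    (* One possible generator among many: render the form as plain text. *)
    let render_text f =
      Printf.printf "== %s ==\n" f.title;
      List.iter
        (fun fd -> Printf.printf "  %s: %s\n" fd.label fd.value)
        f.fields

    let () =
      render_text
        { title = "Account";
          fields = [ { label = "Owner";   value = "Fare" };
                     { label = "Balance"; value = "42" } ] }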
The problem, again, is that for lack of academic or financial interest,
and through the division of local efforts without any global view of the problem,
these ready technologies have not yet been assembled into a consistent system.
Each has been developed separately,
without any will to build a coherent whole.
The Tunes project aims at showing that such a system is possible,
by implementing one as a proof-of-concept.
http://www.dnai.com/~jfox MISC technology
http://www.eleves.ens.fr:8080/home/rideau/Tunes/ Tunes Project
See .../Tunes/Review/ for more about existing OS and Language technology.
I could also talk about other things that traditional designs do ill,
like building a wall between users and programmers;
like requiring the user-programmer to explicitly
implement object persistence when this could be managed orthogonally;
like depending on a centralized model of
kernel-vs-user, client-vs-server, programmer-vs-user.
I've gone on long enough.
== Fare' -- rideau@ens.fr -- Franc,ois-Rene' Rideau -- DDa(.ng-Vu~ Ba^n ==
Join the TUNES project for a computing system based on computing freedom !
TUNES is a Useful, Not Expedient System
URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"