MOOSE KERNEL far10

Francois-Rene Rideau rideau@clipper
Sun, 28 Feb 93 0:41:23 MET


I have 50 Moose messages to answer, accumulated since Feb 14 !
So here are my replies to all the ideas with which I don't
agree (or to which I want to add something important), after
one week too far away from my computer, and another one too near to it.
   ,
Fare

(I have extracted parts of the longest letter, but I also answer other
points)

-(0a)------------------------------------------------------------------------
>>
>>    ----- Transcript of session follows -----
>> While talking to whistler.sfu.ca:
>> >>> RCPT To:<moose-programmers@sfu.ca>
>> <<< 554 :include:up... Cannot open up: No such file or directory
>> 554 moose-programmers@sfu.ca... Service unavailable
>> 
>> ------- Received message follows ----
>> 
>> Received: by mulga.cs.mu.OZ.AU (5.64+1.3.1+0.50); id AA25157
>> 	Wed, 17 Feb 1993 17:00:43 +1100 (from winikoff)
>> Return-Path: <winikoff>
>> Subject: Re: MOOSE - comments (REPOST)
>> To: moose-programmers@sfu.ca (Moose Project)
>> Date: Wed, 17 Feb 93 17:00:42 EST
>> From: Michael David WINIKOFF <winikoff@mulga.cs.mu.OZ.AU>
>> X-Mailer: ELM [version 2.3 PL0]
>> 
>> I mailed this a while back and haven't seen it or responses to it.
>> Just in case it wasn't forwarded to everyone I'm remailing it now.
>> 
>> This is comments on the original draft and a few following mails.
>> 
>> Michael
>> 
>> --------------------------------------------------------------------

>> [...]

>> > >  The two most important features designed into the Moose
>> > > operating system will be simplicity and flexibility.  For one, a
>> > 
>> > Simplicity and flexibility are not features they are design guidelines.
>> > The way to have them in a system is to have them guide all the other design
>> > decisions we make.
>> >
I agree with both; simplicity and flexibility should guide our choices.
See my topic about OOness and genericity (2) at the end of the message.

>> > ** Could you please explain how object inheritance is done at the OS level?
>> > In particular how it is done SAFELY?
>> >
 At the end of this letter, I add a (long) topic about my views
upon OO and protection (1) (the numbers are reversed because one topic was
put before the other).
 
>> > Regards writing the kernel in assembler -- we can break the kernel up into
>> > machine dependent and machine independent parts and write only the former in
>> > assembler.
>> > 
>> > I think the less assembler we use the better.
(that should be obvious, for portability reasons)


>> > Giving device drivers their own priority -- good.
>> >
OK, but WHO (which processes) can set up device drivers ?
If anyone can install its own local driver, there's no security left. See (1).

>> > >  Most tasks will never need to call these functions explicitly
>> > > as a certain amount of memory will be allocated to each upon
>> > > initialization, depending on the compiler used.  This larger block
>> > > of memory should then be broken up by the task into smaller,
>> >
>> > No. It's simpler to have the compiler insert code which at runtime calls
>> > AllocateMemory and uses it's own data structures to subdivide it.
>> > Automatic allocation of memory isn't needed at all let alone in the kernel
If MOOSE isn't to be like MS-LOSS, it should include good dynamic memory
allocation. In the MicroShit operating garbage, you have only one type of
(low-level) slow @+*=%&$ memory allocation system, and if you're not pleased
with what DOS gave you, you must do it all again yourself.
MOOSE should include a large number of different (but similar) memory
allocation zones, for the different uses needed: as (I don't remember who)
justly points out, different languages, different approaches to the computing
art, use memory differently; C and Pascal use an implicit stack and an
explicit heap; functional languages (like LISP, ML, and others I don't know)
need an implicit heap with a garbage collecting organization (those two
approaches are the main ones I know - tell me more if you know others).
That's why the system should offer many different capabilities for allocating
memory, from raw reservation of contiguous physical memory (for device
drivers) to garbage collection-aware lending of virtual memory (inside apps).
See my opinion about OOness and genericity (2).
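To make this concrete, here is a rough C sketch of a common interface for
allocation zones; all names are mine, for illustration only, not a spec:

#include <stddef.h>

struct mem_zone {
    void *(*alloc)(struct mem_zone *z, size_t size);
    void  (*free_)(struct mem_zone *z, void *p);  /* may be a no-op   */
    void  (*collect)(struct mem_zone *z);         /* NULL if no GC    */
    void  *private_data;                          /* zone bookkeeping */
};

/* a device driver would ask a zone of contiguous physical memory:   */
extern struct mem_zone physical_zone;
/* a LISP/ML-style runtime would ask a garbage-collected one:        */
extern struct mem_zone gc_zone;

void *example(struct mem_zone *z)
{
    return z->alloc(z, 4096);   /* same call, whatever the zone */
}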
>> >
>> > Bear in mind that users should never use system calls -- the compile should
>> > provide (eg.) malloc and implement it reasonably.
>> > Doing that using the system calls is the problem of the compile writer.
(of course, but don't forget either that the system should be designed so
that tomorrow's compilers can do it easily and efficiently)

>> > ----------------------------------
>> >
>> > > most restricted "user" level.  All memory allocated will be
>> > > accessible only by the requesting task and its parent or children
>> > > tasks, allowing no other tasks access to this memory.
>> >
>> > No. I don't like allowing other tasks access to memory by default.
>> > I'd rather that the default was no access and that shared memory was explicitly
>> > requested.
>> > 
>> > My two reasons are
>> > (1) Robustness -- by allowing a whole hierarchy to read/write a process' memory
>> > 	it becomes possible for one of these processes doing something wrong and
>> >  	crashing the entire hierarchy.
>> > (2) Security. 
  To me all that discussion is good only for old bulky OSes of the ancient
days (Unix and its heirs). To me there are no definite huge tasks, separate
one from the other and communicating with great difficulty and slowness
through clumsy special files. To me there is a huge number of small objects,
each interacting with its few neighbours and sharing a little bit of info
with them. For simplicity's sake (in conception, in implementation, and in
use), objects are more or less hierarchically grouped when possible: virtual
groupings as well as physical ones, many groupings being able to overlap. But
as these groupings DO overlap (for example, different resources are each
accessed by different sets of objects), it would be stupid to privilege one
of these groupings over the others (i.e. privilege a particular resource),
declaring that objects are in the same process if and only if they are
grouped together in that particular way, forcing objects to use named (slow)
files and pipes to communicate outside their groups, and forcing you to
rewrite resource handling entirely INSIDE the so-called process.
 What you moosers are all doing, and what I don't agree with at all, is
copying Unix concepts. Unix is an old system which (like DOS) has a long
genetic history (though not, like DOS, that of a green slime monster), with
defects accumulating, and tricks to correct them complicating everyone's
life more and more. It is stupid, clumsy and bulky; it gives you bulldozers
to smash flies, but isn't able to lend you any tool both standard (an app
fitting the others) and good for little precision work. With DOS, you know
what you have: nothing; but with Unix, you don't, unless you've read 3 tons
of manual pages (far more than the Mac documentation), so that you lose a
GREAT amount of time interfacing with a badly documented system. Unix is a
computer-oriented system: you must learn all the machine's bugs and hacks to
use it; MacOS is a stupid-user-oriented system, so that you don't have many
things to learn, because you can't do many things either; and of course, DOS
is a non-oriented system, designed to yield the greatest profit with the
least work, and so complicated and buggy you won't trust any second-hand
source to reproduce its horrible behaviour.
 Let's not copy those OSes' respective flaws. If I wanted Unix on my PC, I'd
use Linux (sure I would, and certainly will if MOOSE makes the same mistakes
as its predecessors). But Unix really isn't an OS to dream of; it is just
the only true OS (not like DOS) available with standard (and free !) apps
and tools; personally, I'm SO sick of Unix !

>> > I like the MEM_PHYSICAL flag.
 I don't, since in my opinion DISTINCT layers of memory allocation systems
should exist one inside the other, with as much independence as possible
between them. Low-level programs allocate low-level (physical) memory zones,
high-level progs allocate HL (logical, virtual) memory zones, and so on at
each "level" of programming.
 Moreover, this is a hack, and we shouldn't mix specifications and
implementation (as Peter Mueller justly pointed out). What would its
significance be in a (highly conceivable) MOOSE kernel running as a subsystem
of another, more common and/or hardware-supplied OS (Unix on many
workstations, MacOS, IBM&Apple's next system) ?
 Let's not mix the two anymore (didn't we say MOOSE should be portable ?).
 
>> > > There will be a limit on the total number of memory allocations
>> > > made by a single task and in the system as a whole, so efforts
>> > > should be made by applications to consolidate memory usage.
>> > 
>> > This is policy rather then mechanism.
>> > I think it is better to say "there will be a mechanism by which processes can
>> > have a limit set on the amount of memory they may allocate"
 To me, you can parametrize the interface between apps (hierarchical groups
of objects) and memory; you put an intermediate request analyser between the
two, and that's all. It shouldn't have anything to do with the Kernel. My
solution is that there be a standard common interface for memory zones (see
the mail with my HL specs).
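Concretely, the request analyser can be a mere quota filter wrapped around
any real allocator; a minimal C sketch, with hypothetical names:

#include <stddef.h>

/* the real allocator underneath, whatever zone it is */
typedef void *(*alloc_fn)(size_t size);

struct quota_zone {
    alloc_fn inner;
    size_t   used;
    size_t   limit;   /* set by whoever created the task */
};

void *quota_alloc(struct quota_zone *q, size_t size)
{
    if (q->used + size > q->limit)
        return 0;                  /* refuse: quota exceeded */
    void *p = q->inner(size);
    if (p)
        q->used += size;
    return p;
}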

>> > > 'size' parameter.  To simplify and expedite memory management, only
>> > > large chunks of memory will be allocated at one time, usually
>> > > varying between 1k and 8k in size, depending on the host platform.
>> > 
>> > This is a implementation decision. I feel that we should either
>> > (1) Make a fixed decision across all platforms
>> > Or preferably
>> > (2) Design the system in such a way that the size of memory allocation does
>> > not make a difference to the user.
Of course (2) ! We're not building a low-level (hereafter abbreviated LL) but
a HL (high-level) standard for MOOSE !
Such questions shouldn't occur, so the answer is obvious !
Next time, just state the answer (and if you really do want to, quote the
question and then tell for the nth time why the answer is obvious).

>> > >      The starting address of the memory block will be returned by
>> > > this function.  If the allocation fails a null address is returned,
>> > > indicated by an address of zero.
>> > 
>> > Firstly there is scope for a bug here -- consider what happens if the system
>> > allocates a task memory beginning at (virtual) memory location 0 ...
>> > 
>> > Secondly and more importantly we need a system wide standard facility for
>> > communicating back error types and causes -- In order to have sensible 
>> > error reporting we need to return some kind of error code.
There we come to an important point: error handling. Of course, old OSes
(stands for Old Shit ?), being based upon C, couldn't integrate this concept,
nor could they understand anything sensible about HL programming (or even LL
programming, with respect (?) to DOS - Double Old Shit (OS/2 being half-old
shit)). Recent languages like Ada, later versions of (true) C++, and CAML do
include exception handling, and many HLLs I don't know certainly do too. I
very much like CAML's (not knowing the others - tell me about them; it should
be mostly taken from the same source as C++'s, but CAML's should be better
because of automatic genericity, as opposed to C++'s template hack). The
Kernel should include exception handling as a standard, so that objects can
exchange not only usual data, but also exceptional data (notice that these
languages do not allow embedding exceptions in data itself by declaring
exceptional formats, as the language I vouch for should accept; of course,
there are ways to obtain equivalent results, but then why not repeat that we
all use Turing Machine-equivalent languages ?).

(BTW, who has read my HL specs, and what do you think of them ? Do flame
their flaws, you're welcome, but do encourage what pleases you in them too.
Do not hesitate to ask for more details.)
(NB: CAML is a particular version of ML integrating imperative programming
as well as declarative programming; we work with it at the School (Ecole
Normale Superieure) in its CAML light 0.5 implementation by Xavier Leroy &
Damien Doligez; it's available for example at ftp: nuri.inria.fr - neither
place nor time to tell more about it here; unhappily, the syntax, being so
concise, is very dirty, as opposed to lisp's lots of insipid and stubborn
parentheses.)

>> > Is realloc necessary? Can someone come up with a situation where it is 
>> > essential or even important?
Easy to find: imagine a simple prog' begins with a huge recursive calculus,
using, say, 3 megs of stack (let's say it's not optimized for recursive
stack usage, as a compiler for my HLL should be). Then the result is saved,
and 1Kb largely suffices. Not only should the stack be reallocated, so as not
to occupy 3 idle megs in swap space, but the Task Manager (I'm talking about
the low-level manager of hardware tasks - a close extension to the Kernel,
particularly in the 386 version) should do it by itself (except for possible
compatibility with hacks like reading previously popped values ...).

>> > > E. System Clock and Event Scheduling
>> > > 
>> > > >> How should this be done?
>> > 
>> > Have a device to handle this. 
>> > The advantages of doing things in devices are
>> > 	(1) The kernel is smaller and simpler
>> > 	(2) We can change our mind later and write a different device and
>> > 		easily add it to the system.
(obvious)

>> > > F. Interrupt Handling
>> > > 
>> > > >> How should this be done?
>> > 
>> > I favor a simple interrupt routine that "converts" an interrupt to a message.
>> > Of course some interrupts WILL need to be handled by an interrupt routine but
>> > in general they should be converted into messages.
>> > I feel that this will simplify the writing of device drivers.

 Well, Andreas has proposed a very fine system; as I understood it (perhaps
mixing it up with my own ideas), you tell each IO manager what routine to
link to the device; instead of using a slow and awkward common server/client
architecture, where one does it all and the others just use it and can't
add anything to it, you have a more balanced architecture, where everyone
gives exactly what it has, and takes what it needs (sounds somehow
communistic; funny, isn't it ? but the flaw in social utopias was that men
can't totally trust each other in real life. Now, clean OO objects surely
can, if the user trusts whoever designed the software he uses...).
 If an object wants to use a message architecture, well, it can do so, and
won't have to do it all by itself (well, in any system, you can redo
everything yourself; but that's not using the system, that's going around
it): standard routines will do it for it. For this to be efficient, OO
genericity should be present at system level: LLL compilers and interpreters
available.
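Here, in rough C, is the kind of interface I understand this to mean;
hypothetical names, and message conversion is just one routine among others:

typedef void (*io_handler)(int device, void *event_data);

int io_attach(int device, io_handler h);   /* link my routine to it */
int io_detach(int device, io_handler h);   /* unlink it again       */

/* an object preferring a message architecture just attaches the
   standard routine that turns events into queued messages:         */
extern void enqueue_as_message(int device, void *event_data);

void use_messages(int device)
{
    io_attach(device, enqueue_as_message);
}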

To me, here's the privilege hierarchy:

* (if we are to get a 4-level one)
Kernel < System < Libraries < User (no need for Apps)

* to sharpen this outlook:
Kernel: exec/mem, LLL interpreter
  System: Input/Output, LLL compiler, HLL semi-compiler
    Devices, Libraries
      User interface = "Applications"
        common human being


exec/mem: the CPU resource manager, including low-level memory handling &
  task/thread switching
Input/Output: the other hardware resource managers
LLL: Low-Level Language - both interpreted & compiled
HLL: High-Level Language - semi-compiled to LLL
Devices: fast machine code routines
Libraries: LLL interpreted routines / low-privilege executable code if any
Apps: LLL interpreted code
Human being: external random number generator. Some generate HLL, a few use
 LLL.


 Another aspect of this (Andreas', and now my) idea is that you transmit
code as easily as previous systems could exchange binary data. This is a
standard feature, and the standard LLL is here to harmonize it all, whether
interpreted (for debugging purposes / when performing loss-less HL tasks /
when code changes so often that compile time is greater than the interpreting
time loss) or compiled (speed, with portability and OO maintenance easiness).
Previous systems made an arbitrary distinction between data and code (which
Turing himself fought in his first machine); this differentiation was
worsened by the CISC conception with its random, stupidly complicated
instruction sets (Hey, Dennis, will you tell that to your intel comrades ?
:-) (in fact, I think they know perfectly well at intel how horrible and
bulky CISC is; but they also know that MS-LOSS compatibility is their
golden-egg chicken). To get back to our topic, the intermediate LLL should
be simple enough to dispel this strong artificial barrier between code and
data (which OO embedding only began to attack).
 To sum up, I think Andreas' approach is definitely the right one (even if
he did not -consciously- think about all its linked concepts). I have learnt
that there are many, many such things we somehow know, but cannot use at
their best until we write them down and pronounce them aloud.

>> > Objects:
>> > I like the approach of simply defining a standard format.
Well, if not, this would no longer really be OO, and it would be another
DOS, waiting to become (VERY quickly) obsolete !
OO standardness is compulsory. To be ahead of other existing OS projects, we
must also include genericity NOW (for which C++ templates are only a hack)
and logical programming (the most common use being to find the "best" path
to transform a virtual object into another, knowing the elementary virtual
transformations available and their physical cost).
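For the logical programming part, a plain shortest-path search already does
the job; here is a C sketch, assuming a small fixed set of formats and a
cost matrix (all names and the INF bound are mine):

#define NFMT 8
#define INF  1000000

int cheapest_chain(int cost[NFMT][NFMT], int from, int to)
{
    int dist[NFMT], done[NFMT] = {0};
    for (int i = 0; i < NFMT; i++) dist[i] = INF;
    dist[from] = 0;
    for (int step = 0; step < NFMT; step++) {
        int u = -1;
        for (int i = 0; i < NFMT; i++)        /* nearest unvisited  */
            if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
        if (u < 0 || dist[u] == INF) break;
        done[u] = 1;
        for (int v = 0; v < NFMT; v++)        /* relax its edges    */
            if (cost[u][v] < INF && dist[u] + cost[u][v] < dist[v])
                dist[v] = dist[u] + cost[u][v];
    }
    return dist[to];   /* INF means no transformation chain exists */
}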

>> > Files:
>> > 	Actually persistent objects subsume files.
>> > 	(Simply have an object with one of it's attributes being an array of
>> > 		bytes and methods including "read" and "write")
>> > [More about this later]
 The Unix idea was "files are everything", that is, all you can manipulate
is huge, bulky files, with 1Kb size units; when they saw that what the user
wanted to manipulate was a big number of little data objects, they added
little ASCII string computing in their shell; and for programs to be usable
with one another, they all have to input/output ASCII text ("Everything MUST
be expressed as ASCII", as there is no automatic multitasking-compliant file
translation system available), with (moreover) external (slow)
not-so-standard (or worse: compatible !) utilities to handle small strings
through a huge-file-designed FS ! All that is great bullshit, and the
computing power it demands and wastes makes me vomit.
 A true system's motto should be the opposite: "Everything is an object"
(call it file, call it unit, call it scheme, call it GOD, I don't give a
damn; it just exists !), so that anything you feed the system with, huge
files or tiny objects, is somehow UNDERSTANDABLE by the system and by any
generic OO compliant software.

>> > > from its sleep and may enter its critical section.
>> > > 
>> > > 	semaphore@Down()
>> > > 
>> > 
>> > Why the "@" in semaphore@Down ?
Don't be so upset about notations. When we have finished the internal
specifications, we'll have plenty of time to discuss interface conventions.
(But I favor the same notation for methods as for data fields: why
differentiate one from the other ?)

>> > Nothing is said about how semaphores are created/destroyed.
>> > More seriously nothing is said about how semaphores end up being shared
>> > between two tasks.
The same way any objects get to share any information. See the (1) topic below.

>> > Should we have spinlocks too?
On multiprocessor systems, of course, but that's a kernel implementation
concern; let's not mix specs and their implementation again !

>> > Timers: These seem to be quite complicated.
>> > I feel that timers can be put into a device rather then being in the kernel.
(See what you said before about interrupts in general; the same goes for
timer interrupts and all device events in general !)

>> > One possible (simpler) interface:
>> > 
>> > 	eventid = schedule(time,event)
>> > 	status  = remove(eventid)
>> > 	eventid = schedulerelative(time_delta, event)
>> > 	time    = gettime()
>> > 
>> > In order to simulate timers have a process that does a gettime when it receives
>> > a message.
>> > 
>> > In order to simulate periodic events have the the receiver of the message 
>> > reschedule itself.
>> > 
>> > I'm not certain whether we need facilities to determine how much time a process
>> > has consumed.
>> > If we do it will have to be in the kernel since it involves the process 
>> > scheduler.
How complicated !
Again, let's program it by layers. The lowest (kernel) layer does as Andreas
says. Then you can have a filter monopolize the lower-level resource to emit
events on a queue; then, if you want, mix that queue into a general event
queue for stubborn processes to un-multiplex the global queue, as stupid
current systems do. YES, you CAN do it ! But once you see there are simpler,
easier means to handle data, by piping data just where you want it, you
WON'T use the centralizing algorithm. Everything is easier, neater, quicker,
better, when objects just fit one into the other.

>> > > B. Display Output Devices
>> > > 
>> > >     A proper definition of the state of the display output devices
>> > > would be an object-oriented GUI similar to X11, only much easier
>> > > to use. :-)
>> > 
>> > NO!!!!
>> > The user interface is a USER program for flexibility.
>> > The display should offer graphics primitives.
>> > (Eg. line, text, blit ...)
Of course, again, there are several layers ! Every program uses just the
one it pleases. Every layer has a standard interface; every interface can
be filtered so that programs do not interfere with one another.
(Why must I always repeat the same motto ?)
(Why am I so aggressive today ? Because of Unix !)

>> > User input:
>> > 	It is useful to be able to insert input filters -- processes through
>> > 	which input events pass before being sent to their destination.
>> > (Uses: Screen savers, macro recorders, shortcuts ...)
The BASIC PRINCIPLE of using an OO system is linking objects one to the
other. Call it piping, method calling, whatever; it's just the inherent idea
that objects interact; objects live ! You can't just isolate one: an
isolated object does not live; it cannot even exist !

>> > File systems:
>> > 	One issue that hasn't been raised is crash recovery.
>> > 	I suggest people have a look at Amoeba's file system -- they store
>> > 	files contiguously sacrificing disk space for speed.

>> > Libraries:
>> > 	I'm sorry. I didn't understand this too well.
>> > 	Can you please explain them. 
>> > 	IN particular I'm uncertain as to the difference between objects and 
>> > 		libraries -- libraries seem to be just typed objects which
>> > 		sounds like what you were trying to avoid in ...
 Well, libraries, as you see, are particular objects. Of course every object
should be properly typed (that's a definition: something untyped doesn't
"exist" to the system).

>> > >      Strictly speaking, the operating system will define nothing
>> > > more than a standard format for storing a class and a methodology
>> > > for accessing its attributes and methods in a meaningful way.  No
>> > > enforcement of these policies will actually be done to allow for
>> > > maximum flexibility and efficiency.  If objects are not implemented
>> > > correctly by an application or device driver, the loss will be only
 Why "nothing more" ? That's THAT that is important, that's what make the
system OO'ed. That's why you MUST enforce OO compliancy, else you have
nothing but Unix. That's also why OO specs should be well conceived (they're
the key to success or failure), and fit expectation not of the worst
HLL possible (as Dennis seems to have suggested by forbidding variable
length parameters in procedure to fit Pascal stubborness), but of the best
with features not all reunited yet in a unique language (which I will never
fully say how I wish it be), but all more or less present somewhere:
generic types; unifying schemes, including types themselves and functions,
along with any executable code; logical constraint programming; implicit
parameters with a "the computer does it for you the best with the least
data" philosophy,...
 If object are not correctly  implemented, they just won't be able to
communicate ! And we don't want a system where every little thing you do
requires a different application, and you can't communicate the result of
one to the other, but under a postscript file or something like that, do
we ? That's why every object that a user one day may want to link with his
own objects you didn't thought about must be well defined in the OO system;
and these object are seldom what you believed, so the more OO compliant
objects, the better. Now, some object may have very little likeliness to
be interfaced with others, so that you can (meta-)build interfaces that do
exist if you really insist, but that are complicated and slow under
usual circomstances - give just the info in a remote place of an info file,
and that will do: the OO compliancy should be flexible when needed !
Also remember that for HL tasks, you don't gain much from systematically
use optimized code (notwithstanding compile time), and may especially lose
a great deal of memory for code that make you gain less time than it takes
to load it from disk !
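To show how little "standard format" needs to cost, here is a minimal C
sketch of a common object header; the field names are my guesses, not a spec:

struct class_info;                /* forward declarations          */
struct object;

struct object_header {
    struct class_info *class_;    /* what the object is            */
    unsigned long      id;        /* system-wide identifier        */
    struct object    **environ;   /* the few neighbours it knows   */
};

struct class_info {
    const char  *name;
    int          n_methods;
    void       **methods;         /* see topic (2)b on virtual tables */
};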

>> > [...]
>> > Gary Duzan
>> > ~~~~~~~~~~
>> > 
>> > I think we SHOULD provide a CLI (Command Line Interface) -- their very useful
>> > when porting Unix based software.
>> > 
>> > >   So what language do we write the high-level stuff in? Should it
>> > > matter? Can we make it not matter?
To me, the only way to have an all-user-understandable system is for the HLL
and the CLI to be just the same ! That means that usual (unoptimized) HLL
objects are semi-compiled to LLL in real time; the LLL could also be seen as
a subset of the HLL, and their syntaxes may be devised to fit each other;
the HLL may thus have many a layer, more or less HL/LL. It's no longer a HLL
or a LLL; it becomes THE Programming Language. That's why its (sub)syntax(es)
should fit all the very different needs people have for different tasks,
from device I/O to mathematical abstraction. We need not implement
everything now (let's have the lowest layers first, and build each one upon
the former, as we'll do with the system more generally).

>> > We MUST make it not matter. An OS that only supports one language isn't going
>> > to be useful.
Well, there's a difference between supporting a language and having it as
the primary language. ANY system can run ANY language as long as it's
powerful enough. You CAN run AppleSoft BASIC on a Unix WorkStation; but you
just WON'T, because it's of no interest but historical. That should be the
same for MOOSE and C/C++: of course you can still use C, but you won't be
able to use all the system's power -directly- with such a LL dirty language,
and you will have to include many a library to interface the two. Moreover,
a standard LLL allows easy implementation of ANY language you want, by
providing a compiler front end from the new language to any (combination of)
existing standard layer(s) of THE PL.


>> > Fare'
>> > ~~~~~
>> > 
>> > Agree. The system should be written in a HLL. C has had an influence on Unix etc
>> > 
>> > > To conclude, I think we mustn't rush doing what would be the kernel of a
>> > > system whose features aren't defined yet. Let's define the high-level object
>> > > orientation of the system before we write the kernel (more precisely, let's
>> > > not write something in the kernel we should have to change completely because
>> > > it will not fit high-level system requirements).
>> > 
>> > I disagree - I feel that the kernel should be designed first with the
>> > applications in mind.
 I disagree: we do agree ! Both must be done at the same time, because THAT
is not the fundamental criterion for software development: you must first
define WHAT you want it to do, then see HOW to do it, and feed the problems
encountered back, to eventually modify the HL specs and enter the cycle
again.

>> > Lets face it -- this is a chicken and egg problem: The kernel and the
>> > applications (including device drivers) depend on each other's design.
>> > The solution is to start with the kernel (which doesn't actually use the device
>> > drivers and so can be designed without knowing the precise interface) and then
>> > designing the device drivers. This process is then iterated to a fixpoint.


>> > Regards the Mac OS being in ROM - we can simply let it boot and THEN take over.
Why take over, and not run inside it ? After all, a Mac user will still want
to use his existing standard programs, and the MacOS is clean enough to
allow defining new interchange standards; it's up to the Kernel and the I/O
devices to adapt to the external MacOS layer. Everyone will enjoy
interchange between Mac and MOOSE software. That's not possible with DOS,
where no program can communicate with any other; and no one sensible will
want to communicate with Unix's clumsy ASCII, but rather to support standard
network services.

>> > Some ideas and suggestions -- PLEASE READ THIS BIT
>> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please read everyone else's whole stuff, until the work is well divided -
see (3) below !

>> > (1) What audience are we aiming this OS at?
>> > 	My original understanding was standalone PCs but then you mentioned
>> > 	various network drivers.
>> > 	Do we plan to support (Eg.) diskless workstations?
MOOSE should be Pee O aRe Tee A Bee eL Ee. However, we all happen to have
386 PC's, because cloning has made these stupid computers CHEAP and STANDARD.
But we'd love to be able to run our new standard OS on an even CHEAPER,
NON-STANDARD, BETTER computer (like, say, the Amiga or RISC machines).

>> > (2) We need to have a configuration facility to make installation of new
>> > 	software/devices/hardware totally painless.
>> > 	This of course, will not be in the kernel however I thought it
>> > 	worth mentioning.
>> > 	We should set things up so that either
>> > 	(1) An application doesn't need to know what it's running on
>> > 	(2) An application can easily find out what services the system provides
With my "link the objects" philosophy, there's no more need of a constant
global configuration file; the system is a dynamical link system; it just
boots an object (presumably an all-powerful interface that'll ultimately do
what you want, but propose you standard objects to begin with) and there you
are. You can link/unlink device drivers at anytime, for any group of objects
(you're responsible for it); the Kernel being reentrant, the system will be
able to run inside himslef, so that multiple task managing should be a trifle,
"only" imposing restrictions on atomicity of code/data (THAT is an important
issue !).

>> > 	This is particularly important since an application may find that
>> > 	certain device drivers have not been mounted.

>> > (3) (I've mentioned this b4) We need a facility for returning error codes
>> > 	when a system call fails.
See CAML light exceptions (better than C++'s, but without the same name
polymorphism): you "virtually" define exceptions (implementation is up to
the Kernel), with handlers pushing and popping with the code (the
Kernel/compilers may group or ungroup exceptions to optimize), and when
an exception occurs, it is fed to the nearest handler available (with
catch facilities; ultimately a system error handling device will get it);
simple, isn't it ? We may also define parametrized exceptions (as code and
data are equivalent, a parametrized value may as well be a function or an
array element or whatever) ...
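A minimal sketch of that mechanism in plain C, using setjmp/longjmp to stand
in for what the Kernel would do natively; names are mine, and the exception
code must be non-zero:

#include <setjmp.h>
#include <stdlib.h>

struct handler {
    jmp_buf         env;
    struct handler *prev;      /* handler below on the stack    */
};

static struct handler *top;    /* nearest handler available     */

void raise_exc(int code)
{
    struct handler *h = top;
    if (!h)
        abort();               /* system error handler's turn   */
    top = h->prev;             /* pop, then jump                */
    longjmp(h->env, code);     /* feed the nearest handler      */
}

/* usage:
     struct handler h; h.prev = top; top = &h;
     if (setjmp(h.env) == 0) { protected code; top = h.prev; }
     else { handle the exception }                              */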

>> > (4) Objects, Files, address spaces and all that jazz ...
>> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

>> > Persistent objects subsume both processes and files.
>> > Getting rid of the concept of the files is (IMHO) a strong forward step.
See what I said above about Unix, files, and objects.

>> > One idea which may simplify swapping is to use a single address space across
>> > all objects -- possibly even the one corresponding to where they are stored
>> > on disk.
>> > Of course most of the address space will be inaccessible ...
>> > 
>> > The main advantage of this scheme is that it makes shared memory simpler to 
>> > program with -- otherwise pointers have to be avoided since the addresses
>> > will differ from process to process.
>> > 
>> > [But see arenas later]
Again, that's for implementation eyes only. See (1b)

>> > (5) Objects must be first class -- we MUST be able to store objects in variables, pass 'em to functions etc.
>> > 	The implementation will of course use pointers. This might be a good
>> > 	place to start thinking about capabilities.
Wasn't that obvious ? (Didn't I say the same thing b4 ?)

>> > (6) Security: It hasn't been mentioned yet.
>> > 	Do we want it?
>> > 	To what degree?
See (1)

>> > (7) I/O redirection:
>> > 	Idea: Have the following convention: when an object is started it 
>> > 	is given by it's creator the objects representing stdin stdout etc.
>> > 	(It may be more appropriate to think of screen, kbd etc.)
>> > 	This lets us easily do I/O redirection -- simply have the shell
>> > 	substitute say, a file for the keyboard or a printer for the screen etc.
>> > 
>> > 	[Note: The list of initial objects could perhaps include ALL objects 
>> > 	used (Eg. file system) for maximum flexibility -- this would then also
>> > 	let us run a program in an isolated environment ... good defence against
>> > 	trojan horses]
Call it I/O, whatever. There will be a generic pipe type to connect anything
that produces something to anything that consumes it
(in ML, a pipe is of type ('a->'b) -> ('b->'c) -> ('a->'c)). Then you can
have all kinds of I and/or O buffered pipes; pipes that allow you to
interrupt and/or watch the traffic; automatic transtyping pipes, etc.

>> > (8) Microkernel:
>> > 	Seems to be the way we're heading. Good.
't'should be able to run even on my HP28 ! Of course, you won't have any
tools then, hardly a few devices for the simplest I/O.

>> > 	* Efficient IPC is important
>> > 	* Possible interface:
>> > 		send	-- sends a message to an object
>> > 		receive	-- returns the next object. Pauses caller if none
>> > 				available.
>> > 		call	-- Like a send but does a context switch to the receiver
>> > 			for extra speed 
>> > 		spawn (or create_object) 
>> > 		kill	-- destroys an object, freeing it's resources
>> > 		self	-- returns the id of the caller. One use is "kill(self)"
>> > 				instead of a special exit call.
>> > 		alloc	-- AllocateMemory
>> > 		free	-- FreeMemory

Yes, but keep it at the virtual (i.e. HLL, before interpretation/compilation)
level: I hate unnecessary physical (so-called "virtual") virtual-method
tables. Well, after all, we should agree on inter-object communication; but
then, I think those methods don't suffice.

>> > The main missing calls in the interface are process communication and 
>> > synchronisation.
(among others). Let's define the OBJECT -virtual- type before implementing
anything (but let's try to define everything, to see what we need at the
base level); we should need, for example, to have its type and possibly
other infos, etc.
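Written down as C declarations (hypothetical signatures), the quoted
interface, plus the kind of call I find missing:

typedef unsigned long oid_t;      /* object ID */

int   send(oid_t dest, void *msg, unsigned len);
int   receive(void *buf, unsigned len);           /* blocks if empty   */
int   call(oid_t dest, void *msg, unsigned len);  /* send + ctx switch */
oid_t spawn(void *code);
int   kill_obj(oid_t obj);        /* kill_obj(self()) replaces exit()  */
oid_t self(void);

/* missing, among others: */
oid_t type_of(oid_t obj);         /* every object is typed (see above) */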


>> > (9) Process communication and synchronisation -- a proposal
>> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> > 		
>> > Process communication requires an efficient means of passing large 
>> > amounts of information between processes.
>> > 
>> > The one commonly used is shared memory -- the problem is that pointers don't
>> > work.
Zap, zap: that's an implementation problem. Let's have standard pointer
types defined in the LLL, and a virtual pointer type in the HLL: anything
that can be used to "point" at an object among others can be called a
virtual "pointer" !

>> > I would like to suggest a variation which I'd like to call "arenas".
>> > The idea is that the arena is like shared memory except that when created 
>> > it is specified to which virtual memory addresses the arena should be bound.
>>
>> > Any process that attaches to the arena has it mapped to the same virtual memory
>> > addresses so pointers can be safely used as long as they remain pointing within
>> > the arena.
>>
Hey, this has little to do with an OS spec: Very Unportable ! Let's leave
this to the LLL implementation(s): sure it can be used, but we don't use
assembler hacks -only- for the fun of it, but to fulfil HL requirements.
What would 680x0 programmers say if they heard you with these 386 tricks ?
There's only one thing at HL: you may want to allocate an object, and you
may want to publish it, so that others can see it where you put up signs.

>> > Regards synchronisation -- semaphores are a fairly common primitive.
>> > Rather then having objects we can simply have operations "up" and "down" that
>> > take a pointer to an integer.
Yes, but semaphores need not be needed -explicitly- at HL: they should be a
very low-level built-in Kernel construct (the same as error handling, etc.).

>> > By convention the first 4 bytes of an arena can be used as a semaphore for
>> > mutual exclusive access to the arena. This is only a convention and is not 
>> > enforced anywhere.
Always more and more hacks ! If there are to be semaphores, they should be
standard, and of course the system should enforce its mutual-respect policy,
else anyone can have a "safe" task destroy data by not respecting a simple
restriction.
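What "standard and enforced" could mean, sketched in C: the Kernel hands out
opaque handles and is the only one touching the counter (hypothetical names):

typedef unsigned long sem_id;

sem_id sem_create(int initial_count);
int    sem_down(sem_id s);     /* may block the caller */
int    sem_up(sem_id s);
int    sem_destroy(sem_id s);

/* a task that never received the handle cannot corrupt the semaphore,
   which the first-4-bytes-of-an-arena convention cannot guarantee.   */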
>> > 
>> > Comments?
Isn't that enough ?

>> > (10) Do we support distributed applications? How?
I'd like to. That means a lot of scheduling is needed to know how to
distribute the stuff.

>> > (11) Where is non-deterministic behavior expressed and how?
>> > 
>> > (12) Do we support some notion of "signal" -- ie. pre-empting messages?
>> > 	Should we? (I don't know)
There may simply be external (parallel) exceptions taking over when such a
signal appears. The same as Andreas' trick. Remember that you can give code
as a parameter to other executable code. And then ?
   
>> > 
>> > (13) Should we be providing threads? (I don't think so but am open to debate)
What are threads/tasks but little/large objects ? Why build such a
distinction between the two ? We have seen that any definite limit between
the two is artificial, as is U*x's: when they don't share memory, unix
processes share files; indeed, there are only large clumsy files under unix;
following that philosophy, numbers should be stored as directory entries,
etc. That's more than stupid !
 Well, any executable object is a thread (or task) somehow. Some tasks need
more context than others (for example, a DOS emulating task may need many
complicated variables to keep running, while a LLL interpreted task only
needs its (LLL) program counter & exception/running stack pointers &
environment pointer = 4 system pointers only !).
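As a sketch (not a binary layout), those 4 system pointers in C:

struct lll_task {
    void *pc;          /* LLL program counter               */
    void *exc_stack;   /* exception stack pointer           */
    void *run_stack;   /* running (data/return) stack ptr   */
    void *environ;     /* environment pointer (dictionary)  */
};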

>> >[ ...]
(I boycott the naming discussion )

---------------------------------------------
Here are other answers:
* Duzan:
>> 
>> =>Is this OS only for programmers.
>> 
>>    The OS should always be for the programmers; the user interface
>> should be for the users.
 YES, but appl. prog. is so much eased or hindered (or even made impossible)
by the OS that you can't totally separate one from the other.


* Dennis: (approximately quoted)
 (HLL choice) 
> ... (for example) no function calls with arg. lists because not supported
> in Pascal !
 
  Then allow nothing but ints & chars, because anything else is unsupported
in assembly (or in SNOBOL, if you want an HLL).
  This is a counter-argument, because Pascal & C (and other) calling
conventions are different, so that whatever you do, you can't have your
system's direct calls compatible with every HLL !
  Does this mean we must restrict HL programming to the most limited existing
HLL's capabilities for compatibility reasons ? THAT's DOS !
  More seriously, ... OO ... polymorphism ... exceptions ... constraints ? ...


* Dr. Hayden 15 Feb 93
>> So, here's my best and most current "definition" of Moose:
>>
>> "Moose will be a fully interrupt driven, preemptive, priority-based
>> multithreaded system."
  Then Moose would be just another kind of OS/2 or Unix, with the additional
disadvantage of not being able to execute any existing code (not to talk
about compatibility with old code).
  Moose should be more than that; it should be multithreaded because that is
the only simple means to represent interacting living objects as people
imagine system components, and also because multitasking is the only way to
have the computer work and still be in interaction with the user; it is the
only way for the computer to meet the completely different things the user
(or users !) want it to do at the same time. A computer is a tool that can
and must react to man's mind, and satisfy all his simultaneous and
successive needs.
  MOOSE should be the first and ONLY Intelligent System for Intelligent
Users (ISIU), conceived for the easiest mutual machine-human understanding.
  That's not having the human learn the machine or do nothing, like Unix;
that's not saying the human mustn't learn anything, and having him do
nothing, as on the Mac. That's really enabling understanding of the human by
the machine; this can be done only through communication between man and
computer, where BOTH must make an effort; the effort shouldn't be put on
only one side. The machine must learn how to interpret the man's will so as
to execute it; for that, the human must not consider the machine as stupid
in itself and give it unmotivated orders (that's LLL programming - see C).
In the same way, he won't be thought too stupid to formulate his wishes
calmly: that means communication between the two, with each making his best
effort to be understood.
 That's the basic philosophy of my OO'edness: the machine learns schemes and
general ideas about what it can do for the user. The user can define anything
he wants; it will be understood in its fullest conception; but the machine
won't be neglected, for with each object is defined machine-level info on
how to implement it, on what method is best. Finally, user and machine
exchange only the fewest data needed for both to be sure the other agrees.
When one is not sure, he may be able to "talk" with the other so as to make
clear some obscure points, or decide by himself which solution to choose if
it doesn't matter to the other - that is, the machine chooses between
implementations, the user chooses between names and how to represent his
objects logically - and that does not mean one has no access to the other,
as they can begin a talk session when needed.
 Now multi* capabilities are only one of those things.

* about DOS FAT systems:
Some talk about reading/writing it. Of course, this is obvious on 386-based
computers. But the DOS FAT is so @#$^#$%^& that no sensible people would
willingly use an OS with it as the main FS. Whatever FS is chosen to work
on, it should be included as a device driver, not in the Kernel.


* Could someone explain to a poor ignorant frenchman the joke about Moose ?
(I heard you talk about a Mr Moose; who was it ?)


*** To Andreas:
* UI: why not allow text windows (a la Turbo-Vision), with a
text-fullscreening window capability ? It may be useful for people who don't
have graphics capability (i.e. VT220 users; people who only have CGA, etc.).
The TUI may be very similar to the GUI and may share most of its programmer
methods with the GUI. I'd like both to include object stack windows (a` la
HP) to use with FORTH directives (with possible switching between RPN and
infix notation).
* No to SVGA-only: we'd like the GUI to be portable to any card, lower
resolution cards as well as higher resolution ones. It'd be good also for a
DOS emulator to be able to use the lower modes. (Don't let andreas svgaize
the MUSIC.) Last but not least, software standards (as opposed to hardware
standards) are more easily portable, implementable as a subsystem (for
example, why not use the Mac OS and windowing system in a possible Mac
version ?). I see no reason why MOOSE should always be at the lowest layer
of the system to run properly. If we can skip writing I/O, all the better
for us.

* Andreas : function tables for device drivers.
(Michael David disagrees: says it's too slow)
That's not slow; that should be far faster than stupid polling (on PC's, we
can and must use interrupts to interface with hardware, whereas on many
8-bit machines we just couldn't, and had to synchronize manually). In fact,
I think that's the right method (which I myself thought about when I dreamt
about my own system, a long time ago).
 As for Andreas' concerns about list maintenance, there are better ways to
implement a list than using an array; array maintenance is of linear cost,
whereas tree maintenance may be logarithmic if properly managed; so what I
propose is a combined tree/array structure (you give arrays of functions to
the device driver, which manages a tree).
 Michael shouldn't worry about queues, for a function given to a device
driver can use queues: each window will have its queues, if it wants; there
are also standard public queues to use explicitly (with standard
queue-managing routines provided) (among them, there should be the timer
queue, ordered by increasing date, so that for example the mouse handler
enters the wait-for-double-click state at the first click, and tells the
timer to end it some time later; if it appears before the timer signal that
the click wasn't a double click, the mouse driver will just tell the timer
to forget the request)
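A sketch of that timer queue's interface in C, with the double-click use in
comments (hypothetical names):

typedef unsigned long date_t;
typedef unsigned long timer_id;

timer_id timer_schedule(date_t when, void (*fire)(void *), void *arg);
int      timer_cancel(timer_id t);     /* "forget the request"        */

/* mouse driver, on the first click:
     t = timer_schedule(now() + DOUBLE_CLICK_DELAY, end_wait, 0);
   if a second click arrives before the timer fires: report a double
   click; if something shows it wasn't one: timer_cancel(t);          */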
 Good question: can we really manage a logarithmic algorithm for moving a
pointer into a set of 2D windows, and finding which window it is in...


* Realloc necessary ?
well, what if a task needs 4 Meg stack space at initialization for
recursive calls, and then runs well with 4K ? Will the full 4 Meg stay
in (even swapped) memory while the process is running, whereas it is not
used ? What if only one instance of the process needed the 4 Megs, but
the other instances, not having resizing capability, ask for a 4 Meg stack
as a precaution ?
 Yes, I think ReAlloc may prove useful, and may be included in some
standard extension of the system, if not the minimal one. A simple version
of it is (in an imaginary enhanced Pascal language)

Procedure ReAlloc (x : Allocated_Object; NewSize : Object_Size);
 Var
   Y : Physical_Allocated_Object ;
 Begin
   With PX being Physical_Object(x) and OldSize being PX.Size do
    if OldSize <> NewSize
    then
      begin
         Alloc_Physical_Object (Y, NewSize) ;
         CopyRawData (PX, Y, Min(OldSize, NewSize)) ;
         Tell_New_Size (Y.Contained_Object, NewSize) ;
         Free_Physical_Object (PX) ;
         PX := Y ;
      end ;
 End ;

* dmarer:
 Keep the Kernel as pure as possible: OK.
 Allow non-OO programming: ??
 - If it means device drivers are not bound to be OO-clean, and heavy
 computation need not look up multiple method tables at each iteration, OK;
 but if it means there is no standard compulsory class hierarchy
 from low-level raw data classes to high-level virtual classes for system
 calls, I don't agree anymore !
  It would mean no more standard means to connect objects one to the other.
 There would be a proliferation of dirty apps which won't recognize each
 other's objects and methods, so that we are back to old DOS incompatibility
 (or to Unix low-level text compatibility). (a)
  The system Kernel interface MUST be OO-based; but there are accessible
 low-level objects for low-level programming by low-level programmers (or
 compilers). Of course, only the INTERFACE of the Kernel will be OO-based.
 By definition, the Kernel is the minimal set of routines needed to run
 system objects, so that all the rest can and must be OO compliant
 (including low-level OO, which I beg you not to forget before flaming).

 (a) You may notice that compatibility is decided by what the user can
 create and modify directly from the shell; for as all machines are
 equivalent to each other, up to power (every one being equivalent to a
 Turing Machine), any app can be rewritten for any machine and OS; what
 counts is what the user actually has when he buys the machine, given with
 the system, or with the main common apps: on 8-bit machines, he had antique
 line-number BASICs (and an assembler and disassembler, on the Apple ][);
 under DOS he cannot do anything; under MS-Losedows, he cannot do anything
 either, but he's got nice windows not to do it in; he can move icons and
 redraw them, but he cannot really do anything else either; under MacOS, the
 only standard user language, HyperTalk, is good for object manipulation,
 but incapable of any efficient calculus; under Unix he can only manipulate
 text strings; in fact he MUST use them or stick to the rest, i.e. a
 COMMAND.COM equivalent. Power aside, I prefer the 8-bit machines.
  Then, as for the GUI being an "extension": yes, but it should be so
 standard and complete that you can't think of programming graphics without
 it (both because it is so standard as to be demanded by the user and
 because it is so complete as to make the programmer feel happy and eased).
 But it would be nice for simple programs not to have to care whether the
 actual UI is text- or graphics-based.
 
 - As for a task being an object, I don't see at all how it would slow down
 the system, unless you want MOOSE to systematically implement objects by
 virtual method tables, which effectively is slow and stupid.



-(1)-------------------------------------------------------------------------
As promised, here is my opinion about OO, multi* and protection

--a--
 It sounds clear to me that safety and OO compliance are equivalent topics
in an OO system. If there's no enforcement of OO method use, objects won't
be secure, and thus will be unusable between processes; the system won't be
OO. Conversely, in a secure system, only Kernel-enforced means are secure,
so that you can't add inter-object communication through other means than
those given by the Kernel. If it only handles big files and not tiny
objects, you won't be able to use your tiny objects, unless you voluntarily
restrict your use of the system; but then, as for security, you'll never be
able to ensure your tiny objects' behaviour, as any uncompliant
(unaware/hostile) internal source can jeopardize the whole subsystem's
integrity.
 That's how MS-Losedoze is, for example !

 To me, the system's units are not files and processes; they are objects and
methods, the method space and the object space each being the other's dual
space. Then, what you called IPC is IOC: Inter-OBJECT-Communication, or
dually IMC, Inter-METHOD-Communication. As methods are themselves objects,
we have a monomorphism from one space into the other (which, not being a
one-to-one correspondence, shows the (virtually) infinite dimension of the
system's logical Object universe).
 And I make no fundamental distinction between small and huge
objects/methods, the big ones being tasks and the lesser ones procedures.
Objects are all fundamentally free from one another; but they each interact
with a very small number of neighbours. The system is connected, as each
object is indirectly linked to every other. But we manage to find
significance in objects because of their linking particularities and
structures: these show how to group them, or differentiate them. We know
that two objects are free WITH RESPECT TO a set of neglected objects if they
are independent when you actually cut the neglected objects out. Now, if we
ultimately trust the user to tell us all we must and can know about his
objects, security is when objects that are free with respect to
user/external manipulation stay free when you don't trust the user and the
external world as much as you could if they were perfect.

  Sharing objects: iff you know of an object (have its ID), you can
use it (i.e. use its methods). More or less public/private objects/methods
are achieved by declaring different "views" of the same physical object.
Less privileged actors (with respect to the given object) will only see
restricted "views" of it (my english is poor -- help me find the proper
word; I'm sure that's not "view"). If you want to protect your objects,
just don't publish their IDs, or publish IDs for limited "views" of them
only.
  That's a general method for protection, which is after all only a "view"
of sharing objects. To enhance this, you may add a key to names, so that an
aggressive program cannot pirate you because you use common names.

  Now, what are the IDs, and where are they ?
An ID should be a pointer to an object. For security purposes, you can add a
key to it, so that whoever picks unmatching random IDs is detected and
isolated. Now, each object must have info including its class, ID, and
environment, that is, the context in which the object is defined, its
neighbours. Usual objects have very few neighbours. "Task" objects have a
large environment; to some, it may seem that this reproduces the Unix
environment; but definitely, this is a very limited comparison: a MOOSE
environment variable can contain an object of any kind of type. Objects are
organized hierarchically, as are objects in the intermediate code produced
by a compiler. And there we are: we will work on the compiler's intermediate
code; interfaced variables will be published by that means; uninterfaced
variables are forgotten, and need not fear any exterior jeopardy, thus are
safe, and may have been compiled as such (the compiler can look for more
optimization on uninterfaced variables, while interfaced variables take more
room, because when the variable is simplified, the interface is often
complicated, so that you can't optimize much code on interfaced objects).
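A C sketch of such keyed IDs (names are mine):

struct object {
    unsigned long key;            /* drawn at object creation     */
    /* class, ID, environment, ... */
};

struct obj_id {
    struct object *ptr;
    unsigned long  key;           /* must match the object's key  */
};

int id_valid(struct obj_id id)
{
    return id.ptr != 0 && id.ptr->key == id.key;
}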
  About the File system: you will say that an evil object need only look at
the main directory, and then have the ID of whatever object it wants - Yes;
that's why a given object/task WON'T have complete rights over the file
system, and will be forced to use a File server, which will automatically
ask for the object's ID through a standard UI call.
  As for copying huge symbol tables for each object: we need not copy the
entire "table" (it should be lists of tables of lists of ...), but only
point to table modifications. If the modifications exceed a certain rate,
larger parts of the table may be copied, etc. A Pascal-like hierarchy should
cause no problem, with the last defined object checked first: that's what we
obtained, didn't we ? (See later; I call it the dictionary.)

 
 In fact, I think the Kernel Set should be exactly the C/ASM-coded methods
for handling low-level objects: tasks/threads/procedures (executable code in
general, including stack management and exception handling), memory
allocation, object typing, virtual pointers and naming (including subnaming
and polymorphism), linking and unlinking objects to the running system
(imagine the coding/decoding needed to load/save an object from/to mass
memory in a running session). Nothing more.
 A basic extension should be a little Forth-like semi-interpreted language
(very quick for high-level stuff, very portable, very light). This language
should be equivalent (with null or linear computation time) to the famous
standard intermediate code language I vouch for, and may be a VERY useful
extensible tool for both booting the system and using it daily (pushing and
popping from/to multiple stacks is great ! Better than mere cut/paste with a
unique one-level stack as in Losedoze and Mac).
 Other extensions needed for a minimal lightweight system would be a
simple file system, say, inside a single DOS file; and a simple interface
system, say, a raw terminal to begin with. All that should fit in a very few
K's; less than 64K !
 A basic extension should allow scanning the hierarchical "dictionary"
(the list of all accessible objects) with several criteria, including the
comments on given objects: for example, find the fastest available version
of a real number handler covering numbers ranging from -10e4 to 10e4, with
precision 1e-4 (that is 9 decimal digits, and no need for floating point);
in this example, a 32-bit fixed point manager may be sufficient, and faster
than a co-processor emulator. I think the dictionary is very similar to such
concepts as the file system (files are the only objects supported in Unix
and similar systems), the environment (which under Unix and DOS contains
only literal text strings), and of course a compiler's symbol table and the
FORTH dictionary; the main difference with respect to DOS/Unix is that the
dictionary may contain any kind of typed objects, not only ASCII. Objects in
the dictionary need not be copied for each instance of the dictionary, nor
the dictionary be duplicated for each object. Each object may just tell how
its dictionary differs from the parent version.
 The dictionary is everything an object knows of the exterior. Most simple
objects have a very restricted vocabulary; but an object may allow
dictionary expansion: local method names are given hierarchically "inside"
the definition of the local object's class.
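
 Such a criteria scan could look like this sketch (structures and names all
hypothetical): the caller supplies a matching predicate and a cost measure,
and the scan returns the cheapest -- i.e. fastest -- matching object:

struct dict_entry {
    const char        *name;
    const char        *comment;    /* searchable description          */
    void              *object;
    struct dict_entry *next;
};

void *dict_find_best(const struct dict_entry *d,
                     int  (*matches)(const struct dict_entry *),
                     long (*cost)(const struct dict_entry *))
{
    const struct dict_entry *best = 0;
    for (; d; d = d->next)
        if (matches(d) && (!best || cost(d) < cost(best)))
            best = d;
    return best ? best->object : 0;
}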

 

--b--
386 Kernel Implementation

Here are my assumptions
- You very seldom manage huge indivisible objects; if you really do and want
optimal speed, do compile a device driver to assembly; it's worth it; then
your huge object will be interfaced with the system.
- What remains for the Kernel and its low-level devices to manage is small
and tiny objects. There will be a huge number of these.

 You'll tell me having a FORTH-like interpreter is slow ?
No, it may be as quick as Turbo Pascal code, or quicker if the libraries'
features are complete.
 Let me explain.
DOS-like dynamic linking is ill-suited to managing those tiny objects; you
can link one big stable program, but how can you link, unlink, relink tiny
objects ? Either the linkage table would always have to stay with the
object, or objects would be forced to fit page alignment ! In both cases, it
isn't workable !
Following 386 protected-mode rules as defined by Intel works fine with tasks
and libraries, but completely fails if each procedure instance is associated
with its own particular "thread"; OO, particularly with Turbo Pascal-like
so-called virtual tables, would be SO slow that I understand Dennis would
hate it. But here comes an interpreter.
The interpreter runs in flat mode, and that's why it is FAST (as compared to
multi-segment mode): a function call is 2 to 3 times faster, and 2 times
faster still with my hack (I too can propose hacks, but I don't mix them
with specs :-).
Security is guaranteed because WE system engineers wrote the interpreter.

PM   RET+CALL:  >70 clock ticks
Flat RET+CALL:   35 ticks
Flat RET only:   10 ticks

The "RET only" line means that if you compiled assembly straight to
interpreted code, you would get a worst-case bound of 20 times slower. But
you never do that, so the standard time should be about 8 times slower than
assembly for integer calculus, and faster than usual assembly for virtual
table calls.

If SS:(E)SP is our LLL program counter, that's fine !
Interrupts use another stack (because of hardware privilege) -> OK.
We can write-protect SS !



A JMP [BP++] equivalent, or LODSW then JMP AX, should do just as well, if
you still want a clean Intel-compliant kernel.
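
 In C terms, that dispatch is just this sketch (a model only; the real thing
is the one- or two-instruction NEXT above, or the RET hack): the "thread" is
an array of primitive addresses, and the loop fetches and calls each one:

#include <stdio.h>

typedef void (*prim_t)(void);

static long stack[32]; static int sp = 0;

static void push1(void) { stack[sp++] = 1; }
static void push2(void) { stack[sp++] = 2; }
static void add(void)   { sp--; stack[sp-1] += stack[sp]; }
static void show(void)  { printf("%ld\n", stack[--sp]); }

static void run(const prim_t *thread)
{
    while (*thread)
        (*thread++)();            /* fetch next primitive, call it    */
}

int main(void)
{
    static const prim_t prog[] = { push1, push2, add, show, 0 };
    run(prog);                    /* prints 3                         */
    return 0;
}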

In the MOOSE standard format (used on disk, for example), function numbers
are used by the FORTH interpreter. When read into memory, function numbers
are mapped to function addresses. The mapper recognizes operands and does
not map them.
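
 The load-time mapper could look like this sketch (format details invented):
on disk the thread holds function NUMBERS; read into memory, each number is
replaced by the function's ADDRESS, while operands (here, the value after a
literal) are recognized and left unmapped:

typedef void (*prim_t)(void);

union cell { prim_t fn; long operand; };  /* one in-memory thread cell */

enum { OP_LIT = 0 };                      /* number 0 = push literal   */

void map_thread(const long *disk, union cell *mem, int n, prim_t table[])
{
    int i;
    for (i = 0; i < n; i++) {
        mem[i].fn = table[disk[i]];       /* number -> address         */
        if (disk[i] == OP_LIT && i + 1 < n) {
            i++;
            mem[i].operand = disk[i];     /* operand copied, not mapped */
        }
    }
}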

Even the interpreter can have more than one privilege level, so that some
instruction numbers are or are not available at compile and/or run time...


BTW, let's ask the Intel guys: can you have a CS with a 16-bit limit but
32-bit default instructions ? Or the same for SS ? It would be VERY useful
for Kernel compression.


-(2)--------------------------------------------------------------------------
and here are my views on OO-ness & genericity
(well, that'll be next time !)
--a--
see C++ templates; CAML generic types


--b--
here's how to implement it:
-> we have compile-time virtual tables. Only the fewest of them that are
actually needed are (partly) represented in object code. Directly interfaced
objects, of course, must include full virtual tables;
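
 A rough illustration in C (hypothetical; C++ templates would express it
more directly): when the concrete type is known at compile time the call is
resolved and inlinable with no table at all, and only objects crossing the
interface carry a full virtual table:

struct vtable  { long (*area)(const void *self); };
struct object  { const struct vtable *vt; };        /* generic header  */
struct square  { struct object hdr; long side; };

static long square_area(const void *self)
{
    const struct square *s = self;
    return s->side * s->side;
}
const struct vtable square_vt = { square_area };

/* compile time: type known, call resolved, no table emitted          */
static long area_static(const struct square *s) { return s->side * s->side; }

/* run time: only directly interfaced objects pay the indirect call   */
static long area_dynamic(const struct object *o) { return o->vt->area(o); }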



-(3)-------------------------------------------------------------------------
 To finish with, here is what I propose for mailing organization:

 We should name our message subjects following the actual MOOSE subject of
the letter, and divide a communication into as many letters as there are
subjects. We will number letters with a personal ID and counter, to easily
archive and quote each other's work.

example:
main subjects available would be
ORG - moose ORGanization, nothing to do with actual programming, but
    compulsory.
KER - system KERnel, the minimal set of instructions to bootstrap the OS.
TUI - Text User Interface; useful for bootstrap; also interfaces the language.
GUI - you know what that is.
HLL - High Level Language; its specs, its compiler.
LLL - Low Level language; its specs, its interpreter (compiler for moose 1.0)
DEV - Device level; we define the algorithms for implementing our virtual
     data.
386 - interface with hardware
MAC - interface with MacOS
UX  - interface with Unix systems
NET - remote NET link.

 Most messages will be at the interface between two or more subjects;
let's just list the subjects. For example: the GUI should intersect both
HLL and LLL in its specs, so write GUI,HLL and GUI,LLL; the Text UI should
do the same; parts common to the GUI and TUI should go in GUI,TUI; graphics
implementation may be talked about in GUI,386 or such. Compiler specs would
be HLL,LLL; the LLL interpreter would be LLL,KER; memory management would be
at KER,DEV / LLL,DEV / HLL,DEV respectively, depending on the programming
level, etc. (I'm sure I did not make all the right choices; but whoever
manages the mailing list should make the definitive choices after having
heard the expressed opinions; he'll have to manage further modifications.)

 I thought to add + for OO definition (how it looks from the outside), or -
for implementation (how it looks from the inside), so that people interested
in the interface only don't HAVE TO know what's inside (but still CAN).
Again, I don't want to impose my choice on the others, and I'll correct my
numbering following the final moose ORG decisions (and I certainly wouldn't
take the responsibility of making this decision alone).
  Next, when those are settled and the work well divided, there will be
subfields to subjects:
HLL.GEN for HLL syntax about GENericity
LLL.GEN for the LLL representation of the same
DEV.FAT for DOS FAT object devices
DEV.OFS for a possible custom disk Object Filing System
386.DOS for a DOS EMUlator under MOOSE (that's for version 2 at least; for
        now, we'd better make a MOOSE EMUlator under DOS !)
  We'll know better what to add when we get there.

 Personal IDs might be 3-letter indicators too, so that I am far, Dennis
is den, Andreas is arf, JJ Lay is jjl, etc. (well, everyone chooses his own
ID). Let's also initialize personal counters to 10, so that we see what to
do with previous messages. Thus, for example:
the first part of my message should be KER,ORG+ far10, then perhaps
KER,*- far11;
part (1) would be KER,HLL+ far12
part (2a) would be HLL+ far13
part (2b) would be LLL- far14
part  (3) would be ORG- far15


                            --------------


Mailing is too slow a means of communication when there are more than two
people, because each message takes a long time to be read/written.
We should also try a forum, talk sessions, phoning (though this may be
difficult). Why not publish everyone's availability for such means of
communication ? Personally I can't receive talks, but I can send one from
my brother's account.
I spent more than 24 hours on this mail, and it isn't even a good input
for precise specs.
We should also define a standard format for asking and answering, proposing
and counter-proposing, to be more efficient. Does anyone in the group master
the Unix text managing tools, so that we could manage the specs directly
with simple commands (= add a "message", link it to keywords/other messages
as an answer, counter-answer, acknowledgement, particularization/
generalization, etc.) ? For each field, we could keep a list of up-to-date
messages, with three particular fields: open problems, choices to be made,
decisions taken. As everyone can't do all this by himself, each part of the
specs is left to someone to manage for everyone, keeping it as up-to-date
as possible.
 The list of keywords must also include where each keyword first appears,
where it is best defined, and where it is used in another definition ...

			   ,
			Fare