Wed, 31 Mar 93 4:26:58 MET DST
I thought I could explain my language ideas directly, but I see I must
first explain my global vision of the system. The language's features arise
naturally from how the whole system is conceived: the coordination language
reflects exactly how the system is conceived, and what it offers you.
I quote Dennis (Mar 17 message) (hey, Dennis, please number your messages !):
> What I want out of this project is something that is more *useful* to me
> than DOS or Windows, not something which takes the scientific world by storm.
So for the system to be useful, you must be able to use its full
functionality simply from a language (the so-called coordination language
that appeared in all the OS projects that arose in the discussion). I do not
consider calling libraries with grotesque structures (pointed, referenced,
or copied) a simple system call. A system call must be simple, easy, and
mustn't change its syntax when you change implementation ! (that's also why
a low-level language like C++ won't do).
This is originally Michael's mail, and Andreas tried to summarize what we
agree on; I added my own stuff as a reply; disagreements are marked ***;
additions are marked +++. Now notice this: if we are going to get something
done in a reasonable amount of time, we must start to agree on things, as
David mentioned as a postlude in his mail.
There are several "fields" which haven't been discussed yet, so I took them.
Remember one thing: We can always begin in the simple and step up when we
need, AS LONG AS WE KEEP THE SYSTEM EXTENSIBLE BY HAVING ONLY HIGH-LEVEL
REQUIREMENTS; that's why here are only HL specs. Implementation is left out
(I have some ideas about it, but I won't explain them here; if you're not
convinced something can be efficiently implemented, then I'll show you how.)
System -- The Fourth Attempt
> [NOTE: I'm starting from a conventional OS and moving away rather then starting
> from objects and moving towards OS -- I feel that this is more likely to
> yield a running system in a short amount of time. As Dennis has pointed out
> we don't have the resources to re-invent the wheel]
I'm rather starting from the other end: the user/programmer; how he
(actually, that's more like I) conceptualizes the system; what he expects it
to do, how he'd like to modify its behavior.
THE Basic principle
We base the system (and THUS the kernel) on persistent objects/processes.
[Terminology: by persistent I mean that it can be swapped out to storage,
the machine turned off and on and then continued. In practice we would like
to be able to have a large number of suspended processes on disk]
[This subsumes directories and files -- a directory is a particular kind of
dictionary (see below), describing persistent objects (then if you call
"files" persistent objects, a directory is a "file", and not even a
particular one, as any object can contain sub-objects). Thus the
file/directory terminology of other systems is made obsolete; what remains
is the more general object/dictionary paradigm.]
The underlying idea is that there's a Yin-Yang duality between processes and
objects (one always comes with the other, and even though different, they
can't be separated, as each generates the other): object evaluation is done
through processing; processing management is done through objects. This is true for
virtual as well as physical objects/processes, and at any abstraction level:
at the lowest level, CPU instructions are binary-coded / binary data is
modified through CPU instructions; at the highest level, virtual object data
may be accessed through functions (a "zap" field can be accessed through
virtual read/write functions, so that zap := f(zap,...) will invoke zap.read
and zap.write), and virtual functions can be implemented as just data (i.e. a
function on a very limited number of elements can be coded as just an array
containing its values).
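To make both directions of the duality concrete, here is a minimal C++ sketch (the names Zap, apply and the parity table are purely illustrative, not a spec):

```cpp
#include <array>
#include <cassert>

// Data accessed through functions: a "zap" field behind read/write methods,
// so that zap := f(zap, ...) goes through zap.read and zap.write.
struct Zap {
    int value = 0;
    int read() const { return value; }
    void write(int v) { value = v; }
};

void apply(Zap& zap, int (*f)(int, int), int arg) {
    zap.write(f(zap.read(), arg));   // the field is only touched via methods
}

// The other direction: a function coded as just data -- the parity of the
// integers 0..7 stored as an array of its values.
constexpr std::array<int, 8> parity = {0, 1, 1, 0, 1, 0, 0, 1};
```

So `parity[n]` evaluates the function without any code, and `apply` evaluates data without any direct access.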
By virtual, I mean the system includes a high abstraction level where objects
are accessible through their abstract inter-object properties, not depending
on how they are implemented, implementation being the system's problem
(through a standard object evaluator, including compiling); this holds not
only while programming, but at any time when just using the system --
"programming" must be just using the system more deeply, without any clear
boundary between programming and common use -- if you ever used an HP28/48,
you know what I mean. This implies there is no boundary either between
compiling and interpreting; you just evaluate or simplify objects.
In his Mar 26 message Re: Kernel 0.11 [arf1], Michael says (hey, Michael, do
number your messages !):
> What I meant was that a MOOSE object is defined to be a process.
> A small object that isn't a process (Eg. is internal to a
> process/large-object) isn't viewed as an object by the system.
If by process you mean a system object (assuming that by duality, objects
always end up in processing), then of course only processes are system objects !
BUT, if you mean that the system's base object will have to save the CPU/FPU
state, plus system data, like Unix tasks, then of course not ! If that's what
you want, just write a server under Unix to relaunch tasks with checkpointed
data, and add it to the superuser's profile (I'm sure that's not what you meant !).
> An object is a process with method declerations.
Anything is an object.
Integers are objects; the whole system is an object; even code is an object
class; classes are objects; anything is an object.
NOW, not every object has to be accessible from every other. That's
why, for example, not every single temporary integer has to be directly
interfaced to the user. However, if you possess debug info, you can access
any single integer in the system (apart possibly from some temporary variables
in parts of the nucleus which need to forbid interrupts).
How to access an object - (O)ID's
The system implements a basic type of data: (object) IDentifiers.
The "object" word is unnecessary, as ANYTHING is an object, so of course,
what you IDentify IS an object. So I'll later refer to them as IDs (or OIDs
when it's not obvious that a general object is identified).
IDs may include ID validation data, so that you can't just invent an
OID, use it, then crash the system. IDs are valid through the whole
system, possibly automatically translated when transferred from one part
of the system to another (see remote address spaces in a distributed system).
Objects possess subobjects, accessible if you get their ID. There are
objects at any level; each level can be thought of as an "implementation"
of higher-level requirements.
What you can do is evaluate an object (giving it arguments), get a
sub-object/another aspect of the object, use it as a readable and/or
writeable argument to another function, or destroy it.
An object's method is only one of its sub-objects (or of its class's
sub-objects). As a method is itself an object, no special ID field is
required to access an object's method: the method's ID is just a common OID;
after all, every object is another object's sub-object, except perhaps
the whole system !
So let the inmost kernel manage nothing but OIDs: that's all we need; then,
basic devices (see lolos further down) can manage everything else. All
we need is OIDs (BTW, can someone find how we can make "LOVE" an acronym
for OIDs ? :-)
[Implementation: IDs will organize naturally following modular
grouping: for example, on the i386 we can use a 64-bit ID representing
the DT/segment/offset of a physical object, objects from the same module
being very likely to be in the same segment or at least the same DT. Let
the implementer do his best to fit the requirements. We can also
have but DT and segment, or but offset, or but segment and offset, and/or
add a key/check field, to ensure that a buggy/pirate program can't use an
object that doesn't exist/didn't really give it the rights to use it. (That's
been called capabilities, says Gary.)]
[If you're sure to use local objects, you're not forced to use global OIDs;
external, unlinked objects must use OIDs to communicate (that's late OO
binding); if you want to speed up execution, then link statically instead
of dynamically (that's early binding); you CAN compile only
parts of your program and/or interpret parts of it.]
[Thus, in the same system implementation, there may be several different
types of OIDs managed; the only requirement is that access to any
identifiable object may be granted to another object through an OID; i.e.
different tasks may have different OID mappings as long as they can
communicate any OID to any other task through possible OID conversion.]
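A minimal C++ sketch of an OID with a key/check field, in the capability spirit above (the 32+32 bit split, the table, and the key generation are illustrative assumptions, not a proposed layout):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// An OID carries both an index and a check key: inventing an OID out of
// thin air fails validation instead of crashing the system.
struct OID {
    std::uint32_t index;
    std::uint32_t key;
};

class ObjectTable {
    struct Slot {
        std::uint32_t key;
        void* object;
    };
    std::unordered_map<std::uint32_t, Slot> slots;
    std::uint32_t next_index = 0;
    std::uint32_t next_key = 0x1234;   // illustrative; a real key is unguessable
public:
    OID insert(void* obj) {
        OID id{next_index++, next_key++};
        slots[id.index] = {id.key, obj};
        return id;
    }
    // nullptr means "invalid capability": wrong key or no such object.
    void* resolve(OID id) const {
        auto it = slots.find(id.index);
        if (it == slots.end() || it->second.key != id.key) return nullptr;
        return it->second.object;
    }
};
```

A forged OID with a wrong key resolves to nothing, which is exactly the protection property claimed above.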
What differentiates objects from one another, what unifies the concept of
objects, is the inter-object relationship.
Raw data doesn't mean anything, as it can mean anything. What gives it
sense, what makes it interesting or important, is not just its absolute, raw
value; it's how this value interacts with other objects. In maths, it's the
same: raw sets aren't interesting at all; what is interesting is the
structure you have on them; a set without a structure is so meaningless that
when you name a set, you always intend an underlying, IMPLICIT structure on
it (for example, you talk about "group G" and "ring R", rather than "group
(G,+)" and "ring (R,+,x)", but certainly mean the operators without
explicitly naming them). Sometimes, you must make your structure explicit,
when none or more than one is implicitly available for your given set.
So in computer science, what describes the structure is classes. A class
is the data of a set of elements, and operators on this set; classes may
(must) be interlinked by operators across both sets, as there must be
(indirect, seldom direct) interaction between any class and the whole
system object (if not, by the definition of existence in the system, the
said class would not even exist).
Classes must not only state that operators exist, for raw operators
(as operators are dual to set elements, and operator sets to the sets on
which they operate) are no more interesting than raw elements. Thus, the
data of element/operator classes (which is necessary) just does not suffice
(it's enough for just running, and may be enough for the kernel; but
it isn't enough to link dynamically efficiently while staying
meaningful at the same time). We need to know more general restrictions:
- what object is being accessed, written, read, or referenced,
- what algebraic/functional equations are verified by the system objects.
That may allow checking that is stricter, but also freer for the programmer,
than mere typing.
The kernel need only know about the implementation of one base class class,
say a DVT-driven-class-class class; then it will be able to use virtually
any class, hence any object, provided every object has a class that can
eventually be expressed by evaluating a basic class instance.
What you may want to ask a class is:
- get the list of available members (methods/data);
- get an object's dictionary (isn't that more or less the same ?);
- get info about the restrictions applying to the previous objects;
- create a new instance/a list of new instances of the class.
Any object can also be considered as a virtual computation (call it a
function, a procedure) (polymorphic or not), which takes arguments as input,
and modifies its output accordingly. A computation is deterministic, i.e.
given identical input, it will produce identical output.
NOW,
1) input and output may be the same object;
2) running twice on the same object is not running twice on the same data if
the input object has been modified;
3) in particular, when you read from a pipe, the pipe is part of the input,
but also of the output, as you tell it to "advance";
4) some of the input may be declared non-deterministic/dependent on
uncontrollable external parameters (examples: peripherals; shared objects);
5) some of the parameters may be implicit: some functions use global
(actually, more global) parameters. A common example: CPU registers are
implicitly given as both input and output to all binary functions (but
some functions can be trusted not to modify them, at the system's risk,
if their info says so);
6) a complete function descriptor includes all this. Of course, you need
not load it all if you are already sure of the function's features.
Only the function's ID is required; if you have it, you're supposed to
know how it works; if someone wants type checking, register saving and
all that, he'll just pipe a filter between the function and its caller.
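As a rough C++ sketch, points 4)-6) amount to a descriptor record carried alongside the function's ID (every field name here is my guess at what "all this" would contain; only the ID itself is mandated by the text):

```cpp
#include <cassert>
#include <cstdint>

// A hypothetical "complete function descriptor": the OID plus the implicit
// behavior info a careful caller (or an interposed filter) may consult.
struct FunctionDescriptor {
    std::uint64_t id;           // the function's OID: all you need if trusted
    bool deterministic;         // same input => same output (point 4 negated)
    bool reads_globals;         // implicit input, e.g. CPU registers (point 5)
    bool writes_globals;        // implicit output
    bool preserves_registers;   // trusted not to clobber, at the system's risk
};
```

A caller who already trusts the function ignores everything but `id`; a type-checking or register-saving filter reads the rest.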
Computations may well end in a non-trivial way; a function may behave
exceptionally because of exceptional input: input not matching the (common)
function's requirements, etc. Exceptions are the same as the events seen
in any "event-driven" environment: someone produces one, others (and/or
the same) catch it, and treat it. However, we are not bound to the client/
server way of implementing exceptions: we can have local exceptions as
well as (more) global ones; in fact, each exception may be implemented
individually, although modules are present to help implement each one.
Exceptions may also occur in/through special data coding: a function that
returns the next element of a set can actually return an element from
the set, or emit a "set is empty" (local) exception that's to be caught by
only one process in each chain of processes.
You can also have "positive" exceptions: a procedure that computes a
set's cardinality using the previous function may emit a local "set not
empty" exception, but catch it itself if nobody else does; thus, someone
who knows only of a set-counting function may more directly determine
whether the set is empty or not, without having to count each element (of
course, this is useful only if the first function is inaccessible, or slower
than the counting function, which then mustn't use it).
Thus, exceptions can be unrecoverable/recoverable, and may be caught by
the emitter if no one else catches them (this may be implemented by
a boolean indicating whether someone else catches the exception, and another/
the same for the function to return if the exception occurred; but then
a catcher won't be able to recover after catching... but all this is
implementation anyway).
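Here is a minimal C++ sketch of the "positive exception" counting example; C++ exceptions can't really be "caught by the emitter if nobody else catches", so the boolean mentioned above is modeled as an explicit flag (names and the flag mechanism are illustrative):

```cpp
#include <cassert>
#include <vector>

struct SetEmpty {};      // "negative" exception: no more elements
struct SetNotEmpty {};   // "positive" exception: an element was found

// The counting function: if a catcher announced itself (the boolean), it
// emits SetNotEmpty on the first element; otherwise it "catches it itself"
// by simply not throwing, and counts to the end.
int count(const std::vector<int>& s, bool signal_not_empty = false) {
    if (signal_not_empty && !s.empty()) throw SetNotEmpty{};
    return static_cast<int>(s.size());
}

// Someone who knows only of the counting function can still answer
// "is it empty ?" without counting every element.
bool is_empty(const std::vector<int>& s) {
    try {
        return count(s, /*signal_not_empty=*/true) == 0;
    } catch (SetNotEmpty&) {
        return false;    // an element was found: no need to count the rest
    }
}
```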
Multitasking
Several objects may be evaluated at the same time, or wait for others to
evaluate; any single-object computation may be done by switching between
objects only when an evaluation waits for another one to finish. However,
this is not good when you want independent/very lightly dependent objects
to live together, as we want in a multi-tasking system, where every user/
sub-user may want something different done, independently from the
other objects. Then, we have to (justly) share execution time between them.
The time-sharing system I propose is both simple and fairly fair:
at the base of the system, there is only one "system" ("root") user. Then,
at each level, a user can split into "sub-users". To each user
corresponds a process; but processes can be asleep (waiting for another
process to finish/to emit an event) or awake. Each sub-user may have
more weight than another when demanding time. A preemptive system switch
changes user according to weight proportions, and continues running the
process if awake; if the user possesses sub-users, it recursively
applies the same algorithm. When you willingly yield to another,
brother processes may profit from it with priority.
A process can be shared by multiple users; then, each user will
launch the process (if he has a non-zero weight).
[On multiprocessor systems, two cases arise: if multiple processors
use common data, we can keep the same scheme (with a little
optimization in distributing users preferably to a processor which
still has them in its cache); if different processors have different]
Thus, we have a tree defined by recursive circular lists of processes.
There are also chained processes that organize in a stack rather than
a circular list: each such process launches another one, then waits
for it to succeed.
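The recursive weighted descent can be sketched in a few lines of C++ (the credit-based tie-breaking is my own illustrative choice of a deterministic "weight proportion" rule, not part of the proposal):

```cpp
#include <cassert>
#include <string>
#include <vector>

struct User {
    std::string name;
    int weight = 1;
    bool awake = true;          // meaningful for leaves (processes)
    std::vector<User> subs;     // empty => a leaf process
    int credit = 0;             // accumulated right to run
};

// Does this user, or any of its sub-users, have a runnable process ?
static bool runnable(const User& u) {
    if (u.subs.empty()) return u.awake;
    for (const auto& s : u.subs)
        if (runnable(s)) return true;
    return false;
}

// One scheduler step: give each runnable sub-user credit in proportion to
// its weight, descend into the richest one, and charge it for its turn;
// recurse until a leaf process is reached.
User* pick(User& u) {
    if (u.subs.empty()) return u.awake ? &u : nullptr;
    int total = 0;
    for (const auto& s : u.subs)
        if (runnable(s)) total += s.weight;
    if (total == 0) return nullptr;      // everyone below is asleep
    User* best = nullptr;
    for (auto& s : u.subs) {
        if (!runnable(s)) continue;
        s.credit += s.weight;
        if (!best || s.credit > best->credit) best = &s;
    }
    best->credit -= total;
    return pick(*best);
}
```

With two sub-users of weights 2 and 1, repeated calls to `pick` run them in a 2:1 proportion, which is the "fairly fair" sharing described above.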
A dictionary is a kind of object server: you tell it what object you
want, and if it has it, it gives you its ID. Basic dictionaries use names.
More generic object servers can use any criteria. Basically, an object
server is a function returning an OID, or an "object not found" exception.
But another aspect of the object may be a sub-function that searches for the
object in an unexplored part of the dictionary only, together with a
function that resets the unexplored part to the whole dictionary, so
that you can get ALL fitting objects, and decide yourself which to choose.
Thus, you can build complex dictionaries from simple ones, by just adding
functionality to the dictionary functions, via virtual function members.
Now, some dictionaries (not all: a function, for example, may know only
of its parameters and/or variables) may accept dynamically adding new
objects to them; this is done through a function having as explicit
parameters an OID, its type, and its "name" (be it ASCII and/or a number).
A complete dictionary is actually a compiler symbol table !
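To illustrate building complex dictionaries from simple ones, here is a C++ sketch where a dictionary is just a name-to-OID function and composition is a fallback chain (the "not found" exception is modeled as an empty optional; all names are illustrative):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <optional>
#include <string>

// A dictionary is an object server: name in, OID out (or "object not found",
// modeled here as an empty optional).
using Dict = std::function<std::optional<int>(const std::string&)>;

// A basic dictionary built from a fixed name -> OID table.
Dict from_map(std::map<std::string, int> m) {
    return [m](const std::string& name) -> std::optional<int> {
        auto it = m.find(name);
        if (it == m.end()) return std::nullopt;
        return it->second;
    };
}

// A complex dictionary from two simple ones: try the first, then fall back
// to the second -- just adding functionality to the dictionary function.
Dict chain(Dict first, Dict second) {
    return [first, second](const std::string& name) {
        auto r = first(name);
        return r ? r : second(name);
    };
}
```

Chaining a local dictionary in front of a global one gives the usual scoping behavior for free.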
Protection is done by restricting object servers: as you can only get
OIDs through name servers (that's the def. of a name server :-), you
can't access an object if no server you could access possessed the
object's ID & key.
Limited rights are also granted by limited publishing of member functions/
data to public dictionaries.
Each object may also be more or less trusted. For an object to be trusted,
you will have to recompile it or interpret it (with the same rights as you
gave the object; i.e. software not accepted as a lolo cannot compile
forbidden hardware access instructions). Trust can be organized at the
user/subuser multitasking level: each user is more or less trusted according
to its subusers' behavior and to the trust given to the modules it uses.
Modules themselves are trusted according to their successive uses. In a
distributed OS, each host may more or less trust other hosts and/or
interhost links. This may be done simply by assigning a user and/or module
to each connected host.
Inheritance & Polymorphism
To me, inheritance is only saying "one aspect of this object is that it
fits another object's requirements, thus particularizing it"; so it's
easier to treat the inherited aspect as a subobject of the main object. So
for example, an aspect of a circle is its being an ellipse; this is purely
virtual, and does not mean the physical implementation of a circle will
effectively include, say, the lengths of both axes.
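In C++ terms, the circle/ellipse example might look like this sketch: the "being an ellipse" aspect is a sub-object produced on demand, and the circle's representation stores only its radius (the names are illustrative):

```cpp
#include <cassert>

struct Ellipse {
    double a, b;   // semi-axes
};

// A circle stores one radius; its "being an ellipse" is just one aspect,
// a sub-object computed when asked for -- not two stored axes.
struct Circle {
    double r;
    Ellipse as_ellipse() const { return {r, r}; }   // the inherited aspect
};
```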
Polymorphism is likewise done by using the same name for different objects.
You can then differentiate equally named objects by looking at their
different classes. This means a dictionary (although perhaps not base
dictionaries) can manipulate objects not only according to their name, but
also according to their class/level.
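A tiny C++ sketch of such a class-aware dictionary: the key is the pair (name, class), so the same name resolves to different objects depending on the class asked for (names, classes and OID values here are all illustrative):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

using Key = std::pair<std::string, std::string>;   // (name, class)

// A dictionary that tells equally named objects apart by their class.
struct PolyDict {
    std::map<Key, int> entries;   // (name, class) -> OID
    void publish(const std::string& name, const std::string& cls, int oid) {
        entries[{name, cls}] = oid;
    }
    int lookup(const std::string& name, const std::string& cls) const {
        return entries.at({name, cls});
    }
};
```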
Information on an object
- To manage an object, you often need additional info about it, beyond just
its raw class and value; for example, you may need info on how to display/
print it, how to link it, whether the object was checkpointed to
disk, whether it supports compatibility with other similar modules,
whether the object is in a valid state, or whether you must wait for it to
initialize.
- The system scheduling may require the system to evaluate which is the
"best" of a set of operations (more generally, of two objects), given a
cost function on them.
- This is IMPLICIT information, which doesn't appear during computation,
but is nevertheless essential to the environment.
- All this may be implemented by EXPLICITly having an info list and a
pointer to the object. Again, implementation isn't to be discussed here.
High Level Programming
If you followed my reasoning, the essential point for
a HL programming language is allowing you to implicitly do anything that
is "obvious"; of course, to be sure you and the computer have the same
opinion about the obviousness of such an IMPLICIT statement, you must know
how it manages implicitness and/or ask it for its opinion. This may
be done interactively in a user-interactive object editor/compiler
(editing and compiling may or may not be grouped; the GUI can also be
used to graphically view/edit objects).
Let us consider a C++-like function:
struct zap zep (int zip, struct zop zup, struct zop & zyp)
{
  static int a ;
  int b ;
  if (c) /* c: global variable */
  ...
}
It will have a dictionary containing zap, zip, zop, zup, zyp, a, b, c,
where each of these objects will contain info about how the object may
be accessed from the function. Of course:
- the dictionary is virtual: it need not contain all this info in
memory, as long as it can answer questions from object clients;
- internal local variables need not be published (but the existence of a
static variable is important), nor need parameter names; types are more
important. And if no object has access to those objects' names or to
internal local variables, then these don't exist, so why talk about them ?
- if you don't provide the dictionary contents, you may have a pointer to
them, as an Internet ftp-able address for example, and/or emit an exception
explaining that you couldn't find the dictionary contents;
- this dictionary won't be extensible, as you'd have to recompile the
function to enlarge it; however, the zup & zyp structures may themselves
contain subobjects that enlarge zep's scope.
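As an illustration, zep's published dictionary might look like this C++ sketch (the names are those of the example above; the access categories are my assumption about what "how the object may be accessed" would record):

```cpp
#include <cassert>
#include <map>
#include <string>

enum class Access { Type, ReadOnly, ReadWrite, StaticLocal, Global };

// What zep might publish: note that the internal local b and the parameter
// names' roles are reduced to what other objects can actually observe.
std::map<std::string, Access> zep_dictionary() {
    return {
        {"zap", Access::Type},         // return type
        {"zop", Access::Type},         // parameter type
        {"zip", Access::ReadOnly},     // by-value parameter
        {"zup", Access::ReadOnly},     // by-value struct parameter
        {"zyp", Access::ReadWrite},    // by-reference parameter
        {"a",   Access::StaticLocal},  // its existence matters across calls
        {"c",   Access::Global},       // read by the function
    };
}
```

The internal local `b` is deliberately absent: as said above, if no object has access to it, it doesn't exist.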
Notes on second/third attempt on the Kernel:
> * Invoking a method
> This takes
> (1) An OID
> (2) The method "name"
> (3) The arguments
NO ! just use the method's OID and arguments ! The method is an object !
Let's address it directly !
> What is the type of the name?
> An integer is the obvious.
> An atomic integer as suggested by all of us. See def. 3.
This should be implementation- and/or coding-dependent.
Maintaining a global integer scope for the whole system would be
VERY difficult, so names being pointers to a structure or anything
else is just fine. Again, names should be accessed only through standard
name class methods, and thus be implementation-independent.
> When would the allocation of these be done?
Allocation is done as soon as possible, i.e.
> There are two suggestions; mine and Davids. Any more?
> The permissible type of the arguments?
> in a purely OO system we could just have OIDs and be
> done. In our system it is doubtful that representing
> (eg.) an array by an object would be practicle so we
> need to be able to pass large granularity data between
> This is where IPC comes in.
> [I'm using the definition that IPC involves data
> copying between address space]
No ! IPC is meant to allow different processes to use the same
objects, or to write objects that another one will read, etc.
If memory must be copied, well, it must. But the implementer
should be free to use the full hardware support to make IPC as
fast as possible (and in that case, we don't need to copy big
chunks of memory; on the i386, segment descriptors may be used
to point more easily at any array of memory). I also suggest
dynamically compiling programs that do heavy IPC (the scheduler
should be aware that in a particular case, it is better to
perform a local compile, or to let the system use slower but
immediate late binding).
> A comment on (very) small objects
> In a pure OO environment one could use an object to represent say an array
> (not to mention an integer)
> This is impracticle if we treat objects as processes since then a method
> invocation involves a context switch.
> This does not prohibit using small objects WITHIN a single larger object it's
> just that these small objects can't be exported to the rest of the system.
> I don't think that we can develop a system supporting small objects with
> reasonable efficiency in reasonable time. It involves too much research.
Of course we can; what is the problem here ? I really see no problem in
using arbitrarily big/small objects. All we need is to keep the whole
compiling info for the objects we want to use (unused debug info will be
compressed and/or deleted/not generated, according to the user's demand
and/or use). Of course, interfacing objects that weren't designed for
interfacing will be slower than interfacing objects designed for it; you may
locally recompile a program to change just a variable's access.
> Some Other Issues
> These are areas which bear further thought.
> I'll indicate my initial reactions to these issues.
> Rather then spend time filling out the details now (and spark a debate on them)
> I'd like to come to agreement on the basics first.
> (1) Memory allocation:
> What kernel calls, how and should we support higher level functionality.
> (Eg. GC etc.)
> We have two suggestions here. Either a C-like way of doing it or Fare's way.
> (Don't remeber it exactly). I think it was more support for Dennis way of
> doing it. (Hope I didn't jump upon anyones toes now:-)
That's the n-th time I repeat my opinion about memory allocation:
we'll have a layered organization of devices, providing graduated
functionality to user programs; it's up to the compiler (as always,
with the programmer's help) to use the proper device.
Memory allocation zone features may be:
- word/block granularity
- realloc possible or not
- garbage collection / voluntary alloc,free
Of course, a memory zone may be included in another one. For example,
independent programs may want to use independent memory zones, so that
a crash in one won't crash the other; so each will open its own
sub-space of a memory allocation zone (or more likely, the system will
do it automatically when a program asks for a memory zone).
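A minimal C++ sketch of such nested zones: a zone is a bump allocator over a byte range, and a sub-zone is just an allocation inside its parent, so exhausting one sub-zone cannot disturb its sibling (the interface is illustrative; real zones would also offer realloc, GC, and the other features listed):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class Zone {
    std::vector<std::byte> store;    // only the root zone owns storage
    std::byte* base;
    std::size_t size, used = 0;
public:
    explicit Zone(std::size_t n) : store(n), base(store.data()), size(n) {}
    Zone(std::byte* b, std::size_t n) : base(b), size(n) {}  // view on parent
    std::byte* alloc(std::size_t n) {
        if (used + n > size) return nullptr;   // this zone is exhausted
        std::byte* p = base + used;
        used += n;
        return p;
    }
    // Carve a nested zone out of this one: just an allocation in the parent.
    Zone subzone(std::size_t n) { return Zone(alloc(n), n); }
};
```

Running one sub-zone out of memory leaves its sibling untouched, which is the crash-isolation property wanted above.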
> (2) LOLOs:
> A Device or "LOw Level Object" (lolo) is an object which
> (1) has been granted the privilege of being able to refer to certain hardware
> resources. (Eg. disk control registers, ethernet card)
> (2) Has methods which are invoked upon the reciept of a particular interrupt.
> (Eg. data ready)
> [Note: Most lolo's will have (1) -- I think that few will have ONLY (2)]
I like it; but we must see that lolos are only the lowest layers of a
layered organization of devices. More generally, a lolo is an object that
has been trusted by the system to access hardware/software directly without
any further system control (but voluntarily passed tests). Then the most
basic task and memory managers will be lolos also.
From the system scheduling view, hardware will be a (rather protected)
object, so that allowing someone to access hardware should only be a
particularization of letting some object access another.
> How do we set an object up as a lolo, how do we give it access to a
> certain region of memory.
> (Implementation: we might need to have the access traped and then redone
> by the kernel after checking -- MMUs don't neccessarily support fine
> grain protection.
> (Eg. this object can read bytes 100-102 and write bytes 103-110 but
> can't touch bytes 0-99 and 111-500)
- First thing: if something is a lolo, it is powerful enough to crash the
whole system, so we must be able to trust it not to crash it; of course, it
may still use any system protection scheme.
- Now, a host-dependent key must identify objects that have lolo rights
on a particular machine, so that you can't just write a virus and tell
another one's system it's a lolo (but you can still destroy your own
machine, if you really like that 8-).
- As for allowing limited rights on an object, that's for the compiler
to implement (with the help of libraries). On the i386, you can use
segment/page faults and then manually verify rights. On any implementation,
you can use standard read/write functions, etc.
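Those standard read/write functions can be sketched in C++ against the byte-range example from the quote (read bytes 100-102, write bytes 103-110); the class and its error handling are illustrative, not a proposed interface:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Fine-grain rights enforced in software: every access goes through a
// checking read/write function instead of raw memory, since MMUs don't
// necessarily support byte-granularity protection.
class CheckedMemory {
    std::vector<std::uint8_t> bytes;
    std::size_t r_lo, r_hi, w_lo, w_hi;   // inclusive readable/writable ranges
public:
    CheckedMemory(std::size_t size,
                  std::size_t rlo, std::size_t rhi,
                  std::size_t wlo, std::size_t whi)
        : bytes(size), r_lo(rlo), r_hi(rhi), w_lo(wlo), w_hi(whi) {}
    std::uint8_t read(std::size_t i) const {
        if (i < r_lo || i > r_hi) throw std::runtime_error("read denied");
        return bytes[i];
    }
    void write(std::size_t i, std::uint8_t v) {
        if (i < w_lo || i > w_hi) throw std::runtime_error("write denied");
        bytes[i] = v;
    }
};
```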
> (3) Virtual Memory and Address Space
> I think we can more or less agree that seperate objects should not
> be able to read/write each others address space for robustness
> Should we have an address space that is global or local to each object.
> (Note: This IS an independant issue)
> agreed upon local (well - noone else said anything else :-)
> How should VM be done? I think paging is the simplest way.
> User input to the process if any
> Existance of PHYS_MEM and it's management
That's implementation-dependent, and should not be part of the OS specs
(but we can still talk about it in the implementation discussion).
(I suggest again the use of +- modifiers in the subject line, depending on
whether a message talks about specs or implementation.)
> Agreed upon paging.
That's only for the i386, and it won't suffice. A protection scheme
should be included at the OID level.
[Protection suggestion: OIDs are either completely safe (like segment
descriptors on the i386), or not directly usable (you must ask the
system to link you with a local ID -> OID correspondence). Whatever the
case, if you can modify an OID variable, then some right checking must be
done.]
> Details havn't been discussed yet, but as far as we have come with the
> implementation I belive we could use the MEM_PHYS until we feel we need
Swapping may at the same time be a caching device, so it's not
that uninteresting (of course, it's not the first thing to implement).
> (4) Semantics of persistance:
> One way to do persistance is to simply rely on paging -- assume all
> objects are in a very large memory and let paging load them in as
Yes; we can even assume that for a small system implementation (like the
PC386 version), there will be a unique virtual addressable space of at most
4GB. To use more than 4GB of disk, there should be a distributed system
and/or special device controllers.
> As Dennis has pointed out this is inefficient.
No. What he pointed out is that we couldn't page-align every object
(all the more so if everything is an object). We may still have this kind
of paging, but we may adapt it so that at any moment when the disk
isn't being written, the disk system file contains some valid data, so
that persistent objects are preserved if the computer is powered down.
> Another way is swapping -- a process can only be either completely
> swapped out or completely in memory.
That's stupid when you can do paging, except if each process is small.
But we want BOTH tiny and huge objects. Moreover, as there is always a
bigger object to contain a given object, there will always be processes
too big to fit entirely in memory.
> Disadvantage: Lose the ability to save memory by having part of a
> process in memory
- VERY slow for big processes.
> Advantages: (1) Can be done without an MMU
If we don't have an MMU, we'll have to use an interpreter to
enforce security, except for lolos, so there's no problem here.
> (2) By letting the user do this manually we allow coarse grain
> manual resource allocation.
> (Sort of like doing a "copy foo TO ramdisk" b4 starting
> and having an automatic save files to hard disk from ramdisk
> when about to shutdown)
That's called caching. What we can have is a temporary object for that.
We can also parametrize the system to accept partial/global automatic/
manual checkpoints. However, a good persistent system may have extra power
available for saving critical data before shutdown.
> We havn't come to an agreement here, since there hasn't been any discussion.
> So let me suggest this. This is not a priority issue. We just use the
> simplest available scheme, and then we can step up to use a more advanced
> scheme when needed.
> (5) Semantics of invocation
> MOOSE is multitasking.
> What happens in the following situation:
> Object X Object Y Object Z
> Starts executing method 1
> The three obvious possibilities are
> (a) Z starts another method in parallel
> (b) Y is blocked until Z finishes
> (c) Y continues and the invocation is queued.
> (a) Gives us multithreading and requires us to have semaphores to co-ordinate
> access to shared data.
> (b) Is the simplest to implement but is less versatile
> (c) Makes invocation asynchronous -- this adds paralellism to the system.
Notice that under (b) one could have the invoke call return a result.
Under (c), though, the invoke would not return anything (other than
"this was successfully queued" OR "this failed") and the receiver would use
its own invoke to return an answer.
> Of course we could (and probly will end up doing) provide more than one.
And we will provide all three of them; the only thing is that the system may
generate a task-switch timer interrupt and/or see that the launching
process is running outside its normal time slot (example: an event-driven
process that interrupted normal execution) and thus decide to first
execute the next process (the launching thread will be given back execution
when its normal turn comes around).
A more generic invocation is telling the system: here is a list of
computations to execute in parallel; do them all (with this priority
order/time-weight distribution), then give me back execution (or then
see the "multitasking" paragraph above).
> (6) Inheritance
> I've left this out for now.
> One question I would like to raise is WHICH INHERITANCE?
> There are multiple models -- C++, Smalltalk.
> Oh, and this is without considering multiple inheritance.
Inheritance as an implicitly opened (Pascal-"with"ed) member
is no problem. The inherited aspect of an object will only be a
sub-object among others. Only its high-level syntax is different,
with possibly a low-level naming convention (but I think putting
suitable info in the class descriptor may be enough).
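As a toy illustration of "inheritance as an implicitly opened member" (a Python sketch with invented names, assuming the inherited aspect really is just a sub-object that name lookup falls through to):

```python
class Base:
    """The inherited aspect: just an ordinary class."""
    def greet(self):
        return "hello from Base"

class Derived:
    """No language-level inheritance here: the Base part is merely a
    sub-object among other members, implicitly 'opened' for name
    lookup (like a permanent Pascal 'with')."""
    def __init__(self):
        self._base = Base()   # the inherited sub-object
        self.extra = 42       # an ordinary sibling member

    def __getattr__(self, name):
        # names not found locally fall through to the sub-object
        return getattr(self._base, name)
```

The point is that only the lookup rule (the high-level syntax) distinguishes the inherited sub-object from any other member.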
> Could someone outline the Smalltalk way, Ada way and the Simula way. I'm not
> familiar with anything other than C++, and I haven't studied Ada for a couple of years.
> (7) Polymorphism and Overloading
> So far I've only come across these as language design issues.
> You have a "natural" polymorphism in some commands
> (Eg. mv,rm,cp) when the contents of a file don't matter.
> Could someone expound on their ideas -- how do YOU view the role
> and function of polymorphism in an OS.
In an OS as everywhere, what we want is ease of use and the least
redundancy in code & data. This leads to genericity (that is, you produce
code that is valid for the most generic data types/values possible, and
once it's written, you never have to rewrite it for a particular data type;
the system/compiler will specialize the code, somehow "understanding"
what you told it as much as it can), and to polymorphism, which is
context-dependent reuse of the same name (sometimes to stress the
similarity between two functions that couldn't be derived from one generic
function, and/or sometimes to use implicit parameters).
Note on servers
> Consider now the form of such a server (which incidently bears a striking
> resemblence to the nameservers (or dictionaries)):
> while (accept(request))
> case request of
> type1 : ... handle 1 ...
> type2 : ... handle 2 ...
> This type of code will be very common in the system.
> It allready is in event based systems.
Well, yes and no. If some code really appears often, it will be included in
modules which implement it efficiently and parametrizably. In particular, I
think the way a C compiler would implement it is bad: the while(accept())
loop should be implemented by adding a request-managing function to a
request-catching handler list, or by putting the object to sleep (if
requesters already know of the object).
> Why then not have a process declare to the kernel that
> "I handle the following request:
> req1 taking 4 parameters
> req2 taking 3 parameters
That's more like it, but it certainly won't be the kernel (rather a
standard module in the base system). The compiler will do that; which
means it will implement it using a library (module) of more or less
compiled routines, and will choose the 'best' implementation itself
(or you can tell it your preferences/orders if you wish; but a good
compiler/profiler couple should do it better than you).
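A hypothetical sketch of such a request-declaring module in Python (all names invented; the real thing would be a standard module in the base system, not the kernel): the object declares each request with its arity, and a router dispatches incoming requests with no hand-written while/case loop.

```python
class RequestRouter:
    """Toy stand-in for the standard module objects declare requests to."""
    def __init__(self):
        self.handlers = {}

    def declare(self, name, arity, fn):
        """'I handle the following request: name, taking arity parameters.'"""
        self.handlers[name] = (arity, fn)

    def dispatch(self, name, *args):
        """Route one incoming request to its declared handler."""
        arity, fn = self.handlers[name]
        if len(args) != arity:
            raise TypeError(f"{name} takes {arity} parameters")
        return fn(*args)
```

With this shape, the "server" is just a table of declarations; the routing loop lives once, in the shared module.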
The system compiler should be accessible through standard means, and be
able to compile pre-compiled LLL code as well as HLL code, or any standard
intermediate representation of code between direct user-interfaced code
and directly CPU-executed code. An LLL standard will be very efficient for
both quick compiling & quick code communication. If you keep it rich,
you have a good input for back-end compiling; if you compress it, you
may have efficient code for interpreting.
Actually, the compiler does not compile code; it compiles objects,
simplifying them for optimal speed and/or memory occupation, or any
positive cost function. A common application is simplifying a chain of
functions: if you just run it naively, much time may be lost in each
function calling the next one. Also, checking functions may have been
piped in for security, which become redundant and/or useless once you
consider class info, input restrictions, and inter-object relations.
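A toy illustration of that chain-simplification idea in Python (everything here is invented for illustration): naive chaining pays one call per stage, while a trivial "object compiler" pass drops adjacent redundant checking functions before the chain is built.

```python
def compose(*fns):
    """Naive chain: each stage pays a full call into the next one."""
    def chained(x):
        for f in fns:
            x = f(x)
        return x
    return chained

def simplify(fns):
    """A toy 'compiler' pass: adjacent occurrences of the very same
    function are redundant and dropped. This is only valid when that
    function is an idempotent check, which is exactly the class-info
    knowledge a real compiler would exploit."""
    out = []
    for f in fns:
        if not (out and out[-1] is f):
            out.append(f)
    return out
```

A real compiler would go much further (inlining the whole chain, using input restrictions), but the shape is the same: a cost-reducing rewrite of the object, not of textual code.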
To help the compiling structure, dictionaries will actually be compiler
symbol tables, so that you can co-compile with an object from its full
dictionary, and/or you can extract the everyday info of a dictionary from
the full symbol table.
Summary of OO managing organisation in the system
- we first have the Kernel's inmost part (I think Peter called it
Nucleus) which only manages low-level ROI and OIDs; it should be optimized
- Then, we have a low-level security device, which only checks access
rights, through segmentation/pagination/whatever the hardware offers.
If the hardware offers no protection facility, non-lolo programs will
have to systematically use:
- an LLL interpreter.
- a typing/classing convention will be provided, as in an ML system.
- a type manager which will provide security through default type checking.
- a HL type manager understanding genericity, polymorphism, overloading.
- a scheduling type manager which chooses best object sequences from
an object sequence generator, and a cost function.
- an implicit type casting manager, using the scheduler to determine
the fastest path to transform a given type into another.
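That implicit-casting manager can be sketched as a shortest-path search over a graph of converters (a hypothetical Python model; the function name, cost table, and converters are all invented):

```python
import heapq

def cheapest_cast(conversions, src, dst):
    """Find the cheapest chain of converters turning type src into dst.
    `conversions` maps (from_type, to_type) -> (cost, converter)."""
    graph = {}
    for (a, b), (cost, fn) in conversions.items():
        graph.setdefault(a, []).append((cost, b, fn))
    heap = [(0, src, [])]          # (total cost, current type, converter chain)
    seen = set()
    while heap:
        cost, t, path = heapq.heappop(heap)
        if t == dst:
            return path            # cheapest chain found (Dijkstra)
        if t in seen:
            continue
        seen.add(t)
        for step_cost, nxt, fn in graph.get(t, []):
            heapq.heappush(heap, (cost + step_cost, nxt, path + [fn]))
    raise TypeError(f"no cast from {src} to {dst}")
```

Here the "scheduler choosing best object sequences from a generator and a cost function" is exactly the priority queue: cheaper cast chains are explored first.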
Summary of base system objects
- one basic physical class implementation (with DDVMT or such), that is,
a standard basic common ROI.
Basic Classes (low-level standards, implemented through direct ROI)
- dictionary class.
- class class.
- generic ROI class.
- computation class.
- event/exception class.
- an implicitness/explicitness protocol.
Modules (medium-level standards, implemented through basic classes)
- common UI functions.
- extensible object compiler
- other ROI implementation.
- other classes
HL Modules (high-level standards; given as source files; may be already
Fare, aka Fare, son of his mother and father, master of himself (sometimes),
who hasn't roleplayed for a long time ;-(, and never could play as regularly
as I'd have liked ;-(.
P.S.: I'm very slow at writing up such a paper (not to mention doing it
in English!): I've been typing it for hours! (whereas a quick draft on
paper was done in some minutes). That's why
1) I didn't have time to talk about the language itself
2) I'll ask you to allow me to write in telegraphic style next time,
as long as we're having a discussion and not writing a definitive paper.