KER,GEN+ [far24]

Michael David WINIKOFF
Fri, 2 Apr 93 10:02:02 EST

> functionalities simply from a language (the so-called coordination language,
> as it appeared in all the OS projects that arose in the discussion). I do not
> consider calling libraries with grotesque structures (pointed, referenced,
> or copied) a simple system call. A system call must be simple, easy, and
> don't change its syntax when you change implementation ! (that's also what a
> low-level language such as C++ cannot do).
>  This is originally Michael's mail, and Andreas tried to summarize what we
> agree on; I added my own stuff as a reply; disagreements are marked ***;
> additions are marked +++. Now notice this: if we are going to get something
> done in a reasonable amount of time we must start to agree on things, as
> David mentioned as a postlude in his mail.

Unfortunately it is no longer clear from the document who said what ...

> I'm rather starting from the other end: the user/programmer; how he
> (actually, that's more like I) conceptualizes the system; what he expects it
> to do, how he'd like to modify its behavior.

I feel this is less practical -- there are things that users want that are
still research issues as regards efficient implementation while maintaining
security and robustness.

> +++
>  The underlying idea is that there's a Yin-Yang duality between processes and
> objects (one always comes with the other, and even though different, they can't be

Nope -- I'm DEFINING one in terms of the other.
Actually, objects as used in OO languages such as Smalltalk are distinct
from processes in that
(1) They don't have an independent thread of control.
(2) They are of much smaller granularity.
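The two distinctions can be sketched in C; the Account class and its method below are purely illustrative, not part of any MOOSE interface:

```c
/* A Smalltalk-style object is passive: state plus methods, with no
 * thread of control of its own.  It runs only when a caller invokes
 * one of its methods. */
typedef struct {
    int balance;
} Account;

int account_deposit(Account *a, int amount) {
    a->balance += amount;      /* executes on the CALLER's thread */
    return a->balance;
}

/* A process, by contrast, owns an independent locus of control:
 * conceptually an endless receive/dispatch loop that decides for
 * itself what to do next.  The object above has no such loop, and
 * its granularity (a few bytes) is far smaller than a whole
 * address space. */
```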

> separated as each generates the other): object evaluation is done through
> processing; processing management is done through object. This is true for
> virtual as well as physical objects/processes, and at any abstraction level:
> at lowest level, CPU instructions are binary coded / binary data is modified
> through CPU instructions ; at highest level virtual object data may be
> accessed through functions (a "zap" field can be accessed through virtual
> read/write functions, so that zap := f(zap,...) will invoke zap.read and
> zap.write), and virtual functions can be implemented as just data (i.e. a
> function on a very limited number of elements can be coded as just an array
> containing its values)

I get the feeling that you're defining an object in terms of itself without
giving a base case to break out of the recursion.

> between programming and common use -- if you ever used an HP28/48, you know
> what I mean. This implies there is no boundary either between compiling and
> interpreting; you just evaluate or simplify objects.

Now you're confusing the conceptual language model and its implementation.
From the point of view of giving semantics to the language OF COURSE it doesn't
matter whether the language is interpreted or compiled.

> ***
>  In Mar 26 message Re: Kernel 0.11 [arf1], Michael says (hey, Michael, do
> number your message !):
> > What I meant was that a MOOSE object is defined to be a process.
> > A small object that isn't a process (Eg. is internal to a
> > process/large-object) isn't viewed as an object by the system.
>  If by process, you mean a system object (assuming that by duality, objects
> ever end up into processing), of course only processes are system objects !
>  BUT, if you mean that the system's base object will have to save the CPU/FPU
> state, plus system data, as unix tasks, then of course not ! If that's what
> you want, just write a server under unix to relaunch tasks with checkpointed
> data, and add it in the superuser's profile (I'm sure that's not what you
> want).

I *DO* mean a conventional process.

Regarding writing a server -- what do you implement the server on top of?

>  Integers are objects; the whole system is an object; even code is an object
> class; classes are objects; anything is an object.
>  NOW, every object doesn't have to be accessible from every other. That's
> why, for example, every single temporary integer won't have to be directly
> interfaced to the user. However, if you possess debug info, you can access
> any single integer in the system (apart possibly from some temporary variables
> in parts of the nucleus which need to forbid interrupts).

NO NO No !!!!!!

As I was arguing, tiny-granularity objects are not practical while maintaining
protection, mostly because there is an overhead to a method call as compared
to a direct access.

Things like integers or arrays can be objects in the language you're using, but
the OS doesn't know or care about them -- they are internal to the application.

>  So let the inmost kernel manage only OIDs: that's all we need; then,
> basic devices (see further lolos) can manage everything else. All
> we need is OIDs (BTW can someone find how we can make "LOVE" an acronym
> for OIDs ? :-)

You'd have problems finding a relevant word containing "V" ... :-)

> meaningful at the same time). We need to know more general restrictions:
> - what object is being accessed, written, read, or referenced,
> - what algebraic/functional equations are verified by the system objects,
> - etc, etc.
>  That may allow checking that is both stricter and freer for the programmer
> than just types.
>  The Kernel need only know about the implementation of a base class class,
> say a DVT-driven-class-class class, then it will be able to use virtually
> any class, then any object, provided any object has a class that indirectly
> can finally be expressed by evaluating a basic class instance.
>  What you may want to ask the class is:
> - get the list of available members (methods/data).
> - get an object's dictionary (isn't it more or less the same ?).
> - get the info about restrictions applying on previous objects.
> - creating a new instance/a list of new instance of the class.

Can you convince me that this can be efficiently implemented?
Currently I don't see that it can.
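The four queries above can be pictured as a class-descriptor record; the efficiency question in the reply is whether every access must be routed through such tables instead of compiled-in offsets. A minimal sketch, with every name hypothetical:

```c
#include <stdlib.h>

/* Hedged sketch of a "class class" record answering the four
 * queries listed in the quote.  Not an agreed MOOSE interface. */
typedef struct Class Class;

struct Class {
    const char  *name;
    const char **members;       /* queries 1/2: available methods/data */
    int          member_count;
    const char  *restrictions;  /* query 3: constraints, as text       */
    size_t       instance_size; /* query 4: needed to instantiate      */
    void       (*init)(void *); /* optional constructor hook           */
};

/* Query: list of available members. */
const char **class_members(const Class *c, int *n) {
    *n = c->member_count;
    return c->members;
}

/* Query: create a new instance of the class. */
void *class_new(const Class *c) {
    void *obj = calloc(1, c->instance_size);
    if (obj && c->init)
        c->init(obj);
    return obj;
}
```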

>  Inheritance & Polymorphism
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~
>  +++
>  To me, inheritance is only saying "one aspect of that object is that it
> fits another object's requirement, thus particularizing it"; so it's easier
> to name the inherited aspect as a subobject of the main object. So for
> example, an aspect of a circle is its being an ellipse; this is purely
> virtual, and does not mean the physical implementation of a circle will
> effectively include, say, the length of both axes.
>  Polymorphism is also done by using the same name for different objects.
> Then, you can differentiate equally named objects by looking at their
> different classes. This means a dictionary (although perhaps not base
> dictionaries) can manipulate objects not only according to their name, but
> also according to their class/level.

I'm more interested in the mechanism you're using to express polymorphism
at the OS level than in what it is.
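The circle/ellipse point in the quote can be made concrete: the stored representation holds only a radius, while the ellipse "aspect" is computed on demand. A sketch with hypothetical names:

```c
/* The circle's physical implementation stores only a radius. */
typedef struct { double radius; } Circle;

/* Ellipse aspect: both axes are derived, not stored, so viewing the
 * circle as an ellipse adds nothing to its physical layout. */
double circle_major_axis(const Circle *c) { return 2.0 * c->radius; }
double circle_minor_axis(const Circle *c) { return 2.0 * c->radius; }
```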

> Notes on second/third attempt on the Kernel:
> >             What is the type of the name?
> >             An integer is the obvious.
> > An atomic integer as suggested by all of us. See def. 3.
>  This should be implementation and/or coding dependent.
>  Maintaining a global integer scope for the whole system would be
> VERY difficult, so names being pointers to a structure or anything
> is just fine. Again, names should be accessed only through standard
> name class methods, and thus be implementation independent.
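The point about implementation-independent names can be sketched as an opaque handle behind standard name-class methods; callers never learn whether a name is an integer, a pointer to a structure, or anything else. All names below are hypothetical:

```c
#include <string.h>

typedef struct Name Name;   /* opaque to clients */

struct Name {               /* one PRIVATE representation; could   */
    const char *text;       /* equally be a bare atomic integer    */
    unsigned    hash;       /* without any caller noticing         */
};

/* Standard name-class methods: the only interface clients may use. */
unsigned name_hash(const Name *n) { return n->hash; }

int name_equal(const Name *a, const Name *b) {
    return a->hash == b->hash && strcmp(a->text, b->text) == 0;
}
```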


> >             This is where IPC comes in.
> >             [I'm using the definition that IPC involves data
> >             copying between address space]
>  No ! IPC is meant to allow different processes using the same
> objects, or writing objects that another one will read, etc.
> If memory must be copied, well, it must. But the implementor
> should be free to use the full hardware support to make IPC as
> fast as possible (and in this case, we don't need to copy big
> chunks of memory; on the i386, segment descriptors may be used
> to point more easily at any array of memory). I also suggest
> dynamically compiling programs that do heavy IPC (the scheduler
> should be aware that on a particular case, it is better to
> perform a local compile, or to let the system use slower but
> immediate late binding).

I take it you disagree with my definition of IPC rather than with what I'm
saying.
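The quoted point about avoiding copies can be pictured as the difference between moving the bytes and handing over a descriptor to the same memory (as i386 segment descriptors would allow). A conceptual sketch only, not a kernel interface:

```c
#include <string.h>

typedef struct {
    char  *base;    /* stands in for a hardware segment descriptor */
    size_t len;
} Descriptor;

/* Copying IPC: cost grows with the message size. */
void ipc_copy(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);
}

/* Descriptor IPC: no bytes move; the receiver gets a reference to
 * the sender's memory, so the cost is independent of the size. */
Descriptor ipc_share(char *base, size_t len) {
    Descriptor d = { base, len };
    return d;
}
```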

>  Of course we can, what is the problem here ? I really see no problem at
> using arbitrarily big/small objects. All we need is keeping the whole
> compiling info for the object we want to use. (unused debug info will be
> compressed and/or deleted/not generated, according to user's demand and/or
> use). Of course, interfacing objects that weren't designed for interfacing
> will be slower than interfacing objects designed for it; you may locally
> recompile a program to change only a variable's access.

There is a significant overhead.
Apart from the extra information that the system has to keep around to
manage an object, we have the problem that accessing an object is done
by method call, whereas accessing a variable can be done by direct access.
Direct access is much faster than a method call.
It really boils down to how much work is done by a method call.
I'd guess that if we limited the functionality of method calls (e.g. no virtual
functions or inheritance -- which imply a method search) then we could
bring the overhead down to a factor of 4 or so using hand-tuned code.
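The restricted-method-call idea can be sketched as follows: direct access compiles to a single load, while even a non-virtual call through a known slot pays for an indirect call and return. Names are illustrative, and the factor-of-4 figure above is only an estimate:

```c
typedef struct Obj Obj;

struct Obj {
    int value;
    int (*get)(const Obj *);  /* restricted method: one fixed slot,
                                 no virtual dispatch, no inheritance
                                 search at call time */
};

int obj_get(const Obj *o) { return o->value; }

int read_direct(const Obj *o) { return o->value; }  /* one load          */
int read_method(const Obj *o) { return o->get(o); } /* call + load + ret */
```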

> - first thing: if something is a lolo, it is powerful enough to crash the
> whole system, so we can trust it not to crash it; of course, it may still
> use any system protection scheme.


> >     As Dennis has pointed out this is inefficient.
> No. What he pointed out is that we couldn't page-align every object
> (all the more if everything is an object). We may still have this kind
> of paging, but we may adapt it so that at any moment when the disk
> isn't being written, the disk system file contains some valid data, so
> that persistent objects are preserved if the computer is powered down.

NO. It IS inefficient due to the overhead of method invocations.

I would just like to point out that this discussion about doing without an 
MMU was a hypothetical aside.

> >     Disadvantage: Lose the ability to save memory by having part of a
> >         process in memory
>           - VERY slow for big processes.
> >     Advantages: (1) Can be done without an MMU
>           if we don't have MMU, we'll have to use an interpreter to
>           enforce security, but for lolos, so there's no problem here; only
>           interpreter design.

Nope - too slow.
Machines without MMUs tend to be old and slow anyhow ...

> Note on servers
> ~~~~~~~~~~~~~~~
> > Consider now the form of such a server (which incidently bears a striking
> > resemblence to the nameservers (or dictionaries)):
> >
> >     while (accept(request))
> >         case request of
> >             type1 : ... handle 1 ...
> >             type2 : ... handle 2 ...
> >
> > This type of code will be very common in the system.
> > It allready is in event based systems.
> Well, yes and no. If some code really appears often, it will be included in
> modules which implement it efficiently and parametrizably. In particular, I
> think the way a C compiler would implement it is bad; the while(accept())
> function should be implemented by adding a request managing function to a
> request catching handler list, or by putting the object to sleep (if
> requesters already know of the object).
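The alternative suggested in the quote, adding a request-managing function to a handler list instead of spinning in while(accept(...)), might look like this (all names illustrative):

```c
typedef void (*Handler)(int request);

enum { MAX_TYPES = 8 };
static Handler handlers[MAX_TYPES];   /* the handler list */
static int last_handled;              /* demo state for the sketch */

/* Registering replaces the server's loop: the server "sleeps" as a
 * table entry until a matching request arrives. */
void register_handler(int type, Handler h) {
    if (type >= 0 && type < MAX_TYPES)
        handlers[type] = h;
}

/* The dispatcher replaces the per-server case statement. */
void dispatch(int type, int request) {
    if (type >= 0 && type < MAX_TYPES && handlers[type])
        handlers[type](request);
}

/* Example handler, standing in for "... handle 1 ..." above. */
static void log_handler(int request) { last_handled = request; }
```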


> ------------------------------------------------------------------------------
>  Summary of OO managing organisation in the system
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> - we first have the Kernel's inmost part (I think Peter called it
> Nucleus) which only manages low-level ROI and OIDs; it should be optimized
> for speed.
> - Then, we have a low-level security device, which only checks access
> rights, through segmentation/paging/whatever the hardware offers.
> If the hardware offers no protection facility, non-lolo programs will
> have to systematically use:
> - a LLL interpreter.
> - a typing/classing convention will be provided as a ML system.
> - a type manager which will provide security through default type checking.
> - a HL type manager understanding genericity, polymorphism, overloading.
> - a scheduling type manager which chooses best object sequences from
> an object sequence generator, and a cost function.
> - an implicit type casting manager, using the scheduler to determine
> the fastest path to transform a given type into another.

Too complex.

>  Summary of base system objects
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> - one basic physical class implementation (with DDVMT or such), that is,
> a standard basic common ROI.
> Basic Classes (low-level standards, implemented through direct ROI)
> - dictionary class.
> - class class.
> - generic ROI class.
> - computation class.
> - event/exception class.
> - an implicitness/explicitness protocol.
> Modules (medium-level standards, implemented through basic classes)
> - common UI functions.
> - extensible object compiler
> - other ROI implementation.
> - other classes
> HL Modules (high-level standards; given as source files; may be already
> partly/fully compiled)
> - anything.
> -----------------------------------------------------------------------------
>    ,         ,
> Fare, aka Fare, son of his mother and father, master of himself (sometimes),
> who hasn't roleplayed for a long time ;-(, and never could play as regularly
> as I'd have liked ;-(.
> P.S.: I'm very slow at writing up such a paper (not to mention writing
> it in English !): I've been typing it for hours ! (whereas a
> quick draft on paper was done in some minutes). That's why
> 1) I didn't have time to talk about the language itself
> 2) I'll ask you to allow me to write in telegraphic style, next time,
> and as long as we're doing a discussion and not a definitive paper.

Sorry if I seem rushed - I am.
Gotta run.
I'll be replying to other mail later today.

Michael Winikoff
Computer science honours. University of Melbourne, Australia.