HLL Process Model

Francois-Rene Rideau rideau@clipper.ens.fr
Sat, 11 Mar 95 2:05:42 MET


Chris said:
> How, with so few processes and so many processors, can TUNES possibly
> near the 90% utilization?  My suggestion is to dump the big, ugly,
> linear process model in favor of something more flexible and scalable.
> 
> In real life, fine-grained objects don't require big, fat processes
> to do their operations, so why should they in a computing environment?
   Well, that's exactly what I have been saying from the beginning.
Perhaps I wasn't clear enough in the WWW pages. Anyway, I couldn't agree more.


> In my view of things, code should have some way of being represented, so
> that it is easy to distinguish code that must be executed linearly from
> that which can be executed in parallel.  (No judgements at this level
> would be made about what SHOULD be done in parallel.)
>[...]
   Hey, this is just what OCCAM does (Jecel already talked about it):
it has PAR and SEQ statements to say whether the next instruction block is
made of instructions to be executed in PARallel or SEQuentially (which is
the standard way of saying it, btw, not LINEAR programming, which generally
means something else: not sharing any variable, hence trivial garbage
collection; but that may be quite a restriction).
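   As a rough sketch of what such a representation might look like (I'll
use Haskell as a notation of convenience throughout, nothing to do with
the actual HLL syntax; all the names below are made up), code could carry
its own PAR/SEQ structure explicitly, with no judgement yet about what
*should* actually run in parallel:

  -- A hypothetical representation distinguishing blocks that must run
  -- sequentially from blocks that merely *may* run in parallel.
  data Block instr
    = Prim instr           -- a single low-level instruction
    | Seq  [Block instr]   -- sub-blocks that must execute in order
    | Par  [Block instr]   -- sub-blocks with no ordering constraint
    deriving Show

  -- Example: two independent computations, each internally sequential.
  example :: Block String
  example = Par
    [ Seq [Prim "load a", Prim "inc a"]
    , Seq [Prim "load b", Prim "dec b"]
    ]

No scheduling decision is encoded here; a PAR block only grants the system
the *right* to distribute its sub-blocks.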


> TUNES, then, would constantly keep an eye out for how much of the CPU
> is currently being used.  This could be done by AI techniques, genetic
> algorithms, whatever.
   Yep. If we separate mechanism modules from decision modules (again,
the run-time optimizer should inline everything it can), we can explore
any kind of distribution policy! Once we have found the best one, we can
make it the built-in default...
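   To make that mechanism/decision split concrete, here is a hedged sketch
(made-up names, arbitrary thresholds): the mechanism exposes what *can* be
done, and a pluggable policy value decides what *should* be done, so we can
swap policies at run time while exploring for the best one:

  type Load = Double   -- measured CPU utilization, 0.0 .. 1.0

  data Decision = SplitSome | MergeSome | LeaveAlone
    deriving Show

  -- A distribution policy is just a value we can replace at run time.
  newtype Policy = Policy { decide :: Load -> Decision }

  -- One candidate policy among many; the thresholds are placeholders.
  thresholdDecide :: Load -> Decision
  thresholdDecide load
    | load < 0.5 = SplitSome    -- under-utilized: expose more parallelism
    | load > 0.9 = MergeSome    -- over-utilized: cut administration costs
    | otherwise  = LeaveAlone

  thresholdPolicy :: Policy
  thresholdPolicy = Policy thresholdDecide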


> How does the OS tinker with a process?  Well, if utilization is too
> low, it will break off one (or a few) of the sub-blocks in a process,
> and spawn them as separate processes.  If the machine is being
> over-utilized, it will combine existing processes together.  (You
> could just leave them separate, but maybe that'd have more context
> switch overhead.  I dunno....)  Certain processes should also be able
> to specify attributes that they have, so that they might take
> advantage of any special hardware available.  For example, if there is
> hardware available for the speedy execution of cellular automata
> programs, there is no reason the process, without much of its own
> knowledge, shouldn't be able to take advantage of it.
> 
> If we support changing of a process' code on the fly, we can then view
> the OS as one huge process, composed of many smaller things....
   That's exactly what I propose we do. We're 100% in phase.
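   Here is one purely illustrative way such splitting and merging could be
expressed (the names and the 0.5/0.9 thresholds are placeholders, not a
committed design): the system keeps a set of schedulable units and grows
or shrinks it according to measured utilization.

  -- A schedulable unit, possibly containing sub-units of its own.
  data Unit = Unit { unitName :: String, subUnits :: [Unit] }

  -- Split: promote one sub-unit of some unit to a top-level unit.
  splitOne :: [Unit] -> [Unit]
  splitOne (Unit n (s:ss) : rest) = Unit n ss : s : rest
  splitOne (u : rest)             = u : splitOne rest
  splitOne []                     = []

  -- Merge: fold one top-level unit back in as a sub-unit of its neighbour.
  mergeOne :: [Unit] -> [Unit]
  mergeOne (a : b : rest) = a { subUnits = b : subUnits a } : rest
  mergeOne us             = us

  -- The system applies one or the other depending on utilization.
  adapt :: Double -> [Unit] -> [Unit]
  adapt load us
    | load < 0.5 = splitOne us
    | load > 0.9 = mergeOne us
    | otherwise  = us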


> The main conceptual problem I have with this model is how to apply
> quotas and such, when processes are constantly growing, shrinking and
> migrating.  I guess how many processes a user is running now could be
> a factor in determining which sub-process to spawn as a separate process.
   This is *the big* problem: the smaller the modules, the more efficient
the potential adaptation, but also the larger the administration overhead,
and the trickier it is to handle the whole system efficiently.
   However, difficulty should not frighten us, as it frightened the
proponents of other systems, and even if we start by implementing simpler
scheduling systems, we should keep in mind that more complex ones should
eventually replace them.
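   One hedged idea for the quota problem: charge resources to the owning
user rather than to any particular process, so the accounting survives
splitting, merging and migration. A toy sketch (all names assumed):

  import qualified Data.Map.Strict as M

  -- Each unit knows which user it runs on behalf of, and what it costs.
  data Unit = Unit { owner :: String, cost :: Int }

  -- Per-user usage is a sum over units, however they are currently split.
  usage :: [Unit] -> M.Map String Int
  usage = foldr (\u -> M.insertWith (+) (owner u) (cost u)) M.empty

  -- A split decision can then favour the users who are under their quota.
  underQuota :: M.Map String Int -> Int -> String -> Bool
  underQuota used quota user = M.findWithDefault 0 user used < quota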


> The other alternative I have to this is the cell model I proposed.
> (Remember that these are NOT the same cells that Mike proposed.) The
> basics of what I said:
>   -Basic unit of processing is the cell.
   Ok. That's what I have been calling an object, or a module, but "cell"
is shorter.

>   -Cells are composed of an internal group of sub-cells.
   Let's say "may", which only says that objects may be organized arbitrarily.

>   -Cells are connected via one-directional channels.
   Yup. These are typically called the object's member functions, or methods.
Sending data into the channel is the same as calling the method, sending
a message, etc.
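   To make that identification concrete, here is a tiny illustrative
sketch in which a one-directional channel into a cell *is* a method, so
sending into it and calling it are literally the same operation:

  -- A one-directional channel into a cell: all you can do is send into it.
  newtype Channel msg = Channel { send :: msg -> IO () }

  -- A tiny "printer" cell exposing one input channel, i.e. one method.
  printerCell :: Channel String
  printerCell = Channel putStrLn

  -- The two phrasings denote the very same operation:
  callTheMethod, sendAMessage :: IO ()
  callTheMethod = send printerCell "hello"  -- "call the method"
  sendAMessage  = send printerCell "hello"  -- "send a message into the channel"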

>   -There are a few primitive cells, that allow for the execution of
>    low-level code, such as math.
   I'd say that there are standard cells; which ones actually map to
low-level code is implementation-dependent. In our actual implementation,
we'll first strive to achieve portability, thus restricting ourselves to a
small number of portable low-level operations on top of which everything is
built (that's what the eForth model is to FORTH). But future implementations
may include lots of "primitive" low-level objects.
Integrating optimizing compiler technology into the dynamic system's partial
evaluator will be the next point I want to develop after version 1.0 is out.
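   Purely for illustration, the eForth-style approach might look like the
following: a small fixed set of portable primitives (the particular set
below is invented, not a proposal), with every other cell defined as a
composition of them, so that later implementations remain free to make
more things "primitive":

  -- A deliberately tiny set of portable primitive cells.
  data Prim = Add | Sub | Mul | Dup | Drop
    deriving (Show, Eq)

  -- Every other cell is a composition of primitives and other cells.
  newtype Cell = Cell [Either Prim Cell]

  -- "square" is not primitive here: it is built from Dup and Mul.
  square :: Cell
  square = Cell [Left Dup, Left Mul]

  -- A native implementation may later treat square itself as primitive;
  -- the partial evaluator decides, not the portable definition above.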

>   -Cells have no internal storage; they get all their inputs from
>    other cells.  (So we maybe need some variable primitives.)
   Ahem. I think that's too restrictive. Yes, we *must* encourage the
separation of objects into pure objects (without state or side effects),
with well-defined semantics, and then add a variable constructor. Pure
objects are much easier to understand, their semantics is much cleaner,
many more optimizations can be done, etc. But well, state must be stored
somewhere, so even if we manage to isolate it, we still need to have
side effects somewhere in the system.
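   A minimal sketch of what I mean by isolating state behind a single
variable constructor (the Variable type and its get/set names are mine,
for illustration only):

  import Data.IORef

  -- The one impure construct; everything else can stay side-effect free.
  data Variable a = Variable { get :: IO a, set :: a -> IO () }

  newVariable :: a -> IO (Variable a)
  newVariable x0 = do
    r <- newIORef x0
    return (Variable (readIORef r) (writeIORef r))

  -- Expected algebraic relation, informally: after 'set v x', 'get v'
  -- yields x until the next 'set'.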

>   -The OS can roll sub-cells into their own, first order cells, to
>    achieve higher CPU utilization.  It can also roll a number of
>    interdependent first-order cells to be sub-cells of another
>    first-order cell, if resource utilization is too high.  (This
>    happens at all levels, too.  2nd-order cells, for example, can be
>    merged to form 3rd-order cells.)
   Yes. I wouldn't use "order" here (as it's already used for order of
abstraction), but again I completely agree: object dispatch is completely
dynamic. As few things as possible should be statically constrained.
That's exactly what's behind my "no kernel" concept: the calling convention
and object internal representation dynamically evolves, adapts, and
distributes according to the system resources.
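   As a very rough illustration of the "no kernel" idea: callers only ever
hold a record of methods, and nothing forces every such record to be built
the same way, so the representation behind the interface can be rebuilt,
merged or migrated without the callers changing (names invented):

  -- The interface is all a caller ever sees.
  data Counter = Counter
    { increment :: Counter   -- each call yields the next object
    , current   :: Int
    }

  -- One possible representation behind that interface; a migrated or
  -- merged cell could supply a different one with the same methods.
  localCounter :: Int -> Counter
  localCounter n = Counter
    { increment = localCounter (n + 1)
    , current   = n
    }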


> I'm not sure if the overhead involved in this would be really gross,
> or if it would be bearable.  Actually, that might also be a problem
> with my first model too, depending on how it was implemented.  Any
> ideas?
   I'm sure it's doable: we certainly can make better compromises than
the coarse grain of bUllshIX and the like, and by allowing freedom in
this domain, we leave room for further improvements, whereas the static
design of *U*IX is just hopeless.


> So now that we've taken that conceptual leap, why not take it a step
> further, and have resizable, self-enclosing objects, and view the OS
> as one huge object, composed of many sub-objects?  Sure, there are
> logical 1st-order objects, but the actual physical storage should change
> on the fly, so that every object, be it 40 gigs or 20 bytes, can best
> utilize mass-storage resources.
   Yup yup yup yup yup.
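   Purely as an illustration of adaptive physical storage (the sizes and
file names below are made up), the same logical object might be given
different layouts as it grows or shrinks:

  import Data.Word (Word8)

  -- One logical object, several possible physical layouts.
  data Stored
    = Inline  [Word8]      -- a 20-byte object can live inside its parent
    | OwnFile FilePath     -- a medium object gets a storage unit of its own
    | Sharded [FilePath]   -- a 40-gig object is spread over many units

  -- The system, not the programmer, picks and re-picks the layout.
  chooseLayout :: Integer -> [Word8] -> Stored
  chooseLayout size bytes
    | size <= 64          = Inline bytes
    | size <= 1024 * 1024 = OwnFile "object.dat"           -- made-up name
    | otherwise           = Sharded ["shard.0", "shard.1"] -- made-up names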


> Maybe there would be some way to combine processes and objects into
> one concept, as they have sort of done in BETA.  That'd certainly be
> neat, but the two are probably separate enough that it might not be
> worthwhile.
   To me, there is no such thing as a process. There are pure objects,
and objects that contain state. All objects are accessed through
methods. Variables are objects that have get and set methods which
fulfill some algebraic relations. By continually sending messages to
themselves, objects can keep running. There are also resource pools, so
that exploding objects cannot overwhelm stable ones. But again, there is
no need for coarse-grained processes. Let's adapt abstractions to actual
computing needs, not our habits to stupid arbitrary abstractions.
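   A toy sketch of those last two points: an object that keeps running by
sending a message to itself, bounded by a resource pool (the Int state and
the tick counting are stand-ins, not a design):

  -- A resource pool bounds how many self-addressed steps an object takes.
  newtype Pool = Pool { ticksLeft :: Int }

  -- One self-addressed step of an object whose whole state is an Int.
  step :: Pool -> Int -> Maybe (Pool, Int)
  step (Pool 0) _ = Nothing                  -- pool exhausted: it must pause
  step (Pool t) s = Just (Pool (t - 1), s + 1)

  -- "Run" the object by letting it message itself until the pool is dry,
  -- so an exploding object cannot starve the stable ones.
  run :: Pool -> Int -> Int
  run pool s = case step pool s of
    Nothing       -> s
    Just (p', s') -> run p' s'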


> Wow, there's a mouthful!  I think I'll throw this out now, and bar any
> further delay....
   I'm not sure what you mean, but anyway, thanks for your opinion.
It seems we mostly agree on the general concepts.
   Now, we still must design the details. Will you, Chris, or another
project member, agree to coordinate the Migration subproject?