HLL Process Model

Chris Harris chharris@u.washington.edu
Sat, 11 Mar 1995 09:30:30 -0800 (PST)

I think Fare already covered a lot of these issues, but I'll reply too, 
in case not all of the issues were answered.

On Tue, 7 Mar 1995, Jecel Mattos de Assumpcao Jr. wrote:

> On Mon, 6 Mar 1995 22:06:09 -0800 (PST) Chris Harris wrote:
> > As development seems to be progressing on the HLL front, I thought I'd
> > again reiterate some suggestions on the process model, and throw out
> > a few new things too.
> I think the two things are linked, but most people would disagree.

I'd definitely go with Fare's no-process concept.  Makes everything make 
so much more sense.  In real life there isn't a "process" concept, so 
why should there be one in a high-level computer language?

> > In real life, fine-grained objects don't require big, fat processes
> > to do their operations, so why should they in a computing environment?
> The problem is that most people want to run "dusty decks" in parallel,
> not write new fine grained programs. I can't imagine why seeing how
> bad these old programs are. You'll have to rewrite them anyway to
> make the GUI generation even look at them ( like colorizing old
> movies ;-).

I think with what we're trying to do with TUNES, any programs/objects 
that are "dusty decks" will be rejected by the TUNES community.  Once we 
get people used to the benefits of small, interacting objects over huge 
programs, I doubt they'll want to switch back.  So most programs will 
have to be rewritten, but doing so will be a great advantage.

> > In my view of things, code should have some way of being represented, so
> > that it is easy to distinguish code that must be executed linearly from
> > that which can be executed in parallel.  (No judgements at this level
> > would be made about what SHOULD be done in parallel.)
> You might do it this way. An alternative is to have the compiler look
> at the data flow and figure this out for you. Much harder, but easier
> for the programmer.

As Fare said, either way will be supported.  When you're actually 
executing code, it doesn't really matter how it was generated, so long as 
it's valid.
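
Just to make the two options concrete, here's a toy Python sketch (my own notation, nothing TUNES has actually specified) of code marked as parallelizable versus code that must run linearly; a scheduler is free to ignore the parallel hint:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_map(fn, xs):
    # marked as parallelizable: the runtime MAY fan these calls out,
    # since no call depends on the result of another
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, xs))

def sequential_map(fn, xs):
    # must be executed linearly, e.g. if each step depended on the last
    out = []
    for x in xs:
        out.append(fn(x))
    return out

print(parallel_map(square, [1, 2, 3]))    # [1, 4, 9]
print(sequential_map(square, [1, 2, 3]))  # [1, 4, 9]
```

Either the programmer writes the annotation, or a compiler's data-flow analysis picks the variant; the executed code looks the same either way.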

> You specify the maximum parallelism that can occur at runtime. One
> big problem with this example is that it doesn't use any communication
> ( other than the implicit shared variable quitting ) which is the
> real complication of parallel systems.

I like the process communication model used by Clouds.  Basically, each 
process has a small internal stack, and runs around executing different 
objects (basically C-like code with some entry points), modifying their 
variables, etc.  If you want IPC, you simply have two processes 
access the same object, and share its variables.  If you had the get/set 
variable methods tied in with a semaphore or such, you could write some 
really clean code.  (With a well-set-up object, you could just tell it to 
access variable j of object o, and if it wasn't available at the moment, 
it would put your process to sleep.  That way, you don't have the sync 
code mucking around with your program flow.)
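
Here's roughly what I mean, sketched in Python with threads standing in for processes (my own toy version, not the real Clouds interface):

```python
import threading

class SharedObject:
    """Toy Clouds-style shared object (hypothetical names): processes
    communicate only by reading and writing the same object's variables,
    and the get/set methods hide all the synchronization."""
    def __init__(self):
        self._vars = {}
        self._ready = threading.Condition()

    def set(self, name, value):
        with self._ready:
            self._vars[name] = value
            self._ready.notify_all()  # wake any process sleeping in get()

    def get(self, name):
        with self._ready:
            # variable not available yet? put this process to sleep
            while name not in self._vars:
                self._ready.wait()
            return self._vars[name]

o = SharedObject()
writer = threading.Thread(target=lambda: o.set("j", 42))
writer.start()
print(o.get("j"))  # blocks until the writer stores j, then prints 42
writer.join()
```

Note that the caller of `get` never sees the sync code; the sleeping and waking live entirely inside the object.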

> > How does the OS tinker with a process?  Well, if utilization is too
> > low, it will break off one (or a few) of the sub-blocks in a process,
> > and spawn them as separate processes.  If the machine is being
> > over-utilized, it will combine existing processes together.  (You
> > could just leave them separate, but maybe that'd have more context
> > switch overhead.  I dunno....)  Certain processes should also be able
> > to specify attributes that they have, so that they might take
> > advantage of any special hardware available.  For example, if there is
> > hardware available for the speedy execution of cellular automata
> > programs, there is no reason the process, without much of its
> > knowledge, shouldn't be able to take advantage of such.
> None of this is very easy to do. But they are very interesting
> research problems that I am looking into. I plan to use the
> adaptive compilation technology in Self to help here.

Who said TUNES was going to be easy?  =)  I think the key is to design 
models with enough expansion capability to one day reach this level of 
complication.  Doesn't mean it'll be this neat from day one.  Just that 
it has the potential to be made that way.

> > If we support changing of a process' code on the fly, we can then view
> > the OS as one huge process, composed of many smaller things....
> That is one way of looking at it. You will have to decide if your
> "processes" can share memory or not. That is a major question.

Only through objects, as I said before.  But if we do go and abolish the 
process completely (at the high level anyhow), this isn't much of an issue.

> >   -Cells have no internal storage; they get all their inputs from
> >    other cells.  (So we maybe need some variable primitives.)
> Oh yes they do! Their memory is the channels connecting their
> sub-cells...

=)  Okay, folks, this is a stupid requirement.  Guess I'm wanting to have 
an infinite level of recursion inside finite computers.  It'd be neat to 
try, although it could be just a bit difficult to implement....

> >   -The OS can roll sub-cells into their own, first-order cells, to
> >    achieve higher CPU utilization.  It can also roll a number of
> >    interdependent first-order cells to be sub-cells of another
> >    first-order cell, if resource utilization is too high.  (This
> >    happens at all levels, too.  2nd-order cells, for example, can be
> >    merged to form 3rd-order cells.)
> There is a missing element here - cell sharing. Can you use a cell
> as a sub-cell of two different higher level cells? If you can, you
> will have to find a way to deal with multiple "instances" of a cell.

I would vote no.  I'm for a prototype/delegate object model, as described 
in the SELF stuff.  That way, you create objects by cloning existing ones, 
and if you pass an object a message it doesn't understand, it will fire 
it off to its delegate.  They describe how to make whole class hierarchies 
this way....

Actually, we may not even need to support delegates.  Perhaps we just 
support cloning of objects, and then if an object needs a delegate (when 
it receives a message it doesn't understand), it can fire it off to an 
object of its choice.  That would reduce the overhead on the system level....
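
To show what I'm picturing, here's a minimal prototype/delegate sketch in Python (my own names, loosely after the Self papers, not any real Self API): objects are made by cloning, and a message an object doesn't understand gets fired off to its delegate.

```python
class Proto:
    """Toy prototype object: slots hold data, unknown messages
    are forwarded to the delegate (if any)."""
    def __init__(self, delegate=None, **slots):
        self.slots = dict(slots)
        self.delegate = delegate

    def clone(self, **overrides):
        # cloning copies the slots and keeps the same delegate
        new_slots = dict(self.slots)
        new_slots.update(overrides)
        return Proto(delegate=self.delegate, **new_slots)

    def send(self, msg):
        if msg in self.slots:
            return self.slots[msg]
        if self.delegate is not None:
            return self.delegate.send(msg)  # fire it off to the delegate
        raise AttributeError("message not understood: " + msg)

point = Proto(x=1, y=2)
point3d = point.clone(z=3)           # create objects by cloning
shape = Proto(describe="a shape")
circle = Proto(delegate=shape, r=5)  # unknown messages go to the delegate
print(point3d.send("z"), circle.send("describe"))  # 3 a shape
```

In the "no system-level delegates" variant, the `send` fallback would live in the object's own code rather than in the runtime.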

> > I'm not sure if the overhead involved in this would be really gross,
> > or if it would be bearable.  Actually, that might also be a problem
> > with my first model too, depending on how it was implemented.  Any
> > ideas?
> Try writing the above example with cells. If you can't do it, you'll
> know you are in trouble ;-) If you can do it, try doing this with
> cells:
>            fact (n) = n < 2 ? 1 : n * fact ( n - 1 )

Have to get back on that one.  (Have to drag out the ol' C book and 
remember what some of this stuff means.  hehe...)  The problem with such 
an example is where do I define the primitives?  What if I make a fact 
primitive, and so it only requires one cell?  =)
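
For what it's worth, here's one way the fact example could come out with cells, sketched in Python (my own toy rendering: each cell is just a function wired to others, with compare/subtract/multiply as the primitives instead of fact itself):

```python
# primitive cells: no internal storage, outputs depend only on inputs
def less_than(a, b): return a < b   # comparator cell
def sub(a, b): return a - b         # subtractor cell
def mul(a, b): return a * b         # multiplier cell

def fact(n):
    # a "switch" cell routes n to one of two sub-networks;
    # the recursive branch wires fact up as a sub-cell of itself
    if less_than(n, 2):
        return 1
    return mul(n, fact(sub(n, 1)))

print(fact(5))  # 120
```

Of course, this just dodges the question of where the primitive set bottoms out, which is exactly the joke about making fact itself a primitive.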

> > So now that we've taken that conceptual leap, why not take it a step
> > further, and have resizable, self-enclosing objects, and view the OS
> > as one huge object, composed of many sub-objects?  Sure, there are
> > logical 1st order objects, but the actual physical storage should change 
> > on the fly, so that every object, be it 40 gigs or 20 bytes, should be able
> > to best utilize mass-storage resources.
> If you do object composition by "enclosing", how do you share things?
> Or am I missing something here?

I think I'm speaking mainly of physical storage here, not so much logical 
objects.  It might be applied to logical objects too, although it might 
take some thinking.  This is an issue, although I'm not sure how it 
works.  I guess we could use the generic "processes don't exist; we don't 
need sharing" argument.  =)

> > Maybe there would be some way to combine processes and objects into
> > one concept, as they have sort of done in BETA.  That'd certainly be
> > neat, but the two are probably separate enough that it might not be
> > worthwhile.
> I have done so in Merlin: one object = one process ( except when
> they are not :-). This model is close to the real world where each
> component works in parallel with all of the rest. I have to allow
> some exceptions for performance ( immutable objects can have many
> processes ) and for compatibility with sequential code ( recursion
> would always deadlock otherwise ).

I don't think this is a very useful way to represent things.  Depending 
on what you're representing, you're always going to want more 
or fewer processes per object.  I would again like to follow Fare's 
notion: processes don't exist, only active objects.  How that works, I do 
not know.