Memory model for the LLL

Chris Harris chharris@u.washington.edu
Sat, 10 Dec 1994 21:14:39 -0800 (PST)


On Fri, 9 Dec 1994, Mike Prince wrote:

> On Thu, 8 Dec 1994, Francois-Rene Rideau wrote:
> > What will be our memory model for the LLL?
> 
> 32 bit segments, each owned by a toolbox, or agent for resolving scope.

Would this use native x86 segments or some abstraction of them?  It seems 
fairly limiting to be able to address only one code, one stack, and four 
data segments at a time.  On top of that, many processors don't support 
native segments.  Wouldn't a flat address space with paging be easier?
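
To make that "six at a time" limitation concrete, here's a minimal C 
sketch of the 386 protected-mode translation I assume Mike means by 
32-bit segments (reduced to essentials, not the real descriptor layout):

    #include <stdint.h>

    /* A 386-style segment, reduced to a base linear address and a
       limit (size - 1). */
    struct segment {
        uint32_t base;
        uint32_t limit;
    };

    /* Translate a segment-relative offset to a linear address,
       refusing (here: returning -1) an out-of-bounds access.  The
       hardware does this check on every memory reference, but only
       for the six segments currently loaded into CS, SS, DS, ES,
       FS, and GS -- hence "one code, one stack, four data". */
    int64_t seg_translate(const struct segment *s, uint32_t offset)
    {
        if (offset > s->limit)
            return -1;   /* would be a #GP fault on real hardware */
        return (int64_t)s->base + offset;
    }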

> > For instance, will we allow pointer arithmetic? (I hope not)
> 
> Yes, CPUs do it, so why can't the LLL?  Remember, a HLL can always opt 
> not to use this feature!  Pointers are within segments for process 
> protection.

Might work.  I'm still trying to grasp what kind of language everyone 
has in mind here.  There seems to be general agreement that it should be 
stack-based, but that doesn't say too much....
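
On the pointer-arithmetic question itself, here's one sketch (all names 
invented, so take it as a guess at semantics rather than a design) of 
how "pointers are within segments" could be enforced even on a CPU with 
flat memory:

    #include <stdint.h>

    /* An LLL pointer as a (segment id, offset) pair; arithmetic
       touches only the offset. */
    struct lll_ptr {
        uint16_t seg;   /* index into a per-agent segment table */
        uint32_t off;   /* byte offset within that segment */
    };

    struct lll_seg {
        uint8_t *mem;
        uint32_t size;
    };

    /* Pointer arithmetic is plain offset arithmetic... */
    struct lll_ptr lll_add(struct lll_ptr p, int32_t delta)
    {
        p.off += (uint32_t)delta;
        return p;
    }

    /* ...and protection is enforced at dereference time. */
    uint8_t lll_load(const struct lll_seg *table, struct lll_ptr p)
    {
        if (p.off >= table[p.seg].size)
            return 0;   /* a real system would trap the agent here */
        return table[p.seg].mem[p.off];
    }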

> > Will we allow infix pointers (not pointing to the beginning of an
> > object)?
> 
> If the LLL has to deal with objects as such, it might become too 
> dependent on one way of "thinking".  I'm going for a general memory 
> allocation strategy, letting the LLL program determine its use of the 
> storage (i.e. one "object" per memory allocation, an array of them, a 
> stack of them, etc).

Sounds good to me, provided GC can still be made to work with this.
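
To check my reading of that, a small C sketch: one raw allocation whose 
interpretation, here a stack of 16-bit values, is entirely the program's 
business (the block layout is my invention, not Mike's):

    #include <stdlib.h>
    #include <stdint.h>

    /* The allocator hands out raw storage plus a size... */
    struct raw_block {
        uint32_t size;              /* payload bytes that follow */
    };

    int main(void)
    {
        uint32_t n = 64 * sizeof(uint16_t);
        struct raw_block *b = malloc(sizeof *b + n);
        if (!b) return 1;
        b->size = n;

        /* ...and the program treats the payload however it likes:
           here, a downward-growing stack of uint16_t. */
        uint16_t *base = (uint16_t *)(b + 1);
        uint16_t *sp = base + 64;   /* empty stack: sp at the top */
        *--sp = 42;                 /* push */
        uint16_t v = *sp++;         /* pop */

        free(b);
        return v == 42 ? 0 : 1;
    }

The GC worry is exactly that nothing in such a block says which words 
are pointers.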

> > How will we adapt to different architectures?
> 
> It will be give and take.  Nice answer, huh?

Quite so.  =)  Actually, though, there's only so far you can go in terms 
of compatibility.  In the original MOOSE docs, there's talk of 
supporting machines with 2- or 4-bit words, or even ones whose word 
sizes aren't powers of 2.  How many people do you know with one of 
those?  =)

> > Will 64-bit architectures have to throw away 32 bits out of 64?
> 
> Maybe, let's discuss this.  If I say add 16-bit + 16-bit and it 
> overflows, what happens?  On a 16-bit machine I've lost the info.  But 
> a 32-bit machine still has it.  Now do a little multiply and your two 
> different implementations have two different answers.  That's not 
> acceptable.  I say a programmer defines a variable of size n, and it 
> stays that size regardless of the CPU's native size.  That, in 
> conjunction with the previously mentioned frugal programming 
> practices, will result in (I believe) predominant use of 16-bit sizes.

Sounds good to me.  It would be my hope that all platforms would 
support 32-bit operations, even if it takes a hack with a few 16-bit 
#s.
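
To pin down both Mike's overflow point and the hack I have in mind, 
here's a C sketch of a 32-bit add built out of 16-bit pieces.  Because 
the operand widths are fixed, a 16-bit host and a 64-bit host compute 
the same bits:

    #include <stdint.h>

    /* A 32-bit add carried out in 16-bit halves, propagating the
       carry by hand -- what a 16-bit machine would have to do. */
    uint32_t add32_on_16bit(uint16_t a_hi, uint16_t a_lo,
                            uint16_t b_hi, uint16_t b_lo)
    {
        uint32_t lo = (uint32_t)a_lo + b_lo;     /* 17-bit result */
        uint16_t carry = (uint16_t)(lo >> 16);   /* 0 or 1 */
        uint16_t hi = (uint16_t)(a_hi + b_hi + carry);  /* mod 2^16 */
        return ((uint32_t)hi << 16) | (uint16_t)lo;
    }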

Are you saying that a variable of any size could be requested, from 
1-bit to 512-bit, or only certain sizes within a certain range?

Oh, I'd also be curious about specifying floating-point #s.  Do most 
processors out there follow the same IEEE standards, and thus have the 
same level of precision?
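
For what it's worth, here's the quick C probe I'd use to compare two 
ports; an IEEE 754 machine should report radix 2 with 24 mantissa 
digits for float and 53 for double:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("float:  radix %d, %d mantissa digits\n",
               FLT_RADIX, FLT_MANT_DIG);
        printf("double: radix %d, %d mantissa digits\n",
               FLT_RADIX, DBL_MANT_DIG);
        return 0;
    }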

> > * GC would happen only during cooperative multithreading, while
> > preemptive multithreading is allowed for real-time behavior on
> > preallocated (outside global GC) zones.
> 
> GC would happen when the kernel/memory manager object says it would 
> happen.  It is not dependent on the state of any threading.

I would generally agree, but pre-emptive threads can give you a problem 
with some GC schemes.  If the collector uses handles like the 
Macintosh's, and a program dereferences a handle to a raw pointer, and 
the OS interrupts to move that object, the pointer becomes invalid.  I 
dunno if this is what was being talked about, though.  I'm sure there's 
some way around it....
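
To spell out the hazard I mean (the names here are hypothetical, not 
the real Mac Memory Manager calls):

    /* A handle is a pointer to a master pointer that the memory
       manager may retarget when it compacts the heap. */
    typedef char **handle_t;

    void use_object(handle_t h)
    {
        char *p = *h;   /* dereference the handle to a raw pointer */

        /* ...if a pre-emptive GC compacts the heap RIGHT HERE,
           *h gets updated but p still aims at the old spot... */

        p[0] = 'x';     /* possibly scribbling on freed memory */
    }

    /* The classic fix: pin ("lock") the object while a raw pointer
       is live, so the collector won't move it out from under you. */
    void use_object_safely(handle_t h, void (*lock)(handle_t),
                           void (*unlock)(handle_t))
    {
        lock(h);
        (*h)[0] = 'x';
        unlock(h);
    }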

> > * object type is determined by which page it is located on and/or
> > which tag precedes it.
> 
> An object can be tagged by the first few bytes of its memory 
> allocation.  That way "higher-level" objects would be defined by the 
> HLL under which they were created.  Remember, there are going to be a 
> lot of low-level things manipulated by the LLL which cannot be objects 
> (although you could build a container class, but that in itself would 
> be a hack).

Quite true.
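
Concretely, I'd picture something like this in C (the field names are 
my guess, not Mike's design):

    #include <stdint.h>

    /* "Tagged by the first few bytes of its memory allocation": the
       LLL only cares about this header; the payload is opaque, and a
       HLL stamps its own type codes into `tag`. */
    struct alloc_header {
        uint32_t tag;    /* HLL-assigned type code; 0 = raw storage */
        uint32_t size;   /* payload bytes, so a GC can walk the heap */
        /* payload follows immediately */
    };

    /* Recover the header from a payload pointer. */
    struct alloc_header *header_of(void *payload)
    {
        return (struct alloc_header *)payload - 1;
    }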

Say, this isn't really on topic, but I'm curious how you plan to 
migrate/store agents, and represent concurrency in the LLL.  In my mental 
picture now, agents fly around to whatever toolbox they need, and then 
run the code and use the data in that toolbox.  That's neat, but what IS 
an agent?  I've heard you say it's a process, and that's okay, but my 
view of a process is some code and data, laid out like Unix processes.  
If you take away the code from this view (it's hidden in tools, right?), 
then a process simply becomes a collection of stacks.  How does this work?

Also, how do you schedule agents?  Are there fork/join ops in the LLL, or 
is there some other way (a kernel toolbox?)?
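
Purely as a guess at what you mean, here's the picture I'd draw in C, 
with every name invented:

    #include <stdint.h>

    struct stack {
        uint32_t *base;
        uint32_t *sp;
        uint32_t  size;      /* in words */
    };

    /* An agent with the code taken away: just its stacks plus a
       position inside some toolbox's code. */
    struct agent {
        struct stack data;   /* operand stack */
        struct stack ret;    /* return stack */
        uint16_t toolbox;    /* which toolbox it is visiting */
        uint32_t ip;         /* instruction offset in that toolbox */
    };

    /* A fork op might clone the stacks and hand the child to the
       scheduler (a kernel toolbox?); join would wait on its exit. */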

> > * low-level operators like addition are explicitly typed, and won't
> > have to test object type.
> 
> Completely in agreement.  But note that this goes in contradiction to 
> the pure object model.  I get the feeling of a compromise?

I dunno about other object-lovers, but I've been thinking a bit about 
your toolbox model, and the why of it all.  Finally, the obvious hit me 
over the head.  Without the underlying instruction set, a computer is 
100% useless.  There is no way for a piece of code to add two numbers 
when the CPU provides no way to at least emulate an add instruction.  
Therefore, at some level, objects, no matter how cute they are, must be 
translated into the local instruction set.  Since the hardware doesn't 
give cheese about objects, a pure OO model can't work; no matter what 
language you use, you'll end up using some primitive operation eventually.
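
Just to illustrate the cost being avoided, two interpreter fragments in 
C (the opcode names are invented): an explicitly typed add next to a 
tag-checking "pure object" add:

    #include <stdint.h>

    /* Typed: the opcode itself says "16-bit add", so no type test
       happens at run time. */
    uint16_t op_add_u16(uint16_t a, uint16_t b)
    {
        return (uint16_t)(a + b);
    }

    /* Untyped: every operand carries a tag that must be inspected
       before any work gets done. */
    struct value { int is_float; int32_t i; double f; };

    struct value op_add_generic(struct value a, struct value b)
    {
        struct value r = {0, 0, 0.0};
        if (!a.is_float && !b.is_float) {
            r.i = a.i + b.i;
        } else {            /* promote mixed operands to float */
            r.is_float = 1;
            r.f = (a.is_float ? a.f : (double)a.i)
                + (b.is_float ? b.f : (double)b.i);
        }
        return r;
    }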

Enough agreement, though.  =)  I still think that the user should get a 
higher-level view of things, complete with inheritance, etc.  Toolboxes 
are cool, but objects could be more productive....  (Not to mention fun.  
hehe....)

> Mike