11.3 distributed memory management: Resource allocation

Mike Prince mprince@crl.com
Mon, 5 Dec 1994 12:59:09 -0800 (PST)


On Mon, 5 Dec 1994, Francois-Rene Rideau wrote:

> [What discussion thread should this be on ?]
Assuming we adopt the most recent incarnation of our charter, it's:
	11.3 distributed memory management

> IMPORTANT: memory allocation strategy.

<snipped>

> So here is the policy I
> propose:
> * there are service/resource providers
> * there are service/resource multiplexers/demultiplexers and mux
>   mergers/splitters, that behave like coarse grained objects, with heavy
>   resource allocation mechanism.
> * tiny objects work through these mux objects, they are automatically copied
>  between pools.

Here's my model:

Our virtual machine is broken down into layers.

GLOBAL (everything: all CPUs in existence, all memory, etc.)
	DOMAIN (group of CPUs, perhaps your local net, or the collection
		of processors in your parallel system, whatever you define)
		LOCAL (usually one CPU plus its memory, other hardware, etc.)
			TOOLBOX (for lack of a better name, maybe a MODULE)
				TOOLS (our code)
				STACKS (our data, may be used as stacks or
					random access data, global to TOOLBOX)
				AGENTS (our execution primitive)
					STACKS (data only visible to owner
						agent)
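
Here's a minimal sketch of that hierarchy as C structs, just to make
the nesting concrete.  All the names and fields are mine, picked for
illustration; don't read them as a proposal for the real layout.

    struct tool    { void (*entry)(void); };           /* our code */

    struct stack   { char *base; char *sp; char *limit; };

    struct agent   {                            /* execution primitive */
        struct stack own;                       /* visible only to us  */
        struct toolbox *home;
    };

    struct toolbox {
        struct tool  *tools;   int ntools;      /* our code            */
        struct stack *stacks;  int nstacks;     /* global to TOOLBOX   */
        struct agent *agents;  int nagents;
    };

    struct local   { struct toolbox *boxes;  int nboxes;  };  /* one CPU */
    struct domain  { struct local   *nodes;  int nnodes;  };  /* e.g. LAN */
    struct global  { struct domain  *domains; int ndomains; };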
				
I know I've already gotten flak for the apparent complexity. But a little 
complexity now will make the entire system simpler later.  Think 
holistically!

Ultimately, resources are requested by people.  I start up my word 
processor, I start up a server, etc.  We can maintain a chain back to the 
"sponsor" of any computation.  Thus a DOMAIN could query the system for 
how many resources any one sponsor could consume.  So...

When I run my spreadsheet, word processor, and graphics program at the 
same time on a local box allocated as one DOMAIN, I would usually be 
allowed the use of all resources.  However, if I were running within a 
DOMAIN composed of my office LAN, I would be given pre-defined limits 
which, if exceeded, would adversely affect my ability to consume 
resources, and perhaps cause the demise of some of my applications.
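
To make the sponsor idea concrete, here's a rough C sketch of how a
DOMAIN might charge a request against the sponsor chain.  usage_of()
and limit_of() are hypothetical helpers standing in for whatever
accounting the DOMAIN keeps; none of this is a final interface.

    struct sponsor {
        const char *name;
        struct sponsor *parent;         /* chain back toward the person */
    };

    struct computation {
        struct sponsor *sponsor;
        long bytes_held;
    };

    extern long usage_of(struct sponsor *);  /* hypothetical accounting */
    extern long limit_of(struct sponsor *);  /* DOMAIN-defined limit    */

    /* Deny the request if it would push any sponsor up the chain over
       its limit; otherwise charge it to this computation. */
    int request_memory(struct computation *c, long bytes)
    {
        struct sponsor *s;

        for (s = c->sponsor; s != NULL; s = s->parent)
            if (usage_of(s) + bytes > limit_of(s))
                return -1;              /* over budget: deny (or worse) */
        c->bytes_held += bytes;
        return 0;
    }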

Now on to the details of resource allocation.  Our heavy resource 
allocation would be the movement of TOOLBOXES.  This would require 
rounding up all their data, code, and resident agents, shipping them off, 
and recompiling them.  This should be avoided, but (my gut feeling) it 
would not be as computationally expensive as many think, as long as we 
standardize our data format and have a low-level language that compiles 
quickly (almost a one-to-one, macro-like translation is possible).  We 
could also encourage the creation of "small" toolboxes (in Taos, if my 
memory serves me, some are only a few hundred bytes).
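
For the heavy case, the steps might look roughly like this, reusing
the structs from my first sketch.  Every function here is a made-up
placeholder for the real mechanism (quiesce, pack, ship, recompile);
the point is just that it's a bounded five-step job, not magic.

    struct image;                       /* packed data + code + agents */

    extern void          suspend_agents(struct toolbox *);
    extern struct image *pack(struct toolbox *);
    extern int           ship(struct local *, struct image *);
    extern void          recompile_at(struct local *, struct image *);
    extern void          resume_agents_at(struct local *, struct image *);
    extern void          release(struct toolbox *);

    int migrate_toolbox(struct toolbox *tb, struct local *dest)
    {
        struct image *img;

        suspend_agents(tb);             /* quiesce resident agents      */
        img = pack(tb);                 /* round up data, code, agents  */
        if (ship(dest, img) != 0)
            return -1;                  /* stay put on failure          */
        recompile_at(dest, img);        /* fast, near one-to-one        */
        resume_agents_at(dest, img);
        release(tb);                    /* free the old copy            */
        return 0;
    }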

The "light" resource allocation is for memory requests for STACKS.  If we 
used a stack-based language, our stacks could grow and shrink on their 
own.  When bounds are hit, the OS could automatically expand the STACK, 
or if not possible, ship off the TOOLBOX to a LOCAL that could.  Also, 
when one stack needs to get bigger, the OS could scan the stacks for 
those with lots of "head rom" and shrink them up.  This way the OS would 
be given the bulk of GC responsibilty.  Remember, though that we should 
seperate mechanism from policy.  The OS provides the mechanism for many 
of these things, but external TOOLBOXES would determine who goes where with 
what.  This OS should be highly modular.
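
A sketch of that light path, again with invented helper names: try to
grow in place, then harvest headroom from idle stacks, and only fall
back to the heavy migrate_toolbox() above as a last resort.

    extern int           expand_in_place(struct stack *, long);
    extern void          shrink_idle_stacks(struct toolbox *, long);
    extern struct local *find_roomier_local(struct toolbox *);

    int grow_stack(struct toolbox *tb, struct stack *st, long need)
    {
        struct local *dest;

        if (expand_in_place(st, need) == 0)   /* cheap: room after us  */
            return 0;
        shrink_idle_stacks(tb, need);         /* harvest "headroom"    */
        if (expand_in_place(st, need) == 0)
            return 0;
        dest = find_roomier_local(tb);        /* heavy fallback        */
        if (dest == NULL)
            return -1;
        return migrate_toolbox(tb, dest);
    }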

One last note: just because I use the word stacks (distasteful to some) 
doesn't mean they have to be used that way; you can also access them 
randomly, provided you stay under the stack pointer.  However, if you 
start to create holes in your stack, that makes GC a little harder.  I've 
also opened the possibility of having multiple stacks (a little harder on 
the OS, and we take a hit for not having our data be nice and close to 
each other (poor cache!), but for those who want very dynamic storage, 
we'd have it).
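
For instance, a random write into a STACK is fine so long as the
address stays under the current stack pointer; something like this
(assuming, as before, a stack that grows upward from base toward sp):

    /* Write into a STACK at a random offset; refuse anything at or
       above the current stack pointer. */
    int poke(struct stack *st, long offset, char value)
    {
        char *addr = st->base + offset;

        if (addr < st->base || addr >= st->sp)
            return -1;                  /* outside the live region */
        *addr = value;
        return 0;
    }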

So, back to Fare: our service/resource providers would exist at the DOMAIN 
level.  There would be redundancy in case a provider failed.  Our 
light-weight resource management would be done at the LOCAL level, and, if 
that's not able to satisfy a request, it would be passed on to the DOMAIN 
resource provider above.
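
In code-shaped terms (provider_allocate() and friends are placeholders,
and struct request stands for whatever is being asked for): try the
LOCAL first, then walk the DOMAIN's redundant providers.

    struct request;
    struct provider { struct provider *backup; };

    extern int              local_allocate(struct local *, struct request *);
    extern struct provider *domain_providers(struct local *);
    extern int              provider_allocate(struct provider *, struct request *);

    int allocate(struct local *here, struct request *req)
    {
        struct provider *p;

        if (local_allocate(here, req) == 0)
            return 0;                   /* light-weight, handled here  */
        for (p = domain_providers(here); p != NULL; p = p->backup)
            if (provider_allocate(p, req) == 0)
                return 0;               /* DOMAIN level, with fallback */
        return -1;                      /* domain couldn't help either */
    }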

So....Whadaya think? 

Mike

P.S.  I'm a little fuzzy on this MUX stuff; do you mean sharing resources 
(as in memory), or is it devices (printers), or processors, or...?