LLL model

Francois-Rene Rideau rideau@clipper
Sun, 11 Dec 94 18:41:13 MET


> On Sat, 10 Dec 1994, Chris Harris wrote:
>> On Fri, 9 Dec 1994, Mike Prince wrote:
>>> On Thu, 8 Dec 1994, Francois-Rene Rideau wrote:
>>>> What will be our memory model for the LLL ?
>>> 
>>> 32 bit segments, each owned by a toolbox, or agent for resolving scope.

  The problem with segments is that most hardware doesn't support them; besides,
how would we use segments ? Imagine I have a 320x200 array, and want to give access
only to the first line, then only to the second line, etc. Will a segment
actually be defined for each ? And what if it's a column I want to give access
to ? Or if I want to grant access to part of a line ? Or to a rectangle ?
Segments are also a very poor/specific abstraction for security.
  I'm 100% against using segments as a standard model for the LLL (though if
they are useful for some particular kind of object on some architecture, why
not let the LLL->asm code generator use them ?)

  We should provide much more generic ways of specifying security conditions.
And it should be a compiler issue, not a system issue. If our language is
truly low-level, there will always be ways to fool the system, so either we
trust low-level programs via some signature system (as I propose) or we
build big, huge, slow, bulky, horrid unix-like processes for each system
object (which means a coarse-grained system). I'm 200% against the latter
method.
  We should have only correct programs. Programs should include specifications
in a HLL that explain under what conditions the program may fail, and under
what conditions the program may crash the system. If the latter conditions are
not met, the program may not run (but filters may be automagically generated
to test the conditions before and/or after a routine is called, so that
failures are detected before the machine may crash, and an exception raised).
  Program correctness means the people writing LLL code must be especially cautious,
and LLL-generating programs particularly well designed. Perhaps our "LLL"
should not even be so low-level...


> BTW, I'm way against paging.  Our optimization routines should keep the 
> most active toolboxes in memory.  I don't favor fragmenting toolboxes 
> and inheriting another "optimization" problem of paging stuff to and from 
> disk.  I believe we could eliminate paging, DSM, RPC's, conventional 
> filesystems, disk caches, etc with this design.
  Unhappily, that's not possible in a multitasking system: the compiler
does not know what tasks will be running by the time the process runs (especially
if the process itself needs a compiler, and thus runs for a long time).
Paging is the only way to adapt dynamically to actual needs. Note that
the `BLOCK' word in the FORTH language is a kind of software paging,
so we may adopt a LLL model where it is not specified whether the paging
is done in software or hardware (and let the optimizer decide -- that's what I
propose). But paging is needed anyway. I know of no dynamic system that
allows more virtual memory than actual memory yet does not page (see Lisp
implementations). Or does your memory model limit the use of an object to
computers that actually have that much free memory ???

BTW, Mike, as a LLL project coordinator, I think you should maintain a
summary file on the abstraction model for the LLL, shouldn't you (-8 ?


>> Are you saying that any size variable could be requested, from 1-bit to 
>> 512-bit, or only certain #s within a certain range?
>> 
>> Oh, I'd Also be curious about specifying floating-point #s.  Do most 
>> processors out there follow the same IEEE standards, and thus have the 
>> same level of precision?
Remember that we wanted dynamic object *migration*. So, of course the HLL can
request any size for integers and such, and LLL libraries may allow their use.
But because of migration, the actual low-level object size must be well
defined, so either objects are systematically tagged or we must separate
them into different zones; in any case, you can't multiply the sizes without
a lot of associated overhead *and* code size.


> I'm advocating having 8,16, and 32 _signed_ numbers as well as a float 
> (64 bit or 80 bit).  As with Forth, bigger things can by synthesized.
   I agree 100%. But a given LLL program will use only one data stack (or more,
but that will consume registers, etc.), and each stack must have a well-defined
cell size (or will have to emulate different sizes -- see HP48 integers).
The problem is the same as above.
   I'd say support only 16- or 32-bit virtual stacks (actual implementations
will emulate them as needed).


   Then, I don't agree completely with Mike's view of the execution model,
and more importantly, with the place and order in which this discussion is
being held. I think the LLL should be an extensible, replaceable
sub-abstraction of a more generic, full-fledged binary encoding standard.
   See my next (incoming) message...

--    ,        	                                ,           _ v    ~  ^  --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
--                                      '                   / .          --
MOOSE project member. OSL developper.                     |   |   /
Dreams about The Universal (Distributed) Database.       --- --- //
Snail mail: 6, rue Augustin Thierry 75019 PARIS FRANCE   /|\ /|\ //
Phone: 033 1 42026735                                    /|\ /|\ /