HLL Process Model

Jecel Mattos de Assumpcao Jr. jecel@lsi.usp.br
Thu, 16 Mar 1995 01:59:07 -0300


On Sat, 11 Mar 95 2:06:34 MET Francois-Rene Rideau wrote:
> Jecel said:
> > Like you said, this is just a numbering convention. Then please decide
> > what Alpha version 0.X will have and what will only be implemented in
> > 1.0.
>    Well, to me, version 0.0.1 would be the first version to boot and do
> computations. 

Check.

> Version 0.1 will be the first running version that allows any
> potentially useful interaction. 

Agreed.

> Version 1.0 should include the full core system
> semantics, and be somewhat portable, even though the implementation is lame
> in terms of running speed. 

The "demo" version. Ok.

> Version 2.0 would be an efficient implementation.

Great!

> Version 3.0 would be fully ported to all common platforms.

If it is not different from the user's viewpoint, I would keep
the 2.0 name.

>    Meanwhile we're version 0.0.0, and internal revision is still evolving
> (currently 0.0.0.10). I hope we'll arrive at 0.0.1 before we reach internal
> revision 255 ;)
>    Is that ok ?

This is a very good plan. I am not sure I understood all these different
levels (why not 0.1, 0.2, 1.0, 2.0, ...?) or when they are incremented, but
I like the general idea.

> > [ different stack models - adaptors ]
>    Well, it depends on what you call an adaptor. But adaptors could be
> automatically generated. Now, we don't need a native convention to reduce
> the number of adaptors: we just need the transitive closure of the
> "is adapted to" relation to link every method to every other, which is
> *quite* different. Having a native method is like having a kernel. We do
> not do it, and I'm sure it's harmful. Moreover, we may have adaptors
> involving only subsets of a language, e.g. FORTH words with simple decidable
> stack behavior, etc.

One example of a heavyweight adaptor is CORBA. A neater one is ILU
(Inter-Language Unification -- take a look at
    "ftp://ftp.parc.xerox.com/pub/ilu/ilu.html" ).
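
To make that transitive-closure point concrete, here is a minimal C sketch
(the conventions and the adaptor table are invented for illustration only):
each calling convention is a node, each generated adaptor a directed edge,
and Warshall's algorithm computes which pairs of conventions can interoperate
through some chain of adaptors -- no single "native" convention is needed as
a hub.

    #include <stdio.h>

    #define NCONV 4   /* hypothetical conventions: 0=LLL 1=FORTH 2=C 3=MISC */

    /* direct[i][j] != 0 means an adaptor from convention i to j exists */
    static int direct[NCONV][NCONV] = {
        { 1, 1, 0, 0 },   /* LLL   -> LLL, FORTH */
        { 0, 1, 1, 0 },   /* FORTH -> FORTH, C   */
        { 0, 0, 1, 1 },   /* C     -> C, MISC    */
        { 0, 0, 0, 1 },   /* MISC  -> MISC       */
    };

    /* Warshall's algorithm: reach[i][j] ends up true iff some chain of
       adaptors leads from convention i to convention j.               */
    static void close_transitively(int reach[NCONV][NCONV])
    {
        int i, j, k;
        for (i = 0; i < NCONV; i++)
            for (j = 0; j < NCONV; j++)
                reach[i][j] = direct[i][j];
        for (k = 0; k < NCONV; k++)
            for (i = 0; i < NCONV; i++)
                for (j = 0; j < NCONV; j++)
                    if (reach[i][k] && reach[k][j])
                        reach[i][j] = 1;
    }

    int main(void)
    {
        int reach[NCONV][NCONV];
        close_transitively(reach);
        /* LLL can reach MISC even though no direct adaptor was written */
        printf("LLL -> MISC adaptable: %s\n", reach[0][3] ? "yes" : "no");
        return 0;
    }

A real system would of course also have to compose the adaptor code along
the chosen chain and keep the chains short, but the point stands: only
reachability matters, not the existence of one privileged convention.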

> > How would the annotation translate into LLL terms? Library calls?
>    There would be different kinds of annotations, and different means to
> implement them. One standard way would be that user-visible objects, being
> few, can afford having an extra field that would point to a hash-table
> structure. Other annotations could be implemented the other way round:
> there would be a hash table associating objects to the annotation's value.
> When you know an annotation is always present, don't hash it, but reserve
> a field for it alone. All this can be done automatically, and optimized
> at next major GC.

Right. This is like how an object is associated with its class in
Smalltalk.
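
Here is a rough sketch in C, with invented names, of the two layouts
described above: (a) a user-visible object carries one extra field pointing
to its own little annotation table, and (b) a rare annotation lives in a
global table keyed by object, so unannotated objects pay nothing; an
always-present annotation would simply become an ordinary field.

    #include <stddef.h>
    #include <string.h>

    /* (a) the object itself points to a small table of its annotations */
    struct annot {
        const char   *key;      /* e.g. "comment", "author", ... */
        void         *value;
        struct annot *next;
    };

    struct object {
        /* ... ordinary fields ... */
        struct annot *annotations;   /* extra field, NULL when unannotated */
    };

    static void *get_annotation(struct object *o, const char *key)
    {
        struct annot *a;
        for (a = o->annotations; a != NULL; a = a->next)
            if (strcmp(a->key, key) == 0)
                return a->value;
        return NULL;
    }

    /* (b) the other way round: one global hash table per annotation,
       mapping object -> value, so the objects themselves stay untouched */
    #define BUCKETS 256

    struct entry {
        struct object *obj;
        void          *value;
        struct entry  *next;
    };
    static struct entry *comment_table[BUCKETS];  /* hypothetical "comment" annotation */

    static void *get_comment(struct object *o)
    {
        unsigned h = (unsigned)((size_t)o >> 4) % BUCKETS;
        struct entry *e;
        for (e = comment_table[h]; e != NULL; e = e->next)
            if (e->obj == o)
                return e->value;
        return NULL;
    }

    /* (c) an annotation known to be always present is just a plain field:
       struct object { ...; void *always_there; };                        */

Which layout a given annotation gets could then be decided automatically
and revised at the next major GC, as described above.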

>    As for library calls, annotations are like any object: they are bound
> into LLL terms by the meta-objects which made them visible anyway. If you
> know the object personally, you call its method directly; if you don't,
> you call another object that may provide you the means to call the
> object directly next time...

Ok, but I still don't have a good idea of what annotations are and
how they are used by the system.

> > Good idea, but let's leave translating Spice as an exercise for the
> > user ;-)
>    Err, is Spice some language? Anyway, there's a subsubproject about
> automatic translation, and the main language to translate from will be C...

Spice is a very complex analog circuit simulator written in C and running
on Unix systems. It was originally written in Fortran, and I was given the crazy
task of porting it from the PDP-11 to the Burroughs 6700 back in 1982.
Don't try this at home, kids ;-)

-- Jecel