HLL Process Model

Francois-Rene Rideau rideau@clipper.ens.fr
Sat, 11 Mar 95 2:06:34 MET

Jecel said:
> On Mon, 6 Mar 95 18:03:21 MET Francois-Rene Rideau wrote:
>> Actually, it's just a problem of version numbering policy:
> Like you said, this is just a numbering convention. Then please decide
> what Alpha version 0.X will have and what will only be implemented in
> 1.0.
   Well, to me, version 0.0.1 would be the first version to boot and do
computations. Version 0.1 would be the first running version that allows any
potentially useful interaction. Version 1.0 should include the full core
system semantics and be somewhat portable, even if the implementation is
slow. Version 2.0 would be an efficient implementation. Version 3.0 would
be fully ported to all common platforms.
   Meanwhile we're at version 0.0.0, and the internal revision is still
evolving (currently I hope we'll arrive at 0.0.1 before we reach internal
revision 255 ;)
   Is that OK?

>>> [ open stack ( Forth ) or framed stack ( C ) model? ]
>>    I think our HLL should be high-level enough to make such considerations
>> obsolete, and its implementation versatile enough to allow linking to
>> lower-level routines using either model. To the HLL, all routines will
>> have a fixed number of arguments, but the way the arguments are passed is
>> function-dependent: according to the implementation, it could be done
>> through dictionary modification, LISP-like cons-list parameter, pushing
>> on an array-like stack, using registers (with various callee-saved and
>> caller-saved conventions), etc. ". rest" or "&rest" parameters, passing
>> parameters through dictionary, using implicit parameters, importing
>> a parameter's structure in the dictionary, would be standard techniques
>> to allow any kind of parameter passing.
> I don't think this is possible. You can't link Forth to C directly, or
> Pascal to LISP. You need to generate adaptors, but these cannot handle
> all of the differences between the models. Some manual patching is needed
> in all of the systems I ever saw. Picking one method as native will
> reduce the number of adaptors you have to build from n/2*(n-1) to n-1.
   Well, it depends on what you call an adaptor. But adaptors could be
automatically generated. And we don't need a native convention to reduce
the number of adaptors: we just need the transitive closure of the
"is adapted to" relation to link every method to every other, which is
*quite* different. Having a native method is like having a kernel. We do
not do it, and I'm sure it would be harmful. Moreover, we may have adaptors
involving only subsets of a language, e.g. FORTH words with simple,
decidable stack behavior, etc.
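The transitive-closure argument above can be sketched in a few lines: as
long as the graph of hand-written adaptors is connected, an adaptor between
any two conventions can be composed automatically along a path, so no single
convention has to be "native". The convention names and string-wrapping
adaptors below are purely illustrative, not part of any real system.

```python
from collections import deque

def find_adaptor(direct, src, dst):
    """Breadth-first search over the 'is adapted to' graph, composing
    the direct adaptors found along the path from src to dst.
    Returns a callable adaptor, or None if dst is unreachable."""
    if src == dst:
        return lambda call: call
    frontier = deque([(src, lambda call: call)])
    seen = {src}
    while frontier:
        node, adapt = frontier.popleft()
        for (a, b), f in direct.items():
            if a == node and b not in seen:
                # Compose: first adapt src->node, then node->b.
                g = (lambda f, adapt: lambda call: f(adapt(call)))(f, adapt)
                if b == dst:
                    return g
                seen.add(b)
                frontier.append((b, g))
    return None

# Hypothetical direct adaptors: n-1 edges are enough to connect
# n conventions, since the rest are obtained by composition.
direct = {
    ("forth", "c"): lambda call: f"c({call})",
    ("c", "lisp"): lambda call: f"lisp({call})",
}
```

With only the two direct adaptors above, a Forth-to-LISP adaptor is derived
by composition rather than written by hand.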

>>    To me, distribution would be done at the HLL using parallel evaluators
>> for function evaluation (functions would then have to be annotated to
>> show how parallelization is possible); the LLL would still reflect the
>> sequential von Neuman machine architecture of today's CPUs. That's still an
>> opinion, though. Arguments pro/con welcome.
> How would the annotation translate into LLL terms? Library calls?
   There would be different kinds of annotations, and different means to
implement them. One standard way would be that user-visible objects, being
few, can afford an extra field pointing to a hash-table structure. Other
annotations could be implemented the other way round: there would be a hash
table associating objects to the annotation's value. When you know an
annotation is always present, don't hash it, but reserve a field for it
alone. All this can be done automatically, and optimized at the next
major GC.
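The two storage strategies above can be sketched as follows: a side table
mapping objects to a rarely-present annotation, versus a dedicated field
when the annotation is known to always be present. The `parallelizable`
annotation and the class names are hypothetical examples, not part of the
actual HLL design.

```python
import weakref

# Strategy 1: external side table. The object itself carries nothing;
# the annotation lives in a hash table keyed (weakly) on the object,
# so unannotated objects pay no per-object cost.
_annotations = weakref.WeakKeyDictionary()

class Fn:
    """A bare object with no annotation field of its own."""
    pass

def annotate(obj, value):
    _annotations[obj] = value

def annotation(obj):
    # Returns None for objects that were never annotated.
    return _annotations.get(obj)

# Strategy 2: when an annotation is always present, promote it to a
# reserved field on the object itself -- no hashing on access.
class AnnotatedFn:
    __slots__ = ("code", "parallelizable")
    def __init__(self, code, parallelizable):
        self.code = code
        self.parallelizable = parallelizable
```

Promoting a hashed annotation to a reserved slot is exactly the kind of
representation change a major GC pass could perform automatically, as the
text suggests.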
   As for library calls, annotations are like any object: they are bound
into LLL terms by the meta-objects which made them visible anyway. If you
know the object personally, you call its method directly; if you don't,
you directly call another object that may provide you the means to
directly call the object next time...
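The "ask another object once, then call directly next time" pattern
described above resembles a cached call site: the first call goes through a
resolver, and the resolved method is remembered so later calls are direct.
This is a minimal illustrative sketch; all class and method names are
invented for the example.

```python
class Resolver:
    """Stands in for the meta-object that knows how to find methods."""
    def __init__(self):
        self.lookups = 0  # count indirect lookups, to show caching works

    def lookup(self, obj, name):
        self.lookups += 1
        return getattr(type(obj), name)

class CallSite:
    """A call site that resolves indirectly once, then calls directly.
    (This simple cache assumes later calls use the same kind of object.)"""
    def __init__(self, resolver, name):
        self.resolver = resolver
        self.name = name
        self.cached = None

    def __call__(self, obj, *args):
        if self.cached is None:
            # First call: go through the resolver...
            self.cached = self.resolver.lookup(obj, self.name)
        # ...every later call reaches the method directly.
        return self.cached(obj, *args)
```

After the first invocation the resolver is never consulted again, which is
the point of the indirection: it hands you the means to call directly.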

>> I think we should rather allow interactive programmable translation of
>> C into our HLL. We'd then "manually" translate, and would produce scripts
>> as we do, that we'd then refine, generalize, multiplex, clean, and merge
>> into a single package, as we do...
> Good idea, but let's leave translating Spice as an exercise for the
> user ;-)
   Err, is Spice some language? Anyway, there's a subsubproject about
automatic translation, and the main language to translate from will be C...