HLL is not against LLL
Mon, 3 Jul 1995 00:43:27 -0400 (EDT)
[I inverted the order of your comments to make them easier to reply to]
> > The HLL implementation has to be written in itself (or a subset of
> > itself) since reflectivity is a requirement (one might want to observe
> > or modify the implementation dynamically).
> And the LLL is this subset of the HLL we write it into !
I thought the LLL was an intermediate language into which you
wanted the compiler(s) to produce code. (LLL as a target language)
But it seems you want to use it as a host language, to write the
compiler(s) in that language. I think that would be a "bad" thing; see
below.
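The distinction matters: in the first reading the LLL is only the
*output* of a compiler that is itself written in the HLL. A toy sketch
of that arrangement (hypothetical names, not the actual Tunes design):
the compiler lives in a high-level language, and the LLL is merely a
tiny stack-machine bytecode it targets.

```python
# Toy illustration (hypothetical, not any real Tunes component):
# the compiler is written in a high-level language (here Python);
# the LLL is only its *target* -- a small stack-machine bytecode.

def compile_expr(expr):
    """Compile a nested tuple like ('+', 1, ('*', 2, 3)) to bytecode."""
    if isinstance(expr, int):
        return [("PUSH", expr)]
    op, lhs, rhs = expr
    return compile_expr(lhs) + compile_expr(rhs) + [(op, None)]

def run(bytecode):
    """A trivial interpreter for the stack-machine target language."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# run(compile_expr(('+', 1, ('*', 2, 3))))  # -> 7
```

Nothing here requires the compiler itself to be expressed in the
bytecode; the LLL stays a target language, not a host language.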
> I am still convinced that a LLL is a very good speed-up for the
> development of a computing system.
To me it looks like a slow-down. If you can write your compilers in a
HLL, why would you use a LLL? It just makes things more difficult. Of
course we need an implementation of that HLL to begin with; we'll have
one if we start from an existing language (we may want to modify that
implementation a little, so I suggest we choose one that is free).
> Bytecodes and LISP interpreters are
> what I call LLLs.
Surely we don't want to write a compiler in byte-codes. In something
that can be translated to byte-codes, maybe, but that means the
compiler is not implemented in the LLL.
> But they are unextensible, unadaptable LLLs. I prefer
> using some FORTH-like LLL, so it can adapt to the hardware (easy to
> add words written in assembly) and extend naturally into the HLL, thus
> yielding much better integration.
Writing the HLL compiler in a language without an object system makes
it more difficult to modify. Knowing that the compiler can/will be
made available at run-time, a program might want to recompile itself
with a modified compiler (to avoid the Pentium floating-point bug, for
example). That will be more difficult and "ugly" if it requires
hacking low-level details instead of a high-level fix like subclassing
a component of the compiler.
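The kind of high-level fix I have in mind could look roughly like this
(a minimal sketch with hypothetical class names, assuming the compiler
exposes its code generator as an object): the running program swaps in
a subclass that overrides one method, instead of patching machine-level
details.

```python
# Hedged sketch (hypothetical names, not a real compiler API):
# with the compiler available at run-time as high-level objects,
# a program can fix code generation by subclassing one component.

class CodeGen:
    def emit_div(self, a, b):
        # Default strategy: rely on the hardware divide.
        return a / b

class PentiumSafeCodeGen(CodeGen):
    def emit_div(self, a, b):
        # Route around a (hypothetical) flawed FDIV by computing
        # the quotient another way, e.g. via a reciprocal.
        return a * (1.0 / b)

class Compiler:
    def __init__(self, codegen=None):
        # The code generator is a pluggable component.
        self.codegen = codegen if codegen is not None else CodeGen()

    def divide(self, a, b):
        return self.codegen.emit_div(a, b)

# A program that discovers the bug recompiles itself with the
# patched component -- no low-level hacking involved:
patched = Compiler(PentiumSafeCodeGen())
```

The point is not this particular class layout, but that an object
system lets the fix be a small, local override rather than surgery on
FORTH words or assembly.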