Reflection/built-in assemblers

Jordan Henderson jordan@Starbase.NeoSoft.COM
Mon, 12 May 1997 06:48:08 -0500 (CDT)

> I've been following the 'reflection' thread with some interest,
> because I agree with the goals of this approach.  That's pretty much
> what the Lisp Machines did--they were programmed in Lisp, but
> introspected all the way down to the hardware.  For example, many of
> the 'hard cases' of the machine instructions trapped so that they
> could be handled _in Lisp_!

Correct me if I'm wrong (I'm certainly no expert in this area), but isn't
it true that the Lisp Machines could do this so well because their machine
instructions had direct support for Lisp -- that the hardware was, in
effect, a Lisp VM?  If so, then we have solid experience behind the
approach of defining the whole system on top of a LispVM.

As I've said elsewhere, I'm attracted to the elegance of defining the
whole system on top of a LispVM.  We get ultimate portability.  The 
system would run hosted on top of other OSs, in web browsers (yoww!
But, I guess if you are willing to run Internet Exploder anyway, what's
a Virtual OS on top of that??), multiple versions of the OS could run
on the same machine (different LispVMs on top of Mach or some other
substrate), etc.  I'll state it again: if the LispVM were register
based, it could be compiled down to the machine efficiently and it
would run efficiently as well (perhaps not optimally).  (Of course,
a compiled register-based LispVM would not be ideal for running
multiple copies on top of Mach, etc.)
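
To make "register based" concrete, here is a toy sketch of what the core
of such a VM might look like.  The three-address instruction format and
opcode names are entirely my own invention, not a design proposal; a real
LispVM would also need tagged values, GC support, and call/return
conventions.

  ;; Toy register-based VM core.  Instructions are three-address:
  ;; (opcode dest src1 src2).  Because operands name registers
  ;; directly, each instruction maps naturally onto one or two
  ;; machine instructions when compiled down to the hardware.
  (defun run (code regs)
    (dolist (insn code regs)
      (destructuring-bind (op d &optional a b) insn
        (case op
          (:load (setf (aref regs d) a))              ; d <- literal a
          (:add  (setf (aref regs d)
                       (+ (aref regs a) (aref regs b)))) ; d <- a + b
          (:move (setf (aref regs d) (aref regs a)))))))  ; d <- a

  ;; (run '((:load 0 2) (:load 1 40) (:add 2 0 1))
  ;;      (make-array 4 :initial-element 0))
  ;; => #(2 40 42 0)

A stack-based VM, by contrast, would have to be compiled through an extra
analysis pass to recover which stack slots can live in machine registers.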

However, there is a practical problem with this approach which I will
discuss below.

> I would suggest that the way to get bootstrapped is to first write an
> excellent native Lisp cross-compiler which handles 'basic' Lisp --
> that part of Lisp that is necessary to write the compiler itself and
> most of the OS -- but is considerably smaller than Common Lisp.  By
> having access to the compiler and having complete control over the
> code generated, you are in a position to make sure that _important
> optimizations get done in the right place -- in the compiler --
> instead of having to constantly fight the compiler and be forced to
> program in machine language or use 'sub-primitives'.
> This isn't as big of a job as you might think, since basic Lisp is
> really a pretty small language.  I would recommend reading the papers
> about the Lucid Lisp compiler, which had some very interesting ideas
> about how to organize the compilation.  In particular, they kept
> things in Lisp form until just before code generation, and went so
> far as to put functions into a form like:
> (lambda (r1 r2 r3) ...)
> where r1, r2, r3, were the actual registers to be used in the compiled
> code.
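
To illustrate the form being described (this is my own made-up example,
not actual Lucid compiler output): a source function like

  (defun add3 (x y z) (+ x (+ y z)))

might, after register allocation but while still in Lisp form, be carried
by the compiler as a lambda whose parameters name the machine registers:

  (lambda (r1 r2 r3)        ; r1=x, r2=y, r3=z, already in registers
    (+ r1 (+ r2 r3)))       ; arithmetic now refers to registers directly

The attraction is that optimizations can keep working on ordinary Lisp
forms right up until code generation.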

We already have a good CL compiler in CMU-CL.  How easy will it be to 
write a really good compiled small Lisp?  And then, how easy will it be
to write a completely standard CL on top of that?  It seems that this 
small Lisp will have to be more than bare-bones to support some of CL's
data structures efficiently.

Here's the rub.  We want to get started; there is energy in momentum.
We have CMU-CL, which generates 386 machine code (I believe; I've not
actually used it myself), and we have the Flux toolkit.  We could marry
these and get to a prototype environment fairly soon.  But this is not
in line with my goal of hosting the whole thing on top of a LispVM.
Perhaps this will be the ideal environment to prototype this LispVM,
in CL.  I don't know.  I do feel that if we don't get a Lisp-only
environment, then we might end up with a novelty Lisp environment on
top of Linux or BSD or something.  If people aren't forced to use their
Lisp tools and forsake their UNIX tools, then the Lisp tools might be
second class.  For example, who is going to build socket libraries in
CL when the underlying socket libraries are so good and well tested?

Do others share this fear?

> -- 
> Henry Baker
> www/ftp directory URL:

-Jordan Henderson