Mon, 5 May 1997 08:07:58 -0500 (CDT)
> I was going to provide a point by point rebuttal to one of the posts
> about a C->Lisp compiler but I realized that it is a waste of time. If
> anyone gets enough of the Unix API up and running to allow a
> C->Lisp->Binary compiler to compile TeX (or whatever), then someone will
> use it to compile GCC. The C Library and Unix API can be provided
> through a simple wrapper on top of the library that runs TeX. As long as
> LispOS can run platform-binaries at all, then the GCC port can use the
> existing GCC backends to produce the binaries. The runtime system will
> not know or care if they were originally written in C, Lisp or COBOL. In
> my mind, a C->binary compiler for LispOS with a Unixish library is
> an inevitable consequence of a C->Lisp compiler and the C->Lisp compiler
> is just a distraction along the way.
I think that was probably a rebuttal to me.
I'm now coming around on this issue. I once envisioned a system built from
the ground up to support new OS concepts, where, like the Lisp Machines of
old, the low level and the high level language would be Lisp. And, like
the Lisp Machines of old, the C compiler generated Lisp (at some level).
In such a system, I don't think it makes much sense to support a Unixish
library in any way other than as a source/semantic translation.
For example, such a system could have a given execution thread's objects
scattered all over the place, not in a single address space, so fork() is
non-trivial. However, fork() is usually just a kludge covering an
oversight in Unix: the lack of a spawn(). Generally, when people use
fork() they intend to exec() something right away. In this case, why
copy the entire process just to replace it? A C->Lisp translator could
notice this and generate the appropriate code. If, in fact, you did
want to create copies of yourself (like is often done to spawn off
multiple daemons, which is generally a kludge for not having threads),
then the compiler might have to compile in some kind of semantic
copying code for any objects introduced in the program that might
get distributed about so that these could be easily made available
to the children.
There are other examples of why it doesn't make sense to support Unix/C
APIs. For example, why support printf when the C->Lisp compiler could
easily convert printfs to CL formats? Sure, this could be done by a
backend in binary, but why? It's so much easier to make this a
source to source translation (although it occurs to me now that for
format strings that are not constants, it will involve calling eval).
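As a toy illustration of the source-to-source idea (mine, not an actual translator), here is a C sketch that rewrites a *constant* printf format string into the corresponding CL format control string. Only a few directives are handled; a real translator would cover the full printf syntax, and non-constant strings would fall back to the runtime (eval) case mentioned above.

```c
/* Toy sketch: translate a constant printf format string into a CL
 * format control string.  %d -> ~D, %s -> ~A, %% -> %, newline -> ~%,
 * and a literal ~ must be doubled since ~ is CL format's escape char. */
#include <string.h>

static char *printf_to_cl_format(const char *src, char *dst)
{
    char *out = dst;
    for (const char *p = src; *p; p++) {
        if (*p == '%' && p[1] == 'd')      { *out++ = '~'; *out++ = 'D'; p++; }
        else if (*p == '%' && p[1] == 's') { *out++ = '~'; *out++ = 'A'; p++; }
        else if (*p == '%' && p[1] == '%') { *out++ = '%'; p++; }
        else if (*p == '~')                { *out++ = '~'; *out++ = '~'; }
        else if (*p == '\n')               { *out++ = '~'; *out++ = '%'; }
        else                               { *out++ = *p; }
    }
    *out = '\0';
    return dst;
}
```

So `printf("Hello %s, %d%% done\n", name, pct)` would become the equivalent of `(format t "Hello ~A, ~D% done~%" name pct)`.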
So, I think a high level translation to Lisp is entirely >justified<.
AND, I don't feel that the generated code, even from a fairly naive
compiler, would be that bad. The only inherently inefficient thing
would be the generation of vector objects for the mallocs, but this
has such enormous benefits in being able to catch memory leaks/bad
pointers that it's easily justified. If the Lisp compiler were good
at all, then the translated code would otherwise compile to fairly
efficient code. I just don't see the problems. The C->Lisp compiler
could provide all of the type information from the original C program
that would allow the Lisp compiler to make efficient representations.
Also, the long, long-term goal of semantic representation/translation/
optimization is a good one, but not a necessary one for the
success of a Lisp OS, which is the ultimate goal here.
Now, why am I coming around? Because there seems to be tremendous
energy behind using Linux/BSD as the dev platform. I hear
a lot of talk of just taking Linux and bit-by-bit replacing the C
parts with Lisp. In this scenario, the Unix model will be with us
for some time to come. It's not my ideal for a Lisp OS, but it is
an interesting project with definite goals which will ultimately have
tremendous benefits (it will, for example, be the best platform for
developing the Lisp OS with fully Lispy internals). So, let's not
spend any thought on C->Lisp now. It's definitely a sideshow.
> Paul Prescod