Common or rare Lisp?
Sat, 17 May 97 15:06:16 +0200
It seems we're still very far from even a rough consensus on what a Kernel
Lisp should be good for, or a rough outline of what it might contain.
Until that's clear, I doubt it makes much sense to get down to concrete
proposals (so I'm not yet going into any).
I think a Kernel Lisp should be both simple to understand and implementable
very efficiently on common hardware, without excessive compiler magic and
without needing much support from an operating system. It should allow e.g.
Common Lisp to be layered on top of it and compiled into it without
considerable overhead.
If I'm interpreting the recent comments correctly, it seems a realistic
tendency (perhaps the only one) towards a reasonably broad consensus may
be to take Common Lisp as a "middle layer", extending it "downwards" with
more primitive facilities (e.g. immutability, computation with
machine-level numbers with overflow indicators, thread-aware allocation and
access, low-level storage manipulation, non-reflective type definition,
macro access to declarations and potential type inference), mostly by
defining a clean programming interface to something which an efficient
implementation would need anyway (but without strong commitment towards
a particular implementation), and documenting what low-level programmers
can expect to work even in the lowest levels of the system (e.g. what's
usable if you don't want to cause a garbage collection, what's usable
without explicit synchronization, and so on).
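As a rough illustration of what a clean interface to such a low-level facility could look like, here is a sketch of a machine-word addition primitive with an overflow indicator, emulated portably on top of standard Common Lisp bignums. The name WORD+ and the 32-bit word size are my own illustrative assumptions, not a proposal; a real kernel would map this directly onto a hardware add.

```lisp
;; Hypothetical kernel primitive: unsigned machine-word addition with an
;; overflow indicator, emulated here in portable Common Lisp.
;; WORD+ and the 32-bit word size are illustrative assumptions.
(defconstant +word-bits+ 32)

(defun word+ (a b)
  "Return (values sum overflow-p) for unsigned +WORD-BITS+ addition."
  (let ((raw (+ a b)))
    (values (ldb (byte +word-bits+ 0) raw)
            (>= raw (ash 1 +word-bits+)))))
```

E.g. (word+ #xffffffff 1) wraps to 0 and signals the overflow via the second value, instead of silently allocating a bignum.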
This would definitely be a major task. However, if Common Lisp is supposed
to be used anyway, the same problems would arise when trying to integrate
any Common Lisp implementation with some potentially smaller Kernel Lisp,
or a Virtual Machine Specification. If done right, this may even be useful
for the further evolution of Common Lisp, whether as an official standard
or as an informal portability toolkit which can be implemented with more or
less effort and efficiency in various Common Lisp systems.
"Reginald S. Perry" <email@example.com> wrote:
> It seems to me that we could define another metaclass that
> gives us static(ish) type semantics. What this would do is allow us to
> have fast method access for the low-level stuff or for people who dont
> want or need the full-blown MOP but allow everyone else to have the
> full power of CLOS if they need it.
Maybe I'm confused or just insufficiently informed about CLOS, but
I've got the impression that the behaviour is too distributed (in time,
ranging from separate compilation through loading to execution, and
potentially throughout a complete program compiled in many parts)
for a decent modular static analysis, without excessive reliance on
run-time code generation or whole program analysis. Creating another
metaclass doesn't appear to improve this very much. What would be more
helpful would be something which allows the compiler to derive useful
optimization information from inlining, analysis of lexical scoping,
and understanding a relatively simple type construction protocol.
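A small sketch of the kind of optimization information I mean, in ordinary Common Lisp (the names are illustrative only): a purely lexical helper that no reflective machinery can redefine later, which a compiler may therefore inline and specialize from the declarations alone.

```lisp
;; SQ is purely lexical: nothing outside SUM-SQUARES can redefine it,
;; so a compiler is free to inline it and specialize the arithmetic
;; to double-floats based on the type declaration alone.
(defun sum-squares (v)
  (declare (type (simple-array double-float (*)) v))
  (flet ((sq (x) (* x x)))
    (declare (inline sq))
    (let ((s 0.0d0))
      (dotimes (i (length v) s)
        (incf s (sq (aref v i)))))))
```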
Imagine bignums implemented as fixnum vectors, and complex numbers
implemented as pairs of scalars, with all numeric code going through
a completely dynamic MOP, just in case anyone wanted to change the
behaviour of pairs or vectors; this would easily take first prize
for slowest implementation. I don't think it's good to require so many
special cases in the compiler for efficient implementation, since it
implies that user code needing something similar (but not the same)
will again be slow, or depend on some implementation-specific tricks.
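To make the contrast concrete, here is a sketch of complex numbers as plain pairs of scalars with a direct, non-generic addition that a compiler can open-code. The names CPX and CPX+ and the representation are illustrative assumptions, not a proposal.

```lisp
;; Complex numbers as plain pairs of scalars, with direct (non-generic)
;; addition that a compiler can open-code -- no MOP dispatch involved.
(defstruct (cpx (:constructor cpx (re im)))
  (re 0.0d0 :type double-float)
  (im 0.0d0 :type double-float))

(declaim (inline cpx+))
(defun cpx+ (a b)
  (cpx (+ (cpx-re a) (cpx-re b))
       (+ (cpx-im a) (cpx-im b))))
```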
Something like CLOS should be built onto a more primitive core, not
the other way around. It's not necessary for such a core system to
be fully reflective. Unlimited reflection means that fewer invariants
can be known at compile time (or easily understood by human readers,
for that matter). Where full reflection is needed, this should be
implemented explicitly on top of a more basic protocol, such that
primitive types/operations can still be implemented without all this
overhead. Full-blown generic function dispatch doesn't really
belong in the base level either; if it seems to be needed there, that's
very likely a warning sign that a program needing different semantics
is going to be slow. (This isn't meant to rule out that a particular
compiler or run-time system might still have special optimization
strategies to make the complicated stuff even more efficient.)
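As a sketch of what "built on top of a more primitive core" could mean, here is a toy single-dispatch mechanism made from nothing but closures and a hash table. Adding methods at run time is an ordinary library feature here, not a base-level requirement; all names and the dispatch tags are my own illustrative assumptions.

```lisp
;; Map a value to a dispatch tag; a real system would use a richer
;; type construction protocol, this is only a toy.
(defun tag-of (x)
  (cond ((integerp x) 'integer)
        ((stringp x) 'string)
        (t 'other)))

(defun make-dispatcher ()
  "Return two closures: (add tag fn) and (call &rest args)."
  (let ((methods (make-hash-table)))
    (values
     (lambda (tag fn) (setf (gethash tag methods) fn))
     (lambda (&rest args)
       (let ((fn (gethash (tag-of (first args)) methods)))
         (if fn
             (apply fn args)
             (error "No method for tag ~S" (tag-of (first args)))))))))
```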
"Alaric B. Williams" <firstname.lastname@example.org> wrote:
[POSITION STATEMENT: Specifications for a New Lisp Dialect]
Based on your proposal, I have the impression that we have very different
intentions/goals for a Kernel Lisp. I'd even keep any form of automatic
dispatching or inheritance out of the kernel, and use a more
implementation-oriented type system, aimed almost exclusively at better
efficiency. Linear
logic is a nice idea, like many other things, but if you want it to be a
basic feature, I'd recommend developing a complete and consistent static
type system for it, perhaps creating something like a "Linear-Object-ML
with s-expression syntax and macros". Without a good theoretical foundation,
comprehensive type systems like what's needed (as far as I can tell) for
sufficiently expressive and efficient linear logic could easily become more
and more chaotic, as you discover more and more problems.
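For illustration only, here is what a run-time-checked linear discipline might look like: each box may be consumed exactly once. A serious design would of course enforce this statically, which is exactly where the theoretical foundation is needed; the names are illustrative.

```lisp
;; Run-time-checked linearity: each box may be consumed exactly once.
;; A real kernel would want this checked statically by the type system;
;; this dynamic version only illustrates the use-once discipline.
(defstruct (linear-box (:constructor box (contents)))
  contents
  (consumed-p nil))

(defun consume (b)
  "Return the contents of B; signal an error on a second use."
  (when (linear-box-consumed-p b)
    (error "linear value consumed twice"))
  (setf (linear-box-consumed-p b) t)
  (shiftf (linear-box-contents b) nil))
```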
If you really want e.g. first-class environments, I think you should define
them on top of a non-reflective kernel, making explicit that/where you are
ready to accept reduced performance (mostly because they can prohibit some
otherwise valid and obvious low-level optimizations, forcing values
out of registers into memory so that potential reflection can reach them, or
requiring excessively fine-grained meta-information about the compiled code).
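To illustrate, first-class environments could be provided as an ordinary library data type on top of a non-reflective kernel, so that only code which actually creates and passes environments pays for the indirection (names are illustrative):

```lisp
;; First-class environments as an ordinary library data type: a chain
;; of hash tables with explicit lookup and definition.  The kernel
;; itself needs no reflection for this.
(defstruct (env (:constructor make-env (&optional parent)))
  parent
  (bindings (make-hash-table)))

(defun env-lookup (e name)
  (if (null e)
      (error "Unbound variable: ~S" name)
      (multiple-value-bind (value found) (gethash name (env-bindings e))
        (if found value (env-lookup (env-parent e) name)))))

(defun env-define (e name value)
  (setf (gethash name (env-bindings e)) value))
```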
-- Marc Wachowitz <email@example.com>