Can lisp be an effective commercial application delivery language?
Harvey J. Stein
Sat, 10 May 1997 21:53:15 +0300
Kelly Murray writes:
> I want to see widespread commercial apps written in Lisp.
> I want to be able to get a job writing Lisp code as easy as I can
> writing C++ or JAVA (call a headhunter, tell them you're a Lisp programmer,
> see what they say)
This is my dream too, but I don't expect a new lisp machine to make it
a reality. Here's an article I posted to comp.lang.scheme a while ago
regarding the problem:
Subject: STOP ME before I CODE in C again!!!
A Schemer's Lament
How I Learned to Stop Worrying and Code in Assembler^H^H^H^H^H^H^H^H^H^H C.
Scheme's 10 Step Program for World Domination.
I. The problem
I've been coding in Lisp for over 10 years now, and Scheme for the
last 3. At this point Scheme is my favorite language. It's my
language of choice for any task at hand.
However, I'm becoming more and more disillusioned and depressed when I
consider the prospects for Scheme in the programming world at large.
I'm slowly coming to the conclusion that Lisp and Scheme will always
be fringe languages - that the computer world will continue to be
trampled by high level assemblers like C and low level objective
monstrosities like C++. Here's my reasoning - I hope someone can
prove me wrong.
The biggest problem with scheme is that one cannot make an application
written in scheme widely available.
Sure you can distribute the source code, but what is this going to get
you? The vast majority of computers have neither scheme interpreters
nor scheme compilers installed. Sure, if they really want the
application, they'll go and track down a scheme interpreter and
install it and run your app. But, this is only if they've decided
that they *really* want your app, and they must decide this without
having seen it first. This is a far cry from downloading a .tar.gz
file, unpacking and running make. People will only do it for large,
extremely attractive, well advertised applications.
Even if scheme interpreters and compilers *were* installed on every
Unix box on the face of the earth, what good would it do? Unless I'm
distributing some character based toy app which only talks to the
outside world by reading and writing files, then my app will be
interpreter dependent. I *cannot* write portable code.
Sure, I could write R4RS compatible code, which will thus run in the
majority of interpreters, but this doesn't give me much - I can read
and write files, I can't append to a file, I can maybe read a
character from the keyboard without getting stuck in a wait state (if
char-ready? isn't broken in this particular interpreter), I can
interact with the user in a line based query/response fashion, and I
can compute stuff. I can't use select, I can't do any networking, I
can't use X, I can't use curses, I can't fork, I can't run
subprocesses, ... Basically, I can't write a modern application, or
even a menu driven one!
Chucking R4RS, one *can* write a modern application. Every version of
scheme has its own (often limited) way of talking to the OS, all
mutually incompatible. Of course, compatibility also sucks in the
C/Unix world, but a) not on such a grand scale, and b) at least
someone's cataloged the differences - I can use gnu configure, or
xmkmf if necessary. In the scheme world, I'm lucky to have
compatibility between different *versions* of the same interpreter or compiler.
To some extent, I blame the R4RS committee here. They go and spec a
language, debating whether or not to include esoteric features such as
dynamic-wind and unwind-protect, but they don't spec an OS interface.
Of course, this isn't really part of the language, and so technically
speaking, shouldn't be in the language spec, but the R4RS committee is
the only committee with the respect necessary to issue an interface
spec - a foreign function interface, or a C library interface, or a
posix interface. What would have become of C had the C world proposed
the language, but left out the standard C library? The answer is that
it would be a fringe language like scheme.
Ok, so suppose I chuck portability, chuck R4RS compatibility, I write
for 1 interpreter, 1 compiler, and I don't distribute my source code.
I could still distribute binaries...
Of course, this would greatly shrink availability - I don't have
access to the 60 different flavors of Unix machine in common use, so I
can't provide binaries for them. Even if I did, this would be a major
burden for me, compared to just uploading cool_app.tar.gz to a few ftp sites.
Nonetheless, I guess this is possible. If I want the program (begin
(display "Hello, world") (newline)) to take up 500k of disk space,
allocate 4meg at startup, take 15 seconds to run, and be 10 times
slower than a similar app coded in C, then yes, I can use scheme->C or
bigloo or something to compile and link it, and then distribute the binaries.
This might change once Stalin is ready - I wrote a simple prime number
sieve in scheme, compiled it under scheme->C, bigloo, and stalin. The
scheme->C version and the Bigloo version were about 300k, the Stalin
version was about 30k. The scheme->C version took about 3 seconds to
generate all the primes <=100000, the Bigloo version about 1.7
seconds, and the stalin version .53 seconds - comparable to
/usr/games/primes - a C version with lots of tricks to make it fast.
But, I want to distribute applications *now*!
Where does all this leave scheme apps? It means that scheme apps
will mostly be developed for personal use by programmers who prefer
programming in scheme to programming in other languages. I can write
apps in scheme, and even put the code into scheme repositories, but
the apps won't get wide usage. If I want to do anything other than
write toy applications, I'll have to write to a particular interpreter
so as to get the needed OS & GUI access.
This pretty much means to me that scheme will never have widespread
usage and acceptance. Its use will be restricted to personal use, and
isolated proprietary usage. It will never have the kind of growth
that C and perl have exhibited because the scheme source code for
applications can only be distributed amongst the users of a particular
interpreter - developers working with one particular interpreter, not developers in general.
II. How could this be changed?
The very barest minimum needed is a scheme standard library
specification, and scheme interpreters which support it. The standard
library spec must include a simple and straightforward foreign
function interface. Everything else is basically syntactic sugar.
This is the minimum necessary to allow people to build serious
applications in portable scheme code. The portability would be
guaranteed by the scheme programmer restricting himself to a portable
class of foreign functions. One would be able to write scheme
programs which are portable to posix compatible platforms, for
example. At this point, a C foreign function interface would be sufficient.
Once one has direct access from scheme to the underlying OS, people
can easily build their own abstracted schemish interfaces to it. A
schemish posix interface, for example, or a generalized schemish OS
interface. The latter is more of a research project, and so is
unlikely to make it into a standard, but the former shouldn't be too
hard for people to hammer out and agree on, and should give a cleaner
posix interface than straight use of the foreign function interface.
In any case, with this bare minimum, at least people would be able to
release portable scheme libraries for access to various OS services -
a scheme sockets library, or networking library, or WWW library.
We'd finally have a portable foundation to work from.
Given this bare minimum, at least scheme would have a chance against
perl. It would have the capabilities to compete with powerful scripting languages.
(Of course, scsh is scheme's answer to perl. However, because it's big
and non-standard, I can't distribute neat little scsh scripts the same
way people distribute nice little perl scripts. Once the FFI is
standardized, it'd be nice to see some of scsh's OS interfaces
standardized into a scheme library, perhaps added to SLIB, and then
scsh might start to become a serious competitor.)
Given this one small addition - a standardized FFI - all the scheme
programmers out there will be able to start distributing their
wonderful applications. Since scheme is presumably so efficient to
program in, there should be a flood of wonderful apps which are head
and shoulders above anything written in C or perl. This will increase
demand for scheme interpreters to run all these wonderful apps, and
any FFI compliant interpreter will do. There'll be an explosion in
the number of machines with scheme interpreters, building on the
number of applications, ... At least, that's the theory.
Thus, it's my opinion that a standardized FFI would at least enable
scheme to compete in the scripting world with perl, and later with sh,
and even Tcl.
As far as I can see, the reason for Tcl's popularity is not its small
size (it's *big*). It's not its speed (it's *slow*). It's not its
elegance (no need for parenthetical comments here). Aside from the Tk
GUI (a *big* aside), it's the ease of adding C code to the
interpreter. Or, to think of it in the other direction, it's the ease
of using it to add a scripting language to a body of C routines.
With Tcl, there's no need to ever write a complicated main() again -
one can just develop a few specialized C routines, link them into the Tcl
interpreter, and start using them. Write some circuit analysis and
layout routines, and use them immediately.
With scheme+standardized simple FFI, the above would be just as easy
using any scheme interpreter.
To finish off the other interpreters/scripting languages, we need
scheme interpreters with particular properties. They must be *small*,
*fast*, and load libraries *quickly*. This is where we lose a lot of
the currently available interpreters. Perl, bash, wish, scm, and snow
(the no-Tk version of STk) all take up about the same amount of disk
space (actually, scm and snow take up about 1/3 that of bash,
perl and wish, but when running they take up about the same RAM.)
However, the non-scheme interpreters start up significantly faster:
For example, we have:
hjstein:~$ time bash -c 'echo $[1+1]'
0.02user 0.06system 0:00.11elapsed 72%CPU
hjstein:~$ time perl -e 'print 1+1,"\n"'
0.01user 0.08system 0:00.13elapsed 69%CPU
hjstein:~$ echo 'puts [expr 1 + 1] ; exit' | time wish
0.15user 0.13system 0:00.34elapsed 82%CPU
hjstein:~$ time scm -e "(begin (display (+ 1 1)) (newline))"
0.38user 0.07system 0:00.47elapsed 95%CPU
hjstein:~$ echo '(+ 1 1)' | time snow
0.21user 0.30system 0:00.54elapsed 94%CPU
Until those scm times come down closer to the perl times, people will
hesitate to replace short little perl scripts with scm scripts.
Furthermore, I've written some sizable pieces of code in STk - the
startup times get to be over 5 seconds (on a 486-66 running Linux)
because of the time necessary to load any significant amount of code.
This severely hampers its performance in the shell scripting arena.
I think that standardized FFI + small fast interpreters would be
enough to finish off other interpreters/scripting languages, or at
least make scheme a competitor (although for those Tk addicts, a good
Tk interface (a la STk, but built via the FFI on top of libtk, if
possible) would help).
But to encroach on the world of compiled code, we're going to need
more work - we're going to need compilers. Any language that takes
300k to produce "Hello, World!" is not going to be competition for C.
Add to the above needs a good, type inferring compiler, one that can
produce small fast binaries, and scheme might at least have a chance
against C. Make this scheme compiler extremely portable (as opposed
to the many available which only target particular microprocessors),
and we might get to the point where people *really* don't need to
program in C any longer...
III. A new beginning?
As I said, the first fundamental requirement is a standardized foreign
function interface, one which at least supports C. Since there
doesn't seem to be any standards group working on such a thing, maybe
we could at least get a de facto standard. Maybe the currently active
scheme interpreter/compiler developers could get together and agree to
adopt a particular C FFI & incorporate it into their systems. Or is
this just wishful thinking - the feeble grasping of a scheme
programmer for a ray of hope?
Dr. Harvey J. Stein
Berger Financial Research Lose C now, ask me how!