[gclist] Get Real! Require GC (was re: quality of impl., etc.)

Paul R. Wilson wilson@cs.utexas.edu
Mon, 22 Apr 1996 08:25:02 -0500


>From nickb@harlequin.co.uk Mon Apr 22 03:55:21 1996
>Date: Mon, 22 Apr 1996 09:54:45 +0100
>From: Nick Barnes <nickb@harlequin.co.uk>
>
>Rumour has it that some early Lisp Machines had broken GCs. Unlike
>[typical programs in] some other languages, [some typical programs in]
>Lisp allocates slowly enough that the absence of a GC used to be
>acceptable.

Good point.  As I understand it, the early Lisp Machines had a GC so
simple and slow that people would often turn it off, let the machine
run until swap space was exhausted, and then reboot.

I think this is interestingly different from having no GC at all.  If
you have a lame but working GC, you can at least in principle buy
a lot of RAM so that you get decent performance and your system can run
indefinitely.  (These days, you can buy RAM for less than $20 a megabyte,
so even ludicrous amounts of RAM aren't so ludicrous anymore.)  With *no* 
GC, there's no bound on resource usage for many common programs that run
indefinitely.

If someone were to come up with a Common Lisp (say) with no GC, I think
that they should be required to say it's "Common Lisp minus GC", not
"Common Lisp."  Being able to turn the GC off is a different thing.
(For example, if I *know* my program doesn't allocate too much memory
before exiting, I might choose to turn off GC to adjust performance
tradeoffs.  But if I don't know that, then GC should be available, and
I think the language spec should require that.)

One of the consequences of the lame GCs on the early Lisp Machines was
that people did in fact end up programming in "Lisp minus GC" rather
than Lisp.  To avoid exhausting swap space quickly (and running slowly),
they went to a lot of trouble to use side effects instead of applicative
techniques.  In effect, the poor performance of the GC reintroduced
problems Lisp was supposed to have solved at the language level.

There were also facilities for stack-like allocation that you could use
manually.  (A lot like GNU obstacks.)  This was the source of endless bugs,
as I understand it, because it's not nearly as safe as programming in real
Lisp---you get dangling pointers, etc.  While I don't think there's anything
wrong with providing such language extensions, I think there's something
broken about a language implementation that effectively requires their
use, just to get remotely passable performance.  These days, GC is
well enough understood that there's simply no excuse.

(Maybe Henry or somebody else experienced with the old Lisp Machines could
comment here---I'm getting out of my depth.)