LLL (HLL too?): biased towards today's technology
Chris Harris
chharris@u.washington.edu
Sat, 14 Jan 1995 22:11:42 -0800 (PST)
Hello Folks,
This list hasn't been very active the past few days, so I thought
I'd throw out my latest thought....
Much of our recent discussion about the LLL seems to have assumed
that it will be a linear language, fit to run on the now-common linear
uniprocessor systems of the world. That's great, but I'm not sure it
takes enough of the future into account. People are starting to "get"
the benefits of multiprocessing, and it's becoming almost reasonably
priced. (For about $1K more than I paid for this here Pentium, I could
have gotten a box with two CPUs in it.) For the moment, multiple
processors don't seem to be much of a problem for (decent)
multitasking OSs.
What about a few years down the road, though? I can see the return
of a more analog-like computer, perhaps designed to support complex
Cellular Automata or Neural Networks. Heck, Thinking Machines, which just
went bankrupt, made machines with 64,000+ little processors, all
working at the same time. I can see computers that are not based on CPUs
and centralized clocks, but are a collection of small, extremely
specialized units, each synchronizing and communicating with only a small
subset of the others. I can see linear memory being replaced by the
complex flow of information between such processing units.
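Just to make the picture concrete, here's a rough sketch of what "lots
of little units, each talking only to its neighbors" looks like, even
when written in plain old linear C: a one-dimensional cellular
automaton where every cell computes its next state purely from itself
and its two neighbors. (The choice of rule 110 and all the sizes here
are just my own picks for illustration, not anything we've agreed on.)

#include <stdio.h>
#include <string.h>

#define NCELLS 64
#define NSTEPS 20

int main(void)
{
    unsigned char cur[NCELLS], next[NCELLS];
    int i, t;

    /* Start with a single live cell; everything else is 0. */
    memset(cur, 0, sizeof cur);
    cur[NCELLS - 1] = 1;

    for (t = 0; t < NSTEPS; t++) {
        for (i = 0; i < NCELLS; i++)
            putchar(cur[i] ? '#' : '.');
        putchar('\n');

        /* Each "unit" looks only at itself and its two neighbors
         * (wrapping around at the edges) -- purely local
         * communication, no global state anywhere. */
        for (i = 0; i < NCELLS; i++) {
            int left    = cur[(i + NCELLS - 1) % NCELLS];
            int self    = cur[i];
            int right   = cur[(i + 1) % NCELLS];
            int pattern = (left << 2) | (self << 1) | right;
            /* Rule 110: the new state is bit 'pattern' of 01101110b. */
            next[i] = (110 >> pattern) & 1;
        }
        memcpy(cur, next, sizeof cur);
    }
    return 0;
}

On the hardware I'm imagining, every cell would be its own tiny
processor and the two nested loops would simply disappear.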
What then can we do to ensure that our LLL/HLL will work quickly,
efficiently and, most importantly, logically on these new machines?
Perhaps, if we are going to bother to start this entire project from
scratch, we should get rid of linear code altogether (except at the very
lowest levels)?
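By "non-linear code" I mean something along the lines of a dataflow
graph: instead of a single instruction stream marching along a program
counter, the program is a web of little operators that fire whenever
their inputs show up. Here's a crude sketch in C (all the names and the
example expression are made up for illustration, not a proposal for the
actual LLL):

#include <stdio.h>

/* A node fires when both of its inputs have arrived; its result is
 * then delivered to whatever node is wired to its output. There is
 * no "next instruction" -- only wiring between nodes. */
struct node {
    int (*op)(int, int);   /* the operation this unit performs   */
    int inputs[2];         /* latched input values               */
    int have[2];           /* which inputs have arrived so far   */
    struct node *out;      /* downstream node (NULL = print it)  */
    int out_port;          /* which input port of 'out' we feed  */
};

static int add(int a, int b) { return a + b; }
static int mul(int a, int b) { return a * b; }

/* Deliver a value to a node; if the node is now complete, it fires
 * and forwards its result -- computation spreads along the wires. */
static void send(struct node *n, int port, int value)
{
    if (n == NULL) {                   /* off the edge of the graph */
        printf("result: %d\n", value);
        return;
    }
    n->inputs[port] = value;
    n->have[port] = 1;
    if (n->have[0] && n->have[1])
        send(n->out, n->out_port, n->op(n->inputs[0], n->inputs[1]));
}

int main(void)
{
    /* Wire up (2 + 3) * 4 as a graph rather than a sequence. */
    struct node times = { mul, {0, 0}, {0, 0}, NULL,   0 };
    struct node plus  = { add, {0, 0}, {0, 0}, &times, 0 };

    send(&plus,  0, 2);    /* values arrive in no particular order */
    send(&times, 1, 4);
    send(&plus,  1, 3);
    return 0;
}

Run it and it prints "result: 20" -- the point being that the order in
which values arrive doesn't matter, only the wiring between the units.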
Any thoughts on this?
-Chris
"If patterns of 1s and 0s were 'like' patterns of human lives and death,
if everything about an individual could be represented in a computer by a
long string of 1s and 0s, then what kind of creature would be represented
by a long string of lives and deaths?" --Thomas Pynchon