LLL (HLL too?): biased towards today's technology

Francois-Rene Rideau rideau@clipper.ens.fr
Mon, 16 Jan 95 15:31:56 MET


>       It seems that much of our recent discussion about the LLL seems to 
> have assumed that it will be a linear language, fit to run on the 
> now-common linear uniprocessor systems of the world.  That's great, but 
> I'm not sure it takes into account enough of the future [...]
   I agree, but then, any LLL is low-level; and adapting a program to
one specific parallel computer is *very* different from adapting it to
another one. So, if we want to support portable parallelism, our "LLL"
won't be that "low-level". That's an interesting point of view.
   Perhaps, after all, the LLL would be just a concrete representation
of our HLL...

> Cellular Automata or Neural Networks.
   I don't think those will ever support the kind of programming we know
(though they may interface to it), so this is no problem for *us*. Those
kinds of machines really are different from the computers we know...

> I can see computers that are not based on CPUs 
> and centralized clocks, but are a collection of small, extremely 
> specialized units, each synchronizing and communicating with only a small 
> subset of the others.  I can see linear memory being replaced by the 
> complex flow of information between such processing units.
   That I see better as the mid-term future of computers (some tens of years),
though the software concepts for these are not well understood yet. But I'd
also say the units will be CPUs; well, not the kind of expensive CPUs the
industry currently likes, but certainly something more MISC - minimal
instruction set computer - like the MuP21.
   I think our system can adapt to it, if it can adapt to a net of old
Apple ]['s, C64's, and other old computers. The point is just what you said:
decentralized computing. No more big centralized programs, but small units
that are automatically, dynamically scheduled around a net of computers.
   Now, there is still a problem: should a chip fail in a huge system
(you talked about 64K-chip machines), the system should keep working, which
means some kind of redundancy in information so that it doesn't get lost.
This is the harder problem.
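   To make the redundancy idea concrete, here is a minimal sketch of k-way
replication: every datum is stored on k distinct nodes, so the failure of
any single chip loses nothing. The names (Store, put, get, fail) are my
own illustration, not anything from TUNES:

```python
import random

class Store:
    def __init__(self, num_nodes, k=3):
        self.k = k
        self.nodes = [dict() for _ in range(num_nodes)]  # each node's local memory
        self.alive = [True] * num_nodes

    def put(self, key, value):
        # Replicate the datum onto k distinct live nodes.
        live = [i for i, up in enumerate(self.alive) if up]
        for i in random.sample(live, self.k):
            self.nodes[i][key] = value

    def fail(self, i):
        # Simulate a chip dying: its local memory is gone for good.
        self.alive[i] = False
        self.nodes[i] = {}

    def get(self, key):
        # Any surviving replica will do.
        for i, up in enumerate(self.alive):
            if up and key in self.nodes[i]:
                return self.nodes[i][key]
        raise KeyError(key)

store = Store(num_nodes=8, k=3)
store.put("x", 42)
store.fail(0)            # one node dies...
print(store.get("x"))    # ...the datum survives on its other replicas
```

Of course a real 64K-chip machine would want something cheaper than full
replication (error-correcting codes, say), but the principle is the same.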

>       What then can we do to ensure that our LLL/HLL will work quickly, 
> efficiently and most importantly, logically on these new machines?  
> Perhaps if we are going to both to start this entire project from 
> scratch, we should get rid of linear code altogether (except at the very 
> lowest levels)?
   This seems OK to me. It means our HLL must include some parallelizing
constructs and allow more to be defined; in portable code, our LLL will
then certainly be limited to describing very small routines embedded in
HLL constructs. So we must have a deep integration between the HLL and
the LLL...
   Or am I getting mixed up?
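   The split I mean could be sketched like this (par_map and kernel are
illustrative names of my own, not TUNES code): the high-level construct
owns all the parallel structure and scheduling, while the low-level code
is confined to a tiny per-element routine it embeds.

```python
from concurrent.futures import ThreadPoolExecutor

def par_map(kernel, data):
    # The HLL-side construct: it decides scheduling and distribution;
    # the kernel never needs to know how or where it runs.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(kernel, data))

def kernel(x):
    # The LLL-side part: a small, self-contained routine, easy to
    # compile for whatever processing unit it lands on.
    return x * x

print(par_map(kernel, range(5)))  # [0, 1, 4, 9, 16]
```

The same HLL construct could then be retargeted to a 64K-chip machine
without touching the kernels at all.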

--    ,        	                                ,           _ v    ~  ^  --
-- Fare -- rideau@clipper.ens.fr -- Francois-Rene Rideau -- +)ang-Vu Ban --
--                                      '                   / .          --
TUNES is a Useful, Not Expedient System		          |   |   /
WWW: http://acacia.ens.fr:8080/home/rideau/	         --- --- //
Snail mail: 6, rue Augustin Thierry 75019 PARIS FRANCE   /|\ /|\ //
Phone: 033 1 42026735                                    /|\ /|\ /