a few queries
Francois-Rene Rideau
rideau@nef.ens.fr
Mon, 2 Dec 1996 10:51:26 +0100 (MET)
> you flame a lot about proofs and parallelism. . .
Well, it is not intended as a flame.
> with all due respect, i don't think the solutions to our computational
> problems are quite as straightforward as you tend to suggest.
I don't suggest that what I propose comes without problems;
I do suggest that it is a much more reasonable way
than the one people are generally heading toward at the moment;
their way has problems, too,
and what I'm saying is that solving *their* problems
is much more expensive, with a much smaller yield,
than solving mine.
> for example, proofs are spiffy --- when you can find them. but proof of
> correctness is intractable in general and often hard in the specific case.
>
Yes. And so?
Of course, the finer the proof you want,
the more you must pay for it.
The point is not to force everyone to prove everything!
The point is to enable people to write and verify proofs
for whatever they feel the need to, when they can afford it.
If such a system had been used,
the $10^9 Ariane 5 rocket would not have exploded.
For everyday life,
proving that a system is type-safe,
with no C-ish pointer bugs or crashes, is very doable.
Most of the time, only similarly doable proofs are needed.
Everyone should take responsibility
for deciding what they trade off for what.
The inability to write and verify proofs encourages
both irresponsibility and paranoia,
because you can't be sure of anything.
> . . there's been a fair amount of work in automatic proof generation
> without tremendous successes; the theorem-provers i'm aware of are fairly
> slow even in soluble cases. and requiring the programmer to prove
> correctness seems rather draconian. . .
>
Most of the time, the kinds of things to prove will be trivial:
to prove that your code won't crash the system because of a pointer bug,
just use a type-safe language with a proven code-generator backend.
The proven part of the code generator
need not include the complex optimizer,
as long as the optimizer only uses simple, proven primitives.
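To make this concrete, here is a minimal sketch of the idea (my own
illustration in Haskell, not Tunes code): each rewrite rule is a simple
primitive that can be proven, once and for all, to preserve the
reference semantics (eval), so the arbitrary, unproven strategy that
decides when to apply the rules preserves semantics by construction.

  -- Sketch only: an optimizer that merely composes simple,
  -- semantics-preserving rewrite rules.
  module ProvenOptimizer where

  -- A tiny expression language.
  data Expr = Lit Int
            | Add Expr Expr
            | Mul Expr Expr
    deriving Show

  -- Reference semantics against which each rule would be proven.
  eval :: Expr -> Int
  eval (Lit n)   = n
  eval (Add a b) = eval a + eval b
  eval (Mul a b) = eval a * eval b

  -- Each rule is a simple primitive; each can be proven once to
  -- satisfy: eval (rule e) == eval e.
  addZero :: Expr -> Expr
  addZero (Add e (Lit 0)) = e
  addZero (Add (Lit 0) e) = e
  addZero e               = e

  mulOne :: Expr -> Expr
  mulOne (Mul e (Lit 1)) = e
  mulOne (Mul (Lit 1) e) = e
  mulOne e               = e

  constFold :: Expr -> Expr
  constFold (Add (Lit a) (Lit b)) = Lit (a + b)
  constFold (Mul (Lit a) (Lit b)) = Lit (a * b)
  constFold e                     = e

  -- The "complex" optimizer: an arbitrary, unproven strategy that only
  -- ever applies proven rules, hence preserves semantics by construction.
  optimize :: Expr -> Expr
  optimize e = case e of
    Add a b -> apply (Add (optimize a) (optimize b))
    Mul a b -> apply (Mul (optimize a) (optimize b))
    _       -> apply e
    where apply = constFold . mulOne . addZero

For instance, optimize (Mul (Add (Lit 2) (Lit 0)) (Lit 1)) yields Lit 2,
and eval gives 2 on both the original and the optimized expression.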
When you need a prime number, e.g. for random RSA key generation,
the system can test primality once, then consistently propagate
the information that the number is prime.
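As a sketch of what I mean (names like Prime and mkPrime are mine,
made up for illustration), in Haskell you can hide the primality test
behind an abstract type, so the test runs once and every later consumer
of the value knows it is prime without re-checking:

  -- Illustration only: primality is checked once, in the only
  -- constructor; the rest of the system just carries the fact around.
  module Prime (Prime, mkPrime, getPrime) where

  newtype Prime = Prime Integer

  -- Naive trial division; a real system would use a stronger test.
  isPrime :: Integer -> Bool
  isPrime n
    | n < 2     = False
    | otherwise = all (\d -> n `mod` d /= 0) [2 .. isqrt n]
    where isqrt m = floor (sqrt (fromInteger m :: Double))

  mkPrime :: Integer -> Maybe Prime
  mkPrime n = if isPrime n then Just (Prime n) else Nothing

  -- Consumers (e.g. RSA key generation) accept a Prime and never re-test.
  getPrime :: Prime -> Integer
  getPrime (Prime n) = n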
And yes, I'm sure that industrial customers will require
programs to be proven correct. This is no more draconian
than any of their stupid and inefficient "quality control" procedures.
In cases where a proof is too difficult to achieve,
it's up to the customer to decide whether they are ready to admit
the unproven tool into their system or not.
Proofs mean that people know what they have and what they don't.
Of course, proofs are easier to write if you're using the right tool.
Just don't use C, C++, Ada, or Java.
> as far as parallelism goes, exactly what parallel model do you support? it
> sounds like you advocate something like a connection machine style model. .
>
If you mean the size of the machine, or the network geometry:
I don't have any definite off-hand answer about that.
What I'm sure of is that it is far more worthwhile to invest in
better parallel software than in more bloated sequential hardware.
> . in any event, using large-scale parallelism well is another hard problem.
> . . especially given that these days the speed of the processor is vastly
> faster than the speed of the routing network. sure, it would appear that
> parallel processing is a winner, but there are a lot of problems for which
> it is not at all clear how to split the problem up. . .
>
Most of the problems we know of are easily parallelizable:
number crunching, database management, expert systems,
robot control.
As for it not being clear how to split up problems,
this is precisely what calls for languages where you can tweak
the way the implementation splits things up
without changing the behavioral semantics of your program.
Hence, a cleanly reflective high-level language
using formal verification methods.
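For a small, concrete example of that separation (a sketch of my own
in Haskell, using the Control.Parallel.Strategies library; it is not
Tunes code), the chunk size below only tweaks how the work is split
among processors, and cannot change the value computed:

  -- Illustration only: the splitting strategy is an annotation,
  -- separate from what is computed, so tuning it cannot change
  -- the program's result.
  import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

  -- Some per-element "number crunching".
  heavy :: Int -> Int
  heavy n = sum [n * k `mod` 7919 | k <- [1 .. 10000]]

  -- The behavioral semantics: a plain map-and-sum.
  crunch :: [Int] -> Int
  crunch xs = sum (map heavy xs)

  -- The same computation with an explicit splitting strategy:
  -- evaluate the mapped list in parallel chunks of the given size.
  crunchPar :: Int -> [Int] -> Int
  crunchPar chunk xs =
    sum (map heavy xs `using` parListChunk chunk rdeepseq)

  main :: IO ()
  main = do
    let xs = [1 .. 200]
    -- Same answer for any chunk size; only the schedule differs.
    print (crunch xs == crunchPar 10 xs && crunch xs == crunchPar 50 xs)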
What's the use of a faster sequential machine
when you'll be using it to run multiple tasks in parallel?
Surely, there are intrinsically sequential problems
for which there will always remain a need for faster, specialized
sequential processors (even then, bloat does not mean a speed gain);
but this should stay the small niche it is.
> i'm not trying to attack you so much as point out my uneasiness with your
> rather glib discussions of issues that don't appear to me to be at all
> trivial.
I'm not saying that implementing Tunes is trivial.
I'm saying that choosing to invest in Tunes ideas
rather than traditional ideas is a trivial choice,
once you know these ideas.
Well, at least, I have yet to find a valid counter-argument to that;
one would be most welcome. Up to now, the arguments I've faced
have only helped me make my theories cleaner and remove bugs from them.
== Fare' -- rideau@ens.fr -- Franc,ois-Rene' Rideau -- DDa(.ng-Vu~ Ba^n ==
Join the TUNES project for a computing system based on computing freedom !
TUNES is a Useful, Not Expedient System
URL: "http://www.eleves.ens.fr:8080/home/rideau/Tunes/"