discussion:Goals Round III

Mike Prince mprince@crl.com
Wed, 23 Nov 1994 17:41:38 -0800 (PST)

On Wed, 23 Nov 1994, Francois-Rene Rideau wrote:

> I only claim that there is no possible distinction between programming and
> using that would not be purely arbitrary, and inappropriate and useless.

I didn't mean to start a holy war over the distinction between 
programmers and users.  The line is completely arbitrary.

I wouldn't go so far, though, as to say there isn't a distinction and 
group everyone together.  A million arguments can be made that users 
recording macros are programming, etc., but I think we all recognize 
the difference between creating an application and the limited "light" 
programming of macros and maybe even shell scripts by "users".

>    Most users don't write programs that need error recovery; or error recovery
> is straight-forward (i.e. if the computation is interrupted, go on when
> possible; if it eats all memory, then the algorithm is definitely wrong).

Every program needs error recovery.  Your program completes correctly or 
there is an error.  We've just been taught that if a program doesn't work 
it's our fault and we should try new parameters or something.  Instead 
the program should guide us through with its error recovery routine and 
get the correct parameters to run.
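A rough sketch of what I mean (all names here are hypothetical, just to 
illustrate the idea of a program guiding the caller to good parameters 
instead of simply failing):

```python
def run(width):
    """Pretend computation that only works for positive widths."""
    if width <= 0:
        raise ValueError("width must be positive")
    return width * 2

def run_with_recovery(width, ask):
    """On error, ask for a corrected parameter rather than just dying.

    `ask` is any callable that proposes a replacement value; in a real
    program it might prompt the user interactively.
    """
    while True:
        try:
            return run(width)
        except ValueError as err:
            width = ask(err)  # let the "error recovery routine" fix it

# The recovery routine supplies 5 in place of the bad -3, so this
# prints 10 instead of crashing.
print(run_with_recovery(-3, ask=lambda err: 5))
```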

> Programmers shouldn't mangle with failures that are not related to the
> algorithm they use. If network security is not good enough to ensure the
> integrity of the computation, then let the system not distribute the
> computation on the net; but do *NOT* introduce failure recovery where there is
> no need for one; let it be transparent.

This brings us to one of my "new" directions.  I believe the programmer 
should be aware that the code is going to be distributed, and be aware 
of possible pitfalls, including that of creating a "thread" and having 
it never join.  From what I've read from Fare, he seems to favor very 
deterministic behavior from programs.  These are at odds, but not 
irreconcilable.

The low level OS will deal with recovery mechanisms, on top of which 
other languages "exist".  A high-level language which favors determinism 
may limit the risks it takes during execution by using a subset of the 
OS's capabilities.  Other languages may be braver (or more stupid) and 
distribute their computations to less savory parts of the network.  When 
failures occur, someone's got to deal with them, and I don't think 
having the program just stop running and a "bomb" appear on the screen 
is satisfactory.  But by the same token, different apps may employ 
different recovery mechanisms.  I am leery about an OS being capable of 
having a general recovery rule which will satisfy all application 
requirements.  Unless, of course, that rule is to let the app decide.
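"Let the app decide" could look something like this sketch (the handler 
names and the App class are purely illustrative, not any real OS 
interface): the OS supplies a default recovery rule, and an application 
may register its own policy in its place.

```python
def os_default_handler(failure):
    """Fallback policy: report the failure and give up on that task."""
    return f"aborted: {failure}"

class App:
    def __init__(self, handler=None):
        # An app may supply its own recovery policy; if it doesn't,
        # it falls back to the OS default rule.
        self.handler = handler or os_default_handler

    def on_failure(self, failure):
        # The system routes failures through whatever policy the
        # app chose.
        return self.handler(failure)

retrying_app = App(handler=lambda f: f"retrying after: {f}")
simple_app = App()  # just uses the OS default

print(retrying_app.on_failure("network partition"))
print(simple_app.on_failure("network partition"))
```

The point is that the general rule lives in one place, but nothing is 
forced on an app that wants its own behavior.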

>    Now, if there *is* an intrinsic need for error-recovery in some program,
> then the programmer must foresee the possible errors anyway; and the language
> he uses should have capabilities to cope with them (or he chose a bad language
> for his task).

Exactly, let the programmer earn his pay and deal with the errors.

>    As for programming distributed databases (which is quite different from
> math problems)

Up till now it's been different because computation has been limited to 
the same machine for most applications.  But what happens when most 
programs are broken into pieces and farmed out?

> Then, yes, different
> programmable policies should be available, and any willing programmer/user
> should be able to access them.

Bingo, apps may pass errors along to the standard recovery mechanisms, 
but the point is apps get to make those decisions, and none are forced 
upon them.

> > So.......In answer to the previous posting, users *may* become aware of 
> > failures (your videophone link fails due to the satelite being shot 
> > down), the programmer will probably become aware, and OS's job will be to 
> > take directed remedial action.
> In case of failure, let the system warn the user, and interact to recover
> (saving the program for possible future failures), and not ask the user to
> foresee all possibilities (unless he really wants to).

Back to my previous equation, the system tells the app, the app may tell 
the user.  BTW, a simple app would just feed the error to the standard OS 
error handler.  I stand by my opinion that the app should have first 
crack at handling errors.  Not every error can be recovered.
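The chain I have in mind might be sketched like this (hypothetical 
names again): the system reports the failure to the app first, and only 
failures the app can't handle surface to the user.

```python
def notify_user(failure):
    """Last resort: tell the user; nothing else could recover."""
    print(f"user sees: {failure}")
    return "unrecovered"

def app_first_crack(failure, can_handle):
    # The app gets first crack at the error...
    if can_handle(failure):
        return "recovered"
    # ...and only unrecoverable ones are passed along to the user.
    return notify_user(failure)

handles_transients = lambda f: "transient" in f
print(app_first_crack("transient timeout", can_handle=handles_transients))
print(app_first_crack("satellite shot down", can_handle=handles_transients))
```

A "simple app" would just pass `can_handle=lambda f: False`, i.e. feed 
everything straight to the standard handler.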

> > Am I confusing the issue more?
> I *think* you are.

Confusion is the first step in learning: knowing what you do not know.