relationship with our tools? NO.

Tril dem@tunes.org
Mon, 7 Dec 1998 12:15:42 -0800 (PST)


On Mon, 7 Dec 1998, Matt Miller wrote:

> On Mon, 7 Dec 1998, Tril wrote:
> 
> > TUNES is not a being.  It isn't any more alive than Linux is.  TUNES is a
> > dynamic, evolving software project.  That means it only evolves at the
> > whim of its users and coders.  We should not desire the computer to take
> > control of its own evolution.  It might be a fun experiment, but the
> > purpose of TUNES is to make a useful system.  If the system is not
> > completely controllable by the user at all times, how is it useful? 
> 
> Its utility for a given task may be decreased; however, its total utility
> is increased in that the user is prevented from making changes which cause
> the system to become unstable.  The trick is not to make the system
> completely controllable, but rather to provide the user with as much
> control as he needs for any given task, while not compromising the system.

The system IS to be completely controllable, even at the expense of
allowing users (well, at least the administrator of that system) to
"compromise" it.  There is no reason a user should not be able to make
their own system insecure or unstable.  In fact, if we don't allow them
to, the system will be useless and someone will build another one.  More
likely, they will make a workaround for TUNES' limitation.  It's better
for us to include the ability to turn off security as a *feature*, so at
least it's done right.
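
To make that concrete, here's a rough sketch in Python (made-up names,
not actual TUNES code) of what an explicit off switch for security might
look like: every check goes through one policy object that the
administrator can disable, instead of enforcement being hardwired all
over the system.

class SecurityPolicy:
    """Mediates every access check; the admin can switch it off."""
    def __init__(self, enforcing=True):
        self.enforcing = enforcing

    def check(self, user, action):
        if not self.enforcing:
            return True   # the admin turned security off; allow everything
        return action in user.permissions

class User:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

policy = SecurityPolicy(enforcing=True)
alice = User("alice", {"read"})

print(policy.check(alice, "write"))   # False: enforcement is on
policy.enforcing = False              # the documented "off switch"
print(policy.check(alice, "write"))   # True: the admin accepted the risk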

> > HAL was a murderer.
> 
> Do you honestly believe that the above follows from the below?
> 
> > Because the designers tried to make it/him autonomous.

Let me rephrase.  HAL malfunctioned; that is, he performed an action that
the designers did not intend.  The reason HAL malfunctioned is that he
came into contact with data that the designers did not anticipate.

Current software systems seem to be based on the idea that the designer
is God and knows everything.  Of course this is not true, and that's why
software breaks all the time.

To work on fixing this situation...
First, designers should not believe they know everything, and they should
not build software that is inherently suited to only one specific task.
That is, people should design software with the expectation that it will
be used for things other than what it was intended for.  This includes
the possibility that some user will find it useful to modify the software
for a novel use.  Why should an author "intend" anything for the program?
As a work of art, a program is appreciable as an entity by itself, apart
from any possible use it may have.  The best programs have the most uses.

Second, language designers, operating system designers, etc. should
redesign these systems to support a model in which software evolves.  New
systems should be invented (like TUNES) that make it easy for programs to
be extended, replaced, or reoptimized for different uses.
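
As a toy sketch of what "extended, replaced, or reoptimized" could mean
in practice (plain Python again, with invented names; this illustrates
the idea, it is not a TUNES design): callers go through a registry
instead of binding to one fixed implementation, so any part of the
program can be swapped while the system runs.

registry = {}

def provide(name, func):
    """Install or replace the implementation bound to a name."""
    registry[name] = func

def call(name, *args):
    """Look up the current implementation at call time, not bind time."""
    return registry[name](*args)

# The original author ships one implementation...
provide("sort", lambda xs: sorted(xs))
print(call("sort", [3, 1, 2]))          # [1, 2, 3]

# ...and a later user replaces it for a use the author never anticipated,
# without touching any of the code that calls it.
provide("sort", lambda xs: sorted(xs, reverse=True))
print(call("sort", [3, 1, 2]))          # [3, 2, 1]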

My point about AI is that the software program itself won't be able to
anticipate all future situations, and therefore shouldn't be placed in
control of itself, just as one author shouldn't be placed in total
control of the uses of his or her program.  Humans are able to come up
with new ideas, but robots are not (randomness does not count as new
ideas, since new ideas have some measure of usefulness and applicability
to the situation; where did I read about this?  Fare?).  Therefore humans
will always be needed.  A robot without help from humans will always be
less capable than one with the help of humans.  More error-prone, too.

David Manifold <dem@tunes.org>
This message is placed in the public domain.