Emergence of behavior through software

Alik Widge aswst16+@pitt.edu
Wed, 04 Oct 2000 15:58:56 -0400


--On Wednesday, October 04, 2000, 12:51 PM -0700 btanksley@hifn.com wrote:

> I'm not familiar with any work which showed that sentient software is
> theoretically possible.  There is some work which attempted to show
> impossibility, but I don't buy it.

But that's exactly the point --- as long as no one can prove it
impossible, it remains theoretically possible. (Just as P may still equal
NP. I really ought to get around to proving that...)
 
> IMO, a better way is to scrap all of the talk about AI, and instead
> implement Intelligence Amplification (IA).  People are already smart and

Well, that's basically what I'm trying to say. Don't think *for* the user,
because it is almost impossible to know what the user really wants. Let the
user express an intention, and *then* do what he wants. Detect patterns in
his behavior (such as saying "No" whenever you ask if he needs "help"
writing a letter) and comply.

> which displays certain aspects of human behavior -- for example, it's
> going to have to recognise when the human's expressing a vision, and "buy
> in" to it.

Visions are awfully vague things, and I don't see what you mean by buying
in. I'm not sure this is something I want my OS to do, either. I mainly
want computers to send things through the network for me and to notice when
I'm repeating an action and offer to automate it for me in a relatively
flexible manner. Of course, some argue that this alone is AI-complete.
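The "notice when I'm repeating an action" part has at least a cheap approximation: look for a block of actions the user has just performed twice back-to-back at the end of the action log, and offer to replay it. The function name and the log representation are assumptions for the sake of the sketch; a real implementation would need fuzzier matching (which is where the AI-completeness argument starts to bite).

```python
# Minimal sketch: if the action log ends with the same sequence of
# actions performed twice in a row, return that block as a candidate
# for automation; otherwise return None.

def detect_repetition(log, max_len=10):
    """Return the repeated trailing block of `log`, or None."""
    for length in range(max_len, 0, -1):
        if len(log) >= 2 * length and log[-length:] == log[-2 * length:-length]:
            return log[-length:]
    return None
```

For example, `detect_repetition(["open", "edit", "save", "open", "edit", "save"])` returns `["open", "edit", "save"]`, at which point the system could ask "Do this again?" rather than guessing at intentions.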