Emergence of behavior through software
Lynn H. Maxson
lmaxson@pacbell.net
Tue, 17 Oct 2000 13:18:02 -0700 (PDT)
I've been somewhat desperate to find some common ground with
Billy (btanksley). Fortunately I have found some, thanks to Fare
and the question of which mailing list this discussion belongs to.
In my mind there are actually three questions. One, is "true" AI
(as opposed to the current rule-based kind) necessary (and
sufficient) to achieve Tunes' goals? Two, if yes, is "true" AI
possible? And three, if yes, is "true" AI desirable?
IMHO the answer to all three questions is "no" or (no, no, no).<g>
I agree with Billy that the first two provide grist for the mill
of discussion on this mailing list, while the third, desirable?,
as Fare suggests, belongs under some ethics category. If the
answer to the first question, necessary?, is "no", then only the
curious need explore the others.
Fare wrote:
"Tunes' goal is not, has never been, and will never be to make an
AI. It is to provide a reflective infrastructure whence more
complex computational behavior can emerge than has ever emerged
before."
If I understand this correctly, Fare has also answered "no" to
the first question. Thus his "reflective infrastructure" will be
"rule-based", resting on the same rules of logic inherent in all
software and host computers. These should prove to be both
necessary and sufficient to meet the tests of such systems.