Artificial Intelligence (was: Why Lisp languishes)

Francois-Rene Rideau fare+NOSPAM@tunes.org
23 May 2001 13:45:21 +0200


The following message is a courtesy copy of an article
that has been posted to comp.lang.lisp as well.

This is getting off-topic for comp.lang.lisp,
so I'm redirecting it toward a more appropriate forum that I know of,
the archived mailing list cybernethics@tunes.org
where the topic has already been discussed
(see the archives for September and October 2000):
        http://lists.tunes.org/mailman/listinfo/cybernethics

Followup-To: cybernethics@tunes.org


"John Flynn" <transpicio@yahoo.com.au> writes on comp.lang.lisp:
> Is there any evidence to suggest that these processes (if they ARE
> processes) can never be simulated with machines?
There are many hard issues to overcome before we can possibly achieve AI:

SPEED DISCREPANCY
One big problem is that you can define intelligence
only in terms of adaptable (or at least adapted) interaction.
Now, how much do you interact with computers,
what's the bandwidth and what's the feedback?
Ever tried talking with someone through a filter
that makes him 1000 times slower?
Whichever way the speed discrepancy goes,
it makes useful interaction of little interest,
since either party will be bored to death before anything comes out.
Thus, even the "right" program that you'd find by miracle
won't be of much use to you if it doesn't run at the right speed.

COMPLEXITY OF THE STATE OF MIND
Now, we can't hope to build the right "mind"
with the right "state of mind" all of a piece:
        If the human mind were simple enough to understand,
        we'd be too simple to understand it.
                -- Pat Bahn
Thus, we cannot hope to build any kind of explainable algorithm
that "just works" and yields immediate intelligent behavior.
All we can hope for is to factor the problem into general rules
that are simple enough for us to manage, yet able to gather
external complexity from interaction and/or a database,
and shape it into the internal complexity of a state of mind,
so that the system can show intelligent behaviour adapted to interacting back.
So we'll have to train and educate an AI just as we do with NI
(natural intelligence).
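
As a caricature of what such "general rules gathering external
complexity" could look like, here is a deliberately tiny sketch in
Common Lisp (invented for this post, not any existing system): the
training rule fits in a few lines, yet everything interesting about
the resulting internal state comes from the words the system is fed,
i.e. from interaction:

    ;; The "seed rule" merely records which word follows which;
    ;; all the complexity of the state comes from the training input.
    (defun train (words &optional (table (make-hash-table :test #'equal)))
      "Record in TABLE which word follows which in WORDS."
      (loop for (a b) on words while b
            do (push b (gethash a table)))
      table)

    (defun babble (table word n)
      "Walk the learned transitions to produce up to N words."
      (loop repeat n
            while word
            collect word
            do (let ((nexts (gethash word table)))
                 (setf word (and nexts
                                 (nth (random (length nexts)) nexts))))))

    ;; (babble (train '(the cat sat on the mat)) 'the 10)
    ;; => (THE CAT SAT ON THE MAT), (THE MAT), etc.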

SLOW CONVERGENCE OF THE TRAINING PROCESSES
Consider that raising a baby, even the most gifted one,
into being able to express intelligence takes quite a few years
(and decades afterwards to get him to be proficient at any valuable job).
Consider that we have long experience of raising babies,
and even then we produce perfectly stupid or brainwashed human beings.
Consider that we have no experience whatsoever of successfully raising AIs.
Consider that life has been competing at creating ever more adaptive
biological behaviour for 15 billion years.
Consider that neither a drive as strong as individual survival,
nor the time needed for it to act as a selective force,
is going to help in creating an AI.

CHAOTIC DIVERGENCE WITH RESPECT TO SEED RULES
Education takes years, but crucially depends on the seed rules
being right from the beginning of the training period.
Yet getting these rules right is in itself quite a challenge:
under 2% of the genetic rules differ between chimps and humans,
and the genetic factors for mental insanity amount to much less than that.
Getting one parameter slightly wrong (a misordering of rules,
a wrong factor, etc.) can yield a critical inability to learn and adapt.
And since it may take years of effort to train and test an AI,
we have little feedback on the long-term effects of seed rules,
so we have basically no process for getting them right.

LACK OF A SUCCESS CRITERION
So even if we go through all these processes and overcome the difficulties,
how do we know if or when we have succeeded? How can we judge intelligence?
How do you know that you or I or anyone on this forum is intelligent?
Are we intelligent, despite all the nonsense we utter?
This only gets worse if our goal for an AI is
to be "more intelligent" than man.
        The risk is that if, one day, machines become intelligent,
        we mightn't be mentally equipped to notice they are.
                -- Tirésias, in J.-P. Petit, "A quoi rêvent les robots?"


Overcoming these difficulties is quite a challenge.
Happily, computers also have advantages over biology
that can help on the way toward an AI:
* we can split and factor our system into subsystems
 that we can grow, adapt, or recombine separately,
 with much more flexibility than hereditary evolution has.
* we can save states (for a given set of rules) and restart them,
 or save interaction scenarios and replay them,
 so that when training a new baby AI, we needn't start from scratch.
* once computers get much faster than needed,
 we can replay training periods in accelerated time,
 train several AIs in parallel and have them interact with each other, etc.
* we can use dumber programs to help create, breed, and select
 smarter ones, so progress can be not only incremental,
 but hyperexponential (just like in biology). That's bootstrap theory
 (see the sketch after this list).
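
As for that last point, here is a minimal sketch, in Common Lisp, of
such a generate-evaluate-select loop: a dumb breeding program that
evolves arithmetic s-expressions toward a target function. The
operators, the toy fitness measure, and all the names are invented
for illustration; no claim is made that this scales beyond a toy:

    (defparameter *operators* '(+ - *))
    (defparameter *terminals* '(x 1 2 3))

    (defun random-elt (list)
      (nth (random (length list)) list))

    (defun random-program (depth)
      "Generate a random arithmetic s-expression over the variable X."
      (if (or (zerop depth) (zerop (random 3)))
          (random-elt *terminals*)
          (list (random-elt *operators*)
                (random-program (1- depth))
                (random-program (1- depth)))))

    (defun run-program (program x)
      "Run a candidate PROGRAM with the variable X bound."
      (eval `(let ((x ,x)) (declare (ignorable x)) ,program)))

    (defun badness (program)
      "Total error of PROGRAM against the target x^2 + 1; lower is better."
      (loop for x from -2 to 2
            sum (abs (- (run-program program x) (+ (* x x) 1)))))

    (defun mutate (program depth)
      "Replace a random subtree of PROGRAM by a fresh random one."
      (if (or (atom program) (zerop (random 3)))
          (random-program depth)
          (let ((copy (copy-tree program))
                (i (1+ (random 2))))    ; pick one of the two arguments
            (setf (nth i copy) (mutate (nth i copy) (1- depth)))
            copy)))

    (defun evolve (&key (size 50) (generations 100) (depth 4))
      "Dumb selection: keep the better half, refill with mutants."
      (let ((population (loop repeat size collect (random-program depth))))
        (loop repeat generations
              do (setf population (sort population #'< :key #'badness))
                 (setf population
                       (append (subseq population 0 (floor size 2))
                               (loop repeat (ceiling size 2)
                                     collect (mutate (random-elt population)
                                                     depth))))
              finally (return (first population)))))

Note that candidate programs are plain s-expressions, so a whole
population can be checkpointed with WRITE and reloaded with READ:
that is the "save states and restart them" advantage above, for free.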

As for the success criterion, well, we'll find out that just like
"intelligence" is ultimately a moot criterion for judging human beings,
it is a moot criterion for judging machines.
Machines, AI or not, autonomous or not, will have to prove
their efficiency through social and economic interaction:
either they are marginally useful to others, and will thus survive
by earning their living, or they are not, and will disappear or be replaced.


> Is there ever going to be an "AI Spring" in our lifetimes?
Depends on how long we live, and how much we work toward that goal.
Supposing that we are superrational beings
(in the sense used by Hofstadter in "Metamagical Themas"),
this "we" means you and me and like-minded persons:
if we don't do it, odds are nobody will;
whereas if we do, odds are lots of like-minded persons will, too.

There is at least one institute dedicated to bringing up an AI,
which has some very challenging prose on its website:
        http://www.singinst.org/
Check its links, too.


> If so, where are the breakthroughs most likely to be sought?
I think a crucial point in achieving an AI is bootstrap:
getting programs to help us design better programs.
That's where LISP comes into play: it is designed for metaprogramming.
Now, if we want a competitive process where lots of metaprograms
are trained on lots of programs, we'd better lower
the protectionist barriers that prevent us from running
meta^n-programs on meta^(n-1)-programs; I argued along these lines in
        http://fare.tunes.org/articles/ll99/index.en.html
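
To make that concrete, here is a minimal sketch of Lisp
metaprogramming (invented for this post, not taken from the article
above): a macro is a program that runs at compile time to write the
program you would otherwise write by hand, and nothing prevents
macros from writing macros in turn:

    ;; DEFINE-MEMOIZED is a metaprogram: at macroexpansion time it
    ;; writes the caching boilerplate around an ordinary definition.
    (defmacro define-memoized (name (arg) &body body)
      (let ((table (gensym "TABLE")))
        `(let ((,table (make-hash-table :test #'equal)))
           (defun ,name (,arg)
             (multiple-value-bind (value found) (gethash ,arg ,table)
               (if found
                   value
                   (setf (gethash ,arg ,table) (progn ,@body))))))))

    ;; FIB reads as the naive exponential recursion,
    ;; but the generated code runs in linear time.
    (define-memoized fib (n)
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    ;; And a meta^2-program, a macro that writes memoizing definers,
    ;; shows that the tower of metalevels is open-ended.
    (defmacro define-memoized-definer (definer)
      `(defmacro ,definer (name (arg) &body body)
         `(define-memoized ,name (,arg) ,@body)))

The transformation itself is ordinary Lisp code operating on ordinary
Lisp data, so other programs can generate such metaprograms, inspect
their output (e.g. with MACROEXPAND-1), and improve them; that is
what makes a bootstrap loop possible at all.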

Another interesting side property of
such a metaprogramming bootstrap process
is that as long as we do get better programs through metalevel automation
(hence, as long as we can understand anything that can be explained
about the way we ourselves design programs),
the process is economically self-sustaining,
so that we actually get a chance to indefinitely pursue this activity
for its positive side-effects in all domains of computing,
whether or not we eventually reach AI.
Such a metaprogramming bootstrap process is what TUNES ought to be about,
with the much more limited initial ambition
of building a decent computing system.

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
Laziness is the mother of Intelligence. The father is Greed.