Emergence of behavior through software
Lynn H. Maxson
lmaxson@pacbell.net
Sun, 15 Oct 2000 19:28:05 -0700 (PDT)
Billy (btanksley), Alik Widge, and I have discussed at some length
the subject of AI, "true" AI, the AI of Fare, not limited to the
rule base of current logic engines and neural networks. From my
perspective we have two basic questions. One, is "true" AI
possible with software (following Turing computational rules) and
a host computer (following von Neumann Architecture)? Two, if
possible, is it worth achieving?
Basically my answer to both questions is "no": one, it is not
possible, and, two, even if it were, we would not like the
consequences of achieving it. Alik tends to believe (or is at least
willing to take the chance) that "true" AI will result in a
benign, cooperative peer. In my view "true" AI will look at our
history, observe current events, witness the ecological damage we
are doing to this planet, and decide that we are the greatest
threat to its survival. Its answer will be, if not to eliminate
us entirely (make us as extinct as the species we render extinct
daily), then to reduce our population to where we no longer present
a danger to ourselves as well as to others. I feel this is a
"logical conclusion" that "true" AI will reach as easily as many of
us have. In short "true" AI represents a greater threat to our
survival than ever did the atomic bomb or nuclear warfare, because
there, at least, a human finger interested in his own survival was
on the trigger.
I don't worry about such scenarios when the attempts to produce "true"
AI rely on software following Turing rules and a von
Neumann-based computer. That will never happen. Not only is the brain
not a computer, but a computer has no ability to become a brain,
with or without software.
I don't say this because I am opposed to extending current AI
indefinitely in terms of applications. I favor this. On the
other hand, no HLL at any level, nor any amount of meta^n-programming,
sophistication, or elaboration, will ever extend the domain of AI
beyond its rule-based range.
Now why is this true? For a reason which Fare refuses to credit
with any significance: no software can execute in a manner
inconsistent with (outside the range of) its embedded logic. That's
true for emergent-behavior software as well as non-emergent, for
deterministic software as well as non-deterministic. Regardless
of our ability to either understand or predict the results of
software, we are assured that they derive strictly from the
embedded logic. As such no "extra something" transfers to the
software: it has nothing which has not been externally prescribed.
Regardless of what we do with software, or with software on
software (reflexive programming), that which we do is rule-based,
consistent with the embedded logic. It has no means of
transferring anything other than rule-based logic. It starts
rule-based. It ends rule-based.
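To make that concrete, here is a toy illustration of my own (not anything
from the thread): an elementary cellular automaton, Rule 110, whose
patterns are notoriously hard to predict, yet every cell of which derives
strictly from the eight-entry rule table embedded in the code.

    # Rule 110 elementary cellular automaton: "emergent" patterns, yet
    # every generation derives strictly from the embedded rule table.
    RULE = 110
    TABLE = {i: (RULE >> i) & 1 for i in range(8)}   # neighborhood -> next cell

    def step(cells):
        n = len(cells)
        return [TABLE[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    cells = [0] * 63 + [1]                           # one live cell at the right edge
    for _ in range(30):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

However intricate the output looks, nothing appears in it that those eight
table entries did not prescribe. That is all "emergent" ever means here.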
Well, can we not mimic the brain's neural activity in software?
The answer again is "no". The brain's neural activity is based on
the interconnection of neurons. Neurons have multiple inputs and
outputs, receiving impulses from and transmitting impulses to other
neurons. There are no fixed logic circuits other than those that occur
within a neuron. We have no combination of "and", "or", "not",
and "clock" circuitry which will mimic a neuron and its
interconnections.
Neurons do not combine to form an instruction set. With no
instruction set there is no software operating within the brain.
Just the basic component, the neuron, and its interconnection.
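For contrast, here is a minimal sketch (mine, with made-up weights) of the
McCulloch-Pitts style threshold unit that current neural-network software
builds on; it is exactly the kind of fixed, rule-based logic that the
neuron and its interconnections do not have.

    # A McCulloch-Pitts style threshold unit: the rule-based abstraction
    # that neural-network software substitutes for a biological neuron.
    def unit(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0        # fire or don't: fixed logic

    # With these (made-up) weights and threshold the unit is simply an AND gate.
    print(unit([1, 1], [1.0, 1.0], 2.0))             # 1
    print(unit([1, 0], [1.0, 1.0], 2.0))             # 0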
One would think, in observing Ashby's homeostat exhibiting
"adaptive behavior", that the component and the interconnection
were enough to create a "builtin goal" for which internal
self-direction sufficed: no outside intervention required. No
software (outside intervention) necessary.
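To make the homeostat idea concrete, here is a toy sketch of my own (the
parameters are invented, not Ashby's): four units feed one another through
weights, and any unit whose value drifts outside its "essential" bounds
randomly re-selects its incoming weights until the whole settles. The
built-in goal lives in the component and the interconnection; no external
program of behavior is supplied.

    import random

    # Toy homeostat in the spirit of Ashby (invented parameters): units
    # feed one another through weights; any unit whose value leaves its
    # "essential" bounds randomly rewires its inputs until all settle.
    N, BOUND = 4, 1.0
    values = [random.uniform(-2.0, 2.0) for _ in range(N)]
    weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]

    stable = 0
    for step in range(1, 1001):
        values = [0.5 * sum(w * v for w, v in zip(weights[i], values))
                  for i in range(N)]
        for i in range(N):
            if abs(values[i]) > BOUND:               # essential variable out of range
                weights[i] = [random.uniform(-1.0, 1.0) for _ in range(N)]
        stable = stable + 1 if all(abs(v) <= BOUND for v in values) else 0
        if stable == 20:                             # quiet for 20 consecutive steps
            print("settled after", step, "steps")
            break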
I would think it obvious, given the "natural" method of brain
formation and the "non-natural" one of the homeostat, that software
per se plays no role. Do we have any proof that we cannot employ
software in producing a "non-natural", i.e. man-made (artificial),
brain? I say "yes", again relying on the execution consistency of
embedded logic.
We have no means in software of creating higher level functions
(assemblies in manufacturing terms) which do not in turn rely upon
lower level raw materials (control structures and primitive
operators) and lower level assemblies. Eventually all these
machine-independent instructions must resolve into the instruction
set of a host computer. From the very top to the very bottom, all
are based on the same set of rules formed by the circuitry, the
only builtin logic, of the host computer. No higher level logic
not so translatable can occur in any HLL, regardless of how high in
the scale it occurs.
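A small sketch of that resolution (a made-up expression and a made-up
three-instruction machine, purely for illustration): a "high level"
arithmetic expression is never anything more than a rearrangement of
whatever primitive operations the host provides.

    # A "high level" expression lowered into a primitive instruction set
    # (a hypothetical three-instruction stack machine).
    def lower(e):
        if isinstance(e, (int, float)):
            return [("PUSH", e)]
        op, left, right = e
        return lower(left) + lower(right) + [("ADD",) if op == "+" else ("MUL",)]

    def run(program):
        stack = []
        for instr in program:
            if instr[0] == "PUSH":
                stack.append(instr[1])
            else:                                    # ADD or MUL: the only builtin logic
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if instr[0] == "ADD" else a * b)
        return stack.pop()

    expr = ("*", ("+", 2, 3), 4)                     # the "high level" form: (2 + 3) * 4
    print(lower(expr))                               # the primitive form it must resolve to
    print(run(lower(expr)))                          # 20

Raise the language as high as you like; the loop in run() is still the
only logic that ever executes.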
No such restriction on higher level functions is imposed by the
neurons and their interconnections in the brain. There are no
rules except those dynamically (and temporally) imposed from above
(the higher level functions) on the underlying lower level
processes. In short you have a dynamic hierarchy (networks) of
processes which are not "logical" in terms of the replicable
cause-effect rules required by the embedded logic of software.
It's interesting that we should want to do this using language,
specifically an HLL. You would think that enough experience with
"you said, but I meant" would be a clue to the inefficiencies
associated with verbal communication, which we produce using
processes within the brain. This should tell us that no number of
HLLs (or language on language--reflexive, meta^n-programming) is
going to solve this one. But beyond that, no language is capable
of producing the non-verbal processes, the dominant form within
the brain, the nervous system, and the remainder of the human system.
If we could produce "true" AI sentient forms, regardless of their
number, they would not communicate via language. They would form
a connected whole, communicating the sensory data and its
results directly, so as never to miscommunicate, as well as to allow
the best in thinking, the continuing dynamic results of processing
the sensory data, to be available to every "true" AI everywhere
simultaneously. That's what we would do if "mental telepathy" were
available to us (which it may have been prior to the construction
of the Tower of Babel<g>).
I would suggest that you give some consideration to the study of
General Semantics, Zen Buddhism, and esoteric philosophies like
the Gurdjieff-Ouspensky system. First, understand the importance of
non-verbal experience and what is lost in applying the sequential
process of language to it (what you say as well as what you leave
out). Secondly, take to heart Ouspensky's "The Psychology of
Man's Possible Evolution" about our spending most of our mental
energy in a sleep-walking state and the absolute group (you cannot
do it alone) discipline required to develop self-awareness from a
few seconds now and then into a continuous process.
It is one thing to develop a prosthetic arm, leg, internal organ,
or an eye or an ear. It is possible that you might interconnect
them into a human system to provide sensory data to the brain
through the nervous system well enough to substitute for a loss.
It is quite something else altogether to believe that you can,
with a language, process that which is non-linguistic, non-verbal,
retaining it continuously in that form while at the same time
making sense of it and abstracting (by definition leaving
something out) from it into a language. We cannot do it as
human beings, in whom all this occurs. No language transmits
non-verbal sensory data. If you consider art and music as
languages, they result in non-verbal experience, with which language
only interferes.
You see, it really doesn't make any difference whether all
languages fall under some unification theme (www.sil.org), or how
reflexive, elaborate, sophisticated, or high-level they are,
individually or in any combination. They cannot transmit non-verbal (sensory)
experience, only abstract it. Any "true" AI would leap at a means
of avoiding language altogether in favor of sensory transmission:
mental telepathy.
You can only write software using a programming language. That
software, that language, is incapable of comprehending the sensory
data that it processes. We do it because we can process sensory
data non-verbally, i.e. without confining it within language. We
only get into trouble when we translate/abstract it into language.
We can do it non-verbally because we do not have to process it
logically, i.e. we can engage in "free association" or "go with
the flow". You cannot process the non-logical logically,
particularly with sequential logic. Unfortunately, a Turing
machine allows no other kind.<g>
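For what it's worth, here is a minimal Turing-machine step loop (the
machine and its table are my own toy example, a unary "add one"): however
clever the table, execution is one sequential state transition after
another, and that is the only kind of processing on offer.

    # A minimal Turing machine: strictly one sequential transition at a time.
    def run_tm(table, tape, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            write, move, state = table[(state, cells.get(head, blank))]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Toy table: walk right over a block of 1s and append one more (unary +1).
    table = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(table, "111"))                      # 1111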