Emergence of behavior through software
Alik Widge
aswst16+@pitt.edu
Sat, 30 Sep 2000 14:26:03 -0400
--On Saturday, September 30, 2000 9:44 AM -0700 "Lynn H. Maxson"
<lmaxson@pacbell.net> wrote:
> If they (the results) did, then regardless of anyone's ability to
> fathom or predict them, the software executed as instructed and
> thus brought nothing "extra" or "special" to the process. In
> short it did nothing on its own "volition".
I'd argue that this requires a definition of the term "volition," and also an
understanding of where exactly one obtains volition. To the best of my
knowledge, this is not a solvable problem with existing knowledge of the
human mind. See also three billion opinions on the Chinese Room.
> instruction-conforming behavior? What initiates the internally
> set behavior from the externally set? If we disallow passing it
> to it somehow as a form of inheritance, which we must disallow in
> order for its "new" behavior to be its "own", then we are left
Your point that true AI might not conform to its original purpose is
well-taken. It obviously has to conform to the architecture on which it
runs, for the simple reason that it will die if it does not. However, I
wonder at your statement that an organism which has inherited behaviors
from an external source cannot claim those behaviors as its own. I have
behaviors inherited from my parents, from my society, and from my
evolutionary ancestors going back to single-celled life. There is a
credible argument that all my actions can be predicted by a sufficiently
complex simulation containing all these terms. Do I no longer have any
behaviors of my own? If I combine two actions previously taken by others
into one which no one has yet taken, does that count as my own behavior? (A
program could achieve this by stringing together two function calls in a
way no programmer had instructed it to do.)
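To make that parenthetical concrete, here is a toy sketch (Python; the
functions and their names are mine, purely illustrative). The program
draws two existing operations at random and chains them, producing a
composite action no programmer spelled out in advance:

    import random

    def reverse_text(s):   # existing "action" number one
        return s[::-1]

    def shout(s):          # existing "action" number two
        return s.upper()

    def drop_vowels(s):    # existing "action" number three
        return "".join(c for c in s if c.lower() not in "aeiou")

    # Chain two actions in an order no one hard-coded:
    first, second = random.sample([reverse_text, shout, drop_vowels], 2)
    print(second(first("emergent behavior")))

Whether that composition counts as the program's "own" behavior is, of
course, exactly the question.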
> levels. That presupposes that we have some ability to encode a
> "triggering" event in the execution which will spawn the necessary
> spontaneous generation. We do not. What we have is the machine
> instruction set, the only thing that the software can direct the
> machine to execute.
But being limited to an instruction set does not preclude generation of new
strings of instructions. One may argue that a human is limited to the
actions possible within the known laws of physics, and yet we believe that
humans have free will (or a strong illusion thereof).
> an "error", something to be fixed. Contrary to his statement that
> no one can know completely the internal logic of the machine, I
You can know the circuit diagrams. You can know the physical equations
governing the circuit components. However, you cannot actually know the
behavior of the individual particles which comprise the machine, and
perturbing a few of those can have significant effects, especially as
component sizes shrink and we push fewer charges around per operation.
> Spontaneous generation then in a von Neumann machine is an error,
> again something to be fixed. The hardware does not support it
This is half true. Behavior outside the specification is indeed an error.
I'm not sure that this is the only possible form of spontaneous generation
(at least for my understanding of such a term)... see below.
> except as an error. There is no means in software translated into
> machine instructions to make this possible.
Many have proposed building a true random-number generator into processors
--- something that would sample noisy physical data and produce genuinely
unpredictable (as guaranteed by Dr. Heisenberg) numbers. What if I use
those numbers to generate valid opcodes and feed those back into the
processor? If I do this an infinite number of times, probability says that
I will eventually produce working programs. (Some might say that there are
many programs already extant which were produced in such a manner.)
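As a toy sketch of that feedback loop (Python; the four-instruction
machine below is invented, and OS-supplied entropy stands in for the
quantum noise source a real implementation would sample):

    import secrets  # OS entropy standing in for physical noise

    # An invented four-opcode accumulator machine; a real version would
    # emit genuine processor opcodes, which this sketch avoids.
    OPS = [lambda a: a + 1,  # 0: INC
           lambda a: a - 1,  # 1: DEC
           lambda a: a * 2,  # 2: DBL
           lambda a: 0]      # 3: CLR

    def random_program(length=8):
        return [secrets.randbelow(len(OPS)) for _ in range(length)]

    def run(program, acc=0):
        for op in program:
            acc = OPS[op](acc)
        return acc

    # Draw random programs until one "works", i.e. computes 10 from 0.
    attempts, prog = 0, None
    while prog is None or run(prog) != 10:
        attempts += 1
        prog = random_program()
    print("working program after", attempts, "tries:", prog)

Given enough draws a "working" program falls out; the argument is over
whether finding it that way deserves the name "spontaneous".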
I'm guessing your answer is that this is still not really spontaneous,
because those numbers are still being made to obey the rules of the
architecture. However, the programs written by humans also conform to those
rules (if we make the perhaps ridiculous assumption that they are bug-free). Do
my programs also not count as spontaneous acts? If they don't, what exactly
does?
> Meaning, if it exists at all, does so only in the observer.
This is an acceptable claim, but how does meaning exist in that observer? Our
limited understanding of the mind suggests that it is somehow encoded in
the structure of the brain and the currents flowing therein. If one
constructs an analogue of that within the computer, is it not then capable
of deriving meaning from data?
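As a crude illustration of what such an analogue might look like, here
is a single artificial neuron (Python; the weights and inputs are
invented). The weight vector plays the role of structure, the
activations play the role of currents:

    # "Structure" as weights, "currents" as activations.
    def neuron(inputs, weights, threshold):
        return sum(i * w for i, w in zip(inputs, weights)) >= threshold

    # It classifies a pattern: meaning, or mere arithmetic?
    print(neuron([1, 0, 1], [0.6, 0.2, 0.5], 1.0))  # True

Whether crossing that threshold constitutes "deriving meaning" is,
again, the open question.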
Alik