Emergence of behavior through software
Lynn H. Maxson
lmaxson@pacbell.net
Sat, 30 Sep 2000 23:42:10 -0700 (PDT)
Alik Widge wrote:
"I'd argue that this requires definition of the term volition, and
also an understanding of where exactly one obtains volition. ..."
A reasonable argument. Let me provide you with a definition
within the context of this discussion. Are the results consistent
with the embedded logic of the software? If they are, then no
volition, no "independent" action on the part of the software
occurs. On the other hand, if the results are inconsistent with
the embedded logic, then the software on its own, i.e.
independently, created instructions (or data). If it did so
behave independently, then it acted on its own volition.
" To the best of my knowledge, this is not a solvable problem with
existing knowledge of the human mind."
Fortunately the question does not involve volition within the
human mind. Thus how it occurs there is of no interest to us
here. If no one can provide an instance whereby software behaves
inconsistently with its embedded logic, then it makes no
difference whether we understand volition or not. At least in the
human system it occurs without our knowing why. It does not occur
in a computing system of hardware and software.
"However, I wonder at your statement that an organism which has
inherited behaviors from an external source cannot claim those
behaviors as its own. I have behaviors inherited from my parents,
from my society, and from my evolutionary ancestors going back to
single-celled life."
Well, you are going to have some difficulty developing software
in the same manner as all other living organisms, which, for the
sake of argument, let us agree evolved from single-celled life.
In fact even considering a computing system as an organism puts
you in a rather deep hole. Now you have to take something not
derived from single-celled life and expand the definition of
organism until it becomes the universal class, for there is now
nothing which we cannot consider as in some manner belonging to
it, e.g. my radial saw. All we have to have is a system and,
bingo, we have an organism.
Software by itself cannot execute. It must reside in a machine,
and together they constitute a computing system. The software, in
or out of the machine, is not alive, and neither is the host
machine. Thus we do not have what biology defines as an organism.
Secondly, you may have inherited physical traits, but you
certainly did not inherit behavior. Behavior in society is not
inherited, neither the society's behavior nor that of the
individuals who compose it. The behavior of software is not
inherited, for software does not engage in procreative activities
as organisms do.
Technically we construct the software's behavior. We do so
entirely within the realm of formal logic. We do the same with
the machine. We define both as 100% formal logic systems. Jecel
disagrees with me on this, but the machine is 100% based on the
use of logic circuits (which obey formal logic) and the software
can do no more than supply a sequence of 100% logic-based
instructions. Organisms are not bound by logic. They cannot be
constructed with "and", "or", and "not" circuits. The computer is
not a brain and the brain is not a computer.
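To make the point concrete, here is a minimal sketch of my own in
C (nothing from Tunes itself): a "new" operation such as XOR is
no more than a composition of the three primitive gates.

    /* Illustrative only: every machine operation reduces to
       compositions of "and", "or", and "not". */
    #include <stdio.h>

    static int NOT(int a)        { return !a; }
    static int AND(int a, int b) { return a && b; }
    static int OR (int a, int b) { return a || b; }

    /* XOR built purely from the three primitive gates. */
    static int XOR(int a, int b)
    {
        return OR(AND(a, NOT(b)), AND(NOT(a), b));
    }

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d XOR %d = %d\n", a, b, XOR(a, b));
        return 0;
    }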
"There is a credible argument that all my actions can be predicted
by a sufficiently complex simulation containing all these terms."
On the contrary, it is an incredible argument. You should stop
listening to such drivel. As humans we can posit the impossible,
the sufficiently complex simulation in this instance. I'm not
going to invoke Penrose here, but any time you believe that you
can simulate a living organism down to the quantum detail, you
had best rethink it.
"A program could achieve this by stringing together two function
calls in a way no programmer had instructed it to do."
That's the crux of the argument here. A program "could" if it
could free itself of its own instructions. Then, you see, you
would have to come up with where it "acquired" the instructions,
i.e. told itself, to do this, and then where it acquired the
instructions to do that. Then you have to at least point out the
means it used to generate both these sets of instructions without
acquiring control of the processor. There is no means from within
software to address a non-existent set of instructions, to pass
control to something which does not exist. In all computing
systems of which I am aware this generates a "hard" error (or at
least an address exception<g>).
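A hedged sketch in C of what I mean (the target address is
arbitrary, chosen purely for illustration): transferring control
to an address holding no instructions creates no new behavior; it
kills the program.

    /* Illustrative only: jumping through an address where no
       code resides raises a hardware fault, not "new" behavior. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A function pointer aimed at an arbitrary, non-existent
           instruction address. */
        void (*nowhere)(void) =
            (void (*)(void))(uintptr_t)0xDEADBEEF;

        printf("Transferring control to non-existent code...\n");
        nowhere();   /* typically dies with SIGSEGV, i.e. an
                        address exception */

        printf("Never reached.\n");
        return 0;
    }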
"But being limited to an instruction set does not preclude
generation of new strings of instructions."
Not at all. Again the issue is whether or not any such generated
string is consistent with the embedded logic. If you say it may
not be, then you have to explain how that can occur. It would
have to occur through invoking logic not present within the
software. By definition every meta-program has embedded logic;
therefore it cannot occur through meta-programming.
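As an illustrative sketch of that point (the file name
"generated.c" is my own invention), even a meta-program that
writes another program can emit only what its embedded logic
dictates.

    /* Illustrative only: the generated source is fixed entirely
       by the generator's embedded logic. */
    #include <stdio.h>

    int main(void)
    {
        FILE *out = fopen("generated.c", "w");
        if (out == NULL)
            return 1;

        /* Every emitted line is dictated by these instructions. */
        fprintf(out, "#include <stdio.h>\n");
        fprintf(out, "int main(void)\n");
        fprintf(out, "{\n");
        fprintf(out, "    printf(\"set by my generator\\n\");\n");
        fprintf(out, "    return 0;\n");
        fprintf(out, "}\n");

        fclose(out);
        return 0;
    }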
"You can know the circuit diagrams. You can know the physical
equations governing the circuit components. However, you cannot
actually know the behavior of the individual particles which
comprise the machine, and perturbing a few of those can have
significant effects, especially as component size decreases and we
shove fewer charges per operation."
Considering the logic of this, I might have saved myself some
effort by letting you destroy your own "credible argument" about
a "sufficiently complex simulation".<g> Nevertheless, regardless
of how small the circuits become, their logical function remains
the same.
"Many have proposed building a true random-number generator into
processors --- something that would sample noisy physical data and
produce genuinely unpredictable (as guaranteed by Dr. Heisenberg)
numbers. What if I use those numbers to generate valid opcodes and
feed those back into the processor? If I do this an infinite
number of times, probability says that I will eventually produce
working programs."
It doesn't bother me to have someone talk about doing the
impossible, e.g. performing an operation an infinite number of
times. I have a somewhat clear picture of the difference between
science fiction and science fact.
The fascination with random numbers, or randomness in general, as
a source for spontaneity in a computing system I find amusing.
Decision logic in software (if...then...else,
case...when...otherwise) determines what occurs with any random
number regardless of its source. There is no randomness in
software logic: all possibilities are covered...or else you have
a software error.
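A minimal sketch in C, where rand() stands in for any random
source, hardware or otherwise: the random value selects a path;
it cannot create one.

    /* Illustrative only: however random the input, the decision
       logic enumerates every outcome in advance. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));
        int r = rand() % 3;      /* any random source will do */

        switch (r) {             /* all possibilities covered... */
        case 0:  printf("path A\n"); break;
        case 1:  printf("path B\n"); break;
        case 2:  printf("path C\n"); break;
        default: printf("software error\n"); break; /* ...or else */
        }
        return 0;
    }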
We keep acting as if software were only a set of instructions
when in reality it has two inter-related spaces, an instruction
space and a data space. Moreover, the data space has two
subspaces, a read-only subspace and a read/write subspace.
Instructions operate on data or on machine state indicators,
e.g. branch on overflow.
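A brief C sketch of the two data subspaces (the exact behavior of
the commented-out write varies by platform; on most it faults).

    /* Illustrative only: string literals typically live in a
       read-only segment; local arrays are read/write. */
    #include <stdio.h>

    int main(void)
    {
        char        writable[] = "hello"; /* read/write subspace */
        const char *readonly   = "hello"; /* read-only subspace  */

        writable[0] = 'H';                /* fine: writable data */
        printf("%s %s\n", writable, readonly);

        /* ((char *)readonly)[0] = 'H';   on most systems this
           faults: the harmony between instructions and data
           breaks and the "system" fails. */
        return 0;
    }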
As one who began his career writing in actual machine language,
because no other option existed for the system, let me assure you
that from that time up until now (and into the future) great care
goes into maintaining harmony between data and instructions and
between instructions and instructions. Otherwise the "system"
fails.
Now Tunes is involved with avoiding such failures, with having a
reliability not present in current software. Supposedly this
occurs through the elaboration and sophistication of a
sufficiently high-level HLL in combination with
meta^n-programming and the use of reflexive programming. None of
these, however, "allow" inconsistent software behavior, as they
ensure consistency with their embedded logic. They have no means
within themselves to escape their own consistency nor to transfer
it somehow to virtual software, which has no means of
self-generation.
Software cannot escape its own consistency. It cannot avoid its
own errors. Randomness does no more than transfer control
(decision control structure) within consistent boundaries. It is
simply another way of making a decision on which path to take
next.
"> Meaning, if it exists at all, does so only in the observer.
This is an acceptable claim, but how does it exist in this
observer? Our limited understanding of the mind suggests that it
is somehow encoded in the structure of the brain and the currents
flowing therein. If one constructs an analogue of that within the
computer, is it not then capable of deriving meaning from data?"
While you say "analogue" here instead of "sufficiently complex
simulation", the same piece of science fiction comes to the fore.
You cannot create a brain or any part of a human system with a
computer. One is an organism, fashioned in the same manner as any
other, while a computer is not. A von Neumann architecture is
not. Turing's rules of computation are not. Machines of any
stripe are not.
I do not know how an observer acquires meaning from data. I do
know that you can train observers to do so. However, I do not
know how that training does what it does. Basically, from what
you have said, I assume that we agree that we do not know. At
that we are one up on a non-intelligent computing system whose
current architecture hasn't a chance in hell of becoming anything
else. At least we know we don't know.