Emergence of behavior through software
Lynn H. Maxson
lmaxson@pacbell.net
Mon, 02 Oct 2000 10:34:38 -0700 (PDT)
Fare, you have been so kind to answer my question(s). It is only
fair that I in turn respond to yours. First allow me to state two
logically equivalent forms of what we agree on:
(1) No software executes in a manner inconsistent with its
embedded logic;
(2) all software executes in a manner consistent with its embedded
logic.
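The equivalence of these two forms is the standard quantifier duality; writing C(s) for "software s executes consistently with its embedded logic" (my notation, purely for illustration):

```latex
(1)\ \neg\exists s.\ \neg C(s) \quad\Longleftrightarrow\quad (2)\ \forall s.\ C(s)
```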
The only subnote to this lies in your "Artificial emergent systems
are _already_ useful and can be improved upon for more utility,
independently from their eventually reaching "AI" or not." Does
even reaching "AI" change the agreement?
"Volition does NOT consist in choosing the rules. No single human
chose his genetic code, his grown brain structure, his education.
Yet all humans are considered having a will."
Volition deals ultimately with choice. In the dictionary this is
qualified as a "conscious" choice. Consciousness lies in
self-awareness. Rather than introduce extraneous elements like
genetic code, brain structure, or will here, let's just stick with
whether or not software has the property of self-awareness (Does
it know what it is doing?), consciousness (Does it know it is
doing anything?), and volition (Does it have a choice in what it
is doing?).
The answer to all three questions is "no". It doesn't make any
difference if the embedded logic uses any amount of random
selection, any amount of meta^n-programming, or the highest levels
of HLL elaboration, complexity, or sophistication. It cannot
escape consistency with the embedded logic.
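A minimal Python sketch (the function and branch names are my own, purely illustrative) of why random selection changes nothing: however the choice is made, the choosable outcomes are enumerated in advance by the program text.

```python
import random

def behave(seed):
    """A program with "random" behavior: it can still only select
    among the branches its embedded logic enumerates."""
    rng = random.Random(seed)
    branches = ["greet", "compute", "halt"]  # fixed in advance
    return rng.choice(branches)

# Different seeds give different runs, yet every run remains
# consistent with the embedded logic: the result is always one
# of the three pre-determined branches.
outcomes = {behave(s) for s in range(100)}
assert outcomes <= {"greet", "compute", "halt"}
```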
That embedded logic is solely based on the formal logic embedded
in the host machine. No tweaking of logic, through simulation,
emulation, or adulation<g>, will ever transfer an intrinsic
property of an organism (which does occur within its structuring)
into software. Whatever AI may reach it will remain "artificial",
not identical to the real thing.
"My initial structure has no purpose so to speak. Purpose is NOT
a structural property. I cannot negate what I am. I can choose my
purpose. When I choose to do something, I am what I am; I "obey"
my nature."
Well, one of us is "dead wrong". You certainly can make a choice.
However your ability to do so depends upon your structure, your
human system and whatever in it is responsible for "life". When
that leaves you, when you die, you can no longer make a choice nor
exhibit purpose. If you obey your nature, it must be contained in
what you are physically. That, my scientific friend, is structure.
If you believe that mental activity is not totally physically
based, then it is you who introduce theistic notions.
But let's get back to your question. I quote:
"For giving instructions do not mean understanding. I may well
genetically engineer a human clone with some genetic instructions,
and be unable to fully understand and predict the detailed
behavior of the clone I created.
In other words, the piece of running software you write is not
free from its own code that you originally provided; but it IS free
from the programmer, from you and me or other programs.
You give the initial seed, but it grows to have a life of its own;
"The question is about drawing a separation between doing and
understanding."
So let's answer the question about drawing a separation between
doing and understanding. The first thing we need to do is put
"understanding" in its place in the scheme of things. First comes
"knowledge", then "understanding", and then "wisdom".
Knowledge comes from "knowing" you are doing something.
Understanding comes from knowing what you are doing and if
possible why. Wisdom comes from using the understanding of what
you know to possibly change what you do.
What you should notice here is that humans engage in all three
levels and software in exactly none. Software doesn't know that
it is doing anything for the simple reason that it is
non-intelligent. It certainly doesn't understand what it doesn't
know for the same reason. It cannot have wisdom for the same
reason. Non-intelligent means not having the property of
intelligence, which so far as we know exists only in living
organisms.
The only "seed" for an organism is an organism. Man thus far has
had no success in creating an organism in any other manner.
Clearly software is not an organism and thus speaking of it
"having a life of its own" is metaphorical, not scientific nor
factual. It has no life. Moreover we have no means currently of
giving it life.
However, for the sake of argument let's assume that it does. The
issue comes down to prediction and understanding of observable
reality. We have a history of increasing our knowledge and
understanding of such events, leading in turn to an increasing
ability to predict them. Following our assumption and the basis of Fare's
metaphor, this means our gains have occurred at the loss of life
within those events. That, my friends, is logic.<g>
Fare takes this, our inability at times to predict and understand
the results of software execution, as a means of giving software
something (life, independence, freedom from the programmer) that
it must lose in the event that we gain the ability to do either.
Note that this "loss" occurs without a change in the software
logic or in its execution. Therefore it must be a property
independent of them, perhaps even a "soul".<g>
This property arises from a more serious claim by Fare that we
have software whose results we do not understand or we cannot
predict. To me both are patently false. As one who patronizes
cybernetics, Fare should know better. For cybernetics as described
by Ashby in his "Introduction to Cybernetics" relies on the IPO
(input-process-output) model.
To say that we cannot understand results (output) means that the P
(process) which we must know (in order to have written it
logically) exceeds our intellectual capacity. To say that we
cannot predict results (given that we can understand the process)
means that we know the input and the process but lack the
intellectual capacity to apply the one to the other.
Now if we understand the process but do not know the input,
certainly we cannot predict. All this means is that we must
"know" the same input used within the execution instance of the
software. Fare pooh-poohs this by saying it is "postdict" not
"predict". No. It is acquiring the necessary input in order to
apply the process to it, in which now we can achieve the same
results as the software. If the execution instance provides us
with the input and we can now predict the outcome, then the
software has lost any life of its own.
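The IPO point can be sketched in a few lines of Python (the process shown is an arbitrary stand-in of my own): once we know the input I and the process P, applying P to I ourselves reproduces the output O exactly, which is precisely the prediction at issue.

```python
def process(inputs):
    """The P of the IPO model: any deterministic process will do;
    this one (sum of squares) is an arbitrary stand-in."""
    return sum(x * x for x in inputs)

# An "execution instance" of the software on some input...
recorded_input = [3, 1, 4, 1, 5]
observed_output = process(recorded_input)

# Knowing the same input and the same process, an observer applying
# the one to the other achieves the same result: the output is fully
# predicted, and nothing is left over for a "life of its own".
predicted_output = process(recorded_input)
assert predicted_output == observed_output  # both are 52
```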
The questions surrounding prediction and understanding get even
more nefarious. Fare seems to forget that we write software and
construct host machines to form a tool system. We do so as a
means of extending "our" own ability. Moreover we do so for
"reasons of our own". These reasons are the "causal" processes
that lead to "effects", the means of satisfying them.
Among these "reasons of our own" are curiosity, amusement, and a
desire to increase our knowledge, our understanding, and our
ability to predict. We use tools to assist us in this. Now we
are bound by time, the amount that we can achieve in a given
interval. To increase our productivity we use tools which allow
us to achieve more of what we want in the interval. That we
cannot predict or cannot understand does not accrue to an
intellectual failure or weakness. Instead we do not want to be bothered when a
more efficient "time" mechanism is available. We "choose" not to
have to predict or understand a priori. Why? It saves us a hell
of a lot of time.
We may not take the time to either know or understand what
occurred and why within such a process. That is our choice.
Again it is not an intellectual deficiency. As long as we can
verify that the tool works correctly, then how it did what it did
becomes unimportant. It allowed us to achieve "our" purpose. In
so doing the tool did not acquire a "life of its own" because we
chose to neither know nor understand.
You see all this derives from software executing consistent with
its embedded logic. In answering in turn Fare's question about
doing and understanding we have clearly made a distinction between
non-intelligent software and an intelligent organism like a human
doing something. One knows that it is doing something
(knowledge), can determine what and why it is doing it
(understanding), and can modify its future doings (wisdom).
East is East and West is West and ne'er the twain shall meet.
"It [change in behavior] is born in continuous transformation.
Meta^n-programming is not about directed design, but about
selection. You have a Lamarckian (or even creationist) model of
programming in mind; I have a Darwinist (or even Dawkinsian) model
of programming in mind."
An interesting side note is that changing behavior in software,
i.e. software maintenance, is increasingly expensive in time and
cost. Tunes does nothing to address this, nor does it address it
in its HLL requirements. Obviously the answer lies in an "intrinsic
continuous transformation" process. The question becomes how do
we best implement this. The answer is process improvement, the
process of developing and maintaining software. To the best of
my knowledge, Tunes nowhere addresses this.
Instead I am once more faced with a false metaphor: writing
software according to Lamarck, Darwin, or Dawkins theory. Actually
I write it according to the software development process of
specification, analysis, design, construction, and testing. If
any "evolution" occurs at all, it occurs through those stages. As
I do not confuse biological development (and maintenance) with
that of software despite some "perceived" similarities I have no
problem with the distinction between "writing" software and
"growing" organisms.<g>
But the issue is "meta^n-programming is not about directed design,
but about selection". So let's talk about that. Selection of what?
Answer. Pre-determined choices. No random anything will change
that. Decision logic in software regardless of where it appears
only allows certain paths to follow. Furthermore "Change in
behavior is no magic event", on which we agree, and "It is born in
continuous transformation". The first is certainly true in
software. The second in software can only occur through
pre-determined choices whether randomly selected or not. That is
the "condition" of a pure logic system. Fortunately organisms,
among them human beings, are not pure logic systems. Therefore
the continuous transformations that can occur in software happen
through the continuous transformations in humans which are not so
restricted. The one can determine the choices (and thus the
transformations) of the other and not vice versa. Not even
Dawkins can change that.
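Even a selection-driven (Darwinian) loop illustrates the point. In this Python sketch (the genomes, fitness function, and loop length are all my own invented example), variation and selection run freely, yet every reachable state lies in a space the program text fixed in advance.

```python
import random

rng = random.Random(0)  # even the "randomness" is embedded logic

def mutate(genome):
    """Flip one randomly chosen bit -- the only variation operator
    the program text permits."""
    i = rng.randrange(len(genome))
    return genome[:i] + ("1" if genome[i] == "0" else "0") + genome[i + 1:]

def fitness(genome):
    return genome.count("1")

# A Darwinian loop: variation plus selection, no directed design.
genome = "00000000"
for _ in range(100):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child  # "select" the fitter variant

# However long selection runs, every reachable "behavior" lies in
# the pre-determined space of 8-bit strings enumerated above.
assert len(genome) == 8 and set(genome) <= {"0", "1"}
```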
"If your "encoded logic" is universal, just any behavior is
consistent with it. So conformance to the logic is a null
statement."
Yes, but you see the encoded logic, particularly that of software,
is not universal. So conformance to (consistent with) the logic
is not a null statement. Nice try.
"I strongly dislike your way of turning around arguments,
completely avoiding answering the points others claim as relevant,
not even acknowledging them, and claiming as victories their
concessions on points they claim are irrelevant. This is no
rational discussion, only self-delusion thereof."
I imagine that you do, considering the arguments you make. The
matter of relevance or not lies in the eye of the observer. To me
the issue of software execution "always" consistent with its
encoded logic is relevant. That consistency keeps it from doing
other than what it was "told" (by an external agent). By your own
admission that does not occur in an organism (your cloning
example). What more do we need to know that software has a life
of its own only metaphorically, not factually?
Certainly it answers your question about doing and understanding
as well as prediction and understanding where one, software,
"does" without "understanding" (or "knowing") while this "life"
form "does", "knows" it is doing, and "understands" what it is
doing. I, therefore, have something that the software does not: a
life of my own.<g>
I suggest that the charge of self-delusion here is one of
projection, in the source and not the target. I doubt very
seriously if progress in software, in doing the things specified
in the Tunes HLL requirements, has any need for any properties
outside those available in formal logic. It hasn't required
anything else in getting to this point. We certainly haven't
exhausted all the logical possibilities. When and if we do,
considering parallel progress in other fields including biology,
then we can consider non-logical, statistical-based approaches.
Meanwhile let's complete what we can with von Neumann and Turing.