Emergence of behavior through software
Lynn H. Maxson
Mon, 25 Sep 2000 11:26:33 -0700 (PDT)
François-René VB Rideau wrote:
"What part of "universal machine" don't you understand? Just
every kind of information processing doable with a device in
classical physics is emulable in constant time factor overhead by
any universal machine ..."
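For concreteness, here is a minimal sketch (mine, not Fare's) of what "universal machine" means in practice: one program that, given the rule table of any other machine, simulates it step by step.

```python
# A toy universal machine (my illustration): an interpreter that simulates
# any one-tape Turing machine given its rule table. This is the sense in
# which one machine can emulate any other, at a constant cost per step.

def run_turing_machine(rules, tape, state="start", max_steps=10000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))  # sparse tape, blank = "_"
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# One particular machine: flip every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # -> 0100_
```

The force of the quoted claim is that the interpreter above never changes when the rule table does: one fixed program covers every machine expressible as such a table.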
François-René VB Rideau quoted:
"Alan Turing thought about criteria to settle the question of
whether machines can think, a question of which we now know that
it is about as relevant as the question of whether submarines can
swim. -- Dijkstra"
Wow. And to think I was accused of platitudes with my "Let people
do what machines cannot and machines do what people need not."
The only question remaining with respect to this "universal
machine" lies in whether such a division of labor is necessary,
whether people are necessary or not. Fare, I believe we are
entitled to your answer to ensure no confusion on this point.
More to the point, do you believe sentient behavior is possible
in a machine?
You see I don't confuse information processing by a machine,
universal (whatever that means) or not, as intelligent behavior.
Coming from the "data processing" days I have yet to see any
difference arising out of upscaling it to "information
processing". Humans apply "meaning" to data. If a human doesn't
understand the data presented it is "meaningless". It's the same
data, only the human is different. That implies that meaning
itself is somewhat arbitrary, in relativity terms dependent upon
the observer.
There is no question that we can use machines as tools with which
to do data mining. The question is can machines use machines to
do data mining? Given its absence previously (as occurs in
humans) how does a machine develop a concept of data mining?
Through sophistication, elaboration, unlimited levels of dynamic
meta-programming, or by accident? How do you program accidental
behavior? Certainly not through any machine-producible, random
process.
"Oh, you may certainly check that each step in the multi-gigabyte
trace is correct, but that won't make you _understand_ the
computer's work at all."
Then I probably will take comfort in that the computer will not
know that it does not understand. Therein lies the difference.
In none of your examples does the computer "know" what it is
doing; it simply does. What it does we have to verify to ensure
that we instructed it correctly. The computer (the hardware and the
software) has no such means available to it.
I do not say any of this out of fear. I do not fear their gaining
this capability, but so far we have been unable to transfer to
them (or even to understand) what allows it in us. For example,
regardless of variations on a theme all computers use sequential
(linear) memory. It is a distinct hardware component. Yet no
such memory, no such component exists in the human brain.
The question arises, not out of fear: is the behavior intrinsic
to the construction, the composition? If it is, then how much of it
is possible to emulate within a different construction,
composition? As I pointed out in a private response to Derek
Verlee, you can neither sit nor cool yourself in the shade of a
simulated tree.
"By automating physical work, machines freed our bodies.
By automating intellectual work, they will free our minds."
I found this one more than a little interesting. Obviously the
unwritten but necessary "our" after the "By" means that the
machines did as they were told, not that they took it upon
themselves.<g> I have no argument with any of your examples of
such automation that allows us to achieve with machines what we
could not reasonably achieve on our own. In none of them did the
machines "know" what they were doing nor did they take it on their
own to do anything not conforming to the instructions given.
"Don't fear machines. Welcome their enhancement, and learn to use
them. Or better even, enhance them for the betterment of
mankind."
As one whose career lay in automating client processes,
automation held no fear for me. In fact my current (other)
project lies in
the (greater) automation of software development and maintenance,
which as in other successful automated processes will lead to a
significant decline in the IT population. With luck it will
reduce computer science to an option under some other major.<g>
If successful, it will certainly lead to the betterment of mankind
in further elimination of repetitive, non-intellectual (where your
definition differs from mine) tasks.<g>
"It is irrational fear only that makes you think this
and dismiss the most fundamental result of computer science:
the existence of universal machines"
I must have missed this one. We must have a different definition
of universal. A universal machine by definition must be capable
of replacing any other. I am not aware that computer science,
one, had this as a goal, or, two, achieved it. If by this you
mean some existing computer architecture produced by computer
science, then we are wasting our time with Intel-based nonsense.
"Moreover, your presentation of software as "externally imposed
instructions" is flawed at best. You make it sound as if an
all-understanding designer would provide a complete detailed
design from which all the behavior of the machine would follow.
Indeed, no human may create any emerging behavior this way
(by very definition of emergence)!"
Exactly. What part of software design do you not understand?
Where in any of your examples do you stray from the intrinsic
determinism of the software?
"To create an emergent behavior, you precisely have to loosen on
the design, and let behavior gradually appear, not fully
"externally imposed", but precisely "internally grown", from the
feedback of experience."
Wow again. No wonder I'm such a moron. I have no idea of how to
do any of these. Not to worry because what I can't impose
obviously the software which has somehow acquired the "feedback of
experience" (cognizant that both are present and creating names
for them) will do on its own. I do have difficulty believing that
you teach this in computer science.
"Instead of designing the machine, you meta^n-design it so that it
grows the right behavior."
I thought so. It's not a computer you're talking about. Or at
least not one that we have managed to design so far. I'm still
having difficulty with how it can "precisely internally grow" and
at the same time not interfere with the "meta^n-design" producing
a behavior that it deems "right" on its own.
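For what it is worth, the kind of thing I take "internally grown ... from the feedback of experience" to mean is a search loop like the following toy (my sketch, with an admittedly trivial feedback rule): the programmer writes only the scoring rule and the variation step; the winning string appears nowhere in the design of the loop.

```python
import random

# A toy of "behavior grown from feedback" (my illustration, with a trivial
# fitness function): hill-climbing toward a target string. Only the feedback
# rule (score) and the variation step (mutate) are designed; the loop itself
# never states the answer it converges to.
random.seed(0)  # fixed seed so the run is reproducible
TARGET = "emergence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(candidate):
    # Feedback: count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Variation: replace one random position with a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while score(current) < len(TARGET):
    challenger = mutate(current)
    if score(challenger) >= score(current):
        current = challenger
print(current)  # -> emergence
```

Whether one calls that "emergence" or merely stochastic search is, of course, exactly the disagreement between us.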
"You cannot dismiss well-established theory by mere belief.
You cannot even reasonably believe that the human brain is more
complex than what our industry can already or may soon
manufacture; the raw accumulated computing power of general
purpose processors in the world is already superior to all known
estimates for the complexity of the human brain (10^12 nerve
cells, each with thousands of connections); with a grossest
overestimation, you may encode a brain and its state with 2^60
bits, which will be, by Moore's Law, on every desktop by 72 ..."
Whee, not even a wow this time.<g> The wrong one of us is
accusing the other of dismissing "well-established theory by mere
belief".<g> Allow me.
Yes, I can reasonably believe "that the human brain is more
complex than what our industry can already or may soon
manufacture". It is not a numbers game. The brain is not a
computer (nor is a computer a brain). What we call computing is
but a fraction, perhaps even an infinitesimal one, of the total
brain activity. Whether you use the example of Deep Blue or some
other herculean effort to try to match the capability of the
brain, you still come up short when it comes to "packaging" the
entire product. Short of duplicating it exactly (forget
emulation) you are not going to do it. Specifically you are not
going to do it with computers (machines) as we know them
today--linear memory, von Neumann architecture, Turing rules.
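Taking the quoted figures at face value, a back-of-envelope check (the figures are Fare's, the arithmetic is mine) shows what the 2^60-bit budget would allow per connection:

```python
# Back-of-envelope check of the quoted estimate (Fare's numbers, my
# arithmetic): 10^12 nerve cells with thousands of connections each.
cells = 10**12            # claimed nerve cells
per_cell = 10**3          # "thousands of connections" per cell
synapses = cells * per_cell
budget = 2**60            # claimed bit budget for a brain and its state

print(synapses)           # 10^15 connections
print(budget)             # 1152921504606846976, about 1.15e18
print(budget // synapses) # about 1152 bits per connection
```

Even granting the arithmetic, my objection stands: counting bits says nothing about whether the organization of those bits can be captured at all.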
"As the systems becomes more elaborate, we'll switch to
higher-and-higher level languages!"
It's amazing that thus far we have only managed a few generations
of HLLs up from machine code. What is not amazing is that each
generation builds on the previous. For better than twenty years
now we have been stuck with a mere handful (by my count 5). Thus
far nothing suggested for a Tunes HLL has changed any of that.
Certainly not Slate. Nor Self. Scheme. Or any other candidate.
The problem here with any higher level language is that a clear
path must exist between it and the language of the machine.
Currently that path flows downward through the previous
generations. I draw from that that a higher HLL may allow an
"expressive ease" but not a "behavior" (emergent or otherwise) not
inherent in the lower levels. Basically it gets down to not doing
anything not possible in the machine instruction set. Now you may
have found a way free of such dependence, but I know of none.
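The "clear path downward" is easy to see in miniature. A toy compiler (my illustration) lowers a higher-level expression to stack-machine instructions; the high-level form buys expressive ease, but every behavior bottoms out in the instruction set:

```python
# A toy of the downward path from a higher-level form to machine-level
# instructions (my illustration). Nothing the source expresses survives
# except sequences of the target machine's own operations.

def compile_expr(expr):
    """expr is a name or a tuple (op, left, right); returns instructions."""
    if isinstance(expr, str):
        return [("PUSH_VAR", expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [("OP", op)]

def run(code, env):
    """Execute stack-machine instructions against variable bindings."""
    stack = []
    for instr, arg in code:
        if instr == "PUSH_VAR":
            stack.append(env[arg])
        else:  # ("OP", "+") or ("OP", "*")
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == "+" else a * b)
    return stack.pop()

program = compile_expr(("*", ("+", "a", "b"), "c"))  # (a + b) * c
print(program)
print(run(program, {"a": 2, "b": 3, "c": 4}))  # -> 20
```

However clever the source notation, the running behavior is exactly a sequence of the target machine's operations, nothing more.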
Ultimately it gets down to the machine instruction set and the
design of a machine which can create its own. That implies an
ability to create its own architecture. To evolve. I don't fear
such machines. I simply don't know how to make them other than
the way they have occurred since the beginning, which led to our
own existence.
More to the point, you do not design (loosely or otherwise) such
machines. If you don't design them, then you certainly don't
program them with software. You have no need for a HLL regardless
of level.<g> Moreover "it" has no need for one that "you"
develop.<g> At this point "you" become unnecessary. In the
ultimate "wisdom" of a dynamic universe in which things appear and
disappear that is a "truism".<vbg>
In geometry the whole is equal to the sum of its parts and is
greater than any one of them. I see in your writing in the Tunes
project a desire to change the first equality. I have seen
writings in which such a claim is made for living organisms. I'm not
here to argue one way or the other.
When I objected at the beginning of this series of messages to
your continued reference to the "system" doing this or that for a
system under development, I did so because we are engaged in
producing a single system, not one in which we could separate an
activity from its dependents. It only "does" it if we have "done"
it previously and thus have a reference to it. If we have not,
then the system is incomplete (and possibly either impossible or
in error).
I read into what you have written a belief that a little
"something extra" can spontaneously occur in software not inherent
in the instructions provided (externally). You have confirmed
that. I do not argue against it in the sense of opposing it. I
do not fear it in the sense of any danger it offers. I just think
that, in the interest of the "science" in computer science as you
have defined it, you should state how you feel it will occur.
The continued references to sophisticated, elaborate
meta^n-programming in software somehow evolving to such a state
are to this moron unconvincing based on experience to date. To the
best of my knowledge (and within all the examples you have quoted)
such has never occurred.
If you believe in "spontaneous generation" in software, then
certainly we should set up the scientific conditions in which in
your terms such is "predictable", "reproducible", and
"repeatable". So far no software hierarchy developed in any
language, singularly or in combination, has experienced this
phenomenon. Until it does it remains speculative and so stated
in references. Otherwise it is misleading.