Emergence of behavior through software
Lynn H. Maxson
Fri, 06 Oct 2000 00:58:25 -0700 (PDT)
I apologize for not getting Alik's name right. It's sitting in
front of me, a mistake I should not have made. Nevertheless, Alik Widge writes:
"The basic problem is that we can't set up the biochemical
clockwork and then add the single push to get it all rolling. We
need to start with a running engine and cobble the parts onto it
as we go. OTOH, there are plenty of people trying to recreate the
primordial soup, so perhaps someday they will demonstrate
spontaneous generation of self-sustaining processes. There is
nothing which says that we *need* an organism, but starting with
one is significantly simpler, so we do that."
We are in basic agreement here. That's why we are in disagreement
with respect to software-initiated life. As far as I know all
organisms are carbon-based. I assume that's why we call the
chemistry associated with carbon organic chemistry. I have no
doubt at some point we will crack the code that initiates the
temporal instability within a chemical substance that we call
life. A temporal instability, a transient in a process that leads
to death. A high-information, low-entropic instability incapable
of sustaining itself indefinitely from a low-information,
high-entropic state called death.
What is it that you would do in software? Certainly not create a
life form. With software you can only mimic. The best you can
achieve, the best we have ever achieved is useful mimicry. Now
you have two problems, one, that a difference remains between
being logically equivalent (the best that you could ever achieve)
and identical (which you can never achieve in software), and, two,
even logical equivalence to that degree is probably neither
possible nor practical.
Why? Call it chemistry. I gave you the example of growing
silicon crystals because that's a much simpler chemistry to mimic
in software than that of an organism. Yet no matter how you write
the software or select the host computer, you are not going to end
up with a silicon crystal. Logically equivalent, yes. Identical,
no. Is there a practical difference? Which one can you use to
build the computer in which you will run your silicon-generating
software?
"But it *does* have a beginning. There is a point in what we call
time (which, although it may have arbitrary divisions, is also
considered to be a physical dimension, and is therefore "real" in
some sense) before which there appears to have been nothing."
Nothing which exists only in human-created systems is real outside
that context. Not the physical rules. Not the chemical ones.
Not the mathematical ones. Not even time. They are no more than
continually changing maps distinct from the territory they
supposedly describe. That doesn't mean that we do not find them
useful. It simply means that they fill a need we have, not one of
a universe which has no such problem, which has no needs period.
I realize that plus and minus infinity are useful crutches due to
our language and the impact it has on our thinking. We have
beginning and end because we cannot accept that any process can
have avoided either. That's a trap we have set for ourselves, not
one for a universe which doesn't reflect on what's happening. If
you want to accept a theory that all matter did not exist before
the big bang simply because the mathematics dictates it, you may.
I will simply assume it's a map error.<g> Or it was an act of
God, because we cannot have an effect without a cause.
"This seems rather disjointed. Crystals must be formed in a
specific way, yes. It may be true that minds require specific
underlying patterns. However, there is no evidence that those
patterns cannot be implemented as software or hardware."
Here again we are dealing with an organism where what it does and
how it does it are indistinguishable. What we call hardware and
software are one and the same. We do not have high-level software
and low-level machines. They have identical levels, because "it"
is not a "they". As someone with experience in neuroscience you
also know that the brain does not engage in sequential logic.
That we can does not mean the brain uses it in support of our ability.
The problem is that you want to program something that doesn't use
programming. No matter the genetic code or the cell
differentiation they only spawn the abilities. They do not direct
them. Ashby with his homeostat showed that you only needed an
interconnected structure which adapted without instruction because
it was "inherent", "intrinsic". He upset no end of people who
would play God by showing that God (deus ex machina) was unnecessary.
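Ashby's point can be illustrated with a toy simulation (a minimal sketch of my own, assuming a much-simplified homeostat, not his actual electromechanical device): coupled units re-select their connection weights at random whenever their state leaves a viability band, and a stable configuration emerges without any instruction directing the adaptation.

```python
import random

# Toy homeostat sketch (illustrative only, not Ashby's actual device).
# Four coupled units; whenever a unit's state drifts outside its
# viability band, it randomly re-selects its input weights. Nothing in
# the code "directs" the final configuration; stability is stumbled
# upon, which is the sense in which the behavior is inherent/intrinsic.

N = 4
LIMIT = 1.0                              # viability band: |state| <= LIMIT
random.seed(42)

weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-0.5, 0.5) for _ in range(N)]

def step(state, weights):
    # each unit relaxes toward the weighted sum of all units (damped)
    return [0.5 * s + 0.5 * sum(w * x for w, x in zip(weights[i], state))
            for i, s in enumerate(state)]

for _ in range(10000):
    state = step(state, weights)
    for i in range(N):
        if abs(state[i]) > LIMIT:        # unit left its band:
            weights[i] = [random.uniform(-1, 1) for _ in range(N)]
            state[i] = random.uniform(-0.5, 0.5)   # uniselector-style reset

print(all(abs(s) <= LIMIT for s in state))   # → True
```

The stabilizing weight configuration, when one persists, is selected by the reset rule rather than specified anywhere in the program.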
"Except that a neuron can in fact compute in just that manner. I
would refer *you* to something as simple as the cells of your
retina. Shine a light on one, and it turns on. Remove the light,
and it turns off. (Others turn off by light and on by dark. Same
principle.) The thresholding behavior of neurons is not much
different from a digital gate: if you're close enough to +5V, you
get 1, otherwise you get 0."
The difference, of course, lies in "not much different". It is a
difference which counts. First off, a neuron is not an on/off
digital gate. For one, it gets "tired" and sometimes doesn't produce
an output logically indicative of the input. Sometimes what it
produces is not sufficient to excite the next connection depending
upon its current state. What you get is a statistical mishmash of
a highly parallel, distributed, interconnected flow. Much the
same occurs within the cells of the retina which may excite one
time and not the next.
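The contrast can be made concrete with a crude model (my own illustrative numbers; real neuron dynamics are far richer): a digital gate maps the same input to the same output every time, while a neuron whose threshold drifts with fatigue and noise can answer the same stimulus differently from one moment to the next.

```python
import random

# Crude comparison (assumed toy model, not biophysics): a fixed-threshold
# digital gate versus a "tired", noisy neuron whose effective threshold
# depends on its recent firing history.

def gate(v):
    return 1 if v >= 2.5 else 0          # close enough to +5V -> 1, else 0

def make_neuron(base=2.5, fatigue_step=0.3, noise=0.5):
    state = {"fatigue": 0.0}
    def fire(v):
        threshold = base + state["fatigue"] + random.uniform(-noise, noise)
        out = 1 if v >= threshold else 0
        # firing raises fatigue; rest lets it decay
        state["fatigue"] = 0.9 * state["fatigue"] + fatigue_step * out
        return out
    return fire

random.seed(1)
neuron = make_neuron()
inputs = [2.6] * 20                      # twenty identical stimuli

gate_outputs = [gate(v) for v in inputs]
neuron_outputs = [neuron(v) for v in inputs]

print(gate_outputs)      # all 1s: deterministic, history-free
print(neuron_outputs)    # mix of 0s and 1s: statistical, history-dependent
```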
Given how well you understand the retina, I'm surprised that you
don't implement it with software and a host computer. I don't
know what it would see, but maybe if you connect it to that which
mimics the brain, you could be on your way.<g>
"But that is *not* how the brain behaves. You will most likely get
very poor results if you rewire the optic nerves to auditory
cortex and auditory to visual cortex. Brains do not begin as
randomly wired networks; the DNA itself contains "bootstrap code"
to organize structures and begin regulation."
The point of the homeostasis-based autopilot and the fixed-logic
one was not to suggest that the brain operated in such a manner,
but that there was a means of exhibiting goal-seeking (adaptive)
behavior structurally without a separation between the direction
and the doing. In short it is built in, integral within the
structure. What we call "adaptive behavior" or even "life" arises
from the conditions of the processing structure. One thing that
it is not is sequential logic. One thing that software is and
always will be is sequential logic. That's your Turing machine
that you say can do anything the brain can do.
Take a look at languages specifically designed for parallel,
distributed systems and at the hardware specifically designed as
well. Find one simultaneous, majority-logic computer architecture
that has an HLL with the same capability. It's not that one or
the other doesn't exist. I suspect that if you looked at the
"innards" of Big Blue, which accomplished only in part a very small
piece of what you propose to implement in software and a host
computer, then volition, emotion, mind, thinking, feeling, seeing,
and acting would roll off your tongue far less easily.
"This is also an assertion, and one which I think has been at
least partially proven false. We can in fact produce the signals.
We need to get the physical wiring down, but that's a simple
matter of engineering. Give it two decades."
Why should any simple matter take two decades? It must not be that simple.
"Do you claim, then, that physics does not derive from the
eminently logical system of mathematics?"
I have no clue what connects this to the non-logic-circuit basis
of an amoeba. For the record I make no such claim.<g> Although
you may get an argument from physicists.
"If this is true, explain those who have sensory deficits. Seems
to me that they're functioning quite nicely on a subset of their
The point is that whatever sensory capability they have is
integral with the brain. If they lose a sense organ, that in no
way diminishes the corresponding capability of the brain: the
capability remains. If they lose that capability of the brain, the
sense organ retains its capability. For the system to work they
must both work as "one".
"And you cannot show that this is impossible. We may not know how
to do it, but that does not mean it is impossible."
Well, it gets back to chemistry and whatever it is that allows
life its interval with an organism. Software is not chemistry nor
is the instruction set of a host computer. The host computer may
be chemistry, but it is not of the kind that sustains life. Now
you either believe that life is formed only from carbon-based
matter or you do not.
If you do, then what you propose even in creating an artificial
life is impossible. If you do not, then it is up to you to show
how software in a computer can exhibit all the properties we
associate with life forms. Not the least of which is the lack of
software distinguishable from the hardware. An organism is an
integrated system and functions as such. You keep wanting to
program that which requires no programming.
"That's not the point, though. If you accept this as true, you see
how any program could be created without anyone having the intent
to create that specific program. The process would need to be
optimized, but I only desired proof-of-concept. This seems to
partially deny your idea that randomness can do nothing for the
idea of the emergent system."
I think here your problem is greater than any objection I raise.
I will concede that it is theoretically possible to create any
specific program using random opcode selection. I will not
concede that it is practical or that given a zillion
interconnected machines at 1000MHz it will occur in less
than a million years. I leave it up to others more familiar with
probability to give you the actual odds.
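For what it's worth, back-of-envelope arithmetic (my own illustrative figures: a 100-byte target program, uniform opcode selection, a billion machines each drawing a billion candidates per second) puts the expected wait far beyond a million years:

```python
# Back-of-envelope odds of random opcode selection hitting one specific
# program. All figures are illustrative assumptions, not measurements.

program_bytes = 100                      # a tiny 100-byte target program
p_hit = 256.0 ** -program_bytes          # one uniform draw matches exactly

machines = 10 ** 9                       # a billion machines...
tries_per_second = 10 ** 9               # ...each testing 10^9 candidates/s
tries_per_year = machines * tries_per_second * 3600 * 24 * 365

expected_years = 1 / (p_hit * tries_per_year)
print(f"{expected_years:.3e} years")     # on the order of 10^215 years
```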
Nevertheless you have your proof-of-concept even if useless. Now
you propose to optimize a random process. I can only assume that
you intend to do what we do now which is to remove the randomness
through the use of fixed logic.
I'm not aware that I said that randomness can do nothing for the
idea of emergent systems. You have two choices for random
selection, you can choose data or you can choose an instruction
path. What you do with either choice is completely determined
(consistent) with the embedded logic. The software may use random
selection, but there is nothing random in the embedded logic.
Thus references to emergent software systems differ not one whit
relative to their consistency to the embedded logic. They have
the same logical consistency as does non-emergent software. This
consistency prevents a software system from ever acquiring a
capability not contained within the embedded logic. Fare believes
that you can somehow transcend this from within the software
itself using meta^n-programming or ever higher-level programming.
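A small sketch (hypothetical names, my own construction) shows the point about embedded logic: even when a program chooses its instruction path "randomly", the run is completely determined by the seed and the fixed alternatives the logic offers; nothing outside that logic can emerge.

```python
import random

# "Emergent-looking" software whose instruction path is chosen at
# random. Given the same seed, the whole run replays identically:
# the randomness selects among fixed alternatives but never adds a
# capability absent from the embedded logic.

def emergent_looking_run(seed, steps=10):
    rng = random.Random(seed)
    value, trace = 0, []
    for _ in range(steps):
        op = rng.choice(["inc", "dec", "dbl"])   # "random" instruction path
        if op == "inc":
            value += 1
        elif op == "dec":
            value -= 1
        else:                                    # "dbl"
            value *= 2
        trace.append(op)
    return value, trace

# same seed -> same path and same result: no escape from the logic
print(emergent_looking_run(7) == emergent_looking_run(7))   # → True
# a different seed may take a different path, but still only among
# the three operations the logic embeds
print(emergent_looking_run(8))
```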
" I do not say that a TM "has emotion"; I am rather saying that
emotion may simply be the output of a particular computational
process within the brain."
You see it all hinges on what is included in compute. If you mean
that which is possible on a von Neumann machine, the answer is no.
Emotion is not an output of a process, but part and parcel of it.
Emotion is a process, as are volition, thinking, feeling, etc.
They are not separate nor separable from each other, but melded
within the same overall process. As one who studied neuroscience
you should know that. Don't make Descartes' error. Read the book.
"Each individual neuron has a definable input-output behavior. As
such, it computes a function, and as such, it is theoretically
replaceable by a TM."
Nice try, but no. Once you get by the difficulties of logically
representing the "definable" part relative to energy levels,
interconnection resistance, persistence, and repetitive rates, you
are going to run into a wall on the "function" part, if for no
other reason than it doesn't exist at this level. A neuron either
fires or it does not depending upon the circumstances at that
moment. That's its only function at its level. If you want a
Turing machine to execute or not execute billions of neurons
simultaneously, be my guest. I guess it is one of those
theoretical proof-of-concepts that you enjoy.
"Chain enough of those together to replicate the limbic circuits
and you may well have artificial emotion."
"Until we get clever enough to try it, you cannot claim that it is
I'm not aware that my claims have any less validity than yours.
However I am more than willing to change my claim to "highly
improbable": mimicking it in the limit, as you would life in
software, to the point that you can't tell the difference.<g>
"I am making no argument that the Tunes project should try to
build software organisms. That is not possible based on current
knowledge. However, you are apparently arguing that it will never
be possible, and I consider this exceptionally short-sighted."
Interesting. Both you and Fare hold that we are some decades away
from any ability to state it one way or the other. I consider it
short-sighted to pursue the unknown when we have yet to exhaust
the known. I believe that you will only create life as such with
all its properties using carbon-based technology and never with
von Neumann architecture and Turing-based software. There is a
chemistry of life, bound to actual physical material, that no
matter how you mimic it in software will always leave something
missing.
Beyond that I see no purpose in it. There is nothing in Tunes in
terms of results either in operating systems or HLLs which
requires more than what we know currently. Fare wants to give
software a "life of its own" except for "purpose" which we will
retain. He doesn't see that the one contradicts the other,
because life's processes, simultaneously present in the organism,
do not allow for such separation.
You want to create artificial life because everything in your
universe is somehow expressible in a Turing machine. I would
simply suggest that you reexamine it. I see no sense in
artificial life, because success means loss of a tool. Do you
want to reinstitute slavery? Do you want yet another source of
mis-communication? Do you think that artificial life offers us any
more than software could offer without it?
The point is to use software and hardware technology in ways that
extend our capabilities. Who can be opposed to that? Artificial
life, something that replicates what we are only thousands of
times slower on 1,000,000MHz machines and at 100,000,000 times the
cost, makes no sense at all in my opinion. Artificial limbs,
artificial organs, yes. I personally would prefer non-artificial
ones, regenerated through biology.
I see software and hardware as a tool. I don't see artificial
life as such. Your choice.