Emergence of behavior through software

Lynn H. Maxson lmaxson@pacbell.net
Thu, 05 Oct 2000 09:27:32 -0700 (PDT)

Fare wrote:

"That said, your theories really sound like you believe life comes
from some extraphysical divine "soul" that somehow directs empty
physical receptacles that are bodies.  I bet your theory is 
isomorphical to this soul thing."

I would remind you that it is not I who gave software a "life of 
its own" based strictly on whether we understood it or not.  I try 
not to inject my faith or belief system as a causal element in a 
discussion of this nature.  Such beliefs are unprovable and have no 
place in such a discussion.

"I bet, that, by induction, you can recurse down the bigbang,
at which time there was some fully developed seed for each of 
modern-time species, as created by god.  This is just ridiculous."

You lose the bet.  I've been through this in another response.  In 
it I also did not inject God.  We do not know how life or the first 
organism started, or whether that question even makes sense in a 
timeless universe.  What we do know is that currently we have no 
means of creating life except through transmission from a living 
organism.

The question easily becomes: is there a difference between 
artificial (man-made) life and life as somehow formed within the 
universe?  The answer, I hope you would agree, is "no".  The 
difference lies in the process man uses to create life forms.  
Will that difference lie in attempting to replicate the process in 
the manner in which it naturally occurs, or not?

"Why couldn't they? I imagine AI and A-life programs precisely
as starting as some simple symbolic organism, and evolving 

There are two problems here.  One lies in the difference between a 
physical organism, one that exists physically in the universe, and 
a symbolic one that exists entirely within human systems.  The 
second problem lies in the means of their evolving.  You presume 
that you can give a symbolic organism life.  You regard software 
as such a symbolic organism and believe that, when initiated within 
a hardware body, the combination can become a life form, an 
organism.

"Maybe not yours. Maybe not most anyone's today.  Yet, I've seen 
people who did grow software.  Genetic systems, reflective expert 
systems, data-mining pattern-extracting systems, etc, do evolve
(and I hope to eventually bring them all together)."

How does software grow?  How does software evolve?  It grows by 
someone writing it.  It evolves by someone rewriting it.  Both 
involve direction by an external agent.  I don't care if it is 
genetic, reflective, data-mining, whatever.  It is not I who 
injects God-like responsibility into this discussion.
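
To make that concrete, here is a minimal genetic-algorithm sketch 
of my own (generic Python, not any particular system you have in 
mind).  The fitness function, the mutation operator, and the 
selection loop, that is, every element "directing" the evolution, 
are all written by an external author.

    import random

    # A toy genetic algorithm: the "organisms" are bit strings and the
    # goal is simply to maximize the number of 1 bits.  Every rule below
    # (fitness, mutation, selection) is supplied by the programmer, not
    # discovered by the software itself.

    GENOME_LEN = 20
    POP_SIZE = 30
    GENERATIONS = 50

    def fitness(genome):
        # The author decides what counts as "better": here, more 1s.
        return sum(genome)

    def mutate(genome, rate=0.05):
        # The author decides how variation is introduced.
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def evolve():
        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # The author decides how selection works: keep the fitter half,
            # refill with mutated copies of the survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[:POP_SIZE // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(POP_SIZE - len(survivors))]
        return max(population, key=fitness)

    print(fitness(evolve()))   # typically approaches GENOME_LEN

The population changes from run to run, but nothing in it ever 
rewrites the fitness function or the selection rule; those remain 
the author's.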

I accept that software will always reflect the writing of its 
authors regardless of how well they understand or can predict the 
results of what they have written.  They may very well have 
written it for just that purpose.  However that purpose remains in 
them and does not transfer into the software, which as a 
non-intelligent mechanism, as a non-life form, does exactly as it 
was written to do.

Evolution within a sequence of organism generations occurs from 
some intrinsic "property" within the organism with respect to the 
environment of which it is a part.  To achieve this with software 
means having no "external" writing, only "internal".  The 
challenge lies in programming the seed, that initial piece of 
software that acquires a "sense", an "awareness", and a "purpose"
of its own.

Now can that happen?  Without a better understanding of how these 
occur in living organisms we cannot rule one way or the other.  
However we can fairly safely rule out von Neumann architectures 
for the hardware and Turing rules for the software.  I'm not into 
denial here.  I do say we do not have the right "materials" 
currently.  If and when we determine just what those right 
materials are, we may find that hard-coding, not soft-, is the 
answer.

You keep talking about meta^n-programming and ever higher levels 
of languages, all of which we may understand, but in none of which 
have we invested the physical means, the computer architecture.  
No existing computer executes a meta^n-program or an HLL.  What it 
executes is their translation into a language (its instruction 
set) it is constructed to respond to.  I have to exercise caution 
here and not use such terms as "understand" or "know", because a 
non-intelligent mechanism can do neither.  Here the metaphorical 
use of language deceives.
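
As a toy illustration of that point, and only a sketch of my own 
(Python standing in for hardware, with an invented three-instruction 
repertoire), consider a "machine" that responds to exactly PUSH, 
ADD, and MUL.  The high-level expression never reaches it; only a 
translation into those primitives does.

    # A toy "machine" with a fixed instruction set: PUSH, ADD, MUL.
    # The high-level expression (2 + 3) * 4 never reaches it; only a
    # hand translation into these primitives does.

    PROGRAM = [            # "compiled" by hand from: (2 + 3) * 4
        ("PUSH", 2),
        ("PUSH", 3),
        ("ADD", None),
        ("PUSH", 4),
        ("MUL", None),
    ]

    def run(program):
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            else:
                # Anything outside the instruction set simply cannot run.
                raise ValueError("unknown instruction: " + op)
        return stack.pop()

    print(run(PROGRAM))    # prints 20

However many meta-levels the source text occupies, the only things 
this loop ever responds to are the three instructions it was built 
with.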

If the answer lies in what you propose in terms of languages, in 
terms of symbolic forms, then the machine must have the same 
ability as the authors in terms of "understanding", "intent", and 
"purpose".  That says that you cannot do it through software.  The 
software cannot execute without a machine.  The machine cannot 
"know" more than its instruction set.  Therefore that instruction 
set must be capable on its own to "understand" and "create" ever 
higher level of abstraction of its own.  That, my friend, is 
"evolution" from internal growing.

That is not a von Neumann machine.  Nor are the governing rules 
Turing.  The secret here lies in developing both in sync as a 
single system with no separation between directing and doing, the 
same system that occurs in every organism.  That means they do not 
grow with "external" assistance, but only in response to it as 
part of their interaction with their environment.

I would have thought as one respecting cybernetics that you would 
have at least picked this up from Ashby's work, an understanding 
of homeostasis, and the homeostat.  In retrospect I should have 
gotten a clue when you proposed that software should acquire all 
these attributes except "purpose" which remained that of the 
authors.  That would imply that you hold volition, awareness, 
thinking, understanding, knowing, and purpose as separable 
components and not as interdependent, integrated, interacting 
parts of a single whole.

"You blank out the notions of input and persistent state.  Not to 
talk about chaotic behavior and evolution."

None of these occur within software.  None of them change the 
execution sequences in software.  All execution sequences are 
programmer-determined, i.e. consistent with the embedded logic.  I 
haven't blanked out anything.  None of them changes anything 
relative to that embedded logic.
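
A small sketch of my own (generic Python, not anyone's particular 
system) of what I mean: input and persistent state select among 
execution paths, but every path selected is one the author wrote.

    # Input and persistent state vary from run to run, but the only
    # execution paths possible are the ones written below.

    state = {"count": 0}        # persistent state carried across calls

    def respond(user_input, state):
        state["count"] += 1
        if user_input.isdigit():        # a path the author wrote
            return "number seen (call #%d)" % state["count"]
        elif user_input == "":          # a path the author wrote
            return "empty input (call #%d)" % state["count"]
        else:                           # a path the author wrote
            return "text seen (call #%d)" % state["count"]

    for sample in ["42", "", "hello"]:
        print(respond(sample, state))

Different inputs and a growing state produce different outputs, yet 
no run produces a branch that is not already in the text of the 
program.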

"Rules offer partial information.  Internal state provides another 
body of information.  Still same blanking out."

Rules offer no information whatsoever.  Rules apply to inputs, 
producing outputs as well as changing internal states.  Internal 
states are data; inputs are data.  Software processes data.  It 
has no ability to "view" that data as information.  It is a 
mistake to imply that software sees meaning in anything that it 
does.

Software executes.  It doesn't even know it is doing that.  
Truthfully it will never know that.  Only that within which it 
executes, the physical form, can acquire that capability.  That 
form is not von Neumann-based.

"This is seemingly an essential point of disagreement between us:
you're obsessed with the low-level aspect of things, and do not 
accept that high-level structure may be independent from 
underlying implementational details."

I guess there is a difference between my view that the whole is 
equal to the sum of its parts and yours that it is greater than 
that sum.  Apparently also in your view not all "wholes" are 
created equal: apparently, somewhere in the evolving development 
of a whole, an inequality appears spontaneously.  Now just where 
and when remains a question.

Thus far in software we have not created higher-level abstractions 
not composed directly of lower-level ones.  And these eventually 
decompose into the instruction set of the host machine.  In fact 
in executable form no higher levels exist, only the instruction 
set.  So where and how in this architectural model, the von 
Neumann machine, do you program in an inequality?

You will on the one hand berate me for injecting "life" as an 
inequality into a physio-chemical equation.  To you this means 
that I see the hand of God in the process as well as a soul.  On 
the other hand you berate me for not allowing it in software, 
which you do.  The truth is that I don't inject an inequality in 
material composition to account for life, but rather something in 
material organization, some difference that exists when it 
exhibits "life" versus when it exhibits "death".  That says I am 
more for "composing" as 
a life process than "decomposing" as a death process.  In either 
case a continuum in process (sequence of sub-processes) occurs.  
This means in no instance does a material inequality occur.  I 
leave it to you to resolve your own contradiction.

"The details vary a _lot_, but the abstract structure stays the 
same.  Similarly, in as much as some high-level structure can 
implement a system capable of having "intelligent" conversation,
it doesn't matter whether the underlying implementational hardware
be human brain cells, interconnected electronics, silicon, or
software running on a von Neumann machine."

Here I think you and Alik Widge make the same mistake (IMHO).  You 
posit that some (existing) high-level system is capable of 
"intelligent" conversation even though you know that, if it is a 
human conversing with it, the intelligence is actually one-sided.  
I do not know whether two of these machines would converse 
"intelligently" with each other or, like us, tend to argue 
more.<g>  The fact is that the underlying hardware implementation 
does matter greatly.  Furthermore it lies well beyond existing 
software 
techniques to equip a machine with human-equivalent conversation 
capabilities.  Neither a von Neumann machine nor Turing rules will 
ever approach the conversational levels of humans...or even of far 
simpler organisms.

You keep acting like I am denying something or have some fear of 
future developments.  What I deny is the ability of current 
developments to do the job.  Instead of beating our heads against 
an impossible task let's get to the developments in which all this 
is possible, if not more likely.  Until you have embedded 
high-level constructs within a machine, something higher than the 
current instruction set, and embedded the ability for it to expand 
and extend them on its own, what you achieve with your 
meta^n-programming and ever higher HLLs will never transfer to the 
machine, the only place in which software comes to "life".

You have yet to provide an example not burdened by the 
restrictions of a von Neumann machine.  No amount of elaboration 
or sophistication will overcome those restrictions.  This makes it 
increasingly difficult to transfer (communicate) the author's 
thought processes to corresponding machine behavior.  The 
transmitter (the author) has the ability.  The receiver (the 
machine) does not.  Therefore the author has to translate his 
communication (the software) into the language of the machine.  It 
doesn't take much perusing of executable code to determine that a 
considerable amount is lost in translation.  Obviously, in terms 
of communication at a human level (which is what you desire at the 
machine level), that loss is considerable.

Improve the machine, make basic changes in its abstract form, and 
the need for software (external direction) becomes minimal.  Of 
course, what you get may be as blind, blanked out, and bull-headed 
as me.<g>

I think the current conversation between Billy and Alik offers 
more in substance relative to current hardware and software than 
will 
the pursuit of this.  Maybe we can return to it later after 
resolving some more practical issues.