A revolutionary OS/Programming Idea

Lynn H. Maxson <lmaxson@pacbell.net>
Wed Oct 8 23:50:02 2003


Li, Xiao Feng writes:
"Hi, Dear Lynn, what you described are really interesting, but I 
believe the adaptiveness for an individual system is not 
essential if we have an reproduction and selection mechanism 
among a group, where the death of the non-adaptive 
individual is even more vital and the adaptiveness is achieved 
generation by generation in a species-level instead of 
individual-level."

If you want me to agree to a general species survival principle 
that there is safety in numbers, you have it.  I don't believe I 
have said anything to the contrary.

You have to appreciate what Ashby demonstrated with his 
homeostat: that a goal-oriented system, essentially the kind 
associated with homeostasis in humans, could adapt to 
changes in its environment without any form of external 
intervention.  No "deus ex machina", no finger of God, no 
initial instruction set, no starting program.

Moreover the system, the homeostat, responded to external 
changes as a whole in an entirely unpredictable, 
non-deterministic manner: a given state A did not necessarily 
lead to a given state B.  This flies in the face of control 
system theory and control system synthesis.  A connection 
involving feedback dynamically, i.e. unpredictably, switched 
between positive and negative.

The homeostat demonstrated adaptive behavior.  If it could 
not achieve homeostasis within some indeterminate interval 
of time, it failed, i.e. died.  In short, it kept 
attempting to maintain homeostasis until its resources failed it.
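
To make that mechanism concrete, here is a minimal sketch in 
Python of the "ultrastability" idea: a handful of coupled 
variables driven by weighted feedback, where any variable 
escaping its limits triggers a blind, random re-draw of the 
weights, the analogue of the homeostat's uniselector stepping 
to a new configuration.  The model, its parameters, and the 
settling criterion are my own simplifications, not a 
reproduction of Ashby's actual four-unit device.

import random

# A toy, not Ashby's device: N coupled variables, each pushed by a
# weighted sum of all of them.  When any "essential variable" leaves
# its safe range, the weights are re-drawn at random -- the analogue
# of the uniselector stepping to a new, untried configuration.

N = 4                # number of coupled variables
LIMIT = 1.0          # safe range for each essential variable
STEP = 0.1           # integration step size
SETTLE = 1000        # consecutive in-bounds steps counted as homeostasis
MAX_STEPS = 100000   # the "resources" available before the run fails

def random_weights():
    # Weights may be positive or negative, so any feedback path can
    # switch between positive and negative across reconfigurations.
    return [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def run():
    state = [random.uniform(-LIMIT, LIMIT) for _ in range(N)]
    weights = random_weights()
    in_bounds = 0
    for step in range(MAX_STEPS):
        state = [s + STEP * sum(w * x for w, x in zip(row, state))
                 for s, row in zip(state, weights)]
        if all(abs(s) <= LIMIT for s in state):
            in_bounds += 1
            if in_bounds >= SETTLE:
                return step              # homeostasis achieved
        else:
            weights = random_weights()   # reconfigure blindly
            state = [max(-LIMIT, min(LIMIT, s)) for s in state]
            in_bounds = 0
    return None                          # resources exhausted: it "dies"

if __name__ == "__main__":
    result = run()
    if result is None:
        print("failed to reach homeostasis within the resource limit")
    else:
        print("reached homeostasis after", result, "steps")

Depending on the run, the toy either settles after some number 
of random reconfigurations or exhausts its step budget, the 
analogue of the homeostat failing, i.e. dying, when its 
resources run out.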

Now Ashby was pilloried for this work, because he eliminated 
the need for God, the "deus ex machina".  You would think 
scientists would welcome such a demonstration.  
Unfortunately it also rendered unnecessary the application of 
control system synthesis to adaptive behavior in living 
organisms, and undercut the prevalent belief among those in 
this thread that it is even possible to do so.

Now we haven't established the basis for adaptive behavior in 
living organisms.  We have this one demonstration of it 
occurring outside the realm of control system synthesis.  We 
speculate, which falls far short of demonstrating, that at 
some future time, when we can "realistically" emulate the 
neurons of the brain, we will be able to write the program.

If we cannot, then artificial intelligence will remain just that: 
artificial, never real.  That doesn't make it useless.  That 
we may never "cross over" does not mean we should not 
continue the quest for its improvement.  That it's not the way 
we humans actually adapt does not detract from the value to 
us of its advancement.

Yes, in general there is safety in numbers.<g>