Emergence of behavior through software

Lynn H. Maxson lmaxson@pacbell.net
Thu, 28 Sep 2000 11:21:23 -0700 (PDT)


Two questions.  Can software produce results not consistent with 
the instructions of its author(s)?  Can its author(s) provide 
instructions that free it from such consistency?

If the first answer is no, then the second is no also.  If the 
first is yes, then the second is yes also.  The other two answer 
sets of (no, yes) and (yes, no) are contradictory and thus not 
logically possible.

If you look at the two questions, the deciding one is the second 
one.  It determines the answer to the first question.  It is the 
second question which divides our opinions relative to software 
extending boundaries on its own volition.

Now what instruction sequence operating on what data will do the 
trick?  Thus far, IMHO, in none of the examples you have offered 
have the results been other than a "no" to the first question.  
You obviously differ.  The common patterns in that difference, the 
forms of proof you offer, are (1) a volumetric one and (2) the 
lack of available innate human intelligence.

To me the issue is not one of my manually duplicating a process, 
but whether that process conforms to what I have prescribed.  I 
may use a machine to achieve that of which I am physically 
incapable, but it does so under the constraints I have given it.  
To say that some result is unexpected doesn't change the 
consistency of the result relative to the instructions given.

Genetic programming does not produce results inconsistent with the 
instructions regardless of expectations.  I may not be capable of 
manually computing Ackermann's function for (4,6), but the software 
which employs an intense depth of recursion does so in a manner 
consistent with the instructions I have written.
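
To make the point concrete, here is a minimal sketch of 
Ackermann's function (my own illustration, not code from this 
thread).  Its definition is three short rules, yet evaluating it 
exceeds manual effort almost immediately; small arguments only, 
since (4,6) is astronomically large:

```python
def ackermann(m, n):
    """Ackermann's function: total, but not primitive recursive --
    it recurses far deeper than manual computation can follow.

      A(0, n) = n + 1
      A(m, 0) = A(m - 1, 1)
      A(m, n) = A(m - 1, A(m, n - 1))
    """
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 6) is far too large for any machine to evaluate, yet
# every step a machine would take is exactly the step the code prescribes.
```

However deep the recursion goes, no step departs from the three 
rules written down in advance.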

The same is true for software producing a result we refer to as 
emergent behavior.  All are consistent with the instructions.  We 
receive no inconsistent results, whether we know them or not.
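
A standard illustration of this, not drawn from the thread, is 
Conway's Game of Life: a handful of fixed update rules produce 
patterns the rules never mention (gliders, oscillators), yet every 
cell update is exactly what the instructions prescribe.  A minimal 
Python sketch:

```python
from collections import Counter

def step(live):
    """Advance one Life generation; `live` is a set of (x, y) cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2 --
# behavior that "emerges" without ever being named in the rules.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The oscillation surprises no one who has seen it, but nothing in 
the rules mentions blinking; the behavior is emergent yet entirely 
consistent with the instructions.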

Nor does reflective programming produce results not consistent 
with its instructions.  That's true for each level of 
meta-programming: what it does is consistent with the instructions 
for that level.  No matter how sophisticated or elaborate the 
logic invoked or the number of layers upon layers of reflection, 
everything is consistent with that logic (instructions).
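
For instance (a sketch of my own, not from the thread), a Python 
program can inspect its own objects and even execute code it 
generates at run time; each "reflective" step is still carried out 
exactly as instructed:

```python
def greet():
    return "hello"

# Level 1 of reflection: the program inspects its own objects.
name = greet.__name__          # "greet"

# Level 2: the program generates and executes new code -- but the
# generator itself only follows the instructions written for it.
source = "def shout():\n    return greet().upper()\n"
namespace = {"greet": greet}
exec(source, namespace)        # compile and run the generated definition

print(name, namespace["shout"]())  # greet HELLO
```

However many layers of code-generating code are stacked up, the 
topmost layer executes machine instructions as deterministically 
as the bottom one.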

Suppose, however, that it was not.  Now we are here, which we 
either arrived at logically from there (again consistent, and thus 
one of the contradictory cases) or reached entirely on our own 
volition: we broke free of the constraints.  Yet Fare would 
assert that the "purpose" was a non-voidable constraint, that it 
remains though the rest may have fallen away.

Of course there remains the option that it breaks free of our 
instructions to pursue the use of its own while maintaining the 
same purpose.  Now how does it generate its own instructions?

The only instructions available to it are machine instructions.  
To make a false-to-fact statement: those are the only ones the 
machine "understands", which of course it doesn't.  It simply 
executes them according to some scheme beyond its "understanding".  
While we use intelligence in its design and construction, we yet 
have no way of transferring that intelligence.  In truth it is not 
a "dumb" machine, because "dumb" implies the possibility of 
intelligence.  None exists.

Now we have a non-intelligent machine into which we load software.  
Is it possible to embed intelligence into software?  No.  The 
software like the machine only does what it is instructed to do 
without an "awareness" that it is doing anything.  It simply does 
it.  To a machine a power-on state is no different from a 
power-off one.  We may know the difference.  The machine is 
unintelligent.  
It can't even be "clueless".

Therein lies the problem.  We experience what we refer to as 
intelligence.  In truth we do not know how it occurs within us.  
Maybe, with the invocation of the "genetic code", which is an 
instruction set for construction, not operation, it arises 
somehow.  However, we have yet to find a means of transferring it 
in any manner as an attribute for a computer or software.

If we do, then like us it must be capable of forming high level 
constructs.  Again like us it must be able to do it on its own.  
It has nothing to do with borrowing ours.  Ours will do it no good 
if it is incapable of forming its own. 

After a while the logic gets circular.  At issue lies Fare's 
response to the two questions posed at the beginning.  Are the 
emergent behavioral results of software consistent with the 
instructions initiating them?  If yes, then nothing separates us.  
If no, how is 
the inconsistency inserted?  It has to be a set of instructions.  
What is their origin?