Emergence of behavior through software

Francois-Rene Rideau fare@tunes.org
Sat, 30 Sep 2000 22:47:18 +0200


[Note to cybernethicians:
this is a discussion that was moved from the tunes@tunes.org mailing list,
that originated in the ambition for Tunes to eventually manage, using "AI"
techniques, implementation choices that are usually made manually, out of
a declarative description of possible elementary choices; the discussion
then drifted to the topic of emergent systems in general and AI in particular]

Dear Lynn,

On Thu, Sep 28, 2000 at 11:21:23AM -0700, Lynn H. Maxson wrote:
> Two questions.  Can software produce results not consistent with 
> the instruction of its author(s)?  Can its author(s) provide
> instructions that free it from such consistency?
>
> If the first answer is no, then the second is no also.
>
Indeed, the answer is no, but once again, the question misses the point.
For giving instructions does not mean understanding.
I may well genetically engineer a human clone with some genetic instructions,
and be unable to fully understand and predict the detailed behavior
of the clone I created.

In other words, the piece of running software you write
is not free from its own code that you originally provided;
but it IS free from the programmer, from you and me or other programs.
You give the initial seed, but it grows to have a life of its own;
it does have an individual history of information processed,
and you have to be it to fully grok its detailed behavior.
Even for simple programs, I may know the original instructions and still be
unable to predict the results (see: simulation of chaotic dynamic systems).
The only way to "predict" is to make the same computation independently,
which is then no longer prediction but postdiction; the program brings unique,
worthwhile information that would otherwise be unavailable.
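
Consider, as a minimal illustration, the logistic map, a textbook
chaotic dynamic system, sketched here in Python (the seed and step
count are arbitrary):

    # Logistic map: fully deterministic, yet unpredictable in practice.
    def logistic(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = logistic(0.3, 60)
    b = logistic(0.3 + 1e-12, 60)  # seed perturbed by one part in 10^12
    print(a, b)  # after 60 steps the two trajectories have fully diverged

Knowing the instructions is not knowing the behavior; running the
computation is the cheapest oracle of its own outcome.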

> It is the 
> second question which divides our opinions relative to software 
> extending boundaries on its own volition.
NO NO and NO.
We all agree on this question, and have since the beginning.
You believe it is the question, and you systematically avoided
our remarks that it ISN'T the question.
The question is about drawing a separation between doing and understanding.
We agree that doing is foremost, both more primitive and more fundamental.
As far as doing goes, we agree, and thus there need be no debate on that point.

> To me the issue is not one of my manually duplicating a process,
> but whether that process conforms to what I have prescribed.
We agree that the process conforms to what the programmer programs (duh!).
What we claim is that
1) in the general case, the program does not contain _all_ the information
 about the running machine, for persistent state accumulated
 from integrated I/O, internal growth, etc., does matter.
 The program is _partial_ information about the running machine.
 Hopefully, yes, it is correct information.
2) even for deterministic programs where it does "potentially" contain
 the information, nothing short of running the program will realize the
 potentiality, so that the program can achieve effects neither designed
 nor predicted nor predictable nor intended by the programmer.
 In other words, the cost of prediction does matter.
 This is not pure abstract maths.
3) not only can such emergent behavior be _beneficial_
 if constrained, checked and selected against
 some expressible utility criteria; but many beneficial behaviors
 that cannot be achieved by direct design can be achieved by such
 selection (see the sketch after this list).
4) actually, if you know Popperian epistemology, even the "design" phenomenon
 within the human brain works by such a principle of emergence and selection;
 what we claim is just that the principle works with machines as well.
5) many programs are already such emergent machines;
 in a complex compiler with thousands of rules, no one can claim a design
 of the system's details; only a design of the constraint system that forces
 the production rules to conform to the declared semantics of the language.
 Artificial emergent systems are _already_ useful and can be improved upon
 for more utility, independently of whether they eventually reach "AI".
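
Here is a toy sketch of point 3, in Python (the target string and the
utility function are arbitrary stand-ins for any expressible criterion):
the programmer designs only the criterion to select against; the solution
itself emerges from blind variation plus selection, with no designed path
leading to it.

    import random

    TARGET = "emergence"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def utility(s):
        # The only thing the programmer designs: a score to select against.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s):
        i = random.randrange(len(s))
        return s[:i] + random.choice(ALPHABET) + s[i+1:]

    candidate = "".join(random.choice(ALPHABET)
                        for _ in range(len(TARGET)))
    while utility(candidate) < len(TARGET):
        variant = mutate(candidate)
        if utility(variant) >= utility(candidate):  # select, do not design
            candidate = variant
    print(candidate)  # reaches the target without a designed path to it

The pattern scales beyond this toy: constrain the outcome,
not the construction.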

> Now we have a non-intelligent machine into which we load software.
> Is it possible to embed intelligence into software?  No.
I deny meaning to your sentence.
Intelligence is NOT a constructive feature
that you code or do not code into software.
Just like purpose, it is an observable property of emergent phenomena.
Much as you can constrain an observable property,
you cannot build an explicit construction of that property.

Only for very simple kinds of programs can we control
both the construction and the observable properties;
these programs are all the more interesting for this extraordinary property,
and constitute the bricks and mortar with which we can engineer software.
But to limit the study and development of programs to these simple ones
is a very short-sighted or timid approach to software development.


> The software like the machine only does what it is instructed to do
> without an "awareness" that it is doing anything.
This is a gratuitous statement.
Your brain works according to the instructions from its genetic
and educational background, yet is "aware" of its doing something
(though of course it's only "aware" of a tiny part of itself).
The same applies to machines.

> Therein lies the problem.  We experience what we refer to as
> intelligence.  In truth we do not know how it occurs within us.
Exactly. There's no reason why machines couldn't be intelligent
without either them or us knowing how it occurs within them.

> Maybe with the invocation of the "genetic code" which is an
> instruction set for construction, not operation, it arises
> somehow.  However we have yet to find a means of transferring it 
> in any manner as an attribute for a computer or software.
Indeed. I do NOT claim that we know how to do it yet;
neither do I claim certainty that we'll ever know,
nor that we won't find a theoretical impossibility to our ever knowing.
But I do claim
* that even stupid, emergent systems can do more than fully designed systems,
* that there is room for a lot of improvement
 in the engineering of emergent systems, and
* that a reflective infrastructure is well suited to such innovative
 use of emergent behaviors.
[Note: technical questions about the latter are
to Followup-To: tunes@tunes.org; philosophical discussions here.]

> After a while the logic gets circular.
YOU make it circular, and then reproach us for it.
I fully agree with your assertions; but I claim that they lack relevance.
I am trying to say something quite different from what you are opposing here.
Do you acknowledge what I'm trying to say?

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
To converse at the distance of the Indes by means of sympathetic contrivances
may be as natural to future times as to us is a literary correspondence.
		-- Joseph Glanvill, 1661