Emergence of behavior through software

Francois-Rene Rideau fare@tunes.org
Tue, 26 Sep 2000 15:21:34 +0200


Meta-message for list control:

Dear all,
   this discussion has strayed far off-topic for the tunes@tunes.org mailing list.
I propose that it be moved to cybernethics@tunes.org where it is on-topic,
and otherwise stop cluttering the tunes list.

------>8------>8------>8------>8------>8------>8------>8------>8------>8------
Base message, about meta control:

On Mon, Sep 25, 2000 at 11:26:33AM -0700, Lynn H. Maxson wrote:
> Fran‡ois-Ren‚  VB Rideau wrote:
Ouch. Bad IBM extended characters declared as US-ASCII!
Make that Francois-Rene DVB Rideau in real ASCII.

> The only question remaining with respect to this "universal 
> machine" lies in whether such a division of labor is necessary, 
> whether people are necessary or not.
No one is "necessary". A better criterion is "useful"
(but let's not discuss ethics too deeply in this forum;
maybe cybernethics@tunes.org is better suited).
Nothing remotely suggests that people will be made useless
by the gradual emergence of more and more complex information processors.

> More to the point do you believe sentient behavior possible in a
> machine?
I do not exclude the possibility,
which I think is _eventually_ likely,
although I predict that no such thing
will happen within the next few decades.
See also the quote below my .sig...

However, once again, there is a wide gap between dumb machines
that are fully designed and sentient machines that can rival us.
I contend that useful emergent behaviors whose detailed design
is essentially unfathomable to even the most expert humans
already exist and will keep growing in complexity and in application range.
I contend that this is good, since it relieves us from many boring tasks,
independently of whether this growth eventually leads to AI.
I contend that we may accelerate the growth of such emergence
by using a reflective system.

> Humans apply "meaning" to data.  If a human doesn't 
> understand the data presented it is "meaningless".
Meaning is a knot that ties the world of internal events
to a world of external events.
It is acquired by years-long feedback between the two worlds.
Indeed, providing the right feedback is a major challenge
for making information processing more "intelligent".

> The question is can machines use machines to do data mining?
They already do.

> Given its absence previously (as occurs in
> humans) how does a machine develop a concept of data mining?
How do humans grow the concept? By interaction with other humans.
Human intelligence is largely a fruit of social behavior.
Why should machines be forbidden such interaction?

> Through sophistication, elaboration, unlimited levels of dynamic
> meta-programming, or by accident?
By the very definition of a discovery,
we can't predict the right path before we actually take it.
Don't close opportunities.
Let each one pick one's path, according to one's own heuristics,
and let the right path win.
This is valid for unaided humans.
This is also valid for computer-aided humans, or human-aided computers.

> How do you program accidental behavior?
> Certainly not through any machine-producable, random
> selection process?
Why not? It seems to me that genetic programming already produces
unexpected results through selection of random processes.
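
As a toy illustration, here is a sketch of mine in Python, in the spirit of
Dawkins' "weasel" demonstration rather than any actual GP system: the target
is written down as a fitness criterion, but the path by which it is reached,
and how many generations it takes, is not designed by the programmer; it is
produced by selection over random variation.

import random

TARGET = "emergent behavior"          # the fitness criterion of this toy example
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # count the characters that already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # random variation: each character has a small chance of being replaced
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

best = "".join(random.choice(ALPHABET) for _ in TARGET)   # start from pure noise
generation = 0
while best != TARGET:
    generation += 1
    offspring = [mutate(best) for _ in range(100)]
    best = max(offspring + [best], key=fitness)           # selection step
print(generation, best)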

> "Oh, you may certainly check that each step in the multi-gigabyte 
> trace is correct, but that won't make you _understand_ the 
> computer's work at all."
>
> Then I probably will take comfort in that the computer will not
> know that it does not understand. Therein lies the difference.
>
How is that a comfort (or a discomfort, or anything relevant)?
There are many things I do not understand,
and I happily do not waste my time thinking about them.
The computer need not "understand" the stuff it does
(it never will; neither will we "understand" what we do),
it need only do it.

> In none of your examples does the computer "know" what it is doing 
> it simply does.
Do _you_ know what you are doing?
Do you sentiently control the primitive mechanisms within yourself?
Of _course_ doing is more primitive than knowing! So what?

> What it does we have to verify to insure that we
> instructed it correctly.
Not quite; what you say is valid for
real-time or otherwise critical controllers in computers or appliances,
that ought to run in time, unchecked by any human.
But when we're talking of interactive aids in information processing,
this constraint does not apply, and
we may be quite satisfied with checking the result instead;
and indeed even this checking can be computer-aided.

> The computer (the hardware and the
> software) has no such means available to it.
Of course it can include automated validity checks to ensure
the programs dumped by its metaprograms are up to spec.
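
For instance, here is a minimal sketch of mine in Python (not TUNES code;
generate_power and up_to_spec are invented names): a metaprogram emits a
specialized program as source text, and the result is accepted only if it
agrees with a trusted reference on a battery of test inputs.

def generate_power(n):
    # the metaprogram: unrolls x**n into explicit multiplications
    body = " * ".join(["x"] * n) if n > 0 else "1"
    source = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(source, namespace)          # bring the generated program into existence
    return namespace[f"power_{n}"], source

def up_to_spec(program, n):
    # the validity check: compare against the trusted reference x**n
    return all(program(x) == x ** n for x in range(-10, 11))

program, source = generate_power(5)
if up_to_spec(program, 5):
    print("accepted:\n" + source)
else:
    raise ValueError("generated program rejected by the validity check")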

> I do not say any of this out of fear.  I do not fear their gaining 
> this capability, but so far we have been unable to transfer (or 
> even understand) what allows it in us to them.
Recognizing our present failure is quite different
from excluding any success for the centuries to come.

> For example,
> regardless of variations on a theme all computers use sequential
> (linear) memory.  It is a distinct hardware component.  Yet no
> such memory, no such component exists in the human brain.
The problem here lies more with human economics
than with intrinsic technological limitations.
Proprietary software competing for individual efficiency
means low-level binary compatibility
and induces excessive use of low-level languages,
hence strong inertia in hardware design.
I'm convinced that when free software wins (soon),
the corrupting effects of proprietary software will gradually fade away,
and that we'll once again see higher-level languages,
massively parallel machines, etc.

> As I pointed out in a private response to Derek 
> Verlee you can neither sit nor cool yourself in the shade of a 
> simulated tree.
Neither can I cool myself under the tree of a child's drawing,
under the Weeping Willow sung by Billie Holiday,
or under the hierarchical chart for my company. So what?
On the other hand, I can use a simulated theorem proof,
or enjoy a simulated poem,
or use a tool manufactured out of a simulated design,
or follow the navigation path determined by a simulation.
You've just pointed out the difference between information and matter.

> "By automating physical work, machines freed our bodies.
> By automating intellectual work, they will free our minds."
>
> I found this one more than a little interesting.  Obviously the 
> unwritten but necessary "our" after the "By" means that the 
> machines did as they were told, not that they took it upon 
> themselves.<g>
No. Only that we select them for doing something that we value.
When I enter a good restaurant, and pay a cook to prepare food for me,
I conspicuously do NOT want him to cook as I tell him.
If I could tell him, I myself would be the chef of some restaurant.
However, I do choose restaurants I eat at, and restaurants I come back to.

The complexity needed to select is much less
than the complexity needed to design,
and is much easier to parallelize.

> I have no argument with any of your examples of 
> such automation that allows us to achieve with machines that we
> could not reasonably achieve on our own.
OK. Try compiling the Linux kernel into optimized ia64 code by hand.
No compiler allowed. No assembler, either; you must write binary.
I don't even know why I allow you a hex-to-bin translator;
ideally, you should write it yourself on the surface of a floppy
with a magnet. Or why should a magnet be allowed at all?
Then interpret the electrical signals from various sensors
and guide a space rocket's launch manually, in real time.
Now, lay out the detailed routing of the wires in a modern computer, all by hand.
Finally, determine, without any machine, a suitable control matrix
for making a robot walk.

> In none of them did the machines "know" what they were doing
So what? We don't want machines that boast about their intelligence,
and feel that better machines should be forbidden to protect their jobs.
We want machines that DO something useful.

> nor did they take it on their
> own to do anything not conforming to the instructions given.
There is no contradiction between conforming to instructions
and having initiative. All humans obey their genetic instructions,
to their psychological determinations, yet all do show initiative.
Recently, I even faced a left-wing "philosopher" who argued that I had
no free will because I was determined by psychological factors.
But these genetic instructions, these psychological factors determined by
my education, they are ME. I _AM_ these instructions and factors.
If anything comes out of these instructions and factors,
they come out of me; they are _my_ initiative;
conversely anything that is my initiative comes out of these factors.
Of course I am not free relatively to myself. I am myself.
I am bound to be identical to myself.
But I am free with respect to you, or anyone else.
If I come up with an idea, it'll depend on what makes me,
but not on what makes you.
You might know my genetic code, and have filmed all my life,
with sensors all over my body. You'll still not be able to understand
why I say this or that. You might accurately recognize vague trends.
You won't fathom the depth of my soul. To know what I think, you'll
have to ask me, or look inside me with a scanner.

> We must have a different definition 
> of universal.  A universal machine by definition must be capable 
> of replacing any other.  I am not aware that computer science, 
> one, had this as a goal, or, two, achieved it.
One, computer science has it as a starting point,
achieved before the first computer was built.
Two, just because the machine can do anything
doesn't mean you have an adapted program and an efficient machine;
you must pay the price of using the universal machine,
which in many cases is still higher than using a specific one,
perhaps unaffordable for the time being.

> "Moreover, your presentation of software as "externally imposed
> instructions" is flawed at best. You make it sound as if an
> all-understanding designer would provide a complete detailed
> design from which all the behavior of the machine would
> deterministically follow.
>
> Indeed, no human may create any emerging behavior this way
> (by very definition of emergence)!"
>
> Exactly.  What part of software design do you not understand?
I'm specifically not speaking of software design, but software emergence,
out of meta^n design, which is something else altogether.

> Where in any of your examples do you stray from the intrinsic
> determinism of the software?
* Interaction with other humans or with oneself, which cannot be foreseen
 (or else, said costly interaction would be dispensed with).
* Asynchronous interacting agents, that race for partial solutions
 to a search problem, with an unforeseeable winner
 (see the sketch after this list).
* Randomized algorithms, including genetic programming,
 that integrate a large amount of randomness
 into a small, semi-random, but already unforeseeable result.
* Use of meta-rules,
 that amplify the non-determinism rather than reduce it.
* External randomness source.
* Feedback from robotic sensors.
* Third party components,
 that the designer cannot take the time to understand,
 and must consider as non-deterministic, within spec.
* Large database as input,
 whose contents cannot be grasped by the designer.
* Time, whose effect on a complex running program with chaotic structure
 gives results that are unfathomable to any designer.
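
To make the second item concrete, here is a toy sketch of mine in Python
(an illustration only, not anything from Tunes): four agents search the same
space asynchronously, and which one wins the race is not foreseeable from the
program text.

import random
import threading

TARGET = 123_456                  # the "solution" of a toy search problem
SPACE = 200_000
winner = None
lock = threading.Lock()

def agent(name):
    global winner
    rng = random.Random()         # each agent follows its own random search
    while winner is None:
        if rng.randrange(SPACE) == TARGET:
            with lock:
                if winner is None:        # the first to find the solution wins
                    winner = name
            return

threads = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("winner:", winner)          # varies from run to run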

Remember: we're not talking of determinism relative to an all-knowing god
(yes, everything is deterministic to an all-knowing god, by definition),
but of determinism relative to the mind of one programmer, the would-be designer.
In the end, complex enough software _IS_ non-deterministic wrt the programmer.
It _is_ deterministic with respect to itself,
but then everything and everyone is.

> "To create an emergent behavior, you precisely have to loosen on
> the design, and let behavior gradually appear, not fully 
> "externally imposed", but precisely "internally grown", from the 
> feedback of experience."
>
> Wow again.  No wonder I'm such a moron.  I have no idea of how to 
> do any of these.
Ignorant != moron. As Bertrand Russell put it:
"Men are born ignorant not stupid; they are made stupid by education."
Ignorance is lack of knowledge. Stupidity is the inability to learn.

> Not to worry because what I can't impose 
> obviously the software which has somehow acquired the "feedback of 
> experience" (cognizant that both are present and creating names 
> for them) will do on its on.
No, you have to metaprogram the software so that it will integrate
the feedback into improving its program. Nothing magical.
You give up designing the program itself,
and instead design the metaprogram that will control it.
And when you master the metaprogram well enough,
you can move on to the metametaprogram, etc.
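
A minimal sketch of mine in Python of that shift (the names and the feedback
rule are made up for illustration, not a TUNES design): the object program is
a one-parameter predictor, and the metaprogram never designs the right
parameter; it only corrects it from observed error, accumulating experience
across passes.

import random

def make_program(gain):
    # the object program: predicts output from input with one parameter
    return lambda x: gain * x

def metaprogram(gain, experience):
    # the metaprogram: integrates feedback (prediction error) into a
    # better program; it designs nothing, it only corrects
    for x, observed in experience:
        error = observed - gain * x
        gain += 0.1 * error * x
    return gain

# environment the would-be designer never models explicitly
experience = []
for _ in range(500):
    x = random.uniform(-1.0, 1.0)
    experience.append((x, 0.8 * x + random.gauss(0.0, 0.02)))

gain = 0.0                          # the initial, naive program
for _ in range(5):                  # experience is replayed and accumulated
    gain = metaprogram(gain, experience)

program = make_program(gain)
print("learned gain:", round(gain, 3), "prediction for 2.0:", round(program(2.0), 3))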

> I do have difficulty believing that 
> you teach this in computer science.
That's precisely a field of CS called "AI".
As techniques developed in AI become better understood and modularized,
they move to traditional CS. Sometimes, the techniques depend on such
a non-modular complex body that they cannot be moved to traditional CS.

> Yes, I can reasonably believe "that the human brain is more
> complex than what our industry can already or may soon
> manufacture". It is not a numbers game.
It _is_ a numbers game. Everything is.
To disprove a statement, you reduce it to 0 < 1,
to prove it, you reduce it to 0 = 0.
If some argument establishes the impossibility of AI,
it can be expressed in numbers.

> The brain is not a
> computer (nor is a computer a brain).
For a narrow definition of computer, indeed.
The brain _is_ a device that processes information.

> What we call computing is
> but a fraction, perhaps even an infinitesimal one, of the total 
> brain activity.
Integrating information is 100% of the brain's work.
That it doesn't do it according to a
Knuth-Bendix rewrite system completion algorithm
is widely agreed upon.

> Short of duplicating it exactly (forget 
> emulation) you are not going to do it.
Why dismiss emulation without a try?
When 10^20 transistor machines become cheap,
we can certainly give it a try,
although by that time we will have to develop techniques
to control the convergence of the large number of parameters to tune.

> Specifically you are not
> going to do it with computers (machines) as we know them
> today--linear memory, von Neumann architecture, Turing rules,
> etc..
Perhaps not directly. But I'm sure such machines will be most useful
at the meta^n level (with n not so high) to control the emergence of
suitable behavior in other architectures.

> The problem here with any higher level language is that a clear
> path must exist between it and the language of the machine.
Clear to whom? To the machine? Certainly. To the human? Certainly not.
I'm quite happy to be able to use high-level languages without having
to understand the innards. No single human being understands the whole
of the computer system he uses. Well, OK (pun intended), there exists
one exception, who is Chuck Moore.

> I draw from that that an higher HLL may allow an
> "expressive ease" but not a "behavior" (emergent or otherwise) not 
> inherent in the lower levels.
Depends on "inherent". Lower levels do participate in the emerging behavior;
by definition, they do not "explain" it, though, in that examining the
steps of lower-level behavior gives you no idea of what happens at a more
abstract level. Recommended reading at this point is of course Hofstadter's
chapter X in GEB.

> At this point "you" become unnecessary.
To become unnecessary would mean that the machine has a purpose of its own,
which is not the case since you precisely select (NOT design) the machine
to fulfill your very own purposes.

> In geometry the whole is equal to the sum of its parts and is 
> greater than any one of them.
Such a statement is often repeated by people who don't understand it.
Here is the only valid interpretation I know of it:
In a syntax tree, the tree top is equal to the combination of its sons
by its topmost node. Certainly, if you remove the topmost node, you don't
have all the information to reconstruct the tree.
"If you remove essential information,
you no longer have as much information!"
Doh! Talk about wisdom.

> I see in your writing in the Tunes
> project a desire to change the first equality.
Nonsense. What I mean is that a growing part
of the essential information needed to operate machines
will reside in other machines and no longer in human brains.
Which is already the case.
However, the amount of information in human brains will not decrease,
on the contrary, it will increase.
But the _nature_ of this information will mostly shift
both from lower-level to higher-level concerns
and from generic understanding to specific expertise.
Which is already the case.

> I have seen
> writings in which such a claim made for living organisms.  I'm not
> here to argue one way or the other.
I have also seen lots of irrational drivel about life and intelligence.
And I reject irrational arguments even when I do not have
a definitive opinion on the argued question.

> When I objected at the beginning of this series of messages to 
> your continued reference to the "system" doing this or that for a 
> system under development I did so because we are engaged in
> producing a single system, not one in which we could separate an
> activity from its dependents. It only "does" it if we have "done"
> it previously and thus have a reference to it. If we have not,
> then the system is incomplete (and possibly either impossible or
> incorrect).
Indeed. But this is not incompatible with the fact that what
the system aims to do is precisely to make implementation choices
that currently devolve upon the human programmer,
in the hope of encouraging a more declarative programming style.
Automatically running benchmarks to select threading tactics
is typically something I'd like Tunes to do as part of
the expert system used in high optimization mode.
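
For instance, a sketch of mine in Python, only meant to illustrate the idea of
benchmark-driven selection (the tactics and names are made up, not the TUNES
expert system): each candidate tactic is run on a sample workload, timed, and
the fastest one is kept, so that the choice is made by measurement rather than
by the programmer.

import time
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # a stand-in task; any pure function would do
    return sum(i * i for i in range(n))

def sequential_tactic(jobs):
    return [work(n) for n in jobs]

def threaded_tactic(jobs):
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(work, jobs))

def select_tactic(tactics, sample_jobs):
    # the benchmark: time each tactic on the sample and pick the fastest
    timings = {}
    for name, tactic in tactics.items():
        start = time.perf_counter()
        tactic(sample_jobs)
        timings[name] = time.perf_counter() - start
    return min(timings, key=timings.get), timings

best, timings = select_tactic(
    {"sequential": sequential_tactic, "threaded": threaded_tactic},
    sample_jobs=[20_000] * 16,
)
print("selected tactic:", best, timings)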

> I read into what you have written a belief that a little 
> "something extra" can spontaneously occur in software not inherent 
> in the instructions provided (externally).
Actually, the very reason why we use computers in the first place
is to get answers to questions that we are unable
to answer ourselves at a better cost, if at all.
Hence to bring something "extra" that we did not put in.
"Extra" with respect to what the programmer can predict beforehand;
not extra with respect to the total input of the program.
For the input does not comprise only the initial program,
but also information from other sources, as partially listed above.
And I don't present it as a future potentiality, but as a hard present fact.

What I present as future potentiality is
the opportunity of pushing further the amount of
extra information we can tame without designing it,
by moving part of our skills from _design_ to _control_,
by using metaprograms interactively with a persistent internal state
so as to have them accumulate experience, etc.
I don't have any estimate as to how far or how fast we can go in automation;
but I do predict that the techniques I propose will lead
to appreciable increase in overall productivity by information workers.
I also claim that any affordable path to AI,
if one is reachable (which I believe might be the case, in the long run),
will use similar techniques.

> So far no software hierarchy developed in any 
> language, singularly or in combination, has experienced this
> phenonmenon.
On the contrary, I'd say that emerging phenomena are a daily fact of life.
Yes, they _are_ confined to areas we control,
and that's precisely the point; that's how we make them useful
and that's how we are safely relieved from previous burdens
that we no longer have to think about.
What we aim for is to extend such control to larger regions.

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
Le risque est que si, un jour, les machines deviennent intelligentes,
nous ne serons peut-être pas équipés mentalement pour nous en apercevoir.
	-- Tirésias, in J.-P. Petit, "A quoi rêvent les robots?"
[EN: the risk is that if, someday, machines become intelligent,
we may not be mentally equipped to notice it]