Emergence of behavior through software

Francois-Rene Rideau fare@tunes.org
Mon, 25 Sep 2000 15:16:17 +0200


On Sun, Sep 24, 2000 at 06:21:57PM -0700, Lynn H. Maxson wrote:
> "My point being: the change of view from machine-as-tool to
> machine-as-an-entity is inevitable."
>
> Well, you're entitled to your view and I will respect it.
> But that machine will not exhibit von Neumann architecture nor
> Turing rules. I will go further out on a limb to say that its
> software, what it does, will not be separable from its hardware,
> how it does it.
What part of "universal machine" don't you understand?
Every kind of information processing
doable by a device in classical physics
can be emulated with constant-factor time overhead by any universal machine
(well, actually a logarithmic factor, due to the effect of size
on information propagation).
Of course, you may reduce the factor with a proper architecture;
but you may _always_ emulate said architecture in software, at a cost.
Or maybe you think with Penrose that intelligence is intrinsically
a quantum phenomenon?
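The point about universality can be sketched concretely: below is a toy
register machine and an interpreter for it written in ordinary software.
The instruction set (INC, DEC, JNZ, HALT) is hypothetical, chosen only for
illustration; the point is that each "hardware" step costs the interpreter
a bounded number of its own steps, the constant-factor overhead.

```python
def run(program, registers):
    """Emulate the toy machine. Each machine step costs a constant
    number of interpreter steps -- the 'constant factor overhead'."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "INC":
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":
            registers[args[0]] -= 1
            pc += 1
        elif op == "JNZ":          # jump to args[1] if register non-zero
            pc = args[1] if registers[args[0]] != 0 else pc + 1

# Example: add r1 into r0 by repeated decrement/increment.
prog = [
    ("JNZ", 1, 2),   # 0: if r1 != 0 goto 2
    ("HALT",),       # 1: done
    ("DEC", 1),      # 2: r1 -= 1
    ("INC", 0),      # 3: r0 += 1
    ("JNZ", 0, 0),   # 4: r0 is non-zero after INC, so loop to 0
]
print(run(prog, {0: 3, 1: 4}))   # {0: 7, 1: 0}
```

Any architecture you can describe can be "emulated" this way; a faster
native architecture only changes the constant in front.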

> As programmers in a universe where such
> separation exists, we will find ourselves "useless", not necessary in
> that universe you suggest.
I think that is the fear that makes you stop listening to reason,
but it is a belief not supported by fact.

We already have emerging behaviors from machines that no human can fathom.
Certainly, it is all made of rules set by men.
But when these rules are meta-rules (that dynamically produce other rules),
when they involve a large element of unpredictability, such as in
randomized algorithms, genetic programming,
asynchronous multi-agent interaction,
or just heuristics from third-party components,
when they process large enough data
(data-mining using PAC-learning analyzers based on K-complexity theory),
or just do a big enough search (Deep Blue),
then you just can't fathom the "why" behind the computer result.
Oh, you may certainly check that each step in the multi-gigabyte trace
is correct, but that won't make you _understand_ the computer's work at all.
Try understanding the trace of a whole-program optimizer (such as Stalin),
or of a first-order-logic automated theorem prover. You just can't!

Now, does that make us less? Does that make us useless? Certainly NOT!
That makes us _more_, and _more useful_.
We may not understand "why", but we may understand the "why" of the "why" of
the "why", etc, n times, if we wrote the meta^n program.
And the computer relieves us from tasks that we needn't do,
so we can focus on different tasks.

So it can compile and optimize millions of lines of code
faster than we code 100 lines of assembly?
Great! We'll focus on high-level techniques, and leave optimization to it.
So it can automatically prove theorems that we couldn't?
Great! We'll focus on more interesting structural theorems,
and leave petty combinatorial proofs to the machine.
So it can automatically compose music from a spec (OpenMusic)?
Great! We'll focus on writing better specs for better music composers.

By automating physical work, machines freed our bodies.
By automating intellectual work, they will free our minds.
Physical work hasn't stopped; but where there are machines,
it has been moved to more noble activities.
Same goes with intellectual work; wherever machines come,
they relieve us from stubborn repetitive tasks and let us focus
on nobler activities.

Don't fear machines. Welcome their enhancement, and learn to use them.
Or better even, enhance them for the betterment of mankind.

> It is the height of human folly to 
> believe that we can achieve "entity" status for a "machine" 
> through externally imposed instructions, i.e. software.
It is irrational fear only that makes you think this
and dismiss the most fundamental result of computer science:
the existence of universal machines.

Moreover, your presentation of software as "externally imposed instructions"
is flawed at best. You make it sound as if an all-understanding designer
would provide a complete detailed design from which
all the behavior of the machine would deterministically follow.
Indeed, no human may create any emergent behavior this way
(by the very definition of emergence)!
But this is not the only way to use machines.
To create an emergent behavior, you precisely have to loosen the design,
and let behavior gradually appear: not fully "externally imposed",
but rather "internally grown", from the feedback of experience.
Instead of designing the machine, you meta^n-design it
so that it grows the right behavior.

> Given that the observation itself at that level affects the result 
> and at higher levels any attempt to effect divisions in what is 
> observed will in turn affect the result, I think you are going to 
> have a hard time either proving or disproving determinism or free 
> will.  All that regardless of the amount of computing power 
> available to you.
You seem to fail to comprehend that determinism is always relative
to some knowledge. By the usual diagonal argument of
Epimenides/Cantor/Russell/Gödel/Turing, you will always be
"free" (non-deterministic) with respect to your own self-knowledge.
So what?
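The diagonal argument can be written as a few lines of code: whatever
total "predictor" you supply, the contrarian program below consults it
about its own behavior and then does the opposite, so the prediction is
necessarily wrong. The function names are illustrative, not from any
library.

```python
def make_contrarian(predictor):
    """Build a program that defies whatever the predictor says about it."""
    def contrarian():
        # Ask the predictor what *this* program will return, then defy it.
        guess = predictor(contrarian)
        return not guess
    return contrarian

def naive_predictor(prog):
    # Any fixed (or however cleverly computed) answer fails the same way.
    return True

c = make_contrarian(naive_predictor)
print(c(), naive_predictor(c))   # the outcome always differs from the guess
```

This is the sense in which any system is "free" relative to its own
self-knowledge: no predictor available to the system itself can be right
about the system's diagonalizing behavior.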

On the other hand, it is trivial to construct,
for every non-deterministic theory of the universe,
a deterministic extension of it that is elementarily equivalent
(i.e. indistinguishable: it yields the exact same observations from within),
and conversely, every deterministic theory has an elementarily equivalent
non-deterministic extension.
Which means that determinism has no absolute "true/false" value.
Yes, people ARE deterministic, with respect to some unreachable knowledge,
and yes, there IS a theoretically reachable way to build,
on any universal machine, software that will make the machine behave
in a way elementarily equivalent to human behavior
(albeit perhaps with a very large program and a large constant time factor),
that is, that will be indistinguishable from a human by any human
conversing with it by e-mail.
So what?
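A minimal sketch of such a "deterministic extension", under the obvious
simplifying assumptions: an observer sees a stream of coin flips, and
whether the flips come from an unmodelled source or from a hidden
deterministic pseudo-random state makes no difference to any observation
available from within. The class name is invented for illustration.

```python
import random

class HiddenSeedCoin:
    """Fully deterministic: the entire future stream is fixed by the
    hidden seed -- but the seed lies outside the observer's knowledge."""
    def __init__(self, seed):
        self._rng = random.Random(seed)   # hidden internal state
    def flip(self):
        return self._rng.random() < 0.5

coin = HiddenSeedCoin(seed=42)
observations = [coin.flip() for _ in range(10)]

# Replaying with the same hidden seed reproduces the "random" stream exactly.
replay = HiddenSeedCoin(seed=42)
assert observations == [replay.flip() for _ in range(10)]
print(observations)
```

From the observer's side the stream passes for non-deterministic; from
the extended theory's side (seed included) it is deterministic. Neither
description is more "true" than the other.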

You cannot dismiss well-established theory by mere belief.
You cannot even reasonably believe that the human brain is more
complex than what our industry can already or may soon manufacture;
the raw accumulated computing power of general-purpose processors
in the world is already superior to all known estimates
for the complexity of the human brain (10^12 nerve cells,
each with thousands of connections); with the grossest overestimation,
you may encode a brain and its state in 2^60 bits, which will,
by Moore's Law, fit on every desktop within 72 years.
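The 2^60 figure checks out as an order of magnitude, under loudly stated
assumptions: the per-synapse encoding cost below is a deliberate gross
overestimate, not a measured quantity.

```python
import math

neurons  = 10**12        # nerve cells, as in the text
synapses = 10**3         # connections per neuron ("thousands")
bits_per_synapse = 10**3 # assumed gross overestimate: 1000 bits of state each

total_bits = neurons * synapses * bits_per_synapse   # 10^18 bits
print(math.log2(total_bits))    # ~59.8, i.e. about 2^60 bits
```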

What you may reasonably object is that
even if we can produce big enough systems,
we mightn't be able to tame their complexity.
Just by throwing resources at random,
we won't ever make anything emerge in any reasonable time.
That _is_ a valid objection, a very serious one,
and the only one I recognize at the time being.
If humans go on using petty anti-meta low-level languages,
and do not develop completely different development techniques,
I'm sure that objection is more than enough to prevent
the emergence of "AI".
And even if we develop high-level metaprogramming development techniques,
it mightn't be enough to make an AI emerge.

> Here we engage in computer science.
Exactly, and thus, you must acknowledge that
AI isn't the sole target for emergent behaviors.
_Today_, some people already make behavior emerge for fun and profit,
and if it is up to spec, Tunes ought to be a platform of choice 
to make more complex behavior emerge than has ever emerged before.
And to hell with AI or not AI (see quote in .sig).

> Moreover we judge our success by how much we can produce 
> predictable results, whether we know those results or not.
NO. That is not science. For science, you must
s/predictable/reproducible/

> Whatever occurs must be predictable from the instructions we 
> issue. That is deterministic.
Most conspicuously not.

> I have no objection to the pursuit of machine-as-entity.  Should 
> Tunes shift its direction toward such a production we obviously 
> can cease any and all concerns about a Tunes HLL.<g>  Or 
> programming.<g>
On the contrary!
As the systems become more elaborate,
we'll switch to higher- and higher-level languages!

People don't sew by hand; they use sewing machines;
industries use industrial sewing machines that automate even more.
They won't low-level-program by hand; they'll use programming machines;
industrial programming machines will automate even more.
And NO, this isn't science fiction; this already exists:
assemblers, compilers, 4GL's, RAD tools, etc.
_are_ programming machines of various sophistication.
We're just trying to get it one step further.

> However like you I believe it is a possibility.  However the
> machine will be biological in its entirety.  If you believe a
> non-biological basis is possible, then more power to you.
I can _already_ make an intelligent biological entity emerge.
Well, actually, not really: it'd take a consenting female,
and she'd cost more than I am willing to afford right now.
[Note: the female I'm longing after is herself longing after
making intelligence emerge, but, unfortunately as far as I am concerned,
she wants it to emerge non-biologically].

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
Alan Turing thought about criteria to settle the question of whether
machines can think, a question of which we now know that it is about
as relevant as the question of whether submarines can swim.
		-- Dijkstra