Emergence of behavior through software
Thu, 5 Oct 2000 13:08:20 +0200
On Sun, Oct 01, 2000 at 07:17:48PM -0700, Lynn H. Maxson wrote:
> The thing that stumps me most in communicating with Fare lies in
> his metaphors, of confusing similar with identical,
I do not use more metaphors than anyone else does.
But you seem to fail to understand what metaphors are all about.
Metaphors are about sharing mental structures.
It's code factoring inside one's brain.
Now, try to grok this piece of wisdom:
There is NO SUCH THING AS OBJECTIVE IDENTITY that is accessible to the mind.
Everything one sees and understands is but metaphor.
Metaphor is the basic encoding technique
with which the human brain integrates information from the environment
and tries to find patterns in it.
Any "identity" in anyone's mind is but an old deeply rooted metaphor.
You may question the range and precision of a metaphor,
but questioning the existence of a metaphor is ridiculous.
> Here you posit an impossible situation, a system deciding that it
> can become an organism.
You're a system. Did you ever decide to become an organism?
The earth is a system. Did it ever decide to become an organism?
The initial puddle whence life sprang is a system. Did it ...?
Your argument is gratuitous, rooted in some deeply flawed notion of yours
about life that you'd better question.
There's no use discussing the superficial consequences
of whatever notion of life you have
when we have such a deep disagreement.
If you stick to your notion of things,
at least make your root beliefs explicit, so we can agree to disagree.
That said, your theories really sound like you believe life comes
from some extraphysical divine "soul" that somehow directs empty
physical receptacles that are bodies.
I bet your theory is isomorphic to this soul thing.
> The truth is that we have no means of
> creating an organism without starting with one: procreation.
I bet that, by induction, you can recurse down to the Big Bang,
at which time there would have been some fully developed seed
for each of the modern-day species, as created by God.
This is just ridiculous.
> It is a problem with a computer and software in that
> neither start as an organism.
Why couldn't they? I imagine AI and A-life programs precisely
as starting as some simple symbolic organism, and evolving from there.
> One passing note. Artificial means not real.
DEAD WRONG. Artificial means man-made.
My toothbrush is real, yet it doesn't grow on trees.
> Software is not an organism.
Maybe not yours. Maybe not most anyone's today.
Yet, I've seen people who did grow software.
Genetic systems, reflective expert systems,
data-mining pattern-extracting systems, etc, do evolve
(and I hope to eventually bring them all together).
You may blank out this assertion, as you did up to now;
but if you do, then there's nothing left to discuss,
and I wish this whole thread would die right away.
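To make "grown" software concrete, here is a minimal sketch of a genetic system, a toy, not any of the systems alluded to above: the programmer writes only the selection loop and a fitness measure, and the bit strings that come out were written by nobody.

```python
import random

def fitness(genome):
    # Toy fitness: count of 1-bits; stands in for any measurable behavior.
    return sum(genome)

def evolve(pop_size=20, length=32, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # selection: fitter half lives
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(length)
            child = a[:cut] + b[cut:]             # crossover of two parents
            child[rng.randrange(length)] ^= 1     # one random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

No line of this program spells out the winning genome; it emerges from variation plus selection against the fitness measure.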
> the fact that two
> non-organisms cannot join to form an organism.
Maybe not two. What about 10^28?
Your body is made of about 10^28 atoms.
So, OK, it took the interaction of many more atoms
to create such an organism from scratch.
But then, we need not work at the atom level,
and we do not start from scratch. As Carl Sagan said,
"To make an apple pie from scratch,
you must first create the universe."
> [Software] does no more than what the programmer told it.
DEAD WRONG. You blank out the notions of input and persistent state.
Not to mention chaotic behavior and evolution.
If you're only going to blank out what other people say,
let's stop the whole "discussion" and return to more productive activities.
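The point about input and persistent state fits in a dozen lines (a purely illustrative toy, not anyone's shipped system): the programmer wrote only the update rule below; what the program answers is determined by the history fed into it, which the programmer never saw.

```python
from collections import defaultdict

class Predictor:
    """Predicts the symbol most often seen after a given symbol."""
    def __init__(self):
        # Persistent state: transition counts accumulated from input.
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def feed(self, symbol):
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self, symbol):
        followers = self.counts[symbol]
        return max(followers, key=followers.get) if followers else None

p = Predictor()
for s in "abcabcabx":   # the input stream, unknown at programming time
    p.feed(s)
print(p.predict("a"))
```

Change the input stream and the same unchanged source code answers differently: the behavior is in the state, not only in the text the programmer "told it".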
> Humans do not "obey" mathematically expressible rules.
DEAD WRONG. Humans do obey the mathematically expressible statistical rules
of physics, of genetics, of demographics, of economics, of psychology, etc.
So these are not enough to predict their final behavior,
because of the too many unknown parameters? That's precisely the point.
Same with a programmer's code for an evolving meta^n-program that runs
with lots of persistent state, including meta^(n-1)-program-level state.
> Unfortunately software is a fragile beast,
DEAD WRONG. Designed software is only as fragile as it is designed to be.
Fragile with respect to what?
Some people work on very resilient software.
Organic software will likely differ a lot from designed software.
> overly sensitive to "illogic"
Organisms are sensitive to toxic intrusions in their chemistry. So what?
> and prone to failure.
Organisms may die. Eventually, they do. So what?
You provided no intrinsic reason why AI should be impossible.
Certainly, you proved that it can't be done
with current designed software technology.
But there's no need to expand this point on which everyone agrees.
> It is one thing to put monkeys in front of typewriters
> where whatever order of letters is acceptable. That's simply not
> true of software.
The difference between random and fit? Selection.
> If I have no choice but to follow them, then spontaneity is out.
DEAD WRONG. Rules offer partial information.
Internal state provides another body of information.
Still the same blanking out.
> That piece of "magic" which exists at least in human organisms
Yes, you believe in magical soul.
All is said. Now let's stop it all.
> Now meld it with the remainder of the human system. Without this
> remainder, without a system in place, the brain has no function
> and in fact can do nothing. The brain, the nervous system, the
> blood system, the organs, the skeleton, the muscles, the skin--all
> exist as a single entity, all interconnected.
> The fascination with the brain and with emulating it in software
> deliberately "abstracts" it from the system of which it is a part.
No, it doesn't abstract "from"; it just abstracts.
The role of the brain is to integrate information
so as to drive interaction towards selected behaviour.
Well, an abstract brain will have abstract interaction to drive;
it will input and output text, sound, video, sensors from an arm, etc.
Certainly, a human-understandable AI will have to have interaction
devices similar enough to those of humans, at some abstraction level.
We're not there yet. There will be a lot of research in dumb A-life
before we can seriously tackle complex brains. We'll have to tame some
lower forms of information integration before we can tame the higher ones.
Karl Popper distinguishes roughly four levels of language
(expressive, communicative, descriptive, and argumentative);
before we reach the last of these, we may have to master the earlier ones.
How is that an absolute barrier to AI?
> All computers are based 100% in pure logic.
All organisms are based 100% in pure chemistry.
> All software which executes successfully
> cannot violate the rules of the host computer.
No organism can violate the rules of chemistry. So what?
Chemistry is the underlying low-level paradigm.
It is irrelevant to the general structure of higher-level phenomena.
This is seemingly an essential point of disagreement between us:
you're obsessed with the low-level aspect of things,
and do not accept that high-level structure may be independent
of underlying implementational details.
Now, think about it: if you consider the logic gate model with which
all digital electronics is designed, you cannot deny that the underlying
implementation hardware has changed considerably over two centuries
(rotating wheels, electromagnetic relays, tubes, transistors, and then
a lot of finer and finer transistor technologies).
The details vary a _lot_, but the abstract structure stays the same.
Similarly, inasmuch as some high-level structure can implement
a system capable of having "intelligent" conversation,
it doesn't matter whether the underlying implementational hardware
be human brain cells, interconnected electronics, silicon, or
software running on a von Neumann machine.
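The same point in a few lines of toy code (assuming nothing about any real hardware): the abstract structure below defines XOR once and for all in terms of a NAND primitive; any substrate able to provide a NAND, whether relays, tubes, transistors, or a Python function, implements it unchanged.

```python
# The abstract structure: XOR built from four NAND gates.
# This layer never mentions what a NAND is made of.
def xor_from_nand(nand):
    def xor(a, b):
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))
    return xor

# One possible substrate: plain Python booleans standing in for relays,
# tubes, or transistors. Swap in any other NAND and the structure holds.
def py_nand(a, b):
    return not (a and b)

xor = xor_from_nand(py_nand)
table = [xor(a, b) for a in (False, True) for b in (False, True)]
print(table)
```

Replace `py_nand` with a gate made of dominoes or water valves and `xor_from_nand` stays the same: the high-level structure is independent of the implementational details.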
[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
If the human mind were simple enough to understand,
we'd be too simple to understand it.
-- Pat Bahn