Emergence of behavior through software

Francois-Rene Rideau fare@tunes.org
Thu, 5 Oct 2000 16:22:35 +0200


On Mon, Oct 02, 2000 at 10:34:38AM -0700, Lynn H. Maxson wrote:
> (2)All software executes in a manner consistent with its embedded logic.
Again, you insist on this trivial point, which is completely irrelevant
to any debate regarding emergent systems and artificial intelligence.

> "Volition does NOT consist in choosing the rules.  No single human 
> chose his genetic code, his grown brain structure, his education. 
> Yet all humans are considered having a will."

> Volition deals ultimately with choice.
NO. There is no absolute notion of "choice".
Volition, free will, or whatever name you give to it,
is a property of a system with respect to its environment.
It is about a system's behavior being largely determined
by its own internal state rather than by externally modifiable factors.
You cannot easily change what I think by simply pushing a lever;
hence I am largely free from your own opinions (however, obviously,
my behavior is affected by yours, since I'm replying to you).
That a system obeys its own rules is no offense to its own free will.
What would be an offense to its free will would be having to track
the dynamic state of another system that it cannot affect;
for instance, I wouldn't be as free if I had to obey your whims
(inasmuch as I can't influence those whims).

> In the dictionary this is qualified as a "conscious" choice.
I don't care about dictionaries written by people
who have no clue about what intelligence is.

> Consciousness lies in self-awareness.
You've just used "conscious" with two different meanings.
Welcome to the ranks of those who commit the fallacy of equivocation.
	http://www.intrepidsoftware.com/fallacy/equiv.htm

I deny meaning to the very words "volition", "consciousness" and
"self-awareness" in the context where you use them.
I refuse to disagree with you when I consider that you haven't asserted
anything meaningful. I similarly refuse to disagree with the statement
"the smurf boinks", because I deny meaning to that sentence.
If you think there is more than empty words in your discourse,
I invite you to explain what you mean in terms of the observable behavior
of dynamic systems. Or maybe, instead, we can agree to deeply disagree on
the validity of our respective points of view, and possibly explore
each other's meta-arguments about this validity.

> transfer an intrinsic property of an organism [...]
Once again, you're a believer in the soul.
I deny any meaning to the notion of an immaterial soul.
And I reject the notion of a material soul, for which no evidence exists.

> You certainly can make a choice.  
> However your ability to do so depends upon your structure, your 
> human system and whatever in it is responsible for "life".  When 
> that leaves you, when you die, you can no longer make a choice nor 
> exhibit purpose.
So what? How does a running program differ?
People _routinely_ use computers to make choices for them.
The computers make choices depending on their own structure and state,
and when they're shut down, they no longer make any choices.
Choice is in no way an exclusive property of "the living".
As far as we can observe it, even the Sun chooses to behave
in unpredictable ways.
And in the most regular physical systems, you have spontaneous events
that break the symmetry of the system, thereby making a "choice".
Choice, for a system, is about some event being dependent on the system's
internal state, and independent of external events.
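
To make this concrete, here is a toy sketch (mine, in Python; everything
in it is a hypothetical illustration, not any existing system): two systems
receive the identical external stimulus, yet answer differently, because
the answer depends on internal state that the environment cannot set.

    # A toy model of "choice": the outcome depends on private internal
    # state, not on the external stimulus alone.
    class Decider:
        def __init__(self, seed):
            self.state = seed        # internal state, opaque from outside
        def choose(self, stimulus):
            # evolve the internal state (a simple linear congruential step)
            self.state = (self.state * 1103515245 + 12345) % 2**31
            return (self.state + len(stimulus)) % 2

    a, b = Decider(1), Decider(2)
    print(a.choose("lever"), b.choose("lever"))  # same lever, different choices

Pushing the same lever does not force the same behavior;
only surgery on the internal state would.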


> First comes "knowledge", then "understanding", and then "wisdom".
> Knowledge comes from "knowing" you are doing something.
> Understanding comes from knowing what you are doing and if 
> possible why.  Wisdom comes from using the understanding of what 
> you know to possible change what you do.
Excuse me, but this sounds like meaningless ranting to me.
To me, there is no absolute, magical notion
of knowledge, understanding, or wisdom.
It's all about the structure of the feedback between a system and
its environment, and the relative ability of the system to anticipate
its environment's potential behavior during its internal decision process,
as compared to a potentially different system in the "same" environment.
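
A toy rendition of that relativity (my own sketch, assuming a trivially
patterned environment): two systems face the "same" environment, and the
only meaningful comparison is how well each anticipates it.

    # "Understanding" measured as relative anticipation, nothing absolute.
    env = [0, 1, 0, 1, 0, 1, 0, 1]       # a toy alternating environment

    def naive(history):
        return 0                          # ignores the feedback entirely

    def modeler(history):
        return 1 - history[-1] if history else 0  # models the alternation

    for agent in (naive, modeler):
        history, score = [], 0
        for event in env:
            score += (agent(history) == event)    # did it anticipate?
            history.append(event)                 # feedback loop
        print(agent.__name__, score)              # modeler 8, naive 4

Neither agent "knows" anything in an absolute sense;
one merely anticipates better than the other, in this environment.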


> The only "seed" for an organism is an organism.
Once again, you're blanking out 15 billion years of evolution.
Which came first, the chicken or the egg?

> Man thus far has
> had no success in creating an organism in any other manner.
You keep bringing up irrelevant points on which everyone agrees,
with a fallacious tint of equivocation.
(In this case, the term "organism" has both narrower and broader meanings.)

> Clearly software is not an organism
Once again a subtle semantic shift that brings equivocation. I'm sick of it.

> We have a history of increasing our knowledge and 
> understanding of such events leading in turn an increasing ability 
> to predict them.  Following our assumption and the basis of Fare's 
> metaphor, this means our gains have occurred at the loss of life 
> within those events.  That, my friends, is logic.<g>

I reject your inference, and I find your grin preposterous.

	The more one knows, the more one knows that one knows not.
	Science extends the field of our (meta)ignorance even more
	than the field of our knowledge. -- Faré

Actually, this is the very quantitative basis of the diagonal arguments
of Cantor, Russell, Gödel, etc.: the complexity of a system grows
exponentially with its size, including the size of the subsystems used
for an internal model of itself. You cannot add (reflective or not)
information to the system without introducing room for even more
information to gather so as to have a "complete" view of the system.
Acquiring knowledge about yourself may increase your freedom with respect
to the rest of the world, by bringing more opportunities of action.
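
A concrete rendition of the diagonal trick (a sketch of mine, nothing
from Lynn's post): given any finite table claiming to enumerate a system's
states, flipping the diagonal produces a state the table missed; adding it
to the table only makes room for yet another missing one.

    # Cantor's diagonal, concretely (for a square table of bits):
    def diagonalize(table):
        # flip the i-th bit of the i-th row: differs from every row somewhere
        return [1 - row[i] for i, row in enumerate(table)]

    table = [[0, 1, 0],
             [1, 1, 1],
             [0, 0, 1]]
    missing = diagonalize(table)   # [1, 0, 0], provably absent from the table
    assert missing not in table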


> Fare takes this, our inability at times to predict and understand 
> the results of software execution, as a means of giving software 
> something (life, independence, freedom from the programmer) that 
> it must lose in the event that we gain the ability to do either.
> Note that this "loss" occurs without a change in the software
> logic or in its execution.  Therefore it must be a property 
> independent of them, perhaps even a "soul".<g>
Bullshit. In the circumstances you describe,
the relative "loss" of freedom of the computer with respect to us
comes from our "gain" of knowledge about it.
The software didn't change. We did.
Hence the relative change between it and us.
You think of "freedom" and "life" as absolute terms. I don't.
Not only do I not, but I reject as meaningless
any absolute notion of freedom or life.

Once again, a fallacy of equivocation between
my conspicuously relative rational notion of freedom
and some undefined absolute notion of freedom.
Your grins don't make me smile. They are no support for your fallacies.


> This property arises from a more serious claim by Fare that we
> have software whose results we do not understand or we cannot
> predict.  To me both are patently false.
Let's stop it here, then.
When the fundamental disagreements have been identified,
it's time to terminate the (successful) discussion.

> Fare pooh-pooh's this by saying it is "postdict" not "predict".  No.
"Predict" is different from "postdict", because cost matters.
Only in mathematicians count inferences as free.
Computer scientists know better.
If you can predict the outcome of a brute-force attack against
mainstream cryptographic protocols, I'm most interested.
If from equations of a system, you can "understand" the system
to the point of predicting the outcome, a lot of physicists will want you.

"Understanding" is about anticipation (see Popper).
If you cannot gather enough information to make a useful decision
before it's too late, then for all that matters,
you haven't understood anything.
Understanding matters only in as much as it is a prelude to doing.
What ultimately matters is doing.
I deny as meaningless any notion of understanding that cannot lead to action.

> Fare seems to forget that we write software and 
> construct host machines to form a tool system.
Don't resort to insulting other people so as to explain away disagreement.
I am most aware that machines are tools, but we seem to have
wildly different explanation structures in our respective minds
about what "a tool", "to use", and "useful" mean.
Of course, each one is convinced that his conscious mental structure
is more adequate, or he'd change it.
Don't be so irrational in your mental modelling of other people!

> We do so as a means of extending "our" own ability.
Which is precisely why cost matters,
and why prediction is not the same as postdiction.
Computers are useful only in as much as they can assert things
that we couldn't otherwise assert (in time, as cheaply, as precisely, or at all).
If we could effectively predict the outcome, we wouldn't need them.

> Moreover we do so for "reasons of our own".
Machines might have reasons of their own, too.
You're back to your clichés, describing commonly agreed features of systems,
intelligent or not, that have ZERO relevance to the possibility or
impossibility of artificial intelligence.

> Among these "reasons of our own" are curiosity, amusement, and a
> desire to increase our knowledge, our understanding, and our
> ability to predict.
I'd rather say lust, urge, and anxiety for
food, defecation, sleep, social approval, sex.

There's no reason why vital urges cannot be built into computers.
This has been done before.
Moreover, in a sense, the urge of the system to reply to human queries
is such a built-in urge, even in primitive interactive computers.
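
As a caricature (a toy of mine, not any particular system): the whole
dynamics of a primitive interactive computer are organized around the
compulsion to answer, a drive it did not choose any more than we chose ours.

    # The read-eval-print "urge": the system exists to respond.
    def repl():
        while True:                      # the urge never rests
            query = input("? ")          # await a human stimulus
            if query == "quit":
                break                    # the one built-in release
            print("you said:", query)    # the compulsion to answer

    repl()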

> That we
> cannot predict or cannot understand does accrue to an intellectual 
> failure or weakness.
Of course it does. You sound like a dogmatic human supremacist.
That we use tools to cope with our failures and weaknesses
is precisely our victory.

> We may not take the time to either know or understand what 
> occurred and why within such a process.  That is our choice.
I didn't choose not to brute-force crack RC5-64 by hand.
I am just unable to do it. At the speed at which I am able to work,
the expected completion time before I manage it by hand is longer
than the expected life of the universe, not to mention my own.
And even "by hand", I'd use paper and pencil, i.e. tools.
The same goes for almost all computer-solved problems.
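
The back-of-the-envelope figures, under my own rough assumptions
(one key tried per second by hand; a hypothetical 10**9 keys per second
by machine):

    # Exhausting a 64-bit keyspace such as RC5-64's:
    seconds_per_year = 3600 * 24 * 365
    by_hand    = 2**64 / 1     / seconds_per_year  # ~5.8e11 years, some 40
                                                   # times the age of the universe
    by_machine = 2**64 / 10**9 / seconds_per_year  # ~585 years: still hopeless
                                                   # for one machine alone
    print("%.1e years by hand, %.0f years by machine" % (by_hand, by_machine))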

> the tool did not acquire a "life of its own" because we
> chose to neither know nor understand.
It did acquire some independence through our choice not to care.
But even if we cared, it would still be very much independent,
in as much as we are unable to fully understand it, even when caring.
The more complex the emergent system, the more independent it is,
relative to the severe limits of our potential understanding.

>> If your "encoded logic" is universal, just any behavior is 
>> consistent with it.  So conformance to the logic is a null 
>> statement.
> Yes, but you see the encoded logic, particularly that of software, 
> is not universal.
I think that's the root of our disagreement. Let's agree to disagree here.
The ability of software to accurately simulate complex physical systems
seems strong evidence that it is universal, but I'd rather not argue about that.
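
For what it's worth, the kind of evidence I mean fits in a few lines
(a toy sketch of mine): a tiny interpreter for arbitrary Turing machine
tables, whose fixed "encoded logic" is thereby consistent with any
computable behavior whatsoever.

    # A minimal Turing machine interpreter: universality in miniature.
    def run(rules, tape, state):
        tape, pos = list(tape), 0
        while state != "halt" and 0 <= pos < len(tape):
            write, move, state = rules[(state, tape[pos])]
            tape[pos] = write            # rewrite the current cell
            pos += move                  # move the head (+1 or -1)
        return tape

    # a toy machine that flips every bit, then runs off the right end
    flipper = {("s", 0): (1, +1, "s"),
               ("s", 1): (0, +1, "s")}
    print(run(flipper, [0, 1, 1, 0], "s"))   # -> [1, 0, 0, 1]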

> "I strongly dislike your way of turning around arguments,
> completely avoiding to answer to points others claim as relevant,
> not even acknowledging them, and claiming as victories their 
> concessions on points they claim are irrelevant.  This is no 
> rational discussion, only self-delusion thereof."
>
> I imagine that you do, considering the arguments you make.  The 
> matter of relevance or not lies in the eye of the observer.
>
I didn't reproach you for your disagreement in opinion,
but for your complete lack of acknowledgement of other people's opinions,
your consequent repeating, over and over again, of
the same points that were long agreed upon,
and your lack of any response (until then) to my objections.
Even now, you still blank out half of my arguments.

> To me
> the issue of software execution "always" consistent with its 
> encoded logic is relevant.
But it isn't the root of the disagreement, so stop boring everyone with it,
instead of discussing the deeper disagreement.
There's no use whatsoever in discussing stuff everyone agrees upon.
"Do you agree that 2+2=4? Haha! Then the rest follows from it!"
Obviously, it doesn't.

> I, therefore, have something that the software does not:
> a life of my own.<g>
You keep gratuitously extending the same statements
from existing designed software to all software.
This is no rational discussion. Just repeated ranting.
You don't try to identify initial disagreements,
you just repeat your opinion over and over again,
without much regard for other people's argument structure.

> I suggest that the charge of self-delusion here is one of 
> projection, in the source and not the target.
I claim once again that you're in a self-delusion of rational discussion,
in a way objectively observable by neutral observers, independently of
the outcome of the main debate. A rational discussion isn't about
repeating one's point over and over in the hope of getting it across,
but about discovering other people's argumentative structures
and identifying the root disagreements between parties.
Gratuitous repetition is noise. Rational parties look for signal.
I'm not sure I want to waste more time on this particular discussion
if you don't improve your attitude.

> I doubt very 
> seriously if progress in software, in doing the things specified 
> in the Tunes HLL requirements, has any need for any properties 
> outside those available in formal logic.
I admit said page is antique and lacking in both content and rationale.
I will state as much on the page.

> It hasn't required 
> anything else in getting to this point.
Note that, trivially, getting to any given point has never required
any progress beyond that point.

> We certainly haven't 
> exhausted all the logical possibilities.
> When and if we do,
> considering parallel progress in other fields including biology,
> then we can consider non-logical, statistical-based approaches.
> Meanwhile let's complete what we can with von Neumann and Turing.
None of these is the point under discussion.
I've been pushing for a direction of development that is none of
those you propose, and that is mostly independent of them;
the development of one need not prevent the development of the other,
quite the contrary.


Anyway, this whole discussion is becoming more and more pointless.
Again, understanding is about taking action,
and whether AI is ultimately possible or not,
people in the TUNES project agree that, in the foreseeable future,
we'll be working towards making much more primitive emergent systems,
to be used as a complement, tool, and amplifier to human intelligence,
rather than as an all-or-nothing complete replacement for it.

[Note to Billy: I prefer to avoid the acronyms IA vs AI,
because in French, the acronyms are reversed!
Also, I see no dynamic opposition between the two points of view.]

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
In a reasonable discussion, you can't communicate opinions, and you don't try
to, for each person's opinions depend on a body of unshared assumptions rooted
beyond reason. What you communicate is arguments, whose value is independent
from the assumptions. When the arguments are exchanged, the parties can better
understand each other's and their own assumptions, take the former into
account, and evolve the latter for the better.