Emergence of behavior through software

Alik Widge aswst16+@pitt.edu
Sun, 01 Oct 2000 12:26:53 -0400


--On Saturday, September 30, 2000 11:42 PM -0700 "Lynn H. Maxson" 
<lmaxson@pacbell.net> wrote:

> A reasonable argument.  Let me provide you with a definition
> within the context of this discussion.  Are the results consistent
> with the embedded logic of the software?  If they are, then no
> volition, no "independent" action on the part of the software

All right. That's a definition. Now I ask for a justification. Why can 
volition not arise within the constraints of a rule set?

> rather deep hole.  Now you have to take something not derived from
> single-cell life, expanding the definition of organism such that
> it becomes the universal class as now there is nothing which we
> cannot consider in some manner belonging to it, e.g. my radial
> saw.  All we have to have is a system and bingo we have an
> organism.

I personally would put some requirements on that, such as demanding that a 
system claiming to be an organism at least be capable of sustaining itself 
indefinitely, but otherwise I do not see this as a problem.

> Software by itself cannot execute.  It must reside in a machine
> and together they constitute a computing system.  The software in
> or out of the machine is not alive nor is the host machine.  Thus
> we do not have what biology defines as an organism.

Careful. A parasite cannot survive on its own --- it must live in a host. 
In fact, all known species exist only as part of ecosystems. Being 
dependent on other parts of a system does not preclude being an organism.

> Secondly you may have inherited physical traits, but you certainly
> did not inherit behavior.  Behavior in society is not inherited,
> neither the society's behavior nor the individuals which compose
> it.  The behavior of software is not inherited for software does
> not engage in procreative activities as organisms do.

1) Behavior has been shown to be partially inherited, especially in the 
case of mental disorders. It's not a very strong effect, but it's 
statistically significant. (I will admit that I was being loose with the 
word "inherit" and including those things I picked up from my parents by 
simple imitation.)

2) Why can software not procreate? What about viruses? I could program a 
virus (well, if I knew anything about virus-writing) which went around and 
extracted bits of code from programs on its host and then tried to breed 
with other copies of itself. It would take a long time to be an effective 
virus, and someone would kill it first, but it could be done.
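
To make the breeding step concrete, here is a toy sketch in Python (a 
purely hypothetical illustration, nothing resembling a real virus): two 
"parents" are lists of instruction-like tokens, and the offspring comes 
from single-point crossover, with an occasional mutation spliced in from 
code "harvested" off the host. All names and tokens are invented.

    import random

    # Toy "genomes": instruction-like tokens standing in for real code.
    parent_a = ["load", "add", "store", "jump", "halt"]
    parent_b = ["load", "sub", "print", "store", "halt"]
    harvested = ["xor", "shift", "call"]  # bits scraped from host programs

    def breed(a, b, mutation_rate=0.1):
        """Single-point crossover of two token lists, occasionally
        splicing in a token taken from the host."""
        point = random.randint(1, min(len(a), len(b)) - 1)
        child = a[:point] + b[point:]
        if random.random() < mutation_rate:
            child[random.randrange(len(child))] = random.choice(harvested)
        return child

    print(breed(parent_a, parent_b))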

> instructions.  Organisms are not bound by logic.  They cannot be
> constructed with "and", "or", and "not" circuits.  The computer is
> not a brain and the brain is not a computer.

Again, be careful. You can't prove either of those. I have yet to find a 
task which a human can do and a Turing machine cannot. The brain and 
computer are superficially different, but that doesn't mean that they 
aren't just two implementations of a central theme.

> On the contrary it is an incredible argument.  You should stop
> listening to such drivel.  As humans we can posit the impossible,
> the sufficiently complex simulation, in this instance.  I'm not
> going to invoke Penrose here, but any time you believe that you
> can simulate a living organism to the quantum detail, you best
> rethink it.

You yourself say that it doesn't matter whether or not we can actually do 
the prediction, as long as it's possible on paper. Would it take more 
space and time than are available in the universe? Sure.

> acquiring control of the processor.  There is no means from within
> software to address a non-existent set of instructions, to pass
> control to something which does not exist.  In all computing
> systems of which I am aware this generates a "hard" error (or at
> least an address exception<g>).

But they're not non-existent. Have the program create them, then pass 
control to them. I see where you're coming from --- you're saying that this 
is still the programmer telling the program to make them. But did the 
programmer have no volition if someone else told him to write that program? 
Seems like we're chasing a chain back to the Big Bang.
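
For what it's worth, handing control to instructions that did not exist 
when the program started is trivial in a high-level language. A minimal 
Python sketch (the function name and its body are made up for the 
example):

    # Build the source for a function that did not exist at startup,
    # compile it, then pass control to it.
    source = "def made_at_runtime(x):\n    return x * x\n"
    namespace = {}
    exec(source, namespace)                 # the instructions now exist
    print(namespace["made_at_runtime"](7))  # ...and control passes to them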

> Considering the logic of this I might have saved myself some
> effort by letting you destroy your own "credible argument" about a
> "sufficiently complex simulation".<g>  Nevertheless regardless of
> how small the circuits become their logical function remains the
> same.

But their conformance to that logical function does not. At some point, 
their statistical nonconformance becomes perceptible.

I'm trying to catch you in a contradiction here. If obeying some set of 
rules, any set at all, that is mathematically expressible precludes 
volition, then I argue that you must declare humans non-volitional. Since 
you don't seem willing to do that, I sense a contradiction.

> It doesn't bother me to have someone talk about doing the
> impossible, e.g. performing an operation an infinite number of
> times.  I have a somewhat clear picture of the difference between
> science fiction and science fact.

All right... I'll set myself up a warehouse of old x86es and let them 
compute for as long as they can continue to run. If they go a hundred 
years, do you think that no valid programs will be generated? Give me some 
numbers for the size (in ops) of "Hello, world" and the average 
ops-per-second for the entire warehouse, and we'll do the calculation.
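
Here is the shape of that calculation in Python, with every figure a 
placeholder assumption until someone supplies real numbers; the point is 
the form of the arithmetic, not the output:

    # All figures below are stand-in assumptions, not measurements.
    program_size_ops = 100   # instructions in a tiny "Hello, world"
    values_per_op    = 256   # possible values per instruction byte
    warehouse_rate   = 1e12  # combined ops/second for the warehouse
    seconds_100yr    = 100 * 365 * 24 * 3600

    # Probability that one random stream of that length is the target
    # program, times the number of streams tried in a century.
    p_single = values_per_op ** -program_size_ops
    trials   = warehouse_rate * seconds_100yr / program_size_ops
    print("expected hits:", p_single * trials)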

> The fascination with random numbers or randomness in general as a
> source for spontaneity in a computing system I find amusing.

Hm. We keep coming back to this idea that following rules means you're not 
spontaneous. I suppose that as long as you're using that as an assumption, 
your argument is consistent.

> We keep acting as if software were only a set of instructions when
> in reality it has two inter-related spaces, an instruction space
> and a data space.  Moreover the data space has two subspaces, a
> read-only subspace and a read/write subspace.  Instructions
> operate on data or on machine state indicators e.g. branch on
> overflow.

This isn't inherent to the system, though. A processor may be able to 
detect overflow, and it may raise a signal, but it imposes no requirement 
that you do anything about it. It has no inherent code/data separation 
(or at least, it need not). It just fetches things from memory and puts 
them into instruction or data registers as needed.

Moreover, you can use HLLs to cheat. Consider the LISPs. Their code space 
is simply the interpreter. In the data space, one can put any executable 
program. These programs can be used to generate other programs, and in fact 
this is one of the standard stupid LISP tricks. If the instruction space 
contains "Look in data for things that look like programs and try to run 
them, letting them work on other things in the data space", you effectively 
have a single space.
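
The trick carries over to any language with an evaluator handy. A Python 
rendition of that single-space idea (contents of the space invented for 
illustration):

    # One undifferentiated "data space": numbers mixed with program text.
    data_space = [42, "lambda n: n + 1", "hello", "lambda n: n * 2"]

    # The "instruction space" is a single rule: find things that look
    # like programs, run them, and let them act on the rest of the space.
    for item in list(data_space):
        if isinstance(item, str) and item.startswith("lambda"):
            program = eval(item)            # data becomes code
            data_space.append(program(10))  # code writes back into data

    print(data_space)  # the space now holds results it computed itself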

> Software cannot escape its own consistency.  It cannot avoid its
> own errors.  Randomness does no more than transfer control
> (decision control structure) within consistent boundaries.  It is
> simply another way of making a decision on which path to take
> next.

All quite true, and I do not argue it. I merely challenge your further 
statement that this means software can never have volition.
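
It is easy to show how little randomness buys here. In a Python toy 
(both paths invented), the random draw merely selects among branches the 
programmer already wrote; nothing outside those boundaries can happen:

    import random

    # Randomness only picks which pre-written path runs next.
    path = random.choice([lambda: "path A", lambda: "path B"])
    print(path())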

> You cannot create a brain or any part of a human system with a
> computer.  One is an organism, fashioned in the same manner as any
> other, while a computer is not.  von Neumann architecture is not.
> Turing rules of computation are not.  Machines of any stripe are
> not.

This is *definitely* an assumption. All known neural pathways can be 
modeled in software. It's a statistical process, but so is the brain, from 
what we know. Now, if you want to say that that's still never going to be a 
real organism, that's fine, but you're heading for the realm of theology.
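
As a trivial existence proof of "modelable in software", here is a 
cartoon stochastic neuron in Python: weighted inputs, a threshold, and a 
noisy firing decision. It is the statistical flavor I mean, not any 
particular pathway, and every parameter is invented:

    import math
    import random

    def neuron_fires(inputs, weights, threshold=1.0, noise=0.1):
        """Crude stochastic neuron: sum the weighted inputs, add
        noise, and fire with sigmoid probability on the excess."""
        drive = sum(i * w for i, w in zip(inputs, weights))
        drive += random.gauss(0, noise)
        p_fire = 1.0 / (1.0 + math.exp(-(drive - threshold)))
        return random.random() < p_fire

    # Hypothetical pathway: three presynaptic spikes, fixed weights.
    print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.7]))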

Is it hard to put an entire brain into software? Of course. We're going 
to need something that can automate the process, because the connections 
are too numerous to be coded by hand. On the other hand, the process of 
brain-construction is demonstrably automatable, since the brain 
self-assembles from embryonic tissue.