Emergence of behavior through software

Lynn H. Maxson lmaxson@pacbell.net
Mon, 25 Sep 2000 21:51:22 -0700 (PDT)


Billy wrote:
"The part where someone started believing that "universal machine" 
has ANY connection whatsoever to reality."

Massimo Dentico wrote:
"This is the fun part of your message: you *seem* covertly despise
the philosophy and then you propose the same theme of a 
philosopher like Penrose."

Then finally Kyle Lahnakoski wrote:
"But I suspect that QM is just a statistical approach to an 
unknown deterministic process."

First off I have to apologize for not being familiar with 
Penrose.  As I said early on in this thread, my reference 
relative to the brain is Antonio R. Damasio's "Descartes' Error: 
Emotion, Reason, and the Human Brain".  I do not quite share 
Billy's blunt disbelief, nor do I wish to argue an unprovable 
belief in either determinism or non-determinism, or to dispute 
that at the quantum level the observation (which involves 
quanta) interferes with (becomes part of) the process: 
Heisenberg's uncertainty principle.

At the earliest point in my career as a technician my 
responsibility included diagnosing and correcting computer logic 
circuits (in the days of tubes, diodes, resistors, and 
capacitors).  Much has changed since then in what is used to 
construct computers, but basically nothing has changed in how.  
It is still logic circuits aggregated into instruction sets.

IMHO that's the key here: the instruction set, particularly the 
fixed instruction set.  No matter how elaborate or sophisticated 
the meta-programming, no matter how many levels of it, no matter 
how high the level of the HLL, it all occurs within the 
instruction set.  No executable software exists that has not 
been translated into the instruction set of the machine for 
which it is intended.

Basically it does not make any difference how high a level of 
abstraction we reach in our HLL of choice.  When it comes to 
execution, to the components of that execution, nothing lies 
outside the boundaries of the instruction set.
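
Here is a minimal sketch of what I mean by a fixed instruction 
set (again in Python, purely illustrative): a toy machine with 
four instructions.  However elaborate the program we feed it, 
every step of its execution is one of those four and nothing 
else.

    # A toy machine with a fixed instruction set of four
    # operations.  Any program it runs, however elaborate, is
    # only ever a sequence of these instructions.
    def run(program):
        stack = []
        for op, arg in program:
            if op == "PUSH":       # place a constant on the stack
                stack.append(arg)
            elif op == "ADD":      # pop two values, push their sum
                stack.append(stack.pop() + stack.pop())
            elif op == "MUL":      # pop two, push their product
                stack.append(stack.pop() * stack.pop())
            elif op == "PRINT":    # output the top of the stack
                print(stack[-1])
            else:
                raise ValueError("not in the instruction set: " + op)

    # A higher-level expression such as (2 + 3) * 4 must first be
    # translated into that fixed instruction set before it can run.
    run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
         ("PUSH", 4), ("MUL", None), ("PRINT", None)])  # prints 20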

The same holds in reflection, in reflection on reflection, in 
reflection on reflection on reflection, and so on for as many 
levels as we choose (as it is our choice and not that of the 
software).  That does not mean all mimicry is alike, only that 
all mimicry is mimicry.  It is mimicry because we instituted it 
and we recognize it.  No software written to date has any means 
on its own to change the levels or any cognizance of what it is 
doing.  Any such tests are tests which we have instituted.
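
Even a language with built-in reflection illustrates this.  A 
sketch, assuming CPython and its dis module: a function that 
inspects another function's code is itself nothing but a 
sequence of instructions from the same fixed bytecode 
instruction set.

    import dis

    def square(x):
        return x * x

    def reflect(f):
        # "Reflection": the program examining a code object.
        return [name for name in dir(f.__code__)
                if name.startswith("co_")]

    print(reflect(square))  # attributes of square's code object
    dis.dis(reflect)        # yet reflect() itself disassembles
                            # into CPython's fixed bytecode
                            # instructions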

Beyond the instruction set we have the linear memory.  While we 
may construct elaborate non-linear aggregates (data and 
programs), for the machine to execute them we must translate 
them into a linear form.  For software to exhibit higher-level 
abstractions on its own (other than the patterns we have encoded 
into it) it must overcome the limits of linear memory and 
gestalt patterns and families of patterns as well as define them 
conceptually and give them names (means of reference).
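
What "translate into a linear form" means is easy to show.  A 
sketch (illustrative Python again): a non-linear structure, a 
binary tree, laid out in a flat array, with integer indices 
standing in for linear addresses.

    # A non-linear aggregate -- a binary tree -- stored in linear
    # memory.  Each "cell" is three consecutive slots: value,
    # left address, right address.  -1 stands for a null address.
    memory = [
        10,  3,  6,    # address 0: value 10, left at 3, right at 6
         4, -1, -1,    # address 3: leaf with value 4
        17, -1, -1,    # address 6: leaf with value 17
    ]

    def walk(addr):
        # In-order traversal reconstructed from the flat layout.
        if addr == -1:
            return []
        value, left, right = memory[addr:addr + 3]
        return walk(left) + [value] + walk(right)

    print(walk(0))  # [4, 10, 17] -- the tree exists only as our
                    # interpretation of a strictly linear memory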

When you look at the human brain and nervous system, with what 
little we have learned of them, and then look at a computing 
system of hardware and software, which we know to the most 
intimate detail, they are entirely different constructs.  There 
are no fixed logic circuits in the brain (and, or, and not), no 
linear memory, no linear addressing, no instruction set.  Since 
what is there has been sufficient over time to allow us to 
construct computers and software, i.e. to give their components 
a realization, the question arises: can the reverse also occur?

Therein lies the crux of our differences.  Can the computer do for 
the brain what the brain has done for it?  Even with extensive 
assistance from us?  If it is von Neumann architecture, Turing 
computational rules, fixed instruction set, fixed internal logic 
circuits, and linear addressable memory, I say no.  There's no 
"magic" in that box.

Fare believes otherwise, that you can go up levels of abstraction 
and that at some point in that upward path you achieve a 
capability not present in any of the lower.  Something additional 
happens entirely free of all that has gone before.  If I 
understand Kyle Lahnakoski correctly with his purely deterministic 
universe, this doesn't happen even in the brain: everything that 
occurs can be accounted for by everything below it.  What 
confuses me is that he offers this in support of Fare.<g>

A natural question lies in asking under what conditions this 
"spontaneous generation" occurs.  If it is levels of 
abstraction, then how many levels is it?  What is the magic 
number?  Where has it occurred?  Certainly not in any of the 
examples he has furnished.  He says in commenting on one example 
that we cannot fathom the result, i.e. we cannot, in any 
interval we could commit to, follow the logic which produced it.  
However, we can write software with deterministic logic that 
produces results which we cannot replicate on our own.  That 
still does not mean that anything "extra" occurred, only that we 
used a tool as an extension of our capabilities.  It does in the 
large what we can only do in the small.  It extends our limits.  
Good tool.
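
A trivial sketch of that kind of tool (illustrative Python once 
more): a few lines of deterministic logic whose result none of 
us would care to reproduce by hand, yet nothing "extra" has 
emerged; every step is accounted for by the step before it.

    # Deterministic logic iterated far beyond what we would
    # follow by hand.  Nothing "extra" emerges; the result is
    # fully accounted for by each individual step.
    def iterate(x, steps):
        for _ in range(steps):
            x = (1103515245 * x + 12345) % (2 ** 31)  # one fixed rule
        return x

    print(iterate(1, 1000000))  # we could not commit the interval
                                # needed to follow a million steps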

Fare is entitled to his opinions and the means he has chosen for 
his path to discovery.  If at some point his opinion becomes 
provable fact in a scientific sense, then no such argument pursued 
here will continue.  I wish him well.  Personally I don't feel any 
of it is necessary to achieve the goals or meet the requirements 
of the Tunes project or the Tunes HLL.  If we achieve them without 
the need for something extra, then I question even bringing it up.

Let's just say it is an example of Occam's Razor.