A revolutionary OS/Programming Idea

John Newman jmn381@yahoo.com
Thu Oct 9 01:04:03 2003


Sorry to stick on the philosophy, but...
 
Lynn said,
"
The truth is we want the brain to have an underlying 
instruction set.  We don't want it to exhibit free will.  Our 
whole control system philosophy, including all of cybernetics 
(except for Ashby) is based on pre-determinism.
"
 
Just 'determinism.'  Drop the 'pre.'  Our whole body of science and rational thought is based on causality (including irrational thought as well, indirectly ;) ).  
 
Sometimes we think that when evolution theory refers to 'chance,' it means randomness.  What it really means is 'unpredictable,' relative to present human standards.  But to talk rationally about any concept, be it evolution or anything else, one must apply cause and effect.  Any other method employs the 'god of the gaps.'  We don't need to be creating unnecessary black boxes (however tantalizing that may sound).
 
"
In short intelligence and sentience is not von Neumann based.
"
 
An assertion not founded in evidence.  Von Neumann machines are based on the concept of universality: such a machine can emulate any other machine.  The human brain is a universal machine precisely because it can abstract a conception of any given 'possible' machine.  The catch is that one's brain must be at least as complex as any machine it emulates.
 
Ordinarily, though, the emulation layer on top of a von Neumann machine will necessarily be many times more complex than the machine being emulated.  So, universality is limited only by the finitude of the system doing the emulating.
 
Many agree that computers are universal, because a Mac can emulate a PC and vice versa, etc.
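 
To make that concrete, here is a toy sketch in Python (the guest instruction set below is invented for illustration): a stored-program host emulates any machine whose rules it can encode as data, which is all that universality claims.
 
def run_guest(program, x):
    """Interpret a guest program given as a list of (opcode, operand) pairs."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":            # x := x + arg
            x += arg
        elif op == "JNZ":          # jump to instruction arg if x != 0
            if x != 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return x

# A guest program: count x down to zero, then add 7.
countdown = [("JNZ", 2), ("HALT", 0), ("ADD", -1),
             ("JNZ", 2), ("ADD", 7), ("HALT", 0)]
print(run_guest(countdown, 5))     # -> 7
 
The host knows nothing about counting down; it only interprets a stored description, and a more complex guest simply means a longer description.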
 
"  
Therefore a von Neumann architecture can never emulate "exactly" intelligence and sentience.  
"
 
A statement based on an unfounded assertion (see above).  Matter is the machine that exhibits, or one could say emulates, life and intelligence.  We can emulate matter in computers (once we figure it out).
 
So, first, you must formulate an argument against a mechanistic nature.  Then you have the problem of proving that this 'non-mechanical' black box can interface in a mechanistic way with the mechanisms evidenced in nature (like evolution and sentience--both prominent black boxes in creationism).
 
"
It cannot cross the "threshold".  It cannot escape its own programming.  It 
therefore cannot evolve on its own.
"
 
You are assuming that we can 'escape' matter, and that evolution also escapes its bounds.  I'm not saying you are wrong, but thus far this has not proven to be a very scientific argument; more often it is an appeal to emotion.
 
What of a function like phi, the Golden Ratio?  Is the infinite sequence it inexorably produces an intention of whoever implements the function?  How can it be, when it is impossible for any programmer to know the entirety of the output sequence?  No one can influence the evolution of phi or pi.  For all intents and purposes, they evolve on their own.
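 
As a minimal sketch (Python, exact integer arithmetic): the function below fully determines every decimal digit of phi, yet its author cannot state digit N without, in effect, running the computation.
 
from math import isqrt   # integer square root, Python 3.8+

def phi_digits(n):
    """First n decimal digits of phi = (1 + sqrt(5)) / 2, as a string."""
    # Scale by 10^(n-1) and stay in exact integers: no rounding drift.
    scale = 10 ** (n - 1)
    return str((scale + isqrt(5 * scale * scale)) // 2)

print(phi_digits(30))    # -> 161803398874989484820458683436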
 
These 'naturally complex' functions can be found by programmers.  Wolfram found some, like rules 30 and 45.  As it turns out, the novelty of the infinite output of some deceptively simple functions, or machines, is not actually quantifiable.  So, evolution likely harnesses search functions that exhibit this kind of unboundedly complex output, thus maximizing the search space.  From what I know of complexity science, that's my take on it.
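 
Rule 30 itself fits in a few lines: each cell's next state is a fixed Boolean function of its three-cell neighborhood, yet the center column is irregular enough that Wolfram used it as a pseudorandom source.
 
def rule30(steps):
    """Run Rule 30 from a single live cell; return the center column."""
    width = 2 * steps + 1              # wide enough that the edges never matter
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        # Rule 30: new cell = left XOR (center OR right)
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return column

print("".join(map(str, rule30(16))))   # -> 1101110011000101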

 
"Lynn H. Maxson" <lmaxson@pacbell.net> wrote:
Alaric B Snell writes:
"...And do we actually know if the human brain can be 
described as something not based on an instruction set? ..."

Again I refer you to Ashby's "Design for a Brain" where you 
will find a description of his "homeostat", a non-programmed 
adaptive device. This does not say that the brain works in 
this manner. We simply do not know how the brain works its 
magic. Ashby simply illustrates that adaptive behavior can 
occur or not occur through connection-based conditions. 

In the particular instance of the homeostat, an 
electro-mechanical-chemical device of identical 
interconnected components (neuron analogs), he 
demonstrated a form of homeostasis. Homeostasis describes 
the process which occurs in living things to keep composite 
"vital signs" within certain value ranges. Stay inside and you 
live. Step outside and you die.
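
A toy rendering of that mechanism, Ashby's "ultrastability" (the 
dynamics, bounds, and reset below are invented for illustration): 
when an essential variable leaves its viable range, the device 
randomly re-draws its own connections until it happens onto a 
stable configuration.

import random

def step(state, weights):
    """One step of a linear system: each unit sums its weighted inputs."""
    n = len(state)
    return [sum(weights[i][j] * state[j] for j in range(n)) for i in range(n)]

random.seed(0)
n, bound = 4, 10.0
state, redraws = [1.0] * n, 0
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

stable_steps = 0
while stable_steps < 50:                     # call it stable after 50 quiet steps
    state = step(state, weights)
    if any(abs(x) > bound for x in state):   # essential variable out of range:
        weights = [[random.uniform(-1, 1) for _ in range(n)]
                   for _ in range(n)]        # re-draw connections at random
        state, stable_steps = [1.0] * n, 0   # restart the trial
        redraws += 1
    else:
        stable_steps += 1
print("stable after", redraws, "random re-wirings")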

Now process engineers attempt to maintain vital signs in 
operating refineries. They basically program the process, 
centrally monitor it, and more importantly shut it down when 
their programming cannot handle an "out of control" situation.

Refineries are not living organisms. You can kill and 
resuscitate a refinery, but a living organism gets only one 
chance. If the program fails it dies. The program can do no 
more than its authors can pre-conceive. It has no means to 
dynamically adjust (adapt) except in pre-conceived ways.

Such pre-conceptions do not exist--or according to Ashby need 
not exist--for a living organism to demonstrate adaptive 
behavior. Thus no "deus ex machina", no finger of God, no 
external (pre-conceived) programming required.

You could construct a refinery in this manner, but you would 
not. At some point measured in a nanosecond or a hundred 
years it would die. No one would invest money in such a 
venture. We do our best through programming to minimize 
risks by making systems which we stop (kill), correct 
(reprogram), and restart (give life). Living organisms only get 
a start.

Control systems depend on feedback, positive and negative, 
determined through inter-connections. Change the 
inter-connections and you get a different system. That's partly 
why we call them "control" systems. What do you call a 
system unconcerned with the inter-connections to produce 
adaptive behavior of a given type? Living organisms.
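
A minimal sketch of that point about inter-connections (the 
gains and setpoint are arbitrary illustration values): the 
same loop regulates or runs away depending only on the sign 
with which the correction is connected.

def run_loop(gain, setpoint=20.0, value=0.0, steps=8):
    """Drive 'value' toward 'setpoint' through one feedback connection."""
    trace = []
    for _ in range(steps):
        error = setpoint - value
        value += gain * error      # the sign of 'gain' is the connection
        trace.append(round(value, 2))
    return trace

print(run_loop(+0.5))   # negative feedback: climbs toward 20 and stays
print(run_loop(-0.5))   # flip one connection: the same loop runs away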

Ashby points out that a control system like an automatic 
pilot will fail if the connections (program) are not made in a 
specific manner. Yet he illustrates that you can build an 
automatic pilot which attempts the same adaptive behavior 
regardless of the connections. The keyword here is 
"attempts". If it doesn't succeed in time, i.e. adapt, you get to 
go down with the plane.

The truth is we want the brain to have an underlying 
instruction set. We don't want it to exhibit free will. Our 
whole control system philosophy, including all of cybernetics 
(except for Ashby) is based on pre-determinism.

So if Ashby is on the mark with respect to the brain, you can't 
emulate it with von Neumann architecture. Why? Because 
von Neumann architecture depends on the "deus ex machina", 
the human programmer. That's why the dynamic modification 
of source in LISP is overrated and frequently leads to leaps of 
faith into what is possible with it. But until you can eliminate 
entirely the "deus ex machina" its dynamic modification will 
always follow pre-conceived paths.

"...Hmmm, neurons do have long term state as well as short 
term state, but even the long term state is mutable so 
perhaps not 'static'. The synaptic weightings change slowly as 
the neuron 'learns', and this influences the chance of the 
neuron firing or not in a given situation ..."

You see that's what happens when you have a word like 
"learns" and apply it inappropriately to a situation. You have 
no basis, other than a human preference, for saying that 
"learning" takes place in a neuron. Moreover you have no 
predictive basis 
for what constitutes "learning" in humans or why all learning 
is not "universal" in them. That it's not, that it varies by 
individual, should indicate it does not rely on a von Neumann 
architecture. Humans are not computers. That's why 
software, in the form of instruction, does not have a 
predictable outcome on an individual basis.

In short intelligence and sentience is not von Neumann based. 
Therefore a von Neumann architecture can never emulate 
"exactly" intelligence and sentience. It cannot cross the 
"threshold". It cannot escape its own programming. It 
therefore cannot evolve on its own.

"...I'm interested in learning about more 'alternative' 
realisations of OO. Things I have already studied are the 
generic function / multiple dispatch idea, which is very 
interesting since it lets you add methods to existing classes; 
..."

In truth we used alternatives up to the point of getting this 
one. We can put this one down as a learning experience. 
The plain fact of the matter is that we don't need classes, 
class structures, or class libraries defined in this manner. We 
don't need this particular form to impose inheritance in order 
to simplify (?) the concept of reuse.

We have logic programming. We have rules. We can 
associate the rules with the processing of data and with 
processes (source segments) themselves. If I want to say that 
only certain procedures can maintain a given set of data, an 
element or aggregate (array or structure), then I only have to 
name them when declaring the data. I do this in SL/I with a 
"range" option as part of the data declaration, e.g. 'dcl able 
(-47, 50) fixed dec (7,2) range (proc1, proc2, ...procN);'. That 
tells the software that only those procedures can access this 
data aggregate. If I wanted it to have the same range as 
another set of data, i.e. exhibit inheritance, I can simply 
include the declared data name within the 'range' option. I 
could then have a class structure apply to only a range of 
data declarations and procedures within the entire body of 
such. Thus it doesn't have to be an "all or nothing" affair.
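
SL/I is Maxson's own design, so the "range" option above cannot 
be run as-is; a rough Python analogue (every name below is 
hypothetical) attaches to a data aggregate the set of procedures 
permitted to touch it, with inheritance expressed by sharing 
another aggregate's range.

import inspect

class Guarded:
    """A data aggregate that names the procedures allowed to access it."""
    def __init__(self, data, range_):
        self._data = data
        self.range_ = set(range_)          # the declaration's 'range' option

    def access(self):
        caller = inspect.stack()[1].function
        if caller not in self.range_:
            raise PermissionError(caller + " is not in this data's range")
        return self._data

def proc1(agg):
    return agg.access()    # named in the range: permitted

def rogue(agg):
    return agg.access()    # not named: refused

able = Guarded(0.0, range_={"proc1", "proc2"})
baker = Guarded(0.0, range_=able.range_)   # same range as 'able': inherited

print(proc1(able))     # -> 0.0
rogue(able)            # raises PermissionError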

