A revolutionary OS/Programming Idea
"Lynn H. Maxson" <lmaxson@pacbell.net>
Wed Oct 8 08:05:02 2003
Alaric B Snell writes:
"...And do we actually know if the human brain can be
described as something not based on an instruction set? ..."
Again I refer you to Ashby's "Design for a Brain" where you
will find a description of his "homeostat", a non-programmed
adaptive device. This does not say that the brain works in
this manner; we simply do not know how the brain works its
magic. Ashby merely illustrates that adaptive behavior can
arise, or fail to arise, purely from the conditions of the
connections.
In the particular instance of the homeostat, an
electro-mechanical-chemical device of identical
interconnected components (neuron analogs), he
demonstrated a form of homeostasis. Homeostasis describes
the process which occurs in living things to keep composite
"vital signs" within certain value ranges. Stay inside and you
live. Step outside and you die.
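If you want the flavor of this in code, here is a toy rendering
(my own illustration, not Ashby's circuit; the dynamics, limits,
and constants are all invented):

import random

# Toy homeostat: a handful of units whose states evolve under
# random inter-connection weights. When any "essential variable"
# leaves its permitted range, the step-mechanisms blindly
# re-randomize the connections -- no pre-conceived correction,
# just re-wiring until a stable configuration is stumbled upon.
N = 4
LIMIT = 10.0    # the "vital sign" range: stay inside and you live

def random_weights():
    return [[random.uniform(-1, 1) for _ in range(N)]
            for _ in range(N)]

state = [random.uniform(-1, 1) for _ in range(N)]
weights = random_weights()
rewirings = 0

for step in range(10000):
    state = [0.9 * sum(w * s for w, s in zip(row, state))
             for row in weights]
    if any(abs(s) > LIMIT for s in state):   # stepped outside
        weights = random_weights()           # blind re-wiring
        state = [random.uniform(-1, 1) for _ in range(N)]
        rewirings += 1

print("settled after", rewirings, "re-wirings")

Nothing in that loop "knows" how to correct the system; it only
knows when the essential variables have left their range.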
Now process engineers attempt to maintain vital signs in
operating refineries. They basically program the process,
centrally monitor it, and, more importantly, shut it down when
their programming cannot handle an "out of control" situation.
Refineries are not living organisms. You can kill and
resuscitate a refinery, but a living organism gets only one
chance: if the program fails, it dies. The program can do no
more than its authors can pre-conceive. It has no means to
dynamically adjust (adapt) except in pre-conceived ways.
Such pre-conceptions do not exist--or, according to Ashby, need
not exist--for a living organism to demonstrate adaptive
behavior. Thus no "deus ex machina", no finger of God, no
external (pre-conceived) programming is required.
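The refinery style, by contrast, looks like this in miniature
(the variables and thresholds are invented for illustration):

# Pre-programmed process control: the author of this table is
# the "deus ex machina". Anything he did not pre-conceive ends
# in shutdown; the system cannot re-wire itself.
LIMITS = {"temperature": (300.0, 450.0),
          "pressure":    (1.0, 5.0)}

def monitor(readings):
    for name, value in readings.items():
        if name not in LIMITS:        # the unforeseen case
            return "SHUTDOWN"         # kill, correct, restart
        lo, hi = LIMITS[name]
        if value < lo or value > hi:
            print("apply pre-conceived correction to", name)
    return "running"

print(monitor({"temperature": 500.0, "pressure": 3.0}))
print(monitor({"vibration": 9.9}))    # SHUTDOWN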
You could construct a refinery in this manner, but you would
not. At some point, whether measured in nanoseconds or in a
hundred years, it would die. No one would invest money in such a
venture. We do our best through programming to minimize risks by
making systems which we stop (kill), correct (reprogram), and
restart (give life). Living organisms only get a start.
Control systems depend on feedback, positive and negative,
determined through inter-connections. Change the
inter-connections and you get a different system. That's partly
why we call them "control" systems. What do you call a system
that produces adaptive behavior of a given type without concern
for the inter-connections? Living organisms.<g>
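How completely the behavior hangs on the inter-connections is
easy to show (a toy loop, not a real controller):

# The same loop with the sign of the connection flipped:
# negative feedback settles toward the set-point, positive
# feedback runs away.
def run(gain, steps=20, x=1.0):
    for _ in range(steps):
        x = x + gain * (0.0 - x)   # drive the error toward zero
    return x

print(run(gain=0.5))    # negative feedback: decays toward 0
print(run(gain=-0.5))   # positive feedback: grows without bound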
Ashby points out that a control system like an automatic
pilot will fail if the connections (the program) are not made in a
specific manner. Yet he illustrates that you can build an
automatic pilot which attempts the same adaptive behavior
regardless of the connections. The keyword here is
"attempts". If it doesn't succeed in time, i.e. adapt, you get to
go down with the plane.
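In miniature (again my own toy rendering, not Ashby's):

import random

def fly(sign, steps=50):
    # One flight segment under a feedback connection of the
    # given sign; returns the final roll angle.
    roll = 1.0
    for _ in range(steps):
        roll = roll + sign * 0.2 * (0.0 - roll)
    return roll

# Wired backwards, a fixed autopilot diverges and stays diverged.
print(abs(fly(-1.0)) > 100.0)     # True: down with the plane

# The ultrastable version "attempts": on instability it re-draws
# its own connection at random until flight settles.
sign = -1.0
attempts = 0
while abs(fly(sign)) > 1.0:
    sign = random.choice([-1.0, 1.0])
    attempts += 1
print("stable after", attempts, "random re-wirings")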
The truth is we want the brain to have an underlying
instruction set. We don't want it to exhibit free will. Our
whole control-system philosophy, including all of cybernetics
(except for Ashby), is based on pre-determinism.
So if Ashby is on the mark with respect to the brain, you can't
emulate it with von Neumann architecture. Why? Because
von Neumann architecture depends on the "deus ex machina",
the human programmer. That's why the dynamic modification
of source in LISP is overrated and frequently leads to leaps of
faith about what is possible with it. But until you can eliminate
the "deus ex machina" entirely, its dynamic modification will
always follow pre-conceived paths.
"...Hmmm, neurons do have long term state as well as short
term state, but even the long term state is mutable so
perhaps not 'static'. The synaptic weightings change slowly as
the neuron 'learns', and this influences the chance of the
neuron firing or not in a given situation ..."
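The mechanism being described amounts to roughly this (a
caricature with made-up constants; whether it deserves the word
"learns" is exactly the question):

import math, random

def fire_probability(weight, signal):
    # The long-term state (the weight) sets the chance of
    # firing on a given input (the short-term event).
    return 1.0 / (1.0 + math.exp(-weight * signal))

weight = 0.1
for trial in range(1000):
    if random.random() < fire_probability(weight, 1.0):
        weight += 0.01 * (1.0 - weight)   # slow drift with use

print(fire_probability(weight, 1.0))      # has crept upward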
You see, that's what happens when you take a word like
"learns" and apply it inappropriately to a situation. You have
no basis, other than a human preference that it take place, for
"learning" in a neuron. Moreover you have no predictive basis
for what constitutes "learning" in humans, or for why all
learning is not "universal" in them. That it's not, that it
varies by individual, should indicate it does not rely on a von
Neumann architecture. Humans are not computers. That's why
software, in the form of instruction, does not have a
predictable outcome on an individual basis.
In short, intelligence and sentience are not von Neumann-based.
Therefore a von Neumann architecture can never emulate
"exactly" intelligence and sentience. It cannot cross the
"threshold". It cannot escape its own programming. It
therefore cannot evolve on its own.
"...I'm interested in learning about more 'alternative'
realisations of OO. Things I have already studied are the
generic function / multiple dispatch idea, which is very
interesting since it lets you add methods to existing classes;
..."
In truth we used alternatives up to the point of getting this
one.<g> We can put this one down as a learning experience.
The plain fact of the matter is that we don't need classes,
class structures, or class libraries defined in this manner. We
don't need this particular form to impose inheritance in order
to simplify (?) the concept of reuse.
We have logic programming. We have rules. We can
associate the rules with the processing of data and with
processes (source segments) themselves. If I want to say that
only certain procedures can maintain a given set of data, an
element or aggregate (array or structure), then I only have to
name them in declaring the data. I do this in SL/I with a
"range" option as part of the data declaration, e.g. 'dcl able
(-47, 50) fixed dec (7,2) range (proc1, proc2, ...procN);'. That
tells the software that only those procedures can access this
data aggregate. If I wanted it to have the same range as
another set of data, i.e. exhibit inheritance, I can simply
include the declared data name within the 'range' option. I
could then have a class structure apply to only a range of
data declarations and procedures within the entire body of
such. Thus it doesn't have to be an "all or nothing" affair.
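Since SL/I is my own notation, here is the same idea mocked up
in Python (the names and the run-time check are illustrative
only; a real compiler would enforce the 'range' statically):

import inspect

class Ranged:
    # A data aggregate that only the procedures named in its
    # 'range' may touch -- a run-time stand-in for the SL/I
    # 'range' option.
    def __init__(self, value, allowed, inherit=None):
        self._value = value
        # Inheritance: fold another aggregate's range into ours.
        self._allowed = set(allowed)
        if inherit is not None:
            self._allowed |= inherit._allowed

    def _check(self):
        caller = inspect.stack()[2].function
        if caller not in self._allowed:
            raise PermissionError(caller + " is not in range")

    def get(self):
        self._check()
        return self._value

    def set(self, value):
        self._check()
        self._value = value

able = Ranged([0.0] * 98, allowed={"proc1", "proc2"})
baker = Ranged(0.0, allowed={"proc3"}, inherit=able)

def proc1():
    return able.get()      # permitted: named in able's range

def proc9():
    return able.get()      # not named: refused

proc1()
try:
    proc9()
except PermissionError as e:
    print(e)               # proc9 is not in range

The class structure, such as it is, applies only where a 'range'
names it--which is the "not all or nothing" point.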