Emergence of behavior through software
Fri, 06 Oct 2000 20:45:33 -0400
--On Friday, October 06, 2000 12:58 AM -0700 "Lynn H. Maxson" wrote:
> We are basic agreement here. That's why we are in disagreement
> with respect to software-initiated life. As far as I know all
> organism are carbon-based. I assume that's why we call the
Actually, that's not entirely true. We have found bacteria at deep-sea
vents which run on a sulfur-based chemosynthetic chain. Furthermore, life on
other worlds, especially bacterial analogues, may well be common, and would
have to deal with very different element ratios.
> What is it that you would do in software? Certainly not create a
> life form. With software you can only mimic. The best you can
This depends on what your definition of life is. If you limit it to
carbon-based physical creatures, of course it cannot be created in
software. I do not share your definition. I prefer that which you call
logical equivalence --- if I cannot tell the behavior of a program from the
expected behavior of a human mind, then I feel that the program may well be
said to have a mind. This is the same test I use on other human beings, so
why should it not apply to software?
> with a silicon crystal. Logically equivalent, yes. Identical,
> no. Is there a practical difference? Which one can you use to
> build the computer in which you will run your silicon generating
If I were so silly as to simulate a silicon crystal in software, though, I
could certainly then start carving my virtual crystal into virtual chips. I
think the example is somewhat silly --- we can already make crystals to
almost-arbitrary specifications, whereas we definitely cannot make minds
with the same level of precision. Software is being examined as a possible
material for constructing those minds.
> Nothing which exists only in human-created systems is real outside
> that context. Not the physical rules. Not the chemical ones.
But is not our entire perception of the universe human-created? By your
logic, nothing is provably real, and we're back to Descartes' idea that the
only thing each of us knows is that he exists.
> useful. It simply means that they fill a need we have, not one of
> a universe which has no such problem, which has no needs period.
If the universe does not need rules, why does it obey them? I don't see how
it could exist as a system without following rules.
> one for a universe which doesn't reflect on what's happening. If
> you want to accept a theory that all matter did not exist before
> the big bang simply because the mathematics dictates it, you may.
> I will simply assume it's a map error.<g> Or it was an act of
> God, because we cannot have an effect without a cause.
I am willing to accept God as an explanation. I am furthermore willing to
accept that if the mathematics dictates something and there is no
counterevidence, it may be taken as true. If we can't accept the output of
physics as true, what good is it?
> The problem is that you want to program something that doesn't use
> programming. No matter the genetic code or the cell
> differentiation they only spawn the abilities. They do not direct
> them. Ashby with his homeostat showed that you only needed an
That's not really true. Your particular genetic traits continue to be
expressed throughout life, and they can have a profound effect on mental
processes. Consider the biological process of adolescence: those hormones
have a powerful effect on thought.
> difference which counts. First off, a neuron is not an on/off
> digital gate. One it gets "tired" and sometimes doesn't produce
> an output logically indicative of the input. Sometimes what it
Cell fatigue can happen, yes. On the other hand, a transistor may overheat.
Systems have failure conditions.
> produces is not sufficient to excite the next connection depending
> upon its current state. What you get is a statistical mishmash of
And sometimes the output of a circuit is not 1. Again, I see no problem.
> Given how well you understand the retina, I surprised that you
> don't implement it with software and a host computer. I don't
> know what it would see, but maybe if you connect it to that which
> mimics the brain, you could be on your way.<g>
Your idea is several years too late. Check the neuroengineering projects at
UPenn. They're already past the retina and working their way back towards
the cortex.
> but that there was a means of exhibiting goal-seeking (adaptive)
> behavior structurally without a separation between the direction
> and the doing. In short it is builtin, integral within the
> structure. What we call "adaptive behavior" or even "life" arises
At the same time, though, one can easily say that the instructions stored
in RAM/ROM/disk are part of the structure, since the charges representing
them are part of the system. Sure, we can change them. We can also
transfect neurons with arbitrary DNA. That can kill the brain, but you can
also trash a computer by sending in the wrong instructions.
> well. Find one simultaneous, majority-logic computer architecture
> that has an HLL with the same capability. It's not that one or
Which capability do you mean? Emotions and the like? Of course we don't
have it yet --- we don't know what instructions to give. I will point out,
however, that Kasparov had a sense of playing a thinking and intelligent
opponent while battling Deep Blue. Again, only logical equivalence, but for
me that's a decent start.
> Why should any simple matter take two decades? It must not be
> that simple.
The phrase "simple matter of engineering", like "simple matter of
programming", is highly sarcastic. Consult "SMOP" in the Jargon File if you
don't believe me.
> eminently logical system of mathematics?"
> I have no clue what connects this to the non-logic-circuit basis
> of an amoeba. For the record I make no such claim.<g> Although
> you may get an argument from physicists.
Amoebas behave according to physical laws. Physical laws are
mathematical/logical. Therefore, amoebas are constrained just as software
is, and being constrained within a ruleset does not deny life.
> The point is that whatever sensory capability they have is
> integral with the brain. If they lose a sensory capability, that
> in no way diminishes the functionality of the brain: the
> capability remains. If they lose a sensory capability of the
This is not really true either. The brain is a "use it or lose it" system. If the
visual neurons get no stimulation, they will shrink and die, and the brain
now only contains a subset of the standard functionality.
> you either believe that life is formed only from carbon-based
> matter or you do not.
And I do not, nor do I see why I should believe this. Seems kind of
geocentric to me.
> associate with life forms. Not the least of which is the lack of
> software distinguishable from the hardware. An organism is an
> integrated system and functions as such. You keep wanting to
> program that which requires no programming.
I don't see how I need this at all. Part of the point of the Church-Turing
thesis is that it doesn't matter whether I've got my TM in hardware or a
hardware/software combination; they are computationally equivalent.
> specific program using random opcode selection. I will not
> concede that it is practical or that given any zillion of
> interconnected machines at a 1000MHz that it will occur in less
> than a million years. I leave it up to others more familiar with
> probability to give you the actual odds.
It depends significantly on the length of the program. "Hello, world" is
doable. Win2000 probably isn't.
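The length dependence can be made concrete with a back-of-the-envelope
calculation. The assumptions here are mine for illustration: one uniformly
random byte per opcode, and one candidate program tried per machine cycle on
the kind of gigahertz hardware mentioned above.

```python
# Rough odds of producing one specific program by uniform random
# opcode selection. All parameters are illustrative assumptions:
# 256 possible values per byte, a zillion (here 10^9) machines,
# each trying one candidate per cycle at 1000 MHz.

def expected_years(program_bytes, machines=10**9, hz=10**9):
    """Expected years to hit one specific byte sequence by chance."""
    attempts = 256 ** program_bytes       # size of the search space
    per_second = machines * hz            # candidates tried per second
    seconds = attempts / per_second
    return seconds / (3600 * 24 * 365)

# A 3-byte program falls out almost instantly:
print(f"{expected_years(3):.2e} years")
# A mere 14 bytes already takes on the order of 10^8 years:
print(f"{expected_years(14):.2e} years")
```

The search space grows by a factor of 256 per byte, so anything the size of
an operating system is out of the question by an absurd margin.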
> Nevertheless you have your proof-of-concept even if useless. Now
> you propose to optimize a random process. I can only assume that
> you intend to do what we do now which is to remove the randomness
> through the use of fixed logic.
Of sorts. It seems to me that the bottleneck is more on the verification
than the testing side. Might be worth coming up with a few fast heuristics
to rapidly reject the majority of output. (It doesn't matter if we reject a
few correct programs with those, either; any program may be written in more
than one way.)
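The filter-before-verify idea can be sketched as follows. Everything here is
invented for illustration: the opcode encoding, the particular heuristics,
and the verifier stub stand in for whatever a real system would use.

```python
import random

# Hypothetical sketch: cheap structural checks discard obviously
# hopeless candidates, so only survivors reach the expensive verifier.

def random_program(length):
    """Generate a candidate by uniform random opcode selection."""
    return bytes(random.randrange(256) for _ in range(length))

def cheap_reject(program):
    """Fast heuristics. Rejecting a few good programs is harmless,
    since any behavior can be implemented by many different programs."""
    if len(program) == 0:
        return True
    if program[0] == 0x00:          # e.g. begins with a halt/no-op
        return True
    if len(set(program)) < 3:       # almost no instruction variety
        return True
    return False

def expensive_verify(program):
    """Stand-in for the slow step, e.g. running a full test suite."""
    return False                    # placeholder: nothing passes here

candidates = (random_program(16) for _ in range(10_000))
survivors = [p for p in candidates if not cheap_reject(p)]
print(f"{len(survivors)} of 10000 candidates reach the verifier")
```

The point is only that the heuristics run in microseconds while verification
may take seconds, so the bottleneck shifts back toward generation.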
> path. What you do with either choice is completely determined
> (consistent) with the embedded logic. The software may use random
> selection, but there is nothing random in the embedded logic.
And thus I wonder once more what's so limiting about having a ruleset.
Everything is consistent with some set of rules.
> You see it all hinges on what is included in compute. If you mean
> that which is possible on a von Neumann machine, the answer is no.
> Emotion is not an output of a process, but part and parcel of it.
> Emotion is a process as is volition, thinking, feeling, etc..
But computation may also be said to be a process. Furthermore, I see no
proof that emotion is not the output of a computational process. (I also
wonder if emotion is truly a requirement of mind, but that's another
discussion.)
> They are not separate nor separable from each other, but melded
> within the same overall process. As one who studied neuroscience
> you should know that. Don't make Descartes' error. Read the
But the very point of neuroscience *is* that the brain may be separated
into functional areas and that those areas perform recognizable
computations. If this were not true, we could not study it.
> other reason than it doesn't exist at this level. A neuron either
> fires or it does not depending upon the circumstances at that
> moment. That's it's only function at its level. If you want a
But a function is simply something which maps an input to an output in a
consistent manner. The output is the firing; the input is the neuron's
physiological environment. What can it be if not a function?
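The claim can be made concrete with a toy model. This is a plain threshold
unit of my own construction, not a claim about real neuron physiology; it
only shows that "fires or does not, depending on circumstances" is exactly a
function from inputs to an output.

```python
def neuron_fires(inputs, weights, threshold, fatigue=0.0):
    """A toy neuron: fires iff the weighted input drive, discounted
    by a 'fatigue' factor, crosses the threshold. A deterministic
    mapping from environment to output -- a function."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    return (drive * (1.0 - fatigue)) >= threshold

# A fresh neuron fires; the same stimulus on a tired one may not:
print(neuron_fires([1, 1], [0.6, 0.6], 1.0))               # True
print(neuron_fires([1, 1], [0.6, 0.6], 1.0, fatigue=0.5))  # False
```

Even the "tired" behavior mentioned earlier fits: fatigue is simply one more
input to the function, not a departure from functionhood.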
> Turing machine to execute or not execute billions of neurons
> simultaneously, be my guest. I guess it is one of those
> theoretical proof-of-concepts that you enjoy.
This is a bit closer to possibility than simple theory, though. It is
possible to cause a brainlike configuration to self-assemble. That's a
proven fact. Therefore, it is logically possible to write a bootstrapping
program that will put together the neuron-simulation-units for you in a
reasonable amount of time. Is it easy? No. I'd want the genome properly
mapped to proteins and those proteins well-characterized before I'd be
willing to try it. Nonetheless, it is possible.
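A bootstrapping program of that kind might look like this in miniature.
Everything here is invented for illustration -- the "genome" format and the
growth rule -- but it shows the principle: a compact spec is expanded into a
full network of units, rather than the network being wired by hand.

```python
# Toy developmental bootstrap: a short 'genome' grows into a layered
# network of simulated units, each connected to the previous layer.
# The format and growth rule are illustrative assumptions only.

genome = [
    ("layer", 4),   # grow a layer of 4 units
    ("layer", 3),   # then 3 units, each fed by the layer before
    ("layer", 2),
]

def grow(genome):
    network, prev = [], []
    for kind, count in genome:
        layer = [{"inputs": list(prev)} for _ in range(count)]
        network.append(layer)
        prev = range(len(layer))    # indices feeding the next layer
    return network

net = grow(genome)
print([len(layer) for layer in net])                         # [4, 3, 2]
print(sum(len(u["inputs"]) for layer in net for u in layer)) # 18 links
```

Nine numbers of genome specify eighteen connections; real genomes get far
better compression ratios than that, which is why the mapping to proteins
needs to be well understood first.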
> "Chain enough of those together to replicate the limbic circuits
> and you may well have artificial emotion."
Yes. If you want to make assertions, you'd better do more than just smile.
> I'm not aware that my claims have any less validity than yours.
> However I am more than willing to change it to highly improbable,
> mimicking it in the limit as you would life in software to say
> that you can't tell the difference.<g>
Ah, but probabilities can be reduced. Infinity can't.
> Interesting. Both you and Fare hold that we are some decades away
> from any ability to state it one way or the other. I consider it
> short-sighted to pursue the unknown when we have yet to exhaust
> the known. I believe that you will only create life as such with
But how else will we get to the unknown? Again, there are an infinite
number of possible programs. If we stick to the kinds of things we know how
to write, we will never exhaust that space --- there's always one more
feature or heuristic that could be slapped on.
> all its properties using carbon-based technology and never with
> von Neumann architecture and Turing-based software. There is a
> chemistry of life relegated to actual physical material that no
> matter how you mimic them in software will always have something
And I claim that this chemistry is not important, that it's the
computational functions of the neurons that matter. I cannot prove this,
but it cannot be disproven.
> requires more than what we know currently. Fare wants to give
> software a "life of its own" except for "purpose" which we will
> retain. He doesn't see that the one contradicts the other,
> because life's processes, simultaneously present in the process,
> does not allow for such separation.
This is a good point, and I agree with you here. I don't think you could
build truly intelligent software without it deciding to have its own
purposes.
> simply suggest that you reexamine it. I see no sense in
> artificial life, because success means loss of a tool. Do you
> want to reinstitute slavery? Do you want yet another source of
> mis-communication? Do you think that artificial life offer us any
> more than what they could offer without it?
Success does not mean loss of a tool. If some programs are intelligent,
that does not mean that all programs are. I believe that creating
artificial life, as well as attempting to create physical life, is
something which humans must do as part of our progress as an intelligent
species. However, this is getting once more into the realm of theology. I
also believe that by having other forms of life to compare ourselves to, we
will have a deeper understanding of what it means to be alive.
Obviously, it would be wrong to enslave intelligent programs. I don't see
that we could, really. Active, sentient programs would be quite hard to
control short of yanking the plug out of the wall.
> extend our capabilities. Who can be opposed to that? Artificial
> life, something that replicates what we are only thousands of
> times slower on 1,000,000MHz machines and at 100,000,000 times the
> cost, makes no sense at all in my opinion. Artificial limbs,
But it will not be that slow forever, and there is nothing in the laws of
physics that says it must be that slow. Furthermore, at any point it has
the opportunity to diverge from what we are. We cannot consciously rewire
our brains. An intelligent program could. (This is obviously also a source
of great danger if for some reason we mistreat our creations.)
> artificial organs, yes. I personally would prefer non-artificial
> either regenerated through biology.
For now, tissue-engineered organs will be better. IMHO, at some point we're
going to get around to improving on the design of organs, and then the
artificial ones may pull ahead.
> I see software and hardware as a tool. I don't see artificial
> life as such. Your choice.
It need not be a tool in the sense that a hammer is a tool. It could be a
tool in the sense that a valued teammate is a tool. There are some things
which computers do very well and which humans do poorly, and therefore we
might want to ask intelligent machines to help us with those things. Of
course, we need something to offer in return, even if it's only some
processor cycles to run on. (I'm hoping that there'll be some aspect of
human creativity that *does* turn out to be untransferable, so that AI is
like us but not totally like us. We can then offer them those services in
return.) Yes, this is speculation and science fiction. So what? At CMU, the
Robotics program prides itself on being the only grad program to have
arisen from a science fiction story. Robots were fiction once. Does that
make them somehow less real?