From lmaxson@pacbell.net Sat, 30 Sep 2000 09:44:20 -0700 (PDT) Date: Sat, 30 Sep 2000 09:44:20 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software I have to thank Jecel Assumpcao for clarifying some issues about emergent and non-deterministic behavior in software. I did not set up the two-question pair to Fare as trick questions. I use "consistent" as the qualifying term to determine whether the results conformed to the embedded logic of the software or not. If they (the results) did, then regardless of anyone's ability to fathom or predict them, the software executed as instructed and thus brought nothing "extra" or "special" to the process. In short it did nothing on its own "volition". If they did not conform, then we have to account for (1) the means through which it achieved "take off", the ability to fly on its own, and (2) why given this "independence" it would choose (as clearly it has the choice) to continue to pursue the "original" purpose and not determine to pursue a different one as part of its "nonconformist" behavior. Fare despite his cybernetic leanings will not grant the software any choice other than pursuit of the original purpose. He will allow it to improve upon its internal logic, but this raises now a larger issue. How does software on its own organize itself internally to "develop", "recognize", and "construct" higher level of abstractions from the only thing it can execute: the machine instruction set. You see, how do we bridge this gap from externally set, instruction-conforming behavior to internally set, instruction-conforming behavior? What initiates the internally set behavior from the externally set? If we disallow passing it to it somehow as a form of inheritance, which we must disallow in order for its "new" behavior to be its "own", then we are left with the issue of "spontaneous generation", something occurring on its own independent of the current consistent execution. You see it cannot occur through meta^n-programming regardless of levels. That presupposes that we have some ability to encode a "triggering" event in the execution which will spawn the necessary spontaneous generation. We do not. What we have is the machine instruction set, the only thing that the software can direct the machine to execute. The only direction it can offer is that externally set by its authors. The authors may or may not know (predict) the results. They may or may not take or have the time necessary to retrace the executed logic. But what they do know (something that the non-intelligent software cannot) is that whatever results is consistent with the encoded logic. Truth is that Fare knows this as well. He knows that there are no triggering events not present in nor otherwise determined inconsistent with the embedded logic of the software. It can't happen, because in von Neumann architecture any such occurrence is an "error", something to be fixed. Contrary to his statement that no one can know completely the internal logic of the machine, I began my career being trained to know just that. If it failed, if the results differed from the expected result of a machine instruction execution, the embedded IPO (input-process-output) logic of a machine instruction, my job was to diagnose and repair. Spontaneous generation then in a von Neumann machine is an error, again something to be fixed. The hardware does not support it except as an error. There is no means in software translated into machine instructions to make this possible. 
Fare in acceding that software retains its "purpose" throughout knows this. He also knows that regardless of our ability to predict or fathom software-produced results it has nothing to do with the results' consistency with the encoded logic. We may be surprised. The non-intelligent software lacks this altogether. This is true for AI systems whether rule-based or neural-net. A neural net implemented in software is rule-based. Rule-based software systems created by humans may reflect the result of human "intelligence" (determining the set of rules), but neither the input, the process, nor the output (result) does more than "reflect", not "absorb", that intelligence. In and of themselves, absent an "observer", they cannot produce information. For information implies meaning. Meaning does not exist in data or software. Meaning, if it exists at all, does so only in the observer. Are the results consistent with the embedded logic of the software regardless of our ability to fully predict or fathom them? If the answer is yes, then we do not differ in terms of causes and effects. If the answer is no, then what spontaneous generation occurs that is not an error? If it does occur, how does it still stay within the "purpose" encoded in the software? If we can all agree on results consistent with the embedded software logic, then we can ignore considering inconsistency. That leaves us then free to differ in our degree of individual wonderment relative to what "we" have "created" through software. Therein we can expect that in terms of wonder some are less or more so than others. If that's the only thing which separates us, then we can return to the main Tunes list. From fare@tunes.org Sat, 30 Sep 2000 22:47:18 +0200 Date: Sat, 30 Sep 2000 22:47:18 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Emergence of behavior through software [Note to cybernethicians: this is a discussion that was moved from the tunes@tunes.org mailing list, that originated in the ambition for Tunes to eventually manage, using "AI" techniques, implementation choices that usually are done manually, out of a declarative description of possible elementary choices; the discussion then drifted onto the topic of emergent systems in general and AI in particular] Dear Lynn, On Thu, Sep 28, 2000 at 11:21:23AM -0700, Lynn H. Maxson wrote: > Two questions. Can software produce results not consistent with > the instruction of its author(s)? Can its author(s) provide > instructions that free it from such consistency? > > If the first answer is no, then the second is no also. Indeed, the answer is no, but once again, the question misses the point. For giving instructions does not mean understanding. I may well genetically engineer a human clone with some genetic instructions, and be unable to fully understand and predict the detailed behavior of the clone I created. In other words, the piece of running software you write is not free from its own code that you originally provided; but it IS free from the programmer, from you and me or other programs. You give the initial seed, but it grows to have a life of its own; it does have an individual history of information processed, and you have to be it to fully grok its detailed behavior. Even for simple programs, I may know the original instructions and still be unable to predict the results (see: simulation of chaotic dynamic systems).
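As an illustration of that last point, here is a minimal Python sketch (not part of the original exchange; the function name and constants are chosen only for illustration). The rule is tiny and fully known, yet two runs whose inputs differ by one part in a billion soon disagree completely, so in practice the only way to learn the output is to run the computation itself:

    # Logistic map: a fully known rule whose outcome resists prediction.
    def logistic(x, steps, r=4.0):
        """Iterate x -> r*x*(1-x) for `steps` steps."""
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = logistic(0.300000000, 50)   # one starting point
    b = logistic(0.300000001, 50)   # same rule, input off by 1e-9
    print(a, b)                     # the two trajectories have diverged

Nothing here escapes its embedded logic; the point is only that knowing the instructions is not the same as knowing the result.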
The only way to "predict" is to make the same computation independently; which thus does not predict but postdicts; the program brings unique worthwhile information that will be unavailable afterwards. > It is the > second question which divides our opinions relative to software > extending boundaries on its own volition. NO NO and NO. We all agree on this question, and have since the beginning. You believe it is the question, and you systematically avoided our remarks that it ISN'T the question. The question is about drawing a separation between doing and understanding. We agree that doing is foremost, both more primitive and more fundamental. As far as doing goes, we agree, and thus there need be no debate on it. > To me the issue is not one of my manually duplicating a process, > but whether that process conforms to what I have prescribed. We agree that the process conforms to what the programmer programs (duh!). What we claim is that
1) in the general case, the program does not contain _all_ the information about the running machine, for persistent state accumulated from integrated I/O, internal growth, etc., does matter. The program is _partial_ information about the running machine. Hopefully, yes, it is correct information.
2) even for deterministic programs where it does "potentially" contain the information, nothing short of running the program will realize the potentiality, so that the program can achieve effects neither designed nor predicted nor predictable nor intended by the programmer. In other words, the cost of prediction does matter. This is not pure abstract maths.
3) not only can such emergent behavior be _beneficial_ if constrained, checked and selected against some expressible utility criteria; but many more beneficial behaviors can be achieved by such selection than can be achieved by direct design.
4) actually, if you know Popperian epistemology, even the "design" phenomenon within the human brain works by such an emergence and selection principle; what we claim is just that the principle works with machines as well.
5) many programs are already such emergent machines; in a complex compiler with thousands of rules, no one can claim a design of the system details; only a design of the constraint system that forces the production rules to conform to the declared semantics of the language.
Artificial emergent systems are _already_ useful and can be improved upon for more utility, independently from their eventually reaching "AI" or not. > Now we have a non-intelligent machine into which we load software. > Is it possible to embed intelligence into software? No. I deny meaning to your sentence. Intelligence is NOT a constructive feature that you code or do not code into software. Just like purpose, it is an observable property of emergent phenomena. Much as you can constrain an observable property, you cannot build an explicit construction of that property; only for very simple kinds of programs can we control both the construction and the observable properties; these programs are all the more interesting for this extraordinary property, and constitute the bricks and mortar with which we can engineer software. But to limit the study and development of programs to these simple ones is a very short-sighted or timid approach to software development. > The software like the machine only does what it is instructed to do > without an "awareness" that it is doing anything. This is a gratuitous statement.
Your brain works according to the instructions from its genetical and educational background, yet is "aware" of its doing something (though of course it's only "aware" of a tiny part of itself). The same applies to machines. > Therein lies the problem. We experience what we refer to as > intelligence. In truth we do not know how it occurs within us. Exactly. There's no reason why machines couldn't be intelligent without either them or us knowing how it occurs within them. > Maybe with the invocation of the "genetic code" which is an > instruction set for construction, not operation, it arises > somehow. However we have yet to find a means of transferring it > in any manner as an attribute for a computer or software. Indeed. I do NOT claim that we know how to do it yet, neither do I claim certainty that we'll ever know, or that we won't find a theoretical impossibility to our even knowing. But I do claim
* that even stupid, emerging systems can do more than fully designed systems,
* that there is room for a lot of improvement in the engineering of emerging systems, and
* that a reflective infrastructure is well suited to such innovative use of emerging behaviors.
[Note: technical questions about the latter are to Followup-To: tunes@tunes.org; philosophical discussions here.] > After a while the logic get circular. YOU make it circular, and then make us the reproach. I fully agree with your assertions; but I claim their lack of relevance. I am trying to say something quite different from what you hereby oppose. Do you acknowledge what I'm trying to say? Yours freely, [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] To converse at the distance of the Indes by means of sympathetic contrivances may be as natural to future times as to us is a literary correspondence. -- Joseph Glanvill, 1661 From aswst16+@pitt.edu Sat, 30 Sep 2000 14:26:03 -0400 Date: Sat, 30 Sep 2000 14:26:03 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Saturday, September 30, 2000 9:44 AM -0700 "Lynn H. Maxson" wrote: > If they (the results) did, then regardless of anyone's ability to > fathom or predict them, the software executed as instructed and > thus brought nothing "extra" or "special" to the process. In > short it did nothing on its own "volition". I'd argue that this requires definition of the term volition, and also an understanding of where exactly one obtains volition. To the best of my knowledge, this is not a solvable problem with existing knowledge of the human mind. See also three billion opinions on the Chinese Room. > instruction-conforming behavior? What initiates the internally > set behavior from the externally set? If we disallow passing it > to it somehow as a form of inheritance, which we must disallow in > order for its "new" behavior to be its "own", then we are left Your point that true AI might not conform to its original purpose is well-taken. It obviously has to conform to the architecture on which it runs, for the simple reason that it will die if it does not. However, I wonder at your statement that an organism which has inherited behaviors from an external source cannot claim those behaviors as its own. I have behaviors inherited from my parents, from my society, and from my evolutionary ancestors going back to single-celled life.
There is a credible argument that all my actions can be predicted by a sufficiently complex simulation containing all these terms. Do I no longer have any behaviors of my own? If I combine two actions previously taken by others into one which no-one has yet taken, does that count as my own behavior? (A program could achieve this by stringing together two function calls in a way no programmer had instructed it to do.) > levels. That presupposes that we have some ability to encode a > "triggering" event in the execution which will spawn the necessary > spontaneous generation. We do not. What we have is the machine > instruction set, the only thing that the software can direct the > machine to execute. But being limited to an instruction set does not preclude generation of new strings of instructions. One may argue that a human is limited to the actions possible within the known laws of physics, and yet we believe that humans have free will (or a strong illusion thereof). > an "error", something to be fixed. Contrary to his statement that > no one can know completely the internal logic of the machine, I You can know the circuit diagrams. You can know the physical equations governing the circuit components. However, you cannot actually know the behavior of the individual particles which comprise the machine, and perturbing a few of those can have significant effects, especially as component size decreases and we shove fewer charges per operation. > Spontaneous generation then in a von Neumann machine is an error, > again something to be fixed. The hardware does not support it This is half true. Behavior outside the specification is indeed an error. I'm not sure that this is the only possible form of spontaneous generation (at least for my understanding of such a term)... see below. > except as an error. There is no means in software translated into > machine instructions to make this possible. Many have proposed building a true random-number generator into processors --- something that would sample noisy physical data and produce genuinely unpredictable (as guaranteed by Dr. Heisenberg) numbers. What if I use those numbers to generate valid opcodes and feed those back into the processor? If I do this an infinite number of times, probability says that I will eventually produce working programs. (Some might say that there are many programs already extant which were produced in such a manner.) I'm guessing your answer is that this is still not really spontaneous, because those numbers are still being made to obey the rules of the architecture. However, the programs written by humans also conform to those rules (barring the existence of bugs (a perhaps ridiculous assumption)). Do my programs also not count as spontaneous acts? If they don't, what exactly does? > Meaning, if it exists at all, does so only in the observer. This is an acceptable claim, but how does it exist in this observer? Our limited understanding of the mind suggests that it is somehow encoded in the structure of the brain and the currents flowing therein. If one constructs an analogue of that within the computer, is it not then capable of deriving meaning from data? Alik From lmaxson@pacbell.net Sat, 30 Sep 2000 23:42:10 -0700 (PDT) Date: Sat, 30 Sep 2000 23:42:10 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Alik Widge wrote: "I'd argue that this requires definition of the term volition, and also an understanding of where exactly one obtains volition. ..." 
A reasonable argument. Let me provide you with a definition within the context of this discussion. Are the results consistent with the embedded logic of the software? If they are, then no volition, no "independent" action on the part of the software occurs. On the other hand if the results are inconsistent with the embedded logic, then the software on its own, i.e. independently, created instructions (or data). If it did so behave independently, then it acted on its own volition. "To the best of my knowledge, this is not a solvable problem with existing knowledge of the human mind." Fortunately the question does not involve volition within the human mind. Thus how it occurs there is of no interest to us here. If no one can provide an instance whereby software behaves inconsistently with its embedded logic, then it makes no difference whether we understand volition or not. At least in the human system it occurs without our knowing why. It does not occur in a computing system of hardware and software. "However, I wonder at your statement that an organism which has inherited behaviors from an external source cannot claim those behaviors as its own. I have behaviors inherited from my parents, from my society, and from my evolutionary ancestors going back to single-celled life." Well, you are going to have some difficulty developing software in the same manner as all other living organisms, which for the sake of argument let's agree evolved from single-cell life. In fact even considering a computing system as an organism puts you in a rather deep hole. Now you have to take something not derived from single-cell life, expanding the definition of organism such that it becomes the universal class, as now there is nothing which we cannot consider in some manner belonging to it, e.g. my radial saw. All we have to have is a system and bingo we have an organism. Software by itself cannot execute. It must reside in a machine and together they constitute a computing system. The software in or out of the machine is not alive nor is the host machine. Thus we do not have what biology defines as an organism. Secondly you may have inherited physical traits, but you certainly did not inherit behavior. Behavior in society is not inherited, neither the society's behavior nor that of the individuals which compose it. The behavior of software is not inherited for software does not engage in procreative activities as organisms do. Technically we construct the software's behavior. We do so entirely within the realm of formal logic. We do the same with the machine. We define both as 100% formal logic systems. Jecel disagrees with me on this, but the machine is 100% based on the use of logic circuits (which obey formal logic) and the software can do no more than supply a sequence of 100% logic-based instructions. Organisms are not bound by logic. They cannot be constructed with "and", "or", and "not" circuits. The computer is not a brain and the brain is not a computer. "There is a credible argument that all my actions can be predicted by a sufficiently complex simulation containing all these terms." On the contrary it is an incredible argument. You should stop listening to such drivel. As humans we can posit the impossible, the sufficiently complex simulation, in this instance. I'm not going to invoke Penrose here, but any time you believe that you can simulate a living organism to the quantum detail, you best rethink it.
"A program could achieve this by stringing together two function calls in a way no programmer had instructed it to do." That's the crux of the argument here. A program "could" if it could free itself of its own instructions. Then you see you would have to come up with where it "acquired" the instructions, i.e. told itself, to do this and then where it acquired the instructions to do this. Then you have to at least point out the means it used to generate both these sets of instructions without acquiring control of the processor. There is no means from within software to address a non-existent set of instructions, to pass control to something which does not exist. In all computing systems of which I am aware this generates a "hard" error (or at least an address exception). "But being limited to an instruction set does not preclude generation of new strings of instructions." Not at all. Again the issue is whether or not any such generated string is consistent with the embedded logic. If you say it may not be, then you have to explain how it can occur. It must occur through invoking logic not present with the software. By definition as every meta-program has embedded logic. Therefore it cannot occur through meta-programming. "You can know the circuit diagrams. You can know the physical equations governing the circuit components. However, you cannot actually know the behavior of the individual particles which comprise the machine, and perturbing a few of those can have significant effects, especially as component size decreases and we shove fewer charges per operation." Considering the logic of this I might have saved myself some effort by letting you destroy your own "credible argument" about a "sufficiently complex simulation". Nevertheless regardless of how small the circuits become their logical function remains the same. "Many have proposed building a true random-number generator into processors --- something that would sample noisy physical data and produce genuinely unpredictable (as guaranteed by Dr. Heisenberg) numbers. What if I use those numbers to generate valid opcodes and feed those back into the processor? If I do this an infinite number of times, probability says that I will eventually produce working programs." It doesn't bother me to have someone talk about doing the impossible, e.g. performing an operation an infinite number of times. I have a somewhat clear picture of the difference between science fiction and science fact. The fascination with random numbers or randomness in general as a source for spontaneity in a computing system I find amusing. Decision logic in software (if...then...else, case...when...otherwise) determines what occurs with any random number regardless of its source. There is no randomness in software logic, all possibilities are covered...or else you have a software error. We keep acting as if software were only a set of instructions when in reality it has two inter-related spaces, an instruction space and a data space. Moreover the data space has two subspaces, a read-only subspace and a read/write subspace. Instructions operate on data or on machine state indicators e.g. branch on overflow. As one who began his career writing in machine language (actual) as no other option existed for the system let me assure you that beginning with that time and continuing up until now (and into the future) great care in maintaining harmony among data and instructions and among instructions and instructions takes place. Otherwise the "system" fails. 
Now Tunes is involved with avoiding such failures, to have reliability not present in current software. Supposedly this occurs through elaboration and sophistication of a sufficiently high HLL in combination with meta^n-programming and the use of reflective programming. None of these, however, "allow" inconsistent software behavior as they ensure consistency with their embedded logic. They have no means within themselves to escape their own consistency nor to transfer it somehow to virtual software which has no means of self-generation. Software cannot escape its own consistency. It cannot avoid its own errors. Randomness does no more than transfer control (decision control structure) within consistent boundaries. It is simply another way of making a decision on which path to take next. "> Meaning, if it exists at all, does so only in the observer. This is an acceptable claim, but how does it exist in this observer? Our limited understanding of the mind suggests that it is somehow encoded in the structure of the brain and the currents flowing therein. If one constructs an analogue of that within the computer, is it not then capable of deriving meaning from data?" While you say "analogue" here instead of "sufficiently complex simulation" the same piece of science fiction comes to the fore. You cannot create a brain or any part of a human system with a computer. One is an organism, fashioned in the same manner as any other, while a computer is not. von Neumann architecture is not. Turing rules of computation are not. Machines of any stripe are not. I do not know how an observer acquires meaning from data. I do know that you can train observers to do so. However I do not know how that training does what it does. Basically from what you have said I assume that we agree that we do not know. At that we are one up on a non-intelligent computing system whose current architecture hasn't a chance in hell of becoming anything else. At least we know we don't know. From aswst16+@pitt.edu Sun, 01 Oct 2000 12:26:53 -0400 Date: Sun, 01 Oct 2000 12:26:53 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Saturday, September 30, 2000 11:42 PM -0700 "Lynn H. Maxson" wrote: > A reasonable argument. Let me provide you with a definition > within the context of this discussion. Are the results consistent > with the embedded logic of the software? If they are, then no > volition, no "independent" action on the part of the software All right. That's a definition. Now I ask for a justification. Why can volition not arise within the constraints of a rule set? > rather deep hole. Now you have to take something not derived from > single-cell life, expanding the definition of organism such that > it becomes the universal class as now there is nothing which we > cannot consider in some manner belonging to it, e.g. my radial > saw. All we have to have is a system and bingo we have an > organism. I personally would put some requirements on that, such that a system which wished to be an organism must at least be capable of sustaining itself indefinitely, but otherwise, I do not see this as a problem. > Software by itself cannot execute. It must reside in a machine > and together they constitute a computing system. The software in > or out of the machine is not alive nor is the host machine. Thus > we do not have what biology defines as an organism. Careful. A parasite cannot survive on its own --- it must live in a host. In fact, all known species exist only as part of ecosystems.
Being dependent on other parts of a system does not preclude being an organism. > Secondly you may have inherited physical traits, but you certainly > did not inherit behavior. Behavior in society is not inherited, > neither the society's behavior nor the individuals which compose > it. The behavior of software is not inherited for software does > not engage in procreative activities as organisms do. 1) Behavior has been shown to be partially inherited, especially in the case of mental disorders. It's not a very strong effect, but it's statistically significant. (I will admit that I was being loose with the word "inherit" and including those things I picked up from my parents by simple imitation.) 2) Why can software not procreate? What about viruses? I could program a virus (well, if I knew anything about virus-writing) which went around and extracted bits of code from programs on its host and then tried to breed with other copies of itself. It would take a long time to be an effective virus, and someone would kill it first, but it could be done. > instructions. Organisms are not bound by logic. They cannot be > constructed with "and", "or", and "not" circuits. The computer is > not a brain and the brain is not a computer. Again, be careful. You can't prove either of those. I have yet to find a task which a human can do and a Turing machine cannot. The brain and computer are superficially different, but that doesn't mean that they aren't just two implementations of a central theme. > On the contrary it is an incredible argument. You should stop > listening to such drivel. As humans we can posit the impossible, > the sufficiently complex simulation, in this instance. I'm not > going to invoke Penrose here, but any time you believe that you > can simulate a living organism to the quantum detail, you best > rethink it. You yourself say that it doesn't matter whether or not we can actually do the prediction, as long as it's possible on paper. Would it take more space and time than is available in the universe? Sure. > acquiring control of the processor. There is no means from within > software to address a non-existent set of instructions, to pass > control to something which does not exist. In all computing > systems of which I am aware this generates a "hard" error (or at > least an address exception). But they're not non-existent. Have the program create them, then pass control to them. I see where you're coming from --- you're saying that this is still the programmer telling the program to make them. But did the programmer have no volition if someone else told him to write that program? Seems like we're chasing a chain back to the Big Bang. > Considering the logic of this I might have saved myself some > effort by letting you destroy your own "credible argument" about a > "sufficiently complex simulation". Nevertheless regardless of > how small the circuits become their logical function remains the > same. But their conformance to that logical function does not. At some point, their statistical nonconformance becomes perceptible. I'm trying to catch you in a contradiction here. If obeying some rules, any rules, which are mathematically expressible precludes volition, I argue that you must declare humans non-volitional. Since you don't seem willing to do that, I sense a contradiction. > It doesn't bother me to have someone talk about doing the > impossible, e.g. performing an operation an infinite number of > times. 
I have a somewhat clear picture of the difference between > science fiction and science fact. All right... I'll set myself up a warehouse of old x86es and let them compute for as long as they can continue to run. If they go a hundred years, do you think that no valid programs will be generated? Give me some numbers for the size (in ops) of "Hello, world" and the average ops-per-second for the entire warehouse, and we'll do the calculation. > The fascination with random numbers or randomness in general as a > source for spontaneity in a computing system I find amusing. Hm. We keep coming back to this idea that following rules means you're not spontaneous. I suppose that as long as you're using that as an assumption, your argument is consistent. > We keep acting as if software were only a set of instructions when > in reality it has two inter-related spaces, an instruction space > and a data space. Moreover the data space has two subspaces, a > read-only subspace and a read/write subspace. Instructions > operate on data or on machine state indicators e.g. branch on > overflow. This isn't inherent to the system, though. A processor may be able to detect overflow, and it may raise a signal, but it makes no requirement that you do anything about it. It doesn't have inherent code/data separation (or at least, it need not). It just fetches things from the memory and puts them into instruction or data registers as needed. Moreover, you can use HLLs to cheat. Consider the LISPs. Their code space is simply the interpreter. In the data space, one can put any executable program. These programs can be used to generate other programs, and in fact this is one of the standard stupid LISP tricks. If the instruction space contains "Look in data for things that look like programs and try to run them, letting them work on other things in the data space", you effectively have a single space. > Software cannot escape its own consistency. It cannot avoid its > own errors. Randomness does no more than transfer control > (decision control structure) within consistent boundaries. It is > simply another way of making a decision on which path to take > next. All quite true, and I do not argue it. I merely challenge your further statement that this means software can never have volition. > You cannot create a brain or any part of a human system with a > computer. One is an organism, fashioned in the same manner as any > other, while a computer is not. von Neumann architecture is not. > Turing rules of computation are not. Machines of any stripe are > not. This is *definitely* an assumption. All known neural pathways can be modeled in software. It's a statistical process, but so is the brain, from what we know. Now, if you want to say that that's still never going to be a real organism, that's fine, but you're heading for the realm of theology. Is it hard to put an entire brain into software? Of course. We're going to need something that can automate the process, because the connections are too numerous to be coded by hand. On the other hand, the process of brain-construction is by definition automatable, since the brain self-assembles from embryonic tissue. From lmaxson@pacbell.net Sun, 01 Oct 2000 19:17:48 -0700 (PDT) Date: Sun, 01 Oct 2000 19:17:48 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Alik Widge wrote: "[re volition] All right. That's a definition. Now I ask for a justification. Why can volition not arise within the constraints of a rule set?"
It depends upon who is executing the rule set. If it is you or I deciding that we don't want to operate within those constraints, then we can choose otherwise. I don't know what the process is within "living" organisms that allows this. We have it, computers and software don't. As Fare has admitted all software executes in a manner consistent with its embedded logic. No one knows how to program volition because no one knows the process from which it arises. If we determine that it is a non-transferable property of cell-based organisms, we can never incorporate it in software regardless of how well we mimic it. "I personally would put some requirements on that, such that a system which wished to be an organism must at least be capable of sustaining itself indefinitely, but otherwise, I do not see this as a problem." The thing that stumps me most in communicating with Fare lies in his metaphors, of confusing similar with identical, as if the properties of one became those of another identically. They do not of course or we would not invoke metaphors. Here you posit an impossible situation, a system deciding that it can become an organism. Organisms don't have that choice. Neither do non-organisms. The truth is that we have no means of creating an organism without starting with one: procreation. For the record cloning does not change that. As one who gardens extensively and raises fruit trees propagation always begins with an organism. It is a problem with a computer and software in that neither start as an organism. No matter how you mix, mash, meld, and merge them if you don't start with an organism, you don't end up with one. One passing note. Artificial means not real. AI means now and forever more not real intelligence, but something else altogether. No matter what we do to it or with it, it will never cross the line:it will remain artificial. "Careful. A parasite cannot survive on its own --- it must live in a host. In fact, all known species exist only as part of ecosystems. Being dependent on other parts of a system does not preclude being an organism." I thought I exercised extreme care. Software is not an organism. Computers are not an organism. We have only one means of producing organisms, that according to a process we do not understand. However, it does not change the fact that two non-organisms cannot join to form an organism. Two "wrongs" cannot make a "right". " Why can software not procreate? What about [software] viruses?" Software is not an organism. That's the long and the short of it. Software viruses work because they receive control from the processor and execute a "behavior" consistent with their embedded rules. Nothing changes. "Again, be careful. You can't prove either of those. I have yet to find a task which a human can do and a Turing machine cannot. The brain and computer are superficially different, but that doesn't mean that they aren't just two implementations of a central theme." I don't want to touch this one. I am somewhat disappointed that your contact with other humans and organisms hasn't introduced you to processes not duplicable by a Turing machine. I don't know what computer architecture to which you have been exposed, but if it was von Neumann-based, the differences are not superficially different. We should hear more about what you think is the central theme common to both of them. In biology it is survival. In evolution it is survival of the fittest. "But they're not non-existent. Have the program create them, then pass control to them. 
I see where you're coming from --- you're saying that this is still the programmer telling the program to make them. But did the programmer have no volition if someone else told him to write that program? Seems like we're chasing a chain back to the Big Bang." I think you're getting the idea of what must occur in software which must execute in a manner consistent with its embedded logic. It can never be free of the actions of the programmer, regardless of the programmer's ability to predict all possible outcomes or understand them completely. It does no more than what the programmer told it. "I'm trying to catch you in a contradiction here. If obeying some rules, any rules, which are mathematically expressible precludes volition, I argue that you must declare humans non-volitional. Since you don't seem willing to do that, I sense a contradiction." Seems fair (or even Fare). Humans do not "obey" mathematically expressible rules. Mathematics is one of a multitude of human systems along with religion, politics, education, social, psychological, and all the rest. Whether we stay "within" the rules or stray outside them is a choice we can make at any time. That choice is not available to software nor have we any means of programming it into it. "All right... I'll set myself up a warehouse of old x86es and let them compute for as long as they can continue to run." You'll do better with just one. Unfortunately software is a fragile beast, overly sensitive to "illogic", and prone to failure. It is one thing to put monkeys in front of typewriters where whatever order of letters is acceptable. That's simply not true of software. "We keep coming back to this idea that following rules means you're not spontaneous. I suppose that as long as you're using that as an assumption, your argument is consistent." If I have no choice but to follow them, then spontaneity is out. That is the situation with software, which must follow the rules set by some source other than itself. On the other hand I (or you) can be following rules and "suddenly" see a different path to pursue than the current one. The difference (perhaps) is that as an organism we are aware that we are following rules. Software, being non-intelligent as well as a non-life form, is not aware that it is even following rules. "All quite true, and I do not argue it. I merely challenge your further statement that this means software can never have volition." If it doesn't have choice or even a choice in its choices, it cannot have volition. It is not aware that it does not have a choice. That piece of "magic" which exists at least in human organisms does not exist in software nor have we any means of putting it there. At least until we know what it is that we have to put. I suspect that we will find it intrinsic to the organism and therefore non-transferable. "All known neural pathways can be modeled in software. It's a statistical process, but so is the brain, from what we know." "On the other hand, the process of brain-construction is by definition automatable, since the brain self-assembles from embryonic tissue." You can simulate how a brain works down to the quantum detail and you still will not end up with a brain. If you want to say that there is no difference between this "artificial" brain and a real one then develop it in its entirety through procreation. Now meld it with the remainder of the human system. Without this remainder, without a system in place, the brain has no function and in fact can do nothing.
The brain, the nervous system, the blood system, the organs, the skeleton, the muscles, the skin--all exist as a single entity, all interconnected. The fascination with the brain and with emulating it in software deliberately "abstracts" it from the system of which it is a part. There's no way that such an abstraction, i.e. selectively leaving something out, and implementing it (if it were possible) in software will result in an artificial brain that "behaves" like a real one. If you don't go for the abstraction, then you must go for the whole ball of wax, the human system. The brain is not constructed according to the rules of logic. Nothing so constructed can ever be a brain. That is true for the most sophisticated and detailed implementation of a simulation. There's no crossover point, no point at which the artificial acquires a "property" (or "properties") and becomes the real thing. All computers are based 100% in pure logic. All software which executes successfully cannot violate the rules of the host computer. It's 100% logic based. No organism from the single-cell amoeba on up is so constructed. Logic is our creation, not vice versa. I should remind you of the difference between automatic and automated. The self-assembly you refer to, which does not affect the brain alone but the entire organism, is automatic. If it were automated, then its source would have been another process different from the current one. From fare@tunes.org Mon, 2 Oct 2000 15:59:03 +0200 Date: Mon, 2 Oct 2000 15:59:03 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Emergence of behavior through software On Sat, Sep 30, 2000 at 09:44:20AM -0700, Lynn H. Maxson wrote: > If they (the results) did, then regardless of anyone's ability to > fathom or predict them, the software executed as instructed and > thus brought nothing "extra" or "special" to the process. In > short it did nothing on its own "volition". DEAD WRONG. Volition does NOT consist in choosing the rules. No single human chose his genetic code, his grown brain structure, his education. Yet all humans are considered to have a will. > Fare despite his cybernetic leanings will not grant the software > any choice other than pursuit of the original purpose. DEAD WRONG. Structure is NOT purpose. My initial structure has no purpose so to speak. Purpose is NOT a structural property. I cannot negate what I am. I can choose my purpose. When I choose to do something, I am what I am; I "obey" my nature. Moreover, you completely blank out the fact that I insisted that the initial program was but a tiny portion of what makes my identity. My identity is made of my dynamic persistent state, not of my static structure (even though the latter constrains the former). > You see it cannot occur through meta^n-programming regardless of > levels. That presupposes that we have some ability to encode a > "triggering" event in the execution which will spawn the necessary > spontaneous generation. DEAD WRONG. You blank out 150 years of evolution theory. Change in behavior is no magic event. It is born in continuous transformation. Meta^n-programming is not about directed design, but about selection. You have a Lamarckian (or even creationist) model of programming in mind; I have a Darwinist (or even Dawkinsian) model of programming in mind. > that whatever results is consistent with the encoded logic. So what? This is a very weak statement. Knowing that my socks are blue, whatever outcome in the world will be consistent with my socks being blue.
But this is a completely irrelevant fact for most outcomes. If your "encoded logic" is universal, just any behavior is consistent with it. So conformance to the logic is a null statement. > Truth is that Fare knows this as well. Don't you take my statements as endorsement of your positions. I specifically reject your very problem situation. I strongly dislike your way of turning around arguments, completely avoiding answering points others claim as relevant, not even acknowledging them, and claiming as victories their concessions on points they claim are irrelevant. This is no rational discussion, only self-delusion thereof. > what spontaneous generation occurs that is not an error? There is no error. By definition, anything generated is correct. In the mass of potential and actual generated data, patterns survive or die according to higher-level selection rules. This is where any purpose comes into play. > If it does occur, how does it still stay > within the "purpose" encoded in the software? You have a flawed, theistic notion of purpose. Purge it. What is encoded is structure, and purpose is not structural. If there be any "purpose" encoded in the program, it is encoded in the meta-rules for differential survival; if these meta-rules have any purpose, it lies in meta-meta-rules. And so on. In the end, you have a purposeless, fully structured meta^n system. In quantum mechanical terms, I'd say that structure and purpose are dual: when you have some of one, you can only have so much of the other. If you fully encode your program, it has no purpose (YOU may have; IT won't). When you give purpose to your program, you loosen its structure, and accept the fact that persistent state may completely alter its behavior. Yours freely, [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] You may have original good ideas, but it is no excuse for not learning good ideas that are already common knowledge. From lmaxson@pacbell.net Mon, 02 Oct 2000 10:34:38 -0700 (PDT) Date: Mon, 02 Oct 2000 10:34:38 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Fare, you have been so kind to answer my question(s). It is only fair that I in turn respond to yours. First allow me to state two logically equivalent forms of what we agree on: (1) No software executes in a manner inconsistent with its embedded logic; (2) All software executes in a manner consistent with its embedded logic. The only subnote to this lies in your "Artificial emergent systems are _already_ useful and can be improved upon for more utility, independently from their eventually reaching "AI" or not." Does even reaching this change the agreement? "Volition does NOT consist in choosing the rules. No single human chose his genetic code, his grown brain structure, his education. Yet all humans are considered to have a will." Volition deals ultimately with choice. In the dictionary this is qualified as a "conscious" choice. Consciousness lies in self-awareness. Rather than introduce extraneous elements like genetic code, brain structure, or will here, let's just stick with whether or not software has the property of self-awareness (Does it know what it is doing?), consciousness (Does it know it is doing anything?), and volition (Does it have a choice in what it is doing?). The answer to all three questions is "no".
It doesn't make any difference if the embedded logic uses any amount of random selection, any amount of meta^n-programming, or the highest in HLLs, elaboration, complexity, or sophistication. It cannot escape consistency with the embedded logic. That embedded logic is solely based on the formal logic embedded in the host machine. No tweaking of logic, through simulation, emulation, or adulation, will ever transfer an intrinsic property of an organism (which does occur within its structuring) into software. Whatever AI may reach, it will remain "artificial", not identical to the real thing. "My initial structure has no purpose so to speak. Purpose is NOT a structural property. I cannot negate what I am. I can choose my purpose. When I choose to do something, I am what I am; I "obey" my nature." Well, one of us is "dead wrong". You certainly can make a choice. However your ability to do so depends upon your structure, your human system and whatever in it is responsible for "life". When that leaves you, when you die, you can no longer make a choice nor exhibit purpose. If you obey your nature, it must be contained in what you are physically. That, my scientific friend, is structure. If you believe that mental activity is not totally physically based, then it is you who introduce theistic notions. But let's get back to your question. I quote: "For giving instructions does not mean understanding. I may well genetically engineer a human clone with some genetic instructions, and be unable to fully understand and predict the detailed behavior of the clone I created. In other words, the piece of running software you write is not free from its own code that you originally provided; but it IS free from the programmer, from you and me or other programs. You give the initial seed, but it grows to have a life of its own;" "The question is about drawing a separation between doing and understanding." So let's answer the question about drawing a separation between doing and understanding. The first thing we need to do is put "understanding" in its place in the scheme of things. First comes "knowledge", then "understanding", and then "wisdom". Knowledge comes from "knowing" you are doing something. Understanding comes from knowing what you are doing and if possible why. Wisdom comes from using the understanding of what you know to possibly change what you do. What you should notice here is that humans engage in all three levels and software in exactly none. Software doesn't know that it is doing anything for the simple reason that it is non-intelligent. It certainly doesn't understand what it doesn't know for the same reason. It cannot have wisdom for the same reason. Non-intelligent means not having the property of intelligence, which so far as we know exists only in living organisms. The only "seed" for an organism is an organism. Man thus far has had no success in creating an organism in any other manner. Clearly software is not an organism and thus speaking of it "having a life of its own" is metaphorical, neither scientific nor factual. It has no life. Moreover we have no means currently of giving it life. However, for the sake of argument let's assume that it does. The issue comes down to prediction and understanding of observable reality. We have a history of increasing our knowledge and understanding of such events leading in turn to an increasing ability to predict them. Following our assumption and the basis of Fare's metaphor, this means our gains have occurred at the loss of life within those events.
That, my friends, is logic. Fare takes this, our inability at times to predict and understand the results of software execution, as a means of giving software something (life, independence, freedom from the programmer) that it must lose in the event that we gain the ability to do either. Note that this "loss" occurs without a change in the software logic or in its execution. Therefore it must be a property independent of them, perhaps even a "soul". This property arises from a more serious claim by Fare that we have software whose results we do not understand or we cannot predict. To me both are patently false. As one who patronizes cybernetics, Fare should know better. For cybernetics as described by Ashby in his "Introduction to Cybernetics" relies on the IPO model. To say that we cannot understand results (output) means that the P (process) which we must know (in order to have written it logically) exceeds our intellectual capacity. To say that we cannot predict results (given that we can understand the process) means that we know the input and the process but lack the intellectual capacity to apply the one to the other. Now if we do not know the input and understand the process, certainly we cannot predict. All this means is that we must "know" the same input used within the execution instance of the software. Fare pooh-poohs this by saying it is "postdict" not "predict". No. It is acquiring the necessary input in order to apply the process to it, whereby we can now achieve the same results as the software. If the execution instance provides us with the input and we can now predict the outcome, then the software has lost any life of its own. The questions surrounding prediction and understanding get even more nefarious. Fare seems to forget that we write software and construct host machines to form a tool system. We do so as a means of extending "our" own ability. Moreover we do so for "reasons of our own". These reasons are the "causal" processes that lead to "effects" or the means of satisfying them. Among these "reasons of our own" are curiosity, amusement, and a desire to increase our knowledge, our understanding, and our ability to predict. We use tools to assist us in this. Now we are bound by time, the amount that we can achieve in a given interval. To increase our productivity we use tools which allow us to achieve more of what we want in the interval. That we cannot predict or cannot understand does not accrue to an intellectual failure or weakness. Instead we do not want to be bothered when a more efficient "time" mechanism is available. We "choose" not to have to predict or understand a priori. Why? It saves us a hell of a lot of time. We may not take the time to either know or understand what occurred and why within such a process. That is our choice. Again it is not an intellectual deficiency. As long as we can verify that the tool works correctly, then how it did what it did becomes unimportant. It allowed us to achieve "our" purpose. In so doing the tool did not acquire a "life of its own" because we chose to neither know nor understand. You see all this derives from software executing consistently with its embedded logic. In answering in turn Fare's question about doing and understanding we have clearly made a distinction between non-intelligent software and an intelligent organism like a human doing something. One knows that it is doing something (knowledge), can determine what and why it is doing it (understanding), and can modify its future doings (wisdom).
East is East and West is West and ne'er the twain shall meet. "It [change in behavior] is born in continuous transformation. Meta^n-programming is not about directed design, but about selection. You have a Lamarckian (or even creationist) model of programming in mind; I have a Darwinist (or even Dawkinsian) model of programming in mind." An interesting side note is that changing behavior in software, i.e. software maintenance, is increasingly expensive in time and cost. Tunes is doing nothing to address this, nor does it address it in its HLL requirements. Obviously the answer lies in an "intrinsic continuous transformation" process. The question becomes how we best implement this. The answer is process improvement, the process of developing and maintaining software. Tunes, to the best of my knowledge, nowhere addresses this. Instead I am once more faced with a false metaphor: writing software according to Lamarck, Darwin, or Dawkins theory. Actually I write it according to the software development process of specification, analysis, design, construction, and testing. If any "evolution" occurs at all, it occurs through those stages. As I do not confuse biological development (and maintenance) with that of software despite some "perceived" similarities I have no problem with the distinction between "writing" software and "growing" organisms. But the issue is that "meta^n-programming is not about directed design, but about selection". So let's talk about that. Selection of what? Answer. Pre-determined choices. No random anything will change that. Decision logic in software regardless of where it appears only allows certain paths to follow. Furthermore "Change in behavior is no magic event", on which we agree, and "It is born in continuous transformation". The first is certainly true in software. The second in software can only occur through pre-determined choices whether randomly selected or not. That is the "condition" of a pure logic system. Fortunately organisms, among them human beings, are not pure logic systems. Therefore the continuous transformations that can occur in software happen through the continuous transformations in humans which are not so restricted. The one can determine the choices (and thus the transformations) of the other and not vice versa. Not even Dawkins can change that. "If your "encoded logic" is universal, just any behavior is consistent with it. So conformance to the logic is a null statement." Yes, but you see the encoded logic, particularly that of software, is not universal. So conformance to (consistency with) the logic is not a null statement. Nice try. "I strongly dislike your way of turning around arguments, completely avoiding answering points others claim as relevant, not even acknowledging them, and claiming as victories their concessions on points they claim are irrelevant. This is no rational discussion, only self-delusion thereof." I imagine that you do, considering the arguments you make. The matter of relevance or not lies in the eye of the observer. To me the issue of software execution "always" consistent with its encoded logic is relevant. That consistency keeps it from doing other than what it was "told" (by an external agent). By your own admission that does not occur in an organism (your cloning example). What do we need more than this to know only metaphorically, not factually, that software has a life of its own?
From lmaxson@pacbell.net Mon, 02 Oct 2000 10:34:38 -0700 (PDT) Date: Mon, 02 Oct 2000 10:34:38 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Fare, you have been so kind to answer my question(s). It is only fair that I in turn respond to yours. First allow me to state two logically equivalent forms of what we agree on: (1) No software executes in a manner inconsistent with its embedded logic, (2) All software executes in a manner consistent with its embedded logic. The only subnote to this lies in your "Artificial emergent systems are _already_ useful and can be improved upon for more utility, independently from their eventually reaching "AI" or not." Does even reaching this change the agreement? "Volition does NOT consist in choosing the rules. No single human chose his genetic code, his grown brain structure, his education. Yet all humans are considered having a will." Volition deals ultimately with choice. In the dictionary this is qualified as a "conscious" choice. Consciousness lies in self-awareness. Rather than introduce extraneous elements like genetic code, brain structure, or will here, let's just stick with whether or not software has the property of self-awareness (Does it know what it is doing?), consciousness (Does it know it is doing anything?), and volition (Does it have a choice in what it is doing?). The answer to all three questions is "no". It doesn't make any difference if the embedded logic uses any amount of random selection, any amount of meta^n-programming, or the highest degree of HLL elaboration, complexity, or sophistication. It cannot escape consistency with the embedded logic. That embedded logic is solely based on the formal logic embedded in the host machine. No tweaking of logic, through simulation, emulation, or adulation, will ever transfer an intrinsic property of an organism (which does occur within its structuring) into software. Whatever AI may reach, it will remain "artificial", not identical to the real thing. "My initial structure has no purpose so to speak. Purpose is NOT a structural property. I cannot negate what I am. I can choose my purpose. When I choose to do something, I am what I am; I "obey" my nature." Well, one of us is "dead wrong". You certainly can make a choice. However your ability to do so depends upon your structure, your human system and whatever in it is responsible for "life". When that leaves you, when you die, you can no longer make a choice nor exhibit purpose. If you obey your nature, it must be contained in what you are physically.
That, my scientific friend, is structure. If you believe that mental activity is not totally physically based, then it is you who introduce theistic notions. But let's get back to your question. I quote: "For giving instructions do not mean understanding. I may well genetically engineer a human clone with some genetic instructions, and be unable to fully understand and predict the detailed behavior of the clone I created. In other words, the piece of running software you write is not free from its own code that you originally provided; but it IS free from the programmer, from you and me or other programs. You give the initial seed, but it grows to have a life of its own;" "The question is about drawing a separation between doing and understanding." So let's answer the question about drawing a separation between doing and understanding. The first thing we need to do is put "understanding" in its place in the scheme of things. First comes "knowledge", then "understanding", and then "wisdom". Knowledge comes from "knowing" you are doing something. Understanding comes from knowing what you are doing and if possible why. Wisdom comes from using the understanding of what you know to possibly change what you do. What you should notice here is that humans engage in all three levels and software in exactly none. Software doesn't know that it is doing anything for the simple reason that it is non-intelligent. It certainly doesn't understand what it doesn't know for the same reason. It cannot have wisdom for the same reason. Non-intelligent means not having the property of intelligence, which so far as we know exists only in living organisms. The only "seed" for an organism is an organism. Man thus far has had no success in creating an organism in any other manner. Clearly software is not an organism and thus speaking of it "having a life of its own" is metaphorical, not scientific nor factual. It has no life. Moreover we have no means currently of giving it life. However, for the sake of argument let's assume that it does. The issue comes down to prediction and understanding of observable reality. We have a history of increasing our knowledge and understanding of such events leading in turn to an increasing ability to predict them. Following our assumption and the basis of Fare's metaphor, this means our gains have occurred at the loss of life within those events. That, my friends, is logic. Fare takes this, our inability at times to predict and understand the results of software execution, as a means of giving software something (life, independence, freedom from the programmer) that it must lose in the event that we gain the ability to do either. Note that this "loss" occurs without a change in the software logic or in its execution. Therefore it must be a property independent of them, perhaps even a "soul". This property arises from a more serious claim by Fare that we have software whose results we do not understand or we cannot predict. To me both are patently false. As one who patronizes cybernetics, Fare should know better. For cybernetics as described by Ashby in his "Introduction to Cybernetics" relies on the IPO model. To say that we cannot understand results (output) means that the P (process) which we must know (in order to have written it logically) exceeds our intellectual capacity. To say that we cannot predict results (given that we can understand the process) means that we know the input and the process but lack the intellectual capacity to apply the one to the other. Now if we do not know the input and understand the process, certainly we cannot predict. All this means is that we must "know" the same input used within the execution instance of the software. Fare pooh-poohs this by saying it is "postdict" not "predict". No. It is acquiring the necessary input in order to apply the process to it, with which we can now achieve the same results as the software. If the execution instance provides us with the input and we can now predict the outcome, then the software has lost any life of its own.
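The argument above is essentially a claim about deterministic replay: capture the exact input an execution instance consumed, re-apply the same process, and the output is reproduced. A minimal illustrative sketch in Python (the process and values here are invented for the example, not taken from the thread):

    # Deterministic replay: the IPO (input-process-output) argument in miniature.
    def process(inputs):
        # Some fixed encoded logic, opaque to the observer but unchanging.
        total = 0
        for x in inputs:
            total = (total * 31 + x) % 1000003
        return total

    captured_input = [17, 4, 92, 4, 8, 15, 16, 23, 42]  # logged from the "execution instance"
    first_run = process(captured_input)
    replay_run = process(captured_input)
    assert first_run == replay_run  # same input + same process => same output
    print(first_run)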
The questions surrounding prediction and understanding get even more nefarious. Fare seems to forget that we write software and construct host machines to form a tool system. We do so as a means of extending "our" own ability. Moreover we do so for "reasons of our own". These reasons are the "causal" processes that lead to "effects" or the means of satisfying them. Among these "reasons of our own" are curiosity, amusement, and a desire to increase our knowledge, our understanding, and our ability to predict. We use tools to assist us in this. Now we are bound by time, the amount that we can achieve in a given interval. To increase our productivity we use tools which allow us to achieve more of what we want in the interval. That we cannot predict or cannot understand does not accrue to an intellectual failure or weakness. Instead we do not want to be bothered when a more efficient "time" mechanism is available. We "choose" not to have to predict or understand a priori. Why? It saves us a hell of a lot of time. We may not take the time to either know or understand what occurred and why within such a process. That is our choice. Again it is not an intellectual deficiency. As long as we can verify that the tool works correctly, then how it did what it did becomes unimportant. It allowed us to achieve "our" purpose. In so doing the tool did not acquire a "life of its own" because we chose to neither know nor understand. You see all this derives from software executing consistent with its embedded logic. In answering in turn Fare's question about doing and understanding we have clearly made a distinction between non-intelligent software and an intelligent organism like a human doing something. One knows that it is doing something (knowledge), can determine what and why it is doing it (understanding), and can modify its future doings (wisdom). East is East and West is West and ne'er the twain shall meet. "It [change in behavior] is born in continuous transformation. Meta^n-programming is not about directed design, but about selection. You have a Lamarckian (or even creationist) model of programming in mind; I have a Darwinist (or even Dawkinsian) model of programming in mind." An interesting side note is that changing behavior in software, i.e. software maintenance, is increasingly expensive in time and cost. Tunes does nothing to address this, nor does it address it in its HLL requirements. Obviously the answer lies in an "intrinsic continuous transformation" process. The question becomes how we best implement this. The answer is process improvement, the process of developing and maintaining software. Tunes nowhere, to the best of my knowledge, addresses this. Instead I am once more faced with a false metaphor: writing software according to Lamarck, Darwin, or Dawkins theory. Actually I write it according to the software development process of specification, analysis, design, construction, and testing. If any "evolution" occurs at all, it occurs through those stages.
As I do not confuse biological development (and maintenance) with that of software despite some "perceived" similarities, I have no problem with the distinction between "writing" software and "growing" organisms. But the issue in "meta^n-programming is not about directed design, but of selection". So let's talk about that. Selection of what? Answer: pre-determined choices. No random anything will change that. Decision logic in software regardless of where it appears only allows certain paths to follow. Furthermore "Change in behavior is no magic event", on which we agree, and "It is born in continuous transformation". The first is certainly true in software. The second in software can only occur through pre-determined choices whether randomly selected or not. That is the "condition" of a pure logic system. Fortunately organisms, among them human beings, are not pure logic systems. Therefore the continuous transformations that can occur in software happen through the continuous transformations in humans which are not so restricted. The one can determine the choices (and thus the transformations) of the other and not vice versa. Not even Dawkins can change that. "If your "encoded logic" is universal, just any behavior is consistent with it. So conformance to the logic is a null statement." Yes, but you see the encoded logic, particularly that of software, is not universal. So conformance to (consistent with) the logic is not a null statement. Nice try. "I strongly dislike your way of turning around arguments, completely avoiding to answer to points others claim as relevant, not even acknowledging them, and claiming as victories their concessions on points they claim are irrelevant. This is no rational discussion, only self-delusion thereof." I imagine that you do, considering the arguments you make. The matter of relevance or not lies in the eye of the observer. To me the issue of software execution "always" consistent with its encoded logic is relevant. That consistency keeps it from doing other than what it was "told" by an external agent. By your own admission that does not occur in an organism (your cloning example). What more do we need to know that software has a life of its own only metaphorically, not factually? Certainly it answers your question about doing and understanding as well as prediction and understanding, where one, software, "does" without "understanding" (or "knowing") while this "life" form "does", "knows" it is doing, and "understands" what it is doing. I, therefore, have something that the software does not: a life of my own. I suggest that the charge of self-delusion here is one of projection, in the source and not the target. I doubt very seriously that progress in software, in doing the things specified in the Tunes HLL requirements, has any need for any properties outside those available in formal logic. It hasn't required anything else in getting to this point. We certainly haven't exhausted all the logical possibilities. When and if we do, considering parallel progress in other fields including biology, then we can consider non-logical, statistical-based approaches. Meanwhile let's complete what we can with von Neumann and Turing.
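A note on the "pre-determined choices" point in the message above: even when a branch is chosen at random, the set of behaviors available remains exactly the set the author encoded. A small illustrative sketch in Python (the repertoire shown is invented for the example):

    import random

    # The author encodes a fixed repertoire of behaviors.
    def greet(): return "hello"
    def report(): return "42 widgets processed"
    def shut_down(): return "halting"

    REPERTOIRE = [greet, report, shut_down]  # the pre-determined choices

    def step(rng):
        # "Random" selection still cannot leave the encoded repertoire.
        action = rng.choice(REPERTOIRE)
        return action()

    rng = random.Random(2000)
    for _ in range(5):
        print(step(rng))  # unpredictable order, but never anything outside REPERTOIRE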
Maxson" wrote: > It depends upon who is executing the rule set. If it is you or I > deciding that we don't want to operate within those constraints, > then we can choose otherwise. I don't know what the process is > within "living" organisms that allows this. We have it, computers > and software don't. As Fare has admitted all software executes in > a manner consistent with its embedded logic. I agree --- currently, humans have volition, software doesn't. However, given that we don't know what causes volition, why do you believe that it is guaranteed not to be generateable algorithmically? > Here you posit an impossible situation, a system deciding that it > can become an organism. Organisms don't have that choice. I was being, in fact, metaphorical. :-) You can rephrase this as "If a programmer wants me to call his creation an organism, it should be self-sustaining." > an organism. It is a problem with a computer and software in that > neither start as an organism. No matter how you mix, mash, meld, > and merge them if you don't start with an organism, you don't end > up with one. However, we do start with an organism: the human programmer. I argue that whatever magical things are passed through sexual (or asexual) reproduction may also be passed through programming. After all, both are just an exchange of information. Also, at some point, there were no organisms on Earth. An infinitesmal time later, there was at least one. This seems to contradict your statement that organisms may not spontaneously arise, unless you'd care to introduce God. (Even then, where the heck did he come from?) > One passing note. Artificial means not real. AI means now and > forever more not real intelligence, but something else altogether. > No matter what we do to it or with it, it will never cross the > line:it will remain artificial. But the word artificial is used very loosely; it also can mean "manufactured" or "not naturally arising", as in the term "artificial color and flavor". (Or are those tastes and sights somehow not real?) > understand. However, it does not change the fact that two > non-organisms cannot join to form an organism. Two "wrongs" > cannot make a "right". You were arguing, though, that software couldn't be an organism because it's dependent on its hardware. I'm saying that there could be a software organism, with the hardware playing the same role that the planet does for us. > Software is not an organism. That's the long and the short of it. Well, if you're just going to assume that, I'm not exactly able to argue it, am I? :-) > Software viruses work because they receive control from the > processor and execute a "behavior" consistent with their embedded > rules. Nothing changes. But why does that make them not an organism? Why can organisms not be algorithmic? > I don't want to touch this one. I am somewhat disappointed that > your contact with other humans and organisms hasn't introduced you > to processes not duplicable by a Turing machine. I don't know Please give me an example of something humans do that Turing machines don't, then. I've put this to my profs in both comp. theory and AI, and they couldn't provide an answer. (Note that emotions and volition and such definitely don't count --- since we don't know what causes these, we cannot prove that they are not reducible to a TM.) > We should hear more about what you think is the central theme > common to both of them. In biology it is survival. In evolution > it is survival of the fittest. 
And for a computer organism, would it not also be survival? The "central theme" I'm alluding to is that the brain very well may be a TM. A very odd one, quite different from the state machines we think of, but it nonetheless may be one. > It can never be free of the actions of the programmer, regardless > of the programmer's ability to predict all possible outcomes or > understand them completely. It does no more than what the > programmer told it. All right, but we're never free of the laws of physics. So what? > Seems fair (or even Fare). Humans do not "obey" mathematically > expressible rules. Mathematics is one of a multitude of human Again, what else would you call physics? I haven't seen anyone who can choose to break that. Yes, some of those rules are statistical, but they are nonetheless mathematical. (Moreover, if they are statistical, then this simply means they're a nondeterministic state machine, and we already know that NFAs may become DFAs if one is willing to suffer the performance hit.) > You'll do better with just one. Unfortunately software is a > fragile beast, overly sensitive to "illogic", and prone to > failure. It is one thing to put monkeys in front of typewriters > where whatever order of letters is acceptable. That's simply not > true of software. I chose a few hundred because I wanted to make the probability come out right. Do you concede the point, then, that a program may be generated through random opcode-picking? > pursue than the current one. The difference (perhaps) is that as > an organism we are aware that we are following rules. Software, > being non-intelligent as well as a non-life form, is not aware > that it is even following rules. Ah. So where does awareness come from, and why is that also not algorithmic? > If it doesn't have choice or even a choice in its choices, it > cannot have volition. It is not aware that it does not have a But you're using a very narrow definition of choice. I don't agree that "conform or don't conform to the opcodes" is the only choice available. This is, to me, like saying that the only choices currently available to me are "shoot myself or don't". > putting it there. At least until we know what it is that we have > to put. I suspect that we will find it intrinsic to the > organism and therefore non-transferable. Hm. Is your argument, then, not so much that software with will/awareness is impossible, as that it is impossible right now? I certainly cannot argue that; this is why my heart sinks every time I see another "We're going to solve AI!" effort announced. I personally am not convinced that awareness is intrinsic to the human brain, but that again veers into my personal theology. > You can simulate how a brain works down to the quantum detail and > you still will not end up with a brain. If you want to say that Why not? > there is no difference between this "artificial" brain and a real > one then develop it in its entirety through procreation. Why is procreation so key, if the artificial brain functions just like the real one? > Now meld it with the remainder of the human system. Without this > remainder, without a system in place, the brain has no function > and in fact can do nothing. The brain, the nervous system, the Ah, the embodiment hypothesis. I agree that a brain is obviously useless without I/O, but I don't think that has to be a body as we know it. If we understand a brain well enough to make one, we also understand sensory coding enough to let it see through cameras, hear through microphones, and so on. 
We can transduce the directory listing of the hard drive on which it resides directly to its optical inputs and go from there. > There's no way that such an abstraction, i.e. selectively leaving > something out, and implementing it (if it were possible) in > software will result in an artificial brain that "behaves" like a > real one. Why not? If you remove the brain from the body and "fool" it by making sure that all the inputs and outputs are receiving data (of any kind), why is it not behaving properly? > whole ball of wax, the human system. The brain is not constructed > according to the rules of logic. Nothing so constructed can ever > be a brain. That is true for the most sophisticated and detailed Again, why not? And what makes you say that the brain is not logically constructed? There are fairly rigidly defined systems of connection in and between all its subparts. These connections vary slightly between individuals, but we've seen that all humans have the same cognitive processes, and therefore those variations are really just noise. > computer. It's 100% logic based. No organism from the > single-cell amoeba on up is so constructed. Logic is our > creation, not vice versa. But an amoeba cannot choose to violate the rules of its own internal workings anymore than I may grow wings or a program may start executing invalid opcodes. > I should remind you of the difference between automatic and > automated. The self-assembly you refer to, which does not effect > the brain alone but the entire organism, is automatic. If it were > automated, then its source would have been another process > different from the current one. Noted. I don't see how it makes a difference, though, as long as the end product is the same. From aswst16+@pitt.edu Wed, 04 Oct 2000 15:35:18 -0400 Date: Wed, 04 Oct 2000 15:35:18 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Wednesday, October 04, 2000, 9:21 AM -0700 btanksley@hifn.com wrote: > Therefore, I would rather either > > 1) Claim that Tunes cannot be started until volition is achieved, and > divert all work to discovering how to create software volition. > > 2) Base Tunes on something else. Ah. That's another matter entirely. I do not hack on Tunes, and thus I can offer no definite opinions on how it should be built. My argument is solely to establish the idea that sentient software remains theoretically possible although highly difficult and unproven. It is my highly uninformed opinion that if you want to see actual progress on a system anytime in the next twenty years, scrap volition and just implement some decent adaptive heuristics with a good UI. From btanksley@hifn.com Wed, 4 Oct 2000 09:21:46 -0700 Date: Wed, 4 Oct 2000 09:21:46 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Emergence of behavior through software From: Alik Widge [mailto:aswst16+@pitt.edu] >--On Sunday, October 01, 2000 7:17 PM -0700 "Lynn H. Maxson" >I agree --- currently, humans have volition, software doesn't. >However, >given that we don't know what causes volition, why do you >believe that it >is guaranteed not to be generateable algorithmically? I agree that volition *may* be producible in software. I furthermore agree that it's a very worthy topic for research. However, the problem is that, as you say, we know nothing about volition. We don't have the faintest inkling about how to make volition. 
Therefore, I would rather either 1) Claim that Tunes cannot be started until volition is achieved, and divert all work to discovering how to create software volition. 2) Base Tunes on something else. That's it. -Billy From btanksley@hifn.com Wed, 4 Oct 2000 12:51:27 -0700 Date: Wed, 4 Oct 2000 12:51:27 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Emergence of behavior through software From: Alik Widge [mailto:aswst16+@pitt.edu] >btanksley@hifn.com wrote: >> Therefore, I would rather either >> 1) Claim that Tunes cannot be started until volition is achieved, and >> divert all work to discovering how to create software volition. >> 2) Base Tunes on something else. >Ah. That's another matter entirely. I do not hack on Tunes, >and thus I can offer no definite opinions on how it should be built. I share your advantage, although I lack the ability to repress my opinions on the topic. :-) >My argument is solely to establish the idea that sentient software >remains theoretically possible although highly difficult and unproven. I'm not familiar with any work which showed that sentient software is theoretically possible. There is some work which attempted to show impossibility, but I don't buy it. >It is my highly uninformed opinion >that if you want to see actual progress on a system anytime in the next >twenty years, scrap volition and just implement some decent adaptive >heuristics with a good UI. That's one way to do it. IMO, a better way is to scrap all of the talk about AI, and instead implement Intelligence Amplification (IA). People are already smart and already have volition and sentience. Let's just build software which conforms more closely to how they work, so that it can help them with data lookup, precise reasoning, math, and so on. In the process of making the software more usable for humans, we're going to have to make software which displays certain aspects of human behavior -- for example, it's going to have to recognise when the human's expressing a vision, and "buy in" to it. So to some people, some of the software will look intelligent some of the time -- except that it'll never disagree with anything you say (except perhaps the most prosaic points of fact), and of course it'll never initiate anything. -Billy From aswst16+@pitt.edu Wed, 04 Oct 2000 15:58:56 -0400 Date: Wed, 04 Oct 2000 15:58:56 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Wednesday, October 04, 2000, 12:51 PM -0700 btanksley@hifn.com wrote: > I'm not familiar with any work which showed that sentient software is > theoretically possible. There is some work which attempted to show > impossibility, but I don't buy it. But that's the point --- if you cannot prove it impossible, it is theoretically possible. (Just as P may still equal NP. I really ought to get around to proving that...) > IMO, a better way is to scrap all of the talk about AI, and instead > implement Intelligence Amplification (IA). People are already smart and Well, that's basically what I'm trying to say. Don't think *for* the user, because it is almost impossible to know what the user really wants. Let the user express an intention, and *then* do what he wants. Detect patterns in his behavior (such as saying "No" whenever you ask if he needs "help" writing a letter) and comply. > which displays certain aspects of human behavior -- for example, it's > going to have to recognise when the human's expressing a vision, and "buy > in" to it. 
Visions are awfully vague things, and I don't see what you mean by buying in. I'm not sure this is something I want my OS to do, either. I mainly want computers to send things through the network for me and to notice when I'm repeating an action and offer to automate it for me in a relatively flexible manner. Of course, some argue that this alone is AI-complete. From btanksley@hifn.com Wed, 4 Oct 2000 13:12:18 -0700 Date: Wed, 4 Oct 2000 13:12:18 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Emergence of behavior through software From: Alik Widge [mailto:aswst16+@pitt.edu] >btanksley@hifn.com wrote: >> IMO, a better way is to scrap all of the talk about AI, and instead >> implement Intelligence Amplification (IA). People are >> already smart and >> which displays certain aspects of human behavior -- for example, it's >> going to have to recognise when the human's expressing a >> vision, and "buy in" to it. >Visions are awfully vague things, and I don't see what you >mean by buying in. It's a marketing term. A "vision" is an expression of what you want the future to be. "Buying in" is when the person who hears the vision adopts it as their own, and it's marked by certain predictable behaviors. I'm not saying that the software should do voice recognition and parse for future-tense subjunctives; I would expect that the user would have to explicitly phrase commands. The point is that the way humans are built, seeing "buy-in" is a rewarding experience. So humans seeing buy-in will be motivated to learn to work more of the system so they'll be able to do more. -Billy From aswst16+@pitt.edu Wed, 04 Oct 2000 16:43:21 -0400 Date: Wed, 04 Oct 2000 16:43:21 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Wednesday, October 04, 2000, 1:12 PM -0700 btanksley@hifn.com wrote: > not saying that the software should do voice recognition and parse for > future-tense subjunctives; I would expect that the user would have to > explicitly phrase commands. The point is that the way humans are built, > seeing "buy-in" is a rewarding experience. I know what a vision is. I happen to consider most "vision statements" to be vague and bogus, but that is personal opinion. I still don't see how a computer can show a human that it buys in to the human's vision, though. Is it just going to say "That's a great command!" or "It looks like you're writing a great letter here!"? From btanksley@hifn.com Wed, 4 Oct 2000 14:08:49 -0700 Date: Wed, 4 Oct 2000 14:08:49 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Emergence of behavior through software From: Alik Widge [mailto:aswst16+@pitt.edu] >btanksley@hifn.com wrote: >> not saying that the software should do voice recognition and >> parse for >> future-tense subjunctives; I would expect that the user would have to >> explicitly phrase commands. The point is that the way >> humans are built, seeing "buy-in" is a rewarding experience. >I know what a vision is. I happen to consider most "vision >statements" to be vague and bogus, but that is personal opinion. It's hard to come up with a vision at the drop of a hat. Plus, most people writing those statements actually have the vision "make some money and then retire." A nice vision, but hardly one which can inspire buy-in. It also doesn't help that most of those vision statements were probably from companies doing something you could never be interested in. >I still don't see how a >computer can show a human that it buys in to the human's >vision, though. 
Is >it just going to say "That's a great command!" or "It looks like you're >writing a great letter here!"? I would be irritated if the software interrupted me with things like that. I don't know what it'll take; it's likely that for many people, the only solution will be an avatar (a humanoid form). I don't know exactly how buy-in should be expressed; that's not even close to my field. I do know that many people are very good at detecting it, and some people are very good at generating it. As for an example of when it might be appropriate... Um... You know, I'm drawing a blank here. It's times like this that I wish I could remember that URL at which I read about this. One more thing to add. We're techies; we don't need to see buy-in, and in fact that tends to hinder our social life. So the fact that I can't think of examples simply proves that I'm a techie. -Billy From lmaxson@pacbell.net Wed, 04 Oct 2000 20:14:01 -0700 (PDT) Date: Wed, 04 Oct 2000 20:14:01 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Alik Widge raises a number of issues that need addressing to his satisfaction. I will attempt to do so here. I am struck by the fascination of some about the "power" of software and the notion that it differs in some manner from the "power" of any other human-created tool. While I putter about with my wood butchering and others may become true wood craftsmen (and the difference in terms of results is significant), the same holds in writing software (where again the difference in terms of results is significant): the quality of the product, the end result, occurs through the efforts of its author(s). The author(s) have no means of transferring that creative property in them to make it intrinsic to their creation, i.e. no means by which their creation can replicate the processes that occur within its author(s). To begin, if only slightly(?) out of order. "Ah, the embodiment hypothesis. I agree that a brain is obviously useless without I/O, but I don't think that has to be a body as we know it. If we understand a brain well enough to make one, we also understand sensory coding enough to let it see through cameras, hear through microphones, and so on. We can transduce the directory listing of the hard drive on which it resides directly to its optical inputs and go from there." I refer you again to Antonio R. Damasio's book "Descartes' Error: Emotion, Reason, and the Human Brain". I think you need to spend some time "listening" to those who engage actively with the brain, how it works (in so far as we know), and when it doesn't: its disorders. The point here lies in how little we actually know of the brain and, as a result, how much less we understand it. As in any other area of scientific interest we do make progress, however slowly. As a result I cannot guarantee that we cannot make a brain. In fact, as a father of four and now a grandfather of nine, I can describe to you very well the process necessary for constructing a human system in which the brain participates. So where do we begin? At the beginning? With no need to invoke God, what was before the beginning? Even the big bang theory had something before the explosion. You get the feeling that there was (and is) no beginning, that it always was, is, and will remain. This is further reinforced because time, this thing we measure in intervals, is entirely an invention of our own to satisfy our own needs. The universe, reality, what's out there has no need for it.
Now the real question, which we cannot answer and only speculate on, lies in when, if ever, the first organism, the first single cell, appeared. Nothing logical, physical, or otherwise says that it isn't as timeless as the universe itself. We have used very accurate instruments to measure the weight, for example, of an organism, attempting to see if in death some deviation occurs. So far, no. That leads us to believe that it is in the embodiment, the construction, of a living organism passing life onto another, whatever its cause, that separates it from non-organism. So organisms die, return only to their physio-chemical state in death. Yet no organism begins from this state. It takes an organism to beget an organism. Now why, we have yet to discover. In discovering it, part of that discovery may find it impossible to beget one in any other manner, that we cannot assemble it from non-organism components. "However, given that we don't know what causes volition, why do you believe that it is guaranteed not to be generateable algorithmically?" Well, it's a lot like time, another one of our inventions along with others like "mind", "will", "spirit", "purpose" and Lord knows a multitude of others. What we know of the universe is that it is a continuous process, overall in motion from the very smallest quanta to the entire universe. It has no breaks, no separations. It has no separate actors performing a separate action. No subject. No verb. Just a complete universe. In that universe lightning doesn't flash, because the lightning, the flashing, the process just before it, and the process just after it form a continuum. No separation. Our language, my language, your language, our means of creating verbal maps of the universe does not accurately do its job. Moreover it is a map and the map is not the territory. Just because you can express something in the map does not mean that you can do so with the territory. The point is that you draw your maps from the territory. That allows you to gather fact. When you attempt to impose your map on the territory, that's when you engage in fiction. What is the difference between an algorithm and a recipe? In terms of function, none. You have this drive to want to construct a brain as you would an erector set except using computer components. Now your computer uses silicon-based circuitry. The silicon wafers are cut from a silicon crystal "grown" under laboratory conditions to restrict the introduction of "impurities". So far no one has even suggested creating silicon crystals using software and computer hardware. Why not? Here you have a pure non-organism of only one component type. Why is it that you have to grow them? Why can we not just crowd them together? The answer that you seem to sneer at is "embodiment", the means of construction. The means that occurs in nature we basically follow in the laboratory. When we create computer logic we do so with three basic logic components: the "and", "or", and "not" circuits. The logic components have two-state (on/off) inputs (legs), an internal process (which does the anding or oring), and a two-state result. A pure IPO model. For the "not" we use an inverter, converting a result from on to off or off to on. I am fortunate in that I began in the tube era when such logic was visible (and in my first job, reparable). That basic process remains today. Ask Jecel who designs systems. It's a 100% pure logic system.
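The gate-level picture just described is easy to make concrete. An illustrative sketch in Python (not code from the thread), treating each gate as a two-state IPO component and composing them into a half-adder:

    # Each gate is a pure IPO component: two-state inputs, a fixed process, a two-state output.
    def AND(a, b): return a & b
    def OR(a, b): return a | b
    def NOT(a): return 1 - a  # the inverter: on -> off, off -> on

    def XOR(a, b):  # built only from AND, OR, and NOT
        return AND(OR(a, b), NOT(AND(a, b)))

    def half_adder(a, b):
        # Pure composition of gates; the behavior is fixed entirely by the wiring.
        return XOR(a, b), AND(a, b)  # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))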
Now there is none of that in a neuron. No logic. No ands. No ors. No nots. What you have is a connection, an interconnection, unlike any in any computer. I would refer you to Ashby's homeostat described in his "Design for a Brain". No logic necessary. No programming necessary. Just a set of interconnecting physio-chemical-electrical components. It exhibits "adaptive" or "goal-seeking" behavior. Let me carry this a step further. Take a programmed automatic pilot. How you connect it into the plane makes a difference because the connection must correspond to the internal logic. It's a feedback system. On the other hand, constructing a non-programmed but adaptive homeostatic unit means only that you have to connect it. Completely random. Not this to that nor that to this. Now the difference is that a properly connected programmed unit, when you flip the switch, will maintain current altitude and speed. The homeostatic one may give you a wild ride as it adapts to the changes and their distance from its goals. In fact you may very well crash while it is in the process of learning. The truth is that you can fake it out, have it in a simulated run while on the ground, never putting it into an actual plane until it has "learned", until it has become "stable". Now which one, which process, would you as an airline company use? Organisms use homeostasis to maintain a very intricate balance in order to continue to "live". Failure to do so results in "death". Having recently lost a business associate and more recently a sister to liver failure, I have had the importance of this balance brought home to me. When you write of faking out the brain by somehow switching it instantaneously or gradually from its natural system into an artificial one you are engaged in science fiction. You do not appreciate how intricate a system the human organism is. Having experienced a stroke, albeit a minor one, from just an instant's denial of blood flow to the brain, and having for a period not had my legs "obey" my orders, I can tell you this is not a plug-and-play system. So the brain in combination with the nervous system has no logic-based circuits. The eye is not a camera nor the camera an eye. The connection is not a cable. This is a completely non-logic-circuit-based system that you propose replicating with a completely logic-circuit-based one. It is one thing to have a logic in the construction which somehow forms, integrates, and differentiates functions within the human organism. It is another to replicate a logic we do not understand using pure logic circuitry. The brain is not a computer nor the computer a brain. Moreover the human organism is "hard-coded". That means there is no separation of what is doing from what is telling it what to do. Both occur as part of the same process. Now you want to take hardware which is entirely differently constructed and add software to it which is entirely differently constructed to create a whole which is entirely differently constructed in an attempt to replicate an isolated brain which does not exist. "There are fairly rigidly defined systems of connection in and between all its subparts." You see there are connections and what they connect. You can't replicate the connections or what they connect with a von Neumann machine operating under Turing rules. The brain is not a von Neumann machine nor does it follow Turing rules for computability. You would talk then about replicating the function of one with the other.
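Ashby's homeostat, referred to above, is simple enough to caricature in a few lines: a feedback loop whose wiring is re-drawn at random whenever an "essential variable" leaves its safe bounds, and left alone once the system has stumbled into a stable configuration. A rough illustrative sketch (the gains, bounds, and step counts are invented for the example, not taken from Ashby):

    import random

    rng = random.Random(0)   # fixed seed so the run is reproducible
    LIMIT = 10.0             # bounds on the "essential variable"

    gain = rng.uniform(-2.0, 2.0)  # randomly wired feedback gain
    x = 1.0
    rewirings = 0

    for step in range(2000):
        x = gain * x + rng.uniform(-0.5, 0.5)  # feedback plus environmental disturbance
        if abs(x) > LIMIT:                     # essential variable out of bounds:
            gain = rng.uniform(-2.0, 2.0)      # the "uniselector" re-draws the wiring at random
            x = 1.0
            rewirings += 1

    print("re-wirings before settling:", rewirings, "final gain:", round(gain, 2))
    # Once a gain of magnitude below 1 happens to be drawn, the loop tends to stay
    # within bounds: goal-seeking behavior with no explicit program for reaching the goal.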
"But an amoeba cannot choose to violate the rules of its own internal workings anymore than I may grow wings or a program may start executing invalid opcodes." To you rules are logic-based only. We have no reason to believe (or disbelieve) that the internal-working rules for the amoeba are based "strictly" in logic. That systems of logic can arise from organisms not so derived (from non-logic-based) suggests that we have examples of fact for the one direction and only speculation for the other (also conditioned by the same system). "Why is procreation so key, if the artificial brain functions just like the real one?" That's a big "if", you see. To function like the real one means embodying it within an organism that functions like the human organism. That's how the brain functions. It does not function in isolation nor does it operate on simply a subset of its capabilities. "However, we do start with an organism: the human programmer. I argue that whatever magical things are passed through sexual (or asexual) reproduction may also be passed through programming. After all, both are just an exchange of information." "Hello, Miss, I'm a programmer. Would you like to experience some magical transformations." If you succeed in this approach, put aside any thoughts of software and enjoy the magic. The answer here is strictly, no. There is no transference, no organism-based seed, in programming. If there were, programs would develop on their own without need for further assistance. "You were arguing, though, that software couldn't be an organism because it's dependent on its hardware. I'm saying that there could be a software organism, with the hardware playing the same role that the planet does for us." Nope. There cannot be a software organism. "I chose a few hundred because I wanted to make the probability come out right. Do you concede the point, then, that a program may be generated through random opcode-picking?" I'll concede the point as it is theoretically true on the basis of probability theory (another human invention not present in the universe). However, take a look at the probabilities for a simple program like "hello, world". You get one right and umpteen zillion wrong. Whereas if you eliminate the random opcode picking and use logic, it comes more in balance. I'll leave it to your employer which he prefers you use. A Turing machine has no intrisic purpose, will, emotion, feeling, imagination, concept building, sense of the universe, or any of the other things which differentiate it from organisms in general and humans in particular. You are stuck with achieving your goals through logic while an organism has no such limits and is never separate from its environment. You have an imaginary world which does not accurately portray the real one. Once our real world accuracy reaches a certain threshold of knowledge and corresponding understanding chances are that we will stick to procreation for humans and their integrated brains, using non-organism-based means of providing tools for their use. I suggest that Billy has the correct approach in terms of constructing software to support and extend human capabilities, something within our current ability. From fare@tunes.org Thu, 5 Oct 2000 13:08:20 +0200 Date: Thu, 5 Oct 2000 13:08:20 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Emergence of behavior through software On Sun, Oct 01, 2000 at 07:17:48PM -0700, Lynn H. 
From fare@tunes.org Thu, 5 Oct 2000 13:08:20 +0200 Date: Thu, 5 Oct 2000 13:08:20 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Emergence of behavior through software On Sun, Oct 01, 2000 at 07:17:48PM -0700, Lynn H. Maxson wrote: > The thing that stumps me most in communicating with Fare lies in > his metaphors, of confusing similar with identical, I do not use more metaphors than anyone. But you seem to fail to understand what metaphors are all about. Metaphors are about sharing mental structures. It's code factoring inside one's brain. Now, try to grok this piece of wisdom: There is NO SUCH THING AS OBJECTIVE IDENTITY that be accessible to the mind. Everything one sees and understands is but metaphor. Metaphor is the basic encoding technique with which the human brain integrates information from the environment and tries to find patterns in it. Any "identity" in anyone's mind is but an old deeply rooted metaphor. You may question the range and precision of a metaphor, but questioning the existence of a metaphor is ridiculous. > Here you posit an impossible situation, a system deciding that it > can become an organism. You're a system. Did you ever decide to become an organism? The earth is a system. Did it ever decide to become an organism? The initial puddle whence life sprung is a system. Did it ...? Your argument is gratuitous, rooted in some deeply flawed notion of yours about life that you'd better question. There's no use discussing the superficial consequences of whatever notion of life you have when we have such a deep disagreement. If you stick to your notion of things, at least make your root beliefs explicit, so we can agree to disagree. That said, your theories really sound like you believe life comes from some extraphysical divine "soul" that somehow directs empty physical receptacles that are bodies. I bet your theory is isomorphic to this soul thing. > The truth is that we have no means of > creating an organism without starting with one: procreation. I bet that, by induction, you can recurse down to the big bang, at which time there was some fully developed seed for each of modern-time species, as created by god. This is just ridiculous. > It is a problem with a computer and software in that > neither start as an organism. Why couldn't they? I imagine AI and A-life programs precisely as starting as some simple symbolic organism, and evolving therefrom. > One passing note. Artificial means not real. DEAD WRONG. Artificial means man-made. My toothbrush is real, yet it doesn't grow on trees. > Software is not an organism. Maybe not yours. Maybe not most anyone's today. Yet, I've seen people who did grow software. Genetic systems, reflective expert systems, data-mining pattern-extracting systems, etc, do evolve (and I hope to eventually bring them all together). You may blank out this assertion, as you did up to now; but if you do, then there's nothing left to discuss, and I wish this whole thread dies right away. > the fact that two > non-organisms cannot join to form an organism. Maybe not two. What about 10^28? Your body is made of about 10^28 atoms. So, ok, it took interaction of many more atoms so as to create such an organism from scratch. But then, we need not work at the atom level, and we do not start from scratch. As Carl Sagan said, "To make an apple pie from scratch, you must first create the universe." > [Software] does no more than what the programmer told it. DEAD WRONG. You blank out the notions of input and persistent state. Not to talk about chaotic behavior and evolution. If you're only to blank out what other people say, let's stop the whole "discussion" and return to more productive activities. > Humans do not "obey" mathematically expressible rules. DEAD WRONG.
Humans do obey the mathematically expressible statistical rules of physics, of genetics, of demographics, of economics, of psychology, etc. So these are not enough to predict their final behavior, because of the too many unknown parameters? That's precisely the point. Same with a programmer's code for an evolving meta^n-program that runs with lots of persistent state, including meta^(n-1)-program-level state. > Unfortunately software is a fragile beast, DEAD WRONG. Designed software is only as fragile as it is designed to be. Fragile with respect to what? Some people work on very resistant software. Organic software will likely differ a lot from designed software. > overly sensitive to "illogic" Organisms are sensitive to toxic intrusions in their chemistry. So what? > and prone to failure. Organisms may die. Eventually, they do. So what? You provided no intrinsic reason why AI should be impossible. Certainly, you proved that it can't be done with current designed software technology. But there's no need to expand on this point on which everyone agrees. > It is one thing to put monkeys in front of typewriters > where whatever order of letters is acceptable. That's simply not > true of software. The difference between random and fit? Selection. > If I have no choice but to follow them, then spontaneity is out. DEAD WRONG. Rules offer partial information. Internal state provides another body of information. Still the same blanking out. > That piece of "magic" which exists at least in human organisms Yes, you believe in a magical soul. All is said. Now let's stop it all. > Now meld it with the remainder of the human system. Without this > remainder, without a system in place, the brain has no function > and in fact can do nothing. The brain, the nervous system, the > blood system, the organs, the skeleton, the muscles, the skin--all > exist as a single entity, all interconnected. YES. > The fascination with the brain and with emulating it in software > deliberately "abstracts" it from the system of which it is a part. No, it doesn't abstract "from"; it just abstracts. The role of the brain is to integrate information so as to drive interaction towards selected behaviour. Well, an abstract brain will have abstract interaction to drive; it will input and output text, sound, video, sensors from an arm, etc. Certainly, a human-understandable AI will have to have interaction devices similar enough to those of humans, at some abstraction level. We're not there yet. There will be a lot of research in dumb A-life before we can seriously tackle complex brains. We'll have to tame some lower forms of information integration before we can tame the higher ones. Karl Popper distinguishes roughly 4 levels of languages (expressive, communicative, descriptive, and argumentative); before we reach the latter, we may have to master the former. How is that an absolute barrier to AI? > All computers are based 100% in pure logic. All organisms are based 100% in pure chemistry. > All software which executes successfully > cannot violate the rules of the host computer. No organism can violate the rules of chemistry. So what? Chemistry is the underlying low-level paradigm. It is irrelevant to the general structure of higher-level phenomena. This is seemingly an essential point of disagreement between us: you're obsessed with the low-level aspect of things, and do not accept that high-level structure may be independent from underlying implementational details.
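One way to make this implementation-independence concrete (an editorial sketch, not code from the thread): the same abstract gate-level circuit can be run over interchangeable "backends", so nothing in its structure depends on what the gates are physically made of. The backend names below are invented for the illustration:

    # An abstract circuit is defined only in terms of a NAND primitive...
    def make_xor(nand):
        def NOT(a): return nand(a, a)
        def AND(a, b): return NOT(nand(a, b))
        def OR(a, b): return nand(NOT(a), NOT(b))
        return lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

    # ...while the primitive itself can be realized in arbitrarily different ways.
    def nand_boolean(a, b): return 0 if (a and b) else 1      # logical operators
    def nand_arithmetic(a, b): return 1 - a * b               # integer arithmetic
    def nand_lookup(a, b):                                    # a "relay"-style lookup table
        return {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

    for backend in (nand_boolean, nand_arithmetic, nand_lookup):
        xor = make_xor(backend)
        print([xor(a, b) for a in (0, 1) for b in (0, 1)])    # identical behavior every time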
Now, think about it: if you consider the logic gate model with which all digital electronics is designed, you cannot deny that the underlying implementation hardware has changed considerably in 2 centuries (rotating wheels, electromagnetic relays, tubes, transistors, and then a lot of finer and finer transistor technologies). The details vary a _lot_, but the abstract structure stays the same. Similarly, in as much as some high-level structure can implement a system capable of having "intelligent" conversation, it doesn't matter whether the underlying implementational hardware be human brain cells, interconnected electronics, silicon, or software running on a von Neumann machine. Yours freely, [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] If the human mind were simple enough to understand, we'd be too simple to understand it. -- Pat Bahn From fare@tunes.org Thu, 5 Oct 2000 16:22:35 +0200 Date: Thu, 5 Oct 2000 16:22:35 +0200 From: Francois-Rene Rideau fare@tunes.org Subject: Emergence of behavior through software On Mon, Oct 02, 2000 at 10:34:38AM -0700, Lynn H. Maxson wrote: > (2) All software executes in a manner consistent with its embedded logic. Again, you insist on this trivial point that is completely irrelevant to any debate regarding emerging systems and artificial intelligence. > "Volition does NOT consist in choosing the rules. No single human > chose his genetic code, his grown brain structure, his education. > Yet all humans are considered having a will." > Volition deals ultimately with choice. NO. There is no absolute notion of "choice". Volition, free will, or whatever name you give to it, is a property of a system with respect to its environment. It is about a system's behavior being largely determined by its own internal state rather than by externally modifiable factors. You cannot easily change what I think by simply pushing a lever; hence I am largely free from your own opinions (however, obviously, my behavior is affected by yours, since I'm replying to you). That a system obeys its own rules is no offense to its own free will. What would be an offense to its free will would be its having to track the dynamic state of another system that it cannot affect; for instance, I wouldn't be as free had I to obey your whims (in as much as I can't influence these whims). > In the dictionary this is qualified as a "conscious" choice. I don't care about dictionaries written by people who have no clue about what intelligence is. > Consciousness lies in self-awareness. You've just been using "conscious" in two different meanings. Welcome among the users of the fallacy of equivocation. http://www.intrepidsoftware.com/fallacy/equiv.htm I deny meaning to the very words "volition", "consciousness" and "self-awareness" in the context where you use them. I refuse to disagree with you when I consider you didn't assert anything meaningful. I similarly refuse to disagree with the statement "the smurf boinks", because I deny meaning to this sentence. If you think there are more than empty words in your discourse, I prompt you to explain what you mean in terms of observable behaviour of dynamic systems. Or maybe instead we can agree to deeply disagree on the validity of our respective points of view, and possibly explore each other's meta-arguments about this validity. > transfer an intrinsic property of an organism [...] Once again, you're a believer in the soul.
I deny any meaning to the notion of an immaterial soul. And I reject the notion of a material soul, for which no evidence exists. > You certainly can make a choice. > However your ability to do so depends upon your structure, your > human system and whatever in it is responsible for "life". When > that leaves you, when you die, you can no longer make a choice nor > exhibit purpose. So what? How does a running program differ? People _routinely_ use computers to make choices for them. The computers make choices depending on their own structure and state, and when they're shut down, they no longer make any choice. Choice is in no way an exclusive property of "the living". As far as we can observe it, even the Sun chooses to behave in unpredictable ways. And in the most regular physical systems, you have spontaneous events that break the symmetry of the system, thereby making a "choice". Choice for a system is about some event being dependent on the system's internal state, and independent from external events. > First comes "knowledge", then "understanding", and then "wisdom". > Knowledge comes from "knowing" you are doing something. > Understanding comes from knowing what you are doing and if > possible why. Wisdom comes from using the understanding of what > you know to possibly change what you do. Excuse me, but this sounds like meaningless ranting to me. To me, there is no absolute magical notion of knowledge, understanding or wisdom. It's all about the structure of the feedback between a system and its environment, and the relative ability of the system to anticipate its environment's potential behaviour during its internal decision process, as compared to a potentially different system in the "same" environment. > The only "seed" for an organism is an organism. Once again, you're blanking out 15 billion years of evolution. Which of the chicken or the egg came first? > Man thus far has > had no success in creating an organism in any other manner. You keep bringing up irrelevant points on which everyone agrees, with a fallacious tint of equivocation. (In this case, the term "organism" has stricter or larger meanings.) > Clearly software is not an organism Once again a subtle semantic shift that brings equivocation. I'm sick of it. > We have a history of increasing our knowledge and > understanding of such events leading in turn to an increasing ability > to predict them. Following our assumption and the basis of Fare's > metaphor, this means our gains have occurred at the loss of life > within those events. That, my friends, is logic. I reject your inference, and I find your grin preposterous. The more one knows, the more one knows that one knows not. Science extends the field of our (meta)ignorance even more than the field of our knowledge. -- Faré Actually, this is the very quantitative basis of Cantor's, Russell's, Gödel's, etc, diagonal argument: the complexity of a system grows exponentially with its size, including the size of subsystems used for an internal model of itself. You cannot add (reflective or not) information to the system without introducing room for even more information to gather so as to have a "complete" view of the system. Acquiring knowledge about yourself may increase your freedom with respect to the rest of the world, by bringing more opportunities of action.
> Fare takes this, our inability at times to predict and understand
> the results of software execution, as a means of giving software
> something (life, independence, freedom from the programmer) that
> it must lose in the event that we gain the ability to do either.
> Note that this "loss" occurs without a change in the software
> logic or in its execution. Therefore it must be a property
> independent of them, perhaps even a "soul".

Bullshit. In the circumstances you describe, the relative "loss" of freedom of the computer wrt us comes from our "gain" of knowledge about it. The software didn't change. We did. Hence the relative change between it and us. You think of "freedom" and "life" as absolute terms. I don't. Not only don't I, but I reject as meaningless any absolute notion of freedom or life. Once again, a fallacy of equivocation between my conspicuously relative rational notion of freedom and some undefined absolute notion of freedom. Your grins don't make me smile. They are no support for your fallacies.

> This property arises from a more serious claim by Fare that we
> have software whose results we do not understand or we cannot
> predict. To me both are patently false.

Let's stop it here, then. When the fundamental disagreements have been identified, it's time to terminate the (successful) discussion.

> Fare pooh-pooh's this by saying it is "postdict" not "predict".

No. "Predict" is different from "postdict", because cost matters. Only mathematicians count inferences as free. Computer scientists know better. If you can predict the outcome of a brute-force attack against mainstream cryptographic protocols, I'm most interested. If from the equations of a system you can "understand" the system to the point of predicting the outcome, a lot of physicists will want you. "Understanding" is about anticipation (see Popper). If you cannot gather enough information to make a useful decision before it's too late, then for all that matters, you haven't understood anything. Understanding matters only in as much as it is a prelude to doing. What ultimately matters is doing. I deny as meaningless any notion of understanding that cannot lead to action.

> Fare seems to forget that we write software and
> construct host machines to form a tool system.

Don't resort to insulting other people so as to explain disagreement. I am most aware of machines being tools, but we seem to have wildly different explanation structures in our respective minds about what "a tool", "to use", "useful" mean. Of course, each one is convinced that his conscious mental structure is more adequate, or he'd change it. Don't be so irrational in your mental modelling of other people!

> We do so as a means of extending "our" own ability.

Which is precisely why cost matters, and why prediction is not the same as postdiction. Computers are useful only in as much as they can assert things that we couldn't otherwise assert (in time|as cheaply|as precisely|at all). If we could effectively predict the outcome, we wouldn't need them.

> Moreover we do so for "reasons of our own".

Machines might have reasons of their own, too. You're back to your clichés, describing commonly agreed features of systems, intelligent or not, that have ZERO relevance to the possibility or impossibility of artificial intelligence.

> Among these "reasons of our own" are curiosity, amusement, and a
> desire to increase our knowledge, our understanding, and our
> ability to predict.
I'd rather say lust, urge, and anxiety for food, defecation, sleep, social approval, sex. There's no reason why vital urges cannot be built into computers. This has been done before. Moreover, in a sense, the urge of the system to reply to human queries is such a builtin urge, even in primitive interactive computers. > That we > cannot predict or cannot understand does accrue to an intellectual > failure or weakness. Of course it does. You sound like a dogmatic human supremacist. That we use tools to cope with our failures and weaknesses is precisely our victory. > We may not take the time to either know or understand what > occurred and why within such a process. That is our choice. I didn't choose to not brute-force crack RC5-64 by hand. I am just unable to do it. At the speed I am able to do it, the expected completion time before I manage it by hand is longer than the expected life of the universe, not to talk about my own. And even "by hand", I'd use paper and pencil, i.e. tools. Same about most all computer-solved problems. > the tool did not acquire a "life of its own" because we > chose to neither know nor understand. It did acquire some independance from our choice of not caring. But even with our caring, it would still be very much independent, in as much as we are unable to fully understand, even when caring. The more complex the emergent system, the more independent it is, relatively to the severe limits of our potential understanding. >> If your "encoded logic" is universal, just any behavior is >> consistent with it. So conformance to the logic is a null >> statement. > Yes, but you see the encoded logic, particularly that of software, > is not universal. I think that's the root of our disagreement. Let's agree to disagree here. The ability of software to accurately simulate complex physical systems seems strong evidence that it is, but I'd rather not argue about that. > "I strongly dislike your way of turning around arguments, > completely avoiding to answer to points others claim as relevant, > not even acknowledging them, and claiming as victories their > concessions on points they claim are irrelevant. This is no > rational discussion, only self-delusion thereof." > > I imagine that you do, considering the arguments you make. The > matter of relevance or not lies in the eye of the observer. > I didn't reproach you your disagreement in opinion, but your complete lack of acknowledgement of other people's opinion, your consequential repeating over and over again of the same points that were long agreed upon, and your lack of any response (up to then) to my objections. Even now, you still blank out half of my arguments. > To me > the issue of software execution "always" consistent with its > encoded logic is relevant. But it isn't the root of the disagreement, so stop boring everyone with it, instead of discussing the deeper disagreement. There's no use whatsoever in discussing stuff everyone agrees upon. "Do you agree that 2+2=4? Haha! Then the rest follows from it!" Obviously, it doesn't. > I, therefore, have something that the software does not: > a life of my own. You keep gratuitously extending the same statements from existing designed software to all software. This is no rational discussion. Just repeated ranting. You don't try to identify initial disagreements, you just repeat your opinion over and over again, without much regard for other people's argument structure. 
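For concreteness, here is rough arithmetic behind the RC5-64 example above (my own figures, assuming an obviously generous rate of one key tried per second by hand):

    # Rough arithmetic behind the RC5-64 example above (illustrative figures;
    # assumes an absurdly generous rate of one key tried per second by hand).
    keys = 2 ** 64                              # size of the RC5-64 key space
    seconds_per_year = 365.25 * 24 * 3600
    years_to_enumerate = keys / seconds_per_year        # roughly 5.8e11 years
    age_of_universe_years = 1.4e10                      # rough current estimate
    print(years_to_enumerate / age_of_universe_years)   # ~40 universe lifetimes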
> I suggest that the charge of self-delusion here is one of > projection, in the source and not the target. I claim once again that you're in a self-delusion of rational discussion, in an objective way observable by neutral observers, independently from the outcome of the main debate. A rational discussion isn't about repeating one's point over and over in the hope of getting it accross, but about discovering the other people's argumentative structure and identifying root disagreements between parties. Gratuitously repeating is noise. Rational parties look for signal. I'm not sure I want to waste more time on this particular discussion if you don't improve your attitude. > I doubt very > seriously if progress in software, in doing the things specified > in the Tunes HLL requirements, has any need for any properties > outside those available in formal logic. I admit said page is antique and lacking in both contents and rationale. I will state so on the page. > It hasn't required > anything else in getting to this point. Note that at no point in time has it required any further progress to get up to said point in the past of said point. > We certainly haven't > exhausted all the logical possibilities. > When and if we do, > considering parallel progress in other fields including biology, > then we can consider non-logical, statistical-based approaches. > Meanwhile let's complete what we can with von Neumann and Turing. None of these is the point discussed. I've been pushing for a direction of development that is none of those you propose, that is mostly independent from them, whereas the development of one need not prevent the development of the other, on the contrary. Anyway, this whole discussion is becoming more and more pointless. Again, understanding is about taking action, and whether AI is ultimately possible or not, people in the TUNES project agree that in the foreseeable future, we'll be working towards making much more primitive emergent systems, that will be used as a complement, tool, amplifier, to human intelligence, rather than an all-or-nothing complete replacement thereof. [Note to Billy: I prefer to avoid the acronyms IA vs AI, because in french, the acronyms are reversed! Also I see no dynamic opposition between the two points of views.] Yours freely, [ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ] [ TUNES project for a Free Reflective Computing System | http://tunes.org ] In a reasonable discussion, you can't communicate opinions, and you don't try to, for each person's opinions depend on a body of unshared assumptions rooted beyond reason. What you communicate is arguments, whose value is independent from the assumptions. When the arguments are exchanged, the parties can better understand each other's and their own assumptions, take the former into account, and evolve the latter for the better. From lmaxson@pacbell.net Thu, 05 Oct 2000 09:27:32 -0700 (PDT) Date: Thu, 05 Oct 2000 09:27:32 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Fare wrote: "That said, your theories really sound like you believe life comes from some extraphysical divine "soul" that somehow directs empty physical receptacles that are bodies. I bet your theory is isomorphical to this soul thing." I would remind you that it is not I who gave software a "life of its own" based strictly on whether we understood it or not. I try not to inject my faith or belief system as a causal element in a discussion of this nature. 
Such are unprovable and have no place in such a discussion.

"I bet, that, by induction, you can recurse down the bigbang, at which time there was some fully developed seed for each of modern-time species, as created by god. This is just ridiculous."

You lose the bet. I've been through this in another response. In it I also did not inject God. We do not know how life or the organism started or if this even makes sense in a timeless universe. What we do know is that currently we have no means of creating life except through transmission of a living organism. The question easily becomes: is there a difference between artificial (man-made) life and life as somehow formed within the universe? The answer I hope you would agree is "no". The difference lies in the process man uses to create life forms. Will that difference lie in attempting to replicate the process in the manner in which it occurs, or not?

"Why couldn't they? I imagine AI and A-life programs precisely as starting as some simple symbolic organism, and evolving thereof."

There are two problems here. One lies in the difference between a physical organism, one that exists physically in the universe, and a symbolic one that exists entirely within human systems. The second problem lies in the means of their evolving. You presume that you can give a symbolic organism life. You regard software as such a symbolic organism and believe that, when initiated within a hardware body, the combination can become a life form, an organism.

"Maybe not yours. Maybe not most anyone's today. Yet, I've seen people who did grow software. Genetic systems, reflective expert systems, data-mining pattern-extracting systems, etc, do evolve (and I hope to eventually bring them all together)."

How does software grow? How does software evolve? It grows by someone writing. It evolves by someone writing. In both it involves direction by an external agent. I don't care if it is genetic, reflective, data-mining, whatever. It is not me who injects God-like responsibility into this discussion. I accept that software will always reflect the writing of its authors regardless of how well they understand or can predict the results of what they have written. They may very well have written it for just that purpose. However that purpose remains in them and does not transfer into the software, which as a non-intelligent mechanism, as a non-life form, does exactly as instructed.

Evolution within a sequence of organism generations occurs from some intrinsic "property" within it with respect to the environment of which it is a part. To achieve this with software means having no "external" writing, only "internal". The challenge lies in programming the seed, that initial piece of software that acquires a "sense", an "awareness", and a "purpose" of its own. Now can that happen? Without a better understanding of how these occur in living organisms we cannot rule it one way or the other. However we can fairly safely rule out von Neumann architectures for the hardware and Turing rules for the software. I'm not into denial here. I do say we do not have the right "materials" currently. If and when we determine just what those right materials are, we may find that hard-coding, not soft-, is the means. You keep talking about meta^n-programming and ever higher levels of languages, all of which we may understand, but for none of which have we invested in the physical means, the computer architecture. No existing computer executes a meta^n-program or an HLL.
What it executes is their translation into a language (its instruction set) it is constructed to respond to. I have to exercise caution here and not use such terms as "understand" or "know", because a non-intelligent mechanism can do neither. Here the metaphorical use of language deceives. If the answer lies in what you propose in terms of languages, in terms of symbolic forms, then the machine must have the ability, as the authors do, in terms of "understanding", "intent", and "purpose". That says that you cannot do it through software. The software cannot execute without a machine. The machine cannot "know" more than its instruction set. Therefore that instruction set must be capable, on its own, of "understanding" and "creating" ever higher levels of abstraction. That, my friend, is "evolution" from internal growth. That is not a von Neumann machine. Nor are the governing rules Turing. The secret here lies in developing both in sync as a single system with no separation between directing and doing, the same system that occurs in every organism. That means they do not grow with "external" assistance, but only in response to it as part of their interaction with their environment. I would have thought, as one respecting cybernetics, that you would have at least picked this up from Ashby's work, an understanding of homeostasis, and the homeostat.

In retrospect I should have gotten a clue when you proposed that software should acquire all these attributes except "purpose", which remained that of the authors. That would imply that you hold volition, awareness, thinking, understanding, knowing, and purpose as separable components and not interdependent, integrated, interacting processes.

"You blank out the notions of input and persistent state. Not to talk about chaotic behavior and evolution."

None of these occur within software. None of them change the execution sequences in software. All execution sequences are programmer determined, i.e. consistent with the embedded logic. I haven't blanked out anything. They don't change anything relative to the embedded logic of the software.

"Rules offer partial information. Internal state provides another body of information. Still same blanking out."

Rules offer no information whatsoever. Rules apply to inputs, producing outputs as well as changing internal states. Internal states are data, inputs are data. Software processes data. It has no ability to "view" it as information. It is a mistake to imply that software sees meaning in anything that it does. Software executes. It doesn't even know it is doing that. Truthfully it will never know that. Only that within which it executes, the physical form, can acquire that capability. That form is not von Neumann-based.

"This is seemingly an essential point of disagreement between us: you're obsessed with the low-level aspect of things, and do not accept that high-level structure may be independent from underlying implementational details."

I guess there is a difference between my view that the whole is equal to the sum of its parts and yours that it is greater than that sum. Apparently also in your view not all "wholes" are created equal. Apparently, in the evolving development of a whole, an inequality appears spontaneously. Now just where and when remains a question. Thus far in software we have not created higher level abstractions not composed directly of lower level ones. And those decompose eventually into the instruction set of the host machine.
In fact in executable form no higher levels exist, only the instruction set. So where and how in this architectural model, the von Neumann machine, do you program in an inequality? You will on the one hand berate me for injecting "life" as an inequality into a physio-chemical equation. To you this means that I see the hand of God in the process as well as a soul. On the other hand you berate me for not allowing it in software, which you do. The truth is that I don't inject an inequality in material composition to account for life but something in material organization, some difference that exists when it exhibits "life" versus when it exhibits "death". That says I am more for "composing" as a life process than "decomposing" as a death process. In either case a continuum in process (a sequence of sub-processes) occurs. This means in no instance does a material inequality occur. I leave it to you to resolve your own contradiction.

"The details vary a _lot_, but the abstract structure stays the same. Similarly, in as much as some high-level structure can implement a system capable of having "intelligent" conversation, it doesn't matter whether the underlying implementational hardware be human brain cells, interconnected electronics, silicon, or software running on a von Neuman machine."

Here I think you and Alik Widge make the same mistake (IMHO). You posit that some (existing) high-level system is capable of "intelligent" conversation even though you know, if it is a human conversing with it, the intelligence is actually one-sided. I do not know if two of these machines converse "intelligently" with each other or if, like us, they tend to argue more. The fact is that the underlying hardware implementation does matter greatly. Furthermore it lies well beyond existing software techniques to equip a machine with human-equivalent conversation capabilities. Neither a von Neumann machine nor Turing rules will ever approach the conversational levels of humans...or even ants.

You keep acting like I am denying something or have some fear of future developments. What I deny is the ability of current developments to do the job. Instead of beating our heads against an impossible task let's get to the developments in which all this is possible, if not more likely. Until you have embedded high-level constructs within a machine, something higher than the current instruction set, and embedded the ability for it to expand and extend them on its own, what you achieve with your meta^n-programming and ever higher HLLs will never transfer to the machine, the only place in which software comes to "life". You have yet to provide an example not burdened by the restrictions of a von Neumann machine. No amount of elaboration or sophistication will overcome those restrictions. This makes it increasingly difficult to transfer (communicate) the author's thought processes to corresponding machine behavior. The transmitter (the author) has the ability. The receiver (the machine) does not. Therefore the author has to translate his communication (the software) into the language of the machine. It doesn't take much perusing of executable code to determine that considerable is lost in translation. Obviously in terms of communication at a human level (which is what you desire at the machine level) considerable is lost in translation. Improve the machine, make basic changes in its abstract form, and the need for software (external direction) becomes minimal. Of course, what you get may be as blind, blanked out, and bull-headed as me.
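A small illustration of how little of the author's intent survives translation (a toy Python example of my own; it shows interpreter bytecode rather than machine instructions, but the effect is the same in kind):

    # Toy example: the same greeting logic, viewed as the low-level
    # instructions the interpreter actually executes.
    import dis

    def greet(name):
        """Return a personalised greeting for the given name."""
        return "Hello, " + name + "!"

    dis.dis(greet)
    # Prints a listing of stack-machine opcodes: loads, concatenations, a
    # return. The docstring, and the author's purpose in writing the
    # function, appear nowhere in what actually gets executed.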
I think the current conversation between Billy and Alik offers more in substance relative to current hardware and software than will the pursuit of this. Maybe we can return to it later after resolving some more practical issues.

From lmaxson@pacbell.net Thu, 05 Oct 2000 12:25:15 -0700 (PDT) Date: Thu, 05 Oct 2000 12:25:15 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software

Alik Widge wrote: "Don't think *for* the user, because it is almost impossible to know what the user really wants. Let the user express an intention, and *then* do what he wants. Detect patterns in his behavior (such as saying "No" whenever you ask if he needs "help" writing a letter) and comply."

I come to all this through a devious route, from Warpicity, a proposal I presented at WarpStock 98 in Chicago, to the FreeOS project and then to Tunes. My original proposal dealt with a single tool, the Developer's Assistant, and an HLL, Specification Language/One (SL/I). The purpose of the specification language was manifold: (1) construct the tool, (2) construct itself, and (3) construct any HLL. Thus the language was simply a means to an end. The tool, the Developer's Assistant (DA), as its name implies, performed in a manner similar to what Alik describes: non-intrusive, compliant, and reflective. In short, a developer's assistant. It's not a programmer's assistant, because no programming occurs, only the writing of specifications. This may seem a silly nit because both involve writing. The difference in my mind lies in the software development process and its sequence of stages: (1) specification, (2) analysis, (3) design, (4) construction (programming), and (5) testing.

Input into the specification stage consists of user requirements or change requests. Thus only two writing tasks occur up front, one of the user requirements and the other of their translation into formal (compilable) specifications. Everything after that is performed entirely within the tool: no manual effort. The only other writing which occurs is that of the remainder of the user documentation: the reference manuals, user guides, operator guides, etc. Most importantly, the only writing which occurs within the process bounded by the stages is specification. It occurs manually only in the first stage, as the rest of the stages are automated (tool performed). If on the input you perform a syntax analysis and a semantic analysis, and allow the logic engine to do the construction as occurs today in Prolog and in AI expert systems using logic programming, then you have all the input necessary to perform the stages of analysis (dataflow) and design (structure chart). Thus you have the original source specifications and their three possible results (analysis, design, and construction). Again, all this from a single set of written specifications. In this manner the tool reflects in three different ways, two of them graphical, what the developer has submitted. Along with this, of course, are the results of the semantic analysis. The only changes which occur, the only writing in which the developer engages, is specifications. As they occur at the earliest possible point in the process, initiating an automated rippling change process, all remain in sync in terms of documentation. As the tool uses a "standard" logic engine with a two-stage proof process, one of completeness and one of exhaustive true/false, the tool again depicts in the results the current state (level of completeness).
It notes ambiguities, incomplete logic, and contradictions (a variant on an ambiguity): in short, all of the possible "errors" it can detect. It does so in a non-intrusive manner of simply providing the developer with multiple views of the current state. The developer then adds, deletes, or modifies (makes a new version of) specifications in a sequence of his choice, with the tool responding to each change as it is entered. The tool then is an interactive one, performing all of the activities of the software development process except the writing of the user documentation and of the specifications. It is not only interactive, but also interpretive, allowing independent execution of any denoted set of specifications. When the developer is satisfied that this version of the software is complete, he can so indicate to the tool. The tool then will compile the code into a target system of choice. That occurs because all the target systems exist as specifications as well.

Two things are different. One, there are no source files, only a data repository with a central directory. The only access to the repository is through the directory. All source statements, user text and specification source, are stored individually, separately, and uniquely named in the repository. Thus no source statement is ever replicated. Two, the scope of compilation is determined strictly by that implied within the input specifications. It can be anything from a single statement on up, including entire application systems (multiple programs), sets of such systems, and entire operating systems. This allows a global application of semantic and logic rules not available in current methods. There are no copy books, no manual synchronization of effort, no peer reviews required. This means that once specifications are written, once translation of user requirements occurs, a single developer using a single tool can achieve results that now require tens, hundreds, or thousands of IT support personnel: a 50-fold reduction in time and a 200-fold reduction in cost over current methods.

Now that you have optimized the development process while minimizing the human effort involved, a derivative of "let people do what machines cannot and machines do what people need not", what remains is further minimizing the human effort. Nominally this occurs through the tool "observing" the developer's style, detecting patterns (tendencies) of the developer, again in a non-intrusive, helpful manner, simply making what it detects available (on demand) to the developer. The developer can then opt for a choice which in essence is a confirmation of his style. There is no reason not to allow the developer to choose to automate this aspect of his behavior.

All this is possible with today's technologies, as each and every piece exists today. The problem today is not in what we do or which language we do it in, but in how we do it, i.e. the process employed. This is a first step in process improvement. The specification language, which is self-defining, self-extensible, and self-sufficient, is simply a means of getting there. The only remaining issue is staying there, of being able to adapt to the dynamics of the environment at the rate at which the dynamics, the changes, occur. Here is where the logic engine and the use of an unordered set of specifications shine. For the only thing that the developer must do is add, delete, and modify existing specification statements and assemblies.
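To make the repository and compilation-scope ideas above concrete, here is a minimal sketch (hypothetical names throughout; an illustration of my own, not the actual SL/I or Developer's Assistant design):

    # Minimal sketch (hypothetical, not the actual DA/SL/I design): every
    # specification statement is stored once under a unique name, and the
    # scope of a "compile" is whatever the input specifications imply.
    repository = {}      # the data repository: unique name -> statement text
    references = {}      # unique name -> names of statements it depends on

    def add_spec(name, text, uses=()):
        """Store a specification statement exactly once, under a unique name."""
        if name in repository:
            raise ValueError(name + " already stored; statements are never replicated")
        repository[name] = text
        references[name] = list(uses)

    def compilation_scope(roots):
        """Everything implied by the input specifications, and nothing more."""
        scope, todo = set(), list(roots)
        while todo:
            name = todo.pop()
            if name not in scope:
                scope.add(name)
                todo.extend(references.get(name, []))
        return scope

    add_spec("greeting_text", "'hello, world'")
    add_spec("greet_user", "display greeting_text to the user", uses=["greeting_text"])
    print(compilation_scope(["greet_user"]))   # prints both statement names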
Specification assemblies do not consist of more than 20 to 30 specification statements. The implications of a change (even a proposed one) are immediately reflected in the results produced by the tool. Now the developer can implement changes across an entire application system as part of a single process. He has no limit on the globalness of a change. He leaves it intact without decomposing or distributing separately. His ability to exhaustively test a change is unmatched by any non-logic-programming-based method, including OO. What that means is the ability to make changes faster than they can occur, which means you can make them as fast as they occur. The only glitches in the dynamic continuum that Fare speaks of is the time necessary to write the specifications. No system currently proposed to Tunes comes close. Both Brian Rice and Fare are working on the wrong end of this pony. It is not a language deficiency or one that can be cured by language. It is a process deficiency. Its cure lies in process improvement, not obscure language features. From aswst16+@pitt.edu Thu, 05 Oct 2000 22:23:01 -0400 Date: Thu, 05 Oct 2000 22:23:01 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software --On Wednesday, October 04, 2000 8:14 PM -0700 "Lynn H. Maxson" wrote: > Alan Widge raises a number of issues that need addressing to his That's *Alik*. Get it right, please. > of its author(s). The author(s) have no means of transferring > that creative property in them to make it intrinsic to their > creation, i.e. that their creation can replicate the processes > that occur within their author(s). This is true. However, I think you owe us a "yet". > Emotion, Reason, and the Human Brain". I think you need to spend > some time "listening" to those who engage actively with the brain, > how it works (in so far as we know), and when it doesn't: its > disorders. The point here lies in how little we actually know of > the brain and as a result even less our understanding of it. It may interest you to know that one of my majors was cognitive science, with a heavy neuroscience component. I am well aware of how little we currently know about how the brain works. This does not mean that we can never know. > single-cell appear. Nothing logical, physical, or otherwise says > that it isn't as timeless as the universe itself. We have used This is not really true, at least not if one accepts current hypotheses regarding the Big Bang. If we apply our concepts of time, the mathematics seems to suggest that at least the first few minutes were utterly incapable of supporting anything we would call an organism. > death. Yet no organism begins from this state. It takes an > organism to begat an organism. Now why we have yet to discover. The basic problem is that we can't set up the biochemical clockwork and then add the single push to get it all rolling. We need to start with a running engine and cobble the parts onto it as we go. OTOH, there are plenty of people trying to recreate the primordial soup, so perhaps someday they will demonstrate spontaneous generation of self-sustaining processes. There is nothing which says that we *need* an organism, but starting with one is significantly simpler, so we do that. > separations. It has no separate actors performing a separate > action. No subject. No verb. Just a complete universe. But it *does* have a beginning. 
There is a point in what we call time (which, although it may have arbitrary divisions, is also considered to be a physical dimension, and is therefore "real" in some sense) before which there appears to have been nothing. > When you attempt to impose your map on the territory that's when > you engage in fiction. Perhaps, but the very point of a map is to let you plan a route within the territory, and it generally serves that function quite well. > So far no one has even suggested creating silicon crystals using > software and computer hardware. Why not. Here you have a pure > non-organism of only one component type. Why is it that you have > to grow them. Why can we not just crowd them together? The > answer that you seem to sneer at is "embodiment", the means of > construction. The means that occurs in nature we basically follow > in the laboratory. This seems rather disjointed. Crystals must be formed in a specific way, yes. It may be true that minds require specific underlying patterns. However, there is no evidence that those patterns cannot be implemented as software or hardware. > Now there is none of that in a neuron. No logic. No ands. No > ors. No nots. What you have is a connection, an interconnection, > unlike any in any computer. I would refer you to Ashby's Except that a neuron can in fact compute in just that manner. I would refer *you* to something as simple as the cells of your retina. Shine a light on one, and it turns on. Remove the light, and it turns off. (Others turn off by light and on by dark. Same principle.) The thresholding behavior of neurons is not much different from a digital gate: if you're close enough to +5V, you get 1, otherwise you get 0. > It's a feedback system. On the other hand constructing a > non-programmed, but adaptive homeostatic unit means only that you > have to connect it. Completely random. Not this to that nor that > to this. But that is *not* how the brain behaves. You will most likely get very poor results if you rewire the optic nerves to auditory cortex and auditory to visual cortex. Brains do not begin as randomly wired networks; the DNA itself contains "bootstrap code" to organize structures and begin regulation. > The truth is that you can fake it out, have it in a simulated run > while on the ground, never putting it into an actual plane until > it has "learned", until it has become "stable". Now which one, > which process, would you as an airline company use? But this is exactly the point you try to deny later when I suggest that a body need not be exactly a human body. > brought home to me. When you write of faking out the brain by > somehow switching it instantaneously or gradually from its natural > system into an artificial one you are engaged in science fiction. > You do not appreciate how intricate a system the human organism > is. Having experienced a stroke, albeit a minor one, for just > denying blood flow for an instant to the brain, and for a period > not having your legs "obey" your orders, this is not a > plug-and-play system. I also happen to be a medical student. One of my particular areas of interest is sensory prostheses. We can already replicate the cochlea to a reasonable degree. There are more people working on the eye than I care to count. It *can* be made plug-and-play (well, if "plug" is defined as "damn tricky surgery") if you can decode the protocol. > eye. The connection is not a cable. 
This is a completely > non-logic-circuit-based system that you propose replicating with a > completely logic-circuit-based one. It is one thing to have a Yes? So? Our senses have very stereotyped signals. These can be generated from standard digital and analog logic. Right now, the problem seems to be a matter of getting sufficiently large arrays of sufficiently small transducers and wiring them in properly. If a software brain existed, the problem would barely exist for that system. > circuitry. The brain is not a computer nor the computer a brain. This is an assertion. You are entitled to it as opinion, but I do not believe that you have proved it or can prove it. > You see there are connections and what they connect. You can't > replicate the connections or what they connect with a von Neumann > machine operating under Turing rules. The brain is not a von This is also an assertion, and one which I think has been at least partially proven false. We can in fact produce the signals. We need to get the physical wiring down, but that's a simple matter of engineering. Give it two decades. > To you rules are logic-based only. We have no reason to believe > (or disbelieve) that the internal-working rules for the amoeba are > based "strictly" in logic. That systems of logic can arise from Do you claim, then, that physics does not derive from the eminently logical system of mathematics? > That's a big "if", you see. To function like the real one means > embodying it within an organism that functions like the human > organism. That's how the brain functions. It does not function > in isolation nor does it operate on simply a subset of its > capabilities. If this is true, explain those who have sensory deficits. Seems to me that they're functioning quite nicely on a subset of their brain's capabilities. > aside any thoughts of software and enjoy the magic. The answer > here is strictly, no. There is no transference, no organism-based > seed, in programming. If there were, programs would develop on > their own without need for further assistance. And you cannot show that this is impossible. We may not know how to do it, but that does not mean it is impossible. > Nope. There cannot be a software organism. Again, assertion. > I'll concede the point as it is theoretically true on the basis of > probability theory (another human invention not present in the > universe). However, take a look at the probabilities for a simple > program like "hello, world". You get one right and umpteen > zillion wrong. Whereas if you eliminate the random opcode picking > and use logic, it comes more in balance. I'll leave it to your > employer which he prefers you use. That's not the point, though. If you accept this as true, you see how any program could be created without anyone having the intent to create that specific program. The process would need to be optimized, but I only desired proof-of-concept. This seems to partially deny your idea that randomness can do nothing for the idea of the emergent system. > A Turing machine has no intrisic purpose, will, emotion, feeling, > imagination, concept building, sense of the universe, or any of > the other things which differentiate it from organisms in general > and humans in particular. You are stuck with achieving your goals You cannot prove that these things cannot be subfunctions of a TM. Again, simply not knowing how to do something does not make it impossible. 
I do not say that a TM "has emotion"; I am rather saying that emotion may simply be the output of a particular computational process within the brain. Each individual neuron has a definable input-output behavior. As such, it computes a function, and as such, it is theoretically replaceable by a TM. Chain enough of those together to replicate the limbic circuits and you may well have artificial emotion. Until we get clever enough to try it, you cannot claim that it is impossible.

> non-organism-based means of providing tools for their use. I
> suggest that Billy has the correct approach in terms of
> constructing software to support and extend human capabilities,
> something within our current ability.

I am making no argument that the Tunes project should try to build software organisms. That is not possible based on current knowledge. However, you are apparently arguing that it will never be possible, and I consider this exceptionally short-sighted.

From lmaxson@pacbell.net Fri, 06 Oct 2000 00:58:25 -0700 (PDT) Date: Fri, 06 Oct 2000 00:58:25 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software

I apologize for not getting Alik right. It's sitting in front of me, a mistake I should not have made. Nevertheless Alik Widge wrote:

"The basic problem is that we can't set up the biochemical clockwork and then add the single push to get it all rolling. We need to start with a running engine and cobble the parts onto it as we go. OTOH, there are plenty of people trying to recreate the primordial soup, so perhaps someday they will demonstrate spontaneous generation of self-sustaining processes. There is nothing which says that we *need* an organism, but starting with one is significantly simpler, so we do that."

We are in basic agreement here. That's why we are in disagreement with respect to software-initiated life. As far as I know all organisms are carbon-based. I assume that's why we call the chemistry associated with carbon organic chemistry. I have no doubt at some point we will crack the code that initiates the temporal instability within a chemical substance that we call life. A temporal instability, a transient in a process that leads to death. A high-information, low-entropic instability incapable of sustaining itself indefinitely from a low-information, high-entropic state called death.

What is it that you would do in software? Certainly not create a life form. With software you can only mimic. The best you can achieve, the best we have ever achieved, is useful mimicry. Now you have two problems: one, that a difference remains between being logically equivalent (the best that you could ever achieve) and identical (which you can never achieve in software), and, two, even logical equivalence to that degree is probably neither possible nor practical. Why? Call it chemistry. I gave you the example of growing silicon crystals because that's a much simpler chemistry to mimic in software than that of an organism. Yet no matter how you write the software or select the host computer, you are not going to end up with a silicon crystal. Logically equivalent, yes. Identical, no. Is there a practical difference? Which one can you use to build the computer in which you will run your silicon-generating software?

"But it *does* have a beginning. There is a point in what we call time (which, although it may have arbitrary divisions, is also considered to be a physical dimension, and is therefore "real" in some sense) before which there appears to have been nothing."
Nothing which exists only in human-created systems is real outside that context. Not the physical rules. Not the chemical ones. Not the mathematical ones. Not even time. They are no more than continually changing maps distinct from the territory they supposedly describe. That doesn't mean that we do not find them useful. It simply means that they fill a need we have, not one of a universe which has no such problem, which has no needs period. I realise that plus and minus infinity are useful crutches due to our language and the impact it has on our thinking. We have beginning and end because we cannot accept that any process can have avoided either. That's a trap we have set for ourselves, not one for a universe which doesn't reflect on what's happening. If you want to accept a theory that all matter did not exist before the big bang simply because the mathematics dictates it, you may. I will simply assume it's a map error. Or it was an act of God, because we cannot have an effect without a cause.

"This seems rather disjointed. Crystals must be formed in a specific way, yes. It may be true that minds require specific underlying patterns. However, there is no evidence that those patterns cannot be implemented as software or hardware."

Here again we are dealing with an organism where what it does and how it does it are indistinguishable. What we call hardware and software are one and the same. We do not have high-level software and low-level machines. They have identical levels, because "it" is not a "they". As someone with experience in neuroscience you also know that the brain does not engage in sequential logic. That we can does not mean the brain uses it in support of our ability. The problem is that you want to program something that doesn't use programming. No matter the genetic code or the cell differentiation, they only spawn the abilities. They do not direct them. Ashby with his homeostat showed that you only needed an interconnected structure which adapted without instruction because it was "inherent", "intrinsic". He upset no end of people who would play God by showing that God (deus ex machina) was unnecessary.

"Except that a neuron can in fact compute in just that manner. I would refer *you* to something as simple as the cells of your retina. Shine a light on one, and it turns on. Remove the light, and it turns off. (Others turn off by light and on by dark. Same principle.) The thresholding behavior of neurons is not much different from a digital gate: if you're close enough to +5V, you get 1, otherwise you get 0."

The difference, of course, lies in "not much different". It is a difference which counts. First off, a neuron is not an on/off digital gate. For one, it gets "tired" and sometimes doesn't produce an output logically indicative of the input. Sometimes what it produces is not sufficient to excite the next connection, depending upon its current state. What you get is a statistical mishmash of a highly parallel, distributed, interconnected flow. Much the same occurs within the cells of the retina, which may excite one time and not the next. Given how well you understand the retina, I am surprised that you don't implement it with software and a host computer. I don't know what it would see, but maybe if you connect it to that which mimics the brain, you could be on your way.

"But that is *not* how the brain behaves. You will most likely get very poor results if you rewire the optic nerves to auditory cortex and auditory to visual cortex.
Brains do not begin as randomly wired networks; the DNA itself contains "bootstrap code" to organize structures and begin regulation." The point of the homeostasis-based autopilot and the fixed-logic one was not to suggest that the brain operated in such a manner, but that there was a means of exhibiting goal-seeking (adaptive) behavior structurally without a separation between the direction and the doing. In short it is builtin, integral within the structure. What we call "adaptive behavior" or even "life" arises from the conditions of the processing structure. One thing that it is not is sequential logic. One thing that software is and always will be is sequential logic. That's your Turing machine that you say can do anything the brain can do. Take a look at languages specifically designed for parallel, distributed systems and at the hardware specifically designed as well. Find one simultaneous, majority-logic computer architecture that has an HLL with the same capability. It's not that one or the other doesn't exist. I suspect that if you looked at the "innards" of Big Blue which accomplished only in part a very small piece of what you propose to implement in software and a host computer, that ease of rolling off your tongue volition, emotion, mind, thinking, feeling, seeing, acting will be far different in fact. "This is also an assertion, and one which I think has been at least partially proven false. We can in fact produce the signals. We need to get the physical wiring down, but that's a simple matter of engineering. Give it two decades." Why should any simple matter take two decades? It must not be that simple. "Do you claim, then, that physics does not derive from the eminently logical system of mathematics?" I have no clue what connects this to the non-logic-circuit basis of an amoeba. For the record I make no such claim. Although you may get an argument from physicists. "If this is true, explain those who have sensory deficits. Seems to me that they're functioning quite nicely on a subset of their brain's capabilities." The point is that whatever sensory capability they have is integral with the brain. If they lose a sensory capability, that in no way diminishes the functionality of the brain: the capability remains. If they lose a sensory capability of the brain, the sense retains the capability. For the system to work they must both work as "one". "And you cannot show that this is impossible. We may not know how to do it, but that does not mean it is impossible." Well, it gets back to chemistry and whatever it is that allows life its interval with an organism. Software is not chemistry nor is the instruction set of a host computer. The host computer may be chemistry, but it is not of the kind that sustains life. Now you either believe that life is formed only from carbon-based matter or you do not. If you do, then what you propose even in creating an artificial life is impossible. If you do not, then it is up to you to show how software in a computer can exhibit all the properties we associate with life forms. Not the least of which is the lack of software distinguishable from the hardware. An organism is an integrated system and functions as such. You keep wanting to program that which requires no programming. "That's not the point, though. If you accept this as true, you see how any program could be created without anyone having the intent to create that specific program. The process would need to be optimized, but I only desired proof-of-concept. 
This seems to partially deny your idea that randomness can do nothing for the idea of the emergent system."

I think here your problem is greater than any objection I raise. I will concede that it is theoretically possible to create any specific program using random opcode selection. I will not concede that it is practical, or that, given any zillion of interconnected machines at 1000MHz, it will occur in less than a million years. I leave it up to others more familiar with probability to give you the actual odds. Nevertheless you have your proof-of-concept, even if useless. Now you propose to optimize a random process. I can only assume that you intend to do what we do now, which is to remove the randomness through the use of fixed logic.

I'm not aware that I said that randomness can do nothing for the idea of emergent systems. You have two choices for random selection: you can choose data or you can choose an instruction path. What you do with either choice is completely determined by (consistent with) the embedded logic. The software may use random selection, but there is nothing random in the embedded logic. Thus references to emergent software systems differ not one whit relative to their consistency with the embedded logic. They have the same logical consistency as does non-emergent software. This consistency prevents a software system from ever acquiring a capability not contained within the embedded logic. Fare believes that you can somehow transcend this from within the software itself using meta^n-programming or ever higher-level programming languages.

"I do not say that a TM "has emotion"; I am rather saying that emotion may simply be the output of a particular computational process within the brain."

You see, it all hinges on what is included in "compute". If you mean that which is possible on a von Neumann machine, the answer is no. Emotion is not an output of a process, but part and parcel of it. Emotion is a process, as is volition, thinking, feeling, etc. They are not separate nor separable from each other, but melded within the same overall process. As one who studied neuroscience you should know that. Don't make Descartes' error. Read the book.

"Each individual neuron has a definable input-output behavior. As such, it computes a function, and as such, it is theoretically replaceable by a TM."

Nice try, but no. Once you get by the difficulties of logically representing the "definable" part relative to energy levels, interconnection resistance, persistence, and repetitive rates, you are going to run into a wall on the "function" part, if for no other reason than it doesn't exist at this level. A neuron either fires or it does not, depending upon the circumstances at that moment. That's its only function at its level. If you want a Turing machine to execute or not execute billions of neurons simultaneously, be my guest. I guess it is one of those theoretical proof-of-concepts that you enjoy.

"Chain enough of those together to replicate the limbic circuits and you may well have artificial emotion."

No.

"Until we get clever enough to try it, you cannot claim that it is impossible."

I'm not aware that my claims have any less validity than yours. However I am more than willing to change it to highly improbable, mimicking it in the limit as you would life in software to say that you can't tell the difference.

"I am making no argument that the Tunes project should try to build software organisms. That is not possible based on current knowledge.
However, you are apparently arguing that it will never be possible, and I consider this exceptionally short-sighted." Interesting. Both you and Fare hold that we are some decades away from any ability to state it one way or the other. I consider it short-sighted to pursue the unknown when we have yet to exhaust the known. I believe that you will only create life as such with all its properties using carbon-based technology and never with von Neumann architecture and Turing-based software. There is a chemistry of life relegated to actual physical material that no matter how you mimic them in software will always have something missing. Beyond that I see no purpose in it. There is nothing in Tunes in terms of results either in operating systems or HLLs which requires more than what we know currently. Fare wants to give software a "life of its own" except for "purpose" which we will retain. He doesn't see that the one contradicts the other, because life's processes, simultaneously present in the process, does not allow for such separation. You want to create artificial life because everything in your universe is somehow expressible in a Turing machine. I would simply suggest that you reexamine it. I see no sense in artificial life, because success means loss of a tool. Do you want to reinstitute slavery? Do you want yet another source of mis-communication? Do you think that artificial life offer us any more than what they could offer without it? The point is to use software and hardware technology in ways that extend our capabilities. Who can be opposed to that? Artificial life, something that replicates what we are only thousands of times slower on 1,000,000MHz machines and at 100,000,000 times the cost, makes no sense at all in my opinion. Artificial limbs, artificial organs, yes. I personally would prefer non-artificial either regenerated through biology. I see software and hardware as a tool. I don't see artificial life as such. Your choice. From lmaxson@pacbell.net Fri, 06 Oct 2000 08:56:44 -0700 (PDT) Date: Fri, 06 Oct 2000 08:56:44 -0700 (PDT) From: Lynn H. Maxson lmaxson@pacbell.net Subject: Emergence of behavior through software Fare wrote: "Again, you insist on this trivial point that is completely irrelevant to any debate regarding emerging systems and artificial intelligence." Irrelevant? We have an agreement on this point. The logical implications of this agreement prove (1) that software does not have a life of its own, (2) that no software executes independently of its programmer(s), (3) that software utilizes nothing extra regardless of our ability to understand or predict the results. In effect that emerging systems have the same causal relationship to their source as non-emerging. Now these are claims that Fare entered into this debate. If he agrees to this irrelevant point, then he must agree to drop his claims for the others. Under the circumstances it hardly seems irrelevant. This does not make emerging systems less entertaining, curious, fascinating, useful, or challenging. It does eliminate any magical properties of randomness in software, pointing out the difference in its "use" here (externally controlled) and its occurrence in real world events. There is a fundamental difference between a "planned" random selection of software and one which simply arises within the events occurring within the process which is the real world (in which we participate). The one is a simulation, mimicry, with entirely different causes than the other. The one is a map. 
The other is the territory. Never the twain shall meet. As to the debate regarding emerging systems and artificial intelligence I don't know of any regarding emerging systems, except whether they possess "magical" properties. With an irrelevant agreement we have assurances that they do not. Even without them they seem to produce the same (useful, curious, fascinating, challenging) results as before. With respect to artificial intelligence which version instills debate? Certainly not the current version based on logic programming and rule-based logic engines and neural nets. My own project depends upon exploiting them within the software development process to increase developer productivity and reduce costs by orders of magnitude over current methods. If you mean the version which replicates in software and a host computer that which occurs in living organisms, specifically the human organism, I know that there is some debate about whether it is possible or not. I won't say that it is impossible though I lean toward the most highly improbable approaching impossible in the extreme. I am perfectly content not to consume the energy of those who would pursue it. Their task is difficult enough. However, I do have a problem with your achieving "true" artificial intelligence which leaves "purpose" untouched. That implies that these manifestations, dynamic aspects arising from the singular process of the brain, i.e. the same causal energy, are somehow separable and not intractably melded in terms of their source. That leads me to believe that production of intrinsic intelligence through software (if possible) must give rise to intrinsic purpose as well. I have no problem with that. Just realise that it has suddenly ceased to be a tool, a directed extension of a human capability. Why we would choose to pursue a non-tool use of software and a host computer eludes me. Whatever we think we would solve with success is nothing compared to the problems it presents. You have to be careful of what you ask for as you may get it. "NO. There is no absolute notion of "choice". Volition, free will, or whatever name you give to it, is a property of a system with respect to its environment. It is about a system's behavior being largely determined by its own internal state rather than by externally modifiable factors." I don't think you grasp the real world at all. In the real world you are not separate from your environment, you don't end here and have it begin there. Whatever you are is part of that environment as well. Choice occurs in reaction to events, whether the deliberate choice that we make to events at our level or what occurs at the level of quanta. Most definitely within all software we have an absolute notion of choice, utterly predictable regardless of whether we use random selection or not: the choice has to be a path allowed in the logic. Now internal state and externally modifiable may appear separately linguistically, but no such separation occurs in the real world: we are how we have reacted to our participation in the environment. We obtain life in this manner. Choice. Volition. Emotion. Whatever. "That a system obeys its own rules is no offense to its own free will." This assumes that a system sets its rules independently of external influences. Specifically no software system ever written established its rules independently. Randomness in selection of a data value or an instruction path doesn't change this. 
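A toy illustration of that point (my own example, with made-up names; nothing from the actual discussion):

    # Toy example: the run-to-run outcome is unpredictable, yet the menu of
    # possible outcomes was fixed in advance by whoever wrote the program.
    import random

    PATHS = ["optimize for speed", "optimize for size", "do nothing"]

    def decide():
        # an unpredictable pick from an author-bounded set of alternatives
        return random.choice(PATHS)

    print(decide())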
We may not know which selection may occur in a particular instance, but we know absolutely the set of possible selections and data values because we control them. We have no known means of giving up that control and passing it intact to the software.

As to the remainder of your response, you offer much fuel for my engine. However, we have basic disagreements whose truth-values remain somewhere in the future. I have no desire to infuriate you in any manner. Thus let us agree to disagree, letting the future have its feedback effect on our views. It should not affect our cooperating in the goals of Tunes. I believe there is a French term for "what will be will be". I do wish to thank you for the time you have spent.

From aswst16+@pitt.edu Fri, 06 Oct 2000 20:45:33 -0400 Date: Fri, 06 Oct 2000 20:45:33 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software

--On Friday, October 06, 2000 12:58 AM -0700 "Lynn H. Maxson" wrote:

> We are basic agreement here. That's why we are in disagreement with respect to software-initiated life. As far as I know all organism are carbon-based. I assume that's why we call the

Actually, that's not 100% true. We have found some bacteria in deep-sea vents which use a sulfur-based synthetic chain. Furthermore, life on other worlds, especially bacterial-analogues, seems quite common, and will have to deal with very different element ratios.

> What is it that you would do in software? Certainly not create a life form. With software you can only mimic. The best you can

This depends on what your definition of life is. If you limit it to carbon-based physical creatures, of course it cannot be created in software. I do not share your definition. I prefer that which you call logical equivalence --- if I cannot tell the behavior of a program from the expected behavior of a human mind, then I feel that the program may well be said to have a mind. This is the same test I use on other human beings, so why should it not apply to software?

> with a silicon crystal. Logically equivalent, yes. Identical, no. Is there a practical difference? Which one can you use to build the computer in which you will run your silicon generating software?

If I were so silly as to simulate a silicon crystal in software, though, I could certainly then start carving my virtual crystal into virtual chips. I think the example is somewhat silly --- we can already make crystals to almost-arbitrary specifications, whereas we definitely cannot make minds with the same level of precision. Software is being examined as a possible material for constructing those minds.

> Nothing which exists only in human-created systems is real outside that context. Not the physical rules. Not the chemical ones.

But is not our entire perception of the universe human-created? By your logic, nothing is provably real, and we're back to Descartes' idea that the only thing each of us knows is that he exists.

> useful. It simply means that they fill a need we have, not one of a universe which has no such problem, which has no needs period.

If the universe does not need rules, why does it obey them? I don't see how it could exist as a system without following rules.

> one for a universe which doesn't reflect on what's happening. If you want to accept a theory that all matter did not exist before the big bang simply because the mathematics dictates it, you may. I will simply assume it's a map error. Or it was an act of God, because we cannot have an effect without a cause.
I am willing to accept God as an explanation. I am furthermore willing to accept that if the mathematics dictates something and there is no counterevidence, it may be taken as true. If we can't accept the output of physics as true, what good is it?

> The problem is that you want to program something that doesn't use programming. No matter the genetic code or the cell differentiation they only spawn the abilities. They do not direct them. Ashby with his homeostat showed that you only needed an

That's not really true. Your genetic particularities continue to be expressed throughout life, and they can have a profound effect on the mental process. Consider the effect of the biological process of adolescence. Those hormones have a powerful effect on thought.

> difference which counts. First off, a neuron is not an on/off digital gate. One it gets "tired" and sometimes doesn't produce an output logically indicative of the input. Sometimes what it

Cell fatigue can happen, yes. On the other hand, a transistor may overheat. Systems have failure conditions.

> produces is not sufficient to excite the next connection depending upon its current state. What you get is a statistical mishmash of

And sometimes the output of a circuit is not 1. Again, I see no problem.

> Given how well you understand the retina, I surprised that you don't implement it with software and a host computer. I don't know what it would see, but maybe if you connect it to that which mimics the brain, you could be on your way.

Your idea is several years too late. Check the neuroengineering projects at UPenn. They're already past the retina and working their way back towards the occipital cortex.

> but that there was a means of exhibiting goal-seeking (adaptive) behavior structurally without a separation between the direction and the doing. In short it is builtin, integral within the structure. What we call "adaptive behavior" or even "life" arises

At the same time, though, one can easily say that the instructions stored in RAM/ROM/disk are part of the structure, since the charges representing them are part of the system. Sure, we can change them. We can also transfect neurons with arbitrary DNA. That can kill the brain, but you can also trash a computer by sending in the wrong instructions.

> well. Find one simultaneous, majority-logic computer architecture that has an HLL with the same capability. It's not that one or

Which capability do you mean? Emotions and the like? Of course we don't have it yet --- we don't know what instructions to give. I will point out, however, that Kasparov had a sense of playing a thinking and intelligent opponent while battling Deep Blue. Again, only logical equivalence, but for me that's a decent start.

> Why should any simple matter take two decades? It must not be that simple.

The phrase "simple matter of engineering", like "simple matter of programming", is highly sarcastic. Consult "SMOP" in the Jargon File if you wish.

> "Do you claim, then, that physics does not derive from the eminently logical system of mathematics?"
>
> I have no clue what connects this to the non-logic-circuit basis of an amoeba. For the record I make no such claim. Although you may get an argument from physicists.

Amoebas behave according to physical laws. Physical laws are mathematical/logical. Therefore, amoebas are constrained just as software is, and being constrained within a ruleset does not deny life.
> The point is that whatever sensory capability they have is integral with the brain. If they lose a sensory capability, that in no way diminishes the functionality of the brain: the capability remains. If they lose a sensory capability of the

This is not really true either. The brain is a "use it or lose it" system. If the visual neurons get no stimulation, they will shrink and die, and the brain now contains only a subset of the standard functionality.

> you either believe that life is formed only from carbon-based matter or you do not.

And I do not, nor do I see why I should believe this. Seems kind of geocentric to me.

> associate with life forms. Not the least of which is the lack of software distinguishable from the hardware. An organism is an integrated system and functions as such. You keep wanting to program that which requires no programming.

I don't see how I need this at all. Part of the Turing hypothesis is that it doesn't matter whether I've got my TM in hardware or a hardware/software combination; they are equivalent.

> specific program using random opcode selection. I will not concede that it is practical or that given any zillion of interconnected machines at a 1000MHz that it will occur in less than a million years. I leave it up to others more familiar with probability to give you the actual odds.

It depends significantly on the length of the program. "Hello, world" is doable. Win2000 probably isn't.

> Nevertheless you have your proof-of-concept even if useless. Now you propose to optimize a random process. I can only assume that you intend to do what we do now which is to remove the randomness through the use of fixed logic.

Of sorts. It seems to me that the bottleneck is more on the verification than the testing side. It might be worth coming up with a few fast heuristics to rapidly reject the majority of the output. (It doesn't matter if we reject a few correct programs with those, either; any program may be written in infinitely many ways.)
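To make that concrete, here is a toy sketch of the generate-and-filter idea in Python. The five-instruction stack machine, the target function, and the heuristic are all invented for the example; this only illustrates the shape of the search, it is not a proposal for generating real programs.

    # A toy generate-and-filter search: random "opcode" selection plus a
    # cheap rejection heuristic.  Everything here (the five-op stack
    # machine, the target 2*x + 1, the checks) is made up for illustration.
    import random

    OPS = ["PUSH_X", "PUSH_1", "PUSH_2", "ADD", "MUL"]

    def random_program(length):
        return [random.choice(OPS) for _ in range(length)]

    def plausible(prog):
        # Fast static heuristic: never pops an empty stack, leaves exactly
        # one result, and reads its input at least once.  Rejecting a few
        # correct programs is fine; equivalent programs abound.
        depth = 0
        for op in prog:
            depth += 1 if op.startswith("PUSH") else -1   # ADD/MUL pop 2, push 1
            if depth < 1:
                return False
        return depth == 1 and "PUSH_X" in prog

    def run(prog, x):
        # The "expensive" step: actually execute the candidate on an input.
        stack = []
        for op in prog:
            if op == "PUSH_X":
                stack.append(x)
            elif op == "PUSH_1":
                stack.append(1)
            elif op == "PUSH_2":
                stack.append(2)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == "ADD" else a * b)
        return stack[-1]

    def target(x):
        return 2 * x + 1          # the specification we are searching for

    random.seed(1)
    tries = 0
    while True:
        tries += 1
        prog = random_program(5)
        if not plausible(prog):                   # most candidates die here
            continue
        if all(run(prog, x) == target(x) for x in range(10)):
            print(tries, prog)                    # e.g. PUSH_X PUSH_2 MUL PUSH_1 ADD
            break

The cheap static check discards the overwhelming majority of candidates before any of them is ever executed, which is where the savings would have to come from.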
> path. What you do with either choice is completely determined (consistent) with the embedded logic. The software may use random selection, but there is nothing random in the embedded logic.

And thus I wonder once more what's so limiting about having a ruleset. Everything is consistent with some set of rules.

> You see it all hinges on what is included in compute. If you mean that which is possible on a von Neumann machine, the answer is no. Emotion is not an output of a process, but part and parcel of it. Emotion is a process as is volition, thinking, feeling, etc..

But computation may also be said to be a process. Furthermore, I see no proof that emotion is not the output of a computational process. (I also wonder if emotion is truly a requirement of mind, but that's another matter.)

> They are not separate nor separable from each other, but melded within the same overall process. As one who studied neuroscience you should know that. Don't make Descartes' error. Read the book.

But the very point of neuroscience *is* that the brain may be separated into functional areas and that those areas perform recognizable computations. If this were not true, we could not study it.

> other reason than it doesn't exist at this level. A neuron either fires or it does not depending upon the circumstances at that moment. That's it's only function at its level. If you want a

But a function is simply something which maps an input to an output in a consistent manner. The output is the firing; the input is the neuron's physiological environment. What can it be if not a function?
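A cartoon of that point in Python, with every constant and the noise model invented for the example (this is not a claim about real membrane physiology): the cell's "tiredness" and the statistical mishmash mentioned earlier are simply part of the input side of the mapping.

    # Illustrative toy neuron: output = f(inputs, internal state, noise).
    # All numbers are arbitrary; this is not a model of a real cell.
    import random

    class ToyNeuron:
        def __init__(self, threshold=1.0, fatigue_cost=0.3, recovery=0.1):
            self.threshold = threshold
            self.fatigue = 0.0        # part of "the circumstances at that moment"
            self.fatigue_cost = fatigue_cost
            self.recovery = recovery

        def step(self, inputs):
            # Noisy summation of whatever impinges on the cell this instant.
            drive = sum(inputs) + random.gauss(0.0, 0.05)
            # A "tired" cell needs a stronger drive before it will fire.
            fired = drive >= self.threshold + self.fatigue
            # Firing costs something; resting recovers it.
            if fired:
                self.fatigue += self.fatigue_cost
            else:
                self.fatigue = max(0.0, self.fatigue - self.recovery)
            return fired

    n = ToyNeuron()
    for t in range(10):
        # Same external input every step; the output still varies, because
        # the internal state (and the noise) are arguments of the function too.
        print(t, n.step([0.6, 0.7]))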
> Turing machine to execute or not execute billions of neurons simultaneously, be my guest. I guess it is one of those theoretical proof-of-concepts that you enjoy.

This is a bit closer to possibility than simple theory, though. It is possible to cause a brainlike configuration to self-assemble. That's a proven fact. Therefore, it is logically possible to write a bootstrapping program that will put together the neuron-simulation-units for you in a reasonable amount of time. Is it easy? No. I'd want the genome properly mapped to proteins and those proteins well-characterized before I'd be willing to try it. Nonetheless, it is possible.

> "Chain enough of those together to replicate the limbic circuits and you may well have artificial emotion."
>
> No.

Yes. If you want to make assertions, you'd better do more than just smile.

> I'm not aware that my claims have any less validity than yours. However I am more than willing to change it to highly improbable, mimicking it in the limit as you would life in software to say that you can't tell the difference.

Ah, but probabilities can be reduced. Infinity can't.

> Interesting. Both you and Fare hold that we are some decades away from any ability to state it one way or the other. I consider it short-sighted to pursue the unknown when we have yet to exhaust the known. I believe that you will only create life as such with

But how else will we get to the unknown? Again, there are an infinite number of possible programs. If we stick to the kinds of things we know how to write, we will never exhaust that space --- there's always one more feature or heuristic that could be slapped on.

> all its properties using carbon-based technology and never with von Neumann architecture and Turing-based software. There is a chemistry of life relegated to actual physical material that no matter how you mimic them in software will always have something missing.

And I claim that this chemistry is not important, that it's the computational functions of the neurons that matter. I cannot prove this, but it cannot be disproven.

> requires more than what we know currently. Fare wants to give software a "life of its own" except for "purpose" which we will retain. He doesn't see that the one contradicts the other, because life's processes, simultaneously present in the process, does not allow for such separation.

This is a good point, and I agree with you here. I don't think you could build truly intelligent software without it deciding to have its own purpose.

> simply suggest that you reexamine it. I see no sense in artificial life, because success means loss of a tool. Do you want to reinstitute slavery? Do you want yet another source of mis-communication? Do you think that artificial life offer us any more than what they could offer without it?

Success does not mean loss of a tool. If some programs are intelligent, that does not mean that all programs are. I believe that creating artificial life, as well as attempting to create physical life, is something which humans must do as part of our progress as an intelligent species. However, this is getting once more into the realm of theology. I also believe that by having other forms of life to compare ourselves to, we will have a deeper understanding of what it means to be alive. Obviously, it would be wrong to enslave intelligent programs. I don't see that we could, really. Active, sentient programs would be quite hard to control short of yanking the plug out of the wall.

> extend our capabilities. Who can be opposed to that? Artificial life, something that replicates what we are only thousands of times slower on 1,000,000MHz machines and at 100,000,000 times the cost, makes no sense at all in my opinion. Artificial limbs,

But it will not be that slow forever, and there is nothing in the laws of physics that says it must be that slow. Furthermore, it has at any point the opportunity to diverge from what we are. We cannot consciously rewire our brains. An intelligent program could. (This is obviously also a source of great danger if for some reason we mistreat our creations.)

> artificial organs, yes. I personally would prefer non-artificial either regenerated through biology.

For now, tissue-engineered stuff will be better. IMHO, at some point we're going to get around to improving on the design of organs, at which point the artificials may pull ahead once more.

> I see software and hardware as a tool. I don't see artificial life as such. Your choice.

It need not be a tool in the sense that a hammer is a tool. It could be a tool in the sense that a valued teammate is a tool. There are some things which computers do very well and which humans do poorly, and therefore we might want to ask intelligent machines to help us with those things. Of course, we need something to offer in return, even if it's only some processor cycles to run on. (I'm hoping that there'll be some aspect of human creativity that *does* turn out to be untransferable, so that AI is like us but not totally like us. We can then offer them those services in return.) Yes, this is speculation and science fiction. So what? At CMU, the Robotics program prides itself on being the only grad program to have arisen from a science fiction story. Robots were fiction once. Does that make them somehow less real?

From btanksley@hifn.com Fri, 6 Oct 2000 18:05:54 -0700 Date: Fri, 6 Oct 2000 18:05:54 -0700 From: btanksley@hifn.com btanksley@hifn.com Subject: Emergence of behavior through software

From: Alik Widge [mailto:aswst16+@pitt.edu]
> wrote:

>> We are basic agreement here. That's why we are in disagreement with respect to software-initiated life. As far as I know all organism are carbon-based. I assume that's why we call the

> Actually, that's not 100% true. We have found some bacteria in deep-sea vents which use a sulfur-based synthetic chain.

Erm -- I'm pretty sure they were carbon based. They 'oxidized' their food using sulfur rather than oxygen, though. But I'm not disagreeing with you; I don't see Lynn's point yet. It doesn't make sense to me that life could only be carbon-based.

> Furthermore, life on other worlds, especially bacterial-analogues, seems quite common, and will have to deal with very different element ratios.

Where did you find the information that life on other worlds is quite common? I wasn't aware that any had ever been discovered.

-Billy

From aswst16+@pitt.edu Fri, 06 Oct 2000 22:38:27 -0400 Date: Fri, 06 Oct 2000 22:38:27 -0400 From: Alik Widge aswst16+@pitt.edu Subject: Emergence of behavior through software

--On Friday, October 06, 2000 6:05 PM -0700 btanksley@hifn.com wrote:

> From: Alik Widge [mailto:aswst16+@pitt.edu]
>> wrote:
>
> Erm -- I'm pretty sure they were carbon based. They 'oxidized' their food using sulfur rather than oxygen, though. But I'm not disagreeing with you; I don't see Lynn's point yet. It doesn't make sense to me that life could only be carbon-based.
You're right, now that I think about it more closely; they have the same carbon-based scaffolding, and it's only the central engine that's sulfur-based. They were still a big shock to the bio community, IIRC.

> Where did you find the information that life on other worlds is quite common? I wasn't aware that any had ever been discovered.

I said "seems", but you're right again; that's a bit of a poor way to phrase it. What I'm trying to get at is that many scientists believe that there is a near-100% probability that life exists beyond Earth, simply because the odds for it seem very good (see also the Drake Equation). IMHO, at least some of that life is going to end up being non-carbon. Now, if two thousand years from now we've sampled several other worlds and found only carbon-based life, that's going to raise some pretty big questions.
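So that the Drake reference is not pure hand-waving, here is the usual form of the estimate multiplied out. Every value below is a guess inserted purely for illustration; the point is the shape of the argument, not the particular number it yields.

    # Back-of-the-envelope Drake equation.  All parameter values are
    # assumptions chosen only to show how the estimate is assembled.
    R_star = 10.0    # new stars formed per year in the galaxy (assumed)
    f_p    = 0.5     # fraction of stars with planetary systems (assumed)
    n_e    = 2.0     # potentially habitable planets per such system (assumed)
    f_l    = 0.1     # fraction of those on which life appears (assumed)
    f_i    = 0.01    # fraction of those developing intelligence (assumed)
    f_c    = 0.01    # fraction of those producing detectable signals (assumed)
    L      = 10000.0 # years such a civilization keeps signalling (assumed)

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(N)         # roughly 1 with these guesses; vary the factors and N swings wildly

Whether those guesses are reasonable is exactly the open question, which is why I said "seems".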