Fare's response on threads

Derek L. VerLee dlverlee@mtu.edu
Sun, 17 Sep 2000 06:13:42 -0400 (EDT)


All right, I had trouble deciding whether this should be a personal
reply or a reply to the list, but at the risk of filling your
mailboxes with MoreCrap(TM), I proceed with my thoughts, at the
further risk of adding emboggled reply to emboggled reply.  I must
also disclaim that this is all based on my concept of the way things
are, as, of course, is anything I write. :P  (As if we both haven't
apologised and disclaimed enough already!)

First, I don't think anyone is claiming sentience or free will in
the way that humans experience it, or indeed in anything but the
most stretched definitions of those words.

The way I see it can be summed up as follows: the highest goal of
computer science is to automate that which can be automated.  Most
computer scientists, it seems to me, have forgotten this high goal
(with the exception of the TUNES crew, which is one of the things
that has me so intrigued with TUNES).  The immediate issue that
arises is: how much is it possible to automate?  If we wanted to get
really deep, we could ask whether the process of sentience itself
can be automated, but that is far from the point of TUNES.  TUNES, I
gather, is more about what we can automate with what we know so far.
Metaprogramming, for example, is saying, "Hey, some aspects of
programming are fairly mechanical and, it seems, can be described
procedurally.  Let's find out how much of programming itself can be
described as an exact process, and write a program (or create a
system) that can interpret 'programs' that describe programs."
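
To make that last idea concrete, here is a toy sketch of my own in
Python (nothing from TUNES itself, and all the names are just mine):
a "program" is plain data describing a computation, a tiny
interpreter turns that description into something runnable, and a
second program rewrites the description before it ever runs.  That
rewriting step is exactly the mechanical part of programming being
done by a program.

# Toy metaprogramming sketch: programs represented as data,
# interpreted and transformed by other programs.

# Table of primitive operations the interpreter understands.
OPS = {
    "add": lambda x, n: x + n,
    "mul": lambda x, n: x * n,
}

def compile_pipeline(description):
    """Turn a list of (op, arg) steps into a callable function."""
    def run(value):
        for op, arg in description:
            value = OPS[op](value, arg)
        return value
    return run

def fold_adds(description):
    """A trivial 'optimizer': merge consecutive add steps.

    It operates on the description itself, not on running code --
    one program reading and rewriting another program.
    """
    folded = []
    for op, arg in description:
        if folded and folded[-1][0] == "add" and op == "add":
            folded[-1] = ("add", folded[-1][1] + arg)
        else:
            folded.append((op, arg))
    return folded

spec = [("add", 3), ("add", 4), ("mul", 2)]   # the "program", as data
f = compile_pipeline(fold_adds(spec))         # rewrite, then interpret
print(f(10))                                  # (10 + 7) * 2 = 34

Of course the interpreter and the optimizer were both written, in
every detail, by a human -- which I suspect is exactly Lynn's point
below.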

That said, I think Lynn does have a point in asking how much of this
stuff can actually be done.  I know many of you are actively doing
research, and I admit to not having read through most (if not all)
of the relevant papers on the subject.  However, I think what we're
both wondering is: just how far out is this stuff you're talking
about?  Is it only a couple of feasible steps from what we already
know can be done, or is it more like "what we are optimistically
fairly certain might be possible"?

Either way, it's got my full support! :)

-Derek "The Moebious" VerLee


                            \  /
                        -----><-----  (do you really believe this?)
                            /  \


On Sat, 16 Sep 2000, Lynn H. Maxson wrote:

> As the designated "moron" of these hallowed chambers, it should 
> surprise no one that at times I get confused.  In this instance 
> the confusion lies with Fare's response to Derek VerLee's inquiry 
> about threads within Tunes.  Maybe the phrasing does me in.
> 
> I refer to the phrase in question: "...the system must learn to 
> dynamically decide for itself; if you think some method is better 
> than others in some cases, you may tell the system about it."  You 
> see, I don't know how any software system discussed here makes any 
> decision in any manner except as it is told.  In short, it is not 
> an option: the system and what it does are not independent of the 
> telling thereof.
> 
> I do not oppose evolving software-based automata into a sentient 
> form.  I simply don't know how to do it, have never seen it done, 
> and no one within this group has offered any operational means to 
> achieve it.  I know I have questioned this interpretation of 
> "reflexive" before and received assurances that it did not "cross 
> the line" (from automata to sentient).
> 
> Unless someone believes that you can initiate an automata process 
> and, with the addition of a "little something", have it cross the 
> line from deterministic to free will (and I doubt anyone, even with 
> "dynamic expert system optimization ... through monitoring by a 
> meta-system", has pulled this one off), any discussion of such a 
> system in which the body of tellers appears separate is, quite 
> frankly, false to fact.
> 
> Whether you have 100% monitoring or some less extensive sampling 
> rate, no reflection will occur in the automata on their own.  They 
> cannot reflect on that which we do not permit.  I may have missed 
> it, but so far I have seen no example of software capable of 
> producing higher-level abstractions on its own nor of finding 
> reusable patterns of logically equivalent objects.  In short, such 
> software does not "understand" the instruction sequences it 
> executes and certainly has no means on its own of constructing 
> (and understanding) higher-level constructs as we do with the 
> language we use in describing such processes.
> 
> It is one thing to talk the talk.  It is quite another to walk the 
> walk.  The trick lies in taking our talk to where it can walk.  No 
> system, no software system ever written, does that trick without 
> assistance, without deliberate instruction from the talkers.  That 
> includes meta-systems, meta-objects, and meta-programming.  
> 
> "I was looking back to see if you were looking back to see if I 
> was looking back to see if you were looking back at me" is a human 
> activity.  No means except human means exists to transcribe it 
> into software.  No software however cleverly written will ever 
> "reflect" in any human sense of the verb.  It cannot "understand" 
> what it is doing and without that understanding it cannot 
> "reflect" on what it is doing.  Reflection however we instill it 
> in software remains something of our doing with the software only 
> providing the means.
> 
> Therefore I must protest any claim or inference that the "system" 
> does anything on its own, dynamic or otherwise, without our 
> determining it in every detail.  I don't believe that we serve the 
> "lofty" goals of Tunes in treating software as if it were 
> something of an independent third party, a peer of its 
> progenitors.  Whatever it does, it does because we told it to do 
> so.  In every instance.