Proposals

Lynn H. Maxson lmaxson@pacbell.net
Thu, 29 Jun 2000 19:01:20 -0700 (PDT)


Billy wrote:
"Pardon, I deleted your story because it made no 
sense to me whatsoever.  I mean, I understood the 
words, sentences, and paragraphs, but I don't
understand why it has anything at ALL to do with 
your point."

The point is that it is an acceptable practice by 
at least one vendor to translate different language 
source to a single (machine-independent) format and 
then apply a machine-dependent optimization 
process.  That's what's done today.  I haven't 
proposed anything different except in terms of 
implementing the optimization process.

"Because you can't do that.  It's impossible."

Therein lies the crux of the matter.  Having 
programmed in assembler for a number of years and 
certainly having been exposed to the assembly 
language version of compiler output, I tend to 
think it more possible than you do.  I don't care what
the complexity index is of a computing algorithm.  
What I do know is that it translates into an 
executable sequence in postfix operator notation.  
That's how a stack works.  It may vary due to the 
availability or not of registers on a target 
machine, i.e. how many I load either directly to 
registers or to a stack before executing a
sequence of operators, but the process has been 
"copied" since Burroughs introduced it in hardware 
in its B5000 series in the late fifties, early
sixties.
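
Purely to illustrate what I mean (a Python sketch
of my own, with an invented toy expression, not
anyone's actual compiler), the translation to
postfix and its execution against an operand stack
look roughly like this:

    PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}

    def to_postfix(tokens):
        # shunting-yard without parentheses: operands pass
        # straight through, operators wait on a stack
        output, ops = [], []
        for tok in tokens:
            if tok in PRECEDENCE:
                while ops and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                    output.append(ops.pop())
                ops.append(tok)
            else:
                output.append(tok)
        while ops:
            output.append(ops.pop())
        return output

    def evaluate(postfix):
        # execute the postfix sequence with an operand stack,
        # the way a stack machine runs compiler output
        stack = []
        for tok in postfix:
            if tok in PRECEDENCE:
                b, a = stack.pop(), stack.pop()
                stack.append({'+': a + b, '-': a - b,
                              '*': a * b, '/': a / b}[tok])
            else:
                stack.append(float(tok))
        return stack.pop()

    print(to_postfix(['2', '+', '3', '*', '4']))
    # ['2', '3', '4', '*', '+']
    print(evaluate(['2', '3', '4', '*', '+']))
    # 14.0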

Most of the languages have been fairly restrictive 
in terms of typing unless, like me, you have been 
privileged to program in PL/I.  When you compute an 
expression of mixed operators and mixed operands 
(character strings, binary integers, variable 
precision fixed decimal, and floating point), you 
get to read some internal gyrations to get it all 
to come out correctly.  Most other programming 
languages require that you explicitly convert types
before using them in a common expression.  Maybe 
I'm just used to a programming language world where 
they did not ask how difficult it was, just do 
it.<g>
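
To make the contrast concrete, here is a small
sketch of my own in Python (the coercion rules are
a caricature, not what PL/I actually specifies):
the first function is the explicit conversion most
languages force on you, the second quietly performs
the "internal gyrations" for you.

    from decimal import Decimal

    # What most languages make you write: convert every
    # operand yourself before it enters the expression.
    def explicit_sum(char_str, binary_int, fixed_dec, float_val):
        return (float(char_str) + float(binary_int) +
                float(fixed_dec) + float_val)

    # A caricature of implicit conversion: pick a common
    # target type and quietly insert the conversions.
    def implicit_sum(*operands):
        def coerce(x):
            if isinstance(x, str):
                return float(x)      # character string -> numeric
            if isinstance(x, Decimal):
                return float(x)      # fixed decimal -> float
            return float(x)          # integers and floats
        return sum(coerce(x) for x in operands)

    print(explicit_sum("1.5", 2, Decimal("3.25"), 4.0))   # 10.75
    print(implicit_sum("1.5", 2, Decimal("3.25"), 4.0))   # 10.75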

I have offered a much broader look at the 
optimization "problem" in a response to Jason 
Marshall.  If you followed that, the key is not in 
this code generation or optimization here, which 
frankly is a done deal (and has been since the 
first program), but in generating "unnecessary" 
code.  This arises, IMHO, from the rather foolish 
need to maintain the boundaries of "objects" 
(including processes as objects) once "past" the
source interface.

Contrary to these ungodly volumes of logical 
equivalents through translation from one form 
(symbolic) to another (actual), the translation is 
one of "reduction", of tossing out that which on 
this side of the user interface that which is no 
longer necessary.  I find it interesting that 
somehow this lies outside the "considered" range of 
reflective programming when it clearly exists in 
the "descriptive".  I am doing or proposing no more 
than what Fare expresses in defining reflective.  
If it is "impossible", take it up with him.<g>

"No, it's not -- you were correct when you claimed 
above that all modern systems use a process of 
discovery which is not contained in the compiler.
AI systems in particular work ENTIRELY differently; 
I don't know where you get off claiming that neural 
nets have any sort of concept of logical 
equivalencies (neural nets are analog devices).  
Prolog is effectively just another compiler.  SQL 
is an interpreted language, and although many
companies try to accelerate it, they certainly 
DON'T try to apply all possible logical 
transformations to it."

Ah, where to begin?  Prolog is effectively another 
compiler due to its implementation, something its 
advocates screwed up on in reducing a specification 
language to function in the construction stage 
only.  To have not done so would have meant 
incorporating the results of analysis and design 
(both of which derive directly from the 
completeness proof) between the specification 
process (translation of user input) in the 
specification stage (first step).  You see they 
broke up (made discontinuous) a continuous software 
development process.  They can hardly be faulted, 
because they had to compete with everyone else who 
was guilty of the same crime.  To know if you are 
guilty look at your development tools in 
construction to see if you can create new input 
source files or if the scope of compilation is no 
more nor less than a single external procedure.

If the Prolog people then get off a "batch" 
compiler and go to an interactive tool which 
accepts an input specification set (which 
determines the scope of compilation) and 
automatically produces the results of analysis 
(dataflow) and design (structure charts) as well as 
the construction (completeness test) and testing 
(exhaustive true/false) which they currently 
provide, we (and they) will come much closer to 
having what we ought to have in any specification 
language.<g>

Having done considerable SQL programming in PL/I 
and COBOL and gone through the "binding" process 
required, I will assure you that it is not always 
interpretive (or at least not completely so).  The 
biggest problem that you have from a performance 
view (again IMHO) lies from dealing with "purists" 
who allow nothing but third normal form.  No sense 
trying to explain to them that the performance 
issue lies in minimizing physical I/O when they can 
play in such esoteric "heavens".<g>

As to rule-based AI expert systems they work 
internally exactly like Prolog's logic engine.  
Divorce yourself from "all possible logically 
equivalent forms of an executable", accept that they 
have only one executable form (at a time), and that 
the exhaustive true/false only occurs with their 
input however generated or presented.  They work 
the same.
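
To illustrate what I mean by "they work the same",
here is a minimal forward-chaining sketch in Python
(the rules and facts are invented for the example):
one executable form of the rules at a time, with
the exhaustive true/false testing driven entirely
by the input facts.

    # Facts are strings; a rule is (set of premises, conclusion).
    RULES = [
        ({"has_feathers"}, "is_bird"),
        ({"is_bird", "can_fly"}, "can_migrate"),
    ]

    def infer(facts, rules=RULES):
        facts = set(facts)
        changed = True
        while changed:               # repeat to a fixed point
            changed = False
            for premises, conclusion in rules:
                # exhaustive true/false test of each rule
                # against the current facts
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(sorted(infer({"has_feathers", "can_fly"})))
    # ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']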

I have several neural network programs.  Though 
their graphical descriptions of boxes and 
connections suggest (to you) that they are analog, 
they are not.  I have worked with analog computers.  
Believe me, they operate differently.  Unless you 
have a physical A/D converter in your computer, no 
program is performing analog operations: it is pure 
digital.

I only continue to mention neural nets because 
Jason Marshall introduced the notion of a heuristic 
as a uniquely human activity.  If I had a set of 
successful and unsuccessful results, I may train a 
neural net to "wing it" if I cannot come up with 
the necessary rules (only examples).  Now I have 
been scolded more than once for even suggesting 
that this is possible with neural nets.  The only 
reason I hold out some hope is my experience with 
W. Ross Ashby (another of the early cyberneticians) 
in his "Design for a Brain".

The point is that you have to be willing to let a 
system fail (die) as "naturally" as surviving.  It is 
possible that it will never do what it was supposed 
to do.  You can't say what it was designed to do, 
because you have no way of incorporating design 
(logic) in it.  I reference you to Ashby's 
"homeostat".

Now given that someone has allowed up to 60,000 
desktop computers to pursue a task, is it possible 
to allow reflective programming to occur in which 
the behavior of the system, of the components 
within the system, will adjust the ratio of 
rule-based AI to neural-net depending upon the 
current "state" of the system?  In other words 
instead of insisting on their separation, can we 
possibly meld them dynamically in a cooperative 
process?  Ah, but that is something else entirely.
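
Purely as a sketch of what I mean by adjusting the
ratio dynamically (Python again, with an invented
"state" measure, namely the fraction of tasks the
rules could answer), something like this:

    # Invented rule base and stand-in net, for illustration.
    def rule_engine(task):
        rules = {"ping": "pong"}
        return rules.get(task)       # None means no rule applies

    def neural_net(task):
        return "best guess for " + task

    def dispatch(tasks):
        handled_by_rules, results = 0, []
        for task in tasks:
            answer = rule_engine(task)
            if answer is not None:
                handled_by_rules += 1
            else:
                answer = neural_net(task)   # let the net wing it
            results.append(answer)
        # the "state": what fraction the rules could handle;
        # a real system would adapt the mix on measures like this
        return results, handled_by_rules / len(tasks)

    print(dispatch(["ping", "weather", "ping"]))
    # (['pong', 'best guess for weather', 'pong'], 0.6666666666666666)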

"I agree that your goals are desirable.  I disagree 
that they're possible."

In that sense we are agreed on one thing.  What 
remains then is to resolve where we disagree.  You 
may be correct.  Maybe current technology (hardware 
or software) will not support it, so that we must 
wait until a more appropriate moment.  Or I may 
convince you (and the remainder of this austere 
audience) that if we engage in a common rethinking 
process where we allow the different thoughts to 
percolate we might very well find the impossible 
possible.

"It's not?  Why do you believe that the world is 
actually made of objects?  Why should the world not 
be actually made of processes?  Or perhaps some
combination of the two?"

Well, I agree with the response offered by Jason 
Marshall, which is essentially what you suggest.  
The point of raising it lies in rethinking these 
things through: why should one view obscure the 
other?  Why not a greater parity?  I think when we 
talk about "classless objects" or "no multiple 
inheritance" (perhaps even no inheritance at all) 
or "namespaces" instead of the process world of 
"names" that we insert unnecessary "blinders", 
keeping us from a more balanced and complete view 
the composition of our universe, even if it is one 
of our own creation.

I do believe that both exist concurrently 
(simultaneously) and that one has a logical 
equivalent in the other.  This allows one form, for 
user convenience (productivity), to exist at that 
level while being transformed internally into a 
form more efficient in execution.  This view does 
not assert that 
one is "better" than the other, only that one may 
be more appropriate than the other in a particular 
context or for a particular use.

At least until someone provides a unification.<g>

Again I hope I have communicated better on some 
issues without raising more confusion overall.  If 
there is something you feel is still incomplete or in 
error, then let's continue this process.  I do it 
because each time I learn something else about what 
I thought I knew as well as some things which I 
obviously didn't.<g>