Lynn H. Maxson
Fri, 30 Jun 2000 23:59:09 -0700 (PDT)
Response to Jason Marshall.
"But we're coming to opposite conclusions based on
the same data, which means one (or both) of us are
not grasping the situation."
Or that one or both of us is not communicating well
enough to show that we have the same grasp on the
data.<g> As I frequently find such failures on my
part, let us assume that only one of us is to
blame.
"1) Exhaustive searches (searching all possible
outcomes) lead to optimal answers, occasionally in
suboptimal time."
True if we change the clause "searching all
possible outcomes" to "as if testing all possible
variations of the input". Remember this is not my
idea, but the one used in logic programming of
which Prolog is an example. This has been working
for 20+ years with an approach that began with
rule-based AI systems and has remained the same
since.
You are given an assertion. You attempt to prove
it false. For it to be false you must "test" all
possible enumerated variable states of the
assertion. Before you can do this you must ensure
that you have the logical means of performing the
test. Thus the two-stage proof process:
completeness (which means you have an established
means of testing) followed by exhaustive true/false
testing. At the end you have either no true
instances (the assertion is false) or one or more
true instances (variable state values in which the
assertion is true).
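This two-stage process can be sketched in a few
lines of Python (the assertion and the variable
domains here are invented purely for illustration):
enumerate every variable state, test the assertion,
and collect the true instances.

```python
from itertools import product

# Invented assertion over small enumerated domains: x + y == z.
def assertion(x, y, z):
    return x + y == z

domains = {"x": range(3), "y": range(3), "z": range(5)}

# Stage two: exhaustive true/false testing of every
# enumerated variable state of the assertion.
true_instances = [
    dict(zip(domains, state))
    for state in product(*domains.values())
    if assertion(*state)
]

# Either no true instances (the assertion is false) or one
# or more variable-state sets in which it is true.
print(len(true_instances))  # prints 9
```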
This occurs in every SQL query, every rule-based AI
system, every Prolog implementation. Believe me, I
didn't make it up, nor is it my processor. I am
simply using what is field-tested and established.
If you accept that it will employ heuristics as
part of its completeness proof to avoid going down
paths which are provably "false" (something at
least some of these systems do), then the actual
"test" results are logically equivalent to those in
which no such heuristics are employed.
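A small sketch of that logical equivalence, again
over an invented assertion: a pruning heuristic
skips provably false branches, yet yields exactly
the same set of true instances as the brute-force
test.

```python
from itertools import product

def assertion(x, y, z):
    return x + y == z

xs, ys, zs = range(3), range(3), range(5)

# Brute force: test every enumerated state.
brute = {s for s in product(xs, ys, zs) if assertion(*s)}

# With a pruning heuristic: skip branches that are provably
# "false" (if x alone exceeds z, no nonnegative y can give
# x + y == z), so those states are never tested at all.
pruned = set()
for x in xs:
    for z in zs:
        if x > z:
            continue  # provably false branch, skipped
        for y in ys:
            if assertion(x, y, z):
                pruned.add((x, y, z))

# The heuristic changes the work done, not the result.
print(brute == pruned)  # prints True
```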
I will not debate the issue of suboptimal time.
Certainly even some SQL queries get canceled for
failing to complete within a "desired" interval.
Nor will I claim that the true instances (the set
of variable state sets) are optimal, only that they
are true instances satisfying the assertion.
It is possible that the assertion, like a SQL
query, can employ "nesting". In logic programming
you may have a "main" goal (or goals) supported by
a hierarchy of sub-goals, all of which must go
through the same two-stage proof process. In that
manner you may
"force" the selection of a true instance of an
optimal solution, knowing full well that other
equally optimal true instances exist.
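A sketch of such "nesting" over the same invented
assertion: a sub-goal filters the true instances,
and a main goal forces the selection of an instance
that is optimal under an added criterion.

```python
from itertools import product

def assertion(x, y, z):
    return x + y == z

# All true instances of the invented assertion.
states = [s for s in product(range(3), range(3), range(5))
          if assertion(*s)]

# Sub-goal: keep only instances meeting an added condition
# (x and y both nonzero -- an invented constraint).
sub_goal = [s for s in states if s[0] > 0 and s[1] > 0]

# "Main" goal: force selection of a true instance that is
# optimal under a chosen criterion (smallest z), knowing
# other equally optimal true instances may exist elsewhere.
best_z = min(z for (_, _, z) in sub_goal)
chosen = [s for s in sub_goal if s[2] == best_z]
print(chosen)  # prints [(1, 1, 2)]
```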
"2) There exists a large block of problems that
have prohibitively (astronomically) expensive
search times for optimal solutions."
Yes. But again I point out that is not the issue
here. The issue is minimizing what a user has to
express as the problem set. The second issue lies
in "formulating" a solution set that "matches" it.
Regardless of the time involved to complete the
exhaustive true/false testing, the "formulation",
that which we now employ programmers to encode, has
been automated: the user has been able to code
(write) what he wants without worrying about how it
is done. Moreover the problem set and the solution
set are a "perfect" match, something which is
seldom achieved in alternative methods.
Since the beginning of programming we have had
certain "solutions" which were either "cost
prohibitive" or "exceeded time constraints". The
hardware technology in that time has improved such
that the class of those "solutions" has continued
to diminish. With the delivery of IBM's ASCI White
system that pool has gotten even smaller.
But also notice what this means. We at least know
how to solve the problem. What we have to express
is minimal. The issue here is "our" productivity.
That's why we have evolved the generations of
programming languages: people productivity.
"3) There are heuristics that can get you near
optimal answers in optimal time."
For the moment, let's just answer, "yes".
"4) The only alternative to heuristic answers is
..."
I don't quite understand why an answer
heuristically solved is not optimal, meaning the
"right" answer. In structured analysis and design
when you go from a system of dataflow diagrams to
one of structure charts, you employ a heuristic
approach. In that approach you start from any
entry point and proceed (forward) through the
processes. You do the same proceeding backward
from an exit point. When you come to the process
where the input ceases and the output begins, you
have found your "central transform". In a software
system you have a hierarchy of such transforms,
each of which becomes a "process" in your structure
chart.
As one who has used and taught this method
extensively (all based on Larry Constantine's work,
who also provided the heuristic) I will assert to
you that it provides an "optimal" result, one which
occurs in far less time than the methods it
replaced.
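As a rough illustration only (the process names and
their input/output classification are invented),
the central-transform heuristic amounts to walking
forward past the input-refining processes and
backward past the output-formatting ones:

```python
# Invented linear chain of processes, each classified as
# refining input, transforming, or formatting output.
chain = [
    ("read", "input"), ("validate", "input"),
    ("compute", "transform"),
    ("format", "output"), ("write", "output"),
]

# Forward from the entry: find where input refinement ceases.
first = next(i for i, (_, kind) in enumerate(chain)
             if kind != "input")
# Backward from the exit: find where output formatting begins.
last = max(i for i, (_, kind) in enumerate(chain)
           if kind != "output")

# What remains in between is the "central transform".
central = [name for name, _ in chain[first:last + 1]]
print(central)  # prints ['compute']
```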
If I employ a heuristic in logic programming, it
appears as no more than a controlling condition.
If it leads to "sub-optimal" results, who gives a
damn? What it will lead to are all the true
instances which satisfy the condition (as well as
others). There is no magic in the box. It does as
it is told. It simply saves you on the telling by
taking all the rest on its shoulders.
"5) Your processor is frequently asked to perform
tasks of type #2."
It's not my processor, but I assume that such
problems do appear with logic programming
implementations.
"6) Modern optimizers use heuristics for all
problems of type #2, and usually use algorithms for
problems of type #1."
Wrong. All software-encoded heuristics exist as
algorithms. The requirement of all computing
hardware and software is that it must conform to
the rules of logic. You have no non-logical means
of expressing a heuristic in software, only
logical. Thus it falls into the category of
algorithm. Sorry.<g> Then again maybe it is a
feature of Slate.<g>
"#8 is going to be a little hard to disprove, since
there exist mathematical proofs to back it up. You
cannot find optimal solutions to some problems in
trivially finite time and space, and these problems
feature prominently in the task sets for Turing
machines. So either we're mistaken about what we
think you said, or you're mistaken about what you
said."
I had to jump ahead here as it got a bit confusing.
The answer is relatively simple. You are mistaken
about what you think I said, whether I
miscommunicated it or you misunderstood. You're
hung up on this optimal thing. There is a
difference between a "true instance" and an
"optimal" solution. The one need not be identical
to the other.
Logic programming determines true instances (or
the lack of them) through a "logically" exhaustive
true/false proof process: the execution of a
dynamically organized algorithm over a (logical)
enumeration of the set of input variable states.
That's all it does whether it takes 10 microseconds
or 10,000 years. Or 10,000,000,000.
There is a difference between solvability and
computability with respect to a Turing machine.
Even if something is computable, meaning that we
can encode it as an algorithm, it may never
complete. So what?
The issue here (I assume) is code optimization,
something which obviously does not fall into this
"impossible" category whether heuristically
determined or not. We can submit every known code
optimizing heuristic (which has already been
encoded into an algorithm) and assert that there is
at least one for every primitive operator and
control structure on a given piece of hardware that
is equal to or better than any other. Now you must
at least grant me that (logically).<g>
It is not necessary to enumerate all possible
instruction sequences, beginning with a sequence of
one, then two, etc., in order to determine in a
"brute force" exhaustive-test manner the logically
equivalent matches between a machine-independent
specification and a machine-dependent one. It
doesn't even require
heuristics. It is simple logic based on
instruction categories and the interdependencies
(rules) which govern their possible sequences.
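A toy sketch of that rule-based matching (every
instruction name and cost below is invented):
instead of brute-force enumeration of sequences, a
table of category rules maps each machine-
independent operation to its candidate machine-
dependent sequences, and simple logic picks among
them.

```python
# Invented rule table: each machine-independent operation
# maps to candidate machine-dependent sequences with costs.
RULES = {
    "add":      [(("ADD r1, r2",), 1),
                 (("LOAD r1, m", "ADD r1, r2"), 3)],
    "mul_by_2": [(("SHL r1, 1",), 1),
                 (("MUL r1, 2",), 3)],
}

def select(op):
    """Pick the cheapest logically equivalent sequence for
    a machine-independent operation, by rule rather than by
    brute-force enumeration of all possible sequences."""
    return min(RULES[op], key=lambda rule: rule[1])[0]

print(select("mul_by_2"))  # prints ('SHL r1, 1',)
```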
Unfortunately all too often users get results that
appear as if monkeys had been randomly writing
code. That, in turn, as well as the dynamics of
their environment, brings about the need for
changes: the maintenance cross which IT must bear.
Now a
specification language in and of itself does not
guarantee that you will get it "right" every time.
After all the specification writers are only human.
But what a specification language does guarantee is
that the same place that you got it wrong is where
you can do it right.
Look at the software development process from the
point of entry of user requirement/changes:
specification, analysis, design, construction, and
testing. The only point of human involvement is at
the very beginning in the specification stage.
That means you have only "specifiers" and
unemployed analysts, designers, constructors
(programmers), and testers.<g> The only "slow"
part here is where the humans are involved.
Regardless of how slow the specifiers are, they
can implement changes faster than the dynamics of
the users' environment. Those who can, stay; the
remainder join the long line of the unemployed. As
a user who continually finds his change requests in
the backlog, I don't care who gets employed or
unemployed as long as that backlog disappears, i.e.
the ability to get changes effected in "real time".
Now Prolog doesn't offer this, as they opted to
present their specification language in the uniform
of a programming language. That means they
allow the creation of source files as input and
limit the scope of compilation to a single external
procedure (or its logical equivalent). Thus they
fall prey to the same failures as the other
programming languages. It's no good to say one is
better than the other if it is a matter of
selecting the lesser of two evils. Instead it's
time that we changed the nature, the cost, and the
time of our offering. Now Prolog (the language)
could do that if the implementations understood
anything about the software development process and
not just a "small" piece of it.<g>