Mon, 17 Jul 2000 11:38:04 -0700
From: Lynn H. Maxson [mailto:firstname.lastname@example.org]
>"Er -- no, it won't. Any programmer you ask to
>code that algorithm for you
>will either drop to the floor laughing or will be
>coding until he's dead.
>Solving the problem that way is impossible."
>I love the challenge of the impossible, infinite,
>and too finitely large that you raise.
I'm glad I can continue entertaining ;-).
>programs do in fact terminate is a part of my
>reality. Thus I do not worry about computing in
>some manner when they will terminate, but about
>having that time as short as possible. I have no
>conflict with Rice's Theorem as I have no interest
>in computing what is provably impossible.<g> I
>deal exclusively with the possible, however
>challenging it may appear to others.
I have no quibble with this. I still get the strange feeling that you're
imagining that Rice's theorem only applies to people who have studied it,
and that the fact that you've denied that it applies to you will certainly
make your program work. Well, it so happens that I'm bored of talking about
it -- the conversation seems to be one-way on that subject. Let's switch to your next point.
>It does not strike you as strange that we can
>specify, analyze, design, and test systems as
>a whole, but that we cannot do so in construction.
What strikes me as strange is that sentence. What do you mean by "in
construction"? Do you mean "part by part"?
>We can specify, analyze, design, and test any
>system at any level down to its "atomic" units.
>Yet we cannot do so in construction.
Okay, that's not what you meant. (I should have read more carefully.)
>While such a
>range from the very smallest unit to the very
>largest can occur elsewhere in the software
>development process, it cannot occur in
>construction. Why? Once the programmer's laughing
>subsides, it is said (by you) to be impossible.
?????????? I don't even have a clue. What does "in construction" mean?
How could I have claimed something related to it to be impossible?
Please help. I'm trying to figure this out, but it's not making sense to
me. I honestly have not the faintest clue what you're talking about;
everything before this point is completely meaningless to me.
>Well, in my system what you want to do is
>determined by the set of input specifications
>selected which can range from atomic units to
>assemblies of any scope.
This almost makes sense to me -- would I be correct in guessing that you
made a typo here, and you should have written "input and output
specifications"? Given that correction, this not only makes sense, but even
seems like a good thing.
>The only writing which
>occurs within the process (thus excluding user
>documentation which lies outside) is that of
>I do not worry about programmers
>laughing because no programmers nor programming (as
>a separate activity) is involved: executables are
>produced directly from specifications as they do
>now currently in Prolog.
You're being silly. People who work in Prolog are still, by and large,
called "programmers". This practice will continue for the foreseeable
future. Furthermore, you're still going to need programmers if you're
ever going to implement your specification system -- it won't specify itself
until it's written.
>The only difference is
>that so are the results of analysis and design. In
>short the system (however defined) is developed as
>a whole throughout the entirety of the process.
Okay. This makes some sense. Vaguely.
I don't see how both problem analysis and solution design can be produced
from the specs; usually the specs are produced at the end of the Analysis
stage. It does make sense that your hypothetical compiler could produce the
solution after only being put through the analysis stage, and that does seem plausible.
>If you concede that C (and PL/I) works the way it
>does, then you know that within a single external
>procedure you can have multiple internal ones and
>within each further multiples to an
>"implementation-" (not language-) defined depth.
>For some reason this seems "normal" to you and yet
>extending it one level upward (to begin with) in
>which multiple external procedures are considered
>as a single unit of work is somehow impossible.
Strange -- this entire paragraph makes sense to me, but then you claim that
I called something impossible which I would never question. Are you
claiming that I'm questioning the existence of files? Or of libraries? How
about programs? What do you mean by saying that I claimed this was impossible?
The only things I claim to be impossible are "sorting general programs by
speed" and "generating all possible optimizations of a program". Oh, I also
shorthanded a couple of other things as impossible; in general, I claim a
process is impossible if it requires more transistors than there are quarks
in the universe (this seems safe, although it's not truly impossible). I
don't claim that a _result_ is impossible unless it's been proven so; for
example, you might actually be able to produce the perfect optimiser or code
generator; my only claim is that the method you describe will never work.
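For the record, the nesting the quoted paragraph describes is easy to picture. Here is a minimal sketch in Python (standing in for C/PL/I, with procedure names invented for illustration) of internal procedures nested inside an external one, plus a second external procedure, the two treated together as a single unit of work:

```python
# A sketch of the structure the quoted paragraph describes, using
# Python in place of PL/I: one "external" procedure containing
# "internal" procedures nested to an implementation-defined depth.

def external_proc(x):
    """An 'external' procedure, visible to the rest of the system."""
    def internal_a(v):          # first-level internal procedure
        def internal_b(w):      # second-level internal procedure
            return w * 2
        return internal_b(v) + 1
    return internal_a(x)

def another_external(x):
    """A second external procedure; nothing stops us from treating
    both externals as a single unit of work (e.g. one module)."""
    return external_proc(x) + external_proc(x + 1)

print(external_proc(3))     # 3*2 + 1 = 7
print(another_external(3))  # 7 + (4*2 + 1) = 16
```

Nobody calls this nesting impossible, and nobody calls grouping the two externals into one compilation unit impossible either; that was never the disputed point.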
>they are all part of a single application system
>(of demand- and frequency-based) programs
>interconnected through data stores (persistent
>data), something which we achieve as a matter of
>course in dataflow analysis, you regard it as
>"impossible" in programming.
I'm having trouble parsing this. None of what you describe is unique to
dataflow analysis; every field of work uses division of effort and
encapsulation, including programming. I just don't know what you're talking about.
>To make this transition easier let's switch to
>Prolog and logic programming. Here we are given a
>named set of "unordered" specifications containing
>goals (main and sub), rules, relationships, and
>data. From this the logic engine creates the
>organization (the logic) necessary to satisfy the
>(main) goal. Nothing prevents having more than one
>set of main goals (other than the implementation),
>having main goals for each of the programs of an
>application system all included within a single
>input stream. Far from being the impossible task
>you envision it is simply the addition of a list
>which contains as its entries the list created to
>represent the logic of a single program. It is no
>more difficult, in fact a lot easier, to do this
>with a set of programs as a single unit of work
>than it is to do them individually as we do
Of course. This is all fine. I've written programs in logical languages
before, and this is what happened. You keep claiming that I called these
things impossible. I didn't. I simply claimed that your optimiser design
was impossible, and your code generator design was impractical and
impossible. I still claim that.
I don't care one whit about your specification language. I never have. You
can implement it if you like; I'll neither hinder nor help you. I'm in
complete apathy about it. Well, aside from a reasonable certainty that
you'll never finish it, for two reasons: first, you have set your goals too
high to be accomplishable by a single person; and second, you're utterly
inept at communication.
Fix either of those two problems, and although I still won't care, you may
finish your project. Of the two problems, if I were you I'd fix the second;
I prefer to never lower my goals, and you obviously believe that your goals
are worthy (a feeling I very much respect). May I suggest that you start
by working on your use of antecedents; you're constantly using pronouns in
contexts where there are no clear antecedents.
>Now let's take a different tack built upon
>something on which we should agree. I am willing
>to concede for the moment the impossibility of an
>optimum solution if you will concede in turn that I
>at least can invoke existing code generation
>algorithms. Further that with these algorithms we
>can create code whose performance is "good enough".
>Now that we are off this "impossible dream" of
>optimized code generation we now have the ability
>to generate executables whose performance is (for
>the most part) good enough. In general the issue
>plaguing the IT profession is not the performance
>of our software, but our inability to develop and
>maintain it at a rate commensurate with user
>demands. Whenever supply falls below demand you
>create a "backlog". It makes little sense to
>propose a solution, particularly one based on
>language, that does not relieve this backlog, that
>does not allow our "supply" ability to keep pace
Hmm. I fail to see that as a problem to be fixed; on the contrary, it's our
livelihood. To me, the real problem is not supply and demand, but rather the
brittleness of the stuff we produce. However, I would be happy with a
solution which managed both conditions at the same time.
>Understand that this has been the major IT
>bottleneck for the last thirty years (or perhaps
>longer). We have thrown one HLL after another at
>it without in any manner slowing down the growth of
>the backlog. In the meantime our cost and the time
>involved in doing what we manage to do has
>continued to climb.
This is not the usual description of what has been long known as "The
Software Crisis". The usual description is that we have too little control
over the quality of what we produce: e.g. we promise an accounting system,
but produce instead a computer-crashing system.
>Now why is our maintenance backlog so great? The
>almost universal answer which came back was the
>distribution of processing for data (objects). It
>was so scattered, involved so many changes, and
>coordination of those changes that it could not
>occur at the rate with which user change requests
I've never once heard anyone suggest that "distribution of processing" (or
the more usual term, which is "cohesion") was the sole or even the major
cause of our problems. Cohesion has always been a concern; we want to
produce modules which have perfect cohesion. However, even if we could,
nobody would argue that this would solve the crisis.
The problem with software is twofold: first, it's very hard for users to
communicate with programmers what they want; and second, it's very hard for
programmers to communicate to computers what the users claim to want. Your
proposed specification language is certainly a way of communicating, so it's
actually possible that it might solve the entire problem, or maybe make it
more tractable in other ways. I personally don't think it'll help that
much, but it's undeniably possible.
>How do we solve this? End the distribution. Put
>all the processing within the scope of the data
>itself. Thus making a single change within an
>object will be automatically reflected in all its
>uses. In looking around we found something else to
>"borrow" from PARC, Smalltalk. Unfortunately much of
>its runtime performance fell outside the "good
>enough" boundaries. Fortunately Bell Labs offered
>yet another cheap solution, C++.
Good point. And more recently, Aspect Oriented Programming has attempted to
provide a solution to the cohesion problem in yet another way -- see the
papers linked to from http://aspectj.org. It's a very good system, although
still not a silver bullet.
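To make the "processing within the scope of the data" point concrete, here is a minimal Python sketch (the class and its rules are invented for illustration): the object owns its behavior, so changing one rule inside it is automatically reflected in every use:

```python
# An illustration of putting the processing within the scope of the
# data: instead of scattering balance-handling logic across many
# routines, the object owns it, so one change inside the class is
# reflected in all its uses.

class Account:
    def __init__(self, balance=0):
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:        # tightening this one rule changes
            raise ValueError("deposit must be positive")  # every caller
        self._balance += amount

    def balance(self):
        return self._balance

# Every caller goes through the object's methods, never the raw data.
a = Account()
a.deposit(100)
print(a.balance())  # 100
```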
>Does anyone reading this seriously believe that
>whatever esoteric, exotic, exclusive, elusive,
>eccentric, elegant, excellent, eclectic and
>exquisite feature(s) water brings to Slate that it
>will in and of itself resolve the backlog issue,
>resolve our inability to introduce changes as fast
>as the need arises? No. The fault does not lie
>with Slate or any other HLL.
>The fault lies in our implementation.
Huh? Implementation? Do you mean "our process"?
>Anyone involved in using an editor and creating an
>input source file in construction is guilty of
>producing a seam in a seamless process. It's that
>simple. We are not executing the software
>development process as we have defined it:
There's that word again: construction. In fact, you even used it in the
same phrase in which it was present earlier: "in construction". What do you
mean by it?
I don't recall defining the software development process as seamless, and
I don't understand how using an editor would be equivalent to creating a
seam. Extreme Programming is pretty close to seamless, but they use editors.
>If we did, if we automated all stages after the
>introduction (or selection) of a set of input
>specifications, if we take people out of the
>process beyond this "initial" input stage, then we
>could introduce changes as quickly as they occurred
>(and in fact faster which allows us to reduce the
>backlog eventually to zero).
I think I understand this, and I think it sounds good. I also think that
when you say "input specifications" you're not talking about specifications
which describe the input of the user's desired program; you're actually
describing specifications which are about to be input into your compiler.
Am I at last correct :-)?
I don't understand your point about being able to introduce changes faster
than they occur. Doesn't mean anything to me. I intuit, though, that the
point is minor.
>Nothing exotic or special is involved in that.
Hmm. Aside from your magical specification language which is completely
comprehensible to every user, yes. Nothing special. Oh, did you know that
most users can't picture the entire solution to their problem? (Most
programmers can't, either.) Your specification language would have to solve
that as well.
>Eliminate the designation of external and internal
I understand this.
>simply allowing a set of procedures
>whose internal logic contains the references which
>allows their logical organization (hierarchical
>functionality) to be done dynamically as occurs in
>every logic programming system.
I'm lost here.
>Then allow within
>that set of procedures multiple "root" procedures,
>those not invoked by any other, and create an
>executable for each functional hierarchy for each
I understand this. Oddly enough, I just a month ago wrote a program on a
mailing list which did exactly this.
>You do that simply by adding an iterative level
>which processes a list of root procedures into the
Processes into a process? Wouldn't this sentence be better had it never been written?
>Now my experience says that
>adding an iterative process which contains an
>existing process (or set of processes) is a piece
>of cake, far from the impossible that you assign to
When did I assign that impossibility? That's a bizarre thing to accuse me
of; you've never to my knowledge talked about this topic before. Also, I
wrote a program which does what you describe (it's a "topological sort"), so
I'm far from believing it impossible.
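Since the topological-sort program came up twice, here is a hedged Python sketch of its outline (the call graph is invented): find the "root" procedures, those no other procedure invokes, and emit each root's functional hierarchy with every caller ahead of its callees:

```python
# A sketch of the topological-sort program mentioned above: given a
# call graph (caller -> callees), find the "root" procedures (those
# not invoked by anyone) and list each root's hierarchy in an order
# where every procedure precedes the procedures it calls.

calls = {
    "main":   ["parse", "report"],
    "batch":  ["parse"],
    "parse":  ["lex"],
    "report": [],
    "lex":    [],
}

callees = {c for body in calls.values() for c in body}
roots = [p for p in calls if p not in callees]   # not invoked by anyone

def topo(root):
    """Reverse DFS postorder: a topological order of one hierarchy."""
    out, seen = [], set()
    def visit(p):
        if p in seen:
            return
        seen.add(p)
        for c in calls.get(p, []):
            visit(c)
        out.append(p)          # postorder: callees first
    visit(root)
    return out[::-1]           # reversed: callers first

print(roots)           # ['main', 'batch']
print(topo("main"))    # ['main', 'report', 'parse', 'lex']
```

Adding the iterative level over the list of roots is, as the quoted text says, a piece of cake; the disagreement was never about this.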