Lynn H. Maxson
Sun, 16 Jul 2000 18:04:38 -0700 (PDT)

Billy wrote:
"Er -- no, it won't.  Any programmer you ask to 
code that algorithm for you
will either drop to the floor laughing or will be 
coding until he's dead.
Solving the problem that way is impossible."

I love the challenge of the impossible, the 
infinite, and the too finitely large that you 
raise.  That programs do in fact terminate is part 
of my reality.  Thus I do not worry about computing 
in some manner when they will terminate, but about 
making that time as short as possible.  I have no 
conflict with Rice's Theorem, as I have no interest 
in computing what is provably impossible.<g>  I 
deal exclusively with the possible, however 
challenging it may appear to others.

Does it not strike you as strange that we can 
specify, analyze, design, and test systems as a 
whole, but that we cannot do so in construction?  
We can specify, analyze, design, and test any 
system at any level down to its "atomic" units.  
Yet we cannot do so in construction.  While such a 
range from the very smallest unit to the very 
largest can occur elsewhere in the software 
development process, it cannot occur in 
construction.  Why?  Once the programmer's laughing 
subsides, it is said (by you) to be impossible.

Well, in my system what you want to do is 
determined by the set of input specifications 
selected, which can range from atomic units to 
assemblies of any scope.  The only writing which 
occurs within the process (thus excluding user 
documentation, which lies outside it) is that of 
specifications.  I do not worry about programmers 
laughing because neither programmers nor 
programming (as a separate activity) is involved: 
executables are produced directly from 
specifications, as they are now in Prolog.  The 
only difference is that so are the results of 
analysis and design.  In short, the system (however 
defined) is developed as a whole throughout the 
entirety of the process.

If you concede that C (and PL/I) works the way it 
does, then you know that within a single external 
procedure you can have multiple internal ones and 
within each further multiples to an 
"implementation-" (not language-) defined depth.  
For some reason this seems "normal" to you and yet 
extending it one level upward (to begin with) in 
which multiple external procedures are considered 
as a single unit of work is somehow impossible.  If 
they are all part of a single application system of 
demand- and frequency-based programs interconnected 
through data stores (persistent data), something 
which we achieve as a matter of course in dataflow 
analysis, you regard it as "impossible" in 
programming.
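A minimal sketch of the nesting described, in 
Python (all names here are hypothetical 
illustrations, not anyone's actual system): 
internal procedures nest inside an external one, 
and the "one level up" is nothing more than a list 
of such external units treated as a single unit of 
work.

```python
def external():            # one external procedure
    def internal_1():      # internal procedures ...
        def deeper():      # ... nested to an implementation-defined depth
            return 1
        return deeper()
    def internal_2():
        return 2
    return internal_1() + internal_2()

# Extending one level upward: several external procedures
# considered as a single unit of work.
unit_of_work = [external, external]
print(sum(f() for f in unit_of_work))   # each external still runs intact
```

Nothing about the upward extension changes the 
externals themselves; the list is purely an 
additional level of organization.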

To make this transition easier let's switch to 
Prolog and logic programming.  Here we are given a 
named set of "unordered" specifications containing 
goals (main and sub), rules, relationships, and 
data.  From this the logic engine creates the 
organization (the logic) necessary to satisfy the 
(main) goal.  Nothing prevents having more than one 
set of main goals (other than the implementation): 
main goals for each of the programs of an 
application system can all be included within a 
single input stream.  Far from being the impossible 
task you envision, it is simply the addition of a 
list which contains as its entries the lists 
created to represent the logic of each single 
program.  It is no more difficult, in fact a lot 
easier, to do this with a set of programs as a 
single unit of work than it is to do them 
individually as we do now.
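The point above can be sketched in a few lines of 
Python.  This is a toy "logic engine," not Prolog 
itself: the rule set and function names are 
hypothetical.  One function orders an unordered 
rule set to satisfy a single main goal; the lift to 
many main goals is exactly the list-of-lists 
described.

```python
RULES = {                      # goal -> the subgoals it depends on
    "report":    ["totals", "format"],
    "totals":    ["read_data"],
    "format":    [],
    "read_data": [],
    "invoice":   ["read_data", "format"],
}

def plan(goal, rules):
    """Create the organization (the logic) satisfying one main goal."""
    order = []
    def visit(g):
        for sub in rules[g]:   # satisfy subgoals first
            visit(sub)
        if g not in order:
            order.append(g)
    visit(goal)
    return order

def plan_all(goals, rules):
    """The addition of a list whose entries are per-goal plans."""
    return [plan(g, rules) for g in goals]

print(plan_all(["report", "invoice"], RULES))
```

The single-goal engine is untouched; `plan_all` is 
the only addition needed to handle a set of 
programs as one unit of work.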

Now any LISP programmer reading this will confirm 
that it is relatively trivial to take a set of 
processes against a set of inputs up a notch to 
where you apply the same processes to a list of 
such sets.  That LISP does not currently do this is 
an implementation decision, not an impossibility.  
It may take a while to get used to the idea of 
multiple executables from a single set of input, 
but that does not exclude its possibility.
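In Python terms the "up a notch" move is a one-line 
map (the process shown here is a hypothetical 
stand-in for any existing per-set process):

```python
def process(input_set):
    """Existing process against one set of inputs: here, a total."""
    return sum(input_set)

def process_list(list_of_sets):
    """Apply the same process to a list of such sets."""
    return [process(s) for s in list_of_sets]

print(process_list([{1, 2}, {3, 4, 5}]))
```

Whether a given implementation exposes this lift is 
a design choice; the mechanism itself is trivial.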

Now let's take a different tack built upon 
something on which we should agree.  I am willing 
to concede for the moment the impossibility of an 
optimum solution if you will concede in turn that I 
at least can invoke existing code generation 
algorithms.  Further that with these algorithms we 
can create code whose performance is "good enough".

Now that we are off this "impossible dream" of 
optimized code generation, we have the ability to 
generate executables whose performance is (for the 
most part) good enough.  In general the issue 
plaguing the IT profession is not the performance 
of our software, but our inability to develop and 
maintain it at a rate commensurate with user 
demands.  Whenever supply falls below demand you 
create a "backlog".  It makes little sense to 
propose a solution, particularly one based on 
language, that does not relieve this backlog, that 
does not allow our "supply" ability to keep pace 
with demand.

Understand that this has been the major IT 
bottleneck for the last thirty years (or perhaps 
longer).  We have thrown one HLL after another at 
it without in any manner slowing down the growth of 
the backlog.  In the meantime the cost and the time 
involved in doing what we do manage to do have 
continued to climb.

Now why is our maintenance backlog so great?  The 
almost universal answer which came back was the 
distribution of processing for data (objects).  It 
was so scattered, and involved so many changes and 
so much coordination of those changes, that it 
could not occur at the rate at which user change 
requests arrived.

How do we solve this?  End the distribution.  Put 
all the processing within the scope of the data 
itself.  Thus a single change made within an object 
is automatically reflected in all its uses.  In 
looking around we found something else to "borrow" 
from PARC: Smalltalk.  Unfortunately much of its 
runtime performance fell outside the "good enough" 
boundaries.  Fortunately Bell Labs offered yet 
another cheap solution, C++.
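The "processing within the scope of the data" idea 
reads, in Python, as ordinary encapsulation (the 
class and the fee rule are hypothetical 
illustrations):

```python
class Account:
    """All processing for this data lives inside its scope."""
    def __init__(self, balance):
        self.balance = balance

    def fee(self):
        # A single change here (say, a new fee rule) is
        # automatically reflected in every use of Account.fee.
        return round(self.balance * 0.01, 2)

accounts = [Account(100.0), Account(250.0)]
print([a.fee() for a in accounts])
```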

Probably no other software methodology has ever 
received the funds, the research, and the support 
that have gone into OO.  Even the three (competing) 
major analysis and design methods were merged into 
a single one, UML.  The net result so far as the 
backlog is concerned is less than zero: it 
continues to grow, systems continue to cost more, 
and the time to produce them increases.

Does anyone reading this seriously believe that 
whatever esoteric, exotic, exclusive, elusive, 
eccentric, elegant, excellent, eclectic and 
exquisite feature(s) water brings to Slate will in 
and of themselves resolve the backlog issue, 
resolve our inability to introduce changes as fast 
as the need arises?  No.  The fault does not lie 
with Slate or any other HLL.  The fault lies in our 
practice of the software development process.
Anyone involved in using an editor and creating an 
input source file in construction is guilty of 
producing a seam in a seamless process.  It's that 
simple.  We are not executing the software 
development process as we have defined it.

If we did, if we automated all stages after the 
introduction (or selection) of a set of input 
specifications, and if we took people out of the 
process beyond this "initial" input stage, then we 
could introduce changes as quickly as they occurred 
(and in fact faster, which would allow us to reduce 
the backlog eventually to zero).

If the problem was the distribution of data-related 
processes, more than one solution is possible.  
Unfortunately IT picked the wrong one: OO.  The 
more correct one would have been to incorporate all 
the changes in all the processes automatically.
Nothing exotic or special is involved in that.  
Eliminate the designation of external and internal 
procedures, simply allowing a set of procedures 
whose internal logic contains the references which 
allow their logical organization (hierarchical 
functionality) to be determined dynamically, as 
occurs in every logic programming system.  Then 
allow within that set of procedures multiple "root" 
procedures, those not invoked by any other, and 
create an executable for the functional hierarchy 
of each root.
You do that simply by adding to the compilation 
process an iterative level which processes a list 
of root procedures.  Now my experience says that 
adding an iterative process which contains an 
existing process (or set of processes) is a piece 
of cake, far from the impossible that you assign to 
it.
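That iterative level can be sketched directly 
(procedure names and the call graph are 
hypothetical): find the roots no other procedure 
invokes, then collect the functional hierarchy 
reachable from each; each hierarchy would drive one 
run of the existing compilation process.

```python
CALLS = {                      # procedure -> procedures it invokes
    "main_a": ["util", "io"],
    "main_b": ["util"],
    "util":   [],
    "io":     ["util"],
}

def roots(calls):
    """The "root" procedures: those not invoked by any other."""
    invoked = {p for subs in calls.values() for p in subs}
    return [p for p in calls if p not in invoked]

def hierarchy(root, calls, seen=None):
    """Collect the functional hierarchy reachable from one root."""
    if seen is None:
        seen = set()
    if root not in seen:
        seen.add(root)
        for callee in calls[root]:
            hierarchy(callee, calls, seen)
    return seen

# The added iterative level: one executable per root.
for r in sorted(roots(CALLS)):
    print(r, sorted(hierarchy(r, CALLS)))
```

Shared procedures such as `util` simply appear in 
more than one hierarchy; the per-root compilation 
step itself is unchanged.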