Emergence of behavior through software
Lynn H. Maxson
lmaxson@pacbell.net
Thu, 05 Oct 2000 12:25:15 -0700 (PDT)
Alik Widge wrote:
"Don't think *for* the user, because it is almost impossible to
know what the user really wants. Let the user express an
intention, and *then* do what he wants. Detect patterns in his
behavior (such as saying "No" whenever you ask if he needs "help"
writing a letter) and comply."
I come to all this through a devious route: from Warpicity, a
proposal I presented at WarpStock 98 in Chicago, to the FreeOS
project, and then to Tunes. My original proposal dealt with a
single tool, the Developer's Assistant, and an HLL, Specification
Language/One (SL/I). The purpose of the specification language
was manifold: (1) to construct the tool, (2) to construct itself,
and (3) to construct any HLL. Thus the language was simply a
means to an end.
The tool, the Developer's Assistant (DA), as its name implies,
performs in a manner similar to what Alik describes:
non-intrusive, compliant, and reflective. In short, a developer's
assistant. It is not a programmer's assistant, because no
programming occurs, only the writing of specifications.
This may seem a silly nit, because both involve writing. The
difference in my mind lies in the software development process and
its sequence of stages: (1) specification, (2) analysis, (3)
design, (4) construction (programming), and (5) testing. Input
into the specification stage consists of user requirements or
change requests. Thus only two writing tasks occur up front: one
of the user requirements and the other of their translation into
formal (compilable) specifications. Everything after that is
performed entirely within the tool: no manual effort.
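To make the division of labor concrete, here is a minimal sketch
in Python (hypothetical, since SL/I and the DA exist only as a
proposal) of the five stages as a pipeline in which only the
specification text is human-written and every later stage is a
function applied to it:

    # Hypothetical sketch: the five stages as a pipeline.  Only
    # the specification text is supplied by a human; stages 2-5
    # are functions applied to it.
    def analyze(specs):        # stage 2: derive a dataflow model
        return {"dataflow": list(specs)}

    def design(model):         # stage 3: derive a structure chart
        return {"structure": model["dataflow"]}

    def construct(plan):       # stage 4: generate executable form
        return "\n".join(plan["structure"])

    def test(program):         # stage 5: check the result
        return len(program) > 0

    specs = ["gross = hours * rate",   # stage 1: written manually
             "net = gross - tax"]
    assert test(construct(design(analyze(specs))))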
The only other writing which occurs is that of the remainder of
the user documentation: the reference manuals, user guides,
operator guides, etc. Most importantly, the only writing which
occurs within the process bounded by the stages is specification.
It occurs manually only in the first stage, as the rest of the
stages are automated (tool-performed).
If on the input you perform a syntax analysis and a semantic
analysis, and allow the logic engine to do the construction as
occurs today in Prolog and in AI expert systems using logic
programming, then you have all the input necessary to perform the
stages of analysis (dataflow) and design (structure chart). Thus
you have the original source specifications and their three
possible results (analysis, design, and construction). Again, all
this from a single set of written specifications.
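As a rough illustration of the kind of derivation meant here (a
sketch only; the statement format is invented for the example, and
Python stands in for the logic engine), a dataflow view can be
read directly off a set of specifications by noting which names
each statement defines and which it uses:

    # Sketch: deriving analysis (dataflow) from specifications of
    # the form "name = expression".  Representation is invented.
    import re

    specs = [
        "gross = hours * rate",
        "tax = gross * tax_rate",
        "net = gross - tax",
    ]

    defines = {}                        # name -> its expression
    for s in specs:
        lhs, rhs = s.split("=", 1)
        defines[lhs.strip()] = rhs.strip()

    # A dataflow edge runs from each name used to the name defined.
    dataflow = {
        name: [tok for tok in re.findall(r"[a-z_]+", rhs)
               if tok in defines]
        for name, rhs in defines.items()
    }
    print(dataflow)
    # {'gross': [], 'tax': ['gross'], 'net': ['gross', 'tax']}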
In this manner the tool reflects, in three different ways, two of
them graphical, what the developer has submitted. Along with
this, of course, come the results of the semantic analysis. The
only changes which occur, the only writing in which the developer
engages, is specifications. Because they occur at the earliest
possible point in the process, initiating an automated rippling
change process, everything remains in sync in terms of
documentation.
As the tool uses a "standard" logic engine with a two-stage proof
process, one of completeness and one of exhaustive true/false, the
tool again depicts in its results the current state (level of
completeness). It notes ambiguities, incomplete logic, and
contradictions (a variant on an ambiguity): in short, all of the
possible "errors" it can detect. It does so in a non-intrusive
manner, simply providing the developer with multiple views of
the current state. The developer then adds, deletes, or modifies
(makes a new version of) specifications in a sequence of his
choice, with the tool responding to each change as it is entered.
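The two-stage proof can be sketched in miniature (again
hypothetically; the fact and rule format is invented): the
completeness pass flags conditions that are referenced but never
defined, while the exhaustive true/false pass derives every
consequence and flags contradictions:

    # Sketch of the two-stage check, with an invented fact format.
    facts = {"payment_due": True, "account_closed": True}
    rules = [("payment_due", "send_invoice"),       # if X then Y
             ("account_closed", "not send_invoice")]

    # Stage 1: completeness -- every referenced condition defined.
    undefined = [c for c, _ in rules if c not in facts]

    # Stage 2: exhaustive true/false -- derive every consequence
    # and report contradictions (Y asserted both true and false).
    derived = {}
    conflicts = []
    for cond, result in rules:
        if not facts.get(cond):
            continue
        name, value = (result[4:], False) if result.startswith("not ") \
                      else (result, True)
        if name in derived and derived[name] != value:
            conflicts.append(name)
        derived[name] = value

    print(undefined, conflicts)   # [] ['send_invoice']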
The tool then is an interactive one, performing all of the
activities of the software development process except the writing
of the user documentation and of the specifications. It is not
only interactive but also interpretive, allowing independent
execution of any denoted set of specifications. When the
developer is satisfied that this version of the software is
complete, he can so indicate to the tool. The tool will then
compile the code for a target system of choice. That is possible
because all the target systems exist as specifications as well.
Two things are different.
One, there are no source files, only a data repository with a
central directory. The only access to the repository is through
the directory. All source statements, user text and specification
source alike, are stored individually and separately, each
uniquely named, in the repository. Thus no source statement is
ever replicated.
Two, the scope of compilation is determined strictly by what is
implied within the input specifications. It can be anything from
a single statement on up, including entire application systems
(multiple programs), sets of such systems, and entire operating
systems.
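A minimal sketch of such a repository (the naming scheme and
structure here are assumptions for illustration, not the
proposal's actual design): each statement is stored once under a
unique name, the central directory is the only index, and the
scope of a compilation is the closure of names reachable from the
request:

    # Sketch: a statement repository with a central directory.
    # Naming scheme and structure are invented for illustration.
    repository = {
        "spec.001": ("net = gross - tax", ["spec.002", "spec.003"]),
        "spec.002": ("gross = hours * rate", []),
        "spec.003": ("tax = gross * tax_rate", ["spec.002"]),
    }   # name -> (statement text, names it depends on)

    def compilation_scope(requested):
        """Closure of every statement reachable from the request."""
        scope, frontier = set(), list(requested)
        while frontier:
            name = frontier.pop()
            if name in scope:
                continue
            scope.add(name)
            frontier.extend(repository[name][1])
        return scope

    # Asking to compile one statement pulls in all that it implies.
    print(sorted(compilation_scope(["spec.001"])))
    # ['spec.001', 'spec.002', 'spec.003']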
This allows a global application of semantic and logic rules not
available in current methods. There are no copy books, no manual
synchronization of effort, no peer reviews required. This means
that once the specifications are written, once the translation of
user requirements occurs, a single developer using a single tool
can achieve results that currently require tens, hundreds, and
thousands of IT support personnel.
A 50-fold reduction in time. A 200-fold reduction in cost. Over
current methods.
Now that you have optimized the development process while
minimizing the human effort involved, a derivative of "let people
do what machines cannot and machines do what people need not",
what remains is to minimize the human effort further. Nominally
this occurs through the tool "observing" the developer's style,
detecting the developer's patterns (tendencies). Again, in a
non-intrusive, helpful manner, it simply makes what it detects
available (on demand) to the developer. The developer can then
opt for a choice which in essence is a confirmation of his style.
There is no reason not to allow the developer to choose to
automate this aspect of his behavior.
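Detecting a "style" can start as simply as counting recurring
choices and offering, on demand, to make the most frequent one a
default. A toy sketch, with all names invented:

    # Toy sketch: detect a recurring choice, offer to automate it.
    from collections import Counter

    history = ["tabs", "tabs", "spaces", "tabs", "tabs"]
    choice, count = Counter(history).most_common(1)[0]
    if count / len(history) > 0.7:        # threshold is arbitrary
        print("Always use '%s'? (confirming automates it)" % choice)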
All this is possible with today's technologies, as each and every
piece exists today. The problem today is not in what we do or
which language we do it in, but in how we do it, i.e., the process
employed. This is a first step in process improvement. The
specification language, which is self-defining, self-extensible,
and self-sufficient, is simply a means of getting there.
The only remaining issue is staying there: being able to adapt
to the dynamics of the environment at the rate at which the
dynamics, the changes, occur. Here is where the logic engine
and the use of an unordered set of specifications shine. The
only thing the developer must do is add, delete, and modify
existing specification statements and assemblies. Specification
assemblies do not consist of more than 20 to 30 specification
statements. The implications of a change (even a proposed one)
are immediately reflected in the results produced by the tool.
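Because the set is unordered, a change requires no re-sequencing;
every add, delete, or modify simply triggers a re-derivation over
the updated set. A sketch (the recheck function stands in for the
analyses sketched earlier):

    # Sketch: each edit to the unordered specification set
    # triggers an immediate re-derivation of all results.
    specs = {"spec.002": "gross = hours * rate"}

    def recheck():
        print("re-derived results for %d statement(s)" % len(specs))

    def add(name, text):
        specs[name] = text
        recheck()

    def delete(name):
        del specs[name]
        recheck()

    def modify(name, text):   # in effect, a new version
        specs[name] = text
        recheck()

    add("spec.003", "tax = gross * tax_rate")
    modify("spec.002", "gross = hours * rate + bonus")
    delete("spec.003")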
Now the developer can implement changes across an entire
application system as part of a single process. There is no limit
on the global reach of a change. He can leave it intact without
decomposing it or distributing it separately. His ability to
exhaustively test a change is unmatched by any
non-logic-programming-based method, including OO.
What that means is the ability to make changes faster than they
can occur, which means you can make them as fast as they occur.
The only glitch in the dynamic continuum that Fare speaks of is
the time necessary to write the specifications. No system
currently proposed to Tunes comes close. Both Brian Rice and Fare
are working on the wrong end of this pony. It is not a language
deficiency, nor one that can be cured by language. It is a process
deficiency. Its cure lies in process improvement, not obscure
language features.