A revolutionary OS/Programming Idea
Lynn H. Maxson
Lynn H. Maxson" <lmaxson@pacbell.net
Wed Oct 8 00:05:03 2003
Alaric B Snell wrote:
"...
...but that's not necessarily a true statement ;-) If we, using
our knowledge of physics and chemistry, built a large enough
con (sic) Neumann computer and told it to model the trillions
of atoms within a brain, it might be sentient, unless
as-yet-undiscovered physical processes are at work."
Well, we were executing on the same wavelength until we
got to this point. I haven't been the same on this subject
since reading W. Ross Ashby's "Design for a Brain". It comes
down to asserting that you cannot program a
non-programmable process. With a von Neumann
architecture, something with a pre-existing instruction set,
you cannot possibly emulate something not based on one.
We make a mistake when we equate a neural net, even as an
analog, with neurons. We have no reason to believe that a
neuron plays anything other than a passive role, one of a
conduit only, with respect to sentience or intelligence. It
certainly doesn't "store" anything. As far as we know such
"storage" occurs in the connections where it continuously
recycles, i.e. it has only a dynamic state and never a static
one.
Frankly speaking, PL/I is a damn sight closer to assembly
language, the actual machine instruction set, than C is;
that's another one of those myths that falls once challenged.
Probably APL lies even closer. Weak typing doesn't make C
closer. It's actually a defect that K&R successfully passed
off as a feature.
I would refer others to "Language, Thought, and Reality:
Selected Writings of Benjamin Lee Whorf" and to the
Whorf-Sapir Hypothesis: you cannot think what you cannot
express. While I generally agree with Tom Novelli's recent
remarks with respect to the Tunes effort, I keep wanting to
get back to the KISS principle.
For me it's quite simple. Programming is a means of
expressing a solution set (language) in terms of a problem set
(reality) using a thought process in the translation. As a
general semanticist I know that reality is the territory and
that language is a map. Sanity occurs when the map
conforms to the territory; insanity, when the territory has to
conform to the map.
Come the 19th of this month (October) I will present "Opening
Open Source" at an OS/2 user conference in San Francisco.
There I will present much of what we have discussed here.
Only it will occur without esoterics. I have only one goal: to
provide a language with a methodology, a solution set, to
conform to the dynamics of the problem set. In short, to allow
changes to occur in the solution set at a rate at least equal
to their interarrival rate (on average) in the problem set.
While I will use logic programming, I will not use LISP in any
form. As I said in an earlier response, the only thing I need from
LISP is the list aggregate itself and associated operators,
which already exist in non-LISP form. I certainly avoid OO like
the plague, at least what currently passes for OO.
I have proposed a specification language capable of specifying
itself. It has all the data types and variants as well as
operators found in the problem set. Thus I can write a quite
accurate map that conforms to the territory. I do not engage
in fitting the territory to the map as done in almost all the
post-1970 languages including OO.
I have proposed one tool written in that language. The
package includes the tool, its source, and the source for the
language. The tool behaves like an "enhanced" editor. It
uses a data repository/directory written in the language. The
data repository/directory offers database, not file, access to
data. The editor is enhanced to provide syntax and semantic
analysis as well as a two-stage proof theory based on logic
programming. Normal code generation is interpretive, with
compilation available as a meta-programming option to support
a high-performance production mode.
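To make that concrete, here is a minimal sketch, in Python
rather than in the proposed specification language, of what
database-style (keyed) access to source segments might look
like, with a placeholder analysis pass run on every store. The
class and method names are mine, purely illustrative, not part
of any existing tool:

    class SegmentRepository:
        """Source segments live under keys in a repository,
        not in files; every store runs an analysis pass."""

        def __init__(self):
            self._segments = {}          # name -> source text

        def store(self, name, text):
            self._check(text)            # stand-in for syntax/semantic analysis
            self._segments[name] = text  # keyed, database-style access

        def fetch(self, name):
            return self._segments[name]

        def _check(self, text):
            if not text.strip():
                raise ValueError("empty source segment")

    repo = SegmentRepository()
    repo.store("area", "area = width * height;")
    print(repo.fetch("area"))

The point of the sketch is only the access pattern: the writer
hands a segment to the repository, and checking and storage
happen behind that single entry point.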
I don't expect John Newman to like this strictly text source
approach. While the internal (local) logic within a source
segment is fixed (ordered), the actual source segments
appear unordered. As in all logic programming, the software
does the actual ordering as part of the completeness proof.
When that proof is complete, the software can produce any
visual output, text or graphic, of the now organized source.
When you add, delete, or change a source segment, the
software automatically regenerates the organized source. So
only one point of source entry exists: the writing of a source
segment, which may be a single source statement or some
recognizable assembly such as a control structure or a
procedure. The formal written form more closely resembles
the informal description: the reader can see one in the other.
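As a rough illustration of that, consider the following Python
sketch. It assumes each segment declares the names it uses; an
undefined reference fails the "completeness" check, and
graphlib's topological sorter stands in for the ordering that
the completeness proof would perform. None of this is the
actual tool; it only shows the shape of the idea:

    from graphlib import TopologicalSorter   # Python 3.9+

    def organize(segments):
        """segments maps name -> (names it uses, source text).
        Reject undefined references, then return the source
        texts in dependency order."""
        used = {u for uses, _ in segments.values() for u in uses}
        undefined = used - set(segments)
        if undefined:
            raise ValueError("incomplete: " + ", ".join(sorted(undefined)))
        order = TopologicalSorter({n: uses for n, (uses, _) in segments.items()})
        return [segments[n][1] for n in order.static_order()]

    # Segments written in any order; the software, not the
    # writer, produces the organized source.
    segments = {
        "report": ({"total", "money"}, "report: print money(total);"),
        "money":  (set(),              "money: m(x) = '$' || x;"),
        "total":  ({"items"},          "total: sum over items;"),
        "items":  (set(),              "items: read from repository;"),
    }
    for line in organize(segments):
        print(line)

Adding, deleting, or changing a segment just means calling
organize() again on the repository contents, which is the
automatic regeneration described above.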
There's no grep, no awk, no lex, no yacc, no make, no build,
nothing else to write, no other language to learn. Just
one specification language (the solution set) that comes
closest to the language of the problem set. That in turn
eases the transition from writing a description in the language
of the problem set to that of the solution set. In that manner
you make it more possible for more people to participate in
the writing of solution sets.