on the HAL-9000

RE01 Rice Brian T. EM2 BRice@vinson.navy.mil
Mon, 12 Oct 1998 15:25:50 -0700


>> i believe, for example, that the decision-making structure is the key
>> part of my question: when does Tunes recompile a method and when does it
>> simulate doing so?
>
>I don't see how we are going to come up with a finite ruleset to
>answer this question. I think that we should have a framework for
>adding static compiler/optimizer stages, and along with that, a method
>for deciding when to recompile given the existing compiler, or compiler
>aspects.

That would result in an inordinate amount of waste, I believe.  It
implies a distinct compiler-unit interface based on some arbitrary
intermediate language.  I would tend to avoid that kind of development.
I want the code in the compiler to be reusable by any other component.
A familiar optimization, I believe, known as 'register coloring', based
on graph coloring, happens to be a general mathematical algorithm
for allocating resources.  It amounts to a scheduling algorithm, and
would be desirable to re-use.  Of course, keep in mind my intended goal
of having all sorts of lucrative technical mathematics available
everywhere.
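
As a concrete illustration of that reusability, here is a minimal greedy
graph-coloring allocator (my own sketch, not Tunes code; the names and
heuristic are assumptions).  Nodes are live ranges; an edge means two
ranges conflict and cannot share a register:

```python
# Minimal sketch of register allocation by graph coloring.
# interference: dict mapping each node to the set of nodes it
# conflicts with.  Greedy coloring assigns each node the first
# "register" not already used by a colored neighbor.

def color(interference, registers):
    coloring = {}
    # Color high-degree nodes first -- a common, simple heuristic.
    for node in sorted(interference, key=lambda n: -len(interference[n])):
        used = {coloring[n] for n in interference[node] if n in coloring}
        free = [r for r in registers if r not in used]
        coloring[node] = free[0] if free else None  # None -> spill
    return coloring

# The same routine schedules any conflicting resources -- timeslots,
# channels -- simply by reinterpreting what "registers" means.
conflicts = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(color(conflicts, ["r0", "r1"]))
```

Reinterpreting the inputs is all it takes to re-use the component, which
is exactly the kind of sharing argued for above.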
>
>I think trying to 'answer' this question is sort of putting the cart
>before the horse.

I believe that the cart and horse are one.
>
>As an example, observe the HotSpot Java VM.
>http://java.sun.com/products/hotspot/whitepaper.html
>
>It uses several techniques (mostly adapted from the SELF optimizing
>compiler VM) to generate static code, and keep track of dependencies
>between the fully reflective system and the static code doing the
>work. I think we would be better off coming up with a framework to put
>the static compiler systems in place, instead of trying to hard-define
>the calculus for conversion between different levels. Even if such a
>logical method of translation exists, I doubt we can pull it out of the
>sky with so little experience of such things.

(hmm. I think that this research is lurking on my archive Zip disks
somewhere...)

I have no such doubt.  I have been studying mathematical research for
several years now, and am now working with recent papers on multi-modal
logic, arrow logic, dynamic logic, and some older ideas like category
and model theory.  The 'sky', as you call it, exists today in the work
of computer scientists (real ones) and mathematicians interested in
language and information theory.  This same research is what should
enable the natural-language interface to which I have alluded.
I suggest we speak of these development threads (monolithic compilation
versus logically-dynamic) as separate, since I am fairly intractable on
this point.  The monolithic approach appears to me to violate the
principle of re-using mathematical logics wherever possible, or at
least of making that re-use simple.
>
>For example, SELF has a system which records type information as code
>runs; much of their optimization work has followed from this
>run-time data collection system. Perhaps we can come up with a general
>form of this data collection system, where we can allow a compiler to
>insert its data collection blocks in the high-level 'logical code'
>and then use its own data later during its static compile phase.
>
>This would still use a more traditional 'monolithic' compiler model,
>however, it would allow us to easily experiment with different ideas
>in optimization.

To me, that suggests a compromise I am not ready to make.
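
For concreteness, the run-time data collection the quoted passage
describes might be sketched roughly as follows (my own illustration,
with invented names, not SELF's actual machinery): a call site records
the receiver types it actually sees, and a later compile phase reads the
counts to decide what to specialize.

```python
from collections import Counter

class CallSiteProfile:
    """Per-call-site record of receiver types observed at run time."""

    def __init__(self):
        self.seen = Counter()              # type name -> hit count

    def record(self, receiver):
        self.seen[type(receiver).__name__] += 1

    def dominant_type(self, threshold=0.9):
        """Return the type worth specializing for, if one dominates."""
        total = sum(self.seen.values())
        if not total:
            return None
        name, hits = self.seen.most_common(1)[0]
        return name if hits / total >= threshold else None

profile = CallSiteProfile()
for x in [1, 2, 3, 4.0]:
    profile.record(x)
print(profile.dominant_type(threshold=0.7))   # 'int' at 3/4 of the hits
```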
>
>> what defines "full" about type information? theoretically, i could have
>> the entire system of algebra recomputed every time i saw 1+1=2 in order
>> to have complete type checking. 
>
>I don't agree with the implied converse of the above statement, that
>converse being "if you don't recompute the system of algebra each
>time, then you don't have complete type checking".  Under no
>circumstances should you not have complete type
>checking. However, optimization techniques should be used which will
>allow you to type check once for a code/logic path. That entire path
>can then be optimized for the types which are guaranteed to be present
>if it's called. If the types are different, then that static codeblock
>shouldn't be called. That's how SELF works.

True, but if I wanted exactly what SELF has, semantically, then I
wouldn't ask the question.  What I mean is that "what type is" is
relative to an ontology, which, in turn, is defined (constrained) by the
logic language chosen.  Since our system is intended to support several
calculi of abstraction, to me it seems natural that complete
type-checking be far more rich (than your idea of it) in the Tunes
system from the very beginning.  Otherwise, we'll always have types
which are "just integers" in the system, for example.  SELF doesn't
check information which is semantically beyond the language's
first-order atoms.  It simply can't.  In other words, it gives only one
means to abstraction relative to the many means which I propose.  This,
I believe, is the central issue with the lambda-calculus to which I
object: that abstraction is seen as one-dimensional, at least from the
frame of reference of each object.  More dimensions can be simulated,
but only at a high cost in terms of readability/clarity.  (more on this
issue later...)
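
(For reference, the "check once per code path" scheme quoted above
amounts to guarded specialization.  A rough sketch, with names of my
own invention rather than SELF's:)

```python
# Guarded specialization: type-check once at entry, then run a path
# compiled under the guarantee that those types are present.

def generic_add(a, b):
    return a + b                      # fully dynamic fallback path

def specialized_int_add(a, b):
    return a + b                      # stand-in for optimized static code

def guarded(a, b):
    if type(a) is int and type(b) is int:   # the one type check
        return specialized_int_add(a, b)    # types guaranteed inside
    return generic_add(a, b)                # different types: don't call it

print(guarded(2, 3))      # takes the specialized path
print(guarded("x", "y"))  # falls back to the generic path
```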
>
>> what i am saying is to use the principle of
>> partial-evaluation/partial-proof to create an easily-available form
>> (or set of forms) for access at various levels of semantics, and to
>> determine what policy should be used for generating those.
>
>Okay, I think I'm with you here, but can you give a more specific
>example of how you think this would actually work?

Since the Tunes system's hll is 'reflective' as a primary feature, the
generation of arbitrary languages should be nearly trivial.
Since any language is equivalent to a calculus of atoms, this allows the
richer type system which is desirable and should be available.  These
languages would probably be generated incrementally towards specific
application-development.  Their semantics could therefore be checked
incrementally long before their application to some object, and would
persist beyond most objects' lifetimes.  That is my intended means of
static type-checking.  This also ties in with my intention to have the
hll completely abstract with respect to the programming languages in
common use.  To me, the hll is the cart which morphs into
horse-and-cart.
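
A speculative sketch of that incremental checking (everything here is
my own invention, only meant to suggest the shape of the idea): a small
signature of typed atoms is declared first, and terms of the generated
language are sort-checked at construction time, long before any object
they might later be applied to exists.

```python
class Signature:
    """An incrementally built language of typed atoms."""

    def __init__(self):
        self.ops = {}                  # name -> (argument sorts, result sort)

    def declare(self, name, arg_sorts, result_sort):
        self.ops[name] = (list(arg_sorts), result_sort)

    def term(self, name, *args):
        """Build a term, checking sorts now rather than at application."""
        arg_sorts, result = self.ops[name]
        actual = [a[1] for a in args]  # each term carries its own sort
        if actual != arg_sorts:
            raise TypeError(f"{name}: expected {arg_sorts}, got {actual}")
        return (name, result, args)    # a statically well-sorted term

sig = Signature()
sig.declare("zero", [], "Nat")
sig.declare("succ", ["Nat"], "Nat")

z = sig.term("zero")
one = sig.term("succ", z)              # checked incrementally, accepted
print(one[1])                          # the term's sort: 'Nat'
```

The terms, once checked, can persist and be applied to objects later,
which is the order of events the paragraph above argues for.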
In light of the clear benefits and potential elegance of the Xerox
PARC lab's aspect-oriented language development, it seems obvious to me
to create code as an object whose interfaces are defined in terms of
statements in languages which the user picks.  These choices at first
should probably be static with reference to the code-generation phase to
allow for simpler analysis in the project's early development.