on the HAL-9000

David Jeske jeske@home.chat.net
Sun, 11 Oct 1998 19:34:17 -0700


On Sun, Oct 11, 1998 at 06:35:31PM -0700, RE01 Rice Brian T. EM2 wrote:
> i believe, for example, that the decision-making structure is the key
> part of my question: when does Tunes recompile a method and when does it
> simulate doing so?

I don't see how we are going to come up with a finite ruleset to
answer this question. I think that we should have a framework for
adding static compiler/optimizer stages, and along with that, a method
for deciding when to recompile given the existing compiler, or compiler
aspects.

I think trying to 'answer' this question is sort of putting the cart
before the horse.

As an example, observe the HotSpot Java VM.
http://java.sun.com/products/hotspot/whitepaper.html

It uses several techniques (mostly adapted from the SELF optimizing
compiler VM) to generate static code, and keep track of dependencies
between the fully reflective system and the static code doing the
work. I think we would be better off coming up with a framework to put
the static compiler systems in place, instead of trying to hard-define
the calculus for conversion between different levels. Even if such a
logical method of translation exists, I doubt we can pull it out of the
sky with so little experience of such things.

For example, SELF has a system which records type information as code
runs; much of their optimization work has followed from this
run-time data collection system. Perhaps we can come up with a general
form of this data collection system, where we can allow a compiler to
insert its data collection blocks in the high-level 'logical code'
and then use its own data later during its static compile phase.

This would still use a more traditional 'monolithic' compiler model;
however, it would allow us to easily experiment with different ideas
in optimization.
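
To make that concrete, here is a rough sketch in C of what such a data
collection block might record at a call site, and how a later static
compile phase might read it back. The names (type_feedback,
record_type, dominant_type) and the 90% threshold are all made up for
the example; it's only an illustration of the idea, not a proposal for
the actual interface:

    /* Hypothetical per-call-site type feedback record.  The
     * instrumented 'logical code' bumps a counter for each receiver
     * type it actually sees; the static compile phase reads the table
     * later to decide whether a specialization is worthwhile. */
    #include <string.h>

    #define MAX_TYPES_SEEN 4

    struct type_feedback {
        const char   *type_name[MAX_TYPES_SEEN];  /* observed type names   */
        unsigned long count[MAX_TYPES_SEEN];      /* times each was seen   */
        int           n;                          /* distinct types so far */
    };

    /* Inserted by the compiler at the call site; runs on every dispatch. */
    void record_type(struct type_feedback *fb, const char *type_name)
    {
        int i;
        for (i = 0; i < fb->n; i++) {
            if (strcmp(fb->type_name[i], type_name) == 0) {
                fb->count[i]++;
                return;
            }
        }
        if (fb->n < MAX_TYPES_SEEN) {
            fb->type_name[fb->n] = type_name;
            fb->count[fb->n] = 1;
            fb->n++;
        }
    }

    /* Used at static compile time: returns the dominant type if one
     * type accounts for at least 90% of the calls seen, else NULL. */
    const char *dominant_type(const struct type_feedback *fb)
    {
        unsigned long total = 0, best = 0;
        const char *winner = 0;
        int i;
        for (i = 0; i < fb->n; i++) {
            total += fb->count[i];
            if (fb->count[i] > best) {
                best = fb->count[i];
                winner = fb->type_name[i];
            }
        }
        return (total > 0 && best * 10 >= total * 9) ? winner : 0;
    }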


> what defines "full" about type information? theoretically, i could have
> the entire system of algebra recomputed every time i saw 1+1=2 in order
> to have complete type checking. 

I don't agree with the implied converse of the above statement, that
converse being: "if you don't recompute the system of algebra each
time, then you don't have complete type checking". Under no
circumstances should you give up complete type checking. However,
optimization techniques should be used which allow you to type check
once for a code/logic path. That entire path can then be optimized for
the types which are guaranteed to be present if it is called. If the
types are different, then that static codeblock shouldn't be called.
That's how SELF works.
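
To illustrate what I mean by "shouldn't be called", here is a small C
sketch of a guarded entry point: a cheap type-tag check decides whether
control falls into the statically optimized block (where the types are
guaranteed) or back into the fully checked generic path. The type tags
and function names are invented for the example:

    #include <stdio.h>

    #define TYPE_CIRCLE 1   /* illustrative type tags */
    #define TYPE_SQUARE 2

    struct object {
        int    type_tag;
        double size;        /* radius or side length */
    };

    /* Statically compiled for circles only: no type checks inside,
     * because the guard below guarantees the type on entry. */
    static double area_circle_specialized(struct object *o)
    {
        return 3.14159265358979 * o->size * o->size;
    }

    /* Generic path: re-checks everything dynamically. */
    static double area_generic(struct object *o)
    {
        switch (o->type_tag) {
        case TYPE_CIRCLE: return 3.14159265358979 * o->size * o->size;
        case TYPE_SQUARE: return o->size * o->size;
        default:          return 0.0;  /* unknown type */
        }
    }

    /* The guard: one type check per entry into the optimized path. */
    double area(struct object *o)
    {
        if (o->type_tag == TYPE_CIRCLE)
            return area_circle_specialized(o);  /* types guaranteed here */
        return area_generic(o);                 /* anything else rechecks */
    }

    int main(void)
    {
        struct object c = { TYPE_CIRCLE, 2.0 };
        struct object s = { TYPE_SQUARE, 2.0 };
        printf("%f %f\n", area(&c), area(&s));
        return 0;
    }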

> what i am saying is to use the principle of
> partial-evaluation/partial-proof to create an easily-available form
> (or set of forms) for access at various levels of semantics, and to
> determine what policy should be used for generating those.

Okay, I think I'm with you here, but can you give a more specific
example of how you think this would actually work?

> >In fact, this should not even prove that difficult. I could imagine a
> >C compiler which would store full run-time information about the names
> >and static offsets of all data structures and functions. This would
> >incur zero penalty to static code and yet would allow you to get at
> >those items in a reflective fashion.
> 
> perhaps, but in a very inefficient way, since C is very slow to compile
> (compared to the kind of code we need to have available in order to have
> a reasonable dynamic-compilation routine performance). e.g. slim
> binaries of the Oberon system.

Of course, and I agree with you completely here. I was merely
presenting a solution which didn't compromise run-time information or
speed, and which could be applied in today's world.
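
For what it's worth, here is roughly what I had in mind, sketched in
plain C with offsetof() standing in for the tables the compiler itself
would emit. The struct and function names are only for the example; the
point is that the tables cost the static code nothing and are only
consulted by reflective access:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct point { int x; int y; };

    /* Per-struct table of field names and static offsets, which a
     * compiler could generate alongside the normal code. */
    struct field_info {
        const char *name;
        size_t      offset;
    };

    static const struct field_info point_fields[] = {
        { "x", offsetof(struct point, x) },
        { "y", offsetof(struct point, y) },
        { 0, 0 }
    };

    /* Reflective access by name; the ordinary static code path never
     * touches this and pays no penalty for its existence. */
    int *point_field_by_name(struct point *p, const char *name)
    {
        const struct field_info *f;
        for (f = point_fields; f->name; f++)
            if (strcmp(f->name, name) == 0)
                return (int *)((char *)p + f->offset);
        return 0;
    }

    int main(void)
    {
        struct point p = { 3, 4 };
        printf("%d\n", *point_field_by_name(&p, "y"));  /* prints 4 */
        return 0;
    }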

-- 
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + jeske@chat.net