on the HAL-9000
RE01 Rice Brian T. EM2
BRice@vinson.navy.mil
Sun, 11 Oct 1998 18:35:31 -0700
>I'll respond to your previous post later..
>
>> Conceivably, every piece of code or data could be reflected upon
>> whenever the system encounters a reference to it, each with its
>> multitude of specifications, mostly shared. Certainly every piece of
>> code or data should be reflectable, but when does abstraction access
>> time become more of an issue than the run-time efficiency of the system?
>> There seems to be a trade-off curve for execution versus reflection
>> efficiency based on usual coding techniques (with traditional software
>> on a lower curve than Tunes products, of course ;). I base this
>> impression on the various programming languages and their relative
>compilation efficiencies (including social factors that make a
>language more popular as part of its efficiency rating).
>
>In my opinion, Tunes should be as close to the 'total reflection at
>all costs' side of the curve as possible. For situations where
>performance is more important, there should be a path for letting
>people who know about optimization on the hardware plug in optimizers
>which will help make the code that actually runs more static. It
>should be Tunes' job to keep even this static code tied to the high
>level description of the same code, so that it can always reflect on it.
hmm. i see i was unclear. when i said 'reflect', i meant that the logic
system would check for type information and possible conflicts
dynamically and uncompromisingly (without any partial evaluation of the
program to check static constraints before run-time). of course
everything gives access to its meta-information; i don't mean to
contradict that. however, i believe that optimization is quite simple,
even from a relatively hardware-independent perspective, because the
hardware descriptions are available in logical form.
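to make that concrete, here is a very rough sketch in C of what i mean:
every value carries a pointer to its meta-information, and the reference
is checked dynamically rather than proven statically beforehand. all of
the names and structures here are invented for illustration, not how
Tunes would actually do it.

  /* sketch only, invented names: every value carries a pointer to its
     meta-information, and the logic system checks the reference
     dynamically instead of relying on constraints proven before run-time */

  #include <stdio.h>
  #include <stdlib.h>

  struct type_info {
      const char *name;          /* e.g. "integer", "string"            */
      size_t      size;          /* layout information the system holds */
  };

  struct object {
      const struct type_info *type;   /* meta-information, always reachable */
      void                   *data;
  };

  /* dynamic, uncompromising check at the point of reference */
  static void *checked_ref(struct object *o, const struct type_info *expected)
  {
      if (o->type != expected) {
          fprintf(stderr, "type conflict: wanted %s, got %s\n",
                  expected->name, o->type->name);
          abort();
      }
      return o->data;
  }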
>As a small concrete example, if your language lets you make a
>data-structure, how Tunes represents this on the hardware should be
>irrelevant. When you write code to deal with this data structure, Tunes
>(and the underlying compiler) should be able to generate static code
>which accesses this data structure as if it were a static structure. If
>you use some kind of reflection to get at the data, Tunes will know
>the static layout of the structure in memory. If you use reflection to
>add a data-member to the structure, Tunes can decide whether or not to
>invalidate the static code and recompile it.
i believe, for example, that the decision-making structure is the key
part of my question: when does Tunes recompile a method and when does it
simulate doing so?
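here is the shape of the decision i'm asking about, sketched in C with
invented names: reflectively adding a member bumps a version counter,
and the next access through the "compiled" accessor must either
recompile (recompute its cached offset) or simulate (walk the field list
interpretively). this is only an illustration of the policy question,
not a claim about Tunes' actual mechanism.

  #include <string.h>

  struct field { const char *name; size_t offset; };

  struct record_type {
      struct field fields[16];
      size_t       nfields;
      size_t       size;         /* total bytes so far                  */
      size_t       version;      /* bumped on any reflective change     */
  };

  /* the "static code": an accessor compiled against one layout */
  struct compiled_accessor {
      size_t cached_offset;
      size_t compiled_for;       /* the version it was compiled against */
  };

  /* reflection: add a data member, which changes layout and version */
  static void add_member(struct record_type *t, const char *name, size_t bytes)
  {
      t->fields[t->nfields++] = (struct field){ name, t->size };
      t->size += bytes;
      t->version++;
  }

  /* when the versions disagree, the system must decide: recompile the
     accessor now, or simulate it by searching the field list this time */
  static size_t get_offset(struct compiled_accessor *a,
                           const struct record_type *t, const char *name)
  {
      if (a->compiled_for == t->version)
          return a->cached_offset;                      /* fast, "static" path */
      for (size_t i = 0; i < t->nfields; i++)
          if (strcmp(t->fields[i].name, name) == 0) {
              a->cached_offset = t->fields[i].offset;   /* "recompile"         */
              a->compiled_for  = t->version;
              return a->cached_offset;
          }
      return (size_t)-1;                                /* unknown member      */
  }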
>In other words, we should not be compromising full type-information
>and reflection under any circumstances. We should be focusing on full
>functionality at all times, and methods of reaching better performance
>which don't defeat having full run-time information available.
what counts as "full" type information? theoretically, i could have
the entire system of algebra recomputed every time i saw 1+1=2 in order
to have complete type checking. what i am saying is to use the principle
of partial evaluation/partial proof to create an easily-available form
(or set of forms) for access at various levels of semantics, and to
determine what policy should govern generating those forms.
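the kind of "easily-available form" i have in mind could be as plain as
a table of already-checked facts; here is a toy sketch in C, with
invented names, just to show the idea of keeping partial proofs around
instead of re-deriving them.

  /* sketch with invented names: once a fact like "1+1=2 : integer" has
     been checked, keep it in an easily-available form so the full
     algebra need not be re-derived at every occurrence.  the open
     question is the policy for when to fill (and trust) this table. */

  #include <string.h>

  #define MEMO_SLOTS 64

  struct proof_memo {
      const char *fact[MEMO_SLOTS];   /* e.g. "1+1=2 : integer" */
      int         used;
  };

  /* 1 means already proved; 0 means the full logic system must run */
  static int already_proved(const struct proof_memo *m, const char *fact)
  {
      for (int i = 0; i < m->used; i++)
          if (strcmp(m->fact[i], fact) == 0)
              return 1;
      return 0;
  }

  static void record_proof(struct proof_memo *m, const char *fact)
  {
      if (m->used < MEMO_SLOTS)
          m->fact[m->used++] = fact;
  }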
>In fact, this should not even prove that difficult. I could imagine a
>C compiler which would store full run-time information about the names
>and static offsets of all data structures and functions. This would
>incur zero penalty to static code and yet would allow you to get at
>those items in a reflective fashion.
perhaps, but in a very inefficient way, since C is very slow to compile
compared to the kind of code we need to have available in order to get
reasonable performance from a dynamic-compilation routine (e.g. the slim
binaries of the Oberon system).
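for clarity, here is what the quoted suggestion amounts to, written out
by hand in C (the names are mine): the struct is accessed statically as
usual, and a separate table of names and offsets, which the compiler
could emit for free, lets reflective code reach the same data. the
static access really does cost nothing extra; my objection is only
about how quickly such C could be regenerated at run-time.

  #include <stddef.h>
  #include <stdio.h>

  struct point { int x; int y; };

  struct member_info { const char *name; size_t offset; };

  /* the table the compiler could emit for free alongside the struct */
  static const struct member_info point_members[] = {
      { "x", offsetof(struct point, x) },
      { "y", offsetof(struct point, y) },
  };

  int main(void)
  {
      struct point p = { 3, 4 };

      printf("x = %d\n", p.x);   /* static access: ordinary compiled code */

      /* reflective access through the name/offset table */
      for (size_t i = 0; i < sizeof point_members / sizeof *point_members; i++) {
          const int *field = (const int *)((const char *)&p
                                           + point_members[i].offset);
          printf("%s = %d\n", point_members[i].name, *field);
      }
      return 0;
  }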