on the HAL-9000
RE01 Rice Brian T. EM2
BRice@vinson.navy.mil
Mon, 12 Oct 1998 18:07:41 -0700
>> I agree about linking the executable code with its high-level
>> representation. However, does just the 'definition' of the program (the
>> semantics that the user manually applied to create the code) apply when
>> speaking of a program's relation to the system? Theoretically, every
>> piece of semantics-code applies to every running algorithm. For
>> example, the memory-management aspect of generating code obviously
>> applies to everything in the run-time. After all, shouldn't the code
>> generator generate space-efficient code even if the user doesn't
>> explicitly say so? Alternatively, such aspects could already be
>> included implicitly in the language modules suggested to the user. The
>> latter seems more appropriate.
>
>I don't think these factors are so 'trivial'. For example, a compiler
>may be able to choose between code that is less space-efficient but
>faster, or more space-efficient but slower, and I'm not talking about
>trading off the amount of time to compile, but things like code
>arrangement, cache boundaries, locality, etc. In fact, it may be
>trading off between making the 'xyz' chunk of code faster, the 'ayz'
>chunk faster, or the 'yz' chunk faster, knowing that only three pieces
>can fit in the I-cache.
Right. And those imply semantics (more complex than the example I
mentioned) partially governed by the hardware: the 'background', if you
will, to the semantic meaning of the program as the user would see it.
The semantics would be specified, in a total sense, by a 'run-time
scheme', a.k.a. the operating system. The choice may also be between
recording a piece of calculated data once a procedure is done, or
re-executing the procedure to re-create the data as a 'virtual store'
when execution time is much faster than disk-access time; there are
other such issues as well. My point was that semantic definitions could
be easy to come by, freeing the user to focus on the issue-independent
part of the application, while the abstraction mechanism would give the
user (hopefully quick) access to the issues governing how the
code-generator writes the code.
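As a rough illustration of that store-versus-recompute choice, here is
a minimal Python sketch. The names ('VirtualStore', 'put', 'get') and
the timing-based cost model are hypothetical stand-ins for whatever the
run-time scheme would actually measure:

  # A 'virtual store' that decides, per entry, whether to keep a
  # computed result or to discard it and re-run the producing procedure
  # on the next access.  All names and the cost model are hypothetical.
  import time

  class VirtualStore:
      def __init__(self, estimated_load_cost):
          # estimated_load_cost: seconds a disk fetch is expected to take
          self.estimated_load_cost = estimated_load_cost
          self.cache = {}    # key -> stored value (only if worth keeping)
          self.recipes = {}  # key -> (procedure, args) to re-create it

      def put(self, key, procedure, *args):
          # Time one execution; keep the value only if re-running the
          # procedure costs more than fetching the stored copy would.
          start = time.perf_counter()
          value = procedure(*args)
          elapsed = time.perf_counter() - start
          self.recipes[key] = (procedure, args)
          if elapsed > self.estimated_load_cost:
              self.cache[key] = value
          return value

      def get(self, key):
          if key in self.cache:
              return self.cache[key]       # it was cheap enough to store
          procedure, args = self.recipes[key]
          return procedure(*args)          # cheaper to just redo the work

Whether a real run-time scheme would make this decision per value, per
procedure, or per region of the store is exactly the sort of issue the
abstraction mechanism should expose to the user.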
>
>How is the code generator to operate such that it can evaluate all
>such 'aspects' where the sample size approaches one instruction?
Well, we could make a code-generator that creates different types of
code output. One couldn't argue well against having two distinct
code-generators in the run-time, say when generating code for a machine
with two kinds of processors. Similarly, parts of the code-generator
could be designed for different types of semantic-level reflection on
the code for efficiency; or, more simply, a code-generator with a more
'down-to-earth' language interface added on would probably work.
However, since no code sequence exists alone (I/O overhead, ...), the
cases where this would be needed are those where the code-generator
would match the code sequence with 'function calls' of a sort (as in a
debugging process or a 'calculator' process, i.e. interaction with
small events). Hopefully, most aspects would be 'woven' into the code
statically before the code-generator 'sees' it; equivalently, the
code-generator at some point in the pipeline could build a persistent,
compacted intermediate form to operate on incrementally over successive
optimizations (for code development).
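To make that pipeline shape a bit more concrete, here is a small Python
sketch; everything in it ('IntermediateForm', 'Aspect', 'Backend', the
toy weaving pass, the two per-processor backends) is an invented name,
not a proposal for the actual interfaces:

  # Aspects are applied to a persistent intermediate form before any
  # backend 'sees' it, and several backends (say, one per processor
  # kind) can share the same woven form.  All names are hypothetical.

  class IntermediateForm:
      def __init__(self, ops):
          self.ops = list(ops)   # compacted op list standing in for real IR

  class Aspect:
      def weave(self, form):     # rewrite the form, return the result
          return form

  class SpaceAspect(Aspect):
      def weave(self, form):
          # Stand-in for a space-oriented rewrite: merely deduplicates
          # ops while preserving their order.
          seen, kept = set(), []
          for op in form.ops:
              if op not in seen:
                  seen.add(op)
                  kept.append(op)
          form.ops = kept
          return form

  class Backend:
      def emit(self, form):
          raise NotImplementedError

  class CpuABackend(Backend):
      def emit(self, form):
          return ["A:" + op for op in form.ops]

  class CpuBBackend(Backend):
      def emit(self, form):
          return ["B:" + op for op in form.ops]

  def generate(form, aspects, backend):
      for aspect in aspects:     # weaving happens once, statically
          form = aspect.weave(form)
      return backend.emit(form)  # the backend only sees the woven form

  ir = IntermediateForm(["load x", "add", "load x", "store y"])
  print(generate(ir, [SpaceAspect()], CpuABackend()))

The point is only the shape: weaving finishes before emission starts,
and the woven form can be kept around and re-optimized incrementally
during development.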
Obviously, we should elaborate on this issue (I already have a few
ideas), possibly by classifying, in a fuzzy way, the areas and
dimensions of this 'computational manifold', as it were (manifold =
space).
>> I've already stated my position on monolithic compilers. To me, it
>> seems against the direction of the project.
>
>I think perhaps the word monolithic was a bad choice. I'm not implying
>that the compiler would not be built from the same constructs as the
>system itself. Nor am I implying that the compiler would not share
>logical blocks describing algorithms it uses, etc. I was merely
>implying that it may be a good starting point to define an atomic
>'compile this block' interface to a compiler, and let the compiler
>work in a more traditional manner on that block. As we develop
>algorithms which make sense for this kind of multi-layer translation
>and optimization, we can try to fit them together into a better framework.
OK. Let's try to put together a holistic model.
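As a first, deliberately naive cut at that 'compile this block'
starting point, something like the following Python shape might do; the
names ('Block', 'CompileRequest', 'compile_block') and the knobs on the
request are invented here, and the body is only a placeholder:

  # An atomic 'compile this block' entry point: the caller hands over
  # one block plus whatever constraints matter (space vs. speed, target
  # processor) and gets code for just that block back.  Every name
  # below is hypothetical.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class Block:
      name: str
      ops: List[str]             # intermediate-form ops for this block only

  @dataclass
  class CompileRequest:
      block: Block
      target: str = "cpu-a"      # which code-generator variant to use
      favor_space: bool = False  # the kind of knob discussed above

  def compile_block(request: CompileRequest) -> List[str]:
      """Compile exactly one block, traditionally, with no global knowledge."""
      ops = request.block.ops
      if request.favor_space:
          ops = list(dict.fromkeys(ops))   # toy space pass: drop duplicate ops
      return ["%s:%s" % (request.target, op) for op in ops]

  code = compile_block(CompileRequest(Block("f", ["load x", "add", "load x"])))
  print(code)

Whatever the holistic model ends up looking like, a narrow interface of
this kind gives the multi-layer translation and optimization work
something concrete to plug into while the better framework takes shape.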