on the HAL-9000
Mon, 12 Oct 1998 16:54:24 -0700
I'll respond to your previous email tonight when I have a bit more
time. For now:
On Mon, Oct 12, 1998 at 03:56:24PM -0700, RE01 Rice Brian T. EM2 wrote:
> >In my opinion, Tunes should be as close to the 'total reflection at
> >all costs' side of the curve as possible. For situations where
> >performance is more important, there should be a path for letting
> >people who know about optimization on the hardware plug in optimizers
> >which will help make the code that actually runs more static. It
> >should be Tunes' job to keep even this static code tied to the high
> >level description of the same code, so that it can always reflect on it.
> I agree about linking the executable code with its high-level
> representation. However, does just the 'definition' of the program (the
> semantics that the user manually applied to create the code) apply when
> speaking of a program's relation to the system? Theoretically, every
> piece of semantics-code applies to every running algorithm. For
> example, the memory-management aspect of generating code obviously
> applies to everything in the run-time. After all, shouldn't the code
> generator generate space-efficient code even if the user doesn't
> explicitly state so? Alternatively, such aspects could be already
> included implicitly in the language modules suggested to the user. The
> latter seems more appropriate.
I don't think these factors are so 'trivial'. For example, a compiler
may be able to choose between code that is faster but less
space-efficient and code that is slower but more space-efficient, and
I'm not talking about trading off compile time, but about things like
code arrangement, cache boundaries, locality, etc. In fact, it may be
trading off between making the 'xyz' chunk of code faster or the 'yz'
chunk faster, knowing that only three pieces can fit in the I-cache.
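As a toy illustration of that kind of tradeoff (all chunk names, sizes,
and speedup numbers below are invented for the example, not taken from
any real compiler), here is a brute-force sketch of picking which
chunks to compile for speed while the whole set still fits a fixed
I-cache budget:

```python
from itertools import combinations

# Hypothetical chunks: (name, size_if_fast, size_if_small, est_speedup).
# The numbers are made up purely for illustration.
CHUNKS = [
    ("x", 48, 32, 3.0),
    ("y", 40, 24, 2.0),
    ("z", 64, 40, 4.5),
]

ICACHE_BYTES = 128  # pretend I-cache capacity

def best_plan(chunks, budget):
    """Pick which chunks to compile 'fast' (bigger code) so that the
    whole set still fits in the budget and total speedup is maximal."""
    best = (0.0, frozenset())
    for r in range(len(chunks) + 1):
        for fast in combinations(chunks, r):
            fast_set = {c[0] for c in fast}
            size = sum(c[1] if c[0] in fast_set else c[2]
                       for c in chunks)
            if size > budget:
                continue  # this combination blows the cache
            speedup = sum(c[3] for c in fast)
            if speedup > best[0]:
                best = (speedup, frozenset(fast_set))
    return best

print(best_plan(CHUNKS, ICACHE_BYTES))
# With these invented numbers, speeding up 'x' and 'y' together
# beats speeding up 'z' alone, because 'z' fast plus anything else
# no longer fits.
```

The point of the sketch is only that the decision is global: which
chunk deserves the fast-but-large form depends on every other chunk
competing for the same cache, which is why evaluating such aspects one
instruction at a time is hard.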
How is the code generator supposed to operate so that it can evaluate
all such 'aspects' when the sample size approaches one instruction?
> I've already stated my position on monolithic compilers. To me, it
> seems against the direction of the project.
I think perhaps the word 'monolithic' was a bad choice. I'm not
implying that the compiler would not be built from the same constructs
as the system itself, nor that it would not share the logical blocks
describing the algorithms it uses. I was merely suggesting that a good
starting point may be to define an atomic 'compile this block'
interface to a compiler, and to let the compiler work in a more
traditional manner on that block. As we develop algorithms that make
sense for this kind of multi-layer translation and optimization, we
can try to fit them together into a better framework.
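A minimal sketch of what such an atomic interface might look like (the
names CodeBlock, Compiler, and NaiveCompiler are all hypothetical,
invented here just to make the idea concrete; note the block keeps a
handle on its high-level source, so the compiled output stays tied to
the definition it came from):

```python
from dataclasses import dataclass, field

@dataclass
class CodeBlock:
    """One unit of high-level code plus a link back to its source,
    so compiled output stays tied to its high-level description."""
    name: str
    source: str                      # the high-level definition
    annotations: dict = field(default_factory=dict)

class Compiler:
    def compile_block(self, block: CodeBlock) -> bytes:
        """Translate exactly one block; concrete compilers plug in
        their own strategies behind this single entry point."""
        raise NotImplementedError

class NaiveCompiler(Compiler):
    def compile_block(self, block: CodeBlock) -> bytes:
        # Stand-in 'code generation': just encode the source text.
        # A real backend would emit machine code here.
        return block.source.encode("utf-8")

blk = CodeBlock("hello", "print 'hi'")
obj = NaiveCompiler().compile_block(blk)
```

The design choice being illustrated is only the boundary: a
traditional per-block compiler behind one call, which fancier
multi-layer translation schemes could later replace without changing
the rest of the system.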
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + email@example.com