Sun, 3 Jan 1999 11:04:55 -0800
On Sun, Jan 03, 1999 at 11:25:23AM +0100, Anders Petersson wrote:
> >Language designers can do something about it. At least, they can design
> >languages that allow maximum expressibility, so the developer doesn't have
> >to obfuscate the code to adapt it to the language. This happens a lot in
> >C because programmers have to completely reorganize their program for
> >optimization. That should be the job of the compiler. Of course, I go
> >much farther and say that the choice of "data structures", which are
> >usually chosen for speed (linked list vs. arrays vs. trees etc), should be
> >abstracted so the programmer doesn't need to decide, but the system can
> >pick which structure to use for data.
> You're speaking about _language designers_. I'm not that. mOS limits itself
> to the "public" design of the system - the language used is not dealt with.
It's not just language designers, it's the compiler implementations as
well. As I explained before, C/C++ compilers throw out lots of useful
information. Keeping that information around (like structure layouts,
enum types, argument types, even part of the code structure) does not
affect performance. In fact, most compilers can keep it around as
'symbolics' (i.e. part of the debugging information). However, better
compilers, or better languages, can certainly help software connect
more seamlessly. If you are limiting yourself to using existing C/C++
compilers for everything, then there will be limits to the amount of
platform-independence and 'componentization' you can get out of the
system.
> >* Try to figure out what to change in the source code to correspond with
> >the binary change you made. (you and the system would work together on
> >this, i.e. the system would try to figure it out but you would help if it
> >failed) It may not be possible to express the change in the language used
> >in the source, if so, then one of the below options would have to be used.
> This is just too *unrealistic* to comment on.
If you consider this just 'some level of translation' and 'some other
level of translation', it may make a bit more sense. Imagine using a
'lisp -> C' compiler (which exists TODAY). In Tunes, we would hope
that even after the translation, you could go in and change the C
code, and see the change reflected in the lisp code. The translations
are not just 'one shot' operations, but instead they are links between
two different representations of the same piece of functionality.
To me, how this works becomes less clear when you are talking about
'some form of source' and 'some form of machine binaries' as Tril is
describing above. However, for source-to-source translations it's more
straightforward.
> >* Add a low-level "note" that the change you made should be used
> >instead of the regular binary code , next time that same code is generated
> >from the source.
> The problem is that you can rely on the fact that no sane person
> would edit binary code with an honest purpose.
Consider 'changing binary code' as just 'providing an assembly-optimized
version of a function'. People do that all the time. When you do
something like provide a target-optimized version of a function, there
should still be the platform-independent implementation. The system
should be responsible for composing the appropriate blocks into a
running program. If possible, it would be nice to have the system be
capable of understanding the hand-optimized version. Perhaps someday
it may even be able to compare the hand-optimized version with the
high-level language version to test their equivalence. The important
part is that enough meta-information about the blocks be stored _in
the running system_, for software to be later written to perform these
transformations.
If you consider an existing program which has some C code and some
assembler, the delineation between the two is lost. The program is
'flattened' into only the target specific machine code. All other
information is lost.
> >Everything you do involves a typecheck. Moving the mouse, typing text,
> >running a program, deleting an object, etc. Nothing will happen unless
> >the operation to be done and the object it is operating on match their
> >types. If they don't match, that is a type error. All errors are type
> >errors. There is a flexible system to describe what to do on type errors
> >(each type error can have its own behavior, the user can customize, there
> >can be defaults, etc). Everything in the system has a type. All types
> >are explicitly defined... much of what you do in programming languages
> >today is just creating new types in my model.
> There must certainly be other errors than type errors. How about
> communication failure, division by zero or missing hardware?
FWIW: I don't understand what he's saying here either.
> Is this something like Java? Interpreted programs? Sounds even
> slower than my old 386...
It's something like the new Java HotSpot VM, which is not yet
released. HotSpot is based on the Self dynamic-recompilation
technology (self.sunlabs.com). The Java VM (IMO) is broken in many
ways, mostly because they polluted the runtime with a 'class-based
object model'. However, there are definitely blocks of code which are
faster in HotSpot/Self, and there are blocks of code which are faster
in traditional static languages. Some projects are trying to bring the
benefits of dynamic recompilation to static languages (go look for `C,
a.k.a. Tick-C, on the MIT Exokernel pages). Tunes intends to follow Self a
bit more in doing dynamic recompilation.
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + firstname.lastname@example.org