Why Bytecode?

Fare Rideau rideau@ens.fr
Sun, 8 Feb 1998 00:55:06 +0100 (CET)

Hum, as is sadly usual for me,
I've spent far too much time writing on this minor question of bytecodes,
instead of doing really useful work.
Sorry about that.

I'll try to do better next time...

>>>>: Maneesh Yadav
>>>: Eric Biederman
>>: Faré
>: Eric Biederman

>>>> OK, this may be a stupid comment, but I can't figure out a good answer.
>>>> Why is it that VMs like Java implement their thing with a universal
>>>> bytecode?
>>> Efficiency, and portability.  A byte code is a no brainer to
>>> interpret, just do a switch on the byte code, and go to the right
>>> place to interpret that.
>> I agree with the latter, but not with the former.
>> Bytes are neither efficient nor portable, unless your measure
>> of efficiency is time taken to code the interpreter:
>> bytecodes are too low-level for good compilation,
>> not high-level enough to amortize interpretation overhead;
>> too small to allow good/easy expansion,
>> not suited to good compression.
>> Plus modern architectures may make extracting bytes from memory inefficient
>> (especially on word-addressable architectures,
>> especially if you have to coalesce non-aligned data back into words,
>> especially when the word size is not a multiple of 8).

> On word-accessible architectures, modifying memory in byte-size
> chunks can be a problem.
Even more so when memory word size is 20 bits (as in the F21).

> But bytes are portable.
So what? You can use bytes to encode lots of things
other than directly interpretable bytecodes.
For instance, compressed abstract program trees.
What's wrong with bytecodes is not the use of bytes as an encoding standard,
but the requirement that this standard be directly interpretable.

Bytecodes are good in cases where memory is so tight,
as in embedded controllers or boot PROMs,
that you can't afford a decode phase.
For this precise purpose, there exists OpenBoot,
and it suits the need perfectly.
In all other settings, I think the idea of bytecode
might be an expedient quick and dirty hack,
but not a useful generic answer.
This is not to reject the idea of bytecode for special-purpose
sublanguages (such as format patterns) where the size
requirement is justified.

> Almost all
> architectures support them in some fashion.  The trick on
> word-accessible architectures is to figure how bytes are accessed
> efficiently.
Only the fashion is not portable at all.
Neither is adapting the bytecode encoding to it.

> As far as compilation, bytecodes were never designed for that.
Which is why they are not a solution to code transport.

> JIT is
> a fairly new technology that people haven't completely explored yet.
JIT is an ancient technology that has gained renewed interest
with the opportunity given by the JVM.

>> All the serious experiments done show that bytecode just doesn't cut it
>> in terms of either performance or portability.
> Wrong:
> Bytecodes are a good compromise of efficiency and portability.
Bytecodes are a good compromise for people not interested in efficiency,
who prefer to stress portability and fast delivery.

> An experiment done with forth using different threading mechanisms,
> showed several mechanism were more efficient than a giant switch
> statement, but there were none more efficient than a giant switch
> statement that could be coded portably in C.
> Using computed goto's of gcc they were able to make their test
> portable, but you can't count on gcc always being there.
Caml-light already did that, with a clever use of CPP macros
to select the use of GCC computed gotos when available.
Still, wordcode OCaml was much faster, and native-compiled
OCamlopt even more so.

> They probably could have gained the efficiency with byte codes by
> holding a register with the current word, and shifting it to get the
> next byte.  And only reading from memory when their register ran dry.
The efficient decoding of bytecode is too machine-dependent
to be portable, which defeats part of the purpose.
Things like the availability of extra caller-saved registers,
indirect calls, the relative speeds of byte transfers, shifts,
multiplies, additions, code arrays, the size of caches, etc.,
all matter a great deal.

>> Even then a judicious choice of bytecode is not easy,
> Agreed.
>> unless you have a set of tools to consistently modify the bytecode system
>> and retarget the bytecode generators (in an optimizing way).
> These last comments come from ideas/research I'm not familiar with.
It's again a matter of metaprogramming: having programs that help
you semi-automatically find a good bytecode interpreter.
Of course, since bytecodes are a bad solution, you'd better
be investing in meta-programs that do direct compilation.

>>> Also basic optimizations can be done ahead of time.
>> Bytecode doesn't make that easier than any representation!
> It makes it more efficient than a source interpreter, which cannot do
> that in a trivial implementation.
Optimizations can be done just as easily at any level of representation.
Having a high-level representation only leaves room for higher-level
optimizations, which don't preclude the use of low-level optimizations
on whatever site-local low-level code is compiled.
If what you mean is "a dumb interpreter with a bit of peephole optimization
is simpler than an optimizing compiler", then granted,
but this doesn't lead us far.

> The things you can do are the basic optimizations of:
> loop strength reduction, constant propagation, common subexpression
> elimination, and redundant instruction removal.
> Those are some of the optimizations with the highest payoff, so it does
> help.  Not as much as if you were targeting native code however.
These are optimizations with the highest payoff *for low-level languages*
such as C. And even then, you must reconstitute lots of high-level
information, and transform your bytecode back into program trees,
which completely defeats the purpose of interpretable bytecode!
In high-level languages, the most important payoffs are in having
good analysis of what needs to be computed at all (including
memory (de)allocation), and in the format of data (including choosing
the right representation for objects).

For instance, using machine-dependent fixnums instead of bignums
is a major win, which cannot be done portably
without crippling architectures with a different fixnum size.
Will you require fixnums to be 32-bit as in Java,
when there are 4-, 8-, 16-, 20-, 64-, and 128-bit machines available,
and when even on 32-bit machines, runtime conventions may make
fixnums 30- or 31-bit integers?

> And of course over a classic interpreter you save parsing and lexical
> analysis, which are not cheap.  The parsing especially typically
> requires a second pass to handle inherited attributes.  But all that cost
> you can save with a simple syntax (lisp,forth) instead of bytecodes.
And you forget the possibility of using compressed preparsed and
preattributed trees. Smaller than bytecode, better to compile.
Juice does it. Result: in the time the (longer) bytecode takes to be
loaded from the network, Juice can decode and compile the tree
into native code, which results in better speed than JIT.

>> Just any code representation allows for peephole optimization;
>> bytecode tend to be too low-level and make high-level optimizations
>> impossible in the backend.
> Which optimizations are you referring to?
> And again, bytecodes are not designed to go into a compiler
> necessarily so it's not terribly important.
It's important that they cannot go into a compiler.
By definition, not all libraries are available as
native code on the local computer. If you require them
for decent speed, then you still have to design a
way to transport library code in a portable, efficient,
and secure way.

Interpretable bytecodes just get in the way.
They are the typical example of expedient, not useful, techniques,
as far as transport of code is concerned.

> And bytecodes as far as pure compilation should be no worse than a
> normal high level language, for compilation (there should be no self
> modifying code allowed).
Of course they are worse! Compiling a low-level language is ALWAYS
harder than compiling a high-level language, because you have less
information available about what the code means, and must guess it
among the surrounding noise of implementation details.

Call me when you can compile non-self-modifying linux/i386 code
into linux/alpha code that's competitive with code obtained from
compiling the source. All the more so when you can't assume that
all code follows the usual GCC conventions.
The same kind of prowess is needed when compiling any low-level
language: you must guess the high-level meaning from the low-level
details, and even when you guess well, you can't optimize
unless expensive global code analysis justifies your assumptions.

>>> Besides I haven't seen a `universal' bytecode.
>> What do you mean by "universal"?
> I guess what I meant was well suited to all languages,
> instead of functionally complete.
Oh, as in UNCOL.
And still you recommend them as a universal way to transport code?
Don't you find there's a contradiction?

>> http://www.tunes.org/~tunes/doc/Review/VMs.html#Juice

> A forth interpreter is simpler. [...]
> But bytecodes are simple even if they aren't the simplest.
Except that what we want to simplify isn't the work of people writing
the portable code transport backend.
Such simplification may be expedient in transient preliminary versions,
when we are not focusing on performance at all.
Even then a dumb LISP interpreter is easier to write than a
bytecode interpreter, and suits our needs better, since we
want to transport *high-level* code, and don't need to write
a LISP->bytecode compiler, either.
And even with such a compiler, it might be better to use LISP,
not bytecode, as the portable transport format, and then
replace bytecode altogether with wordcode or whatever else.

Bytecodes are NEVER an optimal solution.
They're not suited to portable code transport,
they're not suited to code interpretation,
they're not even suited to quick-and-dirty development,
they're not suited to anything,
except, as said multiple times, to controllers with tight memory limitations.

> The techniques of using bytecodes are well tested.
The techniques for structured interpreters and wordcode are as well tested.

> Techniques such as JIT are not.
All this time, I've not been talking about JIT, but just plain compiling.
Systems that compile code as it's typed/loaded have long existed.

> I have not argued that bytecodes are intrinsically better.
I have argued that they are intrinsically worse,
except in a case where we already have a satisfying,
portable, and ANSI standard one (OpenBoot).

>> And we all know that if you make your bytecode a standard way to
>> transport code (see the JVM), then you're casting into stone your
>> bytecode specification and you can't upgrade. A high-level representation
>> makes upgrade much easier.
> Only temporarily.  Significant changes shouldn't be needed for years,
> at which point you can bump your version number and have a different
> representation.
Uh??? What about people being stuck with the JVM not being suited
to efficiently compiling their language (pick any language but Java;
see the "performance" of Kawa)?
The very first year the JVM appeared, it was already obsolete.

> Standards move like glaciers I admit, but once you have the basic
> capabilities decided upon that's not a bad thing.  And they _do_ move.
Some standards leave so much room for extensibility that
this is no problem (see LISP, FORTH).
Bytecodes are intrinsically unextensible
(remember? there are only 256 different byte values!).

> Bytecodes are a sound design choice in a lot of ways.
I have seen only one, very narrow, such way.

> But it has not
> been my intention to recommend them, merely to explain them.
So was mine.
I won't refuse a bytecode interpreter for Tunes, as a temporary solution.
But that sure won't be my focus as a permanent solution to any problem.

> As an aside:
> I have been working on a compiler (in my compiler construction
> class), and it is my intention to have as many front ends as possible
> -- c, forth, scheme, c++, etc.  Although not all of them will be
> completed for my class.
Many front ends for what? Compiler front-ends? Compiler back-ends?

> It is my hope that the intermediate language of the compiler will
> become a better portable assembler than C itself.
Search news:comp.compilers archives for "UNCOL".
For practical matters, see ANDF, Juice, and OpenBoot.
You may find more on
And if you find something that's not there,
please send me a pointer about it.

> That plus the
> addition of a good API for runtime generated code would provide the
> basis for a lot of things.
Don't expect a lone man to provide libraries for everything on earth.
A design that's not extensible by third parties is a bad design.

> p.s.
> Never forget the number one question of research:
> Have they achieved their gains by simplifying their problem in a
> manner that isn't acceptable to the real world?
Agreed (well, I wouldn't call it number one, but who cares).
Bytecodes solve the problem of developing a dumb portable interpreter.
Which is an oversimplification of *any* non-toy real-life problem.
Except again, for extensible firmware, for which there is OpenBoot.


== Faré -=- (FR)ançois-René Rideau -=- (VN) Vương-Vũ Bân -=- rideau@ens.fr ==
Join a project for a free reflective computing system! | 6 rue Augustin Thierry
TUNES is a Useful, Not Expedient System.            | 75019 PARIS     FRANCE
http://www.tunes.org/~tunes/                  -=- Reflection&Cybernethics ==