[virtmach] Definition of a virtual machine?
Francois-Rene Rideau
fare@tunes.org
Thu, 18 Nov 1999 09:49:08 +0100
On Thu, Nov 18, 1999 at 07:29:03AM -0000, Stephen Pelc wrote:
> 1) Open Firmware tokenises *Forth source* code and is language
> specific.
> 2) SENDIT/OTA tokenises at the object level.
OK, so OF tokens are not designed to be directly executed,
whereas SENDIT/OTA tokens are (à la JVM). [I think the "direct execution
by resource-limited hardware" criterion is more to the point than
"source" vs "object", which are very fluid concepts.]
> We added direct
> support for local variable frames in main memory, and changed
> the encoding several times after profiling the compiler output.
I'm not sure what you mean by "main memory". Does that mean off-stack,
off-heap variables (like C "static" variables inside functions)?
Or does that mean strictly framing the return stack (as opposed to the
free-form return stack of Forth), or adding a third stack?
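To make the question concrete, here is a rough C sketch of the two readings
I can imagine; both are guesses on my part, not a description of SENDIT/OTA.

#include <stdio.h>

/* Reading 1: off-stack, off-heap locals, like a C function-scope static;
   the value lives in ordinary (main) memory for the life of the program. */
int next_id(void) {
    static int id = 0;   /* neither on a stack nor on the heap */
    return ++id;
}

/* Reading 2: an explicit frame area for locals in main memory, separate
   from the data and return stacks (in effect a third stack of frames). */
enum { CELLS_PER_FRAME = 8, MAX_FRAMES = 64 };
static int frames[MAX_FRAMES][CELLS_PER_FRAME];
static int frame_top = 0;

static int *frame_enter(void) { return frames[frame_top++]; }  /* on word entry */
static void frame_exit(void)  { frame_top--; }                 /* on word exit  */

int main(void) {
    int *locals = frame_enter();      /* this word's locals frame */
    locals[0] = next_id();
    printf("local 0 = %d\n", locals[0]);
    frame_exit();
    return 0;
}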
Maybe it's simpler for you to publish a URL, if there are publicly
available specifications for SENDIT/OTA. I'm also interested in
comparative reviews versus attempts to do roughly the same thing as you do
with a lightweight JVM. (Also, where is SENDIT/OTA in its life cycle?)
Another concern is certification and authentication of dynamically loaded
code: how does SENDIT/OTA compare to an embedded JVM?
>[Constraints]
> a) Terminals are resource limited
> b) Transmission costs are significant
I understand these are indeed hard constraints
that justify your choice of compact executable bytecode;
however, I feel this choice of VM implementation would be very suboptimal
for resource-rich networked devices like the ones I'm interested in
(>4MB RAM, >10MIPS, >10Kbps LAN/WAN).
> If you consider the virtual machine
> specification to be a "surface", then the terminal hardware
> needs to be verified against the surface. This means that all
> terminals will behave in the same way with the same binary.
> Similarly the compilers must be verified against the surface.
> The whole point is to avoid having to verify each compiler
> against each terminal.
Sure. Minimizing certification costs by having well-defined interfaces
is a very important goal in any non-trivial architecture.
I agree that it is a very good idea to have such a well-defined interface
for portable code, with a source-language-specific compiler frontend
on one side of the surface and a target-architecture-specific backend
on the other.
What I proposed was that the latter backend not be a bundle of
chip+runtime+interpreter, but rather a bundle of chip+runtime+compiler.
The idea was that by moving part of the portability support off the chip
(the on-chip interpreter replaced by an off-chip compiler), you can take
better advantage of on-chip resources (and hence also cope with smaller,
cheaper chips), while gaining more flexibility in the maintenance and
evolution of this portable architecture.
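As a very rough sketch of what I mean, reusing the invented token set from the
earlier example and crudely standing in for native code with function pointers:
the decoding work moves into an off-chip compile step, and the chip only runs
the installed result. This illustrates the idea, not how SENDIT/OTA works.

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

typedef struct { void (*fn)(int); int arg; } Insn;

static int stack[64], sp;
static void do_push(int v)      { stack[sp++] = v; }
static void do_add(int unused)  { (void)unused; sp--; stack[sp - 1] += stack[sp]; }
static void do_print(int unused){ (void)unused; printf("%d\n", stack[--sp]); }

/* Off-chip step: translate the portable tokens once, before loading. */
static int compile(const unsigned char *code, Insn *out) {
    int n = 0;
    for (const unsigned char *pc = code; *pc != OP_HALT; ) {
        switch (*pc++) {
        case OP_PUSH:  out[n++] = (Insn){ do_push, *pc++ }; break;
        case OP_ADD:   out[n++] = (Insn){ do_add, 0 };      break;
        case OP_PRINT: out[n++] = (Insn){ do_print, 0 };    break;
        }
    }
    return n;
}

/* On-chip step: no token decoding left, just run what was installed. */
static void run(const Insn *insn, int n) {
    for (int i = 0; i < n; i++) insn[i].fn(insn[i].arg);
}

int main(void) {
    const unsigned char prog[] =
        { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    Insn compiled[64];
    int n = compile(prog, compiled);
    run(compiled, n);  /* prints 5 */
    return 0;
}

In practice the off-chip step would of course emit real machine code for the
target chip; the point is only that the per-token decoding cost is paid once,
before loading, rather than on every execution.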
Also, the _infrastructure_ for such an architecture
(mobile code brokers, negotiation protocols for object code architecture,
certification and authentication protocols, APIs for compiler invocation,
caching of compiled code, maintenance of typed code databases, etc.)
would be scalable and reusable for interchange of mobile code
among resource-rich devices as well as resource-poor ones
(allowing costs, features, debugging, and certification to be shared
by more people, and hence wider and cheaper progress).
[ "Faré" | VN: Уng-Vû Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]
Reporter: Mr Gandhi, what do you think of Western Civilization?
Gandhi: I think it would be a good idea.