[gclist] Java vs. ML, particularly GC
Sat, 23 Dec 2000 21:03:42 -0500
Thanks for these explanations.
My question is how you can deduce the object overhead by doing experiments with the JVM. While I agree with Eliot's model of 2 words of overhead for objects and 3 words for arrays for JDK 1.3, my estimates on NT are different. Specifically:
Windows NT 4.0 x86
java version "1.3"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0-C)
Java HotSpot(TM) Client VM (build 1.3.0-C, mixed mode)
I measure 8 bytes per Object and 16 bytes per array.
java version "1.2"
Classic VM (build 1.2.2, build Linux_JDK_1.2.2_RC4, native threads, nojit)
I get 16.24 bytes per Object and 16.24 bytes per array.
I thought the fraction .24 could be due to other work going on that I'm not accounting for. However, both these examples have only one thread running: mine.
Some other JVMs, when running this benchmark, have several threads, apparently related to JIT compilation.
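For what it's worth, the measurements above can be reproduced with a sketch along these lines (the class and method names are mine, not a standard benchmark; REF_BYTES assumes 4-byte references, as on the 32-bit VMs discussed here):

```java
// Minimal sketch: allocate N plain Objects, measure heap growth, and
// subtract the cost of the array that keeps them reachable.
public class ObjectOverhead {
    static Object[] keep; // static so the objects stay live during measurement

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Returns estimated bytes per Object. Assumes 4-byte references.
    static double measure(int n) {
        final int REF_BYTES = 4;
        System.gc();                 // settle the heap before measuring
        long before = usedHeap();
        keep = new Object[n];
        for (int i = 0; i < n; i++) keep[i] = new Object();
        long after = usedHeap();
        return (after - before - (double) REF_BYTES * n) / n;
    }

    public static void main(String[] args) {
        System.out.println("bytes per Object: " + measure(1000000));
    }
}
```

Noise from other threads (GC, JIT) shows up directly in the result, which is why the thread count matters above.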
To get back to the original question of Java consing, String consing is the biggest surprise I've seen: an empty String takes 40 bytes. String interning can be a valuable optimization. If functional languages naturally share structure, that could be a big advantage.
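The sharing that interning buys can be seen directly: two Strings with equal contents are distinct heap objects until intern() maps them to one canonical copy.

```java
// Interning collapses equal String contents to one shared instance,
// so repeated values cost one object instead of many.
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("hello"); // fresh object on the heap
        String b = new String("hello"); // another fresh object
        System.out.println(a == b);                   // false: distinct objects
        System.out.println(a.intern() == b.intern()); // true: one canonical copy
    }
}
```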
At 03:59 PM 12/22/2000 , David F. Bacon wrote:
>in particular, only two bits are needed per object for the hashcode in almost
>all cases. these bits represent three states: unhashed, hashed, and
>hashed&moved. if an object's state is "hashed" and it is moved by the collector, then the old
>address is copied to the end of the new object and the state is changed to
>"hashed&moved". since most objects die young, and most objects don't have
>their hashCode() method invoked, this almost never happens.
>we implemented this optimization as part of the thin locks work (see pldi98).
>there is no inherent reason why a copying collector for java should perform any
>worse than for other languages.
>p.s. dylan's hashcode semantics are heinous.
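The state machine David describes can be sketched as follows. This is not IBM's actual implementation (a real VM keeps the two bits in the object header next to the thin-lock bits, and the saved word lives at the end of the copied object); an int field stands in for them here.

```java
// Sketch of the unhashed / hashed / hashed&moved scheme: the hash is the
// object's address until a hashed object is moved, after which the old
// address travels with the object.
public class HashState {
    static final int UNHASHED = 0, HASHED = 1, HASHED_AND_MOVED = 2;

    int state = UNHASHED;
    int address;         // simulated object address
    int savedOldAddress; // appended to the object only after a move

    int hash() {
        if (state == UNHASHED) state = HASHED;
        // After a move, the hash is the old address saved with the object.
        return (state == HASHED_AND_MOVED) ? savedOldAddress : address;
    }

    void movedBy(int newAddress) { // called by the copying collector
        if (state == HASHED) {
            savedOldAddress = address; // grow the object by one word
            state = HASHED_AND_MOVED;
        }
        address = newAddress;
    }

    public static void main(String[] args) {
        HashState o = new HashState();
        o.address = 0x1000;
        int h = o.hash();                  // forces the HASHED state
        o.movedBy(0x2000);                 // collector copies the object
        System.out.println(h == o.hash()); // true: hash is stable across moves
    }
}
```

Since most objects die before being both hashed and moved, the extra word is almost never paid, which is the point of the optimization.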