[gclist] Looking for techniques to deal with very large Java objects
JHaungs at acm.org
Sat Sep 11 10:29:31 PDT 2004
Might anyone have some pointers to papers dealing with managing large
objects (>100 MB) in stock Java VMs?
We have a production server environment where large objects come in from
the network, are buffered, massaged, and sent back out again. We need a
compact buffering scheme that still allows an object's sub-parts to be
traversed in RAM.
The smaller the fragments, the more header overhead and the worse the
locality, paging, and thrashing problems. The larger the fragments, the
more time the GC spends moving and compacting them, and the harder it is
to find a contiguous free block in a fragmented heap, even when there is
technically enough total free space.
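For concreteness, the kind of compact-but-traversable layout we have in
mind looks roughly like the sketch below (class and method names are made
up purely for illustration); the fragment size is the knob that trades the
header/locality overhead against the contiguous-allocation pressure
described above:

import java.util.ArrayList;

// Sketch only: one logical object stored as fixed-size byte[] fragments,
// so no single allocation has to be large and contiguous, yet sub-parts
// can still be walked by offset entirely in RAM.
class ChunkedBuffer {
    private final int fragmentSize;             // the tuning knob above
    private final ArrayList fragments = new ArrayList();  // byte[fragmentSize] entries
    private long length = 0;                    // total bytes appended so far

    ChunkedBuffer(int fragmentSize) {
        this.fragmentSize = fragmentSize;
    }

    // Append incoming bytes, filling the current fragment before adding a new one.
    void append(byte[] src, int srcOff, int len) {
        while (len > 0) {
            int fragIndex = (int) (length / fragmentSize);
            int fragOff = (int) (length % fragmentSize);
            if (fragIndex == fragments.size()) {
                fragments.add(new byte[fragmentSize]);  // grow one fragment at a time
            }
            int n = Math.min(len, fragmentSize - fragOff);
            System.arraycopy(src, srcOff, (byte[]) fragments.get(fragIndex), fragOff, n);
            length += n;
            srcOff += n;
            len -= n;
        }
    }

    // Random access by logical offset, so sub-parts can be traversed in place.
    byte byteAt(long offset) {
        if (offset < 0 || offset >= length) {
            throw new IndexOutOfBoundsException("offset " + offset);
        }
        byte[] frag = (byte[]) fragments.get((int) (offset / fragmentSize));
        return frag[(int) (offset % fragmentSize)];
    }

    long length() {
        return length;
    }
}

With 64 KB fragments, for example, a 100 MB object becomes about 1600
small arrays plus one modest index list, which is exactly the kind of
tradeoff we're trying to reason about.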
We've thought about a separate C-managed heap for large objects, just to
get them out of the way of the Java heap, but we're reluctant to add C to
an all-Java system. We've also considered a disk-based system, but since
the objects can generally fit in RAM, a disk-buffering solution seems too
complicated and duplicates what paging already does.
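For the record, the nearest stock-Java equivalent we can see is
java.nio.ByteBuffer.allocateDirect (1.4 and later), which puts the bulk
bytes outside the garbage-collected heap without any JNI. A rough sketch
of that direction (again, names are hypothetical):

import java.nio.ByteBuffer;

// Sketch only: the bulk bytes of a large object live in an NIO direct
// buffer, allocated outside the garbage-collected heap, while a small
// on-heap wrapper carries whatever structure is needed to traverse
// sub-parts.
class OffHeapBlob {
    private final ByteBuffer data;      // direct buffer: contents are off-heap

    OffHeapBlob(int capacity) {
        this.data = ByteBuffer.allocateDirect(capacity);
    }

    // Append incoming bytes at the buffer's current position.
    void append(byte[] src, int off, int len) {
        data.put(src, off, len);
    }

    // Absolute get: inspect a sub-part at a known offset without copying it out.
    byte byteAt(int offset) {
        return data.get(offset);
    }

    int size() {
        return data.position();         // bytes written so far
    }
}

The catch, as far as we can tell, is that the off-heap memory is only
released when the small wrapper object is eventually collected, so this
sidesteps compaction but not the GC itself.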
There's a one-page section in the Jones/Lins book about separating
headers from pointer-free bodies, but that assumes you're writing the GC,
not merely using a stock one. And there's nothing in LNCS 637 (IWMM '92)
about this.
Just wondering if anyone has done any practical research into this problem.
Thanks,
Jim