[gclist] controlling heapsize in BDW collector

Boehm, Hans hans_boehm@hp.com
Wed, 26 Feb 2003 12:57:44 -0800


[I recently set up the gc@linux.hpl.hp.com mailing list for discussions specific to this collector.  I'm not sure that this question is completely collector-specific, but if it is, that would be an alternative place to ask.]

Do you clear pointers to objects at the same point at which you would have explicitly deallocated them?  Otherwise, I would expect that the maximum amount of reachable memory is larger than the maximum amount of malloc/free allocated memory.  A factor of 3 seems unlikely, but not impossible.

You are really operating the collector at a point it wasn't designed for.  In particular, it sounds like you only have on the order of 10 live objects around.  The collector will perform suboptimally here for a variety of reasons:

1) Garbage collectors are inherently not terribly efficient with an average object size of 10K or so.  See the previous discussion on this list.

2) A conservative collector, or one with otherwise incomplete liveness information, will typically follow some small number of pointers on the stack that were used as compiler temporaries, but are really dead.  I would normally expect this number to be on the order of at most a dozen, and it usually doesn't matter.  But with only a dozen live objects ...

3) The collector needs to scan some amount of static data, e.g. owned by libc, during each collection cycle.  Even a 300K heap is too small to amortize that cost.  (It will try to grow the heap to compensate, though GC_set_max_heap_size or the GC_MAXIMUM_HEAP_SIZE environment variable should inhibit that.)
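For reference, a sketch of the two capping mechanisms just mentioned (`proxy` is a stand-in name for the application binary, and the environment variable requires a collector build that reads it):

```sh
# From outside the program: cap the heap via the environment.
GC_MAXIMUM_HEAP_SIZE=358400 ./proxy

# Or from inside the program, before the first GC_malloc:
#   GC_set_max_heap_size(350 * 1024);
```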

4) The collector's data structures aren't tuned for heaps this small.  The heap expansion increment and some temporary space areas are too large by default.

If you want to debug this, try building a debuggable collector and placing a breakpoint in GC_expand_heap_inner().  Looking at the stack at the last heap expansion generally gives you a good idea why it decided it needed to grow the heap.  Calling GC_dump() at that point should tell you something about what the heap looks like.  (And with a 348K heap, the size of the dump will be manageable.)
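A debugging session along those lines might look roughly like this (`proxy` is again a stand-in binary name; the collector just needs to be built with debug symbols):

```
$ gdb ./proxy
(gdb) break GC_expand_heap_inner
(gdb) run
                       # gdb stops here once per heap expansion
(gdb) backtrace        # shows why the collector wanted more memory
(gdb) call GC_dump()   # per-block picture of the current heap
(gdb) continue         # repeat until the last expansion
```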

Hans

> -----Original Message-----
> From: Michael Hicks [mailto:mwh@cs.umd.edu]
> Sent: Wednesday, February 26, 2003 11:50 AM
> To: gclist@iecc.com
> Subject: [gclist] controlling heapsize in BDW collector
> 
> 
> Hi all.
> 
> I wonder if anyone can provide some input on how to correctly set the
> heapsize for the BDW collector.  I'm trying to do some performance
> comparisons between GC and non-GC'ed apps, and in particular I want to
> examine the tradeoff between memory footprint and latency in a GC'ed
> setting.  The idea is that the more memory you're willing to 
> allow, the
> less latency impact there will be with GC, since you'll collect less
> often.  And the converse is also true.
> 
> So, I have an application that has about a 128K footprint when using
> GC_malloc and GC_free, and about a 348K footprint when removing the
> GC_free's so that the collector is used.  What I'd like to do is force
> the heapsize to be somewhere between 128K and 348K (as close to 128K
> as possible) while still using the collector, so that garbage
> collections occur more often.  Then I can assess the latency impact.
> However, when I do this by calling GC_set_max_heap_size(max_heap_size),
> GC_malloc returns NULL in basically every case unless I set
> max_heap_size to be roughly 348K.  I also set the GC_use_entire_heap
> flag to be true, with the same result.
> 
> Why would this be happening?  When using GC_free, the heap usage never
> rises above 100K, so it's not that I'm allocating a lot of batched
> objects and then freeing them all at once.  By the same token, I'd be
> really surprised if this was some kind of fragmentation overhead (2/3
> of the heap is fragmentation!!!???).  The objects being allocated are
> relatively large, ranging from 2K to 15K.  Finally, spurious retention
> also seems unlikely: to be safe I NULL all of the objects that are
> allocated (these are packets being forwarded by a proxy), and the
> results are the same.
> 
> If this is not some kind of limitation with the collector, can anyone
> suggest how I would go about debugging this behavior?  Turning off
> -DSILENT has not been too helpful.  Has anyone had success setting the
> maximum heapsize to something below what the collector would naturally
> come to?
> 
> Thanks in advance,
> Mike
>