[gclist] ratio of heap size to live data?
Boehm, Hans
hans_boehm@hp.com
Thu, 5 Oct 2000 12:10:31 -0700
> From: Nick Barnes [mailto:Nick.Barnes@pobox.com]
> At 2000-10-04 16:57:33+0000, "Boehm, Hans" writes:
> > My impression is that for copying collectors the to-space component
> > of C is often an issue, though probably less so for ML. Even if the
> > to space for the older generation is only used rarely, it still needs
> > to be resident at that point. That means it either needs to remain
> > resident, or it needs to be paged in and out. Paging in (and out)
> > something on the order of (L+R) is often a nontrivial operation.
>
> I'm not sure I follow you here. Let's look at a specific example for
> MLWorks. Generation 0 is the nursery, collected many times per CPU
> second; putting maybe 1 MB per second into generation 1, which gets
> collected maybe every ten CPU seconds, putting maybe 2 MB per CPU
> minute into generation 2. So perhaps generation 2 gets collected
> every five CPU minutes (these numbers are all very approximate; the
> actual collection policy, including scheduling and tenuring, is
> adaptive, depending on survival rates, trying to minimise the total
> cost of memory management).
I think the fundamental issue is that there are huge differences in client
behavior, especially across languages and programming styles. I'll assert
that for a reasonable fraction of C/C++/Cedar programs a 1 MB nursery gets
you essentially a 100% survival rate for significant stretches of program
execution. (See for example Paul Wilson's graphs's of gcc memory usage.
Zorn's old ptc benchmark is probably the worst case, since everything
survives until program exit. We also saw this with the Xerox Cedar editor
(which tended to get invoked on large mail files).
There's at least anecdotal evidence that it's less true for Java, since the
long-lived allocations are diluted by temporary objects. Nonetheless,
according to the Dieckmann and Hoelzle paper in ECOOP '99, two of the
SPECjvm98 benchmarks have 1 MB survival rates above 10%, and one of those
is above 20%. (Three others have maximal live data sizes of 1.2 MB or
less, and are thus not interesting here.)
With this sort of behavior, you're driven towards much more frequent full
(or nearly full) collections than what you give above, which I assume is
based on experience with ML.
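To make that concrete, here is a back-of-the-envelope sketch in C. The
allocation rate, the room in the older generation, and the survival
fractions are all numbers I've made up for illustration, not measurements:

    #include <stdio.h>

    int main(void)
    {
        const double alloc_rate   = 4.0;  /* assumed allocation rate, MB per CPU second */
        const double old_gen_room = 12.0; /* assumed room in the older generation, MB */
        const double survival[]   = { 0.02, 0.10, 0.20, 1.00 };
        int i;

        for (i = 0; i < 4; i++) {
            /* MB per second promoted out of the nursery ... */
            double promoted = alloc_rate * survival[i];
            /* ... and hence CPU seconds until the older generation fills. */
            double interval = old_gen_room / promoted;
            printf("survival %3.0f%% -> full collection every %6.1f CPU seconds\n",
                   survival[i] * 100.0, interval);
        }
        return 0;
    }

At a 2% survival rate you get intervals of a couple of CPU minutes, much
like your schedule; at the 100% end you're collecting the whole older
generation every few seconds.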
>
> Look at a generation 2 collection. Plucking numbers from the air,
> generation 0 is empty, suppose generation 1 has 4 MB, generation 2 has
> 12 MB, and maybe there's 10 MB in higher generations. So L+R+U is 26
> MB. If the policy is working well, maybe 6 MB will survive this
> collection (so S = 6 MB for this collection).
>
> The generation 2 collection proceeds like this:
>
> - map 12 MB of to-space from /dev/zero. Our VM footprint now is 38
> MB. Note that the operating system doesn't actually make anything
> resident at this point; the mapping is on-demand. If everything is
> resident, we still have a resident set size of 26 MB.
>
> - collect generation 2 into to-space. As we collect, the OS provides
> 6 MB of zero-filled pages for us on demand. These pages aren't
> being paged in, they are being manufactured on the fly. Shame about
> the zero-filling, but that's another thread. If real memory is
> tight then other pages (hopefully from other processes :-) may be
> paged out to provide the page frames.
>
> - At the end of the collection, just before we start unmapping, the
> resident set size is 32 MB and C = 6 MB. Our VM footprint is still
> 38 MB.
>
> - unmap the remainder of to-space. The only effect this has is to
> trim 6 MB from our VM footprint (now 32 MB). This memory was never
> actually resident, and cost us next to nothing.
>
> - unmap the generation 2 from-space. This doesn't get paged out, it
> just gets discarded. Now the resident set size is 20 MB (L+R+U),
> and so is the VM footprint, and C is zero again.
>
> So during this whole collection (which takes epsilon longer than
> scanning 10 MB), C rises from 0 to 6 MB and back again, but it's never
> 12 MB, and the only paging is (maybe) 6 MB worth of paging out to free
> the frames for our 6MB worth of /dev/zero.
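For what it's worth, the mapping trick you're describing comes down to
the usual Unix idiom. A minimal sketch (not MLWorks code; error checking
omitted):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stddef.h>

    /* Reserve to-space as demand-zero memory.  The mapping enlarges the
     * VM footprint immediately, but the kernel only manufactures
     * zero-filled pages for the addresses the copier actually touches,
     * so the resident set grows by the survivor volume, not by the full
     * to-space size. */
    void *reserve_to_space(size_t bytes)
    {
        int fd = open("/dev/zero", O_RDWR);
        void *to = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE, fd, 0);
        close(fd);            /* the mapping outlives the descriptor */
        return to;            /* MAP_FAILED on error; not checked here */
    }

    /* Dropping from-space (or the unused tail of to-space) is just an
     * unmap: the pages are discarded, nothing is written back to disk. */
    void discard_space(void *base, size_t bytes)
    {
        munmap(base, bytes);
    }

(On systems without /dev/zero, an anonymous mapping via MAP_ANON does the
same job.)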
And if you're really short on memory (the interesting case here, though
hopefully not a frequent one), you'll have to page those 6 MB back in before
the next GC. (With luck, the pages you evict will be clean, so you only
have to page them back in later. And with luck the originals will be on
disk, not on that old 2X CD-ROM.) Thus you may end up with 12 MB of disk
traffic, and up to something like 3000 disk seeks. My guess is that
realistically you'll see a 5-10 second delay if you really need to page
the 6 MB back in, since you need the old memory in order to continue
running.
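Roughly where those numbers come from, assuming 4 KB pages and a disk
with about a 10 ms random access time and a few MB/s of sequential
bandwidth (all assumptions, obviously):

    #include <stdio.h>

    int main(void)
    {
        const double traffic_mb = 12.0;  /* 6 MB evicted plus 6 MB paged back in */
        const double page_kb    = 4.0;   /* assumed page size */
        const double seek_s     = 0.010; /* assumed random access time */
        const double disk_mb_s  = 4.0;   /* assumed sequential bandwidth */

        double pages = traffic_mb * 1024.0 / page_kb;   /* ~3000 page transfers */
        double worst = pages * seek_s;                  /* one seek per page */
        double clustered = (pages / 16.0) * seek_s      /* say 16 pages per seek */
                           + traffic_mb / disk_mb_s;    /* plus raw transfer time */

        printf("%.0f page transfers; ~%.0f s worst case, ~%.0f s clustered\n",
               pages, worst, clustered);
        return 0;
    }

The 5-10 second guess assumes the pager manages a fair amount of
clustering; purely random page-ins would be considerably worse.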
>
> Does this make sense?
I think we basically agree on most things, except the assumptions about
client behavior. I'm still very concerned about applications that
alternately build up and drop huge data structures. They seem to be a
significant fraction of real applications. And users (well, I, anyway :-))
tend to be more sensitive to GC performance during heap growth spurts,
since that tends to be the time during which GC overhead is most
noticeable anyway. The net effect of that here would be to increase the
generation 2 collection frequency, possibly dramatically, and possibly to
temporarily increase C to 12 MB instead of 6 MB.
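In the numbers of the example above (50% is the "policy working well"
case; 100% is roughly what a growth spurt looks like; both assumed):

    #include <stdio.h>

    int main(void)
    {
        const double gen2_mb = 12.0;  /* from-space size in the example above */
        const double steady  = 0.5;   /* survival when the policy works well */
        const double spurt   = 1.0;   /* survival while a huge structure is built */

        /* Peak to-space residency (C) is just the volume actually copied. */
        printf("steady state: C peaks near %4.1f MB\n", gen2_mb * steady);
        printf("growth spurt: C peaks near %4.1f MB\n", gen2_mb * spurt);
        return 0;
    }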
Hans