[gclist] ref-counting performance cost
Bill Birch
birchb@ozemail.com.au
Sat, 2 Sep 2000 00:20:05 +1000
Hi,
I built a reference-counting Lisp interpreter and found that about 20% of CPU cycles
were spent doing reference counting (RC). Raw data below. I have since reduced
this by about 10% by removing unnecessary RC operations in some functions. Mind you,
this interpreter is not efficient, so in a 'real' interpreter the proportion
of time spent in RC would actually be higher.
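To give a feel for where those cycles go, here is a minimal sketch (plain C,
illustrative only, not the actual RefLisp source) of the bookkeeping a
reference-counted cons cell needs on every pointer copy and drop. The names
are chosen to mirror the reference/purge entries in the profile below:

#include <stdlib.h>

/* Illustrative sketch only; not the RefLisp implementation. */
typedef struct cell {
    int ref;                       /* the reference count                  */
    struct cell *car, *cdr;        /* assume a plain cons cell for brevity */
} cell;

static void reference(cell *c)     /* one increment on every pointer copy  */
{
    if (c != NULL)
        c->ref++;
}

static void purge(cell *c)         /* one decrement on every pointer drop  */
{
    if (c == NULL || --c->ref > 0)
        return;
    purge(c->car);                 /* last reference gone: release the     */
    purge(c->cdr);                 /* children as well, then free the cell */
    free(c);
}

Every time the interpreter copies or discards a pointer it pays for a call
like these two, which is why .reference, .dereference and .purge rank so
high in the profile.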
[Opinion: Despite its problems, reference counting remains probably the
most widely used resource management technique (COM and JavaScript are
famous examples). It is perhaps a shame that, with so many programmers
using it, there isn't better research available.]
Bill Birch
17.2.2 Internal Function Calling Profile
The following table is the output from the UNIX "prof" utility run on
Version 2.28 of RefLisp, compiled with the -p option. The test was run
using sort.lsp and bignum.lsp.
Subsequent versions will have the most heavily used functions replaced by
macros; functions replaced with macros are indicated by a leading *.
__mcount is the profiling subroutine, and should be excluded from
discussions about resource usage. The less-frequently used functions
are not listed.
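By way of illustration, the macro replacement amounts to something like the
sketch below (plain C, not the actual RefLisp definitions); the profile
itself follows the sketch.

/* Sketch only. In this run the hot accessors are ordinary functions,
 * so prof charges a call to every use: */
struct cell { struct cell *car_field, *cdr_field; };

struct cell *car(struct cell *c) { return c->car_field; }  /* ~3.5M calls below */

/* A macro version expands to an inline field access, so the call and
 * its per-call overhead disappear: */
#define CAR(c) ((c)->car_field)
#define CDR(c) ((c)->cdr_field)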
Name           %Time  Seconds  Cumsecs    #Calls  msec/call
.__mcount       37.0    30.46    30.46
*car             8.6     7.06    37.52   3569237     0.0020
*cdr             7.6     6.28    43.80   3208130     0.0020
.eval            5.4     4.43    48.23    471698     0.0094
.dereference     5.3     4.38    52.61   1087608     0.0040
*atom            3.9     3.24    55.85   3509798     0.0009
.purge           3.7     3.05    58.90    745590     0.0041
.reference       3.6     2.93    61.83   1852270     0.0016
*subrp           3.0     2.45    64.28    992799     0.0025
*fsubrp          2.2     1.84    66.12    705840     0.0026
*numberp         2.1     1.71    67.83    478750     0.0036
*bigstringp      1.7     1.39    69.22    468474     0.0030
*stringp         1.7     1.38    70.60    473896     0.0029
.eval_list       1.6     1.34    71.94    323045     0.0041
.ref_evlis       1.6     1.30    73.24    136037     0.0096
.value           1.5     1.23    74.47    678077     0.0018
.get_cell        1.5     1.21    75.68    366122     0.0033
*idp             1.0     0.83    76.51    831715     0.0010
.apply           1.0     0.80    77.31    161523     0.0050
.c_release       0.9     0.77    78.08    365852     0.0021
.evprogn         0.7     0.58    78.66     65282     0.0089
.savelis         0.6     0.51    79.17     40386     0.0126
.cons            0.6     0.51    79.68    347003     0.0015
.setlis          0.4     0.35    80.03     40386     0.0087
.cfr             0.4     0.32    80.35    121137     0.0026
.set             0.3     0.24    80.59    131774     0.0018
.ap_lambda       0.2     0.17    80.76     40386     0.0042
.fixp            0.2     0.13    80.89     50869     0.0026
.cir             0.1     0.09    80.98     47130     0.0019
.restorlis       0.1     0.09    81.07     40386     0.0022
*floatp          0.1     0.09    81.16     55894     0.0016
.quotient        0.1     0.08    81.24      3378     0.024
.evcond          0.1     0.08    81.32     24486     0.0033
.equal           0.1     0.07    81.39     12634     0.0055
.bcar            0.1     0.06    81.45     21787     0.0028
.__mcount        0.1     0.06    81.51
.lessp           0.1     0.06    81.57      7232     0.008
.newicell        0.1     0.06    81.63     14239     0.0042
.eq              0.1     0.06    81.69    121158     0.0005
.plus            0.1     0.06    81.75      5052     0.012
._ptrgl          0.1     0.05    81.80
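For what it's worth, the 20% figure can be read off the table directly. If
one takes .reference, .dereference and .purge as the RC bookkeeping entries
(.c_release arguably belongs in that group too) and excludes the profiler's
own __mcount time:

    RC time           = 2.93 + 4.38 + 3.05      = 10.36 s
    listed functions  = 81.80 - (30.46 + 0.06)  = 51.28 s
    RC share          = 10.36 / 51.28           = roughly 20%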
----- Original Message -----
From: "Kragen Sitaker" <kragen@pobox.com>
To: <gclist@iecc.com>
Sent: Wednesday, August 30, 2000 6:05 AM
Subject: [gclist] ref-counting performance cost
> A friend of mine is somewhat conservative on newfangled technologies
> like garbage collection, which, after all, has only been reasonably
> efficient for the last 25 years.
>
> I asserted that reference-counting is almost always much slower than
> well-implemented mark-and-sweep or copying collectors; he requested
> quantitative evidence. Real quantitative evidence requires
> measurements of real programs, and doing those is a lot of work. Has
> someone done them?
>
> I can't seem to find papers on such measurements, even in excellent
> bibliographies like
> http://www.xanalys.com/software_tools/mm/bib/authors.html ; even Zorn's
> excellent Ph.D. thesis has no useful information on the slowness of
> ref-counting.
>
> --
> <kragen@pobox.com>  Kragen Sitaker  <http://www.pobox.com/~kragen/>
> Perilous to all of us are the devices of an art deeper than we ourselves
> possess.
> -- Gandalf the Grey [J.R.R. Tolkien, "Lord of the Rings"]
>