[gclist] Re: gclist-digest V2 #110
Thu, 22 Jan 1998 08:34:20 -0600 (CST)
> >If I were on the development team, it would not be difficult
> >to decide whether I want to spend more time now debugging a precise
> >collector where I have control over all aspects of the problem or
> >whether I want to spend much more time later trying to provide a way
> >to tweak that 1 in 1,000 heap that I have no control over (times a
> >million+ users) that leaks so bad you have to restart every hour...
The "1 in 1,000" raises a question: what is the real probability,
and if known, is it acceptable? On one project, I was faced with
implementing something either using a simple technique with a
random 1 in 10^n chance of failure, or a complicated technique
with no (obvious) chance of failure. I had control of n.
I decided on the simple technique because it would result in a more
reliable system. Why:
1) Extra time spent on the complicated technique could otherwise
be spent more profitably on fixing known bugs that occur
much more frequently than 1 in 10^n.
2) Given my track record on writing perfect code, it was likely
that the complicated technique was going to end up failing
more than 1 in 10^n.
It is true that the number of users and runs per user makes a difference.
And the probabilities are often difficult to figure. But if the true
probability is actually quite small relative to other problems in the
system, then worrying about that is almost surely misplacing one's attention.
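To make the relative-risk comparison concrete, here is a back-of-the-envelope sketch. All of the numbers are illustrative assumptions, not figures from the discussion: a million users (the post says "a million+"), a made-up run count, and made-up failure rates for the simple technique (the random 1-in-10^n chance) versus a hypothetical residual bug rate in the complicated one.

```python
# Illustrative expected-failure comparison; every number below is an
# assumption chosen for the example, not data from the original post.

users = 1_000_000          # "a million+ users"
runs_per_user = 100        # hypothetical runs per user over the product's life
p_simple = 1e-9            # simple technique: random 1-in-10^n failure, n = 9
p_complex_bug = 1e-7       # hypothetical bug rate in the complicated code

# Expected failures across the whole user population.
expected_simple = users * runs_per_user * p_simple
expected_complex = users * runs_per_user * p_complex_bug

print(f"simple technique:      {expected_simple:.1f} expected failures")
print(f"complicated technique: {expected_complex:.1f} expected failures")
```

With these made-up rates, the "imperfect" simple technique yields about 0.1 expected failures versus 10 for the complicated one, which is the point: if the complicated code's own bug rate plausibly exceeds 10^-n, the simple technique is the more reliable choice.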
Worrying about absolute perfection is the province of academics.
In the industrial world, we have to evaluate relative risks.
- Arch Robison