[gclist] Mars Rover info from comp.risks

Jerry Leichter leichter@smarts.com
Tue, 6 Jan 1998 14:39:42 -0500 (EST)


From what I gather of the current state of the art in hard-real-time work,
priority inheritance is considered an adequate solution.  However, you have
to do a complete implementation with real-time work in mind.  The case "well,
it's locked but I don't know who locked it" simply cannot come up.

Condition variables don't add anything really new.  If you want to use them,
you have to inherit priorities.
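The mechanism itself is simple to sketch.  Here is a toy illustration of the
priority-inheritance idea (the class names and the cooperative acquire/release
protocol are my own invention, not any real RTOS API): while a high-priority
task waits on a lock, the holder temporarily runs at the waiter's priority,
so a medium-priority task cannot preempt the holder and stall the waiter.

```python
# Toy sketch of priority inheritance.  Illustrative only: Task and
# PILock are hypothetical names, and a real kernel would block the
# contending task rather than return False.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority       # assigned priority
        self.effective_priority = priority  # may be boosted while holding a lock

class PILock:
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True
        # Contended: boost the owner to the waiter's priority if higher.
        if task.effective_priority > self.owner.effective_priority:
            self.owner.effective_priority = task.effective_priority
        return False  # caller must block and retry

    def release(self):
        owner = self.owner
        self.owner = None
        owner.effective_priority = owner.base_priority  # drop the boost

low = Task("low", priority=1)
high = Task("high", priority=10)
lock = PILock()

lock.acquire(low)               # low-priority task takes the lock
lock.acquire(high)              # high-priority task contends...
print(low.effective_priority)   # -> 10: low now runs at the waiter's priority
lock.release()
print(low.effective_priority)   # -> 1: back to its base priority
```

Note the implication for "complete implementation": the lock must always know
its owner, or there is no task whose priority can be boosted.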

However, the real movement in hard-real-time systems is away from priorities
to begin with.  In a hard-real-time system, you have to deal with constraints
like "this must complete by 12:00:00.05 or the rocket blows up" (i.e., hard
deadlines) or "this must run to completion every 5 seconds, plus or minus .01
seconds" (i.e., required periodicity).  It's impossible to get any kind of
guarantee out of priority-based schemes; you don't even have enough
information to tell if an omniscient scheduler could meet the constraints.  The trend
instead is toward alternative scheduler policies that require more up-front
information.  The best theoretical basis is for Rate Monotonic Scheduling; an
alternative, whose exact guarantees I don't understand, is Earliest Deadline
First.  In these kinds of schemes, you must give explicit information about
your requirements ("I need 50ms of CPU time before 12:00:00.05"; "I need
100ms of CPU time and will do 100 disk reads of 1K each every 5 seconds,
plus or minus .1sec"). The scheduler is either able to satisfy your
constraints along with all others it has already accepted, in which case it
allows you in, or it rejects your request.  In such a system, there is no way
to specify a priority.
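An admission test of that kind can be sketched in a few lines.  This is my own
illustration, not any real scheduler's interface: each task declares
(CPU time, period), and the scheduler accepts the set only if a schedulability
test passes - utilization at most 1 for Earliest Deadline First, or the
Liu & Layland bound n*(2^(1/n) - 1) as a sufficient condition for Rate
Monotonic Scheduling.

```python
# Sketch of an admission test for periodic tasks (hypothetical API).
# A task is a pair (cpu_time, period), deadlines assumed equal to periods.

def utilization(tasks):
    """Total fraction of the CPU the declared tasks require."""
    return sum(c / p for c, p in tasks)

def admits_edf(tasks):
    # EDF: independent periodic tasks are schedulable iff utilization <= 1.
    return utilization(tasks) <= 1.0

def admits_rms(tasks):
    # RMS: Liu & Layland sufficient bound n*(2**(1/n) - 1),
    # which tends to ln 2 (~0.693) as n grows.
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

# "I need 100ms of CPU time every 5 seconds"-style declarations (seconds):
tasks = [(0.05, 0.2), (0.1, 0.5), (0.1, 1.0)]
print(round(utilization(tasks), 3))        # -> 0.55
print(admits_edf(tasks), admits_rms(tasks))  # -> True True
```

A request that would push utilization past the bound is simply rejected, which
is exactly the accept-or-refuse behavior described above.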

In such a system, priority inheritance is meaningless.  Every task must
specify how much time it will need on the CPU.  Lock holding time is included
- if you even need locks.  (If the schedule can be run without preemption on a
uni-processor, there's no point in locking.  If preemption will be used in the
schedule, it had better be possible - lock conflicts will kill your
guarantees.)  I don't know how this is dealt with, nor do I know how (or if)
these techniques deal with shared-memory multiprocessors.

Returning to the nominal topic of this list, garbage collection - there's
probably room for some interesting work on GC algorithms to work within such a
regime.  The old work on "real-time" GC, which set a bound on the work done at
each step, is probably a good start; but one would like to integrate this with
the overall methodology for estimating requirements - and, perhaps, use
feedback from the scheduler to determine how much work the GC should do at a
shot.  Beyond this, the early work on hard-real-time systems concentrated on
CPU requirements.  These days, it's being extended (as I hinted above) to deal
with other resource requirements - in particular, disk scheduling.  It would
be interesting to see if memory could be treated as a resource to be
"scheduled" in a similar way, so that "just enough" collection was done to
meet the declared memory requirements of the tasks in the mix.  I don't know
of any work done in this direction; hard-real-time work today, as I understand
it, avoids dynamic memory allocation like the plague (and some practitioners
are even leery of using stacks).
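The bounded-work collector suggested above might be sketched like this (the
names are hypothetical, and the per-slice budget - a constant here - is
exactly the knob that scheduler feedback would turn):

```python
# Sketch of a work-budgeted incremental mark phase.  Illustrative only:
# in the scheme suggested above, `budget` would come from the scheduler
# rather than being fixed.

class Obj:
    def __init__(self, *children):
        self.children = list(children)
        self.marked = False

class IncrementalMarker:
    def __init__(self, roots):
        self.stack = list(roots)   # grey set: reached but not yet scanned

    def step(self, budget):
        """Mark at most `budget` objects, then yield back to the scheduler."""
        done = 0
        while self.stack and done < budget:
            obj = self.stack.pop()
            if not obj.marked:
                obj.marked = True
                self.stack.extend(obj.children)
                done += 1
        return done

    @property
    def finished(self):
        return not self.stack

# Build a small object graph and mark it in bounded slices.
leaf1, leaf2 = Obj(), Obj()
root = Obj(Obj(leaf1), Obj(leaf2))
marker = IncrementalMarker([root])
slices = 0
while not marker.finished:
    marker.step(budget=2)   # no slice marks more than 2 objects
    slices += 1
print(slices, all(o.marked for o in (root, leaf1, leaf2)))  # -> 3 True
```

Each call to step() does a bounded amount of work, so its cost can be declared
to the scheduler like any other periodic task's.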

							-- Jerry