[gclist] Mars Rover info from comp.risks

Fabio Riccardi Fabio.Riccardi@inria.fr
Wed, 7 Jan 1998 17:12:32 +0100


I don't quite agree with what you say, but you raise some interesting
issues... please let me add my 2c worth of noise to the discussion.

Jerry Leichter writes:
 > From what I gather of the current state-of-the-art in hard real-time work,
 > priority inheritance is considered an adequate solution.  However, you have
 > to do a complete implementation with real-time work in mind.  The case "well,
 > it's locked but I don't know who locked it" simply cannot come up.

Indeed, priority inheritance is the only solution to the scheduling
problem in the presence of shared resources.
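To make that concrete, here is a toy priority-inheritance mutex of my
own invention in Java (not a real RT-Java API): a high-priority waiter
donates its priority to the current lock holder, so a medium-priority
task cannot preempt the holder indefinitely. Java thread priorities
are only hints to the VM, so read this as a sketch of the protocol,
not a guaranteed implementation.

    // Toy priority-inheritance mutex (illustrative names, not a real API).
    class PIMutex {
        private Thread holder;           // current lock owner, null if free
        private int holderBasePriority;  // owner's priority before any boost

        public synchronized void lock() throws InterruptedException {
            Thread me = Thread.currentThread();
            while (holder != null) {
                if (me.getPriority() > holder.getPriority())
                    holder.setPriority(me.getPriority());  // inherit priority
                wait();  // sleep until the holder releases
            }
            holder = me;
            holderBasePriority = me.getPriority();
        }

        public synchronized void unlock() {
            if (holder != Thread.currentThread())
                throw new IllegalMonitorStateException();
            holder.setPriority(holderBasePriority);  // drop inherited boost
            holder = null;
            notifyAll();  // wake the waiters
        }
    }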

 > Condition variables don't add anything really new.  If you want to use them,
 > you have to inherit priorities.

Condition variables are meant for sharing resources, so...

 > However, the real movement in hard-real-time systems is away from priorities
 > to begin with.  In a hard-real-time system, you have to deal with constraints
 > like "this must complete by 12:00:00.05 or the rocket blows up" (i.e., hard
 > deadlines) or "this must run to completion every 5 seconds, plus or minus .01
 > seconds" (i.e., required periodicity).  It's impossible to get any kind of
 > guarantee out of priority-based schemes; you don't even have enough
 > information to tell if an omniscient scheduler could meet the constraints.
 > The trend
 > instead is toward alternative scheduler policies that require more up-front
 > information.  The best theoretical basis is for Rate Monotonic Scheduling; an
 > alternative, whose exact guarantees I don't understand, is Earliest Deadline
 > First.  In these kinds of schemes, you must give explicit information about
 > your requirements ("I need 50ms of CPU time before 12:00:00.05"; "I need
 > 100ms of CPU time and will do 100 disk reads of 1K each every 5 seconds,
 > plus or minus .1sec"). The scheduler is either able to satisfy your
 > constraints along with all others it has already accepted, in which case it
 > allows you in, or it rejects your request.  In such a system, there is no way
 > to specify a priority.

You are confusing things.

Rate monotonic analysis (RMA) is meant to work out how to assign
priorities to a set of unrelated tasks so that they all meet their
respective schedules. The name RMA indeed comes from the way you have
to assign those priorities: the highest priority goes to the process
that needs to be executed at the highest rate (scheduling frequency).

Where resources are shared, the only way to preserve the RMA results
is to implement a priority inheritance mechanism for all forms of
process synchronisation.

In normal hard real-time systems the programmer does the RMA by hand
(or with the help of some tool) based on the resource requirements of
the set of processes to run. Once the priorities have been decided,
they are not changed again. This is called off-line scheduling.
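For what it's worth, the classic off-line check is the Liu & Layland
utilization bound: n periodic tasks with costs Ci and periods Ti are
schedulable under rate-monotonic priorities if
C1/T1 + ... + Cn/Tn <= n(2^(1/n) - 1). A little Java sketch, with
made-up numbers:

    // Off-line RMA feasibility check (Liu & Layland bound); the task
    // set below is invented for illustration.
    public class RmaCheck {
        public static void main(String[] args) {
            double[] C = {10, 20, 40};    // worst-case cost per release, ms
            double[] T = {50, 100, 200};  // period, ms (shortest = top priority)
            int n = C.length;
            double u = 0;
            for (int i = 0; i < n; i++) u += C[i] / T[i];
            double bound = n * (Math.pow(2, 1.0 / n) - 1);  // ~0.780 for n = 3
            System.out.printf("U = %.3f, bound = %.3f -> %s%n", u, bound,
                u <= bound ? "schedulable" : "needs exact analysis");
        }
    }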

Deadline scheduling is performed dynamically by the OS, which assigns
priorities to the currently running processes: the OS redoes the RMA
on-line whenever the process load changes, based on some resource
reservation protocol; it validates the resource budget and decides on
the new set of priorities.
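Schematically, the admission half of such a protocol can be as simple
as a utilization check: under EDF, a set of independent periodic tasks
(deadlines equal to periods, preemptive uniprocessor) is feasible
exactly when total utilization does not exceed 1. The class and method
names below are invented for the sketch.

    // On-line admission test for deadline (EDF) scheduling: accept a
    // new periodic task only if total CPU utilization stays <= 1.0.
    class EdfAdmission {
        private double utilization = 0;  // sum of Ci/Ti over accepted tasks

        // c = worst-case cost per period, t = period (same time unit)
        synchronized boolean admit(double c, double t) {
            if (utilization + c / t > 1.0)
                return false;            // would break existing guarantees
            utilization += c / t;        // budget validated, task accepted
            return true;
        }
    }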

 > In such a system, priority inheritance is meaningless.  Every task must
 > specify how much time it will need on the CPU.  Lock holding time is included
 > - if you even need locks.  (If the schedule can be run without preemption on a
 > uni-processor there's no point in locking.  If preemption will be used in the
 > schedule, it had better be possible - lock conflicts will kill your
 > guarantees.)  I don't know how this is dealt with, nor do I know how (or if)
 > these techniques deal with shared-memory multiprocessors.

Naaaa. Priorities are always there, otherwise the scheduler wouldn't
know what to do with the tasks, so keep your priority inheritance
protocol with you...

The only thing the scheduler can do is to verify that each task plays
safe, i.e. that it doesn't exceed its allotted resource budget. If
this should ever occur, a run-time exception is raised, since this is
a programming error (a bug).
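A real kernel would enforce the budget preemptively with a timer; the
after-the-fact version below (all names mine) only shows the
principle: overrunning your declared budget is a bug, not a
scheduling problem.

    // Budget-overrun detection sketch: exceeding the declared CPU
    // budget raises an exception, since it is a programming error.
    class BudgetExceededException extends RuntimeException {
        BudgetExceededException(String m) { super(m); }
    }

    class BudgetedTask {
        private final long budgetNanos;  // declared worst-case cost

        BudgetedTask(long budgetNanos) { this.budgetNanos = budgetNanos; }

        void run(Runnable body) {
            long start = System.nanoTime();
            body.run();
            long used = System.nanoTime() - start;
            if (used > budgetNanos)
                throw new BudgetExceededException(
                    used + " ns used, " + budgetNanos + " ns allowed");
        }
    }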

 > Returning to the nominal topic of this list, garbage collection - there's
 > probably room for some interesting work on GC algorithms to work within such a
 > regime.  The old work on "real-time" GC, which set a bound on the work done at
 > each step, is probably a good start; but one would like to integrate this with
 > the overall methodology for estimating requirements - and, perhaps, use
 > feedback from the scheduler to determine how much work the GC should do at a
 > shot.  Beyond this, the early work on hard-real-time systems concentrated on
 > CPU requirements.  These days, it's being extended (as I hinted above) to deal
 > with other resource requirements - in particular, disk scheduling.  It would
 > be interesting to see if memory could be treated as a resource to be
 > "scheduled" in a similar way, so that "just enough" collection was done to
 > meet the declared memory requirements of the tasks in the mix.  I don't know
 > of any work done in this direction; hard-real-time work today, as I understand
 > it, avoids dynamic memory allocation like the plague (and some practitioners
 > are even leery of using stacks).

Modern scheduling problems, as you somewhat hinted, are more concerned
with resources.

In the early days the CPU was the only resource. Today people are
starting to worry about network bandwidth, memory, peripherals, buses,
and other things. A task executed on a set of processors might have a
single deadline, so the system (or the programmer) has to make sure
that all subtasks on all processors terminate at the right moment, or
(as often happens) the rocket explodes.

If you care about GC in an RT system, then memory becomes a resource
too. If you have an object memory, and a GC that relocates objects and
updates pointers, the GC has to prevent other processes from accessing
memory while it is working on it (it takes a lock).

If you treat the GC as a task, you can solve the problem by performing
RMA as usual, provided that you know beforehand the allocation rate of
your system. Knowing that rate is definitely a requirement in hard
real-time systems (rockets), where you simply avoid allocating memory
at all.
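Assuming you do know the worst-case allocation rate, sizing the
collector as one more periodic task is straightforward arithmetic;
every figure in this sketch is invented.

    // GC as a periodic task: with allocation rate R (bytes/s),
    // headroom H (bytes) and a collector that processes G bytes per
    // CPU-second, pick a period T and budget C so that reclamation
    // keeps up with allocation.
    public class GcAsTask {
        public static void main(String[] args) {
            double R = 2e6;   // worst-case allocation rate (assumed)
            double H = 8e6;   // free headroom at cycle start (assumed)
            double G = 20e6;  // GC processing rate (assumed)
            double T = H / (2 * R);   // finish before half the headroom is gone
            double C = (R * T) / G;   // CPU time per period to keep up
            System.out.printf("GC task: period %.2f s, budget %.3f s, U = %.3f%n",
                T, C, C / T);
        }
    }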

In softer real-time systems (your TV set-top box), you might accept
some degradation of the response when unusual events occur, depending
on the user's degree of tolerance.

Distributed multimedia systems (your soon-to-come multimedia
conferencing videophone - the hot area in today's RT concerns) require
lots of scheduling in a highly dynamic environment: the communication
channel's bandwidth and latency can change abruptly with the load,
additional users can cause temporary network saturation, routers can
become congested, and so on. Making sure that everything happens at
the right time, without disrupting the user's illusion that he or she
is talking to another human being, is the RT problem.

Given the complexity - and the expected ROI - of such a system, this
is probably the place where you will find a good use for an RT-Java
thing.

      Fabio

-- 
Fabio Riccardi - INRIA, Project SOR  |
email : Fabio.Riccardi@inria.fr      | * time is on our side! *
http://www-sor.inria.fr/~riccardi/   |