[gclist] Mars Rover info from comp.risks

Hans Boehm boehm@hoh.mti.sgi.com
Wed, 7 Jan 1998 09:49:26 -0800


On Jan 7,  5:12pm, Fabio Riccardi wrote:
>  > Condition variables don't add anything really new.  If you want to use
>  > them, you have to inherit priorities.
>
> Condition variables are meant to share resources, so...
>
>
Could one of you clarify what this means?  In non-realtime code, if I have a
pthreads-like thread interface, pthread_cond_wait has no way of telling what
thread it's waiting for.  Indeed, if it's waiting for an instance of some
resource to become available, it's waiting for whichever thread first releases
such an instance.
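
Concretely, the wait pattern I have in mind is something like the following
rough sketch (the names and the instance counter are invented purely for
illustration):

#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  avail = PTHREAD_COND_INITIALIZER;
static int free_instances = 0;  /* hypothetical count of available instances */

void acquire_instance(void)
{
    pthread_mutex_lock(&lock);
    while (free_instances == 0) {
        /* Blocks until *some* thread signals.  Nothing in the interface
           says which thread that will be, so there is no obvious target
           for priority inheritance here. */
        pthread_cond_wait(&avail, &lock);
    }
    free_instances--;
    pthread_mutex_unlock(&lock);
}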

Thus to make priority inheritance work I (as the application programmer) think
I need to do the following (roughly sketched in code after the list):

1) Make sure that I keep enough information around in global state about who
can provide various resources.

2) Explicitly raise the priority of at least one of the resource providers
before a high priority thread waits on a condition variable for a resource that
may be provided by a lower priority thread.

3) Have each resource provider reset its priority after providing an instance
of the resource, assuming there are no other waiters for that resource.  I
think that in general requires more user-level data structures, since the
thread interface doesn't give me access to the condition variable wait queue.
 (I think this all works with a pthreads-like interface, but it results in
spurious context switches.  If a resource becomes available, I usually will
call pthread_cond_signal while holding a lock.  I then lower my priority,
usually causing me to lose the processor while still holding the lock.  The
high priority thread will then run and try to reacquire the lock, bumping my
priority up again, and suspending itself again.)
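
Very roughly, and purely as an illustration (the provider registry, waiter
count, and priority helper below are all invented application-level
bookkeeping, not anything from a real system), steps 1-3 might look like:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  avail = PTHREAD_COND_INITIALIZER;
static int free_instances = 0;
static int n_waiters = 0;

/* (1) Global state recording who can provide the resource; here, trivially,
   a single known provider thread and its normal priority. */
static pthread_t provider;
static int provider_base_prio;

static void set_prio(pthread_t t, int prio)
{
    struct sched_param sp;
    int policy;
    pthread_getschedparam(t, &policy, &sp);
    sp.sched_priority = prio;
    pthread_setschedparam(t, policy, &sp);
}

void wait_for_instance(int my_prio)
{
    pthread_mutex_lock(&lock);
    while (free_instances == 0) {
        /* (2) Boost a known provider before blocking, since
           pthread_cond_wait cannot name the thread it is waiting for. */
        set_prio(provider, my_prio);
        n_waiters++;
        pthread_cond_wait(&avail, &lock);
        n_waiters--;
    }
    free_instances--;
    pthread_mutex_unlock(&lock);
}

void provide_instance(void)
{
    pthread_mutex_lock(&lock);
    free_instances++;
    pthread_cond_signal(&avail);   /* signalled while still holding the lock */
    /* (3) Drop back to the base priority once no one else is waiting.
       Doing it here, with the lock still held, is exactly where the
       spurious context switch described above comes from. */
    if (n_waiters <= 1)
        set_prio(pthread_self(), provider_base_prio);
    pthread_mutex_unlock(&lock);
}

Error handling and the choice of scheduling policy (e.g. SCHED_FIFO) are
omitted; the point is just how much of this the application has to do by hand.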

Is this sort of manual priority inheritance what you had in mind?  If so, it
still seems to me that this isn't something you want to do unless you really
have to.  I.e. you want to use very different styles for real-time and
non-real-time applications.

Getting back somewhat to memory allocation, there seems to be another instance
in which real-time goals may seriously conflict with other kinds of
programming.  I've encountered a number of queued lock implementations in which
a release operation hands the lock off to the first waiter, while the releasing
thread continues to hold the processor.  If you consider what happens in this
scheme with two threads on a uniprocessor contending for a lock, usually the
allocation lock, the result isn't pretty.  You quickly get into a convoy
situation in which a thread hands off the lock, runs until it needs the lock
again, then yields to
the other thread which does the same thing.  Thus you end up with one context
switch per lock acquisition and usually one context switch per memory
allocation.  Performance drops through the floor.
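
To make the picture concrete, here is an invented sketch of the scenario;
pthread_mutex_t is just standing in for a queued, direct-handoff lock, and the
allocator wrapper and loop counts are hypothetical:

#include <pthread.h>
#include <stdlib.h>

/* Stand-in for a queued, direct-handoff lock protecting the allocator. */
static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

static void *locked_alloc(size_t n)
{
    void *p;
    pthread_mutex_lock(&alloc_lock);
    p = malloc(n);
    pthread_mutex_unlock(&alloc_lock);  /* ownership is handed straight to
                                           the first waiter, but this thread
                                           keeps running... */
    return p;
}

static void *worker(void *arg)
{
    int i;
    (void)arg;
    for (i = 0; i < 1000000; i++) {
        void *p = locked_alloc(64);
        /* ...until the next call finds the lock already owned by the other
           thread and blocks: roughly one context switch per allocation when
           two such threads share a uniprocessor. */
        free(p);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}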

Clearly this buys you fairness at a potentially drastic cost in overall
performance (factor of 100?).  I'm not happy with the results.  To what extent
is this really justified for real-time applications?

Hans



-- 
Hans-Juergen Boehm
boehm@mti.sgi.com