[gclist] Mars Rover info from comp.risks
Wed, 7 Jan 1998 14:03:41 -0600
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com]
> Sent: Wednesday, January 07, 1998 11:49 AM
> To: Fabio Riccardi; firstname.lastname@example.org
> Subject: Re: [gclist] Mars Rover info from comp.risks
> On Jan 7, 5:12pm, Fabio Riccardi wrote:
> > > Condition variables don't add anything really new. If you want
> > > real-time behavior, you have to inherit priorities.
> > Condition variables are meant to share resources, so...
> Could one of you clarify what this means? In non-realtime code, if I
> use a pthreads-like thread interface, pthread_cond_wait has no way of
> naming the thread it's waiting for. Indeed, if it's waiting for an
> instance of a resource to become available, it's waiting for whichever
> thread first provides such an instance.
I am encountering the same problem with Java threads.
> Thus to make priority inheritance work, I (as the application writer)
> need to:
> 1) Make sure that I keep enough information around in global state
> about which threads can provide various resources.
> 2) Explicitly raise the priority of at least one of the resource
> providers before a high priority thread waits on a condition variable
> for a resource that may be provided by a lower priority thread.
> 3) Have each resource provider reset its priority after providing an
> instance of the resource, assuming there are no other waiters for that
> resource. I think that in general requires more user-level data
> structures, since the thread interface doesn't give me access to the
> condition variable wait queue.
I would change that to:
1. For each condition variable, record which thread (if any) currently
"owns" it, and which threads (if any) are waiting for access to it.
2. When a thread enters the wait-queue for a particular condition
variable, if the thread that currently owns that condition variable has
a lower priority than the waiting thread, then temporarily "fund" the
running thread with the waiting thread's priority, i.e. temporarily
raise the priority of the lock holder.
3. As soon as the "funded" thread releases the resource of interest, its
extra funding is immediately removed (since it is no longer running "on
behalf" of the higher-priority client). The waiting thread (at a higher
priority) will immediately preempt at this point.
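The three numbered steps above can be sketched as a Java lock class. This is a hypothetical illustration of mine (`PriorityInheritingLock` is not a real library class), and `Thread.setPriority` is only a scheduling hint on most JVMs, so it shows the bookkeeping rather than guaranteeing real-time behavior; it also ignores chained inheritance and, as in step 3, handles only the simple single-waiter case:

```java
// Sketch of the three-step scheme above, using Java's built-in monitors
// for the internal bookkeeping. Illustration only: no ownership checks
// in unlock(), no chained inheritance, and Thread.setPriority is merely
// a hint to the scheduler on most JVMs.
class PriorityInheritingLock {
    private Thread owner;           // step 1: who currently "owns" the resource
    private int ownerBasePriority;  // priority to restore on release

    public synchronized void lock() throws InterruptedException {
        Thread me = Thread.currentThread();
        while (owner != null) {
            // Step 2: "fund" a lower-priority owner with our priority.
            if (owner.getPriority() < me.getPriority()) {
                owner.setPriority(me.getPriority());
            }
            wait();  // enter the wait-queue for this lock
        }
        owner = me;
        ownerBasePriority = me.getPriority();
    }

    public synchronized void unlock() {
        // Step 3: withdraw the extra funding as soon as we release.
        owner.setPriority(ownerBasePriority);
        owner = null;
        notifyAll();  // wake waiters; the highest-priority one should run
    }
}
```

A higher-priority thread that blocks in `lock()` boosts the current owner, and the owner's `unlock()` drops back to the priority it held when it acquired the lock.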
> (I think this all works with a pthreads-like interface, but it causes
> spurious context switches. If a resource becomes available, I usually
> call pthread_cond_signal while holding a lock. I then lower my
> priority, usually causing me to lose the processor while still holding
> the lock. The high priority thread will then run and try to reacquire
> the lock, bumping my priority up again, and suspending itself again.)
Sounds to me like the reason for the spurious context switch in your
scenario is that you fail to release the lock before deflating your
(artificially inflated) priority. Why can't you just release the lock
first, and _then_ change your priority, _then_ call pthread_cond_signal?
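The same ordering point can be sketched in Java with ReentrantLock and Condition (my own adaptation, not code from this thread; `Provider` and its fields are made-up names). One difference from pthreads: Condition.signal() must be called while holding the lock, so the priority deflation simply moves to after unlock(), and we never give up the processor while still holding the lock:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrates the ordering fix above: signal, release the lock, and
// only THEN drop an artificially raised priority.
class Provider {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition available = lock.newCondition();
    private int instances = 0;

    public void provide() {
        lock.lock();
        try {
            instances++;
            available.signal();  // wake a waiter; it cannot run yet,
                                 // since we still hold the lock
        } finally {
            lock.unlock();       // release FIRST ...
        }
        // ... THEN deflate. If we deflated while still holding the
        // lock, a woken high-priority thread could preempt us, block on
        // the lock, boost us again, and suspend again -- the spurious
        // context switches described above. (Restoring to NORM_PRIORITY
        // is a simplification; real code would restore a saved base.)
        Thread.currentThread().setPriority(Thread.NORM_PRIORITY);
    }

    public void consume() throws InterruptedException {
        lock.lock();
        try {
            while (instances == 0) {
                available.await();
            }
            instances--;
        } finally {
            lock.unlock();
        }
    }
}
```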
> Is this sort of manual priority inheritance what you had in mind? If
> so, it still seems to me that this isn't something you want to do
> unless you have to. I.e. you want to use very different styles for
> real-time and non-real-time applications.
You mention that the pthreads interface doesn't really expose the
wait-queue or the lock holder to the client. I am facing the same thing
with Java threads. With Java I think it is all the more unfortunate
since threads are an intrinsic part of the language. So with pthreads,
at least you could provide your own lock abstraction (with the enhanced
interface you require) and nobody gets too bent out of shape. You could
even play tricks and make this fairly transparent.
Unfortunately, since mutual exclusion is handled in Java by the
synchronized _keyword_ operating on built-in per-object locks, no one is
going to be satisfied if I tell them: oh, if you want to use my nifty
priority inheritance scheme, just make sure to forget about using the
convenient built-in language features... synchronized will no longer
have any meaning... and BTW use this here special lock class everywhere
you want to do a mutex :-(
There just doesn't seem to be a clean solution without heroic measures
(as seems to be the common case in Java: persistence/transactions and
distribution, for example, require heroic measures like "special" VMs or
plug-ins).
> Getting back somewhat to memory allocation, there seems to be another
> way in which real-time goals may seriously conflict with other kinds
> of programming. I've encountered a number of queued lock
> implementations that on a release operation hand off the lock to the
> first waiter, but continue running on the processor. If you consider
> what happens in this scheme if you have two threads on a uniprocessor
> contending for a lock, usually the allocation lock, the result isn't
> pretty. You quickly get into a convoy situation in which one thread
> hands off the lock, runs until it needs the lock again, then blocks,
> waking the other thread, which does the same thing. Thus you end up
> with one context switch per lock acquisition and usually one context
> switch per memory allocation. Performance drops through the floor.
> Clearly this buys you fairness at a potentially drastic cost in
> performance (factor of 100?). I'm not happy with the results. To what
> extent is this really justified for real-time applications?
I don't know that this example indicates that real-time goals seriously
conflict with other kinds of programming so much as it indicates that
the scheduling algorithm presented is fairly BAD. I assert that a factor
of 100 performance penalty is unacceptable in just about any domain.
This whole priority scheduling thing has to go! It is just wrong.
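The convoy described in the quoted text can be reproduced with java.util.concurrent's ReentrantLock, whose "fair" mode hands the lock to the longest-waiting thread much like the queued hand-off locks in question, while the default "barging" mode lets the releasing thread reacquire immediately. This is my own sketch (`ConvoyDemo` and `contend` are made-up names), not code from this thread:

```java
import java.util.concurrent.locks.ReentrantLock;

// Small demonstration of the convoy effect: two threads hammer one
// lock, standing in for an allocator's lock, and we count how many
// acquisitions complete in a fixed time window.
class ConvoyDemo {
    static long contend(boolean fair, long millis) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(fair);
        long[] counts = new long[2];
        long deadline = System.currentTimeMillis() + millis;
        Thread[] threads = new Thread[2];
        for (int i = 0; i < 2; i++) {
            final int id = i;
            threads[i] = new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    lock.lock();
                    try {
                        counts[id]++;  // stands in for "allocate memory"
                    } finally {
                        lock.unlock();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return counts[0] + counts[1];
    }

    public static void main(String[] args) throws InterruptedException {
        // On a typical JVM the fair (hand-off) lock completes far fewer
        // acquisitions, because nearly every release forces a context
        // switch to the queued waiter -- the convoy described above.
        System.out.println("barging:  " + contend(false, 200));
        System.out.println("hand-off: " + contend(true, 200));
    }
}
```

Running both modes side by side typically shows the hand-off lock completing orders of magnitude fewer acquisitions, consistent with the "factor of 100" Hans mentions, though the exact ratio depends on the JVM and OS.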
> Hans-Juergen Boehm