Pre-emptive vs. Cooperative Multitasking (was Re: Apple?) [ #50321]

Erwann Corvellec
Thu, 13 Jul 1995 17:27:36 +0200


   Here's an article that I found of interest, as TUNES "will" be a cooperative
 multitasking system... 

------ Forwarded Article <3u1r7k$>
------ From (Bret Indrelee)

In article <>,
Mel Martinez <> wrote:
>In article <3ts7fe$k4u@cnn.Princeton.EDU>,
>(Benjamin Y. Lee) wrote:
>] In article <>,
>] David Cake <> wrote:
>] >client in the background (like an FTP download) well written cooperative
>] >is probably better (less overhead less often for an I/O bound process that
>] >probably doesn't really need the cycles). 
>] Actually, I don't think so.  What you get instead is well-written
>] cooperative applications calling back to the OS as often as a preemptive
>] system would, because the app writer cannot know that an extremely
>] time-critical app is not running simultaneously, so he must hedge his
>] bets.

Very true.

>A well-written cooperative multitasking program knows that calling back to
>the OS too frequently is as inefficient and wasteful as calling back too
>infrequently.  Apple recommends 60 times a second as the most you need for
>the user to experience no response degradation.  

There is overhead for doing this check, though. That overhead is incurred
exactly once per time quantum under pre-emptive multitasking, but under
cooperative multitasking it often happens far more frequently. The faster
the processor, the more time is wasted under cooperative multitasking
checking whether it is time for a context switch.

> Using the time manager or
>the tickcounter makes this pretty easy to do, so you basically do the
>following to surrender the cpu context:
>currentTime = checkTime();     // use preferred method of checking time here
>if( currentTime - lastTime > interval ) {
>    WaitNextEvent(everyEvent, &gMyEvent, napTime, NULL);
>    lastTime = currentTime;
>    }
>Just make 'interval' equivalent to 1/60th of a second and you're fine.  On a
>60MHz+ machine, that means you get to do a million or more processor clock
>cycles of work between context switches.

The last sentence is irrelevant. If you are switching once every 1/60 of
a second, both methods give you the same number of clock cycles between
context switches.

[ snip ]

[ Overhead of checking is...]
>ticks, but only has 21 microseconds of overhead (on a 66MHz PMac 700). 
>Using the Time Manager is ever so slightly more complicated and takes 82
>microseconds (on the same machine), but gives you 20 microsecond

[ snip ]

There are a couple more things you have to take into account before
you can really make a comparison.

For the recommended performance (a task switch every 1/60 of a second), you
need to figure out where to put these checks for the slowest model of
Macintosh that you are supporting. This means that the number of checks
will usually be more than 60/second.

Now compute the overhead of interrupt processing. For the same effect
under pre-emptive multitasking, you would simply take an interrupt every
1/60 of a second.

For the AVERAGE model that you support, figure out how many time-interval
checks you make. This could easily be twice as often as the checks on the
slowest model. The faster the machine you are running on, the greater the
overhead of cooperative multitasking.

Now compare the overhead of this check against interrupt processing
overhead. On a pre-emptive multitasking system, you only check once
every time quantum regardless of how fast the processor is.

Now for all the things that are too hard to actually quantify:
- The programmer on a cooperative multitasking system will often check the
  time more often than strictly required, because it is really tough to
  determine exactly how often the check is needed.
- The cache is disturbed each time the program checks the time. On
  a pre-emptive multitasking system the cache is only disturbed
  every time quantum.
- Not every developer will be equally concerned about checking whether their
  time quantum has expired. If they don't check often enough, their
  application runs slightly faster and it is very difficult to prove
  that it is their fault.

Think about the problems in writing something like quicksort(), the update
of a spreadsheet, or a B-tree search algorithm that checks only the
minimal number of times required. The problem in these cases is that
the run time of the procedure depends on the amount of data. If the
data set is small, it is quite likely that the total time would be less
than 1/60 of a second. With larger data sets, the time could easily exceed
1/60 of a second.

How many developers do you think recoded the standard C library quicksort 
routine in order to check if it was time for a context switch? The code
you gave used a global variable to do the check. Unless the library was
written to use the same global variable as the application, it couldn't
do the check.

With pre-emptive multitasking, the code would not have to include any
checks. With cooperative multitasking, it is likely that either no
checks will be included or too many checks would be included. Either
there is more overhead than strictly required, or the machine runs
poorly when an application uses a large data set.

Pre-emptive multitasking is better, especially for the fastest models
of machines.

Bret Indrelee        | ...and if I feel a rage, I won't deny it.   | 

------ End of Forwarded Article


   Erwann Corvellec.
 Student & coder on PC (ASM, C, C++) & fond of LINUX V1.4 ;-D