multitasking with call/cc

Francois-Rene Rideau
Tue Nov 6 13:37:02 2001, Vilhelm Sjöberg writes on comp.lang.scheme:
> The recent thread about dynamic-wind reminded me of an argument I
> heard from an acquaintance on IRC who claimed that using call/cc to
> implement threads must be inefficient. I would be interested in a
> reply.

No. Using call/cc is not inefficient *per se*. However:

1) most call/cc implementations are not tuned for such use.
 Using a "generic" construct to provide features with different specific
 usage patterns has an intrinsic cost, because even a system tuned for all
 these cases must dynamically reconstruct the information necessary
 to choose the right implementation for the case at hand.
2) if you don't want to leak memory, you must be very careful to call
 non-returning functions with a null continuation (or at least a constant
 continuation created early in your program); this particularly holds when
 (re)starting threads.
3) since Scheme has no concept of allocation regions, there is no direct
 way to specify hierarchies of regions that should or should not be
 captured by a given call to call/cc, or to declare that a given
 variable should be allocated in one region rather than another,
 so that it is captured by one call/cc and not by another
 [call/cc thus ought to take, as an implicit or explicit argument,
 the region whose state is to be reified]
4) unless your implementation also has support for user-defined safe-points,
 you will only be able to use this technique with poll-based scheduling,
 which is notoriously slow and/or high-latency, since it is difficult to
 make polling both guaranteed frequent enough and not too frequent.
 With a lot of metaprogramming, you could achieve a reliable system (manual
 "cooperative" multitasking is notoriously unreliable), but then, you might
 as well start or fork your own implementation.
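To make points 2) and 4) concrete, here is a minimal sketch of a
cooperative scheduler built on call/cc (names like *run-queue*, spawn
and yield are my own, for illustration only). Note how run! captures a
constant exit continuation up front, so that when the last thread dies
we escape through it instead of unwinding (and possibly re-entering)
the pile of stacked thread frames -- that is exactly the "constant
continuation created early" of point 2. The explicit calls to yield
are the poll points of point 4.

```scheme
;; Minimal cooperative threads on top of call/cc -- a sketch, not a
;; tuned implementation.  *run-queue* holds suspended threads, each as
;; a one-argument procedure (a continuation or an initial wrapper).
(define *run-queue* '())
(define *exit-k* #f)           ; constant continuation captured early

(define (enqueue! k)
  (set! *run-queue* (append *run-queue* (list k))))

(define (dequeue!)
  (let ((k (car *run-queue*)))
    (set! *run-queue* (cdr *run-queue*))
    k))

;; spawn: schedule a thunk as a new thread; when the thunk returns,
;; control goes to the scheduler, never back to the spawner.
(define (spawn thunk)
  (enqueue! (lambda (ignored) (thunk) (thread-exit))))

;; yield: an explicit poll point -- save the current thread's
;; continuation and run the next one.
(define (yield)
  (call/cc
    (lambda (k)
      (enqueue! k)
      ((dequeue!) #f))))

;; thread-exit: never returns; jumps out via *exit-k* when done.
;; Without that jump, the final return would unwind through every
;; stacked thread frame -- the leak discussed in point 2.
(define (thread-exit)
  (if (null? *run-queue*)
      (*exit-k* 'done)
      ((dequeue!) #f)))

(define (run!)
  (call/cc
    (lambda (k)
      (set! *exit-k* k)
      (thread-exit))))
```

For example:

```scheme
(spawn (lambda () (display "A1 ") (yield) (display "A2 ")))
(spawn (lambda () (display "B1 ") (yield) (display "B2 ")))
(run!)  ; prints A1 B1 A2 B2
```

Even this sketch leaks in the sense of point 2: each freshly started
thread still runs on top of whatever continuation dispatched it, which
the captured continuations then retain.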

Hence the endless debates about dynamic-wind and call/cc:
one man's dynamic state is another one's static state, etc.
-- often within the same application (e.g. each layer of implementation
may want its own winding, independently of the winding of higher-level
layers; some people may want call/cc to capture all of "an activity",
but one person's activity is a single thread, whereas someone else's is
some group of threads, and yet another's is the whole system;
similarly for exception handling, etc.).
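A small illustration of why this matters for call/cc-based threads:
dynamic-wind treats any continuation jump as leaving and re-entering
its dynamic extent, so every "thread switch" re-fires the before/after
thunks -- which is what you want if the winding manages per-thread
state, and wasteful or wrong if it conceptually belongs to the whole
system. A sketch (variable names are mine):

```scheme
(define reentered? #f)
(define saved-k #f)

(dynamic-wind
  (lambda () (display "[enter] "))
  (lambda ()
    (call/cc (lambda (k) (set! saved-k k)))
    (display "body "))
  (lambda () (display "[leave] ")))
;; first pass prints: [enter] body [leave]

;; Jumping back in from outside the extent re-fires both wind thunks:
(if (not reentered?)
    (begin (set! reentered? #t)
           (saved-k #f)))
;; second pass prints: [enter] body [leave] again
```

A scheduler that switches threads through such an extent pays for both
thunks on every switch, whether or not that winding was meant to be
per-thread.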

In other words, a continuation captures "the state of the computation",
but there is no one-size-fits-all answer to what a developer considers
the state of the computation: it depends on the problem domain he
addresses in each of the many parts of a big application.

So all in all, the problem (to me) is less with the concept of call/cc
than with the (in)ability to specify resource management within the
only languages/implementations that sport a call/cc (Scheme and SML/NJ).

* You might be interested in reading papers published at the various
 Workshops on Continuations that have taken place in the past.
* It seems to me that linear logic is the right framework to specify
 useful semantics to such region capture and sharing problems.
* This is just a personal opinion, unbacked by any specific academic paper
 that I know of (but I'm very interested in learning about papers that
 address this specific problem).
* If you know of anyone interested in implementing a system that builds
 multithreading out of call/cc with user-defined regions and safe-points,
 I want to know about it! [Otherwise, that's one of the things I ought to
 do eventually for my thesis - if anyone is interested, we can team up.]

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | ]
[  TUNES project for a Free Reflective Computing System  |  ]
"The society which scorns excellence in plumbing as a humble activity
and tolerates shoddiness in philosophy because it is an exalted
activity will have neither good plumbing nor good philosophy.
Neither its pipes nor its theories will hold water."