Concurrency Proposal

Levi Pearson levipearson at gmail.com
Thu Mar 9 14:25:48 PST 2006


The mechanism I've proposed should not preclude transactional memory,
but it may need some additional mechanisms to support it.  This is
something I need to read up on further.

The current proposal creates a stack for each thread (via an
Interpreter instance) and they all share the same heap.  This means
that a reference to an object can be passed between threads by storing
it in another object that is already shared.
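(To make the shared-heap model concrete, here's a toy Python sketch -- this
is not Slate code, just an analogy: each thread gets its own call stack,
like a per-thread Interpreter instance, while all of them dereference the
same heap object.  The lock is incidental, only there because the dict
mutations aren't atomic:)

```python
import threading

# Shared "heap": one mutable object visible to every thread.
shared = {"counter": 0, "log": []}
lock = threading.Lock()

def worker(name):
    # Each thread runs on its own stack, but mutates the same
    # shared heap object through an already-shared reference.
    with lock:
        shared["counter"] += 1
        shared["log"].append(name)

threads = [threading.Thread(target=worker, args=("t%d" % i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["counter"])  # 4: every thread saw the same heap object
```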

Locks and transactions are one level higher than the proposal covers,
and I haven't given thought to them aside from trying not to preclude
implementing them.  They may (and probably do) require additional
low-level features.  They are not mutually exclusive in
implementation, so I will probably think about locks first and then a
lock-free system.
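(For the lock-free direction, the usual shape is optimistic concurrency:
read a versioned value, compute, and retry the commit if someone else got
there first.  Here's a toy Python sketch of that retry loop -- the TVar
name and the internal lock standing in for a hardware compare-and-swap are
my own illustration, not anything from the proposal:)

```python
import threading

class TVar:
    """Toy transactional variable: version-stamped value, optimistic commit."""
    def __init__(self, value):
        self._lock = threading.Lock()  # stands in for an atomic CAS
        self.value = value
        self.version = 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def commit(self, expected_version, new_value):
        # Succeeds only if no other transaction committed in between.
        with self._lock:
            if self.version != expected_version:
                return False
            self.value = new_value
            self.version += 1
            return True

def atomically(tvar, update):
    # Retry loop: read, compute, attempt commit; repeat on conflict.
    while True:
        value, version = tvar.read()
        if tvar.commit(version, update(value)):
            return

counter = TVar(0)
threads = [threading.Thread(target=atomically, args=(counter, lambda v: v + 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.read()[0])  # 8: conflicting increments retried, none lost
```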

Eventual-sends are one level higher than primitive locks and
transactions, since they need one or the other (at least) to be
implemented.  There are plenty of other neat concurrency abstractions
at this level that it would be cool to support.
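(As one example of what an eventual-send could look like once the
primitives exist, here's a toy Python sketch in the E style: a message is
queued for later execution on the receiver's own thread, and the sender
immediately gets back a promise.  The Vat/eventual_send names are mine,
purely for illustration:)

```python
import queue
import threading
from concurrent.futures import Future

class Vat:
    """Toy vat: messages queue up and run later on the vat's own thread."""
    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn, args, promise = self._inbox.get()
            try:
                promise.set_result(fn(*args))
            except Exception as exc:
                promise.set_exception(exc)

    def eventual_send(self, fn, *args):
        # Returns immediately; fn runs later inside the vat's thread.
        promise = Future()
        self._inbox.put((fn, args, promise))
        return promise

vat = Vat()
p = vat.eventual_send(lambda a, b: a + b, 2, 3)
print(p.result())  # 5, once the vat gets around to the message
```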

So anyway, please comment first on what is actually in the proposed
system rather than on what is outside its scope; the higher-level
things are never going to happen without the ability to do low-level
task switching and I/O that doesn't block the whole system.

   --Levi

On 3/9/06, Matt Revelle <mrevelle at gmail.com> wrote:
> Should transactional memory be thought of at this point?
>
> For any interested, here are some links:
> http://research.microsoft.com/~simonpj/papers/stm/index.htm
> http://en.wikipedia.org/wiki/Transactional_memory
>
>
> On 3/9/06, Brian Rice <water at tunes.org> wrote:
> <snip>
>
> > I don't have much to directly comment on except that it'd be a
> > benefit to get some discussion on the technical points so that we
> > wind up with a good initial design and don't "code ourselves into a
> > corner"; concurrent programming support can be a tricky issue in that
> > it pervades assumptions about code and what it can do.
>
> <snip>
>
>


