Chris Bitmead uid(x22068) Chris.Bitmead@Alcatel.com.au
Tue, 06 May 1997 12:20:33 +1000

>> People just want to write apps, they don't want to have to deal with
>> database issues all over the place.
>Right, which makes implicit page-level locking better, since apps
>don't *have* to care at all about locking.

Object-level locking schemes don't *have* to make apps care about
locking either, as long as the instruction which maps to setq asks for
a write lock, and anything which hits the disk asks for a read
lock. So there's no advantage for either object or page locking in
that respect.
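To make that concrete, here is a minimal Python sketch (all names
hypothetical, not from any actual OOFS) of how a persistent object
could take its locks implicitly: every slot assignment (the analogue
of setq) grabs the lock before writing, and every slot read grabs it
before reading, so application code never mentions locks at all.

```python
import threading

class Persistent:
    """Hypothetical persistent object: slot writes implicitly take the
    object's lock, and so do slot reads (one reentrant mutex stands in
    for separate read/write locks, for brevity)."""

    def __init__(self):
        object.__setattr__(self, "_lock", threading.RLock())

    def __setattr__(self, name, value):          # the analogue of setq
        with self._lock:                         # implicit write lock
            object.__setattr__(self, name, value)

    def __getattribute__(self, name):
        if name.startswith("_"):
            return object.__getattribute__(self, name)
        lock = object.__getattribute__(self, "_lock")
        with lock:                               # implicit read lock
            return object.__getattribute__(self, name)

obj = Persistent()
obj.balance = 100     # application code: no lock calls anywhere
print(obj.balance)    # -> 100
```

The point is only that the lock acquisition hangs off the assignment
and the disk access, not off the application's source text.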

But if you want to write an app which actually works properly in a
multi-user situation (reliably I mean) then you have to do some work
and there's no getting around it.

>They *do* have to care about transaction restarts, which means

But transaction re-starts are lots harder to code for than
locking. What if the logic is interspersed with network accesses? How
are you going to undo that?

And if OOFS is the default FS, then file system accesses will be
invisibly interspersed with everything. You don't always have the
ability to say to another network host to undo everything you just did
to a certain point.

It is absolutely essential to have a protocol so that your program can
have 100% certainty that it will be able to run to completion without
re-starts. This can be done as long as you have object level locking.
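One such protocol is conservative two-phase locking: claim every lock
the transaction will need up front, in a canonical order so that no
two transactions can deadlock, and only then run the body. A Python
sketch (the lock table and function names are invented for
illustration):

```python
import threading

# Hypothetical lock table: one mutex per persistent object id.
_locks = {}
_table_guard = threading.Lock()

def _lock_for(oid):
    with _table_guard:
        return _locks.setdefault(oid, threading.Lock())

def run_without_restarts(oids, body):
    """Conservative two-phase locking: acquire every needed lock up
    front, in sorted (canonical) order so no two transactions can
    deadlock. Once all locks are held, `body` runs to completion --
    no aborts, no restarts."""
    locks = [_lock_for(oid) for oid in sorted(oids)]
    for lk in locks:
        lk.acquire()
    try:
        return body()
    finally:
        for lk in reversed(locks):
            lk.release()

store = {"a": 1, "b": 2}
result = run_without_restarts(
    {"a", "b"},
    lambda: store.update(a=store["a"] + store["b"]) or store["a"])
print(result)  # -> 3
```

The price is that you must know (or over-approximate) the lock set in
advance, which is exactly the kind of thing object-level granularity
makes feasible.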

>you can't cache persistent data in transient memory, since the
>transient memory doesn't get "undone" after a restart.
>If the persistent access is fast enough, this caching won't be needed.

When OOFS is the default way of doing things it will be impossible for
the programmer to even know which data is in transient memory and
which is in persistent memory. You will just write algorithms and
assign objects around freely. You won't always know if a particular
assignment actually resulted in the caching of something.

>> One or two tweaks to state lock intentions people will accept.
>I'm not sure about that, but let's assume they will.
>So you're saying not per-object locks, but just one or two?

I'm not sure what you mean by this statement.

>This can easily end up serializing everything,
>because programs must get these few locks "up front",
>so they all end up waiting in line, and don't do anything useful until
>it's their turn.

If you intend to change an object then you had damned well better
serialize access to it for obvious reasons.
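The obvious reason is the classic lost update. Simulated
deterministically in Python (no threads needed to show the
interleaving):

```python
# Two interleaved read-modify-write sequences on the same object.
# Without serialized access, the second writer clobbers the first.
account = {"balance": 100}

r1 = account["balance"]        # transaction 1 reads 100
r2 = account["balance"]        # transaction 2 reads 100
account["balance"] = r1 + 50   # transaction 1 writes 150
account["balance"] = r2 + 20   # transaction 2 writes 120: the +50 is lost

print(account["balance"])      # -> 120, not the correct 170
```

Holding the object's write lock across the read-modify-write is what
rules this interleaving out.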

>This doesn't sound like row-level locking, but the same kind of table-level
>locking that you were saying the RDB vendors have stopped using.

No, absolutely not table-level locking. I don't know how I gave you
that impression.

>Fine-grained object-level locking is clearly optimal,
>but I don't know any way it can be implemented very efficiently.

What is the problem? Lots of people have done it before.

>> Sure, but who wants locks on a single process system?
>If there are explicit locking/unlocking calls in the sources, 
>then the lock-code must be run if there is nobody else also running.
>Or are you saying that single process execution is not important
>enough to optimize for?  

Any non-trivial operating system will have lots of programs running at
once. Some might be filtering mail, some might be downloading
mail. Just because it's single-user doesn't mean you won't need
locking.