[gclist] _Project Oberon_ and finalizers
Kragen Sitaker
kragen@pobox.com
Sat, 4 May 2002 01:34:43 -0400 (EDT)
I was reading _Project Oberon_ recently, the 1992 book describing the
Oberon system roughly as it stood in 1990. I found two places where
the authors averred that they had run into problems they couldn't
solve cleanly, and where the lack of a solution meant that simple
bugs in user code could crash the system; in both cases it seems to
me that finalizers (of some flavor) would have solved the problem.
The first case was unloading (or reloading) modules. When a module
was loaded, other modules could obtain references to its exported
items, and also to unexported ones if exported functions or data made
references to them available. This was primarily a problem for data
in the module: foreign references to its functions were indirected
through the module's export table, so upon reload they would point to
the corresponding function in the new module.
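Roughly, the function indirection works like this (a toy sketch in
Python, with names of my own invention, not Oberon's actual
mechanism):

    class ExportTable:
        """Clients hold (table, name) pairs instead of raw function
        pointers, so reloading a module rebinds every import at once."""
        def __init__(self):
            self.slots = {}

        def bind(self, name, fn):
            self.slots[name] = fn

        def call(self, name, *args):
            return self.slots[name](*args)

    table = ExportTable()
    table.bind("draw", lambda: "old draw")   # module loaded
    handle = (table, "draw")                 # what a client module keeps

    table.bind("draw", lambda: "new draw")   # module reloaded in place
    t, name = handle
    assert t.call(name) == "new draw"        # client reaches the new code

No such slot exists for a reference that points directly at a
module's data, which is where the trouble starts.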
The authors write that it is difficult to determine whether any
references to the module's data exist elsewhere, and therefore
whether it is safe to reload the module. This is despite the system's
having a full-fledged mark-and-sweep GC (non-generational,
apparently; the GC was one of the modules they didn't include in the
book because it was written in assembly[0]).
It seems to me that finalizers --- either ordinary finalizers attached
individually to each module datum or a specialized kind attached to
the entire region of memory owned by the module --- would have solved
this problem with minimal fuss.
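Here is a minimal sketch of the first flavor in Python, using
weakref.finalize as the finalization mechanism; Module, Datum, and
the live-reference count are hypothetical stand-ins for the loader's
bookkeeping, not anything in Oberon:

    import gc
    import weakref

    class Datum:
        """Stands in for one piece of data owned by a loaded module."""
        def __init__(self, name):
            self.name = name

    class Module:
        def __init__(self, name):
            self.name = name
            self.live_data = 0   # count of data still referenced elsewhere

        def export(self, name):
            datum = Datum(name)
            self.live_data += 1
            # When the collector proves nothing refers to the datum any
            # longer, this finalizer runs and decrements the count.
            weakref.finalize(datum, self._collected)
            return datum

        def _collected(self):
            self.live_data -= 1

        def safe_to_reload(self):
            return self.live_data == 0

    m = Module("Viewers")
    d = m.export("frame")
    assert not m.safe_to_reload()   # 'd' still points into the module
    del d
    gc.collect()
    assert m.safe_to_reload()       # now a reload cannot leave danglers

The loader would simply refuse (or defer) the reload until the count
reached zero.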
The solution adopted was to allow the module to be reloaded at any
time, and to try to map the module into a different memory location
each time it was loaded, so that dereferencing a reference to the old
version's data would raise a memory exception instead of silently
reading whatever new data had been loaded at that address. This has
its disadvantages, especially on computers without MMUs.
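The effect is easy to simulate; in the toy sketch below a dict plays
the role of the page map and a KeyError plays the role of the MMU
trap (all names are mine):

    pages = {}             # simulated address space: base -> module image
    next_base = 0x10000

    def load(image):
        global next_base
        base = next_base   # a fresh base address, never reused
        next_base += 0x10000
        pages[base] = image
        return base

    def unload(base):
        del pages[base]    # old image unmapped

    def deref(base, offset):
        return pages[base][offset]   # KeyError stands in for the trap

    old = load(["datum-v1"])
    unload(old)
    new = load(["datum-v2"])
    try:
        deref(old, 0)      # a dangling reference faults...
    except KeyError:
        pass               # ...instead of silently reading datum-v2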
The other case was disk-block deallocation. Oberon's internal
representation of a "text" was a linked list of (visual-attributes,
file, offset, length) tuples, where one possible "file" was the
constantly growing record of keyboard input since the system had come
up. So as long as any text referred to a file, the file couldn't be
deleted from disk, although it could safely be removed from the file
directory. (Presumably modifications to a file would result in
corresponding modifications to the texts that referred to it,
although I don't recall the book discussing this; perhaps files were
normally replaced by newer versions rather than overwritten.)
The authors write that it was difficult to determine whether or not a
file was referred to in this way, and therefore blocks from deleted
files were not reused until the system rebooted. That is obviously
unacceptable for some applications, so a "Purge" routine is supplied
which makes a file's blocks immediately available for reuse. The
obvious downside is that calling Purge on a file that is still in use
can corrupt data, and depending on the situation the corruption might
strike only rarely, making the bug hard to track down.
It seems to me that a finalizer for "file" objects, one which
deallocated a file's blocks when the file was no longer present in
the disk directory, would have solved this problem.
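Sketching that in Python, again with weakref.finalize standing in for
GC-driven finalization; the directory and free list here are toy
stand-ins for the on-disk structures:

    import gc
    import weakref

    directory = {}         # file name -> block list (the disk directory)
    free_blocks = set()    # the allocator's free list

    class File:
        """In-memory handle for an on-disk file."""
        def __init__(self, name, blocks):
            self.name = name
            self.blocks = blocks
            directory[name] = blocks
            # Runs once nothing (no text, no handle) refers to this File.
            # The callback must not capture self, or the File could
            # never become unreachable.
            weakref.finalize(self, File._maybe_free, name, blocks)

        @staticmethod
        def _maybe_free(name, blocks):
            # Reclaim the blocks only if the file was deleted from the
            # directory; a file still listed there keeps its blocks.
            if directory.get(name) is not blocks:
                free_blocks.update(blocks)

    def delete(name):
        directory.pop(name, None)   # drop the directory entry only; the
                                    # blocks follow when the last
                                    # reference to the File dies

    f = File("Oberon.Text", [7, 8, 9])
    delete("Oberon.Text")
    assert not free_blocks           # some text might still hold f
    del f
    gc.collect()
    assert free_blocks == {7, 8, 9}  # now safe to reuse

Deleting a file then reclaims blocks exactly when the last text lets
go of it, with no unsafe Purge required.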
Much of Oberon was modeled after Cedar; didn't Cedar have finalizers?
[0] Can you believe this? They started Oberon in 1985, nearly ten
years after Unix was released, and they still went to all the trouble
of writing low-level stuff in assembly instead of C.
--
<kragen@pobox.com> Kragen Sitaker <http://www.pobox.com/~kragen/>
What we *need* is for some advanced off-world sentience to carpet nuke planet
Earth from high orbit. Call it Equal Opportunity Ethnic Cleansing. I mean,
racism is so petty. Why play favorites? -- RageBoy