[gclist] Unreal
Nigel Bree
nbree@kcbbs.gen.nz
Sun, 18 Jul 1999 03:56:57 +1200
Tim Sweeney wrote:
> In UnrealScript, each unique class is described in a unique object of class
> "UClass". The UClass contains the name of the class, a packaging info, a
> list of all functions, all consts, all variables, etc.
Well, since you seem to have got there indirectly via Java, let me commend to
you the book on CLOS internals by Kiczales et al., _The Art of the Metaobject
Protocol_ (ISBN 0262610744). See also their web pages at:
http://www.parc.xerox.com/spl/projects/oi/default.html
It's not *specifically* related to GC, but you'll get a kick out of it.
Also, partial evaluation is one of those wonderful concepts that blurs the
traditional compile/link/run distinction. Olivier Danvy is a prolific
researcher on this subject; see http://www.brics.dk/~danvy/
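To give a toy flavour of the idea (my example, nothing to do with Danvy's
actual work): C++ templates already let you partially evaluate a function
against input that is fixed at compile time, e.g.

    // The exponent is "static" input, so the compiler specializes the
    // whole recursion away into straight multiplies.
    template <int N> inline double ipow(double x) { return x * ipow<N - 1>(x); }
    template <>      inline double ipow<0>(double)   { return 1.0; }

    double cube(double x) { return ipow<3>(x); }   // becomes x * x * x

Real partial evaluators do this to arbitrary programs, not just templates.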
> But from the garbage collector's point of view, I just do a recursive
> mark-sweep of active objects, starting at the root. It's like
[...]
> Obj->AlreadyMarked = 1;
In my own collector, I store mark bits separately from objects, in a bit-vector
local to the collection. While in my case the (C++-based) objects and indeed
the program footprint are small enough to make this unnecessary, I do it as a
cautious measure to avoid causing VM stress - although any even partially
conservative collector is likely to stress the VM system, I figure a mark bit
in the object header is likely to be unhelpful overall. I await correction
from Hans, David Chase et al. :-)
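A minimal sketch of what I mean (all names invented; assumes one contiguous
heap with objects aligned to a fixed granule):

    #include <vector>
    #include <cstddef>

    const size_t Granule = 16;          // assumed object alignment

    class MarkBits {                    // lives only for one collection
        char *base;                     // start of the heap being marked
        std::vector<unsigned char> bits;
    public:
        MarkBits(char *heapBase, size_t heapSize)
            : base(heapBase), bits(heapSize / Granule / 8 + 1, 0) { }

        bool testAndSet(void *obj) {    // returns previous mark state
            size_t i = (static_cast<char *>(obj) - base) / Granule;
            unsigned char mask = (unsigned char)(1 << (i & 7));
            bool was = (bits[i >> 3] & mask) != 0;
            bits[i >> 3] |= mask;
            return was;
        }
    };

Because the bits live in one small, densely-packed vector rather than in each
object's header, the mark phase touches far fewer pages - which is exactly the
VM-stress point above.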
I recommend examining the Boehm collector's Win32 code; if you do want to
treat "static" objects from the world data-file specially, why not use a
read-only file mapping for them and exclude such mappings from your scan? A
mark bit in the object header seems like a lose in such a case!
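For instance (a rough sketch; error handling omitted, and the ExcludedRange
bookkeeping is just something I've made up for illustration):

    #include <windows.h>

    struct ExcludedRange { void *lo, *hi; } g_worldData;

    void *MapWorldFile(const char *path)
    {
        HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        DWORD size  = GetFileSize(file, NULL);
        HANDLE map  = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        void *view  = MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0);

        g_worldData.lo = view;                  // remember the extent so
        g_worldData.hi = (char *)view + size;   // the collector can skip it
        return view;
    }

    bool InExcludedRange(void *p)       // the scan consults this
    {
        return p >= g_worldData.lo && p < g_worldData.hi;
    }

The mapped pages stay clean and shareable in the VM's eyes, and the scan
simply never visits them.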
Paul Wilson's research seems particularly relevant here, since the bulk of
your objects will be sourced from some persistent initial-game-state storage.
His group may well have performance measurements that can guide you.
> So there isn't any way to tell that "the object reference I'm passing to a
> function won't become rooted" except by actually calling it and seeing
> (because you might be calling a new subclass's version of the function that
> does something evil).
True, although it's still possible to deal with this as a link-time issue. In
your case, script programmers are used to dealing with the considerable
static computation that goes with world geometry (e.g. BSP visibility and
subdivision), against which this would be minor in performance terms.
Again referring to Eiffel, the question about such a keyword is one of design
intent; if the fact that a pointer is not captured is important to the client,
then that guarantee should be stated in the contract. Otherwise, it is part of
the (necessarily
opportunistic) process of optimization. The proposed keyword would constrain
subclass authors to adhere to its contract, so that would suggest a benchmark
for comparison...
You can make the difference 1) invisible, 2) inferred automagically, or
3) explicit; and you can make getting it wrong either a) irrelevant, b) a
performance cost, or c) a crash. You pick!
One possible scenario is to consider a link-time optimization (performed
against your release engine codebases) with a presumption that references are
not
captured beyond those in the base code. Code which does not match the basis
can call through a thunk which (for example) functions as a write barrier,
making an explicit note of the captured reference. The language and library
basis in your application is substantial, and under your control.
Release-to-release, a one-time load-time verification of user code against
your basis may be acceptable.
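A sketch of the thunk idea (all names mine, purely illustrative): conforming
code stores references directly, while code that failed verification gets its
stores routed through something like

    #include <set>

    static std::set<void *> g_captured;     // refs known to have escaped

    template <class T>
    inline void CaptureBarrier(T *&slot, T *value)
    {
        g_captured.insert(value);           // explicit note: now rooted
        slot = value;                       // then the ordinary store
    }

so the well-behaved majority pays nothing, and the collector treats g_captured
as an extra root set.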
> Can you help me understand something about pre/postconditions? I
> understand the Eiffel syntax, but I don't understand how they relate
> to the compiler:
> are pre/postconditions actually analyzed and/or proven by the compiler?
Well, what compilers are capable of is always advancing; it wasn't too long
ago that SML's type inference was exotic stuff, and pipeline scheduling
came of age not that long ago either!
In practice, with current implementations the bulk of Eiffel assertions get
explicitly generated and checked at run-time in debug builds. But note that
the debug/release behaviour of C++ can be just as extreme. In my case, using
VC++6 with a conservative mostly-copying GC, the pattern of retained memory
in Debug (non-register-allocating, plus a local stack pattern fill) is
completely different to that in Release (register-allocating, no pattern) mode.
As if it weren't hard enough making bugs reproducible when we have
cryptographic PRNGs producing odd bit-patterns in the mix... The GC has
actually come through with flying colours, and overall it has made debugging
easier for me, but the potential for problems when objects move still makes it
hard to treat as a reliable black box.
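Coming back to the Eiffel question: the usual implementation strategy,
translated into C++ terms, is roughly this (a sketch, keyed off NDEBUG the way
assert is):

    #include <cassert>

    #ifndef NDEBUG
    #define REQUIRE(cond) assert((cond) && "precondition violated")
    #define ENSURE(cond)  assert((cond) && "postcondition violated")
    #else
    #define REQUIRE(cond) ((void)0)
    #define ENSURE(cond)  ((void)0)
    #endif

    double Half(double x)
    {
        REQUIRE(x >= 0.0);          // caller's obligation, checked in debug
        double r = x / 2.0;
        ENSURE(r + r == x);         // implementer's obligation
        return r;
    }

In a release build the checks vanish entirely - which is precisely why the
debug and release executables can behave so differently.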
> I try to avoid making any assumptions
> that the language can't catch at compile-time and give you an error message
> about...because people are doing too much crazy stuff with the code.
"Compile-time" is a hazy issue... if the main criterion is that it always
works, that's one thing. Operating, but with marginally degraded performance,
when routines don't exactly conform to each other's expectations is another.
A one-time message box saying "wait while I recompile" is another
option. A "contact the mod authors for an updated version" message is next.
Always operating at 100% of optimum performance is the holy grail, but we all
have limited resources.