Jecel Mattos de Assumpcao Jr.
Fri, 5 May 1995 22:21:58 -0300
On Fri, 5 May 95 1:19:58 MET DST email@example.com (Francois-Rene Rideau) wrote:
> Yes, yes, yes. But still, there are implementation problems about that:
> the version control system should allow:
> 1) to express extensions and restrictions of object specifications.
> (i.e. object a++ is an extension to object a)
I think this is already handled by inheritance or delegation.
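To sketch what I mean ( a toy example, not Merlin code - all the
names here are invented ): the extension object defines only the new
behavior and delegates everything else back to the original object:

```python
# Toy sketch of "object a++ is an extension to object a" via
# delegation; class and method names are purely illustrative.
class A:
    def greet(self):
        return "hello from a"

class Extension:
    """Defines new behavior; delegates everything else to a parent."""
    def __init__(self, parent):
        self._parent = parent

    def shout(self):                      # the new behavior
        return self._parent.greet().upper()

    def __getattr__(self, name):          # everything else: delegate
        return getattr(self._parent, name)

a = A()
a_plus_plus = Extension(a)
print(a_plus_plus.greet())   # -> hello from a   (delegated)
print(a_plus_plus.shout())   # -> HELLO FROM A   (the extension)
```

The point is that a++ carries no copy of a, just a pointer to it.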
> 2) to check for dependencies between objects (i.e. are objects a and b
> actually different versions of a same package; is there some object c
> that supersedes both of them at lower cost than having them both ?)
That is the problem I was talking about. I have a certain version of
the Linux kernel, a certain version of gcc and lib gcc, a certain
version of XFree, and so on. How do these things depend on each other?
What is the global version of the system?
I understood that you were talking about intrapackage object dependencies
above, but I think the two kinds of dependencies are similar and might
be handled in the same way.
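One way to picture it ( a rough sketch, with made-up package names
and versions ): each package records the exact versions of the other
packages it was built against, and a checker walks those records:

```python
# Hedged sketch of interpackage version dependencies; the packages,
# versions and constraints below are illustrative, not a real system.
installed = {"kernel": "1.2.8", "gcc": "2.6.3",
             "libgcc": "2.6.3", "XFree86": "3.1.1"}

# What each package was built against.
built_against = {
    "gcc":     {"libgcc": "2.6.3"},
    "XFree86": {"kernel": "1.2.8", "gcc": "2.6.3"},
}

def check(installed, built_against):
    """Report every dependency whose installed version differs."""
    problems = []
    for pkg, deps in built_against.items():
        for dep, wanted in deps.items():
            if installed.get(dep) != wanted:
                problems.append((pkg, dep, wanted, installed.get(dep)))
    return problems

print(check(installed, built_against))   # -> [] when all consistent
```

On this view the "global version of the system" is not one number but
the whole installed map, plus the fact that the checker finds nothing.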
> 3) to have objects as small as possible, i.e. not having to move the whole
> version and hierarchy information if every single object
> 4) to have efficient way to store and/or compute all the above information.
I don't see this as a big problem. It has to be integrated into the
persistent store system, though.
> * objects are distributed in packages.
I agree - if you need to copy an object to a disk to send to
another person, there should be some unit whose objects are
copied while the others are not. Stuffing "the whole world" into
a floppy is not very practical.
> * each object in a package points to a list of objects (possibly axioms of the
> system, possibly objects from the same package, possibly external objects),
> that define exactly the objects' semantics (its specifications),
> its dependencies (other modules it needs to run), its recent history (i.e.
> how it was obtained from a previous version and/or user interaction).
Yes. I am not too clear on the specifications part yet, though.
> * there is a user-controlled programmable inference system to track
> the above dependencies between objects.
That would help with system-wide garbage collection too. But "inference
system" makes me think of complex rule-based programming. I would
rather have something as simple as possible.
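Something simple might be plain transitive reachability over the
per-object dependency lists - which is also exactly what a system-wide
garbage collector needs. A toy sketch ( invented object names ):

```python
# Mark every object reachable from a set of roots through the
# dependency lists; anything unmarked is a collection candidate.
deps = {"editor": ["gui", "buffer"], "gui": ["draw"],
        "buffer": [], "draw": [], "old-gui": ["draw"]}

def reachable(roots, deps):
    seen, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(deps.get(obj, ()))
    return seen

live = reachable({"editor"}, deps)
print(sorted(live))              # -> ['buffer', 'draw', 'editor', 'gui']
print(sorted(set(deps) - live))  # -> ['old-gui']   (collectable)
```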
> * there is a ("intelligently") locally cached global (yes, really) database
> of known dependencies, that serves as a basis to the above inference system.
> * there is some global unique addressing scheme used by the above database.
This is a hard problem. In Merlin, I am using a three part global ID:
32 bits with the TCP/IP address ( or other unique ID ) of the machine
on which the "package" was created ( it might have moved, but its ID
doesn't change ), 32 bits as a timestamp of when the package was
created and 32 bits indicating the object within the package. Actually
I divide these last 32 bits into an 8 bit version number and a 24
bit object number ( not address - this indexes an "entry table" ).
Most objects don't have a global ID, or this would be too slow to
be practical. I might dump this in favor of something like what
Apertos uses.
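For concreteness, the three-part ID above could be sketched like
this ( the field names are mine; only the bit widths come from the
description above ):

```python
# Sketch of Merlin's three-part global ID: 32-bit machine ID,
# 32-bit creation timestamp, and 32 bits split into an 8-bit
# version number and a 24-bit entry-table index.
from collections import namedtuple

GlobalId = namedtuple("GlobalId", "machine created local")

def pack_local(version, object_index):
    """Pack an 8-bit version and a 24-bit index into 32 bits."""
    assert 0 <= version < (1 << 8) and 0 <= object_index < (1 << 24)
    return (version << 24) | object_index

def unpack_local(local):
    return local >> 24, local & 0xFFFFFF

gid = GlobalId(machine=0xC0A80001,      # e.g. an IP address
               created=799716118,       # creation timestamp
               local=pack_local(3, 0xABCD))
print(unpack_local(gid.local))          # -> (3, 43981)
```

Note that the 24 bits index an entry table rather than being an
address, so objects can move within the package without invalidating
any IDs.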
> This database is really a world-wide knowledge database.
> To achieve any efficiency, it must be (dynamically ?) clustered.
> And this is yet another problem:
> What are good ways to (semi-automatically) group objects into packages
> of closely-related objects that do not interact as much at all
> with external objects ? What information must we ask the programmer, must
> we compute when compiling, must we gather while running, must we forget
> when freezing, etc ?
> Actually, this is *the* problem that arises when we want efficient
> distributed computing...
In Merlin, the objects are grouped manually into packages. For a look
at how to group objects dynamically, look at Jim Noble's WWW pages.
I don't remember the URL, but you can find it by starting at the
Self home page and looking at "other projects using Self".
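Just for flavor, dynamic grouping might look something like this toy
sketch, which merges objects whose mutual message counts pass a
threshold ( my own illustration with invented data, not Noble's
algorithm ):

```python
# Put two objects in the same package when they exchange more than
# THRESHOLD messages; a plain union-find over observed send counts.
THRESHOLD = 10
message_counts = {("a", "b"): 50, ("b", "c"): 40,
                  ("c", "d"): 2,  ("d", "e"): 30}

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for (x, y), count in message_counts.items():
    if count > THRESHOLD:
        union(x, y)

packages = {}
for obj in {o for pair in message_counts for o in pair}:
    packages.setdefault(find(obj), []).append(obj)
print([sorted(p) for p in packages.values()])
```

Here "a", "b" and "c" end up in one package and "d", "e" in another,
because the "c"-"d" traffic is below the threshold.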