grouping (was: Version Numbers)
Jecel Mattos de Assumpcao Jr.
jecel@lsi.usp.br
Sat, 6 May 1995 22:19:58 -0300
On Sat, 6 May 1995 19:49:35 +0200 Rainer Blome wrote:
> Jecel, you wrote that the "grouping" in Ole's and David's paper was
> different from what we are talking about here, that they were just trying
> to recreate maps (classes) at the Self level. Do you really think they'd
> write a paper about a task that trivial? Indeed, they use that information
> to allow conclusions about groups of objects, but generating it is only the
> first step of five.
As you say, grouping is trivial and is not worth a whole paper. In fact,
it only gets a paragraph or two. It is not even necessary for their
real task ( which is application extraction ), but the other steps
would be much slower without it. It is much faster to work with
the "priorityQueue group" than with {obj19, obj37, obj41, ...} in
the type inference step.
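To make this concrete, here is a toy sketch ( in Python, since I can't
paste runnable Self into a mail message; the maps and object ids are
made up ) of collecting clones that share a map into one group, so that
a later pass touches each group once instead of each object:

  from collections import defaultdict

  # hypothetical clones: each one records the map ( structure ) it shares
  objects = [
      {"map": "priorityQueue", "id": 19},
      {"map": "priorityQueue", "id": 37},
      {"map": "point",         "id": 41},
  ]

  def group_by_map(objs):
      """Group clones by their map, so type inference can work on
      one representative per group rather than on every clone."""
      groups = defaultdict(list)
      for obj in objs:
          groups[obj["map"]].append(obj["id"])
      return groups

  print(dict(group_by_map(objects)))
  # {'priorityQueue': [19, 37], 'point': [41]}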
Grouping objects with a similar structure ( the same map ) is different
from grouping objects that are used together ( which the paper also
describes: the fourth step ). I should have been more specific - it is
not that I think the paper is irrelevant to this discussion, just that
the word "grouping" is being used in a different sense.
> You omitted the central sentence from your quote of my message:
> > * there is a user-controlled programmable inference system to track
> > the above dependencies between objects.
Sorry. I answered four messages at the same time and must have
messed things up a little.
> By "above dependencies" I referred to what Fari said in his message:
> > * each object in a package points to a list of objects (possibly axioms of the
> > system, possibly objects from the same package, possibly external objects),
> > that define the object's [...] dependencies (other modules it needs to run)
I agree that there must be such a system ( I think I complained that
"inference" implies AI, but Ole uses the word too ).
> In another message you said:
> > [...] there should be some unit whose objects
> > are copied while the others are not. Stuffing "the whole world" into a
> > floppy is not very practical.
>
> Exactly. In the mentioned paper, an implementation of a system is
> described that uses an inference process to extract from the Self world all
> the objects needed to perform a specific task (respond to a message send).
> (Actually, they did a little more than that: the objects were even stripped
> of unneeded slots.) In the abstract they say: "The extracted application
> runs in a tenth (of) the space of the original environment."
The problem with their system is that it extracts *everything* needed
to run the application, even things like the process scheduler. Many
of these objects will already be available to the person I want to
send the floppy to, so I would like a module system that further
limits the objects that have to be copied.
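Roughly this ( an illustrative sketch, not their algorithm; the names
and the "standard" set are invented ): walk the reference graph from
the start object, but prune the walk at anything the recipient is
assumed to have already:

  # made-up reference graph: object -> objects it refers to
  refs = {
      "app":               ["sorter", "scheduler"],
      "sorter":            ["collection traits"],
      "scheduler":         [],
      "collection traits": [],
  }

  standard = {"scheduler", "collection traits"}   # already on the other side

  def extract(start):
      """Transitive closure over refs, pruned at the standard modules."""
      needed, todo = set(), [start]
      while todo:
          obj = todo.pop()
          if obj in needed or obj in standard:
              continue
          needed.add(obj)
          todo.extend(refs[obj])
      return needed

  print(extract("app"))   # {'app', 'sorter'} - only these go on the floppy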
> Although they used their system on application startup messages (like
> "benchmark quicksort start"), it may as well be used on lower levels. What
> counts is that the resulting set of objects is independent of the rest of
> the system (with respect to that particular message) and may therefore be
> viewed as a module.
Some objects will be common to several different extractions, so it
would be best to make them into a module of their own ( see above for
an example ).
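In sketch form ( invented names once more ), the candidates for such a
shared module are just the intersection of the extraction sets:

  # the sets an extractor might produce for two different applications
  extractions = {
      "editor": {"scheduler", "collection traits", "textField"},
      "game":   {"scheduler", "collection traits", "sprite"},
  }

  # objects common to every extraction -> put them in a module of their own
  shared = set.intersection(*extractions.values())
  print(shared)   # {'scheduler', 'collection traits'}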
> Finding dependencies or recording them or automatically creating/bounding
> modules was not the authors' main concern. But that information may easily
> (I believe) be collected during the marking phases (this might even be a
> way to do garbage collection?). When modules (packages) are introduced, the
> extractor will sometimes cross boundaries between modules when following a
> potential send. When not extracting an application but merely deducing
> dependencies, that'd be the time to have the system record an
> inter-module-dependency.
Self 3.0 and 4.0 do have modules. The extraction software ignores
the modules, but it would be very neat to do exactly what you
suggest. Currently the Self modules are managed entirely by hand.
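Your suggestion might look like this sketch ( not the actual Self
module code, just an illustration with made-up names ): while following
references, record each crossing between distinct modules instead of
copying the target object:

  module_of = {"app": "myApp", "sorter": "collections", "scheduler": "kernel"}
  refs      = {"app": ["sorter", "scheduler"], "sorter": [], "scheduler": []}

  def module_deps(start):
      """Walk the reference graph and record every edge that
      crosses from one module into another."""
      deps, seen, todo = set(), set(), [start]
      while todo:
          obj = todo.pop()
          if obj in seen:
              continue
          seen.add(obj)
          for ref in refs[obj]:
              if module_of[ref] != module_of[obj]:
                  deps.add((module_of[obj], module_of[ref]))
              todo.append(ref)
      return deps

  print(module_deps("app"))
  # {('myApp', 'collections'), ('myApp', 'kernel')}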
> The central step dealing with dependencies in their system is called type
> inference. The corresponding reference is (must be something like):
> "Type Inference of Self: Analysis of Objects with Dynamic and Multiple
> Inheritance", http://self.stanford.edu/papers/ecoop93a.ps.Z
> I have not bothered reading that so far because I thought it'd deal with
> run-time type inference only. Maybe I'll get to read it now.
This is a great paper - I recommend it. It is not about run-time type
inference at all ( there are tons of Self papers about that - the
compiler does run-time type inference ). This paper describes a
separate application which can list all of the possible results of
any message send, or all of the objects that might be found in a
given slot. This is a great way to check that your application
won't cause any "message not understood" errors before shipping it,
much in the same way Lint was used with C programs in the old days.
You don't have to give up this advantage of static type checking to
use Self :-)
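As a toy illustration ( the real analysis infers the receiver sets;
here they are simply given ), the Lint-like check amounts to:

  # which selectors each possible receiver understands ( made up )
  understands = {
      "smallInt": {"+", "printString"},
      "nil":      {"printString"},
  }

  def check_send(possible_receivers, selector):
      """Report receivers that would raise 'message not understood'."""
      return [r for r in possible_receivers
              if selector not in understands[r]]

  print(check_send({"smallInt", "nil"}, "+"))   # ['nil'] - caught early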
BTW, Ole has sent me some samples of the output of his extraction
programs and I agree that it is a vast improvement over what the
Smalltalk people do today to deliver applications. If the VM could
also be reduced, it might even make it possible to use Self for
embedded applications!
-- jecel
P.S.: about version numbers, David MENTRE (David.Mentre@enst-bretagne.fr)
has written me about CVS, an RCS extension that deals with whole
applications rather than individual files. I plan to look at it as
soon as possible ( it can be found on sunsite and its mirrors in the
Linux/develop/version_contr directory ).