ask not what your object can do for you

Jecel Assumpcao Jr
Tue, 29 Aug 2000 13:47:19 -0300

Thanks Kyle, Billy and Brian for your comments in this thread.

Kyle: I did not think of adding stuff to 4 as any kind of optimization.
In fact, my experience with Self has been that we should only worry
about getting things "right", and some really smart person will come
along and find a way of making it fast.

You are totally correct about your "normalization". My point is that
someone might note that successor(4) always returns 5 while successor(7)
always returns 8 (a very different result!) and decide to include this
in each number object. Of course, since you are way smarter than that,
you note that 5=4+1 and 8=7+1, so that the same definition "self+1"
could have been used in both cases. In all cases for integers, in fact.
So you put that in the class or even in some superclass.
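The two choices can be sketched outside Self (this is Python, not Self syntax, and purely illustrative): a table of per-object definitions, one for each number, versus a single shared "self + 1" definition placed higher up.

```python
# Per-object definitions: each number carries its own answer.
successor_of = {4: 5, 7: 8}

# Shared definition, the "self + 1" observation, written once:
def successor(n):
    return n + 1

# Both agree, but only the second works for every integer.
print(successor_of[4], successor(4))  # -> 5 5
print(successor_of[7], successor(7))  # -> 8 8
```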

It might seem strange that I mentioned getting things right and then
explained how you were right but I wasn't. I'll go into this some more
below.

> >> It's also how things are done at the _highest_ level,
> >> theGroupToWhichTheseThingsBelong add: 5 to: 7
> >> --> 1
> >> (Ahah, the group was the integers modulo 11.)
> >I don't understand how this works: where is the information about the
> >group to which the two numbers belong stored?
> Looks like it's in the variable 'theGroupToWhichTheseThingsBelong'.

Ok, so the 5 and the 7 themselves have no group related information - a
global variable does. I had considered this, but thought you were
trying to illustrate something else. I think Brian had also focused on
the 5 and 7 literal objects.
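Kyle's version can be sketched like this (Python, with a hypothetical ModularGroup class standing in for whatever object the variable holds): the group knowledge lives entirely in the variable, and 5 and 7 stay plain integers.

```python
class ModularGroup:
    """Hypothetical stand-in for theGroupToWhichTheseThingsBelong."""
    def __init__(self, modulus):
        self.modulus = modulus

    def add(self, a, b):
        # The group, not the numbers, knows how addition works here.
        return (a + b) % self.modulus

theGroupToWhichTheseThingsBelong = ModularGroup(11)
print(theGroupToWhichTheseThingsBelong.add(5, 7))  # -> 1
```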

I had been thinking of something like:

  (5 asModulo: 11) + 7
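In that style the number itself carries the group information. A minimal Python sketch of what "5 asModulo: 11" might produce (the Mod class and its lifting of plain integers are my invention, not anything from Self or Smalltalk):

```python
class Mod:
    """A number that remembers its modulus, as in (5 asModulo: 11)."""
    def __init__(self, value, modulus):
        self.modulus = modulus
        self.value = value % modulus

    def __add__(self, other):
        # A plain integer on the right is lifted into this number's group.
        if isinstance(other, int):
            other = Mod(other, self.modulus)
        assert other.modulus == self.modulus
        return Mod(self.value + other.value, self.modulus)

    def __repr__(self):
        return f"{self.value} (mod {self.modulus})"

print(Mod(5, 11) + 7)  # -> 1 (mod 11)
```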

> AOP doesn't _remove_ OO, though.  It's a completely different dimension.
> Very OO code can have no AOP, and vice versa.
> I don't understand reflection, and your statement about how it makes a
> system less OO makes me believe that I understand it FAR less than I thought
> I did.  I assumed that reflection referred to a system's ability to modify
> itself, right?  What does that have to do with OO?

Imagine an object that is totally convinced that it is a menu. It lives
in a world like the movie "Matrix" where everything around it
reinforces that idea - it only deals with menu-like messages. Then, one
day, you hand it a red pill by asking it for one of its supporting
meta-objects (#class, 'mirror' etc) or even something as simple as
#printString or #byteSize for the debugger. It suddenly wakes up to the
harsh reality that it is only a bunch of bits in some computer's RAM.
Its illusion that it was *really* a menu has been shattered.

Note that I can have reflection in OO systems by adding reflective
interfaces to the objects themselves (probably in class Object where
everyone can inherit them from) or I can separate the object's main
functionality in a "base-object" from its implementation details in a
"meta-object". This separation is a very restricted example of Aspect
Oriented Programming. You might argue that AOP actually reinforces
OOness since it takes all non-menu code from the object and places it
elsewhere. It is hard to decide.
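The base-object/meta-object split can be sketched in Python (all names here, Mirror and Menu, are hypothetical): the base object knows only menu-like messages, while a separate mirror object answers the reflective questions about it.

```python
import sys

class Mirror:
    """Meta-object: holds the reflective interface for some base object."""
    def __init__(self, target):
        self._target = target

    def slot_names(self):
        # The mirror, not the menu, knows how to list its slots.
        return list(vars(self._target))

    def byte_size(self):
        # Rough stand-in for a debugger's #byteSize query.
        return sys.getsizeof(self._target)

class Menu:
    """Base object: only menu-like behavior, nothing reflective."""
    def __init__(self, items):
        self.items = items

    def has_item(self, label):
        return label in self.items

menu = Menu(["Open", "Save"])
print(Mirror(menu).slot_names())  # -> ['items']
```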

It would probably be a good idea for me to explain how I see the OO
world.
We have two "schools" of objects - the Scandinavian (with Simula, C++,
Beta) and the American (Smalltalk, Actor languages). The first one
wants to create an accurate model of the world using the language,
while the second is interested in learning about the world by creating
quick and dirty models of it. One style will give you CASE, the other
Extreme Programming.

But we also have two different philosophies for models of the world -
the Platonic and Aristotelian. The first considers the physical world a
mere projection, or shadow, of the ideal world (classes are more
important than instances, as in C++ or Smalltalk). The second considers
the physical world the ultimate reality, though for convenience our
brain tends to create emergent abstractions after observing common
elements in many instances (Self or Beta).

This is a gross oversimplification, of course. But the idea is that
there are four distinct styles of OO programming. This isn't very
obvious if you only look at the end results since they tend to be very
similar. It is the roads to get there that are different.

So is it wrong to place the successor method in the object 4? A Platonic
OOer would cringe to think of it! And saying that the programmer would
later get a clue and move this method to a more abstract object
("traits integer", for example) would be no consolation at all to a
Scandinavian OOer who feels that if something is right, you should have
gotten it right the first time...

I hope this clears up my position in this thread - what is "right" from
one OO viewpoint might not make that much sense from another.

-- Jecel