Highlevel + Lowlevel
Thomas M. Farrelly
s720@ii.uib.no
Sat, 18 Sep 1999 18:03:07 +0200
Yuriy Guskov wrote:
>
> > Also, often during development, you are forced to deal with partially
> > implemented objects ( or functions ). In the terminology above such an
> > object would just constitute an expression which is not fully specified.
> > If required things are missing then the translation to a specific
> > expression is not possible.
> >
> > If you haven't told it what to do - don't expect it to do it.
>
> One note is that now everything is taken too absolutely. So "don't expect"
> appears as "nothing to do", that is, the machine refuses to run the missing
> code. But maybe there is a reason for handling missing code somehow...
> Then "don't expect" becomes "something to do", though it would still be
> "don't expect"...
By "don't expect" I didn't mean "don't expect it to do nothing at all",
I ment "don't expect it to do what you haven't specified it to do". In
other words the computer picks the _default_ action.
I know using the word "default" make it all sound silly. By the usual
use of "default" we tend to assosiate whatever the user has to modify in
order to do anything. But this isn't neccasarily so.
In practice, I encountered the need for defaults in an attempt to make a
system ( TOOL ) _well_defined_. Even more practically, using defaults has
enabled me to eliminate the null-pointer situation which occurs in many
standard oo languages.
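For example, in a standard oo language like Java it could look something
like this ( just a sketch - TOOL doesn't look like this, and the names are
made up ):

    // A default instance stands in where a null pointer would
    // normally occur, so every reference is always well-defined.
    interface Shape {
        void draw();
    }

    class DefaultShape implements Shape {
        // The default action: demonstrate what a Shape could be.
        public void draw() {
            System.out.println("a little image saying \"image\"");
        }
    }

    class Canvas {
        // The field starts out as the default, never as null.
        private Shape current = new DefaultShape();

        void set(Shape s) { current = s; }
        void draw() { current.draw(); }  // always safe - no null check
    }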
Here is an example of how I picture the application of defaults in an
everyday programming situation:
draw - put some drawing on the screen
draw circle - put some circle on the screen
draw line from x1,x2 - put a line which starts at x1,x2 on the screen
The first drawing could be a little colorful image saying "image". The
circle could have a radius of 1 cm. And the line could have a length of
1 cm - by default. Another way to default graphics could be reasonable
randomness. For example, the default circle could have a random size, so
that it is obviously a circle and fits inside the current display.
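In Java those partially specified expressions could be pictured as
overloads, each filling in the missing parts with a default ( again just a
sketch with made-up names ):

    import java.util.Random;

    class Drawing {
        private static final Random rnd = new Random();

        // "draw" - nothing specified: the default drawing.
        static void draw() {
            System.out.println("colorful image saying \"image\"");
        }

        // "draw circle" - size defaulted by reasonable randomness:
        // obviously a circle, yet small enough to fit the display.
        static void drawCircle() {
            int radius = 10 + rnd.nextInt(90);
            System.out.println("circle, radius " + radius + " px");
        }

        // "draw line from x1,x2" - start given, length defaulted.
        static void drawLine(int x1, int x2) {
            System.out.println("line from " + x1 + "," + x2
                               + ", length 1 cm");
        }
    }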
So the "something to do" would be a resonably expected action wich
demonstates the potential effect of an abstaction so the the user gets
proper feedback. Actually, defaults can be a good aid in exploratory
learning - the first drawin could be an image saying "I could be an
image, a circle or a line".
But sometimes reasonable defaults are difficult to come across. For
example, 'delete' shouldn't demonstrate its action by actually deleting
anything, and certainly not by deleting an arbitrary something. In this
case, the default action could be to notify the user that nothing is
deleted because he/she didn't specify exactly what to delete.
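So for a destructive action the sketch would rather be ( made-up names
once more ):

    class Deletion {
        // Fully specified - go ahead and delete.
        static void delete(String name) {
            System.out.println("deleted " + name);
        }

        // Under-specified - the default is non-destructive:
        // notify the user instead of guessing what to delete.
        static void delete() {
            System.out.println(
                "nothing deleted: you didn't specify what to delete");
        }
    }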
>
> > > Now there are a lot of languages for each layer separately. But we must
> > > work in the direction of some integration of all layers.
> >
> > i.e. removing the distinction. Or rather removing the global distinction
> > between levels, and making it part of the local
> > what-does-the-computer-understand-by-this ( interpretation ) mechanism.
> >
> > I could ( very possibly ) be wrong, but wouldn't that make a tower
> > architecture - like the reflective tower thing in Maude ?
>
> It depends on what you mean by "removing the distinction"...
> Each level has to keep its benefits... If we erase them, possibly we will
> obtain none of them. Brr... Imagine if we mixed binary and symbolic code...
> Maybe I am wrong but... At least, certainly, introducing several levels is
> meant to diminish distinctions, or make a gradual transition, e.g. from
> natural language to machine language and vice versa.
I don't see the need for several levels in order to smooth the
transition to machine code. First of all, I don't feel like I'm dealing
with anything close to a natural language interface when designing a
computer language. Natural languages are fuzzy - computer languages are
not.
And secondly, every expression in a computer language must eventually, to
be useful, be translated to machine code. If your system can translate
highlevel expressions to machine code, I don't see any need for
expressing any part of the system in anything but highlevel code.
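To make that concrete, here is a toy sketch ( my own invention - nothing
to do with how TOOL actually does it ) of highlevel expressions being
translated mechanically into lowlevel code:

    // A highlevel expression tree translated mechanically into
    // lowlevel stack instructions. Once this translation exists,
    // nobody has to write the lowlevel code by hand.
    abstract class Expr {
        abstract void compile(StringBuilder out);
    }

    class Num extends Expr {
        final int n;
        Num(int n) { this.n = n; }
        void compile(StringBuilder out) {
            out.append("PUSH ").append(n).append('\n');
        }
    }

    class Add extends Expr {
        final Expr left, right;
        Add(Expr l, Expr r) { left = l; right = r; }
        void compile(StringBuilder out) {
            left.compile(out);
            right.compile(out);
            out.append("ADD\n");
        }
    }

    class Demo {
        public static void main(String[] args) {
            StringBuilder out = new StringBuilder();
            new Add(new Num(1), new Num(2)).compile(out);  // "1 + 2"
            System.out.print(out);  // PUSH 1, PUSH 2, ADD
        }
    }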
>
> There are two ways of moving from one term to another.
> 1. General -> Specific (the way of human thought, because we often
> already have ready representations for a situation)
> 2. Specific -> General (the natural way, from the world to the idea)
>
> Either way, there are four kinds of combinations. (They could and should
> be combined, because we have to put the accent on "observer-observable".
> That is, general usually means the general rules and principles of a thing,
> its inner world, how it acts, behaves, and the like. But an observer can
> also have a general or specific view of a thing.) Thus, we have four pairs
> (observer-observable):
>
> 1. Specific-Specific.
> 2. Specific-General.
> 3. General-General.
> 4. General-Specific.
>
> Analysis deals with a specific "observable", but with a general "observer".
> Well, analysis is already based on general principles of human thought...
> If design is constructing general expressions, it means we have a general
> "observable" here. Of course, implementing is making the "observer" specific.
That's a good viewpoint.
> But, oops! We are missing the pair "specific-specific". And that happens
> because you (or rather Booch) ignore a stage after implementing. I think it
> is "executing". Of course, this stage is less related to "application
> development". But... That is, if we don't think about the evolving systems
> which will eventually appear, I think...
I see your point. The way I see it, the execution mechanism is already
specific. So the execution or evaluation of an expression in a specific
context is deterministic. That way, there is no actual difference
between what is evaluated and the effect/result of what is evaluated.
This is because the actual existence of the expression in its context
implies its effect/result.
That's why I merge specific-general and specific-specific, in the
sense that the context is the observer and the expression is the
observable. So the mapping of the expression from general to specific is
implicit, or dictated by the context/observer.
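A rough sketch of what I mean ( Java again, made-up names ):

    import java.util.HashMap;
    import java.util.Map;

    // The context is the observer, the expression the observable.
    // Given a specific context, evaluation is deterministic: the
    // context dictates the mapping from general to specific.
    class Context {
        private final Map<String, Integer> bindings = new HashMap<>();
        void bind(String name, int value) { bindings.put(name, value); }
        int lookup(String name) { return bindings.get(name); }
    }

    interface Expression {
        int evaluateIn(Context observer);
    }

    class Variable implements Expression {
        private final String name;
        Variable(String name) { this.name = name; }
        // General on its own - made specific by the context.
        public int evaluateIn(Context observer) {
            return observer.lookup(name);
        }
    }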
The observer/observable model is good - call it
Subject-Oriented-Programming. ( a sneaky way of luring Faré into embracing
the pros of oop :)
cheers!
===============================================================================
Thomas M. Farrelly s720@ii.uib.no www.lstud.ii.uib.no/~s720
===============================================================================