Wed, 24 Jun 1998 20:39:12 +0000 (GMT)
[I've cut stuff throughout without noting it]
On Thu, 25 Jun 1998, Francois-Rene Rideau wrote:
> This does mean that we cannot escape an external notion of execution,
> that can never be fully grasped in the internal formalism of the system.
> No system can fully stand on itself.
If that's true, what is reflection? I thought reflectivity meant the
execution WAS fully expressible within the system itself.
> [Note, I'd have used "intensional"
> and "extensional", only I never know when to use either; I believe there's
> no consistent use of them, and that for any point of view where they are
> meaningful, a dual point of view exchanges them].
Don't expect me to ever use these terms again, but I just looked them up
in the dictionary. Here's what I think they mean for Tunes:
Intensional refers to the connotative meaning of an expression, that is,
what it means when it is used as part of another expression. How may the
object be interpreted differently according to the context in which it is
used? That is, the way other "outside" objects treat it. The intensional
aspect of an object's type depends on what functions are defined that take
that object as an argument. The object designer, and the object itself,
have no control over these "external", "intensional" dimensions.
Extensional refers to the denotative meaning of an object: its
specification. (What features of the object remain the same no matter
where it is used?) The extensional perspective on an object is the
view looking inside at how its parts relate; the form of the expression
that is its specification. The object user has no control over what the
object does "internally", "extensionally".
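To make the two views concrete, here is a small Python sketch (the function names and the readings of the terms are mine, following the definitions above, not any standard usage):

```python
# Extensional view: the object's own specification, fixed by its author.
def area(r):
    return 3.14159 * r * r

# Intensional view: meanings imposed from outside, by the contexts that
# use the object. The author of `area` controls neither interpretation.
as_price = lambda r: round(area(r) * 2.50, 2)    # a cost per square unit
as_label = lambda r: f"circle of area {area(r):.1f}"

print(as_price(1.0))   # 7.85
print(as_label(1.0))   # circle of area 3.1
```

The body of `area` never changes, yet each calling context gives the same object a different meaning.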
So, the two terms ex- and in-tensional hint at the change in meaning of an
object as you move between orders of abstraction. The difference is
about the same as the difference between arguments and parameters. A
value is an argument when it is being passed to a function, and a
parameter if you are inside the function.
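The argument/parameter distinction reads directly off any function call (a trivial Python sketch, names mine):

```python
def greet(name):           # "name" is a parameter: the view from inside the function
    return "Hello, " + name

user = "Ada"
message = greet(user)      # "user" is an argument: the same value seen from outside
print(message)             # Hello, Ada
```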
Let me write about the abstraction barrier.
In traditional systems, the lambda is treated as a black box. Functions
are used as units of componentization, i.e. modules. This is necessary
for designers, because the act of abstraction--taking something complex,
like a function body, and giving it a name that refers to the entire
entity--is the way people think. If we didn't have abstraction, nobody
would be able to understand anything. Remove abstraction, and the world
becomes one undifferentiated flow.
Just imagine what it would be like to not know a language, to not think of
the world as made of distinguishable objects. A baby sees the world
without words, and therefore to the child, the universe flows. There is
no color to a baby, no lines, corners, no events, no actions, no units, no
NOUNS... just one, incredible, wonderful world that isn't perceived,
cataloged, or limited; it is experienced. The child doesn't think; she
flows, completely at one with her environment.
In the same way, deep inside TUNES there are no objects. The abstraction
barrier creates an exclusion, a limit, a word. TUNES smooths out the
barrier, looking inside objects, expanding their specification, putting
all the expressions together, twisting the result around, stretching it,
fitting it to the definition of the computer system that TUNES is running
on. A pure stream of information flows from TUNES to the processor. At
the same time, TUNES is reversing the action, interpreting pure streams of
information into identifiable objects, constructing them for us to view.
The power of TUNES derives from its ability to look inside black boxes and
relate the contents to the rest of the system. That is why we need a
fully specified, high-level language with the ability to open all black
boxes. How many instructions do you think it really takes to run an
e-mail client? What percent of binary executable files is required to
obtain the desired result, and what percent is remnant of the abstract
design the programmer used? Function calls, data structures, processes,
protocols--the computer doesn't need these concepts, WE DO. In order to
go between the computer and the user (as per the definition of an OS),
TUNES needs to translate from OUR meaning to the computer's meaning and
back. We just want to do it better: support meaning a little closer to
the human, and a little closer to the processor. This means higher-level
interfaces as well as better optimization.
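As a toy illustration of opening black boxes (not the actual TUNES mechanism), consider what inlining does to layered definitions: the named abstractions the programmer needed vanish, leaving only the computation the machine needs.

```python
# The abstraction the programmer wrote: three named black boxes.
def double(x):     return x * 2
def increment(x):  return x + 1
def pipeline(x):   return increment(double(x))

# What remains once the boxes are opened: the calls inline to x*2 + 1,
# with no function-call machinery left for the processor to execute.
def pipeline_opened(x):
    return x * 2 + 1

print(pipeline(20), pipeline_opened(20))   # 41 41
```

The names `double` and `increment` were for us; the processor only ever needed `x * 2 + 1`.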
What does this mean in terms of design? I cannot stress enough how we
should design the system as little as possible and let it design itself.
The minimum TUNES should consist of 3 parts:
1. Abstract machine.
2. Physical machine, built within the abstract machine.
Right now, I am only working on (1). (2) and (3) need to be written in
(1), so they must wait for its completion (or at least its specification).
> This notion, however, can be encapsulated as the process of validating
> a set of constraints as implementable: any computation can be viewed
> as the search of a reasonably efficient proof that some constraints can
> be fulfilled.
My latest idea is that functional languages are too complicated. They
have at least two operations: defining a function and evaluating one.
Lambda and apply. In my TUNES framework, I unify the two (so that, at the
topmost level, using and programming are identical). Of course, they make
up the two main branches off the root, but there must be one concept that
supersedes each (to borrow part of a phrase from you).
I don't have time to go into how this works yet.
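For reference, here is how little the two operations amount to on their own: a Python sketch of a lambda-calculus evaluator where lambda and apply are the only non-variable cases. (This illustrates the starting point being discussed, not the proposed unification.)

```python
# A term is a variable name (str), ("lam", param, body), or ("app", f, arg).
def evaluate(term, env):
    if isinstance(term, str):              # variable: look it up
        return env[term]
    if term[0] == "lam":                   # lambda: capture the environment
        _, param, body = term
        return ("closure", param, body, env)
    if term[0] == "app":                   # apply: evaluate and call
        _, f, arg = term
        _, param, body, closed = evaluate(f, env)
        return evaluate(body, {**closed, param: evaluate(arg, env)})

identity = ("lam", "x", "x")
print(evaluate(("app", identity, "v"), {"v": 42}))   # 42
```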
> In any case, the system should clearly distinguish the
> intensional/extensional project/object aspects of programming:
> an object is a constant thing whose semantics is fixed once and for all
> (which does not prevent the underlying *implementation* from changing,
> as long as it remains compatible with the semantics), while
> a project is just an identifier/holder for a value that may evolve,
> or even diverge depending on the point of view.
I'd distinguish between projects and objects like this:
The spec for the object says, "optimize for maximum speed and least
malleability" while the spec for the project says "optimize for most
malleability, allowing possible loss of speed." That's assuming you
wanted this distinction for efficiency reasons.
> I'd rather use the verb "duplicate" when applied to objects,
> and "copy" when applied to projects, since in the former case you actually
> get the *same* object considered twice, while in the latter case,
> you get a new project that looks the same, but is not quite the same.
> See also Henry Baker's "The more things change, the more they are the same"
> article on Object Identity (ftp://ftp.netcom.com/pub/hb/hbaker/).
I want the system to decide whether to copy or duplicate. I don't want to
worry about it. The decision should be based on resource constraints.
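One way a system could make that decision automatically is copy-on-write: hand out the *same* object (duplicate) until a modification forces a genuine copy. A minimal Python sketch (the class and method names are invented here):

```python
# A copy-on-write holder: duplicate() shares the underlying data,
# and a real copy is made only when one handle mutates.
class CowList:
    def __init__(self, data):
        self._data = data
        self._shared = False

    def duplicate(self):              # cheap: both handles share the data
        other = CowList(self._data)
        other._shared = self._shared = True
        return other

    def append(self, item):           # copy lazily, only on first write
        if self._shared:
            self._data = list(self._data)
            self._shared = False
        self._data.append(item)

a = CowList([1, 2])
b = a.duplicate()
b.append(3)
print(a._data, b._data)    # [1, 2] [1, 2, 3]
```

Until `b.append(3)`, no memory was spent on a second list; the resource cost of copying was paid only when the two handles actually diverged.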
> As for efficiency, it's a matter of transforming code into
> "better" (faster/smaller) code which behaves nevertheless "the same".
> An essential limitation with current automatic transformers ("optimizers")
> is that their notion of "the same" is often very bad, since it is defined
> with respect to a static model of computation that is both inadequate
> for the high-level desires of users and for the low-level constraints of execution,
> not to talk about the dynamic fluctuations of these.
Keep logical proofs separate from optimization rules, and write the
ability to do proofs first (since optimization needs them).
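The dependency runs one way: an optimizer may only apply a rewrite it can justify. A crude Python sketch, where the "proof" is merely a check over sample inputs (a real system would demand an actual equivalence proof):

```python
# An optimizer that refuses any rewrite it cannot justify first.
def justified(original, rewritten, samples):
    return all(original(x) == rewritten(x) for x in samples)

slow = lambda x: x + x          # the code as written
fast = lambda x: x * 2          # a sound proposed optimization
bad  = lambda x: x * x          # an unsound "optimization"

samples = range(-5, 6)
print(justified(slow, fast, samples))   # True  -> rewrite allowed
print(justified(slow, bad, samples))    # False -> rewrite rejected
```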
> Let's take a piece of software that takes many thousand lines of code;
> if you keep it fully observable, then you can't optimize it in clever ways,
> since the user may always want to observe the abstract source-level
> execution model. On the other hand, if you declare that you won't
> care anymore about the execution model, and only want the external behavior
> to be the same, then you cannot anymore make any internal
> observation or modification (aka bug fix, tweaking, etc).
Or you could do whatever you want to the software, and have Tunes
automatically change the specification, with confirmations, as you go.

David Manifold <email@example.com>