coherent states

Francois-Rene Rideau rideau@cnet.francetelecom.fr
Thu, 25 Jun 1998 00:47:14 +0200


I don't have time to comment on everything that was said on the list:
I hope I'll be able to submit my paper on Reflection to POPL'99 by July 17.
Now, there are a few things I think must be added to what was said
(I wouldn't remove anything on any side).


About "instant evaluation", I agree with the principle that
if object $b$ is specified as $f(a)$, and $a$ is modified,
then modifications should be transparently propagated to $b$,
without the user having to explicitly call a synchronization program
(or worse, manually state the steps needed to synchronize).

This is the very essence of a semantics-based system:
once the semantics is given and accepted, then you don't have to worry
about manually enforcing its consistency -- it is automatically dealt
with by the system.
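
To fix ideas, here is a minimal sketch, in Python, of what such
transparent propagation could look like (the names Cell and derived
are made up for the occasion, and eager recomputation at every update
is only one strategy among many):

    # A cell notifies the cells defined in terms of it whenever it changes,
    # so the user never has to call a synchronization program by hand.
    class Cell:
        def __init__(self, value):
            self._value = value
            self._dependents = []        # cells whose value is derived from this one
        def get(self):
            return self._value
        def set(self, value):
            self._value = value
            for dep in self._dependents: # propagate without user intervention
                dep.recompute()

    def derived(f, *sources):
        """Define b = f(a, ...) so that b tracks its sources."""
        cell = Cell(f(*(s.get() for s in sources)))
        def recompute():
            cell.set(f(*(s.get() for s in sources)))
        cell.recompute = recompute
        for s in sources:
            s._dependents.append(cell)
        return cell

    a = Cell(2)
    b = derived(lambda x: x * x, a)      # b is specified as f(a)
    a.set(5)
    print(b.get())                       # 25: the change reached b transparently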

But this cannot mean "instant". Computations intrinsically take time;
some do not even terminate, because they need more resources than available
(sometimes a transfinite amount). Actually, taking resources into account
is precisely the difference between informatics (also known under the lousy
name "computer science") and mathematics.

This does mean that we cannot escape an external notion of execution,
one that can never be fully grasped in the internal formalism of the system.
No system can fully stand on itself. [Note, I'd have used "intensional"
and "extensional", only I never know when to use either; I believe there's
no consistent use of them, and that for any point of view where they are
meaningful, a dual point of view exchanges them].

This notion, however, can be encapsulated as the process of validating
a set of constraints as implementable: any computation can be viewed
as the search for a reasonably efficient proof that some constraints can
be fulfilled.


Now, there is a problem with the precise semantics of constraints:
when a parameter is changed, should constraints defined using
this parameter change accordingly, or should they keep the value
of the parameter at the time they were defined? When duplicating
an object that (recursively) contains variables, should the variables
be duplicated or shared? And even when there's a way to say which,
what if *it depends*, and we don't want to statically say which?
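
The ambiguity already shows up in ordinary languages. In Python, for
instance, a closure keeps a live reference to a variable, while a default
argument snapshots its value at definition time; neither choice is "the"
right one (a toy illustration, nothing more):

    x = 1

    follows  = lambda: x + 1       # (a) the constraint follows the parameter as it changes
    snapshot = lambda x=x: x + 1   # (b) it keeps the value the parameter had when defined

    x = 10
    print(follows())    # 11: sees the current value of x
    print(snapshot())   # 2:  sees the value x had at definition time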

A simple model is to force constraints to be statically defined
(though the creation may be deferred by a lambda), and kept in some
historical or causal order, so that invalidating a constraint also
invalidates all constraints that "depend" on it.
This does work, but is not very practical, and does not allow for
seamless "change in point of view", since the particular way in
which the system was constructed is privileged (which is precisely
what "high-level" programming tries to fight).
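
For what it's worth, this simple model amounts to little more than the
following sketch (Python again, with illustrative names): every constraint
remembers what it was built from, and retracting one retracts its whole
causal cone:

    class Constraint:
        def __init__(self, name, depends_on=()):
            self.name = name
            self.valid = True
            self.dependents = []
            for parent in depends_on:
                parent.dependents.append(self)
        def invalidate(self):
            if self.valid:
                self.valid = False
                for child in self.dependents:   # everything downstream goes too
                    child.invalidate()

    c1 = Constraint("c1")
    c2 = Constraint("c2", depends_on=[c1])
    c3 = Constraint("c3", depends_on=[c2])
    c1.invalidate()
    print(c2.valid, c3.valid)   # False False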

The problem is quite the same as that of identifying
the "limits of an object" when migrating/dumping or otherwise
manipulating an object: which constraints, exactly, among those that
currently hold constitute the declared semantics of the object,
and which are but implementation dependencies?

I have long been aware of this problem (see the Migration/ pages and
the Tunes mailing list archive), but I have become convinced that there
is no "standard" solution to it. We can but require the user or program
that defines an object to be careful when defining its limits.

In any case, the system should clearly distinguish the
intensional/extensional project/object aspects of programming:
an object is a constant thing whose semantics is fixed once and for all
(which does not prevent the underlying *implementation* from changing,
as long as it remains compatible with the semantics), while
a project is just an identifier/holder for a value that may evolve,
or even diverge depending on the point of view.

I'd rather use the verb "duplicate" when applied to objects,
and "copy" when applied to projects, since in the former case you actually
get the *same* object considered twice, while in the latter case,
you get a new project that looks the same, but is not quite the same.
See also Henry Baker's "The more things change, the more they are the same"
article on Object Identity (ftp://ftp.netcom.com/pub/hb/hbaker/).
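
In everyday Python terms, the distinction is roughly that between taking
another reference to the same holder and making a new holder with the same
contents (this only illustrates the vocabulary, not any Tunes mechanism):

    import copy

    project   = {"value": 1}         # a project: a holder whose contents may evolve
    duplicate = project              # "duplicate": the same thing considered twice
    copied    = copy.copy(project)   # "copy": a new holder that merely looks the same

    project["value"] = 2
    print(duplicate["value"])   # 2: the duplicate *is* the original
    print(copied["value"])      # 1: the copy has already diverged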


As for efficiency, it's a matter of transforming code into
"better" (faster/smaller) code which behaves nevertheless "the same".
An essential limitation with current automatic transformers ("optimizers")
is that their notion of "the same" is often very bad, since it is defined
with respect to a static model of computation that is inadequate both
for the high-level desires of users and for the low-level constraints of execution,
not to mention the dynamic fluctuations of these.

Let's take a piece of software that comprises many thousands of lines of code;
if you keep it fully observable, then you can't optimize it in clever ways,
since the user may always want to observe the abstract source-level
execution model. On the other hand, if you declare that you won't
care anymore about the execution model, and only want the external behavior
to be the same, then you can no longer make any internal
observation or modification (bug fixes, tweaking, etc.).

Of course, you should be able to specify at a fine grain which observations
you want to remain able to make, and which you don't give a damn about, so
that you can benefit from optimization of the uncared-for parts, while keeping
the ability to make the cared-for observations/modifications.
[computer scientists might directly relate this to intensional vs extensional
models of equality between programs].
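
As a toy rendition of that idea (Python, with made-up decorator names),
one could mark, function by function, what must stay observable and what
the system may freely rewrite as long as the external behavior is preserved:

    import functools

    OBSERVABLE = set()               # fine-grained declaration of what must stay inspectable

    def observable(f):
        OBSERVABLE.add(f.__name__)   # we reserve the right to watch and patch this one
        return f

    def optimizable(f):
        # the system may replace f by anything with the same external behavior;
        # here, crudely, it just memoizes it
        return functools.lru_cache(maxsize=None)(f)

    @observable
    def tax(amount):
        return amount * 0.2

    @optimizable
    def total(amount):
        return amount + tax(amount)

    print(total(100.0))                                 # 120.0
    print("tax" in OBSERVABLE, "total" in OBSERVABLE)   # True False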

To get the best of both worlds, you can keep an object in source form
(i.e. observable with syntactical or grammatical operators),
"clone it", and forget unwanted observers for the clone.
Actually, the syntactically observable source program would
then not be identified with any running instance, but would just be
a pattern from which to generate instances observable only through execution.
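
A crude analogue, still in Python: keep the program as source text (fully
observable, even editable), and stamp out compiled instances from it on
demand; each instance is then observable only through its behavior, while
the source remains the place where you look and patch (the function name
and all other details are, again, made up):

    source = "def square(n):\n    return n * n\n"

    def instantiate(src):
        namespace = {}
        exec(compile(src, "<pattern>", "exec"), namespace)
        return namespace["square"]

    print(instantiate(source)(7))    # 49: an instance, observable only by running it

    # Patching is done on the pattern; fresh instances are generated from it.
    source = source.replace("n * n", "n ** 2")
    print(instantiate(source)(7))    # still 49, from a newly generated instance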


All these remarks leave a lot of space to implementers and users
as to what the safe defaults, usual programming style, starting
constraints and constraint-defining language will be in a reflective
platform like Tunes. Only experience can tell us.

Minor notes I might add: it is the UI, not the system as such,
that is a structure editor; and some might very well want to "dump" stripped
versions of the system onto interface-less appliances.


I feel like I've already said all of this on the mailing list before.
Sorry for the redundancy. I'm going to try to write it up for my would-be
POPL'99 submission instead.

## Faré | VN: Đặng-Vũ Bân   | Join the TUNES project!  http://www.tunes.org/ ##
## FR: François-René Rideau |    TUNES is a Useful, Not Expedient System     ##
## Reflection&Cybernethics  | Project for a Free Reflective Computing System ##
So you think you know how to translate French into English? Now what if the
French meant something completely different from what the English understood,
only neither the French nor the English could figure out the difference?