First Class Representations
Sun, 3 Nov 1996 12:54:11 +0100 (MET)
Patrick, are you still here?
Here are some very late thoughts about your FCREP article.
Those are the very same objections I had originally,
but presented in a form that has grown to full maturity since.
Your article was essential to help me find those objections
and consequently crystallize the ideas that are at the center of my design
for a reflective language that allows proofs.
BTW, could you introduce me to Prof. Malenfant?
>> * I think that if we consider representations as a general concept,
>> not only for representing end-user objects with low-level bit streams,
>> then we got something very generic,
>> that is the very basis of computational reflection:
>> more abstract objects being represented by more concrete objects,
>> with arbitrary variations and subtleties about abstractness/concreteness.
> Yes absolutely. I've specialized my description for low level cases to
> show that it can go all the way down.
Yes. That's great.
> The next version should be an
> almost complete rewrite where I forget about that triple and see
> representations as relations (or morphisms) between two sets.
Relations, no; morphisms, yes.
The structure being conserved by "representation"
can be considered as a mapping to an abstract set of "meaning".
Something represents an integer if I can attach to it the
meaning of the integer being represented.
That's the "semantic function" as described in semantic theory
(see e.g. articles by Cousot&Cousot).
The way the system chooses a particular representing instance,
that is, your function from the set of abstract represented objects to the
concrete representations, is not what characterizes the representation,
but a meta-object known as the instantiation tactic,
which people aware of the Curry-Howard isomorphism will recognize as isomorphic
to proof tactics in formal proof systems.
The same representation can have many different corresponding tactics:
see the many compilers for the same architecture with the same calling convention,
each with lots of code-generation/optimization switches.
There might be no everywhere-defined, total, *computable* mapping from
represented space to representing space,
but you could still say that something actually represents another thing
because *the semantic function* is uniquely well-defined.
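To make that concrete, here is a tiny Python sketch of the distinction
(the names `sem` and `tactic` are mine, purely for illustration):
the semantic function is well-defined on every valid concrete state,
while the instantiation tactic is a partial function that may fail.

```python
# Hypothetical sketch: mathematical integers represented by 32-bit
# machine words (two's complement).  Names are mine, not a standard API.

WORD = 2 ** 32

def sem(word: int) -> int:
    """Semantic function: concrete -> abstract.  Total on valid states."""
    assert 0 <= word < WORD, "not a valid machine word"
    return word - WORD if word >= WORD // 2 else word  # decode two's complement

def tactic(n: int) -> int:
    """Instantiation tactic: abstract -> concrete.  Partial: may fail."""
    if not (-(WORD // 2) <= n < WORD // 2):
        raise OverflowError(f"{n} has no 32-bit representation")
    return n % WORD

# The semantic function is a left inverse of the tactic, wherever the
# tactic is defined at all:
assert tactic(-1) == 0xFFFFFFFF
assert sem(tactic(-1)) == -1
```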
> them abstract and concrete looks like a good idea.
I stick to it. Plus it shows how abstract and concrete are
concepts relative to a considered representation.
> Although I'd pitch
> in some comment that what makes a representation useful for computation
> in our physical universe is that its concrete set is a set of states
> of some system (or can be mapped to such a set through additional
>> * I'd call both sets Universes rather than Type and Set of States.
>> You'd have the "univers de départ" (source) and "univers d'arrivée" (target) of your mapping.
>> For many reasons, and particularly because I don't agree with the
>> direction of your mapping, I propose that another terminology be used:
> What do you mean by direction ? You mean you'd like it to go in the opposite
> direction ? I think I'll use relations anyway. Are you saying you'd
> prefer the concrete universe to be the first argument of the relation ?
> I tend to prefer the other way.
Yeah, I do prefer defining the representation
by the existence of the Semantic Function.
That's what complies with the formal theory of Semantics,
and gives us a clean framework to prove things:
an object is actually a representation under a given Semantic Function
iff the function is a morphism,
i.e. it conserves all the properties we prove about the abstract object.
That's *exactly* what is meaningful to write proofs.
On the contrary your opposite mapping,
that I identify with the instantiation tactic,
just CANNOT be a morphism in the general case
(e.g. the mapping from a set of considered objects to the way the
objects are represented in dynamically allocated realtime GCed memory),
and shouldn't even when it can
(imagine that your compiler would never optimize
because compiling wouldn't be a morphism then!).
About compilers: surely someone will object that
a C compiler doesn't output an assembly representation of a C source program
in my definition. Because it doesn't output an assembly representation
of the C source program AT ALL. It does output an assembly representation
of the program the C source itself represents (its semantic value).
Because the space of programs is not a freely generated algebra,
it can never be manipulated directly by any syntactical structure,
but we must always use some non-trivial representation for it.
Hence, the compiler takes one representation of the abstract program,
and outputs another representation for it in another context,
that tries to minimize some cost.
>> either "represented universe" and "representing universe",
>> or "abstract universe" and "concrete universe",
>> would be better, wouldn't they?
> Yes for abstract and concrete. But the terms I used are still
> appropriate in the sense that eventually representations will be
> composed to form representations that reach from the type to represent
> all the way down to the states of the machine. So the state space could
> be the special case of the concrete space or universe.
No. The Semantic Function,
which is what counts for a Formal Verification System,
goes from the concrete state space all the way up to the abstract space.
The instantiation function is seldom computable,
not useful to verify proofs, etc.
>> * on real computers, the set of states is finite,
>> whereas on languages used, the set of objects is infinite;
> Yes. And that leads to the infamous "out of memory" problem.
And to Ariane V exploding...
Our framework *MUST* take that into account,
or the problem will stab you in the back at the worst moment.
Again, if we consider the semantic function,
it will always be defined to find what a program/state means,
while the instantiation tactic need not always succeed.
Surely, when we try to execute an operation that
overflows from the representation,
this should be formalized, and usually detected,
and an escape mechanism used to recover the system in some way.
Of course, we are better off when we can *prove* that the
overflow won't happen, or at least not in the 10% of code
that gets executed 90% of time...
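A minimal sketch of such detection and escape, in Python (the
fixnum/bignum framing is my own illustration, not from your article):

```python
# Sketch (names mine): an operation that can overflow its representation,
# with the overflow detected and an escape mechanism (here: promotion to
# an unbounded "bignum") used to recover instead of going wild.

MAX = 2 ** 31 - 1
MIN = -2 ** 31

def checked_add(a: int, b: int):
    """Add within the 32-bit representation; escape on overflow."""
    result = a + b
    if MIN <= result <= MAX:
        return ("fixnum", result)   # stayed inside the representation
    return ("bignum", result)       # overflow detected, recovered

assert checked_add(1, 2) == ("fixnum", 3)
assert checked_add(MAX, 1) == ("bignum", 2 ** 31)
```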
>> moreover, representations generally do not exhaust states
>> on the destination universe either;
> How is that ? I would think the opposite, if something is represented
> on a system then for each of its states you can determine what is
> represented (perhaps only ambiguously but it never represents
> nothing). But maybe that's what you want, to have states that say
> that nothing is represented. I'm not sure if that is a good idea. I'd
> have to think about it. What are your views on that ?
What I meant above is that
there might be abstract objects omitted by a representation
(for instance, my system integers do not represent integers larger than 2^32-1);
now here I mean that conversely,
not all raw memory images correspond to a valid state of our GC system.
If you represent one-one mappings from 1..n to 1..n as an
array[1..n] of 1..n, then not all such arrays represent valid mappings.
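In Python, for illustration (helper names are mine): the semantic
function is only defined on the arrays that really are permutations.

```python
# Sketch of the array example above: one-one mappings of {1..n}
# represented as arrays of length n over 1..n.  Not every such array
# is a valid concrete state.

def is_valid_state(arr):
    """Does this array lie in the domain of the semantic function?"""
    n = len(arr)
    return sorted(arr) == list(range(1, n + 1))   # a permutation of 1..n

def sem(arr):
    """Semantic function: defined only on valid concrete states."""
    assert is_valid_state(arr), "array represents no one-one mapping"
    return {i + 1: v for i, v in enumerate(arr)}

assert is_valid_state([2, 3, 1])        # a permutation: valid
assert not is_valid_state([2, 2, 3])    # repeats 2: represents nothing
assert sem([2, 3, 1]) == {1: 2, 2: 3, 3: 1}
```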
>> * This also breaks the fact that a representation be global morphism;
>> it is only a *local* morphism (in the sense of Fraisse).
> Oops, never heard of global or local morphisms. Can it be defined quickly?
Yes. A local morphism is a morphism from a (finite) subset of a structure
to a (finite) subset of another.
That is, all basic assertions in the source subset
are conserved by the morphism.
This allows for overflows.
Of course, we're always better off when the morphism is global,
just that it can't always be.
> Is it worth learning ? On the intuitive level, I think the representations
> I talked about are and should be global.
No. We MUST manage exceptions, in some way or the other.
> Higher level representations may,
> when composed with others to form a representation from type to machine
> state, have only local effects but that is not a problem with the formalism.
Yes it is.
If a formalism will have Ariane V explode because of memory overflow,
then it's definitely broken, IMO.
I hate it that every taxpayer (me included) will pay hundreds of dollars
for broken formalisms when a multibillion-dollar rocket explodes...
>> * Dynamic memory allocation alone is such that the state
>> of a dynamic environment can be represented in many
>> different ways, according to the pattern with which memory was allocated
>> during the construction of the state.
> Yes. I don't think that's a problem with the formalism.
Well, not with mine, but it introduces a lack of symmetry
in your formalism, as the instantiation function is
not quite as clean as the semantic function.
It is quite easy to express instantiation tactics
in terms of semantic function, but the converse is hard.
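Here is what I mean, as a hypothetical Python sketch: over a finite
concrete universe, a tactic is definable from the semantic function by
brute-force preimage search; nothing comparably easy recovers the
semantic function from a tactic.

```python
# Sketch (names mine): an instantiation tactic derived from a semantic
# function by searching the concrete universe for a preimage -- easy to
# express, though seldom efficient or even computable in general.

def tactic_from_sem(sem, concrete_universe):
    def tactic(abstract_value):
        for state in concrete_universe:     # pick any state that means it
            if sem(state) == abstract_value:
                return state
        raise LookupError("no representation available")  # tactic may fail
    return tactic

# Toy example: booleans represented by bytes (0 means False, rest True).
sem = lambda byte: byte != 0
t = tactic_from_sem(sem, range(256))
assert sem(t(True)) is True                 # the tactic respects the semantics
assert t(False) == 0
```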
>> * if the abstract object is defined in some kind of behavioral way
>> (operational semantics),
>> then a large set of isomorphic representations may
>> exist, that only need to
> oops, incomplete.
Sorry, can't remember what I meant at that time :(
>> * whether something is a representation for an object can be considered
>> as a set of equations. The set of solutions can be difficult
>> or impossible to compute, but we only need find one solution,
>> in an incremental way if possible (because of real-time constraints).
> Not sure I understand you here. I'm not suggesting that we'll be able
> to find representations automatically, in a computable manner, for
> types defined by a set of equations. But you're not saying I suggested
> that either.
If you remove automatically,
and accept that humans and computers help each other
at finding the representation,
then you have exactly how people should program
to obtain proven-correct programs.
>> * To sum up, a Representation is a local morphism
>> from the representing/concrete space to the represented/abstract space.
> this local thing...
>> * Hence, the design process of refining representations
>> is an operation that, contrarily to
>> composition of simpler known objects into new more complex ones,
>> is *contravariant*, not covariant, with computation.
> Oops, covariant, contravariant, what do these mean ? Does one ever know
> enough mathematics ? :) Are these worth learning ?
Yes these are worth learning if you're doing any type theory.
When you have two Universes each made of sets and morphisms between sets,
then a covariant functor from one universe to the other is
a meta-mapping that maps sets to sets and morphisms to morphisms,
and conserves the relations between sets and morphisms.
A contravariant functor instead changes the direction of the morphisms.
For instance, consider the arrow operator ->, from the Universe of types
(with type inclusions as morphisms) to itself, such that A->B
is the type of functions from type A to type B.
If B' is a subtype of B,
then any function in A->B' is also trivially in A->B,
so that A->B' is a subtype of A->B;
however, we conversely obtain that if A' is a subtype of A,
then a function from A to B is trivially a function from A' to B,
so that A'->B is a *supertype* of A->B.
Hence the arrow operator is covariant in B and contravariant in A!
This is why inheritance in OO languages is NOT some kind of subtyping:
the arrow is covariant with inheritance, not with subtyping.
Hence, all the "OO" methodologies that will try to achieve subtyping
through inheritance are fundamentally flawed!
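A concrete (if toy) Python illustration of the variance rule (class and
function names are mine): with Dog a subtype of Animal, a function on
Animals may be used wherever a function on Dogs is expected, which is
exactly the contravariance in A.

```python
# Sketch: A' <: A implies A -> B may stand in for A' -> B.
# Here A = Animal, A' = Dog, B = str.

class Animal: ...
class Dog(Animal): ...          # Dog is a subtype of Animal

def needs_dog_handler(handle, dog: Dog):
    """This caller only ever passes Dogs: it expects some Dog -> str."""
    return handle(dog)

def describe_animal(a: Animal) -> str:
    """An Animal -> str function: more general in its argument."""
    return type(a).__name__

# Contravariance in the argument: Animal -> str works as a Dog -> str.
assert needs_dog_handler(describe_animal, Dog()) == "Dog"
```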
If SIC: C->I is a semantic function
from concrete universe C to intermediate universe I,
and SAI: I->A from I to abstract universe A,
with corresponding tactics TIA: A+>I and TCI: I+>C (partial functions)
then SAC = SAI o SIC : c-> SAI(SIC(c)) is the semantic function from
C to A while the corresponding tactic is TCI o TIA.
Hence, tactics are contravariant with semantic functions.
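As a toy Python sketch of that composition (the arithmetic encodings are
arbitrary stand-ins, just to make the direction reversal executable):

```python
# Semantic functions compose C -> I -> A; the corresponding tactics
# compose in the opposite order, A -> I -> C.

sic = lambda c: c // 10        # semantics: concrete -> intermediate
sai = lambda i: i // 10        # semantics: intermediate -> abstract
tia = lambda a: a * 10         # tactic: abstract -> intermediate
tci = lambda i: i * 10         # tactic: intermediate -> concrete

sac = lambda c: sai(sic(c))    # SAC = SAI o SIC
tca = lambda a: tci(tia(a))    # corresponding tactic = TCI o TIA: reversed

assert sac(4200) == 42
assert tca(42) == 4200
assert sac(tca(42)) == 42      # the tactic conserves the semantics
```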
The way we refine representations is contravariant to the way
we compute the meaning of a calculus.
Refining representations is going from higher-level objects
to lower-level representations;
Computing a value is going from lower-level objects to higher-level ones.
Traditional programming only allows the latter.
Reflective programming also allows the former.
Screwy Traditional programming will introduce lots of unsafety
doing the latter (e.g. overflows in integers, pointers, and memory allocation
go wild and undetected in C or ADA until the rocket explodes).
Screwy Reflective programming will introduce lots of unsafety
doing the former (e.g. semantics is not guaranteed to be conserved
by reflective operations in Scheme, LISP (CLOS), Smalltalk).
>> * Up to now, for reasons of easier computability,
>> people have privileged covariant design,
>> and tried to artificially apply it to representation,
>> whereas we can only grossly approximate the inverse
>> of the contravariant function with covariant constructs.
> Of course I don't understand this...
Yeah. See Traditional programming above:
you build higher-level objects from lower-level ones,
which is covariant with evaluation.
People developing "OO" methodologies et al
have tried to keep their design covariant with evaluation, and it sucks.
If you define a representation by the instantiation tactic,
you also only obtain an approximation of an inverse of the semantic function.
>> * in the best cases, reflection could be used
>> to make the approximation implicit, and absorb it,
>> with some kind of exception being raised when the approximation fails.
>> Because the basic computable constructs are all covariant with themselves,
>> this is the Right Way to *implement* things;
>> but as long as only covariant interfaces can be specified,
>> there will be a mismatch between what people need to express,
>> and what they can express.
> ...and this.
Well, reflection can make the instantiation tactic implicit;
when the tactic fails, or the represented object overflows,
this can be detected and recovery tactics can be used.
Because a computer can only ultimately compute objects from low level to high level,
anything to be computed must be ultimately translated into a
covariant design. Any contravariant design will have to be
(dynamically) translated into covariant design by meta-objects
(like those instantiation tactics);
these meta-objects if not completely covariant, will themselves
have meta(2)-objects, etc, but to be computable,
the system needs be (dynamically) founded on covariant specifications only.
Of course, the advantage wrt pure covariant design is that you
can dynamically modify the instantiation tactics while conserving
the semantics, so that you can dynamically adapt and optimize your code
while keeping confidence that you won't be thrashing what you already have.
With pure covariant design, any change in a representation will endanger
the semantics of the whole system without the system being able to give you
*any* guarantee that everything is ok.
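Here is a hypothetical Python sketch of what I mean by implicit tactics
with recovery (all names are mine); the check against the semantic
function is the whole point:

```python
# Sketch: the instantiation tactic is implicit; tactics can fail or be
# swapped at runtime, while the semantic function lets us verify that
# the meaning is conserved across the swap.

def represent(value, tactics, sem):
    """Try each tactic in turn; recover from failures; verify semantics."""
    for tactic in tactics:
        try:
            state = tactic(value)
        except (OverflowError, LookupError):
            continue                    # escape to the next tactic
        assert sem(state) == value, "tactic does not conserve semantics"
        return state
    raise MemoryError("no tactic could represent the value")

# Toy tactics: a fast fixed-width one, then an unbounded fallback.
def small(n):
    if not 0 <= n < 256:
        raise OverflowError
    return ("byte", n)

big = lambda n: ("bignum", n)
sem = lambda state: state[1]

assert represent(7, [small, big], sem) == ("byte", 7)
assert represent(1000, [small, big], sem) == ("bignum", 1000)
```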
>> * Representation as concrete->abstract morphism makes many things easier.
>> But that's only a better starting point,
>> and doesn't trivially solve any of the problems you pose
>> after page 5.
> Yes, exactly.
But it *DOES* solve many other problems you have *before* page 5:
way to formally verify co-satisfiability, to manage overflows, etc.
Now, we *can* solve the problems of page 5
independently of our starting from my point of view or not:
if you want a function to work
on varying representations of its argument,
then consider that the abstract function works on the abstract argument,
and that a *representation of the function* does a representation of the work
on the representation of the argument.
Hence, to different representations of the argument
will correspond different representations of the function.
Surely, building these representations
can be done statically or dynamically by meta-functions,
that may or may not be specialized with the given function, etc.
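For instance, a toy Python meta-function (all names mine) that builds
the representation of a function from the semantic function and tactic
of one representation of its argument:

```python
# Sketch: to different representations of the argument correspond
# different representations of the same abstract function.

abstract_double = lambda n: n * 2          # the abstract function

def represent_function(f, sem, tactic):
    """Meta-function: sem decodes the argument, tactic re-encodes the result."""
    return lambda state: tactic(f(sem(state)))

# Two representations of integers: as decimal strings, and offset by 100.
as_string = represent_function(abstract_double, int, str)
as_offset = represent_function(abstract_double, lambda s: s - 100,
                               lambda n: n + 100)

assert as_string("21") == "42"             # a string-level double
assert as_offset(121) == 142               # 121 means 21; 142 means 42
```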
>> Please tell me if there's anything wrong about what I said.
>> If you've got anything to add, I'm interested:
>> I think I'll use many of those ideas in my speech about reflection...
> I'm sure I'll have more eventually but probably not before your
> speech. We can discuss your points however.
My speech was postponed indefinitely, as I'm in Nice while the
people for the speech are in Paris :( :( :(
== Fare' -- email@example.com -- Franc,ois-Rene' Rideau -- DDa(.ng-Vu~ Ba^n ==
Join the TUNES project for a computing system based on computing freedom !
TUNES is a Useful, Not Expedient System