objects and messages

Francois-Rene Rideau fare@tunes.org
Sun, 25 Oct 1998 16:46:51 +0100


>>: Fare
>: Brian

>>[about syntax]
>>syntax matters in as much as we need it to communicate the semantics below.
>>textual syntax is sequential just because text is sequential. (btw, it looks
>>like some bottleneck forces individual humans to acquire data sequentially).
>
> I'd like to see your evidence of that bottleneck, especially
> considering the multiplicity of ontologies on human actions.  (i.e.
> breathing while walking: is it one action or two?)  any bottleneck would
> be a society-wide symptom of a disease of the consciousness, and
> certainly not worthy of consideration as a Tunes constraint.

Uh? Humans sure can breathe, walk, and "do" many other such things at the
same time; this is because the human mind is made of multiple agents, some
of which can work in parallel in as much as they don't simultaneously access
the same resources. However, it seems to me that the I/O subsystem that lets us
communicate *articulate ideas* goes through a bottleneck that mostly
forces sequential communication. Ever tried to talk or reason about two things
at the same time? Surely, some resources can be timeshared or split
(most people have about seven registers to store immediate ideas)
but that's a matter of multiplexing the bottleneck. The bottleneck exists,
it's a fact of human psychology (BTW, I recommend "The Society of Mind" by
Marvin Minsky for those interested in the structure of our minds).


>>> What are types?
>>Usually, they are some "abstract interpretation" of runtime values
>>(this is a well-defined term in academia).
>
> (In order to shock you)  Bullshit.  That's relative to an ontology, and
> you know it.  We're not here to kow-tow to standards, even if advocated
> by academic tradition. Interpretation can be considered an action of an
> object known as the interpreter (not the standard computer term, but the
> abstract idea).

I think you fail to understand these words. "Abstract interpretation" is
a one-piece expression with a very precise meaning, which has *very little*
to do with programming language interpreters.
It's a very neat mathematical theory, with lots of useful applications
inside compilers and other static code analysis technology.
For the relationship with types, see the POPL'97 paper by Patrick Cousot.
	http://www.ens.fr/~cousot/papers.html
The address may have changed to www.dmi.ens.fr.
Both web servers are currently down.
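For the curious, here is a toy illustration of the idea (my own sketch, in
Python; the sign domain is the standard textbook example of an abstract
domain, not anything taken from Cousot's paper):

```python
# Toy abstract interpretation: evaluate arithmetic over the abstract
# domain of signs {NEG, ZERO, POS, TOP} instead of concrete integers.
# (Illustrative sketch; names and rules are mine.)

NEG, ZERO, POS, TOP = "-", "0", "+", "T"  # TOP = "sign unknown"

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    """Abstract addition on signs."""
    if a == ZERO: return b
    if b == ZERO: return a
    if a == b and a != TOP: return a   # (+)+(+) = +, (-)+(-) = -
    return TOP                         # (+)+(-) could be anything

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b): return ZERO
    if TOP in (a, b): return TOP
    return POS if a == b else NEG

# The analysis proves that for any positive x, x*x + 1 is positive,
# without ever running the program on a concrete value:
x = POS
print(abs_add(abs_mul(x, x), alpha(1)))   # → "+"
```

The point is that the abstract interpreter runs over *properties* of runtime
values (here, their sign), which is exactly the sense in which types are an
"abstract interpretation" of programs.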


> In a reflective system, the interpreter is the one
> thing which should rarely be a constant.
I challenge the idea that there be "one" interpreter.
See category-based reflection vs tower-based reflection.

> By changing the interpreter
> (reflecting), you will inherently change the meaning of the
> 'abstraction' action itself.
I challenge side-effects as a clean way to conceptualize computer science.
If functional programming teaches anything, it's that for cleaner semantics,
we can, and should, reify state. Instead of having "objects that change"
(an oxymoron), we have constant objects, which we consider in sequence
(or in richer structures). Then, you immediately see that this kind of change
is not semantically clean, and usually not conceptually atomic, in that it
involves both a local state modification and a global fix point.
What's cleaner is to have languages exploring "other" languages,
so you can in sequence (or else) consider any other language;
reflection is a (at times useful) particular case, whose possibility is
guaranteed by universality, and whose eventuality is guaranteed by a dash
of magic (actually, a voluntary meta-operation involving a fixpoint).
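To make the "reify state" slogan concrete, here is a minimal sketch (in
Python; the names Counter and step are mine, purely illustrative): instead of
an object that mutates, model change as a pure function from one constant
state to the next.

```python
# Reified state: change is not a side-effect on a mutable object, but
# a pure function from one constant state to the NEXT constant state.

from typing import NamedTuple

class Counter(NamedTuple):   # an immutable value: it cannot "change"
    count: int

def step(s: Counter) -> Counter:
    """A 'change', reified as a function returning a new constant state."""
    return Counter(s.count + 1)

# The history of the "changing object" is just a sequence of constants;
# every earlier state survives intact and can still be inspected:
s0 = Counter(0)
s1 = step(s0)
s2 = step(s1)
print([s.count for s in (s0, s1, s2)])   # → [0, 1, 2]
```

Note that nothing here ever "changes": we merely consider constant objects
in sequence, which is exactly the cleaner semantics claimed above.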

> So, abstraction is not some implicitly atomic thing with clear meaning.
With your reasoning, nothing is atomic, and nothing has a clear meaning!

> We're dealing with a system where the
> user benefits from having as many conceptual frameworks apply to her/his
> goal as is possible.
Sure. But it won't hurt if the *default* framework reuses concepts that
withstood the test of time (such as functions, the essence of which
*is* lambda-calculus).


> How can you advocate a general-programming reflective system where truth
> is implicitly absolute?  How then could you define multiple ontologies
> and then transform "their truth" into "the truth" for the sake of
> argumentation within the system?  These ideas are contradictory.
Uh? Not sure what you mean.


>>> How can we create 'intensional' systems of semantics
>>> where referral is never implicit?
>>I'm not sure I understand what "referral is never implicit" means exactly.
>>Intrinsically, we may only manipulate ground observation, i.e. syntax;
>>semantics we observe only indirectly as an epiphenomenon of this syntax.
> 
> I'm hypothesizing that intensional systems are capable of generalizing
> on the referral mechanism by having it available as a first-order
> object.
>
First-class bindings? Explicit substitution?
Sure, we shall be able to express that.
We shall also be able to have it done tacitly.
Reflection is all about choosing what's explicit and what is implicit.
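As a sketch of what "first-class bindings" can mean (my own illustration, in
Python; all names are made up): a tiny evaluator where the environment, i.e.
the referral mechanism, is an ordinary value that can be inspected, copied,
and swapped.

```python
# First-class bindings: the environment that resolves references is
# itself plain data, passed around explicitly.

def evaluate(expr, env):
    """Evaluate a nested-tuple expression under an explicit environment."""
    if isinstance(expr, str):          # a variable: referral is explicit
        return env[expr]
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    l, r = evaluate(left, env), evaluate(right, env)
    return l + r if op == "+" else l * r

expr = ("+", "x", ("*", "y", 2))

# Because the environment is first-class, "what does x refer to?" is
# just a question about data, and rebinding is ordinary manipulation:
print(evaluate(expr, {"x": 1, "y": 3}))    # → 7
print(evaluate(expr, {"x": 10, "y": 0}))   # → 10
```

A reflective system would then let you choose when the env parameter is
threaded tacitly and when it is exposed, which is the point made above.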


>>The fact that syntax be always a substrate for "computable" manipulations,
>>and that in general semantics be not, proves that we can't equate them
>>(well, actually, intuitionist logicians may disagree).
>>Also, simple paradoxes show that we always need some external referent;
>>a useful logic can't be fully self-contained.
>
> Yes, but what of the nature of the external referent?  Could that be the
> essence of user-interface?
>
We don't have to discuss the nature of the external referent.
By definition, if it's external, then we can't, anyway.
Also, reflection is about not having an "absolute" notion of externality,
but about being able to refine or coarsen this notion.
It would seem that the raw hardware and its I/O interface is the ultimate
limit of what's external to a given computer software program;
now, we might consider not just isolated programs,
but networks of interacting computers; or the human operator
may be part of the program; etc.; so the notion of "external"
is actually defined a contrario by the introspective ability of programs:
what can be reified is internal, and what cannot is external.
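A small Python illustration of that criterion (mine, not from any particular
system; the function greet is made up): a running program can reify parts of
itself into plain data, while the operator at the keyboard stays external,
reachable only across the I/O boundary.

```python
# "Internal = reifiable": a program can turn parts of itself into
# ordinary data; what it cannot reify is, by this criterion, external.

def greet(name):
    return "hello, " + name

# Internal: the function's code object is reifiable as plain data --
# its argument names and its constants are just tuples we can inspect.
code = greet.__code__
print("arguments:", code.co_varnames[:code.co_argcount])
print("constants:", code.co_consts)

# External: the next line the human will type cannot be reified in
# advance; it can only be awaited through the I/O interface, e.g.:
#   answer = input("your name? ")
```

The boundary moves with the reflective power of the system, which is why
"external" is relative rather than absolute.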


> >> If so, how can we make it representation-independent?
> >At the meta-level, this is just quotienting of syntactic objects
> >up to some high-level notion of equivalence.
> >Multiple syntaxes or contexts correspond to as many modalities.
> 
> Once again, I'm arguing an intensional system where the word "is" would
> have no meaning, in the sense of "A is the representation of B" or "A is
> an example of B" or "A is the value of B" being absolute statements.
>
Hum. I'm not sure I understand.
Statements (happily) need not be absolute to be useful.
I'm currently writing an essay on ethics and information
	http://www.tunes.org/~fare/articles/cybernethics.html
and such is undoubtedly one topic it will tackle.


> I'd also like to restate Tril's very important question, which I have
> also been pondering, "How do you implement lambda-calculus using
> lambda-calculus?".  Indeed, how do you implement pi-calculus using
> lambda-calculus?  More significantly, how do you pass around types with
> uncountably-many axioms using lambda-calculus?  How would
> lambda-calculus deal with multiple models of the same system (which
> seems to violate the uniqueness of the typing mechanism, or at least
> makes types always uncountably large in description size for any object
> anywhere)?

I'm not sure what *you* mean by "implement".
There are lots of flavors of lambda-calculus.
Pick any Turing-equivalent one, and you can "encode" just about anything in it,
which you might consider as a valid "implementation",
unless you have some way to distinguish between those.
Some of them have a built-in notion of type, too.
None has built-in support for concurrent programming (when one does,
it is considered an extension, and no longer called lambda-calculus;
one example is Gerard Boudol's blue calculus,
which embodies both the lambda- and pi-calculi).
As for "uncountably many axioms", I don't see how any framework at all
can "effectively" "implement" it from a computational point of view.
Ultimately, we (intend to) use finite physical digital devices,
that can only manipulate logical frameworks with finitary presentations.
Then, lambda-calculus per se doesn't deal with models at all;
it only deals with representing functions,
which is a simple yet powerful concept for expressing computations.
If you want an explicit first-class notion of metaprogramming,
you sure must extend the lambda-calculus with new "primitives";
which is why the Tunes HLL could not be "just" one of the existing
functional programming languages (CommonLISP, Scheme, OCAML, Haskell, etc).
Finally, uniqueness of typing, existence of a principal type,
and even decidable type inference,
are in no way compulsory features of a type system.
Decidable type-checking is usually required,
but even then, it might be useful to not require it,
as long as ultimately, we manipulate objects (programs with types,
proof of typechecking, etc) whose correctness is decidable.
See for instance F_\omega, where only type checking is fully decidable.
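To show what "encoding" means here, a classic sketch (written with Python
lambdas for the sake of runnability; the encodings themselves are the
standard Church ones, the helper names are mine): numbers and arithmetic
encoded as pure functions.

```python
# Church numerals in pure lambda-calculus style: the numeral n is
# "apply f n times", and arithmetic is just function composition.

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))   # → 5
print(to_int(MUL(TWO)(THREE)))   # → 6
```

Whether such an encoding counts as an "implementation" is precisely the
question raised above: it is faithful up to the observations you allow,
and nothing more.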

Regards,

## Faré | VN: Ứng-Vũ Bân    | Join the TUNES project!  http://www.tunes.org/ ##
## FR: François-René Rideau |    TUNES is a Useful, Not Expedient System     ##
## Reflection&Cybernethics  | Project for a Free Reflective Computing System ##
Often in a discussion, I will ask the other person to define some term.
It is not that I believe that terms are absolute, and want to test whether
the person knows its One True Meaning. On the contrary, words are conventions,
and it is necessary to negotiate a common meaning so a sane discussion
be possible. For a constructive discussion *is* a negotiation.
		-- Faré