HLL Primitives

Brian T Rice water@tunes.org
Thu Feb 6 00:47:03 2003

On Thu, 6 Feb 2003, Francois-Rene Rideau wrote:

> re: your discussion about
> 	http://tunes.org/HLL/hlm/
> 	http://tunes.org/HLL/primitives.html
> > [...] my general intent here is to detail the different kinds of
> > meta-objects and semantics that need to be implemented to realize HLL
> > (read: "Tunes High-Level") semantics. [...] start with the primitives [...]
> > 1) Applying Functions
> > First, are multi-argument functions considered to be attributes?
> (un)curryfication, that are operations that apply to functional objects,
> not functional syntax. And indeed, though the general topic of your
> enquiry is about term constructors, the equivalence between functions
> and attributes is something the meaning of which appears much downstream
> in the semantic transformation from input syntax into computational effects:
> it applies to functional objects, not to functional terms.
> [Reminder: if unary functions can be isomorphed into attributes,
> then multi-argument functions can be seen as attributes by morphing them
> into unary functions in the usual ways: curryfication and uncurryfication
> (i.e. higher-order function of decreasing order as you apply more arguments,
> or function of a tuple).]

But currying and uncurrying are specific formalisms, not universal
embeddings. Furthermore, applying currying or uncurrying to make
multiple-argument functions into attributes of single objects means that a
probably meaningless argument order has to be chosen, and that the object's
interface to this "higher-order function attribute" is tied to that choice.
I only need to point out that your "can be seen as" is a specific kind of
embedding, not an isomorphism. It actually reminds me of single-argument
dispatch vs. full-signature dispatch.
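To make the order-dependence concrete, here is a minimal Python sketch (my
own illustration; the names `curry2`, `power_of`, and `raised_to` are
hypothetical): the same binary function yields two different "attributes"
depending on which argument order the embedding fixes.

```python
# Sketch: currying turns a multi-argument function into nested unary
# functions, but only after an argument order is fixed -- a different
# order yields a different embedding of the same function.

def curry2(f):
    """Turn f(a, b) into f(a)(b)."""
    return lambda a: lambda b: f(a, b)

def uncurry2(f):
    """Turn f(a)(b) back into f(a, b)."""
    return lambda a, b: f(a)(b)

def power(base, exponent):
    return base ** exponent

# Two embeddings of the same function, distinguished only by argument order:
power_of = curry2(power)                       # power_of(2)(10) == 1024
raised_to = curry2(lambda e, b: power(b, e))   # raised_to(10)(2) == 1024

assert power_of(2)(10) == 1024
assert raised_to(10)(2) == 1024
assert uncurry2(power_of)(2, 10) == power(2, 10)
```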

> So if I understand your question correctly, it is about whether
> it is possible to unify syntaxes for functions and attributes,
> rather than just unify their semantics up to some rather elaborate
> semantic transformations. -- i.e. can said semantic transformations
> be made trivial, and if we can, what does that mean?
> [If you are to distinguish the meaning of functional terms
> at various moments of the semantic transformation process,
> we should be using different well-identified names for the various stages.]

I thought this was the point in the first place! You write about objects
being fundamental, and having unified abstractions, and now you're
insisting on a special type being available just to support lambda
calculus style.

So, what was all that about when you wrote on active vs. passive
annotations? Are annotations separate from attributions in your mind? And
are both of these supposed to be distinct from a set of lambda calculus
primitives? My understanding from scrutinizing your work for the last 7
years is that the answers are "no", and that lambda calculus goes on the
chopping block in favor of making Tunes work as we advertise. I definitely
don't understand the point of giving Tunes differing fundamental
metaphors. After all, it was you who said that the system is built out of
one first-class kind of citizen, whether it's called object, pattern,
function, foo, etc.

> Yet, I think that though Ulrich mightn't have understood the question,
> his reply was still somehow correct: multiple-argument attributes
> can be unified with multiple-argument functions, by considering
> the straightforward mapping of multiple-argument into single-argument
> through static tupling (with the syntax-specified number of arguments).

Yes, that's in a very crude way similar to what I was suggesting. The
difference is that tupling is an incredibly crude metaphor. The usual
Tuple type is a linear ordering, which gives its elements a notion of
ordinal position. Position in that sense has nothing to do with semantics,
it's just a particular rendering of a signature so that programmers can
learn the way to bind by reflex memory.
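As a small sketch of that point (Python; the names are hypothetical): the
same signature can be rendered positionally or by named roles, and only
convention ties an ordinal position to a meaning.

```python
# Sketch: the same signature rendered positionally and by name. The ordinal
# positions carry no semantics of their own; only the binding of name to
# role does -- position is just a rendering programmers learn by reflex.

def transfer_positional(args):
    source, destination, amount = args   # meaning recovered by reflex memory
    return (source, destination, amount)

def transfer_named(**bindings):
    # Order of the keyword arguments is irrelevant; roles carry the meaning.
    return (bindings["source"], bindings["destination"], bindings["amount"])

assert transfer_positional(("alice", "bob", 10)) == \
       transfer_named(amount=10, source="alice", destination="bob")
```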

> As for a conceptual model for attributes, etc. - in Tunes,
> I think that what I had in my mind when I wrote this page
> was that the "state" of the system at any moment could be seen as a set
> of (attribute, object(s), value(s)) tuples that are known to hold.
> This view of the world is that of various knowledge-based systems,
> and it might also satisfy our relational database loving Mad70.
> As opposed to simple relational databases, however,
> our semantics integrates dynamism deep inside the knowledge base,
> whereas relational databases are static stuff that are modified
> by external dynamic agents. (This statement begs for formalization.)

Alright. As for formalization, that's the task I'm attempting to
undertake. For example, your triples are "attributions" by lexical
derivative of the object-attribute concept. Unfortunately, "known to hold"
isn't absolute. You say yourself under the Safety and Security sections
that object-accessibility through capabilities or some enhanced notion
determines visibility itself in a perspective-based way. If we had a
single reflective tower, the simple answer would be that "the meta-level
knows", but this is not the case.
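A toy rendering of the (attribute, object, value) view, assuming nothing
beyond what the paragraph above states (Python; the `TripleStore` interface
is my own invention):

```python
# Sketch: system state as a set of (attribute, object, value) triples that
# are known to hold, with a query treating None as a wildcard -- the
# knowledge-base reading of Faré's description.

class TripleStore:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, attribute, obj, value):
        self.facts.add((attribute, obj, value))

    def query(self, attribute=None, obj=None, value=None):
        return {f for f in self.facts
                if (attribute is None or f[0] == attribute)
                and (obj is None or f[1] == obj)
                and (value is None or f[2] == value)}

kb = TripleStore()
kb.assert_fact("color", "ball", "red")
kb.assert_fact("color", "sky", "blue")
assert kb.query(attribute="color", obj="sky") == {("color", "sky", "blue")}
assert len(kb.query(attribute="color")) == 2
```

Note that this toy store has a single absolute notion of "known to hold"; a
perspective-based visibility, as discussed above, would parameterize `query`
by the observer's capabilities.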

> > 2) Abstracting terms
> >
> > I suppose that this is the constructor for
> > the annotations on syntactic types mentioned above. Does this make sense?
> Abstracting terms was about lambda-abstraction, defun, and similar things.
> They are term constructors indeed (special forms, would say a lisper).
> Otherwise, I don't see what you mean.

It was exactly that obvious. I just wanted to know whether there was hidden
meaning. (I do believe that some meaning is hidden. See the following.)

> > how information gets added to specify what
> > constraints are needed to trigger the abstraction/annotation.
> Well, the usual ways of doing things are that either the formal parameters
> are specified as part of the lambda/whatever syntax, as in the pattern
> `(lambda ,formals ,@body) -- or the parameters are specified as a special
> syntax, as in $1 $2 $@ in shells, de Bruijn indices, etc. -- or any
> unqualified variable of a certain syntax is considered as being a formal
> variable for the nearest abstraction point, as in Prolog.
> In the resulting expression graph, references to formal parameters in
> the body are somehow linked back to the formal parameters at the
> abstraction point, independently from the abstraction syntax.

Where is the simple objects and attributes model, and where is the Tunes
principle of "only the exact amount of structure needed"? All of your
examples are rooted in the concrete syntax. This tells me that you're not
phrasing it at the right level to make this work.
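For what it's worth, the "linking back to the abstraction point" described
above can be sketched as a conversion from named formals to de Bruijn
indices (Python; a minimal illustration, not a proposal):

```python
# Sketch: resolving named formals into de Bruijn indices, so references in
# the body point back to their abstraction point independently of which
# surface syntax (formals list, $1/$2, Prolog-style variables) produced them.

def to_debruijn(term, env=()):
    """term is ('lam', name, body), ('app', f, a), or a variable name."""
    if isinstance(term, str):
        return ('var', env.index(term))     # distance back to the binder
    tag = term[0]
    if tag == 'lam':
        _, name, body = term
        return ('lam', to_debruijn(body, (name,) + env))
    _, f, a = term
    return ('app', to_debruijn(f, env), to_debruijn(a, env))

# (lambda (x) (lambda (y) (x y)))  ==>  lam lam (1 0)
assert to_debruijn(('lam', 'x', ('lam', 'y', ('app', 'x', 'y')))) == \
       ('lam', ('lam', ('app', ('var', 1), ('var', 0))))
```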

When I separated out Parse/Dispatch/Rewrite phases, the implication was
that the means of communication between the phases would be properly
specified conglomerations of Tunes objects with attributes and
attributions specifying exactly what is necessary and no more. This means
that whatever information was specified in the syntax that was not needed
would be discarded, including order of evaluation and such, when it carries
no semantic significance.

What I was interested in is visible in the two lines you quote above: "how
information gets added to specify what constraints are needed to trigger
the abstraction/annotation". That means I was looking not for the means
for specifying how the arguments get conglomerated for application, but
the means for specifying what information has to qualify these arguments
in order for a dispatch to successfully occur.
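A rough sketch of what I mean by information qualifying arguments for
dispatch (Python; the `Dispatcher` interface is hypothetical): each method
declares predicates its arguments must satisfy, and dispatch succeeds only
when all of them hold.

```python
# Sketch: dispatch guarded by declarative constraints on arguments, rather
# than by how the arguments are conglomerated for application.

class Dispatcher:
    def __init__(self):
        self.methods = []          # (constraints, implementation) pairs

    def method(self, *constraints):
        def register(impl):
            self.methods.append((constraints, impl))
            return impl
        return register

    def __call__(self, *args):
        for constraints, impl in self.methods:
            if all(c(a) for c, a in zip(constraints, args)):
                return impl(*args)
        raise TypeError("no method qualifies for these arguments")

div = Dispatcher()

@div.method(lambda a: isinstance(a, int),
            lambda b: isinstance(b, int) and b != 0)
def _(a, b):
    return a // b

assert div(7, 2) == 3
```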

> > 3) "Defining typed constructors"
> > wherever this term came from.
> Well, typed lambda-calculi, as in ML, Coq, etc.
> Though you have something like that in Boyer&Moore's ACL2, too.
> And you have records, cases, structs, unions, and objects in most languages,
> that amount to something similar.


> As for "trust", I guess the term ought to have been elaborated.
> -- When I wrote that, I was thinking in terms of the Curry-Howard Isomorphism,
> where types are propositions and objects are proofs, so that declaring
> "x is an object of type T" amounts to assuming an axiom.

Okay, but I am asking specifically about assumptions/declarations and
their place within the linguistic framework.

 [Snipped redundant and separate information.]

> > I'm reminded of the Epsilon Calculus reference you make
 [Snipped redundant hand-waving about how Epsilons cure all evils.]

The epsilon concept is useful, but you seem to be making lots of
generalizations that have no merit whatsoever. Speak in terms of objects
and attributes, and enumerate the kinds of meta-objects needed to complete
the semantics of this. That would greatly satisfy me.

I noticed that you completely snipped the "nullary function" metaphor.

> > There's also the question of different kinds of types.
 [Snipped a long explanation that you have no idea what to do.]
> As for syntactic primitives, I guess there must be
> (a) a way of selecting a typesystem for a program fragment
> (b) a way of specifying type constraints relative to the ambient typesystem
>  within a program fragment (type checks, type assumptions)

Well, we'd need to define a type system in terms of Tunes objects and
attributes in order to accomplish these things. So there's something else
to specify. But I didn't mean syntax anyway, just semantics.

> > a single object has infinitely many types.
> Yes, though it may have principal types in various typesystems
> (and at the meta-level these typesystems will provide a primitive
> to extract, check, etc. said principal types).

It occurs to me that type systems are essentially strategies for matching
the information inferred about an object with some "a priori" definitions,
for the purpose of organizing the implementation around those definitions.
For example, predicate dispatching is equivalent to other type systems
where the predicate is the primitively-encoded type. In those latter
cases, a predicate is still being tested, and with any other strategy
implemented, the predicate would be non-trivial and perhaps nearly
impossible to test.
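The equivalence claimed above can be shown in miniature (Python;
illustrative only): an ordinary type check is just one predicate among many,
and predicates like "non-empty string" have no primitive type encoding.

```python
# Sketch: a conventional type test as a special case of predicate dispatch.
# Any predicate can select an implementation; isinstance is merely the one
# that conventional type systems encode primitively.

def type_predicate(t):
    """Encode an ordinary type check as a dispatch predicate."""
    return lambda x: isinstance(x, t)

def nonempty_string(x):
    """A predicate most simple type systems cannot express directly."""
    return isinstance(x, str) and len(x) > 0

assert type_predicate(int)(42)
assert not type_predicate(int)("42")
assert nonempty_string("hello") and not nonempty_string("")
```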

> > we need some primitive that
> > expresses universal quantification *independently* from the abstraction
> > primitive.
> Note that universal quantification can either be a syntactic abstraction
> as in (forall (x) (foo x)), or it can be a higher-order syntactic combinator
> to be used with other means of abstractions, as in (holds-universally #'foo).

Well, neither of those really uses abstract syntax, but the latter is very
roughly similar to what I had in mind, except that there's no scope
information (a sub-structural kind of information) in that syntax.

To be really clear, I am talking about quantified arguments in the
abstract sense. Let them be objects, and let the declarations and
inferences on them be some kind of declarative meta-objects. The
symbol name should not, of necessity, be involved in any concrete way.
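For concreteness, the combinator reading quoted above might look like this
(Python; checking over an explicit finite domain, whereas the real primitive
would quantify abstractly rather than enumerate):

```python
# Sketch: universal quantification as a higher-order combinator applied to
# an ordinary function -- the (holds-universally #'foo) reading -- rather
# than as a binding form in the syntax.

def holds_universally(prop, domain):
    """Check that prop holds for every element of an explicit domain."""
    return all(prop(x) for x in domain)

def foo(x):
    return x * 0 == 0

assert holds_universally(foo, range(100))
assert not holds_universally(lambda x: x < 50, range(100))
```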

> > Note that the dispatch argument effectively separates universal
> > quantification from the abstraction primitive, at least to some extent.
> I'm not sure what you mean.

I should have been clearer here. I was referring to the discourse in the
HLL Architecture paper separating dispatch from application. A universal
vs. a witness quantifier can be specified independently of the actual
abstraction process, which merely takes a quoted term and abstracts
bindings for it. Quantification attaches to those bindings constraints on
what the binding signifies about the range that it is quantified over.

> > Anyway, this is universal quantification of "output" vs. "input", so the
> > argument is even farther removed. What primitive do you think arises here?

To clearly restate what you divided into parts:

1) We already have an abstraction primitive, which in my design produces
elements of the Rewrite phase of evaluation.

2) Therefore, the presence of the "Defining Typed Constructors" section
must be providing some separate aspect, or otherwise it is totally
redundant.
3) Consider the nature of defining constructors for a type, taking as my
example the Maude declaration "op 0 : -> Natural", which defines 0 as a
nullary function returning a natural and expresses equations about it to
refine it. If we take the result of this operator as an object, and refuse
to think about symbols, we have abstract universal quantification. Because
it's abstract, we have to ask how to make objects abstract. Therein lies a
main part of my question.

4) "Output" vs. "Input" is about the fact that 0 is now the output of an
operator rather than the input to a formula (which is basically an
abstraction, since it takes a "binding") in its nature, so (3) becomes
more central to the argument.
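The Maude fragment can be transliterated into a toy term algebra (Python;
the `Zero`/`Succ` names are my own): 0 as a nullary constructor, with an
operation refined by equations on constructors.

```python
# Sketch of "op 0 : -> Natural": 0 as a nullary constructor of an algebraic
# datatype, with addition defined by equations rewriting on constructors.

class Zero:
    """The nullary constructor: an operator with no inputs, producing 0."""
    pass

class Succ:
    def __init__(self, pred):
        self.pred = pred

def add(m, n):
    """add(0, n) = n ; add(s(m), n) = s(add(m, n))"""
    if isinstance(m, Zero):
        return n
    return Succ(add(m.pred, n))

def to_int(n):
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(two)
assert to_int(add(two, three)) == 5
```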

> > 4) Evaluation strategy
> >
> > We only see two issues mentioned here: future vs. actual, and
> > side-effecting vs. non-side-effecting. These are useful issues to address,
> > but they seem like just artifacts of current terminology rather than a
> > real definition.
> Future and actual, or call-by-{need,name,future}
> vs call-by-{value,object,reference} are semantically very different things,
> both desirable at the same time, that thus require a syntactic constructor
> to distinguish between them. As for what to choose at first, if we
> are to bootstrap from CL, we might as well copy the way Screamer did it.

No thanks. That's totally non-informative, and probably a step backwards.
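For reference, the semantic difference at issue can be sketched
independently of Screamer (Python; the `Promise` class is a minimal
illustration): call-by-need evaluates an argument at most once, on first
use, where call-by-value would evaluate it eagerly at the call.

```python
# Sketch: call-by-need as a memoizing thunk. A call-by-value argument is
# computed at the call; a call-by-name thunk on every use; a call-by-need
# promise at most once, on first use.

class Promise:
    def __init__(self, thunk):
        self.thunk, self.done, self.value = thunk, False, None

    def force(self):
        if not self.done:
            self.value, self.done = self.thunk(), True
        return self.value

evaluations = []

def expensive():
    evaluations.append(1)
    return 42

p = Promise(expensive)
assert evaluations == []                  # not yet evaluated: call-by-need
assert p.force() == 42 and p.force() == 42
assert len(evaluations) == 1              # memoized: evaluated exactly once
```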

> For side-effecting vs. non-side-effecting, I was thinking not only
> of constructors for types that specify the difference, but also
> of functors that transform side-effecting objects into pure objects,
> and vice versa -- a bit like monads in Haskell, except that monads are
> never absorbed in Haskell, so that if ever you want to use one locally,
> you either clutter up the whole program globally, or you do unsafe things.
> I admit I can't provide full formal semantics for such
> absorption and reification of monads.

You're talking about something totally different, apparently.
"Side-effecting" is not a switch you turn on and off; I'm concerned with
scalable adjustment of what happens to (side-)effects as they are
specified in a rewrite. There must be some set of primitives upon which a
negotiation system can be built.
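As a sketch of the "absorption" mentioned in the quote (Python; a toy
Writer-style effect, my own construction): running the effectful object
discharges its accumulated effects into a pure value, instead of
propagating monadic style through the whole program as Haskell requires.

```python
# Sketch: a toy Writer-style monad whose `run` absorbs the accumulated
# effect back into a pure result, rather than cluttering the whole program.

class Writer:
    def __init__(self, value, log=()):
        self.value, self.log = value, tuple(log)

    def bind(self, f):
        nxt = f(self.value)
        return Writer(nxt.value, self.log + nxt.log)

def run(w):
    """Absorption: collapse the side-effecting object into a pure result."""
    return w.value, list(w.log)

prog = Writer(3).bind(lambda x: Writer(x + 1, ["incremented"])) \
                .bind(lambda x: Writer(x * 2, ["doubled"]))

assert run(prog) == (8, ["incremented", "doubled"])
```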

> > So there must be some "containment" kind of primitive you're after, or maybe
> > higher-order rewrite (rewrite of rewrite rules in order to satisfy
> > constraints that aren't part of the declarative specification). But even
> > higher-order rewrite isn't really a separate primitive; maybe some
> > separate form of quotation if that's needed.
> Yup - that's the kind of things that the primitive I'm looking for
> would resolve into, when implemented.

Well, it sounds like PFM, that's Pure Magic, to me. I'm pretty sure that
no one in their right mind should volunteer for a project which rests on
such a conclusion. That's a sure signal to get out of the project (unless
you're a PhD student who has lots of grant money to burn through).

> And you're right that if we have a general syntax for this kind of things,
> considering techniques for modular construction of language semantics
> through the incremental application of monads, this could be the focal
> point of "reflection" in the HLL.

"Incremental application of Monads"? And what makes you use quotes around
"reflection"? What kind of reflection do you mean?

> > So [...] we have two of four "basic constructors" which don't have
> > an obvious specification yet.
> Indeed.
> I hope my answer clarified my original thoughts
> as expressed on the project page.
> I readily admit these thoughts were more of a seed than an achievement.
> I hope this answer made sense despite my mind getting so tired.

I'd prefer some more clarification and responses where I've attempted to
restate my position properly.

Brian T. Rice
LOGOS Research and Development