logical symbols

RE01 Rice Brian T. EM2 BRice@vinson.navy.mil
Tue, 27 Oct 1998 12:26:33 -0800


>Quantifiers and lambdas use naming because names are required when writing
>an expression sequentially (using strings).  Alpha-conversion in lambda
>calculus shows that the actual name picked is irrelevant.  In predicate
>calculus, we find that the inference rules for adding and removing
>quantifiers (UG, EG, UI, and EI) need to have qualifiers such as "x is not
>flagged", "t is free to replace x in w" and "x is not subscripted".  These
>rules are only needed because variables have names associated with them.
>Once we get rid of linear, written forms we can get rid of variable names.
>So the representation of "\x.x x" would not even remember the variable was
>named x (except for user sugar in printing out the expression later).
>Instead the expression would simply store the information that there is an
>input at two places.  You can think of it as replacing x with a hole,
>or a dependency on the outside world.  I think I'm doing a bad job of
>explaining this, so I'll stop.  Maybe this will help: Instead of storing
>that "this is the variable x" in each place where x occurs, all the
>computer has to do is remember that each occurrence of x is "the same" and
>the name can be left out.  "The same" can be represented by pointers, or a
>number, or anything.  I hope that was understandable.
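
(An aside on the side conditions mentioned above: a rough OCaml sketch,
purely illustrative and not from any existing system, of why naming
forces qualifiers like "t is free to replace x in w".  With names, a
naive substitution can capture a free variable.)

  type named =
    | V of string
    | L of string * named                  (* \name. body *)
    | A of named * named

  (* Replace free occurrences of x by t -- with no freshness check. *)
  let rec subst x t = function
    | V y -> if y = x then t else V y
    | L (y, b) -> if y = x then L (y, b) else L (y, subst x t b)
    | A (f, a) -> A (subst x t f, subst x t a)

  (* subst "x" (V "y") (L ("y", V "x")) yields L ("y", V "y"): the
     substituted y is captured by the inner binder, which is exactly
     the case the "t is free to replace x in w" qualifier rules out. *)
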
Yes, thank you.  However, the notion of "the same" is what I intend to
factor out of the usual model of communication as a sequence of symbols.
This also explains my distinction between "extension" and "intension",
in that the extension of a symbol which is used any number of times
within an expression is the entity which remains "the same" for each
occurrence.  Essentially, I am simply removing the language layer that
entails repeated references to the same object from the system's initial
interface.  An intuition for this idea would be the reduction of the
programming system to "direct object manipulation".  Naturally, multiple
objects would still refer to one object, but only as an efficiency issue
of representing relations between objects, not as the basis for creating
relations, as it is with extensional languages.
I am essentially arguing against any system requiring alpha-conversion,
because I wish to reduce/eliminate the dependence on particular
representations.
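
(For a concrete picture of that nameless storage, here is a minimal
OCaml sketch using de Bruijn-style indices; the type and names are my
own illustration, not anything Tunes-specific.  An occurrence records
only "which enclosing binder", so the name x is never stored and the
side conditions above never arise.)

  type term =
    | Var of int            (* 0 = nearest enclosing Lam, 1 = next out, ... *)
    | Lam of term           (* the binder carries no name *)
    | App of term * term

  (* "\x. x x" with the name x forgotten. *)
  let self_apply = Lam (App (Var 0, Var 0))

  (* Alpha-equivalence collapses into plain structural equality, so no
     renaming and no "x is not flagged" qualifiers are needed. *)
  let alpha_equal (a : term) (b : term) : bool = (a = b)
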
>> [...] the "forall" and "thereexists"
>> operators to be statements, at least in common-sense terms.  However,
>> these operators are declarations about symbols, not intensions.  They
>> seem also to be declarations of a meta-language sort: they operate on
>> expressions which are independent of their component formulae.  Because
>> of the extensionality associated with these operators, evaluators have a
>> much more difficult time rendering transformations upon structures of
>> logics and the lambda-calculus, particularly in the interesting
>> applications of these languages where statements become uncountable,
>> etc.
>
>I don't really understand what you mean here.  After you define
>extensions and intensions I may have a better idea.  But could you
>clarify "independent of their component formulae" (what formulae?) and
>give some more detail on statements becoming uncountable?
(Extension and intension dealt with above.)  Well, the most easily
understood examples come from the set theory used in real analysis.
For example, no one wishes for a computer to know what the real-number
plane is by listing its elements sequentially.  Do they?  Similarly,
suppose we want to deal explicitly with the axioms over a system, in
order to create a more useful variant system.  What if the number of
axioms is infinite, as in the decidable versions of predicate calculus
(with an infinite number of predicate orders)?  What if, when dealing
with a mathematical model of a logical language, one has to resort to
an infinite number of relations or functions or logical constants?
Obviously, one can come up with ways, using meta-statements for
example, to "virtually" specify an object's "definition".  That is how
mathematicians have done it for quite some time.  However, we seek a
system where this sort of situation is part of a homoiconic
environment, so that we must account for this action systematically,
as opposed to an "ad hoc" method of eliminating "uncountabilities" for
each instance.  Otherwise, Tunes would keep going into infinite loops
every time it tried to construct a new object with an infinitary
definition.
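
(To make "virtually specifying a definition" concrete: a rough OCaml
sketch, with names of my own invention, of storing an infinite object
as a rule rather than as a list of elements, so that nothing ever
tries to enumerate it.)

  (* Intensional set: a membership rule, not an enumeration. *)
  type 'a set = 'a -> bool

  (* "The real-number plane" as a single finite rule. *)
  let real_plane : (float * float) set = fun _ -> true

  (* An axiom schema with one instance per predicate order n: the whole
     schema is one finite object even though it has infinitely many
     instances. *)
  type axiom = string
  let schema_instance : int -> axiom =
    fun n -> Printf.sprintf "axiom schema instance for predicate order %d" n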

>Without getting any more detailed, I agree, "forall" is a more powerful
>operator than, say, "not".  "Forall x" seems to be defining a property of
>every "x" that appears in the expression (the property that it is
>universally quantified).  It also defines a property of the expression: 
>the property of all x's being universally quantified.  I think forall
>really is multiple things. (In TUNES any object that is multiple things is
>a compound object, of multiple objects that are only one thing.)  1. It says
>what is being universally quantified.  2. It says something is being
>universally quantified.  and 3. The scope in which (1) applies.  [(3)
>being implicit in the syntax.]  Our job is to orthogonalize these 3
>properties, make sure they are all explicit and can be adjusted
>independently.  Hope this helps.

Okay, I'll buy that (about orthogonalizing the 3 properties), but I'm
not sure how to go about the argument from that standpoint, myself.
Maybe the preceding paragraph of mine will help you understand my
reasoning about it.
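
(For concreteness, here is a small OCaml sketch, again with names of
my own choosing rather than anything from Tunes, of the three
properties pulled apart so that each can be adjusted independently:
the Forall constructor says that something is universally quantified,
the Bound 0 occurrences it captures say what is quantified, and its
argument is the scope.)

  type formula =
    | Pred of string * term list       (* atomic predicate over terms *)
    | Not of formula
    | And of formula * formula
    | Forall of formula                (* scope is the argument; no variable name *)
  and term =
    | Bound of int                     (* occurrence bound by the n-th enclosing Forall *)
    | Const of string

  (* "forall x. P(x) /\ Q(c)", with the name x never stored; widening or
     narrowing the scope is just re-nesting the Forall constructor. *)
  let example = Forall (And (Pred ("P", [Bound 0]), Pred ("Q", [Const "c"])))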