Replies to replies.

RE01 Rice Brian T. EM2 BRice@vinson.navy.mil
Wed, 28 Oct 1998 18:40:06 -0800


>>> > So 'foo.c' will be all over a reflective system.
>>> 
>>> To anyone:  There seem to be many different kinds and extents of
>>> reflection.  Is it possible to clarify and distinguish these? 
>
>TF> I see reflection as the ability to modify self. In a software system,
>TF> reflection must in some way be implemented. Whether or not this
>TF> reflection mechanism should itself be reflective is an option. If it is,
>TF> the reflective system could redefine itself to no longer be reflective,
>TF> from where there is no way back. This way a non-reflective system is
>TF> sort of a subset of a reflective one. Also, it should be possible to
>TF> specify a transition from one reflective system to another without
>TF> breaking the boundaries of the initial reflective framework.
>
>Thank you, this helps put some things in perspective for me.
>
>I like the definition:
>-- Reflection is the ability to modify self.
>-- Anyone else: are there any problems with that definition?
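
For concreteness, here is one toy reading of that definition, sketched in
Python purely as an illustration (none of the names below are Tunes
constructs): a system whose modification mechanism is itself one of the
things it can modify, so it can also destroy it.

  # Toy reading of "reflection is the ability to modify self".
  class System:
      def __init__(self):
          self.ops = {"double": lambda x: x * 2}
          # 'redefine' is just another entry in the table, so the system
          # can modify the very mechanism it uses to modify itself.
          self.ops["redefine"] = self.redefine

      def redefine(self, name, value):
          self.ops[name] = value

      def call(self, name, *args):
          return self.ops[name](*args)

  s = System()
  s.call("redefine", "double", lambda x: x + x)  # ordinary self-modification
  s.call("redefine", "redefine", None)           # TF's one-way door: after this,
                                                 # further redefinition fails, and
                                                 # what remains is the frozen,
                                                 # non-reflective subset.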

Your definition fits common sense; my only objection is that it doesn't
help me address the issues in the machine-model that I am looking at
creating.  In other words, I'd prefer something more symbolic and more
applicable to problem-solving.  For instance, in most human-computer
systems the source of that "ability" is distributed in a highly complex
way between the human and the computer.  That makes it hard for
intuition to make the leap to a system where the distribution is shifted
toward the Tunes balance we have talked about as the useful one.

>The line:
>Whether or not this reflection mechanism should be reflective is an option.
>could actually be restated as:
>The reflective system may not be part of the self....

But isn't total reflection a goal?  What's so wrong with giving the
user the option to destroy the system?  Naturally, we wouldn't like that
to happen, so what we could do instead is restate the ontology so that
the question (to kill Tunes or not to kill Tunes?) seems unnatural.
This ties in with my conception of how to get the system to do things on
its own in a reflective way.  We could 'encode', in the object system,
the identification of the system's 'actor' with its mathematical 'core',
so that the preservation of this aspect of the system would inherently
be one of the partial-evaluation constraints (axioms), enforced by
preserving the reversibility of partial evaluations.  I should clarify
and add to this, but I'm at some sort of loss for words right now.
Criticism is welcome on this point.
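
As an equally crude sketch of that constraint (again, invented names only),
the system could refuse any self-modification that does not come packaged
with an inverse that actually round-trips; the "destroy reflection" move
from the earlier sketch then simply cannot be stated.

  import copy

  class ReversibleSystem:
      def __init__(self):
          self.ops = {"double": lambda x: x * 2}

      def transform(self, forward, backward):
          before = copy.copy(self.ops)
          forward(self.ops)
          undone = copy.copy(self.ops)
          backward(undone)
          if undone != before:
              self.ops = before  # roll back: irreversible edits are rejected
              raise ValueError("transformation is not reversible")

  r = ReversibleSystem()
  r.transform(lambda ops: ops.update(triple=lambda x: x * 3),
              lambda ops: ops.pop("triple"))      # accepted: comes with an inverse
  try:
      r.transform(lambda ops: ops.clear(), lambda ops: None)
  except ValueError:
      pass  # rejected: no real inverse was supplied, so the edit never takes effect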

>The concrete example I get from redefining a reflective system to not be
>reflective is developing a program in a Forth system (or in any
>interaction-intensive system), and then removing the Forth command line
>(the development environment).
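
(In terms of my toy sketch above, that scenario is just: build the program
through the reflective entry point, then remove the entry point itself.)

  s = System()                                              # fresh instance from the sketch above
  s.call("redefine", "main", lambda: s.call("double", 21))  # interactive development
  s.call("redefine", "redefine", None)                      # remove the "command line"
  s.call("main")                                            # the delivered program still runs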

Right, but in the fine-grained Tunes environment the analogue would have
to be somewhat different, since for almost any situation multiple
(uncountably many) routes may be used for plotting the same solution.
In other words, there would be no "_the_ development environment" per
se, so removing its equivalent would be like saying "go lie" to a
machine programmed to tell the truth (sound familiar, anyone?).  This
would be _the_ bug, if it could be expressed at all (in terms of its
closure with the context).  I don't think the system we are looking for
would be able to handle it, but I believe there are ways around it.
Possibly we could make this sort of statement the only one incapable of
finite representation within Tunes, which sounds like too contrived a
scheme, but it may fall out as an inherent property of a _fully_
(finitely) reflective system.  I hope that someone can think of a good
metaphor for this.