No subject

RE01 Rice Brian T. EM2 BRice@vinson.navy.mil
Sat, 31 Oct 1998 11:06:12 -0800


>> [def. of Reflection inadequate for Brian's machine model...]
>> For instance, whence the said
>> "ability" derives in a human-computer system is distributed in a highly
>> complex way between the human and the computer for most systems.
>>
>> [...deleting the entire system...]
>> [deleting the development environment]
>
>I will try to answer this whole message in one paragraph.
>I don't see why reflection is part done by the user and part by the
>computer.  Every operation should be invokable by the user or computer
>equivalently.  Therefore, the system does not have a separate API for
>programs and for user interface.  Instead each object has its own abstract
>syntax.  It is unclear how much of the system can be trimmed down because
>the system depends on itself many times over.  Be certain that the
>fine-grain orthogonality (lack of interdependence between small parts)
>allows you to remove anything that is not needed.
Well, what I mean, I guess, is that the actions required to complete
certain kinds of reflection would need the intuition of a person at some
critical point (not necessarily at some step in a procedure).  For
instance, the "trimming down of the system" process that you just
mentioned could require a lot of intuition, as far as we know.  More
likely, in a very large object system (which we may end up with if we
are not careful), intuition will be the more effective tool, considering
the large (O(n!)) number of patterns which can be derived from, say,
arrow diagrams or data-flow diagrams.  In other words, the system may
not be able to work effectively with extremely large discrete systems,
due to the search times required by the matching algorithms.  We would
instead, say, anticipate this and construct representation schemes for
these systems which make the user's job of discovering a useful metaphor
between subsystems much easier.  The computer would then have a finite
job of graph-reduction or 'pruning' or something like that.  And not
just one metaphor would be useful; many would be, simultaneously.  If
the question is important enough to the user, then the computer could be
set up to run optimized pattern-matching algorithms "overnight" and
present the results for the user's (or a software evaluator's) approval.
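To make the combinatorial worry concrete, here is a toy sketch (my own
illustration, not anything from Tunes) of matching a small pattern graph
against a system graph.  The brute-force search enumerates all
n!/(n-k)! injective node mappings, which is O(n!) when the pattern is
nearly as large as the graph; even a naive 'pruning' backtracker
abandons a partial mapping the moment an edge constraint fails, and so
visits far fewer states.  All function and variable names here are
hypothetical.

```python
# Toy sketch: brute-force vs. pruned subgraph-pattern matching.
# Graphs are given as directed edge lists; a "match" is an injective
# mapping from pattern nodes to graph nodes preserving every edge.
from itertools import permutations

def brute_force_matches(pattern_edges, pattern_nodes, graph_edges, graph_nodes):
    """Try every injective mapping of pattern nodes into graph nodes:
    n!/(n-k)! candidates, i.e. O(n!) in the worst case."""
    graph = set(graph_edges)
    matches, tried = [], 0
    for perm in permutations(graph_nodes, len(pattern_nodes)):
        tried += 1
        mapping = dict(zip(pattern_nodes, perm))
        if all((mapping[a], mapping[b]) in graph for a, b in pattern_edges):
            matches.append(mapping)
    return matches, tried

def pruned_matches(pattern_edges, pattern_nodes, graph_edges, graph_nodes):
    """Backtracking search that 'prunes' a partial mapping as soon as
    it violates an edge constraint, instead of completing it."""
    graph = set(graph_edges)
    matches, visited = [], 0

    def extend(mapping, remaining):
        nonlocal visited
        visited += 1  # count every partial state we actually explore
        if not remaining:
            matches.append(dict(mapping))
            return
        node = remaining[0]
        for candidate in graph_nodes:
            if candidate in mapping.values():
                continue  # keep the mapping injective
            mapping[node] = candidate
            # check only the pattern edges whose endpoints are both mapped
            ok = all((mapping[a], mapping[b]) in graph
                     for a, b in pattern_edges
                     if a in mapping and b in mapping)
            if ok:
                extend(mapping, remaining[1:])
            del mapping[node]

    extend({}, list(pattern_nodes))
    return matches, visited
```

For example, matching a 3-node directed path against a 7-node directed
cycle: both searches find the same 7 matches, but the brute-force
version tries all 210 permutations while the pruning version explores
only a couple of dozen partial mappings.  The "overnight" job above
would be this same idea with far better heuristics.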
Here's a strange metaphor: think of the user as a computing unit which
is an uncountably-multivariate, non-linear object whose input and output
match some of the computer's output and input, respectively.  This
non-linearity amounts to a dependence of state on uncountably many
history-affected factors, with a guiding intelligence providing answers
that the linear machine could never be guaranteed to come up with.  The
computer's part would be to factor out the linear aspects in a useful
way.
How does reflection tie into this?  Well, I think that because the
system we want, whatever its physical realization, must be totally
reflective, its 'internal' parts will be presented with small, finite
situations of things like pattern-recognition.  I could throw in some
other words like 'non-monotonic reasoning' as part of the argument, but
that would only mislead us into confusing AI with Tunes, which are
clearly separate in their philosophies.  The point is that, although
I've contradicted myself, there is a quite valid point of view about the
computing system as a whole, and about how its reflectional requirements
per Tunes interact with the software object system's reflectional
requirements.