Wegner OOPSLA'95, Interaction vs Turing Machines (was: so-called Turing-Equivalence)

Massimo Dentico m.dentico@teseo.it
Sun, 29 Aug 1999 12:36:39 +0200


François-René Rideau wrote:
>
> >: Tim Bradshaw <tfb@tfeb.org> on comp.lang.lisp
> > Another formalism of equivalent power to a UTM is the lambda
> > calculus
> I'm _sick_ of hearing such a meaningless statement of "being of
> equivalent power to a UTM" repeated over and over again as a bad
> excuse for bad programming language design.
>
> I question the meaningfulness of your term "Turing-equivalence".
> What definition of it do you use, if any?
> Equivalent _up to what transformation_?
> Do these transformations correspond to anything _remotely_
> meaningful *in the presence of interactions with the external world
> (including users)*?

I think that on this subject Peter Wegner's paper is illuminating.
Below I quote the abstract and section 2.3, since the original text
is quite long (67 pages); this way you can form your own idea of
its content and decide whether you want to read it in full. I hope
this long citation does not annoy anyone. Apologies for my English;
any correction is welcome.

==================================================================

                OOPSLA Tutorial Notes, October 1995

        Tutorial Notes: Models and Paradigms of Interaction
          Peter Wegner, Brown University, September 1995
                                 
        (http://www.cs.brown.edu/people/pw/papers/oot1.ps)

Abstract:

The  development of a conceptual framework and formal  theoretical
foundation  for  object-based programming has  proved  so  elusive
because  the observable behavior of objects cannot be  modeled  by
algorithms.  The  irreducibility of object  behavior  to  that  of
algorithms  has radical consequences for both the theory  and  the
practice of computing.

Interaction  machines, defined by extending Turing  machines  with
input  actions (read statements), are shown to be more  expressive
than  computable  functions, providing  a  counterexample  to  the
hypothesis  of  Church  and Turing that the  intuitive  notion  of
computation   corresponds  to  formal  computability   by   Turing
machines.  The negative result that interaction cannot be  modeled
by algorithms leads to positive principles of interactive modeling
by  interface  constraints that support  partial  descriptions  of
interactive   systems  whose  complete  behavior   is   inherently
unspecifiable.  The  unspecifiability  of  complete  behavior  for
interactive   systems   is  a  computational   analog   of   Gödel
incompleteness for the integers.
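
[To make the abstract's central distinction concrete, here is a
minimal Python sketch of my own, not from Wegner's paper; the names
are illustrative. An algorithm maps an input that is fixed in
advance to an output and halts; an interaction machine performs
read actions during the computation, so its behavior can depend on
inputs that did not yet exist when it started.

    def algorithmic_tm(tape):
        """Algorithm: the entire input exists before computation starts."""
        return sum(tape)  # any computable function of the initial tape

    def interaction_machine(read_input, write_output):
        """Turing machine extended with an input action (read statement).

        Each call to read_input() may deliver data that the environment
        chose after observing earlier outputs, so the complete input
        stream cannot be written onto a tape in advance.
        """
        state = 0
        while True:
            x = read_input()       # input action: couples machine and world
            if x is None:          # environment ends the interaction
                return state
            state += x             # ordinary algorithmic step
            write_output(state)    # output observable by the environment

No single precomputed tape captures a run in which the environment
picks each input only after seeing the previous output; that
dependency is what the abstract means by behavior richer than
computable functions. -- M.D.]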

Incompleteness  is a key to expressing richer behavior  shared  by
empirical  models of physics and the natural sciences. Interaction
machines have the behavioral power of empirical systems, providing
a  precise  characterization of empirical computer  science.  They
also   provide  a  precise  framework  for  object-based  software
engineering  and agent-oriented AI models that is more  expressive
than algorithmic models.

Fortunately the complete behavior of interaction machines  is  not
needed to harness their behavior for practical purposes. Interface
descriptions are the primary mechanism used by software  designers
and  application programmers for partially describing systems  for
the   purpose   of   designing,   controlling,   predicting,   and
understanding  them.  Interface descriptions  are  an  example  of
"harness constraints" that constrain interactive systems so  their
behavior  can  be harnessed for useful purposes. We  examine  both
system  constraints  like  transaction correctness  and  interface
constraints for software design and applications.
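
[As a down-to-earth illustration, mine rather than the paper's, an
abstract interface in a modern language is exactly such a partial
description. The sketch below uses Python; the class Account and
its two methods are hypothetical names chosen for the example.

    from abc import ABC, abstractmethod

    class Account(ABC):
        """An interface description acting as a "harness constraint".

        Clients may rely only on the contract stated here; the complete
        behavior of any implementation is deliberately left unspecified.
        """

        @abstractmethod
        def deposit(self, amount: int) -> None: ...

        @abstractmethod
        def balance(self) -> int: ...

Any number of interactive systems (a database-backed account, a
mock, a remote service) can stand behind this constraint, and
designers control and predict the composite system through the
interface alone. -- M.D.]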

------------------------------------------------------------------

[...]

2.3 Robustness of interactive models

The  robustness of Turing machines in expressing the  behavior  of
algorithmic, functional, and logic languages has provided a  basis
for the development of a theory of computation as an extension  of
mathematics. Interaction machines are an equally robust  model  of
interactive  computation. Each concept on the  left-hand  side  of
figure  24  has  greater expressive power than  the  corresponding
concept  on the right-hand side. Moreover, it is conjectured  that
left-hand-side concepts have equivalent expressive power  captured
by a universal interaction machine, just as right-hand-side
concepts have equivalent expressive power, captured by a Turing
machine (universal algorithm machine).

                 ... has richer behavior than ...

universal interaction machine     universal algorithm machine
interactive problem solving       algorithmic problem solving
open system                       closed computing system
programming in the large          programming in the small
object-based programming          procedure-oriented programming
distributed AI                    logic and search in AI
scientific modeling paradigm      mathematical reasoning paradigm
philosophical empiricism          philosophical rationalism

                  robust equivalence of behavior

 Figure 24: Parallel Robustness of Interaction and Turing Machines

Interaction machines provide a common modeling framework for left-
hand-side  concepts  just  as Turing  machines  provide  a  common
framework  for  right-hand-side concepts. The equivalence  of  the
first five right-hand-side entries is the basis for the robustness
of  Turing machines. The corresponding left-hand-side entries  are
less  familiar,  and  there has not until recently  been  a  crisp
notion  of expressive power for interactive problem solving,  open
systems, programming in the large, object-oriented programming, or
distributed artificial intelligence. Interaction machines  provide
a  crisp  model  for these concepts and allow the  equivalence  of
behavior  and of interactive problem-solving power to be expressed
as  computability  by  a  suitably defined  universal  interaction
machine.

The  somewhat  fuzzy notion of programming in  the  large  can  be
precisely  defined  as  interactive  programming.  Large  entirely
algorithmic  programs of one million instructions do not  qualify,
while  modest interactive systems with a few thousand instructions
do.   The   identification  of  programming  in  the  large   with
interactive programming implies that programming in the  large  is
inherently nonalgorithmic, supporting the intuition of Fred Brooks
that  programming in the large is inherently complex.  Interaction
machines  provide a precise way of characterizing  fuzzy  concepts
like  programming in the large and empirical computer science  and
elevate the study of models for objects and software systems to  a
first-class status independent of the study of algorithms.

The  role  of  interaction machines in expressing the behavior  of
scientific models and empiricism is less direct than their role in
expressing open systems, object-based programming, and distributed
artificial intelligence. The identification of interactive problem
solving  with the scientific (empirical) modeling paradigm follows
from  the  correspondence  between interaction  and  observability
(interaction  from  the  point of  view  of  an  agent  or  server
corresponds to observability by an external client).

Interaction machines express processes of observation and  capture
the   intuitive  notion  of  empiricism.  Moreover,  the   logical
incompleteness of interaction machines corresponds to  descriptive
incompleteness   of   empirical  models.   Modeling   by   partial
description  of  interface behaviors is  normal  in  the  physical
sciences.  The  incompleteness of physical  models  is  forcefully
described by Plato in his parable of the cave, which asserts  that
humans are like dwellers in a cave who can observe only the
shadows  of reality on the walls of their cave but not the  actual
objects in the outside world.

Plato's pessimistic picture of empirical observation caused him to
deny  the validity of physical models and was responsible for  the
eclipse of empiricism for 2000 years. Modern empirical science  is
based  on the realization that partial descriptions (shadows)  are
sufficient  for  controlling, predicting,  and  understanding  the
objects  that  shadows represent. The representation  of  physical
phenomena by differential equations allows us to control, predict,
and even understand the phenomena represented without requiring  a
more  complete description of the phenomena. Similarly,  computing
systems  can be designed, controlled, and understood by interfaces
that  specify their desired behavior without completely accounting
for or describing their inner structure or all possible behavior.

Turing  machine models of computers correspond to Platonic  ideals
in  focusing  on  mathematical  tractability  at  the  expense  of
modeling accuracy. To realize logical completeness, they sacrifice
the  ability  to  model external interaction and  real  time.  The
extension  from Turing to interaction machines, and of  procedure-
oriented to object-based programming, is the computational  analog
of  the  liberation  of  the natural sciences  from  the  Platonic
worldview and the development of empirical science.

The correspondence of closed systems and philosophical rationalism
follows from Descartes' characterization of rationalism by "Cogito
ergo sum", which asserts that noninteractive thinking is the basis
for  existence and knowledge of the world. Interaction corresponds
precisely  to allowing internal computations (thinking  processes)
of agents (human or otherwise) to be influenced by observations of
an  external  environment. The correspondence between  rationalism
and empiricism and algorithmic and interactive computation is thus
quite  direct.  The demonstration that interaction  machines  have
richer behavior than Turing machines implies that empirical models
are  richer  than  rationalist models. Fuzzy questions  about  the
relation  between  rationalism  and  empiricism  can  be   crisply
formulated   and   settled  by  expressing  them   in   terms   of
computational models.

The equivalent expressive power of imperative (Turing machine) and
declarative   (first-order  logic)  models  lends  legitimacy   to
computer  science  as  a  robust  body  of  phenomena  with   many
equivalent models, including not only Turing machines but also the
predicate calculus and the lambda calculus. Church's thesis expresses the
prevailing  view  of  20th century formalists that  the  intuitive
notion  of  computing  is coextensive with the  formal  notion  of
functions computable by Turing machines. However, Church's  thesis
is  a  rationalist  illusion,  since Turing  machines  are  closed
systems  that shut out the world during the process of computation
and  can  very simply be extended to a richer intuitive notion  of
open,  interactive computation that more accurately expresses  the
interactive behavior of actual computers.

Declarative  and imperative systems compute results  from  axioms,
arguments  or  initial  inputs by rules  of  inference,  reduction
rules, or instructions. The interactive paradigm extends the
declarative and imperative paradigms by allowing initial
conditions to be distributed over time. This extension is analogous to
extending  differential  equation  techniques  from  one-point  to
distributed  boundary  conditions. In  a  distributed  interactive
system  we further extend initial conditions to distribution  over
both space and time. Distribution of initial conditions over space
is  familiar from multihead Turing machines and does not by itself
increase   expressive  power.  However,  distribution  over   time
increases expressive power:

declarative paradigm: initial conditions (axioms + theorem) +
                      rules of inference -> (yes + proof) or no
imperative paradigm:  initial value (precondition) and program
                      yields result (postcondition)
interactive paradigm: initial conditions distributed over space
                      and time, imperative or declarative rules
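
[To make the contrast concrete, here is a small Python sketch of my
own, not Wegner's; all names are illustrative. In the imperative
paradigm the whole input is a one-point initial condition; in the
interactive paradigm the environment supplies each input only after
observing the machine's current state, so the initial conditions
are spread over time.

    def imperative_run(program, precondition):
        """One-point initial condition: compute, then stop."""
        return program(precondition)

    def interactive_run(step, environment, state, rounds):
        """Initial conditions distributed over time: each input may
        depend on every output produced so far."""
        for _ in range(rounds):
            inp = environment(state)  # chosen after observing the state
            state = step(state, inp)  # inner imperative/declarative rule
        return state

    # A reactive environment: its next input depends on the machine's
    # visible state. In general the environment need not even be a
    # computable function known to the machine.
    final = interactive_run(
        step=lambda s, i: s + i,
        environment=lambda s: 1 if s % 2 == 0 else -2,
        state=0,
        rounds=10,
    )

-- M.D.]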

This  analysis suggests classifying paradigms by the  distribution
of  external  interactions over space and time. Differences  among
inner  rules  of  computation have no effect on expressive  power,
while  extension  from one-point to distributed  interaction  over
time increases expressive power. Interaction machines derive their
power  from their ability to harness the interactive power of  the
environment.  Their  expressive power is  comparable  to  that  of
managers who need not have inner ability to solve a problem  since
they can harness the problem-solving power of their employees.
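
[Wegner's manager analogy can be sketched in a few lines of Python;
this is my illustration, and interactive_sort and ask_environment
are hypothetical names. The machine below has no internal ability
to compare items, yet it sorts, because every comparison is
delegated to the environment through an interactive query.

    from functools import cmp_to_key

    def interactive_sort(items, ask_environment):
        """Sort with no inner comparison ability.

        ask_environment(a, b) is an input action answered outside the
        machine; like the manager, the machine merely harnesses the
        problem-solving power of its environment.
        """
        def order(a, b):
            return -1 if ask_environment(a, b) else 1
        return sorted(items, key=cmp_to_key(order))

With ask_environment = lambda a, b: input(f"{a} before {b}? ") == "y",
the resulting order is whatever the human at the terminal decides,
an outcome the machine could not have computed on its own. -- M.D.]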

[...]

-- 
Massimo Dentico