Arrows in steps
Brian Rice
water@tscnet.com
Tue, 24 Aug 1999 16:52:52 -0700
At 01:33 PM 8/24/99 -0400, you wrote:
>I just finished reading the Arrows paper in totality (finally). I'd
>have to say, the more I read, the more it seemed like the same thing I
>wished to create with UniOS, actually even before UniOS (but was
>obviously talked out of doing, and rightfully so, as I didn't have a clue
>how to make it happen). I have a few random comments, which would
>work well within your system, but unfortunately you're going to have to
>answer them without its assistance ;)
very cool. thanks.
>1) Practical application #1: Creating ontologies for every major
>processor, architecture, OS, and environment, along with basic
>programming theory and math (in that order). In this way you'd be able
>to analyse a bitstream (program) and have the system recompose it into
>another form. For example, taking a Windows program and making it KDE on
>some Unix variant (due to their similarities in capability). There would
>be huge demand for something of this nature, and it may even be a
>money-making opportunity. Or the concept of "dedicated servers", in which
>an entire OS environment with a single purpose (web serving, FTP, etc.)
>could be created for almost any modelable purpose. This would totally
>replace the need for "jack-of-all-trades" type OSes (NT, Unix).
>Companies usually only need certain capabilities, so why not implement
>them in the most efficient way possible for the given hardware?
well, that's quite a lot of work to do, but then there are many programmers
to be thrown around these days. the trick, of course, is to convince them to
throw themselves at your own tasks. my focus is more on ontologies
that provide generic frameworks, and on using those to develop ones specific
to a processor, etc. also, one big limitation on the ontology notion that
i suggest is that translating between various ontologies is very often not
computable, or simply infeasible. also, if the user requests a translation,
then the computing system needs to ask the right questions of the user to
construct the desired kind of translation.
>2) Big assumption #1: Any binaries the system creates to run (I assume
>it can't be interpreted all the time) will be fitted exactly to the
>system and its environment. If not, then I imagine this is something
>the system would excel at, and it should not be ruled out.
yes, this is very similar to the tunes idea of partial evaluation of code
in steps until actual threaded code is achieved, which can be run without
interpretive overhead or even kernel overhead (per se). the evaluations
would be specific to the current or desired environment and could be
dynamically modified.
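to make that endpoint concrete, here is a minimal C sketch of what the
threaded-code stage amounts to; the little stack machine and its operation
names are invented for this mail, not part of tunes or arrow. once an
expression has been partially evaluated against its environment, what
remains is a flat sequence of indirect calls, with no parser or dispatch
loop left to run:

#include <stdio.h>

typedef struct {
    double stack[16];
    int top;
} VM;

typedef void (*Op)(VM *);

/* constants already folded in by earlier evaluation steps */
static void push2(VM *vm) { vm->stack[vm->top++] = 2.0; }
static void push3(VM *vm) { vm->stack[vm->top++] = 3.0; }
static void add(VM *vm)   { vm->top--; vm->stack[vm->top - 1] += vm->stack[vm->top]; }
static void mul(VM *vm)   { vm->top--; vm->stack[vm->top - 1] *= vm->stack[vm->top]; }

int main(void) {
    /* "(2 + 3) * 3", already specialized to threaded code */
    Op code[] = { push2, push3, add, push3, mul };
    VM vm = { {0}, 0 };
    for (unsigned i = 0; i < sizeof code / sizeof code[0]; i++)
        code[i](&vm);
    printf("%g\n", vm.stack[0]);   /* prints 15 */
    return 0;
}

each further evaluation step would fold more of the environment into such a
sequence, which is where the dynamic modification mentioned above would
take place.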
>3) I'm not sure if user interface is really a valid point to bring up
>in the document. I think the system and its implementation(s) are
>totally separate issues (unless you were stating it for the purpose of
>showing that they are irrelevant?).
they aren't separate issues for me, since i am considering info systems as
self-sustaining entities, which means that the system's capabilities would
be reflective and that the implementation is a necessary part of explaining
what the system is (should be) useful for.
>4) I'm wondering what would be the drawback of implementing the whole
>system as a text-based storage system in which arrows could be
>represented by textual statements rather than binary operators, making
>the data files more readable, and giving the "alpha" system a static
>textual command-driven interface. You could implement the whole thing
>in a C'ish language and be able to start creating the data right away
>(including the model for the real system). In essence, I'm saying that
>maybe there is a way to implement a kludge system that could be used to
>create and manipulate the arrow frames (in whatever format you choose),
>so that you could get on to making and manipulating arrow frames rather
>than worrying about the chicken-and-egg implementation problem.
the simplest system would declare CONS cells for arrows (and chained CONSes
for multi-arrows). of course, nesting is a nice syntactic convention, but
in the arrow system it is an unnecessary (and undesired) restriction of the
potential name-space for arrows. so, the expression syntax would be "A = (B
C)", and since the system is reflective, the "=-application" is itself
also available as an arrow, just as the CONS cell is for the application of
a function to an argument. this concept is enough to model as much of the
system as can be finitely described.
basically, restricting CONS cells to point only at other CONS cells, as
well as casting all the elements of an arrow's textual specification as CONS
cells, is sufficient for now to encode arrow information.
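to illustrate, here is a small C sketch of that encoding; the struct layout
and every name in it are my own invention for this mail, not a fixed
format. atoms are cells whose fields point back at themselves, so every
field stays within cell space, and the "=-application" that defines A is
reified as just another cell:

#include <stdio.h>
#include <stdlib.h>

/* an arrow is just a pair of references to other arrows. */
typedef struct Cell {
    struct Cell *head;   /* car: the arrow's source */
    struct Cell *tail;   /* cdr: the arrow's target */
    const char  *label;  /* debugging aid only; arrows carry no names */
} Cell;

static Cell *cell(Cell *head, Cell *tail, const char *label) {
    Cell *c = malloc(sizeof *c);
    c->head = head ? head : c;   /* NULL means "point at yourself": an atom */
    c->tail = tail ? tail : c;
    c->label = label;
    return c;
}

int main(void) {
    Cell *B = cell(NULL, NULL, "B");   /* atoms */
    Cell *C = cell(NULL, NULL, "C");

    Cell *A = cell(B, C, "A");         /* A = (B C): an arrow from B to C */

    /* the "=-application" reified: the act of defining A is itself
       an arrow, so the system can talk about its own definitions. */
    Cell *defA = cell(A, cell(B, C, "(B C)"), "A = (B C)");

    /* a multi-arrow as chained CONSes: the sequence (B C A) */
    Cell *nil = cell(NULL, NULL, "nil");
    Cell *multi = cell(B, cell(C, cell(A, nil, "."), "."), "(B C A)");

    printf("%s: %s -> %s\n", A->label, A->head->label, A->tail->label);
    printf("definition arrow: %s\n", defA->label);
    printf("multi-arrow head: %s\n", multi->head->label);
    return 0;
}

note that nothing here ever leaves cell space: even the terminator of a
multi-arrow chain is an atom cell, which is what "restricting CONS cells to
point only at other CONS cells" buys you.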
the user/coder should always keep in mind the current ontology that they
desire to build. the system of course will eventually be capable of
analyzing such a development at a fine scale, able to describe the
intermediate states of ontologies (as they are built) as other ontologies.
all that is required of an ontology is that its meaning-providing elements
can be grouped together, which is relative to other ontologies (say,
requiring ontologies to be consistent systems of predicates within a logic).
one thing to add: arrows are epistemic constructs, and ontologies are built
from them. this is the philosophical view of the system's conceptual strategy.
>5) I imagine in a full-scale implementation of the system, there would
>be 3 distinct parts: the Arrow Knowledge Base, the Operating
>Environment, and the binaries that run. I assume this is the logical
>breakdown of how this system must work. The operating environment,
>ideally, would be modeled in Arrows, and would be (truly) portable across
>all systems that use the same basic interfaces.
here i assume that you refer to people gathering together and agreeing on
standards for encoding arrow information, except that instead of explicitly
declaring each arrow, the declarations assume some ontology. this of course
is good, but should be fluid and dynamic, to allow these interfaces to
adapt and evolve to new uses, etc.
>6) Big Assumption #2: You could model and do logical calculations on
>problems that aren't conceivable using the normal operators found within
>normal systems. Like infinite recursion as something useful... actually
>the whole concept of infinity, I guess.
well, the real benefit is not just about infinities, because ordinary logic
can do a lot of that. the real advantage is that the arrow system can talk
about such things in arbitrary ways (i.e. not limited to standard kinds of
predicate logic, etc.). but then, this is a great improvement over
computer languages, because they restrict expressions to those which are
algorithmic in nature. in arrow, you can encapsulate ideas which are not
algorithmic, but which may be calculable when described from another
perspective. it's this framework, inherently independent of concerns
for calculability, that allows the user to study relationships that other
systems would ignore, though they are useful (even applicable to
computational systems).
>7) Here's another idea that might have financial merit: The Arrows
>Knowledge base is not local to the machines, but only accessible through
>the internet, and essentially software publishers model their software
>and provide the essential data to create the binaries. However, the
>entire thing must draw from this internet Arrow-base. In essence you
>keep the whole of the knowledge to yourself and charge a fee for access
>(a one-time fee, hopefully). Actually I'm not sure about this one
>anymore, but I'll leave it here for you to see anyways.
well, i don't want to encourage centralization, because it is such a natural
(read: addictive) tendency of social groups, and it can go overboard. however,
your ideas are similar to the tunes metaprogramming concepts, and of course
arrow supports this in a certain way (which i intend to show is much more
general and much more potentially useful). i also intend this system to
promote information freedom in a way similar to the bazaar model. my intent
with this system is to provide unity for the space of information that
people create, so that ventures farther away from the status quo
would not be seen as dangerous. my hope is that this process will
actually promote a unified diversity of human interests, as well as
promote utility with respect to that scheme.
>8) The idea of modeled graphics and sound, along with algorithms, is
>great. I envisioned a Tetris-type game where the basic logic model
>existed, and all the graphics for the game were done in something like a
>POV-Ray-type language and the sounds something similar, and when
>installed it fits itself to the environment. Truly scalable programs...
>what a concept... For example, if you're running a PDA or a Game Boy
>type system, the graphics would be rendered (only when installed) in a
>greyscale-type format, in a very small block size, and the sound would
>be low in size and Hz. However, if the user had a screen capable of
>1800x1600x32 bits, and had a sound card that did 7000 simultaneous voices
>at 48,000Hz in AC-5 format, then the game would scale to fit that type
>of system. Of course the sounds would have to be simple (or, if predone,
>have to be shipped in a very good quality and scaled from there), and
>the same goes for the graphics; they'd have to be vector (2d or 3d) or,
>if bitmapped, must be in high-res to scale down from. Anyways, point
>is, modeled games = good, static games = good also because they can be
>converted as in point #1.
most of your comments fall under the notion of meta-programming, but i
believe that you intend more (as i do). the ontology concept allows high-
(or whatever-) level modelling of a system, and potentially the transformation
of those models into other models. with tunes, the high-level description
is "meta-programming", which suggests an implicit context of programming, a
problematic domain for information-sharing. in other words, the ontologies
for a meta-programming system would all address one paradigm: the
programming process. instead, i propose that this implicit multiplexing of
concepts through the programming paradigm is too restrictive, in that it
forces all declared constructs to be computable (processed by the machine
alone). the alternative is to make such multiplexing explicit (placing it
within a larger framework of information transitions).
>9) Multi-headed/tailed arrows... I know these are necessary; however,
>I'm not sure how this is going to affect garbage collection... or if
>garbage collection should even be done. Imagine a program written for a
>specific ontology, but then, due to garbage collection, the ontology
>gets canned because it's represented in another form somewhere else...
>cleans up the database but messes up the model. Is this even possible?
>10) Practical Application #2: Language barriers. This system could be
>used as a universal translator for human language, even from a voice
>sample. Geeze, can't see any practical application for that...hehe
shhh... :) (of course it will still take a lot of thought to put into a
framework for languages, but then i've been researching linguistics all
along. so, yes, i do have plans in that direction)
>11) Another thought: There is no argument that this system is just
>another system that "re-invents the wheel". It's not re-inventing; it's
>analysing the wheel, then representing it in a different format, and
>then mass-producing wheels of all conceivable varieties.
hehe... not a bad analogy. however, i'm not sure if it could be used to
describe the use of the epistemology-vs.-ontology idea and the notion of
relativism as it applies.
>Hmm... that's it for tonight I guess. Sorry it's so mangled; I was tired
>and a bit excited when I wrote it all. I'll have more later, or when you
>reply.
>
>Pat Wendorf
>beholder@unios.dhs.org
thanks for the feedback.