Ken Dickey on a new OS...
Francois-Rene Rideau
fare@tunes.org
Fri, 2 Jun 2000 03:55:15 +0200
Dear Ken,
sorry for this late reply.
> [restating the obvious..]
>
> I find that converging on and working toward a specific 'target'
> goal set clarifies a number of concrete details and
> exposes design problems. Starting from a 'taproot'
> system which can be generalized has an advantage in growing
> communities of users & developers from a working base.
Sure.
> E.g. pick an under $200 hw computing platform [...]
I fear that if we are to be an internet-distributed project,
our computing platform will have to be mostly PCs (and maybe PowerMacs),
and/or the UNIX C virtual machine. In any case -- yuck.
While this is a problem, it might still be effective,
since there's a large body of existing code for PCs
(OSkit, *BSD, Linux, lots of small stuff, including retro),
and I know several people working on OS infrastructure for PC and/or PMac
who are likely to open their sources.
> and work on the UI (from the top) and the TUNE (from the bottom).
Uh, what do you call the TUNE, here?
> Start with a dynamic system, develop/adapt a good IDE =>
> fast mutate/learn/update cycle [make mistakes faster and cheaper
> than anyone else]. The substrate should be stable enough early on
> to allow rapid learning and having a coherent target means that the
> momentum converges rather than going 'brownian'/random.
That's how we see things indeed.
>> The BIG problem is to build a usable initial core.
>
> I guess the questions I have are: "How does what you want differ from
> the closest already available approximation?" and "How can you make use
> of the user community which supports this?".
I fear that what we want differs a lot from the closest already available
approximation, and that there is no user community in which to tap directly.
Indeed, one of the major features we want is orthogonal persistence:
never having to explicitly reify, save, load, error-check, or intern
data (or worse, code) anymore; the system should manage that by default
(while leaving the possibility to handle it manually).
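Here's the contrast in Python terms (a rough sketch of my own;
the address-book example is just an illustration, not anything TUNES specifies):

    # What mainstream systems force on you: explicit persistence.
    import pickle

    def save_book(book, path):
        with open(path, "wb") as f:
            pickle.dump(book, f)       # reify + save by hand

    def load_book(path):
        with open(path, "rb") as f:
            return pickle.load(f)      # load + error-check by hand

    # Under orthogonal persistence, none of this code exists:
    # 'book' simply outlives the process, because the system
    # transparently checkpoints the whole object graph.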
Such a thing doesn't exist in the mainstream computing world;
it kind of exists in PDAs and handheld calculators;
it has been implemented on mainstream hardware (Eumel, PJama, PLOB!);
but only in closed ways that make it unsuitable both for talking
to existing services and for extending the system with customized services.
Orthogonal persistence is a pervasive feature that requires synergy
with the rest of the system; without the ambition to take over the whole
system and implement all its services, it is a vain feature.
But even then, orthogonal persistence isn't
the only feature we're interested in,
or we'd just take Texas or RScheme and be happy.
The key feature we want is dynamic reflection,
the ability of the system to dynamically inspect and reify its state
so as to reason on it, to analyse or instrument code,
to migrate code, make it persist, etc.; more generally,
so as to dynamically extend the system
in ways not necessarily designed in advance by the system implementers.
Coming with reflection is some kind of tight system integration,
where all system components can easily talk to each other
inside the same object system, without ever having to go through
ad-hoc parsers and unparsers, configuration files, command-line protocols,
etc., because you can just use the built-in system services, variable inspection, etc.
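A crude contrast, again in Python for concreteness (my sketch,
not any actual TUNES interface):

    # Ad-hoc style: components talk through parsed text.
    config_text = "timeout=30\nretries=5"
    fields = dict(line.split("=") for line in config_text.splitlines())
    timeout = int(fields["timeout"])   # parse, convert, hope for no typos

    # Integrated style: components share live, typed objects directly.
    class Config:
        def __init__(self, timeout, retries):
            self.timeout = timeout
            self.retries = retries

    cfg = Config(timeout=30, retries=5)  # no unparse/parse round-trip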
Maybe the nearest things ever implemented to do that were the LISP machines;
I can't say, I don't have a LispM (yet -- am on the verge of buying one).
Another greatly hackable computer was the HP28/HP48/HP49 series
of Reverse-Polish-Lisp-based handheld calculators.
(the latter had orthogonal persistence, too!)
Certainly Squeak also qualifies as a near target.
But with all of these, the user still depended
on system-inaccessible software:
the LISP machines depended on their $$$$ specialized hardware;
the HP calculators depended on their non-hackable ROM;
even Squeak depends on a C runtime that isn't Squeak-hackable,
although you can mostly save the Squeak image
and restart with a new runtime.
This matters inasmuch as the user isn't able
to fully manage the evolution of the system from within the system;
he cannot build e.g. total quality management as automatic internal tools,
or whole-system reasoning, or system-managed migration
to a new underlying hardware/software run-time platform.
Also, I'm not satisfied with the currently implemented models of reflection
as an ad-hoc feature, instead of as an instantiation of a more general ability
to serve as universal metasystem for arbitrary other computerized systems,
with respect to development, execution, manipulation, reasoning, etc.
This is of course particularly important for system evolution,
where the future system is not exactly the same as the current one,
and the ability to meta-control one is not the same
as the ability to meta-control the other,
so that an ad-hoc reflective loop can but fail
to provide both these features at once when needed.
All this is to say: I'm not sure what "nearest" to what we want to do means,
or whether it is meaningful at all. It looks to me like we're striving towards
some kind of infrastructure that just doesn't exist yet.
But maybe I'm just deluding myself.
> [Obviously, I do not yet have a concrete mental model
> of what you are proposing].
I'm not sure most of us have a concrete enough model either.
There are many details that escape me.
> I have been part of a number of development efforts/communities
> (e.g. in Scheme/Smalltalk/Dylan/...) which have been taking
> various fundamental approaches for a couple of decades now,
> which is why I am open to "radical rethink".
> However, I was trained as an engineer before getting into CS
> and I tend to reach for existing solutions,
> particularly those which have done significant work,
> have a research/developer community (injecting new ideas)
> as well as a user community (beating the ideas into shape
> and throwing out the ones that don't work).
> Again, my questions are "what am I trying to achieve?"
> and "how do I get there with the lease work/resource?".
> If I can leverage, what specific missing fundamentals are required
> to get ahead? What problems are there that need to be eliminated?
> [Do I really need to build from scratch?
> It is fun, but it is also a lot of work.
> What is the requirement which drives this "ground up" approach?].
> It _does_ take a long time to build a dog from amino acids..
On the other hand, providing emulation for existing systems
or translation from them has a constant (albeit large) cost
(i.e. one that doesn't increase with the number of ported applications).
So, considering the large base of free software,
we are not starting from scratch.
>> Hum. Would you be available in one year from now, when I find funding?
> Probably--if I am not already working on such a solution
> (perhaps in another context).
I'll contact you. If you start something,
I'm interested in hearing about it, too.
> I have looked through various docs (arrow, etc.)
I fear the arrow paper is not the right thing to read for concrete stuff.
Actually, there is currently no coherent documentation about our concrete
goals, only lots of information scattered around the various subprojects
and the mailing-list archive.
The only documents that are currently maintained are the FAQ and the Glossary.
The FAQ includes a list of concretely meaningful features that we want
TUNES to have, features that distinguish it from other systems:
a modular concurrent programming model based on a safe high-level language
(not unsafe C processes separated by a paranoid kernel);
orthogonal persistence (not explicit low-level file management);
software-based safety (hardware protection being only the last resort);
dynamic reflection (no state-lossy reboot ever needed
for either process or whole machine), etc.
> but my experience is that the more abstract things are,
> the more concrete the examples must be to illustrate what is going on.
> I tend to learn well from examples.
> Can you point me to more specific examples/docs
> which illustrate the higher-level reflective capabilities
> you are referring to? [I am familiar with computational
> reflection/reification and somewhat with "machine learning" technologies
> but less familiar with specific AI ontologies which are computationally
> tractable/efficient with small resource consumption.
> I'm a bit out of date w.r.t. the ai research literature.].
Unhappily, no.
The Interfaces/ and Migration/ subprojects show ramblings
about simple intended uses of reflection.
More ramblings are scattered in web-archived mailing-lists.
Note that we do not directly aim at complex AI technology,
at least not at first; our first aim is the whole-system-reflection
infrastructure that an AI can later put to good use.
>> The ability for the user to dynamically define or select new
>> low-level implementation strategies is thus essential to achieve
>> a universal system, one that can _express_ solutions to all problems.
>
> It only needs to express problems that most people are interested in. ;^)
>
I disagree: if it is not a UNIVERSAL system, able to express all problems,
then any success it has may only push the advent of such a universal system
further away, not bring it nearer.
Indeed, whatever feature you make inexpressible in your system,
someday someone somewhere will make a discovery
that might be useful to everyone at large,
but that depends on that inexpressible feature to work reliably,
or at all, without rewriting everything from scratch.
For instance, consider process persistence or migration;
if your programming language doesn't support it,
it's hell to do it manually; alternatively, on some hardware,
you could hack your operating system to do it transparently,
except that it still won't be reliable in the presence of file-reopening failures,
since your language has no way to catch and handle such events.
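Python makes this failure mode concrete (a sketch; the path is arbitrary):

    import pickle

    state = {"counter": 42, "log": open("/tmp/app.log", "w")}
    try:
        pickle.dumps(state)            # try to persist the process state
    except TypeError as err:
        print("cannot persist:", err)  # the open file handle defeats it

The language gives you no hook to reify what the handle means
(path, mode, offset) and re-establish it on restore,
let alone to catch a reopening failure at that point.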
Another feature you may consider is capability-based security;
if your system can't express capabilities,
if they have to be implemented manually using user-level mechanisms,
then your security mechanism is purely advisory,
and the first non-compliant or buggy program to come along can break it.
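In object-capability terms, a capability is just an unforgeable reference.
A Python sketch (names mine) shows both the idea and why a user-level
version of it remains advisory:

    class ReadCap:
        """Holding this object is meant to be the only way to read."""
        def __init__(self, f):
            self._f = f
        def read(self):
            return self._f.read()

    cap = ReadCap(open("/etc/hostname"))
    print(cap.read())   # the intended, capability-mediated access
    # Nothing enforces the discipline: any holder can reach around
    # the facade via cap._f and write to or close the file at will.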
Note that the ability to dynamically retro-instrument running code
in a way coherent with modifications to the original high-level source code
(i.e. dynamic compile-time reflection)
DOES make it possible to express all such features;
I believe it does provide for a universal system.
But it's kind of like the assembly-level of universal systems,
upon which you have to build useful abstractions.
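Monkey-patching in Python gives a faint flavor of that assembly level:
you can retro-instrument a live binding without restarting anything,
though with none of the coherence or safety described above (my sketch):

    import math

    _orig_sqrt = math.sqrt
    def traced_sqrt(x):
        print("sqrt called with", x)   # instrumentation added at runtime
        return _orig_sqrt(x)
    math.sqrt = traced_sqrt            # later callers get the traced version

    print(math.sqrt(2.0))              # traced from now on, no reboot needed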
> Again, I am most interested in meaningful, useful solutions
> for ordinary people.
So am I; but I'm convinced that better software infrastructure is
instrumental in enabling end-user solutions that are currently out-of-reach.
[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
If debugging is the process of removing bugs,
then programming must be the process of putting them in.
-- Dijkstra