PIOS Update (Sorry about the delay)

Mike Prince mprince@crl.com
Thu, 27 Oct 1994 19:04:08 -0700 (PDT)


This message is being sent to four individuals: Francois-Rene Rideau,
Jeremy Casas, Andy Thornton, and Luther Stephens.  Thank you for
expressing interest in investing time in this project.  If all goes well, we
should have sample code running by Christmas, and a basic development
environment by February.

By then our ranks will be larger.  I have had numerous requests for more
information, and believe there will be many more interested in helping to
develop PIOS once we get the ball rolling.  But for now it's just us five.
Give what time you can, be open to new ideas, and have faith that we can
resolve our differences and pull together to produce an operating
system like none other.

In my descriptions of the project I have only scratched the surface of
what I would like to do.  PIOS will usher in the age of organic
computing, in which computers behave as organisms.  It will provide a
foundation upon which OS research can be done easily.  It will be the
common denominator for all electronic devices, enabling them to
exchange information.  Lofty goals, yes, but I know we can do it.

For now let's call it the PIOS Project.  If there is strong belief we should
change the name, I am open to suggestions.  I will set up a mail
reflector as soon as I can to provide a more convenient forum.  In the
meantime we should be able to set up mailing lists of each other.

Please excuse any mistakes or omissions I have made.  I am sorry for
this being mailed out a day late.  On Wednesday I will be mailing out a
more complete picture of what I would like to accomplish with the low-level
side of the project.  This should begin to steer us in the direction
of the intermediate language and virtual machine.

Hope to hear from you soon,


Based on the responses I received, we will be organized as a research
group.  In the event that the project ever generates revenue, equity will
be distributed during the course of the project based on published
estimates of the number of hours team members are contributing.  All
coding will become the property of the project and will be distributed
under the Free Software Foundation's GPL.

I think the OS should be open, so that wide-spread Internet support
is possible.  Misfeatures and bugs become much easier to correct this way,
and the OS will be able to spread like Linux.
Then we can add a clause to the standard license, so that people who make
money out of the system must pay royalties to the authors.  The killer-apps
can be developed under non-disclosure agreements if you see the need.

In order to get any type of wide acceptance I would suggest that the
kernel be placed under the GNU copyleft type of agreement, and any
particular tools could perhaps be marketed.

My opinion is that whereas the killer-apps and migration modules and such
should be commercial/shareware, the "micro-kernel" (that is, enough code to
run the OS on a single-CPU system) itself should be released under the GPL,
so that the system can spread across the net.
Such official net support *is* important (see the Linux community).
That didn't prevent Linus from earning money directly and indirectly from his
work.  But it helped the Linux community multiply across the world.  And I
know of no OS with such support.
And if our killer-app is as good as we say, it deserves commercial
distribution above the GPL'ed OS.  We can also put a condition in the
license that a GPL'ed version of the OS cannot be used in a commercial
environment, which will force companies to buy OS licenses, whereas people
can use it freely (but are welcome to pay).
In any case, don't expect to earn much money from this project.  If you
do, all the better.  And if you don't, I'd rather not be paid for a
successful thing than not be paid for a forgotten project.

But remember any project deeply *needs* a referee, and won't
go faster than the referee.  So after all the discussion has been done, a
fast decision must come (even if it is to be modified later, should *new*
elements come up).  The MOOSE project once died because it had neither
referee nor voting assembly, just raw discussion.

Basically, you get the best results if you decentralize decisions so that
each subject has its own maintainer.  But in case of conflict, you need a
referee (say the subject maintainer, or ultimately you).  If we are to discuss
and vote, then it must be quick; that's why we need regular meetings whenever
common subjects are discussed.  TALK/IRC sessions are welcome.  Mail is viable
only if participants reply very quickly.
(On IRC, I'm Fare when connected.  Please send me a message if you
see me.)

I will act as the referee for now, until December 1st, when we will split
into two teams and a second person will become the referee of the
second design team.  I am playing with the idea that an unchallenged
idea becomes law after 3 days.  If an idea is challenged then the debate
continues until the referee stops it, or until 3 days elapse without
challenge (last one to talk wins).  No filibusters will be tolerated, only
thoughtful critique.  All outcomes must be acknowledged by the referee.
The 3-day limit may be waived if 2/3 of the people vote to push it through.

We need people who participate *regularly* and reply quickly.
The one-week latency between message and answer killed MOOSE.

I would like to finalize the first two items on our agenda by 10/31/94.
     Structure of organization
     Enumerate goals, finalize mission statement (very important!)
Please mail your suggestions to me by 10/29, so they can be
integrated into the documents which will act as guidelines for our project.
On 11/1 I would like our debate to begin on the virtual machine on
which our intermediate language will be focused.  I would like to have
the virtual machine and intermediate language specifications done
by 12/1.  I will mail out an agenda for November on 11/1.  If you
have any suggestions please mail them at least two days before.

By December 2 we should split into two teams.  One team will be
responsible for the low-level porting of the kernel to different platforms,
and for deciding which language to implement it in.
The second team will begin developing the tool boxes to enable the
kernels to do something useful: engineering the development
environment, maybe including an HLL-to-intermediate compiler, a user
interface, etc.  We need to discuss this.

FRANCOIS-RENE {development schedule}
Oh yes, we need one.  MOOSE had no schedule and it died because nothing was
done.  Typically, we should have regular meetings and a schedule for the
next meeting; if mail is quick enough, we can manage with that.  But TALK/IRC
sessions are better if we are to vote in real-time after discussions.  Also,
we should write good English in mails, as poor English slows the process;
we can use symbolic notation where it helps.
If you know IRC, we can have an IRC channel with an IRC bot to keep it open.

Mission Statement
We are here to design a microkernel-based operating system to serve
as the basis for the next generation of higher-level operating
systems and consumer electronics.  The OS is to work seamlessly in a
dynamic, heterogeneous, wide area network.  It should execute on the
largest range of platforms possible, from household appliances to
supercomputers.  The OS will be the common denominator of all
systems.  No backward compatibility will be designed in, so as to allow the
highest degree of design freedom.

Design Goals

Application-Centric Paradigm
All commercial OS's are based on the machine-centric paradigm.  In this
model, an application is based on one computer and communicates with other
applications on the same machine or others.  An application is grounded
to the machine it was initially executed on.  Because of this perspective,
software engineers have had a great deal of latitude in typecasting
their applications and OS's to one platform, thus reducing their
portability.

An application-centric OS allows applications, during execution, to be
migrated to different machines.  Applications engineers can no longer
assume certain static attributes of their platforms.  The environment
becomes a dynamic one.  The job of the OS is to allow freedom of the
application.

FRANCOIS-RENE {Application-Centric Paradigm}
I'd say a low-level resource-centric paradigm.

Once again, I think that a truly portable, efficient OS is neither
application- nor machine-centric.  It is really both and neither.  A truly
portable, efficient OS abstracts the concepts of both and connects them.
For instance, to applications the OS provides a consistent view of all
machines regardless of the hardware.  There is storage, process capacity,
and some set of devices (this is regardless of whether the underlying
hardware is one "machine" or many).  To the machine the OS provides a
consistent view of the applications: machines respond to a known set of
application requests.
I think that this is really what you are trying to imply; however if you
really want to take an application-centric view, you will eventually run
into the same walls as machine-centric views do...

I'd say a high-level OS.  Do not define bitwise behavior as under Unix/C.
Just define high-level protocols, and an abstract intermediate-level
language.  Moreover, we'll still need machine-centric layers.  Only you'll
address them only if you really need to (i.e. play music on a host, and not
another one ten miles away :)

Logical Configuration (Virtual Machine)
The largest object is the tool box.  A tool box is composed of an array of
arbitrarily sized data and code segments and a number of agents.  Code
segments are viewed as tools.  Data segments are viewed as stacks, however
data within the stacks can also be accessed directly.  Agents are the
execution primitives.  Agents execute tools, and have access to the global
tool box stacks, plus their own private array of local stacks.  Agents can
move from tool box to tool box carrying their local stacks.
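As a concrete sketch of this model (entirely hypothetical -- the names
Stack, Tool, ToolBox and Agent below are my own invention, not a spec),
the structures might look like this in C:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch only: these names are invented for
   illustration, not part of any PIOS specification. */

typedef struct {
    unsigned char *base;   /* data segment, grows from the bottom */
    size_t size;           /* currently allocated size */
    size_t sp;             /* stack pointer; data below sp is addressable */
} Stack;

typedef struct {
    const char *name;          /* a code segment viewed as a tool */
    long (*entry)(Stack *);    /* executed against an agent's stack */
} Tool;

typedef struct {
    Tool  *tools;   int ntools;    /* array of code segments */
    Stack *globals; int nglobals;  /* global tool box stacks */
} ToolBox;

typedef struct {
    Stack *locals; int nlocals;  /* private stacks, carried on migration */
    ToolBox *where;              /* tool box the agent currently occupies */
} Agent;

/* Push/pop on a stack; data within a stack can also be read directly. */
static void push(Stack *s, unsigned char v)
{
    if (s->sp == s->size) {                    /* overflow: expand */
        s->size = s->size ? s->size * 2 : 16;
        s->base = realloc(s->base, s->size);
    }
    s->base[s->sp++] = v;
}

static unsigned char pop(Stack *s)
{
    return s->base[--s->sp];
}
```

An agent moving from tool box to tool box would just reassign `where`,
carrying its `locals` along.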

Why introduce arbitrary differences between objects?  Let the system manage
a unified set of entities, called objects, agents, frames, patterns,
functions, tool boxes, shit, or whatever you like (though "object" seems the
most natural).  If the user language introduces differentiation between
objects, let it do so.  But this is no OS requirement.  What we want the OS
to do is provide security protocols, including for type-checking within
and between various languages.

FRANCOIS-RENE {Logical Configuration}
This seems a bit complicated and low-level.  Can't we use unifying semantics
a la SELF (everything is message-passing), BETA (everything is a pattern),
or STAPLE (everything is a function)?
Let's only have arbitrarily typed objects with a global GC as a basis.

Physical Configuration
Physically, a tool box resides within one working space, and is usually
serviced by one CPU.  Normally a working space contains several tool
boxes.

Implementing the OS in a portable fashion on top of foreign OSes is exactly
what should be done.  But don't expect the version that runs on top of OS X
to run faster than raw OS X itself!

Tool Box Migration
At the discretion of the OS, a tool box may be migrated to another working
space.  At such time all constituent parts of the tool box (tools, global
stacks, resident agents) are bundled up and moved to the new working space.
At the new working space the tool box is unbundled and all tools are either
re-compiled, or deemed better interpreted for overall system performance.
The intermediate code is retained in case another move is warranted.

Moving big objects is not always beneficial.  Allowing small objects to
migrate seems a better policy to me: when using the "archie" equivalent, the
search process is migrated, but not the human interface.
Ok.  But again, I prefer having objects as light-weight as possible to
reduce object manipulation overhead.

That's no problem.  What's more problematic is a heuristic to determine the
cost of migration.  We also need a secure system-wide (which may mean
world-wide) object identification protocol.

FRANCOIS-RENE {Design Goals Overview}
1.  The smaller the objects, the easier the migration.
2.  For read-only objects, it may be better not to *migrate* the object, but
    to propagate copies.
3.  Now, what if there is a net split?  Will modifying the
    object be made impossible?  Then you must copy the object and
    maintain your own copy.  But then, what happens when the net is whole
    again?  How do we merge changes?
There can be several object-dependent (object-parametrizable?) policies.

Distributed processing has a lot of implications and hardships.  Until
some of the communications standards see the "realizable light of day",
I don't think that Wide Area Network processing is all that effective.
Take for instance WWW (the World Wide Web).  Even the best sites' response
times are on the order of seconds (far too slow for OS activity).  Until the
ATM and optical backbones are solidly in place, LAN-type processing
is the most distributed level that you can work at (and high-speed LANs
only function at 100Mb/s, still kinda slow).

The user won't see it.  Every address external to the moved module will be
converted to an absolute system-wide address before being sent.
Migration cost and decision are computed as part of a local address
space's garbage collection, which in turn is called when scheduling detects
suboptimal system performance.
Migrating is itself a particular case of saving and restoring objects.
It's just restoring the object as soon as possible on another machine!
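The "migration = save/restore" idea can be illustrated with a toy
byte-level sketch; the length-prefixed wire format here is purely my
invention for illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy "migration is save/restore": bundle an object into a flat
   buffer, ship the buffer (to a file or over the net), and unbundle
   it at the new working space. */

typedef struct {
    size_t len;
    unsigned char *data;
} Object;

/* Serialize: write the length, then the raw bytes.
   Returns the number of bytes written into buf. */
static size_t bundle(const Object *o, unsigned char *buf)
{
    memcpy(buf, &o->len, sizeof o->len);
    memcpy(buf + sizeof o->len, o->data, o->len);
    return sizeof o->len + o->len;
}

/* Deserialize: the inverse of bundle, allocating a fresh copy. */
static Object unbundle(const unsigned char *buf)
{
    Object o;
    memcpy(&o.len, buf, sizeof o.len);
    o.data = malloc(o.len);
    memcpy(o.data, buf + sizeof o.len, o.len);
    return o;
}
```

The same buffer works for a checkpoint on disk or a migration over the
net -- only the destination differs.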

Resource Management
Tool boxes can be viewed as resources.  Each tool box is named and all
inter-tool box communications are vectored according to that name.  Names
may be shared by tool boxes, in the case of libraries for which there may be
instances in several working spaces.  All services provided by tool boxes
must, by default, be stateless.

Inter and Intra Tool Box Communications
Agents carry all data in their local stacks.  Typically, parameters will
be pushed onto a stack before a call, and popped off during the call.
The actual parameter passing format is up to the application.

FRANCOIS-RENE {Inter and Intra Tool Box Communications}
That's fine.  But why force using stacks?  Some languages have no stack
(see ML or LISP implementations), and just use a heap.  Let's not specify
internal behavior.  The only requirement is: objects must be able to migrate
in some way, whatever it is.  Being able to migrate is the same as being able
to be saved/restored to/from a file (the file just being transmitted over
the net in case of migration).

All communications (agents) are vectored through a gate upon entry to a
tool box.  The entry code can choose to do minimal checking to optimize for
speed, or extensive checking to maximize security.  This checking is not
a function of the operating system, but instead of individual tool boxes.

To allow some security, we must also provide a regular or permanent logging
process which will ensure that all system changes are written to persistent
memory (that survives power failure).  See Grasshopper for that.

Yes, let's have a compiler-based security system.  All binary code *must*
be secure.  When it comes to migrating it, let's have some optional PGP
signature system to ensure that code comes from a trusted compiler or
source.

 1) Require super-user rights to validate any low-level code before
    execution.
 2) Use the policy: "if the object is addressable, it's usable".
 3) Use run-time optimization (i.e. partial evaluation) a la SELF to achieve
    good performance even with "object filters" that allow only partial
    access to an object.
 4) Now, as for security, what to do when hosts do *NOT* completely
    trust each other?  In a WAN, that's especially critical.  The
    answer is: in the WAN, all machines are not equal; each machine has
    levels of trust for other hosts (including distrust due to net link
    quality) which will decide it *NOT* to migrate an object.

Intermediate Language
All code is distributed in an intermediate language, below human programming
languages but high enough to be applied to a wide range of microprocessors.
It will be simple to interpret, and quick to compile down to binary for
speed-intensive applications.  It is expected that human-oriented programming
languages will be layered on top of this intermediate language.  I would like
to do an implementation of C(++) for the intermediate language.  Though
C would not effectively utilize the attributes of the OS, it would satisfy
the short-term complaints of those still bound to C.
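To make "simple to interpret" concrete, here is a toy stack byte-code
interpreter in C.  The opcodes are invented for illustration only, not a
proposal for the actual instruction set:

```c
#include <assert.h>

/* Toy intermediate language: a stack machine with invented opcodes.
   A real instruction set would need many more operations. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

static long run(const unsigned char *code)
{
    long stack[64];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;  /* push literal */
        case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp-1] *= stack[sp]; break;
        case OP_HALT: return stack[sp-1];               /* result on top */
        }
    }
}
```

Code like this is trivial to interpret, and each opcode also maps to a
handful of native instructions when compiled down to binary for
speed-intensive applications.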

An application must rebuild everything but low-level I/O from scratch.
This is just unbearable.  Also, persistent storage and human interface are to
be rebuilt every time, which is 95% of current application programming,
whereas all that stuff should go in generic OS modules.

I'm myself considering writing a really light & powerful (unlike Unix)
OS using (some non-ANSI) dialect of FORTH as a portable low-level language.
My idea of the specs for the OS seems to match yours (I'd add persistence as
an important characteristic for the system; also dynamic strong typing,
lazy partial evaluation, and garbage collection).

Because of this low-level-ness, programmers must manage raw binary files
instead of well-typed objects.  Hence they *must* use typecasting.  All this
is *very* unsafe, and implies a *slow* OS with run-time checking everywhere,
without ensuring security for all that.

SELF is an OO language that already allows much of what PIOS needs.
BETA also is an OO language that has persistence and distributed objects
over Unix (though the current implementation is quite trivial).
(BTW, C++ and Objective C are definitely *NOT* OO languages; they are just
bullshit to me.)  What I mean is the OS should include type-checking
protocols, conventions, or mechanisms, or else system security and
consistency cannot be ensured.  Allowing the most generic type-checking
(so that any language can be implemented using such a protocol) leads us
to something...

- The "intermediate" language you mentioned should be as close as possible
to the machine code that will eventually implement it, to make for fast
translation times.  I would suggest a "virtual cpu" approach where a
RISC-style cpu is used as a basis.  This would also give the advantage that
the gnu tools could be used for development -- all we need do is make a
machine description file for our virtual cpu and make a c/c++ compiler
for it using gcc.

Some kind of FORTH or byte-code is good.  See the byte-code interpreter from
CAML-light.  I've always wanted such a beast.

Well, not for non-hacker humans.  But we will still have to manipulate it.
And if we do, other hackers may want to use it too.  Only they'll have to be
superuser on the system.

I have a different idea about strong dynamic typing.  The basic OS
should not concern itself with the content of objects.  This would make
the OS very small, and very versatile.  On top of the OS, as part of the
applications, ANY level of typing could be used.  I want the OS to be
able to serve as many camps as possible.  This way the OS will be a standard
building block others can use to explore different programming languages.

That's also how I see it.
But allowing any kind of object is definitely *NOT* the same as enforcing
low-levelness and disabling any kind of typing (as is the case under UNIX,
where untyped objects are used).
To me it means being able to parametrize declared objects by their type,
itself parametrized by the type universe, etc.  A very basic (but powerful)
system will provide means to implement any type system inside it; it should
also allow explicit mapping to a set of low-level constructors.

In order to steer programmers in the direction of dynamic typing I have
contemplated a data encapsulation language to enclose parameters being
passed between objects.  Objects could use a library of extraction
functions to pull parameters.  Just an idea though.  And it would not be
part of the low-level OS, but a higher level that can be elected for use.

That's about what I mean.  Again, in no case should an object be called with
improper parameters (or worse even -- parameters that would make a process
or the system crash).

Automatic Stack Contraction/Expansion
I'm not sure about this one yet.  Each stack (tool box or agent)
grows from the bottom, and is used like a stack.  It has a profile
(max size, grow/shrink increment, and stack pointer).  Data can be pushed/
popped from the stack, or accessed arbitrarily inside the range below the SP.
When the stack overflows, the segment is automatically expanded.  When
memory resources in the workspace run low, garbage collection can commence
by contracting stacks that are under-utilized (stack top - SP > shrink
increment).  I believe this might save space, by applications using
only what they need, and by bundling the memory allocation code in the
kernel, which might otherwise have many instances in the application code.
What do you think?
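A minimal sketch of the profile-driven grow/shrink rule described above
(field names, and using realloc to stand in for segment management, are
my assumptions):

```c
#include <assert.h>
#include <stdlib.h>

/* Each stack segment carries a profile: max size, grow/shrink
   increment, and stack pointer.  Names here are invented. */
typedef struct {
    unsigned char *base;
    size_t top;   /* current segment size ("stack top") */
    size_t max;   /* profile: maximum size */
    size_t inc;   /* profile: grow/shrink increment */
    size_t sp;    /* stack pointer */
} Seg;

/* On overflow, the segment is automatically expanded by the
   increment, up to the profile's maximum. */
static int seg_push(Seg *s, unsigned char v)
{
    if (s->sp == s->top) {
        if (s->top + s->inc > s->max)
            return -1;                     /* profile exhausted */
        s->top += s->inc;
        s->base = realloc(s->base, s->top);
    }
    s->base[s->sp++] = v;
    return 0;
}

/* When workspace memory runs low, contract under-utilized stacks
   (those where stack top - SP > shrink increment). */
static void seg_shrink(Seg *s)
{
    while (s->top - s->sp > s->inc)
        s->top -= s->inc;
    if (s->top)
        s->base = realloc(s->base, s->top);
}
```

Bundling this logic in the kernel is exactly the space saving argued for
above: applications stop carrying their own allocator copies.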

FRANCOIS-RENE {Automatic Stack Contraction/Expansion}
I love stacks.  But why have them as OS primitives?  To me, let the OS
handle arbitrary objects, and have stacks in what resembles the standard
library.  Let's have an ever-running garbage collecting system and use it
as a criterion for migration.

What is that segment stuff?
Let's not specify implementation behavior, but high-level responses from
the system.

: > Semaphores (and mutexes) are optimized for the case of serial
: > processing.  Messages are optimized for the case of parallel
: > processing.
: >
: > Imagine a semaphore that works in a system with 200 machines --
: > now imagine each of these machines trying to work with a (usually
: > independent) semaphore.

: Actually PIOS will be kind of a hybrid: the blocking semaphores are
: used for synchronization of the agents.  Agents are like messages
: and carry the data between and within machines.

So it sounds like you're using semaphores at specific machines, and
message passing between machines.  Further, it sounds like you're
trying to structure the code so that transactions can be represented
by an agent (which I presume is a short program that indicates (a)
which [sorts of] machines to visit, and (b) what to do at each of
those machines).  Hmm.. I don't see any obvious flaws with this plan.

I am wondering what you plan on doing to avoid/deal with agent
lossage (e.g. something goes wrong after an agent has completed part
of its work).

Here's an artificial problem:

Let's say you build an agent to do something significant (say, buy a
plane ticket).  It goes out, gets part way through the process, then
dies.  Now, eventually you notice that the agent hasn't completed its
work.  So, you need some mechanism to sanely pick up where it left off.

Now, maybe no one ever uses this OS for an airline reservation system.
And maybe it's decided that this class of problem isn't appropriate for
the OS to attempt to solve.  But every application which operates in
a distributed fashion is going to have a similar aspect.

So, there has to be a part of the system which keeps track of all
agents which have been issued (presumably this will be any site that
creates an agent) and a timeout mechanism for agents that don't
"complete on schedule".  Furthermore, there need to be tools to debug
the process -- both in the manual sense and in the sense of automatic
recovery.
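The issuing-site bookkeeping suggested above could start as simply as
this sketch (the record layout and deadline policy are my assumptions):

```c
#include <assert.h>
#include <time.h>

/* Every issued agent is recorded with a deadline; a periodic sweep
   reports agents that did not "complete on schedule", so recovery
   (manual or automatic) can pick up where they left off. */
typedef struct {
    int id;            /* system-wide agent identifier */
    time_t deadline;   /* expected completion time */
    int done;          /* set when the agent reports back */
} AgentRecord;

/* Collect ids of overdue agents; returns how many were found. */
static int overdue(const AgentRecord *a, int n, time_t now, int *out)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        if (!a[i].done && now > a[i].deadline)
            out[k++] = a[i].id;
    return k;
}
```

The hard part, of course, is not detecting the overdue agent but
deciding what compensating action to take, which is application policy.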

I think that a lot of the interest in "functional languages" and the like
stems from a need to deal with this class of problem.

Specialized Platforms
For truly speed-intensive applications, the actual application code (tool
box) would be bundled with the PIOS microkernel and coded in the native
machine language.  The tool box would be tied to the machine to prevent
migration, or an additional copy of the tool box (the intermediate language
version) could be migrated.

FRANCOIS-RENE {Specialized Platforms}
Again, in a distributed system, some kind of signature must come with any
low-level code, which may be checked to verify that any binary code comes
from a trusted source.  Objects could come with their "equivalents" under
such or such assumption; then when migration cost is computed, matching
equivalents are taken into account.

They sound the same.  I'd interpret persistence as the ability of the
system to remain running (i.e. no bugs crashing the system) for long
periods of time.  I would add to that resilience: the ability to survive
the failure of several CPU's which are working on part of an application.

More than that, I'd say persistence means being able to survive
indefinitely, permanently, even beyond power-down (on systems that don't
have permanent memory, this means that the complete system state must be
regularly logged to permanent media, say a hard disk).

The UDD (Universal Distributed Database) will arise one day!

That's good: demonstrate that we offer not only power, but speed
at low cost.  But convincing the Fortune 1000 is harder than that: you
must provide something that I can't give myself -- maintenance over years.
You need some large organization for that.

Optimization of Resources
Tool boxes should include a benchmark tool, which could be compiled
on a number of different machines to determine which has the best fit
of architecture to problem.  This benchmarking can take place just before
initial execution, or during a re-optimization of resources during
execution.  Taking this measure, plus that of available network bandwidth,
estimated communications demands, etc., the tool box could be placed in the
optimal workspace.  Notice that we are entering into the territory of a
priori knowledge of application demands.

My opinion is that compile-on-demand and optimize-on-use is the best policy,
one that adapts to the users' needs.  See SELF about that.
I think we need some kind of persistent system with lazy evaluation.

Anything you run on top of someone else's (OS or whatever) will be bounded
by their capabilities.  I don't think that distributing it across a bunch
of someone else's boxes will give you that much more capability.  There are
always bounds (communication link bounds, their bounds, device bounds...).

No File System?!
I don't believe in file systems (maybe I'll change my mind).  In any case,
I'd like tool boxes to behave like organic systems: going to sleep
on persistent computers when not in use, and being brought back to the fast
non-persistent computers when being utilized.  What is a persistent
computer?  A hard drive with a very dinky CPU could be viewed as a slow,
slow computer that is persistent, with a very large memory.  Using the same
algorithm for optimizing the distribution of tool boxes, the less-used ones
would naturally migrate towards the hard-drive-based work spaces when not
in use.  I look forward to the day when all computers have soft power
switches; ask the computer to turn off, it moves the tool boxes to
persistent storage, and then turns the power supply off.

We still need FSes to communicate with other OSes and import/export data,
though I agree they are not a system primitive.

Design Goals Overview
The migration of processes (tool boxes and agents) during run-time in a
    dynamic heterogeneous environment

This will come later. Concentrate on the power.

Small minimalistic microkernel (10-20K)

Why do we need a micro-kernel at all?
We need objects, including memory managers and intermediate-code
interpreters/compilers, but no microkernel.  Only conventions.
Nothing in a kernel proper.  Everything is loadable/unloadable modules.
But there are various moving conventions.

A high-level OS will hide all the low-level concerns:
persistent storage to contain data, machine chosen to run code, low-level
encoding of objects, object implementation, human interface code (will be
semi-automatically generated).

I'd prefer a no-kernel OS.  Why do we need a kernel at all?  Let's have a
decentralized system.  Of course we locally have conventions, but "locally"
means any convention can be replaced by a better one some day, and
the part of the OS independent of the convention will still be valid.  This
means programming all that in a *generic* language.

To me the kernel is 0K.
At the boot process, we have a boot loader, which loads modules using some
dumb convention.  A second-level or nth-level loader(s) can change the
convention to anything.  But basically, we must think in terms of objects
that depend on one another, each using its convention, and calling other
objects through a proper filter (itself a needed object).
There's no need for a centralized kernel.  What we need is good
conventions.  Each convention/protocol will have its own "kernel"/object.
The only requirement is that the system is well founded upon the hardware or
underlying OS.

Application centric
Allow for any level of security, based on application needs
Parallel processing intermediate language
    not for human consumption
Organic System
    "HDD as slow persistent computer" storage (instead of file based)
    No file system?!
    development as an interactive process (FORTH like)
Implement initial design on top of existing OS's.
    (distributed file system as improvised network?  Or jump straight
    in and do a TCP/IP implementation?)
All original coding!!  No copies of others' work (for legal reasons)

There are already lots of systems whose terminology we can reuse.  But yes,
an official glossary is needed, and when we communicate, we must have a
one-to-one word -> meaning mapping.  Thus we also need a referee for words.
Again, I suggest that the subject maintainer ultimately decides (after a
vote), while the global referee (you) will end discussions if there still
are any.  The glossary is of course modifiable, if *new* arguments come for
a new definition.

There are a lot of legal and security issues related to embedding in
embedded OS's (such as microwaves).  What happens if your OS goes insane
and has the microwave turn on with the door open and kills someone?  The
idea of embedded productive OS's is one that will come, but there are still
a lot of technical issues that have to be resolved by the manufacturers.

Microkernel (10-20K)
    Processor (application, when run on top of an OS) initialization
    Creation, Bundling and Unbundling for transport, Destruction of
        Tool boxes
    Interpreter (in kernel for speed)
    Compiler (in kernel for security?)
    Automatic segment expansion/contraction control?
    Agent synchronization

OS?
Clearly C is not an HLL.
We need a language that works well with persistence, concurrency, etc.
BETA, SELF, or STAPLE may be such languages/systems.

No kernel at all!  Only moving conventions.

Anything that produces low-level code must be trusted (i.e. run in
supervisor mode), which does not mean it belongs to the "kernel", even if
almost all hosts will have one.

Tool Boxes
    Inter-microkernel packet communications manager
    Tool box re-allocation algorithms
    Device drivers (HDD, TCP/IP)
    General applications
    High level to intermediate code compiler (gcc?)
    Development tools
    Workspace tool box mapping/redirection
    Global tool box mapping/redirection
    Nearby workspace utilization, access speed, etc. statistics
    Intermediate code to binary compiler (as resource for versatility?)

We need to develop arguments as to what makes our project better than
theirs.  This is to ensure we are not re-inventing the wheel, and to
counter arguments from others during the process of garnering support
for our cause.  Below are some of the OS's and languages that have
been mentioned in articles and mailings.  The fundamental improvement
we are making is the ability to move agents during execution, and the
goal of a very small usable OS (10-20K).  Let me know if I have
overlooked/misinterpreted any of the following as not having the ability
to do the above.

Grasshopper:    http://www.gh.cs.su.oz.au/Grasshopper/
MOOSE:          ftp://frmap711.mathp7.jussieu.fr/pub/scratch/rideau/moose/
SELF:           ftp://self.stanford.edu/pub/papers
STAPLE:         ftp://ftp.dcs.st-and.ac.uk/pub/staple
Xanadu:         http://www.aus.xanadu.com

Our Team
The following four people have said they would be willing to invest a
few hours a week on the project.  Here is a little more information about
each one of them.

Jeremy Casas <casas@cse.ogi.edu>
I'm currently working as a Research Associate (a sufficiently vague title)
at OGI.  My work here is primarily centered around cluster computing,
effective net-resource sharing, and high-level communication
protocols/libraries.  I finished my bachelor's degree in CS from the U. of
the Philippines, Diliman ('90) and then worked in Japan as a systems
engineer for a while.  In '92, I came here for an MS which I finished by
'93.  I've been doing research here ever since (1 year or so already).

As for papers, I have co-authored the following:

Ravi Konuru, Jeremy Casas, Steve Otto, Robert Prouty and
	Jonathan Walpole.  "A User-Level Process Package for PVM".  In
	Proceedings of the 1994 Scalable High-Performance Computing
	Conference, pages 48-55, May 1994.

Jeremy Casas, Ravi Konuru, Steve Otto, Robert Prouty and
	Jonathan Walpole.  "Adaptive Load Migration Systems for PVM".  To
	appear in Supercomputing '94 proceedings, Nov. 1994.

Francois-Rene Rideau <rideau@clipper.ens.fr>
I've finished my masters in C.S. (even though I'm still struggling to obtain
the diploma).  The thesis was about translating logical expressions from
one language (B, kind of Z cousin -- based on explicit substitutions and jokes)
to another (Coq -- based on lambda calculus).

  I'm interested in all the parts of the project, except the low-level
specific device-driver stuff (say, writing a SCSI interface for such a board
-- yuck).
  I also hate *unix* programming, so if there's low-level stuff to code, I'd
prefer writing a direct OS (through the PC BIOS to begin with) rather than
coding it over *unix*.  Not that I deny interest in writing the stuff over
*unix*.

I am ready for all language-related topics, including implementing the
intermediate language, compiling to it and from it.

Andrew Thornton <A.J.Thornton@durham.ac.uk>
My name is Andrew Thornton and I am a second year computer science
student at Durham University, UK.  My particular interests are in
operating systems and parallel computing, so this project is right down
my street.  I have spent the last year or so reading up on current
implementations of microkernels and parallel processing environments
with a view to commencing a project like this.

Luther (l.w.) Stephens <luthers@bnr.ca>
Currently employed as a telecommunications software developer
with Bell Northern Research.  Projects include O-O software
design and development of billing systems for telecommunication
switching systems.  I have a B.S. in Computer Engineering from
N.C. State University and am currently working on my Master's.

These are the defining terms for our project.  Let's see if we can finalize
the usage and names by the 1st of November.  I am not attached to
any of the names; they are all fair game.

Application-centric: This NEEDS to be changed.  I used it to convey an
	idea.  Resource-centric has been suggested.  What shall it be?  What
	does it mean exactly?
Intermediate language: It would be nice to have a formal name (like
	Taos' VPcode).
PIOS Project: Shall we keep the name?
Stack: an array used to store data.
Tool: an immutable segment of code.
Tool Box: a collection of tools.
Work Space: a collection of tool boxes.
Virtual Machine: This is what I was describing as the logical
	configuration to which our intermediate code would compile.