PIOS and Protestant

Mike Prince mprince@crl.com
Tue, 1 Nov 1994 23:20:16 +0000 (GMT)


(This is a fake mail constructed from a Usenet message)

From: mprince@crl.com (Mike Prince)
Newsgroups: comp.os.research
Subject: Re: PIOS and Protestant
Date: 1 Nov 1994 23:20:16 GMT
Organization: CRL Dialup Internet Access	(415) 705-6060  [Login: guest]
NNTP-Posting-Host: ftp.cse.ucsc.edu
Originator: osr@ftp

In article <391m57$jjf@darkstar.ucsc.edu>,
Dejan Milojicic <dejan@tesla.osf.org> wrote:
>OK, I have three classes of questions: academic, commercial, and
>general. I apologize if I sounded negative in my previous email.
>On the contrary, I would be really happy if you succeed with your
>ideas, and I wish you good luck. However, I am suspicious about
>a couple of issues in your proposal: about your use of migration,
>and about building a new microkernel in general. I've been following
>migration mechanisms for half a decade, even doing some work of
>my own. In general there has not been much interest in using them.
>Moreover, even the other early implementors of well-known migration
>mechanisms had similar experiences. Check the experience of the V kernel,
>Sprite, Accent, Mach, Chorus, and Emerald (and there are many others).
>Check also the recent feedback on the NOW project at Berkeley, as
>presented at ASPLOS.
>
>Although I still believe in the future of migration, even
>at various levels (coarse, medium, and fine grained, depending
>on the application), what bothered me in your email was the ease
>with which you suggest heterogeneous migration when even the
>homogeneous kind hasn't succeeded so far. But let me get back
>to the questions. First, the academic ones.
>
>1) How do you prove that systems/applications can benefit from migration
>   among machines of the same architecture, let alone different
>   architectures, where the costs are going to be much higher? (Krueger
>   is one of the few who (through simulation) demonstrated benefits of
>   process migration; however, even he has changed his mind lately; check
>   his recent publications.)

How did you upgrade your last system?  Did you buy a whole new computer,
swap out the motherboard, or, if you were lucky, change a few jumpers and
swap the CPU?  It would be easier to plug in a new CPU card to work WITH
the current CPU.  At worst, all of the new processing would go to the new,
faster CPU.  At best you'd have symmetric processing.  At the very best,
you'd be able to buy five of those boards and get a nice speed-up.  Cheap.

If you mean to address the question of migrating code during execution,
then there are two answers.

First, persistent computing can be accomplished.  When you shut down your
system, processes are moved to non-volatile memory.  Turn your system
back on and they come alive.  Oops, you changed your hardware, new CPU,
etc.  The processes can still come back; an application doesn't even know
it was suspended.  An even more fun example: you go home for the night,
begin accessing your work computer to finish something up, and the work
moves toward your home (or is that a nightmare?).
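
To make this concrete, here is a minimal sketch in C of what suspend and
resume could look like; the image layout and function names are my own
invention, not an existing interface.  The key property is that only
intermediate-language state is saved, never native registers, so the image
can be revived on different hardware:

    /* Sketch only: an architecture-neutral agent image. */
    #include <stdio.h>
    #include <stdint.h>

    #define STACK_MAX 4096

    struct agent_image {
        uint32_t agent_id;
        uint32_t resource_id;      /* resource the agent was running in   */
        uint32_t il_pc;            /* offset into the IL code, not native */
        uint32_t stack_len;
        uint8_t  stack[STACK_MAX]; /* IL operand stack, byte payload      */
    };

    /* Fields go out one byte at a time in a fixed order, so the image
     * does not depend on the endianness or padding of the saving CPU. */
    static void put32(FILE *f, uint32_t v)
    {
        for (int i = 3; i >= 0; i--)
            fputc((int)((v >> (8 * i)) & 0xff), f);
    }

    static uint32_t get32(FILE *f)
    {
        uint32_t v = 0;
        for (int i = 0; i < 4; i++)
            v = (v << 8) | (uint32_t)(fgetc(f) & 0xff);
        return v;
    }

    int agent_suspend(const struct agent_image *a, const char *path)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        put32(f, a->agent_id);
        put32(f, a->resource_id);
        put32(f, a->il_pc);
        put32(f, a->stack_len);
        fwrite(a->stack, 1, a->stack_len, f);
        return fclose(f);
    }

    int agent_resume(struct agent_image *a, const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        a->agent_id    = get32(f);
        a->resource_id = get32(f);
        a->il_pc       = get32(f);
        a->stack_len   = get32(f);
        if (a->stack_len > STACK_MAX) { fclose(f); return -1; }
        if (fread(a->stack, 1, a->stack_len, f) != a->stack_len) {
            fclose(f);
            return -1;
        }
        /* The local kernel now recompiles the resource's IL for this
         * CPU and restarts the agent at il_pc; the application never
         * notices that the hardware changed underneath it. */
        return fclose(f);
    }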

Secondly, as far as speeding up your system while it's running, try this
scenario.  My two-CPU system has three applications up and running,
distributed as follows: A and B on CPU 1, and C on CPU 2.  (If you like,
you can substitute Excel for A, a graphics-card driver DLL for B, and
some comm program for C.)  My downloading session finishes and shuts down
C, but I'm still using A.  CPU 2 sits idle.  In time, either A or B should
get shuttled to CPU 2, and my PC will run faster.
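
A toy version of that rebalancing pass, with made-up loads and a made-up
threshold (this is one plausible heuristic, nothing more):

    /* When one CPU's work finishes, pull the cheapest-to-move task
     * off the busiest CPU.  Loads are rough percentages. */
    #include <stdio.h>

    #define NCPU 2

    struct task { const char *name; int cpu; int load; };

    static void rebalance(struct task *t, int ntask)
    {
        int load[NCPU] = {0};
        int idle = 0, busy = 0, pick = -1;

        for (int i = 0; i < ntask; i++)
            if (t[i].cpu >= 0)
                load[t[i].cpu] += t[i].load;

        for (int c = 1; c < NCPU; c++) {       /* idlest and busiest CPU */
            if (load[c] < load[idle]) idle = c;
            if (load[c] > load[busy]) busy = c;
        }
        if (load[busy] - load[idle] < 30)      /* not worth a migration */
            return;

        for (int i = 0; i < ntask; i++)        /* lightest task on busy CPU */
            if (t[i].cpu == busy && (pick < 0 || t[i].load < t[pick].load))
                pick = i;
        if (pick >= 0) {
            printf("migrating %s: CPU %d -> CPU %d\n",
                   t[pick].name, busy, idle);
            t[pick].cpu = idle;
        }
    }

    int main(void)
    {
        struct task t[] = {
            { "A (Excel)",     0, 50 },
            { "B (video DLL)", 0, 30 },
            { "C (comm)",     -1,  0 },  /* download done; C has exited */
        };
        rebalance(t, 3);   /* prints: migrating B ... CPU 0 -> CPU 1 */
        return 0;
    }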

People still don't multitask that much on their PCs (thanks to MS-DOS),
so this scenario is stretching it a little.  But given time, your
PC will inherit many daemons that will play in the background and benefit
from having more than one CPU to bog down.

Last of all, we can go to the big examples: rendering photorealistic
images, running big spreadsheets, etc.  Already people are ganging
together tens of Amigas to do Babylon 5 renders.  None of my friends have
that many toys, but imagine being able to cheaply plug in 20 CPUs,
incrementally.  PCs will become fun, helping to exploit people's creative
potential.  And my friends on Wall Street will have less time to call me
while their spreadsheets are cranking away.

>   What is your system intended to support?
>   General applications, scientific, database? For each there is a
>   different need, and probably a different kind of LD support; some
>   may not need migration at all.

I intend to support all of the above by creating a foundation upon which
higher-level OSes would provide the requisite functionality.  A really
vague answer, I know.  My main goals for now are fast mechanisms for
moving my execution primitive, agents, between code segments, regardless
of whether the destination is local or not, and an intermediate
distribution language that can be interpreted or quickly compiled down to
binary on a variety of machines.
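
To show what I mean by the primitive, here is a hypothetical sketch; every
name in it (agent_goto, the resource table, the site numbers) is invented.
The caller names a resource; whether it lives on this CPU or another one
is the kernel's business:

    #include <stdio.h>
    #include <string.h>

    struct agent { int id; int il_pc; /* ...stack, carried data... */ };

    struct resource { const char *name; int site; }; /* site -1 == local */

    static struct resource table[] = {
        { "mailer",   -1 },  /* a resource on this CPU         */
        { "renderer",  7 },  /* a resource currently on site 7 */
    };

    static void run_local(struct agent *a, const struct resource *r)
    {
        printf("agent %d enters %s here\n", a->id, r->name);
        /* ...interpreted or freshly compiled IL executes... */
    }

    static void forward(struct agent *a, const struct resource *r)
    {
        printf("agent %d marshalled and sent to site %d for %s\n",
               a->id, r->site, r->name);
        /* ...only IL state travels; the far side compiles as needed... */
    }

    /* The one call a programmer sees; identical whether the
     * destination is local or remote. */
    int agent_goto(struct agent *a, const char *resource_name)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].name, resource_name) == 0) {
                if (table[i].site < 0) run_local(a, &table[i]);
                else                   forward(a, &table[i]);
                return 0;
            }
        return -1;  /* no such resource */
    }

    int main(void)
    {
        struct agent a = { 1, 0 };
        agent_goto(&a, "mailer");    /* stays on this CPU */
        agent_goto(&a, "renderer");  /* hops to site 7    */
        return 0;
    }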

>   In general there is a small class
>   of applications that need load distribution. Check Utopia, which
>   does most of its LD at user level.

I disagree.  In any PC there are at least 4 different CPUs
(disk/keyboard/video controllers, plus the main CPU).  We are already
distributing our computation amongst these as a total hack.  We need to
unify our system architecture to include them as well.  Our plug-in
boards should also have CPUs (serial ports, some sound cards), etc.  If
everyone's cards were smart, a unified view of the system would yield
simpler, higher-level calls between CPUs.  This would in turn minimize
inter-CPU traffic, reducing hardware requirements or increasing system
throughput.  In addition, this "novel" idea of plug and play would come
naturally as an application of inter-processor communications.
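
As a hypothetical illustration of those simpler, higher-level calls (the
message layout and cpu_send are invented, not any real driver interface):
if the disk controller is a first-class CPU, the host sends it one
self-describing request instead of a long register-poking dance.

    #include <stdio.h>
    #include <stdint.h>

    enum op { OP_READ_BLOCKS = 1, OP_WRITE_BLOCKS = 2 };

    struct disk_request {      /* one message, not a register dance */
        uint16_t op;           /* what to do             */
        uint32_t first_block;  /* where                  */
        uint32_t nblocks;      /* how much               */
        uint32_t reply_to;     /* whom to tell when done */
    };

    /* Stub for the kernel primitive that queues a message to another
     * CPU; a real one would hand the bytes to the card's processor. */
    static int cpu_send(int cpu, const void *msg, unsigned len)
    {
        (void)msg;
        printf("-> CPU %d: %u-byte request\n", cpu, len);
        return 0;
    }

    int read_extent(int disk_cpu, uint32_t first, uint32_t n, uint32_t me)
    {
        struct disk_request rq = { OP_READ_BLOCKS, first, n, me };
        /* One inter-CPU message replaces the seek/status/interrupt
         * choreography, which is where the traffic savings come from. */
        return cpu_send(disk_cpu, &rq, sizeof rq);
    }

    int main(void)
    {
        return read_extent(3, 100, 8, 0);  /* CPU 3: hypothetical disk CPU */
    }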

>2) What are your estimates of the costs of migration to a machine of
>   another architecture? They are much higher than among machines of the
>   same architecture. Would this not outweigh the benefits of migration
>   in some cases? Have you made any estimates of this?

The architecture I am proposing for storing data would facilitate the 
transport of data to differing architectures at only the cost of 
transmission.  There would be no conversion of data between disparate 
formats.
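
Here is a sketch of one way that could work: a single fixed, tagged
representation that every CPU reads the same way, so transport really is
the only cost.  The tag values and layout are invented for illustration.

    #include <stdio.h>
    #include <stdint.h>

    enum tag { T_INT = 1, T_REAL = 2, T_TEXT = 3 };

    /* Cells are touched byte-at-a-time in a fixed order, never via
     * native word loads, so host endianness never enters into it. */
    static uint32_t read_u32(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    }

    static void write_u32(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
        p[2] = (uint8_t)(v >>  8); p[3] = (uint8_t)v;
    }

    /* A cell is: 4-byte tag, 4-byte payload length, payload bytes. */
    int main(void)
    {
        uint8_t buf[12];
        write_u32(buf + 0, T_INT);  /* tag     */
        write_u32(buf + 4, 4);      /* length  */
        write_u32(buf + 8, 42);     /* payload */

        /* Any architecture decodes these 12 bytes identically;
         * moving the cell costs only the transmission. */
        printf("tag %u, len %u, value %u\n",
               (unsigned)read_u32(buf), (unsigned)read_u32(buf + 4),
               (unsigned)read_u32(buf + 8));
        return 0;
    }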

The cost of moving the code would be slightly higher, but still
negligible.  Taos is able to compile code faster than a disk drive can
transfer the raw data.  Migration of code would be part of a settling
process wherein the system finds a more comfortable load balance, not
something done continually.

>> A PIOS is application-centric, breaking the machine-centric model which 
>> has pervaded OS design for decades.  Applications are viewed as resources
>> and are named.  The name is not tied to any specific machine.
>> A resource is a grouping of code and data, and agents which act on that 
>> data.  A resource is not bound to any processor and may be migrated to 
>> any other processor.  Agents execute code within resources, and can travel 
>> between resources with data.
>
>> A PIOS will rely on an intermediate language, below the level of a
>> programming language, but general enough to compile on a wide number of
>> machines.  The language will compile down to binary just before run time,
>> or after a resource is migrated to a new processor.
>
>3) What are the costs of doing this? Which applications are envisioned?
>   What is the predicted execution time of the applications, and how many
>   can be running at a site such that there is a reason to migrate some
>   of them?

Again, the cost of this compile is minimal (see Taos).  Applications:
windowing systems, graphical rendering, real-time decompression,
distributed processing (shifting sessions between workstations), personal
communications, to name a few.

I am hoping for compiled code to execute at 60-80% of the speed of 
applications compiled specifically for the destination machine. 

I have some ideas about the algorithms which will determine migration 
conditions, but nothing specific enough to answer your question.

>Commercial questions:
>
>> The initial goal of the project will be to design a small microkernel 
>> (10-20K), the intermediate language, and a small toolkit of 
>> resources for developing and testing.  I would like to implement the design
>> on at least a 486/Pentium and a PPC.  In order to encourage people to
>> experiment with PIOS, the microkernel would also be packaged as an application
>> and run on top of existing OS's.  Then, when conditions warranted, the
>> true microkernel version could be used.
>
>1) Time and manpower for doing this? Chorus, Mach, Spring, QNX: all of
>   them required quite significant time and manpower. Would you not
>   be duplicating their effort? If you want completely free code,
>   why not start with Linux, Lite, VST, ... not to mention
>   many other academic systems.

I want the foundation kernel to be up by February of next year.  It is
minimalistic and Forth-like.  Our first goal is to provide the basic
functionality needed to test our ideas.
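
For a feel of what "minimalistic and Forth-like" means, here is a toy
byte-coded stack machine whose whole inner loop fits in a page; the
opcodes are invented, not the actual PIOS intermediate language:

    #include <stdio.h>
    #include <stdint.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code)
    {
        int32_t stack[64];
        int sp = 0, pc = 0;

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = (int8_t)code[pc++]; break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp];   break;
            case OP_MUL:   sp--; stack[sp-1] *= stack[sp];   break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* (3 + 4) * 5 -> prints 35 */
        const uint8_t prog[] = { OP_PUSH, 3, OP_PUSH, 4, OP_ADD,
                                 OP_PUSH, 5, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }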

I would not be duplicating their effort, because they cannot
transparently migrate a process nor share data between disparate CPUs.

Again, Linux, Lite, VST, etc. do not have what I want.

>
>2) Have you checked the ANDF project at OSF, which already provides some
>   of what is needed for an intermediate format?

Some, but not all...

>
>> Here are some of my perceived advantages of PIOS:
>> 	Forward and backward compatibility of software
>> 	High level of sharing of resources amongst applications, true
>> 		code reusability
>> 	Absolute code portability
>> 	Efficient use of disparate and dynamic hardware resources
>> 	Elimination of the backward-compatibility hindrance to
>> 		new processor development
>> 	Parallel processing in a heterogeneous environment
>
>3) For which applications?

All applications.

Forward compatibility for all applications.  Microsoft will be bummed that
their old word processors still work on the HP/Intel VLIW processor.  Their
applications are good enough now that they could survive a development
freeze for 10 years (I'll get flamed for that too) as far as 70% of the
population at large is concerned.

As far as code sharing goes, check out DLLs.

If the intermediate language works out, then any application will run on 
any computer.

As for the disparate and dynamic hardware resources, the elimination
of the backward-compatibility hindrance, and parallel processing, see my
prior remarks.

>General comment. 
>
>In today's world, with Microsoft on one side and
>the rest of the companies offering various versions of
>operating systems and a number of microkernels, yet another
>one doesn't seem likely to succeed. Don't forget that
>the same idea came from DARPA, using Mach on all platforms,
>and it does run on different architectures, but you should
>be aware how much effort has been put into it by many companies,
>including OSF.

I recognize the deck is stacked against us.  I have had very favorable
responses from a number of people ready to contribute.  If nothing comes
of this but the propagation of a few new ideas, we all still win.  But
possibly, at the end of a long tunnel (I'm guessing '97 or even the
year 2000), the ideas we are advocating will be mature and the market
will be ready.

Thank you for the input and questions,

Mike