[unios] Back to the basics

Pieter Dumon Pieter.Dumon@rug.ac.be
Mon, 25 Jan 1999 13:47:38 +0100 (MET)


Let's get back to the basics of computing...
And design an OS model from there. We are designing new models
that have beautiful features we don't need but lack the things we _do_
need. That's not the way to go. Before thinking up some 'innovative'
model that in the end is nothing more than existing stuff, we should
think about how computing is organized.
So, let's start with the decades-old stuff such as the Turing machine and
the von Neumann model on which all modern computers are built. Let's start
with the first diagram in every course on computers:

 INPUT -----> PROCESSING -----> OUTPUT

When new models are designed in the UniOS group, this basic thing seems to
be forgotten. We cannot avoid it, however, because it's how the hardware of
Turing/von Neumann computers is built. A von Neumann computer is based
on the idea that code and data are one: code can be seen as data and
acted on as such (code can be read, written, etc.).
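To make that concrete, here is a minimal sketch for a Unix-like system
(/bin/ls is just an arbitrary example of a compiled program): it opens an
executable and prints its first bytes, showing that code on disk is
readable as plain data.

#include <stdio.h>

int main(void)
{
    /* /bin/ls is an arbitrary example of machine code on disk. */
    FILE *f = fopen("/bin/ls", "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    unsigned char buf[16];
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    /* On most systems this prints the ELF magic number:
     * code, read like any other data. */
    for (size_t i = 0; i < n; i++)
        printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}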
 
Some process acts on data and outputs other data: e.g. a graphics filter
acts on some image data and outputs another format, or a text processor
reads a data file and outputs the formatted text on screen.
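This is exactly the shape of a classic Unix filter. A minimal sketch
(the uppercasing transform is just a stand-in for any real processing
step):

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    int c;
    /* INPUT --> PROCESSING --> OUTPUT, one byte at a time. */
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}

Run it as "./upper < input.txt > output.txt" and you have the diagram
above, literally.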

Whether you use a functional or an object-oriented system, this is the
same. The problem with all new models is that we think far too much about
how to make them beautifully object-oriented without considering what
functionality they should have. Whether you use the name "objects" or
"files", it is basically the same thing: files are handled by processes,
and so are objects, and these processes pass the data from these files or
the properties from these objects to other processes with IPC. So you can
form a linked chain of processes.

These processes can do two different things: act on the objects/files to
read their properties/data, or do some processing on these
properties/data. The processes acting on the objects/files directly are
thus representing these objects, while the others do the processing work
in the diagram above. So, whether you use a sort of Unix system or a
hyper-object-oriented model, the chaining of processes is basically the
same; we use other names and another paradigm in each system, but the
underlying architecture is the same. Whether you want it or not, if you
want a stable and secure system, separate objects in a system will have
to be represented by separate threads of execution.
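In Unix terms, such a two-stage chain looks like this minimal sketch for
a Unix-like system: the child stands in for the process representing an
object/file, the parent for the process doing the work, and the pipe is
the IPC between them.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: represents the object/file and hands out its data. */
        const char msg[] = "some property or data\n";
        close(fd[0]);
        write(fd[1], msg, sizeof msg - 1);
        close(fd[1]);
        _exit(0);
    }

    /* Parent: the next link in the chain, processing the data. */
    close(fd[1]);
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("processed: %s", buf);
    }
    close(fd[0]);
    wait(NULL);
    return 0;
}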
In an object-oriented system, everything will be an object, so
'users', 'images', 'programs' and 'devices' will all be some sort of
object which can be acted on by processes, which are objects themselves.
This is fully equivalent to an old "file"-oriented Unix-like system:
users can be represented by files as well, because files are just some
collection of bytes. It's up to the processes representing them or acting
on them to interpret those bytes.
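A tiny sketch of that point: the same four bytes mean different things
depending on which process interprets them (the byte values here are
arbitrary).

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const unsigned char bytes[4] = { 'A', 'B', 'C', 'D' };

    /* One process might interpret them as characters... */
    printf("as text: %.4s\n", (const char *)bytes);

    /* ...another as a number (the value depends on byte order). */
    uint32_t n;
    memcpy(&n, bytes, sizeof n);
    printf("as integer: %lu\n", (unsigned long)n);
    return 0;
}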
One advantage that a modern system should have over a Unix-y system is
that objects/processes should be able to be chained in more ways:
multiplexing and multicasting of data. So, while Unix follows a scheme
like this:

 input file(object) ---> process ---> process ---> process ---> output data

a system like this should be possible:

 input object ---+                +--> [process] --> output
                 |                |
 input object ---+--> [process] --+
                                  |
 input object --------------------+--> [process] --+--> output
                                                   +--> output

(Okay, my text art is not of high quality, but it's just to give a rough
idea.)

This is NOT an object model; it's just what every object model should
provide, and what should be added to a Unix-y system.
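As a hint of what that buys you, here is a minimal sketch of the
multicasting half: a tee-like stage for a Unix-like system (the file
arguments are arbitrary) that copies one input stream to stdout and to
every file named on the command line, a fan-out that a plain linear
pipeline cannot express. Multiplexing, the other half, would be the
mirror image: one process reading from several sources, e.g. via
select().

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    /* One slot for stdout plus one per file argument. */
    FILE **out = malloc(sizeof(FILE *) * argc);
    if (out == NULL) { perror("malloc"); return 1; }

    int nout = 0;
    out[nout++] = stdout;
    for (int i = 1; i < argc; i++) {
        out[nout] = fopen(argv[i], "wb");
        if (out[nout] == NULL) { perror(argv[i]); return 1; }
        nout++;
    }

    /* Every input byte is multicast to every output. */
    int c;
    while ((c = getchar()) != EOF)
        for (int i = 0; i < nout; i++)
            putc(c, out[i]);

    for (int i = 1; i < nout; i++)
        fclose(out[i]);
    free(out);
    return 0;
}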

I hope I made some things clear...


Pieter

--------------------------------

Pieter.Dumon@rug.ac.be

http://studwww.rug.ac.be/~pdumon

--------------------------------