ORG, IPC, pem 05/17/93 [djg22]

David Garfield david@davgar.arlington.va.us
Tue, 18 May 1993 00:54:00 EDT


Gary D. Duzan wrote:

> Peter Mueller <mueller@sc.zib-berlin.de> wrote:
>
> =>Let me comment on this:
>
> => 1. A user should, of course, use blocking RPC (which means
> =>    ROI).  This is the usual way to communicate in an
> =>    OO-environment.  (Ok, but to be honest: I suppose there
> =>    will be some low-level users who use low-level languages and
> =>    who will call IPC primitives directly ...  but that's for the
> =>    system people :-) Nonetheless, mechanisms must be provided
> =>    within the kernel with which ROI can be implemented.
> =>    ROI is a more conceptual view of communication, as RPC is in
> =>    common procedure-oriented environments. RPC, too, is only a
> =>    conceptual model, where the IPC stuff is hidden by stubs,
> =>    which are in turn automatically created.  We also have to
> =>    create such "stubs", though they will actually be some kind
> =>    of communication objects.  They should handle all the
> =>    machinery necessary to invoke a remote method.
>
>    Fair enough.
>
> => 2. Now for the difficult part. I do not agree with including
> =>    asynchronous calls in the kernel.
>
> =>>    So I would advise making IPC asynchronous at the system level and
> =>> providing standard library (or language-embedded) functions for
> =>> synchronous IPC. When the secure function call technology appears, we
> =>> can simply change the library function (compiler) to use the new call.
> =>
> => And I say make it vice versa: provide synchronous IPC at the
> => system level and make a standard library for asynchronous IPC.
> => Yup, and: BEWARE, I'm a tough guy to argue with on that
> => subject ... grrrrrr >:-|
>
>    Ok, ok, calm down. :-) I can live with synchronous IPC as long
> as we have good context switch time and kernel-level multiple
> thread support.

Well, the big problem with providing only synchronous IPC is that you
CANNOT build asynchronous I/O out of it without having a second
thread.  The standard way in Unix to read from two different sources
is to have one process reading each source.  Unix now has a kludge
called select() that allows a program to identify the descriptors
with available I/O, but this relies on the system and process being
able to keep up with the I/O.  On a VMS system (with asynchronous
I/O), one queues a number of asynchronous I/O calls on each channel,
and processes the data after it has been transferred into the
process's own space and the process has been notified.  I feel this
is a much more satisfactory solution.
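
For concreteness, the select() approach looks something like this (a
rough sketch in C; error handling and the actual processing of the
data are omitted):

    /* Minimal sketch of multiplexed reading with select(): wait
     * until one of two descriptors has data, then read from it.
     * The process itself must keep draining the descriptors fast
     * enough to keep up with the incoming I/O. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <unistd.h>

    void read_two_sources(int fd1, int fd2)
    {
        char buf[1024];
        int maxfd = (fd1 > fd2 ? fd1 : fd2) + 1;

        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(fd1, &readfds);
            FD_SET(fd2, &readfds);

            /* Block until at least one source is readable. */
            if (select(maxfd, &readfds, NULL, NULL, NULL) < 0)
                break;

            if (FD_ISSET(fd1, &readfds))
                read(fd1, buf, sizeof buf);  /* will not block now */
            if (FD_ISSET(fd2, &readfds))
                read(fd2, buf, sizeof buf);
        }
    }

Note that the program still issues one synchronous read() per
descriptor; select() only tells it which read() is safe to make.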

Until recently, I had felt that we should provide both synchronous and
asynchronous forms, but I now suspect that, by using only the
asynchronous form, we can have all method receptions having basically
the same starting environment.  With careful design, starting a new
thread should be a very minor chore, probably less than the cost of
transferring data as part of the call.
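
A sketch of what I have in mind, using POSIX-style threads purely
for illustration (next_message() and dispatch() are invented names
standing in for the kernel's message queue and method lookup, not a
proposal for the actual interface):

    #include <pthread.h>

    typedef struct {
        int   method_id;
        void *args;
    } message_t;

    extern message_t *next_message(void);  /* assumed: kernel queue  */
    extern void dispatch(message_t *m);    /* assumed: method lookup */

    /* Every method reception starts here, so every one of them
     * begins in exactly the same environment. */
    static void *reception(void *arg)
    {
        dispatch((message_t *)arg);
        return NULL;
    }

    void reception_loop(void)
    {
        for (;;) {
            pthread_t tid;
            message_t *m = next_message();
            pthread_create(&tid, NULL, reception, m);
            pthread_detach(tid);   /* nobody joins a reception */
        }
    }

A real kernel would presumably recycle threads from a pool rather
than create one per call, which is one way to keep the startup cost
below the cost of transferring the call's data.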

> => Now I've got a question: an address of (object id, method id
> => [,authorization] ).  Why is authorization necessary? Isn't it
> => enough that, IF an object has the address (object id, method
> => id), THEN it is automatically authorized to call the method?
> => Then authorization is separated from the objects. As the actual
> => address is grabbed from a (what's its name, DIRECTORY? and:
> => where's the glossary?) object, why not provide authorization
> => within that object?
> => You are then free to provide several security policies on the
> => fly by choosing the appropriate directory object, so you can
> => range from a system with no security at all to a fully secured
> => system. Of course, the request to receive an object's address
> => must carry a kind of ticket. This ticket should identify the
> => owner of the request and its permission rights.
>
>    The case you mention is using implicit rather than explicit
> authorization. Also, unless steps are taken to ensure that it
> is very difficult (ideally impossible) to forge an address
> (capability, protected pointer, whatever), it will not be
> secure. These steps I rolled up and called "authorization",
> since there are any number of ways to do it.
>
>                                         Gary Duzan
>                                         Time  Lord
>                                     Third Regeneration
>                          Humble Practitioner of the Computer Arts

The obvious way to avoid this is to not make the parameter be the
object ID.  If Unix had used the equivalent, you would pass inode
numbers to the read() and write() calls.  The solution is exactly
the one Unix used: you open() the object and get back a process-
specific identifier.  There is then no possibility of forging
anything, and the authorizations are all checked only once.
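
The interface I am imagining would look roughly like this (every
name here is invented for illustration; the point is only the shape
of the calls):

    /* Hypothetical object-handle interface, by analogy with
     * open()/read()/close().  The global object ID is exchanged
     * once for a process-local handle, and the rights check
     * happens only at obj_open(). */
    typedef int obj_handle_t;   /* process-specific, like a Unix fd */

    /* Look up the object, check the caller's rights ONCE, and
     * return a handle that is meaningless in any other process. */
    obj_handle_t obj_open(const char *object_name, int rights);

    /* Invoke a method through the handle; no further rights
     * checking is needed, and there is nothing global to forge. */
    int obj_call(obj_handle_t h, int method_id,
                 void *args, void *result);

    int obj_close(obj_handle_t h);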

On the subject of objects on remote machines, I believe that the
correct way to do this is to access an object of a "remote access
class", and tell it the remote machine and object identification to
connect to.  Once it is connected, all method calls get passed to the
remote machine.  [This might be a separate meta-class, just so we can
grab all method calls at one point, while not increasing the overhead
for normal objects.]  Once you terminate access to the local object,
the network connection is dropped, and the remote machine's object is
no longer used.
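
In the same invented notation as above, the remote access class
might reduce to three entry points (again, all hypothetical names):

    #include <stddef.h>

    /* Local proxy standing in for an object on another machine. */
    typedef struct {
        int  net_fd;          /* connection to the remote machine */
        long remote_obj_id;   /* object identification over there */
    } remote_proxy_t;

    /* Bind the proxy: open the network connection and name the
     * remote object to be used. */
    int proxy_connect(remote_proxy_t *p, const char *host, long obj_id);

    /* The single trap point: the meta-class routes ALL method
     * calls here, to be marshalled and sent over net_fd. */
    int proxy_invoke(remote_proxy_t *p, int method_id,
                     void *args, size_t arg_len, void *result);

    /* Terminating access drops the connection; the remote
     * machine's object is no longer used. */
    void proxy_release(remote_proxy_t *p);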

-- David

P.S. I should be available all the time except for a week in July and 
a week in December.
-- 
David Garfield/2250 Clarendon Blvd/Arlington, VA 22201   (703)522-9416
Email: david%davgar@uunet.uu.net or garfield@verdi.sra.com