ORG, IPC, pem 05/17/93 [djg22] [dpm11]
Dennis Marer
dmarer@iws804.intel.com
Tue, 18 May 93 16:33:28 PDT
Forwarded message:
> From: "Gary D. Duzan" <duzan@udel.edu>
> In Message <2bf86bed.davgar@davgar.arlington.va.us> ,
> David Garfield <david@davgar.arlington.va.us> wrote:
>
> =>Well, the big problem with providing only synchronous IPC is that you
> =>CANNOT make asynchronous I/O out of it without having a second thread.
> =>The standard way in Unix to do I/O reading from two different sources
> =>is to make one process reading each source. Unix now has a kludge
> =>called select() that allows a program to identify the descriptors with
> =>available I/O, but this relies on the system and process being able to
> =>keep up with the I/O. On a VMS system (with asynchronous I/O), one
> =>queues a number of asynchronous I/O calls on each channel, and
> =>processes the data after it is transferred to your process's own space
> =>and you are notified. I feel this is a much more satisfactory
> =>solution.
> =>
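As a concrete reference point for the select() "kludge" being described, a minimal sketch of multiplexing two already-open descriptors (illustrative only; error handling trimmed):

```cpp
#include <sys/select.h>
#include <unistd.h>
#include <algorithm>

// Wait on two descriptors at once and read from whichever is ready.
// select() only reports readiness; the process must still keep up
// with the actual reads, which is the complaint above.
void poll_two_sources(int fd_a, int fd_b) {
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(fd_a, &readable);
        FD_SET(fd_b, &readable);

        int maxfd = std::max(fd_a, fd_b);
        if (select(maxfd + 1, &readable, nullptr, nullptr, nullptr) < 0)
            break;  // error or interrupted; bail out for brevity

        char buf[4096];
        if (FD_ISSET(fd_a, &readable) && read(fd_a, buf, sizeof buf) <= 0)
            break;  // EOF or error on source A
        if (FD_ISSET(fd_b, &readable) && read(fd_b, buf, sizeof buf) <= 0)
            break;  // EOF or error on source B
        // ... process buf here ...
    }
}
```

By contrast, the VMS-style approach queues the reads up front and is notified when the data has already landed in the process's own buffers.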
> Well, presumably you would want to do different things with data
> from different sources, so threads would make sense, and if they
> are lightweight enough, performance should be ok. I think the main
> argument is that object-invocation is basically synchronous, so an
> object-oriented operating system should be synchronous.
I also think threads make sense. As for being lightweight, this too is
achievable. Memory consumption (from a programmer's perspective) is close
to nothing per thread, and task switching time could also be reduced by
having a more lightweight switch for going between threads in a process.
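The one-thread-per-source idea can be sketched with modern C++ threads (which of course post-date this discussion; the names here are illustrative): each source gets its own thread blocking on its own synchronous read, and all threads share the process's memory, so per-thread cost is small.

```cpp
#include <thread>
#include <vector>
#include <functional>

// Spawn one lightweight thread per input source. Each thread runs its
// own blocking read loop, so no select()-style multiplexing is needed.
void serve_sources(std::vector<std::function<void()>> source_loops) {
    std::vector<std::thread> workers;
    for (auto& loop : source_loops)
        workers.emplace_back(loop);  // one thread per source
    for (auto& w : workers)
        w.join();                    // wait for all sources to finish
}
```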
> =>The obvious way to avoid this is to not make the parameter be the
> =>object ID. If Unix had used their equivalent, you would pass inode
> =>numbers to read() and write() calls. The obvious solution is exactly
> =>the same that Unix used, you open() the Object and get a process-
> =>specific identifier. There is then no possibility of forging
> =>anything, and the authorizations are all checked only once.
> =>
> This is certainly one way of doing it, but it adds a lot of
> state and complexity to the kernel, which is exactly where not
> to put complexity in a microkernel-based system.
Ah, I still don't believe the kernel is where IPC should go - support for
IPC, yes (shared memory, etc.), but not the IPC itself. Maybe. I dunno.
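The open()-style scheme being debated amounts to a per-process handle table: rights are checked once at open time, and later calls take a small process-local handle rather than a global object ID, so there is nothing to forge. A minimal sketch (all names illustrative; the credential check is elided):

```cpp
#include <vector>
#include <string>
#include <stdexcept>

struct Object { std::string name; };

// Per-process handle table, analogous to Unix file descriptors.
class HandleTable {
    std::vector<Object*> slots;  // index == process-local handle
public:
    // Authorization would be checked once, here, before the handle
    // is handed out.
    int open(Object* obj) {
        slots.push_back(obj);
        return static_cast<int>(slots.size()) - 1;
    }
    // Later calls accept only handles this process legitimately opened;
    // a forged global object ID is useless.
    Object& resolve(int handle) {
        if (handle < 0 || handle >= static_cast<int>(slots.size())
                || slots[handle] == nullptr)
            throw std::out_of_range("bad handle");
        return *slots[handle];
    }
    void close(int handle) { slots.at(handle) = nullptr; }
};
```

The counterargument above is exactly that this table is per-process kernel state, which a microkernel would rather not carry.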
> =>On the subject of objects on remote machines, I believe that the
> =>correct way to do this is to access an object of a "remote access
> =>class", and tell it a remote machine and object identification to
> =>connect to. Once it is connected, all method calls get passed to the
> =>remote machine. [This might be a separate meta-class, just so we can
> =>grab all method calls at one point, while not increasing the overhead
> =>for normal objects.] Once you terminate access to the local object,
> =>the network connection is dropped, and the remote machine's object is
> =>no longer used.
> =>
> A remote access object is a good idea, but we shouldn't use a
> connection-oriented protocol to do the job of an RPC. We can also
> implement TCP/IP, SNA, and Vines objects if we want.
In OOP terms, creating an instance of a "remote access class" establishes
the connection (by calling its constructor); when it is destroyed, it calls
its destructor to terminate the connection.
I use the term "connection" loosely - I don't mean to imply any protocol here.
Ideally, network connections should be possible independent of the protocol.
That's ideally...
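The constructor-establishes/destructor-tears-down pattern described above is what later C++ practice calls RAII. A protocol-neutral sketch (all names illustrative; the actual handshake and call forwarding are placeholders):

```cpp
#include <string>
#include <utility>

// A "remote access class": constructing an instance establishes the
// connection; destroying it drops the connection. Method calls made
// through it would be forwarded to the remote machine's object.
class RemoteObject {
    std::string host;
    std::string object_id;
    bool connected = false;
public:
    RemoteObject(std::string h, std::string id)
        : host(std::move(h)), object_id(std::move(id)) {
        connected = true;   // placeholder for the real connection setup
    }
    ~RemoteObject() {
        connected = false;  // placeholder for dropping the connection
    }
    bool is_connected() const { return connected; }
    // invoke(...) would marshal a method call and ship it to
    // host/object_id, hiding the underlying protocol entirely.
};
```

Because the protocol lives behind the class, the TCP/IP, SNA, and Vines variants mentioned above could each be a different implementation of the same interface.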
Laterz.
Dennis