No subject
David Garfield
david@davgar.arlington.va.us
Thu, 20 May 1993 00:44:32 EDT
Subject: IPC - synchronous/asynchronous etc... [djg25]
"> " is Peter Mueller
> I want to comment to the main design decision, whether to use synchronous or
> asynchronous communication primitives. As Moose will be an oo-os it should be
> possible, to provide a basic communication object (note: the object is
> part of the kernel). Within an application one can create an instance of that
> object, say, COMMUNICATION, something like this:
>
> main() {
> COMMUNICATION comm;
> MESSAGE msg;
> ADDRESS addr1, addr2;
>
> addr1 = ... // Initialize destination address
> comm.send(addr1, msg); // Send the message
>
> // do something
>
> addr1 = ... // Initialize source address
> comm.receive(addr1, msg, addr2);
> comm.reply(addr2);
>
> // do something
> }
Assuming you are talking about communicating with a device
object, like a serial port COMMUNICATION object, then it can look
like this:
MESSAGE msg_connect, msg_send, msg_receive;   // constant values
main() {
    COMMUNICATION comm;
    ADDRESS addr;
    char output_data[1024];
    char input_data[1024];

    addr = ...                           // Initialize destination address
    comm.send(msg_connect, addr);        // establish the connection
    comm.send(msg_send, output_data);    // Send some data

    // do something

    comm.send(msg_receive, input_data);  // Ask for some data to be received

    // do something
}
> The program above demonstrates the lowest level of IPC. A "normal" user will
> not see this in this way, he will use ROI, where a method of a remote object
> is directly invoked. Nonetheless, ROI must be mapped to such IPC primitives
> (which we call "stubs" or communication objects, as above)
>
> Now my idea: I'm definitely convinced that synchronous IPC is the right choice.
> (But: there are some of you who are not as much convinced as I am. Well, well. :-)
> In my opinion, using very lightweight processes (threads) will give us the
> possibility to provide *asynchronous* IPC outside the kernel.
>
> I think, both IPC versions can be used with the following interface, namely
>
> send, receive, reply
There are two basic models of communications.
In one model, one person sends and the other person receives. In this
model, communication is basically uni-directional. You can make it bi-
directional, but the two directions are totally independent. This model
is basically synchronous, and I am not sure how you would make it
asynchronous, other than by having notification that there is something
to receive.
In the other model, one person says do_X, and the other responds
I_did_X. In this model, communication is basically bi-directional,
in a master-slave mode. One person can say do_X or do_Y, and the
other can only answer whether it did it or not. This model is easily
either synchronous or asynchronous, from the sender's point of view.
In synchronous mode it is equivalent to a function call. In
asynchronous mode, the originator learns at some later time that the
other guy is finished. The thing that gets weird in this model is
that the receiver basically gets started out of the blue to do
something, and can have multiple threads running all at once, but that
is why we're building an OS.
For use in Moose, the second makes more sense to me than the first, as
it conforms nicely to a message-invocation model. In the case of
Moose, do_X could be "write this data out" or "read data and put it
in the buffer I gave you". The sender can then either say how and
where it is to be notified of completion, or receive back a tag whose
completion it can wait for.
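To make the tag style concrete, here is a minimal sketch. All the names
(DEVICE, Tag, write_async, wait_for) are made up for illustration, not a
proposed Moose interface, and the method bodies are only placeholders:

    // Hypothetical sketch of the do_X / I_did_X model with completion tags.
    // Every name here is an illustrative assumption, not a real Moose API.
    #include <stddef.h>

    typedef int Tag;                     // identifies one outstanding request

    class DEVICE {
        Tag next_tag;
    public:
        DEVICE() : next_tag(0) {}

        // Asynchronous "do_X": queue the request and return at once with a
        // tag the caller can wait on later.  (Placeholder bodies only.)
        Tag write_async(const char * /*buf*/, size_t /*len*/) { return next_tag++; }
        Tag read_async(char * /*buf*/, size_t /*len*/)        { return next_tag++; }

        // Block until the request identified by 'tag' has reported "I_did_X"
        // (or failed); return a status code.  (Placeholder body only.)
        int wait_for(Tag /*tag*/) { return 0; }
    };

    int example(DEVICE &dev)
    {
        char out[1024];
        char in[1024];

        Tag t1 = dev.write_async(out, sizeof out);  // "write this data out"
        Tag t2 = dev.read_async(in, sizeof in);     // "read into the buffer I gave you"

        // ... do something else while the device works ...

        int w = dev.wait_for(t1);      // learn later that the other side finished
        int r = dev.wait_for(t2);
        return w | r;
    }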
> If we can build the IPC primitives within an object, it should be possible,
> to create two (2!) kernels: one synchronous and one asynchronous. As the
> interface should be the same, there should be no problem to the application
> layer and to the lower layers as well. We can then compare these two or
> even offer both.
Now you want two(!) kernels!?!? Do you think we are masochists? I
mean, they can't be the same OS. They would require two completely
different sets of support software ...
[...]
> David wrote:
>
[...]
> > On a VMS system (with asynchronous I/O), one
> > queues a number of asynchronous I/O calls on each channel, and
> > processes the data after it is transferred to your processes own space
> > and you are notified. I feel this is a much more satisfactory
> > solution.
>
> Actually, I think the VMS IPC is a third form of I/O, though it's
> asynchronous. If I understand it right, it is possible, to direct an I/O
> to write into a pre-defined data space within my own process. I suppose,
> there's a call like
>
> receive(chan, to_memory)
>
> where 'to_memory' points to (big enough) data space. The OS waits for data
> sent on 'chan' and relays the data to the indicated data space. After a
> sender says, "End Of Send" a notification signal is transferred.
The VMS low-level IPC is through an interface named SYS$QIO(). You
give it an open channel (VMS's equivalent to a Unix file descriptor),
a message number with modifiers, three different forms of return
status values, and six arbitrary parameters. The receiving program,
known as a device driver, may process the message in any way it
chooses. It may use all six arbitrary parameters in any way it
chooses, either as numbers, pointers to data to be read, or pointers
to data to be written (though if it wishes to defer the operation
pending activity by a hardware device, it can only have one pointer,
and it must say at compile time whether the data is to be copied to/from a
system buffer [Buffered I/O], or locked in memory for direct access by
the device driver [Direct I/O]). Within VMS, devices are categorized
into classes that all respond to the same basic set of messages in the
same way, so that you will know that if you have a class 1 (disk)
device, you can (if you have the privilege to send the message in
question) send messages to do physical read and physical write
operations and expect them to work.
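From memory, the call shape is roughly as follows. The prototype is
written out by hand here as an assumption (on a real VMS system it comes
from the system headers), so the exact types and names should be checked
against the VMS documentation:

    // Rough sketch, from memory, of the SYS$QIO call shape.  The prototype
    // below is a hand-written assumption, not copied from the real headers.
    extern "C" int sys$qio(
        unsigned int   efn,       // event flag set on completion
        unsigned short chan,      // channel: VMS's equivalent of a file descriptor
        unsigned int   func,      // message number plus modifiers
        void          *iosb,      // I/O status block (one of the three status forms)
        void         (*astadr)(void *),   // AST routine called on completion
        void          *astprm,    // parameter handed to the AST routine
        void *p1, void *p2, void *p3,     // the six arbitrary,
        void *p4, void *p5, void *p6);    // device-dependent parameters

    static char buffer[512];
    static unsigned short iosb[4];        // status, transfer count, device info

    // Queue a read; 'func' would be something like IO$_READVBLK, and 'chan'
    // comes from a prior channel assignment.
    void read_example(unsigned short chan, unsigned int func)
    {
        sys$qio(1, chan, func, iosb,
                0, 0,                              // no AST routine or parameter
                buffer, (void *) sizeof buffer,    // p1, p2: where and how much
                0, 0, 0, 0);                       // p3..p6 unused here
    }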
> In my opinion this differs from my use of the term 'asynchronous' IPC. Here
> a sender is not blocked, because data is immediately transferred into a
> system internal buffer space. On the other hand, a receiver might be blocked,
> if there's no data available. (In the above case a receiver is not blocked.)
> (Or am I mixing things up?)
>
> Nevertheless, if we do agree that we can provide each form of asynchronous
> IPC by an additional buffer server, this VMS asynchronous method can simply
> be provided: create a buffer server which takes such an interface, and which
> will be able to write to a process's data space. Then notification is done by
> invoking a method. The easiest way would be, within this method, to set a
> flag that the data is ready to be used.
Run that by me again....
=============================================================================
"> " is Gary D. Duzan
">=>" is David Garfield (me)
">=>> " is Dennis Marer
">=>>> " is Gary D. Duzan
>=>>> Well, presumably you would want to do different things with data
>=>>> from different sources, so threads would make sense, and if they
>=>>> are lightweight enough, performance should be ok. I think the main
>=>>> argument is that object-invocation is basically synchronous, so an
>=>>> object-oriented operating system should be synchronous.
>=>
>=>"basically synchronous", but only when you do simple stuff. More
>=>complicated stuff requires more advanced techniques.
>=>
> Well, if we use Ellie, we can always use future return objects. :-)
Sounds like it's just a language construct for asynchronous messages. :-)
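As a rough illustration (the names here are made up, and this is not
Ellie's construct itself), a "future return object" is just a placeholder
for a result that has not arrived yet:

    // Illustrative sketch of a "future return object"; a real one would
    // block or yield in wait() instead of spinning.
    template <class T>
    class FUTURE {
        T   value;
        int ready;
    public:
        FUTURE() : ready(0) {}
        void deliver(const T &v) { value = v; ready = 1; }  // called on completion
        int  is_ready() const    { return ready; }
        T    wait() { while (!ready) /* block or yield here */ ; return value; }
    };

    // The caller sends its message and keeps working; it only blocks when it
    // finally touches the result:
    //     FUTURE<int> f = remote_object.do_X_async(...);   // hypothetical call
    //     // ... do something ...
    //     int result = f.wait();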
>=>> I also think threads make sense. As far as being lightweight, this too is
>=>> achievable. Memory consumption (from a programmers perspective) is close
>=>> to nothing per thread, and task switching time could also be reduced by
>=>> having a more lightweight switch to go between threads in a process.
>=>
>=>Agreed. These threads need only grab a copy of the loaded image's
>=>process space, add a stack, copy parameters, and run.
>=>
> Sounds like we are going to do threads. Now the trick is to do
>it portably and efficiently. We may also want to consider what sort
>of protection among threads is required/possible.
I think we won't have much choice, but it may be reasonable for an
image to specify that only one thread may be active at any time; all
others must wait to get in (subject to rule changes to prevent deadlock).
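A minimal sketch of what that "one thread active at a time" rule could
look like from the image's side, assuming some kernel-provided gate
primitive (the MUTEX name and interface here are invented; a real one
needs an atomic test-and-set and would block rather than spin):

    // Per-image "only one thread may be active" gate -- sketch only.
    class MUTEX {
        volatile int held;
    public:
        MUTEX() : held(0) {}
        void lock()   { while (held) /* a real kernel would block here */ ; held = 1; }
        void unlock() { held = 0; }
    };

    static MUTEX image_gate;       // one gate shared by every thread in the image

    void thread_entry(void * /*arg*/)
    {
        image_gate.lock();         // all other threads must wait to get in
        // ... run the image's code, which may assume it is alone ...
        image_gate.unlock();       // let the next waiting thread in
    }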
>=>>> This is certainly one way of doing it, but it adds a lot of
>=>>> state and complexity to the kernel, which is exactly where not
>=>>> to put complexity in a microkernel-based system.
>=>
>=>Protection is REQUIRED in an operating system. If we don't have it,
>=>we get a system known as DOS. Protection includes a necessity for the
>=>OS to protect objects from unauthorized processes. Any other
>=>suggestions on this that are proof against forgery?
>=>
> Very sparse address spaces and encryption can be used, as in the
>Amoeba system. Sparse addressing (i.e. say, a 128-bit number with
>object numbers scattered around in it) may be sufficient for a local
>system.
Even if you do pick them at random in a sparse address space, you are
only getting good odds against forgery, not proof. Remember, if
the OS is not protected (i.e. PROOF) against the applications, any
application can trash the OS. We call this an unprotected operating
system. DOS is unprotected.
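To put the probability argument in concrete terms, the Amoeba-style scheme
looks roughly like the sketch below. The sizes and names are illustrative,
and the random source shown is nowhere near strong enough for real use:

    // Sparse, randomly chosen object identifiers -- illustration only.
    #include <stdlib.h>
    #include <string.h>

    struct OBJECT_ID { unsigned char bits[16]; };   // a 128-bit identifier

    // With 2^128 possible identifiers and only a handful of real objects,
    // a guessed identifier is overwhelmingly unlikely to match -- but only
    // unlikely, which is the "good chances, not proof" point above.
    OBJECT_ID new_object_id()
    {
        OBJECT_ID id;
        for (int i = 0; i < 16; i++)
            id.bits[i] = (unsigned char) rand();    // placeholder randomness
        return id;
    }

    int id_matches(const OBJECT_ID &a, const OBJECT_ID &b)
    {
        return memcmp(a.bits, b.bits, sizeof a.bits) == 0;
    }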
>=>If IPC is not in the kernel, how do you communicate with the IPC?
>=>
> Quite so. Unless the hardware has some IPC support, it has to
>be in the kernel. The 386 seems to be one of the exceptions in that
>the gates (if I remember correctly) can be used here. Other machines
>aren't so fortunate. Regardless, it needs to be in the kernel API
>whether it actually goes to the kernel or not.
All CPUs with multiple protection levels will have a method for a user-
level program to invoke something at a higher protection level, and
pass some sort of parameter. The unusual part of the 386 is that the
parameter is effectively which of the several call gates you go through.
Most hardware just lets you pass an n-bit constant.
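For the "n-bit constant" style, the user side of a kernel call typically
looks something like this sketch. The trap vector and register usage are
invented for illustration, and GCC-style inline assembly on a 386 is
assumed:

    // Sketch of a user-level kernel entry that passes an n-bit constant.
    // The vector (0x30) and register convention are made-up assumptions.
    static inline long kernel_call(long function, long argument)
    {
        long result;
        asm volatile ("int $0x30"          // trap to the higher protection level
                      : "=a" (result)      // return value comes back in EAX
                      : "a" (function),    // the constant selecting the service
                        "b" (argument)     // one arbitrary parameter
                      : "memory");
        return result;
    }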
>=>Well, connection may be used loosely here, but in some ways, a
>=>connection-oriented protocol (like TCP) is a GOOD thing. There are
>=>things that will work out much easier if it is not necessary to
>=>establish one or more objects, connect them appropriately, use them,
>=>and disconnect WITH EVERY CALL. I admit that the RPC style makes
>=>sense for something as simple as a disk access (NFS), but not for
>=>everything. Can you imagine trying to access a remote Vines
>=>connection-oriented connection over anything other than a connection-
>=>oriented protocol?
>=>
> Quite so. So are we going to implement StreetTalk for Moose too? :-)
If you can come up with a useful protocol, and if failure to provide an
implementation of the protocol will result in different people
writing different implementations of it, then we need to provide an
implementation as part of the original development. At this time,
TCP/IP fits the bill, as do SCSI hard disks, MFM/ESDI/IDE hard
disks, and VGA and SVGA displays. Other protocols and interfaces may
also qualify.
=============================================================================
Sorry about the length, but it seemed to make sense to put the
messages together since they were both based on variants of one
original message, about the same basic topic, and it didn't seem
right to cut much.
David
--
David Garfield/2250 Clarendon Blvd/Arlington, VA 22201 (703)522-9416
Email: david%davgar@uunet.uu.net or garfield@verdi.sra.com