[unios] m-kernel

David Jeske jeske@home.chat.net
Wed, 13 Jan 1999 11:01:08 -0800


On Tue, Jan 12, 1999 at 11:12:30PM -0800, OJ Hickman wrote:
> David Jeske wrote:
> > Microkernels which are pure IPC are not fast. Witness Mach pulling
> > device drivers into the kernel, NT pulling device drivers into the
> > kernel, QNX resorting to shared memory for low-latency communication.
> 
> I think that shared memory based systems show the most potential for
> competitive performance. Whenever possible, services should be
> implemented in shared class objects. These are both data and
> code encapsulated in a shared object and so can serve as a form of IPC
> or hardware driver. [Some process blocking may be needed]

Show the most potential for competitive performance? Let's just be
clear: shared memory is faster than stream-based IPC.

My problem is not that 'you can't do what you want in a
microkernel'. You can do anything you want within any system. My
problem is that the kinds of things I like to do involve plugging
things together based on their semantic interfaces, not based on
whether they happen to be in the same process, or whether they use
shared-memory-based IPC or stream-based IPC.

More below.

> > 1) have a disk driver export a raw disk device
> > 2) have a partition server read this raw device and export partition devices
> > 3) have an application access either a raw disk device or a partition in the
> >    same manner
> 
> > However, microkernels I've seen combine 1 and 2 into a single
> > process. Essentially, they 'macro-ify' it, and export it through
> > IPC. They do similar things for network stacks and display
> > servers. When they don't, they witness a performance hit for having to
> > cross 2 or more IPC boundaries.
> 
> Why not make 1 and 2 shared objects and have a storage server as 3?

If you are asking me why I don't want to do this... the answer is:
because I don't like having to decide rigidly, at compile time,
whether a block of code is a 'shared object' or a 'server'. I want to
compile a block of code which exports an interface. I want other
blocks of code to talk to that interface. I want the system to decide
whether to inline the two blocks of code together (i.e. for maximum
speed) or to run the two blocks of code on two different machines via
network IPC.
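
To make that concrete, here is a rough C sketch of what I mean (the
names like block_dev, raw_read_block and so on are made up for
illustration, not taken from any real system). The partition code and
the application are both written purely against the interface;
nothing in them says whether the raw disk implementation is linked in
directly or sitting behind IPC on another machine:

    /* A made-up "semantic interface" for block devices.  The
     * partition code and the caller below are written against the
     * interface only; where the raw disk implementation actually
     * lives is somebody else's problem. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 512

    struct block_dev {
        void *state;   /* implementation-private data */
        int (*read_block)(void *state, unsigned long blkno, char *buf);
    };

    /* 1) A raw disk device (faked here with in-memory data). */
    static int raw_read_block(void *state, unsigned long blkno, char *buf)
    {
        memset(buf, (int)(blkno & 0xff), BLOCK_SIZE);
        return 0;
    }

    /* 2) A partition: wraps *any* block_dev and offsets block numbers.
     * It neither knows nor cares how the underlying device is bound. */
    struct partition { struct block_dev *raw; unsigned long start; };

    static int part_read_block(void *state, unsigned long blkno, char *buf)
    {
        struct partition *p = state;
        return p->raw->read_block(p->raw->state, p->start + blkno, buf);
    }

    /* 3) An application that treats raw disks and partitions alike. */
    static void dump_first_byte(struct block_dev *dev, unsigned long blkno)
    {
        char buf[BLOCK_SIZE];
        dev->read_block(dev->state, blkno, buf);
        printf("block %lu starts with 0x%02x\n", blkno,
               (unsigned char)buf[0]);
    }

    int main(void)
    {
        struct block_dev raw  = { NULL, raw_read_block };
        struct partition p    = { &raw, 1000 };
        struct block_dev part = { &p, part_read_block };
        dump_first_byte(&raw, 7);    /* reads raw block 7 */
        dump_first_byte(&part, 7);   /* reads raw block 1007 */
        return 0;
    }

The decision about where raw_read_block really runs (inlined, shared
object, separate process, other machine) belongs to the system at
bind time, not to whoever typed the structs above.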

In working with microkernels, one of the most bothersome details (to
me) is that you are constantly compiling all this RPC-style stub code
which basically takes data out of an interface, stuffs it across some
IPC channel, just to have more code on the other side pull it back
out. If we must have this kind of code, I'd at _least_ rather have it
created by the system itself, and not by the developer or RPC
stubber. However, I think there are many places where we can allow a
smart run-time to transparently optimize out IPC boundaries for simple
single-client/single-machine cases.
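
For the record, this is roughly the kind of boilerplate I mean. This
is a toy example over a socketpair, not output from any real stub
generator, and it still omits error handling, byte order, and
versioning; every line of it exists only to get the same call across
a process boundary:

    /* A made-up example of RPC-style stub code: the client stub
     * marshals the call into a message, the server loop unmarshals
     * it, calls the real function, and marshals the reply back. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    #define BLOCK_SIZE 512

    struct req  { unsigned long blkno; };
    struct resp { int status; char data[BLOCK_SIZE]; };

    /* The "real" implementation, living on the server side. */
    static int read_block_impl(unsigned long blkno, char *buf)
    {
        memset(buf, (int)(blkno & 0xff), BLOCK_SIZE);
        return 0;
    }

    /* Client-side stub: pack args, push them across, pull reply back. */
    static int read_block_stub(int fd, unsigned long blkno, char *buf)
    {
        struct req rq = { blkno };
        struct resp rp;
        if (write(fd, &rq, sizeof rq) != (ssize_t)sizeof rq) return -1;
        if (read(fd, &rp, sizeof rp) != (ssize_t)sizeof rp)  return -1;
        memcpy(buf, rp.data, BLOCK_SIZE);
        return rp.status;
    }

    /* Server-side dispatch: unpack args, call the real function,
     * pack the result, repeat. */
    static void serve(int fd)
    {
        struct req rq;
        struct resp rp;
        while (read(fd, &rq, sizeof rq) == (ssize_t)sizeof rq) {
            rp.status = read_block_impl(rq.blkno, rp.data);
            write(fd, &rp, sizeof rp);
        }
    }

    int main(void)
    {
        int sv[2];
        char buf[BLOCK_SIZE];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        if (fork() == 0) {           /* server process */
            close(sv[0]);
            serve(sv[1]);
            _exit(0);
        }
        close(sv[1]);                /* client process */
        read_block_stub(sv[0], 7, buf);
        printf("got block 7, first byte 0x%02x\n", (unsigned char)buf[0]);
        close(sv[0]);
        wait(NULL);
        return 0;
    }

Note that even in this toy the client and server have to agree on the
struct layout by hand; that agreement is exactly the part I'd rather
have the system generate and check.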

> I think a lot of IPC is avoidable without 'macro-ifying' the overall
> service.

You propose making 1 and 2 shared objects and 3 a storage server,
which means 1 and 2 do not get the same level of 'safe' protection
that a separate server process does.

That said, I think you are correct, and for me the answer is to stop
compiling the implementation details of the communication channel
into the executable. I'd rather use a run-time binding/compiling
solution to generate target-specific code which handles the specifics
of the IPC.
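
Something along these lines (again, hypothetical names, and a real
system would specialize or generate the binding rather than flip a
flag) is what I have in mind for the bind step; the caller asks for
the interface and gets either the direct implementation or an IPC
stub, and its own code does not change either way:

    /* Hypothetical bind step: the same interface is wired either to
     * the in-process implementation or to an IPC stub, depending on
     * where the service lives. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 512

    struct block_dev {
        void *state;
        int (*read_block)(void *state, unsigned long blkno, char *buf);
    };

    /* Direct path: a plain function call, inlinable in principle. */
    static int local_read_block(void *state, unsigned long blkno, char *buf)
    {
        memset(buf, (int)(blkno & 0xff), BLOCK_SIZE);
        return 0;
    }

    /* Remote path: would marshal the call across an IPC channel,
     * as in the stub example above; not wired up in this sketch. */
    static int ipc_read_block(void *state, unsigned long blkno, char *buf)
    {
        (void)state; (void)blkno; (void)buf;
        return -1;
    }

    /* The binder decides; the caller never sees the difference. */
    static struct block_dev bind_block_dev(int co_resident)
    {
        struct block_dev dev;
        dev.state = NULL;
        dev.read_block = co_resident ? local_read_block : ipc_read_block;
        return dev;
    }

    int main(void)
    {
        char buf[BLOCK_SIZE];
        struct block_dev dev = bind_block_dev(1);  /* same address space */
        printf("read_block returned %d\n",
               dev.read_block(dev.state, 7, buf));
        return 0;
    }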

As long as we limit ourselves to producing static binaries at compile
time, the tradeoffs of existing kernels and software systems will
always exist, namely some kind of 'safety vs. run-time speed'
tradeoff.

-- 
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + jeske@chat.net