SEC: object security

Chris Harris
Mon, 31 Oct 1994 18:59:56 -0800 (PST)

On Tue, 1 Nov 1994, Francois-Rene Rideau wrote:

> > As for securing object handles, I propose the following: each object, 
> > process, or whatever our unit of allocation for security, would have a 
> > patch of memory that could only be accessed by the OS. 
> I've thought about it; but it means very few new objects are created,
> as object creation would involve a system call. Now this contradicts
> the objects being fine-grained, and also inlining of messages.
> Thus, I think this method should only be used when communicating with
> unsafe (e.g. Unix native or emulated) processes.

Sounds good, I guess.  Most books, etc. about IPC of any sort seem to 
make it sound like a system call is required.  I myself can't visualize 
any other way to communicate with system-managed objects.  Care to 
enlighten me (and any others who may be wondering) about that?

> > When you first 
> > access an object, you specify what offset into this protected area to use 
> > as the handle.  If access is allowed, the appropriate area of the secure 
> > memory is updated to include the handle.  From then on, you could pass 
> > the offset into the protected area instead of a handle.  The OS 
> > would decode this, and use what it finds as the handle. 
> On architectures with segmenting (e.g. intel 386 family), this may simply
> mean using LDT segment descriptors as object handles !

Hmm... curious idea.  Of course, this can impose some fairly low limits on 
things.  If TUNES supports persistent processes (like it should imho), it 
would seem like more than 8,192 (the maximum number of LDT entries) 
objects might be in use at once.

Actually, I suppose a brief explanation of "at once" might be in order.  In 
my imaginary world of perfect OSs ( =) ), you could start a number of 
processes executing, like you would start apps in OSs today.  However, 
instead of quitting the apps and removing them from memory, you would 
simply let them sit idle long enough, and they would be paged out of 
memory.  Well, let's suppose that on any randomly selected TUNES machine, 
the user might have 500 processes, both active and dormant.  Now if each 
process used just twenty objects, we'd be over the limit of LDT entries.  
The only way I can see to get around that would be to renumber handles on 
the fly, but keeping track of that would waste a lot of computing power.
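For what it's worth, the protected-area scheme quoted at the top is 
essentially a kernel-side handle table, much like a Unix file-descriptor 
table.  Here's a minimal C sketch of the idea; all the names 
(handle_table_t, sys_bind_handle, sys_decode_handle) are invented for 
illustration, not part of any proposal above:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch only: a per-process table kept in kernel-only memory.
   User code never sees object pointers, only small indices. */

#define MAX_HANDLES 64

typedef struct object object_t;        /* opaque kernel object */

typedef struct {
    object_t *slot[MAX_HANDLES];       /* NULL = unused slot */
} handle_table_t;

/* "System call": after an access check, bind an object into the
   caller's table at `index`.  Returns 0 on success, -1 on failure. */
int sys_bind_handle(handle_table_t *t, int index, object_t *obj, int allowed)
{
    if (index < 0 || index >= MAX_HANDLES || !allowed)
        return -1;
    t->slot[index] = obj;
    return 0;
}

/* Later calls pass only the index; the kernel decodes it.  A forged
   index either falls outside the table or hits an empty slot. */
object_t *sys_decode_handle(handle_table_t *t, int index)
{
    if (index < 0 || index >= MAX_HANDLES)
        return NULL;
    return t->slot[index];
}
```

The "impossible to guess" property comes from the decode step: an index 
that was never bound decodes to nothing, so there is no handle value a 
process can forge.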

> > While it might 
> > be slightly slower, it would be impossible to "guess" an object's handle, 
> > and it would make it easier to remove privileges at the same time. 
> Well, you'd still have heavy-weight processes that would share a set of object
> handles. And what I hate about Unix is all those kind of heavy weight things.
> Your object handles would be like Unix file handles ! It might be slightly
> better than Unix handles, but not much, and you'd have to stick to 
> coarse-grained objects. Yuck.

Yeah, I've been thinking about a way to get around that, and I guess 
someone else has already done it for me.  =)  BTW, for anyone who's 
listening, the same argument applies to my concurrent 
process/LLL proposal.  By making each process responsible for less work, 
context-switch times go down, making everything much lighter (and 
therefore easier to pick up.  hehe....).

> Really, I mean no offense. I myself lost several weeks before I found out such a
> method was definitely *not* ok. Well done anyway ! It means we all make some
> (many) mistakes, and believe me, this is one of them, and we are lucky you
> proposed it, which means we'll know about it.

No offense at all.  That's why we don't all work independently, right?  =)

> > The 
> > dependence on compilers for security would also be removed, and this, 
> > imho, is VERY important.  Backdoors are not a good idea....
> As I see things, user compilers would only produce intermediate code
> in some safe language; then a system backend would translate the intermediate
> code into binary code. If the LLL is well done, it should be safe against
> forgery. Only system programmers/compilers could write potentially unsafe
> code. We still have to trust the system back-end; but hey - you must always
> trust someone, mustn't you ?

Gotcha.  And I suppose a step down from foolproof would be worth the 
major speed increase.
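To make the trade-off concrete: the back-end checks safety once, at 
translation time, and pays nothing afterwards.  Here's a toy C sketch of 
what such a check might look like for an invented three-opcode stack IL 
(the opcodes and encodings are made up for illustration; they are not 
the actual LLL):

```c
#include <stddef.h>
#include <stdint.h>

/* Invented intermediate-code opcodes for the sketch. */
enum { OP_PUSH = 0, OP_ADD = 1, OP_SEND = 2 };

/* Returns 1 if the instruction stream is safe to translate to native
   code: only known opcodes, no truncated instructions, and the
   simulated operand stack never underflows. */
int verify_il(const uint8_t *code, size_t len)
{
    size_t pc = 0;
    int depth = 0;                     /* simulated stack depth */

    while (pc < len) {
        switch (code[pc]) {
        case OP_PUSH:                  /* PUSH imm8: one operand byte */
            if (pc + 1 >= len) return 0;
            pc += 2;
            depth += 1;
            break;
        case OP_ADD:                   /* ADD: pops two, pushes one */
            if (depth < 2) return 0;
            pc += 1;
            depth -= 1;
            break;
        case OP_SEND:                  /* SEND: pops the receiver */
            if (depth < 1) return 0;
            pc += 1;
            depth -= 1;
            break;
        default:
            return 0;                  /* unknown opcode: reject */
        }
    }
    return 1;
}
```

Anything that passes can be translated to raw binary with no run-time 
checks at all, which is where the speed increase comes from; anything 
that fails is rejected before it ever becomes native code.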