It was our last best hope for peace...

Kragen kragen@pobox.com
Thu, 8 Oct 1998 15:52:56 -0400 (EDT)


On Thu, 8 Oct 1998, P T Withington wrote:
> You could require that all your programs be written in a safe language.  
> That wouldn't prevent someone from intentionally making unsafe operations 
> (they could write their own assembler, etc.) which is why you need 
> hardware support to make a system secure against malicious users.

With current technology, making a system secure against someone who has
physical access to it requires large quantities of high explosives.

If you only provide remote access, though, you can prevent someone
from intentionally running unsafe programs just as easily as you can
prevent them from corrupting the filesystem.

In Unix, the integrity of the filesystem depends on several things,
one of which is that special files called "directories" satisfy
certain validity constraints.  Unix enforces this by keeping a
special bit on every file that tells whether or not it is a
directory.  If the bit is clear, the user can put whatever content
they want into the file.  If the bit is set, the user cannot modify
the file directly at all; only the OS kernel can modify it, and it
does so only in ways that preserve the filesystem's integrity.

There's another constraint: it is not possible to set or clear that bit
on a file that already exists.
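
For concreteness, that bit is visible from user space through
stat(2): the file-type bits in st_mode are set when the inode is
created (by mkdir, for directories) and can never be changed
afterward, not even by chmod.  A minimal sketch:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 2;
        }
        if (stat(argv[1], &st) == -1) {
            perror(argv[1]);
            return 1;
        }
        /* S_ISDIR tests the kernel-maintained file-type bits in
           st_mode; user programs can read them, but only the
           kernel writes directory contents. */
        printf("%s is %sa directory\n", argv[1],
               S_ISDIR(st.st_mode) ? "" : "not ");
        return 0;
    }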

It's entirely possible to keep a bit in the filesystem that indicates
whether or not a particular file is the output of a trusted compiler,
and restrict access to that file in the same way that Unix restricts
access to directory files.

Under this scheme, you could prohibit anyone from using their own
assembler: they could write one, but they couldn't run its output.
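
Here's a toy user-space model of the exec-time check that would
enforce this.  S_ITRUSTED is an invented bit (no real Unix has it)
standing for "this file is the output of the trusted compiler";
like the directory bit, it couldn't be written by ordinary means or
toggled after creation:

    #include <stdio.h>

    #define S_ITRUSTED 0x8000  /* hypothetical "trusted compiler" bit */

    struct file { const char *name; unsigned mode; };

    /* The check exec() would make: refuse to run any binary that
       doesn't carry the trusted-compiler bit. */
    int try_exec(const struct file *f)
    {
        if (!(f->mode & S_ITRUSTED)) {
            fprintf(stderr, "%s: not trusted-compiler output; refusing\n",
                    f->name);
            return -1;
        }
        printf("%s: ok to exec\n", f->name);
        return 0;
    }

    int main(void)
    {
        struct file compiled = { "a.out",   S_ITRUSTED };  /* compiler output */
        struct file homebrew = { "asm.out", 0 };  /* your own assembler's */
        try_exec(&compiled);
        try_exec(&homebrew);
        return 0;
    }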

Software distributed for such a system would have to be in a form that
the trusted compiler would accept as input.

Replacing the trusted compiler would be, at worst, as difficult as
replacing the kernel, although you could make it easier.

> I don't know how Be do it, since I thought they just programmed in C++, 
> which makes it very easy to unintentionally screw up.
> 
> Hardware pointer security doesn't have to mean separate address spaces.  
> Capability machines provide hardware pointer security, essentially on an 
> object-by-object basis.

Did the Symbolics machines do this, too?  Or was it possible to forge a
pointer?

How do capability machines handle memory allocation, virtual memory,
and the other tasks that normally require violating the type system?
Are there papers about these monsters on the Web?

Kragen

-- 
<kragen@pobox.com>       Kragen Sitaker     <http://www.pobox.com/~kragen/>
A well designed system must take people into account.  . . .  It's hard to
build a system that provides strong authentication on top of systems that
can be penetrated by knowing someone's mother's maiden name.  -- Schneier