Thoughts on provably secure systems
Paul_Campbell@ENGELHARD.COM
Tue, 11 Jan 2000 13:51:45 -0500
I've been looking at the papers floating out there on provably secure systems
and simultaneously thinking about operating systems using them.
Here's the basic problem: if you write your own stuff in a vacuum, you're
probably not interested in much more than the compiler guaranteeing that you
can't crash your own hardware. You'd like to have as much support for
finding/fixing bugs as possible, but you don't really want it to interfere
with getting to the bottom line any more than necessary... code that does
what you want.
As a user somewhere else, if I want to use your code, I also want to make
sure that your code won't mess up my files or my hardware (crash/disable
it), steal any of my secrets, or steal CPU cycles and such.
I claim that the security problem can be broken down like this:
A program must not use any resources outside those it is authorized to use.
There are 2 approaches. The current traditional approach (including Java) is
that you lock the program down inside a box. You make holes in the box
through which the program can access the outside world (hardware, other
software, data). And hopefully the holes are not so big that the program can
break security.
Also in this model, the program itself is treated more or less as a black
box. Nobody looks behind the curtain to see what the Wizard is actually
doing, except if they are using a disassembler/debugger on it for some
purpose.
In the second approach, being promoted as "provably correct code", you have
either proofs or verification of proofs that the program will not misbehave.
Essentially, before running the program, you scan it to make sure that it
will not misbehave (end of the black box approach). In that case, the box
over the program is also not necessary and the program can run out in the
open without security constraints. Conceptually, this is more efficient.
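
As a rough sketch of the shape of that (verify() here is just a stand-in
for a real proof checker, not anybody's actual API, and all the names are
made up): check once, then run in the open with no box and no runtime
checks.

    #include <stdio.h>

    typedef void (*entry_fn)(void);

    static void untrusted_entry(void) { puts("running unboxed"); }

    /* Hypothetical stand-in: a real checker would verify the attached
     * proof against the code before returning nonzero. */
    static int verify(entry_fn f) { (void)f; return 1; }

    int main(void)
    {
        entry_fn prog = untrusted_entry;
        if (verify(prog))
            prog();                      /* no box, no runtime checks */
        else
            fprintf(stderr, "proof rejected\n");
        return 0;
    }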
The problem is this: how far can "provably correct code" be generalized to
cover all aspects of resource utilization?
The operating system I am thinking of runs all programs in the same address
space, since the PCC verifier guarantees that any program will not violate
its address space. This means all the security for memory management goes
away and "system calls" become simple jump instructions... no context
switches to suck up CPU time. And the various papers on PCC pretty much lay
out that this is a non-issue.
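
To make that concrete, here's a minimal sketch in C (sys_table, SYS_WRITE,
and friends are all hypothetical names): a "system call" is just an
indirect call through a table of verified entry points, no trap and no
context switch.

    #include <stdio.h>
    #include <stddef.h>

    typedef long (*syscall_fn)(const void *arg, size_t len);

    /* Stand-in for a real kernel service. */
    static long sys_write(const void *arg, size_t len)
    {
        return (long)fwrite(arg, 1, len, stdout);
    }

    enum { SYS_WRITE = 0, SYS_MAX };

    /* The verifier would check that programs only index this table
     * with constants it has approved. */
    static syscall_fn sys_table[SYS_MAX] = { sys_write };

    int main(void)
    {
        const char msg[] = "hello from verified code\n";
        /* A "system call" compiles down to an ordinary call/jump. */
        sys_table[SYS_WRITE](msg, sizeof msg - 1);
        return 0;
    }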
What I'm really interested in, then, is CPU time usage. Now, with a
verifier, the program can be shipped as a tree of code blocks, essentially
the output of the compiler at some stage of compilation where loops are
still visible and some code motion is still possible. The loops are
necessary anyway to provide information to the checker for handling memory
management (such as induction variables).
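
A sketch of what that shipped form might look like (the field names are
made up); the point is that a worst-case cost falls out of a simple walk
over the tree, because the loop bounds are still visible:

    #include <stddef.h>
    #include <stdio.h>

    enum block_kind { BLOCK_STRAIGHT, BLOCK_LOOP };

    struct code_block {
        enum block_kind     kind;
        unsigned long       max_cycles;   /* worst-case cost of one pass */
        unsigned long       loop_bound;   /* proven max iterations (loops) */
        struct code_block **children;     /* sub-blocks, e.g. a loop body */
        size_t              n_children;
    };

    /* Worst case for a subtree: straight-line cost plus the children,
     * multiplied by the iteration bound if this block is a loop. */
    unsigned long wcet(const struct code_block *b)
    {
        unsigned long total = b->max_cycles;
        for (size_t i = 0; i < b->n_children; i++)
            total += wcet(b->children[i]);
        if (b->kind == BLOCK_LOOP)
            total *= b->loop_bound;
        return total;
    }

    int main(void)
    {
        struct code_block body = { BLOCK_STRAIGHT, 10, 0, NULL, 0 };
        struct code_block *kids[] = { &body };
        struct code_block loop = { BLOCK_LOOP, 2, 100, kids, 1 };
        printf("worst case: %lu cycles\n", wcet(&loop)); /* (2+10)*100 */
        return 0;
    }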
Now, in this form, pre-emptive multitasking (yet another context-switch-like
operation) is not really necessary. The scheduler can simply execute
individual blocks of code, monitoring deadlines as it progresses. A
linker-like step is necessary because if the granularity of the blocks is
too large, the blocks become unschedulable; if the granularity is too fine,
the scheduler itself becomes the biggest resource bottleneck.
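
Something like this loop, minus all the real machinery (the task
representation is hypothetical, and run_block() stands in for jumping into
the task's next verified block):

    #include <stdio.h>
    #include <time.h>

    #define NTASKS 2

    struct task {
        const char *name;
        int   next_block;        /* which block runs next */
        int   n_blocks;          /* blocks remaining */
        long  deadline_ms;       /* absolute deadline, ms since start */
    };

    static long now_ms(void)
    {
        return (long)(clock() * 1000 / CLOCKS_PER_SEC);
    }

    static void run_block(struct task *t)
    {
        /* Placeholder for jumping into the task's next verified block. */
        printf("%s: block %d\n", t->name, t->next_block++);
        t->n_blocks--;
    }

    int main(void)
    {
        struct task tasks[NTASKS] = {
            { "net",   0, 3, 50 },
            { "print", 0, 2, 80 },
        };
        int live = NTASKS;

        while (live > 0) {
            for (int i = 0; i < NTASKS; i++) {
                if (tasks[i].n_blocks <= 0)
                    continue;
                run_block(&tasks[i]);   /* one block = one "time slice" */
                if (tasks[i].n_blocks == 0)
                    live--;
                if (now_ms() > tasks[i].deadline_ms)
                    fprintf(stderr, "%s missed its deadline\n",
                            tasks[i].name);
            }
        }
        return 0;
    }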
Hard real time can be guaranteed by examining information in the headers and
the blocks to determine whether or not the application is schedulable...
this is a dynamic scheduling system.
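
One possible admission test, assuming EDF-style scheduling of independent
periodic tasks with worst-case times taken from the headers (the numbers
are purely illustrative): admit the set iff total utilization stays at or
below 1.

    #include <stdio.h>

    struct app_hdr { double wcet_ms; double period_ms; };

    static int schedulable(const struct app_hdr *a, int n)
    {
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += a[i].wcet_ms / a[i].period_ms;
        return u <= 1.0;   /* EDF bound for independent periodic tasks */
    }

    int main(void)
    {
        struct app_hdr apps[] = { { 2, 10 }, { 5, 20 }, { 10, 40 } };
        printf("admit: %s\n", schedulable(apps, 3) ? "yes" : "no");
        return 0;
    }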
BUT... if the program contains required execution deadlines, then the
verifier/scheduler needs to be able to determine the execution time
requirements. Under PCC in a generalized form (such as LXres), this consists
of meta-information about the program's execution time. The problem is that
the scheduler's interpretation of the meta-information may itself violate
the execution time limitations, without even executing the actual code!
Rather than determining actual execution time (a la LXres or other PCC
systems), it would be better to determine a MAXIMUM execution time. This
limits the types of programs that would be schedulable to some degree. All
programs for which the maximum execution time is indeterminate (or very
large) would have to be broken down into, say, a short interrupt handling
thread (with finite execution time) that queues blocks of data up for a
second thread which has no maximum execution time but can do the more
complex operations. Or the indeterminate program can contain a loop with a
fixed, definite counter which limits execution to, say, scanning some decent
number of nodes in a graph problem, and terminates gracefully and
prematurely on its own if the limit is exceeded. Then the PCC system will
look at the loop induction variable (the counter) as the execution
limitation.
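
The counter pattern looks like this in C (the graph here is a toy ring, and
MAX_VISITS is the bound the verifier would read off the induction
variable):

    #include <stdio.h>

    #define MAX_VISITS 1000      /* the proven loop bound */
    #define NNODES     8

    /* Tiny ring "graph" for illustration. */
    static int next_node[NNODES] = { 1, 2, 3, 4, 5, 6, 7, 0 };

    /* Returns the node where 'target' was found, or -1 if the budget
     * ran out first. */
    int bounded_scan(int start, int target)
    {
        int node = start;
        for (int visits = 0; visits < MAX_VISITS; visits++) {
            if (node == target)
                return node;
            node = next_node[node];
        }
        return -1;               /* premature but graceful termination */
    }

    int main(void)
    {
        printf("found: %d\n", bounded_scan(0, 5));
        return 0;
    }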
Now, from this, the meta-information can still be present, but the form of
the meta-information (complexity) can be strictly specified and limited so
that execution time can be determined by the scheduler in a finite amount of
time.
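
For instance, if the meta-information is pinned down to a fixed shape like
the struct below (hypothetical fields: a constant cost, a per-iteration
cost, and a proven iteration bound), the scheduler evaluates it in constant
time instead of interpreting an arbitrary proof:

    #include <stdio.h>

    struct time_meta {
        unsigned long const_cycles;    /* straight-line cost */
        unsigned long per_item_cycles; /* cost of one loop iteration */
        unsigned long max_items;       /* proven bound on iterations */
    };

    /* O(1) worst-case bound; no proof interpretation at schedule time. */
    static unsigned long max_cycles(const struct time_meta *m)
    {
        return m->const_cycles + m->per_item_cycles * m->max_items;
    }

    int main(void)
    {
        struct time_meta m = { 200, 15, 1000 };
        printf("bound: %lu cycles\n", max_cycles(&m)); /* 200 + 15*1000 */
        return 0;
    }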
I can envision similar "scheduling/verification" systems being constructed
for printer spoolers, disk reads/writes, network packet reads/writes, and
the interface could be made generic enough to support any system.
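
A sketch of what "generic enough" might mean (all names hypothetical): each
resource exposes the same admit/submit pair, and a spooler is just one
instance of the interface.

    #include <stdio.h>
    #include <stddef.h>

    struct resource_sched {
        const char *name;
        int  (*admit)(unsigned long cost);     /* fits reservations? */
        void (*submit)(const void *req, size_t len); /* one work block */
    };

    /* Trivial printer-spooler instance of the interface. */
    static int  spool_admit(unsigned long cost) { return cost < 4096; }
    static void spool_submit(const void *req, size_t len)
    {
        fwrite(req, 1, len, stdout);
    }

    int main(void)
    {
        struct resource_sched spooler =
            { "spooler", spool_admit, spool_submit };
        const char job[] = "a print job\n";
        if (spooler.admit(sizeof job))
            spooler.submit(job, sizeof job - 1);
        return 0;
    }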
The last question is whether you can guarantee termination. Termination
could only be guaranteed if only certain types of loop constructs are
allowed, where the loop termination conditions are controlled. And this
means loops in a general sense, since recursive code is essentially a type
of loop. A generalized loop construct may never terminate. This means that a
"kill" function is still necessary. A time limit could be set on execution
of untrusted code (since schedulability is necessary); otherwise, untrusted
code could consist of nothing but cycle stealers.
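
The kill can be as dumb as a cycle budget enforced by the scheduler, along
these lines (the budget numbers are arbitrary):

    #include <stdio.h>

    struct untrusted { const char *name; unsigned long budget; };

    /* Charge one block's cost; returns 0 when the task must be killed. */
    static int charge(struct untrusted *t, unsigned long cost)
    {
        if (cost > t->budget) {
            fprintf(stderr, "%s: budget exhausted, killing\n", t->name);
            return 0;
        }
        t->budget -= cost;
        return 1;
    }

    int main(void)
    {
        struct untrusted t = { "cycle-stealer", 100 };
        unsigned long block_cost = 40;
        while (charge(&t, block_cost))
            puts("ran one block");
        return 0;
    }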
With the proof verification system in hand, the "kernel/user space" boundary
disappears. You could download the latest "HP printer driver from
hacks-r-us with the new grooving printer head music function". Give the
printer driver access only to the hardware printer port and the spooler
interface. You can't in general guarantee that the driver will actually
print anything (this is the fundamental problem with "proving programs
correct"), but you can at least guarantee that it won't trash anything but
the printer.
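
"Access only to the printer port and the spooler" could look like this:
the downloaded driver is handed a capability struct and nothing else (the
names are made up, and the stubs stand in for real hardware):

    #include <stdio.h>

    struct printer_caps {
        void (*port_write)(unsigned char byte);   /* hardware port */
        int  (*spool_next)(unsigned char *byte);  /* pull from spooler */
    };

    /* The untrusted driver sees nothing but 'caps'. */
    void driver_main(const struct printer_caps *caps)
    {
        unsigned char b;
        while (caps->spool_next(&b))
            caps->port_write(b);  /* maybe it plays music instead */
    }

    /* Stub implementations so the sketch runs. */
    static const char *spool_data = "page one\n";
    static int spool_pos;

    static void stub_port_write(unsigned char byte) { putchar(byte); }
    static int  stub_spool_next(unsigned char *byte)
    {
        if (!spool_data[spool_pos]) return 0;
        *byte = (unsigned char)spool_data[spool_pos++];
        return 1;
    }

    int main(void)
    {
        struct printer_caps caps = { stub_port_write, stub_spool_next };
        driver_main(&caps);
        return 0;
    }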
I hope this makes sense... but I'm trying to approach the idea of a "totally
redesigned OS" (like TUNES) from the standpoint of what is theoretically
attainable. The idea I'm suggesting opens up the OS almost completely, but
forces processes into three groups (sketched below):
1. Some processes have a maximum execution time which can be mechanically
checked.
2. Some processes do not have an easily measured execution time but will
terminate.
3. A few processes will never actually terminate, but they can't be
distinguished from #2 without a more fundamental understanding of program
execution.
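
As a classification the verifier could hand to the scheduler (the halting
problem is exactly why groups 2 and 3 collapse to the same runtime
treatment, a budget and a kill):

    enum proc_class {
        PROC_BOUNDED,      /* 1: mechanically checkable max exec time */
        PROC_TERMINATING,  /* 2: terminates, but no easy bound */
        PROC_UNKNOWN       /* 3: may never terminate; looks like 2 */
    };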