Security, Persistence, and Reflection (was: Shared address space...)
Thu, 8 May 1997 20:29:59 +0200 (MET DST)
An interesting thread lately was about security under LispOS.
If we want LispOS to be a usable system in a networked world,
we must seriously address security issues.
It would be too bad if the slightest mistake by a lisper,
whether at a console, in a program, or over the network,
could crash the system, destroy or modify important data, etc.
The possibility of a malevolent (deliberately or accidentally so) program
propagating over the net must be seriously considered.
However, one of the main features of LispOS is that software components
can seamlessly exchange objects without the programmer having to manually
encode and decode them into some rigid, unsafe flat format.
My opinion is that the system should have a reflective kernel,
so that it can virtualize execution contexts.
An "execution context" is any metaobject
that defines how objects can be executed.
An example execution context is the initial state of the CMUCL repl,
where objects being executed are in source form.
An execution context involves more than just the "environment"
(set of bindings in accessible namespaces),
as it also involves the way memory is implicitly allocated, etc.
Hence, Chris Garrigues asked:
> what exactly is a "user" in a LispOS?
Well, with a reflective approach, a "user" would be just
a particular execution context.
Hence, to restrict the rights of a program,
you would just need to create an execution context with fewer "rights":
some functions/variables would not be accessible in the environment,
or would have been redefined so as to filter accesses;
the very implementation of the language could be changed so
as to limit or otherwise control resource usage (quotas, etc.).
An implementation might involve modified (or isolated) memory mappings,
much like unix processes; or Lisp-in-Lisp interpretation/compilation;
or tweaking of the assembly-level implementation of Lisp; or whatever.
The result is that we really have some kind of LispVM (which doesn't imply
a canonical representation; much less bytecodes or virtual processor stuff),
that can be reified and modified, so as to control at a very fine grain
the behaviour of programs running inside.
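To make the idea concrete, here is a toy Common Lisp sketch of execution
contexts as first-class objects. All names here (MAKE-CONTEXT, CONTEXT-EVAL,
RESTRICT) are hypothetical illustration, not a proposed API, and the
evaluator is deliberately minimal (atoms evaluate to themselves):

```lisp
;; An execution context maps names to the functions a program may call.
(defstruct context
  (bindings (make-hash-table :test #'eq)))

(defun context-bind (ctx name fn)
  (setf (gethash name (context-bindings ctx)) fn))

(defun context-eval (ctx form)
  "Evaluate FORM, resolving function calls only through CTX's bindings."
  (if (atom form)
      form
      (let ((fn (gethash (first form) (context-bindings ctx))))
        (unless fn
          (error "~S is not accessible in this context" (first form)))
        (apply fn (mapcar (lambda (f) (context-eval ctx f))
                          (rest form))))))

(defun restrict (ctx names)
  "Derive a context granting only NAMES -- i.e., fewer rights."
  (let ((child (make-context)))
    (dolist (name names child)
      (context-bind child name (gethash name (context-bindings ctx))))))

;; Example: a parent context knows + and DELETE-FILE; a sandboxed child
;; keeps arithmetic but loses the ability to delete files.
(defvar *parent* (make-context))
(context-bind *parent* '+ #'+)
(context-bind *parent* 'delete-file #'delete-file)
(defvar *child* (restrict *parent* '(+)))
(context-eval *child* '(+ 1 2))               ; => 3
;; (context-eval *child* '(delete-file "x"))  ; => error: not accessible
```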
Hence, any user can in turn drop some rights when executing programs;
typically, any new access to a global resource would be monitored by the user
[thanks to persistence, you only need do it once per program*access];
a human user would also virtualize the memory usage of sub-users,
so that he always keeps a minimal pool of resources in reserve
in case memory runs short.
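The once-per-program*access monitoring could be sketched as below;
WRAP-WITH-MONITOR and the *GRANTED* table are invented names, and in a real
system the cache would of course live in the persistent store rather than a
transient hash table:

```lisp
;; Cache of (program . resource) -> access decision; persistence makes
;; each question need asking only once per program*access pair.
(defvar *granted* (make-hash-table :test #'equal))

(defun wrap-with-monitor (fn program
                          &key (ask (lambda (res)
                                      (y-or-n-p "Allow ~A access to ~A? "
                                                program res))))
  "Return a version of FN that consults the user once per resource."
  (lambda (resource &rest args)
    (let ((key (cons program resource)))
      (multiple-value-bind (granted seen) (gethash key *granted*)
        (unless seen
          (setf granted (funcall ask resource)
                (gethash key *granted*) granted))
        (if granted
            (apply fn resource args)
            (error "Access to ~A denied for ~A" resource program))))))

;; Example: a wrapped OPEN handed to a sandboxed editor; here the policy
;; function grants everything without prompting.
(defvar *safe-open*
  (wrap-with-monitor #'open 'editor
                     :ask (lambda (res) (declare (ignore res)) t)))
```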
All in all, you have dynamic fine-grained control of resource usage,
instead of the unix pseudo-security, where the unit of protection is
the whole user (with the horrible need to give almighty root access
to daemons that must impersonate users, such as sendmail,
or to programs accessing some hardware, such as X).
[David Gadbois perfectly summarized the case against coarse-grained
"protection" as only providing pseudo-security.]
Then comes the problem of having multiple address spaces or not.
As for the assembly-level details, these should be *completely*
implementation dependent. Any OS whose abstract semantic design
depends on weird hardware tricks is fundamentally braindead.
This does not mean that using multiple address spaces should be
a precluded option for implementers; it might or it might not be
useful, depending on the particular task to do and hardware at hand.
This means that multiple address spaces should remain the implementation
trick they are, without ever being mentioned in the OS specs.
MMUs are very expensive beasts that cost a lot in silicon,
memory access time, power consumption, inter-CPU synchronization in SMP, etc,
and I see no reason why LispOS should not be seamlessly portable
to cheaper, faster, MMU-less hardware (a LispOS hand-calculator, anyone?
LispOS in space-borne embedded controllers?).
However, there is a real need for LispOS to be able to control
multiple address spaces. For instance, I have multiple disks,
some being *faster* than others, some being *more reliable* than others,
some being *removable*, etc. I want to be able to tell the system
that some data should go on one store, but not another,
that one store usually should be read-only,
that some data should be replicated over several stores
(with soft or hard constraints), that one store should not be used
for "critical" stuff, etc. There also might be lots of stores throughout
the net, that I want to seamlessly access, despite the fact that
these stores trust me more or less, that I trust them more or less,
that we trust the link more or less, etc.
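Such store policies could be expressed declaratively; below is a toy sketch
under a simple property-based model, where REGISTER-STORE and
ELIGIBLE-STORES are invented for illustration and only hard constraints are
shown (soft constraints and replication would need a real planner):

```lisp
;; Each store advertises properties; data placement queries filter on them.
(defstruct store name speed reliable-p read-only-p)

(defvar *stores* (make-hash-table :test #'eq))

(defun register-store (name &key speed reliable-p read-only-p)
  (setf (gethash name *stores*)
        (make-store :name name :speed speed
                    :reliable-p reliable-p :read-only-p read-only-p)))

(register-store 'fast-disk :speed :fast :reliable-p nil)
(register-store 'safe-disk :speed :slow :reliable-p t)
(register-store 'cdrom     :read-only-p t)

(defun eligible-stores (&key need-reliable need-write)
  "Collect the stores satisfying the given hard constraints."
  (loop for s being the hash-values of *stores*
        when (and (or (not need-reliable) (store-reliable-p s))
                  (or (not need-write) (not (store-read-only-p s))))
          collect (store-name s)))

;; "Critical" mutable data must go to a reliable, writable store:
(eligible-stores :need-reliable t :need-write t)  ; => (SAFE-DISK)
```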
Again, Reflection would allow the user and/or system administrator
to dynamically control all those things: by being able to specify
and modify the execution environment of objects, I can control
where objects live and how they are accessed.
Perhaps the CLOS MOP offers enough expressiveness to do that,
but the way it is usually implemented, it seems that this would be
a completely inefficient way to implement
something like configurable persistence
(a reified generic function call at every memory access? Ouch!);
Allowing the underlying implementation details of objects,
such as the store(s) they are bound to, to be *dynamically* modified
seems an even worse fit for the CLOS MOP.
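The cost model is visible even without the full MOP: plain CLOS already
reifies accessor calls as generic-function dispatch, which is exactly where
a naive MOP-based persistence would hook in, on *every* access. A minimal
sketch (the class and accessor names are invented):

```lisp
;; Count how many times slot access goes through generic dispatch.
(defvar *slot-reads* 0)

(defclass persistent-point ()
  ((x :initarg :x :accessor point-x)))

(defmethod point-x :around ((p persistent-point))
  ;; In a persistent system, this is where the object would be
  ;; faulted in from its store -- a full GF call per read.
  (incf *slot-reads*)
  (call-next-method))

(let ((p (make-instance 'persistent-point :x 42)))
  (list (point-x p) (point-x p) *slot-reads*))  ; => (42 42 2)
```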
Conclusion: Reflection must be built deep into the system.
[Side note: Chris Bitmead talked about authentication
of binary code by the system. Indeed, only authorized code
may be executed, and only authorized meta-objects may produce
authorized code. Such meta-objects could include compilers,
cryptographic signature verifiers, etc.; but if we trust the persistent store,
this only needs to be done once per binary object.]
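Once-per-object verification might look like the following sketch, where
RUN-AUTHORIZED is a hypothetical name, DIGEST-FN stands in for a real
cryptographic hash, and the *VERIFIED* cache would live in the trusted
persistent store:

```lisp
;; Digests of code objects the system's meta-objects have blessed.
(defvar *authorized-digests* '())
;; Verdict cache: trusted persistence means each object is checked once.
(defvar *verified* (make-hash-table :test #'eq))

(defun run-authorized (code-object digest-fn)
  "Run CODE-OBJECT only if its digest is authorized; cache the verdict."
  (multiple-value-bind (ok seen) (gethash code-object *verified*)
    (unless seen
      (setf ok (and (member (funcall digest-fn code-object)
                            *authorized-digests* :test #'equal)
                    t)
            (gethash code-object *verified*) ok))
    (if ok
        (funcall code-object)
        (error "Unauthorized code object ~S" code-object))))
```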
PS: all these issues have been discussed at length on the Tunes
mailing list, and some of the WWW pages summarize the discussions a bit.
== Fare' -- firstname.lastname@example.org -- Franc,ois-Rene' Rideau -- DDa(.ng-Vu~ Ba^n ==
Join the TUNES project for a computing system based on computing freedom !
TUNES is a Useful, Not Expedient System