Starting w/Linux (or something else) and gradually replacing it w/Lisp
cwg@DeepEddy.Com
Sat, 03 May 1997 15:28:29 -0500
Somewhere in that huge pile of email I just read through, someone expressed the
concern that if you start with Unix and gradually replace pieces of it with
Lisp, what you get is Unix written in Lisp.
If we're careful about the interface layer between the old Unix code and the
new Lisp code, I don't think this needs to be a concern. The Unix code should
always have a Lisp wrapper put around it that generalizes whatever the Unix
code does and makes it look (from the outside) like it's implemented using Lisp
objects. The contract for the methods on these objects would be defined so that
the Unix code is capable of implementing the minimal functional specification,
but other functionality could easily be put in a native Lisp implementation.
For instance, since I was thinking about paths and filesystems, the first
version of the interface to a file system would only support files on the
local host that do not have version numbers or extensions. This module might
be replaced with some Lisp code that recognizes a new "protocol" field; if
that field is equal to "file", it calls the old code, otherwise it calls some
newly written code for handling URLs. Later, the submodule that deals with
the file protocol would be replaced with some Lisp code that supports our new
Lisp file system, which knows about version numbers, extensions, soft
deletion, etc.
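To make that concrete, here's a minimal CLOS sketch of the dispatch described
above. Every name in it (LOCATION, OPEN-LOCATION, OPEN-UNIX-FILE, OPEN-URL) is
hypothetical, and the two stubs just stand in for the old Unix wrapper and the
new URL code:

```lisp
;; Hypothetical sketch: a LOCATION object with a PROTOCOL field.
;; OPEN-LOCATION hands "file" locations to the old Unix-backed code
;; and everything else to newly written Lisp code.

(defclass location ()
  ((protocol :initarg :protocol :reader location-protocol)
   (name     :initarg :name     :reader location-name)))

;; Stubs standing in for the real implementations.
(defun open-unix-file (name)
  (list :unix-stream name))               ; would call down to the Unix wrapper

(defun open-url (loc)
  (list :url-stream (location-name loc))) ; native Lisp URL code

(defgeneric open-location (loc)
  (:documentation "Return a stream-like object for LOC."))

(defmethod open-location ((loc location))
  (if (string= (location-protocol loc) "file")
      (open-unix-file (location-name loc))
      (open-url loc)))
```

Replacing the file protocol later means changing only the OPEN-UNIX-FILE leg;
callers of OPEN-LOCATION never notice.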
Because the Lisp side of the interface is at a much higher level than the Unix
side, we can always replace the Unix stuff with something that does more.
When we're done, we have something that doesn't look much at all like Unix,
but which is still capable of handling the limited set of cases that Unix
could have handled.
This *does*, however, mean that all important parts of the LispOS-to-Unix
layer must be designed with the future in mind and not just slapped together.
We'd need mixins for our ELF objects that do things like convert keyword
arguments to a Lisp function into GNU-style --arguments, and the like. A call
to 'tar' would not use the low level:
(apply (find-elf-binary "tar") "-cvf" "foo.tar" filelist)
but instead we'd wrap something around it that allows us to say something like:
(make 'tarball :path (make 'path :name foo)
               :files filelist
               :verbose *standard-output*)
This would return a first-class object which we could refer to. We might also
generalize tarballs to have the same methods as a directory in a file system,
so we could access the files in them exactly the same way we'd access files
in a directory. When the code got rewritten in Lisp, we could remove the
requirement that the tarball have a path associated with it, so we could
handle a tarball that didn't exist on a physical medium. Note how this
removes Unix semantics from a Unix concept w/o sacrificing interoperability.
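Here's a sketch of that tarball/directory generalization, again with all names
hypothetical: both classes answer the same LIST-FILES generic function, and
nothing about TARBALL requires a backing path:

```lisp
;; Hypothetical sketch: tarballs and directories share one protocol.

(defclass fs-directory ()
  ((files :initarg :files :reader directory-files)))

(defclass tarball ()
  ((path  :initarg :path :initform nil :reader tarball-path) ; optional!
   (files :initarg :files :reader tarball-files)))

(defgeneric list-files (container)
  (:documentation "Return the names of the files in CONTAINER."))

(defmethod list-files ((d fs-directory)) (directory-files d))
(defmethod list-files ((tb tarball))     (tarball-files tb))
```

So (list-files (make-instance 'tarball :files '("a.lisp" "b.lisp"))) works
with no path at all, which is exactly the semantics the native Lisp rewrite
would buy us.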
--
Chris Garrigues O- cwg@DeepEddy.Com
Deep Eddy Internet Consulting +1 512 432 4046
609 Deep Eddy Avenue
Austin, TX 78703-4513 http://www.DeepEddy.Com/~cwg/