Schools for OS/PL/DS Research

Francois-Rene Rideau fare@tunes.org
Sun, 9 Jul 2000 18:58:45 +0200


[Emerging from my current mailing-list apathy]

On Sat, Jul 08, 2000 at 08:50:00PM -0400, Gary Duzan wrote:
>    Hello everyone. Some of you might remember me from long ago from
> the Moose project and early Tunes days. (If so, you have a better
> memory than I do.)
Hey, Gary! Long time no e-see!

> I'm now looking to head back to school to work on a Ph.D., with a
> focus on the application of programming language technologies to
> building systems software and distributed systems. [...]

* There are lots of language and systems people at CMU, and Fox is the
 code-carried proof that they can achieve integrated solutions when they want to.
* At Georgia Tech, there's Olin Shivers, who was doing ML/OS at MIT,
 which is kind of the successor to Fox (SML/NJ+OS-Kit+Fox userland+MIT hacks);
 Olin is such a great and nice guy, any language+system hacker would love
 to work with him.
* MIT might still host the ML/OS team (including Roland McGrath),
 although I don't know if anyone has taken over since Olin's departure.
 Does anyone have more information? Can anyone investigate?
* At Rice University, there's Matthias Felleisen's great PLT team;
 they've done stuff with MzScheme and the OS-Kit.
* There are lots of interesting people at OGI, Utah, Cornell
 developing language or system technology, but I don't know how much
 of a bent they have towards integrating them in whole systems.
* In Paris, we have the LIP6 collaborating with guys at the INRIA and the
 former CNET (now FTR&D) doing systems+language stuff (hehe) with a strong
 commitment to distributed systems (my boss's lab at FTR&D), and
 master hacker Ian Piumarta at INRIA.

Keep us posted on the progress of your quest.

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
You don't test the validity of a theory by seeing that it says correct
things, but by seeing that it doesn't say incorrect things. What you test
by seeing that it does say correct _and previously unpredicted_ things,
is the interest of a theory you've tested to be valid.