Protocols (was: Oops)
Fare Rideau
rideau@nef.ens.fr
Tue, 19 Aug 1997 00:03:25 +0200 (MET DST)
Dear Alaric,
some of your private replies to tunes list messages seem like they would be
just as well posted to the whole list. I hereby reply to the whole list to
this doubly-private message...
>>: Fare
>: Alaric
>> BTW, PostScript would have been fine if they had designed it as a really
>> clean language, instead of a kludge of an interpreter (PS1 lacked GC,
>> and they used brain-damaged dynamic binding). See Functional-PostScript
>> for a better way to do this.
>
> It's very much a one way language... a structured vector graphics format
> would be much nicer. Is Turing-completeness really necessary for driving
> printers?!?!?!
>
Sure! Why not? Turing-completeness allows for arbitrarily good compression.
Well, ok. I admit something like DVI would be as well suited in most cases.
Turing-completeness allows it as just one standard procedure among others.
Hey, why not write a DVI display manager in PostScript?
Why not have an enhanced compressed DVI as a standard printer format?
Just a few random guesses...
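To make the compression argument concrete, here is a toy sketch (in Python,
with a made-up emit primitive; this is nothing like actual PostScript or DVI):
a document encoded as a program that generates its contents can be arbitrarily
smaller than the same contents spelled out literally.

    # Toy "document": 10000 ruled lines.
    # Literal encoding: one drawing command per line.
    literal = [("line", 0, y, 500, y) for y in range(10000)]

    # Program encoding: a tiny generator that expands to the same commands.
    program = "for y in range(10000): emit(('line', 0, y, 500, y))"

    def run(prog):
        # Interpret the program encoding back into drawing commands.
        commands = []
        exec(prog, {"emit": commands.append})
        return commands

    assert run(program) == literal                     # same semantics,
    print(len(program), "vs", len(repr(literal)), "bytes")  # far fewer bytes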
> I think the Scheme Underground are working on a Scheme-based
> version of Postscript, with more functional semantics (PreScript? :-)
That's the Functional-PostScript [FPS] I've been talking about.
> PS Have you heard of Lout? It's functional TeX.
I've seen some announcements, but haven't looked at it yet.
Do you have pointers?
>> when you have to define protocols,
>> you begin with the semantics that you want to achieve,
>> (e.g. some kind of lambda-calculus for arbitrary computations),
>> and you let the system infer an automatic encoding for it
>> (e.g. the direct specialization of a default standard universal encoding).
>> Once your semantics is debugged, you can modify the underlying encoding,
>> optimize it for speed of operation, after having profiled the use of your code,
>> and defined an abstract model of speed (taking into account parsing,
>> encoding, compression, transmission, decompression, decoding, unparsing,
>> and whatever "steps" there might be).
>
> Right; my design for IPC protocols for ARGON started with "what do we
> want entities to say to each other", and I decided that an imperative-style
> remote procedure call is what we want for initiating communications:
> query/command, response. EG,
>
> "what is your iconic representation?"
> "Here's a 16x16 RGB icon"
>
Remember that in some cases, latency and/or reliability do not allow for
fast round-trips, whereas raw volume can be high: e.g. net-through-disk,
or space transmissions, or even internet transmissions, or anything
that works in burst mode. When facing such situations, we had better
allow as much info as possible to be passed in a single packet.
Of course, the opposite case can happen, when round-trips are much cheaper
than processing (unlikely, but possible); in such cases,
if lots of information has to be processed, partial evaluation
should be used to merge code on both ends of the link.
When this evaluation is not possible or too costly,
an optimistic declarative protocol, instead of an imperative protocol,
with each end maintaining a model of the ongoing communication,
can be used to automatically adapt the communication to the available
bandwidth and round-trip time: i.e. each end continuously sends information
up to the measured available bandwidth, sends advice about how to
enhance the protocol, and takes (or not) advice and information into account
to adjust what it thinks the bandwidth and round-trip time are,
how it should modify its behavior, and what it should send
(which includes declaring any change in further protocol that the other
end should know about).
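Here is a minimal sketch of that optimistic scheme, in Python pseudo-code;
the Endpoint class, the advice fields, and the smoothing constants are all
invented for illustration, and actual measurement, encoding, and transport
are elided.

    from dataclasses import dataclass, field

    @dataclass
    class Endpoint:
        bandwidth: float = 1000.0   # bytes/s, initial optimistic guess
        rtt: float = 1.0            # seconds, initial guess
        outbox: list = field(default_factory=list)

        def tick(self, payload: bytes, peer_advice: dict) -> bytes:
            # Take (or not) the peer's advice into account: here, blend it
            # into our own model of the link by exponential smoothing.
            if "bandwidth" in peer_advice:
                self.bandwidth = 0.8 * self.bandwidth + 0.2 * peer_advice["bandwidth"]
            if "rtt" in peer_advice:
                self.rtt = 0.8 * self.rtt + 0.2 * peer_advice["rtt"]
            # Send continuously, up to what we believe the link sustains
            # per round-trip, without waiting for acknowledgements.
            budget = max(1, int(self.bandwidth * self.rtt))
            chunk, rest = payload[:budget], payload[budget:]
            self.outbox.append({
                "data": chunk,
                # Piggyback advice about how to enhance the protocol.
                "advice": {"bandwidth": self.bandwidth, "rtt": self.rtt},
            })
            return rest   # remainder, to be sent on a later tick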
> However, not all communication is best modelled in this way. It's best for
> starting communications and for other applications where query/response
> is a natural model (database server, for example). Therefore, the public
> namespace exported by an entity is in terms of the message protocol,
> which corresponds conceptually to the well-known TCP ports - 25 for SMTP,
> 21 for FTP, 110 or so for POP3, etc. The other types of communication are
> started with the cooperation of both ends, rather than the "server" sitting
> and waiting for a "call". The message protocol is used to negotiate the
> connection of other protocols, eg a video server (webcam?) might provide
> an interface to be used thus:
>
> A: "Hello, are you a webcam"
> B: "Yes"
> A: "Ok, I want a video stream sent to video-blit-protocol address <unprintable>"
> B: "Ok"
>
> EG, the initiator requests a protocol socket be opened, and is given
> back a portable (netwide) address object that can be passed to the
> other end, which requests communication to that address.
>
Another advantage of declarative communication
is that we only really need to implement raw one-way channels at the low level,
and that these can be arbitrarily coupled and filtered at the high level;
processing can be more easily parallelized, etc.
For instance, with a declarative protocol, sending a file from a world-wide
distributed fileserver to a particular site would not require centralizing
the data (from all over the world!) to a unique handler on the "server" side
before sending it to the "client" (which in turn might dispatch parts of
the file to several hosts). Instead, a declarative protocol allows
communication to go directly from source to target; if a model of the
interconnections is available, multicast targets can be dynamically optimized,
too. Another application of the technology would be trivial load-balancing
among arbitrary networking media: two PPP modems, a satellite antenna,
hand-exchanged floppies or hard disks, CD-ROMs, etc.,
and arbitrary combinations thereof through gateways and servers,
could all be seamlessly integrated into a consistent way
to exchange data between two computers.
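The one-way-channels-plus-high-level-composition idea might look like this
(an illustrative Python sketch; none of these names are a real TUNES or
ARGON API):

    from typing import Callable, Iterable, Iterator

    Channel = Callable[[Iterable[bytes]], Iterator[bytes]]  # raw one-way stream

    def filtered(channel, f):
        # Couple a filter onto a channel, entirely at the high level.
        def run(chunks):
            return channel(f(c) for c in chunks)
        return run

    def balanced(channels):
        # Deal chunks out round-robin over heterogeneous media (two modems,
        # a satellite link, hand-carried disks, ...), then drain each medium.
        def run(chunks):
            queues = [[] for _ in channels]
            for i, c in enumerate(chunks):
                queues[i % len(channels)].append(c)
            for ch, q in zip(channels, queues):
                yield from ch(q)
        return run

    identity = iter                                        # trivial channel
    fast, slow = filtered(identity, bytes.upper), identity # two toy "media"
    assert list(balanced([fast, slow])([b"a", b"b", b"c"])) == [b"A", b"C", b"b"]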
All this means that the default mode for transactions must be
powerful enough to express dynamic change. Of course, the default mode
may depend on the nature of the connection: a communication via
packets-in-email (in a MUA, MTA, or whatever) must start by assuming little,
and be ready to change its assumptions (either way) from real feedback.
A connection through an INET TCP port would start as a pair of coupled
8-bit (clean) links. A modem connection would start (depending on various
settings) as a coupled pair of either 8-bit clean links
or 7-bit unreliable links, to be negotiated using static assumptions
or fully automatic dynamic trial-and-error.
A text terminal would start as a pair of coupled asymmetrical links,
one feeding ASCII codes (or raw keycodes),
the other receiving ANSI sequences (or raw array modifications);
standard programs would arbitrarily multiplex the terminal,
with the ability to independently filter input and output
(which would subsume Linux utilities such as screen, splitvt,
loadkeys, gpm, spy, and more generally the kgi/ggi (de)mux framework,
also making secure tty spies and/or tty loggers, and text or graphic
window systems, a trifle).
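As a sketch of the terminal case (illustrative Python, invented names): two
independent one-way links, each with its own filter chain, so that a keymap,
a tty logger, or a window system is just one more filter:

    class Terminal:
        def __init__(self):
            self.input_filters = []    # applied to keycodes going in
            self.output_filters = []   # applied to ANSI sequences going out

        def feed_key(self, key):
            for f in self.input_filters:
                key = f(key)
            return key

        def emit(self, ansi):
            for f in self.output_filters:
                ansi = f(ansi)
            return ansi

    term = Terminal()
    log = []
    term.output_filters.append(lambda s: (log.append(s), s)[-1])  # tty logger
    term.input_filters.append(str.upper)              # a keymap, a la loadkeys
    assert term.feed_key("q") == "Q" and term.emit("\x1b[2J") == "\x1b[2J"
    assert log == ["\x1b[2J"]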
Again, the secret is: keep as little as possible as atomic
low-level primitives, then build up with high-level tools,
that can maintain a model of operations, and optimize them
accordingly (including dynamically producing specialized low-level code).
>> This is where reflection happens: it is the programming style that
>> ensures that the low-level protocols that you get to define are indeed
>> (by construction) an (exact or approximate) implementation
>> of the high-level semantics. And when there is approximation
>> (such as int32 for an int, etc), the system becomes aware of it,
>> and can (at least in safe mode) statically or dynamically detect "overflows",
>> and take measures, instead of going crazy. If the programmer has an idea
>> of optimization in the encoding, he can go on, try it, and benchmark it,
>> and possibly batch-benchmark several versions of it, and then choose
>> knowingly. And of course, every such benchmark is done on real working
>> optimized code, not on a pseudo "prototype". Several encodings can
>> co-exist, optimized for various uses (least storage, fastest execution,
>> or any imaginable measure), and going from one to the other is trivial,
>> and translators can be inferred automatically by the system
>> (programmer help welcome for optimization).
>
> Right, the implementation of a protocol is well separated from the
> meta-implementation.
>
If you like. Let's say that humans have a well-defined abstract source,
and that computers consistently handle translation of this abstract source
way down to the concrete protocol while preserving its semantics,
and taking into account any implementation hint given by the human.
You might like to call the abstract source the "meta-implementation",
and the binary (or silicon, or atomic, or quantum, or superstringish)
image (or configuration) the "implementation".
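To illustrate the int32-for-int example above (a Python sketch; the names
and the second encoding are made up): the system knows the encoding is
approximate, detects overflow in safe mode instead of going crazy, and can
derive a translator between co-existing encodings through the abstract value:

    INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

    class Overflow(Exception):
        """Raised (in safe mode) when the encoding can't carry the value."""

    def encode_int32(n: int) -> bytes:
        if not INT32_MIN <= n <= INT32_MAX:
            raise Overflow(f"{n} does not fit the int32 encoding")
        return n.to_bytes(4, "big", signed=True)

    def decode_int32(b: bytes) -> int:
        return int.from_bytes(b, "big", signed=True)

    # A second co-existing encoding (decimal text), and a translator
    # between the two, derivable mechanically since both implement
    # the same abstract semantics.
    def encode_text(n: int) -> bytes: return str(n).encode()
    def decode_text(b: bytes) -> int: return int(b)

    def translate(b: bytes) -> bytes:
        return encode_text(decode_int32(b))  # int32 -> text, via abstract value

    assert decode_text(translate(encode_int32(42))) == 42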
>> And I disagree with Jecel's allegation that binary encodings ought to
>> be more complex than text encodings. After all, the latter are a particular
>> case of the former. Anyway, metaprogrammation
>
> -ing is the proper ending in English, BTW; metaprogramming :-)
>
You nitpicker. Mwa frenchais spiike pa bien english,
butt Joan uv'Arc bootais you hors de France!
> Programmation would kind of imply automation of programming, as in
> "Computer, write me a program to do X", but really that's no different
> to "Computer, write me a program that behaves as per this C source
> code", which GCC does quite well.
>
Ok, Ok.
> Okay, so I'd meta-implement this kind of networking protocol with multiple
> revisions of the protocol considered to be different protocols
> at the low level
> interface, but the same at the high level. Therefore, any given system will
> make the highest revision of any protocol available for the construction
> of new communication channels, but would have to retain old ones until
> it is proven that no system on the network will try to initiate with
> the outdated protocol. Getting hold of new versions of the protocol
> as they propagate is a standard distributed database problem, and
> should be the same mechanism used to distribute any other kind
> of software module.
>
Yup. Actually, if the original meta-protocol (from version 1.0.0.0)
is powerful enough, it suffices that a meta-server be permanently
available with it, so that anyone can use it once and automatically upgrade
to the latest technology.
Of course, there are security problems
if you trust old meta-server signatures that have since been cracked,
and someone between you and the meta-server fakes being the meta-server
and sends you bad-guy stuff. Well, the world is not perfect,
and we can still require and verify some kinds of type-safety
and proofs of various properties from upgraded protocols.
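As a sketch of such a check (Python, with HMAC standing in for real
signatures, and all the names invented): an upgrade is accepted only if it
is signed by a still-trusted, non-cracked key and passes whatever local
property checks we can run:

    import hmac, hashlib

    TRUSTED_KEYS = {"meta-server-1997": b"shared-secret"}   # hypothetical
    REVOKED = {"meta-server-1995"}                          # cracked since

    def verify_upgrade(blob: bytes, key_id: str, tag: bytes, checks=()) -> bool:
        if key_id in REVOKED or key_id not in TRUSTED_KEYS:
            return False                  # don't trust cracked old keys
        expected = hmac.new(TRUSTED_KEYS[key_id], blob, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False                  # someone faking the meta-server
        return all(check(blob) for check in checks)  # e.g. type-safety proofs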
== Faré -=- (FR) François-René Rideau -=- (VN) Ðang-Vû Bân -=- rideau@ens.fr ==
Join a project for a free reflective computing system! | 6 rue Augustin Thierry
TUNES is a Useful, Not Expedient System. | 75019 PARIS FRANCE
http://www.eleves.ens.fr:8080/home/rideau/Tunes/ -=- Reflection&Cybernethics ==