Sun, 11 May 1997 06:11:10 -0400 (EDT)
> > That's fine, now tell me when I press my shiny 'shutdown button'
> > do I then go and say 'hellllloooooooooo, everyone do your persistence
> > stuff, because the power is going away'. Then I need additional mechanism
> > to manage this, and I need programs to comply with protocols, and eventually
> > I may need to turn off the power not being sure that everyone has
> > persisted stuff. I'd call this a bad thing(tm).
> That's not true. You hit the "shutdown button" and the operating system
> stores each process's state, byte by byte. This need not have ANYTHING to
> do with the process's usual mechanisms for persistence. Your typical
> $1000.00 laptop does this with operating systems as brain-dead as
> Windows 95. I have been told that the reason desktop computers do not do
> it properly is because of device driver initialization problems, but a
> new, from-scratch OS should be able to put sufficient distance between
> apps and devices that this is no longer a problem.
Right, so you want the same method, only with all the overhead placed at the
end, and with no ability to recover from unexpected powerdowns at all?
Rather than putting that rather negligible overhead into idle time, as I've
shown it's quite possible to do.
> My point is that we've reintroduced much of the work that seemed like it
> would magically "go away" if we didn't have to deal with files. As long
> as users maintain their own hard disks, applications will have to deal
> with filenames and meta-data (e.g. paths). What we *can* get rid of is
> the painful step of manually recreating objects from bytes.
> There will also be the temptation to NOT register data in the
> filename/OO-system registry. "My web-browser program's state is taking
> up 20MB!!! There is this array called *crobjstack* in the middle of an
> object called *cslistwatcher* that is 19MB but I don't know what it is."
> The traditional requirement for software developers to explicitly place
> things in the user space (the file system) results in them giving things
> reasonable names.
There is a school of thought called 'bondage and discipline'. Its basic
tenet is that people are too stupid to be trusted, so we'll whip them until
they do things in what some arbitrary person considers "The One True Way".
In many respects the point of reflective and open-implementation systems
is to say: yes, we can trust the user, we will even trust them to
reimplement the system.
> > Calling it is the problem. Especially when you want a chance to recover when
> > that idiot in the next cubicle accidentally unplugs your machine to make
> > the coffee machine go.
> It is possible (but rare) that the system could lose power when it is in
> an uncertain state where one object has sent a message and another has
> not received it. Personally, I am not willing to take a performance hit
> in order to get around this relatively rare problem. I don't even want
> to think about what garbage collection of a 7GB drive looks like.
Use a GC that runs in the background using up only idle time.
Use a generational gc.
It's not a big deal.
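For anyone who hasn't seen one, here's a toy sketch of the generational idea. It's deliberately simplified (reachability here is just membership in a flat `roots` set, with no real pointer tracing), and the class and method names are all made up for illustration; the point is that the frequent, cheap collections only ever touch the small young generation, while the big old generation is swept rarely, in idle time:

```python
class GenerationalGC:
    """Toy generational collector: the young generation is scanned
    often and cheaply; survivors get promoted to the old generation,
    which is only swept occasionally (e.g. in idle time)."""

    def __init__(self):
        self.young = {}    # oid -> object, newly allocated
        self.old = {}      # oid -> object, survived a minor collection
        self.roots = set() # oids the program still references

    def alloc(self, oid, obj):
        # New objects always start young; most die there.
        self.young[oid] = obj

    def minor_collect(self):
        # Cheap and frequent: only the (small) young generation
        # is examined. Survivors are promoted.
        survivors = {oid: o for oid, o in self.young.items()
                     if oid in self.roots}
        self.old.update(survivors)
        self.young.clear()

    def major_collect(self):
        # Rare: sweep the old generation too, during idle time.
        self.old = {oid: o for oid, o in self.old.items()
                    if oid in self.roots}
```

Since the vast majority of objects die young, a 7GB store mostly sits in the old generation and is almost never touched, which is why the "GC over the whole disk" scenario doesn't actually come up.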
> To summarize:
> #1. If something is important enough that it should survive a reboot,
> give it a NAME. An OOFS (instead of a Transparent Persistent Store)
> enforces this. A "transparent store" that requires everything to have
> user-level names isn't that transparent anymore. You must still do a
> bunch of the work you were hoping to avoid: naming things and organizing
Yeah, lets whip that user :)
Lots of things don't belong in the global registry. And hell, I may want
to make up lots of different ways to hold data.
Me: Hey, I'd like to do it in this interesting novel way, it could be
Me: Ouch ouch, ok. I'll just sit in the corner and imagine it.
> #2. If you want to shut down and store EVERYTHING (which is a good idea)
> then that can be done by the operating system with no special filesystem
> or application support. But between shutdowns you don't need to store
> everything so you can actually save some disk space (equivalent to the
> amount of RAM you have).
Really, an application should only reference stuff that it wants to use.
If it's using it, and I shift the clock forward 16 hours, it probably
still wants to use it; otherwise it can stop referencing it and let gc
take its natural course.
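Python's weak references make a handy sketch of this, assuming a persister that tracks live objects through a weakly-keyed table (the `State` class and the `cslistwatcher` key are just illustrative names borrowed from the example above): state the application has stopped referencing disappears from the table by itself, so it is never written out at suspend time in the first place:

```python
import gc
import weakref


class State:
    """Stand-in for some large piece of application state
    (the hypothetical 19MB *crobjstack* from the example above)."""

    def __init__(self, data):
        self.data = data


# The store tracks live objects weakly: when the application drops
# its last reference, the entry vanishes on its own, and a persister
# that walks this table at suspend time simply never writes it out.
live = weakref.WeakValueDictionary()

page = State("19MB of browser history")
live["cslistwatcher"] = page     # registered while actually in use

del page                         # application stops referencing it
gc.collect()                     # let gc take its natural course
```

No registry discipline is needed: the 19MB mystery array stops costing disk space the moment nothing references it.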
And yes, this is what I think turning off the machine should be like: just
shifting the clock forward, and having sockets, etc. disconnect.
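A minimal sketch of what that looks like from the program's side, with all names (`ClockShiftResumer`, `tick`, the callback registry) invented for illustration: on resume the runtime notices the wall clock jumped, and only the time-sensitive resources (sockets, leases) get told to reconnect. Everything else in the image just continues where it left off:

```python
import time


class ClockShiftResumer:
    """Power-off modelled as a clock jump: when the gap between
    two ticks exceeds a threshold, fire the registered callbacks
    so sockets and similar resources can reconnect. Nothing else
    in the program needs to know a suspend ever happened."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.on_clock_jump = []        # callbacks for sockets, etc.
        self.last_seen = time.time()

    def register(self, callback):
        # e.g. register(socket_pool.reconnect_all)
        self.on_clock_jump.append(callback)

    def tick(self, now=None):
        # Called periodically by the runtime; `now` is overridable
        # here only so the behaviour can be demonstrated.
        now = time.time() if now is None else now
        gap = now - self.last_seen
        self.last_seen = now
        if gap > self.threshold:
            for cb in self.on_clock_jump:
                cb(gap)
```

The disconnect handling is exactly the code a program already needs for a flaky network, which is why the suspend case adds no new protocol for applications to comply with.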
If other people find this idea objectionable in itself, that's cool.
So far the overhead argument doesn't seem convincing, and this allows
all the other proposed systems to be built happily on top, whereas
the reverse isn't true.
Now, I'm not saying that this is the One True Way, just that the arguments
against it don't seem terribly convincing, and it gives you that level of
freedom to implement the other proposals that I've seen, as well as to
not implement them, which to me is rather more important.
This reminds me of the 'gc is evil' arguments with people who haven't
used lisp, and are in those first few years where C still feels cool.