Moose specs: Revision 0!
Dennis Marer
dmarer@td2cad.intel.com
Tue, 9 Feb 93 14:53:30 PDT
Howdy,
Here it is. After many weeks of waiting, I can finally send to you
the specifications for the Moose operating system project...revision 0, of
course! Actually, this is more of a revision -72, as it has *tons* of work
left to be done on it. Last night I finally 'finished' it to a point where
I could send it to all of you and get your input.
Careful - it's over 600 lines long at this point...give yourself
quite a bit of time to read it and understand it. Once you have, I'd like
your *detailed* input on my ideas. Anything you think should be changed,
let me know & we'll talk about it. If you think parts need to be added,
removed, anything, let me know and we'll do it. This is the document from
which we'll be creating an operating system, so it needs to include every
detail, every facet, every nuance of the interface to the operating system.
At this point, it does not! It will probably take several weeks (months?)
to iron out all of the details, but we can start work as soon as the basic
details about the OS are set in concrete.
Keep in touch - hopefully with your suggestions (and the remainder
of my ideas) I'll have revision 1 out by this time next week. We'll keep
going until we're done, even if that means hundreds of revisions... :-)
-------------------------------------------------------------------------------
Moose Operating System
Initial Specifications
Revision 0, 2/8/93
I. General Specifications
A. Introduction
The Moose operating system will be a compact, flexible, and
powerful operating system. It is being designed as a replacement
for older operating systems available for the personal computing
arena, such as DOS. Many of its features will be similar to
existing products, combining the end-user simplicity of the
Macintosh with the networking and multi-user strength of Unix, as
well as the graphical user interfaces found in X Windows, Microsoft
Windows, and OS/2.
Many of the strengths of the Moose operating system stem from
the fact that it will be designed from the ground up with the
object-oriented programming (OOP) paradigm in mind. Common system
entities such as devices and files will each be treated as objects,
each with its own methods and attributes. Devices with similar
functions can be implemented with identical interfaces, allowing
programs written for one device to work universally with all
devices of that class.
For example, a program which creates a graphical screen
display would be able to use the same code to generate a printout
or send a fax just by referencing a different device object. On
the other hand, complex devices could also be given more elaborate
interfaces to allow maximum efficiency in accessing hardware. A
certain amount of universal portability might be lost in this case,
but the application programmer would be left to choose the
appropriate method to meet his or her needs.
B. Resource Requirements
One failing of many recent operating systems is the vast
amount of resources necessary just to support the operating system
itself. This detracts from the usefulness of a system as a whole,
leaving less room for applications. Since most personal computers
today are sold with between 2MB and 8MB of memory, an operating
system and its basic drivers should consume no more than 25% of
available RAM, which equates to 512k on a basic 2MB system. Hard
drive requirements should follow a similar guideline, limiting
necessary usage to 25% of the disk. Unnecessary device drivers, even
those supporting standard devices such as floppy disks or video
cards, should only be loaded onto disk or into memory as the user
deems necessary.
As applications grow more and more complex, software designers
argue that they require additional resources to support their
needs. Most graphical user interfaces, for example, are quite
complicated and require considerable effort on the part of the
application programmer developing software for them. In many ways
modern applications do require more resources, but these could be
reused and shared among similar applications, significantly
reducing the total overhead involved.
C. Platform Issues
This project will be developed concurrently for two major
microprocessors, the Intel 386 and the Motorola 680x0. This will
ensure that the system design remains flexible and avoids features
which cannot easily be implemented on other popular architectures. Once
the basic system has been designed and implemented for both
platforms, much of the application code can be shared between the
two to reduce the amount of work necessary in creating a complete
system.
>> Which base Motorola 680x0 should be used? 020? 030? The final
>> decision should depend on (1) which is prevalent today? (2)
>> which can easily support features such as multi-tasking, virtual
>> memory, etc without external hardware? The idea is to simplify
>> our design task without alienating a large population... :-)
At this time it is also important to note that since there are
relatively few differences between the 386 and 486, no distinction
will be made between the two. Features which are found in the 486
but not the 386 will not be used, in order to promote greater
portability. As the Pentium microprocessor is
released, certain enhancements may be added to improve performance,
and the same holds true for future versions of the Motorola
devices.
As far as possible, the kernel will be designed and
implemented with a generic platform in mind. For example, even
though most Intel 386 systems targeted will be IBM PCs and
compatibles, every effort will be taken to ensure that most of the
code will also work on other 386 based systems. This holds even
more true for the Motorola platforms, as the Macintosh, the Amiga,
and a variety of other systems are built around this processor.
D. Design Principles
Many of the design principles which will be used are taken
from the text "Operating Systems" by Andrew Tanenbaum. In
particular, the operating system should act only as a virtual
extension of the physical machine and as a manager for the
machine's resources. Actual responsibilities will include memory
allocation and access, hardware access, task handling and swapping,
and task protection and priority levels. This will supply a
framework for a complete, flexible operating system which can be
easily extended.
The two most important features designed into the Moose
operating system will be simplicity and flexibility. For one, a
simple design with good expansion capabilities will be easier to
design and maintain than a more complex one which implements every
possible feature. The fewer features implemented, the fewer
problems will arise, and the better the system as a whole will run.
A simple design does not necessarily sacrifice power, and a
flexible system allows new features to be added easily. For
example, both Microsoft Windows and OS/2 implement code
compatibility with other popular 80x86 based operating systems.
This dedicates resources which could otherwise be available to
applications, reducing the power of the system. In a flexible
system, the user could add such capabilities as desired.
As mentioned before, Moose will be developed with the object
oriented paradigm in mind where all system entities will be treated
as objects, especially devices such as video displays, keyboards
and mice, local and networked disk drives, and communication
ports. Objects will be implemented in such a way that they are
usable from an object oriented language such as C++, an object
oriented implementation of Pascal, and so on. This would
eventually allow objects developed in one language to be shared
among applications written in another language.
Mechanisms such as encapsulation, single and multiple
inheritance, and polymorphism will be defined at the operating
system level, increasing the reusability and utility of objects in
general. At the same time, only one copy of an object's code will
need to exist in memory at once in order for it to be used by
several tasks simultaneously.
E. Protection Hierarchy
Three distinct priority levels will exist in this operating
system, each of which will be protected from the others through a
variety of mechanisms. At any one time, each task in the system
will be operating at a specific privilege level which specifies the
extent of memory and hardware access it is granted. A task will
not be able to change its privilege level, but will be allowed
access to a number of functions which operate at a higher privilege
level, returning to the original level once a function has
completed. This will allow even the least privileged tasks to perform
common system functions such as memory allocation and device
access.
The most privileged level, called the "kernel" level, will be
reserved for use by the operating system and its services. It will
require full access to a system's resources, and
will be responsible for performing functions essential to
operation. In addition, the kernel itself will be inherently non-
portable, needing to know the intimate details about the
microprocessor being used and specifically the platform being run
on. For these reasons and also for speed concerns, the kernel
portion of the operating system will be written entirely in
assembly language.
At the middle "device" privilege level, device drivers will
require some special memory privileges and direct access to
specific portions of hardware. While the operating system will not
implicitly grant each device driver full hardware access, a driver
will be able to request access to those areas that it needs. This
will be used to avoid conflicts between various
drivers as well as to prevent misbehaving software from accessing
hardware which could interfere with normal operating system
functions.
Application software will operate at the most restricted
level, the "user" level, requiring no direct hardware access or
special memory access. This means that applications will rely on
device drivers and the kernel itself to perform all privileged
functions. For this reason, only the system administrator will be
allowed to install and activate such device drivers and portions
of the kernel. An interesting side effect of this is that the
application designer is almost forced to write software without
regard to the platform it is being developed for, resulting in
high reusability.
II. The Kernel
The heart of the Moose operating system will be the kernel,
a small but important part responsible for performing functions
essential to the operation of the system as a whole. Its primary
function is to share resources between tasks, including memory,
devices, and CPU time. Any operations not directly related to
resource management will be implemented as device drivers or in
operating system libraries.
It is important that the kernel be as small and as efficient as
possible, so its implementation may be as varied as the types of
microprocessors it runs on. Regardless, its interface will be
constant across platforms, increasing portability of applications
written for the Moose operating system. The kernel will be divided
into the following logical sections:
General Memory Management
Device Memory Management
Shared Memory Management
Memory Transfers (DMA)
System Clock and Event Scheduling
Interrupt Handling
A. General Memory Management
Within the Moose operating system, virtual memory will be
employed to maximize flexibility and to help multiple tasks
coexist. Infrequently used portions of memory may be swapped to
a hard disk or other mass storage device to allow use by other
tasks, and some portions may need to be relocated about physical
memory to provide optimal compaction and to reduce fragmentation
of memory blocks.
Most tasks will never need to call these functions explicitly
as a certain amount of memory will be allocated to each upon
initialization, depending on the compiler used. This larger block
of memory should then be broken up by the task into smaller, more
usable pieces, removing much of the burden from the kernel.
Tasks which need additional or possibly extraordinarily large
blocks of memory are free to allocate these as necessary.
It is important to note that in most implementations, a
difference will exist between a logical address, as seen by a task,
and a physical address, as used to actually address physical
memory. This mechanism is what allows virtual memory and
protection of memory between tasks. The task never needs to know
that the logical address it uses to access memory is not directly
related to a physical memory address, as the kernel and related
paging hardware will perform the address translation transparently.
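Conceptually, the translation performed on the task's behalf looks
something like the following C++ sketch; the 4k page size and the flat
page table are purely illustrative and not part of this specification.
    /* Illustrative only: a 4k page size and a flat page table filled in
       by the kernel.  Real paging hardware performs this lookup itself. */
    const unsigned long PAGE_SIZE = 4096;
    extern unsigned long page_table[];   /* physical page for each logical page */

    unsigned long Translate(unsigned long logical)
    {
        unsigned long page   = logical / PAGE_SIZE;    /* which logical page */
        unsigned long offset = logical % PAGE_SIZE;    /* offset within the page */
        return page_table[page] * PAGE_SIZE + offset;  /* physical address */
    }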
Just a handful of functions will be necessary to supply a task
at any privilege level with the means for utilizing memory. These
functions may be called from all privilege levels, including the
most restricted "user" level. All memory allocated will be
accessible only by the requesting task and its parent or children
tasks, allowing no other tasks access to this memory.
1. Allocating memory
Memory will be allocated from the system free memory heap by
requesting a block of memory with specific attributes, as
specified in the 'flags' parameter, and a particular 'size'.
block address = MemoryAllocate(flags,size)
The 'flags' parameter specifies the attributes of the block,
but at this time only limited choices exist. Each pair of choices
is mutually exclusive, where selecting one will automatically
override the other.
MEM_VIRTUAL Block utilizes virtual memory (default).
MEM_PHYSICAL Block exists in physical memory.
By default, memory blocks will be marked with the MEM_VIRTUAL
flag, allowing them to be freely moved about in memory and swapped
to disk or some other mass storage device for later retrieval. The
operating system and other tasks will coexist best when virtual
memory can be utilized for memory blocks, especially large ones, as
this frees seldom used memory for use by other tasks.
A memory block can be locked within physical memory by
specifying the MEM_PHYSICAL flag. This is useful for time critical
applications or device drivers which require all parts of a
memory block to exist in physical memory at one time, even though
portions of the block may still be relocated as necessary. Using
this feature will degrade the flexibility of the rest of the system
in terms of memory management, and should be used sparingly.
Unless absolutely necessary, memory blocks of 1 megabyte in size
or larger should never be marked with the MEM_PHYSICAL flag.
It is important to note that the total number of bytes
allocated in a block may be larger than the number specified by the
'size' parameter. To simplify and expedite memory management, only
large chunks of memory will be allocated at one time, usually
varying between 1k and 8k in size, depending on the host platform.
This means that to reduce the total application overhead, only a
small number of large memory requests should be made by one task.
There will be a limit on the total number of memory allocations
made by a single task and in the system as a whole, so efforts
should be made by applications to consolidate memory usage.
The starting address of the memory block will be returned by
this function. If the allocation fails a null address is returned,
indicated by an address of zero.
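For illustration, a task written in C++ might use this call as sketched
below. The flag values and the stub definition are placeholders standing
in for the operating system library; only the MemoryAllocate() interface
itself comes from this specification.
    #include <cstdio>
    #include <cstdlib>

    typedef unsigned long mem_flags;
    const mem_flags MEM_VIRTUAL  = 0x0000;   /* placeholder value (default) */
    const mem_flags MEM_PHYSICAL = 0x0001;   /* placeholder value */

    /* Stub standing in for the kernel call; the real kernel rounds the
       request up to a 1k-8k chunk and may return a virtual block. */
    void *MemoryAllocate(mem_flags flags, unsigned long size)
    {
        (void)flags;
        return std::malloc(size);
    }

    int main()
    {
        /* An ordinary, swappable, relocatable block (MEM_VIRTUAL is default). */
        void *block = MemoryAllocate(MEM_VIRTUAL, 16 * 1024);
        if (block == 0) {                    /* zero address: allocation failed */
            std::printf("allocation failed\n");
            return 1;
        }
        /* ... use the block ... */
        return 0;
    }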
2. Reallocating memory
Once a block of memory has been allocated, its size will be
modifiable, taking memory from or returning it to the system free
memory heap. Note that this function will also deal with memory
in blocks between 1k and 8k in size, meaning the size of the
resulting block could be larger than the requested 'size'. The
original address returned by the original call to MemoryAllocate()
will still be valid and point to the start of the allocated block.
status = MemoryReallocate(size)
The status returned indicates the success or failure of the
reallocation of a block of memory. As with most functions, a zero
value indicates failure while a non-zero value indicates success.
This function will have certain implementation problems on
systems which do not support virtual memory. Since the
reallocation of large blocks of contiguous memory depends on the
location of neighboring blocks, this function will often fail on
such systems. However, most modern systems do
include virtual memory hardware in one form or another, so this
should not be a problem.
3. Freeing memory
Once through with a block of memory, a task will be able to
free it back into the system free memory heap. This will
invalidate the address and allow other processes to utilize this
memory. Note that all memory allocated by a task will be
automatically freed to the system when the task exits, but it is
still good programming practice to explicitly free this memory.
status = MemoryFree(block address)
Again, a zero status will indicate some error, while a non-
zero status will indicate success.
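As a further sketch (again with a stub standing in for the kernel),
freeing a block and checking the returned status might look like this:
    #include <cstdio>
    #include <cstdlib>

    /* Stub standing in for the kernel call described above. */
    int MemoryFree(void *block_address)
    {
        std::free(block_address);
        return 1;                            /* non-zero status: success */
    }

    void Release(void *block)
    {
        if (MemoryFree(block) == 0)          /* zero status indicates an error */
            std::fprintf(stderr, "MemoryFree failed\n");
    }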
B. Device Memory Management
Device drivers and other tasks operating at the device
privilege level may need special capabilities to ensure they
operate efficiently and (sometimes) properly. It is often
important that a device be serviced as quickly as possible,
requiring that a memory block always reside in physical memory, which can
be accomplished by marking the block with the MEM_PHYSICAL flag.
Many other devices, especially those using DMA or similar
transfer by hardware external to the CPU, may require a memory
block to exist in a contiguous memory space at a fixed position.
Due to hardware limitations, some memory blocks may even need to
exist within a specific range of physical memory addresses. For example, the
8237 DMA controller on the IBM PC platform is only capable of
transferring to and from memory in the first 1 megabyte of physical
memory. These requirements call for some special functions
accessible to tasks running at the device privilege level
or higher.
Devices will often require the implicit use of a specific
physical address or range of addresses for proper operation, and
these addresses may exist either in regular memory space or in I/O
space. Some processors, such as the Motorola 68000, may not
differentiate these two address spaces. Because of this, the
interface will be dependent on the processor being used, and will
be discussed individually on a processor by processor basis. This
is not a problem, as most devices are bound to a single platform
anyway.
1. Allocating memory: Device privilege level specific flags
When allocating memory using the MemoryAllocate() function,
tasks operating at the device privilege level are given a few more
flags to choose from.
MEM_FRAGMENTED Memory block is fragmented (default).
MEM_CONTIGUOUS Memory block is contiguous.
A block of memory may be made contiguous so it will occupy a
contiguous series of physical memory addresses. The starting
address of the block within physical memory will also be fixed to
an address, meaning the block cannot be moved by the kernel to
allocate other memory requests. This will be useful for device
drivers which utilize DMA or similar transfer by hardware external
to the CPU. Note that making a block of memory contiguous implies
that the block is located entirely in physical memory, regardless
of whether or not the block has its MEM_PHYSICAL flag set.
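As a sketch, a driver needing a DMA buffer might combine these flags as
follows. The flag values are placeholders, and MemoryAllocate() is only
declared here so the fragment compiles; the real definition would come
from the kernel.
    typedef unsigned long mem_flags;
    const mem_flags MEM_PHYSICAL   = 0x0001;   /* placeholder values */
    const mem_flags MEM_CONTIGUOUS = 0x0002;

    extern void *MemoryAllocate(mem_flags flags, unsigned long size);

    /* A driver using external DMA hardware asks for a fixed, physically
       contiguous buffer; MEM_CONTIGUOUS already implies physical residence. */
    void *AllocateDmaBuffer()
    {
        return MemoryAllocate(MEM_CONTIGUOUS | MEM_PHYSICAL, 4 * 1024);
    }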
>> Special DMA memory requirements: 8237 can only xfer under 1Meg?
C. Shared Memory Management
>> How should this be done?
D. Memory Transfers (DMA)
>> How should this be done?
E. System Clock and Event Scheduling
>> How should this be done?
F. Interrupt Handling
>> How should this be done?
III. Objects
An object in the Moose operating system will be defined as
the encapsulation of a data structure (attributes) with the
functions (methods) dedicated to manipulating that data, known as
a class. New, derived classes will be able to inherit the data and
functions from one or more previously defined base classes while
possibly redefining or adding new attributes and methods. This
will create a hierarchy of classes using both single and multiple
inheritance, if necessary. Finally, a single method can be
implemented differently by several different classes in a hierarchy
in a way appropriate to those classes to achieve polymorphism.
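In C++ terms, the mechanisms described above might look like the
following sketch. The class and method names are only illustrative; the
actual device classes are defined later in this document.
    #include <cstdio>

    /* Illustrative base class: a data structure plus the methods that act on it. */
    class Device {
    public:
        virtual ~Device() {}
        virtual void Write(const char *data) = 0;   /* redefined by derived classes */
    };

    /* Derived classes inherit the interface and redefine Write(), giving
       the polymorphism described above. */
    class Screen : public Device {
    public:
        void Write(const char *data) { std::printf("screen: %s\n", data); }
    };

    class Printer : public Device {
    public:
        void Write(const char *data) { std::printf("printer: %s\n", data); }
    };

    /* The same application code drives either device through the common interface. */
    void Report(Device &d) { d.Write("monthly report"); }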
Strictly speaking, the operating system will define nothing
more than a standard format for storing a class and a methodology
for accessing its attributes and methods in a meaningful way. No
enforcement of these policies will actually be done to allow for
maximum flexibility and efficiency. If objects are not implemented
correctly by an application or device driver, only the functionality
of that particular software will be lost. While it would be
possible to implement standard functions within the kernel for
accessing objects, this is certainly not practical.
>> Actual format of the object? VMT access?
The operating system will define the interface to standard
classes, many of which will be implemented in the operating system
library. Concrete objects such as files, semaphores, timers, and
interrupt handlers will be implemented in the operating system
library for immediate use by the application programmer. The
remainder of the objects defined by the operating system are
classified as devices, and will be discussed thoroughly in the
following section.
1. Files
>> Anybody have ideas on how files can be implemented as persistent
>> objects? How about memory mapped I/O?
2. Semaphores
A semaphore object will be implemented by the operating system
library to further enhance its multitasking capabilities. This
will provide a simple way for tasks to coordinate the usage of
shared resources. It will contain only two simple functions, as
shown below. A semaphore is initialized with some predefined
value, always positive, usually one for mutual exclusion.
When a task wishes to access the shared resource, it decrements the
semaphore's value by one. If the resulting semaphore value is zero
or positive, the task is allowed to enter its critical region.
However, if the semaphore count is negative after decrementing, the
task is placed in a list of tasks waiting on the semaphore and is
made to sleep until another task increments the semaphore. At that
point the waiting task is woken from its sleep, given control, and
may enter its critical section.
semaphore@Down()
Once the critical section has been completed, the task may
relinquish control of the semaphore by incrementing its value by
one. The next task in the waiting list will then be woken and
allowed to enter its critical section.
semaphore@Up()
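For illustration, the mutual exclusion case might look like the
following C++ sketch. The Semaphore class below is only a stand-in for
the operating system library object; its Down() and Up() methods
correspond to the @Down and @Up calls above, and its internal
bookkeeping is simplified while preserving the behaviour described.
    #include <mutex>
    #include <condition_variable>

    /* Stand-in for the operating system semaphore object. */
    class Semaphore {
        int count;
        std::mutex m;
        std::condition_variable cv;
    public:
        explicit Semaphore(int initial) : count(initial) {}
        void Down() {                              /* semaphore@Down() */
            std::unique_lock<std::mutex> lock(m);
            while (count == 0) cv.wait(lock);      /* sleep while unavailable */
            --count;
        }
        void Up() {                                /* semaphore@Up() */
            std::lock_guard<std::mutex> lock(m);
            ++count;
            cv.notify_one();                       /* wake one waiting task */
        }
    };

    Semaphore disk_lock(1);                        /* one => mutual exclusion */

    void UpdateDisk() {
        disk_lock.Down();
        /* ... critical section: exclusive use of the shared resource ... */
        disk_lock.Up();
    }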
3. Timers
A useful object for accurately reporting time elapsed since
a particular event will be the timer object. This object will
utilize the event scheduling capabilities of the kernel while
providing a simple interface to the programmer.
Essentially, a timer has two states, running and stopped.
When a timer is created, it is stopped and has an internal count
of zero. When a timer is started, its internal count is
synchronized with the system clock, from which an accurate count
of time elapsed since it was started can be calculated. Once
started, a timer can be stopped, and once stopped it can be
continued to resume counting where it left off. Finally, the elapsed
time can be explicitly set to some predetermined value.
timer@Start()
timer@Stop()
timer@Continue()
timer@SetElapsedTime(time)
The time elapsed since a running timer was started, or the
time that had elapsed before a stopped timer was stopped, can be easily
found, allowing a task to interactively check a timer for progress.
time = timer@ElapsedTime()
Using a timer, it is then possible to schedule events to occur
relative to the start of a timer. If the timer is stopped, pending
events will automatically be rescheduled for later. Explicitly
changing the elapsed time will delete past events and reschedule
future events to their new times. Events are unscheduled in a
similar manner.
status = timer@ScheduleEvent(time,handler,object)
status = timer@UnscheduleEvent(time,handler,object)
Periodic events are also supported by the timer object, allowing
the caller to specify an offset from which to begin the events, as
well as a period. These are also unscheduled in a similar manner.
status = timer@SchedulePeriodic(ofs,period,handler,obj)
status = timer@UnschedulePeriodic(ofs,period,handler,obj)
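A sketch of the elapsed-time portion of this interface in C++ follows.
The Timer class is only a stand-in for the library object, the time unit
(seconds) is illustrative, and the event scheduling methods are omitted.
    #include <chrono>
    #include <cstdio>

    /* Stand-in for the operating system timer object (elapsed time only). */
    class Timer {
        typedef std::chrono::steady_clock clock;
        clock::duration elapsed;                   /* accumulated while stopped */
        clock::time_point started;
        bool running;
    public:
        Timer() : elapsed(clock::duration::zero()), running(false) {}
        void Start() {                             /* timer@Start() */
            elapsed = clock::duration::zero();
            started = clock::now();
            running = true;
        }
        void Stop() {                              /* timer@Stop() */
            if (running) { elapsed += clock::now() - started; running = false; }
        }
        void Continue() {                          /* timer@Continue() */
            if (!running) { started = clock::now(); running = true; }
        }
        double ElapsedTime() const {               /* seconds, minus stopped periods */
            clock::duration total = elapsed;
            if (running) total += clock::now() - started;
            return std::chrono::duration<double>(total).count();
        }
    };

    void Example() {
        Timer t;
        t.Start();                                 /* timer@Start() */
        /* ... perform some work ... */
        t.Stop();                                  /* timer@Stop() */
        std::printf("took %.3f seconds\n", t.ElapsedTime());
    }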
IV. Devices
One of the most important features of Moose is the way it deals
with devices, both logical and physical. Input, output, storage,
and communication device classes are defined by the operating
system, each of which is a descendant of the device object.
When the system starts up, instances of all devices used by
the system will be created and placed in the device list. New
device drivers may be added, and unused device drivers may be
removed or reconfigured, at any time without rebooting the
system. Note that since device drivers are semi-privileged
software, they can only be manipulated through the use of a
privileged account. The administrator of devices will also be able
to specify which users or groups of users have access to a
particular device.
A. The Device Object
B. Display Output Devices
A proper definition for the display output devices
would be an object-oriented GUI similar to X11, only much easier
to use. :-)
C. User Input Devices
With the standard user input devices, it would be ideal to
allow keyboards, mice, and other pointer devices to all interact
with the system in a similar manner. Especially for traversal and
moving about the screen, no difference from the programmer's
perspective should be apparent.
D. Mass Storage Devices
Mass storage devices will include floppy drives, standard IDE
and SCSI hard drives, CD-ROM drives, and many others.
E. Filesystem Devices
The filesystem device defined by the operating system is
intended to provide a single, standardized interface to a multitude
of filesystems, similar to Unix's Virtual File System (VFS)
interface. The ability to switch between filesystems quickly and
efficiently is important in this case to correspond to the user
changing a floppy disk or a CD-ROM. Many types of filesystems will
be directly supported, including the standard Unix file systems,
the Fast File System (FFS), the MS-DOS file system, the Amiga file
system, the Macintosh file system, and possibly many more. As for
a native filesystem, the Viva File System (VIFS) is an experimental
filesystem which combines high speed and low overhead, making it
better suited to a multi-user environment than many existing
filesystems.
F. Data Link Devices
The data link devices defined by the operating system will
correspond to the OSI (Open Systems Interconnection) layer 2,
providing for the reliable transfer of data across a physical link.
These devices will send blocks of data, or frames, with the
necessary synchronization, error control, and flow control.
Examples of these devices would be serial and parallel ports.
G. Network Devices
The network devices defined by the operating system will
correspond to the OSI layer 3, which provides the upper layers with
independence from the data transmission and switching technologies
used to connect systems. These devices will be responsible for
establishing, maintaining, and terminating connections. Examples
would include X.25 and the Department of Defense's IP (Internet
Protocol).
H. Transport and Session Devices
On top of the network devices, the transport and session
devices defined by the operating system will correspond to the OSI
layers 4 and 5, which provide reliable, transparent transfer of
data between end points, end-to-end error recovery and flow
control, as well as the control structure for communication between
applications. These logical devices will establish, manage, and
terminate connections between cooperating applications. Examples
would include the Department of Defense's TCP (Transmission Control
Protocol) and the ISO session and TP (Transport Protocol)
layers.
V. Libraries
One important system object will be the library, which is
essentially a collection of functions, data, and objects.
A. Loading a library
A library may be explicitly loaded into memory so it will be
available when needed, or it may be loaded at the time when it is
referenced. When a library is loaded into memory it is given a
global data area which can be accessed by all instances of the
library. A library reference count is created and initialized to
0, or to 1 if the library should stay in memory after it is de-
referenced. Initialization of the library takes place at this time
also.
Each time a library is linked to by another library, a new
instance is created. A local data area specific to that
instance is allocated and the internal library instance count is
incremented. When the library instance is no longer needed, the
internal library instance count is decremented. When the library
instance count reaches zero, the library is removed from memory.
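A sketch of this reference counting follows; the structure and function
names are hypothetical and not defined by this specification.
    /* Hypothetical bookkeeping kept by the system for each loaded library. */
    struct LoadedLibrary {
        int   instance_count;   /* starts at 0, or 1 if the library stays resident */
        void *global_data;      /* one area shared by all instances */
        void *image;            /* the library code itself */
    };

    /* Linking creates a new instance: allocate its local data area and
       increment the instance count. */
    void LinkInstance(LoadedLibrary &lib)
    {
        ++lib.instance_count;
        /* ... allocate the instance's local data area ... */
    }

    /* Unlinking releases the instance; when the count reaches zero the
       library is removed from memory. */
    void UnlinkInstance(LoadedLibrary &lib)
    {
        /* ... free the instance's local data area ... */
        if (--lib.instance_count == 0) {
            /* ... free global_data and unload the library image ... */
        }
    }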
B. Locating a library
Before linking, a library must first be located. The internal
list of libraries is searched, followed by a search of the system
path. If found, an attempt is made to link to the library. If the
link fails because of an interface mismatch, the search continues.
This allows multiple libraries with identical names to coexist,
provided their interfaces are significantly different.
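A sketch of this search, with hypothetical helper names (none of these
identifiers are defined by this specification; the declarations are
given only so the fragment compiles):
    struct Library;                                 /* hypothetical types */
    struct Interface;

    Library *FindLoaded(const char *name);          /* search the internal list */
    Library *LoadFromPath(const char *name, int n); /* n-th match on the system path */
    bool     TryLink(Library *lib, const Interface &wanted);

    Library *Locate(const char *name, const Interface &wanted)
    {
        Library *lib = FindLoaded(name);
        if (lib != 0 && TryLink(lib, wanted))
            return lib;
        for (int n = 0; (lib = LoadFromPath(name, n)) != 0; ++n)
            if (TryLink(lib, wanted))               /* interface mismatch: keep searching */
                return lib;
        return 0;                                   /* no suitable library found */
    }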
C. Dynamic library linking
Naturally, internal references within a library would be
resolved at compile time, but external references are resolved when
the library is loaded. When linking to a library, it must first
be located in memory or loaded from disk, resolving its own
references if necessary. Next, the library itself is asked to
resolve a group of references and check to see that the interface
is correct.
When a library is created, items declared as external are
placed in the library's interface so they may be accessed by other
libraries. An interface can be entirely explored using the
library's methods, finding out about data items and types,
functions and the type of each parameter, as well as full
information about objects. In this way, compilers could extract
the library's interface directly from the library.
Other methods should be available to provide quick linking
between libraries, automatically checking that the
interface matches what is expected. However, minor interface
changes such as new items being added or unused items being removed
are allowed. The entire interface to a library does not need to
be checked when linking, only those items being linked to.