Parallel embedded systems
Matt
mattman@belegost.mit.edu
Sat, 19 Dec 1998 21:40:17 -0500 (EST)
The focus of the original message seems a bit broad. I hope this
response isn't too far off-target, but here's the result of my experience
with embedded, multi-processor systems:
> push for integration of all types of systems (toasters, home electronic
> etc.)..what's the best (most efficient;least latency;easiest to code for) way
> to get these systems talking?
>
> Maybe a central OS that manages all systems and provides sufficient
> hooks so that new systems can be added dynamically?
>
> An OS on only some systems?
	This should be answered by looking at the requirements of the
device and analyzing how much code, and what kinds of tasks, run at the
different places within the system.
> Is there a general abstraction we can make that can apply to parallel systems
> under any implementation from a LAN network to cards on a PC?
Embedded systems tend to be designed for a specific problem
and, therefore, don't tend to require the flexibility of a general-purpose
solution.
	Whether it's heap fragmentation, CPU efficiency, or memory
footprint, general-purpose solutions carry overhead. They tend to seem
enticing, but don't underestimate the complexity or cost of a
general-purpose approach. Parallel solutions are usually adopted because
a single processor can't provide enough performance, so there isn't much
performance to spare. Additionally, remember that the memory requirements
of an OS must be multiplied by the number of processors on which it will
run. All of this should be factored into the overall cost of any software
solution being evaluated for such an environment.
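	To make the fragmentation and footprint point concrete, here is a
minimal sketch (plain C, made-up sizes and names, not any particular
system's code) of the kind of fixed-block pool allocator embedded systems
often use in place of a general-purpose heap. The footprint is known at
link time, and it can never fragment:

    /* Fixed-block pool allocator sketch.  Every allocation is the
     * same size, so the per-processor footprint is a compile-time
     * constant and there is no fragmentation to worry about. */

    #include <stddef.h>
    #include <stdio.h>

    #define BLOCK_SIZE   64      /* bytes per block (illustrative)   */
    #define BLOCK_COUNT  128     /* blocks per processor (illustrative) */

    static unsigned char  pool[BLOCK_COUNT][BLOCK_SIZE];
    static unsigned char *free_list[BLOCK_COUNT];
    static int            free_top = -1;   /* top of the free stack */

    static void pool_init(void)
    {
        int i;
        for (i = 0; i < BLOCK_COUNT; i++)
            free_list[++free_top] = pool[i];
    }

    static void *pool_alloc(void)
    {
        if (free_top < 0)
            return NULL;         /* pool exhausted: caller must cope */
        return free_list[free_top--];
    }

    static void pool_free(void *p)
    {
        free_list[++free_top] = (unsigned char *)p;
    }

    int main(void)
    {
        void *a, *b;

        pool_init();
        a = pool_alloc();
        b = pool_alloc();
        pool_free(a);
        pool_free(b);

        /* The total cost per processor is known up front. */
        printf("pool footprint: %lu bytes\n",
               (unsigned long)sizeof(pool));
        return 0;
    }

The obvious catch is that every allocation is the same size, but when the
system only ever passes around a fixed set of message or task structures,
that is usually exactly what you want.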
	I can only say this with such certainty because I've been a
first-hand witness to the carnage caused by an Icarus flying toward the
lofty goal of a universally-scalable, architecture-independent,
object-oriented, multi-processor, real-time, embedded OS.
	Don't misinterpret my admonition. I don't intend to discourage
your efforts or attenuate your interest. I only want to warn of the
horrors that follow from failing to weigh the cost of a specific solution
that fits the problem against the cost of abstracting the problem into a
form so generic that a general solution can be applied.
> If I come up with a solution I'd like to produce something practical. One
> area where I can see this kind of stuff getting really big is the 3d video
> card market. Won't be long before we have multiple processers on video
> cards...do the poor programmers for these systems have to re-invent an OS to
> allocate and manage resources on this once simple system (and in doing so end
> up hard coding interfaces to hardware and the like) or is there a way we can
> standardize an interface so that it can use a set of already made resource
> managing and system to system communication libraries.
	3D accelerators are very special-purpose devices. It is
relatively simple to create the software needed to get the job done, and
anything more general-purpose is likely to cost development time and
reduce performance. That said, I'm aware that the goal of more flexible
computing systems is to reduce development time in the long term, which is
why higher-level APIs are created: to carry as much code as possible from
one generation of hardware to the next.
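	For what it's worth, the kind of thin interface I have in mind
looks something like the sketch below (hypothetical names, not any real
card's API). Each generation of hardware fills in its own table of entry
points, and the application code above it never changes:

    /* Thin driver interface sketch: a table of function pointers
     * per hardware generation.  All names here are invented. */

    #include <stdio.h>

    struct accel_ops {
        int  (*init)(void);
        void (*draw_triangles)(const float *verts, int count);
        void (*swap_buffers)(void);
    };

    /* One concrete back end for an imaginary first-generation card. */
    static int  gen1_init(void) { puts("gen1 init"); return 0; }
    static void gen1_draw(const float *v, int n)
    {
        (void)v;
        printf("gen1 draw %d triangle(s)\n", n);
    }
    static void gen1_swap(void) { puts("gen1 swap"); }

    static const struct accel_ops gen1_ops = {
        gen1_init, gen1_draw, gen1_swap
    };

    /* Application code is written once, against the table. */
    static void render_frame(const struct accel_ops *hw)
    {
        static const float tri[9] = { 0,0,0, 1,0,0, 0,1,0 };
        hw->draw_triangles(tri, 1);
        hw->swap_buffers();
    }

    int main(void)
    {
        const struct accel_ops *hw = &gen1_ops;
        if (hw->init() == 0)
            render_frame(hw);
        return 0;
    }

The table costs one indirect call per entry point at run time, which is
about as much generality as these systems can usually afford.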
	Basically, I'm simply trying to say that what looks nice on paper
won't always provide the best fit in practice. Embedded, parallel systems
are quite sensitive to such a mismatch, due to the complexity of the
system and the often rather primitive software tools. Additionally, the
limited focus of the problem rarely requires the advantages of a
general-purpose abstraction, and a simple, specific solution often works
quite well.
	Hopefully, software (and hardware) technologies will someday
advance to the point at which much of this is no longer valid. There are
certainly opportunities for anyone who can make it happen. Maybe you can
be there when it does.
Good luck,
Matt