[linux-elitists] Freeing parts of Unix?

Eugene Leitl Eugene.Leitl@lrz.uni-muenchen.de
Fri Jun 15 08:52:18 PDT 2001

On Fri, 15 Jun 2001, Aaron Sherman wrote:

> No matter what, the end-result of all of this debate and research is
> going to be better free/open source software, and that I'll take any
> day.

Unfortunately, the Open Source community doesn't really understand
hard-realtime concurrency on fine-grain (deeply embedded-like) hardware.
The canonical target is a fat *nix box, or a cluster of them, at best.

This sucks.

> The difference, then, is permission. Do you let the application

The difference is preventing the code from shooting down the OS or other
code, by whatever minimal means it takes. There's no point in taking this
religiously; you have to assume the target runs more or less trusted code,
which may be buggy.

Anything else would be much too expensive, and hence needs to be
outsourced. If the code doesn't come from a trusted source, you don't run
it. The cluster is not sitting there nekkid on the global network (it
can't, anyway, since the individual nodes don't speak TCP/IP).

> authenticate itself in some interesting way (always expensive, but
> perhaps mitigatable) and then access hardware as it pleases? Or do you

If I need to send a small packet, and expect the code on the other end to
jump within ~100 ns, there's not much time for OS overhead and checking.
It has to be hardware, and pretty lean hardware at that.

> put the small amount of hardware-aware code into kernel space? Either

The kernel should be just a few kilobytes in size, and be more a VM than
anything else. Such a kernel can pretend it's a fat kernel, and in fact
even be somewhat faster, at least with respect to QoS/isochronicity, but a
fat kernel can't do the reverse. The asymmetry is obvious, and this is why
Linux will either have to be emulated via a nanokernel one day, or get out
of the way. (And die. In the rain).

> way, with loadable modules or shared libraries, you get pretty much
> the same performance and footprint (after application startup, for
> which shared libs will be more expensive). The key question is which

Dude, the number of transistors you get in a system for a given price is
fixed to within one order of magnitude. But how you distribute those
transistors is not. There's a definite advantage to distributing your apps
into a gadzillion objects whose address spaces are insulated from each
other via hardware message passing. So you make the grains small, which
also has a positive impact on silicon yield. But this limits the amount of
redundancy present in each node. You have to radically defatten the OS
layer, capisce?

> one offers more options to your developers?

Developers need to start getting really good at parallel debugging, really
soon.

> I would think the library approach would be harder to get safe
> and correct, but a bit more of a win for developers in the long run.
> Then again, UNIX would likely not be the best model for such a
> system. You'd probably want to make it more VMS-like, to get the
> permissions per-installed-file and granular permissions concepts.

I dunno what you're smoking, but I would like to have a bowl of some.

-- Eugen* Leitl
ICBMTO  : N48 10'07'' E011 33'53'' http://www.lrz.de/~ui22204
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3
