[linux-elitists] ME roadshow
Wed Sep 6 04:05:24 PDT 2000
Mike Touloumtzis writes:
> Speaking as someone whose job involves porting Linux to embedded-type
> (well, consumer-electronics-type) devices, I strongly believe that the
> "Linux kernel is full of desktop bloat" argument doesn't have legs.
Please show me a QNX/L4-type 12 kByte Linux kernel written in
carefully handcrafted assembly, which will run on a DSP or an ARM and
live comfortably in 1 MByte memory (and zero virtual memory), and I'll
readily believe you.
> Think instead of the fact that Linux lets you blur the artificial
> boundaries between product categories like "PDA", "desktop", and "server".
Well, I haven't seen many PDAs which can afford to burn a few kW and
run a 10 TByte database, while saturating a fat pipe.
The boundary between a high-end desktop and a low-end server is pretty
permeable, of course. But a cluster filling a major airconditioned
room is also a server (Google runs on a distributed server of a few
thousand machines), and a Linux-based information appliance is also a
desktop.
> When I go traveling, I can not only use dialup access with my Linux
> laptop, I can also run a DHCP server on my PCMCIA ethernet card and let a
> friend piggyback on my connection using IP masquerade. If I want to bring
> a little hub along, I can get everyone in the conference room online.
> I can use FreeS/WAN to give all of them access to our corporate network
> via a VPN, and proxy DNS so that they see our internal namespace. I can
> use network filtering to reduce that risk that someone breaks into my
> PC or any of the others while I'm online.
> You don't get these things with one of those little 10KB TCP/IP stacks
> written in Forth. You also don't get a stack that can handle the range
1) I'm not talking about a TCP/IP stack. Is a TCP/IP stack supposed to
be a part of the kernel? Why on earth? What has a WAN to do with
minimal functionality? DNS? proxies? Why don't you mention a frame
buffer, or a GUI?
2) You're describing legacy bloatware applications and protocols.
While you could do all of this on a small box supporting threaded code
in hardware, and in a much smaller footprint (see the matchbox web
servers for an illustration), that misses the point. You can have
all the bloatware you like outside of the nanokernel, I don't care,
as long as I can use the nanokernel for the niche apps, and still
remain one-way compatible (you can run mine, but I can't run yours).
> of hardware optimizations from a dumb-ass NE2K to a sophisticated
> gigabit Ethernet card whose programmable firmware is practically a
> separate computer. And this is just networking--parallels exist for
> other subsystems.
You're talking about device drivers serving a WAN protocol. I do not
see what this has to do with doing message passing (insofar as it is
not already handled by hardware), doing minimal memory management, and
stopping and starting code execution. That is a more or less minimal
set of tasks sitting between the bare metal and my code. Any other
functionality is strictly facultative, and should be provided on
demand by pieces of code which can be swapped in as needed.
> To return to small-scale systems, who's to say that I don't want to be
> able to do the above fancy networking with my PDA? The reaction I get from
Because your typical PDA has no PCI slots?
> a lot of people when I describe a scenario like the above is "wow, it's
> like your laptop is a _server_". I largely blame MS for encouraging the
> silly, rigid PDA/PalmPC/client/server way of thinking, although they're
> not the only ones.
We're talking about semantic issues here. Formally, a server is
something serving content to a client. More traditionally, a server is
a big box of noisy iron in an airconditioned room. It is helpful not
to use the same word when trying to convey different meanings. If
integration density allows you to put more performance into your PDA
than the entire world once had, more power to you. But the same
technology also allows me to fill an airconditioned room with the same
iron, and run it at much higher frequencies. Because of this, the
performance and storage size gap always remains roughly the same.
> > I would probably implement a kernel as a rudimentary VM, preferably
> > with good hardware support.
> The really interesting thing to me about the "Unix kernel vs. microkernel"
> Torvald/Tanenbaum debate is that Linus's decision holds up much _better_
> now than when he first made it (and the "Unix is 30 year old technology"
Linux holds up very well in its main niches. In the embedded space
(something with a few tens of kBytes of ROM or flash and a few tens
of bytes of RAM, running in dozens of units in your car) or on a DSP,
Linux is strictly a no-go. In the supercomputer market Linux can hold
on only as long as the individual node is fat.
*nix is some kewl shit and no mistake, but don't tell me it's the best
thing since sliced bread. 'cause it's not true, and by insisting that
it is you're performing quite a feat of highly selective world
perception. With a bit of trimming and outsourcing of the bloat, a
minimal Linux can become truly ubiquitous and kick Java CPU butt, but
only if we realize the deficiencies and act accordingly.
> arguments are correspondingly weaker). Back then, Linus's most compelling
> argument was that Unix kernels offer inherent parallelism in situations in
> which "pure" microkernels are inherently single-threaded. People often
I don't care about microkernels, only nanokernels.
> equate threadedness, microkernels, and multi-CPU parallelism without
> thinking about the fact that Unix kernels (since they are essentially
> privileged libraries rather than privileged processes) inherently
> allow all processes to act in parallel, while the "privileged process
> for filesystem, networking, etc." model requires you to come up with
> complicated resource-pooling strategies (e.g. pooling of thread stacks
> in each server) to get the same parallelism.
Dude, we're from different planets. We talk orthogonally past each
other, and not even in the same language.
I'm not interested in threading, because I believe that each process
should have its own CPU. At least one CPU. And there are hardware
reasons why this is the way to go if you want to have maximum
efficiency. The reason we don't go that way is that the braindead
software architecture is blocking the hardware architecture.
> Since 1991, one of the biggest trends in CPU performance has been the
> increasing importance of the cache and TLB. This means that the best
Yeah, but look into embedded memories. If you can get a kBit-wide word
with first-level-cache latency, regardless of address locality, why on
earth do you need a cache? And if your memory is that fast (ns
range), why on earth do you need a TLB? And if your memory is that
fast and that broad, but comes in MByte-sized grains, how on earth do
you want to shoehorn your bloated OS and your bloated app and app
development circus into a parallel box full of these babies?
> way to get really high-performance behavior for your kernel code is to
> map it into each process's address space--what Linux does with modules.
You make it sound as if mapping is for free. Every functionality
requires gates, and gates ain't that cheap if you want to pack 32 CPUs
on a single die. Or a single CPU with a 1 MByte embedded RAM, and
high-speed networking and a crossbar switch.
> It sounds like QNX does this for core kernel components, but if you follow
> this argument to its logical conclusion, you'll say "hmm, anything we
> implement as an out-of-kernel server is going to thrash the cache and TLB,
> and will suck performance-wise, so let's just move it all into loadable
> modules." Of course, you still want stuff like Apache outside the kernel
> for reasons of security and robustness, but anything that needs to take
> advantage of highly asynchronous hardware behavior should be in there.
I doubt our planets are even in the same local cluster.
> So, in a nutshell, using process/address space boundaries as privilege
> boundaries is just a bad idea on modern hardware. I wish these
> points were better understood by Linux advocates at large. I often
> come across people making apologies for the Unix kernel design rather
> than portraying it as the most up-to-date (and continually reassessed)
> technology there is.
Yeah, you keep repeating it, and I keep not believing it. You stop
being the cutting edge when you start believing that the cutting edge
is wherever you happen to be.
I use Linux because it sucks less, and I use PC hardware because I
can't roll my own, but I can see imminent technologies for which the
entire computing paradigm has to be rewritten. If you think that in 30
years *nix will still be the top of the food chain...