[linux-elitists] ME roadshow

Eugene Leitl eugene.leitl@lrz.uni-muenchen.de
Wed Sep 6 23:34:42 PDT 2000


Ben writes:

 > So does this mean that the days when the kernel lived in one address
 > space and each user space application had its own memory space are
 > gone? I remember in my kernels class they talked about the two kinds

If they're gone, they'll come back with a vengeance. If you go up to
high clock rates (10...100 GHz, the latter probably the ceiling for
semiconductor transistors, perhaps not incidentally also for NEMS),
individual gate delays become very visible, so you have to minimize
the number of gates a signal has to traverse during each operation.
This means you have to streamline the CPU, and you have to put the
memory where your CPU is, since scaling the signal up to macroscale
is where the delay and the power burn are. Also, at 1 GHz a bit is
~30 cm long, at 10 GHz 3 cm, and at 100 GHz 3 mm. You don't want to
routinely access stuff off-die, because it comes with a stiff
speed-of-light latency, and you'll lose enough power to microwave
radiation (just think about the crosstalk) that you could add a
waveguide to your computer case to keep your coffee pot hot.
Electrosmog, my ass.
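
For concreteness, here's the arithmetic above as a little C program
(the 10 cm off-die round trip is an assumed illustrative figure, not
a measurement, and vacuum lightspeed is used, so real traces are
worse):

    /* Bit length (one clock period at light speed) and the round-trip
     * cost, in clock cycles, of reaching 10 cm off-die. */
    #include <stdio.h>

    int main(void)
    {
        const double c = 3.0e8;        /* speed of light, m/s */
        const double trip = 0.10;      /* assumed off-die round trip, m */
        const double freqs[] = { 1e9, 10e9, 100e9 };
        int i;

        for (i = 0; i < 3; i++) {
            double f = freqs[i];
            printf("%5.0f GHz: bit length %6.1f mm, 10 cm costs %5.1f cycles\n",
                   f / 1e9, (c / f) * 1e3, (trip / c) * f);
        }
        return 0;
    }

At 1 GHz the off-die trip costs a fraction of a cycle; at 100 GHz
it's over 30 cycles of dead air, before the memory even does anything.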

Your on-die memory is very fast and knows no locality, but the
machinery that has to figure out where the content lives -- on die,
or off die -- costs you an *added delay* on every access to that fast
on-die memory (off-die is so slow that it doesn't matter anyway), all
due to the heavy thinkin'. And that's not even counting the silicon
real estate wasted on strange obsolete machinery instead of being
filled up with memory, ALUs, and serial networking.

Oh yeah, and how would Linux like extensive FPGA areas on the CPU die?

 > of VM systems. One was when the kernel was mapped into the same
 > address space as the user process and the other was when the kernel
 > had an independent memory space all its own. I know Linux, at least
 > on Intel, is using the first method.

So we've got an OS which assumes huge memory spaces in each node,
offers no inherent parallelism besides threads (which, being written
for C, come with a high cost in stack), has no special understanding
of MMX-type SIMD, and carries elaborate protection and expensive
context switches.
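
To make the per-thread stack cost concrete, a minimal POSIX-threads
sketch (compile with -lpthread; the exact default stack size varies,
8 MB being a common Linux figure, taken here as an assumption):

    /* Every C thread needs its own stack, reserved up front, whether
     * or not it ever uses it.  Print the default reservation, then
     * start one thread with a deliberately small 64 KB stack. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        return arg;
    }

    int main(void)
    {
        pthread_attr_t attr;
        size_t stacksize = 0;
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_getstacksize(&attr, &stacksize);
        printf("default stack per thread: %lu KB\n",
               (unsigned long)(stacksize / 1024));

        /* 1000 threads at an 8 MB default tie up ~8 GB of address
         * space; you can shrink the stack, but C can't make the
         * cost go away. */
        pthread_attr_setstacksize(&attr, 64 * 1024);
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }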

These are extensive adaptations to a given environment. A too-well
adapted species does lousy when the environment changes. It's the
generalists, which do each thing less well but do a lot of things,
that survive.

You see, for a long time the protomammals didn't have a ghost of a
chance against the well-adapted dinosaurs, smugly filling up every
niche. It's when the climate changed due to impact or volcanic winter
that the little furry dudes suddenly came out of the woodwork to feast
on the carcasses of former kings who ruled Earth.



