[linux-elitists] 32 essential computer books?

Ben Woodard ben@zork.net
Thu Dec 18 14:00:08 PST 2003


On Thu, 2003-12-18 at 13:25, Nick Moffitt wrote:
> begin  Ben Woodard  quotation:
> > On Thu, 2003-12-18 at 10:02, Jim Thompson wrote:
> > > The Art of Computer Programming, Volumes 1-3, Donald Knuth.
> > 
> > I think these are fine books, but I'd have to argue with you
> > about them.  It is important for any programmer to get solid
> > experience with algorithms and data structures, but these books
> > are not the way to get it.  They are also badly out of date: not
> > so much in the information they contain as in the place that
> > information now occupies in the wider practice of programming.
> > 
> 
> 	I would argue that they have other faults as well.  They were
> written at a time when a memory access cost about the same as an
> instruction, before pipelining and elaborate cache hierarchies
> appeared within the CPU.  Knuth is only *now* rewriting them for
> RISC, which was So Over by 1998 that it's kind of embarrassing.
> 
> 	They're like good heavy leather-bound legal books: they look
> impressive on your bookshelf.  

Thank you for bringing up this point, Nick. I had forgotten about that
one. It is rather amazing what happens to algorithms when you change
some of the assumptions and throw all the oddities of modern hardware
into the fray.  For example, it is sometimes cheaper to have the
processor do twice as much work internally than to make one additional
memory access.
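
To make that concrete, here is a rough, hypothetical sketch (the names
are mine, not from any real codebase): counting set bits by table
lookup versus recomputing.  On paper the lookup is less work, but the
64 KB table competes for cache, while the recomputation never leaves
the registers.

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t popcount_table[1 << 16]; /* 64 KB: bigger than many L1 caches */

    static void init_table(void)
    {
        for (int i = 0; i < (1 << 16); i++)
            popcount_table[i] = popcount_table[i >> 1] + (i & 1);
    }

    /* Lookup: almost no computation, but a memory access that may miss. */
    static int popcount_lookup(uint16_t x)
    {
        return popcount_table[x];
    }

    /* Recompute: several instructions per set bit, but no memory traffic. */
    static int popcount_compute(uint16_t x)
    {
        int n = 0;
        while (x) {
            x &= x - 1;  /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    int main(void)
    {
        init_table();
        printf("%d %d\n", popcount_lookup(0xBEEF), popcount_compute(0xBEEF));
        return 0;
    }

Which one wins depends entirely on whether the table happens to be
resident in cache, which is exactly the sort of thing the classic
analyses ignore.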

Also, where performance really matters, I am finding more and more that
textbook computer science lags far behind the state of the art, and
that most of the classic data structures and algorithms simply don't
apply anymore.

For example, in my current world of high performance computing, what I
am seeing is that most of the time you can't use a data structure more
elaborate than an array. Even linked lists are verboten. The problem is
that when you have a massive amount of data, lots of processors, and
something to do to every element, the data pretty much has to live in
an array. There is no parallel algorithm for traversing a linked list,
or most kinds of trees, or anything like that.
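
A rough sketch of why, assuming an OpenMP-style compiler (the function
names are made up): summing an array splits trivially across threads,
while a linked list forces you to follow the pointers one at a time.

    #include <stddef.h>

    /* Array: every thread can index straight into its own slice. */
    double sum_array(const double *a, long n)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (long i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    struct node {
        double val;
        struct node *next;
    };

    /* Linked list: element i is only reachable by walking elements
     * 0..i-1, so the traversal is inherently serial. */
    double sum_list(const struct node *p)
    {
        double s = 0.0;
        for (; p != NULL; p = p->next)
            s += p->val;
        return s;
    }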

Try to come up with a parallel algorithm that searches for something.
What you will find is that in today's world, when performance really
matters and you are forced to do things in parallel, the computer
science just doesn't exist.
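
For instance, about the best you can do with OpenMP is something like
the sketch below (again, a made-up example). You cannot even break out
of the loop early, because the worksharing construct forbids break, so
every thread scans its whole chunk even after a match has been found.

    /* Return the smallest index whose element equals key, or -1. */
    long find_first(const int *a, long n, int key)
    {
        long hit = -1;
        #pragma omp parallel for
        for (long i = 0; i < n; i++) {
            if (a[i] == key) {
                #pragma omp critical
                if (hit < 0 || i < hit)
                    hit = i;  /* keep the smallest matching index */
            }
        }
        return hit;
    }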

I don't know if I understand the problem well enough to explain it, but
what I gather is that the fundamental problem is this: almost all of
the mathematics of computer science assumes you are executing on a von
Neumann machine, and parallel computers are not von Neumann machines.
-ben

> 
> > I think the day has passed when we want people doing their own
> > implementation of data structure or algorithm X, Y, or Z. For
> > maintainability's sake, and to foster faster code development, we
> > should be strongly suggesting to the underlings that they use
> > already extant libraries like glib, which have reasonably bug-free
> > and optimized versions of many of the fundamental building blocks.
> 
> 	And of course, this is precisely why, even if they were
> up-to-date, they'd be useless.  Let the GNU project write the Best
> Possible Qsort or Incremental Search algorithms.  Pull your random
> numbers from /dev/ass*, and don't bother re-inventing the wheel.
> 
> 	You can learn more from SICP than you can from Knuth.  
> 
> *: http://zork.net/~nick/mail/dev-ass
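
To put a concrete face on the glib suggestion quoted above, here is a
minimal sketch of the kind of building block I mean (a toy example, not
production code):

    #include <glib.h>

    int main(void)
    {
        /* A string-keyed hash table from glib, instead of a hand-rolled one. */
        GHashTable *t = g_hash_table_new(g_str_hash, g_str_equal);
        g_hash_table_insert(t, "knuth", "taocp");
        g_hash_table_insert(t, "abelson", "sicp");
        g_print("%s\n", (char *) g_hash_table_lookup(t, "abelson"));
        g_hash_table_destroy(t);
        return 0;
    }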



