[linux-elitists] More crudeness and light
Tue Oct 25 16:53:44 PDT 2005
Okay, so lately a lot of RSS feeds have begun including enclosures.
This is often used to store audio files, which have been christened
"podcasts" in order to trick ordinary internet doofuses into doing
Apple's marketing for them.
There are a gazillion and six programs out there for you (yes you!) to
"download" these "sound files" (no doubt this is called "MacinGrabbin'"
or some other insipid portmanteau). Many have features you'll never
want or need since all you really care about is that you wake up in the
morning and have a directory full of oggs to throw on your iRiver (you
did buy one before they stopped producing them, right?).
So instead of trying to figure out some glob of ruby code that half-does
what you need, why not just let shell tools do all the work? Here's my
entire script:

#!/bin/sh
wget -O - "$@" | xml2 | grep /enclosure/@url= | cut -d = -f 2- | xargs wget -m
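For the curious: xml2 flattens the XML into one /path/to/node=value line
per node, which is what makes the grep-and-cut trick work. A stage-by-stage
sketch on a canned line (the real thing obviously needs a live feed):

```shell
#!/bin/sh
# Canned example of what xml2 emits for an enclosure: one node per line,
# as /path/to/node=value. A real feed would produce many such lines.
line='/rss/channel/item/enclosure/@url=http://example.com/ep1.ogg'

# grep keeps only the enclosure-URL lines; cut takes everything after
# the first '=' (the 2- matters if the URL itself contains a '=').
url=$(printf '%s\n' "$line" | grep /enclosure/@url= | cut -d = -f 2-)
echo "$url"
```

wget -m then mirrors each surviving URL into a host/path tree under the
current directory.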
My crontab then just looks like this:
0 3 * * * cd ~/grabs/ && while read i; do /usr/local/bin/podcast "$i"; done < feeds
This uses nothing but grep, cut, bash, wget, xargs, and xml2. Of these,
only xml2 and wget are non-standard items.
So, who can do me one better? Better argument handling? Is there a
cleaner way to do it with curl? Is there an awk one-liner version that
makes the use of cut irrelevant? Some sort of auto-classification by
date instead of just letting wget -m do the grabbing? Optional switches
for adding -nd on (or otherwise configuring for curl magic)?
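To half-answer two of those myself, here's a sketch (untested against
real feeds, and the extract_urls name is my own invention) that swaps
grep+cut for awk and files each run under a YYYY-MM-DD directory with
-nd instead of letting wget -m build a mirror tree:

```shell
#!/bin/sh
# Sketch only: awk in place of grep+cut, plus per-date classification.
# extract_urls keeps everything after the first '=', so enclosure URLs
# containing their own '=' (query strings) come through whole.
extract_urls() {
    awk '/\/enclosure\/@url=/ { sub(/^[^=]*=/, ""); print }'
}

day=$(date +%Y-%m-%d)
mkdir -p "$day"

# Only touch the network when a feed URL was actually given.
if [ $# -gt 0 ]; then
    wget -O - "$@" | xml2 | extract_urls \
        | (cd "$day" && xargs wget -nd)
fi
```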
The only thing I actually will change soon is the addition of a -o
/dev/null or -o /tmp/dlerr.$$ or something.
A: No. Nick Moffitt
Q: Should I put my reply above quoted text? email@example.com