[linux-elitists] Applications and the infamous DNS vulnerability
Thu Jul 24 14:37:21 PDT 2008
Quoting Don Marti (firstname.lastname@example.org):
> On a properly set up home or office network, it should
> be difficult to poison the nameserver completely
> from the outside -- for a long time it's been best
> practice to put your public DNS on a separate machine.
There are some benefits to doing so, but it does rather little to
prevent cache poisoning -- which can affect a variety of nameserver
hosts that do recursive-resolver service and then cache the results,
even those that don't accept recursive queries from public networks.
So: A recursive-resolver server that's on an IP different from your
"public" DNS is no less threatened.
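To see why the source-port question keeps coming up, some back-of-the-envelope arithmetic (the 64,000 figure for usable ephemeral ports is a rough assumption, not a measured value):

```python
# Odds that a single forged reply matches an in-flight query: the
# attacker must guess the 16-bit DNS transaction ID, plus -- if the
# resolver randomises it -- the source UDP port as well.
txid_space = 2 ** 16          # 65,536 possible transaction IDs
port_space = 64_000           # rough count of usable ephemeral ports

odds_fixed_port = 1 / txid_space                  # source port known
odds_random_port = 1 / (txid_space * port_space)  # port randomised

print("fixed port:  1 in", txid_space)
print("random port: 1 in", txid_space * port_space)
```

A predictable source port leaves only the transaction ID as entropy, which a flood of forged replies can brute-force in seconds; randomising the port multiplies the search space by tens of thousands.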
Note that even the "stub resolver" built into Linux libc / glibc is
vulnerable to some deviltry, because _its_ recursive queries don't
originate from randomised source UDP ports but rather reuse a single,
predictable port. However, the results thereof don't get _cached_, so any
poisoned information gets flushed right away. (Running nscd would be
bad, in this context -- but then, it sucks, and if you use it at all
you should always disable its caching of DNS data on general principle.)
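If you do run nscd anyway, the relevant knob lives in /etc/nscd.conf -- the other caches (passwd, group) can stay on while DNS data is never cached:

```
# /etc/nscd.conf -- keep other caches if you like, but never cache
# DNS (hosts) lookups:
enable-cache    hosts    no
```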
Absence of caching likewise mitigates the horrific vulnerability of
pretty nearly everyone's cheap SOHO router appliance. Yes, they'll be
easy to hit with forged DNS RRs, but they're too anemic to spare the RAM
to, so to speak, refrigerate the poisoned meat.
OTOH, those same El Crappo appliances will also tend to de-randomise,
during NAT conversion, the outbound recursive queries from people's
carefully upgraded inside-the-firewall nameservers.
> Should applications that handle untrusted data
> be keeping track of the number of times they get
> NXDOMAIN for subdomains of one domain -- some kind
> of wrapper around getaddrinfo -- then refusing to
> query again if there are so many that it looks like
> an attack? Maybe with a "I'm not looking up another
> randomcrap.example.com domain for you" dialog, or
> maybe just stop processing the offending web page or
> whatever other data source is causing the queries?
That's an interesting idea, but I don't think it's going to happen --
nor would I want to trust app coders with handling network security in
the first place.
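Still, for the curious, the wrapper Don describes is easy to sketch. Everything here is hypothetical -- the class name, the threshold, and especially the naive "last two labels" notion of a parent domain (real code would need the Public Suffix List):

```python
import socket
from collections import defaultdict

class NXDomainGuard:
    """Wrap a resolver; refuse further lookups under a parent domain
    once its subdomains have produced too many NXDOMAIN answers.
    A heuristic sketch only -- threshold and domain-splitting are
    naive assumptions, not a vetted design."""

    def __init__(self, threshold=20, resolver=socket.getaddrinfo):
        self.threshold = threshold
        self.resolver = resolver
        self.nxdomain_counts = defaultdict(int)

    @staticmethod
    def parent(name):
        # Naive "registered domain": last two labels.  The real rule
        # requires the Public Suffix List.
        return ".".join(name.rstrip(".").split(".")[-2:])

    def getaddrinfo(self, host, port, *args, **kwargs):
        key = self.parent(host)
        if self.nxdomain_counts[key] >= self.threshold:
            raise RuntimeError(
                "refusing further lookups under %s: looks like an attack"
                % key)
        try:
            return self.resolver(host, port, *args, **kwargs)
        except socket.gaierror as e:
            if e.errno == socket.EAI_NONAME:  # name does not exist
                self.nxdomain_counts[key] += 1
            raise
```

Note the sketch dodges the hard parts: deciding a sane threshold, expiring old counts, and -- as above -- whom to trust to get all that right in every application.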