[linux-elitists] web server software for tarpitting?
Thu Feb 14 09:55:22 PST 2008
Could we clear something up?
I've always heard tarpitting described as slowing down an attacker's
connections by stringing their TCP connections along for as long as
possible. Specifically, I've seen it done against spammers (or
spambots), since it reduces spam to oneself and keeps them (at least
the single-threaded ones, I guess) from moving on to the next victim.
It sounds, though, like you want something different, which I would
call adaptive metering: slowing down the connections of IP addresses
that are causing too much traffic so that they are no longer harmful.
Drop enough packets and TCP's backoff kicks in, and their traffic rate
drops to "acceptable" levels.
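That "drop enough packets" policy can be approximated in userspace with
a per-IP token bucket; a sketch (the rate and burst numbers are
placeholders, and the eviction of stale IPs is left out for brevity):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-IP token bucket: each request costs one token; tokens
    refill at `rate` per second up to `burst`. When a bucket is empty
    the caller drops or delays the request, and the sender's backoff
    (TCP's or a polite retry loop's) throttles its traffic rate."""

    def __init__(self, rate=1.0, burst=5.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)  # tokens left per IP
        self.stamp = {}                           # last-seen time per IP

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        last = self.stamp.get(ip, now)
        # refill for the time elapsed since this IP's last request
        self.tokens[ip] = min(self.burst,
                              self.tokens[ip] + (now - last) * self.rate)
        self.stamp[ip] = now
        if self.tokens[ip] >= 1.0:
            self.tokens[ip] -= 1.0
            return True
        return False
```

A well-behaved host never notices the bucket; a host hammering the
server empties it and gets metered down to `rate` requests per second.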
By the way, there's really nothing unusual about having large amounts
of traffic coming from a single IP: many large organizations hide huge
numbers of machines behind a single address. Of course, they ought to
be running a transparent cache at their NAT point so that repeat
requests to the same URL are cut way back. Presumably these
organizations are not lost causes and could be "taught". And it sounds
like you don't want to disconnect them, just limit their harm.
So, do you really mean tarpitting/teergruben, or do you want adaptive
metering? And if the latter, are you interested in limiting the data
rate per TCP connection, or in limiting the number of connections per
minute per IP address? Either has a much simpler solution than a
"retaliatory" one like tarpitting.
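For the per-connection-data-rate case, something like nginx can already
do both jobs out of the box. A hypothetical config sketch (docroot,
port, and the 1 KB/s figure are all made up) that trickles out
DTD-style files with limit_rate and proxies everything else to the
existing Apache:

```nginx
http {
    upstream apache_backend {
        server 127.0.0.1:8080;   # the existing Apache, hypothetical port
    }
    server {
        listen 80;

        # DTD/schema files: serve locally, but trickle them out
        location ~ \.(dtd|ent|mod|xsd)$ {
            root /var/www/w3c;   # hypothetical docroot
            limit_rate 1k;       # ~1 KB/s: a 50 KB DTD takes ~50 s
        }

        # everything else goes to the real Apache
        location / {
            proxy_pass http://apache_backend;
        }
    }
}
```

Interactive users never notice a rate limit on files their browsers
don't fetch, while a client re-requesting the same DTD thousands of
times a day spends most of its time waiting.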
On Sun, Feb 10, 2008 at 11:06 PM, Gerald Oskoboiny wrote:
> The other day we posted an article about excessive traffic
> for DTD files on www.w3.org: up to 130 million requests/day, with
> some IP addresses re-requesting the same files thousands of times
> per day (rarely, up to 300k times/day).
> The article goes into more detail for those interested, but the
> solution I think will work best (suggested by Don Marti
> among others) is to tarpit the offenders.
> I just followed up on slashdot about the implementation I
> have in mind, but that thread is pretty stale and this is
> probably a better place to ask anyway, so:
> Does anyone have specific web server software to recommend that
> is able to keep tens of thousands of concurrent connections open
> on a typical cheap Linux box? (Lighttpd? Nginx? Varnish? Yaws?)
> It also needs to be able to proxy other requests to an Apache
> server running elsewhere.
> Bonus marks for:
> - ability to do content negotiation
> - ability to set different delays for different IP addresses
> - HTTP compliance
> I'll research this myself as well, I'm just wondering if anyone
> has recommendations based on experience.
>  http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dtd_traffic
>  http://developers.slashdot.org/comments.pl?sid=447350&cid=22376594
> which says:
> The implementation I think might work well is:
> Switch www.w3.org to use some lightweight server software
> that is able to keep lots of connections open, and configure
> it to serve DTD files with an artificial 5 second delay.
> Proxy all the other requests to our existing Apache server
> running elsewhere (possibly on another port on the same machine).
> Most people shouldn't notice or care about the delay for DTD
> files, only the apps that are requesting them hundreds or
> thousands of times in a row will notice.
> W3C's current traffic is something like:
> - 66% DTD/schema files (.dtd/ent/mod/xsd)
> - 25% valid HTML/CSS/WAI icons
> - 9% other
> So we'd probably want to configure the lightweight server to
> serve those icons too (but then it would have to do conneg as well)
> (that's not really tarpitting, but has similar requirements)
> Gerald Oskoboiny <email@example.com>
Please keep in touch.