SPF for forgery prevention (was Re: [linux-elitists] http get vs post...)

Karsten M. Self kmself@ix.netcom.com
Mon Oct 27 04:28:21 PST 2003


Omission generally indicates agreement.


on Sun, Oct 26, 2003 at 08:45:44PM -0500, Gerald Oskoboiny (gerald@impressive.net) wrote:
> * Karsten M. Self <kmself@ix.netcom.com> [2003-10-25 04:15+0100]
> > on Fri, Oct 24, 2003 at 04:53:27PM -0400, Gerald Oskoboiny (gerald@impressive.net) wrote:
> 

> > > SPF in particular seems to be coming along nicely, and has a growing
> > > user/developer community.
> > > 
> > >     http://spf.pobox.com/
> > 
> > Note that both DMP and RMX are referenced at the following SPF link:
> > 
> >     http://spf.pobox.com/intro.html

> > The short answer is that while I suspect there are merits to many of
> > these proposals, the successful solution will be one that is effective
> > while being backward compatible.
> 
> I think SPF can accomplish this; it aims to be something that anyone
> can deploy today without needing to update their DNS or MTA software.

While the system can be deployed today (and you address performance
below), the _benefit_ isn't gained until it's adopted by the sites
which see significant spoofing (whether or not you communicate with
them directly yourself).  The point being that you're relying on others
to take action which benefits you.  Not bad, perhaps, but not quite as
direct as it could be.


> > The various protocol suggestions are mostly concerned with making the
> > _protocol_ of mail exchange smarter.
> :
> > The problem of course is that both ends _won't_ be using the protocols,
> > most of the time, particularly early on.  Which means:

> The effort needed to publish SPF records seems fairly low -- many
> sites can probably get away with publishing a single TXT record
> that says "please reject any mail that claims to originate at
> our domain which did not come from one of our MXes."

As I indicated:  the problem is likely to be small sites managed by
large hosting companies with offshored technical and support services.
The domain owner/user and her correspondents are unlikely to understand
the problem as anything other than "mail isn't getting through".
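
For reference, the record Gerald describes would look roughly like the
following in a zone file, using SPF's published record syntax ("mx"
designates the domain's MX hosts as permitted senders, "-all" asks
receivers to reject everything else):

    example.com.  IN  TXT  "v=spf1 mx -all"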



> >   - Not knowing how well the system will work until it's widely
> >     deployed.  At which point the aggregate investment may not be worth
> >     the results.
> 
> Some sites have already been rejecting hundreds of forgeries
> using SPF, even at its current barely-deployed level.

Hundreds out of what proportion of total mail?  I'm getting hundreds of
spams and viruses to a single POP account daily.

> http://archives.listbox.com/spf-discuss@v2.listbox.com/200309/0004.html

Note that the same mail reports an SPF configuration issue which caused
mail from a valid domain to be denied.



> >   - All of the suggested systems have an ongoing administrative
> >     overhead.  Particularly for aggregated domain administrators, the

> Yup, SPF is just one more thing to make sure you don't mess up.

Truthfully, this is my main objection.  And while it's not a huge
issue, the fact that it's a constant irritant and a perpetual
opportunity for human error leads me to strongly discount the method
*IF* an equivalent level of protection can be attained by reasonable
automated means.  One of my own guiding principles for system design:
remove the potential for human error.


> > Better:  come up with a system that works, immediately, if _one_ end of
> > the system is smart, isn't vulnerable to misleading information from a
> > remote host.  And has minimal downsides in the event someone's
> > wires get crossed over whether or not a host is valid.
> > 
> > The solution here is to give MTAs brains, and memory.  But let them be
> > Donne's island:  not dependent on any information on the remote host
> > other than its IP address and prior spam history.
> > 
> > That is:  as mail comes in, it's tested (at SMTP time) for spamminess,
> > and the mailserver keeps a running score of total amounts of mail, ham,
> > and spam, associated with a particular IP address.  Essentially a
> > reputation system.  This can be tweaked to favor (strongly if necessary)
> > recent behavior to discourage submarine attacks.
> 
> Sounds like greylisting:
> 
>     http://projects.puremagic.com/greylisting/
>     http://www.chiark.greenend.org.uk/~ian/sauce/
>     http://www.templetons.com/brad/spam/endspam.html

Somewhat.  Taking these in order:


Evan Harris's greylisting proposal adheres to criteria I agree with.
And it's an SMTP-time measure.  He limits his analysis to three bits of
envelope information (remote MTA IP, envelope sender, envelope
recipient).  The metric is simple and likely successful for the time
being, though spammer countermeasures are conceivable.

The main difference between this and my proposal is that I'd actually
look at some mail from a host to determine content.  A spamming host
might make several full DATA transmits (resulting in 550 permanent
errors), after which it would be blocked at the IP/SMTP level.
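
To make the triplet mechanism concrete, a minimal sketch in Python
(names are illustrative, and the whitelist/expiry bookkeeping a real
implementation needs is omitted):

    import time

    GREY_DELAY = 300      # seconds a new triplet must wait before acceptance
    first_seen = {}       # (ip, sender, recipient) -> time of first attempt

    def smtp_reply(ip, sender, recipient):
        """Decide the SMTP response for one delivery attempt."""
        now = time.time()
        first = first_seen.setdefault((ip, sender, recipient), now)
        if now - first < GREY_DELAY:
            # Legitimate MTAs queue and retry; most spamware doesn't.
            return "451 4.7.1 Greylisted, please retry later"
        return "250 OK"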



Ian Jackson's SAUCE sounds as if it may be similar in design to my
proposal, though the description's too high-level for me to be sure:
the same general ideas, with different specifics.



Brad Templeton's unnamed "best method", which I call delivery
preference, is in large part the inspiration for my own suggestion.  The
main difference is that where Templeton's proposal calls for a
centralized certification, I'm using existing spam classification tools
to allow this task to be decentralized.  A given site might depend only
on its own experience, or share data (uni- or bi-directionally) with a
set of peers.



The core elements of my proposal:

  - Use your own experience _and_ your own definition of what is spam,
    down to the user level if possible, in responding to it.

  - Use the _known fact_ of the remote MTA IP, and your own history
    with this address, to determine the service level (preferred,
    non-favored, delayed, denied) for a given message delivery; see the
    sketch after this list.  This calls for analyzing _some_, but not
    _all_, traffic (ultimately, those cycles become scarce).  Blocks can
    be at the SMTP, packet filtering, or DNS preference level.

    - SMTP filtering would either allow/deny a recognized IP, or
      allow/deny/delay based on content.

    - Packet filtering would involve firewalling off traffic from known
      bad hosts, or rerouting preferred traffic to a priority server.

    - DNS preference would involve providing differing MX information
      to querying hosts, if a determination could be made among them,
      such that preferred servers would get preferred routing, and
      undesirable servers would be routed to secondary or more congested
      facilities (also isolated from other useful services).

  - Additional reporting, and automated notification of spam-tracking
    services or ISPs, could also be worked into such a system, with the
    data distributed internally, shared among peers, or published.

  - Automate processing as much as possible.  Tuning should be
    accomplished through rules and filters, not by directly updating
    specific data.  The system shouldn't rely to the point of failure on
    a single off-site system or feature.
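
A rough sketch of the reputation piece in Python, under my own
assumptions (the class name, thresholds, and decay rate are all
illustrative, not taken from any existing tool):

    import time

    HALF_LIFE = 7 * 86400    # weight of old observations halves weekly

    class Reputation:
        """Per-IP ham/spam history with recency weighting."""

        def __init__(self):
            self.hosts = {}  # ip -> (ham, spam, last_update)

        def record(self, ip, is_spam):
            ham, spam, last = self.hosts.get(ip, (0.0, 0.0, time.time()))
            # Favor recent behavior: decay old counts before adding,
            # which discourages submarine attacks.
            factor = 0.5 ** ((time.time() - last) / HALF_LIFE)
            ham, spam = ham * factor, spam * factor
            if is_spam:
                spam += 1.0
            else:
                ham += 1.0
            self.hosts[ip] = (ham, spam, time.time())

        def service_level(self, ip):
            ham, spam, _ = self.hosts.get(ip, (0.0, 0.0, 0.0))
            total = ham + spam
            if total < 10:
                return "non-favored"  # little history: sample content
            if spam / total < 0.05:
                return "preferred"    # e.g. route to a priority server
            if spam / total < 0.5:
                return "delayed"
            return "denied"           # firewall off, or 550 at SMTP time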


Peace.

-- 
Karsten M. Self <kmself@ix.netcom.com>        http://kmself.home.netcom.com/
 What Part of "Gestalt" don't you understand?
  Backgrounder on the Caldera/SCO vs. IBM and Linux dispute.
      http://sco.iwethey.org/