[linux-elitists] GPG luser rant

Karsten M. Self kmself@ix.netcom.com
Fri Apr 13 03:31:31 PDT 2001


on Fri, Apr 13, 2001 at 01:25:58AM -0700, Joey Hess (joey@kitenet.net) wrote:
> Karsten M. Self wrote:
> >   - A cogent argument for why signing mail is a Good Thing®.
> 
> You may notice that I've not signed this mail. While I agree on most if
> not all of the technical points and even with a great deal of the
> background reasoning, I disagree on one central point: I don't believe
> that signing every mail, or even most of your mail, serves any useful
> purpose[1]. Instead, it dilutes the value of the occasional signed mail.

Your footnote (reading "Except for raising public awareness") is in fact
one reason for my campaign.  As should be clear from the simple fact
that I feel pressed to write a rant/FAQ on the topic, a significant
portion of the current response to the use of GPG-signed or encrypted
email is "you're weird, you're a crypto freak, I can't read your mail,
and you're being antisocial".

Part of what I hope to accomplish by using GPG all the time, and not
bending, is creating a world in which people realize that they need this
stuff (hey, it's free), or at least software that doesn't fall to pieces
when presented with it.

Better yet, if folks actually *do* get GPG installed on their systems,
when there comes a need to send private mail, the option to go encrypted
exists.  We're shooting for a baseline state in which a presumption of
the presence of cryptographic infrastructure is valid, and the ability
to originate, receive, and validate such communications exists.

Hell, if Microsoft can push the state of the art by forcing
compatibility issues, why can't I?

> Let me digress and explain the security arrangements that one of my most
> paranoid friends uses when he gpg signs something:
> 
> 1. Check that no one except he and his wife are in the house, and that
>    whatever outdoor security system he has is armed and doesn't detect
>    anyone nearby.
> 2. Unlock his safe, and remove the CD containing his private key and
>    a minimal linux system.
> 3. Disconnect all network cabling.
> 4. Hard reboot his computer onto this CD, mount the hard drive, and sign
>    the document.
> 5. Remove the CD, replace it in the safe, boot back up into standard
>    mode, get back on the network.

Appropriate, when the level of security desired requires these
measures.  But for all my own insistence on creating a base
infrastructure of cryptographic security mechanisms, this is probably
overkill for typical situations.

> Compare with my reaction when I see a signed mail from another
> (slightly hypothetical) friend, who I know keeps his private key on
> not just one, but multiple networked machines, as well as an often
> physically insecure laptop that I've even had root on before.  He also
> uses an environment variable to hold his key's password, so he need
> not type it in every time.  He can easily sign *everything*, and well,
> that sig is probably from him, but if things go south, I may need to
> resort to out of band communication and shared secrets to be sure.

First, this points directly to trust levels, which *are* built into PKI:
you can specify the degree to which you trust a key.  Your friend
Sloppy Joe (as opposed to See Shy Joe) probably rates low on your
scale, as well he should.
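
The arithmetic GnuPG applies to those ratings can be sketched in a few
lines.  This is a toy model under my own reading of GnuPG's defaults,
not anything from this thread: a key is treated as valid if it carries
a signature from one fully trusted key, or from three marginally
trusted ones.

```python
# A toy model (not GnuPG itself) of the web-of-trust arithmetic behind
# trust levels.  GnuPG's defaults: --completes-needed 1, --marginals-needed 3.
COMPLETES_NEEDED = 1
MARGINALS_NEEDED = 3

def key_validity(signers, ownertrust):
    """signers: keys that have signed the target key.
    ownertrust: your local judgment of each signer ('full', 'marginal',
    or 'none') -- the rating Sloppy Joe scores low on."""
    full = sum(1 for s in signers if ownertrust.get(s) == 'full')
    marginal = sum(1 for s in signers if ownertrust.get(s) == 'marginal')
    if full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED:
        return 'valid'
    return 'unknown'

trust = {'alice': 'full', 'bob': 'marginal', 'carol': 'marginal'}
assert key_validity(['alice'], trust) == 'valid'           # 1 full signer
assert key_validity(['bob', 'carol'], trust) == 'unknown'  # only 2 marginals
```

The point is that the trust judgment is local to your keyring: the same
key can be valid on my ring and unknown on yours.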

Note that even where a daily-use key exists and is used for most
communications, there may still be special-purpose keys which an
individual reserves for highly secure communications.  In my own case, I
have made a practice of keeping separate keys for separate purposes -- my
primary personal key, kept on my workstation at home, protected by a
passphrase.  A key for use at work.  A key on my laptop, used while
traveling.  All cross-signed.  But my desktop key doesn't travel.

While I haven't done so, I could as well create a highly secure key kept
isolated (say, in a safe, or an encrypted file), and used only on a
stand-alone system.   PKI allows for multinymity.   The infrastructure
doesn't currently provide for good support of this (which of the four
likely keys is Karsten using in this message, and how critically should
I trust it?).  But support could probably be provided.  This is an
identified but not quantified problem as yet, for most people -- hell,
they haven't even made it to square one.

I don't buy the argument that no security is preferable to an only
moderately secure system.  Fact is that *virtually all* real-world
security systems only serve to raise the opportunity cost of an exploit
marginally.  But as one hiker observed to the other when they
encountered a bear, most of the time you don't have to outrun the bear,
just the other hiker.  For casual opportunistic theft or security
breach, you want to present a harder target than the next guy.  For
targeted espionage, you're probably toast anyway -- Eve will
come in around your defenses, not through them.

> My next point has elements of FUD, but, I fear, elements of truth too.

Mostly FUD, though it's a good case to consider for rebuttal.

> Back in world war two, when crypto was fairly new, the worst way to
> destroy the usefulness of a cryptographic code was to use it
> incessantly, letting your enemy build up a large archive of data. They
> could then brute force crack it, or failing that, pounce on your first
> mistake and have a large body of data to work with.  

Most of the cryptosystems at the time were based on substitution
ciphers, susceptible to frequency analysis, particularly given a
sufficiently large body of intercepts.  Commonly used current ciphers,
including 3DES and Blowfish, are not based on substitutions and should
be secure against such attacks, as I understand crypto (which ain't all
that much, but I've got the refs).
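
To make the frequency-analysis point concrete, here's a minimal sketch
(mine, not from this thread) that breaks a Caesar-style substitution by
fitting letter counts to typical English frequencies -- and the larger
the body of ciphertext, the more reliable the fit, which is exactly why
heavy reuse killed those WWII-era systems.

```python
# Break a Caesar shift by chi-squared fit against English letter
# frequencies -- the simplest form of the frequency analysis that
# substitution ciphers fall to.
from collections import Counter
import string

# Approximate relative frequencies (percent) of letters in English text.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def encrypt(text, shift):
    """Caesar-shift the letters of text; leave other characters alone."""
    out = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

def crack(ciphertext):
    """Recover the shift whose decryption best matches English counts."""
    letters = ''.join(c for c in ciphertext.lower()
                      if c in string.ascii_lowercase)
    best_shift, best_score = 0, float('inf')
    n = len(letters)
    for shift in range(26):
        counts = Counter(encrypt(letters, -shift))
        score = sum((counts.get(ch, 0) - n * f / 100) ** 2 / (n * f / 100)
                    for ch, f in ENGLISH_FREQ.items())
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

sample = ("part of what i hope to accomplish by using gpg all the time "
          "and not bending is creating a world in which people realize "
          "that they need this stuff")
assert crack(encrypt(sample, 7)) == 7
```

With only a handful of letters the statistics are too noisy to single
out the right shift; give the analyst a paragraph and the key falls
out, which is the "large archive of data" problem in miniature.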

> While current cryptosystems have a much sounder theoretical grounding,
> I firmly believe they will still be cracked one way or another, and it
> may well turn out that the more data you have, the easier it is to
> crack a given key. (This is certainly true for recently cracked
> systems, such as WEP.)

As I understand, WEP was pretty much a weak algorithm from the get-go,
offering little more than access control to wireless devices and
networks, and as it turned out, flawed to boot.

Note that it's typically the newer, not older, algorithms which fall
prey to cryptanalysis.

> And with the FUD out of the way, I have one more reason to prefer to use
> signed and encrypted mail sparingly. When I do sign something, I want it
> to stand out like a beacon:
> 
> 	Joey Hess said this. He meant it, 100%. Take notice. 

Too late for that.  You write something.  I read it.   Most of the time
I have to believe it's you. 

Sorry, but you're good.  If you've got imitators, they are too.

> Most of my email is much more at the level of a telephone conversation,
> where sure, the guy on the other end of the line could be a clever
> imitation, but if he is, you'll find out when you're out to lunch with
> him next Wednesday.

There's far less context carried in ASCII than in the modulations of a
human voice, and far less opportunity for realtime challenge-response,
even of the informal (and possibly undetectable) sort that's possible
in a phone conversation.

You'll find my words signed.

Cheers.

-- 
Karsten M. Self <kmself@ix.netcom.com>    http://kmself.home.netcom.com/
 What part of "Gestalt" don't you understand?       There is no K5 cabal
  http://gestalt-system.sourceforge.net/         http://www.kuro5hin.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 232 bytes
Desc: not available
Url : http://allium.zgp.org/pipermail/linux-elitists/attachments/20010413/75f661f6/attachment.pgp 

