What’s the matter with PGP? ― Some Comments
This is a somewhat older blog post (2014), but it shows up in internet discussions from time to time, usually posted without context. I have some observations…
The post in question:
PGP keys suck
For historical reasons they tend to be large and contain lots of extraneous information, which makes it difficult to print them on a business card or manually compare.
Basically the idea here is that PGP keys are too long. We are invited to compare the length of the key used for a system called “miniLock” with some PGP keys:
MiniLock is pretty much defunct now. The short keys did not prevent it from becoming irrelevant. Some work with the Wayback Machine showed that the miniLock key is a public encryption key intended for anonymous, unauthenticated file encryption.
But we are talking about encrypted email here. Encrypted email users would not normally want to send exclusively anonymous, unauthenticated messages. So a signing public key is also required, which at least doubles the required size. Generally, the “PGP PUBLIC KEY BLOCK” has to contain everything that your correspondent will ever need to send you encrypted messages and verify your signed messages1). A single public encryption key is completely inadequate to the task. This is a comparison of apples to oranges. The size of my gym locker combination is just as relevant.
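To make the apples-to-oranges point concrete, here is a rough, illustrative size accounting of what a minimal modern (ECC) OpenPGP public key block carries compared to a single raw encryption key. All byte counts below are assumptions for illustration, not measurements of any real key:

```python
# Why a PGP public key block is bigger than one raw encryption key:
# it also carries a signing key, an identity, and self-signatures.
# Every number here is an assumed rough figure, for illustration only.

raw_x25519_key = 32               # what a miniLock-style system publishes
primary_ed25519 = 32 + 6          # signing key material + packet framing (assumed)
subkey_x25519 = 32 + 6            # encryption subkey + packet framing (assumed)
user_id = len("Alice Example <alice@example.org>") + 2
self_sig = 100                    # self-signature packet, rough figure
binding_sig = 100                 # subkey binding signature, rough figure

pgp_block = primary_ed25519 + subkey_x25519 + user_id + self_sig + binding_sig
print(raw_x25519_key, pgp_block)
```

Even with compact modern algorithms and before ASCII armoring, the block is several times the size of a bare encryption key, because it has to do several times the work.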
Since PGP keys aren’t designed for humans, you need to move them electronically. But of course humans still need to verify the authenticity of received keys, as accepting an attacker-provided public key can be catastrophic.
The catastrophic thing here is a man-in-the-middle (MITM) attack. In essence, a MITM attack on public key encryption is where someone tricks the message sender into using the wrong public encryption key. For the miniLock example you need to compare something that looks like this to prevent a MITM attack:
For the OpenPGP examples, the thing you would need to compare would look something like this in either case:
B268 0152 E274 EDE5 53C3 7C80 F80F A811 DE73 D33B
The OpenPGP “key fingerprint” is easier to compare because it contains only numbers and the first six letters of the alphabet, as opposed to the longer miniLock public key, which mixes numbers with both upper- and lower-case letters. The miniLock example is probably as good as it will ever get for using a public key directly, and it still doesn't win the contest against the key fingerprint. Other systems, such as quantum-resistant schemes, are likely to have public keys much larger than any of the examples given. A key fingerprint would then be required, and things would be very awkward if you had committed to using the public key directly.
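For context, the OpenPGP v4 fingerprint is a fixed-length SHA-1 digest over the public key packet (RFC 4880, section 12.2), which is why it is always 40 hex digits regardless of how large the key itself is. A minimal sketch, run here over a dummy packet body rather than a real key:

```python
import hashlib

def v4_fingerprint(public_key_packet_body: bytes) -> str:
    """OpenPGP v4 fingerprint per RFC 4880 section 12.2: SHA-1 over
    the octet 0x99, a two-octet big-endian packet length, and the
    public key packet body itself."""
    prefix = b"\x99" + len(public_key_packet_body).to_bytes(2, "big")
    digest = hashlib.sha1(prefix + public_key_packet_body).hexdigest().upper()
    # Group into the familiar 4-character blocks for human comparison.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Dummy packet body (version octet plus padding), NOT a real key.
fp = v4_fingerprint(b"\x04" + b"\x00" * 100)
print(fp)
```

Because the output is a hash, a quantum-resistant key of several kilobytes would still yield a fingerprint of exactly this shape.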
Now the post complains about the behaviour of GnuPG:
if you happen to do this with GnuPG 2.0.18 — one version off from the very latest GnuPG — the client won’t actually bother to check the fingerprint of the received key. A malicious server (or HTTP attacker) can ship you back the wrong key and you’ll get no warning. This is fixed in the very latest versions of GPG but… Oy Vey.
Here is the attack:
- You request a key from a malicious keyserver with a key fingerprint.
- The keyserver actually returns a key with a different key fingerprint.
- You go to certify the newly downloaded key and you can't find it.
- You might later notice the key that was actually downloaded and wonder where it came from.
As already clearly pointed out by the post, a PGP public key is a separate entity. It could come from anywhere, not just a keyserver. So the normal practice is to add it to your keyring and then attempt to certify it as representing the identity of your correspondent. This part of the post strikes me as misleading in that it presents what is, in practice, a pointless prank as a serious problem.
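The missing check is easy to state in code. A toy sketch, using a plain dict as a stand-in keyserver and SHA-1 of the raw key bytes as a stand-in fingerprint (both are assumptions for illustration, not how GnuPG is implemented):

```python
import hashlib

def fetch_key(keyserver: dict, requested_fpr: str) -> bytes:
    """Fetch a key by fingerprint and verify the fingerprint locally,
    i.e. the client-side check the post says GnuPG 2.0.18 skipped."""
    key = keyserver[requested_fpr]
    actual = hashlib.sha1(key).hexdigest()
    if actual != requested_fpr:
        raise ValueError("keyserver returned a key with a different fingerprint")
    return key

alice_fpr = hashlib.sha1(b"alice-key").hexdigest()
honest = {alice_fpr: b"alice-key"}       # returns the key that was asked for
malicious = {alice_fpr: b"mallory-key"}  # substitutes a different key

print(fetch_key(honest, alice_fpr))
try:
    fetch_key(malicious, alice_fpr)
except ValueError as e:
    print("rejected:", e)
```

With the check in place, the worst a malicious keyserver can do is cause the download to fail, which is the point being made above: the "attack" amounts to a prank.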
A key ID is a sort of nickname for a key fingerprint. By convention it is some portion of the key fingerprint, taken from the right-hand end. The value in the “Key ID” field is not the key ID of the given key fingerprint and is very unlikely to be any key ID at all. The post also fails to specify why V3 keys are bad and why anyone should be bothered by their continued support.
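For v4 keys the convention is concrete: the long (64-bit) key ID is simply the last 16 hex digits of the fingerprint. A minimal sketch, using the fingerprint shown earlier:

```python
def long_key_id(fingerprint: str) -> str:
    """For OpenPGP v4 keys, the long (64-bit) key ID is the low-order
    16 hex digits of the fingerprint, read from the right. The short
    (32-bit) key ID is the last 8 digits of that."""
    hexdigits = fingerprint.replace(" ", "")
    return hexdigits[-16:]

fpr = "B268 0152 E274 EDE5 53C3 7C80 F80F A811 DE73 D33B"
print(long_key_id(fpr))        # F80FA811DE73D33B
print(long_key_id(fpr)[-8:])   # DE73D33B  (short key ID)
```

So a claimed key ID can be checked against a fingerprint mechanically, which is how one can tell that the value in the post's “Key ID” field doesn't belong to the fingerprint shown.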
PGP key management sucks
For the record, classic PGP does have a solution to the problem. It’s called the “web of trust“, and it involves individuals signing each others’ keys. I refuse to go into the problems with WoT because, frankly, life is too short.
If the web of trust is not the solution to anything, and is terrible, then why bring it up? By bringing it up it is being implied that the web of trust is some sort of failed initiative. That isn't really true.
Normally PGP is used with no trusted third parties. That is an important capability and represents the ideal for end to end encrypted messaging. There is no reason to preclude the use of a third party with the consent of the user, and PGP supports this. So the normal “tree of trust” created by signing PGP public keys in your keyring becomes a “web of trust”. From time to time people will speculate that it might be possible to use the PGP web of trust for some sort of distributed trust scheme, but this isn't something that was or is done.
No forward secrecy
(Let’s not get into the NSA’s collect-it-all policy for encrypted messages. …
The linked Guardian article is not about NSA policy. It is about what the NSA is allowed to do by an oversight entity.
A few searches and some rough calculations reveal that it would not be possible for the NSA to collect all the encrypted traffic in the world unless it used a significant portion of all the storage that exists.
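As a back-of-envelope illustration of that rough calculation (the round numbers below are loudly assumed orders of magnitude for the mid-2010s, not sourced figures), compare a year of global internet traffic against a year of the world's shipped storage:

```python
# Illustrative arithmetic only: both figures are assumed round numbers.
ZB = 10 ** 21  # one zettabyte, in bytes

annual_internet_traffic_zb = 2    # assumed order of magnitude of global traffic
world_storage_shipped_zb = 1      # assumed annual worldwide shipped storage

# Fraction of a year's traffic that a whole year of shipped storage could hold.
fraction_storable = world_storage_shipped_zb / annual_internet_traffic_zb
print(fraction_storable)
```

Under these assumptions, capturing even one year of raw traffic would consume more storage than the entire world produces in that year, which is the point of the rough calculation above.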
Since the post was written, it has come to pass that almost all email in transit (>90%) is protected by SMTP STARTTLS. So the NSA and the like are not able to determine which email is encrypted and would have trouble targeting particular users.
… If the NSA is your adversary just forget about PGP.)
Four months after the post we are discussing here was written there was a leak that suggested that PGP encryption was on a short list of things that the NSA could not get access to. See this interesting article by the same author: https://blog.cryptographyengineering.com/2014/12/29/on-new-snowden-documents/
See the Forward Secrecy article for a more extensive discussion of forward secrecy and encrypted email.
The OpenPGP format and defaults suck
This section starts out with a whole shopping list of things that are made to sound like serious problems. Then this:
Most of these issues are not exploitable unless you use PGP in a non-standard way, e.g., for instant messaging or online applications.
So which issues are exploitable if you use PGP in a non-standard way? How non-standard are we talking here? Obviously anything can be misused if you work at it hard enough. We are left to guess what is actually meant here. I could, but will decline.
The now dead link suggests that the instant messaging system example is XMPP. Off the top of my head, there is nothing in the ways that PGP is currently used over XMPP that would make any of the listed attacks work.
See the Oracle Attack Immunity article for some context.
And some people do use PGP this way.
This implies that it would be a good idea to extend OpenPGP or to abandon it for something that can cover all possible use cases. There is a fundamental disagreement here. I have come to believe that PGP gets its high level of security and reliability from simplicity. There is simply nothing there to attack or break. PGP can be as simple as it is because it is used for stateless, offline applications like encrypted email and encrypted files. If one were to extend it to cover online, stateful systems then that simplicity would be lost. We already have a standard for that. It is called TLS. Why mess around with or abandon a well established, offline-capable standard like OpenPGP that is known to do a good job within its scope?
…client versions are clearly indicated in public keys.**
I agree that this is a poor practice in terms of security. I don't know why anyone would need to know the version of the program/app that exported a key. Fortunately it seems to be rare. I don't know how common it ever was. The popular GnuPG for instance does not do this.
** Most PGP keys indicate the precise version of the client that generated them (which seems like a dumb thing to do). However if you want to add metadata to your key that indicates which ciphers you prefer, you have to use an optional command.
The cipher preference list is generated and included at key generation time so it can be as strong as the underlying encryption. It normally should not be edited by regular users. See: Downgrade Attack Immunity. I am not sure how this relates to the issue of including a version.
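RFC 4880's selection rule can be sketched briefly: the sender picks an algorithm from the intersection of its own capabilities and the recipient's preference list, and because that list is part of the self-signed key, an attacker cannot strip or reorder it to force a weak cipher without invalidating the signature. A minimal sketch (the algorithm names are illustrative):

```python
def choose_cipher(recipient_prefs, sender_supported):
    """OpenPGP-style cipher selection: take the recipient's first
    preference that the sender also supports. The preference list
    travels inside the recipient's self-signature, so it is as
    tamper-resistant as the key itself."""
    for alg in recipient_prefs:
        if alg in sender_supported:
            return alg
    # RFC 4880: TripleDES is implicitly at the end of every preference
    # list, since all implementations must support it.
    return "3DES"

prefs = ["AES256", "AES192", "AES128"]  # generated at key creation time
print(choose_cipher(prefs, {"AES128", "3DES"}))
```

This is why the list is generated at key creation and should not be casually edited: weakening or reordering it weakens the downgrade protection.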
Terrible mail client implementations
I certainly agree that we have a long way to go to make end to end encrypted messaging usable. This isn't specific to PGP encrypted email however. An example:
In a usability study involving Signal2), 21 out of 28 computer science students failed to establish and maintain a secure end to end encrypted connection. The usability of end to end encrypted messaging is a serious issue. We should not kid ourselves into thinking it is a solved issue.
They demand you encrypt your key with a passphrase, but routinely bug you to enter that passphrase in order to sign outgoing mail — exposing your decryption keys in memory even when you’re not reading secure email.
I hope that the author is not suggesting that emails should routinely be sent anonymously (unsigned, unauthenticated).