Post by Gregory Maxwell
You cannot reliably engineer around raw carelessness or indifference.

Actually, you can. Consider, e.g., the verified seL4 microkernel, which
enforces various clear security policies against _all other software_,
no matter how carelessly that software is written.

There's tremendous ongoing work into making the underlying engineering
techniques less expensive, more broadly applicable (the kernel is only
one part of the security picture), and more functional from the user's
perspective.

An important part of "less expensive" is systematically reducing the
size of the trusted computing base (TCB). In the crypto context this is
what leads to decisions such as
* replacing separate encryption+signatures with pure DH,
* implementing DH with Montgomery's very simple x-ladder formulas,
* choosing twist-secure Montgomery curves,
* focusing on one curve at an ample security level,
* choosing a prime very close to a power of 2,
etc., and in general getting the amount of critical crypto code down to
something that we can afford to convincingly verify. The fastest way
that we can get to wide deployment of verified software for encryption
_and_ authentication is by focusing on the simplest possible protocol
using X25519 and a similarly simple symmetric-cipher suite.
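
To make the "very simple x-ladder formulas" point above concrete, here
is a rough Python sketch of X25519 scalar multiplication following the
RFC 7748 description. (The structure and names loosely follow the RFC;
this is an illustration, not deployable code --- Python big integers are
not constant-time, so a real implementation needs constant-time field
arithmetic.)

```python
# Sketch of X25519 scalar multiplication via the Montgomery x-ladder,
# following RFC 7748.  NOT constant-time; illustration only.

P = 2**255 - 19          # the prime very close to a power of 2
A24 = 121665             # (486662 - 2) / 4 for Curve25519

def decode_scalar(k: bytes) -> int:
    b = bytearray(k)
    b[0] &= 248          # clamp: clear the low 3 bits,
    b[31] &= 127         # clear the top bit,
    b[31] |= 64          # set bit 254
    return int.from_bytes(b, "little")

def decode_u(u: bytes) -> int:
    b = bytearray(u)
    b[31] &= 127         # mask the unused top bit
    return int.from_bytes(b, "little")

def cswap(swap: int, a: int, b: int):
    # Conditional swap, written branch-free in spirit.
    mask = -swap                     # all-ones if swap == 1
    d = mask & (a ^ b)
    return a ^ d, b ^ d

def x25519(k: bytes, u: bytes) -> bytes:
    x1 = decode_u(u)
    n = decode_scalar(k)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        kt = (n >> t) & 1
        swap ^= kt
        x2, x3 = cswap(swap, x2, x3)
        z2, z3 = cswap(swap, z2, z3)
        swap = kt
        # One ladder step: a handful of field operations, no branches,
        # and it works for every input u --- no point validation needed.
        a = (x2 + z2) % P;  aa = a * a % P
        b = (x2 - z2) % P;  bb = b * b % P
        e = (aa - bb) % P
        c = (x3 + z3) % P;  d = (x3 - z3) % P
        da = d * a % P;     cb = c * b % P
        x3 = (da + cb) % P; x3 = x3 * x3 % P
        z3 = (da - cb) % P; z3 = z3 * z3 % P * x1 % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    x2, x3 = cswap(swap, x2, x3)
    z2, z3 = cswap(swap, z2, z3)
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")
```

Note how small the whole thing is: one loop, a fixed sequence of field
operations, and no per-input special cases --- exactly the shape of code
that verification tools can afford to handle.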

(We also have to do something to handle the risk of quantum computers;
but this doesn't justify doing anything more complicated than X25519 for
the pre-quantum part. Poor connectivity sometimes forces people to sign
data rather than using encryption to authenticate; but this doesn't
justify delaying deployment of pure X25519 solutions for the Internet.)
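
As a tiny illustration of the "prime very close to a power of 2" point
above: for p = 2^255 - 19 we have 2^255 = 19 (mod p), so reduction needs
only shifts, a mask, and a multiplication by 19, never a general
division. A rough sketch (function name is mine, not from any particular
library):

```python
P = 2**255 - 19

def fold(x: int) -> int:
    # 2^255 is congruent to 19 (mod p), so x = hi*2^255 + lo reduces
    # to lo + 19*hi: a shift, a mask, and a small multiplication.
    while x >> 255:
        hi, lo = x >> 255, x & ((1 << 255) - 1)
        x = lo + 19 * hi
    return x - P if x >= P else x
```

A carry-chain version of the same trick is what keeps verified
field-arithmetic code for this prime short and fast.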

The same simplifications also have immediate side benefits such as
eliminating invalid-curve attacks and other similar real-world
vulnerabilities. Of course you can blame
these vulnerabilities on "carelessness"---many standards clearly tell
the programmers to add point-validation code!---but you have to realize
that you're advocating _more code_, and that this is just one out of
thousands of bad decisions that have made today's TCBs far too large.
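
For contrast, here is a sketch of the point-validation check that
short-Weierstrass standards demand and that implementations keep
omitting (toy curve parameters, chosen just for illustration) --- extra
code whose absence becomes an invalid-curve attack, and exactly the code
a twist-secure x-only ladder lets you delete:

```python
def on_curve(x: int, y: int, a: int, b: int, p: int) -> bool:
    # Validate a received point against y^2 = x^3 + a*x + b (mod p).
    # Skipping this on a short-Weierstrass curve lets an attacker feed
    # you points on a different, weaker curve.
    return (y * y - (x * x * x + a * x + b)) % p == 0
```

For example, on the toy curve y^2 = x^3 + 2x + 3 over GF(97), the point
(3, 6) validates and (3, 7) does not. Every caller has to remember to
run this check; the x-ladder design makes the check unnecessary instead.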

You're working on a much more complicated bleeding-edge crypto protocol,
trying to achieve more advanced security features. These features seem
to require a significantly larger TCB. Of course having two metrics---
(1) the size of the TCB for core security features and
(2) the size of the TCB for more advanced security features
---means that it's not _necessarily_ possible to optimize both at the
same time, but this raises some really basic questions:
* Has anyone seriously tried to optimize #2?
* Does optimizing #1 do any noticeable damage to #2?

Anyway, if there _is_ a tension, then #1 is much more important---and a
much higher priority. Users need data encrypted and authenticated on a
massive scale; we need to get this done, and we need to get it right.