On Mon, Jun 22, 2015 at 1:35 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds security:
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
Well, that depends on what you call "inadequate". If points are validated, a blinding factor of n/2 bits is sufficient
even for Pseudo-Mersenne primes, but this twist attack requires n-bit blinding.
Thanks for clarifying, I missed that.
If you could clarify further, what do you mean by "point validation"? Is it:
(A) = Point-on-curve
(B) = (A) + Point-not-in-small-subgroup
(C) = (A) + Point-in-main-subgroup
To prevent the Lochter issue I think (A) suffices. Do you think (B)
or (C) are important as well?
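To make (A) and (B) concrete, here's a rough Python sketch (mine, not from the paper; function names are made up, and side-channel/constant-time concerns are ignored) for a Montgomery curve like Curve25519. With x-only coordinates, (A) is a Legendre-symbol check, and (B) can be done by doubling three times and checking for the identity:

```python
# Illustrative sketch of checks (A) and (B) for Curve25519, x-only.
# Not production code.

P = 2**255 - 19          # field prime
A = 486662               # curve coefficient: y^2 = x^3 + A*x^2 + x
A24 = (A + 2) // 4       # = 121666, used in the doubling formula

def check_A_on_curve(x):
    """Check (A): x is the x-coordinate of a point on the curve
    (rather than on the quadratic twist)."""
    x %= P
    v = (x*x*x + A*x*x + x) % P        # v = y^2 if the point is on the curve
    return v == 0 or pow(v, (P - 1) // 2, P) == 1   # Legendre symbol

def xdbl(X, Z):
    """x-only Montgomery doubling (same formulas as the RFC 7748 ladder)."""
    AA = (X + Z) * (X + Z) % P
    BB = (X - Z) * (X - Z) % P
    C = (AA - BB) % P                  # = 4*X*Z
    return AA * BB % P, C * (BB + A24 * C) % P

def check_B_not_small_order(x):
    """Check (B): reject points of order dividing the cofactor 8.
    [8]Q = identity  <=>  the projective Z becomes 0 after 3 doublings."""
    X, Z = x % P, 1
    for _ in range(3):
        X, Z = xdbl(X, Z)
    return Z % P != 0
```

E.g. the base point x = 9 passes both checks, while x = 0 and x = 1 (the order-2 and order-4 points) fail (B). Implementations that instead blacklist the short list of known small-order x-coordinates are doing the same thing without the doublings.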
Post by Johannes Merkle
Post by Trevor Perrin
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
(2) and (4) might not be satisfied in many important applications, but we should select curves that provide security
independent of specific properties of the application.
Maybe, but I agree with DJB the paper overstates its claims: it
doesn't apply to DH or ECDSA unless combined with unusual device
access (ability to read DH output or change the ECDSA base point).
Post by Johannes Merkle
Post by Trevor Perrin
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bit scalars) due to scalar blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
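(To spell out the countermeasure for anyone following along: scalar blinding replaces the secret scalar k with k' = k + r*n for a fresh random r, where n is the group order, so the scalar actually used varies per execution but [k']P = [k]P. A toy illustration of the algebra in a small multiplicative group, with made-up parameters:

```python
import secrets

# Toy group: g = 2 has order q = 11 modulo the prime p = 23.
p, q, g = 23, 11, 2

k = 7                         # the "secret" exponent
r = secrets.randbelow(2**64)  # random blinding factor (64 bits here)
k_blind = k + r * q           # blinded exponent, different every run

# Same group element either way, since g^q = 1 (mod p):
assert pow(g, k_blind, p) == pow(g, k, p)
```

The sidechannel point is that k_blind changes on every execution, so scalar bits leaked from one run don't accumulate across runs -- unless r is too short, which is the "inadequate blinding factor" condition (3) above.)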
I am not sure what you are comparing here. Your statement "128 bit blinding factors for a special-prime curve like
Curve25519, vs 64 bits for a 'random-prime' curve" holds only in the absence of the twist attack under discussion. If an
implementation satisfies conditions (1), (2), and (4) above and does not validate points, it needs to use 252-bit blinding
factors, resulting in a slowdown of 59% for random primes (504 vs 316 bit scalars) and a slowdown of approx. 20% for
special primes (379 vs 316 bit scalars). For higher security levels (e.g. for Ed448-Goldilocks) the impact is larger.
OK, so the slowdown for special primes vs random primes with this
blinding countermeasure, for 256-bit curves, would be ~1.33x.
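(Just restating the arithmetic, using the scalar lengths from your message for a ~252-bit group order; the three ratios implicit in those numbers are:

```python
# Scalar bit-lengths from the discussion (256-bit curves, ~252-bit order):
base          = 252 + 64    # 316: 64-bit blinding, random prime, points validated
special_valid = 252 + 127   # 379: ~n/2-bit blinding, special prime, points validated
full_blinding = 252 + 252   # 504: n-bit blinding, no point validation

print(round(full_blinding / base, 2))           # ~1.59x, the "59%" figure
print(round(special_valid / base, 2))           # ~1.2x, the "approx. 20%" figure
print(round(full_blinding / special_valid, 2))  # ~1.33x, the ratio above
```

)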
Post by Johannes Merkle
My conclusion from that paper (and I have not contributed to it) would be that twist security does not provide any
performance benefit for implementations that need to thwart this attack with longer blinding factors. Thus, if someone
wants to select curves specifically for hostile-environment scenarios (smart cards etc.), it is questionable whether twist
security should be a requirement, as it might be misleading to implementors.
I think that's the important question: Does point validation in DH
affect criteria for choosing curves?
You mention choosing curves for "hostile environment" scenarios. I
think the world prefers universal crypto standards, so I'll focus on
that:
I think (A) has similar cost on different curves, so doesn't affect
curve choice.
You argue that (A) makes twist-security unnecessary and dangerous,
since it might mislead implementors into not doing (A). That argument
seems weak:
* The conservative choice would be both: have twist-secure curves
*and* do point-on-curve checks.
* The point-on-curve check has a real complexity and performance cost
(e.g. several percent of scalar multiply). It seems reasonable to let
some implementations (e.g. high-speed SW) skip it, and rely on twist
security.
(B) only applies to cofactor>1 curves, but isn't an expensive check
(reject a list of input points). So this weakly argues for
cofactor=1.
(C) argues more strongly for cofactor=1, since for cofactor>1 the
check is typically a full scalar multiply. But is check (C) important
here? Let's assume check (B) is done and that the private scalar is a
multiple of the cofactor, like X25519, so the output ends up in the
main subgroup.
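(For reference, X25519 arranges "scalar is a multiple of the cofactor" by clamping the private key, per RFC 7748 -- the function name here is mine:

```python
def x25519_clamp(sk: bytes) -> bytes:
    """Clamp a 32-byte X25519 private key as specified in RFC 7748:
    clear the 3 low bits (scalar becomes a multiple of the cofactor 8),
    clear bit 255 and set bit 254 (fixes the scalar's bit length)."""
    k = bytearray(sk)
    k[0] &= 248
    k[31] &= 127
    k[31] |= 64
    return bytes(k)
```

Because the scalar is a multiple of 8, any small-subgroup component of the input point is annihilated, so the output lands in the main subgroup even if check (B) is skipped.)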
The only complaint I see is that if you don't check (C), multiple DH
inputs would map to the same output. This could be confusing if
you're using long-term DH keys for your identity, but it can be
avoided by hashing the public keys into a session key, or by MAC'ing
them with every message sent. But there's an argument that (C) would
remove this property and mean one less thing to go wrong; also, some
people might be nervous about IPR for hashing-into-session-key with DH
identity keys, in some very specific cases (see Microsoft's KEA+).
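The hashing mitigation I mean is just: derive the session key from the DH output together with both public keys, so two inputs that map to the same raw DH output still yield distinct session keys. A sketch, with made-up framing (a real protocol would use a proper KDF like HKDF with labels):

```python
import hashlib

def session_key(dh_output: bytes, pk_initiator: bytes, pk_responder: bytes) -> bytes:
    """Bind the session key to both public keys, not just the raw DH output.
    Illustrative only; field names and layout are invented for this sketch."""
    h = hashlib.sha256()
    for part in (dh_output, pk_initiator, pk_responder):
        # Length-prefix each field so distinct inputs can't collide by
        # shifting bytes between fields.
        h.update(len(part).to_bytes(2, "big") + part)
    return h.digest()
```
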
Anyways, I'm curious what you (or others) think about the importance
of these checks, and how they affect curve preferences.
Trevor