Discussion:
[curves] Twist security for elliptic curves
Alexandre Anzala-Yamajako
2015-06-18 21:55:35 UTC
Permalink
Apologies if this has been raised before.
Has anybody had time to read this paper yet?
http://eprint.iacr.org/2015/577
According to the authors, the PointOnCurve check needs to be done even if
the curve is twist-secure, and they describe an attack for the case where it
is omitted.

Here is the full abstract:
Several authors suggest that the use of twist secure Elliptic Curves
automatically leads to secure implementations. We argue that even for twist
secure curves a point validation has to be performed. We illustrate this
with examples where the security of EC-algorithms is strongly degraded,
even for twist secure curves.

We show that the usual blinding countermeasures against SCA are insufficient
(actually they introduce weaknesses) if no point validation is performed,
or if an attacker has access to certain intermediate points. In this case
the overall security of the system is reduced to the length of the blinding
parameter. We emphasize that our methods work even in the case of a very
high identification error rate during the SCA-phase.
--
Alexandre Anzala-Yamajako
Watson Ladd
2015-06-19 02:43:14 UTC
Permalink
On Thu, Jun 18, 2015 at 2:55 PM, Alexandre Anzala-Yamajako
Post by Alexandre Anzala-Yamajako
Apologies if this has been raised before.
http://eprint.iacr.org/2015/577
According to the authors the PointOnCurve check needs to be done even if the curve is twist-secure and they describe an attack if it was forgotten.
Several authors suggest that the use of twist secure Elliptic Curves automatically leads to secure implementations. We argue that even for twist secure curves a point validation has to be performed. We illustrate this with examples where the security of EC-algorithms is strongly degraded, even for twist secure curves.
We show that the usual blinding countermeasures against SCA are insufficient (actually they introduce weaknesses) if no point validation is performed, or if an attacker has access to certain intermediate points. In this case the overall security of the system is reduced to the length of the blinding parameter. We emphasize that our methods work even in the case of a very high identification error rate during the SCA-phase.
I was extremely unimpressed with the paper, which shows the opposite
of what its introduction claims. The paper assumes an implementation
which will operate on either the curve or the twist. Without twist
security, this implementation is obviously broken. With twist
security, they need an SCA attack, which is only possible because the
blinding length is shorter than the scalar. The paper summarizes this
as twist security being harmful, by arguing that the necessary checks
are removed by using a twist-secure curve. But one could also
summarize this as twist security turning an easy-to-mount attack into
a much harder one.

I'm unaware of anyone making the claims attributed in the paper. All
DJB has said is that twist security removes the need for point
validation on the Montgomery ladder without SCA considerations, and
that's all that his implementations claim also. (Yes, they are
resistant to timing attacks, but that's a slightly different set of
considerations: there is nothing claimed about EM side channels).
Furthermore, I'm unaware of protocols that can be attacked with what
is in this paper: most points are hashed, and signatures do not take
attacker-controlled input.

Using large blinding factors solves this problem, and was proposed
when the question of side-channel resistance for special primes was
discussed in the CFRG.

Sincerely,
Watson Ladd
_______________________________________________
Curves mailing list
https://moderncrypto.org/mailman/listinfo/curves
--
"Man is born free, but everywhere he is in chains".
--Rousseau.
Trevor Perrin
2015-06-19 21:15:30 UTC
Permalink
On Thu, Jun 18, 2015 at 2:55 PM, Alexandre Anzala-Yamajako
Post by Alexandre Anzala-Yamajako
http://eprint.iacr.org/2015/577
Mostly agree with Watson, but I think there's an interesting question here.

The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
security, even for twist-secure curves:
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
(4) attacker can observe the input and output points

That's a rare set of conditions (particularly last 2).

This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".

(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
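
The scalar-blinding countermeasure being discussed just adds a random
multiple of the group order to the secret scalar before the ladder runs;
since (k + r*n)P = kP on a group of order n, the computed point is
unchanged. A minimal sketch (function name and parameters are mine, for
illustration):

```python
import secrets

def blind_scalar(k: int, n: int, blind_bits: int) -> int:
    """Return k + r*n for a random r of blind_bits bits.

    Since (k + r*n)*P == k*P on a group of order n, the result of the
    scalar multiplication is unchanged, but the bit pattern the ladder
    processes differs on every run.
    """
    r = secrets.randbits(blind_bits)
    return k + r * n
```

The whole disagreement above is about how large blind_bits must be: ~64
bits for a random prime, versus up to the full scalar length when the
twist attack applies and no point validation is done.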


Still, is there an argument that point-validation is a good
"robustness principle", even with twist-secure curves?

And if so - if implementations should perform point validation
regardless of twist-security - does that have any effect on curve
selection? I think the answer is no - twist-secure curves are more
robust and should be preferred. But I'd be curious if anyone thinks
otherwise.


Trevor
Michael Hamburg
2015-06-19 21:20:45 UTC
Permalink
Post by Trevor Perrin
On Thu, Jun 18, 2015 at 2:55 PM, Alexandre Anzala-Yamajako
Post by Alexandre Anzala-Yamajako
http://eprint.iacr.org/2015/577
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
Still, is there an argument that point-validation is a good
"robustness principle", even with twist-secure curves?
And if so - if implementations should perform point validation
regardless of twist-security - does that have any effect on curve
selection? I think the answer is no - twist-secure curves are more
robust and should be preferred. But I'd be curious if anyone thinks
otherwise.
Trevor
I prefer to validate all points if there isn’t a big perf/complexity hit, because that way the protocol designer doesn’t have to take twist points into account. But I still think curves should be selected as twist-secure if there isn’t a good reason to do otherwise. Some people will prefer the 20-line Curve25519-style Montgomery ladder, and there’s very little cost to giving those folks security against non-DPA-equipped adversaries.
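
For readers who haven't seen it: the ladder being referred to really is about that short. An illustrative Python transcription of the X25519-style ladder (plain Python integers, so it is not constant-time; for exposition only, with all of this thread's side-channel caveats):

```python
# Curve25519 x-only Montgomery ladder, transcribed for illustration.
# NOT constant-time: Python big integers leak timing, which is exactly
# the class of side channel this thread is about.
P = 2**255 - 19
A24 = 121665  # (486662 - 2) / 4 for Curve25519

def ladder(k: int, u: int) -> int:
    """Return the u-coordinate of k times the point with u-coordinate u."""
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in reversed(range(255)):
        bit = (k >> t) & 1
        swap ^= bit
        if swap:  # a real implementation uses a constant-time cswap
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        a, b = (x2 + z2) % P, (x2 - z2) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P
        aa, bb = a * a % P, b * b % P
        e = aa - bb
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap:
        x2, z2 = x3, z3
    return x2 * pow(z2, P - 2, P) % P
```

Note that nothing here validates u: every input is accepted, which is precisely where twist security comes in.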

— Mike
Johannes Merkle
2015-06-22 08:35:52 UTC
Permalink
Post by Trevor Perrin
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
Well, that depends on what you call "inadequate". If points are validated, a blinding factor of n/2 bits is sufficient
even for Pseudo-Mersenne primes, but this twist attack requires n-bit blinding. So the attack indeed requires longer
blinding than would otherwise be necessary.
Post by Trevor Perrin
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
(2) and (4) might not be satisfied in many important applications, but we should select curves that provide security
independent of specific properties of the application.
Post by Trevor Perrin
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
I am not sure what you are comparing here. Your statement "128 bit blinding factors for a special-prime curve like
Curve25519, vs 64 bits for a 'random-prime' curve" holds only in the absence of the twist attack under discussion. If an
implementation satisfies conditions (1), (2) and (4) above and does not validate points, it needs to use 252-bit blinding
factors, resulting in a slowdown of 59% for random primes (504 vs 316 bit scalars) and a slowdown of approx. 20% for
special primes (379 vs 316 bit scalars). For higher security levels (e.g. for Ed448-Goldilocks) the impact is larger.

My conclusion from that paper (to which I have not contributed) would be that twist security does not provide any
performance benefit for implementations that need to thwart this attack with longer blinding factors. Thus, if someone
wants to select curves specifically for hostile environment scenarios (smart cards etc.), it is questionable whether
twist security should be a requirement, as it might be misleading to implementors.
--
Johannes
Watson Ladd
2015-06-22 12:09:27 UTC
Permalink
Post by Johannes Merkle
Post by Trevor Perrin
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
Well, that depends on what you call "inadequate". If points are
validated, a blinding factor of n/2 bits is sufficient even for
Pseudo-Mersenne primes, but this twist attack requires n-bit blinding.
So the attack indeed requires longer blinding than would otherwise be
necessary.
Why is n/2 bits sufficient? Bits n-1 through n/2 don't appear to be
protected with this proposal.
Post by Johannes Merkle
Post by Trevor Perrin
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
(2) and (4) might not be satisfied in many important applications, but we
should select curves that provide security independent of specific
properties of the application.
Does Brainpool offer the same security claims when given an oracle that
computes xG for any G? The answer is no: an attack due to Cheon queries
the oracle repeatedly to obtain xG, x^2 G, and so on, then uses this
information to recover x faster than a generic discrete log.

In practice this sort of oracle attack isn't a concern. But given the
assumptions in the paper, it is. This makes it harder to analyze the
improvement gained by SCA.
Post by Johannes Merkle
Post by Trevor Perrin
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
I am not sure what you are comparing here. Your statement "128 bit
blinding factors for a special-prime curve like Curve25519, vs 64 bits
for a 'random-prime' curve" holds only in the absence of the twist
attack under discussion. If an implementation satisfies conditions
(1), (2) and (4) above and does not validate points, it needs to use
252-bit blinding factors, resulting in a slowdown of 59% for random
primes (504 vs 316 bit scalars) and a slowdown of approx. 20% for
special primes (379 vs 316 bit scalars). For higher security levels
(e.g. for Ed448-Goldilocks) the impact is larger.

My conclusion from that paper (to which I have not contributed) would
be that twist security does not provide any performance benefit for
implementations that need to thwart this attack with longer blinding
factors. Thus, if someone wants to select curves specifically for
hostile environment scenarios (smart cards etc.), it is questionable
whether twist security should be a requirement, as it might be
misleading to implementors.

How many EC operations does a smart card need to do per second? Of course
we already know that validation is cheap, and standards don't specify
blinding factors. Yet somehow hardware implementations manage to figure
out how long a blinding factor to use: surely they can figure out that
they have to compute a Jacobi symbol the same way.
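
For what it's worth, the check is small: for an x-only ladder on a Montgomery curve B*y^2 = x^3 + A*x^2 + x over GF(p), an input x lies on the curve (rather than its twist) exactly when (x^3 + A*x^2 + x)/B is a square mod p. A sketch (function names are mine, for illustration):

```python
def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def x_on_curve(x: int, A: int, B: int, p: int) -> bool:
    """True iff some y satisfies B*y^2 = x^3 + A*x^2 + x (mod p)."""
    rhs = (x * x * x + A * x * x + x) % p
    if rhs == 0:
        return True  # (x, 0) is a 2-torsion point on the curve
    # x is on the curve iff rhs/B is a quadratic residue mod p
    return jacobi(rhs * pow(B, -1, p) % p, p) == 1
```

For Curve25519 (A = 486662, B = 1, p = 2^255 - 19) this is one Jacobi-symbol evaluation per input point, far cheaper than the scalar multiplication itself.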

I've got a list of ECC implementations that don't validate points in
protocols. It includes Bouncycastle, Go's TLS stack, a browser, and several
embedded TLS implementations, and it is definitely incomplete.

None of the ensuing vulnerabilities would exist with a secure
x-coordinate-only system like Curve25519.

Sincerely,
Watson Ladd
Trevor Perrin
2015-06-22 23:57:25 UTC
Permalink
On Mon, Jun 22, 2015 at 1:35 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
Well, that depends on what you call "inadequate". If points are validated, a blinding factor of n/2 bits is sufficient
even for Pseudo-Mersenne primes, but this twist attack requires n-bit blinding.
Thanks for clarifying, I missed that.

If you could clarify further, what do you mean by "point validation"? Is it:
(A) = Point-on-curve
(B) = (A) + Point-not-in-small-subgroup
(C) = (A) + Point-in-main-subgroup

To prevent the Lochter issue I think (A) suffices. Do you think (B)
or (C) are important as well?
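
For concreteness, check (A) is just the curve equation. A sketch for a short-Weierstrass curve y^2 = x^3 + ax + b over GF(p), instantiated with the public NIST P-256 domain parameters (the helper name is mine):

```python
# NIST P-256 domain parameters (public constants)
P256_P = 2**256 - 2**224 + 2**192 + 2**96 - 1
P256_A = P256_P - 3
P256_B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def on_curve(x: int, y: int) -> bool:
    """Check (A): the affine point (x, y) satisfies y^2 = x^3 + ax + b."""
    if not (0 <= x < P256_P and 0 <= y < P256_P):
        return False
    return (y * y - (x * x * x + P256_A * x + P256_B)) % P256_P == 0
```

For a cofactor-1 curve like this, (B) and (C) add nothing beyond rejecting the identity; cofactor greater than 1 is where they start to differ.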
Post by Johannes Merkle
Post by Trevor Perrin
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
(2) and (4) might not be satisfied in many important applications, but we should select curves that provide security
independent of specific properties of the application.
Maybe, but I agree with DJB the paper overstates its claims: it
doesn't apply to DH or ECDSA unless combined with unusual device
access (ability to read DH output or change the ECDSA base point).
Post by Johannes Merkle
Post by Trevor Perrin
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
I am not sure what you are comparing here. Your statement "128 bit blinding factors for a special-prime curve like
Curve25519, vs 64 bits for a "random-prime" holds only in the absence of the twist attack under discussion. If an
implementation satisfies conditions (1),(2) and (4) above and does not validate points, it needs to use 252 bit blinding
factors, resulting in a slowdown of 59% for random primes (504 vs 316 bit scalars) and a slowdown of approx. 20% for
special primes (379 vs 316 bit scalars). For higher security levels (e.g. for Ed448-Goldilocks) the impact is larger.
OK, so the slowdown for special primes vs random primes with this
blinding countermeasure, for 256-bit curves, would be ~1.33x.
Post by Johannes Merkle
My conclusion of that paper (and I have not contributed to it) would be that twist security does not provide any
performance benefit for implementations that need to thwart this attack by longer blinding factors. Thus, if someone
wants to select curves specifically for hostile environment scenarios (smart cards etc), it is questionable if twist
security should be a requirement, as it might be misleading to implementors.
I think that's the important question: Does point validation in DH
affect criteria for choosing curves?

You mention choosing curves for "hostile environment" scenarios. I
think the world prefers universal crypto standards, so I'll focus on
that:

I think (A) has similar cost on different curves, so doesn't affect
curve choice.

You argue that (A) makes twist-security unnecessary and dangerous,
since it might mislead implementors into not doing (A). That argument
seems weak:
* The conservative choice would be both: have twist-secure curves
*and* do point-on-curve checks.
* The point-on-curve check has a real complexity and performance cost
(e.g. several percent of scalar multiply). It seems reasonable to let
some implementations (e.g. high-speed SW) skip it, and rely on twist
security.

(B) only applies to cofactor>1 curves, but isn't an expensive check
(reject a list of input points). So this weakly argues for
cofactor=1.

(C) argues more strongly for cofactor=1, since for cofactor>1 the
check is typically a full scalar multiply. But is check (C) important
here? Let's assume check (B) is done and that the private scalar is a
multiple of the cofactor, like X25519, so the output ends up in the
main subgroup.

The only complaint I see is that if you don't check (C), multiple DH
inputs would map to the same output. This could be confusing if
you're using long-term DH keys for your identity. This could be
avoided by hashing the public keys into a session key, or by MAC'ing
them with every message sent. But there's an argument that (C) would
remove this property and mean one less thing to go wrong; also, some
people might be nervous about IPR for hashing-into-session-key with DH
identity keys, in some very specific cases (see Microsoft's KEA+).

Anyways, I'm curious what you (or others) think about the importance
of these checks, and how they affect curve preferences.

Trevor
Johannes Merkle
2015-06-25 12:42:40 UTC
Permalink
Post by Trevor Perrin
On Mon, Jun 22, 2015 at 1:35 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
Mostly agree with Watson, but I think there's an interesting question here.
The paper argues "even for twist secure curves a point validation has
to be performed". They give a case where point validation adds
(1) power or EM sidechannel can observe bits of the scalar during
scalar multiplication
(2) implementation performs scalar multiplication (aka DH) with fixed
private key
(3) implementation uses a scalar blinding countermeasure with
inadequate blinding factor
Well, that depends on what you call "inadequate". If points are validated, a blinding factor of n/2 bits is sufficient
even for Pseudo-Mersenne primes, but this twist attack requires n-bit blinding.
Thanks for clarifying, I missed that.
(A) = Point-on-curve
(B) = (A) + Point-not-in-small-subgroup
(C) = (A) + Point-in-main-subgroup
To prevent the Lochter issue I think (A) suffices. Do you think (B)
or (C) are important as well?
I was only referring to (A), because twist security has nothing to do with (B) or (C). If you have non-trivial cofactor,
you need to check at least (B) or multiply inputs with the cofactor, but this is a different story.
Post by Trevor Perrin
Post by Johannes Merkle
Post by Trevor Perrin
(4) attacker can observe the input and output points
That's a rare set of conditions (particularly last 2).
(2) and (4) might not be satisfied in many important applications, but we should select curves that provide security
independent of specific properties of the application.
Maybe, but I agree with DJB the paper overstates its claims: it
doesn't apply to DH or ECDSA unless combined with unusual device
access (ability to read DH output or change the ECDSA base point).
Right, like the Brown-Gallant / Cheon attack, it doesn't apply to ECDH as implemented in most protocols.
Post by Trevor Perrin
Post by Johannes Merkle
Post by Trevor Perrin
This doesn't strongly support the claim "point validation has to be
performed". A better conclusion might be "use adequate blinding
factors".
(I think they're suggesting 128 bit blinding factors for a
special-prime curve like Curve25519, vs 64 bits for a "random-prime"
curve like Brainpool-256. So that's a 1.2x slowdown (~384 vs ~320
bits scalar) due to scalar-blinding, though the special-prime curve
will also have a 2x speedup in optimized implementations.)
I am not sure what you are comparing here. Your statement "128 bit blinding factors for a special-prime curve like
Curve25519, vs 64 bits for a 'random-prime' curve" holds only in the absence of the twist attack under discussion. If an
implementation satisfies conditions (1), (2) and (4) above and does not validate points, it needs to use 252-bit blinding
factors, resulting in a slowdown of 59% for random primes (504 vs 316 bit scalars) and a slowdown of approx. 20% for
special primes (379 vs 316 bit scalars). For higher security levels (e.g. for Ed448-Goldilocks) the impact is larger.
OK, so the slowdown for special primes vs random primes with this
blinding countermeasure, for 256-bit curves, would be ~1.33x.
Post by Johannes Merkle
My conclusion of that paper (and I have not contributed to it) would be that twist security does not provide any
performance benefit for implementations that need to thwart this attack by longer blinding factors. Thus, if someone
wants to select curves specifically for hostile environment scenarios (smart cards etc), it is questionable if twist
security should be a requirement, as it might be misleading to implementors.
I think that's the important question: Does point validation in DH
affect criteria for choosing curves?
You mention choosing curves for "hostile environment" scenarios. I
think the world prefers universal crypto standards, so I'll focus on
Universal crypto standards may be nice to have, but IMHO it's not always the best way to go. You often have algorithms
preferred for hardware and others that are more efficient in software. IMHO it would be good to have different curves
standardized to accommodate the different requirements and benefits for hardware and software.
Post by Trevor Perrin
I think (A) has similar cost on different curves, so doesn't affect
curve choice.
You argue that (A) makes twist-security unnecessary and dangerous,
since it might mislead implementors into not doing (A). That argument seems weak:
* The conservative choice would be both: have twist-secure curves
*and* do point-on-curve checks.
* The point-on-curve check has a real complexity and performance cost
(e.g. several percent of scalar multiply). It seems reasonable to let
some implementations (e.g. high-speed SW) skip it, and rely on twist
security.
I was not referring to SW implementations. Furthermore, I qualify my statement about twist security being potentially
dangerous: it applies to the (unusual) cases where the common DH point is revealed. Nevertheless, as many speakers at the
NIST workshop have pointed out, the benefit of twist security is rather limited, and this limited benefit has to be
carefully balanced against its potential issues. For high-speed SW implementations, twist security may provide
significant advantages. But I advocate separate curves for high-assurance (hardware) implementations anyway and, there,
I'm not sure the benefit outweighs the potential drawbacks.
Post by Trevor Perrin
(B) only applies to cofactor>1 curves, but isn't an expensive check
(reject a list of input points). So this weakly argues for
cofactor=1.
The argument is no weaker than that for twist security - you avoid having to perform a simple check.
Post by Trevor Perrin
(C) argues more strongly for cofactor=1, since for cofactor>1 the
check is typically a full scalar multiply. But is check (C) important
here? Let's assume check (B) is done and that the private scalar is a
multiple of the cofactor, like X25519, so the output ends up in the
main subgroup.
The only complaint I see is that if you don't check (C), multiple DH
inputs would map to the same output. This could be confusing if
you're using long-term DH keys for your identity. This could be
I'm also not sure if (C) poses a serious threat. But, yes, the protocol can prevent that easily by multiplying with the
cofactor.
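
For the record, X25519 bakes exactly this in via scalar "clamping": the private scalar is forced to be a multiple of the cofactor 8, so any small-subgroup component of the peer's point is annihilated by the multiplication itself. A sketch (the function name is mine):

```python
def clamp_x25519(sk: bytes) -> int:
    """Decode a 32-byte X25519 private key into a clamped integer scalar."""
    assert len(sk) == 32
    b = bytearray(sk)
    b[0] &= 248    # clear the 3 low bits: scalar becomes a multiple of 8
    b[31] &= 127   # clear the top bit
    b[31] |= 64    # set bit 254, fixing the scalar's length
    return int.from_bytes(b, "little")
```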
--
Johannes
Trevor Perrin
2015-06-25 23:35:20 UTC
Permalink
On Thu, Jun 25, 2015 at 5:42 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
You mention choosing curves for "hostile environment" scenarios. I
think the world prefers universal crypto standards, so I'll focus on
Universal crypto standards may be nice to have, but IMHO it's not always the best way to go. You often have algorithms
preferred for hardware and others being more efficient in software. IMHO it would be good to have different curves
standardized to accommodate the different requirements / benefits for hard- and software.
Are there cases where separate standards for asymmetric crypto for HW
vs SW were a good idea?

I guess NIST/SECG had binary-field curves that were faster on HW, as
well as prime-field curves. But I think the binary-field curves
weren't used much.


I could understand it if there were very different performance for HW vs SW.
But that doesn't seem to be the case here. Curves over efficient prime
fields like 25519 can be optimized well in both SW and HW.

Even for less-optimized implementations (generic field multiplier),
using blinded scalars, the speed hit for curves like 25519 or
Goldilocks vs random-prime curves seems small:

Assuming point-on-curve checks are performed, the Lochter paper and
your email suggest a blinding factor of keysize/2 bits, instead of 64
bits. So that's a slowdown of ~1.2x for a 256-bit curve and ~1.3x at
448 bits.

This doesn't seem worth fragmenting standards over, particularly
because the more efficient equations of newer curves probably make up
that difference?
Post by Johannes Merkle
Post by Trevor Perrin
You argue that (A) makes twist-security unnecessary and dangerous,
since it might mislead implementors into not doing (A). That argument seems weak:
* The conservative choice would be both: have twist-secure curves
*and* do point-on-curve checks.
* The point-on-curve check has a real complexity and performance cost
(e.g. several percent of scalar multiply). It seems reasonable to let
some implementations (e.g. high-speed SW) skip it, and rely on twist
security.
I was not referring to SW implementations. Furthermore, I qualify my statement about twist security being potentially
dangerous: it applies to the (unusual) cases where the common DH point is revealed. Nevertheless, as many speakers at the
NIST workshop have pointed out, the benefit of twist security is rather limited, and this limited benefit has to be
carefully balanced against its potential issues. For high-speed SW implementations, twist security may provide
significant advantages. But I advocate separate curves for high-assurance (hardware) implementations anyway and, there,
I'm not sure the benefit outweighs the potential drawbacks.
By "twist security being potentially dangerous ... issues ...
drawbacks" all you mean is that twist-secure curves might lull
implementors into a false sense of security, so they might skip the
point-on-curve check?

So you're arguing to remove a safeguard to make it extra-clear people
should be doing another safeguard?

I'm not sure what "high-assurance" means here (or in [1,2]). But
assuming it means something like "conservative security design",
wouldn't you rather have both safeguards?

Trevor

[1] http://csrc.nist.gov/groups/ST/ecc-workshop-2015/presentations/session4-merkle-johannes.pdf
[2] http://eprint.iacr.org/2014/832.pdf
Johannes Merkle
2015-06-29 16:36:02 UTC
Permalink
Post by Trevor Perrin
On Thu, Jun 25, 2015 at 5:42 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
You mention choosing curves for "hostile environment" scenarios. I
think the world prefers universal crypto standards, so I'll focus on
Universal crypto standards may be nice to have, but IMHO it's not always the best way to go. You often have algorithms
preferred for hardware and others being more efficient in software. IMHO it would be good to have different curves
standardized to accommodate the different requirements / benefits for hard- and software.
Are there cases where separate standards for asymmetric crypto for HW
vs SW was a good idea?
I guess NIST/SECG had binary-field curves that were faster on HW, as
well as prime-field curves. But I think the binary-field curves
weren't used much.
I don't think there is (other) precedent for such a separation in the standards. Which doesn't mean that it isn't a
good idea if the requirements differ considerably.
Post by Trevor Perrin
I could understand if there's very different performance for HW vs SW.
But that doesn't seem the case here. Curves over efficient prime
fields like 25519 can be optimized well on SW and HW.
Of course, in theory, hardware can also be optimized for special primes. However, it's one thing to implement a
specialized multiplier as a prototype but a very different thing to develop it as a product. I talked about that
with the guys from Infineon and NXP. They say that they have to maintain hardware implementations for general primes
anyway, e.g. for RSA. Their implementations are not replaced with new versions but continuously evolve, going back to
the very first implementations in the early 90s. For them, developing a new multiplier from scratch and maintaining it
as a second product is a complete no-go, as this would imply tremendous additional costs. You have to take into account
that they have to certify their chips according to CC EAL4+ or higher, which is a very lengthy and expensive process.
(Additional certifications are required for the smart card operating system and crypto applications based on the chip.)
Costs and resources for product management would also double.
Post by Trevor Perrin
Even for less-optimized implementations (generic field multiplier),
using blinded scalars, the speed hit for curves like 25519 or
Assuming point-on-curve checks are performed, the Lochter paper and
your email suggest a blinding factor of keysize/2 bits, instead of 64
bits. So that's a slowdown of ~1.2x for a 256-bit curve and ~1.3x at
448 bits.
It's not just the computation time: generating random bits of sufficient quality can also be time-consuming, depending
on your entropy source. And in devices like smart cards, 20% can already be an issue.
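For reference, the ~1.2x and ~1.3x figures quoted above follow from a back-of-the-envelope cost model (an assumption of mine, not from the paper: scalar-multiplication cost grows linearly with the bit-length of the blinded scalar k' = k + r*n):

```python
# Rough cost model for scalar blinding: a ladder or double-and-add loop
# costs about one step per bit of the blinded scalar k' = k + r*n, so
# relative cost ~ bits(curve) + bits(blinding factor).
# Assumption: cost is linear in scalar length (ignores per-step variation).

def blinding_slowdown(curve_bits, blind_bits_new, blind_bits_old=64):
    """Slowdown of a blind_bits_new blinding factor vs the usual 64 bits."""
    return (curve_bits + blind_bits_new) / (curve_bits + blind_bits_old)

# keysize/2-bit blinding factor, as the Lochter paper suggests:
print(round(blinding_slowdown(256, 128), 2))  # -> 1.2  (256-bit curve)
print(round(blinding_slowdown(448, 224), 2))  # -> 1.31 (448-bit curve)
```

Under this model the numbers in the quoted mail check out; the real penalty on a given chip additionally depends on the entropy cost mentioned above.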
Post by Trevor Perrin
This doesn't seem worth fragmenting standards over, particularly
because the more efficient equations of newer curves probably make up
that difference?
My position is that we have different requirements for HW and SW, so why not have different curves?
Post by Trevor Perrin
Post by Johannes Merkle
Post by Trevor Perrin
You argue that (A) makes twist-security unnecessary and dangerous,
since it might mislead implementors into not doing (A). That argument
* The conservative choice would be both: have twist-secure curves
*and* do point-on-curve checks.
* The point-on-curve check has a real complexity and performance cost
(e.g. several percent of scalar multiply). It seems reasonable to let
some implementations (e.g. high-speed SW) skip it, and rely on twist
security.
I was not referring to SW implementations. Furthermore, I qualify my statement about twist security being potentially
dangerous to the (unusual) cases where the common DH point is revealed. Nevertheless, as many speakers at the NIST
workshop have pointed out, the benefit of twist security is rather limited, and this limited benefit has to be carefully
balanced against its potential issues. For high-speed SW implementations, twist security may provide significant
advantages. But I advocate separate curves for high-assurance (hardware) implementations anyway and, there, I'm not sure
the benefits outweigh the potential drawbacks.
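For concreteness, the point validation under discussion is cheap to state. For an x-only Montgomery ladder on Curve25519 (y^2 = x^3 + 486662*x^2 + x over GF(2^255 - 19)), a received x-coordinate lies on the curve, rather than on its quadratic twist, iff the right-hand side is a square mod p. A minimal sketch (my illustration, not constant-time and not production-grade):

```python
# Point-on-curve check for an x-only ladder on Curve25519.
# x is on the curve iff v = x^3 + A*x^2 + x is a square mod p
# (v == 0 covers the low-order points with y == 0).

P = 2**255 - 19
A = 486662

def on_curve(x):
    x %= P
    v = (x * x * x + A * x * x + x) % P
    # Euler's criterion: v is a square mod p iff v^((p-1)/2) is 0 or 1
    return pow(v, (P - 1) // 2, P) in (0, 1)

print(on_curve(9))  # -> True (x = 9 is the standard base point)
```

Inputs failing this check lie on the twist and would be rejected; the cost is one modular exponentiation, which is the "several percent of a scalar multiply" mentioned above.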
By "twist security being potentially dangerous ... issues ...
drawbacks" all you mean is that twist-secure curves might lull
implementors into a false sense of security, so they might skip the
point-on-curve check?
Yes.
Post by Trevor Perrin
So you're arguing to remove a safeguard to make it extra-clear people
should be doing another safeguard?
If a safeguard does not provide the protection it is supposed to provide, it might be better to remove it. We are
talking about the case where resulting points are revealed and side channels allow an attacker to launch the attack.
Post by Trevor Perrin
I'm not sure what "high-assurance" means here (or in [1,2]). But
assuming it means something like "conservative security design",
wouldn't you rather have both safeguards?
I used the term "high assurance" in my talk at the NIST ECC workshop. I mean implementations that run in potentially
hostile environments, e.g. smart cards, smart meter devices, RFID tags, etc. There, you need to take additional side
channels into account and typically apply very conservative designs. Due to these additional countermeasures and
security certifications (often required by regulations), development cycles are much longer there.
--
Johannes
Trevor Perrin
2015-07-07 05:52:57 UTC
Permalink
On Mon, Jun 29, 2015 at 9:36 AM, Johannes Merkle
Post by Johannes Merkle
Post by Trevor Perrin
Are there cases where separate standards for asymmetric crypto for HW
vs SW was a good idea?
[...]
Post by Johannes Merkle
I don't think there is (other) precedent for such a separation in the standards. Which doesn't mean that it isn't a
good idea if the requirements differ considerably.
I'm not convinced yet that HW and SW requirements do differ considerably.
Post by Johannes Merkle
Of course, in theory, hardware can also be optimized for special primes. However, it's one thing to implement a
specialized multiplier as a prototype but a very different thing to develop it as a product. I talked about that
with the guys from Infineon and NXP. They say that they have to maintain hardware implementations for general primes
anyway, e.g. for RSA. Their implementations are not replaced with new versions but are continuously evolving, going back
to the very first implementations in the early 90s. For them, developing a new multiplier from scratch and maintaining
it as a second product is a complete no-go, as this would imply tremendous additional costs. You have to take into
account that they have to certify their chips according to CC EAL4+ or higher, which is a very lengthy and expensive
process. (Additional certifications are required for the smart card operating system and crypto applications based on
the chip.) Costs and resources for product management would also double.
OK, so some people won't implement special multipliers in HW. For
25519 they wouldn't get the ~2x speedup from the special prime that
optimized implementations get. If they use a particular blinding
countermeasure they'll take a ~1.2x slowdown due to the larger blinding
factor, but won't that be balanced out by the faster Edwards curve
equations?

Assuming so, then even with a generic multiplier Curve25519 is going
to be about the same speed as Brainpool curves (and perhaps faster
than P-256? What size blinding factor does P-256 need?).

I guess you'd rather have a different curve that's ~1.2x faster on
this existing HW without the opportunity for 2x optimization
elsewhere? That seems like trading off a large benefit for a much
smaller one, and not worth multiplying standards for.

Trevor
