Discussion:
XEdDSA specification
Trevor Perrin
2016-10-21 06:51:14 UTC
(Changing title)
You derive DSA keys from DH keys using the birational equivalence relation and setting the sign bit to zero. Why not instead go the other way and derive DH keys from DSA keys? That way you get to keep the sign bit. One bit is not a big deal, but was there a reason for going DH->DSA instead of the other way?
Sure, it allows the Montgomery ladder for DH, see discussion at
beginning of 2.3.

Trevor
Mike Hamburg
2016-10-21 18:08:18 UTC
Post by Trevor Perrin
(Changing title)
You derive DSA keys from DH keys using the birational equivalence relation and setting the sign bit to zero. Why not instead go the other way and derive DH keys from DSA keys? That way you get to keep the sign bit. One bit is not a big deal, but was there a reason for going DH->DSA instead of the other way?
Sure, it allows the Montgomery ladder for DH, see discussion at
beginning of 2.3.
Trevor
Of course, you can use the Montgomery ladder with Edwards y coordinates too. It’s pretty much the same formulas and the same loop. It just requires an extra multiply per bit.

The reason to use XEdDSA is to retrofit signatures on an existing PKI that distributes X25519 keys.

— Mike
Ron Garret
2016-10-21 20:27:38 UTC
Post by Trevor Perrin
(Changing title)
You derive DSA keys from DH keys using the birational equivalence relation and setting the sign bit to zero. Why not instead go the other way and derive DH keys from DSA keys? That way you get to keep the sign bit. One bit is not a big deal, but was there a reason for going DH->DSA instead of the other way?
Sure, it allows the Montgomery ladder for DH, see discussion at
beginning of 2.3.
Trevor
Of course, you can use the Montgomery ladder with Edwards y coordinates too. It’s pretty much the same formulas and the same loop. It just requires an extra multiply per bit.
I think both of you misinterpreted my question. I understand why you would want to use one form for DH and the other for DSA. What I didn’t understand was why you would want to make the DH form primary and derive the DSA form from it rather than the other way around. (I was particularly interested in this because I do it the other way around in SC4 and I wanted to make sure there was not some cryptographic reason to prefer DH->DSA that I was not aware of.)
The reason to use XEdDSA is to retrofit signatures on an existing PKI that distributes X25519 keys.
That is the answer I was looking for. Thanks!

rg
Trevor Perrin
2016-10-22 13:58:20 UTC
Post by Ron Garret
I think both of you misinterpreted my question. I understand why you would want to use one form for DH and the other for DSA. What I didn’t understand was why you would want to make the DH form primary and derive the DSA form from it rather than the other way around.
If you want to support X25519 and Ed25519 with a single key pair
format (or key pair), then there's room for debate, but I'm advocating
X25519.

One reason is that converting public keys from X->Ed or Ed->X uses an
inversion, but since Ed25519 uses point decompression anyways, X->Ed
can combine the inversion with decompression at very little
computation cost [1].
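For curve25519 that conversion is the birational map y = (u - 1)/(u + 1) mod p. A minimal sketch (the modular inversion done here via Fermat's little theorem is the one that can be folded into point decompression):

```python
p = 2**255 - 19  # curve25519 field prime

def u_to_y(u):
    # Montgomery u-coordinate -> Edwards y-coordinate: y = (u - 1)/(u + 1) mod p
    return (u - 1) * pow(u + 1, p - 2, p) % p

# Sanity check: the X25519 base point u = 9 maps to the
# Ed25519 base point y = 4/5 mod p.
assert u_to_y(9) == 4 * pow(5, p - 2, p) % p
```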

Another reason is that a signature-only system can already be easily
extended with encryption/DH by signing subkeys. However, a DH-based
system (like Ntor, Noise[2], or earlier versions of TextSecure) cannot
be extended to signatures without having an X->Ed conversion like
this.

If you just want DH and signatures rather than X25519 and Ed25519
specifically, then the design space is larger and I guess you could
consider DH with Edwards curves or signatures with Montgomery curves,
or anything else. But then you're diverging from the existing
algorithms, which means more design and analysis is needed, more new
code, and less potential for interop, so I'd be less excited about
that.

Trevor

[1] https://moderncrypto.org/mail-archive/curves/2015/000376.html
[2] https://noiseprotocol.org/
Trevor Perrin
2016-10-24 04:44:46 UTC
Failing to specify a non-malleable form has resulted in
vulnerabilities in multiple protocols and systems.
For example, some users of openssl will blacklist certificates by
their hash. But you can take a valid ecdsa signature, change it to
another valid one under the same key, thus change the certificate
hash-- and bypass the blacklist. OpenSSL CVEed their fix for
DER-parser originated signature malleability related to blacklisting,
but still has the ecdsa algebraic one (adding half the order to s to
flip the sign of R).
Good example. I knew of the bitcoin issue but not that one. If you
have a reference for that, or know of other examples, that would be
helpful.
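For reference, the algebraic malleability is the textbook one: given a valid ECDSA signature (r, s), the pair (r, n - s) also verifies, since negating s negates the nonce point R without changing its x-coordinate. A toy pure-Python sketch over secp256k1 (illustration only; not constant-time, not for production use):

```python
import hashlib
import secrets

# secp256k1 parameters (any prime-order Weierstrass curve shows the same issue)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(x, m):
    return pow(x, m - 2, m)  # inverse mod a prime m

def add(A, B):
    # Affine point addition; None is the point at infinity.
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0: return None
    if A == B:
        lam = 3 * A[0] * A[0] * inv(2 * A[1], P) % P
    else:
        lam = (B[1] - A[1]) * inv(B[0] - A[0], P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def mul(k, Pt):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

def sign(d, z):
    k = secrets.randbelow(N - 1) + 1
    r = mul(k, G)[0] % N
    s = inv(k, N) * (z + r * d) % N
    return (r, s)

def verify(Q, z, sig):
    r, s = sig
    u1, u2 = z * inv(s, N) % N, r * inv(s, N) % N
    return add(mul(u1, G), mul(u2, Q))[0] % N == r

d = secrets.randbelow(N - 1) + 1   # private key
Q = mul(d, G)                      # public key
z = int.from_bytes(hashlib.sha256(b"blacklisted cert").digest(), "big")
r, s = sign(d, z)
assert verify(Q, z, (r, s))
assert verify(Q, z, (r, N - s))    # second valid signature for the same data...
assert (r, N - s) != (r, s)        # ...so the signed blob's hash differs
```

Canonicalizing signatures, e.g. by requiring s <= N//2 as Bitcoin's "low-s" rule later did, removes this degree of freedom.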

This is irrelevant to protocols that use signatures correctly. Also,
the checks in the existing spec match some existing Ed25519 code and are
simple. But you may be right that a spec for general use should
prefer stricter checks, to make things safer for careless users.

Trevor
Trevor Perrin
2016-10-27 17:44:53 UTC
In the motivation for the randomized scheme, the document says "However, if
the same message is signed repeatedly, a glitch that affects the calculation
of h could cause this to happen (an observation due to Benedikt Schmidt)."
Could you provide a reference to a paper/message that explains what is being
referred to here, and/or add a description of the issue to the paper?
Sure, what do you think needs to be clarified, exactly? The math
seems clear, I'm guessing you think "glitch" attacks need to be
defined?

Benedikt points out this is more relevant to hashing large messages;
it's also more relevant to VXEdDSA, where "h" is encoded as part of
the signature, so those seem like good points to add.
r = hash1(a || M || Z)
r = hash1(Z || a || M)
r = hash1(a || Z || M)?
It seems like it would be better to incorporate the randomness into the
state of the hash function as early as possible to reduce risk of some
(non-timing) differential side-channel attacks on the hashing step.
I don't think it matters much.

But if an attacker could choose M to cause a collision between M1 and
M2 or biased output, even with unknown/randomized prefix, then hashing
Z at the end might be better [1].

That's an unlikely scenario. But hashing Z at the end also makes the
design look like EdDSA a little more, as the first two inputs are
secret key, then message, same as EdDSA.

You mention "differential" side-channel attacks against the secret key
"a", but adding randomness before the secret key would make those
attacks easier if the randomness leaks (e.g. a predictable RNG), so
that's not obviously an improvement with respect to side-channels.
Why is Z fixed at 64 bytes? That is, why is its length not a function of the
size of the key and/or the digest function output length?
Ed25519 was specified to reduce a 512-bit hash output to produce its
nonce, so it seemed most consistent to add 512 bits of entropy. Since
these values/sizes are also suitable for 448, we don't bother changing
them.

512 bits of hash output / entropy is overkill in all these cases, but
it's simple to specify and easy to implement. Taking 64 bytes from
the RNG might add some defense in depth if the RNG is slightly weak.
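Concretely, the 25519-case reduction can be sketched as follows (a toy illustration: the spec's hash_1 also prepends a domain-separation prefix, omitted here, and the hash inputs below are placeholders):

```python
import hashlib
import os

# Ed25519 subgroup order q (~2^252)
q = 2**252 + 27742317777372353535851937790883648493

Z = os.urandom(64)  # the spec's fixed 64 bytes (512 bits) of randomness
# Placeholder inputs standing in for the secret scalar and message:
h = hashlib.sha512(b"a-placeholder" + b"M-placeholder" + Z).digest()
r = int.from_bytes(h, "little") % q
# A uniform 512-bit value reduced mod a ~252-bit q leaves a bias on the
# order of 2^-260 -- far beyond the 64 "extra random bits" FIPS 186-4 asks for.
assert 0 <= r < q
```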
Do you have any suggestions for how one would mark a key as to whether it is
intended to be used for X25519 and/or EdDSA and/or XEdDSA and/or VXEdDSA?
You can use X25519 or X448 keys for any of these algorithms. If you
want to keep them separate that's traditionally a higher-level thing
(e.g. keyUsage bits).
The security considerations say that the caller "must pass in a new secret
and random 64 byte value each time the signing function is called." I
suggest s/64 byte value/**Z**/. Also, it would be nice to clarify the extent
of the badness if a Z value were to be reused.
OK, will note to clarify.
Would you be open to using a different name than `q` for the order of the
base point? `q` is commonly used in code and specifications to refer to what
this paper calls `p`. Another name would be less confusing. The EdDSA RFC
uses `L`, IIRC.
For "plain" DSA (i.e. non-elliptic-curve DSA), it's pretty
conventional to use "p" for the field prime and "q" for the subgroup
order (e.g. FIPS 186, Wikipedia, RFCs 6979, 3279).

Some math is "mod p", and some is "mod q", so (p,q) make a good pair,
and that was loose inspiration for the XEdDSA choice.

For EC-DSA (not EdDSA) the SEC1 and FIPS-186 standards use "p" for the
field prime, but "n" for subgroup order. Bitcoin follows that (p,n)
but I don't think "n" is any better than "q".

The Ed25519 and EdDSA papers use "q" for field prime and "l" for
subgroup order. Not sure why "l" was chosen, it's a problem character
(looks like "1" and "I"). There's a CFRG Draft that uses (p,L) but
XEdDSA uses lower-case for integers, and I think that's a good
convention I want to stick to.

So I think "p" and "q" are good choices, considering?
Section 2.5 says "since the first |p| bits encode an integer greater than
p." I think it would be useful to add a clarification such as "(This is why
it is required 2**|p| - 1 - i > p.)" or otherwise clarify the motivation for
that precondition.
OK, I was hoping that was clear, but will try to clarify.

Trevor

[1] https://moderncrypto.org/mail-archive/curves/2014/0002
Brian Smith
2016-10-28 06:40:33 UTC
In the motivation for the randomized scheme, the document says "However, if
the same message is signed repeatedly, a glitch that affects the calculation
of h could cause this to happen (an observation due to Benedikt Schmidt)."
Could you provide a reference to a paper/message that explains what is being
referred to here, and/or add a description of the issue to the paper?
Sure, what do you think needs to be clarified, exactly? The math
seems clear, I'm guessing you think "glitch" attacks need to be
defined?
It wasn't clear to me that "glitch" referred to glitch/fault attacks.

BTW, more generally it isn't clear in the document what types of attacks
are in the threat model. A lot is said about constant-timedness but it
seems like the scope might be bigger. Or, perhaps protection from fault
attacks isn't a primary consideration for the adding of randomness, but
rather it is a nice side benefit?
r = hash1(a || M || Z)
r = hash1(Z || a || M)
r = hash1(a || Z || M)?
It seems like it would be better to incorporate the randomness into the
state of the hash function as early as possible to reduce risk of some
(non-timing) differential side-channel attacks on the hashing step.
I don't think it matters much.
But if an attacker could choose M to cause a collision between M1 and
M2 or biased output, even with unknown/randomized prefix, then hashing
Z at the end might be better [1].
That's an unlikely scenario. But hashing Z at the end also makes the
design look like EdDSA a little more, as the first two inputs are
secret key, then message, same as EdDSA.
Consider this "best of both worlds" scheme:

r = hash1(a || [first half of Z] || M || [second half of Z])

I believe this has the benefit of being like EdDSA but also protects
against two different kinds of attacks.
You mention "differential" side-channel attacks against the secret key
"a", but adding randomness before the secret key would make those
attacks easier if the randomness leaks (e.g. a predictable RNG), so
that's not obviously an improvement with respect to side-channels.
If the RNG isn't predictable (i.e. if the randomness doesn't leak) then
isn't it better against those attacks, though? In the document it says that
nonpredictable RNG output is a prerequisite so shouldn't we assume it is
unpredictable?

Regardless of that, I don't see how it would be worse than the current
scheme, but I'm not an expert in this particular area. Either way, I think
some rationale for this aspect of the design would be helpful to include in
the doc.
Why is Z fixed at 64 bytes? That is, why is its length not a function of the
size of the key and/or the digest function output length?
Ed25519 was specified to reduce a 512-bit hash output to produce its
nonce, so it seemed most consistent to add 512 bits of entropy. Since
these values/sizes are also suitable for 448, we don't bother changing
them.
512 bits of hash output / entropy is overkill in all these cases, but
it's simple to specify and easy to implement. Taking 64 bytes from
the RNG might add some defense in depth if the RNG is slightly weak.
OK. I think one might argue that it is overkill to use a Z larger than |p|
and typically Z-like values are the same size as |p| so it would be more
consistent with existing practice and more intuitive to use a |p|-sized Z.
Or, somebody who has read papers like |A| and the papers it cites might
argue that it might be useful to consider the block length of the hash
function when choosing the size of Z.

BTW, I'm not making an argument here. I just think it is strange to have
this constant presented without any justification in the paper.
The security considerations say that the caller "must pass in a new secret
and random 64 byte value each time the signing function is called." I
suggest s/64 byte value/**Z**/. Also, it would be nice to clarify the extent
of the badness if a Z value were to be reused.
OK, will note to clarify.
That would be great. Notice in the email I'm replying to now, you seem to
be intending that the scheme should be usable even with a compromised RNG
to some extent, which makes me wonder whether a good RNG is a requirement
or just nice-to-have.
Would you be open to using a different name than `q` for the order of the
base point? `q` is commonly used in code and specifications to refer to what
this paper calls `p`. Another name would be less confusing. The EdDSA RFC
uses `L`, IIRC.
<snip reasonable explanation>
So I think "p" and "q" are good choices, considering?
It is not a big deal but IMO the more consistency between this and the
EdDSA specification, the better. I don't know why the EdDSA specification
uses "L" but I don't see any harm in using "L" here, and I do see a benefit
in making it easier to understand this specification relative to the EdDSA
specification.
Section 2.5 says "since the first |p| bits encode an integer greater than
p." I think it would be useful to add a clarification such as "(This is why
it is required 2**|p| - 1 - i > p.)" or otherwise clarify the motivation for
that precondition.
OK, I was hoping that was clear, but will try to clarify.
I think it actually was pretty clear.

Since only small values of |i| are actually used in the spec, the main
takeaway the reader should make is that this scheme can't work for any p
which is a Mersenne prime, AFAICT. That doesn't seem significant given the
curves that people are presently using, but it might be useful to make
crystal clear for anybody trying to use it with different curves in the
future.

[1] https://moderncrypto.org/mail-archive/curves/2014/0002
This link isn't working for me.

Cheers,
Brian
--
https://briansmith.org/
Brian Smith
2016-10-28 06:50:35 UTC
Post by Trevor Perrin
Post by Trevor Perrin
r = hash1(a || M || Z)
r = hash1(Z || a || M)
r = hash1(a || Z || M)?
It seems like it would be better to incorporate the randomness into the
state of the hash function as early as possible to reduce risk of some
(non-timing) differential side-channel attacks on the hashing step.
I don't think it matters much.
But if an attacker could choose M to cause a collision between M1 and
M2 or biased output, even with unknown/randomized prefix, then hashing
Z at the end might be better [1].
That's an unlikely scenario. But hashing Z at the end also makes the
design look like EdDSA a little more, as the first two inputs are
secret key, then message, same as EdDSA.
r = hash1(a || [first half of Z] || M || [second half of Z])
I believe this has the benefit of being like EdDSA but also protects
against two different kinds of attacks.
Sorry, I meant to share a link that gives some motivation for choosing a
different construction by describing the other kind of attack, and
specifically criticizes the EdDSA construction:

https://dl.acm.org/purchase.cfm?id=2858932&CFID=686720127&CFTOKEN=45906669

Note that the alternative constructions I suggested above had approximately
zero thought put into them. They're more intended to illustrate that it
isn't obvious that the construct in the paper is the best one, and maybe it
is worth considering different constructions with a clear justification for
the construction finally chosen.

Cheers,
Brian
Trevor Perrin
2016-10-28 16:47:53 UTC
Post by Brian Smith
Post by Trevor Perrin
Sure, what do you think needs to be clarified, exactly? The math
seems clear, I'm guessing you think "glitch" attacks need to be
defined?
It wasn't clear to me that "glitch" referred to glitch/fault attacks.
I'll clarify that, you're right that "fault" is the more generic term.
Post by Brian Smith
Post by Trevor Perrin
It seems like it would be better to incorporate the randomness into the
state of the hash function as early as possible to reduce risk of some
(non-timing) differential side-channel attacks on the hashing step.
[...]
Post by Brian Smith
r = hash1(a || [first half of Z] || M || [second half of Z])
Huh, interesting. I've thought more about leakage of nonce bits,
which is dangerous because even a leak of a few bits can be
accumulated across many nonces into an attack (e.g. [2]).

But it's also possible that hashing a secret key with attacker-chosen
messages could leak that key through "differential" analysis of power
or electromagnetic side-channels.

That leakage is probably small, so a system that has to worry about
hardware attacks like this probably has a lot of other things to worry
about. But if we can cheaply make the design more robust, that's not
a bad thing.

In your construction, the "first half" keeps the secret "a" from being
directly mixed with the message. That could help against differential
power/EM analysis, if the randomness is good.

The "second half" would randomize the nonce even if earlier collisions.

I don't see anything wrong with that. It's still simple, but perhaps
more robust against an exotic threat (power/EM sidechannel leakage in
hash, plus attacker-chosen messages).

Other opinions on this? (or other ideas)?
Post by Brian Smith
Post by Trevor Perrin
Why is Z fixed at 64 bytes? That is, why is its length not a function of the
size of the key and/or the digest function output length
[...]
Post by Brian Smith
OK. I think one might argue that it is overkill to use a Z larger the |p|
and typically Z-like values are the same size as |p| so it would be more
consistent with existing practice and more intuitive to use a |p|-sized Z.
For discrete-log signatures you need to make sure the nonce is
unbiased, so one technique is to reduce a value much larger than the
subgroup order by the subgroup order (e.g. FIPS-186 recommends
choosing a value that's 64 bits larger).

So it's traditional to use larger values here.

But I agree there should be a Rationale section that explains things like this.
Post by Brian Smith
Notice in the email I'm replying to now, you seem to be
intending that the scheme should be usable even with a compromised RNG to
some extent, which makes me wonder whether a good RNG is a requirement or
just nice-to-have.
It's intended to be secure even with a weak RNG, but with more
assumptions about the hash, and private scalar, and exotic attacks
(per above) that randomization helps with. So I'd rather not give
people the idea the RNG is optional, or could be skimped on.
Post by Brian Smith
Post by Trevor Perrin
So I think "p" and "q" are good choices, considering?
It is not a big deal but IMO the more consistency between this and the EdDSA
specification, the better. I don't know why the EdDSA specification uses "L"
but I don't see any harm in using "L" here, and I do see a benefit in making
it easier to understand this specification relative to the EdDSA
specification.
The Ed25519 and EdDSA specs from DJB et al use "l", but I think that's
a problem typographically.

The CFRG work-in-progress draft uses "L", but nothing else does, and
that's inconsistent with the XEdDSA convention of using lower-case for
integers (scalars, coordinates) and upper-case for other things
(points, raw byte sequences). I think that convention helps
readability, so I'm still pretty happy with (p, q).
Post by Brian Smith
Post by Trevor Perrin
Section 2.5 says "since the first |p| bits encode an integer greater than
p." I think it would be useful to add a clarification such as "(This is why
it is required 2**|p| - 1 - i > p.)" or otherwise clarify the motivation for
that precondition.
OK, I was hoping that was clear, but will try to clarify.
I think it actually was pretty clear.
Since only small values of |i| are actually used in the spec, the main
takeaway the reader should make is that this scheme can't work for any p
which is a Mersenne prime, AFAICT. That doesn't seem significant given the
curves that people are presently using, but it might be useful to make
crystal clear for anybody trying to use it with different curves in the
future.
OK, good point.
Post by Brian Smith
Post by Trevor Perrin
[1] https://moderncrypto.org/mail-archive/curves/2014/0002
oops, [1] https://moderncrypto.org/mail-archive/curves/2014/000227.html

[2] https://eprint.iacr.org/2013/346.pdf

Trevor
Trevor Perrin
2016-10-29 22:47:26 UTC
[...]
Post by Brian Smith
r = hash1(a || [first half of Z] || M || [second half of Z])
Samuel Neves mentioned (off-list) that with SHA512's 128-byte block
size, a and M would still be mingled together in the first block.

So I'm thinking about this:

a || Z || pad || M

where "pad" adds zero bytes to fill the hash block (so 32 bytes with
25519 and SHA512, since |a|=32 and |Z|=64).

Rationale:
(1) In the Random Oracle Model, this is no different from hashing "a
|| M || Z".
(2) Processing the secret inputs (a and Z) in a separate block (or
blocks) from M seems cleaner
(3) Mixing the secret key with secret random data (Z) prior to mixing
it with known input (M) is better for resisting physical sidechannels
(power, EM).
(4) The prior rationale for hashing Z at the end was weak: It might
help protect a very weak hash where the attacker was able to choose M
to force biases or collisions, even with unknown and randomized
prefix. But I think the sidechannel threat is more plausible.
- We could consider a || Z1 || pad || M || Z2, but the risk of (4) is
low enough that I doubt that's worth the complexity

Thoughts?

Trevor
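A sketch of the proposed layout for 25519 with SHA-512 (illustration only: the spec's hash_1 prefix handling is omitted, and the inputs are placeholders):

```python
import hashlib
import os

BLOCK = 128  # SHA-512 compression-function block size in bytes

def nonce_hash(a, M, Z):
    # a || Z || pad || M: the 32-byte scalar and 64 bytes of randomness
    # plus 32 zero bytes fill exactly one block, so the message M never
    # shares a compression-function block with the secrets.
    assert len(a) == 32 and len(Z) == 64
    pad = b"\x00" * (BLOCK - len(a) - len(Z))
    return hashlib.sha512(a + Z + pad + M).digest()

digest = nonce_hash(os.urandom(32), b"attacker-chosen message", os.urandom(64))
assert len(digest) == 64
```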
Mike Hamburg
2016-10-30 02:15:47 UTC
Post by Trevor Perrin
[...]
Post by Brian Smith
r = hash1(a || [first half of Z] || M || [second half of Z])
Samuel Neves mentioned (off-list) that with SHA512's 128-byte block
size, a and M would still be mingled together in the first block.
a || Z || pad || M
where "pad" adds zero bytes to fill the hash block (so 32 bytes with
25519 and SHA512, since |a|=32 and |Z|=64).
(1) In the Random Oracle Model, this is no different from hashing "a
|| M || Z".
(2) Processing the secret inputs (a and Z) in a separate block (or
blocks) from M seems cleaner
(3) Mixing the secret key with secret random data (Z) prior to mixing
it with known input (M) is better for resisting physical sidechannels
(power, EM).
(4) The prior rationale for hashing Z at the end was weak: It might
help protect a very weak hash where the attacker was able to choose M
to force biases or collisions, even with unknown and randomized
prefix. But I think the sidechannel threat is more plausible.
- We could consider a || Z1 || pad || M || Z2, but the risk of (4) is
low enough that I doubt that's worth the complexity
Thoughts?
Trevor
_______________________________________________
Curves mailing list
https://moderncrypto.org/mailman/listinfo/curves
I like the change to a || Z || pad || M, for the reasons you listed.

— Mike
Brian Smith
2016-10-30 06:49:43 UTC
Post by Trevor Perrin
[...]
Post by Brian Smith
r = hash1(a || [first half of Z] || M || [second half of Z])
Samuel Neves mentioned (off-list) that with SHA512's 128-byte block
size, a and M would still be mingled together in the first block.
a || Z || pad || M
where "pad" adds zero bytes to fill the hash block (so 32 bytes with
25519 and SHA512, since |a|=32 and |Z|=64).
This also makes sense to me, and also partially addresses my question about
whether |Z| should be a function of the block size of the hash function.
Post by Trevor Perrin
(4) The prior rationale for hashing Z at the end was weak: It might
help protect a very weak hash where the attacker was able to choose M
to force biases or collisions, even with unknown and randomized
prefix. But I think the sidechannel threat is more plausible.
- We could consider a || Z1 || pad || M || Z2, but the risk of (4) is
low enough that I doubt that's worth the complexity
I would like to read more about attacks of the form of #4, if people have
references. Up to now I've always lived a life of luxury wherein I have
been allowed to assume there is no such attack, but I've not searched hard
for research on that matter either.

Cheers,
Brian
--
https://briansmith.org/
Tim Ruffing
2016-10-31 12:14:45 UTC
The spec currently says:
"XEdDSA and VXEdDSA require a cryptographic hash function. The default
hash function is SHA-512 [10]." 

"Default" sounds a little like "SHA-512 is nice but you can use another
function in your private implementation" -- and people sometimes make
use of such choices for whatever reason, which is bad enough.

If additionally the structure of the input is optimized for SHA-512,
then I think it makes sense to make SHA-512 a hard requirement. (Sure,
we want to replace the hash function if it fails but then it's
necessary to update the spec anyway.)

Tim
Post by Trevor Perrin
[...]
r = hash1(a || [first half of Z] || M || [second half of Z])
Samuel Neves mentioned (off-list) that with SHA512's 128-byte block
size, a and M would still be mingled together in the first block.
a || Z || pad || M
where "pad" adds zero bytes to fill the hash block (so 32 bytes with
25519 and SHA512, since |a|=32 and |Z|=64).
(1) In the Random Oracle Model, this is no different from hashing "a
|| M || Z".
(2) Processing the secret inputs (a and Z) in a separate block (or
blocks) from M seems cleaner
(3) Mixing the secret key with secret random data (Z) prior to mixing
it with known input (M) is better for resisting physical sidechannels
(power, EM).
(4) The prior rationale for hashing Z at the end was weak: It might
help protect a very weak hash where the attacker was able to choose M
to force biases or collisions, even with unknown and randomized
prefix. But I think the sidechannel threat is more plausible.
- We could consider a || Z1 || pad || M || Z2, but the risk of (4) is
low enough that I doubt that's worth the complexity
Thoughts?
Trevor
Brian Smith
2016-10-30 06:39:58 UTC
Post by Trevor Perrin
Post by Brian Smith
Post by Trevor Perrin
Sure, what do you think needs to be clarified, exactly? The math
seems clear, I'm guessing you think "glitch" attacks need to be
defined?
It wasn't clear to me that "glitch" referred to glitch/fault attacks.
I'll clarify that, you're right that "fault" is the more generic term.
Simply, I was unsure whether "glitch" was being used colloquially, to
indicate some unspecified problem, or whether it was referring to glitch
attacks in general, or a specific type of glitch attack, or fault attacks
in general, or faults in general (accidental or induced).
Post by Trevor Perrin
Post by Brian Smith
Post by Trevor Perrin
Why is Z fixed at 64 bytes? That is, why is its length not a function of the
size of the key and/or the digest function output length
[...]
Post by Brian Smith
OK. I think one might argue that it is overkill to use a Z larger than |p|
and typically Z-like values are the same size as |p| so it would be more
consistent with existing practice and more intuitive to use a |p|-sized Z.
For discrete-log signatures you need to make sure the nonce is
unbiased, so one technique is to reduce a value much larger than the
subgroup order by the subgroup order (e.g. FIPS-186 recommends
choosing a value that's 64 bits larger).
The thing that is reduced mod q is the SHA-512 digest, not Z, right? I
don't see how the FIPS-186-suggested technique applies.

In your justification for why it is fixed at 64 bytes, you imply that it is
kinda-sorta a function of |q|. Notice that 64 bytes = 512 bits = 448 + 64.
Using the above reasoning, would a 448-bit curve be the maximum supported
by this scheme? Or, for a 512-bit curve, would we require a Z that is at
least 512 + 64 = 576 bits = 72 bytes?

To a large extent, it probably doesn't matter. But, if you have an
implementation of X25519/Ed25519 for a very constrained device you might
have a function for gathering exactly 32 bytes of random data to use as a
key, and it seems like it would be nice to be able to reuse that exact same
function for generating Z, if a Z of size |q| is sufficient for the
security goals.
Post by Trevor Perrin
Post by Brian Smith
[I n]otice in the email I'm replying to now, you seem to be
intending that the scheme should be usable even with a compromised RNG to
some extent, which makes me wonder whether a good RNG is a requirement or
just nice-to-have.
It's intended to be secure even with a weak RNG, but with more
assumptions about the hash, and private scalar, and exotic attacks
(per above) that randomization helps with. So I'd rather not give
people the idea the RNG is optional, or could be skimped on.
Understood. But, the questions that entered my mind when reading the
document were more along these lines:

1. Is using XEdDSA with a deterministic nonce equally, less, or more safe
than using EdDSA (with a deterministic nonce)?

2. Ditto, for VXEdDSA.

3. Is using XEdDSA with an actively malicious (attacker-controlled) RNG
equally, less, or more safe than using EdDSA (with a deterministic nonce)?

4. Ditto for VXEdDSA.

Cheers,
Brian
--
https://briansmith.org/
Trevor Perrin
2016-10-30 18:11:10 UTC
Post by Brian Smith
Post by Trevor Perrin
I'll clarify that, you're right that "fault" is the more generic term.
Simply, I was unsure whether "glitch" was being used colloquially, to
indicate some unspecified problem, or whether it was referring to glitch
attacks in general, or a specific type of glitch attack, or fault attacks in
general, or faults in general (accidental or induced).
I'll add a little more text about faults. For example, mention that
flipping a single bit in a randomized discrete-log signature
calculation is typically insufficient to leak the private key. Also,
add a citation for Jivsov [1].
Post by Brian Smith
Post by Trevor Perrin
For discrete-log signatures you need to make sure the nonce is
unbiased, so one technique is to reduce a value much larger than the
subgroup order by the subgroup order (e.g. FIPS-186 recommends
choosing a value that's 64 bits larger).
The thing that is reduced mod q is the SHA-512 digest, not Z, right? I don't
see how the FIPS-186-suggested technique applies.
We're reducing a 512-bit hash output, so adding 512 bits of entropy to
the input hopefully means the output gets a full 512 bits of entropy.

Which is overkill but it's easy and simple, and means Z doesn't need
to change sizes depending on curve (e.g. larger Z for 448). It also
might help if your RNG is slightly weak, so I still like |Z|=512.
Post by Brian Smith
In your justification for why it is fixed at 64 bytes, you imply that it is
kinda-sorta a function of |q|. Notice that 64 bytes = 512 bits = 448 + 64.
Using the above reasoning, would a 448-bit curve be the maximum supported by
this scheme? Or, for a 512-bit curve, would we require a Z that is at least
512 + 64 = 576 bits = 72 bytes?
Depends, if the 512-bit prime is close to 2^512 maybe you can convince
yourself a 512-bit hash is good enough. Or maybe you want to follow
NIST's generic advice and add 64 bits, or follow the DJB/EdDSA generic
advice and go to 1024 bits.

So I would dodge that question, but I might add that a 512-bit hash
(and 512-bit Z) is sufficient for curves up to 448 bits in size (or
|q| <= 448 to be exact), following FIPS 186-4's advice on 64 "Extra
Random Bits".

For a larger curve you'd have to think more, and you might need
large-output constructions (e.g. HKDF, XOF, FIPS 186-4 Appendix B,
EdDSA's SHA512/832 or SHA512/912 [2], etc). Since other curves are
less useful (IMO) and different people might prefer different
large-output constructions, it seems best to leave that out of scope.
Post by Brian Smith
Post by Trevor Perrin
It's intended to be secure even with a weak RNG, but with more
assumptions about the hash, and private scalar, and exotic attacks
(per above) that randomization helps with. So I'd rather not give
people the idea the RNG is optional, or could be skimped on.
Understood. But, the questions that entered my mind when reading the
1. Is using XEdDSA with a deterministic nonce equally, less, or more safe
than using EdDSA (with a deterministic nonce)?
2. Ditto, for VXEdDSA.
About the same, in my opinion.

In terms of provable security in the Random Oracle Model, it's the same.

Because "a" is used in the first hash (XEdDSA), instead of an
independent secret key (EdDSA), a deterministic XEdDSA might have a
slightly increased side-channel risk of leaking "a", but I think
randomization would help both algorithms reduce side-channel risk.
Post by Brian Smith
3. Is using XEdDSA with an actively malicious (attacker-controlled) RNG
equally, less, or more safe than using EdDSA (with a deterministic nonce)?
4. Ditto for VXEdDSA.
DJB has discussed a malicious RNG that can read (but not write)
memory, and has no other way to exfiltrate secrets except choosing RNG
output. Such an RNG could bias the discrete-log signature nonce, to
make the private key extractable [3].

I think that's less likely than fault and physical sidechannel
attacks, but it's true that the RNG adds this tradeoff.

Trevor

[1] https://www.ietf.org/mail-archive/web/cfrg/current/msg06759.html
[2] https://ed25519.cr.yp.to/eddsa-20150704.pdf
[3] https://blog.cr.yp.to/20140205-entropy.html