
Constant-time scalar sampling & hash-to-scalar #55

Merged
merged 10 commits into from Feb 25, 2025
Conversation

@survived (Contributor) commented Jan 29, 2025

We currently rely on the underlying curve implementation (i.e. the RustCrypto or dalek crates) to provide a function for generating a random scalar from a source of randomness.

However, in the code, we implicitly assume that random scalar generation is reproducible: we expect that when a deterministic PRNG (such as HashRng) is seeded with the same seed, the output scalar is the same on all platforms. This is not guaranteed at all: a library may run a platform-dependent algorithm for scalar generation, for instance depending on whether it runs on an x64 or x86 platform.

It is crucial to provide a reproducible Scalar::random, as it is used in Scalar::from_hash, which in turn is used in all non-interactive ZK proofs to derive a challenge.

This PR addresses this by implementing random scalar generation within the library. We simply take a sufficiently large bytestring and reduce it modulo the order of the curve's prime-order (sub)group, in compliance with RFC 9380.

As a side effect, sampling a scalar is now constant-time (which was not the case before).
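The sampling described above can be sketched as follows. This is a toy illustration with a hypothetical 64-bit ORDER so everything fits in u128 arithmetic; a real implementation uses the curve's ~256-bit group order, 48 input bytes (RFC 9380 suggests ceil((log2(q) + 128) / 8) bytes), and a constant-time big-integer reduction rather than the `%` operator:

```rust
// Toy sketch of "sample wide, reduce mod order" scalar generation.
// Hypothetical 64-bit group order; real curves use ~256-bit orders.
const ORDER: u128 = 0xffff_ffff_ffff_fffb;

/// Interpret `bytes` as a big-endian integer and reduce it mod ORDER.
/// 16 input bytes give 64 extra bits of slack over the 64-bit order,
/// so the modulo bias is on the order of 2^-64 (negligible).
/// NOTE: `%` here is NOT constant-time; a real implementation would
/// use a constant-time wide reduction.
fn scalar_from_wide_bytes(bytes: [u8; 16]) -> u128 {
    let wide = u128::from_be_bytes(bytes);
    wide % ORDER
}

fn main() {
    // The same input bytes always map to the same scalar, on every
    // platform -- the reproducibility property this PR is after.
    let bytes = [0xab; 16];
    let s1 = scalar_from_wide_bytes(bytes);
    let s2 = scalar_from_wide_bytes(bytes);
    assert_eq!(s1, s2);
    assert!(s1 < ORDER);
    println!("scalar = {s1}");
}
```

Because the input is 128 bits wider than the order, the reduced value is statistically close to uniform; with the real parameters (48 bytes for a 256-bit order) the bias is roughly 2^-128.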

Signed-off-by: Denis Varlakov <denis@dfns.co>
@survived survived marked this pull request as ready for review January 30, 2025 15:21
@survived survived requested a review from maurges January 30, 2025 15:21
@maurges (Contributor) commented Jan 31, 2025

Is it maybe worth keeping the old random function as fast_random?

@survived (Contributor, Author) commented Jan 31, 2025 via email

@maurges (Contributor) commented Jan 31, 2025

> Why do you assume it's faster?

Because I associate constant-time with slowness (=

I assume the authors of the underlying libraries optimized them more than we did. They might also have other upsides, like consuming less entropy and having a better distribution. The only one of these I care about is speed, for tests.

But I don't care about it much. If you think it's not worth complicating the API, I don't mind.

@survived (Contributor, Author)

In the tests, I think we want to use a reproducible function, to have reproducible tests

@survived (Contributor, Author)

It would be interesting to see if there's any significant performance difference; I will do the experiments.

@survived (Contributor, Author)

Ran benchmarks in 4639ab1; results:

random/secp256k1/v04    time:   [34.735 ns 34.743 ns 34.753 ns]
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) high mild
  3 (3.00%) high severe
random/secp256k1/latest time:   [69.344 ns 69.417 ns 69.506 ns]
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild
  2 (2.00%) high severe

random/secp256r1/v04    time:   [34.955 ns 35.007 ns 35.113 ns]
Found 12 outliers among 100 measurements (12.00%)
  3 (3.00%) low mild
  3 (3.00%) high mild
  6 (6.00%) high severe
random/secp256r1/latest time:   [131.63 ns 131.69 ns 131.76 ns]
Found 13 outliers among 100 measurements (13.00%)
  5 (5.00%) high mild
  8 (8.00%) high severe

random/stark/v04        time:   [1.8662 µs 1.8687 µs 1.8711 µs]
Found 6 outliers among 100 measurements (6.00%)
  1 (1.00%) low severe
  4 (4.00%) low mild
  1 (1.00%) high mild
random/stark/latest     time:   [6.4075 µs 6.4093 µs 6.4114 µs]
Found 16 outliers among 100 measurements (16.00%)
  5 (5.00%) high mild
  11 (11.00%) high severe

random/ed25519/v04      time:   [163.51 ns 163.55 ns 163.60 ns]
Found 9 outliers among 100 measurements (9.00%)
  2 (2.00%) low mild
  3 (3.00%) high mild
  4 (4.00%) high severe
random/ed25519/latest   time:   [155.30 ns 155.35 ns 155.41 ns]
Found 12 outliers among 100 measurements (12.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild
  9 (9.00%) high severe

Yeah, it became 2-6 times slower, except for ed25519, which became faster 😄 Given that the difference is significant, it does make sense to also provide vartime random generation.

@survived (Contributor, Author) commented Jan 31, 2025

Ah, ed25519 became faster because they use the same method for random scalar generation, except that they reduce 64 bytes instead of 48 bytes
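For context, a back-of-the-envelope note of my own (not from the PR) on why both byte counts are fine: if X is uniform on [0, 2^(8n)) and q ≈ 2^256 is the group order, the statistical distance of X mod q from uniform over Z_q is bounded by roughly

```latex
% Bias of "sample wide, then reduce": X uniform on [0, 2^{8n}), q the group order.
\Delta\bigl(X \bmod q\bigr) \;\le\; \frac{q}{2^{8n}} \;\approx\; 2^{256 - 8n},
\qquad n = 48 \;\Rightarrow\; \Delta \lesssim 2^{-128},
\qquad n = 64 \;\Rightarrow\; \Delta \lesssim 2^{-256}.
```

Either way the bias is negligible; the 48-byte choice matches RFC 9380's k = 128 security parameter, while dalek's wide reduction simply uses a full 64-byte hash output.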

@survived (Contributor, Author) commented Feb 3, 2025

@maurges added a vartime random method (I don't think we can use it in the tests, though)

@maurges (Contributor) commented Feb 10, 2025

> (I don't think we can use it in the tests though)

Because of reproducibility?

@survived (Contributor, Author)

> Because of reproducibility?

Yes, exactly. In most cases we want tests to be reproducible.

@survived survived merged commit a113b0d into m Feb 25, 2025
24 checks passed
@survived survived deleted the random-scalar branch February 25, 2025 10:37