Constant-time scalar sampling & hash-to-scalar #55
Conversation
Is it maybe worth keeping the old random function as fast_random?
Why do you assume it’s faster?
Because I associate constant-time with slowness (i.e. I assume the authors of the underlying libraries optimized them more than we did). Also, they might have other upsides, like consuming less entropy and having a better distribution. The only one of these I care about is speed, for tests, but I don't care about it much. If you think it's not worth complicating the API, I don't mind.
In the tests, I think we want to use a reproducible function, to have reproducible tests.
It's interesting to see if there's any significant performance difference; I will do the experiments.
Did benchmarks in 4639ab1, results:
Yeah, it became 2-6 times slower, except for ed25519, which became faster 😄 Given that the difference is significant, it does make sense to provide vartime random generation.
Ah, ed25519 became faster because they use the same method for random scalar generation, except that they reduce 64 bytes instead of 48 bytes.
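For context, this is roughly what that 64-byte wide reduction looks like; a minimal sketch (not code from this PR), assuming the curve25519-dalek and rand_core crates and their public `Scalar::from_bytes_mod_order_wide`:

```rust
// Sketch of ed25519-style scalar sampling via 64-byte wide reduction.
use curve25519_dalek::scalar::Scalar;
use rand_core::{OsRng, RngCore};

fn random_ed25519_scalar() -> Scalar {
    // Fill a buffer twice the scalar size with random bytes...
    let mut wide = [0u8; 64];
    OsRng.fill_bytes(&mut wide);
    // ...and reduce it modulo the group order; the resulting bias is negligible.
    Scalar::from_bytes_mod_order_wide(&wide)
}
```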
@maurges added a vartime random method (I don't think we can use it in the tests, though)
Because of reproducibility?
Yes, exactly: in most cases we want tests to be reproducible.
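To illustrate the reproducibility point, a hypothetical test sketch (not from this repo), assuming the rand_chacha crate; `sample_scalar` is a stand-in for whatever sampling routine is under test:

```rust
// Hypothetical reproducibility test: the same seed must give the same output.
use rand_chacha::ChaCha20Rng;
use rand_core::{RngCore, SeedableRng};

// Placeholder for the scalar-sampling routine under test:
// here it just draws 32 raw bytes from the provided RNG.
fn sample_scalar(rng: &mut impl RngCore) -> [u8; 32] {
    let mut buf = [0u8; 32];
    rng.fill_bytes(&mut buf);
    buf
}

#[test]
fn sampling_is_reproducible() {
    let mut rng1 = ChaCha20Rng::seed_from_u64(42);
    let mut rng2 = ChaCha20Rng::seed_from_u64(42);
    // With a fixed seed the result must be identical on every platform.
    assert_eq!(sample_scalar(&mut rng1), sample_scalar(&mut rng2));
}
```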
We currently rely on the actual curve implementation (i.e. the RustCrypto or dalek repos) to provide a function for random scalar generation from a source of randomness.
However, in the code we implicitly rely on random scalar generation being reproducible: we expect that when a reproducible PRNG (such as `HashRng`) is provided with the same seed, the output scalar is the same on all platforms. This is not guaranteed at all: a library may run a platform-dependent algorithm for scalar generation, for instance depending on whether it's an x64 or x86 platform.
It is crucial to provide a reproducible `Scalar::random`, as it is used in `Scalar::from_hash`, which in turn is used in all non-interactive ZK proofs to derive a challenge.

This PR addresses this by implementing random scalar generation within the library: we simply take a sufficiently large bytestring and reduce it modulo the curve's prime (sub)group order, in compliance with RFC 9380.

As a side effect, sampling a scalar is constant-time (which was not the case before).
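To make the reduction concrete, here is a hedged sketch of the idea (not the PR's actual implementation, which works in constant time on fixed-width integers): draw about 128 bits more than the order length, as RFC 9380 prescribes for `hash_to_field`, and reduce modulo the group order. The `num-bigint` arithmetic below is for illustration only and is not constant-time.

```rust
// Illustration only: sample a scalar by wide reduction modulo `order`.
// Assumes the num-bigint and rand_core crates; a real implementation
// would use constant-time fixed-width arithmetic instead.
use num_bigint::BigUint;
use rand_core::{OsRng, RngCore};

fn random_scalar_mod(order: &BigUint) -> BigUint {
    // L = ceil((bits(order) + 128) / 8) bytes, so the modular bias is ~2^-128.
    let l = (order.bits() as usize + 128 + 7) / 8;
    let mut wide = vec![0u8; l];
    OsRng.fill_bytes(&mut wide);
    // Interpret the buffer as a big-endian integer and reduce it mod the order.
    BigUint::from_bytes_be(&wide) % order
}
```

For a 256-bit order this gives the 48-byte buffer mentioned above; ed25519 reduces 64 bytes instead.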