Cross-platform compatibility #502

Closed · wants to merge 24 commits
18 changes: 15 additions & 3 deletions .github/workflows/test_on_pr.yml
@@ -33,12 +33,24 @@ jobs:
needs: permission
strategy:
matrix:
- platform: [macos-latest, windows-latest, ubuntu-latest]
- runs-on: ${{matrix.platform}}
+ platform: [macos-latest, windows-latest, ubuntu-latest, ubuntu-latest-x86]
+ runs-on: ${{ matrix.platform == 'ubuntu-latest-x86' && 'ubuntu-latest' || matrix.platform }}
env:
DBGSYNCLOG: trace
DBGSYNCON: true
steps:
- name: Setup Alpine Linux
if: matrix.platform == 'ubuntu-latest-x86'
uses: jirutka/setup-alpine@v1
with:
arch: x86
packages: >
golang
make
git
gcc
musl-dev

- name: Set up Go ^1.13
uses: actions/setup-go@v3
with:
@@ -51,7 +63,7 @@ jobs:
fetch-depth: 0

- name: Test without coverage
- if: matrix.platform == 'macos-latest' || matrix.platform == 'windows-latest'
+ if: matrix.platform == 'macos-latest' || matrix.platform == 'windows-latest' || matrix.platform == 'ubuntu-latest-x86'
run: make test

Contributor:

Maybe add a step that does a uname -a and a go env, just to be able to double-check that it works?
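
Such a step might look like this (a sketch; the step name is illustrative):

      - name: Show runner info
        run: |
          uname -a
          go env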

Member:

You can see that in the runner anyway, at the top...

Contributor:

I don't think that would be the case for the Alpine x86 job, since it seems to run on the same worker, no?

Member:

Ah, I didn't see that it only runs 3 of the 4 entries in the matrix: according to the list of standard GitHub runners, ubuntu-latest-x86 doesn't exist. So perhaps it's just ignored?

If you look at the checks from this PR, only 3 are run, and I cannot see anything about two tests being run on ubuntu-latest.

Contributor Author:

I don't think the new workflow will be executed until the changes have been merged.

Member:

Now I'm confused:

  • according to the header, the synchronize type will launch the script every time you do a git push
  • I can't find your changes to the file .github/workflows/test_on_pr.yml anymore - did you remove the changes?

I do think it's a good idea to add the 32-bit test to the workflow. And if I'm not mistaken, the workflow has actually been run. It's just that the -x86 target doesn't exist. But I didn't test it.
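
For reference, the kind of header being referred to might look like this (a sketch; the actual trigger list in test_on_pr.yml may differ):

on:
  pull_request:
    types: [opened, synchronize, reopened]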

Contributor Author (@matteosz, May 22, 2024):

So yes, synchronize makes the CI run on every git push; however, the CI that runs is the one already on the master branch (or drandmerge, since that's the base branch now). Hence I changed the workflow to execute separate jobs (thus avoiding the -x86 target issue), but those changes weren't detected either, so I moved the CI changes to a separate PR to be merged before this one, so that the correct CI tests run.

Member:

OK, as I have too many things going on and couldn't decide where to start, I did this:

https://github.com/ineiti/test_ubuntu_32bits/blob/main/.github/workflows/test.yaml

some notes:

  • it's nicer to use a matrix; it keeps things simpler and easier to extend
  • you don't use shell: alpine.sh {0}, so I think you're still running on 64 bits and not 32
  • uname doesn't show that it's 32-bit, so I used getconf LONG_BIT (thanks to Stack Overflow)
  • ChatGPT was useless in this endeavour...

I hope that helps...
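
Condensed, the approach in that workflow looks roughly like this (a sketch assembled from the notes above and the setup-alpine action already used in this PR; checkout, Go setup, and the actual test steps are omitted):

jobs:
  test:
    strategy:
      matrix:
        platform: [ubuntu-latest, ubuntu-latest-x86]
    runs-on: ${{ matrix.platform == 'ubuntu-latest-x86' && 'ubuntu-latest' || matrix.platform }}
    steps:
      - name: Setup Alpine Linux
        if: matrix.platform == 'ubuntu-latest-x86'
        uses: jirutka/setup-alpine@v1
        with:
          arch: x86
          packages: golang make git gcc musl-dev

      - name: Check word size
        if: matrix.platform == 'ubuntu-latest-x86'
        shell: alpine.sh {0}   # setup-alpine's wrapper enters the 32-bit chroot
        run: getconf LONG_BIT  # prints 32 inside the chroot, 64 on the host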

Contributor Author:

Will update the CI with that

Member:

Great, a good CI pipeline is worth putting some effort into!

- name: Test with coverage
18 changes: 15 additions & 3 deletions .github/workflows/test_on_push.yml
@@ -8,12 +8,24 @@ jobs:
test_and_coverage:
strategy:
matrix:
- platform: [macos-latest, windows-latest, ubuntu-latest]
- runs-on: ${{matrix.platform}}
+ platform: [macos-latest, windows-latest, ubuntu-latest, ubuntu-latest-x86]
+ runs-on: ${{ matrix.platform == 'ubuntu-latest-x86' && 'ubuntu-latest' || matrix.platform }}
env:
DBGSYNCLOG: trace
DBGSYNCON: true
steps:
- name: Setup Alpine Linux
if: matrix.platform == 'ubuntu-latest-x86'
uses: jirutka/setup-alpine@v1
with:
arch: x86
packages: >
golang
make
git
gcc
musl-dev

- name: Set up Go ^1.13
uses: actions/setup-go@v3
with:
@@ -26,7 +38,7 @@ jobs:
fetch-depth: 0

- name: Test without coverage
- if: matrix.platform == 'macos-latest' || matrix.platform == 'windows-latest'
+ if: matrix.platform == 'macos-latest' || matrix.platform == 'windows-latest' || matrix.platform == 'ubuntu-latest-x86'
run: make test

- name: Test with coverage
262 changes: 262 additions & 0 deletions examples/dkg_test.go
@@ -0,0 +1,262 @@
package examples

import (
"os"
"strconv"
"testing"

"github.com/stretchr/testify/require"
"go.dedis.ch/kyber/v3"
"go.dedis.ch/kyber/v3/group/edwards25519"
"go.dedis.ch/kyber/v3/share"
dkg "go.dedis.ch/kyber/v3/share/dkg/pedersen"
)

var suite = edwards25519.NewBlakeSHA256Ed25519()

/*
This example illustrates how to use the dkg/pedersen API to generate a public
key and its corresponding private key that is shared among nodes. It shows the
different phases that each node must perform in order to construct the private
shares that will form the final private key. The example uses 3 nodes and shows
the "happy" path where each node does its job correctly.
*/
func Test_Example_DKG(t *testing.T) {

// DKG scales exponentially; the following command prints the duration [ns]
// of this test case with an increasing number of nodes. The resulting plot
// should illustrate an exponential growth.
//
// for (( i=1; i<30; i++ )); do
// start=`gdate +%s%N`
// NUM_NODES=$i go test -run Test_Example_DKG >/dev/null
// duration=$(( `gdate +%s%N` - start ))
// echo $duration
// done
//
var nStr = os.Getenv("NUM_NODES")
if nStr == "" {
// default number of nodes for this test
nStr = "7"
}
nUnsz, err := strconv.Atoi(nStr)
require.NoError(t, err)
n := uint32(nUnsz)

type node struct {
dkg *dkg.DistKeyGenerator
pubKey kyber.Point
privKey kyber.Scalar
deals []*dkg.Deal
resps []*dkg.Response
secretShare *share.PriShare
}

nodes := make([]*node, n)
pubKeys := make([]kyber.Point, n)

// 1. Init the nodes
for i := uint32(0); i < n; i++ {
privKey := suite.Scalar().Pick(suite.RandomStream())
pubKey := suite.Point().Mul(privKey, nil)
pubKeys[i] = pubKey
nodes[i] = &node{
pubKey: pubKey,
privKey: privKey,
deals: make([]*dkg.Deal, 0),
resps: make([]*dkg.Response, 0),
}
}

// 2. Create the DKGs on each node
for i, node := range nodes {
dkg, err := dkg.NewDistKeyGenerator(suite, nodes[i].privKey, pubKeys, n)
require.NoError(t, err)
node.dkg = dkg
}

// 3. Each node sends its Deals to the other nodes
for _, node := range nodes {
deals, err := node.dkg.Deals()
require.NoError(t, err)
for i, deal := range deals {
nodes[i].deals = append(nodes[i].deals, deal)
}
}

// 4. Process the Deals on each node and send the responses to the other
// nodes
for i, node := range nodes {
for _, deal := range node.deals {
resp, err := node.dkg.ProcessDeal(deal)
require.NoError(t, err)
for j, otherNode := range nodes {
if j == i {
continue
}
otherNode.resps = append(otherNode.resps, resp)
}
}
}

// 5. Process the responses on each node
for _, node := range nodes {
for _, resp := range node.resps {
_, err := node.dkg.ProcessResponse(resp)
require.NoError(t, err)
// err = node.dkg.ProcessJustification(justification)
// require.NoError(t, err)
}
}

// 6. Check and print the qualified shares
for _, node := range nodes {
require.True(t, node.dkg.Certified())
require.Equal(t, n, uint32(len(node.dkg.QualifiedShares())))
require.Equal(t, n, uint32(len(node.dkg.QUAL())))
t.Log("qualified shares:", node.dkg.QualifiedShares())
t.Log("QUAL", node.dkg.QUAL())
}

// 7. Get the secret shares and public key
shares := make([]*share.PriShare, n)
var publicKey kyber.Point
for i, node := range nodes {
distrKey, err := node.dkg.DistKeyShare()
require.NoError(t, err)
shares[i] = distrKey.PriShare()
publicKey = distrKey.Public()
node.secretShare = distrKey.PriShare()
t.Log("new distributed public key:", publicKey)
}

// 8. Variant A - Encrypt a secret with the public key and decrypt it with
// the reconstructed shared secret key. Reconstructing the shared secret key
// is not something we should do, as it gives the power to decrypt any
// further messages encrypted with the shared public key. For this reason we
// show in variant B how to make nodes send back partial decryptions instead
// of their shares. In variant C the nodes return partial decryptions that
// are encrypted under a provided public key.
message := []byte("Hello world")
secretKey, err := share.RecoverSecret(suite, shares, n, n)
require.NoError(t, err)
K, C, remainder := ElGamalEncrypt(suite, publicKey, message)
require.Equal(t, 0, len(remainder))
decryptedMessage, err := ElGamalDecrypt(suite, secretKey, K, C)
require.NoError(t, err)
require.Equal(t, message, decryptedMessage)

// 8. Variant B - Each node provide only a partial decryption by sending its
// public share. We then reconstruct the public commitment with those public
// shares.
partials := make([]kyber.Point, n)
pubShares := make([]*share.PubShare, n)
for i, node := range nodes {
S := suite.Point().Mul(node.secretShare.V, K)
partials[i] = suite.Point().Sub(C, S)
pubShares[i] = &share.PubShare{
I: uint32(i), V: partials[i],
}
}

// Reconstruct the public commitment, which contains the decrypted message
res, err := share.RecoverCommit(suite, pubShares, n, n)
require.NoError(t, err)
decryptedMessage, err = res.Data()
require.NoError(t, err)
require.Equal(t, message, decryptedMessage)

// 8. Variant C - Nodes return a partial decryption re-encrypted under the
// client's provided public key. This is useful in case the decryption
// happens in public: the decrypted message is then never released in the
// clear, but is revealed re-encrypted under the provided public key.
//
// Here is the crypto that happens in 3 phases:
//
// (1) Message encryption:
//
// r: random scalar
// A: dkg public key
// G: curve's generator
// M: message to encrypt
// (C, U): encrypted message
//
// C = rA + M
// U = rG
//
// (2) Node's partial decryption
//
// V: node's public re-encrypted share
// o: node's private share
// Q: client's public key (pG)
//
// V = oU + oQ
//
// (3) Message's decryption
//
// R: recovered commit (f(V1, V2, ...Vi)) using Lagrange interpolation
// p: client's private key
// M': decrypted message
//
// M' = C - (R - pA)
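//
// Why M' equals M (a quick sanity check, writing A = aG for the shared
// secret a): Lagrange interpolation of the Vi recovers
// R = aU + aQ = a(rG) + a(pG) = rA + pA, hence
// M' = C - (R - pA) = (rA + M) - rA - pA + pA = M.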

A := publicKey
r := suite.Scalar().Pick(suite.RandomStream())
M := suite.Point().Embed(message, suite.RandomStream())
C = suite.Point().Add( // rA + M
suite.Point().Mul(r, A), // rA
M,
)
U := suite.Point().Mul(r, nil) // rG

p := suite.Scalar().Pick(suite.RandomStream())
Q := suite.Point().Mul(p, nil) // pG

partials = make([]kyber.Point, n)
pubShares = make([]*share.PubShare, n) // V1, V2, ...Vi
for i, node := range nodes {
v := suite.Point().Add( // oU + oQ
suite.Point().Mul(node.secretShare.V, U), // oU
suite.Point().Mul(node.secretShare.V, Q), // oQ
)
partials[i] = v
pubShares[i] = &share.PubShare{
I: uint32(i), V: partials[i],
}
}

R, err := share.RecoverCommit(suite, pubShares, n, n) // R = f(V1, V2, ...Vi)
require.NoError(t, err)

decryptedPoint := suite.Point().Sub( // C - (R - pA)
C,
suite.Point().Sub( // R - pA
R,
suite.Point().Mul(p, A), // pA
),
)
decryptedMessage, err = decryptedPoint.Data()
require.NoError(t, err)
require.Equal(t, decryptedMessage, message)

// 9. The following shows a re-share of the dkg key, which invalidates the
// current shares on each node and produces a new public key. After that,
// steps 3, 4, and 5 need to be repeated in order to get the new shares and
// public key.
for _, node := range nodes {
share, err := node.dkg.DistKeyShare()
require.NoError(t, err)
c := &dkg.Config{
Suite: suite,
Longterm: node.privKey,
OldNodes: pubKeys,
NewNodes: pubKeys,
Share: share,
Threshold: n,
OldThreshold: n,
}
newDkg, err := dkg.NewDistKeyHandler(c)
require.NoError(t, err)
node.dkg = newDkg
}
}
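
For reference, the ElGamalEncrypt and ElGamalDecrypt helpers used in step 8 live elsewhere in the examples package. A minimal sketch of their shape (assuming kyber v3's util/random package; not the exact code under review):

package examples

import (
	"go.dedis.ch/kyber/v3"
	"go.dedis.ch/kyber/v3/util/random"
)

// ElGamalEncrypt embeds as much of message as fits into a curve point M and
// encrypts it towards pubkey as the pair (K, C); any bytes that did not fit
// are returned as remainder.
func ElGamalEncrypt(group kyber.Group, pubkey kyber.Point, message []byte) (
	K, C kyber.Point, remainder []byte) {

	M := group.Point().Embed(message, random.New())
	max := group.Point().EmbedLen()
	if max > len(message) {
		max = len(message)
	}
	remainder = message[max:]

	k := group.Scalar().Pick(random.New()) // ephemeral private key
	K = group.Point().Mul(k, nil)          // ephemeral DH public key kG
	S := group.Point().Mul(k, pubkey)      // ephemeral DH shared secret kA
	C = S.Add(S, M)                        // message point blinded with S
	return K, C, remainder
}

// ElGamalDecrypt unblinds (K, C) with the private key and extracts the
// embedded message: M = C - xK, since xK = x(kG) = k(xG) = kA.
func ElGamalDecrypt(group kyber.Group, prikey kyber.Scalar, K, C kyber.Point) (
	[]byte, error) {

	S := group.Point().Mul(prikey, K) // regenerate the shared secret
	M := group.Point().Sub(C, S)      // unblind the message point
	return M.Data()                   // extract the embedded bytes
}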
2 changes: 1 addition & 1 deletion group/curve25519/param.go
@@ -24,7 +24,7 @@ type Param struct {

P big.Int // Prime defining the underlying field
Q big.Int // Order of the prime-order base point
- R int // Cofactor: Q*R is the total size of the curve
+ R int64 // Cofactor: Q*R is the total size of the curve

A, D big.Int // Edwards curve equation parameters

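Why this one-word change matters (an illustrative sketch, not code from this PR; the values are placeholders): int is 32 bits wide on 32-bit platforms such as the x86 target added to the CI above, while int64 is 64 bits everywhere, and math/big's constructor takes an int64 directly:

package main

import (
	"fmt"
	"math/big"
)

// Illustrative mirror of the two Param fields involved.
type Param struct {
	Q big.Int // order of the prime-order base point
	R int64   // cofactor; previously an int
}

func main() {
	p := Param{R: 8}      // curve25519-style cofactor
	p.Q.SetInt64(1000003) // placeholder order, not a real curve constant

	// Q*R is the total size of the curve. With R declared as int64, this
	// call behaves identically on every GOARCH; with R as int, its width
	// would differ between x86 (32 bits) and amd64 (64 bits).
	total := new(big.Int).Mul(&p.Q, big.NewInt(p.R))
	fmt.Println(total) // 8000024
}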
12 changes: 6 additions & 6 deletions proof/deniable.go
@@ -14,7 +14,7 @@ import (
// the Sigma-protocol proofs of any or all of the other participants.
// Different participants may produce different proofs of varying sizes,
// and may even consist of different numbers of steps.
- func DeniableProver(suite Suite, self int, prover Prover,
+ func DeniableProver(suite Suite, self uint32, prover Prover,
verifiers []Verifier) Protocol {

return Protocol(func(ctx Context) []error {
@@ -25,7 +25,7 @@ func DeniableProver(suite Suite, self int, prover Prover,

type deniableProver struct {
suite Suite // Agreed-on ciphersuite for protocol
- self int // Our own node number
+ self uint32 // Our own node number
sc Context // Clique protocol context

// verifiers for other nodes' proofs
@@ -43,14 +43,14 @@ type deniableProver struct {
err []error
}

- func (dp *deniableProver) run(suite Suite, self int, prv Prover,
+ func (dp *deniableProver) run(suite Suite, self uint32, prv Prover,
vrf []Verifier, sc Context) []error {
dp.suite = suite
dp.self = self
dp.sc = sc
dp.prirand = sc.Random()

- nnodes := len(vrf)
+ nnodes := uint32(len(vrf))
if self >= nnodes { // self is unsigned, so only the upper bound needs checking
return []error{errors.New("out-of-range self node")}
}
@@ -60,7 +60,7 @@ func (dp *deniableProver) run(suite Suite, self int, prv Prover,
verr := errors.New("prover or verifier not run")
dp.err = make([]error, nnodes)
for i := range dp.err {
- if i != self {
+ if uint32(i) != self {
dp.err[i] = verr
}
}
@@ -187,7 +187,7 @@ func (dp *deniableProver) challengeStep() error {
mix[j] ^= key[j]
}
}
- if len(keys) <= dp.self || !bytes.Equal(keys[dp.self], dp.key) {
+ if uint32(len(keys)) <= dp.self || !bytes.Equal(keys[dp.self], dp.key) {
return errors.New("our own message was corrupted")
}
