Commit f12acb8

Merge branch 'unitary-hack-ibmq-auth' into ibmq-auth
2 parents c483dc3 + 60ace1e commit f12acb8

76 files changed: +4506, -935 lines

.github/workflows/functional_tests.yaml

+4-4
@@ -4,7 +4,7 @@
 name: Python package

 on:
-  push:
+  push:
   pull_request:

 jobs:
@@ -17,16 +17,16 @@ jobs:
         python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]

     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v3
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
           if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
-          python -m pip install flake8 pytest qiskit-aer qiskit_ibm_runtime
+          python -m pip install flake8 pytest
       - name: Lint with flake8
         run: |
           # stop the build if there are Python syntax errors or undefined names

.github/workflows/lint.yaml

+2-2
@@ -14,9 +14,9 @@ jobs:
   lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
      - name: Setup Python 3.8
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ env.PYTHON_VERSION }}
       - name: Update pip and install lint utilities

.github/workflows/pull_request.yaml

+2-2
@@ -9,8 +9,8 @@ jobs:
   pre-commit:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: ${{ env.PYTHON_VERSION }}
       - uses: pre-commit/action@v2.0.3

README.md

+1-1
@@ -55,7 +55,7 @@ Simulate quantum computations on classical hardware using PyTorch. It supports s
 Researchers on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, quantum neural networks.
 #### Differences from Qiskit/Pennylane

-Dynamic computation graph, automatic gradient computation, fast GPU support, batch model tersorized processing.
+Dynamic computation graph, automatic gradient computation, fast GPU support, batch model tensorized processing.

 ## News
 - v0.1.8 Available!

examples/PauliSumOp/pauli_sum_op_noise.py

Whitespace-only changes.

examples/QCBM/README.md

+42
@@ -0,0 +1,42 @@ (new file)

# Quantum Circuit Born Machine
(Implementation by: [Gopal Ramesh Dahale](https://github.com/Gopal-Dahale))

The Quantum Circuit Born Machine (QCBM) [1] is a generative modeling algorithm that uses the Born rule from quantum mechanics to sample from a quantum state $|\psi\rangle$ learned by training an ansatz $U(\theta)$ [1][2]. In this tutorial we show how `torchquantum` can be used to model a Gaussian mixture with a QCBM.

## Setup

Below is the usage of `qcbm_gaussian_mixture.py`, which can be obtained by running `python qcbm_gaussian_mixture.py -h`.

```
usage: qcbm_gaussian_mixture.py [-h] [--n_wires N_WIRES] [--epochs EPOCHS] [--n_blocks N_BLOCKS] [--n_layers_per_block N_LAYERS_PER_BLOCK] [--plot] [--optimizer OPTIMIZER] [--lr LR]

options:
  -h, --help            show this help message and exit
  --n_wires N_WIRES     Number of wires used in the circuit
  --epochs EPOCHS       Number of training epochs
  --n_blocks N_BLOCKS   Number of blocks in ansatz
  --n_layers_per_block N_LAYERS_PER_BLOCK
                        Number of layers per block in ansatz
  --plot                Visualize the predicted probability distribution
  --optimizer OPTIMIZER
                        optimizer class from torch.optim
  --lr LR
```

For example:

```
python qcbm_gaussian_mixture.py --plot --epochs 100 --optimizer RMSprop --lr 0.01 --n_blocks 6 --n_layers_per_block 2 --n_wires 6
```

Using the command above gives an output similar to the plot below.

<p align="center">
  <img src='./assets/sample_output.png' width=500 alt='sample output of QCBM'>
</p>

## References

1. Liu, Jin-Guo, and Lei Wang. "Differentiable learning of quantum circuit Born machines." Physical Review A 98.6 (2018): 062324.
2. Gili, Kaitlin, et al. "Do quantum circuit Born machines generalize?" Quantum Science and Technology 8.3 (2023): 035021.
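
As background for the QCBM description above, here is a minimal sketch (not part of this commit) of what "sampling via the Born rule" means in practice: measurement probabilities are the squared amplitudes of the state, and drawing samples from the trained model is then ordinary categorical sampling. The toy state vector below is an illustrative assumption, not a value produced by the example.

```python
import numpy as np

# Born rule on a toy 2-qubit state: probabilities are the squared amplitudes.
# The state below is arbitrary, chosen only so the probabilities sum to 1.
state = np.array([0.6, 0.0, 0.0, 0.8], dtype=complex)  # |psi> over basis states 00..11
probs = np.abs(state) ** 2                              # p(x) = |<x|psi>|^2
assert np.isclose(probs.sum(), 1.0)

# Sampling from the learned distribution is then categorical sampling over bitstrings.
rng = np.random.default_rng(0)
samples = rng.choice(len(probs), size=5, p=probs)
print(probs, samples)
```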

examples/QCBM/qcbm_gaussian_mixture.ipynb

+255
Large diffs are not rendered by default.
examples/QCBM/qcbm_gaussian_mixture.py

+129

@@ -0,0 +1,129 @@ (new file)

import matplotlib.pyplot as plt
import numpy as np
import torch
from torchquantum.algorithm import QCBM, MMDLoss
import torchquantum as tq
import argparse
import os
from pprint import pprint


# Reproducibility
def set_seed(seed: int = 42) -> None:
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    # When running on the CuDNN backend, two further options must be set
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Set a fixed value for the hash seed
    os.environ["PYTHONHASHSEED"] = str(seed)
    print(f"Random seed set as {seed}")


def _setup_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--n_wires", type=int, default=6, help="Number of wires used in the circuit"
    )
    parser.add_argument(
        "--epochs", type=int, default=10, help="Number of training epochs"
    )
    parser.add_argument(
        "--n_blocks", type=int, default=6, help="Number of blocks in ansatz"
    )
    parser.add_argument(
        "--n_layers_per_block",
        type=int,
        default=1,
        help="Number of layers per block in ansatz",
    )
    parser.add_argument(
        "--plot",
        action="store_true",
        help="Visualize the predicted probability distribution",
    )
    parser.add_argument(
        "--optimizer", type=str, default="Adam", help="optimizer class from torch.optim"
    )
    parser.add_argument("--lr", type=float, default=1e-2)
    return parser


# Function to create a Gaussian mixture
def gaussian_mixture_pdf(x, mus, sigmas):
    mus, sigmas = np.array(mus), np.array(sigmas)
    vars = sigmas**2
    values = [
        (1 / np.sqrt(2 * np.pi * v)) * np.exp(-((x - m) ** 2) / (2 * v))
        for m, v in zip(mus, vars)
    ]
    values = np.sum([val / sum(val) for val in values], axis=0)
    return values / np.sum(values)


def main():
    set_seed()
    parser = _setup_parser()
    args = parser.parse_args()

    print("Configuration:")
    pprint(vars(args))

    # Create a Gaussian mixture
    n_wires = args.n_wires
    assert n_wires >= 1, "Number of wires must be at least 1"

    x_max = 2**n_wires
    x_input = np.arange(x_max)
    mus = [(2 / 8) * x_max, (5 / 8) * x_max]
    sigmas = [x_max / 10] * 2
    data = gaussian_mixture_pdf(x_input, mus, sigmas)

    # This is the target distribution that the QCBM will learn
    target_probs = torch.tensor(data, dtype=torch.float32)

    # Ansatz
    layers = tq.RXYZCXLayer0(
        {
            "n_blocks": args.n_blocks,
            "n_wires": n_wires,
            "n_layers_per_block": args.n_layers_per_block,
        }
    )

    qcbm = QCBM(n_wires, layers)

    # To train QCBMs, we use MMDLoss with a radial basis function kernel.
    bandwidth = torch.tensor([0.25, 60])
    space = torch.arange(2**n_wires)
    mmd = MMDLoss(bandwidth, space)

    # Optimization
    optimizer_class = getattr(torch.optim, args.optimizer)
    optimizer = optimizer_class(qcbm.parameters(), lr=args.lr)

    for i in range(args.epochs):
        optimizer.zero_grad(set_to_none=True)
        pred_probs = qcbm()
        loss = mmd(pred_probs, target_probs)
        loss.backward()
        optimizer.step()
        print(i, loss.item())

    # Visualize the results
    if args.plot:
        with torch.no_grad():
            pred_probs = qcbm()

        plt.plot(x_input, target_probs, linestyle="-.", label=r"$\pi(x)$")
        plt.bar(x_input, pred_probs, color="green", alpha=0.5, label="samples")
        plt.xlabel("Samples")
        plt.ylabel("Prob. Distribution")

        plt.legend()
        plt.show()


if __name__ == "__main__":
    main()
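
As context for the `MMDLoss` step above (not part of the commit): for discrete distributions over an integer sample space, the squared maximum mean discrepancy with kernel matrix K reduces to the quadratic form (p − q)ᵀ K (p − q) in the difference of the probability vectors. The sketch below assumes that formulation and a mixture-of-RBF kernel; the exact implementation inside `torchquantum.algorithm.MMDLoss` may differ in details such as the kernel normalization or bandwidth handling.

```python
import torch

def rbf_mmd_sq(p, q, space, bandwidths):
    """Squared MMD between two probability vectors over a discrete 1-D space,
    using a sum of RBF kernels: K = sum_b exp(-(x - y)^2 / (2 * b))."""
    diff2 = (space[:, None] - space[None, :]).float() ** 2      # pairwise (x - y)^2
    K = sum(torch.exp(-diff2 / (2.0 * b)) for b in bandwidths)  # mixture-of-RBF kernel matrix
    d = p - q
    return d @ K @ d                                            # (p - q)^T K (p - q)

# Toy usage with the same bandwidths as the example script.
space = torch.arange(8)
p = torch.full((8,), 1 / 8)        # uniform distribution
q = torch.zeros(8); q[3] = 1.0     # point mass
print(rbf_mmd_sq(p, q, space, torch.tensor([0.25, 60.0])))
```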

examples/amplitude_encoding_mnist/mnist_example.py

+13
@@ -100,10 +100,23 @@ def forward(self, x, use_qiskit=False):
         bsz = x.shape[0]
         x = F.avg_pool2d(x, 6).view(bsz, 16)

+
+        print("Shape 1:")
+        print(self.q_device.states.shape)
         self.encoder(self.q_device, x)
         self.q_layer(self.q_device)
+
+
+
+        print("X shape before measurement")
+        print(x.shape)
+
         x = self.measure(self.q_device)

+
+        print("X shape after measurement")
+        print(x.shape)
+
         x = x.reshape(bsz, 2, 2).sum(-1).squeeze()
         x = F.log_softmax(x, dim=1)
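
For orientation (not part of the commit), the sketch below shows the tensor shapes those debug prints are expected to reveal. It assumes a 4-wire device, which is inferred from the later `x.reshape(bsz, 2, 2)` and the 16 amplitude-encoded features rather than stated in this hunk; the batch size of 32 is likewise an illustrative assumption.

```python
import torch

# Assumed values: 4 wires inferred from x.reshape(bsz, 2, 2); batch size is illustrative.
bsz, n_wires = 32, 4

# The device state tensor keeps one size-2 axis per wire, plus a leading batch axis
# whose size depends on when the device was last reset, e.g.:
states = torch.zeros((bsz,) + (2,) * n_wires, dtype=torch.complex64)
print(states.shape)                                # torch.Size([32, 2, 2, 2, 2])

# "X shape before measurement": x still holds the 16 pooled classical features.
x_before = torch.zeros(bsz, 16)
print(x_before.shape)                              # torch.Size([32, 16])

# "X shape after measurement": one expectation value per wire, which is what
# makes the later x.reshape(bsz, 2, 2).sum(-1) valid.
x_after = torch.zeros(bsz, n_wires)
print(x_after.reshape(bsz, 2, 2).sum(-1).shape)    # torch.Size([32, 2])
```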
