Unitary hack #281

Merged
merged 63 commits into from
Jul 17, 2024
Changes from 1 commit
e18afe4
chore: remove unnecessary import
king-p3nguin May 29, 2024
36e3f5e
fix: fix dep warns
king-p3nguin May 29, 2024
33d0e94
fix: fix more depwarns and remove job_monitor
king-p3nguin May 29, 2024
20c2cee
fix: bump up qiskit version
king-p3nguin May 29, 2024
1d56d0b
fix, lint: fix type annotation error for py38, fix lint
king-p3nguin May 29, 2024
30eab91
fix: fix cnot error
king-p3nguin May 29, 2024
0932418
fix: fix examples
king-p3nguin May 29, 2024
c0e7a8a
ci: update workflow
king-p3nguin May 29, 2024
66205f5
test: relax assertion threshold
king-p3nguin May 29, 2024
0167897
Merge branch 'dev' into bump-up-qiskit-version
01110011011101010110010001101111 May 29, 2024
9d544ca
Create QGAN.Py
AbdullahKazi500 May 31, 2024
b0d8446
Added QCBM algorithm with example
Gopal-Dahale May 31, 2024
428be4a
Remove unused imports
Gopal-Dahale May 31, 2024
4bd170c
Updated init py following best practices
Gopal-Dahale May 31, 2024
0b267cf
Add files via upload
AbdullahKazi500 Jun 1, 2024
3a04848
rm density matrix for now
01110011011101010110010001101111 Jun 2, 2024
92d3384
Merge branch 'unitary-hack' into bump-up-qiskit-version
01110011011101010110010001101111 Jun 2, 2024
b7c7f3c
Updated with argparse
Gopal-Dahale Jun 3, 2024
792b766
bump ibm runtime
01110011011101010110010001101111 Jun 5, 2024
b4e6b67
bump qiskit aer
01110011011101010110010001101111 Jun 5, 2024
191934d
[fix] revive paramnum
01110011011101010110010001101111 Jun 5, 2024
b4b2748
change: remove unnessesary cloning
king-p3nguin Jun 6, 2024
dc41acc
Added qcbm gaussian mixture notebook
Gopal-Dahale Jun 6, 2024
134b3c6
support parameter expression in qiskit2tq
king-p3nguin Jun 6, 2024
2ada759
fix tab
Gopal-Dahale Jun 6, 2024
d5ebf7a
fix tab
Gopal-Dahale Jun 6, 2024
f101036
fix spacing
Gopal-Dahale Jun 6, 2024
98b4f36
fix tab
Gopal-Dahale Jun 6, 2024
11e5029
fix tab
Gopal-Dahale Jun 6, 2024
15605d0
Update torchquantum/operator/standard_gates/qubit_unitary.py
king-p3nguin Jun 6, 2024
75955c6
black formatted
Gopal-Dahale Jun 6, 2024
86217e6
added QGan notebook
AbdullahKazi500 Jun 6, 2024
c4ab183
test: add test for qiskit2tq
king-p3nguin Jun 7, 2024
ee6834a
change: print
king-p3nguin Jun 7, 2024
120fc2a
change: remove comments
king-p3nguin Jun 10, 2024
16232df
Create QGan.py
AbdullahKazi500 Jun 11, 2024
7695fd7
Delete examples/Newfolder/QuantumGAN/README.md directory
AbdullahKazi500 Jun 11, 2024
363751b
Create QGan.py
AbdullahKazi500 Jun 11, 2024
aeb213d
Create Readme.md
AbdullahKazi500 Jun 11, 2024
ee7b5f7
Add files via upload
AbdullahKazi500 Jun 12, 2024
37d389b
Update Readme.md
AbdullahKazi500 Jun 12, 2024
945d47c
Add files via upload
AbdullahKazi500 Jun 12, 2024
ad384fa
Merge pull request #267 from king-p3nguin/bump-up-qiskit-version
Hanrui-Wang Jun 12, 2024
60ace1e
Merge pull request #271 from Gopal-Dahale/qcbm
Hanrui-Wang Jun 12, 2024
99b5a18
Merge branch 'unitary-hack' into qiskit2tq-parameterexpression
01110011011101010110010001101111 Jun 12, 2024
2ff1d61
Delete qgan_notebook.ipynb
AbdullahKazi500 Jun 12, 2024
016f598
Delete QGAN.Py
AbdullahKazi500 Jun 12, 2024
64919c2
fix: fix test
king-p3nguin Jun 12, 2024
968c21b
Create quantum_pulse_simulation.py
AbdullahKazi500 Jun 12, 2024
cd2f3d4
fix: fix type annotations
king-p3nguin Jun 12, 2024
0f0de3f
Delete torchQuantumpulse.ipynb
AbdullahKazi500 Jun 12, 2024
6f84504
Rename QGANtorch (2).ipynb to qgan_generated.ipynb
AbdullahKazi500 Jun 12, 2024
f8d2965
Rename QGANPng.png to qgan_generated.png
AbdullahKazi500 Jun 12, 2024
aa1869c
Rename QGANPng2.png to qgan_image.png
AbdullahKazi500 Jun 12, 2024
5fdbf39
Update QGan.py
AbdullahKazi500 Jun 12, 2024
9701b11
Rename Readme.md to README.md
AbdullahKazi500 Jun 12, 2024
79cb93c
Update README.md
AbdullahKazi500 Jun 12, 2024
c940a1e
Update README.md
AbdullahKazi500 Jun 12, 2024
d433bbe
Rename qgan_image.png to qgan_latent_dim.png
AbdullahKazi500 Jun 12, 2024
82cf184
Update quantum_pulse_simulation.py
AbdullahKazi500 Jun 12, 2024
91c80de
Merge pull request #275 from king-p3nguin/qiskit2tq-parameterexpression
01110011011101010110010001101111 Jun 13, 2024
eda17c1
Merge pull request #272 from AbdullahKazi500/AbdullahKazi500-patch-3
01110011011101010110010001101111 Jun 13, 2024
6b30997
Merge pull request #270 from AbdullahKazi500/AbdullahKazi500-patch-2
01110011011101010110010001101111 Jun 13, 2024
Updated with argparse
Gopal-Dahale committed Jun 3, 2024

This commit was signed with the committer's verified signature (ArcticLampyrid).
commit b7c7f3ca27b994b000eddb3411ed4c80334c2277
34 changes: 34 additions & 0 deletions examples/QCBM/README.md
@@ -1,7 +1,41 @@
# Quantum Circuit Born Machine
(Implementation by: [Gopal Ramesh Dahale](https://github.com/Gopal-Dahale))

Quantum Circuit Born Machine (QCBM) [1] is a generative modeling algorithm that uses the Born rule from quantum mechanics to sample from a quantum state $|\psi \rangle$ learned by training an ansatz $U(\theta)$ [1][2]. In this tutorial we show how `torchquantum` can be used to model a Gaussian mixture with a QCBM.
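Concretely, sampling follows the Born rule: measuring the trained state in the computational basis draws a bitstring $x$ with probability

$$
p_\theta(x) = \left| \langle x \mid \psi(\theta) \rangle \right|^2,
\qquad |\psi(\theta)\rangle = U(\theta)\,|0\rangle^{\otimes n},
$$

and training adjusts $\theta$ to bring $p_\theta$ close to the target distribution (here, via an MMD loss).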

## Setup

Below is the usage of `qcbm_gaussian_mixture.py` which can be obtained by running `python qcbm_gaussian_mixture.py -h`.

```
usage: qcbm_gaussian_mixture.py [-h] [--n_wires N_WIRES] [--epochs EPOCHS] [--n_blocks N_BLOCKS] [--n_layers_per_block N_LAYERS_PER_BLOCK] [--plot] [--optimizer OPTIMIZER] [--lr LR]

options:
  -h, --help            show this help message and exit
  --n_wires N_WIRES     Number of wires used in the circuit
  --epochs EPOCHS       Number of training epochs
  --n_blocks N_BLOCKS   Number of blocks in ansatz
  --n_layers_per_block N_LAYERS_PER_BLOCK
                        Number of layers per block in ansatz
  --plot                Visualize the predicted probability distribution
  --optimizer OPTIMIZER
                        optimizer class from torch.optim
  --lr LR
```

For example:

```
python qcbm_gaussian_mixture.py --plot --epochs 100 --optimizer RMSprop --lr 0.01 --n_blocks 6 --n_layers_per_block 2 --n_wires 6
```

Using the command above gives an output similar to the plot below.

<p align="center">
<img src='./assets/sample_output.png' width=500 alt='sample output of QCBM'>
</p>


## References

1. Liu, Jin-Guo, and Lei Wang. “Differentiable learning of quantum circuit born machines.” Physical Review A 98.6 (2018): 062324.
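The training script pairs the QCBM with an MMD loss over a radial-basis-function kernel. As a rough standalone illustration of that objective (plain NumPy; an assumed textbook form, not torchquantum's `MMDLoss` implementation), the squared MMD between two probability vectors over the same discrete sample space is $(p-q)^\top K (p-q)$:

```python
import numpy as np

def rbf_kernel(space: np.ndarray, bandwidths: np.ndarray) -> np.ndarray:
    # K[i, j] = sum over bandwidths b of exp(-|x_i - x_j|^2 / (2 b))
    sq_dists = (space[:, None] - space[None, :]) ** 2
    return sum(np.exp(-sq_dists / (2.0 * b)) for b in bandwidths)

def mmd_sq(p: np.ndarray, q: np.ndarray, kernel: np.ndarray) -> float:
    # Squared MMD between probability vectors p and q: (p - q)^T K (p - q)
    d = p - q
    return float(d @ kernel @ d)

space = np.arange(8, dtype=float)
K = rbf_kernel(space, np.array([0.25, 60.0]))  # bandwidths as in the script
p = np.full(8, 1 / 8)          # uniform distribution
q = np.zeros(8); q[0] = 1.0    # point mass
print(mmd_sq(p, p, K))  # → 0.0 for identical distributions
print(mmd_sq(p, q, K) > 0)
```

Since the RBF kernel matrix is positive definite, the loss is zero exactly when the two distributions match, which is what makes it a usable training objective.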
Binary file added examples/QCBM/assets/sample_output.png
171 changes: 120 additions & 51 deletions examples/QCBM/qcbm_gaussian_mixture.py
@@ -3,58 +3,127 @@
import torch
from torchquantum.algorithm import QCBM, MMDLoss
import torchquantum as tq
import argparse
import os
from pprint import pprint


# Reproducibility
def set_seed(seed: int = 42) -> None:
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    # When running on the CuDNN backend, two further options must be set
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Set a fixed value for the hash seed
    os.environ["PYTHONHASHSEED"] = str(seed)
    print(f"Random seed set as {seed}")
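The helper above pins every RNG the example touches. The effect is easy to check in plain NumPy (torch seeding behaves analogously):

```python
import numpy as np

def draws(seed: int) -> np.ndarray:
    # Re-seeding the global RNG restarts the sample sequence
    np.random.seed(seed)
    return np.random.rand(5)

a = draws(42)
b = draws(42)
c = draws(7)
print(np.array_equal(a, b))  # → True: same seed, same sequence
print(np.array_equal(a, c))  # different seed, different sequence
```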


def _setup_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--n_wires", type=int, default=6, help="Number of wires used in the circuit"
    )
    parser.add_argument(
        "--epochs", type=int, default=10, help="Number of training epochs"
    )
    parser.add_argument(
        "--n_blocks", type=int, default=6, help="Number of blocks in ansatz"
    )
    parser.add_argument(
        "--n_layers_per_block",
        type=int,
        default=1,
        help="Number of layers per block in ansatz",
    )
    parser.add_argument(
        "--plot",
        action="store_true",
        help="Visualize the predicted probability distribution",
    )
    parser.add_argument(
        "--optimizer", type=str, default="Adam", help="optimizer class from torch.optim"
    )
    parser.add_argument("--lr", type=float, default=1e-2)
    return parser


# Function to create a gaussian mixture
def gaussian_mixture_pdf(x, mus, sigmas):
    mus, sigmas = np.array(mus), np.array(sigmas)
    vars = sigmas**2
    values = [
        (1 / np.sqrt(2 * np.pi * v)) * np.exp(-((x - m) ** 2) / (2 * v))
        for m, v in zip(mus, vars)
    ]
    values = np.sum([val / sum(val) for val in values], axis=0)
    return values / np.sum(values)
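The final normalization makes the returned vector a valid probability distribution. A quick standalone sanity check (NumPy only, using a copy of the function body):

```python
import numpy as np

def gaussian_mixture_pdf(x, mus, sigmas):
    mus, sigmas = np.array(mus), np.array(sigmas)
    vars = sigmas**2
    values = [
        (1 / np.sqrt(2 * np.pi * v)) * np.exp(-((x - m) ** 2) / (2 * v))
        for m, v in zip(mus, vars)
    ]
    values = np.sum([val / sum(val) for val in values], axis=0)
    return values / np.sum(values)

# Same setup as the script with n_wires = 6: 64 bins, modes at 2/8 and 5/8 of x_max
x = np.arange(64)
probs = gaussian_mixture_pdf(x, [16.0, 40.0], [6.4, 6.4])
print(probs.sum())        # sums to 1: a valid probability distribution
print(bool(probs.min() >= 0))
```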

# Create a gaussian mixture
n_wires = 6
x_max = 2**n_wires
x_input = np.arange(x_max)
mus = [(2 / 8) * x_max, (5 / 8) * x_max]
sigmas = [x_max / 10] * 2
data = gaussian_mixture_pdf(x_input, mus, sigmas)

# This is the target distribution that the QCBM will learn
target_probs = torch.tensor(data, dtype=torch.float32)

# Ansatz
layers = tq.RXYZCXLayer0({"n_blocks": 6, "n_wires": n_wires, "n_layers_per_block": 1})

qcbm = QCBM(n_wires, layers)

# To train QCBMs, we use MMDLoss with radial basis function kernel.
bandwidth = torch.tensor([0.25, 60])
space = torch.arange(2**n_wires)
mmd = MMDLoss(bandwidth, space)

# Optimization
optimizer = torch.optim.Adam(qcbm.parameters(), lr=0.01)
for i in range(100):
    optimizer.zero_grad(set_to_none=True)
    pred_probs = qcbm()
    loss = mmd(pred_probs, target_probs)
    loss.backward()
    optimizer.step()
    print(i, loss.item())

# Visualize the results
with torch.no_grad():
    pred_probs = qcbm()

plt.plot(x_input, target_probs, linestyle="-.", label=r"$\pi(x)$")
plt.bar(x_input, pred_probs, color="green", alpha=0.5, label="samples")
plt.xlabel("Samples")
plt.ylabel("Prob. Distribution")

plt.legend()
plt.show()


def main():
    set_seed()
    parser = _setup_parser()
    args = parser.parse_args()

    print("Configuration:")
    pprint(vars(args))

    # Create a gaussian mixture
    n_wires = args.n_wires
    assert n_wires >= 1, "Number of wires must be at least 1"

    x_max = 2**n_wires
    x_input = np.arange(x_max)
    mus = [(2 / 8) * x_max, (5 / 8) * x_max]
    sigmas = [x_max / 10] * 2
    data = gaussian_mixture_pdf(x_input, mus, sigmas)

    # This is the target distribution that the QCBM will learn
    target_probs = torch.tensor(data, dtype=torch.float32)

    # Ansatz
    layers = tq.RXYZCXLayer0(
        {
            "n_blocks": args.n_blocks,
            "n_wires": n_wires,
            "n_layers_per_block": args.n_layers_per_block,
        }
    )

    qcbm = QCBM(n_wires, layers)

    # To train QCBMs, we use MMDLoss with radial basis function kernel.
    bandwidth = torch.tensor([0.25, 60])
    space = torch.arange(2**n_wires)
    mmd = MMDLoss(bandwidth, space)

    # Optimization
    optimizer_class = getattr(torch.optim, args.optimizer)
    optimizer = optimizer_class(qcbm.parameters(), lr=args.lr)

    for i in range(args.epochs):
        optimizer.zero_grad(set_to_none=True)
        pred_probs = qcbm()
        loss = mmd(pred_probs, target_probs)
        loss.backward()
        optimizer.step()
        print(i, loss.item())

    # Visualize the results
    if args.plot:
        with torch.no_grad():
            pred_probs = qcbm()

        plt.plot(x_input, target_probs, linestyle="-.", label=r"$\pi(x)$")
        plt.bar(x_input, pred_probs, color="green", alpha=0.5, label="samples")
        plt.xlabel("Samples")
        plt.ylabel("Prob. Distribution")

        plt.legend()
        plt.show()


if __name__ == "__main__":
    main()
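The `_setup_parser` flags can be exercised without a command line by passing an explicit argv list to `parse_args`. A standalone sketch (stdlib only, mirroring the flags and defaults shown in the diff above):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the options documented in examples/QCBM/README.md
    parser = argparse.ArgumentParser(prog="qcbm_gaussian_mixture.py")
    parser.add_argument("--n_wires", type=int, default=6)
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--n_blocks", type=int, default=6)
    parser.add_argument("--n_layers_per_block", type=int, default=1)
    parser.add_argument("--plot", action="store_true")
    parser.add_argument("--optimizer", type=str, default="Adam")
    parser.add_argument("--lr", type=float, default=1e-2)
    return parser

# Parsing an explicit list instead of sys.argv makes the CLI easy to test
args = build_parser().parse_args(
    ["--epochs", "100", "--optimizer", "RMSprop", "--plot"]
)
print(args.epochs, args.optimizer, args.plot, args.n_wires)  # → 100 RMSprop True 6
```

Resolving the optimizer by name then amounts to `getattr(torch.optim, args.optimizer)`, which raises `AttributeError` for a name `torch.optim` does not export.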