Replies: 2 comments
- Overall, this makes sense to me. Some comments:
- To your point 1, definitely agreed that serializing the job information to disk is important.
Wanted to start a bit of discussion about what the underlying architecture of metriq-gym should be. This will help us as we scale out (1) adding new benchmarks and (2) adding new backends. We'll want to abstract over both of those things while keeping in mind that lots of benchmark execution on hardware is going to be async.
Some ideas to kick start discussion. This is a bit stream of consciousness so likely can be much improved by further discussion.
Goal:
metriq-gym is designed to reliably run several benchmark workloads on several quantum computing backends. Each benchmark workload returns output data as well as a score (a float). Currently these objects are in the package; I'll list my understanding of their intention.
Abstracting async services
Doing things async means that BenchProvider should be an object that helps us retrieve data from that cloud platform through some standard interface, e.g.
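A hedged sketch of what such an interface could look like. Every name here (BenchProvider, JobStatus, submit/status/result) is an assumption for discussion, not the actual metriq-gym API; InMemoryProvider is a toy subclass just to show the shape.

```python
# Hypothetical sketch of a BenchProvider interface abstracting the async
# part of one cloud service; all names are assumptions for illustration.
from abc import ABC, abstractmethod
from enum import Enum


class JobStatus(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"


class BenchProvider(ABC):
    """One cloud platform's async job lifecycle behind a uniform interface."""

    @abstractmethod
    def submit(self, program: str, shots: int) -> str:
        """Submit an OpenQASM program; return a provider-side job id."""

    @abstractmethod
    def status(self, job_id: str) -> JobStatus:
        """Poll the provider without blocking on the result."""

    @abstractmethod
    def result(self, job_id: str) -> dict[str, int]:
        """Fetch measurement counts once status() reports DONE."""


class InMemoryProvider(BenchProvider):
    """Toy provider whose jobs 'complete' instantly, for testing the shape."""

    def __init__(self) -> None:
        self._jobs: dict[str, int] = {}

    def submit(self, program: str, shots: int) -> str:
        job_id = f"job-{len(self._jobs)}"
        self._jobs[job_id] = shots
        return job_id

    def status(self, job_id: str) -> JobStatus:
        return JobStatus.DONE

    def result(self, job_id: str) -> dict[str, int]:
        # All shots land in the "0" bucket in this toy implementation.
        return {"0": self._jobs[job_id]}
```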
or something like that. This way BenchProviders abstract the async service part of things.
We also need a few function types that abstract over the actual execution of quantum programs. A few to keep in mind:
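Possible signatures for those function types, sketched as Python type aliases. The names (Executor, Sampler, Counter) and the convention of passing an OpenQASM string plus a shot count are assumptions; the adapter at the end just illustrates that the three are interconvertible in one direction.

```python
# Hypothetical type aliases for the execution function types; names and
# signatures are assumptions for discussion.
from typing import Callable

# Runs an OpenQASM program for `shots` shots; returns per-shot bitstrings.
Executor = Callable[[str, int], list[str]]

# Returns an estimated probability distribution over bitstrings.
Sampler = Callable[[str, int], dict[str, float]]

# Returns aggregated measurement counts per bitstring.
Counter = Callable[[str, int], dict[str, int]]


def counts_to_probs(counts: dict[str, int]) -> dict[str, float]:
    """Adapter: any Counter result can be normalized into a Sampler result."""
    total = sum(counts.values())
    return {bits: n / total for bits, n in counts.items()}
```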
Here I have chosen OpenQASM as the standard format. I'm of course open to other approaches, but since it seems most everyone can convert into and out of OpenQASM, perhaps this is the best format.
Actual Benchmarks
It would seem to make sense that each BenchJobType object corresponds to a benchmark workload and can produce a score when given an executor, sampler, or counter. Which of those is appropriate would, I imagine, depend on the benchmark. I'm imagining something like this:
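A hedged sketch of that shape, with the run()/score() contract mentioned below. BenchJobType, the Counter alias, and the toy ZeroFidelityBench are all assumed names for illustration, not real metriq-gym code.

```python
# Hypothetical shape for a backend-independent benchmark workload; names
# and signatures are assumptions sketched from the discussion.
from abc import ABC, abstractmethod
from typing import Any, Callable

Counter = Callable[[str, int], dict[str, int]]  # assumed counter type


class BenchJobType(ABC):
    """One benchmark workload, independent of any particular backend."""

    @abstractmethod
    def run(self, counter: Counter) -> dict[str, Any]:
        """Execute the workload via the supplied function; return raw data."""

    @abstractmethod
    def score(self, data: dict[str, Any]) -> float:
        """Reduce the raw output data to a single float score."""


class ZeroFidelityBench(BenchJobType):
    """Toy benchmark: fraction of shots returning the all-zeros bitstring."""

    def run(self, counter: Counter) -> dict[str, Any]:
        return {"counts": counter("OPENQASM 3.0; qubit[2] q;", 1000)}

    def score(self, data: dict[str, Any]) -> float:
        counts = data["counts"]
        return counts.get("00", 0) / sum(counts.values())
```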
Right now the clops_benchmark object is linked directly to Qiskit. The idea is that CLOPS (and other BenchJobType objects) would be backend independent, requiring only an executor or sampler to be passed in. The goal behind implementing this is to make extension easy.
Extending to other backends
If you are adding a new quantum computer stack, you just need to supply executor, sampler, or counter functions.
If you are adding a new cloud service, you need to write a new BenchProvider object. Switching BenchProvider doesn't necessarily mean you need to switch executors.
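One way to see why the two extension points stay decoupled: a single generic counter can be built on top of any BenchProvider, so adding a new cloud service doesn't force a new executor/counter implementation. Everything here (ProviderLike, make_counter, EchoProvider) is an assumed name, and the polling loop is deliberately naive.

```python
# Hypothetical adapter turning any provider-like object into a counter
# function; all names are assumptions for illustration.
import time
from typing import Callable, Protocol


class ProviderLike(Protocol):
    def submit(self, program: str, shots: int) -> str: ...
    def status(self, job_id: str) -> str: ...
    def result(self, job_id: str) -> dict[str, int]: ...


def make_counter(
    provider: ProviderLike, poll_s: float = 1.0
) -> Callable[[str, int], dict[str, int]]:
    def counter(program: str, shots: int) -> dict[str, int]:
        job_id = provider.submit(program, shots)
        # Naive polling; real code would use backoff or provider callbacks.
        while provider.status(job_id) not in ("done", "failed"):
            time.sleep(poll_s)
        return provider.result(job_id)

    return counter


class EchoProvider:
    """Toy provider whose jobs finish immediately."""

    def submit(self, program: str, shots: int) -> str:
        self._shots = shots
        return "j0"

    def status(self, job_id: str) -> str:
        return "done"

    def result(self, job_id: str) -> dict[str, int]:
        return {"0": self._shots}
```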
Extending to other benchmarks
To add a new benchmark you need only to define a new BenchJobType that follows the abstract schema having a .run() and .score() function of the right type.
Running over backends and benchjobs
Ideally we can then have something like:
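A hedged sketch of what that top-level loop could look like, using toy stand-ins (FakeProvider and FakeBench are illustrative, not real metriq-gym classes):

```python
# Hypothetical driver loop over providers and benchmark jobs; all names
# here are assumptions for illustration.
from dataclasses import dataclass
from typing import Any, Callable

Counter = Callable[[str, int], dict[str, int]]


@dataclass
class FakeProvider:
    name: str

    def counter(self, qasm: str, shots: int) -> dict[str, int]:
        # Stand-in for a real cloud call: every shot returns "00".
        return {"00": shots}


class FakeBench:
    """Toy benchmark with the assumed run()/score() shape."""

    def run(self, counter: Counter) -> dict[str, Any]:
        return {"counts": counter("OPENQASM 3.0;", 100)}

    def score(self, data: dict[str, Any]) -> float:
        counts = data["counts"]
        return counts.get("00", 0) / sum(counts.values())


providers = [FakeProvider("sim-a"), FakeProvider("sim-b")]
jobs = [FakeBench()]

scores = []
for provider in providers:
    for job in jobs:
        data = job.run(provider.counter)
        scores.append((provider.name, type(job).__name__, job.score(data)))
```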
Then the scores list can be parsed to output the results in different ways.
Thoughts? What can be improved/added here? Are there other directions you had in mind to do these abstraction layers? @cosenal @vprusso @WrathfulSpatula