Optimization with QAOA
In this tutorial, we will explore how to use the Quantum Approximate Optimization Algorithm (QAOA) to solve a simple optimization problem using QiliSDK.
Note
If you haven’t already, it might be useful to check out these tutorials first: Quantum Basics, Quantum Circuits and Quantum Annealing.
The Problem
Say we have four people and we need to form two teams of two people each. We have some information about how well each pair of people work together, and we want to form the teams such that the total “compatibility” of the teams is maximized. The information about who likes whom is as follows:
|       | Alice | Bob | Carol | Dave |
|-------|-------|-----|-------|------|
| Alice | N/A   | 1   | 3     | 4    |
| Bob   | 1     | N/A | 5     | 2    |
| Carol | 3     | 5   | N/A   | 6    |
| Dave  | 4     | 2   | 6     | N/A  |
So this means that Alice would prefer to work with Dave, while Bob would prefer to work with Carol, and so on.
Mathematically, we can represent our two teams as binary variables, where \(x_i\) is 0 if person \(i\) is in team 0, and 1 if they are in team 1. The compatibility of team 0 can then be expressed as:

\[C_0 = \sum_{i < j} p_{ij}\,(1 - x_i)(1 - x_j)\]
Where \(p_{ij}\) is the compatibility score between person \(i\) and person \(j\). Note that each term is non-zero only if both \(x_i\) and \(x_j\) are 0, which means that both people are in team 0. Meanwhile, the compatibility of team 1 can be expressed as:

\[C_1 = \sum_{i < j} p_{ij}\, x_i x_j\]
Thus we want to maximize the total compatibility \(C = C_0 + C_1\), subject to the constraint that team 1 (and thus team 0) has exactly two people:

\[\sum_{i} x_i = 2\]
One issue with this formulation is that it has a constraint, which makes it more difficult to solve using quantum optimization algorithms. To fix this, we can use some techniques to turn it into a QUBO (Quadratic Unconstrained Binary Optimization) problem. In this case, since we have a single equality constraint, we can add a squared version of it to the objective function as a penalty term, such that the penalty is zero when the constraint is satisfied and positive otherwise. For more details on this process, check out the QUBO reference. Now we can assume that the problem has been reformulated into the following unconstrained optimization problem:

\[\max_{x \in \{0,1\}^n} \; \sum_{i \le j} c_{ij}\, x_i x_j\]
Where \(c_{ij}\) are coefficients that encode the objective function as well as the constraint.
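To make the penalty reformulation concrete, here is a plain-Python sketch (independent of QiliSDK; the function names are our own) that brute-forces the penalized objective over all \(2^4\) assignments, using the compatibility table above and a penalty weight of 10:

```python
from itertools import product

# Pairwise compatibility scores from the table above (0 on the diagonal).
p = [[0, 1, 3, 4],
     [1, 0, 5, 2],
     [3, 5, 0, 6],
     [4, 2, 6, 0]]
n = 4
penalty_weight = 10  # must outweigh the objective terms it competes with

def penalized_objective(x):
    # Compatibility within team 1: counted when both x_i and x_j are 1.
    c1 = sum(p[i][j] * x[i] * x[j]
             for i in range(n) for j in range(i + 1, n))
    # Compatibility within team 0: counted when both x_i and x_j are 0.
    c0 = sum(p[i][j] * (1 - x[i]) * (1 - x[j])
             for i in range(n) for j in range(i + 1, n))
    # Squared equality constraint: zero exactly when team 1 has two members.
    penalty = penalty_weight * (sum(x) - 2) ** 2
    return c0 + c1 - penalty

best = max(product([0, 1], repeat=n), key=penalized_objective)
print(best, penalized_objective(best))  # -> (0, 1, 1, 0) with value 9
```

Expanding the squared penalty and collecting terms (using \(x_i^2 = x_i\)) is exactly what produces the combined coefficients \(c_{ij}\).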
The Solution
Now that we have our problem formulated as a QUBO, we can use QAOA to find an approximate solution to it. QAOA is a hybrid quantum-classical algorithm that uses a parameterized quantum circuit to find approximate solutions to combinatorial optimization problems. It works by alternating between applying a problem Hamiltonian (which encodes the optimization problem) and a mixing Hamiltonian (which helps explore the solution space), with the parameters of the circuit being optimized using a classical optimization algorithm. Think of it like doing quantum annealing, except it’s a digital version that switches between Hamiltonians, rather than an analog algorithm that evolves a generic time-varying Hamiltonian.
Assuming we have reformulated our problem into a QUBO, we can construct the problem Hamiltonian for QAOA by substituting each binary variable \(x_i\) with the operator \(\frac{1 - Z(i)}{2}\), whose eigenvalues are exactly 0 and 1. This gives:

\[H_P = \sum_{i \le j} c_{ij}\, \frac{1 - Z(i)}{2} \cdot \frac{1 - Z(j)}{2}\]
Where \(Z(i)\) is the Pauli-Z operator acting on qubit \(i\), and \(c_{ij}\) are the coefficients from our QUBO formulation.
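To see why this substitution works, here is a small NumPy check (illustrative only, not part of QiliSDK) that the operator \(\frac{1}{4}(1 - Z(i))(1 - Z(j))\) picks out exactly the basis states with \(x_i = x_j = 1\):

```python
import numpy as np

Z = np.diag([1.0, -1.0])  # Pauli-Z: |0> -> +1, |1> -> -1
I = np.eye(2)

def z_on(i, n):
    """Pauli-Z acting on qubit i of an n-qubit register."""
    ops = [Z if k == i else I for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# A single QUBO term c_ij * x_i * x_j becomes
# c_ij/4 * (I - Z(i)) (I - Z(j)); check it for the qubit pair (0, 1):
n = 2
H_term = 0.25 * (np.eye(4) - z_on(0, n)) @ (np.eye(4) - z_on(1, n))
# The result is diagonal, with entry 1 only for the basis state |11>.
print(np.diag(H_term))  # -> [0. 0. 0. 1.]
```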
Our mixing Hamiltonian is typically chosen to be the transverse-field Hamiltonian, which is given by:

\[H_M = -\sum_{i} X(i)\]

Where \(X(i)\) is the Pauli-X operator acting on qubit \(i\).
Since these are Hamiltonians (rather than unitary gates), we need to exponentiate them into unitaries, whose action is controlled by the parameters of our circuit:

\[U_P(\gamma) = e^{-i \gamma H_P}, \qquad U_M(\beta) = e^{-i \beta H_M}\]
Where \(\gamma\) and \(\beta\) are the parameters of our circuit that we will optimize over.
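As a quick sanity check on these definitions, the following NumPy snippet (illustrative only) builds \(e^{-i\beta X}\) for a single qubit via eigendecomposition and verifies the closed form \(\cos\beta\, I - i \sin\beta\, X\):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli-X
beta = 0.3

# Exponentiate the Hermitian X via its eigendecomposition:
# e^{-i b X} = V diag(e^{-i b lambda}) V^dagger
evals, evecs = np.linalg.eigh(X)
U = evecs @ np.diag(np.exp(-1j * beta * evals)) @ evecs.conj().T

# Because X squares to the identity, the exponential has a simple closed form.
closed_form = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
print(np.allclose(U, closed_form))  # -> True
```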
We then apply these unitaries in alternation, starting with the problem unitary and ending with the mixing unitary. We should also start in the ground state of the mixing Hamiltonian, which in this case is the equal superposition state. As a circuit, this looks like the following:
Here we have used only two repetitions (layers), but we could use more or fewer depending on the problem.
The parameters \(\gamma\) and \(\beta\) are then optimized using a classical optimization algorithm, with the objective of minimizing the expectation value of the problem Hamiltonian. What this means is that we execute the quantum circuit, measure the output, and then use the results to adjust the parameters to give a better solution.
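To make the full loop concrete, here is a self-contained NumPy simulation of a single-layer QAOA for our team-formation problem (illustrative only: it replaces the classical optimizer with a crude grid search, uses one mixer sign convention, and all names are our own):

```python
import numpy as np

# Pairwise compatibility scores from the table above.
p = [[0, 1, 3, 4],
     [1, 0, 5, 2],
     [3, 5, 0, 6],
     [4, 2, 6, 0]]
n, penalty_weight = 4, 10

def cost(bits):
    # Negated penalized compatibility, so that QAOA *minimizes* it.
    c1 = sum(p[i][j] * bits[i] * bits[j]
             for i in range(n) for j in range(i + 1, n))
    c0 = sum(p[i][j] * (1 - bits[i]) * (1 - bits[j])
             for i in range(n) for j in range(i + 1, n))
    return -(c0 + c1 - penalty_weight * (sum(bits) - 2) ** 2)

# The problem Hamiltonian is diagonal in the computational basis.
diag = np.array([cost([(k >> (n - 1 - i)) & 1 for i in range(n)])
                 for k in range(2 ** n)], dtype=float)

def apply_mixer(state, beta):
    # Apply e^{-i beta X} = cos(beta) I - i sin(beta) X to every qubit.
    c, s = np.cos(beta), -1j * np.sin(beta)
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(psi, q, 0)
        psi = np.stack([c * psi[0] + s * psi[1],
                        s * psi[0] + c * psi[1]])
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def qaoa_state(gamma, beta):
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+...+>
    state = np.exp(-1j * gamma * diag) * state                   # problem unitary
    return apply_mixer(state, beta)                              # mixing unitary

def expectation(gamma, beta):
    s = qaoa_state(gamma, beta)
    return float(np.real(np.vdot(s, diag * s)))

# A crude grid search stands in for the classical optimizer.
grid = np.linspace(0.0, np.pi, 25)
best_params = min(((g, b) for g in grid for b in grid),
                  key=lambda gb: expectation(*gb))
probs = np.abs(qaoa_state(*best_params)) ** 2
print("best <H_P>:", expectation(*best_params))
```

Even this one-layer toy version skews the output distribution toward low-cost (high-compatibility) bitstrings relative to the uniform starting state.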
The Implementation
In QiliSDK, a variety of classes are available to help streamline the implementation.
Starting with the problem Hamiltonian, we can use the QUBO class to represent our QUBO problem,
and then use the QAOA class to construct the QAOA circuit and perform the optimization.
To create our model:
```python
from qilisdk.core.model import QUBO, ObjectiveSense
from qilisdk.core.variables import BinaryVariable, EQ

num_people = 4
vars = [BinaryVariable(f"x{i}") for i in range(num_people)]

preferences = [[0, 1, 3, 4],
               [1, 0, 5, 2],
               [3, 5, 0, 6],
               [4, 2, 6, 0]]

model = QUBO("team_formation_example")

team_1 = sum(
    preferences[i][j] * vars[i] * vars[j]
    for i in range(num_people)
    for j in range(i + 1, num_people)
)
team_0 = sum(
    preferences[i][j] * (1 - vars[i]) * (1 - vars[j])
    for i in range(num_people)
    for j in range(i + 1, num_people)
)

model.set_objective(
    team_0 + team_1,
    label="obj",
    sense=ObjectiveSense.MAXIMIZE,
)
model.add_constraint(
    "team_size_constraint",
    EQ(sum(vars[i] for i in range(num_people)), 2),
    lagrange_multiplier=10,
)
```
Then we use the model to form our QAOA ansatz (the circuit we’re going to optimize):
```python
from qilisdk.digital import QAOA

problem_hamiltonian = model.to_hamiltonian()
ansatz = QAOA(
    problem_hamiltonian=problem_hamiltonian,
    layers=2,
)
```
Then we construct a variational problem (combining a classical optimizer and a quantum circuit) and simulate it:
```python
from qilisdk.functionals.variational_program import VariationalProgram
from qilisdk.functionals import DigitalPropagation
from qilisdk.optimizers.scipy_optimizer import SciPyOptimizer
from qilisdk.cost_functions.observable_cost_function import ObservableCostFunction
from qilisdk.readout import Readout
from qilisdk.backends import QiliSim

vqa = VariationalProgram(
    functional=DigitalPropagation(ansatz),
    optimizer=SciPyOptimizer(method="cobyla", tol=1e-7),
    cost_function=ObservableCostFunction(problem_hamiltonian),
)

backend = QiliSim()
result = backend.execute(vqa, readout=Readout().with_sampling(nshots=1000))
print("VQA Result:", result)
```
This gives something like the following result:
```
samples={
    '0000': 8,
    '0001': 31,
    '0010': 11,
    '0011': 42,
    '0100': 13,
    '0101': 200,
    '0110': 226,
    '0111': 18,
    '1000': 16,
    '1001': 173,
    '1010': 131,
    '1011': 2,
    '1100': 56,
    '1101': 18,
    '1110': 16,
    '1111': 39
}
```
As you can see from these results, the most common samples are those that satisfy the constraint (i.e. have exactly two 1s), and among those, the most frequent are the ones with a high objective value.
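Decoding the winning bitstring back into teams makes the result easy to read. A short snippet, using the four most frequent sample counts from the run above (your counts will differ from run to run) and assuming the leftmost bit corresponds to \(x_0\), i.e. Alice:

```python
people = ["Alice", "Bob", "Carol", "Dave"]
# The four most frequent samples from the example run above.
counts = {"0110": 226, "0101": 200, "1001": 173, "1010": 131}

best_bits = max(counts, key=counts.get)
# Bit i of the string assigns person i to team 0 ('0') or team 1 ('1').
team_0 = [people[i] for i, bit in enumerate(best_bits) if bit == "0"]
team_1 = [people[i] for i, bit in enumerate(best_bits) if bit == "1"]
print(team_0, team_1)  # -> ['Alice', 'Dave'] ['Bob', 'Carol']
```

This recovers the pairing the compatibility table suggests: Alice with Dave (score 4) and Bob with Carol (score 5), for a total of 9.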