For comparison with other methods, we can examine both the path that these metrics follow and their final values.
Algorithms
- Quantum Adiabatic Algorithm (QAA): the standard implementation of the quantum adiabatic algorithm. For the 2D benchmark, the system is simulated with a leapfrog integrator for the same simulated time as the pseudospectral evolution of QHD. For the QP benchmark, the QAA annealing is carried out on the D-Wave Advantage 6.1 device. See also https://www.cs.umd.edu/~amchilds/qa/qa.pdf#chapter.29
- (Stochastic) Gradient Descent (SGD): the objective functions are continuous and differentiable, so exact gradient information is available. We numerically simulate SGD on the objective functions via its stochastic differential equation (SDE) formulation.
- Nesterov's Accelerated Gradient Descent (NAGD): we use the standard form of NAGD.
- Interior Point OPTimizer (IPOPT, from COIN-OR): uses an interior-point method to search within the defined bounds.
- Sparse Nonlinear OPTimizer (SNOPT, from the Center for Computational Mathematics at UCSD): uses a sparse sequential quadratic programming (SQP) algorithm, suitable for linear, nonlinear, and non-convex problems.
-
MATLAB
fmincon
: MATLAB’s nonlinear programming solver. We use it in its sequential quadratic programming mode. -
Truncated Newton Method: Offered as a method built in to SciPy as
scipy.optimize.minimize(method='TNC')
. -
Quadratically Constrained Quadratic Programming (QCQP): https://stanford.edu/~boyd/papers/qcqp.html
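As a minimal sketch of the SDE formulation of SGD mentioned above (not the benchmark's actual simulation code), the dynamics dX_t = -∇f(X_t) dt + σ dW_t can be discretized with an Euler-Maruyama scheme; the step size, noise level, and quadratic test function below are illustrative assumptions:

```python
import numpy as np

def sgd_sde(grad, x0, eta=1e-3, sigma=0.05, n_steps=10_000, seed=0):
    """Euler-Maruyama discretization of the SGD SDE
    dX_t = -grad f(X_t) dt + sigma dW_t with step size eta."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # drift (gradient step) plus diffusion scaled by sqrt(eta)
        x = x - eta * grad(x) + sigma * np.sqrt(eta) * noise
    return x

# Illustrative objective: f(x) = ||x||^2 / 2, so grad f(x) = x
x_min = sgd_sde(lambda x: x, x0=[2.0, -1.5])
```

With a small σ, the iterates concentrate near the minimizer at the origin, fluctuating at a scale set by the noise level.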
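The standard form of NAGD referenced above can be sketched as follows; the step size, iteration count, and test function are illustrative assumptions, not the benchmark's settings:

```python
import numpy as np

def nagd(grad, x0, lr=0.1, n_steps=500):
    """Standard Nesterov accelerated gradient descent for a smooth
    convex objective: gradient steps at a lookahead point y, with
    momentum weights generated by the lambda_k recursion."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    lam = 1.0
    for _ in range(n_steps):
        lam_next = (1 + np.sqrt(1 + 4 * lam**2)) / 2
        gamma = (lam - 1) / lam_next  # momentum coefficient
        x_next = y - lr * grad(y)     # gradient step at lookahead point
        y = x_next + gamma * (x_next - x)
        x, lam = x_next, lam_next
    return x

# Illustrative objective: f(x) = ||x||^2 / 2, so grad f(x) = x
x_min = nagd(lambda x: x, x0=[3.0, -2.0])
```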
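For the truncated Newton method, a minimal usage example of the SciPy entry point named above is shown below; the Rosenbrock objective, starting point, and bounds are illustrative assumptions rather than the benchmark's actual problem instances:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """2D Rosenbrock function, minimized at (1, 1)."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    """Exact gradient of the Rosenbrock function."""
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

# TNC supports box bounds and benefits from an exact gradient (jac)
res = minimize(rosenbrock, x0=[-1.2, 1.0], method='TNC',
               jac=rosenbrock_grad, bounds=[(-2, 2), (-2, 2)],
               options={'maxfun': 10_000})
```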