Krylov quantum diagonalization
In this lesson on Krylov quantum diagonalization (KQD) we will answer the following:
- What is the Krylov method, generally?
- Why does the Krylov method work and under what conditions?
- How does quantum computing play a role?
The quantum part of the calculation is based largely on work in Ref. [1].
The video below gives an overview of Krylov methods in classical computing, motivates their use, and explains how quantum computing can play a role in that workflow. The subsequent text offers more detail and implements a Krylov method both classically and on a quantum computer.
1. Introduction to Krylov methods
A Krylov subspace method can refer to any of several methods built around what is called the Krylov subspace. A complete review of these is beyond the scope of this lesson, but Refs. [2-4] give substantially more background. Here, we will focus on what a Krylov subspace is, how and why it is useful in solving eigenvalue problems, and finally how it can be implemented on a quantum computer.

Definition: Given a symmetric, positive semi-definite matrix $A$, the Krylov space of order $r$ is the space spanned by the vectors obtained by multiplying increasing powers of $A$, up to $A^{r-1}$, with a reference vector $|\psi_0\rangle$:

$$\mathcal{K}_r = \operatorname{span}\left\{|\psi_0\rangle,\; A|\psi_0\rangle,\; A^2|\psi_0\rangle,\; \dots,\; A^{r-1}|\psi_0\rangle\right\}.$$
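As a concrete illustration, the sketch below builds the raw Krylov vectors for a small symmetric matrix. The particular matrix and the choice of reference vector $|\psi_0\rangle = (1, 0, 0)^T$ are illustrative assumptions, not prescribed by the definition.

```python
import numpy as np

# Example symmetric matrix and an (assumed) reference vector
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
psi0 = np.array([1.0, 0.0, 0.0])

# Generate the Krylov vectors psi0, A psi0, A^2 psi0
krylov_vectors = [psi0]
for _ in range(2):
    krylov_vectors.append(A @ krylov_vectors[-1])
K = np.column_stack(krylov_vectors)

# The vectors span the order-3 Krylov space, but their Gram matrix
# K^T K has nonzero off-diagonal entries: they are not orthogonal.
overlap = K.T @ K
print(np.round(overlap, 3))
```

The nonzero off-diagonal entries of the Gram matrix motivate the orthogonalization procedure discussed next.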
Although the vectors above span what we call a Krylov subspace, there is no reason to think that they will be orthogonal. One often uses an iterative orthonormalizing process similar to Gram-Schmidt orthogonalization; here the process is slightly different, since each new vector is made orthogonal to the others as it is generated. In this context this is called Arnoldi iteration. Starting with the initial vector $|\psi_0\rangle$, one generates the next vector $A|\psi_0\rangle$, and then ensures that this second vector is orthogonal to the first by subtracting off its projection onto $|\psi_0\rangle$. That is

$$|\psi_1\rangle \propto A|\psi_0\rangle - \langle\psi_0|A|\psi_0\rangle\,|\psi_0\rangle,$$

normalized so that $\langle\psi_1|\psi_1\rangle = 1$.
It is now easy to see that $\langle\psi_0|\psi_1\rangle = 0$, since

$$\langle\psi_0|\psi_1\rangle \propto \langle\psi_0|A|\psi_0\rangle - \langle\psi_0|A|\psi_0\rangle\,\langle\psi_0|\psi_0\rangle = 0.$$
We do the same for the next vector, ensuring it is orthogonal to both of the previous two:

$$|\psi_2\rangle \propto A|\psi_1\rangle - \langle\psi_1|A|\psi_1\rangle\,|\psi_1\rangle - \langle\psi_0|A|\psi_1\rangle\,|\psi_0\rangle.$$
If we repeat this process for all vectors, we obtain a complete orthonormal basis for the Krylov space. Note that the orthogonalization process will yield zero once the order $r$ exceeds the dimension $N$ of the full space, since $N$ orthogonal vectors necessarily span the full space. The process will also yield zero if any vector is an eigenvector of $A$, since all subsequent vectors would then be multiples of that vector.
1.1 A simple example: Krylov by hand
Let us step through the generation of a Krylov subspace on a trivially small matrix, so that we can see the process. We start with an initial matrix of interest to us:

$$A = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}$$
For this small example, we can determine the eigenvectors and eigenvalues easily even by hand. We show the numerical solution here.
```python
# One might use np.linalg.eigh here, but later matrices may not be
# Hermitian, so we use np.linalg.eig throughout this lesson.
import numpy as np

A = np.array([[4, -1, 0], [-1, 4, -1], [0, -1, 4]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print("The eigenvalues are ", eigenvalues)
print("The eigenvectors are ", eigenvectors)
```
```text
The eigenvalues are  [2.58578644 4.         5.41421356]
The eigenvectors are  [[ 5.00000000e-01 -7.07106781e-01  5.00000000e-01]
 [ 7.07106781e-01  1.37464400e-16 -7.07106781e-01]
 [ 5.00000000e-01  7.07106781e-01  5.00000000e-01]]
```
We record them here for later comparison: