Theory

Time Evolution using Krylov Methods

In this section, we will discuss some of the mathematics behind time evolution using Krylov methods.

First, a refresher on Krylov subspaces. Given an initial state $|\psi\rangle$ and a matrix $A$, the order-$m$ Krylov subspace is given by:

$$\mathcal{K}_m(A, |\psi\rangle) = \mathrm{span}\left\{ |\psi\rangle,\; A|\psi\rangle,\; A^2|\psi\rangle,\; \ldots,\; A^{m-1}|\psi\rangle \right\},$$

where span refers to the space spanned by the vectors. In practice, this list of vectors is converted into an orthonormal basis by (modified) Gram-Schmidt, Givens rotations, or Householder reflections.
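As a concrete illustration, the following Python sketch (a minimal example using NumPy; the function name krylov_basis and the tolerance are choices made here for illustration only) builds the raw Krylov vectors and orthonormalizes them with modified Gram-Schmidt:

    import numpy as np

    def krylov_basis(A, psi, m, tol=1e-12):
        # Raw Krylov vectors |psi>, A|psi>, ..., A^(m-1)|psi>
        vectors = [psi.astype(complex)]
        for _ in range(m - 1):
            vectors.append(A @ vectors[-1])
        # Modified Gram-Schmidt: orthogonalize each vector against those already kept
        basis = []
        for w in vectors:
            w = w.copy()
            for q in basis:
                w -= np.vdot(q, w) * q
            norm = np.linalg.norm(w)
            if norm > tol:                        # drop (numerically) dependent vectors
                basis.append(w / norm)
        return np.column_stack(basis)             # columns: orthonormal basis of K_m

    # quick check with a random Hermitian matrix
    rng = np.random.default_rng(0)
    B = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    A = (B + B.conj().T) / 2
    psi = rng.normal(size=6) + 1j * rng.normal(size=6)
    V = krylov_basis(A, psi, 4)
    print(np.allclose(V.conj().T @ V, np.eye(V.shape[1])))    # True

Forming the powers $A^k|\psi\rangle$ explicitly and orthonormalizing afterwards is convenient for illustration; the Arnoldi method described below interleaves the two steps, which is numerically more stable.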

For the evaluation of the time propagator, we seek approximations of the form:

$$e^{-iAt}|\psi\rangle \approx c_0|\psi\rangle + c_1 A|\psi\rangle + \cdots + c_{m-1}A^{m-1}|\psi\rangle = p_{m-1}(A)\,|\psi\rangle,$$

that is, that we can expand the effect of our evolution operator as a polynomial of order $m-1$ (in $A$). To do so, first we define our orthonormal basis using the Arnoldi method:

compute $\beta = \| |\psi\rangle \|$ and $v_1 = |\psi\rangle / \beta$
for $j = 1, \ldots, m$:
    compute $w = A v_j$
    for $i$ in $1, \ldots, j$:
        $h_{i,j} = \langle v_i, w \rangle$
        $w = w - h_{i,j} v_i$
    $h_{j+1,j} = \| w \|$
    $v_{j+1} = w / h_{j+1,j}$

then our basis $V_m = [v_1, v_2, \ldots, v_m]$ is comprised of the vectors $v_j$, and the subscript $m$ refers to the fact that this is the basis formed from the order-$m$ Krylov subspace.
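A direct transcription of this iteration into Python might look as follows (a sketch only: the helper name arnoldi, its return convention, and the breakdown tolerance are choices made here for illustration, not part of any particular library):

    import numpy as np

    def arnoldi(A, psi, m, tol=1e-12):
        # Arnoldi iteration: returns beta = || |psi> ||, the basis V_m (columns v_1..v_m)
        # and the projected matrix T_m = V_m^dagger A V_m built from the coefficients h_{i,j}.
        n = psi.shape[0]
        V = np.zeros((n, m + 1), dtype=complex)
        h = np.zeros((m + 1, m), dtype=complex)
        beta = np.linalg.norm(psi)
        V[:, 0] = psi / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                 # orthogonalize against all previous v_i
                h[i, j] = np.vdot(V[:, i], w)
                w = w - h[i, j] * V[:, i]
            norm_w = np.linalg.norm(w)
            h[j + 1, j] = norm_w
            if norm_w < tol:                       # "happy breakdown": subspace already invariant
                return beta, V[:, : j + 1], h[: j + 1, : j + 1]
            V[:, j + 1] = w / norm_w
        return beta, V[:, :m], h[:m, :m]

Note that only matrix-vector products with $A$ are required, which is what makes Krylov methods attractive when $A$ is large and sparse.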

The Arnoldi method has the useful property that, upon projecting the original matrix onto the basis vectors, the resulting smaller matrix $T_m = V_m^\dagger A V_m$ (whose entries are the coefficients $h_{i,j}$ computed above) is tridiagonal when $A$ is Hermitian, and upper Hessenberg otherwise. Both are more easily exponentiated than $A$ itself, but the tridiagonal case especially so.
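Continuing the sketch, this structure is easy to check numerically (the snippet reuses the illustrative arnoldi helper defined above, with arbitrary random test matrices):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
    A_herm = (B + B.conj().T) / 2
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)

    beta, V, T = arnoldi(A_herm, psi, 5)              # helper from the sketch above
    print(np.allclose(T, V.conj().T @ A_herm @ V))    # True: T_m = V_m^dagger A V_m
    print(np.allclose(T, np.triu(np.tril(T, 1), -1))) # True: tridiagonal for Hermitian A

    beta, V, T = arnoldi(B, psi, 5)
    print(np.allclose(T, np.triu(T, -1)))             # True: upper Hessenberg in general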

Now that we have our basis, we seek an approximate solution of the form $|\tilde\psi(t)\rangle = V_m y \approx e^{-iAt}|\psi\rangle$, with $y \in \mathbb{C}^m$ a vector of coefficients. More precisely, we are looking for a solution of the minimization problem

$$y_{\min} = \arg\min_{y \in \mathbb{C}^m} \left\| e^{-iAt}|\psi\rangle - V_m y \right\| .$$
Since the columns of $V_m$ are orthonormal, we can split the identity into the projection onto the range of $V_m$ and its orthogonal complement via $I = V_m V_m^\dagger + (I - V_m V_m^\dagger)$. Thus,

$$e^{-iAt}|\psi\rangle - V_m y = V_m\left(V_m^\dagger e^{-iAt}|\psi\rangle - y\right) + \left(I - V_m V_m^\dagger\right) e^{-iAt}|\psi\rangle .$$
Expanding, using the Pythagorean theorem (the two terms on the right lie in the range of $V_m$ and in its orthogonal complement, respectively), and simplifying with $\| V_m x \| = \| x \|$ gives

$$\left\| e^{-iAt}|\psi\rangle - V_m y \right\|^2 = \left\| V_m^\dagger e^{-iAt}|\psi\rangle - y \right\|^2 + \left\| \left(I - V_m V_m^\dagger\right) e^{-iAt}|\psi\rangle \right\|^2 .$$
Note that the second norm on the right does not include the variable $y$, while choosing

$$y = V_m^\dagger e^{-iAt}|\psi\rangle$$

makes the first norm on the right equal to zero. Since norms are non-negative, this expression is, therefore, the desired minimizing argument $y_{\min}$.
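This characterization is straightforward to verify numerically. The self-contained sketch below (NumPy/SciPy; the QR construction of the Krylov basis and the random test matrix are chosen only for brevity here) compares $V_m^\dagger e^{-iAt}|\psi\rangle$ against the minimizer returned by a generic least-squares solver:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    n, m, t = 8, 4, 0.7
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = (B + B.conj().T) / 2                          # Hermitian test matrix
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)

    # any orthonormal basis of K_m will do; here, QR of the raw Krylov vectors
    K = np.column_stack([np.linalg.matrix_power(A, k) @ psi for k in range(m)])
    V, _ = np.linalg.qr(K)

    f = expm(-1j * t * A) @ psi                       # exact e^{-iAt}|psi>
    y_formula = V.conj().T @ f                        # y_min = V^dagger e^{-iAt}|psi>
    y_lstsq = np.linalg.lstsq(V, f, rcond=None)[0]    # generic minimizer of ||f - V y||
    print(np.allclose(y_formula, y_lstsq))            # True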

This equation, however, still involves the original matrix exponential. By noting from the orthogonalization process of the Krylov subspace that $|\psi\rangle = \beta V_m e_1$ ($e_1$ being the unit vector at index 1, and $\beta = \| |\psi\rangle \|$), we rewrite our solution as

$$y_{\min} = \beta\, V_m^\dagger e^{-iAt} V_m e_1 .$$
Hence

$$|\tilde\psi(t)\rangle = V_m y_{\min} = \beta\, V_m V_m^\dagger e^{-iAt} V_m e_1 .$$
We would like to move the matrices $V_m^\dagger$ and $V_m$ into the exponentiation, because then we would be able to compute $|\tilde\psi(t)\rangle$ by

$$|\tilde\psi(t)\rangle \approx \beta\, V_m\, e^{-i V_m^\dagger A V_m t}\, e_1 = \beta\, V_m\, e^{-i T_m t}\, e_1,$$
which only involves the small $m \times m$ matrix $T_m$. However, in general

$$V_m^\dagger e^{-iAt} V_m \neq e^{-i V_m^\dagger A V_m t},$$
since $V_m V_m^\dagger \neq I$. Let us assume for a moment that the range of $V_m$ is invariant under the action of $A$, i.e., $A V_m x \in \operatorname{range}(V_m)$ for all $x$. Then $A V_m = V_m V_m^\dagger A V_m$, since $V_m V_m^\dagger$ is the orthogonal projector onto $\operatorname{range}(V_m)$; applying this relation repeatedly gives $V_m^\dagger A^k V_m = (V_m^\dagger A V_m)^k$ for every power $k$. Thus, under this assumption the desired equation does hold term by term in the exponential series, and if we assume that the range of $V_m$ is only almost invariant under the action of $A$, then the equation is still a good approximation.
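A small numerical experiment makes both points visible: the two sides of the equation differ for small $m$, and the difference shrinks rapidly as the Krylov order grows and the range of $V_m$ becomes nearly invariant under $A$. The sketch below reuses the illustrative arnoldi helper from above (matrix size and time are arbitrary test choices):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    n, t = 12, 0.5
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = (B + B.conj().T) / 2
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)

    U = expm(-1j * t * A)                             # full propagator, for comparison only
    for m in (2, 4, 8, 12):
        beta, V, T = arnoldi(A, psi, m)               # helper from the sketch above
        lhs = V.conj().T @ U @ V                      # V_m^dagger e^{-iAt} V_m
        rhs = expm(-1j * t * T)                       # e^{-i T_m t}
        print(m, np.linalg.norm(lhs - rhs))           # discrepancy shrinks as m grows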

As a final point, if we want to compute the time evolution of a quantum system defined by a Hamiltonian $H$, we want to compute $e^{-iH\Delta t}|\psi\rangle$ for some time step $\Delta t$. We note that, as the Krylov decomposition is invariant with respect to (non-zero) scalar multiples of the matrix, we may build the basis $V_m$ from $H$ itself (rather than from $-iH\Delta t$) and absorb the scalar into the small exponential, giving

$$e^{-iH\Delta t}|\psi\rangle \approx \beta\, V_m\, e^{-i T_m \Delta t}\, e_1, \qquad T_m = V_m^\dagger H V_m .$$
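Putting the pieces together, a single Krylov time step could be sketched as follows (again reusing the illustrative arnoldi helper; the function name krylov_step, the random Hamiltonian, and the dense expm reference are only for this comparison):

    import numpy as np
    from scipy.linalg import expm

    def krylov_step(H, psi, dt, m):
        # one Krylov time step: e^{-i H dt}|psi> ~ beta * V_m expm(-i T_m dt) e_1
        beta, V, T = arnoldi(H, psi, m)               # helper from the sketch above
        return beta * (V @ expm(-1j * dt * T)[:, 0])  # expm(...) e_1 is its first column

    rng = np.random.default_rng(4)
    n, dt = 16, 0.1
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (B + B.conj().T) / 2                          # random Hermitian "Hamiltonian"
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    psi /= np.linalg.norm(psi)

    exact = expm(-1j * dt * H) @ psi
    for m in (2, 4, 6, 8):
        err = np.linalg.norm(krylov_step(H, psi, dt, m) - exact)
        print(m, err)                                 # error drops rapidly with m

For modest $m$ the dominant cost is the $m$ matrix-vector products with $H$, so the step never requires forming or exponentiating the full matrix.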

For more details on related methods see the following literature:

E. Gallopoulos, Y. Saad, "On the parallel solution of parabolic equations," Proceedings of the 3rd International Conference on Supercomputing, 1989, pp. 17-28. doi: 10.1145/318789.318793

E. Gallopoulos, Y. Saad, "Efficient solution of parabolic equations by Krylov approximation methods," SIAM Journal on Scientific and Statistical Computing 13(5), 1992, pp. 1236-1264. doi: 10.1137/0913071

Y. Saad, "Analysis of some Krylov subspace approximations to the matrix exponential operator," SIAM Journal on Numerical Analysis 29(1), 1992, pp. 209-228. doi: 10.1137/0729014