
Kernel Polynomial Method (KPM)


The Kernel Polynomial Method (KPM in the following) is a numerical method used in the and-python package to compute the spectral function, the density of states and the temporal evolution of an arbitrary initial state. In a forthcoming version, it will also be used for filtering an arbitrary state in energy.

Useful references are:

[WWAF] is a good general reference on KPM. A slightly more detailed version is [WF]. It mainly concentrates on the density of states and spectral functions.

For temporal evolution, see [RM], [HWMSD] and [HKM].

The thesis of Sanjib Ghosh [SG] contains relevant details on the implementation of the KPM method, including energy filtering. Although Sanjib used different software (a pure-C program with a slightly different implementation), most of what is written in [SG] is relevant for the and-python package.


The description which follows is essentially based on [WWAF], with contributions from [HWMSD], [HKM] and [SG] for the temporal propagation.

Basics of the KPM method

The starting point is that we want to compute the expectation value of an operator which depends on a disordered Hamiltonian H. The operator can be Tr(\delta(H-E)) if one wants to compute a density of states, a diagonal element of \delta(H-E) for a spectral function, or \exp(-iHt) for the evolution operator.

In general, computing such an operator is a difficult task. In our case, the operator H itself is rather simple in the spatial basis that we use: the disorder is diagonal and the kinetic energy operator (discretized Laplacian) has only a few non-zero off-diagonal matrix elements. The basic idea is thus to express the operator, which is a function of the Hamiltonian, as a power series in the Hamiltonian. While, for e.g. the evolution operator \exp(-iHt), the Taylor series of the exponential could be used, this turns out to be not very efficient and hard to generalize to more complex operators.

The second key ingredient is the use of Chebyshev polynomials as approximations of arbitrary functions. In the simple case of a (sufficiently smooth) real function f(x) of a real variable x in the [-1,1] interval, this is known as the best approximation polynomial of the function f, and it can be constructed explicitly, see [WWAF] section 2. One can write:

f(x) = \frac{1}{\pi\sqrt{1-x^2}} \left[ \mu_0 + 2 \sum_{n=1}^{\infty}{\mu_n\ T_n(x)}\right]

with coefficients:

\mu_n = \int_{-1}^{1}{f(x)\ T_n(x)\ \mathrm{d}x}

where T_n is the nth Chebyshev polynomial defined by:

T_n(x) = \cos[n \arccos(x)]

T_n is a polynomial of degree n, obeying the recursion relations:

T_0(x) = 1

T_{-1}(x) = T_1(x) = x

T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)
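
As a quick numerical illustration (this snippet is not part of and-python), one can check that the recursion indeed reproduces the cosine definition of T_n:

```python
import numpy as np

# Check that T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x) reproduces
# T_n(x) = cos[n arccos(x)] on a grid of points in [-1, 1].
x = np.linspace(-1.0, 1.0, 201)
t_prev, t_cur = np.ones_like(x), x.copy()   # T_0(x) = 1, T_1(x) = x
for n in range(1, 10):
    t_next = 2.0 * x * t_cur - t_prev       # recursion step: T_{n+1}
    assert np.allclose(t_next, np.cos((n + 1) * np.arccos(x)))
    t_prev, t_cur = t_cur, t_next
```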

By truncating the series at a finite order, one obtains the best polynomial approximation (in some well-defined sense which does not matter here) of the function f(x).

It is important to note that the expansion is convergent only in the [-1,1] interval. Outside this interval, the Chebyshev polynomials increase extremely rapidly, spoiling the convergence.
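
As an illustration of the truncated expansion (a minimal sketch, not part of and-python; the function names are ours and the test function is arbitrary), the moments \mu_n of a smooth function can be computed by quadrature and the function reconstructed from the series above:

```python
import numpy as np

def chebyshev_moments(f, n_max, n_quad=1000):
    # mu_n = int_{-1}^{1} f(x) T_n(x) dx, computed with the substitution
    # x = cos(theta), i.e. int_0^pi f(cos th) cos(n th) sin(th) d th,
    # using a simple midpoint rule in theta.
    theta = (np.arange(n_quad) + 0.5) * np.pi / n_quad
    x = np.cos(theta)
    w = np.sin(theta) * np.pi / n_quad
    return np.array([np.sum(f(x) * np.cos(n * theta) * w)
                     for n in range(n_max + 1)])

def reconstruct(mu, x):
    # f(x) ~ [mu_0 + 2 sum_{n>=1} mu_n T_n(x)] / (pi sqrt(1 - x^2))
    acc = mu[0] * np.ones_like(x)
    for n in range(1, len(mu)):
        acc += 2.0 * mu[n] * np.cos(n * np.arccos(x))
    return acc / (np.pi * np.sqrt(1.0 - x**2))

f = lambda x: np.exp(-3.0 * x**2)                   # smooth test function
mu = chebyshev_moments(f, n_max=40)
x = np.linspace(-0.9, 0.9, 7)
print(np.max(np.abs(reconstruct(mu, x) - f(x))))    # should be small
```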


How can one use such an expansion for a disordered Hamiltonian H? Simply by using Chebyshev polynomials of the Hamiltonian itself.

Suppose for a moment that we have a Hamiltonian H' whose spectrum is entirely included in the [-1,1] interval, and that we want to compute the matrix element:

f(E) = \langle \beta | \delta(E-H') | \alpha \rangle

where \alpha and \beta are some known states in the Hilbert space (such a matrix element is the basis for computing a spectral function or a density of states). We can expand the function f(E) (non-zero only in the [-1,1] interval):

f(E) = \frac{1}{\pi\sqrt{1-E^2}} \left[ \mu_0 + 2 \sum_{n=1}^{\infty}{\mu_n\ T_n(E)}\right]

where the weights \mu_n are given by:

\mu_n = \int_{-1}^1 {\langle \beta | \delta(E-H') | \alpha\rangle\ T_n(E)\ dE} = \langle \beta | T_n(H') | \alpha \rangle

Thanks to the recursion relations between Chebyshev polynomials, the \mu_n can be computed recursively. Indeed, one has:

T_{n+1}(H')|\alpha\rangle = 2H' T_n(H')|\alpha\rangle - T_{n-1}(H') |\alpha \rangle

so that it is enough to keep in memory two consecutive vectors (i.e. states in the Hilbert space) T_{n-1}(H')|\alpha\rangle and T_n(H')|\alpha \rangle to generate the \mu_n.
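
A minimal sketch of this recursion in Python (not the and-python implementation; apply_H, alpha and beta are placeholders for the application of H' to a vector and for the two states) could read:

```python
import numpy as np

def kpm_moments(apply_H, alpha, beta, n_moments):
    """Return mu_n = <beta| T_n(H') |alpha> for n = 0 ... n_moments-1,
    assuming the spectrum of H' lies inside [-1, 1]."""
    mu = np.empty(n_moments, dtype=complex)
    t_prev = alpha.copy()             # T_0(H') |alpha> = |alpha>
    t_cur = apply_H(alpha)            # T_1(H') |alpha> = H' |alpha>
    mu[0] = np.vdot(beta, t_prev)
    if n_moments > 1:
        mu[1] = np.vdot(beta, t_cur)
    for n in range(2, n_moments):
        # T_n(H')|alpha> = 2 H' T_{n-1}(H')|alpha> - T_{n-2}(H')|alpha>
        t_next = 2.0 * apply_H(t_cur) - t_prev
        mu[n] = np.vdot(beta, t_next)
        t_prev, t_cur = t_cur, t_next   # only two vectors kept in memory
    return mu
```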

The expensive part of the calculation is the product H' T_n(H')|\alpha\rangle, which requires applying the Hamiltonian H' to an arbitrary state T_n(H')|\alpha\rangle. In the spatial basis used in and-python, this is a sparse matrix-vector product which involves only a few arithmetic operations per vector component. As it is also essentially local, it can be organized in a cache-friendly way.
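
For concreteness, a sparse one-dimensional Hamiltonian of this type (diagonal disorder plus nearest-neighbour hopping from the discretized Laplacian) can be built with scipy.sparse; the numbers below are arbitrary and merely chosen so that the spectrum fits inside [-1, 1], and this is not how and-python actually builds or rescales its Hamiltonian:

```python
import numpy as np
from scipy.sparse import diags

n = 10000
rng = np.random.default_rng(42)
disorder = 0.1 * rng.uniform(-0.5, 0.5, n)        # diagonal disorder
hopping = -0.25 * np.ones(n - 1)                  # discretized Laplacian
H = diags([hopping, disorder, hopping], offsets=[-1, 0, 1], format="csr")

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
apply_H = lambda v: H @ v    # sparse matrix-vector product, O(n) operations
```

With such an apply_H, the moments entering e.g. a density of states are obtained as mu = kpm_moments(apply_H, psi, psi, n_moments) from the sketch above, and the function of E is then reconstructed with the same truncated series as in the scalar example.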
