Density Matrix Renormalization Group for Dummies

Gabriele De Chiara,1,2 Matteo Rizzi,2 Davide Rossini,2 and Simone Montangero2,3

1 CNR-INFM-BEC & Dipartimento di Fisica, Università di Trento, I-38050 Povo, Italy
2 NEST-CNR-INFM & Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy
3 Institut für Theoretische Festkörperphysik, Universität Karlsruhe, 76128 Karlsruhe, Germany

arXiv:cond-mat/0603842 v1, 31 Mar 2006
(Dated: April 8, 2006)

Abstract
We describe the Density Matrix Renormalization Group algorithms for time-dependent and time-independent Hamiltonians. This paper is a brief but comprehensive introduction to the subject for anyone willing to enter the field or to write the program source code from scratch. An open source version of the code can be found at http://qti.sns.it/dmrg/phome.html.


I. INTRODUCTION

The advent of the information era has opened up the possibility of performing numerical simulations of quantum many-body systems, revealing completely new perspectives in the field of condensed matter theory. Indeed, together with analytic approaches, numerical techniques provide a wealth of information and details otherwise inaccessible. However, the simulation of a quantum mechanical system is generally a very hard task; one of the main reasons is the number of parameters required to represent a quantum state. This number usually grows exponentially with the number of constituents of the system [21], due to the corresponding exponential growth of the Hilbert space. This exponential scaling drastically limits the possibility of a direct simulation of many-body quantum systems. In order to overcome this limitation, many numerical tools have been developed, such as Monte Carlo techniques [1] or efficient Hamiltonian diagonalization methods, like the Lanczos and Davidson procedures [2].

The Density Matrix Renormalization Group (DMRG) method was introduced by White in 1992 [3, 4]. It was originally devised as a numerical algorithm for simulating ground state properties of one-dimensional quantum lattices, such as the Heisenberg or Bose-Hubbard models; it has since also been adapted to simulate small two-dimensional systems [5, 6]. DMRG traces its roots to Wilson's numerical Renormalization Group (RG) [7], which represents the simplest way to perform a real-space renormalization of Hamiltonians. Starting from a numerical representation of some microscopic Hamiltonian in a particular basis, degrees of freedom are iteratively added, typically by increasing the size of the finite system. The less important ones are then integrated out and accounted for by modifying the original Hamiltonian. The new Hamiltonian will thus exhibit modified as well as new couplings; renormalization group approximations consist in physically motivated truncations of the set of couplings newly generated by the elimination of degrees of freedom. In this way one obtains a simplified effective Hamiltonian that should capture the essential physics of the system under study. We point out that the DMRG can also be seen as a variational method under the matrix-product-form ansatz for trial wave functions: when the renormalization scheme converges to a fixed point, the ground state and elementary excited states in the thermodynamic limit can be simply expressed via an ansatz form which can be explored variationally, without reference to the renormalization construction [8].

Very recently, influence from the quantum information community has led to a DMRG-like algorithm able to simulate the temporal evolution of one-dimensional quantum systems [9, 10, 11, 12, 13, 14, 15]. Quantum information theory has also helped to clarify the situations in which this method can be applied efficiently. Indeed, it has been shown [9] that the efficiency in simulating a quantum many-body system is strictly connected to its entanglement behavior. More precisely, if the entanglement of a subsystem with respect to the whole is bounded (or grows at most logarithmically with its size), an efficient simulation with DMRG is possible. Up to now, it is known that ground states of one-dimensional lattices (whether critical or not) satisfy this requirement, whereas in higher dimensions it is not fulfilled, as the entanglement is subject to an area law [16]. On the other hand, the simulation of the time evolution of critical systems may not be efficient even in one dimension, as the block entanglement can grow linearly with time and block size [17, 18]. In a different context, it has also been shown that in a quantum computer performing an efficient quantum algorithm (Shor's algorithm or the simulation of a quantum chaotic system) the entanglement between qubits grows faster than logarithmically [19, 20]. Thus, t-DMRG cannot efficiently simulate every one-dimensional quantum system; nonetheless, its range of applicability is very broad and embraces very different subjects.

The aim of this review is to introduce the reader to the latest developments of DMRG codes, briefly but in a comprehensive way. In Sec. II we describe the basics of the time-independent DMRG algorithm; in Sec. III we introduce the measurement procedure (a more detailed exposition is given in Ref. [6] and references therein). In Sec. IV the time-dependent DMRG algorithm is explained. Finally, in Sec. V we provide some numerical examples, and in Sec. VI we discuss some technical issues regarding the implementation of a DMRG program code. In the last section the reader can find the schemes of the DMRG algorithms, both for the static and the time-dependent case. Further material can be found on our website (http://qti.sns.it/dmrg/phome.html), where the t-DMRG code will be released with an open license.


II. THE STATIC DMRG ALGORITHM

As already pointed out in the introduction, the tensorial structure of the Hilbert space of a composite system leads to an exponential growth, with the number of system constituents, of the resources needed for the simulation. However, if one is interested in the ground state properties of a one-dimensional system, the number of relevant parameters is limited for non-critical systems and grows only polynomially for critical ones [16]. This implies that it is possible to rewrite the state of the system in a more efficient way, i.e. it can be described by a number of coefficients much smaller than the dimension of the Hilbert space. Equivalently, a strategy to simulate ground state properties of a system is to consider only a relevant subset of states of the full Hilbert space.

This idea is reminiscent of the renormalization group (RG) introduced by Wilson [7]. In the RG procedure one typically begins with a small part of a quantum system (a block B of size L, living in an m-dimensional Hilbert space), and a Hamiltonian which describes the interaction between two identical blocks. One then projects the representation of the composite 2-block system (of size 2L and dimension m^2) onto the subspace spanned by the m lowest-lying energy eigenstates, thus obtaining a new truncated representation for it. Each operator is consequently projected onto the new m-dimensional basis. This procedure is then iteratively repeated, until the desired system size is reached. RG was successfully applied to the Kondo problem, but fails in the description of strongly interacting systems. This failure is due to the procedure followed to increase the system size and to the criterion used to select the representative states of the renormalized block: indeed, the decimation of the Hilbert space is based on the assumption that the ground state of the entire system is essentially composed of energetically low-lying states living on the smaller subsystems (the forming blocks), which is not always true. A simple counter-example is given by a free particle in a box: the ground state in a box of length 2l has no nodes, whereas any combination of two ground states of boxes of length l has a node in the middle, thus resulting in a higher energy.

A convenient strategy to cure the RG breakdown is the following: before choosing the states to be retained for a finite-size block, the block is first embedded in some environment that mimics the thermodynamic limit of the system. This is the key new ingredient of the DMRG algorithm; the price one has to pay is a slowdown of the growth of the system size with the number

of the algorithm's iterations: from the exponentially fast growth of Wilson's procedure to the linear growth of DMRG (very recently, in the context of real-space renormalization group methods, a new scheme which recovers the exponential growth has been proposed; it is based upon a coarse-graining transformation that renormalizes the amount of entanglement of a block before its truncation [22]). In the following, we introduce the working principles of the DMRG and provide a detailed description of how to implement it in practice (for a pedagogical introduction see for example [23, 24]).

A. Infinite-system DMRG

Keeping in mind the main ideas of the DMRG depicted above, we now formulate the basic structure of the so-called infinite-system DMRG for one-dimensional lattice systems. The typical scenario where DMRG can be used is the search for an approximate ground state of a 1D chain of nearest-neighbor interacting sites, each of them living in a Hilbert space of dimension D. As in Wilson's RG, DMRG is an iterative procedure in which the system is progressively enlarged. In the infinite-system algorithm we keep enlarging the system until the ground state properties we are interested in (e.g. the ground state energy per site) have converged. The system Hamiltonian is written as:

\hat{H} = \sum_i \sum_q \left[ J(q)\, \hat{S}_i(q)\, \hat{T}_{i+1}(q) + B(q)\, \hat{V}_i(q) \right] ,   (1)

where J(q) and B(q) are coupling constants, and {Ŝ_i(q)}_q, {T̂_i(q)}_q and {V̂_i(q)}_q are sets of operators acting on the i-th site. The index q labels the various elements of these sets. For example, in a magnetic chain these can be angular momentum operators. For simplicity we will not describe the case of position-dependent couplings, since it can easily be reduced to the uniform case. The algorithm starts with a block composed of one site, B(1, D) (see Fig. 1a); the arguments of B refer to the number of sites it embodies and to the number of states used to describe it. From the computational point of view, a generic block B(L, m_L) is a portion of memory which contains all the information about the block: the block Hamiltonian Ĥ_B, its basis, and other operators that we will introduce later. The block Hamiltonian Ĥ_B for B(L, m_L) includes only the local terms (i.e. local and interaction terms involving only sites belonging to the block).
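To make the bookkeeping concrete, the following minimal sketch (in Python/NumPy; it is our illustration, not part of the original code, and all names are ours) shows the local operators of Eq. (1) for a spin-1/2 Heisenberg-like chain and the information one would store for a block B(L, m_L).

import numpy as np

# Local (D = 2) spin-1/2 operators entering Eq. (1):
# H = sum_i sum_q [ J(q) S_i(q) T_{i+1}(q) + B(q) V_i(q) ]
sx = 0.5 * np.array([[0., 1.], [1., 0.]], dtype=complex)
sy = 0.5 * np.array([[0., -1j], [1j, 0.]], dtype=complex)
sz = 0.5 * np.array([[1., 0.], [0., -1.]], dtype=complex)

S_ops = (sx, sy, sz)      # {S_i(q)}_q, acting on site i
T_ops = (sx, sy, sz)      # {T_{i+1}(q)}_q, acting on site i+1
V_ops = (sz,)             # {V_i(q)}_q, local (field) term
J = (1.0, 1.0, 1.0)       # couplings J(q)
B = (0.1,)                # couplings B(q)

class Block:
    """Everything stored for a block B(L, m_L)."""
    def __init__(self, length, H_block, edge_ops, m):
        self.length = length      # number of sites L in the block
        self.H = H_block          # block Hamiltonian, an (m_L x m_L) matrix
        self.edge_ops = edge_ops  # operators S(q) of the rightmost site,
                                  # written in the m_L-dimensional block basis
        self.m = m                # number of states describing the block

# The starting block B(1, D) is a single site: its Hamiltonian contains only
# the local term, and its "edge" operators are the bare site operators.
single_site = Block(length=1,
                    H_block=B[0] * V_ops[0],
                    edge_ops=[op.copy() for op in S_ops],
                    m=2)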

The next step consists in building the so-called left enlarged block, by adding a site to the right of the previously created block. The corresponding Hamiltonian Ĥ_E is composed of the local Hamiltonians of the block and of the site, plus the interaction term:

\hat{H}_E = \hat{H}_B + \hat{H}_S + \hat{H}_{BS} .   (2)

The enlarged block is then coupled to a similarly constructed right enlarged block. If the system has global reflection symmetry, the right enlarged block Hamiltonian Ĥ_{E'} can be obtained just by reflecting the left enlarged block [25]. By adding the interaction of the two enlarged blocks, a super-block Hamiltonian Ĥ_supB is then built, which describes the global system:

\hat{H}_{supB} = \hat{H}_E + \hat{H}_{E'} + \hat{H}_{SS'} .   (3)
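As an illustration of Eqs. (2)-(3), here is a hedged sketch (Python/NumPy, our own naming) of how the enlarged-block and super-block Hamiltonians can be assembled with Kronecker products, assuming the block-edge operators have already been written in the current block basis.

import numpy as np

def enlarge_block(H_block, edge_ops, h_site, site_ops, J):
    """H_E = H_B + H_S + H_BS, Eq. (2).  The block (dimension m_L) sits to the
    left of the added site (dimension D), so the kron ordering is block (x) site."""
    m, D = H_block.shape[0], h_site.shape[0]
    H_E = np.kron(H_block, np.eye(D)) + np.kron(np.eye(m), h_site)
    for Jq, S_edge, T_site in zip(J, edge_ops, site_ops):
        H_E += Jq * np.kron(S_edge, T_site)            # H_BS
    return H_E

def build_superblock(H_E_left, H_E_right, free_ops_left, free_ops_right, J):
    """H_supB = H_E + H_E' + H_SS', Eq. (3).  free_ops_left/right are the
    operators of the two free sites, already embedded in the enlarged-block
    bases (the exact kron ordering of that embedding depends on where the
    free site sits inside each enlarged block)."""
    dL, dR = H_E_left.shape[0], H_E_right.shape[0]
    H_sup = np.kron(H_E_left, np.eye(dR)) + np.kron(np.eye(dL), H_E_right)
    for Jq, S_free, T_free in zip(J, free_ops_left, free_ops_right):
        H_sup += Jq * np.kron(S_free, T_free)          # H_SS'
    return H_sup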

From now on, we refer to the sites S and S' as the free sites. The matrix Ĥ_supB should finally be diagonalized in order to find the ground state ψ_G, which can be rewritten in ket notation as:

|\psi_G\rangle = \psi_{a\alpha\beta b}\, |a\,\alpha\,\beta\,b\rangle .   (4)

Hereafter Latin indices refer to blocks, while Greek indices indicate free sites; the implicit summation convention is assumed. From |ψ_G⟩ one evaluates the reduced density matrix ρ_L of the left enlarged block, by tracing out the right enlarged block:

\hat{\rho}_L = \mathrm{Tr}_R\, |\psi_G\rangle\langle\psi_G| = \psi_{a\alpha\beta b}\, \psi^*_{a'\alpha'\beta b}\, |a\,\alpha\rangle\langle a'\,\alpha'| .   (5)

The core of the DMRG algorithm lies in the renormalization procedure of the enlarged block, which eventually consists in finding a representation in terms of a reduced basis with at most m (fixed a priori) elements. This corresponds to a truncation of the Hilbert space of the enlarged block, since m_{L+1} = min(m_L D, m) [26]. The retained states are chosen to be the first m_{L+1} eigenstates of ρ_L, corresponding to the largest eigenvalues. This truncated change of basis is performed by using the m_L D × m_{L+1} rectangular matrix Ô_{L→L+1} (where the subscripts stand for the number of sites enclosed in the input block and in the output renormalized block), whose columns, in matrix representation, are the m_{L+1} selected eigenstates. To simplify the notation, let us introduce the function g(a, α) = D(a − 1) + α, which acts on a block index a and on the next free-site index α and gives an index of the enlarged block running from 1 to m_L D. The output of the full renormalization procedure is a truncated enlarged block B(L+1, m_{L+1}), which coincides with the new starting block for the next DMRG iteration.
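A minimal sketch of this step (Python/NumPy, illustrative names only), assuming the super-block ground state is stored as a vector whose index runs first over the left enlarged block g(a, α) and then over the right one:

import numpy as np

def truncation_operator(psi, dim_left, dim_right, m_max):
    """Build rho_L of Eq. (5) from the ground state of Eq. (4) and return the
    rectangular matrix O_{L->L+1} together with the kept eigenvalues."""
    psi_matrix = psi.reshape(dim_left, dim_right)   # rows: g(a, alpha), cols: right
    rho_L = psi_matrix @ psi_matrix.conj().T        # Tr_R |psi><psi|
    evals, evecs = np.linalg.eigh(rho_L)            # ascending order
    keep = min(m_max, dim_left)
    order = np.argsort(evals)[::-1][:keep]          # largest weights first
    O = evecs[:, order]                             # (m_L D) x m_{L+1}
    return O, evals[order]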

The new block consists of the new block Hamiltonian:

\hat{H}_{B'} = \hat{O}^{\dagger}_{L\to L+1}\, \hat{H}_E\, \hat{O}_{L\to L+1} = O^{*\,g(a,\alpha)\,c}_{L\to L+1}\; H_E^{\,g(a,\alpha)\,g(a',\alpha')}\; O^{\,g(a',\alpha')\,c'}_{L\to L+1}\; |c\rangle\langle c'|   (6)

and of the local operators:

\hat{S}'_{L+1}(q) = \hat{O}^{\dagger}_{L\to L+1}\, \hat{S}_{L+1}(q)\, \hat{O}_{L\to L+1}   (7)

written in the new basis. These are necessary at the next step, for the construction of the interaction between the rightmost block site and the free site. The output block B(L+1, m_{L+1}) also includes the matrix Ô_{L→L+1}, which identifies the basis states of the new block. It is worth emphasizing that, by iterating the procedure described above, we can increase the size of the system without increasing the number of states describing it.

We now summarize the key operations needed to perform a single DMRG step. At each DMRG step the number of sites described by the super-block goes from 2L to 2L + 2, so the simulated system grows by two sites. The infinite-system DMRG, with reflection symmetry, consists in iterating the following operations:

1. Start from the left block B(L, m_L), and enlarge it by adding the interaction with a single site.
2. Reflect the enlarged block, in order to form the right enlarged block.
3. Build the super-block from the interaction of the two enlarged blocks.
4. Find the ground state of the super-block and the m_{L+1} = min(m_L D, m) eigenstates of the reduced density matrix of the left enlarged block with the largest eigenvalues.
5. Renormalize all the relevant operators with the matrix Ô_{L→L+1}, thus obtaining B(L+1, m_{L+1}).

Notice that at each DMRG step one finds the ground state of a chain whose length has grown by two sites. By contrast, the number of states describing a block is always at most m, regardless of how many sites it includes. This means that the complexity of the problem is fixed a priori by m and D (while D is imposed by the structure of the simulated system, m ≥ D is a parameter which has to be set appropriately by the user, in order to get the desired precision for the simulation; see also Sec. V).

FIG. 1: Schematic procedure for the DMRG algorithm. In the left part (a) one iteration of the infinite-system DMRG algorithm is shown: starting from the system block B(L, m_L) and adding one free site to it, the enlarged block B(L, m_L) • is formed. Here for simplicity we assume that the system is reflection-symmetric, so the environment (right) block is taken equal to the left block. Then, after having created the super-block B(L, m_L) • • B(L, m_L), a renormalization procedure is applied in order to obtain the new block for the next DMRG iteration. In the right part (b) the scheme of a complete finite-system DMRG sweep is depicted.

In Sec. VI we will discuss how it is possible to extract the ground state of the super-block Hamiltonian without finding its entire spectrum, by means of efficient numerical diagonalization methods, like the Davidson or Lanczos algorithms [39]. We stress that at each DMRG step a truncation error ε_tr is introduced:

\epsilon_{tr} = \sum_{i > m} \lambda_i ,   (8)

where λ_i are the eigenvalues of the reduced density matrix ρ_L, listed in decreasing order. The error ε_tr is the weight of the eigenstates of ρ_L not selected for the new block basis. In order to perform a reliable DMRG simulation, the parameter m should be chosen such that ε_tr remains small as the system size is further increased. Usually one should keep ε_tr < 10^{-6}.
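The projection of Eqs. (6)-(7) and the truncation error of Eq. (8) reduce to a few matrix products; a sketch (Python/NumPy, our own illustration) could look as follows:

import numpy as np

def renormalize(H_E, edge_ops_E, O):
    """Eqs. (6)-(7): project the enlarged-block Hamiltonian and the operators
    of its rightmost site onto the reduced basis spanned by the columns of O."""
    H_new = O.conj().T @ H_E @ O
    new_edge_ops = [O.conj().T @ S @ O for S in edge_ops_E]
    return H_new, new_edge_ops

def truncation_error(rho_eigenvalues, m):
    """Eq. (8): total weight of the discarded eigenstates of rho_L."""
    lam = np.sort(rho_eigenvalues)[::-1]
    return float(np.sum(lam[m:]))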

For critical 1D systems ε_tr decays as a function of m with a power law, while for 1D systems away from criticality it decays exponentially, thus reflecting the entanglement properties of the system in the two regimes: a critical system is more entangled, therefore more states have to be taken into account.

B. Finite-system DMRG

The output of the infinite-system algorithm described above is the (approximate) ground state of an "infinite" 1D chain: one increases the length of the chain by iterating DMRG steps, until a satisfactory convergence is reached. However, for many problems the infinite-system DMRG does not yield results of the desired accuracy. For example, the strong physical effects of impurities or randomness in the Hamiltonian cannot be properly accounted for by infinite-system DMRG, as the total Hamiltonian is not yet known at intermediate steps. Moreover, in systems with strong magnetic fields, or close to a first-order transition, one may be trapped in a metastable state favoured for small sizes (e.g. by edge effects). Finite-system DMRG manages to eliminate such effects to a very large degree, and to reduce the error almost to the truncation error [3].

The idea of the finite-system DMRG algorithm is to stop the infinite-system algorithm at some preselected super-block length Lmax, which is subsequently kept fixed. In the following DMRG steps one applies the steps of the infinite-system DMRG, but only one block is increased in size while the other is shrunk, thus keeping the super-block size constant. Reduced basis transformations are carried out only for the growing block. When the infinite-system algorithm reaches the desired system size, the system is formed by two blocks B(Lmax/2 − 1, m) and two free sites, as shown in the first row of Fig. 1b. The convergence is then enhanced by the so-called "sweep procedure", illustrated in the subsequent rows of Fig. 1b. It consists in enlarging the left block by one site and reducing the right block correspondingly, in order to keep the total length fixed. In other words, after one finite-system step the system configuration is B(Lmax/2, m) • • B(Lmax/2 − 2, m) (where • represents a free site). While the left block is constructed by enlarging B(Lmax/2 − 1, m) with the usual procedure, the right block is taken from memory, as it has been built in a previous step of the infinite procedure and saved. Indeed, during the initial

infinite-system algorithm one should save the matrices Ô_{i→i+1}, the block Hamiltonians Ĥ_B(i) and the interaction operators Ŝ_i(q) for i = 1, ..., Lmax/2 − 1. The finite-system procedure then goes on increasing the size of the left block until the length Lmax − 4 is reached. At this stage a right block B(1, D) with one site is constructed from scratch and the left block B(Lmax − 3, m) is obtained through the renormalization procedure. Then the roles of the left and right block are exchanged and the free sites start to sweep from right to left. Notice that at each step the renormalized block B(i, m_i) has to be stored in memory. During these sweeps the length of the chain does not change, but the approximation of the ground state improves. Usually two or three sweeps are sufficient to reach convergence in the energy output.

Up to now we have concentrated on a single quantum state, namely the ground state. It is also possible to find an approximation to a small number of states (typically fewer than 5): for example, the ground state and some low-excited states [3]. These states are called target states. At each DMRG step, after the diagonalization, for each target state |ψ_k⟩ one has to calculate the corresponding reduced density matrix ρ_k, by tracing out the right enlarged block. Then a convex sum of these matrices with equal weights [6] is performed:

\rho = \frac{1}{n_k} \sum_{k=1}^{n_k} \rho_k .   (9)

Finally ρ has to be diagonalized in order to find the eigenbasis and the transformation matrices Ô. In this way the DMRG algorithm is capable of efficiently representing not only the Hilbert space "around" the ground state, but also the surroundings of the other target states. It is worth noting that targeting many states reduces the efficiency of the algorithm, because a larger m has to be used to obtain the same accuracy. An alternative is to run as many separate DMRG simulations, each with a single target state, as there are states required.
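For bookkeeping purposes, the positions visited by the free sites during one finite-system sweep can be generated as in the following sketch (Python, a hypothetical helper of ours, assuming a reflection-symmetric chain of even length L_max):

def sweep_positions(L_max):
    """Block lengths visited during one finite-system sweep for a super-block
    of fixed (even) length L_max.  Yields (left_length, right_length); the two
    free sites sit in between, so left + right + 2 == L_max at every step."""
    centre = L_max // 2 - 1
    # the free sites move to the right until the right block has a single site
    for left in range(centre, L_max - 2):
        yield left, L_max - left - 2
    # then they sweep all the way to the left ...
    for left in range(L_max - 4, 0, -1):
        yield left, L_max - left - 2
    # ... and finally come back to the symmetric configuration
    for left in range(2, centre + 1):
        yield left, L_max - left - 2

# Example: list(sweep_positions(10)) starts from (4, 4) and ends back at (4, 4).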

C. Boundary conditions

The DMRG algorithm, as depicted above, describes a system with open boundary conditions. From a physical point of view, however, periodic boundary conditions are normally highly preferable, as surface effects are eliminated and finite-size extrapolation gives better results for smaller system sizes. In the form presented here, the DMRG algorithm gives results much less precise for periodic boundary conditions than for open boundary conditions [6, 27, 28]. Nonetheless, periodic boundary conditions can be implemented by using the super-block configuration B • B •. This configuration is preferred over B • • B because the two blocks are not contiguous.

III. MEASUREMENT OF OBSERVABLES

Besides the energy, DMRG is also capable of extracting other characteristic features of the target states, namely the expectation values of a generic quantum observable M̂. Properties of the Lmax-site system can be obtained from the wave function |ψ⟩ of the super-block at any point of the algorithm, although the symmetric configuration (with the free sites at the center of the chain) usually gives the most accurate results. The procedure is to use the wave function |ψ⟩ resulting from the diagonalization of the super-block (see the scheme in Sec. II A, step 4) in order to evaluate expectation values.

We first concentrate on local observables M̂(i), living on a single site i. If one is performing the finite-system DMRG algorithm, it is possible to measure the expectation value of M̂(i) at the particular step inside a sweep in which i is one of the two free sites. The measure is then a simple average:

\langle\psi|\, \hat{M}(i)\, |\psi\rangle = \psi^*_{a\alpha\beta b}\, M(i)_{\alpha\alpha'}\, \psi_{a\alpha'\beta b} ,   (10)

where i is the first free site. In the special cases in which the observables refer to the extreme sites (i = 1 or i = Lmax), the measurement is performed when the shortest block is B(1, D), following the same procedure. It is also possible to measure an observable expectation value while performing the infinite-system algorithm. In this case there are two possibilities: either i is one of the two central free sites or not. In the former case the measurement is performed as before, while in the latter one should express M̂ in the truncated DMRG basis. At each DMRG iteration the operator M̂(i) must be updated to the new basis using the Ô matrix, as in Eq. (7): M̂(i) → Ô† M̂(i) Ô. The measurement is then computed as:

\langle \hat{M}(i) \rangle = \psi^*_{a\alpha\beta b}\, M_{a a'}\, \psi_{a'\alpha\beta b}   (11)

if site i belongs to the left block, and analogously if i belongs to the right block.

For non-local observables, like a correlation function P̂(i) Q̂(j), the evaluation of expectation values depends on whether i and j are on the same block or not. The most convenient way to perform this type of measurement is to use the finite-system algorithm. Let us first consider the case of nearest-neighbor observables P̂(i) and Q̂(i+1). We can measure the expectation value ⟨P̂(i) Q̂(i+1)⟩ when i and i+1 are the two free sites. In this case the dimensions of the matrices P̂ and Q̂ are simply D × D, and we do not have to store these operators in block representation. The explicit calculation of this observable is then simply:

\langle \hat{P}(i)\, \hat{Q}(i+1) \rangle = \psi^*_{a\alpha\beta b}\, P_{\alpha\alpha'}\, Q_{\beta\beta'}\, \psi_{a\alpha'\beta' b} .   (12)

In general, measurements like ⟨P̂(i) Q̂(j)⟩ (where i and j are not nearest-neighbor sites) can also be evaluated. This task can be accomplished by first storing the block representations of P̂(i) and Q̂(j), and then performing the measurement when i belongs to a block and j is a free site, or vice versa. Analogously, it is possible to evaluate measurements in the case when i belongs to the left block and j to the right one. What should be avoided is the measurement of ⟨P̂(i) Q̂(j)⟩ when i and j belong to the same block. The reason is that, due to the truncation, the representation of the product P̂(i) Q̂(j) is imprecise. To overcome this difficulty one can use compound operators, as described in [3, 6]. Finally, we stress that the convergence of measurements is usually slower than that of the energy, since more finite-system DMRG sweeps are required in order to obtain reliable measurement outcomes (typically between five and ten).
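For concreteness, the contractions of Eqs. (10) and (12) can be written compactly with einsum; the sketch below (Python/NumPy, our own illustration) assumes the super-block state has already been reshaped into the four-index tensor ψ_{aαβb}.

import numpy as np

def measure_local(psi, M):
    """Eq. (10): <psi| M(i) |psi> when i is the first (left) free site.
    psi must already be shaped as psi[a, alpha, beta, b]."""
    return complex(np.einsum('ijkl,jm,imkl->', psi.conj(), M, psi))

def measure_nn_correlator(psi, P, Q):
    """Eq. (12): <P(i) Q(i+1)> when i and i+1 are the two free sites."""
    return complex(np.einsum('ijkl,jm,kn,imnl->', psi.conj(), P, Q, psi))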

IV. TIME DEPENDENT DMRG

In this section we describe an extension of the static DMRG which incorporates real-time evolution into the algorithm. Various time-dependent simulation methods have been proposed recently [9, 11, 12, 29], but here we restrict our attention to the algorithm introduced by White and Feiguin [12]. The aim of the time-dependent DMRG algorithm (t-DMRG) is to simulate the evolution of the ground state of a nearest-neighbor one-dimensional system described by a Hamiltonian Ĥ, following the dynamics of a different Hamiltonian Ĥ_1. In a few words, the algorithm starts with a finite-system DMRG, in order to find an accurate approximation of the ground state |ψ_G⟩ of Ĥ. Then the time evolution of |ψ_G⟩ is implemented, by using a Suzuki-Trotter decomposition [30, 31] for the time evolution operator Û = e^{-iĤ_1 t}.

The DMRG algorithm gives an approximation to the Hilbert subspace that best describes the state of the system. However, during the evolution the wave function changes and explores different parts of the Hilbert space, so the truncated basis chosen to represent the initial state eventually ceases to be accurate. This problem is solved by updating the truncated bases during the evolution. A first proposal, due to Cazalilla and Marston, consists in enlarging the effective Hilbert space, by increasing m, during the evolution [29]. However, this method is not very efficient: if the state of the system travels sufficiently far from the initial subspace, either its representation becomes inaccurate or m grows too much to be handled. Another solution has been proposed in [12]: the block basis should be updated at each time step, by adapting it to the instantaneous state. This can be done by repeating the DMRG renormalization procedure using the instantaneous state as the target state for the reduced density matrix.

In order to approximately evaluate the evolution operator Û = e^{-iĤ_1 t} we use a Suzuki-Trotter decomposition [30, 31].

The first-order expansion in time is given by the formula:

e^{-i\hat{H}_1 t} \approx \left( \prod_{L=1}^{L_{max}-1} e^{-i\hat{H}_1(L,L+1)\,dt} \right)^{n} ,   (13)

where n = t/dt gives the discretization of time t in small intervals dt, and Ĥ_1(L, L+1) is the interaction Hamiltonian (plus the local terms) between sites L and L+1. Further decompositions at higher orders can be obtained by observing that the Hamiltonian can be divided into two addends: the first, F̂ = Σ_{L even} Ĥ_1(L, L+1), containing only the even bonds, and the second, Ĝ = Σ_{L odd} Ĥ_1(L, L+1), containing only the odd bonds. Since the terms inside F̂ commute with each other, and likewise those inside Ĝ, an even-odd expansion can be performed:

e^{-i\hat{H}_1 t} \approx \left( e^{-i\hat{F}\,dt/2}\; e^{-i\hat{G}\,dt}\; e^{-i\hat{F}\,dt/2} \right)^{n} .   (14)

This coincides with a second-order Trotter expansion, in which the error is proportional to dt^3. Of course, one can enhance the precision of the algorithm by using a fourth-order expansion with error dt^5 [32]:

e^{-i\hat{H}t} = \left[ \prod_{i=1}^{5} e^{-i p_i \hat{F}\,dt/2}\; e^{-i p_i \hat{G}\,dt}\; e^{-i p_i \hat{F}\,dt/2} \right]^{n} + O(dt^5) ,   (15)

where all p_i = 1/(4 − 4^{1/3}), except p_3 = 1 − 4 p_1 < 0, which corresponds to an evolution backward in time.
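In practice, each factor of Eqs. (13)-(15) is a product of commuting two-site propagators, one per bond, obtained by exponentiating the D^2 x D^2 bond Hamiltonians. A sketch (Python with SciPy's matrix exponential; names and conventions are ours) is:

import numpy as np
from scipy.linalg import expm

def bond_propagators(bond_hamiltonians, dt):
    """Two-site propagators for the even-odd splitting of Eq. (14).
    bond_hamiltonians[L] is the D^2 x D^2 matrix H_1(L, L+1), L = 1, ..., L_max-1;
    even bonds go into F (applied with dt/2), odd bonds into G (applied with dt)."""
    F_half = {L: expm(-1j * h * dt / 2)
              for L, h in bond_hamiltonians.items() if L % 2 == 0}
    G_full = {L: expm(-1j * h * dt)
              for L, h in bond_hamiltonians.items() if L % 2 == 1}
    return F_half, G_full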


Nonetheless, the most serious error in a t-DMRG program remains the truncation error: a nearly perfect time evolution with a negligible Trotter error is completely worthless if the wave function is affected by a relevant truncation error. It is worth mentioning that the t-DMRG precision becomes poorer and poorer as time grows, due to the truncation error accumulated at each DMRG step. This depends on Lmax, on the number of Trotter steps and, of course, on m. At a certain instant of time, called the runaway time, the t-DMRG precision decreases by several orders of magnitude. The runaway time increases with m, but decreases with the number of Trotter steps and with Lmax. For a more detailed discussion of the t-DMRG errors and of the runaway time, see Gobert et al. [33].

The initial wave function |ψ_G⟩ can be chosen from a great variety of states. As an example, for a spin-1/2 chain, a factorized state can be prepared by means of space-dependent magnetic fields. In general, it is also possible to start with an initial state built up by transforming the ground state as |ψ_A⟩ = Σ_{i=1}^{Lmax} Â_i |ψ_G⟩, where the Â_i are local operators. The state |ψ_A⟩ can be obtained by simply performing a preliminary sweep, just after the finite-system procedure, in which the operators Â_i are applied one after another to the transforming wave function, when i is a free site [12].

In summary, the t-DMRG algorithm is composed of the following steps:

1. Run the finite-system algorithm, in order to obtain the ground state |ψ_G⟩ of Ĥ.
2. If applicable, perform an initial transformation in order to set up the initial state |ψ_A⟩.
3. Continue the finite-system procedure by performing sweeps in which at each step the operator e^{-iĤ_1(L,L+1)dt} is applied to the system state (L and L+1 being the two free sites for the current step).
4. Perform the renormalization, following the finite-system algorithm, and store the matrices Ô for the following steps.
5. At each step change the state representation to the new DMRG basis using White's state prediction transformation [34] (see below).
6. Repeat points 3 to 5, until a complete dt time evolution has been computed.

White's prediction transformation is computed as follows: at any DMRG step, one has the description of the left block B(L − 1, m) and of the right block B(Lmax − L − 1, m). To transform a quantum

state |ψ⟩ of the system into the new basis for the next step (corresponding to the blocks B(L, m) and B(Lmax − L − 2, m)), one uses the matrices Ô_{L−1→L} and Ô†_{Lmax−L−2→Lmax−L−1}. The first matrix transforms a block of length L − 1 into a block of length L, and it has been computed in the current renormalization. The second one transforms a block of length Lmax − L − 1 into a block of length Lmax − L − 2; this matrix is recovered from memory, since it has been computed at a previous step. The transformed wave function then reads:

\tilde{\psi}_{a\alpha\beta b} = \left( O_{L-1\to L} \right)^{*\,g(a',\alpha')\,a} \left( O_{L_{max}-L-2\to L_{max}-L-1} \right)^{g(\beta,b)\,b'} \psi_{a'\alpha'\alpha b'} .   (16)
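A sketch of this transformation as a pair of tensor contractions (Python/NumPy; the ordering of the combined site-block index of O_right is an assumption of ours and must match the convention used when that matrix was built):

import numpy as np

def predict_wavefunction(psi_old, O_left, O_right, D):
    """White's state prediction, Eq. (16).
    psi_old : shape (m, D, D, m), indices (a', alpha', alpha, b') in the old
              configuration B(L-1, m) . . B(L_max-L-1, m).
    O_left  : (m*D, m) matrix, block(L-1) + site -> block(L).
    O_right : (D*m, m) matrix, site + block(L_max-L-2) -> block(L_max-L-1);
              its combined index is assumed ordered as (site, block).
    Returns the state with indices (a, alpha, beta, b) in the new configuration."""
    m = psi_old.shape[0]
    # fuse (a', alpha') and rotate into the new left-block basis (O_left^*)
    tmp = np.einsum('xa,xcd->acd', O_left.conj(), psi_old.reshape(m * D, D, -1))
    # expand the old right-block index b' into (beta, b) using O_right
    psi_new = np.einsum('acd,yd->acy', tmp, O_right)
    return psi_new.reshape(tmp.shape[0], D, D, -1)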

To compute the system time evolution using the second-order Trotter expansion of Eq. (14), one should perform 3/2 sweeps for each time interval dt: in the first half sweep e^{-iF̂ dt/2} is applied, in the second e^{-iĜ dt}, and finally a third half sweep is needed to apply e^{-iF̂ dt/2} again.

Since at each step the operator e^{-iĤ_1(L,L+1)dt} is computed on the two current free sites L and L+1, its representation is given in terms of a D^2 × D^2 matrix and, most remarkably, it is exact. As stated before, to increase the simulation precision one can expand the time evolution operator to fourth order, as in Eq. (15). The implementation of this expansion requires 5 × 3/2 sweeps; thus the computational time is five times longer than that needed using Eq. (14). Finally, we remark that this algorithm for the time evolution is a small modification of the finite-system algorithm: the main difference is the computation of a factor of the Trotter expansion instead of the diagonalization procedure at each step. Notice that measurements are performed in the same way as in the finite-system algorithm.
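Applying the exact two-site propagator to the free sites is a single contraction on the fused (α, β) index; a minimal sketch (Python/NumPy, illustrative, assuming the D^2 × D^2 gate is ordered with the left free site as the more significant factor) is:

import numpy as np

def apply_free_site_gate(psi, U, D):
    """Apply the exact D^2 x D^2 propagator exp(-i H_1(L, L+1) dt) to the two
    free sites of psi[a, alpha, beta, b]; the block indices are untouched."""
    a, _, _, b = psi.shape
    theta = psi.reshape(a, D * D, b)                 # fuse (alpha, beta)
    theta = np.einsum('xy,ayb->axb', U, theta)       # act on the fused index
    return theta.reshape(a, D, D, b)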

V. NUMERICAL EXAMPLES

In this section we report some numerical examples on the convergence of the DMRG outputs with respect to the user-fixed parameters m and (t, dt). Let us first focus on the static DMRG algorithm. The main source of error is the step-by-step truncation of the Hilbert space dimension of the system block from m × D to m. The parameter m must be set very carefully, since it represents the maximum number of states used to describe the system block. It is clear that, by increasing m, the output becomes closer and closer to the exact solution, which is eventually reached in the limit m ∼ D^L (in that case the algorithm no longer performs any truncation, and the only source of error is due to inevitable numerical roundoffs). As an example of the output convergence with m, in Fig. 2 we plot the behavior of the ground state energy of the one-dimensional spin-1 Bose-Hubbard model as a function of m (see [37] for a detailed description of the physical system). The convergence is exponential in m, as can be seen in the figure. The inset shows the dependence of the CPU time on m; the dashed line is a power-law fit of the data, m^α with α ≃ 3.2.


FIG. 2: Ground state energy for the spinorial Bose-Hubbard model with a fixed number of total particles n = Lmax, as a function of m. A 1D chain of Lmax = 50 sites has been simulated with a 3-sweep finite-system algorithm. The dashed line is an exponential fit: E(m) = E0 (1 + C0 e^{-α0 m}) with E0 ≃ −0.17046, C0 ≃ −0.035, α0 ≃ 0.05. Inset: CPU-time dependence on m; the dashed line shows a power-law fit t ∼ m^α with α ≃ 3.2. The numerical simulations presented here and in the following figures have been performed on a 1.6 GHz PowerPC 970 processor with 2.4 GB of RAM [38].

We now present an example of the convergence of the t-DMRG with m and (t, dt). We consider the dynamical evolution of the block entanglement entropy in a linear XXZ chain (see [18] for more details). The state of the system at time t = 0 is the antiferromagnetic state. The initial state evolves under the Hamiltonian of the XX model from an initial product state to an entangled one. This entanglement can be measured by the von Neumann entropy of the reduced density matrix ρ(t) of the block:

S(t) = -\mathrm{Tr}\left[ \rho(t) \log_2 \rho(t) \right] .   (17)
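In a simulation, S(t) is obtained directly from the eigenvalues of the reduced density matrix of the block; a small sketch (Python/NumPy, our illustration) is:

import numpy as np

def von_neumann_entropy(rho):
    """Eq. (17): S = -Tr[rho log2 rho] from the block reduced density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))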

In the example we calculate S(t) for a block of size 6 in a chain of length 50. The time evolution has been calculated from t = 0 to t = 3 with a fixed Trotter time step dt = 5 · 10^{-2}, which ensures that the Trotter error is negligible with respect to the truncation error. Since the XX model can be solved analytically, we are able to compare the exact results with the t-DMRG data. In Fig. 3 we show S(t) as a function of time for various values of m, from 15 to 150, compared with the exact data. In the inset we present the CPU time as a function of m, which behaves as a power law m^α with α ≃ 3.14, confirming the estimate given in the inset of Fig. 2. The deviation ε = |S_exact − S_m|/S_m as a function of m and of time is shown in Fig. 4. The typical fast convergence of the DMRG result with m is recovered only when m is greater than a critical value m_c (two distinct regimes are clearly visible in Fig. 4). This is due to the amount of entanglement present in the system: an estimate of the number of states needed for an accurate description is given by m_c ∝ 2^{S(t)}. Thus, it is always convenient to keep track of the entropy, to have an initial guess for the number of states needed to describe the system [33]. On the other hand, if m is increased too much, the Trotter error will dominate and a smaller dt is needed to improve the accuracy.

FIG. 3: Time evolution (in arbitrary units) of the von Neumann entropy S(t) for m = 15 (crosses), 30 (stars), 50 (triangles), 75 (diamonds), 100 (squares), 150 (circles). The t-DMRG data are compared with the exact result (solid line). Inset: CPU-time dependence on m; the dashed line shows a power-law fit t ∼ m^α with α ≃ 3.14.


FIG. 4: Deviation ε = |S_exact − S_m|/S_m at time t = 3 as a function of m. Inset: ε as a function of time for various values of m: 15 (crosses), 30 (stars), 50 (triangles), 75 (diamonds), 100 (squares), 150 (circles).

VI. TECHNICAL ISSUES

In this section we explain some technicalities regarding the implementation of the DMRG and t-DMRG codes. They are not essential for understanding the algorithm, but they can be useful to anyone who wants to write a code from scratch or to modify an existing one. Some of these parts can be implemented differently, or partially or completely skipped, depending on the computational complexity of the physical system under investigation.


A. Hamiltonian diagonalization

The ground state of the Hamiltonian is usually found by diagonalizing a matrix of dimension (m D)^2 × (m D)^2. Typically the DMRG algorithm is used when one is only interested in the ground state properties (at most in a few low-energy eigenstates). The diagonalization time can thus be greatly optimized by using Lanczos or Davidson methods: these are capable of giving a small number (≲ 10) of eigenstates close to a previously chosen target energy in much less time than exact diagonalization routines. Moreover, they are optimized for large sparse matrices (which is the case for typical super-block Hamiltonians) and they do not require the full matrix as input. What is needed is just its action on a generic state |ψ⟩, which lives in an (mD)^2-dimensional Hilbert space. The Hamiltonian in Eq. (1) can be written as:

\hat{H} = \sum_p \hat{A}(p) \otimes \hat{B}(p) ,   (18)

where Â(p) and B̂(p) act on the left and on the right enlarged block, respectively. Thus, only this matrix multiplication has to be implemented:

\psi^{out}_{a\alpha\beta b} = \sum_p \hat{A}(p)^{\,g(a,\alpha)\,g(a',\alpha')}\; \hat{B}(p)^{\,g(b,\beta)\,g(b',\beta')}\; \psi^{in}_{a'\alpha'\beta' b'} .   (19)

In this way it is possible to save a great amount of memory and a large number of operations, since the dimensions of Â(p) and B̂(p) are (m D) × (m D), and not (m D)^2 × (m D)^2. As an example, the typical value of m for simulating the evolution of an Lmax = 50 spin-1/2 chain (D = 2) is m ∼ 50. This means that, in order to store all the ∼ 10^8 complex numbers of Ĥ_supB in double precision, ∼ 1.6 Gbytes of RAM are needed. Instead, each of the two matrices Â and B̂ requires less than 200 kbytes of RAM.
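As an illustration of this matrix-free strategy, the following sketch implements the action of Eq. (19) and hands it to an iterative eigensolver. We use SciPy's Lanczos-type routine purely for illustration; the authors' code relies on a Davidson-type (JDQZ) routine instead, and all names here are ours.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def superblock_ground_state(terms, dim_left, dim_right):
    """Ground state of H = sum_p A(p) (x) B(p), Eq. (18), without ever building
    the full (dim_left*dim_right)^2 matrix: only the action of Eq. (19) on a
    vector is implemented and handed to an iterative eigensolver."""
    def matvec(psi):
        psi = np.asarray(psi).reshape(dim_left, dim_right)
        out = np.zeros((dim_left, dim_right), dtype=complex)
        for A, B in terms:              # Eq. (19): A acts on the left index,
            out += A @ psi @ B.T        # B on the right one
        return out.ravel()

    H = LinearOperator((dim_left * dim_right,) * 2, matvec=matvec, dtype=complex)
    energy, vec = eigsh(H, k=1, which='SA')
    return energy[0], vec[:, 0]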

B. Guess for the wave function

Even using the tools described in the previous paragraph, the most time-consuming part of a DMRG step remains the diagonalization procedure. The step-to-step wave function transformation required for the t-DMRG algorithm, described in the previous section, can also be used in the finite-system DMRG in order to speed up the super-block diagonalization [34]. Indeed, the Davidson and Lanczos diagonalization methods are iterative algorithms which start from a generic wave function and then recursively modify it, until the eigenstate closest to the target eigenvalue is reached (up to some tolerance value, fixed by the user). If a very good initial guess is available, the number of iterations required to converge to the solution can be drastically reduced, and the time needed for the diagonalization can be reduced by up to an order of magnitude. In the finite-system algorithm the system changes much less from step to step than in the infinite-system algorithm, and an excellent initial guess turns out to be the final wave function of the previous DMRG step, after it has been written in the new basis of the current step. White's prediction is used to change the basis of the previous ground state with the appropriate Ô operators, as in Eq. (16). It is also possible to speed up the diagonalization in the infinite-system algorithm, but there the search for a state prediction is slightly more complicated (see e.g. [35, 36]).

C. Symmetries

If the system has a global reflection symmetry, it is possible to take the environment block equal to the system block in the infinite-system procedure. Namely, the right enlarged block is simply the reflection of the left one. To avoid the complication of the reflection we can consider an alternative labelling of the sites, as shown in Fig. 5. In this case the left and right enlarged blocks are represented by exactly the same matrix.


FIG. 5: Alternative labelling of sites, to be used in the environment reflection procedure (in case of globally reflection-symmetric systems).

If other symmetries hold, for example conservation of angular momentum or of particle number, it is possible to take advantage of them so as to considerably reduce the CPU time for the diagonalization. The idea is to rewrite the total Hamiltonian in block-diagonal form, and then diagonalize each block separately. If one is interested in the ground state, one simply has to compare the ground state energies of the different blocks, in order to find the eigenstate corresponding to the lowest energy level. One may also be interested only in the ground state with given quantum numbers (for example, in the Bose-Hubbard model one can search for the system ground state with a given number of particles); in this case the diagonalization of the block Hamiltonian corresponding to the wanted quantum numbers is sufficient. In order to write the Hamiltonian in block-diagonal form, the eigenstates of the reduced density matrix have to be labelled according to the various quantum numbers. Attention must be paid when truncating to the reduced basis: it is of crucial importance to retain whole blocks of eigenstates with the same weight inside a region with given quantum numbers. This helps in avoiding unwanted artificial symmetry breaking, apart from numerical roundoff errors.
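A minimal way to organize such a block-diagonal treatment is to group the basis-state indices by their quantum number, as in this sketch (Python, illustrative):

import numpy as np

def sector_indices(quantum_numbers):
    """Group basis-state indices of a block by a conserved quantum number
    (e.g. total magnetization or particle number)."""
    sectors = {}
    for index, q in enumerate(quantum_numbers):
        sectors.setdefault(q, []).append(index)
    return sectors

# Example: diagonalize rho sector by sector instead of all at once, keeping
# whole blocks of degenerate eigenstates inside each sector.
# for q, idx in sector_indices(qnums).items():
#     evals, evecs = np.linalg.eigh(rho[np.ix_(idx, idx)])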

D. Sparse Matrices

Operators typically involved in DMRG-like algorithms (such as block Hamiltonians, updating matrices, observables) are usually represented by sparse matrices. A well-written code takes advantage of this fact, thus saving large amounts of CPU time and memory. Namely, there are standard formats and subroutines which store, for a given sparse matrix, the position (row and column) and the value of each non-zero element.
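For instance, standard sparse formats store exactly this triple (row, column, value); a tiny illustration using SciPy (our example, not the authors' code):

import numpy as np
from scipy.sparse import coo_matrix

# A sparse operator is stored as three arrays: row index, column index and
# value of each non-zero element (here the spin-1/2 raising operator S+).
rows, cols, vals = [0], [1], [1.0]
s_plus = coo_matrix((vals, (rows, cols)), shape=(2, 2))
print(s_plus.toarray())        # dense view, only for inspection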

E. Storage

Both the static and the time-dependent DMRG require storing a great number of operators: the block Hamiltonians, the updating matrices and, if necessary, the observables for each possible block length. One useful way to handle all these operators is to group each of them into a register, in which one index represents the length of the block. Operationally, we store all these operators in the fast-access RAM. However, for very large problems one may require more than the available RAM, and it then becomes necessary to store these data on the hard disk. The read/write operations from the hard disk dramatically slow down the program performance, so they should be avoided if possible.

F. Algorithm Schemes

Figures 6 and 7 show a schematic representation of the DMRG and t-DMRG codes.


Acknowledgments

We thank J.J. Garcia-Ripoll and C. Kollath for some technical comments while developing the DMRG code and for feedback. We also thank G.L.G. Sleijpen [39] for providing us with the Fortran routine to find the ground state. This work has been performed within the "Quantum Information" research program of the Centro di Ricerca Matematica "Ennio De Giorgi" of Scuola Normale Superiore and it has been partially supported by IBM 2005 Scholars Grants – Linux on Power and by the NSF through a grant from the ITAMP at Harvard University and the Smithsonian Astrophysical Observatory. SM acknowledges support from the Alexander von Humboldt Foundation. GDC acknowledges support from the European Commission through the FP6-FET Integrated Project SCALA, CT-015714.

[1] See e.g. D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press (2000); A.W. Sandvik and J. Kurkijärvi, Phys. Rev. B 43, 5950 (1991).
[2] H.A. van der Vorst, Computational Methods for Large Eigenvalue Problems, in P.G. Ciarlet and J.L. Lions (eds.), Handbook of Numerical Analysis, Volume VIII, North-Holland (Elsevier), Amsterdam (2002).
[3] S.R. White, Phys. Rev. Lett. 69, 2863 (1992).
[4] S.R. White, Phys. Rev. B 48, 10345 (1993).
[5] I. Peschel, X. Wang, M. Kaulke, and K. Hallberg, eds., Density-Matrix Renormalization (Springer Verlag, Berlin, 1999).
[6] U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
[7] K.G. Wilson, Rev. Mod. Phys. 47, 773 (1975).
[8] S. Östlund and S. Rommer, Phys. Rev. Lett. 75, 3537 (1995).
[9] G. Vidal, Phys. Rev. Lett. 91, 147902 (2003).
[10] G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
[11] A.J. Daley, C. Kollath, U. Schollwöck, and G. Vidal, J. Stat. Mech. P04005 (2004).
[12] S.R. White and A.E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
[13] A.E. Feiguin and S.R. White, Phys. Rev. B 72, 020404 (2005).


FIG. 6: Basic scheme of the infinite/finite DMRG algorithm. Here we have supposed, for simplicity, that the system is globally reflection symmetric (thus the environment block is taken equal to the system block). The shadowed rectangle is the basic renormalization stage that will be used also in the t-DMRG algorithm (see Fig. 7).


[14] S.R. Manmana, A. Muramatsu, and R.M. Noack, AIP Conf. Proc. 789, 269 (2005), cond-mat/0502396.
[15] J.J. Garcia-Ripoll, cond-mat/0602305.
[16] G. Vidal, J.I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003); J.I. Latorre, E. Rico, and G. Vidal, Quant. Inf. Comput. 4, 48 (2004).
[17] P. Calabrese and J. Cardy, J. Stat. Mech.: Theor. Exp. P04010 (2005).
[18] G. De Chiara, S. Montangero, P. Calabrese, and R. Fazio, J. Stat. Mech.: Theor. Exp. P03001 (2006).
[19] R. Orus and J.I. Latorre, Phys. Rev. A 69, 052308 (2004).
[20] S. Montangero, Phys. Rev. A 70, 032311 (2004).
[21] R. Feynman, Int. J. Theor. Phys. 21, 467 (1982).
[22] G. Vidal, cond-mat/0512165.
[23] A.L. Malvezzi, Braz. J. Phys. 33, 55 (2003).
[24] R.M. Noack and S.R. Manmana, AIP Conf. Proc. 789, 93 (2005), cond-mat/0510321.
[25] If mirror symmetry does not hold, the right enlarged block must be built up independently.
[26] If m is bigger than D^2 we do not truncate the basis at the first step. The truncation starts when m < D^L, where L is the number of spins in the enlarged block.
[27] M.C. Chung and I. Peschel, Phys. Rev. B 62, 4191 (2000).
[28] F. Verstraete, D. Porras, and J.I. Cirac, Phys. Rev. Lett. 93, 227205 (2004).
[29] M.A. Cazalilla and J.B. Marston, Phys. Rev. Lett. 88, 256403 (2002); Phys. Rev. Lett. 91, 049702 (2003).
[30] M. Suzuki, Prog. Theor. Phys. 56, 1454 (1976).
[31] H.F. Trotter, Proc. Am. Math. Soc. 10, 545 (1959).
[32] U. Schollwöck, J. Phys. Soc. Jpn. 74 (Suppl.), 246 (2005).
[33] D. Gobert, C. Kollath, U. Schollwöck, and G. Schütz, Phys. Rev. E 71, 036102 (2005).
[34] S.R. White, Phys. Rev. Lett. 77, 3633 (1996).
[35] U. Schollwöck, Phys. Rev. B 58, 8194 (1998).
[36] L. Sun, J. Zhang, S. Qin, and Y. Lei, Phys. Rev. B 65, 132412 (2002).
[37] M. Rizzi, D. Rossini, G. De Chiara, S. Montangero, and R. Fazio, Phys. Rev. Lett. 95, 240404 (2005).
[38] See for example http://www-03.ibm.com/servers/eserver/bladecenter/


[39] To find the ground state we used the JDQZ routine implemented by G.L.G. Sleijpen et al. http://www.math.ruu.nl/people/sleijpen/.


FIG. 7: Basic scheme of the time-dependent DMRG algorithm. The index i refers to the discretized time and n = t/dt.

