## 1. Introduction

The accurate and efficient solution of the Schrödinger equation is important in atomic physics, molecular physics, and chemical dynamics. Applications include the dynamics of atoms and molecules in time-dependent electromagnetic fields, a variety of atomic and molecular collision problems, molecular photodissociation, molecular dynamics on surfaces, the behavior of materials subjected to internal and external forces, and the production of solitons and quantum vortices in Bose-Einstein condensates. In many situations, especially at the nanoscale, the physical system evolves over multiple length scales and timescales, and a computationally efficient approach is a necessity for making progress. Theoretical modeling of all of these phenomena is important for obtaining a fundamental understanding of atomic processes at the nanoscale, and it provides a scientific basis for the design and development of nanostructured materials and for the optimal control of chemical reactions.

The numerical solution of the time-dependent (TD) Schrödinger equation relies heavily on the discretization of the variables (*r*, *t*) in coordinate space, known as the grid method [1–4]. Numerical grid methods, which exhibit spectral convergence, have become the most important tools for solving the Schrödinger equation and have been well developed over the past decades, because they are convenient for evolving the TD Schrödinger equation from an initial value with iterative approaches. Early attempts to demonstrate the computational viability of various grid methods for molecular dynamics have largely been superseded by applications to specific problems and deeper research into more sophisticated molecular systems. Grid methods for efficiently solving the electronic Schrödinger equation, however, are still under active development, mainly because of the attractive and repulsive Coulomb singularities, which are quite different from the potential energy surface (PES) experienced by the nuclei in molecular dynamics calculations.

Once a grid method has been chosen, a suitable iterative method for evolving the initial wave function must be determined accordingly. The efficiency of the methods for evolving the Schrödinger equation with a time-independent (TID) or TD Hamiltonian is crucial for computer modeling of quantum dynamics. Such methods were already well developed before 1990; the two most famous are the Chebyshev polynomial expansion for a TID Hamiltonian and the split operator for a TD or TID Hamiltonian [5,6]. Recently, there has been particular interest in developing efficient and accurate propagators for a TD Hamiltonian.

An efficient quantum dynamics code comes from an optimal combination of the time propagator and the spatial grid method. In particular, recent rapid developments in hardware capabilities, as exemplified by Intel's many-integrated-core processors and Nvidia graphics processing unit cards, have added another dimension to increasing the number of floating-point operations per second that can be executed.

At the end of this introduction, we note that, in the past, it was usually stated that solving the Schrödinger equation with TID methods, that is, matrix diagonalization methods, is much more computationally demanding than solving it with TD methods, that is, iterative methods for an initial value problem. With the development of numerical methods such as the filter method and the spectral transform method, this is true only conditionally [4,7,8]. In particular, with the invention of the real Chebyshev wave packet method, there is no longer any distinction between the TID and TD methods [4].

In this work, the spatial grid methods and time propagators for solving the Schrödinger equation with TD and TID Hamiltonians are reviewed, and some of their applications are presented, namely describing resonances and isotope effects in molecular reaction dynamics, molecular dynamics on surfaces, and electronic dynamics in atoms and molecules induced by ultrashort laser pulses.

## 2. The Schrödinger equation

The time-dependent Schrödinger equation can usually be written as

$$ i\hbar\frac{\partial}{\partial t}\Psi(\boldsymbol{r},\boldsymbol{R},t)=\hat{H}(\boldsymbol{r},\boldsymbol{R},t)\,\Psi(\boldsymbol{r},\boldsymbol{R},t), $$

and for the dynamics of a molecule, the Hamiltonian and wave function are functions of **r** and **R**, where **r** is a collection of the coordinates of the electrons and **R** is a collection of the coordinates of the nuclei. In the case where we are interested in the dynamics of the nuclei, electronic structure theory is usually applied first to construct the potential energy surface (PES) using the Born-Oppenheimer approximation; the electronic coordinates **r** then disappear from the equation, and we can conveniently carry out molecular dynamics simulations. In the case where we are interested in electronic dynamics processes, especially those induced by ultrashort laser pulses, the electronic coordinates **r** have to be considered explicitly, and the nuclear coordinates **R** are usually held fixed, since the motion of the nuclei is much slower.

In any case, in a modern solution of the TD Schrödinger equation, the spatial coordinates of the Hamiltonian have to be discretized to facilitate the wave function propagation, where iterative evaluations of the action of the Hamiltonian operator on the wave function are required. Sometimes, the time variable also requires discretization; the propagation is then accomplished by repeatedly advancing the wave function from *t* to *t* + Δ*t*.

## 3. The spatial discretization methods

Discretization of the spatial coordinates is very important for an efficient and accurate quantum dynamics calculation. For a specific problem, a carefully designed grid method can lead to a substantial decrease in the computational effort. Basically, there are three approaches to the discretization of the spatial coordinates of the Hamiltonian: (1) local methods, such as the finite difference methods; (2) global methods, which include all of the spectral methods and the corresponding grid methods; and (3) spectral element methods. Each has its advantages and limitations.

The local methods usually employ low-order finite differences and are simple to use, leading to very sparse Hamiltonian matrices. However, they usually have limited accuracy, since they converge with both the number of grid points and the spacing of the grid points. When the boundary region is important in the model, the boundary conditions need to be imposed carefully, which results in an asymmetric Hamiltonian matrix, and unphysical states may arise. The grid points in finite difference calculations connect with each other only "locally," that is, with the nearby grid points; thus, parallel algorithms can be implemented efficiently with MPI.
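As a minimal sketch of the local approach, the following builds the standard second-order central-difference representation of d²/dx² (a tridiagonal matrix under zero Dirichlet boundary conditions) and checks its O(h²) accuracy on sin(πx); the grid and test function are illustrative choices, not taken from the text.

```python
import numpy as np

def d2_matrix(n, h):
    """Second-order central-difference d^2/dx^2 on n interior points,
    spacing h, with zero (Dirichlet) boundary conditions."""
    main = np.full(n, -2.0)
    off = np.ones(n - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# interior grid on (0, 1); sin(pi x) vanishes at both boundaries
n, h = 99, 1.0 / 100
x = h * np.arange(1, n + 1)
f = np.sin(np.pi * x)

# exact second derivative is -pi^2 sin(pi x); residual is O(h^2)
residual = d2_matrix(n, h) @ f + np.pi**2 * f
print(np.max(np.abs(residual)))   # ~1e-3 for this h
```

Halving `h` reduces the residual by roughly a factor of four, which is the limited, non-spectral convergence referred to above.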

In contrast, the global methods are usually derived from the classical orthogonal polynomials. Each grid point connects with all of the other points, whether necessary or not, which leads to a dense Hamiltonian matrix. However, the global methods usually have spectral convergence, and results of machine accuracy can be obtained. Due to the limited accuracy of the finite element method, the spectral element method is much more popular in quantum molecular and atomic dynamics calculations [9–11]. The grid points in each element of the spectral element method are determined by the Lobatto-Chebyshev or Lobatto-Legendre quadrature. Many proposals have been put forward to set up the connection between different elements. The spectral element method usually attains numerical accuracy between that of the spectral method and that of the low-order finite difference method, and it results in a Hamiltonian matrix sparser than that of the global methods. Currently, the spectral element method is widely applied to solving the electronic Schrödinger equation for dynamics induced by ultrashort laser pulses. However, due to the denser grid points around both ends of each element, the spectral range of the Hamiltonian matrix usually becomes unnecessarily large, especially when high-order Lagrange polynomials are applied in each element to obtain results of high accuracy. This unnecessarily huge spectral range makes it much more difficult to find an efficient, matched time propagator.

Besides these three spatial discretization methods, there are also the higher-order finite difference method, the spectral difference method, and the distributed approximating functional (DAF) method [12–16]. They usually exhibit spectral convergence, exactly the same as the global spectral methods. However, each grid point connects only with the nearby "necessary" grid points, leading to a Hamiltonian matrix denser than that of the low-order finite difference method, but sparser than that of the global methods. The grid points are distributed evenly in the spectral difference method, whereas in the DAF method they are distributed exactly as in the Gauss quadrature of the precedent kernel. These methods avoid the discontinuities between elements of the spectral element method and are thus an optimal choice for calculations with smooth interaction potential functions. Since the grid points connect with each other semilocally, methods of this kind are also suitable for parallel calculations, similar to the spectral element method. The higher-order finite difference or spectral difference method and the DAF method have been applied successfully in the molecular dynamics field. Recently, Sun also put forward a DAF method that is capable of accurately describing the Coulomb singularity.

In the following, brief introductions are given to several spatial discretization methods that are currently popular in quantum dynamics calculations.

### 3.1. Discrete variable representation (DVR)

The DVR has been used quite successfully for a variety of problems, including the calculation of the ro-vibrational spectra of complex molecules and the reactive scattering cross sections of atoms and molecules [17,18]. In a usual variational or spectral calculation, one chooses some, in principle complete, orthonormal basis set, *φ*, truncated to a finite number of terms, *N*, and then evaluates all of the matrix elements of the Hamiltonian analytically or by a quadrature technique. If the spectral basis is chosen to be one of the classical orthogonal function families, satisfying the three-term recursion relationship

$$ x\,\varphi_n(x)=\alpha_n\varphi_n(x)+\beta_n\varphi_{n-1}(x)+\beta_{n+1}\varphi_{n+1}(x), $$

then there exists an associated Gauss quadrature rule with the same number of points and weights, which integrates exactly any integrand of degree (2*N* - 1) or less. The points and weights may be obtained by diagonalizing the tridiagonal matrix built from the recursion coefficients *α* and *β*. Under these conditions, it is possible to transform from the original spectral basis of *N* functions, *φ*, to a new basis of DVR or coordinate functions,

$$ u_i(x)=\sqrt{w_i}\sum_{n=0}^{N-1}\varphi_n^{*}(x_i)\,\varphi_n(x), \qquad i=1,\dots,N. $$
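The diagonalization route to the points and weights (the Golub-Welsch algorithm) can be sketched as follows for the Legendre case, where α_n = 0 and β_n = n/√(4n² − 1); the weights follow from the squared first components of the eigenvectors times the zeroth moment μ₀ = 2. The degree and test integrand are illustrative.

```python
import numpy as np

def gauss_legendre(N):
    """Gauss-Legendre nodes and weights from the Jacobi (recursion) matrix."""
    n = np.arange(1, N)
    beta = n / np.sqrt(4.0 * n**2 - 1.0)       # off-diagonal recursion coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)   # alpha_n = 0 for Legendre
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0] ** 2               # mu_0 = integral of the weight = 2
    return nodes, weights

x, w = gauss_legendre(5)
print(np.sum(w * x**8))   # degree 8 <= 2N - 1 = 9, so this is exact: 2/9
```

The same construction works for any classical family once its α and β coefficients are inserted.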

Here, all of the integrals are performed using the Gauss quadrature rule; there are no approximations, as a consequence of the fact that the quadrature rule is exact for all integrands of degree (2*N* - 1) or less. The Fourier functions, or particle-in-a-box functions, which are not even polynomials, can be verified to exactly satisfy Eq. (2) with a suitably chosen quadrature rule. In all of these cases, there exists a unitary transformation between the original spectral basis and the coordinate basis. In the DVR, the matrix elements of the potential *V*(*x*) can be written as

$$ V_{ij}=\int u_i^{*}(x)\,V(x)\,u_j(x)\,dx \approx \sum_k w_k\,u_i^{*}(x_k)\,V(x_k)\,u_j(x_k)=V(x_i)\,\delta_{ij}, $$

since the DVR functions satisfy $u_i(x_j)=\delta_{ij}/\sqrt{w_j}$ at the quadrature points. Thus, the matrix elements of the potential operator are diagonal and equal to the values of the potential at the Gauss quadrature points. This approximation has proved accurate for a wide class of problems. As a result, there are significant advantages in transforming from the original spectral basis to the DVR basis, in that the complicated matrix elements of the potential become diagonal and are simply equal to the potential values at the quadrature points. This is very convenient in a numerical calculation, since it leads to a sparse form of the Hamiltonian matrix.

As for the other part of the Hamiltonian, the matrix elements of the kinetic energy operator, while not diagonal in the DVR basis, can usually be evaluated simply, exactly, or analytically. Since the kinetic energy part of the Hamiltonian is a separable sum over the particle and coordinate variables, a direct product DVR basis results in a very sparse matrix representation for *n*-body problems. This has a number of advantages that can be well exploited in the iterative stage, where the action of the Hamiltonian matrix on the wave function needs to be evaluated many times, such as in the time propagation step or in the iterative computation of eigenstates by the Lanczos method. In a practical calculation, the basis functions need not be the classical polynomials or trigonometric functions. Often, the eigenstates calculated in a reduced-dimensional model are taken as the basis functions, which leads to an efficient potential-optimized DVR. In the past decades, complicated but efficient sequential truncation techniques and phase optimization methods have been developed for solving the ro-vibrational states of polyatomic molecules [8,19]. Calculations of reactive scattering and of ro-vibrational states of molecules with dimensionality up to nine using the DVR technique in a direct product form have been reported.

Usually, a DVR invented in the field of molecular dynamics does not work well for electronic dynamics, especially when the Coulomb singularity is rigorously included. The exception is the Lobatto-DVR method, which is based on the Legendre polynomials but uses the Gauss-Lobatto quadrature [20]. However, with a suitable phase optimization technique, the trigonometric functions can work well with the Coulomb singularities, even for describing the electrons of a diatomic molecule in cylindrical coordinates.

The Lagrange mesh (LM) method is popular in the field of atomic physics [21]. Like the DVR method, the LM method is a global method, using information from the whole domain of definition of the studied problem; it is an approximate spectral method taking the form of a collocation method. The LM method is a special case of the DVR method, and when classical orthogonal polynomials are adopted as the basis, the LM and DVR methods are identical. However, the LM method is derived from the Lagrange functions, which makes it convenient for regularizing the singularity, starting from the basis functions. In contrast, the quadrature in a DVR calculation is usually obtained by diagonalizing the coordinate matrix

$$ X_{mn}=\langle\varphi_m|\,x\,|\varphi_n\rangle, $$

where $\varphi_n$ are the spectral basis functions. This fact makes the DVR convenient for various generalizations and for optimization. The basic idea is to get rid of the constraints on the choice of the mesh points and the associated weights. These generalizations are close to the collocation spirit, but they cannot be so easily related to the LM method. It has been proven that they are capable of giving very accurate results.

The quadrature discretization (QD) method is efficient for calculating the low-lying bound states of molecules [22]. Like the DVR and LM methods, the QD method is a global method. It is very similar to the DVR method, but it uses nonclassical polynomials, generated with respect to a weight function derived from an auxiliary potential. In the QD method, a differential operator is replaced by a matrix: the *m*th derivative is represented by the *m*th power of the matrix representing the first derivative in the basis of the *N* polynomials.

### 3.2. Multidomain spectral element method

The Coulomb-attractive singularities between nuclei and electrons are very difficult to describe accurately. At the same time, the range of the radial coordinate in a calculation usually has to be large, which leads to a large Hamiltonian matrix when a global spectral method is used and results in heavy computational effort. The Lobatto-DVR, using the Lagrange interpolation polynomials on the Gauss-Legendre-Lobatto quadrature, is particularly suitable for describing the Coulomb-attractive singularity; its quadrature points, however, cluster densely around the two ends of the grid, so the average efficiency of the quadrature is not high. To overcome these two difficulties, the multidomain spectral method was proposed [3,4]. In this method, the whole range of the radial coordinate is divided into *M* segments, or finite elements, with boundaries,

The wave functions are represented in a basis of functions that are local to each finite element. In this spectral element method (also named the finite-element DVR, or FE-DVR), a few Lobatto-DVR functions,

which fulfill

are applied within each element. The basis functions are then defined explicitly on the *N*-point quadrature as

where the "bridge" function is used to ensure communication between two adjacent elements and guarantees the continuity of the wave function. Correspondingly, the kinetic energy matrix is given by

assuming each element has the same *N* Lagrange functions. Further,

where the *N*th-order Legendre polynomials appear. For

thus, the first and last diagonal values can be found to be

**Figure 1** illustrates the structure of the one-dimensional kinetic energy matrix. Since the many-particle kinetic energy operators are sums of one-particle operators, the generalization to tensor product basis sets is straightforward. **Figure 2** shows the grid points of a typical FE-DVR grid with 11 basis functions per element and the resulting grid spacing.
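A sketch of how such a grid can be generated, assuming the Gauss-Lobatto points are obtained from the roots of P'_{N−1} (which coincide with the roots of the Jacobi polynomial P^(1,1)_{N−2}); the element boundaries and N = 11 are illustrative values, not taken from the text.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_legendre

def lobatto(n):
    """n-point Gauss-Lobatto-Legendre nodes and weights on [-1, 1]."""
    inner, _ = roots_jacobi(n - 2, 1.0, 1.0)        # roots of P'_{n-1}
    x = np.concatenate(([-1.0], inner, [1.0]))
    w = 2.0 / (n * (n - 1) * eval_legendre(n - 1, x) ** 2)
    return x, w

def fedvr_grid(boundaries, n):
    """FE-DVR grid: Lobatto points per element, shared endpoints merged."""
    xl, _ = lobatto(n)
    pts = []
    for a, b in zip(boundaries[:-1], boundaries[1:]):
        seg = a + 0.5 * (b - a) * (xl + 1.0)
        pts.extend(seg[:-1])    # right endpoint is the next element's left one
    pts.append(boundaries[-1])
    return np.array(pts)

grid = fedvr_grid([0.0, 2.0, 4.0, 8.0], n=11)
print(len(grid))   # 3 elements * (11 - 1) + 1 = 31 points
```

Plotting the spacing `np.diff(grid)` shows the clustering of points near the element boundaries mentioned above.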

Even though the FE-DVR has been proven to work well in many cases, the oscillatory distribution of the grid points and the bridge functions lead to an unphysically large spectral range of the Hamiltonian matrix, as shown in the upper panels of **Figure 3**. This fact not only lowers the grid efficiency but also makes iterative propagators, such as the short-time iterative Lanczos (SIL) method, converge slowly [23]. To overcome this difficulty, one can use the variable mapping method or prolate spheroidal wave functions to obtain a more uniform spatial grid far from the origin and a dense spatial grid around the origin, increasing the numerical efficiency [24,25]. In the bottom panels of **Figure 3**, the grid distribution becomes nearly even after optimization, which improves the efficiency of the grid. At the same time, the spectral range of the Hamiltonian matrix is reduced several times compared with that of the original FE-DVR.

On the other hand, in order to exploit the sparseness of the Hamiltonian matrix in the FE-DVR, the powerful split operator propagator, which is even insensitive to the spectral range of the Hamiltonian, cannot be applied. The common choice is the short-time iterative Lanczos method, which requires a significant amount of computer memory. Moreover, because only the continuity of the wave function, and not of its derivatives, is guaranteed between elements, numerical complications are introduced, and results with spectral accuracy cannot be obtained.

### 3.3. Distributed approximating functionals (DAF) method

It is known that for well-conditioned problems, that is, when infinitely differentiable solutions exist almost everywhere, global methods provide better accuracy than local methods using information from pieces of the domain; for example, the Lobatto-DVR method is capable of giving results of higher accuracy than the FE-DVR method. The drawback is that the Hamiltonian matrix is dense and sometimes requires more computational effort. The higher-order finite difference (FD) method is a good choice for obtaining results of high accuracy with a possibly much sparser Hamiltonian matrix, especially the spectral difference method, where accelerating weights are applied.

However, in the case of the Coulomb potential with its singularity, the boundary conditions are very important for the convergence of the FD method. With well-defined boundary conditions, the resulting Hamiltonian matrix is usually asymmetric, which leads to unphysical states and may be numerically unstable.

The distributed approximating functional (DAF) method presents a good solution to these problems: the correlated region around each grid point is reduced by multiplying the kernel by a controllable Gaussian weight, whose width parameter determines the effective bandwidth of the resulting matrix.

The potential operator has exactly the same form as in the precedent DVR method. Thus, the one-dimensional Hamiltonian matrix has a form similar to that obtained with the higher-order FD method, as shown in **Figure 4**.

As observed from Eq. (3), the DAF concept can be conveniently applied with many DVR kernels. A calculation can be switched back and forth conveniently between the DAF and the corresponding DVR method for checking convergence and saving computational effort, since the introduction of the DAF only reduces the computational effort without affecting the numerical efficiency of its precedent DVR. The DAF method has been successfully applied to solving vibrational bound states, to reactive scattering processes, and to calculating electronic states with an explicit Coulomb singularity.

### 3.4. Sparse grid (SG)

With increasing dimensionality, the computational effort for solving the Schrödinger equation in a basis formed by the direct product of one-dimensional (1D) bases increases exponentially, regardless of how efficient the grid is in each dimension. However, in a specified energy range, the physically relevant phase space does not grow nearly as fast with increasing dimensionality. Thus, many grid-point combinations from the different degrees of freedom in a direct product form are actually useless.

In 1963, Smolyak first introduced a multivariable integration method to overcome this difficulty [31]. This approach constructs a multidimensional multilevel basis by a special truncation of the tensor product expansion of a one-dimensional multilevel basis, which holds the promise of alleviating the restriction of excessively large numbers of unknowns and solves a set of differential equations using significantly fewer unknowns. In the limit of high accuracy, the number of unknowns required by the sparse grid combination technique is independent of the spatial dimensionality of the problem. For example, asymptotically, a spatial 3D problem requires the same order of unknowns as a spatial 1D problem.

Sparse grids for solving a discretized partial differential equation (PDE) were introduced in 1990 by Zenger [32], in order to significantly reduce the number of degrees of freedom, while causing only a marginal increase in the representation error relative to the standard discretization. In 1992, Griebel et al. [33] showed that, for two and three dimensions, the sparse grid complexity and representation error can also be achieved by the so‐called combination technique which is suitable for a parallel computation.

The sparse grid combination technique can be understood as a multivariate extrapolation technique. Instead of solving a set of differential equations on a single grid, solutions are obtained on a number of semi-coarsened grids. After solving these semi-coarsened problems, the solutions are combined to obtain a single, more accurate solution. Currently, there is focused interest in solving various PDEs using the sparse grid technique with different quadratures and in combination with multidomain techniques.
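A minimal sketch of the combination technique for a 2D integral: tensor-product trapezoidal rules are evaluated on the semi-coarsened grids along the diagonals |l| = n and |l| = n − 1 and combined. The integrand and the level n = 7 are illustrative choices.

```python
import numpy as np

def trap2(f, i, j):
    """Tensor-product trapezoidal rule on a (2^i+1) x (2^j+1) grid over [0,1]^2."""
    x, y = np.linspace(0, 1, 2**i + 1), np.linspace(0, 1, 2**j + 1)
    wx = np.full(x.size, 1.0 / 2**i); wx[[0, -1]] *= 0.5
    wy = np.full(y.size, 1.0 / 2**j); wy[[0, -1]] *= 0.5
    return wx @ f(x[:, None], y[None, :]) @ wy

def combination(f, n):
    """Combination technique: sum over |l| = n minus sum over |l| = n - 1."""
    return (sum(trap2(f, i, n - i) for i in range(n + 1))
            - sum(trap2(f, i, n - 1 - i) for i in range(n)))

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2
print(abs(combination(f, 7) - exact))   # error ~1e-4, far below any single coarse grid
```

The component grids together contain a few thousand points, compared with the 129 × 129 = 16,641 points of the full grid of the same resolution, while the accuracy is only marginally worse.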

There are many studies of the sparse grid method in the literature, and its theoretical details are not presented here. With multidimensional sparse grids and hierarchical bases, vibrational states of a molecule with a dimensionality of 64 have been obtained [34], electronic structures of small molecules have been solved [35–40], and solutions of the time-dependent Schrödinger equation for model potentials with the split operator method and a hierarchical Fourier basis have been reported [41].

In passing, we note that the choice of coordinates is important for an efficient quantum dynamics calculation, especially for a calculation using the sparse grid technique, where the combination technique can be effectively realized. In spirit, the usage of hierarchical bases is very similar to the high-dimensional model representation (HDMR) and the multiconfiguration time-dependent Hartree (MCTDH) method [42], and general applications of multidimensional sparse grids and hierarchical bases to high-dimensional atomic and molecular dynamics are expected in the near future [43]. Considering the prospective development of the multidomain technique for solving PDEs and the coordinate problem in state-to-state reactive scattering calculations, we may also expect that exploring the multidomain technique in the field of molecular reaction dynamics would be fruitful.

## 4. Time discretization methods

Besides the coordinate discretization methods, the efficiency of the time propagator is also vital for solving the TD Schrödinger equation. In particular, it is not surprising that the efficiency of a time propagator is high only for particular spatial discretization methods. During the past decades, various time propagators have been developed, which can roughly be classified into two classes: short-time propagators, which are often of low order, and long-time propagators based on a polynomial expansion, which are often accurate. In the following, only the most useful ones are briefly reviewed.

### 4.1. Split operator method (second and higher order)

The second-order split operator method is the most popular one in quantum dynamics calculations, especially in molecular quantum dynamics in combination with the DVR method [1]. The second-order split operator method is realized as

$$ e^{-i\hat{H}\Delta t/\hbar}\approx e^{-i\hat{V}\Delta t/2\hbar}\,e^{-i\hat{T}\Delta t/\hbar}\,e^{-i\hat{V}\Delta t/2\hbar} $$

or

$$ e^{-i\hat{H}(t)\Delta t/\hbar}\approx e^{-i\hat{V}(t+\Delta t/2)\Delta t/2\hbar}\,e^{-i\hat{T}\Delta t/\hbar}\,e^{-i\hat{V}(t+\Delta t/2)\Delta t/2\hbar} $$

for the TID or TD Hamiltonian, respectively, ignoring the time ordering. With the DVR method, or with the Fourier basis, where the fast Fourier transform (FFT) can be applied, the split operator can be implemented conveniently. Higher-order split operator methods can be numerically much more efficient than the second-order one for evolving the wave function of a diatomic molecule or of triatomic reactive scattering [44,45].
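A minimal FFT-based sketch of one second-order step of the form e^{−iV̂Δt/2} e^{−iT̂Δt} e^{−iV̂Δt/2}, checked on the harmonic-oscillator ground state; the grid, potential, and step size are illustrative (ħ = m = 1).

```python
import numpy as np

def split_step(psi, V, k, dt):
    """One second-order split-operator step (hbar = m = 1)."""
    psi = np.exp(-0.5j * V * dt) * psi                               # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))  # full kinetic step
    return np.exp(-0.5j * V * dt) * psi                              # half potential step

N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, h)
V = 0.5 * x**2

psi = np.pi ** -0.25 * np.exp(-x**2 / 2.0)   # harmonic-oscillator ground state
for _ in range(100):
    psi = split_step(psi, V, k, dt=0.01)

norm = h * np.sum(np.abs(psi) ** 2)
overlap = abs(h * np.sum(np.conj(np.pi ** -0.25 * np.exp(-x**2 / 2.0)) * psi))
print(norm, overlap)   # both stay ~1: the ground state only acquires a phase
```

Each factor is a unitary multiplication in either coordinate or momentum space, so the norm is conserved to machine precision regardless of the step size, and only the accuracy, not the stability, limits Δt.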

### 4.2. Crank‐Nicolson and spectral transformed Crank‐Nicolson propagator

The Crank-Nicolson propagator is particularly popular for solving the TD Schrödinger equation in the field of atomic physics. One well-known reason is that the Coulomb singularity is difficult for a spectral method, whereas the finite difference method describes it well and is therefore often adopted; the finite difference method in combination with the Crank-Nicolson propagator advances the TD Schrödinger equation very efficiently by solving a linear equation. The other reason is less obvious and needs a bit more explanation; it comes from the concept of the spectrally transformed Hamiltonian [46]. As we know, with a TID Hamiltonian

and

Thus

By retaining the first Taylor expansion term of

On the other hand, we can define a spectrally transformed Hamiltonian

Hamiltonian

For this equation, we conveniently have an exact short‐time propagator

Therefore, in the traditional usage of the Crank-Nicolson propagator, the wave function is actually propagated with the spectrally transformed Hamiltonian

In calculations involving the Coulomb singularity, the spectral range of the Hamiltonian is usually huge, which imposes an upper limit on the time step through

As seen from Eq. (4), this spectral transformation reduces the original huge spectral range of the Hamiltonian
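A sketch of the traditional usage on a finite difference grid: each step solves (1 + iĤΔt/2)ψⁿ⁺¹ = (1 − iĤΔt/2)ψⁿ, which is unconditionally stable and exactly norm conserving for a Hermitian Ĥ. The harmonic potential and parameters are illustrative (ħ = m = 1).

```python
import numpy as np

N, L, dt = 200, 16.0, 0.02
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

# finite difference Hamiltonian: -(1/2) d^2/dx^2 + x^2/2
H = (np.diag(np.full(N, 1.0 / h**2))
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1)
     + np.diag(0.5 * x**2))

A = np.eye(N) + 0.5j * dt * H   # left-hand (implicit) operator
B = np.eye(N) - 0.5j * dt * H   # right-hand (explicit) operator

psi = np.pi ** -0.25 * np.exp(-x**2 / 2.0)
for _ in range(50):
    psi = np.linalg.solve(A, B @ psi)

print(h * np.sum(np.abs(psi) ** 2))   # norm stays 1: Crank-Nicolson is unitary
```

Since the finite difference Hamiltonian is tridiagonal, a production code would use a banded solver instead of the dense `np.linalg.solve` used here for brevity.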

### 4.3. Short‐time iterative Lanczos method

Another popular short-time propagator is the short-time iterative Lanczos (SIL) method [47]. The Lanczos method was initially proposed to calculate the eigenvalues and eigenfunctions of a symmetric Hamiltonian matrix by solving a tridiagonal matrix, obtained by a similarity transformation of the original Hamiltonian matrix through the Krylov vectors. The fundamental idea of the Lanczos method is therefore to iteratively build a small, orthonormal set of vectors spanning the Krylov subspace, taking as the initial vector the solution of the time-dependent Schrödinger equation at the previous time step. This small set of vectors is used to diagonalize the Hamiltonian, which enables the exponentiation of the matrix to be carried out. The SIL method is realized by initializing

and by expressing generally

where the coefficients are given by

The projected subspace representation of the Hamiltonian operator has the tridiagonal form

Then, the propagation can be accomplished by diagonalizing a small matrix

The SIL method usually requires an order of magnitude more memory than a calculation using the split operator or Crank-Nicolson method. However, since the SIL does not need to switch between different representations, unlike the split operator method, it can exploit the block sparseness of the one-dimensional kinetic operator matrices. Several works have reported the combined use of the SIL method and the FE-DVR method. On the other hand, the number of iterations needed for a convergent result with the SIL is directly proportional to the spectral range of the Hamiltonian. Therefore, even though the SIL method takes good advantage of the sparseness of the FE-DVR Hamiltonian matrix, the unphysically large spectral range resulting from the finite element discretization slows the convergence of the SIL method, requiring more iterations and thus more memory and more computational effort.
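A sketch of one Krylov step: m matrix-vector products build the tridiagonal matrix T, which is diagonalized to exponentiate the Hamiltonian in the subspace. The result is checked against exact propagation by full diagonalization; the grid, m, and Δt are illustrative (ħ = 1).

```python
import numpy as np

def lanczos_step(H, psi, dt, m=20):
    """Propagate psi by exp(-iH dt) in an m-dimensional Krylov subspace."""
    nrm = np.linalg.norm(psi)
    V = [psi / nrm]
    alpha, beta = [], []
    w = H @ V[0]
    alpha.append(np.vdot(V[0], w).real)
    w = w - alpha[0] * V[0]
    for j in range(1, m):
        b = np.linalg.norm(w)
        beta.append(b)
        V.append(w / b)
        w = H @ V[j] - b * V[j - 1]
        alpha.append(np.vdot(V[j], w).real)
        w = w - alpha[j] * V[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    E, U = np.linalg.eigh(T)
    c = U @ (np.exp(-1j * E * dt) * U[0])   # exp(-iT dt) applied to e_1
    return nrm * (np.stack(V, axis=1) @ c)

# finite difference harmonic oscillator and a Gaussian wave packet
N, L = 128, 16.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
H = (np.diag(np.full(N, 1.0 / h**2))
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1)
     + np.diag(0.5 * x**2))
psi0 = np.exp(-(x - 1.0) ** 2 / 2.0 + 2j * x)
psi0 = psi0 / np.linalg.norm(psi0)

psi_sil = lanczos_step(H, psi0, dt=0.01, m=20)

E, U = np.linalg.eigh(H)                  # exact reference by diagonalization
psi_ref = U @ (np.exp(-1j * E * 0.01) * (U.conj().T @ psi0))
print(np.linalg.norm(psi_sil - psi_ref))  # SIL matches the exact short-time step
```

The number of iterations m needed for convergence grows with the spectral range of H times Δt, which is the sensitivity discussed in the text.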

### 4.4. Symplectic propagator

Symplectic time propagators for the time-dependent Schrödinger equation are of particular interest, especially higher-order ones. They can be very accurate and are simple to implement [48]. For a typical symplectic propagator, the implementation is done by separating the wave function into real and imaginary parts,

$$ \psi(t)=q(t)+i\,p(t), $$

where both $q(t)$ and $p(t)$ are real functions. The TD Schrödinger equation can then be rewritten as

$$ \frac{d}{dt}\begin{pmatrix} q \\ p \end{pmatrix}=\frac{1}{\hbar}\begin{pmatrix} 0 & \hat{H} \\ -\hat{H} & 0 \end{pmatrix}\begin{pmatrix} q \\ p \end{pmatrix}, \qquad (5) $$

with $\hat{H}$ the real, symmetric Hamiltonian matrix in the chosen grid representation.

The evolution operator of Eq. (5) can be written as

$$ U(t)=\begin{pmatrix} \cos(\hat{H}t/\hbar) & \sin(\hat{H}t/\hbar) \\ -\sin(\hat{H}t/\hbar) & \cos(\hat{H}t/\hbar) \end{pmatrix}, \qquad (6) $$

which is an orthogonal and symplectic matrix. Direct evaluation of Eq. (6) is expensive. By using composition methods, evaluation of Eq. (6) can be written as

where

Various sets of coefficients determined by the composition methods have been reported and have proven highly accurate for solving the time-dependent Schrödinger equation in molecular dynamics.

Although the symplectic propagator implemented in this form is accurate and straightforward, it is unstable and gives divergent results when the adopted time step exceeds an upper limit. Usually, this upper limit, which is determined by the spectral range of the Hamiltonian, is not large. Thus, the numerical efficiency of the symplectic propagator is usually not high. This is quite different from the split operator in exponential form, whose time step is limited only by the numerical error. This fact explains its much lower popularity in practical quantum dynamics calculations.
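A minimal sketch of a second-order symplectic (leapfrog-type) integration of the real/imaginary-part equations dq/dt = Ĥp, dp/dt = −Ĥq (ħ = 1; grid and time step are illustrative). Note that the step must satisfy Δt ≲ 2/||Ĥ||, which illustrates the sensitivity to the spectral range discussed above.

```python
import numpy as np

def symplectic_step(q, p, H, dt):
    """Second-order leapfrog for dq/dt = H p, dp/dt = -H q (hbar = 1)."""
    p = p - 0.5 * dt * (H @ q)
    q = q + dt * (H @ p)
    p = p - 0.5 * dt * (H @ q)
    return q, p

N, L = 128, 16.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
H = (np.diag(np.full(N, 1.0 / h**2))
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1)
     + np.diag(0.5 * x**2))

psi = np.pi ** -0.25 * np.exp(-x**2 / 2.0)   # real initial wave function
q, p = psi.copy(), np.zeros_like(psi)
dt = 0.002                                   # well below the stability limit ~2/||H||
for _ in range(500):
    q, p = symplectic_step(q, p, H, dt)

norm = h * np.sum(q**2 + p**2)
print(norm)   # stays close to 1 while dt is below the stability limit
```

Raising `dt` above roughly 2 divided by the largest eigenvalue of `H` makes the iteration diverge exponentially, in contrast to the exponential split operator, which merely loses accuracy.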

### 4.5. Chebyshev and real Chebyshev polynomial expansion with a TID Hamiltonian

The most popular long-time propagator is the Chebyshev polynomial propagator, introduced by Tal-Ezer and Kosloff [2]. It is accomplished as follows:

$$ e^{-i\hat{H}t/\hbar}\,\psi \approx e^{-i\bar{E}t/\hbar}\sum_{n=0}^{N}a_n(t)\,T_n(\hat{H}_{\mathrm{norm}})\,\psi, $$

where $T_n$ is the *n*th-order Chebyshev polynomial and $\hat{H}_{\mathrm{norm}}=\left(2\hat{H}-(E_{\max}+E_{\min})\right)/(E_{\max}-E_{\min})$ is the Hamiltonian scaled so that its spectrum lies within $[-1,1]$, with $\bar{E}=(E_{\max}+E_{\min})/2$. The expansion coefficients are

$$ a_n(t)=(2-\delta_{n0})\,(-i)^n\,J_n\!\left(\frac{(E_{\max}-E_{\min})\,t}{2\hbar}\right), $$

where $J_n$ is the *n*th-order Bessel function of the first kind. Since the Chebyshev polynomials can be obtained by the iteration

$$ T_{n+1}(\hat{H}_{\mathrm{norm}})\,\psi=2\hat{H}_{\mathrm{norm}}\,T_n(\hat{H}_{\mathrm{norm}})\,\psi-T_{n-1}(\hat{H}_{\mathrm{norm}})\,\psi, $$

the propagation of the wave function by the Chebyshev polynomial expansion can be implemented by an iterative procedure.
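A sketch of the iterative implementation. The spectral bounds are obtained here by direct diagonalization purely for illustration (in practice they are estimated), and the number of terms must somewhat exceed ΔE·t/2ħ for the Bessel coefficients to decay; all parameters are illustrative (ħ = 1).

```python
import numpy as np
from scipy.special import jv

def chebyshev_propagate(H, psi, t):
    """Propagate psi by exp(-iHt) via the Chebyshev-Bessel expansion (hbar = 1)."""
    E = np.linalg.eigvalsh(H)                 # illustrative; in practice estimated
    Emin, Emax = E[0], E[-1]
    dE, Ebar = Emax - Emin, 0.5 * (Emax + Emin)
    Hn = (2.0 * H - 2.0 * Ebar * np.eye(len(H))) / dE   # spectrum mapped to [-1, 1]
    z = 0.5 * dE * t
    nterms = int(z) + 40                      # safely past n ~ z, where J_n(z) decays
    phi_prev, phi = psi, Hn @ psi
    out = jv(0, z) * phi_prev + 2.0 * (-1j) * jv(1, z) * phi
    for n in range(2, nterms):
        phi_prev, phi = phi, 2.0 * (Hn @ phi) - phi_prev   # Chebyshev recursion
        out = out + 2.0 * (-1j) ** n * jv(n, z) * phi
    return np.exp(-1j * Ebar * t) * out

# finite difference harmonic oscillator and a Gaussian packet; compare to diagonalization
N, L = 128, 16.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
H = (np.diag(np.full(N, 1.0 / h**2))
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1)
     + np.diag(0.5 * x**2))
psi0 = np.exp(-(x - 1.0) ** 2 / 2.0 + 2j * x)
psi0 /= np.linalg.norm(psi0)

t = 0.1
psi_ch = chebyshev_propagate(H, psi0, t)
E, U = np.linalg.eigh(H)
psi_ref = U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))
print(np.linalg.norm(psi_ch - psi_ref))   # agrees to near machine precision
```

Because the Bessel coefficients fall off superexponentially once n exceeds ΔE·t/2, a single long step costs only marginally more than many short ones, which is what makes this a long-time propagator.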

For an initial wave function that has only a real part, the time evolution of the wave function can be achieved in the order domain of the Chebyshev polynomials, instead of the time domain [49,50]. This can be understood as follows:

With

and

Therefore, the real and imaginary parts of the wave function can be propagated separately. By introducing the spectrally transformed Hamiltonian,

Then, in the space of Ξ, the evolution of the real wave function

Here, the time step

with

### 4.6. Propagator with explicitly TD Hamiltonian

When the Hamiltonian is time-dependent, such as when a laser pulse interacts with a molecule or atom, the time ordering sometimes has to be considered in calculations using a short-time propagator to obtain numerical results of high accuracy. Short-time propagators can be supplemented with an Euler-MacLaurin expansion for the time-integrated inhomogeneous term or with a Magnus expansion to account for the time ordering [53,54]. Tal-Ezer and his coworkers recently introduced a new Chebyshev polynomial expansion method for the inhomogeneous time-dependent Schrödinger equation; for more details, one may refer to Ref. [55].

## 5. Boundary conditions

In a solution of the TD Schrödinger equation using a spatial discretization method, the spatial grid range has to be limited. During the propagation, however, the wave function usually extends beyond the grid. One has to artificially impose a boundary condition which is capable of damping the wave function before it reaches the end of the grid, without affecting the interior wave function. A usual choice is to include an absorbing potential in the potential operator or to impose a damping function on the wave function. The smooth exterior complex scaling method was reported to be a more accurate way of imposing boundary conditions in a numerical solution of the TD Schrödinger equation, and it has been applied in calculations of electronic dynamics induced by ultrashort laser pulses.
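A minimal sketch of the damping-function approach (the Gaussian mask shape and its parameters are illustrative choices, not a prescription from the references):

```python
import numpy as np

# Outgoing boundary conditions on a finite grid: after each propagation
# step, multiply the wave function by a smooth damping mask that is
# exactly 1 in the interior and falls smoothly toward 0 at the grid edges.
x = np.linspace(-50.0, 50.0, 1024)

def damping_mask(x, x_abs=40.0, strength=0.02):
    """Gaussian-type mask acting only in the absorbing regions |x| > x_abs."""
    mask = np.ones_like(x)
    edge = np.abs(x) > x_abs
    mask[edge] = np.exp(-strength * (np.abs(x[edge]) - x_abs) ** 2)
    return mask

mask = damping_mask(x)
psi = np.exp(-(x - 35.0) ** 2)        # packet approaching the right edge
psi_damped = psi * mask               # applied once per time step
print(mask[np.abs(x) <= 40.0].min())  # interior is untouched: 1.0
```

The absorbing-potential alternative is formally equivalent: adding a negative imaginary potential −iW(x) to the Hamiltonian produces a decay factor exp(−W(x)Δt/ħ) per step, which plays the same role as the mask above.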

## 6. Applications

With the pioneering work on the solution of the Schrödinger equation in the field of molecular quantum dynamics, the developments of quantum wave packet methods now find general application in atomic physics, molecular physics, molecular reaction dynamics, and the structure simulations of semiconductors and nanomaterials. In the following sections, only works chosen from the author's personal interest are briefly reviewed, which may not be the most representative ones.

### 6.1. Feshbach resonance in the reaction of F + H_{2}/HD (*v*_{0} = 0, *j*_{0} = 0) and Cl + HD (*v*_{0} = 1, *j*_{0} = 0)

Reactive resonances reveal the quasi-bound states of the reaction complex with unique clarity, and they do exist. Recognition of reactive resonances is helpful for understanding how elementary chemical processes take place at a single quantum-state level. F + H_{2} and its isotopic analogs are the most classical prototypes [56].

Interest in the F + H_{2} reaction largely results from Lee's benchmark molecular beam studies [57] and early chemical laser work [58]. In 2000, the work on the F + HD reaction of Dong et al. [59] discovered the existence of reactive resonances in crossed beam experiments, where a resonance‐enhanced step in the excitation function was observed.

In 2006, with theory and experiment exhibiting consistent behavior on the XXZ PES, the work of Qiu et al. [60] reported evidence for resonances in the F + H_{2} reaction, as shown in **Figure 5**. This report was enabled by crossed molecular beams combined with the high-resolution Rydberg-state tagging technique, developments in quantum scattering methods, and ab initio methods. The sharp forward peak arising at the collision energy of 0.52 kcal/mol is actually due to the interference between the first two Feshbach resonances. For total angular momentum *J* = 0, there are two resonance states, at collision energies of 0.26 and 0.46 kcal/mol; with increasing *J*, the resonance energies shift to higher collision energy. The three-dimensional (3D) scattering wave function at the collision energy of 0.26 kcal/mol suggests the existence of three nodes along the H-F coordinate (which correlates to the HF product) in the HF-H’ complex, with zero nodes along the reaction coordinate. The projection of the *J* = 0 scattering wave function at 0.26 kcal/mol onto the HF vibrational states suggests that the dominant character of this wave function is HF (*v*’ = 3), with the outgoing waves mostly on HF (*v*’ = 2). This implies that the resonance state at 0.26 kcal/mol is the lowest resonance state, (003), which is trapped in the HF (*v*’ = 3)-H’ vibrational adiabatic potential (VAP) well.

The 3D scattering wave function for *J* = 0 at the collision energy of 0.46 kcal/mol suggests the existence of three nodes along the HF coordinate (which correlates to the HF product) in the HF‐H’ complex, with a single node along the reaction coordinate. The projection of the *J* = 0 scattering wave function at 0.46 kcal/mol to the HF vibrational states indicates that the dominant character in this wave function is HF (*v*’ = 3), but with the outgoing waves also mostly on HF (*v*’ = 2). This indicates that the resonance state at 0.46 kcal/mol is the excited reaction resonance state trapped in the HF (*v*’ = 3)‐H’ VAP well. This resonance state can be assigned to the (103) resonance state, which has one‐quantum vibration along the reaction coordinate, zero‐quantum vibration on the bending motion (or hindered rotation), and three‐quanta vibration along the HF stretching. The resonance schemes were shown in **Figure 5**.

Their next work, at a higher collision energy (0.94 kcal/mol), revealed tunneling and shape resonance effects [61], in addition to the Feshbach resonance leading to the forward scattering in the F + H_{2} reaction, which indicated that reactive resonances play quite distinct roles in the same reaction at different collision energies.

In 2008, Ren et al. [62] measured the DCSs at several different collision energies, which showed strong variation as a function of collision energy, resulting from the existence of strong reactive resonance states. The theoretical DCSs on the new version of the PES developed by Zhang and his coworkers (FXZ PES) showed good agreement with the experimental observations, as shown in **Figure 6**, demonstrating that F + H_{2} is the first reaction, besides the H + H_{2} reaction, that can be investigated with spectroscopic accuracy.

To gain a better understanding of the reactive resonance state, the ground resonance state wave function of the F + HD → HF + D reaction, along with the two-dimensional (2D) minimal potential optimized along the angular degree of freedom, is presented in **Figure 7**. It is seen there that the wave function shows features of a semibound state. The outgoing part, corresponding to the HF(*v*’ = 2) product, has a two-node structure, but the inside peak has three nodes, corresponding to the excited vibrational state *v*’ = 3.

In 2015, resonances in the Cl + HD (*v*_{0} = 1, *j*_{0} = 0) reaction were experimentally observed, the first example besides the resonances in F + H_{2} and F + HD [63]. The quantum dynamics revealed that the resonances in the Cl + HD reaction come from bond softening due to the strong anharmonicity over the transition barrier. Reactive resonances of this kind are expected to arise in many reactions; thus, the existence of reactive resonances is not as rare as previously thought.

### 6.2. Isotope effects in the O + O_{2} reaction

The O + O_{2} isotope exchange reactions play an important role in determining the oxygen isotopic composition of a number of trace gases in the atmosphere. Their temperature dependence and kinetic isotope effects (KIEs) provide important constraints on our understanding of the origin and mechanism of these and other uncommon oxygen KIEs, which are important in the atmosphere [64,65]. Because of the deep potential well along the reaction path and the relatively large mass of the O atom, rigorous state-to-state quantum dynamics calculations are computationally demanding. Thanks to the development of numerical methods and computer techniques, calculations of the state-to-state differential cross sections and the thermal reaction rate constant have become possible on the most recent Dawes-Lolur-Li-Jiang-Guo (DLLJG) potential energy surface [66] [67–70].

To study the KIEs in the O + O_{2} reaction, the integral cross sections (ICSs) of the ^{18}O + ^{32}O_{2} and ^{16}O + ^{36}O_{2} reactions with the initial states (*v*_{0}, *j*_{0}) = (0, 1), (0, 5), (0, 9), and (0, 21) were calculated explicitly on the newly constructed DLLJG PES, using the reactant coordinate-based time-dependent wave packet method with a DVR for each degree of freedom. The ICSs for other *j*_{0} ≤ 21 values were estimated by a *j*_{0}-interpolation method, and those for *j*_{0} > 21 were estimated by extrapolation. These ICSs yield the corresponding initial-state-specific reaction rate coefficients. Thermal reaction rate coefficients, with Boltzmann averaging over all relevant initial states, were thus calculated approximately, and they exhibit a clear negative temperature dependence for both reactions.
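The route from ICSs to thermal rate coefficients can be sketched as follows (atomic units; the cross-section curves and rotational energies below are hypothetical placeholders, not the O + O_{2} data of the works cited):

```python
import numpy as np

# Sketch: from initial-state-specific integral cross sections sigma(E)
# to rate coefficients and a Boltzmann-averaged thermal rate.
def rate_from_ics(E, sigma, mu, kT):
    """k(T) = sqrt(8/(pi*mu*kT^3)) * integral of sigma(E)*E*exp(-E/kT) dE."""
    dE = E[1] - E[0]                   # uniform energy grid assumed
    integrand = sigma * E * np.exp(-E / kT)
    return np.sqrt(8.0 / (np.pi * mu * kT**3)) * np.sum(integrand) * dE

E = np.linspace(1e-4, 0.05, 2000)      # collision energy grid (hartree)
mu, kT = 2.0e4, 1.0e-3                 # reduced mass and temperature (a.u.)

# Hypothetical state-specific cross sections and rotational energies
ics = {0: 50.0 * np.exp(-E / 0.01),    # model sigma for j0 = 0
       1: 40.0 * np.exp(-E / 0.01)}    # model sigma for j0 = 1
E_rot = {0: 0.0, 1: 2.0e-4}

# Boltzmann average over initial states with (2j+1) degeneracy
w = {j: (2 * j + 1) * np.exp(-E_rot[j] / kT) for j in ics}
Z = sum(w.values())
k_thermal = sum(w[j] * rate_from_ics(E, ics[j], mu, kT) for j in ics) / Z
print(k_thermal)
```

In the actual calculations the sum runs over all relevant (*v*_{0}, *j*_{0}) states, with the interpolated and extrapolated ICSs filling in the states that were not computed explicitly.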

The calculated thermal reaction rate coefficients of both the ^{18}O + ^{32}O_{2} and ^{16}O + ^{36}O_{2} reactions on the DLLJG PES show a clear negative temperature dependence, in sharp contrast with the positive temperature dependence obtained on the earlier modified Siebert‐Schinke‐Bittererova (mSSB) PES [71]. In addition, there is an improved agreement between the calculated KIE and the experiment. These results validated the absence of the “reef” structure in the entrance/exit channels of the DLLJG PES, which is present in the mSSB PES.

The calculated total reaction probabilities of the ^{18}O + ^{32}O_{2} and ^{16}O + ^{36}O_{2} reactions on the DLLJG PES as a function of the total angular momentum (*J*) suggest that the O + O_{2} exchange reactions are dominated by resonances at very low collision energies (<0.2 eV) immediately above the reaction threshold, as shown in **Figure 8**. These resonances depend strongly on the masses of the oxygen atoms involved and/or the zero-point energy (ZPE) difference between the reactant and product diatoms. Although it appears that the isotope effects in the exchange reactions come from the ZPE difference, the underlying physical mechanism is shown here to result from strong near-threshold reactive resonances which mediate the reactions. Thus, it was pointed out for the first time that accurate characterization of the reactivity of these near-thermoneutral reactions immediately above the reaction threshold is important for a correct characterization of the thermal reaction rate coefficients [47].

### 6.3. State‐to‐state cross sections calculation of a chemical reaction

Theoretical methods for state-to-state calculations are of vital importance in the field of molecular reaction dynamics, since they enable a detailed comparison between theoretical results and the experimental differential cross sections measured in crossed molecular beams [72]. Traditionally, only the product coordinate-based method was available to achieve this goal using the time-dependent quantum wave packet method in Jacobi coordinates. In 1996, John Zhang and coworkers put forward an efficient reactant-product decoupling method for direct reactions. Recently, the reactant coordinate-based method and the transition state wave packet method were also proposed for state-to-state reaction calculations.

The transition state wave packet (TSWP) method is a relatively new method for computing the reactive *S*-matrix elements, and it has recently been demonstrated to be accurate and efficient for computing *J* = 0 state-to-state reaction probabilities for three-, four-, and even six-atom systems [73–79]. The TSWP method is based on the quantum transition state theory of Miller [80,81], which provides a direct way to calculate the thermal rate constant, *k*(*T*), from a flux correlation function on dividing surfaces located near the transition state [82]. Recently, Manthe and coworkers extended this formalism to compute the *S*-matrix elements from generalized flux correlation functions of TSWPs [49,83]. Specifically, a set of initial TSWPs are calculated as the eigenstates of the thermal flux operator, which is defined in the transition state region as [84]

$$\hat{F}(T)=e^{-\hat{H}/2k_BT}\,\hat{F}\,e^{-\hat{H}/2k_BT},\qquad \hat{F}=\frac{i}{\hbar}\left[\hat{H},h(\hat{s})\right],$$

where $h(\hat{s})$ is the Heaviside step function of the coordinate $\hat{s}$ defining the dividing surface. The *S*-matrix elements are then obtained from the resulting cross-correlation functions, through the energy-resolved projection amplitudes in the product (*p*) and reactant (*r*) channels,

$$A^{p/r}_{\nu,n}(E)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\,e^{iEt/\hbar}\,\big\langle\phi^{p/r}_{\nu}\big|\,e^{-i\hat{H}t/\hbar}\,\big|f_n\big\rangle,$$

where $\{f_n\}$ and $\{|f_n\rangle\}$ are the eigenvalues and eigenstates of the thermal flux operator $\hat{F}(T)$, and $\phi^{p/r}_{\nu}$ are the asymptotic states of the product and reactant channels.
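The construction of an energy-resolved amplitude by Fourier transforming a time cross-correlation function can be sketched for a toy two-state model whose result is known in closed form (the overlaps, damping window, and grids are illustrative assumptions):

```python
import numpy as np

# Energy-resolved projection amplitude from a time cross-correlation
# function: A(E) = (1/2 pi) * integral dt exp(iEt) <phi| exp(-iHt) |f>.
# Toy diagonal model so the result can be checked: |A(E)| peaks at the
# eigenenergies weighted by the overlap products <phi|n><n|f>.
E_n = np.array([0.5, 1.5])               # toy eigenenergies (hbar = 1)
c = np.array([0.9, 0.1])                 # overlap products <phi|n><n|f>

t = np.linspace(0.0, 400.0, 8000)
dt = t[1] - t[0]
# cross-correlation function C(t) = sum_n c_n exp(-i E_n t)
C = (c[None, :] * np.exp(-1j * E_n[None, :] * t[:, None])).sum(axis=1)

E_grid = np.linspace(0.0, 2.0, 201)
window = np.exp(-1e-2 * t)               # damping window against ringing
A = (np.exp(1j * E_grid[:, None] * t[None, :]) *
     (C * window)[None, :]).sum(axis=1) * dt / (2.0 * np.pi)

E_peak = E_grid[np.argmax(np.abs(A))]
print(E_peak)                            # dominant channel energy
```

The dominant peak sits at the eigenenergy carrying the largest overlap, illustrating how a single propagation of each TSWP yields amplitudes at all energies simultaneously.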

In addition to its conceptual clarity, the TSWP method has several numerical advantages. First, the propagation of each TSWP is essentially an inelastic calculation in the appropriate Jacobi coordinates, in which much smaller bases/grids are needed. One of course needs to transform the initial TSWPs prepared in one coordinate into the other, but this transformation is performed only once in a small region near the transition state. Second, the existence of either a pre‐reaction or a post‐reaction well has a limited impact on the computational costs, as the analysis plane can be placed deep into the asymptotic region without significantly impeding the numerical efficiency. Third, in contrast to the initial state‐specific wave packet (ISSWP) approach, the entire *S*‐matrix can be obtained at all energies in one complete calculation. Finally, the multiple propagations of TSWPs are “embarrassingly” parallel, adding further to its numerical efficiency. However, with the increase of energy, the number of involved TSWPs grows rapidly, especially for those systems with small vibrational frequencies of the activated complex.

The reactant coordinate-based (RCB) method for computing the reactive *S*-matrix elements has been widely used for triatomic reaction systems [40,85–89], but it is only recently that the RCB method was extended to tetra-atomic systems [90,91]. The key idea of the RCB method is to propagate the time-dependent initial wave packet deep into the product asymptotic region, where the product states are well defined on an analysis plane. There are generally two schemes for defining the product states on the analysis plane: one evaluates the product states on the grids of the reactant coordinates with an interpolation scheme, while the other projects the product states onto a set of intermediate coordinates. The interpolation scheme needs fewer grid points to express the product wave functions, thus saving computer memory, while the intermediate coordinate method is numerically more efficient.

In the intermediate coordinate scheme, the projection plane for a particular product channel is designed to include the corresponding scattering coordinate, defined as the separation of the two products. This enables the expression of the product wave packet in a product form, in which its dependence on the scattering coordinate is simply a delta function. The remaining degrees of freedom are chosen from the reactant coordinates, which renders the projection of the time-dependent wave packet onto the intermediate coordinates very efficient. The overlaps between the time-dependent wave packets and the product states are calculated in the intermediate coordinates by transforming both from their respective reactant and product Jacobi coordinates into the set of intermediate coordinates.

Compared to the TSWP method, the RCB method has the advantage of propagating the wave packet in only the reactant Jacobi coordinates, which allows the simultaneous analysis of the two product channels with only a single propagation. However, the RCB method has its genesis in the ISSWP method, and it only calculates one column of the *S*-matrix.

### 6.4. Dissociative chemisorption of water on transition‐metal surfaces

The interaction of molecular species with metal surfaces is of great importance to many industrial applications. The dissociative chemisorption of water on transition-metal surfaces plays a pivotal role in understanding many heterogeneous industrial processes such as steam reforming, and it represents the rate-limiting step in the low-temperature water-gas shift (WGS) reaction on copper catalysts [92]. Tremendous progress has been made in understanding its dissociative chemisorption dynamics.

A total of 9 degrees of freedom should be considered for the dynamics of an H_{2}O molecule on a rigid surface (as shown in **Figure 11**), rendering it formidable to carry out a fully coupled nine-dimensional (9D) quantum dynamics calculation. Guo and coworkers performed six-dimensional (6D) quantum dynamics calculations to study the mode specificity of H_{2}O and the bond selectivity of HOD on a rigid flat Cu(111) surface [93–95], employing their 6D PESs developed with permutationally invariant polynomials (PIP). The 6D model neglects the effects of impact sites and surface corrugation, because the surface-lateral coordinates (*X* and *Y*) and the azimuthal angle were fixed at their transition-state values. The first quantum-state resolved molecular beam experiment on the dissociative chemisorption of D_{2}O on Ni(111) demonstrated a large enhancement in reactivity upon excitation of the asymmetric stretching mode of D_{2}O [96]. The observed mode specificity was semiquantitatively understood by Guo and coworkers using the 6D quantum dynamical approach on a 6D PES. Recently, Jiang and Guo investigated 6D site-specific dissociation probabilities of this system, based on a new 9D PES developed with the permutation-invariant polynomial neural network (PIP-NN) method [97].

Very recently, we reported the first seven-dimensional (7D) quantum dynamics study of the dissociative chemisorption of H_{2}O on Cu(111), based on an accurate 9D PES developed with a neural network approach [98,99]. The dissociation probabilities exhibit a strong azimuthal-angle dependence, and large differences were seen between the 7D and 6D results, indicating that the 6D quantum model, which neglects the azimuthal angle, can introduce substantial errors. A significant step forward in simulating gas-surface reactions from first principles was recently achieved when we reported the first full-dimensional quantum dynamics study of the chemisorption of H_{2}O on rigid Cu(111), with all 9 degrees of freedom (9D) fully coupled on a global potential energy surface [100]. We found that the full-dimensional quantum-mechanical reactivity is quite different from the corresponding results obtained with previous reduced-dimensional models, indicating the importance of the challenging full-dimensional quantum-mechanical calculations for a quantitatively accurate understanding of this reaction. Excitations of the vibrational modes of H_{2}O, in particular the stretching modes, are more efficacious than increasing the translational energy in promoting the reaction, an effect much stronger than observed in previous reduced-dimensionality quantum studies. The full-dimensional quantum-mechanical calculations not only offer the dynamic features with the highest accuracy but also allow a conclusive examination of previous dynamical approximations.

### 6.5. Electronic states of H_{2}^{+} and H_{2} by mapped DVR using cylindrical coordinate

For describing the low-lying electronic states of a diatomic molecule, the prolate spheroidal coordinate system is a natural choice. However, it may not be an optimal choice for describing high-lying states. In cylindrical coordinates, the Hamiltonian operator of the electrons in a diatomic molecule is very simple and easy to evaluate numerically. It is well known, however, that the Coulomb singularity is numerically very difficult to deal with, and in cylindrical coordinates it is even more difficult.

To overcome this difficulty, a numerical scheme combining a mapped sinc DVR and a cosine DVR was introduced in the work by Lin and Sun [101]. The mapping functions for the variables redistribute the grid points toward the regions around the Coulomb singularities; the behavior of the mapping parameter is shown in **Figure 12**.

With this mapped DVR method, the energy of the lowest electronic state, which is most sensitive to the accuracy of the description of the singularity, can be given to 10 significant figures with only 65 and 100 grid points for the H_{2}^{+} molecule. It was found that a straightforward extension of the method to the H_{2} molecule, including the correlation of the electrons, is only capable of giving the energy of the ground electronic state to three or four significant figures, with the internuclear distance fixed at 1.4 Bohr (a.u.).
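For orientation, a plain (unmapped) sinc DVR can be sketched and validated on a harmonic oscillator; the actual scheme of Ref. [101] additionally maps these uniformly spaced points toward the Coulomb singularities, and its specific mapping functions are not reproduced here:

```python
import numpy as np

# Sinc-DVR kinetic-energy matrix on a uniform grid (hbar = m = 1):
#   T_ii = pi^2 / (6 dx^2),
#   T_ij = (-1)^(i-j) / (dx^2 (i-j)^2)  for i != j,
# validated on the harmonic oscillator V(x) = x^2 / 2 (exact E0 = 0.5).
N = 101
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

i = np.arange(N)
dij = i[:, None] - i[None, :]
T = np.where(dij == 0,
             np.pi**2 / 3.0,
             2.0 * (-1.0) ** dij / np.where(dij == 0, 1, dij**2)) / (2.0 * dx**2)

H = T + np.diag(0.5 * x**2)            # Hamiltonian matrix in the DVR basis
E0 = np.linalg.eigvalsh(H)[0]          # lowest eigenvalue
print(E0)
```

The spectral (exponential) convergence of the DVR with grid density is what makes the 10-significant-figure accuracy quoted above attainable with so few points, once the mapping has concentrated them where the wave function varies fastest.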

**Figure 13** presents the potential energy curves of the ground and low-lying excited electronic states of H_{2} as a function of the internuclear distance, calculated by the mapped DVR method (symbols) with the same parameters listed above, along with those (lines) obtained with the MRCI/AV6Z method in the Molpro package [44]. The potential energy curves of the excited electronic states obtained by the mapped DVR method are always lower than those calculated by the MRCI/AV6Z method; however, the curve of the ground electronic state given by the mapped DVR method is a little higher than the MRCI/AV6Z one in the dissociation region, because of the strong electronic correlation energy of the ground electronic state. Further numerical studies indicate that by distributing the grid points more densely around the midpoint between the two nuclei, the accuracy of the mapped DVR description of the electronic correlation effects can be improved by two orders of magnitude, which may be sufficient for quantum calculations of the electronic dynamics induced by ultrashort laser pulses [102].

### 6.6. Electronic dynamics in bichromatic circularly polarized attosecond laser fields

Bichromatic circularly polarized attosecond laser pulses are new tools for investigating electron dynamics in strong-field ionization of atoms, molecules, and even surfaces, following early experiments on the polarization properties of high-intensity high-order harmonic generation [103]. Photoionization of atoms and molecules by various frequency combinations of co-rotating and counter-rotating bichromatic circularly polarized attosecond ultraviolet pulses exhibits signatures of spiral interference patterns in photoelectron momentum distributions. The generation of electronic currents such as vortices [104–106] has been proposed as a source of attosecond magnetic field pulses [107,108]. It is found that the helicity of the circularly polarized pulses is essential for the spiral photoelectron distributions. Starace and coworkers [82] have shown that, for two circularly polarized attosecond ultraviolet pulses of identical frequency, circularly symmetric patterns independent of the electron ejection angle are produced in the photoelectron momentum distributions of atomic He. The corresponding momentum rings are also insensitive to the pulse phases. Electronic vortices can only be obtained with laser pulses of opposite helicity.

We numerically solve the three-dimensional time-dependent Schrödinger equation of the aligned molecular ion H_{2}^{+} at its equilibrium geometry. A five-point finite difference method combined with the fast Fourier transform technique is adopted to describe the molecular Hamiltonian in coordinate space [85]. The electron wave function is propagated in the time domain with high-order split-operator methods [109,110]. We use an efficient method to simulate the ionization spectra by calculating a radial flux, i.e., the electron current density at an asymptotic point [111]. At a large asymptotic distance, the angular part of the flux can be ignored, so we only need to consider the electronic flux along the radial direction. The results show that in bichromatic circular-polarization ionization, spiral interference patterns can be observed for both co-rotating and counter-rotating components, which are sensitive to the pulse frequencies, phases, and time delays, as shown in **Figure 14**. We analyze these results with an attosecond perturbation ionization model. Coherent electron wave packets with the same kinetic energies, created in the continuum by the two color components, interfere with each other. The photoionization distributions are functions of the photoelectron momentum and the ejection angle, thus giving rise to spiral distributions. The sensitivity of the spiral electron distributions to the laser parameters offers an ideal means of characterizing these pulses and of timing ultrafast photoionization processes with such circular attosecond pulses.
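The radial-flux analysis can be sketched for a one-dimensional outgoing Gaussian packet (the grid, packet parameters, and analysis point are illustrative assumptions; ħ = m = 1):

```python
import numpy as np

# Probability current at an asymptotic analysis point r0:
#   j(r0, t) = (hbar/m) * Im[ psi*(r0) dpsi/dr |_{r0} ],
# evaluated for a free Gaussian packet with mean momentum k0, so the sign
# and magnitude (j ~ k0 |psi(r0)|^2) can be checked (hbar = m = 1).
r = np.linspace(0.0, 100.0, 4096)
dr = r[1] - r[0]
k0 = 2.0
psi = np.exp(-(r - 80.0) ** 2 / 4.0 + 1j * k0 * r)   # outgoing packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dr)         # normalize on the grid

i0 = np.argmin(np.abs(r - 80.0))                      # analysis point r0
dpsi = (psi[i0 + 1] - psi[i0 - 1]) / (2.0 * dr)       # central difference
flux = np.imag(np.conj(psi[i0]) * dpsi)
print(flux)                                           # positive: outgoing
```

Accumulating this flux in time at a fixed asymptotic point, with the time dependence resolved into energy components, is what yields the ionization spectra; the sketch above only checks the single-time current itself.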

## 7. Conclusion

Recent developments of quantum wave packet methods in atomic and molecular dynamics and their applications have been reviewed. The continuous growth of computing power and of the numerical techniques behind the quantum wave packet method makes it possible to solve increasingly complicated molecular dynamics problems. Foreseeably, more general applications of the quantum wave packet method will lead to more exciting discoveries in the exploration of the quantum dynamics of molecules and atoms.