1.1. Inorganic solids
Many inorganic solid materials are of great interest from a fundamental point of view or for technological applications. The challenge to theory and computation is that they are governed by different length scales: from meters (m) down to micrometers (μm), classical mechanics and continuum models provide proper descriptions (for example using finite element methods). However, at length scales down to nanometers (nm) or atomic dimensions measured in Å, such as for modern devices in the electronics industry (for example in magnetic recording) or in surface science and catalysis, the properties are determined (or critically influenced) by the electronic structure and thus by quantum mechanics. Understanding the properties at the atomic scale is often essential for improving or designing modern materials in a systematic way. Computation has become a key element in this process, since it allows one to analyze and interpret sophisticated measurements or to plan future experiments in a rational way, replacing the old trial-and-error scheme. Instead of trying all kinds of elements to improve a material by preparation, characterization and functional analysis, a computer simulation is often much more efficient and allows one to “narrow the design space”. Why should one prepare or measure a sample that is not promising according to modern computation? Some facilities (for example those providing synchrotron radiation) have already adopted this concept, since beam time for measurements is limited and thus should be reserved for promising investigations.
There is also a classical treatment at the atomic scale, often based on atomic force fields. In this case the interactions between the atoms are specified by forces, usually parameterized to reproduce a set of experimental data such as equilibrium geometries, bulk moduli or vibrational (phonon) frequencies. For a class of materials in which good parameters are known this can be a useful approach to answer certain questions, since force-field calculations require little computational effort. The main drawback is that no information can be provided about the electronic structure, since force fields do not explicitly contain electrons. Therefore these approaches will not be covered here, although they have reached a high level of sophistication.
Here we focus on the electronic structure of solids (metals, insulators, minerals, etc.) and surfaces (or interfaces), which requires a quantum mechanical treatment. Recent review articles [1-5] describe one possible approach, based on the WIEN2k program package [6, 7] that has been developed in my group during the last 35 years (see www.wien2k.at). Several other computer codes are available which will not be covered here. Each of them may have a different focus in terms of efficiency, accuracy, sophistication, capabilities (properties), user friendliness, parallelization, documentation, etc. It should be stressed that this large variety of computer codes is very beneficial for the growing field of computational materials science, since each code has advantages and disadvantages compared to the others. For obvious reasons we will focus on our own code to illustrate important aspects of this field.
Some basic concepts will be described below, as summarized in Figure 1. As a very first step one needs to represent the atomic structure of a solid material, as outlined in Section 2. Idealized assumptions must be made, which one should keep in mind when theoretical results are compared with experiments. In order to describe the electronic structure of a system of interest by means of quantum mechanics (Section 3) we first briefly mention some fundamental concepts of solid state physics (like symmetry). Next we sketch how chemists handle quantum mechanics and then focus on the most important theory for solids, namely density functional theory. Before we can describe how to solve the corresponding Kohn-Sham equations (Section 4) it is necessary to select which electrons should be included in the calculations: all of them or only the valence electrons. The form of the potential must also be chosen, where we discuss methods such as the pseudo-potential, muffin-tin or full-potential approximations. This choice is important for deciding which basis set can best describe the wave functions of the electrons. A relativistic treatment is important if the system contains heavier elements. For magnetic systems spin-polarization is essential and thus must be included. The main ideas of the all-electron full-potential linearized-augmented-plane-wave (LAPW) plus local orbitals method, as implemented in WIEN2k, are briefly described. This should be sufficient, since there is good documentation of the underlying details.
Section 5 lists selected results which can be obtained with WIEN2k. Since about 2500 groups – both from academic and industrial institutions – are using this code, many details are available in the literature (see the link “papers” at www.wien2k.at). Based on the experience of developing WIEN2k, it is appropriate to make some general comments about computer code development (Section 6). As more and more experimental scientists in this field rely on computer simulations, it is useful to raise some critical questions and summarize several points which can cause deviations between theoretical results and experiment, a topic covered in Section 7. Section 8 offers concluding remarks, followed by the very appropriate acknowledgments.
2. Atomic structure
The properties of materials at the nanometer (nm) scale or of atomic dimensions (measured in Å) are essentially determined by the electronic structure. In such a case one tries to represent the material of interest (such as a solid, a surface or a molecule) as a collection of atoms, which play the role of building blocks. It is important to realize that in practice one is forced to assume an idealized atomic structure in theoretical work, which deviates from the real structure that may be studied experimentally.
A few examples will illustrate this important point. Let us first consider a molecule, which in theory is studied in vacuum, whereas in experiment it is often measured on a support (surface) or in a solution. The latter may be simulated by surrounding the molecule with a few solvent molecules or by using an embedding scheme with a dielectric constant (simulating the solvent). More recently, a quantum mechanical (QM) treatment of the molecule has been combined with a cruder molecular mechanics (MM) representation of the environment in so-called QM/MM schemes. Such treatments will necessarily be approximate.
As a second example we focus on a solid. In the early days one modeled a solid as a cluster of atoms, but due to size limitations this cannot represent bulk properties. With the increase in computer power we assume ‒ especially in theory ‒ that a solid is a perfect crystal which can be characterized by a unit cell that is repeated to infinity in all three dimensions. This means that one assumes periodic boundary conditions. A real crystal, however, is certainly finite. For experimental studies a crystalline sample is often available only in the form of a powder consisting of small crystal domains. Even if experiments are carried out on a single crystal, it still has a surface and imperfections (such as defects or impurities). In compounds there are additional uncertainties, such as the stoichiometry, which may not be perfect, or the atomic arrangement, which may deviate from the underlying idealized order.
The situation of a perfect single crystal is illustrated for TiO2 crystallizing in the rutile structure (Figure 2). The symmetry belongs to one of the 230 space groups that are tabulated in the International Tables for Crystallography. Nowadays this information is also available from the Bilbao Crystallographic Server (www.cryst.ehu.es/cryst/). For a given crystal the unit cell must be specified with the three lattice parameters (a, b, c) and the corresponding angles (α, β, γ). For the atomic positions the Wyckoff positions must be defined, where for each type of atom only one of the equivalent atoms needs to be specified, while all others are generated by symmetry. For known structures this type of information is available, for example, from the inorganic crystal structure database or can be taken from standardized CIF files, which are often available directly from the literature.
In Figure 3 the unit cell of a borocarbide is shown, which contains main group elements (B and C), transition metals (Ni), and rare earth elements (Nd). Often interesting materials have such complex compositions. In this case the crystal structure is well known but for an understanding of the properties of this compound it is essential that all these atom types (from light to heavy) can be properly treated. Details of this compound can be found in .
In modern materials science it is often mentioned that “nano materials” have significantly different properties than their bulk analogs. A simple explanation can be provided by estimating the ratio of atoms on the surface with respect to those in the bulk of the material. The atoms on the surface have a different coordination number than the atoms in the bulk and thus a different bonding environment. Consequently these atoms may relax to positions that deviate from the ideal bulk crystal structure. In a nanoparticle a significant fraction of atoms are on the surface, whereas in a large single crystal this fraction is rather small and thus can (to a good approximation) be neglected, so that periodic boundary conditions may be used for calculating many properties.
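This ratio is easy to estimate. The sketch below (not from the original text; the simple-cubic block is an assumption made purely for illustration) counts the fraction of atoms on the surface of an n × n × n block:

```python
def surface_fraction(n):
    """Fraction of atoms on the surface of an n x n x n simple-cubic block.

    Interior atoms form an (n-2)^3 block; everything else sits on the surface.
    """
    interior = max(n - 2, 0) ** 3
    return 1.0 - interior / n ** 3

# a small "nano particle" of 10^3 atoms vs. a larger block of 100^3 atoms
for n in (3, 10, 100):
    print(n, round(surface_fraction(n), 3))
```

For a block of 1000 atoms roughly half of the atoms sit on the surface, while for 10^6 atoms the fraction drops below 6 per cent, in line with the argument above.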
In all the cases mentioned above the assumed atomic structure is idealized and differs from the real structure of a material that is investigated experimentally. These aspects should be kept in mind when theoretical results are compared with experimental data (as will be discussed in Section 7). This idealization is an advantage for theory with respect to experiments, since in computations the atomic structure is clearly defined as input. This is in contrast with experimental studies in which the material is often not so well characterized, in terms of stoichiometry, defects, impurities, surfaces, disorder etc. When theory simulates a structure, which is not a good representation of the real system, deviations between theory and experiment must be expected irrespective of the accuracy of the theoretical method.
With the concept of a supercell one can approximately simulate some aspects of a real system. For example, one can artificially enlarge a unit cell by forming a 2 × 2 × 2 supercell containing eight times as many atoms as the original. Figure 4 illustrates this for a simple cubic lattice. In such a supercell one can, for example, remove an atom (representing a defect), substitute one atom by another (simulating a substitution), or add vacuum (about 10-15 Å) on one side of the cell (to represent a surface).
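As a sketch (this is not WIEN2k functionality; the function name and the fractional-coordinate convention are chosen here only for illustration), such a supercell can be generated by translating the atoms of the original cell by all integer lattice vectors and rescaling to the new cell:

```python
import itertools
import numpy as np

def make_supercell(positions, n=(2, 2, 2)):
    """Replicate fractional atomic positions into an n1 x n2 x n3 supercell.

    positions: (N, 3) fractional coordinates in the original unit cell.
    Returns fractional coordinates in the enlarged cell.
    """
    positions = np.asarray(positions, dtype=float)
    copies = []
    for shift in itertools.product(*(range(k) for k in n)):
        # translate by an integer lattice vector, then rescale to the new cell
        copies.append((positions + np.array(shift)) / np.array(n))
    return np.vstack(copies)

# one atom per simple cubic cell -> eight atoms in the 2 x 2 x 2 supercell
cell = make_supercell([[0.0, 0.0, 0.0]])
print(len(cell))  # 8
```

In the enlarged cell one of the eight copies can then be removed or replaced to model a defect or a substitution.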
The possibilities of such supercells are schematically shown in Figure 5. A two-dimensional array of atoms is called a slab. In a multi-layer slab the central layer approximately represents the bulk of a system, whereas the top (and bottom) layer can be used to represent a surface, provided the next layer (due to the periodic boundary conditions) is sufficiently far away. On such an artificial surface one can place molecules and study, for example, catalytic reactions. In all such supercells one has still introduced an artificial order, since for example a defect will have a periodic image in the neighboring cells. The larger one makes the supercell, the less critical the interaction between periodic images becomes, but this requires a higher computational effort. Therefore one must make compromises.
The use of supercells is steadily increasing, since a more realistic modeling of real structures becomes attractive. In large supercells (with a few hundred atoms) one can even approximately model disorder, as was recently illustrated for so-called misfit layer compounds, in which layers of PbS and TaS2 can be stabilized by occasionally substituting Pb by Ta. Such aspects will be mentioned in Section 5.
3. Quantum mechanics
3.1. DFT Fundamentals
As mentioned in the previous section we focus on materials at the atomic scale. Here we use periodic boundary conditions and start with an ideal crystal structure that is studied at zero temperature. Our unit cell (or supercell) may contain several atoms. With present computer technology unit cells with around one thousand atoms can still be simulated within a reasonable time.
Quantum mechanics (QM) governs the electronic structure which is responsible for the properties of the system, such as the relative stability, chemical bonding, relaxation of the atoms, phase transitions, electrical, mechanical or magnetic behavior, etc. In addition many quantities related to experimental data (such as spectra) are determined by QM principles.
Several basic concepts from solid state physics and group theory are needed to characterize the electronic structure of solids, as summarized for example in . Here we just briefly mention some of these concepts, such as the Born-Oppenheimer approximation, according to which the electrons can move independently from the nuclei (which can be assumed to be at fixed positions), or the direct lattice and the Wigner-Seitz cell. Owing to the translational symmetry of a crystal, it is convenient to define a reciprocal lattice with the Brillouin zone as its unit cell. The symmetry is defined by operators for translation, rotation, reflection or inversion and leads to group theory with the space group and point group. The electronic structure of an infinite solid looks so complicated that it would seem impossible to calculate it. Two important concepts make the problem feasible. The first is the single-particle approach, in which each electron moves in an average potential V(r) that is translationally invariant, V(r+T) = V(r), under a lattice translation T. The second is the Bloch theorem, which defines how the wave function (which is not translationally invariant) changes under T, namely by a phase factor, called the Bloch factor e^(ik·T),
where k is a vector in reciprocal space that plays the role of a quantum number in solids. The k vector can be chosen in the first Brillouin zone, because any k’ that differs from k by just a lattice vector K of the reciprocal lattice has the same Bloch factor and the corresponding wave function satisfies the Bloch condition again.
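In compact notation the Bloch condition, and the reason why k can be restricted to the first Brillouin zone, read:

```latex
\psi_{\mathbf{k}}(\mathbf{r}+\mathbf{T}) = e^{i\mathbf{k}\cdot\mathbf{T}}\,\psi_{\mathbf{k}}(\mathbf{r}),
\qquad
e^{i(\mathbf{k}+\mathbf{K})\cdot\mathbf{T}} = e^{i\mathbf{k}\cdot\mathbf{T}}
\quad \text{since} \quad e^{i\mathbf{K}\cdot\mathbf{T}} = 1 .
```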
3.1.2. Quantum chemistry and ab initio methods
The quantum mechanical treatment of a system on the atomic scale has been discussed in many papers (for example in [5, 12]) and thus it is sufficient to summarize a few basic concepts here. According to the Pauli principle, electrons are indistinguishable fermions, so their wave function must be antisymmetric when two electrons are interchanged, leading to the phenomenon of exchange. In a variational wave-function description this can be enforced by forming a Slater determinant (set up from one-electron wave functions), representing the well-known Hartree-Fock (HF) approximation. The HF equations have the computational disadvantage that each electron moves in a different potential (the potential becomes orbital dependent). In HF the exchange is treated exactly, but correlation effects, caused by the specific Coulomb interaction between the electrons, are omitted by definition. Correlation can be included by more sophisticated approaches such as configuration interaction (CI), in which additional Slater determinants (including single, double or triple excitations into unoccupied states) are added in order to increase the variational flexibility of the basis set. Another treatment of correlation effects is the coupled cluster (CC) scheme that is often used in quantum chemistry. Such schemes are labeled ab initio (or first principles) methods and are highly accurate refinements that can approach an almost exact solution. Unfortunately the corresponding computational effort increases dramatically, as N^7, where N (the number of electrons) measures the system size. Such nearly exact solutions would be desirable, but in practice they can only be obtained for relatively small systems (atoms or small molecules). When the system size is significantly larger (as in condensed matter applications) approximations are unavoidable.
In quantum mechanics the term ab initio means that for a simulation of a material it is sufficient to know its constituent atoms (or isotopes); the rest is governed by quantum mechanics. One does not need to know whether a material is insulating, metallic, magnetic, or has any other specific property. In principle an ab initio calculation should determine these properties from the atomic structure alone. A different situation occurs for calculations that are based on parameters that had been fitted to known properties of other systems similar to the material of interest. The latter type of calculation is often less demanding (in terms of computer resources) but is necessarily biased towards the class of materials for which the parameters had been determined. Consequently one cannot find an unconventional behavior. In practice, however, it helps to know something about the system in order to choose proper approximations in the complicated quantum mechanical calculations. For example, why should one perform a spin-polarized calculation knowing that the system is not magnetic?
3.1.3. Density Functional Theory
The well-established scheme to calculate electronic properties of solids is based on density functional theory (DFT), for which Walter Kohn received the Nobel Prize in chemistry in 1998. Fifty years ago, in 1964, Hohenberg and Kohn showed that the total energy E of an interacting inhomogeneous electron gas (as it appears in atoms, molecules or solids) in the presence of an external potential (coming from the nuclei) is a functional of the electron density ρ, which uniquely defines the total energy E of the system, i.e. E[ρ].
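The equation itself (Equation 2 of the original) is not reproduced in this text; consistent with the four terms enumerated next, it presumably has the standard form

```latex
E[\rho] = T_s[\rho] + E_{ne}[\rho] + E_C[\rho] + E_{xc}[\rho]
```

where T_s is the kinetic energy of the non-interacting electrons, E_ne the nuclear-electronic interaction energy, E_C the classical Coulomb (Hartree) energy and E_xc the exchange-correlation energy.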
The four terms correspond to the kinetic energy (of non-interacting electrons), the nuclear-electronic interaction energy Ene, the Coulomb energy (including the self-interaction) and the exchange-correlation energy Exc, which contains all the remaining quantum mechanical contributions. Up to this point the theory is exact. From a numerical point of view one can stress that the first three terms are large numbers, while the last is essential but small and thus can be approximated. Thus one does not need to know the many-body electronic wave function. This is an enormous simplification. To clarify this point, consider the very simple case of a system (atom or molecule) with 100 electrons, which is still small. Each electron is described by a wave function which depends on three space coordinates and the spin. Therefore the many-body wave function would depend on 400 coordinates. According to DFT all that is needed is the density ρ(r), which only depends on the position r, i.e. on three coordinates. Unfortunately the exact form of the functional is not known, but the conditions it should satisfy have been formulated, as will be discussed in Section 3.3.
3.2. The Kohn-Sham equations
From a practical point of view it was essential to formulate DFT in such a way that it could be applied. According to the variational principle a set of effective one-particle Schrödinger equations, the so-called Kohn-Sham (KS) equations, must be solved (Equation 3), as highlighted in Figure 1 and Figure 6. In this way DFT is a universal approach to the quantum mechanical many-body problem, where the system of interacting electrons is mapped in a unique manner onto an effective non-interacting system that has the same total density. The non-interacting particles of this auxiliary system move in an effective local one-particle potential, which consists of a classical mean-field (Hartree) part and an exchange-correlation part Vxc (due to quantum mechanics) that, in principle, incorporates all correlation effects exactly. Equation 3 shows its form (written in Rydberg atomic units) for an atom, with the obvious generalization to molecules and solids.
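Equation 3 itself is not reproduced in this text; in Rydberg atomic units, for an atom with nuclear charge Z, it presumably reads

```latex
\left[ -\nabla^2 - \frac{2Z}{r} + V_C(\mathbf{r}) + V_{xc}(\mathbf{r}) \right] \psi_i(\mathbf{r})
= \varepsilon_i \, \psi_i(\mathbf{r})
```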
The four terms represent the kinetic energy operator, the external potential from the nucleus, and the Coulomb and exchange-correlation potentials VC and Vxc. The KS equations must be solved iteratively until self-consistency is reached (as illustrated in Figure 6). The iteration cycles are needed due to the interdependence between orbitals and potential. In the KS scheme the electron density is obtained by summing over all occupied states, i.e. by filling the KS orbitals (with increasing energy) according to the aufbau principle.
A typical computation is illustrated in Figure 6. For a system of interest the unit cell must be specified by the lattice constants a, b, c and the corresponding angles (α, β, γ). In addition each atomic position is defined by the Wyckoff positions (as mentioned in Section 2). For this fixed atomic structure the self-consistent field (SCF) cycle starts. As a first guess for the crystalline density one can superimpose atomic densities of neutral atoms placed at their proper positions in the unit cell. From this density one can generate a potential (within DFT). In each iteration i the DFT Kohn-Sham equations must be solved, as illustrated on the right-hand side.
Instead of using a uniform mesh of k-points in the Brillouin zone (BZ), it is sufficient to restrict the k-points to the irreducible wedge of the BZ by applying the symmetry relations present in the system. From each star of equivalent k-points only one must be calculated, and its corresponding density is weighted according to the number of symmetry-equivalent points in the star (reducing the computational effort). For each k-point the Kohn-Sham equations must be solved.
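The idea of folding a mesh into the irreducible wedge can be illustrated with a toy example (a 2D square lattice; the code and its conventions are illustrative only, not those of WIEN2k):

```python
import itertools

def irreducible_kpoints(n):
    """Reduce a uniform n x n k-mesh of a 2D square lattice to its
    irreducible wedge using the 8 point-group operations
    (kx, ky) -> (+-kx, +-ky), (+-ky, +-kx), counting the weight of each star."""
    # shifted fractional k-points in (-1/2, 1/2)
    mesh = [((i + 0.5) / n - 0.5, (j + 0.5) / n - 0.5)
            for i in range(n) for j in range(n)]
    ops = [lambda k, s=s, p=p: (s[0] * k[p[0]], s[1] * k[p[1]])
           for s in itertools.product((1, -1), repeat=2)
           for p in ((0, 1), (1, 0))]
    weights = {}
    for k in mesh:
        star = {op(k) for op in ops}   # all symmetry-equivalent images
        rep = max(star)                # pick one representative per star
        weights[rep] = weights.get(rep, 0) + 1
    return weights

w = irreducible_kpoints(4)
print(len(w), sum(w.values()))  # 3 16
```

Only 3 of the 16 mesh points have to be treated explicitly; their weights (4, 8 and 4) reproduce the full mesh when the density is accumulated.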
The KS wave functions are expanded in a basis set, as will be described in the next section. The expansion coefficients Ckn are determined variationally by minimizing the expectation value of the total energy with respect to these coefficients. This procedure leads to the generalized eigenvalue problem HC = ESC, where H is the Hamiltonian matrix, S the overlap matrix, C contains the coefficients and E the energies. After diagonalization we obtain for each energy Enk the KS orbital ψnk (where n is the band index) and thus can calculate the corresponding electron density.
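For a toy matrix pair (the numbers below are hypothetical and serve only to illustrate HC = ESC), the generalized eigenvalue problem can be reduced to a standard one by a Cholesky factorization of the overlap matrix, which is essentially what dense eigensolvers do internally:

```python
import numpy as np

def solve_generalized(H, S):
    """Solve H C = E S C for a symmetric H and positive-definite S."""
    L = np.linalg.cholesky(S)        # S = L L^T
    Linv = np.linalg.inv(L)
    Ht = Linv @ H @ Linv.T           # equivalent standard symmetric problem
    E, Y = np.linalg.eigh(Ht)
    C = Linv.T @ Y                   # back-transform the eigenvectors
    return E, C

H = np.array([[-1.0, 0.2, 0.0],
              [ 0.2, 0.5, 0.1],
              [ 0.0, 0.1, 1.0]])
S = np.array([[ 1.0, 0.1, 0.0],
              [ 0.1, 1.0, 0.1],
              [ 0.0, 0.1, 1.0]])
E, C = solve_generalized(H, S)
print(np.allclose(H @ C, S @ C @ np.diag(E)))  # True
```

The resulting eigenvectors are S-orthonormal (C^T S C = I), which corresponds to the orthonormality of the KS orbitals in a non-orthogonal basis.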
By summing over all occupied states (with Enk below the Fermi energy) the output density is obtained. This output density can be mixed with the input densities of the previous iterations to obtain a new density for the next iteration. In order to reduce the number of iterations needed to reach self-consistency, several schemes have been suggested (see section 5 in ). A recent mixing scheme is the multisecant version, which includes information from several previous iterations of the SCF cycle as samples of a higher-dimensional space to generate the new density. From this density the potentials VC (by solving Poisson's equation) and Vxc (within DFT) can be generated for the next SCF iteration. The exact functional form of the potential Vxc is not known and thus one needs to make approximations. With these potentials the new KS orbitals can be obtained. This closes the SCF cycle.
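The simplest such scheme is plain linear mixing, shown below on a toy fixed-point problem (the update function F is a hypothetical stand-in for one KS iteration, not an actual DFT step):

```python
import numpy as np

def scf_with_linear_mixing(F, rho0, alpha=0.3, tol=1e-10, max_iter=500):
    """Iterate rho_out = F(rho_in), mixing old and new densities as
    rho_next = (1 - alpha) * rho_in + alpha * rho_out, until converged."""
    rho = np.asarray(rho0, dtype=float)
    for i in range(max_iter):
        rho_out = F(rho)
        if np.max(np.abs(rho_out - rho)) < tol:   # self-consistency reached
            return rho, i
        rho = (1.0 - alpha) * rho + alpha * rho_out
    return rho, max_iter

# hypothetical contractive update with fixed point rho = 2
F = lambda rho: 0.5 * rho + 1.0
rho, n_iter = scf_with_linear_mixing(F, np.array([0.0]))
print(np.allclose(rho, 2.0))  # True
```

Multisecant schemes replace the single mixing parameter by information gathered from several previous iterations, which usually converges in far fewer cycles.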
The SCF cycles are continued until convergence is reached, for example when the total energies of two successive iterations deviate from each other by less than a convergence criterion ε (e.g. 0.001 Ry). At this stage one can look at the forces acting on the atoms in the unit cell. If symmetry allows, there can be forces on the atoms, which are defined as the negative gradient of the total energy with respect to the position parameters. Take for example the rutile TiO2 structure (Figure 2), in which oxygen sits on Wyckoff position 4f with the coordinates (x, x, 0), where x is not specified by symmetry. In this case x can be varied to minimize the energy, and thus a force can occur on the oxygen which vanishes at the equilibrium geometry. However, Ti is located at the Wyckoff position 2a with the fixed coordinates (0, 0, 0) and (½, ½, ½), and thus these positions are fixed and no force will act on Ti. When all atoms are essentially at their equilibrium positions (with forces around zero) one can change the volume of the unit cell and minimize the total energy E. This corresponds to the equilibrium geometry of the system in the given structure. After this minimization is completed one can, as the last step, calculate various properties for this optimized structure.
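The volume optimization amounts to fitting the computed total energies E(V) and locating the minimum. A minimal sketch follows (the energies below are synthetic, generated from a quadratic model purely for illustration; in practice an equation of state such as Birch-Murnaghan is usually fitted):

```python
import numpy as np

# Synthetic total energies (Ry) at several unit-cell volumes (Bohr^3),
# generated from a quadratic model with its minimum placed at V = 106.
volumes = np.array([95.0, 100.0, 105.0, 110.0, 115.0])
energies = 0.001 * (volumes - 106.0) ** 2 - 150.0

# Fit E(V) with a parabola and locate the minimum analytically.
a, b, c = np.polyfit(volumes, energies, 2)
V0 = -b / (2.0 * a)        # equilibrium volume of the fitted parabola
print(round(V0, 2))  # 106.0
```

The same procedure, repeated for each candidate structure, also yields relative phase stabilities from the minimum energies.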
The treatment of exchange and correlation effects has a long history and is still an active field of research. Some aspects were summarized in the review articles [1-5] but also in many other papers in this field. The reader is encouraged to look at recent developments. An excellent book by Cottenier covers DFT and many aspects of the WIEN2k program package and thus is highly recommended to the reader for further details.
For the present presentation it is worth mentioning a few historical aspects: in 1951 Slater proposed the replacement of the non-local Hartree-Fock exchange by the statistical exchange, called Slater's exchange. In the 1970s this was modified by scaling it with the exchange parameter α (for each atom), called the Xα method, which was widely used for solid state calculations. It was designed to approximate Hartree-Fock, which (by construction) treats exchange exactly but neglects correlation effects completely. By making a local approximation for the potential, the Xα method indirectly included correlation effects, making it better than Hartree-Fock but at the price of treating exchange only approximately. This type of error cancellation is typical for many DFT functionals.
Early applications of DFT were based on results from quantum Monte Carlo calculations for the homogeneous electron gas, for which the problem of exchange and correlation can be solved exactly. Although no real system has a constant electron density, one can at each point in space use the homogeneous electron gas result to treat exchange and correlation, leading to the original local density approximation (LDA). Surprisingly, LDA works reasonably well but has some shortcomings, mostly due to its tendency to overbind atoms, which causes, for example, too small lattice constants. The next crucial step in DFT was the implementation of the generalized gradient approximation (GGA), for example the version by Perdew, Burke and Ernzerhof (PBE), which improved LDA by adding gradient terms of the electron density. For several cases this GGA gave better results, and thus for a long time PBE has been the standard for many solid state calculations. During recent years, however, several improvements of GGA were proposed, which fall into two categories, both with good justifications:
Semi-empirical GGA, which contain parameters that are fitted to accurate (e.g. experimental or ab initio) data
ab initio GGA, in which all parameters are determined by satisfying fundamental constraints (mathematical conditions) which the exact functional must obey.
One criterion for the quality of a calculation is the equilibrium lattice constant of a solid, which can be calculated by minimizing the total energy with respect to the volume. By studying a large series of solids (as shown in Figure 7) some general trends can be found: LDA has a tendency to overbind, leading to smaller lattice constants than experiment. GGA in the version of PBE always yields larger lattice constants, which sometimes lie above the experimental value. The more recently suggested modifications lead to a clear improvement, at least for the lattice parameters. In addition, there are other observables (such as the cohesive energy or magnetism, to mention just two) which depend on the functional. The best agreement with experiment may require different functionals for different properties. So far no functional works equally well for all cases and all systems. Therefore one must acknowledge that an optimal DFT functional has not yet been found, which is the reason why this remains an active field of research.
A systematic improvement of the treatment of exchange and correlation, as in quantum chemistry (Section 3.1.2) going from Hartree-Fock to (full) configuration interaction (CI) or coupled cluster (CC) approaches, did not exist for solids and DFT. In 2005 such a scheme was proposed, called Jacob's ladder for DFT, which becomes progressively more demanding in terms of computational requirements. Figure 8 briefly summarizes the five rungs of this ladder (“to heaven”), indicating what is needed at each level of theory. In LDA the exchange-correlation energy εxc is just a function of the density ρ; in the next rung it depends also on the gradient of the density; in meta-GGA εxc is in addition a function of the Laplacian of the density and the kinetic energy density τ. In rung 4 one goes from the simple dependence on the density alone to an orbital description, which (for occupied orbitals) allows a correct description of exchange, as in Hartree-Fock. At this level one limits the computational space to the occupied orbitals but can extend the approach to hybrid functionals (mixing a fraction of Hartree-Fock exchange with a DFT part). In the highest rung unoccupied orbitals are also included, as for example in the scheme called the random phase approximation (RPA).
There are well documented cases for which conventional DFT calculations (LDA or GGA) disagree even qualitatively with experimental data and, for instance, predict a metal instead of an insulator. One of the reasons can be the presence of localized states (often f-electrons or late transition metal d-orbitals) for which correlation is very strong. For these highly correlated systems one must go beyond simple DFT calculations. One simple form of improvement is to treat these local correlations by means of a Hubbard U (see ) but use LDA or GGA for the rest of the electrons. With this parameter U the on-site Coulomb repulsion between the localized orbitals is included, albeit at the cost of introducing a parameter. This approach is generally called LDA+U. In a simple picture, U stands for the energy penalty for moving a localized electron to a neighboring site that is already occupied.
The Kohn-Sham energy eigenvalues εi (in Equation 3) should – formally speaking – not be interpreted as excitation energies (except for the highest one). Nevertheless, optical excitations are commonly described in the independent-particle approximation, using these quasi-particle states from DFT in the single-particle picture. One well-known case is the energy gap of insulators, which in this crude single-particle picture is typically underestimated by about 50 percent. This has been well known for some time (see e.g. section 6.7 of ), since even the exact Kohn-Sham gap misses the integer discontinuity Δxc between occupied and unoccupied states. It is worth noting that in Hartree-Fock the gap would typically be too large. This is one of the reasons why hybrid functionals, which mix Hartree-Fock with DFT, were suggested in order to produce a more realistic gap. Better estimates of the quasi-particle spectrum can be obtained by GW calculations employing many-body perturbation theory, which are significantly more expensive computationally. Recently a modified Becke-Johnson (mBJ) potential was proposed, which is still a local potential (and thus cheap) but yields energy gaps close to experiment.
When the Coulomb potential is written in terms of the density (third term in equation 2) it contains the unphysical self interaction of an electron with itself. In Hartree-Fock this term is exactly canceled by the exchange term. Due to the approximation in DFT, this cancellation is not complete and thus in some functionals a self-interaction-correction (SIC) is added .
The van der Waals (vdW) interaction is not described in the simple DFT approximations like LDA or GGA but can be treated with higher order treatments (rung 4 or 5 in Jacob’s ladder) which become computationally rather expensive. A pragmatic solution is to add a vdW correction based on adjustable parameters (see for example Grimme’s scheme ).
In connection with the term “ab initio” ‒ as the quantum chemists define it ‒ it is appropriate to consider the situation for large systems: the strategy differs for schemes based on HF (wave function based) or DFT. In HF based methods (including CI and CC) the Hamiltonian is well defined and can be solved almost exactly for small systems but for large cases only approximately (e.g. due to limited basis sets). In DFT, however, one must first choose the functional that is used to represent the exchange and correlation effects (or approximations to them), but then one can solve this effective Hamiltonian almost exactly. Thus in both cases an approximation enters either in the first or the second step. This perspective illustrates the importance of improving the functionals in DFT calculations, since they define the quality of the calculation. The advantage of DFT is that it can treat relatively large systems.
4. Solving the Kohn-Sham equations with WIEN2k
4.1. The all electron case
A schematic summary of the main choices one has to make for computations is shown in Figure 1, where our selections are marked in red. We want to represent a solid with a unit cell (or a supercell) as discussed in section 2 and thus invoke periodic boundary conditions. The system may contain all elements of the periodic table, from light to heavy, main group, transition metals, or rare earth atoms as for example shown in Figure 3. Let us look at Ti (with the atomic number Z=22) as an example. Its electronic configuration is 1s² 2s² 2p⁶ 3s² 3p⁶ 3d² 4s² (Figure 9).
We surround the Ti nucleus by an atomic sphere with a radius of 2 Bohr (about 1Å). With respect to this sphere, the Ti electronic states can be classified in three categories:
core states, which are low in energy and whose wave functions (or electron densities) reside completely inside this sphere.
valence states, which are high in energy and are delocalized.
semi-core states, which are in between (medium energy), have a charge leakage (a few per cent of the charge density lies outside the sphere) and have a principal quantum number that is one less than the valence states (i.e. 3s vs. 4s, or 3p vs. 4p).
Traditionally the electronic properties of a material due to chemical bonding are associated with only the valence electrons, and thus the core electrons are often ignored. A typical scheme is the so-called frozen-core approximation, in which the electron density from the core electrons does not change during the SCF cycle (see Figure 6). This is often justified, but there are cases like hyperfine interactions where the change of the core electrons can contribute significantly, and even more so the semi-core states. An all-electron treatment therefore has the advantage of being able to explore the contribution of all electrons to certain experimental data (e.g. the electric field gradient).
As long as a solid contains only light elements, non-relativistic calculations are well justified, but as soon as a system of interest contains heavier elements, relativistic effects must be included. In the medium range of atomic numbers (up to about 54) so called scalar relativistic schemes  are often used, which properly describe the main contraction or expansion of various orbitals (due to the Darwin s-shift or the mass-velocity term) but omit spin-orbit coupling. Such schemes are computationally relatively simple and thus recommended for standard cases. The inner electrons can reach a high velocity, leading to a mass enhancement. This causes a stronger screening of the nuclear charge by the relativistic core electrons with respect to a non-relativistic treatment and affects the valence electrons. The spin-orbit contribution can be included in a second-variational treatment  and is needed for heavier elements. The core electrons are treated by solving Dirac’s equation, whereas the semi-core and valence states are described with the scalar relativistic scheme. In the latter spin remains a good quantum number and thus spin-polarized calculations are valid for treating magnetic systems.
4.2. The choice of the potential
Figure 1 schematically shows the topics where one needs to make a choice. We want to represent a solid with a unit cell (or a supercell). Relativistic and spin-polarization effects can be included as mentioned above. The next crucial point is the choice of the potential, which is closely related to the basis sets. This aspect is extensively explained in , discussing the advantages and problems connected with pseudo potentials. The main idea is to eliminate the core electrons and replace the real wave functions of the valence states by pseudo wave functions which are sufficiently smooth that they can be expanded in a plane wave basis set. In the outer region of an atom, where the chemical bonding occurs, they should agree with the real wave function. In principle – in mathematical terms – plane waves form a complete basis set and thus should be able to describe any wave function. However, the nodal structure (for example of a 4s wave function) close to the nucleus would require an extremely large number of plane waves.
For the all-electron case within DFT the potential looks like the one shown in Figure 10. Near each nucleus it has the form Z/r but between the atoms it is nearly flat. In the muffin-tin approximation the potential is assumed to be spherically symmetric around each atom but constant in between. In the full-potential case the potential (without any approximation of its shape) is represented as a Fourier series in the interstitial region, but in each atomic sphere (with a radius RMT) it is expressed as radial functions VLM(r) multiplied by crystal harmonics, which are linear combinations of spherical harmonics having the point group symmetry of the atom α for the proper LM value. In this notation the muffin-tin case is the first term of both series, namely the 0 0 component for LM (i.e. only the spherical part inside each atomic sphere) and the constant term of the Fourier series (for the interstitial region). In the 1970s the muffin-tin approximation was widely used because it made calculations feasible. For closely packed systems it was acceptable, but for more covalently bonded systems like silicon (or even surfaces) it is a very poor approximation. Another drawback of the muffin-tin approximation was that the results depended on the choice of sphere radii, whereas in the full-potential case this dependence is drastically reduced. Due to the muffin-tin approximation different computer codes obtained results that did not agree with each other. This has changed with the use of full-potential calculations. Nowadays different codes based on the full potential yield nearly identical results, provided they are carried out to full convergence and use the same structure and DFT version. This has given theory a much higher credibility and predictability (see Section 7).
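In formulas, the dual representation described above reads

```latex
V(\mathbf{r}) =
\begin{cases}
\sum_{LM} V_{LM}(r)\, K_{LM}(\hat{\mathbf{r}}) & \mathbf{r} \in \text{sphere } \alpha \\[4pt]
\sum_{\mathbf{K}} V_{\mathbf{K}}\, e^{i\mathbf{K}\cdot\mathbf{r}} & \mathbf{r} \in \text{interstitial,}
\end{cases}
```

where keeping only the LM = 0 0 term and the constant Fourier coefficient recovers the muffin-tin approximation. The charge density is expanded in the same dual fashion.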
4.3. The choice of the basis sets
For solving the Kohn-Sham equations (see Figure 1) basis sets are needed. A linear combination of such basis functions describes the Kohn-Sham orbitals. One can use analytic functions, such as Slater-type orbitals (STO) or Gaussian-type orbitals, or plane waves (for example in connection with pseudo potentials). Already in 1937 Slater  proposed the augmented plane wave (APW) method. The development of APW and its linearized version, which led to the WIEN code  and later to its present version WIEN2k , was described in detail in recent reviews [1-5]. An extensive description including many conceptual and mathematical details is given in . Therefore only the main concepts will be summarized below.
4.4. The APW based method and the WIEN2k code
In the APW method one partitions the unit cell into (non-overlapping) atomic spheres (type I) centered at the atomic sites and the remaining interstitial region (II) (Figure 11). Inside each atomic sphere (region I) the wave functions have nearly atomic character and thus (assuming a muffin-tin potential) can be written as a radial function times spherical harmonics. It should be stressed that the muffin-tin approximation (MTA) is used only for the construction of the APW basis functions, and for nothing else. The radial Schrödinger equation is solved numerically (and thus highly accurately), but the energy must be provided as input, which makes the basis set energy dependent. In region II the potential varies only slowly and thus the wave functions can be well expressed in a series of plane waves (PW). Each plane wave is augmented by the atomic partial waves inside each atomic sphere (i.e. the PW is replaced inside the spheres). The corresponding weight Aℓm of each partial wave can be fixed by a matching condition at the sphere boundary (as indicated in Figure 11 and Figure 13).
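Schematically, the APW basis function attached to the plane wave k+K has the form

```latex
\phi_{\mathbf{k}+\mathbf{K}}(\mathbf{r}) =
\begin{cases}
\dfrac{1}{\sqrt{\Omega}}\, e^{i(\mathbf{k}+\mathbf{K})\cdot\mathbf{r}} & \mathbf{r} \in \text{interstitial} \\[6pt]
\sum_{\ell m} A_{\ell m}^{\mathbf{k}+\mathbf{K}}\, u_{\ell}(r, E)\, Y_{\ell m}(\hat{\mathbf{r}}) & \mathbf{r} \in \text{sphere,}
\end{cases}
```

where Ω is the unit cell volume and the coefficients Aℓm follow from the matching condition at the sphere boundary.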
Three schemes of augmentation (APW, LAPW, APW+lo) have been suggested over the years and illustrate the progress in this development of APW-type calculations that was discussed in [18, 4, 5]. Here only a brief summary will be given. The energy dependence of the atomic radial functions uℓ(r,E) can be treated in different ways. In Slater’s APW  this was done by choosing a fixed energy E, which leads to a non-linear eigenvalue problem, since the basis functions become energy dependent.
In the linearized APW method, called LAPW, O. K. Andersen  suggested linearizing (that is, treating to linear order) this energy dependence, as illustrated in Figure 12. The radial Schrödinger equation is solved for a fixed linearization energy Eℓ (taken at the center of the corresponding energy bands), leading to uℓ(r, Eℓ), but an energy derivative of this function (taken at the same energy) is added in order to retain the variational flexibility. This linearization is a good approximation in a sufficiently small energy range around Eℓ. In LAPW the atomic function inside the sphere α is given by a sum of partial waves, namely radial functions times spherical harmonics labeled with the quantum numbers (ℓ, m).
The two coefficients Aℓm and Bℓm (weights for the function and its derivative), as given in Figure 12, can be chosen so as to match each plane wave (characterized by K) continuously (in value and slope) to the one-center solution inside the atomic sphere at the sphere boundary (for details see e.g. ). The main advantage of the LAPW basis set is that it allows finding all needed eigenvalues with a single diagonalization, in contrast to APW with its non-linear eigenvalue problem. Historically, the stricter constraint (matching both value and slope) had the disadvantage that in LAPW more PWs were needed to reach convergence than in APW. The radial functions u and their energy derivatives are recalculated in each iteration cycle (see Figure 6) and thus can adjust to chemical changes (for example due to charge transfer) requiring an expansion or contraction of the radial function. The LAPW method made it computationally attractive to go beyond the muffin-tin approximation and to treat both the crystal potential and the charge density without any shape approximation (called full-potential) as pioneered by the Freeman group .
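The value-and-slope matching can be sketched as a small linear system: for each (ℓ, m) and each plane wave the two coefficients follow from a 2×2 solve. The numbers below are purely illustrative stand-ins; in a real code they come from the radial solver and from the Rayleigh expansion of the plane wave at the sphere boundary.

```python
import numpy as np

# Illustrative (made-up) values at the sphere boundary R_MT for one (l, m)
# and one plane wave K.
u_R,  du_R  = 0.31, -0.42   # u_l(R, E_l) and its radial derivative
ud_R, dud_R = 0.05,  0.27   # energy derivative u-dot and its radial derivative
pw_R, dpw_R = 0.18,  0.09   # value and slope of the plane-wave component at R

# Matching value and slope gives two equations for A_lm and B_lm:
#   A * u(R)  + B * udot(R)  = PW(R)
#   A * u'(R) + B * udot'(R) = PW'(R)
M = np.array([[u_R,  ud_R],
              [du_R, dud_R]])
A_lm, B_lm = np.linalg.solve(M, np.array([pw_R, dpw_R]))
```

Solving this tiny system for every plane wave and every (ℓ, m) fixes the augmentation inside the sphere; in APW+lo (discussed below) only the first equation, the value matching, is kept.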
In section 4.1 the partition of electronic states in core, semi-core and valence states was described and illustrated for Ti in Figure 9. Let us focus on the p-type orbitals. The 2p core state is treated fully relativistic as an atomic core state while the valence 4p state is computed within LAPW using a linearization energy at the corresponding high energy. The 3p semi-core states reside mostly inside the Ti sphere but have a “core-leakage” of a few per cent. The 3p states are separated in energy from the 4p states and thus the linearization (with the linearization energy of the 4p state) would not work here. For such a case Singh  proposed adding local orbitals (LO) to the LAPW basis set in order to accurately treat states with different principal quantum numbers (e.g. 3p and 4p) while retaining orthogonality. In this example the 3p LOs look very similar to the 3p radial function but are constrained to have zero value and slope at the sphere radius RMT.
The concept of LO fostered another idea, namely the APW plus local orbitals (APW+lo) method by Sjöstedt et al . These local orbitals are labeled in lower case to distinguish them from the semi-core LO. In APW+lo, one returns to the APW basis but with the crucial difference that each radial function is expanded at a fixed energy. The matching is again (as in APW) only made between values (Figure 13). This new scheme is significantly faster while maintaining the convergence of LAPW .
The APW+lo scheme therefore combines the best features of all APW-based methods. It was known that LAPW converges somewhat slower than APW due to the constraint of having differentiable basis functions, and thus it is an improvement to return to APW, but only for the orbitals involved in chemical bonding. The energy-independent basis introduced with LAPW is crucial for avoiding the non-linear eigenvalue problem of APW and thus is also used for all higher ℓ components. The local orbitals provide the necessary variational flexibility to make this new scheme efficient, but they are added only where needed (to avoid any further increase in basis-set size). The crystalline wave functions (of Bloch type) are expanded in these APWs, leading to a general eigenvalue problem. The size of the matrix is mainly given by the number of plane waves (PWs) but is increased slightly by the additional local orbitals. As a rule of thumb, about 50-100 PWs are needed per atom in the unit cell in order to achieve good convergence.
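Taking the rule of thumb above at face value, one can make a back-of-the-envelope estimate of the matrix size for a large cell (the atom count and per-atom figure below are assumptions for illustration, not a statement about any particular calculation):

```python
# Rule of thumb from the text: roughly 50-100 plane waves per atom.
atoms = 1000                  # a large unit cell
pw_per_atom = 100             # upper end of the rule of thumb
n_basis = atoms * pw_per_atom            # ~100,000 basis functions

# A dense complex double-precision Hamiltonian (16 bytes per element)
# of this dimension already requires distributed-memory hardware:
mem_gb = n_basis**2 * 16 / 1e9           # ~160 GB for one matrix
```

This makes clear why the parallelization strategies discussed in section 6 become essential for systems of this size.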
5. Results with WIEN2k
The WIEN2k code is widely used and thus there is an enormous literature with many interesting results, which cannot all be covered here. Many of the publications with WIEN2k can be found on the web page www.wien2k.at under the heading “papers”. A selected list of results that can be obtained with WIEN2k is provided below, where references are given either to the original literature or in some cases to review articles [4, 5].
After the SCF cycle has been completed one can look at various standard results: the Kohn-Sham eigenvalues Enk can be shown along symmetry lines in the Brillouin zone, giving the energy band structure. A symmetry analysis can determine the corresponding irreducible representation (see Fig. 1 of ). For each of these states with Enk the wave function (a complex function in three dimensions) contains information about how much the various regions of the unit cell contribute. In the APW framework this can be quantified by the partial charges qtℓm, which define the fraction of the total charge density of this state (normalized in the unit cell) that resides in the atomic sphere t and comes from the orbital characterized by the quantum numbers ℓm. The fraction of the charge that resides in the interstitial region is contained in qout. These numbers, which depend on the choice of sphere radii, help to interpret each state in terms of chemical bonding. This is an advantage of this type of basis set. There is a useful option to show the character of bands. As one example, three ways of presenting the band structure are illustrated for the refractory metal titanium carbide TiC in Fig. 1 of reference , showing the Ti-d (eg symmetry) and C-p character bands, which dominate the bonding in this case. The crystal field of TiC splits the fivefold degenerate Ti-d orbitals into t2g and eg states (with a degeneracy of 3 and 2, respectively). Another example is the band structure of Cu shown in .
The Fermi surface in a metal is often crucial for an understanding of properties (for example superconductivity). It can be calculated on a fine k-mesh and plotted (for example with XCrysDen ).
With a calculation for a (sufficiently fine) uniform mesh of k-points in the irreducible Brillouin zone, as discussed in connection with Figure 6, one can determine the density of states (DOS), which gives a good description of the electronic structure. The total DOS can be decomposed into its components by means of the qtℓm values mentioned above. This decomposition becomes even more important in complicated cases, for example if one wants to find which state originates from an impurity atom in a supercell.
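As a minimal sketch of this step (with made-up eigenvalues and equal k-point weights assumed for simplicity), a Gaussian-smeared DOS can be accumulated like this:

```python
import numpy as np

# Hypothetical Kohn-Sham eigenvalues E_nk (in eV) collected from a k-mesh;
# equal k-point weights are assumed.
eigenvalues = np.array([-1.2, -0.8, -0.75, 0.1, 0.3, 1.4])
sigma = 0.1                              # Gaussian smearing width (eV)
energies = np.linspace(-2.0, 2.0, 401)   # energy grid

dos = np.zeros_like(energies)
for e_nk in eigenvalues:
    dos += np.exp(-((energies - e_nk) ** 2) / (2 * sigma**2))
dos /= sigma * np.sqrt(2 * np.pi)        # each state integrates to one
```

A projected DOS follows the same pattern with each state weighted by its partial charge qtℓm.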
The electron density is the key quantity in DFT and thus contains the crucial information about chemical bonding, but the latter causes only small changes. Therefore it is often useful to look at difference densities, computed as the difference between the SCF density of the crystal and the superposed atomic densities (of neutral atoms), because in this presentation the changes due to bonding become more apparent. Sometimes it is useful to look at the densities corresponding to states in a selected energy window using various graphical tools (2- or 3-dimensional plots). Another possibility is a topological analysis according to Bader’s theory of atoms in molecules . Among other things, it allows one to uniquely define atomic charges within atomic basins, a relevant quantity for charge transfer. See also chapter 6.3 of .
The typical chemical bonds, like covalent, ionic or metallic bonds, can be well described within DFT. For their analysis the APW-type basis is very useful because it can provide chemical interpretations in terms of orbitals. Van der Waals (vdW) interactions, however, are not properly represented in conventional DFT: they can be approximately included by adding a Grimme correction, for example .
The total energy of a system is the main quantity within DFT. Especially for large systems this can be a rather large number, but nowadays it can be calculated with high precision. The interest is often in total energy differences for example to find out which of two structures is more stable. In such cases the two calculations need to be done in a very similar fashion (same functional, comparable k-mesh and basis set, same sphere sizes, etc.). It is also possible to compare cohesive or atomization energies, where the atoms must be modeled in the same fashion as the crystal (that is in a large supercell containing just the isolated atom).
The derivative of the total energy with respect to the nuclear coordinates yields the forces acting on the atoms, which are needed for structure optimization. See also chapter 6.5 of , in which such an optimization is discussed in connection with the bonding of hexagonal boron nitride on a Rh (111) metal substrate, where the two systems have a lattice mismatch of about 8 per cent (Figure 14). This mismatch in lattice spacing requires 13 x 13 unit cells of h-BN to match 12 x 12 unit cells of the underlying Rh (111) lattice to make the structure commensurate (with periodic boundary conditions). This special surface arrangement was called a “nanomesh”, with a spacing of about 3.2 nm. In order to simulate this system with a supercell, the face-centered-cubic (fcc) metal is represented by three layers (with a 12 x 12 Rh lattice) which are covered (on both sides of the metal slab) with a single layer of BN (with a 13 x 13 BN lattice), and then an empty region is added to simulate the surface. Although this is still a crude model of the real situation, it illustrates which kind of large systems can be studied nowadays. This model system contained 1108 atoms per unit cell but could be computed with WIEN2k. The corresponding calculations have shown that BN is no longer flat but becomes corrugated due to the different binding situations between BN and the metal substrate, which depend on the local geometry (favorable “pores” in some regions but unfavorable “wires” in others). This corrugated BN surface was found to agree with experimental data (for further details see [41-43]). Another example is the investigation of so-called misfit layer compounds , in which the bonding between the layers of TaS2 and PbS required that some Pb atoms be replaced by Ta in a disordered fashion. Relatively large supercells were needed in order to represent this cross substitution.
After relaxing the atomic positions the more likely arrangements have been determined on the basis of total energy differences.
In the case of magnetic systems spin-polarized calculations can provide the magnetic moments. In addition to collinear magnetic systems, non-collinear magnetism can also be handled, which was for example used in a study of UO2 (see ). Another example is the Verwey transition that was investigated for the double perovskite BaFe2O5. At low temperature this system has a charge-ordered state (with Fe2+ and Fe3+ at different sites) but above the Verwey transition temperature of about 309 K a valence-mixed state with the formal oxidation state Fe2.5+ appears. DFT calculations made it possible to interpret this complicated situation, see  and section 7.4.1 of . In the latter it was mentioned that it is now possible to use such calculations to look for fine details such as the magneto-crystalline anisotropy. This is defined as the total energy difference between the case where the magnetic moment points in the y direction (with the lowest energy) and the x direction. In this case the difference in energy is found to be about 0.4 mRy, while the total energy is −115,578.24065 Ry. The quantity of interest therefore lies around the tenth significant digit, illustrating the numerical precision that is needed for such a quantity. Needless to say, extremely well converged calculations were required, in which both cases are treated in practically the same way. This is necessary to obtain a cancellation of errors.
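The precision requirement can be made explicit with the numbers quoted above:

```python
# Numbers from the text: an anisotropy energy of ~0.4 mRy on top of a
# total energy of about -115,578.24065 Ry.
e_total = -115578.24065          # Ry
delta_e = 0.4e-3                 # Ry

relative = delta_e / abs(e_total)
# relative is ~3.5e-9: the signal sits around the tenth significant
# digit of the total energy, so both calculations must be converged
# (and treated identically) to that level.
```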
The electric field gradient (EFG) is a ground state property that is sensitive to the asymmetric charge distribution around a given nucleus. By measuring the nuclear quadrupole interaction (e.g. by NMR) the EFG can be determined experimentally provided the nuclear quadrupole moment is known. This is a local probe which often helps to clarify the local atomic arrangement. See also chapter 6.4 of . The EFG is a case where the semi-core states can significantly contribute as was shown for TiO2 in the rutile structure . Another important result was the nuclear quadrupole moment of 57Fe, the most important Mössbauer isotope. On the basis of DFT calculations for the EFG of several iron compounds this quantity had to be adjusted by about a factor of two .
Recently it has also become possible to obtain the NMR shielding (chemical shifts) , where the all-electron treatment opens the possibility of analyzing the dominant contributions that determine the chemical shifts, as has been illustrated for fluorides .
The calculation of various spectra (X-ray emission or absorption), optical spectra or energy loss near edge structure (ELNES) spectra can be performed within the independent-particle model. Some structures in the excitation spectra of interacting electrons, called quasi-particle peaks, can be directly related to the excitation of independent electrons as they are treated within DFT. However, others (for example satellite structures) cannot be understood in such a simple way and require more sophisticated approaches. For example, including the electron core-hole interaction requires the solution of the Bethe-Salpeter equation (BSE), as was illustrated for x-ray spectra . Often such schemes are based on many-body perturbation theory. One such approach is the GW approximation . This scheme allows calculating accurate band gaps or ionization potentials, which are not well determined by DFT eigenvalues. The GW approach is also available in connection with WIEN2k .
The interpretation of scanning tunneling microscopy (STM) data often requires a simulation by theory, which can distinguish between proposed surface structures. Such simulations are based on the Tersoff-Hamann  approximation, in which the images are obtained from the charge density originating from a set of eigenstates within a certain energy window around the Fermi energy, consistent with the applied voltage used in the STM measurements (see e.g. Fig. 6 in ).
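A minimal sketch of the Tersoff-Hamann idea (with random stand-in data, not a real surface): the simulated image is the sum of |ψ|² over the eigenstates that fall inside the bias window below the Fermi energy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, nx, ny = 20, 8, 8
# Stand-in eigenvalues (eV, relative to E_F) and |psi_i(r)|^2 sampled on
# a 2D grid at the tip height; in reality these come from the DFT run.
energies = rng.uniform(-2.0, 2.0, n_states)
psi_sq = rng.random((n_states, nx, ny))

bias = 0.5                               # 0.5 V: image occupied states just below E_F
window = (energies >= -bias) & (energies <= 0.0)
stm_image = psi_sq[window].sum(axis=0)   # simulated constant-height image
```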
Phonons can be calculated based on the dynamical matrix, which is obtained by displacing one atom in a large unit cell (or supercell) in a certain direction and determining the forces on all the other atoms. The necessary independent displacements are determined by the symmetry of the cell. By diagonalizing the dynamical matrix the phonon frequencies can be determined. Such information is also useful for example in connection with ferroelectrics, structural stability, thermodynamics or phase transitions.
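The last step can be sketched directly: given a (here made-up, mass-weighted) dynamical matrix, the phonon frequencies are the square roots of its eigenvalues.

```python
import numpy as np

# A small symmetric, positive-definite stand-in for the dynamical matrix
# (arbitrary units). A negative eigenvalue would yield an imaginary
# frequency, i.e. signal a structural instability (soft mode).
D = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.1, 1.0]])

eigvals = np.linalg.eigvalsh(D)      # eigenvalues, sorted ascending
frequencies = np.sqrt(eigvals)       # omega = sqrt(eigenvalue)
```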
For the analysis of phase transitions a fundamental understanding requires a combination of concepts, namely group theory, DFT calculations, frozen phonons, soft modes or bilinear couplings, and Landau theory. This was illustrated, for example for an Aurivillius compound , which shows multiple instabilities and has a phase transition to a ferroelectric state. For high pressure phase transitions a modified Landau theory was proposed and applied .
Maximally localized Wannier functions can be calculated with wien2wannier  and provide a good starting point for more sophisticated many-body theory. Dynamical mean field theory (DMFT) is one such example, as illustrated in . Another extension of WIEN2k is the calculation of Berry phases with wien2kPI, following the modern theory of polarization in a solid (for details see ref ).
Computer graphics and visualization (see ) can help to analyze the many intermediate results (atomic structure, character of energy bands, Fermi surfaces, electron densities, partial density of states, etc.). The more complex a case is, the more support from computer graphics is needed. For an elemental solid one can plot all the energy bands, but for systems with over 1000 atoms one would be lost interpreting the band structure without the help of visualization.
6. Computer code development
From the experience of developing the WIEN2k code some general conclusions can be drawn. Some of the historical perspectives have been summarized in section 7 of . During the last three to four decades it was often necessary to port the code to new architectures, starting from main-frame computers, vector processors, PCs, PC-clusters and shared-memory machines, up to multi-core parallel supercomputers. The power of computers has increased in several areas by many orders of magnitude, such as the available memory (from kB to TB) and the speed of communication (e.g. InfiniBand), all the way to the processors (CPUs). An efficient implementation of a code made it necessary to collaborate closely with mathematicians and computer scientists in order to find the optimal algorithms, which perform well on the available hardware. One example is the idea of using iterative diagonalization . A significant portion of the computational effort in WIEN2k calculations is the solution of the general eigenvalue problem (see Figure 6), which must be solved repeatedly within the SCF cycle. Changes from iteration to iteration are often small, and thus one can use the information from the previous iteration to define a preconditioner for the next one and thereby simplify the diagonalization and speed up the calculation.
Another aspect is the use of linear algebra libraries (e.g. ScaLAPACK, MKL), which were highly optimized by other groups, who spent a lot of effort on these tasks and helped us significantly to speed up our code. Simultaneously, increased computer power made it possible to treat much larger systems, especially using massive parallelization. The matrix size that we can handle on the available hardware has increased by about a factor of 1000 over the last several decades. Since solving the general eigenvalue problem scales as N³, the computer power needed to solve a 1000 times bigger system must be about a factor of 10⁹ higher, which is available now.
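The arithmetic behind this estimate:

```python
# Dense diagonalization costs O(N^3) floating-point operations, so a
# matrix dimension 1000 times larger means roughly a billion times
# more work.
size_factor = 1000
work_factor = size_factor ** 3   # 1000**3 = 10**9
```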
Often our computational strategy had to be changed or extended. For example, to compute a metallic crystal with a small unit cell many k-points in the Brillouin zone were needed to reach good convergence. In this case k-point parallelization was optimal. Nowadays we can treat large unit cells (containing about 1000 atoms). In such a case the reciprocal space is small and thus only a few k-points are needed for a good calculation. This requires new parallelization strategies, in which the large matrices must be distributed over many processors, where data locality and reduced communication are essential for achieving good parallel performance. Another aspect is the complexity of the code, with the many tasks that need to be solved (see Figure 6). If only a small fraction is not parallelized, it may keep many processors waiting for a result that is calculated on only a single processor. This has often led to new bottlenecks, which did not occur for smaller systems and thus had been ignored; good load balancing is therefore important. Growing computer power requires a continuous improvement of the code.
There are completely different ways of distributing a code (giving representative examples):
open source with a free download
use for registered user (with or without license fee), source code made available
limited access for registered users (with a yearly license)
software companies that distribute only executables
From a commercial point of view it is understandable that a company wants strict rules and does not make its source code available. From a scientific perspective, the WIEN2k group favors a different model: the source code is made available to registered users, who pay a small license fee once. This policy has helped to generate a “WIEN2k community”, in which many researchers around the world have contributed to the development of the code and can continue to do so in the future. It has helped in many aspects, such as finding and fixing bugs, but also adding new features which are made available to all WIEN2k users. In addition, several valuable suggestions were made, which allowed improving the documentation as well as implementing requested new features. We have organized more than 20 WIEN2k workshops worldwide, in which users are introduced to important concepts and learn how to run calculations and use the various associated tools. It has become standard for users to help each other and thus contribute to the development of computations on solids and surfaces. In total this policy has had a very positive impact for WIEN2k and the field.
The user friendliness of WIEN2k has been improved over the years. A graphical user interface, w2web, was mainly developed by Luitz (see ) and is especially useful for novice users or for tasks which are not done routinely. Later many default options were implemented, based on the experience of many previous calculations. This has made it much simpler to set up a calculation. For novice users or experimentalists this helps to get started without being an expert. However, there is also a drawback, namely the danger that the code will be used more like a black box: “push a button and receive the result”. In older versions the users were forced to think about how to run the calculation and thus had to look at the details. This is a common problem which all codes face.
With all the possibilities mentioned in the previous section it is often useful to combine different theories according to their advantages while keeping in mind their disadvantages. About 20 years ago the fields of quantum chemistry, DFT and many-body theory were completely separated and there was hardly any cooperation between them; this has fortunately changed. The strengths and weaknesses of the different approaches are recognized and mutually appreciated. The solution of complex problems can only be found in close collaboration of the corresponding experts.
7. Theory and simulations
7.1. Theory compared to experiment
Independent of which computer code is used, some general questions should be asked when theory and experiment do not agree. Some possible reasons for a disagreement are listed below (Figure 15):
Is the atomic structure model that was chosen for the computation adequate for the experimental situation, as already discussed in Section 2? An advantage of theory is that the structure is well defined, because it is taken as input. Experiments may have uncertainties (stoichiometry, defects, impurities, disorder). It can also be that the theory is based on an idealized structure such as an infinite crystal, whereas in the experiment surface effects cannot be neglected. If the latter are included in a supercell calculation, one has periodic boundary conditions and thus still assumes an ordered structure, while in the experiment the sample may be disordered or contain defects or impurities. A delicate question for the experimentalist is whether the sample that has been measured is (at least) close to the system that was assumed in the simulation.
Is the chosen quantum mechanical treatment appropriate for the given system? Is a mean-field DFT approach adequate? Are more sophisticated treatments (especially for correlation) needed, or could the self-interaction error within DFT cause the problem? Is there a significant dependence on the functional chosen within DFT?
Is the performed calculation fully converged to the required accuracy, for example in terms of the basis set (the number of plane waves or, in other cases, the choice of pseudopotential) or the underlying k-point mesh? In this context an evaluation of the chosen computer code can be important. Recently error estimates of solid state DFT calculations have been derived , in which the WIEN2k code plays the role of providing the standard (i.e. the most accurate calculation). The idea is that different implementations of the same first-principles formalism (the same DFT functional) should lead to the same results or predictions. Would a different code yield other results (within a small error bar)? These tests showed that significant improvements of standard pseudopotentials were necessary in order to reach the required accuracy. It was shown in  that the typical deviation (e.g. in total energy) between codes of different accuracy is an order of magnitude smaller than the typical difference with experiment.
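Such a convergence check can be automated in a few lines. The following is a hypothetical sketch only: `run_total_energy` is an invented stand-in for launching an actual DFT run (its smooth model form is fabricated for illustration), and the parameter names are not inputs of WIEN2k or any particular code. The idea is simply to increase one parameter at a time until the total energy changes by less than a chosen tolerance.

```python
# Hypothetical convergence test. run_total_energy() is a placeholder for a
# real DFT run; here it is a smooth model invented purely for illustration.

def run_total_energy(cutoff, kmesh):
    """Placeholder: would launch a DFT run and return the total energy (Ry)."""
    # Model: the error decays with the basis-set cutoff and the k-mesh density.
    return -100.0 + 1.0 / cutoff**2 + 0.5 / kmesh**3

def converge(values, energy_of, tol=5e-3):
    """Increase one parameter until the total energy changes by less than tol."""
    last = None
    for v in values:
        e = energy_of(v)
        if last is not None and abs(e - last) < tol:
            return v, e
        last = e
    return values[-1], last  # not converged within the scanned range

# Converge the cutoff at a fixed k-mesh, then the k-mesh at the found cutoff.
cutoff, _ = converge([4, 5, 6, 7, 8, 9, 10], lambda c: run_total_energy(c, 8))
kmesh, _ = converge([2, 4, 6, 8, 10, 12], lambda k: run_total_energy(cutoff, k))
print(f"converged cutoff={cutoff}, k-mesh={kmesh}")
```

In practice one would converge each property of interest (not only the total energy), since different quantities converge at different rates.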
Is the property of interest a ground state property or are excited states involved? For example the electron core-hole interaction requires at least a treatment based on the Bethe-Salpeter equation (BSE) (see ).
What about temperature and pressure? Often the calculated results correspond to T=0 K but the experiment is carried out at room temperature. Can this difference be ignored? If phonons are included in the calculations, then at least thermodynamic estimates for a system at higher temperature can be obtained. Varying the temperature is easy for experiments but difficult for theory. For pressure it is the other way around: pressure is easy for theory but extremely difficult for experiments.
When a computation agrees with the experimental observations one should still be careful, because this is not a proof that the issues mentioned above are non-existent. There can be a cancellation of errors, for example a crude model combined with a poorly converged calculation. For a theorist there is always the temptation to stop improving a calculation when it already agrees with the experiments. Sometimes a deviation helps to find a better treatment in terms of atomic models, quantum mechanics (DFT functional), basis sets, temperature or some other factor that may be important in the specific case.
7.2. Results that can be provided by computations
It is appropriate to list aspects where theory has advantages over experimental work:
One can carry out computer experiments irrespective of the abundance, environmental effects or cost of materials. Even unstable or artificial systems (which cannot be measured) can be computed. One might wish to understand why they cannot be prepared. Sometimes several proposals (based on experiments or intuition) are under discussion: as long as there are not too many, theory can explore all of them and hopefully find the one which agrees with what is known about the system.
In surface science and catalysis the calculation of observable properties is often needed, such as STM images, X-ray spectra, vibrational frequencies, electric field gradients, etc. In addition, comparing the total energies of proposed atomic structures (after structure optimization) is often essential for determining which atomic structure is likely to be correct. For an understanding of such materials problems (for example at a surface) a combination of theory and experiment is essential.
With a good calculation and by using all available tools one can gain insight and a fundamental understanding, especially in terms of trends, for which perfect agreement with experiment is not necessary. On such a basis systematic predictions can be made which can replace the trial and error scheme that is often used in materials optimization.
One often wants to know the driving force for a certain change in properties. Does it come from a substitution, a difference in chemical bonding, or from the related relaxation of the lattice around the impurity? Sometimes it is possible to set up artificial intermediate models which vary only one of these parameters at a time. Then an analysis can provide the answer.
For materials with a well-defined structure and moderate correlation effects theory can predict experimental results. However, there are many interesting cases in materials science where the details matter. Often the interest arises because the system is close to a transition (e.g. becoming magnetic, ferroelectric, or close to a metal-insulator transition). In such cases the two phases of the system can be rather close (e.g. in energy) and thus need special attention. Take the perovskite SrTiO3 as an example. In this well-known structure Ti is surrounded by six oxygen atoms forming an octahedron. Under pressure (or with temperature) these octahedra tilt in a certain fashion, leading to a structural phase transition. If DFT yields a lattice constant that deviates by about 1 percent from experiment, one could call this good agreement. In this case, however, this small deviation causes a difference of about 3 percent in the unit cell volume, which is sufficiently large to determine whether or not the tilt occurs (and thus such a detail matters). In such a case, one can choose another functional (for better agreement in volume) or one can carry out the calculation at the experimental volume, which can be obtained experimentally with high precision. With this choice a calculation can describe the phase transition properly.
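The sensitivity quoted for SrTiO3 follows from simple arithmetic: for a cubic cell V = a³, so a relative error x in the lattice constant gives a relative volume error of (1 + x)³ − 1 ≈ 3x. A minimal check:

```python
# For a cubic cell V = a**3, so a 1% deviation in the lattice constant a
# translates into roughly a 3% deviation in the unit cell volume.
a_err = 0.01                     # 1% relative error in a
v_err = (1 + a_err)**3 - 1       # exact relative error in V = a**3
print(f"relative volume error: {v_err:.4%}")  # about 3.03%
```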
In this chapter a selection of aspects which play a role in modern computational theory of solids has been given. From the atomic structure to the properties of inorganic materials, a wide range of disciplines had to be included, which are listed below:
chemistry: intuition, interpretation, chemical bonding, stability
physics: fundamentals and concepts, quantum mechanics, relativity
crystallography: space groups, symmetry relations, group subgroup relations
materials science: understanding of trends, application, availability, environment
mathematics: formalism, algorithms, numerics, accuracy
computer science: data management, databases, memory, communication, parallelization, load balancing, efficiency
It is clear that many details had to be skipped and only a few references could be given to guide the reader to the corresponding literature. Details of using the computer codes often change and thus it is highly recommended to look at the updated versions on the web (www.wien2k.at) or at the newest literature in this field. Selected results that can be obtained with WIEN2k have been summarized in this chapter. For a given atomic structure the electronic structure (band structure, density of states, and electron density) provides the basis for understanding chemical bonding. The corresponding total energy allows one to judge the relative stabilities of various phases or modifications. The effects of surfaces or even disordered structures can be simulated with sufficiently large supercells. Properties of insulators, metals, superconductors, magnets, etc. can be explained. Several quantities (such as spectra) can be computed which allow a direct comparison with experimental data. In some cases it is necessary to go beyond conventional DFT in order to reach agreement with experiment, but DFT results are often an important and useful starting point.
It shall be stressed again that it is very useful to have a large variety of computer codes in this field. Each code has its own emphasis on various aspects, such as accuracy, efficiency, user friendliness, robustness, portability with respect to hardware, features, properties and more. Some codes are specialized for certain cases (e.g. treating only insulators) but do not work for other systems. This variety has helped to increase the importance of simulations in this field. For the comparison between theories (simulations) and experimental data, several general considerations have been summarized which are important for all kinds of computer codes.
I am very grateful and want to thank the many researchers who have helped develop the WIEN2k code and made many important contributions or suggestions. First of all I want to thank Peter Blaha, who has been part of the WIEN2k team for almost 35 years and who is the main person managing the code development. He has also been involved in many research projects that were carried out with this code. In addition I want to thank S.B. Trickey from the University of Florida, who initiated the development and publication of the code (see , and the review ), as well as the other co-authors of the code  and the many researchers who have contributed, as acknowledged in a link at www.wien2k.at. I want to thank in particular Eamon McDermott, who helped improve the English.