Open access peer-reviewed chapter

Evaluating DSGE Models: From Calibration to Cointegration

Written By

Bjørnar Karlsen Kivedal

Reviewed: 25 April 2023 Published: 17 May 2023

DOI: 10.5772/intechopen.111677

From the Edited Volume

Econometrics - Recent Advances and Applications

Edited by Brian W. Sloboda


Abstract

This chapter examines the historical development of estimating new Keynesian dynamic stochastic general equilibrium (DSGE) models. I focus, in particular, on how cointegration can be used to test and estimate the relationships in these models, using a simple RBC model as an example. Empirical evaluation of a model is critical to validate the theory, and this should be an essential step when analyzing DSGE models. The chapter illustrates the use of various estimation techniques when estimating DSGE models and compares these methods to using cointegration when estimating and evaluating DSGE models.

Keywords

  • DSGE models
  • calibration
  • estimation
  • cointegration
  • RBC model

1. Introduction

Some of the first aggregate macroeconometric models describing national business cycles were developed by Jan Tinbergen in the 1930s. A model for the US was published in 1939 [1], estimated recursively by the ordinary least squares method, based on theoretical dynamic business cycle models such as the one developed by [2]. Tinbergen’s work was further developed by [3], who discussed testing economic theory by statistical inference using empirical observations. Furthermore, [4] emphasized using a system of simultaneous equations in order to model the economy and suggested using other estimation methods than ordinary least squares on each equation. Several macroeconometric models were constructed for the US following this, most notably from the work by the Cowles Commission for Research in Economics such as the models by [5, 6]. These were followed by a number of other models of the same type. See, for example, [7] for an historical overview of macroeconometric models.

Macroeconometric models such as these were constructed based on historical data, which were used both for estimating the parameters and for determining the model structure. A structural change in the economy could, therefore, render the econometric model irrelevant. If these models were not invariant to such changes, they would not be usable for policy analysis, as pointed out by [8]. This became known as “the Lucas critique,” suggesting that the behavior of the agents in the economy needed to be explained by a structural model instead of aggregate historical relationships. This was needed in order to have a model invariant to policy changes. In particular, the parameters of the model that determine tastes and preferences should be invariant to policy changes, while the remaining parts of the model should be regarded as stochastic.

In response to the Lucas critique, real business cycle (RBC) models, as introduced by [9], used microeconomic foundations, where consumers and firms optimized their intertemporal utility or profits under rational expectations. Extensions of the model with various rigidities, monopolistic competition, and short-run non-neutrality of monetary policy led to new Keynesian models,1 which have since become the standard both for forecasting and policy evaluation (see e.g. [10]). These models are examples of dynamic stochastic general equilibrium (DSGE) models, and they are typically solved by finding the first-order conditions for the optimization problems of the representative agents of the model. The first-order conditions are then expressed in log deviation from the steady state of the model such that a (log) linear model is obtained. This yields a model where the variables are expressed as log deviations from their respective steady-state values, that is, approximately percentage deviations from steady state. Furthermore, the part of the model based on preferences should be invariant to policy changes since policy changes should be modeled as stochastic. Hence, the structural DSGE model may be tested by imposing the hypotheses from the model as restrictions on a statistical model. This amounts to testing the Lucas critique since the structural part of the model is tested. If it is not rejected, the model may be useful for policy analysis. DSGE models are often used for analyzing monetary policy. Among the most popular models used are the medium-scale models in [11, 12], focusing on the US and the Euro area, respectively.

The RBC model of [9] can be considered a cornerstone of DSGE models, and DSGE models are typically extended versions of RBC models. RBC models include optimizing agents with rational expectations, and only one shock is sufficient to generate business cycles. This shock is usually a shock to technology or productivity, modeled as an exogenous variable that enters the production function. In addition to this, DSGE models also include frictions, which account for much of the observed dynamics. Most important is price stickiness, usually modeled as Calvo pricing [13]. Other frictions, such as wage rigidities (see [14, 15]), are also often found in DSGE models. Other shocks and rigidities are also often included in models in order to allow for more detailed dynamics. However, many of these frictions are found to be relatively unimportant empirically, see [11], and are thus not necessary to explain the dynamics found in the data.

In general, a nonlinear DSGE model can be formulated as

$$E_t\, f\!\left(y_{t+1}, y_t, y_{t-1}, u_t\right) = 0 \qquad (1)$$

and has a rational expectations solution

$$y_t = g\!\left(y_{t-1}, u_t\right). \qquad (2)$$

A linear approximation of such a model is usually used. This is given as

$$\hat{y}_t = T(\theta)\,\hat{y}_{t-1} + R(\theta)\,u_t, \qquad (3)$$

where $T(\theta)$ and $R(\theta)$ are time-invariant matrices that are functions of the structural parameters $\theta$ of the model, $u_t \sim N(0, Q)$, and $\hat{y}_t = \log(y_t/\bar{y})$. This is then solved for the representative agent with full information about the model and the structural shocks. For more details, see, for example, [16], which much of the presentation in this chapter is inspired by. Other useful sources for more information are [17, 18, 19].
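As a small illustration of what the linear solution in eq. (3) delivers, the sketch below simulates $\hat{y}_t = T(\theta)\hat{y}_{t-1} + R(\theta)u_t$ in R for hypothetical values of $T$, $R$, and $Q$; the numbers are placeholders and not derived from any calibrated $\theta$. Stability requires the eigenvalues of $T$ to lie inside the unit circle, which the sketch checks explicitly.

# Minimal sketch: simulate the linear solution y_hat_t = T y_hat_{t-1} + R u_t.
# T_mat, R_mat and Q are hypothetical placeholders, not functions of structural parameters.
set.seed(1)
T_mat <- matrix(c(0.95, 0.00,
                  0.30, 0.50), nrow = 2, byrow = TRUE)
R_mat <- diag(2)
Q     <- diag(c(0.007, 0.005)^2)                # covariance matrix of u_t (hypothetical)
stopifnot(all(Mod(eigen(T_mat)$values) < 1))    # stability check
periods <- 200
y_hat <- matrix(0, nrow = periods, ncol = 2)
for (t in 2:periods) {
  u_t <- as.vector(t(chol(Q)) %*% rnorm(2))     # draw u_t ~ N(0, Q)
  y_hat[t, ] <- T_mat %*% y_hat[t - 1, ] + R_mat %*% u_t
}
matplot(y_hat, type = "l", lty = 1, ylab = "log deviation from steady state")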

The next section presents a simple RBC model, which is a special case of a DSGE model. The following sections use this model as an example in order to illustrate calibration, generalized method of moments, full information maximum likelihood, and Bayesian methods. Section 7 presents the cointegrated vector autoregressive model and how to test implications of a DSGE model, while the final section concludes. There is also some code relevant for investigating the model shown in Appendix A.


2. A simple RBC model

If we consider the simple RBC model in [20], we have households that maximize

$$E_t \sum_{t=0}^{\infty} \beta^t \left[\ln c_t + \gamma\left(1 - n_t\right)\right] \qquad (4)$$

subject to the budget constraint

$$x_t + c_t = w_t n_t + r_t k_t \qquad (5)$$

and

$$k_{t+1} = (1-\delta)k_t + x_t. \qquad (6)$$

Here, $c_t$ is consumption and $n_t$ labor (hours worked) in period $t$. $\gamma$ is the utility weight, $x_t$ investment, $w_t$ the real wage, $r_t$ the rental rate of capital, $k_t$ the capital stock, and $\delta$ the depreciation rate.

This yields the first-order conditions

$$1/c_t = \beta E_t\left[\left(1/c_{t+1}\right)\left(1 + r_{t+1} - \delta\right)\right] \qquad (7)$$
$$\gamma c_t = w_t, \qquad (8)$$

which provide the optimal choices of $c_t$, $n_t$, and $k_{t+1}$. Eq. (7) is an Euler equation, while eq. (8) equates the marginal rate of substitution between leisure and consumption to the real wage.

A single good is produced by perfectly competitive firms (who maximize their profits each period)

$$y_t = z_t k_t^{\alpha} n_t^{1-\alpha}, \qquad (9)$$

where $0 < \alpha < 1$, $y_t$ is output, and $z_t$ is the technology shock. The technology shock follows an exogenous stochastic process

$$\ln z_{t+1} = \rho \ln z_t + \varepsilon_{t+1}, \qquad (10)$$

where $\varepsilon_t$ is independently, identically, and normally distributed with zero mean and variance $\sigma^2$.

The firm chooses the input levels (capital and labor) to maximize profits, such that the marginal product of labor (capital) equals the real wage (rental rate).

Hence, the competitive equilibrium is the sequence of prices $\{w_t, r_t\}_{t=0}^{\infty}$ and allocations $\{c_t, n_t, x_t, k_{t+1}, y_t\}_{t=0}^{\infty}$ such that firms maximize profits, agents maximize utility, and all markets clear. The structural parameters of the model are, thus, $\beta$, $\gamma$, $\delta$, $\alpha$, and $\rho$. These parameters describe behavior, and we are, therefore, interested in assessing the values of these parameters.

Some steady-state relationships of the model are

$$k/n = \left(\frac{1/\beta + \delta - 1}{\alpha}\right)^{\frac{1}{\alpha-1}} \qquad (11)$$
$$c/y = 1 - \frac{\alpha\delta}{1/\beta + \delta - 1}. \qquad (12)$$

Hence, the long-run relationship between capital and hours worked, k/n, and the long-run relationship between consumption and output, c/y, may be described by a combination of structural parameters and should, thus, be constant in the long run.
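To see where these expressions come from, note that in the steady state consumption is constant and $z = 1$, so the Euler Eq. (7) pins down the rental rate, and the firm’s first-order condition for capital then gives the capital-to-hours ratio:

$$\frac{1}{c} = \beta\,\frac{1}{c}\,(1 + r - \delta) \;\Rightarrow\; r = \frac{1}{\beta} + \delta - 1, \qquad r = \alpha\left(\frac{k}{n}\right)^{\alpha-1} \;\Rightarrow\; \frac{k}{n} = \left(\frac{1/\beta + \delta - 1}{\alpha}\right)^{\frac{1}{\alpha-1}}.$$

Eq. (12) follows similarly from the resource constraint, $c/y = 1 - x/y$, using $x/y = \delta k/y$ and $k/y = \alpha/(1/\beta + \delta - 1)$.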

Such a model is often log-linearized (i.e., written in terms of log deviations from the theoretical steady state: $\hat{x}_t \equiv \log x_t - \log \bar{x}$) in order to have a stationary representation. This yields

$$
\begin{aligned}
E_t\hat{c}_{t+1} &= \hat{c}_t + \alpha\beta (k/n)^{\alpha-1}\left[(\alpha-1)E_t\hat{k}_{t+1} + (1-\alpha)E_t\hat{n}_{t+1} + E_t\hat{z}_{t+1}\right] \\
\hat{n}_t &= -\frac{1}{\alpha}\hat{c}_t + \hat{k}_t + \frac{1}{\alpha}\hat{z}_t \\
\hat{y}_t &= \alpha\hat{k}_t + (1-\alpha)\hat{n}_t + \hat{z}_t \\
\hat{y}_t &= \left(1-\delta(k/n)^{1-\alpha}\right)\hat{c}_t + \left[1-\left(1-\delta(k/n)^{1-\alpha}\right)\right]\hat{x}_t \\
\hat{k}_{t+1} &= (1-\delta)\hat{k}_t + \delta\hat{x}_t \\
\hat{z}_{t+1} &= \rho\hat{z}_t + \varepsilon_{t+1}
\end{aligned} \qquad (13)
$$

where the log deviations can be interpreted as percentage deviations from the steady state.

The log-linearized model has the solution

$$s_t = \Phi \xi_t, \qquad \xi_t = D \xi_{t-1} + v_t. \qquad (14)$$

or

$$\begin{pmatrix}\hat{y}_t\\ \hat{n}_t\\ \hat{c}_t\end{pmatrix} = \begin{pmatrix}\phi_{yk} & \phi_{yz}\\ \phi_{nk} & \phi_{nz}\\ \phi_{ck} & \phi_{cz}\end{pmatrix}\begin{pmatrix}\hat{k}_t\\ \hat{z}_t\end{pmatrix}, \qquad \begin{pmatrix}\hat{k}_t\\ \hat{z}_t\end{pmatrix} = \begin{pmatrix}d_{11} & d_{12}\\ d_{21} & d_{22}\end{pmatrix}\begin{pmatrix}\hat{k}_{t-1}\\ \hat{z}_{t-1}\end{pmatrix} + \begin{pmatrix}\varepsilon_t^k\\ \varepsilon_t^z\end{pmatrix}. \qquad (15)$$

The eigenvalues and eigenvectors of the matrix in the system with expectational terms are used in order to calculate the Φ matrix.

There are different ways of obtaining values for the parameters $\beta$, $\gamma$, $\delta$, $\alpha$, $\rho$ in the model. Using calibration, we choose the values subjectively or objectively in order to use the model for simulation, while we may estimate the values of the parameters based on observed economic data $\{c_t, n_t, y_t\}_{t=0}^{T}$ by using statistical methods. In the next sections, we will compare calibration and estimation using generalized method of moments (GMM), full information maximum likelihood (FIML), Bayesian methods, and the cointegrated vector autoregressive (CVAR) model.


3. Calibration

At first, these models were mainly calibrated and simulated, as proposed by [9]. This was done by fixing the values of structural parameters according to empirical studies in microeconomics or moments of the data such as long-run “great ratios” representing historical relationships between the variables.

We can calibrate the RBC model in [20] by setting the parameter values of the model, that is, assigning values to $\beta$, $\gamma$, $\delta$, $\alpha$, $\rho$. This was popular before one was technically able to estimate large models and is used in order to undertake computational experiments with the model. However, it is not possible to estimate parameters and test hypotheses regarding these when using calibration.
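As an illustration of what calibration amounts to in practice, the sketch below assigns conventional quarterly values to $\beta$, $\delta$, and $\alpha$ (the numbers are illustrative choices, not taken from this chapter) and computes the implied steady-state great ratios from eqs. (11) and (12).

# Calibration sketch: assign parameter values and compute the implied
# steady-state ratios from eqs. (11) and (12). The values are illustrative.
beta  <- 0.99   # discount factor (quarterly)
delta <- 0.025  # depreciation rate
alpha <- 0.36   # capital share
r_ss <- 1 / beta + delta - 1               # steady-state rental rate from the Euler equation
k_n  <- (r_ss / alpha)^(1 / (alpha - 1))   # capital-to-hours ratio, eq. (11)
c_y  <- 1 - alpha * delta / r_ss           # consumption-to-output ratio, eq. (12)
cat("k/n =", round(k_n, 2), " c/y =", round(c_y, 2), "\n")

The implied ratios can then be compared with the corresponding great ratios in the data as a first check of the calibration.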

Calibration is often used in order to back out shocks (shock decomposition) and to compare correlations between simulated variables and the data. Outcomes of the calibrated model may then be compared to descriptive statistics (e.g. the moments) of the data, and the model may be used in order to forecast or conduct policy analysis. The in-sample forecast performance may be assessed using measures such as root mean squared errors.

Hence, it is possible to judge how the model fits the empirical reality even if we do not estimate the model. Calibration may also be useful for getting a first impression of the model before it is completely developed and estimated. However, we are not able to say anything about uncertainty. It may also be a useful approach when data are not available or only small samples can be obtained, which can be relevant for some regions and countries.


4. Generalized method of moments

Later, estimation using generalized method of moments (GMM) for single equations was conducted in order to estimate some of the parameters in the model. See, for example, [21] or [22] for estimation of the new Keynesian Phillips curve. GMM was introduced by [23] and first applied to DSGE models by [24, 25].

The method consists of minimizing the distance between some functions of the data and the model. Estimation can, therefore, be conducted using the (nonlinear) first-order conditions such that it is not required that we solve the model before estimating the parameters. However, we need a set of moment conditions in order to perform GMM, and it is a type of limited information estimation since we only utilize part of the theoretical model and not necessarily observations for all of the variables in the model. In particular, we have no likelihood function but only specific moments of interest that are adjusted to the data (called matching moments or orthogonality conditions).

Hence, we aim to minimize the distance between the observed moments from the sample and the population moments implied by the model. In general, the estimate of a parameter $\theta$ is

$$\hat{\theta}_T = \arg\min_{\theta} Q_T(\theta) \qquad (16)$$

where

$$Q_T = \left[\frac{1}{T}\sum_{t=1}^{T} f(y_t, \theta)\right]' W_T \left[\frac{1}{T}\sum_{t=1}^{T} f(y_t, \theta)\right]. \qquad (17)$$

Here, $W_T$ is the weighting matrix, which becomes important when there are more moment conditions than parameters. We thus seek the value of $\theta$ that minimizes $Q_T$, which is a quadratic form in the sample moments; this minimizer is the GMM estimator of $\theta$.

We may consider the Euler equation in the RBC model in [20], which was

$$1/c_t = \beta E_t\left[\left(1/c_{t+1}\right)\left(1 + r_{t+1} - \delta\right)\right]. \qquad (18)$$

In order to estimate the parameters $\beta$ and $\delta$, we need two moment conditions since there are two parameters in one condition (equation). The first moment condition can be the Euler equation

$$E_t\left[\beta\,\frac{c_t}{c_{t+1}}\left(1 + r_{t+1} - \delta\right)\right] = 0, \qquad (19)$$

or, more correctly, that $E_t\left[\beta\,\frac{c_t}{c_{t+1}}\left(1 + r_{t+1} - \delta\right) - 1\right] = 0$. The second may be

$$E_t\left[\left(\beta\,\frac{c_t}{c_{t+1}}\left(1 + r_{t+1} - \delta\right) - 1\right)\frac{c_t}{c_{t-1}}\right] = 0, \qquad (20)$$

since a term with expectation zero multiplied by any variable in the information set (an instrument) still has expectation zero.

Hence, the data are $c_{t+1}$, $c_t$, and $r_{t+1}$, and the instruments are $1$ and $c_t/c_{t-1}$; $r_t$ could also have been used as an instrument. This implies that the averages (first moments) of these data series are used in order to estimate the parameter values.
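To make the estimation step concrete, the sketch below stacks the two moment conditions (19) and (20) for a hypothetical data frame df with columns c (consumption) and r (rental rate), forms $Q_T$ with an identity weighting matrix, and minimizes it numerically over $(\beta, \delta)$. It is a stylized sketch under these assumptions, not the estimator used in the cited studies.

# GMM sketch for (beta, delta) based on the Euler-equation moments (19)-(20).
# Assumes a hypothetical data frame `df` with columns `c` (consumption) and `r` (rental rate).
moments <- function(theta, df) {
  beta <- theta[1]; delta <- theta[2]
  n   <- nrow(df)
  idx <- 2:(n - 1)                                     # need both t-1 and t+1
  e   <- beta * (df$c[idx] / df$c[idx + 1]) * (1 + df$r[idx + 1] - delta) - 1
  z   <- df$c[idx] / df$c[idx - 1]                     # instrument c_t / c_{t-1}
  c(mean(e), mean(e * z))                              # sample moment conditions
}
Q_T <- function(theta, df, W = diag(2)) {
  g <- moments(theta, df)
  as.numeric(t(g) %*% W %*% g)                         # quadratic form in the sample moments
}
# Usage, given some data frame `df`:
# fit <- optim(par = c(0.98, 0.05), fn = Q_T, df = df, method = "L-BFGS-B",
#              lower = c(0.9, 0.001), upper = c(0.999, 0.2))
# fit$par   # GMM estimates of beta and delta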

When using GMM, the choice of instruments may impact the estimation. We may also have issues with unobserved variables. If analytical moment conditions are impossible or hard to obtain, they can be computed numerically by simulation (often called simulated GMM). This is particularly useful if there are unobservable variables in Euler equations such as the one above or if there are nonlinear functions of steady-state parameters. Further, a large sample is needed in order for asymptotic theory to apply, and Monte Carlo studies have not been favorable to GMM [26].


5. Full information maximum likelihood

In order to identify the structural parameters of the system (i.e. the complete theoretical model), the full system should be estimated. Full information maximum likelihood (FIML) estimation, such as, for example, in [26], can be used in order to estimate the parameters of new Keynesian models. Hence, this uses full information from the model rather than the limited information approach used in GMM, where we looked at some moment conditions. When using maximum likelihood, the estimated parameters will be the ones that provide the maximum of the likelihood function or the log of the likelihood function.

In general, the data (y) depend on the unknown parameters θ through a probability density function

$$y \sim f(y \mid \theta). \qquad (21)$$

The estimator is then θ̂, and it is a function of the data

$$\hat{\theta} = g(y). \qquad (22)$$

Given the observed y0, the estimator is then obtained by the likelihood function

$$\hat{\theta} = \arg\max_{\theta} f(y_0 \mid \theta), \qquad (23)$$

or the log of this function. We then get the value of $\theta$ that provides the maximum of $f(y_0 \mid \theta)$, that is, the parameters that yield the maximum probability of observing $y_0$.

Since the equations in DSGE models typically are nonlinear, we need to solve the model first in order to obtain a system where all the endogenous variables are expressed as functions of the exogenous variables and the parameters of the model. In practice, a linear approximation of the model is usually solved instead, and the variables are then represented as deviations from the theoretical steady state (see Section 2). The structural parameters are estimated, and the model is assumed to be the true data generating process, see, for example, [26] or [27].

Almost all log-linearized DSGE models have a state-space representation

$$x_t = A x_{t-1} + B \varepsilon_t, \qquad y_t = C x_t + D \eta_t, \qquad (24)$$

where $x_t$ is a vector containing the endogenous and exogenous state variables, and $y_t$ is a vector containing the observed variables. Hence, the equation for $y_t$ is the measurement equation, linking the data to the model. The objective is to estimate the parameters given the observed $y_t$. The error term $\eta_t \sim N(0, R)$ is independent of $x_t$, and $\varepsilon_t \sim N(0, Q)$ is independent of $x_0, x_1, \ldots, x_{t-1}$ and $y_1, \ldots, y_{t-1}$. Further, the matrices $A$, $B$, $C$, and $D$ contain nonlinear functions of the structural parameters.

If both $x_t$ and $y_t$ contain observables, the state-space representation is a restricted VAR(1). If not, we may use a Kalman filter [28] in order to obtain the expected value of the unobservable variables and the likelihood function. This provides one-step-ahead forecast errors (in-sample) and the recursive variance of the forecast errors. Hence, the Kalman filter gives the expected value of all of the potentially unobserved variables given the history of the observed variables.
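A minimal sketch of the Kalman filter recursion for the state-space form in (24) is shown below; it returns the Gaussian log-likelihood built from the one-step-ahead forecast errors. The matrices A, B, C, D, Q, and R would in practice be functions of the structural parameters; here they are simply taken as inputs, and the diffuse initialization of the state covariance is an illustrative choice.

# Minimal Kalman filter sketch for the state space in (24):
#   x_t = A x_{t-1} + B eps_t,  eps_t ~ N(0, Q)
#   y_t = C x_t     + D eta_t,  eta_t ~ N(0, R)
# Y is an (n_y x T) matrix of observations; the function returns the log-likelihood.
kalman_loglik <- function(Y, A, B, C, D, Q, R) {
  n_x    <- nrow(A)
  x_pred <- matrix(0, n_x, 1)         # initial state: zero deviation from steady state
  P_pred <- diag(n_x) * 10            # large initial state covariance (illustrative)
  BQB <- B %*% Q %*% t(B)
  DRD <- D %*% R %*% t(D)
  loglik <- 0
  for (t in 1:ncol(Y)) {
    v_t <- Y[, t, drop = FALSE] - C %*% x_pred        # one-step-ahead forecast error
    F_t <- C %*% P_pred %*% t(C) + DRD                # forecast error variance
    loglik <- loglik - 0.5 * (nrow(Y) * log(2 * pi) +
                              log(det(F_t)) + t(v_t) %*% solve(F_t) %*% v_t)
    K_t    <- P_pred %*% t(C) %*% solve(F_t)          # Kalman gain
    x_filt <- x_pred + K_t %*% v_t                    # updated (filtered) state
    P_filt <- P_pred - K_t %*% C %*% P_pred
    x_pred <- A %*% x_filt                            # predict next period's state
    P_pred <- A %*% P_filt %*% t(A) + BQB
  }
  as.numeric(loglik)
}
# FIML then amounts to maximizing kalman_loglik over the structural parameters
# that determine A, B, C, D, Q, and R, for example via optim().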

FIML also has some limitations, depending on what the DSGE model looks like. Firstly, we need as many shocks as observable variables in order to perform the estimation; otherwise the likelihood is singular. This is known as stochastic singularity, and we thus often need to add shocks or measurement errors to the model in order to utilize the full potential of a data set with many variables. The model in [20] has only one shock, $\varepsilon_t$, but three observables, $y_t$, $c_t$, $n_t$. We can thus add structural shocks or measurement errors to the model if we want to utilize data on all observables.

When using FIML, we assume that the model is the correct representation of the data generating process (DGP). Hence, FIML is sensitive to misspecification since we estimate the model under this assumption. We often also have partial or weak identification of parameters when using FIML, see, for example, [29]. Both of these issues may lead to “the dilemma of absurd parameter estimates” [30], which implies that FIML estimates of structural parameters can often be at odds with additional information economists may have.


6. Bayesian methods

Since DSGE models contain a lot of parameters and often use a relatively small sample of quarterly data, the likelihood function typically contains a lot of local maxima and minima and nearly flat surfaces [19], making identification hard. In order to circumvent this issue, DSGE models are often estimated using Bayesian estimation. This combines a prior distribution with a likelihood-based estimation such as FIML presented in the previous section. This, thus, takes some of the problems with maximum likelihood estimation into account. However, the estimated parameters using Bayesian methods do not necessarily reflect all of the information in the data since prior distributions will influence the estimates to some extent. Using priors may also hide identification problems, which is an issue often neglected when estimating DSGE models [29].

The main difference between FIML and Bayesian methods is the way data are treated or interpreted. In frequentist methods such as FIML, the parameters are fixed and the data are random. This allows us to estimate the variance of the estimator and confidence intervals, that is, intervals constructed so that they contain the true parameter value $(1-\alpha)\cdot 100$ percent of the time in repeated samples. Bayesian inference assumes that the data are fixed and that the parameters are unknown. We may, therefore, focus on the variance of the parameter (rather than the variance of the estimator). Credible intervals in Bayesian estimation show the interval that has the highest probability of including $\theta$ conditional on the observed data, a prior distribution on $\theta$, and a functional form (the DSGE model in our case).

If we have the model $f(y \mid \theta)$ and the prior $f(\theta)$, we want to find the posterior probability density function $f(\theta \mid y)$. Using Bayes’ rule, we have

$$f(\theta \mid y) = \frac{f(y \mid \theta) f(\theta)}{f(y)} \qquad (25)$$
$$f(\theta \mid y) \propto f(y \mid \theta) f(\theta). \qquad (26)$$

Hence, the posterior kernel equals the likelihood of the model multiplied by the prior. We can, thus, find the distribution of the unknown parameter $\theta$. Additionally, a point estimate can be obtained from the posterior, typically the mean, median, or mode of the posterior distribution, or by minimizing a loss function, $\tilde{\theta} = \arg\min_{\hat{\theta}} E[L(\hat{\theta}, \theta)]$. A mean squared error loss, an absolute error loss, and a max a posteriori criterion will, respectively, yield the mean, the median, and the maximum (mode) of the posterior distribution. Bayesian estimation is, thus, a combination of maximum likelihood estimation and a prior distribution. It is also important to remember that the data are fixed when using Bayesian estimation. We do, therefore, not necessarily seek to use the results for generalizing purposes, while we try to find the parameter(s) that give the highest probability of observing the data at hand when using maximum likelihood.

In Bayesian estimation, the aim is to find the posterior density function $f(\theta \mid y_0)$. This shows how the parameters $\theta$ depend on the data $y$. $\theta$ is assumed random, while it was assumed deterministic in the case of maximum likelihood estimation. A prior distribution $f(\theta)$, thus, needs to be specified before estimation, as this is combined with a likelihood-based estimation as in FIML. Prior distributions may be subjective or objective. Subjective priors are a result of subjective opinions, while objective priors can be priors found by microeconomic empirical studies [31]. The weight we put on the prior distribution relative to the likelihood function also needs to be chosen a priori. We, thus, have two extreme cases of Bayesian estimation: (1) no weight on the prior (e.g. flat priors), which will be similar to FIML, and (2) full weight on the prior and none on the likelihood, consistent with calibration. Hence, Bayesian estimation can be considered a combination of calibration and FIML.

The posterior is simulated by an algorithm such as a Monte Carlo method, and the accepted parameter values will form a histogram, which can be smoothed to provide the posterior distribution function.
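The sketch below shows the random-walk Metropolis-Hastings idea in stripped-down form: given a log-likelihood function (for example, one based on the Kalman filter) and a log-prior, it proposes candidate parameter vectors and accepts them with the usual probability, so that the accepted draws approximate the posterior. The proposal scale, the number of draws, and the starting value are illustrative choices.

# Random-walk Metropolis-Hastings sketch for drawing from the posterior f(theta | y).
# `log_lik` and `log_prior` are user-supplied functions of the parameter vector.
rw_metropolis <- function(log_lik, log_prior, theta0, n_draws = 10000, scale = 0.01) {
  k <- length(theta0)
  draws <- matrix(NA_real_, n_draws, k)
  theta <- theta0
  log_post <- log_lik(theta) + log_prior(theta)
  accepted <- 0
  for (i in 1:n_draws) {
    proposal <- theta + rnorm(k, sd = scale)            # random-walk candidate
    log_post_prop <- log_lik(proposal) + log_prior(proposal)
    if (log(runif(1)) < log_post_prop - log_post) {     # accept with prob min(1, ratio)
      theta    <- proposal
      log_post <- log_post_prop
      accepted <- accepted + 1
    }
    draws[i, ] <- theta
  }
  list(draws = draws, acceptance_rate = accepted / n_draws)
}
# After discarding a burn-in period, the histogram of draws[, j] approximates the
# posterior distribution of the j-th parameter, as described above.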

An advantage of Bayesian estimation is that we may avoid identification issues that often are a problem when using FIML. However, we are also prone to hide these issues, which may be a problem. As argued in [32], Bayesian estimates should be compared to FIML estimates in order to see what role the priors have. Another advantage of Bayesian estimation is that we do not need to assume that the model is the correct DGP as for FIML and GMM.

Using prior information may also be an advantage since available information is then taken into account in the estimation process even if it is not part of the model or the data set used in the estimation. However, if the same data are used for the prior information as for the Bayesian estimation, for example, great ratios, the priors do not add any information. It is also possible to compare different models via posterior odds ratios, see [33].

However, it may be difficult to replicate results from Bayesian estimation due to computationally intensive simulation methods (Metropolis-Hastings algorithm) [18]. For an overview of recent developments in Bayesian methods, see [34].


7. The cointegrated VAR model

DSGE models often contain variables that are nonstationary, such as prices, wages, GDP, and productivity, and we use a log-linearized model with stationary variables in order to estimate the model with FIML or Bayesian methods. The data are then usually filtered by, for example, the Hodrick-Prescott or the band-pass filter in order to separate the trend and the cyclical component of the nonstationary data series, see, for example, [17]. Hence, the cyclical component of a variable in the data should correspond to the deviation from steady state for a variable in the theoretical model and is then used in order to estimate the (log-linearized) DSGE model. While the filtered cycle measures deviations from an estimated trend, the log deviation in the theoretical model measures deviation from the theoretical steady state. Hence, there may be a mismatch between the trend component of the data and the theoretical trending relationships in the model, expressed by the steady-state relationships of the model. This should be taken into account when estimating DSGE models since the steady-state relationships are expected to correspond to the long-run relations of the observed variables.

We saw that the log-linear system may be solved to yield a purely backward-looking solution such that it is represented by a vector autoregressive (VAR) model containing cross-equation restrictions from the DSGE model if all of the variables are observables.2 An estimated VAR model should, therefore, be similar to the solution of a new Keynesian model if the model is the true data generating process.

Since the solution of the DSGE model takes the form of a restricted VAR model, another approach for estimating such a model is to first estimate an unrestricted VAR model and then impose various restrictions on it from the theoretical DSGE model. This implies going from a general to a specific model, and it allows testing the restrictions as they are imposed on the unrestricted model. If the restrictions are rejected, the theoretical model can be modified such that it is more in line with the empirical observations.

A VAR model with k lags may be written as

$$Z_t = \Pi_1 Z_{t-1} + \cdots + \Pi_k Z_{t-k} + \varepsilon_t, \qquad (27)$$

where Zt is a vector of observed variables. A DSGE model has this representation (typically with k=1 lag) if all of its variables are observable as shown in (24). The VAR may be reformulated to a vector error correction model (VECM) such as

$$\Delta Z_t = \Gamma_1 \Delta Z_{t-1} + \cdots + \Gamma_{k-1} \Delta Z_{t-k+1} + \alpha \tilde{\beta}' \tilde{Z}_{t-1} + \gamma_0 + \gamma_1 t + \varepsilon_t, \qquad (28)$$

where $\tilde{\beta}' = (\beta', \beta_0, \beta_1)$, $\tilde{Z}_{t-1} = (Z_{t-1}', 1, t)'$, and $\varepsilon_t \sim IN(0, \Omega)$ for $t = 1, \ldots, T$, with the initial values $Z_{-1}$ and $Z_0$ given. $\gamma_0$ is a constant. If there are one or more linear combinations of nonstationary (integrated of order one, I(1)) variables that are stationary (integrated of order zero, I(0)), they can be considered cointegration relationships. These are found by imposing reduced rank on the estimated VAR, which yields the cointegrated vector autoregressive (CVAR) model [36]. The cointegration rank is found through statistical tests and should match what is implied by theory (e.g. the number of steady-state relationships in the DSGE model). Common stochastic trends should cancel through the steady-state relationships if they are driven by unit roots.

Additionally, the data do not need to be pre-filtered when using this approach since assumptions from the theoretical model on the stochastic trends may be tested and imposed. First, we find the number of cointegrating vectors in the data. These represent the long-run properties of the data and should correspond to the steady state of the theoretical model. The long-run properties of the model are then imposed as restrictions on the $\beta$ vectors in the VECM in eq. (28). There should, for example, be a constant relationship between capital $k$ and hours worked $n$ and between consumption $c$ and output $y$ in the model in [20], as shown in eq. (11) and eq. (12).

For an example of this, see [37], which tests several restrictions from the theoretical DSGE model in [27] using the CVAR framework. Similar testing of the long-run properties of DSGE models can be found in [38, 39]. This is in line with using the VAR model as a statistical model and testing theory through the probabilistic approach suggested by [3], see, for example, [40].

Short-run restrictions may also be imposed and tested through cross-equation restrictions on the VAR representation of the data such as imposing the restrictions suggested by the parameters in (15). See [41] for an example of this. Using the CVAR model thereby allows using frequentist methods while dealing with potential misspecification. Hence, we do not need to use Bayesian methods if we would like to relax the assumption that the model is the true DGP as in GMM and FIML, but we can test it in the CVAR framework. If the restrictions from the DSGE model are rejected when tested in the CVAR model, this may suggest misspecification. The theoretical model can then be modified to be more in line with what we find empirically.


8. Conclusion

As shown in the chapter, calibration may be useful for assessing the relevance of a theoretical model by, for example, simulations. This is often necessary if data are not available for many of the variables in the model. Calibration may also be used as a preliminary step in modeling and evaluation.

The generalized method of moments does not require that we solve the model before estimation, and we do not need observations on all of the variables in the model. This avoids the problem of stochastic singularity, which is an issue when we use full information estimation methods. However, this also implies that we usually only focus on a subset of the model and the relevant variables.

Full information maximum likelihood and Bayesian estimation both involve using the complete model (usually in a log-linearized form) and take full advantage of the data. While maximum likelihood may have identification issues for the structural parameters of the model, Bayesian methods can address this by using prior distributions for the parameters. However, the choice of priors and the weight put on them may impact the estimates, such that the data set at hand is not allowed to speak freely.

By using the cointegrated vector autoregressive model, we are able to test the theoretical implications of the model, in particular the long-run implications of a model, rather than assuming that the model is the true data generating process as with full information maximum likelihood or generalized method of moments. We also do not need to filter the data before estimating the model, removing the problem of a potential mismatch between the theoretical steady state and the long-run relationships in the data. Hence, if we would like to take full advantage of the data while also testing the implications from the model, the cointegrated vector autoregressive model is a relevant tool. We may use it as a preliminary step to assess the empirical relevance of a theoretical model or use it as a fully specified macroeconometric model.

Appendix A

The methods illustrated in this chapter for evaluating and estimating Hansen’s RBC model [20] can be carried out and investigated using available code.

For calibration of the model and estimation using full information maximum likelihood and Bayesian methods, the most convenient approach is perhaps to use Dynare code, available at Johannes Pfeifer’s home page on GitHub: https://github.com/johannespfeifer/dsge_mod [43]. For more information about Dynare, which is a program that can be run using Matlab or Octave, see www.dynare.org. For GMM estimation of Hansen’s RBC model, see [44].

In order to test the long-run implications of a DSGE model, I have estimated a cointegrated VAR with quarterly data on output, consumption, hours worked, and capital from 1960 to 2002 using R. The code is shown below. The data set is available at [45] and was used by [37] in order to test the implications of the model in [27]. In the code below, there is a test of one of the long-run restrictions of Hansen’s model, found in the steady state for the output-to-consumption ratio, using commands in the urca package [46] in R. I also include the dummy variables accounting for extraordinary institutional events used in [37] to specify the model.

library(urca)   # for ca.jo() and blrtest()

# Load the log-transformed data and the dummy variables
alldata <- read.table(file = "irelanddata.csv", sep = ";", header = TRUE)
logdata <- subset(alldata, select = c(qtr, Ly, Lc, Lh, LCapP))
colnames(logdata)[colnames(logdata) == "LCapP"] <- "Lk"
dummyvar <- read.table(file = "dummies.csv", sep = ";", header = TRUE)
total <- merge(logdata, dummyvar, by = "qtr")
attach(total)

# Endogenous variables and exogenous dummy variables
data <- cbind(Ly, Lc, Lh, Lk)
dum <- cbind(Ds7801, Dp7003, Dp7403, Dp7404, Dp7801, Dtr8001)

# Johansen trace test for the cointegration rank
cointd <- ca.jo(data, type = "trace", K = 2, season = 4, dumvar = dum)
summary(cointd)

# Restriction matrix imposing the long-run relation between Ly and Lc on beta
H <- matrix(c( 1, 0, 0,
              -1, 0, 0,
               0, 1, 0,
               0, 0, 1), nrow = 4, ncol = 3, byrow = TRUE)
betarestrictions <- blrtest(z = cointd, H = H, r = 1)
summary(betarestrictions)

First, the data are loaded into the object alldata. I then place the variables that are used, which are in natural logarithms, into logdata. The variable for capital is then renamed in order to match the theoretical model, which uses the letter k for capital. The dummy variables from a separate dummies.csv file, matching the dummy variables from [37], are loaded into dummyvar, and the data are combined into the data frame total and attached. The data are then separated into the endogenous variables in data and the exogenous dummy variables in dum.

By using the command ca.jo, I estimate the VAR model and test for the cointegration rank (the reduced rank). This is then set to r=2 as in [37], and the restriction of a constant long-run relationship between Ly and Lc is imposed on the beta matrix.

The restriction of a stationary long-run relationship between income and consumption, $Ly - Lc \sim I(0)$, yields a p-value of 0, indicating that we reject it. This is perhaps not surprising, given the plot of the difference between the log of income and the log of consumption shown in Figure 1, where we observe an upward trend rather than stationarity.

Figure 1.

Difference between log of income and log of consumption.


Notes

This chapter has been written on the basis of the trial lecture titled “Describe and compare different methods for analyzing DSGE models: Calibration, GMM, FIML, Bayesian methods, and CVAR” held for the defense of my Ph.D. in Economics at the Norwegian University of Science and Technology, as well as the introductory chapter of the thesis “Testing economic theory using the cointegrated vector autoregressive model: New Keynesian models and house prices”, see [42].

References

  1. Tinbergen J. Business Cycles in the United States of America, 1919–1932. London: League of Nations, Economic Intelligence Service; 1939
  2. Frisch R. Propagation problems and impulse problems in dynamic economics. In: Economic Essays in Honour of Gustav Cassel. London: Allen and Unwin; 1933. pp. 171-205
  3. Haavelmo T. The probability approach in econometrics. Econometrica: Journal of the Econometric Society. 1944;12:1-114
  4. Haavelmo T. The statistical implications of a system of simultaneous equations. Econometrica: Journal of the Econometric Society. 1943;11(1):1-12
  5. Klein LR. Economic Fluctuations in the United States, 1921–1941. Vol. 176. New York: Wiley; 1950
  6. Klein LR, Goldberger AS. An Econometric Model of the United States, 1929–1952. Vol. 9. Amsterdam: North-Holland Publishing Company; 1955
  7. Welfe W. Macroeconometric Models. Advanced Studies in Theoretical and Applied Econometrics. Heidelberg: Springer Berlin; 2013
  8. Lucas RE. Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy. 1976;1:19-46
  9. Kydland FE, Prescott EC. Time to build and aggregate fluctuations. Econometrica: Journal of the Econometric Society. 1982;50(6):1345-1370
  10. Galí J. Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework. Princeton and Oxford: Princeton University Press; 2008
  11. Smets F, Wouters R. Shocks and frictions in US business cycles: A Bayesian DSGE approach. American Economic Review. 2007;97(3):586-606
  12. Smets F, Wouters R. An estimated dynamic stochastic general equilibrium model of the euro area. Journal of the European Economic Association. 2003;1(5):1123-1175
  13. Calvo GA. Staggered prices in a utility-maximizing framework. Journal of Monetary Economics. 1983;12(3):383-398
  14. Blanchard O, Galí J. Real wage rigidities and the new Keynesian model. Journal of Money, Credit and Banking. 2007;39:35-65
  15. Blanchard O, Galí J. Labor markets and monetary policy: A new Keynesian model with unemployment. American Economic Journal: Macroeconomics. 2010;2:1-30
  16. Canova F. Methods for Applied Macroeconomic Research. Princeton and Oxford: Princeton University Press; 2007
  17. DeJong DM, Dave C. Structural Macroeconometrics. Princeton and Oxford: Princeton University Press; 2007
  18. Tovar C. DSGE models and central banks. Economics: The Open-Access, Open-Assessment E-Journal. 2009;3:2009-16. DOI: 10.5018/economics-ejournal.ja.2009-16
  19. Fernández-Villaverde J. The econometrics of DSGE models. SERIEs: Journal of the Spanish Economic Association. 2010;1(1):3-49
  20. Hansen GD. Indivisible labor and the business cycle. Journal of Monetary Economics. 1985;16(3):309-327
  21. Galí J, Gertler M, López-Salido J. Robustness of the estimates of the hybrid New Keynesian Phillips curve. Journal of Monetary Economics. 2005;52(6):1107-1118
  22. Galí J, Gertler M. Inflation dynamics: A structural econometric analysis. Journal of Monetary Economics. 1999;44(2):195-222
  23. Hansen LP. Large sample properties of generalized method of moments estimators. Econometrica: Journal of the Econometric Society. 1982;50(4):1029-1054
  24. Christiano LJ, Eichenbaum M. Current real-business-cycle theories and aggregate labor-market fluctuations. The American Economic Review. 1992;82(3):430-450
  25. Burnside C, Eichenbaum M, Rebelo S. Labor hoarding and the business cycle. Journal of Political Economy. 1993;101(2):245-273
  26. Lindé J. Estimating new-Keynesian Phillips curves: A full information maximum likelihood approach. Journal of Monetary Economics. 2005;52(6):1135-1149
  27. Ireland PN. A method for taking models to the data. Journal of Economic Dynamics and Control. 2004;28:1205-1226
  28. Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;82:35-45
  29. Canova F, Sala L. Back to square one: Identification issues in DSGE models. Journal of Monetary Economics. 2009;56(4):431-449
  30. An S, Schorfheide F. Bayesian analysis of DSGE models. Econometric Reviews. 2007;26(2-4):113-172
  31. Del Negro M, Schorfheide F. Priors from general equilibrium models for VARs. International Economic Review. 2004;45(2):643-673
  32. Fukac M, Pagan A. Issues in adopting DSGE models for use in the policy process. CAMA Working Paper 10/2006. Canberra: Australian National University, Centre for Applied Macroeconomic Analysis; 2006
  33. Rabanal P, Rubio-Ramírez JF. Comparing new Keynesian models of the business cycle: A Bayesian approach. Journal of Monetary Economics. 2005;52(6):1151-1166
  34. Fernández-Villaverde J, Guerrón-Quintana PA. Estimating DSGE models: Recent advances and future challenges. Annual Review of Economics. 2021;13:229-252
  35. Fernández-Villaverde J, Rubio-Ramírez JF, Sargent T, Watson MW. ABCs (and Ds) of understanding VARs. American Economic Review. 2007;97(3):1021-1026
  36. Johansen S. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control. 1988;12(2):231-254
  37. Juselius K, Franchi M. Taking a DSGE model to the data meaningfully. Economics: The Open-Access, Open-Assessment E-Journal. 2007;1:2007-4
  38. Kivedal BK. A DSGE model with housing in the cointegrated VAR framework. Empirical Economics. 2014;47(3):853-880
  39. Kivedal BK. A new Keynesian framework and wage and price dynamics in the USA. Empirical Economics. 2018;55(3):1271-1289
  40. Juselius K. The Cointegrated VAR Model: Methodology and Applications. New York: Oxford University Press; 2006
  41. Bårdsen G, Fanelli L. Frequentist evaluation of small DSGE models. Journal of Business & Economic Statistics. 2015;33(3):307-322
  42. Kivedal BK. Testing Economic Theory Using the Cointegrated Vector Autoregressive Model: New Keynesian Models and House Prices. PhD thesis. Trondheim: Norwegian University of Science and Technology, Faculty of Social Sciences and Technology Management, Department of Economics; 2013
  43. Johannes Pfeifer’s home page on GitHub. https://github.com/johannespfeifer/dsge_mod [Accessed: March 24, 2023]
  44. Burnside AC. Real Business Cycle Models: Linear Approximation and GMM Estimation. Washington, D.C.: Mimeo, The World Bank; 1999
  45. Juselius K, Franchi M. Taking a DSGE Model to the Data Meaningfully [Dataset]. 2009
  46. Pfaff B, Zivot E, Stigler M, Pfaff MB. Package urca: Unit root and cointegration tests for time series data. R Package Version. 2016:1-2. Available from: http://cran.pau.edu.tr/web/packages/urca/urca.pdf

Notes

  • Although they were developed at the same time as RBC models.
  • If only some variables are observed, it has a state-space representation in the form of a vector autoregressive moving average (VARMA) model, see, for example, [35].
