
Testing monetary policy optimality using volatility outcomes: a novel approach

Federico Ravenna
Published/Copyright: April 28, 2016

Abstract

We propose a method to assess the efficiency of macroeconomic outcomes using the restrictions implied by optimal policy DSGE models for the volatility of observable variables. The method exploits the variation in the model parameters, rather than random deviations from the optimal policy. In the new Keynesian business cycle model this approach shows that optimal monetary policy imposes tighter restrictions on the behavior of the economy than is readily apparent. The method suggests that for the historical output, inflation and interest rate volatility in the United States over the 1984–2005 period to be generated by any optimal monetary policy with a high probability, the observed interest rate time series should have a 25% larger variance than in the data.

JEL: E30

1 Introduction

The business cycle theory that has become prevalent in the last three decades assumes that business cycle volatility is the result of exogenous shocks. Fiscal and monetary policy can affect the propagation of these shocks throughout the economy, and the resulting volatility in aggregate economic variables.

Since the 2008 recession, monetary authorities in countries affected by disruptions in the financial intermediation system have engaged in an unprecedented increase in the number of policy tools adopted to sustain the economy. To what extent the length and severity of the recession are accounted for by the size of the shocks hitting the economy, or by the inadequacy of the policy response, is still an open question, especially in the United States and the Euro area. A key problem in assessing the historical performance of monetary policies is how to distinguish the amount of economic volatility that is an efficient outcome given the shocks driving the business cycle – that is, the volatility that would obtain conditional on the optimal policy – from the volatility resulting from suboptimal policymaking. Because exogenous shocks are typically unobservable, any assessment of the policy performance must rely on the restrictions implied by a DSGE model for the co-movement of observable variables.

This paper investigates the restrictions implied by optimal policy DSGE models for the volatility of observable endogenous variables, and proposes a method to use these restrictions in order to assess the efficiency of macroeconomic outcomes. Linearized DSGE models where optimal policy is implemented in every period are by construction singular – they predict that the time series for one variable is a non-stochastic function of the other variables’ time series. The data will almost surely reject the restrictions of optimal policy models. Therefore, estimated DSGE models always include random shocks to the equation describing the behavior of the policymaker. Our approach defines a non-empty set of volatility outcomes by including all outcomes generated by alternative parameterizations of the model conditional on the true optimal policy, rather than the outcomes generated by a single parameterization conditional on random deviations from the optimal policy. We show that the two approaches have radically different implications. While including random deviations in an optimal policy DSGE model implies that nearly any observed volatility outcome can be generated by the model, using a parametric family of models where policy is truly optimal results in a well-defined and limited set of volatility outcomes. We label this set of outcomes the optimal policy space. Our results show that optimal policymaking in widely used DSGE business cycle frameworks imposes very tight restrictions on observed macroeconomic outcomes. One way to interpret this result is that the historical time series of random deviations from optimal policymaking arising in estimated DSGE models are not only necessary to explain the period-by-period behavior of the endogenous variables and the policy instrument, but may be necessary even to match a summary statistic such as the vector of endogenous variables’ volatilities over the estimation period. While our approach cannot provide a conditional assessment of policy – that is, whether policy reacted optimally to business cycle shocks estimated in a specific historical episode – it can provide a metric to measure average deviations from optimal policy over longer historical periods.

The underpinnings of our approach can be summarized as follows. A DSGE model defines a map M(β, ΣU) between the covariance matrix ΣU of the shock vector Ut and the covariance matrix ΣY of the endogenous variables vector Yt, where the vector β includes the model deep parameters. In estimated DSGE models of the business cycle the map M implies that any volatility sample outcome has a nonzero probability of being generated by the model. This is the consequence of two assumptions. First, business cycle models are solved using a linear approximation, resulting in an equilibrium law of motion of the form, at its simplest, $Y_t = AU_t$. Second, the linear solution is assumed nonsingular by ensuring that the number of exogenous shocks equals the number of observable endogenous variables.[1] In optimal policy models, this implies including a random shock in the policy optimality condition. Then, regardless of the restrictions imposed by optimal policymaking on the model A, any outcome $Y_t$ can be explained by some random vector $U_t$, since for any given nonsingular model and covariance outcome $\Sigma_Y$ it holds that $\Sigma_U = A^{-1}\Sigma_Y (A^{-1})'$.
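
To make the argument concrete, the following minimal sketch (with an arbitrary illustrative A and ΣY, not taken from any estimated model) verifies that once the model is nonsingular, every covariance outcome can be rationalized by some shock covariance.

```python
# Minimal sketch: with a nonsingular linear solution Y_t = A U_t, any target
# covariance Sigma_Y is rationalized by Sigma_U = A^{-1} Sigma_Y (A^{-1})',
# so no volatility outcome is ruled out by the model. A and Sigma_Y are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))              # an (almost surely) nonsingular law of motion
Sigma_Y = np.diag([1.0, 2.0, 0.5])       # any target covariance of the observables

A_inv = np.linalg.inv(A)
Sigma_U = A_inv @ Sigma_Y @ A_inv.T      # implied shock covariance

assert np.allclose(A @ Sigma_U @ A.T, Sigma_Y)   # Sigma_Y is reproduced exactly
```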

Rather than building the map M as a linear function of ΣU for a nonsingular model with random deviations from the optimal policy, we build the map M(β, ΣU) for a parametric family of optimal-policy singular models, indexed by the parameter set β, and we assume no random deviations from the optimal policy condition. Therefore, the set of volatility outcomes generated by optimal policymaking – the image of M(β, ΣU), which we label the optimal policy space – is not of measure zero. At the same time, the nonlinearity of the map implies that there may exist volatility outcomes with zero probability. For example, in the DSGE model we use to illustrate the methodology, any volatility outcome for the output gap and the inflation rate $(\sigma^2_{\tilde x_t}, \sigma^2_{\pi_t})$ belongs to the optimal policy space as the weight placed on different objectives in the policymaker’s loss changes. At the same time, there exist volatility outcomes for output and the inflation rate $(\sigma^2_{y_t}, \sigma^2_{\pi_t})$ that cannot be supported by optimal policy, regardless of the policymaker’s preferences.

We use this approach to show how optimal policy would restrict the volatility outcomes for observable variables in the new Keynesian model, a widely used small-scale monetary business cycle model. Based on this model, the 1984–2005 sample observation for US macroeconomic variables would have zero probability of being generated by optimal policymaking. We provide a measure of the distance between an inefficient outcome and the optimal policy space, and show that for the historical US output, inflation and interest rate volatility to be generated by any optimal monetary policy with a high probability, the observed interest rate time series should have a 25% larger variance than in the data.

Given that our methodology sets a low bar for a volatility outcome to be deemed optimal – it can only identify the set of sample outcomes with non-zero probability, and the optimal policy space includes outcomes that may or may not have been generated by optimal policymaking – we interpret this result as evidence that popular models used to provide monetary policy prescriptions impose tighter restrictions on the behavior of the economy than is readily apparent. Intuitively, alternative models belonging to a parametric family may imply a very different mapping between the volatility of exogenous shocks and endogenous variables, and very different impulse responses conditional on a one standard deviation exogenous shock. Yet the same models may be unable to generate very different sets of unconditional volatility outcomes.

The paper is organized as follows. Section 2 defines the optimal policy space. Section 3 discusses the restrictions on the volatility outcomes imposed by the optimal monetary policy in the new Keynesian model. Section 4 evaluates the US policy performance using the optimal policy space, and discusses a probabilistic interpretation of the extent of inefficiency for a given volatility outcome. Section 5 presents related literature and Section 6 concludes.

2 The optimal policy space

2.1 Definitions

To simplify the notation, in the following we assume that the vector β includes both the model deep parameters and the elements of ΣU. Define M(β) as the map between a given model’s parameters and all the entries in the covariance matrix ΣY.[2] Let the volatility space and the optimal policy space be defined as follows:

Definition 1: Let β be a vector of parameters, p a policy rule and Z(β; p) a law of motion for n endogenous variables conditional on policy p. Let the vector-valued function $M(\beta; p): D \subseteq \mathbb{R}^r \rightarrow \mathbb{R}^n$ associated with Z(β; p) map every vector $\beta \in \mathbb{R}^r$ to a unique vector of variances $(\sigma^2_{Y_1}, \ldots, \sigma^2_{Y_n})$ of the n endogenous variables. Define the set $V_p$ as the image of M(β; p). The set $V_p$ is called the volatility space for model Z conditional on policy p and parameter vector β.

Definition 2: Define the set $V_o$ as the volatility space $V_p$ associated with M(β; o) conditional on the optimal policy p = o. The set $V_o$ is called the optimal policy space.

For the optimal policy space to be a useful tool, we require that $V_o \subseteq \mathbb{R}^n$ and that $V_o$ is not contained in any lower-dimensional subspace, for an appropriate choice of n > 1, so that $V_o$ is a proper n-dimensional subset of $\mathbb{R}^n$ and is not of measure zero.

2.2 The optimal policy space for a parametric family of singular models

It is well known that parameterized linear optimal policy DSGE models, described by the linear law of motion $Y_t = AU_t$ where $Y_t$ is an r×1 vector and $U_t$ is an s×1 vector with s < r, are singular. In this case, the domain of M(β) is simply β = vec(ΣU), and for an appropriate choice of s, $V_o$ is an s-dimensional hyperplane in $\mathbb{R}^r$. Consider a subset of observable variables $[Y_1, Y_2, \ldots, Y_n]$ where n < r. Then, conditional on the model A, either all vectors $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}]$ belong to the optimal policy space (and $V_o$ is an improper subset of $\mathbb{R}^n_+$) if s = n, or any vector $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}]$ almost surely does not belong to the optimal policy space if s < n. This is the consequence of the fact that, in this case, the mapping M(β) is itself linear: M(β) = Cβ. The Appendix illustrates this point in detail.

In the following, assume instead that any model parameter k is allowed to belong to the domain of M(β), so that $\beta = [\mathrm{vec}(\Sigma_U), k_1, \ldots, k_h]'$, implying that in general M(β; o) is a nonlinear vector-valued function $M: D \subseteq \mathbb{R}^s \rightarrow \mathbb{R}^n$. In this case, it is possible for $V_o$ to be a proper subset of $\mathbb{R}^n$ and at the same time not to be contained in any lower-dimensional subspace, even if the associated Z(β; o) model’s law of motion is described by the linear map $Y_t = AU_t$ and A is of rank s < n. This property ensures that in general $V_o$ is a non-trivial subset of $\mathbb{R}^n$. Effectively, verifying whether an outcome $(\sigma^2_{Y_1}, \ldots, \sigma^2_{Y_n})$ is optimal amounts to checking whether the vector $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}]$ belongs to the image of the function M(β; o).[3] Intuitively, the nonlinearity of the mapping M(β; o) allows $V_o$ to be “small” with respect to $\mathbb{R}^n$, rather than being of measure zero.

When M(β) is a linear map and rank(C) = s < n (as will happen whenever rank(A) = s < n), $V_o$ is an s-dimensional hyperplane, implying M(β; o) can be rewritten as a map between vectors in $\mathbb{R}^s$ and vectors in $\mathbb{R}^n$, for s < n. A similar notion can be extended to the case when M(β; o) is nonlinear using the following definitions (Baxandall and Liebeck 1986):

Definition 3: A function $\Gamma: S \subseteq \mathbb{R}^s \rightarrow \mathbb{R}^n$ is smooth if it is a $C^1$ function and if for all $g \in S$ the Jacobian $J_{\Gamma,g}$ is of maximum possible rank min(s, n).

Definition 4: A subset $K \subseteq \mathbb{R}^n$ is called a smooth s-surface if there is a region S in $\mathbb{R}^s$ and a smooth function $\rho: S \subseteq \mathbb{R}^s \rightarrow \mathbb{R}^n$ such that ρ(S) = K.

The latter definition implies that if a smooth ρ(S) exists, the image K of M(β; o) can be described parametrically by a vector-valued function ρ of s variables. The smoothness condition on ρ requires that the Jacobian matrix of ρ at any point in the domain has at least s independent column vectors. The constant rank theorem (Conlon 2001) ensures existence of ρ(S). When for all $g \in S$ it holds that rank$(J_{M,g}) = n$, the function ρ(S) ≡ M(β; o) maps into a smooth n-surface and the probability that $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}] \in V_o = K$ is non-zero. Contrary to the case when the mapping M(β) is linear, the n-surface describing the optimal policy space need not span the whole codomain of M(β). This allows the set $V_o$ to define a proper subset of optimal policy outcomes, and to discriminate between optimal and suboptimal volatility realizations.
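
In applications the rank condition can be verified numerically. The sketch below uses a toy map standing in for M(β; o) – the functional form is purely illustrative – and approximates its Jacobian by finite differences, reporting the rank at a few points of the domain.

```python
# Sketch of the rank check behind Definitions 3-4, with an illustrative
# nonlinear map in place of M(beta; o). Rank equal to min(s, n) at every
# point implies the image is a smooth surface of that dimension.
import numpy as np

def M_toy(beta):
    # purely illustrative stand-in for the volatility map M(beta; o)
    sigma_u2, alpha = beta
    return np.array([alpha**2 * sigma_u2,
                     sigma_u2 / (1.0 + alpha),
                     sigma_u2 + alpha**2])

def numerical_jacobian(f, beta, eps=1e-6):
    beta = np.asarray(beta, dtype=float)
    f0 = f(beta)
    J = np.empty((f0.size, beta.size))
    for j in range(beta.size):
        step = np.zeros_like(beta)
        step[j] = eps
        J[:, j] = (f(beta + step) - f0) / eps
    return J

# rank min(s, n) = 2 at each point checked: the image is a smooth 2-surface
for point in [(0.5, 1.0), (1.0, 2.0), (2.0, 0.3)]:
    J = numerical_jacobian(M_toy, point)
    print(point, "rank of Jacobian:", np.linalg.matrix_rank(J))
```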

3 Optimal monetary policy space in the new Keynesian model

Consider a log-linear new Keynesian model (Benigno and Woodford 2005; Walsh 2005) describing the dynamics of inflation $\pi_t$, the interest rate $i_t$, and the output gap $\tilde x_t = y_t - y^*_t$, where $y_t$ is output and $y^*_t$ is its efficient level:

(1) $\tilde x_t = -\frac{1}{\varphi}\left(i_t - E_t\pi_{t+1} - \tilde r^n_t\right) + E_t(\tilde x_{t+1})$
(2) $\pi_t - \gamma\pi_{t-1} = \lambda\tilde x_t + \tilde\beta E_t(\pi_{t+1} - \gamma\pi_t) + \lambda u_t$

where φ is the coefficient of relative risk aversion for the representative household divided by the consumption share of output, $\tilde\beta$ is the household’s discount factor, and λ is a function of behavioral parameters. It is assumed that a constant share of firms can adjust their price in each period, while the remaining share indexes the price to a fraction γ of last period’s aggregate inflation rate. The variables $u_t$ and $\tilde r^n_t$ are linear combinations of all the exogenous shocks (a technology shock $a_t$, a tax shock $\tau_t$, a government spending shock $G_t$), and are correlated. The Appendix describes the full model derivation, and the mapping between the reduced-form parameters $\tilde\beta$, φ, λ, γ and the structural parameters.

Let the policymaker’s objective function be:

(3) $W_t = -\frac{1}{2}\Omega E_t\sum_{i=0}^{\infty}\tilde\beta^i\left\{\alpha\tilde x^2_{t+i} + (\pi_{t+i} - \gamma\pi_{t+i-1})^2\right\}$

The parameter α specifies how the policymaker trades off fluctuations in output gap and inflation. We assume that α depends on exogenous policymaker preferences.[4]

In order to illustrate the main result, it is useful to start from a simplified model where γ = 0 and appropriate transfers ensure that the steady state is efficient. In this case the model in eqs. (1), (2), (3) simplifies to the basic new Keynesian model, as found for example in Clarida, Galí and Gertler (1999), where movements in $\tilde r^n_t$ can be interpreted as “demand shocks,” since they are not correlated with $u_t$ and can be perfectly offset by the policymaker. The time-consistent solution to the optimal policy problem requires:

(4) $\pi_t = -\frac{\alpha}{\lambda}\tilde x_t$

The law of motion for πt, x̃t under the optimal policy is:

(5) $\pi_t = \alpha q u_t; \qquad \tilde x_t = -\lambda q u_t$

When $u_t$ is described by an AR(1) stochastic process with autocorrelation parameter $\rho_u$, we obtain $q = \frac{1}{\lambda^2 + \alpha(1 - \tilde\beta\rho_u)}$. In this model any outcome $(\sigma^2_{\pi_t}, \sigma^2_{\tilde x_t})$ could be generated by an optimal policy for appropriate values of α and $\sigma^2_{u_t}$ in [0, ∞). Using Definitions 1 and 2, the optimal policy space of the variables $(\pi_t, \tilde x_t)$ associated with the law of motion Z(β; o) for $\beta = [\sigma^2_{u_t}, \alpha]'$ is $V_o = \mathbb{R}^2_+$. Since any vector $[\sigma^2_{\pi_t}, \sigma^2_{\tilde x_t}]$ belongs to the image of M(β; o), any volatility outcome can be generated by an optimal policy. Therefore, the model does not put any meaningful restrictions on the observable volatility to discriminate between efficient and inefficient outcomes.
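
The claim can be verified directly. The sketch below (with illustrative values for λ, β̃ and ρ_u, not the calibration used later in the paper) recovers, for an arbitrary target pair of variances, the values of α and σ²_u that generate it under the optimal policy of eq. (5).

```python
# Sketch: under the optimal policy of eq. (5), sigma_pi^2 = (alpha*q)^2 sigma_u^2 and
# sigma_x^2 = (lambda*q)^2 sigma_u^2 with q = 1/(lambda^2 + alpha*(1 - beta*rho_u)).
# The variance ratio pins down alpha and the level pins down sigma_u^2, so any
# pair of positive variances belongs to V_o. Parameter values are illustrative only.
import numpy as np

lam, beta_tilde, rho_u = 0.1, 0.99, 0.5

def optimal_policy_variances(alpha, sigma_u2):
    q = 1.0 / (lam**2 + alpha * (1.0 - beta_tilde * rho_u))
    return (alpha * q)**2 * sigma_u2, (lam * q)**2 * sigma_u2

target_pi2, target_x2 = 4.0, 1.0                     # any target outcome in R^2_+
alpha = lam * np.sqrt(target_pi2 / target_x2)        # matches the variance ratio
q = 1.0 / (lam**2 + alpha * (1.0 - beta_tilde * rho_u))
sigma_u2 = target_x2 / (lam * q)**2                  # matches the level

assert np.allclose(optimal_policy_variances(alpha, sigma_u2),
                   (target_pi2, target_x2))
```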

Consider the optimal policy space of the variables $(\pi_t, \tilde x_t, i_t)$ for $\beta = [\sigma^2_{u_t}, \sigma^2_{\tilde r^n_t}, \sigma_{u_t\tilde r^n_t}, \alpha]'$. The law of motion for $(\pi_t, i_t)$ implies:

(6) $\sigma^2_{\pi_t} = \left(\frac{\alpha}{\lambda}\right)^2\sigma^2_{\tilde x_t}$
(7) $\sigma^2_{i_t} = \left(\frac{\alpha\gamma_\pi}{\lambda}\right)^2\sigma^2_{\tilde x_t} + \sigma^2_{\tilde r^n_t} - 2\alpha q\gamma_\pi\,\sigma_{u_t\tilde r^n_t}$

where $\gamma_\pi = \rho_u + \frac{\varphi\lambda}{\alpha}(1 - \rho_u)$. The optimal policy space is a 3-surface, and is a proper subset of $\mathbb{R}^3_+$. Figure 1 shows a subset of the hyperplanes in $V_o$. The set $V_o$ is composed of an infinite number of the hyperplanes in the figure, each indexed by a value for $\sigma^2_{\tilde r^n_t}$, since the range of observable outcomes for $\sigma^2_{i_t}$ is bounded from below, but not from above. In this instance, the model implies that only a limited subset of macroeconomic outcomes is optimal, though the set $V_o$ encompasses a very large range of outcomes.

Figure 1: Optimal policy hyperplanes belonging to the optimal policy space $V_o$ for the variables $(\pi_t, \tilde x_t, i_t)$ and for $\beta = [\sigma^2_{u_t}, \sigma^2_{\tilde r^n_t}, \sigma_{u_t\tilde r^n_t}, \alpha]'$ using the baseline new Keynesian model. Each hyperplane is indexed by a value for $\sigma^2_{\tilde r^n_t}$.

To obtain much tighter restrictions on Vo , we compute the mapping M(β; o) for the set of endogenous variables (πt, yt, it).[5] Conditional on the optimal policy (4), define:

$$M(\beta; o) \equiv \begin{bmatrix}\sigma^2_{\pi_t}\\ \sigma^2_{y_t}\\ \sigma^2_{i_t}\end{bmatrix} = \begin{bmatrix}\alpha^2 q^2\sigma^2_{u_t}\\ \lambda^2 q^2\sigma^2_{u_t} + \frac{1}{\varphi^2(1-\rho_a)^2}\sigma^2_{\tilde r^n_t} - \frac{2}{\varphi(1-\rho_a)}\lambda q\,\sigma_{u_t\tilde r^n_t}\\ (\alpha q\gamma_\pi)^2\sigma^2_{u_t} + \sigma^2_{\tilde r^n_t} - 2\alpha q\gamma_\pi\,\sigma_{u_t\tilde r^n_t}\end{bmatrix}$$

where $\beta = [\sigma^2_{u_t}, \sigma^2_{\tilde r^n_t}, \sigma_{u_t\tilde r^n_t}, \alpha]'$. The set $V_o$ for this model is a 3-surface, as can be checked by computing rank$(J_{M,g})$. The optimal policy space is shown in Figure 2. Contrary to the earlier case, the set $V_o \subset \mathbb{R}^3_+$ for $(\pi_t, y_t, i_t)$ includes a set of outcomes for $\sigma^2_{i_t}$ bounded from above and below for any $(\sigma^2_{\pi_t}, \sigma^2_{y_t})$. The intuition for the result is straightforward. Even if, conditional on the optimal policy, demand shocks $\tilde r^n_t$ do not affect $\pi_t$ and $\tilde x_t$, they affect $y_t$ and $i_t$. As a consequence, for given $\sigma^2_{\pi_t}$ optimal outcomes where $\sigma^2_{y_t}$ is larger imply that $\sigma^2_{i_t}$ is larger too. As for cost-push shocks $u_t$, they increase the volatility of all three variables.

Figure 2: A subset of the optimal policy space $V_o$ for the variables $(\pi_t, y_t, i_t)$ and for $\beta = [\sigma^2_{u_t}, \sigma^2_{\tilde r^n_t}, \sigma_{u_t\tilde r^n_t}, \alpha]'$ using the baseline new Keynesian model.

The set $V_o$ is not of measure zero. Optimal outcomes in the $\mathbb{R}^3_+$ space do not align on a two-dimensional hyperplane because for different combinations $(\sigma^2_{u_t}, \sigma^2_{\tilde r^n_t}, \alpha)$ there may exist more than one outcome for $\sigma^2_{i_t}$ corresponding to the same outcome $(\sigma^2_{\pi_t}, \sigma^2_{y_t})$.
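
A simple way to visualize this property is to evaluate the map M(β; o) above on a grid of parameter values and record, for points whose $(\sigma^2_{\pi_t}, \sigma^2_{y_t})$ lie close to a chosen target, the range of implied $\sigma^2_{i_t}$. The sketch below does so with purely illustrative parameter values and with the covariance $\sigma_{u_t\tilde r^n_t}$ set to zero for simplicity.

```python
# Sketch: grid evaluation of the closed-form map M(beta; o) for (pi_t, y_t, i_t),
# collecting the implied sigma_i^2 for points near a target (sigma_pi^2, sigma_y^2).
# All parameter values are illustrative, and sigma_{u,r} is set to zero.
import itertools
import numpy as np

lam, phi, beta_tilde, rho_u, rho_a = 0.1, 1.5, 0.99, 0.5, 0.5

def M_of_beta(su2, sr2, alpha, s_ur=0.0):
    q = 1.0 / (lam**2 + alpha * (1.0 - beta_tilde * rho_u))
    gamma_pi = rho_u + (phi * lam / alpha) * (1.0 - rho_u)
    var_pi = (alpha * q)**2 * su2
    var_y = (lam * q)**2 * su2 + sr2 / (phi * (1.0 - rho_a))**2 \
            - 2.0 * lam * q * s_ur / (phi * (1.0 - rho_a))
    var_i = (alpha * q * gamma_pi)**2 * su2 + sr2 - 2.0 * alpha * q * gamma_pi * s_ur
    return var_pi, var_y, var_i

target_pi2, target_y2, tol = 0.5, 1.0, 0.05        # an arbitrary target outcome
grid = np.linspace(0.01, 2.0, 40)

implied_var_i = []
for su2, sr2, alpha in itertools.product(grid, grid, grid):
    var_pi, var_y, var_i = M_of_beta(su2, sr2, alpha)
    if abs(var_pi - target_pi2) < tol * target_pi2 and abs(var_y - target_y2) < tol * target_y2:
        implied_var_i.append(var_i)

if implied_var_i:
    print("optimal sigma_i^2 lies in [%.3f, %.3f]" % (min(implied_var_i), max(implied_var_i)))
```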

Note that parameterizations where Vo is of measure zero in R3+ do exist. If ρu=ρa=0, using the definitions for q and γπ we obtain:

(8) $M(\beta; o) \equiv \begin{bmatrix}\sigma^2_{\pi_t}\\ \sigma^2_{y_t}\\ \sigma^2_{i_t}\end{bmatrix} = \begin{bmatrix}\frac{\alpha^2}{(\lambda^2+\alpha)^2}\sigma^2_{u_t}\\ \frac{\lambda^2}{(\lambda^2+\alpha)^2}\sigma^2_{u_t} + \frac{1}{\varphi^2}\sigma^2_{\tilde r^n_t} - \frac{2}{\varphi}\frac{\lambda}{\lambda^2+\alpha}\sigma_{u_t\tilde r^n_t}\\ \frac{(\varphi\lambda)^2}{(\lambda^2+\alpha)^2}\sigma^2_{u_t} + \sigma^2_{\tilde r^n_t} - \frac{2\varphi\lambda}{\lambda^2+\alpha}\sigma_{u_t\tilde r^n_t}\end{bmatrix}$

Eq. (8) shows that $\sigma^2_{y_t} = \frac{1}{\varphi^2}\sigma^2_{i_t}$ for any value of λ and φ. Therefore the Jacobian of M(β; o) has two proportional rows for any β. Since rank$(J_{M,g}) = 2$, the optimal policy space cannot be a 3-surface. The image K can be parameterized by the function $\rho: S \subseteq \mathbb{R}^2 \rightarrow \mathbb{R}^3$:

$$\rho(S) \equiv \begin{bmatrix} g_1\\ g_2\\ \varphi^2 g_2\end{bmatrix}$$

where $g_1 = \frac{\alpha^2}{(\lambda^2+\alpha)^2}\sigma^2_{u_t}$ and $g_2 = \frac{\lambda^2}{(\lambda^2+\alpha)^2}\sigma^2_{u_t} + \frac{1}{\varphi^2}\sigma^2_{\tilde r^n_t} - \frac{2}{\varphi}\frac{\lambda}{\lambda^2+\alpha}\sigma_{u_t\tilde r^n_t}$. In this case, $V_o$ is a 2-surface in $\mathbb{R}^3$, implying any outcome is suboptimal almost surely.
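
The restriction behind this rank deficiency is easy to confirm numerically: for any parameter draw, the second and third entries of eq. (8) differ only by the factor φ². The check below uses illustrative values for λ and φ and random draws for the remaining entries of β.

```python
# Sketch: with rho_u = rho_a = 0, eq. (8) implies sigma_y^2 = sigma_i^2 / varphi^2
# for every beta, so the map has two proportional components and its Jacobian
# has rank 2. Parameter values and draws are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
lam, phi = 0.1, 1.5

def M_iid(su2, sr2, s_ur, alpha):
    q = 1.0 / (lam**2 + alpha)
    var_pi = (alpha * q)**2 * su2
    var_y = (lam * q)**2 * su2 + sr2 / phi**2 - 2.0 * lam * q * s_ur / phi
    var_i = (phi * lam * q)**2 * su2 + sr2 - 2.0 * phi * lam * q * s_ur
    return var_pi, var_y, var_i

for _ in range(5):
    su2, sr2, alpha = rng.uniform(0.1, 2.0, size=3)
    s_ur = rng.uniform(-0.1, 0.1)
    var_pi, var_y, var_i = M_iid(su2, sr2, s_ur, alpha)
    assert np.isclose(var_y, var_i / phi**2)      # the restriction implied by eq. (8)
```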

In general, by finding the appropriate combination of n endogenous variables, it may be possible to obtain an optimal policy space conditional on a model Z(β; o) that includes only a bounded set of outcomes for at least one variable. While we illustrated the methodology with an example where we can derive analytically the mapping M(β; o), the set Vo can be obtained for any DSGE model, and for an appropriately chosen vector of endogenous variables using numerical methods.[6] The set Vo can be used to assess the restrictions the optimal policy implies for observable economic volatility.

This methodology can be readily extended beyond the case of optimal policy rules. It can in fact be employed to define the volatility space for any given rule for monetary policy, including any functional form for a policy rule depending on endogenous variables. The volatility space will then define the set of outcomes related to a given Taylor rule, assuming the policymaker never deviates from the interest rate prescribed by the rule, and for any value of the Taylor-rule parameter vector. The volatility space for a Taylor rule functional form can be easily compared with the optimal volatility space in a given model. The optimal policy in eq. (4) can be implemented by the instrument rule:

(9) $i_t = \gamma_\pi E_t\pi_{t+1} + \tilde r^n_t$

A suboptimal Taylor rule could be described, for example, by the instrument rule in eq. (9) under the assumption that the coefficient summarizing the response of policy to expected inflation differs from $\gamma_\pi$:

$$i_t = \gamma_{taylor} E_t\pi_{t+1} + \tilde r^n_t$$

The volatility space conditional on the Taylor rule will be different from the optimal policy space. First, the vector β now includes the value of the coefficient $\gamma_{taylor}$. This, in itself, provides an additional degree of freedom. However, we cannot draw a general inference about the resulting implications for the size of the volatility space relative to the optimal policy space, since the law of motion for all endogenous variables will now also depend on $\gamma_{taylor}$. The mapping between the volatility of exogenous shocks and the volatility of endogenous variables depends nonlinearly on the model parameters, therefore the added degree of flexibility in the parameterization may only lead to volatility outcomes which already belong to $V_o$. This, for example, is the case in our baseline model, where eq. (4) shows that the relationship between the volatility of $\pi_t$ and $\tilde x_t$ depends only on the ratio α/λ and not on each of the two parameters independently.

4 US volatility outcomes and optimality of monetary policy

4.1 Restrictions from the new Keynesian model and implications for historical US macroeconomic volatility

As an illustration of our methodology, consider the optimal policy space for the variables $(\pi_t, y_t, i_t)$ conditional on the model in eqs. (1), (2), (3). We consider two sets of parameters β, and several alternatives for the implied optimal policy, depending on the choice of objective function and the definition of optimality adopted. We allow for endogenous inflation persistence by setting γ = 0.5 and consider an economy with a distorted steady state, so that any shock will affect all the endogenous variables under the time-consistent optimal policy. While this is a stylized model, it is widely used in theoretical and empirical work. Since the model’s equilibrium law of motion has multiple endogenous and exogenous state variables, it is not feasible to build the mapping M(β; o) analytically as in Section 3. The set of optimal outcomes $V_o$ is instead computed numerically by solving the model over a multi-dimensional grid of the parameter space, and finding for each parameterization the implied volatility of the endogenous variables.
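
Schematically, the computation can be organized as below. The function solve_time_consistent_policy is a hypothetical placeholder for the routine that solves the linearized model under the time-consistent optimal policy for one parameter vector; the grids, bounds and argument list are illustrative rather than the ones used in the paper.

```python
# Schematic of the numerical construction of V_o (a sketch, not the paper's code).
# `solve_time_consistent_policy` is a hypothetical placeholder for a routine that
# solves the linearized model (gamma = 0.5, distorted steady state) under the
# time-consistent optimal policy and returns the implied standard deviations of
# (pi_t, y_t, i_t) for one parameter vector.
import itertools
import numpy as np

def solve_time_consistent_policy(sigma_a, sigma_tau, alpha):
    """Hypothetical placeholder: return (sd_pi, sd_y, sd_i) for one parameterization."""
    raise NotImplementedError("replace with an actual DSGE solution routine")

def build_optimal_policy_space():
    grid_sigma_a = np.linspace(0.001, 0.02, 20)     # illustrative grid bounds
    grid_sigma_tau = np.linspace(0.001, 0.02, 20)
    grid_alpha = np.linspace(0.01, 1.0, 20)
    outcomes = []
    for s_a, s_tau, a in itertools.product(grid_sigma_a, grid_sigma_tau, grid_alpha):
        outcomes.append(solve_time_consistent_policy(s_a, s_tau, a))
    return np.array(outcomes)

# A historical observation (sd_pi, sd_y, sd_i) is then assessed by checking whether
# it lies inside (a small neighborhood of) the set of grid outcomes returned above.
```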

In our first experiment, we examine the optimal policy space fixing the model’s deep parameters, except for the values of the shocks’ volatilities and the objective function parameter α. We assume that the relative weight α across objectives in the policy objective function is independent of the deep parameters of the model, so that $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \sigma_{a_t\tau_t}, \alpha]'$.[7] This allows the central banker to have a different welfare definition from the social welfare, which is defined by the utility functional of the representative agent. Computationally, it relaxes the restriction linking α to the deep parameters of the model. Figure 3 plots $V_o$ (similar in shape to the plot in Figure 2) for the time-consistent optimal policy, together with the outcome $(\sigma_{\pi_t}, \sigma_{y_t}, \sigma_{i_t})$ for the US over the period 1984:1–2005:1. There is no combination of the volatility of exogenous shocks and policymaker preferences that could have generated the observed $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i_t})$ as an optimal policy outcome.

Figure 3: A subset of the optimal policy space $V_o$ for the variables $(\pi_t, y_t, i_t)$ and for $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \sigma_{a_t\tau_t}, \alpha]'$ using a new Keynesian model with endogenous inflation persistence and a distorted steady state. The plot shows the historical volatility outcome for the US over the period 1984:1–2005:1. Output $y_t$ is detrended, seasonally adjusted non-farm business sector real GDP. Inflation $\pi_t$ is seasonally adjusted CPI inflation. The interest rate $i_t$ is the 3-month government bond rate. Data are sampled at quarterly intervals.

We then build the function M(β; o) for the time-consistent optimal policy and for $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu]'$. Unless otherwise specified, in this and all the following experiments we assume the policymaker’s preferences maximize the representative household’s utility, so that the value of α in eq. (3) is a well-defined function of the values chosen for the deep parameters, and does not need to be included in the vector β. We include in β the structural parameters of the model, presented in the Appendix: χ is the share of firms that cannot optimally adjust their price in each period, γ is the fraction of last period’s aggregate inflation rate to which the share χ of firms indexes the price, θ is the firms’ demand elasticity, ν is the inverse of the wage elasticity of labor supply. Table 1 reports the range of variation for the model’s parameters.[8] Even allowing for a larger set β, we still obtain that $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i_t}) \notin V_o$.

Table 1:

New Keynesian model parameter space used to compute optimal policy space Vo =M(β; o) for β=[σat, στt, χ, γ, θ, v]′.

Parameter range for US optimal policy space:
γ: 0.2–0.82
χ: 0.1–0.66
v: 0.1–1.17
θ: 4–16

Other parameters are set as in Walsh (2005). Model is described by the time-consistent solution to maximization of eq. (3) given eqs. (1) and (2) and assuming the policymaker’s objective function maximizes the utility of the representative household. Parameter χ is the share of firms that cannot optimally adjust the price in each period, γ is the fraction of last period’s aggregate inflation rate to which the share χ of firms indexes the price, θ is the firms’ demand elasticity, v is the inverse of labor supply wage elasticity. Parameter values outside the range in Table 1 result in outcomes (σπt,σyt,σit) further from the historical US observation for the sample 1984:1–2005:1.

Finally, our numerical results show that the outcome $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i_t})$ does not belong to $V_o$ for a number of alternative objective functions. This result holds under the assumption that the policymaker adopts the timeless-perspective optimal commitment policy, and under the alternative assumption that the policymaker adopts a misspecified objective function that sets γ = 0 in eq. (3), a case considered in Walsh (2005). We also examine the optimal policy space for a time-consistent policy where the policymaker’s objective function allows for an interest rate-smoothing objective, as suggested by Woodford (2003). Including an interest rate smoothing objective may improve welfare outcomes, even if the reduction of interest rate volatility is not a social objective in itself. We define:

$$W_t = -\frac{1}{2}\Omega E_t\sum_{i=0}^{\infty}\tilde\beta^i\left\{\tilde x^2_{t+i} + \lambda_\pi\pi^2_{t+i} + \lambda_\Delta(i_{t+i} - i_{t+i-1})^2\right\}$$

and compute the optimal policy space for $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu, \lambda_\pi, \lambda_\Delta]'$. In this case too, the outcome $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i_t})$ does not belong to $V_o$.

These results can be explained by two observations. First, all the model parameterizations imply different responses of endogenous variables to exogenous shocks. But many of the resulting models are nearly observationally equivalent in terms of unconditional volatility outcomes $(\sigma_{\pi_t}, \sigma_{y_t}, \sigma_{i_t})$: the same outcome $(\sigma_{\pi_t}, \sigma_{y_t}, \sigma_{i_t})$ can be generated with alternative parameterizations by different vectors $[\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu]'$. Second, changes in a parameter do not necessarily add useful degrees of freedom to enlarge $V_o$. For example, in the optimal policy space for $(\sigma_{\pi_t}, \sigma_{\tilde x_t}, \sigma_{i_t})$ of the basic new Keynesian model a change in λ is observationally equivalent to a change in α, since the relationship between $\tilde x_t$ and $\pi_t$ and between $\tilde x_t$ and $i_t$ in eqs. (6) and (7) depends on the ratio α/λ.

The difficulty in finding a model within the parametric family such that the US outcome belongs to the optimal policy space has three alternative interpretations.

First, US monetary policymaking was indeed suboptimal. After all, the construction of the optimal policy space does allow for any possible parameterization of the vector $[\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu]'$, including parameterizations that may be inconsistent with available empirical evidence, and is robust to several alternative assumptions for the optimal policy computation. Moreover, the optimal policy space has by construction weak power in detecting suboptimal policies: historical outcomes may belong to $V_o$ even if they are the result of period-by-period suboptimal policies.

Second, the DSGE model propagation mechanism is incomplete or inaccurate. Conditional on optimal monetary policy, it puts implausible restrictions on the endogenous variables’ variances. This conclusion leads one to question whether the optimal policy prescriptions derived from stylized DSGE models such as the one used here are appropriate to guide real-world policymaking. Medium-scale models, such as the Smets and Wouters (2007) model, may provide more flexibility in terms of the parameterization of the functional forms describing the dynamics of the endogenous variables belonging to the optimal policy space. As the number of free parameters increases, for a given set of variables, there is the chance that the optimal policy space will span a larger subset of the variables’ volatility space. At the same time, the map M(β; o) depends on the equilibrium law of motion for the endogenous variables, therefore the cross-equation restrictions across a larger number of parameters may imply that the optimal policy space will span a smaller subset of the variables’ volatility space, relative to the stylized model we considered.

Third, the information set of the policymaker may be different from the one available to the econometrician. This implies that a policy assessment computed with final data for the endogenous variables may erroneously conclude that policymaking was suboptimal even if the monetary authority was reacting optimally to the information available in real time. Consider for example the targeting rule for optimal policy defined in equation (4). If the policymaker can only measure the output gap with a random observation error $\xi_t$, the targeting rule yields:

(10) $\pi_t = -\frac{\alpha}{\lambda}(\tilde x_t + \xi_t)$

implying that, for a given volatility of the output gap, the volatility of inflation increases. The targeting rule (10), though, assumes that the policymaker is not aware of the observation error – for example, of future revisions of the final data on GDP, productivity or employment. Within the stylized model we consider in this section, when endogenous variables are imperfectly observed the true optimal policy is given by:

(11) $E_t\{\pi_t|\Omega_t\} = -\frac{\alpha}{\lambda}E_t\{\tilde x_t|\Omega_t\}$

Note that the problem of imperfect observability of the true macroeconomic aggregates – that is, the problem of conducting policy using real-time data – does not necessarily imply that the aggregate volatility of macroeconomic variables will increase. When the optimal policy is chosen according to eq. (11), it can be shown that the resulting optimal imperfect-information outcomes $\pi^I_t$ and $\tilde x^I_t$ are given by

(12) $\pi^I_t = \left(1 + \frac{\lambda^2}{\alpha}\right)\pi_t + \frac{\lambda}{\varphi}\tilde r^n_t; \qquad \tilde x^I_t = \frac{1}{\varphi}\tilde r^n_t$

where πt is the perfect-information outcome, and to facilitate comparison with the perfect information case we assumed that all exogenous shocks are iid.[9] Compared to the case of perfect information, equation (12) implies that inflation volatility will increase, while output-gap volatility may increase or decrease, depending on the relative volatility of demand and cost-push shocks. However, since the optimal volatility space will change, this example shows that taking into account the information set Ωt available to the policymaker can play a potentially important role when using our suggested methodology to assess policy outcomes.

4.2 A probabilistic interpretation of the inefficiency of a volatility outcome

The optimal policy space does not provide a measure of the distance between an inefficient volatility outcome and the set of efficient outcomes. In this section we define such a measure by evaluating how large an additional source of randomness in the model should be for an inefficient outcome to belong to the set Vo . Note that this assessment relies on final data, rather than the real-time information set available to the policymaker. Therefore, our measure of deviation from the optimal policy outcome may in part be explained by the difference in the information set available to the econometrician and to the policymaker.

Consider the largest optimal policy space built to assess the US macroeconomic performance in the previous section, where we assumed $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu]'$. The monetary authority enforces the time-consistent optimal policy, and the deep parameter values are summarized in Table 1. We now assume the observable interest rate $i^{obs}_t$ is described by

$$i^{obs}_t = i_t + w_t$$

where $w_t$ is a random variable with variance $\sigma^2_{w_t} = \frac{x}{100}\sigma^2_{i_t}$. The value x gives the variance of the variable $w_t$ as a percent share of the variance of the unobservable variable $i_t$, which is assumed to behave according to the optimal policy. In the econometric literature $w_t$ is assumed to represent a measurement error. It can be interpreted as summarizing the volatility in $i^{obs}_t$ which is not explained by the DSGE model.

By adding a third source of randomness, we enlarge the set $V_o$ of optimal policy outcomes, and obtain a measure of how large the deviation of $\sigma^{US}_{i^{obs}_t}$ from the volatility implied by the optimal policy needs to be for the outcome $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i^{obs}_t})$ to have a nonzero probability, conditional on the data-generating process in eqs. (1), (2), (3) and on all possible vectors $\beta = [\sigma_{a_t}, \sigma_{\tau_t}, \chi, \gamma, \theta, \nu]'$. For each value of $\sigma_{w_t}$, we compute the probability of a given bounded set around $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i^{obs}_t})$ over all the outcomes M(β). The probability is calculated for the standard deviation of a variable $z_t$ belonging to the 5% interval $[b^L_{z_t,US}, b^H_{z_t,US}]$ centered around the observation $\sigma^{US}_{z_t}$. Finally, let $V^i_o \subset \mathbb{R}_+$ be the optimal policy space for the variable $i_t$ and $V^{\pi,y}_o \subset \mathbb{R}^2_+$ be the optimal policy space for the variables $(\pi_t, y_t)$. To scale the result we compute the probability of an outcome $\sigma_{i_t} \in [b^L_{i_t,US}, b^H_{i_t,US}]$ belonging to $V^i_o$ conditional on a value within the 5% interval for $(\sigma_{\pi_t}, \sigma_{y_t})$ belonging to $V^{\pi,y}_o$. Formally, we compute

$$\Pr\left\{\left[(\sigma_{i_t} \in V^i_o) \cap (b^L_{i_t,US} \le \sigma_{i_t} \le b^H_{i_t,US})\right] \,\middle|\, \left[(\sigma_{\pi_t}, \sigma_{y_t}) \in V^{\pi,y}_o \cap (b^L_{\pi_t,US} \le \sigma_{\pi_t} \le b^H_{\pi_t,US}) \cap (b^L_{y_t,US} \le \sigma_{y_t} \le b^H_{y_t,US})\right]\right\}$$

Figure 4 plots the conditional probability against the variance $\sigma^2_{w_t}$, computed as a percent share x of the variance $\sigma^2_{i_t}$. Including a third source of randomness implies that the outcome $(\sigma^{US}_{\pi_t}, \sigma^{US}_{y_t}, \sigma^{US}_{i^{obs}_t})$ can be the result of optimal policymaking, and the variable x provides a simple measure of the additional randomness needed for the US observation to belong to $V_o$.
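
The sketch below illustrates the counting exercise behind this probability. It uses the closed-form map of Section 3 as a stand-in for the full model, a uniform grid over β as the measure, zero covariance between the shocks, and made-up “observed” volatilities, so the numbers it produces illustrate the procedure rather than replicate Figure 4.

```python
# Sketch of the conditional-probability calculation: among grid points whose
# (sd_pi, sd_y) fall in the +/- 2.5% band around the "observed" values, count the
# fraction whose measurement-error-augmented sd_i_obs also falls in its band.
# Everything here (map, grids, observations) is illustrative only.
import itertools
import numpy as np

lam, phi, beta_tilde, rho_u, rho_a = 0.1, 1.5, 0.99, 0.5, 0.5

def vol_map(su2, sr2, alpha):
    q = 1.0 / (lam**2 + alpha * (1.0 - beta_tilde * rho_u))
    gamma_pi = rho_u + (phi * lam / alpha) * (1.0 - rho_u)
    sd_pi = np.sqrt((alpha * q)**2 * su2)
    sd_y = np.sqrt((lam * q)**2 * su2 + sr2 / (phi * (1.0 - rho_a))**2)
    sd_i = np.sqrt((alpha * q * gamma_pi)**2 * su2 + sr2)
    return sd_pi, sd_y, sd_i

def in_band(sd, obs):
    return abs(sd - obs) <= 0.025 * obs          # +/- 2.5% interval around the observation

obs_pi, obs_y, obs_i = 0.6, 1.0, 1.2             # hypothetical "observed" volatilities
grid_u = np.linspace(0.01, 3.0, 40)
grid_r = np.linspace(0.01, 3.0, 40)
grid_a = np.linspace(0.05, 2.0, 40)

for x in (0, 10, 25, 50):                        # measurement-error variance as % of var(i_t)
    cond, joint = 0, 0
    for su2, sr2, alpha in itertools.product(grid_u, grid_r, grid_a):
        sd_pi, sd_y, sd_i = vol_map(su2, sr2, alpha)
        sd_i_obs = np.sqrt(sd_i**2 * (1.0 + x / 100.0))   # var(i_obs) = (1 + x/100) var(i)
        if in_band(sd_pi, obs_pi) and in_band(sd_y, obs_y):
            cond += 1
            joint += in_band(sd_i_obs, obs_i)
    prob = joint / cond if cond else float("nan")
    print(f"x = {x:3d}%   conditional probability = {prob:.3f}")
```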

Figure 4: Probability of the outcome $\{[\sigma_{i^{obs}_t} \in (\pm 2.5\% \times \sigma^{US}_{i^{obs}_t})] \cap [\sigma_{\pi_t} \in (\pm 2.5\% \times \sigma^{US}_{\pi_t})] \cap [\sigma_{y_t} \in (\pm 2.5\% \times \sigma^{US}_{y_t})]\}$ belonging to the optimal policy space $V_o$, conditional on the outcome $\{[\sigma_{\pi_t} \in (\pm 2.5\% \times \sigma^{US}_{\pi_t})] \cap [\sigma_{y_t} \in (\pm 2.5\% \times \sigma^{US}_{y_t})]\}$ belonging to the optimal policy space $V^{\pi,y}_o$. The horizontal axis measures the variance of the measurement error for the observed interest rate $i^{obs}_t$ as a percent share of the variance of the optimal interest rate $i_t$, given by $\sigma^2_{w_t} = \frac{x}{100}\sigma^2_{i_t}$.

5 Related literature

A growing literature investigates the fit of micro-founded DSGE models to the data conditional on an optimal monetary policy. Most related research has focused on small forward- and backward-looking macroeconomic models used in the monetary policy literature. Soderstrom, Soderlind, and Vredin (2002) use informal calibration to match the dynamics of an optimal policy new Keynesian model to US data. Dennis (2004), Favero and Rovelli (2003) and Salemi (2006) estimate structural models subject to the restriction that the policy rule minimizes the policymaker’s loss function.

Given a time series for the observables $(Y_{1t}, \ldots, Y_{nt})$ with covariance matrix $\Sigma_Y$, the approach adopted by these authors produces estimates for the deep parameters, the policymaker’s preferences, and a time series for a vector of shocks with nonsingular covariance matrix such that the theoretical model can generate the historical data. This approach also implies that there will exist an estimated parameter vector, including random deviations from the optimal policy, such that the historical volatility outcome can be generated by the model.

Salemi (2006) shows how to use the nonsingular model estimation approach to compute a statistical test for optimal policymaking. The optimal policy imposes cross-equation restrictions on the estimated parameters, and their impact on the likelihood of the model can be exploited for testing. The optimal policy space we propose is instead built exploiting the restrictions imposed by truly optimal policymaking in a parametric family of singular models on the volatility of observable variables. Compared to the assumptions used by papers estimating a non-singular model with deviations from the optimal policy behavior, the singular-model approach we propose makes stronger assumptions on the behavior of the policymaker. On the other hand, the use of the optimal policy space as a diagnostic tool for the efficiency of macroeconomic outcomes relaxes the demand on the data fit since policies that are period-by-period suboptimal may still result in volatility outcomes belonging to the optimal policy space.

Clearly a three-equation model, such as the one adopted in this paper, can only provide a stylized description of the economy’s behavior. Yet small optimal policy DSGE models are estimated to gain insight into the preferences of the policymaker, and are often relied upon by economists to illustrate and generate policy prescriptions and guidelines. Computing the optimal policy space for such models provides important insights into the restrictions on the data that the models imply.

6 Conclusions

This paper studied the restrictions implied by optimal policy DSGE models for the volatility of observable endogenous variables.

Our approach relies on the restrictions imposed by optimal policymaking on the variance of the endogenous variables in singular models. To generate a non-trivial set for the volatility of observable variables – which we label the optimal policy space – we introduce variation in the behavioral parameters when building the set of outcomes consistent with the model. We show that a DSGE model can be associated with a well-defined subset of all the possible volatility outcomes, which is not of measure zero. This is the result of the nonlinearity of the mapping between a DSGE model’s parameter space and the implied volatility of the endogenous variables. Nonsingular models, which assume random perturbations to optimal policymaking, imply that no observable outcome has zero probability.

We illustrated our method by building the optimal policy space of a widely used new Keynesian model. Conditional on this model, the volatility outcome associated with recent US monetary policy would have zero likelihood of being the result of optimal policymaking. Since this approach has by construction low power in discriminating optimal policy outcomes, we interpret the result as evidence that widely used optimal policy models can only be consistent with a very limited set of volatility outcomes, regardless of the parameterization adopted.

In the case of a simple new Keynesian model we were able to find a closed-form solution for the mapping M(β; o) defining the optimal policy space, describing the volatility of endogenous variables as functions of the volatility of exogenous shocks. When a closed-form solution is available, the rank of the Jacobian matrix associated with M(β; o) can be examined to assess whether the optimal policy space for a given set of endogenous variables is of measure zero. We showed that when a closed-form solution is not available, numerical simulations can be performed to generate the optimal policy space. Thus this approach can be readily extended to medium-scale DSGE models.

Acknowledgments

I would like to thank Bart Hobijn, Peter Ireland, Andre Kurmann, Luca Sala, Ulf Soderstrom, Peter Tillmann, Mathias Trabandt, Carl Walsh and two anonymous referees for very helpful comments and suggestions, and Daniel Beltran for excellent research assistance.

Appendix

A.1 The optimal policy space for a singular model: the case of a linear mapping M(β)

This section shows that for a singular model, as in the case of a parameterized linear optimal policy DSGE model, the mapping M(β) is linear.

Assume M(β; o) is a linear map and is equal to:

(13) $M(\beta; o) = C\beta$

where β is a k×1 vector and C is an n×k matrix. For an unrestricted vector β two outcomes are possible. When the matrix C is of rank n, its columns span the space $\mathbb{R}^n$. Then $V_o = \mathbb{R}^n$ and necessarily $V_o = V_p$ for any policy p such that rank(C) = n. When C is of rank s < n, its columns span an s-dimensional subspace and $V_o$ is an s-dimensional hyperplane.

For a linear model and β including only the entries of the exogenous shocks’ covariance matrix, the map M(β; o) can be written as in eq. (13). Let the model associated with M(β; o) be described by the stationary law of motion $Y_t = AU_t$, where $Y_t$ is an n×1 vector of endogenous variables with covariance matrix $\Sigma_Y$ and $U_t$ is an s×1 vector of exogenous shocks with covariance matrix $\Sigma_U$. For β ≡ vec(ΣU) we can write

(14) $M(\beta; o) = T(A \otimes A)\,\mathrm{vec}(\Sigma_U)$

where T is an n×n² matrix with a unit value at entry $[i, (i-1)n + i]$, i = 1, …, n, and zeros otherwise, so that M(β; o) is equal to the diagonal of $\Sigma_Y$. If A is of rank n, the linear map $\mathrm{vec}(\Sigma_Y) = (A \otimes A)\,\mathrm{vec}(\Sigma_U)$ spans the space defined by the vectorization of n×n positive semi-definite symmetric matrices, and the matrix $T(A \otimes A)$ is of rank n. Because $\Sigma_U$ is a positive semi-definite symmetric matrix, M(β; o) does not span $\mathbb{R}^n$. It will, though, span $\mathbb{R}^n_+$, since M(β; o) is just the main diagonal of $\Sigma_Y$, and any vector $g \in \mathbb{R}^n_+$ is the main diagonal of at least one positive semi-definite matrix. If A is of rank s < n, then $T(A \otimes A)$ is also of rank s < n. This is the case of a singular model, where $V_o$ is an s-dimensional hyperplane in $\mathbb{R}^n$. Therefore, conditional on the model A, either all vectors $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}]$ belong to the optimal policy space (and $V_o$ is an improper subset of $\mathbb{R}^n_+$) if s = n, or any vector $[\sigma^2_{Y_1}, \sigma^2_{Y_2}, \ldots, \sigma^2_{Y_n}]$ almost surely does not belong to the optimal policy space if s < n.
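
Equation (14) and the construction of T can be verified directly, as in the sketch below (dimensions and matrices are arbitrary illustrative choices).

```python
# Sketch verifying eq. (14): T (A kron A) vec(Sigma_U) equals the diagonal of
# Sigma_Y = A Sigma_U A'. Dimensions and matrices are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, s = 3, 2
A = rng.normal(size=(n, s))                 # singular model: s < n shocks
B = rng.normal(size=(s, s))
Sigma_U = B @ B.T                           # positive semi-definite shock covariance

# T is n x n^2; using zero-based indices it has a one at [i, i*n + i], so it
# picks the diagonal of an n x n matrix out of its column-major vectorization.
T = np.zeros((n, n * n))
for i in range(n):
    T[i, i * n + i] = 1.0

Sigma_Y = A @ Sigma_U @ A.T
vec_Sigma_U = Sigma_U.flatten(order="F")    # column-major vec(.)
lhs = T @ np.kron(A, A) @ vec_Sigma_U
assert np.allclose(lhs, np.diag(Sigma_Y))   # eq. (14) reproduces the variances
```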

A.2 Solution of the Benigno and Woodford (2005) model

Consider the New Keynesian model for inflation πt, output gap xt, interest rate it as described in Walsh (2005) and Benigno and Woodford (2005):

(15) $x_t = -\frac{1}{\varphi}\left(i_t - E_t\pi_{t+1} - r^n_t\right) + E_t(x_{t+1})$
(16) $\pi_t - \gamma\pi_{t-1} = \lambda x_t + \tilde\beta E_t(\pi_{t+1} - \gamma\pi_t), \qquad x_t = y_t - y^n_t$

where $r^n_t$ is the Wicksellian real rate of interest, $y_t$ is output, $y^n_t$ is the level of output that would obtain in the flexible-price equilibrium, φ is the coefficient of relative risk aversion for the representative household divided by the consumption share of output, and $\tilde\beta$ is the household’s discount factor. It is assumed that a constant share of firms can adjust the price in each period, while the remaining share indexes the price to a fraction γ of last period’s aggregate inflation rate. When prices can optimally adjust in every period the rational expectations equilibrium solution for $y^n_t$ and $r^n_t$ does not depend on $i_t$:

$$y^n_t = \phi_1 G_t + \phi_2 a_t + \phi_3\tau_t, \qquad r^n_t = \phi_4 E_t(y^n_{t+1} - y^n_t) + \phi_5 E_t(G_{t+1} - G_t)$$
$$\phi_1 = \frac{\varphi}{\omega + \varphi}, \quad \phi_2 = \frac{\zeta(1+v)}{\omega + \varphi}, \quad \phi_3 = \frac{\bar\tau/(1-\bar\tau)}{\omega + \varphi}, \quad \phi_4 = \varphi, \quad \phi_5 = (1 - s_C), \quad \omega = \zeta(1+v) - 1$$

The variable Gt is defined as exogenous government consumption (in log-deviations from the steady state), at is an exogenous productivity shock, τt is an exogenous income tax shock. The parameter ζ is the elasticity of firm output with respect to labor input, v is the inverse of the wage elasticity of labor supply, ω is the inverse of the elasticity of firm marginal cost with respect to output, τ̅ is the steady state tax rate, sC is the consumption steady state share of output, φ is the coefficient of relative risk aversion for the representative household divided by sC. The elasticity of inflation with respect to xt is given by:

$$\lambda = \frac{(1-\chi)(1-\chi\tilde\beta)}{\chi(1+\theta\omega)}(\omega + \varphi)$$

In the absence of transfers to correct the steady state distortions arising from taxes and imperfect competition, or in the case $\tau_t \neq 0$, the efficient level of output $y^*_t$ is different from $y^n_t$ and is given by:

$$y^*_t = w_1 y^n_t + w_2 G_t + w_3\tau_t$$
$$w_1 = \frac{\omega + \varphi + \Phi(1-\varphi)}{\xi}, \quad w_2 = \frac{\Phi\sigma(\omega+\varphi)}{\xi s_C}, \quad w_3 = \frac{\bar\tau/(1-\bar\tau)}{\xi}$$
$$\xi = (\omega+\varphi) + \Phi(1-\varphi) - \Phi\sigma\left(s_C^{-1} - 1\right)(\omega+\varphi), \qquad \Phi = 1 - \frac{\theta-1}{\theta}(1-\bar\tau)$$

where θ is the firms’ demand elasticity. The second order approximation to the utility of the household can be written as:

(17) $W_t = -\frac{1}{2}\Omega E_t\sum_{i=0}^{\infty}\tilde\beta^i\left\{\alpha\tilde x^2_{t+i} + (\pi_{t+i} - \gamma\pi_{t+i-1})^2\right\}, \qquad \tilde x_t = (y_t - y^*_t)$

where x̃t is the welfare-relevant output gap. Wt is equal to the household’s welfare for α=α* where

$$\alpha^* = \frac{\lambda w_1}{\theta}$$

The model in (15), (16) can be expressed in terms of the endogenous variables appearing in the objective function (17):

(18) $\tilde x_t = -\frac{1}{\varphi}\left(i_t - E_t\pi_{t+1} - \tilde r^n_t\right) + E_t(\tilde x_{t+1})$
(19) $\pi_t - \gamma\pi_{t-1} = \lambda\tilde x_t + \tilde\beta E_t(\pi_{t+1} - \gamma\pi_t) + \lambda u_t, \qquad \tilde r^n_t = \phi_4 E_t(y^*_{t+1} - y^*_t) + \phi_5 E_t(G_{t+1} - G_t), \qquad u_t = y^*_t - y^n_t$

The variable ut is a linear combination of all the exogenous shocks. The variable Φ is a measure of the steady state distortions in the economy. If appropriate transfers ensure, as is often assumed, that the steady state is efficient, then Φ=0. Benigno and Woodford (2005) show that in this case w1=1, w2=0, and

$$u_t = w_3\tau_t$$

Assume γ=0. Then the problem faced by the optimal policymaker can be written as:

(20) $\max\; -\frac{1}{2}\Omega E_t\sum_{i=0}^{\infty}\tilde\beta^i\left\{\alpha\tilde x^2_{t+i} + \pi^2_{t+i}\right\}$
(21) s.t. $\tilde x_t = -\frac{1}{\varphi}\left(i_t - E_t\pi_{t+1} - \tilde r^n_t\right) + E_t(\tilde x_{t+1})$
(22) $\pi_t = \lambda\tilde x_t + \tilde\beta E_t\pi_{t+1} + \lambda u_t$
(23) $u_t = w_3\tau_t$
(24) $\tilde r^n_t = \phi_4 E_t\left[\frac{\varphi(G_{t+1} - G_t) + \zeta(1+v)(a_{t+1} - a_t)}{\omega + \varphi}\right] + \phi_5 E_t(G_{t+1} - G_t)$

In this model movements in $a_t$ or $G_t$ can be interpreted as “demand shocks” since they affect $\tilde r^n_t$ but not $u_t$; therefore they do not affect the trade-off between the stabilization objectives and can be perfectly offset by the policymaker. The variable $u_t$ takes the interpretation of a “cost-push” shock, and depends only on movements in $\tau_t$. Assuming that $s_C = 1$, $G_t = \rho_G G_{t-1} + \varepsilon_{Gt}$, $a_t = \rho_a a_{t-1} + \varepsilon_{at}$, $\varepsilon_t \sim$ iid, and $\rho_G = \rho_a$, it holds that:

(25) $\tilde r^n_t = \phi_4 E_t(y^*_{t+1} - y^*_t), \qquad y^*_t = \frac{1}{\varphi(1-\rho_a)}\tilde r^n_t$

Eq. (25) also holds for $s_C < 1$ and $G_t = 0$ for all t, or for $s_C < 1$ and $\rho_G = 1$.

The optimal time-consistent policy is given by the FOC:

$$\pi_t - \gamma\pi_{t-1} = -\frac{\alpha}{\lambda}(1 + \tilde\beta\gamma)\tilde x_t$$

The timeless perspective optimal commitment policy is given by the FOC:

$$\pi_t - \gamma\pi_{t-1} = -\frac{\alpha}{\lambda}(\tilde x_t - \tilde x_{t-1})$$

Baseline parameterization: The parameterization follows Walsh (2005) unless otherwise stated in the main text.

$$\chi = 0.66, \quad \gamma = 0.5, \quad \tilde\beta = 0.99, \quad \varphi = 0.16, \quad \phi = 1.5, \quad \theta = 7.88, \quad s_C = 0.8, \quad v = 0.49, \quad \bar\tau = 0.2, \quad \rho_a = 0.95, \quad \rho_G = 0.95, \quad \rho_\tau = 0.95$$

References

Baxandall, P., and H. Liebeck. 1986. Vector Calculus. Oxford: Clarendon Press.

Benigno, P., and M. Woodford. 2005. “Inflation Stabilization and Welfare: The Case of a Distorted Steady State.” Journal of the European Economic Association 3 (6): 1185–1236.

Bierens, H. 2007. “Econometric Analysis of Linearized Singular Dynamic Stochastic General Equilibrium Models.” Journal of Econometrics 136: 595–627.

Clarida, R., J. Galí, and M. Gertler. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (4): 1661–1707.

Conlon, L. 2001. Differentiable Manifolds. Boston: Birkhauser.

Dennis, R. 2004. “Inferring Policy Objectives from Economic Outcomes.” Oxford Bulletin of Economics and Statistics 66: 735–764.

Favero, C., and R. Rovelli. 2003. “Macroeconomic Stability and the Preferences of the Fed: A Formal Analysis, 1961–1998.” Journal of Money, Credit and Banking 35: 546–556.

Kwakernaak, H. 1979. “Maximum Likelihood Parameter Estimation for Linear Systems with Singular Observations.” IEEE Transactions on Automatic Control 24 (3): 496–498.

Lai, Hung-pin. 2008. “Maximum Likelihood Estimation of Singular Systems of Equations.” Economics Letters 99: 51–54.

Salemi, M. 2006. “Econometric Policy Evaluation and Inverse Control.” Journal of Money, Credit and Banking 38: 1737–1764.

Smets, F., and R. Wouters. 2007. “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97 (3): 586–606.

Soderstrom, U., P. Soderlind, and A. Vredin. 2002. “Can a Calibrated New Keynesian Model of Monetary Policy Fit the Facts?” Sveriges Riksbank Working Paper 140.

Walsh, C. 2005. “Endogenous Objectives and the Evaluation of Targeting Rules for Monetary Policy.” Journal of Monetary Economics 52: 889–911.

Woodford, M. 2003. “Optimal Interest Rate Smoothing.” Review of Economic Studies 70: 861–886.

Published Online: 2016-4-28
Published in Print: 2016-6-1

©2016 by De Gruyter
