Article · Open Access

Disentangling Permanent and Transitory Monetary Shocks with a Nonlinear Taylor Rule

Juan Ángel Lafuente, Mercedes Monfort, Rafaela Pérez and Jesús Ruiz
Published/Copyright: December 31, 2021

Abstract

This article provides an estimation method to decompose monetary policy innovations into persistent and transitory components using the nonlinear Taylor rule proposed in Andolfatto, Hendry, and Moran (2008) [Are inflation expectations rational? Journal of Monetary Economics, 55, 406–422]. To use the Kalman filter as the optimal signal extraction technique, we use a convenient reformulation of the state equation that allows expectations to play a significant role in explaining the future time evolution of monetary shocks. This alternative formulation allows us to perform maximum likelihood estimation of all the parameters involved in the monetary policy rule, as well as to recover conditional probabilities of regime change. Empirical evidence on US monetary policy making is provided for the period covering 1986-Q1 to 2021-Q2. We compare our empirical estimates with those obtained based on the particle filter. While both procedures lead to similar quantitative and qualitative findings, our approach has a much lower computational cost.

JEL classification: C22; F31

1 Introduction

State-space models are useful for many economic applications. As is well known, under normality, the classical Kalman filter provides the minimum-variance estimate of the current state given the most recent signal. This prediction is just the conditional expectation. However, under nonlinearity and/or nonnormality, the filtering procedure developed by Kalman (1960) becomes nonoptimal. Two alternatives have been developed in the literature to deal with this aspect: (a) the use of first-order Taylor series expansions to obtain linearized equations (transition and/or observation) and (b) the use of simulation techniques based on sequential estimation of conditional densities through a large number of replications. The first alternative leads to biased estimators. As to the second approach, the seminal articles of Fernández-Villaverde and Rubio-Ramírez (2005, 2007) show how to deal with the likelihood-based estimation of nonlinear dynamic stochastic general equilibrium (DSGE) models with nonnormal shocks using a sequential Monte Carlo method (particle filter). This procedure requires a heavy computational burden.

This article revisits the nonoptimality of the Kalman filter by reconsidering the signal extraction problem proposed in Andolfatto, Hendry, and Moran (2008). These authors consider a nonlinear Taylor rule where regime shifts reflect the updating of the central bank’s inflation target. Such a rule could be useful not only to analyze monetary policy making through the lens of a Taylor rule but also in the context of New Keynesian models that incorporate imperfect monetary policy credibility and/or changes in the central bank’s inflation target (see, e.g., Aruoba & Schorpheide, 2011; Cogley, Primiceri, & Sargent, 2010; Grossi & Tamborini, 2012; Ireland, 2007; Kozicki & Tinsley, 2005; Milani & Treadwell, 2012). The article contributes to the literature by providing an optimal use of the Kalman filter to estimate persistent and transitory monetary shocks when permanent shifts in the inflation target take place. Therefore, we focus on how to estimate a Taylor rule where central banks’ smoothing of interest rates is time varying because of time-varying inflation targeting. From the perspective of conventional monetary policy making, our nonlinear framework is also of interest even in scenarios with interest rates close to the zero lower bound. The recent study of Anzuini (2021) provides empirical evidence on the presence of nonlinearities in the transmission mechanism of the unconventional large-scale asset purchases made by the Federal Reserve after the global financial crisis of 2008.

We consider a new state-space representation that requires the use of state-contingent matrices, and expectations play a significant role in monetary policy making. Our procedure has two clear advantages over the standard particle filter: (a) the possibility of performing a maximum likelihood estimation of the parameters involved in the monetary policy and, therefore, the estimation of conditional time-varying probabilities of regime switching and (b) a remarkable lower computational cost. Moreover, it could be incorporated into simulation algorithms for DSGE models in a straightforward manner.

To provide an empirical comparison between our estimation procedure and the particle filter, we estimate permanent and transitory monetary shocks from quarterly US data covering the period 1985–2021. We find that the evidence of a regime change in US monetary policy making during the period 1984–1999 is weak. However, after the Great Moderation, 9/11, the recession that started in March 2001, and the subprime crisis are three events that clearly affected inflation targets in terms of the long-term nominal anchor. Moreover, the financial crisis and the outbreak of the COVID-19 pandemic are crucial events that influenced US monetary policy making. To further examine the performance of our estimation procedure, we compare our results with those obtained using the particle filter. We show how, after the financial crisis, both approaches lead not only to very close point estimates of the parameters governing the dynamics of the nonlinear Taylor rule but also to similar probability distributions of the estimated deviations of the current inflation target from its long-term mean.

The rest of the article is organized as follows: The next section reviews the nonlinear Taylor rule on which we focus. Section 3 describes the proposed reformulation of the state-space representation. Section 4 presents empirical evidence for the US, while Section 5 compares our empirical findings with those based on the particle filter. Finally, Section 6 summarizes and provides concluding remarks.

2 The Econometric Problem

Consider the following Taylor rule with time-varying inflation targeting (Andolfatto et al., 2008):

(1) $i_t = (1-\rho)\left[r + \pi_t^{*} + \alpha(\pi_t - \pi_t^{*}) + \beta(y_t - y_t^{*})\right] + \rho i_{t-1} + u_t,$

where $r$ is the long-run equilibrium real interest rate, $\pi_t^{*}$ denotes the inflation target, $y_t - y_t^{*}$ is the output gap, $\rho$ is the parameter accounting for monetary policy inertia, and $u_t$ represents the monetary shocks, which can be interpreted as errors underlying the central bank’s control over the policy instrument. We suppose that the time evolution of this shock can be represented as follows:

(2) $u_{t+1} = \varphi u_t + e_{t+1}, \quad 0 < \varphi < 1, \quad e_{t+1} \sim N(0, \sigma_e^2).$

Following Andolfatto et al. (2008), a second disturbance to monetary policy is considered. This noise represents the change in the proper rate of inflation the central bank should pursue because of changes in the economic outlook. We express these shifts as $z_t = \pi_t^{*} - \pi^{*}$, so that $z_t$ represents the deviation of the current target ($\pi_t^{*}$) from its long-term (time-invariant) mean ($\pi^{*}$). Increases of $z_t$ in the range of positive values mean that the monetary policy stance becomes more expansionary because the central bank relaxes its short-run inflation target. On the contrary, decreases of $z_t$ in the range of negative values represent a tightening of monetary policy. It is expected that these shifts will exhibit significant duration:

(3) $z_{t+1} = \begin{cases} z_t, & \text{with probability } p, \\ g_{t+1}, & \text{with probability } 1-p, \end{cases}$

with $g_{t+1} \sim N(0, \sigma_g^2)$.
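To illustrate the two disturbance processes, the following minimal Python sketch (ours, with arbitrary illustrative parameter values, not estimates from the article) simulates the transitory AR(1) shock in equation (2) and the regime-switching target deviation in equation (3):

```python
import numpy as np

rng = np.random.default_rng(0)
T, varphi, p = 500, 0.5, 0.9          # illustrative values, not estimates
sigma_e, sigma_g = 0.001, 0.15

u = np.zeros(T)                        # transitory shock, equation (2)
z = np.zeros(T)                        # target deviation, equation (3)
for t in range(1, T):
    u[t] = varphi * u[t - 1] + sigma_e * rng.standard_normal()
    # with probability p the current target is kept; otherwise it is redrawn
    z[t] = z[t - 1] if rng.uniform() < p else sigma_g * rng.standard_normal()
```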

Combining the definition of $z_t$ with equation (1), the Taylor rule can be rearranged as follows:

(4) $i_t = (1-\rho)\left[r + \pi^{*} + \alpha(\pi_t - \pi^{*}) + \beta(y_t - y_t^{*})\right] + \rho i_{t-1} + \underbrace{(1-\rho)(1-\alpha) z_t + u_t}_{\varepsilon_t}.$

Therefore, monetary shocks in the above Taylor rule ($\varepsilon_t$) are a combination of a persistent innovation ($(1-\rho)(1-\alpha) z_t$) and a transitory one ($u_t$).

Researchers interested in incorporating the above monetary rule into a DSGE model as a plausible representation of monetary policy making must consider that agents learn about the central bank’s decisions in two ways: (a) they must solve a signal extraction problem to decompose the aggregate shock into its permanent and transitory components, and (b) they must act as econometricians to estimate the parameters $\varphi$, $\sigma_e^2$, $\sigma_g^2$, and $p$. The next section explains how to deal with both aspects.

3 State-Space Representation and Maximum Likelihood Estimation

Andolfatto et al. (2008) propose the following state-space representation for the monetary shocks in the above-mentioned Taylor rule:

(5) $\begin{bmatrix} z_{t+1} \\ u_{t+1} \end{bmatrix} = \begin{bmatrix} p & 0 \\ 0 & \varphi \end{bmatrix} \begin{bmatrix} z_t \\ u_t \end{bmatrix} + \begin{bmatrix} N_{t+1} \\ e_{t+1} \end{bmatrix}, \quad \text{where } N_{t+1} = \begin{cases} (1-p) z_t, & \text{with prob. } p, \\ g_{t+1} - p z_t, & \text{with prob. } 1-p, \end{cases}$

$\hat{\varepsilon}_t = \begin{bmatrix} (1-\rho)(1-\alpha) & 1 \end{bmatrix} \begin{bmatrix} z_t \\ u_t \end{bmatrix},$

where the observable signal, $\hat{\varepsilon}_t$, is the ordinary least squares estimate of the error term in the monetary authority’s reaction function (equation (4)).

As pointed out by Andolfatto et al. (2008), the use of the Kalman filter is not fully optimal because $z_t$ is a mixture of a Bernoulli process and a Gaussian noise. To overcome this nonnormality, let us consider an alternative formulation of the time evolution of $z_t$ that requires a state-space representation with state-contingent matrices in the state equation.[1] This alternative formulation is as follows:

(6) $\begin{bmatrix} z_{t+1} \\ u_{t+1} \end{bmatrix} = \begin{bmatrix} \phi & 0 \\ 0 & \varphi \end{bmatrix} \begin{bmatrix} z_t \\ u_t \end{bmatrix} + \begin{bmatrix} \varpi_{S_{t+1}} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} E_t z_{t+1} \\ E_t u_{t+1} \end{bmatrix} + \begin{bmatrix} \delta_{S_{t+1}} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} g_{t+1} \\ e_{t+1} \end{bmatrix},$

(7) $\hat{\varepsilon}_t = \begin{bmatrix} (1-\rho)(1-\alpha) & 1 \end{bmatrix} \begin{bmatrix} z_t \\ u_t \end{bmatrix},$

where $\varphi \in (0,1)$ and

(8) $\varpi_{S_{t+1}} = \begin{cases} \dfrac{1-\phi}{p}, & \text{if } S_{t+1}=1, \text{ with prob. } p, \\[4pt] -\dfrac{\phi}{p}, & \text{if } S_{t+1}=0, \text{ with prob. } 1-p, \end{cases} \qquad \delta_{S_{t+1}} = \begin{cases} 0, & \text{if } S_{t+1}=1, \text{ with prob. } p, \\ 1, & \text{if } S_{t+1}=0, \text{ with prob. } 1-p. \end{cases}$

Proposition 1

If $\varpi_{S_{t+1}}$ and $\delta_{S_{t+1}}$ are defined as in equation (8), the dynamics of $z_t$ are observationally equivalent to equation (5) from the perspective of the conditional mean.

Proof

From equation (6), we have that

$z_{t+1} = \phi z_t + \varpi_{S_{t+1}} E_t z_{t+1} + \delta_{S_{t+1}} g_{t+1},$

and, therefore, the conditional expectation of z t + 1 is as follows:

(9) $E_t z_{t+1} = \dfrac{\phi}{1 - p\,\varpi_{S_{t+1}=1} - (1-p)\,\varpi_{S_{t+1}=0}}\, z_t.$

From equation (8), with probability $p$, $S_{t+1} = 1$ and $z_{t+1} = z_t$; then:

$\phi z_t + \varpi_{S_{t+1}=1} E_t z_{t+1} + \delta_{S_{t+1}=1} g_{t+1} = z_t \;\Rightarrow\; \phi z_t + \varpi_{S_{t+1}=1} \dfrac{\phi}{1 - p\,\varpi_{S_{t+1}=1} - (1-p)\,\varpi_{S_{t+1}=0}}\, z_t + \delta_{S_{t+1}=1} g_{t+1} = z_t.$

This equation holds when:

(10) $\delta_{S_{t+1}=1} = 0, \quad \text{and} \quad \phi\left(1 + \dfrac{\varpi_{S_{t+1}=1}}{1 - p\,\varpi_{S_{t+1}=1} - (1-p)\,\varpi_{S_{t+1}=0}}\right) = 1.$

Again from equation (8), but with probability $1-p$, $S_{t+1} = 0$ and $z_{t+1} = g_{t+1}$; then:

$\phi z_t + \varpi_{S_{t+1}=0} E_t z_{t+1} + \delta_{S_{t+1}=0} g_{t+1} = g_{t+1} \;\Rightarrow\; \phi z_t + \varpi_{S_{t+1}=0} \dfrac{\phi}{1 - p\,\varpi_{S_{t+1}=1} - (1-p)\,\varpi_{S_{t+1}=0}}\, z_t + \delta_{S_{t+1}=0} g_{t+1} = g_{t+1}.$

This equation holds when:

(11) $\delta_{S_{t+1}=0} = 1, \quad \text{and} \quad 1 + \dfrac{\varpi_{S_{t+1}=0}}{1 - p\,\varpi_{S_{t+1}=1} - (1-p)\,\varpi_{S_{t+1}=0}} = 0.$

Equations (10) and (11) define a system for the unknowns $\{\varpi_{S_{t+1}=1}, \varpi_{S_{t+1}=0}\}$, with the following solution:

$\varpi_{S_{t+1}=1} = \dfrac{1-\phi}{p}; \qquad \varpi_{S_{t+1}=0} = -\dfrac{\phi}{p}.$□

Note that the representation we propose is a function of the parameter $\phi$. Next, we demonstrate that there is a unique value of $\phi$, expressed in terms of the probability $p$, that yields the same conditional variance as equation (5) for the $z_t$ process.

Proposition 2

Our representation yields the same conditional variance as equation (5) for the $z_t$ process if $\phi = p/2$.

Proof

In accordance with equation (5), the conditional variance of $z_{t+1}$ is as follows:

(12) $\operatorname{var}_t(z_{t+1}) = E_t(z_{t+1} - E_t z_{t+1})^2 \overset{E_t z_{t+1} = p z_t}{=} E_t(z_{t+1} - p z_t)^2 = p(1-p) z_t^2 + (1-p)\sigma_g^2.$

Using our representation, we have:

(13) $\operatorname{var}_t(z_{t+1}) = E_t(z_{t+1} - E_t z_{t+1})^2 \overset{E_t z_{t+1} = p z_t}{=} E_t(z_{t+1} - p z_t)^2 = E_t(\phi z_t + \varpi_{S_{t+1}} E_t z_{t+1} + \delta_{S_{t+1}} g_{t+1} - p z_t)^2 = E_t[(\phi(1 - \varpi_{S_{t+1}}) - p) z_t + \delta_{S_{t+1}} g_{t+1}]^2 = p(2\phi - 1)^2 z_t^2 + (1-p) E_t[-p z_t + g_{t+1}]^2 = p(2\phi - 1)^2 z_t^2 + (1-p)(p^2 z_t^2 + \sigma_g^2).$

Substituting $\phi = p/2$ into equation (13), it is straightforward to obtain the expression in equation (12).□
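Propositions 1 and 2 can also be checked numerically by simulating one step of both representations from a common $z_t$ and comparing moments. A minimal sketch, assuming illustrative parameter values (ours, not the article's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
p, sigma_g, z_t = 0.8, 0.2, 0.5              # illustrative values
phi = p / 2                                   # Proposition 2
w1, w0 = (1 - phi) / p, -phi / p              # solved state-contingent weights
N = 1_000_000

# Representation (5): keep z_t with probability p, redraw otherwise
S = rng.uniform(size=N) < p
z5 = np.where(S, z_t, sigma_g * rng.standard_normal(N))

# Representation (6): by construction E_t z_{t+1} = p * z_t
Ez = p * z_t
g = sigma_g * rng.standard_normal(N)
z6 = np.where(S, phi * z_t + w1 * Ez, phi * z_t + w0 * Ez + g)

print(z5.var(), z6.var())                     # both close to the theoretical value
print(p * (1 - p) * z_t**2 + (1 - p) * sigma_g**2)   # equation (12)
```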

Our state-space formulation, which is characterized by Gaussian innovations, is:

(14) $\xi_{t+1} = F \xi_t + B(S_{t+1}) E_t \xi_{t+1} + U(S_{t+1}) \upsilon_{t+1},$

(15) $\hat{\varepsilon}_t = H' \xi_t,$

where:

$\xi_{t+1} = \begin{bmatrix} z_{t+1} \\ u_{t+1} \end{bmatrix}, \quad \upsilon_{t+1} = \begin{bmatrix} g_{t+1} \\ e_{t+1} \end{bmatrix}, \quad F = \begin{bmatrix} \frac{p}{2} & 0 \\ 0 & \varphi \end{bmatrix}, \quad B(S_{t+1}) = \begin{bmatrix} \varpi_{S_{t+1}} & 0 \\ 0 & 0 \end{bmatrix}, \quad U(S_{t+1}) = \begin{bmatrix} \delta_{S_{t+1}} & 0 \\ 0 & 1 \end{bmatrix},$

$E[\upsilon_t \upsilon_t'] = \begin{bmatrix} \sigma_g^2 & 0 \\ 0 & \sigma_e^2 \end{bmatrix} \equiv Q, \quad H = \begin{bmatrix} (1-\rho)(1-\alpha) \\ 1 \end{bmatrix},$

and $\varpi_{S_{t+1}}$ and $\delta_{S_{t+1}}$ are defined as in equation (8).

Equations (14) and (15) define a state-space system (see Hamilton, 1994, chapter 13), where equation (14) is the state equation and equation (15) is the observation equation.

For each of the two relevant histories, $S_t = k$ ($k \in \{0,1\}$), the equations for the Kalman filter are[2]:

$K_t^{(k)} = P_{t|t-1}^{(k)} H \left( H' P_{t|t-1}^{(k)} H \right)^{-1},$

$\hat{\xi}_{t+1|t}^{(k)} = (I - B(S_t = k))^{-1} F \left[ \hat{\xi}_{t|t-1}^{(k)} + K_t^{(k)} \left( \hat{\varepsilon}_t - H' \hat{\xi}_{t|t-1}^{(k)} \right) \right],$

$P_{t+1|t}^{(k)} = F P_{t|t-1}^{(k)} F' - F K_t^{(k)} H' P_{t|t-1}^{(k)} F' + U(S_t = k)\, Q\, U(S_t = k)'.$

Next, we describe how to obtain the log-likelihood function to be maximized with respect to the parameters $\varphi$, $p$, $\sigma_g^2$, and $\sigma_e^2$:

Step 1: Computing the density functions for each history:

The density function of $\hat{\varepsilon}_t$ conditional on $Y_{t-1} \equiv (\hat{\varepsilon}_1, \hat{\varepsilon}_2, \ldots, \hat{\varepsilon}_{t-1})'$ is:

$f(\hat{\varepsilon}_t | Y_{t-1}, S_t = k; \theta) = (2\pi)^{-1/2} |\omega_t^{(k)}|^{-1/2} \exp\left\{ -\tfrac{1}{2}\, \hat{\mu}_t^{(k)} [\omega_t^{(k)}]^{-1} \hat{\mu}_t^{(k)} \right\} = (2\pi)^{-1/2} |\omega_t^{(k)}|^{-1/2} \exp\left\{ -\tfrac{1}{2}\, [\hat{\mu}_t^{(k)}]^2 / \omega_t^{(k)} \right\},$

where $\omega_t^{(k)} = H' P_{t|t-1}^{(k)} H$; $\hat{\mu}_t^{(k)} = \hat{\varepsilon}_t - H' \hat{\xi}_{t|t-1}^{(k)}$, and $\theta \equiv (\varphi, p, \sigma_g^2, \sigma_e^2)'$.

Step 2: Computing the marginal density function of ε ˆ t conditional to Y t 1 :

$f(\hat{\varepsilon}_t | Y_{t-1}; \theta) = \sum_{k=0}^{1} f(\hat{\varepsilon}_t | Y_{t-1}, S_t = k; \theta) \Pr[S_t = k].$

Step 3: Obtaining the log-likelihood function of ε ˆ :

$\ln L(\theta) = \sum_{t=1}^{T} \ln f(\hat{\varepsilon}_t | Y_{t-1}; \theta).$

Once the parameters have been estimated, the probability of a regime change in the current period conditional on a given shock can be estimated as follows:

$\Pr[S_t = 0 | \hat{\varepsilon}_t] = \dfrac{\Pr[S_t = 0]\, f(\hat{\varepsilon}_t | Y_{t-1}, S_t = 0; \hat{\theta})}{f(\hat{\varepsilon}_t | Y_{t-1}; \hat{\theta})},$

where $\hat{\theta}$ denotes the vector of estimated parameters.
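To make Steps 1–3 concrete, here is a compact Python sketch of the state-contingent filtering recursion and the log-likelihood. It is our illustrative implementation of the equations above (the function and variable names are ours), not the authors' code:

```python
import numpy as np

def kalman_loglik(eps_hat, theta, rho, alpha):
    """Log-likelihood of the OLS residuals eps_hat under the state-contingent
    representation (14)-(15); theta = (varphi, p, sig2_g, sig2_e)."""
    varphi, p, sig2_g, sig2_e = theta
    phi = p / 2                                   # Proposition 2
    w = {1: (1 - phi) / p, 0: -phi / p}           # weights from equation (8)
    F = np.diag([p / 2, varphi])
    H = np.array([(1 - rho) * (1 - alpha), 1.0])
    Q = np.diag([sig2_g, sig2_e])
    prob = {1: p, 0: 1 - p}                       # Pr[S_t = k]
    xi = {k: np.zeros(2) for k in (0, 1)}         # xi_hat_{t|t-1} per history
    P = {k: np.eye(2) for k in (0, 1)}            # P_{t|t-1}, arbitrary start
    loglik = 0.0
    for e in eps_hat:
        f_marg = 0.0
        for k in (0, 1):
            omega = H @ P[k] @ H                  # forecast-error variance
            mu = e - H @ xi[k]                    # forecast error
            f_k = np.exp(-0.5 * mu**2 / omega) / np.sqrt(2 * np.pi * omega)
            f_marg += prob[k] * f_k               # Step 2: marginal density
            K = P[k] @ H / omega                  # Kalman gain
            B = np.array([[w[k], 0.0], [0.0, 0.0]])
            U = np.diag([0.0 if k == 1 else 1.0, 1.0])
            IB = np.linalg.inv(np.eye(2) - B)
            xi[k] = IB @ F @ (xi[k] + K * mu)     # prediction update
            P[k] = F @ P[k] @ F.T - np.outer(F @ K, H @ P[k] @ F.T) + U @ Q @ U.T
        loglik += np.log(f_marg)                  # Step 3
    return loglik
```

In practice, one would maximize this function over $\theta$ with a numerical optimizer on the negative log-likelihood and then recover $\Pr[S_t = 0 | \hat{\varepsilon}_t]$ from the two conditional densities, as in the last equation above.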

4 Empirical Evidence

In this section, we show how to use our estimation method to provide empirical evidence on monetary policy making for the United States through the lens of a Taylor rule. We use quarterly data retrieved from the Federal Reserve Bank of St. Louis. In particular, we collect information on the federal funds rate, which is the interest rate at which depository institutions trade federal funds with each other overnight, inflation, gross domestic product (GDP), and the output gap, measured as the difference between real GDP and real potential GDP as a percentage of real potential GDP. We consider the sample period covering the first quarter of 1986 to the second quarter of 2021. We start in 1986 because of the Fed’s changing approach to monetary policy in the late 1970s and early 1980s. In October 1979, the Federal Open Market Committee (FOMC) began to target the quantity of money (nonborrowed reserves) instead of the price of bank reserves. As a consequence, the average fed funds rate fluctuated greatly between 1979 and 1982, and M1 started to show wide fluctuations that do not appear to have been related to economic conditions. Starting in late 1982, the Federal Reserve shifted back to its approach of targeting the price rather than the quantity of money. Figure 1 depicts the time evolution of the data used.

Figure 1: Time series for inflation, output gap, and interest rate for the US economy.
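Before running the regression below, the series must be assembled. A hedged sketch of this data step in Python: the FRED mnemonics (FEDFUNDS, CPIAUCSL, GDPC1, GDPPOT) are standard, but the exact series and transformations are our assumptions, one plausible reconstruction of the description above:

```python
import pandas_datareader.data as web

start, end = "1986-01-01", "2021-06-30"
ffr = web.DataReader("FEDFUNDS", "fred", start, end).resample("QS").mean()
cpi = web.DataReader("CPIAUCSL", "fred", start, end).resample("QS").mean()
gdp = web.DataReader("GDPC1", "fred", start, end)          # real GDP, quarterly
pot = web.DataReader("GDPPOT", "fred", start, end)         # real potential GDP

inflation = cpi.pct_change(4) * 100                        # year-over-year inflation (our choice of measure)
gap = 100 * (gdp["GDPC1"] - pot["GDPPOT"]) / pot["GDPPOT"] # output gap in percent
```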

A least squares regression of the following Taylor rule:

(16) $i_t = \beta_0 + \beta_1 i_{t-1} + \beta_2 \pi_t + \beta_3 (y_t - y_t^{*}) + \varepsilon_t,$

yields the following parameter estimates (standard errors in brackets):

$i_t = \underset{[0.0001]}{0.0024} + \underset{[0.0003]}{0.9347}\, i_{t-1} + \underset{[0.0110]}{0.0292}\, \pi_t + \underset{[0.0005]}{0.1103}\, (y_t - y_t^{*}) + \hat{\varepsilon}_t.$

Consistent with previous empirical research, a significant point estimate of the lagged policy rate is detected, suggesting very slow partial adjustment in US monetary policy making. Also, the estimated response to the deviation of the short-run inflation target from its long-run counterpart is consistent with the Taylor principle, that is, the nominal interest rate rises more than one-for-one when inflation exceeds the target inflation rate.
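The regression itself is standard OLS on equation (16); a minimal sketch (our helper function, using the series built above), whose residuals provide the observable signal $\hat{\varepsilon}_t$ for the filtering step:

```python
import numpy as np

def taylor_ols(i, pi, gap):
    """OLS estimates of equation (16); returns beta and the residuals eps_hat."""
    y = i[1:]
    X = np.column_stack([np.ones(len(y)), i[:-1], pi[1:], gap[1:]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta                     # residuals = eps_hat
```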

Figure 2 depicts the time evolution of the probability of regime change conditional on a given monetary shock, as well as the permanent component of the monetary shock, that is, the deviation of the current inflation target from its long-term mean ($\hat{z}_{t|t-1}$).

Figure 2: Monetary policy with time-varying inflation target.

Our empirical findings show a high probability of regime change in the latter half of the 1980s. This is consistent with historical monetary policy making in the US[3]: the Fed responded to the October 1987 stock market crash in a number of ways: (a) it accommodated the increased demand for currency and bank reserves with extensive open market purchases, and (b) it dropped its federal funds rate target from around 7.5% to about 6.75%. Later, in the spring of 1988, with core consumer price index (CPI) inflation running at about 4.5%, the Fed reacted to inflationary pressures and began to raise the funds rate, reaching nearly 10% in March 1989.

After the Great Moderation, the probability of regime change approaches unity only in March and December 2001. On November 26, 2001, the National Bureau of Economic Research announced that the US economy had been in recession since March 1, 2001. However, as Mostaghimi (2004) notes, there was some speculation that even though the US monetary authorities had anticipated the severity of the problems in the US economy in 2000, they hesitated to act promptly because of the prolonged US presidential election process. Another probable regime change is detected immediately after the unexpected shock of 9/11, which undoubtedly accelerated the decline in consumer confidence first noted in August 2001. After the terrorist attack, the Fed took up the challenge of maintaining and managing countercyclical policy in a stable price environment. To face the crisis, the target federal funds rate was lowered quickly, and US monetary policy was easy during the period 2002–2006.

Two potential regime changes are also observed in the first quarters of 2008 and 2009, both related to the subprime mortgage crisis. The initial signals of the crisis in financial markets can be dated to June–July 2007 (problems at the Bear Stearns hedge funds); next, economic growth weakened, and the recession officially started in December 2007. In March 2008, Bear Stearns collapsed, and Lehman Brothers followed in September 2008. By late 2008, nominal interest rates were close to the zero bound, but financial markets were not responding as expected, so the Fed took additional measures. On March 18, 2009, the press release made by the Fed stated: “to provide greater support to mortgage lending and housing markets, the Committee decided today to increase the size of the Federal Reserve’s balance sheet further by purchasing up to an additional $750 billion of agency mortgage-backed securities, bringing its total purchases of these securities to up to $1.25 trillion this year, and to increase its purchases of agency debt this year by up to $100 billion to a total of up to $200 billion. Moreover, to help improve conditions in private credit markets, the Committee decided to purchase up to $300 billion of longer-term Treasury securities over the next six months.”

As to the time-varying estimates of the difference between the current and the long-term targeted rates, Figure 2 suggests that the short-run inflation target has been close to constant since 1984, while it was far more volatile (relative to the post-1984 period) in the early 1980s. Such extreme realizations are at odds with a variety of estimates previously reported in the literature (e.g., Aruoba & Schorpheide, 2011; Cogley et al., 2010; Ireland, 2007), probably reflecting that the 1980–1984 period, roughly corresponding to the Volcker disinflation, is difficult to model with the rule under scrutiny. However, after the Great Moderation, the regime changes detected in monetary policy making are matched by substantial updates of the current inflation target.

It is also remarkable that our empirical evidence suggests that, during the period 1994–2000, the monetary policy implemented by the Federal Reserve was, in general, based on short-run inflation targets below the long-term target. This path for flexible inflation targeting is consistent with a non-accommodative monetary policy, in line with the Fed’s policy during this period. The economic environment at the beginning of the past decade was sharply affected by the terrorist attacks of September 11, 2001. During the period 1999–2001, our estimates reveal two significant updates of the inflation target, in the fourth quarters of 1999 and 2001, respectively. These two “regime shifts” are motivated not only by geopolitical uncertainties derived from the terrorist attacks but also by the weak recovery of the US economy after the moderate recession between March and November 2001. For the period 2001–2004, the estimated discrepancy between the current and the long-term inflation targets is, on average, positive, revealing that inflation did not appear to be a serious short-run concern for the FOMC during this period; maximum sustainable employment thus emerged as the only relevant goal. Both aspects explain the aggressive response of the Fed in 2002 and 2003. As pointed out by Bernanke (2010), the discrepancy between the actual federal funds rates and the values implied by the Taylor rule during this time period is the most commonly cited evidence that monetary policy was too easy to prevent further bubbles in financial markets. Our empirical findings, however, suggest that the Fed managed the federal funds rate in accordance with its short-run and long-run inflation targets. Nevertheless, the period 2004–2006 is characterized by negative differences between the current and the long-term inflation targets, suggesting that, in contrast to the previous period (2001–2004), the Fed now had to face the classical trade-off between employment and inflation in monetary policy making. To prevent inflationary pressures that might accompany US economic growth, especially growth encouraged by the Fed’s aggressive response after 2001, the FOMC began to raise the target rate in June 2004, reaching 5.25% in June 2006. In a similar way to Svensson (2010), we can conclude that our empirical evidence on flexible inflation targeting suggests that US monetary policy was implemented in accordance with macroeconomic conditions after the Great Moderation.[4]

In the 2007–2009 period, known as the Great Recession, two clear changes in inflation targeting are detected. Indeed, the FOMC lowered its target for the federal funds rate from 4.5% at the end of 2007 to 2% at the beginning of September 2008. The recession ended in June 2009, but economic weakness persisted, and the Fed applied forward guidance intended to convince the public that rates would stay low. For example, in December 2012, the Committee anticipated that exceptionally low interest rates would remain appropriate while the unemployment rate remained above a threshold value of 6.5%.

Our empirical findings suggest that no regime change occurred between the aftermath of the financial crisis and the outbreak of Coronavirus Disease 2019. During the pandemic, our estimations suggest two additional regime changes, in the fourth quarter of 2019 and the third quarter of 2020, respectively. As to the first one, it should be remembered that, owing to muted inflation pressures, the FOMC lowered the target range for the federal funds rate at its July, September, and October meetings by 25 basis points each. The second regime change is related to the implementation note released by the FOMC on June 16, 2021. In this note, the FOMC stated that “The Committee seeks to achieve maximum employment and inflation at the rate of 2 percent over the longer run. With inflation having run persistently below this longer-run goal, the Committee will aim to achieve inflation moderately above 2 percent for some time so that inflation averages 2 percent over time and longer-term inflation expectations remain well anchored at 2 percent.” The regime change in the third quarter of 2020 also reflects the massive purchases of securities made by the Fed. Between mid-March and early December of 2020, the Fed’s portfolio of securities held outright, which includes commercial mortgage-backed securities, grew by about 70%.

5 An Alternative Approach: The Particle Filter

An alternative approach to estimating deviations of the current inflation target from its long-term mean is the particle filter.[5] We now explore whether this approach leads to important qualitative and quantitative differences compared with the results obtained using our state-space representation (equations (14) and (15)).

We initially compare the parameter estimates governing the dynamics of $z_t$, which is our main object of interest in this application. As to point estimates, Table 1 shows the estimated parameters under the two alternatives using 5,000 particles. We report the estimated probability of regime change ($p$), the estimated volatilities of the permanent and transitory shocks ($\sigma_g^2$ and $\sigma_e^2$, respectively), and the AR(1) parameter governing the time evolution of the transitory shock ($\varphi$).

Except for the volatility associated with the stationary component, the confidence intervals based on the particle filter contain the point estimates we obtain using the proposed state-space representation. However, our method tends to underestimate the unconditional probability of regime change and the volatility of the shock arising under a regime change.

As to the variable $z_t$, Figure 3 shows the time evolution of the estimated discrepancies between the current inflation target and its long-term counterpart using both procedures.

Figure 3: Estimated deviation of the current inflation target from its long-term mean.

Figure 4: Density functions of the $z_t$ variable using both estimation approaches.

Table 1

Estimates of structural parameters using the particle filter and the Kalman filter with the proposed representation, respectively

                                      $\hat{p}$    $\hat{\varphi}$   $\hat{\sigma}_g$   $\hat{\sigma}_e$
Kalman filter (our formulation)        0.7170       0.4922            0.1541             0.0013
                                      (0.0669)     (0.0928)          (0.0202)           (0.0002)
Particle filter (5,000 particles)      0.9008       0.5709            0.3416             0.0023
                                      (0.1027)     (0.0233)          (0.1532)           (0.0004)

Standard errors in parentheses.
Table 2

Comparison after the financial crisis

Panel I. Point estimates
                                      $\hat{p}$    $\hat{\varphi}$   $\hat{\sigma}_g$   $\hat{\sigma}_e$
Kalman filter (our formulation)        0.9002       0.5601            0.4001             0.0010
                                      (0.0806)     (0.0975)          (0.003)            (0.0001)
Particle filter (5,000 particles)      0.9295       0.5709            0.4902             0.0011
                                      (0.0288)     (0.0233)          (0.1867)           (0.0001)

Panel II. Wilcoxon test
Confidence interval for the median difference: [−0.0008, 0.0017]; p-value: 0.2463

While the qualitative patterns look quite similar, the variation ranges of $z_t$ are clearly different. We use the Wilcoxon rank-sum test to compare the two samples, and we cannot accept the null of zero median difference between the two time series. However, the lower limit of the 99% confidence interval is very close to zero[6], suggesting that the two density functions should not dramatically depart from each other, as we can observe in Figure 4.
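The distributional comparison can be reproduced with SciPy's Wilcoxon rank-sum test; a minimal sketch, where the input files standing in for the two estimated $z_t$ series are hypothetical:

```python
import numpy as np
from scipy.stats import ranksums

z_kf = np.loadtxt("z_kalman.txt")     # hypothetical file: Kalman-filter estimates
z_pf = np.loadtxt("z_particle.txt")   # hypothetical file: particle-filter estimates
stat, pvalue = ranksums(z_kf, z_pf)   # H0: both samples come from the same distribution
print(f"rank-sum statistic = {stat:.3f}, p-value = {pvalue:.4f}")
```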

5.1 Robustness Check

To check whether the above pattern is representative of the entire period analyzed, we now focus on the period after the 2008 financial crisis. In particular, we consider the subsample period from 2008-Q4 to 2021-Q2. This period is characterized by inflation remaining persistently below the inflation targets of central banks in many advanced economies despite an unprecedented monetary expansion (Fiedler, Gern, Jannsen, & Wolters, 2019), and by the use of additional monetary instruments beyond the traditional federal funds rate (forward guidance about the future policy rate, large-scale asset purchases) owing to transitory liquidity traps.[7] However, as stated by Svensson (2010), flexible inflation targeting remained the best-practice monetary policy before, during, and after the financial crisis.

Table 2 summarizes the point estimates as well as the nonparametric testing using the Wilcoxon test.

For the most recent period, the point estimates from both estimation techniques become almost identical, with the exception of the volatility of the noise associated with regime changes. However, the difference does not appear remarkable, in the sense that the point estimate from the Kalman filter is compatible with the 95% confidence interval obtained for that parameter with the particle filter. Interestingly enough, we cannot reject the null hypothesis that the median difference is zero. We therefore conclude that, for the most recent period, both estimation procedures lead to similar probability distributions of the estimated deviations of the current inflation target from its long-term mean.

6 Conclusion

This article proposes an estimation procedure to decompose monetary shocks into permanent and transitory components using an inertial Taylor rule and the monetary innovations scheme proposed in Andolfatto et al. (2008). We develop a novel state-space representation that allows an optimal use of the Kalman filter. Our convenient reformulation of the state-space representation enables maximum likelihood estimation of the parameters involved in the time evolution of persistent and transitory monetary shocks, including the conditional probability of regime change. Researchers interested in using New Keynesian DSGE models could take advantage of our estimation procedure to incorporate imperfect knowledge of the monetary policy rule implemented by the central bank.

We provide empirical evidence on US historical monetary policy making through the lens of a Taylor rule during the period 1985–2021. Consistent with previous findings, the evidence for a regime change in the inflation target during the nineties is extremely weak. However, 9/11, the recession that started in March 2001, and the subprime crisis were significant events that affected US monetary policy making in the past decade. We check the robustness of our empirical findings on flexible inflation targeting by comparing our estimations with those obtained using the particle filter. We show that the estimated deviations of the short-run inflation target from its long-run counterpart are remarkably similar over time. Beyond the 2008 financial crisis, both estimation procedures lead to similar probability distributions for this variable.

With a lower computational burden, our estimation procedure has the clear advantage of recovering conditional probabilities of time-varying inflation targeting. This makes it possible to compare such probabilities with those obtained from a regime-switching approach with a constant long-term inflation target but time-varying responses to the output gap and inflation, as in Klingelhöfer and Sun (2018). If both estimated probabilities are close to one for a given time period, it might be interesting to assess whether the regime change is jointly due not only to a new targeting regime but also to the updating of responses. We leave this extension as a topic for further research.

Acknowledgments

The authors are grateful to the entities that have provided financial support for this research. Juan Ángel Lafuente acknowledges financial support from the “Ministerio de Ciencia, Innovación y Universidades” through project PGC2018-095072-B-I00. Juan Ángel Lafuente and Mercedes Monfort are grateful for support from the University Jaume I research project UJI-B2020-26. Juan Ángel Lafuente, Rafaela Pérez, and Jesús Ruiz acknowledge financial support from the “Ministerio de Ciencia, Innovación y Universidades” through project ECO2015-67305-P.

Conflict of interest: Authors state no conflict of interest.

Appendix A

This Appendix describes how to derive the equations for the Kalman filter using our state-space representation with Gaussian innovations.

Following Hamilton (1994), we consider the following state-space system:

(A.1) $\underset{(r\times 1)}{\xi_{t+1}} = \underset{(r\times r)}{F}\, \underset{(r\times 1)}{\xi_t} + \underset{(r\times r)}{B}\, \underset{(r\times 1)}{E_t \xi_{t+1}} + \underset{(r\times r)}{U}\, \underset{(r\times 1)}{\upsilon_{t+1}},$

(A.2) $\underset{(n\times 1)}{y_t} = \underset{(n\times r)}{H'}\, \underset{(r\times 1)}{\xi_t} + \underset{(n\times 1)}{w_t},$

with

(A.3) $E(\upsilon_t \upsilon_\tau') = \begin{cases} Q, & \text{for } t = \tau, \\ 0, & \text{otherwise,} \end{cases}$

(A.4) $E(w_t w_\tau') = \begin{cases} R, & \text{for } t = \tau, \\ 0, & \text{otherwise.} \end{cases}$

We assume that $\{y_1, y_2, \ldots, y_T\}$ are observable variables and that $F$, $B$, $U$, $H$, $Q$, and $R$ are known with certainty.

The Kalman filter calculates the forecasts $\hat{\xi}_{t+1|t}$ recursively and, associated with each of these forecasts, computes the mean squared error matrix $P_{t+1|t} \equiv E[(\xi_{t+1} - \hat{\xi}_{t+1|t})(\xi_{t+1} - \hat{\xi}_{t+1|t})']$.

The forecasting of y t is as follows:

$\hat{y}_{t|t-1} \equiv E(y_t | Y_{t-1}) = H' E(\xi_t | Y_{t-1}) = H' \hat{\xi}_{t|t-1},$

where $Y_{t-1} \equiv (y_{t-1}', y_{t-2}', \ldots, y_1')'$.

The associated mean squared error is:

$E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})'] = H' P_{t|t-1} H + R.$

Next we update ξ t considering the information set available at time t as follows:

(A.5) $\hat{\xi}_{t|t} \equiv \hat{E}(\xi_t | Y_t) = \hat{\xi}_{t|t-1} + \{E[(\xi_t - \hat{\xi}_{t|t-1})(y_t - \hat{y}_{t|t-1})']\} \times \{E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})']\}^{-1} (y_t - \hat{y}_{t|t-1}) = \hat{\xi}_{t|t-1} + P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} (y_t - H' \hat{\xi}_{t|t-1}),$

with mean squared error:

(A.6) $P_{t|t} \equiv E[(\xi_t - \hat{\xi}_{t|t})(\xi_t - \hat{\xi}_{t|t})'] = E[(\xi_t - \hat{\xi}_{t|t-1})(\xi_t - \hat{\xi}_{t|t-1})'] - \{E[(\xi_t - \hat{\xi}_{t|t-1})(y_t - \hat{y}_{t|t-1})']\} \times \{E[(y_t - \hat{y}_{t|t-1})(y_t - \hat{y}_{t|t-1})']\}^{-1} \times \{E[(y_t - \hat{y}_{t|t-1})(\xi_t - \hat{\xi}_{t|t-1})']\} = P_{t|t-1} - P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1} H' P_{t|t-1}.$

Next, we forecast ξ t + 1 given the current set of available information as follows:

$\hat{\xi}_{t+1|t} \equiv \hat{E}(\xi_{t+1} | Y_t) = F \hat{E}(\xi_t | Y_t) + B \hat{E}(E_t(\xi_{t+1}) | Y_t) + U \hat{E}(\upsilon_{t+1} | Y_t) = F \hat{\xi}_{t|t} + B \hat{\xi}_{t+1|t},$

where, given that $\upsilon_{t+1}$ and $w_t$ are Gaussian, we use that $\hat{\xi}_{t+1|t} = E_t(\xi_{t+1})$ and that $\hat{E}(\upsilon_{t+1} | Y_t) = 0$.

Rearranging the above equation we have

(A.7) $\hat{\xi}_{t+1|t} = (I - B)^{-1} F \hat{\xi}_{t|t}.$

Substituting equation (A.5) into equation (A.7):

(A.8) $\hat{\xi}_{t+1|t} = (I - B)^{-1} F \hat{\xi}_{t|t-1} + (I - B)^{-1} F K_t (y_t - H' \hat{\xi}_{t|t-1}),$

where

(A.9) $K_t = P_{t|t-1} H (H' P_{t|t-1} H + R)^{-1}.$

Considering not only that $\xi_{t+1} = F \xi_t + B E_t(\xi_{t+1}) + U \upsilon_{t+1}$, but also that $E_t(\xi_{t+1}) = \hat{\xi}_{t+1|t} = F \hat{\xi}_{t|t} + B \hat{\xi}_{t+1|t}$, we obtain the expression for the forecasting error: $\xi_{t+1} - \hat{\xi}_{t+1|t} = F(\xi_t - \hat{\xi}_{t|t}) + U \upsilon_{t+1}$.

Thus, the mean squared error associated with $\hat{\xi}_{t+1|t}$ can be obtained as follows:

(A.10) $P_{t+1|t} = E[(F(\xi_t - \hat{\xi}_{t|t}) + U \upsilon_{t+1})(F(\xi_t - \hat{\xi}_{t|t}) + U \upsilon_{t+1})'] = F P_{t|t} F' + \tilde{Q}, \quad \text{where } \tilde{Q} \equiv U Q U'.$

Substituting equation (A.6) into equation (A.10):

(A.11) $P_{t+1|t} = F P_{t|t-1} F' - F K_t H' P_{t|t-1} F' + \tilde{Q}.$

Summarizing, given $\hat{\xi}_{1|0}$ and $P_{1|0}$, the Kalman filter recursively computes $\hat{\xi}_{t+1|t}$ and $P_{t+1|t}$ using equations (A.8), (A.9), and (A.11).
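For completeness, here is a one-step implementation of the recursion (A.8)–(A.11) in Python; a sketch under our own naming, for generic matrices $F$, $B$, $U$, $H$, $Q$, and $R$:

```python
import numpy as np

def kf_step(xi_pred, P_pred, y, F, B, U, H, Q, R):
    """One iteration of (A.8), (A.9), and (A.11): maps (xi_{t|t-1}, P_{t|t-1})
    and the observation y_t into (xi_{t+1|t}, P_{t+1|t})."""
    S = H.T @ P_pred @ H + R                        # forecast-error covariance
    K = P_pred @ H @ np.linalg.inv(S)               # gain, equation (A.9)
    IBF = np.linalg.inv(np.eye(len(xi_pred)) - B) @ F
    xi_next = IBF @ (xi_pred + K @ (y - H.T @ xi_pred))                   # (A.8)
    P_next = F @ P_pred @ F.T - F @ K @ H.T @ P_pred @ F.T + U @ Q @ U.T  # (A.11)
    return xi_next, P_next
```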

Appendix B

The particle filter is an alternative to overcome nonnormality. In this appendix, we describe how to evaluate the likelihood function of monetary innovations using a sequential Monte Carlo filter when the AHM representation is considered.

The Andolfatto et al. (2008) specification is:

(A.12) $z_{t+1} = p z_t + N_{t+1},$

(A.13) $\varepsilon_t = (1-\rho)(1-\alpha) z_t + u_t,$

where $N_{t+1} = \begin{cases} (1-p) z_t, & \text{with prob. } p, \\ g_{t+1} - p z_t, & \text{with prob. } 1-p, \end{cases}$ with $g_{t+1} \sim N(0, \sigma_g^2)$, and $u_{t+1} = \varphi u_t + e_{t+1}$, with $e_{t+1} \sim N(0, \sigma_e^2)$.

Assuming that $z_0 = 0$, we proceed as follows:

Step 1: Evaluate the probability of $u_{t|t-1}$:

  1. We draw a random sample of size $I$ = 10,000 from the uniform distribution on (0,1) and from a normal distribution with zero mean and variance $\sigma_g^2$. We denote the observations of these two initial samples by $U_i^1$ and $x_i^1$, $i = 1, 2, \ldots, I$. Now, we use these two samples to generate a new sample that we denote $N^{1|0}$ as follows:

    $N_i^{1|0} = \begin{cases} 0, & \text{if } U_i^1 \le p, \\ x_i^1, & \text{if } U_i^1 > p, \end{cases} \quad i = 1, 2, \ldots, I,$

    where $1-p$ is the probability of a regime change. We use the sample $N^{1|0}$ to generate an additional sample that we denote $z^{1|0}$ as follows:

    $z_i^{1|0} = p z_0 + N_i^{1|0}, \quad i = 1, 2, \ldots, I.$

    Without loss of generality, we assume $z_0 = 0$.

  2. Next, we use the estimated value of the first element of the noise vector $\varepsilon_t$, which we denote $\hat{\varepsilon}_1$, to generate a random sample for the innovation $u_t$ as follows:

    $u_i^{1|0} = \hat{\varepsilon}_1 - (1-\rho)(1-\alpha) z_i^{1|0}, \quad i = 1, 2, \ldots, I.$

  3. We evaluate the relative weight of each observation $u_i^{1|0}$:

    $q_{u_i^{1|0}} = \dfrac{p(u_i^{1|0})}{\sum_{i=1}^{I} p(u_i^{1|0})}, \quad i = 1, 2, \ldots, I,$

    where the probability $p(u_i^{1|0})$ corresponds to a Gaussian distribution with zero mean and variance $\sigma_e^2/(1-\varphi^2)$.

  4. We update the initial sample $z^{1|0}$ by performing weighted sampling with replacement in accordance with the above-mentioned weights.

  5. We repeat the process described in steps 1 to 4 for each estimated component of the noise vector $\varepsilon_t$.

Step 2: Using the Law of Large Numbers:

$p(\varepsilon_t | \varepsilon_{t-1}) \approx \dfrac{1}{I} \sum_{i=1}^{I} p(u_{i,t} | u_{i,t-1}),$

where the conditional distribution of $u_{i,t}$ is $N(\varphi u_{i,t-1}, \sigma_e^2)$. Once the conditional probabilities for the monetary innovations are computed, we can evaluate the likelihood function as $p(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_T) = \prod_{t=1}^{T} \frac{1}{I} \sum_{i=1}^{I} p(u_{i,t} | u_{i,t-1})$, where $T$ denotes the sample size.

Step 3: We maximize the likelihood with respect to the parameters $\varphi$, $\sigma_e^2$, $\sigma_g^2$, and $p$.
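A minimal Python sketch of this sequential Monte Carlo likelihood evaluation (our variable names; the resampling scheme follows steps 1–4 above, with a simplified conditional weighting of the first observation):

```python
import numpy as np

def particle_loglik(eps_hat, theta, rho, alpha, I=10_000, seed=0):
    """Sequential Monte Carlo approximation to the log-likelihood;
    theta = (varphi, p, sig2_g, sig2_e)."""
    varphi, p, sig2_g, sig2_e = theta
    rng = np.random.default_rng(seed)
    c = (1 - rho) * (1 - alpha)
    sd_e, sd_g = np.sqrt(sig2_e), np.sqrt(sig2_g)
    z = np.zeros(I)                       # particles for z, with z_0 = 0
    u_prev = np.zeros(I)                  # particles for u_{t-1}
    loglik = 0.0
    for e_hat in eps_hat:
        # propagate z: keep with probability p, redraw otherwise
        redraw = rng.uniform(size=I) > p
        z = np.where(redraw, sd_g * rng.standard_normal(I), z)
        u = e_hat - c * z                 # implied transitory component
        # weight each particle by the density of the AR(1) innovation (Step 2)
        w = np.exp(-0.5 * ((u - varphi * u_prev) / sd_e) ** 2) / (sd_e * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())
        # resample with replacement according to the normalized weights (step 4)
        idx = rng.choice(I, size=I, p=w / w.sum())
        z, u_prev = z[idx], u[idx]
    return loglik
```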

References

Andolfatto, D., Hendry, S., & Moran, K. (2008). Are inflation expectations rational? Journal of Monetary Economics, 55, 406–422. doi:10.1016/j.jmoneco.2007.07.004

Aruoba, S. B., & Schorpheide, F. (2011). Sticky prices versus monetary frictions: An estimation of policy trade-offs. American Economic Journal: Macroeconomics, 3, 60–90. doi:10.1257/mac.3.1.60

Anzuini, A. (2021). The non-linear effects of the Fed asset purchases. Studies in Nonlinear Dynamics & Econometrics, 20200022. doi:10.1515/snde-2020-0022

Bernanke, B. (2010, January). Monetary policy and the housing bubble. Atlanta, Georgia: Speech at the Annual Meeting of the American Economic Association.

Bu, C., Rogers, J., & Wu, W. (2021). A unified measure of Fed monetary policy shocks. Journal of Monetary Economics, 118, 331–349. doi:10.1016/j.jmoneco.2020.11.002

Cogley, T., Primiceri, G. E., & Sargent, T. (2010). Inflation-gap persistence in the U.S. American Economic Journal: Macroeconomics, 2, 43–69. doi:10.3386/w13749

Fernández-Villaverde, J., & Rubio-Ramírez, J. F. (2007). Estimating macroeconomic models: A likelihood approach. The Review of Economic Studies, 74, 1059–1087. doi:10.3386/t0321

Fernández-Villaverde, J., & Rubio-Ramírez, J. F. (2005). Estimating dynamic equilibrium economies: Linear versus nonlinear likelihood. Journal of Applied Econometrics, 20, 891–910. doi:10.1002/jae.814

Fiedler, S., Gern, K. J., Jannsen, N., & Wolters, M. (2019). Growth prospects, the natural interest rate, and monetary policy. Economics: The Open-Access, Open-Assessment E-Journal, 13, 1–34. doi:10.5018/economics-ejournal.ja.2019-35

Grossi, M., & Tamborini, R. (2012). Stock prices and monetary policy: Re-examining the issue in a New Keynesian model with endogenous investment. Economics: The Open-Access, Open-Assessment E-Journal, 6, 2012–2014. doi:10.5018/economics-ejournal.ja.2012-14

Hamilton, J. D. (1994). Time series analysis. Princeton, NJ: Princeton University Press. doi:10.1515/9780691218632

Ireland, P. (2007). Changes in Federal Reserve’s inflation target: Causes and consequences. Journal of Money, Credit and Banking, 39, 1851–1882. doi:10.3386/w12492

Klingelhöfer, J., & Sun, R. (2018). China’s regime-switching monetary policy. Economic Modelling, 68, 32–40. doi:10.1016/j.econmod.2017.04.017

Kozicki, S., & Tinsley, P. (2005). Permanent and transitory policy shocks in an empirical macro model with asymmetric information. Journal of Economic Dynamics and Control, 29, 1985–2015. doi:10.1016/j.jedc.2005.06.003

Milani, F., & Treadwell, J. (2012). The effect of monetary policy “news” and “surprises”. Journal of Money, Credit and Banking, 44, 1667–1692. doi:10.1111/j.1538-4616.2012.00549.x

Mostaghimi, M. (2004). Monetary policy, composite leading economic indicators and predicting the 2001 recession. Journal of Forecasting, 23, 463–477. doi:10.1002/for.923

Orphanides, A. (2003). Historical monetary policy analysis and the Taylor rule. Journal of Monetary Economics, 50, 983–1022. doi:10.1016/S0304-3932(03)00065-5

Svensson, L. (2010). Inflation targeting after the financial crisis. Speech at the International Research Conference “Challenges to Central Banking in the Context of Financial Crisis”, Mumbai, 12 February 2010.

Taylor, J. B. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39, 195–214. doi:10.1016/0167-2231(93)90009-L

Received: 2021-09-23
Revised: 2021-12-01
Accepted: 2021-12-02
Published Online: 2021-12-31

© 2021 Juan Ángel Lafuente et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
