
Fiscal policy uncertainty and US output

  • Michal Ksawery Popiel
Published/Copyright: November 23, 2019

Abstract

The rise in US partisan conflict following the Great Recession led to a popular belief that uncertainty about fiscal policy was impeding output growth. I explore this hypothesis by nesting it in a standard structural vector autoregression (SVAR) model traditionally used for estimating fiscal multipliers. I augment the model with stochastic volatility (a measure of uncertainty) and allow that to interact with the endogenous variables. I consider various trend assumptions, subsamples and information sets and find that the evidence does not support this hypothesis. The results reveal that there is no systematic relationship between fiscal policy uncertainty and output. Moreover, a time-varying parameter version of the model shows that the lack of consistency across specifications is not driven by changes in the transmission of uncertainty shocks over time. Finally, I revisit Fernández-Villaverde, Guerrón-Quintana, Kuester, and Rubio-Ramírez (Fernández-Villaverde, J., P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez. 2015. “Fiscal Volatility Shocks and Economic Activity.” American Economic Review 105: 3352–3384) who find a significant negative relationship between fiscal policy uncertainty and output. I show that when their estimation is modified to incorporate the uncertainty around the estimate of uncertainty, their empirical result falls in line with the findings in this paper.

  1. Conflict of interest: Michal Popiel is currently an Associate at Groupe d’analyse, Ltée. Research for this article was undertaken when he was a student at Queen’s University. The views presented in this article are those of the author and do not reflect those of Groupe d’analyse, Ltée. Groupe d’analyse, Ltée provided no financial support for this article.

Appendix

A Data sources and definitions

Using NIPA tables from the BEA website, $y_t$ is GDP in line 1 of Table 1.1.5. For the consolidated government sector (federal, state and local), $g_t$ is Government Consumption Expenditures and Gross Investment in line 22 of Table 1.1.5, and $\tau_t$ is Government Current Tax Receipts (line 2 of Table 3.1) plus Contributions for Government Social Insurance (line 7 of Table 3.1) less Corporate income taxes from Federal Reserve Banks (line 8 of Table 3.2). For federal government data, $g_t$ is Federal Government Consumption Expenditures and Gross Investment in line 9 of Table 3.9.5, and $\tau_t$ is Federal Current Tax Receipts (line 2 of Table 3.2) plus Contributions for Government Social Insurance (line 11 of Table 3.2) less Corporate income taxes from Federal Reserve Banks (line 8 of Table 3.2). I deflate all of the above series by the GDP deflator in line 1 of Table 1.1.9 and divide by the population ages 16 and up obtained from FRED (series B230RC0Q173SBEA). The remaining data and sources are: the EPU index from Baker, Bloom, and Davis (2016), obtained from FRED (series USEPUINDXM) and converted from monthly frequency to quarterly averages; the categorical series EPU(t) and EPU(g), obtained from the Policy Uncertainty website[18] and converted from monthly frequency to quarterly averages; the PCI index from Azzimonti (2018), obtained from the Federal Reserve Bank of Philadelphia[19] and converted from monthly frequency to quarterly averages; the quarterly average of the federal funds rate from FRED (series DFF); the spending forecasts from the Survey of Professional Forecasters, obtained from the Federal Reserve Bank of Philadelphia website (series drfedgov3 and drslgov3); the defense news series obtained from Valerie Ramey’s website[20]; the series on excess returns from Fisher and Peters (2010); and the implicit tax rate from Leeper, Walker, and Yang (2013), kindly provided by Karel Mertens.
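The frequency conversion and per-capita deflation described above are mechanical; the following sketch illustrates them with pandas, where the file names and column labels are hypothetical placeholders rather than the actual data files used in the paper.

```python
import pandas as pd

# Monthly uncertainty indices -> quarterly averages (file name is hypothetical)
epu_m = pd.read_csv("USEPUINDXM.csv", index_col=0, parse_dates=True)
epu_q = epu_m.resample("QS").mean()

# Nominal NIPA series -> real, per-capita terms (column names are hypothetical)
nipa = pd.read_csv("nipa_quarterly.csv", index_col=0, parse_dates=True)
real_per_capita = (
    nipa[["gdp", "gov_spending", "net_taxes"]]
    .div(nipa["gdp_deflator"], axis=0)        # deflate by the GDP deflator
    .div(nipa["population_16plus"], axis=0)   # divide by population ages 16 and up
)
```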

B SVAR-SV-M model estimation

I use the Gibbs sampler to estimate the SVAR-SV-M model. For discussion of the algorithm and the choice of priors, it is more convenient to rewrite the model in the following way,

(19) $z_t = X_t \Psi + u_t$,
(20) $u_t = B H_t^{1/2} \varepsilon_t, \quad \varepsilon_t \sim N(0, I)$,
(21) $\log(h_t) = \mu + \rho \log(h_{t-1}) + \upsilon_t, \quad \upsilon_t \sim N(0, Q)$,

where $X_t = [D_t, z_{t-1}', \ldots, z_{t-p}', \log(h_t)', \ldots, \log(h_{t-p_\lambda})']$ and $\Psi$ is a $3(3p + 3(p_\lambda + 1) + c) \times 1$ vector of the corresponding coefficients, with $c = 1$ if only a constant is included and $c = 2$ if both a constant and a linear trend are included, i.e. the number of columns in $D_t$.
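For concreteness, a minimal sketch of how a regressor matrix with this structure could be assembled, assuming z, log_h and D are already stored as numpy arrays (all names are illustrative):

```python
import numpy as np

def build_regressors(z, log_h, D, p, p_lam):
    """Row t is X_t = [D_t, z_{t-1}', ..., z_{t-p}', log(h_t)', ..., log(h_{t-p_lam})'];
    z and log_h are T x 3 arrays, D is the T x c matrix of deterministic terms."""
    T = z.shape[0]
    t0 = max(p, p_lam)                               # first usable observation
    rows = []
    for t in range(t0, T):
        z_lags = [z[t - j] for j in range(1, p + 1)]            # own lags
        h_terms = [log_h[t - j] for j in range(0, p_lam + 1)]   # volatility-in-mean terms
        rows.append(np.concatenate([D[t]] + z_lags + h_terms))
    return np.vstack(rows)
```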

B.1 Prior distributions

The parameter space is broken up into four blocks: (1) VAR coefficients Ψ, (2) structural coefficients B, (3) innovation equation coefficients μ, ρ and Q and (4) stochastic volatilities ht.

VAR coefficients $\Psi$. I assume a normal distribution for the prior of $\Psi$. The mean and variance are calibrated based on a homoskedastic version of the model estimated with ordinary least squares using a training sample with data from 1947Q1–1959Q4.[21] To account for the stochastic volatility in the mean, the estimation takes two steps. First, I estimate a homoskedastic VAR using the training sample and then I take the squared residuals as an initial guess of the stochastic volatility and re-estimate the VAR with the logarithm of this series[22] (and its lags if necessary) as an additional explanatory variable. The OLS estimate of the VAR coefficients from this second step, $\hat{\Psi}_{OLS}$, and four times the estimate of its variance, $\hat{P}^{\Psi}_{OLS} = 4\hat{V}(\hat{\Psi}_{OLS})$, are the mean and variance for the prior distribution, $\Psi_0 \sim N(\hat{\Psi}_{OLS}, \hat{P}^{\Psi}_{OLS})$.
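A sketch of this two-step calibration, assuming the training-sample data are already arranged into a response matrix Y and a regressor matrix X_var of deterministic terms and VAR lags (function and variable names are illustrative):

```python
import numpy as np

def ols(Y, X):
    """Equation-by-equation OLS; returns coefficients and residuals."""
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    return beta, Y - X @ beta

def calibrate_prior(Y, X_var):
    # Step 1: homoskedastic VAR; squared residuals proxy for the volatility
    _, res = ols(Y, X_var)
    log_h_guess = np.log(res ** 2 + 1e-8)
    # Step 2: re-estimate with the (contemporaneous) log-volatility as a regressor
    X_aug = np.hstack([X_var, log_h_guess])
    psi_ols, res2 = ols(Y, X_aug)
    # Prior mean = OLS estimate; prior variance = 4 x OLS variance, equation by equation
    sigma2 = (res2 ** 2).mean(axis=0)
    XtX_inv = np.linalg.inv(X_aug.T @ X_aug)
    prior_means = [psi_ols[:, i] for i in range(Y.shape[1])]
    prior_vars = [4 * sigma2[i] * XtX_inv for i in range(Y.shape[1])]
    return prior_means, prior_vars
```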

Structural coefficients $B$. In order to apply convenient methods from Cogley and Sargent (2005) for simulating the posterior distribution, I follow Pereira and Lopes (2014) and modify the identification equations (4) to ensure that only exogenous variables appear on the right-hand side of each regression. The modification involves the equation for output. Letting $B = \tilde{A}^{-1}\tilde{B}$, the relationship between the reduced-form residuals and structural shocks is changed to

(22) $\tilde{A} u_t = \tilde{B} H_t^{1/2} \varepsilon_t$,
(23) $\begin{bmatrix} 1 & 0 & -\theta_y \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_t^{\tau} \\ u_t^{g} \\ u_t^{y} \end{bmatrix} = \begin{bmatrix} 1 & \theta_g & 0 \\ 0 & 1 & 0 \\ \zeta_{\tau} & \zeta_{g} & 1 \end{bmatrix} \begin{bmatrix} \sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau} \\ \sqrt{h_t^{g}}\,\varepsilon_t^{g} \\ \sqrt{h_t^{y}}\,\varepsilon_t^{y} \end{bmatrix}$.

There is a one-to-one mapping between the coefficients in (4) and the ones in (23), and the structural shocks estimated under the two identification schemes are identical for the fiscal variables and identical up to a scale factor for output. The mapping of the coefficients between the identification in (4) and (23) is given by

(24) $\tilde{A}^{-1}\tilde{B} = \begin{bmatrix} 1 & 0 & -\theta_y \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 & \theta_g & 0 \\ 0 & 1 & 0 \\ \zeta_{\tau} & \zeta_{g} & 1 \end{bmatrix} = \begin{bmatrix} 1 + \theta_y \zeta_{\tau} & \theta_g + \theta_y \zeta_{g} & \theta_y \\ 0 & 1 & 0 \\ \zeta_{\tau} & \zeta_{g} & 1 \end{bmatrix}$,
(25) $A^{-1}B = \begin{bmatrix} 1 & 0 & -\theta_y \\ 0 & 1 & 0 \\ -\xi_{\tau} & -\xi_{g} & 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 & \theta_g & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{1-\theta_y\xi_{\tau}} & \frac{\theta_g + \theta_y\xi_{g}}{1-\theta_y\xi_{\tau}} & \frac{\theta_y}{1-\theta_y\xi_{\tau}} \\ 0 & 1 & 0 \\ \frac{\xi_{\tau}}{1-\theta_y\xi_{\tau}} & \frac{\xi_{\tau}\theta_g + \xi_{g}}{1-\theta_y\xi_{\tau}} & \frac{1}{1-\theta_y\xi_{\tau}} \end{bmatrix}$,

so that,

(26) $\zeta_{\tau} = \dfrac{\xi_{\tau}}{1 - \theta_y \xi_{\tau}} \quad \text{and} \quad \zeta_{g} = \dfrac{\xi_{\tau}\theta_g + \xi_{g}}{1 - \theta_y \xi_{\tau}}$.
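As a check on the algebra, the mapping in (26) can be verified symbolically. The snippet below (using sympy) confirms that the two identification schemes imply the same impact matrix except in the output-shock column, which is rescaled by $1/(1-\theta_y\xi_{\tau})$.

```python
import sympy as sp

th_y, th_g, xi_t, xi_g = sp.symbols("theta_y theta_g xi_tau xi_g")

# Identification (4): A u_t = B H^{1/2} eps_t
A = sp.Matrix([[1, 0, -th_y], [0, 1, 0], [-xi_t, -xi_g, 1]])
B = sp.Matrix([[1, th_g, 0], [0, 1, 0], [0, 0, 1]])

# Modified identification (23), with zeta defined as in (26)
zeta_t = xi_t / (1 - th_y * xi_t)
zeta_g = (xi_t * th_g + xi_g) / (1 - th_y * xi_t)
A_tilde = sp.Matrix([[1, 0, -th_y], [0, 1, 0], [0, 0, 1]])
B_tilde = sp.Matrix([[1, th_g, 0], [0, 1, 0], [zeta_t, zeta_g, 1]])

diff = sp.simplify(A.inv() * B - A_tilde.inv() * B_tilde)
print(diff)  # zero except in the third column (the output shock is rescaled)
```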

The prior of the free parameters in $B$ is also normal. I set the mean to the maximum likelihood estimates of the coefficients in $B$ based on the training sample (since the identification scheme is non-triangular, a Cholesky decomposition cannot be applied). Let $\alpha = [\theta_g, \zeta_{\tau}, \zeta_{g}]'$ be the vector of free parameters in $B$. The prior is $\alpha_0 \sim N(\hat{\alpha}_{MLE}, \hat{P}^{\alpha})$, where, as in Benati and Mumtaz (2007) and Baumeister and Peersman (2013), $\hat{P}^{\alpha}$ is a matrix with ten times the absolute value of the elements in $\hat{\alpha}_{MLE}$ along the diagonal and zeros elsewhere. The scaling of the variance accounts for the relative magnitude of the coefficients, but is otherwise arbitrary.

Innovation equation coefficients $\mu$, $\rho$, $Q$. The priors for the intercepts and autoregressive coefficients in each of the stochastic volatilities are also normal. I set $\mu_0^i \sim N(0, 1)$ and $\rho_0^i \sim N(0.9, 0.1)$, for $i \in \{\tau, g, y\}$. These values reflect the traditional modeling conventions that specify the volatility process as a random walk (e.g. see Cogley and Sargent, 2005; Primiceri, 2005). The priors for the diagonal elements of $Q$ follow inverse Gamma distributions with scale parameter 0.5 and 1 degree of freedom, i.e. $Q_0^i \sim IG\left(\frac{0.5}{2}, \frac{1}{2}\right)$, where once again $i \in \{\tau, g, y\}$ and $Q_0^i$ is the diagonal element in $Q_0$ corresponding to the volatility equation for variable $i$. The scale parameter is larger than typical choices in the literature (for example, Jo (2014) and Baumeister and Peersman (2013) set it to $10^{-4}$) and reflects my knowledge of estimates of similar parameters in Born and Pfeifer (2014) and Fernández-Villaverde et al. (2015).

Stochastic volatility $h_t$. The prior for the logarithm of volatility at time $t = 0$ is normal with the mean set to the OLS estimate of the structural shock variances based on the training sample and the variance set to the identity matrix, i.e. $\log(h_0) \sim N(\log(\hat{h}_0^{OLS}), I_3)$, where $\hat{h}_0^{OLS} = [\hat{\sigma}_{\tau}^2, \hat{\sigma}_{g}^2, \hat{\sigma}_{y}^2]'$.

B.2 Initial values and estimation algorithm

I simulate the posterior distributions using the Gibbs sampler. I obtain the starting values for the algorithm by estimating a homoskedastic version of the model in the same way as for the prior distributions. I initialize the stochastic volatility using the squared residuals and set the VAR coefficients $\Psi$ and structural coefficients $\alpha$ to their OLS and MLE estimates. I set the remaining coefficients equal to the means of their prior distributions. I adopt the common notation of using a superscript $T$ to refer to the entire sample. For example, $h^T = \{h_t\}_{t=1}^{T}$ denotes the entire history of volatility states.
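The four steps described below can be summarized by the following loop skeleton; the initialize_* and draw_* functions are hypothetical placeholders for the conditional draws detailed in Steps 1–4, not functions from the paper's code.

```python
import numpy as np

def gibbs_sampler(data, priors, n_burn=5_000, n_keep=15_000):
    # Hypothetical placeholder: training-sample OLS/MLE starting values (see above)
    psi, alpha, mu, rho, Q, h = initialize_from_training_sample(data, priors)
    draws = []
    for it in range(n_burn + n_keep):
        alpha      = draw_structural_coefficients(data, psi, h, priors)    # Step 1
        psi        = draw_var_coefficients(data, alpha, h, priors)         # Step 2
        mu, rho, Q = draw_innovation_equation(h, priors)                   # Step 3
        h          = draw_volatility_states(data, psi, alpha, mu, rho, Q)  # Step 4
        if it >= n_burn:
            draws.append((psi, alpha, mu, rho, Q, h))
    return draws
```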

Step 1: drawing structural coefficients $\alpha$. Conditional on $\Psi$ and $h^T$, the reduced-form residuals are observable and related to the structural innovations by the following set of regression equations,

(27) $u_t^{g} = \sqrt{h_t^{g}}\,\varepsilon_t^{g}$,
(28) $u_t^{\tau} - \theta_y u_t^{y} = \theta_g \sqrt{h_t^{g}}\,\varepsilon_t^{g} + \sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau}$,
(29) $u_t^{y} = \zeta_{\tau} \sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau} + \zeta_{g} \sqrt{h_t^{g}}\,\varepsilon_t^{g} + \sqrt{h_t^{y}}\,\varepsilon_t^{y}$.

Due to the identity in (27) and since $\theta_y$ is known, (28) can be rewritten as

$\underbrace{(h_t^{\tau})^{-1/2}(u_t^{\tau} - \theta_y u_t^{y})}_{R} = \theta_g \underbrace{(h_t^{\tau})^{-1/2} u_t^{g}}_{M} + \varepsilon_t^{\tau},$

which is a regression equation with standard normal innovations. The posterior for $\theta_g$ is also normal. Letting $R$ be the left-hand-side variable and $M$ be the right-hand-side variable, the posterior, conditional on the data and other parameters in the model, is $\theta_g \sim N(\alpha_1, P_1)$, where $P_1 = \left((\hat{P}_i^{\alpha})^{-1} + M'M\right)^{-1}$, $\alpha_1 = P_1\left((\hat{P}_i^{\alpha})^{-1}\hat{\alpha}_{MLE}^{i} + M'R\right)$ and the index $i$ on the prior mean and variance selects the value corresponding to the revenue equation.
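A minimal sketch of this conjugate draw, assuming the transformed variables R and M defined above have been stacked over time into arrays (the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_regression_coef(R, M, prior_mean, prior_var):
    """One draw from N(alpha_1, P_1) for the regression R = M @ theta + eps,
    eps ~ N(0, 1), with prior theta ~ N(prior_mean, prior_var)."""
    M = M.reshape(-1, 1) if M.ndim == 1 else M
    prior_prec = np.linalg.inv(np.atleast_2d(prior_var))
    P1 = np.linalg.inv(prior_prec + M.T @ M)
    a1 = P1 @ (prior_prec @ np.atleast_1d(prior_mean) + M.T @ R)
    return rng.multivariate_normal(a1.ravel(), P1)
```

The same function covers the output equation in the step that follows, with M holding the two regressors and the prior moments taken from the corresponding block of $\hat{\alpha}_{MLE}$ and $\hat{P}^{\alpha}$.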

Once $\theta_g$ is drawn, the structural innovations $\varepsilon_t^{\tau}$ are identified and the same procedure is applied to (29), which is rewritten as

$(h_t^{y})^{-1/2} u_t^{y} = \zeta_{\tau} (h_t^{y})^{-1/2}\sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau} + \zeta_{g} (h_t^{y})^{-1/2} u_t^{g} + \varepsilon_t^{y}.$

The coefficients, conditional on the data and other parameters, are drawn from $[\zeta_{\tau}, \zeta_{g}]' \sim N(\alpha_1, P_1)$, where $\alpha_1$ and $P_1$ are defined in the same way as above but with values corresponding to the output equation.

Step 2: drawing VAR coefficients $\Psi$. Conditional on $h^T$ and $\alpha$, (19) is a linear regression with a known form of heteroskedasticity. This equation can be transformed into a state-space model,

(30) $z_t = X_t \Psi_t + B H_t^{1/2} \varepsilon_t$,
(31) $\Psi_t = \Psi_{t-1}$,

where (30) is the observation equation and (31) is the transition equation. The posterior distribution for the VAR coefficients $\Psi$ is normal with mean $\Psi_{T|T} = E(\Psi_T \mid h^T, X^T, \alpha)$ and variance $P_{T|T} = \mathrm{Cov}(\Psi_T \mid h^T, X^T, \alpha)$. I use the Carter and Kohn (1994) algorithm and obtain $\Psi_{T|T}$ and $P_{T|T}$ from the final iteration of a Kalman filter applied to (30) and (31).

Step 3: drawing innovation equation coefficients $\mu$, $\rho$, $Q$. Conditional on $h^T$, the coefficients $\mu$, $\rho$, $Q$ are drawn using the standard methods for linear regressions. For $\mu$ and $\rho$, the posterior distributions are normal with means and variances combining information from the likelihood and prior in the same way as for the structural coefficients $\alpha$ described above. The posterior for the variance of the error terms in each of the innovation equations has an inverse Gamma distribution, $Q^i \sim IG\left(\frac{\sum_{t=1}^{T}\hat{\nu}_{i,t}^2 + 0.5}{2}, \frac{T+1}{2}\right)$, where $\hat{\nu}_{i,t}$ are the residuals for the stochastic volatility equation corresponding to variable $i$ based on the posterior draws for $\mu_i$ and $\rho_i$. To prevent drawing explosive processes, I use rejection sampling to impose $|\rho_i| < 1$.
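A sketch of this step for a single volatility equation $i$, given its current log-volatility path; conditioning the $(\mu_i, \rho_i)$ draw on the current value of $Q_i$ is a simplification of the sketch, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_innovation_equation_i(log_h, Q_current, prior_mean=np.array([0.0, 0.9]),
                               prior_var=np.diag([1.0, 0.1]), q_scale=0.5, q_df=1.0):
    y, X = log_h[1:], np.column_stack([np.ones(len(log_h) - 1), log_h[:-1]])
    prior_prec = np.linalg.inv(prior_var)
    # Normal posterior for (mu, rho), combining prior and likelihood
    P1 = np.linalg.inv(prior_prec + X.T @ X / Q_current)
    b1 = P1 @ (prior_prec @ prior_mean + X.T @ y / Q_current)
    while True:                                   # rejection sampling: |rho| < 1
        mu, rho = rng.multivariate_normal(b1, P1)
        if abs(rho) < 1:
            break
    resid = y - mu - rho * log_h[:-1]
    # Inverse Gamma posterior: shape (T + 1)/2, scale (sum of squared residuals + 0.5)/2
    shape = (len(y) + q_df) / 2.0
    scale = (resid @ resid + q_scale) / 2.0
    Q_new = scale / rng.gamma(shape)              # reciprocal of a Gamma draw
    return mu, rho, Q_new
```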

Step 4: drawing volatility states $h^T$. Conditional on the VAR coefficients $\Psi$, the structural coefficients $\alpha$ and the innovation equation coefficients $\mu$, $\rho$, $Q$, (20) and (21) form a nonlinear state-space model. Following Cogley and Sargent (2005), I apply the date-by-date independence Metropolis-Hastings algorithm from Jacquier, Polson, and Rossi (1994). This is a univariate algorithm that I can apply to each equation separately due to the diagonality of $Q$, which makes the equations independent. For details on the sampling distributions and acceptance probability, see B.2.5 in Cogley and Sargent (2005).
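A minimal sketch of one such date-by-date update for an interior date $t$ of one equation, assuming the relevant orthogonalized residual $u_t$ (which has conditional variance $h_t$) and the neighbouring volatility states are given; endpoint dates require a slightly different conditional prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_h_t(h_old, h_prev, h_next, u_t, mu, rho, q):
    """Independence MH step for h_t in log h_t = mu + rho*log h_{t-1} + nu_t,
    nu_t ~ N(0, q), with measurement u_t ~ N(0, h_t)."""
    # Conditional prior of log h_t given its neighbours (product of two normals)
    m = (mu * (1 - rho) + rho * (np.log(h_prev) + np.log(h_next))) / (1 + rho ** 2)
    v = q / (1 + rho ** 2)
    h_prop = np.exp(m + np.sqrt(v) * rng.standard_normal())   # log-normal proposal
    # With the conditional prior as the proposal, the acceptance ratio reduces to
    # the ratio of conditional likelihoods of u_t
    def lik(h):
        return np.exp(-u_t ** 2 / (2 * h)) / np.sqrt(h)
    accept = min(1.0, lik(h_prop) / lik(h_old))
    return h_prop if rng.uniform() < accept else h_old
```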

Selecting the number of draws and monitoring convergence. Iteratively drawing from the conditional distributions from the four steps described above converges to a sample of coefficients drawn from their joint posterior distribution. To select the number of draws to discard and the number of draws to keep for inference I look at a few convergence diagnostics. A convenient method for determining the number of draws to discard is based on trace plots. These plots trace out consecutive draws for a given parameter over the simulation period. The draws often settle around a stationary mean after starting at some distant point in the parameter space. In my model, this convergence occurs within the first few thousand draws, which I discard.

The number of draws to keep for inference is a less obvious choice because it depends on the persistence of the Markov chain. If it takes the Gibbs sampler many draws to move from one area of the posterior to another (slow mixing), then the chain needs to run longer to obtain a sufficient number of effectively independent draws for inference relative to the situation in which the algorithm moves around the posterior quickly (fast mixing). The mixing speed for a given parameter can be determined by the autocorrelation function of its draws. A common practice to alleviate the problems associated with slow mixing is thinning: keeping only every nth draw where, for example, n can be set to ten. However, as argued by Link and Eaton (2012), thinning is inefficient and, given advancements in computing (particularly for storage), outdated. Therefore, I choose the number of draws by looking at the autocorrelation functions of a thinned sample and then use the entire sample for inference. A common criterion (e.g. see Primiceri, 2005; Pereira and Lopes, 2014) is to ensure that the 20th autocorrelation for each parameter is close to zero. For instance, if I want to have 1000 draws for inference and thinning the sample by 15 achieves this criterion, then I set the number of draws for inference to 15,000.
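A sketch of this diagnostic, computing the autocorrelation function of a thinned chain of draws for one parameter (the tolerance for "close to zero" is an illustrative choice):

```python
import numpy as np

def autocorrelation(x, max_lag=20):
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = x @ x
    return np.array([1.0] + [(x[:-k] @ x[k:]) / denom for k in range(1, max_lag + 1)])

def check_mixing(draws, thin=15, max_lag=20, tol=0.1):
    """Return the ACF of the thinned chain and whether the 20th autocorrelation
    is close enough to zero."""
    acf = autocorrelation(np.asarray(draws)[::thin], max_lag)
    return acf, bool(abs(acf[max_lag]) < tol)
```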

C TVP-SVAR-SV-M model estimation

The priors and estimation procedure for the TVP-SVAR-SV-M model closely parallel their counterparts for the fixed-parameter version of the model.

C.1 Prior distributions

The parameter space is broken up into the same four blocks as in Appendix B and I use all of the same priors described therein. The time-varying version of the model adds only two new parameters, which are the covariance matrices for the stochastic innovations in (13) and (14). As in Primiceri (2005), I assume that the prior distributions for these matrices are inverse Wishart. The covariance matrix $S$ is block diagonal and consists of two blocks: $S_1$ corresponds to the revenue equation and $S_2$ corresponds to the output equation. I assume the following prior distributions: $W \sim IW(k_W^2 \times T_0 \times \hat{P}^{\Psi}_{OLS},\, T_0)$, $S_1 \sim IW(k_S^2 \times \hat{P}_1^{\alpha},\, 10)$, $S_2 \sim IW(k_S^2 \times \hat{P}_2^{\alpha},\, 10)$, where $T_0$ is the number of observations in the training sample and, as before,

$\hat{P}^{\Psi}_{OLS} = 4\hat{V}(\hat{\Psi}_{OLS}), \quad \hat{P}_1^{\alpha} = 10\,|\hat{\theta}_g| \quad \text{and} \quad \hat{P}_2^{\alpha} = 10\begin{bmatrix} |\hat{\zeta}_{\tau}| & 0 \\ 0 & |\hat{\zeta}_{g}| \end{bmatrix},$

where the coefficient estimates come from an estimation of a homoskedastic VAR on the training sample as in Appendix B. The parameters $k_W$ and $k_S$ reflect the prior belief about the amount of time-variation in the coefficients. Following Baumeister and Peersman (2013) and Benati and Mumtaz (2007), I set $k_S^2 = k_W^2 = 10^{-4}$.
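A sketch of how these priors could be constructed with scipy, assuming the training-sample quantities are already available (argument names are illustrative):

```python
import numpy as np
from scipy.stats import invwishart

k_W2 = k_S2 = 1e-4

def build_iw_priors(P_psi_ols, theta_g_hat, zeta_tau_hat, zeta_g_hat, T0):
    """Frozen inverse Wishart priors for W, S_1 and S_2; the degrees of freedom
    must exceed the dimension of each scale matrix (see the discussion below)."""
    P1_alpha = np.atleast_2d(10 * abs(theta_g_hat))                 # revenue block
    P2_alpha = 10 * np.diag([abs(zeta_tau_hat), abs(zeta_g_hat)])   # output block
    prior_W  = invwishart(df=T0, scale=k_W2 * T0 * P_psi_ols)
    prior_S1 = invwishart(df=10, scale=k_S2 * P1_alpha)
    prior_S2 = invwishart(df=10, scale=k_S2 * P2_alpha)
    return prior_W, prior_S1, prior_S2
```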

As discussed by Primiceri (2005), the prior on the amount of time-variation in the model coefficients requires care because it has a strong influence on the estimation. On the one hand, I want the data to be able to reflect changes in coefficients arising from underlying changes in the transmission mechanism. On the other hand, if the time-variation is not restricted in some way then the coefficients will adjust at each time period to reduce the residuals to as close to zero as possible. Therefore, the choice of $k_S^2$ and $k_W^2$ reflects a balance between these forces. I find that when these values are loosened the model misbehaves.[23]

The same issue concerns the choice of the degrees of freedom. The minimum degrees of freedom required to have a proper prior distribution is equal to the number of coefficients plus one. For $W$ this value is $3(3p + 3(p_\lambda + 1) + c) + 1$, for $S_1$ it is 2 and for $S_2$ it is 3. I set the degrees of freedom to values higher than these minimums to avoid implausible behavior of the time-varying coefficients. Primiceri (2005) also sets the degrees of freedom for the prior of $W$ to the number of observations in the training sample for the same reasons.

C.2 Initial values and estimation algorithm

I initialize the Gibbs sampler in the same way as described in Appendix B.2 except that I use an estimate of $h^T$ from the SVAR-SV-M model to initialize the stochastic volatility rather than using the squared residuals from the homoskedastic regression. The estimation proceeds in the same steps as described in Appendix B.2 but with modifications to the procedure for drawing the VAR coefficients and structural equation coefficients to account for time-variation.

Drawing VAR coefficients $\Psi^T$ and $W$. Conditional on $h^T$, $\alpha^T$ and $W$, the VAR coefficients have a state-space representation where (10) is the observation equation and (13) is the transition equation. As in Cogley and Sargent (2005), the joint distribution for the entire history of the VAR coefficients $\Psi^T$ is given by

$p(\Psi^T \mid X^T, h^T, \alpha^T) = p(\Psi_T \mid X^T, h^T, \alpha^T) \prod_{t=1}^{T-1} p(\Psi_t \mid \Psi_{t+1}, X^T, h^T, \alpha^T),$

where all the densities on the right-hand-side are conditionally Gaussian and can be obtained by backward recursion from the terminal state of the forward Kalman filter.

I use the Carter and Kohn (1994) algorithm, set the initial values to the prior mean and covariance and iterate on the following equations

(32) $\begin{aligned} \Psi_{t|t-1} &= \Psi_{t-1|t-1}, & P_{t|t-1} &= P_{t-1|t-1} + W, \\ v_{t|t-1} &= z_t - X_t \Psi_{t|t-1}, & f_{t|t-1} &= X_t P_{t|t-1} X_t' + B_t H_t B_t', \\ K_t &= P_{t|t-1} X_t' f_{t|t-1}^{-1}, & \Psi_{t|t} &= \Psi_{t|t-1} + K_t v_{t|t-1}, \\ P_{t|t} &= P_{t|t-1} - K_t X_t P_{t|t-1}. \end{aligned}$

These equations are identical to the ones for the fixed-parameter version of the model except for a key difference in (32), where I augment the variance to account for the stochastic evolution of the coefficients. The final iteration of the forward recursion delivers the mean and variance for the posterior distribution of $\Psi_T$, i.e. $\Psi_T \sim N(\Psi_{T|T}, P_{T|T})$. The remaining history of $\Psi^T$ is drawn period-by-period through backward recursion where the conditional means and variances are updated to include information about $\Psi_{t+1}$ for drawing $\Psi_t$. For $t = 1, \ldots, T-1$, the posterior distribution of $\Psi_t$ is

$\Psi_t \sim N\left(\Psi_{t|t} + P_{t|t} P_{t+1|t}^{-1}(\Psi_{t+1} - \Psi_{t|t}),\; P_{t|t} - P_{t|t} P_{t+1|t}^{-1} P_{t|t}\right).$

The algorithm generates smoothed draws for $\Psi^T$ that account for information in the entire sample. Conditional on $\Psi^T$, the covariance for the innovation equation is drawn from an inverse Wishart distribution,

$W \sim IW\left(k_W^2 \times T_0 \times \hat{P}^{\Psi}_{OLS} + (\hat{\omega}^T)'(\hat{\omega}^T),\; T + T_0\right),$

where $\hat{\omega}_t = \Psi_t - \Psi_{t-1}$ and $\hat{\omega}^T$ is the entire history of these residuals.
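A condensed sketch of this forward-filtering, backward-sampling draw and the subsequent inverse Wishart draw for W, following the recursions in (32); the array layout and function names are illustrative rather than the paper's code.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

def carter_kohn(z, X, meas_cov, psi0, P0, W):
    """z: T x n observations, X: T x n x k regressors, meas_cov: T x n x n
    measurement error covariances B_t H_t B_t'; returns a draw of Psi_1..Psi_T."""
    T, n = z.shape
    k = X.shape[2]
    psi_f, P_f = np.zeros((T, k)), np.zeros((T, k, k))
    psi, P = psi0, P0
    for t in range(T):                              # forward Kalman filter, eq. (32)
        P_pred = P + W                              # random-walk transition
        v = z[t] - X[t] @ psi                       # prediction error
        f = X[t] @ P_pred @ X[t].T + meas_cov[t]
        K = P_pred @ X[t].T @ np.linalg.inv(f)
        psi, P = psi + K @ v, P_pred - K @ X[t] @ P_pred
        psi_f[t], P_f[t] = psi, P
    draws = np.zeros((T, k))                        # backward sampling
    draws[-1] = rng.multivariate_normal(psi_f[-1], P_f[-1])
    for t in range(T - 2, -1, -1):
        G = P_f[t] @ np.linalg.inv(P_f[t] + W)      # P_{t|t} P_{t+1|t}^{-1}
        mean = psi_f[t] + G @ (draws[t + 1] - psi_f[t])
        cov = P_f[t] - G @ P_f[t]
        draws[t] = rng.multivariate_normal(mean, cov)
    return draws

def draw_W(psi_draws, prior_scale, prior_df):
    """Inverse Wishart draw for W given the smoothed coefficient path; the
    posterior adds the number of coefficient innovations to the prior df."""
    omega = np.diff(psi_draws, axis=0)              # innovations Psi_t - Psi_{t-1}
    return invwishart.rvs(df=prior_df + omega.shape[0],
                          scale=prior_scale + omega.T @ omega)
```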

Drawing structural equation coefficients $\alpha^T$ and $S$. Drawing the structural equation coefficients follows the same procedure as described above for the VAR coefficients. Conditional on $\Psi^T$, $h^T$ and $S$, the reduced-form residuals and structural shocks form the following set of state-space models

(33) $\begin{bmatrix} 1 & 0 & -\theta_y \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_t^{\tau} \\ u_t^{g} \\ u_t^{y} \end{bmatrix} = \begin{bmatrix} 1 & \theta_{g,t} & 0 \\ 0 & 1 & 0 \\ \zeta_{\tau,t} & \zeta_{g,t} & 1 \end{bmatrix} \begin{bmatrix} \sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau} \\ \sqrt{h_t^{g}}\,\varepsilon_t^{g} \\ \sqrt{h_t^{y}}\,\varepsilon_t^{y} \end{bmatrix},$
(34) $\begin{bmatrix} \theta_{g,t} \\ \zeta_{\tau,t} \\ \zeta_{g,t} \end{bmatrix} = \begin{bmatrix} \theta_{g,t-1} \\ \zeta_{\tau,t-1} \\ \zeta_{g,t-1} \end{bmatrix} + e_t,$

with (33) specifying two measurement equations and (34) their corresponding transition equations.

With $u_t^{g} = \sqrt{h_t^{g}}\,\varepsilon_t^{g}$, the structural equation for tax revenue forms the following state-space model

$(h_t^{\tau})^{-1/2}(u_t^{\tau} - \theta_y u_t^{y}) = \theta_{g,t} (h_t^{\tau})^{-1/2} u_t^{g} + \varepsilon_t^{\tau}, \qquad \theta_{g,t} = \theta_{g,t-1} + e_{1,t}.$

I use the Carter and Kohn (1994) algorithm to run the Kalman filter forward and then take draws for $\theta_g^T$ using backward recursion from the terminal state. The draw for $\theta_g^T$ makes $\varepsilon_t^{\tau}$ observable in the output equation, which forms a second state-space model given by

$(h_t^{y})^{-1/2} u_t^{y} = \zeta_{\tau,t} (h_t^{y})^{-1/2}\sqrt{h_t^{\tau}}\,\varepsilon_t^{\tau} + \zeta_{g,t} (h_t^{y})^{-1/2} u_t^{g} + \varepsilon_t^{y},$
$\begin{bmatrix} \zeta_{\tau,t} \\ \zeta_{g,t} \end{bmatrix} = \begin{bmatrix} \zeta_{\tau,t-1} \\ \zeta_{g,t-1} \end{bmatrix} + e_{2,t}.$

Once again, I obtain draws of the coefficients $\zeta_{\tau}^T$ and $\zeta_{g}^T$ from their posterior distributions using the Kalman filter with backward recursion.

Conditional on $\alpha^T$, the covariance matrices for each of the innovation equations are drawn from inverse Wishart distributions,

$S_1 \sim IW\left(k_S^2 \times \hat{P}_1^{\alpha} + (\hat{e}_1^T)'(\hat{e}_1^T),\; T + 10\right), \qquad S_2 \sim IW\left(k_S^2 \times \hat{P}_2^{\alpha} + (\hat{e}_2^T)'(\hat{e}_2^T),\; T + 10\right),$

where

$\hat{e}_{1,t} = \theta_{g,t} - \theta_{g,t-1} \quad \text{and} \quad \hat{e}_{2,t} = \begin{bmatrix} \zeta_{\tau,t} \\ \zeta_{g,t} \end{bmatrix} - \begin{bmatrix} \zeta_{\tau,t-1} \\ \zeta_{g,t-1} \end{bmatrix},$

and the entire histories of these residuals are given by $\hat{e}_1^T$ and $\hat{e}_2^T$, respectively.

Drawing $\mu$, $\rho$, $Q$ and $h^T$ follows exactly the same steps as in Appendix B.2. I also monitor convergence and determine the number of draws using the methods described in that section.

References

Alloza, M. 2017. “Is Fiscal Policy More Effective in Uncertain Times or During Recessions?” Banco de España Working Paper 1730. https://doi.org/10.2139/ssrn.3024538.

Auerbach, A. J., and Y. Gorodnichenko. 2012. “Measuring the Output Responses to Fiscal Policy.” American Economic Journal: Economic Policy 4: 1–27. https://doi.org/10.3386/w16311.

Azzimonti, M. 2018. “Partisan Conflict and Private Investment.” Journal of Monetary Economics 93: 114–131. https://doi.org/10.3386/w21273.

Baker, S. R., N. Bloom, and S. J. Davis. 2016. “Measuring Economic Policy Uncertainty.” Quarterly Journal of Economics 131: 1593–1636. https://doi.org/10.3386/w21633.

Baumeister, C., and G. Peersman. 2013. “The Role of Time-Varying Price Elasticities in Accounting for Volatility Changes in the Crude Oil Market.” Journal of Applied Econometrics 28: 1087–1109. https://doi.org/10.1002/jae.2283.

Benati, L. 2013. “Economic Policy Uncertainty, the Great Recession, and the Great Depression.” University of Bern Working Paper.

Benati, L., and H. Mumtaz. 2007. “US Evolving Macroeconomic Dynamics: A Structural Investigation.” European Central Bank Working Paper 746. https://doi.org/10.2139/ssrn.978374.

Blanchard, O., and R. Perotti. 2002. “An Empirical Characterization of the Dynamic Effects of Changes in Government Spending and Taxes on Output.” Quarterly Journal of Economics 117: 1329–1368. https://doi.org/10.3386/w7269.

Born, B., and J. Pfeifer. 2014. “Policy Risk and the Business Cycle.” Journal of Monetary Economics 68: 68–85. https://doi.org/10.1016/j.jmoneco.2014.07.012.

Carriero, A., T. Clark, and M. Marcellino. 2018. “Measuring Uncertainty and its Impact on the Economy.” The Review of Economics and Statistics 100 (5): 799–815. https://doi.org/10.1162/rest_a_00693.

Carter, C. K., and R. Kohn. 1994. “On Gibbs Sampling for State Space Models.” Biometrika 81: 541–553. https://doi.org/10.1093/biomet/81.3.541.

Cogley, T., and T. J. Sargent. 2005. “Drifts and Volatilities: Monetary Policies and Outcomes in the Post WWII US.” Review of Economic Dynamics 8: 262–302. https://doi.org/10.1016/j.red.2004.10.009.

Davig, T., and E. M. Leeper. 2011. “Monetary–Fiscal Policy Interactions and Fiscal Stimulus.” European Economic Review 55: 211–227. https://doi.org/10.3386/w15133.

Fernández-Villaverde, J., P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez. 2015. “Fiscal Volatility Shocks and Economic Activity.” American Economic Review 105: 3352–3384. https://doi.org/10.1257/aer.20121236.

Fisher, J. D., and R. Peters. 2010. “Using Stock Returns to Identify Government Spending Shocks.” The Economic Journal 120: 414–436. https://doi.org/10.1111/j.1468-0297.2010.02355.x.

Jacquier, E., N. G. Polson, and P. E. Rossi. 1994. “Bayesian Analysis of Stochastic Volatility Models.” Journal of Business & Economic Statistics 12: 371–389. https://doi.org/10.1080/07350015.1994.10524553.

Jo, S. 2014. “The Effects of Oil Price Uncertainty on Global Real Economic Activity.” Journal of Money, Credit and Banking 46: 1113–1135. https://doi.org/10.1111/jmcb.12135.

Johannsen, B. K. 2014. “When are the Effects of Fiscal Policy Uncertainty Large?” Board of Governors of the Federal Reserve System (US) Working Paper No. 2014-40. https://doi.org/10.17016/FEDS.2014.40.

Kirchner, M., J. Cimadomo, and S. Hauptmeier. 2010. “Transmission of Government Spending Shocks in the Euro Area: Time Variation and Driving Forces.” European Central Bank Working Paper 1219. https://doi.org/10.2139/ssrn.1551801.

Leeper, E. M., A. W. Richter, and T. B. Walker. 2012. “Quantitative Effects of Fiscal Foresight.” American Economic Journal: Economic Policy 4: 115–144. https://doi.org/10.3386/w16363.

Leeper, E. M., T. B. Walker, and S.-C. S. Yang. 2013. “Fiscal Foresight and Information Flows.” Econometrica 81: 1115–1145. https://doi.org/10.3386/w14630.

Link, W. A., and M. J. Eaton. 2012. “On Thinning of Chains in MCMC.” Methods in Ecology and Evolution 3: 112–115. https://doi.org/10.1111/j.2041-210X.2011.00131.x.

Mertens, K., and M. O. Ravn. 2014. “A Reconciliation of SVAR and Narrative Estimates of Tax Multipliers.” Journal of Monetary Economics 68: S1–S19. https://doi.org/10.1016/j.jmoneco.2013.04.004.

Mumtaz, H., and F. Zanetti. 2013. “The Impact of the Volatility of Monetary Policy Shocks.” Journal of Money, Credit and Banking 45: 535–558. https://doi.org/10.1111/jmcb.12015.

Mumtaz, H., and K. Theodoridis. 2016. “The Changing Transmission of Uncertainty Shocks in the US: An Empirical Analysis.” Journal of Business & Economic Statistics 36: 239–252. https://doi.org/10.1080/07350015.2016.1147357.

Mumtaz, H., and P. Surico. 2018. “Policy Uncertainty and Aggregate Fluctuations.” Journal of Applied Econometrics 33 (3): 319–331. https://doi.org/10.1002/jae.2613.

Owyang, M. T., V. A. Ramey, and S. Zubairy. 2013. “Are Government Spending Multipliers Greater During Periods of Slack? Evidence from Twentieth-Century Historical Data.” American Economic Review 103: 129–134. https://doi.org/10.1257/aer.103.3.129.

Pereira, M. C., and A. S. Lopes. 2014. “Time-Varying Fiscal Policy in the US.” Studies in Nonlinear Dynamics & Econometrics 18: 157–184. https://doi.org/10.1515/snde-2012-0062.

Primiceri, G. E. 2005. “Time Varying Structural Vector Autoregressions and Monetary Policy.” Review of Economic Studies 72: 821–852. https://doi.org/10.1111/j.1467-937X.2005.00353.x.

Ramey, V. A. 2011. “Identifying Government Spending Shocks: It’s all in the Timing.” Quarterly Journal of Economics 126: 1–50. https://doi.org/10.3386/w15464.

Ramey, V. A. 2015. “Macroeconomic Shocks and Their Propagation.” Handbook of Macroeconomics 2: 71–162. https://doi.org/10.3386/w21978.

Romer, C. D., and D. H. Romer. 2010. “The Macroeconomic Effects of Tax Changes: Estimates Based on a New Measure of Fiscal Shocks.” American Economic Review 100: 763–801. https://doi.org/10.3386/w13264.

Rossi, B., and S. Zubairy. 2011. “What is the Importance of Monetary and Fiscal Shocks in Explaining US Macroeconomic Fluctuations?” Journal of Money, Credit and Banking 43: 1247–1270. https://doi.org/10.2139/ssrn.1747192.

Stock, J. H., and M. W. Watson. 2012. “Disentangling the Channels of the 2007-2009 Recession.” NBER Working Paper 18094. https://doi.org/10.3386/w18094.


Supplementary Material

The online version of this article offers supplementary material (DOI: https://doi.org/10.1515/snde-2018-0024).


Published Online: 2019-11-23

©2020 Walter de Gruyter GmbH, Berlin/Boston
