
Constrained Hamiltonian Monte Carlo in BEKK GARCH with Targeting

  • Martin Burda
Published/Copyright: July 3, 2013

Abstract

The GARCH class of models for dynamic conditional covariances trades off flexibility with parameter parsimony. The unrestricted BEKK GARCH dominates its restricted scalar and diagonal versions in terms of model fit, but its parameter dimensionality increases quickly with the number of variables. Covariance targeting has been proposed as a way of reducing parameter dimensionality, but for the BEKK with targeting the imposition of positive definiteness on the conditional covariance matrices presents a significant challenge. In this article, we suggest an approach based on Constrained Hamiltonian Monte Carlo that can deal effectively both with the nonlinear constraints resulting from BEKK targeting and the complicated nature of the BEKK likelihood in relatively high dimensions. We perform a model comparison of the full BEKK and the BEKK with targeting, indicating that the latter dominates the former in terms of marginal likelihood. Thus, we show that the BEKK with targeting presents an effective way of reducing parameter dimensionality without compromising the model fit, unlike the scalar or diagonal BEKK. The model comparison is conducted in the context of an application concerning a multivariate dynamic volatility analysis of a foreign exchange rate returns portfolio.

JEL Classification: C11; C15; C32; C63

1 Introduction

Interest in modeling volatility dynamics of time-series data has been growing in many areas of empirical economics and finance. The management of financial asset portfolios involves estimation and forecasting of financial asset returns dynamics. Analysis of the dynamic evolution of variances and covariances of asset portfolios is used for a number of purposes, such as forecasting VaR thresholds to determine compliance with the Basel Accord.

In a recent article by Caporin and McAleer (2012), henceforth CM, the BEKK model (Engle and Kroner 1995) and the dynamic conditional correlation (DCC) model (Engle 2002) are singled out as the “two most widely used models of conditional covariances and correlations” in the class of multivariate GARCH models. CM provide a deep and insightful theoretical analysis of the similarities and differences between the BEKK and DCC. CM note that DCC appears to be preferred to BEKK empirically because of the perceived curse of dimensionality associated with the latter. In an important insight, CM argue that this is a misleading interpretation of the suitability of the two models to be used in practice, as the full unconstrained comparable model versions are both affected by the curse of dimensionality alike. CM argue that either model is able to do “virtually everything the other can do” and hence their usage should depend on the object of interest: BEKK is well suited for conditional covariances while DCC for conditional correlations.

For either model type, parameter parsimony can be achieved by imposing parametric restrictions on the full model version (Ding and Engle 2001; Engle, Shephard, and Sheppard 2009; Billio and Caporin 2009). However, doing so trades off against model specification issues. For example, Burda and Maheu (2013) show that restricted versions of the BEKK, the scalar and diagonal models, are clearly dominated by the full BEKK version in terms of model fit. Indeed, such parameter restrictions generally operate on the parameters driving the model dynamics.

An alternative way of reducing dimensionality is via so-called targeting, which imposes a structure on the model intercept based on sample information. Targeting yields model constraints in a structured way, so that the implied long-run solution for the covariance (or correlation) is independent of the parameters driving the model dynamics. Hence, targeting differs from merely setting a subset of parameters to zero as in the scalar and diagonal models. Variance targeting estimation was originally proposed by Engle and Mezrich (1996) as a two-step estimation procedure, whereby the unconditional covariance matrix of the observed process is estimated in the first step, and the remaining parameters are estimated in the second step. However, Aielli (2011) shows that the advantage of targeting as a tool for controlling the curse of dimensionality does not hold for DCC models, as the sample correlation is an inconsistent estimator of the long-run correlation matrix. CM also argue that targeting can be rigorously applied only to BEKK and that its use in DCC models is inappropriate.

At the same time, enforcing positive definiteness of the conditional covariance matrices in the BEKK with targeting implies a set of model constraints that are nonlinear in parameters and, according to CM, “extremely complicated, except for the scalar case” (p. 742). Furthermore, the model fit of the targeted BEKK relative to the full BEKK remains an open empirical question: does parameter dimensionality reduction via targeting trade off with a model fit that is substantially worse, as in the case of the scalar and diagonal BEKK? Perhaps due to these difficulties and unknowns the BEKK with targeting has not been used extensively in practice, despite its potential benefits.

In this article, we suggest an approach to estimation and inference of the BEKK model with targeting based on Constrained Hamiltonian Monte Carlo (CHMC) (Neal 2011), a recent statistical method that deals very effectively with both complicated nonlinear model constraints and computationally costly log-likelihoods in relatively high-dimensional problems. It is particularly useful in cases where it is difficult to accurately approximate the log-likelihood surface around the current parameter draw in real time, as is necessary for obtaining sufficiently high acceptance probabilities in importance sampling (IS) methods, such as for recursive models in finance. Complicated constraints pose a further challenge for IS-based methods. In such situations, one would typically resort to random walk (RW) style sampling that is fast to run and does not require knowledge of the properties of the underlying log-likelihood. Jin and Maheu (2013) used the RW sampler for covariance targeting in a class of dynamic component models. However, RW mechanisms can lead to very slow exploration of the parameter space with high autocorrelations among draws, which may require a prohibitively long Markov chain to achieve satisfactory mixing and convergence. CHMC combines sampling that is computationally about as cheap as RW with far superior parameter space exploration.

CHMC falls within the general class of Markov chain Monte Carlo (MCMC) methods, which can be used under both the Bayesian and the classical paradigm, applied to posterior densities or directly to model likelihoods without prior information (Chernozhukov and Hong 2003). To the best of our knowledge, the constrained version of Hamiltonian (or Hybrid) Monte Carlo (HMC) has not yet been used in the economics literature. We elaborate in detail on the CHMC procedure, its application to the BEKK with targeting, and its applicability to a wider class of problems of a similar nature. We also contrast CHMC performance with RW sampling.

Using CHMC, we perform a model comparison of the BEKK with targeting and the full BEKK, providing evidence that favors the BEKK with targeting in terms of marginal likelihood. Thus, we show that the BEKK with targeting presents an effective way of reducing parameter dimensionality without compromising the model fit, unlike the scalar or diagonal BEKK. The model comparison is conducted in the context of an application concerning a multivariate dynamic volatility analysis of a foreign exchange rate returns portfolio. We believe that the suggested estimation approach and comparison evidence, along with the provided computer code for implementation, will encourage wider use of the BEKK model with targeting as a valuable tool for analysts and practitioners.

The remainder of the article is organized as follows. Section 2 describes the BEKK model and the targeting constraints. Section 3 elaborates on CHMC. Section 4 introduces the application and reports on model comparison results. Section 5 concludes.

2 BEKK with targeting

The BEKK class of multivariate GARCH models was introduced by Engle and Kroner (1995). Their specification was sufficiently general to allow the inclusion of special factor structures (Bauwens, Laurent, and Rombouts 2006). Here, we focus on the BEKK with all orders set to one, which is the predominantly used version in applications (Silvennoinen and Teräsvirta 2009). Let y_t, for t = 1, …, T, be an N × 1 vector of asset returns and denote the information set at time t − 1 by F_{t−1}. Under the BEKK model, the returns follow

[1] y_t = H_t^{1/2} z_t,   z_t ~ i.i.d. N(0, I_N)
[2] H_t = CC' + A y_{t−1} y'_{t−1} A' + B H_{t−1} B'

H_t is a positive definite conditional covariance matrix of y_t given information at time t − 1, C is a lower triangular N × N matrix, and A and B are N × N matrices. We maintain a Gaussian assumption and a zero intercept for simplicity. The total number of parameters in this model is N(N + 1)/2 + 2N², collected into the vector θ.
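As an illustrative sketch only (not the author's code; the function names and the NumPy implementation are ours), the BEKK(1,1) recursion H_t = CC' + A y_{t−1} y'_{t−1} A' + B H_{t−1} B' and the parameter count N(N + 1)/2 + 2N² can be written as:

```python
import numpy as np

def bekk_param_count(N):
    """Free parameters of the full BEKK(1,1): lower-triangular C plus full A and B."""
    return N * (N + 1) // 2 + 2 * N * N

def bekk_recursion(y, C, A, B):
    """Conditional covariances H_t = C C' + A y_{t-1} y_{t-1}' A' + B H_{t-1} B'.

    y: (T, N) array of returns; H_0 is set to the sample covariance,
    mirroring the initialization described in Section 4.
    """
    T, N = y.shape
    H = np.empty((T, N, N))
    H[0] = np.cov(y, rowvar=False)
    CC = C @ C.T
    for t in range(1, T):
        yy = np.outer(y[t - 1], y[t - 1])
        H[t] = CC + A @ yy @ A.T + B @ H[t - 1] @ B.T
    return H

print([bekk_param_count(N) for N in (2, 3, 4)])  # prints [11, 24, 42]
```

The counts match the dimensionalities reported for the full BEKK in the application of Section 4.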

The BEKK model with targeting is defined as

[3] H_t = (H̄ − A H̄ A' − B H̄ B') + A y_{t−1} y'_{t−1} A' + B H_{t−1} B'

where H̄ = T^{−1} Σ_{t=1}^{T} y_t y'_t is the sample covariance target, eliminating the N(N + 1)/2 intercept parameters in C.

There are two types of constraints on the model that need to be taken into account. The first type, covariance stationarity constraints, is generally simple to impose, as shown by Engle and Kroner (1995). The second type, positive definiteness of the conditional covariance matrices, is satisfied by construction in the full BEKK version but becomes difficult to deal with in the targeted case. CM note that one way in which positive definiteness of H_t can be guaranteed is by imposing positive definiteness on the targeted intercept H̄ − A H̄ A' − B H̄ B' in eq. [3], but note that such a constraint is complicated. Indeed, Figure 1 shows the effect of this constraint on the parameter space with log-likelihood contour plots over pairs of covariance parameters (left and right panels). In the white areas not covered by the contour plot, the constraint is violated. The QMLE argmax is on the parameter space boundary, which complicates inference. Implementation of this constraint is relatively cheap, since only one constraint check is required per likelihood evaluation, but its consequences are unfavorable for further QMLE analysis. An alternative way of imposing positive definiteness of H_t, adopted here, is to check that all its eigenvalues are positive for each t and take corrective action in the sampling procedure if the constraint is violated. The effect of this constraint on the parameter space is shown in Figure 2, which plots the log-likelihood around the mode over the same parameter space as in Figure 1. In our application, we did not detect a parameter without a well-defined mode due to this constraint.
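The eigenvalue check can be made concrete with a minimal sketch (our own illustration, assuming the targeted intercept takes the form H̄ − A H̄ A' − B H̄ B' with H̄ the sample covariance; the function name is hypothetical):

```python
import numpy as np

def targeted_bekk_H(y, A, B):
    """BEKK-with-targeting recursion; returns (H, ok) where ok flags
    whether every H_t passed the eigenvalue positivity check."""
    T, N = y.shape
    Hbar = np.cov(y, rowvar=False)          # sample covariance target
    intercept = Hbar - A @ Hbar @ A.T - B @ Hbar @ B.T
    H = np.empty((T, N, N))
    H[0] = Hbar
    ok = True
    for t in range(1, T):
        yy = np.outer(y[t - 1], y[t - 1])
        H[t] = intercept + A @ yy @ A.T + B @ H[t - 1] @ B.T
        if np.min(np.linalg.eigvalsh(H[t])) <= 0.0:
            ok = False                      # constraint violated at time t
    return H, ok
```

In a sampler, a violation (ok = False) would trigger the corrective action described in the text rather than a plain rejection.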

Figure 1: Targeting constraints on the intercepts.

Figure 2: Targeting constraints on H_t.

Nonetheless, an occurrence of an argmax on or very close to the parameter space boundary cannot be ruled out a priori in general. Furthermore, the log-likelihood surface can be highly asymmetric, as illustrated in Figure 3 by a contour plot over a pair of the model parameters. The asymmetry appears more severe in higher dimensions. Due to both of these features, the accuracy of QMLE asymptotic inference, based on a Fisher information matrix that is symmetric by construction, is questionable. MCMC-based procedures, including CHMC presented in the next section, provide an alternative estimation and inference approach for either constraint case, unfettered by the aforementioned likelihood irregularities.

Figure 3: Likelihood kernel skewness.

3 Constrained Hamiltonian Monte Carlo

The original HMC has its roots in the physics literature where it was introduced as a fast method for simulating molecular dynamics (Duane et al. 1987). Since then, it has become popular in a number of application areas including statistical physics (Akhmatskaya, Bou-Rabee, and Reich 2009; Gupta, Kilcup, and Sharpe 1988) and computational chemistry (Tuckerman et al. 1993), and as a generic tool for Bayesian statistical inference (Neal 1993, 2011; Ishwaran 1999; Liu 2001; Beskos et al. 2010). HMC and its constrained version, CHMC, apply to a general class of models, nesting the BEKK, that is parametrized by a Euclidean vector θ ∈ Θ for which all information in the sample is contained in the model likelihood, assumed known up to an integrating constant. Formally, this class of models can be characterized by a family of probability measures P on a measurable space (Θ, B(Θ)), where B(Θ) is the Borel σ-algebra on Θ. The purpose of MCMC methods is to formulate a Markov chain on the parameter space Θ for which, under certain conditions, P is the invariant (also called "equilibrium" or "long-run") distribution. The Markov chain of draws of θ can be used to construct simulation-based estimates of the required integrals, and of functionals of θ that are expressed as integrals. These functionals include objects of interest for inference on θ, such as quantiles of P.

The Markov chain sampling mechanism specifies a method for generating a sequence of random variables θ^(1), θ^(2), … starting from an initial point θ^(0), in the form of conditional distributions for the draws θ^(s) given θ^(s−1). Under relatively weak regularity conditions (Robert and Casella 2004), the average of the Markov chain converges to the expectation under the stationary distribution:

S^{−1} Σ_{s=1}^{S} h(θ^(s)) → E_P[h(θ)]   as S → ∞.

A Markov chain with this property is called ergodic. As a means of approximation, we rely on a large but finite number of draws S, which the analyst has the discretion to select in applications.
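A toy illustration of ergodicity (our own example, not from the article): a Gaussian AR(1) chain x_{t+1} = φ x_t + e_t has invariant distribution N(0, 1/(1 − φ²)), and chain averages of functionals converge to the corresponding stationary expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, S = 0.9, 200_000
x = np.empty(S)
x[0] = 0.0
for s in range(1, S):
    # AR(1) Markov chain; invariant distribution is N(0, 1/(1 - phi^2))
    x[s] = phi * x[s - 1] + rng.standard_normal()

print(x.mean())          # ergodic average -> E[x] = 0
print((x ** 2).mean())   # -> E[x^2] = 1/(1 - phi^2), about 5.26 here
```

The high persistence (φ = 0.9) illustrates the cost of autocorrelation: the averages converge, but more slowly than for independent draws.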

P can be obtained from a given (economic) model and its corresponding likelihood. Typically, P has a complicated form which precludes direct sampling, in which case the Metropolis–Hastings (M–H) principle is usually employed for sampling from P; see Chib and Greenberg (1995) for a detailed overview. Suppose we have a proposal-generating density q(θ* | θ), where θ* is a proposed state given the current state θ of the Markov chain. The M–H principle stipulates that θ* be accepted as the next state with the acceptance probability

[4] α(θ, θ*) = min{1, [p(θ*) q(θ | θ*)] / [p(θ) q(θ* | θ)]};

otherwise the chain remains at θ. Then, the Markov chain satisfies the so-called detailed balance condition

p(θ) K(θ, θ*) = p(θ*) K(θ*, θ),

which is sufficient for ergodicity; here K denotes the resulting transition kernel, and q(θ | θ*) is the probability of the move θ* → θ if the dynamics of the proposal-generating mechanism were to be reversed. The proposal-generating density q can be chosen to be easy to sample from, even though P may be difficult or expensive to sample from. The popular Gibbs sampler arises as a special case when the M–H sampler is factored into conditional densities. The proposal draws from q in eq. [4] are generated in one step.
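The one-step M–H mechanism can be sketched as follows (an illustrative random-walk sampler of our own, not the article's implementation; with a symmetric Gaussian proposal, q cancels and the acceptance probability in eq. [4] reduces to min{1, p(θ*)/p(θ)}):

```python
import numpy as np

def rw_metropolis(log_kernel, theta0, steps, scale, rng):
    """Random-walk Metropolis with a symmetric Gaussian proposal.

    Because q(theta*|theta) = q(theta|theta*), the M-H ratio reduces
    to the kernel ratio min{1, p(theta*)/p(theta)}.
    """
    theta = np.asarray(theta0, dtype=float)
    lp = log_kernel(theta)
    draws = np.empty((steps, theta.size))
    for s in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_kernel(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min{1, ratio}
            theta, lp = prop, lp_prop
        draws[s] = theta                           # on rejection, repeat current state
    return draws
```

For a standard normal target, `rw_metropolis(lambda t: -0.5 * float(t @ t), np.zeros(1), 20000, 1.0, rng)` produces draws whose ergodic mean and variance approach 0 and 1.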

In contrast, HMC uses a whole sequence of proposal steps whereby the last step in the sequence becomes the proposal draw. This facilitates efficient exploration of the parameter space with the resulting Markov chain. The proposal sequence is constructed using difference equations of the Hamiltonian laws of motion, yielding a high acceptance probability even for distant proposals. The parameter space is augmented with a set of independent auxiliary stochastic parameters w that fulfill a supplementary role in the proposal algorithm, facilitating the directional guidance of the proposal mechanism. The proposal sequence takes the form (θ_k, w_k), k = 0, …, L, starting from the current state (θ_0, w_0) and yielding a proposal (θ*, w*) = (θ_L, w_L). The detailed balance is then satisfied using the acceptance probability

[5] α((θ, w), (θ*, w*)) = min{1, [p(θ*, w*) q((θ, w) | (θ*, w*))] / [p(θ, w) q((θ*, w*) | (θ, w))]}

In CHMC, constraints are incorporated into the HMC proposal mechanism via “hard walls” representing a barrier against which the proposal sequence, simulating a particle movement, bounces off elastically. Constraints thus do not provide grounds for proposal rejection, eliminating any associated redundancies. Heuristically, the constraint is checked at each step of the proposal sequence, and if it is violated then the trajectory of the sequence is reflected off the hard wall posed by the constraint. This facilitates efficient exploration of the parameter space even in the presence of highly complex parameter constraints. We further synthesize the technical principles of CHMC in a generally accessible form in Appendix A. In the next section we apply CHMC to the task of model comparison of the full BEKK and the BEKK with targeting.

4 Application and model comparison

The model comparison is performed in the context of an empirical application with data on percent log-differences of foreign exchange spot rates for AUD/USD, GBP/USD, CAD/USD, and EUR/USD, from 2000/01/04 to 2011/12/30, a total of T = 3,009 observations. We consider cases with two, three, and four variables (N = 2, 3, 4), with data used in the order presented. The associated parameter dimensionality for the full BEKK is 11, 24, and 42, and for the BEKK with targeting 8, 18, and 32, respectively. A time-series plot of the four series is shown in Figure 4, and summary statistics are provided in Table 1 in Appendix B. The sample mean for all series is close to 0, and skewness is small. The sample correlations indicate that all series tend to move together.

For identification, the diagonal elements of C and the first element of both A and B are restricted to be positive (Engle and Kroner 1995). All priors, other than the model identification conditions, are set to be diffuse. In the implementation, we utilize the analytical expressions for the gradient of the BEKK log-likelihood from Hafner and Herwartz (2008), which in the case of the BEKK with targeting are subject to the targeting constraints [3]. Although numerical estimates of the gradients could also be used in forming the proposals, evaluation of analytical expressions increases the implementation speed. The initial H_0 in the GARCH recursion is set to the sample covariance. Starting from the modal parameter values, we collect a total of 50,000 posterior draws for inference, with a burn-in section of 5,000 draws. The length of the proposal sequence was set to L = 50, 30, 20 for the cases N = 2, 3, 4, respectively, and the stepsize tuned to achieve acceptance rates close to 0.8.

Table 2 reports model comparison results for the full BEKK and the BEKK with targeting in terms of marginal log-likelihood (Gelfand and Dey 1994; Geweke 2005). Since all parameters are integrated out, the marginal likelihood does not explicitly depend on the dimensionality of the parameter space and hence is suitable for comparison of models with different dimensions. In all cases N = 2, 3, 4, the evidence strongly favors the BEKK model with targeting over the full BEKK model, and this effect increases with the number of variables N. The evolution of the mean of the conditional covariances over time is presented in Figure 5 for the full BEKK and in Figure 6 for the BEKK with targeting, in four variables. Both plots are virtually identical when examined closely, indicating a minimal loss of model prediction capability as a result of targeting.
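The marginal likelihood computation can be illustrated on a toy conjugate model (entirely our own example, unrelated to the BEKK application): for y_i ~ N(μ, 1) with prior μ ~ N(0, τ²), the Gelfand–Dey identity 1/m(y) = E_post[g(μ)/(L(μ)p(μ))], with a Gaussian tuning density g, can be checked against the closed-form marginal likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=30)       # toy data
n, tau2 = len(y), 4.0                   # hypothetical prior variance
sy, syy = y.sum(), (y ** 2).sum()

# Conjugate posterior for mu under y_i ~ N(mu, 1), mu ~ N(0, tau2)
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * sy

def log_lik(mu):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * (syy - 2 * mu * sy + n * mu ** 2)

def log_prior(mu):
    return -0.5 * np.log(2 * np.pi * tau2) - 0.5 * mu ** 2 / tau2

def log_g(mu):
    # Gelfand-Dey tuning density: Gaussian matched to the posterior
    return -0.5 * np.log(2 * np.pi * post_var) - 0.5 * (mu - post_mean) ** 2 / post_var

draws = rng.normal(post_mean, post_var ** 0.5, size=50_000)   # exact posterior draws
# 1/m(y) = E_post[ g(mu) / (L(mu) p(mu)) ], averaged in logs for stability
vals = log_g(draws) - log_lik(draws) - log_prior(draws)
a = vals.max()
log_marglik = -(a + np.log(np.mean(np.exp(vals - a))))
print(log_marglik)
```

In this conjugate case g coincides with the exact posterior, so the estimator is exact; in realistic settings such as the BEKK, g only approximates the posterior and the average is taken over MCMC draws.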

Markov chain convergence diagnostics, obtained using the R package coda (Plummer et al. 2012), for both models and all variable dimensions are reported in Table 3, confirming the validity of model inference and comparison. All chains have converged within the burn-in section, as evidenced by the reported summaries (mean, minimum, maximum, and standard deviation) of the Geweke (1992) convergence test standardized z-scores and of the p-values of the Heidelberger and Welch (1983) stationary distribution test. The Raftery burn-in control diagnostic (Raftery and Lewis 1996) confirms the sufficiency of the length of the burn-in section. The CPU run time, also reported in Table 3, was on the order of hours and was virtually equivalent to the wall-clock run time. All chains were obtained using Fortran code with the Intel compiler on a 2.8 GHz Unix machine.

The advantages of HMC-based procedures over RW sampling have been well documented (Neal 2011; Pakman and Paninski 2012). We illustrate the CHMC versus RW comparison in Figure 7, showing the trace plot of the Markov chain for the conditional variance parameter in the BEKK with targeting, with 4 variables and 32 parameters. The CHMC trace (left) mixes very well, exploring the tails of the likelihood kernel, while the RW trace (right) stays within a relatively narrow band with minimal tail exploration and without any signs of convergence. Overall, our results support the use of the BEKK with targeting, and CHMC provides a suitable method to sample effectively from its likelihood kernel with nonlinear constraints.

5 Conclusions

In this article, we suggest an effective approach for estimation and inference on the BEKK GARCH model with targeting. The approach is based on CHMC, a recent statistical technique devised to handle nonlinear constraints in the context of relatively costly and irregular likelihoods. Based on a model comparison with the unrestricted version of the BEKK, we present evidence favoring the BEKK with targeting in terms of marginal likelihood. Due to its potential and applicability to similar types of problems, we detail CHMC in a generally accessible form and provide computer code for its implementation. We also provide a comparison of the CHMC and RW sampling. We believe that the elaborated estimation approach and comparison evidence will encourage wider use of both the BEKK model with targeting and CHMC as valuable tools for analysts and practitioners.

Acknowledgments

I would like to thank John Maheu for valuable comments. This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca), and it was supported by grants from the Social Sciences and Humanities Research Council of Canada (SSHRC: www.sshrc-crsh.gc.ca).

Appendix A: statistical properties of constrained Hamiltonian Monte Carlo

In this section, we provide the stochastic background for CHMC. This synthesis is based on previously published material (Neal 2011; Burda and Maheu 2012; and references therein). However, the bulk of the literature presenting HMC methods does so in terms of the physical laws of motion based on preservation of total energy in phase space. Here, we take a fully stochastic perspective familiar to the applied econometrician. The CHMC principle is thus presented in terms of the joint density over the augmented parameter space, leading to a Metropolis acceptance probability update.

CHMC principle

Consider a vector of parameters of interest θ distributed according to the density kernel p(θ). Let w denote a vector of auxiliary parameters of the same dimension with w ~ N(0, M), where N(0, M) denotes the Gaussian distribution with mean vector 0 and covariance matrix M, independent of θ. M can be set either to the identity matrix or to a covariance matrix estimated by the inverse of the Fisher information matrix around the mode, as we do in our implementation. Denote the joint density of (θ, w) by p(θ, w). Denote the constraint on the parameter space by a density kernel c(θ), flat on the admissible region of θ.

Then, the negative of the logarithm of the joint density of (θ, w) is given by the constrained Hamiltonian equation[1]

[6] H(θ, w) = −ln p(θ) − ln c(θ) + ½ w'M^{−1}w + const.

CHMC is formulated in the following steps:

  1. draw an initial auxiliary parameter vector w ~ N(0, M);

  2. transition from (θ, w) to a proposal (θ*, w*) according to the constrained Hamiltonian dynamics;

  3. accept (θ*, w*) with probability

[7] α = min{1, exp(H(θ, w) − H(θ*, w*))};

otherwise keep (θ, w) as the next MC draw.

The constraints on the parameter space are reflected in Step 2. We will now describe each step in detail.

Step 1 provides a stochastic initialization of the system akin to a RW draw in order to make the resulting Markov chain irreducible and aperiodic (Ishwaran 1999). In contrast to RW, this so-called refreshment move is performed on the auxiliary variable w as opposed to the original parameter of interest θ, which is left unchanged. The initial refreshment draw of w is equivalent to a Gibbs step on the parameter space of w, accepted with probability 1. Since it only applies to w, it will leave the target joint distribution of (θ, w) invariant, and subsequent steps can be performed conditional on w (Neal 2011).

Step 2 constructs a sequence (θ_k, w_k), k = 0, …, L, according to the Hamiltonian dynamics, starting from the current state (θ_0, w_0) with the newly refreshed w_0, and setting the last member of the sequence as the CHMC new state proposal (θ*, w*) = (θ_L, w_L). The transition from (θ_0, w_0) to (θ_L, w_L) via the proposal sequence taken according to the discretized Hamiltonian dynamics [10–12] is fully deterministic, placing a Dirac delta probability mass on each (θ_{k+1}, w_{k+1}) conditional on (θ_k, w_k). The CHMC acceptance probability in eq. [7] is specified in terms of the difference between the Hamiltonian [6] evaluated at the initial (θ_0, w_0) and at the proposal (θ_L, w_L). The role of the Hamiltonian dynamics is to ensure that the acceptance probability [7] for (θ*, w*) is kept close to 1. This corresponds to maintaining the difference H(θ_k, w_k) − H(θ_0, w_0) close to zero throughout the sequence. This property of the transition from (θ_0, w_0) to (θ_L, w_L) can be achieved by conceptualizing θ and w as functions of continuous time t and specifying their evolution using the Hamiltonian dynamics equations[2]

[8] dθ/dt = ∂H(θ, w)/∂w = M^{−1}w
[9] dw/dt = −∂H(θ, w)/∂θ = ∇_θ ln p(θ) + ∇_θ ln c(θ)

for t ≥ 0. For any discrete time interval of duration ε, eqs [8] and [9] define a mapping from the state of the system at time t to the state at time t + ε. The differential equations [8] and [9] are generally solved by numerical methods, typically the Störmer–Verlet (or leapfrog) numerical integrator (Leimkuhler and Reich 2004). For each step in constructing the proposal sequence, CHMC discretizes the Hamiltonian dynamics [8] and [9] as follows: for some small ε > 0, first take a half-ε step in w:

[10] w(t + ε/2) = w(t) + (ε/2) ∇_θ ln p(θ(t))

then take a full ε step in θ:

[11] θ(t + ε) = θ(t) + ε M^{−1} w(t + ε/2)

check the constraint at θ(t + ε) for each dimension i of θ; if for any i the constraint is violated, then set w_i = −w_i, reversing the proposal dynamics, and take further steps in θ until the constraint is satisfied; then finish with another half-ε step in w:

[12] w(t + ε) = w(t + ε/2) + (ε/2) ∇_θ ln p(θ(t + ε))

Intuitively, the proposal trajectory bounces off the “walls” given by the constraint. Since a move in w only occurs conditional on a θ for which the constraint is satisfied, in which case ∇_θ ln c(θ) = 0, and the move in θ only depends on w, the exact functional form of c(θ) is inconsequential as long as it is differentiable in θ so as to define valid Hamiltonian dynamics.
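Steps [10]–[12] with the "hard wall" reflection can be sketched as follows (an illustrative implementation of our own with M = I and a coordinate-wise constraint; the function names are hypothetical, not the article's code):

```python
import numpy as np

def chmc_leapfrog(theta, w, grad_log_p, violated, eps, L):
    """One CHMC proposal trajectory: leapfrog with elastic 'hard walls'.

    grad_log_p(theta): gradient of the log target kernel
    violated(theta):   boolean mask of coordinates breaking the constraint
    Mirrors eqs [10]-[12]: half step in w, full steps in theta with
    momentum reflection on violation, closing half step in w (M = I).
    """
    theta, w = theta.astype(float).copy(), w.astype(float).copy()
    w += 0.5 * eps * grad_log_p(theta)            # half step in w, eq. [10]
    for k in range(L):
        theta += eps * w                          # full step in theta, eq. [11]
        bad = violated(theta)
        while bad.any():                          # bounce off the wall:
            theta -= eps * np.where(bad, w, 0.0)  # undo offending components,
            w = np.where(bad, -w, w)              # reverse their momentum,
            theta += eps * np.where(bad, w, 0.0)  # and retake the step
            bad = violated(theta)
        if k < L - 1:
            w += eps * grad_log_p(theta)          # full w step between positions
    w += 0.5 * eps * grad_log_p(theta)            # closing half step, eq. [12]
    return theta, w
```

For a standard normal target constrained to θ > 0, the trajectory stays in the admissible region while keeping the Hamiltonian approximately constant, so the Metropolis step in eq. [7] accepts with high probability.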

From the statistical perspective, w plays the role of an auxiliary variable that parametrizes (a functional of) p(θ), providing it with an additional degree of flexibility to maintain the acceptance probability close to one for every k. Even though p(θ_k) can deviate substantially from p(θ_0), resulting in favorable mixing for θ, the additional terms in w in eq. [6] compensate for this deviation, maintaining the overall level of H(θ_k, w_k) close to constant over k when used in accordance with eqs [10–12], since θ and w enter with opposite signs in eqs [8] and [9]. In contrast, without the additional parametrization with w, if only θ were to be used in the proposal mechanism, as is the case in RW style samplers, the M–H acceptance probability would often drop to zero relatively quickly.

Step 3 applies a Metropolis correction to the proposal (θ*, w*). In continuous time, or for ε → 0, eqs [8] and [9] would keep H(θ, w) exactly constant, resulting in α = 1, but for discrete ε > 0 in general H(θ*, w*) ≠ H(θ, w), necessitating the Metropolis step.

System [10–12] is time reversible and symmetric in (θ, w), which implies that the forward and reverse transition probabilities q((θ*, w*) | (θ, w)) and q((θ, w) | (θ*, w*)) are equal; this simplifies the M–H acceptance ratio in eq. [5] to the Metropolis form min{1, p(θ*, w*)/p(θ, w)}. From the definition of the Hamiltonian in eq. [6] as the negative of the log-joint densities, the joint density of (θ, w) is given by

[13] p(θ, w) ∝ exp(−H(θ, w)).

Hence, the Metropolis acceptance probability takes the form

α = min{1, exp(H(θ, w) − H(θ*, w*))}.

The expression for α shows, as noted above, that the CHMC acceptance probability is given in terms of the difference of the Hamiltonian equations H(θ, w) − H(θ*, w*). The closer we can keep this difference to zero, the closer the acceptance probability is to one. A key feature of the Hamiltonian dynamics [8] and [9] in Step 2 is that they maintain H(θ, w) constant over the parameter space in continuous time, conditional on w obtained in Step 1, while their discretization [10–12] closely approximates this property for discrete time steps with a global error of order ε², corrected by the Metropolis update in Step 3.

Appendix B: application and model comparison results

Figure 4: Time-series of log-differences in foreign exchange rates.

Table 1:

Summary statistics.

         Mean      SD       Skewness   Sample correlation
AUD/USD  –0.0148   0.9062    0.6607    1   0.5621   0.6361   0.5906
GBP/USD   0.0017   0.6233    0.1466        1        0.4613   0.6794
CAD/USD  –0.0118   0.6188   –0.0500                 1        0.4873
EUR/USD  –0.0076   0.6682   –0.1068                          1
Table 2:

Marginal log-likelihood.

           BEKK                         BEKK with targeting
Variables  Parameters   Marginal ln L   Parameters   Marginal ln L
2          11           –30,447.06      8            –22,146.67
3          24           –65,402.69      18           –48,806.99
4          42           –114,063.13     32           –86,417.73
Table 3:

Convergence diagnostics.

                        BEKK                      BEKK with targeting
Variable dimension      2       3       4         2       3       4
Parameter dimension     11      24      42        8       18      32
CPU time (h:min)        1:35    5:50    9:53      1:14    4:53    6:45
Geweke z-score mean     0.220   –0.199  –0.127    0.040   0.022   –0.285
Geweke z-score min      –0.803  –1.704  –1.896    –0.761  –1.522  –1.669
Geweke z-score max      1.109   1.705   1.738     1.213   1.430   1.477
Geweke z-score SD       0.778   1.171   0.868     0.824   0.878   0.877
Heidel p-value mean     0.544   0.414   0.307     0.415   0.537   0.484
Heidel p-value min      0.137   0.086   0.063     0.244   0.089   0.053
Heidel p-value max      0.956   0.892   0.991     0.596   0.971   0.986
Heidel p-value SD       0.321   0.223   0.262     0.116   0.268   0.266
Raftery burnin mean92790987132
Raftery burnin min69543040
Raftery burnin max207535116216280
Raftery burnin SD4166544956
Proposal steps L        50      30      20        50      30      20
Figure 5: Conditional covariances: full BEKK.

Figure 6: Conditional covariances: BEKK with targeting.

Figure 7: CHMC (left) vs RW sampling (right).

References

Aielli, G. P. 2011. Dynamic Conditional Correlation: On Properties and Estimation. SSRN Working Paper. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1507743. doi:10.2139/ssrn.1507743.

Akhmatskaya, E., N. Bou-Rabee, and S. Reich. 2009. A Comparison of Generalized Hybrid Monte Carlo Methods with and Without Momentum Flip. Journal of Computational Physics 228(6):2256–65. doi:10.1016/j.jcp.2008.12.014.

Bauwens, L., S. Laurent, and J. V. K. Rombouts. 2006. Multivariate GARCH Models: A Survey. Journal of Applied Econometrics 21:79–109. doi:10.1002/jae.842.

Beskos, A., N. S. Pillai, G. O. Roberts, J. M. Sanz-Serna, and A. M. Stuart. 2010. Optimal Tuning of the Hybrid Monte Carlo Algorithm. Working Paper, arXiv:1001.4460v1 [math.PR].

Burda, M., and J. M. Maheu. 2013. Bayesian Adaptively Updated Hamiltonian Monte Carlo with an Application to High-Dimensional BEKK GARCH Models. Studies in Nonlinear Dynamics & Econometrics 17. doi:10.1515/snde-2013-0020.

Caporin, M., and M. McAleer. 2012. Do We Really Need Both BEKK and DCC? A Tale of Two Multivariate GARCH Models. Journal of Economic Surveys 26(4):736–51. doi:10.1111/j.1467-6419.2011.00683.x.

Chernozhukov, V., and H. Hong. 2003. An MCMC Approach to Classical Estimation. Journal of Econometrics 115(3):293–346. doi:10.1016/S0304-4076(03)00100-3.

Chib, S., and E. Greenberg. 1995. Understanding the Metropolis–Hastings Algorithm. American Statistician 49(4):327–35. doi:10.1080/00031305.1995.10476177.

Ding, Z., and R. Engle. 2001. Large Scale Conditional Covariance Matrix Modeling, Estimation and Testing. Academia Economic Papers 29:157–84.

Duane, S., A. Kennedy, B. Pendleton, and D. Roweth. 1987. Hybrid Monte Carlo. Physics Letters B 195(2):216–22. doi:10.1016/0370-2693(87)91197-X.

Engle, R. F. 2002. Dynamic Conditional Correlation: A Simple Class of Multivariate Generalized Autoregressive Conditional Heteroskedasticity Models. Journal of Business and Economic Statistics 20:339–50. doi:10.1198/073500102288618487.

Engle, R. F., and K. F. Kroner. 1995. Multivariate Simultaneous Generalized ARCH. Econometric Theory 11(1):122–50. doi:10.1017/S0266466600009063.

Engle, R. F., and J. Mezrich. 1996. GARCH for Groups. Risk 9:36–40.

Engle, R. F., N. Shephard, and K. Sheppard. 2009. Fitting Vast Dimensional Time-Varying Covariance Models. Available at SSRN: http://ssrn.com/abstract=1354497.

Gelfand, A., and D. Dey. 1994. Bayesian Model Choice: Asymptotics and Exact Calculations. Journal of the Royal Statistical Society, Series B 56:501–14. doi:10.1111/j.2517-6161.1994.tb01996.x.

Geweke, J. 1992. Evaluating the Accuracy of Sampling-Based Approaches to Calculating Posterior Moments. In Bayesian Statistics, vol. 4, edited by J. Bernado, J. O. Berger, A. Dawid, and A. Smith. Oxford, UK: Clarendon Press.

Geweke, J. 2005. Contemporary Bayesian Econometrics and Statistics, 169–93. Hoboken, NJ: Wiley. doi:10.1002/0471744735.

Gupta, R., G. Kilcup, and S. Sharpe. 1988. Tuning the Hybrid Monte Carlo Algorithm. Physical Review D 38(4):1278–87. doi:10.1103/PhysRevD.38.1278.

Hafner, C. M., and H. Herwartz. 2008. Analytical Quasi Maximum Likelihood Inference in Multivariate Volatility Models. Metrika 67:219–39. doi:10.1007/s00184-007-0130-y.

Heidelberger, P., and P. Welch. 1983. Simulation Run Length Control in the Presence of an Initial Transient. Operations Research 31:1109–44. doi:10.1287/opre.31.6.1109.

Ishwaran, H. 1999. Applications of Hybrid Monte Carlo to Generalized Linear Models: Quasicomplete Separation and Neural Networks. Journal of Computational and Graphical Statistics 8:779–99. doi:10.1080/10618600.1999.10474849.

Jin, X., and J. M. Maheu. 2013. Modeling Realized Covariances and Returns. Journal of Financial Econometrics 11(2):335–69. doi:10.1093/jjfinec/nbs022.

Leimkuhler, B., and S. Reich. 2004. Simulating Hamiltonian Dynamics. Cambridge: Cambridge University Press.10.1017/CBO9780511614118Search in Google Scholar

Liu, J. S. 2004. Monte Carlo Strategies in Scientific Computing. Springer Series in Statistics. New York: Springer.10.1007/978-0-387-76371-2Search in Google Scholar

Neal, R. M. 1993. Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report crg-tr-93–1, Department of Computer Science, University of Toronto.Search in Google Scholar

Neal, R. M. 2011. MCMC Using Hamiltonian Dynamics. In Handbook of Markov Chain Monte Carlo, edited by S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, 113–162. Boca Raton, FL: Chapman & Hall/CRC Press.10.1201/b10905-6Search in Google Scholar

Pakman, A., and L. Paninski. 2012. Exact Hamiltonian Monte Carlo for Truncated Multivariate Gaussians. Available at http://arxiv.org/abs/1208.4118, Columbia University.Search in Google Scholar

Plummer, M., N. Best, K. Cowles, K. Vines, D. Sarkar, and R. Almond. 2012. The R Package Coda. Version 0.16–1.Search in Google Scholar

Raftery, A., and S. Lewis. 1996. Implementing MCMC. In Markov Chain Monte Carlo in Practice, edited by W. Gilks, D. Spiegelhalter, and S. Richardson, 115–130. London: Chapman and Hall.Search in Google Scholar

Robert, C. P., and G. Casella. 2004. Monte Carlo Statistical Methods (2nd ed.). New York: Springer.10.1007/978-1-4757-4145-2Search in Google Scholar

Silvennoinen, A., and T. Teräsvirta. 2009. Modeling Multivariate Autoregressive Conditional Heteroskedasticity with the Double Smooth Transition Conditional Correlation Garch Model. Journal of Financial Econometrics 7(4):373–411.10.1093/jjfinec/nbp013Search in Google Scholar

Tuckerman, M., B. Berne, G. Martyna, and M. Klein. 1993. Efficient Molecular Dynamics and Hybrid Monte Carlo Algorithms for Path Integrals. The Journal of Chemical Physics 99(4):2796–808.10.1063/1.465188Search in Google Scholar

Published Online: 2013-07-03
Published in Print: 2015-01-01

©2015 by De Gruyter
