Bayesian adaptively updated Hamiltonian Monte Carlo with an application to high-dimensional BEKK GARCH models

  • Martin Burda and John M. Maheu

Published/Copyright: May 24, 2013
Abstract

Hamiltonian Monte Carlo (HMC) is a recent statistical procedure to sample from complex distributions. Distant proposal draws are taken in a sequence of steps following the Hamiltonian dynamics of the underlying parameter space, often yielding superior mixing properties of the resulting Markov chain. However, its performance can deteriorate sharply with the degree of irregularity of the underlying likelihood due to its lack of local adaptability in the parameter space. Riemann Manifold HMC (RMHMC), a locally adaptive version of HMC, alleviates this problem, but at a substantially increased computational cost that can become prohibitive in high-dimensional scenarios. In this paper we propose the Adaptively Updated HMC (AUHMC), an alternative inferential method based on HMC that is both fast and locally adaptive, combining the advantages of both HMC and RMHMC. The benefits become more pronounced with higher dimensionality of the parameter space and with the degree of irregularity of the underlying likelihood surface. We show that AUHMC satisfies detailed balance for a valid MCMC scheme and provide a comparison with RMHMC in terms of effective sample size, highlighting substantial efficiency gains of AUHMC. Simulation examples and an application of the BEKK GARCH model show the practical usefulness of the new posterior sampler.


Corresponding author: Martin Burda, Department of Economics, University of Toronto, 150 St. George St., Toronto, ON, M5S 3G7, Canada; and IES, Charles University, Prague, Czech Republic, Phone: +(416) 978-4479

  1. Although not estimated here, we expect that our method could be extended to other innovation distributions, such as the multivariate Student-t, with little modification.

  2. There are notable exceptions, such as Girolami and Calderhead (2011), who also take the statistical perspective, but their paper focuses on RMHMC while here we elaborate on the statistical background to HMC.

  3. In the physics literature, θ denotes the position (or state) variable and –ln π(θ) describes its potential energy, while γ is the momentum variable with kinetic energy γ′M⁻¹γ/2, yielding the total energy H(θ, γ) of the system, up to a constant of proportionality. M is a constant, symmetric, positive-definite "mass" matrix, which is often set as a scalar multiple of the identity matrix.

  4. In the physics literature, the Hamiltonian dynamics describe the evolution of (θ, γ) that keeps the total energy H(θ, γ) constant.

We would like to thank Ben Calderhead, Mark Girolami, and Radford Neal for helpful discussions. We would also like to thank the participants of the Seminar on Bayesian Inference in Econometrics and Statistics 2011 at Washington University in St. Louis, the Meetings of the Midwest Econometrics Group 2011 at the University of Chicago Booth School of Business, the Workshop on High-Dimensional Econometric Modelling 2010 at Cass Business School in London, UK, and the MIT-Harvard and Brown University seminar audiences for their insightful comments and suggestions. Both Burda and Maheu thank SSHRC for financial support. This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute/Calcul Canada. We would like to thank the Editor, Bruce Mizrach, and two anonymous referees for their helpful comments and suggestions.

Appendix A: Hamiltonian Monte Carlo

In this section we provide the stochastic background for HMC. This synthesis is based on previously published material, but unlike the bulk of the literature, which presents HMC in terms of the physical laws of motion based on preservation of total energy in the phase space, we take a fully stochastic perspective familiar to the applied Bayesian econometrician.2 The HMC principle is thus presented in terms of the joint density over the augmented parameter space, leading to a Metropolis acceptance probability update. We hope that our synthesis of the probabilistic perspective on HMC will provide useful insights for practitioners who wish to further explore the HMC principles.

HMC Principle

Consider a vector of parameters of interest θ distributed according to the posterior density π(θ). Let γ denote a vector of auxiliary parameters of the same dimension, distributed Gaussian with mean vector 0 and covariance matrix M, independent of θ. Denote the joint density of (θ, γ) by π(θ, γ). Then the negative of the logarithm of the joint density of (θ, γ) is given by the Hamiltonian equation.3
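Combining this with footnote 3 (potential energy –ln π(θ) and Gaussian kinetic energy γ′M⁻¹γ/2), the Hamiltonian takes, up to an additive constant, the form

\[
H(\theta, \gamma) \;=\; -\ln \pi(\theta) \;+\; \tfrac{1}{2}\,\gamma' M^{-1} \gamma .
\]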

Hamiltonian Monte Carlo (HMC) is formulated in the following three steps that we describe in detail further below:

  1. Draw an initial auxiliary parameter vector γr from N(0, M);

  2. Transition from (θr, γr) to a proposal (θL, γL) according to the Hamiltonian dynamics;

  3. Accept (θL, γL) with the Metropolis acceptance probability (2); otherwise keep (θr, γr) as the next MC draw.

Step 1 provides a stochastic initialization of the system akin to a RW draw. This step is necessary in order to make the resulting Markov chain irreducible and aperiodic (Ishwaran 1999). In contrast to RW, this so-called refreshment move is performed on the auxiliary variable γ as opposed to the original parameter of interest θ, setting γr ~ N(0, M). In terms of the HMC sampling algorithm, the initial refreshment draw of γr forms a Gibbs step on the parameter space of (θ, γ), accepted with probability 1. Since it only applies to γ, it leaves the target joint distribution of (θ, γ) invariant, and subsequent steps can be performed conditional on γr (Neal 2010).

Step 2 constructs a sequence (θk, γk), k=1,…,L, according to the Hamiltonian dynamics, starting from the current state (θ0, γ0)=(θr, γr) and setting the last member of the sequence as the HMC new state proposal (θL, γL). The role of the Hamiltonian dynamics is to ensure that the M-H acceptance probability (2) for (θL, γL) is kept close to 1. As will become clear shortly, this corresponds to maintaining the difference H(θk, γk) – H(θr, γr) close to zero throughout the sequence. This property of the transition from (θr, γr) to (θL, γL) can be achieved by conceptualizing θ and γ as functions of continuous time t and specifying their evolution using the Hamiltonian dynamics equations4

dθi/dt = ∂H(θ, γ)/∂γi = [M⁻¹γ]i,     (15)
dγi/dt = –∂H(θ, γ)/∂θi = ∂ln π(θ)/∂θi,     (16)

for i=1,…,d. For any discrete time interval of duration s, (15) and (16) define a mapping Ts from the state of the system at time t to the state at time t+s. For practical applications of interest, these differential equations (15) and (16) in general cannot be solved analytically, and numerical methods are required instead. The Störmer–Verlet (or leapfrog) numerical integrator (Leimkuhler and Reich 2004) is one such popular method, discretizing the Hamiltonian dynamics as

γ(t+ε/2) = γ(t) + (ε/2) ∂ln π(θ(t))/∂θ,     (17)
θ(t+ε) = θ(t) + ε M⁻¹ γ(t+ε/2),     (18)
γ(t+ε) = γ(t+ε/2) + (ε/2) ∂ln π(θ(t+ε))/∂θ,     (19)

for some small step size ε>0. From this perspective, γ plays the role of an auxiliary variable that parametrizes (a functional of) π(θ, ·), providing it with an additional degree of flexibility to maintain the acceptance probability close to one for every k. Even though π(θk) can deviate substantially from π(θr), resulting in favorable mixing for θ, the additional terms in γ in (14) compensate for this deviation, maintaining the overall level of H(θk, γk) close to constant over k=1, …, L when used in accordance with (17)–(19), since the two partial derivatives of H enter with opposite signs in (15) and (16). In contrast, without the additional parametrization with γ, if only π(θ) were used in the proposal mechanism, as is the case in RW-style samplers, the M-H acceptance probability would often drop to zero relatively quickly.

Step 3 applies a Metropolis correction to the proposal (θL, γL). In continuous time, or for ε→0, (15) and (16) would keep H(θ, γ) exactly constant, resulting in an acceptance probability of one, but for discrete ε>0 the Hamiltonian is in general not preserved exactly, necessitating the Metropolis step. A key feature of HMC is that the generic M-H acceptance probability (2) can be expressed in a simple tractable form using only the posterior density π(θ) and the auxiliary parameter Gaussian density ϕ(γ; 0, M). The transition from (θr, γr) to (θL, γL) via the proposal sequence taken according to the discretized Hamiltonian dynamics (17)–(19) is a fully deterministic proposal, placing a Dirac delta probability mass on each (θk, γk) conditional on (θk–1, γk–1). The system (17)–(19) is time reversible and symmetric in (θ, γ), which implies that the forward and reverse transition probabilities are equal; this simplifies the Metropolis-Hastings acceptance ratio in (2) to the Metropolis form. From the definition of the Hamiltonian H(θ, γ) in (14) as the negative of the log joint density, the joint density of (θ, γ) is given by

π(θ, γ) = π(θ) ϕ(γ; 0, M) ∝ exp{–H(θ, γ)}.

Hence, the Metropolis acceptance probability takes the form

α((θr, γr), (θL, γL)) = min{1, exp[H(θr, γr) – H(θL, γL)]} = min{1, [π(θL) ϕ(γL; 0, M)] / [π(θr) ϕ(γr; 0, M)]}.

The expression for α shows, as noted above, that the HMC acceptance probability is given in terms of the difference of the Hamiltonian equations H(θr, γr) – H(θL, γL). The closer we can keep this difference to zero, the closer the acceptance probability is to one. A key feature of the Hamiltonian dynamics (15) and (16) in Step 2 is that they maintain H(θ, γ) constant over the parameter space in continuous time, conditional on the γr obtained in Step 1, while their discretization (17)–(19) closely approximates this property for discrete time steps ε>0, with a global error of order ε² corrected by the Metropolis update in Step 3.

The acceptance ratio could be maintained at exactly one only if the proposal trajectory evolution were continuous. However, due to its discretization into individual steps, the acceptance probability always deviates somewhat from one because of discretization errors. The length of the proposal sequence can then be tuned using ε>0 and L to achieve a desired acceptance rate, analogously to the RW environment. The Hamiltonian dynamics approximately keep the joint density π(θ, γ) of θ and γ constant, permitting changes in the marginal density π(θ). Due to this feature, the proposal sequence does not move along a "straight" trajectory in the parameter space Θ of θ but rather along a "curve." This ensures that the proposal sequence does not travel "too far" into the tails and stays in regions with non-zero probability. The ESS is a useful diagnostic tool in this respect: proposals accepted too far from the current state would result in near-independent MCMC draws, bringing the ESS value close to the number of MCMC iterations, but we have not seen such a phenomenon occur. Each proposal sequence in HMC and its extensions starts with a "refreshment" of the kinetic auxiliary variable γ, newly drawn from N(0, M), where M is the mass matrix. This draw determines the direction in which the proposal sequence propagates through the parameter space. The stochastic nature of γ prevents the chain from getting stuck at the original point or too close to it.
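To make Steps 1–3 concrete, the following is a minimal sketch of one HMC transition in Python, assuming user-supplied functions log_post and grad_log_post for ln π(θ) and its gradient; the function name, the step size eps, and the number of leapfrog steps L are illustrative choices rather than the paper's implementation.

import numpy as np

def hmc_step(theta, log_post, grad_log_post, M, eps=0.01, L=20, rng=None):
    # One HMC transition: refresh gamma, run L leapfrog steps (17)-(19),
    # then apply the Metropolis correction of Step 3.
    rng = np.random.default_rng() if rng is None else rng
    d = theta.shape[0]
    M_inv = np.linalg.inv(M)
    gamma = rng.multivariate_normal(np.zeros(d), M)       # Step 1: refreshment draw
    def H(th, g):                                         # H = -ln pi(theta) + g'M^{-1}g/2
        return -log_post(th) + 0.5 * g @ M_inv @ g
    H_current = H(theta, gamma)
    th, g = theta.copy(), gamma.copy()
    for _ in range(L):                                    # Step 2: leapfrog proposal sequence
        g = g + 0.5 * eps * grad_log_post(th)             # half-step in gamma, eq. (17)
        th = th + eps * (M_inv @ g)                       # full step in theta, eq. (18)
        g = g + 0.5 * eps * grad_log_post(th)             # half-step in gamma, eq. (19)
    accept_prob = min(1.0, np.exp(H_current - H(th, g)))  # Step 3: Metropolis correction
    return th if rng.uniform() < accept_prob else theta

Repeating hmc_step over r=1,…,R yields the Markov chain; tuning eps and L trades off the acceptance rate against the distance travelled per proposal, as discussed above.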

RMHMC

HMC features proposal dynamics that are based on the Hamiltonian equation of motion (14). RMHMC is an extension of HMC that results from replacing the mass matrix M in the Hamiltonian equation (14) with the Fisher information matrix F(θ) of the underlying likelihood π(θ). This leads to the augmented Hamiltonian equation (21).
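Following Girolami and Calderhead (2011), this position-dependent Hamiltonian can be written, up to an additive constant, as

\[
H(\theta, \gamma) \;=\; -\ln \pi(\theta) \;+\; \tfrac{1}{2}\,\ln\!\big\{(2\pi)^{d}\,|F(\theta)|\big\} \;+\; \tfrac{1}{2}\,\gamma' F(\theta)^{-1}\gamma .
\]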

The Hamiltonian equation (21) is non-separable in θ, since its derivative with respect to γ is a function of both γ and θ, and its derivative with respect to θ is likewise a function of both γ and θ. Since both of these derivatives contain the Fisher information matrix F(θ) (and the latter also its inverse F(θ)⁻¹ and its derivatives ∂F(θ)/∂θi with respect to each θi), while F(θ) is itself a function of θ, the quantities F(θ), F(θ)⁻¹, and ∂F(θ)/∂θi for each i have to be recomputed at each step of the proposal sequence in order to obtain the directional dynamics for the proposal given by the derivatives of the Hamiltonian (21). This feature renders RMHMC computationally intensive.
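For reference, in the formulation of Girolami and Calderhead (2011) these derivatives take the form

\[
\frac{\partial H(\theta,\gamma)}{\partial \gamma} \;=\; F(\theta)^{-1}\gamma, \qquad
\frac{\partial H(\theta,\gamma)}{\partial \theta_i} \;=\; -\frac{\partial \ln\pi(\theta)}{\partial \theta_i}
\;+\; \tfrac{1}{2}\,\operatorname{tr}\!\Big\{F(\theta)^{-1}\frac{\partial F(\theta)}{\partial \theta_i}\Big\}
\;-\; \tfrac{1}{2}\,\gamma' F(\theta)^{-1}\frac{\partial F(\theta)}{\partial \theta_i}F(\theta)^{-1}\gamma ,
\]

so every leapfrog step requires F(θ), its inverse, and its directional derivatives.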

AUHMC Mechanism

AUHMC is also an extension of HMC, but here the mass matrix in the Hamiltonian equation (14) is replaced by a matrix that is held fixed over the entire leapfrog multi-step proposal sequence. Since the derivative of the resulting Hamiltonian with respect to γ is only a function of γ, and the derivative with respect to θ is only a function of θ, the proposal dynamics are not burdened by recomputing F(θ), which is only needed to update the mass matrix at the final proposal point. This alleviates much of the computational burden necessitated in RMHMC.
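With a generic fixed mass matrix, written here as M̄ (the bar is our notation for whichever matrix is held fixed over the current proposal sequence), the Hamiltonian and its derivatives separate as

\[
H(\theta, \gamma) \;=\; -\ln \pi(\theta) \;+\; \tfrac{1}{2}\,\gamma' \bar{M}^{-1}\gamma, \qquad
\frac{\partial H}{\partial \gamma} \;=\; \bar{M}^{-1}\gamma, \qquad
\frac{\partial H}{\partial \theta} \;=\; -\frac{\partial \ln \pi(\theta)}{\partial \theta},
\]

so F(θ) enters only through the mass-matrix update at the end of the sequence.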

As the AUHMC principle is detailed in the main text, here we provide a heuristic description of its implementation algorithm. At the current values of the MCMC parameter draws (θr, γr), first "refresh" the momentum parameter γ by drawing a new value from the normal distribution with mean 0 and variance given by the current mass matrix. Then obtain the next step of the proposal sequence by taking a half-step in γ, a full step in θ, and a half-step in γ, as given by the leapfrog equations (7)–(9), starting with k=0.

Take such a step L times in total, for k=0,…,L–1, arriving at (θL, γL), which gives us the proposal. Update the mass matrix with the new Fisher information evaluated at the proposal. Repeat running the proposal sequence until convergence to a fixed point. Accept the final proposal as the next MCMC draw (θr+1, γr+1) with probability

where ϕ(‧) is the normal density function.

Appendix B: The AUHMC algorithm

Initialize current θ

  for r=1 to R

  {

   initialize j=0

   (j loop) do while

          

   {

   draw for j=0

   and for j>0

   j=j+1

   (k loop) for k=1 to L

   {

   

   }

   

  }

 draw u~U[0, 1]

 if (α* < u) then {θr+1 = θr} else {θr+1 = θL}

}
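The following Python sketch mirrors the structure of this pseudocode, assuming user-supplied functions log_post, grad_log_post, and fisher_diag (a diagonal Fisher information, as used in the empirical application); the convergence tolerance, the handling of the refreshment draw across j-iterations, and the form of the acceptance ratio are illustrative assumptions rather than the paper's exact implementation.

import numpy as np

def auhmc_step(theta, log_post, grad_log_post, fisher_diag,
               eps=0.01, L=20, tol=1e-6, max_iter=50, rng=None):
    # One AUHMC transition: iterate the proposal sequence to a fixed point of the
    # (diagonal) mass matrix, then apply a Metropolis accept/reject step.
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(theta.shape[0])   # underlying N(0, I) draw, held fixed over j
    M = fisher_diag(theta)                    # initial mass matrix (diagonal), j = 0
    for _ in range(max_iter):                 # j loop: fixed-point iteration on M
        gamma = np.sqrt(M) * z                # refresh gamma ~ N(0, M)
        th, g = theta.copy(), gamma.copy()
        for _ in range(L):                    # k loop: leapfrog with M held fixed
            g = g + 0.5 * eps * grad_log_post(th)
            th = th + eps * g / M
            g = g + 0.5 * eps * grad_log_post(th)
        M_new = fisher_diag(th)               # Fisher information at the proposal endpoint
        if np.max(np.abs(M_new - M)) < tol:   # mass matrix has reached a fixed point
            break
        M = M_new
    def log_joint(t, gam):                    # log of pi(theta) * phi(gamma; 0, M)
        return log_post(t) - 0.5 * np.sum(gam ** 2 / M) - 0.5 * np.sum(np.log(M))
    alpha = min(1.0, np.exp(log_joint(th, g) - log_joint(theta, gamma)))
    u = rng.uniform()                         # accept/reject as in the last line above
    return th if u < alpha else theta

In the full algorithm this transition is repeated for r=1,…,R; with a non-diagonal mass matrix the elementwise operations would be replaced by Cholesky-based ones.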

Appendix C: Proof of Lemma 1 and Theorem 1

Proof of Lemma 1

The AUHMC mapping is a special case of an implicit Runge-Kutta method (Leimkuhler and Reich 2004, 150–151). Hence, under our Assumptions 2 and 3, the proof of existence of a unique solution is given by Theorem 7.2 of Hairer, Nørsett, and Wanner (1993, 206). Specifically, there exists a unique solution to the mapping Tk defined by (7)–(9), which can be obtained by iteration; the repeated use of the triangle inequality implied by the Lipschitz condition yields the contraction mapping property.
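To spell out the contraction argument in generic terms (the notation here is schematic): if the fixed-point iteration x_{n+1} = G(x_n) is driven by a map G with Lipschitz constant λ < 1, then

\[
\| x_{n+1} - x_n \| \;=\; \| G(x_n) - G(x_{n-1}) \| \;\le\; \lambda \,\| x_n - x_{n-1} \| \;\le\; \lambda^{n} \,\| x_1 - x_0 \| ,
\]

so the iterates form a Cauchy sequence and converge to the unique fixed point of G.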

Proof of Theorem 1

Recall that AUHMC constructs a distant proposal in a sequence of k=1,…,L steps. For a given k (omitting the subscripts r denoting the MCMC steps), define the mapping of (θk, γk) into (θk+1, γk+1) as:

The coefficient notation corresponds to the general setup of an implicit partitioned Runge-Kutta scheme of Leimkuhler and Reich (2004, 150–151). Here, all coefficients are equal to zero unless stated otherwise. Moreover, if in the summation sign the upper index is smaller than the lower index, then the corresponding coefficient is equal to zero. The Hamiltonian for each k is given by

with

where the right-hand side is defined in (7), and the remaining arguments are implicitly determined in

We next state the definition of an adjoint mapping (Leimkuhler and Reich 2004, 84).

Definition 1 The mapping defined by is called the adjoint mapping of Equivalently, given its adjoint is defined by

Given as defined above, its adjoint takes the form

We next proceed to symmetric compositions of mappings with their adjoints.

Definition 2 A mapping is called symmetric if it is equal to its own adjoint.

The symmetry of a mapping then implies its time-reversibility (Leimkuhler and Reich 2004, 87). Knowing a mapping and its adjoint, a symmetric mapping is obtained by composition (concatenation) of the two methods, even if neither of them is symmetric individually (Leimkuhler and Reich 2004, 84). The following Lemma provides a simple extension of this result.

Lemma 2. Given a symmetric mapping, the mapping

is also symmetric.

Proof.

which satisfies the definition of a symmetric mapping.      ■

Note that, since the adjoint of the adjoint is the original mapping, Lemma 2 can also be equivalently stated with the roles of the mapping and its adjoint interchanged.

For L even, let m=L/2, k=m and define the mapping

which, using (22), is symmetric. Then, let

and further

for m=1, …, L/2–1. The final composite mapping then takes the form

Symmetry of the final composite mapping follows by repeated application of Lemma 2.

These mappings are special cases of an implicit partitioned Runge-Kutta method (Leimkuhler and Reich 2004, 150–151), and thus the existence and uniqueness of their solutions follow from Lemma 1. The uniqueness of the solution for each k implies that there is a unique solution to the composite mapping. Such a solution is equivalent to the one given by AUHMC, since the AUHMC fixed point is identical to the fixed point solving the composite mapping. Since, by Lemma 1, the solution to AUHMC is unique, AUHMC implements the composite mapping, which is symmetric and time reversible, yielding the detailed balance condition of Theorem 1.

Equivalently, it follows directly from the definitions that reversing the momentum at the proposal endpoint and applying AUHMC amounts, due to the symmetry of the composite mapping, to following the same proposal path back to (θr, γr) and negating the momentum again after the final step. This satisfies the definition of reversibility for AUHMC.

We can make an analogy between the pair of Euler-B and Euler-A methods (Leimkuhler and Reich 2004, 84) and the pair of mappings defined above. In the former pair, the difference lies in the point at which we evaluate the directional derivatives (θk or θk+1). In the latter pair, the difference lies in the number of HMC steps needed to reach the two reference points, which at the solution equal θ0 and θL, respectively, but the directional derivatives are always the same, taken with Mk evaluated at the endpoints of the proposal sequence. However, neither mapping is symmetric on its own, and hence we need their concatenation to attain symmetry of the composite mapping.
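For concreteness, with a fixed mass matrix M and potential U(θ) = –ln π(θ), the two symplectic Euler variants referred to here read (which of the two carries the Euler-A and which the Euler-B label follows Leimkuhler and Reich 2004, 84):

\[
\gamma_{k+1} = \gamma_k - h\,\nabla_\theta U(\theta_k), \qquad \theta_{k+1} = \theta_k + h\,M^{-1}\gamma_{k+1},
\]
and
\[
\theta_{k+1} = \theta_k + h\,M^{-1}\gamma_k, \qquad \gamma_{k+1} = \gamma_k - h\,\nabla_\theta U(\theta_{k+1}).
\]

The two schemes are adjoint to each other, and composing a half-step of one with a half-step of the other yields the symmetric Störmer–Verlet (leapfrog) method used in (17)–(19).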

At the implicit solution of the first of these mappings, the updates hold in analogy to the Euler-B method. Also, due to the symmetry of Mk from Assumption 1, at the solution of the second mapping the updates hold in analogy to the Euler-A method. These half-steps are performed by AUHMC during its proposal sequence.

M-H acceptance probability

The derivation of the M-H acceptance probability form is standard in the HMC literature, and we merely adapt it to the AUHMC notation below. Denote by q((θL, γL)|(θr, γr)) the proposal density and by q((θr, γr)|(θL, γL)) the reverse proposal density. Given (θr, γr), the proposal density is constructed by the method of change of variables based on the sequence of steps given by the AUHMC mapping Tk for k=1, …, L. Since Tk is deterministic, placing a Dirac delta δ(·,·)=1 unit probability mass at each step, applying the successive transformations Tk yields the proposal density as a product of terms involving the Jacobian matrices Jk of the transformations Tk, where Jk denotes the Jacobian matrix of Tk with respect to (θk, γk) for each k=1, …, L.

Denote by T̃k the reverse mapping obtained from Tk by reversing the signs in the Hamiltonian proposal dynamics. Then

with the reverse Jacobian terms defined analogously. Conditional on the mass matrix Mk satisfying Assumption 1, the leapfrog transformation defined by (7)–(9) satisfies

Then

for each k = 1, …, L and hence

The ratio in the acceptance probability (2) then satisfies detailed balance in the Metropolis form

[π(θL, γL) q((θr, γr)|(θL, γL))] / [π(θr, γr) q((θL, γL)|(θr, γr))] = π(θL, γL) / π(θr, γr),     (27)

since all the Jacobian terms cancel out due to (26). By the definition of the Hamiltonian equation in (3), the ratio in (27) is then equivalent to exp{H(θr, γr) – H(θL, γL)}.

Appendix D: Fisher information for the multivariate normal density

For the univariate case,

and for the multivariate case

where Dm is the duplication matrix (Magnus and Neudecker 2007). In our empirical application we used a numerical approximation to the diagonal of F(θ), instead of the full matrix, to speed up the MC runs.
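For reference, the textbook expressions (Magnus and Neudecker 2007) for a Gaussian with mean μ and covariance matrix Σ, parameterized by (μ, vech Σ), are

\[
F(\mu) \;=\; \Sigma^{-1}, \qquad
F(\operatorname{vech}\Sigma) \;=\; \tfrac{1}{2}\, D_m' \left( \Sigma^{-1} \otimes \Sigma^{-1} \right) D_m ,
\]

with zero cross-information between μ and vech Σ; the univariate case reduces to F(μ) = 1/σ² and F(σ²) = 1/(2σ⁴).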

Appendix E: Summary statistics and results

References

Akhmatskaya, E., N. Bou-Rabee, and S. Reich. 2009. "A Comparison of Generalized Hybrid Monte Carlo Methods with and Without Momentum Flip." Journal of Computational Physics 228 (6): 2256–2265. doi:10.1016/j.jcp.2008.12.014.

Bauwens, L., C. S. Bos, H. K. van Dijk, and R. D. van Oest. 2004. "Adaptive Radial-Based Direction Sampling: Some Flexible and Robust Monte Carlo Integration Methods." Journal of Econometrics 123: 201–225. doi:10.1016/j.jeconom.2003.12.002.

Beskos, A., N. S. Pillai, G. O. Roberts, J. M. Sanz-Serna, and A. M. Stuart. 2010. "Optimal Tuning of the Hybrid Monte-Carlo Algorithm." Working Paper, arXiv:1001.4460v1 [math.PR].

Chib, S., and E. Greenberg. 1995. "Understanding the Metropolis-Hastings Algorithm." American Statistician 49 (4): 327–335.

Chib, S., and S. Ramamurthy. 2010. "Tailored Randomized Block MCMC Methods with Application to DSGE Models." Journal of Econometrics 155 (1): 19–38. doi:10.1016/j.jeconom.2009.08.003.

Dellaportas, P., and I. D. Vrontos. 2007. "Modelling Volatility Asymmetries: A Bayesian Analysis of a Class of Tree Structured Multivariate GARCH Models." Econometrics Journal 10 (3): 503–520. doi:10.1111/j.1368-423X.2007.00219.x.

Ding, Z., and R. Engle. 2001. "Large Scale Conditional Covariance Matrix Modeling, Estimation and Testing." Academia Economic Papers 29: 157–184.

Duane, S., A. D. Kennedy, B. Pendleton, and D. Roweth. 1987. "Hybrid Monte Carlo." Physics Letters B 195 (2): 216–222. doi:10.1016/0370-2693(87)91197-X.

Engle, R. F. 2002. "Dynamic Conditional Correlation: A Simple Class of Multivariate Generalized Autoregressive Conditional Heteroskedasticity Models." Journal of Business and Economic Statistics 20: 339–350. doi:10.1198/073500102288618487.

Engle, R. F., and K. F. Kroner. 1995. "Multivariate Simultaneous Generalized ARCH." Econometric Theory 11 (1): 122–150. doi:10.1017/S0266466600009063.

Engle, R. F., N. Shephard, and K. Sheppard. 2009. "Fitting Vast Dimensional Time-Varying Covariance Models." Working Paper.

Gamerman, D. 1997. "Sampling from the Posterior Distribution in Generalized Linear Mixed Models." Statistics and Computing 7: 57–68. doi:10.1023/A:1018509429360.

Gelfand, A. E., and D. K. Dey. 1994. "Bayesian Model Choice: Asymptotics and Exact Calculations." Journal of the Royal Statistical Society. Series B (Methodological) 56 (3): 501–514. doi:10.1111/j.2517-6161.1994.tb01996.x.

Geweke, J. 2005. Contemporary Bayesian Econometrics and Statistics. Hoboken, NJ: Wiley. doi:10.1002/0471744735.

Geyer, C. J. 1992. "Practical Markov Chain Monte Carlo." Statistical Science 7: 473–483. doi:10.1214/ss/1177011137.

Girolami, M., and B. Calderhead. 2011. "Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods (with Discussion)." Journal of the Royal Statistical Society, Series B 73 (2): 123–214. doi:10.1111/j.1467-9868.2010.00765.x.

Gupta, R., G. W. Kilcup, and S. R. Sharpe. 1988. "Tuning the Hybrid Monte Carlo Algorithm." Physical Review D 38 (4): 1278–1287. doi:10.1103/PhysRevD.38.1278.

Hafner, C. M., and H. Herwartz. 2008. "Analytical Quasi Maximum Likelihood Inference in Multivariate Volatility Models." Metrika 67: 219–239. doi:10.1007/s00184-007-0130-y.

Hairer, E., C. Lubich, and G. Wanner. 2003. "Geometric Numerical Integration Illustrated by the Störmer–Verlet Method." Acta Numerica 12: 399–450. doi:10.1017/S0962492902000144.

Holmes, C. C., and L. Held. 2006. "Bayesian Auxiliary Variable Models for Binary and Multinomial Regression." Bayesian Analysis 1 (1): 145–168.

Hoogerheide, L. F., J. F. Kaashoek, and H. K. van Dijk. 2007. "On the Shape of Posterior Densities and Credible Sets in Instrumental Variable Regression Models with Reduced Rank: An Application of Flexible Sampling Methods using Neural Networks." Journal of Econometrics 139 (1): 154–180. doi:10.1016/j.jeconom.2006.06.009.

Hudson, B., and R. Gerlach. 2008. "A Bayesian Approach to Relaxing Parameter Restrictions in Multivariate GARCH Models." TEST: An Official Journal of the Spanish Society of Statistics and Operations Research 17 (3): 606–627. doi:10.1007/s11749-007-0056-8.

Ishwaran, H. 1999. "Applications of Hybrid Monte Carlo to Generalized Linear Models: Quasicomplete Separation and Neural Networks." Journal of Computational and Graphical Statistics 8: 779–799.

Kurdila, A. J., and M. Zabarankin. 2005. Convex Functional Analysis. Basel: Birkhäuser Verlag.

Leimkuhler, B., and S. Reich. 2004. Simulating Hamiltonian Dynamics. New York: Cambridge University Press. doi:10.1017/CBO9780511614118.

Liesenfeld, R., and J. F. Richard. 2006. "Classical and Bayesian Analysis of Univariate and Multivariate Stochastic Volatility Models." Econometric Reviews 25 (2–3): 335–360. doi:10.1080/07474930600713424.

Liu, J. S. 2004. Monte Carlo Strategies in Scientific Computing. New York: Springer Series in Statistics. doi:10.1007/978-0-387-76371-2.

Magnus, J. R., and H. Neudecker. 2007. Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley & Sons.

Neal, R. M. 1993. Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto.

Neal, R. M. 2010. "MCMC Using Hamiltonian Dynamics." In Handbook of Markov Chain Monte Carlo, edited by S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Boca Raton: Chapman & Hall/CRC Press. doi:10.1201/b10905-6.

Osiewalski, J., and M. Pipien. 2004. "Bayesian Comparison of Bivariate ARCH-type Models for the Main Exchange Rates in Poland." Journal of Econometrics 123 (2): 371–391. doi:10.1016/j.jeconom.2003.12.005.

Pitt, M. K., and N. Shephard. 1997. "Likelihood Analysis of Non-Gaussian Measurement Time Series." Biometrika 84: 653–667. doi:10.1093/biomet/84.3.653.

Robert, C. P., and G. Casella. 2004. Monte Carlo Statistical Methods. 2nd ed. New York: Springer. doi:10.1007/978-1-4757-4145-2.

Roberts, G. O., and J. S. Rosenthal. 1998. "Optimal Scaling of Discrete Approximations to Langevin Diffusions." Journal of the Royal Statistical Society. Series B (Statistical Methodology) 60 (1): 255–268. doi:10.1111/1467-9868.00123.

Roberts, G., and O. Stramer. 2003. "Langevin Diffusions and Metropolis-Hastings Algorithms." Methodology and Computing in Applied Probability 4: 337–358. doi:10.1023/A:1023562417138.

Tuckerman, M., B. J. Berne, G. J. Martyna, and M. L. Klein. 1993. "Efficient Molecular Dynamics and Hybrid Monte Carlo Algorithms for Path Integrals." The Journal of Chemical Physics 99 (4): 2796–2808. doi:10.1063/1.465188.

Published Online: 2013-05-24
Published in Print: 2013-09-01

©2013 by Walter de Gruyter Berlin Boston
