Abstract
We motivate and calculate Newton–Cotes quadrature integration variances and compare them directly with Monte Carlo (MC) integration variances.
We find an equivalence between deterministic quadrature sampling and random MC sampling by noting that MC random sampling is statistically indistinguishable from a method that uses deterministic sampling on a randomly shuffled (permuted) function.
We use this statistical equivalence to regularize the form of permissible Bayesian quadrature integration priors such that they are guaranteed to be objectively comparable with MC.
This leads to the proof that simple quadrature methods have expected variances that are less than or equal to their corresponding theoretical MC integration variances.
Separately, using Bayesian probability theory, we find that the theoretical standard deviations of the unbiased errors of simple Newton–Cotes composite quadrature integrations improve over their worst-case errors by an extra dimension-independent factor.
Funding statement: This work was supported by the Center for Complex Engineering Systems at King Abdulaziz City for Science and Technology and the Massachusetts Institute of Technology.
A Appendix: Direct comparisons to other MC-based integration methods?
We attempt some direct comparisons between simple quadrature and Markov chain Monte Carlo (MCMC), importance sampling, stratified sampling, and Latin hypercube sampling using this analysis. We take the comparisons of these methods to MC from the literature and then compare them to simple quadrature using (3.11).
The variance of a MCMC estimate is
$$\operatorname{var}\bigl(\bar{f}_{\mathrm{MCMC}}\bigr) = \frac{2\tau}{N}\,\sigma^{2}(f),$$
where τ is the integrated autocorrelation time. When the samples are all drawn i.i.d., one finds τ = 1/2, so the variance reduces to the usual MC variance σ²(f)/N.
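As an illustration of these formulas, the following Python sketch (our own example, not from the paper; the AR(1) test chain and function names are illustrative choices) estimates the integrated autocorrelation time τ of a correlated chain and compares the predicted variance 2τσ²(f)/N of the chain mean with the empirical variance over repeated chains.

```python
import numpy as np

def integrated_autocorr_time(x, c=6.0):
    """Estimate the integrated autocorrelation time tau of a 1-D chain x
    using a windowed sum of normalized autocovariances; under this
    convention tau = 1/2 for i.i.d. samples."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocovariance via FFT, normalized so rho[0] = 1.
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conjugate(f))[:n] / n
    rho = acov / acov[0]
    tau = 0.5
    for t in range(1, n):
        tau += rho[t]
        if t >= c * tau:  # self-consistent truncation window
            break
    return tau

rng = np.random.default_rng(0)
N, phi, reps = 10_000, 0.9, 200

# Empirical variance of the chain mean over independent AR(1) chains.
means = []
for _ in range(reps):
    x = np.zeros(N)
    eps = rng.standard_normal(N)
    for i in range(1, N):
        x[i] = phi * x[i - 1] + eps[i]
    means.append(x.mean())
empirical_var = np.var(means)

# Predicted MCMC variance of the mean: 2 * tau * sigma^2(f) / N.
tau = integrated_autocorr_time(x)
predicted_var = 2.0 * tau * np.var(x) / N

print(f"tau ~ {tau:.1f}, predicted var ~ {predicted_var:.2e}, "
      f"empirical var ~ {empirical_var:.2e}")
```

For an uncorrelated chain the same estimate returns τ ≈ 1/2, recovering the i.i.d. reduction above.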
The other sampling methods are difficult to compare directly with simple quadrature for a number of reasons.
The variance of importance sampling in the general case is not directly compared with MC in the literature because it depends on the choice of the "importance sampling distribution", which is problem specific.
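To make this dependence on the proposal concrete, here is a small, self-contained Python experiment of our own (the integrand, the truncated-exponential proposal, and the rate parameters are illustrative assumptions, not taken from the paper): the same integral is estimated with plain MC and with two importance sampling proposals, and the resulting standard deviations differ by orders of magnitude, which is why no single MC-style variance formula can be quoted for importance sampling in general.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 2_000, 500

def f(x):
    # Integrand concentrated near x = 0 on [0, 1].
    return np.exp(-10.0 * x)

true_value = (1.0 - np.exp(-10.0)) / 10.0

def plain_mc():
    x = rng.uniform(0.0, 1.0, N)
    return f(x).mean()

def importance_sampling(lam):
    # Proposal: truncated exponential on [0, 1] with rate lam,
    # density q(x) = lam * exp(-lam * x) / (1 - exp(-lam)).
    u = rng.uniform(0.0, 1.0, N)
    x = -np.log(1.0 - u * (1.0 - np.exp(-lam))) / lam  # inverse-CDF sampling
    q = lam * np.exp(-lam * x) / (1.0 - np.exp(-lam))
    return (f(x) / q).mean()  # uniform target density p(x) = 1

for name, est in [("plain MC", plain_mc),
                  ("IS, lam = 10 (matched to f)", lambda: importance_sampling(10.0)),
                  ("IS, lam = -5 (mismatched)", lambda: importance_sampling(-5.0))]:
    vals = np.array([est() for _ in range(reps)])
    print(f"{name:28s} bias ~ {vals.mean() - true_value:+.1e}, "
          f"std ~ {vals.std():.1e}")
```

The well-matched proposal gives near-zero variance, while the mismatched one is considerably worse than plain MC.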
B Appendix: Simulation results
For each simulation, the standard deviation of the composite quadrature integration error is fit to a power law ∝ N^{-χ} in the total number of samples N, where χ is the fitted exponent, as the expected value of the error is zero. If the expected error is not zero, the error is biased and its standard deviation alone no longer characterizes it. The theory exponent χ in the tables below equals the worst-case error exponent plus the dimension-independent 1/2, and it is compared with the simulated exponent for each dimension D.
Composite rule with worst-case error exponent 1/D:

| Dimension D | Error exponent | Theory exponent χ | Simulated exponent |
| --- | --- | --- | --- |
| 1 | 1.0 | 1.5 | 1.500 |
| 2 | 0.5 | 1.0 | 1.004 |
| 4 | 0.25 | 0.75 | 0.752 |
| 8 | 0.125 | 0.625 | 0.625 |
| 16 | 0.0625 | 0.5625 | 0.564 |
Composite rule with worst-case error exponent 2/D:

| Dimension D | Error exponent | Theory exponent χ | Simulated exponent |
| --- | --- | --- | --- |
| 1 | 2.0 | 2.5 | 2.501 |
| 2 | 1.0 | 1.5 | 1.498 |
| 4 | 0.5 | 1.0 | 1.003 |
| 8 | 0.25 | 0.75 | 0.748 |
| 16 | 0.125 | 0.625 | 0.625 |
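The exponent fit behind these tables can be illustrated with the following toy Python model of our own (it is not the authors' simulation code): the per-cell error contributions of a one-dimensional (D = 1) composite rule of polynomial degree p are treated as i.i.d. zero-mean terms of size h^{p+2}, so the worst-case total error scales as N^{-(p+1)} while its standard deviation gains the extra dimension-independent factor N^{-1/2}; regressing the simulated standard deviation against N on a log–log scale recovers χ.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_error_std(N, degree, reps=400):
    """Toy model of a 1-D composite quadrature rule of polynomial degree p:
    each of the N cells (width h = 1/N) contributes an independent, zero-mean
    error term of size ~ h**(degree + 2); the total error is their sum."""
    h = 1.0 / N
    per_cell = rng.standard_normal((reps, N)) * h ** (degree + 2)
    total_error = per_cell.sum(axis=1)
    return total_error.std()

for degree, label in [(0, "degree-0 rule"), (1, "degree-1 rule")]:
    Ns = np.array([2 ** k for k in range(6, 14)])
    stds = np.array([simulated_error_std(N, degree) for N in Ns])
    # Fit std ~ N**(-chi) on a log-log scale; minus the slope is chi.
    chi = -np.polyfit(np.log(Ns), np.log(stds), 1)[0]
    print(f"{label}: fitted chi ~ {chi:.2f} "
          f"(worst-case exponent {degree + 1}, theory chi {degree + 1.5})")
```

The fitted exponents come out near 1.5 and 2.5, matching the D = 1 rows of the two tables above.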
Acknowledgements
We would like to acknowledge the thoughtful conversations we had with Zeyad Al Awwad, Arwa Alanqary, and Nicholas Carrara during the writing of this article.