Abstract
Generalized Weighted Analog Sampling (GWAS) is a variance-reducing method for solving radiative transport problems that makes use of a biased (though asymptotically unbiased) estimator. The introduction of bias provides a mechanism for combining the best features of unbiased estimators while avoiding their limitations. In this paper we present a new proof that adaptive GWAS estimation, which combines the variance-reducing power of importance sampling with the sampling simplicity of correlated sampling, yields geometrically convergent estimates of radiative transport solutions. The new proof establishes a stronger and more general theory of geometric convergence for GWAS.
Funding statement: The second author gratefully acknowledges partial support from NIH NIBIB Laser Microbeam and Medical Program (LAMMP, P41EB015890) and from NIH R25GM103818.
A Preliminary estimates
The following formula from probability theory can be found in [12] and is used in many of our proofs.
For any random variables X and Y,

\[
\operatorname{Var}[X] = E\bigl[\operatorname{Var}[X \mid Y]\bigr] + \operatorname{Var}\bigl[E[X \mid Y]\bigr].
\]
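As a quick numerical illustration of this decomposition, the following sketch checks it by simulation for a simple Gaussian pair (the distributions chosen here are illustrative, not taken from the paper):

```python
import numpy as np

# Monte Carlo check of Var[X] = E[Var[X|Y]] + Var[E[X|Y]]
# with Y ~ N(0,1) and X | Y ~ N(Y,1), so E[X|Y] = Y and Var[X|Y] = 1.
rng = np.random.default_rng(0)
n = 10**6
y = rng.standard_normal(n)
x = y + rng.standard_normal(n)

lhs = x.var()        # Var[X]; exact value is 2
rhs = 1.0 + y.var()  # E[Var[X|Y]] + Var[E[X|Y]] = 1 + Var[Y]
print(lhs, rhs)      # both approximately 2
```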
As well, the following set-theoretic result arises in estimating probabilities:
Assume that A and B are two subsets of a probability space and, for two small positive numbers \varepsilon_1 and \varepsilon_2,

\[
P(A) \ge 1 - \varepsilon_1, \qquad P(B) \ge 1 - \varepsilon_2.
\]

Then

\[
P(A \cap B) \ge 1 - \varepsilon_1 - \varepsilon_2.
\]
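This bound is a direct consequence of inclusion–exclusion:

\[
P(A \cap B) = P(A) + P(B) - P(A \cup B) \ge (1 - \varepsilon_1) + (1 - \varepsilon_2) - 1 = 1 - \varepsilon_1 - \varepsilon_2.
\]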
Proof of Lemma 1.
We will use the following formulas, for any random variables X and Y:

\[
E[X] = E\bigl[E[X \mid Y]\bigr], \qquad \operatorname{Var}[X] = E\bigl[\operatorname{Var}[X \mid Y]\bigr] + \operatorname{Var}\bigl[E[X \mid Y]\bigr].
\]
For





where we have used

As for

which is (5.2).
To calculate the variance of

The first term on the right-hand side is equal to zero because, under the condition

where it can be easily verified that

or

which is (5.3). To calculate




Proof of Lemma 2.
From (2.11) and (5.1), we obtain

We then have

Using the first inequality of (5.5) and also applying Proposition 2, we obtain

which means

as both sides of the inequality are deterministic. Estimate (5.6) is proved.
Now, we derive an upper bound of


or



Since



or



Taking the maximum value of both sides, we obtain



In (A.3), notice that




where κ is defined by (2.4). Therefore, from (A.3), we have


Now using the second inequality of (5.5) and Proposition 2, we have

or

which means

because both sides of the inequality are deterministic and
The proof of Lemma 2 is completed. ∎
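The proof of Corollary 3 below rests on Chebyshev's inequality, which in its standard form states that, for any random variable W with finite variance and any a > 0,

\[
P\bigl(\lvert W - E[W] \rvert \ge a\bigr) \le \frac{\operatorname{Var}[W]}{a^2}.
\]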
Proof of Corollary 3.
According to Chebyshev’s inequality, for any W and

which means

Using (5.6), we have

Now we just choose

to complete the proof of Corollary 3. ∎
Proof of Lemma 4.
From the definition (3.7) of ζ and equation (3.9) about







Noticing (4.5) and (4.7) about the definition of

Therefore,






which can be estimated by


Obviously, if

and, therefore,


On the other hand, using (2.4),



Applying (A.7) to (A.6), we obtain



Using condition (5.8) and Proposition 2, we obtain

The proof of Lemma 4 is completed. ∎
Proof of Theorem 6.
Noticing the definition (3.7) of ζ and the equation (3.9) satisfied by





Using (2.11), we obtain








Noticing the notations





Using (A.5) in the proof of Lemma 4, we obtain


Since, for any a and b,


Noticing the definition of





Using (5.10) together with Proposition 2, we obtain (5.11).
The proof is completed. ∎
B Estimation of the bias
Proof of Theorem 1.
Recalling the definition of

Note that
Now, from Corollary 3, for

Therefore, from (B.1),

or

According to (4.9), the definition of

where the subscript w indicates that
As for the random variable Z, we can prove it similarly. Notice

Then using (B.2), we obtain the first inequality of (6.1).
This completes the proof of Theorem 1. ∎
C Estimation of second moments
Proof of Theorem 1.
We first estimate

i.e.,
We have

Now, from Corollary 3, for

Therefore, from (C.1),

or


Because of the independence of the random walks for different i and j, the second sum in the braces of (C.2) vanishes. We then have

or

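The vanishing of the cross terms is the standard second-moment identity for independent summands: assuming, as the cancellation above indicates, that the summands X_i in (C.2) are centered and mutually independent,

\[
E\Bigl[\Bigl(\sum_i X_i\Bigr)^2\Bigr] = \sum_i E[X_i^2] + \sum_{i \ne j} E[X_i]\,E[X_j] = \sum_i E[X_i^2].
\]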
According to condition (7.3), we can apply Lemma 6, specifically inequality (5.12):

Now combining (C.3) and (C.4), and using Proposition 2, we obtain

which is the second inequality of (7.4) (after redefining the constants
As for V, we note

Similarly, we can obtain the first inequality of (7.4).
In order to derive (7.5), we notice that



and

Combining (C.5), (C.6) and (C.7), we obtain

Now using (7.2) and Proposition 2, we obtain

where

Noticing the second inequality of (7.1), since

which is the first inequality of (7.5). The second inequality can be obtained similarly.
This completes the proof. ∎
D General geometric convergence
Proof of Theorem 1.
We will prove by induction that, for any

We have added two additional inequalities to the list (the first two inequalities of (D.1)) because they are needed in the proof of the third inequality.
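Before turning to the details, it may help to see the mechanism behind the induction in miniature. The following toy sketch is not the paper's transport algorithm; it solves the scalar fixed point u = s + ku (with exact solution u = s/(1-k)) by adaptive stages in which the statistical error of each correction is proportional to the current residual, so the error contracts by a fixed factor per stage, which is exactly the geometric behavior being proved:

```python
import numpy as np

# Toy model of sequential correlated sampling: each stage estimates the
# correction driven by the current residual source, and the Monte Carlo
# noise of that estimate scales with the residual itself, so the error
# shrinks geometrically once enough samples are used per stage.
rng = np.random.default_rng(1)
s, k = 1.0, 0.5
u_exact = s / (1.0 - k)

def mc_solve(source, n, noise=0.5):
    # Unbiased estimate of source / (1 - k) with relative per-sample noise.
    samples = (source / (1.0 - k)) * (1.0 + noise * rng.standard_normal(n))
    return samples.mean()

u_hat, n_per_stage = 0.0, 400
for m in range(10):
    residual = s + k * u_hat - u_hat          # proportional to the error
    u_hat += mc_solve(residual, n_per_stage)  # stage-m correction
    print(m, abs(u_hat - u_exact))            # decreases geometrically
```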
For

where we have taken
We need to estimate


and

From (D.2), (D.3) and (D.4), we obtain

Applying the first inequality (D.1) and Proposition 2, we obtain

where

or after we replace

Combining (D.5) with (D.6) and noticing Proposition 2, we obtain

Thus (D.1) is proved for

We now prove (D.1) for the general case. Again, according to Chebyshev’s inequality,

or after replacing

or

According to Theorem 1,

where

or by (2.6),

Noticing (4.2), we can find a


We then have

In order to prove the second inequality of (D.1), we notice


Combining (D.11) with (D.12) and applying Proposition 2 (about the lower bound),


Combining (D.8) with (D.13) and applying Proposition 2, we obtain

Applying (D.10), we obtain

We can then pick a

Combining (D.14) with (D.15), when

which is the second inequality of (D.1).
The third inequality of (D.1) can be easily proved: we need only repeat the steps from (D.2) through (D.7) with the superscript 1 replaced by m and 0 replaced by

Thus, when
The proof of Theorem 1 is completed. ∎
E Geometric convergence for expansion in basis functions
Proof of Theorem 1.
Using the expressions of the true solution




Therefore, estimating
The proof is similar to that of Theorem 1. We will prove by induction that, for any

We have added two additional inequalities to the list (the first two inequalities of (E.2)) because they are needed in the proof of the third inequality.
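Throughout this appendix one may keep in mind a representation of the approximate solution of the following form (the symbols used here are illustrative stand-ins for the basis functions and coefficients defined in the main text):

\[
\tilde{\Psi}_m(x) = \sum_{k=1}^{K} a_k^{(m)} B_k(x),
\]

so that each adaptive stage updates finitely many random coefficients a_k^{(m)}, and the induction amounts to bounding these coefficients stage by stage.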
For

where we have taken
We need to estimate


and

From (E.3), (E.4) and (E.5), we obtain

Applying the first inequality (E.2) and Proposition 2, we obtain

where

or after we replace

Combining (E.1) with (E.7) and (E.8) and noticing Proposition 2, we obtain

Thus (E.2) is proved for

We now prove (E.2) for the general case. Again, according to Chebyshev’s inequality,

or after replacing

which leads to

or using Proposition 2,

According to Theorem 1,

where

or by (2.6),

Noticing (4.2), we can find a


We then have

In order to prove the second inequality of (E.2), we notice


Combining (E.13) with (E.14) and applying Proposition 2 (about the lower bound),


Combining (E.10) with (E.15) and applying Proposition 2, we obtain


Applying (E.12), we obtain


We can pick a sufficiently large N to make

Combining (E.16) with (E.17), when

which is the second inequality of (E.2).
The third inequality of (E.2) can be easily proved: we need only repeat the steps from (E.6) through (E.9) with the superscript 1 replaced by m and 0 replaced by

Thus, when
The proof of Theorem 1 is completed. ∎
References
[1] Booth T., Exponential convergence for Monte Carlo particle transport, Trans. Amer. Nuclear Soc. 50 (1985), 267–268.
[2] Booth T., Zero-variance solutions for linear Monte Carlo, Nuclear Sci. Eng. 102 (1989), 332–340. doi:10.13182/NSE89-A23646.
[3] Booth T., Exponential convergence on a continuous Monte Carlo transport problem, Nuclear Sci. Eng. 127 (1997), 338–345. doi:10.13182/NSE97-A1939.
[4] Case K. M. and Zweifel P. W., Linear Transport Theory, Addison-Wesley, Reading, 1967.
[5] Kong R. and Spanier J., Error analysis of sequential Monte Carlo methods for transport problems, Monte Carlo and Quasi-Monte Carlo Methods 1998 (Claremont 1998), Springer, Berlin (2000), 252–272. doi:10.1007/978-3-642-59657-5_17.
[6] Kong R. and Spanier J., Sequential correlated sampling methods for some transport problems, Monte Carlo and Quasi-Monte Carlo Methods 1998 (Claremont 1998), Springer, Berlin (2000), 238–251. doi:10.1007/978-3-642-59657-5_16.
[7] Kong R. and Spanier J., Residual versus error in transport problems, Monte Carlo and Quasi-Monte Carlo Methods 2000 (Hong Kong 2000), Springer, Berlin (2002), 306–317. doi:10.1007/978-3-642-56046-0_20.
[8] Kong R. and Spanier J., A new proof of geometric convergence for general transport problems based on sequential correlated sampling methods, J. Comput. Phys. 227 (2008), 9762–9777. doi:10.1016/j.jcp.2008.07.016.
[9] Kong R. and Spanier J., Geometric convergence of adaptive Monte Carlo algorithms for radiative transport problems based on importance sampling methods, Nuclear Sci. Eng. 168 (2011), 197–225. doi:10.13182/NSE10-29.
[10] Lai Y. and Spanier J., Adaptive importance sampling algorithms for transport problems, Monte Carlo and Quasi-Monte Carlo Methods 1998 (Claremont 1998), Springer, Berlin (2000), 273–283. doi:10.1007/978-3-642-59657-5_18.
[11] Powell M. J. D. and Swann J., Weighted uniform sampling – a Monte Carlo technique for reducing variance, J. Inst. Math. Appl. 2 (1966), 228–236. doi:10.1093/imamat/2.3.228.
[12] Ross S., A First Course in Probability, 5th ed., Prentice-Hall, Upper Saddle River, 1998.
[13] Spanier J. and Gelbard E. M., Monte Carlo Principles and Neutron Transport Problems, Addison-Wesley, Reading, 1969.
[14] Spanier J. and Kong R., A new adaptive method for geometric convergence, Monte Carlo and Quasi-Monte Carlo Methods 2002 (Singapore 2002), Springer, Berlin (2004), 439–449. doi:10.1007/978-3-642-18743-8_27.
[15] Silverman B. W., Density Estimation for Statistics and Data Analysis, Chapman and Hall, London, 1986.
[16] Advanced Monte Carlo Methods, CRIAMS report LANL-03-001 to Los Alamos National Laboratory, February 2003.