Decision theoretic Bayesian hypothesis testing with the selection goal
Naveen K. Bansal
Consider a probability model P_{θ,α}, where θ=(θ1,θ2,…,θk)^T is a parameter vector of interest and α is a nuisance parameter. The problem of testing the null hypothesis H0: θ1=θ2=…=θk against selecting one of k alternative hypotheses Hi: θi=θ[k] > θ[1], i=1,2,…,k, where θ[k]=max{θ1,θ2,…,θk} and θ[1]=min{θ1,θ2,…,θk}, is formulated from a Bayesian decision-theoretic point of view. The problem can be viewed as selecting the component with the largest parameter value when the null hypothesis is rejected. General results are obtained for the Bayes rule under monotone permutation-invariant loss functions. Bayes rules are obtained for k one-parameter exponential families of distributions under conjugate priors. The example of normal populations is considered in more detail under non-informative (improper) priors. This example demonstrates that classical hypothesis testing yields poor power compared with the Bayes rules when the alternatives are such that a small fraction of the components of θ have significantly high values while most have low values. Consequences for high-dimensional data, such as microarray data, are pointed out.
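The power comparison claimed above can be illustrated with a small Monte Carlo sketch. This is not the paper's actual Bayes rule: a simple max-type statistic stands in for the selection-oriented rule, an aggregate sum-of-squares statistic stands in for the classical omnibus test, and the choices k=100, a single elevated mean of 4, unit variances, and level 0.05 are all illustrative assumptions. Both tests are calibrated to the same level by simulation under H0.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_sim, alpha = 100, 20000, 0.05  # illustrative choices, not from the paper

def stats(x):
    # Two test statistics for H0: all means equal.
    # sum-of-squares: an omnibus (classical chi-square/F-type) statistic;
    # max: sensitive to a single elevated component (selection-oriented stand-in).
    d = x - x.mean()
    return np.sum(d**2), np.max(d)

# Calibrate both tests to level alpha under H0 (all means equal, taken as 0).
null = np.array([stats(rng.normal(0.0, 1.0, k)) for _ in range(n_sim)])
crit_sum = np.quantile(null[:, 0], 1 - alpha)
crit_max = np.quantile(null[:, 1], 1 - alpha)

# Sparse alternative: one component elevated, the rest unchanged.
theta = np.zeros(k)
theta[0] = 4.0
alt = np.array([stats(rng.normal(theta, 1.0)) for _ in range(n_sim)])
power_sum = float(np.mean(alt[:, 0] > crit_sum))
power_max = float(np.mean(alt[:, 1] > crit_max))

print(f"omnibus power: {power_sum:.2f}, max-type power: {power_max:.2f}")
```

Under this sparse configuration the max-type rule detects the departure from H0 far more often than the omnibus statistic, which dilutes the single large effect over all k components; this is the qualitative phenomenon the abstract attributes to classical tests in the microarray setting.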
© Oldenbourg Wissenschaftsverlag