Incorporating the Empirical Null Hypothesis into the Benjamini-Hochberg Procedure
Debashis Ghosh
Abstract
For the problem of multiple testing, the Benjamini-Hochberg (B-H) procedure has become a very popular method in applications. We show how the B-H procedure can be interpreted as a test based on the spacings corresponding to the p-value distributions. This interpretation leads to the incorporation of the empirical null hypothesis, a term coined by Efron (2004). We develop a mixture modelling approach for the empirical null hypothesis for the B-H procedure and demonstrate theoretical results on both finite-sample and asymptotic control of the false discovery rate. The methodology is illustrated with application to two high-throughput datasets as well as to simulated data.
©2012 Walter de Gruyter GmbH & Co. KG, Berlin/Boston
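For context, the standard B-H step-up procedure that the paper reinterprets works on the sorted p-values: reject the k smallest, where k is the largest index with p_(k) ≤ kα/m. The empirical null idea of Efron (2004), which the paper builds into this procedure, replaces the theoretical null distribution with one estimated from the bulk of the observed test statistics. The paper's spacings interpretation and mixture-model extension are not reproduced here; the sketch below shows only the standard B-H procedure in Python, with the function name, simulated p-value mixture, and nominal level chosen purely for illustration.

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Standard Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    at nominal FDR level alpha.  (Illustrative sketch; not code
    from the paper, which modifies the null distribution used.)
    """
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                     # indices of p-values, ascending
    sorted_p = p[order]
    # B-H thresholds: compare p_(k) with k * alpha / m for k = 1..m
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k_max = np.max(np.nonzero(below)[0])  # largest k with p_(k) <= k*alpha/m
        reject[order[:k_max + 1]] = True      # reject the k_max + 1 smallest p-values
    return reject

# Hypothetical example: 10,000 tests, 5% drawn from an alternative
# whose p-values concentrate near zero.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=9500)
alt_p = rng.beta(0.1, 1.0, size=500)
rejected = benjamini_hochberg(np.concatenate([null_p, alt_p]), alpha=0.10)
print(rejected.sum(), "hypotheses rejected")
```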
Articles in the same Issue
- Article
- A New Explained-Variance Based Genetic Risk Score for Predictive Modeling of Disease Risk
- Hessian Calculation for Phylogenetic Likelihood based on the Pruning Algorithm and its Applications
- Cluster-Localized Sparse Logistic Regression for SNP Data
- How to analyze many contingency tables simultaneously in genetic association studies
- Incorporating the Empirical Null Hypothesis into the Benjamini-Hochberg Procedure
- Estimating the Number of One-step Beneficial Mutations
- Testing clonality of three and more tumors using their loss of heterozygosity profiles
- Correction for Founder Effects in Host-Viral Association Studies via Principal Components
- A Non-Homogeneous Dynamic Bayesian Network with Sequentially Coupled Interaction Parameters for Applications in Systems and Synthetic Biology
- An Integrated Hierarchical Bayesian Model for Multivariate eQTL Mapping
- A Novel and Fast Normalization Method for High-Density Arrays
- Performance of MAX Test and Degree of Dominance Index in Predicting the Mode of Inheritance
- A Bayesian autoregressive three-state hidden Markov model for identifying switching monotonic regimes in Microarray time course data
- QTL Mapping Using a Memetic Algorithm with Modifications of BIC as Fitness Function
- Computing Posterior Probabilities for Score-based Alignments Using ppALIGN