Confidence intervals and hypothesis testing for a single proportion with clustered binary data
Short, Meghan Ilene
When outcome data in a clinical trial are clustered and binary, such as in a trial estimating the specificity of a diagnostic test in which each participant contributes multiple observations, methods must account for clustering to correctly estimate the variance of the proportion of interest. Confidence interval methods that account for clustering when estimating the precision of a single proportion have been developed; however, their coverage probability leaves room for improvement when the sample size is small. We propose a continuity-corrected confidence interval based on the Wilson score interval and conduct a Monte Carlo simulation study comparing its coverage probability to that of existing confidence interval methods. We found that the new interval gives coverage closer to the nominal level at smaller sample sizes than existing methods when there are ≥ 5 measurements per cluster.

Although confidence interval methods exist for this setting, the best-performing of them have not been converted into one-sample hypothesis tests versus a performance goal. We derive test statistics corresponding to the existing confidence intervals and to the new confidence interval method we propose, and use a Monte Carlo simulation study to compare the Type I error control of the resulting one- and two-sided hypothesis tests under a range of scenarios. In many cases, the Type I error control of the novel test was superior to that of the tests derived from existing confidence intervals. Based on the results of these simulation studies, we develop tables of recommendations for practitioners wishing to use a confidence interval or a one- or two-sided hypothesis test when data are clustered and binary.

Finally, to appropriately power a study using the new statistical test we propose, formulas for theoretical power and sample size are needed.
We derive power and sample size formulas for the new hypothesis test, and compare theoretical power to simulated power in a Monte Carlo simulation study. In general, the power formula produces values within 10% of simulated power in cases where Type I error is controlled.
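The abstract does not give the exact form of the proposed interval, but the general idea of a cluster-adjusted, continuity-corrected Wilson score interval can be sketched. The snippet below is a minimal illustration under two assumptions not stated in the source: clustering is handled through a design effect (`deff = 1 + (m_bar - 1) * icc`) that shrinks the total sample size to an effective sample size, and the continuity correction is Newcombe's standard correction to the Wilson interval. The thesis's actual interval may differ in both respects.

```python
import math

def wilson_cc_clustered(x, n, m_bar, icc, z=1.959964):
    """Hypothetical cluster-adjusted, continuity-corrected Wilson interval.

    x     : total number of successes across all observations
    n     : total number of observations
    m_bar : average cluster size
    icc   : intracluster correlation coefficient
    z     : critical value (default approximates a 95% interval)
    """
    p = x / n
    # Design effect: clustering inflates the variance of the sample
    # proportion, so we deflate n to an effective sample size (assumption).
    deff = 1.0 + (m_bar - 1.0) * icc
    ne = n / deff
    xe = p * ne
    denom = 2.0 * (ne + z * z)
    # Newcombe's continuity-corrected Wilson bounds, applied to (xe, ne).
    lo = (2.0 * xe + z * z - 1.0
          - z * math.sqrt(z * z - 2.0 - 1.0 / ne
                          + 4.0 * p * (ne * (1.0 - p) + 1.0))) / denom
    hi = (2.0 * xe + z * z + 1.0
          + z * math.sqrt(z * z + 2.0 - 1.0 / ne
                          + 4.0 * p * (ne * (1.0 - p) - 1.0))) / denom
    # Keep the corrected bounds inside [0, 1]; pin the boundary cases.
    lo = 0.0 if x == 0 else max(0.0, lo)
    hi = 1.0 if x == n else min(1.0, hi)
    return lo, hi

# Example: 180 "negative" results in 200 observations (specificity 0.90),
# 40 clusters of 5 observations each, ICC = 0.2.
lo, hi = wilson_cc_clustered(180, 200, m_bar=5, icc=0.2)
```

Setting `icc=0` recovers the ordinary continuity-corrected Wilson interval, and any positive ICC widens the interval, reflecting the information lost to within-cluster correlation.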