
Thursday, 11 May 2017

A note on likelihood ratio testing for average error control

In an fMRI analysis, testing for activation in each of more than 100,000 voxels induces a huge multiple testing problem. To guard against an explosion of false positives (FPs), thresholding is made very conservative, but this comes at the price of a problematic increase in false negatives (FNs). In their recent paper, Kang et al. (2015) propose a likelihood ratio (LR) approach that contrasts the evidence in favor of true activation against the evidence in favor of the null. They show how the likelihood paradigm (LP) controls average FP and FN error rates, decreasing FNs with only a slight increase in FPs. Their work is promising and a welcome contribution to the development of methods that do not focus solely on classical null hypothesis testing but also take practical relevance into account. The authors acknowledge that the approach is specific to the effect size (ES) specified under the alternative and point out that this requires further research. They show that choosing an ES equal to a percentile between the 90th and the 99th of the contrast of interest is one possibility. In this note, we study the impact of this choice of ES in more detail. First, we want to raise awareness that the value of percentiles of estimated contrast values depends strongly on the proportion of the brain activated by the task. For example, in the context of localizer tasks, we expect to pinpoint a single brain region and hence a small activated volume. The 95th percentile would then underestimate the true ES, since the ESs of the active voxels will lie in the right tail of the distribution of estimated contrasts. Secondly, the LP measures evidence between two simple hypotheses (Blume, 2002). This requires valid estimation of the specified ES, as both under- and overestimation of the true ES will reduce the LR of active voxels.

Methods

We simulated single-subject contrast value maps (resolution: 32 × 32; voxel size: 1 mm × 1 mm) with a proportion q of active voxels (ES = 2.5% BOLD). Gaussian noise with a standard deviation of 8 was added to the image, resulting in a CNR of 0.32. The LR for each voxel was calculated as the likelihood of the data under the simple alternative with a specified ES, divided by the likelihood of the data under the null. First, we let q vary from 0.01 to 0.99 and in each step used the 95th percentile of the estimated contrasts as the specified ES; we demonstrate the effect of a varying q on the LR of one active voxel. Second, we set q = 0.098 and let the specified ES vary from 0.5% to 4.5% BOLD. For the dichotomous LP (dLP), all voxels with an LR greater than or equal to k were retained. For the continuous LP (cLP), voxels with an LR smaller than or equal to 1/k were labelled inactive, voxels with an LR greater than or equal to k were labelled active, and voxels with an LR between 1/k and k were labelled as showing weak evidence. Using contrast maps, we demonstrate the effect of under- or overestimating the true ES for k = 8. (Minimal sketches of these steps follow the abstract.)

Results

We show how the LR varies as a function of the proportion of active voxels, through the resulting variation in the specified ES. Evidence for activation is only convincing if the specified ES is close to the true ES (2.5% BOLD). Additionally, misspecifying the alternative hypothesis reduces the LR of the active voxels, resulting in more FNs. For the cLP, many null voxels exhibit weak evidence when the true ES is underestimated.

Conclusions

Kang et al. (2015) present a valuable approach for the simultaneous control of error rates in fMRI data analysis. Our results demonstrate the importance of a correct specification of the alternative hypothesis.
Voxels with an ES higher than the specified ES may exhibit a low LR and hence show inconclusive evidence. Further research is needed on possible choices of the specified ES and on its use for evaluating evidence for activation.
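
To make the Methods concrete, here is a minimal Python sketch of the simulation and the voxelwise LR computation, assuming independent Gaussian noise at each voxel. The grid size, ES, noise standard deviation, and percentile are taken from the abstract; the variable names and the use of numpy/scipy are our own illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n = 32                 # map resolution (32 x 32 voxels)
q = 0.098              # proportion of active voxels (second manipulation)
true_es = 2.5          # true effect size in % BOLD
sigma = 8.0            # noise standard deviation (CNR = true_es / sigma)

# Simulate a single-subject contrast value map: a proportion q of voxels
# carries the true ES, and Gaussian noise is added everywhere.
active = rng.random((n, n)) < q
contrast = np.where(active, true_es, 0.0) + rng.normal(0.0, sigma, (n, n))

# Specified ES chosen as the 95th percentile of the estimated contrasts.
spec_es = np.percentile(contrast, 95)

# Voxelwise LR: likelihood under the simple alternative (mean = spec_es)
# divided by the likelihood under the null (mean = 0).
lr = norm.pdf(contrast, loc=spec_es, scale=sigma) / norm.pdf(contrast, loc=0.0, scale=sigma)
```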
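
The note's first point is that this percentile, and hence the specified ES, moves with the activated proportion q. Continuing from the sketch above, the loop below recomputes the 95th percentile for a few illustrative values of q (the specific values are our choice):

```python
# First manipulation in the Methods: vary q and observe how the
# 95th percentile used as the specified ES shifts with it.
for q_step in (0.01, 0.10, 0.50, 0.99):
    active_q = rng.random((n, n)) < q_step
    contrast_q = np.where(active_q, true_es, 0.0) + rng.normal(0.0, sigma, (n, n))
    print(f"q = {q_step:.2f}: 95th percentile = {np.percentile(contrast_q, 95):.2f}")
```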
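
Finally, the dLP and cLP decision rules from the Methods can be sketched as follows, applied to the LR map from the first sketch with k = 8. The three-way labelling follows the definitions in the abstract; the string labels are our own.

```python
k = 8.0  # evidence threshold used in the abstract

# Dichotomous LP (dLP): retain voxels with LR >= k.
dlp_retained = lr >= k

# Continuous LP (cLP): inactive if LR <= 1/k, active if LR >= k,
# and weak evidence in between.
clp = np.full(lr.shape, "weak", dtype=object)
clp[lr <= 1.0 / k] = "inactive"
clp[lr >= k] = "active"
```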

http://ift.tt/2pC2BC3
