Thursday, January 15, 2009

Voodoo Counter-Counterpoint



In today's Weekday Update, The Neurocritic is pleased to present an excerpt from Vul et al.'s rebuttal to Jabbi et al.'s rebuttal to the lively and controversial paper by Vul, Harris, Winkielman, and Pashler (PDF).

The intro, main bullet points from Jabbi et al. (bold font), and shortened versions of the rejoinder (regular font) are reproduced below. Go to Ed Vul's Voodoo Rebuttal page to read the complete text.
Voodoo Correlations in Social Neuroscience: Rebuttal and Rejoinder:
We are pleased to see that some of the authors whose methods we criticized put forth a response for the press. We look forward to responding to a full rebuttal in a peer reviewed journal, but in the meantime, this is a brief response.

1. Correction for multiple comparisons safeguards against inflation of correlations.
It appears that the authors of the rebuttal misunderstand what correction for multiple comparisons provides...

2. Our claims based on calculations of an 'upper bound' on the correlations are inappropriate.
.....The fact that there is some variability and uncertainty associated with reliability estimates does not seem to us to be likely very important in understanding why this literature has featured so many enormous correlations.

3. Our simulations are misleading about false alarm rates.
.....We did not intend to make any assertions about the rate of false alarms, nor to claim that all the correlations that we contend to be inflated are false alarms.

4. Non-independent analyses sometimes yield low or non-significant correlations.
In the rebuttal, the authors assert that the sorts of non-independent analyses we describe do not always produce substantial correlations. However, they do not provide specific examples, so we are not able to meaningfully comment.

5. Correlation magnitude is not so important.
.....Whether or not the authors themselves care about the magnitude of the correlations, their procedures for producing these correlation estimates produce inflated numbers. The scientific literature should, where possible, be free of such erroneous measurements.

6. If non-independent analyses are so untrustworthy, why are they producing replicable results?
This is a very important point: if what we say is true, why do replications of the measured correlations occur? Assessing this claim requires an in-depth examination of specific literatures, which is beyond the scope of this rapid response, but we look forward to examining some specific cases in the future.....

7. Our survey was misleading and confusing.
This critical point would seem to be whether we mis-classified the methods of some studies, and counted them as having conducted non-independent analyses, when in fact they had not. If this happened, we would regret it, and any authors who feel that their papers have been misclassified should please contact us and provide details.....

8. Our suggested split-half analyses are not necessarily non-independent.
Here we think the authors of the rebuttal bring up an excellent point. ..... There is evidently no single perfect analysis of brain-behavior correlations, but the procedures we suggest should offer a major improvement over the non-independent approaches being widely used.
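
A quick aside on point 2, for readers wondering where the "upper bound" comes from: it's the classical attenuation argument, which says the expected correlation between two noisy measures can't exceed the square root of the product of their reliabilities. Here's the arithmetic in a few lines of Python, using ballpark reliability figures of the sort discussed in this debate rather than exact numbers from either paper:

# Classical attenuation ceiling on an observed correlation:
#   r_observed <= sqrt(reliability_x * reliability_y)
# The reliability values below are ballpark assumptions, not figures
# taken from either paper.
import math

reliability_fmri = 0.7       # assumed test-retest reliability of the fMRI measure
reliability_behavior = 0.8   # assumed reliability of the personality/emotion scale

ceiling = math.sqrt(reliability_fmri * reliability_behavior)
print(f"expected ceiling on observed r: {ceiling:.2f}")   # lands in the mid-.7s

Hence the argument that brain-behavior correlations approaching .8 or .9 are implausibly high.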
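
And for points 1, 3, and 8, here is a toy simulation (again mine, not from either set of authors) of the selection effect at the heart of the dispute. Every voxel has the same modest true correlation with behavior. Selecting voxels whose sample correlation clears a stringent threshold, a stand-in for a corrected significance test, and then reporting the correlation at those same voxels yields an inflated estimate; selecting on one half of the subjects and measuring on the other half, in the spirit of the split-half procedure Vul et al. suggest, does not:

# Toy demo of the "non-independence" problem: selecting voxels by their
# correlation with behavior and then reporting the correlation at those same
# voxels inflates the estimate, even though the selection threshold
# (a stand-in for a multiple-comparisons-corrected test) limits false positives.
# All numbers here are hypothetical; this is a sketch, not either paper's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000
true_r = 0.3        # modest true brain-behavior correlation at every voxel
threshold = 0.71    # stringent selection cutoff (stand-in for a corrected test)

behavior = rng.standard_normal(n_subjects)
# each voxel = shared signal + independent noise, so each correlates ~true_r with behavior
noise = rng.standard_normal((n_subjects, n_voxels))
voxels = true_r * behavior[:, None] + np.sqrt(1 - true_r**2) * noise

def corr_with_behavior(data, beh):
    """Pearson correlation of each voxel (column of data) with the behavioral score."""
    data_z = (data - data.mean(0)) / data.std(0)
    beh_z = (beh - beh.mean()) / beh.std()
    return data_z.T @ beh_z / len(beh)

# Non-independent analysis: select voxels and measure the correlation on the same data.
r_all = corr_with_behavior(voxels, behavior)
selected = r_all > threshold
print("non-independent estimate:", r_all[selected].mean())    # typically well above 0.3

# Split-half analysis: select voxels on half the subjects, measure on the other half.
half = n_subjects // 2
r_first = corr_with_behavior(voxels[:half], behavior[:half])
chosen = r_first > threshold
r_second = corr_with_behavior(voxels[half:, chosen], behavior[half:])
print("split-half (independent) estimate:", r_second.mean())  # close to the true 0.3

The point of the demo is narrow: the threshold does its job of limiting false positives, but it does nothing to correct the size of the correlation reported at the voxels that survive it.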



BPS Research Digest was a little late in covering this debate, but yesterday they wrote that a second rebuttal is in preparation:
Matthew Lieberman, a co-author on Eisenberger's social rejection study, told us that he and his colleagues have drafted a robust reply to these methodological accusations, which will be published in Perspectives on Psychological Science alongside the Pashler paper. In particular he stressed that concerns over multiple comparisons in fMRI research are not new, are not specific to social neuroscience, and that the methodological approach of the Pashler group, done correctly, would lead to similar results to those already published. "There are numerous errors in their handling of the data that they reanalyzed," he argued. "While trying to recreate their [most damning] Figure 5, we went through and pulled all the correlations from all the papers. We found around 50 correlations that were clearly in the papers Pashler's team reviewed but were not included in their analyses. Almost all of these overlooked correlations tend to work against their hypotheses."
