Walter Boot
Vision Scientist
Walter Boot's posts

Post has attachment
A nice commentary from Ed Yong at National Geographic on our paper that appeared in Perspectives on Psychological Science today. In it, we examine whether most intervention studies adequately control for placebo effects.

Post has attachment
NYT coverage of our collaborator Wendy Rogers's work on robot assistants and the elderly.

Post has attachment
Can video game training address the problem of age-related cognitive decline? An important step toward answering this question may be designing video games that older adults are willing and able to engage with. We'd first need to know what the "active ingredient" is in games that are claimed to improve cognition (we don't yet). Only then can we design games that both appeal to seniors AND include the game elements that are cognitively beneficial. Overall, we're pretty far from being able to recommend game training to seniors who wish to improve the cognitive abilities that support independent living. Physical fitness, not game-based training, still appears to be the most promising route to improving cognition, brain structure, and brain function.

Post has shared content
Some nice discussion here on how to respond (particularly, how not to respond) to studies that fail to replicate your own findings.
A primer for how not to respond when someone fails to replicate your work, with a discussion of why replication failures happen

In the linked post, John Bargh responds to a paper published in PLoS ONE that failed to replicate his finding that priming people with terms related to aging led them to walk more slowly to the elevator afterward. His post is a case study of what NOT to do when someone fails to replicate one of your findings.


Replication failures happen. In fact, they should happen some of the time even if the effect is real and the replication attempt was conducted exactly like the original study. For any effect, especially a small one, you would expect some failures to replicate. Failures to replicate could occur for any of the following reasons (and maybe others):

(1) chance -- the effect is real, but this particular test of it didn't find the effect. With small effects, you expect some percentage of exact replication attempts to fail to find effects as big as the original (see the simulation sketch after this list). Remember, measurements of behavior are inherently noisy, and it's rare to find exactly the same effect size every time. In fact, finding exactly the same effect every time is a sign of bias (and sometimes a sign of fraud).

(2) seemingly arbitrary design differences that contributed to the discrepancy -- these can be informative, helping to constrain the generalizability of the conclusions. They are grounds for further studies.

(3) poor methodology on the part of those trying to replicate the study -- it's easy to produce a null result by conducting shoddy research. In this account, the failure to replicate is a false negative due to poor design, not to subtle but reasonable design differences.

(4) poor methodology in the original research -- the original finding was a false positive due to poor controls and design. False positives could also result from design or analysis decisions that lead to reporting only the significant findings or only the significant variants of a study.

(5) chance, but for the original finding -- some published effects might be false positives, even if the original studies were conducted competently. That's especially true for underpowered studies.
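
To make reason (1) concrete, here is a minimal simulation sketch (not from the original post; the effect size, sample size, and number of runs are illustrative assumptions) showing how often an exact replication of a small but real effect reaches significance:

```python
# A minimal sketch, assuming a small true effect and a typical sample size.
# None of these numbers come from the studies discussed above.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
d = 0.3          # assumed true effect size (Cohen's d), small
n = 30           # assumed participants per group
reps = 10_000    # simulated exact replications

successes = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(d, 1.0, n)    # the same true effect every time
    _, p = ttest_ind(treated, control)
    successes += p < 0.05

print(f"Replications reaching p < .05: {successes / reps:.0%}")
# Under these assumptions only roughly one in five "exact" replications
# detects the (real) effect -- chance alone produces many failures.
```
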

Given the strong bias to publish only positive results (see work by Ioannidis, for example), an original false positive seems at least as likely as a false negative, especially when there are few if any direct replications of a published result. Given the difficulty of publishing replication failures, it's important to realize that there might be other failures to replicate that were not published (see the comment from +Alex Holcombe on the post, noting another failure to replicate this particular study).
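
As a rough, Ioannidis-style illustration of why an original false positive is plausible under publication bias (the prior, power, and alpha values below are assumptions chosen for illustration, not figures taken from the post or from Ioannidis's papers):

```python
# Illustrative arithmetic only: how many published "positive" results would be
# false positives if only significant results get published.
prior = 0.25   # assumed share of tested hypotheses that are actually true
power = 0.35   # assumed average power of the original studies
alpha = 0.05   # conventional significance threshold

true_positives = prior * power            # true effects that reach significance
false_positives = (1 - prior) * alpha     # null effects that reach significance anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Share of published positives that are real: {ppv:.0%}")
print(f"Share that are false positives: {1 - ppv:.0%}")
# With these assumptions roughly 30% of published positive findings would be
# false positives -- so a replication failure need not mean the replicators erred.
```
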

Rather than dispassionately considering all of these possibilities, including that the original research might be a false positive, Bargh chose to:
1) Dismiss the journal in which the replication failure was published in an uninformed way. Bargh claims that PLoS ONE is a for-profit journal that is effectively a vanity press with a pay-to-publish model, that it doesn't do thorough peer review, and that it doesn't rely on expert editors. In reality, PLoS ONE is run by a non-profit organization, selects expert editors, reviews papers just like any other journal, and never rejects a paper because the authors can't pay (the fee is waived upon request). It is one of the fastest-growing open-access journals and has a roughly 30% rejection rate. It differs from other journals in that it publishes empirically solid work regardless of the perceived theoretical impact.

2) Accuse the authors of the replication study of incompetence with an unjustified ad hominem attack. This group of authors has extensive expertise in consciousness research. For example, Cleeremans was an editor of the Oxford Companion to Consciousness and is well-respected in the field.

3) Describe method details from the replication attempt (and the original study) inaccurately. See the first comment on the post for a detailed discussion.

4) Reveal a lack of familiarity with science blogging by blaming one of the most careful and thoughtful science writers working today (+Ed Yong) for publicizing the replication failure. It seems somewhat disingenuous to fault Yong for "swallowing their conclusions whole" after Bargh refused to respond to the request for comment Yong sent several days before the post went live.

An effective response to a failure to replicate would be to identify ways in which the studies differed and then to test whether those differences explain the discrepancy. Acknowledging that replication failures happen and pushing for more direct replication attempts rather than just conceptual ones might help too. But assuming that a failure to replicate must have been due to incompetence or the shoddy standards of a journal is a pretty brash response.

The comments on Bargh's post (from +Ed Yong, Neuroskeptic, +Peter Binfield, +Alex Holcombe, and many others) are interesting and informative, and an example of how science bloggers work to correct the record.

Post has attachment
Very intriguing! A free, Python-based "E-Prime". Just downloaded it and am giving it a shot. The sample search paradigms are beautiful.

Post has attachment

Post has shared content
A home for all your failures to replicate!
This is very, very new, but it is a serious attempt to create an index of replication attempts in psychology (failed or otherwise). This is an effort we should all try to support.

Post has shared content
Drawing the wrong conclusion
More on Stapel's fraud and the fallout for psychology

Great. Now a NYT headline implies that all psychology research is suspect because of Stapel's fraud. Yes, responsibility for this fraud extends beyond Stapel himself (http://goo.gl/dQmBk), but not that broadly. Now those areas that lack replications (or even replication attempts) are harming the reputations of all psychology researchers.

Within cognitive psychology, and especially in the vision sciences, if someone publishes a splashy, straightforward-to-try-yourself result in Science, many labs actually do try to replicate it. Following the annual Vision Sciences Society meeting, or sometimes while still there, researchers code up the latest result for themselves and try it out. If a finding doesn't replicate, people typically find out (and if one lab continually produces results that don't replicate, people stop trusting research from that lab). If a result does replicate, researchers then redo the experiments and test the limits of the effect.

The problem here seems to be that nobody even tries to replicate any of the sorts of stuff Stapel "did," or if they do, it never gets published. In cognitive psychology, a standard approach is to replicate the original result and then extend or challenge it. Papers in Stapel's area don't include many such replications and extensions of earlier results by other labs.

Want a fun exercise? Try scanning the literature to see if anyone has published an attempt to replicate any of the dozens of fraudulent papers that Stapel produced. These were papers in Science and other top journals, and involved collaborations with other well-known researchers. The types of studies he published aren't that hard to do, and he's working in a crowded field with a lot of other people doing this sort of research. If you find any replications embedded in other articles, post them in the comments. I expect I'll be waiting a while.