1) First, note the study's extremely small sample size: 22. If the results come through at a statistically significant level, that implies quite large effects. Still, we should be cautious about extrapolating from such a small sample, especially since:
2) The sample selection criteria are suspect: the men chosen for the study were chosen because they previously had little exposure to violent video games. Think about the external validity here: we are making statements about an entire population (adult men who play video games) based on a sample of people who self-select out of the treatment. It's quite possible that the effect is nil on the type of people who choose to play violent video games.
3) Because the treatment is "playing violent video games" and the control is "doing nothing", we can't really say it was the violence itself that had an impact on the brain. We possibly can't even attribute the effect to video games at all: it could simply come from staring at a TV screen more often than usual. We need different types of treatment to parse out these impacts: some men playing non-violent video games, some men watching movies, and so on. This study really doesn't allow us to make many assertions about the components of the treatment itself.
4) Again, I haven't yet had the chance to see the paper, but fMRI studies are notorious for plucking significant results out of thin air. For example, fewer than 50% adjust their inference methods to account for sub-group analysis (fMRI studies divide the brain into regions and test effects within each region, which increases the probability of finding a result by chance). Recall the fMRI paper in which researchers got positive results out of a dead salmon.
5) The reports on actual aggression are mixed, but generally I haven't seen any that cite evidence that the treatment group actually became more aggressive, just that their brains looked different after playing the games.
6) The men were measured a week after the treatment ended and the results were already fading; these are, at least as far as we can trust the study, not permanent changes in the brain.
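The multiple-comparisons worry in point 4 is easy to see with a quick simulation. The sketch below is purely illustrative (the function name and region counts are my own, not from the study): if each brain region is tested separately at the conventional 5% level and no correction is applied, the chance of at least one spurious "finding" grows rapidly with the number of regions tested.

```python
import random

def false_positive_rate(n_regions, alpha=0.05, n_sims=20_000, seed=1):
    """Probability of at least one p < alpha across independent null tests.

    Under the null hypothesis each region's p-value is uniform on [0, 1],
    so a draw below alpha is a false positive.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        if any(rng.random() < alpha for _ in range(n_regions)):
            hits += 1
    return hits / n_sims

# One test behaves as advertised: roughly a 5% false-positive rate.
print(false_positive_rate(1))
# Twenty uncorrected region-level tests: roughly 1 - 0.95**20 ≈ 0.64.
print(false_positive_rate(20))
```

With even 20 uncorrected tests, a study with no true effect anywhere still "finds" something about two-thirds of the time, which is why corrections such as Bonferroni matter for region-by-region fMRI analyses.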
Ever the sly contrarian, Chris Blattman bothers to read the paper with a critical eye and finds many shortcomings. There is a really crucial discussion between Blattman and Michael Clemens in the comments.
Something Clemens says jumps out at me, roughly: "these results are important because they are independent".
Are independent results more likely to be truly unbiased? Let's think a little about standard research bias: usually we're looking to show an impact, positive or negative. But here the null hypothesis is: "Millennium Villages are the holy grail of poverty alleviation; prove us wrong." To me, it seems that "independent" researchers have a huge incentive to disprove, not affirm, the MVP's claims. True independence would have involved a little more transparency at the start, although this may not have been practical.
I blog about economics, development and international aid at aidthoughts.org
My interests are broadly centred on development economics, ranging from larger
issues such as the political economy and impact of international aid, to the
smaller, microeconometric questions concerning nutritional distribution and
health, land rights, and ethnicity in SSA.
I am also an amateur film director; some of the films I have produced can be found here.