"Addressing" the problem...

The Society for Personality and Social Psychology just posted this letter on "responsible conduct," documenting all the steps the society is taking to address concerns about methodological integrity, ethics, p-hacking, non-replicating results, and documented fraud in psychology (particularly in social psychology of late). It's a start. I guess.

Apparently, they appointed a task force that "outlined a variety of ways we could take positive steps to ensure the integrity of our science." Okay. Fine. Next they're planning a symposium for January to talk more about these issues. Perhaps at the symposium, they'll discuss other ways that they might talk about identifying problems worth discussing. 

As Star Trek's Lt. Cmdr. Worf once said, "Less talk. More synthehol."

The society publishes top journals. It has the authority to enact changes that would enhance the reliability, replicability, and robustness of the research conducted and published by its members. It can do better than encouraging its members to "make discussions of ethical behavior part of the everyday discussion in your lab."

For me, the most disappointing part of this letter was its occasional emphasis on appearance rather than substance: "People are talking about what our goals are as individual scientists (i.e., promoting our science or promoting ourselves)." When did the goal of individual scientists become promoting rather than discovering or finding the truth? Maybe that misplaced emphasis is part of the problem. A stated goal of the task force was to examine "...how we can generally promote social and personality psychology as a credible scientific endeavor" [emphasis mine]. Perhaps a good place to start would be taking actual steps to bolster the foundations of the science itself.
 
Also somewhat disappointed. They peremptorily dismiss all the Simmons et al. recommendations in order to preserve "freedom of analysis," but most of those recommendations are about being honest in reporting what was done, rather than restricting what can be reported.
 
Absolutely disappointing. "Too much resistance" to sharing data to implement it as a policy?? How about "Get rid of the f*ing awful 'scientists' who refuse to share data!" Better to cater to reactionary nonsense than improve the science, I guess.
 
+Joseph Cesario -- Or, how about changing the attitudes in the field by making it journal policy. Those "awful scientists" need to submit their work someplace, and if the policy were standard, they'd adapt. That's what it means to lead by taking active steps.

Maybe they fear it would affect their impact factor (not that that should really matter), but it might also inspire other journals and societies to follow suit. If you want to increase credibility, what better way to do it?

I do have some mixed feelings about mandatory data sharing (for example, I have a data set that I can't share because there is no way to make the responses anonymous, and identifying the individual subjects could embarrass them; it was a study of overconfidence). But, there are lots of standards a society could implement for their journals and as new initiatives that would improve the science.
 
Right--to make data sharing journal policy (with exception for obvious cases such as non-anonymous data) would force it to happen and would force those scientists who refuse to do so out of the mainstream of the field. What I meant by "get rid of them" was a self-selection out of the discipline if journal policy was implemented, not the society itself removing scientists from their ranks. Sorry for the poorly worded reply.

But what are your mixed feelings regarding mandatory data sharing? If you recommend making it journal policy, is that not the same as mandatory data sharing? At least, if people want their findings published. 
 
I guess I was saying I'm not sure mandatory data sharing is the first thing I'd implement, largely because of the problem of handling exception cases. Beyond the anonymity issues in data like mine, there are other situations in which mandatory data sharing would be problematic (MRI data, for example, or data that can be mined for multiple papers or that would potentially lead to the researchers being scooped on their own research).

But, I do like the idea for data that are readily shareable, which includes most data in social psych, I'd think. There might be a way to mandate it with approved exceptions. My point, though, is that there are other policies that could be implemented right away that would help dramatically. For example, requiring power analyses, requiring explicit identification of all measures collected and all tests conducted (and having reviewers/editors challenge results that could be due to fishing), providing a means for advance registration of analysis plans to avoid fishing and investigator degrees of freedom, encouraging replication attempts and publishing them, etc.
 
Of course, if there are principled reasons to keep data confidential those can be implemented in a policy. There are also ways to exclude harassing and unprofessional requests. Complete lack of imagination here.
 
+Roger Giner-Sorolla -- I agree completely. The ideal would be an approach in which the data were available with the article as supplementary materials. There are journals that take that approach (Judgment and Decision Making, I believe). And, you're right -- a formal policy could handle exception cases. Having the data coupled with the article would avoid the problem of harassing/unprofessional requests as well (there was an interesting discussion of one such example on the Open Science Google Group a couple weeks ago).
 
"Perhaps at the symposium, they'll discuss other ways that they might talk about identifying problems worth discussing."

WHOA, easy there. The first thing to do is to set up a committee to identify ways of identifying problems worth discussing. Baby steps.
 
Easy steps that would effect change: 1) Require authors who do not post their data to give the reasons why they have not posted their data. This explanation should appear with the published article. Sure, many authors will make specious excuses, but some readers will be able to see through those excuses, which will result in a loss of esteem for those authors. This should be oriented towards having data posted in repositories, not just "available on request". We know that when data are merely required to be "available on request", authors don't usually respond to the requests: http://t.co/TfIryl2O
2) Journals should favor studies that preregister their hypotheses and analysis plan.
 
Is there any way to get more data-sharing by using the carrot rather than the stick? That is, can we reward sharers?

I've always thought data-sharing was a good idea because it led to more citations to your own work (the CHILDES system is a classic example; most of those papers would never have gotten cited at all if the data hadn't been made public). Both we and journals are rewarded by citations (both directly and indirectly in that it makes more people aware of our work). Demonstrating that this is the case might lead more journals to encourage public data-archiving and more researchers to want to participate.

But we could go beyond that. One problem with current citation counts is that any citation is treated like any other; a passing reference counts the same as a real engagement with the text. Ideally, there would be some way of tagging a citation as "I relied on this heavily" or, even better, "I re-analyzed their results for discussion here". But it shouldn't be hard for Google Scholar (or someone) to weight citations based on the number of times the paper is cited within a given manuscript, which would pick up those citations to work that is being re-analyzed. (Of course this can be gamed, but everything can be gamed.)

None of this will directly address fraud, though indirectly, as data-sharing becomes more common, it'll be more difficult to avoid sharing (due to peer pressure, not regulations). And I think there are more reasons to care about data-sharing than fraud, anyway.
 
There are several cases that make data sharing impossible. We have to protect participants. However, I do not think they apply to the vast majority of social/personality samples. These samples can be easily de-identified.

So I think we should make posting the assumed norm. If data cannot be shared, this should be disclosed at the point of submission to the journal. The Editor can make a determination to go forward (or not), and this can be reported in the Author note. (This is basically what Debby Kashy and I recommended in a 2009 paper.)

Right now, Joe C. and I are struggling with an interesting issue about what we can and can't say about someone's data. Is it wrong to say that a distribution is severely non-normal and probably nonsense if you were granted access to compare your results with another set of results? Right now, we basically have to write: Joe saw the data, they seemed odd, ask the authors because they will not let us talk about it.

How is that for transparency?
 
+Brent Donnellan - That sounds like a disturbing case. If they are odd enough to raise suspicions of the sort raised by Simonsohn (i.e., possibility of fraud or fabrication), you're in the awkward position of having some obligation to the field to make that known while also honoring a promise of confidentiality. I guess my concern is that by granting access with a promise of confidentiality, the author could be covering their butt in the case of fraud (although then it's not clear why they would provide the data in the first place if there were deliberate fraud as opposed to something less nefarious).

You were granted access to the data, meaning the author is capable of providing the data to others. On what grounds can the author claim confidentiality? Does the author provide any explanation for the "odd" data? Or, does the author provide any reason why you can't discuss it? If not, it seems hard to justify the required confidentiality other than to keep others from knowing there's a problem.

These presumably are published results, and if the data are "odd" or nonsense, then the result is too. The field needs to know that. If the author refuses to let you discuss the data or to provide any explanation, I wonder if you could pass the buck to someone with the authority to demand access to the data. Could you approach the editor for that journal/paper with your concerns and ask that person to follow up in a more official capacity? Presumably, the journal would want to know if they might have published a paper based on problematic data. You could also take the Simonsohn approach and contact that person's university ethics office and ask them to investigate.

I think it would be great if journals required open-access to data unless there were a compelling reason why the data could not be made available (and there are such cases). I wonder if IRB protocols could be written in such a way that even in those cases, data could be shared with the constraint that the person receiving the data obtain some sort of certificate of confidential treatment from the IRB? I know that IRBs sometimes do that when researchers are studying behaviors that could otherwise require mandatory reporting (e.g., studying criminal activity). Perhaps there could be a mechanism by which data are available for researchers to use and re-analyze provided they promise confidential treatment. That might be a challenge, but there should be ways to do it.