This is a difficult subject to talk about (because it, rightly, comes off as blaming the victim), but I wonder whether community self-policing like this is actually an effective policy, or a dangerous long-term policy that reinforces the problem.
To qualify that, I want to reiterate that when a community joins in solidarity to drive out cruel harassers and bigots and silence their abuse, that's great. It's way better that the community jumps up and confronts the problem head-on than shrinks away under the pretense of "civility" and lets the aggressors run roughshod over the people who need to be protected ("well, their voices deserve to be heard too!"). I'm glad people are so ready to take up arms for such a good cause.
But I'm still not ready to give it a free pass. This is such a big and important issue with so much at stake (see: death threats, terrorist hacking, threats of sexual abuse, not to mention a culture of intolerance) that it's not really safe to just assume people's individual consciences are sufficient to manage communities so large and so powerful. It might be necessary to actually have a leviathan to protect everyone, even undesirables.
The average teenager grumbling about how "unfair" he thinks feminism is usually pulls out the old argument, "you're being so divisive," which is to say, "it's not fair if you respond to my heated, emotional argument with a heated, emotional reply." That's a cop-out; there's no way you could hold individuals to that impossible double standard, because individuals have beliefs and emotions and are human. But is there any truth to it? Is there a way to police communities without digging wider and deeper trenches, and to actually build a community, not necessarily with open borders, but one that can integrate and naturalize outsiders who, to the amateur, seem like hopeless lost causes? Most of these idiots online are 14-year-old kids; they're going to grow up, or at least they can if they're guided right.
I have no idea what studies exist on this subject, or whether any do, but if there is any research into creating accepting online communities, I know where it's being done: online games. In an online game, like, say, Defense of the Ancients, you don't want terrible players to ruin the experience, and you also don't want verbally abusive players to ruin it. In the old days, the DotA community was infamously harsh; new players were immediately policed out by the community, blamed for ruining games and bullied until they left. When Valve took over and created Dota 2, they took community management seriously and made sure the community was welcoming to everyone. Abuse, bigoted language, harassment, and even sore-loser behavior are not tolerated, and players who are abusive suffer consequences. But it's also not in Valve's interest to banish players who can be reformed, so presumably they put a lot of thought and design into building systems that help new players and bad apples integrate. Does Valve's community management have any lessons to share?
The consequence of DotA having a self-policing community is that it spun off a hugely successful competitor, League of Legends. Which is fine. But the competitor to a community that supports social justice is a dangerous beast. We can't expect individuals to be anything more than noble warriors for justice, but it might be important to have a group that can operate on larger, calculated principles.
Anyway, I don't have a very concrete thought, though I would be tickled if online communities got together and elected an online marshal to represent justice for them. It would be an all-new world government rising out of nowhere, which would be crazy. I wish I could write this thought more eloquently.