Why SEO Experiments Are Almost Always Invalid

"In an environment where anyone can make an unscientific assertion, back it up with invalid tests, and toss in absurd disclaimers that everyone winks at, you cannot expect to find much high quality information.

And, frankly, there ISN’T much high quality information coming from our community."

I largely agree, but I still find these tests valuable because I don't take them at face value. Instead, I'm looking for patterns and applying my own critical analysis based on day-to-day findings. As a community, we could all run tests and gather a far more comprehensive picture of the changing nature of search.

So I like having these out there. I'd like more of them. But that doesn't mean I trust many, if any, of them in isolation. And I'll always want to put them to the test myself if at all possible.
Doc Sheldon
Michael likes to come across as pissed, and I suspect he really is, to a degree. I'd hazard a guess, though, that he's not so much pissed at the poorly structured tests that go on as he is at the folks who will blindly wrap their next strategy du jour in whatever hypothesis is written up from the results of such tests. In that, I agree with him.

Michelle Robbins
That said, though, the point is well taken that such tests, as inconclusive as they may be, still have value. In reality, many highly structured laboratory procedures actually involve a series of tests, run in the full knowledge that all variables can't be isolated. So you just try to isolate one at a time, test 100 times, and hope a pattern emerges.
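
To make that repeat-and-look-for-a-pattern approach concrete, here's a minimal sketch in Python. Everything in it is assumed for illustration (the single variable being toggled, the 55% simulated improvement rate, the choice of a simple one-sided sign test); it isn't anyone's actual methodology from this thread.

    import random
    from math import comb

    def run_trial() -> bool:
        """One hypothetical trial: flip a single variable (say, a title tag
        tweak) on a fresh test page and record whether rankings moved in the
        predicted direction. The 'search engine' here is simulated: a small
        real effect (55% chance of improvement) buried in noise."""
        return random.random() < 0.55  # assumed effect size, purely illustrative

    def sign_test_p_value(wins: int, n: int) -> float:
        """Exact one-sided sign test: the probability of seeing at least
        `wins` successes in `n` trials if the change actually did nothing
        (true success rate 0.5)."""
        return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

    N = 100  # "test 100 times"
    wins = sum(run_trial() for _ in range(N))
    p = sign_test_p_value(wins, N)
    print(f"{wins}/{N} trials improved; p = {p:.3f} if the change did nothing")

A small p-value suggests a pattern worth chasing, nothing more; it still can't tell you which of the un-isolated variables actually moved the needle.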
AJ Kohn
+Michelle Robbins Well said! What's even more fun is watching +Matt Cutts during some of these presentations, particularly as they review the parameters of the test.

I'm not saying I'm The Mentalist or anything (yes I watch the show, no I don't care if you think that makes me old or not cool) but I swear a knowing smirk crosses his face and I see a thought bubble over his head that says something akin to 'nice try, but that's not going to work'.

I'd be fine with all of these, but it is disheartening to see so many run off thinking the results should be taken as fact.
AJ Kohn
+Sheldon Campbell Yes, if you read the comments on the blog post it seems clear Michael's more incensed at those who can't see these tests for what they are and instead simply take them at face value.

I like the sort of testing that is going on, but I'm always more interested in doing my own experiential tests.
I'm more alarmed because I'm a stickler for using data properly, if you're going to use it. And I think a lot of people don't really understand the finer points of stats and analysis. They play fast and loose with these things, give a blanket "correlation is not causation" disclaimer, but then continue to put forth the data as relevant.

And I'm just not sure if they're doing it to further understanding of (ever-shifting) variables, or to put themselves forward as "experts" - or worse, just as link bait to capture eyeballs or customers. Google Analytics + Excel = Instant Expert #ugh #yourenothelping [the larger "them", not you, AJ :) you're definitely helping!]
AJ Kohn
+Michelle Robbins I definitely see that side of things. I've begun to really dislike the 'correlation is not causation' statement because people have heard it so many times that many simply ignore it (when they shouldn't). Never mind that correlation can actually be a fine measure if taken in the right context; that type of subtlety is lost on many, if not most.

A lot of this has to do with the way it is presented. Too many times it's delivered in a way that makes it seem definitive, even when the author of the research knows it isn't.

Is that for linkbait? Maybe. Is it simple marketing? Might be. And it's certainly working.

The result is a lot of second-hand, zombie-like practitioners who are substituting someone else's thoughts for their own critical analysis. And that's bad for our industry.
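
That subtlety about correlation is easy to show with a toy simulation. In the sketch below, every variable name and coefficient is invented for illustration: a hidden confounder (overall site authority) drives both word count and ranking position, so the two correlate strongly even though word count has zero causal effect in the simulation.

    import random
    import statistics

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(42)
    pages = []
    for _ in range(1000):
        authority = random.gauss(0, 1)                              # hidden confounder
        word_count = 1500 + 400 * authority + random.gauss(0, 200)  # strong sites write more
        position = 50 - 10 * authority + random.gauss(0, 5)         # strong sites rank higher
        pages.append((word_count, position))

    word_counts, positions = zip(*pages)
    print(f"correlation(word count, position) = {pearson(word_counts, positions):.2f}")
    # Strongly negative: longer pages 'rank better', yet word count has no
    # causal effect here at all. Authority drives both sides.

In the right context (when you know the confounders are controlled), that same correlation can be a perfectly useful signal; stripped of context, it reads as "write longer pages to rank better."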
Michelle Robbins
+AJ Kohn Exactly. It's irresponsible on the part of the people presenting it as such, and lazy on the part of practitioners. I think everyone should be testing, all the time, and critically evaluating the results they're seeing, instead of blindly accepting what they're told as fact. Also, I think there's waaaaayyy too much speculation overall. Speculation is not correlation :)
AJ Kohn
+Michelle Robbins Yes, I must admit I tire of the speculative sessions at conferences. I harken back ... was it two years ago when everyone was speculating about how search was going to go all real-time?

I'm all for speculation (I have a number of theories and ideas myself) but it MUST be integrated with real research (e.g., reading those big research papers, understanding the science and math as best as possible, and trying to keep up with +William Slawski) as well as true day-to-day trench activity - and even then, it should be presented as a viewpoint or theory - a hypothesis.
AJ Kohn
+M. Edward (Ed) Borasky Great to have you weigh in here. And you're right on all counts. A testable model is crucial and the system you're testing it against is a moving target. I really like the phrase that it's a 'living thing', one that is complex and adaptive.