Why is it hard to test, reverse engineer, track algorithm changes, or even find correlations in Google's algorithms? Well, just off the top of my head I can come up with a few factors you'd have to think about before deploying your experiment:

- Type of query used: Navigational, Transactional, Informational, Local, or did you try to use a neutral query? Was it really neutral?

- Changes pushed by the Search Quality team: think not only about permanent changes, but also about experiments that last a day or a week and then go away;

- Search trends on that day, on at least an hourly basis: are there rising search terms that day? Is something happening?

- Your geolocation: your location affects not only the datacenter you hit; there are also language-specific signals;

- The datacenter you hit: sometimes the same algorithms behave differently depending on the datacenter you are hitting;

- Personalization: Are you exposed to personalization in search?

- Social Signals: Are you exposed to social influence in search?

- Search bias: because you are an SEO, it's sometimes hard to think like, or even mimic the behaviour of, a "normal person". How biased are your ideas?

There is much more to add here, of course, but I bet most of the tests, algorithm-behaviour tracking, and correlation studies out there don't account for most of the points above.
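A few of these variables can at least be pinned down explicitly when you fetch results. Here is a minimal sketch in Python that builds search URLs with the `gl` (country) and `hl` (interface language) parameters; note that `pws=0` is a widely reported but unofficial way to request non-personalized results, and that automated querying of Google is restricted by its terms of service, so treat this purely as an illustration of controlling variables:

```python
from urllib.parse import urlencode

def build_test_url(query, country="us", language="en", personalization=False):
    """Build a search URL that fixes some of the variables listed above.

    `gl` and `hl` are real Google search parameters; `pws=0` is an
    unofficial, widely reported flag for non-personalized results.
    """
    params = {
        "q": query,
        "gl": country,   # pin the geolocation bias
        "hl": language,  # pin the language-specific signals
    }
    if not personalization:
        params["pws"] = "0"  # unofficial: ask for non-personalized results
    return "https://www.google.com/search?" + urlencode(params)

# Run the same query under two controlled conditions:
print(build_test_url("best pizza", country="us"))
print(build_test_url("best pizza", country="it", language="it"))
```

Even with this, you still can't control the datacenter you hit or a live Search Quality experiment, which is exactly the point: some confounders can only be logged, not eliminated.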