Posts


Going beyond #statistics and into the design of experiments, more precisely, online controlled experiments (a.k.a. A/B tests). Can your test result be highly statistically significant and still be biased, or completely misleading once you implement it? Can you lose money even if you get all the stats right? A detailed look at external validity (generalizability) and what one can do to increase the chance of conducting tests that result in useful predictions.

#conversionrate #cro #abtesting



Many, including PhDs in #statistics, can't wrap their heads around the paradox of one-tailed vs. two-tailed tests. Given data X, how can we reject, with #probability P, the claim of "no difference or negative difference", yet fail to reject the seemingly weaker claim of "no difference"? Here is my solution: https://www.onesided.org/articles/the-paradox-of-one-sided-v-two-sided-tests-of-significance.php
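A minimal numeric sketch of the paradox, assuming a z-test with a hypothetical observed statistic of z = 1.8 (not a figure from the article): the same data clears the 0.05 bar against the one-sided null "effect ≤ 0" but not against the two-sided null "effect = 0", because the two-sided p-value is double the one-sided one.

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = 1.8  # hypothetical observed test statistic

# One-sided test of H0: effect <= 0 — probability of a statistic this large or larger.
p_one_sided = 1.0 - phi(z)

# Two-sided test of H0: effect = 0 — probability of a statistic this extreme in either direction.
p_two_sided = 2.0 * (1.0 - phi(abs(z)))

print(f"one-sided p = {p_one_sided:.4f}")  # ~0.0359: rejects at alpha = 0.05
print(f"two-sided p = {p_two_sided:.4f}")  # ~0.0719: does not reject at alpha = 0.05
```

The two p-values come from the same data and the same statistic; only the null hypothesis being tested differs, which is the crux of the apparent paradox.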


An article I promised several weeks ago is now ready. Learn how the poor design of a couple of statistical tables in the early 20th century likely contributed to a major misunderstanding of a whole class of statistical calculations. This usability/UX issue then carried over into modern statistical software, and it contributes to poor statistical practice in medicine, much of the behavioral sciences, and economics, business risk estimation included...

https://www.onesided.org/articles/widespread-usage-of-two-sided-tests-result-of-usability-issue.php
