Recently I've been doing more statistics than I'm used to as part of my thinking on a joint project with collaborators. One of the difficulties I've encountered in my reading is the focus on yes/no answers about whether an effect exists. That framing seems fine for verbal and logical theories, but less useful for quantitative ones. I'm personally much more comfortable with the measurement perspective: propagating errors on estimated quantities up from the observables to the higher-order properties that shape our theories. It's nice to know that this dichotomy between 'discovery' and 'measurement' is acknowledged in statistics, and has proponents on both sides.
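To make the measurement perspective concrete, here is a minimal sketch of propagating measurement error from observables up to a derived quantity by Monte Carlo simulation. The observables, their uncertainties, and the derived quantity (a simple product) are all hypothetical, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observables: a rate k and a duration t, each measured
# with Gaussian error (all values are made up for illustration).
k_samples = rng.normal(loc=2.0, scale=0.1, size=100_000)
t_samples = rng.normal(loc=5.0, scale=0.2, size=100_000)

# A higher-order quantity built from the observables, here q = k * t.
# Its uncertainty follows directly from the sampled inputs.
q_samples = k_samples * t_samples

print(f"q = {q_samples.mean():.2f} +/- {q_samples.std():.2f}")
```

The point is that the output is an estimate with an error bar, not a yes/no verdict; the same simulation approach extends to arbitrarily complicated derived quantities where analytic error propagation is awkward.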
Moving statistical theory from a "discovery" framework to a "measurement" framework - Statistical Modeling, Causal Inference, and Social Science
Avi Adler points to this post by Felix Schönbrodt on “What’s the probability that a significant p-value indicates a true effect?” I’m sympathetic to the goal of better understanding what’s in a p-value (see for example my paper with John Carlin on type M and type S errors) but I really don’t like the framing …
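The question quoted in the excerpt has a standard closed form under simple assumptions (a single test with fixed significance level, power, and a prior probability that the effect is real). This is a sketch of that textbook calculation, not of Schönbrodt's or Gelman's specific analysis; the parameter values below are illustrative only:

```python
def ppv(alpha, power, prior):
    """Probability that a significant result reflects a true effect,
    assuming one test with fixed alpha, power, and prior probability
    that the tested effect is real (Bayes' rule on 'significant')."""
    return power * prior / (power * prior + alpha * (1 - prior))

# Illustrative numbers: alpha = 0.05, power = 0.35, 1-in-10 prior.
print(ppv(alpha=0.05, power=0.35, prior=0.1))
```

Even with conventional alpha, a modestly powered test of an a priori unlikely effect yields a positive predictive value well below 1, which is why "significant" alone answers less than it seems to.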