### John Cook

Discussion - It's common to evaluate Bayesian designs by their frequentist characteristics. This is a lot of work. Sometimes it's futile and creates an unfair comparison.


What is the major difference between a margin of error and a confidence interval?

Why do we need to set a margin of error if we have already set a confidence interval at ±2 SD?

Note: Kindly help, I am so confused. I need conceptual answers, not Google ones!
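One way to see the relationship: the margin of error is just the half-width of the confidence interval, so setting one determines the other. A minimal Python sketch with made-up survey numbers (the sample proportion, n, and the 95% z of 1.96 are all assumptions for illustration):

```python
import math

# Hypothetical survey: sample proportion 0.52 from n = 1000 respondents.
p_hat, n = 0.52, 1000
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion

z = 1.96                                 # roughly the "2 SD" of a 95% interval
margin_of_error = z * se                 # half-width of the interval
ci = (p_hat - margin_of_error, p_hat + margin_of_error)

print(f"margin of error = {margin_of_error:.4f}")
print(f"95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```

So the two are not separate knobs: a 95% interval at ±2 SD already implies a margin of error, and quoting one gives you the other.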


Informative, Thank you :)


The reproducibility crisis in conventional statistics is growing. Here is a recent study of a failure to replicate a finding, the romantic priming effect.

http://blogs.discovermagazine.com/neuroskeptic/2015/11/10/reproducibility-crisis-the-plot-thickens/#.VncFaOjRaf0


A new paper from British psychologists David Shanks and colleagues will add to the growing sense of a “reproducibility crisis” in the field of psychology. The paper is called Romance, Risk, and Replication and it examines the question of whether subtle reminders of ‘mating motives’ (i.e. sex) can make people more willing to spend money …


Joe is a software engineer living in Lower Manhattan who specializes in machine learning, statistics, Python, and computer vision.


Mattias Villani


Impressive!


A while back I wrote about how the classical non-parametric bootstrap can be seen as a special case of the Bayesian bootstrap. Well, one difference …
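The connection can be sketched in plain Python: the classical bootstrap draws integer resampling weights, while the Bayesian bootstrap draws continuous weights from a flat Dirichlet(1, …, 1) (generated here as normalized standard exponentials). The data and replication counts below are made up for illustration:

```python
import random
import statistics

random.seed(1)
data = [random.gauss(10, 2) for _ in range(50)]  # hypothetical sample
n = len(data)

def classical_boot_mean():
    # Resample n points with replacement: integer multinomial weights.
    return statistics.fmean(random.choices(data, k=n))

def bayesian_boot_mean():
    # Flat Dirichlet(1, ..., 1) weights: normalized standard exponentials.
    raw = [random.expovariate(1.0) for _ in range(n)]
    total = sum(raw)
    return sum(w / total * x for w, x in zip(raw, data))

classical = [classical_boot_mean() for _ in range(4000)]
bayesian = [bayesian_boot_mean() for _ in range(4000)]

# The two bootstrap distributions of the mean are nearly indistinguishable.
print(statistics.stdev(classical), statistics.stdev(bayesian))
```

The smooth Dirichlet weights are what make the Bayesian version a posterior distribution rather than a resampling scheme, but numerically the two spreads agree closely.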



"The problem posed is the classic one, along these lines [...]: given that a biased die averaged 4.5 on a large number of tosses, assign probabilities for the next toss, x. This problem can seemingly be solved by Bayesian Inference, or by MaxEnt with a constraint on the expected value of x: E(x) =4.5. These two approaches give different answers!"

https://letterstonature.wordpress.com/2008/12/29/where-do-i-stand-on-maximum-entropy/


My title is taken from a similarly titled article by the physicist Ed Jaynes, whose work influenced me greatly. It refers to a controversial idea of epistemological probability theory: the method o...
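The MaxEnt half of the quoted puzzle is easy to compute: maximizing entropy on {1, …, 6} subject to E(x) = 4.5 gives p_i ∝ exp(λi) for some λ, which can be found by bisection. A minimal Python sketch (the bracket [-5, 5] for λ is an assumption that happens to contain the root):

```python
import math

faces = range(1, 7)

def mean_for(lam):
    # Mean of the distribution p_i proportional to exp(lam * i)
    weights = [math.exp(lam * x) for x in faces]
    return sum(x * w for x, w in zip(faces, weights)) / sum(weights)

# The mean is increasing in lambda, so bisect for E(x) = 4.5.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < 4.5:
        lo = mid
    else:
        hi = mid

lam = (lo + hi) / 2
weights = [math.exp(lam * x) for x in faces]
probs = [w / sum(weights) for w in weights]
print("lambda =", round(lam, 4))
print("MaxEnt probabilities:", [round(p, 4) for p in probs])
```

Note the monotonically increasing probabilities; a Bayesian posterior over die biases, conditioned on an observed average of 4.5, generally gives a different answer, which is exactly the tension the post discusses.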


Thanks, a good read.



"The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta."

http://andrewgelman.com/2015/07/15/prior-information-not-prior-belief/


The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta. Consider this sort of question that a classically-trained statistician asked me the other day: If two Bayesians are given the same data, they will come …
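The question Gelman mentions ("if two Bayesians are given the same data …") can be made concrete with a toy conjugate example: two analysts encoding different prior information about a coin's bias see the same data and end up with different posteriors. All numbers here are made up for illustration:

```python
# Hypothetical shared data: 7 heads in 10 flips.
data_heads, data_tails = 7, 3

# Two analysts encode different information about theta.
priors = {"flat Beta(1, 1)": (1, 1), "informative Beta(10, 10)": (10, 10)}

posterior_means = {}
for name, (a, b) in priors.items():
    # Beta prior + binomial data -> Beta posterior (conjugacy)
    post_a, post_b = a + data_heads, b + data_tails
    posterior_means[name] = post_a / (post_a + post_b)
    print(f"{name} -> Beta({post_a}, {post_b}), mean {posterior_means[name]:.3f}")
```

Same data, different posteriors: not a contradiction, just different total information. On the "information, not belief" reading, the disagreement is a statement about what each analyst knew going in.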


I'd say that it should be no bother that the model is different.

It's frequently the case in machine learning that the model builder's skill and experience has a large effect on the quality.




"Sadly, the concept of p-values and significance testing forms the very core of statistics. A number of us have been pointing out for decades that p-values are at best underinformative and often misleading. Almost all statisticians agree on this, yet they all continue to use it and, worse, teach it. I recall a few years ago, when Frank Harrell and I suggested that R place less emphasis on p-values in its output, there was solid pushback. One can’t blame the pusherbackers, though, as the use of p-values is so completely entrenched that R would not be serving its users well with such a radical move.

And yet, wonder of wonders, the American Statistical Association has finally taken a position against p-values. I never thought this would happen in my lifetime, or in anyone else’s, for that matter, but I say, Hooray for the ASA!"

https://matloff.wordpress.com/2016/03/07/after-150-years-the-asa-says-no-to-p-values/




"Suppose I tell you that I know of a magician, The Amazing Significo, with extraordinary powers. He can undertake to deal you a five-card poker hand which has three cards with the same number.

You open a fresh pack of cards, shuffle the pack and watch him carefully. The Amazing Significo deals you five cards and you find that you do indeed have three of a kind." #pHacking

http://deevybee.blogspot.co.uk/2016/01/the-amazing-significo-why-researchers.html

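The card trick is a multiple-comparisons story: run enough null tests and something will come up "significant". A quick simulation (20 tests per study and α = 0.05 are arbitrary illustrative choices), using the fact that a p-value is Uniform(0, 1) under a true null:

```python
import random

random.seed(0)

ALPHA, N_TESTS, N_STUDIES = 0.05, 20, 100_000

# Under a true null, a p-value is Uniform(0, 1); "significance" is luck.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_STUDIES)
)
rate = hits / N_STUDIES

print(f"simulated P(some p < {ALPHA} among {N_TESTS} null tests) = {rate:.3f}")
print(f"theoretical 1 - (1 - alpha)^{N_TESTS} = {1 - (1 - ALPHA) ** N_TESTS:.3f}")
```

With 20 looks at pure-null data, roughly 64% of "studies" find something, which is The Amazing Significo's whole act: deal enough hands and report the one with three of a kind.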


The non-parametric bootstrap was my first love. I was lost in a muddy swamp of zs, ts and ps when I first saw her. Conceptually beautiful, simple to …


Check out David Draper's talk for an interpretation of the bootstrap as a Dirichlet process:

https://users.soe.ucsc.edu/~draper/draper-irvine-15-may-2014.pdf




"I don’t think statistical models are representations of the data at all (barring one exception, which I will discuss later). Instead, they are representations of the *prior information* that our analysis is assuming"

https://plausibilitytheory.wordpress.com/2015/07/10/what-is-a-statistical-model/


What is a statistical model? This question was posed recently by the excellent "Stats Fact" Twitter account, which linked to a paper that was too complicated for me to understand, involving categor...


Pre-Bayesian: Ridiculous, probabilities are without doubt objective. They can be seen in the relative frequencies they cause.

Bayesian: So if p = 0.75 for some event, after 1000 trials we’ll see exactly 750 such events?

Pre-Bayesian: You might, but most likely you won’t see that exactly. You’re just likely to see something close to it.

Bayesian: Likely? Close? How do you define or quantify these things without making reference to your degrees of belief for what will happen?

Pre-Bayesian: Well, in any case, in the infinite limit the correct frequency will definitely occur.

Bayesian: How would I know? Are you saying that in one billion trials I could not possibly see an “incorrect” frequency? In one trillion?

Pre-Bayesian: OK, you can in principle see an incorrect frequency, but it’d be ever less likely!

Bayesian: Tell me once again, what does ‘likely’ mean?
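The first exchange can be checked numerically: even when p = 0.75 really is the long-run frequency, seeing exactly 750 successes in 1000 trials is improbable, and "close" is itself a probabilistic statement. A small Python sketch (the ±0.01 window is an arbitrary choice for illustration):

```python
from math import comb

p, n = 0.75, 1000

def pmf(k):
    # Binomial probability of exactly k successes in n trials
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

exactly_750 = pmf(750)
within_1pct = sum(pmf(k) for k in range(740, 761))  # frequency in [0.74, 0.76]

print(f"P(exactly 750 successes)          = {exactly_750:.4f}")
print(f"P(frequency within 0.75 +/- 0.01) = {within_1pct:.4f}")
```

Exactly 750 happens only a few percent of the time, and even "within one percentage point" is roughly a coin flip, so any cash-out of "likely" and "close" is itself a probability statement, which is the Bayesian's point.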



+charles griffiths But sometimes a decision needs to be made, whether or not we have "answers about the actual world". Expected losses are smallest under the Bayesian approach.
