Stream


John Cook

Discussion  - 
 
It's common to evaluate Bayesian designs by their frequentist characteristics. This is a lot of work. Sometimes it's futile and creates an unfair comparison.

Deeejaaay Khan

Discussion  - 
 
What is the major difference between a margin of error and a confidence interval?
Why do we need to set a margin of error if we have already set the confidence interval at ±2 SD?

Note: Kindly help, I am so confused. I need conceptual answers, not Google ones!
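A conceptual answer, sketched numerically (my own illustration with made-up numbers, not from this thread): the margin of error is simply the half-width of the confidence interval, so setting one determines the other; they are not independent quantities.

```python
import math

# Hypothetical sample: mean 50, sd 10, n = 100.
xbar, s, n = 50.0, 10.0, 100
z = 1.96  # roughly the "2 SD" multiplier for a 95% interval

# The margin of error IS the half-width of the confidence interval.
margin_of_error = z * s / math.sqrt(n)
ci = (xbar - margin_of_error, xbar + margin_of_error)

print(margin_of_error)  # 1.96
print(ci)               # (48.04, 51.96)
```

So "setting the CI at ±2 SD" and "setting the margin of error" are two phrasings of the same choice: the interval is the estimate plus or minus the margin of error.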
 
Informative, Thank you :)
 
The reproducibility crisis in conventional statistics is growing. Here is a recent study of a failure to replicate a finding, the romantic priming effect.

http://blogs.discovermagazine.com/neuroskeptic/2015/11/10/reproducibility-crisis-the-plot-thickens/#.VncFaOjRaf0
A new paper from British psychologists David Shanks and colleagues will add to the growing sense of a “reproducibility crisis” in the field of psychology. The paper is called Romance, Risk, and Replication and it examines the question of whether subtle reminders of ‘mating motives’ (i.e. sex) can make people more willing to spend money …

Brydon Parker

Discussion  - 
 
Joe is a software engineer living in Lower Manhattan who specializes in machine learning, statistics, Python, and computer vision.
 
Impressive! 
A while back I wrote about how the classical non-parametric bootstrap can be seen as a special case of the Bayesian bootstrap. Well, one difference …
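A minimal sketch of the relationship the post describes (my own illustration, not the linked article's code): the classical bootstrap resamples the data with replacement, while the Bayesian bootstrap draws continuous weights from a uniform Dirichlet; the two produce very similar distributions for the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=30)   # hypothetical data

def classical_boot_mean(x, rng):
    # classical bootstrap: resample indices with replacement
    idx = rng.integers(0, len(x), size=len(x))
    return x[idx].mean()

def bayesian_boot_mean(x, rng):
    # Bayesian bootstrap: Dirichlet(1, ..., 1) weights over the data points,
    # a "smoothed" continuous version of the same resampling scheme
    w = rng.dirichlet(np.ones(len(x)))
    return np.dot(w, x)

classical = np.array([classical_boot_mean(x, rng) for _ in range(2000)])
bayesian = np.array([bayesian_boot_mean(x, rng) for _ in range(2000)])

# the two distributions of the mean are nearly indistinguishable
print(classical.std(), bayesian.std())
```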

João Neto
moderator

Discussion  - 
 
"The problem posed is the classic one, along these lines [...]: given that a biased die averaged 4.5 on a large number of tosses, assign probabilities for the next toss, x. This problem can seemingly be solved by Bayesian Inference, or by MaxEnt with a constraint on the expected value of x: E(x) =4.5. These two approaches give different answers!"

https://letterstonature.wordpress.com/2008/12/29/where-do-i-stand-on-maximum-entropy/
My title is taken from a similarly titled article by the physicist Ed Jaynes, whose work influenced me greatly. It refers to a controversial idea of epistemological probability theory: the method o...
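For the MaxEnt side of the quoted puzzle, the maximum-entropy distribution subject to E(x) = 4.5 is the exponential-family solution p_i ∝ exp(λi), with λ chosen to hit the constraint. A stdlib-only sketch (my own, not from the linked post) solves for λ by bisection:

```python
import math

# MaxEnt for the "biased die averaging 4.5" problem: maximize entropy
# subject to E(x) = 4.5, giving p_i proportional to exp(lam * i).

def mean_for(lam):
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return sum(i * wi for i, wi in zip(range(1, 7), w)) / z

# mean_for is increasing in lam: 3.5 at lam = 0, near 6 for large lam,
# so bisection on [0, 5] finds the lam with mean 4.5
lo, hi = 0.0, 5.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_for(mid) < 4.5:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(lam * i) for i in range(1, 7)]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])   # probabilities rise monotonically toward face 6
```

Bayesian inference on a model of the die would instead return a posterior over dice, which is why the two approaches can disagree: they answer subtly different questions.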
 
Thanks, a good read. 

João Neto
moderator

Discussion  - 
 
"The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta."

http://andrewgelman.com/2015/07/15/prior-information-not-prior-belief/
The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta. Consider this sort of question that a classically-trained statistician asked me the other day: If two Bayesians are given the same data, they will come …
 
I'd say that it should be no bother that the model is different.

It's frequently the case in machine learning that the model builder's skill and experience has a large effect on the quality.

Matt Kuenzel

Discussion  - 
 
Pre-Bayesian: Ridiculous, probabilities are
without doubt objective. They can be seen
in the relative frequencies they cause.

Bayesian: So if p = 0.75 for some event, after
1000 trials we’ll see exactly 750 such events?

Pre-Bayesian: You might, but most likely you
won’t see that exactly. You’re just likely to
see something close to it.

Bayesian: Likely? Close? How do you define or
quantify these things without making reference
to your degrees of belief for what will
happen?

Pre-Bayesian: Well, in any case, in the infinite
limit the correct frequency will definitely
occur.

Bayesian: How would I know? Are you saying
that in one billion trials I could not possibly
see an “incorrect” frequency? In one
trillion?

Pre-Bayesian: OK, you can in principle see
an incorrect frequency, but it’d be ever less
likely!

Bayesian: Tell me once again, what does ‘likely’
mean?
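The Bayesian's jab about "exactly 750" can be checked with a one-line binomial calculation (my sketch, stdlib only): even though 750 is the single most likely count, seeing it exactly is itself improbable.

```python
import math

# With p = 0.75 and n = 1000 trials, how likely is *exactly* 750 successes?
n, k, p = 1000, 750, 0.75
pmf = math.comb(n, k) * p**k * (1 - p)**(n - k)
print(round(pmf, 4))   # about 0.029 -- "exactly 750" happens ~3% of the time
```

So even the most probable outcome is an unlikely one, which is the dialogue's point: "close" and "likely" need a quantitative definition, and that definition is a probability.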
 
+charles griffiths But sometimes a decision needs to be made, whether or not we have "answers about the actual world". Losses are least using the Bayesian approach.

João Neto
moderator

Discussion  - 
This is perhaps the first real crack in the wall for the almost-universal use of the null hypothesis significance testing procedure (NHSTP). The journal, Basic...
 
I guess they took the phrase "lies, damn lies, statistics" a bit literally. By the way, this is not a "crack" sort of thing. I mean, Cumming's "dance of the p-values" argument is valid, but do we have a better alternative that keeps its simplicity?

Olga Scrivner

Discussion  - 
 
My PhD thesis is on a historical language change from Latin to Old French (probably a very boring subject for many people), and I am trying to use Bayesian inference for my data, which is mainly categorical (with a logistic regression GLM as a prior). Surprisingly, I have never seen any previous linguistic study that has used Bayesian statistics (except a language-evolution prediction model, which is different). I am not even sure how to present and explain my choice of priors and my models to a non-statistical, non-Bayesian audience. I would greatly appreciate your insights!
 
Thank you, Mad!

João Neto
moderator

Discussion  - 
 
"Sadly, the concept of p-values and significance testing forms the very core of statistics. A number of us have been pointing out for decades that p-values are at best underinformative and often misleading. Almost all statisticians agree on this, yet they all continue to use it and, worse, teach it. I recall a few years ago, when Frank Harrell and I suggested that R place less emphasis on p-values in its output, there was solid pushback. One can’t blame the pusherbackers, though, as the use of p-values is so completely entrenched that R would not be serving its users well with such a radical move.

And yet, wonder of wonders, the American Statistical Association has finally taken a position against p-values. I never thought this would happen in my lifetime, or in anyone else’s, for that matter, but I say, Hooray for the ASA!"

https://matloff.wordpress.com/2016/03/07/after-150-years-the-asa-says-no-to-p-values/

João Neto
moderator

Discussion  - 
 
"Suppose I tell you that I know of a magician, The Amazing Significo, with extraordinary powers. He can undertake to deal you a five-card poker hand which has three cards with the same number.
You open a fresh pack of cards, shuffle the pack and watch him carefully. The Amazing Significo deals you five cards and you find that you do indeed have three of a kind." #pHacking

http://deevybee.blogspot.co.uk/2016/01/the-amazing-significo-why-researchers.html
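The magician's trick in the linked post is selective reporting, which is easy to simulate (my own sketch, stdlib only): any one fresh deal rarely contains three of a kind, but someone who keeps dealing and only shows you the successful hand succeeds almost surely.

```python
import random
from collections import Counter

random.seed(1)
# suits are irrelevant for three of a kind, so the deck is just 13 ranks x 4
DECK = [rank for rank in range(13) for _ in range(4)]

def has_three_of_a_kind(hand):
    return max(Counter(hand).values()) >= 3

# A single honest deal rarely contains three cards of the same number...
hits = sum(has_three_of_a_kind(random.sample(DECK, 5)) for _ in range(20000))
print(hits / 20000)          # roughly 0.02-0.03

# ...but The Amazing Significo just keeps dealing until it happens,
# and only the successful deal is shown to you (selective reporting).
attempts = 0
while True:
    attempts += 1
    if has_three_of_a_kind(random.sample(DECK, 5)):
        break
print(attempts)              # typically a few dozen deals suffice
```

Replace "deal a hand" with "run a study" and "three of a kind" with "p < 0.05" and you have the replication problem in miniature.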

João Neto
moderator

Discussion  - 
 
Great talk! 
The non-parametric bootstrap was my first love. I was lost in a muddy swamp of zs, ts and ps when I first saw her. Conceptually beautiful, simple to …
 
Check out David Draper's talk for an interpretation of the bootstrap as a Dirichlet process:

https://users.soe.ucsc.edu/~draper/draper-irvine-15-may-2014.pdf

João Neto
moderator

Discussion  - 
 
"I don’t think statistical models are representations of the data at all (barring one exception, which I will discuss later). Instead, they are representations of the prior information that our analysis is assuming"

https://plausibilitytheory.wordpress.com/2015/07/10/what-is-a-statistical-model/
What is a statistical model? This question was posed recently by the excellent "Stats Fact" Twitter account, which linked to a paper that was too complicated for me to understand, involving categor...

Omar Javed

Discussion  - 
 
Nice high-level talk on the difference between Frequentism and Bayesianism
https://clip.mn/video/yt-KhAUfqhLakw

João Neto
moderator

Discussion  - 
 
"This is the Bayesian approach. You have a belief according to existing evidence and theories. If a new bit of evidence comes in you don’t discard all prior knowledge, or pretend that we currently know nothing. You simply update your belief, adding the new information to existing information. In this way our beliefs slowly evolve, tracking with new evidence and ideas (unless you have a large emotional investment in one belief, but that’s another post)."

http://theness.com/neurologicablog/index.php/in-defense-of-prior-probability/
This post is a follow up to one from last week about reproducibility in science. An e-mailer had a problem with the following statement: 'I tend to accept...
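The "update, don't discard" process the quote describes is exactly conjugate Bayesian updating. A minimal sketch with a Beta-Binomial model (the numbers are illustrative, not from the post):

```python
# Prior belief from existing evidence: a Beta(4, 4) centered on 0.5,
# equivalent to having already seen 4 successes and 4 failures.
alpha, beta = 4.0, 4.0

# New evidence arrives: 9 successes in 10 trials.
successes, trials = 9, 10
alpha += successes              # prior counts are kept, not discarded;
beta += trials - successes      # the new data is simply added on top

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))   # 0.722: belief shifted toward the data
```

The posterior sits between the prior mean (0.5) and the raw data rate (0.9), which is the "slow evolution" of belief the quote is describing.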

Wayne Hajas

Discussion  - 
 
This is a writing problem concerning a Bayesian analysis I hope to publish.  There is a simple idea that I just can't justify succinctly.  People must have to deal with it all the time, but I can't find any references.  It's driving me nuts!  

The concept I want to express:  As we wish to consider a larger range of data-values, a model must be made more complicated in order to remain useful.  

As an example: If I drop a rock a distance of one-metre, I can probably get away with a constant-acceleration model.  If I drop a rock a distance of a kilometre, I have to consider air resistance. If I drop it a distance of 1000 kilometres, I must consider orbital dynamics. ...

Correspondingly, one way of managing the need for model complexity in a Bayesian model is to limit the range of data values.  In my particular circumstances, I can do that at an acceptable cost.

There must be a name for this concept.  It's gotta be published somewhere.  Google is failing me.  The paper will lose a lot of focus if I have to chase this tangent.  Can anybody suggest a useful reference?  Or even a useful term to google?
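The rock-drop example can be made numeric (my own sketch, using standard constants): the constant-acceleration model is an approximation of inverse-square gravity whose error is negligible over a 1 m range but large over 1000 km, which is exactly the "valid range of an approximation" idea.

```python
# Constant-g vs inverse-square gravity at the top of the drop.
G_SURFACE = 9.81          # m/s^2, surface gravity
R_EARTH = 6.371e6         # m, mean Earth radius

def g_at(height_m):
    # the "more complicated" model: inverse-square gravity
    return G_SURFACE * (R_EARTH / (R_EARTH + height_m)) ** 2

errs = {}
for h in (1.0, 1e3, 1e6):                     # 1 m, 1 km, 1000 km
    errs[h] = abs(g_at(h) - G_SURFACE) / G_SURFACE
    print(f"{h:>9.0f} m: constant-g error {errs[h]:.2%}")
```

The error grows from effectively zero at 1 m to roughly a quarter of the true value at 1000 km, so restricting the data range is what keeps the simple model inside its region of validity.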
 
The concept is just that of approximation.

I would say "Within the range <describe range>, the model can be approximated by a simplified version where <list parameters> are ignored."
 
Could someone point me to some literature on setting priors? 
Specifically, I want to set a prior for click-through-rate estimation, and I want to penalize a subset of the results based on the cardinality of the set, but it doesn't sound like a very Bayesian thing to do. Naturally, some reading could help. Thanks!
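One common starting point (my sketch, not from this thread): a Beta(a, b) prior on CTR acts as pseudo-counts of clicks and non-clicks, so a "penalty" on a subset can be expressed inside the prior by giving that subset more pseudo non-clicks, shrinking its estimate toward a lower rate. Whether to tie b to the subset's cardinality is a modeling assumption, not standard practice.

```python
# Beta-Binomial posterior mean for a click-through rate.
def posterior_ctr(clicks, impressions, a=1.0, b=9.0):
    # Beta(1, 9) prior: prior mean CTR of 10%,
    # i.e. 1 pseudo-click and 9 pseudo-non-clicks
    return (clicks + a) / (impressions + a + b)

print(round(posterior_ctr(5, 20), 3))                 # 0.2: mild shrinkage
print(round(posterior_ctr(5, 20, a=1.0, b=49.0), 3))  # stronger prior pulls it down
```

Framed this way, the penalty becomes an explicit, defensible prior rather than an ad hoc adjustment to the estimates.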
 
Splendid, thank you. I will go through this!