Statistical Power And The False Discovery Rate

Brad Klingenberg has an insightful article on the false discovery rate:

A good frequentist would never interpret a p-value as the probability that the null hypothesis is true. But it can be enormously tempting. And despite all your efforts to the contrary it is likely that many of your colleagues don’t appreciate the distinction.

So, really, how wrong is it to treat a p-value as (one minus) the posterior probability that the null hypothesis is true? In general, it’s bad. But in some cases a p-value is a very good approximation to a posterior probability. Here we examine that approximation in a common testing scenario.

Definitely check it out.
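The question the quote raises can be made concrete with Bayes' rule. If a fraction pi0 of tested hypotheses are true nulls, a level-alpha test flags a true null with probability alpha and flags a real effect with probability equal to its power, so the posterior probability that the null is true given a significant result is just the false discovery rate among significant results. Here is a minimal sketch of that calculation (my own illustration, not code from the linked article; the alpha, power, and pi0 values are assumptions for the example):

```python
def posterior_null_given_significant(alpha, power, pi0):
    """P(H0 true | test significant at level alpha), by Bayes' rule.

    Among many tested hypotheses, a fraction pi0 are true nulls.
    True nulls reach significance with probability alpha; real effects
    reach significance with probability `power`. The ratio below is the
    false discovery rate among significant results.
    """
    false_pos = alpha * pi0          # significant results from true nulls
    true_pos = power * (1.0 - pi0)   # significant results from real effects
    return false_pos / (false_pos + true_pos)

# Example: a well-powered test (power = 0.8) at alpha = 0.05, where half
# of the tested hypotheses are true nulls (pi0 = 0.5).
alpha, power, pi0 = 0.05, 0.8, 0.5
print(f"P(H0 | significant) = {posterior_null_given_significant(alpha, power, pi0):.3f}")
# If most hypotheses are true nulls (pi0 = 0.9), the posterior probability
# of the null given significance moves much further from alpha.
print(f"P(H0 | significant) = {posterior_null_given_significant(alpha, power, 0.9):.3f}")
```

As pi0 grows or power shrinks, the posterior probability of the null given a significant result drifts well above alpha, which is exactly when reading a p-value as (one minus) a posterior probability goes badly wrong.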

