John Mount explains how to perform bias correction, and why it is so rarely done in practice:
The bias in question is falling off at a rate of 1/n (where n is our sample size). So the bias issue loses what little gravity it may ever have had when working with big data. Most sources of noise will be falling off at a slower rate of 1/sqrt(n), so it is unlikely this bias is going to be the worst feature of your sample.

But let's pretend the sample size correction indeed is an important point for a while.
Under the “no bias allowed” rubric: if it is so vitally important to bias-correct the variance estimate, would it not be equally critical to correct the standard deviation estimate?
The practical answer seems to be: no. The straightforward standard deviation estimate itself is biased (it has to be, as a consequence of Jensen’s inequality). And pretty much nobody cares, corrects it, or teaches how to correct it, as it just isn’t worth the trouble.
This is a good explanation of the topic, as well as of why these corrections are made so rarely.
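Both claims in the quote are easy to check by simulation: dividing by n underestimates the variance by a factor of (n-1)/n, Bessel's correction (dividing by n-1) removes that bias, and yet the square root of the corrected estimate is still biased low, as Jensen's inequality requires. A quick Monte Carlo sketch, using only the standard library; the sample size n = 5 and the standard Normal population are illustrative assumptions, chosen small so the bias is visible:

```python
import math
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0  # population mean and standard deviation; true variance is 1
n = 5                 # small sample size, where the bias is most visible
trials = 100_000      # Monte Carlo repetitions

sum_pvar = 0.0  # "plug-in" variance estimate, dividing by n (biased)
sum_var = 0.0   # Bessel-corrected variance, dividing by n - 1 (unbiased)
sum_std = 0.0   # sqrt of the corrected variance (still biased, by Jensen)

for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    sum_pvar += statistics.pvariance(x)           # divides by n
    sum_var += statistics.variance(x)             # divides by n - 1
    sum_std += math.sqrt(statistics.variance(x))  # sqrt of the unbiased estimate

print("E[plug-in variance]  ~", sum_pvar / trials)  # near (n-1)/n = 0.8
print("E[corrected variance] ~", sum_var / trials)  # near 1.0
print("E[sqrt(corrected)]    ~", sum_std / trials)  # below 1.0, despite the correction
```

The third line is the quote's point: even after carefully unbiasing the variance, taking the square root reintroduces a downward bias in the standard deviation, and essentially nobody bothers to correct for it.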