Nina Zumel takes us through a pair of techniques for avoiding overfitting:
Cross-validation is relatively computationally expensive; regularization is relatively cheap. Can you mitigate nested model bias by using regularization techniques instead of cross-validation?
The short answer: no, you shouldn’t. But, as we’ve written before, demonstrating this is more memorable than simply saying “Don’t do that.”
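For a rough sense of the kind of bias at issue, here is a minimal sketch (in Python with NumPy and scikit-learn rather than the R of the original posts; the variable names, level counts, and shrinkage parameter are all illustrative assumptions, not anything from the linked article). It impact-codes a noise-only categorical three ways: naively in-sample, in-sample with shrinkage toward the grand mean, and out-of-fold via cross-validation.

```python
# Sketch: nested model bias from in-sample impact coding of a categorical
# variable that carries NO true signal. All parameters here are illustrative.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, n_levels = 1000, 200
levels = rng.integers(0, n_levels, size=n)  # noise-only categorical
y = rng.normal(size=n)                      # target independent of levels

def impact_code(train_idx, apply_idx, shrink=0.0):
    """Per-level mean of y estimated on train_idx, applied to apply_idx,
    optionally shrunk toward the grand mean (a simple regularization)."""
    grand = y[train_idx].mean()
    code = np.full(n_levels, grand)
    for lv in range(n_levels):
        mask = levels[train_idx] == lv
        k = mask.sum()
        if k > 0:
            code[lv] = (y[train_idx][mask].sum() + shrink * grand) / (k + shrink)
    return code[levels[apply_idx]]

idx = np.arange(n)

# Naive: encode and evaluate on the same rows -> spuriously high r^2
naive = impact_code(idx, idx)
print("naive r^2:", np.corrcoef(naive, y)[0, 1] ** 2)

# Regularized but still in-sample: the optimistic bias remains
reg = impact_code(idx, idx, shrink=10.0)
print("regularized r^2:", np.corrcoef(reg, y)[0, 1] ** 2)

# Cross-validated: each row is encoded from folds it was not in -> near zero
cv = np.empty(n)
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(idx):
    cv[te] = impact_code(tr, te)
print("out-of-fold r^2:", np.corrcoef(cv, y)[0, 1] ** 2)
```

On a run like this, the naive and shrunken encodings both show a nontrivial in-sample r^2 on pure noise, while the out-of-fold encoding sits near zero, which is the point of the quoted question: shrinkage regularizes the estimate, but only cross-validated (nested) estimation removes the bias.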
Definitely worth the read.