Neural Networks Are Polynomial Regression

Norman Matloff announces a new paper:

A summary of the paper is:

  • We present a very simple, informal mathematical argument that neural networks (NNs) are in essence polynomial regression (PR). We refer to this as NNAEPR.

  • NNAEPR implies that we can use our knowledge of the “old-fashioned” method of PR to gain insight into the inner workings of NNs, which are widely viewed, somewhat warily, as a “black box.”

  • One such insight is that the outputs of an NN layer will be prone to multicollinearity, with the problem becoming worse with each successive layer. This in turn may explain why convergence issues often develop in NNs. It also suggests that NN users tend to use overly large networks. (A rough way to check this layer-by-layer buildup is sketched after this list.)

  • NNAEPR suggests that one may abandon using NNs altogether, and simply use PR instead.

  • We investigated this on a wide variety of datasets, and found that in every case PR did as well as, and often better than, NNs.

  • We have developed a feature-rich R package, polyreg, to facilitate using PR in multivariate settings. (A bare-bones base-R illustration of the same idea also follows below.)
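
The multicollinearity point is easy to probe empirically. Below is a minimal base-R sketch (not the authors' code, and not the polyreg API) that pushes random data through a toy fully connected tanh network with randomly drawn weights and reports, for each layer's outputs, the condition number and the largest pairwise correlation among units; larger values indicate more collinear layer outputs. The width, depth, and weight scale are arbitrary assumptions, and how quickly collinearity builds up will depend on them and on the data.

    ## Toy check: how collinear are the outputs of successive layers of a
    ## random tanh network? (Illustrative assumptions throughout.)
    set.seed(1)

    n     <- 1000   # observations
    p     <- 5      # input features
    width <- 10     # units per hidden layer
    depth <- 4      # number of hidden layers

    X   <- matrix(rnorm(n * p), n, p)
    act <- X
    for (layer in 1:depth) {
      W   <- matrix(rnorm(ncol(act) * width, sd = 0.5), ncol(act), width)
      act <- tanh(act %*% W)        # this layer's outputs on the n observations
      r   <- cor(act)               # pairwise correlations among the layer's units
      cat(sprintf("layer %d: condition number %.1f, max |cor| %.2f\n",
                  layer, kappa(act), max(abs(r[upper.tri(r)]))))
    }

To make the PR alternative concrete: with only a few predictors, polynomial regression is just a linear model on polynomial terms, which base R fits directly. The polyreg package mentioned above automates generating and managing such terms when there are many predictors or higher degrees; the toy data and degree-2 model below are made-up illustrations of the underlying idea.

    ## Degree-2 polynomial regression on two made-up predictors, in base R.
    set.seed(2)
    d <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
    d$y <- sin(d$x1) + d$x1 * d$x2 + rnorm(300, sd = 0.1)

    fit <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + I(x1 * x2), data = d)
    summary(fit)$r.squared   # share of variation captured by the quadratic model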

The paper and presentation slides are ungated, so check them out. H/T R-bloggers