Bayesian Neural Networks

Yoel Zeldes thinks about neural networks from a different perspective:

The term log P(w), which represents our prior, acts as a regularization term. If you choose a Gaussian distribution with mean 0 as the prior, you get the mathematical equivalent of L2 regularization.
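To make that equivalence concrete, here is a short derivation (my notation, not Zeldes's, assuming an isotropic Gaussian prior with variance sigma squared):

\[
\log P(w) = \sum_i \log \mathcal{N}(w_i \mid 0, \sigma^2)
          = -\frac{1}{2\sigma^2} \sum_i w_i^2 + \text{const},
\]

so maximizing the log-posterior adds an L2 penalty \( \lambda \lVert w \rVert_2^2 \) to the loss, with \( \lambda = \tfrac{1}{2\sigma^2} \), which is exactly weight decay.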

Now that we start thinking about neural networks as probabilistic creatures, we can let the fun begin. For a start, who says we have to output one set of weights at the end of the training process? What if, instead of learning the model’s weights, we learn a distribution over the weights? This would allow us to estimate uncertainty over the weights. So how do we do that?

It’s an interesting approach to the problem.
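As a rough illustration of what learning a distribution over the weights can look like in practice, here is a minimal mean-field sketch in PyTorch (my own example, not code from Zeldes's post): each weight gets a learned mean and standard deviation, the forward pass samples fresh weights via the reparameterization trick, and repeated stochastic forward passes give an uncertainty estimate. A full treatment would also add a KL term against the prior to the training loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    # A linear layer with a factorized Gaussian over its weights instead of
    # point estimates: we learn a mean and a (softplus-parameterized) standard
    # deviation per weight and sample fresh weights on every forward pass.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, 1)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

# Repeated stochastic forward passes give a crude uncertainty estimate.
layer = BayesianLinear(10, 1)
x = torch.randn(4, 10)
preds = torch.stack([layer(x) for _ in range(100)])
print(preds.mean(dim=0), preds.std(dim=0))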

