Shirin Glander explains some of the concepts behind neural networks using H2O as a guide:

Before, when describing the simple perceptron, I said that a result is calculated in a neuron, e.g. by summing up all the incoming data multiplied by weights. However, this has one big disadvantage: such an approach would only enable our neural net to learn linear relationships between data. In order to be able to learn (you can also say approximate) any mathematical problem – no matter how complex – we use activation functions. Activation functions normalize the output of a neuron, e.g. to values between -1 and 1 (Tanh), 0 and 1 (Sigmoid), or by setting negative values to 0 (Rectified Linear Units, ReLU). In H2O we can choose between Tanh, Tanh with Dropout, Rectifier (default), Rectifier with Dropout, Maxout and Maxout with Dropout. Let’s choose Rectifier with Dropout. Dropout is used to improve the generalizability of neural nets by randomly setting a given proportion of nodes to 0. The dropout rate in H2O is specified with two arguments: `hidden_dropout_ratios`, which per default sets 50% of hidden (more on that in a minute) nodes to 0. Here, I want to reduce that proportion to 20%, but let’s talk about hidden layers and hidden nodes first. In addition to hidden dropout, H2O lets us specify a dropout for the input layer with `input_dropout_ratio`. This argument is deactivated by default and this is how we will leave it.
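Dropout itself is easy to picture: during training, each node's output is zeroed with some probability, and in the common "inverted dropout" formulation the survivors are scaled up so the expected activation is unchanged. A minimal NumPy sketch of the idea (illustrative only, not H2O's implementation; the 20% rate mirrors the hidden dropout ratio chosen above):

```python
import numpy as np

def dropout(activations, rate, rng):
    """Zero each node with probability `rate`; scale survivors by 1/(1-rate)
    so the expected activation stays the same (inverted dropout)."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(42)
hidden = np.ones(10)                      # pretend output of a 10-node hidden layer
dropped = dropout(hidden, rate=0.2, rng=rng)
print(dropped)                            # roughly 20% zeros, survivors scaled to 1.25
```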

Read the whole thing and, if you understand German, check out the video as well.

Laura Ellis walks us through some easy techniques for learning about our data using R:

## DIM AND GLIMPSE

Next, we will run the dim function which displays the dimensions of the table. The output takes the form of row, column.

And then we run the glimpse function from the dplyr package. This will display a vertical preview of the dataset. It allows us to easily preview data type and sample data.
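The same first-look pattern exists in most data tools. For instance, a rough pandas analogue of R's `dim()` and `glimpse()` (illustrative only; the post itself works in R, and the toy data frame here is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["alpha", "beta", "gamma"],
    "score": [0.91, 0.47, 0.83],
})

print(df.shape)   # dimensions as (rows, columns), like dim()
df.info()         # column names, dtypes, non-null counts, like glimpse()
print(df.head())  # a quick peek at sample rows
```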

Spending some quality time doing EDA can save you in the long run, as it can help you get a feel for things like data quality, the distributions of variables, and completeness of data.

Manoj Gautam shows how to perform a logistic regression with Apache Spark:

Since we are going to try algorithms like Logistic Regression, we will have to convert the categorical variables in the dataset into numeric variables. There are 2 ways we can do this.

- Category Indexing
- One-Hot Encoding
Here, we will use a combination of StringIndexer and OneHotEncoderEstimator to convert the categorical variables. The `OneHotEncoderEstimator` will return a SparseVector.
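The two approaches can be sketched without Spark at all: category indexing assigns each level an integer, and one-hot encoding expands that integer into a mostly-zero vector (which is why Spark's `OneHotEncoderEstimator` stores it sparsely). A plain-Python sketch of the concept, not of the Spark API:

```python
def category_index(values):
    """Category indexing: map each distinct level to an integer."""
    levels = sorted(set(values))
    index = {level: i for i, level in enumerate(levels)}
    return [index[v] for v in values], index

def one_hot(indices, n_levels):
    """One-hot encoding: one mostly-zero vector per indexed value."""
    return [[1 if i == idx else 0 for i in range(n_levels)] for idx in indices]

colors = ["red", "green", "blue", "green"]
indexed, index = category_index(colors)
encoded = one_hot(indexed, len(index))
print(indexed)     # [2, 1, 0, 1] with levels sorted as blue, green, red
print(encoded[0])  # [0, 0, 1]
```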

Click through for the code and explanation.

Specifically, let us assume that we wish to analyze traffic density for buses and coaches. The main thing we are interested in is the frequency of traffic across a particular route. Let’s take an example. If buses cover 100 miles on a route that is 5 miles long within a certain timeframe, then the frequency will be greater than 100 miles covered on a route that is 10 miles long over the same time period.
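Reading frequency as distance covered divided by route length – i.e. how many times the route was traversed in the timeframe – the comparison works out as follows:

```python
def traversals(miles_covered, route_length_miles):
    # Frequency: how many times the route was covered in the timeframe.
    return miles_covered / route_length_miles

print(traversals(100, 5))   # 20.0 traversals of the 5-mile route
print(traversals(100, 10))  # 10.0 traversals of the 10-mile route
```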

Read on for an interesting example.

Notice only grouping columns and columns passed through an aggregating calculation (such as `max()`) are passed through (the column `z` is not in the result). Now because `y` is a function of `x`, no substantial aggregation is going on; we call this situation a “pseudo aggregation” and we have taught this before. This is also why we made the seemingly strange choice of keeping the variable name `y` (instead of picking a new name such as `max_y`): we expect the `y` values coming out to be the same as the ones coming in, just with changes of length. Pseudo aggregation (using the projection `y[[1]]`) was also used in the solutions of the column indexing problem.

Our `wrapr` package now supplies a special case pseudo-aggregator (or in a mathematical sense: projection): `psagg()`. It works as follows.

In this post, John calls the act of grouping functional dependencies (where we can determine the value of y based on the value of x, for any number of columns in y or x) pseudo-aggregation.
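The idea translates outside R as well: when y is functionally dependent on the grouping key x, every y within a group is identical, so “aggregating” y by taking its first element loses nothing and equals any other choice such as `max()`. A small Python sketch of the concept (the post's `psagg()` is from R's `wrapr`; this is just the idea):

```python
from itertools import groupby

# y is a function of x here: x fully determines y; z is unrelated.
rows = [("a", 10, 1), ("a", 10, 2), ("b", 20, 3), ("b", 20, 4)]

result = {}
for x, group in groupby(sorted(rows), key=lambda r: r[0]):
    ys = [r[1] for r in group]
    # Pseudo aggregation: all ys in the group are equal, so first == max.
    assert ys[0] == max(ys)
    result[x] = ys[0]

print(result)  # {'a': 10, 'b': 20}
```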

Shirin Glander has a new video, currently only in German, but there is an English transcript:

RF is based on decision trees. In machine learning, decision trees are a technique for creating predictive models. They are called decision trees because the prediction follows several branches of “if… then…” decision splits – similar to the branches of a tree. If we imagine that we start with a sample, which we want to predict a class for, we would start at the bottom of a tree and travel up the trunk until we come to the first split-off branch. This split can be thought of as a feature in machine learning; let’s say it would be “age”. We would now make a decision about which branch to follow: “if our sample has an age bigger than 30, continue along the left branch, else continue along the right branch”. This we would do until we come to the next branch and repeat the same decision process until there are no more branches before us. This endpoint is called a leaf and in decision trees would represent the final result: a predicted class or value.

At each branch, the feature thresholds that best split the (remaining) samples locally are found. The most common metrics for defining the “best split” are gini impurity and information gain for classification tasks and variance reduction for regression.
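Of those split metrics, gini impurity is the easiest to compute by hand: it is the probability of mislabeling a sample at a node if you labeled it at random according to the node's class distribution. A short sketch (illustrative, not any library's internals):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini_impurity(["yes", "yes", "no", "no"]))    # 0.5: worst case for two classes
print(gini_impurity(["yes", "yes", "yes", "yes"]))  # 0.0: a pure node
```

A good split is one whose child nodes have much lower (weighted) impurity than the parent.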

Click through for more info and if you understand German, the video is good as well.

We leverage the power of HDP 3.0, from efficient storage (erasure coding) and GPU pooling to containerized TensorFlow and Zeppelin, to enable this use case. We will save the details for a different blog (please see the video). To summarize: as we trained the car on a track, we collected about 30K images with corresponding steering angle data. The training data was stored in an HDP 3.0 cluster, the TensorFlow model was trained using 6 GPU cards, and then the model was deployed back on the car. The deep learning use case highlights the combined power of HDP 3.0.

Click through for more additions and demos.

David Parr shows us how to get started with Microsoft R Client and performs some quick benchmarking:

This message will pop up, and it’s worth noting as it’s got some information in it that you might need to think about:

It’s worth noting that right now Microsoft R Client is lagging behind the current `R` version, and is based on version 3.4 of `R`, not 3.5. This will mean your default package libraries will not be shared between the installations if you are running `R` 3.5.

It’s using a snapshot of `CRAN` called `MRAN` to source packages by default. 90% of the time it will operate just as you expect, but because it takes a ‘snapshot’ of packages, newer features and changes that have hit `CRAN` may not be in the version of the package you are grabbing.

`RevoScaleR` and probably the `ggplot2` and `dplyr` packages will likely be installed for you already as default in Microsoft R Client. The other two you will probably have to install yourself.

Intel MKL will have scanned your system on install and attempted to work out how many cores your processor has. Here it’s identified 2 on my old Lenovo Yoga. This is where the speed boost will come from.

I had an old two-core Lenovo Yoga too, so this article really spoke to me.

For each of these images, I am running the `predict()` function of Keras with the VGG16 model. Because I excluded the last layers of the model, this function will not actually return any class predictions as it would normally do; instead we will get the output of the last layer: `block5_pool (MaxPooling2D)`.

These, we can use as learned features (or abstractions) of the images. Running this part of the code takes several minutes, so I save the output to an RData file (because I sampled randomly, the classes you see below might not be the same as in the `sample_fruits` list above).

Read the whole thing.

Michael Grogan shows us how to use the `plm` package to perform linear regression against panel data:

## Types of data

- Cross-Sectional: Data collected at one particular point in time
- Time Series: Data collected across several time periods
- Panel Data: A mixture of both cross-sectional and time series data, i.e. collected at a particular point in time and across several time periods
- Fixed Effects: Effects that are independent of random disturbances, e.g. observations independent of time.
- Random Effects: Effects that include random disturbances.
Let us see how we can use the `plm` library in R to account for fixed and random effects. There is a video tutorial link at the end of the post.
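To make “fixed effects” concrete: the within estimator removes each unit's constant effect by demeaning y and x inside each unit before regressing. A NumPy sketch on a hypothetical two-unit panel whose true slope is 2 (the post itself uses R's `plm`, which handles this bookkeeping for you):

```python
import numpy as np

# Hypothetical panel: two units, each with its own fixed effect; true slope = 2.
unit = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 4.0])
effects = np.array([5.0, -3.0])               # unit-specific intercepts
y = 2.0 * x + effects[unit]

# Within transformation: subtract each unit's mean, wiping out its fixed effect.
x_w = x - np.array([x[unit == u].mean() for u in unit])
y_w = y - np.array([y[unit == u].mean() for u in unit])

slope = (x_w @ y_w) / (x_w @ x_w)             # OLS on the demeaned data
print(slope)  # recovers the true slope, 2.0
```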

Read on for an example.

Kevin Feasel

2018-11-07

Data Science, Machine Learning, R