
Category: R

Machine Learning Services Updates

Umachandar Jayachandran and team have been busy.  First, they announced a preview of SQL Server ML Services in Azure SQL Database:

In-database Machine Learning support was added in SQL Server 2016 and we are now bringing the same functionality to Azure SQL Database. You can now train and score machine learning models in Azure SQL Database and the predictions can be exposed to any application using your database, easily and seamlessly.

The preview functionality allows you to train and score machine learning models using data that fits in memory (in an R data frame). Please note that the amount of memory available for R script execution depends on the edition of the Azure SQL database and cannot be modified.

No Python support there yet, but it’s upcoming.  Second, we can use the PREDICT function in Azure SQL Database:

Today we are announcing the general availability of the native PREDICT Transact-SQL function in Azure SQL Database. The PREDICT function allows you to perform scoring in real-time using certain RevoScaleR or revoscalepy models in a SQL query without invoking the R or Python runtime.

The PREDICT function support was added in SQL Server 2017. It is a table-valued function that takes a RevoScaleR or revoscalepy model & data (in the form of a table or view or query) as inputs and generates predictions based on the machine learning model. More details of the PREDICT function can be found here.

There are a limited number of models which support PREDICT—things like linear and logistic regression, RevoScaleR’s fast decision trees, etc.  If you have this type of model, however, the predictions stay within SQL Server and end up being much faster than going out to R.
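To make that concrete, here is a rough sketch (mine, not code from the announcements) of the R side of the workflow: train one of the supported RevoScaleR model types and serialize it so a T-SQL query can score it with PREDICT. The data frame and column names are made up, and this assumes an environment where RevoScaleR is available (e.g., SQL Server Machine Learning Services or Microsoft ML Server).

library(RevoScaleR)  # ships with SQL Server ML Services / Microsoft ML Server

# Toy training data standing in for a real table; the columns are invented.
train <- data.frame(
  tip_amount      = c(1.5, 2.0, 0.0, 3.2, 1.0),
  trip_distance   = c(1.1, 2.5, 0.4, 4.0, 0.9),
  passenger_count = c(1, 2, 1, 3, 1)
)

# rxLinMod is one of the model types PREDICT can score natively.
model <- rxLinMod(tip_amount ~ trip_distance + passenger_count, data = train)

# Serialize the model; the resulting raw vector can be stored in a
# varbinary(max) column and referenced by PREDICT in a SQL query.
serialized <- rxSerializeModel(model, realtimeScoringOnly = TRUE)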


dplyr Mutate Quirks

John Mount explains a quirk in dplyr’s mutate function:

It is hard for experts to understand how frustrating the above is to a new R user or to a part time R user. It feels like any variation on the original code causes it to fail. None of the rules they have been taught anticipate this, or tell them how to get out of this situation.

This quickly leads to strong feelings of learned helplessness and anxiety.

Our rule for dplyr::mutate() has been for some time:

Each column name used in a single mutate must appear only on the left-hand-side of a single assignment, or otherwise on the right-hand-side of any number of assignments (but never both sides, even if it is different assignments).

If you do data analysis with R, you’ve probably run into this before.  I certainly have, and it’s nice to understand why this is the case.
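To make the rule concrete, here is a minimal sketch (my toy example, not John's) of the pattern it warns against and the safer alternative:

# A minimal sketch of the rule, using a toy tibble (not from the article).
library(dplyr)

d <- tibble(x = 1:3)

# Risky under the rule: 'y' appears on the left-hand side of one assignment
# and on the right-hand side of another within the same mutate().
# d %>% mutate(y = x + 1, z = y * 2)

# Safer: split the dependent steps into separate mutate() calls so each
# column name appears on only one side per call.
d %>%
  mutate(y = x + 1) %>%
  mutate(z = y * 2)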


Regular Expression Cheat Sheets

Mara Averick shows off a collection of regular expression guides:

There are helpful string-related R packages 📦; stringr (which is built on top of the more comprehensive stringi package) comes to mind. But, at some point in your computing life, you’re gonna need to get down with regular expressions.

And so, here’s a collection of some of the Regex-related links I’ve tweeted 🐦:

Click through for links to regular expression resources.
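If you want a quick taste from R before diving into those links, here is a tiny example of mine using stringr with a regular expression:

# A small illustration (not from the post): pull version numbers out of
# strings with a regular expression via stringr.
library(stringr)

x <- c("R 4.3.2 released", "no version here", "upgrade to 4.4.0")

str_detect(x, "\\d+\\.\\d+\\.\\d+")   # TRUE FALSE TRUE
str_extract(x, "\\d+\\.\\d+\\.\\d+")  # "4.3.2" NA "4.4.0"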


Visualizing A Single Number

Tim Bock shows a dozen methods for visualizing a single number:

There are a number of situations in which it can be advantageous to create a visualization to represent a single number:

  • To communicate with less numerate viewers/readers;

  • Infographics and dashboards commonly use one important number;

  • To attract the attention of distracted or busy viewers/readers;

  • To add some humanity or “color”, to create an emotional connection;

  • Or to increase the redundancy of the presentation (see Improve the Quality of Data Visualizations Using Redundancy).

To a great extent, my favorite is the first.  There are good cases for many of the others—primarily the shock value of the uncountable pictogram—but typically, the best visualization is simple.
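For what it's worth, the simple approach can be a handful of lines of ggplot2. This is my sketch, not one of Tim's twelve examples:

# My sketch, not from the article: a single number rendered as a minimal
# ggplot2 "big number" tile.
library(ggplot2)

ggplot(data.frame(value = 0.37)) +
  geom_text(aes(x = 0, y = 0, label = scales::percent(value)), size = 24) +
  labs(title = "Conversion rate, last 30 days") +
  theme_void()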


Polar Charts In Power BI With R

Leila Etaati shows how to build a polar chart in Power BI using an R component:

I just add a layer to the above formula, “coord_polar()”; this function is also used for creating pie charts. It takes a “theta” argument; in the example below I set theta to the y axis, which produces the charts below.

Normally I don’t much like this type of polar chart, though I’m a big fan of radar charts, which follow a similar concept.
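If you want to try the idea outside of Power BI, here is a minimal ggplot2 sketch (mine, not Leila's code) of a column chart wrapped into polar coordinates with theta mapped to y:

# A minimal sketch of the idea: a stacked column wrapped into polar
# coordinates, which is also how pie/donut-style charts are built in ggplot2.
library(ggplot2)

df <- data.frame(group = c("A", "B", "C"), value = c(30, 45, 25))

ggplot(df, aes(x = "", y = value, fill = group)) +
  geom_col(width = 1) +
  coord_polar(theta = "y")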


Basics Of Survival Analysis

Subhasree Chatterjee explains the basics of survival analysis:

Survival analysis is a set of methods to analyze the ‘time to occurrence’ of an event. The response is often referred to as a failure time, survival time, or event time. These methods are widely used in clinical experiments to analyze the ‘time to death’, but nowadays these methods are being used to predict the ‘when’ and ‘why’ of customer churn or employee turnover as well.

The dependent variables for the analysis are generally two functions:

Read the whole thing.  H/T R-Bloggers
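Those two functions are typically the survival function and the hazard function. If you want to poke at the basics in R, here is a quick sketch (mine, not from the article) using the survival package's bundled lung dataset to fit a Kaplan-Meier curve:

# A minimal sketch: a Kaplan-Meier estimate of the survival function using
# the survival package's built-in lung dataset.
library(survival)

fit <- survfit(Surv(time, status) ~ sex, data = lung)
summary(fit, times = c(90, 180, 365))  # survival estimates at selected days
plot(fit, xlab = "Days", ylab = "Estimated survival probability")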


Naive PCA With R

Pablo Bernabeu gives us a naive method for performing a Principal Component Analysis:

STAGE 1.  Determine whether PCA is appropriate at all, considering the variables

  • Variables should be inter-correlated enough but not too much. Field et al. (2012) provide some thresholds, suggesting that no variable should have many correlations below .30, or any correlation at all above .90. Thus, in the example here, variable Q06 should probably be excluded from the PCA.

  • Bartlett’s test, on the nature of the intercorrelations, should be significant. Significance suggests that the correlation matrix is not an identity matrix, i.e., that the observed correlations are not merely sampling error.

  • KMO (Kaiser-Meyer-Olkin), a measure of sampling adequacy based on common variance (so it serves a similar purpose to Bartlett’s). As Field et al. review, ‘values between .5 and .7 are mediocre, values between .7 and .8 are good, values between .8 and .9 are great and values above .9 are superb’ (p. 761). There’s a general score as well as one per variable. The general one will often be good, whereas the individual scores are more likely to fail. Any variable with a score below .5 should probably be removed, and the test should be run again.

  • Determinant: a check on multicollinearity. The determinant of the correlation matrix should preferably be greater than .00001; smaller values indicate problematic multicollinearity.

PCA is a powerful tool in several fields, including clinical testing.
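For reference, here is a rough sketch (my code, not Pablo's) of how the Stage 1 checks might look in R, using the psych package and a stand-in dataset:

# A rough sketch of the Stage 1 checks; 'dat' stands in for your numeric data.
library(psych)

dat <- na.omit(mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")])
R <- cor(dat)

cortest.bartlett(R, n = nrow(dat))  # Bartlett's test: want a significant p-value
KMO(R)                              # overall and per-variable sampling adequacy
det(R)                              # determinant: want it above .00001

# If the checks pass, run the PCA itself.
pca <- prcomp(dat, center = TRUE, scale. = TRUE)
summary(pca)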


Transit Data Visualization In R

Goncalo Trincao Cunha shows us how to plot General Transit Feed Specification data in R:

GTFS (General Transit Feed Specification) is a specification that defines a data format for public transportation routes, stops, schedules, and associated geographic information.

In this post, we’ll use R with ggplot2 and ggmap to visualize GTFS route and schedule information on a map.

This post uses a GTFS feed from CARRIS, which is a bus public transport operator from the city of Lisbon.

Click through for code and a few interesting maps of Lisbon, Portugal.
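If you want to play along with a different feed, here is a bare-bones sketch (mine, not Goncalo's code) that reads the stops.txt file from an unzipped GTFS feed and plots the stop locations; the path is a placeholder:

# A minimal sketch: GTFS feeds are zip archives of plain-text CSV files,
# and stops.txt includes stop_lat and stop_lon columns.
library(ggplot2)

stops <- read.csv("gtfs/stops.txt", stringsAsFactors = FALSE)

ggplot(stops, aes(x = stop_lon, y = stop_lat)) +
  geom_point(size = 0.3, alpha = 0.5) +
  coord_quickmap() +
  labs(title = "GTFS stops", x = "Longitude", y = "Latitude")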


Automatically E-Mailing R Reports

Tim Ali shows how to use SendGrid and Displayr to generate and e-mail out automated reports in R:

To automatically write and send email reports we need to have three tools:

  • A programming language. If the analysis is non-trivial, R is usually the best way forward.

  • An app that can automatically run the analyses at specified times. I’ve used Displayr. Of course, if you have the time you can avoid this commercial product and set up your own servers. (Disclaimer, I work for Displayr.) You can read this post to see how to get your data into Displayr and sign up to Displayr here if you do not already have an account.

  • An app to actually send the emails. I’ve chosen an email delivery application called SendGrid because R can work with its API.

To be honest, I’d probably use a different mechanism for deploying reports, but if you just need to put something together with a bit of chewing gum and baling wire, the tools are there.  H/T R-Bloggers
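For the "send the email" piece on its own, here is a rough sketch (mine, not from the article) of hitting SendGrid's v3 Mail Send API from R with httr; the addresses are placeholders, and check SendGrid's current documentation for the exact payload:

# A rough sketch of sending a plain-text email through SendGrid's v3 API.
library(httr)
library(jsonlite)

body <- list(
  personalizations = list(list(to = list(list(email = "someone@example.com")))),
  from    = list(email = "reports@example.com"),
  subject = "Automated R report",
  content = list(list(type = "text/plain", value = "Numbers for this week..."))
)

resp <- POST(
  "https://api.sendgrid.com/v3/mail/send",
  add_headers(Authorization = paste("Bearer", Sys.getenv("SENDGRID_API_KEY"))),
  content_type_json(),
  body = toJSON(body, auto_unbox = TRUE)
)
status_code(resp)  # 202 indicates the message was accepted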


Nested Resampling In R

Max Kuhn describes how nested resampling works:

A common method for tuning models is grid search where a candidate set of tuning parameters is created. The full set of models for every combination of the tuning parameter grid and the resamples is created. Each time, the assessment data are used to measure performance and the average value is determined for each tuning parameter.

The potential problem is, once we pick the tuning parameter associated with the best performance, this value is usually quoted as the performance of the model. There is serious potential for optimization bias since we use the same data to tune the model and quote performance. This can result in an optimistic estimate of performance.

Nested resampling does an additional layer of resampling that separates the tuning activities from the process used to estimate the efficacy of the model. An outer resampling scheme is used and, for every split in the outer resample, another full set of resampling splits are created on the original analysis set. For example, if 10-fold cross-validation is used on the outside and 5-fold cross-validation on the inside, a total of 500 models will be fit. The parameter tuning will be conducted 10 times and the best parameters are determined from the average of the 5 assessment sets.

Definitely worth the read.  H/T R-Bloggers
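The nested structure itself is easy to set up with the rsample package; this is a minimal sketch (mine, not the post's code), leaving out the actual model fitting and tuning:

# A minimal sketch: 10-fold cross-validation on the outside, 5-fold inside.
library(rsample)

set.seed(42)
folds <- nested_cv(
  mtcars,
  outside = vfold_cv(v = 10),
  inside  = vfold_cv(v = 5)
)
folds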
