Press "Enter" to skip to content

Category: Data Science

Using Sparklyr To Analyze Flight Data

Aki Ariga uses sparklyr on Apache Spark 2.0 to analyze flight data living in S3:

Using sparklyr enables you to analyze big data on Amazon S3 with R smoothly. You can build a Spark cluster easily with Cloudera Director. sparklyr makes Spark a backend for dplyr, so you can create tidy data from huge, messy data, plot complex maps from that big data the same way you would with small data, and build a predictive model on big data with MLlib. I believe sparklyr helps all R users perform exploratory data analysis faster and more easily on large-scale data. Let’s try!

You can see the R Markdown version of this analysis on RPubs; RStudio makes it easy to publish R Markdown documents there.

Sparklyr is an exciting technology for distributed data analysis.
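
For a flavor of what that workflow looks like, here is a minimal sparklyr sketch in the spirit of the post; the Spark connection settings, S3 bucket path, and column names are placeholders rather than Ariga's actual code:

    library(sparklyr)
    library(dplyr)

    # Connect to the cluster (master and Spark version are placeholders)
    sc <- spark_connect(master = "yarn-client", version = "2.0.0")

    # Read flight data straight from S3 into Spark (hypothetical bucket path)
    flights_tbl <- spark_read_csv(sc, name = "flights",
                                  path = "s3a://your-bucket/flight-data/*.csv")

    # dplyr verbs are translated to Spark SQL and executed on the cluster
    delay_by_carrier <- flights_tbl %>%
      group_by(carrier) %>%
      summarise(flights = n(), avg_delay = mean(arr_delay, na.rm = TRUE)) %>%
      arrange(desc(avg_delay))

    collect(delay_by_carrier)   # pull only the small summarised result back into R

    spark_disconnect(sc)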


Principal Component Analysis Using R

Francisco Lima explains what principal component analysis is and shows how to do it in R:

Three lines of code and we see a clear separation among grape vine cultivars. In addition, the data points are evenly scattered over relatively narrow ranges in both PCs. We could next investigate which parameters contribute the most to this separation and how much variance is explained by each PC, but I will leave it for pcaMethods. We will now repeat the procedure after introducing an outlier in place of the 10th observation.

PCA is extremely useful when you have dozens of contributing factors, as it lets you quickly zero in on the biggest contributors.
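
As a quick illustration (not Francisco's code), base R's prcomp gets you a PCA in a few lines; I'm using the built-in iris data here rather than the grape vine dataset from the post:

    # PCA with base R; scaling matters when variables are on different units
    pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

    summary(pca)                      # variance explained by each component
    plot(pca$x[, 1:2], col = iris$Species,
         xlab = "PC1", ylab = "PC2")  # group separation along PC1/PC2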


Support Vector Machines In R

Deepanshu Bhalla explains what support vector machines are:

The main idea of a support vector machine is to find the optimal hyperplane (a line in 2D, a plane in 3D, and a hyperplane in more than 3 dimensions) which maximizes the margin between two classes. In this case, the two classes are red and blue balls. In layman’s terms, it is finding the optimal separating boundary between two classes (events and non-events).

Deepanshu then goes on to implement this in R.
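
For reference, here is a minimal SVM sketch using the e1071 package; the post's own code and data may differ:

    library(e1071)

    # Two-class problem carved out of the built-in iris data
    dat <- droplevels(subset(iris, Species != "setosa"))

    # A linear-kernel SVM looks for the maximum-margin separating hyperplane
    fit <- svm(Species ~ ., data = dat, kernel = "linear", cost = 1)

    # Confusion matrix on the training data
    table(predicted = predict(fit, dat), actual = dat$Species)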


Data Analysis Basics In R

Sibanjan Das provides some of the basics of data analysis using R:

Let’s start thinking in a logical way about the steps one should perform once we have the data imported into R.

  • The first step would be to discover what’s in the data file that was imported. To do this, we can:
    • Use the head function to view a few rows from the data set. By default, head shows the first 6 rows. Ex: head(mtcars)
    • Use str to view the structure of the imported data. Ex: str(mtcars)
    • Use summary to view a summary of the data. Ex: summary(mtcars)

There’s a lot to data analysis, but this is a good start.
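
As a quick illustration, here are those three calls against the built-in mtcars data set:

    head(mtcars)       # first 6 rows by default; head(mtcars, n = 10) for more
    str(mtcars)        # structure: dimensions, column types, example values
    summary(mtcars)    # per-column summary statistics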


Thinking About Real-Time Analytics

Martin Willcox offers some advice for people getting into the real-time analytics game:

  1. Clarify who will be making the decision – man or machine? Humans have powers of discretion that machines sometimes lack, but they are much slower than silicon-based systems and can only make decisions one at a time, one after another. If we choose to put a human in the loop, we are normally in “please-update-my-dashboard-faster-and-more-often” territory.

  2. It is important to be clear about decision-latency. Think about how soon after a business event you need to take a decision and then implement it. You also need to understand whether decision-latency and data-latency are the same. Sometimes a good decision can be made now on the basis of older data. But sometimes you need the latest, greatest and most up-to-date information to make the right choices.

There are some good insights here.


Understanding Bayesian Priors

Angelika Stefan and Felix Schönbrodt explain the concept of priors:

When reading about Bayesian statistics, you regularly come across terms like “objective priors”, “prior odds”, “prior distribution”, and “normal prior”. However, it may not be intuitively clear that the meaning of “prior” differs across these terms. In fact, there are two meanings of “prior” in the context of Bayesian statistics: (a) prior plausibilities of models, and (b) the quantification of uncertainty about model parameters. As this often leads to confusion for novices in Bayesian statistics, we want to explain these two meanings of priors in the next two blog posts*. The current blog post covers the first meaning of priors.

Priors are a big differentiator between the Bayesian statistical model and the classical/frequentist statistical model.
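
To make the distinction concrete, here is a tiny R sketch, with made-up numbers rather than anything from the post, showing both senses of “prior”:

    # (a) Prior plausibility of models: prior odds updated by a Bayes factor
    prior_odds     <- 1 / 4     # H1 judged four times less plausible than H0
    bayes_factor   <- 10        # data favour H1 ten to one
    posterior_odds <- prior_odds * bayes_factor   # 2.5 in favour of H1

    # (b) Prior on a model parameter: uncertainty about a proportion theta,
    #     expressed here as a Beta(2, 2) distribution
    curve(dbeta(x, 2, 2), from = 0, to = 1,
          xlab = expression(theta), ylab = "Prior density")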


CRISP-DM

Steph Locke explains the CRISP-DM model for data mining projects and applies it to data science projects:

Within a given project, we know that at the beginning of our first ever project we may not have a lot of domain knowledge, or there might be problems with the data or the model might not be valuable enough to put into production. These things happen, and the really nice thing about the CRISP-DM model is that it allows us to do just that. It’s not a single linear path from project kick-off to deployment. It helps you remember not to beat yourself up over having to go back a step. It also equips you with something upfront to explain to managers that sometimes you will need to bounce between some phases, and that’s ok.

This is another place in which “iterate, iterate, iterate” ends up being the best answer available.


Forecasting Restaurant Inspection Failures

David Smith writes about an R model which predicts which restaurants are more likely to fail inspection:

Chicago’s Department of Public Health used the R language to build and deploy the model, and made the code available as an open source project on GitHub. The reasons given are twofold:

  • An open source approach helps build a foundation for other models attempting to forecast violations at food establishments.
  • The analytic code is written in R, an open source, widely-known programming language for statisticians. There is no need for expensive software licenses to view and run this code.

Read on for more details and check out their GitHub repo.


Basic Non-Linear Regression In R

Renata Ghisloti Duarte de Souza gives an example of running a non-linear regression in R:

Now, suppose you were able to find a good function to model your data. With that, we are able to predict future values for our small dataset.

One important thing about the predict() function in R is that it expects a similar dataframe with the same column name and type as the one you used in your model.
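
Here is a minimal sketch of that predict() behavior with nls(); the data and starting values are invented for illustration:

    set.seed(42)
    df <- data.frame(x = 1:20)
    df$y <- 5 * exp(0.2 * df$x) + rnorm(20)

    # Fit a simple exponential growth model with nls()
    fit <- nls(y ~ a * exp(b * x), data = df, start = list(a = 1, b = 0.1))

    # predict() wants a data frame whose columns match the model's by name and type
    new_df <- data.frame(x = 21:25)
    predict(fit, newdata = new_df)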

Click through for several examples.


Mastering Tools

The folks at Sharp Sight Labs explain that a tool’s eventual obsolescence is no reason to avoid mastering it:

The heart of his critique is this: data science is changing very fast, and any tool that you learn will eventually become obsolete.

This is absolutely true.

Every tool has a shelf life.

Every. single. one.

Moreover, it’s possible that tools are going to become obsolete more rapidly than in the past, because the world has just entered a period of rapid technological change. We can’t be certain, but if we’re in a period of rapid technological change, it seems plausible that toolset-changes will become more frequent.

I would tie this to George Stigler’s paper on the economics of information. There is a cost of knowing, which the commenter notes, but there is also a cost of search, even assuming you know where to look. Being effective in any role, be it data scientist or anything else, involves understanding the marginal benefit of pieces of information. This blog post gives you a concrete example of that in the realm of data science.
