Press "Enter" to skip to content

Category: Data Science

DBScan for Clustering in Python

Brendan Tierney takes us through the DBScan algorithm:

Let’s illustrate the use of DBSCAN (Density-Based Spatial Clustering of Applications with Noise), using the scikit-learn Python package, for a “manufactured” dataset. This example will illustrate how this density-based algorithm works (see my other blog post, which compares different clustering algorithms for this same dataset). DBSCAN is better suited for datasets that have disproportionate cluster sizes (or densities) and whose data can be separated in a non-linear fashion.

Click through for an interesting read on a dataset which is historically difficult to cluster (unless you know the general shape and translate everything to polar coordinates).
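
If you want to try it on a similarly tricky shape, here is a minimal scikit-learn sketch; the make_moons dataset and the eps/min_samples values are my own stand-ins, not taken from Brendan's post:

```python
# A minimal DBSCAN sketch; the dataset and parameter values are
# illustrative assumptions, not from the post.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.preprocessing import StandardScaler

# Two interleaved half-moons: non-linearly separable, a classic DBSCAN case
X, _ = make_moons(n_samples=500, noise=0.05, random_state=42)
X = StandardScaler().fit_transform(X)

# eps is the neighborhood radius; min_samples is the density threshold
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

# Label -1 marks noise points; any other label is a cluster id
print(set(labels))
```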

Comments closed

Understanding Support Vector Machines

Luis Valencia takes us through the algorithm for support vector machines:

A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems. Compared to newer algorithms like neural networks, they have two main advantages: higher speed and better performance with a limited number of samples (in the thousands).

Pepperidge Farm remembers when we used genetic algorithms to solve problems because support vector machines were too slow.
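
For reference, a two-group SVM in scikit-learn can be about this short; the simulated dataset and the RBF kernel choice are my assumptions, not Luis's setup:

```python
# A small two-class SVM sketch with scikit-learn; the dataset and
# kernel choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# An RBF-kernel SVM handles non-linear boundaries; C controls margin softness
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```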

Comments closed

Word Stemming and Text Processing in R

Genrikh Ananiev takes us through some examples of text processing in R:

First, there are a lot of classes (in fact, you have as many classes as you have products). And if in this process you have to work not only with the company’s products but also with competitors’, new classes can appear every day, so it becomes pointless to train a model once and then reuse it to predict new products.

Second, the number of documents (different variations of the same product) per class is not well balanced: some classes may have only one document, while others have many more.

Click through for an example of the classical technique versus a classification-based technique.
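
Genrikh works in R, but if you want to see what stemming itself does, here is a quick Python sketch using NLTK; the example words are mine, not from the post:

```python
# Stemming illustration with NLTK; the example words are my own,
# not from the post (which uses R).
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
for word in ["connections", "connected", "connecting"]:
    # All three collapse to the same stem, which is what makes stemmed
    # tokens useful for matching variants of the same product name
    print(word, "->", stemmer.stem(word))
```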

Comments closed

Understanding Logistic Regression

Luis Valencia explains the idea of logistic regression:

Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X.

However, unlike ordinary linear regression, in its most basic form logistic regression’s target value is a binary variable instead of a continuous value.

Read on to learn more about logistic regression. The point I like to make about logistic regression is that people brand new to it say it’s regression, because hey, it has “regression” in its name! People who are more familiar with it say that’s a misnomer and it’s really a classification algorithm, not a regression algorithm. But as Luis shows, people who are very familiar with it understand that it is a regression algorithm, which just happens to have nice classification properties because in many cases, elements get pushed to the edges (0 and 1).
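
To see that edge-pushing behavior in action, here is a short simulation with scikit-learn; the data-generating process is made up purely for illustration:

```python
# Logistic regression predicting P(Y=1); the simulated data is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 1))
# A strong coefficient pushes true probabilities toward 0 and 1
y = (rng.random(1000) < 1 / (1 + np.exp(-4 * X[:, 0]))).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # P(Y=1) for each observation
print(f"predictions near 0 or 1: {np.mean((probs < 0.1) | (probs > 0.9)):.0%}")
```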

Comments closed

Rolling Means with MazamaRollUtils

Jonathan Callahan has an interesting R package for us:

The initial release of MazamaRollUtils provides all the basic rolling functions with features like alignment and missing value removal, along with additional capabilities for smoothing, damping and outlier detection — all common activities in time series analysis.

Click through for an explanation of the process, and then check out the package itself on GitHub. H/T R-Bloggers.
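
MazamaRollUtils is R-native; purely as an illustration of the core operation, here is what a centered rolling mean with missing-value tolerance looks like in pandas (an analogue of the idea, not the package's API):

```python
# Rolling mean analogue in pandas; MazamaRollUtils itself is an R package,
# so this only illustrates the core operation.
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 4.0, 5.0, 6.0])

# center=True mimics centered alignment; min_periods=1 tolerates NaNs,
# similar in spirit to missing value removal
print(s.rolling(window=3, center=True, min_periods=1).mean())
```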

Comments closed

De Moivre’s Equation and Sample Size-Based Variance

Holger von Jouanne-Diedrich demonstrates de Moivre’s equation:

Over one billion dollars have been spent in the US to split up big schools into smaller ones because small schools regularly show up in rankings as top performers.

In this post, I will show you why that money was wasted because of a widespread (but not so well known) statistical artifact, so read on!

Do read on to learn more about this paradox.
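
The equation at issue is de Moivre's: the standard error of a group's mean is sigma / sqrt(n), so smaller groups swing further from the true mean in both directions. A quick simulation (school sizes and score distribution entirely made up) shows small schools crowding the top of a ranking even when every school is identical:

```python
# Simulation of de Moivre's equation: small groups dominate the extremes
# of a ranking purely because their means have higher variance.
import numpy as np

rng = np.random.default_rng(42)
sizes = rng.integers(20, 2000, size=1000)               # made-up school sizes
means = np.array([rng.normal(100, 15, n).mean() for n in sizes])  # same true mean everywhere

# The 20 "best" schools by average score skew heavily small
top = np.argsort(means)[-20:]
print("median school size overall:", int(np.median(sizes)))
print("median size of the top-20 schools:", int(np.median(sizes[top])))
```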

Comments closed

What is Pandas?

Lina Kovacheva starts a new series on Pandas:

First and foremost – what is Pandas?

Pandas is a popular Python library that allows users to easily analyse and manipulate data. It offers powerful and flexible data structures and is vastly popular among data scientists and analysts. As with any other library, to be able to use Pandas you have to import it.

Click through to learn more.
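
For reference, the conventional import plus a first DataFrame (the sample data is mine):

```python
# The conventional import alias, plus a first DataFrame
import pandas as pd

df = pd.DataFrame({"product": ["a", "b", "c"], "sales": [10, 20, 15]})
print(df.describe())  # quick summary statistics for the numeric column
```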

Comments closed

Testing Stock Market Efficiency with Compression Algorithms

Holger von Jouanne-Diedrich has a clever test:

One of the most fiercely fought debates in quantitative finance is whether the stock market (or financial markets in general) is (are) efficient, i.e. whether you can find patterns in them that can be profitably used.

If you want to learn about an ingenious method (that is already present in anyone’s computer) to approach that question, read on!

As soon as I saw the post, my Eugene Fama senses were tingling. The results are not surprising (at least, to anyone who got my reference in the prior sentence), but I did enjoy the rather clever approach to the question.
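
The intuition, if you don't want to click through just yet: a genuinely efficient price series should be incompressible, while patterned data compresses well. Here is a rough Python sketch of that idea using zlib; the post itself works in R, and the data below is simulated, not real prices:

```python
# Compression as a pattern detector: incompressible data looks random,
# compressible data has structure. Simulated streams, not real prices.
import zlib
import numpy as np

rng = np.random.default_rng(42)

# "Returns" quantized to bytes: one stream is pure noise, one repeats
random_stream = rng.integers(0, 256, size=10_000).astype(np.uint8).tobytes()
patterned_stream = bytes([0, 1, 2, 3] * 2_500)

for name, data in [("random", random_stream), ("patterned", patterned_stream)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```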

Comments closed

Subgroup Analysis via Bayesian Hierarchical Modeling

Keith Goldfield ponders subgroup analysis:

Which got me thinking, of course, about subgroup analyses. In the context of a null hypothesis significance testing framework, it is well known that conducting numerous post hoc analyses carries the risk of dramatically inflating the probability of a Type 1 error – concluding there is some sort of effect when in fact there is none. So, if there is no overall effect, and you decide to look at a subgroup of the sample (say patients over 50), you may find that the treatment has an effect in that group. But, if you failed to adjust for multiple tests, then that conclusion may not be warranted. And if that subgroup analysis was not pre-specified or planned ahead of time, that conclusion may be even more dubious.

If we use a Bayesian approach, we might be able to avoid this problem, and there might be no need to adjust for multiple tests. I have started to explore this a bit using simulated data under different data generation processes and prior distribution assumptions. It might all be a bit too much for a single post, so I am planning on spreading it out a bit.

Read on for two separate Bayesian model approaches to the problem. H/T R-Bloggers.
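
The inflation Keith describes is easy to quantify: with k independent subgroup tests at alpha = 0.05, the chance of at least one false positive is 1 - 0.95^k, about 40% for ten tests. A quick simulation of that frequentist side of the problem (my own, with illustrative parameters):

```python
# Type I error inflation from unadjusted subgroup tests: with no true
# effect anywhere, the chance of at least one "significant" subgroup
# grows quickly. Parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_subgroups, n_per_arm = 2000, 10, 50

trials_with_false_positive = 0
for _ in range(n_sims):
    for _ in range(n_subgroups):
        # Both arms drawn from the same distribution: the null is true
        a = rng.normal(size=n_per_arm)
        b = rng.normal(size=n_per_arm)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            trials_with_false_positive += 1
            break  # at least one subgroup looked "significant" in this trial

print("familywise error rate:", trials_with_false_positive / n_sims)  # ~0.40
```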

Comments closed

Using tsoutliers() to Detect Time Series Outliers

Rob J. Hyndman shows off a function in the forecast package in R:

The tsoutliers() function in the forecast package for R is useful for identifying anomalies in a time series. However, it is not properly documented anywhere. This post is intended to fill that gap.

The function began as an answer on CrossValidated and was later added to the forecast package because I thought it might be useful to other people. It has since been updated and made more reliable.

Read on to see how it works. This is one of the reasons I like the R programming language so much for data analysis and statistics: usually, somebody smarter than me has already built a solution to the problem and it’s just a matter of finding the right function. H/T R-Bloggers.
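
The real implementation lives in R's forecast package, but the general residual-based idea (fit a smooth trend, then flag points whose residuals fall far outside the interquartile range) can be sketched in Python; note this is a paraphrase of the approach, not a port of tsoutliers():

```python
# A rough sketch of the residual-based idea behind tsoutliers(): fit a
# smooth trend, then flag points far outside the residuals' IQR.
# A paraphrase of the approach, not a port of the R function.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
y = pd.Series(np.sin(np.linspace(0, 8, 200)) + rng.normal(0, 0.1, 200))
y.iloc[[50, 120]] += 3  # inject two obvious anomalies

# A rolling median is robust to the anomalies it is trying to expose
trend = y.rolling(window=11, center=True, min_periods=1).median()
resid = y - trend
q1, q3 = resid.quantile(0.25), resid.quantile(0.75)
iqr = q3 - q1

outliers = y[(resid < q1 - 3 * iqr) | (resid > q3 + 3 * iqr)]
print(outliers.index.tolist())  # should flag the injected points, 50 and 120
```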

Comments closed