Category: Data Science

Linear Prediction Confidence Region Flare-Out

John Cook explains why the confidence region of a tracked object flares out instead of looking conical (or some other shape):

Suppose you’re tracking some object based on its initial position x0 and initial velocity v0. The initial position and initial velocity are estimated from normal distributions with standard deviations σx and σv. (To keep things simple, let’s assume our object is moving in only one dimension and that the distributions around initial position and velocity are independent.)

The confidence region for the object flares out over time, something like the bell of a trumpet.

Read on for the explanation.
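
To get a feel for the numbers before you read Cook’s explanation: under the quoted assumptions the predicted position is x0 + t·v0, and because the two errors are independent its variance is σx² + t²σv², so the width of the confidence band grows roughly linearly in t once t·σv dominates σx. A minimal Python sketch with made-up values for σx and σv:

import numpy as np

sigma_x, sigma_v = 1.0, 0.5                # hypothetical standard deviations
t = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
half_width = 1.96 * np.sqrt(sigma_x**2 + (t * sigma_v)**2)   # 95% band half-width
for ti, hw in zip(t, half_width):
    print(f"t = {ti:4.1f}  ->  +/- {hw:.2f}")   # nearly flat at first, then roughly linear growth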

Bayesian Average

Jelte Hoekstra has a fun post applying the Bayesian average to board game ratings:

Maybe you want to explore the best boardgames, but instead you find the top 100 filled with 10/10 scores. Experience many such false positives and you will lose faith in the rating system. Let’s be clear, this isn’t exactly incidental either: most games have relatively few votes and suffer from this phenomenon.

The Bayesian average

Fortunately, there are ways to deal with this. BoardGameGeek’s solution is to replace the average with the Bayesian average. In Bayesian statistics we start out with a prior that represents our a priori assumptions. When evidence comes in, we can update this prior, computing a so-called posterior that reflects our updated belief.

Applied to boardgames this means: if we have an unrated game we might as well assume it’s average. If not, the ratings will have to convince us otherwise. This certainly removes outliers as we will see below!

This is a rather interesting article, and you can easily apply the technique to other rating systems as well.
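
If you want to play with the idea, the damped mean is a one-liner. Here is a minimal Python sketch where the prior mean and the pseudo-vote count are placeholders I picked, not BoardGameGeek’s actual parameters:

def bayesian_average(ratings, prior_mean=5.5, prior_votes=100):
    # Shrink the raw mean toward prior_mean; prior_votes acts like a batch of
    # dummy votes at the prior mean (both values here are illustrative only).
    return (prior_votes * prior_mean + sum(ratings)) / (prior_votes + len(ratings))

print(bayesian_average([10, 10, 10]))    # ~5.63: three perfect scores barely move it
print(bayesian_average([8] * 2000))      # ~7.88: many votes pull it close to the raw mean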

Calculating Relative Risk In T-SQL

Mala Mahadevan explains how to calculate relative risk using T-SQL:

In this post we will explore a common statistical term – Relative Risk, otherwise called Risk Factor. Relative Risk is a term that is important to understand when you are doing comparative studies of two groups that are different in some specific way. The most common usage of this is in drug testing – with one group that has been exposed to medication and one group that has not. Or, in a comparison of two different medications, with two groups each exposed to a different one.

Read on for an example of a statistical formula calculation which might actually be easier in T-SQL than in R.
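
For reference, the formula itself is tiny: relative risk is the event rate in the exposed group divided by the event rate in the unexposed group. A minimal Python sketch with hypothetical counts:

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    # Risk in each group = events / group size; relative risk is the ratio of the two risks.
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical trial: 20 of 100 on the drug had the outcome vs 10 of 100 without it.
print(relative_risk(20, 100, 10, 100))   # 2.0 -> the exposed group has twice the risk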

Random Forests In scikit-learn

Mark Needham shows how easy it is to create a random forest model in Python using scikit-learn:

As I mentioned in a blog post a couple of weeks ago, I’ve been playing around with the Kaggle House Prices competition and the most recent thing I tried was training a random forest regressor.

Unfortunately, although it gave me better results locally, it got a worse score on the unseen data, which I figured meant I’d overfitted the model.

I wasn’t really sure how to work out if that theory was true or not, but by chance, I was reading Chris Albon’s blog and found a post where he explains how to inspect the importance of every feature in a random forest. Just what I needed!

There’s a nagging voice in my head saying “Principal Component Analysis” as I read this post.
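
If you want to try the feature-importance trick yourself, scikit-learn exposes it as the feature_importances_ attribute on a fitted forest. A minimal sketch on synthetic data (this is not Mark’s Kaggle code):

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the house-price data: 5 features, only 2 of them informative.
X, y = make_regression(n_samples=500, n_features=5, n_informative=2, random_state=42)
rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X, y)
# feature_importances_ sums to 1; higher values mean the feature mattered more to the forest.
for i, imp in sorted(enumerate(rf.feature_importances_), key=lambda p: -p[1]):
    print(f"feature_{i}: {imp:.3f}")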

Fraud Detection With Python

Kevin Jacobs has a walkthrough of how to use Pandas and scikit-learn to perform fraud detection against a sample set of credit card transactions:

Apparently, the data consists of 28 variables (V1, …, V28), an “Amount” field, a “Class” field, and a “Time” field. We do not know the exact meanings of the variables (due to privacy concerns). The Class field takes the value 0 (when the transaction is not fraudulent) and the value 1 (when a transaction is fraudulent). The data is unbalanced: the number of non-fraudulent transactions (where Class equals 0) is far greater than the number of fraudulent transactions (where Class equals 1). Furthermore, there is a Time field. Further inspection shows that its values are integers, starting from 0.

There is a small trick for getting more information than just the raw records. We can use the following code:

print(df.describe())

This code will give a statistical summary of all the columns. It shows, for example, that the Amount field ranges between 0.00 and 25691.16. Thus, there are no negative transactions in the data.

The Kaggle competition data set is available, so you can follow along.
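
If you do follow along, a couple of extra pandas calls (beyond the describe() above) make the class imbalance explicit; the file and column names below assume the standard Kaggle credit card fraud CSV:

import pandas as pd

df = pd.read_csv("creditcard.csv")                 # the Kaggle credit card fraud data
print(df["Class"].value_counts())                  # counts of non-fraud (0) vs fraud (1)
print(df["Class"].value_counts(normalize=True))    # fraud as a (tiny) fraction of all rows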

Forcing 0 Intercept Inflates R-squared In R

John Mount has an informative post on how you can trick yourself when running linear regression models in R and forcing the y intercept to be 0:

So far so good. Let’s now remove the “intercept term” by adding the “0+” to the fitting command.

m2 <- lm(y~0+x, data=d)
t(broom::glance(m2))
##                        [,1]
## r.squared      7.524811e-01
## adj.r.squared  7.474297e-01
## sigma          3.028515e-01
## statistic      1.489647e+02
## p.value        1.935559e-30
## df             2.000000e+00
## logLik        -2.143244e+01
## AIC            4.886488e+01
## BIC            5.668039e+01
## deviance       8.988464e+00
## df.residual    9.800000e+01
d$pred2 <- predict(m2, newdata = d)

Uh oh. That appeared to vastly improve the reported R-squared and the significance (“p.value”)!

Read on to learn why this happens and how you can prevent this from tricking you in the future.

Facial Recognition With Amazon Rekognition

Chris Adzima describes how his law enforcement agency uses Amazon Rekognition for facial recognition:

Setup was fairly straightforward. In the Washington County jail management system (JMS), we have an archive of mugshots going back to 2001. We needed to get the mugshots (all 300,000 of them) into Amazon S3. Then we needed to index them all in Amazon Rekognition, which took about 3 days.

Our JMS allows us to tag the shots with the following information: front view or side view, scars, marks, or tattoos. We only wanted the front view, so we used those tags to get a list of just those.

Read on for sample implementation details, including moving images to S3, building the facial recognition “database,” and using it.
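
For a sense of what the indexing and lookup steps look like in code, here is a rough boto3 sketch; the bucket, key, and collection names are made up, and error handling and pagination are omitted:

import boto3

rekognition = boto3.client("rekognition")
rekognition.create_collection(CollectionId="mugshots")   # one-time setup

# Index a mugshot that already sits in S3 (hypothetical bucket and key).
rekognition.index_faces(
    CollectionId="mugshots",
    Image={"S3Object": {"Bucket": "my-mugshot-bucket", "Name": "front/12345.jpg"}},
    ExternalImageId="booking-12345",
)

# Later, search the collection for faces matching a new image.
response = rekognition.search_faces_by_image(
    CollectionId="mugshots",
    Image={"S3Object": {"Bucket": "my-mugshot-bucket", "Name": "query.jpg"}},
    FaceMatchThreshold=80,
)
print(response["FaceMatches"])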

Cochran-Mantel-Haenszel Test

Mala Mahadevan explains the Cochran-Mantel-Haenszel test, with two parts up so far.  First, her data set:

Below is the script to create the table and dataset I used. This is just test data and not copied from anywhere.

Second, an introduction to the test itself and solutions in R and T-SQL:

This test is an extension of the Chi Square test I blogged about earlier. It is applied when we have to compare two groups over several levels, and the comparison may involve a third variable.
Let us consider a cohort study as an example – we have two medications, A and B, to treat asthma. We test them on a randomly selected batch of 200 people. Half of them receive drug A and half of them receive drug B. Some of them in either half develop asthma and some have it under control. The data set I have used can be found here. The summarized results are as below.

This series is not yet complete, so stay tuned.
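
While you wait for the remaining parts, the statistic itself is not hard to compute directly. A minimal Python sketch over made-up strata (these counts are illustrative, not the data from the post):

import numpy as np
from scipy.stats import chi2

# Each stratum is a 2x2 table [[a, b], [c, d]]: rows are drug A/B, columns are outcome yes/no.
strata = [np.array([[30, 70], [20, 80]]),
          np.array([[25, 75], [15, 85]])]

num, den = 0.0, 0.0
for t in strata:
    a, n = t[0, 0], t.sum()
    row1, row2 = t[0].sum(), t[1].sum()
    col1, col2 = t[:, 0].sum(), t[:, 1].sum()
    num += a - row1 * col1 / n                             # observed minus expected count in cell (1,1)
    den += row1 * row2 * col1 * col2 / (n**2 * (n - 1))    # hypergeometric variance of that cell

cmh = num**2 / den                 # CMH statistic (no continuity correction), 1 degree of freedom
print(cmh, chi2.sf(cmh, df=1))     # statistic and p-value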

Quantile Regression With Python

Gopi Subramanian discusses one of my favorite regression concepts, heteroskedasticity:

With a variance score of 0.43, linear regression did not do a good job overall. When the x values are close to 0, linear regression gives a good estimate of y, but near the end of the range of x values the predicted y is far away from the actual values and hence becomes completely meaningless.

Here is where Quantile Regression comes to the rescue. I have used the Python package statsmodels 0.8.0 for Quantile Regression.

Let us begin by finding the regression coefficients for the conditional median, the 0.5 quantile.

The article doesn’t render the code very well at all, but Gopi does have the example code on GitHub, so you can follow along that way.
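
For the gist of the statsmodels call, here is a minimal sketch on synthetic heteroskedastic data (this is not Gopi’s exact code, just the same API):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data where the noise grows with x, the situation quantile regression handles well.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2 * x + rng.normal(0, 0.5 + 0.5 * x, 500)
df = pd.DataFrame({"x": x, "y": y})

# Fit the conditional median (the 0.5 quantile); other quantiles only change q.
median_fit = smf.quantreg("y ~ x", df).fit(q=0.5)
print(median_fit.params)
upper_fit = smf.quantreg("y ~ x", df).fit(q=0.9)   # the 90th percentile line
print(upper_fit.params)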

Sentiment Analysis In R

Stefan Feuerriegel and Nicolas Pröllochs have a new package on CRAN:

Our package “SentimentAnalysis” performs a sentiment analysis of textual contents in R. This implementation utilizes various existing dictionaries, such as QDAP or Loughran-McDonald. Furthermore, it can also create customized dictionaries. The latter uses LASSO regularization as a statistical approach to select relevant terms based on an exogenous response variable.

I’m not sure how it stacks up to external services, but it’s another option available to us.
