Press "Enter" to skip to content

Category: Data Science

Correlation and Predictive Power Score in Python

Abhinav Choudhary looks at two methods for understanding the relationship between variables:

We often work with dataframes in Python, which are supported by the pandas library. Pandas comes with a function, corr(), which can be used to find the relationships among the various columns of the data frame.
Syntax: DataFrame.corr()
Returns: a dataframe with values between -1 and 1
For details and parameters of the function, check out the link.
Let’s try this in action. 

Read on to see how it works, how to visualize results, and where Predictive Power Score can be a better option.
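
As a quick, minimal sketch of the corr() call described in the excerpt (the dataframe and column names below are made up for illustration):

```python
import pandas as pd

# A small, made-up dataframe
df = pd.DataFrame({
    "hours_studied": [1, 2, 3, 4, 5, 6],
    "exam_score":    [52, 55, 61, 68, 74, 80],
    "shoe_size":     [38, 42, 39, 44, 40, 41],
})

# Pairwise correlation of the numeric columns; each value falls between -1 and 1
print(df.corr())

# Pearson is the default; Spearman or Kendall can be requested instead
print(df.corr(method="spearman"))
```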

Comments closed

The Power of AUC

John Mount takes a deeper look at Area Under the Curve:

I am finishing up a work-note that has some really neat implications as to why working with AUC is more powerful than one might think.

I think I am far enough along to share the consequences here. This started as some, now reappraised, thoughts on the fallacy of thinking that knowing the AUC (area under the curve) means you know the shape of the ROC plot (receiver operating characteristic plot). I now think for many practical applications the AUC number carries a lot more information about the ROC shape than one might expect.

Read on for the explanation.

Comments closed

Principal Component Analysis in Azure ML

Dinesh Asanka walks us through Principal Component Analysis as an Azure ML Studio data transformation technique:

We will be discussing one of the most common data reduction techniques, named Principal Component Analysis, in Azure Machine Learning in this article. After discussing the basic cleaning techniques and feature selection techniques in previous articles, we will now be looking at a data reduction technique.

A data reduction mechanism can be used to reduce the representation of large-dimensional data. By using a data reduction technique, you can reduce the dimensionality, which will improve the manageability and visualizability of the data. Further, you can achieve similar accuracies.

Read on for the demo.
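
The article demonstrates the technique inside Azure ML Studio; as a rough sketch of the same idea in code (scikit-learn here, purely as an illustration, not the module the article uses):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# 64-dimensional digit images stand in for "large dimensional data"
X, _ = load_digits(return_X_y=True)

# Standardize first, since PCA is driven by the variance of each feature
X_scaled = StandardScaler().fit_transform(X)

# Keep enough principal components to retain 95% of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X.shape, "->", X_reduced.shape)
print("Variance retained:", pca.explained_variance_ratio_.sum())
```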

Comments closed

Explaining the ROC Plot

Nina Zumel takes us through what each element of a ROC curve means:

In our data science teaching, we present the ROC plot (and the area under the curve of the plot, or AUC) as a useful tool for evaluating score-based classifier models, as well as for comparing multiple such models. The ROC is informative and useful, but it’s also perhaps overly concise for a beginner. This leads to a lot of questions from the students: what does the ROC tell us about a model? Why is a bigger AUC better? What does it all mean?

Read on for the answer.
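
For a concrete picture of what the plot encodes, here is a small sketch (labels and scores are made up) showing how each threshold over a model’s scores yields one false positive rate / true positive rate point, and how the AUC summarizes the whole curve:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up true labels and classifier scores
y_true  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
y_score = np.array([0.05, 0.10, 0.20, 0.30, 0.35, 0.40,
                    0.45, 0.50, 0.60, 0.70, 0.80, 0.90])

# Each threshold gives one (FPR, TPR) point on the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold={thr:.2f}  FPR={f:.2f}  TPR={t:.2f}")

# AUC compresses the curve into a single number; higher means better ranking of positives
print("AUC:", roc_auc_score(y_true, y_score))
```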

Comments closed

Fun with Benford’s Law

Nagdev Amruthnath covers a topic which brings me joy:

Benford’s Law is one of the most underrated techniques, commonly used across a wide range of applications. The United States IRS neither confirms nor denies its use of Benford’s law to detect number manipulation in income tax filings. Across the Atlantic, the EU is very open and proudly claims its use of Benford’s law. Today, it is widely used in accounting to detect fraud. Nigrini, a professor at the University of Cape Town, also used this law to identify financial discrepancies in Enron’s financial statements. In another case, Jennifer Golbeck, a professor at the University of Maryland, was able to identify bot accounts on Twitter using Benford’s law. Xiaoyu Wang from the University of Winnipeg even published a report on how to use Benford’s law on images. In the rest of this article, we will talk about Benford’s law and how it can be applied using R.

The applications to images and music were new to me. Very cool. H/T R-Bloggers
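
The article itself works in R; as a language-agnostic illustration of the core check, here is a small Python sketch comparing observed leading-digit frequencies against the Benford distribution (the amounts are made up):

```python
import math
from collections import Counter

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Made-up transaction amounts
amounts = [132.50, 17.20, 1890.00, 2.75, 310.40,
           47.99, 1204.10, 9.85, 560.00, 23.10]

# First significant digit, extracted via scientific notation
leading = [int(f"{abs(a):e}"[0]) for a in amounts]
observed = Counter(leading)

for d in range(1, 10):
    print(f"digit {d}: expected {benford[d]:.3f}, observed {observed.get(d, 0) / len(amounts):.3f}")
```

With real data (invoice amounts, vote counts, pixel statistics), a large gap between the expected and observed columns is the signal that warrants a closer look.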

Comments closed

Covariance and Multicollinearity

Mattan Ben-Shachar gives us an intuitive understanding of multicollinearity and how it can affect an analysis:

The common and almost default approach is to fix age to a constant. This is really what our model does in the first place: the coefficient of height represents the expected change in weight while age is fixed and not allowed to vary. What constant? A natural candidate (and indeed emmeans’ default) is the mean. In our case, the mean age is 14.9 years. So the expected values produced above are for three 14.9 year olds with different heights. But is this data plausible? If I told you I saw a person who was 120cm tall, would you also assume they were 14.9 years old?

No, you would not. And that is exactly what covariance and multicollinearity mean – that some combinations of predictors are more likely than others.

I liked the explanation Mattan provides us. Also be sure to read the warnings near the end of the post around other things to try. H/T R-bloggers
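
To make the point concrete outside of R and emmeans, here is a rough sketch with simulated data (all numbers invented): because age and height covary, holding age at its mean while sweeping height produces predictions for combinations the data barely contains.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Simulated children: age drives height, and both drive weight
age = rng.uniform(5, 25, size=500)
height = 80 + 5 * age + rng.normal(0, 6, size=500)                    # cm
weight = -30 + 0.6 * height + 1.0 * age + rng.normal(0, 4, size=500)  # kg
df = pd.DataFrame({"age": age, "height": height, "weight": weight})

# Ordinary least squares fit of weight ~ height + age
X = np.column_stack([np.ones(len(df)), df["height"], df["age"]])
coef, *_ = np.linalg.lstsq(X, df["weight"], rcond=None)

# Predictions at heights 120/150/180 cm, with age fixed at its mean
mean_age = df["age"].mean()
for h in (120, 150, 180):
    pred = coef[0] + coef[1] * h + coef[2] * mean_age
    print(f"height={h}cm, age={mean_age:.1f}y -> predicted weight {pred:.1f}kg")

# How old are the people who are actually ~120cm tall? Nowhere near the mean age.
print(df.loc[df["height"] < 125, "age"].describe())
```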

Comments closed

Classification Problems and Classification Rules

John Mount warns against simply returning a class in a classification problem:

This statement is a bit of word-play which I will need to unroll a bit. However, the concrete advice is that you often get better results using models that return a continuous score for classification problems. You should make that numeric score available to downstream business logic instead of making a class choice at model prediction time. Using the word “classifier” informally to mean “scoring procedure for classes” is not that harmful. Losing a numeric score is harmful.

Read the whole thing, as John lays out a good argument.
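
In scikit-learn terms, the distinction looks something like this (the model and the 0.2 review threshold are purely illustrative): keep the probability and let downstream logic set the cutoff, rather than accepting the 0.5 default hidden inside predict().

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict() applies a 0.5 threshold and throws the score away
hard_labels = model.predict(X_test)

# predict_proba() keeps the numeric score for downstream business logic
scores = model.predict_proba(X_test)[:, 1]

# Downstream rule with its own, cost-driven cutoff (hypothetical value)
review_threshold = 0.2
flagged = scores >= review_threshold
print("default labels flagged:", hard_labels.sum(), "| business rule flagged:", flagged.sum())
```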

Comments closed

Multi-Armed Bandit Problems

Brian Amadio takes us through one of my favorite classes of problem:

Multi-armed bandits have become a popular alternative to traditional A/B testing for online experimentation at Stitch Fix. We’ve recently decided to extend our experimentation platform to include multi-armed bandits as a first-class feature. This post gives an overview of our experimentation platform architecture, explains some of the theory behind multi-armed bandits, and finally shows how we incorporate them into our platform.

The post gives a good explanation of the concept, as well as the implementation strategy.
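
The platform details are specific to Stitch Fix, but the underlying idea can be sketched in a few lines. Here is a toy epsilon-greedy bandit (arm conversion rates and epsilon are invented; the post’s own implementation may well differ):

```python
import random

true_rates = [0.05, 0.08, 0.12]   # unknown to the algorithm
epsilon = 0.1                     # fraction of traffic spent exploring
pulls = [0, 0, 0]
rewards = [0, 0, 0]

random.seed(1)
for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_rates))   # explore: pick a random arm
    else:
        arm = max(range(len(true_rates)),         # exploit: pick the best-looking arm
                  key=lambda a: rewards[a] / pulls[a] if pulls[a] else 0.0)
    reward = 1 if random.random() < true_rates[arm] else 0
    pulls[arm] += 1
    rewards[arm] += reward

print("pulls per arm:", pulls)
print("estimated rates:", [round(r / p, 3) if p else None for r, p in zip(rewards, pulls)])
```

Unlike a fixed A/B split, traffic shifts toward the best-performing arm as evidence accumulates, while the epsilon share keeps testing the others.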

Comments closed

Kafka Integration with Knime

Swantika Gupta shows off some of Knime’s ability to integrate with Apache Kafka:

Knime Analytics Platform provides its users a way to consume messages from Apache Kafka and publish the transformed results back to Kafka. This allows users to integrate their Knime workflows easily with a distributed streaming pub-sub mechanism.

With Knime 3.6+, users get a Kafka extension with three new nodes:
1. Kafka Connector
2. Kafka Consumer
3. Kafka Producer

Click through to see how to configure each and how to enrich your data with Knime.
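
For readers more at home in code than in Knime’s node-based workflows, the roles those three nodes play map roughly onto a plain Kafka client. A minimal sketch with the kafka-python package (broker address and topic names are placeholders, not anything from the article):

```python
from kafka import KafkaConsumer, KafkaProducer

# Publish a transformed result back to Kafka (placeholder broker and topic)
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("transformed-results", value=b'{"customer_id": 42, "score": 0.87}')
producer.flush()

# Consume messages from a topic (placeholder names again)
consumer = KafkaConsumer("raw-events",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for message in consumer:
    print(message.value)
```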

Comments closed

Real-World Sentiment Analysis Examples

Ines Roldos shares a few examples of sentiment analysis:

Net Promoter Score (NPS) surveys are one of the most common ways of learning how customers perceive a product or service. Basically, they consist of two stages: first, you ask a customer to score a business from 0 to 10, then you ask them to give reasons for that score with an open-ended question.

When it comes to processing the results, the first stage is easy: you just have to calculate the average score. But when it comes to analyzing tons of open-ended NPS responses, the analysis becomes more complicated. Imagine if your team had to tag hundreds of responses manually. Not only would it be a tedious and time-consuming task, it may also lead to inconsistent results derived from different criteria during the tagging process.

Fortunately, sentiment analysis enables you to process large volumes of NPS responses and obtain consistent results in a very fast and simple way.

It might just be the industry I’m in, but I don’t really get excited about sentiment analysis. Still, don’t let my biases influence your thought process too much.
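
If you do want to try the idea, a minimal sketch looks something like this; VADER from NLTK is used purely as an illustrative stand-in for whatever sentiment model you actually deploy, and the responses are made up:

```python
# Requires: pip install nltk, plus a one-time nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

responses = [
    "Love the product, support answered me in minutes!",
    "Okay overall but the mobile app keeps crashing.",
    "Terrible onboarding, I almost gave up on day one.",
]

sia = SentimentIntensityAnalyzer()
for text in responses:
    score = sia.polarity_scores(text)["compound"]   # ranges from -1 (negative) to +1 (positive)
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8s} ({score:+.2f})  {text}")
```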

Comments closed