Press "Enter" to skip to content

Category: Data Science

The Dual Perils of Overfitting and Data Leakage

John Mount shares notes on a theme:

One of the bigger risks of iterative statistical or machine learning fitting procedures is over-fit or the dreaded data leak.

Over-fit is when a model performs better on training data than on future data. Some degree of over-fit is expected. A data leak is when the model learns things about the evaluation set that it would not know about the future data the model will be applied to. This can drive models that look great on training and (supposedly) held-out data, but don’t work in practice.
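
To make the leak concrete, here is a toy sketch of my own (not Mount's example): a common version is preprocessing with statistics computed over the full dataset before splitting.

```r
# Toy illustration of a data leak: scaling with statistics computed on
# the full dataset lets information about held-out rows into training.
set.seed(42)
x <- rnorm(200)

# Leaky: center and scale using ALL rows, then split
x_leaky <- scale(x)            # mean/sd include the future test rows

# Clean: split first, then scale test data with TRAINING statistics only
train_idx <- sample(200, 150)
mu <- mean(x[train_idx]); s <- sd(x[train_idx])
x_train <- (x[train_idx] - mu) / s
x_test  <- (x[-train_idx] - mu) / s   # no peeking at test-set statistics
```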

Click through for the rest of the story, and be sure to check out the comments for a notebook digging further into one of the topics.

Model Diagnostics for Statistics vs Machine Learning

Christian Lorentzen talks diagnostics:

In this post, we show how different use cases require different model diagnostics. In short, we compare (statistical) inference and prediction.

As an example, we use a simple linear model for the Munich rent index dataset, which was kindly provided by the authors of Regression – Models, Methods and Applications 2nd ed. (2021). This dataset contains monthly rents in EUR (rent) for about 3000 apartments in Munich, Germany, from 1999.

Read on to learn more about this dataset and how the mindset differs if you’re thinking about inference versus prediction.
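
To caricature the distinction in code (a sketch on simulated stand-in data, not the Munich rent index), the same lm() fit serves both mindsets:

```r
# Inference reads the coefficient table; prediction checks
# out-of-sample error on a holdout set.
set.seed(1)
d <- data.frame(area = runif(300, 20, 120))
d$rent <- 150 + 8 * d$area + rnorm(300, sd = 80)

train <- d[1:200, ]
test  <- d[201:300, ]
fit <- lm(rent ~ area, data = train)

summary(fit)$coefficients    # inference: estimates, SEs, p-values
sqrt(mean((test$rent - predict(fit, newdata = test))^2))  # holdout RMSE
```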

Breaking down the Limitations of R^2

M. Fatih Tüzen explains an important regression concept:

When building a statistical model, one of the first numbers analysts and data scientists often cite is the R², or coefficient of determination. It’s widely reported in research, academic theses, and industry reports — and yet, frequently misunderstood or misused.

Does a high R² mean your model is good? Is it enough to evaluate model performance? What about its adjusted or predictive counterparts?

Read on to learn the answers to each question. H/T R-Bloggers.
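
One limitation is easy to demonstrate yourself: R² never decreases when you add predictors, even pure noise, which is why the adjusted version exists. A quick sketch:

```r
# R^2 can only go up as predictors are added, even irrelevant ones;
# adjusted R^2 penalizes the extra parameters.
set.seed(7)
n <- 100
x <- rnorm(n); y <- x + rnorm(n)
noise <- matrix(rnorm(n * 10), n)   # 10 irrelevant predictors

f1 <- lm(y ~ x)
f2 <- lm(y ~ x + noise)

c(summary(f1)$r.squared, summary(f2)$r.squared)   # second is higher
c(summary(f1)$adj.r.squared, summary(f2)$adj.r.squared)
```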

The Monty Hall Problem

I have a new video:

In this video, I walk through the classic Monty Hall problem, based on the game show Let’s Make a Deal. I explain the paradox behind the problem and demonstrate that it’s better to switch doors.

I’m not joking at all when I say it took me years of listening to explanations before it actually clicked. Some of it is my innate stubbornness, but I think this is a great example of a true paradox, where the intuitive answer is wrong and first-level reasoning also leads you astray.
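
If the explanations don’t click for you either, simulation might help. The key observation is that switching wins exactly when your first pick was wrong:

```r
# Simulating Monty Hall: the host's reveal means switching always lands
# on the prize whenever the first pick missed, i.e. 2/3 of the time.
set.seed(3)
n <- 100000
prize  <- sample(3, n, replace = TRUE)
choice <- sample(3, n, replace = TRUE)

mean(choice == prize)   # staying wins   ~1/3
mean(choice != prize)   # switching wins ~2/3
```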

Data Splitting and Cross-Validation in R

Nick Han has a pair of articles. First up is on data splitting and pre-processing:

Data preprocessing is a crucial step in any machine learning workflow. It ensures that your data is clean, consistent, and ready for modeling. In this blog post, we’ll walk through the process of splitting and preprocessing data in R, using the rsample package for data splitting and saving the results for future use.

H/T R-Bloggers for that one.
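
The core of the splitting step looks something like this (a minimal sketch with {rsample}, using mtcars as stand-in data):

```r
library(rsample)

set.seed(123)
split <- initial_split(mtcars, prop = 0.8)   # 80/20 train/test split
train <- training(split)
test  <- testing(split)

# Preprocess with training statistics only, then save for later use
mu <- mean(train$hp); s <- sd(train$hp)
train$hp_z <- (train$hp - mu) / s
test$hp_z  <- (test$hp - mu) / s
saveRDS(list(train = train, test = test), "split_data.rds")
```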

The second involves using cross-validation via the caret package in R:

Cross-validation is a resampling technique used to assess the performance and generalizability of machine learning models. It helps address issues like overfitting and ensures that the model’s performance is consistent across different subsets of the data. By splitting the data into multiple folds and repeating the process, cross-validation provides a robust estimate of model performance.

H/T R-Bloggers for that as well.
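
The caret pattern boils down to a trainControl object plus train(); a minimal sketch:

```r
library(caret)

set.seed(123)
ctrl <- trainControl(method = "cv", number = 10)   # 10-fold CV
fit  <- train(mpg ~ ., data = mtcars,
              method = "lm", trControl = ctrl)
fit$results   # cross-validated RMSE, R^2, MAE averaged over folds
```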

The Basics of Bayes’ Theorem

I have a new video:

In this video, I provide an introduction to Bayes’ theorem, explaining the key concepts and terms, as well as solving a totally realistic problem via Bayesian analysis.

My goal in this video was to explain a counter-intuitive phenomenon: how much a positive test or piece of information moves the needle depends primarily on how frequently the event normally happens and how frequently we generate false positives in tests. The less common the scenario, the less your positive test actually moves the needle.
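
The arithmetic is worth seeing once. With illustrative numbers of my own choosing (1% prevalence, 95% sensitivity, 10% false positive rate):

```r
# Bayes' theorem: P(disease | +) = P(+ | disease) P(disease) / P(+)
prior <- 0.01   # P(disease): the base rate
sens  <- 0.95   # P(+ | disease)
fpr   <- 0.10   # P(+ | no disease)

posterior <- (sens * prior) /
  (sens * prior + fpr * (1 - prior))
posterior   # ~0.088: a positive test still leaves under 9% probability
```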

Converting a CSV to Parquet with DuckDB and Polars in R

Michael Mayer makes a swap:

In this recent post, we have used Polars and DuckDB to convert a large CSV file to Parquet in streaming mode – and Python.

Different people have contacted me and asked: “and in R?”

Simple answer: We have DuckDB, and we have different Polars bindings. Here, we are using {polars} which is currently being overhauled into {neopolars}.

Click through for the comparison.
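
For reference, the DuckDB route in R takes only a few lines; here is a sketch with placeholder file names:

```r
# DuckDB streams the CSV to Parquet without loading it all into memory
library(duckdb)

con <- dbConnect(duckdb())
dbExecute(con, "
  COPY (SELECT * FROM read_csv_auto('large_file.csv'))
  TO 'large_file.parquet' (FORMAT PARQUET)
")
dbDisconnect(con, shutdown = TRUE)
```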

Calculating a Matrix Inversion in SQL Server

Sebastiao Pereira performs matrix math in-database:

There are numerous applications for the inverse of a given matrix. Is it possible to compute one using only SQL Server? Read on to learn how to build a matrix inverse calculator using a set of SQL Server custom functions.

I expect this to be extremely slow in comparison to GPU-based methods using a language like C, but this approach maximizes style points.
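
For readers who want the algorithm without the T-SQL, here is one standard approach (not necessarily the article's method), Gauss-Jordan elimination on an augmented matrix, sketched in R:

```r
# Gauss-Jordan: row-reduce [A | I] until the left block is I;
# the right block is then A^{-1}.
gauss_jordan_inverse <- function(A) {
  n <- nrow(A)
  aug <- cbind(A, diag(n))                     # augmented matrix [A | I]
  for (i in seq_len(n)) {
    p <- which.max(abs(aug[i:n, i])) + i - 1   # partial pivoting
    aug[c(i, p), ] <- aug[c(p, i), ]
    aug[i, ] <- aug[i, ] / aug[i, i]
    for (j in setdiff(seq_len(n), i)) {
      aug[j, ] <- aug[j, ] - aug[j, i] * aug[i, ]
    }
  }
  aug[, (n + 1):(2 * n)]
}

A <- matrix(c(4, 7, 2, 6), 2)
gauss_jordan_inverse(A) %*% A   # ~ identity matrix
```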

Solving Linear Equations in SQL Server

Sebastiao Pereira implements a function:

Solving linear equations is essential for solving real-world problems in Science, Engineering, Data Analysis, Machine Learning, Economics, Finance, and other areas. Is it possible to have a tool to solve linear equations directly in SQL Server? We will look at how to create a Gauss-Seidel method function for SQL Server.

This is one way to solve a series of linear equations, and it’s a pretty neat implementation.
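
The iteration itself is simple enough to sketch in R (the article implements it in T-SQL): each sweep recomputes x_i from the latest values of the other unknowns.

```r
# Gauss-Seidel: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii,
# reusing already-updated components within the same sweep.
gauss_seidel <- function(A, b, tol = 1e-8, max_iter = 1000) {
  x <- rep(0, length(b))
  for (iter in seq_len(max_iter)) {
    x_old <- x
    for (i in seq_along(b)) {
      x[i] <- (b[i] - sum(A[i, -i] * x[-i])) / A[i, i]
    }
    if (max(abs(x - x_old)) < tol) break
  }
  x
}

# Converges for diagonally dominant systems like this one
A <- matrix(c(4, 1, 1, 3), 2)
b <- c(1, 2)
gauss_seidel(A, b)   # compare with solve(A, b)
```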
