What’s Old is New Again: Lakebases

Daniel Janik notes the cyclical nature of things:

For years, the narrative pushed was that traditional relational databases were ill-suited for the scale and complexity of modern BI solutions. The marketing was something like: “Databases don’t belong in BI; use Spark!” We embraced distributed computing frameworks, data lakes, and complex ETL pipelines to move data from operational databases into analytical engines. The idea was to separate transactional workloads from analytical ones to ensure performance and scalability. Spark, with its ability to handle massive datasets and flexible processing, became the darling of the data world.

“Remember, Sully, when I said you don’t need databases anymore?”

“Yeah, Matrix, I remember!”

“I lied.”


Grouping Sets in T-SQL

Erik Darling has a new video.

Erik mentions that he doesn’t often see GROUPING SETS in the wild. I’ve used them several times. And the use of the term “several times” probably gives you exactly the feeling that I intended. I really like grouping sets for very specific analytical system purposes (at least for moderate-sized datasets), so I’m glad that syntax is there. But outside of reporting queries, it’s a really uncommon bit of syntax.
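
If you haven't run into the syntax before, here is a minimal sketch (my example, not Erik's) of the classic reporting use case: detail rows, per-region subtotals, and a grand total from a single scan of a hypothetical dbo.Sales table:

    SELECT
        Region,
        Product,
        SUM(Amount) AS TotalAmount
    FROM dbo.Sales
    GROUP BY GROUPING SETS
    (
        (Region, Product),  -- detail level
        (Region),           -- subtotal per region
        ()                  -- grand total
    );

The GROUPING() function is handy alongside this, as it lets you distinguish the NULLs that subtotal rows introduce from genuine NULLs in the data.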


Stolen Memory in SQL Server

Chad Callihan is overdrawn at the memory bank:

When you specify the max amount of memory that SQL Server can use, you may think that’s all the memory to be used for the buffer pool. That’s not the case. SQL Server will use memory not only for the buffer pool but also in tasks such as sorts, memory grants, and other internal tasks. These tasks can swoop in and “steal” memory from the buffer pool, causing more work for SQL Server.

Click through for some notes on the topic, as well as your reminder that “128 GB for the buffer pool” in Standard Edition doesn’t mean “128 GB total memory usage for SQL Server.”
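
If you want to watch this happen on your own instance, one option (my query, not necessarily anything from Chad's post) is the memory manager performance counters, which break total server memory into its buffer pool and stolen portions:

    SELECT
        counter_name,
        cntr_value / 1024 AS memory_mb
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Memory Manager%'
        AND counter_name IN
        (
            'Total Server Memory (KB)',
            'Database Cache Memory (KB)',
            'Stolen Server Memory (KB)'
        );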


Percentage Splits with Window Functions

Andy Brownsword breaks things up:

Sometimes you want to segment records. It may be splitting a customer base for marketing purposes, or segmenting a user base for a new feature. Good segmentation makes clean divisions in the data.

In this post we’ll see a way to achieve that with a great deal of help from Window Functions.

Click through for Andy's motivating example, an approach that absolutely will not work the way you want it to, as well as a window function technique that does.
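
To give a rough idea of the shape of a working approach (a sketch of my own, not necessarily Andy's technique), you can turn each row's position into a percentage with ROW_NUMBER() and COUNT(*) OVER (), then carve segments out of that:

    WITH Ranked AS
    (
        SELECT
            CustomerID,
            ROW_NUMBER() OVER (ORDER BY CustomerID) * 1.0
                / COUNT(*) OVER () AS PctPosition
        FROM dbo.Customers  -- hypothetical table
    )
    SELECT
        CustomerID,
        CASE
            WHEN PctPosition <= 0.20 THEN 'A'  -- first 20%
            WHEN PctPosition <= 0.50 THEN 'B'  -- next 30%
            ELSE 'C'                           -- remaining 50%
        END AS Segment
    FROM Ranked;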


Making k-Means Clustering Better

Matthew Mayo shares a few tips:

The k-means algorithm is a cornerstone of unsupervised machine learning, known for its simplicity and trusted for its efficiency in partitioning data into a predetermined number of clusters. Its straightforward approach — assigning data points to the nearest centroid and then updating the centroid based on the mean of the assigned points — makes it one of the first algorithms most data scientists learn. It is a workhorse, capable of providing quick and valuable insights into the underlying structure of a dataset.

This simplicity comes with a set of limitations, however. Standard k-means often struggles when faced with the complexities of real-world data. Its performance can be sensitive to the initial placement of centroids, it requires the number of clusters to be specified in advance, and it fundamentally assumes that clusters are spherical and evenly sized. These assumptions rarely hold true in the wild, leading to suboptimal or even misleading results.

Read on for a few ways to relax some of the constraints in k-means clustering.
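
As a quick illustration of relaxing two of those constraints (my sketch, not code from Matthew's article), scikit-learn's k-means++ seeding tames the initialization sensitivity, and silhouette scores let you pick k from the data rather than fixing it up front:

    # Assumes scikit-learn; make_blobs stands in for real data.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

    best_k, best_score = None, -1.0
    for k in range(2, 9):
        # k-means++ spreads initial centroids out; n_init retries reduce
        # sensitivity to any single unlucky seeding.
        km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=42)
        labels = km.fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"best k = {best_k} (silhouette = {best_score:.3f})")

Neither trick helps with the spherical-cluster assumption, which is where alternatives like Gaussian mixture models or DBSCAN come into the picture.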


Incremental Data Load into Parquet Files from Python

Lee Asher loads some data:

Parquet is a column-oriented open-source storage format increasingly used for “big data” analytics. Yet despite its growing popularity as a native format for data lakes and data warehouses, tools for maintaining these environments remain scarce. Getting data from a SQL environment into Parquet isn’t difficult – but how do we maintain that data over time, keeping it current? In other words, if we already have an existing Parquet file, how can we efficiently append new data to it?

In this article, we’ll introduce the Parquet format, explain some strategies for incrementally updating a Parquet repository, and, with a simple Python script, implement a nightly-feed update process.

One word I expected but did not see here: Delta. That (or Apache Iceberg) is how we normally handle incremental data modification on top of Parquet. Lee shows us a different route that can work.
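
For comparison's sake (my sketch, not Lee's script), the simplest plain-Python route treats a directory of Parquet files as one logical table and appends each nightly batch as a new file via pyarrow:

    import pyarrow as pa
    import pyarrow.dataset as ds
    import pyarrow.parquet as pq

    # Hypothetical nightly batch; in practice this would come from SQL.
    batch = pa.table({
        "order_id": [1001, 1002],
        "amount": [19.99, 42.50],
    })

    # Each load lands as a new file inside the dataset directory.
    pq.write_to_dataset(batch, root_path="orders_parquet")

    # Readers stitch the directory back together as a single table.
    full = ds.dataset("orders_parquet", format="parquet").to_table()
    print(full.num_rows)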


PostgreSQL for the Oracle DBA: Tuples and MVCC

Kellyn Gorman continues a series on PostgreSQL for Oracle DBAs:

After our first two weeks of ensuring Grant and I didn't burn down SQLServerCentral while figuring out how Steve Jones has kept the pace he has for so long (quite an impressive feat, I think we'd both agree!), I'm back to working with my comparisons and building more knowledge in PostgreSQL. What caught my attention this week was the simple concept of a row (or tuple), which might seem universal in relational systems; after all, data is data no matter the platform, right? But under the hood, the way databases store, manage, and control visibility of that data can differ drastically. This is especially true when comparing PostgreSQL with how Oracle manages rows.

Read on to learn more about how these systems work.
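
If you want to see the MVCC bookkeeping for yourself (my example, not from Kellyn's post), PostgreSQL exposes it on every row through hidden system columns:

    -- xmin: the transaction that created this tuple version
    -- xmax: the transaction that deleted or updated it (0 if still live)
    -- ctid: the tuple's physical location as (page, item)
    SELECT xmin, xmax, ctid, *
    FROM some_table   -- any table of yours
    LIMIT 5;

Update a row and re-run the query: the ctid changes because PostgreSQL writes a brand-new tuple version rather than updating in place, which is the heart of the contrast with Oracle's undo-based approach.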
