Press "Enter" to skip to content


S3-Compatible Object Storage in SQL Server 2025

Anthony Nocentino updates a guide for SQL Server 2025:

In this blog post, I’ve implemented two example environments for using SQL Server’s S3 object integration. One for backup and restore to S3-compatible object storage and the other for data virtualization using PolyBase connectivity to S3-compatible object storage. This work aims to get you up and running as quickly as possible to work with these new features. I implemented this in Docker Compose since that handles all the implementation and configuration steps for you. The complete code for this is available on my GitHub repo. I’m walking you through the implementation here in this post.

Click through to see the updates Anthony has to his scripts.
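
To give a flavor of the end state Anthony's Compose environment sets up, here is a minimal T-SQL sketch of both scenarios. The MinIO endpoint, bucket names, and keys below are hypothetical placeholders, not his actual configuration:

```sql
-- Backup and restore: a server-level credential whose name must match the URL prefix.
CREATE CREDENTIAL [s3://minio:9000/sqlbackups]
WITH IDENTITY = 'S3 Access Key',
     SECRET = 'myaccesskey:mysecretkey';   -- access key and secret key, colon-separated

BACKUP DATABASE AdventureWorks
TO URL = 's3://minio:9000/sqlbackups/AdventureWorks.bak'
WITH COMPRESSION, STATS = 10;

-- Data virtualization: a database scoped credential (requires a database master key)
-- plus an external data source for PolyBase queries to target.
CREATE DATABASE SCOPED CREDENTIAL s3_creds
WITH IDENTITY = 'S3 Access Key',
     SECRET = 'myaccesskey:mysecretkey';

CREATE EXTERNAL DATA SOURCE s3_data
WITH (
    LOCATION = 's3://minio:9000/datalake',
    CREDENTIAL = s3_creds
);

-- Query a Parquet file in the bucket directly.
SELECT TOP (10) *
FROM OPENROWSET(
    BULK '/sales/sales.parquet',
    FORMAT = 'PARQUET',
    DATA_SOURCE = 's3_data'
) AS s;
```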


Content Discovery in Microsoft Fabric

Jon Vöge wants to find data:

So you built a nice Data Platform on Microsoft Fabric. Users are happily using a few Models and Reports, but you face two problems:

  1. Users are not aware of all the other awesome models, reports and even lakehouses that they already have access to, which they should be using.
  2. Users also don’t know anything about the models, reports, and lakehouses that they don’t have access to but which could also be useful to them if they requested access.

For my take on how best to solve this natively in Fabric, read on below.

Read on to see how you can enable content discovery.


Scaling On-Prem Vector Search with Ollama and Nginx

Anthony Nocentino solves a problem:

When you call out to an external embedding service from T-SQL via REST over HTTPS, you’re limited by the throughput of that backend. If you’re running a single Ollama instance, you’ll quickly hit a ceiling on how fast you can generate embeddings, especially for large datasets. I recently attended an event and discussed this topic. My first attempt at generating embeddings was for a three-million-row table. I had access to some world-class hardware to generate the embeddings. When I arrived at the lab and initiated the embedding generation process for this dataset, I quickly realized it would take approximately 9 days to complete. Upon closer examination, I found that I was not utilizing the GPUs to their full potential; in fact, I was only using about 15% of one GPU’s capacity. So I started to cook up this concept in my head, and here we are, load balancing embedding generation across multiple instances of Ollama to more fully utilize the resources.

Click through for the solution.
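
To make the shape of the fix concrete: Nginx fronts the Ollama instances, so the database only ever talks to a single endpoint. A minimal sketch, assuming SQL Server 2025's preview external model syntax and a hypothetical load balancer host name (ollama-lb), not Anthony's actual setup:

```sql
-- External model pointing at the Nginx front end rather than any one Ollama instance.
CREATE EXTERNAL MODEL ollama
WITH (
    LOCATION = 'https://ollama-lb:443/api/embed',
    API_FORMAT = 'Ollama',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'nomic-embed-text'
);

-- Each embedding call is now spread across the backing instances by Nginx.
SELECT AI_GENERATE_EMBEDDINGS(N'load balancing embeddings' USE MODEL ollama);
```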


Inspecting the Postgres Write-Ahead Log

Henrietta Dombrovskaya digs into the write-ahead log:

First, when the users fixed one of the primary suspect jobs, the situation with WAL growth didn’t change. Second, the rate of growth couldn’t be explained by these suboptimal jobs: the data volumes they were removing and reloading were still orders of magnitude smaller than the WAL size we were dealing with. Finally, I decided to do what I should have done from the start – take a look at what exactly was in these super-fast-growing WALs.

Read on to learn what Henrietta found. Also check out the comments for some additional context.
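
If you want to run the same sort of investigation, pg_waldump --stats against the WAL segment files breaks down which record types are eating the space. From inside the database, a rough sketch for measuring the growth rate (the LSN literal is a hypothetical earlier snapshot, not a real value):

```sql
-- Current WAL insert position.
SELECT pg_current_wal_lsn();

-- WAL bytes generated since a previously captured LSN (placeholder value).
SELECT pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), '2E/85D0A420'::pg_lsn)
       ) AS wal_since_snapshot;

-- Cumulative server-wide WAL statistics (PostgreSQL 14+).
SELECT wal_records, wal_fpi, wal_bytes
FROM pg_stat_wal;
```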


Generating and Recognizing Hash Collisions in SQL Server

Hugo Kornelis continues a deep dive into hash tables in SQL Server:

Welcome back! In the previous parts, I first showed how a Hash Match (Left Outer Join) can give insight in the order of data in a hash table, and then used that trick to obtain and verify some interesting insights into the internal structure of such a table. It’s now time to see if this same trick can also be used to find hash collisions.

Hugo finds some interesting results along the way, so check it out.
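
The underlying trick, in rough form: force a hash join so that the table under inspection becomes the build input, and the build rows with no probe-side match stream out at the end in hash table order. A sketch with hypothetical tables, not Hugo's actual test setup:

```sql
-- The HASH hint forces a Hash Match join and fixes the join order, so
-- dbo.BuildSide becomes the build input. Build rows with no probe-side
-- match are emitted last, in the order they sit in the hash table.
SELECT b.JoinKey AS build_key,
       p.JoinKey AS probe_key
FROM dbo.BuildSide AS b
LEFT OUTER HASH JOIN dbo.ProbeSide AS p
    ON p.JoinKey = b.JoinKey;
```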


Tokenization in SQL Server

Sebastiao Pereira demonstrates a combination of encryption and redirection to store sensitive data:

As privacy regulations like the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the Payment Card Industry Data Security Standard (PCI DSS) tighten, organizations have an increased focus on protecting sensitive information within databases. Tokenization is one option for adhering to those regulations. Let’s see how to implement SQL tokenization in SQL Server.

This is a reasonably clever solution, though if you need to search on any of the tokenized (i.e., encrypted and moved to a separate table) values, performance would be miserable. Even displaying the results for a moderately sized result set would run into serious performance issues. I suppose that if you, for some regulatory reason, need to keep these tokens stored separately from the data, then you manage expectations as best you can.
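
As a rough illustration of the pattern, not Sebastiao's exact implementation, with hypothetical key, certificate, and table names:

```sql
-- Vault table: the base table keeps only the token.
CREATE TABLE dbo.TokenVault (
    Token uniqueidentifier NOT NULL
        CONSTRAINT PK_TokenVault PRIMARY KEY DEFAULT NEWID(),
    EncryptedValue varbinary(512) NOT NULL
);
GO

-- Assumes a database master key, certificate TokenCert, and symmetric key TokenKey exist.
OPEN SYMMETRIC KEY TokenKey DECRYPTION BY CERTIFICATE TokenCert;

DECLARE @ssn nvarchar(11) = N'123-45-6789';

-- Store the encrypted value; hand the generated token back to the caller.
INSERT dbo.TokenVault (EncryptedValue)
OUTPUT inserted.Token
VALUES (ENCRYPTBYKEY(KEY_GUID('TokenKey'), @ssn));

CLOSE SYMMETRIC KEY TokenKey;
```

Reading a value back means a DECRYPTBYKEY call per row, which is exactly why searching or joining on tokenized columns falls apart at any real scale.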


Losing Data with PostgreSQL and Jepsen

Jeremy Schneider performs some tests:

This is a follow‑up to the last article: Run Jepsen against CloudNativePG to see sync replication prevent data loss. In that post, we set up a Jepsen lab to make data loss visible when synchronous replication was disabled — and to show that enabling synchronous replication prevents it under crash‑induced failovers.

Since then, I’ve been trying to make data loss happen more reliably in the “async” configuration so students can observe it on their own hardware and in the cloud. Along the way, I learned that losing data on purpose is trickier than I expected.

Click through to learn more. Jepsen has been the gold standard in testing distributed database systems for data loss.
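
For reference, the knob separating the two configurations under test is PostgreSQL's synchronous replication setting; a minimal sketch with hypothetical standby names:

```sql
-- Async (the configuration where Jepsen can catch data loss): no synchronous standbys.
ALTER SYSTEM SET synchronous_standby_names = '';

-- Sync: a commit waits until at least one of the listed standbys has flushed its WAL.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (replica_a, replica_b)';
SELECT pg_reload_conf();

-- synchronous_commit must also be on (the default) for commits to wait on standbys.
SHOW synchronous_commit;
```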


Testing in R with testthat

Aida Gjoka writes a test:

Testing is an important step when developing code in R or any other language. If you are a Python user, consider reading our previous blogs on pytest. Writing tests helps us make sure that the code is working as expected. In the R ecosystem, the testthat package is one of the most used frameworks. In this blog, we will explore some of the main properties of {testthat}, highlighting some of the most useful functions with examples.

Read on to see how it works. This isn’t a mocking library, but rather an assertions-based testing library. And near the end, Aida includes an extra library that helps with plot testing.


Memory Grant Feedback Woes

Rebecca Lewis explains an issue with memory grant feedback in SQL Server:

Before SQL Server runs a query, it estimates how much memory it needs for sorting and joining. But what if it gets it wrong?

  • Too little memory → Spills to tempdb (slow)
  • Too much memory → Starves other queries

SQL Server 2017+ tries to fix bad estimates based on previous calls with ‘Memory Grant Feedback’. Kinda like: ‘Last time I gave you 2GB but you only used 50MB. Next time I’m giving you less.’

This is great in theory, though Rebecca shows a case in which the end result might not be great.
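
If you want to see the raw material memory grant feedback works from, the grant-versus-usage numbers sit in the plan cache; a quick diagnostic sketch:

```sql
-- Queries whose last run was granted far more memory than it used: the gap
-- that memory grant feedback tries to close on subsequent executions.
SELECT TOP (20)
       qs.last_grant_kb,
       qs.last_used_grant_kb,
       qs.last_ideal_grant_kb,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.last_grant_kb > 0
ORDER BY qs.last_grant_kb - qs.last_used_grant_kb DESC;
```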
