Author: Kevin Feasel

What’s New in Kafka 4.1.0

Paul Brebner has a list:

Since then, Kafka 4.1.0 was released (September 2025, see detailed release notes), with around 472 Kafka Improvement Proposals (KIPs), including new features, improvements, bug fixes, tests, and more—well done to the Apache Kafka open source community! Kafka 4.1.1 (a bugfix release) was made available on the NetApp Instaclustr Managed Platform in December 2025.

So, what’s changed from 4.0 to 4.1.0? What are the most interesting improvements (for me at least)? In this blog, we focus on a new improvement, the Streams Rebalance Protocol.

Click through for that list.

The Power of Database Projects

Andy Brownsword is sold:

If you’ve been using Database Projects for simply maintaining copies of your database objects – like I used to – then you’ve been missing out on the power of its deployments. Initially I was sceptical about how it could effectively perform upgrades but after seeing them land in SSMS last month I wanted to revisit them as a means of actual deployment.

My scepticism was completely misplaced, so if you haven’t touched Database Projects before – or had similar concerns as me – I want to demonstrate 3 features which make them not just good, but amazing for deployments.

Click through for those features. I will say that for straightforward databases, the database project deployment process is pretty good. Where it falls apart is when you have a large number of cross-database dependencies, especially if there are mutual cross-database dependencies: DB1.dbo.sp1 needs DB2.dbo.Table2, and DB2.dbo.view2 references DB1.dbo.Table1. In that case, the workaround is so annoying and essentially comes down to “have three separate database projects, one for DB1, one for DB2, and one for a scaled-down version of DB1/DB2 without the dependencies, and then use that to inject into the other DB.” Which does kind of work, yeah, but now you’re maintaining even more. And once you get to dozens of dependencies and lots of cross-database queries? Yeah, forget about it.

Bug in sys.dm_exec_query_plan_stats

Brent Ozar finds a bug:

When you turn on last actual plans in SQL Server 2019 and newer:

ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

The system function sys.dm_exec_query_plan_stats is supposed to show you the last actual query plan for a query. I’ve had really hit-or-miss luck with this thing, but my latest struggle with it is that two of the numbers are flat out wrong. It mixes up CPU time and elapsed time.

Here’s a simple query to prove it:

Click through for a demonstration of the bug.

An Overview of pg_plan_advice

Christophe Pettus continues a series on plan hints in Postgres:

Robert Haas’s pg_plan_advice patch set, proposed for PostgreSQL 19, is where the twenty-year argument from Part 2 has landed — or is trying to. It is not pg_hint_plan brought into core. It is a different thing, with different mechanics, a different scope, and a different answer to the “why is this different from Oracle-style hints” question.

Read on to learn more about the proposal and how this resolves some of the core issues that led the major Postgres maintainers to reject query hints for so long.

User-Context-Aware Calculated Columns in Power BI

Nikola Ilic digs into a new feature:

A few weeks ago, I was sitting in a session at FabCon Atlanta. It was an amazing session about Direct Lake semantic models and various optimization tips and tricks, delivered by true masters, Christian Wade and Phil Seamark (both from Microsoft). Among many fantastic topics, the one that immediately caught my attention was the new feature that Christian Wade introduced: User-context-aware calculated columns.

Although we all know that DAX calculated columns are the “last island” in what are considered recommended data modeling practices (“Roche’s Maxim”, etc.), this one still stood out for me as something that might be super useful in certain scenarios.

Read on to see how it works and scenarios in which it could be useful.

Microsoft Fabric Eventstream Network Security Features

Alex Lin looks at network security features:

Eventstream in Fabric Real-Time Intelligence streams data from both inside and outside the Fabric platform. When your external sources sit behind firewalls or in private networks, choosing the right network security feature is essential. This post breaks down the available options in Eventstream and helps you determine which one fits your scenario.

Click through for more information.

Simplifying a Gantt Chart

Amy Esselman looks at a chart:

Gantt charts are a popular choice for illustrating the start and duration of events, which is common practice in project management. While useful for representing timelines, these charts can quickly become busy and difficult to interpret, especially when dealing with complex workflows.

Let’s consider an example.

Click through for that example and how you can turn a rather complex-looking chart into something a bit easier to understand and work with.

Visualizing High-Dimensional Vectors

Andrew Pruski takes a look:

Following on from my previous post on building The Burrito Bot, I want to delve into visualisation of vector embeddings that were generated from the restaurant data pulled from Google Maps.

Those embeddings had 1536 dimensions, each dimension corresponding to an axis within a high dimensional space, with embeddings that have similar meanings grouped together in that high dimensional space.

1536 dimensions…is a lot of dimensions! And for me, a hard concept to get my head around. It all just feels so abstract (to me anyway), I want to see what they actually look like!

Click through for a link to a website that helps with that visualization. It ultimately performs principal component analysis (PCA) to get 1536 (or however many) dimensions down to 3 principal components. It’s not perfect, but it does give us the ability to reason over the data.
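If you want a feel for what that dimensionality reduction does without the website, here is a minimal sketch of SVD-based PCA in plain NumPy. The random vectors are purely a stand-in for the real restaurant embeddings (which I do not have), and the shapes are the only thing taken from the post: 1,536 dimensions in, 3 principal components out.

```python
import numpy as np

# Stand-in data: 100 hypothetical embeddings of 1536 dimensions each.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 1536))

# PCA via SVD: center the data, then project onto the top 3 right
# singular vectors (the directions of greatest variance).
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
points_3d = centered @ vt[:3].T

print(points_3d.shape)  # (100, 3) -- ready to scatter-plot
```

Each row of `points_3d` is one embedding squashed into three coordinates, with the first component capturing the most variance, the second the next most, and so on, which is why the 3D picture is lossy but still useful for spotting clusters.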

What’s in a SQL Server File Header

Anthony Nocentino goes poking around:

I’ve been doing a deep dive into SQL Server on-disk structures lately, and one of my favorite rabbit holes is revisiting Paul Randal’s series on file header pages. If you haven’t read it, go do that now. It covers what file header pages are, what they contain, and what happens when they corrupt. This post takes that concept and runs with it. I’ll use DBCC FILEHEADER to read the file header of every user database file on a server and answer a question that comes up more than you’d think: can you determine which files belong together as a database purely from the file header, without querying sys.databases?

Read on for that answer, as well as what you cannot do with DBCC FILEHEADER.

Query Hints and Plan Guides in RDBMS Products

Christophe Pettus has a series in progress. The first post covers the basics of query hints and plan guides:

pg_plan_advice is expected to land in PostgreSQL 19. That makes this a good moment to look at query hints — what they are, what every other major database does with them, and how PostgreSQL ended up being the obvious outlier. Three parts. This is the first.

The second post explains why PostgreSQL hasn’t had query hints:

For most of PostgreSQL’s history, the official community position on query hints has been a polite version of “no, and stop asking.”

The position isn’t subtle. The PostgreSQL wiki maintains a page titled Not Worth Doing, and “Oracle-style optimizer hints” is listed there, right above in-process embedded mode and obfuscated function source. The companion wiki page, OptimizerHintsDiscussion, states the position outright:

Click through for a bit of history and comparison. The upcoming post promises to go into pg_plan_advice’s proposal in more detail.
