
September 26, 2024

Kafka Internals: Handling a Producer Request

Danica Fine continues a series on Kafka internals:

Welcome to the second installment of our blog series to understand the inner workings of the beautiful black box that is Apache Kafka®. 

We’re diving headfirst into Kafka to see how we actually interact with the cluster through producers and consumers. Along the way, we explore the configurations that affect each step of this epic journey and the metrics that we can use to more effectively monitor the process. 

In the last blog, we explored what the Kafka producer client does behind the scenes each time we call producer.send() (or similar, depending on your language of choice). In this post, we follow our brave hero, a well-formed produce request, that’s on its way to the broker to be processed and have its data stored on the cluster.

Click through to learn more about how it all works.
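To make the journey concrete, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic, and payload are placeholders, and the acks setting shown is one of the knobs that controls when the broker acknowledges the produce request.

```python
from confluent_kafka import Producer

# Minimal sketch; broker address, topic, and payload are placeholders.
# "acks": "all" tells the broker to wait for in-sync replicas before
# acknowledging the produce request.
producer = Producer({"bootstrap.servers": "localhost:9092", "acks": "all"})

def on_delivery(err, msg):
    # Invoked once the broker has processed (or rejected) the request.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"stored at {msg.topic()}[{msg.partition()}] offset {msg.offset()}")

producer.produce("orders", key="order-42", value='{"total": 9.99}',
                 on_delivery=on_delivery)
producer.flush()  # block until outstanding requests are acknowledged
```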


Cardinality Estimation Changes Post-SQL Server 2014

Brent Ozar reminds us that small changes happen:

About 10 years ago, Microsoft made changes to the Cardinality Estimator (CE) which caused some problems for SQL Server upgrades. When folks upgraded to SQL Server 2014, they also casually switched their databases’ compatibility level to the latest version, because for years that hadn’t really affected query plans. They just figured they wanted the “latest and greatest” compat level, without regard to the effects. That backfired badly when they suddenly got 2014’s Cardinality Estimation changes.

So for several years, whenever someone upgraded from older versions, and they complained about performance, the stock community answer was, “Change your compatibility level back to what it used to be.” In many cases, that just solved the problems outright, leading to blog posts like this and this.

Even today on SQL Server 2019 & 2022, this advice is still relevant! If you mess around with compatibility levels, you can absolutely change cardinality estimations in ways you didn’t expect.

Read on for more information, including examples where cardinality estimation has improved with SQL Server 2022 and where it has gotten worse.
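One practical hedge while testing an upgrade: raise the database's compatibility level, but pin any regressing query to the legacy estimator with a query hint. A rough sketch via pyodbc, with placeholder connection string, database, and table names:

```python
import pyodbc

# Placeholder connection string, database, and table names.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=MyDb;"
    "Trusted_Connection=yes;",
    autocommit=True,  # ALTER DATABASE should not run inside a transaction
)
cursor = conn.cursor()

# Move to the latest compatibility level (160 = SQL Server 2022)...
cursor.execute("ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 160")

# ...but pin an individual regressing query to the legacy estimator.
cursor.execute("""
    SELECT COUNT(*)
    FROM dbo.Users
    OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'))
""")
print(cursor.fetchone()[0])
```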


Copying Content between Databricks Unity Catalogs

Meagan Longoria busts out the photocopier:

I had a couple of clients who were moving content from development catalogs to production catalogs for the first time. They wanted to copy the schema and data from tables, views, and volumes.

So I wrote a Python notebook to handle this task. It creates the objects in the new catalog and then changes the owner to a selected user/group. You can find the full notebook on GitHub.

You can check out the notebook, and Meagan also has explanatory notes covering the process.
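This is not Meagan's notebook, but the core loop might look something like the following sketch, assuming hypothetical dev_catalog and prod_catalog catalogs, a sales schema, and a data-engineers group, run somewhere a spark session is already defined (e.g., a Databricks notebook):

```python
# Sketch of copying one schema's tables between Unity Catalog catalogs;
# catalog, schema, and group names below are hypothetical.
src, dst, schema, new_owner = "dev_catalog", "prod_catalog", "sales", "`data-engineers`"

spark.sql(f"CREATE SCHEMA IF NOT EXISTS {dst}.{schema}")

for t in spark.sql(f"SHOW TABLES IN {src}.{schema}").collect():
    name = t.tableName
    # Deep-copy schema and data with CTAS, then hand ownership to the group.
    spark.sql(f"CREATE TABLE IF NOT EXISTS {dst}.{schema}.{name} AS "
              f"SELECT * FROM {src}.{schema}.{name}")
    spark.sql(f"ALTER TABLE {dst}.{schema}.{name} OWNER TO {new_owner}")
```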


Tracking Microsoft Fabric Notebook Progress

Gilbert Quevauvilliers asks, "Are we there yet? Are we there yet?"

How to view or track the progress of a Notebook while it is running in Microsoft Fabric

I was recently working with a Notebook in Microsoft Fabric that was started via a Data Pipeline.

The challenge I had was that I had no idea how far the notebook had gone (as there were quite a lot of cells in this particular notebook).

In this blog post I am going to show you how I can use Microsoft Fabric to identify exactly which cell my notebook is currently on.

Click through for the answer. And so help me, if you ask that question one more time, I’m turning this thing around and we’re going back home.
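Gilbert's answer relies on Fabric's monitoring view. As a complementary, low-tech trick (not Gilbert's method), each cell can also announce itself in the output, so a glance at the logs shows how far the run has progressed:

```python
from datetime import datetime

def log_progress(step: str) -> None:
    # Print a timestamped marker to the cell output / driver log so you can
    # tell at a glance which part of the notebook is currently executing.
    print(f"[{datetime.now():%H:%M:%S}] reached {step}")

# Call this at the top of each cell, for example:
log_progress("cell 12: loading source data")
```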


Online Upgrade of Logical and Physical Replication Nodes in Postgres

Amit Kapila notes an improvement in replication capabilities:

In my last blog post, Evolution of Logical Replication, I mentioned the future development of a feature to allow “upgrades of logical replication nodes.” The upcoming release of PostgreSQL 17 includes this feature. Previously, after major version upgrades (via pg_upgrade), users couldn’t immediately connect and write data to logical replication nodes. This was because the slots were lost during upgrades, preventing replication from continuing. As a result, new writes wouldn’t get replicated, causing data on both nodes to become out of sync. As explained in this blog post, users had to block applications from writing until the replication setup was re-enabled after the upgrade.

Read on to see how PostgreSQL 17 changes things.
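One quick way to see the improvement in action: after running pg_upgrade on a PostgreSQL 17 publisher, check that the logical slots are still there. A sketch using psycopg2 with placeholder connection details:

```python
import psycopg2

# Placeholder connection details; point this at the upgraded publisher.
conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres")
cur = conn.cursor()

# On PostgreSQL 17, logical slots survive pg_upgrade, so this should list
# the same slots that existed before the upgrade.
cur.execute("""
    SELECT slot_name, plugin, slot_type, confirmed_flush_lsn
    FROM pg_replication_slots
""")
for slot in cur.fetchall():
    print(slot)
```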


Debugging a DAX Measure with DAX Query View

Dennes Torres sorts out an issue:

DAX Query View was created some time ago, and it is a great tool for running DAX inside the Power BI environment.

However, testing a measure in DAX Query View may not be so obvious. While a measure in a report obeys the filter context and most of the time results in a single value, that filter context doesn't exist in Query View, and the execution needs to return a table.

As a result, a simple copy/paste of the measure into DAX Query View will not run. The measure needs to be adapted to be tested.

Let’s analyze one sample case to discover how this happens.

Click through for the answer.
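The gist is that the measure must be wrapped in a table-returning expression such as EVALUATE ROW(...), optionally with CALCULATE supplying the filter context the report would have provided. The same wrapped query can be run outside DAX Query View as well; here is a sketch using Semantic Link (sempy) in a Fabric notebook, where the "Sales Model" dataset, [Total Sales] measure, and date filter are all hypothetical:

```python
import sempy.fabric as fabric

# The dataset name, measure, and filter below are hypothetical. The point is
# the shape of the query: ROW(...) turns the scalar measure into a table,
# and CALCULATE supplies the filter context the report would normally provide.
dax = """
EVALUATE
ROW(
    "Total Sales 2024",
    CALCULATE([Total Sales], 'Date'[Year] = 2024)
)
"""

df = fabric.evaluate_dax(dataset="Sales Model", dax_string=dax)
print(df)
```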


Azure Database for PostgreSQL Backup and Restoration

Josephine Bush makes and restores a backup:

I wanted to test a migration of Azure PostgreSQL from Single Server to Flexible Server. I wanted a simple db to restore onto my single server. I may not have needed to do this, but I was then curious how one would go about dumping and restoring on Azure Postgres in general. I plan to use the Azure migration service, but since I started this dump/restore exercise, I thought I would document it here.

Read on for the process.
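For the general shape of the exercise, here is a sketch that drives pg_dump and pg_restore from Python, with placeholder server, user, and database names (authentication via PGPASSWORD or .pgpass is assumed):

```python
import subprocess

# Placeholder host, user, and database names; pg_dump/pg_restore come from the
# PostgreSQL client tools and should match (or exceed) the server version.
dump = ["pg_dump",
        "-h", "source-server.postgres.database.azure.com",
        "-U", "pgadmin", "-d", "app_db",
        "-Fc", "-f", "app_db.dump"]  # custom format, required by pg_restore

restore = ["pg_restore",
           "-h", "target-server.postgres.database.azure.com",
           "-U", "pgadmin", "-d", "app_db",
           "--no-owner",             # reassign ownership to the connecting user
           "app_db.dump"]

subprocess.run(dump, check=True)
subprocess.run(restore, check=True)
```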
