Press "Enter" to skip to content

Day: June 22, 2020

Scaling Kafka Down

Jay Kreps announces an update to Project Metamorphosis:

Rightly or wrongly, Kafka is viewed as a “heavy-duty” component, one where you have to ask, “Do we really really need it?” and wait until you have several use cases to justify adoption. Until then people often abuse other systems or try to get by as best they can without first-class support for events in their infrastructure. This is unfortunate, and we want to change it! A log of events is a fundamentally better abstraction and the ecosystem around Kafka is phenomenal. Limiting this to only large-scale users and big tech companies does a disservice to the small apps and use cases that are often some of the best uses of the technology.

We’re changing that. We offer three types of clusters, optimized for the type of use case. Our Basic cluster type allows you to just pay for the amount of Kafka you use with no cost at all for allocating an unused cluster. This scales down to zero cost and scales up very gradually. You can literally use a few dollars a month of Kafka with no additional operational overhead. This makes Kafka a very lightweight option, at very low cost, for even the smallest of use cases. This offering is also ideal for development and test environments, even for those using Dedicated clusters for their production environments.

This does seem to help out in cases where I’d otherwise suggest Kinesis or Event Hubs (or one of the five dozen other Azure products around brokering messages) due to the relative lack of volume.


Pattern Matching in Scala

Mansi Babbar covers one of the most powerful tools available in functional programming languages like Scala:

The match expression consists of multiple parts:

1. The value we’ll use to match the patterns is called a candidate 
2. The keyword match
3. At least one case clause consisting of the case keyword, the pattern, an arrow symbol, and the code to be executed when the pattern matches
4. A default clause when no other pattern has matched. The default clause is recognizable because it consists of the underscore character (_) and is the last of the case clauses

This is similar to the switch statement in C-like languages but offers up a few more things like partial matching of complex objects. Mansi covers some of the ways in which these two things differ.
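As a quick illustration of those four parts (a sketch of my own, not from Mansi's post), here is a candidate matched against literal, typed, and case class patterns, with the underscore clause as the default:

```scala
case class Order(customer: String, total: Double)

def describe(candidate: Any): String = candidate match {
  case 0                  => "zero"                      // literal pattern
  case n: Int if n > 0    => s"positive int: $n"         // typed pattern with a guard
  case Order(customer, _) => s"an order from $customer"  // partial match on a case class
  case _                  => "something else"            // default clause
}

describe(42)                    // "positive int: 42"
describe(Order("Mansi", 19.99)) // "an order from Mansi"
describe("kafka")               // "something else"
```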


Calculating Test Coverage of Azure Data Factory Pipelines

Richard Swinbank wraps up a series on testing in Azure Data Factory:

To determine which activities have been executed by a test suite, I need to collect and aggregate activity run data from every pipeline execution triggered from any test fixture. In the previous post I developed components to retrieve and cache activities for a pipeline run – I’ll use those components here to collect data systematically.

I’m going to create a new helper class to contain functions specific to coverage measurement. It’s a subclass of the database helper because I want to exploit functionality from classes further up the hierarchy:

Read on for the code and process for measurement.
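Conceptually, the aggregation boils down to recording which (pipeline, activity) pairs were executed across every run triggered by the test suite and comparing that set against everything defined in the factory. A minimal sketch of that idea (this is not Richard's helper class, which you'll find in his post; all names here are hypothetical):

```scala
// Hypothetical names throughout; only the shape of the aggregation is shown.
case class ActivityRun(pipeline: String, activity: String)

class CoverageHelper {
  private val executed = scala.collection.mutable.Set.empty[(String, String)]

  // Record the activity runs retrieved (and cached) for one pipeline run.
  def record(runs: Seq[ActivityRun]): Unit =
    executed ++= runs.map(r => (r.pipeline, r.activity))

  // Coverage = distinct activities executed / activities defined across all pipelines.
  def coverage(defined: Set[(String, String)]): Double =
    executed.count(defined.contains).toDouble / defined.size
}
```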


Fun with Query Tuning in SQL Server 2019

Erik Darling has just wrapped up a nice series on tackling a problem which looks like parameter sniffing but isn’t. Part 2 covers the issue:

This isn’t always the exact case, but generally speaking you’ll observe something along these lines.

It’s definitely not the case for what we’re going to be looking at this week.

This week is far more interesting.

That’s why it’s a monstrosity.

Part 3 digs in:

It’s not parameter sniffing, but it sure could feel like it.

– When the procedure compiles and runs with VoteTypeId 5, it runs for 12 minutes
– Other VoteTypeIds run well with the same plan that VoteTypeId 5 gets
– When VoteTypeId 5 runs with a “small” plan, it does okay at 10 seconds

Part 4 gives us a solution without using OPTIMIZE FOR MEDIOCRE:

This is what happens when we optimize for unknown. The density vector guess is 13,049,400.

That guess for Vote Types with very few rows ends up with a plan that has very high startup costs.

This version of the query will run for 13-17 seconds for any given parameter. That sucks in zero gravity.

Part 5 looks into something which occasionally pops up with this query:

You see, there’s a mystery plan.

It only shows up once in a while, like Planet X. And when it does, we get bombarded by asteroids.

Just like when Planet X shows up.

I wouldn’t call it a good all-around plan, but it does something that we would want to happen when we run this proc for VoteTypeId 5.

Read on for an educational romp through the SQL Server 2019 optimizer.
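For context on that guess: with "optimize for unknown," SQL Server doesn't sniff the parameter at all; it multiplies the column's density (from the statistics density vector) by the table's row count, which works out to the average number of rows per VoteTypeId:

$$\text{estimated rows} = \underbrace{\tfrac{1}{\text{distinct VoteTypeId values}}}_{\text{density}} \times \text{rows in Votes}$$

An average like 13,049,400 can be a long way off for both the rarest and the most common vote types, which is how you end up with a plan that fits nobody.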


Database Restoration in Docker

John Morehouse gives us one way to restore a database in Docker:

Here are the steps that we will take to make this work:

1. Download one of the sample databases. I have a “mssql” directory in my local profile to make things easier
2. Make sure the container is started. You can issue a “docker ps” command in a terminal to see which containers are running
3. Create a directory within the container
4. Copy the sample database backup file into the directory you just created
5. Restore the database onto the SQL instance that is running within the container

The set of steps is fine and it’s what I normally do, though someone did suggest setting up an external volume instead, e.g., mapping /var/opt/mssql/backups in the container to a directory on the host. That way, you can drop your backup file in and it’ll be there without the copy step.
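If you’d rather script those five steps than type them, here’s a rough sketch of the same sequence driven from Scala’s sys.process (John’s post uses the docker CLI and sqlcmd directly; the container name, backup file, database name, and logical file names below are all hypothetical):

```scala
import scala.sys.process._

object RestoreIntoContainer extends App {
  val container = "sql1"         // hypothetical container name
  val backup    = "SampleDb.bak" // hypothetical backup file in the current directory
  val saPwd     = sys.env("SA_PASSWORD")

  // Step 2: confirm the container is running
  println("docker ps".!!)

  // Step 3: create a directory inside the container for backups
  Seq("docker", "exec", container, "mkdir", "-p", "/var/opt/mssql/backup").!

  // Step 4: copy the backup file into that directory
  Seq("docker", "cp", backup, s"$container:/var/opt/mssql/backup/").!

  // Step 5: restore the database onto the instance running in the container
  val restoreSql =
    s"""RESTORE DATABASE SampleDb
       |FROM DISK = '/var/opt/mssql/backup/$backup'
       |WITH MOVE 'SampleDb'     TO '/var/opt/mssql/data/SampleDb.mdf',
       |     MOVE 'SampleDb_log' TO '/var/opt/mssql/data/SampleDb_log.ldf'""".stripMargin

  Seq("docker", "exec", container,
      "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", saPwd,
      "-Q", restoreSql).!
}
```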


I Remember Halloween

Jared Poche experiences Halloween problem protection:

Simple enough so far. The scan is against a memory-optimized table variable, and the filter to the left of our seeks and scans checks for a change to our value. Nothing left but to update the index and…

CURVE BALL

Wait, what’s all this? We have a Split operator after our Clustered Index Update. SQL Server does sometimes turn an UPDATE statement into effectively a DELETE and INSERT if the row needs to move, but this seems a bit much. We have a total of 4 index update/delete operators now, and they aren’t cheap.

My very simple addition to the WHERE clause actually caused a small increase in duration, and a big jump in CPU. So what’s going on?

Read on to see the cause and what Jared was able to do about it.


Expanding INT to BIGINT Without (Much) Downtime

Michael J. Swart finds a creative way around a problem, and then a creative way around the problem it causes, and so on:

The table has over 2 billion rows and it looks like it’s going to run out of space soon because the LogId column is defined as an INT. I need to change this table so that it’s a BIGINT. But changing an INT to a BIGINT is known as a “size of data” operation. This means SQL Server has to process every row to expand the LogId column from 4 to 8 bytes. But it gets trickier than that.

The biggest challenge is that the table has to remain “online” (available for queries and inserts).

Compression?
Gianluca Sartori (spaghettidba) had the idea of enlarging the columns with no downtime using compression. It’s promising, but I discovered that for this to work, all indexes need to be compressed, not just the ones that contain the changed column. Also, any indexes which use the column need to be disabled for this to work.

Cheating
I gave up on solving this problem in general and constrained my focus to the specific problem I was facing. There’s always some context that lets us bend the rules. In my case, here’s what I did.
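To put the “size of data” point in rough numbers (a back-of-the-envelope estimate of my own, not from Michael’s post): widening the column from 4 to 8 bytes across 2 billion rows means

$$2 \times 10^{9}\ \text{rows} \times (8 - 4)\ \text{bytes} \approx 8\ \text{GB}$$

of additional LogId data alone, every row of which (plus every index carrying the column) has to be rewritten while the table stays available for queries and inserts.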

Read the whole thing. Also, as soon as I saw Michael’s title, I immediately thought of this:
