Press "Enter" to skip to content

Month: November 2022

Arg_Max and Arg_Min in KQL

Robert Cain continues a series on KQL:

A very common need in query languages is the ability to extract the maximum and minimum values in a column of data. The Kusto Query Language provides this capability through two functions, arg_max and arg_min. In this post we’ll take a look at these functions.

Click through to learn more about how these work.
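
If you just want a quick taste of the syntax first, here is a minimal sketch (the Sales table and its columns are hypothetical, not from Robert's post). The useful bit, compared to plain max() or min(), is that arg_max and arg_min hand back the rest of the row alongside the extreme value:

    // For each region, return the row with the largest sale,
    // carrying along the salesperson and date from that same row.
    Sales
    | summarize arg_max(SaleAmount, Salesperson, SaleDate) by Region

    // arg_min is the mirror image: the row with the smallest sale.
    Sales
    | summarize arg_min(SaleAmount, Salesperson, SaleDate) by Region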

Merging Database Project Changes

Olivier Van Steenlandt feeds changes into different branches:

When you start development, you create a feature branch, which is a living copy of your main branch where you apply changes during the development phase. As soon as you finalize your development, you want to get these changes to your development environment.

This process is called merging. During the merge process, two branches are combined. At this point, we want our feature branch to be combined with the development branch.

Click through to learn how it all works.
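
The post covers this through the lens of database projects, but the underlying git flow is roughly the following (branch names here are made up for illustration):

    # start a feature branch off main and develop there
    git checkout main
    git checkout -b feature/add-customer-table
    # ...commit your changes...

    # when development is finished, merge the feature into development
    git checkout development
    git merge feature/add-customer-table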

Azure SQL Trigger for Azure Functions

Drew Skwiers-Koballa announces a new feature:

The Azure SQL trigger for Azure Functions uses SQL change tracking functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted.  Change tracking is available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server, making the Azure SQL trigger for Azure Functions a flexible component for event-driven applications.

Similarly to the Azure SQL bindings for Azure Functions, a connection string for the SQL database is stored in the application settings of the Azure Function, with support for authentication options such as managed identity. In addition to the connection string, the SQL trigger is configured with a table name. The SQL trigger is specified on lines 12 and 13 of the C# Azure Function example in the post, which will log information about each change made to data in the dbo.Employees table.

Read on to see how it works.
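
The announcement has the full C# example; as a rough sketch of the binding's shape (the Employee class and the SqlConnectionString setting name are placeholders here, so check the post for the exact syntax), a SQL-triggered function receives a batch of changes:

    using System.Collections.Generic;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Sql;
    using Microsoft.Extensions.Logging;

    public class Employee
    {
        public int EmployeeId { get; set; }
        public string Name { get; set; }
    }

    public static class EmployeeTrigger
    {
        [FunctionName("EmployeeTrigger")]
        public static void Run(
            [SqlTrigger("[dbo].[Employees]", "SqlConnectionString")]
            IReadOnlyList<SqlChange<Employee>> changes,
            ILogger logger)
        {
            foreach (SqlChange<Employee> change in changes)
            {
                // Operation is Insert, Update, or Delete; Item is the changed row.
                logger.LogInformation($"{change.Operation}: {change.Item.EmployeeId}");
            }
        }
    }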

Performance Tuning Tables with Filters in Power BI

Chris Webb doesn’t want to wait:

There are four columns: Date, Town and two measures. One measure called [Fast Measure] is, as the name suggests, very quick to execute; the other measure, called [Slow Measure], is very complex and slow. The definitions are irrelevant here. Notice that there is a filter on this table visual so only the rows where [Fast Measure] is greater than 1 are shown.

If I measure the amount of time to render this table in Performance Analyzer, it takes around 17.5 seconds to run. However, if I remove the filter on [Fast Measure], the table only takes 8 seconds to run. Why? The filter is on the fast measure and surely more rows are returned without the filter, so wouldn’t the slow measure be evaluated more?

Click through for the answer.

Defining Production-Grade Code

Deb Melkin puzzles on a topic:

To me, the most obvious part is that code going out into production can pass code review and testing. You don’t want to send out code that has obvious bugs – like incorrect joins, inaccurate calculations, or poor query plans. Code should also go through reviews, not just to make sure there are no obvious bugs but also that other basic development standards, like naming conventions and proper commenting, have been followed.

There’s also an interesting bit about proofs of concept that I wanted to expand upon a little. The terms “pilot” and “proof of concept” sound interchangeable (and people switch the terms around quite often) but mean rather different things. A proof of concept is an attempt to see if some concept actually works: for example, can I have my .NET-based web app send 5,000 messages per second to an Azure Event Hub without things falling apart? Proofs of concept are intended to be quick-and-dirty code which do not take into account security, robustness, or code quality. This is throw-away code and should never go into production.

By contrast, a pilot is intended to be production-ready but serves as a scaled-down version of a product. A pilot data warehouse might only include data for one department in the organization but the design accommodates future growth and the people developing it do everything they would do to make the warehouse robust and production-ready.

The really tricky part is that companies will often ask for a proof of concept and then treat it like a pilot, which is where they go wrong and end up spending years on awful maintenance of an awful product.

A Checklist for Code Quality

Deepthi Goguri has a list and is checking it twice:

3. Check the execution times based on the DEV and TEST data. I have seen scenarios where the data size in the DEV and TEST environments is not the same as in production. This can cause differences in execution times between lower environments and production. It is always recommended to test your code in lower environments with the same amount of data as production.

Read on for the full list.

Commenting on Actions in Production

Andy Leonard wants to know whodunnit:

While “Commands completed successfully” is helpful to know – and definitely better than an error message – it does not inform the person executing the command precisely what just happened. Rather, “Commands completed successfully” informs the user that “something happened,” and as far as SQL Server Management Studio is concerned, whatever it was succeeded.

I prefer more details.

Andy also casually drops the concept of idempotence, which I define as meaning that the end state is the same no matter how many times you run a specific process. My phrasing of the term is a little different, though we get to the same point. Which is itself kind of an explanation of idempotence, if you squint a little bit.
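
A contrived T-SQL example pulls both ideas together (the table is hypothetical, not from Andy's post): the script prints exactly what it did, and running it a second time leaves the database in the same end state as running it once.

    IF NOT EXISTS (
        SELECT 1
        FROM sys.tables t
        JOIN sys.schemas s ON t.schema_id = s.schema_id
        WHERE s.name = N'dbo' AND t.name = N'Widget'
    )
    BEGIN
        CREATE TABLE dbo.Widget (WidgetId INT NOT NULL PRIMARY KEY);
        PRINT 'Created table dbo.Widget.';
    END
    ELSE
    BEGIN
        PRINT 'Table dbo.Widget already exists; no changes made.';
    END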

Practical Results of a ZooKeeper-less Kafka

Paul Brebner does the math:

The Kafka cluster meta-data is now only stored in the Kafka cluster itself, making meta-data update operations faster and more scalable. The meta-data is also replicated to all the brokers, making failover from failure faster too. Finally, the active Kafka controller is now the Quorum Leader, using Raft for leader election.

The motivation for giving Kafka a “brain transplant” (replacing ZooKeeper with KRaft) was to fix scalability and performance issues, enable more topics and partitions, and eliminate the need to run an Apache ZooKeeper cluster alongside every Kafka cluster.

Read on for some initial testing of KRaft versus ZooKeeper.
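
For a sense of what the change means operationally, a KRaft-mode broker declares the controller role in its own server.properties instead of pointing at a ZooKeeper ensemble. A minimal sketch, with made-up hostnames:

    # KRaft mode: no zookeeper.connect setting at all
    process.roles=broker,controller
    node.id=1
    controller.quorum.voters=1@kafka1:9093,2@kafka2:9093,3@kafka3:9093
    listeners=PLAINTEXT://kafka1:9092,CONTROLLER://kafka1:9093
    controller.listener.names=CONTROLLER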

Diagnosing Kafka Message Throughput Reductions

Danica Fine and Nikoleta Verbeck troubleshoot an issue:

One of the greatest advantages of Kafka is its ability to maintain high throughput of data. Unsurprisingly, high throughput starts with the producers. Prior to sending messages off to the brokers, individual records destined for the same topic-partition are batched together as a single compressed collection of bytes. These batches are then further aggregated before being sent to the destination broker.

Batching is a great thing, and we (generally) want it. But how do you know when it’s working well and when it’s not?

This first post covers message throughput, but there will be several other topics in the series as well.
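
If you want to see where those batching knobs live, they are ordinary producer configuration. A minimal Java sketch, with illustrative values (the right numbers depend entirely on your workload):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // batch.size caps the bytes collected per topic-partition batch;
            // linger.ms is how long the producer waits to fill one.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);

            // Each batch is compressed as a unit before being sent to the broker.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send records here; records for the same topic-partition share batches
            }
        }
    }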

Defining Production-Grade Code

Chad Callihan takes a detour:

You understand what you’re thinking when you write code but will you remember it in a few weeks or months? What about the new associate that has to work with your code? Will they be able to decipher what you were thinking based on code alone?

If code isn’t documented, it can make work unnecessarily difficult. For production-grade code, I would expect it to be well documented so even someone with minimal knowledge can get an idea of what the code is doing. It’s not necessary to write a novel for documentation but having something is better than nothing.
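
To make "something is better than nothing" concrete: even a short header comment goes a long way. A hypothetical T-SQL sketch:

    /* Purpose : Copies one day's rows from dbo.SalesStaging into
                 dbo.DailySales. Safe to re-run for the same date.
       Input   : @LoadDate - the business date to load.           */
    CREATE PROCEDURE dbo.usp_LoadDailySales
        @LoadDate DATE
    AS
    BEGIN
        -- Re-runnable: clear any earlier load for this date first.
        DELETE FROM dbo.DailySales WHERE SaleDate = @LoadDate;

        INSERT INTO dbo.DailySales (SaleDate, Amount)
        SELECT SaleDate, Amount
        FROM dbo.SalesStaging
        WHERE SaleDate = @LoadDate;
    END;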

Read on for more thoughts on what makes code production-grade.
