Press "Enter" to skip to content

Curated SQL Posts

Temporal Tables and Execution Plans

Hugo Kornelis starts a new sub-series within an existing series:

Welcome to part sixteen of the plansplaining series, and the first of a few posts about how temporal tables affect execution plans. In this post, I’ll build on the more generic discussion of data modifications in the previous four posts. Later posts will look at data retrieval and some specific scenarios.

Hugo hits the highlights of temporal tables and how they handle insertion, deletion, and updating scenarios.
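
For anyone who has not yet worked with temporal tables, here is a minimal sketch of a system-versioned table; the table and column names are illustrative, not taken from Hugo’s post. The key point for execution plans is that a single UPDATE does double duty: the engine modifies the current row and also writes the prior version into the history table.

-- Illustrative system-versioned temporal table (names are hypothetical).
CREATE TABLE dbo.Employee
(
    EmployeeID int NOT NULL PRIMARY KEY CLUSTERED,
    Salary     money NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- This one UPDATE also inserts the old row version into
-- dbo.EmployeeHistory, which surfaces as extra operators in the plan.
UPDATE dbo.Employee
SET Salary = Salary * 1.10
WHERE EmployeeID = 42;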

Avoiding Division by Zero

Chad Callihan has a few methods for us to avoid dividing by zero:

In the real world, everyone knows that if you divide by zero a wormhole will open up and swallow the universe. In SQL Server, it’s not good, but it’s not nearly as dramatic. I encountered the following error this week and thought it would make a good topic:

Msg 8134, Level 16, State 1, Line 6
Divide by zero error encountered.

There are multiple ways to handle this error message in SQL Server, and some are better than others. Let’s take a look at a few.

Click through for those methods and try to keep this universe existent—it’s where I keep all my stuff.
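
As a small taste of one of the cleaner options, NULLIF converts a zero denominator to NULL, so the division returns NULL instead of raising error 8134. This is a minimal sketch of that one technique, not the full set Chad covers:

-- Raises Msg 8134: divide by zero error encountered.
SELECT 10 / 0;

-- NULLIF turns the 0 denominator into NULL, so the result is NULL.
SELECT 10 / NULLIF(0, 0);

-- Wrap with ISNULL (or COALESCE) to substitute a default instead.
SELECT ISNULL(10 / NULLIF(0, 0), 0);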

Updates to Message Keys in ksqlDB

Victoria Xia announces an improvement to ksqlDB:

One of the most highly requested enhancements to ksqlDB is here! Apache Kafka® messages may contain data in message keys as well as message values. Until now, ksqlDB could only read limited kinds of data from the key position. ksqlDB’s latest release—ksqlDB 0.15—adds support for many more types of data in message keys, including message keys with multiple columns. Users of Confluent Cloud ksqlDB already have access to these new features as Confluent Cloud always runs the latest release of ksqlDB.

Read on for more information on this, as well as some of the ramifications of this change.
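
As a rough sketch of what this unlocks, the statement below declares a stream whose Kafka message key carries two columns. The stream, topic, and column names are invented for illustration, so check the ksqlDB 0.15 documentation for the exact syntax:

-- Hypothetical stream with a multi-column message key.
CREATE STREAM orders (
    order_id INT KEY,        -- read from the message key
    region   STRING KEY,     -- second key column, new in 0.15
    total    DOUBLE          -- read from the message value
) WITH (
    kafka_topic  = 'orders',
    key_format   = 'JSON',
    value_format = 'JSON'
);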

From SQL to Functional

Shel Burkow has the start of an interesting series:

Neither loops nor branches were used in any of the T-SQL work, but that doesn’t mean they aren’t there. They are – visible underneath in the query plans, which describe the process steps (operators and data flow) the SQL Server database engine follows to produce results. A salient example of looping is the Nested Loops operator used in the T-SQL rewrite query plan to match each outer row to an inner row, i.e., perform the INTERSECT. This style of coding, in which the logic of the computation is expressed rather than the lower-level control flow detail, is declarative programming, as opposed to imperative programming, and it is a major theme of this series.

This is a concept I’m really big on: I think functional programming languages are easier for data platform developers to pick up than object-oriented or imperative languages, because SQL and functional languages are both declarative, so many of the programming metaphors for one apply to the other.
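
To make the contrast concrete, here is a hypothetical illustration (the table names are invented): the T-SQL states what rows to return and says nothing about how to find them; the optimizer supplies the control flow, perhaps as a Nested Loops operator matching outer rows to inner rows.

-- Declarative: rows present in both result sets.
SELECT CustomerID FROM dbo.Orders2020
INTERSECT
SELECT CustomerID FROM dbo.Orders2021;

-- An imperative equivalent would spell out the loop:
-- for each row in Orders2020, scan Orders2021 for a match.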

Elastic Beats and the ELK Stack

Shane Ducksbury explains where Elastic Beats fits in the ELK stack:

After my last blog post about Logstash, Elasticsearch, and Kibana, I wanted to investigate something else I kept coming across during my Logstash research: Elastic Beats.

Beats initially appeared to me to be a way to send data to Elasticsearch, the same as Logstash, leading me to wonder how Beats is different and where it fits in the ELK stack. In this blog, I’ll take a deeper look at Beats to understand how it works, what you might use it for, and how it compares with Logstash.

Read on to learn more about Elastic Beats and how this is quite different from Logstash.

A Warning on Using Distributed Network Names

Allan Hirt has a warning for us:

DNNs are supported as of SQL Server 2019 CU2 and require Windows Server 2016 or later. I wrote more about them in my blog post Configure a WSFC in Azure with Windows Server 2019 for AGs and FCIs. Go there if you want to see what they look like and learn more.

Right now, I cannot wholeheartedly recommend the use of DNNs for listeners or FCIs if you are using Enterprise Edition. Why?

Read on to learn why.

Deploying Bacpacs to Azure SQL Database via Terraform

John Martin shows how to deploy a database schema (in bacpac format) via Terraform:

It’s all well and good deploying Azure SQL Database resources as we did in the previous post. However, databases tend to work a little better with a schema and some data in them. One of the options for getting data from an on-premises SQL Server database into Azure SQL Database is via a bacpac. This is, at its core, an export of the schema and data into a single file, which is then loaded into Azure SQL Database, much the same way a MySQL dump operates.

Read on for one way to do this.

Memory Grant Feedback in SQL Server

Deepthi Goguri hits on one part of Intelligent Query Processing in SQL Server:

In this part, let us focus on the Memory Grant Feedback feature released in SQL Server 2017 for Batch mode execution and in SQL Server 2019 for Row mode execution.

Memory grants are used by SQL Server for hash and sort operations. The optimizer uses statistics to allocate the memory a query needs before the query executes. When the query runs, SQL Server uses that allocated memory to process the hashes and sorts. If the memory grant is not enough to process the query, the query spills to tempdb, writing to disk. When too much memory is allocated based upon the estimates, concurrency can suffer, as all the other queries require memory grants as well. Bad estimates affect the memory grants allocated to queries, and either too much or too little granted memory is bad.

Read on to see how Memory Grant Feedback helps the optimizer out with queries over time.
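
If you want to control the feature, it is governed per database. This sketch uses the documented database scoped configuration switches; run it in the target database:

-- Batch mode memory grant feedback (SQL Server 2017+, compat level 140).
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;

-- Row mode memory grant feedback (SQL Server 2019+, compat level 150).
ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;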

Documenting dm_db_missing_index_group_stats_query

Erik Darling does a good deed:

When I was checking out early builds of SQL Server 2019, I noticed a new DMV called dm_db_missing_index_group_stats_query, that I thought was pretty cool.

It helped you tie missing index requests to the queries that requested them. Previously, that took a whole lot of heroic effort, or luck.

This has long been a pain point for index-based query tuning. You can guess what the queries might look like based on the suggested columns, but one risk with that guessing is that column order matters in an index, yet the columns are returned in the order in which they exist on the table, not necessarily the order in which they would be most useful.
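
For reference, here is a sketch of the kind of query that ties the two together, joining the new DMV back through the existing missing-index DMVs; treat it as a starting point rather than Erik’s exact query:

-- Tie missing index suggestions to the queries that generated them.
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    qt.text       AS query_text
FROM sys.dm_db_missing_index_group_stats_query AS gsq
JOIN sys.dm_db_missing_index_groups AS mig
    ON gsq.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
CROSS APPLY sys.dm_exec_sql_text(gsq.last_sql_handle) AS qt;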
