Press "Enter" to skip to content

Month: July 2021

Identifying Troublesome NOLOCK Statements

Aaron Bertrand is on a mission:

I’ve warned before about the possible downsides of both NOLOCK in general and, more specifically, when used against the target of an update or delete. While Microsoft claims that corruption errors due to the latter have been fixed in cumulative updates (e.g. see KB #2878968), we’re still seeing an occasional related issue where SQL Server will terminate, producing a stack dump that indicates a DML statement with NOLOCK as the cause. How do I find and correct all these potentially problematic statements?

The contrarian in me says, “You’re using NOLOCK; they’re all trouble.” But Aaron is a lot nicer about it than I am here.
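
A quick way to start triaging (a rough sketch, not Aaron's approach) is to search stored module definitions for NOLOCK hints that appear alongside UPDATE or DELETE statements. It misses ad hoc SQL and will throw false positives, but it gives you a list to review:

    -- Rough first pass: flag stored modules whose text contains a NOLOCK hint
    -- together with an UPDATE or DELETE. Manual review is still required.
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS [schema_name],
           OBJECT_NAME(m.object_id)        AS [object_name]
    FROM sys.sql_modules AS m
    WHERE m.definition LIKE '%NOLOCK%'
      AND (m.definition LIKE '%UPDATE%' OR m.definition LIKE '%DELETE%');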


Unique Resource Names and Azure

Meagan Longoria gives us a warning:

Each resource type in Azure has a naming scope within which the resource name must be unique. For PaaS resources such as Azure SQL Server (server for Azure SQL DB) and Azure Data Factory, the name must be globally unique within the resource type. This means that you can’t have two data factories with the same name, but you can have a data factory and a SQL server with the same name. Virtual machine names must be unique within the resource group. Azure Storage accounts must be globally unique. Azure SQL Databases should be unique within the server.

Since Azure allows you to create a data factory and a SQL server with the same resource name, you may think this is fine. But you may want to avoid this, especially if you plan on using system-defined managed identities or using Azure PowerShell/CLI. And if you aren’t planning on using these things, you might want to reconsider.

Click through for a demonstration of how you might get into trouble with this.


Deploying SQL Server via Ansible

Amit Khandelwal gives us another way of deploying SQL Server on Linux:

Today, we’ll look at how to automate SQL Server deployment and configuration on Linux. To automate our deployment, we will use the Ansible system role, which is available here.

Note: The Ansible system role that I use in this blog is a sample system role that is provided as is and for reference only. Microsoft and Red Hat do not support this. However, I invite you to provide feedback and suggestions for improving the system role here: Issues linux-system-roles/mssql (github.com).

Read on for the instructions and a demonstration.


Nonclustered Index Leaf Records and Null Bitmaps

Alex Stuart lays out a finding:

While testing a script that involved calculating index record size recently I was getting some confusing results depending on server version, and after some digging it appears there was a somewhat undocumented change to nonclustered index leaf page structure in SQL Server 2012.

Prior to 2012, as discussed by Paul Randal in this 2010 blog post (which is still the top result for searching for ‘nonclustered index null bitmap’, hence this post), the null bitmap – that is, a >= 3 byte structure representing null fields in a record – was essentially present in all data pages but not the leaf pages of a nonclustered index that had no nulls in either the index key or any clustering key columns.

Read on for a demonstration using SQL Server 2008 R2 as well as SQL Server 2012.
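
If you want to check this on your own instances, one simple approach (illustrative only; the object names below are made up) is to build a narrow table with no nullable columns, add a nonclustered index, and compare the leaf-level record sizes reported by sys.dm_db_index_physical_stats on a pre-2012 instance versus 2012 or later. The few extra bytes of the null bitmap should show up in the record size.

    -- Sketch: compare nonclustered leaf record sizes across versions.
    -- Assumes a throwaway test database.
    CREATE TABLE dbo.BitmapTest (id INT NOT NULL PRIMARY KEY, val INT NOT NULL);
    CREATE NONCLUSTERED INDEX IX_BitmapTest_val ON dbo.BitmapTest (val);
    INSERT INTO dbo.BitmapTest (id, val) VALUES (1, 10), (2, 20), (3, 30);

    SELECT index_id, index_level,
           min_record_size_in_bytes, max_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.BitmapTest'), NULL, NULL, 'DETAILED')
    WHERE index_id > 1;  -- the nonclustered index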


Inferring Median from a Few Values

Holger von Jouanne-Diedrich is stuck in the middle with you:

Let us dive directly into the matter. The Small Data Rule states:

In a sample of five numerical values from any unknown population, the median of this population lies between the smallest and the largest sample value with 94 percent certainty.

The “population” can be anything, like data about age in a population, income in a country, television consumption, donation amounts, body sizes, temperatures and so on.

This is a very interesting concept. Five values won’t give you the median, but they will give you bounds on it with high likelihood. And check out the comments: adding a few more data points increases that likelihood even further.
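
The 94 percent figure falls out of a short calculation (assuming five independent draws from a continuous distribution): each observation lands below the population median with probability 1/2, so all five land below it with probability (1/2)^5, and likewise all five land above it with probability (1/2)^5. The chance that the median escapes the sample range entirely is therefore 2 × (1/2)^5 = 1/16, leaving 1 − 1/16 = 93.75 percent, which rounds to the 94 percent in the rule.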


Explaining Chatbots

Will Harris takes us through the basics of chatbots:

You can develop your business processes into conversational workflows to help people perform tasks. This can be wide-ranging, from looking up records to do with their accounts through to engaging in new services. There are many processes that can be turned into an effective conversational workflow. This typically helps people perform activities more inclusively and conveniently, or helps reduce grunt work for employees.

Brisa, an automotive company, developed a bot to help find company data and perform tasks like password resets, helping to free up the IT team for other tasks.

Click through for an overview of the concept. I’ve been down on this generation of chatbot because, as a user, they usually show up in places where a well-designed UI would be faster and more effective—as well as less prone to failures in understanding.


Tips for Decreasing the Impact of Rebalancing in Kafka Streams

Vasyl Sarzhynskyi has some techniques to make rebalancing in Kafka less of a big deal:

Kafka rebalance happens when a new consumer is either added to (joins) the consumer group or removed from (leaves) it. It becomes dramatic during application service deployment rollouts, as multiple instances restart at the same time and rebalance latency increases significantly. During a rebalance, consumers stop processing messages for some period of time and, as a result, processing of events from a topic happens with some delay. Some business cases can tolerate rebalancing, while others require real-time event processing and find delays of more than a few seconds painful. Here we will try to figure out how to decrease rebalance impact for Kafka Streams clients (even though some tips will be useful for other Kafka consumer clients as well).

Read on for an example of the problem, as well as several practical tips for mitigating the issue.


When DBCC_OBJECT_METADATA Becomes a Bottleneck

Paul Randal takes us through another latch:

Continuing my series of articles on latches, this time I’m going to discuss the DBCC_OBJECT_METADATA latch and show how it can be a major bottleneck for consistency checks prior to SQL Server 2016 under certain circumstances. The issue affects DBCC CHECKDB, DBCC CHECKTABLE, and DBCC CHECKFILEGROUP, but for clarity I’ll just reference DBCC CHECKDB for the rest of this post.

You might wonder why I’m writing about an issue that affects older versions, but there are still a huge number of SQL Server 2014 and older instances out there, so it’s a valid topic for my series.

Read on to understand what DBCC_OBJECT_METADATA does and how it can become a bottleneck on those older versions of SQL Server.
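
If you suspect this latch on one of those older instances, a quick (if coarse) check is to look at the aggregate latch waits accumulated since the last restart, along these lines:

    -- Cumulative waits on the DBCC_OBJECT_METADATA latch since the last
    -- restart (or since the latch statistics were last cleared).
    SELECT latch_class, waiting_requests_count, wait_time_ms
    FROM sys.dm_os_latch_stats
    WHERE latch_class = N'DBCC_OBJECT_METADATA';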


Use Cases for Extended Properties

Eitan Blumin has a few non-standard use cases for extended properties:

Let’s say that you’re using the CHANGE TRACKING feature in SQL Server for the purpose of synchronizing one or more tables from one database to another.

For this purpose, you would have to keep track of the value of CHANGE_TRACKING_CURRENT_VERSION() when you last synchronized your data, so that you’d know which value you should use with the CHANGETABLE function during the next synchronization.

Most DBAs would think of creating a dedicated table to manage this synchronization for each table.

But why should you, when you can simply use extended properties for this purpose?

I’ve used extended properties primarily for documentation, so it’s interesting to see a couple of use cases which are definitively not about documenting objects.
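
As a concrete illustration of the idea (a sketch, not necessarily Eitan's exact approach; the property and table names are made up), you could stash the synchronization version in a database-scoped extended property instead of a tracking table:

    -- Store the current change tracking version as a database-scoped
    -- extended property. Assumes change tracking is enabled on the database.
    DECLARE @version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();
    EXEC sys.sp_addextendedproperty
         @name  = N'LastSyncVersion_dbo_MyTable',
         @value = @version;

    -- Read it back before the next synchronization.
    SELECT CONVERT(BIGINT, value) AS last_sync_version
    FROM sys.extended_properties
    WHERE class = 0  -- database-level properties
      AND name = N'LastSyncVersion_dbo_MyTable';

On subsequent runs you would call sys.sp_updateextendedproperty with the new version rather than adding the property again.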
