Press "Enter" to skip to content

Category: Administration

ML Services and Resource Governor

I have a post on two gotchas you might run into when Resource Governor throttles SQL Server Machine Learning Services:

By default, SQL Server will grant 20% of available memory to any R or Python scripts running. The purpose of this limit is to prevent you from hurting server performance with expensive external scripts (like, say, training large neural networks on a SQL Server).

Here’s the kicker: this affects you even if you don’t have Resource Governor enabled. If you see out-of-memory exceptions in Python or error messages about memory allocation in R, I’d recommend bumping this max memory percent up above 20, and I have scripts to help you with the job. Of course, making this change assumes that your server isn’t stressed to the breaking point; if it is, you might simply want to offload that work somewhere else.
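If you just need to raise the cap, a minimal sketch looks like this, assuming you are adjusting the default external resource pool and picking 50 percent as an arbitrary target:

-- Grant external (R/Python) scripts more memory; 50 is an example value, not a recommendation
ALTER EXTERNAL RESOURCE POOL [default] WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;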

Click through for the other issue.


A Note on Distributed Network Names

Allan Hirt provides an explanation of Distributed Network Names when building Windows Server Failover Clusters on Windows Server 2019:

The new Windows Server 2019 DNN functionality does have a side effect that does affect Azure-based configurations. When creating a WSFC, Windows Server 2019 detects that the VM is running in Azure and will use a DNN for the WSFC name. This is the default behavior.

I clipped this paragraph specifically because Allan uses both “affect” and “effect” correctly, and I wanted to call that out. Do read the rest of it as well.


Understanding the Identity Cache

Jignesh Raiyani explains the purpose of the identity cache and shows how you might disable it if you want to:

With the release of SQL Server 2012, Microsoft introduced the Identity Cache feature. An identity jump doesn’t cause any issue with the database or with running tasks; however, gaps in identity values are unacceptable in some business-oriented scenarios. The identity counter values for these columns are stored separately in a system catalog view, sys.identity_columns. This reference object does not depend on transaction status: it doesn’t matter whether the transaction on the user table is committed or rolled back. If a user transaction is rolled back, we will still find that the identity value has been consumed in sys.identity_columns, because identity values do not get rolled back. Apart from that scenario, there are various situations where identity values get skipped, but those gaps tend to be small and random. When the gap comes from an identity cache jump, however, the skipped amount is prominent and precise (it depends on the column’s data type), which makes it easy to identify.

My advice is: don’t disable this unless you have a really good reason to. “I don’t want to see numbers skipped” is a bad reason, because identity columns never guarantee sequential insertion. You can still have skips due to transaction rollbacks or other errors on insertion, and the identity cache helps performance when lots of writers are inserting rows into the table.
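If you do have that really good reason, the switch itself is a one-liner. This sketch assumes SQL Server 2017 or later, where it is a database-scoped configuration (older versions relied on trace flag 272 instead):

-- Disable identity caching for the current database
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;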


Accelerated Database Recovery and tempdb Usage

Jason Hall takes a look at how much tempdb space Accelerated Database Recovery might use:

You might have heard me talk about tempdb parasites in the past, or maybe you’ve read my blog post on the same topic, “Be Mindful of SQL Server Tempdb Use (aka Tempdb Parasites!).” I know that at least one person did, because they recently asked a great question.

In that blog post, I reviewed how triggers use the version store in tempdb for access to the “special” trigger tables we can use from within the trigger code to access the previous and new versions of data being modified. One person on Twitter was wondering if that is still the case for triggers on databases using Accelerated Database Recovery (ADR) in SQL Server 2019. I really wasn’t sure, so I decided to find out.

Click through for the answer.
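If you would like to poke at this yourself, here is a rough sketch of the setup: turn on ADR for a test database (the name below is hypothetical) and keep an eye on version store usage while your trigger fires.

-- Enable Accelerated Database Recovery on a hypothetical test database (SQL Server 2019+)
ALTER DATABASE [AdrTest] SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Version store consumption by database
SELECT DB_NAME(database_id) AS database_name,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;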


Distributed Transactions Across Multiple Log Files

Eric Cobb notices something strange:

When the new log files were added, SQL Server immediately filled them to match the fullness percentage of the original log files. As I manually increased the size of the log file, SQL Server again shifted the transactions around to keep both log files at the exact same fullness percentage. So, if the first log was 95% full, the second log was 95% full, regardless of the actual log file sizes. If the first log was 80% full, so was the second. The more I expanded the second log, the more transactions SQL Server would move to it, always keeping the fullness percentage the same on both log files. The larger the second log became, the more space it freed up on the first log, but the 2 logs were always exactly in sync in terms of fullness.

This is some interesting behavior, especially because transaction log files don’t use proportional fill.
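If you want to reproduce the experiment, something along these lines should work (the database, file names, and sizes are made up); sys.dm_db_log_info then shows how the virtual log files are distributed across the physical files.

-- Add a second log file to a hypothetical database
ALTER DATABASE [LogTest]
ADD LOG FILE (NAME = N'LogTest_log2',
              FILENAME = N'D:\SQLLogs\LogTest_log2.ldf',
              SIZE = 1GB,
              FILEGROWTH = 512MB);

-- Count total and active VLFs per physical log file
SELECT file_id,
       COUNT(*) AS vlf_count,
       SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlf_count
FROM sys.dm_db_log_info(DB_ID(N'LogTest'))
GROUP BY file_id;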


Improving the system_health Session

Aaron Bertrand takes a look at a pre-configured tool and gives us ways to make it better:

I was recently trying to troubleshoot a SQL Server replication-related deadlock that our monitoring tool didn’t capture, and tried to find information about it in the system_health Extended Events session. With the default retention settings and the amount of noise contributed by security ring buffer events, I quickly discovered that the session only had data going back less than two hours. Meaning unless I started investigating an event immediately after it happened, all evidence had rolled out forever.

Read on for Aaron’s guidance around this. The natural next step is to build out your own extended events which capture what you need.
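As a starting point, this sketch pulls the deadlock reports out of the system_health file target; it assumes the .xel files still live in the default error log directory, which is where the wildcard path resolves.

-- Deadlock reports currently retained by system_health
SELECT CAST(event_data AS XML) AS deadlock_xml
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL)
WHERE object_name = N'xml_deadlock_report';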


Improving Async Stats Update Concurrency

Dimitri Furman announces a change in Azure SQL Database:

In Azure SQL Database and Azure SQL Managed Instance, the background process that updates statistics asynchronously can now wait for the schema modification lock on a low priority queue. This improves concurrency for workloads with frequent query plan (re)compilations.

New behavior is enabled with the ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY database-scoped configuration. This feature is currently in public preview.

Dimitri does a good job of explaining what this means and how it can make life a little better for people querying tables while their statistics are being updated.
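For reference, the new option sits on top of asynchronous statistics updates; a quick sketch, assuming a database where the preview feature is available:

-- Asynchronous statistics updates are a prerequisite
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Let the background statistics update wait for its schema modification lock at low priority
ALTER DATABASE SCOPED CONFIGURATION SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = ON;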


Batch Mode on Rowstore in SQL Server

Monica Rathbun introduces us to one of the biggest internal improvements for SQL Server 2019:

Under compatibility level 150, in both SQL Server 2019 and Azure SQL Database, you now can use batch mode for CPU-bound analytic type workloads without requiring columnstore indexes. There is no action needed to turn on batch mode aside from being on the proper compatibility mode. You also have the ability to enable it as a database scoped configuration option (as shown below), and you can hint individual queries to either use or not use batch mode (also shown below). If you recall in my earlier blogs on columnstore, it is batch mode in conjunction with page compression that drastically increases query performance. This feature, Batch Mode on Rowstore, allows all operators enabled for batch mode to operate as such in SQL Server.

What does this mean? It means that query operations can process data faster and more efficiently, mimicking part of what makes columnstore so fast. Instead of reading row by row (row mode), the engine can read data in chunks, i.e. batches of 900 rows at a time. The performance impact of this can be tremendous, as it uses the CPU much more efficiently.

Read on to see the limitations and benefits.
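If you want to experiment, here is a sketch of the knobs Monica mentions; the table name is hypothetical, and the hints force the decision one way or the other for a single query.

-- Database-scoped switch (requires compatibility level 150)
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;

-- Per-query hints against a hypothetical rowstore table
SELECT COUNT_BIG(*) FROM dbo.SalesFacts OPTION (USE HINT('ALLOW_BATCH_MODE'));
SELECT COUNT_BIG(*) FROM dbo.SalesFacts OPTION (USE HINT('DISALLOW_BATCH_MODE'));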


Altering the Database without Rolling Back Users

Kenneth Fisher wants to change a database:

If this strikes a bit too close to home for you then you need to look at the ROLLBACK clause. It’s great for killing and rolling back all of the current connections before making my change.

But this is a pretty sensitive app and if there’s something running I have to let it finish. No ROLLBACK allowed. But I’m also not going to wait forever to see if my alter is going to happen. Turns out there is a nice easy option for this too.

Click through to see the option, as well as the message you get if it can’t work immediately.
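For context, the ROLLBACK clause Kenneth mentions looks something like this (the SET option and database name here are just examples); his post covers the gentler alternative.

-- Kick everyone out and roll back their open transactions immediately
ALTER DATABASE [SensitiveApp] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Or give in-flight work a grace period before rolling it back
ALTER DATABASE [SensitiveApp] SET SINGLE_USER WITH ROLLBACK AFTER 30 SECONDS;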
