Press "Enter" to skip to content


Persistent Memory and SQL Server

Ned Otter gives us the rundown on Persistent Memory and how it can make life smoother:

SQL 2017 on Windows Server 2016 behaves the same as SQL 2016 on Windows Server 2016 – “tail of the log” is supported. However, there is no support for PMEM with SQL 2017 on supported Linux distributions (except as a traditional block store). Using PMEM with SQL 2019 on Linux supports what’s known as “enlightenment”, which allows us to place data and log files on DAX-formatted volumes, thereby reducing latency considerably. SQL 2019 on Linux also supports “tail of the log”.

This is one of those areas where understanding Linux versus Windows administration really pays off, at least until Windows Server supports something like enlightenment.
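If you want to try the “tail of the log” piece on Windows, the setup amounts to adding a tiny second log file on a DAX-formatted volume. A minimal sketch, assuming a hypothetical DAX volume mounted at D:\ and a database named MyDatabase:

-- Add a small log file on the DAX volume to enable the persistent log buffer
-- ("tail of the log" cache). D:\ must be formatted with the /DAX option;
-- the database name and path are placeholders.
ALTER DATABASE MyDatabase
ADD LOG FILE
(
    NAME = mydatabase_log_dax,
    FILENAME = 'D:\LogData\mydatabase_log_dax.pldf',
    SIZE = 20MB
);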


Problems With Renaming SQL Server

Jamie Wick goes through some of the problems you might experience when renaming a SQL Server instance:

Monitoring Agents – The first (and easiest) thing to be fixed was the monitoring agents (i.e., MS System Center Operations Manager). All that took was a quick Redeploy and they came back online.

SQL Agent Jobs – The next issue was SQL Agent Jobs. Many of the maintenance jobs (e.g., syspolicy_check_schedule_xxxx) have the server name hardcoded into them. This can be easily remediated by editing the job step properties and replacing the old server name with the new one.

Read on for more problems you might run into. None of these is insurmountable but they are annoying enough that I wouldn’t want to rename servers willy-nilly.
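One step that always comes along with a machine rename is telling the instance itself about its new name. The standard check-and-fix, with OldName and NewName as placeholders:

-- Compare the name SQL Server recorded at install time with the current machine name
SELECT @@SERVERNAME AS recorded_name, SERVERPROPERTY('MachineName') AS current_machine_name;

-- If they differ after a rename, re-register the local server;
-- @@SERVERNAME only reflects the change after a service restart
EXEC sp_dropserver 'OldName';
EXEC sp_addserver 'NewName', 'local';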


Getting Version Info From dbatools

Jess Pomfret shows how you can get your operating system and SQL Server versions from the dbatools Powershell module:

With these dates on the horizon it’s a good time to look at our estate and make sure we have a good understanding of the versions we currently support. I’m going to show you how to do that easily with a couple of dbatools functions. Then, bonus content, I’ll show you how to present it for your managers with one of my other favourite PowerShell modules ImportExcel.

Jess gets bonus points for avoiding the dreaded pie chart at the end.
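If you want a quick T-SQL cross-check of what the module reports, SERVERPROPERTY plus sys.dm_os_host_info (SQL Server 2017 and later) covers most of it. This is just a sketch, not Jess’s dbatools approach:

-- Product version and host OS details for the current instance
SELECT
    SERVERPROPERTY('ProductVersion') AS product_version,  -- build string
    SERVERPROPERTY('ProductLevel')   AS product_level,    -- RTM, SPn
    SERVERPROPERTY('Edition')        AS edition,
    hi.host_platform,                                      -- Windows or Linux
    hi.host_distribution,
    hi.host_release
FROM sys.dm_os_host_info AS hi;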


Parameter Sniffing in the Wild

Erik Darling is a parameter sniffing anthropologist:

A while back, I put together a pretty good rundown of this on the DBA Stack Exchange site.

In the plan cache, it’s really hard to tell if a query is suffering from parameter sniffing in isolation.

By that I mean, if someone sends you a cached plan that’s slow, how can you tell if it’s because of parameter sniffing?

Read on to see what Erik does to discover parameter sniffing problems.
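One rough way to surface candidates from the cache is to look for statements whose worst execution is wildly slower than their best. This is only a sketch of that idea, not Erik’s exact approach:

-- Cached statements with the biggest gap between fastest and slowest execution
SELECT TOP (20)
    qs.execution_count,
    qs.min_elapsed_time,
    qs.max_elapsed_time,
    qs.max_elapsed_time / NULLIF(qs.min_elapsed_time, 0) AS elapsed_time_skew,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 1
ORDER BY elapsed_time_skew DESC;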


Auditing Azure Analysis Services

Kasper de Jonge shows how you can audit an Azure Analysis Services cube:

So the question was: how can I see who connected to my AS Azure database and what queries were sent? Initially I thought of ways I used to do this in the on-premises world. Capture profiler traces or XEvents by writing code and then store it somewhere for processing. It looks like I was not alone in this; even the AS team itself had ways to capture XEvents and store them: https://azure.microsoft.com/en-us/blog/using-xevents-with-azure-analysis-services/

But it turns out it is much smoother, simpler, and more elegant to leverage Azure’s own products. In this case we will be using Azure Log Analytics. It is already documented in the official documentation here.

Click through for a demo.


Plan Cache Sizes

Erin Stellato shares information on plan cache sizes:

If you have an ad hoc workload, you’ll often hit the max number of entries before you hit the space limit, particularly if you have the optimize for ad hoc workloads server option enabled, which stores the plan stub for an ad hoc query on initial execution, rather than the entire plan (the plan stub consumes less space, so this is an attempt to reduce bloat).

Click through for the numbers and a couple of options you have around plan cache size.
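If you want to see where your own cache stands against those limits, the memory clerks DMV shows per-store consumption, and the option Erin mentions is an ordinary sp_configure setting. A minimal sketch; don’t flip the option on a production server without testing:

-- Current memory used by the main plan cache stores
SELECT mc.type, mc.name, SUM(mc.pages_kb) / 1024.0 AS cache_mb
FROM sys.dm_os_memory_clerks AS mc
WHERE mc.type IN ('CACHESTORE_SQLCP',   -- ad hoc and prepared plans
                  'CACHESTORE_OBJCP',   -- stored procedures, functions, triggers
                  'CACHESTORE_PHDR',    -- bound trees
                  'CACHESTORE_XPROC')   -- extended procedures
GROUP BY mc.type, mc.name
ORDER BY cache_mb DESC;

-- The server option referenced above: cache only a small stub on an
-- ad hoc query's first execution
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;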


More on Index Fragmentation

Tibor Karaszi revises and extends some remarks on index fragmentation:

In my last blog post, I wanted to focus on the sequential vs random I/O aspect and how that part should be more or less irrelevant with modern hardware. So I did a test that did a full scan (following the linked list of an index) to see if I could notice any performance difference on my SSD. I couldn’t.

That isn’t the end of the story, it turns out. Another aspect is how the data is brought into memory. You might know that SQL Server can do “read ahead”, meaning it does larger reads per I/O instead of single-page I/O. I.e., fewer but larger I/O operations. This sounds fine, but what happens when we have fragmentation?

Read on for a situation in which fragmentation does matter.
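Since read-ahead is what makes fragmentation matter here, it is worth knowing how to measure the two numbers in play: logical fragmentation and page density. A sketch using sys.dm_db_index_physical_stats, with dbo.SomeTable as a placeholder:

-- Fragmentation and page fullness for one table's indexes
-- (SAMPLED or DETAILED mode is needed for page density; it is NULL in LIMITED mode)
SELECT
    OBJECT_NAME(ips.object_id)          AS table_name,
    i.name                              AS index_name,
    ips.index_type_desc,
    ips.avg_fragmentation_in_percent,   -- logical fragmentation (out-of-order pages)
    ips.avg_page_space_used_in_percent, -- page density; low values waste buffer pool
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.SomeTable'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id;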


SQL Server Settings Blade in Azure

Dave Bermingham notes a recent change to the Azure Portal when creating a new VM with SQL Server pre-installed:

As you slide the IOPS slider to the right you will see the number of data disks increase, the Storage Size increase, and the Throughput increase. You will be limited to the max number of IOPS and disks supported by that instance size. You see in the screenshot below I am able to go as high as 80,000 IOPS when provisioning storage for a Standard E64-16s_v3 instance.

It sounds like they did a pretty good job of things there.


The Cost of Ad Hoc Queries

Erin Stellato gives us a simple demonstration of why parameterization is important for performance:

From this screenshot you can see that we have about 3GB total dedicated to the plan cache, and of that 1.7GB is for the plans of over 158,000 adhoc queries. Of that 1.7GB, approximately 500MB is used for 125,000 plans that execute ONE time only. About 1GB of the plan cache is for prepared and procedure plans, and they only take up about 300MB worth of space. But note the average use count – well over 1 million for procedures. In looking at this output, I would categorize this workload as mixed – some parameterized queries, some adhoc.

Kimberly’s blog post discusses options for managing a plan cache filled with a lot of adhoc queries. Plan cache bloat is just one problem you have to contend with when you have an adhoc workload, and in this post I want to explore the effect it can have on CPU as a result of all the compilations that have to occur. When a query executes in SQL Server, it goes through compilation and optimization, and there is overhead associated with this process, which frequently manifests as CPU cost. Once a query plan is in cache, it can be re-used. Queries that are parameterized can end up re-using a plan that’s already in cache, because the query text is exactly the same. When an adhoc query executes it will only re-use the plan in cache if it has the exact same text and input value(s).

Read on to see an example of how long it takes a set of ad hoc queries to finish versus their parameterized equivalents. Erin’s test is at the behavioral extreme (100% parameterized versus 100% ad hoc) so real-world results won’t be quite this good.
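If you want the same kind of breakdown Erin’s screenshot shows for your own server, grouping the cached plans DMV by object type gives you plan counts, total size, single-use plans, and average use counts. A sketch, not her exact script:

-- Plan cache summarized by object type (Adhoc, Prepared, Proc, etc.)
SELECT
    cp.objtype,
    COUNT_BIG(*)                                         AS plan_count,
    SUM(CAST(cp.size_in_bytes AS bigint)) / 1024 / 1024  AS total_mb,
    SUM(CASE WHEN cp.usecounts = 1 THEN 1 ELSE 0 END)    AS single_use_plans,
    AVG(cp.usecounts * 1.0)                              AS avg_use_count
FROM sys.dm_exec_cached_plans AS cp
GROUP BY cp.objtype
ORDER BY total_mb DESC;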
