Press "Enter" to skip to content

Category: Administration

Automating Cache Cleanup

Tracy Boggiano has a process to automate cleaning up different caches in SQL Server:

First, we need to create a table to store our information on the caches we would like to clear on an automated basis and populate it with values.

For example, we clear SQL Plans if 10,000 or more plans are Adhoc or Prepared plans that take up 5 GB of memory, if the count of single-use plans is greater than 10,000, or if the memory used for Adhoc or Prepared plans is more than 50% of memory.  We clear the Transactions cache if it is more than 2 GB and Lock Manager : Node 0 if it is more than 2 GB.

Read on for the script.
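The post builds a configuration table and an Agent job around this; as a rough illustration of the core check-and-clear step (not Tracy's actual script, and the clerk name and 5 GB threshold are just the example figures from above):

```sql
-- Rough sketch only: if the SQL Plans memory clerk exceeds a threshold,
-- clear that cache store. Tracy's solution drives this from a config table.
IF (SELECT SUM(pages_kb) / 1024.0
    FROM sys.dm_os_memory_clerks
    WHERE name = N'SQL Plans') > 5 * 1024.0   -- more than 5 GB of plan cache
BEGIN
    DBCC FREESYSTEMCACHE (N'SQL Plans');
END;
```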


Query Store Capture Modes

Arun Sirpal notes an important difference in the default Query Store settings for SQL Server 2017 versus Azure SQL Database:

So just remember the only difference when analyzing settings is the difference in Query Store Capture Mode. For Azure it is set to AUTO whereas with local installed SQL Servers it is set to ALL.

What does this mean? ALL means that it is set to capture all queries but AUTO means infrequent queries and queries with insignificant cost are ignored. Thresholds for execution count, compile and runtime duration are internally determined.

Read on to learn more, including how to change these settings.
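To see the current setting and switch it yourself, something like the following works (the database name is a placeholder):

```sql
-- Inspect the current Query Store settings for the database.
SELECT actual_state_desc, query_capture_mode_desc
FROM sys.database_query_store_options;

-- Switch the capture mode (here, to AUTO).
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
```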


External Memory Pressure With SQL On Linux

Anthony Nocentino explains how SQL Server on Linux reacts to memory pressure:

We can use tools like ps, top and htop to look at our virtual and physical memory allocations. We can also look in the /proc virtual file system for our process and look at the status file. In here we’ll find the point-in-time status of a process, and most importantly the types of memory allocations for a process. We’ll get granular data on the virtual memory allocations and also the resident set size of the process. Here are the interesting values in the status file we’re going to focus on today.

  • VmSize – total current virtual address space of the process

  • VmRSS – total amount of physical memory currently allocated to the process

  • VmSwap – total amount of virtual memory currently paged out to the swap file (disk)

The differences are going to be interesting for people to troubleshoot later, particularly if you look at SOS_SCHEDULER_YIELD and give a knee-jerk reaction that the problem is with CPU.
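As a rough companion to those /proc values, you can pull SQL Server's own view of its memory from inside the instance and line it up against VmRSS and VmSize (an approximate mapping, not an exact one):

```sql
-- SQL Server's view of its own process memory, to compare against
-- VmRSS (physical) and VmSize (virtual) from /proc/<pid>/status.
SELECT physical_memory_in_use_kb,
       virtual_address_space_committed_kb,
       page_fault_count
FROM sys.dm_os_process_memory;
```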


When The Maximum Workspace Memory Isn’t The Internal Pool Maximum

Lonny Niederstadt answers the call from someone who needs the combination of Perfmon and DMV data:

When is a maximum not really the maximum?
When it’s a maximum for an explicitly or implicitly modified default.
Whether “the definitive documentation” says so or not.

Yesterday on Twitter #sqlhelp this question came up.


Aha! I thought to myself.  For this I am purposed! To show how Perfmon and DMV data tie out!

Read on for the simple form of the answer, followed by the complication which makes life interesting.
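For orientation, these are the two sources being tied together; a bare-bones version of the comparison (not Lonny's script) looks like this:

```sql
-- The Perfmon counter side...
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Maximum Workspace Memory (KB)';

-- ...and the DMV side, per resource semaphore.
SELECT resource_semaphore_id, pool_id, max_target_memory_kb
FROM sys.dm_exec_query_resource_semaphores;
```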


Tracking Long-Running Queries

Ryan Booz walks us through tracking long-running queries with sp_whoisactive:

This solution runs sp_WhoIsActive every minute and saves the output into a global temp table. From there, I look for any processes that have been running for more than the low threshold setting. Any of the processes that have not been identified and stored previously get logged, output to an HTML table, and an email alert sent.

Next, I take a second look at the table for anything that’s been running longer than the high threshold.  If a second email alert has not been sent for these processes, we output the same data and send the email. If two alerts have already been sent for these processes, I don’t do anything else at the moment. One of the next updates to this script will send an alert to our DevOps notification system for anything running longer than some final threshold (or maybe just the high threshold).

I particularly like this part about not re-alerting over and over for a long-running query.  It’s a relatively minor part of the whole solution, but it gets annoying watching the same e-mail come in every 5 minutes, especially if there’s nothing you can (or at least want to) do about the cause.
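The capture step itself boils down to something like this (a stripped-down sketch; dbo.WhoIsActiveLog is a hypothetical table built from sp_WhoIsActive's @return_schema output, and 15 minutes stands in for the low threshold):

```sql
-- Capture the current activity snapshot into a logging table.
EXEC dbo.sp_WhoIsActive
    @destination_table = 'dbo.WhoIsActiveLog';

-- Find anything that has been running longer than the low threshold.
SELECT session_id, start_time, sql_text
FROM dbo.WhoIsActiveLog
WHERE DATEDIFF(MINUTE, start_time, collection_time) >= 15;
```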


Restoration With Replacement

Joey D’Antoni tests whether RESTORE WITH REPLACE is functionally different from dropping a database and performing a restoration:

I recently read something that said using the RESTORE WITH REPLACE command could be faster than dropping a database and then performing a RESTORE, because the shell of the file could be used and therefore skip file initialization. I did not think that was the case, but books online wasn’t clear about the situation, so I went ahead and built a quick test case, using ProcMon from sysinternals. If you aren’t familiar with the sysinternals tools, you should be—they are a good way to get under the hood of your Windows Server to see what’s going on, and if you’re old like me, you probably used PSEXEC to “telnet” into a Windows server to restart a service before RDP was a thing.

Read on to see how the processes compare.
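For reference, the two paths being compared look like this (names and paths are placeholders, not Joey's actual test case):

```sql
-- Option 1: restore over the existing database, keeping the existing file shells.
RESTORE DATABASE TestRestore
FROM DISK = N'D:\Backups\TestRestore.bak'
WITH REPLACE;

-- Option 2: drop the database first, then restore into newly created files.
DROP DATABASE TestRestore;
RESTORE DATABASE TestRestore
FROM DISK = N'D:\Backups\TestRestore.bak';
```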


Automatic Identity Value Reseeding

Tracy Boggiano shows how to build an automated identity column reseeding solution:

You have tables that use identity values and have a lot of data inserted into and deleted from them, so they run out of integers to use.  I have over 3,000 databases where this can occur, so we have an alert that checks the tables, then checks a configuration table to see whether the database engineers have indicated it is safe to auto-reseed.  If it is, the table is automatically reseeded to either one or the maximum negative number for the data type; otherwise we are alerted and we check with our database engineers on how to handle that table.  Keep in mind we are reseeding only tables that have been deemed OK to reseed automatically.

Click through for the code, which includes reseeding logic, a job to run reseed operations, and a whitelist table for the tables which you want to allow automatic reseeding.
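The reseed step at the heart of it is simple; a hedged illustration (the table name is hypothetical, and Tracy's solution only runs this against whitelisted tables):

```sql
-- See how far along the identity value is.
SELECT IDENT_CURRENT(N'dbo.EventLog') AS current_identity_value;

-- Reseed to the minimum value for an INT identity column.
DBCC CHECKIDENT (N'dbo.EventLog', RESEED, -2147483648);
```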


Use Fixed Filegrowth Settings

Andy Galbraith notes that you should use fixed-increment filegrowth settings for log and data files:

As you probably already know, the key flaw to percentage-based FILEGROWTH is that over time the increment grows larger and larger, causing the actual growth itself to take longer and longer.  This is especially an issue with LOG files because they have to be zero-initialized before they can be used, causing excessive I/O and file contention while the growth is in progress.  Paul Randal (blog/@PaulRandal) describes why this is the case in this blog post.  (If you ever get a chance to see it, Paul also does a fun demo in some of his classes and talks on why zero initialization is important, using a hex editor to read the underlying contents of disk even after the old data is “deleted”)

Andy also has a script to change filegrowth to fixed-increment growth depending upon the size of the file, so check that out.
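The fix itself is a one-liner per file; for example (database and logical file names are placeholders, and Andy's script chooses the increment based on the current file size):

```sql
-- Change a log file from percentage growth to a fixed 512 MB increment.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 512MB);
```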


Finding Broken Views

Bill Fellows has a script to test each view to see if it is broken:

Shh, shhhhhh, we’re being very very quiet, we’re hunting broken views. Recently, we were asked to migrate some code changes and after doing so, the requesting team told us we had broken all of their views, but they couldn’t tell us what was broken, just that everything was. After a quick rollback to snapshot, thank you Red Gate SQL Compare, I thought it’d be enlightening to see whether anything was broken before our code had been deployed.

You’ll never guess what we discovered.

Read on to see what they discovered (spoilers:  broken views) and how Bill fixed the problem.
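A generic way to run this kind of sweep yourself (a sketch, not Bill's script) is to try refreshing each non-schema-bound view and catch the ones that fail:

```sql
-- Attempt sp_refreshview against every eligible view; broken ones throw.
DECLARE @view NVARCHAR(776);

DECLARE view_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(v.schema_id)) + N'.' + QUOTENAME(v.name)
    FROM sys.views AS v
    WHERE OBJECTPROPERTY(v.object_id, 'IsSchemaBound') = 0;

OPEN view_cursor;
FETCH NEXT FROM view_cursor INTO @view;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sys.sp_refreshview @viewname = @view;
    END TRY
    BEGIN CATCH
        PRINT @view + N' is broken: ' + ERROR_MESSAGE();
    END CATCH;

    FETCH NEXT FROM view_cursor INTO @view;
END;

CLOSE view_cursor;
DEALLOCATE view_cursor;
```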


More On The New Service Model

Randolph West summarizes the new SQL Server patching model:

  • Every twelve months after GA, the installation files will be updated to contain all the Cumulative Updates in what is effectively now a service pack, but won’t be called that. This will also become the slipstream update. In other words, you’re more likely to be up to date when installing from scratch, later in the release cycle.

  • Customers on the GDR (General Distribution Release) release cycle will only get important security and corruption fixes, as before. You can switch to the standard CU release cadence any time, but once you do, you can’t switch back to GDR.

Brent Ozar thinks CU12 might become the new SP1 in the minds of managers:

So now fast forward to late 2018, early 2019. You’re about to build a new SQL Server for a project, and you have two choices:

  • SQL Server 2018 – which is basically the new dev branch, getting monthly updates, or
  • SQL Server 2017 (or 2016, or 2014) – which is the stable branch, getting quarterly updates

Once a version has hit CU12, and it only gets updates once a quarter, it might be considered Good Enough For Our Apps. Managers might see 2017/2016/2014 interchangeably at that point – which might be great for the second most recent version’s adoption.

It will be interesting to see how companies adopt this new model.
