
Category: Administration

Testing sp_ineachdb

Aaron Bertrand takes us to the Island of Misfit Databases:

The only database that requires extra handling is the one that contains a tab, because SQL Server doesn’t know how to generate file names when that character is present. I am sure there are a bunch of other less common but equally exotic characters that may cause the same problem.

This is how I actually tested sp_ineachdb, to make sure it was ready for just about any bad idea anyone used to name a database, and could handle various possible database states (for a lot more background on this procedure, and why it is better than the undocumented, unsupported, and buggy sp_msforeachdb, see this and this). Here you can see that the procedure works against all these poorly-named databases, and skips databases that are inaccessible (rather than raise an exception).

Click through to see the list of databases Aaron uses. Technically, I think Aaron’s blog post also counts as a Halloween post because some of those databases are spooky.
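
If you haven’t used the procedure before, invocation is a single EXEC; here is a minimal sketch, assuming sp_ineachdb is installed in master (@command is the only required parameter, and the @user_only and @print_command_only parameters shown here should be verified against the version you install):

EXEC master.dbo.sp_ineachdb
    @command = N'SELECT DB_NAME() AS db, COUNT(*) AS table_count FROM sys.tables;',
    @user_only = 1;  -- skip system databases

-- Preview the generated commands without executing them:
EXEC master.dbo.sp_ineachdb
    @command = N'DBCC CHECKDB;',
    @print_command_only = 1;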


Compatibility Mode and Upgrades

Tom Collins explains how database compatibility mode may be a way to reduce the risk of a SQL Server version upgrade:

Microsoft’s recommended SQL Server upgrade workflow is to upgrade to the latest SQL Server but keep the source DB compatibility level, assess the workload impact by establishing a baseline, and, based on testing, move the compatibility level to the latest.

Upon creation of a new user database, CREATE DATABASE sets the compatibility level to the default compatibility level of that SQL Server. Keep in mind: if the model database’s level is set lower, then CREATE DATABASE will set the compatibility level based on the model db.

You can also change the database compatibility level at any time.

Read on for more information about compatibility mode and some inbuilt guard rails around upgrades. Those guard rails aren’t perfect by any means, but over the past couple of releases, we’ve seen a fair amount of movement toward this ideal of compatibility mode being a guarantee of behavior between versions.
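
For reference, checking a database’s current level and changing it are each a single statement; the database name and level below are placeholders:

-- See where every database currently sits:
SELECT name, compatibility_level
FROM sys.databases;

-- Move a database to the SQL Server 2019 level (150):
ALTER DATABASE YourDatabase  -- hypothetical database name
SET COMPATIBILITY_LEVEL = 150;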


Measuring File Latency in SQL Server

Anthony Nocentino has a script and some tips for us:

This post is a reference post for retrieving IO statistics for data and log files in SQL Server. We’ll look at where we can find IO statistics in SQL Server, query it to produce meaningful metrics, and discuss some key points when interpreting this data.

Click through for the script, and then a bulleted list of things to keep in mind as you’re reviewing the data.
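
The core of any such script is sys.dm_io_virtual_file_stats; as a rough sketch (not Anthony’s exact query), per-file average latencies look like this:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
-- Note: these numbers are cumulative since instance startup, so take two
-- samples and diff them if you want latency over a specific interval.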


SQL Server Express Memory Limitations

Steve Stedman notes that the memory limitations on SQL Server Express Edition are not quite as stringent as you may first believe:

Looking at the memory limits and other limits on the SQL Server versions over time, we have seen things increase, but one limit that is still very low is the memory limit for SQL Express. Specifically, the maximum memory for the buffer pool per instance of the SQL Server Database Engine for SQL 2019. The limit there is 1410 MB.

At first glance you may think that this limit is the total amount of memory that SQL Server will use, but let me show you a couple of screenshots from Database Health Monitor showing the memory utilization on two different SQL 2019 Express servers.

Read on to see what, exactly, the memory limitation is. Also, there are separate limits for things like In-Memory OLTP table sizes.
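
If you want to see the gap on your own Express instance, one way (a sketch, not Steve’s method) is to compare the memory manager’s counters against the whole process’s memory:

-- The cap applies to the buffer pool; in modern versions these counters
-- reflect the memory manager as a whole:
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Total Server Memory (KB)', N'Target Server Memory (KB)');

-- Memory in use by the whole SQL Server process, which can exceed that cap:
SELECT physical_memory_in_use_kb / 1024 AS process_memory_mb
FROM sys.dm_os_process_memory;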


Automating Data Collection with Extended Events

Ed Pollack continues a series on extended events:

While using Extended Events is not overly complex, building a reliable system to collect, parse, and store events over time without any data loss can be challenging.

This article dives into one method for collecting and retaining all event data for a specific set of events. A system like this is highly customizable and provides a solid starting point for Extended Events development. This should be viewed as a collection of puzzle pieces; individual pieces can be adjusted as needed to produce a monitoring solution that fits the needs of a given situation, even if it is vastly different from what is demonstrated here.

Read on for the process. Shredding XML isn’t pretty, but the good news is that with a setup like this, you only need to do it once…unless you need to change it later, so get it right the first time and bam, problem solved.
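
For a flavor of the shredding step, here is a minimal sketch of reading an event_file target and pulling fields out of the XML (the file path and the duration field are placeholders; Ed’s version also tracks file offsets so it never re-reads data):

;WITH raw_events AS
(
    SELECT CAST(event_data AS XML) AS event_xml
    FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, NULL, NULL)  -- hypothetical path
)
SELECT event_xml.value('(event/@name)[1]', 'sysname')                       AS event_name,
       event_xml.value('(event/@timestamp)[1]', 'datetime2')                AS event_time,
       event_xml.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration
FROM raw_events;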


Working with DACPACs

Chad Callihan has a two-parter. First up is the process of creating a DACPAC:

DAC is the abbreviation for data-tier application and is an item containing the objects of a database. When put together into a versioned file to be used for deploying in SQL Server, that package is called a DACPAC. You can point a DACPAC at an existing database to deploy changes or use a DACPAC to create a brand new database with tables, procedures, and the rest of the database objects built in.

And once you have a DACPAC, you of course need something to do with it:

What if we want to use our DACPAC to create a new database using SSMS? Our first step will be to right click on Databases and select Deploy Data-tier Application.

As a concept, DACPACs are really great: I can package together tables, database code, and reference data in one bundle and import it into a variety of products (e.g., database projects, other SQL Server instances, Azure SQL Database). The problem is that once they get beyond a certain size, DACPACs are rather unwieldy and failure-prone to create and to extract.
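
If SSMS isn’t your thing, SqlPackage handles both directions from the command line; a sketch with placeholder server, database, and file names:

REM Extract a DACPAC from an existing database:
SqlPackage /Action:Extract /SourceServerName:"MyServer" /SourceDatabaseName:"MyDatabase" /TargetFile:"C:\dacpacs\MyDatabase.dacpac"

REM Publish the DACPAC to create or update a target database:
SqlPackage /Action:Publish /SourceFile:"C:\dacpacs\MyDatabase.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyNewDatabase"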


Troubleshooting Login Timeouts

Grant Fritchey shows us another use of extended events:

I was recently approached at work about a company that was seeing tons of timeouts in SQL Server. I was asked how to troubleshoot this. There are lots of posts by people on this topic, but I found something I didn’t see anywhere else, so let me share it with you.

Read on to see how, though as Grant notes, it’s not as easy as searching for “login_timeout” or something named similarly.
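
Grant’s specific event is the payoff of the post, so no spoilers here, but for context, a severe-error capture session has this general shape (the session name and file path are placeholders, and this is not necessarily the event Grant landed on):

CREATE EVENT SESSION [capture_severe_errors] ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.client_hostname, sqlserver.username)
    WHERE severity >= 20  -- connection-breaking errors
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\severe_errors.xel')  -- hypothetical path
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [capture_severe_errors] ON SERVER STATE = START;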


Capturing SQL Server Login Details with Extended Events

Jack Vamvas shows how to track SQL Server logins:

I have to capture logon information details for a specific logon on a SQL Server. Specifically, the client_hostname, nt_username, and username. What I’m looking for is a log recording a successful connection made to the server. The event should be triggered a) when a connection is made and b) from a connection pool.

Click through to see how.
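
A session along those lines might look like the following sketch; the session name, file path, and the login being filtered are placeholders (the login event fires for both pooled and non-pooled connections):

CREATE EVENT SESSION [capture_logins] ON SERVER
ADD EVENT sqlserver.login
(
    ACTION (sqlserver.client_hostname, sqlserver.nt_username, sqlserver.username)
    WHERE sqlserver.username = N'SomeLogin'  -- hypothetical login to track
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\logins.xel')  -- hypothetical path
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [capture_logins] ON SERVER STATE = START;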


Comparing CPU Activity and Diagnosing the Cause

Joe Obbish has a tutorial for us:

Sometimes I have a need to run a quick CPU comparison test between two different SQL Server instances. For example, I might be switching from old hardware to new hardware and I want to immediately see a faster query to know that I got my money’s worth. Sometimes I get a spider sense while working with virtualized SQL Server instances and want to check for problems. Yesterday, I was doing a sort of basic health check on a few servers that I hadn’t worked with much and I wanted to verify that they got the same performance for a very simple query.

Click through for an easy test script and a good amount of diagnosis to understand why there is a significant difference between two instances.
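
Joe’s script is the one to grab, but the general shape of such a test is a single-threaded, CPU-bound query with a fixed amount of work, timed on each server; a generic sketch (not Joe’s exact query):

SET STATISTICS TIME ON;

-- Roughly six million rows to count; MAXDOP 1 keeps it single-threaded, so
-- you are comparing per-core CPU speed rather than core counts.
SELECT COUNT_BIG(*)
FROM master.dbo.spt_values AS a
CROSS JOIN master.dbo.spt_values AS b
OPTION (MAXDOP 1);

SET STATISTICS TIME OFF;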


Pure Storage FlashArray Snapshot Torture Test

Argenis Fernandez puts SQL Server snapshots on a Pure Storage FlashArray to the test:

Look, I’m not here to fight your religious war about how snapshots should not be called backups. I’m just gonna call them fast-as-fast restores(*) and be done with it. Because let’s be honest, with Pure Storage there’s absolutely nothing faster than a storage snapshot to recover a volume. Or volume(s). You get the idea. It’s about how fast you recover, every time.

Yes, I do understand that there are a million considerations for something to be called a “backup”. We’ll get to those little by little – don’t expect a thorough post on that debate right now. Today I want to focus on one question: Are Pure Storage FlashArray snapshots stable and trustworthy enough that I can take them without pausing I/O against my database? Can I trust that the database will come online every time from a snapshot?

Read on for the answer. For additional fun, read the whole article with your mental voice sounding like Argenis.
