
Category: Administration

Safely Dropping Databases

Bob Pusateri notes a little issue when it comes to dropping databases:

At a previous employer, we had a well-defined process when dropping databases for a client. It went something like this:

  1. Confirm in writing which databases on which servers/instances are to be dropped
  2. Take a final full backup of databases
  3. Take databases offline
  4. Wait at least two weeks to make sure nothing breaks in the absence of this database
  5. Drop databases

This is a pretty good and safe method. If taking the database offline causes some unforeseen system to stop working, it can very quickly be brought back online in place, instead of having to locate the backup and restore it. But there’s just one problem…
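As a rough sketch, steps 2 through 5 might look like this in T-SQL (the database name and backup path here are hypothetical):

-- Step 2: final full backup (COPY_ONLY so the regular backup chain is untouched).
BACKUP DATABASE [ClientDB]
    TO DISK = N'X:\SQLBackups\ClientDB_final.bak'
    WITH COPY_ONLY, CHECKSUM, COMPRESSION;

-- Step 3: take the database offline; it stays in place and can be brought
-- back instantly with SET ONLINE if something unexpected breaks.
ALTER DATABASE [ClientDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Step 4: wait at least two weeks.
-- ALTER DATABASE [ClientDB] SET ONLINE;   -- quick rollback if something breaks

-- Step 5: only after the waiting period has passed quietly.
-- DROP DATABASE [ClientDB];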

Read on for that problem and its solution.


New DMF: dm_db_page_info

Pam Lahoud shows off a new Dynamic Management Function in SQL Server 2019:

The primary use case we had in mind when developing this function was troubleshooting page-related waits. Some of the key performance scenarios that require page details to diagnose are tempdb contention, last page insert contention (also see this blog article for code samples) and page-level blocking. All of these scenarios present as either PAGELATCH or lock wait types in sys.dm_exec_requests with a page identifier (db_id:file_id:page_id) in the wait_resource column. In order to gain any insight into the problem, you have to know more details about what that page is. Is it a data page, an index page or something else? If it’s a data or index page, which object does it belong to? Prior to SQL Server 2019 CTP 2.0, the only way to gain any of this insight was to use the lightly documented command DBCC PAGE. Unfortunately, if you’re using some sort of monitoring tool or script to gather data about waits and blocking, there’s no good way to include this DBCC PAGE call in those scripts. Not to mention it requires sysadmin privileges. Enter sys.dm_db_page_info.

The sys.dm_db_page_info function takes 4 parameters: database ID, file ID, page ID and mode (‘LIMITED’ or ‘DETAILED’). It returns a table with a single row that contains the header information from the page, including things like object_id, index_id and partition_id. As a standalone function, this provides a documented and fully supported way to retrieve this information without the need for a DBCC command. The real benefit, however, is the ability to JOIN this function with other DMVs such as sys.dm_exec_requests or sys.sysprocesses to gather this information automatically when there is any sort of page-related contention. If you’re familiar with these views, you know that the wait_resource column can contain all sorts of different resources, so it’s not in the right format to facilitate a direct join to sys.dm_db_page_info. To support this use case, we had to add another column to these views called page_resource. Whenever the wait_resource column contains a page resource, the page_resource column will be populated with an 8-byte hexadecimal value. The page ID is the first four bytes, followed by 2 bytes for the file ID and then 2 bytes for the database ID.

There are also a couple of supporting functions, so check those out.
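One of those supporting functions, sys.fn_PageResCracker, splits the page_resource value back into its parts, so a minimal sketch of the join (SQL Server 2019 CTP 2.0 or later) might look like this:

-- Resolve page-related waits to the objects behind them.
-- sys.fn_PageResCracker breaks page_resource into db_id, file_id and page_id.
SELECT r.session_id,
       r.wait_type,
       pi.object_id,
       OBJECT_NAME(pi.object_id, pi.database_id) AS object_name,
       pi.index_id,
       pi.page_type_desc
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS prc
CROSS APPLY sys.dm_db_page_info(prc.db_id, prc.file_id, prc.page_id, 'LIMITED') AS pi
WHERE r.page_resource IS NOT NULL;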


Testing TDE Performance

Eduardo Pivaral tests the performance of a database with Transparent Data Encryption versus that same database without encryption:

Transparent data encryption (TDE) helps you to secure your data at rest: the data files and related backups are encrypted, securing your data in case your media is stolen.
This technology works by implementing real-time I/O encryption and decryption, so the implementation is transparent to your applications and users.

However, this type of implementation can lead to some performance degradation, since more resources must be allocated in order to perform the encryption and decryption operations.

In this post we will compare how much longer some of the most common DB operations take, so if you are planning to implement TDE on your database, you can have an idea of what to expect from different operations.

These results fit in reasonably well with what I’d heard, but it’s nice to have someone run the numbers.
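If you want to run a similar test yourself, enabling TDE on a database looks roughly like this (the certificate, database name and password below are hypothetical):

-- Set up the master key and certificate in master, then encrypt the user database.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng_P@ssword_Here';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;
GO

-- Watch the background encryption scan; encryption_state = 3 means it has finished.
SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;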


External Memory Pressure In SQL Server 2019 On Linux

Anthony Nocentino walks us through memory pressure in SQL Server on Linux:

Now in SQL Server 2017, with that 7GB program running, Linux would need to make room in physical memory for this process. Linux does this by swapping least recently used pages from memory out to disk. So, under external memory pressure, let’s look at the SQL Server process’ memory allocations according to Linux. In the output below we see we still have a VmSize of around 10GB, but our VmRSS value has decreased dramatically. In fact, our VmRSS is now only 2.95GB. VmSwap has increased to 5.44GB. Wow, that’s a huge portion of the SQL Server process swapped to disk.

In SQL Server 2019, there’s a different outcome! In the data below we see our 16GB VmSize, which won’t change much because of the virtual address space for the process. With that large external process running, SQL Server reduced VmRSS from 7.9GB (from Table 1) to 2.8GB, placing only about 4.68GB in the swap file. That doesn’t sound much better, does it? I thought SQL Server was going to react to the external memory pressure…let’s keep digging and ask SQL Server what it thinks about this.
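One way to ask SQL Server directly is sys.dm_os_process_memory, which you can compare against the VmRSS and VmSwap figures Linux reports for the process; a minimal sketch:

-- How much memory does SQL Server believe it currently holds?
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;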

Anthony is doing some great work digging into this.  This is an area where you do have to understand the differences between Windows and Linux.


Automating Azure SQL Database Scaling

Arun Sirpal shows how to use Azure Logic Apps to auto-scale Azure SQL Database:

When I was presenting my Azure SQL Database session at DataRelay (it used to be SQLRelay), I was asked (over coffee) about auto-scaling capabilities. Quite simply, there is nothing out of the box to achieve this. The idea of auto-scaling would be good where you need a burst to fulfill higher demand in terms of workload for a time duration, you know, something like an “end of the day, Friday night sale” for your database.

Classically you would probably go down the PowerShell route via a runbook, but I am different.

In this case, the automation is timer-based rather than load-based.
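Whatever triggers the scale, a Logic App (or runbook) would ultimately issue something like the standard scaling statement; the database name and tier below are just examples:

-- Scale an Azure SQL Database to a different service objective.
ALTER DATABASE [MyAzureDB]
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- Scaling is asynchronous; watch its progress from the master database.
SELECT start_time, operation, state_desc, percent_complete
FROM sys.dm_operation_status
ORDER BY start_time DESC;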


SQL Undercover Inspector 1.2 Released

Adrian Buckman announces a new version of SQL Undercover Inspector:

#21 Added AG Databases check to warn on databases not joined to an AG 

If you are using Availability Groups and you have this new module enabled, the Inspector will assume that all databases should be joined to an AG. Every database name for the instance is inserted into a new table called [Inspector].[AGDatabases] and the Is_AG flag is set to 1; if a database is joined to an AG, the Is_AGJoined column is set to 1, so no advisory will be shown on the report. If a database is marked as Is_AG, it will continue to warn when not joined to an AG on the Inspector reports; if you wish to exclude a given database from the advisory condition, simply update Is_AG to 0.

Instances that are not HADR-enabled with at least one AG will automatically have Is_AG set to 0 and will be excluded from the checking even if the module is enabled.
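Excluding a database from the advisory is then a one-row update along these lines; the Databasename column name is a guess from the description above, so check the actual [Inspector].[AGDatabases] schema before running it:

-- Hypothetical sketch: stop the AG advisory from firing for one database.
-- Column names are assumed from the description, not verified against the tool.
UPDATE [Inspector].[AGDatabases]
SET Is_AG = 0
WHERE Databasename = N'ReportingScratch';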

Click through for a big list of changes.


Uninstalling A SQL Server Feature

Marek Masko shows how to uninstall a particular SQL Server feature:

If you wonder whether it is possible to uninstall a particular SQL Server feature instead of uninstalling the complete instance, then I have good news: it is possible, and it is very easy.

Let’s assume you want to uninstall PolyBase because you want to install it with a newer version of SQL Server (PolyBase can be installed on only one SQL Server instance per computer).

It brings me mild sadness that the feature Marek chose for this demo was Polybase, but slightly greater joy for the reason.


Missing Backup Directory When Trying To Upgrade SQL Server

Lori Brown walks us through the solution to an error she experienced:

I was recently performing an in-place upgrade of SQL 2008 R2 to SQL 2014 on one of my client’s servers.  I have done a ton of successful and uneventful in-place upgrades and was surprised when the upgrade failed with the error message:  “Failed to create a new folder ‘X:\SQLBackups’. The specified path is invalid (for example, it is on an unmapped drive).”  This client had over the years changed from using a local drive for all backups to having backups sent to a network share.  So, the X drive really was no longer in existence.
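If you hit something similar, one quick way to see where the instance thinks its default backup directory points is the (undocumented but widely used) xp_instance_regread procedure:

-- Read the instance's default backup directory from the registry.
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory';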

Read on for the solution.


Unused Indexes Might Not Be

Tara Kizer has a warning for people eager to drop “unused” indexes:

About 10 years ago, I decided to drop an unused index on a table that had 2 billion rows. The database was around 7TB in size. We were having storage and performance issues. I thought I could help the system out if I were to drop the index.

4 days after I dropped the index, I got a call from our NOC that CPU utilization was pegged at 100% for an hour so they were reaching out to the on-call DBA to check it out. I logged in and saw a query using A LOT of CPU. Its WHERE clause matched the index I had dropped. The execution plan showed it was scanning the table.

It turned out that I only had 2 weeks of uptime, which didn’t include the 1st of the month. The query that was causing the high CPU utilization was a report that ran on the 1st of each month.
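The caveat underneath the story is that sys.dm_db_index_usage_stats only covers the period since the last restart, so it is worth checking how much uptime the counters actually represent before calling an index unused; a minimal sketch:

-- How much uptime do the usage stats actually cover?
SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, SYSDATETIME()) AS days_of_uptime
FROM sys.dm_os_sys_info;

-- Usage counters for indexes in the current database (reset on every restart).
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
    ON i.object_id = us.object_id
   AND i.index_id = us.index_id
WHERE us.database_id = DB_ID();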

Tara has also provided us with a script to track these details over time, so check that out.


Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it’s important that, as new entries are added, the data is maintained in a sorted sequence.

The VACUUM command is used to re-sequence data, and reclaim disk space as a result of DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.
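For reference, the main VACUUM variants look like this (the table name is hypothetical):

-- Re-sort rows and reclaim space (the default behaviour).
VACUUM FULL sales;

-- Re-sort only, or reclaim space only, when one of the two is enough.
VACUUM SORT ONLY sales;
VACUUM DELETE ONLY sales;

-- Re-analyze interleaved sort keys, then run a full vacuum.
VACUUM REINDEX sales;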

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
