Press "Enter" to skip to content

Category: Administration

Auditing xp_cmdshell Usage

Patrick Keisler shows how to create a SQL Server audit file for xp_cmdshell usage:

This article assumes you already have a basic understanding of SQL Server Audit, but if not, use this link to catch up on all the details.

Are you required to have xp_cmdshell enabled on one of your servers? If so, then set up a SQL Audit now to track its use. Never mind the implications of enabling xp_cmdshell; as a DBA, you are responsible for what happens on your servers, and tracking the use of xp_cmdshell should be a priority.

Some smart people will tell you to disable xp_cmdshell altogether, but I don’t like that advice at all.  Auditing usage can give you more peace of mind while not limiting your ability to use a valuable tool.
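
Patrick walks through the full setup in the post; as a rough idea of the shape of the solution, a minimal sketch (assumed audit names and file path, not Patrick's exact script) is a server audit plus a database audit specification in master that captures EXECUTE on xp_cmdshell:

    USE [master];
    GO
    -- Audit destination; the folder path here is an assumption.
    CREATE SERVER AUDIT [Audit_xp_cmdshell]
        TO FILE (FILEPATH = N'D:\SQLAudit\')
        WITH (ON_FAILURE = CONTINUE);
    GO
    -- xp_cmdshell lives in master, so the EXECUTE action is captured
    -- with a database audit specification in that database.
    CREATE DATABASE AUDIT SPECIFICATION [Audit_xp_cmdshell_Exec]
        FOR SERVER AUDIT [Audit_xp_cmdshell]
        ADD (EXECUTE ON OBJECT::[sys].[xp_cmdshell] BY [public])
        WITH (STATE = ON);
    GO
    ALTER SERVER AUDIT [Audit_xp_cmdshell] WITH (STATE = ON);
    GO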

Breaking A Database File Into Multiple Files

Lori Brown shows us how to take a database with one database file and add new database files to it:

I occasionally come across some pretty good sized databases that are set up with a single data file.  We recently have been working with a client to break up their single data file into multiple data files so that we can spread them over several different LUNs and so that they can take advantage of the improved performance of using the files in parallel.  The concept is much like setting up tempdb with 1 file (up to 8) per core.

Since most people don’t think about using multiple files for databases until they have grown large enough to be a problem, I think that most don’t realize that breaking up a database can be done at any time; you just need to have enough space for the new files. Here is a bit of a demo on how to do this.
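
A minimal sketch of the core commands (database name, file names, sizes, and paths are all made up for illustration): add the new files to the filegroup, then rebuild the clustered indexes so proportional fill can spread the data across them.

    -- Add extra data files (one per LUN, in this example) to the PRIMARY filegroup.
    ALTER DATABASE [SalesDB]
        ADD FILE (NAME = N'SalesDB_Data2',
                  FILENAME = N'E:\SQLData\SalesDB_Data2.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP [PRIMARY];

    ALTER DATABASE [SalesDB]
        ADD FILE (NAME = N'SalesDB_Data3',
                  FILENAME = N'F:\SQLData\SalesDB_Data3.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP [PRIMARY];

    -- Existing data does not move on its own; rebuilding the clustered
    -- indexes (or heaps) redistributes pages across the files over time.
    ALTER INDEX ALL ON dbo.BigTable REBUILD;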

Do read Lori’s warning at the end, however, should you decide to do this in production.

Understanding sp_reset_connection

Greg Low explains what sp_reset_connection does and why it’s often a good thing:

Anyone who’s ever traced activity against a SQL Server will have no doubt seen a large number of commands where the procedure sp_reset_connection has been executed. Yet, this command won’t appear anywhere in the source code of the applications that are running.

As an example of why this occurs, one of the most common data access technologies that is used to connect applications to SQL Server is ADO.NET. It has a SqlConnection object that represents a connection that can be opened to a SQL Server instance. In the design of the SqlConnection class, the architects of it were grappling with two big issues:

  • They knew that opening and closing connections to SQL Server was a relatively expensive process.
  • They also knew that on a busy website, they didn’t want to spend the resources (and might not even have had them) to open up a connection for each concurrent session on the website.

So they decided to make the connections to SQL Server able to be shared.
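
You cannot execute sp_reset_connection directly; the client driver issues it when it hands a pooled connection to the next consumer. If you want to watch that happen, a hedged sketch of an Extended Events session that captures those calls (the session name is made up) looks something like this:

    -- Capture sp_reset_connection calls to see pooled connections being reused.
    CREATE EVENT SESSION [watch_connection_resets] ON SERVER
    ADD EVENT sqlserver.rpc_completed
    (
        ACTION (sqlserver.client_app_name, sqlserver.session_id)
        WHERE ([object_name] = N'sp_reset_connection')
    )
    ADD TARGET package0.ring_buffer;

    ALTER EVENT SESSION [watch_connection_resets] ON SERVER STATE = START;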

Read the whole thing.

The Halloween Problem And Deleting From Self-Referencing Tables

Louis Davidson walks us through a case where deleting from a table can be extremely slow:

I have a great interest in hierarchy solutions, as I plan to write a book on implementing hierarchies in SQL Server in the next year or so (after I see what happens in SQL Server vNext after 2017). Something I didn’t include in my design testing when I built my presentation on hierarchies (available from a link here) a few years back was the cost to remove nodes from a tree. I learned a big lesson today about the cost of deleting from a hierarchy (the hard way). It is definitely something I will be digging deeper into at some point in a lab environment, comparing multiple methods of deleting from a hierarchy, but I wanted to write down the issue with self-referencing tables for future use (and hopefully for your use too).

We have a table with 100s of millions of rows, and deletes from this table were taking forever. There were 17 foreign key constraints to this table, but each of the references was only showing as 1% of the plan. The physical DELETE from the clustered index operator was ~80%, but in the query, an operator showed up that looked ominous.
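
To picture the shape of the problem, here is a stripped-down self-referencing table (purely hypothetical, nowhere near the scale Louis describes). The foreign key that points back at the same table is what drags the extra validation work, and the Halloween-protection spool, into the delete plan:

    -- A self-referencing hierarchy: ParentId refers back to NodeId in the same table.
    CREATE TABLE dbo.TreeNode
    (
        NodeId   int NOT NULL CONSTRAINT PK_TreeNode PRIMARY KEY,
        ParentId int NULL
            CONSTRAINT FK_TreeNode_TreeNode REFERENCES dbo.TreeNode (NodeId)
    );

    -- Every delete must prove that no surviving row still references a deleted
    -- NodeId, and that check reads the very table being modified.
    DELETE dbo.TreeNode
    WHERE  NodeId = 42;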

Definitely worth a read.

Tracking Latency To Azure With PsPing

Arun Sirpal shows us how to use PsPing (part of the Sysinternals tool set) to determine latency between your computer and a VM in an Azure data center:

This is the tool of choice when wanting to find out latency to your Azure SQL Server. In addition to standard ICMP ping functionality, it can report the latency of connecting to TCP ports and the latency of TCP round-trip communication.

I use this to find the latency from my location to various Azure SQL Servers which are in different Azure regions. I am based in the heart of England so let’s look and compare a couple of locations (just out of curiosity). Once you have downloaded the tool you will need to CD to the directory and call the following command.

Read on to see how to use PsPing.

Don’t Set Max Size For Containers In In-Memory OLTP

Ned Otter recommends you not mess with the maximum container size when creating a memory-optimized filegroup:

I recently saw a thread on twitter, where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.

Unfortunately, that’s not correct: through SSMS or TSQL, you can in fact set a max size for a container.

But you should not ever do that…

Why?

Because if you do, and your checkpoint files exceed the max size of the container, your database can go into the In Recovery, Suspect, or OFFLINE state.
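
Put differently: when you add a container, just leave MAXSIZE alone and it stays at the default of UNLIMITED. A minimal sketch with made-up database and path names:

    -- Create the memory-optimized filegroup, then add a container without MAXSIZE.
    ALTER DATABASE [InMemDemo]
        ADD FILEGROUP [InMemDemo_mod] CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE [InMemDemo]
        ADD FILE (NAME = N'InMemDemo_mod1',
                  FILENAME = N'E:\SQLData\InMemDemo_mod1')  -- a container is a folder, not a file
        TO FILEGROUP [InMemDemo_mod];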

Read on for a repro that you should not try in production.  Or anywhere, really.

Managing Central Management Server

Chrissy LeMaire shows how you can use dbatools to manage Central Management Server and registered servers:

It’s a super useful feature that not all DBAs know about. Since CMS data is stored in msdb and accessible via SMO, you can access it from SQL Server Management Studio or PowerShell modules like dbatools.

Central Management Server’s essential functionality includes:

Actions that are taken by using a central management server group act on all servers in the server group. This includes connecting to servers by using Object Explorer and executing Transact-SQL statements and Policy-Based Management policies on multiple servers at the same time.

I mostly use it as a visual repository of my SQL Servers. Prior to using dbatools and Invoke-DbaSqlQuery, however, I did use CMS to easily execute code against a number of different servers.
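
Because the registration data lives in msdb, you can also peek at it with plain T-SQL. A small sketch (the query shape is mine, not from the post):

    -- List centrally managed servers along with their group names.
    SELECT g.name AS group_name,
           s.name AS registered_server,
           s.server_name,
           s.description
    FROM msdb.dbo.sysmanagement_shared_server_groups      AS g
    JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
         ON s.server_group_id = g.server_group_id
    ORDER BY g.name, s.name;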

CMS is a great feature and a critical tool for scaling out a SQL Server infrastructure.

Checking File Sizes In SQL Server

Andy Mallon looks back at a contribution by Junior DBA Andy, this one on checking file sizes:

This is every DBA’s favorite game. Figuring out what DMV contains the data you want. It turns out there are two places that database file size info is maintained. Each database has sys.database_files which has information for that database. The master database also has sys.master_files, which contains information for every database.

Using sys.master_files seems like it would be the obvious choice: everything in one view in master is going to be easier to query than hitting a view in a bunch of different databases. Alas, there’s a minor snag. For tempdb, sys.master_files has the initial file size, but not the current file size. This is valuable information, but doesn’t answer the use cases we set out above. I want to know the current file sizes. Thankfully, sys.database_files has the correct current file sizes for tempdb, so we can use that.

Using sys.database_files seems like it’s going to be the right move for us then. Alas, this isn’t quite perfect either. With Log Shipping, and Availability Group secondaries, if you’ve moved data files to a different location, sys.database_files will contain the location of the files on the primary database. Thankfully, sys.master_files has the correct local file locations for user databases, so we can use that.

Ugh, so it looks like the answer is “both”… we’ll need to use sys.database_files for tempdb, and sys.master_files for everything else.
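
The resulting pattern is roughly a UNION of the two sources, something like this cut-down sketch (Andy's actual script does considerably more):

    -- Current file sizes: sys.master_files for everything except tempdb,
    -- plus tempdb.sys.database_files for tempdb's live sizes.
    SELECT DB_NAME(mf.database_id) AS database_name,
           mf.name                 AS logical_name,
           mf.physical_name,
           mf.size * 8 / 1024      AS size_mb
    FROM   sys.master_files AS mf
    WHERE  mf.database_id <> DB_ID(N'tempdb')
    UNION ALL
    SELECT N'tempdb',
           df.name,
           df.physical_name,
           df.size * 8 / 1024
    FROM   tempdb.sys.database_files AS df;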

Click through for the script, including Andy’s critical reflection on how Past Andy has failed him.

The Impact Of Auto-Growth Settings For Log Files

Jamie Wick has started a series on log growth and starts by looking at auto-growth settings:

For the data file, the impact can be illustrated in the following chain of events:

  1. A new 1MB data file is created that contains no information. (ie. a 1MB data file containing 0MB of data)
  2. Data is written to the data file until it reaches the file size. (ie. the 1MB data file now contains 1MB of data)
  3. The SQL server suspends normal operations to the database while the data file is grown by 1MB. (ie. the data file is now 2MB and contains 1MB of data) If Instant File Initialization (IFI) is enabled, the file is expanded and database operations resume. If IFI is not enabled, the expanded part of the data file must be zeroed before db operations resume, resulting in an additional delay.
  4.  Once the data file has been grown successfully, the server resumes normal database processing. At this point the server loops back to Step 2.

The server will continue this run-pause-run-pause processing until the data file reaches its Maxsize, or the disk becomes full. If the disk that the data file resides on has other files on it (ie. the C drive, or a disk that is shared by several databases), there will be other disk write events happening between the data file growth events. This may cause the data file expansion segments to be non-contiguous, increasing the file fragmentation and further decreasing the database performance.
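
The usual mitigation is to pre-size the files and choose a sensible fixed-MB growth increment instead of the tiny defaults. A hedged example with made-up names and sizes:

    -- Pre-grow the files and set fixed growth increments so autogrowth pauses
    -- are rare and predictable. (Log file growth is always zero-initialized,
    -- even with Instant File Initialization enabled.)
    ALTER DATABASE [SalesDB]
        MODIFY FILE (NAME = N'SalesDB_data', SIZE = 20GB, FILEGROWTH = 1GB);

    ALTER DATABASE [SalesDB]
        MODIFY FILE (NAME = N'SalesDB_log', SIZE = 8GB, FILEGROWTH = 512MB);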

This is all to answer the question, “What’s the problem with missing a few log backups?”

Gathering Info On Tables

Raul Gonzales has a script which provides useful information for tables and columns:

Useful information it provides at table level:

  • tableType, to identify HEAP tables
  • row_count, to identify tables with plenty of rows or no rows at all
  • TotalSpaceMB, to identify big tables in size
  • LastUserAccess, to identify tables that are not used
  • TotalUserAccess, to identify tables that are heavily used
  • TableTriggers, to identify tables that have triggers

Useful information it provides at column level:

  • DataType-Size, to identify supersized, incorrect, or deprecated data types
  • Identity, to identify identity columns
  • Mandatory-DefaultValue, to identify NULL/NOT NULL columns or columns with default constraints
  • PrimaryKey, to identify primary key columns
  • Collation, to identify columns that might have a different collation from the database
  • ForeignKey-ReferencedColumn, to identify foreign keys and the table.column they reference
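
As a taste of where the table-level columns come from, here is a much-reduced sketch (not Raul's script) that pulls row counts, size, and last user access from the catalog views and DMVs:

    -- Row count, reserved space, and last user access per table.
    SELECT s.name AS schema_name,
           t.name AS table_name,
           ps.row_count,
           ps.total_space_mb,
           ua.last_user_seek,
           ua.last_user_scan
    FROM sys.tables  AS t
    JOIN sys.schemas AS s
         ON s.schema_id = t.schema_id
    CROSS APPLY (SELECT SUM(CASE WHEN p.index_id IN (0, 1)
                                 THEN p.row_count END)        AS row_count,
                        SUM(p.reserved_page_count) * 8 / 1024 AS total_space_mb
                 FROM   sys.dm_db_partition_stats AS p
                 WHERE  p.object_id = t.object_id) AS ps
    OUTER APPLY (SELECT MAX(u.last_user_seek) AS last_user_seek,
                        MAX(u.last_user_scan) AS last_user_scan
                 FROM   sys.dm_db_index_usage_stats AS u
                 WHERE  u.object_id = t.object_id
                   AND  u.database_id = DB_ID()) AS ua
    ORDER BY ps.total_space_mb DESC;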

Click through for the script.
