
Category: Administration

Multiple Instances On A VM

David Klee answers the question: when should you have multiple named instances on a single VM?

I am personally partial to having just one instance per VM, as long as the situation allows for it. The resource management area between SQL Server and Windows allows me to manage the overall resource consumption at the VM level, and en masse, managing at this layer rather than at multiple layers is usually preferable. I claim that the extra overhead of managing more VMs is worth the resource management flexibility.

I agree with this.  The biggest advantage I see is in licensing, but if your environment is of a non-trivial size, you’re probably going to license the host instead of individual VMs.  Nevertheless, check out David’s pro-and-con list and see where your situation lies.


Specify Valid Network Protocols

Steve Jones shows how to specify the set of network protocols people can use to connect to a SQL Server instance:

I ran across a question on network protocols recently, which is something I rarely deal with. Often the default setup for SQL Server is fine, but there are certainly times you should add or remove network connectivity according to your environment.

Microsoft’s guidance on protocols pushes you toward TCP/IP and that’s a good default.
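If you want to verify which protocol existing sessions are actually using, a quick check against the connection DMVs will tell you (this query is mine, not from Steve's post):

-- Show the transport (Shared Memory, Named Pipes, or TCP) each current session connected with
SELECT c.session_id,
       c.net_transport,
       c.auth_scheme,
       s.login_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON c.session_id = s.session_id;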


Clear The Query Store

Grant Fritchey shows how to clear the Query Store in SQL Server 2016:

While setting up example code for my presentation at SQL Cruise (which is going to be a fantastic event), I realized I wanted to purge all the data from my Query Store, just for testing. I did a series of searches to try to track down the information and it just wasn’t there. So, I did what anyone who can phrase a question in less than 140 characters should do, I posted a question to Twitter using the #sqlhelp hash tag.

You can also call EXEC sp_query_store_remove_query to remove a specific query from the Query Store.
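For reference, clearing the whole store and removing a single query look like this (the database name and query_id here are hypothetical):

-- Wipe all Query Store data for one database
ALTER DATABASE MyDatabase SET QUERY_STORE CLEAR;

-- Remove one query, along with its plans and runtime stats, by query_id
EXEC sys.sp_query_store_remove_query @query_id = 42;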


Mirrored Backups

Sean McCown talks about mirrored backups:

By mirroring backups, you’re saying that you want to back up to 2 locations simultaneously.  So let’s say you have the need to back up your DBs to a local SAN drive, but also you need to send them to another data center in case something happens to your local SAN.  The way to do that in SQL is with mirrored backups and the syntax looks like this:

BACKUP DATABASE MyDB TO DISK = 'G:\MyDB.trn' MIRROR TO DISK = '\\DC1\MyDB.trn'

So above you can see that SQL will write both of these files at once, and give you a good amount of redundancy for your DB backups.  However, this can go wrong when your network isn’t stable or when the link to the other data center is slow.  So you should only mirror backups when you can pretty much guarantee that it won’t fail or lag.  And as you can guess that’s a heavy burden to put on most networks.  In the situation last week that spawned this blog, the network went down for something like 9 hrs and caused the DB’s log to not be backed up that entire time, and hence the log grew and grew.  Now you’re in danger of bringing prod down and that’s clearly not what your backup strategy should do.
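As a point of reference (the paths here are hypothetical, and this isn't lifted from Sean's post), creating a new mirrored media set also requires WITH FORMAT, so a fuller form of the statement looks like this:

-- Write the same backup to a local drive and a remote share at the same time
BACKUP DATABASE MyDB
    TO DISK = 'G:\Backups\MyDB.bak'
    MIRROR TO DISK = '\\DC1\Backups\MyDB.bak'
    WITH FORMAT;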

Sean talks about alternatives and then talks about how they’ve gotten around the problem with Minion Backup.  If you haven’t tried Minion Backup, it is well worth your time; it’s already a great product and I use it in a production environment I support.


Tracking Changed Data In Standard Edition

Mickey Stuewe wants to track changed data, but has to use Standard Edition:

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.

The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DateTime2. CreatedOn is the easiest to populate. You can create a default on the field to be populated with GetDate().

This is a common pattern and works pretty well.  The trick is making sure that you keep that metadata up to date.
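As a rough sketch of the pattern (the table name is hypothetical, and SYSDATETIME() is used here since it matches DATETIME2, where the quote mentions GetDate()):

-- Minimal sketch of the four audit columns with defaults for the "created" pair
CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY(1, 1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedOn    DATETIME2 NOT NULL CONSTRAINT DF_Customer_CreatedOn DEFAULT SYSDATETIME(),
    CreatedBy    NVARCHAR(128) NOT NULL CONSTRAINT DF_Customer_CreatedBy DEFAULT SUSER_SNAME(),
    UpdatedOn    DATETIME2 NULL,
    UpdatedBy    NVARCHAR(128) NULL
);

UpdatedOn and UpdatedBy are the columns that need the discipline, since nothing populates them automatically on an UPDATE.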


Don’t Rebuild Heaps

Steve Jones notes the issues around rebuilding tables lacking clustered indexes:

What about adding a clustered index and dropping it? Nooooooo, and again, I learned something new. This causes two rebuilds of the non-clustered indexes as they are rebuilt with the cluster addition and then rebuilt when the table changes back to a heap (to get the heap locations). That’s crazy, and certainly not what we want.
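For reference, the heap rebuild under discussion is the direct ALTER TABLE form (table name hypothetical); even this rebuilds the non-clustered indexes once, while the add-then-drop clustered index approach rebuilds them twice:

-- Rebuild a heap in place (SQL Server 2008 and later)
ALTER TABLE dbo.MyHeap REBUILD;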

Also read Matthew Darwin’s comment, as “Don’t do X” usually has an “Except when Y” corollary.


Automated Database Shrinking

Chris Shaw talks about auto-shrink:

If you are new to being a Database Administrator, or the primary focus of your job is not to be a DBA, you may see the benefits of shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem when doing this.

When you shrink a data file, SQL Server goes in and recovers all the unused pages; during the process it gives that space back to the OS so the space can be used somewhere else.  The downstream effect is that your indexes become fragmented.  This can be demonstrated in a simple test.

Friends don’t let friends auto-shrink.
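If you want to check your own servers, the setting is easy to find and flip (the database name is hypothetical; this isn't from Chris's post):

-- List databases with auto-shrink enabled
SELECT [name], is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn it off for a specific database
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;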


Pausing The SQL Server Service

Tom LaRock shows us that we can pause the SQL Server service, as well as what that gets us:

By pausing the SQL Server service before restarting the instance we allow end users to continue their work uninterrupted and we also stop any new connections to the instance. This is a nicer way of telling people to “get out” of the database in order for the server to be rebooted. I wouldn’t leave the server paused for 60 minutes of course, but I would rather use this method than forcibly disconnect users and roll back their transactions.

This is a nice way of bleeding the service dry before taking an instance down for maintenance.


Fixing SQL Server R Services Installation Issues

Cody Konior notes that upgrading from CTP 3.0 to CTP 3.2 can cause SQL Server R Services to break:

If you were using CTP 3.0 and later ran an in-place upgrade to CTP 3.2 this will silently break R Services. Uninstalling and reinstalling the R component will not fix the problem, but it can be fixed. There are a few interrelated issues here so bear with me.

Hopefully you don’t run into this issue, but if you do, at least there’s a fix.


Who Grew The Database?

Andy Galbraith helps us figure out who to blame for database growth:

I feel a little dirty writing about the Default Trace in the world of Extended Events, but I also know that many people simply don’t know how to use XEvents, and this can be faster if you already have it in your toolbox.  Also, it will work back to SQL 2005, whereas XEvents were new in SQL 2008.

I have modified this several times to improve it – I started with a query from Tibor Karaszi (blog/@TiborKaraszi), modified it with some code from Jason Strate (blog/@StrateSQL), and then modified that myself for what is included and what is filtered.  There are links to both Tibor’s and Jason’s source material in the code below.
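A minimal sketch of the kind of default-trace query Andy describes (this is not his exact version) looks like this; event classes 92 and 93 are Data File Auto Grow and Log File Auto Grow:

DECLARE @path NVARCHAR(260);

-- Find the default trace file
SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

-- Pull the autogrowth events out of it
SELECT t.DatabaseName,
       t.[FileName],
       t.StartTime,
       te.[name] AS EventName,
       t.LoginName,
       t.HostName,
       t.ApplicationName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;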

I usually just blame the BI team for database growth.
