Category: Administration

Starting SQL Server Without TempDB

Kenneth Fisher shows us how to start an instance in minimal configuration mode in case the drive hosting tempdb gets fried:

The situation: Your server is down. The drive/directory where tempdb is supposed to be doesn’t exist. Who knows why. Maybe those evil SAN guys forgot to re-attach your storage during a DR situation. You may or may not realize it, but SQL Server will not start without tempdb. Which is fine. Just move it to a location that exists, right? Well, yes. That is an important step. So here is how we…

Move tempdb

I like the way Russ Thomas (and Kenneth Fisher) put it:  this is a low-occurrence, high-liability issue.
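
For reference, here is the usual shape of the fix (a minimal sketch, not necessarily Kenneth's exact steps; the service name, paths, and logical file names below are defaults and placeholders): start the instance in minimal configuration mode so it can come up without tempdb, point tempdb at a directory that exists, then restart normally.

-- From an elevated command prompt: start with minimal configuration and limit connections to sqlcmd
-- net start MSSQLSERVER /f /m"SQLCMD"

-- From sqlcmd, repoint tempdb (tempdev/templog are the default logical names;
-- verify with: SELECT name, physical_name FROM sys.master_files WHERE database_id = 2;)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\SQLData\templog.ldf');

-- Stop the service and start it normally; tempdb is recreated at the new location.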

Trust Relationship Failed

Chrissy LeMaire ran into a trust issue on one node of a cluster:

You know what’s scary as hell? When one node of an important cluster loses its trust relationship with the domain and you see the error “the trust relationship between this workstation and the primary domain failed”. That happened to me late last year with one of my SQL Server 2008 R2 nodes. The scary part was that I just didn’t know what to expect. The fix could be simple, or it could require a node rebuild.

Trust no one.

Setting Up A Scrub Server

Tom Norman talks about setting up a data scrubbing server:

With that said, when I set up a Scrub server, it is in a very secure area where the data access is very, very limited. For example, in my current company, the server is in a separate domain from Production and QA/Dev. Only DBAs are allowed to access this server. If you have multiple DBAs at your location, you may even want to limit which DBAs have access to this server. Our goal is to automate the entire scrubbing process so no one has to access the data, including copying backup files from Production and out to a shared scrub location for QA/Dev to retrieve.

Scrub servers are a way of stripping personally identifiable or sensitive information from production data so developers can safely use the data in lower, less secure environments.
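
The scrubbing itself is typically plain T-SQL run against the restored copy before anything leaves the scrub server. A minimal sketch, with a made-up database, table, and columns:

-- Hypothetical example: overwrite PII on the restored copy
USE ScrubbedCopy;
UPDATE dbo.Customer
SET    Email    = CONCAT('customer', CustomerID, '@example.com'),
       Phone    = '555-0100',
       LastName = CONCAT('Customer', CustomerID);
-- Then back up the scrubbed database and copy it to the shared location for QA/Dev.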

Multiple Instances On A VM

David Klee answers the question, when should you have multiple named instances on a single VM?

I am personally partial to having just one instance per VM, as long as the situation allows for it. The resource management area between SQL Server and Windows allows me to manage the overall resource consumption at the VM level, and en masse, managing at this layer rather than multiple layers is usually preferable. I claim that the extra overhead of managing more VMs is worth the resource management flexibility.

I agree with this.  The biggest advantage I see to stacking instances is licensing, but if your environment is of a non-trivial size, you’re probably going to license the host instead of individual VMs.  Nevertheless, check out David’s pro-and-con list and see where your situation lies.

Specify Valid Network Protocols

Steve Jones shows how to specify the set of network protocols people can use to connect to a SQL Server instance:

I ran across a question on network protocols recently, which is something I rarely deal with. Often the default setup for SQL Server is fine, but there are certainly times you should add or remove network connectivity according to your environment.

Microsoft’s guidance on protocols pushes you toward TCP/IP and that’s a good default.
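
That sort of change happens in SQL Server Configuration Manager rather than in T-SQL, but once you’ve made it, a quick query against sys.dm_exec_connections shows which protocols current sessions are actually using:

-- Which protocol and auth scheme is each current connection using?
SELECT session_id, net_transport, auth_scheme, client_net_address
FROM   sys.dm_exec_connections;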

Clear The Query Store

Grant Fritchey shows how to clear the Query Store in SQL Server 2016:

While setting up example code for my presentation at SQL Cruise (which is going to be a fantastic event), I realized I wanted to purge all the data from my Query Store, just for testing. I did a series of searches to try to track down the information and it just wasn’t there. So, I did what anyone who can phrase a question in less than 140 characters should do, I posted a question to Twitter using the #sqlhelp hash tag.

You can also call EXEC sp_query_store_remove_query to remove a specific query from the Query Store.
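
For reference, both operations are one-liners (the database name and query_id here are placeholders):

-- Purge everything from the Query Store for a database
ALTER DATABASE MyDatabase SET QUERY_STORE CLEAR;

-- Or remove a single query, along with its plans and runtime stats
EXEC sp_query_store_remove_query @query_id = 42;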

Mirrored Backups

Sean McCown talks about mirrored backups:

By mirroring backups, you’re saying that you want to backup to 2 locations simultaneously.  So let’s say you have the need to backup your DBs to a local SAN drive, but also you need to send them to another data center in case something happens to your local SAN.  The way to do that in SQL is with mirrored backups and the syntax looks like this:

BACKUP DATABASE MyDB TO DISK = 'G:\MyDB.trn' MIRROR TO DISK = '\\DC1\MyDB.trn'

So above you can see that SQL will write both of these files at once, and give you a good amount of redundancy for your DB backups.  However, this can go wrong when your network isn’t stable or when the link to the other data center is slow.  So you should only mirror backups when you can pretty much guarantee that it won’t fail or lag.  And as you can guess that’s a heavy burden to put on most networks.  In the situation last week that spawned this blog, the network went down for something like 9 hrs and caused the DB’s log to not be backed up that entire time, and hence the log grew and grew.  Now you’re in danger of bringing prod down and that’s clearly not what your backup strategy should do.

Sean talks about alternatives and then talks about how they’ve gotten around the problem with Minion Backup.  If you haven’t tried Minion Backup, it is well worth your time; it’s already a great product and I use it in a production environment I support.
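
One detail worth adding if you do go this route: if memory serves, creating a new mirrored media set requires WITH FORMAT, so a mirrored log backup (the scenario Sean describes) looks something like this, with placeholder paths:

BACKUP LOG MyDB
TO DISK = 'G:\Backups\MyDB_log.trn'
MIRROR TO DISK = '\\DC1\Backups\MyDB_log.trn'
WITH FORMAT;  -- required when first creating the mirrored media set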

Tracking Changed Data In Standard Edition

Mickey Stuewe wants to track changed data, but has to use Standard Edition:

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.

Four fields include CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DateTime2. CreatedOn is the easiest to populate. You can create a default on the field to be populated with GetDate().

This is a common pattern and works pretty well.  The trick is making sure that you keep that metadata up to date.
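
A minimal sketch of the pattern (the table is hypothetical, and SUSER_SNAME() for CreatedBy is my assumption; the defaults cover the Created columns, while the Updated columns need to be set by whatever code modifies the row):

CREATE TABLE dbo.Widget
(
    WidgetID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    WidgetName varchar(100) NOT NULL,
    CreatedOn  datetime2(3) NOT NULL CONSTRAINT DF_Widget_CreatedOn DEFAULT (GETDATE()),
    CreatedBy  varchar(128) NOT NULL CONSTRAINT DF_Widget_CreatedBy DEFAULT (SUSER_SNAME()),
    UpdatedOn  datetime2(3) NULL,
    UpdatedBy  varchar(128) NULL
);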

Don’t Rebuild Heaps

Steve Jones notes the issues around rebuilding tables lacking clustered indexes:

What about adding a clustered index and dropping it? Nooooooo, and again, I learned something new. This causes two rebuilds of the non-clustered indexes as they are rebuilt with the cluster addition and then rebuilt when the table changes back to a heap (to get the heap locations). That’s crazy, and certainly not what we want.

Also read Matthew Darwin’s comment, as “Don’t do X” usually has an “Except when Y” corollary.
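
If you genuinely do need to rebuild a heap (the “except when Y” case), ALTER TABLE ... REBUILD does it in one pass, rebuilding the non-clustered indexes once rather than twice:

-- Table name is a placeholder; this rebuilds the heap and, because the row
-- locations change, its non-clustered indexes as well, but only once.
ALTER TABLE dbo.MyHeapTable REBUILD;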

Automated Database Shrinking

Chris Shaw talks about auto-shrink:

If you are new to being a Database Administrator, or the primary focus of your job is not to be a DBA, you may see the benefits of shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem with doing this.

When you shrink a data file, SQL Server goes in and recovers all the unused pages; during the process it gives that space back to the OS so the space can be used somewhere else.  The downstream effect is that your indexes become fragmented.  This can be demonstrated with a simple test.

Friends don’t let friends auto-shrink.
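
If you inherit an instance where it is turned on, finding and disabling it is quick:

-- Find databases with auto-shrink enabled
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

-- Turn it off (database name is a placeholder)
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;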
