By mirroring backups, you’re saying that you want to back up to two locations simultaneously. So let’s say you need to back up your databases to a local SAN drive, but you also need to send them to another data center in case something happens to your local SAN. The way to do that in SQL Server is with mirrored backups, and the syntax looks like this:
BACKUP DATABASE MyDB TO DISK = 'G:\MyDB.bak' MIRROR TO DISK = '\\DC1\MyDB.bak' WITH FORMAT;
As you can see above, SQL Server writes both of these files at once, which gives you a good amount of redundancy for your database backups. However, this can go wrong when your network isn’t stable or when the link to the other data center is slow, so you should only mirror backups when you can pretty much guarantee that the connection won’t fail or lag. As you can guess, that’s a heavy burden to put on most networks. In the situation last week that spawned this blog, the network went down for something like nine hours, which meant the database’s log couldn’t be backed up that entire time, and hence the log grew and grew. Now you’re in danger of bringing prod down, and that’s clearly not what your backup strategy should do.
Sean talks about alternatives and then talks about how they’ve gotten around the problem with Minion Backup. If you haven’t tried Minion Backup, it is well worth your time; it’s already a great product and I use it in a production environment I support.
I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.
The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The date columns should be DATETIME2. CreatedOn is the easiest to populate: you can create a default on the column to populate it with GETDATE().
This is a common pattern and works pretty well. The trick is making sure that you keep that metadata up to date.
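As a hedged sketch of the pattern (the table and column names here are placeholders, not from the original post), the DDL might look like:

```sql
-- Hypothetical example of the four-field audit pattern.
CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    -- SYSDATETIME() would give full DATETIME2 precision; GETDATE() also works.
    CreatedOn DATETIME2 NOT NULL
        CONSTRAINT DF_Customer_CreatedOn DEFAULT (GETDATE()),
    CreatedBy SYSNAME NOT NULL
        CONSTRAINT DF_Customer_CreatedBy DEFAULT (SUSER_SNAME()),
    UpdatedOn DATETIME2 NULL,  -- set by application code or a trigger on UPDATE
    UpdatedBy SYSNAME  NULL
);
```

The defaults cover the Created* columns automatically on INSERT; keeping the Updated* columns current is the part that takes discipline.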
What about adding a clustered index and dropping it? Nooooooo, and again, I learned something new. This causes two rebuilds of the non-clustered indexes: they are rebuilt when the clustered index is added, and rebuilt again when the table changes back to a heap (to get the heap row locations). That’s crazy, and certainly not what we want.
Also read Matthew Darwin’s comment, as “Don’t do X” usually has an “Except when Y” corollary.
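A hedged sketch of the scenario (object names are made up for illustration) shows where the two rebuilds happen:

```sql
-- Hypothetical heap with a non-clustered index.
CREATE TABLE dbo.HeapDemo (ID INT NOT NULL, Val CHAR(100));
CREATE NONCLUSTERED INDEX IX_HeapDemo_ID ON dbo.HeapDemo (ID);

-- Rebuild #1: the non-clustered index is rebuilt so its row pointers
-- become clustering keys instead of heap row IDs.
CREATE CLUSTERED INDEX CIX_HeapDemo ON dbo.HeapDemo (ID);

-- Rebuild #2: dropping the clustered index rebuilds the non-clustered
-- index again so its pointers revert to heap row IDs.
DROP INDEX CIX_HeapDemo ON dbo.HeapDemo;
```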
If you are new to being a database administrator, or being a DBA isn’t the primary focus of your job, you may see the benefits of shrinking a database automatically. If the database shrinks by itself, it might seem like self-management; however, there is a problem with doing this.
When you shrink a data file, SQL Server reclaims all the unused pages and returns that space to the OS so it can be used somewhere else. The downstream effect is that your indexes become fragmented. This can be demonstrated with a simple test.
Friends don’t let friends auto-shrink.
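To find offenders and turn the option off (the database name below is a placeholder), something like this should do:

```sql
-- List databases that currently have auto-shrink enabled.
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Disable it (MyDB is a hypothetical database name).
ALTER DATABASE MyDB SET AUTO_SHRINK OFF;
```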
By pausing the SQL Server service before restarting the instance, we allow end users to continue their work uninterrupted while also stopping any new connections to the instance. This is a nicer way of telling people to “get out” of the database so the server can be rebooted. I wouldn’t leave the server paused for 60 minutes, of course, but I would rather use this method than forcibly disconnect users and roll back their transactions.
This is a nice way of bleeding the service dry before taking an instance down for maintenance.
If you were using CTP 3.0 and later ran an in-place upgrade to CTP 3.2, this silently breaks R Services. Uninstalling and reinstalling the R component will not fix the problem, but it can be fixed. There are a few interrelated issues here, so bear with me.
Hopefully you don’t run into this issue, but if you do, at least there’s a fix.
I feel a little dirty writing about the Default Trace in the world of Extended Events, but I also know that many people simply don’t know how to use XEvents, and the Default Trace can be faster if you already have it in your toolbox. It also works back to SQL Server 2005, whereas Extended Events were new in SQL Server 2008.
I have modified this several times to improve it – I started with a query from Tibor Karaszi (blog/@TiborKaraszi), modified it with some code from Jason Strate (blog/@StrateSQL), and then modified that myself for what is included and what is filtered. There are links to both Tibor’s and Jason’s source material in the code below.
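For flavor, here is a minimal hedged sketch of the general technique (this is not Tibor’s or Jason’s query; it only shows how auto-growth events are read from the default trace):

```sql
-- Find the default trace file and pull file-growth events from it.
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
DECLARE @path NVARCHAR(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT t.StartTime,
       DB_NAME(t.DatabaseID)     AS DatabaseName,
       t.FileName,
       t.EventClass,
       t.Duration / 1000         AS DurationMs,  -- Duration is in microseconds
       t.IntegerData * 8 / 1024  AS GrowthMB     -- IntegerData is 8 KB pages grown
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```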
I usually just blame the BI team for database growth.
Planning to move to new hardware for your SQL Server? Techniques like log shipping and database mirroring can be incredibly useful to make the change fast and painless, but you’ve got to pick the right techniques for your environment ahead of time and know how to do a few things that aren’t in the GUI.
Some of this stuff could also feed into a disaster recovery plan.
The first, and most important thing to notice here is that it’s supplying me with a new name. I can change that to anything I want as long as it’s not the name of a database already in existence on my Azure SQL Database Server. You read that correctly, you can restore a database to Azure SQL Database, but there is no WITH REPLACE option. The restore creates a new database. This is important. In a recovery scenario, you need to be sure that you’re prepared to deal with this fact. How could you replace the existing database? Immediately run a pair of ALTER DATABASE commands to change the name of the existing database to something else and then change the name of your newly created database to the old name. That’s your choice.
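The rename swap described above might look like this (the database names are placeholders; run it from the master database):

```sql
-- Hypothetical names: MyDB is the live database,
-- MyDB_Restored is the newly restored copy.
ALTER DATABASE MyDB MODIFY NAME = MyDB_Old;
ALTER DATABASE MyDB_Restored MODIFY NAME = MyDB;
```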
There are a couple of gotchas, so if you are administering Azure SQL Database instances, be aware of these.
Although you can get away with using convenience names (i.e., a dot, (local), or localhost) as the SQL Server identifier when configuring a SharePoint server farm (as shown in Figures 2 & 3), such a practice is not allowed when configuring TFS (as shown in Figure 1).
Check this out before installing your own TFS server. Or use Visual Studio Online or GitHub or BitBucket or …