I feel a little dirty writing about the Default Trace in the world of Extended Events, but I also know that many people simply don’t know how to use XEvents, and this can be faster if you already have it in your toolbox. It also works back to SQL Server 2005, whereas Extended Events didn’t arrive until SQL Server 2008.
I have modified this several times to improve it – I started with a query from Tibor Karaszi (blog/@TiborKaraszi), modified it with some code from Jason Strate (blog/@StrateSQL), and then modified that myself for what is included and what is filtered. There are links to both Tibor’s and Jason’s source material in the code below.
I usually just blame the BI team for database growth.
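For reference, here is a minimal sketch of the idea (not Tibor’s or Jason’s full script): the default trace records file auto-grow events, so you can query it to see what grew, when, and which application caused it. Event classes 92 and 93 are Data File Auto Grow and Log File Auto Grow.

```sql
-- Read auto-grow events from the default trace (a minimal sketch).
DECLARE @path nvarchar(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT  te.name                   AS EventName,
        t.DatabaseName,
        t.[FileName],
        t.StartTime,
        t.IntegerData * 8 / 1024  AS GrowthMB,   -- IntegerData = growth in 8KB pages
        t.ApplicationName,
        t.LoginName
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN    sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE   t.EventClass IN (92, 93)                 -- data / log file auto-grow
ORDER BY t.StartTime DESC;
```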
Planning to move to new hardware for your SQL Server? Techniques like log shipping and database mirroring can be incredibly useful to make the change fast and painless – but you’ve got to pick the right techniques for your environment ahead of time, and know how to do a few things that aren’t in the GUI.
Some of this stuff could also feed into a disaster recovery plan.
The first, and most important, thing to notice here is that it’s supplying me with a new name. I can change that to anything I want, as long as it’s not the name of a database that already exists on my Azure SQL Database server. You read that correctly: you can restore a database to Azure SQL Database, but there is no WITH REPLACE option. The restore creates a new database. This is important. In a recovery scenario, you need to be sure that you’re prepared to deal with this fact. How could you replace the existing database? Immediately run a pair of ALTER DATABASE commands: first rename the existing database to something else, then rename your newly created database to the old name. That’s your choice.
There are a couple of gotchas, so if you are administering Azure SQL Database instances, be aware of these.
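The rename swap described above would look something like this (the database names are hypothetical placeholders; run the statements once the restore has completed):

```sql
-- Swap the freshly restored database into place of the old one.
ALTER DATABASE SalesDb     MODIFY NAME = SalesDb_Old;   -- park the existing database
ALTER DATABASE SalesDb_New MODIFY NAME = SalesDb;       -- promote the restored copy
```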
Although you can get away with using convenience names (i.e. a dot, (local), or localhost) as the SQL Server name during the configuration of a SharePoint server farm, as shown in Figures 2 & 3, such a practice is not allowed during configuration of TFS (as shown in Figure 1).
Check this out before installing your own TFS server. Or use Visual Studio Online or GitHub or BitBucket or …
Reader Dick H. posted a comment on my last version of this script stating that he got an error when this was run against tables containing sparse columns. Data compression does not support tables with sparse columns, so they should be excluded from this process. I’ve modified this script to correct this. I don’t have any tables with sparse columns in my environment, so thanks to Dick for pointing this out!
For instructions on using this script, look here.
This is a very useful script to have in your back pocket.
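The fix boils down to a filter along these lines (a sketch of the idea, not the script itself): skip any table that contains a sparse column, since data compression doesn’t support them.

```sql
-- List tables eligible for data compression, excluding those with sparse columns.
SELECT  s.name AS SchemaName,
        t.name AS TableName
FROM    sys.tables  AS t
JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
WHERE   NOT EXISTS (SELECT 1
                    FROM   sys.columns AS c
                    WHERE  c.object_id = t.object_id
                    AND    c.is_sparse = 1);
```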
In this post I will talk about the new Block Predicate option available in the CTP 3.0 release. With this option we can restrict write access for specific users.
Block Predicates block all write operations like:
– AFTER INSERT and AFTER UPDATE
– BEFORE UPDATE
– and BEFORE DELETE
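To make the list above concrete, here is a sketch of a security policy with block predicates. The objects (Security.fn_blockpredicate, dbo.Sales, the SalesRep column) are hypothetical; the predicate function must be schema-bound, and each BLOCK PREDICATE clause names the operation it guards.

```sql
-- Predicate: a row is writable only if SalesRep matches the current user.
CREATE FUNCTION Security.fn_blockpredicate (@SalesRep sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS result WHERE @SalesRep = USER_NAME();
GO
CREATE SECURITY POLICY Security.SalesPolicy
    ADD BLOCK PREDICATE Security.fn_blockpredicate(SalesRep)
        ON dbo.Sales AFTER INSERT,
    ADD BLOCK PREDICATE Security.fn_blockpredicate(SalesRep)
        ON dbo.Sales BEFORE UPDATE,
    ADD BLOCK PREDICATE Security.fn_blockpredicate(SalesRep)
        ON dbo.Sales BEFORE DELETE
    WITH (STATE = ON);
```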
I want this to perform well in a busy production environment. I really, really do.
A frequent complaint we receive comes from a client that has an in-house sysadmin monitoring tool like Microsoft SCOM/SCCM. They turn the tool on and it starts red-alerting because the Windows server hosting SQL Server is at 90%+ used RAM. The sysadmin (or local DBA) logs on to the server and finds that there is 5GB free (~4%), and the sqlservr.exe process is using 120GB of the 128GB on the server!
In my experience, VMware administrators tend to be most hung up about this concept.
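Before anyone panics, a quick sanity check like this (SQL Server 2012 or later) shows whether SQL Server is simply sitting at its configured memory target, which is normal behavior rather than a leak:

```sql
-- Compare memory SQL Server has committed against its target.
SELECT  committed_kb / 1024        AS CommittedMB,
        committed_target_kb / 1024 AS TargetMB
FROM    sys.dm_os_sys_info;
```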
By default, when you install SQL Server, the TempDB database is not optimized. For SQL Servers that use TempDB even moderately, I recommend optimizing it for the best performance. TempDB is the storage area that SQL Server uses to store data for short periods of time; things you’ll find in TempDB include temporary tables and data being sorted for query results. The data in TempDB is so temporary that each time the SQL Server service restarts, TempDB is recreated.
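A common part of that optimization is pre-sizing the data file and adding more equally sized files. A hedged sketch (file path and sizes are placeholders; pick values appropriate to your workload and storage):

```sql
-- Pre-size the existing tempdb data file and add a second equally sized file.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',   -- placeholder path
              SIZE = 4096MB, FILEGROWTH = 512MB);
-- Repeat ADD FILE for additional files; adding files does not require a restart.
```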
Good advice within.
One of the most important duties of a DBA is to run CHECKDB frequently, to verify that the database is both logically and physically consistent. So when inheriting an instance of SQL Server, it’s usually a good idea to check when CHECKDB was last run. Ironically enough, it is actually quite difficult to get this information quickly, especially if you have a lot of databases to check. The obvious way is to run DBCC DBINFO against the specific database. This returns much more than just the last time CHECKDB was run, and it is not especially clear which returned row tells us the date (FYI, the field is “dbi_dbccLastKnownGood”).
It’s a bit of a shame that this information isn’t made available in an easily-queryable DMV.
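One way to dig it out for every database at once (my own sketch, not the linked script, and it leans on the undocumented sp_MSforeachdb procedure) is to capture DBCC DBINFO output into a temp table and filter for the field:

```sql
-- Capture DBCC DBINFO for each database and pull the last-known-good CHECKDB date.
CREATE TABLE #dbinfo (ParentObject varchar(255), [Object] varchar(255),
                      Field varchar(255), [Value] varchar(255));
INSERT #dbinfo
    EXEC sp_MSforeachdb 'DBCC DBINFO (''?'') WITH TABLERESULTS, NO_INFOMSGS';
SELECT Field, [Value]
FROM   #dbinfo
WHERE  Field IN ('dbi_dbname', 'dbi_dbccLastKnownGood');  -- name / date pairs
DROP TABLE #dbinfo;
```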
Turns out SQL 2008 R2 (where the original script worked) returns different fields than 2012 and 2014 (where it didn’t).
I figured I didn’t want to find out which version of the script to use every time I needed to run it on a server, so I told the script to figure that out by itself, and then run the appropriate hunk of code (example below).
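The version-branching idea looks roughly like this (a sketch with placeholder hunks, not the actual script): read the major version from SERVERPROPERTY once, then execute whichever branch matches.

```sql
-- Branch on the major version: 10 = 2008/2008 R2, 11+ = 2012 and later.
DECLARE @major int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128)), 4) AS int);

IF @major <= 10
    EXEC ('SELECT ''2008-era field list goes here''');   -- placeholder hunk
ELSE
    EXEC ('SELECT ''2012+ field list goes here''');      -- placeholder hunk
```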
This is a good explanation of how to back out of a complex situation.