Category: T-SQL

Finding Progress On A Long-Running Statement

David Fowler shows us how to track how far we’ve gotten on a long-running data modification statement:

Well, it would turn out that there is, and to find out we need to turn to our trusty friend, the transaction log.

As we know, the transaction log will squirrel away an entry each time that a row is modified.  We can count up all the LOP_MODIFY_ROW, LOP_INSERT_ROWS and LOP_DELETE_ROWS entries for our transaction and that will tell us just how many rows our transaction has altered so far.
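As a rough sketch of the counting idea (this is not David's script, and the transaction ID below is a made-up example), the query shape looks something like this:

  -- Count the row-level log records written so far by one specific transaction.
  -- fn_dblog is undocumented; NULL, NULL reads the active portion of the log.
  DECLARE @TransactionID NVARCHAR(14) = N'0000:00123456';  -- hypothetical value from fn_dblog's [Transaction ID] column

  SELECT COUNT(*) AS RowsModifiedSoFar
  FROM fn_dblog(NULL, NULL)
  WHERE [Transaction ID] = @TransactionID
    AND Operation IN (N'LOP_MODIFY_ROW', N'LOP_INSERT_ROWS', N'LOP_DELETE_ROWS');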

Click through for a script, as well as an important disclaimer.

Comments closed

Finding Broken Views

Bill Fellows has a script to test each view to see if it is broken:

Shh, shhhhhh, we’re being very very quiet, we’re hunting broken views. Recently, we were asked to migrate some code changes and after doing so, the requesting team told us we had broken all of their views, but they couldn’t tell us what was broken, just that everything was. After a quick rollback to snapshot, thank you Red Gate SQL Compare, I thought it’d be enlightening to see whether anything was broken before our code had been deployed.

You’ll never guess what we discovered.

Read on to see what they discovered (spoilers:  broken views) and how Bill fixed the problem.
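Bill's script is in the post; as a hedged sketch of one common way to smoke-test every view, you can ask SQL Server to re-resolve each one and catch whatever fails:

  DECLARE @schema SYSNAME, @view SYSNAME, @qualified NVARCHAR(776);

  DECLARE view_cursor CURSOR LOCAL FAST_FORWARD FOR
      SELECT s.name, v.name
      FROM sys.views AS v
      JOIN sys.schemas AS s ON s.schema_id = v.schema_id
      -- skip schema-bound views, which sp_refreshview will not refresh
      WHERE OBJECTPROPERTY(v.object_id, 'IsSchemaBound') = 0;

  OPEN view_cursor;
  FETCH NEXT FROM view_cursor INTO @schema, @view;

  WHILE @@FETCH_STATUS = 0
  BEGIN
      SET @qualified = QUOTENAME(@schema) + N'.' + QUOTENAME(@view);

      BEGIN TRY
          -- sp_refreshview re-binds the view and errors out if it references
          -- objects or columns that no longer exist.
          EXEC sys.sp_refreshview @viewname = @qualified;
      END TRY
      BEGIN CATCH
          SELECT @qualified AS BrokenView, ERROR_MESSAGE() AS Problem;
      END CATCH;

      FETCH NEXT FROM view_cursor INTO @schema, @view;
  END;

  CLOSE view_cursor;
  DEALLOCATE view_cursor;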

Comments closed

Why Window Functions Can’t Appear In The WHERE Clause

Doug Lane explains why window functions like ROW_NUMBER() cannot appear in the WHERE clause (or FROM, GROUP BY, or HAVING):

SQL Server doesn’t process parts of a query in the same order they’re written. Rather than start with SELECT the way we read and write it, here’s the order SQL Server progresses through:

  1. FROM
  2. WHERE
  3. GROUP BY
  4. HAVING
  5. SELECT
  6. ORDER BY
  7. TOP

The first four steps are all about getting the source data and reducing the result set down. Steps 5 & 6 determine which columns are presented and in which order. Step 7 (TOP) is only applied at the end because you can’t say which rows are in the top n rows until the set has been sorted. (You can read Itzik Ben-Gan’s explanation of this process in way more detail here.)

Definitely worth reading.  Doug also shows how to get around this fact of life and get the equivalent of a window function inside a WHERE clause, at least in terms of function if not necessarily performance.
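The usual workaround (sketched below against a hypothetical Orders table, not necessarily Doug's exact example) is to compute the window function in a CTE or derived table, where it runs at the SELECT step, and then filter on it from the outer query:

  -- Find each customer's most recent order.
  WITH NumberedOrders AS
  (
      SELECT o.CustomerID,
             o.OrderDate,
             ROW_NUMBER() OVER (PARTITION BY o.CustomerID
                                ORDER BY o.OrderDate DESC) AS rn
      FROM dbo.Orders AS o
  )
  SELECT CustomerID, OrderDate
  FROM NumberedOrders
  WHERE rn = 1;   -- legal here: by the time the outer WHERE runs, rn already exists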

Comments closed

Concatenation Truncation

Adrian Buckman walks through one of the more annoying aspects of building large strings in SQL Server:

So there I was building this massive VARCHAR(MAX) string, and concatenated at various points in my code were database names of the datatype NVARCHAR(128).
The interesting part was that I was expecting SQL Server to use my largest data type – the VARCHAR(MAX) – and just concatenate the NVARCHAR(128) values into it.
This was not the case – what actually happened was my string of VARCHAR(MAX) characters being truncated down to an NVARCHAR(4000)!

There is a reason for this, and it’s all to do with Data Type Precedence: in this case the NVARCHAR takes precedence over my VARCHAR, unless of course I explicitly convert the NVARCHAR to a VARCHAR.

Read the whole thing.
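As a quick repro of the gotcha (the string pieces and database name below are my own examples, not Adrian's code):

  DECLARE @DatabaseName NVARCHAR(128) = N'MyDatabase';
  DECLARE @sql VARCHAR(MAX);

  -- Every piece here is a non-MAX type, and NVARCHAR wins on data type precedence,
  -- so the whole expression is evaluated as NVARCHAR(4000) before the assignment.
  SET @sql = REPLICATE('x', 3000) + ' USE ' + @DatabaseName + '; ' + REPLICATE('y', 3000);
  SELECT LEN(@sql) AS TruncatedLength;   -- capped at 4000

  -- Converting the NVARCHAR piece (and anchoring the expression with a VARCHAR(MAX)
  -- so it never caps at 8000 either) keeps the full string.
  SET @sql = CAST('' AS VARCHAR(MAX)) + REPLICATE('x', 3000) + ' USE '
           + CONVERT(VARCHAR(128), @DatabaseName) + '; ' + REPLICATE('y', 3000);
  SELECT LEN(@sql) AS FullLength;        -- a little over 6,000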

Comments closed

Listing SQL Server Options

Kenneth Fisher breaks apart the @@OPTIONS bit flags:

There are a fair number of options settings. ANSI_NULLS, ARITHABORT, QUOTED_IDENTIFIER, etc. Each session has its own set of configurations. They are initially set based on the user settings system configuration, then the various connection programs (SSMS for example) can override that, then the various SET commands can override that.

Now personally I prefer to keep my settings to the default to eliminate confusion but they do get changed occasionally. Just as an example, when you generate a script from SSMS it typically includes a bunch of SET ON and SET OFF commands. And if you turn on a setting that was already on, then turn it off at the end, well, your setting has changed unexpectedly. My original intent for this post was to create a stored procedure that would let you save the current settings and restore them. Unfortunately, I ran into a scope problem. I can find the current settings: @@options. I can break down the integer value using a script from here (just in case the post should disappear before this one does, here is the code from the article).

Read on for the list, including things like ANSI_NULLS, ANSI_PADDING, and XACT_ABORT.  I probably pay less attention to these than I should and just have a habit of setting the few most important settings for my environment atop every procedure definition.
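Kenneth's script is in the post; as a compact alternative sketch, the documented bit values for the user options setting can be decoded against @@OPTIONS with a single query along these lines:

  -- Decode the current session's @@OPTIONS bitmap into ON/OFF flags.
  SELECT o.OptionName,
         CASE WHEN @@OPTIONS & o.BitValue = o.BitValue THEN 'ON' ELSE 'OFF' END AS Setting
  FROM (VALUES
          (1,     'DISABLE_DEF_CNST_CHK'),
          (2,     'IMPLICIT_TRANSACTIONS'),
          (4,     'CURSOR_CLOSE_ON_COMMIT'),
          (8,     'ANSI_WARNINGS'),
          (16,    'ANSI_PADDING'),
          (32,    'ANSI_NULLS'),
          (64,    'ARITHABORT'),
          (128,   'ARITHIGNORE'),
          (256,   'QUOTED_IDENTIFIER'),
          (512,   'NOCOUNT'),
          (1024,  'ANSI_NULL_DFLT_ON'),
          (2048,  'ANSI_NULL_DFLT_OFF'),
          (4096,  'CONCAT_NULL_YIELDS_NULL'),
          (8192,  'NUMERIC_ROUNDABORT'),
          (16384, 'XACT_ABORT')
       ) AS o (BitValue, OptionName);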

Comments closed

Using RAISERROR For Debug Info

Doug Lane exhorts people to use RAISERROR instead of PRINT when printing messages:

It wasn’t until a few years ago, when I started contributing to the First Responder Kit at Brent Ozar Unlimited, that I noticed every status message in the kit scripts was thrown with something other than PRINT.

Strange, I thought, since those scripts like to report on what statements are running. Turns out, they avoided PRINT because it has some serious drawbacks:

  • PRINT doesn’t necessarily output anything at the moment it’s called.
  • PRINT statements won’t show up in Profiler.
  • PRINT can’t be given variable information without CAST or CONVERT.

Those are important limitations, as Doug shows.
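A minimal sketch of the pattern (the variable names and message are mine): severity 0 keeps the message informational, WITH NOWAIT flushes it immediately instead of waiting on the output buffer, and the printf-style placeholders take variables without any CAST or CONVERT.

  DECLARE @TableName SYSNAME = N'dbo.BigTable';   -- hypothetical object being processed
  DECLARE @RowsDone  INT     = 50000;

  RAISERROR (N'Processed %d rows of %s so far...', 0, 1, @RowsDone, @TableName) WITH NOWAIT;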

Comments closed

Truncation Versus Deletion

Richie Lee contrasts two methods of getting rid of data:

I’ve been using TRUNCATE TABLE to clear out some temporary tables in a database. It’s a very simple statement to run, but I never really knew why it was so much quicker than a delete statement. So let’s look at some facts:

  1. The TRUNCATE TABLE statement is a DDL operation, whilst DELETE is a DML operation.

  2. TRUNCATE TABLE is useful for emptying temporary tables while leaving the structure in place for more data. To remove the table definition in addition to its data, use the DROP TABLE statement.

Read on for more details and a couple scripts to test out Richie’s statements.
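A tiny side-by-side sketch (the staging table below is a made-up example, not Richie's script):

  CREATE TABLE dbo.StagingRows (RowID INT IDENTITY(1, 1), Payload VARCHAR(100));

  INSERT dbo.StagingRows (Payload) VALUES ('a'), ('b'), ('c');

  -- DELETE is fully logged DML: each row removal gets its own log record,
  -- and the identity seed carries on from where it left off.
  DELETE FROM dbo.StagingRows;
  INSERT dbo.StagingRows (Payload) VALUES ('d');
  SELECT RowID FROM dbo.StagingRows;   -- 4

  -- TRUNCATE TABLE is DDL: it deallocates the data pages (logging only the
  -- deallocations), keeps the table definition, and resets the identity seed.
  TRUNCATE TABLE dbo.StagingRows;
  INSERT dbo.StagingRows (Payload) VALUES ('e');
  SELECT RowID FROM dbo.StagingRows;   -- back to 1

  DROP TABLE dbo.StagingRows;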

Comments closed

Finding Out Whodunnit Using The Transaction Log

David Fowler shows us how to figure out which user made a bad data change when you don’t have auditing mechanisms in place:

So it’s looking like things are in a bad way. Obviously we could go to a backup and get the old values back, but that’s never going to tell us who made the change.  So, that transaction log again: how do we actually go about getting our hands dirty and having a look at it?

Well there’s a nice little undocumented function called fn_dblog.  Let’s try giving that a go and see what we get back. By the way, the two parameters are the first and last LSNs that you want to look between.  Leaving them as NULL will return the entire log.
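As a rough illustration of the idea (not David's exact script, and the table name is a made-up example), you can tie the offending operations back to their LOP_BEGIN_XACT records, which carry the SID of the login that started the transaction:

  SELECT begin_xact.[Transaction ID],
         begin_xact.[Begin Time],
         SUSER_SNAME(begin_xact.[Transaction SID]) AS LoginName
  FROM fn_dblog(NULL, NULL) AS del                        -- NULL, NULL = the whole active log
  JOIN fn_dblog(NULL, NULL) AS begin_xact
    ON  begin_xact.[Transaction ID] = del.[Transaction ID]
    AND begin_xact.Operation = N'LOP_BEGIN_XACT'
  WHERE del.Operation = N'LOP_DELETE_ROWS'
    AND del.AllocUnitName LIKE N'dbo.ImportantTable%';    -- hypothetical table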

This is great unless you have connection pooling and the problem happened through an application.  In that case, the returned username will be the application’s username.

Comments closed

New E-Mail Course For Unit Testing T-SQL Code

Ed Elliott has a free e-mail course available to learn how to use tSQLt:

Unit testing helps us to write better code and make rapid changes to our code, and it has been generally seen as a good idea for about 10 years. Writing tests for T-SQL code is made much easier by using tSQLt, but there is quite a high barrier to entry, both in terms of the technical skills needed to get tSQLt running and in how to approach large code bases of, sometimes, unfriendly T-SQL code and tame that code with unit tests.

I have successfully unit tested T-SQL code in a number of different environments, including clean greenfield environments as well as legacy projects, and I have written this course to help people get started with unit testing, but also to help them turn unit testing into a part of their development process that they can use every day to improve the quality of their work and the speed at which deployments can be made.

Click through to sign up.
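If you've not used tSQLt before, a test is just a stored procedure in a test class; a minimal example (the class, test, and function under test are my own made-up names) looks like this:

  EXEC tSQLt.NewTestClass 'CalculatorTests';
  GO

  CREATE PROCEDURE CalculatorTests.[test AddTwoNumbers returns the sum]
  AS
  BEGIN
      -- dbo.AddTwoNumbers is a hypothetical scalar function under test.
      DECLARE @Actual INT = dbo.AddTwoNumbers(2, 3);

      EXEC tSQLt.AssertEquals @Expected = 5, @Actual = @Actual;
  END;
  GO

  EXEC tSQLt.Run 'CalculatorTests';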

Comments closed