Press "Enter" to skip to content

Category: T-SQL

Concatenation Truncation

Adrian Buckman walks through one of the more annoying aspects of building large strings in SQL Server:

So there I was building this massive VARCHAR(MAX) string, and concatenated at various points in my code were database names of the datatype NVARCHAR(128).
The interesting part was that I was expecting SQL Server to use my largest data type – the VARCHAR(MAX) – and just concatenate the NVARCHAR(128) values into it. This was not the case – what actually happened was that my string of VARCHAR(MAX) characters was truncated down to an NVARCHAR(4000)!

There is a reason for this, and it's all to do with data type precedence: in this case the NVARCHAR takes precedence over my VARCHAR unless, of course, I explicitly convert the NVARCHAR to a VARCHAR.
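
As a rough illustration of the behavior (a minimal sketch, not Adrian's exact repro; the variable names are invented), the defensive habit is to force the whole expression into the MAX type before concatenating:

-- Risky: the literals and the NVARCHAR(128) operand are combined while
-- nothing in the expression is a MAX type yet, so the intermediate result
-- can be capped well short of MAX.
DECLARE @DatabaseName NVARCHAR(128) = N'MyDatabase';
DECLARE @Sql VARCHAR(MAX);

SET @Sql = REPLICATE('x', 8000) + ' USE ' + @DatabaseName + ';';
SELECT LEN(@Sql);  -- shorter than expected

-- Safer: convert explicitly so every operand is already VARCHAR(MAX).
SET @Sql = CONVERT(VARCHAR(MAX), REPLICATE('x', 8000))
         + ' USE '
         + CONVERT(VARCHAR(MAX), @DatabaseName)
         + ';';
SELECT LEN(@Sql);  -- full length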

Read the whole thing.

Listing SQL Server Options

Kenneth Fisher breaks apart the @@OPTIONS bit flags:

There are a fair number of options settings: ANSI_NULLS, ARITHABORT, QUOTED_IDENTIFIER, etc. Each session has its own set of configurations. They are initially set based on the user options system configuration; then the various connection programs (SSMS, for example) can override that, and then the various SET commands can override that.

Now, personally, I prefer to keep my settings at the defaults to eliminate confusion, but they do get changed occasionally. As an example, when you generate a script from SSMS it typically includes a bunch of SET ON and SET OFF commands. And if you turn on a setting that was already on, then turn it off at the end, well, your setting has changed unexpectedly. My original intent for this post was to create a stored procedure that would let you save the current settings and restore them. Unfortunately, I ran into a scope problem. I can find the current settings with @@OPTIONS, and I can break down the integer value using a script from here (just in case that post should disappear before this one does, here is the code from the article).

Read on for the list, including things like ANSI_NULLS, ANSI_PADDING, and XACT_ABORT.  I probably pay less attention to these than I should and just have a habit of setting the few most important settings for my environment atop every procedure definition.
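
For reference, here is a small sketch (not Kenneth's script) of pulling a few of those flags out of @@OPTIONS with bitwise ANDs, using the documented user-options bit positions:

SELECT @@OPTIONS AS CurrentOptions,
       CASE WHEN @@OPTIONS & 16    = 16    THEN 'ON' ELSE 'OFF' END AS ANSI_PADDING,
       CASE WHEN @@OPTIONS & 32    = 32    THEN 'ON' ELSE 'OFF' END AS ANSI_NULLS,
       CASE WHEN @@OPTIONS & 64    = 64    THEN 'ON' ELSE 'OFF' END AS ARITHABORT,
       CASE WHEN @@OPTIONS & 256   = 256   THEN 'ON' ELSE 'OFF' END AS QUOTED_IDENTIFIER,
       CASE WHEN @@OPTIONS & 16384 = 16384 THEN 'ON' ELSE 'OFF' END AS XACT_ABORT;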

Using RAISERROR For Debug Info

Doug Lane exhorts people to use RAISERROR instead of PRINT when printing messages:

It wasn’t until a few years ago, when I started contributing to the First Responder Kit at Brent Ozar Unlimited, that I noticed every status message in the kit scripts was thrown with something other than PRINT.

Strange, I thought, since those scripts like to report on what statements are running. Turns out, they avoided PRINT because it has some serious drawbacks:

  • PRINT doesn’t necessarily output anything at the moment it’s called.
  • PRINT statements won’t show up in Profiler.
  • PRINT can’t be given variable information without CAST or CONVERT.

Those are important limitations, as Doug shows.
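
A quick sketch of the alternative pattern (illustrative names only): RAISERROR at severity 0 WITH NOWAIT flushes the message immediately, and the printf-style placeholders take variables without any CAST or CONVERT:

DECLARE @TableName SYSNAME = N'dbo.Orders',
        @RowsDone  INT     = 42;

RAISERROR('Processed %d rows from %s so far...', 0, 1, @RowsDone, @TableName) WITH NOWAIT;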

Truncation Versus Deletion

Richie Lee contrasts two methods of getting rid of data:

I’ve been using TRUNCATE TABLE to clear out some temporary tables in a database. It’s a very simple statement to run, but I never really knew why it was so much quicker than a delete statement. So let’s look at some facts:

  1. The TRUNCATE TABLE statement is a DDL operation, whilst DELETE is a DML operation.

  2. TRUNCATE TABLE is useful for emptying temporary tables while leaving the structure in place for more data. To remove the table definition in addition to its data, use the DROP TABLE statement.

Read on for more details and a couple scripts to test out Richie’s statements.
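
A trivial side-by-side sketch of the two statements (table and column names invented for illustration):

CREATE TABLE dbo.StagingRows (Id INT IDENTITY(1,1) PRIMARY KEY, Payload VARCHAR(100));

INSERT INTO dbo.StagingRows (Payload)
SELECT TOP (10000) 'some data'
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

DELETE FROM dbo.StagingRows;     -- DML: logs every deleted row individually
-- versus
TRUNCATE TABLE dbo.StagingRows;  -- DDL: deallocates the pages but keeps the table definition

DROP TABLE dbo.StagingRows;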

Finding Out Whodunnit Using The Transaction Log

David Fowler shows us how to figure out which user made a bad data change when you don’t have auditing mechanisms in place:

So it's looking like things are in a bad way. Obviously we could go to a backup and get the old values back, but that's never going to tell us who made the change. So, back to that transaction log: how do we actually go about getting our hands dirty and having a look at it?

Well, there's a nice little undocumented function called fn_dblog. Let's try giving that a go and see what we get back. By the way, the two parameters are the first and last LSNs that you want to look between. Leaving them as NULL will return the entire log.

This is great unless you have connection pooling and the problem happened through an application.  In that case, the returned username will be the application’s username.
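
A hedged sketch of the kind of query involved (fn_dblog is undocumented, so column names and behavior can vary by version):

SELECT [Current LSN],
       [Operation],
       [Transaction ID],
       [Transaction Name],
       SUSER_SNAME([Transaction SID]) AS WhoDidIt,  -- map the SID back to a login
       [Begin Time]
FROM sys.fn_dblog(NULL, NULL)                       -- NULL, NULL = the whole active log
WHERE [Operation] = 'LOP_BEGIN_XACT';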

New E-Mail Course For Unit Testing T-SQL Code

Ed Elliott has a free e-mail course available to learn how to use tSQLt:

Unit testing helps us to write better code and make rapid changes to our code, and it has generally been seen as a good idea for about 10 years. Writing tests for T-SQL code is made much easier by using tSQLt, but there is quite a high barrier to entry, both in terms of the technical skills needed to get tSQLt running and in terms of how to approach large code bases of, sometimes, unfriendly T-SQL code and tame that code with unit tests.

I have successfully unit tested T-SQL code in a number of different environments, including clean greenfield environments as well as legacy projects, and I have written this course to help people get started with unit testing, but also to help them turn unit testing into a part of their development process that they can use every day to improve the quality of their work and the speed at which deployments can be made.
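
For a flavor of what a tSQLt test looks like, here is a minimal sketch (it assumes the tSQLt framework is installed and uses a made-up dbo.AddNumbers function purely for illustration):

EXEC tSQLt.NewTestClass 'MathTests';
GO
CREATE PROCEDURE MathTests.[test AddNumbers returns the sum]
AS
BEGIN
    DECLARE @Actual INT = dbo.AddNumbers(2, 3);
    EXEC tSQLt.AssertEquals @Expected = 5, @Actual = @Actual;
END;
GO
EXEC tSQLt.Run 'MathTests';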

Click through to sign up.

SQL Code Smells

Phil Factor has updated his code smells compendium:

Some time ago, Phil Factor wrote his booklet ‘SQL Code Smells’, collecting together a whole range of SQL coding practices that could be considered to indicate the need for a review of the code. It was published as 119 code smells, even though there were 120 of them at the time. Phil Factor has continued to collect them, and the current state of the art is reflected in this article. There are now around 150 of these smells, and SQL Code Guard is committed to covering as many of them as possible.

I loved this booklet when it came out almost as much as I loved his Confessions of an IT Manager.  If you’re looking for some light reading over a long weekend, you can do a lot worse than this.

Selecting Into Tables, Sans Identity

Kenneth Fisher shows a couple of ways to remove an identity property from a column when creating a new table:

A while back I did a post about creating an empty table using a SELECT statement. Basically doing something like this:

SELECT TOP 0 * INTO tableNameArchive FROM tableName

will create a new table with the exact same structure as the source table. It can be a really handy way to create an archive table, a temp table, etc. You don’t create any of the extra objects (indexes, triggers, constraints, etc.), but what you do end up with is every table property from the original table. This includes datatypes, nullability, and (as I’m sure you realized from the title) IDENTITY. If you are creating an archive table, a temp table, etc., that is probably not something you want. Fortunately, there are two easy ways to get rid of the identity.

Click through to see those two methods.
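
One common way to do this (not necessarily either of Kenneth's two methods, and with invented column names) is to wrap the identity column in an expression, since a computed result does not carry the IDENTITY property over:

SELECT TOP 0
       ISNULL(t.Id + 0, 0) AS Id,   -- the expression strips IDENTITY; ISNULL keeps the column NOT NULL
       t.OtherColumn
INTO   tableNameArchive
FROM   tableName AS t;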

Building Temp Tables From Queries

David Fowler shows how to use dm_exec_describe_first_result_set to generate a temp table schema:

Have you ever needed to store the results of a complex query in a temp table?  How did you go about working out what the definition for that temp table should be, the columns and their data types?

It can be a bit of a pain, not to mention time-consuming, to have to go figuring out what all the datatypes of the base tables are.

I got fed up with all that hunting around as well, so as a quick blog post I thought I’d share a little script that will take your query in a variable and print out a temp table definition for its result set.

Click through for the script, as well as an important comment by frequent curatee Shane O’Neill.
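
A hedged sketch of the DMV at the heart of that approach (this is not David's script; the sample query is arbitrary, and STRING_AGG needs SQL Server 2017 or later):

DECLARE @Query NVARCHAR(MAX) = N'SELECT name, object_id, create_date FROM sys.tables';

SELECT 'CREATE TABLE #Results (' +
       STRING_AGG(QUOTENAME(name) + ' ' + system_type_name +
                  CASE WHEN is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END, ', ')
            WITHIN GROUP (ORDER BY column_ordinal) +
       ');' AS TempTableDefinition
FROM sys.dm_exec_describe_first_result_set(@Query, NULL, 0);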
