Press "Enter" to skip to content

Category: T-SQL

ISNULL vs COALESCE in SQL Server

Erik Darling has a video for us:

A Difference Between ISNULL And COALESCE You Might Care About In SQL Server

There’s nothing for me to snip as the graf. I don’t often link to videos without any sort of text accompaniment, but it’s been too long since I’ve linked to Erik and this was an interesting topic.

Bonus points for using “case expression” instead of the more common but technically incorrect “case statement.”
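
The video has the details, but as a refresher on one commonly cited difference (not necessarily the one Erik covers): ISNULL takes the data type of its first argument, while COALESCE follows data type precedence, which can lead to silent truncation with ISNULL:

DECLARE @ShortString VARCHAR(3) = NULL;

SELECT
    ISNULL(@ShortString, 'abcdef')   AS IsNullResult,    -- 'abc': result type is VARCHAR(3), the type of the first argument
    COALESCE(@ShortString, 'abcdef') AS CoalesceResult;  -- 'abcdef': result type follows data type precedence
GO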


Finding Basic Table Information via T-SQL

Andy Brownsword has a script for us:

In Management Studio we can view object details by hitting F7 in Object Explorer. It gives us basic metrics but I find it very slow to load for the details I typically need.

For that reason I thought I’d share a script to turn to for metrics I commonly need. This query returns:

  • The table details (schema, name, created date)
  • The primary storage (Heap, Clustered, or Columnstore)
  • The number of Nonclustered / Columnstore Indexes
  • The number of records and rough size for data / indexes

Click through for the script and an example of what it looks like.
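
Andy's script covers more (the primary storage type and rough sizes, for example), but as a rough sketch of the kind of catalog-view query involved (this is not his code), something like the following pulls the schema, name, created date, approximate row count, and nonclustered index count:

SELECT
    s.name AS SchemaName,
    t.name AS TableName,
    t.create_date AS CreatedDate,
    (
        SELECT SUM(p.rows)
        FROM sys.partitions AS p
        WHERE p.object_id = t.object_id
            AND p.index_id IN (0, 1)    -- heap (0) or clustered index (1) holds the data rows
    ) AS ApproximateRowCount,
    (
        SELECT COUNT(*)
        FROM sys.indexes AS i
        WHERE i.object_id = t.object_id
            AND i.type = 2              -- nonclustered indexes
    ) AS NonclusteredIndexCount
FROM sys.tables AS t
    INNER JOIN sys.schemas AS s
        ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
GO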


Thoughts on T-SQL Snapshot Backups

Anthony Nocentino has a two-parter for us. First is the idea of how T-SQL snapshot backups work:

Traditional SQL Server backups can struggle with large databases, resulting in longer backup times and resource contention. T-SQL Snapshot Backup, a new feature in SQL Server 2022, addresses these challenges by allowing storage-based snapshots to be coordinated through T-SQL. This feature delivers faster, more efficient backups, especially for large-scale environments with the most aggressive of recovery objectives.

The follow-up is for point-in-time recovery from a T-SQL snapshot backup:

In this post, the second in our series, I will guide you through using the new T-SQL Snapshot Backup feature in SQL Server 2022 to take a snapshot backup and perform point-in-time database restores using a snapshot backup as the base of the restore. We will explore how to manage storage-level operations, such as cloning snapshots and executing an instantaneous point-in-time restore of a database from the snapshot with minimal impact on your infrastructure. Additionally, I will demonstrate a PowerShell script that utilizes dbatools and the PureStoragePowerShellSDK2 modules to automate the process.

Check out both posts and be on the lookout for subsequent entries in the series.
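
The core T-SQL surface area here is fairly small. A minimal sketch of the suspend-and-backup sequence, with placeholder database and file names and with the storage-level snapshot itself handled outside SQL Server by the array's tooling:

-- Freeze write I/O so the storage array can take a consistent snapshot
ALTER DATABASE [SalesDB] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

-- Take the storage-level snapshot here via the array's tooling (e.g., PowerShell)

-- Record the backup metadata (a .bkm file) and release the suspension
BACKUP DATABASE [SalesDB]
TO DISK = 'D:\Backups\SalesDB.bkm'
WITH METADATA_ONLY, FORMAT;
GO

Point-in-time restore then uses that snapshot as the base of the restore, followed by log backups, which is what Anthony walks through in the second post.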


Understanding the EXISTS Keyword in SQL

Eric Blinn probably exists:

I’ve seen the EXISTS keyword in Microsoft SQL Server T-SQL code and don’t understand it well. What does it do? How do I use it? Are there best practices around SQL EXISTS?

This SQL tutorial will explain what the keyword EXISTS does and show several different use cases.

Read on to see how you can use EXISTS and its complement, NOT EXISTS, in a variety of use cases. One important part of why EXISTS can be useful compared to other ways of writing a particular query is that the performance profile of an EXISTS clause is a semi-join: we proceed until we find the first result matching our clause. If that happens to be in the first row, we can stop there, as we’ve fulfilled the requirement. By contrast, an alternative using IN or SELECT COUNT(*) would likely need to read more pages of data than EXISTS.
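
As a quick illustration (the table and column names here are made up), the semi-join pattern and its complement look like this:

-- Customers that have at least one order: the probe of Orders can stop at the first match
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE EXISTS
(
    SELECT 1
    FROM dbo.Orders AS o
    WHERE o.CustomerID = c.CustomerID
);

-- Customers with no orders at all
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.Orders AS o
    WHERE o.CustomerID = c.CustomerID
);
GO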


Eliminating Unexpected Whitespace or Null Characters in Columns

Kevin Wilkie has fun with an unexpected character:

The query ran just fine so I happily completed my work for the day. I sent it to my QA team and had them check it out before pushing it to Production. They sent it back with a note saying “We expect a lot more “Insert Garbage Data Here” to show.”

Being a good data developer – or someone who just wants to show QA up (take your pick) – I decided to dig deep into what FieldName really shows. I found a lot of the following:

In Kevin’s case, he tried to use TRIM() and it didn’t fix anything. That’s because TRIM(), by default, only removes leading and trailing space characters (stopping at the first non-space character from either end), not other whitespace and not the null character CHAR(0) that Kevin found.

An alternative version of TRIM() that would have worked in this case, adding in tab characters as well, would be:

SELECT
    -- Strip spaces, null characters (CHAR(0)), and tabs (CHAR(9)) from both ends
    TRIM(' ' + CHAR(0) + CHAR(9) FROM FieldName) AS TrimmedFieldName
FROM dbo.SomeTable;
GO

And you could also extend that to include carriage returns, line feeds, vertical tabs, the line separator character, and whatever else you need.
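
For example, one extended version (keeping the placeholder column and table names from above) might be:

SELECT
    -- Space, null character, tab, carriage return, line feed, and vertical tab
    TRIM(' ' + CHAR(0) + CHAR(9) + CHAR(13) + CHAR(10) + CHAR(11) FROM FieldName) AS TrimmedFieldName
FROM dbo.SomeTable;
GO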


Searching for Wildcard Characters in LIKE

Andy Brownsword is looking for a discount:

Performing a wildcard search by throwing a % into a LIKE expression is bread and butter. How do we handle this when we actually want to search for the wildcard though?

This was an issue I first saw early in my career before I was even building database solutions. The business had a back office solution where you could search through offers on the UI. Unfortunately it threw up some strange results.

Searching for ‘30%’ offers would return more than expected.

Read on for an example of the problem, as well as how you can resolve it.
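
The two usual fixes are to wrap the wildcard in square brackets or to declare an explicit escape character. A quick sketch, with made-up table and column names:

-- Option 1: brackets make the percent sign match literally
SELECT OfferName
FROM dbo.Offers
WHERE OfferName LIKE '%30[%]%';

-- Option 2: declare an explicit escape character
SELECT OfferName
FROM dbo.Offers
WHERE OfferName LIKE '%30!%%' ESCAPE '!';
GO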


Uniquifiers Doing Heavy Lifting

Michael J. Swart is one of a kind:

If you define a clustered index that’s not unique, SQL Server will add a hidden 4-byte column called UNIQUIFIER. You can’t see it directly but it’s there. When you add a row whose key is a duplicate of an existing row, the new row gets a new unique value for its uniquifier. If you add over 2.1 billion rows with the same key, the uniquifier value exceeds the limit and you will see error 666.

A while ago, we nearly got into trouble because of a bad choice for clustering key that went undetected for so long.

Click through for a query to see how many clustered indexes need uniquifiers and which have the most duplication of key fields.
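
Michael's query goes further, but the candidates are easy to spot: any clustered index that is not declared unique carries the hidden uniquifier. A minimal sketch (not his query) to list them:

SELECT
    s.name AS SchemaName,
    o.name AS TableName,
    i.name AS IndexName
FROM sys.indexes AS i
    INNER JOIN sys.objects AS o
        ON o.object_id = i.object_id
    INNER JOIN sys.schemas AS s
        ON s.schema_id = o.schema_id
WHERE i.type = 1            -- clustered index
    AND i.is_unique = 0     -- not declared unique, so SQL Server adds the uniquifier
    AND o.is_ms_shipped = 0;
GO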


Storing and Parsing JSON in SQL Server

Ed Pollack talks JSON:

Like XML, JSON is an open standard storage format for data, metadata, parameters, or other unstructured or semi-structured data. Because of its heavy usage in applications today, it inevitably will make its way into databases where it will need to be stored, compressed, modified, searched, and retrieved.

Even though a relational database is not the ideal place to store and manage less structured data, application requirements can oftentimes override an “optimal” database design. There is a convenience in having JSON data close to related relational data and architecting its storage effectively from the start can save significant time and resources in the future.

Read on for plenty of examples and tips. Ideologically, I have no problem parsing JSON to load data into SQL Server. I have no real problem storing data in JSON if the calling application takes that JSON as-is and does not expect the database to modify or shred that JSON. I have no problem taking relational data and creating JSON structures to send out to calling applications. My problem comes when you store the data as JSON but then expect the database to manage data. Treat the JSON blob as atomic and we’re fine; otherwise, I want to make that data relational, as befits a relational database.
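
For reference, the two directions I'm fine with look like this in T-SQL, using a made-up document shape: OPENJSON with a WITH clause shreds JSON into columns, and FOR JSON PATH builds JSON from relational rows.

DECLARE @json NVARCHAR(MAX) = N'[{"OrderID": 1, "Customer": "Contoso", "Total": 19.99}]';

-- Shred JSON into relational columns
SELECT j.OrderID, j.Customer, j.Total
FROM OPENJSON(@json)
WITH
(
    OrderID  INT            '$.OrderID',
    Customer NVARCHAR(100)  '$.Customer',
    Total    DECIMAL(10, 2) '$.Total'
) AS j;

-- Build JSON to send back out to a calling application
SELECT OrderID, Customer, Total
FROM (VALUES (1, N'Contoso', 19.99)) AS o (OrderID, Customer, Total)
FOR JSON PATH;
GO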


Comparing Transaction State and Transaction Count Functions

Sergey Gigoyan talks transactions:

When working with transactions, identifying active transactions often becomes quite important. In SQL Server, there are XACT_STATE and @@TRANCOUNT system functions that allow one to get information about active user transactions. However, the question about the differences between these functions is one of the most common questions among database developers. Understanding the details of each function and their differences will help developers make the correct choice between these functions to solve a specific problem related to transactions.

Read on to learn more about each.
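
As a quick illustration of the difference: @@TRANCOUNT counts how many BEGIN TRANSACTION statements are currently open, while XACT_STATE() only tells you whether the session has an active, committable transaction (1), no transaction (0), or a doomed one that can only be rolled back (-1).

BEGIN TRANSACTION;
    BEGIN TRANSACTION;
        SELECT @@TRANCOUNT AS OpenTranCount,    -- 2
               XACT_STATE() AS TranState;       -- 1
    COMMIT;  -- drops @@TRANCOUNT to 1; nothing is durable until the outermost COMMIT
SELECT @@TRANCOUNT AS OpenTranCount,            -- 1
       XACT_STATE() AS TranState;               -- 1
COMMIT;
GO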
