
Category: T-SQL

Performance Testing DATE_BUCKET()

Louis Davidson runs some tests:

A month and a half ago, I wrote a blog on using DATE_BUCKET. It is a cool feature that makes some kinds of grouping really quite easy. It is here: Cool features in SQL Server I missed…DATE_BUCKET. One of the comments that came in was about the performance of DATE_BUCKET versus using things like DATEDIFF or a date table.

I started working on it then, but it got a bit involved (as performance comparison tests often do), so it took me a bit longer to get to than expected. But here it is, and the results are kind of what you would expect. The uses for DATE_BUCKET are really straightforward, and would rarely involve an index or a lot of filtering using the function. But over a large number of rows, if it takes more time (even a millisecond more) than another method, you would notice the difference adding up pretty quickly.

Read on to see how DATE_BUCKET() performs compared to other methods of solving the same problem.
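The comparison being tested can be sketched like this. Table and column names here are hypothetical, and DATE_BUCKET() requires SQL Server 2022 or later:

```sql
-- Two-week buckets via DATE_BUCKET, anchored at a Monday origin.
SELECT DATE_BUCKET(WEEK, 2, OrderDate, '20200106') AS BucketStart,
       COUNT(*) AS OrderCount
FROM   Sales.Orders
GROUP  BY DATE_BUCKET(WEEK, 2, OrderDate, '20200106');

-- The pre-2022 equivalent: integer date arithmetic with DATEDIFF/DATEADD.
SELECT DATEADD(DAY, (DATEDIFF(DAY, '20200106', OrderDate) / 14) * 14, '20200106') AS BucketStart,
       COUNT(*) AS OrderCount
FROM   Sales.Orders
GROUP  BY DATEADD(DAY, (DATEDIFF(DAY, '20200106', OrderDate) / 14) * 14, '20200106');
```

Both queries produce the same bucket boundaries; the question Louis tests is whether the newer function costs anything extra per row.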


Code to Perform Binary Search in SQL Server

Andy Brownsword has a procedure:

Let’s recap what we’re doing here:

Large append-heavy tables – like logs or audits – often don’t have a useful index on the timestamp. These types of tables do however have a strong correlation between their clustering key and the timestamp due to chronological inserts.

A binary search approach splits the table in half to narrow down the search space with each iteration. By abusing the incremental relationship between the clustering key and timestamps, we can quickly zero in on the point in time we’re after. If you want to see the mechanics, check out last week’s post.

I love the approach for log tables, assuming that a timestamp is part of the filter. This is a clever application of a very common computer science algorithm to database operations.


Idempotence in SQL Scripts

Jared Westover lays out some solid advice:

Imagine you’ve spent weeks preparing a software release for production, only to deploy it late one night and receive an error that the table or column already exists. This occurs in production environments, even when you use migration-based deployment methods such as DbUp. How can you ensure or at least reduce the likelihood of an error like this in the future?

At a prior job, we needed to write idempotent scripts because the deploy-to-QA process would run every script for the sprint every time someone checked in a new script. This prevented a few classes of release error, and I’ve carried that practice with me to subsequent engagements.

SQL Server 2016 then gave us several helpers, like CREATE OR ALTER for stored procedures and views, and DROP IF EXISTS for views and tables. It’s not a complete set of everything I’d like the functionality to cover, but it’s a lot more convenient than what we had to do in prior versions.
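The common idempotent patterns look something like this (object names are illustrative):

```sql
-- Table: guard with an existence check rather than DROP, since dropping loses data.
IF OBJECT_ID('dbo.AuditLog', 'U') IS NULL
    CREATE TABLE dbo.AuditLog
    (
        AuditLogId int IDENTITY(1,1) PRIMARY KEY,
        LoggedAt   datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
GO

-- Procedures, views, functions, and triggers: CREATE OR ALTER (SQL Server 2016 SP1+).
CREATE OR ALTER PROCEDURE dbo.GetAuditLog
AS
BEGIN
    SELECT AuditLogId, LoggedAt FROM dbo.AuditLog;
END;
GO

-- Adding a column still needs a manual check.
IF COL_LENGTH('dbo.AuditLog', 'UserName') IS NULL
    ALTER TABLE dbo.AuditLog ADD UserName sysname NULL;

-- The 2016 helper syntax, for objects that are safe to drop.
DROP TABLE IF EXISTS dbo.AuditLog_Staging;
```

Run it once or ten times; the end state of the database is the same, which is the whole point.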


Tracking the Last Sequence Value in SQL Server

Greg Low shares some queries and some history:

Sequences allow us to create a schema-bound object that is not associated with any specific table.

For example, if I have a Sales.HotelBookings table, a Sales.FlightBookings table, and a Sales.VehicleBookings table, I might want to have a common BookingID used as the key for each table. If more than the BookingID was involved, you could argue that there is a normalization problem with the tables, but we’ll leave that discussion for another day.

Another reason I like sequences is that they make it much easier to override the auto-generated value, without the need for code like SET IDENTITY_INSERT that we need with IDENTITY columns. This is particularly powerful if you ever need to do this across linked servers, as you’ll quickly find out that SET IDENTITY_INSERT doesn’t work there.

Sequences let me avoid these types of issues: they perform identically to IDENTITY columns, and they also give me more control over the cache for available values.

Click through for some queries to find the latest value of a sequence, as well as how this functionality has changed over the years. One thing that I would point out is that, on busy systems, you might find that the value has changed between the time you run this query and the time you use the results.
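A minimal sketch of the pattern Greg describes, reusing the booking-table names from his example (the DDL itself is mine, not his):

```sql
-- One sequence shared by several booking tables.
CREATE SEQUENCE Sales.BookingId AS bigint
    START WITH 1 INCREMENT BY 1 CACHE 50;

-- Used as a default, so every table draws from the same number range.
-- Overriding is just an ordinary INSERT with an explicit value; no
-- SET IDENTITY_INSERT dance is needed.
CREATE TABLE Sales.HotelBookings
(
    BookingId bigint NOT NULL
        CONSTRAINT DF_HotelBookings_BookingId
            DEFAULT (NEXT VALUE FOR Sales.BookingId)
        PRIMARY KEY,
    HotelName nvarchar(100) NOT NULL
);

-- The most recently generated value, per the catalog view.
SELECT current_value
FROM   sys.sequences
WHERE  name = 'BookingId' AND SCHEMA_NAME(schema_id) = 'Sales';
```

As noted above, on a busy system `current_value` can be stale by the time you read it.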


Binary Search for Chronological Records in SQL Server

Andy Brownsword performs several probes:

Specifically, we’ll use a binary search approach to narrow the search range. We abuse the correlation between the clustering key and timestamp to zero in on the records, using the key for navigation, and the timestamp to guide us.

We’ll start with the first and last records as boundaries, followed by checking the timestamp at the mid-point. Depending on whether the timestamp is before or after our target point in time, the appropriate boundary is moved. This halves the key space, and the search repeats until we’ve narrowed the range sufficiently to scan a very small portion of records.

It’s a neat idea, though do watch for Andy’s warning at the end.
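The halving loop described in the excerpt can be sketched roughly like this; the table and column names are mine, and Andy's actual procedure (with its caveats) is in the post:

```sql
-- Assumes dbo.EventLog has an ever-increasing clustered key (EventId)
-- and roughly chronological CreatedAt values.
DECLARE @Target datetime2 = '2024-06-01';
DECLARE @Lo bigint, @Hi bigint, @Mid bigint, @MidTime datetime2;

SELECT @Lo = MIN(EventId), @Hi = MAX(EventId) FROM dbo.EventLog;

WHILE @Hi - @Lo > 1000   -- stop once the remaining range is cheap to scan
BEGIN
    SET @Mid = (@Lo + @Hi) / 2;

    -- One clustered index seek per probe.
    SELECT TOP (1) @MidTime = CreatedAt
    FROM   dbo.EventLog
    WHERE  EventId >= @Mid
    ORDER  BY EventId;

    -- Move the boundary on the wrong side of the target.
    IF @MidTime < @Target SET @Lo = @Mid ELSE SET @Hi = @Mid;
END;

-- Scan only the narrowed key range for the target time.
SELECT *
FROM   dbo.EventLog
WHERE  EventId BETWEEN @Lo AND @Hi
  AND  CreatedAt >= @Target;
```

Each iteration halves the key space, so even a billion-row table converges in about thirty probes.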


Rolling Average Calculation via DATE_BUCKET()

Koen Verbeeck writes some code for SQL Server 2022 or later:

In the Microsoft Fabric Warehouse, a new T-SQL function was recently added: the DATE_BUCKET function. With this function, you can group dates into pre-defined buckets. This allows you to easily calculate aggregates that use the GROUP BY clause over these buckets, greatly simplifying the T-SQL statements for analytical use cases.

Click through for a demo. Koen mentions that this is also now available in the Microsoft Fabric Warehouse. Once you know how DATE_BUCKET() works, it’s pretty powerful. But I also think that the function is a bit confusing to use.
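The shape of the calculation, as a sketch with made-up table and column names rather than Koen's demo: bucket first, then apply a window frame over the bucketed aggregates.

```sql
-- Daily buckets, then a 7-bucket rolling average over the daily totals.
WITH Daily AS
(
    SELECT DATE_BUCKET(DAY, 1, SaleDate) AS BucketDay,
           SUM(Amount)                   AS DailyTotal
    FROM   dbo.Sales
    GROUP  BY DATE_BUCKET(DAY, 1, SaleDate)
)
SELECT BucketDay,
       DailyTotal,
       AVG(DailyTotal) OVER (ORDER BY BucketDay
                             ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS Rolling7Day
FROM   Daily
ORDER  BY BucketDay;
```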


Testing Implicit Conversion and Performance in SQL Server

Louis Davidson runs some tests:

If you have ever done any performance tuning of queries in SQL Server, no doubt one of the first things you have heard is that your search argument data types need to match the columns that you are querying. Not one thing in this blog is going to dispute that. Again, the BEST case is that if your column is an nvarchar, your search string matches that column datatype. But why is this? I will do my best to make this pretty clear, especially why it doesn’t always matter.

Read on as Louis lays out the explanation.
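The core scenario, sketched with illustrative names (not Louis's test setup): data type precedence means nvarchar wins over varchar, so which side gets converted depends on which side is the column.

```sql
CREATE TABLE dbo.Customers (CustomerName varchar(100) NOT NULL);
CREATE INDEX IX_Customers_Name ON dbo.Customers (CustomerName);

-- Matching type: a straightforward index seek.
SELECT CustomerName FROM dbo.Customers WHERE CustomerName = 'Ada';

-- nvarchar value against a varchar column: the COLUMN side is implicitly
-- converted. With a Windows collation the optimizer can still seek via a
-- computed range; with an older SQL_* collation it typically forces a scan.
DECLARE @p nvarchar(100) = N'Ada';
SELECT CustomerName FROM dbo.Customers WHERE CustomerName = @p;
```

That collation-dependent behavior is a big part of why the mismatch sometimes hurts badly and sometimes barely registers.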


NTILE and Uneven Row Distribution

Jared Westover clarifies:

One of the simplest yet least-popular ranking functions in T-SQL is NTILE. It’s useful for dividing data into buckets or tiles. However, when your data isn’t evenly distributed across buckets, the results are confusing. Also, NTILE sometimes returns rows in a seemingly random order. What’s happening here?

There’s absolutely a pattern to how NTILE() works, as Jared describes.
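The pattern is easy to see with a small mocked-up rowset: when the row count doesn't divide evenly, the extra rows go to the earliest tiles.

```sql
-- 10 rows into 3 tiles: the sizes come out as 4, 3, 3.
SELECT n,
       NTILE(3) OVER (ORDER BY n) AS Tile
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS v(n);
-- Tile 1 -> rows 1-4, Tile 2 -> rows 5-7, Tile 3 -> rows 8-10.
```

Ties in the ORDER BY column are what produce the seemingly random placement Jared mentions, since tied rows can land in either adjacent tile.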


Implementing Shamir’s Secret Sharing in SQL Server

Sebastiao Pereira implements an algorithm:

Shamir’s Secret Sharing is a cryptographic algorithm that allows a secret to be split into multiple components and shared among a group in such a way that the secret can only be revealed if a minimum number of components are combined. Is it possible to have this algorithm implemented in SQL Server without using external tools?

Click through for a T-SQL implementation, as well as one using CLR.


Combining UNION and UNION ALL

Greg Low crosses the streams:

Until the other day, though, I’d never stopped to think about what happens when you mix the two operations. I certainly wouldn’t write code like that myself, but for example, without running the code (or reading further ahead yet), what would you expect the output of the following command to be? (Note: The real code read rows from a table, but I’ve mocked it up with a VALUES clause to make it easier to see the outcome.)

Read on to see what happens.
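Greg's actual example isn't reproduced here, but the rule can be sketched with mocked-up rows of my own: UNION and UNION ALL have equal precedence and evaluate top to bottom, so a later UNION de-duplicates everything accumulated so far.

```sql
-- UNION last: it dedupes the whole accumulated result.
SELECT x FROM (VALUES (1),(1)) AS a(x)
UNION ALL
SELECT x FROM (VALUES (1),(1)) AS b(x)   -- 4 rows so far
UNION
SELECT x FROM (VALUES (2)) AS c(x);      -- distinct result: 1, 2

-- UNION ALL last: it re-introduces duplicates after the dedupe.
SELECT x FROM (VALUES (1),(1)) AS a(x)
UNION
SELECT x FROM (VALUES (2)) AS b(x)       -- distinct so far: 1, 2
UNION ALL
SELECT x FROM (VALUES (1),(1)) AS c(x);  -- final result: 1, 2, 1, 1
```

The order of the operators, not just their mix, determines the output.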
