Press "Enter" to skip to content

Category: Temp Tables

Comparing Memory-Optimized Table Variables to Temp Tables

Mehdi Ghapanvari runs a test:

SQL Server memory-optimized table variables and temporary tables: which is faster for temporarily loading and storing data?

Dear readers, I want to share a true story. In 2018, when I was using SQL Server 2016 in production, I had a complex stored procedure where I needed to store data temporarily. I can’t remember exactly, but I needed to store, say, 50,000 rows temporarily. At that time, I thought memory-optimized table variables were extremely fast for storing temporary data. So, I used them. There was not a lot of concurrency on that stored procedure. However, after a while, I realized that it was not fast enough. I investigated to find the root of the issue and found that loading data into the memory-optimized table variable was causing the problem.

Here is where I say that the specific circumstances are going to play a large role in this. If the number of rows is smaller, that can change the outcome. If the action you are performing with the temporary object is more complex, that can change the outcome (generally in favor of temp tables). If you have very high concurrency, with hundreds of users trying to access variants of this object at all times throughout the day, that can change the outcome (generally in favor of the memory-optimized table type).

If you try to pin me down on specific advice, I’d say to start with temp tables and only switch to memory-optimized table types after you have proof of table creation (or re-use) contention in tempdb tied to that specific object. But unlike most situations with In-Memory OLTP, it’s reasonably straightforward to find a great use case for memory-optimized table types.
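
If you haven’t worked with the memory-optimized variety, here is a minimal sketch of the two options under comparison (object names are my own invention). Note that the memory-optimized table type requires the database to have a MEMORY_OPTIMIZED_DATA filegroup.

    -- Classic temp table: lives in tempdb, supports statistics, easy to index.
    CREATE TABLE #StagingRows
    (
        Id      INT          NOT NULL PRIMARY KEY,
        Payload VARCHAR(100) NOT NULL
    );

    -- Memory-optimized table variable: needs a table type defined up front.
    CREATE TYPE dbo.StagingRowsType AS TABLE
    (
        Id      INT          NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload VARCHAR(100) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON);

    DECLARE @StagingRows dbo.StagingRowsType;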


Temp Table Bugs in Microsoft Fabric Warehouses

Jared Westover runs into a wall:

I was excited when Microsoft announced the ability to create session-scoped temporary tables in a Fabric warehouse. However, after using Microsoft Fabric temporary tables, I quickly felt disappointed. When will they be ready for prime time, and in the meantime, what other options are available?

Click through for Jared’s experience, although it might already be fixed.
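
For context, the feature Jared is describing uses the familiar #-prefixed syntax from SQL Server; a minimal sketch (table and column names are made up):

    -- Session-scoped temporary table in a Fabric warehouse.
    CREATE TABLE #OrderStaging
    (
        OrderId   BIGINT,
        OrderDate DATE
    );

    -- The table goes away on its own when the session ends.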


A Performance Comparison of Temp Tables and Table Variables

Mehdi Ghapanvari compares two ways of holding temporary data:

Do you ever wonder why your query is slow when you use a table variable in a join operation? If you are interested in why this happens and how to improve query performance in such a scenario, keep reading to learn more.

Click through for the full article. This focuses entirely on classic table variables and not memory-optimized table variables, the latter of which can significantly improve performance in specific circumstances.
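
The usual culprit here (and my assumption about where the article goes) is cardinality estimation: classic table variables carry no statistics, so the optimizer tends to guess a very small row count for them. A rough sketch of the two patterns, with invented object names:

    -- Table variable: low cardinality estimate, which can push the optimizer
    -- toward nested loops even when the variable holds many rows.
    DECLARE @Ids TABLE (Id INT PRIMARY KEY);
    INSERT INTO @Ids (Id)
    SELECT OrderId FROM dbo.Orders WHERE OrderDate >= '20240101';

    SELECT o.*
    FROM dbo.Orders AS o
    JOIN @Ids AS i ON i.Id = o.OrderId;

    -- Temp table: gets statistics, so the join estimate tracks the real row count.
    CREATE TABLE #Ids (Id INT PRIMARY KEY);
    INSERT INTO #Ids (Id)
    SELECT OrderId FROM dbo.Orders WHERE OrderDate >= '20240101';

    SELECT o.*
    FROM dbo.Orders AS o
    JOIN #Ids AS i ON i.Id = o.OrderId;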


Discerning the Value of an Empty Table

Andy Levy performs some database archaeology:

It seems like no matter how long you work with a system beyond a trivial size, you’ll find something new every so often. A little while ago, I found a table with about a dozen columns, no data, and only one reference across an entire database hosting nearly 1000 tables and over 8000 stored procedures. Why does this thing even exist?

To protect the innocent(ish?), obviously I’m changing the names of everything here. I was looking into some performance issues and found a very short stored procedure being called from another stored procedure which opened with this query:

Click through for the query and why it existed in the first place.
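
If you want to do this kind of spelunking yourself, a quick-and-dirty way to find references to a table across a database’s modules is a string search against sys.sql_modules (the table name below is a placeholder, not Andy’s):

    -- Crude reference search: modules whose definitions mention the table name.
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS module_name
    FROM sys.sql_modules AS m
    WHERE m.definition LIKE '%MysteryTable%';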


Temp Table Performance in PostgreSQL

Jobin Augustine shares a warning:

PostgreSQL is one of the most powerful database systems in the world. I have always been passionate about its great power, especially its modern SQL language features.

However, that doesn’t mean everything is great. There are areas where it hurts. Novice users unaware of the problem might get into trouble, and I see such things happen very often. This is a note for those who design their solutions around PostgreSQL using temporary tables.

This is very interesting to read, especially in comparison to SQL Server. It’s another example of the adage that just because something works a certain way on one platform doesn’t mean it will work the same way on another.
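
For SQL Server folks who haven’t touched PostgreSQL, the syntax itself looks familiar enough. The sketch below is mine, not Jobin’s, and differences like the lack of temp table caching and the fresh catalog entries written on every creation are, I assume, the sort of thing his post digs into:

    -- PostgreSQL: session-scoped temporary table, dropped automatically at session end.
    CREATE TEMPORARY TABLE staging_rows
    (
        id      integer PRIMARY KEY,
        payload text
    );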


Understanding Worktables in SQL Server

Steve Stedman takes a peek at tempdb:

A worktable in SQL Server is a temporary structure that the SQL Server Database Engine uses to process certain types of queries. These tables are not explicitly created by users but are generated by SQL Server internally to handle specific operations that cannot be managed directly within memory. Worktables are stored in the tempdb database and are crucial for facilitating complex query execution plans.

Read on for examples of when SQL Server will use worktables and some good ideas when you spot worktables in the wild. They’re not inherently bad, but there are some performance problems you could experience around them.
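
If you’d like to see one in action, a recursive CTE is an easy way to produce a worktable (this example is mine, not from Steve’s post); with statistics IO on, the output includes reads against Table 'Worktable':

    SET STATISTICS IO ON;

    -- Recursive CTEs are processed through a worktable in tempdb.
    WITH Numbers AS
    (
        SELECT 1 AS n
        UNION ALL
        SELECT n + 1 FROM Numbers WHERE n < 1000
    )
    SELECT COUNT(*) AS number_count FROM Numbers
    OPTION (MAXRECURSION 1000);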


Thoughts on Temp Tables in Stored Procedures

Erik Darling has preferences:

I probably overuse temp tables. I’m the first one to admit it. But often when I choose to use one, it’s a choice between:

  • Spending a long time trying to understand business logic for a company I don’t work for
  • Tinkering with query syntax and using really weird tricks and hints to get different plan shapes
  • Waiting a long time to create or tweak indexes on huge tables (usually on “dev” hardware)

Erik provides a good example of where a temp table fits well, contrasting it to a common table expression that fares poorly. My bias tends to come in the opposite direction from Erik’s: I’m susceptible to playing query golf and getting everything I need back in one optimized call. But part of craftsmanship is knowing how and when to use each tool.
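
As a generic sketch of the kind of pattern in play here (not Erik’s example; object names are made up), the temp table route materializes the intermediate result so the optimizer deals with two simple queries instead of one complicated one:

    -- Materialize the intermediate result...
    SELECT OrderId, CustomerId
    INTO #RecentOrders
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

    -- ...optionally index it...
    CREATE CLUSTERED INDEX cx_RecentOrders ON #RecentOrders (CustomerId);

    -- ...then join against it, instead of inlining the whole thing as a CTE.
    SELECT c.CustomerName, r.OrderId
    FROM #RecentOrders AS r
    JOIN dbo.Customers AS c ON c.CustomerId = r.CustomerId;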


TempDB Improvements in SQL Server Versions

Jared Poche looks at the recent past:

Tempdb contention has long been an issue in SQL Server, and there are many blogs on the issue already. But I wanted to add one more, mainly to highlight the improvements in recent versions of SQL Server.

Tempdb contention is most often discussed as it relates to the creation of temp tables (and other objects) in tempdb. If you are experiencing this, you will see PAGELATCH_EX or PAGELATCH_SH waits, frequently with wait resources like 2:1:1 or 2:1:3. This indicates contention in database 2 (tempdb), file 1 (the first data file in tempdb), and one of the PFS, GAM, or SGAM pages (which are pages 1, 2, and 3, respectively). Tempdb files of sufficient size will have additional PFS, GAM, and SGAM pages at higher page numbers, but 1 and 3 are the pages most often referenced.

Read on to see what the SQL Server team has done over three of the past four versions of SQL Server to limit tempdb contention.
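
If you want to check whether this is biting you right now, one hedged starting point (my query, not from Jared’s post) is to look for page latch waits whose resource points at tempdb:

    -- Sessions currently waiting on page latches in tempdb (database_id = 2).
    -- Resource descriptions such as 2:1:1 or 2:1:3 point at PFS/SGAM pages.
    SELECT wt.session_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.resource_description
    FROM sys.dm_os_waiting_tasks AS wt
    WHERE wt.wait_type LIKE 'PAGELATCH%'
      AND wt.resource_description LIKE '2:%';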


The User Transaction Scope for Temporary Objects

Bob Dorr troubleshoots a performance problem:

When the temporary table is bound to the user transaction, it is both created and destroyed as part of the transaction. The same logic in a procedure attempts to avoid the creation and destruction for each execution by using temporary table caching.

From the issue I was debugging, the user transaction scope mattered because creation and destruction of metadata may be an expensive operation.

This post ties into two separate things: first, how temp objects tie to specific sessions; and second, the cost of creating and destroying temporary objects. For the latter, a couple quick pieces of advice:

  • Reduce the number of temporary objects you create. If you can solve a problem with fewer temp tables or table variables while maintaining acceptable performance, that can help on busy systems.
  • Never explicitly drop temp tables. There’s no benefit to explicitly dropping temp tables, as they’ll go away as soon as the session ends. Also, not dropping temp tables is the first step to:
  • Embrace temp table reuse. There are specific rules around when you can re-use a temp table (there’s a sketch of a cache-friendly pattern after this list). Each re-use of a temp table means two fewer metadata operations (one delete and one create).
  • Use memory-optimized table variables instead of temp tables or table variables.
  • Turn on memory-optimized tempdb metadata. The biggest issue here is that you lose cross-database queries into tempdb views. That can end up being painful and is why I can’t recommend it as a general solution.
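
A couple of these are easier to see in code. Below is a minimal sketch (my own, with invented names): a procedure whose temp table stays cache-friendly because it uses only inline, unnamed constraints and no DDL after creation, plus the SQL Server 2019+ switch for memory-optimized tempdb metadata, which requires a restart to take effect.

    -- Cache-friendly temp table inside a procedure: inline, unnamed constraints
    -- and no ALTER or explicit DROP afterward.
    CREATE OR ALTER PROCEDURE dbo.LoadStagingRows
    AS
    BEGIN
        CREATE TABLE #Staging
        (
            Id      INT          NOT NULL PRIMARY KEY,
            Payload VARCHAR(100) NOT NULL
        );

        -- ... work with #Staging here ...
    END;
    GO

    -- SQL Server 2019 and later: memory-optimized tempdb metadata.
    ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;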
