TL;DR – Update to the latest CU, create multiple tempdb files, if you’re on SQL 2014 or earlier enable TF 1117 and 1118, if you’re on SQL 2016 enable TF 3427.
And now it’s time for everyone’s favorite SQL Server topic – tempdb! In this article, I’d like to cover some recent changes that you may not be aware of that can help alleviate some common performance issues for systems that have a very heavy tempdb workload. We’re going to cover three different scenarios here:
There’s some good information in here so don’t just say tl;dr.
That’s what it looks like when your system is under heavy contention due to a lot of queries partying in table variables: in each second, each core on this system is spending 48 seconds waiting on PAGELATCH_UP. That’s awesome.
What about temp tables instead, you ask? Same symptoms: PAGELATCH_UP.
Read the whole thing.
Inserts and other modifications to table variables can’t be parallelized. This is a product limitation, and the execution plan XML warns us about it.
The select could go parallel if the cardinality estimate were more accurate. This could potentially be addressed with a recompile hint, or with Trace Flag 2453.
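As a minimal sketch of the idea (table and column names here are illustrative), a statement-level recompile lets the optimizer see the actual row count in the table variable instead of the fixed compile-time guess, which is similar in spirit to what Trace Flag 2453 enables:

```sql
-- A table variable's cardinality is guessed at compile time
-- (1 row, or 100 on newer compatibility levels), regardless of contents.
DECLARE @t TABLE (id int PRIMARY KEY);

INSERT @t (id)
SELECT object_id FROM sys.objects;  -- could be thousands of rows

-- Without a hint, the plan is built for the fixed estimate and may stay serial.
SELECT o.name
FROM sys.objects AS o
JOIN @t AS t ON t.id = o.object_id
OPTION (RECOMPILE);  -- recompile with the actual row count in @t,
                     -- so a parallel plan becomes possible for the select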
Click through to see an example of what Erik means.
When people think about Heaps and the problems they can cause, they don’t often think of temp tables and table variables as Heaps.
Of course, without a clustered index, any table is a Heap.
This isn’t an argument for or against indexing temp tables, but while working with a client we came across something strange!
sp_BlitzFirst was reporting hundreds of thousands of Forwarded Records, but there were no Heaps in user databases.
When we dug in closer to what queries were doing, we found lots of places where temp tables had an insert/update pattern.
Click through for a demonstration and an explanation of why this can be trouble.
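A minimal repro of the pattern, with illustrative names: rows are inserted narrow into a temp table heap, then updated to be wider, so they no longer fit on their original pages and the heap leaves forwarding pointers behind.

```sql
CREATE TABLE #orders (id int, note varchar(4000));  -- no clustered index: a heap

INSERT #orders (id, note)
SELECT n, 'x'                       -- rows start out narrow
FROM (VALUES (1), (2), (3)) AS v(n);

UPDATE #orders
SET note = REPLICATE('y', 2000);    -- rows widen and get moved,
                                    -- leaving forwarded records behind

-- The damage shows up here:
SELECT forwarded_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('tempdb'), OBJECT_ID('tempdb..#orders'), 0, NULL, 'DETAILED');
```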
While temp tables are a good option for in-flight data transformation, there are some unique challenges that arise when using temp tables in SSIS.
SQL Server Integration Services uses tight metadata binding for data flow operations. This means that when you connect to a relational database, flat file, or other structure in an SSIS data flow, the SSIS design-time and runtime tools will check those data connections to validate that they exist and that the metadata has not changed. This tight binding is by design, to avoid potential runtime issues arising from unexpected changes to the source or destination metadata.
Because of this metadata validation process, temp tables present a challenge to the SSIS data flow. Since temp tables exist only for the duration of the session(s) using them, it is likely that one of these tables created in a previous step in an SSIS package may not be present when validation needs to occur. During the design of the package (or even worse, when you execute the deployed package in a scheduled process), you could find yourself staring at an “object not found” error message.
It’s good to have alternatives, though there are times when you really just need a temp table.
You should be able to create a #temp in every session. That’s the idea, right? It’s one of the things that differentiates a global temp table from a local temp table. But there can be some difficulties with that.
If you are working with reusable code that uses temp tables (a stored procedure, for example), sometimes you need to create a constraint. The thing about constraints is that their names must be unique, just like the names of tables, stored procedures, and so on: a given constraint name can only be used once. You can’t have two tables with the same constraint name. In fact, you can’t even have a constraint whose name matches that of a table or stored procedure.
There’s some solid advice in this post.
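A short sketch of the problem (table and constraint names are illustrative): SQL Server makes each session’s copy of a local temp table unique behind the scenes, but an explicitly named constraint is not uniquified, so two sessions running the same procedure can collide.

```sql
-- Risky in shared code: two sessions running this concurrently can fail,
-- because the explicit constraint name must be unique within tempdb.
CREATE TABLE #stage (id int CONSTRAINT PK_stage PRIMARY KEY);

-- Safer: leave the constraint unnamed and let SQL Server generate
-- a unique system name for each session's copy.
CREATE TABLE #stage2 (id int PRIMARY KEY);
```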
I was recently reviewing a newly created T-SQL stored procedure. This procedure was verifying temporary table existence with the following code:
IF exists (select * from tempdb..sysobjects where name like '#fg%') DROP TABLE #fg
Seeing this takes me back to one of my favorite presentations, in which I compare temporary tables and table variables. In it, I go over several of the methods I have seen in code found on the internet for performing this task… and I show why they are all doing it wrong.
Read on to understand why this isn’t the correct answer.
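The pattern above matches on a name prefix in `tempdb..sysobjects`, which can match other sessions’ temp tables (and other objects) rather than this session’s own. A sketch of more reliable existence checks:

```sql
-- Resolves #fg for the current session only:
IF OBJECT_ID('tempdb..#fg') IS NOT NULL
    DROP TABLE #fg;

-- Or, on SQL Server 2016 and later:
DROP TABLE IF EXISTS #fg;
```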
We store the results in a temporary table first.
Don’t worry, that’s not the end of the post. That’s not even the point of this post. It is, however, what I was doing when I came across a weird error.
Let me show you!
INSERT EXEC failed because the stored procedure altered the schema of the target table.
He wasn’t really trying to alter the schema of that temp table, but read on to see the context and solution.
So what happened to cause this inconsistent growth and how should I fix it?
Tempdb is used (and can grow) when users explicitly create objects like temp tables, table variables, cursors, or stored procedures. Tempdb is also used when the database engine creates work tables for sorts or spools, and for row versioning operations.
It’s often difficult to diagnose what happened on a SQL Server after the fact. There are many resources online to determine what is currently using tempdb but not a lot of information from processes that may have run days ago.
Read on for some tips to help manage tempdb growth better.
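For the “what is using tempdb right now” side of the problem, one commonly used starting point is the file space usage DMV; this sketch breaks current tempdb consumption down by category:

```sql
-- Point-in-time view of what is consuming tempdb space, in KB.
SELECT SUM(user_object_reserved_page_count) * 8     AS user_objects_kb,     -- temp tables, table variables
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb, -- sorts, spools, work tables
       SUM(version_store_reserved_page_count) * 8   AS version_store_kb     -- row versioning
FROM tempdb.sys.dm_db_file_space_usage;
```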
Azure SQL Database has added something called database-scoped global temporary tables.
Azure SQL Database supports global temporary tables that are also stored in tempdb and scoped to the database level. This means that global temporary tables are shared for all users’ sessions within the same Azure SQL database. User sessions from other Azure SQL databases cannot access global temporary tables.
The way you add these is by using a double pound sign.
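A minimal sketch (names are illustrative): the double pound sign makes the table visible beyond the creating session — instance-wide on SQL Server, database-scoped on Azure SQL Database.

```sql
-- ## makes this a global temp table, visible to other sessions in scope.
CREATE TABLE ##shared_results (id int, val varchar(50));

INSERT ##shared_results VALUES (1, 'visible to other sessions');
-- The table is dropped once the creating session ends and
-- no other session is still referencing it.
```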
Click through for more. I will occasionally use a global temp table, mostly in conjunction with Central Management Server queries going over a set of databases on each instance, but I limit my usage to manual operations and nothing as part of a normal application or administrative process.