Bob Dorr troubleshoots a performance problem:
> When the temporary table is bound to the user transaction it is both created and destroyed as part of the transaction. The same logic in a procedure attempts to avoid the creation and destruction, for each execution, by using temporary table caching.
>
> From the issue I was debugging, the user transaction scope mattered because creation and destruction of metadata may be an expensive operation.
This post touches on two separate things: first, how temp objects are tied to specific sessions; and second, the cost of creating and destroying temporary objects. For the latter, a few quick pieces of advice:
- Reduce the number of temporary objects you create. If you can solve a problem with fewer temp tables or table variables while maintaining acceptable performance, that can help on busy systems.
- Never explicitly drop temp tables. There's no benefit to dropping them yourself; they go away on their own as soon as the session ends. Also, not dropping temp tables is the first step to:
- Embrace temp table reuse. There are specific rules around when a temp table can be reused, and each reuse saves two metadata operations (one delete and one create); there's a sketch of a cache-friendly procedure after this list.
- Use memory-optimized table variables instead of temp tables or ordinary table variables; the second sketch below shows the setup.
- Turn on memory-optimized tempdb metadata (shown last below). The biggest issue here is that you lose cross-database queries into tempdb's system views. That can end up being painful, and it's why I can't recommend it as a general solution.
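
To make the reuse point concrete, here's a minimal sketch of a procedure whose temp table qualifies for caching. The procedure and table names are hypothetical; the comments reflect the documented caching conditions (no named constraints, no DDL against the table after creation, no dynamic SQL).

```sql
/* Hypothetical procedure whose #work table qualifies for temp table caching. */
CREATE OR ALTER PROCEDURE dbo.CacheFriendly
AS
BEGIN
    CREATE TABLE #work
    (
        id   integer NOT NULL PRIMARY KEY, /* unnamed constraint: caching still allowed */
        name sysname NOT NULL
    );

    INSERT #work (id, name)
    SELECT object_id, name
    FROM sys.objects;

    /* No DROP TABLE and no post-creation DDL here: when the procedure
       returns, the table is truncated and cached for the next execution,
       saving one metadata delete and one metadata create. */
END;
GO

/* Any of these inside the procedure would prevent caching:
     CONSTRAINT pk_work PRIMARY KEY (id)    -- a named constraint
     CREATE INDEX ix_name ON #work (name);  -- DDL after creation
     EXEC ('CREATE TABLE #work ...');       -- creation via dynamic SQL */
```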
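
For the memory-optimized table variable option, here's a rough sketch. The filegroup name, file path, and type name are placeholders; the database needs a MEMORY_OPTIMIZED_DATA filegroup once, after which variables of a memory-optimized table type live in memory in the user database and generate no tempdb metadata traffic at all.

```sql
/* One-time setup: the database needs a memory-optimized filegroup.
   Filegroup name and file path here are placeholders. */
ALTER DATABASE CURRENT
    ADD FILEGROUP memopt_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE CURRENT
    ADD FILE (NAME = 'memopt_data', FILENAME = 'C:\Data\memopt_data')
    TO FILEGROUP memopt_fg;
GO

/* A memory-optimized table type; it must have at least one index. */
CREATE TYPE dbo.WorkRows AS TABLE
(
    id   integer NOT NULL PRIMARY KEY NONCLUSTERED,
    name sysname NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

/* Variables of this type never touch tempdb. */
DECLARE @work dbo.WorkRows;

INSERT @work (id, name)
SELECT object_id, name
FROM sys.objects;

SELECT COUNT(*) AS row_count FROM @work;
```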
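
And enabling memory-optimized tempdb metadata is a single server-level setting on SQL Server 2019 and later, though it only takes effect after a service restart. The commands below follow the documented syntax; the comment about cross-database queries is the caveat from the list above.

```sql
/* Enable memory-optimized tempdb metadata (SQL Server 2019+).
   Requires a service restart to take effect. */
ALTER SERVER CONFIGURATION
    SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

/* Check whether it's active (1 = on): */
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS tempdb_metadata_memopt;

/* The catch: once it's on, a single transaction can't touch
   memory-optimized tables in more than one database, so queries
   that join tempdb's system views to objects in another database
   can fail. To back it out (again followed by a restart): */
ALTER SERVER CONFIGURATION
    SET MEMORY_OPTIMIZED TEMPDB_METADATA = OFF;
```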