SQL Server memory-optimized table variables vs. temporary tables: Which is faster for temporarily loading and storing data?
Dear readers, I want to share a true story. In 2018, when I was using SQL Server 2016 in production, I had a complex stored procedure that needed to store data temporarily. I can’t remember the exact figure, but it was around 50,000 rows. At the time, I thought memory-optimized table variables were extremely fast for storing temporary data, so I used one. There was not a lot of concurrency on that stored procedure. After a while, though, I realized it was not fast enough. I investigated and found that loading data into the memory-optimized table variable was the root of the problem.
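For context, here is a minimal sketch of the two kinds of objects being compared. The type name dbo.StagingRows and its columns are invented for illustration (the original procedure's schema isn't known); note that a memory-optimized table type requires SQL Server 2014 or later, a database with a MEMORY_OPTIMIZED_DATA filegroup, and at least one index declared on the type.

```sql
-- Illustrative definitions only; names and columns are assumptions, not the original procedure's schema.

-- Memory-optimized table type: needs a MEMORY_OPTIMIZED_DATA filegroup on the
-- database and at least one index declared on the type.
CREATE TYPE dbo.StagingRows AS TABLE
(
    Id      int          NOT NULL,
    Payload varchar(200) NOT NULL,
    INDEX ix_Id NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 65536)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- The classic alternative: a local temporary table, which lives in tempdb and
-- needs no up-front type definition.
CREATE TABLE #StagingRows
(
    Id      int          NOT NULL,
    Payload varchar(200) NOT NULL
);
```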
This is where I have to say that the specific circumstances are going to play a large role. If the number of rows is smaller, that can change the outcome. If the action you are performing with the temporary object is more complex, that can change the outcome (generally in favor of temp tables). If you have very high concurrency, with hundreds of users hitting variants of this object throughout the day, that can change the outcome (generally in favor of the memory-optimized table type).
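If you want to see how the load itself behaves in your own environment, a rough single-session timing sketch like the one below is a reasonable starting point. It assumes the dbo.StagingRows type from the earlier sketch exists, generates 50,000 throwaway rows, and deliberately ignores concurrency, which only a multi-session test can reveal.

```sql
-- Rough, single-session timing comparison: load 50,000 generated rows into each object.
-- Results vary by hardware and workload; this ignores concurrency entirely.
DECLARE @t0 datetime2 = SYSDATETIME();

CREATE TABLE #stage (Id int NOT NULL, Payload varchar(200) NOT NULL);

INSERT INTO #stage (Id, Payload)
SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS temp_table_ms;

DECLARE @t1 datetime2 = SYSDATETIME();

DECLARE @stage dbo.StagingRows;   -- memory-optimized table type from the sketch above

INSERT INTO @stage (Id, Payload)
SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

SELECT DATEDIFF(MILLISECOND, @t1, SYSDATETIME()) AS memory_optimized_tv_ms;

DROP TABLE #stage;
```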
If you try to pin me down on specific advice, I’d say to start with temp tables and only switch to memory-optimized table types once you have proof of table creation (or re-use) contention in tempdb tied to that specific object. Unlike most things in In-Memory OLTP, though, memory-optimized table types are an area where it’s reasonably straightforward to find a great use case.
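As for what “proof of contention in tempdb” might look like: a minimal sketch is below. It samples sys.dm_os_waiting_tasks for PAGELATCH waits on tempdb pages (database_id 2); seeing these persistently under load, tied to your object, is the kind of evidence meant here. This is a point-in-time snapshot, not a monitoring solution, and the filter is an assumption about what you would look for.

```sql
-- Point-in-time check for page-latch contention in tempdb (database_id = 2).
-- resource_description is db_id:file_id:page_id; repeated waits on allocation
-- pages (PFS/GAM/SGAM) or on the same object's pages under load are the signal.
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE N'PAGELATCH_%'
  AND wt.resource_description LIKE N'2:%';
```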