Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”. Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.” With a loop like the one in the post, all deleted rows are kept and consume memory until the end of the loop.
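To make the failure mode concrete, here is a minimal T-SQL sketch of the pattern, not Jack Li's exact repro: the type name, column layout, and row counts are all hypothetical, and it assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.

-- Hypothetical memory-optimized table type; requires a database
-- with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TYPE dbo.OrderBatch AS TABLE
(
    OrderID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload CHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @Batch dbo.OrderBatch;
DECLARE @i INT = 1;

WHILE @i <= 100
BEGIN
    -- Insert a slug of rows...
    INSERT INTO @Batch (OrderID, Payload)
    SELECT @i * 1000000 + n, REPLICATE('x', 200)
    FROM (SELECT TOP (100000)
                 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
          FROM sys.all_objects a CROSS JOIN sys.all_objects b) AS nums;

    -- ...then delete them all. The row count stays small, but the
    -- memory backing the deleted rows is NOT released here; it is
    -- freed only when @Batch goes out of scope at the end of the
    -- batch, so memory use grows with every pass of the loop.
    DELETE FROM @Batch;

    SET @i += 1;
END;
GO

Had the same loop used a memory-optimized table instead of a table variable, the garbage collector could have reclaimed the deleted rows as the loop ran, which is the distinction Jack Li is drawing.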

Click through for the complete story.

Related Posts

What’s New With In-Memory OLTP In SQL Server 2019

Ned Otter gives us two things to look forward to with SQL Server 2019:

So far, there’s been only one publicly announced enhancement for In-Memory OLTP in SQL 2019: system tables in TempDB will be “Hekatonized”. This will forever solve the issue of system table contention in TempDB, which is a fantastic use of Hekaton. […]

Read More

Don’t Set Max Size For Containers In In-Memory OLTP

Ned Otter recommends you not mess with the maximum container size when creating a memory-optimized filegroup:

I recently saw a thread on twitter, where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do […]

Read More
