The Difficulties Of Memory-Optimized Tables

Michael J. Swart relays a cautionary tale about using In-Memory OLTP:

We’re leaving the feature behind for a few reasons. There’s an assumption we relied on for the sardine servers: databases that contain no data and serve no activity should not require significant resources like disk space or memory. However, when we turned on In-Memory OLTP by adding the filegroup for the memory-optimized data, we found that the database began consuming memory and disk (about 2 GB of disk per database). This required extra resources for the sardine servers. So, for example, 1,000 databases × 2 GB = 2 TB for a server that should be empty.

Another reason is that checkpoints began to take longer. Checkpoints are not guaranteed to be quick, but on small systems they take a while, which impacts some of our Continuous Integration workflows.
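
For context, turning the feature on is just a matter of adding a special filegroup and a container to the database. Here is a minimal sketch of that step, using a hypothetical SardineDb database and file path:

-- Adding a MEMORY_OPTIMIZED_DATA filegroup is the step the quote
-- describes; once it exists, SQL Server begins reserving memory and
-- precreating checkpoint files on disk for this database.
ALTER DATABASE SardineDb
    ADD FILEGROUP SardineDb_imoltp CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" for a memory-optimized filegroup is really a folder
-- (a container), which holds the checkpoint file pairs.
ALTER DATABASE SardineDb
    ADD FILE (NAME = N'SardineDb_imoltp_container',
              FILENAME = N'D:\Data\SardineDb_imoltp_container')
    TO FILEGROUP SardineDb_imoltp;

Note that this is a one-way door: you cannot drop a memory-optimized filegroup once it has been added, short of dropping the database.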

Read the whole thing.  This technology definitely does not fit all use cases, and there are some painful limitations.  If it does fit, however, you’ll wonder how you lived without it.
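
If you are curious where those 2 GB per database go, you can look at the checkpoint file pairs SQL Server precreates. A minimal query sketch, assuming SQL Server 2016 or later and run inside the database with the memory-optimized filegroup:

-- Summarize the In-Memory OLTP checkpoint files by state and type.
-- Even on a database with no memory-optimized data, expect a set of
-- PRECREATED data and delta files consuming real disk space.
SELECT  state_desc,
        file_type_desc,
        COUNT(*) AS file_count,
        SUM(file_size_in_bytes) / 1048576 AS total_size_mb
FROM    sys.dm_db_xtp_checkpoint_files
GROUP BY state_desc, file_type_desc
ORDER BY state_desc, file_type_desc;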

