Press "Enter" to skip to content

Category: In-Memory OLTP

Checkpoints And Memory-Optimized Filegroups

Jack Li explains why, even without a memory-optimized table, you can see XTP checkpoints in your database:

What are checkpoint files?

They are data and delta files, as documented in Durability for Memory-Optimized Tables. When you use disk-based tables, the data is written to data files. Even though data is stored in memory for memory-optimized tables, SQL Server still needs to persist data for disaster recovery. Data for memory-optimized tables is stored in what we call checkpoint files. A data file contains rows from insert and update operations; a delta file contains deleted rows. Over time, these files can be ‘merged’ to increase efficiency, and the files that are no longer needed after the merge can eventually be removed (but this can only happen after a log backup).

Click through for a demo script to see this in action.
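If you just want a quick look at these files on your own system, the checkpoint-file DMV summarizes them. This is a general-purpose query of my own, not Jack's demo script, and it assumes you run it in a database that already has a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Summarize checkpoint data and delta files by type and state.
SELECT cf.file_type_desc,                         -- DATA, DELTA, etc.
       cf.state_desc,                             -- e.g. PRECREATED, UNDER CONSTRUCTION, ACTIVE
       COUNT(*)                                AS file_count,
       SUM(cf.file_size_in_bytes) / 1048576.0  AS total_size_mb
FROM sys.dm_db_xtp_checkpoint_files AS cf
GROUP BY cf.file_type_desc, cf.state_desc
ORDER BY cf.file_type_desc, cf.state_desc;
```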


Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not be more than 1 million rows.  SQL Server should not have run out of memory.

This is actually by-design behavior documented in “Memory-Optimized Table Variables”.  Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope”.  With a loop like the one above, all deleted rows will be kept and consume memory until the end of the loop.

Click through for the complete story.
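To make the behavior concrete, here is a rough sketch of the pattern Jack describes: a memory-optimized table type whose variable keeps every deleted row version until it goes out of scope. The type and column names are made up for illustration, and the database needs a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Illustrative memory-optimized table type (names are placeholders).
CREATE TYPE dbo.StagingRows AS TABLE
(
    id      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    payload CHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Insert and periodically delete in a loop.  The DELETE marks rows as deleted
-- but does not release their memory; every row version stays allocated until
-- @t goes out of scope at the end of the batch, which is how memory can run
-- out partway through the loop.
DECLARE @t dbo.StagingRows;
DECLARE @i INT = 1;

WHILE @i <= 1000000
BEGIN
    INSERT INTO @t (id, payload) VALUES (@i, 'x');

    IF @i % 10000 = 0
        DELETE FROM @t;   -- rows are marked deleted, not deallocated

    SET @i += 1;
END;
GO
```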


Memory-Optimized Hash Indexes And Bucket Counts

Jack Li relates a story about how it’s important to think through bucket count for memory-optimized hash indexes:

I was working with a customer to troubleshoot memory-optimized table issues.  In this scenario, our customer uses a memory-optimized table variable.  He put 1 million rows of data into the table variable and then processed it.  Based on what he said, I tried to come up with a repro to see if I could duplicate the problem.  While troubleshooting that issue, I ran into another issue where I couldn’t even insert 1 million rows into a memory-optimized table variable.

My rule of thumb is 30-50% more buckets than expected rows.  That way I can handle some level of unexpected growth while keeping the chance of a hash collision and a really slow linked-list scan as low as possible.  The official guidance says performance should be “acceptable” with up to 5x rows per bucket, but my experience has been that “acceptable” is a generous term at that point.
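For example (my illustration, not Jack's repro), in a database with a MEMORY_OPTIMIZED_DATA filegroup you might size the bucket count roughly 50% above the expected million rows and then sanity-check the chains afterward with the hash index stats DMV:

```sql
-- Bucket count sized ~50% above the expected 1 million rows.
-- Table and column names are illustrative.
CREATE TABLE dbo.HashDemo
(
    id      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1500000),
    payload CHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- Average and maximum chain lengths tell you whether the bucket count is adequate:
-- long chains mean too few buckets (or many duplicate key values).
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
WHERE hs.object_id = OBJECT_ID('dbo.HashDemo');
```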


Temporal Tables

Alex Grinberg has a tutorial on temporal tables, including combining temporal tables with In-Memory OLTP:

Although the process of converting an In-Memory Optimized OLTP table to a system-versioned table is similar, there are some differences that we need to cover and demonstrate in this section.

You need to be aware of some specific details when converting the in-memory optimized table to the system-versioned table.

Read on for those specifics.
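For orientation, the conversion broadly takes the shape below. This is a generic sketch with made-up names (dbo.InMemOrders), not Alex's script, so read his article for the memory-optimized specifics such as requirements on the history table:

```sql
-- Add the hidden period columns and the SYSTEM_TIME period to an existing
-- memory-optimized table (table and column names are illustrative).
ALTER TABLE dbo.InMemOrders
    ADD SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
            DEFAULT SYSUTCDATETIME(),
        SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
            DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);
GO

-- Turn on system-versioning; the history table for a memory-optimized table
-- is a regular disk-based table.
ALTER TABLE dbo.InMemOrders
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.InMemOrders_History));
GO
```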


Memory-Optimized Table Warnings

Robert Davis looks at messages in the error log related to memory-optimized tables:

The server on which we are running in-memory OLTP is a really hefty server with 128 logical cores and 1.5 TB of RAM (1.4 TB allocated to SQL Server). We are limiting in-memory’s memory usage with Resource Governor, which also makes it easy to see how much it is using. Needless to say, even a limited percentage of 1.4 TB of RAM is still a lot of memory. The highest I have seen in-memory usage for this one database reach at peak activity levels is ~43 GB. In production, when the heavy in-memory OLTP processes complete, I see the system reclaim the in-memory buffers pretty quickly, though not completely. During a normal day, I often see the in-memory memory usage hovering between 1 and 3 GB even when there is virtually no traffic.

When testing in-memory on a dev server that only I was using before deploying to production, I noticed that the memory usage would stay at whatever high level it reached. This makes me believe that in-memory buffers are cleaned up and reclaimed as needed, and if not needed, they just hang around as in-memory buffers. And it appears that some of the buffers end up hanging around. Perhaps they wouldn’t if the server was memory starved. I have not tested that theory.

It’s a conjecture, but seems pretty solid.  Also worth reiterating is that they’re warnings, not errors.
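If you want to watch the same numbers on your own system, a couple of DMVs give a quick read on XTP memory use. This is a general-purpose query, not Robert's Resource Governor setup:

```sql
-- Overall In-Memory OLTP memory from the memory clerks (instance level).
SELECT mc.type,
       mc.name,
       SUM(mc.pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks AS mc
WHERE mc.type = 'MEMORYCLERK_XTP'
GROUP BY mc.type, mc.name;

-- Per-table allocation in the current database.
SELECT OBJECT_NAME(tms.object_id) AS table_name,
       tms.memory_allocated_for_table_kb,
       tms.memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats AS tms
ORDER BY tms.memory_allocated_for_table_kb DESC;
```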


Memory-Optimized Tables In Query Store

Enrico van de Laar looks at how Query Store treats memory-optimized tables and natively compiled stored procedures:

Interestingly enough, no runtime statistics are returned for our natively compiled stored procedure. As it turns out, we need to specifically enable the collection of statistics for natively compiled stored procedures through the sp_xtp_control_query_exec_stats stored procedure. Keep in mind, though, that enabling the collection of statistics for natively compiled stored procedures can cause performance degradation. Thankfully, we can very easily control the collection of statistics by enabling it for only specific natively compiled stored procedures and turning it on and off directly.

Read the whole thing.  Long story short:  they work together nicely.
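For reference, enabling collection for a single natively compiled procedure looks roughly like this; dbo.usp_NativeProc is a placeholder name for whatever procedure you care about:

```sql
-- Enable per-procedure execution statistics collection for one
-- natively compiled stored procedure in the current database.
DECLARE @dbid  INT = DB_ID(),
        @objid INT = OBJECT_ID('dbo.usp_NativeProc');   -- placeholder name

EXEC sys.sp_xtp_control_query_exec_stats
     @new_collection_value = 1,      -- 1 = start collecting, 0 = stop
     @database_id          = @dbid,
     @xtp_object_id        = @objid;
```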


In-Memory OLTP For Reporting

Daniel Janek shows that memory-optimized tables aren’t just for OLTP scenarios:

I wouldn’t have thought that Hekaton could take my report query down from 30+ min to 3 seconds, but in the end it did. *Note that the source data is static and repopulated just twice a week. With that said, I didn’t bother looking into any limitations that “report style” queries may cause for OLTP operations. I’ll leave that to you.

With SQL Server 2016 (an important caveat), memory-optimized tables can work great for reporting scenarios.  The important factor is having enough RAM to store the data.


Memory-Optimized Transaction Logging

Raul Gonzalez looks at how transaction logging works with memory-optimized tables:

Now we need double the rows, because for each row we’ve said has been deleted, we have to tell SQL Server that it was not actually deleted (COMPENSATION due to ROLLBACK) in case of recovery (crash recovery or backup recovery). That’s so bad.

But not everything is lost yet :) let’s check how the In-Memory engine deals with this problem.

Memory-optimized tables are pretty neat.
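As a rough way to see the difference yourself, you can count log records for a rolled-back delete against a disk-based table using the undocumented fn_dblog function; a comparable rolled-back transaction against a memory-optimized table generates no log records at all, since In-Memory OLTP only writes to the log at commit time. The table name below is a placeholder:

```sql
-- Disk-based side: a rolled-back DELETE still writes log records, and the
-- rollback then writes compensation records on top of them.
BEGIN TRAN;
    DELETE FROM dbo.DiskBasedDemo;   -- placeholder disk-based table
ROLLBACK;

-- Count what landed in the log for that table.
-- fn_dblog is undocumented; use it on test systems only.
SELECT Operation,
       COUNT(*) AS log_record_count
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE '%DiskBasedDemo%'
GROUP BY Operation
ORDER BY log_record_count DESC;
```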


In-Memory Columnstore Updates

Niko Neugebauer looks at improvements in SQL Server 2016 SP1 with respect to adding columnstore indexes to memory-optimized tables:

With an incredible Service Pack 1, Microsoft has triumphantly announced that all editions (Standard, Web, Express, and even Local) will get the most advanced programming capabilities of Columnstore, In-Memory, Database Snapshot, Compression, Partitioning & many others, plus there are some incredible features for T-SQL (CREATE OR ALTER) and execution plan details (actual information on processed rows, used trace flags, etc.). But at the same time, there are some quite important improvements under the hood that will make you want to use Service Pack 1 for SQL Server 2016 immediately. One of these features is the fast addition of a Columnstore Index to Memory-Optimised tables. Let’s put it to the test by restoring a copy of the free ContosoRetailDW database:

These results look nice.
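For reference, the operation being timed is essentially the one-liner below; the table name is illustrative rather than the actual ContosoRetailDW object Niko uses:

```sql
-- Add a clustered columnstore index to an existing memory-optimized table.
-- Supported on SQL Server 2016; SP1 makes the addition notably faster.
ALTER TABLE dbo.FactSales_InMem
    ADD INDEX cci_FactSales_InMem CLUSTERED COLUMNSTORE;
```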
