Recovery Without Write-Ahead Logging

Kendra Little thinks about scenarios in which SQL Server is not able to do write-ahead logging (like with memory-optimized objects):

A few episodes ago, I talked about how learning about Write Ahead Logging was a light bulb moment for me, and helped me learn tons of concepts about backups and recovery. This week, we talk about when SQL Server turns things upside down and doesn't use write ahead logging, and what it has to do for recovery in these special cases.

Watch this week's 24-minute video. Subscribe to my YouTube channel, or check out the audio podcast to listen anywhere, anytime. Links from this episode are below the video.

Click through for the video.

Checkpoints And Memory-Optimized Filegroups

Jack Li explains why, even without a memory-optimized table, you can see XTP checkpoints in your database:

What are checkpoint files?

They are data and delta files, as documented in Durability for Memory-Optimized Tables. When you use disk-based tables, the data is written to data files. Even though data is stored in memory for memory-optimized tables, SQL Server still needs to persist data for disaster recovery. Data for memory-optimized tables is stored in what we call checkpoint files. The data file contains rows from insert and update operations; the delta file contains deleted rows. Over time, these files can be 'merged' to increase efficiency. Unneeded files after the merge can eventually be removed (but this can only happen after a log backup).

Click through for a demo script to see this in action.
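If you'd rather poke at this on a test instance first, a minimal sketch looks something like the following (the database, filegroup, and container names are mine, not Jack's):

-- Add a memory-optimized filegroup and a container to a test database.
ALTER DATABASE TestDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TestDb
    ADD FILE (NAME = N'imoltp_container', FILENAME = N'C:\Data\imoltp_container')
    TO FILEGROUP imoltp_fg;

-- Checkpoint (data/delta) file pairs show up in this DMV once the filegroup
-- exists, even before you create a memory-optimized table, which is the
-- behavior Jack describes.
SELECT file_type_desc, state_desc, COUNT(*) AS file_count
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc;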

Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in "Memory-Optimized Table Variables". Here is what it states: "Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope." With a loop like the one above, all deleted rows will be kept and consume memory until the end of the loop.

Click through for the complete story.
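Here's a rough sketch of the pattern in question, assuming a database that already has a memory-optimized filegroup (the type name and row counts are mine, not Jack's):

-- A memory-optimized table type, sized for roughly a million rows.
CREATE TYPE dbo.StagingRows AS TABLE
(
    Id      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload CHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @t dbo.StagingRows;
DECLARE @i INT = 0;
WHILE @i < 10
BEGIN
    -- Load ~100,000 rows per iteration.
    INSERT @t (Id, Payload)
    SELECT TOP (100000)
           @i * 100000 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
           'x'
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

    -- The rows are deleted, but their memory is NOT freed here.
    DELETE @t;
    SET @i += 1;
END;
-- Memory for every deleted row is only released when @t goes out of scope
-- at the end of the batch, which is the behavior described above.
GO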

Memory-Optimized Hash Indexes And Bucket Counts

Jack Li relates a story about how it’s important to think through bucket count for memory-optimized hash indexes:

I was working with a customer to troubleshoot memory-optimized table issues. In this scenario, our customer uses a memory-optimized table variable. He put 1 million rows of data into the table variable and then processed it. Based on what he said, I tried to come up with a repro to see if I could duplicate the problem. While troubleshooting that issue, I ran into another issue where I couldn't even insert 1 million rows into a memory-optimized table variable.

My rule of thumb is 30-50% more buckets than expected rows. That way I can handle some level of unexpected growth while keeping the chance of a hash collision, and the resulting slow linked-list scan, as low as possible. The official guidance says performance should be "acceptable" with up to 5x rows per bucket, but my experience has been that "acceptable" is a generous term at that point.
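As a sketch of that rule of thumb (table and column names are made up): for a million expected rows, declare something like 1.3-1.5 million buckets, then sanity-check chain lengths after loading data.

CREATE TABLE dbo.OrdersInMem
(
    OrderId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1500000),
    CustomerId INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- After loading, long average chains or very few empty buckets mean the
-- bucket count was set too low.
SELECT OBJECT_NAME(object_id) AS table_name,
       total_bucket_count,
       empty_bucket_count,
       avg_chain_length,
       max_chain_length
FROM sys.dm_db_xtp_hash_index_stats;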

Temporal Tables

Alex Grinberg has a tutorial on temporal tables, including combining temporal tables with In-Memory OLTP:

Although the process of converting an In-Memory Optimized OLTP table to a system-versioned table is similar, there are some differences that we need to cover and demonstrate in this section.

You need to be aware of some specific details when converting an in-memory optimized table to a system-versioned table.

Read on for those specifics.
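For reference, a system-versioned memory-optimized table looks roughly like this (my own names, not Alex's; the history table itself is disk-based, and the in-memory table has to be durable):

CREATE TABLE dbo.ProductPrice
(
    ProductId INT   NOT NULL PRIMARY KEY NONCLUSTERED,
    Price     MONEY NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA,   -- SCHEMA_ONLY tables can't be system-versioned
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductPriceHistory)
);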

Memory-Optimized Table Warnings

Robert Davis looks at messages in the error log related to memory-optimized tables:

The server on which we are running in-memory OLTP is a really hefty server with 128 logical cores and 1.5 TB of RAM (1.4 TB allocated to SQL Server). We are limiting in-memory's memory usage with Resource Governor, which also makes it easy to see how much it is using. Needless to say, even a limited percentage of 1.4 TB of RAM is still a lot of memory. The highest I have seen in-memory usage for this one database reach at peak activity levels is ~43 GB. In production, when the heavy in-memory OLTP processes complete, I see the system reclaim the in-memory buffers pretty quickly, though not completely. During a normal day, I often see the in-memory memory usage hovering between 1 and 3 GB, even when there is virtually no traffic.

When testing in-memory on a dev server that only I was using before deploying to production, I noticed that the memory usage would stay at whatever high level it reached. This makes me believe that in-memory buffers are cleaned up and reclaimed as needed, and if not needed, they just hang around as in-memory buffers. And it appears that some of the buffers end up hanging around. Perhaps they wouldn’t if the server was memory starved. I have not tested that theory.

It’s a conjecture, but seems pretty solid.  Also worth reiterating is that they’re warnings, not errors.
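The setup Robert describes, limiting In-Memory OLTP through Resource Governor and then watching per-table consumption, looks roughly like this (pool name, database name, and percentages are placeholders, not his actual values):

-- Create a pool and bind the database's memory-optimized objects to it.
CREATE RESOURCE POOL InMemPool
    WITH (MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

EXEC sys.sp_xtp_bind_db_resource_pool
     @database_name = N'MyAppDb',
     @pool_name     = N'InMemPool';

-- The binding only takes effect after the database comes back online.
ALTER DATABASE MyAppDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyAppDb SET ONLINE;

-- Per-table memory consumption for memory-optimized objects (run this
-- in the context of the user database):
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;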

Memory-Optimized Tables In Query Store

Enrico van de Laar looks at how Query Store treats memory-optimized tables and natively compiled stored procedures:

Interestingly enough, no runtime statistics are returned for our natively compiled stored procedure. As it turns out, we need to specifically enable the collection of statistics for natively compiled stored procedures through the sp_xtp_control_query_exec_stats stored procedure. Keep in mind, though, that enabling the collection of statistics for natively compiled stored procedures can cause performance degradation. Thankfully, we can very easily control the collection of statistics by enabling it for only specific natively compiled stored procedures and turning it on and off directly.

Read the whole thing.  Long story short:  they work together nicely.
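The switch Enrico mentions is per procedure, something like this (database and procedure names are hypothetical; the call requires sysadmin):

DECLARE @dbid  INT = DB_ID(N'MyAppDb'),
        @objid INT = OBJECT_ID(N'MyAppDb.dbo.usp_InsertOrder');

-- Start collecting runtime statistics for just this natively compiled procedure.
EXEC sys.sp_xtp_control_query_exec_stats
     @new_collection_value = 1,
     @database_id          = @dbid,
     @xtp_object_id        = @objid;

-- ...and turn it back off once you have what you need, since collection
-- has a performance cost.
EXEC sys.sp_xtp_control_query_exec_stats
     @new_collection_value = 0,
     @database_id          = @dbid,
     @xtp_object_id        = @objid;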

In-Memory OLTP For Reporting

Daniel Janek shows that memory-optimized tables aren’t just for OLTP scenarios:

I wouldn't have thought that Hekaton could take my report query down from 30+ minutes to 3 seconds, but in the end it did. *Note that the source data is static and repopulated just twice a week. With that said, I didn't bother looking into any limitations that "report style" queries may cause for OLTP operations. I'll leave that to you.

With SQL Server 2016 (an important caveat), memory-optimized tables can work great for reporting scenarios.  The important factor is having enough RAM to store the data.
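One option SQL Server 2016 adds for this kind of workload (not necessarily what Daniel used; the table name is mine) is a clustered columnstore index on the memory-optimized table, so aggregate-heavy report queries scan column segments instead of rows:

CREATE TABLE dbo.SalesFact
(
    SaleId   BIGINT NOT NULL PRIMARY KEY NONCLUSTERED,
    SaleDate DATE   NOT NULL,
    Amount   MONEY  NOT NULL,
    INDEX ccsi CLUSTERED COLUMNSTORE   -- covers all columns of the table
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Report-style aggregation over the columnstore:
SELECT SaleDate, SUM(Amount) AS daily_total
FROM dbo.SalesFact
GROUP BY SaleDate;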

Memory-Optimized Transaction Logging

Raul Gonzalez looks at how transaction logging works with memory-optimized tables:

Now we need double the rows, because for each row we've marked as deleted, we have to tell SQL Server that it was not actually deleted (COMPENSATION due to ROLLBACK) in case of recovery (crash recovery or backup recovery). That's so bad.

But not everything is lost yet :) Let's check how the In-Memory engine deals with this problem.

Memory-optimized tables are pretty neat.
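If you want to see the difference yourself, a rough sketch (test database only, since fn_dblog is undocumented, and the table names here are placeholders) is to roll back the same insert against a disk-based and a memory-optimized table and count what lands in the log:

BEGIN TRAN;
INSERT dbo.OrdersDiskBased (OrderId)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;
ROLLBACK;   -- disk-based: the inserts plus COMPENSATION records get logged

SELECT COUNT(*) AS disk_based_log_records
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE '%OrdersDiskBased%';

BEGIN TRAN;
INSERT dbo.OrdersInMem WITH (SNAPSHOT) (OrderId)   -- hint needed inside an explicit transaction
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;
ROLLBACK;   -- memory-optimized: nothing is logged until commit, so the rollback writes essentially nothing

SELECT COUNT(*) AS in_memory_log_records
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation LIKE '%HK%';   -- Hekaton log records use LOP_HK operations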

Non-Clustered, Memory-Optimized Indexes

Raul Gonzalez takes a look at non-clustered indexes on memory-optimized tables:

Wow, what happened there? This is something new I wasn’t expecting.

The first query matches one of our expected query plans, "Index Seek + Lookup", but just an "Index Seek" doesn't make sense... or does it?

Read on for the answer.
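For context, the setup behind that experiment is a plain range (nonclustered) index on a memory-optimized table, roughly like this (my names, not Raul's):

CREATE TABLE dbo.UsersInMem
(
    UserId    INT          NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    LastName  NVARCHAR(50) NOT NULL,
    CreatedOn DATETIME2    NOT NULL,
    INDEX ix_LastName NONCLUSTERED (LastName)   -- Bw-tree range index
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A range predicate can seek on the nonclustered index; an equality lookup
-- on UserId would use the hash index instead.
SELECT UserId, LastName
FROM dbo.UsersInMem
WHERE LastName >= N'M' AND LastName < N'N';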
