Category: In-Memory OLTP

Finding Candidates For Memory-Optimized Tables

Ned Otter points out a very interesting report in SSMS 2016 and 2017, which helps you determine if you should migrate a table to be memory-optimized:

The chart attempts to display both the best candidates and the degree of difficulty for migration. But there are a few problems with the “difficulty” rating of this internal query, and as a result, if we really want to know how easy/difficult the process might be, we’ll have to roll our own query.

Read on for more details, as well as a script Ned has put together to do the same in T-SQL.

Reducing TempDB Usage With Memory-Optimized Objects

Mark Wilkinson shows how to replace temp tables (or table variables) with memory-optimized table variables to reduce tempdb latching:

If all worked, you should now see that we have contention on the sysschobjs table. Earlier we discussed using sp_help to get index details on system tables; if we do that now and look at index 2, we will see that the lead column is nsclass, which is a tinyint field. Using a tinyint as a lead column is typically a terrible idea since there is little selectivity on such a narrow field, and this is no exception.

This isn’t the only case of contention you might see with system objects related to temporary tables. We ran into a few different contention scenarios with tempdb:

  • Contention on sysschobjs again, but on index 3. This index leads with the name of the temporary table and is fairly narrow so you can fit a lot of records on a single index page. Because of this, if you are running lots of concurrent procedures that create temporary tables with the same or similar names, it creates a hot spot on a single page, leading to more contention.

  • Temporary table auto-stats. Statistics objects for all tables (including temporary tables) are stored in the sys.sysobjvalues table. If you get enough auto-stats generations on temporary tables you can see contention here.

Mark’s post reads like a book chapter and he does a great job of summing up the problem and the solution.
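
For reference, the basic pattern Mark describes is to define a memory-optimized table type once and then declare table variables of that type instead of creating temp tables, so no object is created (or latched) in tempdb at runtime. A minimal sketch, assuming a database that already has a MEMORY_OPTIMIZED_DATA filegroup (the type and column names here are hypothetical):

    -- One-time setup: a memory-optimized table type standing in for a temp table
    CREATE TYPE dbo.OrderStaging AS TABLE
    (
        OrderID INT           NOT NULL,
        Amount  DECIMAL(10,2) NOT NULL,
        INDEX ix_OrderID NONCLUSTERED (OrderID)  -- memory-optimized table types require at least one index
    )
    WITH (MEMORY_OPTIMIZED = ON);
    GO

    -- At runtime: no tempdb metadata rows are created, so no sysschobjs contention
    DECLARE @Orders dbo.OrderStaging;
    INSERT INTO @Orders (OrderID, Amount) VALUES (1, 19.99);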

Out Of User Memory Quota

Jack Li troubleshoots an In-Memory OLTP error:

[INFO] HkDatabaseTryAcquireUserMemory(): Database ID: [7]. Out of user memory quota: requested = 131200; available = 74641; quota = 34359738368; operation = 1.

This is my first time seeing this error. As usual, I relied on source code to find answers. The message is a result of enforcing the memory quota for In-Memory OLTP usage. As documented in “In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1”, SQL Server 2016 SP1 started to allow In-Memory OLTP to be used in all editions, but it enforces memory quotas for editions other than Enterprise Edition. The above message is simply telling you that you have reached the quota and whatever operation you attempted was denied.

Jack provides more context around the error as well.
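
If you hit this quota, a reasonable first step is to see where the memory is going. A minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV, run in the affected database (this query is my addition, not part of Jack's post):

    -- Memory used by each memory-optimized table and its indexes in the current database
    SELECT
        OBJECT_SCHEMA_NAME(object_id) AS schema_name,
        OBJECT_NAME(object_id)        AS table_name,
        memory_used_by_table_kb,
        memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats
    ORDER BY memory_used_by_table_kb DESC;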

C Compiler Errors With Memory-Optimized Objects

Arun Sirpal ran into an issue with memory-optimized tables:

Naturally (whether right or wrong) I thought that it could be corruption and I didn’t want to rule it out. I headed straight to the error log to see if it could guide me; all this did was confuse me. However, after some research I concluded that “Unable to call into the C compiler GetLastError = 2” for the specific error code shown below meant that a specific file (or files) could not be found. I was thinking: exactly which file(s)?

This is a snippet from the error log when the database went through recovery. It seems to struggle even during the analysis phase, and the checkpoint failure was throwing me off slightly.

It’s worth reading in case you experience this error.

Recompiling Memory-Optimized Procedures On AGs

Ned Otter takes note of how natively compiled stored procedures differ from traditional stored procedures, especially on Availability Group secondary nodes:

As of SQL 2016, the database engine automatically updates statistics for memory-optimized tables (documentation here), but recompilation of native modules must still be performed manually. But hey, that’s way better than SQL 2014, when you couldn’t recompile at all; you had to drop/recreate the native module. And natively compiled stored procedures don’t reside in the plan cache, because they are executed directly by the database engine.

This post attempts to determine if the requirement to manually recompile native modules is any different for AG secondary replicas.

The results are interesting.
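
For context, the manual recompilation Ned mentions is done by running sp_recompile against the native module, which forces a fresh compilation (using current statistics) on its next execution. A minimal sketch with a hypothetical procedure name:

    -- Mark the natively compiled procedure for recompilation (SQL Server 2016+);
    -- the next execution recompiles it against the current statistics.
    EXEC sys.sp_recompile N'dbo.usp_InsertSalesRow_Native';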

Memory-Optimized Clustered Columnstore Indexes

Ned Otter contrasts memory-optimized clustered columnstore indexes with traditional, disk-based clustered columnstore indexes:

5. Parallelism

On-disk: When you query an on-disk table that has a columnstore index, the database engine can use parallelism to process the results more quickly.

Memory-optimized: When you query a memory-optimized table that has a columnstore index, the database engine can use parallelism to process the results more quickly, BUT, that statement is only true if you use interop. Natively compiled modules are always executed serially.

Click through for the rest of the comparison points as well as a repro script.
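
For reference (this is not Ned's repro script), a memory-optimized table with a clustered columnstore index is declared roughly like this on SQL Server 2016 and later; the table and column names are illustrative:

    CREATE TABLE dbo.SalesFact
    (
        SaleID   BIGINT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        SaleDate DATE            NOT NULL,
        Amount   DECIMAL(10,2)   NOT NULL,
        INDEX ccx_SalesFact CLUSTERED COLUMNSTORE  -- the columnstore index covers the whole table
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);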

Recovery Without Write-Ahead Logging

Kendra Little thinks about scenarios in which SQL Server is not able to do write-ahead logging (like with memory-optimized objects):

A few episodes ago, I talked about how learning about Write Ahead Logging was a light bulb moment for me, and helped me learn tons of concepts about backups and recovery. This week, we talk about when SQL Server turns things upside down and doesn’t use write ahead logging: and what it has to do for recovery in these special cases.

Watch this week’s 24 minute video. Subscribe to my YouTube channel, or check out the audio podcast to listen anywhere, anytime. Links from this episode are below the video.

Click through for the video.

Checkpoints And Memory-Optimized Filegroups

Jack Li explains why, even without a memory-optimized table, you can see XTP checkpoints in your database:

What are checkpoint files?

They are data and delta files, as documented in Durability for Memory-Optimized Tables. When you use disk-based tables, the data is written to data files. Even though data is stored in memory for memory-optimized tables, SQL Server still needs to persist data for disaster recovery. Data for memory-optimized tables is stored in what we call checkpoint files. The data file contains rows from insert and update operations; the delta file contains deleted rows. Over time, these files can be ‘merged’ to increase efficiency. Unneeded files can eventually be removed after the merge (but this can only happen after a log backup).

Click through for a demo script to see this in action.
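
If you want to peek at the checkpoint file metadata on your own instance, the sys.dm_db_xtp_checkpoint_files DMV exposes it. A minimal sketch (not Jack's demo script; the exact column list varies a bit between SQL Server versions):

    -- Count of In-Memory OLTP checkpoint files in the current database, by type and state
    SELECT
        file_type_desc,
        state_desc,
        COUNT(*) AS file_count
    FROM sys.dm_db_xtp_checkpoint_files
    GROUP BY file_type_desc, state_desc
    ORDER BY file_type_desc, state_desc;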

Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not have been more than 1 million rows. SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”. Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.” With a loop like the one above, all deleted rows will be kept and will consume memory until the end of the loop.

Click through for the complete story.
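
A minimal sketch of the kind of loop involved (my illustration, not Jack's repro), assuming a memory-optimized table type dbo.StagingRow with a RowID column already exists:

    DECLARE @rows dbo.StagingRow;   -- memory-optimized table variable (hypothetical type)
    DECLARE @i INT = 1;

    WHILE @i <= 1000000
    BEGIN
        INSERT INTO @rows (RowID) VALUES (@i);

        IF @i % 1000 = 0
            DELETE FROM @rows;      -- rows are deleted, but their memory is not released here

        SET @i += 1;
    END;
    -- Memory for all of the deleted rows is freed only when @rows goes out of scope,
    -- i.e. at the end of the batch or module, not at each DELETE.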

Memory-Optimized Hash Indexes And Bucket Counts

Jack Li relates a story about how it’s important to think through bucket count for memory-optimized hash indexes:

I was working with a customer to troubleshoot memory-optimized table issues. In this scenario, our customer uses a memory-optimized table variable. He put 1 million rows of data into the table variable and then processed it. Based on what he said, I tried to come up with a repro to see if I could duplicate the problem. While troubleshooting that issue, I ran into another issue where I couldn’t even insert 1 million rows into a memory-optimized table variable.

My rule of thumb is 30-50% more buckets than expected rows. That way I can handle some level of unexpected growth while keeping the chance of a hash collision, and the really slow bucket chain scan that comes with it, as low as possible. The official guidance says performance should be “acceptable” with up to 5x rows per bucket, but my experience has been that “acceptable” is a generous term at that point.
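
Putting that rule of thumb into code: for an expected 1 million rows, I would size the hash index at roughly 1.3 to 1.5 million buckets. A minimal sketch with a hypothetical table type:

    CREATE TYPE dbo.StagingRows AS TABLE
    (
        RowID   INT           NOT NULL,
        Payload NVARCHAR(100) NOT NULL,
        INDEX ix_RowID HASH (RowID) WITH (BUCKET_COUNT = 1500000)  -- ~50% above the expected 1,000,000 rows
    )
    WITH (MEMORY_OPTIMIZED = ON);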
