Press "Enter" to skip to content

Category: In-Memory OLTP

Temporal Tables

Alex Grinberg has a tutorial on temporal tables, including combining temporal tables with In-Memory OLTP:

Although the process of converting an In-Memory OLTP table to a system-versioned table is similar to that for a disk-based table, there are some differences that we need to cover and demonstrate in this section.

You need to be aware of some specific details when converting a memory-optimized table to a system-versioned table.

Read on for those specifics.
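
To make that concrete, here is a minimal sketch of the end state: a system-versioned, memory-optimized table on SQL Server 2016 or later. The table and column names are mine, not Alex’s, and the history table SQL Server creates under the covers is disk-based:

-- A hypothetical system-versioned, memory-optimized table.
CREATE TABLE dbo.Account
(
    AccountId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Balance DECIMAL(19, 4) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA,  -- required for system versioning
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountHistory)
);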

Memory-Optimized Table Warnings

Robert Davis looks at messages in the error log related to memory-optimized tables:

The server on which we are running in-memory OLTP is a really hefty server with 128 logical cores and 1.5 TB of RAM (1.4 TB allocated to SQL Server). We are limiting in-memory’s memory usage with Resource Governor, which also makes it easy to see how much it is using. Needless to say, even a limited percentage of 1.4 TB of RAM is still a lot of memory. The highest I have seen in-memory usage for this one database reach at peak activity levels is ~43 GB. In production, when the heavy in-memory OLTP processes complete, I see the system reclaim the in-memory buffers pretty quickly, though not completely. During a normal day, I often see the in-memory memory usage hovering between 1 and 3 GB even when there is virtually no traffic.

When testing in-memory on a dev server that only I was using before deploying to production, I noticed that the memory usage would stay at whatever high level it reached. This makes me believe that in-memory buffers are cleaned up and reclaimed as needed, and if not needed, they just hang around as in-memory buffers. And it appears that some of the buffers end up hanging around. Perhaps they wouldn’t if the server were memory-starved. I have not tested that theory.

It’s conjecture, but it seems pretty solid.  Also worth reiterating: these are warnings, not errors.
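
If you want to watch this behavior on your own server, the XTP memory clerks and DMVs expose it. A quick sketch (the DMVs are standard; which thresholds matter is up to you):

-- Total memory held by the In-Memory OLTP engine, from the memory clerks:
SELECT type, name, pages_kb / 1024.0 AS pages_mb
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%XTP%';

-- Per-table breakdown in the current database:
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC;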

Memory-Optimized Tables In Query Store

Enrico van de Laar looks at how Query Store treats memory-optimized tables and natively compiled stored procedures:

Interestingly enough, no runtime statistics are returned for our natively compiled stored procedure. As it turns out, we need to specifically enable the collection of statistics for natively compiled stored procedures through the sp_xtp_control_query_exec_stats stored procedure. Keep in mind, though, that enabling the collection of statistics for natively compiled stored procedures can cause performance degradation. Thankfully, we can very easily control the collection of statistics by enabling it for only specific natively compiled stored procedures and turning it on and off directly.

Read the whole thing.  Long story short:  they work together nicely.
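
For reference, here is a sketch of toggling that collection for a single procedure. The name dbo.MyNativeProc is a stand-in; sys.sp_xtp_control_query_exec_stats is the procedure Enrico describes:

-- Enable per-query statistics collection for one natively compiled procedure.
DECLARE @dbid INT = DB_ID(), @objid INT = OBJECT_ID('dbo.MyNativeProc');

EXEC sys.sp_xtp_control_query_exec_stats
    @new_collection_value = 1,
    @database_id = @dbid,
    @xtp_object_id = @objid;

-- ... run the workload and review the collected statistics ...

-- Turn collection back off to avoid the performance overhead:
EXEC sys.sp_xtp_control_query_exec_stats
    @new_collection_value = 0,
    @database_id = @dbid,
    @xtp_object_id = @objid;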

In-Memory OLTP For Reporting

Daniel Janek shows that memory-optimized tables aren’t just for OLTP scenarios:

I wouldn’t have thought that Hekaton could take my report query down from 30+ minutes to 3 seconds, but in the end it did. *Note that the source data is static and repopulated just twice a week. With that said, I didn’t bother looking into any limitations that “report style” queries may cause for OLTP operations. I’ll leave that to you.

With SQL Server 2016 (an important caveat), memory-optimized tables can work great for reporting scenarios.  The important factor is having enough RAM to store the data.
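
If you want to try this against a static, periodically reloaded dataset like Daniel’s, a memory-optimized table with a range (nonclustered) index is the shape to start from, since report-style scans don’t suit hash indexes. This is a sketch with made-up names, not Daniel’s actual schema:

-- A hypothetical reporting table: static data, repopulated twice a week.
CREATE TABLE dbo.SalesReport
(
    SaleId BIGINT NOT NULL PRIMARY KEY NONCLUSTERED,
    SaleDate DATE NOT NULL,
    Amount DECIMAL(19, 4) NOT NULL,
    -- Range index for date-bounded report queries:
    INDEX ix_SaleDate NONCLUSTERED (SaleDate)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);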

Memory-Optimized Transaction Logging

Raul Gonzalez looks at how transaction logging works with memory-optimized tables:

Now we need double the rows, because for each row we’ve said has been deleted, we have to tell SQL Server that it was not actually deleted (COMPENSATION due to ROLLBACK) in case of recovery (crash recovery or backup recovery). That’s so bad.

But not everything is lost yet :) Let’s check how the In-Memory engine deals with this problem.

Memory-optimized tables are pretty neat.
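
To see the difference Raul describes on a test box, you can count log records around a rollback with the undocumented fn_dblog function. Table names here are stand-ins:

-- Disk-based table: a rollback writes compensation log records.
CHECKPOINT;  -- in SIMPLE recovery this clears reusable log, so the counts below are mostly ours

BEGIN TRAN;
    INSERT dbo.DiskTable (Id, Payload)
    SELECT TOP (1000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
           REPLICATE('x', 100)
    FROM sys.all_objects;
ROLLBACK;

SELECT Operation, COUNT(*) AS log_records
FROM fn_dblog(NULL, NULL)
GROUP BY Operation
ORDER BY log_records DESC;

-- Repeat against a memory-optimized table and the rollback produces almost
-- nothing: the In-Memory engine writes log records only at commit time, so
-- there is nothing to compensate.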

In-Memory Columnstore Updates

Niko Neugebauer looks at improvements in SQL Server 2016 SP1 with respect to adding columnstore indexes to memory-optimized tables:

With an incredible Service Pack 1, Microsoft has triumphantly announced that all editions (Standard, Web, Express, and even Local) will get the most advanced programming capabilities of Columnstore, In-Memory, Database Snapshot, Compression, Partitioning, and many others, plus some incredible features for T-SQL (CREATE OR ALTER) and Execution Plan details (actual information on processed rows, used trace flags, etc.). But at the same time, there are some quite important improvements under the hood that will make you want to use Service Pack 1 for SQL Server 2016 immediately. One of these features is the fast addition of a Columnstore Index to Memory-Optimised tables. Let’s take it to the test by restoring a copy of the free ContosoRetailDW database:

These results look nice.
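
The operation being tested is a one-liner; what SP1 changes is how fast it completes. A sketch, with a stand-in table name:

-- Add a clustered columnstore index to an existing memory-optimized table.
-- dbo.FactOnlineSales_InMem is a hypothetical memory-optimized copy of the
-- ContosoRetailDW fact table.
ALTER TABLE dbo.FactOnlineSales_InMem
    ADD INDEX cci_FactOnlineSales CLUSTERED COLUMNSTORE;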

Memory-Optimized Table Types

Randolph West has a three-part series in which he looks at using memory-optimized table types and table-valued parameters to increase application performance.  Part 1 introduces the concept:

In other words, for these short-lived temp tables, there’s not only the benefit of no longer worrying about referring to tempdb..#table, but you also get a massive performance improvement.

Part 2 specifies the scale of performance improvements:

The test is very simple and makes use of a technique that one of my customers uses extensively: writing some rows to a temp table, so that another process can reuse those values as parameters, and perform an action, after which it destroys the temp table.

Unfortunately, the method my customer uses to populate a temp table does not work on Azure SQL Database, because they explicitly refer to the temp tables with three-part naming (tempdb..#temptable), which is not permitted.

For the sake of the exercise, I will have a process that writes to a data structure and compare the times.

Part 3 repeats the test in Azure SQL Database:

I’m going to use the same WHILE loop again, but instead of a million runs, I’ll do 1000, 10,000 and 100,000, because I’m paying for this instance of Azure SQL Database (I picked a Premium P1, with 125 DTUs) and I’m a cheapskate. I doubt the 125 DTUs is even enough to run a million times for the fourth option.

Even in SQL Server 2014, this was a good use of In-Memory OLTP.  With the improvements in 2016, this becomes a viable option for a lot more workloads.
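
The pattern itself is compact. A sketch, with illustrative names (note that a memory-optimized table type must have at least one index):

-- A memory-optimized table type in place of a short-lived temp table.
CREATE TYPE dbo.IdList AS TABLE
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Instead of CREATE TABLE #ids ... / DROP TABLE #ids:
DECLARE @ids dbo.IdList;

INSERT @ids (Id)
VALUES (1), (2), (3);

SELECT Id FROM @ids;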

Memory-Optimized Tables

Sunil Agarwal explains that memory-optimized tables are more than just “in memory” tables:

In my many conversations with customers during Microsoft events, people often confuse the terms ‘In Memory’ and ‘Memory-Optimized’, and many think that they are one and the same. If you continue reading this blog, you will realize that they are somewhat related but can lead to very different performance/scalability.

To understand this, let us travel back in time a few years, when the sizes of OLTP databases were much larger than the memory available on the server. For example, your OLTP database could be 500 GB while your server box has 128 GB of memory. We all know the familiar strategy to address this: storing data/indexes in pages. SQL Server supports 8 KB pages and brings pages in/out of memory as needed, deploying complex heuristics implemented as part of the Buffer Pool. When running a query, if the page containing the requested row(s) is not in memory, an explicit physical IO is done to bring it into memory. This impacts query performance negatively. Today, you can buy a server-class machine with, say, 1 TB of physical memory that can keep your full 500 GB database in memory. This will indeed improve the performance of your workload by removing the bottleneck due to the IO path. This is what I refer to as ‘your database is in memory’. However, the more important question to ask is ‘Is your database optimized for memory?’

Read on for more details.
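
Sunil’s ‘in memory’ sense, how many of a database’s 8 KB pages currently sit in the buffer pool, is easy to see for yourself. A sketch (memory-optimized tables will never show up here, because they bypass the buffer pool entirely):

-- Buffer pool pages currently cached, per database.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;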

Always Encrypted And Memory-Optimized Tables

Joey D’Antoni tests whether Always Encrypted works on memory-optimized tables in SQL Server 2016:

Last week was the PASS Summit, which is the biggest confab of SQL Server professionals on the planet (and educational as ever). Denny Cherry (b|t) and I ran into Bob Ward (b|t) of Microsoft, famous for his 500-level internals presentations. And for the first time ever, Bob asked us a question about SQL Server. Of course, we didn’t know the answer off the top of our heads, but we felt obligated to research it like we’ve made Bob do so many times. Anyway, the question came up at Bob’s internals session on Hekaton (In-Memory OLTP): whether it supports the new Always Encrypted feature in SQL Server 2016. I checked Books Online but could not find a clear answer, so I fired up SSMS and set up a quick demo.

Click through for scripts and the answer.
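
Without spoiling the answer, the test itself looks something like this sketch. It assumes a column master key and a column encryption key (CEK_Demo, a placeholder name) are already set up; whether the CREATE TABLE succeeds is the punchline of Joey’s post:

-- Hypothetical test: an Always Encrypted column on a memory-optimized table.
CREATE TABLE dbo.EncryptedInMem
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Demo,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);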
