sp_BlitzInMemoryOLTP

Ned Otter announces a new tool for troubleshooting memory-optimized databases:

At the instance level, it evaluates the following:

  • the version/edition of SQL Server

  • SQL Server ‘max memory’ setting

  • memory clerks

  • XTP memory consumers, aggregated

  • XTP memory consumers, detailed

  • the value of the committed_target_kb column from sys.dm_os_sys_info

  • whether instance-level collection of execution statistics has been enabled for all natively compiled stored procedures (because this can kill their performance…)

  • when running Enterprise Edition, whether any resource pools are defined, and which memory-optimized databases are bound to them

  • XTP and buffer pool memory allocations, because In-Memory OLTP can affect on-disk workloads

  • summary of memory used by XTP

There’s a lot of useful information you can get out of this procedure.  Click through for the full documentation.
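
To give a sense of what the procedure is checking under the hood, here is a rough sketch of the kind of DMV queries involved; this is my own illustration, not the procedure's actual code:

-- Memory the engine is allowed to commit (driven by the 'max memory' setting)
SELECT committed_target_kb FROM sys.dm_os_sys_info;

-- XTP memory clerks, aggregated
SELECT [type], name, SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] LIKE '%XTP%'
GROUP BY [type], name;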

Monitoring Performance Of Natively Compiled Stored Procedures

Jos de Bruijn announces a feature coming to the next version of SQL Server:

We just added new database-scoped configuration options that will help with monitoring performance of natively compiled stored procedures. The new options XTP_PROCEDURE_EXECUTION_STATISTICS and XTP_QUERY_EXECUTION_STATISTICS are available now in Azure SQL Database, and will be available in the next major release of SQL Server. These options will improve your monitoring and troubleshooting experience for databases leveraging In-Memory OLTP with natively compiled stored procedures.

After enabling these options, you can monitor the performance of natively compiled stored procedures using Query Store, as well as the DMVs sys.dm_exec_query_stats and sys.dm_exec_procedure_stats. Note that there is a performance impact to enabling execution statistics collection, so we recommend disabling stats collection when not needed.

That last sentence is important:  there’s an observer effect which slows down execution of natively compiled stored procedures, and considering that you’re implementing them specifically for the speed, that’s fairly unwelcome.
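
For reference, these are database-scoped configuration options, so on a build that supports them, turning collection on for troubleshooting and back off afterward looks something like this (a sketch of the documented syntax):

-- Turn collection on while troubleshooting
ALTER DATABASE SCOPED CONFIGURATION SET XTP_PROCEDURE_EXECUTION_STATISTICS = ON;
ALTER DATABASE SCOPED CONFIGURATION SET XTP_QUERY_EXECUTION_STATISTICS = ON;

-- ...and back off when you're done, since collection itself slows native procs down
ALTER DATABASE SCOPED CONFIGURATION SET XTP_PROCEDURE_EXECUTION_STATISTICS = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET XTP_QUERY_EXECUTION_STATISTICS = OFF;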

Digging Into The In-Memory Columnstore Location

Niko Neugebauer does some investigation into where, exactly, memory-optimized columnstore data goes:

This is a rather simple blog post, dedicated to the location of In-Memory Columnstore Indexes. This has been a constant topic of discussion for a long time, even at public events, and it is worth clearing up.

I had assumed that the In-Memory Columnstore structures (Segments, Dictionaries, …) live in memory with the rest of the memory-optimized data, but voices I greatly respect have pointed out that the Columnstore Object Pool is actually where any Columnstore structures are located, and there is nothing better than taking this feature for a ride to see what the SQL Server engine is actually doing.

Niko shows off a couple of useful DMVs along the way, too.
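
If you want to poke at this yourself, the Columnstore Object Pool shows up as a memory clerk, so a quick look at its size is a one-liner; this is my own sketch rather than Niko's exact query:

SELECT [type], name, SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] = 'CACHESTORE_COLUMNSTOREOBJECTPOOL'
GROUP BY [type], name;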

Out Of Memory Conditions And In-Memory OLTP

Ned Otter explains how a memory-optimized database can run out of its most critical resource:

What can cause a memory-optimized database to run out of memory? It could be that resource consumption (memory) exceeded:

  • the relevant percentage of committed_target_kb from the sys.dm_os_sys_info DMV (explained in a moment)
  • MAX_MEMORY_PERCENT value of a Resource Pool that the database is bound to (if running Enterprise Edition and using Resource Governor)

or:

  • garbage collection is not operational (the purpose of GC is to reclaim memory consumed by stale row versions)
  • updates to memory-optimized table variables caused row versions to be created, and because GC does not operate on table variables, you ran out of memory (for table variables that have a very large number of rows)

The only thing that can prevent GC from working is a long-running transaction.

This is the final post of Ned’s resource issues miniseries and it’s been a good one.
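
As a quick supplement: if you are on Enterprise Edition and have bound the database to a resource pool, the Resource Governor DMV shows how close the pool is to its ceiling. A minimal sketch, using a hypothetical pool name:

SELECT name, used_memory_kb, target_memory_kb, max_memory_kb, out_of_memory_count
FROM sys.dm_resource_governor_resource_pools
WHERE name = 'imoltp_pool';  -- hypothetical pool name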

Migrating A Memory-Optimized Database

Michael Bourgon notes that there’s an extra step when migrating a database with a memory-optimized filegroup from one server to another:

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it’d be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including “5(Access is denied.)”.

Click through for the steps involved.
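
For context, the attach itself is ordinary CREATE DATABASE … FOR ATTACH syntax, with the memory-optimized container specified as a filegroup entry pointing at the folder you copied; the names and paths below are made up for illustration:

CREATE DATABASE MyImoltpDb ON
    (NAME = 'MyImoltpDb_data', FILENAME = 'D:\Data\MyImoltpDb.mdf'),
    FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = 'imoltp_container', FILENAME = 'D:\Data\imoltp_container')  -- the copied container folder
LOG ON
    (NAME = 'MyImoltpDb_log', FILENAME = 'L:\Logs\MyImoltpDb_log.ldf')
FOR ATTACH;

Error 5 is an operating-system access-denied error, so it generally points at NTFS permissions on the copied files and folders for the SQL Server service account; Michael's post covers the specifics.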

In-Memory OLTP: When You’re Out Of Space

Ned Otter shows us what happens when you run out of disk space and you’re using memory-optimized objects:

In my lab, I’m running Windows Server 2012. Let’s use Powershell to install the File System Resource Manager, which will allow us to create a quota for the relevant folder:

Add-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools

After installing the Windows feature we can set the quota for the folder, but we shouldn’t enable it just yet, because first we have to verify the current size of the folder.

On my server, I created a quota of 1.5GB, and then enabled it.

Now let’s INSERT rows into the table, in batches of 1000, until we reach the limit (the INSERT script is listed in Part 2, I’m trying to keep this post from getting too long).

Click through to see what happens.  It’s not exactly a swath of carnage, but it’s also something you really don’t want to happen.
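
If you need to find which folder to put the quota on, the container is listed in sys.database_files as a FILESTREAM entry, so something like this (run inside the database in question) returns the path:

SELECT name, physical_name, type_desc
FROM sys.database_files
WHERE type_desc = 'FILESTREAM';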

Watching In-Memory OLTP Files

Ned Otter observes the process when you create memory-optimized tables and insert data into those tables:

It’s clear that before we inserted any data, we had 20 files that were in the PRECREATED state. After inserting 10 rows, we now have 18 PRECREATED files and 2 UNDER CONSTRUCTION files, which means the In-Memory engine is populating these files and that they are “open” in terms of their CHECKPOINT status. If you don’t understand what these terms mean, please read Part 1.

But there’s one thing that doesn’t look right here: we’ve inserted data into the table, but sizeBytesUsed is still zero for the UNDER CONSTRUCTION files. Why is that?

Ned does assume that you’ve read part 1 of the series already.
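
The state counts come from sys.dm_db_xtp_checkpoint_files; a minimal version of that kind of query (not necessarily Ned's exact one, and column names vary a bit across SQL Server versions) looks like this:

SELECT state_desc, file_type_desc,
       COUNT(*) AS file_count,
       SUM(file_size_used_in_bytes) AS bytes_used
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state_desc, file_type_desc
ORDER BY state_desc, file_type_desc;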

Thinking About Scalable Persistent Memory

Ned Otter has a good post thinking about Scalable Persistent Memory:

There were other potential issues when using Persistent Memory, detailed in this blog post. But what’s not covered in that post is the fact that deploying NVDIMM-N reduced the memory speed and/or capacity, because they are not compatible with LRDIMM. This causes you to use RDIMM, which reduces capacity, and because NVDIMM-N operates at a slower speed than RDIMM, it also affects total memory speed.

HP has since released Gen10 servers, and they have changed the landscape for those seeking reduced latency by storing larger data sets in memory. For one thing, they raise the bar for what’s now referred to as Scalable Persistent Memory, with a total server capacity of 1TB. To be clear, NVDIMM-N is not used in this configuration. Instead, regular DIMMs are used, and they are persisted to flash via a power source (this was also the case for NVDIMM-N, but there the flash, DIMM, and power source were all located on the NVDIMM-N).

Check it out.  I’m happy that things are improving, but it sounds like this won’t be a panacea.

In-Memory OLTP Isolation Levels

Kendra Little has a webcast + transcript covering isolation levels when dealing with In-Memory OLTP:

Whatever you’re testing (and I really think this is an exciting area of SQL Server to say, “hey, could I engineer something really fast with this?”), the isolation levels here can be confusing, because this is really different from disk-based rowstore tables in SQL Server and disk-based columnstore in SQL Server as well.

It’s not bad, and I think once you start playing with it, it isn’t even hard with the In-Memory OLTP isolation levels. In part, that’s because there are only three isolation levels! There are actually fewer isolation levels, and once you get used to how they work, I think it makes sense.

But it is really different, and it’s definitely worth playing around with.

I’d really like to see Read Committed isolation become available for In-Memory OLTP someday, but in the meantime, Kendra does a good job walking through the isolation levels which actually exist.
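
For the record, the three levels supported on memory-optimized tables are SNAPSHOT, REPEATABLE READ, and SERIALIZABLE. From interpreted T-SQL you pick one either per statement with a table hint or database-wide with the elevate option; a quick sketch, using hypothetical object names:

-- Map READ COMMITTED access to memory-optimized tables onto SNAPSHOT automatically
ALTER DATABASE SalesDb SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;  -- hypothetical database name

-- Or be explicit per statement with a table hint
SELECT CustomerId, Balance
FROM dbo.CustomerAccounts WITH (SNAPSHOT);  -- hypothetical memory-optimized table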

Storage For In-Memory OLTP

Ned Otter has started a new series on In-Memory OLTP:

A memory-optimized database must have a special filegroup designated for memory-optimized data, known as a memory-optimized filegroup. This special filegroup is logically associated with one or more “containers”. What the heck is a “container”? Well, it’s just a fancy word for “folder”, nothing more, nothing less. But what is actually stored in those fancy folders?

Containers hold files known as “checkpoint file pairs”, which are also known as “data and delta files”, and these files persist durable memory-optimized data (in this blog post series, I’ll use the terms CFP and data/delta files interchangeably). You’ll note on the following image that it clearly states in bold red letters, “NO MAXSIZE” and “STREAMING”. “NO MAXSIZE” means that you can’t specify how large these files will grow, nor can you specify how large the container that houses them can grow (unless you set a quota, but you should NOT do that). And there’s also no way at the database level to control the size of anything having to do with In-Memory OLTP storage – you simply must have enough available free space for the data and delta files to grow.

This is the first potential resource issue for In-Memory OLTP: certain types of data modifications are no longer allowed if the volume your container resides upon runs out of free space. I’ll cover workload recovery from resource depletion in a future blog post.

Read the whole thing.  I’m looking forward to this series.
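
For reference, creating the special filegroup and a container comes down to a couple of ALTER DATABASE statements; the database name and path here are made up:

ALTER DATABASE SalesDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDb ADD FILE
    (NAME = 'imoltp_container1', FILENAME = 'D:\IMOLTP\Container1')  -- a folder, not a file; no MAXSIZE, per the post above
TO FILEGROUP imoltp_fg;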
