Press "Enter" to skip to content

Category: In-Memory OLTP

In-Memory OLTP Isolation Levels

Kendra Little has a webcast + transcript covering isolation levels when dealing with In-Memory OLTP:

Whatever you’re testing, and I really think this is an exciting area of SQL Server to say, “hey could I engineer something really fast with this?” Whatever you’re testing, the isolation levels here can be confusing, because this is really different than disk-based rowstore tables in SQL Server and disk-based columnstore in SQL Server, as well.

It’s not bad, and I think once you start playing with it, it isn’t even hard with the In-Memory OLTP isolation levels. In part, because there are only three isolation levels! There are actually fewer isolation levels, and once you get used to how they work, I think it makes sense.

But it is really different, it’s definitely worth playing around with.

I’d really like to see Read Committed isolation become available for In-Memory OLTP someday, but in the meantime, Kendra does a good job walking through the isolation levels which actually exist.
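To see how different it is in practice, try interop T-SQL: referencing a memory-optimized table inside an explicit transaction running under READ COMMITTED fails unless you supply an isolation hint or flip a database option. Here’s a minimal sketch I put together, with a hypothetical table name:

BEGIN TRANSACTION;

-- Without the SNAPSHOT hint, this SELECT fails under READ COMMITTED
-- inside an explicit transaction (error 41368).
SELECT OrderId, Quantity
FROM dbo.OrdersInMem WITH (SNAPSHOT);

COMMIT TRANSACTION;

-- Alternatively, have the database elevate such accesses to SNAPSHOT:
ALTER DATABASE CURRENT
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;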


Storage For In-Memory OLTP

Ned Otter has started a new series on In-Memory OLTP:

A memory-optimized database must have a special filegroup designated for memory-optimized data, known as a memory-optimized filegroup. This special filegroup is logically associated with one or more “containers”. What the heck is a “container”? Well, it’s just a fancy word for “folder”, nothing more, nothing less. But what is actually stored in those fancy folders?

Containers hold files known as “checkpoint file pairs”, which are also known as “data and delta files”, and these files persist durable memory-optimized data (in this blog post series, I’ll use the terms CFP and data/delta files interchangeably). You’ll note on the following image that it clearly states in bold red letters, “NO MAXSIZE” and “STREAMING”. “NO MAXSIZE” means that you can’t specify how large these files will grow, nor can you specify how large the container that houses them can grow (unless you set a quota, but you should NOT do that). And there’s also no way at the database level to control the size of anything having to do with In-Memory OLTP storage – you simply must have enough available free space for the data and delta files to grow.

This is the first potential resource issue for In-Memory OLTP: certain types of data modifications are no longer allowed if the volume your container resides upon runs out of free space. I’ll cover workload recovery from resource depletion in a future blog post.

Read the whole thing.  I’m looking forward to this series.
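If you want to follow along at home, the filegroup and container take two statements. A quick sketch, with the database name and path as placeholders:

-- One memory-optimized filegroup per database:
ALTER DATABASE SalesDb
ADD FILEGROUP SalesDb_InMem CONTAINS MEMORY_OPTIMIZED_DATA;

-- A container is just a folder; note there is no MAXSIZE clause, so the
-- data and delta files grow as long as the volume has free space.
ALTER DATABASE SalesDb
ADD FILE (NAME = 'SalesDb_InMem_c1', FILENAME = 'S:\IMOLTP\SalesDb_InMem_c1')
TO FILEGROUP SalesDb_InMem;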


Handling Session State With Memory-Optimized Tables

Perry Skountrianos shows how to configure ASP.Net to use memory-optimized tables for session state:

ASP.NET session state enables you to store and retrieve values for a user as the user navigates the different ASP.NET pages that make up a Web application. Currently, ASP.NET ships with three session state providers that provide the interface between Microsoft ASP.NET’s session state module and session state data sources:

  • InProcSessionStateStore, which stores session state in memory in the ASP.NET worker process
  • OutOfProcSessionStateStore, which stores session state in memory in an external state server process
  • SqlSessionStateStore, which stores session state in a Microsoft SQL Server database

This blog post focuses on the SqlSessionStateStore provider and describes how you can configure it to use SQL Server In-Memory OLTP as the storage option for session data. You can either use the latest ASP.NET async version of the SQL Session State provider (which is the recommended approach), or configure an earlier version of the provider to work with In-Memory OLTP by downloading and running the In-Memory OLTP SQL scripts from our sql server samples github repo.

The me of seven years ago really needed this.  But with the strong shift against session-based data collection and back to stateless or client-held state paradigms, I’m not sure how many people this helps.
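For those curious what the appeal looks like at the table level, the gist is a non-durable table keyed on session ID. A hypothetical sketch (not the provider’s actual schema):

-- SCHEMA_ONLY durability skips logging and checkpoint I/O entirely; the
-- session data disappears on restart, which is usually fine for session state.
CREATE TABLE dbo.SessionState
(
	SessionId NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
	SessionItem VARBINARY(7000) NOT NULL,
	ExpiresAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);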


Troubleshooting Memory-Optimized Index Performance

Kunal Karoth has a post up on performance troubleshooting with In-Memory OLTP:

In the previous blog post In-Memory OLTP Indexes – Part 1: Recommendations, we gave you an update on the latest features of In-Memory OLTP technology. We also summarized the key characteristics of memory-optimized indexes and shared some guidelines and recommendations on how to best choose and configure an index for your memory-optimized table. At this point, if you haven’t read through the previous blog post, we strongly recommend you do so. In this blog post we continue onwards; we take the learnings from the previous blog (Part 1) and, using some sample examples, apply them in practice. The learnings from this blog post (Part 2) will be particularly useful if you are experiencing query performance issues with memory-optimized tables, either after migration from disk-based tables or in general, with your production workload leveraging memory-optimized tables.

To summarize, this blog post covers the following:

  • Common mistakes and pitfalls to avoid when working with memory-optimized indexes.
  • Best practices to follow when configuring your memory-optimized indexes for optimal performance.
  • Troubleshooting and mitigating your query performance issues with memory-optimized indexes.
  • Monitoring your query performance with memory-optimized indexes.

There’s a lot of detail in this post, and tuning these types of indexes isn’t quite the same as tuning normal, disk-based indexes.
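To make one of those pitfalls concrete: hash indexes only support point lookups, so a range predicate falls back to a full table scan. A sketch, with assumed table and column names:

-- With only a HASH index on (OrderDate), this range query scans the table:
SELECT OrderId, OrderDate
FROM dbo.OrdersInMem
WHERE OrderDate >= '20170101';

-- A range (NONCLUSTERED) index supports inequality predicates and ordered scans:
ALTER TABLE dbo.OrdersInMem
ADD INDEX IX_OrdersInMem_OrderDate NONCLUSTERED (OrderDate);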


The Difficulties Of Memory-Optimized Tables

Michael J. Swart relays a cautionary tale around using In-Memory OLTP:

We’re leaving the feature behind for a few reasons. There’s an assumption we relied on for the sardine servers: Databases that contain no data and serve no activity should not require significant resources like disk space or memory. However, when we turned on In-Memory OLTP by adding the filegroup for the memory-optimized data, we found that the database began consuming memory and disk (about 2 gigabytes of disk per database). This required extra resources for the sardine servers. So for example, 1,000 databases * 2 GB = 2 TB for a server that should be empty.

Another reason is that checkpoints began to take longer. Checkpoints are not guaranteed to be quick, but on small systems they take a while, which impacts some of our Continuous Integration workflows.

Read the whole thing.  This technology definitely does not fit all use cases, and there are some painful limitations.  If it does fit, however, you’ll wonder how you lived without it.
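If you’re wondering whether your own “empty” databases are paying this tax, the In-Memory OLTP memory clerks are a quick first check:

-- Memory charged to the In-Memory OLTP engine, which shows up even for
-- databases that merely define a memory-optimized filegroup:
SELECT [type], name, memory_node_id, pages_kb / 1024 AS pages_mb
FROM sys.dm_os_memory_clerks
WHERE [type] LIKE '%XTP%';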


Handling IoT Traffic With SQL Server

Perry Skountrianos builds a reference architecture for handling nearly one and a half million rows per second with SQL Server:

The following sample demonstrates the high scale and performance of SQL Database, with the ability to insert 1.4 million rows per second by using a non-durable memory-optimized table to speed up data ingestion, while managing the In-Memory OLTP storage footprint by offloading historical data to a disk-based Columnstore table for real time analytics. One of the customers already leveraging Azure SQL Database for their entire IoT solution is Quorum International Inc., who was able to double their key database’s workload while lowering their DTU consumption by 70%.

If you hit on the right scenario, memory-optimized tables can be great.
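The ingest-then-offload pattern in the sample boils down to a periodic drain step; roughly, with assumed table names:

-- Move older rows from the non-durable in-memory staging table into a
-- disk-based columnstore table, inside one cross-container transaction.
DECLARE @cutoff DATETIME2 = DATEADD(MINUTE, -5, SYSUTCDATETIME());

BEGIN TRANSACTION;

INSERT INTO dbo.SensorReadingsHistory (SensorId, ReadingTime, Reading)
SELECT SensorId, ReadingTime, Reading
FROM dbo.SensorReadingsStaging WITH (SNAPSHOT)
WHERE ReadingTime < @cutoff;

DELETE FROM dbo.SensorReadingsStaging WITH (SNAPSHOT)
WHERE ReadingTime < @cutoff;

COMMIT TRANSACTION;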


Memory-Optimized Indexing Strategy

Kunal Karoth has some advice for creating memory-optimized indexes:

To further optimize your queries that do point lookups on your memory-optimized table using an equality operator (=), consider using a Hash index. Hash indexes also optimize insert operations and aid database recovery time. As mentioned earlier, a Hash index uses a hash table internally to keep track of the index keys. The hashing function uses all the key(s) specified at the index creation time to create a mapping between the index key column(s) and the corresponding hash bucket. Hence, it is important that you specify all the index key columns that are part of the hash index, during the lookup in your query.

As with any hashing technique, collisions will occur. A hash collision occurs when two or more index keys map to the same hash bucket. In general, having a limited number of collisions is expected and is fine. It is only when the number of collisions becomes excessive that it may noticeably begin to impact your query performance. Your aim should be to keep the number of collisions as low as possible. You may experience a higher number of collisions either because:

  • There are many duplicate index key column values in the table and/or

  • The number of hash buckets for your Hash index is under-provisioned.

There’s some helpful information here if you are new to In-Memory OLTP.
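To tell whether collisions are hurting you, the hash index DMV is the place to look: a high average chain length alongside few empty buckets suggests under-provisioned buckets (or heavy duplication in the key):

SELECT OBJECT_NAME(h.object_id) AS table_name,
       i.name AS index_name,
       h.total_bucket_count,
       h.empty_bucket_count,
       h.avg_chain_length,
       h.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS h
JOIN sys.indexes AS i
    ON i.object_id = h.object_id
    AND i.index_id = h.index_id;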


Gathering Information On Memory-Optimized Objects

Ned Otter has a diagnostic script for learning about how your servers are using memory-optimized objects:

Instance level evaluates the following:

  • which databases are memory-optimized
  • if running Enterprise, if there are any resource pools defined, and which databases are bound to them
  • version/edition of SQL Server
  • ‘max memory’ setting
  • whether or not instance-level collection of execution statistics has been enabled for all natively compiled stored procedures
  • memory clerks for the buffer pool and In-Memory OLTP
  • the value of the committed_target_kb column from sys.dm_os_sys_info
  • display any event notifications (because they conflict with deploying In-Memory OLTP)

Database level evaluates the following:

For each memory-optimized database:

  • database files, including container names, size, and location
  • memory-optimized tables
  • indexes on all memory-optimized tables
  • count of indexes per memory-optimized table
  • natively compiled stored procedures
  • whether or not the collection of execution statistics is enabled for any natively compiled procedures
  • count of natively compiled procedures
  • if using the temporal feature for memory-optimized tables, the amount of memory consumed by hidden temporal internal tables
  • memory structures for LOB columns (off-row)
  • average chain length for HASH indexes

Ned provides the script on his blog, so click through to get that.  This looks great if you’re trying to build up some basic information on how developers in your environment use memory-optimized objects.
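One of those instance-level checks deserves a call-out, because the default surprises people: execution statistics for natively compiled procedures are not collected unless you enable collection (it costs CPU on a busy system). Something like:

-- Enable instance-wide collection of execution statistics for
-- natively compiled stored procedures (off by default):
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;

-- Verify the current setting:
DECLARE @enabled BIT;
EXEC sys.sp_xtp_control_proc_exec_stats @old_collection_value = @enabled OUTPUT;
SELECT @enabled AS proc_exec_stats_enabled;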


Memory-Optimized Table Maintenance

Ned Otter has a great post looking at what you can and cannot do with memory-optimized tables containing certain types of indexes:

Now, let’s attempt to create a NONCLUSTERED COLUMNSTORE INDEX:

ALTER TABLE dbo.InMemADD 
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);

Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.

It fails because we can only create a CLUSTERED columnstore index (CCI). For 25 years, Microsoft SQL Server differentiated between indexes that physically ordered data on storage (CLUSTERED) and those that did not (NONCLUSTERED). Unfortunately, they chose to ignore that pattern when creating the syntax for memory-optimized tables; using the word CLUSTERED is required when creating a columnstore index on memory-optimized tables. 

It’s a great post with plenty of trial and error.
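For contrast, here’s the version that does succeed, awkward keyword and all, reusing the table name from Ned’s example:

ALTER TABLE dbo.InMemADD
ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE;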


Parent-Child Relationships And Native Compilation

Ned Otter looks at different ways to insert data with parent-child relationships using natively compiled, memory-optimized procedures:

This blog post demonstrates various approaches when using native compilation to insert rows into parent/child tables.

First, let’s create tables named Parent and Child, and relate them with a FOREIGN KEY constraint. Note that the Parent table uses the IDENTITY property for the PRIMARY KEY column.

Ned’s first example uses @@IDENTITY and shows that this doesn’t work.  But there’s an easy alternative which is superior to @@IDENTITY and is supported:  SCOPE_IDENTITY().  Even outside of memory-optimized tables, you want to use SCOPE_IDENTITY() over @@IDENTITY anyhow because of the risk of triggers changing the “current” identity value.  Here’s a quick example I ginned up using SCOPE_IDENTITY:

CREATE TABLE dbo.Parent
(
	Id INT IDENTITY (1, 1) NOT NULL PRIMARY KEY NONCLUSTERED,
	SomeChar CHAR(1) NOT NULL
)
WITH(MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
CREATE TABLE dbo.Child
(
	Id INT IDENTITY (1, 1) NOT NULL PRIMARY KEY NONCLUSTERED,
	ParentId INT NOT NULL,
	SomeChildChar CHAR(1) NOT NULL
)
WITH(MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
CREATE PROCEDURE dbo.TestParentChild
(
	@SomeChar CHAR(1),
	@SomeChildChar CHAR(1)
)
WITH EXECUTE AS OWNER, SCHEMABINDING, NATIVE_COMPILATION
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
	INSERT INTO dbo.Parent
	(
		SomeChar
	)
	VALUES
	(
		@SomeChar
	);

	-- SCOPE_IDENTITY() is supported in natively compiled modules and picks
	-- up the identity value generated by the Parent insert above.
	INSERT INTO dbo.Child
	(
		ParentId,
		SomeChildChar
	)
	VALUES
	(
		SCOPE_IDENTITY(),
		@SomeChildChar
	);
	
	SELECT
		p.Id,
		p.SomeChar
	FROM dbo.Parent p;

	SELECT
		c.Id,
		c.ParentId,
		c.SomeChildChar
	FROM dbo.Child c;
END;
GO  

EXEC dbo.TestParentChild
	@SomeChar = 'A',
	@SomeChildChar = 'B';

EXEC dbo.TestParentChild
	@SomeChar = 'Z',
	@SomeChildChar = 'Y';

EXEC dbo.TestParentChild
	@SomeChar = 'W',
	@SomeChildChar = 'L';

The results are what you’d expect.
