Category: In-Memory OLTP

LOB On Memory-Optimized Tables

Dmitri Korotkevitch digs into how LOB data is handled when you build a memory-optimized table:

There is also considerable overhead in terms of memory usage. Every non-empty off-row value adds 50+ bytes of overhead regardless of its size. Those 50+ bytes consist of three artificial ID values (in-row, off-row in the data row, and leaf-level of the range index) and the off-row data row structure. It is even larger in the case of LOB columns, where data is stored in the LOB Page Allocator.

One of the key points to remember is that the decision of which columns go off-row is made based on the table schema. This is very different from on-disk tables, where such decisions are made on a per-row basis and depend on the data row size. With on-disk tables, data is stored in-row when it fits on the data page.

In-Memory OLTP works in a different way. (Max) columns are always stored off-row. For other columns, if the data row size in the table definition can exceed 8,060 bytes, SQL Server pushes the largest variable-length column(s) off-row. Again, it does not depend on the amount of data you actually store there.

This is a great article getting into the internals of how memory-optimized tables work in SQL Server 2016, as well as a solid reason to avoid LOB types and very large VARCHAR values on memory-optimized tables if you can.  Absolutely worth a read.
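
If you want to see the rule in action, here is a minimal sketch (table and column names are hypothetical): the varchar(max) column is always stored off-row, and because the declared row size exceeds 8,060 bytes, SQL Server pushes the largest variable-length column(s) off-row as well, no matter how little data you actually insert.

    -- Hypothetical table; the off-row decision comes from this definition,
    -- not from the data actually stored in any given row.
    CREATE TABLE dbo.OffRowDemo
    (
        ID int NOT NULL PRIMARY KEY NONCLUSTERED,
        Notes varchar(max) NULL,   -- (max) columns are always stored off-row
        Col1 varchar(4000) NULL,   -- declared row size can exceed 8,060 bytes,
        Col2 varchar(4000) NULL    -- so the largest column(s) get pushed off-row
    ) WITH (MEMORY_OPTIMIZED = ON);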

After Triggers On Memory-Optimized Tables

David Klee shows that you can put an AFTER trigger on a memory-optimized table in SQL Server 2016:

SQL Server 2016 supports AFTER triggers! I could find no good example of how to do this with a project I’m working on, so I figured out how to make this work. The following before and after screenshots are from a SQL Server 2016 instance and the DVDStore3 package. I modified a trigger to work with In-Memory OLTP.

Click through to see the code.
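
If you just want the shape of it first, here is a minimal sketch of a natively compiled AFTER trigger (all object names are hypothetical; triggers on memory-optimized tables must be natively compiled, and they can only reference other memory-optimized objects):

    -- Hypothetical audit table; it must itself be memory-optimized, since
    -- natively compiled modules cannot access disk-based tables.
    CREATE TABLE dbo.OrderAudit
    (
        AuditID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        OrderID int NOT NULL,
        AuditTimeUtc datetime2 NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON);
    GO
    CREATE TRIGGER dbo.tr_Orders_AfterInsert
    ON dbo.Orders  -- assumed to be an existing memory-optimized table
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AFTER INSERT
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
        -- log each inserted row into the audit table
        INSERT INTO dbo.OrderAudit (OrderID, AuditTimeUtc)
        SELECT i.OrderID, SYSUTCDATETIME()
        FROM inserted i;
    END;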

DBCC Checking Memory-Optimized Tables

Brent Ozar shows us how to run a DBCC check against a memory-optimized table.  The answer is not trivial:

THE EASY FIX: RUN FULL NATIVE BACKUPS EVERY DAY, AND FREAK OUT WHEN THEY FAIL.

Backup failures aren’t normally a big deal, but if you use in-memory OLTP on a standalone server or a failover clustered instance, backup failures are all-out emergencies. You need to immediately find out if the backup just ran out of drive space or lost its network connection, or if you have game-over Hekaton corruption.

Note that you can’t use SAN snapshot backups here. SQL Server won’t read the In-Memory OLTP pages during a snapshot backup, which means they can still be totally corrupt.

This works fine for shops with relatively small databases, say under 500GB.

Brent also has a Connect item to fix this.  Upvote if you have memory-optimized, durable tables.
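
In the meantime, a plain full native backup doubles as the corruption check, since the backup has to read the In-Memory OLTP checkpoint file pairs and validates their checksums along the way. A minimal sketch (database name and path are hypothetical):

    -- The full backup is the de facto corruption check for durable
    -- memory-optimized tables; alert aggressively if it fails.
    BACKUP DATABASE YourInMemoryDb
    TO DISK = N'X:\Backups\YourInMemoryDb_Full.bak'
    WITH CHECKSUM, INIT;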

New In-Memory OLTP Features

Jos de Bruijn points out new In-Memory OLTP features introduced since CTP3:

Large object (LOB) types varchar(max), nvarchar(max) and varbinary(max) are now supported with memory-optimized tables and table types, as well as natively compiled T-SQL modules, and the size limitations mirror those of disk-based tables (2GB limit on LOB values). In addition, you can have a memory-optimized table with a row size > 8060 bytes, even when no column in the table uses a LOB type. There is no run-time limitation on the size of rows or the data in individual columns; this is part of the table definition. Of course, all data does need to fit in memory.

I’m very pleased about this—now I can go back and turn some of my bigger table types into memory-optimized table types.
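
For reference, here is roughly what that looks like (type and column names are hypothetical): a memory-optimized table type that carries a LOB column.

    -- Memory-optimized table types require at least one index.
    CREATE TYPE dbo.tt_Documents AS TABLE
    (
        DocumentID int NOT NULL,
        Body nvarchar(max) NOT NULL,  -- LOB columns allowed as of SQL Server 2016
        INDEX ix_DocumentID NONCLUSTERED HASH (DocumentID) WITH (BUCKET_COUNT = 1024)
    ) WITH (MEMORY_OPTIMIZED = ON);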

Memory-Optimized Temp Objects

Jos de Bruijn shares a couple scenarios in which In-Memory OLTP can improve performance—using memory-optimized table types and replacing certain types of temp tables with schema-only memory-optimized tables:

Tempdb can be a performance bottleneck for many applications. Workloads that intensively use table-valued parameters (TVPs), table variables and temp tables can cause contention on things like metadata and page allocation, and result in a lot of IO activity that you would rather avoid.

What if TVPs and temp tables could live just in memory, in the memory space of the user database? In-Memory OLTP can help! Memory-optimized table types and SCHEMA_ONLY memory-optimized tables can be used to replace traditional table types and traditional temp tables, bypassing tempdb completely, and providing additional performance improvements through memory-optimized data structures and data access methods.

I’ve used both of these techniques to good effect, but the harsh limitations in 2014 prevented me from doing as much with them as I wanted.
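
For anyone who hasn't tried it, here is a minimal sketch of the temp table replacement (names are hypothetical): a SCHEMA_ONLY table keeps its schema across restarts but not its data, and its rows never touch tempdb or the transaction log.

    CREATE TABLE dbo.StagingOrders
    (
        OrderID int NOT NULL,
        Amount money NOT NULL,
        INDEX ix_OrderID NONCLUSTERED HASH (OrderID) WITH (BUCKET_COUNT = 100000)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);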

How Do Natively Compiled UDFs Perform?

Gail Shaw investigates natively compiled user-defined functions in SQL Server 2016:

When I saw that, the first question that came to mind is whether natively compiling a scalar function reduces the overhead when calling that function within another query. I’m not talking about data-accessing scalar UDFs, since natively compiled functions can only access in-memory tables, but functions that do simple manipulation of the parameters passed in. String formatting, for example, or date manipulation.

While not as harmful as data-accessing scalar UDFs, there’s still overhead as these are not inline functions, they’re called for each row in the resultset (as a look at the Stored Procedure Completed XE event would show), and the call to the function takes time. Admittedly not a lot of time, but when it’s on each row of a large resultset the total can be noticeable.

Read the whole thing and check out Gail’s method and conclusions.
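
For context, this is the shape of the kind of function Gail is testing: a natively compiled scalar UDF that does simple parameter manipulation and no data access (names are hypothetical).

    CREATE FUNCTION dbo.fn_FormatName (@FirstName nvarchar(50), @LastName nvarchar(50))
    RETURNS nvarchar(102)
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
        -- simple string manipulation only; no table access
        RETURN @LastName + N', ' + @FirstName;
    END;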

Running Out Of In-Memory Disk Space

Jack Li shows us what happens when we run out of disk space for checkpoint files on a memory-optimized table:

The question is what happens if the disk that hosts the In-Memory checkpoint files runs out of disk space?  So I decided to do some testing and document the symptoms and recovery steps here in case you run into such an issue.  With Azure, testing was really easy.  All I had to do was to spawn a VM and attach a very small disk to simulate an out-of-disk-space condition.

If your disk runs out of space, you will see a variety of errors, though your database stays online.

It looks like you can work your way out of a full drive scenario.  Hopefully, however, you won’t get into this scenario too often.
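
If you want to keep an eye on how much space the checkpoint file pairs are consuming before the drive fills up, a query along these lines against sys.dm_db_xtp_checkpoint_files can help (a rough sketch; run it in the affected database):

    -- Roll up checkpoint file sizes by type and state.
    SELECT file_type_desc,
           state_desc,
           COUNT(*) AS file_count,
           SUM(file_size_in_bytes) / 1048576 AS size_mb
    FROM sys.dm_db_xtp_checkpoint_files
    GROUP BY file_type_desc, state_desc
    ORDER BY file_type_desc, state_desc;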

In-Memory Analytics

Sunil Agarwal introduces us to In-Memory Analytics, forthcoming in SQL Server 2016:

SQL Server 2016 has significant advancements over SQL Server 2014 for In-Memory analytics. Some highlights are functionality (e.g. the ability to create a traditional nonclustered index to enforce PK/FK), performance (e.g. the addition of new BatchMode operators and Aggregate pushdown), online index defragmentation, and supportability (e.g. new DMVs, Perfmon counters and XEvents).

His post talks a little bit about in-memory, but focuses more on clustered columnstore indexes.  I like that columnstore indexes are getting V3 improvements, and I think they’ll be even more useful.  Whether the “in-memory” part becomes useful is a different question; I personally have seen very limited adoption of In-Memory OLTP (and a few huge bugs for the people brave enough to try it).
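
For what it's worth, here is one of those functionality highlights in sketch form (table and index names are hypothetical): SQL Server 2016 lets you pair a clustered columnstore index with a traditional nonclustered primary key, which 2014 did not allow.

    CREATE TABLE dbo.Sales
    (
        SaleID int NOT NULL,
        SaleDate date NOT NULL,
        Amount money NOT NULL,
        CONSTRAINT pk_Sales PRIMARY KEY NONCLUSTERED (SaleID)
    );

    CREATE CLUSTERED COLUMNSTORE INDEX ccsi_Sales ON dbo.Sales;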
