Category: In-Memory OLTP

Memory-Optimized Columnstore

Niko Neugebauer clears the air regarding memory-optimized columnstore tables:

I would like to dedicate this blog post to the Memory-Optimised (also known and LOVED as Hekaton) Columnstore Indexes and their limitations in SQL Server 2016.
Disclaimer: Memory-Optimised Technology is a ground-breaking development which will only be truly appreciated in the next couple of years, and it has its incredible use cases (and maybe I will be blogging more about this space in the next couple of months), but people need to understand that mapping In-Memory Columnstore Indexes to disk-based Columnstore Indexes 1:1 is a very wrong idea, and that because In-Memory technology is significantly younger and less stable than Columnstore Indexes, there are some very significant hidden pitfalls.

It’s important to read this post as “this is not yet a fully-mature product” rather than “this will always be worse.”  But it’s just as important to understand the limitations of the product and not think you’re getting something that you aren’t.
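
If you haven’t seen the feature in question, here is a minimal sketch (table and index names are hypothetical) of a durable memory-optimized table with a clustered columnstore index in SQL Server 2016:

    CREATE TABLE dbo.SalesFact
    (
        SaleID BIGINT NOT NULL PRIMARY KEY NONCLUSTERED,
        ProductID INT NOT NULL,
        Quantity INT NOT NULL,
        SaleDate DATETIME2 NOT NULL,
        -- a clustered columnstore index covers every column in the table
        INDEX ccsi CLUSTERED COLUMNSTORE
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);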


MOSL

Niko Neugebauer has a new script library, this time for memory-optimized objects:

Today, I am announcing the initial availability of a brand new library for diving into the internals of SQL Server: MOSL, the Memory Optimised Scripts Library.  It will help you to discover, maintain, and troubleshoot Hekaton in SQL Server 2014, 2016, and Azure SQL Database.  At the moment, I am releasing the scripts for SQL Server 2014, but by the PASS Summit I am hoping to have all 3 versions online.

I am inviting everyone working with Memory-Optimised Technology to join me in this project, which is aimed at helping everyone who is working (or aspiring to work) with this amazing and ground-breaking technology.

Check them out if you use memory-optimized tables in your environment.
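
To give a sense of the internals these scripts cover, here is a simple query (my own sketch, not taken from MOSL) against one of the Hekaton DMVs to see per-table memory consumption:

    SELECT OBJECT_NAME(object_id) AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb,
           memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats
    ORDER BY memory_allocated_for_table_kb DESC;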


LOB On Memory-Optimized Tables

Dmitri Korotkevitch digs into LOB data when you build a memory-optimized table:

There is also considerable overhead in terms of memory usage.  Every non-empty off-row value adds 50+ bytes of overhead regardless of its size.  Those 50+ bytes consist of three artificial ID values (in-row, off-row in the data row, and the leaf level of the range index) and the off-row data row structure.  It is even larger in the case of LOB columns, where data is stored in the LOB Page Allocator.

One of the key points to remember is that the decision of which columns go off-row is made based on the table schema.  This is very different from on-disk tables, where that decision is made on a per-row basis and depends on the data row size.  With on-disk tables, data is stored in-row when it fits on the data page.

In-Memory OLTP works in a different way.  (Max) columns are always stored off-row.  For other columns, if the data row size in the table definition can exceed 8,060 bytes, SQL Server pushes the largest variable-length column(s) off-row.  Again, this does not depend on the amount of data you store there.

This is a great article getting into the internals of how memory-optimized tables work in SQL Server 2016, as well as a solid reason to avoid LOB types and very large VARCHAR values on memory-optimized tables if you can.  Absolutely worth a read.
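
To make the schema-based rule concrete, here is a hypothetical table definition (all names made up) showing both behaviors Dmitri describes:

    CREATE TABLE dbo.OffRowDemo
    (
        ID INT NOT NULL PRIMARY KEY NONCLUSTERED,
        Notes VARCHAR(MAX) NULL,  -- (max) columns are always stored off-row
        Col1 VARCHAR(4000) NULL,  -- the declared row size can exceed 8,060 bytes,
        Col2 VARCHAR(4000) NULL,  -- so the largest variable-length columns go
        Col3 VARCHAR(1000) NULL   -- off-row no matter how little data you store
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);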


After Triggers On Memory-Optimized Tables

David Klee shows that you can put an AFTER trigger on a memory-optimized table in SQL Server 2016:

SQL Server 2016 supports AFTER triggers!  I could find no good example of how to do this for a project I’m working on, so I figured out how to make it work.  The following before-and-after screenshots are from a SQL Server 2016 instance and the DVDStore3 package.  I modified a trigger to work with In-Memory OLTP.

Click through to see the code.
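
For reference, triggers on memory-optimized tables must be natively compiled, which dictates a particular shape.  Here is a minimal sketch (table, trigger, and column names are hypothetical; the audit table must itself be memory-optimized, since natively compiled modules can only touch memory-optimized tables):

    CREATE TRIGGER dbo.tr_Orders_AfterInsert
    ON dbo.Orders
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AFTER INSERT
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')

        -- the inserted pseudo-table is available in natively compiled triggers
        INSERT INTO dbo.OrderAudit (OrderID, AuditTimeUTC)
        SELECT OrderID, SYSUTCDATETIME()
        FROM inserted;
    END;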


DBCC Checking Memory-Optimized Tables

Brent Ozar shows us how to run a DBCC check against a memory-optimized table.  The answer is not trivial:

THE EASY FIX: RUN FULL NATIVE BACKUPS EVERY DAY, AND FREAK OUT WHEN THEY FAIL.

Backup failures aren’t normally a big deal, but if you use in-memory OLTP on a standalone server or a failover clustered instance, backup failures are all-out emergencies. You need to immediately find out if the backup just ran out of drive space or lost its network connection, or if you have game-over Hekaton corruption.

Note that you can’t use SAN snapshot backups here. SQL Server won’t read the In-Memory OLTP pages during a snapshot backup, which means they can still be totally corrupt.

This works fine for shops with relatively small databases, say under 500GB.

Brent also has a Connect item to fix this.  Upvote if you have memory-optimized, durable tables.
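
If you want the corruption check without burning disk space, one option (my own sketch, not necessarily Brent’s exact approach; the database name is hypothetical) is a copy-only backup to the NUL device, which forces SQL Server to read the Hekaton checkpoint files without writing a real backup file:

    -- fails if the In-Memory OLTP checkpoint files cannot be read;
    -- COPY_ONLY keeps this test out of your regular backup chain
    BACKUP DATABASE YourInMemoryDB
    TO DISK = N'NUL'
    WITH COPY_ONLY, CHECKSUM;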


New In-Memory OLTP Features

Jos de Bruijn points out new In-Memory OLTP features introduced since CTP3:

Large object (LOB) types varchar(max), nvarchar(max) and varbinary(max) are now supported with memory-optimized tables and table types, as well as natively compiled T-SQL modules, and the size limitations mirror those of disk-based tables (2GB limit on LOB values).  In addition, you can have a memory-optimized table with a row size > 8060 bytes, even when no column in the table uses a LOB type.  There is no run-time limitation on the size of rows or the data in individual columns; this is part of the table definition.  Of course, all data does need to fit in memory.

I’m very pleased about this—now I can go back and turn some of my bigger table types into memory-optimized table types.
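
For instance, a memory-optimized table type with a LOB column, which SQL Server 2014 would have rejected, now works (names here are hypothetical):

    CREATE TYPE dbo.DocumentTableType AS TABLE
    (
        DocumentID INT NOT NULL INDEX ix_DocumentID NONCLUSTERED,
        Body NVARCHAR(MAX) NULL  -- LOB columns now allowed in table types
    )
    WITH (MEMORY_OPTIMIZED = ON);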


Memory-Optimized Temp Objects

Jos de Bruijn shares a couple scenarios in which In-Memory OLTP can improve performance—using memory-optimized table types and replacing certain types of temp tables with schema-only memory-optimized tables:

Tempdb can be a performance bottleneck for many applications. Workloads that intensively use table-valued parameters (TVPs), table variables and temp tables can cause contention on things like metadata and page allocation, and result in a lot of IO activity that you would rather avoid.

What if TVPs and temp tables could live just in memory, in the memory space of the user database? In-Memory OLTP can help! Memory-optimized table types and SCHEMA_ONLY memory-optimized tables can be used to replace traditional table types and traditional temp tables, bypassing tempdb completely, and providing additional performance improvements through memory-optimized data structures and data access methods.

I’ve used both of these techniques to good effect, but the harsh limitations in 2014 prevented me from doing as much with them as I wanted.
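
As a concrete example, a SCHEMA_ONLY table like this hypothetical one lives entirely in memory, generates no transaction log records, and keeps its schema (but not its data) across restarts, which is just the behavior you want from a temp table replacement:

    CREATE TABLE dbo.SessionCache
    (
        SessionID INT NOT NULL,
        CartItemID INT NOT NULL,
        -- memory-optimized tables require at least one index
        INDEX ix_SessionID NONCLUSTERED (SessionID)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);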


How Do Natively Compiled UDFs Perform?

Gail Shaw investigates natively compiled user-defined functions in SQL Server 2016:

When I saw that, the first question that came to mind was whether natively compiling a scalar function reduces the overhead of calling that function within another query.  I’m not talking about data-accessing scalar UDFs, since natively compiled functions can only access in-memory tables, but about functions that do simple manipulation of the parameters passed in: string formatting, for example, or date manipulation.

While not as harmful as data-accessing scalar UDFs, there’s still overhead: these are not inline functions, so they’re called once for each row in the resultset (as a look at the Stored Procedure Completed XE event would show), and the call to the function takes time.  Admittedly not a lot of time, but when it happens on each row of a large resultset, the total can be noticeable.

Read the whole thing and check out Gail’s method and conclusions.
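
For context, a natively compiled scalar UDF of the kind Gail describes has this shape (a minimal sketch with hypothetical names, not her actual test function):

    CREATE FUNCTION dbo.FormatName (@FirstName NVARCHAR(50), @LastName NVARCHAR(50))
    RETURNS NVARCHAR(102)
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')

        -- pure parameter manipulation; no data access
        RETURN @LastName + N', ' + @FirstName;
    END;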
