
Category: Columnstore

Handling IoT Traffic With SQL Server

Perry Skountrianos builds a reference architecture for handling nearly one and a half million rows per second with SQL Server:

The following sample demonstrates the high scale and performance of SQL Database, with the ability to insert 1.4 million rows per second by using a non-durable memory-optimized table to speed up data ingestion, while managing the In-Memory OLTP storage footprint by offloading historical data to a disk-based Columnstore table for real time analytics. One of the customers already leveraging Azure SQL Database for their entire IoT solution is Quorum International Inc., who was able to double their key database’s workload while lowering their DTU consumption by 70%.

If you hit on the right scenario, memory-optimized tables can be great.
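
To make the pattern concrete, here is a minimal sketch of the hot/cold split described above: a non-durable (SCHEMA_ONLY) memory-optimized table takes the ingest, and a disk-based clustered columnstore table holds history for analytics. Table and column names are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup; the real sample is more elaborate (natively compiled procedures, batching), so treat this only as the shape of the idea.

-- Hot ingest table: memory-optimized, schema-only durability (rows are not persisted to disk)
CREATE TABLE dbo.SensorReadings_Hot
(
    ReadingId   BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    DeviceId    INT          NOT NULL,
    ReadingTime DATETIME2(3) NOT NULL,
    Reading     FLOAT        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Cold history table: disk-based clustered columnstore for analytics
CREATE TABLE dbo.SensorReadings_History
(
    DeviceId    INT          NOT NULL,
    ReadingTime DATETIME2(3) NOT NULL,
    Reading     FLOAT        NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SensorReadings_History
    ON dbo.SensorReadings_History;

-- Periodic offload: copy anything older than a minute into the columnstore table,
-- then delete it from the memory-optimized table to keep the in-memory footprint small.
DECLARE @cutoff DATETIME2(3) = DATEADD(MINUTE, -1, SYSUTCDATETIME());

INSERT INTO dbo.SensorReadings_History (DeviceId, ReadingTime, Reading)
SELECT DeviceId, ReadingTime, Reading
FROM dbo.SensorReadings_Hot
WHERE ReadingTime < @cutoff;

DELETE FROM dbo.SensorReadings_Hot
WHERE ReadingTime < @cutoff;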


Columnstore Memory Pressure And Bulk Loading

Niko Neugebauer shows what happens when you try to bulk load data into a columnstore index but don’t have enough memory available:

After waiting for the 25 seconds (notice that the difference between the request_time and grant_time is exactly 25 seconds), the engine decides to grant some minimum amount of memory anyway, allowing the process to carry on without being cancelled, but the penalty is very heavy – the inserts will not go into the compressed row groups but into the Delta-Stores, making this operation not minimally logged and, in other words, painfully slow and inefficient.
To confirm the final results, let’s check on the Row Groups of our tables. Given that we have cancelled the inserts into the first 2 tables, we expect 1 row group for the [dbo].[FactOnlineSales_Stage3] table and 1 row group for the [dbo].[FactOnlineSales_Stage4] table, corresponding to the 3rd and 4th threads of data loading:

As Niko points out, this could be the difference between a well-behaved, single compressed rowgroup load and dumping a million rows into the deltastore.
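
If you want to see which way a load went, the row group DMV makes it obvious after the fact, and the memory grant DMV shows the 25-second wait while it is happening. A generic sketch (the staging table name comes from Niko’s demo; the queries are plain DMV usage, not his script):

-- While the load is running: a grant_time well after request_time means the
-- session sat waiting for a columnstore compression memory grant.
SELECT session_id, request_time, grant_time, requested_memory_kb, granted_memory_kb
FROM sys.dm_exec_query_memory_grants;

-- After the load: OPEN row groups are delta stores (the slow path),
-- COMPRESSED row groups are what a well-fed bulk load should produce.
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_desc,
       total_rows,
       trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.FactOnlineSales_Stage3')
ORDER BY row_group_id;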


How Columnstore Delta Stores Get Created

Joe Obbish has a list of ways that delta stores get created:

I briefly reviewed the documentation written by Microsoft concerning the appearance of delta stores. Here’s a quote:

Rows go to the deltastore when they are:
Inserted with the INSERT INTO VALUES statement.
At the end of a bulk load and they number less than 102,400.
Updated. Each update is implemented as a delete and an insert.

There are also a few mentions of how partitioning can lead to the creation of multiple delta stores from a single insert. It seems as if the document is incomplete or a little misleading, but I admit that I didn’t exhaustively review everything. After all, Microsoft hides columnstore documentation all over the place.

This is a great compendium of ways in which you can shoot yourself in the foot with clustered columnstore indexes.
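
As a quick illustration of the first two documented cases, here is a sketch against a throwaway table (names hypothetical):

CREATE TABLE dbo.DeltaStoreDemo
(
    Id  INT NOT NULL,
    Val INT NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_DeltaStoreDemo ON dbo.DeltaStoreDemo;

-- Case 1: a singleton INSERT ... VALUES always lands in a delta store.
INSERT INTO dbo.DeltaStoreDemo (Id, Val) VALUES (1, 100);

-- Case 2: a load of fewer than 102,400 rows also goes to the delta store;
-- 102,400 rows or more can be compressed directly into a row group.
INSERT INTO dbo.DeltaStoreDemo WITH (TABLOCK) (Id, Val)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       1
FROM sys.all_columns AS c1
CROSS JOIN sys.all_columns AS c2;

-- Check where the rows ended up: OPEN = delta store, COMPRESSED = columnstore segment.
SELECT state_desc, total_rows
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.DeltaStoreDemo');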


Columnstore Indexes And ML Services

Niko Neugebauer picks up on some changes that SQL Server 2017 Machine Learning Services can use with respect to columnstore indexes:

I expect not just a couple of rows to be sent over to the Machine Learning Services, but huge tables with millions of rows that also contain hundreds of columns, because these kinds of tables are the basis for Data Science and Machine Learning processes.
While we are of course focusing here on a rather small part of the total process (just the IO between the SQL Server relational engine and the Machine Learning Services), and the analytical process itself can take hours, the IO can still make a good difference in some cases.
I love this improvement, which is very under-the-hood, but it will help some people make the decision to migrate to the freshly released SQL Server 2017 instead of SQL Server 2016.

I haven’t quite taken advantage of this yet (just moved to 2017 but still in 130 compatibility mode) but fingers crossed that I’ll see those improvements.
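
The improvement concerns how quickly the relational engine can stream rows out to the external runtime; the calling pattern itself does not change. A hedged sketch of that call path, assuming a columnstore-backed table named dbo.FactSales with a SalesAmount column (both hypothetical):

-- The @input_data_1 query is where the columnstore IO happens; its result set
-- is streamed into the R runtime as InputDataSet.
EXEC sp_execute_external_script
     @language     = N'R',
     @script       = N'OutputDataSet <- data.frame(total = sum(InputDataSet$SalesAmount))',
     @input_data_1 = N'SELECT SalesAmount FROM dbo.FactSales'
WITH RESULT SETS ((total FLOAT));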


Finding Adaptive Join Inefficiencies

Joe Obbish walks us through a scenario with adaptive joins in SQL Server 2017:

The estimated costs for the two queries are very close to each other: 74.6842 and 74.6839 optimizer units. However, we saw earlier that the tipping point for an adaptive join on this query can vary between 22680 and 80388.3 rows. This inconsistency means that we can find a query that performs worse with adaptive joins enabled.

Click through to see the queries Joe is using.  Based on this, I’d guess that this is probably a knife-edge problem:  most of the time, adaptive join processing is better, but if you hit the wrong query, it’s worse.
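
If you do land on the wrong side of that tipping point, you can at least compare behavior directly: run the query as-is and again with the SQL Server 2017 hint that turns batch mode adaptive joins off. A sketch against hypothetical tables:

-- Plan as the optimizer chooses it (may use a batch mode adaptive join)
SELECT COUNT(*)
FROM dbo.FactSales AS f
JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE f.OrderDateKey = 20170101;

-- Same query with adaptive joins disabled, for an apples-to-apples comparison
SELECT COUNT(*)
FROM dbo.FactSales AS f
JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE f.OrderDateKey = 20170101
OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));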


Columnstore Deadlocking

Kendra Little shows us a scenario in which querying columnstore metadata during table updates can lead to a deadlock:

I was playing around with a nonclustered columnstore index on a disk-based table. Here’s what I was doing:

Session 1: this session repeatedly changed the value for a single row, back and forth. This puts it into the delta store.

Session 2: this session repeatedly counted the number of rows in the table, using the columnstore index.

With my sample data in this scenario, I found I frequently generated deadlocks.

Read on for more.
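
The shape of the repro is two simple loops in separate sessions; the table and index names below are hypothetical stand-ins, not Kendra’s script:

-- Session 1: flip a single row’s value back and forth,
-- which keeps the change churning through the delta store.
WHILE 1 = 1
BEGIN
    UPDATE dbo.OrdersWithNCCI
    SET    Quantity = CASE Quantity WHEN 1 THEN 2 ELSE 1 END
    WHERE  OrderId = 1;
END;

-- Session 2: repeatedly count the rows, forcing the nonclustered columnstore index.
WHILE 1 = 1
BEGIN
    SELECT COUNT(*)
    FROM dbo.OrdersWithNCCI WITH (INDEX (NCCI_OrdersWithNCCI));
END;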


What Prevents Columnstore Row Group Merge

Niko Neugebauer shows two major scenarios in which the Row Group Merge operator will not work as expected, leaving you with an unexpectedly large number of row groups:

Let’s take a look at the most evil and uncorrectable pressure within SQL Server for the Columnstore Indexes – the Dictionary Pressure. For any end user without a bit of internal knowledge of Row Group sizes, Dictionaries & Pressures, it is always a huge surprise to find out that they can’t have those perfectly sized Row Groups with 1,048,576 rows.

Let us set up a table producing good Dictionary Pressure and load 1.1 Million Rows into it, expecting to get just 2 Row Groups, while being perfectly aware that the result shall be quite different:

Niko’s dictionary pressure example is a good reason not to include textual columns on columnstore indexes.
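
For a picture of what dictionary pressure looks like, here is a sketch: a clustered columnstore table with a wide, high-cardinality string column whose dictionary can hit its 16 MB cap and force row groups to be trimmed well short of 1,048,576 rows (table name hypothetical, not Niko’s demo):

CREATE TABLE dbo.DictionaryPressureDemo
(
    Id      INT           NOT NULL,
    Payload VARCHAR(2000) NOT NULL   -- many distinct long strings drive dictionary growth
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_DictionaryPressureDemo
    ON dbo.DictionaryPressureDemo;

-- After loading ~1.1 million distinct strings, DICTIONARY_SIZE shows up as the trim reason.
SELECT row_group_id, state_desc, total_rows, trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.DictionaryPressureDemo')
ORDER BY row_group_id;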


Linked Servers And Columnstore

Niko Neugebauer continues his columnstore series by looking at how they interact with linked servers:

Let us make sure everything is fine for data transfer. As we are using our source server (SQL Server 2014) with a Linked Server to SQL Server 2016, let us insert a couple of ObjectIds into the T1 table that we have created in the [Test] database:

This statement will result in the error message that you can find below, telling us something about Cursors (????):

WHAT? SORRY? THERE ARE NO CURSORS HERE!
We have no cursors, we just have a Clustered Columnstore Index on our table!

Read on to see how to get around this error, to the extent that you can.
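
The failing pattern is nothing more exotic than a remote insert like the sketch below; the linked server name and column name here are hypothetical, while the [Test] database and T1 table come from Niko’s setup:

-- Run on the SQL Server 2014 side, pushing rows over a linked server
-- (here called REMOTE2016) into a clustered columnstore table on the 2016 instance.
INSERT INTO [REMOTE2016].[Test].[dbo].[T1] (ObjectId)
SELECT object_id
FROM sys.objects;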


Columnstore Segment Elimination Pains

Niko Neugebauer shows us two pain points with row group (segment) elimination on clustered columnstore indexes:

Still this gives us processing of just 1 Segment with 253 lob logical reads vs 6 Segments and 2669 lob logical reads in the original query.

But wait, that’s not all! It can get significantly worse!
Let’s use the inequality search with the OR condition to see what happens – should be safe, right?

Spoilers:  it’s not safe.  I’ve been burned on the Min + Max issue that Niko shows, where you’d think that it could eliminate all segments except for the first and last, but that’s not how the process works today.
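
The usual way to check whether elimination happened is SET STATISTICS IO ON and a look at the segment reads / segment skipped figures. A sketch with illustrative table and column names (not Niko’s demo objects):

SET STATISTICS IO ON;

-- A contiguous range can skip row groups whose min/max metadata
-- falls entirely outside the range...
SELECT COUNT(*)
FROM dbo.SalesCCI
WHERE SaleDateKey BETWEEN 20170101 AND 20170131;

-- ...while an OR of inequalities tends to read every segment,
-- which is the behavior Niko demonstrates.
SELECT COUNT(*)
FROM dbo.SalesCCI
WHERE SaleDateKey < 20170101 OR SaleDateKey > 20171231;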


Columnstore Predicate Pushdown Tipping Point

Lonny Niederstadt is hot on the trail, looking for evidence of a tipping point for COUNT(*) aggregates performing pushdown against a clustered columnstore index:

Below is what we want post-execution plans to look like when counting rows in a range – the thin arrow coming out of the Columnstore scan is a hint that predicate pushdown was successful.  I didn’t specify MAXDOP in a query hint, and Resource Governor isn’t supplying MAXDOP; MAXDOP 8 is coming from the Database Scoped Configuration.  The degree of parallelism turns out to be a significant factor in determining the tipping point.  The [key] column is a BigInt.  Maybe it’s surprising that I’m using 27213.9 as the upper bound.  But… check out the estimated number of rows 🙂 Again – this estimate is coming from the Legacy CE, specified in the database scoped configuration for my database.

Interesting findings, although it looks like the specific values are going to be more settings-dependent than the usual finding of this nature.
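
For reference, the kind of query under test looks like the sketch below (table name and range are illustrative; only the [key] column comes from Lonny’s description). Successful pushdown shows up in the actual plan as the predicate being applied inside the Columnstore Index Scan, with only the surviving rows flowing out of it:

-- Count rows in a range against a clustered columnstore index; with pushdown,
-- the scan itself evaluates the predicate instead of a separate Filter operator.
SELECT COUNT(*)
FROM dbo.BigCCITable
WHERE [key] >= 0 AND [key] <= 27213;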
