The estimated costs for the two queries are very close to each other: 74.6842 and 74.6839 optimizer units. However, we saw earlier that the tipping point for an adaptive join on this query can vary between 22680 and 80388.3 rows. This inconsistency means that we can find a query that performs worse with adaptive joins enabled.
Click through to see the queries Joe is using. Based on this, I’d guess this is a knife-edge problem: most of the time, adaptive join processing is better, but if you hit the wrong query, it’s worse.
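If you do land on one of those regression queries, adaptive joins can be switched off at the query or database level on SQL Server 2017 and later. Table names below are hypothetical stand-ins, not the queries from the post:

```sql
-- Disable adaptive joins for a single query via a USE HINT.
SELECT COUNT_BIG(*)
FROM dbo.FactSales AS fs
JOIN dbo.DimDate AS dd
    ON fs.DateKey = dd.DateKey
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));

-- Or turn them off for the whole database.
ALTER DATABASE SCOPED CONFIGURATION
    SET DISABLE_BATCH_MODE_ADAPTIVE_JOINS = ON;
```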
I was playing around with a nonclustered columnstore index on a disk-based table. Here’s what I was doing:
Session 1: this session repeatedly changed the value for a single row, back and forth. This puts the row into the delta store.
Session 2: this session repeatedly counted the number of rows in the table, using the columnstore index.
With my sample data in this scenario, I found I frequently generated deadlocks.
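A minimal sketch of that setup, assuming a simple disk-based table (hypothetical names, not the post's exact script):

```sql
-- Setup: disk-based table with a nonclustered columnstore index.
CREATE TABLE dbo.DeadlockDemo
(
    Id  INT NOT NULL PRIMARY KEY,
    Val INT NOT NULL
);
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_DeadlockDemo
    ON dbo.DeadlockDemo (Id, Val);
INSERT INTO dbo.DeadlockDemo (Id, Val) VALUES (1, 0);

-- Session 1: flip the value back and forth (run in a loop),
-- which keeps the row churning through the delta store.
UPDATE dbo.DeadlockDemo SET Val = 1 - Val WHERE Id = 1;

-- Session 2: count rows using the columnstore index (run in a loop).
SELECT COUNT_BIG(*)
FROM dbo.DeadlockDemo WITH (INDEX (ncci_DeadlockDemo));
```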
Read on for more.
Let’s take a look at the most evil and uncorrectable pressure within SQL Server for Columnstore Indexes – Dictionary Pressure. For any end user without a bit of internal knowledge of Row Group sizes, dictionaries, and pressures, it is always a huge surprise to find out that they can’t have those perfectly sized Row Groups with 1,048,576 rows.
Let us set up a table that produces good Dictionary Pressure and load 1.1 million rows into it, expecting to get just 2 Row Groups, while being perfectly aware that the result will be quite different:
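The shape of such a table, sketched from memory rather than copied from Niko's script, is one with a wide, high-cardinality string column that blows out the per-rowgroup dictionary:

```sql
-- Long, mostly unique string values are a classic source of dictionary
-- pressure: the dictionary hits its 16 MB limit and rowgroups get cut
-- well short of 1,048,576 rows.
CREATE TABLE dbo.DictionaryPressure
(
    Id  INT NOT NULL,
    Txt VARCHAR(2000) NOT NULL
);

INSERT INTO dbo.DictionaryPressure WITH (TABLOCK) (Id, Txt)
SELECT TOP (1100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       REPLICATE(CONVERT(VARCHAR(36), NEWID()), 20)  -- long, unique values
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

CREATE CLUSTERED COLUMNSTORE INDEX cci_DictionaryPressure
    ON dbo.DictionaryPressure;

-- Inspect the resulting rowgroups and why they were trimmed.
SELECT row_group_id, total_rows, trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.DictionaryPressure');
```

A DICTIONARY_SIZE trim reason in the last query is the pressure showing itself.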
Niko’s dictionary pressure example is a good reason not to include textual columns in columnstore indexes.
Let us make sure everything is fine for data transfer. Since we are using our source server (SQL Server 2014) with a linked server to SQL Server 2016, let us insert a couple of object IDs into the T1 table that we have created in the [Test] database:
INSERT INTO [.\SQL16].Test.dbo.T1 (C1)
SELECT so.object_id
FROM sys.objects so;
This statement will result in the error message that you can find below, telling us something about Cursors (????):
Msg 35370, Level 16, State 1, Line 1
Cursors are not supported on a table which has a clustered columnstore index.
WHAT? SORRY? THERE ARE NO CURSORS HERE!
We have no cursors, we just have a Clustered Columnstore Index on our table!
Read on to see how to get around this error, to the extent that you can.
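Two generic ways to sidestep four-part-name inserts over a linked server are sketched below. These are my assumptions about the general approach, not necessarily the post's exact fix, and whether each avoids the error depends on how the provider drives the remote insert:

```sql
-- Option 1: run the whole statement on the remote side with EXEC ... AT
-- (requires RPC Out enabled on the linked server). Note this changes the
-- meaning of the original statement: sys.objects is now the REMOTE
-- server's catalog, not the local one.
EXEC ('INSERT INTO Test.dbo.T1 (C1)
       SELECT so.object_id FROM sys.objects so;') AT [.\SQL16];

-- Option 2: push local rows through OPENQUERY instead of four-part naming.
INSERT INTO OPENQUERY([.\SQL16], 'SELECT C1 FROM Test.dbo.T1')
SELECT so.object_id
FROM sys.objects so;
```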
Still, this gives us processing of just 1 Segment with 253 lob logical reads vs 6 Segments and 2669 lob logical reads in the original query.
But wait, that’s not all! It can get significantly worse!
Let’s use an inequality search with an OR condition to see what happens – should be safe, right?
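A query of the shape being described would look something like this (hypothetical names):

```sql
-- Inequality predicates combined with OR. You might hope the engine could
-- keep only the rowgroups whose [min, max] ranges overlap either branch,
-- eliminating everything in the middle -- but segment elimination does not
-- work that way for this pattern today.
SELECT COUNT_BIG(*)
FROM dbo.FactSales
WHERE SaleId < 100000
   OR SaleId > 10000000;
```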
Spoilers: it’s not safe. I’ve been burned on the Min + Max issue that Niko shows, where you’d think that it could eliminate all segments except for the first and last, but that’s not how the process works today.
Below is what we want post-execution plans to look like when counting rows in a range – the thin arrow coming out of the Columnstore scan is a hint that predicate pushdown was successful. I didn’t specify MAXDOP in a query hint, and Resource Governor isn’t supplying MAXDOP; MAXDOP 8 is coming from the Database Scoped Configuration. The degree of parallelism turns out to be a significant factor in determining the tipping point. The [key] column is a BIGINT. Maybe it’s surprising that I’m using 27213.9 as the upper bound. But… check out the estimated number of rows 🙂 Again – this estimate is coming from the Legacy CE, specified in the database scoped configuration for my database.
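Both database-level settings mentioned above are easy to set and check:

```sql
-- Database scoped configuration supplying MAXDOP 8 (SQL Server 2016+).
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;

-- And the legacy cardinality estimator.
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Current values for the database.
SELECT name, value
FROM sys.database_scoped_configurations;
```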
Interesting findings, although it looks like the specific values are going to be more settings-dependent than the usual finding of this nature.
There are 3 important factors that I use for the determination of the column that is really good for the Segment Elimination and hence the Segment Alignment:
– The support for Segment Elimination (and then for Predicate Pushdown). If the data type does not support Segment Elimination, then why would someone optimise for it?
– The frequency with which the column is used in predicates (not in joins, where Segment Elimination/Predicate Pushdown generally does not function)
– The number of the distinct values within a table/partition (if we have more Segments than distinct values, it is not a very good sign generally: example – 10 million rows with 5 distinct values)
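The third factor is easy to measure yourself. A metadata sketch, with hypothetical table and column names:

```sql
-- Compare segment count to distinct values for a candidate column.
-- Far more segments than distinct values is generally a bad sign for
-- segment elimination (e.g. 10 million rows with 5 distinct values).
SELECT COUNT(*) AS segment_count
FROM sys.column_store_segments AS s
JOIN sys.partitions AS p
    ON s.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.FactSales')
  AND s.column_id = 2;  -- columnstore ordinal of the candidate column

SELECT COUNT(DISTINCT SomeColumn) AS distinct_values
FROM dbo.FactSales;
```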
Read on for more details.
So where are we? Well, we found that Rowgroup Elimination is possible in stored procedures with ColumnStore indexes, but that the cached plan doesn’t change based on feedback from that elimination.
- Good news: elimination can occur with variables passed in.
- Bad news: that cached plan sticks with you like belly fat at a desk job
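The pattern in question can be sketched like this (hypothetical names):

```sql
-- Rowgroup elimination can still kick in even though the predicate
-- arrives via parameters; the cached plan, however, is reused as-is no
-- matter how many rowgroups a given call actually eliminates.
CREATE OR ALTER PROCEDURE dbo.CountRange
    @Lo BIGINT,
    @Hi BIGINT
AS
BEGIN
    SELECT COUNT_BIG(*)
    FROM dbo.FactSales
    WHERE SaleId BETWEEN @Lo AND @Hi;
END;
GO

EXEC dbo.CountRange @Lo = 1, @Hi = 100000;     -- few rowgroups touched
EXEC dbo.CountRange @Lo = 1, @Hi = 100000000;  -- many; same cached plan
```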
Remember our plan? It used a Stream Aggregate to process the MAX. Stream Aggregates are preferred for small and/or ordered sets.
Great post, Brent.
Do you remember one of the major problems with Columnstore Indexes in SQL Server 2014? It was the lack of support for Batch Execution Mode on just a single core. We would get wonderful, fast execution plans with MAXDOP >= 2, which would go terribly slow if there was not enough memory to run the query with 2 or more cores, if the internal query cost was below the parallel execution threshold (cost threshold for parallelism), or if the execution plan was deemed TRIVIAL by the Query Optimiser, thus getting a single-core execution and running really slow.
Once we upgraded to SQL Server 2016, the problem of single-core Batch Mode execution being unavailable faded away, but still, some queries would sometimes run terribly slow for some reason…
One of the reasons behind this is trivial execution plans, which run the Columnstore Index Scan in Row Execution Mode – also known as a VerySlowExecutionMode for big amounts of data.
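You can check whether a query fell into this trap by inspecting the plan's optimization level; a sketch with a hypothetical table:

```sql
-- A simple single-table aggregate is a typical candidate for a trivial plan.
SELECT COUNT_BIG(*)
FROM dbo.FactSales;

-- In the plan XML, look for StatementOptmLevel="TRIVIAL", and check
-- whether the Columnstore Index Scan ran with Actual Execution Mode = Row
-- rather than Batch (the slow case on SQL Server 2016).
```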
Read on to see the change in 2017, as well as a workaround for 2016.
From a recent experience at a customer, I had an opportunity to dive into the details of Columnstore Index dictionaries. Given what I have learned in recent days, I have to admit that my understanding of them was pretty low, and I would like to share what I have learned with everyone.
These are some of the key findings that I have discovered:
– The local dictionaries are not exclusively connected to just 1 Row Group, but to multiple ones;
– The dictionaries within Columnstore Indexes are compressed in a different way, depending on the type of the compression applied (Columnstore vs Columnstore Archival);
and let us dive into each one of them:
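To poke at the dictionaries on your own tables, sys.column_store_dictionaries is the catalog view to start from (table name hypothetical):

```sql
-- dictionary_id = 0 is the global dictionary for a column; higher ids are
-- local dictionaries, which can be shared across several rowgroups.
SELECT p.partition_number,
       d.column_id,
       d.dictionary_id,
       d.type,
       d.entry_count,
       d.on_disk_size
FROM sys.column_store_dictionaries AS d
JOIN sys.partitions AS p
    ON d.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.FactSales')
ORDER BY d.column_id, d.dictionary_id;
```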
Read the whole thing.