Aggregate Predicate Pushdown And Data Types

Niko Neugebauer shows an example of how a slightly different data type can cause columnstore queries to be much faster:

Even though they are estimated to cost the same (50% each, or 0.275286 apiece to be more precise), in reality you will notice Aggregate Predicate Pushdown taking place on the first query, while the second query uses the Storage Engine to read out all 2 million rows from the table and filters them in the Hash Match iterator.

Actual Number of Locally Aggregated Rows is the one property on the Columnstore Index Scan iterator that will give you insight into what happened within the Columnstore Index Scan, since Aggregate Predicate Pushdown is not shown as a filter in the properties. This is not the most fortunate solution as far as I am concerned; the 0 rows flowing out of the Columnstore Index Scan serve as a good indication that Aggregate Predicate Pushdown took place, but if you want to be sure of all the details you will need to check the properties of the involved iterators.
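
As a rough illustration of the kind of difference Niko is describing, here is a minimal sketch (hypothetical table and column names, not his exact repro). Aggregate pushdown generally requires the aggregated column to fit in 8 bytes, so simply widening a decimal column can be enough to lose it:

CREATE TABLE dbo.SalesNarrow (Amount DECIMAL(9, 2) NOT NULL);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesNarrow ON dbo.SalesNarrow;

CREATE TABLE dbo.SalesWide (Amount DECIMAL(38, 2) NOT NULL);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesWide ON dbo.SalesWide;

-- Compare the Columnstore Index Scan properties of these two queries:
-- the narrow version should show a non-zero Actual Number of Locally Aggregated Rows,
-- while the wide version streams every row into the Hash Match aggregate.
SELECT SUM(Amount) FROM dbo.SalesNarrow;
SELECT SUM(Amount) FROM dbo.SalesWide;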

Definitely worth reading.

Memory-Optimized Clustered Columnstore Indexes

Ned Otter contrasts memory-optimized clustered columnstore indexes with traditional, disk-based clustered columnstore indexes:

5. Parallelism

On-disk: When you query an on-disk table that has a columnstore index, the database engine can use parallelism to process the results more quickly.

Memory-optimized: When you query a memory-optimized table that has a columnstore index, the database engine can use parallelism to process the results more quickly, BUT, that statement is only true if you use interop. Natively compiled modules are always executed serially.
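
To make the interop-versus-native distinction concrete, here is a minimal sketch (hypothetical object names, assuming a database already configured for In-Memory OLTP): the ad hoc query is interop and is eligible for a parallel plan, while the natively compiled procedure always runs serially.

CREATE TABLE dbo.SalesInMem
(
    OrderID  BIGINT NOT NULL PRIMARY KEY NONCLUSTERED,
    Quantity INT    NOT NULL,
    INDEX CCI_SalesInMem CLUSTERED COLUMNSTORE
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Interop: can go parallel
SELECT SUM(Quantity) FROM dbo.SalesInMem;
GO

-- Natively compiled: always serial
CREATE PROCEDURE dbo.SumSalesNative
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SUM(Quantity) AS TotalQuantity FROM dbo.SalesInMem;
END;
GO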

Click through for the rest of the comparison points as well as a repro script.

Identity Integers And Columnstore

Niko Neugebauer looks at how clustered columnstore indexes handle identity integers with respect to data load times:

This blog post will try to respond to this question from the perspective of data loading performance.

For this research I decided to pick 3 distinct scenarios to investigate, which refer to different ways to approach the solution:
– a CCI table with an Identity column
– a CCI table with a Sequence as a default value
– a CCI table without Identity
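
As a rough sketch, the three table variants might look something like this (hypothetical names and a trimmed column list, not Niko's exact setup):

CREATE TABLE dbo.CCI_WithIdentity
(
    ID      INT IDENTITY(1, 1) NOT NULL,
    Payload CHAR(100)          NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI ON dbo.CCI_WithIdentity;

CREATE SEQUENCE dbo.IDs AS INT START WITH 1 INCREMENT BY 1;
CREATE TABLE dbo.CCI_WithSequence
(
    ID      INT       NOT NULL DEFAULT (NEXT VALUE FOR dbo.IDs),
    Payload CHAR(100) NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI ON dbo.CCI_WithSequence;

CREATE TABLE dbo.CCI_NoIdentity
(
    ID      INT       NOT NULL,
    Payload CHAR(100) NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI ON dbo.CCI_NoIdentity;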

There’s a pretty substantial performance difference, so this is well worth the read for large columnstore data loads.

Zero, One, Close Enough

Kendra Little points out a columnstore optimization which leaves a strange execution plan as a result:

I have a very simple query. It’s running against a table with a nonclustered columnstore index.

SELECT COUNT(*) FROM pt.FirstNameByBirthDate_1966_2015;
GO

The query returns one row, as expected. Here’s my count:

For the record, that is the correct number of rows in the table. Here’s where things get weird. In the actual execution plan, the columnstore index returns zero rows.

Yes, this is really the actual execution plan. I’m not tricking you, I promise.

Click through for the answer.

Merging With Columnstore

Niko Neugebauer does not like the MERGE statement when applied to columnstore indexes:

This blog post is focused on the MERGE statement for Columnstore Indexes, or as I call it – the worst enemy of the Columnstore Indexes. It is extremely difficult to imagine a statement or a way of making worse use of Columnstore Indexes than the infamous MERGE statement. Why? Because it not only makes Columnstore Indexes perform slowly, it will make them perform MUCH SLOWER than any Rowstore Indexes. Yes, you have read that right – slower than ANY_ROWSTORE_INDEXES. In fact, this should be a hint that one should apply to the MERGE statement when it is executed against Columnstore Indexes! 🙂
I decided to dedicate a whole blog post to this matter, mainly to warn people about this pretty problematic statement – I hope not to see it being used against Columnstore Indexes in the future!
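
For context, the pattern in question is an upsert written as a single MERGE against a columnstore table; the usual workaround (a sketch with hypothetical names, not necessarily Niko's exact recommendation) is to split it into separate UPDATE and INSERT statements:

-- The problematic shape: MERGE into a clustered columnstore table
MERGE dbo.FactSales AS tgt
USING dbo.StagingSales AS src
      ON tgt.SaleID = src.SaleID
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount
WHEN NOT MATCHED THEN
    INSERT (SaleID, Amount) VALUES (src.SaleID, src.Amount);

-- One common alternative: do the same work as two set-based statements
UPDATE tgt
SET    tgt.Amount = src.Amount
FROM   dbo.FactSales AS tgt
       INNER JOIN dbo.StagingSales AS src ON tgt.SaleID = src.SaleID;

INSERT INTO dbo.FactSales (SaleID, Amount)
SELECT src.SaleID, src.Amount
FROM   dbo.StagingSales AS src
WHERE  NOT EXISTS (SELECT 1 FROM dbo.FactSales AS tgt WHERE tgt.SaleID = src.SaleID);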

There is very little room for misunderstanding in Niko’s post.

Looking At Columnstore In vNext

Dmitry Pilugin looks at non-clustered columnstore indexes in vNext:

Note: Notice that I created a varchar(max) column in the table; that was done deliberately, because I wanted to test whether LOB columns are supported in vNext. Unfortunately, when I tried to create a nonclustered Columnstore index, I got an error:
The statement failed. Column ‘Note’ has a data type that cannot participate in a columnstore index. Omit column ‘Note’.

However, if you create a clustered Columnstore, you will succeed! Though you will get the following warning in CTP 1.2:

Warning: Using Lob types (NVARCHAR(MAX), VARCHAR(MAX), and VARBINARY(MAX)) with Clustered Columnstore Index is in public preview. Do not use it on production data without backup during public preview period.

I think it is great that we can now have LOB varchar columns in Columnstore, because it was a blocker for some scenarios; at least I have seen some of those.
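
Dmitry's test boils down to something like this (a sketch with a hypothetical table name; the error and warning are the ones he quotes above):

CREATE TABLE dbo.Documents
(
    DocID INT          NOT NULL,
    Note  VARCHAR(MAX) NULL
);

-- Fails: the LOB column cannot participate in a nonclustered columnstore index
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Documents ON dbo.Documents (DocID, Note);

-- Succeeds in CTP 1.2, with the public-preview warning about LOB types
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Documents ON dbo.Documents;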

There are a few interesting findings here, so check it out.

Trivial Plans With Columnstore

Dmitry Pilugin looks at columnstore query behavior when trivial plans get involved:

Both queries are now fully optimized, and that leads to different plans. First of all, both queries run in Batch Mode, which is much faster than Row Mode.

In the first query, we see a Hash Match Aggregate instead of a Stream Aggregate; more to the point, you may see that the Actual Number of Rows is 0, because all the rows were aggregated locally at the Storage Engine level (you may see the property Actual Number of Locally Aggregated Rows = 60855). This is faster than a regular aggregation and is known as Aggregate Pushdown.

In the second query, you may observe a new Window Aggregate operator, which is faster than a Window Spool and also runs in Batch Mode.
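
The two query shapes Dmitry describes are roughly the following (hypothetical table and column names):

-- Query 1: a grouped aggregate; in Batch Mode the rows can be aggregated
-- locally in the Storage Engine (Aggregate Pushdown), so the scan outputs 0 rows
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.SalesCCI
GROUP BY CustomerID;

-- Query 2: a windowed aggregate; the Batch Mode Window Aggregate operator
-- replaces the Row Mode Window Spool
SELECT CustomerID,
       COUNT(*) OVER (PARTITION BY CustomerID) AS OrderCount
FROM dbo.SalesCCI;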

Read the whole thing.  Dmitry also looks at SQL Server vNext and how it handles the same trivial-plan-generating scenario.

More On Columnstore String Handling

Niko Neugebauer focuses on two potential columnstore issues, NULL strings and string aggregation:

You can see a filter iterator taking an estimated 9% of the resources and filtering over 5.1 million rows that were read out of the lineitem_cci table (the Columnstore Index Scan operation). Before the filter took place, all of the table data was read and the aggregate values were calculated, meaning a significant waste of resources (while the data was being filtered and while it was occupying memory).

The predicate properties of the Columnstore Index Scan are shown in the picture on the left, and you can see that nothing has been pushed into the Storage Engine, making this query perform quite slowly. On my local test VM it takes on average 0.7 seconds to execute while spending over 1.9 seconds of CPU time. This happens because of the inability of Columnstore Indexes to push a NULL expression predicate into the storage engine.

To solve this problem in this test case I will need to rewrite the query and remove the NULL expression by substituting it with = ‘RAIL’ OR IS NULL logic:
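
A sketch of the rewrite (column names are assumptions based on the TPC-H lineitem schema, and the original predicate shape is illustrative; this is not Niko's exact query):

-- Original shape: the NULL expression cannot be pushed into the Storage Engine,
-- so every row is read and only filtered later in the plan
SELECT l_shipmode, SUM(l_quantity) AS TotalQuantity
FROM dbo.lineitem_cci
WHERE ISNULL(l_shipmode, 'RAIL') = 'RAIL'
GROUP BY l_shipmode;

-- Rewritten shape: two simple predicates instead of the NULL expression
SELECT l_shipmode, SUM(l_quantity) AS TotalQuantity
FROM dbo.lineitem_cci
WHERE l_shipmode = 'RAIL' OR l_shipmode IS NULL
GROUP BY l_shipmode;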

Definitely worth a read.

Getting A Grip On Columnstore

Melissa Coates has a nice introduction to columnstore indexes:

Many of the in-memory features in the Microsoft platform rely on the xVelocity (formerly known as VertiPaq) engine. The implementations do differ somewhat between products, such as the requirements for data to be truly memory-resident.

The remainder of this post will focus on the two columnstore technologies in SQL Server: clustered and nonclustered.
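
If you want to try the two flavors out while reading, the basic DDL looks like this (hypothetical table; a table can have only one columnstore index, hence the DROP in between):

CREATE TABLE dbo.FactSales
(
    SaleDate DATE  NOT NULL,
    Amount   MONEY NOT NULL
);

-- Nonclustered columnstore: the rowstore table remains, with a columnar
-- copy of the selected columns maintained alongside it
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales ON dbo.FactSales (SaleDate, Amount);

-- Clustered columnstore: the columnstore becomes the table's primary storage
DROP INDEX NCCI_FactSales ON dbo.FactSales;
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;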

If you’re not very familiar with columnstore and aren’t quite ready to tackle Niko Neugebauer’s columnstore series, this is a good way to get started.

HTDELETE Wait Type

Joey D’Antoni troubleshoots a query with excessive HTDELETE waits:

Ultimately I think any thought of the readable secondary having a vastly different plan was a red herring. Statistics are going to be the same on both instances, and if there were a missing statistic on the secondary, SQL Server would create it in TempDB. Anyway, columnstore indexes don’t use statistics in the traditional sense.

Fortunately I was able to catch a query in the process of waiting on HTDELETE, so I no longer had to look for the needle in the haystack, and I could get to tuning the plans. I was able to grab the SELECT part of the query and generate an estimated plan on both the primary and secondary nodes. The plans were virtually the same on both nodes, with just a minor difference in memory grant between them.
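
Catching a query mid-wait, as Joey describes, can be done with something along these lines (a sketch, not his exact diagnostic query):

-- Find requests currently waiting on HTDELETE and pull their statement text
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = N'HTDELETE';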

Click through for the solution.
