Press "Enter" to skip to content

Category: Columnstore

Columnstore Segment Alignment

Niko Neugebauer walks us through a new segment alignment detection function he has written:

There are 3 important factors that I use to determine whether a column is really good for Segment Elimination, and hence for Segment Alignment:
– The support for Segment Elimination (and then for Predicate Pushdown). If the data type does not support Segment Elimination, then why would someone optimise for it?
– The frequency with which the column is used in the predicates (not in the joins, because that is where Segment Elimination/Predicate Pushdown generally does not function);
– The number of distinct values within a table/partition (if we have more Segments than distinct values, that is generally not a very good sign: for example, 10 million rows with 5 distinct values).
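Niko's function automates these checks, but if you want to eyeball that third factor yourself, a minimal sketch along these lines might do. The table dbo.FactSales and column StoreID are hypothetical, and the join through sys.index_columns is one common way to map a segment's column_id back to the table column:

-- Compare the number of segments for one column against its number of
-- distinct values; far more segments than distinct values suggests the
-- column is a weak candidate for Segment Elimination.
SELECT COUNT(*) AS SegmentCount,
       (SELECT COUNT(DISTINCT StoreID) FROM dbo.FactSales) AS DistinctValues
FROM sys.column_store_segments AS seg
JOIN sys.partitions AS p
    ON seg.partition_id = p.partition_id
JOIN sys.index_columns AS ic
    ON ic.object_id = p.object_id
   AND ic.index_id = p.index_id
   AND ic.index_column_id = seg.column_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
   AND c.column_id = ic.column_id
WHERE p.object_id = OBJECT_ID(N'dbo.FactSales')
  AND c.name = N'StoreID';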

Read on for more details.

Rowgroup Elimination In Stored Procedures

Erik Darling notes a parameter sniffing problem when trying to use rowgroup elimination in a stored procedure:

So where are we? Well, we found that Rowgroup Elimination is possible in stored procedures with ColumnStore indexes, but that the cached plan doesn’t change based on feedback from that elimination.

  • Good news: elimination can occur with variables passed in.
  • Bad news: that cached plan sticks with you like belly fat at a desk job

Remember our plan? It used a Stream Aggregate to process the MAX. Stream Aggregates are preferred for small and/or ordered sets.
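If that sticky cached plan bites, the classic (if blunt) escape hatch is to recompile on every execution. This is a hedged sketch, not necessarily the fix from Erik's post; dbo.GetMaxAmount is a hypothetical procedure over the dbo.t1 table his demos use:

CREATE OR ALTER PROCEDURE dbo.GetMaxAmount
    @StartId BIGINT,
    @EndId   BIGINT
AS
BEGIN
    SELECT MAX(t.Amount) AS MaxAmount
    FROM dbo.t1 AS t
    WHERE t.Id >= @StartId
      AND t.Id <  @EndId
    OPTION (RECOMPILE);  -- trade compile CPU for a plan that reflects each call's range
END;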

Great post.

The Trivial Plan Problem With Columnstore

Niko Neugebauer shows that trivial columnstore plans can lead to poor performance, but SQL Server 2017 has a fix:

Do you remember one of the major problems in SQL Server 2014 using Columnstore Indexes? It was the lack of support for the Batch Execution Mode with just a single core. We would get wonderful, fast execution plans with MAXDOP >= 2, which would go terribly slow if there was not enough memory to run the query with 2 or more cores, or if the internal query cost was below the parallel execution threshold (cost threshold for parallelism),
OR if the execution plan was deemed TRIVIAL by the Query Optimiser, thus getting a single-core execution and running really slow.
Once we upgraded to SQL Server 2016, the inability to run Batch Mode execution on a single core faded away, but still, sometimes some queries would run terribly slow for some reason…
One of the reasons behind this is trivial execution plans, which run the Columnstore Index Scan in the Row Execution Mode – also known as a VerySlowExecutionMode for big amounts of data.
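A quick way to check whether you are hitting this is to look in the plan cache for trivial plans that scan a columnstore index in row mode. The attribute names below come from the showplan XML schema, but treat the query itself as an illustrative sketch, not something from Niko's post:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//StmtSimple[@StatementOptmLevel="TRIVIAL"]') = 1
  AND qp.query_plan.exist('//IndexScan[@Storage="ColumnStore"]') = 1
ORDER BY qs.total_worker_time DESC;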

Read on to see the change in 2017, as well as a workaround for 2016.

Columnstore Dictionaries

Niko Neugebauer explains some interesting facts about columnstore index dictionaries:

From a recent experience at a customer, I had an opportunity to dive into the details of Columnstore Index Dictionaries. I have to admit that my understanding of them was pretty low before what I have learned in recent days, and I would like to share it with everyone.

These are some of the key findings that I have discovered:
– The local dictionaries are not exclusively connected with just 1 Row Group, but with multiple ones;
– The dictionaries within Columnstore Indexes are compressed in a different way, depending on the type of the compression applied (Columnstore vs Columnstore Archival);

and let us dive into each one of them…
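If you want to poke at dictionaries on your own tables, the catalog view sys.column_store_dictionaries exposes them. This little query is just a sketch; dbo.FactSales is a placeholder:

-- dictionary_id 0 is the global dictionary for a column; higher IDs are
-- local dictionaries, which (as Niko found) can span multiple row groups.
SELECT p.partition_number,
       d.column_id,
       d.dictionary_id,
       d.type,           -- dictionary type
       d.entry_count,
       d.on_disk_size
FROM sys.column_store_dictionaries AS d
JOIN sys.partitions AS p
    ON d.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID(N'dbo.FactSales')
ORDER BY d.column_id, d.dictionary_id;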

Read the whole thing.

Trivial Plans And Columnstore Indexes

Erik Darling warns us that trivial plans against clustered columnstore indexes could lead to row execution rather than batch execution:

Let’s look at one query with a few variations.

SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM   dbo.t1 AS t
WHERE  t.Id > 0
       AND t.Id < 3;

The plan for it is alright. It’s fairly straightforward and the query finishes in about 170ms.

We can see from the graphical execution plan that it’s been Simple Parameterized. SQL Server does this to make plan caching more efficient.
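One community-known way to steer a query like this away from a trivial plan (and thus toward batch mode consideration) is to bolt on a no-op subquery, since subqueries rule out both simple parameterization and the trivial plan. This variation is a sketch, not necessarily what Erik does in the post:

SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM   dbo.t1 AS t
WHERE  t.Id > 0
       AND t.Id < 3
       AND 1 = (SELECT 1);  -- no-op predicate that defeats the trivial plan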

Check out the entire post.

Finding Clustered Columnstore Index Candidates

Sunil Agarwal has a script that helps you find potential clustered columnstore index candidates:

Most of us understand that clustered columnstore index can typically provide 10x data compression and can speed up query performance up to 100x. While this sounds all so good, the question is how do I know which tables in my database could potentially benefit from CCI? For a traditional DW scenario with star schema, the FACT table is an obvious choice to consider. However, many workloads including DW have grown organically and it is not trivial to identify tables that could benefit from CCI. So the question is how can I quickly identify a subset of tables suitable for CCI in my workload?

Interestingly, the answer lies in leveraging the DMVs that collect data access patterns in each of the tables. The following DMV query provides a first-order approximation to identify a list of tables suitable for CCI. It queries the heap or the rowstore clustered index using the DMV sys.dm_db_index_operational_stats to determine the access pattern on the base rowstore table and identify tables that meet the criteria listed in the comments below:
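Sunil's full script has richer criteria; stripped to its skeleton, the approach looks something like this (the thresholds here are illustrative, not his):

-- Large rowstore tables whose access pattern is dominated by scans,
-- with relatively little update/delete activity, are natural CCI candidates.
SELECT OBJECT_NAME(ios.object_id)                         AS table_name,
       SUM(p.rows)                                        AS total_rows,
       SUM(ios.range_scan_count)                          AS range_scans,
       SUM(ios.singleton_lookup_count)                    AS singleton_lookups,
       SUM(ios.leaf_update_count + ios.leaf_delete_count) AS updates_and_deletes
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.partitions AS p
    ON p.object_id = ios.object_id
   AND p.index_id = ios.index_id
   AND p.partition_number = ios.partition_number
WHERE ios.index_id IN (0, 1)   -- heap or rowstore clustered index
GROUP BY ios.object_id
HAVING SUM(p.rows) >= 1000000  -- illustrative size threshold
ORDER BY total_rows DESC;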

Read on for the script, which has a sensible set of criteria.

Memory Grants Affecting Columnstore Load

Denzil Ribeiro explains how memory grant pressure can determine whether a columnstore bulk insert skips the deltastore or not:

We found that only at the beginning of the run, there was contention on memory grants (RESOURCE_SEMAPHORE waits), for a short period of time. After that and later into the process, we could see some latch contention on regular data pages, which we didn’t expect as each thread was supposed to insert into its own row group. You would also see this same data by querying sys.dm_exec_requests live, if you caught it within the first minute of execution, as displayed below.
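A live check along those lines might look like this; it is a sketch, not Denzil's exact query:

-- Catch sessions stuck on memory grants while the load is in flight.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.granted_query_memory   -- in 8 KB pages
FROM sys.dm_exec_requests AS r
WHERE r.wait_type = N'RESOURCE_SEMAPHORE';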

This is useful in case you run into the issue, but also useful as a case study on effective troubleshooting.

Memory Requirements For Columnstore Rebuild/Reorg

Niko Neugebauer looks at memory requirements for rebuilding and reorganizing columnstore indexes:

To spare all the Wows & how-cans: Microsoft was well aware of this problem and has delivered a solution with Cumulative Update 3 for SQL Server 2016 Service Pack 1:
FIX: SQL Server 2016 consumes more memory when you reorganize a columnstore index. And here it is: a new trace flag, 6404 (documented in the link and thus presumably supported), which allows you to lower the memory requirements for the ALTER INDEX … REORGANIZE command.
Let's take it for a test by once again running the setup workload for the FactOnlineSales_Reindex table and then executing the following command, enabling Trace Flag 6404 and reorganising our Clustered Columnstore Index:
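The test itself is compact; a sketch of it (the table name comes from Niko's post, and the trace flag is the one documented in the KB he links) looks like:

DBCC TRACEON (6404, -1);   -- -1 enables the flag globally

ALTER INDEX ALL ON dbo.FactOnlineSales_Reindex REORGANIZE;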

This is a rather interesting post and once again makes me wish that clustered columnstore indexes could be rebuilt online.

Columnstore Performance Counters

Niko Neugebauer talks about perfmon counters available for understanding what’s going on with columnstore indexes:

As mentioned right at the beginning of this article, Sunil Agarwal led the development team in adding greatly valuable performance counters under their own object, 'MSSQL:Columnstore', which provides some incredible insight into internal operations that are not exposed in other ways.
This was a very much needed step, because SQL Server 2014 brought a lot of different performance counters and objects for In-Memory OLTP (XTP), while Columnstore Indexes deserved a good treatment of their own.
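If you prefer T-SQL to perfmon, the same counters surface in sys.dm_os_performance_counters; the object name carries the instance prefix, hence the LIKE:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Columnstore%'
ORDER BY counter_name;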

This gives you one more avenue for research if you’re experiencing columnstore-related issues.

Partitioned Columnstore Tables

Denny Cherry makes an important point about dealing with columnstore tables:

ColumnStore indexes are all the rage with data warehouses. They're fast, they're new(ish), and they solve all sorts of problems when dealing with massive amounts of data. However, they can cause some issues as well if you aren't very careful about how you set up your partitions on the ColumnStore index. This is because you can't split a ColumnStore partition once it contains data.

Now, if everything is going according to plan, you create your partitions well in advance and there are no issues.

However, if everything hasn’t gone according to plan and someone forgets to create the partitions and you end up with rows in the final partition, you can’t create any more partitions because you can’t split the partition.
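Creating partitions in advance means splitting while the rightmost partition is still empty. As a hedged sketch, with hypothetical partition function and scheme names:

-- Point the partition scheme at the filegroup the new partition will use,
-- then split the empty rightmost partition before any 2018 data arrives.
ALTER PARTITION SCHEME ps_Sales NEXT USED [PRIMARY];

ALTER PARTITION FUNCTION pf_Sales() SPLIT RANGE ('20180101');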

Ideally, you get those ducks in a row first. Keep reading for a repro script and a couple of potential workarounds.
