High-Compression JSON With vNext

Jovan Popovic gives an example of 25x compression of JSON data using a clustered columnstore index in vNext:

CLUSTERED COLUMNSTORE INDEXES provide extreme data compression in SQL Server and Azure SQL Database. With NVARCHAR(MAX) support in CCI indexes, you can use them on your JSON data stored in the database and get high compression (up to 25x). Therefore, CCI is a perfect solution if you need to store a large volume of JSON data in your SQL Database.

The ContosoDW database is publicly available for download, so you can use this database and the script below to re-create this table and try this in your environment.

I’m curious whether this will also apply to non-JSON data.
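
To make the idea concrete, here is a rough sketch of the pattern with hypothetical table and column names (not Jovan's ContosoDW script): store each JSON document in an NVARCHAR(MAX) column and put a clustered columnstore index over the table.

    -- Hypothetical table holding raw JSON documents in an NVARCHAR(MAX) column.
    CREATE TABLE dbo.SalesJson
    (
        SalesId BIGINT IDENTITY(1,1) NOT NULL,
        Doc NVARCHAR(MAX) NOT NULL
    );
    GO

    -- The clustered columnstore index compresses the JSON along with everything else.
    -- NVARCHAR(MAX) columns in a CCI require vNext or Azure SQL Database.
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesJson ON dbo.SalesJson;
    GO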

JSON In Clustered Columnstore Indexes

Jovan Popovic gives a use case for JSON data being part of a clustered columnstore index:

This is equivalent to collections that you might find in a classic NoSQL database, because they store each JSON document as a single entity and optionally create indexes on these documents. The only difference is the CLUSTERED COLUMNSTORE index on this table, which provides the following benefits:

  1. Data compression – CCI uses various techniques to analyze your data and choose optimal compression algorithms to compress data.

  2. Batch mode analytics – queries executed on CCI process rows in batches of 100 to 900 rows, which might be much faster than row-mode execution.

I think it’s worth reading this in conjunction with Niko Neugebauer’s comments regarding strings in columnstore.
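
As a quick, hypothetical illustration (building on the sketch above rather than Jovan's script), querying such a collection looks the same as it would against a rowstore table, and the columnstore scan is eligible for batch mode.

    -- Pull a property out of each JSON document and aggregate over it.
    -- The scan of the clustered columnstore index can run in batch mode.
    SELECT
        JSON_VALUE(Doc, '$.Region') AS Region,
        COUNT(*) AS DocumentCount
    FROM dbo.SalesJson
    GROUP BY JSON_VALUE(Doc, '$.Region');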

Columnstore And Strings

Niko Neugebauer has a great knowledge dump regarding strings in columnstore indexes:

Having Strings in Fact tables is something that is quite normal but, to be honest, in most cases it does not make a lot of sense, since we are trying to keep there the information that can be calculated and/or aggregated. Notice that I have written in most cases and NOT in all cases, because there are some noticeable exceptions. Additionally, if you are “feeding” SSAS Tabular with your table, it might be much easier to do it directly (hey, there is a solution through the views for that, I was told :)).

In this blog post, I am focusing not on the exceptions but on the typical cases where it’s not the best option. As a basic solution, I just wanted to present you an optimised structure, which contains a tinyint column referring to a new table with the distinct data for the ShipMode.

The string experience with columnstore can be troublesome.  It’s great for numeric values, but less great for strings.
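
Here is a minimal sketch of the restructuring Niko describes, assuming a hypothetical fact table with a ShipMode string column: move the distinct strings into a small lookup table and keep only a TINYINT key in the fact table.

    -- Lookup table holding the distinct ShipMode strings (hypothetical names).
    CREATE TABLE dbo.ShipMode
    (
        ShipModeId   TINYINT NOT NULL PRIMARY KEY,
        ShipModeName NVARCHAR(50) NOT NULL
    );

    -- The fact table keeps only the TINYINT key, which compresses far better
    -- in a clustered columnstore index than the repeated string would.
    CREATE TABLE dbo.FactSales
    (
        OrderId     BIGINT NOT NULL,
        ShipModeId  TINYINT NOT NULL,
        SalesAmount MONEY NOT NULL
    );

    CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;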

NCCI Online Rebuild

Niko Neugebauer notes a big improvement in vNext:

For the Columnstore Indexes, the only online operation that was available so far was Row Group Merging and Removal with ALTER INDEX REORGANIZE (as well as the Tuple Mover operations). With the appearance of HTAP scenarios (Hybrid Transactional Analytical Processing, aka Operational Analytics) in SQL Server 2016, there was a huge need for online index maintenance, making sure that the operational part of HTAP runs smoothly. For any online business, taking their application down for an hour means losing real money and, even worse, losing credibility with their customers. To my knowledge, Microsoft was very much aware of this and was working on improving this missing part.
For the SQL Server vNext version (after SQL Server 2016), in CTP 1.2 yesterday, we have finally received the first Online Rebuild operation for the Columnstore Indexes – in this case for the Nonclustered Columnstore Indexes, and this is huge news for anyone using the HTAP scenarios.
Naturally this feature is Enterprise Edition only, and as ever before, if you are running a critical workload, you need to step up and use the Enterprise Edition.

Online clustered columnstore reorganization in 2016 was a life-saver, and I’m looking forward to online clustered columnstore rebuilding at some point in the future.
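
As a sketch of the syntax, against a hypothetical nonclustered columnstore index on a hypothetical rowstore table (requires vNext CTP 1.2 or later and Enterprise Edition):

    -- New in vNext CTP 1.2: rebuild a nonclustered columnstore index online.
    -- (Hypothetical index and table names.)
    ALTER INDEX NCCI_Orders ON dbo.Orders
    REBUILD WITH (ONLINE = ON);

    -- Row group merging and removal via REORGANIZE was already an online operation
    -- in SQL Server 2016.
    ALTER INDEX NCCI_Orders ON dbo.Orders
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);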

More On Columnstore Batch Mode

Sunil Agarwal talks about batch mode processing with columnstore indexes:

While these results may not appear as dramatic on my laptop, the picture below shows the performance gains with Window Aggregates on a server-class machine with a large DW database. The orange bar represents the query speed-up we got with the Window Aggregate operator in BatchMode. The highest speed-up we saw was 289x!!

Batch mode is generally a huge benefit for data warehousing environments.
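
For context, this is the general shape of query that benefits: a window aggregate over a columnstore-backed fact table (here the hypothetical one from the sketch above, not Sunil's test setup). In the actual plan, look for the Window Aggregate operator running in batch mode.

    -- Window aggregate over a clustered columnstore table; on SQL Server 2016+
    -- the Window Aggregate operator can run in batch mode.
    SELECT
        OrderId,
        SalesAmount,
        SUM(SalesAmount) OVER (PARTITION BY ShipModeId) AS ShipModeTotal
    FROM dbo.FactSales;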

Quickstats Craziness

Lonny Niederstadt takes an in-depth look at the intersection of several features and discusses an issue:

Here’s the Connect Item involved – written by Michael Swart.

Trace flag 2390 can cause large compile time when dealing with filtered indexes. (Active)
https://connect.microsoft.com/SQLServer/feedback/details/2528743/

Huh.  “What does that have to do with clustered columnstore indexes?” you might ask.  No, really, it’d be a personal favor if you *did* ask…

Ask and then read on.  This is the kind of post I’d send to someone wanting to learn how to troubleshoot issues.

Columnstore Query Patterns

Niko Neugebauer gives a couple hints on how to make columnstore queries as fast as possible:

On my VM with 4 cores, it takes 33 seconds to execute this query on SQL Server 2016 with Service Pack 1, while it burns almost 48 seconds of CPU time.
The relevant part of the execution plan can be found below, showing the many performance problems this query suffers from, such as the INNER LOOP JOIN and INDEX SPOOL, plus an even worse part that is actually hidden and identifiable only once you open the properties of any operator in the lower tree (left side of the LOOP JOIN): it all runs in the Row Execution Mode.

To show you the problem, on the left side you will find the properties of the sort iterator in the lower (left) part of the LOOP Join, which was executed around 770,000 times in the Row Execution Mode, effectively taking away any chance for this query to be executed in a fast way. One might argue that it might be more effective to do the loop part in Row Mode, but given that we are sorting around 3.1 Million Rows there, for me there is no doubt that it would be faster to do it in Batch Execution Mode. Consulting the last sort iterator in the execution plan (TOP N SORT), you will find that it is running with the help of Batch Execution Mode, even though it is processing around 770,000 rows.

There’s some valuable information here.
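
One way to experiment with Niko's point (a hypothetical rewrite against the sketch tables above, not his demo query) is to compare the default plan against one where you force a hash join, which tends to keep the join and downstream operators in batch mode instead of a row-mode nested loop join with an index spool.

    -- Force a hash join so the work over the columnstore data can stay in batch mode.
    SELECT f.OrderId, f.SalesAmount, s.ShipModeName
    FROM dbo.FactSales AS f
    INNER JOIN dbo.ShipMode AS s
        ON s.ShipModeId = f.ShipModeId
    OPTION (HASH JOIN);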

Columnstore Partitioning

Niko Neugebauer warns against partitioning small tables with clustered columnstore indexes:

Needless to say, looking at the execution plans, you notice that the actual execution plan shows a 10-times difference between them, even though both tables contain the very same data!
The query cost for the partitioned table is staggering – it is around 10 times bigger (~8.8) versus (~0.81) for the first query.
The execution times reflect this situation in part: 12 ms vs 91 ms. The non-partitioned table performs almost 9 times faster overall, and the CPU time spent reflects it: 15 ms vs 94 ms. Remember that both tables are Columnstore Index-based! Partitioning your table in the wrong way carries a huge penalty that might not be directly detectable through the execution plan of complex queries. Well, you might want to use the CISL, just saying.

If you can’t fill a single rowgroup, your partition is too granular.  Even then, I’d like to see double-digit rowgroups per partition, though that’s just me.
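
To check whether a partitioning scheme is too granular, a DMV query along these lines (against a hypothetical partitioned table) shows how many rowgroups each partition has and how full they are:

    -- Rowgroups per partition for a hypothetical partitioned columnstore table.
    -- Lots of partitions holding a single small rowgroup is a sign of over-partitioning.
    SELECT
        partition_number,
        COUNT(*)        AS rowgroup_count,
        SUM(total_rows) AS total_rows
    FROM sys.dm_db_column_store_row_group_physical_stats
    WHERE object_id = OBJECT_ID('dbo.FactSalesPartitioned')
    GROUP BY partition_number
    ORDER BY partition_number;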

BatchMode Execution

Sunil Agarwal describes BatchMode execution with columnstore indexes:

You may be wondering what this magic number of 900 rows within a batch is. Well, when executing a query in BatchMode, SQL Server allocates a 64 KB structure to group the rows. The number of rows in this structure can vary between 64 and 900, depending upon the number of columns selected. For the example above, there are two columns that are referenced, and X marks the rows that qualified in the BatchMode structure shown in the picture below. If the SCAN is part of a bigger query execution tree, the pointer to this structure is passed to the next operator for further processing. Not all operators can be executed in BatchMode. Please refer to Industry Leading Analytics Query Performance for details on BatchMode operators.

Under the right circumstances, BatchMode execution can be a major performance benefit.
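
If you want to see this for yourself, run something like the following (against the hypothetical fact table from earlier) with the actual execution plan enabled and check the Actual Execution Mode property on the scan and aggregate operators; it should read Batch.

    -- A scan plus aggregate over a columnstore table; both operators are
    -- candidates for batch mode execution.
    SELECT ShipModeId, COUNT(*) AS OrderCount
    FROM dbo.FactSales
    GROUP BY ShipModeId;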

Columnstore Elimination

Sunil Agarwal has a two-part series on columnstore data elimination.  First up is column elimination:

Now, let us run the same query on the table with a clustered columnstore index, as shown in the picture below. Note that the logical IOs for the LOB data are reduced by three quarters for the second query, as only one column needs to be fetched. You may wonder why LOB? Well, the data in each column is compressed and then stored as a BLOB. Another point to note is that the query with the columnstore index runs much faster: 25x for the first query and 4x for the second query.
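
To see column elimination for yourself, compare logical reads between a single-column aggregate and a SELECT * against the same columnstore table (again, a hypothetical table rather than Sunil's example):

    SET STATISTICS IO ON;

    -- Only the SalesAmount column's segments need to be read here...
    SELECT SUM(SalesAmount) AS TotalSales
    FROM dbo.FactSales;

    -- ...while this fetches every column's compressed segments (reported as LOB reads).
    SELECT *
    FROM dbo.FactSales;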

Next up is rowgroup elimination:

In the context of rowgroup elimination, let us revisit the previous example with sales data:

  • You may not even need partitioning to filter the rows for the current quarter, as rows are inserted in SalesDate order, allowing SQL Server to pick the rowgroups that contain the rows for the requested date range.
  • If you need to filter the data for a specific region within a quarter, you can partition the columnstore index at the quarterly boundary and then load the data into each partition after sorting on the region. If the incoming data is not sorted on region, you can follow these steps: (a) switch out the partition into a staging table T1, (b) drop the clustered columnstore index (CCI) on T1 and create a clustered btree index on T1 on column ‘region’ to order the data, and (c) create the CCI while dropping the existing clustered index. A general recommendation is to create the CCI with DOP=1 to keep the perfect ordering.

The takeaway from these two articles is that queries which hit a small percentage of columns and stick to a relatively small number of rowgroups will likely perform better.  For people who understand normal B-tree indexes, the second point seems clear enough, but the first point is at least as important.
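
The partition-by-partition re-sort Sunil describes might look something like this sketch (the table, index, and partition number are hypothetical, and switching back in also requires a check constraint on the staging table matching the partition boundary):

    -- (a) Switch the partition out to a staging table with a matching clustered columnstore index.
    ALTER TABLE dbo.FactSalesPartitioned SWITCH PARTITION 3 TO dbo.FactSalesStaging;

    -- (b) Drop the staging table's CCI and order the data with a clustered B-tree index on Region.
    DROP INDEX CCI_Staging ON dbo.FactSalesStaging;
    CREATE CLUSTERED INDEX CCI_Staging ON dbo.FactSalesStaging (Region);

    -- (c) Recreate the CCI over the sorted data; MAXDOP = 1 preserves the ordering within rowgroups.
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_Staging
        ON dbo.FactSalesStaging
        WITH (DROP_EXISTING = ON, MAXDOP = 1);

    -- Switch the re-sorted data back into its original partition.
    ALTER TABLE dbo.FactSalesStaging SWITCH TO dbo.FactSalesPartitioned PARTITION 3;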
