The Cost Of Compression

Erin Stellato looks at the COMPRESS function:

While this data is interesting, I’m more curious about how compression affects my everyday SELECT queries. I have a set of three stored procedures that each have one SELECT query, so that each index is used. I created these procedures for each table, and then wrote a script to pull values for first and last names to use for testing. Here is the script to create the procedures.

Once we have the stored procedures created, we can run the script below to call them. Kick this off and then wait a couple minutes…

To me, the COMPRESS function is most useful for compressing information you tend not to search through but need to keep in the database, like HTML markup or long descriptions.
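
As a minimal sketch of that pattern (the table and column names here are hypothetical), COMPRESS gzips a value into a varbinary(max), and DECOMPRESS reverses it on the rare occasions you need to read the markup back:

-- Store rarely-read HTML compressed; the column holds GZip-compressed bytes
CREATE TABLE dbo.ProductDescription
(
    ProductID       int NOT NULL,
    DescriptionHtml varbinary(max) NULL
);

INSERT dbo.ProductDescription (ProductID, DescriptionHtml)
VALUES (1, COMPRESS(N'<p>A long block of HTML markup...</p>'));

-- Cast DECOMPRESS back to the original string type when reading
SELECT ProductID,
       CAST(DECOMPRESS(DescriptionHtml) AS nvarchar(max)) AS DescriptionHtml
FROM dbo.ProductDescription;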

Filtered Indexes And Parameters

Erik Darling shows an example of what happens when you have a filtered index and parameterize the filter:

It Is Known

That when you use filtered indexes, they get ignored when your queries are parameterized. This is a Plan Caching Thing©, of course. The simplest example is a bit column with a filtered index. If your index is on WHERE Bit = 1, it doesn’t have data for WHERE Bit = 0. That index would only be suitable for one variation of the query, so caching a plan that uses an index which can’t be reused for every variation isn’t feasible.
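
A minimal sketch of that behavior, using hypothetical object names: the literal predicate can match the filtered index, while the parameterized version compiles a plan that must be correct for any parameter value and so falls back to another access path.

CREATE TABLE dbo.Orders
(
    OrderID     int IDENTITY PRIMARY KEY,
    IsProcessed bit NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Orders_Unprocessed
    ON dbo.Orders (OrderID)
    WHERE IsProcessed = 0;

-- Literal: the optimizer can use the filtered index
SELECT OrderID FROM dbo.Orders WHERE IsProcessed = 0;

-- Parameterized: the cached plan must also work for @IsProcessed = 1,
-- so the filtered index is ignored
DECLARE @IsProcessed bit = 0;
SELECT OrderID FROM dbo.Orders WHERE IsProcessed = @IsProcessed;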

Read on for a couple of examples, and check the comments on the post as well.

BatchMode Execution

Sunil Agarwal describes BatchMode execution with columnstore indexes:

You may be wondering what this magic number of 900 rows within a batch is. Well, when executing a query in BatchMode, SQL Server allocates a 64KB structure to group the rows. The number of rows in this structure can vary between 64 and 900 depending upon the number of columns selected. For the example above, there are two columns that are referenced, and X marks the rows that qualified in the BatchMode structure shown in the picture below. If SCAN is part of a bigger query execution tree, the pointer to this structure is passed to the next operator for further processing. Not all operators can be executed in BatchMode. Please refer to Industry leading analytics query performance for details on BatchMode operators.

Under the right circumstances, BatchMode execution can be a major performance benefit.
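
As a hedged illustration (the table and index names are made up), any query that touches a columnstore index is eligible for BatchMode, and the actual execution plan reports the mode per operator:

-- A columnstore index makes BatchMode available for this table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales
    ON dbo.Sales (SaleDate, Amount);

-- Run with the actual plan enabled and inspect the scan and aggregate
-- operators: their "Actual Execution Mode" property shows Batch or Row
SELECT YEAR(SaleDate) AS SaleYear, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY YEAR(SaleDate);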

Columnstore Elimination

Sunil Agarwal has a two-part series on columnstore data elimination.  First up is column elimination:

Now, let us run the same query on the table with a clustered columnstore index as shown in the picture below. Note that the logical I/Os for the LOB data are reduced by three-quarters for the second query, as only one column needs to be fetched. You may wonder why LOB? Well, the data in each column is compressed and then stored as a BLOB. Another point to note is that the query with the columnstore index runs much faster: 25x for the first query and 4x for the second query.
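
A quick way to see column elimination yourself (assuming a hypothetical dbo.Sales table with a clustered columnstore index) is to compare the LOB logical reads reported for a one-column query against a query that touches several columns:

SET STATISTICS IO ON;

-- Only the Quantity column's segments are read
SELECT SUM(Quantity) FROM dbo.Sales;

-- Referencing more columns forces more columns' segments to be read
SELECT SUM(Quantity), AVG(Price), MAX(SaleDate) FROM dbo.Sales;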

Next up is rowgroup elimination:

In the context of rowgroup elimination, let us revisit the previous example with sales data:

  • You may not even need partitioning to filter the rows for the current quarter, as rows are inserted in SalesDate order, allowing SQL Server to pick the rowgroups that contain the rows for the requested date range.
  • If you need to filter the data for a specific region within a quarter, you can partition the columnstore index at a quarterly boundary and then load the data into each partition after sorting on the region. If the incoming data is not sorted on region, you can follow these steps (sketched in T-SQL below): (a) switch out the partition into a staging table T1; (b) drop the clustered columnstore index (CCI) on T1 and create a clustered btree index on T1 on column ‘region’ to order the data; (c) now create the CCI while dropping the existing clustered index. A general recommendation is to create the CCI with DOP=1 to keep the perfect ordering.
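
Those steps look roughly like this in T-SQL (a hedged sketch with hypothetical table, index, and partition names; note that CREATE INDEX ... WITH (DROP_EXISTING = ON) requires reusing the existing index name):

-- (a) Switch the partition out into a staging table with matching indexes
ALTER TABLE dbo.Sales SWITCH PARTITION 3 TO dbo.SalesStage;

-- (b) Replace the staging table's CCI with a clustered btree on region
DROP INDEX CCI_SalesStage ON dbo.SalesStage;
CREATE CLUSTERED INDEX CCI_SalesStage ON dbo.SalesStage (Region);

-- (c) Convert the btree back to a CCI, preserving the sort;
--     MAXDOP 1 keeps the ordering intact
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesStage
    ON dbo.SalesStage
    WITH (DROP_EXISTING = ON, MAXDOP = 1);

-- Switch the ordered data back in
ALTER TABLE dbo.SalesStage SWITCH TO dbo.Sales PARTITION 3;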

The takeaway from these two articles is that queries which hit a small percentage of columns and stick to a relatively small number of rowgroups will likely perform better.  For people who understand normal B-tree indexes, the second point seems clear enough, but the first point is at least as important.

Power Saving Mode

Mike Walsh recommends ensuring your servers are not using any form of power saving mode:

Balanced power mode has a major impact on SQL Server performance. Simply put, you’ve told Windows Server (through Control Panel) or your server hardware (through BIOS settings) to sacrifice a few performance-minded things for the sake of using a little less power. In fact, in plenty of studies and blog posts by folks in the community (including this post by Glenn Berry), you can see the effect of CPU power saving, especially. In essence, the CPUs will run at a lower clock multiplier when demand isn’t deemed high enough, and that clock speed will only increase when demand is high enough for long enough. This results in a slower CPU speed during normal operations. That setting actually works well on my laptop or tablet when I want to conserve battery life and don’t have a workload that is sensitive to CPU speed.

For a SQL Server, though? That is horrible for performance. Windows Balanced mode can also cause other components to run slower or behave differently than when the server is running in High Performance mode. For instance, USB ports can be underpowered and network interfaces can be underpowered or even go to sleep. Frankly, for a SQL Server, nothing good comes out of these modes.

Read on for a few methods for checking whether your servers are affected.
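
One commonly shared way to check from inside SQL Server (an assumption-laden sketch: it reads the Windows registry via the undocumented xp_regread, so it needs appropriate permissions) is to compare the active power scheme GUID against the well-known Balanced and High Performance GUIDs:

DECLARE @scheme nvarchar(64);

-- Undocumented extended procedure; path and value name are standard Windows locations
EXEC master.dbo.xp_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
    @value_name = N'ActivePowerScheme',
    @value      = @scheme OUTPUT;

SELECT CASE @scheme
           WHEN N'381b4222-f694-41f0-9685-ff5bb260df2e' THEN N'Balanced'
           WHEN N'8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c' THEN N'High Performance'
           ELSE @scheme
       END AS ActivePowerScheme;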

ETL With Spark

Eric Maynard demonstrates that moving data across Hadoop clusters can be sped up by using Spark:

By leveraging Spark for distribution, we can achieve the same results much more quickly and with the same amount of code. By keeping data in HDFS throughout the process, we were able to ingest the same data as before in about 36 seconds. Let’s take a look at the Spark code which produced equivalent results as the bash script shown above — note that a more parameterized version of this code, and of all code referenced in this article, can be found below in the Resources section.

Read the whole thing.

Cardinality Estimator Regressions

SQL Scotsman has a great post on figuring out which of your queries have become worse as a result of the SQL Server cardinality estimator changes in 2014:

Instantly it is apparent that the most resource-intensive query was the same query across both workload tests, and note that the query hash is consistent too. It is also apparent that this query performs worse under the new cardinality estimator, model version 120. To investigate and understand why this particular query behaves differently under the different cardinality estimators, we’ll need to look at the actual query and the execution plans.

Looking at the information in #TempCEStats and the execution plans, the problematic query below belongs to the SLEV stored procedure.
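
To reproduce that kind of comparison on SQL Server 2014, one approach (the query here is hypothetical) is to run the suspect statement once under the default CE and once with the legacy model forced via trace flag 9481, then diff the two plans:

-- Default: new CE (model version 120)
SELECT o.OrderID, o.Quantity
FROM dbo.Orders AS o
WHERE o.CustomerID = 42;

-- Forced legacy CE (model version 70); QUERYTRACEON needs elevated permissions
SELECT o.OrderID, o.Quantity
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (QUERYTRACEON 9481);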

There’s also a discussion of Query Store in there, but it’s important to understand how to figure this out even if you’re on 2014 and don’t have access to Query Store.

DBCC OPTIMIZER_WHATIF

Derik Hammer shows how to use DBCC OPTIMIZER_WHATIF to get an idea of how your query would run with different hardware:

DBCC OPTIMIZER_WHATIF can be used to pull down your resources or augment them. Often the differences in the execution plans have to do with parallelism and memory grants. This is an example of an execution plan running on an underpowered development machine.
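
As community posts describe it (this is an undocumented DBCC command, so treat the sketch below as illustrative rather than authoritative), you set the pretend hardware for your session, recompile the query of interest, and reset when done:

-- Pretend this session's optimizer sees 32 schedulers and 256 GB of memory
DBCC OPTIMIZER_WHATIF(CPUs, 32);
DBCC OPTIMIZER_WHATIF(MemoryMBs, 262144);

-- Recompile so the new "hardware" is considered during optimization
SELECT COUNT(*) FROM dbo.BigTable OPTION (RECOMPILE);

-- Put everything back
DBCC OPTIMIZER_WHATIF(ResetAll);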

This is a good tool to help figure out what an execution plan probably would look like in production when your test environment is much smaller.

sp_executesql Performance

Grant Fritchey takes a look at the performance of sp_executesql:

The results are fun.

Execution Type | Average Duration
sp_executesql  | 57946.03187251
Ad Hoc         | 14788.8924302789

What’s going on? Is the conclusion that, in fact, ad hoc queries are faster than sp_executesql?

Absolutely not.

I cheated.

Using sp_executesql is about protecting yourself when executing dynamic SQL, not about performance gains.
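
A minimal sketch of that protection (the table and column names are invented): concatenated input is executed as code, while sp_executesql passes the value as a typed parameter that can never change the statement's shape.

DECLARE @LastName nvarchar(50) = N'O''Brien''; DROP TABLE dbo.Users; --';

-- Unsafe: the input is spliced into the statement text
-- EXEC (N'SELECT UserID FROM dbo.Users WHERE LastName = N''' + @LastName + N''';');

-- Safe: the value stays data, whatever it contains
EXEC sys.sp_executesql
    N'SELECT UserID, LastName FROM dbo.Users WHERE LastName = @LastName;',
    N'@LastName nvarchar(50)',
    @LastName = @LastName;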

Wait Stats

David Alcock provides an introduction to wait stats and why they’re useful for performance tuning:

So here are two different ways that we can use SQL Server's wait statistics for troubleshooting purposes. Both views give us really useful information, but they have different purposes. If we wanted to look back over time, then sys.dm_os_wait_stats will give us a view of wait time totals. Typically we would capture the information via a scheduled job and analyse the data for spikes during periods where issues might be suspected.

For real-time analysis of wait statistics, we should base queries on the sys.dm_os_waiting_tasks view, where we can see accurate wait duration values as they are happening within our instance.

In my opinion, wait statistics are the most important piece of information when troubleshooting SQL Server, so learning about the different types is vital for anyone using SQL. Thankfully there is a wealth of really useful information about wait statistics out there; I’ve listed some of my favourite posts below.
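
For the look-back-over-time view, a starting point like the following surfaces the top cumulative waits since the last restart (which wait types to exclude varies by environment; the short list here is just an assumption):

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;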

Click through for an example, as well as links to more resources.
