Press "Enter" to skip to content

Category: Performance Tuning

Measuring File Latency in SQL Server

Anthony Nocentino has a script and some tips for us:

This post is a reference for retrieving IO statistics for data and log files in SQL Server. We’ll look at where we can find IO statistics in SQL Server, query them to produce meaningful metrics, and discuss some key points when interpreting this data.

Click through for the script, and then a bulleted list of things to keep in mind as you’re reviewing the data.
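
The DMV behind these numbers is sys.dm_io_virtual_file_stats. As a rough sketch of the kind of latency calculation the post builds (this is not Anthony’s script, so click through for the real thing):

-- The DMV's counters are cumulative since instance startup, so a single
-- snapshot shows lifetime averages; sample twice and diff for current behavior.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
    1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON vfs.database_id = mf.database_id
        AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;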

Against Reporting Tables

Erik Darling doesn’t like reporting tables:

I’ve seen a lot of crazy ways for people to run reports on live data, but one of the worst ideas is creating a reporting table based on the set of data a user wants, and then letting them query that data.

As usual, Erik says something I want to disagree with, and then I read the post and don’t really disagree with him—or if I do, he’s already laid out the “Yes, I understand X” exception. I’ve used reporting tables to good effect, but the important thing is that they’re general-purpose and designed into the application, not specific to a single user.

Power BI Data Models with Multiple Aggregation Tables

Phil Seamark continues a series on Power BI aggregations:

Why might you consider having more than one aggregation table? The short answer is speed and overall resource efficiency.

Consider a heavily used Power BI report where page-load time is considered critical. A typical report may have half a dozen visuals on a page showing values computed over various grains. If the model used by the report has no aggregation tables, all calculations use the raw fact tables to produce values for each metric.

Adding an aggregation table to the model allows the same calculations to run against much smaller tables and produce the same results. Calculations that use smaller aggregation tables require much less effort per query from the server hosting the data model.

It’s interesting to see just how easy the process is.
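
To make the grain idea concrete, here is a minimal T-SQL sketch of what an aggregation table is at heart, with hypothetical table and column names; in Power BI you’d build a table like this in the source (or in Power Query) and then map it to the fact table:

-- Hypothetical schema: dbo.FactSales is at order-line grain and very large.
-- The aggregation table is at (date, product) grain and far smaller, so any
-- visual at that grain or coarser never has to scan the raw fact table.
SELECT
    OrderDateKey,
    ProductKey,
    SUM(SalesAmount)   AS SalesAmount,
    SUM(OrderQuantity) AS OrderQuantity
INTO dbo.Sales_Agg_Date_Product
FROM dbo.FactSales
GROUP BY OrderDateKey, ProductKey;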

Deciding on Forced Parameterization or Optimize for Ad Hoc Workloads

Erik Darling hosts a showdown:

I often speak with people who are confused about what these settings do, and which one they should be using to fix certain problems.

The first myth is that Optimize For Ad Hoc Workloads has some special effect on queries run outside of stored procedures: it does not. It’s very poorly named in that regard. There are no special optimizations applied because of that setting.

Read the whole thing.
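
For reference, the two settings live at different scopes, which is half the confusion. A quick sketch of enabling each (YourDatabase is a placeholder name):

-- Forced parameterization is per-database: literals in ad hoc statements are
-- replaced with parameters so one plan can be reused across similar queries.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;

-- Optimize for ad hoc workloads is instance-wide: the first execution of a new
-- statement caches only a small plan stub, and the full plan is cached on the
-- second execution. It changes what gets cached, not how queries are optimized.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;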

Remember CPU Time when Optimizing Power BI Premium Reports

Chris Webb has a public service announcement:

When you tune a Power BI report or dataset refresh, your first job is to make it run faster – users hate slow reports and late data. However, if you are using Power BI Premium, you also need to think about reducing the amount of CPU used by a query or refresh, and in this post I’ll explain why.

Click through for that explanation.

Improving Dataset Refresh with Query Folding and the Dataflows Connector

Chris Webb has a performance tip for us:

You may have noticed that a new dataflows connector was announced in the August 2021 release of Power BI Desktop, and that it now supports query folding between a dataset and a dataflow – which you may be surprised to learn was not possible before. In this post I thought I’d take a look at how much of an improvement this can make to dataset refresh performance.

Click through for the demonstration.

The Cost of Measures in Power BI Live Connection Reports

Chris Webb explains the cost side of the ledger when it comes to measure creation:

You probably know that it’s a best practice to build your Power BI datasets in a separate .pbix file from your reports – among other things it means that different people can develop the dataset and reports. You may also know that if you are building a report in Power BI Desktop with a Live connection to a published dataset or Azure Analysis Services you can define your own measures inside the report. While this is very convenient, if you create too many measures there’s a price to pay in terms of query performance.

Click through for a demonstration of this.

Performance Tips when Working with Large Datasets in R

Mira Celine Klein continues a series on performance tuning R code:

Whether your dataset is “large” depends not only on the number of rows but also on the method you are going to use. It’s easy to compute the mean or sum of as many as 10,000 numbers, but a nonlinear regression with many variables can already take some time with a sample size of 1,000.

Sometimes it may help to parallelize (see part 3 of the series). But with large datasets, you can use parallelization only up to the point where working memory becomes the limiting factor. In addition, there may be tasks that cannot be parallelized at all. In these cases, the strategies from part 2 of this series may be helpful, and there are some more ways:

Click through for four options.

Caching Function Results in an R Package

Maëlle Salmon and Christophe Dervieux show us ways to cache the results of function calls in R:

Caching means that if you call a function several times with the exact same input, the function is only actually run the first time. The result is stored in a cache of some sort (more practical details later!). Every other time the function is called with the same input, the result is retrieved from the cache unless invalidated. You will often think of caching as something valid in only one R session, but we’ll see it can be persistent across sessions via storage on disk.

As a quick note, this makes sense for pure functions, ones whose output depends only on their inputs and which have no side effects. If your function has side effects, caching might not give you what you expect.

Compilations per Second in SQL Server

Fabiano Amorim clarifies a metric’s definition:

As you can see, the number of SQL Compilations/Sec is very high. It’s important to step back and remember the general description and guideline for this counter and understand what I mean by “high”:

Official Description: “Number of SQL compilations per second. Indicates the number of times the compile code path is entered.”

Read on for a dive into parameterization of ad hoc SQL statements, how an instance can have a high compilations/sec value relative to batch requests/sec, and how that can affect performance in the long run.
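
If you want to check the raw numbers yourself, the counters are exposed in sys.dm_os_performance_counters. Note that the values are cumulative, so take two samples and divide the difference by the interval to get an actual per-second rate:

SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
    AND counter_name IN (N'SQL Compilations/sec', N'Batch Requests/sec');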
