Press "Enter" to skip to content

Category: Performance Tuning

Troubleshooting I/O Issues in SQL Server

Ajay Dwivedi shares some advice:

Storage performance is something that puzzles a lot of SQL Server professionals. So in this blog, I will cover the basic steps I perform to ensure I get the best performance from the underlying storage.

Read on for some thoughts on storage testing prior to SQL Server installation, as well as what to do to ensure your SQL Server instance is up and at 'em.
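
If you want a quick way to see whether storage is keeping up after the fact, here's a minimal Python sketch, assuming the pyodbc package, an ODBC driver, and a placeholder connection string, which pulls per-file latency from the sys.dm_io_virtual_file_stats DMV. It's not Ajay's method, just an illustrative starting point.

```python
import pyodbc

# Assumption: adjust this connection string for your environment.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=localhost;Database=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# Average read/write latency per database file since instance startup.
SQL = """
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.execute(SQL).fetchall():
        print(row.database_name, row.physical_name,
              row.avg_read_ms, row.avg_write_ms)
```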

Performance Tuning Tables with Filters in Power BI

Chris Webb doesn’t want to wait:

There are four columns: Date, Town and two measures. One measure called [Fast Measure] is, as the name suggests, very quick to execute; the other measure, called [Slow Measure], is very complex and slow. The definitions are irrelevant here. Notice that there is a filter on this table visual so only the rows where [Fast Measure] is greater than 1 are shown.

If I measure the amount of time to render this table in Performance Analyzer, it takes around 17.5 seconds to run. However, if I remove the filter on [Fast Measure], the table only takes 8 seconds to run. Why? The filter is on the fast measure and surely more rows are returned without the filter, so wouldn’t the slow measure be evaluated more?

Click through for the answer.

Diagnosing Kafka Message Throughput Reductions

Danica Fine and Nikoleta Verbeck troubleshoot an issue:

One of the greatest advantages of Kafka is its ability to maintain high throughput of data. Unsurprisingly, high throughput starts with the producers. Prior to sending messages off to the brokers, individual records destined for the same topic-partition are batched together as a single compressed collection of bytes. These batches are then further aggregated before being sent to the destination broker.

Batching is a great thing, and we (generally) want it. But how do you know when it’s working well and when it’s not?

This first post covers message throughput but there will be several other topics in the series as well.
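
To make the knobs concrete, here's a short Python sketch using the confluent-kafka client; the broker address and topic are placeholders and the values are illustrative rather than recommendations. The idea is that batch.size, linger.ms, and compression.type jointly control how records bound for the same topic-partition get batched and compressed.

```python
from confluent_kafka import Producer

# Illustrative batching-related settings; tune for your own workload.
producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "batch.size": 64 * 1024,    # upper bound, in bytes, on one batch
    "linger.ms": 20,            # wait up to 20 ms for a batch to fill
    "compression.type": "lz4",  # the batch is compressed as a single unit
})

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

for i in range(10_000):
    producer.produce("demo-topic", key=str(i),
                     value=f"event-{i}", on_delivery=on_delivery)
    producer.poll(0)  # serve delivery callbacks as we go

producer.flush()  # drain any batches still waiting on linger.ms
```

Larger linger.ms values trade a bit of latency for fuller, better-compressed batches, which is typically where throughput gains come from.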

Parallel Loading of Tables in Power BI Dataset Refresh

Chris Webb hits the turbo button:

Do you have a large dataset in Power BI Premium or Premium Per User? Do you have more than six tables that take a significant amount of time to refresh? If so, you may be able to speed up the performance of your dataset's refresh by increasing the number of tables that are refreshed in parallel, using a feature that was released in August 2022 but which you may have missed.

Click through for that tip.
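
For reference, one place this kind of parallelism is exposed is the enhanced refresh REST API, whose request body accepts a maxParallelism setting; whether that's the exact feature Chris covers is behind the link. Here's a hedged Python sketch with placeholder token, workspace, and dataset IDs.

```python
import requests

# Placeholders: supply a real AAD access token, workspace ID, and dataset ID.
TOKEN = "<access-token>"
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes")

# Enhanced refresh request: maxParallelism raises the number of
# objects processed in parallel during the refresh.
body = {
    "type": "full",
    "commitMode": "transactional",
    "maxParallelism": 10,
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()  # 202 Accepted: the asynchronous refresh is queued
```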

Testing Azure SQL DB Hyperscale Performance

Reitse Eskens continues a series on performance testing Azure SQL DB tiers:

So far, my blogs have been on the different Azure SQL DB offerings, where there are differences between the DTU-based and core-based models. But in general, the design is recognizable. With the hyperscale tier, many things change. There are still cores and memory of course, but the rest of the design is totally different. I won't go into all the details; you're better off reading them here [https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale?view=azuresql] and here [https://learn.microsoft.com/en-us/azure/azure-sql/database/hyperscale-architecture?view=azuresql], but the main differences are support for up to 100 TB of data in one database (all the other tiers max out at 4 TB), fast database restores based on file snapshots, rapid scale-out, and rapid scale-up.

There are differences in testing this one versus the others, so buyer beware.

Azure SQL DB GP Compute Optimized Performance Comparison

Reitse Eskens continues a series on comparing the performance of different Azure SQL DB tiers:

The compute optimized tier starts at a minimum of 8 cores and goes up to 72. The maximum storage starts at 1 TB and goes up to 4 TB. The number of TempDB files is undisclosed, but the size starts at 37 GB and goes up to 333 GB, a huge difference from the regular provisioned tier! The disk limitations are disclosed as well. The log rate starts at 36 MB/s and maxes out at 50 MB/s. The same goes for the data: starting at 2,560 IOPS and maxing out at 12,800. One IOP (Input/Output oPeration) is tied to the disk cluster size. As these clusters are 4 KB, reading or writing one data page (8 KB) equals two IOPS. 2,560 IOPS works out to roughly 10 MB per second, and the top end comes to about 50 MB per second. Keep this in mind when you're working out which tier you need, because disk performance is usually essential.

Read on to see how it performs on Reitse’s standardized test workload.
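
Reitse's arithmetic is easy to sanity-check. Here's a tiny Python sketch of the conversion he describes, under the same assumptions of 4 KB I/O operations and 8 KB data pages:

```python
# 4 KB per I/O operation and 8 KB per data page, per the post.
IO_SIZE_KB = 4
PAGE_SIZE_KB = 8

def mb_per_sec(iops: int) -> float:
    return iops * IO_SIZE_KB / 1024

def pages_per_sec(iops: int) -> float:
    return iops / (PAGE_SIZE_KB / IO_SIZE_KB)  # two I/Os per 8 KB page

for iops in (2560, 12800):
    print(f"{iops} IOPS -> {mb_per_sec(iops):.0f} MB/s, "
          f"{pages_per_sec(iops):.0f} pages/s")
# 2560 IOPS -> 10 MB/s, 1280 pages/s
# 12800 IOPS -> 50 MB/s, 6400 pages/s
```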

General Purpose Serverless Azure SQL DB Performance

Reitse Eskens doesn’t need a server:

In my previous blog I wrote about the premium tier, the one that can be compared with the business critical tier. Now we're moving away from the DTU models and back to what we DBA'ers really understand: cores, memory, and disks. Before I dive into the limitations, there's one thing you need to understand. The serverless tier is made for intermittent use. If you're using the tier for more than 25% of the time (or about 183 hours per month), you're better off going provisioned. This has nothing to do with performance and everything to do with cost. The tipping point where provisioned becomes cheaper is around 25% of the time.

There’s some solid advice on how to get the service to go to sleep but the bulk of the article revolves around performance.

Premium Azure SQL DB Performance

Reitse Eskens is moving on up:

The premium tier starts at 125 DTUs and goes up to 4000. DTUs are made up of a magic mix of CPU, memory, read IOPS, and write IOPS. An IOP (Input/Output oPeration) should be a 4 KB (disk cluster size) read or write. The range of 125 to 4000 DTUs translates to 500 KB/sec up to 32,000 KB/sec. As we're used to data pages, which are 8 KB in size, you could say these databases are able to pull 62 to 4,000 pages per second from disk. When there are simultaneous writes, you'll share the performance; at least, that's my interpretation of the IOP. For the DTU part, I'm still struggling to get a good grip on what it exactly is, beyond the magic mix.

It’s also a good idea to compare this to what the Standard tier has to offer. The general data patterns look similar with respect to elbows but the magnitudes are quite different, with Premium P1 starting out around Standard S4 in the test for insertion but more like S3 for selects.

High Memory Grants in SQL Server

Ajay Dwivedi tracks down memory grants:

Very often, especially in mixed-workload environments where the server deals with OLTP and OLAP queries together, concurrent sessions allocated high memory grants cause the entire SQL Server workload to suffer.

To detect such a situation, I like to watch my critical metrics dashboards. For example, the Monitoring – Live – All Servers dashboard of the free SQLMonitor tool highlights all these metrics.

Click through for an example of the dashboard as well as a diagnostic query which helps find high-memory grant queries.
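
As a generic companion to that dashboard, here's a hedged Python sketch, again assuming pyodbc and a placeholder connection string, which polls the sys.dm_exec_query_memory_grants DMV for sessions holding or awaiting large grants. Ajay's linked diagnostic query is more thorough; this just shows the shape of the check.

```python
import pyodbc

# Assumption: adjust this connection string for your environment.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=localhost;Database=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# Sessions holding (or waiting on) the largest memory grants right now.
SQL = """
SELECT mg.session_id,
       mg.requested_memory_kb / 1024 AS requested_mb,
       mg.granted_memory_kb   / 1024 AS granted_mb,
       mg.used_memory_kb      / 1024 AS used_mb,
       mg.wait_time_ms,
       st.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS st
ORDER BY mg.granted_memory_kb DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.execute(SQL).fetchall():
        print(row.session_id, row.granted_mb,
              (row.query_text or "")[:80])
```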
