Press "Enter" to skip to content

Category: Performance Tuning

Tips for Scaling Apache Kafka

Narendra Lakshmana Gowda tunes a Kafka cluster:

Apache Kafka is known for its ability to process a huge quantity of events in real time. However, to handle millions of events, we need to follow certain best practices while implementing both Kafka producer services and consumer services.

Before you start using Kafka in your projects, let’s understand when to use it:

Much of the advice is pretty standard for performance tuning in Kafka, like setting batch size and linger time on the producer or managing consumers in a consumer group.

Improving Power Query CSV File Performance with Data Columns

Chris Webb makes things go faster:

A few weeks ago I replied to a question on Reddit where someone was experiencing extremely slow performance when importing data from a CSV file using Power Query. The original poster worked out the cause of the problem and the solution themselves: they saw that removing all date columns from their query made their Power Query query much faster, and that using the Date.FromText function and specifying the date format solved the problem. While I couldn’t reproduce the extreme slowness that was reported, I was able to reproduce a performance difference between the two approaches, and Curt Hagenlocher of the Power Query team confirmed that this was expected behaviour.

Read on for the example and explanation.

Preventing Skew in Teradata

Sudheer Kumar Lagisetty shares some performance tuning advice:

Teradata performance optimization and database tuning are crucial for modern enterprise data warehouses. Effective data distribution strategies and data placement mechanisms are key to maintaining fast query responses and system performance, especially when handling petabyte-scale data and real-time analytics. 

Understanding data distribution mechanisms, workload management, and data warehouse management directly affects query optimization, system throughput, and database performance optimization. These database management techniques enable organizations to enhance their data processing capabilities and maintain competitive advantages in enterprise data analytics.

Click through for some tips around data distribution. This idea becomes important in an MPP architecture.
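
As a rough sketch of what a skew check looks like in Teradata SQL (the table and column names here are hypothetical, not from the linked article): count rows per AMP, and if the distribution is lopsided, rebuild the table on a higher-cardinality primary index.

-- Rows per AMP for a hypothetical sales_txn table; one AMP holding far
-- more rows than the rest means that AMP does far more of the work.
SELECT HASHAMP(HASHBUCKET(HASHROW(customer_id))) AS amp_no,
       COUNT(*) AS row_count
FROM sales_txn
GROUP BY 1
ORDER BY row_count DESC;

-- Rebuild on a higher-cardinality column to spread rows evenly.
CREATE TABLE sales_txn_even AS (
    SELECT * FROM sales_txn
) WITH DATA
PRIMARY INDEX (txn_id);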

Speed Differences with Separating Data and Log Files

Brent Ozar performs a test:

I’ve already explained that no, it doesn’t make your database server more reliable – and in fact, it’s the exact opposite. But what about performance?

The answer is going to depend on your hardware and workload, but let’s work through an example. I’ll take the first lab workload from the Mastering Server Tuning class and set it up on an AWS i3en.2xlarge VM, which has 8 cores, 64GB RAM, and two 2.5TB NVMe SSDs. (This was one of the cheapest SQL-friendly VM types with two SSDs, but of course there are any number of ways you could run a test like this, including EBS volumes.)

I would expect cloud versus on-premises answers to be quite different, because cloud services tend to throttle you hard on how much storage throughput you’re allowed to have. For that reason, the results make perfect sense in AWS (or Azure or GCP for that matter), but unless your on-prem solution has hard throttles on IOPS or throughput because your sysadmins are monsters, the limits of performance would be in how hard you can push the drives or your storage controllers.

Ultimately, the most appropriate answer is to test your systems and not rely on expectations, especially if you’re shifting from on-premises to a cloud (or vice versa).
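
As a point of reference, separating the files is just a matter of where you point them at creation time. This is a minimal T-SQL sketch with placeholder paths, not Brent’s lab setup:

-- Data file on one NVMe drive, log file on the other.
CREATE DATABASE SalesDemo
ON PRIMARY
    (NAME = SalesDemo_data, FILENAME = 'D:\Data\SalesDemo.mdf')
LOG ON
    (NAME = SalesDemo_log, FILENAME = 'E:\Log\SalesDemo_log.ldf');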

Temp Table Performance in PostgreSQL

Jobin Augustine shares a warning:

PostgreSQL is one of the most powerful database systems in the world. I have always been passionate about its great power, especially its modern SQL language features.

However, that doesn’t mean everything is great. There are areas where it hurts. Novice users unaware of the problem might get into trouble, and I see such things happen very often. This is a note for those who design their solutions around PostgreSQL using temporary tables.

This is very interesting to read, especially in comparison to SQL Server. It’s another example of the adage that, just because something works in a specific way on one platform, it’s not necessarily going to work the same way on another.
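
The heart of the warning is that every temporary table writes rows into PostgreSQL’s system catalogs (pg_class, pg_attribute, and friends), so creating one per call at high frequency churns and bloats the catalogs. A minimal sketch of the pattern, using a hypothetical orders table:

BEGIN;

-- Each CREATE TEMP TABLE inserts rows into the system catalogs; do this
-- thousands of times per minute and the catalogs bloat.
CREATE TEMP TABLE tmp_orders (
    order_id bigint,
    total    numeric
) ON COMMIT DROP;

INSERT INTO tmp_orders
SELECT order_id, total
FROM orders
WHERE order_date = CURRENT_DATE;

COMMIT;  -- the temp table and its catalog entries go away here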

Optimizing Incremental Inventory Calculations in DAX

Marco Russo and Alberto Ferrari track account balances:

Computing an inventory level or an account balance at a given time is a common requirement for many reports. However, when the source data contains all the transactions since the initial zero state, the calculation requires a running sum from the beginning of the data history until the day considered. While easy to implement, a calculation like this can be extremely expensive depending on several factors: the number of cells to compute in the report, the data volume of the transactions, and the cardinality of the dimensions.

The usual approach to optimizing this type of calculation is to introduce a snapshot table that pre-calculates the value of each date for all the dimensions required. Because of the resulting data volume, this solution can be very expensive both in terms of processing time and in terms of resulting memory consumption. A tradeoff is to limit the cardinality of the time available for the snapshot, for example by creating a monthly or quarterly snapshot instead of a daily snapshot. However, this approach limits the analysis of inventory or balance amount trends, and it removes any detail below the snapshot cardinality.

This article shows how to implement a hybrid approach that minimizes the snapshot cost without losing analytical capabilities. This provides outstanding query performance for the reports.

To an extent, this replicates what accountants do with general ledger operations: after an accounting period ends, you summarize results. Then, you can start from the summarized end point instead of needing to read each journal entry. It also provides a convenient time to perform audits and ensure that the numbers are what they should be.
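
The article implements the technique in DAX, but the hybrid idea translates to any engine: start from the most recent snapshot and add only the transactions since then. Here is a hedged SQL sketch with hypothetical tables:

-- inventory_snapshot: one row per product per month-end (qty_on_hand).
-- inventory_txn: raw movements (qty_delta per transaction).
SELECT s.product_id,
       s.qty_on_hand + COALESCE(SUM(t.qty_delta), 0) AS qty_on_2025_03_15
FROM inventory_snapshot s
LEFT JOIN inventory_txn t
    ON t.product_id = s.product_id
   AND t.txn_date > s.snapshot_date
   AND t.txn_date <= DATE '2025-03-15'
WHERE s.snapshot_date = DATE '2025-02-28'  -- last snapshot before the report date
GROUP BY s.product_id, s.qty_on_hand;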

Partitioned Tables and Faster Performance

Henrietta Dombrovskaya provides a warning:

I am unsure whether others have this problem, but I find myself explaining the benefits (and what’s not) of partitioning tables in Postgres over and over again.

Most times, developers have unrealistic expectations that if they partition a gigantic table, “it will be faster to select from it.” I always have to explain that the goal and the purpose of partitioning is improving maintenance, not query optimization, and if the execution speed of short queries remains the same as before partitioning, they should consider it a success (yes, there are no rules without exceptions, and there are cases of performance improvement, but those are exceptions indeed).

Henrietta’s example is specifically around Postgres, but it applies to SQL Server all the same: partitioning is primarily a maintenance benefit rather than a performance benefit.
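
To make the maintenance argument concrete, here is a small Postgres sketch (names are illustrative): selecting from the partitioned table is not magically faster, but retiring old data becomes a metadata operation instead of a massive DELETE.

CREATE TABLE events (
    event_id   bigint,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

-- The maintenance win: dropping a month of data is near-instant,
-- rather than a long-running, heavily-logged DELETE.
ALTER TABLE events DETACH PARTITION events_2025_01;
DROP TABLE events_2025_01;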

GROUP BY Clause Order and Performance in PostgreSQL

Andrei Lepikhov runs some tests:

PostgreSQL users often employ analytical queries that sort and group data by different rules. Optimising these operators can significantly reduce the time and cost of query execution. In this post, I will discuss one such optimisation: choosing the order of columns in the GROUP BY expression.

Postgres can already reshuffle the list of grouped expressions according to the ORDER BY condition to eliminate additional sorting and save computing resources. We went further and implemented an additional strategy of group-by-clause list permutation in a series of patches (the first attempt and the second one) for discussion with the Postgres community, expecting it to be included in the next version of PostgreSQL core. You can also try it in action in the commercial Postgres Pro Enterprise fork.

From a “this is a 4th generation language” perspective, there should never be a performance difference in the ordering of grouped columns in a GROUP BY clause. The database optimizer should always determine the optimal ordering. Read Andrei’s article to see how much of a challenge that can be in practice.
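
If you want to see the effect yourself, a harness along these lines (synthetic data, not Andrei’s benchmark) compares plans for the two orderings:

-- One low-cardinality column, one high-cardinality column.
CREATE TABLE t AS
SELECT (g % 5)      AS low_card,
       (g % 100000) AS high_card
FROM generate_series(1, 1000000) AS g;

ANALYZE t;

-- Same result set, different GROUP BY column order; compare timings.
EXPLAIN (ANALYZE, COSTS OFF)
SELECT low_card, high_card, COUNT(*)
FROM t GROUP BY low_card, high_card;

EXPLAIN (ANALYZE, COSTS OFF)
SELECT low_card, high_card, COUNT(*)
FROM t GROUP BY high_card, low_card;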

Comparing Collation Speed in Postgres

Christophe Pettus has the need for speed:

In previous installments in this series about locales and collations in PostgreSQL, we’ve made some vague allusions to the speed of the various collation functions. Let’s be a bit more analytical now.

The data here was gathered on a 4GB Linode instance running Ubuntu 24.04 and PostgreSQL 17.1. The test data was 1,000,000 records, each one a string of 64 random 7-bit ASCII characters. For each of the configurations, the test data was loaded into a table:

It’s a fairly simple test, but the results are quite interesting.
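
Here is a cut-down version of that style of test, not Christophe’s exact setup: load identical random strings into tables whose text columns use different collations, then time a sort against each. The ICU collation name assumes PostgreSQL was built with ICU support.

CREATE TABLE strings_c   (s text COLLATE "C");
CREATE TABLE strings_icu (s text COLLATE "en-US-x-icu");

INSERT INTO strings_c
SELECT md5(random()::text)
FROM generate_series(1, 1000000);

INSERT INTO strings_icu SELECT s FROM strings_c;

-- Sorting is where collation cost shows up; compare the two runtimes.
EXPLAIN (ANALYZE, COSTS OFF) SELECT s FROM strings_c ORDER BY s;
EXPLAIN (ANALYZE, COSTS OFF) SELECT s FROM strings_icu ORDER BY s;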

Materialized Views in PostgreSQL

Brent Ozar builds a view but a special one:

That query gets the top 100 users who have the most accepted answers. On my server, that takes about a minute to run – unacceptably long for a public-facing web site, for example.

Does that data need to be up to the second? Of course not. The leaderboard of the top 100 answerers isn’t going to change all that quickly. People who have successfully answered hundreds of questions aren’t going to suddenly disappear, nor is someone else suddenly going to rocket to the top.

For report queries like this, Postgres offers materialized views: a view that’s written to disk, and then updated only when you want to update it. 

Read on to see how you can create one in PostgreSQL. Brent does touch on one of the differences between indexed views in SQL Server and materialized views in PostgreSQL while covering the process of creating, querying, and updating materialized views. In discussing how to update them, Brent covers en passant a second difference between indexed views in SQL Server and materialized views in PostgreSQL. Whether the “keep it up to date at all times” approach or the “update it when you want, but let data go stale in the meantime” approach is better is something worth debating.
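
For the shape of the feature, here is a sketch; the leaderboard schema is simplified and hypothetical, not Brent’s exact Stack Overflow demo:

-- Persist the expensive aggregation to disk.
CREATE MATERIALIZED VIEW top_answerers AS
SELECT u.id, u.display_name, COUNT(*) AS accepted_answers
FROM users u
JOIN answers a ON a.user_id = u.id AND a.is_accepted
GROUP BY u.id, u.display_name
ORDER BY accepted_answers DESC
LIMIT 100;

-- A unique index is required to refresh without blocking readers.
CREATE UNIQUE INDEX ON top_answerers (id);

-- Queries read the stored result, not the base tables.
SELECT * FROM top_answerers;

-- Refresh on whatever schedule suits; data goes stale in between.
REFRESH MATERIALIZED VIEW CONCURRENTLY top_answerers;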
