
Category: Testing

GUID Hunting for Power BI Performance Load Testing

Gilbert Quevauvilliers finds some UUIDs:

When completing Power BI performance load testing, you will need to get details from your Power BI report and App Workspace, which will later be used in the PBIReport.JSON file.

In this blog post I will show you how to find those details, so that when it comes time to add it to the PBIReport.JSON file, it will be easy to plug the values in.

The reason for a separate blog post is that you will have to find the GUIDs that are used, and it takes a bit of time and knowledge to match the correct GUID to the right value.

Click through for the most unsatisfying Easter egg hunt you could imagine. Gilbert then continues to pull out slicer and filter data values.
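For reference (this is not from Gilbert's post), both GUIDs also show up in a report's URL, which follows the pattern app.powerbi.com/groups/&lt;workspace GUID&gt;/reports/&lt;report GUID&gt;/&lt;page&gt;. A minimal Python sketch that pulls them out of such a URL, with placeholder values:

```python
import re

# A Power BI report URL embeds both GUIDs:
# https://app.powerbi.com/groups/<workspace-guid>/reports/<report-guid>/<page>
GUID = r"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}"
pattern = re.compile(rf"/groups/({GUID})/reports/({GUID})")

def extract_ids(report_url: str) -> dict:
    """Return the workspace and report GUIDs from a report URL, if present."""
    match = pattern.search(report_url)
    if not match:
        raise ValueError("URL does not look like an app.powerbi.com report link")
    return {"workspaceId": match.group(1), "reportId": match.group(2)}

# Hypothetical URL; the GUIDs are placeholders, not real workspace or report IDs.
url = ("https://app.powerbi.com/groups/11111111-2222-3333-4444-555555555555"
       "/reports/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/ReportSection")
print(extract_ids(url))
```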


LakeBench Now Available

Miles Cole makes an announcement:

I’m excited to formally announce LakeBench, now in version v0.3, the first Python-based multi-modal benchmarking library that supports multiple data processing engines on multiple benchmarks. You can find it on GitHub and PyPI.

Traditional benchmarks like TPC-DS and TPC-H focus heavily on analytical queries, but they miss the reality of modern data engineering: building complex ELT pipelines. LakeBench bridges this gap by introducing novel benchmarks that measure not just query performance, but also data loading, transformation, incremental processing, and maintenance operations. The first such benchmark is called ELTBench, and it is initially available in light mode.

Click through to see how it works and grab a copy if you’re interested.


The Importance of Power BI Performance Load Testing

Gilbert Quevauvilliers runs some tests:

It is becoming increasingly important to understand how the Power BI reports and semantic models being used in your organization are performing.

When using Fabric Capacities this can potentially be of critical importance, because a single report that is not well designed could cripple or bring down your capacity.

Completing Power BI performance load testing before a report goes into a production environment allows for scalable, dependable, repeatable testing to take place in lower environments.

Read on to see what this entails and the tool Gilbert will use throughout this series.


Adding Timeouts to Pester Tests

Adam Bertram runs out of time:

Have you ever had a Pester test hang indefinitely, blocking your entire test suite? Maybe it’s waiting for a network response that never comes, or stuck in an infinite loop. Without proper timeout handling, one bad test can ruin your entire CI/CD pipeline.

In this article, you’ll learn how to implement robust timeout handling for Pester tests using PowerShell runspaces, ensuring your test suite always completes in a predictable timeframe.

Click through for the code and explanation.
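Adam's approach is PowerShell runspaces wrapped around Pester. Purely as an illustration of the same idea in a different language (and not the article's code), here is a small Python sketch that runs a potentially hanging test body in a child process and abandons it after a timeout:

```python
import multiprocessing
import time

def slow_test_body():
    # Stand-in for a test step that might hang (network call, infinite loop, ...).
    time.sleep(60)

def run_with_timeout(target, timeout_seconds):
    """Run target in a separate process and kill it if it exceeds the timeout."""
    proc = multiprocessing.Process(target=target)
    proc.start()
    proc.join(timeout_seconds)
    if proc.is_alive():
        proc.terminate()  # roughly what stopping the runspace does in Adam's PowerShell version
        proc.join()
        raise TimeoutError(f"test body exceeded {timeout_seconds}s")

if __name__ == "__main__":
    run_with_timeout(slow_test_body, timeout_seconds=5)  # raises TimeoutError after ~5 seconds
```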


Performance Testing the pg_tde Extension

Transparent data encryption is now available for PostgreSQL via Percona's pg_tde extension, and Andreas Scherbaum has some performance measurements:

The performance impact of pg_tde by Percona for encrypted data pages is measurable across all tests, but not very large. The performance impact of encrypting WAL pages is about 20% for write-heavy tests. The tests were run with an extension RC (Release Candidate), however the WAL encryption feature is still in Beta stage.

Andreas also has a post on the testing specifics:

This test was run on a dedicated physical server, to avoid external influences and fluctuations from virtualization.

The server has an Intel(R) Xeon(R) Gold 5412U CPU with 48 cores, 256 GB RAM, and a 2 TB SAMSUNG MZQL21T9HCJR NVMe disk dedicated for the tests (the OS was running on a different disk).


Testing Shiny Applications

Arthur Breant runs some tests:

You’ve created a fantastic mockup and your client is delighted. You’re ready to move to production with your application. But one question haunts you: how can you ensure that your application will remain stable and functional through modifications and evolutions?

The answer comes down to one word: testing.

Read on to learn how you can perform unit testing, integration testing, and end-to-end testing of Shiny applications in R. H/T R-Bloggers.


Verifying SQL Server Backups via SMO

Stephen Planck does some testing:

Regularly restoring test copies of your databases is the gold-standard proof that your backups work. Between those tests, however, RESTORE VERIFYONLY offers a fast way to confirm that a backup file is readable, that its page checksums are valid, and that the media set is complete. In this post you will see how to run that command from PowerShell by invoking SQL Server Management Objects (SMO), turning a one-off verification into a repeatable step you can schedule across all your servers.

Click through for the script and explanation. I also like dbatools’ Test-DbaLastBackup command, as that can also run RESTORE VERIFYONLY but goes further and allows you to restore the backup and then run DBCC CHECKDB against its contents.
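Stephen works through SMO from PowerShell. As a rough sketch of the underlying check only (not his script), the same RESTORE VERIFYONLY command can be issued from Python with pyodbc; the server name and backup path below are placeholders:

```python
import pyodbc

def verify_backup(connection_string: str, backup_path: str) -> bool:
    """Run RESTORE VERIFYONLY against a backup file and report whether it succeeded."""
    # autocommit is needed because backup/restore statements cannot run inside a transaction.
    conn = pyodbc.connect(connection_string, autocommit=True)
    try:
        cursor = conn.cursor()
        # The path is inlined for simplicity in this sketch; validate it if it comes from user input.
        cursor.execute(f"RESTORE VERIFYONLY FROM DISK = N'{backup_path}'")
        return True
    except pyodbc.Error as err:
        print(f"Verification failed: {err}")
        return False
    finally:
        conn.close()

# Hypothetical server name and backup path; adjust for your environment.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql01;Trusted_Connection=yes;TrustServerCertificate=yes"
)
print(verify_backup(conn_str, r"D:\Backups\AdventureWorks.bak"))
```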


The Case against Database Mocks

Brandur Leach lays out the argument:

The textbook example of this is the database mock. Here’s a rough articulation of the bull case for this idea: CPUs are fast. Memory is fast. Disks are slow. Why should tests have to store data to a full relational database with all its associated bookkeeping when that could be swapped out for an ultra-fast, in-memory key/value store? Think of all the time that could be saved by skipping that pesky fsync, not having to update that plethora of indexes, and foregoing all that expensive WAL accounting. Database operations measured in hundreds of microseconds or even *gasp*, milliseconds, could plausibly be knocked down to 10s of microseconds instead.

Prior to reading the article, my stance was as follows: use database mocks for unit test libraries, in which you aren’t testing the actual data processing or retrieval. Those should be able to run on an isolated build server with no access to a database. But you also need proper integration tests that cover how you interact with the database, and those tests should be a majority of your test suite. You should have a known state database before each test run (which is where Docker containers or database snapshots become extremely helpful), and passing the database tests should be a gate early on in the CI/CD process.

After reading the article, my priors remain the same. I think there’s still scope for database mocks, but not as a replacement for proper integration testing with the database.
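To make that split concrete, here is a minimal Python illustration (mine, not Brandur's): the mock exercises the logic around the query, while the query itself still needs an integration test against a real, known-state database.

```python
from unittest.mock import MagicMock

def get_active_usernames(conn):
    """Logic under test: runs a query and post-processes the result."""
    rows = conn.execute("SELECT name FROM users WHERE active = 1").fetchall()
    return sorted(row[0] for row in rows)

def test_get_active_usernames_with_mock():
    # Unit test: the "database" is a mock, so only the surrounding Python logic is exercised.
    conn = MagicMock()
    conn.execute.return_value.fetchall.return_value = [("bob",), ("alice",)]
    assert get_active_usernames(conn) == ["alice", "bob"]
    # Nothing here proves the SQL is valid or that the schema exists;
    # that still belongs to integration tests against a known-state database.
```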


Parameterization and Mocking in Python Tests

Aida Gjoka and Russ Hyde show off some capabilities in the pytest library:

Writing tests is one of the best ways to keep your code reliable and reproducible. This post builds on our previous blog about Python testing with pytest Part 1, and explores some of the more advanced features it offers. From parametrised fixtures to mocking and other useful pytest plugins, we will show how to make your tests more reproducible, easier to manage and demonstrate how writing simple tests can save you time in the long run.

Click through to learn more. I’m a huge fan of parameterization in pytest—it’s really easy to do. Mocks are a bit harder to pull off in practice, though quite useful.
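If you have not tried either feature, here is a tiny generic sketch (not taken from the post) showing parameterization and a mocked HTTP call; it assumes pytest and requests are installed:

```python
import pytest
import requests
from unittest.mock import patch

def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

# Parameterization: one test body, three cases, each reported as its own test.
@pytest.mark.parametrize("fahrenheit, expected", [(32, 0.0), (212, 100.0), (-40, -40.0)])
def test_fahrenheit_to_celsius(fahrenheit, expected):
    assert fahrenheit_to_celsius(fahrenheit) == pytest.approx(expected)

def boiling_reported(city):
    # Imagine this hits a real web service in production; the URL is a placeholder.
    response = requests.get(f"https://example.invalid/weather/{city}")
    return response.json()["temp_f"] >= 212

def test_boiling_reported_with_mock():
    # Mocking: swap out requests.get so the test never touches the network.
    with patch("requests.get") as fake_get:
        fake_get.return_value.json.return_value = {"temp_f": 250}
        assert boiling_reported("Boston")
```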


Pitfalls in Software Testing

Ngọc lê builds a list:

Software testing plays a crucial role in ensuring the delivery of high-quality products. However, even experienced testers can fall into common traps that compromise the effectiveness of testing processes, allowing defects to slip into production. Avoiding these pitfalls is essential for maintaining the reliability and functionality of software. In this blog, we’ll explore some of the most common software testing mistakes and provide strategies to overcome them.

This is specifically about software testing, but most of these principles apply just as well to database work.
