Press "Enter" to skip to content

Category: Testing

Test Data Generation Concepts and Challenges

Phil Factor talks about data generation:

Data generation is the science and art of providing data for database development work that is as realistic and controllable as possible. The skills of generating realistic data are an essential part of being a database developer. It is important: you need plenty of data of exactly the right type, size and verisimilitude in order to develop and test a database. It has got to look right. If you are launching a corporate, data-driven application, you’ll need it for training as well as testing and development.

Where things get really tough is when you also want to get the distribution of data right. Is a uniform distribution (e.g., outputs from a random number generator) okay? Or do you have a few large accounts that take up the majority of your data? That difference could affect query plans, and if you have nice, uniform data, you might never see some of the cardinality estimation problems until they pop up in production.
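
To make that distribution point concrete, here's a minimal Python sketch (standard library only; the account IDs and row counts are made up) contrasting a uniform draw with a Zipf-style skewed one. A query plan that looks fine against the first data set can behave very differently against the second:

```python
import random
from collections import Counter

random.seed(42)

NUM_ROWS = 100_000
ACCOUNT_IDS = list(range(1, 1_001))

# Uniform: every account is equally likely -- easy to generate,
# and rarely what production actually looks like.
uniform = [random.choice(ACCOUNT_IDS) for _ in range(NUM_ROWS)]

# Skewed: weight each account by 1/rank (a rough Zipf distribution),
# so a handful of large accounts dominate the table.
weights = [1 / rank for rank in range(1, len(ACCOUNT_IDS) + 1)]
skewed = random.choices(ACCOUNT_IDS, weights=weights, k=NUM_ROWS)

for label, data in (("uniform", uniform), ("skewed", skewed)):
    top5 = Counter(data).most_common(5)
    share = sum(n for _, n in top5) / NUM_ROWS
    print(f"{label}: top 5 accounts hold {share:.1%} of rows -> {top5}")
```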


Database Subsetting and Data Generation

Phil Factor tells us about two possibilities for loading a lower environment:

When dealing with the development, testing and releasing of new versions of an existing production database, developers like to use their existing production data. In doing so, the development team will be hit with the difficulties of managing and accommodating the large amount of storage used by a typical production database. It’s not a new problem because the practical storage capacity has grown over the years in line with our ingenuity in finding ways of using it.

To deal with using production data for testing, we generally want to reduce its size by extracting a subset of the entities from a ‘production’ database, anonymized and with referential integrity intact. We then deliver this subset to the various development environments.

Phil gets into some detail on the process behind subsetting and then covers data generation as an alternative.
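
Phil's article covers the real tooling, but as a toy illustration of why referential integrity drives the whole process, here's a sketch using Python's built-in sqlite3 module (the schema and data are hypothetical): pick a sample of parent rows first, then walk the foreign keys downward so every child row in the subset still has its parent.

```python
import sqlite3

# In-memory stand-in for a 'production' database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers (customer_id),
        amount      REAL
    );
    INSERT INTO customers VALUES (1,'Alice'),(2,'Bob'),(3,'Carol'),(4,'Dan');
    INSERT INTO orders VALUES (10,1,50.0),(11,1,75.0),(12,2,20.0),(13,4,99.0);
""")

# Step 1: choose the subset of root entities (a sample of customers).
subset_ids = [row[0] for row in conn.execute(
    "SELECT customer_id FROM customers ORDER BY RANDOM() LIMIT 2")]

# Step 2: pull child rows only for the chosen parents, so every
# foreign key in the subset still resolves.
placeholders = ",".join("?" * len(subset_ids))
customers = conn.execute(
    f"SELECT * FROM customers WHERE customer_id IN ({placeholders})",
    subset_ids).fetchall()
orders = conn.execute(
    f"SELECT * FROM orders WHERE customer_id IN ({placeholders})",
    subset_ids).fetchall()

print("customers:", customers)
print("orders:   ", orders)
```

Real subsetting tools do this across dozens of tables at once (and anonymize along the way), but the parent-first, children-follow ordering is the core of it.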


Testing with Databricks

Anh Nguyen Viet shares some thoughts on testing in Databricks:

With diverse support and a focus on workspace uniformity, Databricks can bring many benefits to the testing process, such as the following:

  • Centralized: Databricks provides an integrated environment for many teams (including the testing team), allowing them to work in a focused, productive way. Integrating tools and services in a single platform reduces fragmentation and increases efficiency during testing.
  • Consistency: Databricks offers integrated tools and services, giving testers a uniform working environment and letting them work consistently across the entire testing process.
  • Enhanced Productivity and Cost Reduction: With the flexibility and efficiency in data processing that Databricks provides, testers can save time and effort, increasing productivity and reducing project costs. Using these utilities properly helps automate the testing process and delivers better results.

Read on for a few tips around building tests using Databricks.
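
One pattern that pairs well with the advice in the post is keeping your transformation logic in plain functions you can unit test on a local Spark session before it ever runs on a cluster. A minimal sketch (assuming pyspark and pytest are installed; the function under test is hypothetical):

```python
# test_transforms.py -- unit testing Spark logic locally before it
# runs on a Databricks cluster.
import pytest
from pyspark.sql import SparkSession
import pyspark.sql.functions as F


def add_total_column(df):
    """The 'notebook logic' under test: price * quantity."""
    return df.withColumn("total", F.col("price") * F.col("quantity"))


@pytest.fixture(scope="session")
def spark():
    # A tiny local session is enough for logic tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_total_column(spark):
    df = spark.createDataFrame([(2.0, 3), (5.0, 0)], ["price", "quantity"])
    result = add_total_column(df).select("total").collect()
    assert [row.total for row in result] == [6.0, 0.0]
```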


Mock Tests for R Packages

Maëlle Salmon does a bit of mocking:

This blog featured a post on mocking, the art of replacing a function with whatever fake we need for testing, years ago. Since then, we’ve entered a new decade, the second edition of Hadley Wickham’s and Jenny Bryan’s R packages book was published, and mocking returned to testthat, so it’s time for a new take/resources roundup!

Click through to see how you can create mocks in R as well as some practical examples of mocks in action.
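
The post itself covers testthat's tooling, but the underlying idea is language-agnostic: replace a slow or flaky dependency with a fake for the duration of a test. Purely as a cross-language illustration, here's the same pattern with Python's unittest.mock (the function names are hypothetical):

```python
from unittest import mock


def fetch_exchange_rate(currency: str) -> float:
    """Imagine this hits a slow, flaky web API in real life."""
    raise RuntimeError("no network in tests!")


def price_in_euros(price_usd: float) -> float:
    return price_usd * fetch_exchange_rate("EUR")


def test_price_in_euros():
    # Swap the real function for a fake with a known return value,
    # so the test is fast, deterministic, and offline.
    with mock.patch(f"{__name__}.fetch_exchange_rate", return_value=0.9):
        assert price_in_euros(100.0) == 90.0


test_price_in_euros()
print("ok")
```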


Generating Test Data with ChatGPT

Daniel Janik builds fake data:

Have you ever been tasked with creating test data for an application and then ran into performance problems once the application moves to production?

Many of us manage databases or applications that contain regulated data that can’t leave a production environment. This means that we need to “clean” the data if it’s going to be used in QA or development work, and one common way to de-identify the data is to simply update columns like firstname and lastname with a simple format “firstname” + counter; however, this results in all the data being unique and sequential. Firstname1, firstname2, firstname3, …
This isn’t good for getting like-for-like results with a production database and can lead to questions we’ve heard before in the workplace like “Why didn’t we catch this in QA?”

This works reasonably well, though you’d want to be sure to seed in edge cases and the like. But if you just need to generate some realistic-ish data pretty quickly, this is one option that can work.
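
If you'd rather not route the job through ChatGPT, a few lines of Python get you past the "firstname1, firstname2" problem: sample names from a pool with realistic (non-uniform) frequencies and deliberately seed in edge cases. A standard-library-only sketch (the name pool and weights are made up; a real job would use a much larger dictionary or a library such as Faker):

```python
import random

random.seed(7)

# A tiny, made-up name pool with skewed weights, so a few names
# dominate -- mimicking real-world frequency rather than perfectly
# unique, sequential values.
FIRST_NAMES = ["James", "Maria", "Wei", "Aisha", "José", "Olga", "Liam"]
WEIGHTS = [30, 22, 15, 12, 10, 7, 4]

def fake_first_name() -> str:
    return random.choices(FIRST_NAMES, weights=WEIGHTS, k=1)[0]

# Seed in the edge cases mentioned above: empty strings, embedded
# quotes, very long values, non-ASCII characters.
EDGE_CASES = ["", "O'Brien", "X" * 200, "Zoë"]

sample = [fake_first_name() for _ in range(20)] + EDGE_CASES
print(sample)
```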


Trying out Azure Load Testing

Dieu Phan takes us through the Azure Load Testing service:

Azure Load Testing is a fully managed load-testing service that enables you to generate high-scale loads. The service simulates traffic for your applications, regardless of where they’re hosted, so I would like to share a walkthrough of Azure Load Testing in this post.

Okay, this post isn’t very data platform-centric, but I do like the Load Testing service and think more companies and people should use it.
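
To be clear about what the service is doing for you: the core of any load test is firing many concurrent requests and measuring the responses, and the managed service handles that at a scale (and from regions) you can't easily reach from one machine. Just to illustrate the underlying idea, here's a tiny standard-library sketch, not the Azure service itself (the URL is a placeholder):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # placeholder endpoint
CONCURRENCY = 10
REQUESTS = 50

def hit(_):
    # Time a single request end to end.
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
        return resp.status, time.perf_counter() - start

# Fire requests concurrently and collect (status, latency) pairs.
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

latencies = sorted(t for _, t in results)
print(f"median latency: {latencies[len(latencies) // 2] * 1000:.0f} ms")
```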


Automated Power BI Visual Testing with PBI Inspector

Chris Webb phones a friend:

This week, one of my colleagues at Microsoft, Nat Van Gulck, showed me a cool new open-source tool he’s been working on to make VisOps for Power BI much easier: PBI Inspector. What is VisOps? I’ll admit I didn’t really know either, so being lazy I asked Nat to write a few paragraphs describing the project and why it will be useful:

Read on for Nat’s description and an example of PBI Inspector in action.


Unit Testing Dynamic SQL

Jay Robinson lays out a pattern:

Dynamic SQL (aka Ad Hoc SQL) is SQL code that is generated at runtime. It’s quite common. Nearly every system I’ve supported in the past 30 years uses it to some degree, some more than others.

It can also be a particularly nasty pain point in a lot of systems. It can be a security vulnerability. It can be difficult to troubleshoot. It can be difficult to document. And it can produce some wickedly bad results.

Click through for Jay’s process as well as recommendations and an example. It’s certainly worth thinking about.
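
This isn't Jay's exact pattern, but one common shape of it: isolate the string-building into a pure function, build the dynamic parts from a whitelist, keep user-supplied values out of the string entirely (they belong in parameters), and then unit test the SQL the function emits. A minimal Python sketch (table and column names are hypothetical):

```python
ALLOWED_SORT_COLUMNS = {"order_date", "amount", "customer_id"}

def build_orders_query(sort_column: str) -> str:
    """Build the dynamic part from a whitelist, never from raw input."""
    if sort_column not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {sort_column!r}")
    return (
        "SELECT order_id, customer_id, amount, order_date "
        "FROM dbo.Orders WHERE customer_id = ? "
        f"ORDER BY {sort_column};"
    )

def test_build_orders_query():
    sql = build_orders_query("amount")
    assert sql.endswith("ORDER BY amount;")
    assert "?" in sql                      # values stay parameterized
    try:
        build_orders_query("1; DROP TABLE dbo.Orders;--")
        assert False, "injection attempt should have been rejected"
    except ValueError:
        pass  # whitelist did its job

test_build_orders_query()
print("ok")
```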


Verifying a Backup in SQL Server

Chad Callihan knows your last backup is only as good as your last restore:

Is the process of testing your backups something you know you should do but never get around to? Do you find yourself assuming all is well with backups while putting out other fires? Test-DbaLastBackup, part of the beloved dbatools, can solve your dilemma.

There are many options available when using Test-DbaLastBackup. Let’s explore a few of these options and see some examples of how to use them.

Click through to learn more about this. And you could easily put together PowerShell scripts to stagger your restorations over a time frame (such as 15% of your databases each day, so that you get to 100% by the end of the week); a sketch of the scheduling idea follows.
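
The staggering logic itself is trivial in any language. Here's a sketch in Python (the database names are made up; in practice, the actual restore-and-check step would be a call to dbatools' Test-DbaLastBackup from PowerShell): hash each database name into a stable day-of-week bucket so roughly a seventh of them get a test restore per day.

```python
import datetime
import hashlib

databases = ["Sales", "HR", "Inventory", "Staging", "Archive",
             "Reporting", "Audit", "Billing", "CRM"]

def bucket(name: str, buckets: int = 7) -> int:
    # Hash the name (stable across runs, unlike Python's hash())
    # into one of the weekday buckets.
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % buckets

today = datetime.date.today().weekday()  # Monday == 0
due_today = [db for db in databases if bucket(db) == today]
print(f"databases to test-restore today: {due_today}")
```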
