Press "Enter" to skip to content

Category: Testing

Thoughts on Testing Stored Procedures

Erik Darling has a plan:

While most of it isn’t a security concern, though it may be if you’re using Row Level Security, Dynamic Data Masking, or Encrypted Columns, you should try executing it as other users to make sure access is correct.

When I’m writing stored procedures for myself or for clients, here’s what I do.

Click through for Erik’s guidance. The premise is couched in security testing, though much of this is functionality and performance testing. Regardless, it’s a good plan.
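
As a rough illustration of the “execute it as other users” step, here is a minimal Python sketch using pyodbc and T-SQL’s EXECUTE AS … REVERT. This is not Erik’s code; the connection string, user names, and procedure are placeholders.

```python
import pyodbc

# Hypothetical connection string, users, and procedure; replace with your own.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver;Database=mydb;Trusted_Connection=yes;"
)
TEST_USERS = ["app_reader", "app_writer", "report_user"]  # database users to impersonate

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cur = conn.cursor()
    for user in TEST_USERS:
        try:
            # Impersonate the user, run the procedure (assumed to return a result set),
            # then drop back to the original security context.
            cur.execute(f"EXECUTE AS USER = '{user}';")
            cur.execute("EXEC dbo.usp_GetOrders @CustomerId = ?;", 42)
            rows = cur.fetchall()
            print(f"{user}: OK, {len(rows)} row(s) returned")
        except pyodbc.Error as err:
            print(f"{user}: blocked or failed: {err}")
        finally:
            cur.execute("REVERT;")
```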

Comments closed

Managing Database Test Data

Phil Factor maintains some tests:

When learning about relational databases, we all tend to use ‘toy’ databases such as Pubs, AdventureWorks, Northwind, or ClassicModels. This is fine, but it is too easy to assume that one can then do real-world database development in the same way. You have your database full of data and just cut code that you then test. From a distance, it all seems so easy.

In fact, rapid and effective database development usually requires a much more active approach to data. You need to work out how to test your work as you go, and to test continuously. For that, you need appropriate data with the right characteristics, in the suitable quantity. You also need to plan how to ensure that, when you make changes to the database, or even minor changes to its settings, all business processes continue to work correctly. In Agile terms you need a test-first methodology, fast feedback loop, and iterative development. You should never cut some SQL Code and only then think to yourself “I wonder how I’ll be able to test this?”.

This is something I’ve historically been pretty lazy about, to my detriment. Phil does an outstanding job of making the case for why generating and working with your own test data (versus live data) is important, as well as categorizing the purposes of this test data and the types of tests you’ll want to have.
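
As a trivial sketch of the idea of generating your own repeatable test data rather than pulling from live tables, here is a Python snippet that writes a deterministic CSV you could bulk load. The column names and distributions are invented for illustration; real test data should mirror the characteristics of your own schema.

```python
import csv
import random

# Fixed seed so every test run sees exactly the same data.
random.seed(42)

# Invented reference values, purely for illustration.
FIRST_NAMES = ["Ada", "Grace", "Edsger", "Barbara", "Donald"]
CITIES = ["Springfield", "Riverton", "Lakeview"]

with open("test_customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "name", "city", "credit_limit"])
    for customer_id in range(1, 10_001):
        writer.writerow([
            customer_id,
            random.choice(FIRST_NAMES),
            random.choice(CITIES),
            round(random.lognormvariate(7, 1), 2),  # skewed, like many real-world amounts
        ])
```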

Comments closed

Load Testing in Power BI

Chris Webb gives us the why and the how:

If you’re about to put a big Power BI project into production, have you planned to do any load testing? If not, stop whatever you’re doing and read this!

In my job on the Power BI CAT team at Microsoft it’s common for us to be called in to deal with poorly performing reports. Often, these performance problems only become apparent after the reports have gone live: performance was fine when it was just one developer testing it, but as soon as multiple end-users start running reports they complain about slowness and timeouts. At this point it’s much harder to change or fix anything because time is short, everyone’s panicking and blaming each other, and the users are trying to do their jobs with a solution that isn’t fit for purpose. Of course if the developers had done some load testing then these problems would have been picked up much earlier.

With that in mind, Chris explains some of the things we can do to help with load testing in Power BI.
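
As one rough way to simulate concurrent report users yourself, here is a hedged Python sketch that fires parallel DAX queries at a published dataset through the Power BI Execute Queries REST API. This is not Chris’s method, and the dataset ID, access token, and query are placeholders you would need to supply.

```python
import concurrent.futures
import time

import requests

DATASET_ID = "<your-dataset-id>"     # assumption: a dataset published to the Power BI service
ACCESS_TOKEN = "<aad-access-token>"  # assumption: acquired via MSAL, az cli, etc.
DAX_QUERY = "EVALUATE TOPN(100, 'Sales')"  # hypothetical table name

URL = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def run_query(_: int) -> float:
    """Run one DAX query against the dataset and return its duration in seconds."""
    start = time.perf_counter()
    resp = requests.post(URL, headers=HEADERS, json={"queries": [{"query": DAX_QUERY}]})
    resp.raise_for_status()
    return time.perf_counter() - start


# Simulate 20 concurrent report users and print simple latency statistics.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    durations = list(pool.map(run_query, range(20)))

print(f"min {min(durations):.2f}s  max {max(durations):.2f}s  "
      f"avg {sum(durations) / len(durations):.2f}s")
```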

Comments closed

Data Validation with Great Expectations and Azure Functions

Eduard van Valkenburg does a bit of data validation:

Great Expectations is a popular Python-based OSS tool to validate data that comes into your data estate. And for me, validating incoming data is best done file by file, as the files arrive! On Azure there is no better platform for that than Azure Functions. I love the serverless nature of Functions, and with the triggers available for arriving blobs, as well as HTTP, Event Grid events, queues, and others, there are some great patterns that allow you to build event-driven data architectures. We also now have the Python v2 framework for Azure Functions available, which makes the developer experience even better. So, let’s go through how to get it running.

This looks really interesting, and tying it to Azure Functions is a good idea, assuming the checks don’t run for too long.
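
As a rough sketch of how the pieces might fit together (not Eduard’s exact code), here is a blob-triggered Azure Function in the Python v2 programming model that validates an arriving CSV with Great Expectations. The container path, column names, and expectations are assumptions, and the pandas-style great_expectations API shown depends on the library version you have installed.

```python
import io
import logging

import azure.functions as func
import great_expectations as ge
import pandas as pd

app = func.FunctionApp()


@app.blob_trigger(arg_name="blob", path="incoming/{name}", connection="AzureWebJobsStorage")
def validate_incoming_file(blob: func.InputStream):
    # Load the arriving file into a DataFrame and wrap it for validation.
    df = pd.read_csv(io.BytesIO(blob.read()))
    dataset = ge.from_pandas(df)  # classic pandas API; availability depends on GE version

    # A couple of illustrative expectations; real ones would come from your own suite.
    results = [
        dataset.expect_column_values_to_not_be_null("customer_id"),
        dataset.expect_column_values_to_be_between("order_total", min_value=0),
    ]

    failed = [r for r in results if not r.success]
    if failed:
        logging.error("Validation failed for %s: %d expectation(s) not met", blob.name, len(failed))
    else:
        logging.info("Validation passed for %s", blob.name)
```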

Comments closed

Trying Azure SQL DB Hyperscale Serverless

Reitse Eskens ran out of money on our behalf:

In one of my last blogs, I wrote about my first encounter with the Azure Hyperscale Serverless offering. Now it’s time to dig a bit deeper and see what it’s up to.

Disclaimer: Azure Hyperscale Serverless is in preview, and one of the things that isn’t active yet is the auto shutdown. This means that it will stay online 24/7 and bill you for every second it’s online. In my case, this meant that my Visual Studio credits ran out and I couldn’t use my Azure subscription anymore. Keep it in mind when testing this out, especially if your credit card is connected to said subscription.

Click through to see what Reitse was able to do in the meantime, before those Azure credits ran out for the month.

Comments closed

Unit Testing Spark Notebooks in Synapse

Arun Sethia grabs the oscilloscope:

In this blog post, we will cover how to test and create unit test cases for Spark jobs developed using Synapse Notebook. This is an extension of my previous blog, Synapse – Choosing Between Spark Notebook vs Spark Job Definition, where we discussed selecting between Spark Notebook and Spark Job Definition. Unit testing is an automated approach that developers use to test individual self-contained code units. By verifying code behavior early, it helps to streamline coding practices for larger systems.

Arun covers three major use cases: when your code is in an external library, when it is in a separate notebook, and when it is in the same notebook.
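
As a minimal sketch of the first pattern, where the logic lives in an external library or plain module, here is a pytest example that exercises a PySpark transformation against a local SparkSession. This is not Arun’s code; the function and column names are made up for illustration.

```python
import pytest
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_total_price(df: DataFrame) -> DataFrame:
    """Transformation under test: add a line-total column."""
    return df.withColumn("total_price", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests; no Synapse cluster required.
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_total_price(spark):
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["quantity", "unit_price"])
    result = add_total_price(df).collect()
    assert [row.total_price for row in result] == [10.0, 4.5]
```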

Comments closed

Azure Load Testing Now GA

Darryl Taft provides an overview of a now generally available service:

Moreover, Azure Load Testing collects detailed resource metrics to help you identify performance bottlenecks across your Azure application components. You can automate regression testing by running load tests as part of your CI/CD workflow.

Azure Load Testing also creates monitoring data using the Azure Monitor service, including application insights and container insights, to capture details from the Azure services.

It’s available in 11 regions, including the best region of all (East US) and the second-best region of all (East US 2).

Comments closed

Designing Test Code for shinytest2

Russ Hyde wraps up a series on shinytest2:

UI-driven end-to-end tests require a bit more code than unit tests. For example, starting the app and navigating around to set up some initial state will require a few lines of code. But these are things you’ll likely need to do in several tests. As you add more and more test cases and these commonalities reveal themselves, it pays to extract out some helper functions and / or classes. By doing so, your tests will look simpler, the behaviour that you are testing will be more explicit, and you’ll have less code to maintain. We’ll show some software designs that may simplify your {shinytest2} code.

This post builds upon the previous posts in the series, but is quite a bit more technical than either of them. In addition to shiny development, you’ll need to know how to define functions in R and for the last section you’ll need to know about object-oriented programming in R (specifically using R6). The ideas in that section may be of interest even if you aren’t fluent with R6 classes yet.

Click through for the series finale.

Comments closed

Writing Tests with shinytest2

Russ Hyde continues a series on shinytest2:

Here, we will write a simple shiny app (as an R package) and show how to generate tests for this app using {shinytest2}. As discussed in the previous post, {shinytest2} tests your app as if a user was interacting with it in their browser. The tests generated are application-focussed rather than component-focussed and so give some overall guarantees on how the app should behave.

This post is slightly more technical than the last, and assumes that the reader is comfortable with creating and unit-testing packages in R, and with shiny development in general.

Click through to see the code, as well as plenty of explanation.

Comments closed

End-to-End Testing via shinytest2

Russ Hyde begins a new series:

{shinytest2} builds upon the {shinytest} package and was written by Barret Schloerke and his colleagues at RStudio. Like puppeteer, {shinytest2} uses the Chrome DevTools Protocol to interact with the browser, which is a pretty stable basis for building a browser automation tool (the predecessor {shinytest} was built on a now-unsupported browser library called PhantomJS, so we strongly recommend migrating to {shinytest2} if you are still using {shinytest}). Test scripts are written in R and so should be accessible to R developers who are comfortable with {testthat}. There is an automated tool (described in the next post) for creating these test scripts. Also, {shinytest2} understands the architecture of shiny apps, and so it is simple to access the input and output variables that are stored by a shiny app at any given time, the inputs can be modified easily as well – to access these variables using the more general UI-based end-to-end testing tools is much more difficult.

Read on for the “why” behind this series; the next posts will get into more of the “how.”

Comments closed