Press "Enter" to skip to content

Category: Testing

Designing Test Code for shinytest2

Russ Hyde wraps up a series on shinytest2:

UI-driven end-to-end tests require a bit more code than unit tests. For example, starting the app and navigating around to set up some initial state will require a few lines of code. But these are things you’ll likely need to do in several tests. As you add more and more test cases and these commonalities reveal themselves, it pays to extract out some helper functions and / or classes. By doing so, your tests will look simpler, the behaviour that you are testing will be more explicit, and you’ll have less code to maintain. We’ll show some software designs that may simplify your {shinytest2} code.

This post builds upon the previous posts in the series, but is quite a bit more technical than either of them. In addition to shiny development, you’ll need to know how to define functions in R and for the last section you’ll need to know about object-oriented programming in R (specifically using R6). The ideas in that section may be of interest even if you aren’t fluent with R6 classes yet.

Click through for the series finale.
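To make the design idea concrete, here is a rough sketch of the kind of helper the post advocates. The app path and the `dataset`/`n` input names are hypothetical, not taken from the post:

```r
# A minimal sketch of the helper-function idea for {shinytest2} tests.
# The app path and the `dataset`/`n` inputs are made up for illustration.
library(shinytest2)
library(testthat)

# Start the app and put it into a known initial state; every test can reuse this.
start_app_with_data <- function(dataset = "mtcars", n = 10) {
  app <- AppDriver$new(app_dir = test_path("apps/demo"))
  app$set_inputs(dataset = dataset, n = n)
  app
}

test_that("the summary output reflects the chosen n", {
  app <- start_app_with_data(n = 25)   # setup collapses to one line per test
  expect_true(nzchar(app$get_value(output = "summary")))
  app$stop()
})
```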


Writing Tests with shinytest2

Russ Hyde continues a series on shinytest2:

Here, we will write a simple shiny app (as an R package) and show how to generate tests for this app using {shinytest2}. As discussed in the previous post, {shinytest2} tests your app as if a user was interacting with it in their browser. The tests generated are application-focussed rather than component-focussed and so give some overall guarantees on how the app should behave.

This post is slightly more technical than the last, and assumes that the reader is comfortable with creating and unit-testing packages in R, and with shiny development in general.

Click through to see the code, as well as plenty of explanation.
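To give a flavour of what those tests boil down to, here is a minimal hand-written sketch; the package name (`mypkg`) and the `n`/`result` IDs are hypothetical, and the post itself builds its tests with the {shinytest2} recorder rather than by hand.

```r
# A minimal sketch of a {shinytest2} test for an app shipped inside a package.
# `mypkg`, the `n` input and the `result` output are hypothetical; assume the
# app renders n squared as text.
library(shinytest2)
library(testthat)

test_that("result reacts to the n input", {
  app <- AppDriver$new(app_dir = system.file("app", package = "mypkg"))
  app$set_inputs(n = 5)
  expect_equal(app$get_value(output = "result"), "25")
  app$stop()
})
```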


End-to-End Testing via shinytest2

Russ Hyde begins a new series:

{shinytest2} builds upon the {shinytest} package and was written by Barret Schloerke and his colleagues at RStudio. Like Puppeteer, {shinytest2} uses the Chrome DevTools Protocol to interact with the browser, which is a pretty stable basis for building a browser automation tool (the predecessor {shinytest} was built on a now-unsupported browser library called PhantomJS, so we strongly recommend migrating to {shinytest2} if you are still using {shinytest}). Test scripts are written in R and so should be accessible to R developers who are comfortable with {testthat}. There is an automated tool (described in the next post) for creating these test scripts. Also, {shinytest2} understands the architecture of shiny apps, so it is simple to access the input and output variables that are stored by a shiny app at any given time, and the inputs can be modified just as easily; accessing these variables with the more general UI-based end-to-end testing tools is much more difficult.

Read on for the “why” behind this series; the next posts will get into more of the “how.”
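That last point is easy to show with a tiny sketch (the app path and the `bins` input are placeholders, not from the post):

```r
# Not from the post; just illustrating how {shinytest2} exposes app state.
library(shinytest2)

app <- AppDriver$new(app_dir = "path/to/your/app")
app$set_inputs(bins = 20)    # drive an input as a user would
vals <- app$get_values()     # a list with $input, $output and $export
str(vals$output)             # inspect the current output values directly
app$stop()
```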


Testing Power BI REST APIs

Gilbert Quevauvilliers tries it:

Did you know that there is an easy way to run and extract Power BI REST API data?

The good news is that you can do this directly in your web browser. You don’t have to install or configure anything!

The method below works well if you want to either test the API to see what it returns, or run it to extract some data.

Read on for the process.
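If you later want to script the same call rather than click through the browser, a minimal R sketch with {httr2} might look like this; it assumes you already have an Azure AD access token with a Power BI scope sitting in an environment variable, and it is not Gilbert's method.

```r
# A hedged sketch, not the post's approach: call the Power BI REST API from R.
# Assumes POWERBI_ACCESS_TOKEN holds a valid Azure AD access token.
library(httr2)

token <- Sys.getenv("POWERBI_ACCESS_TOKEN")

workspaces <- request("https://api.powerbi.com/v1.0/myorg/groups") |>
  req_auth_bearer_token(token) |>
  req_perform() |>
  resp_body_json()

length(workspaces$value)   # number of workspaces the token can see
```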


Azure Data Explorer Query Performance

Devang Shah and Surya Teja Josyula do some analysis:

The below screenshot shows the results of a load test conducted on ADX using Grafana k6. This load test included 10 different queries that were concurrently sent to ADX for a duration of 3 minutes, generating a total request volume of 2144 requests, nearly 12 requests per second. P95 response time from ADX was 2.38 seconds, which was well within the desired performance measure of the customer.

Read on to learn more.


Creating Repeatable Test Data

Louis Davidson repeats himself:

In order to test graph structures, I needed a large set of random data. In some ways, this data will resemble the IMDB database I will include later in this chapter, but to make it, one, controllable in size and, two, random, I created this random dataset. I loaded a set of values for accounts into a table, along with a set of interests. I then wanted to be able to load a set of random data into edges, relating one account to another (one follows the other), and then an account to a random set of interests.

In this article, I will discuss a few techniques I used, starting with the simplest method using ordering by NEWID(), then using RAND() to allow the code to generate a fixed set of data.

There’s a lot of code needed to do it but if it’s something you’ve got to do, that’s the cost of doing business.
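Louis works in T-SQL, but the core idea (fix the seed and your “random” test data becomes reproducible) carries over anywhere. Here is a small R analogue with made-up account and interest IDs, not the post's code:

```r
# An R analogue of the idea, not the post's T-SQL: a fixed seed makes the
# random follower and interest assignments identical on every run.
set.seed(42)

accounts  <- sprintf("Account%03d", 1:100)
interests <- sprintf("Interest%02d", 1:20)

# Follows edges: each account follows three other random accounts.
follows <- do.call(rbind, lapply(accounts, function(a) {
  data.frame(from_account = a, to_account = sample(setdiff(accounts, a), 3))
}))

# Interest edges: each account gets between one and five random interests.
account_interests <- do.call(rbind, lapply(accounts, function(a) {
  data.frame(account = a, interest = sample(interests, sample(1:5, 1)))
}))

head(follows)
```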


Testing API Packages in R

Jamie Owen does some testing:

This blog post is a follow-on to our API as a package series, which looks to expand on the topic of testing {plumber} API applications within the package structure, leveraging {testthat}. As a reminder of the situation, so far we have an R package that defines functions that will be used as endpoints in a {plumber} API application. The API routes defined via {plumber} decorators in inst simply map the package functions to URLs.

Jamie covers a lot of testing ground in that post as well, so check it out.
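To sketch the pattern in broad strokes (the `get_status()` function, the `/status` route and the package name are hypothetical), the package functions can be unit-tested directly, while a background R process covers the running API:

```r
# A hedged sketch of the pattern: test the package function directly, then
# spin up the {plumber} API in a background R session and hit it over HTTP.
# `get_status()`, `mypkg` and the /status route are made up for illustration.
library(testthat)

test_that("get_status() reports the service as up", {
  expect_identical(get_status()$status, "up")
})

test_that("the /status endpoint responds with HTTP 200", {
  api <- callr::r_bg(function() {
    plumber::pr(system.file("plumber", "api.R", package = "mypkg"))$run(port = 8000)
  })
  on.exit(api$kill())
  Sys.sleep(2)  # crude wait for the API to come up
  resp <- httr2::request("http://127.0.0.1:8000/status") |> httr2::req_perform()
  expect_equal(httr2::resp_status(resp), 200)
})
```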


Running Postman Tests in GitLab

Rahul Kumar automates Postman tests:

Hi folks! In this brief blog post, we’ll learn more about GitLab CI and Postman, the API testing tool we use the most frequently. This article’s goal is to provide a quick process for automatically testing a service’s API response. The solution makes use of the capabilities provided by GitLab’s integrated continuous integration tooling.

Click through for the tutorial.


Testing PowerShell Scripts

David Wilson provides an introduction to Pester:

Most of you probably know that I’m a big fan of automated testing and especially testing during the development process. It significantly improves the design of the code by encouraging loose coupling and high cohesion. It also provides great documentation and increases the confidence of anyone who needs to change the code in the future (this includes future you)!

Testing does tend to get the short end of the stick when it comes to development time. Some of that comes down to design problems, as David mentions, but I think a lot of it is the “This is a solved problem” mentality we (and I am definitely part of “we” here) end up in: I proved that the solution works because the code compiled and the two scenarios I tried out worked; therefore, why do I need to “waste” the extra time writing all of these tests when I can move on to something more interesting?
