
Category: Testing

Calculating Test Coverage of Azure Data Factory Pipelines

Richard Swinbank wraps up a series on testing in Azure Data Factory:

To determine which activities have been executed by a test suite, I need to collect and aggregate activity run data from every pipeline execution triggered from any test fixture. In the previous post I developed components to retrieve and cache activities for a pipeline run – I’ll use those components here to collect data systematically.

I’m going to create a new helper class to contain functions specific to coverage measurement. It’s a subclass of the database helper because I want to exploit functionality from classes further up the hierarchy:

Read on for the code and process for measurement.
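To make that concrete, here is a rough sketch of the underlying idea (collect the activity runs for each pipeline run, then aggregate which activities executed at least once), using the Az.DataFactory PowerShell cmdlets rather than Richard's helper classes. The resource group, factory name, and run IDs are placeholders, and it assumes you're already signed in with Connect-AzAccount.

```powershell
# Not Richard's implementation: a minimal illustration of aggregating
# activity runs across a set of pipeline run IDs collected by test fixtures.
Import-Module Az.DataFactory

$resourceGroup  = 'rg-adf-test'      # placeholder names
$dataFactory    = 'adf-under-test'
$pipelineRunIds = @('00000000-0000-0000-0000-000000000000')

$activityRuns = foreach ($runId in $pipelineRunIds) {
    Get-AzDataFactoryV2ActivityRun `
        -ResourceGroupName $resourceGroup `
        -DataFactoryName   $dataFactory `
        -PipelineRunId     $runId `
        -RunStartedAfter   (Get-Date).AddDays(-1) `
        -RunStartedBefore  (Get-Date)
}

# "Coverage" in the simplest sense: which pipeline activities ran at all?
$activityRuns |
    Group-Object PipelineName, ActivityName |
    Select-Object Name, Count
```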


Unit Testing Azure Data Factory Pipelines

Richard Swinbank walks us through what it takes to run a unit test against an Azure Data Factory pipeline:

In part three of this series I looked at functional tests for ADF pipelines: verifying, in isolation, that pipelines are “doing things right”. In this post I’ll be testing isolated pipelines to check that they’re “doing the right things” – this is one description of a unit test. In a general-purpose programming language, unit tests might be used to verify that an individual line of code is executed, or that it has a particular effect. In Azure Data Factory, the smallest unit of development – a “line of code” – is a pipeline activity. I will be writing tests to verify that specific activities are executed (or not) and to inspect their results.

There’s a fair bit involved in this sort of test.
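If you'd rather see the shape of such a test without reading the whole series first, here is a hedged Pester sketch along the same lines (Richard builds his own test components, so none of this is his code; the pipeline name, activity names, and polling logic are invented for the example):

```powershell
# Trigger an isolated pipeline once, wait for it to finish, then assert
# which activities did or did not execute. All names are placeholders.
Describe 'PL_LoadCustomers' {
    BeforeAll {
        $rg  = 'rg-adf-test'
        $adf = 'adf-under-test'

        $runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $rg `
                     -DataFactoryName $adf -PipelineName 'PL_LoadCustomers'

        # Simplistic polling; a real fixture would add a timeout
        do {
            Start-Sleep -Seconds 15
            $run = Get-AzDataFactoryV2PipelineRun -ResourceGroupName $rg `
                       -DataFactoryName $adf -PipelineRunId $runId
        } while ($run.Status -in 'Queued', 'InProgress')

        $activities = Get-AzDataFactoryV2ActivityRun -ResourceGroupName $rg `
                          -DataFactoryName $adf -PipelineRunId $runId `
                          -RunStartedAfter (Get-Date).AddHours(-2) -RunStartedBefore (Get-Date)
    }

    It 'copies the customer file' {
        ($activities | Where-Object ActivityName -eq 'Copy customers').Status |
            Should -Be 'Succeeded'
    }

    It 'does not run the failure-handling path' {
        $activities | Where-Object ActivityName -eq 'Send failure alert' |
            Should -BeNullOrEmpty
    }
}
```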


Removing and Refilling All Tables in a Database

Phil Factor has a couple of T-SQL scripts for us to remove and reload the data in a test database:

Copying a database using a BCP dataset from a DOS script or from PowerShell is fairly quick and trivial, but what if the data is already in another copy of the database on the same instance? Imagine you have a database build that is a copy of an existing database on the same instance and you want to run a ‘destructive’ test on the data, and do it over and over again.

Deleting all the data in a database should, you’d have thought, be trivial. Truncation doesn’t work because it doesn’t like to truncate tables with foreign keys, even if they are disabled. DELETE is safer but you will need to then reset any identity columns. Deleting large tables in one chunk isn’t scalable, so you have to do it in smaller chunks. I like to disable triggers while I do this as well. The only disadvantage comes if you have triggers that are supposed to be disabled because the simple code just re-enables them all.
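As a rough sketch of the chunked-delete pattern Phil describes, driven from PowerShell with Invoke-Sqlcmd (the instance, database, table name, and batch size are placeholders, and a complete script would walk every table in foreign key order):

```powershell
# Placeholder single-table example of: disable triggers, delete in chunks,
# reseed the identity column, re-enable triggers.
$query = @'
ALTER TABLE dbo.Orders DISABLE TRIGGER ALL;

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.Orders;
    SET @rows = @@ROWCOUNT;
END;

DBCC CHECKIDENT ('dbo.Orders', RESEED, 0);

ALTER TABLE dbo.Orders ENABLE TRIGGER ALL;
'@

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'TestDB' -Query $query
```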

This is, I think, one of the biggest selling points for containers where the database is built into the container image. You spin up a container based off of an image, perform your destructive testing, and destroy the container afterward. The next time you need to run these tests, spin up a new container. And if you need to change the data, modify the container image. This does introduce new challenges, like the fact that SQL Server on Linux has limitations which don't exist on Windows, but for supported functionality it's a nice solution.
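The container workflow might look something like the following rough sketch, assuming a custom image with the test database already baked in (the image name, port, and password are placeholders):

```powershell
# Start a throwaway SQL Server container from an image that already
# contains the test database.
docker run -d --name sql-test `
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Str0ng!Passw0rd' `
    -p 14333:1433 `
    myregistry/my-test-db:latest

# ...run the destructive tests against localhost,14333...

# Throw the container away; the next run starts from a clean image again.
docker rm -f sql-test
```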


Running Postman Collections via Newman and Jenkins

Ankur Thakur takes us through configuring Postman automation:

We can write an automation test suite for any service and use it for regression testing. But we can also integrate our automation test scripts, using Newman, into a Jenkins CI/CD pipeline which can be triggered automatically whenever a pull request gets merged.

We need software installed on our system locally:

– Jenkins
– Node
– NPM
– Postman (For writing the collection)

We’ve done this in our environment and I’m reasonably happy with the results. If you have a REST API, it’s a fair sight easier to work with than something like Specflow.
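For reference, the Newman side is small enough to sketch here; the collection and environment file names are placeholders, and the same command works as a build step in a Jenkins job:

```powershell
# One-time install of the Newman CLI via npm
npm install -g newman

# Run the collection; failed assertions produce a non-zero exit code,
# which fails the Jenkins build. The JUnit report can be published
# with the Jenkins JUnit plugin.
newman run .\regression-suite.postman_collection.json `
    --environment .\qa.postman_environment.json `
    --reporters cli,junit `
    --reporter-junit-export .\results\newman-results.xml
```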


Operations Testing with Pester

Sheldon Hull takes us through using Pester to automate operations tasks:

In my example, let’s start small and say you just have PowerShell, and some servers.

What I’ve discovered is that to actually validate that DevOps-oriented work is completed, you typically go through the equivalent of what a Cucumber test would have. This “checklist” of validations is often manually performed, lacking consistency and the ability to scale or repeat with minimal effort.

Consider an alternative approach to helping solve this issue, and expanding your ability to automate the tedious testing and validation of changes made.

Read on for an example as well as some additional thoughts from Sheldon.
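As a hedged example of the kind of checklist Sheldon is talking about, a Pester (v5) test over a couple of servers might look like this; the server and service names are placeholders, and the remote check assumes PowerShell remoting is enabled:

```powershell
# Validate post-deployment state instead of ticking boxes by hand.
Describe 'Post-deployment checks' {
    Context 'Server <_>' -ForEach 'APPSRV01', 'APPSRV02' {
        It 'responds to ping' {
            Test-Connection -ComputerName $_ -Count 1 -Quiet | Should -BeTrue
        }

        It 'has the application service running' {
            $status = Invoke-Command -ComputerName $_ -ScriptBlock {
                (Get-Service -Name 'MyAppService').Status
            }
            "$status" | Should -Be 'Running'
        }
    }
}
```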


The Basics of tSQLt

Jess Pomfret walks us through the basics of tSQLt:

Getting started with tSQLt is really easy: you download a zip file, unzip the contents, and then run the tSQLt.class.sql script against your development database.

There are a couple of requirements, including that CLR integration must be enabled and your database must be set to trustworthy. These could open up some security considerations, but for my development environment it’s no issue.

This is where I’d say putting the database in a container would be extremely helpful, as then you can destroy it afterward. I’m not sure if that’d work, as SQL Server on Linux doesn’t support unsafe or external access assemblies (and I’m not sure what tSQLt requires there).
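For reference, a rough sketch of the setup steps Jess describes, run from PowerShell with Invoke-Sqlcmd (the instance and database names are placeholders, and tSQLt.class.sql comes from the downloaded zip file):

```powershell
$instance = 'localhost'
$database = 'DevDB'

# Prerequisites: CLR integration on the instance, TRUSTWORTHY on the database
Invoke-Sqlcmd -ServerInstance $instance -Query @"
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
ALTER DATABASE [$database] SET TRUSTWORTHY ON;
"@

# Install the framework, then run the (currently empty) test suite
Invoke-Sqlcmd -ServerInstance $instance -Database $database -InputFile '.\tSQLt.class.sql'
Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query 'EXEC tSQLt.RunAll;'
```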


The Importance of Unit Testing Database Code

Chris Johnson shares some thoughts on unit testing database code:

This is a topic that is quite close to my heart. I didn’t come from a computing background before I started working with SQL Server, so I was quite ignorant when it came to a lot of best practices that other developers who have worked with other languages are aware of. Because of this, I had no idea about unit testing until I attended a talk at a SQL Saturday all about tSQLt. If anyone isn’t aware (as I wasn’t), tSQLt is a free, open-source unit testing framework for use in SQL Server databases. It is the basis of Redgate’s SQL Test software, and is the most widely used framework for writing unit tests in SQL Server.

Since then I’ve worked to try and get employers to adopt this as part of a standard development life cycle, with mixed success at best. My current employer is quite keen, but there are two major problems. First, we have a huge amount of legacy code that obviously has no unit tests in place; and second, the way people code is not conducive to unit testing.

Click through for additional thoughts on writing good tests and an example of modularizing code to make it more testable. I’m still in the camp of “test what you can, but you can’t test everything” with databases. There’s just too much state dependency.
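For anyone who, like Chris back then, hasn't seen it before, here is a small, hedged illustration of what a tSQLt test looks like; the schema, function, and expected value are invented for the example:

```powershell
# Connection details are placeholders.
$conn = @{ ServerInstance = 'localhost'; Database = 'DevDB' }

# A test class is just a schema that tSQLt knows about
Invoke-Sqlcmd @conn -Query "EXEC tSQLt.NewTestClass 'OrderTests';"

# One test = one stored procedure whose name starts with 'test'
Invoke-Sqlcmd @conn -Query @'
CREATE PROCEDURE OrderTests.[test discount is capped at 50 percent]
AS
BEGIN
    DECLARE @actual decimal(5,2) = dbo.CalculateDiscount(10000);
    EXEC tSQLt.AssertEquals @Expected = 0.50, @Actual = @actual;
END;
'@

# Run every test in the class; each test runs in a transaction and rolls back
Invoke-Sqlcmd @conn -Query "EXEC tSQLt.Run 'OrderTests';"
```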


Why Unit Testing in the Database Is Tough

Rob Farley talks about a couple of reasons why database unit testing can be difficult to do:

Hamish wants to develop a conversation about unit testing within databases because he recognises that the lack of unit testing is a significant problem. It’s quite commonplace in the world of iterative code, of C#, Java, and those kinds of languages, but a lot less commonplace in the world of data. I’m going to look at two of the reasons why I think this is.

Read Rob’s thoughts in their entirety. I fully agree that we need to test, but get wishy-washy on the topic of automated testing. The reason for that is that tooling is quite limited, and many of those limitations are inherent limits in the database platform itself. For the types of things you most need to test (like hefty stored procedures), the number of test cases spirals out of control quickly. And unlike functional or structured programming languages, T-SQL performance gets markedly worse as you modularize, which makes it so difficult to get down to an easily testable block of code.


Replaying Workloads to a Different Database with WorkloadTools

Gianluca Sartori takes us through workload replay ability in WorkloadTools:

One of the features I was asked to implement for WorkloadTools is the ability to replay commands to a database name different from the one recorded in the source workload.

This is something that I had been planning to implement for a while and it totally makes sense. Usually, you have two identical environments for the workload capture and replay, both with the same databases. Sometimes it makes sense to have two different databases as the source and target for the workload, for particular reasons: resource constraints, ease of testing, and so on.

WorkloadTools now supports replaying commands to a different database, using the DatabaseMap property of the ReplayConsumer.

Setting this up is pretty simple, though Gianluca does lay out a caveat.
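For context, the mapping itself is just a set of source-to-target database name pairs inside the ReplayConsumer section of the JSON configuration file; the names below are placeholders and the exact surrounding structure is approximate, so check the WorkloadTools documentation:

```json
"DatabaseMap": {
    "SourceDB": "TargetDB",
    "AnotherSourceDB": "AnotherTargetDB"
}
```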


Preparing for Demos with Pester

Jess Pomfret has some advice for you if you ever give a presentation:

If you don’t know what Pester is, it’s a test framework for PowerShell. In the simplest explanation, using their Domain-Specific Language (DSL) you describe how things should look. If all looks good it returns output in green, and if it doesn’t you get red output. There are a lot of great use cases for Pester, like using it to ensure your code does what it’s supposed to, using it to validate your SQL Server environment (dbachecks), or, in this example, using it to make sure your demos are set up and ready to go.

When I’m preparing for a presentation I go through the demos over and over again, so it’s easy to accidentally leave things in a state that will cause issues when I go to do my demos in the presentation. If, for example, you create a table during the demo, and you already created it while practicing and forgot to drop it, the demo gods will strike and it’ll fail when it matters most! A simple Pester test to check whether the table exists will solve this issue.

Even if you aren’t giving talks in public (or inside your company), Pester is a useful tool for ensuring that the thing you expect to be the case actually is the case.
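As a small example of the pre-demo check Jess describes (the instance, database, and table names are placeholders):

```powershell
# Fail fast if a table left over from rehearsal still exists.
Describe 'Demo environment' {
    It 'does not already contain dbo.DemoSales' {
        $result = Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'DemoDB' -Query @'
SELECT COUNT(*) AS TableCount
FROM sys.tables
WHERE SCHEMA_NAME(schema_id) = N'dbo' AND name = N'DemoSales';
'@
        $result.TableCount | Should -Be 0
    }
}
```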
