Press "Enter" to skip to content

Category: Testing

Credentials In Azure Automation

Mark Vaillancourt explains Azure Automation credentials:

With the String parameters, it is easy to just type what I want in the textbox. But, for the PSCredential parameter at the bottom, I was unsure what to do. I figured, since this is just a textbox, I couldn’t just magically pass in a Credential Asset. So, I tried passing in an expression that used the Get-AzureRmAutomationCredential cmdlet, which returns the Credential Asset specified. That didn’t work very well. I then started digging through documentation, figuring I would see references and examples of passing a Credential Asset in the Test Pane. It took me a bit, but I finally landed on an Azure.com blog post titled Azure Automation: Runbook Input, Output, and Nested Runbooks. While this post didn’t match all that closely with the search criteria around the Test Pane, it did contain my answer in a section on starting Runbooks:

The answer turns out to be pretty simple.
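Per the Azure Automation documentation on runbook input, that simple answer is: for a PSCredential parameter, you type the name of the Credential Asset as a plain string into the Test Pane, and Azure Automation resolves that name to the credential before the runbook runs. A minimal runbook sketch (parameter names here are illustrative, not from Mark's post):

    param (
        [string] $VMName,
        # In the Test Pane, supply the *name* of a Credential Asset as a string here;
        # Azure Automation swaps it for the actual PSCredential at runtime.
        [PSCredential] $Credential
    )

    "Connecting to $VMName as $($Credential.UserName)"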


Writing Good Tests In R

Brian Rowe discusses testing strategy in R:

It’s not uncommon for tests to be written at the get-go and then forgotten about. Remember that as code changes or incorrect behavior is found, new tests need to be written or existing tests need to be modified. Possibly worse than having no tests is having a bunch of tests spitting out false positives. This is because humans are prone to habituation and desensitization. It’s easy to become habituated to false positives to the point where we no longer pay attention to them.

Temporarily disabling tests may be acceptable in the short term. A more strategic solution is to optimize your test writing. The easier it is to create and modify tests, the more likely they will be correct and continue to provide value. For my testing, I generally write code to automate a lot of wiring to verify results programmatically.

I went into this article with almost no idea how to test R code. I still don’t…but this article does help. I recommend reading it if you want to write production-quality R code.


MSTest Output

Richie Lee shows how to use TestContext.WriteLine to debug MSTest runs:

Unit testing can sometimes throw up unexpected issues that are entirely concerned with the unit tests and not the application under test. Recently I’ve been having issues with DateTime formats affecting the results of the unit tests: the format of the DateTime was different on the build server than it was on my local machine. The stored procedure could not be altered (because reasons*). The datetime formats on both SQL and Windows were identical to those on my machine, and running the test itself on the build box caused the test to go green, so clearly there was something funky going on in the build process. So I needed to find a way of logging the values during the process to verify where the issue was. And, as luck would have it, MSTest has such a method to help you. First, create a private instance of the TestContext object in the TestClass; then create a public property.

This turns out to be really easy to do.


Pester For Configuration Checks

Rob Sewell talks about SQL Server configuration testing using Pester, a PowerShell test framework:

Occasionally, for reasons I haven’t explored, I had to test against the value property of the returned object:

          It "The Full User Database Backup should be scheduled Weekly $OlaUserFullSchedule" {
$Return.OlaUserFullSchedule.value | Should Be $OlaUserFullSchedule
}

I wanted to be able to run the tests against environments or groups of servers with different default values, so I parameterised the test results as well. The logical next step was to turn it into a function so that I could do some parameter splatting. This also gives me the opportunity to show all of the things for which I am currently passing parameters to the test
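As a rough sketch of that approach (illustrative only, not Rob’s actual function from GitHub), wrapping the checks in a parameterised function and then splatting per-environment defaults into it might look like this:

    # Sketch only: in the real tests $Return comes from querying the instance;
    # here it is stubbed so the example stands on its own.
    function Test-OlaBackupConfiguration {
        param (
            [string] $SqlInstance,
            [string] $OlaUserFullSchedule = 'Weekly'
        )

        $Return = [pscustomobject]@{ OlaUserFullSchedule = @{ value = 'Weekly' } }

        Describe "Ola Hallengren backup configuration on $SqlInstance" {
            It "The Full User Database Backup should be scheduled $OlaUserFullSchedule" {
                $Return.OlaUserFullSchedule.value | Should Be $OlaUserFullSchedule
            }
        }
    }

    # Parameter splatting: each environment keeps its own hashtable of defaults.
    $ProdDefaults = @{
        SqlInstance         = 'PRODSQL01'
        OlaUserFullSchedule = 'Weekly'
    }
    Test-OlaBackupConfiguration @ProdDefaults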

This is a nice walkthrough of Rob’s contribution, available on GitHub.


Distributed Unit Testing

Cloudera shows off their distributed unit testing framework:

This distributed testing infrastructure started out as a Cloudera hackathon project in 2014. Todd Lipcon and I worked on a shared backend for running test tasks on a cluster, with Todd focusing on onboarding the Apache Kudu (incubating) tests, and myself on Apache Hadoop. Our prototype implementation reduced the runtime of the 1,700+ Hadoop unit tests from 8.5 hours to 15 minutes.

Since then, we’ve spent time improving the infrastructure and on-boarding additional projects. Besides Kudu and Hadoop, our distributed testing infrastructure is also being used by our Apache Hive and Apache HBase teams. We can now run all the Hadoop unit tests in less than 10 minutes!

Finally, we’re happy to announce that both our infrastructure and code are public! You can browse the web UI at http://dist-test.cloudera.org and see all the source code (ASLv2 licensed) in the cloudera/dist_test GitHub repository. This infrastructure is already being used at upstream Apache to run the Kudu pre-commit tests.

This is an interesting look at how to scale out unit tests.  It’s a bit of a long read (especially with all the videos) but worth your time.


Testing Cumulative Updates

Kendra Little discusses how to test cumulative updates:

Install the cumulative update into a non-production environment (prod test or staging) and run application activity against it.

Ideally, you want this to be representative activity for your real, production applications, run from multiple application servers. You also want the hardware and storage to be as similar as possible.

If that’s not possible for you to set up, you need some sort of tool to generate concurrent activity. There are a lot of options — Michael Swart gives a great run down here.
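As a crude illustration (this is not from Kendra’s post, and purpose-built load tools do this better), you can fake concurrent activity with plain PowerShell background jobs replaying a workload script; the instance name, database, and script path below are placeholders:

    # Spin up several workers, each replaying the same workload repeatedly
    # against the patched test instance.
    $workers = 1..8 | ForEach-Object {
        Start-Job -ScriptBlock {
            param($Instance, $Database, $ScriptPath)
            Import-Module SqlServer
            1..100 | ForEach-Object {
                Invoke-Sqlcmd -ServerInstance $Instance -Database $Database -InputFile $ScriptPath
            }
        } -ArgumentList 'TESTSQL01', 'AppDb', 'C:\workload\workload.sql'
    }

    # Wait for all workers and collect any errors they hit.
    $workers | Wait-Job | Receive-Job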

This is some very good advice, and boils down to “test everything that you do in production.”  It’s easy to forget some of these things, though, so I’m glad Kendra enumerated them.


The Importance Of Integration Testing

Michael Bourgon shows an example of why integration testing is important:

We are in process of doing a migration from an ancient creaky server to a shiny new VM.  Rather than just rebuild it and restore everything, we’re taking the (painful) opportunity to clean things up and improve several systems.

As part of this, we’re replicating data from the old server to the new server, so that we can migrate processes piecemeal, so that rollback is not “OH CRAP TURN IT OFF TURN IT OFF ROLL BACK TO THE OLD SERVER”.

But we ran into a weird problem.  On the target server, we had a many-to-many table that sits between, let’s say, stores and orders.  We have a stores table, we have an orders table, and this one (call it STORE_ORDERS for simplicity) is just a linking table between the two.  ID, stores_id, orders_id.  Everything scripted identically between the two databases (aside from the NOT FOR REPLICATION flag).

This is a case where action A works fine and action B works fine, but the combination of actions A and B leads to sadness.


SQLCover

Ed Elliott has upgraded his T-SQL code coverage tool:

What is code coverage?

Code coverage is a way to see how many statements in your database code have been executed when you ran your tests(s). It is a way to see how well covered with tests a particular area is – the better covered with tests, the less likely you will miss issues when you make changes in those areas.

What is code coverage for SQL Server?

SQL Server gives us a great tracing mechanism, either Profiler or Extended Events, which details exactly which statements have been run. SQL Server doesn’t do so well at telling us which statements we could possibly run in some code, but the Transact-SQL ScriptDom that is part of the DacFx does give us the ability to break T-SQL code into statements, so combining the two we have the ability to take a stored procedure such as:
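Ed’s post goes on to show a procedure and the statements SQLCover finds in it. As a rough sketch of just the ScriptDom half of that idea (this is not SQLCover’s code; the DLL path and sample T-SQL are placeholders), breaking a script into individual statements from PowerShell might look like this:

    # Load the ScriptDom parser that ships with the DacFx.
    Add-Type -Path 'C:\path\to\Microsoft.SqlServer.TransactSql.ScriptDom.dll'

    $sql = "SELECT 1 AS a; IF @@SERVERNAME IS NOT NULL SELECT 2 AS b;"

    $parser   = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql130Parser($true)
    $errors   = $null
    $fragment = $parser.Parse((New-Object System.IO.StringReader($sql)), [ref]$errors)

    # List the top-level statements in each batch. A coverage tool like SQLCover
    # also walks nested statement lists (procedure bodies, IF blocks, and so on),
    # then matches them against the statements the trace saw executing.
    foreach ($batch in $fragment.Batches) {
        foreach ($statement in $batch.Statements) {
            '{0} at line {1}' -f $statement.GetType().Name, $statement.StartLine
        }
    }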

This is pretty snazzy.


When To Test

Kenneth Fisher thinks about when to test:

To test or not to test. That is the question. Or is it? Shouldn’t we always test? That was the question posted here. It’s a big question so let’s break it down a bit.

My short answer: test as much as necessary to avoid two outcomes. First, you get fired (or worse); second, the inevitable bug somebody else discovers in your code is too embarrassing. Kenneth’s answer is better.
