I took the idea and parts of the code from Ravi Palihena’s blog post about ssisUnit testing and his GitHub repository. Then I read the source code of the SsisUnitTestRunner, SsisUnitTestRunnerUI and posts by Gérald and changed the tests a bit.
I will use MSTest to execute ssisUnit tests from the file 20_DataFlow.ssisUnit. For that, I created a new Visual C# > Test > Unit Test Project (.NET Framework) – ssisUnitLearning.MSTest – within the solution. I also set a reference to the SsisUnitBase.dll library and loaded the required namespaces.
Bartosz gives us the initial walkthrough, and then builds a T4 template to automate the task. You can grab that template on his GitHub repo, and hopefully something makes its way into ssisUnit to make integration with NUnit / MSTest official.
The ssisUnit GUI does not support creating the persisted dataset. If you switch the IsResultsStored flag to true on the dataset’s properties, it gives a warning during the test run: “The expected dataset’s (&lt;the dataset name&gt;) stored results does not contain data. Populate the Results data table before executing.”
To find out more about it, take a look at the source code.
This is a nice explanation of a current limitation in the tool and a workaround.
I was recently working on a .NET 4.6-based project that was using EF 6 and NUnit for unit testing. While setting up some integration tests against a local SQL database, I received this error:
Spatial types and functions are not available for this provider because the assembly ‘Microsoft.SqlServer.Types’ version 10 or higher could not be found.
We had recently been using SQL Server spatial types for tracking geographic locations, and the tests which performed updates and inserts against these fields were failing.
Read on for the setup instructions.
In tSQLt, we can call tSQLt.FakeTable and then do an insert; if we don’t use tSQLt, what do we do? Well, we need to set up the data we want, whether by using a tool or by writing a load of insert statements. I have seen this done in various ways, such as:
- Writing manual insert scripts
- Using a tool to set up the data
- Making use of APIs in the application to set up the data we need
- Some weird and wonderful things that we shouldn’t recommend
Ultimately, for each test that you do, you need to know what data you need for it. There isn’t really any way around this, and the sooner you get yourself into the position where you can set up the data you need for a test, the better.
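The principle holds regardless of whether the data lives in SQL Server or anywhere else: each test arranges exactly the rows it needs, so the expected result is known up front. As a hedged illustration of that arrange-act-assert shape (all names invented, with an in-memory list standing in for the table; in practice this would be insert scripts or an application API seeding a test database):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not from the post: the test seeds precisely the data
// it asserts on, so the expectation is exact rather than dependent on
// whatever happens to be in a shared database.
public class KnownDataTest {

    // A row in the hypothetical table.
    record Order(String customer, int amount) {}

    // Logic under test: total order amount for one customer.
    static int totalFor(List<Order> orders, String customer) {
        int sum = 0;
        for (Order o : orders) {
            if (o.customer().equals(customer)) {
                sum += o.amount();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // Arrange: set up exactly the data this test needs.
        List<Order> orders = new ArrayList<>();
        orders.add(new Order("acme", 10));
        orders.add(new Order("acme", 15));
        orders.add(new Order("other", 99));

        // Act + assert: because we control the data, 25 is the only right answer.
        if (totalFor(orders, "acme") != 25) {
            throw new AssertionError("expected 25");
        }
        System.out.println("ok");
    }
}
```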
Read the whole thing.
Imagine that I want to check that databases on the production environment use the Full Recovery Model, and in parallel I start a new check on the development environment, where I want to check for the Simple Recovery Model. If the configured value is not changed back within the correct time frame, we can end up checking for the Full Recovery Model on the development environment, where we want the Simple Recovery Model.
The first time I tried to run tests in parallel for environments that needed different configuration values, I didn’t realise this detail, so I ended up with many more failed tests than expected! The bell rang when the majority of the failed tests were from one specific check… the one whose value I had changed.
Read the whole thing before you start running Task.Parallel or even running multiple copies of dbachecks in separate PowerShell windows.
Previously we successfully prepared tests for variables and parameters using ParameterCommand. Now it’s time to communicate with the database, and for that, I will use a connection manager defined at the project level. I know from the ssisUnit tutorials that it works perfectly with package connection managers, so it’s time to verify it against projects. I will test the package 10_ProjectCM.dtsx – it just gets a single value from a table in a database and stores it in a variable. All the packages and unit tests are on my GitHub.
The package contains three SQL Tasks: the first just checks if we can communicate with the database using a SELECT 1 statement, the second gets the information from the table, and the third repeats the second at the container level.
Click through for the tests.
Today we updated the HADR tests to add the capability to test multiple availability groups and fix a couple of bugs.
Once you have installed dbachecks you will need to set some configuration so that you can perform the tests. You can see all of the configuration items and their values using
Get-DbcConfig | Out-GridView
Read on for more about these updates.
The result shows 1 test run, 1 test passed, 2 asserts run, 2 asserts passed.
Wait, what? We have prepared only one assert, why does it show two?
The second assert is: “Task Completed: Actual result (Success) was equal to the expected result (Success).” Great. Where does it come from? Let’s find out.
This is a nice introduction to the topic; if you fuss about with SSIS packages, you should check this out.
While tweaking my Invoke-DbcCheck calls, the list of -ExcludeCheck checks keeps growing and growing.
Invoke-DbcCheck -SqlInstance $Servers -ComputerName $Servers -Check $_ -ExcludeDatabase ReportServer, ReportServerTempDB -ExcludeCheck TestLastBackup, TestLastBackupVerifyOnly, LinkedServerConnection, SPN, MaintenanceSolution, SaRenamed, LastGoodCheckDb, LogShipping, InvalidDatabaseOwner -PassThru | Update-DbcPowerBiDataSource -Environment Production
Sure does make for a long command line to scroll through.
Click through to see how to save these excluded checks in a configuration file.
The method ‘testOperation’ takes the output of the operation performed on the ‘inputPair’ and checks whether it is equal to the ‘outputPair’ – and just like this, we can test our business logic.
This short snippet lets you test your business logic without even forcing you to create a Spark session. You can mock the whole streaming environment and test your business logic easily.
This was a simple example of unary operations on DStreams. Similarly, we can test binary operations and window operations on DStreams.
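The core of the ‘testOperation’ idea is that once the transformation is an ordinary function, it can be asserted on directly with no Spark session or streaming context at all. A minimal, Spark-free Java analogue of that pattern (the post itself works with Scala DStreams; all names here are illustrative):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.function.Function;

// Sketch of the "testOperation" pattern from the excerpt: the business logic
// is a plain function over (key, value) pairs, so it can be unit-tested
// without any streaming infrastructure.
public class StreamLogicTest {

    // The unary operation under test: e.g., double the count in a pair.
    static Map.Entry<String, Integer> doubleCount(Map.Entry<String, Integer> pair) {
        return new SimpleEntry<>(pair.getKey(), pair.getValue() * 2);
    }

    // testOperation: apply the operation to inputPair and check the result
    // equals outputPair, mirroring the method described in the excerpt.
    static boolean testOperation(
            Function<Map.Entry<String, Integer>, Map.Entry<String, Integer>> operation,
            Map.Entry<String, Integer> inputPair,
            Map.Entry<String, Integer> outputPair) {
        return operation.apply(inputPair).equals(outputPair);
    }

    public static void main(String[] args) {
        Map.Entry<String, Integer> input = new SimpleEntry<>("clicks", 21);
        Map.Entry<String, Integer> expected = new SimpleEntry<>("clicks", 42);
        if (!testOperation(StreamLogicTest::doubleCount, input, expected)) {
            throw new AssertionError("doubleCount did not produce the expected pair");
        }
        System.out.println("ok");
    }
}
```

The same shape extends to binary and windowed operations: they are still functions over collections of pairs, so they can be tested the same way.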
Click through for an example with code.