Press "Enter" to skip to content

Category: Testing

Generating Load For Kafka With JMeter

Anup Shirolkar shows us a way to use JMeter to generate load for Apache Kafka clusters:

The Anomalia Machina is going to require (at least!) one more thing, as stated in the intro: loading with lots of data! Kafka is a log aggregation system and operates on a publish-subscribe mechanism. The Kafka cluster in Anomalia Machina will be accumulating a lot of events which are to be processed to discover anomalies. The exact sequence of processing is still being prototyped at this point in time, but there is a solid requirement for a tool/mechanism to load the Kafka cluster with lots of data in a hurry.

The requirements pointed me in the direction of ‘Kafka Load Testing’. Thinking of load testing, one tool comes to mind that is used very widely for load testing of Java-based systems: JMeter. JMeter has a rich toolset for performing various types of testing. It also comes with many advantages, viz. open source, easy to use, platform independent, distributed testing, etc. I can use JMeter and test its ability to perform cluster loading.

Read on for the demonstration.
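Once you have a test plan that produces to Kafka (the substance of the linked post), running it headless is just JMeter's non-GUI mode. This is a generic sketch with placeholder paths and property names, not code from Anup's post:

```powershell
# Generic sketch: run an existing JMeter test plan in non-GUI mode.
# kafka-load.jmx is a hypothetical plan that produces messages to Kafka.
$plan    = 'C:\loadtests\kafka-load.jmx'
$results = 'C:\loadtests\results.jtl'

# -n = non-GUI, -t = test plan, -l = results file,
# -J = user-defined properties the plan can read via __P(threads) / __P(duration)
& jmeter -n -t $plan -l $results -Jthreads=50 -Jduration=300
```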


Testing TDE Performance

Eduardo Pivaral tests the performance of a database with Transparent Data Encryption versus that same database without encryption:

Transparent data encryption (TDE) helps you to secure your data at rest; this means the data files and related backups are encrypted, securing your data in case your media is stolen.
This technology works by implementing real-time I/O encryption and decryption, so the implementation is transparent to your applications and users.

However, this type of implementation could lead to some performance degradation since more resources must be allocated in order to perform the encrypt/decrypt operations.

In this post we will compare how much longer some of the most common DB operations take, so if you are planning to implement it on your database, you can have an idea of what to expect from different operations.

These results fit in reasonably well with what I’d heard, but it’s nice to have someone run the numbers.
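If you want to get a rough sense of the difference on your own system, a simple sketch (not Eduardo's actual harness) is to time the same operation against an encrypted and an unencrypted copy of a database; the instance, database, and table names below are placeholders:

```powershell
# Rough sketch: time the same workload against an encrypted and an
# unencrypted copy of a database. Requires the SqlServer module;
# server, database, and table names are placeholders.
Import-Module SqlServer

$query = 'SELECT COUNT(*) FROM dbo.SalesOrderDetail;'   # any representative workload

foreach ($db in 'SalesDb_Plain', 'SalesDb_TDE') {
    $elapsed = Measure-Command {
        Invoke-Sqlcmd -ServerInstance 'localhost' -Database $db -Query $query
    }
    '{0}: {1:N0} ms' -f $db, $elapsed.TotalMilliseconds
}
```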


Building Observable Distributed Systems

Kevin Sookocheff has some thoughts on building observable systems:

Given the shortcomings of monitoring and testing, we should shift focus to building observable systems. This means treating observability of system behaviour as a primary feature of the system being built, and integrating this feature into how we design, build, test, and maintain our systems. This also means acknowledging that the ease with which we can debug our production environment will be a key indicator of system reliability, scalability, and ultimately customer experience. Designing a system to be observable requires effort from three disciplines of software development: development, testing, and operations. None of these disciplines is more important than the others, and the sum of them is greater than the value of the individual parts. Let’s take some time to look at each discipline in more detail, with a focus on observability.

My struggle has never been with the concept, but rather with getting the implementation details right.  “Make everything observable” is great until you run out of disk space because you’re logging everything.


Combining Parameters And Prompts In PowerShell

Shane O’Neill compromises with a co-worker on building a PowerShell function:

As you may have guessed…

…this did not go down well with my co-worker.

That?! How are we supposed to know to write "!?" there? How does that even mean help anyway? Can we not just get the help message to show up on its own!

To which I said…

We can but it’s going to create a massive bottleneck! If we try and put in the help messages then it’s going to slow things down eventually because we’re not going to be able to run this without someone babysitting it! Do you want to be the person that has to sit there watching this every single time it’s run?

Compromise is a wonderful thing though and we eventually managed to merge the best of both worlds…

Shane gives us a solution and has a couple of updates for simpler solutions to boot.
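As a rough illustration of that kind of compromise (a minimal sketch, not Shane's actual function), you can accept the value as a normal parameter for unattended runs and only fall back to a prompt when it isn't supplied:

```powershell
# Minimal sketch: parameter for unattended runs, friendly prompt otherwise.
# Get-ServerInfo and its parameter are hypothetical names for illustration.
function Get-ServerInfo {
    [CmdletBinding()]
    param (
        [string]$SqlInstance
    )

    # Only prompt when the caller didn't supply a value.
    if (-not $PSBoundParameters.ContainsKey('SqlInstance')) {
        $SqlInstance = Read-Host -Prompt 'Which SQL instance do you want to query?'
    }

    "Querying $SqlInstance..."
}

Get-ServerInfo -SqlInstance 'SQL01'   # scripted: no prompt, no babysitting
Get-ServerInfo                        # interactive: prompts with a readable message
```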


An Update To ssisUnit

Bartosz Ratajczyk has added some functionality to ssisUnit:

Second – you can get and set the properties of the project and its elements. Like – overwriting project connection managers (I designed it with this particular need in mind). You can now set the connection string to a different server (or database) – in the PropertyPath of the PropertyCommand use \Project\ConnectionManagers, write the name of the connection manager with the extension, and use one of the Properties. You can do it during the Test setup (or all tests setup), but not during the test suite setup, as ssisUnit is not aware of the project until it loads it into memory.

Good on Bartosz for resurrecting a stable but moribund project and adding some enhancements.


Testing AG Read-Only Routing

Jess Pomfret shows us how we can use dbatools to test Availability Group read-only routing:

The other part I needed to set up was read-only routing, which enables SQL Server to reroute those read-only connections to the appropriate replica. You can also list the read-only replicas by priority if you have multiple available, or you can group them to enable load-balancing.

Although this seems to be set up correctly so that connections that specify an application intent of read-only will be routed to the secondary node, I wanted to prove it.
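One low-level way to check this by hand (not the dbatools approach Jess demonstrates) is to connect to the listener with and without ApplicationIntent=ReadOnly and see which replica answers; the listener and database names below are placeholders:

```powershell
# Check read-only routing by hand: connect to the AG listener with each
# application intent and see which replica answers @@SERVERNAME.
# Listener and database names are placeholders.
foreach ($intent in 'ReadWrite', 'ReadOnly') {
    $connectionString = "Server=AG-Listener;Database=MyAgDatabase;Integrated Security=True;ApplicationIntent=$intent"
    $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
    $connection.Open()

    $command = $connection.CreateCommand()
    $command.CommandText = 'SELECT @@SERVERNAME;'
    '{0,-9} -> {1}' -f $intent, $command.ExecuteScalar()

    $connection.Dispose()
}
```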

Read on to see how Jess is able to prove it.


Writing ssisUnit Tests With C#

Bartosz Ratajczyk shows us how to create ssisUnit tests in MSTest with C#:

In the post about using the MSTest framework to execute ssisUnit tests, I used parts of the ssisUnit API model. If you want, you can write all your tests using this model, and this post will guide you through the first steps. I will show you how to write one of the previously prepared XML tests using C# and (again) MSTest.

Why MSTest? Because I don’t want to write some application that will contain all the tests I want to run and display whether they pass or not. When I write the MSTest tests, I can run them using the Test Explorer in VS, using a command line, or in TFS.
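For the command-line route, something like the following works against the compiled test assembly (the paths are placeholders and depend on your Visual Studio edition):

```powershell
# Run the compiled MSTest assembly from the command line.
# Paths are placeholders; vstest.console.exe ships with Visual Studio.
$vstest = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe"

& $vstest 'C:\src\SsisTests\bin\Debug\SsisTests.dll' /logger:trx
```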

UIs are great for learning how to do things and for one-off actions, but writing code scales much better in terms of time.


Testing BI Projects With NBi: Hierarchies And Levels

Cedric Charlier shows us a potential pain point when testing an Analysis Services cube with hierarchies defined:

When executing the following query (on the Adventure Works 2012 sample database/cube), you’ll see two columns in the result displayed by SSMS. It’s probably what you’re expecting: you’re only selecting one specific level of the hierarchy [Date].[Calendar Date] and one measure.

You’re probably expecting that NBi will also consider two columns. Unfortunately, that’s not the case: NBi will consider 4 columns! What are the additional and unexpected columns? The [Date].[Calendar].[Calendar Year] and [Date].[Calendar].[Calendar Semester] are also returned. In reality, this is not something specific to NBi; it’s just the standard behaviour of the ADOMD library, and SSMS is cheating when it displays only one column for the level!
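You can see those extra level columns outside of SSMS by running an MDX query through ADOMD directly; in this sketch the assembly path, connection string, and query are placeholders rather than Cedric's exact example:

```powershell
# Inspect the columns ADOMD actually returns for a level-based MDX query.
# Assembly path, connection string, and MDX below are placeholders.
Add-Type -Path 'C:\lib\Microsoft.AnalysisServices.AdomdClient.dll'

$connection = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection 'Data Source=localhost;Catalog=Adventure Works DW 2012'
$connection.Open()

$command = $connection.CreateCommand()
$command.CommandText = @'
SELECT [Measures].[Internet Sales Amount] ON COLUMNS,
       [Date].[Calendar].[Calendar Semester].Members ON ROWS
FROM [Adventure Works]
'@

$reader = $command.ExecuteReader()
"Columns returned: $($reader.FieldCount)"                    # more than the two SSMS displays
0..($reader.FieldCount - 1) | ForEach-Object { $reader.GetName($_) }

$reader.Dispose()
$connection.Dispose()
```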

Click through for the solution.  And if NBi sounds interesting, check out Cedric’s prior post on the topic.


Test-Driven Database Development

Haroon Ashraf walks us through a simplified example of test-driven database development:

In TDDD, business requirements are encapsulated in database unit tests.

If the requirement is adding a new category to the Category table, it is necessary to implement TDDD according to the following steps:

  1. Create a database unit test to check for the existence of the AddNewCategory database object.

  2. The unit test fails because the database object does not exist.

  3. Create the AddNewCategory object so that the unit test passes.

  4. A unit test determines whether the AddNewCategory stored procedure actually adds a new category or not.

  5. That unit test also fails.

  6. The AddNewCategory procedure code is changed to add a new category, which is verified by rerunning the unit test; it now passes.
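Purely as an illustration of the first red test, here is the same idea expressed as a Pester test (the article itself uses database unit tests rather than Pester, and the instance and database names are placeholders):

```powershell
# Step 1 sketched as a Pester test: it fails until dbo.AddNewCategory exists.
# Requires the Pester and SqlServer modules; names are placeholders.
Import-Module SqlServer

Describe 'AddNewCategory' {
    It 'exists as a stored procedure' {
        $query  = "SELECT COUNT(*) AS cnt FROM sys.procedures WHERE name = 'AddNewCategory';"
        $result = Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'StoreDb' -Query $query
        $result.cnt | Should -Be 1
    }
}
```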

Laying out my biases, I’m not a fan of TDD for application development and definitely not a fan of it for database development.  “Unit testing” inside a database is extremely limited, particularly when there are so many side effects and encapsulation tends to be actively harmful.


Parsing T-SQL Scripts With Pester

Rob Sewell shows us how to use Pester to ensure that a set of SQL scripts are valid T-SQL:

This is a quick Pester test I wrote to ensure that some SQL scripts in a directory would parse, so there was some guarantee that they were valid T-SQL. It uses SQLParser.dll, and because it was running on a build server without SQL Server, I had to load the required DLLs from the dbatools module (thank you, dbatools 🙂 )

It simply runs through all of the .sql files, runs the parser against them, and checks for errors. In the case of failures, it will also include where the script failed in the error message of the failed Pester result.
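A similar check can be sketched with the ScriptDom parser (Rob's post loads SQLParser.dll from dbatools instead, so the API details differ a little); the DLL and script paths below are placeholders:

```powershell
# Sketch of a parse-only Pester check using the ScriptDom parser.
# DLL and script directory paths are placeholders.
Add-Type -Path 'C:\lib\Microsoft.SqlServer.TransactSql.ScriptDom.dll'

Describe 'T-SQL scripts parse cleanly' {
    foreach ($file in Get-ChildItem -Path 'C:\src\sql' -Filter *.sql) {
        It "parses $($file.Name)" {
            $parser = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql140Parser($true)
            $errors = $null
            $reader = New-Object System.IO.StreamReader($file.FullName)
            $null   = $parser.Parse($reader, [ref]$errors)   # populates the parse error list
            $reader.Dispose()

            $errors.Count | Should -Be 0
        }
    }
}
```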

This particular example doesn’t ensure that the scripts do what you want them to do, but hey, Pester was built for that as well.
