
Category: Integration Services

Conditional Processing

Bill Fellows shows options for handling date-based conditional processing:

Do you see the problem? Really, there are two, but the one I’m focused on is the use of GETDATE to determine which branch of logic is executed. Today is Monday and I need to test the logic that runs on Friday. Yes, I can run these steps in isolation, and given that I’m not updating the logic that fiddles with the branches, my change shouldn’t have an adverse effect, but by golly, that sucks from a testing perspective. It’s also really hard to develop unit tests when your input data is the server date. What are you going to do, allocate 5 to 7 days for testing or change the server clock? I believe the answer is No and OH HELL NAH!

This isn’t just an SSIS thing, either. I’ve seen the above logic in TSQL as well. If you pin your logic to getdate/current_timestamp calls, then your testing is going to be painful.

I liken this to solving dependency injection problems in general:  make the caller define the date or date part.  That way, your test callers can define other dates and the “smarts” around which branch to take move up to a more swappable layer.
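
Here’s the idea as a quick T-SQL sketch (the procedure and parameter names are mine, not Bill’s): the caller supplies the date, and only the outermost production caller falls back to the server clock.

-- Hypothetical example: the branch decision takes the run date as an input
-- instead of reading GETDATE() internally.
CREATE PROCEDURE dbo.RunDailyProcess
    @RunDate date = NULL    -- production callers omit this; tests pass a fixed date
AS
BEGIN
    SET @RunDate = COALESCE(@RunDate, CAST(GETDATE() AS date));

    IF DATENAME(WEEKDAY, @RunDate) = 'Friday'
        EXEC dbo.RunFridayBranch @RunDate = @RunDate;    -- placeholder child procedure
    ELSE
        EXEC dbo.RunWeekdayBranch @RunDate = @RunDate;   -- placeholder child procedure
END;
GO

-- A test can now exercise the Friday branch on a Monday:
EXEC dbo.RunDailyProcess @RunDate = '2016-04-15';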


SSIS Performance Testing

Koen Verbeeck shows a framework he uses for performance testing in Integration Services:

The proc passes the @RunID parameter to the package, as well as other usual suspects, such as the package name, folder name and project name. You can also choose if a package is run synchronously or asynchronously. When run synchronously, the stored procedure doesn’t finish until the package is finished as well.

Using this stored procedure, it is easy to run a package multiple times in a row using a WHILE loop.
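
That loop might look something like this (the wrapper procedure name and parameters here are placeholders; Koen’s framework defines its own):

-- Run the same package ten times in a row, tagging each execution with a RunID.
DECLARE @i int = 1;

WHILE @i <= 10
BEGIN
    EXEC dbo.RunPackage                      -- hypothetical wrapper around the SSIS catalog
        @FolderName  = N'PerformanceTests',
        @ProjectName = N'MyProject',
        @PackageName = N'LoadFact.dtsx',
        @RunID       = @i,
        @Synchronous = 1;                    -- wait for this run to finish before starting the next

    SET @i += 1;
END;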

Also of interest is Andy Leonard’s SSIS Performance site, whose goal is to set up some performance benchmarks for Integration Services.


Lineage Improvements

Andy Leonard shows LineageId improvements over the years:

SSIS 2016 CTP3.3 offers a solution. First, there are now two new columns in the SSIS Data Flow Component Error Output: ErrorCode – Description and ErrorColumn – Description.

The new columns provide extremely useful (plain language) error metadata that will, in my opinion, greatly reduce the amount of time required to identify data-related load failures in the Data Flow Task.

But that’s not all. If you configure a log to capture the DiagnosticEx event, you will receive a message that provides the Data Flow column ID. To have a look, add a new log to an SSIS package that contains a configured Data Flow Task. On the Details tab, select the DiagnosticEx event.
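
If the package log goes to the SQL Server log provider, those DiagnosticEx messages land in dbo.sysssislog in whichever database the log connection points at, so a rough query like this pulls them back out:

-- Find DiagnosticEx entries; the message text contains the column ID-to-name mapping.
SELECT   [source],      -- the task that emitted the event
         [message]      -- XML mapping Data Flow column IDs to column names
FROM     dbo.sysssislog
WHERE    [event] = N'DiagnosticEx'
ORDER BY starttime DESC;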

Backtracing LineageId was a painful experience for me, so I’m happy that they’re making this better.


Copying SSIS Packages

Andy Galbraith shows how to copy SSIS packages using DTUTIL:

A frequent need when performing a server migration is to copy the SSIS packages from one server to a new server.  There are a couple of different ways to do this, including a wizard in SSMS. (See https://www.mssqltips.com/sqlservertip/2061/how-to-manage-ssis-packages-stored-in-multiple-sql-server-database-instances/).  The catch to this is that these are manual and they only move one package at a time.

I recently had to migrate a server with over twenty packages, and I knew I didn’t want to click-click-click over and over again.  🙂

The best answer would be to have your packages safe and secure in source control, but sometimes that’s not an option.
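
If the packages live in msdb, one way to avoid the clicking (a sketch, not necessarily Andy’s exact approach) is to generate the dtutil commands with T-SQL and paste the output into a command prompt:

-- Generate one dtutil copy command per package stored in msdb.
-- Server names are placeholders and this assumes a single folder level,
-- so sanity-check the generated commands before running them.
SELECT 'dtutil /SourceServer OLDSERVER /SQL "\' + f.foldername + '\' + p.name + '"'
     + ' /COPY SQL;"\' + f.foldername + '\' + p.name + '"'
     + ' /DestServer NEWSERVER /Quiet' AS dtutil_command
FROM   msdb.dbo.sysssispackages AS p
       JOIN msdb.dbo.sysssispackagefolders AS f
         ON p.folderid = f.folderid;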


Single-Package Deployments In SSIS 2016

Andy Leonard looks at what happens when your first SSIS project deployment is a single-package deployment:

To test, I created a new SSIS project named “DeploymentTest1” and added three simple SSIS packages. I right-clicked Package3.dtsx and clicked Deploy Package. The Integration Services Deployment Wizard started, as expected. Since I was deploying to a pristine Catalog, I created a new Catalog Folder named “Deployment”. There were no projects in my catalog, and I was curious how the Deployment Wizard would handle this.

Andy notes that this works, but you might want to stick to the tried-and-true method of deploying entire projects and naming your catalog project the same as your SSDT project.


Incremental Loading Using Datetime Columns

Reza Rad shows a pattern for implementing incremental loads using modified dates:

The idea behind this method is to store the latest ETL run time in a config or log table, and then in the next ETL run just load records from the source table whose modified date is greater than or equal to the latest ETL run datetime. This will create the change set for the data table. The change set might contain inserted, updated, or deleted records. To identify which change happened on each record, you need to compare the change set with existing records and separate inserted, updated, and deleted records. This change set, with the action on each record, can be inserted into staging tables and then used to apply the appropriate action on the fact table.
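
A minimal T-SQL sketch of the pattern (table and column names are illustrative):

-- Read the watermark from a control table, stage the change set, then advance the watermark.
DECLARE @LastRunTime    datetime2(3),
        @CurrentRunTime datetime2(3) = SYSDATETIME();

SELECT @LastRunTime = LastSuccessfulRunTime
FROM   etl.LoadControl
WHERE  TableName = N'Sales';

-- Only rows modified since the previous successful run go into staging.
INSERT INTO stg.Sales (SaleID, Amount, ModifiedDate)
SELECT SaleID, Amount, ModifiedDate
FROM   src.Sales
WHERE  ModifiedDate >= @LastRunTime;

-- After the change set has been applied to the fact table, record the new watermark.
UPDATE etl.LoadControl
SET    LastSuccessfulRunTime = @CurrentRunTime
WHERE  TableName = N'Sales';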

In my experience, the hardest part about this is making sure people update ModifiedTime when they update rows in the table.


MSDTC + SSIS

Kenneth Fisher discusses how to get MSDTC working with Integration Services:

tl;dr; The MSDTC service has to be not only turned on, but configured on all of the machines involved. Including the machine running the SSIS package (possibly a workstation).

Configuring remote MSDTC, sure.  Configuring local MSDTC, though, is something I hadn’t realized was important, at least if you want to use SSIS transactions.  Probably goes to show how often I use SSIS transactions…


Populating Lookup Query Using A Variable

Meagan Longoria shows us how to use a variable to populate a lookup query in SSIS:

I already had my data flow populated with the lookup for MSA. I set it to full cache and entered a query in the connection to initially populate the fields that would be returned (simply my lookup query without the where clause).

Next, I opened the data flow properties, located Expressions and clicked on the ellipses to open the Property Expression  Editor. I found the SQLCommand property for my MSA lookup and set it to my package variable that contained my query.
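
To make the trick concrete (the queries and names below are illustrative, not Meagan’s): the lookup connection holds a plain query at design time so the column metadata resolves, and the package variable holds the filtered version that the expression swaps into the SQLCommand property at runtime.

-- Design-time query entered in the lookup (no WHERE clause; it just defines the columns):
SELECT MSAKey, MSAName, StateCode
FROM   dbo.MSA;

-- Example of what the package variable might contain when the package actually runs:
SELECT MSAKey, MSAName, StateCode
FROM   dbo.MSA
WHERE  StateCode = 'CO';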

I had issues in the past with full cached lookups and variables.  Fortunately, you can get around a lot of problems with expressions.


Type 6 Dimensions With BIML

Meagan Longoria shows us type 6 dimensions with BIML:

In my previous post, I provided the design pattern and BIML for a pure Type 2 Slowly Changing Dimension (SCD). When I say “pure Type 2 SCD”, I mean an ETL process that adds a new row for a change in any field in the dimension and never updates a dimension attribute without creating a new row.  In practice, I tend to create more hybrid Type 2 SCDs where updates to some attributes require a new row and others update the value on the existing rows. A similar pattern that I find I implement more often than a pure Type 2 is a Type 6 SCD. A Type 6 SCD builds on the Type 2 technique by adding current attributes alongside the historical attributes so related measures can be grouped by the historical or current dimension attribute values. The only difference between what I call a hybrid Type 2 and a Type 6 is that in the Type 6, there are no Type 1 attributes in the dimension that do not also have a Type 2 version in the dimension to capture the historical values.
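
A rough DDL sketch of what that means (column names are illustrative): each Type 2 attribute gets a companion “current” column that is overwritten in place on every row for that member.

-- Illustrative Type 6 customer dimension: historical (Type 2) and current-value columns side by side.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey        int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerNaturalKey int          NOT NULL,
    City               varchar(50)  NOT NULL,   -- value as of this row's effective dates (Type 2)
    CurrentCity        varchar(50)  NOT NULL,   -- updated in place on all of the member's rows (Type 6)
    RowEffectiveDate   date         NOT NULL,
    RowExpirationDate  date         NOT NULL,
    IsCurrentRow       bit          NOT NULL
);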

Dear Mr. President:  there are too many types these days.  Please eliminate three.  I am NOT a crackpot.


Collecting ETL Metrics

Andy Leonard has a long and useful post on collecting ETL metrics in SQL Server 2016:

“In an age of the SSIS Catalog, why would one ever employ this kind of metadata collection, Andy?” That’s a fair question. The SSIS Catalog is an awesome data integration execution, logging, and externalization engine. There are a handful of use cases, though, where enterprises may opt to continue to execute SSIS packages from the file system or the MSDB database. Perhaps the biggest reason to do so is that’s the way the enterprise is currently executing SSIS. When SSDT-BI converts pre-Catalog-era (2005, 2008, 2008 R2) SSIS packages to current, it imports these packages in a “Package Deployment Model” SSIS Project. This allows developers to upgrade the version of their SSIS project to SSIS 2016 (and enjoy many benefits for so doing) while continuing to execute SSIS packages in the file system. Kudos to the Microsoft SSIS Development Team for this backwards compatibility!
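
For packages still run from the file system or msdb, the kind of collection Andy describes can start with something as simple as an instrumentation table that each package writes to via Execute SQL Tasks (a minimal sketch, not Andy’s schema):

-- Each package inserts a row in its first task and updates it in its last.
CREATE TABLE dbo.PackageExecutionLog
(
    ExecutionLogID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    PackageName     nvarchar(260) NOT NULL,
    StartTime       datetime2(3)  NOT NULL DEFAULT SYSDATETIME(),
    EndTime         datetime2(3)  NULL,
    ExecutionStatus nvarchar(20)  NOT NULL DEFAULT N'Running',
    RowsLoaded      int           NULL
);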

Andy asks the question I wanted to ask and gives a good answer.
