This first of three posts takes a quick peek at SSRS 2016 using the Community Technology Preview (CTP) 3.2. I will make a quick post-installation tweak and then guide you through the steps to build your first report. If you are experienced with SSRS, you can probably just scan this post to see the differences in 2016. See the earlier post Installing SQL Server 2016 for details on the install I did prior to working on this post.
Installation and configuration don’t look radically different from the last couple of versions, but it’s good to get a refresher on the topic.
Anyway, on to my original topic. I know most shops aren’t Comcast, where we were deploying multiple servers a week. However, automating your installation process has a lot of other benefits: you don’t have to worry about misconfigurations unless you put them into your script. It also forces you to adopt a standard drive-letter approach on all of your servers, so when something breaks, you know exactly where to go. And, more importantly, you save 15 minutes of clicking Next.
For any shop that deploys SQL Server more than a couple of times every few years, automating the installation process is a smart move. Even if you rarely deploy, the consistency benefits make it worthwhile.
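As a minimal sketch of what that automation can look like, here is an illustrative unattended-install configuration file, which you would pass to setup with `setup.exe /ConfigurationFile=ConfigurationFile.ini`. The feature list, sysadmin account, and drive letters below are assumptions for illustration, not a recommendation:

```ini
; ConfigurationFile.ini -- illustrative values only
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE,RS
INSTANCENAME="MSSQLSERVER"
SQLSYSADMINACCOUNTS="DOMAIN\DBAGroup"
; A standardized drive layout, identical on every server
INSTALLSQLDATADIR="D:\SQLData"
SQLUSERDBLOGDIR="L:\SQLLogs"
SQLTEMPDBDIR="T:\TempDB"
IACCEPTSQLSERVERLICENSETERMS="True"
QUIET="True"
```

Because the file is checked into source control and reused on every build, every server comes out with the same layout, which is exactly the consistency benefit described above.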
Note that the logical reads are exactly the same and neither query is doing physical reads (the execution plans are identical; the optimizer doesn’t care which locks you are using). The queries were run with SET STATISTICS IO, TIME OFF and execution plans turned off, just to reduce influences on duration and CPU.
The database engine simply has to do more work here. Locking the pages in the clustered index is less work than locking each of the 1,825,433 rows.
Even though individual row locks are more granular, the longer query runtimes they cause will typically lead to more blocking down the road.
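A quick way to observe this trade-off yourself is with table hints. This is a hedged sketch, assuming a large table named dbo.Sales (the name is invented); the hints themselves are standard T-SQL:

```sql
SET STATISTICS IO, TIME ON;

-- One shared table lock: the least work for the lock manager.
SELECT COUNT(*) FROM dbo.Sales WITH (TABLOCK);

-- Shared page locks: one lock per page of the clustered index.
SELECT COUNT(*) FROM dbo.Sales WITH (PAGLOCK);

-- Shared row locks: one lock per row, so the most CPU and duration.
SELECT COUNT(*) FROM dbo.Sales WITH (ROWLOCK);

SET STATISTICS IO, TIME OFF;
```

On a table of this size you should see identical logical reads for all three queries, while CPU time and duration climb as the locks get more granular.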
Kendra follows up with several optimization possibilities, so read the whole thing.
In conclusion, we see a significant reduction in disk space and IO when sparse columns are used, and they perform slightly better than non-sparse columns in our simple data modification tests (note that retrieval performance should also be considered; perhaps the subject of another post).
Sparse columns have a lot of potential value, but in my experience they fall short in one huge way: you cannot compress tables with sparse columns. Given that sparse columns and data compression both benefit from scale, it’s important to make the right choice up front.
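To see the limitation for yourself, here is a hedged sketch (the table and column names are invented): creating a sparse column succeeds, but attempting to compress the table should then fail with an error indicating that the table contains sparse columns:

```sql
-- A mostly-NULL column is the classic candidate for SPARSE.
CREATE TABLE dbo.SensorReadings
(
    ReadingId int IDENTITY(1,1) PRIMARY KEY,
    Reading   decimal(10, 2) SPARSE NULL
);

-- This REBUILD is expected to fail: data compression and
-- sparse columns cannot be combined on the same table.
ALTER TABLE dbo.SensorReadings
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```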
SSIS 2016 CTP 3.3 offers a solution. First, there are two new columns in the SSIS Data Flow Component Error Output, ErrorCode - Description and ErrorColumn - Description:
The new columns provide extremely useful (plain language) error metadata that will, in my opinion, greatly reduce the amount of time required to identify data-related load failures in the Data Flow Task.
But that’s not all. If you configure a log to capture the DiagnosticEx event, you will receive a message that provides the Data Flow column ID. To have a look, add a new log to an SSIS package that contains a configured Data Flow Task. On the Details tab, select the DiagnosticEx event:
Backtracing LineageId was a painful experience for me, so I’m happy that they’re making this better.
A frequent need when performing a server migration is to copy the SSIS packages from the old server to the new one. There are a couple of different ways to do this, including a wizard in SSMS (see https://www.mssqltips.com/sqlservertip/2061/how-to-manage-ssis-packages-stored-in-multiple-sql-server-database-instances/). The catch is that these approaches are manual and only move one package at a time.
I recently had to migrate a server with over twenty packages, and I knew I didn’t want to click-click-click over and over again. 🙂
The best answer would be to have your packages safe and secure in source control, but sometimes that’s not an option.
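For packages stored in MSDB, one scripted alternative is to read the package definitions straight out of the catalog tables. This is a sketch, not a full migration script: the msdb tables are real, but you would still need to write each XML payload out to a file and redeploy it on the new server (for example with the dtutil utility):

```sql
-- Packages in the MSDB package store live in msdb.dbo.sysssispackages.
-- Pull each package's definition as XML so it can be exported in bulk.
SELECT  f.foldername,
        p.[name],
        CAST(CAST(p.packagedata AS varbinary(max)) AS xml) AS package_xml
FROM    msdb.dbo.sysssispackages AS p
        JOIN msdb.dbo.sysssispackagefolders AS f
            ON p.folderid = f.folderid
ORDER BY f.foldername, p.[name];
```

Looping over this result set replaces the click-click-click of the wizard with one repeatable script, which also doubles as a crude backup of the packages.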