Press "Enter" to skip to content

Month: July 2016

Handling Delimited Strings

Ed Pollack looks at several methods of creating delimited strings, and then several methods of breaking out delimited strings:

The execution plan is cut off, but you can be assured that there are six more similar plans below the ones pictured here. These metrics are misleading as each loop doesn’t seem too bad, right? Just 9% of the subtree cost or a few hundred reads doesn’t seem too wild, but add up all of these costs and it becomes clear that this won’t scale. What if we had thousands of rows to iterate through? For 5,000 rows, we would be looking at about 147,995,000 reads! Not to mention a very, very long execution plan that is certain to make Management Studio crawl as it renders five thousand execution plans.

Alternatively, we could cache all of the data in a temp table first, and then pull it row-by-row. This would result in significantly fewer reads on the underlying sales data, outperforming cursors by a mile, but would still involve iterating through the temp table over and over. For the scenario of 5,000 rows, we’d still have an inefficient slog through a smaller data set, rather than crawling through lots of data. Regardless of method, it’s still navigating quicksand either way, with varying amounts of quicksand.

This is a good article, and it covers the STRING_SPLIT and OPENJSON methods new to SQL Server 2016, though it does not cover building your own CLR methods.  Check it out if you need to do any delimited string manipulation.
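For reference, here is a minimal sketch of the set-based splitting options mentioned above, assuming SQL Server 2016; the string and delimiter are just placeholders:

-- STRING_SPLIT and OPENJSON both require database compatibility level 130.
DECLARE @csv nvarchar(max) = N'alpha,beta,gamma,delta';

-- Set-based split with STRING_SPLIT; the output column is named value.
SELECT value
FROM STRING_SPLIT(@csv, N',');

-- The same split via OPENJSON, by wrapping the list as a JSON array first
-- (this assumes the values contain no embedded quotes).
SELECT [value]
FROM OPENJSON(N'["' + REPLACE(@csv, N',', N'","') + N'"]');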


Writing Good Tests In R

Brian Rowe discusses testing strategy in R:

It’s not uncommon for tests to be written at the get-go and then forgotten about. Remember that as code changes or incorrect behavior is found, new tests need to be written or existing tests need to be modified. Possibly worse than having no tests is having a bunch of tests spitting out false positives. This is because humans are prone to habituation and desensitization. It’s easy to become habituated to false positives to the point where we no longer pay attention to them.

Temporarily disabling tests may be acceptable in the short term. A more strategic solution is to optimize your test writing. The easier it is to create and modify tests, the more likely they will be correct and continue to provide value. For my testing, I generally write code to automate a lot of wiring to verify results programmatically.

I went into this article with almost no idea how to test R code.  I still don’t…but this article does help.  I recommend reading it if you want to write production-quality R code.


SSAS And The Monthly Release Cycle

Chris Webb shares some thoughts on what the monthly SSDT/SSMS release cycle could mean for Analysis Services:

While getting bugs fixed quickly is great, I really hope that this new focus on tooling means that we get new BI-related features in SSMS and SSDT. SSMS has, in particular, been shockingly bad at supporting BI developers: for example, I find it unbelievable that we still don’t have a DAX query window in SSMS when the language has been in SSAS Tabular since SQL Server 2012 (I know the community have filled the gap with DAX Studio, but that’s not the point). Aaron Nelson recently announced a Trello board where ideas for new features in SSMS can be debated but there’s very little BI-related stuff there at the time of writing.

One can hope.  The big problem is getting enough support to bump items up on the backlog, and for that, there’s a Trello board.


The Benefits Of Polybase

I take a look at running a Hadoop query against a big(gish) data set:

Nearly 12 minutes doesn’t sound fantastic, but let’s remember that this is running on a single-node sandbox hosted on my laptop.  That’s hardly a fair setup for a distributed processing system.  Also, I have done nothing to optimize the files; I’m using compressed, comma-separated text files, have not partitioned the data in any meaningful way, and have taken the easy way out whenever possible.  This means that an optimized file structure running on a real cluster with powerful servers behind it could return the data set a lot faster…but for our purposes, that’s not very important.  I’m using the same hardware in all three cases, so in that sense this is a fair comp.

Despite my hemming and hawing, Polybase still performed as well as Hive and kicked sand in the linked server’s face.  I have several ideas for how to tune and want to continue down this track, showing various ways to optimize Polybase and Hive queries.
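For context, here is a rough sketch of what the Polybase side of a test like this can look like on SQL Server 2016; the data source location, file format, and table definition below are hypothetical stand-ins rather than the exact objects from the post:

-- Point Polybase at a Hadoop cluster (names and location are hypothetical).
CREATE EXTERNAL DATA SOURCE HDP_Sandbox WITH
(
    TYPE = HADOOP,
    LOCATION = 'hdfs://sandbox.hortonworks.com:8020'
);

-- Compressed, comma-separated text files, as described in the post.
CREATE EXTERNAL FILE FORMAT CsvGzip WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ','),
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
);

-- An external table over those files; columns here are illustrative.
CREATE EXTERNAL TABLE dbo.Flights
(
    FlightDate date,
    Origin varchar(5),
    Dest varchar(5),
    ArrDelay int
)
WITH
(
    LOCATION = '/tmp/flights/',
    DATA_SOURCE = HDP_Sandbox,
    FILE_FORMAT = CsvGzip
);

-- Querying the external table then looks like querying any other table.
SELECT Origin, COUNT(*) AS Flights
FROM dbo.Flights
GROUP BY Origin;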


Tabular Image For Azure VMs

Mark Vaillancourt has a new Connect item:

Currently, when utilizing the SQL Server images in the VM Gallery in Azure, any installations of SQL Server Analysis Services default to Multidimensional. Thus, if you want SSAS Tabular, you have additional work to perform.

I was just chatting with a Senior Program Manager on the SQL Server Analysis Services product team. They currently don’t have anything in their plans for providing SQL Server Gallery Images with SSAS Tabular instead of Multidimensional. We agreed that it is a good idea for that to happen. We also agreed that a Connect suggestion would be a great way to gauge broader community support/appetite for providing Gallery images with Tabular installed.

Here’s the Connect item.


Casting Oracle Number To Numeric

Jon Morisi points out a rounding issue when casting Oracle’s Number data type to SQL Server’s Numeric:

Perhaps the 2014 SQL Server is implicitly converting to float, rounding to the nearest even, prior to the explicit cast to Numeric.  However, how the scale (the number of decimal digits stored to the right of the decimal point) would be determined in such a scenario is a conundrum.  Either way, although the mapping is defined the same, the behavior demonstrated between the two versions of SQL Server is inconsistent.
Research into the ANSI and IEEE standards boils down to this: truncation and/or rounding is implementation-defined.

It’s an interesting issue.  Read on for more details.
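To illustrate why an intermediate float conversion matters, here is a small local sketch (not the linked-server query from the post); because a value like 2.445 has no exact binary floating-point representation, the two casts below can round differently:

DECLARE @n numeric(10,3) = 2.445;

-- Cast directly from numeric: uses SQL Server's decimal rounding rules.
SELECT CAST(@n AS numeric(10,2)) AS DirectCast;

-- Cast via float: the value is first approximated in binary floating point,
-- so the final rounding can land on a different digit than the direct cast.
SELECT CAST(CAST(@n AS float) AS numeric(10,2)) AS ViaFloat;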


Private Clouds

James Serra argues that virtualization does not by itself make for a private cloud:

Since virtualization only solves #3, a lot more should be done to create a private cloud.  A cloud should also support Platform-as-a-Service (PaaS) to allow for application innovation.  Fortunately, there are products that add the other characteristics to give you a private cloud, such as Microsoft’s Azure Stack.  And of course you can always use a public cloud.

Read James’s post to get the full listing of what makes for a “cloud” offering.


Catalog Compare For Migration

Andy Leonard shows how to use Catalog Compare to migrate an SSIS catalog from 2014 to 2016:

I recently tried to use the SSISDB Upgrade Wizard to upgrade a restored SSISDB (backed up in an earlier version) to SQL Server 2016. It didn’t go well.

I decided to use SSIS Catalog Compare to generate the scripts and ISPAC files from the previous instance, and deploy them to the SSIS 2016 Catalog.

“You Can Do That?”

Yes. Yes you can. Here’s how…

This is a paid product, but if you want to perform this upgrade, it sounds like a good tool for the job.


DATEFROMPARTS

Aaron Bertrand looks at the DATEFROMPARTS function in SQL Server 2012 and later:

The point of these functions is to make it easier to construct a date, or datetime, or datetime2 variable, when you know the individual parts. DATEFROMPARTS() takes three arguments: year, month, and day, and returns a date value. So, for example, SELECT DATEFROMPARTS(2016,7,6); would yield the date 2016-07-06.

Read on for a comparison of this function against about a dozen other methods of building dates from components.
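As a quick sketch of the family of functions Aaron describes, assuming SQL Server 2012 or later:

-- Build a date, a datetime2, and a time from integer components.
SELECT
    DATEFROMPARTS(2016, 7, 6)                        AS JustTheDate,   -- 2016-07-06
    DATETIME2FROMPARTS(2016, 7, 6, 13, 30, 0, 0, 0)  AS DateAndTime,   -- 2016-07-06 13:30:00
    TIMEFROMPARTS(13, 30, 0, 0, 0)                   AS JustTheTime;   -- 13:30:00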


Parallel Insertion

Sanjay Mishra and Arvind Shyamsundar show that you can get parallel execution with the INSERT INTO … SELECT construct:

Two important criteria must be met to allow parallel execution of an INSERT … SELECT statement.

  1. The database compatibility level must be 130. Execute “SELECT name, compatibility_level FROM sys.databases” to determine the compatibility level of your database, and if it is not 130, execute “ALTER DATABASE <MyDB> SET COMPATIBILITY_LEVEL = 130” to set it to 130. Changing the compatibility level of a database can change some behaviors, so you should test and ensure that your overall application works well with the new compatibility level.

  2. You must use the TABLOCK hint with the INSERT … SELECT statement. For example: INSERT INTO table_1 WITH (TABLOCK) SELECT * FROM table_2.

This is a limited use case, but it does sound very useful for large staging table loads or backfills when you can control table access.
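Putting the two criteria together, a minimal sketch might look like this (the table names are placeholders):

-- Parallel INSERT ... SELECT requires compatibility level 130 (SQL Server 2016).
SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();
-- ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;  -- only if it is not already 130

-- The TABLOCK hint on the target table allows the insert itself to run in parallel.
INSERT INTO dbo.SalesStaging WITH (TABLOCK)
SELECT *
FROM dbo.Sales
WHERE SaleDate >= '20160101';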
