Press "Enter" to skip to content

Category: T-SQL

Arrays And Lists In SQL Server

Erland Sommarskog has updated his essay on Arrays and Lists in SQL Server.  He’s broken it down into a few parts.  First, the short version:

Now you know why IN (@list) does not work as you hoped for, but if you have a comma-separated list you still need to know how to work with it.

The best approach in my opinion is to reconsider having a comma-separated list at all. After all, you are in a relational database, so why not use a table instead? That is, you should pass the data in a table-valued parameter (TVP) instead of that comma-separated list. If you have never used TVPs before, I have an article, Using Table-Valued Parameters in SQL Server and .NET, where I give a tutorial on passing TVPs from .NET to SQL Server, and there is a detailed description of exactly this case of passing a comma-separated list to a TVP. You will find that it is astonishingly simple.

Unfortunately, not all environments support TVPs – Entity Framework has no real support for TVPs, and reportedly neither does Reporting Services. The same applies if you are on SQL 2005 or earlier, since TVPs were added in SQL 2008. Or you may just be plain stubborn and want to use your comma-separated list. Or you are simply pressed for time and don’t have the time to learn something new right now.
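If you are stuck with the comma-separated string, here’s a minimal sketch of the splitting idea (my own illustration, not Erland’s code; it assumes SQL Server 2016 or later for STRING_SPLIT, and the Orders table is invented):

    -- IN (@list) fails because @list is compared as one single string.
    -- Splitting the list into rows and joining works:
    DECLARE @list nvarchar(max) = N'1,5,42';

    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o                           -- hypothetical table
    JOIN STRING_SPLIT(@list, N',') AS s
      ON o.CustomerID = TRY_CONVERT(int, s.value);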

If you want a longer article on using table-valued parameters, Erland has one of those as well:

This is an article that is intended to get you started with passing table-valued parameters (TVPs) from .NET to SQL Server. I begin with presenting how you use table-valued parameters in SQL Server itself, whereupon I give a quick overview of the mechanisms to pass TVPs from ADO .NET to SQL Server.

The main meat of this article is two real-world examples where I use TVPs. The first example is the classical problem of passing a comma-separated list of values to SQL Server, this time through a table-valued parameter. You will be amazed at how simple it is. In the second example I show two ways to load a file with master-detail data into tables in SQL Server. In addition to the examples, there is also some discussion on how you can improve performance when loading large amounts of data.

Despite the appearance of .NET in the title of this article, there is a final chapter that explores the possibilities in other APIs, some of which support TVPs and some of which do not. This includes Entity Framework, which has no support for TVPs. In this chapter I briefly discuss workarounds when TVPs are not available to you.
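The T-SQL half of the pattern is small. Here’s a sketch with invented names, just to show the shape (the table type is created up front, and the parameter must be declared READONLY):

    CREATE TYPE dbo.IntList AS TABLE (Value int NOT NULL PRIMARY KEY);
    GO
    CREATE PROCEDURE dbo.GetOrdersForCustomers
        @Customers dbo.IntList READONLY      -- TVP parameters must be READONLY
    AS
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o                     -- hypothetical table
    JOIN @Customers AS c ON c.Value = o.CustomerID;
    GO
    -- Calling it from T-SQL (ADO.NET passes a DataTable or DbDataReader instead):
    DECLARE @ids dbo.IntList;
    INSERT @ids (Value) VALUES (1), (5), (42);
    EXEC dbo.GetOrdersForCustomers @Customers = @ids;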

And for the advanced look at arrays and lists, you have the long-form article:

A problem that has been popular over the years with SQL Server is how to handle a list of values. In the majority of the cases, people have a comma-separated list, because this format is produced by commonly used tools like multi-choice controls in .NET, Reporting Services and other places.

When I say that the problem is popular, I don’t only mean that the questions are commonplace – but so are solutions. You can find no end of blog posts etc. that present string-splitting functions, including performance tests of such functions, and there are functions that are known to be the fastest, etc.

The aim of this article is two-fold: 1) Give a general discussion of how to design string-splitting functions. 2) Present and discuss each method from the angles I bring up in the general discussion. This includes performance, but not only performance.
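To make the design question concrete, here is one of the simpler shapes such a function can take: an inline table-valued function over a recursive CTE. This is a sketch of my own, not one of the functions from Erland’s article:

    CREATE FUNCTION dbo.SplitString (@list nvarchar(max), @delim nchar(1))
    RETURNS TABLE
    AS RETURN
    WITH pieces AS
    (   -- walk the string, one delimiter at a time
        SELECT 1 AS pos, CONVERT(bigint, 1) AS start,
               CHARINDEX(@delim, @list) AS stop
        UNION ALL
        SELECT pos + 1, CONVERT(bigint, stop) + 1,
               CHARINDEX(@delim, @list, stop + 1)
        FROM pieces
        WHERE stop > 0
    )
    SELECT pos,
           SUBSTRING(@list, start,
                     CASE WHEN stop > 0 THEN stop - start
                          ELSE LEN(@list) END) AS value
    FROM pieces;

    -- Usage; the hint lifts the default 100-level recursion cap for long lists:
    -- SELECT * FROM dbo.SplitString(N'1,5,42', N',') OPTION (MAXRECURSION 0);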

Even if you’ve read this article before, it’s worth checking again to refresh your memory and to see his changes.


Graphing Customer Churn

Fang Zhou and Wee Hyong Tok have released a case study on a telephone company’s customer churn:

In the case of telco customer churn, we collected a combination of the call detail record data and customer profile data from a mobile carrier, and then followed the data science process —  data exploration and visualization, data pre-processing and feature engineering, model training, scoring and evaluation — in order to achieve the churn prediction. With a churn indicator in the dataset taking value 1 when the customer is churned and taking value 0 when the customer is non-churned, we addressed the problem as a binary classification problem and tried various tree-based models along with methods like bagging, random forests and boosting. Because the number of churned customers is much less than that of non-churned customers (making the data set quite unbalanced), SMOTE (Synthetic Minority Oversampling Technique) was applied to adjust the proportion of majority class over minority class in the training data set, thus further improving model performance, especially precision and recall.

All the above data science procedures could be implemented with base R. Rather than moving the data out from the database to an external machine running R, we instead run R scripts directly on SQL Server data by leveraging the in-database analytics capability provided by SQL Server R Services, taking advantage of the rich and powerful CRAN R packages plus the parallel external memory algorithms in the RevoScaleR library. In what follows, we will describe the specific R packages and algorithms that we used to implement the data science solution for predicting telco customer churn.

They have provided the relevant materials on GitHub as well.
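For a feel of what running “R scripts directly on SQL Server data” looks like, this is the general shape of sp_execute_external_script (SQL Server 2016+ with external scripts enabled; the table and R body here are placeholders, not the case study’s actual code):

    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            # InputDataSet/OutputDataSet are the default frame names
            OutputDataSet <- data.frame(table(InputDataSet$churn))',
        @input_data_1 = N'SELECT churn FROM dbo.TelcoCustomers'  -- hypothetical table
    WITH RESULT SETS ((churn_flag varchar(10), cnt int));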


Parsing DBCC MEMORYSTATUS

Slava Murygin has a script to turn DBCC MEMORYSTATUS output into one result set:

When I wanted to research a memory problem on a server, I started to dig deeper into the “DBCC MEMORYSTATUS” command. Very useful links to understand that command come from Microsoft:
https://support.microsoft.com/en-us/kb/271624
https://support.microsoft.com/en-us/kb/907877

During the research I faced two problems:
1. I had to wait several seconds to get the full result set.
2. I had to scroll down and shuffle through 115 different data sets to find the counter I wanted.

To eliminate both of these problems, all these different counters have to be in the same table/data set.
That will make research easier, and the data can be stored or compared against a baseline.

Read on for the T-SQL script.
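If you just want a single result set for the biggest memory consumers without wrangling DBCC output at all, a DMV gets you part of the way there. This is an alternative of mine, not Slava’s script:

    -- sys.dm_os_memory_clerks: one row per clerk, already one result set
    SELECT [type] AS clerk_type,
           SUM(pages_kb) / 1024 AS memory_mb    -- pages_kb exists on SQL 2012+
    FROM sys.dm_os_memory_clerks
    GROUP BY [type]
    ORDER BY memory_mb DESC;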


Pivoting Data

Jeffrey Verheul shows how to use the PIVOT operator in T-SQL:

The idea of using key-value pairs to store data isn’t new, but with the rapid development of cloud solutions like Azure and the hype around NoSQL databases, using key-value pairs to store data got a big boost. Especially developers (in my experience) love using key-value pairs to store their data, because it’s easy for them to consume the data in an application. But it gives the database professional an extra challenge, because we’re used to retrieving columns with values instead of a record per value. So how can we turn those key-value pairs into columns?

This is a good example of using PIVOT.  I’m not a big fan of storing data in key-value pairs and using pivoting operators because you’re burning CPU on that very expensive SQL Server instance (and you’re not taking advantage of what relational databases do well); if you really need to store data as key-value pairs, I’d recommend doing the pivot on cheaper application servers.
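For the curious, the operator’s shape is roughly this (a minimal sketch with invented table and key names):

    SELECT p.EntityID, p.FirstName, p.LastName, p.City
    FROM
    (   SELECT EntityID, [Key], [Value]
        FROM dbo.KeyValueStore           -- hypothetical key-value table
    ) AS kv
    PIVOT
    (   MAX([Value])                     -- PIVOT requires an aggregate
        FOR [Key] IN ([FirstName], [LastName], [City])
    ) AS p;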


Generating Absurd Numbers Of Columns

Brent Ozar wants to generate 65,535 columns:

Alright, so we’ve learned that I can’t return more than 65,535 columns, AND I can only use 4,096 elements in my SELECT. I can think of several workarounds there – 65,535 / 4096 = about 16, which means I could create a few tables or CTEs and do SELECT *’s from them, thereby returning all 65,535 columns with less than 4,096 things in my SELECT. But for now, we’ll just start with 4,096 things in my SELECT:

If you think you need 65K columns returned, I refer you to Swart’s Ten Percent Rule.
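If you want to play along at home, the 4,096-element SELECT is easy to mass-produce with dynamic SQL. A sketch, assuming SQL Server 2017+ for STRING_AGG:

    DECLARE @cols nvarchar(max) =
        (SELECT STRING_AGG(CONVERT(nvarchar(max),
                N'0 AS col' + CONVERT(nvarchar(10), n)), N', ')
         FROM (SELECT TOP (4096)
                      ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
               FROM sys.all_columns AS a
               CROSS JOIN sys.all_columns AS b) AS x);

    EXEC (N'SELECT ' + @cols + N';');    -- 4,096 columns, all zeroes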


Filtering Data

Slava Murygin shows various ways to filter data, in particular data from fn_dblog():

Say we need to extract the information associated with an “UPDATE” for LSNs starting at “0000004f:00000087:0001”. You can just specify the Starting and Ending LSNs as “fn_dblog” parameters:

[…]

That portion of code would return ONLY the log records between LSNs “0000004f:00000087:0001” and “0000004f:00000088:0001”.

Slava’s post uses fn_dblog() as an example but the techniques are applicable across the board, and in practice sum up to “get the fewest number of rows and fewest number of columns you need to solve the problem at hand.”
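Since the LSN strings fn_dblog() returns are fixed-width hex, the same narrowing also works in a WHERE clause when you don’t pass the parameters. A sketch of mine, not Slava’s code; fn_dblog() is undocumented, so keep this to test systems:

    SELECT [Current LSN], Operation, Context, [Transaction ID]
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = N'LOP_MODIFY_ROW'             -- the UPDATE log record
      AND [Current LSN] >= '0000004f:00000087:0001'
      AND [Current LSN] <  '0000004f:00000088:0001';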


Joins Versus NOT IN Clause

Kevin Hill explains a potential performance difference between using NOT IN and using a left join:

Basic stuff, right?   Both will return 951 records (books) that I do not own.  And, very quickly…because the tables are tiny.   Sub-1 second is fast.

The issue here is HOW the rows are compared.

English version now, techy stuff later:

In the first query, this is equivalent to you standing at the bookstore and calling home to have someone check to see if the book in your hand is already in your collection.  EVERY time.  One by one.

In the second, you got really smart and brought a list with you, which you are comparing to the books on the shelf at the store.   You’ve got both “lists” in one place, so it is far more efficient.

Even in the case with a few hundred records, you can see why there’d be a performance difference.
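The two query shapes look like this (table names invented; note too that NOT IN behaves badly if the subquery can return NULLs):

    -- "Call home for every book": NOT IN with a subquery
    SELECT s.BookID, s.Title
    FROM dbo.StoreBooks AS s
    WHERE s.BookID NOT IN (SELECT m.BookID FROM dbo.MyBooks AS m);

    -- "Bring the list with you": LEFT JOIN, keep the non-matches
    SELECT s.BookID, s.Title
    FROM dbo.StoreBooks AS s
    LEFT JOIN dbo.MyBooks AS m ON m.BookID = s.BookID
    WHERE m.BookID IS NULL;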


Installing R Packages In SQL Server

Tomaz Kastrun shows how to install packages in SQL Server R Services:

Julie Koesmarno made a great post on installing R packages. Please follow this post. Also Microsoft suggests the following way to install R packages on MSDN.

Since I wanted to be able to have packages installed directly from SQL Server Management Studio (SSMS), here is yet another way to do it. I have used xp_cmdshell to install any additional package for my R (optionally you can set EXECUTE AS USER).

This is a bit of a backdoor method, but it does work.
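The rough shape of that backdoor, for reference; xp_cmdshell is off by default, and the R_SERVICES path varies by instance, so check both before trying this:

    -- enable xp_cmdshell (requires sysadmin; weigh the security trade-off)
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

    -- install a package with the instance's own R runtime (path is an example)
    EXEC xp_cmdshell '"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\bin\R.exe" -e "install.packages(''zoo'', repos = ''https://cran.r-project.org'')"';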
