Curated SQL Posts

T-SQL Tuesday 104 Roundup

Bert Wagner reviews the entries for T-SQL Tuesday 104:

This month’s T-SQL Tuesday topic asked “What code would you hate to live without?” Turns out you like using scripts and code to automate boring, repetitive, and error-prone tasks.

Thank you to everyone who participated; I was nervous that July holidays and summer vacations would stunt turnout, but we wound up with 42 posts!

Watch tsqltuesday.com for next month’s topic and consider signing up to host.

Read on for the 42 submissions.


Obfuscating Continuous Variables

Phil Factor continues his series on data obfuscation:

Imagine that you have a table giving invoice values. You will want your spoof data to conform with the same ups and downs of the real data over time. You may be able to get the overall distribution the same as the real data, but the resulting data would be useless for seeing the effect of last year’s sales promotion. The invoice values will depend on your sales promotions if your marketing people have done their job properly.

By making your data the same distribution as your production data, you don’t necessarily get the same strategy chosen by the query analyser, but you dramatically increase the chances of getting it. SQL Server uses a complex paradigm to select amongst its alternative plans for a query. It maintains distribution statistics for every column and index that is used for selecting rows. These aren’t actually histograms in the classic sense, but they perform a similar function and are used by the SQL Server engine to predict the number of rows that will be returned.
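
If you want to see those distribution statistics for yourself, DBCC SHOW_STATISTICS displays the histogram the engine uses to predict row counts. A minimal sketch, assuming a hypothetical dbo.Invoices table with an InvoiceValue column:

-- Build statistics on the column, then view the histogram the
-- engine uses for row-count estimates (names are illustrative).
CREATE STATISTICS st_InvoiceValue ON dbo.Invoices (InvoiceValue);

DBCC SHOW_STATISTICS ('dbo.Invoices', st_InvoiceValue) WITH HISTOGRAM;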

The focus is on independent variables, though there is a little bit at the end about working with dependencies.


Relationships In Power BI

Teo Lachev shows us the importance of defining relationships in Power BI:

However, if there isn’t a direct relationship between ResellerSales and Employee, the moment you add an unsummarized field from the second table on the many side, such as Employee[FullName] after adding SalesTerritoryCountry and ResellerSales[SalesOrderNumber], you’ll get the error “Error: Can’t determine relationships between the fields”.

Solution: Interestingly, the report works fine if a summarized field, such as COUNT(Employee[EmployeeKey]), is used. In this case, the SalesTerritory dimension acts as a conformed dimension joined to two fact tables. The reason it doesn’t work when Employee[FullName] is added is that there is no aggregation on the Employee table, and the relationship between ResellerSales[SalesOrderNumber] and Employee[FullName] becomes Many:Many over SalesTerritory, which is now a bridge table. One employee may be associated with multiple sales and a sale can be associated with multiple employees. How do we solve this horrible problem?
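
Power BI relationships aren’t literal SQL joins, but a rough T-SQL analogy shows the fanout Teo describes. The table names mirror the quote; the join keys are assumed for illustration:

-- Joining two fact tables through the shared SalesTerritory
-- dimension pairs every sale in a territory with every employee
-- in that territory: a Many:Many fanout over a bridge table.
SELECT e.FullName, rs.SalesOrderNumber
FROM dbo.ResellerSales AS rs
JOIN dbo.SalesTerritory AS st
    ON rs.SalesTerritoryKey = st.SalesTerritoryKey
JOIN dbo.Employee AS e
    ON e.SalesTerritoryKey = st.SalesTerritoryKey;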

Good data modeling is important, and Power BI dashboards are no exception to the rule.


Building TensorFlow Neural Networks On Spark With Keras

Jules Damji has an example of using Keras in the PyCharm IDE to build TensorFlow neural network models, tracking the runs with the Databricks MLflow library:

Our example in the video is a simple Keras network, modified from Keras Model Examples, that creates a simple multi-layer binary classification model with a couple of hidden and dropout layers and respective activation functions. Binary classification is a common machine learning task applied widely to classify images or text into two classes. For example, an image is of a cat or a dog; a tweet is positive or negative in sentiment; an email is spam or not spam.

But the point here is not so much to demonstrate a complex neural network model as to show the ease with which you can develop with Keras and TensorFlow, log an MLflow run, and experiment—all within PyCharm on your laptop.

Click through for the video and explanation of the process.


Scatterplots For Multivariate Analysis

Neil Saunders declutters a complicated visual with a simple scatterplot:

Sydney’s congestion at ‘tipping point’ blares the headline and, to illustrate, an interactive chart with bars for city population densities, points for commute times and, of course, dual axes.

Yuck. OK, I guess it does show that Sydney is one of three cities that are low density, but have comparable average commute times to higher-density cities. But if you’re plotting commute time versus population density…doesn’t a different kind of chart come to mind first? y versus x. C’mon.

Let’s explore.

Simple is typically better, and that adage holds here.


Calculating Future Revenue Streams With Phasing

Matt Allington walks us through a somewhat complicated scenario with income stream projections but inconsistent monthly payout rates:

The first thing I decided to do was work out which was the first month of the policy.  I used LOOKUPVALUE to do this because the Calendar table was not connected to the policy table.  Instead of returning the actual month name, I returned the unique MonthID from my calendar table.  I ALWAYS load a unique MonthID in my calendar tables as they are REALLY useful. In short this column is an integer, starting at 1 and incrementing by 1 for each month in the table.  So starting in Jan, after 3 full years the MonthID will be 36.

Pol First Month = LOOKUPVALUE (
     'Calendar'[Month Index],      // value to return: the unique MonthID
     'Calendar'[Date],             // column to search
     MAX ( Policies[Pol Date] )    // search value: the policy date
)

After writing this measure (and all subsequent measures), I placed the measure in the pivot table shown below. This pivot table has the policy date from the policy table on rows and the YYMM from the Calendar table on columns. I did this for a very important reason. I knew this would be the way I needed to see the final results and therefore I needed to make sure all measures returned the expected outcome once this layout was used. And don’t think I wrote all the formulas correctly to achieve this outcome the first time. In fact I made plenty of mistakes. But the point is I wouldn’t have known they were mistakes if I hadn’t placed the result in a pivot table like the one below.

It’s a good example of breaking apart a fairly complicated concept and rethinking equations as set operations in DAX.


Creating An Inline Table-Valued Function In SQL Server

Jeanne Combrinck looks at inline table-valued functions in SQL Server:

Let’s start off with what a table-valued function (TVF) is. A TVF is a dynamic table produced at the time of execution, depending on parameters. Like a view, a TVF creates a result set only when it’s executed, but, unlike a view, it can be parameterized.

You get two types of TVFs: Inline Table-Valued Functions (ITVFs) and Multi-statement Table-Valued Functions (MTVFs). I find them easy to remember: think of the “I” in ITVF as 1 (single statement) and the “M” in MTVF as “many” (multiple statements).

As you can imagine, a TVF produces a result set that can be used as a virtual table or view. Yes, you can actually select data from a TVF, or join it with some other tables, views, or even other TVFs. In this post I will go into more detail on ITVFs.
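
As a minimal sketch of the syntax (the function and table names here are hypothetical, not from Jeanne’s post), an inline TVF is a single RETURN of a parameterized SELECT:

-- An inline TVF: one SELECT, no BEGIN/END block.
CREATE FUNCTION dbo.GetOrdersByCustomer (@CustomerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Use it like a parameterized view, or join it to other tables:
SELECT o.OrderID, o.TotalDue
FROM dbo.GetOrdersByCustomer(42) AS o;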

In my experience, the best part about using inline table-valued functions is the 3-9x performance improvement you get when removing them and inlining their code.  It’s a great idea but the performance cost is just too high for me.


De-bracketing SSMS Scripts

Andy Mallon hates brackets and likes regular expressions:

There are cases where you need brackets, such as having objects with “illegal” characters in them. These rules apply to database names, column names, and all object names. (I’m going to simply refer to “object names” for simplicity, though I concede that “identifier” might be a more correct term.) If you want to put a space or hyphen in an object name, then you’re going to have to use [brackets] to “quote” the name every time you reference it. Similarly, you can’t start an object name with a number, even though it’s a valid character in any other position of the object name: 8Ball is an illegal object name, but AM2 is perfectly OK. There are a bunch of other scenarios, but I won’t go into them all.

If SMO tried to only use [brackets] where necessary, that would likely be a complicated and error-prone branch of code. It’s safer to always include [brackets], and there’s no time when [brackets] will break your code.
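
A quick illustration of both points, with made-up names. A space or a leading digit forces brackets, while QUOTENAME shows the always-bracket approach SMO takes:

-- Illegal characters and a leading digit require [brackets]:
CREATE TABLE [Order Details]
(
    [8Ball]    INT,    -- leading digit: brackets required
    OrderTotal MONEY   -- legal identifier: brackets optional
);

-- QUOTENAME brackets any name, which never breaks the code:
SELECT QUOTENAME('8Ball');   -- returns [8Ball]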

Read on to see how Andy gets rid of those pesky things.


Monitoring Query Store Space Usage

Erin Stellato has a process which alerts her when query store space utilization reaches a defined threshold:

For those of you not familiar with the Query Store settings, please check out my post which lists each one, the defaults, and what I would recommend for values and why.  When discussing MAX_STORAGE_SIZE_MB, I mention monitoring via sys.database_query_store_options or Extended Events.  As much as I love Extended Events, there isn’t an event that fires based on a threshold exceeded.  The event related to size is query_store_disk_size_over_limit, and it fires when the space used exceeds the value for MAX_STORAGE_SIZE_MB, which is too late.  I want to take action before the maximum storage size is hit.

Therefore, the best option I’ve found is to create an Agent job which runs on a regular basis (maybe every four or six hours initially) that checks current_storage_size_mb in sys.database_query_store_options, calculates the space used by Query Store as a percentage of the total allocated, and then, if that exceeds the threshold you set, sends an email. The code that you can put into an Agent job is below. Please note you want to make sure the job runs in the context of the user database with Query Store enabled (as sys.database_query_store_options is a database view), and configure the threshold to a value that makes sense for your MAX_STORAGE_SIZE_MB. In my experience, 80% has been a good starting point, but feel free to adjust as you see fit!
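
As a rough sketch of the check Erin describes (her actual script is in the post; the threshold, mail profile, and recipient below are placeholders):

-- Run in the user database that has Query Store enabled.
DECLARE @threshold_pct DECIMAL(5, 2) = 80;   -- placeholder threshold
DECLARE @pct_used DECIMAL(5, 2);

SELECT @pct_used = 100.0 * current_storage_size_mb / max_storage_size_mb
FROM sys.database_query_store_options;

IF @pct_used >= @threshold_pct
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',            -- placeholder profile
        @recipients   = 'dba@example.com',     -- placeholder address
        @subject      = 'Query Store approaching MAX_STORAGE_SIZE_MB',
        @body         = 'Query Store space used exceeds the alert threshold.';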

Click through for the script.
