Press "Enter" to skip to content

Category: T-SQL

Random Number Generation in T-SQL via Marsaglia Polar Method

Sebastiao Pereira implements a method for generating random numbers in T-SQL:

Generating random numbers from a normal distribution is essential for accurate and realistic modeling, simulation, inference, and algorithm design in scientific, engineering, statistical, and AI domains. How can we build a random number generator using the Marsaglia Polar method in SQL Server without the use of external tools?

It’s an interesting technique: sample points uniformly from within the unit circle and transform each accepted point into a pair of normally distributed values.
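To make the technique concrete, here is a minimal sketch of the polar method in T-SQL, using RAND() as the uniform source; Sebastiao’s implementation may use a different generator, but the rejection loop has the same shape:

DECLARE @u FLOAT, @v FLOAT, @s FLOAT = 1.0, @mult FLOAT;

WHILE @s >= 1.0 OR @s = 0.0
BEGIN
    -- Draw a candidate point uniformly from the square [-1, 1) x [-1, 1)
    SET @u = 2.0 * RAND() - 1.0;
    SET @v = 2.0 * RAND() - 1.0;

    -- Accept the point only if it lands strictly inside the unit circle
    SET @s = @u * @u + @v * @v;
END;

-- Transform the accepted point into two independent N(0,1) draws
SET @mult = SQRT(-2.0 * LOG(@s) / @s);
SELECT @u * @mult AS Normal1, @v * @mult AS Normal2;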

Comments closed

Simple Data Quality Validation with T-SQL

Kenneth Omorodion builds a validation process:

As the need for and use of data grows within any organization, there is a corresponding rise in the need for data quality validation. Most organizations have large stores of data, but most of it is not managed efficiently in terms of data quality assurance, thus leading to inaccurate insights for the business, which in turn leads to distrust in the data.

Organizations have now, more than ever, realized the importance of an efficient data quality process as part of their Business Intelligence and Analytics processes. The issue is, how can they implement data quality for their data? Larger and more data-centric organizations might use pre-built data management and validation tools like Microsoft Purview, or Master Data Management tools like Informatica, Talend, SAP, and Stibo Systems. But for organizations that cannot commit to subscribing to pre-built options, or that operate primarily in on-premises environments, building one themselves may be the answer. That’s where this article comes in.

I’m of two minds about this. On the one hand, I appreciate the effort that Kenneth put into this and expect that it would work reasonably well. On the other hand, I look at what it can do and say “Yes, but if you just use constraints like you should, then you don’t need this process.” It’s basically a very asynchronous way of applying check constraints, foreign key constraints, and specifying that an attribute is NOT NULL.
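To make the comparison concrete, here is roughly what those rules look like as declarative constraints on a hypothetical table, where the engine rejects bad rows at write time rather than flagging them afterward:

-- Hypothetical table; assumes a dbo.Customer table already exists
CREATE TABLE dbo.CustomerOrder
(
    OrderID INT NOT NULL
        CONSTRAINT PK_CustomerOrder PRIMARY KEY,
    CustomerID INT NOT NULL
        CONSTRAINT FK_CustomerOrder_Customer REFERENCES dbo.Customer (CustomerID),
    OrderDate DATE NOT NULL,
    Quantity INT NOT NULL
        CONSTRAINT CK_CustomerOrder_Quantity CHECK (Quantity > 0)
);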

If there’s some reason why applying these constraints is impossible—such as receiving this data as an extract from some poorly-designed system—then this can do a good job of describing the failures of the upstream system. But this is where data architects need to get it right up-front.

1 Comment

Using Temporary Stored Procedures to Output Common Messages

Louis Davidson shows a neat use for temporary stored procedures:

On another connection (on another computer, for that matter), I am right now doing some pretty long loads of some test data. The script comprises six queries, and each may take 10 minutes (not completely sure; this is my first run of the scripts). And of course, I want to get some feedback on these queries to know how long they are taking.

One common way to do this is to put a PRINT statement between the queries so you can see the progress. But PRINT statements are notorious for one thing: caching the output until the output buffer reaches some level.
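As a point of reference, the standard workaround for that buffering is RAISERROR with the NOWAIT option, which flushes the message immediately; severity 0 keeps it informational:

DECLARE @msg NVARCHAR(2048) = N'Query 1 of 6 complete';
RAISERROR(@msg, 0, 1) WITH NOWAIT; -- prints to the Messages tab right away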

One addition I’d make to Louis’s post is to make use of the ability to pass an ad hoc format string to FORMATMESSAGE(), which SQL Server 2016 introduced. This use case is right in its wheelhouse.

    SET @message = FORMATMESSAGE(N'%s%s%s%s',
        @Message,
        -- Optionally append the current time
        CASE
            WHEN @AddTimeToMessageFlag = 1 THEN CONCAT(N' : Message Time - ', SYSDATETIME())
            ELSE N''
        END,
        -- Optionally append the session id
        CASE
            WHEN @AddSpidToMessageFlag = 1 THEN CONCAT(N' : ProcessId - ', @@SPID)
            ELSE N''
        END,
        -- Optionally append the original login
        CASE
            WHEN @AddOriginalLoginToOutputFlag = 1 THEN CONCAT(N' : LoggedInUserId - ', ORIGINAL_LOGIN())
            ELSE N''
        END);

FORMATMESSAGE() provides a moderate readability benefit versus a lengthy CONCAT(). And if you always want to emit all fields rather than the optional setup that Louis has in place, FORMATMESSAGE() makes the result even clearer to understand.

SET @message = FORMATMESSAGE(N'%s : Message Time - %s : ProcessId - %i : LoggedInUserId - %s',
        @Message,
        CAST(SYSDATETIME() AS NVARCHAR(100)),
        @@SPID,
        ORIGINAL_LOGIN());
3 Comments

Local Variables and Cardinality Estimates in SQL Server

Erik Darling lays out an explanation of what SQL Server does to estimate the cardinality of queries using local variables. Erik covers some of the conceptual questions around how a compiler could work, and then explains what SQL Server does.
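The classic demonstration, sketched here against a hypothetical dbo.Orders table, is that the variable’s value is invisible at compile time:

DECLARE @status TINYINT = 5;

-- Equality against an unknown value: the estimate comes from the
-- density vector (total rows * all density), not from the histogram.
SELECT COUNT(*) FROM dbo.Orders WHERE StatusID = @status;

-- The same predicate with a literal can use the histogram step for 5.
SELECT COUNT(*) FROM dbo.Orders WHERE StatusID = 5;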

Erik talks about dependency on specific behavior and how people get upset when things change, and he’s absolutely right. My favorite example of this is the “quirky update” technique people used back in the day, prior to SQL Server 2012, to calculate running totals. That technique relied on unsupported, accidental, but repeatable behavior around creating clustered indexes on temp tables. Even though the SQL Server team explicitly stated that this behavior could change at any time, it was clear that they didn’t go out of their way to break that functionality.

Comments closed

Decimal Precision and Rounding in SQL Server

Jiri D. provides a warning:

Do you ever worry about how you declare NUMERIC or DECIMAL data types in SQL?
Do you sometimes “add a bit more” precision—just to be safe?
Have you considered how that small decision could actually change your arithmetic results?

I ran into this recently when comparing data between two environments that should have produced identical results. One calculated field was slightly off — and the culprit turned out to be a difference in numeric data type declarations.

Read on to see what happened. The differences weren’t massive, but if you were expecting an exact match, seeing a difference even at the seventh or eighth digit after the decimal could be jarring.
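As an illustration of the mechanics (my example, not Jiri’s): for division, SQL Server computes the result scale as max(6, s1 + p2 + 1), and when the computed precision exceeds 38, it trims the scale, though never below 6. That means declaring “safer,” wider types can actually round earlier:

DECLARE @a1 DECIMAL(18, 4) = 1, @b1 DECIMAL(18, 4) = 3;
DECLARE @a2 DECIMAL(38, 4) = 1, @b2 DECIMAL(38, 4) = 3;

SELECT @a1 / @b1 AS Div18, -- DECIMAL(38, 20): 0.33333333333333333333
       @a2 / @b2 AS Div38; -- DECIMAL(38, 6):  0.333333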

Comments closed

Adding Commas to Numeric Output in SQL Server

Andy Yun is speaking my language:

One thing that’s always driven me crazy is when I have large numbers in my result sets and they lack commas for readability. For a lot of different things I do, the more commas a number has, the more attention I want to give to a given value.

Andy shows examples of formatting to two and to zero digits after the decimal, respectively. In a talk I give on analyzing business data with T-SQL, I also demonstrate how to show currency-based results:

FORMAT(SUM(o.Quantity * sih.LastCostPrice), N'$0,###.00') AS TotalCost

This starts each cost record with a dollar sign, ensures commas in the thousands places, and keeps a two-digit decimal value. That would return a result like $31,409,113.00, which is a lot easier to read than 31409113.
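As an aside, the standard .NET format strings that FORMAT() accepts get you similar results with less typing (with the usual caveat that FORMAT() is relatively slow):

SELECT FORMAT(31409113, N'N0') AS WithCommas,           -- 31,409,113
       FORMAT(31409113, N'C2', N'en-US') AS AsCurrency; -- $31,409,113.00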

Comments closed

ROW_NUMBER() Filtering Performance

Erik Darling answers an office hours question in detail. The question comes down to why a filter on ROW_NUMBER() where the row number is equal to 1 can perform differently from the same query where the row number is less than or equal to 1. Knowing that ROW_NUMBER() starts at 1 and can never be anything other than a natural number, you’d think that SQL Server would treat these exactly the same. But Erik shows an example where the two differ, and the answer is a good one. I will admit that my pre-video guess was wrong, but once he showed the execution plans, things clicked. And, as Erik mentions, this is why it’s so important to dig into the execution plan, because the answers are typically in there somewhere.
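For anyone who wants to try it, the two shapes of the question look something like this (hypothetical table; the interesting part is comparing the execution plans, not the results):

WITH numbered AS
(
    SELECT o.OrderID,
           ROW_NUMBER() OVER (PARTITION BY o.CustomerID
                              ORDER BY o.OrderDate DESC) AS rn
    FROM dbo.Orders AS o
)
SELECT OrderID
FROM numbered
WHERE rn = 1; -- then swap in rn <= 1 and compare the plan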

1 Comment

Splitting Strings with T-SQL

Courtney Woolum splits a string:

If you’ve escaped string parsing thus far, count thyself lucky. I’ve seen some numbing scripts from before SQL Server 2016, when STRING_SPLIT was introduced. I think the fact that this month’s T-SQL Tuesday is entirely on string parsing says enough about how many ways there are to paint this particular cat.

In the post, Courtney mentions learning early on to avoid using the APPLY operator. I want to have harsh words with whoever taught her that. Purposefully avoiding the APPLY operator artificially hamstrings your ability to write effective T-SQL code.
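For the record, here is the kind of pattern that avoiding APPLY rules out: splitting a delimited column per row with STRING_SPLIT() and CROSS APPLY (hypothetical table and column):

SELECT c.CustomerID, s.value AS Email
FROM dbo.Customer AS c
    CROSS APPLY STRING_SPLIT(c.Emails, N';') AS s;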

Comments closed

Parsing and Avoiding Composite Keys

Hugo Kornelis prefers surrogate keys:

I am currently most known for my performance tuning and execution plan work. But when I started working with databases, I actually came from a background of data modelling, database design, and normalization. And that has never fully left me. In fact, I have in the past two years created a whole series of YouTube videos about database design and normalization. And a much longer time ago, I recorded a Pluralsight course on this topic that is still available for viewing.

One of the very basics of schema design for a relational database is to store atomic values in every column. One column, one value, no more, no less. That automatically rules out all repeating groups. So a single column to list all my email addresses? Sorry. You are doing it wrong. And you will pay the price when you try to protect the integrity of your data. Or even just report on it.

I wanted to copy this second paragraph because CJ Date, in his book Database Design and Relational Theory (2nd edition), issues a mea culpa around repeating groups, stating that it’s best to ignore his prior arguments on the topic. Though in Date’s case, he specifies a repeating group as something like { Name, Email1, Email2, Email3, … } rather than a delimited list.

But even composite items can be in first normal form. For example, a US telephone number has a country code (+1), an area code, an exchange, and a four-digit number, potentially followed by an extension. The name “Bob” is an array of characters, and each character is a composite of bits forming 1-4 bytes, depending on collation and other details. At the end of the day, first normal form is about the shape of the tuple: a heading exists with a known set of names and data types; all tuples follow the same heading; no duplicate tuples are allowed; attribute and tuple order does not matter for operations; and all attributes are regular inasmuch as they have names, data types, are not hidden, and so on.

In the end, what Hugo is saying is good practice: if you have a business need to identify segments of an attribute separately, then it makes sense for each segment to be an attribute of its own. But because there is no solid mathematical property that explains exactly what an attribute is, and because database normalization is ultimately a series of mathematical formulations, we cannot use normalization as the reason to keep or separate the contents of an attribute. Thankfully, there is more to database design than normalization alone (and thankfully, database normalization is a robust enough field that it provides good advice people should follow).
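To put the telephone number example above into table form, here is a minimal sketch (hypothetical table) of giving each separately addressable segment its own attribute:

CREATE TABLE dbo.PersonPhone
(
    PersonID    INT         NOT NULL,
    CountryCode VARCHAR(3)  NOT NULL, -- e.g., '1' for the US
    AreaCode    CHAR(3)     NOT NULL,
    Exchange    CHAR(3)     NOT NULL,
    LineNumber  CHAR(4)     NOT NULL,
    Extension   VARCHAR(10) NULL      -- only when the business tracks it
);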

Comments closed