Today I was diving into some extremely wide tables and wanted to take a quick look at things like “How many unique values does this table have in every column?”.
This can be super useful if you have a spreadsheet of results or a schema without effective normalization and you want to determine which columns are the “most unique” – that is, which have high cardinality.
The GitHub gist is embedded at the bottom of the page, but I will run you through the code in case you want an explanation of how it works.
Click through for the script.
Now, the idea is that we will join the Application.People table to the Numbers table, once for each number from 1 to the length of the name. We then use that number to take a one-character substring of the value at that position. I also include the Unicode value in the output to allow for some case-sensitive operations, since UNICODE(‘a’) <> UNICODE(‘A’).
This is an example of how powerful tally tables can be.
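As a rough sketch of the technique described above, here is one way that join could look. This assumes a dbo.Numbers tally table of sequential integers starting at 1 and a FullName column on Application.People; both names are illustrative, not necessarily the ones in the gist.

```sql
-- Sketch only: dbo.Numbers is assumed to be a tally table of sequential
-- integers (1, 2, 3, ...); column names here are illustrative.
SELECT
    p.FullName,
    n.Number                                    AS Position,
    SUBSTRING(p.FullName, n.Number, 1)          AS CharacterValue,
    UNICODE(SUBSTRING(p.FullName, n.Number, 1)) AS UnicodeValue
FROM Application.People AS p
    INNER JOIN dbo.Numbers AS n
        ON n.Number <= LEN(p.FullName)
ORDER BY
    p.FullName,
    n.Number;
```

Each row of the tally table up to LEN(FullName) produces one output row, so the name is split into one character per row without any loops.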
In this scenario, we’re going to keep the data for X days after it’s created. Then we delete it. That’s it. X could be 3 days or 3 years–it doesn’t matter, we’ll follow the same design pattern.
In today’s world, we generate loads of log data, sensor data, telemetry data, etc. All that data is super duper valuable. But only for a while. Eventually, all that granular data becomes less useful, and isn’t worth keeping around. Maybe it gets aggregated, summarized, or maybe it just gets thrown out.
You’ll have a lot of data with more complex requirements, but I think you’ll also be surprised at how much data has simple date-based retention based on its creation.
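A minimal sketch of that “keep for X days, then delete” pattern might look like the following. The table and column names are hypothetical, and the batched loop is one common way to keep each delete transaction short; the linked post may use a different approach.

```sql
-- Hypothetical sketch of simple date-based retention: delete rows older
-- than @RetentionDays in small batches to avoid one giant transaction.
DECLARE @RetentionDays int = 30;   -- "X" days; could be 3 days or 3 years
DECLARE @BatchSize     int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.SensorLog   -- illustrative table name
    WHERE CreatedDate < DATEADD(DAY, -@RetentionDays, SYSUTCDATETIME());

    -- Stop once a batch comes back smaller than the batch size.
    IF @@ROWCOUNT < @BatchSize
        BREAK;
END;
```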
Also read the comments, as they include additional techniques.
Recently I had to look up the definition for a bunch of SQL objects and didn’t want to retrieve them manually in SSMS (with Create Scripts) or Visual Studio (by searching for the object name in my TFS repository).
Since laziness and automation are the basis of well-done engineering work, I wanted to create a list where I could simply click on the object I needed and see its definition right away, without any extra tool and without having to code something externally, of course.
Click through for the solution, which is short and sweet.
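One common way to get a clickable list like this (not necessarily the linked solution) is to expose each definition as an XML column, since SSMS lets you click an XML value in the results grid to open it in its own tab. The processing-instruction trick preserves line breaks:

```sql
-- Sketch: list every module's definition as a clickable XML column.
-- Clicking the ObjectDefinition cell in SSMS grid results opens the
-- full, formatted definition in a new tab.
SELECT
    OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
    OBJECT_NAME(m.object_id)        AS ObjectName,
    (
        SELECT m.definition AS [processing-instruction(definition)]
        FOR XML PATH(''), TYPE
    )                               AS ObjectDefinition
FROM sys.sql_modules AS m
ORDER BY
    SchemaName,
    ObjectName;
```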
This is going to be a bit of a brainstorming post that comes from an interesting question I was asked today…
“I’ve got a table with an ID code field. Some of the rows have a value in that field and some are NULL. How can I go about filling in those NULL values with a valid code while at the same time avoiding introducing duplicates?”
Click through for David’s solution.
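One possible approach (a sketch of my own, not necessarily David’s solution, and assuming the code is an integer) is to number the NULL rows and offset each new code past the current maximum, so collisions with existing codes are impossible:

```sql
-- Hypothetical sketch: assign each NULL row a new code equal to the
-- current maximum plus its row number. Table and column names are
-- illustrative, and IDCode is assumed to be an integer.
WITH MissingCodes AS
(
    SELECT
        IDCode,
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM dbo.MyTable
    WHERE IDCode IS NULL
)
UPDATE mc
SET mc.IDCode = x.MaxCode + mc.rn
FROM MissingCodes AS mc
    CROSS JOIN (SELECT MAX(IDCode) AS MaxCode FROM dbo.MyTable) AS x;
```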
Pedro Lopes, who is now on the SQL Server dev team, wrote the blog post “T-SQL Misconceptions – JOIN ON vs. WHERE,” which does a good job of explaining the “it depends” around the newer style of joins. Why link this article? Microsoft deprecated the OUTER JOIN operators in SQL Server 2008. Two other sources of information on this are here and here (the first is Ward Pond’s old TechNet blog, and sadly will probably go away soon). If you’re keeping score at home, WHERE clauses are not deprecated, except if you’re using *= and =*. The changes people made were wholly unnecessary, and as the author, I find the newer stuff harder to decipher than what I originally wrote. They were putting their own biases onto things.
I personally do not like ANSI-89 syntax because it blurs the lines between filters and join criteria. Those are separate things serving different purposes, and keeping them separate clarifies queries more than it obscures them. Allan’s example doesn’t have any filters, but in a more complex scenario with several filters and several join criteria, it can require extra care to figure out what’s going on, especially with multi-column join criteria and filters mixed in (meaning it’s not join criteria and then filters, but a mishmash of the two).
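To make that distinction concrete, here is the same illustrative query in both styles (table and column names are made up for the example):

```sql
-- ANSI-89 style: join criteria and filters are intermingled in WHERE,
-- so the reader has to untangle which predicate plays which role.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders o, dbo.Customers c
WHERE o.CustomerID = c.CustomerID   -- join criterion
  AND o.OrderDate >= '20190101';    -- filter

-- ANSI-92 style: join criteria live in ON, filters live in WHERE.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20190101';
```

With two tables the difference is cosmetic; with five tables, multi-column join keys, and a handful of filters, the second form is far easier to audit.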
The next step is to write the code to capture the counter values and insert the data into the temporary table created above. Because we need to capture the values over a period of time, WAITFOR DELAY is used. In this case the delay is 10 seconds, although you can change this to suit your needs. Of course, don’t forget to increment the counter variable. You will need to determine which counters you would like to capture. Notice in the WHERE clause, we are looking for an instance_name of ‘’ (an empty string) or ‘_Total’. This allows the code to capture only one row for each counter. The number 10 is the number of times we want to capture the counter values. If you want to capture the data more frequently, simply modify the number of seconds in the WAITFOR DELAY. Here is a link to my post on this topic, WAITFOR.
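A hedged sketch of that capture loop might look like the following. The temp table layout and the chosen counter are illustrative, not the article’s exact code:

```sql
-- Sketch of the sampling loop described above: capture counter values
-- from sys.dm_os_performance_counters every 10 seconds, 10 times.
CREATE TABLE #CounterValues
(
    capture_time datetime2     NOT NULL,
    object_name  nvarchar(128) NOT NULL,
    counter_name nvarchar(128) NOT NULL,
    cntr_value   bigint        NOT NULL
);

DECLARE @i int = 1;
WHILE @i <= 10   -- number of samples to capture
BEGIN
    INSERT INTO #CounterValues (capture_time, object_name, counter_name, cntr_value)
    SELECT
        SYSDATETIME(),
        object_name,
        counter_name,
        cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy'  -- illustrative counter
      AND instance_name IN (N'', N'_Total');      -- one row per counter

    SET @i += 1;                -- don't forget to increment the counter
    WAITFOR DELAY '00:00:10';   -- sample interval; adjust to suit
END;
```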
Dynamic pivoting in SQL is unnecessarily difficult, especially compared to languages like R.
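For reference, the usual workaround is to build the column list from the data and run the PIVOT through dynamic SQL. This sketch uses made-up table and column names and assumes SQL Server 2017+ for STRING_AGG:

```sql
-- Hedged sketch of a dynamic pivot: derive the IN (...) list from the
-- data, then execute the PIVOT via sp_executesql.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STRING_AGG(QUOTENAME(Category), N', ')
FROM (SELECT DISTINCT Category FROM dbo.Sales) AS c;

SET @sql = N'
SELECT Region, ' + @cols + N'
FROM (SELECT Region, Category, Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR Category IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;
```

Compare that ceremony to a one-liner like `pivot_wider()` in R and the complaint writes itself.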
I’m getting close to the end of my series on near-zero downtime deployments. This latest post involves identity column changes:
There are some tables where you create an identity value and expect to cycle through data. An example of this might be a queue table, where the data isn’t expected to live permanently but it is helpful to have a monotonically increasing value to determine order (just watch out for those wrap-arounds and you’re fine). An example of reseeding is below:
DBCC CHECKIDENT('dbo.MyTable', RESEED, 1);
This operation needs to take a LCK_M_SCH_M lock, otherwise known as a schema modification lock. Any transactions which are writing to the table will block your transaction, but so will any readers, unless you have Read Committed Snapshot Isolation turned on or the reader is in the READ UNCOMMITTED isolation level.
If you are using RCSI and don’t have extremely long-running transactions, this is an in-and-out operation, so even though there’s a little bit of blocking, it’s minimal.
Not all changes are this easy, though.
Itzik Ben-Gan has started a new series on T-SQL bugs, pitfalls, and best practices. The first post in this series is on non-deterministic behavior:
A nondeterministic function is a function that given the same inputs, can return different results in different executions of the function. Classic examples are SYSDATETIME, NEWID, and RAND (when invoked without an input seed). The behavior of nondeterministic functions in T-SQL can be surprising to some, and could result in bugs and pitfalls in some cases.
Many people assume that when you invoke a nondeterministic function as part of a query, the function gets evaluated separately per row. In practice, most nondeterministic functions get evaluated once per reference in the query.
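A quick illustrative query makes the difference visible. RAND() without a seed is evaluated once per reference, so every row shows the same value, while NEWID() is a notable exception that varies per row:

```sql
-- Illustration of once-per-reference evaluation: RAND() repeats the same
-- value on every row; NEWID() produces a distinct value per row.
SELECT TOP (5)
    RAND()  AS SameForEveryRow,
    NEWID() AS DifferentPerRow
FROM sys.objects;
```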
This is the start to a great series.
The locking story is not the same as with the primary and unique key constraints. First, there’s one extra piece: the transition will block access to dbo.LookupTable as well as the table we create the constraint on. That’s to keep us from deleting rows in our lookup table before the key is in place.
Second, the locks begin as soon as we hit F5. Even SELECT statements get blocked requesting a LCK_M_SCH_S lock. Bad news, people.
So what can we do to get around this problem? Two routes: the ineffectual way and the ugly way.
Despite my being a ray of sunshine here, you should still check this out. It’s shorter than the average Russian novel, at least.