Press "Enter" to skip to content

Curated SQL Posts

Tracking Long-Running Queries

Ryan Booz walks us through tracking long-running queries with sp_WhoIsActive:

This solution runs sp_WhoIsActive every minute and saves the output into a global temp table. From there, I look for any processes that have been running for more than the low threshold setting. Any of the processes that have not been identified and stored previously get logged, output to an HTML table, and included in an email alert.

Next, I take a second look at the table for anything that’s been running longer than the high threshold.  If a second email alert has not been sent for these processes, we output the same data and send the email. If two alerts have already been sent for these processes, I don’t do anything else at the moment. One of the next updates to this script will send an alert to our DevOps notification system for anything running longer than some final threshold (or maybe just the high threshold).

I particularly like this part about not re-alerting over and over for a long-running query.  It’s a relatively minor part of the whole solution, but it gets annoying watching the same e-mail come in every 5 minutes, especially if there’s nothing you can (or at least want to) do about the cause.
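
If you want to experiment with the same approach, here is a minimal sketch of the collect-and-check step, assuming sp_WhoIsActive is installed; the table name and threshold are illustrative, not Ryan's:

DECLARE @schema varchar(max);
DECLARE @low_threshold_minutes int = 10;  -- illustrative threshold

IF OBJECT_ID('tempdb..##WhoIsActiveLog') IS NULL
BEGIN
    -- Have sp_WhoIsActive generate a table definition matching its output.
    EXEC dbo.sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
    SET @schema = REPLACE(@schema, '<table_name>', '##WhoIsActiveLog');
    EXEC (@schema);
END;

-- Scheduled every minute (e.g. via a SQL Agent job): snapshot current activity.
EXEC dbo.sp_WhoIsActive @destination_table = '##WhoIsActiveLog';

-- Alert candidates: anything running longer than the low threshold.
-- Ryan's script additionally records which sessions have already been
-- alerted on so they are not re-reported.
SELECT session_id, login_name, start_time, sql_text
FROM ##WhoIsActiveLog
WHERE start_time <= DATEADD(MINUTE, -@low_threshold_minutes, GETDATE());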


The OOM Killer Cometh

Venu Cherukupalli shows how to keep Linux’s Out of Memory Killer from taking down SQL Server:

When an index rebuild was kicked off on a large table (around 25GB), the reindex operation terminated, and the availability group failed over to the other replica.

Upon further investigation, we discovered that the SQL Server process terminated at the time the reindex operation was run, and this resulted in the failover.

To determine the reason for the unexpected shutdown, we reviewed the Linux system logs (/var/log/messages on RHEL) and the Pacemaker logs. From the Pacemaker and system logs, we saw entries indicating that oom-killer was invoked, and as a result the SQL Server process was terminated.

Read on for the two solutions.  I was hoping for a solution that involved making the SQL Server executable immune from oom-killer’s wily ways, but not so much in this post.


Memory-Optimized Table Maintenance

Ned Otter has a great post looking at what you can and cannot do with memory-optimized tables containing certain types of indexes:

Now, let’s attempt to create a NONCLUSTERED COLUMNSTORE INDEX:

ALTER TABLE dbo.InMemADD 
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);

Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.

It fails because we can only create a CLUSTERED columnstore index (CCI). For 25 years, Microsoft SQL Server differentiated between indexes that physically ordered data on storage (CLUSTERED) and those that did not (NONCLUSTERED). Unfortunately, they chose to ignore that pattern when creating the syntax for memory-optimized tables; using the word CLUSTERED is required when creating a columnstore index on memory-optimized tables. 

It’s a great post with plenty of trial and error.
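
For contrast, the variant that does succeed follows directly from the quoted explanation: declare the index CLUSTERED. A minimal sketch against the same table:

-- Per the explanation above, the columnstore index must be CLUSTERED, and
-- a clustered columnstore covers all columns, so no column list is given.
ALTER TABLE dbo.InMemADD
ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE;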


Data Lakes Aren’t New

Shannon Lowder reveals one of the deep, dark data lake secrets:

Turns out there are three basic zones or areas to a data lake: Raw, Managed, and Presentation.

The raw zone should be optimized for fast storage.  The goal is to get the data in as quickly as possible.  Don’t make any changes to this data.  You want it stored as close to the original format as possible.  It sounds just like staged data to me.  Data you’d build an extract package to get from source to your staging environment, right?

Maybe you’re thinking this is just a coincidence…let’s move on.

Spoilers:  it’s not a coincidence.


Linked Servers And Inaccessible Statistics

Jason Brimhall troubleshoots an error message involving linked servers and statistics:

On more than one occasion I have had an emergency request because everything was broken. The “everything” in almost every incident is an SSIS package that is failing with error messages. The error message will typically have text similar to the following:

Could not locate statistics ‘_WA_Sys_00000015_346C780E’ in the system catalogs.

Due to the error, the package fails processing and grinds to a halt. When diving into the package it is discovered that the missing stats happen to be coming from a linked server query. This raises a big bright blaring alarm for me. Why is the SSIS package accessing the data via a linked server? This is rather counter-productive and definitely contrary to what is desired from a performance perspective.

Jason methodically walks us through the troubleshooting process and provides the solution at the end.
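
If you hit the same error, a quick first check (not Jason's solution, just a hedged sketch) is to see whether an auto-created statistic with that name still exists on the remote side:

-- Run on the linked (remote) server; the statistic name comes from the
-- error message above.
SELECT s.name,
       OBJECT_NAME(s.object_id) AS table_name,
       s.auto_created
FROM sys.stats AS s
WHERE s.name = N'_WA_Sys_00000015_346C780E';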


Modifying Availability Group Endpoint URLs

Ronald Dameron shows how to change the endpoint URLs for Availability Group replicas:

I recently had to modify the Endpoint URLs on our SQL Server Availability Group replicas.

The reason for this blog post is that I could not answer the following questions:

Do I need to suspend data movement prior to making this change? 

Would this change require a restart of the database instance?

Now Ronald can, and after reading his post, you’ll be able to as well.
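
For reference, the change itself is a short statement per replica; this is a hedged sketch with illustrative availability group, replica, and host names:

-- Run on the primary; repeat MODIFY REPLICA for each replica whose
-- endpoint URL is changing.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE1'
WITH (ENDPOINT_URL = N'TCP://sqlnode1.newdomain.com:5022');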


Reasons For Using Docker With R

Jeroen Ooms gives us a few reasons why we might want to containerize our R-based products:

The flagship of the OpenCPU system is the OpenCPU server: a mature and powerful Linux stack for embedding R in systems and applications. Because OpenCPU is completely open source, we can build and ship on DockerHub. A ready-to-go Linux server with both OpenCPU and RStudio can be started using the following (use port 8004 or 80):

docker run -t -p 8004:8004 opencpu/rstudio

Now simply open http://localhost:8004/ocpu/ and http://localhost:8004/rstudio/ in your browser! Log in via rstudio with user: opencpu (passwd: opencpu) to build or install apps. See the readme for more info.

This is in the context of one particular product, but the reasons fit other scenarios too.  H/T R-Bloggers


Temporal Table Permissions

Kenneth Fisher shows us the permissions needed to create temporal tables:

Msg 13538, Level 16, State 3, Line 6
You do not have the required permissions to complete the operation.

Well, that’s not good. What permissions do I need, exactly? Well, again, according to BOL I need CONTROL on the table and its history table. For those that don’t know, CONTROL is the top-level permission for any object. You can do anything at all with it.

Read the whole thing.  I gather the reason for requiring this level of access is that you don’t want people to go monkeying with data collected for auditing purposes.
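
In practice, the BOL requirement translates to a pair of grants; here is a hedged sketch with illustrative table and principal names:

-- CONTROL is required on both the temporal table and its history table.
GRANT CONTROL ON OBJECT::dbo.Employee TO TemporalUser;
GRANT CONTROL ON OBJECT::dbo.EmployeeHistory TO TemporalUser;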


Anomaly Detection With Kafka Streams

Ajmal Karuthakantakath shows us an application which performs fairly simple anomaly detection using Kafka Streams:

The problem is in the banking loan payment domain, where customers have taken a loan and they need to make monthly payments to repay the loan amount.

Assume there are millions of customers in the system and all these customers need to make monthly payments to their account. Each customer may have a different due date for their monthly loan payment.

Each customer payment will appear as a PaymentScheduleEvent event. Customers can make more than one PaymentScheduleEvent per month. Each monthly due date for a customer will appear as a PaymentDueEvent.

The anomaly condition, chosen arbitrarily for this example, is an amount due of more than $150 for any customer at any point in time.

Click through for instructions, the application, and further resources.  If you want to learn Kafka Streams, this should keep you busy for a little while.


Supplementary Characters In SQL Server 2017

Solomon Rutzky tests SQL Server 2017 collations to determine whether they support supplementary characters:

As you scroll through the list, does anything appear to be missing? Go ahead, take another look. I’ll wait 😉. Figure it out? Yep, that’s right: none of those Collations end in “_SC“. Collations ending in “_SC” were added in SQL Server 2012 and support the full UTF-16 character set. Without the “_SC” ending, built-in functions and comparisons / sorting only fully support the base UCS-2 character set (i.e. the first 65,536 Unicode Code Points). You can, of course, store and view all Unicode Code Points, even Supplementary Characters, in non-“_SC” Collations, but they will be interpreted as being two separate “unknown” characters instead of as a single character.

To illustrate this point, the following query shows the two surrogate Code Points (not actual characters by themselves, but when used in any of the assigned pair combinations they produce a single character), the resulting Supplementary Character, and how the LEN function interprets that sequence in both non-“_SC” and “_SC” Collations.

Read on for Solomon’s testing, which includes a cat face emoji.  I know that I, for one, thought there was insufficient cat face emoji representation in SQL Server prior to 2017.
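
If you want a quick taste of the behavior, here is a minimal sketch in the spirit of Solomon's query (not his exact code), building the cat face emoji (U+1F431) from its surrogate pair and comparing LEN under the two collation flavors:

-- Assumes the current database's default collation is not an "_SC"
-- collation, so NCHAR returns the raw surrogate code units.
DECLARE @cat nvarchar(10) = NCHAR(0xD83D) + NCHAR(0xDC31);  -- U+1F431

SELECT LEN(@cat COLLATE Latin1_General_100_CI_AS)    AS len_non_SC,  -- 2
       LEN(@cat COLLATE Latin1_General_100_CI_AS_SC) AS len_SC;      -- 1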
