Press "Enter" to skip to content

Month: October 2017

Tracking Replication Throughput

Phil Grayson has a couple of replication-related queries:

It’s a right laugh when replication goes wrong. If you haven’t been there and use replication heavily, you’re in for the night / early morning of your life. Like any issue, the cause of replication problems could be one of many, but checking what commands are being passed through the distributor is always worthwhile, as an inefficient query could be updating the same data over and over, putting unnecessary strain on the distributor, the infrastructure, or the subscribers.

Read on for those queries.
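
If you want a rough idea of what such a check can look like, here is a minimal sketch (not Phil’s query) that counts pending commands per article in the distribution database, using the standard replication metadata tables:

-- Sketch only: how many commands are queued per article in the distribution database
USE distribution;

SELECT  a.publisher_db,
        a.article,
        COUNT_BIG(*) AS pending_commands
FROM dbo.MSrepl_commands AS c
JOIN dbo.MSarticles AS a
    ON a.article_id = c.article_id   -- simplified join; a fuller version would also match on the publisher database
GROUP BY a.publisher_db, a.article
ORDER BY pending_commands DESC;

Articles at the top of that list are a good first place to look for the kind of repetitive updates Phil describes.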


XESmartTarget: Event Handling With SQL Server

Gianluca Sartori introduces a new tool to help with event handling in SQL Server:

I am proud to introduce XESmartTarget: the easiest way to interact with extended events sessions without writing a single line of code.

XESmartTarget uses the Extended Events streaming API to connect to a session and execute some actions in response to each event. The actions you perform can be configured in a simple .json file, which controls the Response types and all their properties.

For instance, if you have a session to capture deadlocks, you may want to be notified whenever such an event is captured. All you need to do is configure XESmartTarget to send an email using the EmailResponse class. You can do that by creating a simple .json file with all the parameters needed.

This looks quite interesting.
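
As a rough idea of the shape of such a file, here is a sketch of the deadlock-to-email scenario. The EmailResponse class comes straight from the quote above; the remaining property names are assumptions based on the description rather than the tool’s documented schema:

{
  "Target": {
    "ServerName": "SQLSERVER01",
    "SessionName": "deadlocks",
    "Responses": [
      {
        "__type": "EmailResponse",
        "SMTPServer": "smtp.example.com",
        "Sender": "xesmarttarget@example.com",
        "To": "dba-team@example.com",
        "Subject": "Deadlock captured",
        "Events": [ "xml_deadlock_report" ]
      }
    ]
  }
}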


Replacing DAX PathContains With OR

Chris Koester shows the performance benefits of replacing the PathContains function in DAX with a simple OR operator:

This post shows how you can generate optimized multi-value DAX parameters in SSRS and achieve greater performance compared to the DAX PathContains function. This will be a short post that provides the SSRS expression to convert multiple SSRS parameters into a double-pipe delimited string for use in a DAX query. In other words, the goal is to use the DAX OR operator (||) instead of the PathContains function. I’m assuming the reader has experience with SSRS, so not all steps will be shown.

Read on for the example, which ended up being a 16X performance improvement.
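
To make the contrast concrete, here are the two filter shapes in rough form; the table and column names are invented for illustration, and the real query is in Chris’s post:

-- PathContains: test membership in a pipe-delimited parameter string
FILTER ( 'Product', PATHCONTAINS ( "Bikes|Accessories", 'Product'[Category] ) )

-- OR (||): the form the SSRS expression generates instead
FILTER (
    'Product',
    'Product'[Category] = "Bikes"
        || 'Product'[Category] = "Accessories"
)

The second version hands the engine plain equality predicates, which is presumably where the performance gain comes from.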


Detecting RAID Failures

Randolph West has an interesting after-the-fact test to determine whether data on a RAID array which experienced major failure is recoverable:

We took a look through some random files on the disk. I was looking for evidence that the VMDK file that had been copied was taken from a RAID array in a corrupt state, namely that one of the disks had failed, and that the second disk had failed during the rebuild of the array.

The easiest way to see this is to look for a JPEG or PNG file over 256 KB in size. Most RAID block sizes are multiples of 64 KB, usually 128 KB or 256 KB. Each block is split over the individual physical disks, with a parity bit, so for a particular block of data, if the RAID array has failed, you will see a corrupt image, or the image won’t be viewable at all.

Randolph presents an interesting smoke test here.  Read the whole thing.
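
The check itself is visual, but the “won’t open at all” case can be automated. Here is a quick PowerShell sketch (the path is hypothetical) that flags large images which fail to decode:

# Rough sketch of the smoke test described above: find JPEG/PNG files over 256 KB
# and try to decode them; a failure suggests blocks were lost when the array failed.
Add-Type -AssemblyName System.Drawing

Get-ChildItem -Path 'D:\Recovered' -Recurse -Include *.jpg, *.png |
    Where-Object { $_.Length -gt 256KB } |
    ForEach-Object {
        $file = $_
        try {
            $img = [System.Drawing.Image]::FromFile($file.FullName)
            $img.Dispose()
        }
        catch {
            Write-Output "Failed to decode (possible corruption): $($file.FullName)"
        }
    }

Images that decode cleanly still need eyeballing, since partially corrupt files can survive the load.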


Installing dbatools

Chrissy LeMaire has a good walkthrough on how to install a module from the Powershell gallery, using dbatools as an example:

It’s easy for PowerShell toolmakers to forget that new users may have questions about ExecutionPolicy and Repositories. If dbatools is the first module you’ve ever installed and used, this guide is intended to help you with all of your install/setup questions and concerns.

In order to emulate what your experience may be like, I spun up a fresh Windows 10 instance in Azure and went through all of the required steps which include addressing:

  • Setting the Execution Policy

  • Explicitly trusting Microsoft’s PowerShell Gallery repository

  • Installing dbatools

  • Explicitly trusting dbatools as a Publisher, before first use

If you’ve never installed dbatools before, Chrissy shows just how easy it can be.
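
The command-line version of those steps is short; here is a sketch (the scope and policy choices are up to you):

# Allow locally-run scripts (one-time, per user)
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Trust the PowerShell Gallery so Install-Module stops prompting
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

# Install dbatools for the current user
Install-Module -Name dbatools -Scope CurrentUser

Depending on your execution policy, you may also be prompted to trust the dbatools code-signing publisher the first time the module runs, which is the last item on Chrissy’s list.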


Unsupervised Decision Trees

William Vorhies describes what unsupervised decision trees are:

In anomaly detection we are attempting to identify items or events that don’t match the expected pattern in the data set and are by definition rare.  The traditional ‘signature-based’ approach widely used in intrusion detection systems creates training data that can be used in normal supervised techniques.  When an attack is detected, the associated traffic pattern is recorded, marked, and classified as an intrusion by humans.  That data, combined with normal data, then creates the supervised training set.

In both the supervised and unsupervised cases, decision trees, now in the form of random forests, are the weapon of choice.  Decision trees are nonparametric; they don’t make an assumption about the distribution of the data.  They’re great at combining numeric and categorical features, and they handle missing data like a champ.  All types of anomaly data tend to be highly dimensional, and decision trees can take it all in and offer a reasonably clear guide for pruning back to just what’s important.

To be complete, there is also a category of semi-supervised anomaly detection in which the training data consists only of normal transactions without any anomalies.  This is also known as ‘One-Class Classification’ and uses one-class SVMs or autoencoders in a slightly different way not discussed here.

Interesting reading.  I’d had no idea that unsupervised decision trees were even a thing.
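
The post stays conceptual, but if you want to try tree-based, label-free anomaly detection yourself, scikit-learn’s IsolationForest is one readily available implementation. This is my example, not necessarily the exact technique Vorhies describes:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))    # the expected pattern
outliers = rng.uniform(low=-6.0, high=6.0, size=(20, 2))   # rare, anomalous points
X = np.vstack([normal, outliers])

# No labels are supplied; the forest isolates unusual points in fewer splits
model = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
labels = model.fit_predict(X)   # -1 = anomaly, 1 = normal
print((labels == -1).sum(), "points flagged as anomalies")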


Stop Using word2vec

Chris Moody wants you to stop using word2vec:

When I started playing with word2vec four years ago I needed (and luckily had) tons of supercomputer time. But because of advances in our understanding of word2vec, computing word vectors now takes fifteen minutes on a single run-of-the-mill computer with standard numerical libraries. Word vectors are awesome but you don’t need a neural network – and definitely don’t need deep learning – to find them. So if you’re using word vectors and aren’t gunning for state of the art or a paper publication then stop using word2vec.

Chris has a follow-up post on word tensors as well:

There’s only three steps to computing word tensors. Counting word-word-document skipgrams, normalizing those counts to form the PMI-like M tensor and then factorizing M into smaller matrices.

But to actually perform the factorization we’ll need to generalize the SVD to higher-rank tensors. Unfortunately, tensor algebra libraries aren’t very common. We’ve written one for non-negative sparse tensor factorization, but because the PMI can be both positive and negative it isn’t applicable here. Instead, for this application I’d recommend HOSVD as implemented in scikit-tensor. I’ve also heard good things about tensorly.

I’m going to keep using word2vec for now, but it’s a good pair of posts.
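
For the simpler matrix (rather than tensor) case, the recipe from the first post fits in a few lines of NumPy. This is a toy sketch of the count / PMI / SVD idea, not Chris’s code:

import numpy as np

docs = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}

# Count word-word co-occurrences within a +/-1 word window (skipgrams)
counts = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for i, w in enumerate(d):
        for j in range(max(0, i - 1), min(len(d), i + 2)):
            if i != j:
                counts[idx[w], idx[d[j]]] += 1

# Positive pointwise mutual information (PPMI)
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(counts * total / (row * col))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Factorize: word vectors are the scaled left singular vectors
U, S, Vt = np.linalg.svd(ppmi)
word_vectors = U[:, :2] * S[:2]

The word-tensor version in the second post generalizes this by adding a document axis and swapping the SVD for a higher-order decomposition.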


Tracking Long-Running Queries

Ryan Booz walks us through tracking long-running queries with sp_whoisactive:

This solution runs sp_WhoIsActive every minute and saves the output into a global temp table. From there, I look for any processes that have been running for more than the low threshold setting. Any of the processes that have not been identified and stored previously get logged, output to an HTML table, and an email alert sent.

Next, I take a second look at the table for anything that’s been running longer than the high threshold.  If a second email alert has not been sent for these processes, we output the same data and send the email. If two alerts have already been sent for these processes, I don’t do anything else at the moment. One of the next updates to this script will send an alert to our DevOps notification system for anything running longer than some final threshold (or maybe just the high threshold).

I particularly like this part about not re-alerting over and over for a long-running query.  It’s a relatively minor part of the whole solution, but it gets annoying watching the same e-mail come in every 5 minutes, especially if there’s nothing you can (or at least want to) do about the cause.
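
The collection step in a setup like this can be as small as the following sketch; the table name is made up, and Ryan’s actual script does considerably more:

-- sp_WhoIsActive writes its snapshot into a table you have created beforehand;
-- run it once with @return_schema = 1 to generate the matching CREATE TABLE statement.
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,
    @destination_table = 'dbo.WhoIsActiveSnapshots';

-- Anything captured that has been running longer than the "low" threshold (10 minutes here)
SELECT *
FROM dbo.WhoIsActiveSnapshots
WHERE start_time < DATEADD(MINUTE, -10, GETDATE());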


The OOM Killer Cometh

Venu Cherukupalli shows how to keep Linux’s Out of Memory Killer from taking down SQL Server:

When an index rebuild was kicked off on a large table (around 25GB), the reindex operation terminated, and the availability group failed over to the other replica.

Upon further investigation, we discovered that the SQL Server process terminated at the time the reindex operation was run, and this resulted in the failover.

To determine the reason for the unexpected shutdown, we reviewed the Linux system logs (/var/log/messages on RHEL) and the pacemaker logs. From the pacemaker logs and system logs, we saw entries indicating that oom-killer was invoked, and as a result the SQL Server process was terminated.

Read on for the two solutions.  I was hoping for a solution that involved making the SQL Server executable immune from oom-killer’s wily ways, but not so much in this post.
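
One commonly cited mitigation (not necessarily one of the two in the post) is to cap SQL Server’s memory through mssql-conf so the process leaves headroom for the rest of the box; the 6144 MB figure below is just an example:

# Cap SQL Server's memory on Linux and restart the service
sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 6144
sudo systemctl restart mssql-server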


Memory-Optimized Table Maintenance

Ned Otter has a great post looking at what you can and cannot do with memory-optimized tables containing certain types of indexes:

Now, let’s attempt to create a NONCLUSTERED COLUMNSTORE INDEX:

ALTER TABLE dbo.InMemADD 
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);

Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.

It fails because we can only create a CLUSTERED columnstore index (CCI). For 25 years, Microsoft SQL Server differentiated between indexes that physically ordered data on storage (CLUSTERED) and those that did not (NONCLUSTERED). Unfortunately, they chose to ignore that pattern when creating the syntax for memory-optimized tables; using the word CLUSTERED is required when creating a columnstore index on memory-optimized tables. 

It’s a great post with plenty of trial and error.
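
For reference, the CLUSTERED form that the error message and Ned’s explanation point toward looks like this; a sketch against the same sample table, not copied from the post, and still subject to the other restrictions Ned walks through:

-- The CLUSTERED keyword is required for columnstore indexes on memory-optimized tables
ALTER TABLE dbo.InMemADD
ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE;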
