
Author: Kevin Feasel

Ditching IsNumeric()

Phil Factor on the IsNumeric() function:

IsNumeric() is actually answering the question ‘Can this string be converted or ‘cast’ to a numeric datatype by SQL Server?’ It is pretty good at getting this answer right but it doesn’t tell you which datatype. It also isn’t interested in such things as overflow.

This problem of IsNumeric() was solved in SQL Server 2012. There was no way of morphing IsNumeric() into something more valuable, so Try_Cast(), Try_Parse(), and Try_Convert() were introduced. The introduction of these system functions really solves the problem, unless you are still on a version earlier than SQL Server 2012. I’ll show how to do the same thing in previous versions of SQL Server, and demonstrate one or two tricks you can do with these functions.

I’ll stick with the post-2012 version, please.
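
For a quick sense of the gap Phil describes, here is a minimal sketch (assuming SQL Server 2012 or later; the string values are just arbitrary examples):

    SELECT
        ISNUMERIC('1e4')                AS IsNumeric_Scientific,  -- 1: "castable to some numeric type"
        TRY_CAST('1e4' AS INT)          AS TryCast_Int,           -- NULL: not a valid integer literal
        TRY_CAST('1e4' AS FLOAT)        AS TryCast_Float,         -- 10000: float accepts scientific notation
        ISNUMERIC(',')                  AS IsNumeric_Comma,       -- 1, even though a lone comma is no number
        TRY_CAST(',' AS DECIMAL(10, 2)) AS TryCast_Comma;         -- NULL: the conversion quietly fails

The TRY_ functions return NULL instead of raising an error, so you can test against the exact datatype and precision you actually need rather than the vague "something numeric" that IsNumeric() reports.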

Comments closed

CSV Import Speeds With H2O

WenSui Liu benchmarks three CSV loading methods in R:

The importFile() function in H2O is extremely efficient due to the parallel reading. The benchmark comparison below shows that it is comparable to the read.df() in SparkR and significantly faster than the generic read.csv().

I wonder whether there are cases where the results would vary significantly; regardless, for reading a large data file, parallel processing does tend to be faster.

Comments closed

Linear Prediction Confidence Region Flare-Out

John Cook explains why the confidence region of a tracked object flares out instead of looking conical (or some other shape):

Suppose you’re tracking some object based on its initial position x0 and initial velocity v0. The initial position and initial velocity are estimated from normal distributions with standard deviations σx and σv. (To keep things simple, let’s assume our object is moving in only one dimension and that the distributions around initial position and velocity are independent.)

The confidence region for the object flares out over time, something like the bell of a trumpet.

Read on for the explanation.
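
For the impatient, the variance arithmetic behind the flare, under the assumptions in the excerpt (one dimension, independent errors in initial position and velocity): the predicted position at time t is x_0 + t v_0, so

    \operatorname{Var}(x_0 + t v_0) = \sigma_x^2 + t^2 \sigma_v^2,
    \qquad
    \operatorname{SD}(x_0 + t v_0) = \sqrt{\sigma_x^2 + t^2 \sigma_v^2}.

Near t = 0 the width of the region is roughly constant at σx, while for large t it grows almost linearly, like t·σv; the smooth transition from the flat start to the asymptotic cone is what produces the trumpet-bell shape.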

Comments closed

Building Random Number Ranges

David Fowler shows how to generate a random number for each record in a result set:

Hmmmmm…. It looks like we’ve got the same number for every person, that wasn’t what we wanted.  And that’s my issue with RAND(), it’ll give you a different random number every time it runs but if run as part of a query it’ll always return the same number for every row returned.

So what else can we do?  Well, there is something that gives us a ‘random’ value for every row in the query, and that’s our good old friend NEWID().  Let’s try the same query but this time we’ll swap RAND() with NEWID().

One major use case for this is sampling data sets for model training and testing:  if you pull from a range of 1-10, you could perhaps train against 1-5, cross-validate against 6-7, and test against 8-10.  Doing this instead of TOP X% reduces the likelihood of sampling bias.
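
Here is a minimal sketch of the NEWID() trick for bucketing rows (the dbo.People table is hypothetical); CHECKSUM over NEWID() produces a different value for every row, and the modulo maps it into the 1-10 range:

    -- One value per row, because NEWID() is re-evaluated for each row
    SELECT
        p.PersonName,
        ABS(CHECKSUM(NEWID())) % 10 + 1 AS SampleBucket  -- roughly uniform integer from 1 to 10
    FROM dbo.People AS p;

You could then filter on SampleBucket to carve out training (1-5), cross-validation (6-7), and test (8-10) sets without the ordering bias that a TOP X% split can introduce.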

Comments closed

Conditional Job Retry

Chris Bell has a procedure which conditionally retries a failed SQL Agent job from a pre-determined step:

When the job fails and the alert message is compiled, this procedure gets called, and the job name, step name, and a delay value are passed to it. There is also a retry flag that comes back from this procedure.

The first thing this procedure does is go and find the last failed step for the particular job. It then counts and, based on the @retry value, verifies whether a retry job has already been created. This is in case some other process tries to do the same thing, and it should help prevent too many retries from firing off.
If a retry job does not exist, this process creates a new disposable job that will rerun the original from the beginning or the step that failed based on the checking for “Level 1” or “Level 2” in the job name. The job is prefixed with ‘Retry -‘ so it can be found easily in your server’s job list.
If a delay is specified, 2 minutes in this example, then it calculates a new run time for the retry job and finally creates the job.

This helps make SQL Agent jobs a little more robust.
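
This is not Chris’s procedure, but as a rough sketch of its first step, the most recent failed step for a job lives in msdb (the job name below is hypothetical):

    SELECT TOP (1)
        h.step_id,
        h.step_name,
        h.run_date,
        h.run_time
    FROM msdb.dbo.sysjobhistory AS h
        INNER JOIN msdb.dbo.sysjobs AS j
            ON j.job_id = h.job_id
    WHERE j.name = N'Nightly Load'  -- hypothetical job name
        AND h.run_status = 0        -- 0 = failed
        AND h.step_id > 0           -- step 0 is the overall job-outcome row
    ORDER BY h.instance_id DESC;

From there, the disposable retry job would be assembled with msdb.dbo.sp_add_job, sp_add_jobstep, sp_add_jobserver, and a one-time schedule, which is where the delay value comes into play.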

Comments closed

Bundling Measures Together

Philip Seamark shows how to bundle measures together in Power BI so they all appear at the top of the Fields section:

I’m going to share in this blog a technique I’ve found useful in Power BI for collecting measures together in one place AND placing them at the top of the field list.

The good news is, calculated measures do not have to exist on the table that stores the underlying data specific to that measure.  Measures can be placed on any table in the model and they will still work as expected.  This may not be immediately obvious but it’s handy to know.

So far, I’ve kept measures on their logical best-fit tables, but Philip’s hint looks quite useful once the set of measures grows, or if there are a number of cross-table measures.

Comments closed

Separating Data And Log Files

Brent Ozar looks at an old chestnut:

So it’s time for a quiz:

  1. If you put all of a SQL Server’s data files & logs on a single volume, how many failures will that server experience per year?
    • Bonus question: what kinds of data loss and downtime will each of those failure(s) have?
  2. If you split a SQL Server’s data files onto one volume, and log files onto another volume, how many failures will that server experience per year?
    • Bonus question: what kinds of data loss and downtime will each of those failures have?

Think carefully about the answers – or read the comments to see someone else’s homework, hahaha – before you move on.

With SANs, this advice is not even that good on the performance side—especially with modern SANs which don’t let you dedicate spindles.  It definitely doesn’t fly on the reliability side.

Comments closed

Columnstore Dictionaries

Niko Neugebauer explains some interesting facts about columnstore index dictionaries:

From a recent experience at a customer, I had an opportunity to dive into the details of the Columnstore Indexes Dictionaries. I have to admit that my understanding of them was pretty low, from what I have learned in the recent days, and I would like to share what I have learned with everyone.

These are some of the key findings that I have discovered:
– The local dictionaries are not exclusively connected with just 1 Row Group, but with multiple ones;
– The dictionaries within Columnstore Indexes are compressed in a different way, depending on the type of the compression applied (Columnstore vs Columnstore Archival);

and let us dive into each one of them:

Read the whole thing.
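
If you want to poke at the dictionaries yourself, sys.column_store_dictionaries exposes them; here is a hedged sketch (the dbo.FactSale table name is hypothetical). For each column, dictionary_id 0 is the global dictionary and higher IDs are local dictionaries:

    SELECT
        OBJECT_NAME(p.object_id) AS table_name,
        d.column_id,
        d.dictionary_id,   -- 0 = global dictionary; > 0 = local dictionaries
        d.type,
        d.entry_count,
        d.on_disk_size
    FROM sys.column_store_dictionaries AS d
        INNER JOIN sys.partitions AS p
            ON p.hobt_id = d.hobt_id
    WHERE p.object_id = OBJECT_ID(N'dbo.FactSale')  -- hypothetical table
    ORDER BY d.column_id, d.dictionary_id;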

Comments closed

Stopping SQL Injection

Wayne Sheffield has a post explaining what SQL injection is and discussing how to stop it:

Me: Umm, boss… Does this report allow users to enter in search criteria?

Boss: But of course!

Me: Well, I really hate to tell you this, but we have a SQL Injection problem.

And after a bit of back and forth where the developers were insisting that no way was there a SQL Injection problem, I sat down with the dev team lead and the boss and proved it to them. We created a dummy table in the database, went to the report criteria form, and I dropped the table.

Wayne: +1000

Development Team: -1000

Injection attacks are still the most common form of attack out there.  Sadly.
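
The core of the problem, and the usual fix, in a minimal sketch (the dbo.Products table and the parameter sizes are hypothetical):

    -- Vulnerable: the user's search text is concatenated straight into the SQL string
    DECLARE @SearchTerm NVARCHAR(100) = N'''; DROP TABLE dbo.DummyTable; --';
    DECLARE @UnsafeSql NVARCHAR(MAX) =
        N'SELECT * FROM dbo.Products WHERE ProductName = ''' + @SearchTerm + N''';';
    -- EXEC (@UnsafeSql);  -- running this would execute the injected DROP TABLE

    -- Safer: pass the input as a parameter, so it is treated as data rather than code
    EXEC sys.sp_executesql
        N'SELECT * FROM dbo.Products WHERE ProductName = @ProductName;',
        N'@ProductName NVARCHAR(100)',
        @ProductName = @SearchTerm;

Wayne's demo with the dummy table exploits the first pattern; parameterization is the standard way to close the hole.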

Comments closed

Kafka Offset Management With Spark Streaming

Guru Medasana and Jordan Hambleton explain how to perform Kafka offset management when using Spark Streaming:

Enabling Spark Streaming’s checkpoint is the simplest method for storing offsets, as it is readily available within Spark’s framework. Streaming checkpoints are purposely designed to save the state of the application, in our case to HDFS, so that it can be recovered upon failure.

Checkpointing the Kafka Stream will cause the offset ranges to be stored in the checkpoint. If there is a failure, the Spark Streaming application can begin reading the messages from the checkpoint offset ranges. However, Spark Streaming checkpoints are not recoverable across applications or Spark upgrades and hence not very reliable, especially if you are using this mechanism for a critical production application. We do not recommend managing offsets via Spark checkpoints.

The authors give several options, so check it out and pick the one that works best for you.

Comments closed