Press "Enter" to skip to content

Curated SQL Posts

Legible Function Chaining In R

John Mount shows a few techniques for legible function chaining with R:

The dot intermediate convention is very succinct, and we can use it with base R transforms to get a correct (and performant) result. Like all conventions: it is just a matter of teaching, learning, and repetition to make this seem natural, familiar and legible.

My preference is to use dplyr + magrittr because I really do like that pipe operator.  John’s point is well-taken, however:  you don’t need to use the tidyverse to write clean R code, and there can be value in using the base functionality.
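
To make the comparison concrete, here is a small sketch of the two styles as I understand them, using base R's transform() and subset() on a made-up data frame (the object and column names are just for illustration):

    d <- data.frame(x = 1:5)

    # Dot intermediate convention: assign each step's result to the variable "."
    . <- d
    . <- transform(., x2 = x * 2)
    . <- subset(., x2 > 4)
    res_base <- .

    # The magrittr pipe version of the same chain
    library(magrittr)
    res_pipe <- d %>%
      transform(x2 = x * 2) %>%
      subset(x2 > 4)

    identical(res_base, res_pipe)  # TRUE

Either way you avoid deeply nested function calls; the difference is whether the intermediate value is an explicit variable or hidden behind the pipe.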

Loading JSON-Based Data Into SQL Server From .NET

Chris Koester has a quick example demonstrating one way to take JSON data from .NET code and load it into SQL Server:

Next we need to create a stored procedure that will accept JSON text as a parameter and insert it into the table. Two important points here:

  • JSON text must use the NVARCHAR(MAX) data type in SQL Server in order to support the JSON functions.

  • The OPENJSON function is used to convert the JSON text into a rowset, which is then inserted into the previously created table.

The whole process is quite easy; check it out.
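
Here's a minimal sketch of the pattern Chris describes; the table, procedure, and JSON property names are mine, made up for illustration:

    -- Target table for the parsed JSON rows
    CREATE TABLE dbo.SalesOrders
    (
        OrderId   INT           NOT NULL,
        Customer  NVARCHAR(100) NOT NULL,
        OrderDate DATETIME2(0)  NOT NULL
    );
    GO

    -- JSON text arrives as NVARCHAR(MAX) so the JSON functions can work with it
    CREATE OR ALTER PROCEDURE dbo.InsertSalesOrders
        @json NVARCHAR(MAX)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- OPENJSON turns the JSON array into a rowset we can insert directly
        INSERT INTO dbo.SalesOrders (OrderId, Customer, OrderDate)
        SELECT OrderId, Customer, OrderDate
        FROM OPENJSON(@json)
        WITH
        (
            OrderId   INT           '$.OrderId',
            Customer  NVARCHAR(100) '$.Customer',
            OrderDate DATETIME2(0)  '$.OrderDate'
        );
    END;
    GO

    -- The .NET side just passes the serialized JSON string as the parameter
    EXEC dbo.InsertSalesOrders
        @json = N'[{"OrderId":1,"Customer":"Contoso","OrderDate":"2018-05-01T10:00:00"}]';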

More Bash Basics: Commands, Conditions, And Math

Mark Wilkinson continues his Bash for the PowerShell developer series:

An expression is a statement that evaluates to either true or false. In our case we are just checking if the $hour_of_day variable is less than 12 and greater than or equal to 5. When comparing numeric values you must use the same operators you are familiar with from PowerShell: -lt, -gt, -le, -ge, -ne. When comparing string values you have to use operators you might be more used to from other languages: == and !=, and both values must be enclosed in double quotes [ "string1" == "string2" ].

With just the things Mark has shown so far, you can begin to build helpful scripts.
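
A quick sketch of the kind of comparison Mark describes (the variable and messages are my own, purely illustrative):

    #!/bin/bash

    hour_of_day=$(date +%H)   # current hour of the day, 00-23

    # Numeric comparisons use the PowerShell-style operators: -lt, -gt, -le, -ge, -ne
    if [ "$hour_of_day" -ge 5 ] && [ "$hour_of_day" -lt 12 ]; then
        greeting="Good morning"
    else
        greeting="Hello"
    fi

    # String comparisons use == / != with both values in double quotes
    if [ "$greeting" == "Good morning" ]; then
        echo "$greeting! It is hour $hour_of_day."
    else
        echo "$greeting."
    fi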

DATETIME2 In SQL Server

Randolph West continues his SQL Server date & time data types series:

SQL Server 2008 introduced new data types to handle dates and times in a more intelligent way than the DATETIME and SMALLDATETIME types that we looked at previously.

This week, we look at the DATETIME2 data type. I’m not the first person to think that this was probably not the best name for a data type, but here we are, a decade later.

DATETIME2 is, at its heart, a combination of the DATE and TIME data types we covered in previous weeks. DATE is 3 bytes long and TIME is between 3 and 5 bytes long depending on accuracy. This of course means that DATETIME2 can be anything from 6 to 8 bytes in length.

Nowadays, if you want to store a date plus time, this should be your default, not DATETIME.
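
A quick way to see those storage sizes for yourself (just a sketch; the variable names are arbitrary):

    -- DATALENGTH shows the storage size at different precisions
    DECLARE @d2 DATETIME2(2) = SYSDATETIME();   -- precision 0-2 stores in 6 bytes
    DECLARE @d4 DATETIME2(4) = SYSDATETIME();   -- precision 3-4 stores in 7 bytes
    DECLARE @d7 DATETIME2(7) = SYSDATETIME();   -- precision 5-7 (the default) stores in 8 bytes

    SELECT DATALENGTH(@d2) AS bytes_p2,
           DATALENGTH(@d4) AS bytes_p4,
           DATALENGTH(@d7) AS bytes_p7;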

Function Calls Missing From dm_exec_query_stats

Kendra Little blogs about a data collection oddity with functions in SQL Server:

Some of my functions in the demo code were showing up just fine. I was really puzzled by that. I thought …

Maybe this is a bug with ‘CREATE OR ALTER’? A sign of some weird memory pressure? Something introduced in SQL Server 2017? A buggy side effect of implicit conversions in some of the functions? A problem with the queries I was using? A weird setting on the database? (Also: about 100 other things that didn’t turn out to be the case.)

I finally wrote up some simple demo code, tested it against a SQL Server 2008 R2 instance (omitting the Query Store components), compared it with SQL Server 2017, and found it to be consistent.

Click through to see which types of functions show up and which ones stay hidden.
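
If you want to poke at this on your own instance, here's a rough sketch of the sort of check involved, run in the database that owns the functions (the object type filter is my assumption about what to look for):

    -- Which function objects currently have entries in dm_exec_query_stats?
    SELECT OBJECT_NAME(st.objectid) AS function_name,
           o.type_desc,
           qs.execution_count,
           qs.total_worker_time
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    JOIN sys.objects AS o
        ON o.object_id = st.objectid
    WHERE o.type IN ('FN', 'IF', 'TF')   -- scalar, inline, and multi-statement functions
    ORDER BY qs.total_worker_time DESC;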

Understanding Recovery Intervals

Paul Randal explains what the recovery interval option in SQL Server really means:

One of the concepts I find people misunderstand frequently is the recovery interval, either for the server as a whole or the per-database setting that was introduced in SQL Server 2012 for indirect checkpoints.

There are two misconceptions here:

  1. The recovery interval equals how often a checkpoint will occur

  2. SQL Server guarantees the recovery interval (i.e. crash recovery for the database will only take the amount of time specified in the recovery interval)

It’s good to keep this in mind.
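
For reference, a rough sketch of where the two settings live (the database name is a placeholder); per Paul's post, neither one is a guarantee of how long crash recovery will actually take:

    -- Instance-wide setting, in minutes; 0 means the default target of roughly one minute
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'recovery interval (min)', 0;
    RECONFIGURE;

    -- Per-database indirect checkpoint target (SQL Server 2012+), in seconds
    ALTER DATABASE [YourDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;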

Tabs And Spaces In SSMS

Wayne Sheffield continues his SQL Server Management Studio Tips & Tricks series:

Regardless of whether you like to use tabs or spaces, this is where you go to configure your settings. The first part of the screen controls the indenting options. If “None” is selected, then the next line will start at the beginning of the line. If you have selected “Block”, then it will align the next line with the previous line. And if you are using “Smart”, then the appropriate language will determine which indenting style to use.

The next section controls the tab size / indent size. This controls how many characters a tab takes. It also controls whether tabs are converted to spaces or kept as tabs.

You can read more about these options at this link: Manage Code Formatting.

I turn on the View Whitespace option that Wayne mentions because I’m a formatting pedant that way.

Kafka Partitioning Strategies

Amy Boyle shares some thoughts on Kafka partitioning strategy:

If you have enough load that you need more than a single instance of your application, you need to partition your data. The producer clients decide which topic partition data ends up in, but it’s what the consumer applications will do with that data that drives the decision logic. If possible, the best partitioning strategy to use is random.

However, you may need to partition on an attribute of the data if:

  • The consumers of the topic need to aggregate by some attribute of the data.

  • The consumers need some sort of ordering guarantee.

  • Another resource is a bottleneck and you need to shard data.

  • You want to concentrate data for the efficiency of storage and/or indexing.

Good advice.
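
As a sketch of what keyed partitioning looks like with the Java producer (the topic name, key, and serializer choices here are just illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Records that share a key hash to the same partition, which is what gives
                // consumers per-key ordering and lets them aggregate by that attribute.
                producer.send(new ProducerRecord<>("page-views", "customer-42", "viewed /pricing"));
                producer.send(new ProducerRecord<>("page-views", "customer-42", "viewed /signup"));

                // Records without a key leave partition assignment to the producer,
                // spreading load across partitions instead.
                producer.send(new ProducerRecord<>("page-views", "anonymous view"));
            }
        }
    }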

Single-Node Hadoop 3 Installation

Mark Litwintschik has a fairly simple guide for installing Hadoop 3 on a single node for testing:

This post is meant to help people explore Hadoop 3 without feeling the need they should be using 50+ machines to do so. I’ll be using a fresh installation of Ubuntu 16.04.2 LTS on a single computer. The machine has an Intel Core i5-7300HQ CPU clocked at 2.50GHz, 8 GB of RAM and a 200 GB mechanical disk drive. I intentionally picked a low end machine to demonstrate not much is needed to try out Hadoop in a learning exercise.

Please do be mindful that these instructions are aimed at building a test environment that is cut off from the outside world. Beyond the fact that this is a single-machine installation for software which is meant to run on multiple machines, there would need to be significant content changes to turn these instructions into production installation notes.

It’s a useful guide if you’re not interested in going with one of the platform vendors like Hortonworks or Cloudera.

Reproducibility And ML Projects

Pete Warden explains some of the difficulties around reproducing ML models:

Why does this all matter? I’ve had several friends contact me about their struggles reproducing published models as baselines for their own papers. If they can’t get the same accuracy that the original authors did, how can they tell if their new approach is an improvement? It’s also clearly concerning to rely on models in production systems if you don’t have a way of rebuilding them to cope with changed requirements or platforms. At that point your model moves from being a high-interest credit card of technical debt to something more like what a loan-shark offers. It’s also stifling for research experimentation; since making changes to code or training data can be hard to roll back it’s a lot more risky to try different variations, just like coding without source control raises the cost of experimenting with changes.

It’s not all doom and gloom, there are some notable efforts around reproducibility happening in the community. One of my favorites is the TensorFlow Benchmarks project Toby Boyd’s leading. He’s made it his team’s mission not only to lay out exactly how to train some of the leading models from scratch with high training speed on a lot of different platforms, but also ensures that the models train to the expected accuracy. I’ve seen him sweat blood trying to get models up to that precision, since variations in any of the steps I listed above can affect the results and there’s no easy way to debug what the underlying cause is, even with help from the authors. It’s also a never-ending job, since changes in TensorFlow, in GPU drivers, or even datasets, can all hurt accuracy in subtle ways. By doing this work, Toby’s team helps us spot and fix bugs caused by changes in TensorFlow in the models they cover, and chase down issues caused by external dependencies, but it’s hard to scale beyond a comparatively small set of platforms and models.

I see two separate problems:  reproducing the process and reproducing the result.  Reproducing the process is why you want to use something like notebooks:  it’s a proof that you (and others!) can generate the same type of model the same way multiple times.  Reproducing the result is harder given the stochastic nature of ML, but if you’re following the same process, you’re at least more likely to end up close to the same result.
