Press "Enter" to skip to content

Curated SQL Posts

Compression On Temporal Tables

Daniel Janik notes that the history tables SQL Server generates for system-versioned temporal tables automatically use page-level compression:

At first I was a bit puzzled. I noticed that the system-generated table was consistently smaller than my user-created table. It was not only smaller; it was half the size!

I did some further testing on my Surface this weekend and here’s what I found:

— Side note:  I use Person.Address a lot in demos, so I decided to create a new table to test with in hopes of not breaking any other demos I do regularly.

I think this is a good decision for a default, but if you are unable to support page-level compression for some reason, there’s a workaround:  create your history table beforehand.
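
As a rough sketch of that workaround (the table and column names here are illustrative, not from Daniel's post), you create the history table yourself with whatever storage settings you want, then point SYSTEM_VERSIONING at it:

-- Create the history table first, with no compression specified
-- (its schema must mirror the temporal table, minus constraints).
CREATE TABLE dbo.AddressHistory
(
    AddressID   INT          NOT NULL,
    AddressLine NVARCHAR(60) NOT NULL,
    ValidFrom   DATETIME2    NOT NULL,
    ValidTo     DATETIME2    NOT NULL
);

-- Then reference it when enabling system versioning, so SQL Server
-- uses this table instead of generating a page-compressed one.
CREATE TABLE dbo.Address
(
    AddressID   INT          NOT NULL PRIMARY KEY,
    AddressLine NVARCHAR(60) NOT NULL,
    ValidFrom   DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo     DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AddressHistory));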


Who Is Active Update

Adam Machanic has an update to sp_whoisactive:

Four and a half years have flown by since I released sp_whoisactive version 11.11.

It’s been a pretty solid and stable release, but a few bug reports and requests have trickled in. I’ve been thinking about sp_whoisactive v.Next — a version that will take advantage of some newer SQL Server DMVs and maybe programmability features, but in the meantime I decided to clear out the backlog on the current version.

Given that I have three keyboard shortcuts dedicated to sp_whoisactive, you know I’m excited.  Adam also has a new domain for the product.
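
If you have never run it, the simplest call takes no parameters at all, and a couple of commonly used options (both present in the 11.x releases) dress up the output:

-- One row per active request, with wait and blocking information.
EXEC dbo.sp_WhoIsActive;

-- A common variant:
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,           -- include the query plan for each request
    @find_block_leaders = 1;  -- flag sessions at the head of a blocking chain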


Azure Data Lake Updates

Michael Rys has the October updates for Azure Data Lake:

We seem to be just cranking out new stuff :). Here are the October 2016 Updates for Azure Data Lake U-SQL!

The main takeaway is that the October refresh has now removed the old, deprecated syntax for the items we have announced over the last couple of release notes!

Thanks to those who volunteered to test the new version of the more scalable file set. Please contact us if you want to try it and help us validate it.

Click through for the release notes.


Machine Learning Algorithms In R

Ginger Grant has a list of machine learning algorithms and their implementations in R:

Oftentimes, determining which algorithm to use can take a while.  Here is a pretty good flowchart for determining which algorithm should be used, given some examples of what the desired outcomes and data contain.  The diagram lists the algorithms which are implemented in Azure ML.  The same algorithms can be implemented in R, where there are libraries to help with nearly every task.  Here’s a list of libraries, with their accompanying links, which can be used in machine learning.  This list is by no means comprehensive, as there are libraries and functions other than the ones listed here, but if you are trying to write a machine learning experiment in R and are looking at the flowchart, these R functions and libraries will provide the tools to do the types of machine learning analysis listed.

I think algorithm determination is one of the most difficult parts of machine learning.  Even if you don’t mean to go there, the garden of forking paths is dangerous.


Custom Sorts

Rob Farley looks at ways of sorting data more efficiently:

Another option, which is more longwinded (some might suggest that would suit me – and if you thought that: Oi! Don’t be so rude!), and uses more reads, is to consider what we’d do in real life if we needed to do this.

If I had a pile of 73,595 orders, sorted in Salesperson order, and I needed to return them with a particular Salesperson first, I wouldn’t disregard the order they were in and simply sort them all. I’d start by diving in and finding the ones for Salesperson 7 – keeping them in the order they were in. Then I’d find the ones that weren’t Salesperson 7 – putting them next, and again keeping them in the order they were already in.

My first inclination is to think that this is a fragile solution—what about parameterization?  Will that affect the execution plan in unexpected ways?  I like the approach, however, and will have to add it to the toolbox for those cases in which it makes sense.
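
In rough T-SQL terms, Rob's idea looks something like the following. I'm assuming WideWorldImporters-style names here rather than quoting his code, and note that the UNION ALL form technically needs its own ORDER BY to guarantee sequence; the point is that each branch can read an index on the salesperson column in the order it already maintains:

-- The straightforward version: sorts all 73,595 rows.
SELECT OrderID, SalespersonPersonID
FROM Sales.Orders
ORDER BY
    CASE WHEN SalespersonPersonID = 7 THEN 0 ELSE 1 END,
    SalespersonPersonID;

-- The "real life" version: two seeks against an index on
-- SalespersonPersonID, each returning rows already in order.
SELECT OrderID, SalespersonPersonID
FROM Sales.Orders
WHERE SalespersonPersonID = 7

UNION ALL

SELECT OrderID, SalespersonPersonID
FROM Sales.Orders
WHERE SalespersonPersonID <> 7;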


Kafka Consumer

I build a consumer and aggregator of Kafka data:

From here, I hook into the OnMessage event just like before and, like before, decode the Kafka payload and turn it into a string.  Unlike before, however, I call Newtonsoft’s DeserializeObject method and return a Flight type, which I’ve defined above.  This is the same definition as in the Producer, so in a production-quality environment, I’d pull that out to a single location rather than duplicating it.

Going back to the main function, I call the consumer.Start() method and let ‘er rip.  When I’m ready to aggregate, I’ll hit the enter key and that’ll call consumer.Stop().  When that happens, I’m going to have up to 7 million records in a list called flights.  Out of all of this information, I only need two attributes:  the destination state and the arrival delay in minutes.  I get those by using the map function on my sequence of flights, taking advantage of F#’s match syntax to get all relevant scenarios safely and put the result into a tuple.  The resulting sequence of tuples is called flightTuple.  I pass that into the delaysByState function.

By the time I give this presentation, I’m going to change the way I aggregate just a little bit to cut down on the gigs of RAM necessary to do this operation.  But hey, at least it works…
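
The aggregation itself lives in F#, but if it helps to picture what delaysByState computes, the relational equivalent over a hypothetical dbo.Flights table (my names, not from the talk) would be something like:

-- Group ~7 million flight records by destination state and
-- aggregate the arrival delays; AVG is one plausible choice here.
SELECT
    DestinationState,
    COUNT(*) AS NumberOfFlights,
    AVG(CAST(ArrivalDelayMinutes AS DECIMAL(9, 2))) AS AvgDelayMinutes
FROM dbo.Flights
GROUP BY DestinationState
ORDER BY AvgDelayMinutes DESC;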


Computed Columns And Indexes

Derik Hammer looks at computed columns with a bonus section on unique indexes (or maybe vice versa):

What can an index do that a constraint cannot?

  • Set FILL FACTOR.
  • Add non-key (INCLUDED) columns.
  • Set data compression.
  • Configure lock management settings.

What can a constraint do that an index cannot?

The only potential benefit I was able to find, or think of, is that constraints can be disabled. If you could enable and disable a unique constraint, that would be one feature the unique index does not offer. This is counter-intuitive knowing that the unique constraint is enforced with a unique index behind the scenes.

For me, the big benefit of unique constraints over unique indexes is that they provide a logical separation.  Unique constraints show up in the Keys section in Management Studio and let you say, “Yes, I made this thing because the data model requires uniqueness here.”  By contrast, a unique index can tell you the same thing, but it could also say “for the subset of data in a filtered index, I can assume uniqueness” or “for performance reasons, this combination is unique, but its uniqueness is not important to the logical data model.”  I will happily admit that I’m splitting the hair pretty thin on this one and that, in practice, there are benefits to using unique indexes across the board.
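
To put the two side by side, here is a sketch with illustrative names, pairing a unique constraint against a unique index that uses the options from Derik's list:

-- Unique constraint: a logical statement about the data model,
-- shown under Keys in Management Studio.
ALTER TABLE dbo.Customers
    ADD CONSTRAINT UQ_Customers_EmailAddress UNIQUE (EmailAddress);

-- Unique index: the same enforcement mechanism, but with the
-- index-only options available.
CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_EmailAddress
    ON dbo.Customers (EmailAddress)
    INCLUDE (CustomerName)           -- non-key (INCLUDED) columns
    WITH (FILLFACTOR = 90,           -- fill factor
          DATA_COMPRESSION = PAGE,   -- data compression
          ALLOW_PAGE_LOCKS = OFF);   -- lock management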
