Press "Enter" to skip to content

Author: Kevin Feasel

Stop Using Datetime(?)

Kenneth Fisher says to stop using datetime and start using date, time, and datetime2(x):

Why is everyone still using the DateTime datatype exclusively?

Back in SQL 2008 we gained a whole new range of date/time datatypes. Isn’t it about time we started to use them?

In my experience, most of the issue is supporting legacy app code which chokes on these types.  You’d think people would have updated that .NET 2.0 code, but not always.
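
As a quick illustration of what the newer types buy you (the table and column names here are purely for demonstration):

    -- datetime: 8 bytes, ~3.33 ms precision, range starts at 1753-01-01.
    -- The 2008-era types are smaller, more precise, and cover more dates:
    CREATE TABLE dbo.DateTypeDemo
    (
        EventDate date NOT NULL,           -- 3 bytes, date only, 0001-9999
        EventTime time(0) NOT NULL,        -- 3 bytes, time only, 1 s precision
        EventMoment datetime2(3) NOT NULL  -- 7 bytes, 1 ms precision
    );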


Why Virtualize?

David Klee answers why you might still want to virtualize a single-instance SQL Server which resides on a single host:

That’s a wonderful question, and I get asked this all the time.

I can justify the desire for virtualization in the scenario you described. There are a number of reasons to consider virtualization given those constraints.

Virtualize everything, as Klee suggests.  The worst case is that administration gets slightly more complex, but the advantages are worth it.


Installing And Using SQL Server R Services

I have three blog posts on installing and using R in SQL Server.

First, installing SQL Server R Services:

I’m excited that CTP 3 of SQL Server 2016 is publicly available, in no small part because it is our first look at SQL Server R Services.  In this post, I’m going to walk through installing Don’t-Call-It-SSRS on a machine.
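
If you want to verify the installation afterward, a quick smoke test is to enable external scripts and round-trip a row through R.  This is a sketch assuming sysadmin rights and that the CTP's configuration option names hold steady:

    -- Enable external scripts; the Launchpad service must also be running.
    EXEC sp_configure 'external scripts enabled', 1;
    RECONFIGURE;

    -- Round-trip one row through R.  If this returns 1, R Services works.
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'OutputDataSet <- InputDataSet',
        @input_data_1 = N'SELECT 1 AS ok;';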

Then, using RODBC to connect a Linux machine with RStudio installed to a SQL Server instance:

Getting a Linux machine to talk to a SQL Server instance is harder than it should be.  Yes, Microsoft has a Linux ODBC driver and some easy setup instructions…if you’re using Red Hat or SuSE.  Hopefully this helps you get connected.

If you’re using RStudio on Windows, it’s a lot easier:  create a DSN using your ODBC Data Sources.

Finally, using SQL Server R Services:

So, what’s the major use of SQL Server R Services?  Early on, I see batch processing as the main driver here.  The whole point of getting involved with Revolution R is to create server-quality R, so imagine a SQL Agent job which runs this procedure once a night against some raw data set.  The R job could build a model, process that data, and return a result set.  You take that result set and feed it into a table for reporting purposes.  I’d like to see more uses, but this is probably the first one we’ll see in the wild.
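
To make that concrete, here is a rough sketch of what such a nightly procedure might look like.  The table, columns, and model are all hypothetical; the pattern is what matters:

    -- Hypothetical nightly scoring job: fit a simple model in R and
    -- return the scored rows for loading into a reporting table.
    CREATE PROCEDURE dbo.ScoreDailySales
    AS
    BEGIN
        EXEC sp_execute_external_script
            @language = N'R',
            @script = N'
                model <- lm(Sales ~ DayOfWeek, data = InputDataSet)
                InputDataSet$Predicted <- predict(model, InputDataSet)
                OutputDataSet <- InputDataSet',
            @input_data_1 = N'SELECT Sales, DayOfWeek FROM dbo.DailySales;'
            WITH RESULT SETS ((Sales float, DayOfWeek int, Predicted float));
    END;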

It’s a preview of a V1 product.  Keep that in mind.

The first and third posts are for CTP 3, so beware the time-sensitive material warnings.


T-SQL Tuesday #72

Mickey Stuewe hosted T-SQL Tuesday this month.  Her topic:  data modeling gone wrong.  A few choice posts follow.

Mickey herself looks at a case in which surrogate keys didn’t quite do the trick:

One of the problems I’ve seen with careless use of surrogate keys is the duplication of natural keys. Quite often it’s overlooked that the natural key still needs to have a unique constraint. Without it, the reporting team ends up having to use MAX or DISTINCT to get the latest instance of the natural key, or SSIS packages are needed to clean up the duplicates. This can be compounded with many-to-many tables.

Surrogate keys are not replacements for natural keys; they are physical implementation mechanisms to make your life easier.
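
In table form, the advice is as simple as keeping a unique constraint on the natural key alongside the surrogate (names here are illustrative):

    -- The surrogate key handles joins; the unique constraint keeps the
    -- natural key honest and prevents duplicates from sneaking in.
    CREATE TABLE dbo.Customer
    (
        CustomerID int IDENTITY(1, 1) NOT NULL
            CONSTRAINT PK_Customer PRIMARY KEY,
        CustomerNumber varchar(20) NOT NULL
            CONSTRAINT UQ_Customer_CustomerNumber UNIQUE
    );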

Rob Farley wants you to think about design and whether your warehouse is built in a way that helps the business:

Many data professionals look at a data warehouse as a platform for reporting, built according to the available data sources. I disagree with this.

The models within a data warehouse should describe the business. If they don’t, it’s a data model gone wrong.

What is the central thing that your business does? What is the main interest point? What do you need to look after? For me, this forms the core of the warehouse.

Thomas Rushton says to name your stuff right.  Picking the right name can be difficult.  “Field1” probably isn’t the right name, though.


Back Those Things Up

Brian Krebs reminds us to back up all the things:

The tools for securely backing up computers, Web sites, data, and even entire hard drives have never been more affordable and ubiquitous. So there is zero excuse for not developing and sticking with a good backup strategy, whether you’re a home user or a Web site administrator.

PC World last year published a decent guide for Windows users who wish to take advantage of the OS’s built-in backup capabilities. I’ve personally used Acronis and Macrium products, and find both do a good job making it easy to back up your rig. The main thing is to get into a habit of doing regular backups.

There are good guides all over the Internet showing users how to securely back up Linux systems (here’s one). Other tutorials are more OS-specific. For example, here’s a sensible backup approach for Debian servers. I’d like to hear from readers about their backup strategies — what works — particularly from those who maintain Linux-based Web servers like Apache and Nginx.

This article doesn’t directly relate to SQL Server, but it does act as a nice reminder:  go make sure you have good backups.  Of everything.
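
And while you’re at it, verify that those backups actually restore.  A minimal pattern (the path and database name are placeholders):

    -- Checksummed, compressed backup, then verify it without restoring.
    BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase.bak'
        WITH CHECKSUM, COMPRESSION, INIT;

    RESTORE VERIFYONLY
        FROM DISK = N'D:\Backups\MyDatabase.bak'
        WITH CHECKSUM;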


Learning R

Jen Stirrup has started a new series on getting started with R.  First, installing R:

First up, what do you need to know about SQL Server installation with R? The installation sequence is well documented here. However, if you want to make sure that the R piece is installed, then you will need to make sure that you do one thing: tick the Advanced Analytics Extension box.

Her next post covers language basics in contrast to SQL Server:

There are similarities and differences between SQL and R, which might be confusing. However, I think it can be illuminating to understand these similarities and differences, since it tells you something about each language. I got this idea from one of the attendees at PASS Summit 2015, and my kudos and thanks go to her. I’m sorry I didn’t get her name, but if you see this you will know who you are, so please feel free to leave a comment so that I can give you a proper shout out.

I’m looking forward to the rest of this series.


Red Gate SQL Monitor On Azure VMs

Thomas Rushton has a post on VLAN rules necessary to get Red Gate SQL Monitor to work in an environment running on Azure VMs:

Our basic architecture was:

  • Multiple VLANs containing SQL Servers to be monitored
  • VLAN containing the monitoring server

Probably not the best for what we were wanting to do, but you work with what you’re given. I installed SQL Monitor, fired it up, and nothing worked.

After much trial and error, and a lot of network monitoring by a very enthusiastic young infrastructure guy, here are the inbound rules that we needed to put in place on each SQL Server VLAN to get this working.

Note that this is Azure IaaS, not Azure SQL Database.


Buck Woody On R & Python

Buck Woody’s back to blogging, and his focus is data science.  Over the past month, he’s looked at R and Python.

First, on installing R:

In future notebook entries we’ll explore working with R, but for now, we need to install it. That really isn’t that difficult, but it does bring up something we need to deal with first. While the R environment is truly amazing, it has some limitations. Its most glaring issue is that the data you want to work with is loaded into memory as a frame, which of course limits the amount of data you can process for a given task. It’s also not terribly suited for parallelism – many things are handled as in-line tasks. And if you use a package in your script, you have to ensure others load that package, and at the right version.

Enter Revolution Analytics – a company that changed R to include more features and capabilities to correct these issues, along with a few others. They have a great name in the industry, bright people, and great products – so Microsoft bought them. That means the “RRE” engine they created is going to start popping up in all sorts of places, like SQL Server 2016, Azure Machine Learning, and many others. But the “stand-alone” RRE products are still available, and at the current version. So that’s what we’ll install.

Also on installing and getting started with Python:

Python has some distinct differences that make it attractive for working in data analytics. It scales well, is fairly easy to learn and use, has an extensible framework, has support for almost every platform around, and you can use it to write extensive programs that work with almost any other system and platform.

R and Python are the two biggest languages in this slice of the field, and you’ll gain a lot from learning at least one of these languages.


Hash Join Performance

Paul White wrote something.  That’s good enough to get tagged here:

It might be tempting to ask why the optimizer does not routinely add null-rejecting filters prior to equality joins. One can only suppose that this would not be beneficial in enough common cases. Most joins are not expected to encounter many null = null rejections, and adding predicates routinely could quickly become counter-productive, particularly if many join columns are present. For most joins, rejecting nulls inside the join operator is probably a better option (from a cost model perspective) than introducing an explicit Filter.

It does seem that there is an effort to prevent the very worst cases from manifesting through the post-optimization rewrite designed to reject null join rows before they reach the build input of a hash join. It seems that an unfortunate interaction exists between the effect of optimized bitmap filters and the application of this rewrite. It is also unfortunate that when this performance problem does occur, it is very difficult to diagnose from the execution plan alone.

I’m only on reading #2 of the post, so I’ve got a few more things yet to learn from it.
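
The short version of the workaround, if you do hit the problem, is to reject the nulls yourself before the join.  A sketch against hypothetical tables:

    -- Explicit null rejection: without these predicates, null keys can
    -- flow into the hash join's build input and inflate its cost, even
    -- though the equality predicate would discard them anyway.
    SELECT f.OrderKey, d.Attribute
    FROM dbo.FactOrders AS f
    INNER JOIN dbo.DimThing AS d
        ON d.NullableKey = f.NullableKey
    WHERE f.NullableKey IS NOT NULL
        AND d.NullableKey IS NOT NULL;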


Get Started With U-SQL

Microsoft is pushing U-SQL pretty hard.  Here’s a tutorial by Jonathan Gao to whet your appetite:

U-SQL is a language that unifies the benefits of SQL with the expressive power of your own code to process all data at any scale. U-SQL’s scalable distributed query capability enables you to efficiently analyze data in the store and across relational stores such as Azure SQL Database. It enables you to process unstructured data by applying schema on read and inserting custom logic and UDFs, and it includes extensibility to enable fine-grained control over how to execute at scale. To learn more about the design philosophy behind U-SQL, please refer to this Visual Studio blog post.

You do need Data Lake Tools for Visual Studio, but it looks like you can run it locally.

The VS blog had something a month ago on the topic.  I’m not saying get on it…yet…
