Press "Enter" to skip to content


Business Use Cases For Temporal Tables

Bert Wagner walks through some of the business cases for using temporal tables:

I know this query seems lame — it’s just a SELECT FROM statement. There are no FOR SYSTEM_TIME clauses, no WHERE statements, and no other interesting T-SQL features.

But that’s the point! Have you ever had to get the “current” rows out of a table that is keeping track of all transactions? I’m sure it involved some GROUP BY statements, some window functions, and more than a few cups of coffee.

Temporal tables automatically manage your transaction history, providing the most current records in one table (dbo.CarInventory) and all of the historical transactions in another (dbo.CarInventoryHistory). No need for complicated queries.

Temporal tables definitely have their uses.  At present, those uses are primarily around versioned fact data.
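
To make that concrete, here is a minimal sketch of a system-versioned table patterned after Bert's dbo.CarInventory example (the non-period columns are illustrative, not his actual schema):

-- Minimal system-versioned temporal table; columns other than
-- the period columns are illustrative.
CREATE TABLE dbo.CarInventory
(
    CarId INT IDENTITY(1,1) PRIMARY KEY,
    InLot BIT NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));

-- Current rows only: no GROUP BY, no window functions.
SELECT CarId, InLot
FROM dbo.CarInventory;

-- And when you do want history, FOR SYSTEM_TIME does the work.
SELECT CarId, InLot
FROM dbo.CarInventory
FOR SYSTEM_TIME AS OF '2017-06-01';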


Using Python In SQL Server 2017

Prashanth Jayaram walks through installing SQL Server Machine Learning Services and running Python external scripts in SQL Server 2017:

Python supports a limited number of data types in comparison to SQL Server. As a result, whenever you use data from SQL Server in Python scripts, the data might be implicitly converted to a type compatible with Python. However, often an exact conversion cannot be performed automatically, and an error is returned. This table lists the implicit conversions that are provided. Other data types are not supported.

This article will get you started, and from there, the wide world of Anaconda awaits you.
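
If you want a quick smoke test, something like the following should work, assuming Machine Learning Services is installed and you have permission to enable external scripts:

-- One-time setup: allow external scripts to run.
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Round-trip a single-row result set through Python. The INT
-- column survives the implicit type conversion in both directions.
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'OutputDataSet = InputDataSet',
    @input_data_1 = N'SELECT CAST(1 AS INT) AS Val'
WITH RESULT SETS ((Val INT));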


OPENROWSET On Linux

Steve Jones shows how to use the OPENROWSET command to bulk load data into SQL Server on Linux:

I wanted to import the million song dataset in SQL Server on Linux. There’s a GitHub repo that has the SQL to allow you to use this with the graph database features. However, it’s built for Windows.

Linux is a slightly different beast. Once I started down this path, I had memories of working on SunOS in college, messing with permissions and moving files.

I run Ubuntu in VMWare, so I first downloaded the files to my Documents folder. That’s pretty easy. However, once there, the mssql user can’t read them. Rather than mess with permissions for my home, I decided to move these to a location where the mssql user could read them.

Much of the post is about file permissions.  This is because SQL on Linux is SQL on Windows, and that’s a glorious thing.
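
Once the files live somewhere the mssql service account can read, the T-SQL itself is the same as on Windows. A rough sketch with hypothetical table and file names:

-- Hypothetical names; the Linux-specific part is the path, which
-- must be readable by the mssql service account (e.g., somewhere
-- under /var/opt/mssql rather than your home directory).
INSERT INTO dbo.Songs
SELECT s.*
FROM OPENROWSET(
    BULK '/var/opt/mssql/import/songs.csv',
    FORMATFILE = '/var/opt/mssql/import/songs.fmt'
) AS s;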


Persisting Containerized Data

Andrew Pruski has started a series on persisting data in Docker containers.  He starts off the series with an easy method of keeping data around after you delete the container:

Normally when I work with SQL instances within containers I treat them as throw-away objects. Any modifications that I make to the databases within will be lost when I drop the container.

However, what if I want to persist the data that I have in my containers? Well, there are options to do just that. One method is to mount a directory from the host into a container.

Full documentation can be found here but I’ll run through an example step-by-step here.

Statefulness has been a tough nut to crack for containers.  I’m interested in seeing what Andrew comes up with.
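
The host-mount method boils down to a single docker run flag. A minimal sketch using the 2017-era image name (the password and host path are placeholders):

# Mount a host directory over /var/opt/mssql so the databases
# survive dropping the container. Names and paths are illustrative.
docker run -d --name sql1 \
    -e 'ACCEPT_EULA=Y' \
    -e 'SA_PASSWORD=P@ssw0rd123!' \
    -p 1433:1433 \
    -v /docker/sqlserver:/var/opt/mssql \
    microsoft/mssql-server-linux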


The Importance Of A Test Environment

Randolph West explains why it’s important to have a test environment separate from your development and production environments:

Some companies I’ve worked with have different forms of testing environments, including QA (Quality Assurance), IAT (Internal Acceptance Testing), and UAT (User Acceptance Testing). What they are called doesn’t matter, so long as they exist.

In a typical IT deployment, whether using Waterfall, Agile, or other development methodologies of the month, it pays to have a basic development → test → production deployment path.

Randolph explains it in some detail, but one of the big benefits for me is that you can make sure the deployment process works before deployment time.  Knowing that your checked-in scripts won’t break the deployment (because they didn’t break the CI build and release) makes the release process a lot less stressful.


Automatic Processing Of Azure Analysis Services Models

Dustin Ryan shows how to use Azure Functions to refresh Azure Analysis Services models:

Download the latest client libraries for Analysis Services. This needs to be done on your local machine so you can then copy these files to your Azure Function App.

After you’ve downloaded the client libraries, the DLLs can be found in C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies. The two files you need are:

C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.AnalysisServices.Core.DLL
C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\Microsoft.AnalysisServices.Tabular.DLL

This step is important because the documentation in Azure references the 130 assemblies, which will not work. You need the 140 assemblies; otherwise, you’ll get errors.

Dustin walks through the whole process of setting up an Azure Function step by step.
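
At the heart of such a function is a small Tabular Object Model call against those 140 assemblies. A hedged C# fragment (server, database, and credentials are placeholders, not Dustin's exact code):

// Connect with the Tabular client libraries and request a full
// refresh of the model. Connection details are placeholders.
using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("asazure://<region>.asazure.windows.net/<server>;User ID=<user>;Password=<password>");

Database db = server.Databases["MyModel"];
db.Model.RequestRefresh(RefreshType.Full);
db.Model.SaveChanges();
server.Disconnect();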


Finding Candidates For Memory-Optimized Tables

Ned Otter points out a very interesting report in SSMS 2016 and 2017, which helps you determine if you should migrate a table to be memory-optimized:

The chart attempts to display both the best candidates and the degree of difficulty for migration. But there are a few problems with the “difficulty” rating of this internal query, and as a result, if we really want to know how easy/difficult the process might be, we’ll have to roll our own query.

Read on for more details, as well as a script Ned has put together to do the same in T-SQL.
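
For a flavor of what rolling your own might look like (this is not Ned's query, just a trivial starting point), you could flag one common migration blocker, LOB columns:

-- Not Ned's actual query: a minimal check for user tables with
-- LOB columns (max_length = -1), one common In-Memory OLTP
-- migration blocker.
SELECT t.name AS TableName,
       MAX(CASE WHEN c.max_length = -1 THEN 1 ELSE 0 END) AS HasLobColumn
FROM sys.tables AS t
JOIN sys.columns AS c
    ON c.object_id = t.object_id
GROUP BY t.name
ORDER BY t.name;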
