Press "Enter" to skip to content

Author: Kevin Feasel

More On Temporal Tables

Newly-minted Canadian citizen Randolph West’s latest article is on modifying temporal tables:

Last week I demonstrated how temporal tables in SQL Server 2016 work. If you have implemented a history table—populating it with triggers or stored procedures—it works the same way.

This week, we are going to look at how to modify a temporal table that already has data in the history table.

If you’re using temporal tables for auditing, it’s important to know that yes, data can be modified.
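
Since the history table is only system-managed while versioning is on, the general pattern for modifying it looks something like this hedged sketch (dbo.Employee and dbo.EmployeeHistory are hypothetical names, and this is the general technique rather than Randolph's exact script):

-- Temporarily detach system versioning so the history table becomes a plain table.
ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = OFF);

-- Fix the offending history row(s).
UPDATE dbo.EmployeeHistory
SET Salary = 50000.00
WHERE EmployeeID = 42
  AND ValidTo < '2016-01-01';

-- Re-attach versioning; DATA_CONSISTENCY_CHECK re-validates the period columns.
ALTER TABLE dbo.Employee
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory, DATA_CONSISTENCY_CHECK = ON));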

Reza Rad also has a recent blog post on temporal tables:

Temporal tables are a new type of database table introduced in SQL Server 2016. These tables are system-versioned and keep a history of the changes (inserts, updates, deletes) that happen to data rows. Retrieving the change log from these tables is easy, and they can simply tell you what the data in the table looked like at a specific point in time. These tables work with datetime2 columns to keep FROM DATE and TO DATE information for each change. This means these tables can be used for implementing changes in dimensions; yes, you know what that's called: Slowly Changing Dimensions!

Slowly-changing dimensions in an OLTP scenario were the first use I could think of for temporal tables, so I’m glad Reza put this article together.
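
For anyone who has not seen the feature yet, here is a minimal sketch of a system-versioned table and a point-in-time query (all object names are hypothetical):

-- SQL Server maintains the datetime2 period columns and the history table automatically.
CREATE TABLE dbo.Product
(
    ProductID INT PRIMARY KEY,
    Price     DECIMAL(10, 2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- What did this row look like at a specific point in time?
SELECT ProductID, Price
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00'
WHERE ProductID = 1;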


Connecting To SQL Server From Linux

Erland Sommarskog has an article on ways in which you can connect to SQL Server from a Linux (or UNIX) box:

TDS — Tabular Data Stream — is the protocol that SQL Server talks with its clients. This is a proprietary protocol, owned by Microsoft (and Sybase, who have their own version). Nevertheless, there exists FreeTDS, which originally was a reverse-engineering effort of TDS. Now that Microsoft has published the TDS specification, the FreeTDS developers should be able to repair any cracks they may have. Check out the FreeTDS home page for further details. There appears to be a DBD::FreeTDS that goes along with it.

I used FreeTDS to connect to SQL Server from RStudio, so I endorse that method.


Happy Thanksgiving

Curated SQL will not be taking Thanksgiving off.  To compensate for the drop in blog posting activity that usually comes around Thanksgiving, I’ll link to a few classic articles.  That way, even if you’re in the office, you’ll have something to occupy that extra time.


Calculating Partition Sizes

Rolf Tesmer has a nice series on partitioning going. His latest entry involves calculating partition sizes in advance:

Sometimes (just sometimes) you need to calculate the size of your table partitions upfront, before you actually go to the pain and effort of partitioning (or repartitioning) a table.  Doing this helps with pre-sizing the database files in advance, instead of having them auto-grow many times over in small increments as you cut data over into the partitions.
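
The general idea can be approximated in a few lines of T-SQL; this is a hedged sketch rather than Rolf's actual script, with dbo.Sales, OrderDate, and yearly boundaries as hypothetical stand-ins:

-- Estimate rows and megabytes per prospective yearly partition,
-- scaling by the table's current average row size.
DECLARE @AvgRowBytes BIGINT =
(
    SELECT SUM(ps.used_page_count) * 8192 / NULLIF(SUM(ps.row_count), 0)
    FROM sys.dm_db_partition_stats AS ps
    WHERE ps.object_id = OBJECT_ID('dbo.Sales')
      AND ps.index_id IN (0, 1)   -- heap or clustered index only
);

SELECT YEAR(s.OrderDate)                     AS PartitionYear,
       COUNT_BIG(*)                          AS EstimatedRows,
       COUNT_BIG(*) * @AvgRowBytes / 1048576 AS EstimatedMB
FROM dbo.Sales AS s
GROUP BY YEAR(s.OrderDate)
ORDER BY PartitionYear;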

Check out the entire series.


Bypass SSDT Publish

Ed Elliott has just added another SSDT Dev Pack feature:

Open a document with a table-valued function or procedure, click Tools –> SSDT Dev Pack –> QuickDeploy, or use the keyboard options in SSDT to map Tools.QuickDeploy to a key combination (I use Ctrl+Q, Ctrl+D, but it is a little awkward), and your code will be deployed; any messages are added to the standard output window.

If the active document contains anything other than stored procedures or table-valued functions, those objects will not be deployed; also, if you have more than one procedure or function in the active document, all of them will be deployed.

Ed is quickly becoming one of my favorite bloggers.  It seems like every other day, he has a new tool available for us.


JSON Leads To New Wave Of 1NF Failures

Jovan Popovic talks about storing JSON in SQL Server:

Instead of a single JSON object, you can organize your data in this “collection”. If you do not want to explicitly check the structure of each JSON column, you don’t need to add a JSON check constraint on every column (in this example I have added a CHECK constraint only on the EmailAddresses column).

If you compare this structure to a standard NoSQL collection, you might notice that you will have faster access to the strongly typed data (FirstName and LastName). Therefore, this solution is a good choice for hybrid models where you can identify some information that is repeated across all objects, while other, variable information can be stored as JSON. This way, you can combine flexibility and performance.
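
Here is a minimal sketch of the hybrid design Jovan describes (the table itself is hypothetical; ISJSON and JSON_VALUE are the relevant SQL Server 2016 functions):

-- Strongly typed columns for the common attributes, plus a JSON column
-- validated by a CHECK constraint for the variable ones.
CREATE TABLE dbo.Person
(
    PersonID       INT IDENTITY(1, 1) PRIMARY KEY,
    FirstName      NVARCHAR(50) NOT NULL,
    LastName       NVARCHAR(50) NOT NULL,
    EmailAddresses NVARCHAR(MAX) NULL
        CONSTRAINT CK_Person_EmailAddresses CHECK (ISJSON(EmailAddresses) = 1)
);

INSERT INTO dbo.Person (FirstName, LastName, EmailAddresses)
VALUES (N'Jane', N'Doe', N'[{"Address":"jane@example.com","IsPrimary":true}]');

-- The JSON payload remains queryable.
SELECT PersonID, FirstName,
       JSON_VALUE(EmailAddresses, '$[0].Address') AS PrimaryEmail
FROM dbo.Person;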

Okay, we’ve hit my first major problem with JSON support:  rampant violation of first normal form.  You can create check constraints on JSON code, and that’s pretty snazzy I guess, but I know a better way to store relational data in a relational database system.  JSON support is great when you ask SQL Server to be a holder of text blobs, but this is begging for bad design decisions.


Highlight Expensive Queries

Ed Elliott has another tool in his SSDT DevPack:

When you enable the query cost for a document (I map it to the keys Ctrl+K, Ctrl+Q), the tool connects to a SQL Server instance and runs the stored procedure using “SET SHOWPLAN_XML ON”, so the procedure isn’t actually executed; instead, the estimated query plan is returned and the cost of each statement is checked to see how high it is.

By default, statements must have a cost over 1.0 to be marked as high, and anything over 0.2 is marked as a warning – you can override these values in “%UsersProfile%\SSDTDevPack\config.xml”.
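
The mechanism itself is plain T-SQL; here is a hedged sketch of what the tool effectively does under the covers (the procedure name is hypothetical):

-- With SHOWPLAN_XML ON, the batch compiles but does not execute; the estimated
-- plan, including each statement's subtree cost, comes back as an XML result set.
SET SHOWPLAN_XML ON;
GO

EXEC dbo.SomeProcedure @SomeParameter = 1;
GO

SET SHOWPLAN_XML OFF;
GO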

You can quibble with the cost values but this is a really cool feature.


Storage Spaces Direct

Glenn Berry introduces us to Storage Spaces Direct:

One of the more exciting new features in Windows Server 2016 is called Storage Spaces Direct (S2D), which enables organizations to use multiple, clustered commodity file server nodes to build highly available, scalable storage systems with local storage, using SATA, SAS, or PCIe NVMe devices. You can use internal drives in each storage node, or direct-attached disk devices using “Just a Bunch of Disks” (JBOD) where each JBOD is only connected to a single storage node. This eliminates the previous requirement for a shared SAS fabric and its complexities (which was required with Windows Server 2012 R2 Storage Spaces and SOFS), and also enables using less expensive storage devices such as SATA disks.

This sounds like a technology with interesting potential, and not requiring SAS disks will make it more likely to be adopted.


DAX Performance

Bill Anton reminds me that I don’t know a thing about DAX:

As you can see, using DAX variables is a much better solution than using the aliases…the performance improvement is about the same; however, variables can be wrapped up in the calculated measures inside the model, allowing us to take advantage of the performance gain with all tools (not just those allowing us to hand-craft the DAX queries).

The query used in this post is too simple to highlight the performance benefit (small dataset, simple calculation)…but it did make it easier to cruise the query plans and SE requests. In reality, a better use case for highlighting the performance benefits of these optimizations is a query that hammers the Formula Engine (FE).

Interesting stuff, even for someone with no knowledge of DAX.
