Press "Enter" to skip to content

Category: Temporal Tables

Temporal Memory-Optimized Tables

Ned Otter describes how hybrid disk + memory-optimized temporal tables differ from on-disk temporal tables:

As changes are made to rows in the temporal memory-optimized table, before being transferred to the history table on disk, they are first migrated to an internal memory-optimized staging table. That means when you query the “history table”, you could be retrieving rows from both the on-disk history table, and internal staging table. Because no custom indexing was possible on the internal staging table, there could be performance implications when executing queries against historical data. Microsoft addressed these potential performance issues in SQL 2016 SP1 (detailed in this CAT blog post).

The internal staging table only gets flushed to the on-disk history table when it reaches 8% of the size of the temporal table. Given the current capacities of Windows Server 2016 (24TB memory), it’s now possible to have very large memory-optimized tables. 8% of one of those large memory-optimized tables could be quite large, which will affect query performance, if predicates don’t match available indexes.

Read on for some sobering thoughts on the topic.
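
For reference, here is roughly what such a table looks like; this is a minimal sketch with made-up names, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

-- Hypothetical table: current rows live in memory, history lives on disk.
CREATE TABLE dbo.SensorReading
(
    SensorReadingID BIGINT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    SensorID INT NOT NULL,
    ReadingValue DECIMAL(18,4) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA,
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SensorReadingHistory)
);

Rows that age out of dbo.SensorReading pass through the hidden in-memory staging table Ned describes before landing in the disk-based dbo.SensorReadingHistory.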


Using Temporal Tables For SCD2

I have a post about some pain I experienced with temporal tables:

This query succeeds but returns results we don’t really want:

[Image: ProductModelTemporalSameDate]

This brings back all 9 records tied to products 1 and 2 (because product 3 didn’t exist on July 2nd at 8 AM UTC). But it gives us the same start and end date, so that’s not right. What I really want to do is replace @InterestingTime with qsp’s DatePredictionMade, so let’s try that:

[Image: ProductModelTemporalInvalid]

This returns a syntax error. It would appear that at the time FOR SYSTEM_TIME is resolved, QuantitySoldPrediction does not yet exist. This stops us dead in our tracks.

This is one of the two things I’d really like to change about temporal tables; the other thing (now that auto-retention is slated for release) is the ability to backfill data without turning off system versioning.
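
To make the limitation concrete, here is a rough sketch rather than the exact queries from my post (the column names are guesses): AS OF accepts a literal or a variable, but not a column reference from another table in the query.

DECLARE @InterestingTime DATETIME2 = '2016-07-02 08:00:00';

-- This works: AS OF with a variable.
SELECT pm.ProductModelID, pm.Name
FROM dbo.ProductModelTemporal FOR SYSTEM_TIME AS OF @InterestingTime AS pm;

-- This fails with a syntax error: AS OF cannot be correlated to
-- qsp.DatePredictionMade (or any other column) in the same query.
/*
SELECT pm.ProductModelID, pm.Name
FROM dbo.QuantitySoldPrediction qsp
    INNER JOIN dbo.ProductModelTemporal
        FOR SYSTEM_TIME AS OF qsp.DatePredictionMade AS pm
        ON pm.ProductModelID = qsp.ProductModelID;
*/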


Automatic Temporal Table Data Purging

Bert Wagner has warmed the cockles of my heart with this news:

The problem with temporal tables is that they produce a lot of data. Every row-level change stored in the temporal table’s history table quickly adds up, increasing the possibility that a low disk space warning is going to be sent to the DBA on-call.

With SQL Server 2017 CTP 3.0, Microsoft allows us to add a retention period to our temporal tables, making purging old data in a temporal table as easy as specifying:
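
The statement itself was an image in the original post; the feature in question is the HISTORY_RETENTION_PERIOD option, so the syntax looks something like this (using the CarInventory table from Bert’s other posts as a stand-in, and assuming system versioning is already on):

ALTER TABLE dbo.CarInventory
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 6 MONTHS));

SQL Server then cleans out history rows older than the retention period in the background; the unit can be DAYS, WEEKS, MONTHS, or YEARS.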

I’m in a situation where this will be very useful.


Business Use Cases For Temporal Tables

Bert Wagner walks through some of the business cases for using temporal tables:

I know this query seems lame — it’s just a SELECT FROM statement. There are no FOR SYSTEM_TIME clauses, no WHERE statements, and no other interesting T-SQL features.

But that’s the point! Have you ever had to get the “current” rows out of a table that is keeping track of all transactions? I’m sure it involved some GROUP BY statements, some window functions, and more than a few cups of coffee.

Temporal tables automatically manage your transaction history, providing the most current records in one table (dbo.CarInventory) and all of the historical transactions in another (dbo.CarInventoryHistory). No need for complicated queries.

Temporal tables definitely have their uses.  At present, those uses are primarily around versioned fact data.
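
A minimal sketch of that contrast, using the dbo.CarInventory table from the post (the column names here are my guesses):

-- Current state of the lot: a plain SELECT is all it takes.
SELECT CarId, [Year], Make, Model, InLot
FROM dbo.CarInventory;

-- Every change ever made to car #1, newest first, from the history.
SELECT CarId, InLot, SysStartTime, SysEndTime
FROM dbo.CarInventory FOR SYSTEM_TIME ALL
WHERE CarId = 1
ORDER BY SysStartTime DESC;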


Transactional Replication And Temporal Tables

Transactional replication found Drew Furgiuele’s little black book of T-SQL syntax and went ballistic when it found out Drew was seeing temporal tables on the side:

So let’s say you ran this script (or, maybe someone checked it in as a database change to production). For a while, things are great: you’re making changes to data on your publisher and things are flowing nicely to your subscribers. Sooner or later though, someone’s going to ask you to set up a new subscription (or maybe you need to reinitialize one). Let’s simulate that on my lab: we’re going to remove Person.Address from replication and we’re going to put it back, and then create a snapshot. The key difference here is that now, Person.Address has system versioning turned on. When we try and add the table back to the publication, we’re in for a shock:

This could come back to bite you, so if you use replication and are interested in temporal tables, read this closely.
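
For context, something like the following is all it takes to land in that situation: turning on system versioning for a table which is already a replication article. This is an illustrative sketch, not Drew’s script; the period column and history table names are mine.

ALTER TABLE Person.Address
ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
        CONSTRAINT DF_Address_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
        CONSTRAINT DF_Address_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE Person.Address
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = Person.Address_History));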


Cleaning Up Erroneous Data Using Temporal Tables

Bert Wagner shows how to clean up incorrectly entered data if you already have a temporal table in place:

The good news is that all of your data is still intact — it’s been copied over to the historical table. Phew!

Now all you need to do is roll back this inadvertent row insertion and make your tables look just like they did before you started breaking them.

This should be easy right?

Well not exactly — there’s no automatic way to roll back the data in a temporal table. However, that doesn’t mean we can’t write some clever queries to accomplish the same thing.

Read the whole thing, as there’s a multi-step process.
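
The gist of the approach looks something like this. It is a rough sketch of the idea rather than Bert’s exact queries, reusing the made-up dbo.CarInventory columns from earlier; identity columns, constraints, and foreign keys all add extra wrinkles.

BEGIN TRANSACTION;

-- Remove the current (bad) rows; system versioning copies them to history.
DELETE FROM dbo.CarInventory;

-- Re-insert the rows as they looked at a known-good point in time.
SET IDENTITY_INSERT dbo.CarInventory ON;

INSERT INTO dbo.CarInventory (CarId, [Year], Make, Model, InLot)
SELECT CarId, [Year], Make, Model, InLot
FROM dbo.CarInventory
    FOR SYSTEM_TIME AS OF '2017-06-01 12:00:00';  -- the known-good time

SET IDENTITY_INSERT dbo.CarInventory OFF;

COMMIT TRANSACTION;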


Versioning R Code In SQL Server

Steph Locke shows how to combine R models and SQL Server temporal tables for versioning:

If we’re storing our R model objects in SQL Server then we can utilise another SQL Server capability, temporal tables, to take the pain out of versioning and make it super simple.

Temporal tables will track changes automatically so you would overwrite the previous model with the new one and it would keep a copy of the old one automagically in a history table. You get to always use the latest version via the main table but you can then write temporal queries to extract any version of the model that’s ever been implemented. Super neat!

I do exactly this.  In my case, it’s to give me the ability to review those models after the fact once I know whether they generated good outcomes or not.
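
A minimal sketch of what that can look like, with my own made-up table and column names rather than Steph’s:

-- One current row per model; every overwrite is kept in the history table.
CREATE TABLE dbo.RModel
(
    ModelName NVARCHAR(128) NOT NULL PRIMARY KEY,
    ModelObject VARBINARY(MAX) NOT NULL,  -- serialized R model
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.RModelHistory));

-- The latest model is a plain SELECT; every version ever deployed is one
-- temporal query away.
SELECT ModelName, ModelObject, ValidFrom, ValidTo
FROM dbo.RModel FOR SYSTEM_TIME ALL
WHERE ModelName = N'churn-model'
ORDER BY ValidFrom;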


Building Temporal Tables

Bert Wagner shows how to create a temporal table in SQL Server 2016:

I want to make my life easier by using temporal tables! Take my money and show me how!

I’m flattered by your offer, but since we are good friends I’ll let you in on these secrets for free.

First let’s create a temporal table. I’m thinking about starting up a car rental business, so let’s model it after that:
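
The CREATE TABLE statement itself is an image in the original post; a system-versioned table along those lines looks roughly like this (the business columns are my guesses):

CREATE TABLE dbo.CarInventory
(
    CarId INT IDENTITY(1,1) PRIMARY KEY,
    [Year] SMALLINT NOT NULL,
    Make NVARCHAR(50) NOT NULL,
    Model NVARCHAR(50) NOT NULL,
    InLot BIT NOT NULL DEFAULT 1,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));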

There are some places where temporal tables can get better (particularly around feeling more like a type 2 slowly changing dimension), but I’m pretty happy with this feature.


Change Detection Temporal Tables

Adam Machanic shows how to find net changes using temporal tables:

For now, consider the following set of propositions, given that we’re asking at time Y for all changes since a prior time X.

  • INSERT: The key did not exist at time X but does exist at time Y.
  • DELETE: The key existed at time X but does not exist at time Y.
  • UPDATE: The key existed at both time X and at time Y, and at least one change occurred between time X and time Y.

Given these assumptions, we can begin work on temporal queries that return the necessary rows. Solving for these conditions will require all rows that were active as of time X, rows that were (or are) active at time Y, and for the final case, all rows that were active in-between times X and Y. Since this is a range-based scenario, our best temporal predication option will be either FROM or BETWEEN. The difference between these two is subtle: FROM uses an open interval (non-inclusive at both endpoints), whereas BETWEEN uses a half-open interval, inclusive on the end date. Given the choice in the scenario, BETWEEN makes more sense, as we can take advantage of the inclusive endpoint to avoid dropping a badly-timed row. But more on that in a moment.

Adam put a lot of thought into edge cases, making this a must-read.
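
Here is a skeleton of those three propositions as a query. This is not Adam’s solution (his uses FOR SYSTEM_TIME BETWEEN and handles the edge cases he alludes to); it is the naive two-snapshot comparison, assuming a temporal table dbo.Products keyed on ProductID with a couple of hypothetical business columns.

DECLARE @TimeX DATETIME2 = '2017-06-01',
        @TimeY DATETIME2 = '2017-06-08';

-- Compare the snapshot at time X with the snapshot at time Y, key by key.
SELECT
    COALESCE(y.ProductID, x.ProductID) AS ProductID,
    CASE
        WHEN x.ProductID IS NULL THEN 'INSERT'  -- absent at X, present at Y
        WHEN y.ProductID IS NULL THEN 'DELETE'  -- present at X, absent at Y
        ELSE 'UPDATE'                           -- present at both, values differ
    END AS ChangeType
FROM dbo.Products FOR SYSTEM_TIME AS OF @TimeX AS x
    FULL OUTER JOIN dbo.Products FOR SYSTEM_TIME AS OF @TimeY AS y
        ON x.ProductID = y.ProductID
WHERE x.ProductID IS NULL
    OR y.ProductID IS NULL
    OR EXISTS (SELECT x.ProductName, x.Price EXCEPT SELECT y.ProductName, y.Price);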
