I want to make my life easier by using temporal tables! Take my money and show me how!
I’m flattered by your offer, but since we are good friends I’ll let you in on these secrets for free.
First let’s create a temporal table. I’m thinking about starting up a car rental business, so let’s model it after that:
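Something along these lines would do; this is a minimal sketch of my own, and the table and column names are illustrative rather than taken from the article:

```sql
-- A minimal system-versioned temporal table for a car rental business.
-- Table and column names are illustrative.
CREATE TABLE dbo.CarInventory
(
    CarId INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_CarInventory PRIMARY KEY,
    Make NVARCHAR(50) NOT NULL,
    Model NVARCHAR(50) NOT NULL,
    DailyRate DECIMAL(10, 2) NOT NULL,
    Available BIT NOT NULL CONSTRAINT DF_CarInventory_Available DEFAULT 1,
    -- System-versioning requires a PERIOD defined over two datetime2 columns.
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));
```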
There are some places where temporal tables can get better (particularly around feeling more like a type 2 slowly changing dimension), but I’m pretty happy with this feature.
For now, consider the following set of propositions, given that we’re asking at time Y for all changes since a prior time X.
- INSERT: The key did not exist at time X but does exist at time Y.
- DELETE: The key existed at time X but does not exist at time Y.
- UPDATE: The key existed at both time X and at time Y, and at least one change occurred between time X and time Y.
Given these assumptions, we can begin work on a temporal query that returns the necessary rows. Solving for these conditions will require all rows that were active as of time X, rows that were (or are) active at time Y, and, for the final case, all rows that were active in between times X and Y. Since this is a range-based scenario, our best temporal predicate option will be either FROM or BETWEEN. The difference between these two is subtle: FROM uses an open interval (non-inclusive at both endpoints), whereas BETWEEN uses a half-open interval, inclusive on the end date. Given the choice in this scenario, BETWEEN makes more sense, as we can take advantage of the inclusive endpoint to avoid dropping a badly-timed row. But more on that in a moment.
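To make the classification concrete, here is a rough sketch of that diff logic against the hypothetical dbo.CarInventory table from above, glossing over some of the edge cases Adam works through:

```sql
DECLARE @X DATETIME2 = '2016-06-01',
        @Y DATETIME2 = '2016-07-01';

WITH Versions AS
(
    -- Every row version active at any point in the window, inclusive of @Y.
    SELECT CarId, SysStartTime, SysEndTime
    FROM dbo.CarInventory
    FOR SYSTEM_TIME BETWEEN @X AND @Y
),
PerKey AS
(
    SELECT CarId,
           MAX(CASE WHEN SysStartTime <= @X AND SysEndTime > @X THEN 1 ELSE 0 END) AS ExistedAtX,
           MAX(CASE WHEN SysStartTime <= @Y AND SysEndTime > @Y THEN 1 ELSE 0 END) AS ExistsAtY,
           COUNT(*) AS VersionCount
    FROM Versions
    GROUP BY CarId
)
SELECT CarId,
       CASE WHEN ExistedAtX = 0 THEN 'INSERT'
            WHEN ExistsAtY = 0 THEN 'DELETE'
            ELSE 'UPDATE'
       END AS ChangeType
FROM PerKey
WHERE (ExistedAtX <> ExistsAtY)                                -- INSERT or DELETE
   OR (ExistedAtX = 1 AND ExistsAtY = 1 AND VersionCount > 1); -- UPDATE
```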
Adam put a lot of thought into edge cases, making this a must-read.
Although the process of converting an In-Memory Optimized OLTP table to a system-versioned table is similar, there are some differences that we need to cover and demonstrate in this section.
You need to be aware of some specific details when converting an in-memory optimized table to a system-versioned table.
Read on for those specifics.
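As a rough flavor of the mechanics, the general pattern might look like the sketch below; the table names are hypothetical, and the in-memory-specific caveats are exactly what the article digs into. One difference that holds in general: only the current table stays memory-optimized, while the history table is disk-based.

```sql
-- Add the period columns and PERIOD definition to an existing
-- memory-optimized table (hypothetical dbo.Sessions).
ALTER TABLE dbo.Sessions
    ADD PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime),
        SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
            CONSTRAINT DF_Sessions_SysStartTime DEFAULT SYSUTCDATETIME(),
        SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
            CONSTRAINT DF_Sessions_SysEndTime DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999');

-- Turn on system-versioning; the history table lives on disk.
ALTER TABLE dbo.Sessions
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SessionsHistory));
```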
Have you ever got that sinking feeling after hitting the Execute button in SSMS, thinking “I should not have done that”? DML statements with missing WHERE clauses, DROP statements accidentally targeting slightly mistyped (but existing) tables or databases, and RESTORE statements overwriting databases with new data that hasn’t been backed up are all examples of actions prompting an “Oops…” (or worse) shortly thereafter. “Oops recovery” is the term that became popular to describe the process of fixing the consequences.
For most of these scenarios, the usual, and often the only, recovery mechanism is to restore the database from backup to a point in time just before the “oops”, known as point-in-time recovery (PITR). Even though PITR remains the most general and the most effective recovery mechanism, it does have some drawbacks and limitations: the recovery process requires a full database restore, taking time proportional to the size of the database; a sequence of restores may be needed if multiple “oops” transactions have occurred; and, in the general case, there will be difficulties reconciling recovered data with data modified after the “oops” point in time. Nevertheless, PITR remains the most widely applicable recovery method for SQL Server databases, both on-premises and in the cloud.
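For reference, a classic point-in-time restore looks something like this; the file names and the STOPAT time are purely illustrative:

```sql
-- Restore the last full backup without recovering, then roll the log
-- forward to just before the "oops" moment.
RESTORE DATABASE RentalDB
    FROM DISK = N'C:\Backups\RentalDB_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG RentalDB
    FROM DISK = N'C:\Backups\RentalDB_log.trn'
    WITH STOPAT = '2016-06-15 10:29:00', RECOVERY;
```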
The solution entails a non-trivial amount of effort, but it is interesting.
At first I was a bit puzzled. I noticed that the system-generated table was consistently smaller than my user-created table. It was not only smaller; it was half the size!
I did some further testing on my Surface this weekend and here’s what I found:
— Side note: I use Person.Address a lot in demos, so I decided to create a new table to test with in hopes of not breaking any other demos I do regularly.
I think this is a good decision for a default, but if you are unable to support page-level compression for some reason, there’s a workaround: create your history table beforehand.
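The workaround looks roughly like the sketch below, assuming a hypothetical dbo.Orders temporal table with matching columns: build the history table yourself, with whatever compression (or none) you can support, and hand it to SYSTEM_VERSIONING rather than letting SQL Server generate it.

```sql
-- Pre-create the history table (uncompressed here); its columns must
-- match the current table's schema.
CREATE TABLE dbo.OrdersHistory
(
    OrderId INT NOT NULL,
    CustomerId INT NOT NULL,
    Amount DECIMAL(10, 2) NOT NULL,
    SysStartTime DATETIME2 NOT NULL,
    SysEndTime DATETIME2 NOT NULL
);

-- Mirror the index the system-generated history table would get.
CREATE CLUSTERED INDEX IX_OrdersHistory
    ON dbo.OrdersHistory (SysEndTime, SysStartTime);

-- Point system-versioning at the pre-built table instead of letting
-- SQL Server create (and page-compress) one for you.
ALTER TABLE dbo.Orders
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory));
```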
I’ve been thinking a lot about SQL Server 2016 temporal tables of late. I think it’s possibly the most compelling feature in the release, with broad applications across a number of different use cases. However, just like any v.1 feature, it’s not without its faults.
I created a couple of new Connect items and decided to see what other things people had submitted. I combed the list and came up with a bunch of interesting items, all of which I think have great merit. Following is a summary of what I found. I hope you’ll consider voting these items up and hopefully we can push Microsoft to improve the feature in forthcoming releases.
I particularly like the idea about dropped column retention, at least as an optional feature. If temporal tables are interesting to you, click through and check out these Connect items.
I have chosen to blog about Richard T. Snodgrass’ book “Developing time-oriented database applications in SQL.” I heard about this book last year around this time, when I started to investigate the new temporal feature “System versioned tables” in SQL Server 2016. I believe it was my old colleague Peter Gram from Miracle who pointed out the book to me, and usually when Peter recommends a book, I buy it and (eventually) read it. It was also about time (no pun intended), since I’m giving a talk on “All things time-related” for two SQL Saturdays during the next few months, and I needed to spice up the presentation with some new material.
In this blog post, I will quickly jot down a few of the takeaways I’ve gotten from the book so far.
Sounds like an interesting read.
Again, the wall: there is no way you can choose a temporal table and apply encryption to a column or columns using the wizard.
I then tried using PowerShell (after manually creating the keys), as this is truly the only way to encrypt existing columns, just in case.
Raul doesn’t stop there, though, and he does figure out a workaround.
Why would you want to do this? In my case, I have a PowerShell function to accept an input object, convert it to a DataSet (a .NET representation of tables and the links between them), and bulk copy the data into SQL Server. However, because the input objects can vary slightly, with additional tables or additional columns, I use the flexibility of SMO to create the tables and columns on the fly.
There’s a lot of power in programmatically defining objects. I’m not sure that’s a good default scenario, but there are times in which it can come in handy.
Tracking changes to your data is a big deal, and in databases with heavy churn, you can end up needing a massive amount of space to handle your history, especially if you need to retain seven years of data, like some of our customers.
If data storage is a concern, I’m going to recommend SQL Server Enterprise Edition. The Books Online documentation specifically states that the history tables for Temporal Tables are implemented with Page Compression by default, which is an Enterprise Edition feature.
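If you want to verify what you ended up with, sys.partitions will show the compression setting; here I'm checking the hypothetical history table from earlier:

```sql
-- Check the compression setting on the history table's partitions.
SELECT p.index_id, p.partition_number, p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.OrdersHistory');
```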
My quick thought is: use them when you want a type 2 slowly changing dimension without putting in the effort to create one; in other words, non-warehouse systems in which you need to track long-term data changes.