Press "Enter" to skip to content

Category: Temporal Tables

Notes on Temporal Tables

Teo Lachev describes some benefits and properties of temporal tables:

At the same time, temporal tables are somewhat more difficult to work with. For example, you must disable system versioning before you alter the table. Here is the approach the documentation recommends for altering the schema:
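As a rough sketch (with hypothetical table and column names, not code from the linked post), the pattern looks like this:

BEGIN TRANSACTION;

-- Turn system versioning off so the two tables can be altered independently.
ALTER TABLE dbo.SomeData SET (SYSTEM_VERSIONING = OFF);

-- Apply the schema change to both the current table and the history table.
ALTER TABLE dbo.SomeData ADD NewColumn int NULL;
ALTER TABLE dbo.SomeDataHistory ADD NewColumn int NULL;

-- Re-link the history table and turn system versioning back on.
ALTER TABLE dbo.SomeData SET
(
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SomeDataHistory, DATA_CONSISTENCY_CHECK = ON)
);

COMMIT TRANSACTION;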

I think the main drawback to using temporal tables in this way is that we can only use system time as the separator, unless we manually load data into the history table. It’d be great to have a user time capability to open up temporal tables to these sorts of warehousing scenarios, such as using them for type-2 slowly-changing dimensions.


The Internals of DATETIME2

Chad Baldwin digs in:

I noticed in sys.column_store_segments the min_data_id and max_data_id columns store very large bigint values in the segments for datetime2 columns. After doing a bit more googling and tinkering, I found for bit/tinyint/smallint/int/bigint it stores the min/max of the actual values rather than dictionary lookup values. So I assume it’s likely doing the same for date/time/datetime/datetime2 and storing some sort of bigint representation of the actual value.

This post is going to focus on datetime2(7) datatypes mainly because that’s what I was dealing with. Though I’m sure it wouldn’t be much work to figure out the other types.

Click through to learn more about the datatype and see how this wraps into a discussion of temporal table cleanup and columnstore indexes.
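If you’d like to poke at the same metadata yourself, a query along these lines shows the per-segment min_data_id and max_data_id values Chad mentions. The table name is hypothetical and the join between the catalog views is simplified:

SELECT
    c.name AS column_name,
    s.segment_id,
    s.encoding_type,
    s.min_data_id,
    s.max_data_id
FROM sys.column_store_segments AS s
JOIN sys.partitions AS p
    ON p.hobt_id = s.hobt_id
JOIN sys.columns AS c
    ON c.object_id = p.object_id
    AND c.column_id = s.column_id
WHERE p.object_id = OBJECT_ID('dbo.SomeDataHistory')   -- hypothetical columnstore table
ORDER BY c.name, s.segment_id;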


Temporal Table History Cleanup

Chad Baldwin reads the docs:

I recently built a system for collecting index usage statistics utilizing temporal tables, clustered columnstore indexes (CCIs) and a temporal table data retention policy. The basic idea behind the system is that it collects various stats about indexes and updates this stats table. However, because it’s a temporal table, all changes are logged to the underlying history table.

My history table is built using a clustered columnstore index and has a data retention policy set up for the temporal table, like so:
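Chad’s exact definition isn’t reproduced here, but the general shape of that kind of setup looks something like this sketch, with hypothetical table and column names:

CREATE TABLE dbo.IndexStats
(
    ObjectId  int NOT NULL,
    IndexId   int NOT NULL,
    UserSeeks bigint NOT NULL,
    UserScans bigint NOT NULL,
    ValidFrom datetime2(7) GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2(7) GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo),
    CONSTRAINT PK_IndexStats PRIMARY KEY CLUSTERED (ObjectId, IndexId)
)
WITH
(
    SYSTEM_VERSIONING = ON
    (
        HISTORY_TABLE = dbo.IndexStatsHistory,
        HISTORY_RETENTION_PERIOD = 3 MONTHS
    )
);

-- The auto-created history table gets a rowstore clustered index named ix_<history table name>;
-- rebuild it as a clustered columnstore index.
CREATE CLUSTERED COLUMNSTORE INDEX ix_IndexStatsHistory
    ON dbo.IndexStatsHistory
    WITH (DROP_EXISTING = ON);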

Read on to see the problem Chad ran into and why it turned out not to be an actual problem (except maybe of the PEBKAC variety). In fairness, I would have made the same mistake.


Retaining SQL Agent Job History in a Managed Instance

Andy Brownsword gets around the limitations:

In a Managed Instance, the SQL Agent job history is fixed at 1000 records or 100 records per job. This isn’t configurable as it is in a regular SQL Server install. So how can we maintain a history of these records if we want to retain them beyond those limits?

There are three approaches which could be worth considering. Two of these have been well covered by others, and I’ll demonstrate the final one here.

Click through for those three techniques.
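As a generic illustration of the underlying idea (not necessarily any of the three approaches Andy covers), one pattern is to have a scheduled job copy new rows from msdb.dbo.sysjobhistory into a table you control. Everything below other than msdb.dbo.sysjobhistory itself is a hypothetical name:

-- Destination table in a user database, keyed on msdb's instance_id.
CREATE TABLE dbo.AgentJobHistory
(
    instance_id  int NOT NULL PRIMARY KEY,
    job_id       uniqueidentifier NOT NULL,
    step_id      int NOT NULL,
    run_status   int NOT NULL,
    run_date     int NOT NULL,
    run_time     int NOT NULL,
    run_duration int NOT NULL,
    message      nvarchar(4000) NULL
);

-- Run this on a schedule: copy only rows not yet captured, using instance_id as a watermark.
INSERT INTO dbo.AgentJobHistory
    (instance_id, job_id, step_id, run_status, run_date, run_time, run_duration, message)
SELECT
    jh.instance_id, jh.job_id, jh.step_id, jh.run_status,
    jh.run_date, jh.run_time, jh.run_duration, jh.message
FROM msdb.dbo.sysjobhistory AS jh
WHERE jh.instance_id > (SELECT ISNULL(MAX(instance_id), 0) FROM dbo.AgentJobHistory);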


Table Audits with Temporal Tables

Erik Darling is keeping an eye on you:

Sort of recently, a client really wanted a way to figure out if support staff was manipulating data in a way that they shouldn’t have. Straight away: this method will not track if someone is inserting data, but inserting data wasn’t the problem. Data changing or disappearing was.

The upside of this solution is that it will detect not only who made the change, but also what data was updated and deleted.

Read on to see how it works. I’ve used temporal tables for this type of scenario, and they’re fine for stable table designs.
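For a flavor of what that looks like in practice, here’s a generic sketch with hypothetical names rather than Erik’s exact setup. Temporal tables capture when and what changed on their own; the “who” typically comes from a column the application maintains, such as a ModifiedBy column defaulted to SUSER_SNAME():

-- Pull prior row versions out of a system-versioned table dbo.SupportData
-- and see when each version stopped being current.
SELECT
    sd.SupportDataId,
    sd.ModifiedBy,
    sd.ValidFrom,
    sd.ValidTo
FROM dbo.SupportData FOR SYSTEM_TIME ALL AS sd
WHERE sd.ValidTo < '9999-12-31'    -- only versions which have since been updated or deleted
ORDER BY sd.SupportDataId, sd.ValidFrom;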


Implementing Temporal Tables with Existing Data

Matthew McGiffen gets to one of my problems with temporal tables:

I also referred to Temporal Tables, which are available to us from SQL Server 2016 onward.

Temporal tables aren’t just about monitoring change; they also provide really nice methods for querying historical data – to see what the values were at a particular point in time, e.g.

SELECT * FROM dbo.SomeData FOR SYSTEM_TIME AS OF '1900-01-01';

My big problem with temporal tables is that they only implement system-defined times. That’s fine in a quasi-historical OLTP scenario, where you want to track history but only occasionally make use of it. But if they supported application time, then you would have the ability to create something akin to a type-2 slowly changing dimension with just a few extra words. I understand that the tricky part is that application-defined temporal tables lose the nicety of knowing that the latest insert always goes into the main table and drives the prior record into the historical table, but there are some clever ways around this problem as well. It’s just too early in the morning for me to articulate them is all…


Securing Temporal Tables

Daniel Hutmacher does a little locking down:

You may have already discovered a relatively new feature in SQL Server called system-versioned temporal tables. You can have SQL Server set up a history table that keeps track of all the changes made to a table, a bit similar to what business intelligence people would call a “slowly changing dimension”.

What happens behind the scenes is that SQL Server creates a separate table that keeps track of previous versions of row changes, along with “from” and “to” timestamps. That way, you can view the contents of the table as it was at any given point in time.

But how do you version the contents of a table, while hiding things like deleted records from prying eyes?

There’s not a whole lot we can do, but Daniel shows what we are able to do.
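One piece of that puzzle (a sketch under my own assumptions, not necessarily Daniel’s approach) is controlling who can read the history table at all. The table names and the SupportStaff role below are hypothetical:

-- Allow reads of current data, but deny direct reads of the history table.
GRANT SELECT ON OBJECT::dbo.SomeData TO SupportStaff;
DENY SELECT ON OBJECT::dbo.SomeDataHistory TO SupportStaff;

With that in place, members of the role can query current rows, but a FOR SYSTEM_TIME query that needs the history table should fail with a permission error.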


Error 13535: Data Modification Failed with Temporal Tables

Bob Dorr troubleshoots an issue:

When 2 or more workers are modifying the same row, it is possible to encounter 13535.  The time of the begin transaction and the modification are the defining properties.  When the transaction begin time is before the latest row modification, error 13535 is encountered.

Click through for an example of how you might trigger this error. This ultimately is the optimistic concurrency problem: how do you deal with multiple writers when using snapshot-based optimistic concurrency? Silently clobber or raise an error? Looks like temporal tables, like memory-optimized tables, raise an error instead of going quietly into the night.
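To make the timing concrete, here is a rough two-session sketch of the kind of sequence that can raise the error, assuming dbo.SomeData is a system-versioned temporal table (hypothetical names; Bob’s actual repro may differ):

-- Session 1: start a transaction and pin its transaction time with a modification.
BEGIN TRANSACTION;
UPDATE dbo.SomeData SET Val = 'a' WHERE Id = 2;

-- Session 2 (autocommit): modify a different row and commit.
-- That row's period start time is now later than session 1's transaction time.
UPDATE dbo.SomeData SET Val = 'b' WHERE Id = 1;

-- Session 1: touch the row session 2 just changed. Because session 1's transaction time
-- is earlier than that row's latest period start time, error 13535 is raised.
UPDATE dbo.SomeData SET Val = 'c' WHERE Id = 1;
ROLLBACK TRANSACTION;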


Temporal Tables and Azure DevOps Deployments

Rayis Imayev notes a problem with Azure DevOps deployments:

Here is one thing that still doesn’t work well when you try to alter an existing temporal table and run this change through the [SqlAzureDacpacDeployment@1] DevOps task, whether this change is to add a new column or modify existing attributes within the table. Your deployment will fail with the “This deployment may encounter errors during execution because changes to … are blocked by …’s dependency in the target database” error message.

Read on to see what causes this problem and what we can do to work around it.
