Press "Enter" to skip to content

Category: T-SQL Tuesday

T-SQL Tuesday 182 Round-Up

Rob Farley gives us a roundup:

I was the host for T-SQL Tuesday this month, inviting people to write about integrity. And because I don’t believe in just providing a list of posts (although I’ve also done that at the end of the post), I have a completely fabricated story for you. And no, this wasn’t created using AI. I have more integrity than that (see what I did there?).

Read on for Rob’s summary and all of the links.


Automating DBCC CHECKDB on Full Backups

Shane O’Neill riffs off of a script:

Like many shops out there, we can’t run corruption checking on our main production database instance. So, then, what do we do? We take the backups and restore them to a test instance, and then run corruption checking on those restored databases.

At least this way, we can also verify that the backups we take can be restored.

But, I don’t want to spend every day manually restoring and corruption checking these databases, so let’s automate this bit…

Click through for a script that Shane built off of a Madeira Data Solutions script that Eitan Blumin put together.
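The shape of the pattern is simple enough to sketch. Here is a minimal, hand-rolled version with illustrative database names and file paths (Shane’s actual script, built on Eitan’s, adds the automation around finding the latest backups and looping over databases):

-- Restore the latest full backup to a scratch copy; names and paths here
-- are illustrative, not taken from Shane's or Eitan's script.
RESTORE DATABASE SalesDB_CheckCopy
FROM DISK = N'\\backupshare\SalesDB_Full.bak'
WITH MOVE N'SalesDB' TO N'T:\Data\SalesDB_CheckCopy.mdf',
     MOVE N'SalesDB_log' TO N'T:\Log\SalesDB_CheckCopy.ldf',
     REPLACE, STATS = 10;

-- Run the corruption check against the restored copy.
DBCC CHECKDB (N'SalesDB_CheckCopy') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Drop the copy so tomorrow's restore starts clean.
DROP DATABASE SalesDB_CheckCopy;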


Thoughts on Data Integrity

Deborah Melkin shares some thoughts:

The first way to think of data integrity is a very small and literal interpretation. This is making sure that our data in the database is good. In many ways, these are easy to enforce – you add constraints. Primary Keys ensure that you know what makes each row unique. Unique constraints represent what would make each record unique if the primary key constraint, which is often a surrogate key these days, didn’t exist or offered different options.

Read on for more about database design, default constraints, and a dive into data modeling.
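To make the distinction concrete, here is a minimal sketch of a table carrying both kinds of uniqueness; the table and column names are illustrative, not from Deborah’s post:

-- A surrogate primary key plus a unique constraint on the natural key.
CREATE TABLE dbo.Customer
(
    CustomerID   int IDENTITY(1,1) NOT NULL,  -- surrogate key
    CustomerCode varchar(20)       NOT NULL,  -- natural/business key
    CustomerName varchar(100)      NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY (CustomerID),
    CONSTRAINT UQ_Customer_CustomerCode UNIQUE (CustomerCode)
);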


It’s Probably Not Data Corruption on Disk

Andy Yun talks storage:

I cannot tell you how many times I’ve encountered scenarios where “this data looks wrong.” Well… can one ensure that it is being retrieved and displayed correctly from the storage media that it resides on in the first place? Are you viewing/validating the data in question correctly? Whatever client/method you are using to review your data – that is suspect and its integrity is in question.

It is technically possible for bits to flip, but that’s also why we have checksums on disk. I’m sure there are people who have experienced storage corruption that changed just enough to cause problems but not enough to be noticeable, but Andy is right on the money.
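If you want to confirm that your databases actually have that protection, the page verification setting is easy to check. CHECKSUM has been the default since SQL Server 2005, but older or restored databases can lag behind; the database name below is illustrative:

-- See which page verification option each database uses.
SELECT name, page_verify_option_desc
FROM sys.databases;

-- Upgrade any stragglers.
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;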


Database Nostalgia

Louis Davidson looks back on the past:

I will admit that the feature I will mention probably won’t be relevant to you, as it has been nearly 30 years since this feature was introduced. But since this is the most nostalgic time of the year, I want to go back, through the mists of time, to a time when we hadn’t even quite begun to fear the changing of the centuries. To a time when a single CPU was all we had even thought of, computer screens were often colorless, and a few megabytes of RAM was all that was needed. It was a simpler time, though to be fair, it was also a time before we had sports on our phones.

Louis goes on to reminisce about an implementation of database quality guarantees in a world without foreign key constraints. Which is usually about the time I rag on MySQL for its insistence that foreign keys weren’t important to the product until relatively late in its existence (version 3.23, released in 2000).


SQL Server Execution Plan Analysis Features

Hugo Kornelis looks back at some nice additions to SQL Server:

It’s December. The last month of the year, and hence also the last T-SQL Tuesday of the year. Edition 181 is hosted by Kevin Chant. His chosen topic for this episode is to have us talk about which SQL Server announcement excited us the most.

This posed a bit of a problem for me. The only truthful answer here is that I have never been as excited about an announced new feature as when the Query Store was announced. But I also realized that I don’t have much to write about the Query Store, except how awesome it is. Okay, I could go into a full explanation, but many others do that much better, and have already done so. So that doesn’t leave much for me.

And that’s why I’ll also include my second favorite announcement.

Read on for a little bit around Query Store, followed up with coverage of several interesting additions to SQL Server over the years.
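If you haven’t turned Query Store on yet, it’s a one-line change per database (the database name here is illustrative; on SQL Server 2022 and in Azure SQL Database, it’s enabled by default for new databases):

-- Enable Query Store with ongoing capture.
ALTER DATABASE SalesDB
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);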


Number of Rows Read in SQL Server Execution Plans

Rob Farley cashed in all of his chips and got something nice out of it:

I had written to Santa (okay, the product group at Microsoft) to ask for this. It wasn’t quite “If I’m on the nice list, what I’d like is a unicorn train set”, but more like “If it’s not too much trouble, this thing here would be amazing…”

The thing was the ability to see the “number of rows read” by an Index Seek operation in a SQL plan containing the “Actuals” (post-execution).

Read on for information about why this is so useful.
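To see the counter in action, capture a post-execution plan for a query whose Index Seek carries a residual predicate; in the plan XML, the value surfaces as the ActualRowsRead attribute on the operator. The table, index, and predicates below are illustrative:

SET STATISTICS XML ON;

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42            -- seek predicate (matches the index key)
  AND OrderDate >= '20240101';   -- residual predicate, checked on every row read

SET STATISTICS XML OFF;

-- In the returned plan, compare ActualRows (rows returned) against
-- ActualRowsRead (rows the seek actually touched): a large gap means the
-- index is far less selective than the row count alone suggests.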


The Joy of Query Store

Andy Brownsword gets into the spirit:

Reviewing performance for a SQL database over time can be a challenge without hand-crafted tooling or external monitoring apps. This isn’t about putting out fires; we’re looking at proactive monitoring and taking action for incremental improvements.

The Query Store comes to the rescue by providing built-in insights into key performance metrics over time. We can use different combinations of them to better understand how the database is working.

Query Store isn’t perfect, but it’s a very good solution to an extremely common problem.
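As a taste of what’s available, the Query Store catalog views can answer “what were my slowest queries?” directly; a minimal sketch, with time-interval filtering omitted for brevity:

-- Top queries by average duration from Query Store's catalog views.
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       rs.avg_duration,
       rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
     ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
     ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
     ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;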


SQL Database in Microsoft Fabric

Deepthi Goguri is pleased with a new spin on an existing product:

“SQL database in Microsoft Fabric is a developer-friendly transactional database, based on Azure SQL Database, that allows you to easily create your operational database in Fabric. A SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database.”

As you can read, this is a transactional database that can be created in Fabric and replicated to the data lake for analytical workloads. The other main goal is to help build AI apps faster using SQL databases in Fabric. The data is replicated in near real time and converted to Parquet, an analytics-ready format.

Read on to learn more about the offering. I’m still not 100% sold on its virtues versus simply having an Azure SQL Database and enabling mirroring.
