Category: T-SQL Tuesday

SQL Server Execution Plan Analysis Features

Hugo Kornelis looks back at some nice additions to SQL Server:

It’s December. The last month of the year, and hence also the last T-SQL Tuesday of the year. Edition 181 is hosted by Kevin Chant. His chosen topic for this episode is to have us talk about which SQL Server announcement excited us the most.

This posed a bit of a problem for me. The only truthful answer here is that I have never been as excited about an announced new feature as when the Query Store was announced. But I also realized that I don’t have much to write about the Query Store, except how awesome it is. Okay, I could go into a full explanation, but many others do that much better, and have already done so. So that doesn’t leave much for me.

And that’s why I’ll also include my second favorite announcement.

Read on for a little bit around Query Store, followed up with coverage of several interesting additions to SQL Server over the years.

The Joy of Query Store

Andy Brownsword gets into the spirit:

Reviewing performance for a SQL database over time can be a challenge without hand-crafted tooling or external monitoring apps. This isn’t about putting out fires; we’re looking at proactive monitoring and taking action for incremental improvements.

The Query Store comes to the rescue by providing built-in insights into key performance metrics over time. We can use different combinations of them to better understand how the database is working.

Query Store isn’t perfect, but it’s a very good solution to an extremely common problem.
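To make the idea concrete, here is a minimal sketch of the kind of insight the Query Store exposes, assuming a database where it is already enabled. It aggregates duration per query across the captured runtime-stats intervals; the catalog views are the standard ones, but exactly how you weight the stats is a judgment call:

-- Top 10 queries by total duration across the captured history
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions) AS total_executions,
       SUM(rs.avg_duration * rs.count_executions)
           / SUM(rs.count_executions) / 1000.0 AS avg_duration_ms
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY SUM(rs.avg_duration * rs.count_executions) DESC;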

SQL Database in Microsoft Fabric

Deepthi Goguri is pleased with a new spin on an existing product:

“SQL database in Microsoft Fabric is a developer-friendly transactional database, based on Azure SQL Database, that allows you to easily create your operational database in Fabric. A SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database.”

As you read, this is a transactional database that can be created in Fabric and replicated to the data lake for analytical workloads. The other main goal is to help build AI apps faster using SQL databases in Fabric. The data is replicated in near real time and converted to Parquet, an analytics-ready format.

Read on to learn more about the offering. I’m still not 100% sold on its virtues versus simply having an Azure SQL Database and enabling mirroring.
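Since it runs the same SQL Database Engine, day-to-day work against it is plain T-SQL; a minimal sketch with a hypothetical table and data, where the replication to Parquet happens automatically after commit:

-- Ordinary T-SQL against a SQL database in Fabric; committed rows
-- are mirrored to the lake as Parquet without extra configuration
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1, 1) PRIMARY KEY,
    Customer  nvarchar(100) NOT NULL,
    OrderDate datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

INSERT INTO dbo.Orders (Customer)
VALUES (N'Contoso'), (N'Fabrikam');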

The Value of Sufficiency

Rob Farley is good enough:

The French philosopher Montesquieu wrote, “Le mieux est le mortel ennemi du bien,” which means, “The best is the mortal enemy of the good.” Montesquieu was an interesting man who lived in the 18th century and influenced many of the political structures around the world. His book “The Spirit of Law” from 1748 described ways to avoid absolute power, established concepts like “innocent until proven guilty”, and campaigned against slavery. I guess some countries adopted those ideas more than others.

I mentioned it because Josephine Bush has invited us to write about the topic of “Good enough is perfect”. And while I realise that the two statements are not identical, I consider that they are essentially the same. If we always strive for perfect then we can all too easily miss the mark of being able to deliver anything worthwhile. As a consultant I have to understand that the ideal solution for my customers is not the one where everything is completely perfect if achieving that means that the cost is too high. Instead, I need to find the point where good enough really is good enough.

Understanding value at the margin is a vital skill.

T-SQL Tuesday 179 Round-Up

Tim Mitchell hires some data detectives:

Earlier this month, I hosted the monthly T-SQL Tuesday invitation in which I asked, “What’s in your data detective toolkit?” We got some great responses which I’ll recap here, and I’ll share a few thoughts of my own at the end.

Click through for this month’s responses, as well as Tim’s answer to the question.

The Most Important Tool for a Data Detective

Andy Yun wants you to use your earholes:

The All-Powerful…

… Question. That is what I now believe is the most important tool for a Data Detective.

Asking Questions Effectively

This nuance involves HOW you ask a question. Some of this involves knowing your audience. Is this the right place or the right time? Sometimes there comes a point where asking questions is just counter-productive because your audience has no interest in answering. And it also means you need to make sure you’re asking the correct audience in the first place.

Asking questions is difficult, so instead, I just strawman my way to success.

Building a Data Detective Toolkit

Deb Melkin talks tools:

Happy T-SQL Tuesday! I wasn’t really sure I’d be able to crank something out for this one, but somehow I managed to squeeze it in. Tim Mitchell is hosting and he has a great topic for us: What’s in our Data Detective toolkit?

I love this topic for so many reasons. Partly because I feel like I’m asked to look at so many projects where I’m dropped in and asked to figure things out, usually performance-related but occasionally new functionality or features. But as I’m asked to do this fairly often, I may have to see if Data Detective can be my new title… hmm…

Being a Data Detective in a film noir. On the one hand, that sounds like a really neat idea. On the other hand, things usually don’t turn out so well for the detective.

Generating a Multi-Aggregate Pivot in Spark

Richard Swinbank troubleshoots an issue:

I’m using a stream watermark to handle late-arriving data – basically, my watermark enables the stream to accept data arriving up to 10 seconds late … and that’s where the problem shows up.

When I run this streaming query – in Azure Databricks I can do this simply with display(df_pivot) – I receive the error:

AnalysisException: Detected pattern of possible ‘correctness’ issue due to global watermark. The query contains stateful operation which can emit rows older than the current watermark plus allowed late record delay, which are “late rows” in downstream stateful operations and these rows can be discarded. Please refer the programming guide doc for more details. If you understand the possible risk of correctness issue and still need to run the query, you can disable this check by setting the config `spark.sql.streaming.statefulOperator.checkCorrectness.enabled` to false.

Read on to learn more about the scenario, the issue, and the solution.

Query Start Times in Query Store

Hugo Kornelis describes an issue:

I was hired by a customer who had a very annoying issue with the daily data load of their data warehouse. The volume of data to be loaded is high and they were already struggling to finish the load before business opens. But that was not their biggest issue. The biggest problem, the real pain point that they hired me for, is that at unpredictable moments, the load would run much longer than normal, pushing it well into business hours. They wanted me to find out what caused those irregular delays, and find a way to stop them from happening.

Read on to learn more about the issue itself, as well as a discrepancy in what Query Store showed. Hugo also points out that the quick-and-easy solution may not be the right solution.
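For orientation, the timing data Hugo is working with lives in the Query Store runtime-stats catalog views; a minimal sketch of where those start times come from (the plan_id filter is hypothetical):

-- Per-interval execution stats, with the first and last execution
-- timestamps the Query Store recorded inside each interval
SELECT rs.plan_id,
       i.start_time AS interval_start,
       i.end_time   AS interval_end,
       rs.first_execution_time,
       rs.last_execution_time,
       rs.count_executions
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_runtime_stats_interval AS i
    ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE rs.plan_id = 42  -- hypothetical plan of interest
ORDER BY i.start_time;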
