Curated SQL Posts

The User Transaction Scope for Temporary Objects

Bob Dorr troubleshoots a performance problem:

When the temporary table is bound to the user transaction it is both created and destroyed as part of the transaction.  The same logic in a procedure attempts to avoid the creation and destruction, for each execution, by using temporary table caching.

From the issue I was debugging, the user transaction scope mattered because creation and destruction of metadata may be an expensive operation.  

This post ties into two separate things: first, how temp objects are scoped to specific sessions; and second, the cost of creating and destroying temporary objects. For the latter, a couple of quick pieces of advice (with a brief T-SQL sketch after the list):

  • Reduce the number of temporary objects you create. If you can solve a problem with fewer temp tables or table variables while maintaining acceptable performance, that can help on busy systems.
  • Never explicitly drop temp tables. There’s no benefit to explicitly dropping temp tables, as they’ll go away as soon as the session ends. Also, not dropping temp tables is the first step to:
  • Embrace temp table reuse. There are specific rules around when you can re-use a temp table. Each re-use of a temp table means two fewer metadata operations (one delete and one create).
  • Use memory-optimized table variables instead of temp tables or table variables.
  • Turn on memory-optimized tempdb metadata. The biggest issue here is that you lose cross-database queries into tempdb views. That can end up being painful and is why I can’t recommend it as a general solution.
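
A minimal T-SQL sketch of a few of these ideas, assuming a hypothetical dbo.Orders table and made-up object names (my illustration, not Bob's repro):

```sql
/* Sketch 1: a caching-friendly temp table inside a procedure.
   No named constraints and no DDL after creation, which are among the
   usual conditions for the temp table to be cached across executions. */
CREATE OR ALTER PROCEDURE dbo.usp_SummarizeRecentOrders
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #RecentOrders
    (
        OrderId   int           NOT NULL,
        OrderDate datetime2(0)  NOT NULL,
        Amount    decimal(19,4) NOT NULL
    );

    INSERT INTO #RecentOrders (OrderId, OrderDate, Amount)
    SELECT o.OrderId, o.OrderDate, o.Amount
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= DATEADD(DAY, -7, SYSUTCDATETIME());

    SELECT COUNT(*) AS OrderCount, SUM(Amount) AS TotalAmount
    FROM #RecentOrders;

    -- No explicit DROP: letting the table fall out of scope keeps it
    -- eligible for reuse from the temp table cache.
END;
GO

/* Sketch 2: a memory-optimized table variable via a table type.
   Requires a memory-optimized filegroup (or a supported Azure SQL tier). */
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId int NOT NULL INDEX ix_OrderId NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @ids AS dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
GO

/* Sketch 3: memory-optimized tempdb metadata (SQL Server 2019+; requires a
   restart, and remember the cross-database query limitation noted above). */
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
```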

Simplifying a Complex Multi-Visual Chart

Amy Esselman re-designs a mess of a chart:

When faced with any unfamiliar but complicated graph, it can be helpful to think about it piece by piece to gain a better understanding of what’s being communicated. That way, we’ll have a better handle on how we can improve the overall visual. 

The goal of this chart is to allow managers to compare their store’s performance against its forecasted range and the actual performance of other stores in the region. 

Click through for the full process.

Iteratively Tuning Graph Neural Networks

Luis Bermudez takes us through the process of tuning one flavor of neural network:

We made our own implementations of OGB leaderboard entries for two popular GNN frameworks: GraphSAGE and a Relational Graph Convolutional Network (RGCN). We then designed and executed an iterative experimentation approach for hyperparameter tuning where we seek a quality model that takes minimal time to train. We define quality by running an unconstrained performance tuning loop, and use the results to set thresholds in a constrained tuning loop that optimizes for training efficiency.

Read on to see how they did it.

Consolidating Indexes

Erik Darling runs through an exercise:

The more columns you have in a table, the more potential column combinations there are for indexes. Much like columns, indexes tend to get added following the path of least resistance.

Very rarely does someone consider current indexes when deciding to add an index.

Erik’s process is a good one. The real pain comes when there are 40-50 indexes on a table (seriously…) and there are a lot of similar-but-not-quite-similar-enough indexes.
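
To make the consolidation idea concrete, here is a hedged sketch against a hypothetical dbo.Orders table with made-up index names (my example, not Erik's): two overlapping indexes that accreted over time, collapsed into one index that can serve both access patterns.

```sql
-- Two indexes added at different times for different queries:
CREATE INDEX ix_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderTotal);

CREATE INDEX ix_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate);

-- The second key starts with the first key, so one index with the wider key
-- and the union of the included columns can usually replace both:
CREATE INDEX ix_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (OrderTotal)
    WITH (DROP_EXISTING = ON);

DROP INDEX ix_Orders_CustomerId ON dbo.Orders;
```

As always, verify the consolidated index against the actual workload before dropping anything.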

Microsoft Purview

Wolfgang Strasser looks at Microsoft Purview:

I was ready for a nice relaxing evening today when an email appeared in my inbox: “Azure Purview is now Microsoft Purview!”

Initially I thought… yeah… “just another Microsoft product name renaming”… but when I read through it in more depth, I found out that this is NOT just a renaming.

Read on to understand what it includes.

Installing Prometheus Exporter for Windows Clients

Jamie Wick exports some data:

Prometheus is an open-source monitoring solution that our Linux team has been using for several years. More recently, we began using it for our Windows-based servers too. (I’ll post a writeup about Prometheus in the future)

One of the obstacles to implementing Prometheus monitoring on our Windows servers was finding and installing an agent. We ultimately decided to use the windows_exporter agent available in the Prometheus Community on GitHub. The exporter is free to use under an MIT license and supports an extensive list of WMI metrics that are grouped into Collectors.

Read on for more info, including ways to avoid common errors.

Imagining a SaaS Plane for Data Mesh in Azure

Paul Andrew shares some deep thoughts:

For part 7 of this series, I want to explore what else could be delivered in our Azure Data Mesh if we continue our established thinking around the planes of interaction for our data products. As with part 6, we are still missing good Azure Resources that can be deployed for certain situations. However, I want to frontload some concepts now, so we are ready if/when a suitable technical answer arrives in the cloud.

Note that this is all speculative. It’s interesting speculation, though.

Seeing Top N in Power BI

Reza Rad does some filtering:

I have previously written articles about how you can write a measure in DAX that helps with TOP N filtering. However, you may not need that calculation in many situations. If all you want is simply to get the top 10 customers based on the sales amount, or the bottom 5 products, etc., then you can use the visual-level filter GUI to perform this filtering. This is not new functionality in Power BI; however, many users might not have seen it yet, so I’ll explain it in this short article.

Read on to understand when you can use this and when you should go to TOPN() in DAX.

Cross-Subscription Restore for Dedicated SQL Pools

Steve Howard announces some good news:

We are excited to announce the release of cross-subscription restore. This has been one of our top requested features from customers as it unlocks multiple scenarios from dev/test to simplified billing at the subscription level for restored data warehouses.

Click through to see how you can do this. There was a workaround in the past but this should be quite a bit faster.

Currying and Partial Application

Prakhar explains the difference between currying and partial application:

Currying simply means converting a function taking more than one parameter into a series of functions, each taking one parameter. Example:

Click through for an example, as well as the difference between currying and partial application. As for why currying is important, this is how we tie together the concept of mathematical functions, which require exactly one parameter (a function being defined as, for every value of the domain, there is one and only one value of the range), with computer science functions, which may have multiple parameters. Currying allows us to bridge that gap without needing to write loads of intermediary functions.
