Curated SQL Posts

Processing Data: Aggregate and Join Operators

Erik Darling continues a series on understanding query plans. First up is aggregate operators:

Aggregates can be useful for all sorts of things in a query plan, and can show up in many different forms.

It would be tough to cover all of them in a single post, but what I’d like to do is help all you nice folks out there understand some of their finer points.

Then we have join operators:

Anyone who tells you there are only three types of joins in SQL Server isn’t your friend.

Okay, maybe that’s harsh. Maybe they’re just getting you prepared for the bonne promenade through all the many different faces a join can wear in your query plans.

Maybe they have a great explanation for Grace Hash Joins in their back pocket that they’re waiting to throw in your face like a bunch of glitter.
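The three physical join algorithms behind all of those faces are nested loops, merge, and hash. One quick way to see each operator in a plan is to force it with a query hint; this hypothetical T-SQL (table and column names invented) is illustrative only, since in practice you'd normally let the optimizer choose:

    -- Force a Nested Loops join; swap the hint for OPTION (MERGE JOIN)
    -- or OPTION (HASH JOIN) to see the other two operators instead
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID
    OPTION (LOOP JOIN);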

Do read both.


Running a SQL Assessment from Powershell

David Alcock performs a bit of assessment:

My last post covered the SQL Assessment extension in Azure Data Studio, and in this article I’m going to look at running the SQL Assessment API using PowerShell instead. Whilst in ADS we can export assessment results to HTML or into SQL via a script, the PowerShell option is a bit more flexible: we can export straight to a SQL table and run the assessment across multiple instances.

I’ll follow the PowerShell instructions from the SQL Assessment API web page; this is the code that will run the assessment on my localhost instance and also write the results to a table (Assessment.Results) in the SQLAssessmentDemo database. The -Force parameter will create the objects if they don’t already exist:
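The code itself is on the linked page, but a minimal sketch following the SQL Assessment API documentation looks something like this (instance and object names match the quote above; treat it as an approximation rather than David's exact script):

    # Assess the local instance and write results to Assessment.Results
    # in SQLAssessmentDemo; -Force creates the objects if they're missing
    Get-SqlInstance -ServerInstance 'localhost' |
        Invoke-SqlAssessment |
        Write-SqlTableData -ServerInstance 'localhost' `
            -DatabaseName SQLAssessmentDemo `
            -SchemaName Assessment `
            -TableName Results `
            -Force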

Read on for that code as well as a few tweaks you can make to make your life easier.


DirectQuery and SQL Query Limitations

Chris Webb lays out the limits:

A few days ago I was tagged on a thread on Twitter: my old pal Mim was upset to find that he couldn’t use a SQL query with a Common Table Expression as the source of a table in DirectQuery mode. He worked out why without my help but I thought that it was worth me writing up an explanation for other people who might be struggling with the same problem.
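The short version of the explanation: Power BI wraps a DirectQuery source query in an outer SELECT, and T-SQL doesn’t allow a CTE inside a subquery. A hypothetical illustration (table and column names invented):

    -- Fails as a DirectQuery source: once wrapped in an outer
    -- SELECT * FROM ( ... ) subquery, the CTE is no longer legal
    WITH Sales AS (
        SELECT CustomerID, SUM(Amount) AS TotalSales
        FROM dbo.FactSales
        GROUP BY CustomerID
    )
    SELECT * FROM Sales;

    -- Works: the same logic rewritten as a derived table
    SELECT *
    FROM (
        SELECT CustomerID, SUM(Amount) AS TotalSales
        FROM dbo.FactSales
        GROUP BY CustomerID
    ) AS Sales;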

That’s a fairly frustrating limitation, even if you have control of the database you’re querying.


Selective Document Copy in Elasticsearch

The Hadoop in Real World team show how to migrate specific documents when building a new index:

As shown in the other post, we still use a reindex by specifying the source and destination, but this time we also specify a query in source, along with the term, which indicates that documents with state = ‘ny’ will be filtered from the source.

So only documents with state ny will be copied to the new index account_v3 with this reindex operation.
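Based on that description, the reindex call looks roughly like this (the source index name is a guess; the destination and the term filter follow the quote):

    POST _reindex
    {
      "source": {
        "index": "account_v2",
        "query": {
          "term": { "state": "ny" }
        }
      },
      "dest": {
        "index": "account_v3"
      }
    }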

Click through for an example of how this works.


Tempdb Improvements over Time

Melody Zacharias takes us through the history of tempdb improvements:

In my previous blog and this one, there is a lot of talk of trace flags, and you can see their life cycle: they are implemented and then sometimes become part of the product over time. For a long time, trace flag 1118 was a common performance improvement trick known only by industry experts. Over the years as a consultant, I have often been asked by clients if they should use trace flags, and generally speaking, as long as they are documented by Microsoft they are safe to use. I would certainly not recommend using undocumented trace flags; they are not supported and therefore not recommended. So always be sure to check the trace flag list before setting trace flags. Of course, as I say that, I have to offer another one.
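Trace flag 1118 is a good example of that life cycle: it made allocations use uniform extents instead of mixed ones, and as of SQL Server 2016 that behavior is the default for tempdb and a regular database option elsewhere. A quick sketch (the database name is a placeholder):

    -- Pre-2016: enable TF 1118 server-wide (or use -T1118 as a startup parameter)
    DBCC TRACEON (1118, -1);

    -- SQL Server 2016+: uniform extents are already the default for tempdb;
    -- for other databases the same behavior is exposed as a database option
    ALTER DATABASE [YourDatabase] SET MIXED_PAGE_ALLOCATION OFF;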

Read on for more details and also advice on getting the most out of tempdb.


Merging and Appending Queries in Power Query

Joe Billingham shows off two similar-sounding functions:

I had an issue recently where a third-party application had been updated and both the new and legacy versions were being used side-by-side. Logging data from both versions was being written to two separate Azure SQL databases.

The customer needed a Power BI report showing both old and new logging data sets as a single source. If both databases were SQL Server databases, I could have written a view with a cross-database join, imported that into Power BI and thought no more about it. However, the two sources being Azure SQL Databases, with no easy way to join the tables, caused an issue.

This is where the Merge and Append functions in Power Query come in.
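In rough terms, Append stacks the rows of two queries on top of each other (like UNION ALL), while Merge matches rows between them on key columns (like a JOIN). A minimal M sketch, assuming two existing queries named LegacyLogs and NewLogs with a shared SessionID column (all names invented):

    let
        // Append: stack old and new logging rows into a single table,
        // which fits the "single source" requirement described above
        Appended = Table.Combine({LegacyLogs, NewLogs}),

        // Merge: by contrast, this joins the two tables on a key column
        Merged = Table.NestedJoin(
            LegacyLogs, {"SessionID"},
            NewLogs, {"SessionID"},
            "NewLogsData", JoinKind.LeftOuter
        )
    in
        Appended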

Read on for the solution.


Delta Lake Operability in Azure Synapse Analytics

James Serra lets us know when and where we can use Delta Lake within Azure Synapse Analytics:

Many companies are seeing the value in collecting data to help them make better business decisions. When building a solution in Azure to collect the data, nearly everyone is using a data lake. A majority of those are also using delta lake, which is basically a software layer over a data lake that gives additional features. I have yet to see anyone using competing technologies to delta lake in Azure, such as Apache Hudi or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi and Open Source Data Lake Table Formats: Evaluating Current Interest and Rate of Adoption).

Read on for more information.


Saving Data Requests

Kenneth Fisher shares a good idea:

The other day a friend of mine mentioned that they were questioned about one of the scripts they had run recently, and they were in fact able to pull that script back up and show them. I was reminded of something I’ve thought about writing several times, so it seemed like a good excuse.

An old manager of mine started his team saving all of our work in folders on a shared network space. A few months back I started doing this again, just in a subfolder of my documents folder.

Yep. Every data request I have gets saved in a folder. Because those are usually in Excel, I have the spreadsheet for output and the original query as a separate tab within the report.


Workload Analysis with XESmartTarget

Gianluca Sartori continues a series on XESmartTarget. First up, let’s analyze a workload:

The idea comes from a blog post by Brent Ozar about “How to Find Out Whose Queries are Using The Most CPU”. Brent uses the Resource Governor to detect who’s using the CPU. That’s an interesting approach, but you can do the same more efficiently with XESmartTarget.

Analyzing a workload means capturing all the queries on a server, categorizing them by application name, database name, and login name, and creating samples at regular intervals (let’s say every minute) in order to describe the behavior of the workload over time.

From there, we’ll capture the queries running on our server:

In the previous recipe we used XESmartTarget to analyze a workload and we could characterize it by application/login/database. Now let’s take one more step: let’s capture all the queries and their plans and add query_hash and query_plan_hash to the analysis table. We will also capture plans and query text on separate tables.
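XESmartTarget reads from an Extended Events session on the server, so the server-side piece is roughly a session like this (a minimal sketch; the session in Gianluca's recipes may define more events and fields):

    -- Capture completed batches with the fields the recipes group by:
    -- application, database, login, plus query_hash for the second step
    CREATE EVENT SESSION [Workload] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed (
        ACTION (
            sqlserver.client_app_name,
            sqlserver.database_name,
            sqlserver.server_principal_name,
            sqlserver.query_hash
        )
    );

    ALTER EVENT SESSION [Workload] ON SERVER STATE = START;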

It is fun getting to see all of the versatility in XESmartTarget.


Capturing a Call Stack without a Dump

Bob Dorr shows off Arvind’s SQL Call Stack Resolver:

Some outputs, such as the XEvent call stack action, output the raw stack frame information and require a rebase to loaded module information in order to symbolize. The security feature for random address virtualization loads images at different addresses each time the image is loaded. This requires the module base address and the raw address in order to calculate the relative virtual address for symbolization.
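The arithmetic behind that rebase is small: subtract the module’s load address from the raw frame address to get the relative virtual address (RVA) that symbol files are keyed on. A sketch in C (names invented):

    #include <stdint.h>

    /* ASLR loads a module at a different base address on each load, so only
       the offset into the module is stable; symbols are resolved from it. */
    uint64_t relative_virtual_address(uint64_t raw_frame_address,
                                      uint64_t module_base_address)
    {
        return raw_frame_address - module_base_address;
    }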

Click through for more information.
