Press "Enter" to skip to content

Day: March 4, 2022

Selective Document Copy in Elasticsearch

The Hadoop in Real World team show how to migrate specific documents when building a new index:

As shown in the other post, we still use a reindex by specifying the source and destination but this time we also specify a query in source along with the term which indicates that the documents with state = ‘ny’ will be filtered from the source.

So only documents with state ny will be copied to the new index account_v3 with this reindex operation.

Click through for an example of how this works.
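For a sense of what that filtered reindex request looks like, here is a minimal sketch using Python's requests library against the _reindex endpoint; the cluster URL and the source index name are assumptions, while account_v3 and the state = 'ny' term filter come from the post.

import requests

# Assumed cluster URL and source index name; account_v3 is the destination from the post.
body = {
    "source": {
        "index": "accounts",                    # assumed source index name
        "query": {"term": {"state": "ny"}}      # copy only documents with state = 'ny'
    },
    "dest": {"index": "account_v3"}
}

resp = requests.post("http://localhost:9200/_reindex", json=body)
resp.raise_for_status()
print(resp.json())  # includes how many documents were created in account_v3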


Tempdb Improvements over Time

Melody Zacharias takes us through the history of tempdb improvements:

In my previous blog and this one, there is a lot of talk of trace flags and you can see the life cycle of them. They are implemented and then sometimes become part of the product over time. For a long time, the trace flag 1118 was a common performance improvement trick known only by industry experts. Over the years as a consultant, I have often been asked by clients if they should use trace flags, and generally speaking, as long as they are documented by Microsoft they are safe to use. I would certainly not recommend using undocumented trace flags. They are not supported and therefore not recommended. So always be sure to check the trace flag list before setting trace flags. So of course, as I say that, I have to offer another one.

Read on for more details and also advice on getting the most out of tempdb.
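As a quick way to follow that advice, you can list which trace flags are already enabled on an instance before adding new ones. A minimal sketch from Python via pyodbc, with a placeholder connection string:

import pyodbc

# Placeholder connection string; point it at your own instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# -1 asks for every trace flag enabled globally; one row comes back per enabled flag.
cursor.execute("DBCC TRACESTATUS(-1) WITH NO_INFOMSGS")
for row in cursor.fetchall():
    print(row.TraceFlag, "enabled globally" if row.Global else "enabled for this session")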


Merging and Appending Queries in Power Query

Joe Billingham shows off two similar-sounding functions:

I had an issue recently where a third-party application had been updated and both the new and legacy versions were being used side-by-side. Logging data from both versions was being written to two separate Azure SQL databases.

The customer needed a Power BI report showing both old and new logging data sets as a single source. If both databases were SQL Server databases, I could have written a view with a cross-database join, imported that into Power BI and thought no more about it. However, the two sources being Azure SQL Databases, with no easy way to join the tables, caused an issue.

This is where the Merge and Append functions in Power Query come in.

Read on for the solution.
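The post itself works in Power Query's M language, but the distinction is easier to hold onto if you think of Append as stacking rows from two tables and Merge as joining two tables on a key. A rough analogy in pandas, with made-up table and column names:

import pandas as pd

# Made-up logging rows standing in for the legacy and new Azure SQL sources.
legacy_log = pd.DataFrame({"LogId": [1, 2], "Message": ["old A", "old B"]})
new_log = pd.DataFrame({"LogId": [3, 4], "Message": ["new A", "new B"]})
users = pd.DataFrame({"LogId": [1, 3], "UserName": ["alice", "bob"]})

# Append: union the two logging tables into a single combined source.
combined_log = pd.concat([legacy_log, new_log], ignore_index=True)

# Merge: join the combined log to another table on a key column.
enriched = combined_log.merge(users, on="LogId", how="left")
print(enriched)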


Delta Lake Operability in Azure Synapse Analytics

James Serra lets us know when and where we can use Delta Lake within Azure Synapse Analytics:

Many companies are seeing the value in collecting data to help them make better business decisions. When building a solution in Azure to collect the data, nearly everyone is using a data lake. A majority of those are also using delta lake, which is basically a software layer over a data lake that gives additional features. I have yet to see anyone using competing technologies to delta lake in Azure, such as Apache Hudi or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi and Open Source Data Lake Table Formats: Evaluating Current Interest and Rate of Adoption).

Read on for more information.


Saving Data Requests

Kenneth Fisher shares a good idea:

The other day a friend of mine mentioned that they were questioned on one of the scripts they ran recently and they were in fact able to pull that script back up and show them. I was reminded of something I’ve thought about writing several times so it seemed like a good excuse.

An old manager of mine started his team saving all of our work in folders on a shared network space. A few months back I started doing this again, just in a subfolder of my documents folder.

Yep. Every data request I have gets saved in a folder. Because those are usually in Excel, I have the spreadsheet for output and the original query as a separate tab within the report.


Workload Analysis with XESmartTarget

Gianluca Sartori continues a series on XESmartTarget. First up, let’s analyze a workload:

The idea comes from a blog post by Brent Ozar about “How to Find Out Whose Queries are Using The Most CPU”. Brent uses the Resource Governor to detect who’s using the CPU. That’s an interesting approach, but you can do the same more efficiently with XESmartTarget.

Analyzing a workload means capturing all the queries on a server, categorizing them by application name, database name and login name, and creating samples at regular intervals, in order to describe the behavior of the workload over time, let’s say every one minute.
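XESmartTarget drives that grouping from its own configuration, which the post covers; purely to illustrate the kind of aggregation being described, here is a rough pandas sketch over a hypothetical table of captured query events, sampled per minute by application, database, and login.

import pandas as pd

# Hypothetical captured events, one row per completed query.
events = pd.DataFrame({
    "event_time": pd.to_datetime(["2022-03-04 10:00:05",
                                  "2022-03-04 10:00:40",
                                  "2022-03-04 10:01:10"]),
    "client_app_name": ["AppA", "AppA", "AppB"],
    "database_name": ["Sales", "Sales", "Sales"],
    "server_principal_name": ["svc_a", "svc_a", "svc_b"],
    "cpu_time_ms": [120, 340, 80],
})

# One-minute samples, characterized by application / database / login.
summary = (events
           .groupby([pd.Grouper(key="event_time", freq="1min"),
                     "client_app_name", "database_name", "server_principal_name"])
           .agg(executions=("cpu_time_ms", "size"),
                total_cpu_ms=("cpu_time_ms", "sum"))
           .reset_index())
print(summary)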

From there, we’ll capture the queries running on our server:

In the previous recipe we used XESmartTarget to analyze a workload and we could characterize it by application/login/database. Now let’s take one more step: let’s capture all the queries and their plans and add query_hash and query_plan_hash to the analysis table. We will also capture plans and query text on separate tables.

It is fun getting to see all of the versatility in XESmartTarget.


Capturing a Call Stack without a Dump

Bob Dorr shows off Arvind’s SQL Call Stack Resolver:

Some outputs, such as the XEvent call stack action, output the raw stack frame information and require a rebase to loaded module information in order to symbolize. The security feature for random address virtualization loads images at different addresses each time the image is loaded. This requires the module base address and the raw address in order to calculate the relative virtual address for symbolization.
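The rebasing itself is simple arithmetic: subtract the module's load address from the raw frame address to get a relative virtual address, which stays stable across restarts and can be matched against symbols. A tiny sketch with made-up addresses:

# Made-up addresses; in practice they come from the XEvent output and the
# loaded-module list captured from the same instance.
raw_frame_address = 0x00007FFB2A4C5F10    # raw address in the call stack frame
module_base_address = 0x00007FFB2A400000  # where ASLR loaded the module this time

# Relative virtual address, independent of where the image was loaded.
rva = raw_frame_address - module_base_address
print(hex(rva))  # 0xc5f10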

Click through for more information.
