Press "Enter" to skip to content

Author: Kevin Feasel

Lessons Learned Troubleshooting High CPU in Azure SQL DB

Kendra Little has an after-action report:

I’ve just had the pleasure of publishing my first new article in the Microsoft Docs, Diagnose and troubleshoot high CPU on Azure SQL Database.

This article isn’t really “mine” – anyone in the community can create a Pull Request to suggest changes, or others at Microsoft may take it in a different direction. But I got to handle the outlining, drafting, and incorporation of suggested changes for the initial publication.

It was a ton of fun, and I learned a lot about Azure SQL Database in the process.

Click through for what Kendra learned specific to Azure SQL Database, and also read the article itself.
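
Diagnosing high CPU usually starts with the dynamic management views. As a taste of that style of investigation, here is a minimal sketch (mine, not taken from Kendra's article) that pulls the top CPU consumers from the plan cache:

```sql
-- A minimal sketch (not from the article): list the top CPU-consuming
-- statements in the plan cache. total_worker_time is in microseconds,
-- so divide by 1000 to report milliseconds.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```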

Permission Requirements for Temp Tables

Jeff Iannucci looks at permissions:

Managing permissions is a constant issue for Database Administrators, but rarely do DBAs consider permissions for tempdb. Everybody’s looking for something, but how often do you get requests for “access to read and write in the tempdb database”? Like…never?

OK, but what if you were asked the subject of this post in a job interview? Even if you’ve worked with SQL Server for ages, would you know how to answer this? Moreover, would you know why the answer should give you some concern?

Read on for the answers.
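
Without spoiling the answer, here is an illustration of why the question is sneakier than it looks: a principal with no explicit permissions in tempdb can still create and use temp tables, because SQL Server allows that by default. A minimal sketch (the TempDemo login is hypothetical):

```sql
-- Hypothetical demo: TempDemo has no explicit grants in tempdb,
-- yet temp table creation succeeds for any connected principal.
CREATE LOGIN TempDemo WITH PASSWORD = 'Str0ngP@ssw0rd!';
CREATE USER TempDemo FOR LOGIN TempDemo;

EXECUTE AS USER = 'TempDemo';

CREATE TABLE #Scratch (Id int);   -- works: no GRANT was ever issued
INSERT INTO #Scratch (Id) VALUES (1);
SELECT Id FROM #Scratch;

DROP TABLE #Scratch;
REVERT;
```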

Discovering Pester Tags

Jeffrey Hicks has a two-parter on discovering Pester tags. Part one is Jeffrey’s take:

As I resolved at the end of last year, I am doing more with Pester in 2022. I’m getting a bit more comfortable with Pester 5 and as my tests grow in complexity I am embracing the use of tags. You can add tags to different Pester test elements. Then when you invoke a Pester test, you can filter and only run specific tests by their tag. As I was working, I realized it would be helpful to be able to identify all of the tags in a test script. After a bit of work, I came up with a PowerShell function.

Part two is a reader’s take:

Yesterday I shared some PowerShell code I wrote to discover tags in a Pester test. It works nicely and I have no reason to complain. But as usual, there is never simply one way to do something in PowerShell. I got a suggestion from @FrodeFlaten on Twitter on an approach using the new configuration object in Pester 5.2. I’ll readily admit that I am still getting up to speed on the latest version of Pester. That’s one of my goals for this year, so this was a great chance to learn something new.

Click through to see how both approaches work.

Flexible File Components with SSIS

Bill Fellows hides SSIS DNA in a can of Barbasol shave cream:

The Azure Feature Pack for SSIS is something I had not worked with before today. I have a client that wants to use the Flexible File Task/Flexible File Source/Flexible File Destination but they were having issues. The Flexible File tools allow you to work with Azure Blob storage. We were dealing with ADLS Gen2 but the feature pack can work with classic blob storage as well. In my hubris, I said no problem, I know SSIS. Dear reader, I did not know as much as I thought I did…

Click through for a whopper of a story. But be sure to read to the very end, as you don’t want to stop at using TLS 1.0.

When Not to Use Apache Kafka

Kai Waehner looks at when we may (or may not) want to use Apache Kafka:

Apache Kafka is the de facto standard for event streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: When NOT to use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How do you qualify Kafka out when it is not the right tool for the job? This blog post explores the DOs and DON'Ts. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.

I appreciate this kind of post a lot, especially from someone directly invested in the product. No technology can or should fit all purposes and the better you can explain where something does not fit, the better you can explain where it does fit.

Azure Data Factory Activity Queue Times

Meagan Longoria waits in line:

I’ve been working on a project to populate an Operational Data Store using Azure Data Factory (ADF). We have been seeking to tune our pipelines so we can import data every 15 minutes. After tuning the queries and adding useful indexes to target databases, we turned our attention to the ADF activity durations and queue times.

Data Factory places the pipeline activities into a queue, where they wait until they can be executed. If your queue time is long, it can mean that the Integration Runtime on which the activity is executing is waiting on resources (CPU, memory, networking, or otherwise), or that you need to increase the concurrent job limit.

Click through to see how you can calculate queue times across activities, pipelines, and data factories.

Log Analytics and Power BI

Chris Webb has started a new series:

As a Power BI administrator you want to see what’s happening in your tenant right now: who’s running queries, which datasets are refreshing and so on. That way if a user calls you to complain that their report is slow or their dataset hasn’t refreshed yet you can start troubleshooting immediately. Power BI’s integration with Log Analytics (currently in preview with some limitations) is a great source of information for this kind of troubleshooting: it gives you the ability to send various useful Analysis Services engine events, events that give you detailed information about queries and refreshes among other things, to Log Analytics with a latency of only a few minutes. Once you’ve done that you can write KQL queries to understand what’s going on, but writing queries is time consuming – what you want, of course, is a Power BI report.

Click through to see how to use Power BI to access KQL data in Log Analytics, which you’re using to monitor Power BI behavior.

Macros in Tabular Editor 3

Matt Allington notes a key feature in Tabular Editor 3:

Today I am talking about Macros in Tabular Editor 3. This is a new name for an old feature. In Tabular Editor 2, this feature is called Advanced Scripting (a term I actually prefer, but oh well).  I think one reason for the name change is there are now multiple types of scripting, including the new DAX scripting feature (I covered that as a key feature I love in the article linked above).

Click through to see how it works. Tabular Editor 3 is a paid product, though the free Tabular Editor 2 is still around if your employer won’t front the cash for 3.

Automatic Plan Correction in Query Store

Deepthi Goguri hits on the type of benefit Query Store can provide:

How wonderful would it be if SQL Server had a way of automatically tuning our queries based on our workloads? Amazing, right?

Thanks to Microsoft for introducing the automatic tuning feature in SQL Server 2017; it is also available in Azure SQL Database. Automatic tuning has two features: automatic plan correction and automatic index correction. (Source: Microsoft)

So, what is this automatic option, and how does it work?

Click through to learn more. My experience with it has been very positive. It’s not perfect, but it does work really well.
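
If you want to poke at it yourself, here's a minimal sketch (syntax per Microsoft's documentation): inspect the recommendations Query Store has generated, then turn on automatic plan correction.

```sql
-- Review plan-regression recommendations generated from Query Store data.
SELECT reason,
       score,
       JSON_VALUE(details, '$.implementationDetails.script') AS recommended_fix,
       JSON_VALUE(state, '$.currentValue') AS recommendation_state
FROM sys.dm_db_tuning_recommendations;

-- Enable automatic plan correction (SQL Server 2017+ and Azure SQL Database).
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
```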
