Press "Enter" to skip to content

January 6, 2022

When Not to Use Apache Kafka

Kai Waehner looks at when we may (or may not) want to use Apache Kafka:

Apache Kafka is the de facto standard for event streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: When NOT to use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How to qualify Kafka out as it is not the right tool for the job? This blog post explores the DOs and DONTs. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.

I appreciate this kind of post a lot, especially from someone directly invested in the product. No technology can or should fit all purposes, and the better you can explain where something does not fit, the better you can explain where it does fit.


Log Analytics and Power BI

Chris Webb has started a new series:

As a Power BI administrator you want to see what’s happening in your tenant right now: who’s running queries, which datasets are refreshing and so on. That way if a user calls you to complain that their report is slow or their dataset hasn’t refreshed yet you can start troubleshooting immediately. Power BI’s integration with Log Analytics (currently in preview with some limitations) is a great source of information for this kind of troubleshooting: it gives you the ability to send various useful Analysis Services engine events, events that give you detailed information about queries and refreshes among other things, to Log Analytics with a latency of only a few minutes. Once you’ve done that you can write KQL queries to understand what’s going on, but writing queries is time consuming – what you want, of course, is a Power BI report.

Click through to see how to use Power BI to access KQL data in Log Analytics, which you’re using to monitor Power BI behavior.
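
To give a flavor of the KQL involved, here is a minimal sketch of a query for recent slow dataset queries. It assumes the PowerBIDatasetsWorkspace table and columns such as ArtifactName and ExecutingUser, which is my recollection of the preview schema, so check the names against your own workspace:

    // Slowest QueryEnd events over the past hour
    PowerBIDatasetsWorkspace
    | where TimeGenerated > ago(1h)
    | where OperationName == "QueryEnd"
    | project TimeGenerated, ArtifactName, ExecutingUser, DurationMs, EventText
    | order by DurationMs desc
    | take 20

A query along these lines is the sort of thing you can then surface in a Power BI report rather than running it by hand.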


Azure Data Factory Activity Queue Times

Meagan Longoria waits in line:

I’ve been working on a project to populate an Operational Data Store using Azure Data Factory (ADF). We have been seeking to tune our pipelines so we can import data every 15 minutes. After tuning the queries and adding useful indexes to target databases, we turned our attention to the ADF activity durations and queue times.

Data Factory places the pipeline activities into a queue, where they wait until they can be executed. If your queue time is long, it can mean that the Integration Runtime on which the activity is executing is waiting on resources (CPU, memory, networking, or otherwise), or that you need to increase the concurrent job limit.

Click through to see how you can calculate queue times across activities, pipelines, and data factories.
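
As a rough illustration (not necessarily the approach in the post), if you route Data Factory diagnostics to Log Analytics with resource-specific tables, something like this approximates queue time as the gap between an activity run's Queued and InProgress events. The ADFActivityRun table and those status values are assumptions based on my reading of the ADF diagnostics schema, so validate them against your logs:

    // Approximate queue time per activity from status-transition events
    ADFActivityRun
    | where TimeGenerated > ago(1d)
    | where Status in ("Queued", "InProgress")
    | summarize QueuedAt = minif(TimeGenerated, Status == "Queued"),
                StartedAt = minif(TimeGenerated, Status == "InProgress")
        by ActivityRunId, ActivityName, PipelineName
    | extend QueueSeconds = datetime_diff("second", StartedAt, QueuedAt)
    | summarize AvgQueueSeconds = avg(QueueSeconds), MaxQueueSeconds = max(QueueSeconds)
        by PipelineName, ActivityName
    | order by AvgQueueSeconds desc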


Automatic Plan Correction in Query Store

Deepthi Goguri hits on the type of benefit Query Store can provide:

How wonderful would it be if SQL Server had a way of automatically tuning our queries based on our workloads? Amazing, right?

Thanks to Microsoft for introducing the automatic tuning feature in SQL Server 2017, which is also available in Azure SQL Database. Automatic tuning has two features: automatic plan correction and automatic index correction. (Source: Microsoft)

So, what is this automatic option, and how does it work?

Click through to learn more. My experience with it has been very positive. It’s not perfect, but it does work really well.
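
For reference, the moving parts are plain T-SQL: a DMV that surfaces what Query Store has flagged, and a database option that tells SQL Server to force the last known good plan on its own. A minimal sketch:

    -- See what Query Store has flagged, including the script SQL Server
    -- would run to force the previously better plan
    SELECT reason,
           score,
           JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script
    FROM sys.dm_db_tuning_recommendations;

    -- Turn on automatic plan correction for the current database
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);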


Macros in Tabular Editor 3

Matt Allington notes a key feature in Tabular Editor 3:

Today I am talking about Macros in Tabular Editor 3. This is a new name for an old feature. In Tabular Editor 2, this feature is called Advanced Scripting (a term I actually prefer, but oh well).  I think one reason for the name change is there are now multiple types of scripting, including the new DAX scripting feature (I covered that as a key feature I love in the article linked above).

Click through to see how it works. Tabular Editor 3 is a paid product, though the free Tabular Editor 2 is still around if your employer won’t front the cash for 3.
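
If you haven't seen the feature under either name, a macro is just a short C# script run against the model through Tabular Editor's scripting API. A minimal sketch of the kind of thing people use it for:

    // Apply a consistent format string to every measure in the model
    foreach(var m in Model.AllMeasures)
    {
        m.FormatString = "#,0.00";
    }

In Tabular Editor 3 you can save a script like this as a named macro and re-run it from the UI whenever you need it.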


Addressable Disk Space and File Counts in SQL MI General Purpose

Niko Neugebauer has been busy:

In the previous blog posts in the SQL MI How-Tos we have already touched on the aspect of SQL MI reserved and available disk space, but, as with everything, there are so many things to add and expand on. In this post we shall focus on the General Purpose service tier and the remote disk storage that is used in this service tier. Besides the explicit limits of the addressable space, which is tied to the number of CPU vCores, there are important aspects of the remote storage that will limit the number of database files that can be located there.

If you are interested in other posts on how to discover different aspects of SQL MI, please visit http://aka.ms/sqlmi-howto, which serves as a placeholder for the series.

Click through to see how it all fits together with Managed Instances.
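
To see where you stand, a simple starting point is file counts and allocated sizes per database; this sketch just aggregates sys.master_files. Keep in mind that on the General Purpose tier each file lands on its own remote premium disk, so (as I understand the tier) the space counted against the instance limit is the provisioned disk size, not the file size itself:

    -- File count and allocated size per database on the instance
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) AS file_count,
           SUM(CAST(size AS bigint)) * 8 / 1024 AS allocated_mb  -- size is in 8 KB pages
    FROM sys.master_files
    GROUP BY DB_NAME(database_id)
    ORDER BY allocated_mb DESC;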
