Press "Enter" to skip to content

Category: KQL

Joins in KQL

Robert Cain picks back up on a series:

I’m still working on my ArcaneBooks project, mostly documentation, so I thought I’d take a quick break and go back to a few posts on KQL (Kusto Query Language). In this post we’ll cover the join operator.

join in KQL operates much as it does in SQL: it combines two datasets into a single result.

Even so, there’s a little more to joins in KQL than in T-SQL, with innerunique being unique to KQL; the closest T-SQL analog would probably be a CROSS APPLY (SELECT TOP(1) ...) operation. KQL also folds the concepts of EXISTS and NOT EXISTS in SQL into join, via the leftsemi and leftanti join kinds.
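
To make that mapping concrete, here is a minimal sketch of those join kinds, using two hypothetical tables (Orders and Customers, joined on a CustomerId column) that are not from Robert's post:

    // innerunique (KQL's default join kind) keeps a single arbitrary row per
    // join key from the left table before matching it against the right table.
    Orders
    | join kind=innerunique (Customers) on CustomerId

    // leftsemi and leftanti cover the EXISTS / NOT EXISTS patterns:
    Orders
    | join kind=leftsemi (Customers) on CustomerId   // orders whose customer exists

    Orders
    | join kind=leftanti (Customers) on CustomerId   // orders with no matching customer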


Monitoring Datasets with Log Analytics for Power BI

Chris Webb has had a busy month:

Maybe the fourth- or fifth-most exciting Power BI-related announcement last month (admittedly it was an exciting month) was that Log Analytics for Power BI datasets is now GA and you can now link multiple Power BI workspaces to a single Log Analytics workspace. This, for me, means that enabling Log Analytics has gone from being useful to essential for anyone interested in monitoring Analysis Services engine activity in an enterprise Power BI/Fabric deployment. It also works with Direct Lake datasets too!

Read on for a few KQL queries which allow you to get pertinent information from your Log Analytics workspace.
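
For a flavor of what those queries can look like, here is a minimal sketch (not taken from Chris's post) that assumes the PowerBIDatasetsWorkspace table the integration writes to, along with its OperationName, ArtifactName, DurationMs, and EventText columns:

    // Ten longest-running dataset queries over the last day.
    PowerBIDatasetsWorkspace
    | where TimeGenerated > ago(1d)
    | where OperationName == "QueryEnd"
    | project TimeGenerated, ArtifactName, DurationMs, EventText
    | top 10 by DurationMs desc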


Showing KQL Queries

Dany Hoter looks at some KQL query plans:

Each visual on the page is going to summarize data from one or more queries and add the summarize part of the query.

If your model contains multiple tables in DirectQuery mode with relationships between them, the connector will generate joins between the tables.

Selecting values in filters will create multiple where conditions.

In order to see the final query and understand the performance implications of each query and the total query load created by a report, you need to use the command “.show queries” in the context of the database.

Click through for Dany’s notes on the topic, including a few tips on what to look for.
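
As a rough sketch of how that works, the command's output can be piped into ordinary KQL operators to surface the heaviest report-generated queries; the column names below follow the typical .show queries output and may need adjusting:

    // Recent queries against the current database, slowest first.
    .show queries
    | where StartedOn > ago(1h)
    | project StartedOn, Duration, User, Application, Text
    | order by Duration desc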


Kusto Detective Agency Season 2

Anshul Sharma announces season 2 of a great program:

Greetings, esteemed investigators and data enthusiasts! We are thrilled to announce the highly anticipated launch of Kusto Detective Agency Season 2. After the immense success of Season 1, with over 10,000 participants diving deep into the world of data investigation, we cannot thank you enough for your incredible support and enthusiasm! 

Season 2 of Kusto Detective Agency is set to be an even grander adventure, filled with more challenges, mind-bending mysteries, and countless opportunities to showcase your analytical skills. Prepare yourself for a journey that will push the boundaries of your data prowess and reward you with an unforgettable experience. 

I just finished season 1 yesterday and saw the link to season 2, but haven’t touched it yet. If you’re learning the Kusto Query Language, this is a series of challenges which will really push your skills. As I was going through season 1, there were several times when I’d say “I know exactly how to answer this in T-SQL, but how do I answer it in KQL?” If your KQL skills aren’t great, plenty of people have shared their answers online as well, so you can walk through the challenges with them.

Admittedly, I want more Poppy the goldfish lore. The twist in challenge 5 was not something I’d expected.


Connecting Power BI to ADX via Private Endpoint

Dany Hoter keeps it all on the Azure backbone:

The PBI developer creating datasets and reports needs to connect to the ADX cluster using Power BI Desktop.

To establish such a connection, the user’s IP address should be allowed access to the private endpoint.

The access should be tested using Kusto Web explorer (KWE) to make sure that the cluster can be reached.

If KWE can connect, Power BI Desktop should also connect successfully, and a report using the cluster in DirectQuery or Import mode can be created.

That’s the goal, and Dany shows us the way to do it.


Log Tokenization and Reduction in Azure Data Explorer

Brian Bønk tries out some new functions:

Before the release described below, the ADX service had a good handful of features to help with anomaly detection and clustering on semi-structured data.

With functions like basket() and autocluster(), the service can find patterns based on common values across the columns. The problem with these functions is that they are not able to parse free-text columns and extract tokens and repeatable patterns.

Yes, you could use the diffpatterns_text() function, but that is not strong enough to cover the real diversity of free-text log records.

It’s interesting that the end result is looking for log entries whose shape differs from the norm. That’s a clever approach to log file analysis.
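
To make the distinction concrete, here is a minimal sketch of the pattern-mining functions named in the excerpt, run against a hypothetical Exceptions table with a free-text Message column (these are not the new functions from Brian's post):

    // autocluster() and basket() mine common value combinations across columns.
    Exceptions
    | where Timestamp > ago(1d)
    | evaluate autocluster()

    // reduce by groups similar free-text strings into patterns, which is the
    // same general idea as the log reduction Brian describes.
    Exceptions
    | reduce by Message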


Converting JSON to a Relational Schema with KQL

Devang Shah does some flattening and moving:

In the world of IoT devices, industrial historians, infrastructure and application logs and metrics, and machine- or software-generated telemetry, there are often scenarios where the upstream data producer produces data in non-standard schemas, formats, and structures, which makes it difficult to analyze that data at scale. Azure Data Explorer provides some useful features to run meaningful, fast, and interactive analytics on such heterogeneous data structures and formats.

In this blog, we’re taking an example of a complex JSON file as shown in the screenshot below. You can access the JSON file from this GitHub page to try the steps below.

Click through for the example, which is definitely non-trivial.
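
As a minimal, self-contained sketch of the usual flattening tools (parse_json() to turn a string into a dynamic value, mv-expand to unnest arrays, and project to pull bag fields into columns), with a made-up payload rather than the file from the post:

    datatable(RawJson: string)
    [
        '{"deviceId": "sensor-01", "readings": [{"ts": "2023-07-01T00:00:00Z", "temp": 21.5}, {"ts": "2023-07-01T00:01:00Z", "temp": 21.7}]}'
    ]
    | extend Parsed = parse_json(RawJson)
    | mv-expand Reading = Parsed.readings
    | project DeviceId = tostring(Parsed.deviceId),
              Timestamp = todatetime(Reading.ts),
              Temperature = todouble(Reading.temp)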


KQL: Show Me

Brian Bønk shows off:

In Kusto and the services Azure Data Explorer and Synapse Data Explorer, there is one main entry point for metadata queries: the .show command. The .show command precedes each of the following commands for exploring the metadata in the engine:

  1. queries
  2. commands
  3. commands-and-queries
  4. journal
  5. operations
  6. ingestion failures
  7. table data statistics

Read on for examples of how the .show command can be quite useful.
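
For a sense of how a few of these read in practice, here are some illustrative one-liners (each management command runs as its own request, and the filter columns follow the typical output schemas):

    // Commands and queries from the last hour, combined.
    .show commands-and-queries
    | where StartedOn > ago(1h)

    // Management operations still in flight.
    .show operations
    | where State == "InProgress"

    // Ingestion failures from the last day.
    .show ingestion failures
    | where FailedOn > ago(1d)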
