Press "Enter" to skip to content

August 4, 2022

KQL Parse

Robert Cain continues a series on KQL:

The previous post in this series, Fun With KQL – Extract, showed how we can use the extract operator to pull part of a string using regular expressions. I think you’d agree, though, that using regular expressions can be a bit tricky.

If you have a string that is well formatted with recurring text you can count on, and want to pull one or more strings from it into their own columns, Kusto provides a much easier to use operator: parse.

Robert includes a series of examples, including examples of things you cannot do.
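
To give a flavor of the syntax, here is a minimal sketch (not from Robert's post; the event text and column names are made up). The parse operator matches the literal fragments and captures whatever falls between them into new, optionally typed, columns:

// Hypothetical well-formatted string: parse splits it into typed columns.
print EventText = "resource=Kusto01, region=eastus, durationSec=27"
| parse EventText with "resource=" ResourceName ", region=" Region ", durationSec=" DurationSec:long

No regular expressions are needed, so long as the literal markers ("resource=", ", region=", and so on) appear reliably in every row.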


Data Mesh at Netflix

Bo Lei, et al., describe their Data Mesh architecture:

Real-time processing technologies (a.k.a. stream processing) are one of the key factors that enable Netflix to maintain its leading position in the competition to entertain our users. Our previous-generation streaming pipeline solution, Keystone, has a proven track record of serving many of our key business needs. However, as we expand our offerings and try out new ideas, there’s a growing need to unlock other emerging use cases that Keystone does not yet cover. After evaluating the options, the team decided to create Data Mesh as our next-generation data pipeline solution.

Click through for a high-level overview of the architecture.


Feeding Synapse Spark Info to On-Prem Kafka Clusters

Bhadreshkumar Shiyal finds a solution:

Microsoft’s official documentation for Azure Data Factory contains a tutorial explaining how to access an on-premises SQL Server from an Azure Data Factory instance inside a Managed VNet. You can go through that article here: Access on-premises SQL Server from Data Factory Managed Vnet using Private Endpoint – Azure Data Fac….

Our approach is based upon that article’s solution, but to meet our requirements, we substituted on-prem Apache Kafka for the on-prem SQL Server and, instead of an Azure Data Factory inside a Managed VNet, we used a Synapse Workspace inside a Managed VNet. The “Forwarding Vnet” concept explained in the above tutorial remains as-is in our approach.

As soon as you turn on Data Exfiltration Protection (DEP), the lockdown is real. Click through to see what the process of exfiltrating data through an approved mechanism looks like.


Monitoring Log Shipping with T-SQL

Lori Brown tracks log shipping operations:

For Log Shipping, some information is only available on the primary or only on the secondary.  That means that I had to set up a linked server on the primary to connect to the secondary.  I do not want to create any OPENROWSET queries for this since that would require that AdHoc Distributed Queries be enabled.  I am not a fan of opening that up for the following reasons:

1) It can allow buffer overflow bugs to compromise systems.

2) It can allow a compromised server to connect to a non-compromised server. 

Read on to see what Lori prefers instead.
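
As a sketch of the general idea (not necessarily Lori's exact queries): the log shipping monitor tables in msdb expose the backup side on the primary and the copy/restore side on the secondary, and a linked server bridges the two. [SecondaryLS] below is a hypothetical linked server name.

-- On the primary: last backup per log-shipped database.
SELECT p.primary_database, p.last_backup_file, p.last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary AS p;

-- Via the linked server: copy and restore progress on the secondary.
SELECT s.secondary_database, s.last_copied_date,
       s.last_restored_date, s.last_restored_latency
FROM [SecondaryLS].msdb.dbo.log_shipping_monitor_secondary AS s;

Comparing last_backup_date on the primary against last_restored_date on the secondary gives a rough measure of how far behind the secondary is.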


Safe and Unsafe Simple Parameterization

Paul White puts on the safety glasses:

When a statement passes the earlier parser and decoder checks, it arrives at the trivial plan stage as a prepared (parameterized) statement. The query processor now needs to decide if the parameterization attempt is safe.

Parameterization is considered safe if the query processor would generate the same plan for all possible future parameter values. This might seem like a complex determination to make, but SQL Server takes a practical approach.

Read on to learn more about the process.
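
One way to observe the outcome for yourself (my illustration, not Paul's; dbo.Orders is a hypothetical table with a unique clustered key on OrderID): run a query with a literal, then look for a parameterized entry in the plan cache.

-- An equality predicate on a unique key is a classic safe case:
-- any literal value produces the same plan shape.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderID = 42;

-- If simple parameterization succeeded, the cached text shows a
-- parameter marker such as @1 and objtype = 'Prepared'.
SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%Orders%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';  -- exclude this probe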


Concurrency Control and VACUUM in Postgres

Paul Randal explains how multi-version concurrency control works in Postgres:

PostgreSQL uses an optimistic isolation system known as Multi-Version Concurrency Control (MVCC). MVCC ensures transactions writing data to the database don’t block concurrent transactions needing to read the data being modified. This works through the magic of row-versioning—PostgreSQL creates versions of rows in the database tables to minimize blocking from concurrent access. As more and more versions are generated, a garbage control mechanism called VACUUM must be used to ensure the tables are properly maintained. In this article, I’ll explain how all this works via a series of examples.

This is quite similar to Read Committed Snapshot Isolation in SQL Server but with a couple of twists, including the need to vacuum tuples.
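
A quick way to watch the row versioning happen (a minimal sketch using a throwaway table; xmin and xmax are PostgreSQL's hidden system columns recording the creating and the deleting/updating transaction IDs for each row version):

CREATE TABLE mvcc_demo (id int PRIMARY KEY, val text);
INSERT INTO mvcc_demo VALUES (1, 'original');

SELECT xmin, xmax, * FROM mvcc_demo;  -- note the current xmin

UPDATE mvcc_demo SET val = 'updated' WHERE id = 1;

SELECT xmin, xmax, * FROM mvcc_demo;  -- a new row version with a new xmin

VACUUM (VERBOSE) mvcc_demo;  -- reclaims the dead (pre-update) version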


From Azure Data Explorer to Excel

Dany Hoter views data in Excel:

In a previous article, Direct Query from Excel to Azure Data Explorer (microsoft.com), I described a way to mimic Direct Query access à la Power BI in Excel.

The method used in that article allows the user to filter the imported data using values entered into cells in the grid.

In this article, I would like to describe a way to really query Kusto data in real time, without importing any data and without any volume limitations.

Read on to see how, though there’s a pretty big intermediate step.
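
I don't know Dany's exact mechanism, but for flavor: Excel's stock Azure Data Explorer connector can push a KQL query to the cluster via Power Query (M), so only the query's results ever land in the workbook. This sketch runs against the public help cluster:

let
    Source = AzureDataExplorer.Contents(
        "https://help.kusto.windows.net",
        "Samples",
        "StormEvents | summarize EventCount = count() by State",
        [NoTruncate = true]
    )
in
    Source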


“Expensive” Queries

Erik Darling asks, what’s in a name?

When we talk about finding queries to tune, there’s an unfortunate term that gets thrown around: Expensive Queries.

Why is it unfortunate? Well, it reinforces the wrong mindset when it comes to query tuning, and leads people down the wrong path, looking at the wrong metrics.

I disagree on the “bad name” bit but agree on the substance. The term “expensive query” has a very useful connotation: this is a query which requires a significant amount of resources. Where I fully agree with Erik is that “query cost” from the optimizer does not do a great job of describing “significant amount of resources.” There is also a relevant point that expensive queries may not be the most important ones to look at. Reasons why can include:

  • The query runs at a time when there’s little load on the system, so it does not impact anybody else.
  • The query runs within acceptable performance boundaries for customers: it may take 10 minutes to run but it’s a batch process and the relevant business unit might only need it within an hour.
  • The amount of work that the query is doing is such that further optimizations are either not possible at all or they are only possible with a significant restructure that the business is unwilling to accept.

Even so, the term “expensive query” is still very useful. So is “expensive query relative to what it could be,” although we do tend to conflate the latter with the former. But now we’re getting deep into semantics and I forgot my waders.
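
To make the metrics point concrete, here is a hedged sketch (not from Erik's post) using the plan cache DMVs: sorting by different columns surfaces different "expensive" queries, which is exactly why the term needs qualification.

-- Rank cached statements by average CPU; swap the ORDER BY to
-- avg_reads or avg_duration and the "most expensive" list changes.
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu,
    qs.total_logical_reads / qs.execution_count AS avg_reads,
    qs.total_elapsed_time / qs.execution_count AS avg_duration,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu DESC;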


Power BI Dataflows and On-Prem Data Gateways

Marc Lelijveld keeps getting pulled back in:

Another week, another challenge! Summer holidays are the perfect time to run into challenges that you normally would not encounter, or simply do not know about because they are fixed before you know it. This blog will elaborate on a case I had at a client, where we ran into issues with the on-premises data gateway in Power BI, in particular related to building dataflows based on sources that connect via the gateway.

Read on for the problem as well as Marc’s solution.
