Press "Enter" to skip to content

Month: January 2020

Reporting Queries and User-Defined Functions

Erik Darling tells a story of two things which do not get along very well:

The OLTP part was working fine, but there was a reporting element that was dog slow, and would cause all sorts of problems on the server.

When we got into things, I noticed something rather funny: All of their reporting queries had very high estimated costs, and all the plans were totally serial.

The problem came down to two functions that were used in the OLTP portion, which were reused in the reporting portion.

Read on to understand why SQL Server 2019 and its function inlining capability would not have helped here.
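
As a rough illustration of the kind of trap Erik describes (his post doesn't show the actual functions, so this UDF is hypothetical), a scalar function that calls a time-dependent intrinsic like GETDATE() cannot be inlined even on SQL Server 2019, and any query that calls it is still forced onto a serial plan:

-- Hypothetical scalar UDF: GETDATE() is time-dependent, so SQL Server
-- 2019's scalar UDF inlining cannot rewrite it, and queries calling it
-- remain blocked from parallel plans.
CREATE OR ALTER FUNCTION dbo.DaysSinceOrder (@OrderDate datetime2)
RETURNS int
AS
BEGIN
    RETURN DATEDIFF(DAY, @OrderDate, GETDATE());
END;
GO

-- sys.sql_modules reports whether the engine considers a UDF inlineable.
SELECT OBJECT_NAME(object_id) AS function_name,
       is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.DaysSinceOrder');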

Refreshing Power BI from Your Outlook Calendar

Chris Webb has a nice use for Power Automate and Outlook:

The ideal way to view when multiple events are scheduled is a calendar and we’ve got great calendar functionality in Outlook. What if you could schedule refresh of your datasets from a calendar in Outlook? It turns out to be easier than you might think! Here’s how.

Read the whole thing, including Chris’s warning not to put it into production. You wouldn’t want the person with all of those calendar entries to leave your company and have things suddenly break, after all.

Streams and Tables in Apache Kafka

Michael Noll has started a four-part series. Part one serves as a primer:

In my daily work as a member of Confluent’s Office of the CTO and as the former product manager for ksqlDB and Kafka Streams, I interact with many users of Apache Kafka—be it developers, operators, or architects. Some have a stream processing or Kafka background, some have their roots in relational databases like Oracle and MySQL, and some have neither. But many of them have the same set of technical questions, such as: What’s the difference between an event stream and a database table? Is a Kafka topic the same as a stream? How can I best leverage all these pieces when I want to put my data in Kafka to use?

By the end of this series, you will have answers to each of these common questions and many more. If you are interested to learn about Kafka, I invite you to join me on this journey through Kafka’s core fundamentals!

Part 2 looks at storage fundamentals:

Part 1 of this series discussed the basic elements of an event streaming platform: events, streams, and tables. We also introduced the stream-table duality and learned why it is a crucial concept for an event streaming platform like Apache Kafka®. Here in part 2, we will take a deep dive into Kafka’s storage fundamentals. Notably, we will explore topics and—in my opinion, the most important concept in Kafka: partitions.

We’ll start with the most basic storage question: how do I store data in Kafka?

I’m looking forward to parts 3 and 4.

Installing .NET Core on a Raspberry Pi 4

Hasan Savran continues a series on Microsoft + Pi:

I have been writing about Azure IoT Hub and Raspberry Pi 4. So far, I bought a Raspberry Pi 4 and registered it as an Azure IoT Edge device. Now, I am ready to write some code on the Raspberry Pi. In this post, I will show you how to install .NET Core 3.1 on the Raspberry Pi so we can write some code to generate data and push it to Azure IoT Hub.

First, you need to go to the .NET Core homepage to get the latest version's URL. The page lists all .NET Core versions; 3.1 was the latest when I was writing this blog. Pick the latest one from the list.

Another route might be to install Docker on your Pi.

Why Use Aggregations in Power BI

Kasper de Jonge has five reasons why you should use Power BI aggregations:

A lot has already been written on aggregations and it is one of the most exciting features of Power BI in a while. The guys at SQLBI did a session on it and my colleague Phil Seamark wrote an amazing host of articles with many details and tips and tricks on it. So why another article? Most people think aggregations are only applicable to petabyte-scale data sets. That doesn't have to be true, though; in this blog post I want to give my take on it and explain why it might be applicable to your current, smaller model.

Read on for those reasons.

Causality Tracking in Extended Events

Grant Fritchey explains what causality tracking is in SQL Server Extended Events:

So, then what happens? It’s simple. A given task, let’s say, for example, an INSERT query, will be given a GUID. Then, all the events associated with that task, from the compile, to the query start, query finish, waits, recompiles, etc., all get associated with that GUID. They also receive a sequence number.

Therefore, through the use of causality tracking, you can see all the behavior associated with a given task and the order in which it occurred. On a test system, with no load and no activity, this isn't that big a deal. On a real system under load, whether we're talking dev, test, or production, you're going to see tons of simultaneous tasks occurring. Causality tracking enables you to isolate everything and group by task. You can pluck one set of behavior out of the mess.

That’s really useful, though it does add overhead. It also will never be available for Profiler.

Learning About the Power BI Activity Log

David Eldersveld takes us through the new Power BI activity log:

What is this activity data, and how is it valuable?

The audit/activity log data contains details for every interaction that users in your tenant have with the Power BI service (powerbi.com). Activities such as viewing reports, publishing apps, modifying gateway data sources, changing workspace security, and dozens of others have records broken out by user and timestamp.

Using this data, organizations not only know who does what and when; they can move beyond a simple audit trail to measure how well Power BI adoption is progressing at the enterprise. In this case, adoption targets for a group’s collective number of touchpoints can be compared to the actuals obtained from the logs—even down to the individual object level. By comparing actuals to targets in this manner, BlueGranite often finds underutilized reports or other opportunities to improve adoption.

Read on for more info about what it does, how it differs from the Office 365 audit log, and more.

Data Culture Needs to Come from the Top

Matthew Roche talks about tough love and data culture:

Chuy’s comments made me think of advice that I received a few years back from my manager at the time. She told me:

Letting others fail is a Principal-level behavior.

Before I tie this tough love into the context of a data culture and executive sponsorship, I’d like to share the context in which the advice was given.

This is a great article which reminds us that you need executive support to keep any substantive project going.

Azure SQL Database Edge in Public Preview

Amit Banerjee announces the public preview for Azure SQL Database Edge:

Azure SQL Database Edge is available in public preview. Azure SQL Database Edge runs on ARM and Intel architectures and brings the most secure Microsoft SQL engine to the edge. By running the same Microsoft SQL database engine both on-premises and in the cloud, you now only need to develop your applications once and deploy them anywhere across the edge, your datacenter, and Azure.

With the availability of Azure SQL Database Edge in public preview, we’re inviting customers, partners, and ISVs to join the early adopter program to experience the power of SQL and AI on the edge.

I’m interested in checking out more of these time series capabilities.

Displaying SSRS Usage Stats Through Grafana

Alessandro Alpi takes queries that read SQL Server Reporting Services usage data and visualizes the results in Grafana:

One of the problems that often occurs in our organization, as well as at some of our customers, is getting immediate feedback about usage statistics for reports. Usually, requests to create reports are out of control, and some reports are executed only “that time” and never again. In the worst-case scenario, many of them aren’t executed at all, and some could even be overlapping or duplicated.

Therefore, it is important to know the usage statistics, user by user and report by report, to make readers aware of them and let them interpret the values of the same query in multiple ways and graphical layouts. While this is not possible with a tabular format (unless you export the values to an external tool such as Excel), it is simpler with a dashboard.

And that’s where Grafana excels.
