Press "Enter" to skip to content

Category: Power BI

The Power Automate Custom Visual in Power BI

Imran Burki tries out a new custom visual:

Using the Power Automate Custom Visual in Power BI is the same process as using any custom visual. We’ll use the Defects Dashboard I created in my last blog post as an example. We want to send a Teams message when we notice that defects in a plant require the attention of the plant supervisor. After that, we want to create a meeting in Outlook to discuss findings from our dashboard. Previously, there wasn’t a straightforward way to do this directly in Power BI. However, with the Power Automate Custom Visual, we can create flows without ever having to leave Power BI! Now that’s cool! Let’s get started.

This is really interesting for setting up rules-based alerting.


DirectQuery on REST APIs

Chris Webb illuminates us:

One of the most common questions I get asked is “How can I use Power BI in DirectQuery mode on top of a REST API?”. This seems like a reasonable thing to do but almost everyone who tries it will fail, and in this post I will explain why.

To answer this question we first of all have to review the two main ways of working with data in Power BI: Import mode and DirectQuery mode. In Import mode, data is cached in Power BI’s own internal database and all the DAX queries that are generated by your reports are answered from there. In DirectQuery mode, no data is stored inside Power BI; instead, when a report is run and DAX queries are fired off against your dataset, Power BI in turn generates queries against the data source to get the data needed. Most of the data sources that can be used with DirectQuery mode in Power BI are relational databases, so Power BI will generate SQL queries to get data from them, but it can generate queries in other languages too.
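
To make the distinction concrete, here is a sketch of the kind of DAX query a report visual fires at a dataset (the Sales and Date table names are hypothetical). In Import mode this query is answered from the internal cache; in DirectQuery mode Power BI must translate it into a query language the source understands, which hints at where a plain REST API falls short:

EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Total Sales", SUM ( Sales[SalesAmount] )
)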

Read on for the bad news, although there are some third-party products which can make it work in specific cases.


Pareto Charts in Power BI

Imran Burki builds a chart:

Last week we built a Manufacturing Yield dashboard that showed first and final pass yield numbers. In this post, we’re going to introduce the concept of manufacturing defects and build a Pareto chart in Power BI. Unlike Excel, the Pareto chart isn’t available out of the box in Power BI. Instead, we must write DAX to build the Pareto. Before we dig into the DAX, let’s talk about why we would create a Pareto in the context of manufacturing and why defects are important to track.

Read on to learn what a Pareto chart is and how you can build the DAX function which gives us the relevant information.
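
As a rough sketch of the usual pattern (the Defects table and DefectType column are hypothetical, and this is not necessarily Imran's exact code), the heart of a Pareto chart is a cumulative-percentage measure alongside a plain count:

Defect Count = COUNTROWS ( Defects )

Defect Cumulative % =
VAR CurrentCount = [Defect Count]
VAR DefectTypes =
    -- One row per defect type in the current selection, with its count
    ADDCOLUMNS (
        ALLSELECTED ( Defects[DefectType] ),
        "@Count", CALCULATE ( [Defect Count] )
    )
VAR RunningTotal =
    -- Sum every defect type at least as common as the current one
    SUMX ( FILTER ( DefectTypes, [@Count] >= CurrentCount ), [@Count] )
RETURN
    DIVIDE ( RunningTotal, SUMX ( DefectTypes, [@Count] ) )

Plotted as the line on a combo chart sorted by Defect Count descending, this produces the classic Pareto curve.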


From Azure Analysis Services to Power BI PPU

Gilbert Quevauvilliers teases a new series:

I have been doing a lot of evaluation and investigation for organizations who are currently using Azure Analysis Services (AAS) and looking to see if they can leverage Power BI Premium Per User (PPU).

In this series I am going to cover the details below, which I worked through to see if the migration was not only feasible but should be the new normal.

Looks like it will be an 11-parter, so we have some reading to look forward to.


Enabling and Disabling Table Load in Power Query

Jon Fletcher shows how to disable query processing in Power Query:

In Power Query in Power BI, there is an option to enable or disable whether a table is loaded into the report. The ‘Enable load’ option can be found by right-clicking on the table. By default, the load is enabled.

Click through to see how you can disable query processing or remove a particular query from report refreshes.


Improving Dataflow Performance in Power BI

Chris Webb shows how you can improve dataflow performance in Power BI after switching to a Premium Per User model:

Over the years I have written a lot about Power BI/Power Query performance, but it has always been in the context of loading data directly into datasets, not dataflows. A lot of cool things have been happening in dataflows recently, though, and now that Premium Per User has made Premium features available to a much wider audience, I thought it would be interesting to look at an example of how PPU can help dataflow performance and specifically how and when the Enhanced Compute Engine can make dataflow refresh faster.

Click through for some interesting findings.


Updating or Creating a Measure for Power BI PPU without Full Publication

Gilbert Quevauvilliers follows up:

Following on from my previous blog post How to complete granular deployment of Power BI Desktop changes to the Power BI Service (Using PPU), I want to also show how to update or create a measure in my dataset, where I can deploy this via ALM Toolkit.

This saves me from the following tasks:

– Refreshing the PBIX file so that the data is up to date.
– Re-uploading the PBIX.
– Re-creating incremental refresh, if it is configured.
– The time and effort of uploading and waiting for the dataset to refresh.

It also means quick updates to my dataset. I no longer have to worry about saving my PBIX file or, if configured, re-creating incremental refresh. This saves me a lot of time and effort.

Read on to see how.
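
To give a sense of the scale of change this enables: rather than republishing the whole PBIX, a single measure definition like the hypothetical one below can be diffed and pushed on its own via ALM Toolkit.

Defect Cost YTD =
-- Year-to-date total of a hypothetical repair-cost column
TOTALYTD ( SUM ( Defects[RepairCost] ), 'Date'[Date] )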


Importing Data from ADLS Gen2 into Power BI

Chris Webb summarizes a significant amount of work:

Over the last few months I’ve written a series of posts looking at different aspects of one question: what is the best way to import data from ADLSgen2 storage into a Power BI dataset? For example, is Parquet really better than CSV? Should you use Azure Synapse Serverless? In this post I’m going to summarise my findings and offer some recommendations – although, as always, I need to stress that these are the conclusions I can draw from my test results and not the absolute, incontrovertible “Microsoft-says” truth so please do your own testing too.

Read on and check it out for yourself.


Power BI Cross-Report Drill-Through

Marc Lelijveld takes us through the benefits and challenges of drilling through to a different report in Power BI:

It is very common to have multiple reports for different audiences, while there is also one (group of) user(s) that requires an overview across all these different insights. The main challenge you will face is achieving cross-report interactivity and finding related insights.

Let’s take an example of three different roles, where we have a customer account manager, a reseller manager, and a regional manager. Of course, they should have the same single source of truth, but there is one thing you want to avoid as a report creator! You do not want to create three different reports for the three mentioned audiences. But as they have different roles and responsibilities, you do not want them to see each other’s data, and you want to keep things clean and simple! In this blog I will describe how you can set up cross-report drill through to jump from one report to another, while respecting applied filters and avoiding building three separate reports!

Click through for the process, as well as potential issues you may hit along the way.


Star Schemas and Power BI Go Together

Marco Russo and Alberto Ferrari explain why star schemas make so much sense for Power BI:

Why should I have products, sales, date and customers as separate tables? Wouldn’t it be better to store everything in a single table named Sales that contains all the information? After all, every query I will ever run will always start from Sales. By storing everything in a single table, I avoid paying the price of relationships at query time, therefore my model will be faster.

There are multiple reasons why a single, large table is not better than a star schema. Here, anyway, the focus is strictly on performance. Is it true that a single table is faster than a star schema? After all, we all know that joining two tables is an expensive operation. So it seems reasonable to think that removing the joins results in a faster model. Besides, with the advent of NoSQL and big data, there are so many so-called data lakes holding information within one single table… Isn’t it tempting to use those data sources without any transformation?

Read on to see why this is not the case.
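
For a sense of what the two designs look like from the DAX side (table and column names hypothetical), compare the same measure written against a star schema and against one wide table:

-- Star schema: filter the small Product dimension; the relationship
-- propagates the filter to the Sales fact table.
Red Sales Star =
CALCULATE ( SUM ( Sales[SalesAmount] ), 'Product'[Color] = "Red" )

-- Flat table: the color is repeated on every row of the one wide
-- table and must be filtered there.
Red Sales Flat =
CALCULATE ( SUM ( BigSales[SalesAmount] ), BigSales[ProductColor] = "Red" )

The measures look nearly identical, which is part of the temptation; the difference lies in how the engine stores and scans the underlying columns, which is what Marco and Alberto dig into.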
