Press "Enter" to skip to content

Curated SQL Posts

Taking Over a Power BI Dataset with a Service Principal

Angela Henry takes it out of the user’s hands:

A little background for those new to using Power BI and Data Gateways. If the data source for your Power BI dataset lives on-prem or behind a private endpoint, you will need a Data Gateway to access the data. If you want to keep your data fresh (either using Direct Query or Import mode), but don’t want to rely on a specific user’s credentials (because we all want to go on vacation at some point), you will need to use a service principal for authentication.

Read on for the step-by-step instructions on how to do this.
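As a rough sketch of the mechanism involved (not Angela's exact steps — the endpoint is the standard Power BI REST API TakeOver call, and every ID and secret below is a placeholder), taking over a dataset as a service principal looks something like this:

```python
# A minimal sketch, assuming an app registration that is allowed to use
# the Power BI REST APIs; all IDs and secrets are placeholders.
import msal
import requests

TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-app-client-id"
CLIENT_SECRET = "your-app-client-secret"
WORKSPACE_ID = "your-workspace-id"
DATASET_ID = "your-dataset-id"

# Acquire an app-only (service principal) token for the Power BI API.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)

# Call the TakeOver endpoint so the service principal becomes the
# dataset owner, removing the dependency on any one user's credentials.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/Default.TakeOver",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
```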


Managing Database Test Data

Phil Factor maintains some tests:

When learning about relational databases, we all tend to use ‘toy’ databases such as Pubs, AdventureWorks, NorthWind, or ClassicModels. This is fine, but it is too easy to assume that one can then do real-world database development in the same way. You have your database full of data and just cut code that you then test. From a distance, it all seems so easy.

In fact, rapid and effective database development usually requires a much more active approach to data. You need to work out how to test your work as you go, and to test continuously. For that, you need appropriate data with the right characteristics, in a suitable quantity. You also need to plan how to ensure that, when you make changes to the database, or even minor changes to its settings, all business processes continue to work correctly. In Agile terms, you need a test-first methodology, a fast feedback loop, and iterative development. You should never cut some SQL code and only then think to yourself, “I wonder how I’ll be able to test this?”

This is something I’ve historically been pretty lazy about, to my detriment. Phil does an outstanding job of making the case for why generating and working with your own test data (versus live data) is important, as well as categorizing the purposes of this test data and the types of tests you’ll want to have.
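Phil works in SQL, but the core idea — generating repeatable data with realistic characteristics rather than leaning on live data — fits in a few lines of any language. A minimal Python sketch, with a table shape and distributions invented purely for illustration:

```python
# Seeding the RNG makes every test run see identical data; the skewed
# status weights and long-tailed amounts mimic a production-like shape.
import csv
import random

random.seed(42)  # fixed seed: repeatable test runs

STATUSES = ["open", "shipped", "cancelled"]

with open("test_orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "customer_id", "status", "amount"])
    for order_id in range(1, 1001):
        writer.writerow([
            order_id,
            random.randint(1, 200),  # roughly five orders per customer
            random.choices(STATUSES, weights=[2, 7, 1])[0],  # skewed statuses
            round(random.lognormvariate(3, 1), 2),  # long-tailed amounts
        ])
```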


Filtering Calculation Items in a Slicer

Marco Russo and Alberto Ferrari do some slicing and filtering:

Slicers with too many values might be inconvenient for users, as they must search for the desired selection among too many lines. In such cases, a common solution is to build a hierarchy and use slicers with multiple columns inside, or multiple slicers, each with one column. However, this solution works only in structures with a natural hierarchy, like continents and countries. Indeed, each country belongs to only one continent, so the hierarchy can be easily created with a new column.

If the hierarchy is non-natural, the relationship between the parent and the children is many-to-many, requiring a specific type of relationship. 

Click through to see what that relationship looks like and how you can build it.
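Marco and Alberto build this in DAX; as a toy illustration of why a single parent column stops working, here is the shape of the problem in Python (the calculation item names are invented):

```python
# In a non-natural hierarchy, one child can appear under several
# parents, so parent/child is many-to-many and needs a bridge of
# (parent, child) pairs rather than a parent column on the child.
bridge = [
    ("Time Intelligence", "YTD"),
    ("Time Intelligence", "PY"),
    ("Favorites", "YTD"),       # YTD sits under two parents
    ("Favorites", "Margin %"),
]

def children_of(parent: str) -> list[str]:
    """All calculation items filed under the selected parent."""
    return [child for p, child in bridge if p == parent]

print(children_of("Favorites"))  # ['YTD', 'Margin %']
```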


Index Maintenance in Azure SQL DB

Kendra Little gives an answer:

Have you ever received advice that was technically correct, but which was delivered in such a way that it was too hard to understand?

I think of this as “accidental bad advice,” because it leads to confusion. There’s a LOT of accidental bad advice out there on index maintenance for SQL Server and cloud versions like Azure SQL, even in the official documentation.

In this post I’m answering a common index maintenance question, and we’re going to keep it simple.

The answer is essentially the same as it would be on-premises: yes, but perform index maintenance when it is appropriate. Read on to learn what that means in this case.
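As one hedged example of "when it is appropriate" (this is not Kendra's script; the threshold and connection details are placeholders to tune for your workload), you can check fragmentation and rebuild only what needs it:

```python
# Query sys.dm_db_index_physical_stats and rebuild only indexes past an
# illustrative fragmentation threshold.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=yourdb;"
    "UID=youruser;PWD=yourpassword"
)
cursor = conn.cursor()

cursor.execute("""
    SELECT s.name, o.name, i.name, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.objects AS o ON o.object_id = i.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ips.avg_fragmentation_in_percent > 30  -- illustrative cutoff
      AND i.name IS NOT NULL                     -- skip heaps
""")

for schema, table, index, frag in cursor.fetchall():
    print(f"Rebuilding {schema}.{table}.{index} ({frag:.1f}% fragmented)")
    # ONLINE = ON keeps the table available during the rebuild.
    cursor.execute(
        f"ALTER INDEX [{index}] ON [{schema}].[{table}] REBUILD WITH (ONLINE = ON)"
    )
    conn.commit()
```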


Modularizing an Existing Shiny App

Peter Baranovskiy breaks it down:

There are multiple tutorials available online on writing modular Shiny apps. So why one more? Well, when I just started with building modular apps myself, these didn’t do much for me. So I really only learned how to write modules when I had an opportunity to team up with an experienced R Shiny developer. The reason, I guess, is that Shiny modules are an advanced topic, and you typically get to writing modules only when you finally need to scale your apps – and keep opportunities for further scaling open. This typically means when your app goes into production. By then you probably have already developed multiple apps, and switching over to the way of thinking required to write modules may be challenging. If you don’t know what modules are, I recommend starting here and then coming back to this post. Otherwise, read on.

So, I decided to try a different approach: instead of building a simple modular app from scratch, to go in the opposite direction and break down a complex real-life app into modules. Here’s the app’s original non-modular code. Note a single app.R file that contains the entire app. static_assets.R includes some object definitions which I moved to a separate file for convenience. calgary_crime_data_prep.R is not part of the app; it is a data retrieval and cleaning script executed once a month with cron. Running the script each time the app launches would have made it extremely slow and would have used way too much bandwidth, as the script downloads and processes 150+ MB of data on each run.

Read on for the reasoning behind using modules, as well as Peter’s notes on the process.
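Peter's app is R Shiny, but the same namespacing idea exists in Shiny for Python; a minimal sketch of the pattern (the component and IDs here are invented, not taken from Peter's app):

```python
# @module.ui and @module.server namespace a reusable component, so two
# instances of the same module cannot collide on input/output IDs.
from shiny import App, module, render, ui

@module.ui
def counter_ui(label: str):
    return ui.div(
        ui.input_action_button("btn", label),
        ui.output_text("count"),
    )

@module.server
def counter_server(input, output, session):
    @render.text
    def count():
        return f"Clicks: {input.btn()}"

app_ui = ui.page_fluid(
    counter_ui("first", "Counter one"),   # each id gets its own namespace
    counter_ui("second", "Counter two"),
)

def server(input, output, session):
    counter_server("first")
    counter_server("second")

app = App(app_ui, server)
```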


Bug in fn_xe_file_target_read_file

Erik Darling notes a bug:

SQL Server has had the fn_xe_file_target_read_file function for a while, but starting with SQL Server 2017, a column called timestamp_utc was added to the output.

Somewhat generally, it would be easier to filter event data out using this column… if it worked correctly. The alternative is to interrogate the underlying extended event XML timestamp data.

That’s… not fun.

Erik shows us the problem and also provides a workaround, as well as the Microsoft Feedback issue you can vote on to get this done sooner.
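Erik's workaround is T-SQL, but the underlying idea — trusting the timestamp attribute inside the event XML rather than the unreliable timestamp_utc column — is easy to see in a quick Python sketch (connection details and the file pattern are placeholders):

```python
# Read raw event XML from fn_xe_file_target_read_file and pull the
# timestamp attribute out of each <event> element instead of relying
# on the timestamp_utc column.
import xml.etree.ElementTree as ET

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver;DATABASE=master;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT event_data
    FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL)
""")

for (event_xml,) in cursor:
    event = ET.fromstring(event_xml)
    # The timestamp attribute is UTC and comes straight from the XML.
    print(event.get("name"), event.get("timestamp"))
```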


An Overview of Semantic Modeling in Microsoft Fabric

Teo Lachev talks semantic modeling:

In retrospect, I’d say I owe 50% of my BI career to Analysis Services and its flavors: Multidimensional, Tabular, and later Power BI. This is why I closely follow how this technology evolves. Fast forwarding to Fabric, there are no dramatic changes. Unlike the other two Fabric engines (Lakehouse and Warehouse), Power BI datasets haven’t embraced the delta lake file format to store their data yet. The most significant change is the introduction of a new Direct Lake data access mode alongside the existing Import and DirectQuery.

Read on for Teo’s thoughts. I think there’s a good chance that the Bad/Ugly points will be eliminated by the time Fabric goes GA, though we’ll have to wait and see if that’s the case.


TreeMaps vs Pie Charts

Rita Fainshtein talks treemaps:

The main challenge arises from the difficulty in comparing the areas of the segments within the chart.

Consequently, pie charts and similar graphs become challenging to interpret.

Colors for categories don’t make it easy either since the brain tries to figure out how the color relates to the category, which in the example here just happens to be a random choice.

Let’s see if the Treemap helps and makes it easier.

Rita notes that treemaps work well in a specific niche: hierarchical, categorical data. But within their niche, they work really well, which is more than you can say about pie charts…
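For a quick feel of that niche, plotly can lay out a two-level hierarchy as a treemap in a couple of lines (this uses plotly's bundled gapminder sample data, not Rita's example):

```python
# Hierarchical, categorical data (continent -> country), with rectangle
# area encoding population -- the case where treemaps shine.
import plotly.express as px

df = px.data.gapminder().query("year == 2007")
fig = px.treemap(
    df,
    path=["continent", "country"],  # the hierarchy drives the nesting
    values="pop",                   # area encodes population
)
fig.show()
```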


Power BI Treemap Visual Desired Enhancements

Meagan Longoria has an airing of grievances:

I recently created a treemap in Power BI for a Workout Wednesday challenge. Originally, I had set out to make a different treemap, but I ran into some limitations with the visual. I ended up with the treemap below, which isn’t bad, but it made me realize that the treemap is in need of some improvements to make it really useful. So I decided to share my thoughts here.

Read on for Meagan’s thoughts on the existing treemap visual. I agree with all of Meagan’s points and would love to see this visual updated.


Power BI Report Deployment with Connections to Shared Datasets

Rayis Imayev does some large-scale deployment:

Let’s say you have a collection of Power BI .pbix files stored in a git-based source control system (GitHub, Azure DevOps, or any other system). Among these files, one is your data model, while the others are Power BI visual reports and dashboards connected to the published dataset from your data model. Your published dataset is located in a separate workspace dedicated to shared content, and the visualization Power BI reports are placed in another workspace with appropriate permissions to access them.

Now, let’s consider an additional complexity: you have this collection of files not only in one development environment but also in two others. These environments support your Power BI reporting testing (UAT) and the release of your Power BI reports to end-users (Production).

The questions that arise are: How do you deploy your solution, and most importantly, how do you automate it?

Click through for an architectural diagram, as well as the answers to these questions.
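One building block of such automation (a hedged sketch, not Rayis's pipeline: the endpoint is the standard Power BI REST Rebind Report call, and all IDs are placeholders) is rebinding a deployed report to the shared dataset in another workspace:

```python
# After deploying a report .pbix to a target workspace, point it at the
# published shared dataset; token acquisition is the same app-only flow
# as in the service-principal example earlier on this page.
import requests

def rebind_report(token: str, workspace_id: str, report_id: str, dataset_id: str) -> None:
    """Rebind an already-deployed report to a shared dataset."""
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
        f"/reports/{report_id}/Rebind",
        headers={"Authorization": f"Bearer {token}"},
        json={"datasetId": dataset_id},
    )
    resp.raise_for_status()
```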
