Press "Enter" to skip to content

Curated SQL Posts

Atomic Design for Report Development

Kurt Buhler has an interesting approach:

Developing a good semantic model or report takes a lot of time and effort. One way to reduce this cost is by re-using parts of an existing solution for a new model or project. This modular approach is particularly valuable when a developer faces common or recurring challenges and processes. Despite this, many developers repeat efforts when they start new projects, models, and reports. For example, developers will often manually recreate measures, date tables, and patterns in a new model, or spend precious hours formatting visuals in a new report, even though they have already created the same or similar things in the past. One reason for this is that it is difficult to identify candidate elements to re-use, or to know how to re-use them in a convenient and scalable manner.

In this article, we want to introduce a conceptual framework from UI/UX called the atomic design methodology from Brad Frost. This framework can help developers approach Power BI models and reports in a modular way, improving the productivity and consistency of their work. The purpose of this article is to introduce the concept as well as some approaches that exist to re-use parts of your model and report. In future articles and videos, we will elaborate on these and other methods in additional detail.

I like the idea a lot, but Kurt does describe some of the challenges you’ll likely need to work through to adopt it.


Deploying a Power BI Project File via Azure DevOps

Angela Henry deploys to prod:

When it was announced, there was a collective cheer from Power BI source control advocates heard ’round the world. Since its preview release, Microsoft has also added Git integration with Fabric workspaces. This makes it so easy to incorporate source control for all (or almost all) of your Fabric artifacts, including Power BI.

But what happens when your organization already has a mature CI/CD process in place using Azure DevOps? Do you really want to break from that pattern and have it controlled somewhere else? That’s what this post is about, using Azure DevOps CI/CD pipelines to deploy your Power BI Project files (.pbip).

I’m going to share my experience in hopes that it will save you some time if this is the route you need to take.

Read on for Angela’s experience. Note that this applies both to Microsoft Fabric and to Fabric-less Power BI.
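
To make the deployment step concrete, here is a minimal Python sketch of the kind of call a pipeline script task could make: pushing a .pbip report definition into a Fabric workspace through the Fabric REST API’s create-item endpoint. The token acquisition, workspace ID, report folder layout, and display name are all illustrative assumptions, and Angela’s post builds the pipeline from Azure DevOps tasks rather than this exact script.

```python
# Minimal sketch: push a .pbip report definition to a Fabric workspace via
# the Fabric REST API. Assumptions (not from Angela's post): you already have
# an AAD bearer token with Fabric scope, and WORKSPACE_ID / REPORT_DIR exist.
import base64
import pathlib
import requests

TOKEN = "<aad-access-token>"                  # e.g., from a service connection
WORKSPACE_ID = "<fabric-workspace-guid>"
REPORT_DIR = pathlib.Path("MyReport.Report")  # the report folder in the .pbip

# Encode every file in the report folder as an inline base64 definition part.
parts = [
    {
        "path": f.relative_to(REPORT_DIR).as_posix(),
        "payload": base64.b64encode(f.read_bytes()).decode(),
        "payloadType": "InlineBase64",
    }
    for f in REPORT_DIR.rglob("*") if f.is_file()
]

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"displayName": "MyReport", "type": "Report",
          "definition": {"parts": parts}},
)
resp.raise_for_status()  # 201 on create; long-running requests return 202
print("Deployment request accepted:", resp.status_code)
```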


Hot and Cold Partitions for Apache Kafka Data

Gautam Goswami splits the data:

At first, data tiering was a tactic used by storage systems to reduce data storage costs. This involved grouping data that was not accessed as often into more affordable, if less effective, storage array choices. Data that has been idle for a year or more, for example, may be moved from an expensive Flash tier to a more affordable SATA disk tier. Even though they are quite costly, SSDs and flash can be categorized as high-performance storage classes. Smaller datasets that are actively used and require the maximum performance are usually stored in Flash.

Cloud data tiering has gained popularity as customers seek alternative options for tiering or archiving data to a public cloud. Public clouds presently offer a mix of object and file storage options. Object storage classes such as Amazon S3 and Azure Blob (Azure Storage) deliver significant cost efficiency and all the benefits of object storage without the complexities of setup and management. 

Read on for an architecture that uses hot and cold tiers, as well as how you can set it up on an existing Kafka topic.
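
For a sense of what the hot/cold split looks like in practice: with Kafka tiered storage (KIP-405), the tiers are driven by topic-level configuration, where local.retention.ms bounds the hot tier on broker disk and retention.ms covers total retention including the remote (cold) tier. Below is a hedged sketch using the confluent-kafka Python admin client; the topic name and retention windows are placeholders, and it assumes your brokers already run a remote storage plugin.

```python
# Sketch: tier an existing topic hot/cold via Kafka tiered storage (KIP-405).
# Placeholder topic name and retention values; assumes the brokers already
# run with a remote storage plugin configured.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

topic_configs = {
    "remote.storage.enable": "true",                 # enable tiering for the topic
    "local.retention.ms": str(24 * 60 * 60 * 1000),  # hot tier: 1 day on broker disk
    "retention.ms": str(30 * 24 * 60 * 60 * 1000),   # total: 30 days, cold tier included
}

# Note: alter_configs replaces the topic's non-default config set wholesale;
# newer clients also offer incremental_alter_configs.
resource = ConfigResource(ConfigResource.Type.TOPIC, "events", set_config=topic_configs)
for res, future in admin.alter_configs([resource]).items():
    future.result()  # raises on failure
    print(f"Updated {res}")
```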


SHAP and Additive Models

Michael Mayer answers a pair of related questions:

Within only a few years, SHAP (Shapley additive explanations) has emerged as the number 1 way to investigate black-box models. The basic idea is to decompose model predictions into additive contributions of the features in a fair way. Studying decompositions of many predictions allows us to derive global properties of the model.

What happens if we apply SHAP algorithms to additive models? Why would this ever make sense?

Read on for the answers to these two questions.
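
To see why an additive model is an interesting test case: for a plain linear model under a feature-independence assumption, the SHAP value of feature j collapses to its centered contribution, coef_j * (x_j - mean(x_j)), and the contributions plus the baseline reproduce each prediction exactly. Here is a minimal sketch of that identity (my illustration, not Michael’s code; shap API details can vary by version):

```python
# Sketch: SHAP applied to a purely additive model (plain linear regression).
# Illustrative only; this is not Michael's code.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)
model = LinearRegression().fit(X, y)

# Keep the full dataset as background so the baseline is the exact mean.
masker = shap.maskers.Independent(X, max_samples=len(X))
explainer = shap.LinearExplainer(model, masker)
shap_values = explainer.shap_values(X)

# Additivity: per-feature contributions plus the baseline give the prediction.
print(np.allclose(shap_values.sum(axis=1) + explainer.expected_value,
                  model.predict(X)))                                  # True

# And each column is exactly the centered linear contribution of its feature.
print(np.allclose(shap_values, model.coef_ * (X - X.mean(axis=0))))   # True
```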


SQL Server Security Series Wrap-Up

Mike Walsh puts a bow on it:

Thanks for tuning in to our posts for the 30 SQL Server security checks in 30 days series this month. I want to recap the entire month of posts with a few homework assignments to get you started today.

Read on for those three assignments, including adopting a security mindset, remembering that humans tend to be the weak points of security, and trying out sp_CheckSecurity.


DAX Memory Usage from Filtering on a Table

Chris Webb talks Power BI anti-patterns:

Following on from my last post on the Query Memory Limit in Power BI, in this post I want to look at one of the most common DAX antipatterns and its effect on query memory usage: filtering on a whole table, rather than an individual column, in the filter parameters of the Calculate() function. A lot has already been written on this particular antipattern from the point of view of query performance – for example see here and here – but it’s only with the recent addition of the Execution Metrics Profiler/Log Analytics event that you can see how bad it is for memory usage too.

Read on for an example. In Chris’s case, there’s a 3x memory difference between doing the right thing (filtering an individual column, like Sales[Color] = "Red") and the wrong thing (wrapping the whole Sales table in FILTER()), so this can easily add up.


Random Walks and Brownian Motion in healthyR.ts

Steven Sanderson goes for a walk on the stock exchange:

In the world of time series analysis, Random Walks, Brownian Motion, and Geometric Brownian Motion are fundamental concepts used in various fields, including finance, physics, and biology. Today, we’ll explore these concepts using functions from the healthyR.ts package.

Click through to learn about each of these concepts and to see examples of how you can generate time series datasets following each of them.
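
healthyR.ts is an R package, but the underlying processes are easy to reproduce anywhere. As a generic illustration (not the healthyR.ts API): a random walk is a cumulative sum of random steps, and geometric Brownian motion follows S_t = S_0 * exp((mu - sigma^2/2) * t + sigma * W_t).

```python
# Sketch: simulate a random walk and geometric Brownian motion with NumPy.
# Generic illustration of the concepts; this is not the healthyR.ts API.
import numpy as np

rng = np.random.default_rng(123)
n_steps, dt = 252, 1 / 252           # one trading year of daily steps

# Random walk: cumulative sum of +/-1 steps.
walk = np.cumsum(rng.choice([-1, 1], size=n_steps))

# Geometric Brownian motion: S_t = S0 * exp((mu - sigma^2/2) * t + sigma * W_t)
mu, sigma, s0 = 0.05, 0.2, 100.0     # drift, volatility, starting price
w = np.cumsum(rng.normal(scale=np.sqrt(dt), size=n_steps))  # Brownian path W_t
t = np.arange(1, n_steps + 1) * dt
gbm = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)

print(walk[:5])
print(gbm[:5])
```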


Random Date Generation in Python

Chris LaGreca spits out some dates:

I often work with time series data and find it useful to have a variety of ways to randomly generate dates. This particular example is great for evenly distributed date partitions. Running the script below with the default arguments will output a list of random dates, one for each month of the year.

It looks like this is generating based on a uniform distribution, which probably makes the most sense for “give me a day of the month” data generation.
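
As a minimal sketch of that idea (my own illustration, not Chris’s script), drawing one uniformly random day within each month of a year might look like this:

```python
# Sketch: generate one uniformly random date per month of a given year.
# A minimal illustration of the idea, not Chris's script.
import calendar
import datetime
import random

def random_dates_by_month(year: int, seed: int | None = None) -> list[datetime.date]:
    rng = random.Random(seed)
    dates = []
    for month in range(1, 13):
        # monthrange returns (first_weekday, days_in_month)
        _, last_day = calendar.monthrange(year, month)
        dates.append(datetime.date(year, month, rng.randint(1, last_day)))
    return dates

for d in random_dates_by_month(2024, seed=42):
    print(d.isoformat())
```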
