April 30, 2024

Adding GIFs to Power BI Reports

Riqo Chaar adds a bit of motion to cards:

This article will describe the process of adding GIFs to card visuals in Power BI. The GIFs we will create are animated arrows that loop only once and show the direction of movement of a particular value between the current period and the previous period. These GIFs work extremely well as a visual aid, highlighting key information for users quickly, without any overstimulating effect because only a single loop is used.

This article was inspired by a video from the YouTube channel, How to Power BI.

Click through for the article. I’m pretty well on the fence about this: adding GIFs is not something I would think to do, primarily because of the distraction factor. Even so, it’s still good to know that it’s possible.

Using strsplit() with Multiple Delimiters in R

Steven Sanderson shows off some more complex string splitting scenarios in R:

In data preprocessing and text manipulation tasks, the strsplit() function in R is incredibly useful for splitting strings based on specific delimiters. However, what if you need to split a string using multiple delimiters? This is where strsplit() can really shine by allowing you to specify a regular expression that defines these delimiters. In this blog post, we’ll dive into how you can use strsplit() effectively with multiple delimiters to parse strings in your data.

Read on for two examples of complex scenarios.
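
To make this concrete, here is a minimal sketch of the technique (my own illustration rather than code from Steven's post; the sample strings and the pattern are assumptions): pass strsplit() a regular expression that matches any of the delimiters you care about.

# Split on commas, semicolons, or runs of whitespace in one pass.
# The character class below is an illustrative assumption, not Steven's example.
x <- c("apples, oranges;bananas grapes", "one;two, three")
parts <- strsplit(x, split = "[,;[:space:]]+")
print(parts)
# [[1]]
# [1] "apples"  "oranges" "bananas" "grapes"
#
# [[2]]
# [1] "one"   "two"   "three"

Because split is treated as a regular expression by default, the trailing + collapses consecutive delimiters (such as a comma followed by a space) into a single split point.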

A Primer on Transactional Replication

Steve Stedman talks transactional replication:

Ensuring that your databases are synchronized across different locations with minimal delay is not just a convenience—it’s a necessity. This is where transactional replication in SQL Server shines, making it a pivotal strategy for systems that require real-time data replication with high consistency. Our latest video, “Transactional Replication in SQL Server”, dives deep into this topic, offering insights and visual walkthroughs that are invaluable for database administrators and developers.

Click through for the video and a high-level look at how the pieces of transactional replication fit together.

Understanding the Delta Lake Format

Reza Rad has a new post and video combo:

Please don’t get lost in the terminology pit regarding analytics. You have probably heard of Lake Structure, Data Lake, Lakehouse, Delta Tables, and Delta Lake. They all sound the same! Of course, I am not here to talk about all of them; I am here to explain what Delta Lake is.

Delta Lake is an open-source standard for Apache Spark workloads (and a few others). It is not specific to Microsoft; other vendors are using it, too. This open-source standard format stores table data in a way that can be beneficial for many purposes.

In other words, when you create a table in a Lakehouse in Fabric, the underlying files and folders for that table are stored in a structure (or, we could say, a format) called Delta Lake.

Read on to learn more about this open standard and how it all fits together with Microsoft Fabric.

Data Compression and Data Type Changes

Bob Pusateri asks the important questions:

A few different times I have been asked one or more forms of the following question:

Can datatypes be changed faster with data compression enabled?

I’ve always replied that I’m pretty sure compression will help in this situation, because based on my understanding, it should. But I’ve never had any actual data to back up this belief. Until now. I recently set up a demonstration to test this, and I’m very happy to share the results.

If you want to see the results, you’re going to have to read Bob’s article.

Power BI Model Size and Memory Usage

Chris Webb lays out the limitations:

You probably know that semantic models in Power BI can use only a fixed amount of memory. This is true of all types of semantic model – Import, Direct Lake and DirectQuery – but it’s not something you usually need to worry about for DirectQuery mode. The amount of memory they can use depends on whether you’re using Shared (aka Pro) or a Premium/Fabric capacity, and, if you’re using a capacity, how large that capacity is. In Shared/Pro the maximum amount of memory that a semantic model can use is 1GB; if you are using a capacity then the amount of memory available for models in each SKU is documented in the table here in the Max Memory column:

Read on to learn more.
