Press "Enter" to skip to content

Month: December 2024

Recovering Power BI Reports and Semantic Models

Kurt Buhler saves the day:

In the Power BI service or Microsoft Fabric, you might encounter situations where you can’t download a report or model from a workspace. Depending on your workflow, this could be problematic; for instance, you might need to work further on this file in Power BI Desktop. To do that, you first need to recover a Power BI Desktop (PBIX) file or the newer format, Power BI Projects (PBIP).

Read on for several reasons why you might not be able to download the file, and what you can do about it using the semantic link library and semantic-link-labs.
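
For reference, the ordinary download path can also be driven through the documented Power BI REST "Export Report In Group" endpoint, which returns the PBIX binary; the article covers what to do when that route isn't available. A minimal sketch, with placeholder IDs and token:

```python
# Minimal sketch of the ordinary download path via the documented Power BI
# REST "Export Report In Group" endpoint, which returns the PBIX binary.
# The IDs and token below are placeholders; the linked article covers what
# to do when this route is not available.
import requests

workspace_id = "<workspace-guid>"    # placeholder
report_id = "<report-guid>"          # placeholder
access_token = "<aad-access-token>"  # e.g. acquired via MSAL or the Azure CLI

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
    f"/reports/{report_id}/Export"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()

with open("recovered_report.pbix", "wb") as f:
    f.write(resp.content)
```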

Shared Semantic Models in Power BI

Soheil Bakhshi shares some data:

This blog series complements a YouTube tutorial I published earlier this month, where I quickly covered the scenario and implementation of shared semantic models in Microsoft Fabric. However, I realised this topic demands a more detailed explanation for those who need a deeper understanding of the processes and considerations involved in one of the most common enterprise-grade BI scenarios.

Read on for part 2 of this series. Soheil also includes a link back to part 1 if you missed it.

The Year in DAX: 2024 Edition

Marco Russo wraps up the year:

In 2024, DAX added several functions to support visual calculations – a Power BI feature still in preview at the end of the year 2024. These 12 functions cannot be used in measures, calculated columns, or calculated tables – they can only be used in visual calculations: COLLAPSE, COLLAPSEALL, EXPAND, EXPANDALL, FIRST, ISATLEVEL, LAST, MOVINGAVERAGE, NEXT, PREVIOUS, RANGE, and RUNNINGSUM.

Read on to learn more about what Microsoft has done with DAX as a language, as well as what has kept the SQLBI team busy and what’s coming in 2025.

PostgreSQL and Indexing on EXTRACT()

Henrietta Dombrovskaya troubleshoots a performance problem:

It’s Christmas time and relatively quiet in my day job, so let’s make it story time again! One more tale from the trenches: how wrong can you go with one table and one index?

Several weeks ago, a user asked me why one of the queries had an “inconsistent performance.” According to the user, “Sometimes it takes three minutes, sometimes thirty, or just never finishes.” After taking a look at the query, I could tell that the actual problem was not the 30+ minutes, but 3 minutes – when you have a several hundred million row table and your select yields just over a thousand rows, it’s a classical “short query,” so you should be able to get results in milliseconds.

Read on for the problem, as well as how Henrietta was able to coerce the PostgreSQL optimizer into choosing the correct path.
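
As a generic illustration of the kind of issue at play (not Henrietta's actual table or fix): a filter written with EXTRACT() can't use a plain B-tree index on the timestamp column, whereas an equivalent range predicate can. A sketch with made-up names:

```python
# Generic illustration (not the article's actual table or fix): a predicate
# written with EXTRACT() cannot use a plain B-tree index on the timestamp
# column, while an equivalent range predicate can. Table, column, and DSN
# are made up.
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")
cur = conn.cursor()

# Expression filter: a plain index on created_at is not usable here unless a
# matching expression index ON (EXTRACT(year FROM created_at)) exists.
cur.execute("""
    EXPLAIN
    SELECT count(*) FROM orders
    WHERE EXTRACT(year FROM created_at) = 2024
""")
print("\n".join(row[0] for row in cur.fetchall()))

# The same logic as a sargable range predicate: a plain index on created_at
# can now drive an index scan.
cur.execute("""
    EXPLAIN
    SELECT count(*) FROM orders
    WHERE created_at >= DATE '2024-01-01'
      AND created_at <  DATE '2025-01-01'
""")
print("\n".join(row[0] for row in cur.fetchall()))

cur.close()
conn.close()
```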

Microsoft Fabric and Power Platform Resources

Jon Voege has a collection of links for us:

This week, to round off the year, we try something different. I wanted to throw a shout out to all the community heroes out there, who also help make the most of Microsoft Fabric, through the use of Microsoft Power Platform (and vice versa).

Also, I wanted to highlight some of their contributions, and hopefully give you all a list of resources to peruse.

Click through for more than 20 links, showing how you can work with Power Automate, Power Apps, Power Pages, and data in Dataverse from Microsoft Fabric.

Mathematical Transformations of Data in R

Steven Sanderson does the math:

Data transformation is a fundamental technique in statistical analysis and data preprocessing. When working with R, understanding how to properly transform data can help meet statistical assumptions, normalize distributions, and improve the accuracy of your analyses. This comprehensive guide will walk you through implementing and visualizing the most common data transformations in R: logarithmic, square root, and cube root transformations, using only base R functions.

Click through for examples.
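
The post sticks to base R; purely for comparison, here is a numpy analogue of the same three transformations (the sample data is made up):

```python
# numpy analogue of the three transformations the post covers in base R
# (log, square root, cube root); the sample data is made up.
import numpy as np

x = np.array([1.0, 4.0, 9.0, 27.0, 100.0, 1000.0])

log_x = np.log(x)    # natural log; requires strictly positive values
sqrt_x = np.sqrt(x)  # square root; requires non-negative values
cbrt_x = np.cbrt(x)  # cube root; defined for negative values as well

print(log_x, sqrt_x, cbrt_x, sep="\n")
```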

Azure AI Foundry Notes

Tomaz Kastrun wraps up a series on Azure AI. First up is tracing in Azure AI Foundry:

Tracing is a powerful tool that offers developers an in-depth understanding of the execution process of their generative AI applications. Though still in preview (at the time of writing this post), it provides a detailed view of the execution flow of the application and the essential information for debugging or optimisations.
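
For a taste of what trace instrumentation looks like in code, here is a minimal sketch using the promptflow tracing package, one of the mechanisms behind Foundry tracing; this is a local illustration rather than the exact portal setup from the post:

```python
# Local illustration of trace instrumentation with the promptflow tracing
# package (an assumption about tooling, not the exact Foundry portal setup
# from the post): traced functions show up as spans with their inputs and
# outputs in the trace viewer.
from promptflow.tracing import start_trace, trace

@trace
def summarize(text: str) -> str:
    # Stand-in for a call to an LLM or another tool in the flow.
    return text[:50]

if __name__ == "__main__":
    start_trace()  # starts local trace collection and prints a viewer URL
    print(summarize("Tracing gives an in-depth view of the execution flow."))
```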

After that, we can see how to evaluate model results:

With evaluation, you perform iterative, systematic evaluations with the right evaluators, and measure and address potential response quality, safety, or security concerns throughout the AI development lifecycle, from initial model selection through post-production monitoring.

With evaluation in Azure AI Foundry, you can evaluate the GenAI Ops lifecycle through to production. In addition, it gives you the ability to assess the frequency and severity of content risks or undesirable behavior in AI responses.
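
As a quick illustration of the evaluator pattern, here is a minimal sketch assuming the azure-ai-evaluation package's F1ScoreEvaluator interface; the LLM-assisted and safety evaluators follow the same call pattern but also need a model or project configuration:

```python
# Minimal sketch assuming the azure-ai-evaluation package's F1ScoreEvaluator
# interface: score a generated response against a ground-truth answer.
# LLM-assisted and safety evaluators follow the same call pattern but also
# require a model or Azure AI project configuration.
from azure.ai.evaluation import F1ScoreEvaluator

f1 = F1ScoreEvaluator()
result = f1(
    response="Azure AI Foundry supports tracing and evaluation.",
    ground_truth="Azure AI Foundry provides tracing and evaluation tooling.",
)
print(result)  # a dict containing the computed F1 score
```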

Finally, Tomaz wraps up the series with some notes on documentation:

Documentation and material for Azure AI Foundry are plentiful and growing on a daily basis, since the topic of AI and GenAI is ever more popular.

I appreciate the challenge that Tomaz has of putting together 25 blog posts in a month, especially when they’re all tied to a single theme.

Bulk Inserts and High Unused Space in SQL Server Tables

Vitaly Bruk works through an issue:

High allocated unused space is storage assigned to a SQL Server table that isn’t actually used. This condition often indicates internal fragmentation: free space is present within allocated pages. Such fragmentation leads to inefficient storage and can degrade database performance.

Read on for an explanation of the issue, followed by a real-world situation whose ultimate cause was bulk insert operations.
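
If you want to quantify the symptom first, one quick check is reserved versus used pages in sys.dm_db_partition_stats; here is a rough sketch from Python via pyodbc (the connection string and table name are placeholders, and the query works as-is in SSMS):

```python
# Rough sketch: measure allocated-vs-used space for a table via
# sys.dm_db_partition_stats (1 page = 8 KB). The connection string and the
# table name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;"
)
sql = """
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
       SUM(ps.used_page_count)     * 8 / 1024.0 AS used_mb,
       (SUM(ps.reserved_page_count) - SUM(ps.used_page_count)) * 8 / 1024.0
           AS unused_mb
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.MyBigTable')
GROUP BY ps.object_id;
"""
cur = conn.cursor()
for row in cur.execute(sql).fetchall():
    print(row)
```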

Prompt Flow in Azure AI

Tomaz Kastrun continues a series on Azure AI. First up is an introduction to Prompt Flow:

Prompt flow in Azure AI Foundry is a development tool for designing flows (streamlines) for the complete end-to-end development cycle of an LLM-based AI application. You can create, iterate, test, orchestrate, debug, and monitor your flows.

After that, we get a demonstration of Prompt Flow in Python:

Prompty gives you the ability to create an end-to-end solution, like RAG, where you can chat with an LLM over an article or document, or ask it to classify the input data (a list of URLs, …).

Prompty is a markdown file whose YAML front matter encapsulates a series of metadata fields pivotal for defining the model’s configuration and the inputs. After the front matter comes the prompt template, articulated in the Jinja format.
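
To make that structure concrete, here is a made-up, minimal .prompty file written out from Python; the deployment name is a placeholder, and the commented-out load call assumes the promptflow package's Prompty interface:

```python
# A made-up, minimal .prompty file: YAML front matter (metadata, model
# configuration, inputs) followed by a Jinja prompt template with role
# markers. The deployment name is a placeholder, and the commented-out load
# call assumes the promptflow package's Prompty interface.
from pathlib import Path

prompty_text = """---
name: classify_urls
description: Classify a list of URLs by topic.
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o-mini   # placeholder deployment name
inputs:
  urls:
    type: string
---
system:
You are a classifier. Assign each URL to a single topic.

user:
Classify these URLs:
{{urls}}
"""
Path("classify_urls.prompty").write_text(prompty_text)

# Hypothetical usage:
# from promptflow.core import Prompty
# flow = Prompty.load("classify_urls.prompty")
# print(flow(urls="https://example.com/a, https://example.com/b"))
```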
