Press "Enter" to skip to content

Category: Python

Migrating or Copying a Semantic Model across Microsoft Fabric Workspaces

Sandeep Pawar makes a move:

Here is a quick script to copy a semantic model from one workspace to another in the same tenant, assuming you are a contributor or above in both workspaces. I tested this with a Direct Lake model, but it should work for any other semantic model. This just copies the metadata (not the data in the model), so be sure to set up the other configurations (RLS members, refresh schedule, settings, etc.). Those can also be changed programmatically, thanks to Semantic Link Labs, but I will cover that in a future post.

Read on for the script, as well as an update from Sandeep on how you can do this even more easily.
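
For a sense of what such a script can look like, here is a minimal sketch using sempy (Semantic Link) in a Fabric notebook: pull the model's TMSL definition from the source workspace, wrap it in a createOrReplace command, and execute it against the target workspace. The workspace and model names are placeholders, and Sandeep's actual script may take a different route.

```python
import json
import sempy.fabric as fabric

source_workspace = "Source Workspace"      # placeholder names
target_workspace = "Target Workspace"
model_name = "My Direct Lake Model"

# Grab the model's metadata (TMSL) from the source workspace
tmsl = json.loads(fabric.get_tmsl(dataset=model_name, workspace=source_workspace))

# Wrap it in a createOrReplace command and run it against the target workspace
command = {
    "createOrReplace": {
        "object": {"database": model_name},
        "database": tmsl,
    }
}
fabric.execute_tmsl(script=json.dumps(command), workspace=target_workspace)
```

Keep Sandeep's caveat in mind: only the metadata comes across, so RLS membership, refresh schedules, and other settings still need to be reapplied on the copy.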


Visualizing a SQL Server Kubernetes Statefulset

Andrew Pruski builds a diagram:

The other day I came across an interesting repo on GitHub, KubeDiagrams.

What this repo does is generate Kubernetes architecture diagrams from Kubernetes manifest files…nice!

Deploying applications to Kubernetes can get complicated fast…especially with stateful applications such as SQL Server.

So having the ability to easily generate diagrams is really helpful…because we all should be documenting everything, right?

Click through for instructions and a couple of gotchas Andrew ran into along the way.


Automating V-Order Optimization in Microsoft Fabric

Miles Cole writes a script:

I’ve previously blogged in detail about V-Order optimization. In this post, I want to revisit the topic and demonstrate how V-Order can be strategically enabled in a programmatic fashion.

Since V-Order provides the most benefit and consistent improvement for Direct Lake Semantic Models, why not leverage platform metadata to enable it automatically—but only for Delta tables used by these models?

This will be a short blog—let’s get straight to the concept, the source code, and then move on to more strategic use of this feature.

Click through for the process and an explanation of what’s happening in the accompanying Gist.
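
Conceptually, the automation boils down to something like the sketch below: look up which tables the Direct Lake models use, then flip the V-Order table property on just those Delta tables. The property name and the assumption that the semantic model's table names match the lakehouse Delta table names are things to verify against Miles's Gist.

```python
import sempy.fabric as fabric
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # already available in a Fabric Spark notebook

workspace = "My Workspace"                   # placeholder
direct_lake_models = ["Sales Model"]         # placeholder: the Direct Lake models to optimize for

# Collect the tables referenced by each Direct Lake model
tables = set()
for model in direct_lake_models:
    tables.update(fabric.list_tables(dataset=model, workspace=workspace)["Name"])

# Enable V-Order writes only on those Delta tables
# (property name per the Fabric docs; confirm it matches your runtime version)
for table in tables:
    spark.sql(
        f"ALTER TABLE {table} SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')"
    )
```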


Kernel Methods in Python

Matthew Mayo does a bit of kernel work:

Kernel methods are a powerful class of machine learning algorithms that allow us to perform complex, non-linear transformations of data without explicitly computing the transformed feature space. These methods are particularly useful when dealing with high-dimensional data or when the relationship between features is non-linear.

Kernel methods rely on the concept of a kernel function, which computes the dot product of two vectors in a transformed feature space without explicitly performing the transformation. This is known as the kernel trick. The kernel trick allows us to work in high-dimensional spaces efficiently, making it possible to solve complex problems that would be computationally infeasible otherwise.

Read on for the pros and cons of kernel methods and a pair of techniques that use them.
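
As a quick, generic illustration of the kernel trick (not taken from Matthew's article): an RBF-kernel SVM separates data that no linear boundary can, without ever materializing the implicit feature space.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: not linearly separable in the original space
X, y = make_circles(n_samples=500, factor=0.3, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The RBF kernel k(x, z) = exp(-gamma * ||x - z||^2) corresponds to an
# infinite-dimensional feature map; the SVM only ever evaluates dot products there.
linear_svm = SVC(kernel="linear").fit(X_train, y_train)
rbf_svm = SVC(kernel="rbf", gamma=2.0).fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_svm.score(X_test, y_test))
```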


Near Real-Time Data Plotting in Python

Hristo Hristov wants to know where the International Space Station is:

Gathering data on events as they occur in real time is a powerful and popular technique in scientific and industrial computing. If we can query an online REST API representing the position of the International Space Station (ISS), how can we visualize these data in real time? How do you plot the data points as soon as they arrive and observe changes in the station’s position immediately? Let’s look at using Python for a real-time plot of data.

Click through for the solution and plenty of explanation along the way.
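
For flavor, a bare-bones version of the idea looks something like this sketch, polling the public Open Notify API and redrawing a matplotlib plot as each sample arrives (the article may use a different data source and builds the plotting out far more thoroughly):

```python
import requests
import matplotlib.pyplot as plt

URL = "http://api.open-notify.org/iss-now.json"   # public ISS position API

lats, lons = [], []
plt.ion()                                         # interactive mode: redraw without blocking
fig, ax = plt.subplots()

for _ in range(60):                               # take 60 samples, one every 5 seconds
    position = requests.get(URL, timeout=10).json()["iss_position"]
    lats.append(float(position["latitude"]))
    lons.append(float(position["longitude"]))

    ax.clear()
    ax.plot(lons, lats, "o-", markersize=3)
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    ax.set_title("ISS ground track, near real time")
    plt.pause(5)                                  # draw the update, then wait for the next sample
```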


Data Masking in Azure Databricks

Rayis Imayev hides some information:

One way to protect sensitive information from end users in a database is through dynamic masking. In this process, the actual data is not altered; however, when the data is exposed or queried, the results are returned with modified values, or the actual values are replaced with special characters or notes indicating that the requested data is hidden for protection purposes.

In this blog, we will discuss a different approach to protecting data, where personally identifiable information (PII – a term you will frequently encounter when reading about data protection and data governance) is actually changed or updated in the database / persistent storage. This ensures that even if someone gains access to the data, nothing will be compromised. This is usually needed when refreshing a lower environment from a production database or dataset containing PII data elements. Your QA team will appreciate having a realistic data volume that resembles the production environment, but with masked data.

Rayis goes into depth on the process. I could also recommend checking out the article on row filters and column masks for more information.
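
Rayis has his own implementation in the post; purely as a sketch of the persistent-masking idea in PySpark, with made-up table and column names, it might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical customer table being refreshed down to a QA environment
df = spark.table("prod_copy.customers")

masked = (
    df
    # Irreversibly hash direct identifiers: joins and distinct counts still work,
    # but the original values cannot be read back
    .withColumn("email", F.sha2(F.col("email"), 256))
    .withColumn("national_id", F.sha2(F.col("national_id"), 256))
    # Redact contact details outright
    .withColumn("phone", F.lit("XXX-XXX-XXXX"))
    # Non-PII columns (keys, amounts, dates) stay untouched so volumes remain realistic
)

masked.write.mode("overwrite").saveAsTable("qa.customers")
```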


Custom SCD2 with PySpark

Abhishek Trehan creates a type-2 slowly changing dimension:

A Slowly Changing Dimension (SCD) is a dimension that stores and manages both current and historical data over time in a data warehouse. It is considered and implemented as one of the most critical ETL tasks in tracking the history of dimension records.

SCD2 is a dimension that stores and manages current and historical data over time in a data warehouse. The purpose of an SCD2 is to preserve the history of changes. If a customer changes their address, for example, or any other attribute, an SCD2 allows analysts to link facts back to the customer and their attributes in the state they were at the time of the fact event.

Read on for an implementation in Python.
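
Abhishek's implementation is in the post. As a compact, illustrative sketch of the core SCD2 move in PySpark — close the current row for any customer whose tracked attribute changed and open a new current row — with hypothetical table and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical tables: an existing dimension and an incoming snapshot
dim = spark.table("dim_customer")        # customer_id, address, is_current, start_date, end_date
src = spark.table("staging_customer")    # customer_id, address

current = dim.filter("is_current = true")

# Current rows whose tracked attribute (address) differs from the incoming value
changed = (
    current.alias("d")
    .join(src.alias("s"), F.col("d.customer_id") == F.col("s.customer_id"))
    .filter(F.col("d.address") != F.col("s.address"))
)

# 1) Close out the old versions
closed = (
    changed.select("d.*")
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# 2) Open new current versions carrying the changed attribute
opened = (
    changed.select(
        F.col("s.customer_id").alias("customer_id"),
        F.col("s.address").alias("address"),
    )
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)

# Keep every existing row except the current ones we just closed
to_close_keys = changed.select(F.col("d.customer_id").alias("customer_id"))
keep = dim.join(
    to_close_keys.withColumn("is_current", F.lit(True)),
    ["customer_id", "is_current"],
    "left_anti",
)

# Brand-new customers and surrogate-key generation are omitted for brevity
new_dim = keep.unionByName(closed).unionByName(opened, allowMissingColumns=True)
new_dim.write.mode("overwrite").saveAsTable("dim_customer_scd2")
```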


The Importance of Virtual Environments in Python

Jack Wallen proselytizes for virtual environments:

When developing with Python, chances are pretty good that you’ll need to install various libraries, dependencies and apps to get your project started. The good news is that (in most cases) those installations are pretty straightforward (thanks to pip and other tools).

Problems can arise, however, if you simply install all of those project requirements on your system. It’s like installing any given application, hoping it won’t cause problems with other applications, your OS or your data. In most cases, it’s safe, but there’s always that one instance where things can quickly go awry.

Read on to see how virtual environments can alleviate many of these pains. It took a while for me to understand exactly why virtual environments are so important, but this is definitely something I recommend doing if you work with Python in any capacity.
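
The usual route is `python -m venv .venv` at the command line, but the standard library also exposes the same machinery from Python itself, which makes the isolation boundary easy to see:

```python
import subprocess
import venv
from pathlib import Path

env_dir = Path(".venv")

# Create an isolated environment with its own interpreter and pip
venv.create(env_dir, with_pip=True)

# Install project dependencies into that environment, not into the system Python
pip = env_dir / "bin" / "pip"            # on Windows this is .venv\Scripts\pip.exe
subprocess.run([str(pip), "install", "requests"], check=True)   # "requests" is just an example
```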


Finding Capacity-Level Fabric Settings with Semantic Link Labs

Sandeep Pawar lists some Microsoft Fabric properties:

Just before the holidays last year, Michael Kovalsky released version 0.8.10 of Semantic Link Labs with a bunch of new helpful functions, among them list_server_properties(), which lists the properties of an Analysis Services instance. As you know, in Fabric, the workspace acts as a server that is tied to a capacity. You define these server properties in the Capacity Settings. As far as I am aware, there wasn’t an API to get these capacity settings for audit/monitoring/debugging purposes. With this new function, you can programmatically get the Semantic Model (i.e., Power BI workload) settings.

Click through for an example.
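
In a Fabric notebook, the call itself is about as simple as it gets; this mirrors the function named in the quote, and the Semantic Link Labs documentation covers any optional parameters.

```python
import sempy_labs as labs

# Lists the Analysis Services server properties for the capacity behind
# the current workspace (in Fabric, the workspace acts as the server)
df = labs.list_server_properties()
print(df)
```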


Prompt Flow in Azure AI

Tomaz Kastrun continues a series on Azure AI. First up is an introduction to Prompt Flow:

Prompt flow in Azure AI Foundry is a development tool for designing flows (streamlines) for the complete end-to-end development cycle of an LLM-based AI application. You can create, iterate, test, orchestrate, debug, and monitor your flows.

After that, we get a demonstration of Prompt Flow in Python:

Prompty gives you the ability to create an end-to-end solution, like RAG, where you can chat with an LLM over an article or document, or where you can ask it to classify the input data (a list of URLs, …).

Prompty is a markdown file, structured in YAML, that encapsulates a series of metadata fields pivotal for defining the model’s configuration and the inputs. After this front matter comes the prompt template, articulated in Jinja format.
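
If you want to experiment with a .prompty file from Python, the promptflow quickstart pattern looks roughly like this; the file path and input name are placeholders, and the exact approach in Tomaz's demonstration may differ.

```python
from promptflow.core import Prompty

# Load the .prompty file (YAML front matter + Jinja prompt template) as a callable flow
flow = Prompty.load(source="basic.prompty")

# Keyword arguments map to the inputs declared in the front matter
result = flow(question="Classify these URLs by topic.")
print(result)
```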
