Sandeep Pawar has a script for us:
Sharing a function I have been using to get all the Fabric tenant settings and the description of each setting.
Read on for a quick note and the Python function that does the job.
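Sandeep's function itself is in the post; as a rough sketch of the idea, here is one way it might look, using the Fabric admin REST API's tenant-settings endpoint (the endpoint path and response field names here are my assumptions, not taken from the post):

```python
# Hedged sketch, not the post's actual function: list Fabric tenant
# settings with their descriptions via the admin REST API.
# Endpoint path and response fields are assumptions; check the API docs.
import json
import urllib.request

FABRIC_SETTINGS_URL = "https://api.fabric.microsoft.com/v1/admin/tenantsettings"

def settings_to_rows(payload):
    """Flatten the API payload into (name, title, enabled) tuples."""
    return [
        (s.get("settingName"), s.get("title"), s.get("enabled"))
        for s in payload.get("tenantSettings", [])
    ]

def get_tenant_settings(token):
    """Call the admin API with a bearer token (requires Fabric admin rights)."""
    req = urllib.request.Request(
        FABRIC_SETTINGS_URL,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return settings_to_rows(json.load(resp))
```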
Chris Webb has a public service announcement:
Since the announcement in March that Power BI Premium P-SKUs are being retired and that customers will need to migrate to F-SKU capacities instead, I have been asked the same question several times:
Why are you forcing me to migrate to Fabric???
This thread on Reddit is a great example. What I want to make clear in this post is the following:
Moving from P-SKU capacities to F-SKU capacities is not the same thing as enabling Fabric in your tenant
Click through for Chris’s explanation. Also check out the comments section for this one, as there are plenty of questions and responses in there.
Gilbert Quevauvilliers links everything together:
I have been doing a fair amount of work lately with Fabric Notebooks.
I am always conscious of ensuring that, when I am authenticating using a Service Principal, it is as secure as possible. To do this I have found that I can use Azure Key Vault and Azure identity to successfully authenticate.
Read on for some of the advantages of using Azure Key Vault for this sort of credential management, as well as how to get it all working.
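Gilbert's post has the full walkthrough; as a hedged sketch of the general pattern (vault name, secret name, and token scope are placeholders, and the azure-identity / azure-keyvault-secrets packages are assumed to be available in the notebook):

```python
# Hedged sketch of the pattern: fetch a service principal's client secret
# from Azure Key Vault, then build a credential with it. All names below
# are placeholders, not values from the post.
def vault_url(vault_name):
    """Build the Key Vault endpoint URL from its name."""
    return f"https://{vault_name}.vault.azure.net"

if __name__ == "__main__":
    from azure.identity import ClientSecretCredential, DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # 1. Read the SPN's client secret out of Key Vault, authenticating
    #    as the notebook's ambient identity.
    client = SecretClient(vault_url("my-vault"), DefaultAzureCredential())
    client_secret = client.get_secret("spn-client-secret").value

    # 2. Use it to build the service principal credential.
    spn = ClientSecretCredential(
        tenant_id="<tenant-guid>",
        client_id="<app-guid>",
        client_secret=client_secret,
    )
    # Scope string is an assumption; adjust for the API you are calling.
    token = spn.get_token("https://api.fabric.microsoft.com/.default")
```

The advantage of this shape is that the secret never appears in the notebook's source or its Git history; only the vault and secret names do.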
To manage expectations, Microsoft do openly state during the introduction that this white paper was created by combining multiple online security documents together.
Which probably explains some of the repetition. However, multiple references are better than none.
Plus, in the introduction they provide a link to the main Microsoft Fabric security page, which is a good starting point if you know what security feature you are looking for.
Anyway, the content itself is good. It provides some really good explanations and diagrams relating to certain areas, which help demystify certain aspects of security for some people.
Read on for Kevin’s first impressions of the whitepaper.
Ron L’Esteve wants to know what’s happening:
With Microsoft Fabric now generally available, organizations are interested in implementing this flagship Unified Data and AI Intelligence Platform for several reasons. Its native integration within the Azure stack provides seamless and secure access to widely used technologies for data integration, business intelligence, and advanced analytics. Microsoft Fabric’s storage and compute capacity is utilized by resources within this unified analytics platform, including storage repositories, such as data warehouses and data lakes, and compute capacity for Power BI, Pipelines, DW processing, and artificial intelligence (AI)/machine learning (ML) workloads.
Fabric capacity can be purchased on Azure with a pay-as-you-go model, and a 60-day free trial (64 CUs) is offered to test the platform. Organizations that have an existing Power BI Premium capacity can easily enable access to Fabric by using the Microsoft Fabric admin switch. Enabling Fabric in Power BI Premium, as opposed to the Azure Portal, creates a problem: there is no easy way to monitor and set alerts on your Fabric capacity metrics in the Azure Portal.
Click through to learn how to install and use the Microsoft Fabric Capacity Metrics App.
Gilbert Quevauvilliers wants to know what time it is:
How to add current DateTime to existing PySpark data frame in a Fabric Notebook
In the blog post below, I am going to describe how to add the current Date Time to your existing Spark data frame.
This is really useful when I am inserting data into a Fabric Lakehouse table, and I want to know when the data got inserted.
Read on for the answer.
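The PySpark version boils down to a one-liner with `current_timestamp()`; here is a hedged sketch (column and table names are my own placeholders, not Gilbert's):

```python
# Hedged sketch: stamp every row with a load timestamp before inserting
# into a Lakehouse table. The column name "inserted_at" is my choice.
from datetime import datetime, timezone

def add_inserted_at(rows, now=None):
    """Plain-Python illustration of the same idea:
    append a load timestamp to every row tuple."""
    now = now or datetime.now(timezone.utc)
    return [(*row, now) for row in rows]

if __name__ == "__main__":
    # In a Fabric notebook, the Spark version is a single withColumn call.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    df = df.withColumn("inserted_at", F.current_timestamp())
    df.show(truncate=False)
```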
A Fabric Pipeline uses JSON as source code. Pipelines are also saved in repositories as JSON.
The first idea we get is to edit the pipeline in JSON format. We can copy the JSON and create new pipelines with small variations, making changes directly in the JSON.
However, at first sight we are disappointed, because the pipeline doesn’t allow the JSON to be edited. We have the option to view the JSON, but nothing else.
Read on to see how to tell the Fabric pipeline who’s boss.
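The post's actual workaround is in the article; to illustrate why editable JSON is so attractive, here is a hedged sketch of cloning a pipeline by patching its copied JSON (the JSON shape below is a simplified stand-in, not a real pipeline export):

```python
# Hedged sketch: clone a pipeline definition by editing its copied JSON.
# The structure used here is a simplified assumption for illustration.
import json

def clone_pipeline(pipeline_json, new_name, replacements):
    """Return a renamed copy of a pipeline with string substitutions
    (e.g. swapping a source table name) applied throughout."""
    text = json.dumps(pipeline_json)
    for old, new in replacements.items():
        text = text.replace(old, new)
    clone = json.loads(text)
    clone["name"] = new_name
    return clone
```

With something like this, producing a near-identical pipeline against a different table becomes a dictionary of substitutions instead of a round of clicking through the designer.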
This article is solely about one question: what has to be done if a content creator needs to create and publish reports but the content creator is not allowed to see all the data?
This seems to be a simple requirement: develop content (finally publish the report), but with Row Level Security (RLS) applied.
To answer the question, I think it’s necessary to understand the following core principle, at least to some extent:
- Workspace roles
Read on for more information about how workspace roles work in this domain.
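One detail worth remembering alongside workspace roles: RLS is only enforced for users with the Viewer role, while Admins, Members, and Contributors bypass it. A hedged sketch of assigning that role via the Power BI REST API (helper names are mine; the endpoint and body shape follow the documented Groups - Add Group User call):

```python
# Hedged sketch: grant a report consumer the Viewer workspace role via
# the Power BI REST API, since RLS is only enforced for Viewers.
import json
import urllib.request

def add_user_body(email, role="Viewer"):
    """Request body for the Groups - Add Group User API."""
    return {"emailAddress": email, "groupUserAccessRight": role}

def add_workspace_user(token, workspace_id, email, role="Viewer"):
    """POST the membership change; needs a bearer token with rights
    on the workspace."""
    url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/users"
    req = urllib.request.Request(
        url,
        data=json.dumps(add_user_body(email, role)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```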
Koen Verbeeck lays out a process:
The goal of metadata driven code is that you build something only once. You need to extract from relational databases? You build one pipeline that can connect to a relational source, and you parameterize everything (server name, database name, source schema, source table, destination server name, destination table et cetera). Once this parameterized piece of code is ready, all you must do is enter metadata about the sources you want to extract. If at a later point an additional relational source needs to be extracted, you don’t need to create a brand-new pipeline. All you need to do is enter a new line of data in your metadata repository.
Another benefit, aside from speeding up development (after you’ve made the initial effort of creating your metadata-driven pipeline), is that everything is consistent. You tackle a certain pattern the same way every time. If there’s a bug, you need to fix it in one single location.
Read on to see how this works. The idea is certainly not new, as Koen mentions, but there are some specific factors that come into play for Microsoft Fabric pipelines.
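Koen's example uses Fabric pipelines; as a minimal sketch of the principle (the metadata columns and names below are invented for illustration, not taken from the post):

```python
# Hedged sketch of the metadata-driven idea: one parameterized "pipeline"
# (here just a function) plus a metadata table of sources. Adding a new
# source means a new metadata row, not new code.
def build_copy_config(row):
    """Turn one metadata row into the parameters for a generic copy step."""
    return {
        "source": f"{row['server']}.{row['database']}.{row['schema']}.{row['table']}",
        "destination": f"{row['dest_server']}.{row['dest_table']}",
    }

# The metadata repository, in real life a table rather than a literal.
metadata = [
    {"server": "srv1", "database": "sales", "schema": "dbo",
     "table": "Orders", "dest_server": "lake", "dest_table": "raw.Orders"},
    {"server": "srv1", "database": "sales", "schema": "dbo",
     "table": "Customers", "dest_server": "lake", "dest_table": "raw.Customers"},
]

# One generic pattern applied to every row.
configs = [build_copy_config(row) for row in metadata]
```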
Kevin Chant opens a can of worms:
I got asked about it this week during the Learn Together session I did alongside Shabnam Watson (l/X). Plus, it is a highly debated topic in our community, and I wanted to share my thoughts about it.
My personal opinion is that it depends. However, the number you choose depends on a variety of factors, which I intend to cover in this post.
By the end of this post, you will know my personal opinions as to why, plus plenty of things to consider when deciding on the number of workspaces to implement.
Read on for Kevin’s thoughts. My quick opinion is: one workspace per layer. Just from a logistical standpoint, keeping several layers together in one workspace is an immense challenge and typically requires exposing data engineering details (like what “gold”/”silver” or “curated”/”refined” actually means) to end users.