Press "Enter" to skip to content

Category: Cloud

Monitoring and Alerting on Fabric Capacity Metrics

Ron L’Esteve wants to know what’s happening:

With Microsoft Fabric now generally available, organizations are interested in implementing this flagship Unified Data and AI Intelligence Platform for several reasons. Its native integration within the Azure stack provides seamless and secure access to widely used technologies for data integration, business intelligence, and advanced analytics. Microsoft Fabric’s storage and compute capacity is utilized by resources within this unified analytics platform, including storage repositories, such as data warehouses and data lakes, and compute capacity for Power BI, Pipelines, DW processing, and artificial intelligence (AI)/machine learning (ML) workloads.

Fabric capacity can be purchased on Azure with a pay-as-you-go model, and a 60-day free trial (64 CUs) is offered to test the platform. Organizations that have an existing Power BI Premium capacity can easily enable access to Fabric by using the Microsoft Fabric admin switch. Enabling Fabric in Power BI Premium as opposed to the Azure Portal creates a problem, however: there is no easy way to monitor and set alerts on your Fabric capacity metrics in the Azure Portal.

Click through to learn how to install and use the Microsoft Fabric Capacity Metrics App.
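If you need a stopgap before (or alongside) the app, and you can export utilization data to a file, a quick threshold check is easy to script. The sketch below is hypothetical: the file name and column names (timestamp, cu_utilization_pct) are my own assumptions, not the app's actual schema.

```python
import csv

# Warn before hitting 100% CU utilization and risking throttling.
THRESHOLD_PCT = 80.0

def over_threshold(path: str, threshold: float = THRESHOLD_PCT) -> list[dict]:
    """Return the time points where CU utilization exceeded the threshold."""
    # Hypothetical export of capacity utilization data; the column names
    # here are illustrative, not the Capacity Metrics App's schema.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [r for r in rows if float(r["cu_utilization_pct"]) > threshold]

for point in over_threshold("capacity_metrics_export.csv"):
    print(f"{point['timestamp']}: {point['cu_utilization_pct']}% of capacity")
```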

Building Workers in Azure Data Factory

Martin Schoombee continues a series on orchestration in Azure Data Factory:

We’re finally ready to dive into the Data Factory components that form part of the framework, and we’re going to work our way from the bottom up. To paraphrase the previous blog post, worker pipelines perform the actual work of either moving data (from source to staging) or executing a stored procedure that will load a dimension/fact table.

Although worker pipelines can contain any number of tasks you may need, my worker pipelines that move data from a source system into the staging area follow a similar pattern with at least the following activities:

Click through for that list, as well as more information.
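To make the pattern concrete, here is a minimal, hypothetical sketch of the JSON behind one such data-movement worker pipeline, expressed as a Python dict so it can be templated. The activity and reference type names (Copy, DatasetReference) follow Data Factory's published pipeline schema, but the pipeline and dataset names are made up, and this is nowhere near a complete definition.

```python
import json

# A minimal, hypothetical worker pipeline: one Copy activity that moves a
# source table into staging. Names are illustrative; only the activity and
# reference type keys follow ADF's published pipeline schema.
worker_pipeline = {
    "name": "PL_Worker_Stage_Customers",
    "properties": {
        "activities": [
            {
                "name": "Copy Customers to Staging",
                "type": "Copy",
                "inputs": [{"referenceName": "DS_Source_Customers", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "DS_Staging_Customers", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "SqlServerSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ]
    },
}

print(json.dumps(worker_pipeline, indent=2))
```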

Editing the JSON of a Microsoft Fabric Pipeline

Dennes Torres makes a change:

A Fabric pipeline uses JSON as its source code, and pipelines are also saved in repositories as JSON.

The first idea we get is to edit the pipeline in JSON format: we could copy the JSON and create new pipelines with small variations, making changes directly in the JSON.

However, at first sight we are disappointed, because the pipeline doesn’t allow the JSON to be edited. We have the option to view the JSON, but nothing else.

Read on to see how to tell the Fabric pipeline who’s boss.
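As a taste of why JSON access matters, here is a minimal sketch of the clone-with-variations idea, assuming you have the pipeline's JSON in a local file; the file names and the property being changed are hypothetical.

```python
import json

# Load the JSON of an existing pipeline (copied out of the JSON viewer),
# tweak it, and save a variant. File names and the edited property are
# hypothetical placeholders.
with open("pipeline_original.json") as f:
    pipeline = json.load(f)

pipeline["name"] = "PL_Copy_Sales_Variant"
# e.g., point the same activities at a different source table here
# (the exact property paths depend on the pipeline's activities)

with open("pipeline_variant.json", "w") as f:
    json.dump(pipeline, f, indent=2)
```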

Comparing SQL Server to Databricks

Paul Andrew makes a comparison:

Over the many years I’ve been working in the data/IT industry, Microsoft SQL Server and Azure Databricks have easily become my two favourite data processing tools. When Databricks became a first-class resource in Microsoft Azure, it was a big moment for the evolution of the data platform architectures I’ve designed and built (but architecture isn’t the focus for this blog). That said, rather than considering the tooling and technology as an evolution, I find a lot of people drawing comparisons between the products. This often leads to confusion and friction, as they are ultimately offering a lot of different capabilities, with only some common areas where comparisons could be made.

Read on for Paul’s thoughts. Spoilers: I agree with pretty much all of it.

Building an Elastic Job with Bicep

Josephine Bush flexes some muscles:

Bicep is an open-source Domain-Specific Language (DSL) that simplifies the process of deploying Azure resources. It is an abstraction layer on top of Azure Resource Manager (ARM) templates, making it easier to write and understand infrastructure code. Bicep lets you describe your Azure infrastructure using a cleaner and more concise syntax than traditional ARM templates.

It’s definitely easier to read and work with Bicep than directly with ARM template JSON. Larger Bicep scripts can still be pretty confusing, but they remain easier to write and maintain.
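If you script your deployments, note that the Azure CLI will transpile and deploy a .bicep file directly, so you rarely need to produce the ARM JSON yourself. A minimal sketch, shelling out from Python; the resource group and template file names are assumptions.

```python
import subprocess

# Deploy a Bicep template with the Azure CLI, which compiles .bicep to ARM
# JSON automatically. Resource group and template names are hypothetical.
subprocess.run(
    [
        "az", "deployment", "group", "create",
        "--resource-group", "rg-elastic-jobs",
        "--template-file", "elastic-job.bicep",
    ],
    check=True,
)
```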

Digging into Azure Elastic Jobs

Rod Edwards has a job to do:

After a lengthy period in Public Preview, it seems the boffins at Microsoft have finally pushed Elastic Jobs for SQL Azure DB to general availability. Hooray!

But what are Elastic Jobs? And why would I want to use them in SQL Azure DB?

That’s one of the things you’ll learn in this post.
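For a taste of what's involved: an Elastic Job is defined by calling stored procedures such as jobs.sp_add_job and jobs.sp_add_jobstep in the job agent's database. Here is a minimal sketch using pyodbc; the connection details, job name, and target group are placeholders, the target group is assumed to already exist, and depending on your authentication setup additional parameters may be required.

```python
import pyodbc

# Connect to the Elastic Job agent's job database; connection string
# details are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=jobsdb;"
    "UID=jobadmin;PWD=..."
)
cursor = conn.cursor()

# Create a job and one step. The target group 'AllDatabases' is assumed to
# have been created beforehand (jobs.sp_add_target_group and friends).
cursor.execute("EXEC jobs.sp_add_job @job_name = N'NightlyIndexMaintenance'")
cursor.execute(
    "EXEC jobs.sp_add_jobstep "
    "@job_name = N'NightlyIndexMaintenance', "
    "@command = N'EXEC dbo.IndexMaintenance', "
    "@target_group_name = N'AllDatabases'"
)
conn.commit()
```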

Building a Docker Image with Docker Build Cloud

Andrew Pruski shows off Docker Build Cloud:

In a previous blog post we went through how to build a Docker container image from a remote (GitHub) repository.

Here we’re going to expand on that by actually building the image itself remotely, using Docker Build Cloud.

With Docker Build Cloud, instead of building the image locally and then having to push it to a remote container registry (for example, Docker Hub), we can build remotely and immediately push the image to the registry so that it is available for immediate use by, say, our team members or deployment/testing pipelines.

Read on to see how it works.
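The heart of that workflow is a single buildx invocation pointed at a cloud builder. A minimal sketch, shelling out from Python; the builder name and image tag are assumptions, and a Docker Build Cloud builder is assumed to already be configured.

```python
import subprocess

# Build remotely on a pre-created Docker Build Cloud builder and push the
# result straight to a registry. Builder and image names are placeholders.
subprocess.run(
    [
        "docker", "buildx", "build",
        "--builder", "cloud-myorg-mybuilder",
        "--tag", "myorg/myapp:latest",
        "--push",  # push to the registry instead of loading locally
        ".",
    ],
    check=True,
)
```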

Copying Azure SQL Managed Instance Databases

Scott Klein performs a migration:

So, back to our customer. They essentially lifted and shifted their on-premises databases to Azure SQL Managed Instance and have been using it successfully for nearly two years. Again, this is awesome.

Last week they came to us and asked about reporting with Managed Instance. They were looking at data marts and data warehouses, but we needed more information. It turns out they have some people that just want the ability to query the databases, and potentially hook up Excel to these databases for data analysis.

The caveat is that the people I was talking to didn’t want to give the other group direct access to the production environment. Toootally get that. Yeah, like 100% get it. So, what are the options?

Read on for the solution Scott came up with.

Three Layers of Azure Data Factory Framework Components

Martin Schoombee continues a series on orchestration in Azure Data Factory:

Before we dive into the details of the Data Factory pipelines, it is worth explaining the conceptual structure of my framework and its components. How it all fits together is important, and after reading the post on the metadata as well, the pieces of the puzzle will hopefully start falling into place.

When I started thinking about what I’d like the framework to do, three conceptual layers started to emerge and we’ll review them from the bottom up:

Click through for the description of each layer.
