Press "Enter" to skip to content

Category: Cloud

Backup to URL via Managed Identity in SQL Server 2022

Joey D’Antoni doesn’t trust user logins:

Backing up databases to the cloud is not a new thing. Microsoft introduced the BACKUP TO URL functionality in SQL Server 2012 SP1 CU2. I’m not going to tell you how long ago that was, but it wasn’t last month; Microsoft recently celebrated the 15th anniversary of Azure, so that gives you an idea. When the feature started, it was minimal: you could only back up a database of up to a single terabyte and couldn’t stripe over multiple files. Additionally, you had to use the access key to the storage account, which gave complete control over the storage account, and that wasn’t a good thing.

Read on for a quick overview of the feature and guidance on how it all works.
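
To give a sense of what the modern approach looks like, here is a minimal sketch of a managed identity-based backup in SQL Server 2022. The storage account, container, and database names are hypothetical placeholders, and the server's managed identity is assumed to have the Storage Blob Data Contributor role on the storage account:

-- Create a credential named after the container URL; IDENTITY = 'Managed Identity'
-- tells SQL Server to authenticate as itself rather than with an access key.
CREATE CREDENTIAL [https://demostorage.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'Managed Identity';

-- Back up directly to blob storage; striping across multiple URLs is also supported.
BACKUP DATABASE [DemoDb]
TO URL = 'https://demostorage.blob.core.windows.net/sqlbackups/DemoDb.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;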

Receiving Notification when a Microsoft Fabric Notebook Fails

Gilbert Quevauvilliers gets an e-mail:

What I have found is that when I create a pipeline in Microsoft Fabric that uses a notebook, and there is an error with the notebook, I do not get an alert that the notebook has failed.

This has happened to me in the past, and I have found the pattern below to work consistently to notify me of errors.

In this blog post I will show you how I get notified when a notebook fails in a pipeline.

Read on to learn how.

Azure Elastic Jobs to Run PowerShell and T-SQL

Josephine Bush kicks off a job:

I’ve covered how to create Elastic Jobs in the portal (this one is important to read if you aren’t familiar with elastic jobs already), with Terraform, and with Bicep. Now, I’ll cover how to create them and their associated objects with PowerShell. Don’t do this in prod to start. Always test in a lower environment first.

Click through for the process, as well as the script.
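
If you prefer to stay in T-SQL rather than PowerShell, the same objects can also be created with the stored procedures in the elastic job database. Here is a minimal sketch under that assumption; every name below (server, database, job, credential) is a hypothetical placeholder:

-- Run inside the elastic job database.
-- 1. Create a target group and point it at one database.
EXEC jobs.sp_add_target_group @target_group_name = N'DemoTargetGroup';

EXEC jobs.sp_add_target_group_member
    @target_group_name = N'DemoTargetGroup',
    @target_type       = N'SqlDatabase',
    @server_name       = N'demo-server.database.windows.net',
    @database_name     = N'DemoDb';

-- 2. Create the job and a single T-SQL step.
EXEC jobs.sp_add_job
    @job_name    = N'DemoMaintenanceJob',
    @description = N'Example job created via T-SQL';

EXEC jobs.sp_add_jobstep
    @job_name          = N'DemoMaintenanceJob',
    @step_name         = N'RunStatsUpdate',
    @command           = N'EXEC sp_updatestats;',
    @credential_name   = N'JobRunCredential',  -- omit if the agent authenticates via managed identity
    @target_group_name = N'DemoTargetGroup';

-- 3. Run it once to verify.
EXEC jobs.sp_start_job @job_name = N'DemoMaintenanceJob';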

Using the Microsoft Fabric Capacity Metrics App

Reitse Eskens uses a tool:

In a number of previous blogs, and in my session on load testing Microsoft Fabric, I’ve always questioned the metrics app, and one specific point is the timepoint detail. When you click on a graph, you get the option to go to the timepoint detail and read more.

This is all fun and games, but looking at the list of active processes at that specific point in time, you’ll quickly see processes that are way outside the selected point in time. For me, it rendered this thing useless because it messed up the things I wanted to see.

Read on to see the right way to handle this app.

Controlling Execution Flow in Fabric Data Pipelines

Reza Rad has everything under control:

In Microsoft Fabric, the Data Factory is the workload for ETL and data integration, and the Data Pipeline is a component in that workload for orchestrating the execution flow. There are activities in the pipeline, and you can define in which order you want the activities to run. In this article and video, you will learn about the execution order and output states in Data Pipeline and how they can be used in real-world scenarios of data integration.

The mechanisms here are fundamentally similar to what we’ve had in Azure Data Factory (obviously) and SQL Server Integration Services.

Billing for SQL Database in Microsoft Fabric

Amar Digamber Patil makes an announcement:

Since SQL database is a native item in Fabric, it utilizes Fabric capacity units like other Fabric workloads. Compute charges apply only when the database is actively used, so you only consume what you need. Storage is billed separately on a monthly basis, as are automatic backups, which are retained for seven days.

Billing for compute usage and data storage for SQL databases in Fabric will commence after February 1st.

Click through for more details, including links to further information regarding billing and monitoring.

Object Ownership in Databricks

Chen Hirsh shares a tale of woe:

Have you ever made a change in your system and immediately regretted it? A few weeks ago, I did just that while working with a customer on their Databricks platform. The customer’s IT team made some changes, moving a user to another domain. In Databricks, this is considered a new user, so I added the new user and gave him the same permissions as the old user.

And then, without thinking twice, I deleted the old user from Databricks.

Things did not go well from there. Read on to learn what happened, why, and how to avoid this problem in the future.
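
As a rough illustration of the usual mitigation, assuming Unity Catalog and hypothetical object and principal names, you can reassign ownership of a departing user's objects before removing the account:

-- Review ownership before touching the user.
DESCRIBE SCHEMA EXTENDED demo_catalog.demo_schema;

-- Transfer ownership of the schema and its tables to another principal
-- (often a group rather than an individual user) before deleting the old user.
ALTER SCHEMA demo_catalog.demo_schema OWNER TO `data_engineers`;
ALTER TABLE demo_catalog.demo_schema.sales OWNER TO `data_engineers`;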

Using the Azure SQL DB Query Editor

Josephine Bush writes a query:

I keep losing track of this, wondering where it went. You have to access it at the database level. I’m adding this post to remind me for later. This came in very handy when my home internet went down and I couldn’t authenticate on my phone hotspot without timeouts in Azure Data Studio.

You can log in with SQL Server auth or Entra.

Read on for some notes about limitations. It is definitely a helpful tool for occasional queries, or as a simpler way to access data without having to set up a VPN and a whole bunch of tools.

Security Baselines for Azure SQL Workloads

Mika Sutinen builds a baseline:

I’ve recently had to work a bit more with Microsoft Defender and the vulnerability assessment in Azure. Following those efforts, it dawned on me that the topic of security baselines is sometimes slightly misunderstood. So, in this post, we’ll look into what a security baseline should cover (and what it probably shouldn’t).

But first things first. Security baselines are provided by the Microsoft Defender for Cloud service, which I always recommend enabling for Azure workloads (unless there’s a 3rd-party solution for it already). If you don’t have anything of the sort enabled for your databases and servers, I highly recommend you go and turn Defender on. Seriously. Do it now.

Read on to learn more about why having a security baseline is so important and where to draw the cut-off between security and functionality.

Automatically Refreshing a Power BI Semantic Model after Dataflow Loads

Reza Rad refreshes a model:

Although this seems like a simple thing to do, it is not a function that you can turn on or off. If you have a Dataflow that does the ETL and transforms and prepares the data, then to get the most up-to-date data into the report, you will need to refresh the Power BI semantic model after that; only upon a successful refresh of both the dataflow and the semantic model will you have up-to-date data in the report. Fortunately, in Fabric, this is a straightforward setup. In this article and video, I’ll explain how this is possible.

Click through for the video and the blog post. Granted, this feature is in preview, but using it is pretty straightforward.
