Press "Enter" to skip to content

Category: Cloud

Cannot Open Backup Device with SQL Managed Instance and SAS Token

Sam Garth troubleshoots an issue:

On a recent case, a customer was trying to restore a database from a storage account using a SAS token when they received the error below.

An exception occurred while executing a Transact-SQL statement or batch.
(Microsoft.SqlServer.ConnectionInfo)

Additional information:
Cannot open backup device
https://storage.blob.core.windows.net/container/dbbackup_2024_03_21_121901.bak
Operating system error 86(The specified network password is not correct.).
RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3201)

Read on for the troubleshooting steps Sam followed to solve the problem.
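
For context, operating system error 86 in this situation usually points at the credential rather than the network: a container-scoped credential whose name doesn't exactly match the URL, or a SAS secret pasted with its leading question mark. Here is a minimal sketch (not Sam's exact fix) of the setup RESTORE FROM URL expects, driven from Python with pyodbc; the server, login, and token are all placeholders.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.public.abc123.database.windows.net,3342;"
    "DATABASE=master;UID=sqladmin;PWD=<password>;Encrypt=yes"
)
conn.autocommit = True  # run the DDL immediately, outside a transaction

# The credential name must match the container URL exactly (no trailing
# slash), and the SAS secret must NOT include the leading '?' that the
# Azure portal adds when you copy a token.
conn.execute("""
CREATE CREDENTIAL [https://storage.blob.core.windows.net/container]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=2023-11-03&ss=b&srt=co&sp=rl&sig=<signature>';
""")

# With the credential in place, the header read should succeed.
conn.execute("""
RESTORE HEADERONLY
FROM URL = 'https://storage.blob.core.windows.net/container/dbbackup_2024_03_21_121901.bak';
""")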


Azure Regions and Pricing

Koen Verbeeck has a public service announcement:

Today I was having a nice discussion with some colleagues about Fabric and pricing/licensing came up. I mentioned an F2 is only around €250 a month, but a colleague said “no no, it’s over €300”.

There can be significant differences in prices between regions, not just for Microsoft Fabric but for a variety of other services as well. This depends on how new the hardware is, how much demand there is in the region, and a few other factors. Cloud Price does a good job of keeping track of VM pricing by region, and even tells you the cheapest region for each class of VM. For other services, you may have to trawl through Azure APIs and pricing pages to get the best deal.
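
If you do end up trawling, the public Azure Retail Prices API makes the comparison scriptable with no authentication required. A quick sketch; the service name in the filter is an assumption you would adjust for whatever you're pricing.

import requests

url = "https://prices.azure.com/api/retail/prices"
params = {
    "currencyCode": "EUR",
    "$filter": "serviceName eq 'Microsoft Fabric' and priceType eq 'Consumption'",
}

prices = []
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    data = resp.json()
    prices += [(item["armRegionName"], item["skuName"], item["retailPrice"])
               for item in data["Items"]]
    url, params = data.get("NextPageLink"), None  # next link carries the filter

# Cheapest regions first
for region, sku, price in sorted(prices, key=lambda p: p[2])[:10]:
    print(f"{region:<20} {sku:<15} {price:>12.6f}")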


Metadata Tables and Azure Data Factory

Martin Schoombee brings back metadata tables:

The metadata that drives the execution within a framework is probably the most critical part. Going back to our analogy of building a house, the metadata would be the foundation. It is here where you are going to make some architectural decisions outside of which the framework cannot operate.

One such decision is how configurable or flexible you’d like the framework to be. In other words, how many attributes would you like to be dynamic and/or have the option to change during execution? It seems like an easy choice, and most engineers would lean towards “everything” or “as much as possible” as an answer. In reality, however, the trade-off is complexity: the more dynamic you make the framework, the more complicated it becomes. And you pay for that complexity later, when you need to maintain it or add new functionality.

Read on to see how it all fits together.
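
To make that concrete, here is an illustrative shape such a metadata table might take. The table and column names are my own, not Martin's design; the point is that every column you add here is one more attribute the framework has to resolve at execution time.

import pyodbc

ddl = """
CREATE TABLE etl.PipelineMetadata (
    MetadataId      int IDENTITY PRIMARY KEY,
    SourceSystem    nvarchar(100) NOT NULL,  -- e.g. 'CRM', 'ERP'
    SourceObject    nvarchar(200) NOT NULL,  -- table/view/endpoint to copy
    TargetSchema    nvarchar(100) NOT NULL,
    TargetTable     nvarchar(200) NOT NULL,
    LoadType        varchar(20)   NOT NULL,  -- 'Full' or 'Incremental'
    WatermarkColumn nvarchar(128) NULL,      -- only for incremental loads
    IsEnabled       bit NOT NULL DEFAULT 1,  -- toggle a load without redeploying
    SortOrder       int NOT NULL DEFAULT 100 -- execution ordering
);
"""

with pyodbc.connect("DSN=MetadataDb") as conn:  # placeholder connection
    conn.autocommit = True
    conn.execute(ddl)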


Enhanced Patching for SQL Server on Azure VMs

Taryn Pratt has an update:

We are pleased to announce the GA release of enhanced patching capabilities for SQL Server on Azure VMs using Azure Update Manager. When you register your SQL Server on Azure VM with the SQL IaaS Agent extension, you unlock a number of feature benefits, including patch management at scale with Azure Update Manager.

Read on to see what this does, how you can set it up, and how you can migrate from the SQL Server IaaS agent extension’s automated patching service.


Elastic Jobs for Azure SQL DB

Josephine Bush digs into Elastic Jobs:

I know if you are a SQL Server DBA using Azure SQL DB, you’ve been sorely missing the agent. Enter Elastic Jobs to help you schedule jobs more easily against Azure SQL DB. I will cover setting up and scheduling Elastic Jobs to execute Ola index maintenance. If you’ve used Elastic Jobs in the past, there are some very nice improvements with the recent GA release, so don’t feel discouraged if you didn’t like it in the past—it’s way better now!

Read on for a deep dive into Elastic Jobs.
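
To give a sense of the moving parts, here is a hedged sketch of the job side: one job, one step running Ola Hallengren's IndexOptimize, and a daily schedule. It assumes the job database and a target group named AllUserDbs already exist (all names are placeholders), and it drives the real jobs.* stored procedures from Python with pyodbc, though you could run the same EXEC statements in any query tool.

import pyodbc

# Connect to the *job* database, where the jobs.* procedures live
conn = pyodbc.connect("DSN=ElasticJobDb")  # placeholder DSN
conn.autocommit = True

conn.execute("EXEC jobs.sp_add_job @job_name = N'NightlyIndexMaintenance';")

# The step runs in the context of each target database, so Ola's
# IndexOptimize procedure must already be deployed to those databases.
conn.execute("""
EXEC jobs.sp_add_jobstep
     @job_name          = N'NightlyIndexMaintenance',
     @step_name         = N'IndexOptimize',
     @target_group_name = N'AllUserDbs',
     @command           = N'DECLARE @db sysname = DB_NAME();
                            EXECUTE dbo.IndexOptimize @Databases = @db, @LogToTable = ''Y'';';
""")

# Enable the job and run it once a day
conn.execute("""
EXEC jobs.sp_update_job
     @job_name                = N'NightlyIndexMaintenance',
     @enabled                 = 1,
     @schedule_interval_type  = N'Days',
     @schedule_interval_count = 1;
""")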


Tips for Configuring Alerts for Azure Data Factory

Teo Lachev shares some advice:

Alerting is an important monitoring task for any ETL process. Azure Data Factory can integrate with a generic Azure event framework (Azure Monitor), which makes it somewhat unintuitive for ETL monitoring. You can set up and change the alerts using the ADF Monitoring hub.

Read on for five pieces of advice, covering in particular how to set up one of these alerts.
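
If you would rather keep these alerts in code than click through the portal, the azure-mgmt-monitor package can define the same kind of metric alert. A hedged sketch, with every resource ID a placeholder; PipelineFailedRuns is the ADF metric covering failed pipeline runs.

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

sub_id = "00000000-0000-0000-0000-000000000000"
adf_id = (f"/subscriptions/{sub_id}/resourceGroups/my-rg/providers/"
          "Microsoft.DataFactory/factories/my-adf")
action_group_id = (f"/subscriptions/{sub_id}/resourceGroups/my-rg/providers/"
                   "microsoft.insights/actionGroups/my-action-group")

client = MonitorManagementClient(DefaultAzureCredential(), sub_id)

client.metric_alerts.create_or_update(
    "my-rg",
    "adf-pipeline-failures",
    MetricAlertResource(
        location="global",             # metric alert rules are global resources
        severity=2,
        enabled=True,
        scopes=[adf_id],
        evaluation_frequency="PT5M",   # how often the rule is evaluated
        window_size="PT5M",            # lookback window per evaluation
        criteria=MetricAlertSingleResourceMultipleMetricCriteria(all_of=[
            MetricCriteria(
                name="failed runs",
                metric_name="PipelineFailedRuns",
                time_aggregation="Total",
                operator="GreaterThan",
                threshold=0,
            ),
        ]),
        actions=[MetricAlertAction(action_group_id=action_group_id)],
    ),
)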


Documenting Table Columns with the Python SDK for Purview

Danaraj Ram Kumar breaks out the Python IDE:

There are several approaches to working with Microsoft Purview entities programmatically, especially when you need to perform bulk operations such as documenting a large number of tables and columns dynamically.

This article shows how to use the Python SDK for Purview to programmatically document Purview table columns in bulk, assuming there are many tables and columns that need to be documented automatically based on a reference table; in this example, a data dictionary maintained in Excel.

Alternatively, you can work with the Purview REST APIs directly; the Python SDK for Purview is a wrapper that makes it easier to interact programmatically with the underlying Purview Atlas REST APIs.

Click through for sample code and explanations.
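
As a flavor of the pattern, here is a hedged sketch using the community pyapacheatlas wrapper rather than Danaraj's exact code. The entity type and qualified-name format are assumptions that fit Azure SQL assets, and the Excel column headers are invented.

import pandas as pd
from pyapacheatlas.auth import ServicePrincipalAuthentication
from pyapacheatlas.core import PurviewClient

auth = ServicePrincipalAuthentication(
    tenant_id="<tenant>", client_id="<client>", client_secret="<secret>")
client = PurviewClient(account_name="my-purview-account", authentication=auth)

# Data dictionary with one row per column:
# schema | table | column | description
dictionary = pd.read_excel("data_dictionary.xlsx")

for row in dictionary.itertuples():
    # Qualified-name format used for Azure SQL columns in Purview
    qualified_name = (
        f"mssql://myserver.database.windows.net/mydb/"
        f"{row.schema}/{row.table}#{row.column}"
    )
    client.partial_update_entity(
        typeName="azure_sql_column",
        qualifiedName=qualified_name,
        attributes={"description": row.description},
    )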


Elastic Jobs in Azure SQL DB Now GA

Srini Acharya makes an announcement:

Elastic Jobs is a fully integrated Azure SQL Database service that allows you to automate and manage administrative tasks across multiple SQL databases in a secure, scalable way. It can run one or more T-SQL job scripts in parallel using the Azure portal, PowerShell, REST, or T-SQL APIs. Jobs can be run on a schedule or on demand, targeting any tier of Azure SQL Database. Job targets can include all databases in a server, in an elastic pool, across multiple servers, and even databases across different subscriptions and geographic regions in Azure. Servers and pools are dynamically enumerated at runtime, so jobs run against all databases that exist in the target group at the time of execution.

If you’ve held off on Azure SQL DB because of a lack of the SQL Agent, take a look at this option.
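
The dynamic enumeration is the part worth seeing in code. Here is a hedged sketch of building a server-wide target group with one exclusion: the jobs.* procedures are the real Elastic Jobs T-SQL API, while the server, database, and group names are placeholders.

import pyodbc

conn = pyodbc.connect("DSN=ElasticJobDb")  # placeholder: the job database
conn.autocommit = True

conn.execute("EXEC jobs.sp_add_target_group @target_group_name = N'AllUserDbs';")

# Include every database on the server; membership is re-enumerated
# each time a job targeting this group runs
conn.execute("""
EXEC jobs.sp_add_target_group_member
     @target_group_name = N'AllUserDbs',
     @membership_type   = N'Include',
     @target_type       = N'SqlServer',
     @server_name       = N'myserver.database.windows.net';
""")

# Carve out a database that jobs should never touch
conn.execute("""
EXEC jobs.sp_add_target_group_member
     @target_group_name = N'AllUserDbs',
     @membership_type   = N'Exclude',
     @target_type       = N'SqlDatabase',
     @server_name       = N'myserver.database.windows.net',
     @database_name     = N'LoadTestScratch';
""")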


Speeding up Databricks Lakehouse Queries with Redis

Drew Furgiuele has the need for speed:

Since compute and storage are now separated, this means that any time you want to work with your data, you need some form of compute engine that is capable of connecting to and reading your data from your storage locations. Compute engines vary, but one of the best is Apache Spark, which gives you a great distributed compute layer suitable for all sorts of workloads, whether they be analytical and ad-hoc queries, dashboard or BI workloads, data engineering related, or even data science or AI/ML use cases. It really can do it all, and it does it very well.

But what about operational use cases? For instance, let’s say your Lakehouse is hosting some data that is critical to customer-facing systems that demand low-latency response times, such as real-time user lookups, API interfaces, or event-driven systems. The overhead required to take a query, schedule it, and run it can be in the hundreds of milliseconds. For some workloads, that’s a lifetime.

Read on to see how you can build a caching layer on top of certain lakehouse operations when some operation needs to be as fast as possible.
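
The shape of the solution is the classic cache-aside pattern. Here is a minimal sketch (not Drew's implementation) using the redis and databricks-sql-connector packages, with placeholder connection details and an invented customers table.

import json

import redis
from databricks import sql  # pip install databricks-sql-connector

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_customer(customer_id: int) -> dict:
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:      # cache hit: microseconds, not hundreds of ms
        return json.loads(cached)

    # Cache miss: pay the warehouse round trip once, then serve from Redis
    with sql.connect(
        server_hostname="adb-1234567890.12.azuredatabricks.net",
        http_path="/sql/1.0/warehouses/abc123",
        access_token="<token>",
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT * FROM lakehouse.gold.customers WHERE id = %(id)s",
                {"id": customer_id},
            )
            record = cur.fetchone()  # assumes the id exists
            row = dict(zip([c[0] for c in cur.description], record))

    cache.set(key, json.dumps(row, default=str), ex=300)  # 5-minute TTL
    return row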


Searching for Files in a Blob Storage Container

Andy Brownsword hits one of my bugbears:

Shifting from handling data on premises to Azure has been a real change of mindset. Whilst what I want to build may be similar, the how part is completely different. There’s a learning curve not just to the tooling but to how you use it, too.

This is one of those instances.

I had a storage container with files which had a date in their name. I wanted to perform a wildcard search to select some of them. That sounds straightforward, right?

This is unnecessarily painful, especially if you’re trying to find the right full backup in a container filled with full and transaction log backup files. Andy’s solution does work but also requires a full scan of keys. And I don’t think there’s a better way to do it.
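
For anyone hitting the same wall, the general workaround in Python looks like this: Blob Storage only filters server-side on a name prefix, so you list with whatever fixed prefix you have and apply the real wildcard client-side. If the variable part comes first in the name, there is no useful prefix and you scan everything, which is exactly the pain point. Account and container names here are placeholders.

from fnmatch import fnmatch

from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://mystorageacct.blob.core.windows.net",
    container_name="backups",
    credential="<sas-token-or-account-key>",
)

# Server-side: name_starts_with narrows the listing (prefix only, no wildcards).
# Client-side: fnmatch provides the actual wildcard semantics.
pattern = "dbbackup_2024_03_*.bak"
matches = [
    blob.name
    for blob in container.list_blobs(name_starts_with="dbbackup_2024_03_")
    if fnmatch(blob.name, pattern)
]
print(matches)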
