Press "Enter" to skip to content

Online DR from SQL Server 2022 and Azure SQL MI Now Available

Djordje Jeremic announces general availability of one of the key selling points from SQL Server 2022:

Today, we are announcing the general availability of the following two major capabilities of the Managed Instance link feature with SQL Server 2022:

  • Two-way failover between SQL Server 2022 and SQL Managed Instance through the link to unlock true disaster recovery (DR) with Azure
  • Creating a link from SQL Managed Instance to SQL Server 2022 to unlock off-PaaS data mobility for regulatory and dev/test scenarios 

Click through for more detail.
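
The link is implemented on top of distributed availability group technology, so on the SQL Server 2022 side you can check the link and each replica's current role with the regular availability group DMVs. A minimal sketch (my query, not something from the announcement):

    -- List distributed AGs (which back MI links) and each replica's role
    SELECT ag.name                          AS ag_name,
           ag.is_distributed,               -- 1 = distributed availability group
           ars.role_desc,                   -- PRIMARY or SECONDARY
           ars.connected_state_desc,
           ars.synchronization_health_desc
    FROM sys.availability_groups AS ag
    JOIN sys.dm_hadr_availability_replica_states AS ars
        ON ag.group_id = ars.group_id
    WHERE ag.is_distributed = 1;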

Client Diagnostics in Cosmos DB

Arthur Daniels digs into diagnostic data:

This topic keeps coming up with my customers so the purpose of this blog post is to keep all the useful information in one post. Think of this post as a consolidation of different pieces of information. If you’re stuck on what “pipelined”, “created”, or “transit time” means, you’re in the right spot.

We are not talking about Diagnostic Settings/Log Analytics in this post; I’d describe those as server-side rather than client-side diagnostics. They are useful in different ways. Our client diagnostics will help us understand the path that a request takes from the application to Cosmos DB and back, along with any latency on that path.

Note: before we get started, check to see if you’re using Direct or Gateway mode. Sending your requests in Direct mode with the .NET or Java SDKs will usually result in faster requests.

Read on to see how to retrieve and interpret this diagnostic data.

Permissions and BACPAC Files in Azure SQL DB

Roberto Yonekawa diagnoses an error:

We had a support request where the customer was getting an error when trying to export his Azure SQL Database to a bacpac file using the SqlPackage command-line utility.

Error message:

Microsoft.Data.Tools.Diagnostics.Tracer Error: 19 : 2024-08-21T16:10:56 : Microsoft.SqlServer.Dac.DacServicesException: One or more unsupported elements were found in the schema used as part of a data package.

Error SQL71627: The element Permission has property Permission set to a value that is not supported in Microsoft Azure SQL Database v12.

Click through for the specific issue Roberto found. I’d imagine that there are other permission sets that are incompatible with Azure SQL Database and would cause this error message to pop up as well.
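
If you want to hunt for the offending grant before running the export, here is a hedged sketch of a generic T-SQL query (not the specific fix from Roberto’s post) that lists every explicit permission in the database:

    -- Enumerate explicitly granted/denied permissions and their grantees
    SELECT pr.name             AS principal_name,
           pe.class_desc,
           pe.permission_name,
           pe.state_desc       -- GRANT / DENY / GRANT_WITH_GRANT_OPTION
    FROM sys.database_permissions AS pe
    JOIN sys.database_principals AS pr
        ON pe.grantee_principal_id = pr.principal_id
    ORDER BY pr.name, pe.permission_name;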

Reduced Auto-Pause Delay for Azure SQL DB Serverless

Morgan Oslake goes to sleep sooner:

Azure SQL Database serverless automatically scales compute based on workload demand and bills for compute used per second.  In the General Purpose tier, serverless also provides an option to automatically pause the database during idle usage periods when only storage related costs are billed.  When workload activity returns, the database is automatically resumed.

Customers choosing to enable auto-pausing can specify the auto-pause delay as part of the serverless configuration.  The auto-pause delay is the length of time the database must be idle before auto-pausing.  The lower the auto-pause delay and the more frequently auto-pausing occurs, the greater the potential compute cost savings. 

Read on for the update to the minimum auto-pause delay.

Reading Data from Azure Blob Storage in Snowflake

Arun Sirpal explains a common architectural pattern:

Let’s go back to data platforms today. I want to talk about a very common integration I see nowadays: Azure Blob Storage linked to Snowflake via a storage integration, through which we can access semi-structured files via external tables. It is a good combination of technology, I have to say.

Click through for an architecture diagram and example of the code you’d need.
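
For the shape of the pattern, a minimal sketch in Snowflake SQL follows; the account, container, and object names are all hypothetical, and you still need to grant Snowflake’s service principal access on the Azure side (DESC STORAGE INTEGRATION returns the consent URL):

    -- 1. Storage integration: ties Snowflake to the Azure tenant and container
    CREATE STORAGE INTEGRATION azure_blob_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'AZURE'
      ENABLED = TRUE
      AZURE_TENANT_ID = '<tenant-id>'
      STORAGE_ALLOWED_LOCATIONS = ('azure://myaccount.blob.core.windows.net/mycontainer/');

    -- 2. External stage that uses the integration
    CREATE STAGE my_azure_stage
      STORAGE_INTEGRATION = azure_blob_int
      URL = 'azure://myaccount.blob.core.windows.net/mycontainer/';

    -- 3. External table over the semi-structured files in the stage
    CREATE EXTERNAL TABLE my_ext_table (
      doc VARIANT AS (VALUE)
    )
    LOCATION = @my_azure_stage
    FILE_FORMAT = (TYPE = JSON);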

Mounting Azure Data Factory in Fabric Data Factory

Andy Leonard takes up a factory job:

Thanks to the hard work of the Microsoft Fabric Data Factory Team, it’s now possible to mount an Azure Data Factory in Fabric Data Factory. This post describes one way to mount an existing Azure Data Factory in Fabric Data Factory. In this post, we will:

  • Mount an existing Azure Data Factory in Fabric Data Factory
  • Open the Azure Data Factory in Fabric Data Factory
  • Test-execute two ADF pipelines
  • Modify and publish an ADF pipeline

Read on to see how it all works. One of the odd things about Microsoft Fabric—and its predecessor, Azure Synapse Analytics—is the penchant for similar-but-not-quite-the-same services. Yes, we have Data Factory…but it’s not quite the same. Yes, we have Azure Data Explorer (and KQL)…but it’s not quite the same. I get that there are reasons for this (such as not having a resource group with a dozen separate services hanging around), but I’m sure it’s a bit frustrating working on several separate code bases and trying to keep them all approximately in sync.

Trying out the Databricks For-Each Task

Chen Hirsh goes in a loop:

Databricks recently added a for-each task to their workflow capability. Workflows are Databricks jobs: like Data Factory pipelines or SQL Server Agent jobs, they are pipelines you can schedule, comprising a number of tasks that together implement some business logic.

Theoretically, the long-awaited for-each task should make running multiple processes easier. For example, one of the things I often do is run a list of notebooks, each processing a different table, with no dependencies between them. At the moment I use parallel notebooks – https://docs.databricks.com/en/notebooks/notebook-workflows.html#run-multiple-notebooks-concurrently

As you will see later, this use case is not supported yet. But before that, let’s see what we can do with the for-each task.

Read on to see what it currently can do, and what it cannot.

Memoizing Functions with Snowflake

Kevin Wilkie is speaking my language (that is, the language of functional programming):

If you’ve been working with data for several years like I have – mostly using the SQL language – then I have a term for you that other languages, like JavaScript or Python, have had for a few years. The term is “memoizable” and it means, in a nutshell, to remember. A memoizable function caches its results so that it can return the result set in record time, given the same parameters.

Yeah, it’s a fancy term that basically states, “Instead of calculating the result each time, I’ll just create a lookup table of all possible inputs and what the output is.” It’s really helpful when you have a small number of possible inputs and generating a result takes a while.

Read on to learn more about how this works in Snowflake, including several limitations.
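
A minimal sketch of the syntax, with hypothetical table and function names:

    -- MEMOIZABLE tells Snowflake to cache the function's result
    CREATE OR REPLACE FUNCTION max_order_date()
    RETURNS DATE
    MEMOIZABLE
    AS
    $$
      SELECT MAX(order_date) FROM orders
    $$;

    -- The first call computes and caches; later calls can be served from cache
    SELECT max_order_date();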

Connecting to Azure Storage from SSIS

Andy Brownsword makes a connection:

Migrating to the cloud can be disruptive to existing processes. Moving storage to Azure isn’t a simple configuration change for SSIS packages.

SSIS doesn’t have native connections for Azure. That doesn’t mean we need to completely re-engineer the process or change technology though.

How can we take the simple package below and move to using Azure storage?

Read on for the answer. Also, I am 100% on Team SAS Token. They are easy to create and give you a lot of control over who gets access to what.
