
Category: Cloud

Azure SQL DB Serverless for Hyperscale now GA

Morgan Oslake has an announcement:

Optimizing resource allocation to achieve performance goals while controlling costs can be a challenging balance to strike, especially for database workloads with complex usage patterns. Azure SQL Database serverless provides a solution to help address these challenges, but until now serverless has only been generally available in the General Purpose tier. However, many workloads that can benefit from serverless may require greater performance and scale, along with other capabilities unique to the Hyperscale tier.

We are pleased to announce the general availability of serverless auto-scaling for Hyperscale in Azure SQL Database.  The benefits of serverless and Hyperscale now come together into a single database solution.

Read on to see what this means for you and how it can change the billing strategy around Hyperscale.
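
If you want to kick the tires programmatically, here is a minimal sketch of provisioning a serverless Hyperscale database with the azure-mgmt-sql SDK. The resource names are placeholders and the SKU name is my assumption for the serverless Hyperscale service objective, so confirm both against the current documentation before relying on them.

```python
# Sketch only: create a serverless Hyperscale database with azure-mgmt-sql.
# Resource names are placeholders; "HS_S_Gen5" is an assumed name for the
# serverless Hyperscale SKU, so check the current service objectives.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-demo",          # hypothetical resource group
    server_name="sqlsrv-demo",              # hypothetical logical server
    database_name="hs-serverless-demo",
    parameters=Database(
        location="eastus",
        sku=Sku(name="HS_S_Gen5", tier="Hyperscale", capacity=2),
        min_capacity=0.5,                   # floor for auto-scaling vCores
    ),
)
print(poller.result().status)
```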


Generating Synthetic Data for Streaming in Microsoft Fabric

Sandeep Pawar builds out some data:

If you want to learn or demo Real-Time Analytics in Microsoft Fabric, you will need a streaming data source. You can use the built-in samples to get started. But there are several data generators that you can use to create custom streaming sample datasets, the Azure Stream Analytics data generator being one of them. You can see them here. In this blog, I will show how to set one up to use with Fabric Eventstream.

Read on for a step-by-step guide.
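
If you would rather roll your own generator, here is a small sketch that pushes synthetic JSON events into an Eventstream custom endpoint (custom app) source, which exposes an Event Hub-compatible connection string. The connection string, entity name, and event shape below are placeholders of my own, not values from Sandeep's post.

```python
# Minimal synthetic-event generator for a Fabric Eventstream custom endpoint.
# Paste in the Event Hub-compatible connection string and entity name that
# the Eventstream source exposes; the event payload here is made up.
import json, random, time, uuid
from datetime import datetime, timezone
from azure.eventhub import EventHubProducerClient, EventData

CONN_STR = "<event-hub-compatible-connection-string-from-eventstream>"
EVENTHUB_NAME = "<entity-name-from-eventstream>"

producer = EventHubProducerClient.from_connection_string(
    CONN_STR, eventhub_name=EVENTHUB_NAME
)

with producer:
    for _ in range(100):
        event = {
            "deviceId": str(uuid.uuid4()),
            "temperature": round(random.uniform(18.0, 30.0), 2),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        batch = producer.create_batch()
        batch.add(EventData(json.dumps(event)))
        producer.send_batch(batch)
        time.sleep(1)  # roughly one event per second
```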


Finally Blocks and Error Handling in Data Factory

Chen Hirsh doesn’t let failure get in the way of doing work:

Today I stumbled upon a weird behavior in Azure Data Factory (ADF) error handling.

ADF lets us add error handling in the control flow. In this example, I'm trying to copy some data; if that fails, go to the on-failure branch (red line). If the activity succeeds, go to the on-success branch (green line).

These work great (if you can call a failure great…).

Let’s take another step. What if I want to run another activity at the end, no matter if the copy succeeded or failed?

The behavior is a bit weird, as it doesn’t work quite the way you’d expect. Chen, however, shows us how to do it.


Connect to Azure SQL Database via Microsoft Entra ID Service Principal

Jaime Garcia de Alba becomes the machine:

In this guide, I am going to outline the steps to connect to an Azure SQL database using an Entra service principal (SPN) with tools such as SSMS and PowerShell. This demo covers detailed steps for using an existing user when the token is received correctly. Additionally, the steps cover creating a new user from scratch in case there are issues with the existing user.

I’ve used service principals and managed identities in the past in application code, but it wasn’t until this post that I learned you could also use them directly to connect to an instance.
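
For application code, the same idea looks roughly like this in Python: acquire a token for the service principal with azure-identity and hand it to the ODBC driver. The tenant ID, client ID, secret, and server name are placeholders, and this is a sketch of the general pattern rather than the SSMS/PowerShell steps from Jaime's guide.

```python
# Sketch: connect to Azure SQL Database as an Entra ID service principal.
# All identifiers below are placeholders.
import struct
import pyodbc
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-client-id>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://database.windows.net/.default").token

# Pack the token the way the ODBC driver expects (UTF-16-LE, length-prefixed).
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
SQL_COPT_SS_ACCESS_TOKEN = 1256  # driver-specific connection attribute

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;Encrypt=yes;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
print(conn.execute("SELECT SUSER_SNAME();").fetchone()[0])
```

On the database side, the service principal still needs a contained user before it can do anything useful, which is the "creating a new user from scratch" part of the post.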


New Features in Azure SQL MI Instance Pools

Djordje Marinkovic shows off what’s new:

When migrating small SQL Server instances to Azure, it is often the case that a single SQL Managed Instance turns out to be overkill in terms of size and, consequently, cost. The oversizing problem can happen whenever very small instances are required, for example when an ISV builds a multi-tenant app requiring a small SQL MI instance for each customer. In such cases the smallest size (4 vCores) for a single SQL MI can still turn out to be too large and too expensive for the given use case. This is where SQL MI pools (“instance pools”) deliver great value.

Click through for more information on instance pools, as well as new features for instance pools.


Working with Erik Darling’s Stored Procedures in Azure SQL DB

Josephine Bush tries out some stored procedures:

Erik Darling, founder of Darling Data, has created these fantastic stored procedures to query SQL Server more efficiently to get health, log, or performance information. I will go through using them in Azure SQL Database here, since I don’t have any SQL Servers I manage anymore.

Read on to see which ones you can use in Azure SQL DB and which require SQL Server.
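
As a rough illustration of scripting one of them, the sketch below calls sp_QuickieStore (which is Query Store-based, so it is a natural fit for Azure SQL DB) from Python. It assumes the procedure has already been created in the user database, and the connection details are placeholders; see Josephine's post for which procedures actually apply.

```python
# Sketch: run one of Erik Darling's procedures against an Azure SQL database.
# Assumes sp_QuickieStore is already installed in this database (there is no
# shared master database to install into in Azure SQL DB).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("EXEC dbo.sp_QuickieStore;")  # default parameters

for row in cursor.fetchmany(5):  # peek at the first few rows of the first result set
    print(row)
```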


Linked Servers from SQL MI using Microsoft Entra ID

Luis Aranda has the first of a two-part series:

Lately, we have seen some customers interested in the options available to use linked servers from Managed Instance using Entra Authentication (formerly Azure Active Directory). It is certainly possible to create Linked Servers on SQL Managed Instance (SQL MI) to connect to other PaaS databases such as other SQL MIs, Azure SQL Databases or Synapse databases using Entra Authentication.

Click through to see how you can do this using a managed identity. In the next article, Luis promises to show us how to do it with pass-through authentication, so you use your credentials instead of the managed identity’s credentials to access the remote server.


Running Spark Jobs on Databricks with Spark Connect and .NET

Ed Elliott runs a Databricks job:

This post aims to show how we can create a .NET application, deploy it to Databricks, and then run a Databricks job that calls our .NET code, which uses Spark Connect to run a Spark job on the Databricks job cluster to write some data out to Azure storage.

In the previous post, I showed how to use the Range command to create a Spark DataFrame and then save it locally as a parquet file. In this post, we will use the Sql command, which will return a DataFrame or, in our world, a Relation. We will then pass that relation to a WriteOperation command, which will write the results of the Sql out to Azure storage.

The code is available HERE.

Read on for the description of how everything works.
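
For comparison with the .NET client in the post, the same flow with the stock PySpark Spark Connect client looks something like the sketch below: open a remote session against the cluster, run a Sql command to get the relation back as a DataFrame, and write it out to Azure storage. The workspace host, token, cluster ID, connection-string format, and storage path are all placeholder assumptions, and the cluster is assumed to already have access to the storage account.

```python
# Sketch: Spark Connect against a Databricks cluster from PySpark
# (requires pyspark with the Spark Connect extras installed).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .remote("sc://<workspace-host>:443/;token=<pat>;x-databricks-cluster-id=<cluster-id>")
    .getOrCreate()
)

# Analogue of the post's Sql command: the query comes back as a DataFrame
# (a Relation, in Spark Connect terms).
df = spark.sql("SELECT id, id * 2 AS doubled FROM range(1000)")

# Analogue of the WriteOperation: persist the relation to Azure storage.
df.write.mode("overwrite").parquet(
    "abfss://<container>@<storageaccount>.dfs.core.windows.net/spark-connect-demo/"
)
```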
