Press "Enter" to skip to content

Category: Cloud

Auto-Failover Groups for Azure SQL Hyperscale

Melody Zacharias fills us in on a recent announcement:

On January 5th, they announced that auto-failover groups for Azure SQL Hyperscale are now available in preview. Auto-failover groups are a feature that allows you to manage the failover and replication of a group of databases on a server or managed instance from one Azure region to another. This can be done manually or in conjunction with a user-defined policy.

Click through for more information on how it all works.
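If you want a feel for what creating one looks like programmatically, here is a hedged sketch using the azure-mgmt-sql Python SDK; every name, resource ID, and the grace period below are placeholders rather than anything from the announcement.

```python
# Hypothetical sketch: create a failover group with an automatic failover policy
# via the azure-mgmt-sql SDK. All names, IDs, and values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

partner_server_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<secondary-rg>"
    "/providers/Microsoft.Sql/servers/<secondary-server>"
)
database_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<primary-rg>"
    "/providers/Microsoft.Sql/servers/<primary-server>/databases/<hyperscale-db>"
)

poller = client.failover_groups.begin_create_or_update(
    resource_group_name="<primary-rg>",
    server_name="<primary-server>",
    failover_group_name="<failover-group-name>",
    parameters={
        "partner_servers": [{"id": partner_server_id}],
        "databases": [database_id],
        "read_write_endpoint": {
            "failover_policy": "Automatic",  # or "Manual" for user-controlled failover
            "failover_with_data_loss_grace_period_minutes": 60,
        },
    },
)
failover_group = poller.result()
```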


Automating Azure SQL DB Maintenance with Data Factory

Hiten Bhavsar works around the lack of SQL Agent:

As we know, it’s crucial that you run database maintenance regularly in order to keep your database performance up with the latest statistics and healthy indexes. Here we provide another way to schedule this job using Azure Data Factory; this can be done on a scheduled time interval, weekly, bi-weekly, or monthly.

Click through for the process and try not to think too hard about this secretly being maintenance plans all over again.
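For a rough idea of what the scheduling piece looks like outside the portal, here is a minimal sketch using the azure-mgmt-datafactory Python SDK to attach a weekly schedule trigger to an existing maintenance pipeline; the factory, pipeline, and trigger names are all hypothetical.

```python
# Hypothetical sketch: attach a weekly schedule trigger to an existing
# Data Factory pipeline (e.g. one that runs index/statistics maintenance).
# Factory, pipeline, and trigger names are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

trigger = TriggerResource(
    properties=ScheduleTrigger(
        description="Weekly database maintenance",
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    reference_name="MaintenancePipeline", type="PipelineReference"
                )
            )
        ],
        recurrence=ScheduleTriggerRecurrence(
            frequency="Week",  # adjust frequency/interval for bi-weekly or monthly runs
            interval=1,
            start_time=datetime(2022, 2, 6, 2, 0, tzinfo=timezone.utc),
            time_zone="UTC",
        ),
    )
)

adf_client.triggers.create_or_update(
    "<resource-group>", "<data-factory-name>", "WeeklyMaintenanceTrigger", trigger
)
```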


Simple Mapping Data Flows in Synapse

Joshuha Owen announces a new feature:

This week, we are excited to announce the public preview for Map Data, a new feature for Azure Synapse Analytics and Database Templates! The Map Data tool is a guided process to help users create ETL mappings and mapping data flows from their source data to Synapse lake database tables without writing code. This experience will help you get started with transformations into your Synapse Lake database quickly but still give you the power of Mapping Data Flows.

This process starts with the user choosing the destination tables in Synapse lake databases and then mapping their source data into these tables. We will be following up with a demo video shortly.

Click through for more details on how it works.


Azure ML Deployments and Endpoints

I continue a series on low-code machine learning with Azure ML:

The first thing we need to do is create an inference pipeline. Inference pipelines differ from training pipelines in that they won’t use the training dataset, but they will accept user input and provide a scored response. There are two types of inference pipeline: real-time and batch. Real-time inference pipelines are intended for small-set work. We’ll host a service on some compute resource in Azure and people will make REST API calls to our service, sending in a request with a few items to score, and we’ll send back classification results.
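As a concrete (if hypothetical) picture of the caller's side of that interaction, it boils down to an authenticated REST request; the endpoint URI, key, and payload shape below are assumptions that vary by deployment.

```python
# Hypothetical sketch of calling a real-time scoring endpoint over REST.
# The endpoint URI, key, and payload shape are placeholders.
import requests

scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<endpoint-key>"

payload = {"data": [[5.1, 3.5, 1.4, 0.2], [6.7, 3.0, 5.2, 2.3]]}  # a few items to score

response = requests.post(
    scoring_uri,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # classification results come back in the response body
```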

By contrast, a batch pipeline is what you’d use if you have a nightly job with tens of millions of items to score. In that case, the typical pattern is to have a service listening for changes in a storage account and, some time after people drop new files into the proper folder, the batch inference process will pick up these files, score the results, and write those results out to a destination location.
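Here is a bare-bones sketch of that batch pattern, with the trigger simplified to a loop over a container; the container layout and the score() function are stand-ins for whatever the real inference step is, and a real deployment would kick this off from an event or schedule.

```python
# Hypothetical sketch of the batch pattern above: pick up new files from a
# storage container, score them, and write results to a destination folder.
# Container layout and score() are placeholders for the real inference step.
import io

import pandas as pd
from azure.storage.blob import BlobServiceClient


def score(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for the actual batch inference step (e.g. an Azure ML batch endpoint).
    df["predicted_label"] = 0
    return df


service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("scoring")

for blob in container.list_blobs(name_starts_with="incoming/"):
    raw = container.download_blob(blob.name).readall()
    scored = score(pd.read_csv(io.BytesIO(raw)))
    out_name = blob.name.replace("incoming/", "scored/", 1)
    container.upload_blob(out_name, scored.to_csv(index=False), overwrite=True)
```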

This post is all about inference pipelines. The next post will be all about batch pipelines.


MLOps on Databricks

Piotr Majer and Michael Shtelma complete a series on MLOps on Databricks:

This is the second part of a two-part series of blog posts that show an end-to-end MLOps framework on Databricks, which is based on Notebooks. In the first post, we presented a complete CI/CD framework on Databricks with notebooks. The approach is based on the Azure DevOps ecosystem for the Continuous Integration (CI) part and the Repos API for the Continuous Delivery (CD) part. This post extends the presented CI/CD framework with machine learning, providing a complete MLOps solution.

Check it out.
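If you haven't seen the Repos API side of the CD story before, the call itself is small; here is a hedged sketch that moves a workspace repo to the head of a branch, with the workspace URL, token, and repo ID as placeholders.

```python
# Hypothetical sketch: use the Databricks Repos API to pull a workspace repo
# up to the head of a branch as a continuous-delivery step.
# Workspace URL, token, and repo ID are placeholders.
import requests

workspace_url = "https://<databricks-instance>"  # e.g. an Azure Databricks workspace URL
token = "<personal-access-token>"
repo_id = "<repo-id>"

response = requests.patch(
    f"{workspace_url}/api/2.0/repos/{repo_id}",
    headers={"Authorization": f"Bearer {token}"},
    json={"branch": "main"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # describes the repo after the update
```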


Testing Failover Group and TCP Connectivity with Managed Instances

Niko Neugebauer has a pair of connectivity tests for us. First up is failover group connectivity:

When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. Replication traffic needs to be allowed between these VNets.

To allow this kind of traffic, one of the prerequisites is:

– “You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000-11999 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.”
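As a rough illustration of that prerequisite (and not how Niko does it in the post), a sketch with the azure-mgmt-network Python SDK might look like the following; every name, prefix, and priority is a placeholder, and an analogous Outbound rule is needed as well.

```python
# Hypothetical sketch: open the replication ports inbound from the partner
# instance's subnet. An analogous Outbound rule is also required; all names,
# prefixes, and priorities below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.security_rules.begin_create_or_update(
    resource_group_name="<resource-group>",
    network_security_group_name="<mi-subnet-nsg>",
    security_rule_name="allow_geodr_inbound",
    security_rule_parameters={
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 200,
        "source_address_prefix": "<partner-mi-subnet-prefix>",  # e.g. 10.1.0.0/24
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_ranges": ["5022", "11000-11999"],
        "description": "Failover group replication traffic from the partner instance",
    },
).result()
```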

Click through for a SQL Agent job script which helps with the test. Meanwhile, you can also test TCP connectivity from a managed instance:

In this post we shall focus on helping you determine the TCP connectivity from a SQL Managed Instance to a given endpoint and port of your choice.

If you are interested in other posts on how to discover different aspects of SQL MI, please visit http://aka.ms/sqlmi-howto, which serves as a placeholder for the series.

There are scenarios where it would be nice to be able to test if a SQL Managed Instance can reach some “external” endpoints, like Azure Storage as an example.
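As a generic illustration of what such a reachability test boils down to (the post does this from inside the instance rather than from a client machine), here is a tiny sketch; the endpoint and port are placeholders.

```python
# Minimal sketch of a TCP reachability check against an endpoint and port.
# The endpoint and port are placeholders; the linked post runs its check from
# within the managed instance itself.
import socket


def can_reach(host: str, port: int, timeout_seconds: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False


print(can_reach("<storage-account>.blob.core.windows.net", 443))
```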

Check out both posts.


Capturing SQL Server Audit Events with Azure Monitor

Bruno Gabrielli connects Azure Monitor to SQL Server Audit:

Today I am going to cover an interesting aspect on how to capture security audit events from both Azure and non-Azure SQL Server machines. Most of you probably know that SQL Server is capable of auditing security-related information (such as access to a given database, record creation or deletion, configuration changes, and so on) according to the Audit configuration applied to a given instance or database.

In this post, we will not dig into SQL Server Audit configuration or capability. We will rather explore the steps and configurations necessary to collect data using Azure Monitor.

Read on for the process. You will need the appropriate agent for this, but that agent doesn’t necessitate that your machine be in Azure.
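Once events are landing in a Log Analytics workspace, reading them back out in code is simple enough. Here is a hedged sketch with the azure-monitor-query package; the workspace ID and the KQL filter (which table and Source the audit events land in) are assumptions that depend on how the audit is configured to write events.

```python
# Hypothetical sketch: query a Log Analytics workspace for collected SQL Server
# audit events. The workspace ID and the KQL filter are assumptions; the exact
# table and Source values depend on where the audit writes its events.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Event
| where EventLog == "Application" and Source startswith "MSSQL"
| project TimeGenerated, Computer, EventID, RenderedDescription
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))
```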


Training a Model in the Azure ML Designer

I continue a series on low-code machine learning in Azure ML:

Machine learning is a lot like an action film from the 1980s: we see early on that there’s a problem, we train in a cool montage with upbeat rock music, and then we come back to the problem and defeat it with car chases and bazookas and quippy one-liners. Well, maybe that simile got away from me a little bit, but I think I’ll stick with it.

What we’ll do in this post is cover the process of training a simple model using the Azure ML designer. I won’t deviate too far from the “classic” Azure ML script, which involves using the Designer to train a model and then deploy an endpoint for consumption. And away we go!

Sometimes, when a model is running, I say to it, “I have to remind you Sully, this is my weak arm!”
