

MLOps on Databricks

Piotr Majer and Michael Shtelma complete a series on MLOps on Databricks:

This is the second part of a two-part series of blog posts that show an end-to-end MLOps framework on Databricks, which is based on Notebooks. In the first post, we presented a complete CI/CD framework on Databricks with notebooks. The approach is based on the Azure DevOps ecosystem for the Continuous Integration (CI) part and the Repos API for the Continuous Delivery (CD) part. This post extends the presented CI/CD framework with machine learning, providing a complete MLOps solution.

Check it out.
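As a rough illustration of the CD step they describe, here is a minimal sketch (not the authors' code; the host, token, and repo ID are placeholders) of using the Databricks Repos API to check a repo out at the head of a release branch:

    import requests

    # Placeholders -- substitute your workspace URL, a personal access
    # token, and the numeric ID of the repo in the workspace.
    DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
    TOKEN = "dapi-placeholder"
    REPO_ID = 123

    # PATCH /api/2.0/repos/{repo_id} checks the repo out at the latest
    # commit of the given branch, which is how a release pipeline can
    # promote notebook code between environments.
    response = requests.patch(
        f"{DATABRICKS_HOST}/api/2.0/repos/{REPO_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"branch": "release"},
    )
    response.raise_for_status()
    print(response.json())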


Testing Failover Group and TCP Connectivity with Managed Instances

Niko Neugebauer has a pair of connectivity tests for us. First up is failover group connectivity:

When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. Replication traffic needs to be allowed between these VNets.

To allow this kind of traffic, one of the prerequisites is:

– “You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000-11999 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.”

Click through for a SQL Agent job script which helps with the test. Meanwhile, you can also test TCP connectivity from a managed instance:

In this post we shall focus on helping you determine the TCP connectivity from SQL Managed Instance to a given endpoint and port of your choice.

If you are interested in other posts on how to discover different aspects of SQL MI, please visit http://aka.ms/sqlmi-howto, which serves as a placeholder for the series.

There are scenarios where it would be nice to be able to test if a SQL Managed Instance can reach some “external” endpoints, like Azure Storage as an example.
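Niko's test runs from inside the instance via a SQL Agent job. Purely as a hedged illustration of the same idea from a client machine, here is what such a check looks like in Python (the host name is a placeholder; the ports come from the failover group prerequisite quoted above, plus 443 for an HTTPS endpoint such as Azure Storage):

    import socket

    # Placeholder endpoint -- e.g. the other managed instance's address
    # or an Azure Storage endpoint you expect to be reachable.
    HOST = "mystorageaccount.blob.core.windows.net"

    # 5022 and the 11000-11999 range are the replication ports from the
    # NSG prerequisite; 443 covers a plain HTTPS connectivity test.
    for port in (443, 5022, 11000):
        try:
            with socket.create_connection((HOST, port), timeout=5):
                print(f"{HOST}:{port} reachable")
        except OSError as err:
            print(f"{HOST}:{port} NOT reachable ({err})")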

Check out both posts.


Capturing SQL Server Audit Events with Azure Monitor

Bruno Gabrielli connects Azure Monitor to SQL Server Audit:

Today I am going to cover an interesting aspect on how to capture security audit events from both Azure and non-Azure SQL Server machines. Most of you probably know that SQL Server is capable of auditing security-related information (such as access to a given database, record creation or deletion, configuration changes and so on) according to the Audit configuration applied to a given instance or database.

In this post, we will not dig into SQL Server Audit configuration or capability. Rather, we will explore the steps and configurations necessary to collect the data using Azure Monitor.

Read on for the process. You will need the appropriate agent for this, but the agent doesn't require that your machine be in Azure.
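Once the agent is shipping events into a Log Analytics workspace, you can also query them back programmatically. Here is a hedged sketch using the azure-monitor-query package; the workspace ID is a placeholder, and the KQL assumes the audit events are collected from the Windows Application log into the Event table, which depends on how you configure collection:

    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Placeholder workspace ID.
    WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

    # Assumes SQL Server Audit writes to the Application event log and
    # that the agent collects it into the Event table.
    query = """
    Event
    | where EventLog == "Application" and Source == "MSSQLSERVER"
    | where RenderedDescription contains "audit"
    | project TimeGenerated, Computer, RenderedDescription
    """

    result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
    for table in result.tables:
        for row in table.rows:
            print(row)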


Training a Model in the Azure ML Designer

I continue a series on low-code machine learning in Azure ML:

Machine learning is a lot like an action film from the 1980s: we see early on that there’s a problem, we train in a cool montage with upbeat rock music, and then we come back to the problem and defeat it with car chases and bazookas and quippy one-liners. Well, maybe that simile got away from me a little bit, but I think I’ll stick with it.

What we’ll do in this post is cover the process of training a simple model using the Azure ML designer. I won’t deviate too far from the “classic” Azure ML script, which involves using the Designer to train a model and then deploy an endpoint for consumption. And away we go!
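The consumption end of that script is worth a quick sketch. Once the Designer deploys a real-time endpoint, calling it is plain HTTP; in this hedged example, the scoring URI, key, and payload schema are all placeholders that depend on the model you trained:

    import requests

    # Placeholders from the endpoint's Consume tab in Azure ML Studio.
    SCORING_URI = "http://placeholder.azurecontainer.io/score"
    API_KEY = "placeholder-key"

    # The input schema mirrors whatever columns the pipeline was
    # trained on; this shape is illustrative only.
    payload = {"Inputs": {"data": [{"feature_1": 1.0, "feature_2": "A"}]}}

    response = requests.post(
        SCORING_URI,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    print(response.json())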

Sometimes, when a model is running, I say to it, “I have to remind you, Sully, this is my weak arm!”


The Evolution of Cloud Architecture

Ben Brauer has a two-parter looking at how architecture is changing. Part 1 looks at containers and machine learning:

Let’s start by describing containers at a high level. A container is a packaging and distribution mechanism that abstracts and resolves many of the installer issues that result from ‘unique’ environments. We’ve all heard developers exclaim “well, it works on my machine,” after pushing an application to a new environment only to realize it’s broken. Containers strive to address this problem by creating a hard boundary between the infrastructure and the software stack used by an application. External dependencies are not necessarily added to the container, but all your internal dependencies (frameworks, runtimes, etc.) are there. This makes the deployment of the application to a new environment significantly more predictable, as the compute environment is consistent because it is part of the container.

Part two looks at serverless compute and low-code/no-code development:

Low-code (or no-code) development for applications is not a new concept. It strives to democratize development much as Visual Basic did decades ago, when it expanded the number of developers from thousands of C++ developers to hundreds of thousands of developers creating Windows-based solutions. Low-code takes this concept to non-technical professionals. Although this notion is great for productivity and usability, the maintenance and performance of these apps can be daunting, to say the least. Now non-technical application authors need to learn about application management, documentation, and application deployment. Without a clear understanding of these considerations, the environment can quickly become chaotic. The good news is that platforms and tools have come a long way since Visual Basic. For example, Microsoft’s Power Apps platform provides many of the platform services needed to maintain a healthy application lifecycle and governance paradigm.

These are good concepts to know about, regardless of your particular cloud platform.


Trying Automated ML in Azure ML

I continue a series on low-code machine learning with Azure ML:

Automated Machine Learning (AutoML) provides two distinct benefits. The first benefit is the one that AutoML providers tend to tout: you don’t need (much) machine learning experience to use them. According to the marketing, AutoML does all of the work and you sit back and enjoy the fruits of its labor.

I am nowhere near sold on this use case for AutoML. Yes, you can get answers in a few clicks, but to get good answers, you need a lot more knowledge of data processing and statistics than they let on. Feeding in garbage data will get you mediocre results.

Click through for the second benefit, which I think applies much better. Also for a step-by-step demonstration of how AutoML works.
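For contrast with the low-code flow, the code-first route makes the hidden knobs explicit. A hedged sketch using the v1 azureml-sdk (the dataset, label column, and compute names are placeholders):

    from azureml.core import Dataset, Workspace
    from azureml.core.experiment import Experiment
    from azureml.train.automl import AutoMLConfig

    ws = Workspace.from_config()
    training_data = Dataset.get_by_name(ws, name="my-training-dataset")

    # Even "automated" ML expects you to choose the task, the metric to
    # optimize, and sensible limits -- exactly the statistical judgment
    # the marketing tends to gloss over.
    config = AutoMLConfig(
        task="classification",
        training_data=training_data,
        label_column_name="label",
        primary_metric="AUC_weighted",
        compute_target="cpu-cluster",
        experiment_timeout_hours=1,
    )

    run = Experiment(ws, "automl-demo").submit(config)
    run.wait_for_completion(show_output=True)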


Streaming Data to Event Hubs via Kafka Connect and Debezium

Niels Berglund starts off a two-part sub-series within a series:

This post is the first of two looking at whether and how we can stream data to Event Hubs from Debezium. Initially I had planned only one post covering this, but it turned out that the post would be too long, so I split it in two.

It started with the post, How to Use Kafka Client with Azure Event Hubs. In that post, I looked at how the Kafka client can publish messages not only to Apache Kafka but also to Azure Event Hubs. In the post, I said something like:

An interesting point here is that it is not only your Kafka applications that can publish to Event Hubs but any application that uses Kafka Client 1.0+, like Kafka Connect connectors!
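That point is easy to see in code. Here is a minimal hedged sketch using confluent-kafka (the namespace and connection string are placeholders); the only Event Hubs-specific pieces are the bootstrap address and the SASL settings:

    from confluent_kafka import Producer

    # Event Hubs speaks the Kafka protocol on port 9093; authentication
    # is SASL PLAIN with the literal username "$ConnectionString".
    producer = Producer({
        "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "$ConnectionString",
        "sasl.password": "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=placeholder;SharedAccessKey=placeholder",
    })

    # "my-topic" maps to an event hub of the same name in the namespace.
    producer.produce("my-topic", key="k1", value="hello from a Kafka client")
    producer.flush()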

Click through for the first part of this pairing.


Using Azure DevOps to Deploy Python Functions to Azure Function Apps

Rayis Imayev has a trick question for us:

Can I create a CI/CD pipeline to deploy a Python Function to an Azure Function App using a Windows self-hosted Azure DevOps agent?

My short answer to this question is Yes and No. Yes, you can use a Windows self-hosted Azure DevOps agent to deploy a Python function to the Linux-based Azure Function App; and No, you can’t use a Windows self-hosted Azure DevOps agent to build Python code, since it will require collection/compilation/build of all Python-dependent libraries on a Linux OS platform.

Click through for the full answer.
