
Category: Cloud

Switching Connections from AAS to Power BI

Marc Lelijveld wants to swap a connection from using Azure Analysis Services to Power BI Premium:

In the context of an Azure Analysis Services dataset that is migrated to Power BI Premium, you might have to rebind many reports, especially if this dataset is positioned as a managed dataset that is also used for self-service purposes and has many related reports.

In this blog I will elaborate on how you can easily rebind all these reports to the new Power BI dataset, without downloading all the reports and rebinding them manually.

It’s not a trivial operation, but it is a lot easier than updating each entry individually.
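
Marc walks through his own approach in the post; purely as an illustration of what programmatic rebinding can look like, here is a minimal PowerShell sketch against the REST API’s report Rebind endpoint, using the MicrosoftPowerBIMgmt module. The workspace and dataset IDs are placeholders.

# A minimal sketch of bulk rebinding via the Power BI REST API, using the
# MicrosoftPowerBIMgmt module. IDs below are placeholders.
Connect-PowerBIServiceAccount

$workspaceId  = "00000000-0000-0000-0000-000000000000"   # workspace holding the reports
$oldDatasetId = "11111111-1111-1111-1111-111111111111"   # dataset being migrated away from
$newDatasetId = "22222222-2222-2222-2222-222222222222"   # new Power BI dataset

# Find every report in the workspace that still points at the old dataset
$reports = Get-PowerBIReport -WorkspaceId $workspaceId |
    Where-Object { $_.DatasetId -eq $oldDatasetId }

foreach ($report in $reports) {
    # The Rebind endpoint points an existing report at a different dataset
    Invoke-PowerBIRestMethod -Method Post `
        -Url "groups/$workspaceId/reports/$($report.Id)/Rebind" `
        -Body (@{ datasetId = $newDatasetId } | ConvertTo-Json)
    Write-Host "Rebound report '$($report.Name)' to the new dataset."
}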


Using PowerShell in Azure Cloud Shell

Hope Foley shows how you can set up PowerShell to be your Azure Cloud Shell language of choice:

Part of my job is doing POCs with customers to help with Azure Data Services. Anything that helps me move quicker is helpful, so I’m a tad bit obsessed with automating things. I have used PowerShell for more years than I’m willing to admit to help me automate what I can. There are a lot of ways to automate things like ARM templates and DevOps, but PoSH has been my preferred hammer. As much as I love it, I’ve run into issues sometimes with installing modules locally on folks’ machines, not to mention if they have a Mac. I wondered recently if Azure Cloud Shell would help make things easier, and it very much did and I’m super pumped to share! This post will help run through how to get set up to run PowerShell scripts in Azure.

For people who prefer PowerShell to bash, check it out.
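
To give a sense of why it is so convenient: inside a PowerShell Cloud Shell session you are already authenticated and the Az modules are already there, so a quick proof-of-concept script can look something like this. The subscription and resource names are examples only.

# In a PowerShell Cloud Shell session the Az modules are preinstalled and you are
# already signed in, so scripts can get straight to work. Names are examples only.
Get-AzSubscription                                    # list the subscriptions you can see
Set-AzContext -Subscription "My POC Subscription"     # choose the one to work in

# Create a resource group for a proof of concept
New-AzResourceGroup -Name "rg-poc-demo" -Location "eastus"

# Review what the resource group contains
Get-AzResource -ResourceGroupName "rg-poc-demo" |
    Select-Object Name, ResourceType, Location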


Using Azure Queue Storage as a Trigger for Function Apps

Aveek Das shows how you can use Azure Queue Storage as a way to trigger an Azure Function App:

In this article, we are going to learn how to trigger Function Apps from Queue Storage in Azure. Function Apps has been one of the most popular cloud services of Microsoft Azure. Function Apps allow users to write code in any language and then execute the code in the cloud. There is no infrastructure to be managed, and hence it is very flexible for writing and building applications on the go. Every Function App can be triggered in multiple ways, for example, by calling the function URL using an HTTP endpoint or from some other functions in Azure. In this article, we are going to trigger the Function App from Queue Storage in Azure and see how to pass a message from the queue to the Function App.

Queue Storage is another service in Azure that allows users to store multiple messages. Users can use a queue to create a list of items that need to be processed one by one. Messages can be added to Queue Storage by using HTTP or HTTPS endpoints. Each message in a queue can be up to 64 KB in size, and we can add millions of messages to a queue, up to the capacity limit of the storage account.

Click through to see how. Though now I wonder why I might use Queue Storage instead of an Event Hub or an Event Grid. But I suppose that’s a question for a different article.
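
As a rough sketch of the trigger side, a queue-triggered PowerShell function is little more than a binding plus a script; the queue name and binding shown below are illustrative, and Aveek’s article covers the full setup.

# run.ps1 for a queue-triggered PowerShell Azure Function (illustrative only).
# The accompanying function.json would declare a binding along these lines:
#   { "name": "QueueItem", "type": "queueTrigger", "direction": "in",
#     "queueName": "orders-to-process", "connection": "AzureWebJobsStorage" }
param($QueueItem, $TriggerMetadata)

# $QueueItem holds the dequeued message body; $TriggerMetadata carries details
# such as the message id and how many times it has been dequeued.
Write-Host "Processing queue message: $QueueItem"
Write-Host "Message id: $($TriggerMetadata.Id), dequeue count: $($TriggerMetadata.DequeueCount)"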


From Azure Data Factory to Synapse Pipelines

Kevin Chant copies and pastes:

In this post I want to share an alternative way to copy an Azure Data Factory pipeline to Synapse Studio, because I think it can be useful.

For those who are not aware, Synapse Studio is the frontend that comes with Azure Synapse Analytics. You can find out more about it in another post I did, which was a five-minute crash course about Synapse Studio.

By the end of this post, you will know one way to copy objects used for an Azure Data Factory pipeline to Synapse Studio, which works as long as both are configured to use Git.

Click through to see how.
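
To make the idea concrete: once both workspaces are backed by Git, a pipeline definition is just a JSON file in the repository, so the copy itself can look roughly like this. The paths, folder names, and pipeline name below are assumptions and depend on how each repo is configured.

# A rough sketch of copying a pipeline definition between the two Git repos,
# assuming both workspaces use Git integration and both repos are cloned locally.
# Paths, folder names, and the pipeline name are assumptions.
$adfRepo     = "C:\repos\adf-workspace"
$synapseRepo = "C:\repos\synapse-workspace"

# Pipeline definitions (and dependencies such as datasets and linked services)
# live in the repo as plain JSON files.
Copy-Item -Path (Join-Path $adfRepo "pipeline\CopySalesData.json") `
          -Destination (Join-Path $synapseRepo "pipeline\CopySalesData.json")

# Commit and push so Synapse Studio picks it up from its collaboration branch
Set-Location $synapseRepo
git add .
git commit -m "Copy pipeline definition from Azure Data Factory"
git push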


Automating Notebook Execution with PowerShell

Julie Koesmarno shows off an automation process for notebooks:

When I first think about automation, I generally think in the following way: in order to automate a script, we want to ensure that the script itself can be run via a command line interface (CLI) and with almost no user interaction (except for input and output parameters). Now, how do we apply this to Jupyter Notebooks so that we can automate SQL notebooks or PowerShell Notebooks?

The good news is that these SQL notebooks and PowerShell notebooks that we’ve created using Azure Data Studio can be run from the PowerShell CLI. If these notebooks can be run from the PowerShell CLI, that means any automation systems or serverless architecture (Azure Automation combined with Azure Logic Apps, as an example) should be able to run these notebooks as well.

In this blog post, I’ll cover examples of using Invoke-SqlNotebook and Invoke-ExecuteNotebook, and putting it all together with Azure Automation.

Click through to see the whole thing.
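
To give a flavor of the cmdlets involved: Invoke-SqlNotebook ships with the SqlServer module and Invoke-ExecuteNotebook with the PowerShellNotebook module. Here is a minimal sketch; the server, database, and notebook names are placeholders.

# Minimal sketch of running notebooks from the PowerShell CLI.
# Server, database, and file names are placeholders.
Install-Module -Name SqlServer -Scope CurrentUser           # provides Invoke-SqlNotebook
Install-Module -Name PowerShellNotebook -Scope CurrentUser  # provides Invoke-ExecuteNotebook

# Run a SQL notebook against a database and save the executed copy, results included
Invoke-SqlNotebook -ServerInstance "localhost" -Database "AdventureWorks" `
    -InputFile ".\index-maintenance.ipynb" -OutputFile ".\index-maintenance-run.ipynb"

# Run a PowerShell notebook and capture the executed output notebook
Invoke-ExecuteNotebook -InputNotebook ".\server-health-check.ipynb" `
    -OutputNotebook ".\server-health-check-run.ipynb"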


Ensemble Classification in Azure Machine Learning

Dinesh Asanka reminds me not to use the designer for tough Azure ML problems:

Let us see how we can extend standard classification to Ensemble Classifiers in Azure Machine Learning. Before we discuss the details of this configuration, you can view or download the experiment from Ensemble Classification.

The following figure shows the complex layout of the Ensemble Classifiers in Azure Machine Learning.

Dinesh is not kidding about that complexity. This is definitely a use case for the Azure ML SDK.


Monitoring Azure Data Factory, Integration Runtimes, and Pipelines

Sandeep Arora monitors all the things:

For effective monitoring of ADF pipelines, we are going to use Log Analytics, Azure Monitor and Azure Data Factory Analytics. The above illustration shows the architectural representation of the monitoring setup.

The details of setting up Log Analytics, alerts, and Azure Data Factory Analytics are further discussed in this section.

If you manage Azure Data Factory in your environment, give this a read.
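
If you want a feel for the first step, routing the factory’s diagnostic logs to a Log Analytics workspace with the Az modules can look roughly like this; the resource names are placeholders, and exact cmdlet parameters vary somewhat across Az.Monitor versions.

# A sketch of routing Azure Data Factory diagnostic logs to Log Analytics.
# Resource group, factory, and workspace names are placeholders.
$adf = Get-AzDataFactoryV2 -ResourceGroupName "rg-data" -Name "adf-prod"
$law = Get-AzOperationalInsightsWorkspace -ResourceGroupName "rg-monitoring" -Name "law-central"

# Send pipeline, activity, and trigger run logs to the Log Analytics workspace
Set-AzDiagnosticSetting -Name "adf-to-log-analytics" `
    -ResourceId $adf.DataFactoryId `
    -WorkspaceId $law.ResourceId `
    -Category PipelineRuns, ActivityRuns, TriggerRuns `
    -Enabled $true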


Role-Based Access Control in Snowflake

Warner Chaves explains how role-based access controls work in Snowflake:

The data access privilege granularity is the lowest level of securable that you will use to provide data access. This can theoretically go all the way down to rows and all the way up to full databases. 

I usually recommend that people start out using Schema as their data access securable granularity. Database is usually too broad, and you will inevitably have to redo your roles at the table level. Going below schema is too specific to turn into a general methodology: you would end up with way too many roles. See the FAQ later in this post on how to mix and match granularities if needed.

Once you have the granularity defined, you then create back-end roles at that level.

Read on to see what those roles look like. It’s a pretty standard RBAC setup.


Azure DevOps Templates for Data Platform Deployments

Kevin Chant has some toys for us:

For my T-SQL Tuesday contribution this month, I want to introduce my Azure DevOps templates for Data Platform deployments.

This month’s T-SQL Tuesday is hosted by Frank Geisler. Frank has invited us to write about deploying SQL components through descriptive methods and build some new cool templates for them.

This is good timing for me, because I co-presented a session on the day this post was published, showing how to use YAML in Azure DevOps for Data Platform deployments at Data Platform Virtual Summit.

Click through to see Kevin’s repos, as well as more information on the topic.


Environments in Azure ML

Luis Valencia explains what environments are in Azure ML:

An Environment defines Python packages, environment variables, and Docker settings that are used in machine learning experiments, including in data preparation, training, and deployment to a web service. An Environment is managed and versioned in an Azure Machine Learning Workspace. You can update an existing environment and retrieve a version to reuse. Environments are exclusive to the workspace they are created in and can’t be used across different workspaces.

In basic terms for a developer, it’s a Docker image with all the needed dependencies (conda/pip packages) to run your experiment.

A friendly word of advice from some bad experiences: stick with the curated environments as much as you can. Those are easy and rarely fail. Building your own environments from Conda files is a possibility, but it’s an, err, probabilistic exercise as to whether your compute target will actually work or not.
