Press "Enter" to skip to content

Category: Cloud

Extracting the First Element from an Array in ADF

Rayis Imayev shows how you can find the first element in an array using Azure Data Factory:

A user recently asked me a question on my previous blog post (Setting Variables in Azure Data Factory Pipelines) about the possibility of extracting the first element of a variable when that variable is a set of elements (an array).

So as a spoiler alert, before writing a blog post and adding a bit more clarity to the existing Microsoft ADF documentation, here is a quick answer to this question.

You’ll have to click through even for the quick answer.
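
For a taste of ADF's expression language, here is a minimal sketch of the kind of thing the answer involves, written as a hypothetical Set Variable activity (pipeline JSON expressed as a Python dict; all names here are made up):

```python
# Hypothetical ADF "Set Variable" activity, written out as a Python dict.
# Either expression below pulls element zero from an array variable.
set_first_element = {
    "name": "Set FirstElement",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "FirstElement",
        "value": "@first(variables('MyArray'))",  # the first() function
        # or: "@variables('MyArray')[0]"          # zero-based indexing
    },
}
```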

Azure Cloud Shell

Mark Broadbent gives us an introduction to Azure Cloud Shell:

There are two ways to access Azure Cloud Shell, the first being directly through the Azure Portal itself. Once authenticated, look to the top right of the Portal and you should see a grouping of icons and, in particular, one that looks very much like a DOS prompt (have no fear, DOS is nowhere to be seen).

The second method to access Azure Cloud Shell is by jumping directly to it via shell.azure.com, which will require you to authenticate to your subscription before launching. There is an ever so slight difference between the two methods. Accessing the Shell via the Azure Portal will not require you to specify your Azure directory context (assuming you have several), since the Portal will have already defaulted to one, whereas with the direct URL method that obviously doesn't happen.

Read the whole thing.
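
That directory-context difference has a rough analogue in code. Here is a sketch assuming the azure-identity Python package and a placeholder tenant ID: an already-established session (as in Cloud Shell) carries its directory along, while a fresh sign-in has to name the tenant itself.

```python
from azure.identity import AzureCliCredential, InteractiveBrowserCredential

# Inside Cloud Shell you are already signed in, so the CLI credential
# simply inherits the portal's directory context.
cloud_shell_style = AzureCliCredential()

# Authenticating from scratch (as when you browse straight to
# shell.azure.com) means picking the directory yourself.
fresh_signin_style = InteractiveBrowserCredential(tenant_id="<tenant-guid>")
```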

Azure SQL Linux VM Configuration with dbatools

Rob Sewell walks us through configuring SQL Server on an Azure VM running Linux, installing Powershell, and using dbatools:

I had set the Network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It’s a free download. I click on Sessions.

There wasn’t much I could excerpt here, but this is a heavily screenshot-driven tutorial.
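
If you follow along and want to confirm the NSG rule and SQL Server are cooperating, a bare-bones connection test does the job. This is a sketch in Python with pyodbc rather than the post's PowerShell and dbatools; the server name and credentials are placeholders:

```python
import pyodbc

# Placeholder host and credentials; the NSG must allow your client IP
# through on port 1433 for this to connect at all.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your-linux-vm>.westus.cloudapp.azure.com;"
    "UID=sqladmin;PWD=<password>",
    timeout=5,
)
print(conn.execute("SELECT @@VERSION;").fetchone()[0])
```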

Defining Data Egress Charges with Azure Query Editor

Dave Bland looks into what constitutes data egress with Azure and looks at a specific example:

If you have attempted to calculate your price for your Azure environment, you know that the pricing can be complex, taking into account a number of factors. These factors include data egress, compute, and storage. The intent of this post is not to outline all of the billing factors of Azure; its purpose is to answer one question:

When I use the Azure SQL Query Editor, does that count as part of the data egress charges?

I completed a blog post on the Azure Query Editor a few weeks back. Here is a link, if you would like to check it out.

Read on to see what Dave learned.
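
As background, egress is metered on outbound bytes, so a back-of-the-envelope estimate for any result set is simple arithmetic (the row count and average row size below are made-up numbers):

```python
# Rough egress estimate for a result set. Outbound bytes are what is
# metered; the per-GB rate varies by region and monthly volume.
rows = 40_000
avg_row_bytes = 200                        # assumed serialized row size
egress_gb = rows * avg_row_bytes / 1024**3
print(f"Result set is roughly {egress_gb:.4f} GB of egress")
```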

CosmosDB Continuation Tokens

Hasan Savran walks us through the idea of a continuation token in CosmosDB:

In CosmosDB, the TOP option is required and its default value is 100. You can change the default value by sending a different value in the request header “x-ms-max-item-count”. If you have 40,000 rows in your Orders table and run the same query in CosmosDB, you will get 100 rows (documents) rather than 40,000 rows (documents). CosmosDB returns all kinds of metadata with the data. You can find this metadata in the response headers. One of those headers is “x-ms-continuation”, and it is responsible for returning the rest of the rows of your query. If you would like to get the next set of results, you can take the “x-ms-continuation” value from the response headers and attach it to your next request to get the next set of rows. The CosmosDB SDK does this automatically for you. The SDK checks for the x-ms-continuation value when you check the HasMoreResults property. If this property is true, that means CosmosDB returned a continuation token.

I have fanciful notions of SQL Server offering something similar—think of a grid built from a query. Get the first 50 rows from the result set, store the remainder off in tempdb somewhere, use the “continuation token” (which might just be the full name in tempdb) to pick up where you left off, and auto-trash the whole thing after a certain amount of time.
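
To make the quoted mechanics concrete, here is a minimal sketch of driving that pagination by hand with the azure-cosmos Python SDK; the account, key, database, container, and partition key values are all placeholders:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com", credential="<key>")
container = client.get_database_client("Store").get_container_client("Orders")

# max_item_count plays the role of the x-ms-max-item-count header.
pages = container.query_items(
    query="SELECT * FROM c",
    partition_key="electronics",
    max_item_count=100,
).by_page()

first_page = list(next(pages))
token = pages.continuation_token   # the x-ms-continuation value

# Later (even from a different process), resume from the token
# instead of re-reading the first page.
resumed = container.query_items(
    query="SELECT * FROM c",
    partition_key="electronics",
    max_item_count=100,
).by_page(token)
second_page = list(next(resumed))
```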

Running Spark MLlib to Feed Power BI

Brad Llewellyn shows how you can take Spark MLlib results and feed them into Power BI:

MLlib is one of the primary extensions of Spark, along with Spark SQL, Spark Streaming and GraphX.  It is a machine learning framework built from the ground up to be massively scalable and operate within Spark.  This makes it an excellent choice for machine learning applications that need to crunch extremely large amounts of data.  You can read more about Spark MLlib here.

In order to leverage Spark MLlib, we obviously need a way to execute Spark code.  In our minds, there’s no better tool for this than Azure Databricks.  In the previous post, we covered the creation of an Azure Databricks environment.  We’re going to reuse that environment for this post as well.  We’ll also use the same dataset that we’ve been using, which contains information about individual customers.  This dataset was originally designed to predict Income based on a number of factors.  However, we left the income out of this dataset a few posts back for reasons that were important then.  So, we’re actually going to use this dataset to predict “Hours Per Week” instead.

Check it out. And Brad’s not joking when he says the resulting model is terrible. But that’s okay, because it was never about the model.
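
For a sense of what this looks like in a notebook, here is a compressed PySpark sketch; the path and column names are stand-ins for the dataset Brad describes, and the point is the plumbing, not the fit:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.getOrCreate()

# Stand-in path and columns for the customer dataset.
df = spark.read.csv("/mnt/data/customers.csv", header=True, inferSchema=True)

# Fold the assumed feature columns into a single vector column.
assembler = VectorAssembler(
    inputCols=["age", "education_num", "capital_gain"],
    outputCol="features",
)
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

# Predict Hours Per Week, as in the post (label column name assumed).
lr = LinearRegression(featuresCol="features", labelCol="hours_per_week")
model = lr.fit(train)
print(model.evaluate(test).rootMeanSquaredError)
```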

Building an AKS Cluster with Azure DevOps

Rob Sewell shows how to use Azure DevOps to build an AKS cluster with Terraform:

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code, to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines, to using Task Groups in Azure DevOps to reuse the same Build Process and build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic, but Task Groups can only be used in the same Azure DevOps repository. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can.

And Rob shows us how it’s done.

Query Editing with Azure SQL Database

Dave Bland shows off the Azure SQL Database query editor built into the Azure portal:

We have all been using SQL Server Management Studio to query and manipulate data, even for an Azure SQL database.  There is also an option to do this same thing built into the SQL Azure database interface in the Azure portal.  Although there have been a number of posts related to this topic dating back a few years, this feature is still marked as “preview” in the Azure portal.

Click through to see how it works, what you can do with it, and some of its limitations.

Backing Up Databases to Azure Blob Storage

Jamie Wick shows us how we can back up databases directly to Azure Blob Storage:

Azure storage, as a destination for SQL backups, is a great option for organizations that are contemplating replacing older on-prem NAS appliances or improving their Disaster Recovery functionality. The tiered storage pricing, along with local and global redundancy options, can be much more cost-effective than many traditional backup options.

In this post, we’re going to look at some of the key concepts and restrictions, along with how to back up an SQL database to an Azure storage location.

Click through for the demo.
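
For reference, the scripted version of the same idea looks roughly like this. It's a sketch driven from Python via pyodbc, with the T-SQL carried as strings; the storage account, container, SAS token, and server details are all placeholders:

```python
import pyodbc

# The credential name must match the container URL; the SAS secret is
# pasted without its leading '?'.
create_credential = """
CREATE CREDENTIAL [https://<account>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token>';
"""

backup_to_url = """
BACKUP DATABASE [MyDatabase]
TO URL = 'https://<account>.blob.core.windows.net/<container>/MyDatabase.bak'
WITH COMPRESSION, STATS = 10;
"""

# BACKUP cannot run inside a user transaction, hence autocommit=True.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
conn.execute(create_credential)
conn.execute(backup_to_url)
```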
