Press "Enter" to skip to content

Category: Cloud

Deploying an Azure Function via Azure DevOps

Koen Verbeeck wants to deploy a PowerShell-based Azure Function:

In the blog post Azure Function with PowerShell and the Power BI REST API I explained how you could create an Azure Function using the PowerShell scripting language. This Function connected with the Power BI REST API and retrieved the last refresh status of a dataset. Developing the Function is one thing; deploying it is another. In this blog post I’ll guide you through the set-up of a build and release pipeline in Azure DevOps. As a prerequisite, the Azure Function and its dependencies (for example the requirements.psd1 file) are all checked into a Git repo. As a reminder, the folder structure looks like this:

Read on for the walkthrough.
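
In the meantime, here is a rough sketch of what the deployment step boils down to, written as plain Az PowerShell rather than Koen's actual pipeline; the resource group, Function App name, and folder path are made up for illustration:

# Hypothetical names and paths; assumes the Az.Websites module and an existing Function App,
# with Connect-AzAccount already run (in a pipeline this would be a service connection).
$functionSource = '.\MyPowerShellFunctionApp'   # contains host.json, requirements.psd1, and the function folders
$zipPath        = Join-Path $env:TEMP 'functionapp.zip'

# Package the function app content
Compress-Archive -Path "$functionSource\*" -DestinationPath $zipPath -Force

# Zip-deploy the package to the existing Function App
Publish-AzWebApp -ResourceGroupName 'rg-powerbi-monitoring' `
                 -Name 'func-pbi-refresh-status' `
                 -ArchivePath $zipPath `
                 -Force

An Azure DevOps release would typically wrap the same idea in a built-in deployment task: build an artifact from the Git repo, then push it to the Function App.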


Azure Databricks Security Considerations

Craig Porteous provides some advice on configuring Azure Databricks:

Azure Databricks is an analytics platform and often serves as the central compute component of a data platform, to process ETL/ELT data pipelines and data science workloads. As Databricks is a third-party platform-as-a-service offering, securing it works differently to most other first-party services in Azure; for example, we can’t use private endpoints. (More on these in the Azure Storage post.)

The two main approaches to working with Databricks in our secure platform are VNet Peering and VNet Injection.

Click through to learn the difference between these two, as well as a few other factors to keep in mind as you’re deploying Databricks.
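
If you land on the VNet injection option, a minimal sketch with the Az.Databricks PowerShell module might look like the following; the workspace, VNet, and subnet names are hypothetical, and the parameter names are worth double-checking against your module version:

# Hypothetical names; assumes the Az.Databricks module and an existing VNet with
# two subnets delegated to Microsoft.Databricks/workspaces.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-data-platform' -Name 'vnet-data-platform'

New-AzDatabricksWorkspace -Name 'dbw-secure-analytics' `
                          -ResourceGroupName 'rg-data-platform' `
                          -Location 'uksouth' `
                          -Sku 'premium' `
                          -VirtualNetworkId $vnet.Id `
                          -PublicSubnetName 'snet-databricks-public' `
                          -PrivateSubnetName 'snet-databricks-private'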


Logic Apps: Source Control and Deployment

Koen Verbeeck has a two-parter. First up is storing Logic App code in source control:

At a data warehouse project I’m using a couple of Logic Apps to do some lightweight data movements. For example: reading a SharePoint list and dumping the contents into a SQL Server table. Or reading CSV files from a OneDrive directory and putting them in Blob storage. Some of those things can be done in Azure Data Factory as well, but it’s easier and cheaper to do them with Logic Apps.

Logic Apps are essentially JSON code behind the scenes, so they should be included in the source control system of your choice (for the remainder of the blog post we’re going to assume this is Git).

The second post covers deployment:

It’s easy to duplicate an Azure Logic App in a resource group, but unfortunately you cannot duplicate a Logic App between environments (you might try to copy-paste the JSON, though). So unless you want to hand-craft every Logic App yourself on each of your environments, you need a way to automatically deploy your Logic Apps. It’s easier, faster, and less error-prone than any manual method.

Check out both posts.
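
For a flavour of what the automated route can look like (this is not necessarily the mechanism Koen uses, and the file and resource names are hypothetical), you can export the Logic App as an ARM template and deploy it per environment:

# Hypothetical template and parameter files exported from the development Logic App;
# assumes the Az module and one resource group (and parameter file) per environment.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-dwh-test' `
                              -TemplateFile '.\logicapps\la-sharepoint-to-sql.json' `
                              -TemplateParameterFile '.\logicapps\la-sharepoint-to-sql.test.parameters.json' `
                              -Mode Incremental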


Imagining a SaaS Plane for Data Mesh in Azure

Paul Andrew shares some deep thoughts:

For part 7 of this series, I want to explore what else could be delivered in our Azure Data Mesh if we continue our established thinking around the planes of interaction for our data products. As with part 6, we are still missing good Azure Resources that can be deployed for certain situations. However, I want to frontload some concepts now, so we are ready if/when a suitable technical answer arrives in the cloud.

Note that this is all speculative. It’s interesting speculation, though.


Cross-Subscription Restore for Dedicated SQL Pools

Steve Howard announces some good news:

We are excited to announce the release of cross-subscription restore. This has been one of our top requested features from customers as it unlocks multiple scenarios from dev/test to simplified billing at the subscription level for restored data warehouses.

Click through to see how you can do this. There was a workaround in the past but this should be quite a bit faster.
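
As a hedged sketch, the general shape of a cross-subscription restore for a dedicated SQL pool (formerly SQL DW) might look something like this in PowerShell; all names and subscription IDs are placeholders, and the announcement’s exact steps may differ:

# Placeholders throughout; assumes the Az.Sql module and sufficient permissions in both subscriptions.

# Grab the source data warehouse in the source subscription
Set-AzContext -SubscriptionId '<source-subscription-id>'
$source = Get-AzSqlDatabase -ResourceGroupName 'rg-prod' -ServerName 'sqlprod01' -DatabaseName 'dw_prod'

# Switch to the target subscription and restore onto a server there
Set-AzContext -SubscriptionId '<target-subscription-id>'
Restore-AzSqlDatabase -FromPointInTimeBackup `
                      -PointInTime (Get-Date).AddHours(-1) `
                      -ResourceGroupName 'rg-dev' `
                      -ServerName 'sqldev01' `
                      -TargetDatabaseName 'dw_dev' `
                      -ResourceId $source.ResourceId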


Azure Redis Tips

Arun Sirpal enumerates some advice:

My learnings on Redis thus far, which you may find useful:

1. Location of Redis should be close to your app.

2. Mind your data structures within Redis: larger key-value sizes lead to fragmentation of the memory space, and those larger memory requirements mean more network data transfer. Redis guidance is to use a 100KB maximum per value; bigger payloads increase the transfer time for the app, and a request could time out if the data is big.

Click through for the rest of Arun’s advice. My advice on the 100KB maximum is that it really should be closer to 100 bytes or 1KB max in practice, especially for storing data which differs by entity (user, customer, organization, whatever your domain uses).
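
To put a number on that, here is a trivial, hypothetical PowerShell check you could run before writing a serialized entity into the cache, using a roughly 1KB budget per value:

# Hypothetical entity and threshold, just to illustrate keeping cache values small.
$customer = [pscustomobject]@{
    Id          = 42
    Name        = 'Contoso'
    Preferences = @('weekly-digest', 'dark-mode')
}

$payload   = $customer | ConvertTo-Json -Compress
$sizeBytes = [System.Text.Encoding]::UTF8.GetByteCount($payload)

if ($sizeBytes -gt 1KB) {
    Write-Warning "Cache value is $sizeBytes bytes; consider trimming the entity or splitting it across keys."
}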


Using the master dacpac in Azure DevOps

Koen Verbeeck makes use of system databases in a database project:

I have a database project in Visual Studio. Inside the database, I use a couple of system views to fetch some metadata about tables. To make the project build successfully, you need to add a reference to the master database in the project.

That all works fine, but there’s a bit more you need to do before Azure DevOps can work with the file. Read on to learn what that thing is.
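
No spoilers here, but one common way to handle a master reference on a build agent, which may or may not be the approach Koen describes, is to commit the dacpac alongside the project; the path below is hypothetical and changes with the Visual Studio version and edition:

# Hypothetical path: the SSDT-shipped master.dacpac lives under the Visual Studio install folder,
# and the exact location varies by version and edition.
$masterDacpac = 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\Extensions\SqlServer\160\SqlSchemas\master.dacpac'

# Copy it next to the database project and commit it, so the reference resolves on the build agent.
Copy-Item -Path $masterDacpac -Destination '.\MyDatabaseProject\master.dacpac'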


Streaming Data into Synapse Dedicated SQL Pool

Lionel Penuchot loads some data:

This article reviews a common pattern of streaming data (i.e. real-time message ingestion) into a Synapse dedicated SQL pool. It opens a discussion on the simple, standard way to implement this, as well as the challenges and drawbacks. It then presents an alternate solution which enables optimal performance and greatly reduces maintenance tasks when using clustered columnstore indexes. This is aimed at developers, DBAs, architects, and anyone who works with streams of data that are captured in real time.

I’d probably avoid the MERGE statement in there because of how many problems there are with it. That said, this is a useful pattern for trickle-loading columnstore tables.
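
If you do want to sidestep MERGE, a hedged sketch of the classic alternative, a delete-then-insert from a staging table, run here via the SqlServer PowerShell module against hypothetical table names, looks like this:

# Hypothetical server, database, and table names; assumes the SqlServer module
# and a staging table already loaded with the latest micro-batch.
$upsert = @"
BEGIN TRANSACTION;

DELETE FROM dbo.FactEvents
WHERE  EventId IN (SELECT EventId FROM stg.FactEvents);

INSERT INTO dbo.FactEvents (EventId, Payload, ModifiedDate)
SELECT EventId, Payload, ModifiedDate
FROM   stg.FactEvents;

COMMIT;
"@

Invoke-Sqlcmd -ServerInstance 'myworkspace.sql.azuresynapse.net' `
              -Database 'dedicatedpool01' `
              -Username 'sqladminuser' `
              -Password $env:SYNAPSE_SQL_PASSWORD `
              -Query $upsert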


Costs for Managed Virtual Networks in Azure Data Factory

Martin Schoombee brings up an interesting point:

We were running SSIS in an Azure VM, spinning the VM up and down as required to run the ETL processes. A third-party SSIS component was used to extract data out of Dynamics 365 CRM, and accounted for a significant part of the yearly costs. I blogged before about the reasons why I think it’s worth moving from Azure AS to Power BI PPU, and combined with the move to Azure Data Factory I estimated a cost reduction of almost 35%.

After deploying the solution I noticed that our daily ETL costs were significantly higher than I thought they would be, and that started a little rabbit-hole exercise to figure out why.

I’m used to thinking about managed virtual networks in the case of Azure Synapse Analytics, where I think it makes a lot of sense as a default (especially because you can’t switch after you’ve made a decision).
