Press "Enter" to skip to content

Controlling Azure Function Spend via Consumption Plan

Andy Brownsword saves some cash:

A consumption-based App Service Plan in Azure provides us with a pay-as-you-go model for Function usage. This can help reduce spend compared to Premium plans when those plans exceed the requirements of the function, for example with low-volume or intermittent work.

Unfortunately, you can’t move a Premium plan to a Consumption-based plan via the portal. Instead, we’ll demonstrate how to use PowerShell to achieve this.

Read on for the code, as well as a bit more information on the Consumption tier.
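
If PowerShell isn't your speed, the ARM-level change boils down to re-pointing the function app at the new plan, and you can do that from Python as well. This is a rough sketch with the azure-mgmt-web SDK rather than Andy's PowerShell, assuming a Consumption (Y1) plan already exists (typically in the same region and resource group as the app), with placeholder names throughout:

    # Rough sketch with the azure-mgmt-web Python SDK (the post itself uses
    # PowerShell). All resource names and the subscription ID are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient

    subscription_id = "00000000-0000-0000-0000-000000000000"
    client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

    # Fetch the function app and the pre-created Consumption (Y1) plan.
    site = client.web_apps.get("my-resource-group", "my-function-app")
    plan = client.app_service_plans.get("my-resource-group", "my-consumption-plan")

    # Re-point the app at the Consumption plan and push the change.
    site.server_farm_id = plan.id
    client.web_apps.begin_create_or_update(
        "my-resource-group", "my-function-app", site
    ).result()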

Configuring Microsoft Fabric Data Mirroring for Snowflake

Koen Verbeeck copies some data:

We have a couple of Snowflake databases and would like to have that data available in Microsoft Fabric as well. Is there an easy solution to get the data quickly in Fabric? We don’t have many technical people on staff, so writing complex ETL is not an option.

Read on for more information on how it works. Mind you, you’re probably still writing the T and some of the L after using mirroring.

Tips for Databricks Asset Bundles

Dustin Vannoy has a new video:

This post and video cover some specific examples people have brought up when defining their Databricks Asset Bundles. The video includes a bit of review, but for more of an introduction, please see my first post on Databricks Asset Bundles. The GitHub repository I use will probably be first to update with new examples; however, I hope to continue adding to the examples in these posts, plus additional videos.

Click through to check out those tips.

Copying an Azure SQL Database

Josephine Bush makes a copy:

It’s as simple as this for each db you want a copy of. Just run it from the master db. This works if you want to make a copy on the same server. If you want to make a copy from another server, you would have to connect via PowerShell.

Click through for the T-SQL syntax. I’ve used this before on some reasonably large databases and it can take a while for that copy command to finish, but if you’re feeling impatient, you can check the status of the job using sys.dm_operation_status.
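
The copy itself is the documented CREATE DATABASE ... AS COPY OF pattern, and you can drive both the copy and the status check from Python if that's your tooling of choice. A minimal sketch with pyodbc, using placeholder server, login, and database names:

    # Run the copy from the logical server's master database, then poll
    # sys.dm_operation_status until the async operation finishes.
    import time
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=master;"
        "UID=myadmin;PWD=<password>",
        autocommit=True,  # CREATE DATABASE cannot run inside a transaction
    )
    cur = conn.cursor()
    cur.execute("CREATE DATABASE MyDb_Copy AS COPY OF MyDb;")

    while True:
        row = cur.execute(
            "SELECT TOP (1) state_desc, percent_complete "
            "FROM sys.dm_operation_status "
            "WHERE major_resource_id = 'MyDb_Copy' "
            "ORDER BY start_time DESC;"
        ).fetchone()
        if row:
            print(row.state_desc, row.percent_complete)
            if row.state_desc in ("COMPLETED", "FAILED", "CANCELLED"):
                break
        time.sleep(30)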

Debugging Failed Function Calls in ADF

Andy Brownsword troubleshoots a problem:

I recently ran into an issue when trying to call a function from an ADF pipeline. The function returned a generic Internal Server Error with no details exposed. Here we’ll look at how to dig into the logs to identify the true cause of the failure.

In this instance, the function was performing PGP encryption, but this could apply to any function.

Click through for the very generic error message and how you can get the real details.
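
If your function app is wired to workspace-based Application Insights, one way to get at those details is to query the logs directly. This is a sketch of that idea (not necessarily Andy's exact steps) using the azure-monitor-query package; the workspace ID is a placeholder, and the AppExceptions table assumes workspace-based App Insights:

    # Pull recent function exceptions out of the Log Analytics workspace.
    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())
    query = """
    AppExceptions
    | where TimeGenerated > ago(1d)
    | project TimeGenerated, OperationName, ExceptionType, OuterMessage
    | order by TimeGenerated desc
    """
    result = client.query_workspace(
        "00000000-0000-0000-0000-000000000000",  # Log Analytics workspace ID
        query,
        timespan=timedelta(days=1),
    )
    for table in result.tables:
        for row in table.rows:
            print(list(row))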

Advance Notifications for Azure SQL MI

Uros Milanovic gives us a heads up:

Advance notifications allow you to prepare for planned maintenance events on your SQL Managed Instance resources. They alert you 24 hours before a planned maintenance event. Advance notifications work hand-in-hand with SQL Maintenance Windows – with the two combined, you gain control over when your managed instances receive updates and receive a notification ahead of time.

Read on to learn more about how this works. There is a bit of setup involved to subscribe to these, though Uros provides a link to a guide on how to do it.

Downloading and Deleting Files in Oracle Object Storage

Brendan Tierney continues a series on Oracle Object Storage:

In my previous posts on using Object Storage, I illustrated what you needed to do to set up your connection, explore Object Storage, create Buckets, and add files. In this post, I’ll show you how to download files from a Bucket and how to delete Buckets.

Click through for scripts to perform both. If you just want to delete an item from a bucket without deleting the bucket as a whole, you can do so with a quick modification to Brendan’s script.
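
To give a sense of that modification, here's a minimal sketch with the OCI Python SDK that downloads an object and then deletes just that object, leaving the bucket alone; the bucket and object names are placeholders:

    import oci

    config = oci.config.from_file()  # reads ~/.oci/config
    client = oci.object_storage.ObjectStorageClient(config)
    namespace = client.get_namespace().data

    # Stream an object down to a local file.
    obj = client.get_object(namespace, "my-bucket", "reports/data.csv")
    with open("data.csv", "wb") as f:
        for chunk in obj.data.raw.stream(1024 * 1024, decode_content=False):
            f.write(chunk)

    # Remove just that object; the bucket itself stays put.
    client.delete_object(namespace, "my-bucket", "reports/data.csv")

Note that deleting the bucket itself (client.delete_bucket) only succeeds once the bucket is empty, which is why per-object deletes matter.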

Dynamic Warehouse and Lakehouse Connections in Data Pipelines

Koen Verbeeck doesn’t want to hard-code the connection string:

When you develop data pipelines in Microsoft Fabric (the Azure Data Factory equivalent in Fabric, not to be confused with deployment pipelines), you will most likely have some activities with a connection to a warehouse, a lakehouse or a KQL database (for the remainder of the blog post I’ll talk about a warehouse, but it can be any of those three data stores). For example, in a Script, Lookup, or Copy activity. When you deploy your data pipeline to another workspace – using, you might’ve guessed it, deployment pipelines – the pipeline itself is copied to the other workspace. E.g., we deploy a pipeline from the development workspace to the test workspace.

Read on to see what this means for warehouse connections and how you can work around the existing messiness.

Bucket Operations in Oracle Object Storage

Brendan Tierney continues a series on working with Oracle Object Storage:

In a previous post, I showed what you need to do to set up your local PC/laptop to be able to connect to OCI. I also showed how to perform some simple queries on your Object Storage environment. Go check out that post before proceeding with the examples in this blog.

In this post, I’ll build upon my previous post by giving some Python functions to:

  • Check if a Bucket exists
  • Create a Bucket
  • Delete a Bucket
  • Upload an individual file
  • Upload an entire directory

Read on for those examples.
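
As a flavor of what those functions might look like, here's a minimal sketch of the first few operations using the OCI Python SDK; the compartment OCID and bucket names are placeholders rather than Brendan's actual code:

    import oci

    config = oci.config.from_file()
    client = oci.object_storage.ObjectStorageClient(config)
    namespace = client.get_namespace().data
    compartment_id = "ocid1.compartment.oc1..example"  # placeholder OCID

    def bucket_exists(name: str) -> bool:
        # head_bucket raises a 404 ServiceError when the bucket is absent
        try:
            client.head_bucket(namespace, name)
            return True
        except oci.exceptions.ServiceError as e:
            if e.status == 404:
                return False
            raise

    def create_bucket(name: str) -> None:
        details = oci.object_storage.models.CreateBucketDetails(
            name=name, compartment_id=compartment_id
        )
        client.create_bucket(namespace, details)

    def upload_file(bucket: str, object_name: str, path: str) -> None:
        # put_object accepts a file-like body
        with open(path, "rb") as f:
            client.put_object(namespace, bucket, object_name, f)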

Working with Oracle OCI Object Storage

Brendan Tierney peruses buckets:

This blog post will walk you through how to access Oracle OCI Object Storage and explore what buckets and files you have there, using Python and the OCI Python library. There will be additional posts which will walk through some of the other typical tasks you’ll need to perform with moving files into and out of OCI Object Storage.

It looks like the interface for this is substantially similar to AWS’s S3.
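
To give a flavor of that exploration, a minimal sketch with the OCI Python SDK, using a placeholder compartment OCID and bucket name:

    import oci

    config = oci.config.from_file()  # reads ~/.oci/config
    client = oci.object_storage.ObjectStorageClient(config)
    namespace = client.get_namespace().data

    # Buckets live in a compartment; the OCID here is a placeholder.
    buckets = client.list_buckets(namespace, "ocid1.compartment.oc1..example").data
    for b in buckets:
        print(b.name)

    # Objects within a bucket; "fields" asks for size as well as name.
    objects = client.list_objects(namespace, "my-bucket", fields="name,size").data
    for o in objects.objects:
        print(o.name, o.size)

The rough boto3 equivalents are list_buckets and list_objects_v2, so if you know S3, the mental model carries over.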
