Category: Cloud

Advance Notifications for Azure SQL MI

Uros Milanovic gives us a heads up:

Advance notifications allow you to prepare for planned maintenance events on your SQL Managed Instance resources. They alert you 24 hours before a planned maintenance event. Advance notifications work hand-in-hand with SQL Maintenance Windows – with the two combined, you gain control over when your managed instances receive updates, and you receive a notification ahead of time.

Read on to learn more about how this works. There is a bit of setup involved to subscribe to these, though Uros provides a link to a guide on how to do it.

Downloading and Deleting Files in Oracle Object Storage

Brendan Tierney continues a series on Oracle Object Storage:

In my previous posts on using Object Storage, I illustrated what you needed to do to set up your connection, explore Object Storage, create Buckets, and add files. In this post, I’ll show you how to download files from a Bucket and how to delete Buckets.

Click through for scripts to perform both. If you just want to delete an item from a bucket without deleting the bucket as a whole, you can do so with a quick modification to Brendan’s script.
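
For a sense of the shape of those scripts, here is a minimal sketch using the OCI Python SDK. The bucket and object names are placeholders, and it assumes a config file at the default ~/.oci/config location – read Brendan’s post for his actual code:

```python
import oci

# Assumes a valid OCI config file at the default location (~/.oci/config)
config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Download a single object ("report.csv" is a placeholder) to a local file
obj = client.get_object(namespace, "my-bucket", "report.csv")
with open("report.csv", "wb") as f:
    for chunk in obj.data.raw.stream(1024 * 1024, decode_content=False):
        f.write(chunk)

# Delete just that one object -- the bucket itself is untouched
client.delete_object(namespace, "my-bucket", "report.csv")

# Delete the bucket; this only succeeds once the bucket is empty
client.delete_bucket(namespace, "my-bucket")
```

The delete_object call is the quick modification mentioned above: it removes a single item while leaving the bucket in place.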

Dynamic Warehouse and Lakehouse Connections in Data Pipelines

Koen Verbeeck doesn’t want to hard-code the connection string:

When you develop data pipelines in Microsoft Fabric (the Azure Data Factory equivalent in Fabric, not to be confused with deployment pipelines), you will most likely have some activities with a connection to a warehouse, a lakehouse or a KQL database (for the remainder of the blog post I’ll talk about a warehouse, but it can be any of those three data stores). For example, in a Script, Lookup, or Copy activity. When you deploy your data pipeline to another workspace – using, you might’ve guessed it, deployment pipelines – the pipeline itself is copied to the other workspace. E.g., we deploy a pipeline from the development workspace to the test workspace.

Read on to see what this means for warehouse connections and how you can work around the existing messiness.

Bucket Operations in Oracle Object Storage

Brendan Tierney continues a series on working with Oracle Object Storage:

In a previous post, I showed what you need to do to set up your local PC/laptop to be able to connect to OCI. I also showed how to perform some simple queries on your Object Storage environment. Go check out that post before proceeding with the examples in this blog.

In this post, I’ll build upon my previous post by giving some Python functions to:

  • Check if a Bucket exists
  • Create a Bucket
  • Delete a Bucket
  • Upload an individual file
  • Upload an entire directory

Read on for those examples.
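
As a rough illustration of what such functions look like with the OCI Python SDK – these are not Brendan’s exact implementations, and the compartment OCID is a placeholder – consider:

```python
import os
import oci

config = oci.config.from_file()  # default ~/.oci/config profile
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

def bucket_exists(bucket_name):
    """Return True if the bucket exists, False on a 404."""
    try:
        client.get_bucket(namespace, bucket_name)
        return True
    except oci.exceptions.ServiceError as err:
        if err.status == 404:
            return False
        raise

def create_bucket(bucket_name, compartment_id):
    """Create a bucket in the given compartment."""
    details = oci.object_storage.models.CreateBucketDetails(
        name=bucket_name, compartment_id=compartment_id)
    return client.create_bucket(namespace, details)

def upload_file(bucket_name, local_path):
    """Upload one local file, keyed by its base name."""
    object_name = os.path.basename(local_path)
    with open(local_path, "rb") as f:
        client.put_object(namespace, bucket_name, object_name, f)

def upload_directory(bucket_name, dir_path):
    """Upload every file sitting directly inside a local directory."""
    for entry in os.listdir(dir_path):
        full_path = os.path.join(dir_path, entry)
        if os.path.isfile(full_path):
            upload_file(bucket_name, full_path)
```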

Working with Oracle OCI Object Storage

Brendan Tierney peruses buckets:

This blog post will walk you through how to access Oracle OCI Object Storage and explore what buckets and files you have there, using Python and the OCI Python library. There will be additional posts which will walk through some of the other typical tasks you’ll need to perform with moving files into and out of OCI Object Storage.

It looks like the interface for this is substantially similar to AWS’s S3.
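
For flavor, exploring what you have in Object Storage looks roughly like this with the OCI Python SDK (the compartment OCID and bucket name are placeholders):

```python
import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# List the buckets in a compartment (the OCID below is a placeholder)
compartment_id = "ocid1.compartment.oc1..example"
for bucket in client.list_buckets(namespace, compartment_id).data:
    print(bucket.name)

# List the objects in one bucket, asking for the size field as well
objects = client.list_objects(namespace, "my-bucket", fields="name,size")
for obj in objects.data.objects:
    print(obj.name, obj.size)
```

The client/namespace/bucket/object hierarchy really is close in spirit to boto3 and S3.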

I/O Analysis for SQL Server on Azure VMs

Ebru Ersan announces a new preview feature:

It is not easy to understand what’s going on when you run into an I/O related performance problem on an Azure Virtual Machine. It is a common but complex problem. What you need is to figure out what’s happening at both the host level and within your SQL Server instance, where correlating host metrics with SQL Server workloads can often be a challenge.

We developed a new experience that helps you do exactly that.

Click through to see how it works. Given that awful disk latency is a common problem in the cloud, this may at least tell you if you have things set up correctly.
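
The new experience itself lives in the Azure portal, but you can sanity-check the SQL Server side of the story yourself. Here is a minimal sketch using pyodbc and the sys.dm_io_virtual_file_stats DMV – the connection string details are placeholders for your own VM:

```python
import pyodbc

# Connection details are placeholders; adjust server and auth for your VM
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=my-azure-vm;"
    "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes"
)

# Average read/write latency per database file, as SQL Server measures it
query = """
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;
"""

for row in conn.cursor().execute(query):
    print(row.database_name, row.file_id, row.avg_read_ms, row.avg_write_ms)
```

If these numbers look fine while the host metrics do not (or vice versa), that is exactly the correlation gap the new portal experience aims to close.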

Invoking a Fabric Data Factory Pipeline via REST API

Andy Leonard makes a call:

This post is current as of 30 May 2024. There are other posts by fantastic bloggers about how to use the Fabric REST API. Fabric development is progressing so fast, some of those posts are less up-to-date. Make no mistake, this post will most likely not age well, and for the very same reason. That’s ok. We bloggers live to serve. I, like all the rest, will endeavor to persevere – and we will all write more posts, Lord willing.

In this post, I share one way to invoke Fabric Data Factory pipelines using the REST API.
I will be using the web version of Postman to call REST API methods.
You can sign up for a free Postman account. Since it’s free, I encourage you to check the box to receive news and offers from them. As I mentioned in an earlier post, you can always unsubscribe if the messages are unhelpful or if they get too “chatty.”

Read on for that way.
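
If you would rather script the call than click through Postman, the same request is straightforward in Python. This is a hedged sketch against the Fabric job scheduler endpoint – the access token and GUIDs are placeholders, and (as Andy warns) the API surface may well have changed since this was written:

```python
import requests

# Placeholders: supply a valid Microsoft Entra ID access token and the
# workspace/pipeline GUIDs, which you can read from the Fabric portal URLs
TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-guid>"
PIPELINE_ID = "<pipeline-item-guid>"

url = (
    "https://api.fabric.microsoft.com/v1/workspaces/"
    f"{WORKSPACE_ID}/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# A successful call returns 202 Accepted; the Location header points to
# the job instance, which you can poll for the run's status
print(resp.status_code, resp.headers.get("Location"))
```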

Azure SQL Managed Instance Update Policies

Rod Edwards is not amused:

Ah, SQL 2022, the release that finally brought box SQL and Azure Managed Instance closer together. We have wonderful toys such as Managed Instance Link, allowing us to connect our SQL 2022 on-prem instances to Azure SQL Managed Instance. It was like the first real effort to integrate modern Azure offerings with those who also need / prefer an on-prem presence.

Rob Litjens has a follow-up on this:

I prepared some questions:

  1. What policies does Managed Instance have?
  2. Why did Microsoft implement the ‘Always-up-to-date’ update policy?
  3. Why is it named a policy?
  4. Do we need to update our Azure scripts to implement it (immediately)?
  5. Is there impact on offerings like Managed Instance Link?

Do read both of these, as they combine for a rounded perspective on the issue Rod brought up.

Recapping an Orchestration Framework

Martin Schoombee wraps up a series:

Frameworks are extremely useful when they are thoughtfully designed and implemented. I have seen both sides of the coin, but what I probably see the most is a lack of any sort of framework. What I typically see are some naming conventions and coding standards, but many companies miss the opportunity to take it one step further and reduce the inefficiencies of repetitive tasks. There’s a ton of repetition in ETL processes, and in my opinion that gives us a really good opportunity to streamline the way in which we are doing things with a well-designed framework.

Read on for Martin’s notes to keep in mind, as well as where to go from here.

Orchestration Controllers in Azure Data Factory

Martin Schoombee gets to the top of the pyramid:

Controllers are pipelines that initiate the execution of a single process or task in a specific order and with constraints. Whereas everything else in this framework is pretty automated, this part is entirely manual.

Why? Well, when I started thinking about the design of this framework I knew I needed something at the “highest level” that would execute an entire daily ETL process, or a modified ETL process that only loads specific data during the day. I wanted to maximize the flexibility of the framework, and that either meant adding another level to the metadata structure or creating this layer of pipelines that sit at the top. I opted for the second, because I did not feel it was worth the complexity of adding another layer into the metadata structure. That being said, it doesn’t mean it cannot or shouldn’t be done…it was a personal choice I made to keep things as simple as I could.

Read on to learn more about what the controller should look like and how the other pieces fit in.
