Press "Enter" to skip to content

Category: Cloud

So You Want to Fail Over a SQL Managed Instance

Danimir Ljepava takes us through user-initiated failover of SQL Managed Instances:

In August 2020, we released a new feature, user-initiated manual failover, which allows you to manually trigger a failover on SQL Managed Instance using PowerShell or CLI commands, or by invoking an API call.

Manually initiated failover on a managed instance is the equivalent of the automated failover for high availability and software patches that the service initiates on its own. Manually invoking a failover on a managed instance helps you test end-to-end applications for fault resiliency against automatic failovers from planned or unplanned events before deploying to production. In addition to testing how failover impacts existing database sessions, it can also help verify whether it changes end-to-end performance due to changes in network latency. In some cases, if performance issues are encountered on SQL MI, manually invoking a failover to a new node can help mitigate the performance issue.

Read on to see how you can perform failover and how you can confirm that it worked.
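As a rough illustration of the CLI route the announcement mentions, here is a minimal Python sketch that shells out to az sql mi failover; the resource group and instance names are placeholders, and the exact flag names should be verified against az sql mi failover --help.

```python
import subprocess

def failover_managed_instance(resource_group: str, instance_name: str) -> None:
    """Trigger a user-initiated failover on a SQL Managed Instance via the Azure CLI."""
    # Assumes the az CLI is installed and you are already logged in (az login).
    subprocess.run(
        [
            "az", "sql", "mi", "failover",
            "--resource-group", resource_group,
            "--managed-instance", instance_name,  # verify the flag name with --help
        ],
        check=True,  # raise CalledProcessError if the CLI returns a non-zero exit code
    )

if __name__ == "__main__":
    # Hypothetical names, for illustration only.
    failover_managed_instance("my-resource-group", "my-managed-instance")
```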


SQL Serverless in Azure Synapse Analytics

James Serra talks to us about SQL serverless (presently known as SQL on-demand but I’m getting ahead of the marketing curve this time):

Querying data in ADLS Gen2 storage using T-SQL is made easy because of the OPENROWSET function with additional capabilities (check out the T-SQL that is supported). The currently supported file types in ADLS Gen2 that SQL-on-demand can use are Parquet, CSV, and JSON. ParquetDirect and CSV 2.0 add performance improvements (see Benchmarking Azure Synapse Analytics – SQL Serverless, using .NET Interactive). You can also query folders and multiple files and use file metadata in queries.

Read on to learn a lot more about its use cases.
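For a sense of what that looks like from client code, here is a small Python sketch using pyodbc against a serverless endpoint; the workspace name, credentials, storage account, and file path are all placeholders.

```python
import pyodbc

# Connect to the serverless (on-demand) SQL endpoint of a Synapse workspace.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace-ondemand.sql.azuresynapse.net;"
    "DATABASE=master;UID=myuser;PWD=mypassword"
)

# OPENROWSET reads the Parquet files straight out of ADLS Gen2.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""

for row in conn.cursor().execute(query):
    print(row)
```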


Stopping and Starting Virtual Machines in a Resource Group

Dennes Torres walks us through a script to stop or start all virtual machines in an Azure resource group:

Some tasks on Azure are easier if we automate them. The Azure Portal provides us with the Cloud Shell, which we can use for this kind of automation.

I was running some experiments with SQL Server Always On, so I created three VMs inside a resource group. Every time I want to start an experiment, I need to start all three VMs and, in the end, stop all three again.

Read on to see how Dennes is able to accomplish this.
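Dennes uses the Azure Cloud Shell for this; as a rough alternative sketch, the same loop looks something like the following with the Azure SDK for Python (subscription ID and resource group name are placeholders).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def start_all() -> None:
    """Start every VM in the resource group."""
    for vm in compute.virtual_machines.list(RESOURCE_GROUP):
        compute.virtual_machines.begin_start(RESOURCE_GROUP, vm.name).result()

def deallocate_all() -> None:
    """Stop and deallocate every VM in the resource group (deallocation stops compute billing)."""
    for vm in compute.virtual_machines.list(RESOURCE_GROUP):
        compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name).result()
```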


Restoring SQL Server Backups from Azure Blob Storage

Niko Neugebauer walks us through special considerations when using Azure Blob Storage as your backup location:

If you are using Azure Blob Storage for SQL Server backups, you need to know a couple of important details before you start any significant project. And, as you should know (in my head I keep hearing Grant Fritchey angrily declaring it), there is no backup strategy if there is no restore strategy in the plan.

The permissions required by the Restore From URL operation in SQL Server (any SQL Server right now, starting with the page blob support in SQL Server 2012 and including the blob storage support introduced in SQL Server 2014 and carried through SQL Server 2019) are … [drumroll] … exclusive WRITE permissions on the underlying file(s).

Niko explains some of the pain around that requirement, as well as a few other bees in your bonnet.
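To make that concrete, here is a minimal, hedged sketch of a restore from URL driven from Python with pyodbc; the storage account, container, SAS token, and database name are placeholders, and the SAS token must carry sufficient rights on the container (including, per Niko, write).

```python
import pyodbc

# Connect to the target instance; RESTORE cannot run inside a user transaction,
# hence autocommit=True.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# Credential named after the container URL, backed by a SAS token
# (specified without the leading question mark).
cursor.execute("""
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';
""")

# Restore the database directly from the blob.
cursor.execute("""
RESTORE DATABASE [MyDatabase]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH REPLACE;
""")
```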


Connecting to Cosmos DB via Linked Server

Frank Solomon takes us through communicating with Cosmos DB from SQL Server:

Every source table column becomes an expression in the SELECT clause. If needed, JSONLint, for example, can validate the output JSON format. In this query, the FOR XML PATH clause places each row into a formatted JSON row, with key/value pairs that match the column/value pairs of the original rows. To get the data ready, the empty ('') value in the FOR XML PATH() clause at line 10 separates each XML row with a default comma. At line 11, the STUFF function arguments format the result set as a string and remove the leading “.” in the original data. Save the finished XML-format result set as a JSON file. This file will become the data we’ll import.

A Cosmos DB database has zero or more collections, which correspond to SQL Server tables. A collection has zero or more documents, which correspond to SQL Server table rows.

With SQL Server 2019, PolyBase also allows connections to Cosmos DB if (and only if) you are using the MongoDB API for Cosmos. But if that’s how your collection is set up, querying it becomes pretty easy.
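If hand-building JSON with FOR XML PATH and STUFF feels fiddly, a small Python script can do the same row-to-document conversion for the import file; this is only a sketch of the idea, not Frank's method, and the connection string, table, and output path are placeholders.

```python
import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=SourceDb;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM dbo.SourceTable;")  # hypothetical source table

# Turn each row into a dictionary keyed by column name.
columns = [col[0] for col in cursor.description]
documents = [dict(zip(columns, row)) for row in cursor.fetchall()]

# One JSON document per source row, ready to import into a Cosmos DB collection.
with open("source_table.json", "w", encoding="utf-8") as f:
    json.dump(documents, f, default=str, indent=2)
```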


Retrieving Secrets from Azure DevOps Pipelines

Gavin Campbell shows how you can pull secrets out of an Azure DevOps Pipeline:

For secrets created in the Azure DevOps UI, whether pipeline-scoped or in a variable group, it is not so simple to retrieve the variables after creation. This might be required for a number of reasons, most often troubleshooting. The need to do this is often an indicator that the project should have been using an Azure Key Vault in the first place.

Previously it was necessary to jump through some hoops to access secret variables, but it turns out this is no longer required. It also appears the recommended approach of mapping secrets to environment variables is currently not working for secret variables from variable groups.

I second the notion of using Key Vault for secrets management.
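If you do go the Key Vault route, reading a secret at runtime is only a few lines with the Azure SDK for Python; the vault URL and secret name below are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate with whatever identity is available (service principal,
# managed identity, developer login, etc.).
client = SecretClient(
    vault_url="https://my-key-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

secret = client.get_secret("my-pipeline-secret")
print(secret.name)   # the name is safe to log
# secret.value holds the actual secret -- never echo it in pipeline output
```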


Postgres Change Data Capture into Kafka

Abhishek Gupta walks us through an example of change data capture to track events:

Change Data Capture (CDC) is a technique used to track row-level changes in database tables in response to create, update, and delete operations. Different databases use different techniques to expose these change data events – for example, logical decoding in PostgreSQL, the binary log (binlog) in MySQL, etc. This is a powerful capability, but useful only if there is a way to tap into these event logs and make them available to other services which depend on that information.

Debezium does just that! It is a distributed platform that builds on top of the Change Data Capture features available in different databases. It provides a set of Kafka Connect connectors which tap into row-level changes (using CDC) in database table(s) and convert them into event streams. These event streams are sent to Apache Kafka, which is a scalable event streaming platform – a perfect fit! Once the change log events are in Kafka, they will be available to all the downstream applications.

Click through for the demo, using Azure components.
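As a rough sketch of the downstream side, here is a minimal Python consumer for the change events, assuming a Debezium Postgres connector is already publishing to Kafka with the default JSON envelope (no unwrap transform); the topic name and broker address are placeholders.

```python
import json
from kafka import KafkaConsumer

# Debezium topic names follow <server>.<schema>.<table>; these values are placeholders.
consumer = KafkaConsumer(
    "dbserver1.public.orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for message in consumer:
    if message.value is None:          # tombstone record emitted after a delete
        continue
    payload = message.value.get("payload", {})
    # op is 'c' (create), 'u' (update), 'd' (delete), or 'r' (snapshot read)
    print(payload.get("op"), payload.get("before"), payload.get("after"))
```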


Storing SQL Server Database Files in Blob Storage

Tomaz Kastrun has a wacky idea:

Storing SQL Server database files in Azure Blob Storage is a great solution for databases that are often migrated between instances, servers, or virtual machines, or that are divided between instances. This scenario also has a positive aspect to it, since the ability to create snapshot backups to Azure is seamless.

Following the steps, we will create an Azure Blob Storage account where the SQL Server database files will reside, with SQL Server running on-premises. Assuming that you already have an Azure account (if not, you can get a free Azure account), let’s proceed by opening the Windows Terminal in PowerShell mode.

I’m impressed that it worked and could see it being an option for small demo databases, but I can’t imagine performance would be good enough for a production scenario.
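For reference, the core of the setup is a SAS-backed credential plus a CREATE DATABASE whose file paths are blob URLs; the following is a hedged Python/pyodbc sketch with placeholder storage account, container, and database names.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# Credential named after the container URL, backed by a SAS token (placeholder).
cursor.execute("""
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';
""")

# The data and log files live in blob storage instead of on local disk.
cursor.execute("""
CREATE DATABASE BlobBackedDb
ON (NAME = BlobBackedDb_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/BlobBackedDb_data.mdf')
LOG ON (NAME = BlobBackedDb_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/BlobBackedDb_log.ldf');
""")
```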


Loading Data from S3 into Power BI

Gilbert Quevauvilliers loves a challenge:

I really enjoy a good challenge, and my customer has all of their data stored in AWS S3. Whilst there is no native connector, I thought there must be a way for me to get the data from AWS S3 into Power BI.

I did a bit of Googling and could not find any suitable solution. I also found and learnt that I could use AWS Athena to query the data living in S3. (I am definitely not an expert, nor do I have a lot of knowledge in the AWS space. I am fortunate that I have other people who know AWS and were able to set up, configure, and give me the details to connect to S3 via AWS Athena.)

Below are the steps on how I got this working.

Why they don’t have a proper connector is a bit of a head-scratcher to me given the sheer amount of data stored in S3 and the sheer number of connectors in Power BI.
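Gilbert connects Power BI to Athena directly; purely as an illustration of the Athena-over-S3 query path, here is a rough Python sketch using boto3, where the region, database, query, and results bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is a placeholder

# Kick off a query against a hypothetical Athena database built over the S3 data.
execution = athena.start_query_execution(
    QueryString="SELECT * FROM sales LIMIT 10",
    QueryExecutionContext={"Database": "my_athena_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Athena queries run asynchronously, so poll until a terminal state is reached.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```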


Transforming JSON to CSV: ADF vs Databricks

Rayis Imayev compares two methods of transforming a JSON-structured data set into a CSV:

There is a well-known and broadly advertised message from Microsoft that Azure Data Factory (ADF) is a code-free environment to help you create your data integration solutions – https://azure.microsoft.com/en-us/resources/videos/microsoft-azure-data-factory-code-free-cloud-data-integration-at-scale/. I agree with and support this approach of using a drag-and-drop visual UI to build and automate data pipelines without writing code. However, I’m also interested in seeing whether I can recreate certain ADF operations by writing code, just out of curiosity.

Rayis includes a link to the Azure Data Factory step-by-step demonstration and then kicks it up a notch with Databricks. Read on to see how the two compare.
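The Databricks half of the comparison boils down to a few lines of PySpark; the sketch below is a generic JSON-to-CSV flattening with placeholder paths and field names, not Rayis's exact pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.getOrCreate()

# Paths and field names are placeholders; adjust them to the actual JSON layout.
raw = spark.read.option("multiLine", "true").json("/mnt/input/source.json")

# Flatten one level of nesting: explode an array column and pull nested fields up.
flat = (
    raw.withColumn("item", explode(col("items")))
       .select(
           col("id"),
           col("item.name").alias("item_name"),
           col("item.value").alias("item_value"),
       )
)

flat.write.mode("overwrite").option("header", "true").csv("/mnt/output/flattened_csv")
```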
