Press "Enter" to skip to content


Availability Groups and the Shakes

Niko Neugebauer coins a term:

Disclaimer: I am using the word shake on my own initiative; no Microsoft documentation, to my knowledge, has ever mentioned this situation. Those shakes are represented most of the time as health events to the cluster, such as a Lease Timeout resulting in a sudden failover attempt.
Why did I choose that word? I don’t know. Honestly. 🙂

Read on to see it in context around hosts, CPU, and especially I/O.
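The lease timeout events Niko describes are typically captured by the built-in AlwaysOn_health Extended Events session, so you can look for them after the fact. A minimal sketch, assuming the default session file name pattern (the LIKE filter is just one way to surface lease-related events such as availability_group_lease_expired):

    -- Sketch: scan the default AlwaysOn_health XE session files for lease-related events.
    -- Assumes the default file name pattern; adjust the path for your instance.
    WITH xe AS
    (
        SELECT object_name, CAST(event_data AS xml) AS event_data
        FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL)
    )
    SELECT
        event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
        object_name AS event_name
    FROM xe
    WHERE object_name LIKE '%lease%'  -- e.g. availability_group_lease_expired
    ORDER BY event_time DESC;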


Azure SQL Database Startup Time

John McCormack has a tip for us:

The traditional methods used to find the startup time for SQL Server don’t work in Azure SQL DB.

I searched high and low to find this and thought I’ve got to share, and hopefully make it search-engine friendly. A traditional Google or Bing search wasn’t bringing up the best way to find this out. I saw a lot of complicated queries to pull data, convert it, and estimate startup time using functions and all kinds of magic.

Click through for the one-liner script.
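The post has the script itself; for reference, one commonly cited approach (not necessarily the exact one-liner John lands on) is to read the instance start time from sys.dm_os_sys_info, which is exposed in Azure SQL Database:

    -- A common approach (may differ from the post's script):
    SELECT sqlserver_start_time
    FROM sys.dm_os_sys_info;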


Hyperscale and RBIO_RG_STORAGE

Reitse Eskens runs into a strange bug in Azure SQL Database Hyperscale:

This single wait made sure our complete environment went dead in the water. Everything halted. To get some context, Microsoft has some documentation on this wait:

Occurs when a Hyperscale database primary compute node log generation rate is being throttled due to delayed log consumption at the page server(s).

Well, that’s not really helping, because that’s about everything they tell you about it.

Click through for Reitse’s findings and Microsoft’s advice.
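If you want to check whether a Hyperscale database is accumulating this wait yourself, here is a minimal sketch against the database-scoped wait statistics DMV:

    -- Sketch: look for Hyperscale log-rate governance waits.
    -- sys.dm_db_wait_stats is the Azure SQL Database equivalent of sys.dm_os_wait_stats.
    SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_db_wait_stats
    WHERE wait_type LIKE 'RBIO%'  -- e.g. RBIO_RG_STORAGE
    ORDER BY wait_time_ms DESC;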


Secure Cluster Connectivity in Azure Databricks

Abhinav Garg and Premal Shah have an announcement:

We’re excited to announce the general availability of Secure Cluster Connectivity (also commonly known as No Public IP) on Azure Databricks. This release applies to Microsoft Azure Public Cloud and Azure Government regions, in both Standard and Premium pricing tiers. Hundreds of our global customers including large financial services, healthcare and retail organizations have already adopted the capability to enable secure and reliable deployments of the Azure Databricks unified data platform. It allows them to securely process company and customer data in private Azure Virtual Networks, thus satisfying a major requirement of their enterprise governance policies.

Read on for more detail about how this works.


Connecting Confluent and Databricks on Azure

Angela Chu, et al, take us through a streaming data ingestion process:

How do you process IoT data, change data capture (CDC) data, or streaming data from sensors, applications, and sources in real time? Apache Kafka® and Azure Databricks are widely adopted technologies in the industry, but they require specific skills and expertise to run. Leveraging Confluent Cloud and Azure Databricks as fully managed services in Microsoft Azure, you can implement new real-time data pipelines with less effort and without the need to upgrade your datacenter (or set up a new one).

This blog post demonstrates how to configure Azure Databricks to interact with Confluent Cloud so that you can ingest, process, store, make real-time predictions and gain business insights from your data.

Click through for a detailed demonstration.


Long-Term Backups on Azure SQL Database

Arun Sirpal takes us through a fairly new feature in Azure SQL Database:

There is a new(ish) interface for viewing and configuring backups for your Azure SQL Database. This can be found within the settings section of the SQL Server.

As you can see, by default we have 7 days of retention to allow for PITR (Point-In-Time Recovery); for anything longer, you will need to set up long-term retention.

Click through to see how to set this up.


Optimizing a SQL Server 2019 Project for a Dedicated SQL Pool

Kevin Chant shows us how we can modify a database schema intended for SQL Server 2019 to work best with an Azure Synapse Analytics dedicated SQL pool:

In this post I want to cover how you can transform your SQL Server database schema for a dedicated SQL pool if you are using Azure DevOps. I covered it at Data Toboggan over the weekend, and it can be very useful.

By the end of this post, you will know one way to transform the schema of a database project for SQL Server 2019 if you are using Azure DevOps, so that you can make it optimal for dedicated SQL pools.

Click through for the process and an example. Note that this isn’t a quick “check this box and you’re done” type of solution, but if you already have a proper star schema, this will help you think through some of the things you’ll need to do.
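To give a flavor of the kind of change involved: dedicated SQL pool tables declare a distribution strategy (and usually a clustered columnstore index) that a SQL Server 2019 project has no notion of. A hypothetical fact table might end up looking like this:

    -- Hypothetical example: a fact table rewritten for a dedicated SQL pool.
    -- HASH distribution co-locates rows by key; ROUND_ROBIN and REPLICATE are alternatives.
    CREATE TABLE dbo.FactSales
    (
        SaleKey     bigint        NOT NULL,
        CustomerKey int           NOT NULL,
        SaleDate    date          NOT NULL,
        Amount      decimal(18,2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(CustomerKey),
        CLUSTERED COLUMNSTORE INDEX
    );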


A Review of AWS Athena

John McCormack updates an older review:

AWS’s own documentation is the best place for full details on the Athena offering; this post hopes to serve as further explanation and also act as an anchor to some more detailed information. As it is a managed service, Athena requires no administration, maintenance, or patching. It’s not designed for regular querying of tables in the way you would with an RDBMS. Performance is geared around querying large data sets, which may include structured or semi-structured data. There are no licensing costs like you may have with some relational database management systems (RDBMS) such as SQL Server, and costs are kept low, as you only pay when you run queries in AWS Athena.

Click through for an overview of product benefits.
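Because Athena bills by data scanned rather than uptime, the usual pattern is to define an external table over files already sitting in S3 and query it ad hoc. A hypothetical example (the table, columns, and bucket are made up):

    -- Hypothetical Athena example: an external table over CSV files in S3.
    CREATE EXTERNAL TABLE sales_raw
    (
        sale_id   string,
        sale_date string,
        amount    double
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3://my-example-bucket/sales/';

    -- You pay only for the data this query scans.
    SELECT sale_date, SUM(amount) AS total_sales
    FROM sales_raw
    GROUP BY sale_date;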


Using the Cloudshell Code Editor

Hamish Watson shows off the cloudshell code editor in Azure:

This post is about code, which is an online editor you can use with cloudshell.

I live in the Azure platform all day (almost) every day. What this means is I need to be able to run scripts from just about any device – there are times when I will be scaling a Hyperscale database at 10:30pm from my phone…

So I use cloudshell a LOT in Azure.

Click through for more details.


AzureCosmosR

Hong Ooi takes us through an R library for working with Cosmos DB:

Among other features, Azure Cosmos DB is notable in that it supports multiple data models and APIs. When you create a new Cosmos DB account, you specify which API you want to use: SQL/core API, which lets you use a dialect of T-SQL to query and manage tables and documents; MongoDB; Azure table storage; Cassandra; or Gremlin (graph). AzureCosmosR provides a comprehensive interface to the SQL API, as well as bridges to the MongoDB and table storage APIs. On the Resource Manager side, AzureCosmosR extends the AzureRMR class framework to allow creating and managing Cosmos DB accounts.

AzureCosmosR is now available on CRAN. You can also install the development version from GitHub, with devtools::install_github("Azure/AzureCosmosR").

Hong provides examples for us using three of the Cosmos DB APIs, so check it out.
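For a sense of the query dialect the SQL/core API speaks (and which AzureCosmosR wraps), here is a hypothetical document query; c is the conventional alias for the container, and the property names are invented:

    -- Hypothetical Cosmos DB SQL API query over documents in a container.
    SELECT c.id, c.customer.name, c.total
    FROM c
    WHERE c.total > 100
    ORDER BY c.total DESC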
