Press "Enter" to skip to content

Category: Cloud

SQL Assessment for SQL Server on VMs

Ebru Ersan announces a new preview:

Wouldn’t it be great if there was a way to learn if your SQL Server on Azure Virtual Machines was configured optimally? Do you have the right options set? Do you have your tempdb on the right disk? Can your queries perform better? All these and more can be answered using the new Azure portal experience on the SQL virtual machine resource page. The SQL Assessment feature, once enabled, will evaluate your SQL Server on Azure VM against configuration best practices to determine if your system is healthy and set up for success. This feature is currently in preview. We would love to hear your feedback.

Click through to see it in action.


Automating Single Table Refresh with Azure Data Factory and Azure Automation

Marc Lelijveld wants to refresh a single table:

Back in February, I wrote a blog on how you can trigger a single table to refresh in your Power BI data model. This blog described how you can achieve this goal using a PowerShell script and the ASCmd cmdlets for Analysis Services, which also works for Power BI Premium. In the wrap-up of that blog, I promised to follow-up with a blog on how to achieve the same goal with Azure Data Factory. It took a little bit longer than expected to finalize this post, but here it is!

In this blog, co-authored by my colleague Paulien van Eijk, we will describe how you can automate your single table refresh in the Power BI Service, including all dependencies with downstream dataflows using Azure Data Factory and Azure Automation. All this is based on real-life scenarios and a solution built in collaboration between Dave Ruijter, Paulien, and me.

Read on for Marc and Paulien’s solution.


From Kafka to Azure Data Explorer

Niels Berglund uses Kafka Connect to link an Apache Kafka topic to Azure Data Explorer:

If you follow my blog, you probably know that I am a huge fan of Apache Kafka and event streaming/stream processing. Recently Azure Data Explorer (ADX) has caught my eye. In fact, in the last few weeks, I did two conference sessions about ADX. A month ago, I published a blog post related to Kafka and ADX: Run Self-Managed Kusto Kafka Connector Serverless in Azure Container Instances.

As the title of that post implies, it looked at the ADX Kafka sink connector and how to run it in Azure. What the post did not look at was how to configure the connector and connect it to ADX. That is what we will do in this post (and maybe in a couple more posts).

This post serves as a complete tutorial, though Niels does promise future posts on other ingestion methods, so stay tuned.


System-Versioned Ledger Tables

Randolph West has a series on ledger tables in SQL Server. First up is a primer on the topic:

System-versioned ledger tables leverage the same technology: there is a table with current data in it, and an underlying history table which keeps track of changes. However, it uses a cryptographic chain that provides digital forensic evidence of tampering. Yes, if you’ll pardon the use of this phrase, I’m talking about a blockchain.

This is not a cryptocurrency. No one is using expensive graphics cards to produce a fiat currency in someone’s basement. Instead, each transaction affecting the database in question is cryptographically hashed using a SHA-256 algorithm and then stored somewhere off-site.
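
If you want to see what that hash chain looks like from the T-SQL side, a pair of system stored procedures expose it. A minimal sketch against a hypothetical LedgerDemo database; the digest JSON below is a placeholder, not a real digest:

```sql
-- Generate the current database digest: JSON containing the latest block
-- ID and hash, suitable for storing off-site (immutable blob storage, say).
EXEC sys.sp_generate_database_ledger_digest;

-- Later, hand a previously saved digest back to the engine to confirm the
-- chain still matches. The JSON here is a placeholder; pass in a digest
-- you actually generated and saved.
EXEC sys.sp_verify_database_ledger
    N'{"database_name":"LedgerDemo","block_id":0,"hash":"0x...","last_transaction_commit_time":"2021-06-01T00:00:00.0000000","digest_time":"2021-06-01T00:00:30.0000000"}';
```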

Part two separates out the two types of ledger table:

This week we will look at the different types of ledger table: append-only and updatable.

Unlike temporal tables, a ledger table can be append-only, which makes it immutable. You can only insert data, and therefore it does not need a history table. In fact, you may be using append-only tables in your data warehouse already. While this is secure, it may not be practical.
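
In T-SQL terms, the difference between the two types is just a table option. A minimal sketch, with table and column names of my own invention:

```sql
-- Append-only ledger table: INSERT only, so no history table is needed.
CREATE TABLE dbo.AuditEvents
(
    EventId   int           NOT NULL,
    EventTime datetime2     NOT NULL,
    EventData nvarchar(200) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));

-- Updatable ledger table: UPDATE and DELETE are allowed, so it pairs with
-- a history table, much like a temporal table.
CREATE TABLE dbo.AccountBalance
(
    AccountId int            NOT NULL,
    Balance   decimal(12, 2) NOT NULL
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountBalanceHistory),
    LEDGER = ON
);
```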

Part three covers limitations:

Every choice we make is a trade-off. New features have limitations, and ledger tables are no exception.

Some of these limitations are perfectly sensible. For example, the whole point of ledger tables is to ensure that we can provide tamper evidence. This necessarily means you can’t turn it off once it’s enabled, unless you drop the database entirely — this is just one scenario where a full defence-in-depth strategy is required.

Part four is the one I’ve been waiting for—an explanation of why you probably don’t need this:

After writing several posts about a neat feature in Azure SQL called system-versioned ledger tables, it reminded me about something I’ve wanted to say for a number of years now, outside of snarky tweets.

Here goes:

You don’t need a blockchain.

In the vast majority of use cases, you need a properly audited relational database system with ACID compliance and a good recovery strategy.

There are very specific use cases in which data hashes and ledger tables make sense.
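
For what it’s worth, “properly audited” doesn’t require anything exotic. On a full SQL Server instance, native SQL Server Audit gets you a long way; here’s a minimal sketch with illustrative audit, database, table, and path names (Azure SQL Database has its own auditing feature configured at the server level instead):

```sql
-- Server audit objects live in master; the target could also be a Windows
-- file path or the security or application event log.
USE master;
CREATE SERVER AUDIT TamperEvidence_Audit
    TO FILE (FILEPATH = '/var/opt/mssql/audit/');
ALTER SERVER AUDIT TamperEvidence_Audit WITH (STATE = ON);

-- Capture every INSERT, UPDATE, and DELETE against a sensitive table.
USE SalesDB;
CREATE DATABASE AUDIT SPECIFICATION DML_On_AccountBalance
    FOR SERVER AUDIT TamperEvidence_Audit
    ADD (INSERT, UPDATE, DELETE ON dbo.AccountBalance BY public)
    WITH (STATE = ON);
```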


Serverless SQL Pool CI/CD

Kevin Chant doesn’t have time for manual deployments:

I want to cover one way you can do CI/CD for Azure Synapse Analytics serverless SQL pools using Azure DevOps in this post. Because I know it is a popular topic.

It’s related to my post about how you can create a dacpac for an Azure Synapse Analytics dedicated SQL pool using Azure DevOps, since they are both based in the same service.

Plus, a while ago I wrote about the increase in demand for Data Platform automation. So, I really wanted to do a post about how you can do CI/CD for Azure Synapse Analytics serverless SQL pools.

Read on to learn how.
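
Whatever tooling the pipeline uses, what it ultimately applies to a serverless SQL pool is mostly views, external tables, and other scriptable objects. Here is a sketch of the kind of re-runnable script a release stage might execute; the storage account, container, and view names are made up, and this is not necessarily Kevin’s exact approach:

```sql
-- CREATE OR ALTER makes the script idempotent, so the pipeline can apply
-- it on every release without failing on re-runs.
CREATE OR ALTER VIEW dbo.SalesCurated
AS
SELECT result.*
FROM OPENROWSET(
         BULK 'https://mydatalake.dfs.core.windows.net/curated/sales/*.parquet',
         FORMAT = 'PARQUET'
     ) AS result;
```

A pipeline task can then run a script like this against the serverless endpoint, for example via sqlcmd with Azure AD authentication (the -G switch).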


Storage Testing for Azure SQL Managed Instances

Joe Obbish busts out the slide rule:

Lately I’ve been doing some exploratory performance testing on Azure SQL Managed Instances in preparation for a migration to that platform. This blog post documents some storage testing results and may even have practical advice near the end. All testing was done on a gen5 general purpose instance with 8 vCores.

Read on for Joe’s findings. Spoiler alert: there is practical advice at the end.
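
If you want to sanity-check your own instance’s storage while you read, the virtual file stats DMV is the usual quick starting point. This is a generic sketch, not Joe’s methodology (his testing is considerably more rigorous):

```sql
-- Rough average I/O latency per database file. The counters are cumulative
-- since instance startup, so for a controlled test, snapshot before and
-- after a workload and diff the two samples.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.type_desc             AS file_type,
        vfs.num_of_reads,
        vfs.num_of_writes,
        CASE WHEN vfs.num_of_reads = 0 THEN 0
             ELSE vfs.io_stall_read_ms / vfs.num_of_reads
        END AS avg_read_latency_ms,
        CASE WHEN vfs.num_of_writes = 0 THEN 0
             ELSE vfs.io_stall_write_ms / vfs.num_of_writes
        END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id;
```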


Real-Time Change Detection via Cumulative Sums

Nithin Sankar tracks deviations with cumulative sums:

With the advent of the Internet of Things (IoT) and the proliferation of connected devices comes the challenge of monitoring parts for maintenance before they break down. A common approach revolves around getting data from connected devices and performing a statistical test to determine the likelihood of the device failing. While this approach is robust, it typically involves a significant time investment in exploratory data analysis, feature engineering, training, and testing to build a predictive model. It therefore often lacks the agility required to keep up with the monitoring demands of increasingly time-sensitive initiatives.

In this context, the question becomes: how can we ensure a similar degree of rigor, but also improve the timeliness and responsiveness of being able to perform predictive maintenance? 

Click through for the process, as well as an example using Azure Stream Analytics and Power BI.
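
To make the statistic itself concrete: a one-sided CUSUM accumulates deviations above the expected mean, resets at zero, and raises an alarm once the running sum crosses a decision threshold. The real solution runs in Azure Stream Analytics, so treat this T-SQL over a hypothetical readings table as a batch illustration of the math, not their implementation:

```sql
-- One-sided upper CUSUM: S(i) = MAX(0, S(i-1) + x(i) - (target + slack)).
-- An alarm is raised when S(i) exceeds the decision threshold @h.
DECLARE @target decimal(10, 4) = 70.0;  -- expected process mean
DECLARE @slack  decimal(10, 4) = 0.5;   -- allowance: ignores small noise
DECLARE @h      decimal(10, 4) = 5.0;   -- decision threshold

WITH Ordered AS
(
    SELECT ReadingId, Temperature,
           ROW_NUMBER() OVER (ORDER BY ReadingId) AS rn
    FROM dbo.SensorReadings  -- hypothetical table
),
Cusum AS
(
    SELECT rn, ReadingId, Temperature,
           CAST(CASE WHEN Temperature - (@target + @slack) > 0
                     THEN Temperature - (@target + @slack)
                     ELSE 0 END AS decimal(10, 4)) AS S
    FROM Ordered
    WHERE rn = 1

    UNION ALL

    SELECT o.rn, o.ReadingId, o.Temperature,
           CAST(CASE WHEN c.S + o.Temperature - (@target + @slack) > 0
                     THEN c.S + o.Temperature - (@target + @slack)
                     ELSE 0 END AS decimal(10, 4))
    FROM Cusum AS c
    JOIN Ordered AS o
        ON o.rn = c.rn + 1
)
SELECT ReadingId, Temperature, S,
       CASE WHEN S > @h THEN 1 ELSE 0 END AS AlarmRaised
FROM Cusum
OPTION (MAXRECURSION 0);
```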


Automatic Backups on a Data Lake or Lakehouse

Dave Ruijter backs that thing up:

Out of the box, Azure Data Lake Storage Gen2 provides redundant storage. Therefore, the data in your Data Lake(house) is resilient to transient hardware failures within a datacenter through automated replicas. This ensures durability and high availability. In this blog post, I provide a backup strategy to further protect your data from accidental deletions, data corruption, or any other data failures. This strategy works for Data Lake as well as Data Lakehouse implementations. It uses native Azure services; no additional tools, software, or licenses are required.

Read on for a detailed strategy.


Optimizing Blob Storage Query Performance

Dennes Torres compares several strategies for querying data stored in Azure Blob Storage:

In the third part of the series Querying Blob Storage with SQL, I will focus on the performance behaviour of queries: what makes them faster or slower, plus some syntax beyond the basics.

The performance tests in this article are repeated, and the best time for each query is recorded. This doesn’t mean you will always achieve the same timing. Many architectural details will affect the timing, such as caching, first execution, and so on. The timing shown for each query is only a reference, pointing to the differences between the query methods that can affect execution time and that usually result in better or worse performance.

Click through to see which patterns perform well and which don’t.
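
One pattern worth knowing in this space, whether or not it features in Dennes’ tests: when files are organized into folders, filtering on filepath() lets the serverless engine eliminate whole folders before reading any data, rather than scanning every blob. A quick sketch with made-up account and container names:

```sql
-- filepath(n) returns the fragment matched by the n-th wildcard in the
-- BULK path, so these predicates prune year and month folders up front.
SELECT result.*
FROM OPENROWSET(
         BULK 'https://myaccount.blob.core.windows.net/sales/year=*/month=*/*.parquet',
         FORMAT = 'PARQUET'
     ) AS result
WHERE result.filepath(1) = '2021'
  AND result.filepath(2) = '06';
```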
