
Category: Cloud

Power BI Writeback via Fabric SQL Database

Jon Voge gives us a use case for Fabric SQL Databases:

Until recently, Fabric has allowed us to choose between Lakehouses and Warehouses as a backend. For write-back use cases, neither is ideal.

  • The SQL Endpoint of Lakehouses is Read-Only, making writes from Power Apps impossible.
  • While the SQL Endpoint of Warehouses is write-enabled, it does not support enforced Primary Keys, which are a hard requirement for Power Apps to write directly to a data source.

Jon briefly describes two mechanisms people used and then how you can do this more effectively with a Fabric SQL Database. Based on the article, it seems that you could probably still do the same with an Azure SQL Database, though I suppose handling the managed identity could be an issue.
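The enforced primary key matters because write-back is ultimately a keyed UPDATE against a single row. As a rough illustration of the pattern (using sqlite3 purely as a stand-in for a Fabric SQL Database, so the DDL dialect here is not Fabric's):

```python
import sqlite3

# Stand-in for a Fabric SQL Database table; sqlite3 is used here only to
# illustrate the pattern, and the table/column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE SalesTarget (
        TargetID   INTEGER PRIMARY KEY,  -- enforced key: one row per ID
        Region     TEXT NOT NULL,
        TargetAmt  REAL NOT NULL
    )
""")
conn.execute("INSERT INTO SalesTarget VALUES (1, 'EMEA', 100000.0)")

# A write-back from a client like Power Apps is just a keyed UPDATE
# against the enforced primary key.
conn.execute("UPDATE SalesTarget SET TargetAmt = ? WHERE TargetID = ?",
             (125000.0, 1))
amt = conn.execute(
    "SELECT TargetAmt FROM SalesTarget WHERE TargetID = 1").fetchone()[0]
print(amt)  # 125000.0
```

Without an enforced key, the client has no reliable way to address the one row it means to change, which is why the Warehouse endpoint falls short.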

Comments closed

AWS DMS and a LOB Bug

Richard O’Riordan fixes an issue:

The table over in our Postgres cluster is similar, except for the data type “text” being used instead of “varchar”. All kind of boring so far, but we noticed that on some very rare occasions the “largevalue” column was empty in the PostgreSQL database even though it was populated in SQL Server for that row.

This seemed odd to me: since it is all done within a transaction, you would expect any error inserting the row on the PostgreSQL side to make it either all succeed or all fail. How could the row be partially inserted, i.e. missing this text value?

Read on for the story and several tests of how things work.
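Richard's post has the actual diagnosis, but this class of problem suggests a generic post-migration spot check: compare the length of the LOB column per key on both sides and flag rows where the target came back empty or short. A sketch, where the two dicts stand in for query results ({primary_key: length_of_largevalue}) pulled from SQL Server and PostgreSQL (the queries themselves are left out):

```python
# Compare per-key LOB lengths between source and target and report rows
# where the target value is missing or a different length.
def find_lob_mismatches(source_lengths, target_lengths):
    mismatches = []
    for key, src_len in source_lengths.items():
        tgt_len = target_lengths.get(key)
        if tgt_len != src_len:
            mismatches.append((key, src_len, tgt_len))
    return mismatches

source = {1: 42, 2: 15000, 3: 7}   # lengths measured on SQL Server
target = {1: 42, 2: 0, 3: 7}       # lengths on PostgreSQL; row 2 lost its LOB
print(find_lob_mismatches(source, target))  # [(2, 15000, 0)]
```

Large LOB values are a common place for replication tools to cut corners, so checking lengths rather than full values keeps the comparison cheap.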


Debugging in Databricks

Chen Hirsh enables a debugger:

Do you know that feeling, when you write beautiful code and everything just works perfectly on the first try?

I don’t.

Every time I write code, it doesn’t work at first, and I have to debug it, make changes, test it…

Databricks introduced a debugger you can use on a code cell, and I’ve wanted to try it for quite some time now. Well, I guess the time is now.

I’m having trouble finding the utility of a debugger here. Notebooks are already set up for debugging: you can easily add or remove cells, and the underlying session maintains state between cells.
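For what it’s worth, Python code has long been debuggable without any notebook-specific tooling via the standard library’s pdb. A minimal sketch, scripted non-interactively here so it runs without a terminal (in a real session you’d just call breakpoint() and type the commands yourself):

```python
import io
import pdb

def mean(nums):
    total = 0
    for n in nums:
        total += n
    return total / len(nums)   # blows up on an empty list

# Feed pdb its commands from a string: print the argument, then continue.
commands = io.StringIO("p nums\nc\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=commands, stdout=output)
try:
    debugger.runcall(mean, [])       # stops at the first line of mean()
except ZeroDivisionError:
    pass                             # 'c' ran on into the actual bug
print(output.getvalue())             # transcript includes the printed []
```

The Databricks cell debugger layers a UI over this same idea of inspecting live frames, which is why the session state argument above cuts both ways.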


Execute a Collection of Child Pipelines from Metadata in Data Factory

Andy Leonard continues a series on design patterns:

In this post, I clone and modify the dynamic parent pipeline from the previous post to retrieve metadata from an Azure SQL database table for several child pipelines, and then call each child pipeline from a parent pipeline.

When we’re done, this pipeline will:

  1. Read pipeline metadata from a table in an Azure SQL database
  2. Store some of the metadata (a collection of pipelineID values) in the (existing) pipelineIdArray variable
  3. Iterate the pipelineIdArray variable’s collection of pipelineID values
  4. Execute each child pipeline represented by each pipelineID value stored in the pipelineIdArray variable

Read on to learn how.
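The four steps above can be sketched in plain Python. Everything here is hypothetical stand-in code, not Data Factory’s API: fetch_pipeline_metadata plays the role of the Lookup activity against the Azure SQL metadata table, and run_pipeline plays the role of the Execute Pipeline activity inside a ForEach:

```python
# Hypothetical stand-ins for the ADF activities in Andy's pattern.
def fetch_pipeline_metadata():
    # In ADF this is a Lookup activity querying the metadata table.
    # The 'enabled' flag is an illustrative extra column, not from the post.
    return [
        {"pipelineID": "child-load-dim", "enabled": True},
        {"pipelineID": "child-load-fact", "enabled": True},
        {"pipelineID": "child-archive", "enabled": False},
    ]

def run_pipeline(pipeline_id):
    # In ADF this is an Execute Pipeline activity inside a ForEach.
    return f"executed {pipeline_id}"

metadata = fetch_pipeline_metadata()                        # step 1
pipeline_id_array = [row["pipelineID"] for row in metadata
                     if row["enabled"]]                     # step 2
results = [run_pipeline(pid) for pid in pipeline_id_array]  # steps 3-4
print(results)
```

The appeal of the metadata-driven version is that adding a child pipeline is an INSERT into the metadata table rather than an edit to the parent pipeline.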


Configuring Azure Database Watcher

Rod Edwards configures Azure Database Watcher to watch databases in Azure:

First off, at the time of writing, this is still in Preview, and is only for Azure SQL PaaS offerings, namely Azure SQL DB and SQL Managed Instance, so you’re out of luck if you’re using SQL Server on a VM. Expect this to be added at some point in the future; it’s number 2 on the published roadmap.

Preview or GA… the long and short of it is that it allows collection of performance AND config data into a central datastore for ALL of your SQL MI and Azure SQL DB estate. With all of the data in one place, dashboards can then connect to it for easier estate-wide visualisations.

Read on for a step-by-step guide on configuring it. But also pay attention to Rod’s note near the end that troubleshooting setup is a pain: there aren’t many useful logs that show exactly why it isn’t working.


Analyzing Azure Network Security Group Flow Logs

Reitse Eskens says the bits must flow:

I had an interesting question lately where I was requested to show all the network traffic within an Azure landing zone. Specifically source and target IP, protocol and port. From the aspect of Zero Trust, it’s important to show both successful and failed connections in your network. To be able to answer this question I had prepared myself by enabling the so-called flow logs on the Network Security Groups (NSGs). NSGs are used to control traffic on the IP and port level between resources. There’s no packet inspection, just a check if IP 1 is allowed to connect to IP 2 on port 3. In this specific case, it also had to do with a migration to Azure Firewall where all the NSG rules had to be validated.

But getting the data is one thing; finding out what is in it is something else. In this blog post I’ll take you through the steps I took to get the raw JSON data into a SQL table and analyse the data.

Read on for the process and quite a bit of T-SQL code.
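The fiddly part of NSG flow logs is that each record nests the actual connections inside flowTuples strings. A minimal Python sketch of flattening them into rows; the sample mimics the version 1 tuple layout (epoch, source IP, destination IP, source port, destination port, protocol T/U, direction I/O, decision A/D), and later log versions append more fields, so verify the field list against your own logs:

```python
import json

# Made-up but schema-shaped sample of an NSG flow log payload.
sample = json.loads("""
{"records": [{"time": "2024-01-01T00:00:00Z",
  "properties": {"flows": [{"rule": "DefaultRule_AllowInternetOutBound",
    "flows": [{"flowTuples": [
      "1704067200,10.0.0.4,52.239.152.10,44931,443,T,O,A",
      "1704067201,10.0.0.5,10.0.0.4,51820,1433,T,I,D"]}]}]}}]}
""")

rows = []
for record in sample["records"]:
    for flow_group in record["properties"]["flows"]:
        rule = flow_group["rule"]
        for flow in flow_group["flows"]:
            for tup in flow["flowTuples"]:
                # Take only the first eight fields so v2 logs parse too.
                ts, src, dst, sport, dport, proto, direction, decision = \
                    tup.split(",")[:8]
                rows.append((rule, src, dst, int(dport), proto, decision))

for row in rows:
    print(row)
```

Once flattened like this, each tuple becomes one row to bulk-insert into a SQL table, which is essentially the shape Reitse’s T-SQL works toward.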


Reasons to Migrate from Synapse to Fabric

James Serra has a list:

Many customers ask me about the advantages of moving from Azure Synapse Analytics to Microsoft Fabric. Here’s a breakdown of the standout features that make Fabric an appealing choice:

  • Unified Environment for All Users
    Fabric serves everyone, from report writers and citizen developers to IT engineers, unlike Synapse, which primarily targets IT professionals.
  • Hands-Free Optimization
    Fabric is auto-optimized and fully integrated, allowing most features to perform well without requiring technical adjustments.

I suppose that James is too politic to give what I’d consider the top reason: because there have actually been meaningful updates to Microsoft Fabric in the past year. I’m not sure you can really say the same thing about Azure Synapse Analytics.

The tricky part about this, however, is that, to my knowledge at least, there’s no clean way to migrate dedicated SQL pools.


Enabling System Tables on Databricks

Chen Hirsh wants to see system tables:

This post is about two things that depend on each other. First, it explains how to enable system tables (other than the ones enabled by default), and second, how to use these system tables to view data about workflow runs and costs.

Please note that Unity Catalog is required to use these features. And a premium workspace is required for using dashboards.

Click through to learn more about what system tables are and what you can get from them.
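As a taste of why these tables are useful, cost questions usually reduce to a rollup over billing usage records. The rows below are made up and the column names are simplified stand-ins; in a workspace you would query the real system.billing.usage table (e.g. via spark.sql) rather than a Python list:

```python
# Illustrative usage records; simplified stand-in columns, not the real schema.
usage_rows = [
    {"job_id": "101", "sku": "JOBS_COMPUTE", "dbus": 4.0},
    {"job_id": "101", "sku": "JOBS_COMPUTE", "dbus": 2.5},
    {"job_id": "202", "sku": "SQL_COMPUTE", "dbus": 7.0},
]

# Roll up consumed DBUs per job, the kind of aggregation behind a cost report.
dbus_per_job = {}
for row in usage_rows:
    dbus_per_job[row["job_id"]] = (
        dbus_per_job.get(row["job_id"], 0.0) + row["dbus"])

print(dbus_per_job)  # {'101': 6.5, '202': 7.0}
```

Multiply the DBU totals by your SKU rates and you have a per-job cost estimate, which is roughly what the dashboards in Chen’s post surface.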


Administrative Tasks in Azure Database for MySQL Flexible Server

Rajendra Gupta gives us a checklist:

The tip, Azure Database for MySQL, explored various deployment models for Azure MySQL and their features. Further, we deployed an Azure MySQL flexible server using the Azure portal. This tip will explore the tasks and operation items required for a MySQL flexible server. Let’s check it out.

Read on for notes regarding what Microsoft gives you up-front as well as what you, as an administrator, would still need to cover.


Cost Optimization in Azure

Albert McQuiston shares some advice:

Organizations using Azure Cloud services often overspend, eventually decreasing their operational efficiency. Leveraging cost-optimization techniques can help these businesses to focus on areas requiring more capital investment.

There are a few tips around specific actions you can take to understand why you’re spending so much and how to cut it down a bit. Albert also mentions but does not share a link to the Azure pricing calculator. This is a great tool if you already know what Azure resources you need and intend to price them out. It’s a real challenge getting the number close enough to right (especially for complex services with a lot of inputs, like Azure Synapse Analytics was), but can be useful in getting in the ballpark. But I also highly recommend going through a Well-Architected Review assessment, based on Azure’s Well-Architected Framework. This framework and its associated reviews cover cost-effectiveness as a key tenet.
