Press "Enter" to skip to content


A Review of the Azure AI Foundry

Tomaz Kastrun starts a new series:

Microsoft Azure offers multiple services that enable developers to build amazing AI-powered solutions. Azure AI Foundry brings these services together in a single unified experience for AI development on the Azure cloud platform.

Until now, developers needed to work with multiple tools and web portals in a single project. With Azure AI Foundry, these tasks are now simplified, and it offers the same environment for better collaboration.

Read on to see more about the Azure AI Foundry.


Table Cloning in Snowflake

Kevin Wilkie creates a clone:

In this coding scenario, I am copying everything from TableA and pushing it into a new table called TableB in the same database and schema.

If you check the size of the database before and after you clone a table, it will be the same size – no matter the size of TableA. This is because, at this point in time, TableB exists only as a “pointer” to the data that constitutes TableA. It is not until something changes in one of the tables – say, adding a row to TableA – that it stops being a mere “pointer” and the changed data is physically materialized.

Read on to learn more about how this works.
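For reference, here is what Kevin’s scenario looks like in Snowflake – a minimal sketch using the table names from his post:

```sql
-- Zero-copy clone: TableB initially just points at TableA's micro-partitions,
-- so the database's storage footprint doesn't grow at clone time.
CREATE TABLE TableB CLONE TableA;

-- Only when the data diverges (say, an insert into TableA) are the changed
-- micro-partitions written out separately, consuming additional storage.
```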


Comparing Azure Kubernetes Service and Container Apps

Gaurav Shukla makes a comparison:

Hello Readers!! Welcome to the new blog!! AKS vs ACA: which is best for cloud migration? When migrating an application to the cloud, choosing the right platform is crucial to ensure scalability, cost-effectiveness, and ease of management. Two of the prominent services offered by Azure for running containerized applications are Azure Kubernetes Service (AKS) and Azure Container Apps (ACA). Both are excellent choices, but their use cases, complexity, and operational overhead differ significantly. This blog will provide a detailed comparison of AKS and ACA, helping you decide which is the best approach for your cloud migration.

Read on for an overview of each service and a nice table outlining the differences.


Power BI Writeback via Fabric SQL Database

Jon Voge gives us a use case for Fabric SQL Databases:

Until recently, Fabric has allowed us to choose between Lakehouses and Warehouses as a backend. For write-back use cases, neither is ideal.

  • The SQL endpoint of Lakehouses is read-only, making writes from Power Apps impossible.
  • While the SQL endpoint of Warehouses is write-enabled, it does not support enforced primary keys, which are a hard requirement for Power Apps to be able to write directly to a data source.

Jon briefly describes two mechanisms people used and then how you can do this more effectively with a Fabric SQL Database. Based on the article, it seems that you could probably still do the same with an Azure SQL Database, though I suppose handling the managed identity could be an issue.
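To make that last point concrete, here is a minimal sketch of an enforced primary key in a Fabric SQL Database (the table itself is my own invention, not Jon’s), contrasted with the informational, not-enforced variety that a Fabric Warehouse supports:

```sql
-- Fabric SQL Database (same engine as Azure SQL DB): a real, enforced
-- primary key, which is what Power Apps needs for direct write-back.
CREATE TABLE dbo.SalesTarget
(
    TargetID     int           NOT NULL CONSTRAINT PK_SalesTarget PRIMARY KEY,
    Region       nvarchar(50)  NOT NULL,
    TargetAmount decimal(18,2) NOT NULL
);

-- A Fabric Warehouse, by contrast, only accepts a not-enforced constraint:
-- ALTER TABLE dbo.SalesTarget
--     ADD CONSTRAINT PK_SalesTarget
--     PRIMARY KEY NONCLUSTERED (TargetID) NOT ENFORCED;
```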


AWS DMS and a LOB Bug

Richard O’Riordan fixes an issue:

The table over in our Postgres cluster is similar, except for the data type “text” being used instead of “varchar”. All kind of boring so far, but what we noticed was that on some very rare occasions, the “largevalue” column was empty over in the PostgreSQL database even though for that row it was populated in SQL Server.

This seemed odd to me: you would expect that if there were some error inserting the row on the PostgreSQL side then, since it is all done within a transaction, it would either all succeed or all fail. How would the row be partially inserted, i.e., missing this text value?

Read on for the story and several tests of how things work.
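For context, here is a hypothetical pair of definitions along the lines Richard describes – his actual table names aren’t in the excerpt, and I’m assuming varchar(max) on the source, since that is what DMS treats as a LOB column:

```sql
-- SQL Server source (hypothetical table name; "largevalue" is from the post):
CREATE TABLE dbo.SourceTable
(
    id         int          NOT NULL PRIMARY KEY,
    largevalue varchar(max) NULL
);

-- PostgreSQL target, with text standing in for varchar:
CREATE TABLE public.sourcetable
(
    id         int  NOT NULL PRIMARY KEY,
    largevalue text NULL
);
```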


Debugging in Databricks

Chen Hirsh enables a debugger:

Do you know that feeling, when you write beautiful code and everything just works perfectly on the first try?

I don’t.

Every time I write code, it doesn’t work at first, and I have to debug it, make changes, test it…

Databricks introduced a debugger you can use on a code cell, and I’ve wanted to try it for quite some time now. Well, I guess the time is now.

I’m having trouble finding the utility of a debugger here. Notebooks are already set up for debugging: you can easily add or remove cells, and the underlying session maintains state between cells.


Execute a Collection of Child Pipelines from Metadata in Data Factory

Andy Leonard continues a series on design patterns:

In this post, I clone and modify the dynamic parent pipeline from the previous post to retrieve metadata from an Azure SQL database table for several child pipelines, and then call each child pipeline from a parent pipeline.

When we’re done, this pipeline will:

  1. Read pipeline metadata from a table in an Azure SQL database
  2. Store some of the metadata (a collection of pipelineID values) in the (existing) pipelineIdArray variable
  3. Iterate the pipelineIdArray variable’s collection of pipelineID values
  4. Execute each child pipeline represented by each pipelineID value stored in the pipelineIdArray variable

Read on to learn how.
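As a rough idea of what the source for step 1 might look like – my own hypothetical schema, not necessarily Andy’s – the Lookup activity feeding the parent pipeline could query something along these lines:

```sql
-- Hypothetical metadata table; Andy's actual schema may differ.
CREATE TABLE dbo.PipelineMetadata
(
    PipelineMetadataID int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_PipelineMetadata PRIMARY KEY,
    PipelineID nvarchar(128) NOT NULL,
    IsEnabled  bit NOT NULL CONSTRAINT DF_PipelineMetadata_Enabled DEFAULT (1)
);

-- The Lookup activity's query, whose results populate the
-- pipelineIdArray variable for the ForEach loop:
SELECT PipelineID
FROM dbo.PipelineMetadata
WHERE IsEnabled = 1
ORDER BY PipelineMetadataID;
```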


Configuring Azure Database Watcher

Rod Edwards configures Azure Database Watcher to watch databases in Azure:

First off, at the time of writing, this is still in Preview and is only for Azure SQL PaaS offerings, namely Azure SQL DB and SQL Managed Instance, so you’re out of luck if you’re using SQL Server on a VM. Expect this to be added at some point in the future; it’s number 2 on the published roadmap.

Preview or GA…the long and short of it is that it allows collection of performance AND config data into a central datastore for ALL of your SQL MI and Azure DB estate. With all of the data in one place, dashboards can then be connected to it for easier estate-wide visualisations.

Read on for a step-by-step guide on configuring it. But also pay attention to Rod’s note near the end that troubleshooting setup is a pain—there aren’t many useful logs that show exactly why it isn’t working.


Analyzing Azure Network Security Group Flow Logs

Reitse Eskens says the bits must flow:

I had an interesting question lately where I was requested to show all the network traffic within an Azure landing zone. Specifically source and target IP, protocol and port. From the aspect of Zero Trust, it’s important to show both successful and failed connections in your network. To be able to answer this question I had prepared myself by enabling the so-called flow logs on the Network Security Groups (NSG). NSGs are used to control traffic on the IP and port level between resources. There’s no packet inspection, just a check if IP 1 is allowed to connect to IP 2 on port 3. In this specific case, it also had to do with a migration to Azure Firewall where all the NSG rules had to be validated.

But getting the data is one thing; finding out what is in it is something else. In this blog post, I’ll drag you through the steps I took to get the raw JSON data into a SQL table and analyse the data.

Read on for the process and quite a bit of T-SQL code.
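As a taste of the approach, here is a minimal sketch of shredding one flow-log blob with OPENJSON. It assumes the raw JSON is already sitting in a variable, uses the documented flow-tuple field order (timestamp, source IP, destination IP, source port, destination port, protocol, direction, decision), and needs SQL Server 2022 or Azure SQL for the ordinal-aware STRING_SPLIT. Reitse’s actual code in the post is more complete:

```sql
DECLARE @json nvarchar(max) = N'...';  -- one NSG flow log blob, loaded elsewhere

SELECT
    rec.log_time,
    flow.rule_name,
    MAX(CASE part.ordinal WHEN 2 THEN part.value END) AS source_ip,
    MAX(CASE part.ordinal WHEN 3 THEN part.value END) AS dest_ip,
    MAX(CASE part.ordinal WHEN 4 THEN part.value END) AS source_port,
    MAX(CASE part.ordinal WHEN 5 THEN part.value END) AS dest_port,
    MAX(CASE part.ordinal WHEN 6 THEN part.value END) AS protocol,   -- T or U
    MAX(CASE part.ordinal WHEN 7 THEN part.value END) AS direction,  -- I or O
    MAX(CASE part.ordinal WHEN 8 THEN part.value END) AS decision    -- A or D
FROM OPENJSON(@json, '$.records')
     WITH (log_time datetime2 '$.time',
           flows nvarchar(max) '$.properties.flows' AS JSON) AS rec
CROSS APPLY OPENJSON(rec.flows)
     WITH (rule_name nvarchar(200) '$.rule',
           mac_flows nvarchar(max) '$.flows' AS JSON) AS flow
CROSS APPLY OPENJSON(flow.mac_flows)
     WITH (flowTuples nvarchar(max) '$.flowTuples' AS JSON) AS mf
CROSS APPLY OPENJSON(mf.flowTuples) AS tuple
CROSS APPLY STRING_SPLIT(tuple.value, ',', 1) AS part
-- Grouping by the raw tuple string keeps each tuple's fields on one row.
GROUP BY rec.log_time, flow.rule_name, tuple.[value];
```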


Reasons to Migrate from Synapse to Fabric

James Serra has a list:

Many customers ask me about the advantages of moving from Azure Synapse Analytics to Microsoft Fabric. Here’s a breakdown of the standout features that make Fabric an appealing choice:

  • Unified Environment for All Users
    Fabric serves everyone—from report writers and citizen developers to IT engineers—unlike Synapse, which primarily targets IT professionals.
  • Hands-Free Optimization
    Fabric is auto-optimized and fully integrated, allowing most features to perform well without requiring technical adjustments.

I suppose that James is too politic to give what I’d consider the top reason: because there have actually been meaningful updates to Microsoft Fabric in the past year. I’m not sure you can really say the same thing about Azure Synapse Analytics.

The tricky part about this, however, is that–to my knowledge, at least–there’s no clean way to migrate dedicated SQL pools.
