Press "Enter" to skip to content

Category: Cloud

Features in Azure AI Foundry

Tomaz Kastrun continues a series:

Azure AI Foundry is an all-purpose tool that provides all of the essential ingredients data scientists need in order to create, develop, and deploy generative AI applications. The platform supports and gives you the following services and capabilities:

Click through for those features and how you can access the Azure AI Foundry.
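To make the "deploy generative AI applications" part concrete, here is a minimal sketch of mine (not from Tomaz's post) of calling a chat model deployed through the Foundry portal, using the openai package against an Azure OpenAI endpoint. The endpoint URL, API version, and deployment name are all placeholders.

```python
# Minimal sketch: calling a chat model deployed via Azure AI Foundry,
# using the openai package against an Azure OpenAI endpoint.
# Endpoint, API version, and deployment name below are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-foundry-resource.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the deployment name chosen in the portal
    messages=[{"role": "user", "content": "Summarize what Azure AI Foundry does."}],
)
print(response.choices[0].message.content)
```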


Optimizing AWS Costs

Albert McQuiston speaks my language (that is, saving money):

Every organization looks to save on its cloud expenses to align with business objectives. With the following tips, you can optimize your Amazon Web Services (AWS) cloud expenditure and review the key aspects where you can save more effectively.

Read on for some high-level tips. It doesn’t cover things like spot instances, but does a pretty decent job of laying out the problem and showing some of the cost and budgeting tools available to figure out where your company’s money is going.
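As a starting point for that kind of review, here is a hedged sketch (mine, not from the article) of pulling month-to-date spend per service with the Cost Explorer API via boto3. It assumes credentials are configured and Cost Explorer is enabled on the account.

```python
# A sketch of pulling month-to-date cost per AWS service with the
# Cost Explorer API, to see where the money is going.
from datetime import date

import boto3

# Cost Explorer is served from us-east-1 regardless of your workloads' region.
ce = boto3.client("ce", region_name="us-east-1")

start = date.today().replace(day=1).isoformat()
end = date.today().isoformat()  # End is exclusive in Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```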


A Review of the Azure AI Foundry

Tomaz Kastrun starts a new series:

Microsoft Azure offers multiple services that enable developers to build amazing AI-powered solutions. Azure AI Foundry brings these services together in a single unified experience for AI development on the Azure cloud platform.

Until now, developers needed to work with multiple tools and web portals within a single project. With Azure AI Foundry, these tasks are simplified, and it offers a single environment for better collaboration.

Read on to see more about the Azure AI Foundry.


Table Cloning in Snowflake

Kevin Wilkie creates a clone:

In this coding scenario, I am copying everything from TableA and pushing it into a new table called TableB in the same database and schema.

If you check the size of the database before and after you clone a table, it will be the same size – no matter the size of TableA. This is because, at this point in time, TableB exists only as a “pointer” to the data that constitutes TableA. It is not until something changes in one of the tables – say, adding a row to TableA – that TableB stops being a mere “pointer” and the changed data is physically materialized.

Read on to learn more about how this works.
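For reference, here is a minimal sketch of the same scenario via the Snowflake Python connector; connection parameters and table names are placeholders, and the CLONE statement itself is the whole trick.

```python
# A sketch of the zero-copy clone scenario via the Snowflake Python connector.
# Connection parameters and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="my_user",
    password="...",         # placeholder
    warehouse="my_wh",
    database="my_db",
    schema="my_schema",
)

cur = conn.cursor()
# CLONE copies metadata only: TableB initially just points at TableA's
# micro-partitions, so storage does not grow at clone time.
cur.execute("CREATE TABLE TableB CLONE TableA")

# Only when one table diverges (e.g., an insert into TableA) does Snowflake
# write new micro-partitions; unchanged data remains shared.
cur.execute("INSERT INTO TableA (id) VALUES (42)")  # placeholder column
conn.close()
```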


Comparing Azure Kubernetes Service and Container Apps

Gaurav Shukla makes a comparison:

Hello Readers!! Welcome to the new blog!! AKS vs ACA: which is best for cloud migration? When migrating an application to the cloud, choosing the right platform is crucial to ensure scalability, cost-effectiveness, and ease of management. Two of the prominent services Azure offers for running containerized applications are Azure Kubernetes Service (AKS) and Azure Container Apps (ACA). Both are excellent choices, but their use cases, complexity, and operational overhead differ significantly. This blog will provide a detailed comparison of AKS and ACA, helping you decide which is the best approach for your cloud migration.

Read on for an overview of each service and a nice table outlining the differences.


Power BI Writeback via Fabric SQL Database

Jon Voge gives us a use case for Fabric SQL Databases:

Until recently, Fabric has allowed us to choose between Lakehouses and Warehouses as a backend. For write-back use cases, neither are ideal.

  • The SQL endpoint of a Lakehouse is read-only, making writes from Power Apps impossible.
  • While the SQL endpoint of a Warehouse is write-enabled, it does not support enforced primary keys, which are a hard requirement for Power Apps to be able to write directly to a data source.

Jon briefly describes two mechanisms people used and then how you can do this more effectively with a Fabric SQL Database. Based on the article, it seems that you could probably still do the same with an Azure SQL Database, though I suppose handling the managed identity could be an issue.
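As a rough illustration of why the Fabric SQL Database fits, here is a hedged sketch of creating a write-back target table with an enforced primary key over pyodbc. The server and database names are placeholders; Fabric SQL Database is SQL Server-compatible over TDS, so standard tooling applies.

```python
# Sketch: create a write-back table with an enforced PRIMARY KEY in a
# Fabric SQL Database over pyodbc. Server/database names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myfabricsqldb.database.fabric.microsoft.com;"  # placeholder
    "DATABASE=WritebackDb;"                                # placeholder
    "Authentication=ActiveDirectoryInteractive;"
)

cur = conn.cursor()
# Unlike a Warehouse SQL endpoint, a Fabric SQL Database supports an
# enforced PRIMARY KEY, which Power Apps requires for direct writes.
cur.execute("""
    IF OBJECT_ID('dbo.SalesForecast') IS NULL
    CREATE TABLE dbo.SalesForecast (
        ForecastId INT IDENTITY PRIMARY KEY,
        Region     NVARCHAR(50) NOT NULL,
        Amount     DECIMAL(18, 2) NOT NULL
    )
""")
conn.commit()
```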


AWS DMS and a LOB Bug

Richard O’Riordan fixes an issue:

The table over in our Postgres cluster is similar, except for the data type “text” being used instead of “varchar”. All kind of boring so far, but we noticed that on some very rare occasions the “largevalue” column was empty over in the PostgreSQL database even though it was populated for that row in SQL Server.

This seemed odd to me: you would expect that, since it is all done within a transaction, any error inserting the row on the PostgreSQL side would make the insert either all succeed or all fail. How would the row be partially inserted, i.e., missing this text value?

Read on for the story and several tests of how things work.
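If you need to hunt for this symptom in your own environment, here is a hedged reconciliation sketch (mine, not from the article) that finds rows populated at the source but empty at the target. Table, column, and connection details are all placeholders.

```python
# Sketch: find rows whose LOB/text column is populated in SQL Server but
# empty after DMS replication into PostgreSQL. Names are placeholders.
import psycopg2
import pyodbc

mssql = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=src;DATABASE=AppDb;"
    "Trusted_Connection=yes;"  # placeholder connection
)
pg = psycopg2.connect(host="pg-cluster", dbname="appdb", user="app")  # placeholder

src = mssql.cursor()
src.execute("SELECT id FROM dbo.BigTable WHERE largevalue IS NOT NULL")
populated_ids = {row[0] for row in src.fetchall()}

tgt = pg.cursor()
tgt.execute("SELECT id FROM bigtable WHERE largevalue IS NULL OR largevalue = ''")
empty_ids = {row[0] for row in tgt.fetchall()}

# Rows populated at the source but empty at the target are the suspects.
for bad_id in sorted(populated_ids & empty_ids):
    print(f"row {bad_id}: largevalue lost in replication")
```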


Debugging in Databricks

Chen Hirsh enables a debugger:

Do you know that feeling, when you write beautiful code and everything just works perfectly on the first try?

I don’t.

Every time I write code, it doesn’t work at first, and I have to debug it, make changes, test it…

Databricks introduced a debugger you can use on a code cell, and I’ve wanted to try it for quite some time now. Well, I guess the time is now.

I’m having trouble finding the utility of a debugger here. Notebooks are already set up for debugging: you can easily add or remove cells, and the underlying session maintains state between cells.
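To make the comparison concrete, the closest plain-Python equivalent is dropping breakpoint() into a cell and driving pdb by hand (n, s, p <var>, c). This sketch is stdlib only and assumes nothing about how Databricks implements its cell debugger.

```python
# Stdlib equivalent of stepping through a cell: pause at a breakpoint and
# inspect state with pdb commands (n = next, s = step, p var, c = continue).
def transform(rows):
    total = 0
    for row in rows:
        breakpoint()  # execution pauses here; inspect `row` and `total`
        total += row["amount"]
    return total

transform([{"amount": 10}, {"amount": 32}])
```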


Execute a Collection of Child Pipelines from Metadata in Data Factory

Andy Leonard continues a series on design patterns:

In this post, I clone and modify the dynamic parent pipeline from the previous post to retrieve metadata from an Azure SQL database table for several child pipelines, and then call each child pipeline from a parent pipeline.

When we’re done, this pipeline will:

  1. Read pipeline metadata from a table in an Azure SQL database
  2. Store some of the metadata (a collection of pipelineID values) in the (existing) pipelineIdArray variable
  3. Iterate the pipelineIdArray variable’s collection of pipelineID values
  4. Execute each child pipeline represented by each pipelineID value stored in the pipelineIdArray variable

Read on to learn how.
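For a sense of the moving parts, here is a hedged sketch of the same pattern driven from Python rather than from a parent pipeline: read child pipeline names from an Azure SQL metadata table, then start each run with the Data Factory management SDK. The resource names, metadata table, and its columns are placeholders, not Andy's actual schema.

```python
# Sketch: metadata-driven execution of child pipelines via the ADF SDK.
# All resource names and the metadata table/columns are placeholders.
import pyodbc
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# 1. Read pipeline metadata from a table in an Azure SQL database.
meta = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=meta-srv.database.windows.net;"  # placeholder
    "DATABASE=Metadata;Authentication=ActiveDirectoryInteractive;"
)
cur = meta.cursor()
cur.execute("SELECT PipelineName FROM dbo.PipelineMetadata ORDER BY LoadOrder")

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# 2-4. Iterate the metadata rows and execute each child pipeline in turn.
for (pipeline_name,) in cur.fetchall():
    run = adf.pipelines.create_run(
        resource_group_name="my-rg",  # placeholder
        factory_name="my-adf",        # placeholder
        pipeline_name=pipeline_name,
    )
    print(f"started {pipeline_name}: run id {run.run_id}")
```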


Configuring Azure Database Watcher

Rod Edwards configures Azure Database Watcher to watch databases in Azure:

First off, at the time of writing, this is still in preview, and it is only for Azure SQL PaaS offerings, namely Azure SQL DB and SQL Managed Instance, so you’re out of luck if you’re using SQL Server on a VM. Expect this to be added at some point in the future; it’s number 2 on the published roadmap.

Preview or GA, the long and short of it is that it allows collection of performance AND configuration data into a central datastore for ALL of your SQL MI and Azure SQL DB estate. With all of the data in one place, dashboards can then be connected to that datastore for easier estate-wide visualisations.

Read on for a step-by-step guide on configuring it. But also pay attention to Rod’s note near the end that troubleshooting setup is a pain—there aren’t many useful logs that show exactly why it isn’t working.
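Once collection is working, the central datastore is an Azure Data Explorer database you can query directly. Here is a hedged sketch using the azure-kusto-data package; the cluster URL, database, and the table and column names in the KQL are placeholders, so check your own datastore for the actual schema.

```python
# Sketch: querying a Database Watcher datastore (Azure Data Explorer)
# with azure-kusto-data. Cluster, database, table, and column names
# are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mywatcherdata.eastus.kusto.windows.net"  # placeholder
)
client = KustoClient(kcsb)

query = """
sqldb_database_wait_stats            // placeholder table name
| where sample_time_utc > ago(1h)    // placeholder column names
| summarize total_wait_ms = sum(total_wait_time_ms) by wait_type
| top 10 by total_wait_ms
"""

response = client.execute("watcher_db", query)  # placeholder database name
for row in response.primary_results[0]:
    print(row["wait_type"], row["total_wait_ms"])
```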
