Press "Enter" to skip to content

Category: Cloud

Azure SQL Trigger for Azure Functions

Drew Skwiers-Koballa announces a new feature:

The Azure SQL trigger for Azure Functions uses SQL change tracking functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted.  Change tracking is available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server, making the Azure SQL trigger for Azure Functions a flexible component for event-driven applications.

Similar to the Azure SQL bindings for Azure Functions, a connection string for the SQL database is stored in the application settings of the Azure Function, with support for authentication options such as managed identity. In addition to the connection string, the SQL trigger is configured with a table name. The SQL trigger is specified on lines 12 and 13 of the C# Azure Function example in the original post, which logs information about each change made to data in the dbo.Employees table.

Read on to see how it works.
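
One prerequisite the announcement builds on: change tracking must be enabled on both the database and the table before the trigger has anything to monitor. A minimal T-SQL sketch, using the dbo.Employees table from the example (the database name and retention values here are illustrative):

-- Enable change tracking at the database level
ALTER DATABASE [SampleDb]
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Enable change tracking on the table the trigger will monitor
ALTER TABLE [dbo].[Employees]
ENABLE CHANGE_TRACKING;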


Working with Multi-Channel Bots in Azure

Matt Eland creates a mega-bot:

The Azure Bot Service is effectively a registration for a conversational AI application on Azure. This registration allows you to connect a deployed chatbot to a wide range of supported channels that users can use to interact with the bot.

This lets you build one bot that can serve a variety of users across multiple different channels, including both text and voice channels.

Additionally, the Azure Bot Service gives you a centralized place to manage, secure, and monitor your bot, regardless of which channel people use to interact with your app.

Read on for an important caveat, as well as more information on Azure Bot Service.


The Importance of the Power BI Service

Reza Rad explains why the Power BI Service is useful:

The Power BI toolset comes in many shapes and forms. There is a Power BI Desktop, Power BI Mobile app, Power BI Report Server, and Power BI Service (and some other applications and components too). The questions I hear from new users of Power BI are: Do I need to have an account for Power BI? Do I need to use the Power BI website for creating visualizations, etc.? What is the Power BI website or service, and what is its usage? If I can do the reporting using Power BI Desktop for free, then why would I need the service? In this article and video, I will answer all of that.

Click through for a video or for the article explaining the purpose behind the Power BI Service. Having done work with places using Power BI Report Server and places using the Power BI Service, I will say that the latter takes more work to get corporate-compliant but offers a whole lot more.


GitHub CI/CD for Synapse Link for SQL Server 2022

Kevin Chant does a bit of CI/CD:

In this post I want to show how a GitHub CI/CD experience for Azure Synapse Link for SQL Server 2022 can look using GitHub Actions, including how to automatically stop and start the link in the pipeline.

In my last post I showed a complete CI/CD experience for Azure Synapse Link for SQL Server 2022 using Azure DevOps.

With this in mind, in this post I show an alternative GitHub CI/CD experience which uses GitHub Actions, including automatically stopping the link before the database update and starting it again after the update has completed.

Read on to learn how.


One Repo for Every Environment

Meagan Longoria explains an important point about source control repositories:

I’ve seen a few people start Azure Data Factory (ADF) projects assuming that we would have one source control repo per environment, meaning that you would attach a Git repo to Dev, and another Git repo to Test and another to Prod.

Microsoft recommends against this, saying:

Read on for the citation as well as the practical reason why we don’t want multiple repos. This is true not only for Azure Data Factory but for every development project. You have one repository with branches. Certain branches represent checkpoints where code goes out to a specific environment via a release tool (e.g., Azure DevOps release pipelines, GitHub Actions, etc.).


Testing Azure SQL DB Hyperscale Performance

Reitse Eskens continues a series on performance testing Azure SQL DB tiers:

So far, my blogs have been on the different Azure SQL DB offerings, where there are differences between the DTU-based and vCore-based models, but in general the design is recognizable. With the Hyperscale tier, many things change. There are still cores and memory of course, but the rest of the design is totally different. I won’t go into all the details; you’re better off reading them here [https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale?view=azuresql] and here [https://learn.microsoft.com/en-us/azure/azure-sql/database/hyperscale-architecture?view=azuresql], but the main differences are support for up to 100 TB of data in one database (all the other tiers max out at 4 TB), fast database restores based on file snapshots, rapid scale-out, and rapid scale-up.

There are differences in testing this one versus the others, so buyer beware.
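
If you want to run your own comparison, creating and scaling a Hyperscale database can be done entirely in T-SQL; a minimal sketch, run against the master database on the logical server (the database name and service objectives are illustrative):

-- Create a Hyperscale database with two Gen5 vCores
CREATE DATABASE [PerfTest] (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2');

-- Scale to four vCores; compute is decoupled from storage, so this is quick
ALTER DATABASE [PerfTest] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4');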


Software-as-a-Service: Single DB or Per-Client DB

Greg Low makes a choice:

On-premises applications are mostly single-tenant. They support a single organization. We do occasionally see multi-tenant databases. They hold the same types of information for many organizations.

But what about SaaS-based applications? By default, you’ll want to store data for many client organizations. Should you create a large single database that holds data for everyone? Should you create a separate database for each client? Or should you create something in between?

As with most things in computing, there is no one simple answer to this. Here are the main decision points that I look at:

Click through for Greg’s thoughts on the matter. Most of these factors are also relevant for on-premises SQL Server installations, not just Azure SQL DB/Managed Instance.
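
If the single shared database wins out, Row-Level Security is one common way to enforce tenant isolation in SQL Server and Azure SQL DB. A minimal T-SQL sketch, assuming a TenantId column on tenant-scoped tables and a session value set by the application (all object names here are hypothetical):

-- Predicate function: a row is visible only when its tenant matches
-- the tenant the application stored in SESSION_CONTEXT
CREATE FUNCTION dbo.fn_TenantPredicate (@TenantId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS AccessAllowed
WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO

-- Apply the predicate as a filter on a tenant-scoped table
CREATE SECURITY POLICY dbo.TenantIsolationPolicy
ADD FILTER PREDICATE dbo.fn_TenantPredicate(TenantId) ON dbo.Orders
WITH (STATE = ON);

The application sets the TenantId session value (e.g., via sp_set_session_context) right after opening a connection, so a single pooled connection only ever sees one tenant’s rows at a time.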


Search Optimization in Snowflake

Arun Sirpal doesn’t have time to create indexes:

I will use a clone of the table to compare it to when search optimisation is on. I will make sure no caching is on, which could affect the test.
I activate the feature via:

ALTER TABLE data_staging ADD SEARCH OPTIMIZATION;

This takes time! You can run something like the below to confirm 100% completion. The wait is because there is a maintenance service running in the background that is responsible for creating and maintaining the search access path:

Click through to see what happens and the kinds of performance gains Arun realized.
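
Arun’s exact progress query isn’t shown in the excerpt, but Snowflake’s SHOW TABLES output includes search optimization columns you can check; a minimal sketch:

-- The SEARCH_OPTIMIZATION_PROGRESS column reaches 100 once the
-- search access path has been fully built for the table
SHOW TABLES LIKE 'data_staging';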
