Press "Enter" to skip to content

Author: Kevin Feasel

Automating Self-Hosted Integration Runtime Deployment

Jonathan D’Aloia doesn’t want to click next-next-next:

Welcome to my blog on how to fully automate the deployment of a Self-Hosted Integration Runtime using Terraform!

The title of this blog is very much self-explanatory, but I hope you find the contents useful and are able to apply this to your own projects in some way.

Click through for a brief overview of self-hosted integration runtimes, the process to follow, and a link to the repo.


Improving the Robustness of ML Model Deployment

Alexander Billington shares a few tools and tips:

Machine learning (ML) model deployment is a critical part of the MLOps lifecycle, and it can be a challenging process. In the previous blog, we explored how Azure Functions can simplify the deployment process. However, there are many other factors to consider when deploying ML models to production environments. In this blog, we’ll delve deeper into some of the essential hints and tips for more robust model deployments. We’ll look at topics such as proper model versioning and packaging, data validation, and code optimisations for performance. By implementing these practices, data scientists and ML engineers can ensure their models are deployed efficiently, accurately, and with minimal downtime.

MLflow is definitely a good recommendation, as is Pydantic (which is on my to-learn list…one of these days).


Building Sets with GENERATE_SERIES

Aaron Bertrand is a fan of this new function:

I have come across a lot of use cases for manufacturing rows on the fly, aside from the common goal of populating a large data set such as a numbers or calendar table. A few favorites include building sample data, pivoting an unknown number of columns, data extrapolation, and filling gaps in date or time ranges.

If you are on SQL Server 2022 or Azure SQL Database, or have been reading up on new features, you’ve likely heard about one of the better T-SQL enhancements: a new built-in function called GENERATE_SERIES. The syntax is straightforward – it accepts arguments for start and stop, and an optional argument to indicate step (in case you want to iterate by more than 1, or backwards):
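As a quick illustration of that syntax (my own sketch, not an example from Aaron's post):

-- Requires SQL Server 2022 or Azure SQL Database
SELECT value FROM GENERATE_SERIES(1, 10);      -- 1 through 10, step defaults to 1
SELECT value FROM GENERATE_SERIES(10, 1, -2);  -- backwards: 10, 8, 6, 4, 2
-- Build a month of dates by adding the series to a start date
DECLARE @start date = '20230101';
SELECT DATEADD(DAY, s.value, @start) AS d
FROM GENERATE_SERIES(0, 30) AS s;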

Click through to see how performance for this compares to two methods we’ve used in the past to generate similar results.


Point-in-Time Restoration with pg_basebackup

Matt Pearson and Luke Davies restore a PostgreSQL database:

I had a conversation with another DBA about interview questions, and one interview topic that came up was using pg_basebackup to restore a database. I had a horrible realisation that I had not done a restore using pg_basebackup in PostgreSQL 15. With modern backup tools, using pg_basebackup is like using a manual screwdriver instead of an electrically-powered screwdriver; it gets the job done, but much more effort is involved.

However, sometimes pg_basebackup is the only tool available.

So, in this blog, we’ll look at PostgreSQL’s recovery options and their implications for restoring. We’ll also look at a simple restore using pg_basebackup after a user error. I’ll be using a PG 15 database for these tests.

Click through for the process and a demonstration.


Change Data Capture and the Cosmos DB Analytical Store

Mark Kromer and Revin Chalil show off an interesting preview feature:

Making it super-easy to create efficient and fast ETL processing in the cloud, Azure Data Factory has invested heavily in change data capture features. Today, we are super-excited to announce that Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB in public preview!

This capability, available in public preview, allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from the analytical store. CDC is seamlessly integrated with Azure Synapse Analytics and Azure Data Factory, providing a scalable no-code experience for high data volumes. As CDC is based on the analytical store, it does not consume provisioned RUs, does not affect the performance of your transactional workloads, provides lower latency, and has lower TCO.

Click through to see how it works.


Overwhelming the IN Operator

Tariq Rasheed Al-Qaralleh recounts a customer problem:

In some cases, such as when customers use object-relational mapping (ORM) tools like Entity Framework or LINQ, part of the code will ultimately be converted into an executable T-SQL statement.

For example, a LINQ query with the .Where() method becomes a T-SQL query with a WHERE clause:

Query Syntax and Method Syntax in LINQ (C#) | Microsoft Learn

Write LINQ queries in C# | Microsoft Learn

Read on for some of the practical consequences of doing this, including performance issues and possibly even runtime errors.

Tariq gives a couple examples of how to fix the issue, and a third possible fix is to pass in the IN clause as a table-valued parameter and join to that TVP.
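As a rough sketch of that third approach (object names here are illustrative, not from Tariq's post):

-- A table type to carry the list of values
CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
-- Instead of WHERE OrderID IN (1, 2, 3, ...thousands of literals...),
-- the application passes one table-valued parameter and we join to it
CREATE OR ALTER PROCEDURE dbo.GetOrders
    @Ids dbo.IdList READONLY
AS
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
    INNER JOIN @Ids AS i
        ON i.Id = o.OrderID;
GO

This keeps the query text stable no matter how many values the application sends, so you get one cached plan instead of a plan per distinct IN list, and you avoid the parser limits you can hit with enormous IN lists.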


Power BI Dataset CI/CD with Azure DevOps

Stephanie Bruno does a bit of continuous integration:

There’s a lot of information on how to get around the lack of an out-of-the-box CI/CD solution for Power BI datasets, but for me it’s often complicated and I have to read too many pages before making much progress on my own. This post is here to strip it down and provide you with the easiest way we know to enable a bona fide CI/CD process for Power BI datasets with Azure DevOps. The post is still longer than we’d like, but it includes detailed step-by-step instructions to walk you through every part of the process. To save space, we used slideshows for the screenshots, but you can pause them as you follow along.

There are a lot of steps but the goal is a worthwhile one.


Fending off Sessions while in Single-User Mode

Eitan Blumin just wants to switch the database mode:

Today we had an interesting use case where a customer reported that one of the databases they just restored from a backup got stuck in “Single-User” mode in one of their environments.

To resolve it, I first tried running the following command:

ALTER DATABASE MyDB SET MULTI_USER WITH ROLLBACK IMMEDIATE;

In response, I got deadlocked with the dreaded error 1205:

There were a few different attempts with no success until Eitan came up with the final script. Eitan’s analogy was to curling, though the first thing I thought of was Odysseus fighting off his wife’s suitors as he came back to claim his home.
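The general shape of the fix (a simplified sketch of the approach, not Eitan's actual script) is to kick out whichever session is holding the database and immediately take it multi-user:

-- Simplified sketch: kill any session in the database, then flip the mode.
-- A robust version retries in a loop, since another session can sneak in
-- between the KILL and the ALTER DATABASE.
DECLARE @kill nvarchar(max) = N'';
SELECT @kill += N'KILL ' + CAST(session_id AS nvarchar(10)) + N'; '
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'MyDB');
EXEC sys.sp_executesql @kill;
ALTER DATABASE MyDB SET MULTI_USER;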


Which Power Query Operations Are Most Resource-Intensive?

Chris Webb answers a question:

Last year I wrote a post about a change in the Power BI Service that meant the CPU Time associated with Power Query transformations was added to the total shown for a dataset refresh operation in Profiler traces and Log Analytics:

https://blog.crossjoin.co.uk/2022/07/03/measuring-power-query-cpu-usage-during-power-bi-dataset-refresh/

This was useful, but it didn’t tell you directly how much CPU Time was used by Power Query and it didn’t tell you which tables or partitions in a refresh were using the most CPU. It also didn’t tell you anything about Power Query memory usage. The good news is that recently there has been another change that solves these problems.

Click through for the solution.
