Press "Enter" to skip to content

Curated SQL Posts

Improving the Robustness of ML Model Deployment

Alexander Billington shares a few tools and tips:

Machine learning (ML) model deployment is a critical part of the MLOps lifecycle, and it can be a challenging process. In the previous blog, we explored how Azure Functions can simplify the deployment process. However, there are many other factors to consider when deploying ML models to production environments. In this blog, we’ll delve deeper into some of the essential hints and tips for more robust model deployments. We’ll look at topics such as proper model versioning and packaging, data validation, and performant code optimisations. By implementing these practices, data scientists and ML engineers can ensure their models are deployed efficiently, accurately, and with minimal downtime.

MLflow is definitely a good recommendation, as is Pydantic (which is on my to-learn list…one of these days).


Point-in-Time Restoration with pg_basebackup

Matt Pearson and Luke Davies restore a PostgreSQL database:

I had a conversation with another DBA about interview questions, and one interview topic that came up was using pg_basebackup to restore a database. I had a horrible realisation that I had not done a restore using pg_basebackup in PostgreSQL 15. With modern backup tools, using pg_basebackup is like using a manual screwdriver instead of an electrically-powered screwdriver; it gets the job done, but much more effort is involved.

However, sometimes pg_basebackup is the only tool available.

So, in this blog, we’ll look at PostgreSQL’s recovery options and their implications for restoring. We’ll also look at a simple restore using pg_basebackup after a user error. I’ll be using a PG 15 database for these tests.

Click through for the process and a demonstration.
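If you want to jog your memory before clicking through, the rough shape of a point-in-time restore with pg_basebackup looks something like this (the host, paths, and target timestamp below are placeholders, not details from the post):

pg_basebackup -h source-host -U replication_user -D /restore/data -Fp -Xs -P

# in the restored data directory, append recovery settings to postgresql.auto.conf:
restore_command = 'cp /wal_archive/%f %p'
recovery_target_time = '2023-04-01 12:00:00'

# then create an empty recovery.signal file in the data directory and start the
# server; PostgreSQL replays WAL up to the target time and pauses there by default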


Building Sets with GENERATE_SERIES

Aaron Bertrand is a fan of this new function:

I have come across a lot of use cases for manufacturing rows on the fly, aside from the common goal of populating a large data set such as a numbers or calendar table. A few favorites include building sample data, pivoting an unknown number of columns, data extrapolation, and filling gaps in date or time ranges.

If you are on SQL Server 2022 or Azure SQL Database, or have been reading up on new features, you’ve likely heard about one of the better T-SQL enhancements: a new built-in function called GENERATE_SERIES. The syntax is straightforward – it accepts arguments for start and stop, and an optional argument to indicate step (in case you want to iterate by more than 1, or backwards):

Click through to see how performance for this compares to two methods we’ve used in the past to generate similar results.
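If you just want a feel for the syntax first, here is a minimal sketch (SQL Server 2022; the function returns a single column named value):

-- integers 1 through 10
SELECT value FROM GENERATE_SERIES(1, 10);

-- counting backwards by 2: 10, 8, 6, 4, 2
SELECT value FROM GENERATE_SERIES(10, 2, -2);

-- one row per day in January 2023
SELECT DATEADD(DAY, value, '20230101') AS TheDate
FROM GENERATE_SERIES(0, 30);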


Overwhelming the IN Operator

Tariq Rasheed Al-Qaralleh recounts a customer problem:

In some cases, such as when customers use object-relational mapping (ORM) tools like Entity Framework or LINQ, part of the code will ultimately be converted to an executable T-SQL statement.

For example, a LINQ query using the .Where() method becomes a T-SQL query with a WHERE clause:

Query Syntax and Method Syntax in LINQ (C#) | Microsoft Learn

Write LINQ queries in C# | Microsoft Learn

Read on for some of the practical consequences of doing this, including performance issues and possibly even runtime errors.

Tariq gives a couple examples of how to fix the issue, and a third possible fix is to pass in the IN clause as a table-valued parameter and join to that TVP.
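That third option looks something like the following sketch (the dbo.IntList type and dbo.Orders table are hypothetical, just to show the shape):

-- one-time setup: a table type to carry the list of values
CREATE TYPE dbo.IntList AS TABLE (Value int PRIMARY KEY);
GO

-- per call: fill the TVP (normally done from application code) and join to it
DECLARE @ids dbo.IntList;
INSERT INTO @ids (Value) VALUES (1), (2), (3);

SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
INNER JOIN @ids AS i
    ON i.Value = o.CustomerID;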


Change Data Capture and the Cosmos DB Analytical Store

Mark Kromer and Revin Chalil show off an interesting preview feature:

Making it super-easy to create efficient and fast ETL processing in the cloud, Azure Data Factory has invested heavily in change data capture features. Today, we are super-excited to announce that the Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for the Azure Cosmos DB API for NoSQL and the Azure Cosmos DB API for MongoDB, in public preview!

This capability, available in public preview, allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from the analytical store. CDC is seamlessly integrated with Azure Synapse Analytics and Azure Data Factory, providing a scalable no-code experience for high data volumes. As CDC is based on the analytical store, it does not consume provisioned RUs, does not affect the performance of your transactional workloads, provides lower latency, and has lower TCO.

Click through to see how it works.


Fending off Sessions while in Single-User Mode

Eitan Blumin just wants to switch the database type:

Today we had an interesting use case where a customer reported that one of the databases they just restored from a backup got stuck in “Single-User” mode in one of their environments.

To resolve it, I first tried running the following command:

ALTER DATABASE MyDB SET MULTI_USER WITH ROLLBACK IMMEDIATE;

In response, I got deadlocked with the dreaded error 1205:

There were a few different attempts with no success until Eitan came up with the final script. Eitan’s analogy was to curling, though the first thing I thought of was Odysseus fighting off his wife’s suitors as he came back to claim his home.
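The general idea, stripped way down (this is not Eitan’s actual script, which among other things retries in a loop because another session can always swoop in first):

-- find whichever session currently occupies the single-user database
DECLARE @spid int, @sql nvarchar(100);

SELECT TOP (1) @spid = session_id
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'MyDB');

-- kick it out, then immediately claim the database
IF @spid IS NOT NULL
BEGIN
    SET @sql = N'KILL ' + CAST(@spid AS nvarchar(10)) + N';';
    EXEC (@sql);
END;

ALTER DATABASE MyDB SET MULTI_USER WITH ROLLBACK IMMEDIATE;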


Power BI Dataset CI/CD with Azure DevOps

Stephanie Bruno does a bit of continuous integration:

There’s a lot of information on how to get around the lack of an out-of-the-box CI/CD solution for Power BI datasets, but for me it’s often complicated and I have to read too many pages before making much progress on my own. This post is here to strip it down and provide you with the easiest way we know to enable a bona fide CI/CD process for Power BI datasets with Azure DevOps. The post is still longer than we’d like, but it includes detailed step-by-step instructions to walk you through every part of the process. To save space, we used slideshows for the screenshots, but you can pause them as you follow along.

There are a lot of steps but the goal is a worthwhile one.


Tips for Enhancing Power BI User Experience

Mara Pereira provides some guidance:

Before we dive into the nitty-gritty, let’s address the elephant in the room – what exactly is user experience, and why is it such an indispensable factor in reporting and Power BI?

To put it simply, user experience (UX) is the overall impression and interaction your audience has with your report. It goes beyond aesthetics; UX focuses on the ease of understanding, navigation, and the ability to extract valuable insights from the data presented. In the world of reporting, having a top-notch user experience is crucial, as it can make or break the effectiveness and adoption of your reports.

The post stays mostly at a high level, providing motivational guidance rather than “here are the specific actions to take on a given report.” What it does provide is the reasoning behind why you would make those changes.


Which Power Query Operations Are Most Resource-Intensive?

Chris Webb answers a question:

Last year I wrote a post about a change in the Power BI Service that meant the CPU Time associated with Power Query transformations was added to the total shown for a dataset refresh operation in Profiler traces and Log Analytics:

https://blog.crossjoin.co.uk/2022/07/03/measuring-power-query-cpu-usage-during-power-bi-dataset-refresh/

This was useful, but it didn’t tell you directly how much CPU Time was used by Power Query and it didn’t tell you which tables or partitions in a refresh were using the most CPU. It also didn’t tell you anything about Power Query memory usage. The good news is that recently there has been another change that solves these problems.

Click through for the solution.


ANSI SQL and Trailing Spaces

Chris Johnson finds a language quirk:

Recently I found a quirk of T-SQL, where a group by statement was treating strings as the same if the only difference was one or more trailing spaces. So, ‘aa’ would be grouped with ‘aa ‘. I did some digging, and this is what I found.

Yeah, this isn’t just a quirk of Microsoft’s T-SQL variant; it’s a standard part of SQL, as Chris notes later in the post.

My “just-so” story is that this might have been implemented to deal with CHAR(x) comparisons, such as comparing a CHAR(2) to a CHAR(3). There’s no way to make that comparison unless you treat trailing spaces as irrelevant. Because we almost always use VARCHAR(x) or NVARCHAR(x), it isn’t something top of mind for most database practitioners, but there is a method to the madness.
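You can see the padding behavior (and one way to distinguish the values when you need to) with a quick test:

-- the = comparison pads the shorter string, so both rows land in one group
SELECT COUNT(*) AS Cnt,
       MIN(DATALENGTH(s)) AS MinLen,
       MAX(DATALENGTH(s)) AS MaxLen
FROM (VALUES ('aa'), ('aa ')) AS v(s)
GROUP BY s;
-- one row back: Cnt = 2, MinLen = 2, MaxLen = 3

-- LIKE does not pad, so it can tell the values apart
SELECT CASE WHEN 'aa ' LIKE 'aa' THEN 'match' ELSE 'no match' END;
-- returns 'no match'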
