Day: May 6, 2026

Comparing {targets} in R to dbt for Data Engineering

Jonathan Carroll compares two approaches:

Thinking of a real-world project I could take for a spin, I decided to build some ingestion for my personal finances. I’ve used Quickbooks previously which connects up to my bank and helps categorise personal and business (as a freelance contractor) expenses. I decided I’ll build my own ‘slowbooks’ processing workflow based on some manual exports (I don’t think my bank has an API).

Both of the approaches I’ll compare here build on the idea of a Makefile, which connects up commands to run based on dependencies and only runs what is needed; if none of a step’s input dependencies have changed, there’s no need to re-run that step. From what I understand, you could largely get away with just writing some Makefiles (or using the newer alternative, just (just.systems)), but these two approaches help to better structure how that’s constructed.
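As a rough illustration of that Makefile idea (filenames here are invented, not from Jonathan’s post): each target is rebuilt only when one of its listed dependencies is newer than it.

```makefile
# Minimal sketch of dependency-driven rebuilds. If raw_export.csv and
# clean.R are unchanged since clean.csv was last built, then
# `make report.html` skips the cleaning step entirely.
clean.csv: raw_export.csv clean.R
	Rscript clean.R raw_export.csv clean.csv

report.html: clean.csv report.Rmd
	Rscript -e 'rmarkdown::render("report.Rmd", output_file = "report.html")'
```

{targets} expresses the same kind of dependency graph in R, but decides what to skip based on content hashes of objects and files rather than file timestamps.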

Read on for Jonathan’s discovery process and ultimate findings. H/T R-Bloggers.

Migration Regret: SQL Server to Postgres Edition

Tim Radney provides an important reminder:

As a data nerd who’s spent the last 25+ years helping organizations keep their databases running smoothly, I’ve had this conversation more times than I can count: “We’re moving to Postgres to save on licensing costs.” It sounds great on paper: open source, no vendor lock-in, and those big SQL Server license fees go away. But lately, I’m hearing a different story from DBAs and architects after the migration is done. They’re calling it Post Regret. That sinking feeling when the promised savings evaporate, performance tanks, and the team realizes they might have been better off staying put (or at least doing a lot more due diligence).

If you’re considering a SQL Server to PostgreSQL migration (or already knee-deep in one), this post is for you. I’ll break down what Post Regret looks like in the real world, why it happens so often, and how to avoid becoming the next cautionary tale. I’ve seen it play out in enough environments to spot the patterns.

Click through for Tim’s tales of woe. Importantly, none of it is a knock on Postgres or a knock on SQL Server. It’s the fact that these are two separate products whose tuning options are very different. You can successfully migrate from one to the other, but to do so, you really need to have a great understanding of both platforms at scale, not just at the tutorial level.

Using the XMLA Endpoint for Power BI

Ruben Van de Voorde hits an endpoint:

Most Power BI developers have come across “XMLA endpoint” somewhere: a tenant setting, a Microsoft Learn page, or a tool’s connection dialog. The term sounds technical, and it is, but the idea behind it is straightforward.

Your semantic model is a database. Like any database, it lives somewhere: on your laptop while you’re authoring it in Power BI Desktop, or in a workspace once you’ve published it to the Power BI Service or Fabric. To use a database with anything other than the application that hosts it, you need a connection. The XMLA endpoint is that connection.

This article walks through what the XMLA endpoint is, where it comes from, how to turn it on, what you can do with it once you have it, and where the alternatives (the Power BI REST API, Semantic Link, and the Fabric REST API) fit in.
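For context, the XMLA endpoint address for a workspace takes this general form (the workspace name is a placeholder), and XMLA-aware client tools such as SSMS, Tabular Editor, and DAX Studio accept it as a server name:

```
powerbi://api.powerbi.com/v1.0/myorg/<Workspace Name>
```

Note that XMLA read access requires the workspace to sit on a Premium, Premium Per User, or Fabric capacity, plus the relevant tenant setting being enabled.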

Click through for Ruben’s article, which does a good job of demystifying the endpoint.

Managed Identities in SQL Server 2025

Greg Low offers another security option for service management:

Those who have worked with SQL Server will understand the need to avoid storing passwords for accessing resources. Windows-based identities are fine for on-premises SQL Server systems, including those on cloud-based virtual machines (VMs), but are of no use when you need to access cloud-based resources like those in Azure.

Some Azure-based resources (including storage accounts) offer other access methods, such as shared access signatures (SAS), but these aren’t much of a step-up from passwords.

What’s really needed is for SQL Server to have its own Microsoft Entra based identity. These can be used directly with Azure-based resources – and that’s exactly where managed identities come in.
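As a hedged sketch of what this can look like in practice (storage account, container, and database names below are made up; it assumes the server’s managed identity has been granted an appropriate role, such as Storage Blob Data Contributor, on the storage account), the backup-to-URL pattern uses a credential whose identity is literally the string 'Managed Identity':

```sql
-- Assumption: SQL Server 2025 with a managed identity already assigned
-- and granted access to the target storage account. No secret is stored.
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
    WITH IDENTITY = 'Managed Identity';

BACKUP DATABASE Sales
    TO URL = 'https://mystorage.blob.core.windows.net/backups/Sales.bak';
```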

Click through to see how it works. Importantly, this is a feature that requires additional payment.

Making a Power BI Matrix Visual Look Nicer

Valerie Junk pretties up a visual:

Many Power BI developers view tables and matrix visuals as the enemy. They dislike building them, and often think, “the user is just going to export this to Excel anyway.”

But here’s the thing: tables and matrix visuals have an important business case, and sometimes a well-structured table communicates data far better than any chart would.

There’s also something we don’t talk about enough: trust. BI developers often assume users trust our data, but that’s rarely true. Many users have been burned before by incorrect data or unreliable tools. Providing a matrix visual for row-by-row verification is a powerful way to rebuild trust.

That said, a matrix visual that looks like default Power BI formatting isn’t doing you any favors. 

And they’re probably going to export it to Excel anyhow. Them’s the breaks.

Shredding JSON into Rows and Columns via T-SQL

Jared Westover shreds a bit of JSON:

Most databases I see nowadays have at least one column that stores JSON objects as NVARCHAR(MAX). If you look hard enough, I bet you have one. How do you convert JSON objects with arrays into a structured format of columns and rows? Not long ago, a developer asked me that exact question. It’s an important question given how rampant JSON is as a data exchange format, especially for web APIs.

This is a primer on SQL Server’s JSON functionality, at least when it comes to turning JSON into standard tabular data. I think, on the whole, SQL Server does a pretty good job of that, at least as long as your JSON data ultimately fits a tabular format.
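As a quick taste of the OPENJSON approach (the sample data here is invented), the WITH clause maps JSON paths to typed columns, turning an array into ordinary rows:

```sql
DECLARE @json NVARCHAR(MAX) = N'{
  "orders": [
    { "id": 1, "customer": "Ada",   "total": 19.99 },
    { "id": 2, "customer": "Grace", "total":  4.50 }
  ]
}';

-- Shred the array at $.orders into one row per element,
-- with each property projected as a strongly typed column.
SELECT o.id, o.customer, o.total
FROM OPENJSON(@json, '$.orders')
     WITH (
         id       INT            '$.id',
         customer NVARCHAR(50)   '$.customer',
         total    DECIMAL(10, 2) '$.total'
     ) AS o;
```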

Sharing Git Hooks with Team Members

Justin Bird shares a file:

Git hooks are scripts that Git executes before or after specific events, such as committing code or pushing to a repository. They can be used to automate tasks, enforce coding standards, or prevent specific actions. However, by default, Git hooks are stored in the .git/hooks directory of each developer’s local repository and are therefore not shared. This means that if you want to use the same hooks across your team, you need to instruct each developer to set up hooks manually, which can lead to inconsistencies and drift. In this post, we will explore a method to include Git hooks in a repository so that they can be easily shared with your team.
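One common way to do this (a sketch, assuming Git 2.9 or later; the directory name is arbitrary, and this may not be the exact method Justin describes) is to keep the hooks in a tracked folder and point core.hooksPath at it:

```shell
# Demo setup: a throwaway repository (in a real project you are already
# inside one, so skip these two lines).
git init -q hooks-demo && cd hooks-demo

# Keep hooks in a tracked directory instead of the unshared .git/hooks.
mkdir -p .githooks
printf '#!/bin/sh\necho "pre-commit: running shared checks"\n' > .githooks/pre-commit
chmod +x .githooks/pre-commit

# Each developer (or a bootstrap script) runs this once per clone:
git config core.hooksPath .githooks
```

Because .githooks lives in the repository, the hook scripts travel with every clone; the only per-developer step left is the single git config command.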

Click through to see how, and for a simple example.
