
Category: Architecture

The Challenge of Many-to-Many Relationships in Power BI

Ben Richardson explains a common anti-pattern in Power BI semantic models:

Relationships sit at the heart of literally everything you do in Power BI.

Before you make measures, visuals and reports, relationships are established to define how your data fits together. Their job is simple on the surface – but vital: describe how each table is connected.

If you can design these relationships well, everything else will run much smoother.

Across any data domain, strong models rely on clear Grain, correct Cardinality, and a Star Schema built with well-defined Fact and Dimension tables.

Read on to understand how many-to-many relationships strain these fundamentals in Power BI, and different techniques for dealing with those sorts of relationships.
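The usual resolution is a bridge table, which turns one many-to-many relationship into two one-to-many relationships. A minimal pure-Python sketch of the idea (all table and column names here are invented for illustration, not from the article):

```python
# Invented example: customers and accounts are linked many-to-many.
customers = {1: "Ana", 2: "Bo"}
accounts = {10: 500, 11: 900}

# Bridge table: one row per (customer, account) pair. It converts the
# many-to-many link into two one-to-many relationships, the shape a
# star-schema model (and Power BI filter propagation) expects.
bridge = [(1, 10), (1, 11), (2, 11)]

def accounts_for(customer_name):
    """Filter flows customer -> bridge -> accounts, analogous to filter
    propagation across a bridge table in a Power BI model."""
    ids = {cid for cid, name in customers.items() if name == customer_name}
    return sorted(acct for cid, acct in bridge if cid in ids)

print(accounts_for("Ana"))  # [10, 11]
```

The same shape generalizes: the bridge carries no measures of its own, only the keys on each side.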


Defining Technical Debt

Louis Davidson takes on a favorite phrase of many an IT person:

Ah, the term “technical debt.” The implication of it is that you have this wonderful idea, this glorious design, and for time/money reasons, you said “we can’t achieve this.” I am not sure there has ever been a project that didn’t have technical debt. It happens in software, it happens in the real world. You probably have technical debt in your house, and huge companies like Disney make these glorious plans that never quite get finished.

Click through for a link to Louis’s video. As for my own definition of technical debt, I wrote a blog post about it a while back. As of this moment, the only part I might debate myself on is whether “It was a good decision at the time, but times have changed” is really technical debt or if it’s something else. From an ontological perspective, it’s probably a different category of thing. But from the standpoint of a practitioner with a code base or infrastructure as it is, I don’t know that it matters all that much whether we call it “technical debt” or “the ever-changing sands of time ruining all that is great.” Though definitely pull out the latter in a meeting when trying to explain to a PM why you need 40 hours of dev time to rewrite some code.


Thoughts on the Future of MySQL

Dave Stokes shares some thoughts:

 I am not intentionally trying to upset anyone with this blog post or minimize the efforts of many brilliant people whom I admire. However, I connected with several people over the 2025 holidays who all had the same question: What is the future of MySQL? At the upcoming FOSDEM conference, several events will discuss this subject and push a particular solution.  And in several ways, they are all wrong.

Oracle has not been improving the community edition for a long time now. They have laid off many of their top performers in the MySQL group. We got almost a good decade and a half out of Oracle’s stewardship of the “world’s most popular database”, and we should be thankful for that. However, now that time is over, it is time to consider future options that will involve no updates, CVEs, or innovation for what is the MySQL Community Edition.

Read on for a few possibilities, focusing on the open-source database market.


BIGINT Serial Columns in PostgreSQL

Elizabeth Christensen lays out an argument:

Lots of us started with a Postgres database that incremented with an id SERIAL PRIMARY KEY. This was the Postgres standard for many years for data columns that auto incremented. The SERIAL is a shorthand for an integer data type that is automatically incremented. However as your data grows in size, SERIALs and INTs can run the risk of an integer overflow as they get closer to 2 Billion uses.

We covered a lot of this in a blog post The Integer at the End of the Universe: Integer Overflow in Postgres a few years ago. Since that was published we’ve helped a number of customers with this problem and I wanted to refresh the ideas and include some troubleshooting steps that can be helpful. I also think that BIGINT is more cost effective than folks realize.

Click through for Elizabeth’s argument. I’d say that this is very similar for SQL Server, where I’m more inclined to create a BIGINT identity column, especially because I almost automatically apply page-level compression to tables so there’s not really a downside to doing this. Identity columns don’t have a domain, so there’s no domain-specific information like you’d get with a column such as Age; and with page-level compression, you’re not wasting space.
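The overflow risk is easy to put numbers on. A quick back-of-the-envelope sketch (the insert rate is an invented workload, not a figure from the post):

```python
INT_MAX = 2**31 - 1      # 4-byte signed integer ceiling (Postgres int / SQL Server INT)
BIGINT_MAX = 2**63 - 1   # 8-byte signed integer ceiling (bigint)

def years_of_headroom(rows_per_day, ceiling):
    """Rough years until an always-incrementing key exhausts its ceiling."""
    return ceiling / (rows_per_day * 365)

# Invented workload: 5 million inserts per day.
print(round(years_of_headroom(5_000_000, INT_MAX), 1))  # 1.2
print(years_of_headroom(5_000_000, BIGINT_MAX))         # billions of years
```

At that rate an INT key dies in about a year, while BIGINT is effectively unbounded, which is the practical core of the argument.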


In Support of Ugly Code

John Cook defends (some) ugly code:

Ugly code may be very valuable, depending on why it’s ugly. I’m not saying that it’s good for code to be ugly, but that code that is already ugly may be valuable.

That something is ugly is typically a visceral reaction. But I try to tease out why I think code is ugly, as it can be for several reasons.

  • It’s not formatted well or consistently. That’s an easy fix for the most part.
  • Naming is inconsistent or contradictory. Depending on the tooling, this is a reasonably easy fix.
  • The logic is convoluted to me. This is where things get tricky. Is it convoluted because I don’t understand what’s going on? Or is it convoluted because the person who developed or maintained it didn’t understand something important? If it’s the former, I try (“try” being the operative word here) to bite my tongue and dig in deeper to understand it better. But if it’s the latter, I think that’s fair game for refactoring.

Younger me was all about rewriting and removing nasty, inefficient, ugly code. But older me realizes that only some nasty, inefficient, ugly code is actually bad. I still will heartily argue that code is a liability and that most code bases could make do with a spring cleaning. But it has to come from a place of understanding first. I have a lot more on the topic of technical debt in an essay I wrote a few years ago. And I did purposefully cut myself off at one point to be cute.
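To make the "convoluted logic" case concrete, here is a hypothetical before-and-after (the pricing rules are invented): the refactor only changes shape, and it is only safe once the original behavior is pinned down by checks.

```python
# Hypothetical "convoluted" original: nested branches obscure the rule.
def discount_ugly(total, is_member, coupon):
    if total > 0:
        if is_member:
            if coupon:
                return total * 0.8
            else:
                return total * 0.9
        else:
            if coupon:
                return total * 0.95
            else:
                return total
    else:
        return 0

# Same behavior, restated so the pricing rule is legible: a guard clause,
# then stacked discounts instead of an enumeration of every branch.
def discount_clear(total, is_member, coupon):
    if total <= 0:
        return 0
    percent_off = 0
    if is_member:
        percent_off += 10
    if coupon:
        percent_off += 10 if is_member else 5
    return total * (100 - percent_off) / 100

# Pin down equivalence before trusting the rewrite.
assert all(
    abs(discount_ugly(t, m, c) - discount_clear(t, m, c)) < 1e-9
    for t in (0, 100) for m in (True, False) for c in (True, False)
)
```

The point is not that the second version is objectively prettier; it is that the rewrite came from understanding the rule, not from distaste for the formatting.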


Choosing a Vector Database

Joe Sack has some advice:

Vector search has become a standard approach for semantic search and RAG. Whether you’re evaluating a dedicated vector database, SQL Server 2025, a Postgres extension like pgvector, or an in-memory library, there are certain production realities worth planning for.

Admittedly, my vector database decision boiled down to “What can I actually get to work in my non-internet-connected on-premises environment where everything is locked down to the point that bringing in new software is a major hassle?” That quickly narrowed down the set of viable options.
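Whatever engine you pick, it is implementing (and at scale, approximating) the same baseline operation: nearest neighbors by similarity. A brute-force sketch with invented toy embeddings, useful as a mental model when comparing options:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Brute-force k-nearest-neighbor search -- the operation a vector
    database accelerates with indexes like HNSW or IVF."""
    ranked = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Invented toy embeddings.
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], corpus))  # ['doc_a', 'doc_b']
```

Brute force like this is actually fine at small scale, which is worth remembering before taking on a new dependency in a locked-down environment.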


Reverse Engineering a Physical Model Diagram with Redgate Data Modeler

Steve Jones gives the new Redgate acquisition a try:

I recently wrote about a logical diagram with Redgate Data Modeler. That was interesting, but creating all the objects is a pain. I decided to try creating a physical diagram from an existing database. This post looks at the experience.

Click through for Steve’s thoughts. I appreciate how he’s willing to call out the pain points that exist in the product today.


Common Star Schema Mistakes

Ben Richardson gets back to basics:

Sometimes the culprit isn’t actually your DAX, it’s your data model.

Star schema mistakes are incredibly common in Power BI, and really hard to track down.

When your data model isn’t a clean star schema, you end up with broken filters, confusing relationships and slow visuals.

It’s important to get it right from the start! So we broke down the top 10 most common mistakes people make, how to identify them and how to fix them!

This is where reviewing (or reading) Ralph Kimball’s The Data Warehouse Toolkit can save you a lot of time and stress. The Microsoft data analytics world is predicated so heavily on Kimball-style dimensional modeling that the choices tend to be building a proper star schema up-front or spending processing and developer time trying to fix it in post-production using DAX or trickery.
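The core shape Kimball prescribes fits in a few lines: facts hold the numbers and foreign keys, dimensions hold the descriptive labels you slice by. A toy sketch with invented tables:

```python
# Invented star schema: one fact table joined to one dimension on a surrogate key.
dim_product = {
    101: {"name": "Widget", "category": "Hardware"},
    102: {"name": "Gadget", "category": "Hardware"},
    103: {"name": "License", "category": "Software"},
}
fact_sales = [
    {"product_key": 101, "amount": 50},
    {"product_key": 102, "amount": 30},
    {"product_key": 103, "amount": 120},
]

# Measures aggregate the fact; the dimension supplies the grouping labels.
totals = {}
for row in fact_sales:
    category = dim_product[row["product_key"]]["category"]
    totals[category] = totals.get(category, 0) + row["amount"]

print(totals)  # {'Hardware': 80, 'Software': 120}
```

Most of the "top 10 mistakes" in models like this amount to blurring that fact/dimension split, so filters no longer flow one way from dimension to fact.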


Idempotence and Durable Execution

Jack Vanlightly does some thinking:

Determinism is a key concept to understand when writing code using durable execution frameworks such as Temporal, Restate, DBOS, and Resonate. If you read the docs you see that some parts of your code must be deterministic while other parts do not have to be.  This can be confusing to a developer new to these frameworks. 

This post explains why determinism is important and where it is needed and where it is not. Hopefully, you’ll have a better mental model that makes things less confusing.

Some of the examples Jack includes are pretty tricky, showing just how difficult it can be to ensure that multiple, independent systems are all on the same page.
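The trick these frameworks rely on can be sketched in miniature. This is a toy illustration of the journaling-and-replay idea, not any particular framework's API: nondeterministic steps are recorded on first execution, and on replay the recorded result is returned, so the surrounding workflow code behaves deterministically.

```python
import random

class Journal:
    """Records each nondeterministic step's result so a replay of the
    workflow sees the same values the original run saw."""
    def __init__(self):
        self.entries = []
        self.cursor = 0

    def step(self, fn):
        if self.cursor < len(self.entries):   # replaying: reuse recorded result
            result = self.entries[self.cursor]
        else:                                 # first run: execute and record
            result = fn()
            self.entries.append(result)
        self.cursor += 1
        return result

def workflow(journal):
    # The call itself is nondeterministic; the journal makes replay stable.
    order_id = journal.step(lambda: random.randint(1, 10**6))
    return f"order-{order_id}"

journal = Journal()
first = workflow(journal)   # executes the random call and records it
journal.cursor = 0          # simulate a crash followed by replay
assert workflow(journal) == first
```

This is why the code *between* journaled steps must be deterministic, while the steps themselves need not be: replay re-executes the in-between code and trusts the journal for everything else.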


Tips for Building a Data Warehouse

James Serra gets back to foundations:

I had a great question asked of me the other day and thought I would turn the answer into a blog post. The question is “I’m an experienced DBA in SQL Server/SQL DB, and my company is looking to build their first data warehouse using Microsoft Fabric. What are the best resources to learn how to do your first data warehouse project?”. So, below are my favorite books, videos, blogs, and learning modules to help answer that question:

Click through for James’s recommendations. I strongly agree with his advice to start with Ralph Kimball’s The Data Warehouse Toolkit, and frankly, I think a lot of James’s advice here is sound. The person asking focuses on Fabric, and there are plenty of Fabric-specific things to learn, but at the end of the day, modern data warehouses are still data warehouses.
