Category: Architecture

Centralized and Decentralized Data Architectures

James Serra looks at a pattern:

A centralized data architecture means the data from each domain/subject (i.e. payroll, operations, finance) is copied to one location (i.e. a data lake under one storage account), and that the data from the multiple domains/subjects are combined to create centralized data models and unified views. It also means centralized ownership of the data (usually IT). This is the approach used by a Data Fabric.

A decentralized distributed data architecture means the data from each domain is not copied but rather kept within the domain (each domain/subject has its own data lake under one storage account) and each domain has its own data models. It also means distributed ownership of the data, with each domain having its own owner.

So is decentralized better than centralized?

Read on for James’s answer, and allow me to include a Dilbert cartoon so old, the boss didn’t even have pointy hair yet.

[Dilbert cartoon: "How Decentralized Organizations Can be Effective," via The Fourth Revolution Blog]


Centralized Data Modeling via Power BI Templates

Haroon Ashraf aims to square the circle:

Data modeling is the way you can arrange and link your organizational data (typically in the form of tables) for reporting and analysis.

In other words, it is the strategy of linking tables with each other to get useful information by following the standard practices and domain knowledge of the organization.

Traditionally, it stands for implementing the star or snowflake schema from the perspective of the data warehouse BI solution.

What is Centralized Data Modeling?

Centralized data modeling means a generic data model consisting of some commonly used tables, relationships, and hierarchies that are shared across the organization. These elements are the starting point for Power BI report development for anyone eligible, interested, and capable of doing so.

With that in mind, read on to learn how you can use Power BI templates to bring this about. I joke about squaring the circle here because if you treat Power BI as a self-service business intelligence tool, the users may not be totally familiar with what you’re doing and could end up accidentally undermining your plans. That said, it’s a good approach to solving this common problem.
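To picture what such a shared model might contain, here's a hypothetical sketch in T-SQL; the table and column names are my own inventions for illustration, not from Haroon's article:

-- Conformed dimensions shared across the organization
CREATE TABLE dbo.DimDate
(
    DateKey int NOT NULL PRIMARY KEY,   -- e.g. 20210215
    CalendarDate date NOT NULL,
    CalendarYear int NOT NULL,
    CalendarMonth tinyint NOT NULL
);

CREATE TABLE dbo.DimCustomer
(
    CustomerKey int IDENTITY(1,1) PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);

-- A fact table wired to the shared dimensions
CREATE TABLE dbo.FactSales
(
    DateKey int NOT NULL REFERENCES dbo.DimDate (DateKey),
    CustomerKey int NOT NULL REFERENCES dbo.DimCustomer (CustomerKey),
    SalesAmount money NOT NULL
);

Packaged as a Power BI template (.pbit), a model like this arrives with its tables, relationships, and hierarchies pre-wired, so report authors start from a common base instead of re-modeling from scratch.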


Superkeys in SQL Server

Kevin Wilkie knows that not all keys wear capes:

Well, that’s because we sometimes need different ways to describe what we’ve got going on. Of the four different types of keys we’ve discussed so far, they are all different enough that we need to differentiate them and be able to explain what the differences are. For the rest of the keys that we’ll go through today, the same idea exists. They are close to the others but different enough that there is a need for another name for that type of key.

There is a super key.

Read on to learn what a superkey is. That will put you one quarter of the way to understanding Boyce-Codd Normal Form: a relational variable is in Boyce-Codd Normal Form if and only if all functional dependencies have superkey determinants.
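For a concrete (and entirely hypothetical) example: in the table below, {EmployeeID} and {NationalID} are candidate keys, and any column set containing either of them is a superkey:

CREATE TABLE dbo.Employee
(
    EmployeeID int NOT NULL PRIMARY KEY,   -- candidate key (minimal superkey)
    NationalID char(9) NOT NULL UNIQUE,    -- another candidate key
    LastName nvarchar(50) NOT NULL
);

-- {EmployeeID}, {NationalID}, and any superset of either, such as
-- {EmployeeID, LastName}, uniquely identify a row: all are superkeys.
-- Only the minimal ones are candidate keys.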


Delivering Data Insights using the Microsoft Data Platform

Paul Andrew has a talk:

Let’s start with a story, not a ‘once upon a time’ story, but a story for your backlog:

As a solution architect
I need to design and build an Azure data analytics platform end to end
to deliver data insights for my customer.

In February 2021 I delivered a talk as part of the Scottish Summit conference on how you could/should build an end to end data platform solution in Azure to deliver data insights and analytics. This is one of my favourite sessions, so I thought it worth re-sharing the recording here.

Click through for the abstract as well as the video.


Soft Deletes in SQL Server

Erik Darling has some thoughts on soft deletes:

Implementing soft deletes for an app that’s been around for a while can be tough. In the same way as implementing Partitioning can be tough to add in later to get data management value from (rebuilding clustered indexes on the scheme, making sure all nonclustered indexes are aligned, and all future indexes are too, and making sure you have sufficient partitions at the beginning and end for data movement).

Read the whole thing. Incidentally, this also ties well into a recent post by Erik about deleting into a different table. It can be easier to implement soft deletes as deleting from the current table and adding to an archive table. That gives you the benefits of keeping deleted data while not running into some of the problems Erik mentions. And if you want to undo a deletion? Delete from the archive table and insert back into the main table.
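A minimal sketch of that pattern, with placeholder table names rather than anything from Erik's post, might use the OUTPUT clause so the move happens in one atomic statement:

DECLARE @OrderID int = 42;   -- hypothetical row to soft delete

-- "Delete" by moving the row into an archive table
DELETE dbo.Orders
OUTPUT
    DELETED.OrderID,
    DELETED.CustomerID,
    DELETED.OrderDate,
    SYSUTCDATETIME()          -- record when the soft delete happened
INTO dbo.Orders_Archive (OrderID, CustomerID, OrderDate, DeletedAtUtc)
WHERE OrderID = @OrderID;

-- Undoing the deletion is the same trick in reverse
DELETE dbo.Orders_Archive
OUTPUT DELETED.OrderID, DELETED.CustomerID, DELETED.OrderDate
INTO dbo.Orders (OrderID, CustomerID, OrderDate)
WHERE OrderID = @OrderID;

One caveat: OUTPUT INTO has restrictions (the target table can't have enabled triggers or participate in a foreign key), so whether this works as-is depends on your schema.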


When to Use Event Sourcing

Vikas Hazrati takes us through the pros and cons of using event sourcing for a project:

You would always get a ton of literature on Event Sourcing and CQRS. The key question is WHEN do you use it? Under what circumstances? Is your problem really in need of ES?

I would not go into the details of what Event Sourcing and CQRS is. The industry stalwarts have covered that in adequate detail. This post delves into battle-tested scenarios on where we should have used and otherwise ignored ES.

Click through for an analysis of pros and cons, as well as some advice on what it all means.
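To make the core idea concrete, here's my own minimal sketch (not from Vikas's post) of event sourcing in T-SQL: state changes are appended as immutable events, and current state is derived by replaying them.

-- Append-only event store: rows are inserted, never updated
CREATE TABLE dbo.AccountEvents
(
    EventID bigint IDENTITY(1,1) PRIMARY KEY,
    AccountID int NOT NULL,
    EventType varchar(20) NOT NULL,     -- 'Deposited' or 'Withdrew'
    Amount money NOT NULL,
    OccurredAtUtc datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

INSERT dbo.AccountEvents (AccountID, EventType, Amount)
VALUES (1, 'Deposited', 100), (1, 'Withdrew', 30), (1, 'Deposited', 50);

-- Current balance is a fold over the event history, not a stored column
SELECT AccountID,
       SUM(CASE EventType WHEN 'Deposited' THEN Amount ELSE -Amount END) AS Balance
FROM dbo.AccountEvents
GROUP BY AccountID;   -- returns 120 for account 1

The full history stays queryable for auditing and replay, which is the upside; the extra moving parts (projections, versioning, snapshots) are the costs to weigh against it.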


Moving Away from the Lambda Architecture

Xiang Zhang and Jingyu Zhu talk about migrating a project away from the Lambda architecture:

The Lambda architecture has become a popular architectural style that promises both speed and accuracy in data processing by using a hybrid approach of both batch processing and stream processing methods. But it also has some drawbacks, such as complexity and additional development/operational overheads. One of our features for Premium members on LinkedIn, Who Viewed Your Profile (WVYP), relied on a Lambda architecture for some time. The backend system supporting this feature had gone through a few architectural iterations in the past years: it started as a Kafka client processing a single Kafka topic, and eventually evolved to a Lambda architecture with more complicated processing logic. However, in an effort to pursue faster product iteration and lower operational overheads, we recently underwent a transition to make it Lambda-less. In this blog post, we’ll share some of the lessons learned in operating this system in the Lambda architecture, the decisions made in transitioning to Lambda-less, and the shifts necessary to undergo this transition.

When Lambda was first proposed back in 2015, it was intended as a compromise architecture trying to solve several important problems with the tools available in 2015 (well, 2013 and 2014—it was in a book, after all). I could definitely see the architecture fall into disuse within the next decade, not because it was at all bad, but because the world around it changed to the point that there is a better compromise available.


Durable Keys in Type 2 Dimensions

Martin Schoombee takes us through the idea of durable keys:

Also called an immutable or persisted key (I like durable better), a durable key is nothing more than a surrogate key (i.e. integer value or nonsensical number) used to identify a dimension member (company, employee, etc.) uniquely in a type-2 dimension. Confusing enough? It’s easier to explain with an example…

When I read Martin’s post, I kind of got it but said to myself, “How would I run this type of query more efficiently?” The thing that wasn’t clicking came from another article on the topic: you add the durable key to the fact as well as the current key. That way, you can join back to the Company dimension on CompanyKey if you want to get the company data as of the fact date, or you can join on DurableCompanyKey (and CurrentRecord = 1) to get the latest company data regardless of the fact date. Now that this is clear, I like the strategy a lot.
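In T-SQL terms, the two join paths look roughly like this (schema and column names are illustrative, following the commentary above):

-- As-of-the-fact-date attributes: join on the type-2 surrogate key
SELECT f.SalesAmount, c.CompanyName
FROM dbo.FactSales AS f
    INNER JOIN dbo.DimCompany AS c
        ON c.CompanyKey = f.CompanyKey;

-- Latest attributes regardless of fact date: join on the durable key
-- and filter to the current version of each company
SELECT f.SalesAmount, c.CompanyName
FROM dbo.FactSales AS f
    INNER JOIN dbo.DimCompany AS c
        ON c.DurableCompanyKey = f.DurableCompanyKey
        AND c.CurrentRecord = 1;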
