Press "Enter" to skip to content

Category: Tools

Reverse Engineering a Physical Model Diagram with Redgate Data Modeler

Steve Jones gives the new Redgate acquisition a try:

I recently wrote about a logical diagram with Redgate Data Modeler. That was interesting, but creating all the objects is a pain. I decided to try creating a physical diagram from an existing database. This post looks at the experience.

Click through for Steve’s thoughts. I appreciate how he’s willing to call out the pain points that exist in the product today.
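For context, reverse-engineering tools of this sort generally work from the database's own catalog views. As a rough sketch (my illustration, not how Redgate Data Modeler is actually implemented), the raw material for a physical diagram looks something like this:

```sql
-- Tables plus the foreign keys that relate them: the raw material
-- for a physical ER diagram (illustrative only)
SELECT
    sp.name AS parent_schema,
    tp.name AS parent_table,
    sr.name AS referenced_schema,
    tr.name AS referenced_table,
    fk.name AS foreign_key_name
FROM sys.foreign_keys AS fk
    JOIN sys.tables  AS tp ON fk.parent_object_id = tp.object_id
    JOIN sys.schemas AS sp ON tp.schema_id = sp.schema_id
    JOIN sys.tables  AS tr ON fk.referenced_object_id = tr.object_id
    JOIN sys.schemas AS sr ON tr.schema_id = sr.schema_id
ORDER BY parent_schema, parent_table;
```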

Running SQL Server on KubeVirt

Andrew Pruski builds a virtual machine:

With all the changes that have happened with VMware since the Broadcom acquisition, I have been asked more and more about alternatives for running SQL Server.

One of the options that has repeatedly cropped up is KubeVirt.

KubeVirt provides the ability to run virtual machines in Kubernetes…so essentially could provide an option to “lift and shift” VMs from VMware to a Kubernetes cluster.

Read on to learn a bit more about KubeVirt, including how to set up a Windows-based virtual machine with it. Andrew does document some performance woes, so working out the why behind those would be a big concern.

Upgrading to SQL Server 2025

John Deardurff checks out a tool built into SSMS 22:

Starting with SQL Server Management Studio (SSMS) 22, the Hybrid & Migration Component delivers a streamlined experience for upgrade assessment and side-by-side migration. This replaces the Data Migration Assistant (DMA) that retired on July 16, 2025, consolidating assessment and migration into one tool. So what are the key capabilities:

Click through for those capabilities and a few tips on how to use it. I’m not sure how clean the upgrade process to 2025 is versus a standalone installation. I’d imagine that, if you’re not using something like ML Services, it’s probably fine.
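One related check worth doing on any upgrade path: migrated databases keep their old compatibility level until you raise it yourself. A minimal sketch (the database name is a placeholder):

```sql
-- Databases retain their prior compatibility level after an upgrade,
-- so review it and raise it deliberately once testing passes.
SELECT name, compatibility_level
FROM sys.databases;

-- YourDatabase is a placeholder; 170 is the SQL Server 2025 level
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 170;
```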

Thoughts on Data Modeling

Steve Jones has a two-fer. First up, he asks an opinion question about data modeling:

Recently, I had a few questions on database modeling. One was posted in the SQL Server Central forums, and a customer asked about ERD tooling on the same day. This came shortly after Redgate acquired Vertabelo (now Redgate Data Modeler). This stood out to me as very rarely in the last few years have I found people consulting and updating a diagram while performing database development.

Second, he takes a peek at a tool Redgate purchased:

Redgate acquired a data modeling tool from Vertabelo recently and I wanted to explore how it works. This is a short look at this tool and how it might be useful in working with databases.

My experience with data modeling has been that only the really large companies did a lot of work with upfront data modeling and keeping logical models up to date. It’s still quite useful for data warehouses, and that’s where the people I know who do a lot of data modeling make their living. But I find it’s too much of a hassle in fast-paced environments, especially when I can keep most or all of the data model in my head and I’m the person managing it all.

Essentially, data models are useful to the extent that they’re approximately true. But because they get out of sync with reality, they quickly go from “quite useful” to “dirty lies.”

Running SQL Server in a Local Container via VSCode

Eduardo Pivaral uses the MSSQL extension in Visual Studio Code:

You are a developer using SQL Server for your applications, and you need to quickly set up a local development environment. How can you make sure the environment is OS agnostic, so it can run on any operating system? Let’s see how we can quickly create a local container to run SQL Server using the VSCode MSSQL Extension.

Read on for the instructions. I still do the old-fashioned thing of opening up a terminal window and running docker commands, but this is pretty convenient.
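Whichever route you take, once the container is up, a quick query from any client (the MSSQL extension itself, sqlcmd, SSMS) confirms you got the engine you expected. A minimal sanity check:

```sql
-- Confirm the containerized instance is alive and what edition it is
SELECT @@VERSION                      AS version_string,
       SERVERPROPERTY('Edition')      AS edition,
       SERVERPROPERTY('ProductLevel') AS product_level;
```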

Running Data API Builder in an Azure Container Instance

Jess Pomfret deploys an API:

This is post two in my series about Data API Builder (dab). The first post, Data API Builder, covers what dab is and how to test it locally against SQL Server running in a container. This was great for testing, but now we want to start to productionise this, and the first step is to get it running somewhere other than my laptop.

There are several deployment options available; I recommend you review the Microsoft docs here: Deployment guidance for Data API builder.

ACI wouldn’t necessarily be my first choice for, well, much of anything. However, it is cheap and easy, so it has that going for it.

Generating an Entity Diagram in a Fabric Eventhouse

Guy Reginiano announces a new tool:

As your KQL database grows, tables gather data from several Eventstreams, functions connect different tables, update policies move and transform data, and materialized views quietly keep aggregated data up to date – all working together behind the scenes.

It’s powerful, but it can also be hard to see the full picture. 

That’s exactly why we built the Entity Diagram – to give you a simple, visual way to explore how everything in your database connects.

Click through to see how it works.

“The Parameter is Incorrect” with Copy-DBACredential

Jack Corbett diagnoses a problem:

I was working with a client to do an upgrade/migration from SQL Server 2016 to SQL Server 2022, and this client assigns non-default ports to SQL Server.  As part of the process, I had to create a credential on one node and needed it on the other node, so I went to the handy Copy-DBACredential dbatools cmdlet, but it didn’t work.

Read on for the troubleshooting process and the ultimate issue.

Simple Data Quality Validation with T-SQL

Kenneth Omorodion builds a validation process:

As the need for and use of data grows within any organization, there is a corresponding rise in the need for data quality validation. Most organizations have large stores of data, but most of it is not managed efficiently in terms of data quality assurance, leading to inaccurate insights for the business, which in turn leads to distrust in the data.

Organizations have now, more than ever, realized the importance of an efficient data quality process as part of their Business Intelligence and Analytics processes. The issue is, how can they implement data quality for their data? Larger and more data-centric organizations might use pre-built data management and validation tools like Microsoft Purview or Master Data Management tools like Informatica, Talend, SAP, and Stibo Systems. But organizations that cannot commit to subscribing to pre-built options, or that operate primarily in on-premises environments, might want to build one themselves; that’s where this article comes in.

I’m of two minds about this. On the one hand, I appreciate the effort that Kenneth put into this and expect that it would work reasonably well. On the other hand, I look at what it can do and say “Yes, but if you just use constraints like you should, then you don’t need this process.” It’s basically a very asynchronous way of applying check constraints, foreign key constraints, and specifying that an attribute is NOT NULL.

If there’s some reason why applying these constraints is impossible—such as receiving this data as an extract from some poorly-designed system—then this can do a good job of describing the failures of the upstream system. But this is where data architects need to get it right up-front.
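To make the contrast concrete, here is roughly what “just use constraints” looks like, with the same three checks the article implements asynchronously expressed declaratively instead (table and column names are made up for illustration):

```sql
-- Declarative equivalents of NOT NULL, foreign key, and check
-- validation (dbo.Orders and dbo.Customers are illustrative names)
ALTER TABLE dbo.Orders
    ALTER COLUMN CustomerID int NOT NULL;   -- no missing keys

ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

ALTER TABLE dbo.Orders ADD CONSTRAINT CK_Orders_Quantity
    CHECK (Quantity > 0);                   -- domain rule
```

The difference is that constraints reject bad rows at write time, while an asynchronous process can only report on them after they have already landed.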
