Press "Enter" to skip to content

Author: Kevin Feasel

Getting ML Services Running on SQL Server 2025

Greg Low takes a look at ML Services:

This is an update of a post that I wrote for SQL Server 2022. Unfortunately, those instructions needed to be updated, not because anything notable has changed in SQL Server 2025, but because the recent distribution of Python has changed. Thanks to Peter Bishop for reporting what was now missing.

I hope that the versions Greg mentions—R 4.2 and Python 3.10—aren’t the latest that SQL Server supports, because those are both woefully out of date. Python 3.10 came out almost 4 years ago and R 4.2 is almost 3 years old at this point.
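Once the setup steps in Greg's post are done, a quick smoke test is the easiest way to confirm everything is wired up. This is a minimal sketch of my own rather than anything from Greg's post; it assumes the Machine Learning Services feature is installed and the SQL Server Launchpad service is running:

    -- Allow external scripts (requires ALTER SETTINGS permission).
    EXEC sp_configure 'external scripts enabled', 1;
    RECONFIGURE;
    GO

    -- Run a trivial Python script to confirm the runtime works and
    -- report which interpreter version SQL Server is actually using.
    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
    import sys
    print(sys.version)
    ';
    GO

The same call with @language = N'R' and a script of print(R.version.string) confirms the R side.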


Upgrading to SQL Server 2025

John Deardurff checks out a tool built into SSMS 22:

Starting with SQL Server Management Studio (SSMS) 22, the Hybrid & Migration Component delivers a streamlined experience for upgrade assessment and side-by-side migration. This replaces the Data Migration Assistant (DMA) that retired on July 16, 2025, consolidating assessment and migration into one tool. So what are the key capabilities:

Click through for those capabilities and a few tips on how to use it. I’m not sure how clean the upgrade process to 2025 is versus a standalone installation. I’d imagine that, if you’re not using something like ML Services, it’s probably fine.


Using the Microsoft Fabric Copy Job with Data in Dataverse

Laura Graham-Brown loads some data:

Dataverse is the data store behind parts of Dynamics and lots of Power Platform projects, so it can contain vital business data that will be needed for reporting. In this post we are going to look at one method: using a copy job with Dataverse to copy data across into Microsoft Fabric.

Click through to see how, including incremental data loads.


An Overview of Fabric IQ

Brian Bonk talks ontologies:

If you followed along with the announcements from Microsoft Ignite, you might have stumbled upon the new Fabric IQ service.

For many people, it can be hard to see the point of this new service, so in this blog post I will try to help you understand the usage and business value of it.

Ontologies aren’t new (building one is mostly a metadata management exercise), but there are several companies (like Palantir) pushing this hard in their tools, and Microsoft is working that market segment. But instead of using all of this metadata management for data quality or master data management purposes, it’s for feeding into language models.


Calendar-Based Time Intelligence and DirectQuery Performance

Chris Webb hits the Turbo button on his PC:

Calendar-based time intelligence (see here for the announcement and here for Marco and Alberto’s more in-depth article) is at least the second-most exciting thing to happen in DAX in the last few months: it makes many types of time intelligence calculation much easier to implement. But as far as I know only Reid Havens, in this video, has mentioned the performance impact of using this new feature and that was for Import mode. So I wondered: do these benefits also apply to DirectQuery mode? The answer is on balance yes but it’s not clear-cut.

Click through to see what Chris found.


Thoughts on CPU Optimization and SQL Server Licensing

Brendan McCaffrey cuts some cores:

Minimizing CPU core counts is a perfect example of how to add value, and is arguably one of the easiest ways to do so.

I run this exercise in my environments about every six months, typically right before true-up time and again at mid-year, just to make sure we haven’t drifted too far.

Read on to see what Brendan does.
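This isn’t Brendan’s script, but as a rough starting point for the same exercise, one way to see how many logical processors SQL Server can see versus how many schedulers it actually has online (affinity settings or edition core limits can make the two differ) is:

    SELECT cpu_count,          -- logical processors the OS exposes
           hyperthread_ratio,
           socket_count
    FROM sys.dm_os_sys_info;

    SELECT COUNT(*) AS visible_online_schedulers
    FROM sys.dm_os_schedulers
    WHERE status = N'VISIBLE ONLINE';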

This next bit is weird for me to write because I’ve always been an Enterprise Edition snob. But another tip that I have is to take a very serious look at Standard Edition. If you’re using SQL Server 2025, you can have up to 32 cores and 256 GB of RAM in your buffer pool. Taking a look at the available features, losing online index (re)builds and superior availability groups sucks, but it’s not the end of the world for most shops. If your databases are large enough to really benefit from online index rebuilds, read-ahead scans, merry-go-round scans, batch mode on rowstore, and the like (generally, data warehouses or large OLTP instances with heavy read workloads), then Enterprise may still be worth it. But the cost, in terms of lost functionality, has decreased considerably over the past decade.
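If you do think about moving down to Standard Edition, one rough check (my own sketch, not anything from Brendan’s post) is sys.dm_db_persisted_sku_features, which lists edition-restricted features persisted in a database. It won’t flag operational features like online index rebuilds, but an empty result is a good sign:

    -- Run in each user database you're thinking about moving.
    SELECT SERVERPROPERTY('Edition') AS current_edition;

    SELECT feature_name, feature_id
    FROM sys.dm_db_persisted_sku_features;
    -- No rows returned means nothing persisted in this database
    -- requires Enterprise Edition.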


OPENROWSET and External Tables in Fabric SQL Databases

Hugo Queiroz makes a connection:

Data Virtualization brings to Fabric SQL Database the same set of capabilities already available on Azure SQL Database, Azure SQL Managed Instance, and SQL Server: customers can now use OPENROWSET and External Tables, with complete parity across SQL flavors, so you can develop once and deploy anywhere. Data Virtualization for Fabric SQL Databases directly supports Parquet and delimited text (CSV), but JSON files can also be read using functions like JSON_VALUE and OPENJSON.

This is currently in preview. Read on to see what’s in the preview.
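As a rough illustration of the syntax described in the post (the storage URL below is a placeholder, and depending on your setup you may also need an external data source or credential), reading a folder of Parquet files looks something like this:

    -- Query Parquet files directly from storage; the URL here is a placeholder.
    SELECT TOP (10) *
    FROM OPENROWSET(
        BULK 'https://mystorageaccount.blob.core.windows.net/sales/orders/*.parquet',
        FORMAT = 'PARQUET'
    ) AS orders;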


The Basics of Framing in Window Functions

Jared Westover wants a range:

In this article, we’ll explore the concept of framing in window functions. We’ll compare the differences between the ROWS and RANGE clauses and discuss when to choose one over the other. We’ll also highlight common pitfalls of framing and whether it applies to all types of window functions. By the end, you’ll better understand how framing works with window functions, making it seem less complex.

Click through for a primer on frames in window functions. Admittedly, if I were writing this article, I’d toss out most of the “pitfalls” section, as pitfalls 2 and 3 aren’t particularly relevant or pitfall-y (because SQL Server always defines a default frame on a window function if you don’t specify one). Instead, I’d note that SQL Server has some annoying limitations on RANGE frames: the ANSI SQL standard allows interval-based frames over dates and times, so you can ask for records ranging from three hours ago to right now, for example, but SQL Server doesn’t support that.
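To make that concrete, here’s a small sketch of my own (hypothetical table and columns) showing how the default RANGE frame lumps ties together while an explicit ROWS frame does not, and noting what SQL Server won’t let you do with RANGE:

    SELECT
        OrderDate,
        Amount,
        SUM(Amount) OVER (ORDER BY OrderDate
                          ROWS  BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_rows,
        -- With ORDER BY and no frame clause, this RANGE frame is what you get by default.
        SUM(Amount) OVER (ORDER BY OrderDate
                          RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_range
    FROM (VALUES
        ('2025-01-01', 10.0),
        ('2025-01-01', 20.0),   -- tied OrderDate: RANGE totals both rows at once (30, 30)
        ('2025-01-02', 30.0)
    ) AS o (OrderDate, Amount);
    -- SQL Server only allows UNBOUNDED PRECEDING and CURRENT ROW with RANGE;
    -- ANSI-style interval frames (e.g. three hours preceding) aren't supported.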

But that said, it’s a good overview if you’re fairly new to window functions.


Common Star Schema Mistakes

Ben Richardson gets back to basics:

Sometimes the culprit isn’t actually your DAX, it’s your data model.

Star schema mistakes are incredibly common in Power BI, and really hard to track down.

When your data model isn’t a clean star schema, you end up with broken filters, confusing relationships and slow visuals.

It’s important to get it right from the start! So we broke down the top 10 most common mistakes people make, how to identify them and how to fix them!

This is where reviewing (or reading) Ralph Kimball’s Data Warehouse Toolkit can save you a lot of time and stress. The Microsoft data analytics world is predicated so heavily on Kimball-style dimensional modeling that the choices tend to be building a proper star schema up-front or spending processing and developer time trying to fix it in post-production using DAX or trickery.
