Press "Enter" to skip to content

Category: T-SQL

Migrating Azure Data Studio SQL Notebooks to VS Code Polyglot Notebooks

Haroon Ashraf gives us a somewhat unwieldy process:

As a SQL/BI developer, I want to run and store my SQL scripts and documentation efficiently in a Notebook as an alternative to using Azure Data Studio SQL Notebooks since Azure Data Studio is retiring soon. Read on to learn more about Visual Studio Code Polyglot Notebooks.

I liked the simplicity of having a SQL kernel in Azure Data Studio. Haroon shows how to work around its absence and get to roughly the same spot, but I do hope the SQL Server tools team is able to migrate that SQL kernel over to VS Code prior to Azure Data Studio’s ultimate demise.
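
To give a sense of the workaround, here’s a minimal sketch of wiring a Polyglot Notebook up to SQL Server through the .NET Interactive SQL extension. The database name and connection string are placeholders, the first two lines go in their own C# cell, and you should check the Polyglot Notebooks documentation for the exact package and connect syntax.

#r "nuget: Microsoft.DotNet.Interactive.SqlServer, *-*"

#!connect mssql --kernel-name MyDatabase "Server=localhost;Database=MyDatabase;Integrated Security=true;TrustServerCertificate=true"

#!sql-MyDatabase
-- Once connected, cells targeting the new kernel run plain T-SQL.
SELECT TOP (10) name, create_date
FROM sys.tables
ORDER BY create_date DESC;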

Materializing Lake Views in Microsoft Fabric

Sairam Yeturi reduces ETL and ELT requirements:

Organizations often face challenges when trying to scale analytics across large volumes of data stored in centralized SQL databases. As business teams demand faster, more tailored insights, traditional reporting pipelines can become bottlenecks. By adopting Lakehouse architecture with Microsoft Fabric, business groups can mirror their SQL data into OneLake and organize it using the Medallion architecture—Bronze, Silver, and Gold layers. Materialized lake views play a crucial role in this setup, enabling automated, declarative transformations that clean and enrich data in the Silver layer. This empowers teams to build reliable dashboards and AI-driven insights on top of curated data, all while maintaining performance, governance, and security at scale.

In this post, we will cover how enterprises can use materialized lake views to streamline data orchestration and enhance data quality and monitoring across the Silver and Gold layers, while mirroring their SQL DB tables to Fabric in the Bronze layer.

The best use case for this is a scenario in which your underlying data is already essentially in a star schema or at least easily transformable into one, and you have no interest in modifying the data in the view directly. Do read the limitations before digging in, though, as there are some big ones.
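
For flavor, here is roughly what one of those declarative definitions looks like in Spark SQL, with hypothetical schema and table names; check the Fabric documentation for the exact syntax and refresh options.

-- A Silver-layer view defined over a mirrored Bronze table; Fabric handles
-- the refresh as part of the lakehouse's scheduled materialized lake view runs.
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.customers_clean
AS
SELECT
    customer_id,
    TRIM(UPPER(customer_name)) AS customer_name,
    country,
    CAST(created_at AS DATE)   AS created_date
FROM bronze.customers
WHERE customer_id IS NOT NULL;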

Tracking Time Series Rates of Change in SQL Server

Rick Dobson wants a measure of variation:

This tip presents a brief introduction to Common Table Expressions (CTE), along with a few references for those seeking additional details on CTEs beyond those described and demonstrated here. We will examine CTEs that are defined by either one or two SELECT statements. Additionally, we will provide a demonstration of a recursive CTE. All the examples illustrate how to process time series datasets with CTEs.

Click through for the tip.
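
To give a flavor of the approach (with hypothetical table and column names, not Rick’s dataset), a single-SELECT CTE plus LAG() is enough for a day-over-day rate of change:

WITH price_with_prior AS
(
    SELECT
        trade_date,
        close_price,
        LAG(close_price) OVER (ORDER BY trade_date) AS prior_close
    FROM dbo.DailyPrices
)
SELECT
    trade_date,
    close_price,
    close_price - prior_close AS change_amount,
    (close_price - prior_close) / NULLIF(prior_close, 0) AS rate_of_change
FROM price_with_prior
ORDER BY trade_date;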

Row and Range Frames in Window Functions and Batch Mode

Erik Darling covers how your window frame (that is, ROWS or RANGE in the window function definition) can affect batch mode.

Erik looks at a classic performance difference between ROWS and RANGE, as well as what batch mode does to even the score. This is particularly nice because ROWS and RANGE both have their utility and focusing on one versus the other for performance differences can lead to awkward development practices to get around a window spool.

Erik also focuses primarily on batch mode on rowstore, so keep in mind its minimum requirements: at least one table in the query with 131,072 (or 2^17) rows, at least one operator that benefits from batch mode (which the window function satisfies), at least one input to that operator with at least 2^17 rows, and a batch mode cost estimate that is lower than the row mode cost.
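
For context, here is a sketch against a hypothetical dbo.Sales table (not Erik’s demo) showing how little the query text differs between the two frames, which is part of why the row mode performance gap catches people out:

SELECT
    CustomerID,
    OrderDate,
    Amount,
    -- No frame specified: defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW,
    -- which in row mode typically means the slower, on-disk window spool.
    SUM(Amount) OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS running_total_range,
    -- Explicit ROWS frame: eligible for the cheaper in-memory window spool in row mode.
    SUM(Amount) OVER (PARTITION BY CustomerID ORDER BY OrderDate
                      ROWS UNBOUNDED PRECEDING) AS running_total_rows
FROM dbo.Sales;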

Pattern Matching with REGEXP_LIKE() in SQL Server 2025

Koen Verbeeck writes a regular expression:

I need to do some data validation in our SQL Server database. However, the validation rules are too complex for the T-SQL LIKE function, and I can’t seem to get it done either with PATINDEX or something similar. I’d like to use regular expressions as they’re more powerful. SQL Server 2025 now has a regex function, REGEXP_LIKE, to use regular expressions.

Read on for some examples, advice on validating e-mail addresses, and more.
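
As a minimal illustration (hypothetical table, and a deliberately simplified e-mail pattern rather than a production-ready one), REGEXP_LIKE() slots into a WHERE clause much like LIKE does:

-- Rows whose EmailAddress loosely matches name@domain.tld
SELECT ContactID, EmailAddress
FROM dbo.Contacts
WHERE REGEXP_LIKE(EmailAddress, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');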

Multi- and Single-Line Regular Expression Processing in SQL Server

Louis Davidson continues a series on regular expressions in SQL Server:

There are currently only 4 flags that SQL Server supports and they are used to change some of the fundamental ways that the expressions are applied. These flags are:

i – insensitive
c – case sensitive
m – ^ and $ match end of line, not entire string
s – single line, dot matches newline

In Part 6, I covered i and c, no[w] let’s do m and s. These flags are not ones I expect to use all that often, but they are definitely useful to know.

Read on to see how they work, as well as some of the issues Louis ran into along the way.
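
As a quick sketch of the m flag in action (assuming SQL Server 2025’s optional flags argument on REGEXP_LIKE()), the anchors switch from matching the whole string to matching each embedded line:

DECLARE @value nvarchar(100) = N'alpha' + NCHAR(10) + N'beta';

SELECT
    -- Without m, ^ and $ anchor the entire string, so '^beta$' does not match.
    CASE WHEN REGEXP_LIKE(@value, N'^beta$') THEN 1 ELSE 0 END AS without_m_flag,
    -- With m, the anchors apply per line, so the second line matches.
    CASE WHEN REGEXP_LIKE(@value, N'^beta$', N'm') THEN 1 ELSE 0 END AS with_m_flag;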

Rolling Filtered Indexes in SQL Server

Aaron Bertrand only needs recent data:

I recently resolved an issue where a query pulling data from the last 30 days would time out due to the table’s size and the lack of a supporting index. Creating a supporting index is possible, but not ideal; it will be very large and may not be useful for most queries and use cases. I wonder how I could implement a filtered index that follows time and is always limited to the last n days.

Read on to see how Aaron did this. It’s a clever solution, and Aaron does make clear the tricky part of filtered indexes: the filter has to be an explicit, literal condition.
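
The general shape of the approach, sketched with hypothetical table and index names rather than Aaron’s code: because the filter can’t reference GETDATE(), a scheduled job rebuilds the index with a freshly computed literal.

-- Recompute the cutoff and rebuild the filtered index on a schedule.
DECLARE @cutoff date = DATEADD(DAY, -30, CAST(GETDATE() AS date));
DECLARE @sql nvarchar(max) = N'
CREATE INDEX IX_Orders_Recent
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue)
    WHERE OrderDate >= ''' + CONVERT(nchar(10), @cutoff, 23) + N'''
    WITH (DROP_EXISTING = ON);';  -- assumes the index already exists; create it once without DROP_EXISTING

EXEC sys.sp_executesql @sql;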

Flags in SQL Server Regular Expression Functionality

Louis Davidson continues a series on regular expressions:

In this week’s sixth entry of my learning RegEx series, I am going to do two last intro entries for a while, this one on case sensitivity, and another on multi and single line searches. After this I will move into all of the functions that are available in SQL Server 2025 and Azure SQL (and I will come back if I learn any additional things that we need to cover either right after that, or anytime I learn something new I want to share about RegEx).

Read on to see which flags SQL Server currently supports. Of those, Louis tries out a pair.
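
For a taste of the pair Louis tries (again assuming SQL Server 2025’s optional flags argument on REGEXP_LIKE()), i makes the match case-insensitive and c forces case sensitivity:

SELECT
    CASE WHEN REGEXP_LIKE(N'Curated SQL', N'^curated', N'i') THEN 1 ELSE 0 END AS insensitive_match, -- 1
    CASE WHEN REGEXP_LIKE(N'Curated SQL', N'^curated', N'c') THEN 1 ELSE 0 END AS sensitive_match;   -- 0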

Making Leading Wildcard Searches Faster

Brent Ozar flips everything around:

99.9% of you are never gonna need this.

But let’s say you need to run queries with leading (not trailing) wildcards, like this search for all the different national versions of the Encabulator, each of which has different prefixes depending on which government it’s being sold to:

This is indeed a pretty uncommon scenario. I’m pretty sure I’ve only ever needed to do this once. Well, twice, but in one case I couldn’t actually use the REVERSE() function because the column was itself defined by an awful non-deterministic function, so this solution wouldn’t work.
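
The core of the trick, sketched against a hypothetical dbo.Products table rather than Brent’s demo schema: persist the reversed string, index it, and the leading wildcard becomes a seekable trailing wildcard.

ALTER TABLE dbo.Products
    ADD ProductNameReversed AS REVERSE(ProductName) PERSISTED;

CREATE INDEX IX_Products_NameReversed
    ON dbo.Products (ProductNameReversed);

-- "Ends with Encabulator" becomes an index seek instead of a full scan.
SELECT ProductID, ProductName
FROM dbo.Products
WHERE ProductNameReversed LIKE REVERSE(N'Encabulator') + N'%';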
