
Author: Kevin Feasel

Dropping Objects in SQL Server and Snowflake

Kevin Wilkie gets the drop on us:

When you’re working between SQL Server and Snowflake, there can be a lot of crossover that may make you forget what system you’re working in. Sometimes it’s close, but not close enough.

Today, let’s go over something that should be rather simple – removing old objects that we shouldn’t need any longer.

Read on to see how the two data platform technologies differ in this regard.
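As a rough point of comparison (a sketch of my own rather than Kevin's examples, using a hypothetical dbo.StagingOrders / STAGING_ORDERS table), both platforms support an IF EXISTS form, though the surrounding idioms differ:

-- SQL Server (2016 and later): DROP ... IF EXISTS, or the older OBJECT_ID pattern.
DROP TABLE IF EXISTS dbo.StagingOrders;

IF OBJECT_ID(N'dbo.StagingOrders', N'U') IS NOT NULL
    DROP TABLE dbo.StagingOrders;

-- Snowflake: IF EXISTS works here too, and Time Travel adds UNDROP
-- for objects removed within the retention period.
DROP TABLE IF EXISTS STAGING_ORDERS;
UNDROP TABLE STAGING_ORDERS;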


The Proper Use of Views and Inline UDFs

Erik Darling plays tic-tac-toe:

The problem is really the stuff that people stick into views. They’re sort of like a junk drawer for data. Someone builds a view that returns a correct set of results, which becomes a source of truth. Then someone else comes along and uses that view in another view, because they know it returns the correct results, and so on and so on. Worse, views tend to do a bunch of data massaging, left joining and coalescing and substringing and replacing and case expressioning and converting things to other things. The bottom line is that views are as bad as you make them.

The end result is a trash monster with a query plan that can only be viewed in full from deep space.

Read on to learn the use cases for views and inline UDFs, as well as a few important notes regarding performance of each. Views are like mogwai: they’re fine as long as you never get them wet and never let them eat after midnight. The problem is, far too many companies are apparently the business equivalent of all-you-can-eat buffets at water parks.

Inline user-defined functions are like patenting a device that lets you shoot yourself in both feet with one pull of the trigger. Which, if I understand things correctly, means you’ll need a Form 4 for each inline UDF.
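For reference (a generic T-SQL sketch with a hypothetical Sales.Orders table, not something from Erik's post), a view is just a named query, and an inline table-valued function is the parameterized version of the same idea:

-- A view: a named, reusable query.
CREATE VIEW dbo.RecentOrders
AS
SELECT OrderID, CustomerID, OrderDate, TotalDue
FROM Sales.Orders
WHERE OrderDate >= DATEADD(YEAR, -1, GETDATE());
GO

-- An inline table-valued function: the same idea with parameters.
-- The optimizer expands both into the calling query, unlike multi-statement functions.
CREATE FUNCTION dbo.OrdersForCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue
    FROM Sales.Orders
    WHERE CustomerID = @CustomerID
);
GO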


What’s New in SSMS 20

Erin Stellato gives us the skinny:

We expect that the first two posts, combined with the release notes and the new Connect with SQL Server Management Studio page, provide the details you need about the changes in SSMS 20 GA.  As such, the focus of this post is the roadmap for SSMS.  Our roadmap is heavily influenced by the evolving capabilities of SQL Server and Azure SQL, and feedback from SSMS users.  We’re currently collecting general feedback at https://aka.ms/sqlfeedback, and feedback on Copilot in SSMS at https://aka.ms/ssms-copilot-feedback.  Please comment and upvote on items that you would like to see in SSMS!

With SSMS 20 now being generally available, you can download it and try it out in your own environment. Erin quells any fears that Microsoft is abandoning SSMS and covers some of the big-ticket items on the roadmap.


Migrating from Power BI to Microsoft Fabric

Paul Turley gives us an overview:

Fabric is here but what does that mean if you are using Power BI? What do you need to know and what, if anything, will you need to change if you are a Power BI report designer, developer or BI solution architect? What parts of Fabric should you use now and how do you plan for the near-term future? As I write this in March of 2024, I’m at the Microsoft MVP Summit at the Microsoft campus in Redmond, Washington this week learning about what the product teams will be working on over the next year or so. Fabric is center stage in every conversation and session. To say that Fabric has moved my cheese would be a gross understatement. I’ve been working with data and reporting solutions for about 30 years and have seen many products come and go. Everything I knew about working with databases, data warehouses, transforming and reporting on data has changed recently BUT it doesn’t mean that everyone using Power BI must stop what they are doing and adapt to these changes. The core product is unchanged. Power BI still works as it always has.

Read on to learn more about Paul’s thesis and how the world changes with Microsoft Fabric.


Taking a Billion Taxi Rides with DuckDB

Mark Litwintschik tries out DuckDB:

DuckDB is an in-process database. Rather than relying on a server of its own, it’s used as a client. The client can work with data in memory, within DuckDB’s internal file format, database servers from other software developers and cloud storage services such as AWS S3.

This choice to not centralise DuckDB’s data within its own server, paired with being distributed as a single binary, makes installing and working with DuckDB much less complex than, say, standing up a Hadoop cluster.

The project isn’t aimed at very large datasets. Despite this, its ergonomics are enticing enough and it does so much to reduce engineering time that workarounds are worth considering. The rising popularity of analysis-ready, cloud-optimised Parquet files is removing the need for substantial hardware when dealing with datasets in the 100s of GBs or larger.

Read on to learn more about DuckDB, how it differs from SQLite, and a bit of nuttiness around how far you can push an in-memory database.
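To give a flavor of those ergonomics (my own sketch, with a hypothetical S3 path and the usual taxi column names assumed), DuckDB will happily query Parquet files in place with plain SQL and no server:

INSTALL httpfs;  -- only needed for remote storage such as S3
LOAD httpfs;

SELECT passenger_count,
       COUNT(*)          AS trips,
       AVG(total_amount) AS avg_fare
FROM read_parquet('s3://my-bucket/nyc-taxi/*.parquet')
GROUP BY passenger_count
ORDER BY trips DESC;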


Creating Dynamic Moving Averages with Visual Calcs and Numeric Parameters

Erik Svensen builds a dynamic moving average:

With the introduction of visual calculations in the February 2024 release of Power BI Desktop (https://powerbi.microsoft.com/en-us/blog/visual-calculations-preview/), we have some new possibilities to add calculations on the individual visual, and some new functions give us some exciting options.

One example could be to use the MOVINGAVERAGE function (link) and combine it with a numeric range parameter to make it dynamic.

Click through for a video and a description of how to do it.
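The post works in Power BI's visual calculations rather than SQL, but for a rough feel of the underlying idea, here is a T-SQL analog (a hypothetical dbo.DailySales table, not anything from Erik's video): a seven-row moving average computed with a window function. The frame size in T-SQL must be a literal, which is exactly the rigidity the numeric range parameter removes on the Power BI side.

-- Seven-row moving average over a hypothetical dbo.DailySales(SalesDate, Amount) table.
SELECT SalesDate,
       Amount,
       AVG(Amount) OVER (
           ORDER BY SalesDate
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS MovingAvg7Row
FROM dbo.DailySales
ORDER BY SalesDate;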


Restorable Dropped Databases Naming in Azure SQL DB

Tanayankar Chakraborty asks, what’s in a name?:

An issue was reported recently where the customer complained that in their cost analysis report of their Azure SQL DBs, the db name appears appended with a comma (,) and a number. While they agreed with the DB name in the report, they didn’t understand the number after the comma or its significance. This is what the cost analysis report looks like:

Click through for a redacted version of the report, showing an example of the database in question, as well as an explanation of what this number means.


Thoughts on Common Table Expressions

Erik Darling has opinions:

Much like joins and Venn diagrams, anyone who thinks they have some advanced hoodoo to teach you about common table expressions is a charlatan or a simpleton. They are one of the least advanced constructs in T-SQL, and are no better or worse than any other abstraction layer, with the minor exception that common table expressions can be used to build recursive queries.

As I read through the post, I kept wanting to disagree with Erik more than I do. My short form is, I aesthetically prefer common table expressions to subqueries. But that doesn’t make CTEs faster.
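As a quick side-by-side (generic T-SQL with a hypothetical Sales.Orders table, not from Erik's post), the two forms express the same query and typically produce the same plan:

-- Derived table (subquery) form.
SELECT o.CustomerID, o.OrderCount
FROM (
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Sales.Orders
    GROUP BY CustomerID
) AS o
WHERE o.OrderCount > 10;

-- The same query as a common table expression.
WITH OrderCounts AS
(
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Sales.Orders
    GROUP BY CustomerID
)
SELECT CustomerID, OrderCount
FROM OrderCounts
WHERE OrderCount > 10;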


A Dive into Direct Lake

Nikola Ilic digs into Direct Lake:

The most common question I’m hearing these days from clients is – how can we refresh the Direct Lake semantic model? It’s a fair question. Since they have been relying on Import mode for years, and Direct Lake promises an “import mode-like performance”…So, there has to be a similar process in place to keep your data up to date, right?

Read on to learn the answer to this question and quite a few more.
