Press "Enter" to skip to content

Month: February 2024

Avoiding Time Intelligence DAX Functions in DirectQuery Mode

Marco Russo and Alberto Ferrari skip the slow stuff:

Calculations that use the DAX time intelligence functions mostly retrieve data at the day level, performing the required aggregations in the formula engine. By avoiding time intelligence DAX functions, you can force DAX to produce more optimized queries for your specific calculations.

DirectQuery over SQL and VertiPaq require the same patterns to optimize time intelligence calculations, even though the reasons are different. In VertiPaq, we try to stay away from DAX time intelligence functions to avoid large materialization at the day level. With SQL, materialization does not always happen because Tabular tries to push the grouping down to SQL. Still, time intelligence calculations often result in complex queries, and it is better to avoid the complexity by using simpler DAX code.

Check out the performance difference.


Checking SQL Server Connectivity with Powershell in Parallel

Rod Edwards builds a script:

The chances are that you have other systems monitoring your SQL servers already, so this task isn’t required at all. However, sometimes a quick ‘knock on the door of SQL’ to confirm a response isn’t a bad thing as a sanity check.

So, building on that, we’re going to use the same technique to essentially perform a sweep of our estate (with a few bells and whistles added) to give us a colour coded quick view of service status like below, with a little bit of additional info.

Click through for the script and explanation.


Troubleshooting a Slow Deletion

Aaron Bertrand has an admission:

Before looking at the code path, the query, or the execution plan, I didn’t even believe the application would regularly perform a hard delete. Teams typically soft delete “expensive” things that are ever-growing (e.g., change an IsActive column from 1 to 0). Deleting a user is bound to be expensive, because there are usually many inbound foreign keys that have to be validated for the delete to succeed. Also, every index has to be updated as part of the operation. On top of that, there are often triggers that fire on delete.

While I know that we do sometimes soft delete users, the engineer assured me that the application does, in some cases, hard delete users.

Click through for the full story and a minor bout of self-petard-hoisting. I’m as guilty as anyone else of jumping to conclusions, and this is a good reminder to go through the process even when you think you know the answer.
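
To make the contrast concrete, here is a minimal T-SQL sketch (the table and column names are hypothetical) of a soft delete versus the kind of hard delete Aaron describes:

    -- A sketch with hypothetical table and column names.
    DECLARE @UserId int = 42;

    -- Soft delete: flip a flag. No inbound foreign keys get checked, and only
    -- the clustered index row plus any indexes containing IsActive are touched.
    UPDATE dbo.Users
    SET IsActive = 0
    WHERE UserId = @UserId;

    -- Hard delete: every referencing foreign key must be validated (or cascaded),
    -- every index on dbo.Users is maintained, and any DELETE triggers fire.
    DELETE dbo.Users
    WHERE UserId = @UserId;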


Troubleshooting a Problem with sp_send_dbmail

Shane O’Neill applies the Sgt. Schultz defense:

I don’t like posting about issues unless I fundamentally understand the root cause. That’s not the case here. A lot of the explanation here will be hand-waving while spouting “here be dragons, and giants, and three-headed dogs”, but I know enough to give you the gist of the issue.

Click through for a fun story about Kerberos and behavioral changes after absolutely nothing happened.


Working with Date Sequences in R

Steven Sanderson isn’t satisfied with a single date:

In the world of data analysis and manipulation, working with dates is a common and crucial task. Whether you’re analyzing financial data, tracking trends over time, or forecasting future events, understanding how to generate date sequences efficiently is essential. In this blog post, we’ll explore three powerful R packages—lubridate, timetk, and base R—that make working with dates a breeze. By the end of this guide, you’ll be equipped with the knowledge to generate date sequences effortlessly and efficiently in R.

Click through for several ways to generate date sequences, including weekly sequences.


Weirdness with Aggregation

Erik Darling digs into a problem. Part 1 sets up the scenario:

Here’s the query plan, which yes, you’re reading correctly, runs for ~23 seconds, fully joining both tables prior to doing the final aggregation.

I’m showing you a little extra here, because there are missing index requests that the optimizer asks for, but we’ll talk about those in tomorrow’s post.

The wait stats for this query, since it’s running in Batch Mode, are predictably HT-related.

Part 2 covers those missing indexes:

I’ve taken a small bit of artistic license with them.

The crappy thing is… They really do not help and in some cases things get substantially worse.

Maybe it’s because it’s early and I’m compiling things in my head rather than actually trying them out, but it seems like a combination of a CTE plus CROSS APPLY, or a pair of CROSS APPLY statements, could work better (especially with a good index), assuming that the join doesn’t need to be in place. Given the query as it is, with two MAX() aggregations and no GROUP BY clause, that could be an avenue for improvement, though one I have not tested. Regardless, read both of Erik’s posts.
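
For what it’s worth, here is a rough, untested sketch of that idea, with hypothetical table names, assuming the join exists only to pair the two tables rather than to filter rows:

    -- Hypothetical tables; each CROSS APPLY fetches its own MAX independently,
    -- which a suitable index can answer with a cheap seek instead of the full join.
    SELECT mp.MaxScore, mc.MaxCommentDate
    FROM (VALUES (1)) AS anchor(x)
    CROSS APPLY
    (
        SELECT TOP (1) p.Score AS MaxScore
        FROM dbo.Posts AS p
        ORDER BY p.Score DESC
    ) AS mp
    CROSS APPLY
    (
        SELECT TOP (1) c.CreationDate AS MaxCommentDate
        FROM dbo.Comments AS c
        ORDER BY c.CreationDate DESC
    ) AS mc;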


Data Loading with BCP

Peter Schott describes a recent bit of messiness:

However, at the time this popped up, my most recent “ticket” was a separate request. I’d been chatting with a client who had mentioned that they were closing an account for one of the SaaS apps they use. The vendor would provide DDL and extract files for import into their own system, but only after the account was closed. We chatted back and forth about some ideas for them to load the data into their own Azure SQL DB instance. At one point, he asked if I’d want to just do it for a small consulting fee. We chatted a bit more and he realized that he really didn’t want to do it.

Read on for the rest of the story. BCP is powerful but has always felt finicky to me. Either that, or I wasn’t very good at using it. Either could be the case.


Azure SQL DB Serverless for Hyperscale now GA

Morgan Oslake has an announcement:

Optimizing resource allocation to achieve performance goals while controlling costs can be a challenging balance to strike, especially for database workloads with complex usage patterns. Azure SQL Database serverless provides a solution to help address these challenges, but until now serverless has only been generally available in the General Purpose tier. However, many workloads that can benefit from serverless may require greater performance and scale, along with other capabilities unique to the Hyperscale tier.

We are pleased to announce the general availability of serverless auto-scaling for Hyperscale in Azure SQL Database. The benefits of serverless and Hyperscale now come together in a single database solution.

Read on to see what this means for you and how it can change the billing strategy around Hyperscale.


Modeling I/O Utilization with Resource Governor

Michael J. Swart does some modeling:

How do we predict whether it’s safe to put workloads from two servers onto one?

We use Availability Groups to create readable secondary replicas (which I’ll call mirrors). The mirrors are used to offload reporting workloads. The mirrors are mostly bound by IOPS and the primaries are mostly bound by CPU, so I wondered “Is there any wiggle room that lets us consolidate these servers?”

Can we point the reporting workloads (queries) at the primary replica safely?

Read on for the answers to these questions.
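
The title gives away the lever: Resource Governor can cap a workload’s I/O. As a point of reference only (the pool name, login, and limit below are made up, not Michael’s setup), here is a minimal sketch of capping a reporting workload’s IOPS:

    -- Made-up names and limits; caps how many IOPS the reporting workload
    -- may issue per volume so it cannot starve the primary workload.
    USE master;
    GO
    CREATE RESOURCE POOL ReportingPool
        WITH (MAX_IOPS_PER_VOLUME = 500);
    GO
    CREATE WORKLOAD GROUP ReportingGroup
        USING ReportingPool;
    GO
    -- Classifier routes the hypothetical reporting login into the capped group.
    CREATE FUNCTION dbo.rg_classifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN CASE WHEN SUSER_SNAME() = N'reporting_login'
                    THEN N'ReportingGroup'
                    ELSE N'default'
               END;
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;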


Identifying Old OLEDB and ODBC Drivers on Machines

Lucas Kartawidjaja goes on a quest:

The vulnerabilities affect Microsoft ODBC Driver 17 and 18, as well as OLE DB Driver 18 and 19. More information, along with the download location for the security update/hotfix, can be found on the following page: Update: Hotfixes released for ODBC and OLE DB drivers for SQL Server

We have an automated security scanning tool that flags the systems (servers, desktops, laptops, etc.) that haven’t been patched, so we can quickly identify the systems that need to be patched and patch them quickly.

For this post, I was wondering if there is a quick way to identify the Microsoft ODBC and OLE DB drivers that are installed on the systems.

Click through to see what Lucas came up with.
