Press "Enter" to skip to content

Month: November 2022

LATERAL and APPLY

Lukas Eder shows off one of my favorite operators:

The SQL:1999 standard specifies the <lateral derived table>, which is SQL’s way of allowing a derived table (a subquery in the FROM clause) to access all the lexically preceding objects in the FROM clause. It’s a bit weird in terms of syntax; I personally think that Microsoft SQL Server has a much nicer solution for this concept via APPLY. Oracle supports both syntaxes (standard and T-SQL’s). Db2, Firebird, MySQL, and PostgreSQL only have LATERAL.

Click through to see how the operator works.
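
To make the mechanics concrete, here is a minimal T-SQL sketch (the Sales.Orders and Sales.OrderLines tables are hypothetical): for each order, the APPLY operand pulls that order's three most expensive lines, something a plain derived table in the FROM clause could not do because it cannot reference o.

-- Hypothetical tables; the point is that the subquery may reference o.
SELECT o.OrderID, o.OrderDate, t.ProductID, t.UnitPrice
FROM Sales.Orders AS o
CROSS APPLY
(
    SELECT TOP (3) ol.ProductID, ol.UnitPrice
    FROM Sales.OrderLines AS ol
    WHERE ol.OrderID = o.OrderID   -- correlated to the outer row
    ORDER BY ol.UnitPrice DESC
) AS t;

-- The standard spelling of the same idea (the LATERAL keyword in
-- PostgreSQL, Db2, Firebird, and MySQL):
-- FROM Sales.Orders AS o
-- CROSS JOIN LATERAL (SELECT ... WHERE ol.OrderID = o.OrderID ...) AS t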


Non-Parallel Plans from Computed Columns with Scalar Functions

Etienne Lopes tells a tale:

I must say that on principle I’m not a big fan of either computed columns or scalar UDFs. I mean, I find them attractive in the way they (appear to) make “things simpler,” allowing code reuse, improving query readability, etc. Yes, but they also hide or mask the complexity behind their use, which can often be quite deceiving, making it much harder to troubleshoot and solve performance problems. Furthermore, they have several limitations by design that can hurt performance, and all this combined can sometimes make a “simple” query take many minutes or hours to run instead of just a few seconds! When you see this situation happen again and again while fine-tuning databases, their use becomes much less appealing.

Having said this, sometimes they can be useful, of course, but it’s very important to choose carefully where, how, and when to use computed columns and scalar UDFs, so that performance won’t get hurt and the benefits outweigh the drawbacks.

Click through for an example of where the combo really falls short. I do like computed columns, though never with user-defined functions.
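
To make the failure mode concrete, here is a minimal sketch using hypothetical objects (dbo.Orders and dbo.fn_TaxAmount are made up for illustration): once a scalar UDF backs a computed column, queries against that table can come back with serial plans.

-- Hypothetical objects, just to show the pattern under discussion.
CREATE FUNCTION dbo.fn_TaxAmount (@Amount decimal(10, 2))
RETURNS decimal(10, 2)
AS
BEGIN
    RETURN @Amount * 0.23;   -- any scalar UDF will do
END;
GO

ALTER TABLE dbo.Orders
    ADD TaxAmount AS dbo.fn_TaxAmount(Amount);
GO

-- Queries against dbo.Orders may now get serial plans; check the
-- NonParallelPlanReason property in the execution plan.
SELECT COUNT_BIG(*) FROM dbo.Orders;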


The Importance of the Power BI Service

Reza Rad explains why the Power BI Service is useful:

The Power BI toolset comes in many shapes and forms. There is Power BI Desktop, the Power BI Mobile app, Power BI Report Server, and the Power BI Service (and some other applications and components too). The questions I hear from new users of Power BI are: Do I need to have an account for Power BI? Do I need to use the Power BI website for creating visualizations, etc.? What is the Power BI website or service, and what is its usage? If I can do the reporting using Power BI Desktop for free, then why would I need the service? In this article and video, I will answer all of that.

Click through for a video or for the article explaining the purpose behind the Power BI Service. Having done work with places using Power BI Report Server and places using the Power BI Service, I will say that the latter takes more work to get corporate-compliant but offers a whole lot more.


Incorporating Power BI with Azure Synapse Analytics

Ginger Grant counts the ways:

The first is to connect Power BI to Azure Synapse to explore and visualize data. You can examine the datasets that you have loaded into your data lake with Power BI to help with the analysis of the data, either for a data science solution or to determine how you are going to transform the data. For more information on how to do this, check out my previous blog post.

Click through for three additional methods.


MAXDOP Calculation Discrepancy

Brent Ozar does the math:

In this case, the SQL Server has multiple NUMA nodes, with greater than 16 logical processors per node – that’s the last line of the screenshot. In that line, Microsoft says MAXDOP should be half of the number of logical processors with a max of 16 – so 16.

But it’s recommending 8. Hmm.

Read on for the answer.
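
If you want to see the inputs to that calculation on your own instance, a quick query like this (my sketch, not from the post) shows how many visible schedulers sit on each NUMA node and what MAXDOP is currently set to:

-- Visible online schedulers per NUMA node (the DAC scheduler is excluded
-- because its status is 'VISIBLE ONLINE (DAC)').
SELECT parent_node_id, COUNT(*) AS logical_processors
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE'
GROUP BY parent_node_id;

-- The instance-level setting, for comparison with the recommendation.
SELECT value_in_use
FROM sys.configurations
WHERE name = N'max degree of parallelism';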


STONITH Resources for Pacemaker Clusters

Andrew Pruski picks up Chekhov’s Gun:

Recently I had to create another pacemaker cluster, this time on-premises using VMware virtual machines. The steps to create the pacemaker cluster and deploy an availability group were pretty much the same as in my original post (minus any Azure malarkey), but one step was different: creating the STONITH resource.

A STONITH resource is needed in a pacemaker cluster as this is what prevents the dreaded split-brain scenario: two nodes thinking that they’re the primary node. If the resource detects a failed node in the cluster, it’ll restart that node, hopefully allowing it to come up in the correct state.

Read on to see how Andrew did it.


Archival Tables in SQL Server

Aaron Bertrand starts a new series:

We all have one: the table that grows forever. Maybe it contains chat messages, post comments, or simple web traffic. Eventually, the table gets large enough that it becomes problematic – for example, users will notice that searches or updates take longer and longer as this massive, ever-growing table is scanned.

People often deal with this by archiving older data into a separate table. In this tip series, I’ll describe an archive table, explain why that solution carries its own set of problems, and show other potential ways to deal with data that grows indefinitely.

This is where we say, “Ah, if only Stretch DB had been priced approximately 1/100th of what it really was.” Stretch DB also had its own problems—especially if you ever needed to change the large table’s schema—but stay tuned for Aaron’s answers.
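
One common way to implement the archiving Aaron mentions is a batched DELETE with OUTPUT INTO, so each row lands in exactly one of the two tables. A minimal sketch, with hypothetical dbo.ChatMessages and dbo.ChatMessages_Archive tables and a 90-day cutoff:

-- Batching keeps each transaction (and its impact on the log) small.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.ChatMessages
    OUTPUT deleted.MessageID, deleted.PostedAt, deleted.Body
        INTO dbo.ChatMessages_Archive (MessageID, PostedAt, Body)
    WHERE PostedAt < DATEADD(DAY, -90, SYSDATETIME());

    IF @@ROWCOUNT = 0 BREAK;   -- nothing older than the cutoff remains
END;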


GitHub CI/CD for Synapse Link for SQL Server 2022

Kevin Chant does a bit of CI/CD:

In this post I want to show how a GitHub CI/CD experience for Azure Synapse Link for SQL Server 2022 can look, using GitHub Actions, including how to automatically stop and start the link in the pipeline.

In my last post I showed a complete CI/CD experience for Azure Synapse Link for SQL Server 2022 using Azure DevOps.

With this in mind, in this post I show an alternative GitHub CI/CD experience for Azure Synapse Link for SQL Server 2022 which uses GitHub Actions, including automatically stopping the link before the database update and starting it again after the update has completed.

Read on to learn how.


SQL Server: Learning from Query Store

Grant Fritchey explains a second-order effect of Query Store:

I may have occasionally talked about the importance of Query Store, but today I want to emphasize just how much Microsoft is weaponizing query store.

Of course, I don’t mean they’re creating the Death Star or something, I simply mean they’re taking the information that Query Store gathers and using that to enable a number of new performance enhancements within SQL Server.

Grant shows several examples of how SQL Server can take data in Query Store and use it to make queries in that database faster. It took a few versions of SQL Server, but we’re getting to see some of the long-run promise of the feature, now that most of the bugs have been ironed out and the SQL Server development team has had enough time to implement this functionality.
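
The prerequisite for all of it is simply having Query Store enabled in the database, since features like persisted memory grant feedback and the SQL Server 2022 feedback mechanisms read their history from it. A minimal sketch of turning it on and peeking at what it has collected:

-- Enable Query Store for the current database (settings shown are just a
-- starting point).
ALTER DATABASE CURRENT
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_STORAGE_SIZE_MB = 1024);
GO

-- A quick look at the longest-running queries it has captured so far.
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_duration,
       rs.avg_logical_io_reads
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;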
