Press "Enter" to skip to content

Month: March 2020

Reducing Trigger Executions

Jared Poche writes up a fun scenario he discovered:

I’ve never been a fan of triggers. I don’t like the idea of them adding an additional tax on every operation. It’s easy to forget they are even there, consuming your cycles. I’ve even seen a few nasty death-by-a-thousand-cuts scenarios with them. But I found something out this week that makes me like them even less.

I was tuning a procedure that runs 284 million times a day.

Over a number of servers and databases, but yes, that number is correct. It takes 2.5ms to run on average, with 1.0ms of CPU time. I’ll spare you the math, but that means over 3 cores of SQL Server are doing nothing but running this procedure 24/7/365. Anything we can do to improve this will be significant, even if we just shave off half a millisecond.

The best stories start with “I was tuning a procedure that runs 284 million times a day.” (To fill in the math he spares you: 284 million executions at 1.0ms of CPU apiece is about 284,000 CPU-seconds per day, while a single core can only supply 86,400 seconds, hence the three-plus cores.)
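
Jared’s actual fix is in his post, but a standard way to cheapen a trigger that frequently has nothing to do is to exit before the body runs. A minimal sketch, with hypothetical table and column names:

CREATE OR ALTER TRIGGER dbo.Orders_AfterUpdate
ON dbo.Orders
AFTER UPDATE
AS
-- This check must be the first statement in the trigger:
-- ROWCOUNT_BIG() still reflects the triggering DML there,
-- and triggers fire even when that DML touched zero rows.
IF ROWCOUNT_BIG() = 0
    RETURN;
SET NOCOUNT ON;

-- Hypothetical audit write standing in for the real trigger body.
INSERT dbo.OrdersAudit (OrderID, ModifiedAtUtc)
SELECT i.OrderID, SYSUTCDATETIME()
FROM inserted AS i;
GO

The early RETURN costs almost nothing when rows were affected and saves the entire body when none were.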


Creating Triggers with Cosmos DB

Hasan Savran shows how you can create a trigger in Cosmos DB:

You have options if you need to use any type of trigger in Cosmos DB. There are two types of triggers in Cosmos DB. The first, which I will cover here, is regular triggers, which can be executed before (Pre-Triggers) or after (Post-Triggers) an operation. These triggers are written in JavaScript, and you need to register them to a collection just like stored procedures. The second type of trigger can be created with Azure Functions, and you can find more information about them in my older posts.

Pre-Triggers and Post-Triggers do not take any input parameters. Since Cosmos DB needs to do more work to execute triggers, you will end up with higher Request Unit charges for your queries. They may be named triggers, but neither type gets executed automatically with every operation; you need to call them programmatically if you want them to run. If a trigger throws an error for any reason, the transaction will roll back and the data will not be saved to the database.

Naturally, triggers are going to have a performance impact on your system regardless of the choice of data platform.


Plan Forcing and Stored Procedure Alteration

Grant Fritchey takes us through a scenario where plan forcing with Query Store has some unexpected results:

I absolutely love Query Store, and the ability it provides to force a plan is amazing. However, there are a lot of little gotchas in this functionality. I just recently found one that has quite a far-reaching effect. Let’s talk about what happens when you DROP and then CREATE a stored procedure.

It’s one of those things where you read Grant’s blog post and say, “Yeah, of course that makes sense; how could it be otherwise?” And yet, without having read the blog post, the thought might never have occurred.
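
For reference, forcing a plan through Query Store looks like this; the IDs below are placeholders you would first pull from the catalog views:

-- Match queries to their Query Store plans and see which are forced.
SELECT qsq.query_id, qsp.plan_id, qsp.is_forced_plan
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id;

-- Force plan 7 for query 42 (placeholder IDs).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

One documented detail that seems relevant here: ALTER PROCEDURE preserves the procedure’s object_id, while DROP and CREATE assigns a new one.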


The Hype Cycle for Artificial Intelligence

William Vorhies takes a look at Gartner’s hype cycle for AI (among other things):

Supposing you’re a business leader and supposing you’re trying to make an intelligent decision about prioritizing your AI adoption plans. It’s likely that, like many of us, the first thing you’d reach for would be one of Gartner’s many hype cycle or magic quadrant analyses.

What you might not know is that you now need an expert just to guide you through the expert literature. There has been such a proliferation of hype cycles and magic quadrants that you could easily be looking in the wrong place.

The hype cycle is definitely opinion-based, but I think it’s a useful look at the relative maturity of different segments of an industry or technology cluster. Do read the whole thing, though, as these things aren’t perfect.


Confluent Developer

Tim Berglund announces Confluent Developer:

Today, I am pleased to announce the launch of Confluent Developer, the one and only portal for everything you need to get started with Apache Kafka®, Confluent Platform, and Confluent Cloud! Everything on Confluent Developer is completely free and ungated. It’s a single online source of everything you’ll need to learn Kafka: links to documentation, collections of video tutorials, links to sample code, the entire collection of guided Kafka Tutorials, an index of podcast episodes, and a link to our global network of meetups.

The site is laid out really well.


Running and Scheduling Azure SQL DB Elastic Jobs

Kate Smith continues a series on Azure SQL Database Elastic Jobs:

In previous posts, I have demonstrated how to create an Elastic Jobs Agent, set up credentials for Elastic Jobs, create a target group of servers/databases for the agent, and create and define an elastic job using both PowerShell and T-SQL.

In this post, I drill down into how to run an Elastic Job both in an ad-hoc fashion and how to schedule a job to run regularly. I do this both for PowerShell and for T-SQL.

The PowerShell version is a one-liner, and the T-SQL version looks a good bit like it does with SQL Agent jobs.
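
As a hedged sketch of that T-SQL side (run against the job database, with a placeholder job name):

-- Ad hoc: kick the job off immediately.
EXEC jobs.sp_start_job @job_name = N'NightlyMaintenance';

-- Scheduled: enable the job and have it recur once a day.
EXEC jobs.sp_update_job
    @job_name = N'NightlyMaintenance',
    @enabled = 1,
    @schedule_interval_type = N'Days',
    @schedule_interval_count = 1;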


Deprecated and Discontinued Features in SQL Server

Randolph West takes a look at my favorite activity: deleting code:

The following statements are true:

– SQL Server 2019 does not have any deprecated features, but does have discontinued features
– SQL Server 2017 does not have any discontinued features, but does have deprecated features
– SQL Server 2016 has both deprecated and discontinued features

Let’s discuss the difference between deprecated and discontinued features, and explain how this affects database administrators looking to move to SQL Server 2016 or newer. My current advice is to target SQL Server 2019 with at least Cumulative Update 2 (CU2).

Read on for more details.
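
Relatedly, if you want to know whether your workload touches anything on the deprecation list, SQL Server counts deprecated feature usage in a performance counter:

-- Non-zero counters are deprecated features used since the last restart.
SELECT instance_name AS deprecated_feature,
       cntr_value AS usage_count
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE N'%Deprecated Features%'
  AND cntr_value > 0;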


ASYNC_NETWORK_IO and Execution Plans

Jonathan Kehayias dives into an interesting problem:

A few weeks ago, an interesting question was asked on the #SQLHelp hash tag on Twitter about the impact of execution plans on the ASYNC_NETWORK_IO wait type, and it generated some differing opinions and a lot of good discussion.

My immediate answer to this would be that someone is misinterpreting the cause and effect of this, since the ASYNC_NETWORK_IO wait type is encountered when the Engine has results to send over TDS to the client but there are no available TDS buffers on the connection to send them on. Generally speaking, this means that the client side is not consuming the results efficiently, but based on the ensuing discussion I became intrigued enough to do some testing of whether or not an execution plan would actually impact the ASYNC_NETWORK_IO waits significantly.

To summarize: Focusing on ASYNC_NETWORK_IO waits alone as a tuning metric is a mistake. The faster a query executes, the higher this wait type will likely accumulate, even if the client is consuming results as fast as possible. (Also see Greg’s recent post about focusing on waits alone in general.)

Click through for the things Jonathan tested.
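
If you want to watch the wait on your own system while you read, it shows up in the usual DMVs:

-- Cumulative since restart; for per-session numbers, use
-- sys.dm_exec_session_wait_stats (SQL Server 2016 and later).
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';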


Resizing the SSMS Grid Results

Kenneth Fisher shows how you can resize SQL Server Management Studio’s results grid:

After my post last month about configuring your tools, I’ve been on a make-my-life-easier kick, and one of the things that gives me grief every day is that my eyesight is at best mediocre, and some days it’s worse than others. (I have issues. No, not just those issues; issues that affect my eyesight.) I’m in SSMS 90% of my day or more, so it really helps that it’s easy to zoom in and out on the query window, and even the messages pane of the results. There is a little box with a percentage in the bottom left corner of each of the two panes. This controls the font size of that pane.

This is also helpful if you ever present using SSMS and don’t have a strong grasp of ZoomIt or the Magnifier app.


Secure Azure Data Source Access from Databricks

Bhavin Kukadia, Abhinav Garg, and Michal Marusan show us the right way to access Azure data sources from Azure Databricks:

Enterprise Security is a core tenet of building software at both Databricks and Microsoft, and thus it’s considered a first-class citizen in Azure Databricks. In the context of this blog, secure connectivity refers to ensuring that traffic from Azure Databricks to Azure data services remains on the Azure network backbone, with the inherent ability to whitelist Azure Databricks as an allowed source. As a security best practice, we recommend a couple of options which customers could use to establish such a data access mechanism to Azure data services like Azure Blob Storage, Azure Data Lake Store Gen2, Azure Synapse Data Warehouse, Azure Cosmos DB, etc. Please read further for a discussion on Azure Private Link and Service Endpoints.

This is more about network configuration rather than things like “store your credentials and other secrets in Azure Key Vault,” which is also a good idea.
