Procedure Parameters: Optional and Required

Kenneth Fisher takes us through procedure parameters:

If you are executing a stored procedure with a bunch of parameters, it can be a bit of a pain if you have to pass a value in for each of them. Fortunately, it’s pretty easy to make some parameters required and others optional: you simply give the optional ones a default value.

Kenneth also points out that functions don’t behave this way, and shows how to handle parameters where you don’t want to accept NULL under any circumstances. This is useful when NULL is just a placeholder for “I don’t really want to use this parameter” but the application doesn’t know how to avoid sending the parameter in the first place.
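For the general shape of the idea, here’s a minimal sketch (all object names are hypothetical): a parameter with no default is required, a parameter with a default is optional, and a guard clause rejects an explicit NULL.

```sql
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId int,              -- required: no default value
    @StartDate  date = NULL       -- optional: default value supplied
AS
BEGIN
    -- Reject NULL outright, even if the caller passes it explicitly.
    IF @CustomerId IS NULL
    BEGIN
        RAISERROR('@CustomerId may not be NULL.', 16, 1);
        RETURN;
    END;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
      AND (@StartDate IS NULL OR OrderDate >= @StartDate);
END;
GO

EXEC dbo.GetCustomerOrders @CustomerId = 42;                          -- works
EXEC dbo.GetCustomerOrders @CustomerId = 42, @StartDate = '20190801'; -- works
-- Omitting @CustomerId entirely raises an "expects parameter" error at runtime.
```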

Ownership Chaining and Temp Stored Procedures

Kenneth Fisher wants to see how security for temporary stored procedures works:

With normal stored procedures there is something called ownership chaining. Without going into a lot of detail about what it means, let’s say that you run a stored procedure that updates a table. SQL is going to check the permissions to see if the stored procedure can update that table, right? Well, whose permissions? Yours? Well, yes, if you have permissions you are fine. But you won’t always. If you don’t, then SQL is going to check the owner of the stored procedure (dbo?) and see if they also own that table. If so, then we’re golden: perform the update. That might seem scary but it’s pretty normal.

What was scary (at least to me) is the question “How is that handled for a temp stored procedure?”

Read on for the results of Kenneth’s tests.
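As background, here’s a minimal sketch of ownership chaining with a permanent procedure (names are hypothetical): the user has EXECUTE on the procedure but no UPDATE permission on the table, and because dbo owns both objects the chain is unbroken and the update succeeds. Whether a # procedure behaves the same way is exactly what Kenneth’s tests cover.

```sql
CREATE TABLE dbo.Accounts (AccountId int, Balance money);
GO
CREATE PROCEDURE dbo.ResetBalance @AccountId int
AS
    UPDATE dbo.Accounts SET Balance = 0 WHERE AccountId = @AccountId;
GO
CREATE USER AppUser WITHOUT LOGIN;
GRANT EXECUTE ON dbo.ResetBalance TO AppUser;
GO

EXECUTE AS USER = 'AppUser';
EXEC dbo.ResetBalance @AccountId = 1;  -- succeeds via ownership chaining
REVERT;
```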

Using Temp Stored Procedures

Bert Wagner shows how you can perform testing with temporary stored procedures:

A while back I learned that it’s possible to create temporary stored procedures in SQL Server.

I never put that knowledge into practice however because I struggled to think of a good use case for when a temporary stored procedure would be preferable to a permanent stored procedure.

Not long ago I encountered a scenario where using a temporary stored procedure was the perfect solution to my problem.

Those scenarios are rare but Bert did hit one of them.
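If you haven’t seen one before, a temporary stored procedure looks like this minimal sketch (names hypothetical): prefix the name with #, and it is scoped to your session and dropped automatically when the session ends.

```sql
CREATE PROCEDURE #GetRecentOrders
    @Days int = 7
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -@Days, GETDATE());
GO

EXEC #GetRecentOrders @Days = 30;   -- visible only to the current session
```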

Stored Procedure IF Branching and Performance

Erik Darling explains that the IF block in a stored procedure won’t help you with performance:

Making plan choices with IF branches like this plain doesn’t work.
The optimizer compiles a plan for both branches based on the initial compile value.
What you end up with is a stored proc that doesn’t do what it’s supposed to do, and parameter sniffing times two.

Read on to see an example of this. If you really, really want to use an IF block, you could separate the components out into individual stored procedures and call those stored procedures independently.
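As a rough sketch of that workaround (names hypothetical), each branch becomes its own procedure, so each gets its own plan, compiled and sniffed independently of the other:

```sql
CREATE PROCEDURE dbo.Search_Recent @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
      AND OrderDate >= DATEADD(DAY, -30, GETDATE());
GO
CREATE PROCEDURE dbo.Search_History @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO
CREATE PROCEDURE dbo.Search @CustomerId int, @RecentOnly bit
AS
    -- The outer proc only routes; each child proc compiles its own plan.
    IF @RecentOnly = 1
        EXEC dbo.Search_Recent @CustomerId = @CustomerId;
    ELSE
        EXEC dbo.Search_History @CustomerId = @CustomerId;
GO
```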

Finding The Slow Query In A Procedure

Erin Stellato shows us how we can find the slowest query within a stored procedure:

Figuring out exactly what causes slow performance for a stored procedure can sometimes feel like trying to unravel a ball of Clark Griswold’s Christmas lights.  It’s not uncommon to see procedures with hundreds, even thousands of lines of code.  You may have been told which stored procedure runs slow by a user or manager, or you might have found it by looking in SQL Server DMVs.  Either way, once you have detected the offending procedure, where do you start?

If you’re running SQL Server 2016, one option is Query Store.  Query Store captures individual queries, but it also captures the object_id, so you can find all the queries that are associated with an object to determine which ones are problematic.

This is quite useful when you have to tune a procedure you’ve never seen before, and as you go to open that procedure, the vertical scroll bar keeps getting smaller and smaller.
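As a starting point, something like this sketch (assuming Query Store is enabled; the procedure name is hypothetical) pulls the costliest statements belonging to a single procedure out of the Query Store catalog views:

```sql
SELECT TOP (10)
       qt.query_sql_text,
       s.total_executions,
       s.avg_duration_us
FROM (
    SELECT q.query_id,
           q.query_text_id,
           SUM(rs.count_executions) AS total_executions,
           AVG(rs.avg_duration)     AS avg_duration_us
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs
        ON rs.plan_id = p.plan_id
    WHERE q.object_id = OBJECT_ID('dbo.usp_SlowProc')   -- hypothetical proc name
    GROUP BY q.query_id, q.query_text_id
) AS s
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = s.query_text_id
ORDER BY s.avg_duration_us DESC;
```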

Signing Certificates For Multi-Database Access

David Fowler shows you how to create a signed certificate which allows for cross-database access:

Recently Manish Kumar asked an interesting question: what do you do if your proc accesses multiple or even all of the databases on the server?

So, instead of giving him a fuzzy answer in reply, I thought I’d write up exactly how you can deal with that sort of situation.

We’ve got two options and we’ll have a look at both of them. (I’m not going to go into details about how signing procs works; please see the post mentioned earlier for a basic overview. Here I’m going to look specifically at procs that access multiple databases.)

Click through to see both solutions.
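For the general flavor of the certificate-signing approach, here’s a heavily simplified sketch (all names, the file path, and the password are hypothetical; David’s post covers the details and the alternative option):

```sql
-- In the database that owns the procedure: create a certificate and sign the proc.
USE SourceDB;
GO
CREATE CERTIFICATE CrossDbCert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
    WITH SUBJECT = 'Cross-database procedure signing';
GO
ADD SIGNATURE TO dbo.usp_MultiDbProc
    BY CERTIFICATE CrossDbCert WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
-- Export the certificate's public key so it can be recreated elsewhere.
BACKUP CERTIFICATE CrossDbCert TO FILE = 'C:\Temp\CrossDbCert.cer';
GO

-- In each database the procedure touches: recreate the certificate, map a user
-- to it, and grant that user whatever the procedure needs in that database.
USE TargetDB;
GO
CREATE CERTIFICATE CrossDbCert FROM FILE = 'C:\Temp\CrossDbCert.cer';
GO
CREATE USER CrossDbCertUser FROM CERTIFICATE CrossDbCert;
GO
GRANT SELECT ON dbo.SomeTable TO CrossDbCertUser;
```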

Deciding Whether To Clean Up Temp Tables

Grant Fritchey looks at what difference explicitly dropping temporary tables in a procedure makes:

I then set up Extended Events to capture the query metrics and I executed each of the queries multiple times (also, just for the test, I discarded the results because I didn’t want that process mucking with my measurements). After executing both procedures 500 times, the results were quite simple. The average execution time with an explicit drop was 8,672 microseconds. Meanwhile, the average for not dropping the temporary table was 8,530 microseconds. That’s about a 1% difference across hundreds of executions. Reads were identical and so were writes.

In short, the behavior is the same.

What about the impact on the system? Could I see changes in memory or I/O as these different processes ran?

Grant didn’t notice any difference, but check out Allen White’s and Jay Robinson’s answers in the comments.  Temp table reuse can happen (if you follow the rules) and can make a difference when a procedure is called frequently enough.
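For reference, the comparison is roughly this shape (a minimal sketch with hypothetical names): identical work, with and without the explicit DROP at the end.

```sql
CREATE PROCEDURE dbo.WithDrop
AS
BEGIN
    CREATE TABLE #Work (Id int);
    INSERT #Work (Id) SELECT object_id FROM sys.objects;
    SELECT COUNT(*) AS RowsSeen FROM #Work;
    DROP TABLE #Work;            -- explicit cleanup
END;
GO
CREATE PROCEDURE dbo.WithoutDrop
AS
BEGIN
    CREATE TABLE #Work (Id int);
    INSERT #Work (Id) SELECT object_id FROM sys.objects;
    SELECT COUNT(*) AS RowsSeen FROM #Work;
END;                             -- cleaned up when the procedure ends
GO
```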

Using Table-Valued Parameters In SQL Server

Ben Richardson has a post showing how to create user-defined table types and pass them into stored procedures:

Table-valued parameters were introduced in SQL Server 2008. Before that, there were limited options to pass tabular data to stored procedures. Most developers used one of the following methods:

  1. Data in multiple columns and rows was represented in the form of a series of parameters. However, the maximum number of parameters that can be passed to a SQL Server stored procedure is 2,100. Therefore, in the case of a large table, this method could not be used. Furthermore, preprocessing is required on the server side in order to format the individual parameters into a tabular form.

  2. Create multiple SQL statements that can affect multiple rows, such as UPDATE. The statements can be sent to the server individually or in batched form. Even if they are sent in batched form, the statements are executed individually on the server.

  3. Another way is to use delimited strings or XML documents to bundle data from multiple rows and columns and then pass these text values to parameterized SQL statements or stored procedures. The drawback of this approach is that you need to validate the data structure in order to unbundle the values.

The .NET Framework then makes it easy to pass in a DataTable or an IEnumerable&lt;SqlDataRecord&gt; as a table-valued parameter.
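Here is a minimal sketch of the T-SQL side (names hypothetical): define a user-defined table type, then accept it as a READONLY parameter.

```sql
CREATE TYPE dbo.OrderIdList AS TABLE (OrderId int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetOrdersByIdList
    @OrderIds dbo.OrderIdList READONLY   -- TVPs must be declared READONLY
AS
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    JOIN @OrderIds AS ids ON ids.OrderId = o.OrderId;
GO

-- Calling it from T-SQL:
DECLARE @ids dbo.OrderIdList;
INSERT @ids (OrderId) VALUES (1), (2), (3);
EXEC dbo.GetOrdersByIdList @OrderIds = @ids;
```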

Tips For Debugging Large Procedures

Erik Darling has a few hints for debugging large stored procedures in SQL Server:

Tip #1: Format Your Code

There’s no shortage of free and paid tools out there. This list from the Recently Legendary Aaron Bertrand on Stack Exchange has both.

This one alone is great.  Erik has several other tips as well.

Temp Table Caching And Reuse

Shane O’Neill ran into an error with his stored procedure call:

We store the results in a temporary table first.

Don’t worry, that’s not the end of the post. That’s not even the point of this post. It is, however, what I was doing when I came across a weird error.

Let me show you!

INSERT EXEC failed because the stored procedure altered the schema of the target table.

He wasn’t really trying to alter the schema of that temp table, but read on to see the context and solution.
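For context, the pattern that raises this error is the plain INSERT … EXEC shown in this minimal sketch (names hypothetical); the specific circumstances that make it blow up are what Shane’s post untangles.

```sql
-- Capture a stored procedure's result set into a temp table.
CREATE TABLE #Results (Id int, Name sysname);

INSERT INTO #Results (Id, Name)
EXEC dbo.GetThings;        -- the INSERT ... EXEC that raised the error
```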
