Category: Stored Procedures

Deciding Whether To Clean Up Temp Tables

Grant Fritchey looks at what difference explicitly dropping temporary tables in a procedure makes:

I then set up Extended Events to capture the query metrics and I executed each of the queries multiple times (also, just for the test, I discarded the results because I didn’t want that process mucking with my measurements). After executing both procedures 500 times, the results were quite simple. The average execution time with an explicit drop was 8,672 microseconds. Meanwhile, the average for not dropping the temporary table was 8,530 microseconds. That’s about a 1% difference across hundreds of executions. Reads were identical and so were writes.

In short, the behavior is the same.

What about the impact on the system? Could I see changes in memory or I/O as these different processes ran?

Grant didn’t notice any difference, but check out Allen White’s and Jay Robinson’s answers in the comments.  Temp table reuse can happen (if you follow the rules) and can make a difference when a procedure is called frequently enough.
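
For illustration, here’s a minimal sketch of the two procedure shapes under comparison (the table and procedure names are mine, not Grant’s):

    CREATE OR ALTER PROCEDURE dbo.TempWithDrop
    AS
    BEGIN
        CREATE TABLE #Work (ID int NOT NULL);
        INSERT INTO #Work (ID) VALUES (1);
        SELECT ID FROM #Work;
        DROP TABLE #Work;  -- explicit cleanup
    END;
    GO

    CREATE OR ALTER PROCEDURE dbo.TempWithoutDrop
    AS
    BEGIN
        CREATE TABLE #Work (ID int NOT NULL);
        INSERT INTO #Work (ID) VALUES (1);
        SELECT ID FROM #Work;
        -- no DROP: the table is cleaned up when the procedure's scope ends
    END;
    GO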

Comments closed

Using Table-Valued Parameters In SQL Server

Ben Richardson has a post showing how to create user-defined table types and pass them into stored procedures:

Table-valued parameters were introduced in SQL Server 2008. Before that, there were limited options to pass tabular data to stored procedures. Most developers used one of the following methods:

  1. Data in multiple columns and rows was represented in the form of a series of parameters. However, the maximum number of parameters that can be passed to a SQL Server stored procedure is 2,100. Therefore, in the case of a large table, this method could not be used. Furthermore, preprocessing is required on the server side in order to format the individual parameters into a tabular form.

  2. Create multiple SQL statements that can affect multiple rows, such as UPDATE. The statements can be sent to the server individually or in batched form. Even if they are sent in batched form, the statements are executed individually on the server.

  3. Another way is to use delimited strings or XML documents to bundle data from multiple rows and columns and then pass these text values to parameterized SQL statements or stored procedures. The drawback of this approach was that you needed to validate the data structure in order to unbundle the values.

The .NET Framework then makes it easy to pass in an IEnumerable<SqlDataRecord> (or a DataTable) as a table-valued parameter.
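
For reference, here’s a minimal end-to-end sketch on the T-SQL side (the type, procedure, and column names are mine, not Ben’s):

    CREATE TYPE dbo.OrderLineType AS TABLE
    (
        ProductID int NOT NULL,
        Quantity  int NOT NULL
    );
    GO

    CREATE OR ALTER PROCEDURE dbo.InsertOrderLines
        @Lines dbo.OrderLineType READONLY  -- table-valued parameters must be READONLY
    AS
    BEGIN
        SELECT ProductID, Quantity
        FROM @Lines;
    END;
    GO

    DECLARE @Lines dbo.OrderLineType;
    INSERT INTO @Lines (ProductID, Quantity) VALUES (1, 10), (2, 5);
    EXEC dbo.InsertOrderLines @Lines = @Lines;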

Comments closed

Temp Table Caching And Reuse

Shane O’Neill ran into an error with his stored procedure call:

We store the results in a temporary table first.

Don’t worry, that’s not the end of the post. That’s not even the point of this post. It is, however, what I was doing when I came across a weird error.

Let me show you!

INSERT EXEC failed because the stored procedure altered the schema of the target table.

He wasn’t really trying to alter the schema of that temp table, but read on to see the context and solution.
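
For context, the documented way to trigger that message is DDL from inside the called procedure against the INSERT…EXEC target, which is visible in the procedure’s scope. Shane’s case was subtler, but a deliberate repro looks something like this (object names are mine):

    CREATE TABLE #Results (ID int);
    GO
    CREATE OR ALTER PROCEDURE dbo.BreakInsertExec
    AS
    BEGIN
        -- The caller's temp table is visible here, so this DDL changes
        -- the schema of the INSERT...EXEC target mid-statement.
        ALTER TABLE #Results ADD ExtraCol int;
        SELECT 1 AS ID;
    END;
    GO
    INSERT INTO #Results (ID)
    EXEC dbo.BreakInsertExec;
    -- Msg 556: INSERT EXEC failed because the stored procedure altered the schema of the target table.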

Comments closed

Rowgroup Elimination In Stored Procedures

Erik Darling notes a parameter sniffing problem when trying to use rowgroup elimination in a stored procedure:

So where are we? Well, we found that Rowgroup Elimination is possible in stored procedures with ColumnStore indexes, but that the cached plan doesn’t change based on feedback from that elimination.

  • Good news: elimination can occur with variables passed in.
  • Bad news: that cached plan sticks with you like belly fat at a desk job

Remember our plan? It used a Stream Aggregate to process the MAX. Stream Aggregates are preferred for small and/or ordered sets.

Great post, Brent.
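
For reference, the pattern being tested looks roughly like the sketch below (the table and column names are placeholders of mine). With a clustered columnstore index whose segments line up with the date column, STATISTICS IO reports a “segment skipped” count when elimination happens:

    CREATE OR ALTER PROCEDURE dbo.MaxValueForRange
        @StartDate datetime2,
        @EndDate   datetime2
    AS
    BEGIN
        SELECT MAX(SomeValue) AS MaxValue
        FROM dbo.BigColumnstoreTable
        WHERE EventDate >= @StartDate
          AND EventDate <  @EndDate;  -- range predicate on the segment-aligned column
    END;
    GO

    SET STATISTICS IO ON;  -- look for "segment reads N, segment skipped M" in Messages
    EXEC dbo.MaxValueForRange @StartDate = '2018-01-01', @EndDate = '2018-02-01';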

Comments closed

Using Startup Stored Procedures

John Morehouse explains how to set up a startup stored procedure, as well as some of the risks involved:

Startup procedures automatically execute whenever SQL Server is started.  Where would you use this?  One example is if you had an application that required the use of a global temporary table, you could use a startup procedure to create it.  This would allow the table to be immediately accessible to any application that requires it.

Another possibility is that you could “warm up” the cache so that data is already cached when an application needs it.  This would help prevent that initial I/O hit the first time the query is called, thus potentially improving initial performance.

Click through for details.  I’ve created a couple of these, but they rarely come to mind and that’s one of the big problems:  if there is an issue, there’s probably quite a bit of investigative work that would take place before somebody remembers that hey, these things exist.
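
For reference, marking a procedure as a startup procedure is a one-liner with sp_procoption. The procedure name below is a placeholder; the procedure itself must live in master and cannot take parameters:

    USE master;
    GO
    -- Setting the first startup procedure also flips the
    -- 'scan for startup procs' instance option to 1 automatically.
    EXEC sp_procoption
        @ProcName = N'dbo.CreateGlobalTempTables',
        @OptionName = 'startup',
        @OptionValue = 'on';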

Comments closed

NULL Parameters

Riley Major turns this T-SQL Tuesday into thoughts on procedure parameterization:

But what if the caller wanted the date to be “empty” (i.e. 1900-01-01)? And what if a NULL is passed?

In our environment, we’ve disallowed NULLs from our table fields. We understand that NULL is actually information – it says that the data is unknown – but we believe that for most data fields, there are non-NULL values which just as effectively represent unknown. Typically, 0’s and empty strings (and the “blank” date 1900-01-01) serve that purpose. And those values are more forgiving during coding (they equal things; they don’t make everything else “unknown”), and we accept the risk of paying little attention to which parts of our logic touched “unknown” values.

It’s an interesting look at dealing with optional and default parameters within procedures.
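
As a minimal sketch of that convention (the procedure and parameter names are mine), a parameter default plus a COALESCE maps both “not passed” and an explicit NULL to the blank date:

    CREATE OR ALTER PROCEDURE dbo.SaveOrder
        @OrderDate datetime = NULL  -- optional: NULL means "caller didn't say"
    AS
    BEGIN
        -- Collapse NULL to the sentinel "blank" date so downstream columns stay NOT NULL.
        SET @OrderDate = COALESCE(@OrderDate, '19000101');
        SELECT @OrderDate AS EffectiveOrderDate;
    END;
    GO

    EXEC dbo.SaveOrder;                            -- returns 1900-01-01
    EXEC dbo.SaveOrder @OrderDate = NULL;          -- also returns 1900-01-01
    EXEC dbo.SaveOrder @OrderDate = '2018-03-13';  -- returns the supplied date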

Comments closed

Tuning Indexes For Stored Procedures

Kendra Little offers advice on index tuning within stored procedures:

The best way is to run the stored procedure yourself to generate and save an “actual” execution plan, which contains the estimates SQL Server used when it generated the plan as well as actual rowcounts, actual memory granted, etc. It will also contain a green tooltip with a “missing index request” if SQL Server thinks an index would help.

This is an introductory-level post which contains good advice.
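
If you’d rather capture the actual plan without clicking around in Management Studio, SET STATISTICS XML is one option (the procedure and parameter below are placeholders):

    SET STATISTICS XML ON;  -- each statement also returns its actual plan as XML
    EXEC dbo.SomeProcedure @SomeParameter = 42;
    SET STATISTICS XML OFF;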

Comments closed

View Performance

Grant Fritchey looks at view performance vis-a-vis stored procedures:

The difference in the performance including compile time for the procedure alone is 700 microseconds better on average than the view. That’s an 8% difference. It was almost that high for the view that used the procedure at 7%.

If we’re just talking compile time then, there is a significant win if we avoid the view. This is no doubt because of the extra work involved in unpacking the view and going through the simplification process within the optimizer. Plus, the view alone in our query was parameterized by the optimizer in order to assist its performance over time (as we saw in the average results without the recompile). All that extra work explains the 8% difference.

Read the whole thing.
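
If you want to run this kind of comparison yourself, here’s a rough sketch; the object names are placeholders, and DBCC FREEPROCCACHE evicts every plan on the instance, so keep it to a test system:

    DBCC FREEPROCCACHE;      -- force fresh compiles (test systems only!)
    SET STATISTICS TIME ON;  -- "parse and compile time" shows up per statement
    SELECT * FROM dbo.SomeView WHERE ID = 42;
    EXEC dbo.SomeProcedure @ID = 42;
    SET STATISTICS TIME OFF;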

Comments closed

Temporary Stored Procedures

Jana Sattainathan discusses temporary stored procedures:

The real benefit of these procedures is when they contain a lot of logic that you need on a temporary basis but do not want to clutter the existing stored procedure list. You could even have multiple temporary procedures that call each other. I would not go overboard with this. It is just a convenience.

I don’t often see these in use; when I have, they’re in environments where rights to create normal stored procedures are locked down and somebody wants to do something as a one-off (like testing an operation against production data).  In other words, the sketchy things we don’t admit to each other that we do…
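
For anyone who hasn’t seen one, a local temporary procedure looks like this (the table name is a placeholder):

    -- One # prefix = local temp procedure, visible only to this session and
    -- dropped automatically when the session closes (## would make it global).
    CREATE PROCEDURE #QuickRowCheck
        @MinID int
    AS
    BEGIN
        SELECT COUNT(*) AS RowsAtOrAbove
        FROM dbo.SomeTable
        WHERE ID >= @MinID;
    END;
    GO

    EXEC #QuickRowCheck @MinID = 100;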

1 Comment