Press "Enter" to skip to content

Day: February 17, 2017

MAXDOP-Related Extended Events

Paul Randal looks at a couple of Extended Events which help you find CXPACKET waits and degrees of parallelism:

Now I’ll set up a simple Extended Events session to track down the offending code (based on the query from here). It’s very important that you query the sys.dm_xe_map_values DMV to find the correct number to use in the query for the CXPACKET wait, as these numbers often change from release to release, and even in Service Packs. For instance, CXPACKET was 191 in SQL Server 2014 RTM, but is 190 in the 2014 build I’m using.

Be very careful about running this in production, as the event will fire for *every* wait that occurs and so will likely affect your workload throughput, even though it’ll short-circuit if the wait isn’t CXPACKET. I’ll show you a better event to use lower down.

These are good events to know.
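
As a quick illustration of that lookup (a minimal sketch, not Paul’s session definition), you can grab the build-specific map key for CXPACKET straight from the DMV before plugging it into the event session’s predicate:

-- The CXPACKET key changes between versions and even service packs,
-- so always look it up on the build you are targeting.
SELECT mv.map_key, mv.map_value
FROM sys.dm_xe_map_values AS mv
WHERE mv.name = N'wait_types'
      AND mv.map_value = N'CXPACKET';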

Logging Space Used

Andrew Peterson has a script to grab space used for each database:

After reviewing the posts, I found that many, perhaps all, of the procedures and commands provided valuable information. Just not what I wanted. We have:

sp_spaceused, but it returns data in two data sets. Not ideal.  Next we have…

sp_databases, but it just returns a list of databases and the size. Not ideal.  Or use…

DBCC showfilestats, OK, but still incomplete. And since it is not listed on the main DBCC page at MSDN, it’s most likely deprecated.

And speaking of DBCC, there are other commands to consider. But what I was really hoping to find was a simple way to use the sys.database_files DMV. It is very close, but not quite complete.

Read on for the solution. One small change I’d prefer is using Aaron Bertrand’s sp_foreachdb for iterating. Or write a PowerShell script if you want to take the looping outside of SQL Server.
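
As a rough sketch of the sys.database_files approach (illustrative only, not Andrew’s script), something like this pairs the catalog view with FILEPROPERTY to report allocated and used space per file in the current database:

-- size and FILEPROPERTY(..., 'SpaceUsed') are both reported in 8 KB pages,
-- so dividing by 128 converts them to MB.
SELECT
    DB_NAME()                                      AS database_name,
    f.name                                         AS logical_file_name,
    f.type_desc,
    CAST(f.size / 128.0 AS decimal(18, 2))         AS allocated_mb,
    CAST(FILEPROPERTY(f.name, 'SpaceUsed') / 128.0 AS decimal(18, 2)) AS used_mb
FROM sys.database_files AS f;

Wrapping it in sp_foreachdb (or the PowerShell loop mentioned above) turns it into a per-database collection.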

R 3.4.0 Performance Improvements

David Smith discusses performance improvements upcoming in R 3.4.0:

A “just-in-time” (JIT) compiler will be included. While the core R packages have been byte-compiled since 2011, and package authors also have the option of byte-compiling the R code they contain, it was tricky for ordinary R users to gain the benefits of byte-compilation for their own code. In 3.4.0, loops in your R scripts and functions you write will be byte-compiled as you use them (“just-in-time”), so you can get improved performance for your R code without taking any additional actions.

Stay tuned for the release.

Understanding sp_who2

Kendra Little explains what the sp_who2 procedure does and how to read its results:

Let’s talk about sp_who2

I started out using sp_who2, also! And I was often confused by sp_who2.

sp_who2 is a built-in stored procedure in SQL Server.

  • Shows a lot of sessions, even on an idle instance
  • Doesn’t tell you much about what it’s doing

Here’s what an idle SQL Server looks like in sp_who2

This is my dev SQL Server instance. I have a few sessions open, but the only one which was executing anything was the one you see, where I ran sp_who2. I don’t have any agent jobs scheduled on this thing. It’s just waiting for something to happen.

It’s hard to know what to look at, because we see so much. And only 19 of the 49 sessions are on the screen, too.

Kendra then goes on to explain that there are better ways of getting this information and plugs sp_whoisactive.  I’m 100% in agreement.
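
For reference, both are plain stored procedure calls; a minimal sketch (note that sp_WhoIsActive is Adam Machanic’s free community procedure and has to be installed separately):

EXEC sp_who2;        -- built in: every session, little detail, hard to filter
EXEC sp_WhoIsActive; -- shows active requests only, with query text and wait information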

Trivial Plans With Columnstore

Dmitry Pilugin looks at columnstore query behavior when trivial plans get involved:

Both queries are now fully optimized, and that leads to different plans. First of all, both queries run in Batch Mode, which is much faster than Row Mode.

In the first query, we see a Hash Match Aggregate instead of a Stream Aggregate; more to the point, you may see that Actual Number of Rows is 0, because all the rows were aggregated locally at the Storage Engine level, and the property Actual Number of Locally Aggregated Rows = 60855 shows where they went. This is faster than a regular aggregation and is known as Aggregate Pushdown.

In the second query, you may observe a new Window Aggregate operator, which is faster than a Window Spool and also runs in Batch Mode.

Read the whole thing.  Dmitry also looks at SQL Server vNext and how it handles the same trivial-plan-generating scenario.
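
If you want to see aggregate pushdown for yourself, here is a hypothetical sketch (dbo.FactSales and its columns are assumed names, not Dmitry’s setup): run a simple aggregate against a table with a clustered columnstore index and check the scan’s Actual Number of Locally Aggregated Rows property in the actual execution plan.

-- With a fully optimized, batch mode plan, the aggregation can be pushed down
-- into the Columnstore Index Scan instead of flowing every row to the aggregate.
SELECT COUNT_BIG(*)     AS row_count,
       SUM(SalesAmount) AS total_sales
FROM dbo.FactSales;     -- assumes a clustered columnstore index on this table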

More On Columnstore String Handling

Niko Neugebauer focuses on two potential columnstore issues, NULL strings and string aggregation:

You can see a filter iterator taking an estimated 9% of the resources and filtering over 5.1 million rows that were taken out of the lineitem_cci table (the Columnstore Index Scan operation). Before the filter took place, all of the table data was read and the aggregate values were calculated, meaning a significant waste of resources (both while the data was being filtered and while it was occupying memory).

The predicate properties of the Columnstore Index Scan are shown in the picture on the left, and you can see that nothing has been pushed into the Storage Engine, making this query perform quite slowly. On my local test VM it takes on average 0.7 seconds to execute while spending over 1.9 seconds of CPU time. This happens because of the inability of Columnstore Indexes to push a NULL expression predicate into the storage engine.

To solve this problem in this test case, I will need to rewrite the query and remove the NULL expression by substituting it with = ‘RAIL’ OR IS NULL logic.

Definitely worth a read.
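
The rewrite pattern Niko describes looks roughly like this (a sketch with an assumed column name, not his exact query against lineitem_cci):

-- Before: the NULL-handling expression cannot be pushed into the storage engine,
-- so every row flows out of the scan and into a separate Filter operator.
SELECT COUNT_BIG(*)
FROM dbo.lineitem_cci
WHERE ISNULL(l_shipmode, 'RAIL') = 'RAIL';   -- l_shipmode is an assumed column name

-- After: the same logic written as a plain predicate, which can be evaluated at the scan.
SELECT COUNT_BIG(*)
FROM dbo.lineitem_cci
WHERE l_shipmode = 'RAIL'
   OR l_shipmode IS NULL;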

Avoiding Hard-Coded Paths In PowerShell

Jana Sattainathan gives a few options for getting around hard-coded paths in a PowerShell script:

What if the KillerApp’s home folder suddenly moves?

Now, how do you make your app work with all its scripts without having to change code if you move it to a different folder?

You could now change the initial script that dot-sources all the functions to alter the path, and you are all set. This is still not ideal, because you have to make a change whenever the location changes.

Click through for several options, including PSDrives and even automatically dot-sourcing PowerShell scripts in the current and all child directories.

Batch Requests Per Second

Tara Kizer explains the Batch Requests/sec metric:

WHAT IS BATCH REQUESTS/SEC?

Batch Requests/sec is a performance counter that tells us the number of T-SQL command batches received by the server per second. It is in the SQLServer:SQL Statistics performance object for a default instance or MSSQL$InstanceName:SQL Statistics for a named instance.

WHAT COMPRISES A BATCH?

If I execute a stored procedure that has multiple queries and calls to other stored procedures, what will Batch Requests/sec show? Let’s test it to find out.

Click through for the answer.
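
If you would rather watch the counter from T-SQL than from PerfMon, a minimal sketch: the DMV exposes the raw cumulative value, so take two samples and divide the difference by the elapsed seconds to get an actual per-second rate.

-- Raw, cumulative value of the counter; sample it twice over a known interval
-- to turn it into a per-second figure.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';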

Deadlock Priority

Kenneth Fisher tells a story of a deadlock:

Why does it matter that they were system sessions? The important thing to remember here is that these sessions cannot be KILLed. So because they were holding locks on the database (and somehow, even though it was in single user mode, there were multiple sessions with locks in the database; don’t ask me how), I wasn’t able to get the exclusive access I needed.

Interestingly, when I tried to do the ALTER, instead of just hanging I immediately got a deadlock error. I spent a little while trying various things and searching through forums before I went for help on Twitter using the #SQLHELP hashtag.

Read on for the answer, including how deadlock priorities saved the day.
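
For context, deadlock priority is just a session-level setting; a minimal sketch (illustrative only, not Kenneth’s actual fix):

-- A session with a higher deadlock priority is less likely to be chosen as the
-- deadlock victim when it collides with other sessions.
SET DEADLOCK_PRIORITY HIGH;   -- accepts LOW, NORMAL, HIGH, or an integer from -10 to 10
-- ...then run the ALTER DATABASE command that kept losing the deadlock.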
