Press "Enter" to skip to content

Day: March 2, 2017

Understanding The Stack

Ewald Cress digs into stack frames, using a few SQL Server functions as examples:

The rdi register is used as a scratchpad to save the incoming rcx value for later – first it is used to restore the possibly clobbered rcx value for the second child function call, and then it becomes the return value going into rax. However, since we don’t want to clobber its value for our caller, we first save its old value and then, at the very end, restore it. This is done through a traditional push/pop pair, which in x64 is only allowed in the function prologue and epilogue.

Similarly, ebx (the bottom half of rbx) is used to save the incoming value of the second parameter in edx. However, here we don’t use a push/pop pair, but instead save it in a slot in the shadow space. Why two different mechanisms for two different registers? Sorry, that’s one for the compiler writers to answer.

There’s a lot of detail involved here, but it’s well worth the read.
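
If it helps to see the shape of such a function at the source level, here is a rough C sketch with hypothetical names (the post is walking through actual SQL Server internals). Both incoming arguments are needed again after the first child call, and the first one also feeds the return value, which is why the compiler has to park them somewhere that survives the calls.

#include <stdio.h>

/* Stand-in child function, defined only so the sketch compiles and runs. */
static long child(long a, int b)
{
    return a + (long)b;
}

/* Shaped like the function described in the quote: both incoming arguments
   (passed in rcx and edx on Windows x64) must survive across two child calls,
   so the compiler saves them, either in non-volatile registers such as rdi and
   rbx or in shadow-space slots, before the calls clobber the argument
   registers; the first argument is then reused for the return value. */
static long parent(long a, int b)
{
    long first  = child(a, b);
    long second = child(a, b);
    return first + second + a;
}

int main(void)
{
    printf("%ld\n", parent(40, 2));
    return 0;
}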


Figuring Out Cost Threshold For Parallelism

Grant Fritchey uses R to help him decide on a good cost threshold for parallelism value:

With the Standard Deviation in hand, and a quick rule of thumb that says 68% of all values are going to be within two standard deviations of the data set, I can determine that a value of 16 on my Cost Threshold for Parallelism is going to cover most cases, and will ensure that only a small percentage of queries go parallel on my system, but that those which do go parallel are actually costly queries, not some that just fall outside the default value of 5.

I’ve made a couple of assumptions that are not completely held up by the data. Using the two, or even three, standard deviations to cover just enough of the data isn’t actually supported in this case because I don’t have a normal distribution of data. In fact, the distribution here is quite heavily skewed to one end of the chart. There’s also no data on the frequency of these calls. You may want to add that into your plans for setting your Cost Threshold.

This is a nice start.  If you’re looking for a more experimental analysis, you could try A/B testing (particularly if you have a good sample workload), where you track whatever pertinent counters you need (e.g., query runtime, whether it went parallel, CPU and disk usage) under different cost threshold regimes and do a comparative analysis.
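
For anyone who wants to try the arithmetic outside of R, here is a small C sketch of the same idea: compute the mean and sample standard deviation of a set of query costs and take the mean plus two standard deviations as a candidate threshold. The cost values below are made up for illustration, not taken from Grant’s data.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical estimated subtree costs pulled from a plan cache. */
    const double costs[] = { 0.003, 0.4, 1.2, 2.7, 3.1, 4.8, 6.0, 9.5, 14.2, 22.9 };
    const size_t n = sizeof(costs) / sizeof(costs[0]);

    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += costs[i];
    const double mean = sum / (double)n;

    double squares = 0.0;
    for (size_t i = 0; i < n; i++)
        squares += (costs[i] - mean) * (costs[i] - mean);
    const double sd = sqrt(squares / (double)(n - 1));  /* sample standard deviation */

    /* Candidate Cost Threshold for Parallelism: mean plus two standard deviations. */
    printf("mean = %.2f, sd = %.2f, candidate threshold = %.0f\n",
           mean, sd, ceil(mean + 2.0 * sd));
    return 0;
}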


Standard Edition Max Server Memory Changes In 2016 SP1

Randolph West details the changes in max server memory for Standard Edition in SQL Server 2016 SP1:

The memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).

For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.

Randolph has a couple good clarifications on memory limits outside the buffer pool, making this worth the read.


Target Server Memory Vs Total Memory

Arun Sirpal compares the Target Server Memory and Total Server Memory Perfmon counters:

Target Server Memory (KB) is the amount of memory that SQL Server is willing (potential) to allocate to the buffer pool under its current load. Total Server Memory (KB) is what SQL currently has allocated.

Using SQL Server 2014 developer edition (64 bit) my machine has 12GB RAM and maximum server memory is currently set to 8GB and for the purpose of this post I have set minimum server memory to 1GB (Lock Pages in Memory has not been set).

Read on for a nice description laden with Perfmon pictures.


Preventing Event Storms

Kenneth Fisher has some good advice when dealing with event notifications:

One of the most common ways to get an event notification is by email. So what happens when you get 500 emails in a day and only one or two are actionable? Do you read every single email? Spending quite literally hours to find those one or two gems? Or do you just ignore the whole lot and wait for some other notification that there is a problem. Say, by a user calling you?

Next, let’s say you have a job that runs every few minutes checking if an instance is down. When that instance goes down you get an immediate email. Which is awesome! Of course then while you are trying to fix the issue you get dozens more emails about the same outage. That is at best distracting and at worst makes it take longer for you to fix the issue.

Fun story time:  at one point during my work career, there was a person (not me!) who accidentally broke every single SQL Agent job on dozens of instances and nobody noticed it for hours.  These weren’t production instances so it wasn’t the end of the world or anything…except that included in the broken jobs were a bunch which ran every minute.  And alerted every minute.  Via e-mail.  The entire database team essentially lost e-mail access for 3 days as there were so many messages coming in that it overwhelmed our provider’s ability to serve messages to us.  This type of mistake can happen, and if we had put into place some of the things Kenneth talks about, the consequences would have been less severe.
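
One simple way to blunt that kind of storm is to suppress repeats of the same alert for a cooldown window, so the first notification gets through and the follow-ups do not. Here is a minimal C sketch of the idea; the alert key and the fifteen-minute window are invented for illustration, and Kenneth’s post covers the notification-side practices rather than this particular mechanism.

#include <stdio.h>
#include <string.h>
#include <time.h>

#define COOLDOWN_SECONDS (15 * 60)   /* hypothetical suppression window */

/* Remember the last alert key we sent and when we sent it. */
static char   last_key[64];
static time_t last_sent;

/* Returns 1 if this alert should go out, 0 if it repeats one sent inside the window. */
static int should_notify(const char *key, time_t now)
{
    if (strcmp(last_key, key) != 0 || now - last_sent >= COOLDOWN_SECONDS) {
        snprintf(last_key, sizeof(last_key), "%s", key);
        last_sent = now;
        return 1;
    }
    return 0;
}

int main(void)
{
    time_t now = time(NULL);
    printf("%d\n", should_notify("instance_down:SQL01", now));        /* 1: first alert   */
    printf("%d\n", should_notify("instance_down:SQL01", now + 60));   /* 0: suppressed    */
    printf("%d\n", should_notify("instance_down:SQL01", now + 3600)); /* 1: window passed */
    return 0;
}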


How SSDT Publication Works

Ed Elliott has a great post deconstructing the SQL Server Data Tools deployment process:

The first thing to note is the garbage collection, there are like 8 changes to the line so 8 sets of garbage collection that happen which is expensive for a .net app. We get much more processor usage for a more prolonged time and we really start to see some disk activity, write activity just has one peak while the files are written but reading stays quite high throughout the whole process. It could be that it is sql reading from disk (my demo was from a single ssd with a local sql instance) but we can see after the sql cpu settles down the green disk read line stays high so it is unlikely to be pure sqlserver.exe.

What does this tell us?

Well if you have large objects, multi-thousand line stored procs then you better get yourself some serious ssd’s to deploy from. If you have smaller objects then you’ll need CPU and memory – don’t scrimp here!

Check it out.


Changing PowerShell Error Colors

Jana Sattainathan shows how easy it is to change the colors for PowerShell warnings and errors:

How could such a nice tool have such a terrible choice of colors for errors and warnings? I am talking about the ISE and the unreadable contrast of colors.

You can fix it. Open and update your profile by running this from your ISE (which opens your profile in NotePad)

It’s PowerShell, so you can do it in a couple lines.
