SQL Server spinlocks are famously elusive little beasties, tending to stay in the shadows except when they come out to bother you in swarms. I’m not going to add to the documentation of where specific spinlock types show up, or of how to respond to contention on different types; the interested reader likely already knows where to look. Hint: these days, Chris Adkin is quite the spinlock exterminator.
In preparation for Bob Ward’s PASS Summit session, I figured it would make sense to familiarise myself a bit more with spinlock internals, since I have in the past found it frustrating to try to get a grip on them. The fact is, these are actually simple structures that are easy to understand, and, as usual, a few public symbols go a long way. This being undocumented territory, the usual caveats apply: don’t get too attached to implementation details.
Spinlocks are a testament to the engineering that went into the SQLOS model. I appreciate Ewald’s explanation of the topic.
Jovan Popovic has a couple of posts on JSON. First, using OPENJSON to generate a tally table:
Problem: I want to dynamically generate a table of numbers (e.g., from 0 to N). Unfortunately, we don’t have this kind of function in SQL Server 2016, but we can use OPENJSON as a workaround.
OPENJSON can parse an array of numbers such as [1,25,3,5,32334,54,24,3] and return a table of [key, value] pairs. The values will be the elements of the array, and the keys will be their indexes (i.e., numbers from 0 to the length of the array minus 1). In this example I don’t care about the values; I just need the indexes.
Well, that’s one way to do it.
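The trick described in the quote can be sketched as follows. This is my own minimal version, not the exact code from Jovan’s post, and it assumes SQL Server 2016 so that OPENJSON is available:

```sql
-- Build a JSON array of N+1 zeroes with REPLICATE, then let OPENJSON
-- return one row per element; [key] is the 0-based index of each element.
DECLARE @N int = 10;

SELECT CAST([key] AS int) AS n
FROM OPENJSON('[0' + REPLICATE(',0', @N) + ']');
-- Yields the numbers 0 through @N.
```

Note that OPENJSON returns [key] as nvarchar in its default schema, hence the cast; for guaranteed ordering you’d add an ORDER BY on the computed column.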
You might notice that the table scans take the majority of the query cost. The cost of the FOR JSON (JSON SELECT) operator is 0% compared to the others. Also, since we are joining small tables (one sales order and a few details), the cost of the JOIN is minor. Therefore, if you are processing small requests, there will be no performance difference between formatting JSON on the client side and in the database layer.
This comment was actually due to a bug in the AdventureWorks CTP 3 database. The good news is that JSON formatting doesn’t appear to introduce obvious performance problems, but I’d like to see some more thorough performance tests.
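For reference, database-side formatting of the sort being benchmarked looks something like this sketch; it assumes the AdventureWorks sample schema, and the exact queries in Jovan’s post may differ:

```sql
-- Format one sales order, with its detail lines nested as a JSON array.
-- FOR JSON PATH shapes the output; the inner subquery becomes the
-- "Details" array within each order object.
SELECT soh.SalesOrderID,
       soh.OrderDate,
       (SELECT sod.ProductID, sod.OrderQty, sod.LineTotal
        FROM Sales.SalesOrderDetail AS sod
        WHERE sod.SalesOrderID = soh.SalesOrderID
        FOR JSON PATH) AS Details
FROM Sales.SalesOrderHeader AS soh
WHERE soh.SalesOrderID = 43659
FOR JSON PATH, ROOT('orders');
```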
Both posts via Database Weekly.
Filtering events from an Extended Events file is even easier. Open the .xel file within Management Studio, then select Extended Events | Filters (you can also select the Filters icon in the Extended Events toolbar).
This may be the only case in which XE is easier than a trace…
Meagan Longoria is keeping a list of all the things changing in Power BI. It’s a long one:
Since it’s part of my job to stay on top of the latest Power BI features and capabilities (and because I like lists), I decided to keep a list of the features released for easy reference. It’s much more convenient for me to have a single place to reference when I’m discussing specific capabilities and updates with clients and colleagues. I’m now sharing my list with you.
I’m glad that somebody’s keeping up with everything changing in Power BI. I’m equally glad that person isn’t me.
Paul Turley takes a look at how SSRS and Power BI are maturing. One of the key grafs for me:
In SQL Server 2016, Reporting Services is getting a significant face lift on several fronts. The HTML renderer has been completely rewritten to emit pure HTML 5 to produce consistent output in every modern browser on every device. This capability is in the current CTP today.
I hated having people install executables to view SSRS reports, hated how Firefox and Chrome displayed reports differently than IE, and hated the occasional insoluble error brought about by these two things. SSRS was due for a modernization, and I hope to look at it again in 2016. Between these two tools, R support, and PolyBase, SQL Server 2016 is really shaping up to be a huge release for BI teams.
The way MergeUi used to work was that it enumerated the schemas and tables in a project and let you create a merge statement in the post-deploy script. The problem with that approach is that you may want different versions of the table for different environments, or you may want to put the merge in a different script and either reference it using :r imports or not reference it at all.
The new way it works is that, instead of enumerating tables, it now enumerates script files (pre-deploy, post-deploy, included, not included, etc.) and lets you choose which table to add, whether or not the table has been added before.
I’ve not used this tool before, but it’s good to know that it’s available on GitHub.
The way Microsoft has implemented the Always Encrypted feature is to keep all of the data in the tables encrypted. An application that needs to look at the data has to use the new enhanced ADO.NET library, which gives your application the methods to encrypt and decrypt data.
This way, the only way to insert data into a table that contains encrypted columns is to use parameterized insert statements from your application. It is not even possible to insert data from SQL Server Management Studio; if we try, the statement will fail.
This way, we ensure that only the people using the application can see unencrypted data, thus reducing the number of people with direct access to sensitive data.
If you go down this route, it looks like the only method available for modifying data is through ADO.NET, although that could change later. My biggest concern here is how much of a performance hit, if any, systems will take.
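For the curious, a column set up for Always Encrypted looks something like this sketch. The key name (CEK_Auto1) and table are hypothetical; in practice, SSMS or PowerShell provisions the column master key and column encryption key before you can run this:

```sql
-- Deterministic encryption allows equality lookups but requires a BIN2
-- collation on string columns; randomized encryption is stronger but
-- supports no operations on the ciphertext.
CREATE TABLE dbo.Patients
(
    PatientID int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
-- Plain T-SQL inserts against SSN will fail; the client must connect
-- with Column Encryption Setting=Enabled and use parameterized statements.
```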
After you provision a Microsoft Azure VM with SQL Server, there are a few more steps you need to take to allow remote connections. The procedure below starts with a freshly provisioned Azure VM and walks through the process of establishing a connection via SQL Server Management Studio installed on an on-premises workstation.
Note that this is Azure IaaS, not Azure SQL Database.
Pieter Vanhove has published his Policy-Based Management-based DBA Morning Checklist and has some post-Summit additions:
Optimize for Ad Hoc Workloads
The policy is going to check whether the server setting Optimize for Ad Hoc Workloads is set to True. By default, this setting is set to False.
The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single-use ad hoc batches. More information can be found at https://msdn.microsoft.com/en-us/library/cc645587.aspx
I don’t see any downside to enabling this setting.
Not many shops use PBM, so I’m happy to see Pieter contributing this to the general community.
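For anyone who wants to flip the switch outside of Policy-Based Management, checking and enabling the setting is a one-liner (plus the advanced-options dance):

```sql
-- 'optimize for ad hoc workloads' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Verify the running value.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'optimize for ad hoc workloads';
```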
Nic Cain has an outstanding blog post on enabling Instant File Initialization in SQL Server 2016, specifically wondering what happens when group policy explicitly prohibits setting Perform Volume Maintenance Tasks privileges:
Much to my surprise the virtual SQL account showed up in the PVMT secpol setting. I had no idea how it got there. Reviewing the setting I was able to confirm that the account I used for install was not able to make any adjustments and yet somehow the permissions were set.
I’m happy to hear why I’m wrong, but I’d consider this a reasonable instance of privilege escalation: I may not want the DBA to be able to perform volume maintenance tasks on just any server, but I do want him to do it on SQL Server instances.
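As an aside, a quick way to verify whether IFI is actually in effect, rather than inspecting secpol, is a DMV query. Note the caveat that the instant_file_initialization_enabled column was only added in later servicing updates, so on older builds the usual check is trace flags 3004 and 3605 plus the error log:

```sql
-- Shows, per service, whether the engine could take advantage of
-- Perform Volume Maintenance Tasks at startup.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server (%';
```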