Press "Enter" to skip to content

Month: August 2019

Local Database Builds with Jenkins

Steve Jones continues a series on continuous integration, containers, and all that is good in life:

The only way to build a database project in SQL Server is with an actual SQL Server. In this case, I don’t have any code that would error on LocalDB, so I’ll just use that. I could specify my local SQL Server development database if I had the need.

This is a test build, so I also don’t need any SQL Compare options or other switches.

Getting code into source control and building continuous integration around it has become a lot easier over the past several years. Easy enough that you can work a simple system out in a day or two of experimentation.
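
As an aside, if you want to confirm that a build step really is pointed at LocalDB, a quick T-SQL smoke test does the job. This is a generic sketch rather than part of Steve’s pipeline, and the default LocalDB instance name is an assumption:

-- Run over the build's connection string (e.g. the default
-- (localdb)\MSSQLLocalDB instance) to confirm what you connected to.
SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('IsLocalDB') AS is_localdb; -- returns 1 on LocalDB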

Finding Unused Indexes in SQL Server

Monica Rathbun shows us how we can find and remove unused indexes in SQL Server:

Indexes can be incredibly beneficial to your database performance; however, they do come with a cost—indexes both consume storage space and affect insert performance. Therefore, it is important as part of your index maintenance procedures that you periodically check to see if your indexes are being used. Many times, indexes are created in the belief they are needed but in fact they are never used. You can reduce that IO overhead on inserts when you remove unnecessary indexes.

I use much the same script. Typically, I won’t drop an index unless total reads is 0, or is at least two or three orders of magnitude smaller than total writes. Sometimes you have indexes which don’t get used frequently but support very expensive or time-sensitive reports, and you don’t want those getting caught up in your dragnet.
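
Monica’s post has the full script. As a rough sketch of the shape such a query takes (not her exact code), you can compare reads against writes in sys.dm_db_index_usage_stats, keeping in mind that these counters reset when the instance restarts:

-- Reads vs. writes per nonclustered index in the current database.
-- An index missing from the DMV has not been touched since restart.
SELECT o.name AS table_name,
       i.name AS index_name,
       ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) AS total_reads,
       ISNULL(s.user_updates, 0) AS total_writes
FROM sys.indexes AS i
    INNER JOIN sys.objects AS o
        ON o.object_id = i.object_id
    LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s
        ON s.object_id = i.object_id
        AND s.index_id = i.index_id
        AND s.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
    AND i.type_desc = 'NONCLUSTERED'
ORDER BY total_reads ASC, total_writes DESC;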

Why ALTER Implies DROP

Andy Mallon explains why granting ALTER operations to a user means that you’ve granted DROP permissions:

Dropping stuff is destructive. Anyone who’s ever dropped a plate or a car knows that. But dropping a table, procedure, or database makes it go away completely. The only way to un-drop something is to recover it from backup, or to re-deploy from source control (you do have all your code & schema in source control, right?). Getting back to the original question’s premise (it’s OK to change the object but not to completely make it disappear), I ask…does it even matter? What are you preventing? Is dropping an object worse than altering it?

Read on for Andy’s reasoning.
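To make the premise concrete, here is a minimal sketch with made-up names. Per the documentation, ALTER on a schema includes permission to create, alter, and drop objects within that schema:

-- Hypothetical principal and table names throughout.
GRANT ALTER ON SCHEMA::dbo TO SomeUser;

EXECUTE AS USER = 'SomeUser';
DROP TABLE dbo.SomeTable; -- succeeds: ALTER on the schema covers DROP
REVERT;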

Dropping a Column as a Metadata Operation

Max Vernon takes us through column dropping:

Dropping a column that is not referenced by any other object lets the storage engine simply mark the column definition as no longer present. Deleting the metadata invalidates the procedure cache. Any query that subsequently references the affected table will result in the plan for that query being recompiled. The recompile operation can only return columns that currently exist in the table. As a result, the storage engine skips the bytes stored in each page for the dropped column, as if the column no longer exists.

This has some nice benefits in practice around minimizing deployment-related downtime.
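
As a minimal illustration (table and column names are hypothetical), the drop itself is a quick metadata change, and the space can be physically reclaimed later during a maintenance window:

-- Metadata-only: the column's bytes remain on the pages for now.
ALTER TABLE dbo.SomeTable DROP COLUMN SomeColumn;

-- Later, rebuild the table to actually reclaim the space.
ALTER TABLE dbo.SomeTable REBUILD;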

Optimizing for Sequential Keys

Dennes Torres gives us a reminder of what the world was like before a new feature in SQL Server 2019:

Once upon a time, there was a SQL Server version that didn’t have row locks. The minimum level of lock was the page lock: every time you wanted to lock a record, an entire page was locked.

At that time, we were between the devil and the deep blue sea: if we chose a clustered index with an ascending key, we would create what was called a hot spot, with all the records being inserted on the same page and creating a bottleneck. On the other hand, if we created a clustered index with a non-ascending key, we would suffer from index fragmentation and page splits, and have huge administrative trouble finding the correct fill factor for each index to get through the period between re-index jobs without too many page splits.

Dennes covers the specific case which this feature intends to cover and how we got there.
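
The SQL Server 2019 feature in question is the OPTIMIZE_FOR_SEQUENTIAL_KEY index option. A minimal example on a made-up table:

-- Reduces last-page insert contention on the ascending clustered key.
CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY(1, 1) NOT NULL,
    OrderDate datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);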

Filtering Stored Procedure Results

Thomas Rushton shows how to filter any stored procedure’s result set and uses sp_who as an example:

sp_who – useful – up to a point. Particularly when the server is busy and you’re looking for something specific (e.g. to see if certain processes are out of a database before running an update).

If the server is busy – don’t you wish there was a way to run something like

sp_who WHERE dbname = 'foo'

Yeah. Unfortunately, it doesn’t work like that.

That doesn’t work, but Thomas shows you what does.
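
One common technique, whether or not it’s the one Thomas lands on, is to capture the procedure’s output with INSERT ... EXEC and then filter the temp table. The column list below matches sp_who’s documented result set:

-- Capture sp_who's output so it can be queried like a table.
CREATE TABLE #who
(
    spid smallint,
    ecid smallint,
    status nchar(30),
    loginame nchar(128),
    hostname nchar(128),
    blk char(5),
    dbname nchar(128),
    cmd nchar(16),
    request_id int
);

INSERT INTO #who
EXEC sp_who;

SELECT * FROM #who WHERE dbname = 'foo';

DROP TABLE #who;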

Sampling and Estimating Rare Events

Yi Liu takes us through a process to estimate rare events:

Naturally, we get an unbiased estimate of the overall prevalence of violation if we sample the videos uniformly from the population and have them reviewed by human raters to estimate the proportion of violating videos. We also get an unbiased estimate of the violation rate in each policy vertical. But given the low probability of violation and wanting to use our rater capacity wisely, this is not an adequate solution — we typically have too few positive labels in uniform samples to achieve an accurate estimate of the prevalence, especially for those sensitive policy verticals. To obtain a relative error of no more than 20%, we need roughly 100 positive labels, and more often than not, we have zero violation videos in the uniform samples for rarer policies.

This is similar in nature to testing for rare diseases, where a random sample of N people in the population is likely to turn up 0 cases of it.
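
For a rough check on the “roughly 100 positive labels” figure (my own arithmetic, assuming a 95% confidence interval on a binomial proportion with small p): with k expected positive labels in a uniform sample of size n, the relative margin of error is about 1.96 / sqrt(k), so

\hat{p} = \frac{k}{n}, \qquad \frac{1.96\sqrt{\hat{p}(1-\hat{p})/n}}{\hat{p}} \approx \frac{1.96}{\sqrt{k}} \le 0.2 \implies k \ge \left(\frac{1.96}{0.2}\right)^2 \approx 96.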

A Plan for Troubleshooting Plans

Bert Wagner takes us through a workflow for troubleshooting performance issues in SQL Server using execution plans:

With the query pasted and formatted in my SSMS editor window, I like retrieving the estimated execution plan first, and then pasting the query into a second editor window and executing the query with the “Include Actual Execution Plan” option turned on. For bonus points, I’ll split the SSMS window vertically so I can start looking at the estimated execution plan while the query runs and returns the actual execution plan. I like this combination because I (almost) immediately receive my estimated execution plan and can start looking for problems. Once the query on the right finishes executing and I get the actual plan with all of its lovely run-time stats, I usually switch to looking at that one.

Even if your approach is quite different, it’s good to compare and contrast.
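
If you prefer scripting it to clicking through SSMS, the same two plan types map to session settings. A minimal sketch with a placeholder query:

-- Estimated plan: returns plan XML without executing the query.
SET SHOWPLAN_XML ON;
GO
SELECT * FROM dbo.SomeTable; -- placeholder query
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: runs the query and returns plan XML with run-time stats.
SET STATISTICS XML ON;
GO
SELECT * FROM dbo.SomeTable;
GO
SET STATISTICS XML OFF;
GO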

Migrating Old Databases to New Versions

Chrissy LeMaire walks us through migrating an old, old application database to SQL Server 2017:

There was even a linked server in the mix, but our biggest concerns revolved around the changing collation and the Agent jobs, which were known to be brittle.

The destination test server was an existing shared server, which mirrored the scenario that would play out in production. And while the databases only needed to exist on the new server for a limited period of time, these migrated databases were going to be the most important databases on the entire instance. This meant that the SQL Server configs were going to have to cater to this app’s needs. One exception was the collation, as the accent sensitivity was determined not to be a big deal and the vendor agreed.

Read on to see how Chrissy did it. The answer, naturally, is with dbatools.
