Press "Enter" to skip to content

Author: Kevin Feasel

Native Spatial In SQL Server 2016

The CSS SQL Server Engineers team points out that spatial types will be a lot faster in the upcoming version of SQL Server:

The SQL Server development team was able to remove the PInvoke and PUnInvoke activities during T-SQL execution for many of the spatial methods. A critical aspect of the change is that it is fully compatible across the server and client scenarios.

The same source code is used to build the managed C++ implementation and the unmanaged C++ implementation. At the risk of understating this work, the managed C++ code can be compiled with the C++/CLI compiler, creating the managed assembly, and a few clever templates and macros bridge the native variations, allowing native C++ compilation. Any change to the code is made in one source file and built in two different ways.

They also have a couple of demos and point out that if you’re using a spatial index appropriately, the performance benefit from switching to 2016 is upwards of 3x.  A 3x performance improvement with no code changes is nothing to sneeze at.
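For a sense of the workload this targets, here is a minimal sketch (the table and query are my own illustration, not from the linked post) of a distance search that leans on a spatial index and per-row STDistance calls, the sort of spatial method invocation this change is aimed at:

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE dbo.Stores
(
    StoreID INT IDENTITY(1,1) PRIMARY KEY,
    StoreName NVARCHAR(100) NOT NULL,
    Location GEOGRAPHY NOT NULL
);

-- The spatial index is what lets the engine seek instead of testing every row.
CREATE SPATIAL INDEX IX_Stores_Location
    ON dbo.Stores (Location);

-- Find stores within 10 km of a point.
DECLARE @here GEOGRAPHY = GEOGRAPHY::Point(47.6062, -122.3321, 4326);

SELECT StoreID, StoreName, Location.STDistance(@here) AS DistanceMeters
FROM dbo.Stores
WHERE Location.STDistance(@here) <= 10000;
```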


New-TimeSpan

Richie Lee shows off New-TimeSpan:

As is the case with most things, when I find a way of getting something done in a script that is “good enough”, I’ll tend to stick with that method until it is no longer fit for purpose. One such method is printing out the time that something took in PowerShell: many of the scripts on my site use this method to get the duration of a task, and I’ve been using it since PowerShell 1.0.

This is certainly an improvement over the old version.
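In case you haven't seen it, the pattern is only a couple of lines; a minimal sketch (mine, not Richie's exact code):

```powershell
# Capture the start time, do the work, then let New-TimeSpan compute the duration.
$start = Get-Date

# ... the task being measured, e.g. a backup or a long-running query ...
Start-Sleep -Seconds 2

$duration = New-TimeSpan -Start $start -End (Get-Date)
Write-Output ("Task took {0} ({1} seconds)" -f $duration, $duration.TotalSeconds)
```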


CPU Co-Stop

David Klee discusses an important hypervisor-level metric:

VMware’s CPU Co-Stop metric shows you the amount of time that a parallelized request spends trying to line up the vCPU schedulers for the simultaneous execution of a task on multiple vCPUs. It’s measured in milliseconds spent in the queue per vCPU per polling interval. Higher is bad. Very bad. The operating system is constantly reviewing the running processes and checking their runtime states. It can detect that a CPU isn’t keeping up with the others, and might actually flag a CPU as BAD if it can’t keep up and the difference is too great.

This is extremely useful information for DBAs in virtualized environments.  My crude and overly simplistic answer is, don’t over-book vCPUs on hosts running important VMs like your SQL Server instances.
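If you want to watch this yourself, co-stop shows up as %CSTP in esxtop, and you can pull samples with PowerCLI. A rough sketch follows; the server, VM name, and exact counter name are my assumptions and worth verifying against your environment (Get-StatType will list the counters available to you):

```powershell
# Rough sketch: pull recent co-stop samples for a VM via PowerCLI.
Connect-VIServer -Server "vcenter.example.com"

$vm = Get-VM -Name "SQLPROD01"

Get-Stat -Entity $vm -Stat "cpu.costop.summation" -Realtime -MaxSamples 180 |
    Sort-Object -Property Timestamp |
    Select-Object Timestamp, Instance, Value   # Value is ms of co-stop per sampling interval
```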


SQL Server Event Logging

Kendra Little discusses having a reusable event logging tool for your database work:

You can’t, and shouldn’t, log everything, because logging events can slow you down. And you shouldn’t always log to a database, either; you can keep logs in the application tier as well, no argument here.

But most applications periodically do ‘heavy’ or batch database work. And when those things happen, it can make a lot of sense to log to the database. That’s where this logging comes in.

Bonus points if you feed this kind of logging into Splunk (or your logging analysis tool of choice) and integrate it with application-level logging.
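Kendra's post includes her own template; purely to give a flavor of the pattern, here is a minimal sketch of a logging table and procedure that batch jobs could call (all names are mine, not hers):

```sql
-- Minimal event-logging sketch: a narrow table plus a procedure batch jobs call.
CREATE TABLE dbo.EventLog
(
    EventLogID BIGINT IDENTITY(1,1) PRIMARY KEY,
    EventTimeUTC DATETIME2(3) NOT NULL CONSTRAINT DF_EventLog_Time DEFAULT SYSUTCDATETIME(),
    ProcessName NVARCHAR(128) NOT NULL,    -- e.g. the stored procedure or job step
    EventMessage NVARCHAR(2000) NOT NULL,
    RowsAffected BIGINT NULL,
    DurationMs INT NULL
);
GO

CREATE PROCEDURE dbo.LogEvent
    @ProcessName NVARCHAR(128),
    @EventMessage NVARCHAR(2000),
    @RowsAffected BIGINT = NULL,
    @DurationMs INT = NULL
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.EventLog (ProcessName, EventMessage, RowsAffected, DurationMs)
    VALUES (@ProcessName, @EventMessage, @RowsAffected, @DurationMs);
END;
GO

-- A batch process would then bracket its heavy steps:
EXEC dbo.LogEvent @ProcessName = N'NightlyETL', @EventMessage = N'Load step finished',
     @RowsAffected = 125000, @DurationMs = 43000;
```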


Auto-Update Stats Threshold Change

Erik Darling points out that automatic statistics updates will happen more frequently in 2016 for large tables than in prior versions:

Here’s an abridged version of the results for 10-20 million and 30-40 million rows, and how many modifications they took before a stats update occurred. If you follow the PercentMod column down, the returns diminish a bit the higher up you get. I’m not saying that I’d prefer to wait for 20% + 500 rows to modify, by any stretch. My only point here is that there’s not a set percentage to point to.

And, because you’re probably wondering, turning on Trace Flag 2371 in 2016 doesn’t make any difference.

This is a good change, though as Erik points out, if you’re managing very large tables, you might already have the trace flag on and therefore won’t see any difference.
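For context, the static threshold is 500 rows plus 20 percent of the table, while the dynamic behavior (Trace Flag 2371 before 2016, the default under compatibility level 130) is commonly described as roughly SQRT(1000 * rows) for larger tables, with the lower of the two winning. A quick back-of-the-envelope comparison:

```sql
-- Compare the static 20% + 500 threshold with the dynamic SQRT(1000 * rows)
-- approximation for a few table sizes; the dynamic value is the lower of the
-- two once a table is large enough.
SELECT
    RowCnt,
    CAST(RowCnt * 0.20 + 500 AS BIGINT)   AS StaticThreshold,
    CAST(SQRT(1000.0 * RowCnt) AS BIGINT) AS DynamicThreshold
FROM (VALUES (10000), (100000), (1000000), (10000000), (100000000)) AS t(RowCnt);
```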


Synonyms

Aaron Bertrand discusses synonyms:

Let’s say you have a table called dbo.BugReports, and you need to change it to dbo.SupportIncidents. This can be quite disruptive if you have references to the original name scattered throughout stored procedures, views, functions, and application code. Modern tools like SSDT can make a refactor relatively straightforward within the database (as long as queries aren’t constructed from user input and/or dynamic SQL), but for distributed applications, it can be a lot more complex.

A synonym can allow you to change the database now, and worry about the application later – even in phases. You just rename the table from the old name to the new name (or use ALTER TABLE ... SWITCH and then drop the original), and then create a synonym named with the old name that “points to” the new name.

I’ve used synonyms once or twice, but they’re pretty low on my list, in part because of network effects:  if I create this great set of synonyms but the next guy doesn’t know about them, it makes maintenance that much harder.
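The mechanics are just a rename plus a synonym. Using the BugReports example from the post, a sketch might look like this:

```sql
-- Rename the table, then leave a synonym behind under the old name so that
-- existing procedures, views, and application code keep working.
EXEC sp_rename N'dbo.BugReports', N'SupportIncidents';
GO

CREATE SYNONYM dbo.BugReports FOR dbo.SupportIncidents;
GO

-- Old references still resolve through the synonym:
SELECT TOP (10) * FROM dbo.BugReports;
```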


Azure SQL Database Cross-Database Queries

James Serra reports that Azure SQL Database now allows cross-database queries:

A limitation with Azure SQL database has been its inability to do cross-database SQL queries.  This has changed with the introduction of elastic database queries, now in preview.  However, it’s not as easy as on-prem SQL Server, where you can just use the three-part name syntax DatabaseName.SchemaName.TableName.  Instead, you have to define remote tables (tables outside your current database), which is similar to how PolyBase works for those of you familiar with PolyBase.

For one-off tables you join to frequently, I suppose this isn’t terrible, but it’s certainly less convenient.
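For reference, the setup looks roughly like the following; all names are placeholders, and since elastic query is still in preview, the syntax is worth double-checking against the current documentation:

```sql
-- Run in the database that issues the query. Server, database, credential,
-- and table names are placeholders.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteDbCredential
WITH IDENTITY = 'remote_user', SECRET = '<remote password>';

CREATE EXTERNAL DATA SOURCE RemoteCustomersDb
WITH
(
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'CustomersDb',
    CREDENTIAL = RemoteDbCredential
);

-- The external table mirrors the remote table's schema; queries against it
-- are shipped to the remote database at run time.
CREATE EXTERNAL TABLE dbo.Customers
(
    CustomerID INT NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL
)
WITH (DATA_SOURCE = RemoteCustomersDb);

SELECT c.CustomerName, o.OrderID
FROM dbo.Orders AS o          -- local table
JOIN dbo.Customers AS c       -- external (remote) table
    ON o.CustomerID = c.CustomerID;
```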


Backup Restore Chains

Anders Pedersen has a script to determine which backup files are needed to restore a database, but ran into a problem with LSNs matching:

Backup_set_id 1713355 and 1713378 have First_Lsn = Last_Lsn, and they are the same between the two records. There were more records where this was true; these just happen to be early in the result set, so they were easy to spot.

The question then arose in my mind whether this caused a problem with the CTE, which it obviously did. To fix it, I would have to remove those records from being considered in the restore chain (or switch from a recursive CTE to a loop).

Check out the script.
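As a rough illustration of the kind of msdb query involved (a simplified sketch of my own, not Anders' script), with the zero-length backup sets filtered out:

```sql
-- Simplified restore-chain lookup: most recent full backup, then the log
-- backups that continue from it. Skipping sets where first_lsn = last_lsn
-- avoids the duplicate-LSN records described in the post.
DECLARE @DatabaseName SYSNAME = N'MyDatabase';
DECLARE @LastFullLsn NUMERIC(25, 0);

SELECT TOP (1) @LastFullLsn = bs.last_lsn
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = @DatabaseName
  AND bs.type = 'D'                        -- full backup
ORDER BY bs.backup_finish_date DESC;

SELECT bs.backup_set_id, bs.first_lsn, bs.last_lsn, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = @DatabaseName
  AND bs.type = 'L'                        -- log backups
  AND bs.last_lsn >= @LastFullLsn
  AND bs.first_lsn <> bs.last_lsn          -- skip the zero-length sets
ORDER BY bs.first_lsn;
```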
