Press "Enter" to skip to content

Author: Kevin Feasel

SQL Server R Services Troubleshooting

Ginger Grant walks us through a few troubleshooting tactics with SQL Server 2016 R Services:

Much to my surprise, after this I received an error:

Msg 39019, Level 16, State 1, Line 1
An external script error occurred:
Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).
Msg 11536, Level 16, State 1, Line 1
EXECUTE statement failed because its WITH RESULT SETS clause specified 1 result set(s), but the statement only sent 0 result set(s) at run time.

I looked in the log files and didn’t find any errors. I checked the configuration manager to ensure that I had some user IDs configured. Nothing seemed to make any difference. Looking online, the only error I saw which might possibly be close was a different error message about 8.3 naming and the working directory.

This service is somewhat finicky to set up in my experience, though once you have it configured, it tends to be pretty stable.
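For context, the kind of call that produces this pair of errors looks something like the sketch below (the script and result set definition here are hypothetical). The second error, Msg 11536, is a side effect of the first: the R runtime never launched, so the script returned zero result sets and violated the WITH RESULT SETS contract.

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS Col1'
WITH RESULT SETS ((Col1 INT));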

Cross-Platform Variables In PowerShell Core

Max Trinidad shows how to use the Is* variables in PowerShell Core to write cross-platform code:

Use the cmdlet Get-Variable to find them, and keep in mind that these variables are not found in Windows PowerShell 5.x.

Get-Variable Is*

Although the results will display four variables, let’s pay attention to three of them. Below are the variables with their default values:

IsLinux      False
IsOSX        False
IsWindows    True

These three variables can help in identifying which operating system the script is being executed on. This way, you just add the necessary logic in order to take the correct action.

Read on for a code example showing how to use these variables.

Docker Stop Versus Docker Kill

Andrew Pruski explains why docker kill is so much faster than docker stop:

When running demos and experimenting with containers I always clear down my environment. It’s good practice to leave a clean environment once you’ve finished working.

To do this I blow all my containers away, usually by running the docker stop command.

But there’s a quicker way to stop containers, the docker kill command.

Sending SIGKILL isn’t particularly polite and doesn’t let processes clean up, which could leave your process in an undesirable state during future runs.  But if you’re just re-deploying a container, you don’t really care about the prior state of the now-disposed container.

Using Query Performance Insight To Find High-IO Queries

Jim Donahoe shows how he used Azure’s Query Performance Insight to eliminate 10 billion logical reads:

To access QPI, you simply need to click on the database you want to work with. Once you click on your database, scroll down in the portal to Query Performance Insight (QPI). Once QPI opens, you will see three options to sort on: CPU, DATA I/O, and LOG I/O. You can also set the timeframe to view; I set it for 24 hours. Now, I have my timeline of 24 hours, and I am able to view which queries had the highest DATA I/O. I made a list of the top 3 from each category (CPU, DATA I/O, and LOG I/O) and presented it to my client. I presented the number of times each was executed and the resources it utilized each time (all from the QPI information). The client then sent me 10 queries they wanted tuned, listed in priority order.

Well, by the end of tuning their 3 highest-priority queries, we removed over 10 billion logical reads!  Yep, 10 BILLION! The client was very happy with our results and is currently waiting for Standard Elastic Pools to come out of preview and become generally available. I have provided a few screenshots of an AdventureWorksLT database on my personal instance just to show you how to access QPI and change metrics.
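Query Performance Insight is built on top of Query Store, so you can pull a similar top-queries-by-I/O list with T-SQL as well. Here is a rough sketch, assuming Query Store is enabled on the database; the 24-hour window mirrors Jim’s timeframe:

-- Top 3 queries by total logical reads over the last 24 hours
SELECT TOP (3)
    qt.query_sql_text,
    SUM(rs.count_executions) AS executions,
    SUM(rs.avg_logical_io_reads * rs.count_executions) AS total_logical_reads
FROM sys.query_store_runtime_stats AS rs
    JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
    JOIN sys.query_store_query AS q ON q.query_id = p.query_id
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE rs.last_execution_time > DATEADD(HOUR, -24, SYSUTCDATETIME())
GROUP BY qt.query_sql_text
ORDER BY total_logical_reads DESC;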

Click through for a demo.

Estimates Outside The Histogram

Lonny Niederstadt is building up some information on how the cardinality estimator works when it needs to generate an estimate outside the histogram it has:

SQL Server keeps track of how many inserts and deletes have occurred since the last stats update – when the number of inserts/deletes exceeds the stats update threshold, the next time a query requests those stats they’ll qualify for an update.  Trace flag 2371 alters the threshold function before SQL Server 2016. With 2016 compatibility mode, the T2371 function becomes default behavior.  Auto-stats update and auto-stats update async settings of the database determine what happens once the stats qualify for an update.  But whether it’s an auto-stats update or a manual stats update, the density, histogram, etc. are all updated.

Trace flags 2389, 2390, 4139, and the ENABLE_HIST_AMENDMENT_FOR_ASC_KEYS hint operate outside the full stats framework, bringing in the quickstats update.  They have slightly different scope in terms of which stats qualify for quickstats updates – but in each case it’s *only* stats for indexes, not stats for non-indexed columns, that can qualify.  After 3 consecutive stats updates on an index, SQL Server “brands” the stats type as ascending or static; until then it is branded ‘unknown’. The brand of a stat can be seen by setting trace flag 2388 at the session level and using DBCC SHOW_STATISTICS.
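To see the branding Lonny mentions for yourself, the check looks roughly like this (table and index names here are hypothetical):

DBCC TRACEON (2388);  -- session-level: changes the SHOW_STATISTICS output format
DBCC SHOW_STATISTICS ('dbo.SomeTable', 'IX_SomeIndex');
-- The output now includes the stats update history with a "Leading column type"
-- column showing Unknown, Ascending, or Stationary for each update.
DBCC TRACEOFF (2388);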

Right now there are just a few details and several links, but it does look like he’s going to expand it out.

Trusted Assemblies And Module Signing

Solomon Rutzky continues his SQLCLR and trusted assemblies series:

Ownership chaining is quite handy, as it makes it easier to not grant explicit permissions on base objects (i.e., Tables, etc.) to everyone. Instead, you just grant EXECUTE / SELECT permissions to Stored Procedures, Views, etc.

However, one situation where ownership chaining does not work is when using Dynamic SQL. And, any SQL submitted by a SQLCLR object is, by its very nature, Dynamic SQL. Hence, any SQLCLR objects that a) do any data access, even just SELECT statements, and b) will be executed by a User that is neither the owner of the objects being accessed nor one that has been granted permissions to the sub-objects, need to consider module signing in order to maintain good and proper security practices. BUT, the catch here is that in order to sign any Assembly’s T-SQL wrapper objects, that Assembly needs to have been signed with a Strong Name Key or Certificate prior to being loaded into SQL Server. Neither “Trusted Assemblies” nor even signing the Assembly with a Certificate within SQL Server suffices for this purpose, as we will see below.
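In minimal form, the module-signing pattern looks something like this sketch (object names and the password are hypothetical); the same approach applies when the signed module is a SQLCLR wrapper procedure:

-- Create a certificate and a database user mapped to it
CREATE CERTIFICATE SigningCert
    ENCRYPTION BY PASSWORD = N'<strong password here>'
    WITH SUBJECT = N'Module signing demo';

CREATE USER SigningCertUser FROM CERTIFICATE SigningCert;

-- Grant the certificate user the permission the module needs
GRANT SELECT ON dbo.SomeTable TO SigningCertUser;

-- Sign the wrapper procedure; callers now get that permission via the signature
ADD SIGNATURE TO dbo.SomeProcedure
    BY CERTIFICATE SigningCert
    WITH PASSWORD = N'<strong password here>';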

Read on for more details.

SQL Server Import From Excel And TLS 1.0

Chris Thompson has an important notice if you import data into Excel from SQL Server:

As you may know, TLS 1.0 is being deprecated due to various known exploits and will no longer be PCI compliant as of June 30th, 2018 (see PCI DSS v3.1 and SSL: What you should do NOW below). You may also know that Microsoft has provided TLS 1.1/1.2 patches for the SQL Server Database Engine (2008+) as well as the client connectivity components (see TLS 1.2 support for Microsoft SQL Server below). What you may NOT know is that there is a popular feature in Excel to import data from SQL Server. See the screen print below from Excel 2016.

The problem with this feature lies in the fact that this menu option will, by default, leverage SQLOLEDB.1 as the OLE DB provider when connecting to SQL Server. This provider is an older MDAC/WDAC provider (see Data Access Technologies Road Map below) that comes built into the Operating System (including Windows 10) but DOES NOT support TLS 1.1+. So, if you have SQL Servers that have TLS 1.0 Server disabled, you will no longer be able to use this feature. You will receive an error similar to the one below. You will also receive the same or similar error if you have existing workbooks that use this feature and attempt to refresh those workbooks.
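The difference comes down to the provider named in the connection string. As an illustration (server and database names are made up, and this is not necessarily the exact workaround Chris describes), compare the legacy provider with TLS 1.2-capable providers that are installed separately:

Legacy OS-supplied provider (no TLS 1.1+ support):
Provider=SQLOLEDB.1;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;

TLS 1.2-capable providers (separate installs):
Provider=SQLNCLI11;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;
Provider=MSOLEDBSQL;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;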

Chris has a workaround for current versions of Excel and notes that future versions will hide this particular import option behind a legacy wizards menu.

Columnstore Deadlocking

Kendra Little shows us a scenario in which querying columnstore metadata during table updates can lead to a deadlock:

I was playing around with a nonclustered columnstore index on a disk-based table. Here’s what I was doing:

Session 1: this session repeatedly changed the value for a single row, back and forth. This puts it into the delta store.

Session 2: this session repeatedly counted the number of rows in the table, using the columnstore index.

With my sample data in this scenario, I found I frequently generated deadlocks.
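A sketch of the two sessions, assuming a hypothetical table dbo.t with a nonclustered columnstore index:

-- Session 1: flip a single row's value back and forth (the row lands in the delta store)
WHILE 1 = 1
    UPDATE dbo.t
    SET val = CASE val WHEN 1 THEN 2 ELSE 1 END
    WHERE id = 1;

-- Session 2: repeatedly count rows, which can use the columnstore index
WHILE 1 = 1
    SELECT COUNT(*) FROM dbo.t;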

Read on for more.

Concatenation Truncation

Adrian Buckman walks through one of the more annoying aspects of building large strings in SQL Server:

So there I was, building this massive VARCHAR(MAX) string, and concatenated at various points in my code were database names of the datatype NVARCHAR(128). The interesting part was that I was expecting SQL Server to use my largest data type – the VARCHAR(MAX) – and just concatenate the NVARCHAR(128) values into it. This was not the case – what actually happened was that my string of VARCHAR(MAX) characters was truncated down to an NVARCHAR(4000)!

There is a reason for this, and it’s all to do with data type precedence: in this case, the NVARCHAR takes precedence over my VARCHAR unless, of course, I explicitly convert the NVARCHAR to a VARCHAR.
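A quick sketch of the behavior, using a hypothetical database name; the explicit CONVERT in the second query is the fix:

DECLARE @big VARCHAR(MAX) = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 100000);
DECLARE @dbname SYSNAME = N'SomeDatabase';  -- SYSNAME is NVARCHAR(128)

-- NVARCHAR outranks VARCHAR in data type precedence, so the concatenation
-- result is NVARCHAR and can be truncated at 4,000 characters:
SELECT LEN(@big + @dbname) AS possibly_truncated;

-- Explicitly converting the NVARCHAR operand keeps the result VARCHAR(MAX):
SELECT LEN(@big + CONVERT(VARCHAR(MAX), @dbname)) AS full_length;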

Read the whole thing.

Performance Improvements In SQL Server 2017

Bob Ward goes over some of the performance improvements introduced in SQL Server 2017:

Consider for a minute all the built-in capabilities that power the speed of SQL Server. From a SQLOS scheduling engine that minimizes OS context switches to read-ahead scanning to automatic scaling as you add NUMA and CPUs. And we parallelize everything! From queries to indexes to statistics to backups to recovery to background threads like LogWriter. We partition and parallelize our engine to scale from your laptop to the biggest servers in the world.

Like the enhancements we made in SQL Server 2016, as described in It Just Runs Faster, we are always looking to tune our engine for speed, all based on customer experiences. Take, for example, indirect checkpoint, which is designed to provide a more predictable recovery time for a database. We boosted the scalability of this feature based on customer feedback. We also made scalability improvements for parallel scanning and consistency check performance. No knobs required. Just built-in for speed.

One of the coolest performance aspects of built-in speed is online operations. We know you need to perform maintenance tasks beyond just running queries while keeping your application up and running, so we support online backups, consistency checks, and index rebuilds. SQL Server 2017 enhances this functionality with resumable online index builds, allowing you to pause an index build and resume it at any time (even after a failure).
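For reference, the resumable rebuild syntax looks like this (index and table names are hypothetical):

-- Start an online, resumable rebuild, capped at 60 minutes per run
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause it, then pick it back up later (even after a failure)
ALTER INDEX IX_SomeIndex ON dbo.SomeTable PAUSE;
ALTER INDEX IX_SomeIndex ON dbo.SomeTable RESUME;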

I saw the performance improvements in 2016 and am looking forward to the ones in 2017.
