Author: Kevin Feasel

Synchronizing Logins And Jobs

Ryan Adams shows five methods for synchronizing SQL logins and a couple ways of synchronizing SQL Agent jobs between instances of SQL Server:

Robert Davis wrote a great script back when he published his Mirroring book.  I started to write my own and was almost done when I contacted Robert and asked if he had dealt with SQL logins since the script only handled Windows logins.  His reply was something along the lines of, “What are you talking about? Of course it handles SQL logins”.  It turns out that the publisher didn’t get the right script version published with the book.  That’s when this post from Robert with the full script was born…

Transferring Logins

I also wrote about it HERE.

This script creates a stored procedure to handle the move and also uses linked servers.  If you can’t have linked servers in your environment, this is not a good choice for you.  However, you can create the linked server in a SQL Agent job step prior to the transfer step and then remove it in a job step after the transfer.  It breaks the rule, but it does it fast enough that maybe no one will notice.
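
If you take that job-step approach, the wrapper might look something like this minimal sketch; the linked server name and the transfer procedure are hypothetical stand-ins, not Robert’s or Ryan’s actual code.

    -- Job step 1: create the linked server just for the transfer (hypothetical names).
    EXEC master.dbo.sp_addlinkedserver
        @server = N'SyncSource',
        @srvproduct = N'',
        @provider = N'SQLNCLI',
        @datasrc = N'PrimaryServer';   -- instance to copy logins from

    -- Job step 2: run the transfer procedure (stand-in for the real script).
    EXEC dbo.TransferLogins @SourceServer = N'SyncSource';

    -- Job step 3: drop the linked server so the rule is only bent briefly.
    EXEC master.dbo.sp_dropserver @server = N'SyncSource';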

Read the whole thing.


Custom SQL Server Docker Images

James Anderson shows an easy way of creating a custom Docker image which relies on some other image:

The FROM statement declares that we want to lay some instructions on top of the microsoft/mssql-server-windows image. The beauty of this approach is that when I pull down a new version of the microsoft/mssql-server-windows image, my image will be updated too. The microsoft/mssql-server-windows Dockerfile does the same thing with the microsoft/windowsservercore image.

The rest of the Dockerfile sets some metadata, downloads the installer and adds the Advanced Analytics feature.

SSIS, SSAS, SSRS or any other SQL Server feature could be added to a containerised SQL Server deployment in the same way.

With this approach, you do run the risk that upstream changes will break your image, but for something like this, it’s a very useful approach.
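
As a rough sketch of the pattern (not James’s actual Dockerfile; the URL, label, and setup flags here are placeholders), the layering looks like this:

    # escape=`
    # Build on top of Microsoft's SQL Server image; pulling a newer base image
    # means this image picks up those updates on rebuild.
    FROM microsoft/mssql-server-windows

    # Illustrative metadata.
    LABEL description="SQL Server with the Advanced Analytics feature layered on"

    # Hypothetical steps: fetch the feature installer and run it quietly.
    RUN powershell -Command Invoke-WebRequest -Uri http://example.com/setup.exe -OutFile C:\setup.exe
    RUN C:\setup.exe /Q /ACTION=Install /FEATURES=ADVANCEDANALYTICS /IACCEPTSQLSERVERLICENSETERMS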


Building Temp Tables From Queries

David Fowler shows how to use dm_exec_describe_first_result_set to generate a temp table schema:

Have you ever needed to store the results of a complex query in a temp table?  How did you go about working out what the definition for that temp table should be, the columns and their data types?

It can be a bit of a pain, not to mention time-consuming, to go figuring out what the datatypes of all the base tables are.

I got fed up with all that hunting around as well, so as a quick blog post I thought I’d share a little script that will take your query in a variable and print out a temp table definition for its result set.

Click through for the script, as well as an important comment by frequent curatee Shane O’Neill.
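
For a flavor of the approach, here is a rough sketch (not David’s script; it assumes SQL Server 2017+ for STRING_AGG):

    -- Build a temp table definition from a query's first result set.
    DECLARE @query nvarchar(max) = N'SELECT name, create_date FROM sys.databases';

    SELECT N'CREATE TABLE #Results (' + NCHAR(13) +
           STRING_AGG(CAST(N'    ' + QUOTENAME(name) + N' ' + system_type_name +
                           CASE WHEN is_nullable = 1 THEN N' NULL' ELSE N' NOT NULL' END
                           AS nvarchar(max)), N',' + NCHAR(13))
             WITHIN GROUP (ORDER BY column_ordinal) +
           NCHAR(13) + N');' AS TempTableDefinition
    FROM sys.dm_exec_describe_first_result_set(@query, NULL, 0);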


The Pain Of Multi-Statement TVFs

Andy Mallon walks through a multi-statement table-valued function in Microsoft Dynamics CRM:

Look at all those table-valued function calls! Followed immediately by a really expensive hash match. My Spidey Sense started to tingle. What is fn_GetMaxPrivilegeDepthMask, and why is it being called 30 times? I bet this is a problem.

When you see “Table-valued function” as an operator in a plan, that actually means it’s a multi-statement table-valued function. If it were an inline table-valued function, it would get incorporated into the larger plan, and not be a black box.

Multi-statement table-valued functions are evil. Don’t use them. The Cardinality Estimator isn’t able to make accurate estimates. The Query Optimizer isn’t able to optimize them in the context of the larger query. From a performance perspective, they don’t scale.

Even though this TVF is an out-of-the-box piece of code from Dynamics CRM, my Spidey Sense tells me that it’s the problem.

That said, Joe Sack and team are working on making multi-statement TVFs faster in SQL Server 2017.  Whether that will be enough to change Andy’s excellent advice, we’ll have to wait and see.
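
To make the distinction concrete, here is a minimal sketch of the two shapes (the dbo.Orders table is hypothetical); only the inline form gets folded into the calling query’s plan:

    -- Multi-statement TVF: a black box with a fixed, tiny row estimate.
    CREATE FUNCTION dbo.BigOrders_MSTVF (@MinTotal money)
    RETURNS @r TABLE (OrderID int, Total money)
    AS
    BEGIN
        INSERT INTO @r
        SELECT OrderID, Total FROM dbo.Orders WHERE Total >= @MinTotal;
        RETURN;
    END;
    GO

    -- Inline TVF: a single SELECT the optimizer can expand and optimize in context.
    CREATE FUNCTION dbo.BigOrders_Inline (@MinTotal money)
    RETURNS TABLE
    AS RETURN
        SELECT OrderID, Total FROM dbo.Orders WHERE Total >= @MinTotal;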


Using NLP To Find Similar Facebook Posts

The folks at Knoyd put together a word embedding example by scraping a Python Facebook group:

We are going to represent the content of a Facebook post using word embeddings and compare the transformed posts using word mover’s distance. The combination of the two has shown lower k-nearest-neighbor document classification error rates than other state-of-the-art techniques.

The advantage of word embeddings is that the words which have similar meanings but don’t have any letters in common will still have similar vectors (be close) in the embedded space (e.g. lion and tiger).

There’s a good high-level discussion of techniques in this post.
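
As a rough illustration of the technique (not Knoyd’s code), gensim’s pretrained vectors expose word mover’s distance directly; note that wmdistance needs the pyemd (or, in newer gensim, POT) package installed:

    # Rough illustration (not Knoyd's code): word mover's distance over embeddings.
    import gensim.downloader as api

    # Small pretrained GloVe vectors; any word embedding model would do.
    model = api.load("glove-wiki-gigaword-50")

    post_a = "how do i read a csv file with pandas".split()
    post_b = "loading comma separated data into a dataframe".split()

    # Lower distance = semantically closer, even with no words in common.
    print(model.wmdistance(post_a, post_b))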


R Services Internal Communication Mechanisms

Niels Berglund continues his R Services internals series:

When browsing for the symbols, you can use this command: x /1 *!TCP*. By using the option /1 you’ll only see the names, and no addresses. On my machine that gives me quite a lot, but there are two entries that catch my eye: sqllang!Tcp::AcceptConnection and sqllang!Tcp::Close. So let us set breakpoints at those two symbols, and see what happens when we execute our code.

The result when executing the code is that we initially break at sqllang!Tcp::AcceptConnection. Followed somewhat later by breaking at sqllang!Tcp::Close. Cool, this seems to work – let us set some more breakpoints and try to figure out the flow of events.

The first half recapitulates his previous findings, and then he incorporates new information in the second half.
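For those following along at home, the sequence Niels describes boils down to a handful of WinDbg commands:

    $$ List matching symbols, names only (the /1 option).
    x /1 *!TCP*

    $$ Break on connection accept and close inside sqllang.
    bp sqllang!Tcp::AcceptConnection
    bp sqllang!Tcp::Close

    $$ Resume execution and wait for a breakpoint to hit.
    g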


Palindromes In SQL

Lukas Eder rises to a challenge:

SQL is a really cool language. I can write really complex business logic with this logic programming language. I was again thrilled about SQL recently, at a customer site:

But whenever I tweet something like the above, the inevitable happens: I get nerd sniped. Oleg Šelajev from ZeroTurnaround challenged me to prove that SQL is so awesome:

Given a string, find all substrings of that string which are palindromes. Challenge accepted! (For the moment, let’s forget about algorithmic complexity.)

His answer is in Postgres syntax, and a commenter includes Oracle syntax.  T-SQL is left as an exercise for the reader.
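
If you want a head start on that exercise, here is one rough brute-force T-SQL take (ignoring algorithmic complexity, as Lukas allows):

    -- Every substring of two or more characters that reads the same reversed.
    DECLARE @s varchar(100) = 'racecar level';

    WITH n AS (
        SELECT TOP (LEN(@s)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
        FROM sys.all_objects
    )
    SELECT DISTINCT SUBSTRING(@s, p.i, l.i) AS palindrome
    FROM n AS p          -- start position
    CROSS JOIN n AS l    -- substring length
    WHERE p.i + l.i - 1 <= LEN(@s)
      AND l.i >= 2       -- skip trivial one-character palindromes
      AND SUBSTRING(@s, p.i, l.i) = REVERSE(SUBSTRING(@s, p.i, l.i));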


Automating Index Maintenance On Azure SQL DB

Arun Sirpal shows how to use Azure Automation to rebuild indexes on an Azure SQL Database:

The answer is via Azure Automation.

At a high level, this is what I did:

  • Create an Automation Account.

  • Create a credential.

  • Create a PowerShell Runbook which has the code for index rebuilds.

  • Create a schedule and link it to the above.

  • Configure parameters within the schedule (if any).

  • Configure logging level (if desired).

Click through for the detailed steps.
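
Whatever the runbook scaffolding looks like, the T-SQL at its heart is the familiar rebuild pattern; this minimal sketch (with a placeholder threshold, not Arun’s code) rebuilds anything over 30% fragmented:

    -- Build and run ALTER INDEX ... REBUILD statements for fragmented indexes.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
                 + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
                 + N' REBUILD WITH (ONLINE = ON);' + NCHAR(13)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects AS o
        ON o.object_id = ps.object_id
    WHERE ps.avg_fragmentation_in_percent > 30  -- placeholder threshold
      AND i.name IS NOT NULL;                   -- skip heaps

    EXEC sys.sp_executesql @sql;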


CLR Strict Security Done Easier

Solomon Rutzky continues his CLR Strict Security series with an easier way of creating a secure assembly:

This solution is easier than Solution 1:

  • 8 steps instead of 22!

  • No extra Project

However, a very small amount of risk was added by overriding the default MSBuild workflow for SSDT. This risk can be eliminated if Microsoft provides a pre-defined Target for the appropriate event. Please upvote my suggestion to have this happen: Add MSBuild predefined Targets for “BeforeSqlBuild” and “BeforePublish” to SSDT SQL Server Data Projects.

ALSO: Even though we did not sign the assembly with a Strong Name Key, it is still probably a good idea to do that.

If you use CLR, this is worth the read.
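
For context, the server-level setting driving all of this can be inspected like any other advanced option:

    -- 'clr strict security' is on by default in SQL Server 2017; with it enabled,
    -- even SAFE assemblies must be signed and trusted.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'clr strict security';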


Another Reason To Avoid Shrinking Data Files

Frank Gill gives us a demo of how much log space it takes to shrink a database file:

Yesterday, I was running a health assessment for a client. They are running a weekly maintenance plan that is shrinking all of their data files. After I picked myself up off the floor, I searched the web for “Paul Randal shrink” and hit on Paul’s excellent post Why you should not shrink your data files. In the post, Paul (b|t) demonstrates the effect of DBCC SHRINKDATABASE on index fragmentation. After the demo script, Paul writes, “As well as introducing index fragmentation, data file shrink also generates a lot of I/O, uses a lot of CPU and generates *loads* (emphasis Paul’s) of transaction log.”

This led me to ask the question, “How much is *loads*?”. To find an answer, I made the following modification to Paul’s script:

Read on for the answer.  There are legitimate reasons to shrink data files, but it comes at a very high cost.
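
If you just want a rough number for your own database without recreating Frank’s full demo, a simple before-and-after check of log usage gets you most of the way (the file name is a placeholder; this is not Frank’s script):

    -- How much transaction log does a shrink generate?
    -- (Assumes FULL recovery, so the log isn't truncated mid-test.)
    SELECT used_log_space_in_bytes AS before_bytes
    FROM sys.dm_db_log_space_usage;

    DBCC SHRINKFILE (N'YourDataFile', 1024);  -- placeholder logical file name and target MB

    SELECT used_log_space_in_bytes AS after_bytes
    FROM sys.dm_db_log_space_usage;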
