Curated SQL Posts

Power BI On-Prem Details

Ginger Grant explains what’s going on with Power BI Premium and the on-prem offering:

It is not possible to run Power BI reports locally right now, but sometime before the 1st of July 2017, users who have SQL Server 2016 Enterprise Edition per-core and active Software Assurance [SA] can deploy Power BI Report Server. This means that no one is going to have to wait for SQL Server 2017 for Power BI on-premises, as that will be available sometime in June. The SQL Server Reporting Services [SSRS] functionality in the SQL Server 2017 Community Technology Preview is going to be available in Power BI Report Server, with the addition of the ability to include custom visuals, which the CTP version did not do. The Power BI Report Server includes all of the functionality of SSRS, which means that users will not need both an SSRS server and a Power BI server; the Power BI Report Server will be able to do both. If you want to migrate reports created in SSRS from 2008 R2 onward, as well as SSRS Mobile Reports, you can migrate them to the new Power BI Report Server, provided of course you have SQL Server 2016 Enterprise per-core edition with SA.

The Power BI Report Server will be a separate install with a separate release schedule. Microsoft has announced that they are planning on doing updates at a greater frequency than SQL Server. Power BI Report Server will also be able to publish reports to mobile devices. If a report uses data in the cloud, you can employ a Data Gateway, as the Power BI Report Server can use the gateway to access cloud data. Of course, if all of the data in the report is located on-premises, no gateway is required.

I’m a bit disappointed that the on-prem installation will not allow you to create dashboards, but perhaps that will come in time.

Attaching A SQL Server Database To A Docker Container

Mat Hayward-Hill shows how to attach an existing MDF file to a SQL Server on Linux instance in Docker:

Now we are ready to attach the database using the T-SQL below. For this demo, I used Management Studio from my laptop to connect to SQL Server.

In the T-SQL, we need to use the FOR ATTACH_REBUILD_LOG argument, as we have no log file to attach. It will create a 1 MB log file in the default log file directory.

It’s better to restore a full backup, but there’s more than one way to connect a database.
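
To make that concrete, here is a minimal sketch of the attach statement; the database name and file path are hypothetical (SQL Server on Linux keeps data files under /var/opt/mssql/data by default):

    -- Attach an MDF that has been copied into the container. There is no LDF,
    -- so FOR ATTACH_REBUILD_LOG builds a new log file in the default log directory.
    CREATE DATABASE [DemoDB]
    ON (FILENAME = N'/var/opt/mssql/data/DemoDB.mdf')
    FOR ATTACH_REBUILD_LOG;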

Azure SQL Data Warehouse Security

Grant Fritchey looks at what security measures are available within Azure SQL Data Warehouse:

Login Security

You have two core choices on logins. First, you have to create a SQL login at the server level for both Azure SQL Database and Azure SQL Data Warehouse. You can’t remove this or disable it (to my knowledge, and I’ve tried), so make the password a good one (and don’t lose it). You can then create other SQL logins, but this is not a recommended best practice. In fact, I wouldn’t do it at all unless I was forced because of some third party product (few of which currently support Azure anyway).

The next choice, the preferred choice, is to set up Azure Active Directory. With Azure AD you get all the functionality you’re used to with your local AD. Further, you can federate Azure AD with your local AD to control and manage the logins from within your network. You also get multi-factor authentication with Azure AD. We are talking real security here. Read through the documentation on setting up authentication to get it right. You can do the whole thing using PowerShell commands, so there’s no excuse for not automating it.

There aren’t as many security-related toggles as in an on-prem product, but Grant demonstrates what is available.
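
For a sense of what the Azure AD side looks like in T-SQL once an Azure AD admin has been set for the server, here is a minimal sketch; the user and group names are hypothetical:

    -- Run in the user database while connected as the Azure AD admin.
    -- Each statement creates a contained database user mapped to an Azure AD identity.
    CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;
    CREATE USER [WarehouseReaders] FROM EXTERNAL PROVIDER;

    -- Grant access as usual; membership is then managed in Azure AD.
    GRANT SELECT ON SCHEMA::dbo TO [WarehouseReaders];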

Installing Multiple SSRS Instances

Dave Mason explains how to set up multiple SQL Server Reporting Services installations to run against a single SQL Server instance:

Have you ever needed to install multiple instances of SSRS, with each instance “connected” to the same instance of the SQL Server database engine? (By “connected”, I mean that the pair of [ReportServer] databases for each SSRS instance would all reside on the same instance of SQL Server. And each SSRS instance would be reporting on data from one or more databases that also resided on the same instance of SQL Server.)

To my surprise, I don’t see much guidance for this scenario on the internet. TechNet has an article. It’s consistently one of the first search results I get back for variations of “Install multiple instances of SSRS”. That article (and a few others) omits a simple installation step/requirement that was a blind spot for me. (More on that towards the end.) I finally figured out what I was doing wrong and eventually succeeded with my task. Let’s walk through the steps.

I’m not quite positive what problem this best solves, but that could just be a lack of vision on my part.

Comparing Query Store Plans

Arun Sirpal explains how to compare two plans in the Query Store:

A small but nice little feature I have been using recently can be found within Query Store.

Let’s say you have two Plan IDs for a query; naturally, you want to view the execution plan for each. In the past, I did it the manual way: I would click on one Plan ID to see its plan, then move on to the next one.

It is much easier to explain with some images.

Click through to see those images.
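
If you want to track down those Plan IDs outside the UI first, here is a minimal sketch against the Query Store catalog views; the search string is a placeholder:

    -- Find the plans Query Store has captured for a given query,
    -- so you know which Plan IDs to compare.
    SELECT q.query_id,
           p.plan_id,
           qt.query_sql_text,
           TRY_CONVERT(xml, p.query_plan) AS query_plan
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt
        ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    WHERE qt.query_sql_text LIKE N'%YourQueryText%'
    ORDER BY q.query_id, p.plan_id;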

More With Adaptive Joins

Erik Darling continues his adaptive joins exploration with two more posts.  First, how local variables can affect the query plan:

The easiest way to look at this is to compare Adaptive Joins with literal values to the same ones using local variables. The results are a little… complicated.

Here are three queries with three literal values. In my copy of the Super User database (the largest Stack Overflow sub-site), I’ve made copies of all the tables and added Clustered ColumnStore indexes to them. That’s the only way to get Adaptive Joins at this point — Column Store has to be involved somewhere along the line.

The last day of data in this dump is from December 11. When I query the data, I’m looking at the last 11 days of data, the last day of data, and then a day where there isn’t any data.
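
For a picture of the comparison, here is a minimal sketch of the two variants; the table and column names follow the Stack Overflow schema, but the queries themselves are hypothetical:

    -- Literal value: the optimizer sees the actual date at compile time.
    SELECT COUNT_BIG(*)
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON p.OwnerUserId = u.Id
    WHERE u.CreationDate >= '2016-12-01';

    -- Local variable: the optimizer has to estimate without knowing the value,
    -- which can change the estimates that drive the Adaptive Join decision.
    DECLARE @CreationDate datetime = '2016-12-01';

    SELECT COUNT_BIG(*)
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON p.OwnerUserId = u.Id
    WHERE u.CreationDate >= @CreationDate;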

Then Erik takes on non-SARGable queries:

The queries with non-SARGable predicates on the Users table used Adaptive Joins.

The queries with non-SARGable predicates on the Posts table did not.

Now, there is an Extended Events… er… event to track this, called adaptive_join_skipped; however, it didn’t seem to log any information for the queries that didn’t get Adaptive Joins.

Bummer! But, if I had to wager a guess, it would be that this happens because there’s no alternative Index Seek plan for the Posts table with those predicates. Their non-SARGableness takes that choice away from the optimizer, and so Adaptive Joins aren’t a choice. The Users table is going to get scanned either way — that’s the nature of ColumnStore indexes, so it can withstand the misery of non-SARGable predicates in this case and use the Adaptive Join.

Two more good posts in Erik’s series, and both definitely worth reading.
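
For reference, a non-SARGable predicate of the sort Erik describes wraps the column in an expression, as in this hypothetical query:

    -- The function around p.CreationDate rules out an index seek on Posts,
    -- which (per Erik's guess) removes the seek alternative an Adaptive Join
    -- would need, so queries like this one did not get Adaptive Joins.
    SELECT COUNT_BIG(*)
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON p.OwnerUserId = u.Id
    WHERE DATEADD(DAY, 1, p.CreationDate) >= '2016-12-11';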

Handling Expiring Encryption Keys

Ed Leighton-Dick explains how to safely replace a SQL Server certificate which is about to expire:

So, now that we know what we need to rotate, how do we do it?

First, obtain a new certificate. SQL Server has the capability to generate its own certificates. For many purposes, that’s enough. However, if your company has to comply with auditing or regulatory requirements, you may need to obtain the new certificate from an outside source. Often, this is a third-party certificate authority. Some companies use a system called Encryption Key Management (EKM, also known as a Hardware Security Module, or HSM, after the device used to store the master key). (Obtaining an external certificate is a subject for an upcoming post.)

However you obtained the certificate, install it. Make sure to back it up securely, including the private key.

Next, add the new certificate to the symmetric key. The ALTER SYMMETRIC KEY command has a clause that does just that – ADD ENCRYPTION BY.

Finally, remove the old certificate from the symmetric key. You’ll again use ALTER SYMMETRIC KEY, but this time with the DROP ENCRYPTION BY clause.

Click through for instructions, including scripts.  Ed also explains how to update the certificate used with Transparent Data Encryption.
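
Here is roughly what those two steps look like in T-SQL, with hypothetical key and certificate names; note that the symmetric key has to be open before its encryption can be changed:

    -- Open the key with the old (expiring) certificate.
    OPEN SYMMETRIC KEY DataKey
        DECRYPTION BY CERTIFICATE OldCert;

    -- Protect the key with the new certificate...
    ALTER SYMMETRIC KEY DataKey
        ADD ENCRYPTION BY CERTIFICATE NewCert;

    -- ...then remove the old certificate as an encryptor.
    ALTER SYMMETRIC KEY DataKey
        DROP ENCRYPTION BY CERTIFICATE OldCert;

    CLOSE SYMMETRIC KEY DataKey;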

The Value Of Log Shipping

Robert Davis explains that log shipping can be better than mirroring for database migrations:

This topic has come up several times recently, so I feel the need to blog on it. As the person who wrote the book on Database Mirroring, it will probably come as a surprise to many of you that I believe that log shipping is a much better tool for database migrations than database mirroring.

I’m not just talking about the fact that database mirroring is deprecated (since SQL Server 2012) and log shipping is not. Both are still in SQL Server to this day. Because database mirroring is deprecated, it is no longer receiving bug fixes (except maybe critical security bugs) and no work is being done to make sure that it works with new features in current and future versions. Log shipping is still receiving both of these things. I will lay out the real reasons below.

Robert makes two compelling arguments in favor of log shipping.
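
For context, the cutover at the end of a log shipping migration is just a tail-log backup on the source followed by a final restore WITH RECOVERY on the target; a minimal sketch with hypothetical names and paths:

    -- On the source: back up the tail of the log and leave the database restoring,
    -- so no further changes can slip in.
    BACKUP LOG [SalesDB]
        TO DISK = N'\\fileshare\logship\SalesDB_tail.trn'
        WITH NORECOVERY;

    -- On the target: apply the final log backup and bring the database online.
    RESTORE LOG [SalesDB]
        FROM DISK = N'\\fileshare\logship\SalesDB_tail.trn'
        WITH RECOVERY;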

The Bayesian Trap

David Smith links to a video describing an application of Bayes’s Theorem and gives the example of medical tests:

If you get a blood test to diagnose a rare disease, and the test (which is very accurate) comes back positive, what’s the chance you have the disease? Well, if “rare” means only 1 in a thousand people have the disease, and “very accurate” means the test returns the correct result 99% of the time, the answer is … just 9%. There’s less than a 1 in 10 chance you actually have the disease (which is why your doctor will likely have you tested a second time).

Now that result might seem surprising, but it makes sense if you apply Bayes’ Theorem. (A simple way to think of it is that in a population of 1,000 people, about 10 healthy people will get a false positive, plus the one person who actually has the disease. Only one in eleven of the positive results, or about 9%, reflects a true case of the disease.)

This is where sensitivity/recall and precision come apart (in the medical field, they call it sensitivity; in the documents world and in the Microsoft ML space, they call it recall). Recall is True Positives / (True Positives + False Negatives), and that is the test’s 99%. The 9% figure is precision, or positive predictive value: True Positives / (True Positives + False Positives). Supposing a million people, 1,000 will have the disease. Of those 1,000, we expect the test to find 990 (99%). Of the 999,000 people who don’t have the disease, we expect the test to produce 9,990 false positives (1%). 990 / (990 + 9,990) = 9%.
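
For reference, the same numbers run directly through Bayes’ Theorem (assuming a 99% true-positive rate, a 1% false-positive rate, and a prevalence of 1 in 1,000):

    P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \lnot D)\,P(\lnot D)}
                = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.01 \times 0.999}
                \approx 0.09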

SOS_UnfairMutexPair

Ewald Cress talks about SOS_UnfairMutexPair:

The focal point of the mutex’s state – in fact one might say the mutex itself – is the single Spinlock bit within the 32-bit lock member. Anybody who finds it zero, and manages to set it to one atomically, becomes the owner.

Additionally, if you express an interest in acquiring the lock, you need to increment the WaiterCount, whether or not you managed to achieve ownership at the first try. Finally, to release the lock, atomically set the spinlock to zero and decrement the WaiterCount; if the resultant WaiterCount is nonzero, wake up all waiters.

Now one hallmark of a light-footed synchronisation object is that it is very cheap to acquire in the non-contended case, and this class checks that box. If not owned, taking ownership (the method SOS_UnfairMutexPair::AcquirePair()) requires just a handful of instructions, and no looping. The synchronisation doesn’t get in the way until it is needed.

However, if the lock is currently owned, we enter a more complicated world within the SOS_UnfairMutexPair::LongWait() method.

I love the statement that “This is not a very British class at all.”  Read the whole thing.
