Press "Enter" to skip to content

Month: January 2020

An Intro to Power BI Premium

Gilbert Quevauvilliers gives us an overview of what Power BI Premium is and what you need to know before using it:

A great place to start is to first explain “What is Power BI Premium?”

At the very basic level, Power BI Premium allows you to buy dedicated capacity with additional features.

Read on to see what that means, what you get with Premium, and how it can benefit your organization. Gilbert also covers the pricing model, which is important because this isn’t cheap.

A Script for Peer Reviews of Code

Paul Andrew shares with us a script which is useful for peer reviewing code before check-in:

Does the code include good comments? Things that explain the reason why logic has been implemented, to assist future developers looking at the same code.

All too often I see code comments that just translate the code into English and tell me what is happening, which should be fairly obvious to the reviewer as they read the code. The why is so much more important.

It’s a 20-point checklist, but worth reviewing and adapting for your own purposes.
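
To make the distinction concrete, here is a small T-SQL illustration of my own (not from Paul's checklist): the first comment merely restates the statement, while the second explains the reasoning a future developer actually needs. The table name is hypothetical.

    -- A "what" comment: it only restates the statement below it.
    -- Delete staging rows older than 90 days.
    DELETE FROM dbo.OrderStaging
    WHERE OrderDate < DATEADD(DAY, -90, GETDATE());

    -- A "why" comment: it explains the reasoning behind the statement.
    -- The downstream ETL only reconciles a rolling quarter, and anything older
    -- has already been archived, so keeping it just slows the nightly load.
    DELETE FROM dbo.OrderStaging
    WHERE OrderDate < DATEADD(DAY, -90, GETDATE());

The second comment survives a rewrite of the statement because it documents intent rather than mechanics.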

Ghost Records and Availability Groups

Aaron Bertrand ran into an interesting problem:

I recently came across an issue in a SQL Server Availability Group scenario where queries against a heavily-used queue table were taking longer and longer over time. The symptoms of the query were that logical reads were increasing rapidly, and we ultimately tracked it down to ghost records (and version ghost records) that were being created as rows from the queue table were consumed and deleted. Because the database was being used in a readable secondary, the ghost cleanup process simply wasn’t able to keep up with the volume of deletes against the table. The first workaround implemented was to suspend the queue consumers and rebuild the table.

Read on to see how Aaron used filtered indexes and soft deletes to mitigate some of the pain.
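
The general shape of that mitigation looks something like this sketch (table and column names are hypothetical, not Aaron's actual code): consumers flip a flag instead of deleting rows, and a filtered index keeps readers on a small structure that isn't littered with ghost records.

    -- Hypothetical queue table: add a soft-delete flag so consumers stop deleting rows.
    ALTER TABLE dbo.WorkQueue
        ADD IsProcessed bit NOT NULL DEFAULT 0;

    -- Filtered index covering only unprocessed rows, so readers scan a small
    -- structure rather than wading through ghosted, deleted rows.
    CREATE INDEX IX_WorkQueue_Pending
        ON dbo.WorkQueue (QueueID)
        INCLUDE (Payload)
        WHERE IsProcessed = 0;

    -- Consumers "soft delete" by updating the flag; a cleanup job can purge
    -- processed rows in batches during quieter periods.
    UPDATE TOP (100) dbo.WorkQueue
    SET IsProcessed = 1
    OUTPUT deleted.QueueID, deleted.Payload
    WHERE IsProcessed = 0;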

An Introduction to gRPC

Munander Singh explains why you might want to use gRPC for cross-process communications:

When the client invokes the service, the client-side gRPC library uses protocol buffers to marshal the remote procedure call, which is then sent over HTTP/2. The server unmarshals the request and executes the respective procedure, again using protocol buffers. The response follows a similar execution flow from the server back to the client.

The main advantage of developing services and clients with gRPC is that your service code or client-side code doesn’t need to worry about parsing JSON or similar text-based message formats (within the code or implicitly inside the underlying libraries such as Jackson, which is hidden from service code). What comes over the wire is a binary format, which is unmarshalled into an object. Also, having first-class support for defining a service interface via an IDL is a powerful feature when we have to deal with multiple microservices and need to ensure and maintain interoperability.

The “yes, but” here is that based on your language of choice, gRPC can be a bit tricky to get going.

Interpretability Issues in Monitoring Tools

Brent Ozar explains how it’s hard to explain things without context:

However, outside of that window, you may not have any wait time on CPU at all. If queries are simple enough, and there isn’t enough concurrency, then as we discuss in Mastering Server Tuning, you can end up with SOS_SCHEDULER_YIELD waits with no time attached to them. You wouldn’t notice a CPU problem at all outside of that 8AM window because you don’t have a CPU bottleneck – yet. And to be fair, you probably wouldn’t tune a server in this kind of shape, either.

But when you DO need to tune a server that isn’t running at 100% capacity, picking the right query to tune isn’t just about wait stats: it’s also about which queries you need to be more responsive, and how you need to plan for future load. If your company tries to run a flash sale, and a lot of folks try to check out at the same time, Query T is going to knock your server over. But wait-stats based tools won’t see that coming: they’ll still be focused on Query R, the only one that spends a lot of time waiting on CPU.

Good food for thought.
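
For context on where those numbers come from, the accumulated view most wait-stats tools lean on is sys.dm_os_wait_stats, and a quick look at SOS_SCHEDULER_YIELD shows how a wait type can pile up occurrences with very little wait time attached (a minimal sketch, not from Brent's post):

    -- SOS_SCHEDULER_YIELD can show many waits with almost no wait time attached,
    -- which is why purely duration-based tools can miss a CPU problem that is
    -- only starting to build.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = N'SOS_SCHEDULER_YIELD';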

Moving tempdb on a SQL Server Instance

Drew Skwiers-Koballa gives us the step-by-step process for moving tempdb from one folder to another on a machine:

Not only can the size of TempDB files be unpredictable (unless the workload is completely predictable or a size limit is placed), but it is full of old Tupperware. That is, if TempDB is destroyed, your SQL Server will create a new one as soon as the service restarts. The whole migration is summarized in these 4 steps:

1. Create a new location for TempDB
2. Use TSQL to change the TempDB file location(s)
3. Restart the SQL Server service during a maintenance window
4. Verify and clean up

This is one of the easier things to move, but it does require server downtime.
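
The T-SQL in step 2 is the ALTER DATABASE tempdb MODIFY FILE pattern; a minimal sketch follows, with an illustrative target folder (tempdev and templog are the default logical file names, but check sys.master_files for yours).

    -- Confirm the current logical names and physical locations.
    SELECT name, physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID(N'tempdb');

    -- Point each file at the new folder; the change only takes effect
    -- after the SQL Server service restarts.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');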

Transitivity and Query Tuning

Jared Poche talks about the ability to use either predicate on a join for filtering:

You may recall the transitive property from elementary school math class. It states:

If A = B, and B = C, then A = C

The SQL Server optimizer can and will use this property, and it can lead to issues in your queries. When I’m writing a query, I have a clear idea of how I want it to operate. But using the transitive property, SQL Server has additional options one might not expect, and this may occasionally cause things to go awry.

Click through for an example. Most of the time, the optimizer is smart about using transitivity, but sometimes it can go wrong.
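
As a tiny illustration of the concept (not Jared's example; table names are hypothetical), the filter below is written against Orders, but the join predicate lets the optimizer infer the same filter against Customers and start from either table:

    -- The explicit predicate is o.CustomerID = 42, but because the join requires
    -- o.CustomerID = c.CustomerID, the optimizer can also infer c.CustomerID = 42
    -- and choose to seek on Customers first instead.
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
        INNER JOIN dbo.Customers AS c
            ON o.CustomerID = c.CustomerID
    WHERE o.CustomerID = 42;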

Features for Securing Data in SQL Server

Jonathan Kehayias gives us a set of features we can use to secure data in SQL Server:

Recently I have been having quite a few discussions around security of data inside of SQL Server and how to prevent the massive data breaches that we have been hearing about on the news. Like most things, some people want “THE ANSWER” or THE SOLUTION to securing the data inside of SQL Server. Unfortunately there isn’t a single solution that solves all of the problems that are potentially out there. Security of data requires defense in depth, starting with a secure configuration and installation of SQL Server. Oftentimes, defense in depth also means changes to the database, the application, and how you do business.

Also read the comments to understand why Dynamic Data Masking and Row-Level Security aren’t really security features. I totally agree that Dynamic Data Masking is not a security feature; Row-Level Security I’d argue is, but is quite limited in its scope, much more so than any of the features Jonathan lists.
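
To see why Dynamic Data Masking draws that criticism, here is a minimal sketch (hypothetical table): masking only changes how a value is rendered, so a user with ordinary SELECT permission can still probe the real data through predicates.

    -- Mask the column for users without the UNMASK permission.
    ALTER TABLE dbo.Customers
        ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

    -- A non-privileged user sees masked values like aXXX@XXXX.com...
    SELECT CustomerID, Email
    FROM dbo.Customers;

    -- ...yet can still narrow in on the underlying value with filters,
    -- which is why masking is obfuscation rather than a security boundary.
    SELECT CustomerID
    FROM dbo.Customers
    WHERE Email LIKE N'a%@contoso.com';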

Finding Tables with High Write Frequency

Michael J Swart has an interesting query for us:

You have excessive WRITELOG waits (or HADR_SYNC_COMMIT waits) and among other things, you want to understand where.

Microsoft’s advice Diagnosing Transaction Log Performance Issues and Limits of the Log Manager remains a great resource. They tell you to use perfmon to look at the log bytes flushed/sec counter (in the SQL Server:Databases object) to see which database is being written to so much.

After identifying a database you’re curious about, you may want to drill down further. I wrote about this problem earlier in Tackle WRITELOG Waits Using the Transaction Log and Extended Events. The query I wrote for that post combines results of an extended events session with the transaction log in order to identify which procedures are doing the most writing.

But there are times when you just want a quick and dirty script, and that’s what Michael has for us today.
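
Michael's script is the one to grab, but for a sense of the general shape, sys.dm_db_index_operational_stats can rank tables in a database by leaf-level write activity (a rough sketch, not his query):

    -- Rank tables in the current database by leaf-level writes accumulated
    -- since the objects' metadata was last cached (typically the last restart).
    SELECT OBJECT_SCHEMA_NAME(ios.object_id) AS schema_name,
           OBJECT_NAME(ios.object_id)        AS table_name,
           SUM(ios.leaf_insert_count
             + ios.leaf_update_count
             + ios.leaf_delete_count)        AS leaf_writes
    FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
    GROUP BY ios.object_id
    ORDER BY leaf_writes DESC;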
