Press "Enter" to skip to content

Month: October 2018

When Query Store Skips An Interval Period

Rich Brenner explains why you might see gaps in your Query Store run times:

When messing about (developing) things that are based on the query store I came across an unusual phenomenon. We were missing query store intervals. It’s not that the data had been deleted, but rather that it was never created in the first place. Take a look at this:

I’d set the query store to 5-minute intervals for testing purposes and left it. Notice that there is no data between 15:20 and 15:25. It’s not that the data has been deleted; we know that because the runtime_stats_interval_id is still incrementing correctly.
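
If you want to look for gaps yourself, the interval data comes from the Query Store catalog views. Here is a minimal sketch (not Rich’s exact query) against sys.query_store_runtime_stats_interval and sys.database_query_store_options:

    -- List the Query Store interval windows. A missing time window with
    -- consecutive interval IDs means the interval was never created, not deleted.
    SELECT rsi.runtime_stats_interval_id,
           rsi.start_time,
           rsi.end_time
    FROM sys.query_store_runtime_stats_interval AS rsi
    ORDER BY rsi.start_time;

    -- Confirm the configured interval length (5 minutes in this test).
    SELECT actual_state_desc, interval_length_minutes
    FROM sys.database_query_store_options;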

Click through for the answer.

Death Marches Roundup

Jeff Mlakar takes us on a tour of bad project planning:

This month we had 15 post submissions about this daunting topic. Two of the posts were from people who had never posted before. To them I say welcome and I hope you enjoyed the experience.

I think, in general, you were all brave to write about this sensitive topic. I know a lot of you are consultants (which is a great way to expose yourself to a death march project) and must be careful about telling stories that could be misconstrued by clients. Nonetheless, with enough obfuscation and redaction you have brought forth some truly horrifying posts! I am going to group the submissions by the most terrible themes that many posts shared.

It is best to read this post in Vincent Price’s voice.

Removing A Database From An AG

Erin Stellato walks us through the process of removing a database from an Availability Group and then re-adding it later:

I recently had a scenario in a two-node Availability Group where multiple large-batch modification queries were executed and created a large redo queue on the replica. The storage on the replica is slower than that on the primary (not a desired scenario, but it is what it is) and the secondary has fallen behind before, but this time it was to the point where it made more sense to remove the database from the replica and re-initialize, rather than wait several hours for it to catch up. What I’m about to detail is not an ideal solution. In fact, your solution should be architected to avoid this scenario entirely (storage of equal capability for all involved nodes is essential). But, stuff happens (e.g., a secondary database unexpectedly pausing), and the goal was to get the replica synchronized again with no downtime.
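
The broad shape of the operation, assuming a hypothetical Availability Group named [AG1] and database named [SalesDB], looks something like the sketch below, run from the secondary replica; Erin’s post walks through the real sequence and the caveats:

    -- On the secondary replica: take the local copy out of the Availability Group.
    ALTER DATABASE [SalesDB] SET HADR OFF;

    -- Drop the stale copy so it can be re-initialized from scratch.
    DROP DATABASE [SalesDB];

    -- Re-seed the secondary from fresh backups taken on the primary (paths are placeholders).
    RESTORE DATABASE [SalesDB] FROM DISK = N'\\backupshare\SalesDB_full.bak' WITH NORECOVERY;
    RESTORE LOG [SalesDB] FROM DISK = N'\\backupshare\SalesDB_log.trn' WITH NORECOVERY;

    -- On the secondary replica: join the database back into the Availability Group.
    ALTER DATABASE [SalesDB] SET HADR AVAILABILITY GROUP = [AG1];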

Click through for the demo.

Capturing Implicit Conversions With Extended Events

Grant Fritchey shows how easy it is to build an extended event which captures implicit conversions:

Built right into Extended Events is an event that captures conversions that would affect execution plans: plan_affecting_convert. This event will show both CONVERT and CONVERT_IMPLICIT warnings that you would normally only see within an execution plan. You can capture this event with others, and capturing events together along with causality tracking makes it very easy to track queries that have the issue. Here’s one example of how you might capture implicit conversions along with the query:
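
(Grant’s actual session definition is in the linked post; as a rough sketch of the idea, with a session name and file target of my own choosing, it could look something like this.)

    CREATE EVENT SESSION ImplicitConversions ON SERVER
    ADD EVENT sqlserver.plan_affecting_convert
        (ACTION (sqlserver.sql_text, sqlserver.database_name)),
    ADD EVENT sqlserver.sql_batch_completed
        (ACTION (sqlserver.sql_text))
    ADD TARGET package0.event_file (SET filename = N'ImplicitConversions')
    WITH (TRACK_CAUSALITY = ON);  -- causality tracking ties the convert warning to its query
    GO

    ALTER EVENT SESSION ImplicitConversions ON SERVER STATE = START;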

This kind of event can make system-level performance tuning much easier.

Using R To Hit Azure ML From Power BI

Leila Etaati shows how you can use R to hit an Azure ML endpoint to populate a data set in Power BI:

You need to create a model in Azure ML Studio and create a web service for it.

The traditional example: is a passenger on the Titanic going to survive or not?

We have a dataset about passengers, like their age, gender, and passenger class, and we are going to predict whether they are going to survive or not.

Open Azure ML Studio and follow the steps to create a model for predicting this.

Then download the Titanic dataset from here

Click through for the step-by-step instructions.

Power BI Data Profiling

Matt Allington takes a look at a new feature in Power BI:

The data profiling tools look at the first 1,000 rows in the preview data loaded and show you the big picture of what the data “looks” like.

Currently the profiling tool only works on the top 1,000 rows of data. It also takes some time to prepare the profile of the columns (as could be expected); however, the benefits of getting this stuff right before moving on far outweigh the slower load times (IMO). I would love to see an option to profile the entire set of data for one or more columns. I am sure this will come.

Teo Lachev shares some thoughts on what it would take to make this a killer feature:

That’s all the data profiling you get for now. Here is what it will take to make Power BI data profiling a killer feature:

  1. Allow data profiling over all the values (understandably there will be performance impact).

  2. Add more aggregates, such as Min/Max/Std/Median.

  3. The ability to dynamically filter the preview data for the selected bar in the profile.

As it is, there’s enough here to see the potential of where it could go.

Testing TDE Performance

Eduardo Pivaral tests the performance of a database with Transparent Data Encryption versus that same database without encryption:

Transparent data encryption (TDE) helps you to secure your data at rest; this means the data files and related backups are encrypted, securing your data in case your media is stolen.
This technology works by implementing real-time I/O encryption and decryption, so this implementation is transparent to your applications and users.

However, this type of implementation could lead to some performance degradation, since more resources must be allocated in order to perform the encryption/decryption operations.

In this post we will compare how much longer some of the most common DB operations take, so if you are planning to implement TDE on your database, you can have an idea of what to expect from different operations.
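
If you would like to run a similar comparison yourself, turning TDE on for a test database takes only a few statements; the database name, certificate name, and password below are placeholders:

    USE master;
    -- Create a database master key and a certificate to protect the database encryption key.
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'ReplaceWithAStrongPassword!1';
    CREATE CERTIFICATE TDE_TestCert WITH SUBJECT = 'TDE performance test certificate';
    GO

    USE TestDB;
    -- Create the database encryption key and enable encryption.
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDE_TestCert;
    ALTER DATABASE TestDB SET ENCRYPTION ON;
    GO

    -- Monitor the background encryption scan (encryption_state 3 = encrypted).
    SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
    FROM sys.dm_database_encryption_keys;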

These results fit in reasonably well with what I’d heard, but it’s nice to have someone run the numbers.

Using Containers To Build A Home Lab

Dmitri Korotkevitch walks us through creating a home lab with Docker containers:

Obviously, in real life, we do not work with a vanilla SQL Server installation. We need to customize it by changing SQL Server settings and logins, creating and/or restoring the databases, and doing other actions. There are a couple of ways you can do that.

The first approach is customizing an existing container manually and creating the image from it using the docker container commit command. After that, you can start new containers from the created image the same way as we already discussed. We will cover a couple of ways to move data to and from containers later.

There is a better way, however. You can automate this process by utilizing the docker build command. The process is very simple. You just need to define a Dockerfile, which contains the reference to the base image and specifies the build actions. You can copy scripts and database backups into the image, and run SQLCMD, BCP, and PowerShell scripts there – you pretty much have full control. Internally, Docker runs every command inside deployment containers (creating and destroying them during the process), saving the final one as the target image.

Read the whole thing.
