
Author: Kevin Feasel

Deprecated and Discontinued Features in SQL Server

Randolph West takes a look at my favorite activity: deleting code:

The following statements are true:

– SQL Server 2019 does not have any deprecated features, but does have discontinued features
– SQL Server 2017 does not have any discontinued features, but does have deprecated features
– SQL Server 2016 has both deprecated and discontinued features

Let’s discuss the difference between deprecated and discontinued features, and explain how this affects database administrators looking to move to SQL Server 2016 or newer. My current advice is to target SQL Server 2019 with at least Cumulative Update 2 (CU2).

Read on for more details.
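If you want a quick sanity check before planning an upgrade, the Deprecated Features performance counter object tracks how often each deprecated feature has been used since the instance last started. A rough sketch of the query (the exact object_name prefix varies between default and named instances, hence the wildcard):

-- Deprecated feature usage since instance startup; non-zero rows are worth
-- reviewing before a migration.
SELECT  instance_name AS deprecated_feature,
        cntr_value    AS usage_count_since_startup
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%Deprecated Features%'
        AND cntr_value > 0
ORDER BY cntr_value DESC;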


Running and Scheduling Azure SQL DB Elastic Jobs

Kate Smith continues a series on Azure SQL Database Elastic Jobs:

In previous posts, I have demonstrated how to create an Elastic Jobs Agent, set up credentials for Elastic Jobs, create a target group of servers/databases for the agent, and create and define an elastic job using both PowerShell and T-SQL.

In this post, I drill down into how to run an Elastic Job both in an ad-hoc fashion and how to schedule a job to run regularly. I do this both for PowerShell and for T-SQL.

The PowerShell version is a one-liner and the T-SQL version looks a good bit like it does with SQL Agent jobs.
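For reference, the T-SQL side looks roughly like this. It is only a sketch, run against the Elastic Jobs job database, and the job name is made up for illustration; double-check the jobs.sp_update_job parameters against Kate's post.

-- Run an existing job once, ad hoc (executed in the job database).
EXEC jobs.sp_start_job @job_name = N'NightlyMaintenance';

-- Enable the same job on a recurring schedule: every 12 hours.
EXEC jobs.sp_update_job
    @job_name                = N'NightlyMaintenance',
    @enabled                 = 1,
    @schedule_interval_type  = N'Hours',
    @schedule_interval_count = 12;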


Resizing the SSMS Grid Results

Kenneth Fisher shows how you can resize SQL Server Management Studio’s results grid:

After my post last month about configuring your tools, I’ve been on a make-my-life-easier kick, and one of the things that gives me grief every day is that my eyesight is at best mediocre and some days it’s worse than others. (I have issues. No, not just those issues, issues that affect my eyesight.) I’m in SSMS 90% of my day or more, so it really helps that it’s easy to zoom in and out on the query window, and even the message results pane. There is a little box with a percentage in the bottom-left corner of each of the two panes. This controls the font size of that pane.

This is also helpful if you ever present using SSMS and don’t have a strong grasp of ZoomIt or the Magnifier app.


ASYNC_NETWORK_IO and Execution Plans

Jonathan Kehayias dives into an interesting problem:

A few weeks ago, an interesting question was asked on the #SQLHelp hashtag on Twitter about the impact of execution plans on the ASYNC_NETWORK_IO wait type, and it generated some differing opinions and a lot of good discussion.

My immediate answer to this would be that someone is misinterpreting the cause and effect of this, since the ASYNC_NETWORK_IO wait type is encountered when the Engine has results to send over TDS to the client but there are no available TDS buffers on the connection to send them on. Generally speaking, this means that the client side is not consuming the results efficiently, but based on the ensuing discussion I became intrigued enough to do some testing of whether or not an execution plan would actually impact the ASYNC_NETWORK_IO waits significantly.

To summarize: Focusing on ASYNC_NETWORK_IO waits alone as a tuning metric is a mistake. The faster a query executes, the higher this wait type will likely accumulate, even if the client is consuming results as fast as possible. (Also see Greg’s recent post about focusing on waits alone in general.)

Click through for the things Jonathan tested.
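If you want to see where this wait shows up on your own systems, a couple of quick DMV queries will do it; just keep Jonathan’s caveat in mind that a large number here is not automatically a client problem. A sketch:

-- Instance-wide accumulation since the last restart (or wait-stats clear).
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = N'ASYNC_NETWORK_IO';

-- Per-session breakdown (SQL Server 2016 and later), handy for spotting
-- which connection is slow to consume its result set.
SELECT session_id, wait_type, waiting_tasks_count, wait_time_ms
FROM   sys.dm_exec_session_wait_stats
WHERE  wait_type = N'ASYNC_NETWORK_IO'
ORDER BY wait_time_ms DESC;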


Building Metadata for an ADF Pipeline

Paul Andrew continues a series on Azure Data Factory and metadata-driven pipelines:

Welcome back friends to part 2 of this 4 part blog series. In this post we are going to deliver on some of the design points we covered in part 1 by building the database to house our processing framework metadata.

Let’s start with a nice new shiny Azure SQLDB database and schema. This can easily be scaled up as our calls from Data Factory increase and ultimately the solution we are using the framework for grows.

Soon we will get to see the Azure Data Factory power in action.
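To give a flavor of what metadata-driven means in practice, here is a purely illustrative table sketch (not Paul's actual schema, just the general shape of storing pipeline and stage definitions for Data Factory to read at runtime):

-- Illustrative only: a minimal metadata table a framework like this might use.
CREATE SCHEMA metadata;
GO

CREATE TABLE metadata.PipelineProcess
(
    ProcessId    INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    StageNumber  INT           NOT NULL,  -- execution order across the batch
    PipelineName NVARCHAR(200) NOT NULL,  -- ADF pipeline to invoke
    DataFactory  NVARCHAR(200) NOT NULL,  -- factory that hosts the pipeline
    Enabled      BIT           NOT NULL DEFAULT (1)
);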


Secure Azure Data Source Access from Databricks

Bhavin Kukadia, Abhinav Garg, and Michal Marusan show us the right way to access Azure data sources from Azure Databricks:

Enterprise Security is a core tenet of building software at both Databricks and Microsoft, and thus it’s considered as a first-class citizen in Azure Databricks. In the context of this blog, secure connectivity refers to ensuring that traffic from Azure Databricks to Azure data services remains on the Azure network backbone, with the inherent ability to whitelist Azure Databricks as an allowed source. As a security best practice, we recommend a couple of options which customers could use to establish such a data access mechanism to Azure Data services like Azure Blob Storage, Azure Data Lake Store Gen2, Azure Synapse Data Warehouse, Azure Cosmos DB, etc. Please read further for a discussion on Azure Private Link and Service Endpoints.

This is more about network configuration rather than things like “store your credentials and other secrets in Azure Key Vault,” which is also a good idea.


Real-Time Replay with WorkloadTools

Gianluca Sartori shows us how to perform a real-time replay with WorkloadTools:

Before we jump to how, I’d better spend some words on why a real-time replay is needed.

The main reason is the complexity involved in capturing and analyzing a workload for extended periods of time. Especially when performing migrations and upgrades, it is crucial to capture the entire business cycle, in order to cover all possible queries issued by the applications. All existing benchmarking tools require you to capture the workload to a file before it can be analyzed and/or replayed, but this becomes increasingly complicated when the length of the business cycle grows.

I’m not sure how frequently I’d use real-time replays, but it’s nice to know that it’s pretty easy to pull off with WorkloadTools.


Approximate Distinct Count with DAX

Gilbert Quevauvilliers runs some performance tests against the approximate distinct count formula in DAX:

I am currently running SQL Server Analysis Services (SSAS) 2019 Enterprise Edition. (This can also be applied to Power BI.)

My fact table has got roughly 950 million rows stored in it.

And as mentioned previously, it has got over 64 million distinct users.

The data is queried from SQL Server into SSAS.

Gilbert first checks how close these are and then how much faster the approximate count is.
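The relational engine has the same trick: SQL Server 2019 ships APPROX_COUNT_DISTINCT, which the documentation pegs at roughly a 2% error rate with 97% probability in exchange for much lower memory use. A quick comparison sketch against a hypothetical fact table:

-- dbo.FactUserActivity and UserId are made-up names; substitute your own.
SELECT COUNT(DISTINCT UserId)        AS exact_distinct_users,
       APPROX_COUNT_DISTINCT(UserId) AS approx_distinct_users
FROM   dbo.FactUserActivity;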


Setting Up a SQL Server Lab with AutomatedLab

Jess Pomfret looks at a very interesting PowerShell module:

There is a fantastic PowerShell module called AutomatedLab that can enable you to easily build out a lab for the specific scenario you need to test. Even better, the module comes with 70 sample scripts that you can start with and adapt to meet your needs.

The module gives you the option to work with Hyper-V or VMware. I will say most of the examples are using Hyper-V, and that is what I’ll be using also.

For my lab I want a SQL Server 2019 instance joined to a domain, and a separate client machine that I can manage the SQL Server from. On the client I would need to be able to connect to the internet as I want to be able to download PowerShell modules from the gallery easily.

It’s about time for me to rebuild my lab, so I’ll need to check that out.
