Press "Enter" to skip to content

Author: Kevin Feasel

Trivial Plans and Stats Updates

Lonny Niederstadt lays out the harshness of reality:

OK.  SQL Server trivial plans for rowstore table INSERT. And related optimizer stats interaction.
TL;DR cached trivial plans for INSERT can be surprisingly stubborn. If a query matches to one, it won’t perform or queue a stats update even if the stats are stale.  If the stats have been updated and would otherwise warrant a per-index plan – but there is a matching cached trivial plan for a per-row plan… outta luck. Might hafta DBCC FREEPROCCACHE or add OPTION(RECOMPILE) hint to make sure a cached trivial plan doesn’t prevent a per-index update for an INSERT when you really want one.

Read on for a dive into the topic.
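As a quick illustration of the workaround Lonny mentions, here is a minimal sketch (the table and data are invented) of hinting an INSERT with OPTION (RECOMPILE) so a cached trivial plan cannot be reused:

```sql
-- Hypothetical staging table with a nonclustered index.
CREATE TABLE dbo.SalesStaging
(
    SaleID   int           NOT NULL,
    SaleDate date          NOT NULL,
    Amount   decimal(10,2) NOT NULL,
    INDEX IX_SalesStaging_SaleDate NONCLUSTERED (SaleDate)
);

-- OPTION (RECOMPILE) forces a fresh compile for this statement,
-- so a stale cached trivial plan cannot be reused for the INSERT.
INSERT INTO dbo.SalesStaging (SaleID, SaleDate, Amount)
SELECT object_id, CAST(GETDATE() AS date), 1.00
FROM sys.objects
OPTION (RECOMPILE);

-- The blunter instrument, which clears plans server-wide:
-- DBCC FREEPROCCACHE;
```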


The Power of PIVOT and GROUPING SETS

Aaron Bertrand builds a report:

Without comprehensive reporting tools (or Excel), it can be cumbersome and frustrating to produce perfect report output from SQL Server SELECT statements or stored procedures. In modern versions, we have access to T-SQL functionality that far exceeds old-school ROLLUP and CUBE, like PIVOT, UNPIVOT, and GROUPING SETS. Let’s look at how to produce output we can easily plug into a simple front end and produce great-looking reports.

GROUPING SETS is one of my favorite under-utilized operators.
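To show why, here is a minimal sketch (the temp table and values are invented) where one GROUPING SETS query returns per-region totals, per-product totals, and a grand total in a single pass, instead of three GROUP BY queries glued together with UNION ALL:

```sql
-- Hypothetical sales data.
CREATE TABLE #Sales
(
    Region  varchar(20)   NOT NULL,
    Product varchar(20)   NOT NULL,
    Amount  decimal(10,2) NOT NULL
);

INSERT INTO #Sales (Region, Product, Amount)
VALUES ('East', 'Widget', 100), ('East', 'Gadget', 150),
       ('West', 'Widget', 200), ('West', 'Gadget', 250);

-- Per-region totals, per-product totals, and the grand total ()
-- come back from one scan of the data.
SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM #Sales
GROUP BY GROUPING SETS ((Region), (Product), ());
```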


Stringing Azure Data Factory between VNets

Ahmed Mahmoud performs networking wizardry:

Customer wants to connect Azure Data Factory on one subscription to an Azure SQL Server on Virtual Machine (SQL VM) on another subscription. Check out the architecture diagram below for more clarification.

Click through for that diagram as well as the process. And between VNet peering and Private Link, I believe (but could be wrong in saying) the traffic would never leave Azure-hosted machines even when it transits between subscriptions.


From Access to SQL Server

Tom Collins has some tips to make an Access to SQL Server migration more successful:

-Access has a size limit of 2 GB

-Access has a concurrent users limit of 255 users

-Require increased capacity 

The SQL Server Migration Assistant for Access (SSMA) is a very useful tool offered by Microsoft.

The main objective of these notes is to supplement the Microsoft documentation and to assist in the Access to SQL Server journey.

Read on for those notes.


Database Offline Works but Online Permissions Failure

David Alcock unravels a mystery:

I was browsing the SQL Server subreddit earlier where someone had posted a problem where they’d been able to take a database offline but couldn’t bring the database back online via a script or the UI in SSMS (full thread here).

There’s a bit of a back story; all the DBAs have left the business (facepalm), so a non-DBA has been left with the admin-type tasks. Secondly, the reason the database was being taken offline was to take physical backups of the database’s mdf and ldf files (double facepalm).

That is its own issue but read on for the problem at hand.
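The statements involved are simple enough; the sketch below (the database name is invented, and the comments describe the general permission behavior rather than the thread’s exact resolution) shows why the two directions are not symmetric:

```sql
-- Taking a database offline: ALTER permission on the database itself is enough.
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Bringing it back online: with the database offline, its database-level
-- permissions cannot be evaluated, so a server-level permission such as
-- ALTER ANY DATABASE (or sysadmin membership / database ownership) is needed.
ALTER DATABASE SalesDB SET ONLINE;
```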


Query Plans in Azure Data Studio

Grant Fritchey is excited:

I have long been a fan of Azure Data Studio, but one shortcoming has kept me from truly adopting it: Query Plans in Azure Data Studio. Sure, there was a plug-in you could install. Also, you could use a somewhat truncated version of Plan Explorer, but all I wanted was for SQL Server Management Studio plans to be query plans in Azure Data Studio.

Go and get version 1.35 of the tool. Right now.

I think there’s still a fair amount of work to do on those plans but it’s a far cry from where they were prior to this.


Building posexplode() in the Serverless SQL Pool

Jovan Popovic rides to the rescue with JSON:

The array cells are pivoted and returned as simple scalar columns. Now you can simply use WHERE or GROUP BY clauses to filter or summarize information by array element values. Another very useful piece of information might be the index of every element (generated as pos column).

Spark enables you to use the posexplode() function on every array cell. The posexplode() function will transform a single array element into a set of rows where each row represents one value in the array and the index of that array element. As a result, one row with the array containing three elements will be transformed into three rows containing scalar cells. This flattened/normalized representation is much easier for the analysis.

Once the array is flattened and normalized, you can easily analyze the data and find how many people know SQL or Java.

Read on to see how you can implement the equivalent of POSEXPLODE() using OPENJSON() in the Azure Synapse Analytics serverless SQL pool.
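For a rough idea of the pattern, here is a minimal sketch against an inline JSON document (the document and column names are invented); the article works through the same idea against files in the serverless pool:

```sql
DECLARE @json nvarchar(max) = N'[
    {"name": "Jane", "skills": ["SQL", "Java", "Python"]},
    {"name": "John", "skills": ["SQL", "C#"]}
]';

-- OPENJSON over the inner array returns the element in [value] and its
-- zero-based index in [key], similar to what posexplode() gives you in Spark.
SELECT person.name,
       skill.[key]   AS pos,
       skill.[value] AS skill
FROM OPENJSON(@json)
     WITH (name   nvarchar(50)  '$.name',
           skills nvarchar(max) '$.skills' AS JSON) AS person
CROSS APPLY OPENJSON(person.skills) AS skill;
```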


Executing SQL Statements in Azure Data Factory

Abhishek Narain announces a pretty nice improvement to Azure Data Factory and Synapse Pipelines:

We are introducing a Script activity in pipelines that provides the ability to execute single or multiple SQL statements.

Using the script activity, you can execute common operations with Data Manipulation Language (DML), and Data Definition Language (DDL). DML statements like SELECT, UPDATE, and INSERT let users retrieve, store, modify, delete, insert and update data in the database. DDL statements like CREATE, ALTER, and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users.

Be sure to read the limitations at the bottom, however.
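To give a feel for it, the body of a Script activity is an ordinary batch of statements; a sketch like the following (the table is hypothetical) mixes DDL and DML in one call:

```sql
-- Create the staging table if it does not exist yet (DDL).
IF OBJECT_ID(N'dbo.StagingOrders', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.StagingOrders
    (
        OrderID   int           NOT NULL,
        OrderDate date          NOT NULL,
        Amount    decimal(10,2) NOT NULL
    );
END;

-- Reset and load it (DML).
TRUNCATE TABLE dbo.StagingOrders;

INSERT INTO dbo.StagingOrders (OrderID, OrderDate, Amount)
VALUES (1, '2022-03-01', 19.99);
```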


Writing Extended Events to InfluxDB

Gianluca Sartori’s speaking my language:

The TIG software stack (Telegraf, InfluxDB, Grafana) is a very powerful combination of software tools that can help you collect, store and analyze data that has a time attribute. In particular, InfluxDB is a time series database, built with sharding, partitioning and retention policies in mind. It is absolutely fantastic for storing telemetry data, like performance counters from SQL Server or other software products.

In order to store data in InfluxDB, you can use Telegraf, a data collection agent that takes care of extracting telemetry data from the object to observe and upload it to the InfluxDB database. Telegraf is built with the concept of plugins: each object to observe has its own plugin and it’s not surprising at all to find a specialized plugin for SQL Server.

Click through for more details and how to set it up.
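Telegraf’s sqlserver plugin handles the collection for you, but as a rough sketch of the kind of counter data that lands in InfluxDB as time series points, here is a hand-rolled query against the performance counter DMV (the counter names were picked purely for illustration):

```sql
-- Each row becomes a point: a timestamp, some tags, and a numeric value.
SELECT [object_name],
       counter_name,
       instance_name,
       cntr_value,
       SYSDATETIMEOFFSET() AS collected_at
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy');
```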


Building a SQL Server Inventory via PowerShell

Lee Markum wants to figure out where all the servers are:

You’re a data professional and you’ve been given the keys to a new SQL Server environment. You know you need to build a SQL Server inventory so you know what is in your environment, but how do you get that information?

One of the things I have talked about in other posts is how to create a SQL Server inventory. I’ve discussed using the MAP Toolkit and building your own inventory database using T-SQL. Today, we’ll see another way to generate a list of SQL Servers in your environment.

Click through for that method. In the past, I’ve used nmap (with permission, of course) to figure out all the SQL Server instances in my environment. Fun times.
