Press "Enter" to skip to content


Unkillable Threads

Paul Randal gives us a supervillain origin story:

While I was teaching IEPTO2 last week, I was discussing why sometimes a thread cannot be terminated using the KILL command, and thought it would make a great topic for a post.

Some of you have likely seen a phenomenon called a non-yielding scheduler. This is where a thread is using the processor and doesn’t voluntarily yield after using more than the thread quantum (4 milliseconds, unchangeable). There’s a background task called the scheduler monitor that checks that progress is being made on the various schedulers inside SQL Server and issues a warning if it finds a problem.

Read on to learn more about how this can happen and what it means for you.
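To make the mechanics concrete, here is a minimal T-SQL sketch (the session id is hypothetical). KILL only sets a flag on the target; the thread has to reach a yield point to notice the flag and begin rolling back, which is precisely what a non-yielding thread never does:

```sql
-- Ask SQL Server to terminate session 62 (a hypothetical session_id).
KILL 62;

-- KILL only marks the request; check rollback progress separately.
KILL 62 WITH STATUSONLY;

-- The scheduler monitor watches these schedulers for threads that
-- hold the processor past their quantum without yielding.
SELECT scheduler_id, status, active_workers_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';
```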


From Azure Analysis Services to Power BI PPU

Gilbert Quevauvilliers teases a new series:

I have been doing a lot of evaluation and investigation for organizations who are currently using Azure Analysis Services (AAS) and looking to see if they can leverage Power BI Premium Per User (PPU).

In this series I am going to cover the following details, which I completed to see if the migration was not only feasible but should be the new normal.

Looks like it will be an 11-parter, so we have some reading to look forward to.


Additional Common Query Patterns for Joins

Erik Darling continues a series with two more posts. First up is sorting lookups:

Most people see a lookup and think “add a covering index”, regardless of any further details. Then there they go, adding an index with 40 included columns to solve something that isn’t a problem.

You’ve got a bright future in government, kiddo.

In today’s post, we’re going to look at a few different things that might be going on with a lookup on the inside.

The next post is around pre-fetching lookups:

One sort of interesting point about prefetching is that it’s sensitive to parameter sniffing. Perhaps someday we’ll get Adaptive Prefetching.

Until then, let’s marvel at this underappreciated feature.

Check out both posts and prepare to be illuminated.
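For a frame of reference before you click through, here is the general shape of the pattern in T-SQL (a sketch against the Stack Overflow demo database; the names and values are illustrative, not Erik's exact examples):

```sql
-- The index covers the predicate but not the select list, so the plan
-- pairs an index seek with a Key Lookup for each qualifying row.
CREATE INDEX ix_Posts_OwnerUserId ON dbo.Posts (OwnerUserId);

SELECT OwnerUserId, Score, CreationDate
FROM dbo.Posts
WHERE OwnerUserId = 22656;

-- The reflexive fix: cover the query. Reasonable here, but the same
-- instinct produces 40-column INCLUDE lists elsewhere.
CREATE INDEX ix_Posts_OwnerUserId_Covering
    ON dbo.Posts (OwnerUserId)
    INCLUDE (Score, CreationDate);
```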


Apache Kafka 2.8 Released

John Roesler announces Apache Kafka 2.8:

We are excited to announce that 2.8 introduces an early-access look at Kafka without ZooKeeper! The implementation is not yet feature complete and should not be used in production, but it is possible to start new clusters without ZooKeeper and go through basic produce and consume use cases.

At a high level, KIP-500 works by moving topic metadata and configurations out of ZooKeeper and into a new internal topic named @metadata. This topic is managed by an internal Raft quorum of “controllers” and is replicated to all brokers in the cluster. The leader of the Raft quorum serves the same role as the controller in clusters today. A node in the KIP-500 world can serve as a controller, a broker, or both, depending on the new process.roles configuration. See the README for quickstart instructions and additional details.

In addition to the headline item, there are plenty of other bugfixes and additions as well.
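If you want to kick the tires on the early access, the README the announcement links to is the authoritative quickstart. As a rough sketch, a single node acting as both broker and controller is driven by properties along these lines (values are illustrative, and in 2.8 the log directories must be formatted with the new kafka-storage.sh tool before first start):

```properties
# Hypothetical single-node KRaft config -- early access, not for production.
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
log.dirs=/tmp/kraft-combined-logs
```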


Working with Secrets in Powershell

Jeffrey Hicks tries out the Secrets Management modules in Powershell:

So I’ve been kicking the tires and trying to do more with the Secrets Management modules from Microsoft, now that they are out of pre-release status. You can install the Microsoft.PowerShell.SecretStore and Microsoft.PowerShell.SecretManagement modules (you’ll need both) from the PowerShell Gallery. You can also find extension modules that build on the Microsoft modules for working with other key vaults or secret stores; run find-module -tag secretmanagement to find them. But what I want to talk about today relates to the Microsoft modules, although it might apply to any of the extension modules as well. The challenge is using the secrets management modules with a PowerShell profile script.

Read on for a challenge around running scheduled tasks which require secrets and a solution.


JSON Patching in Cosmos DB

Hasan Savran does some JSON modification:

JSON is a quite common format for data these days. NoSQL databases save data in JSON format. Even relational databases like SQL Server give you the option to save or retrieve data in JSON format. When we need to update a property in a JSON document, we need to update the whole JSON document. This can be a problem, especially if the document is large. This is where the JSON Patch feature comes in. JSON Patch is a format that lets you update a JSON document partially.

You can do more than update a property. You can add, remove, replace, move, or test a value by using this format. Let’s look at its details by using the following JSON document as the source.

Read on to see how patching works with Cosmos DB in particular.
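As a taste of the format: a JSON Patch document is just an array of operations applied in order. A hypothetical patch against an imaginary source document might look like this (these are the standard RFC 6902 operations; the exact set Cosmos DB supports may differ, so check the post for the specifics):

```json
[
  { "op": "replace", "path": "/name",   "value": "New Name" },
  { "op": "add",     "path": "/tags/-", "value": "json-patch" },
  { "op": "remove",  "path": "/legacyField" },
  { "op": "test",    "path": "/version", "value": 3 }
]
```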


Changing SQL Server to Use a Static, Non-Standard Port

Jack Vamvas has a cmdlet for us:

Question: I have an SQL Server instance, currently configured with a dynamic port. I’d like to change the setting from a dynamic port configuration to a static port configuration, using PowerShell.

I want to change to a static port as we need to set up some firewall rules, and using a static port will be much easier.

How can this be done?

Click through for the answer.


Extended Events in Azure SQL Database

Grant Fritchey compares and contrasts extended events on-premises to extended events in Azure SQL Database:

I have long advocated for the use of Extended Events. I’ve been posting all sorts of blog posts on how to implement them, how they present unique opportunities for new and interesting data, and how they do so much more than the old trace events, yet put less of a load on the system. All of that is true, until we hit Azure SQL Database.

Now, don’t get me wrong, Extended Events are still awesome, amazing and wonderful. It’s just that, Azure SQL Database is going to force us to hop through a few hoops. I want to be up front with these… I’m trying to find the right word here, challenges? Maybe. Frustrations? Yeah, kind of. Limitations? Again, sort of, but not quite. We’ll settle on as neutral a term as possible: differences. For the moment.

Read on for the first part in an ongoing series.
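One difference shows up immediately in plain T-SQL: sessions in Azure SQL Database are database-scoped rather than server-scoped. A minimal sketch (the event and session names are illustrative):

```sql
-- ON DATABASE rather than ON SERVER: Azure SQL Database has no
-- server-scoped sessions. The ring_buffer target keeps this simple;
-- file targets there need an Azure Storage container rather than a
-- local path.
CREATE EVENT SESSION track_sql ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION track_sql ON DATABASE STATE = START;
```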


QUOTENAME and Dynamic SQL

Chris Johnson looks at one way to protect dynamic SQL statements:

I don’t have the usual aversion to dynamic SQL that a lot of developers have. I use it quite regularly as part of my ETL processes, usually to define how data will move from one stage to another. So, for instance, I might have a single Validate procedure that takes a number of parameters, including an import table name, and moves data from that import table to a valid table according to my validation rules. Or I might have a single SCDMerge procedure that takes a transformation view and merges that into a slowly changing dimension (although not using the actual MERGE function, for various reasons). These procedures allow me to have confidence that data will always move from one stage to another in the same way, and save me from writing essentially the same statement 50 times, having to update it 50 times when I need to change the way we do something, and inevitably missing one and introducing some error.

This always feels like a pretty safe use of dynamic SQL to me, because it avoids some of the more common objections people like to raise to it:

Click through to see how QUOTENAME() can help sanitize user inputs. I personally prefer the route of using sp_executesql, but QUOTENAME() can also do the trick.
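For a quick sense of what QUOTENAME() buys you, here is a minimal sketch (the table name and payload are hypothetical): it wraps an identifier in brackets and doubles any embedded closing brackets, so a hostile identifier cannot break out of the quoting.

```sql
-- A hypothetical import-table name with an injection attempt baked in.
DECLARE @table sysname = N'Import_2021]; DROP TABLE dbo.Users; --';

DECLARE @sql nvarchar(max) =
    N'SELECT COUNT(*) FROM dbo.' + QUOTENAME(@table) + N';';

-- The embedded closing bracket is doubled, neutering the payload:
--   SELECT COUNT(*) FROM dbo.[Import_2021]]; DROP TABLE dbo.Users; --];
PRINT @sql;

-- EXEC sys.sp_executesql @sql;  -- would look for that literal, odd name
```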


Querying Serverless SQL Pools from Spark Notebooks in Scala

Jovan Popovic shows off one integration point between the data services in Azure Synapse Analytics:

Azure Synapse Analytics provides multiple query runtimes that you can use to query in-database or external data. You have the choice to use T-SQL queries on a serverless Synapse SQL pool or notebooks in Apache Spark for Synapse Analytics to analyze your data.

You can also connect these runtimes and run the queries from Spark notebooks on a dedicated SQL pool.

In this post, you will see how to create Scala code in a Spark notebook that executes a T-SQL query on a serverless SQL pool.

Read on to see how. You can also query Spark pool and dedicated SQL pool tables from serverless SQL pools.
