Press "Enter" to skip to content

Curated SQL Posts

Power BI Premium Per User

Adam Saxton is excited:

Are you curious what Power BI Premium Per User is all about? Adam walks you through how to get it and what it means from a user experience standpoint. Take advantage of Power BI Premium features without the Premium capacity price!

Click through for the video as well as a few links for more info.

Changing a Kubernetes Cluster to containerd

Andrew Pruski wants to get ahead of the game:

DISCLAIMER – You’d never do this for a production cluster. For those clusters, you’d simply get rid of the existing nodes and bring new ones in on a rolling basis. This blog is just me mucking about with my Raspberry Pi cluster to see if the update can be done in-place without having to rebuild the nodes (as I really didn’t want to have to do that).

Check it out. In addition to the Twitter thread Andrew mentions, the Kubernetes group has a full blog post with more details.

Use a Separate Deadlock Extended Events Trace

Kendra Little explains why it makes sense to have an extended events trace specifically for deadlocks:

We recently had a customer ask why SQL Monitor creates an Extended Events session to capture deadlock graphs, when SQL Server has a built-in system_health Extended Events trace which also captures deadlock information?

There are a couple of reasons why a dedicated trace is desirable for capturing deadlock graphs, whether you are rolling your own monitoring scripts or building a monitoring application. I like this question a lot because I feel it gets at an interesting tension/balance at the heart of monitoring itself.

Click through for the answer.
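
For reference, a dedicated deadlock session can be quite small. Here is a minimal T-SQL sketch of the idea; the session name, file name, and retention settings are illustrative assumptions, not taken from Kendra's post or from SQL Monitor.

-- Minimal sketch of a dedicated deadlock capture session.
-- Session name, file name, and sizes are illustrative assumptions.
CREATE EVENT SESSION [DeadlockCapture] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
(
    SET filename = N'DeadlockCapture.xel',
        max_file_size = (50),       -- MB per file
        max_rollover_files = (4)    -- keep a bounded history, independent of system_health
)
WITH (STARTUP_STATE = ON);          -- survive instance restarts
GO

ALTER EVENT SESSION [DeadlockCapture] ON SERVER STATE = START;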

Aggregate Functions in SQL Server

Hugo Kornelis takes us through the concept of aggregate functions:

SQL Server currently supports three operators that can compute aggregations: Hash Match, Stream Aggregate, and Window Aggregate. These operators all use the same basic principle of maintaining internal counters as rows are processed, so that the final value of those internal counters is the expected value.

Read on to see the full list, as well as how they operate.
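
As a quick, hypothetical illustration of how the query shape influences which operator shows up (the dbo.Sales table and its columns are made up for the example, not taken from Hugo's post):

-- Hypothetical table and columns, used only to illustrate which operator may appear.

-- A GROUP BY aggregate: typically a Stream Aggregate if the input arrives
-- sorted on CustomerID (for example via an index), otherwise a Hash Match (Aggregate).
SELECT CustomerID, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY CustomerID;

-- A windowed aggregate over a partition: in batch mode on recent versions
-- this can be handled by the Window Aggregate operator.
SELECT CustomerID, Amount,
       SUM(Amount) OVER (PARTITION BY CustomerID) AS CustomerTotal
FROM dbo.Sales;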

Introducing Azure Purview

Wolfgang Strasser gives us a once-over on a new service:

Today, at the Azure Data and Analytics event, a new Azure data governance service called Azure Purview (https://aka.ms/AzurePurview) was presented and made available in a public preview.

I have not had a chance to try the actual service, but I found a very interesting video (a Microsoft Mechanics video) from which I took the following screenshots.

Read on for Wolfgang’s thoughts. It’s definitely a step up from Azure Data Catalog.

ETL with R in SQL Server

Rajendra Gupta shows one reason for using R inside SQL Server:

Data professionals get requests to import and export data in various formats, such as comma-separated data (.CSV), Excel, HTML, JSON, YAML, and tab-separated data (.TSV). Usually, we use SQL Server Integration Services ETL packages for data transformations and for importing or exporting data.

SQL Machine Learning can be useful in dealing with various file formats. In the article, External packages in R SQL Server, we explored the R services and the various external packages for performing tasks using R scripts.

In this article, we explore the useful Rio package developed by Thomas J. Leeper to simplify the data export and import process.

One thing I’d like to reiterate is that even though you’re using R to move this data, you don’t need to perform any data science activities on the data—R can be the easiest approach for getting and cleaning up certain types of data.
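
To give a flavor of the approach, here is a minimal sketch of calling R from T-SQL and using rio to import a file, returning it as a result set. It assumes Machine Learning Services with R is installed, 'external scripts enabled' is turned on, and the rio package is available; the file path is a placeholder.

-- Minimal sketch: use rio inside SQL Server to import a file and return it as a result set.
-- Assumes R-based Machine Learning Services and the rio package; the path is a placeholder.
EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'
        library(rio)
        OutputDataSet <- import("C:/Temp/sales.xlsx")
    ';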

So You’ve Hit the Limits of ADF Concurrency

Paul Andrew shows what happens when you break the ADF concurrency barrier:

Firstly, understanding how these limits apply to your Data Factory pipelines takes a little bit of thinking, considering you need to understand the difference between an internal and an external activity. Then you need to think about this with the caveats of being per subscription and, importantly, per Azure Integration Runtime region.

Assuming you know that, and you’ve hit these limits!

Click through to see what happens. It’s not pretty.

Logging Schema Changes with DDL Triggers

Andreas Wolter shows how we can implement a log of object changes with DDL triggers:

Over the years working on customer systems, I personally found it to be invaluable and, as a best practice, equipped any database that I designed with such a small trigger and DDL log table, just in case. It has helped many times to quickly solve issues with deployment scripts, non-scripted changes to the systems, problems with Source Control, and simply getting answers quickly.
The concept is almost trivial and, because DDL changes are usually not in performance-critical code paths, the theoretical overhead on DDL statement runtimes is not relevant. (Unless frequent schema changes are part of a performance-sensitive workload – in which case I would question whether using DDL is a good idea at all in such a place. Note that temporary tables are not caught by DDL Triggers.)

Click through for more information, as well as a sample script.
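
For a sense of the pattern, here is a minimal sketch of a logging table plus a database-scoped DDL trigger; the object names are illustrative and this is not Andreas's script.

-- Minimal sketch of the pattern; object names are illustrative.
CREATE TABLE dbo.DDLLog
(
    LogId     int IDENTITY(1,1) PRIMARY KEY,
    EventTime datetime2(3) NOT NULL DEFAULT SYSDATETIME(),
    LoginName sysname      NOT NULL DEFAULT ORIGINAL_LOGIN(),
    EventData xml          NOT NULL
);
GO

CREATE TRIGGER trg_LogDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS   -- fires for database-scoped DDL statements
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.DDLLog (EventData)
    VALUES (EVENTDATA());       -- captures statement text, object, and login as XML
END;
GO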
