Press "Enter" to skip to content

Author: Kevin Feasel

Finding Looked-Up Columns on Queries

Grant Fritchey searches for included columns:

A common issue in performance is the lookup, key or RID, but the question frequently asked is, which columns are looked up? You need to know in order to try to address the issue. The answer is easy enough to find. Let’s take a quick look.

Read on for the answer, which you can find in SQL Server Management Studio, or by shredding a bunch of XML if that’s the kind of thing you’re into.
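If XML shredding is your thing, here’s a rough sketch in PowerShell of the idea, run against a saved execution plan. The file path is made up and the exact element layout is worth verifying against a real plan, but key lookups generally appear as IndexScan elements flagged with Lookup="1", with the retrieved columns in the enclosing RelOp’s OutputList:

```powershell
# A rough sketch: list the columns a Key Lookup retrieves, from a saved
# .sqlplan file. The path is hypothetical; the namespace is the standard
# showplan namespace.
$ns = @{ sp = 'http://schemas.microsoft.com/sqlserver/2004/07/showplan' }
$plan = 'C:\Temp\SlowQuery.sqlplan'

Select-Xml -Path $plan -Namespace $ns `
    -XPath '//sp:RelOp[sp:IndexScan/@Lookup="1"]/sp:OutputList/sp:ColumnReference' |
    ForEach-Object { $_.Node } |
    Select-Object Table, Column
```

In Management Studio, the same information shows up in the lookup operator’s Output List property.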

Using Synapse Link for Cosmos DB

I have a post combining Synapse Link for Cosmos DB and the Spark to Synapse SQL Connector:

In this post, we saw how to enable Cosmos DB’s Analytical store, access data using Synapse Link for Cosmos DB, and use the Spark to Synapse SQL Connector to move that data into a dedicated SQL pool. We saw how to do this in a workspace using a managed virtual network with data exfiltration protection enabled, meaning this is the largest number of steps necessary.

Click through for product descriptions and step-by-step instructions.

An Introduction to ScriptDOM

Mala Mahadevan gives us an idea of what ScriptDOM is:

I’ve been meaning to get a series of blog posts started on this topic. A Twitter conversation from yesterday finally pushed me to it. Last year, I was tasked with finding a linting tool for the large T-SQL code base we have at work. I looked into several tools – tsqllint, SonarQube, and several others. We ran into similar issues on all of them. Someone else defining rules for us didn’t work.
One tool called it wrong to use more than 3 tables in a query – we had several with 6-7, maybe even more. Another started to point out lack of indexes on temp tables as a problem (the rule was designed for table scripts but worked the same on code). Getting new rules that we wanted – such as not having unnamed primary keys on temp tables (Query Store doesn’t like them) or having our naming standards enforced – meant extra work on someone else’s code. Our rules were custom to our environment. There were generic best practices for sure, such as finding the SELECT * or NOLOCK hints, but those were a small subset of what we needed.
I then started looking for a tool with which I could make a custom linter. That’s when I discovered ScriptDOM, which has been around for a really long time with few people knowing or using it. It took me some time to understand how to put this to use, but after I figured it out it was really easy. Now I have a fairly robust, custom linter in place, written in PowerShell and integrated well into our Azure DevOps build process. It is easy to use and it is owned by us.

Read on to see what ScriptDOM can do and stay tuned to learn more.
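To give a flavor of what that looks like, here’s a minimal sketch of loading ScriptDOM and running a couple of crude checks from PowerShell. The DLL path is hypothetical (the library ships with DacFx and related tooling), and a real linter would use fragment visitors rather than raw token scans – Mala’s series covers those – but it shows how little ceremony is involved:

```powershell
# A minimal sketch: parse a T-SQL batch with ScriptDOM and report findings.
# The DLL path is hypothetical; point it at wherever DacFx lives for you.
Add-Type -Path 'C:\Tools\Microsoft.SqlServer.TransactSql.ScriptDom.dll'

$sql = @'
SELECT TOP 10 *
FROM dbo.Orders WITH (NOLOCK)
ORDER BY OrderDate DESC
'@

$parser   = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql150Parser($true)
$errors   = $null
$fragment = $parser.Parse([System.IO.StringReader]::new($sql), [ref]$errors)

# Syntax problems come back as ParseError objects with line and column info.
$errors | ForEach-Object { "Syntax error at line $($_.Line): $($_.Message)" }

# Every token in the stream is typed, which is what makes custom rules
# practical -- here, a crude scan for NOLOCK hints.
$fragment.ScriptTokenStream |
    Where-Object { $_.Text -eq 'NOLOCK' } |
    ForEach-Object { "NOLOCK hint at line $($_.Line), column $($_.Column)" }
```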

Writing DAX for Paginated Reports

Adam Aspin shows us how to use DAX functions in Power BI paginated reports:

In the previous articles, you learned – or revised – the basics of using DAX as the query language to populate paginated reports with data from Power BI datasets. However, as befitted an introduction, the focus was essentially on getting up and running. Specifically, the only DAX table function you looked at was SUMMARIZECOLUMNS().

Despite its undeniable usefulness, this function is far from the only DAX function that you can use to query Power BI datasets when creating paginated reports. Moreover, as an aggregation function, it has limitations when you wish to deliver complete lists of results. This means, for instance, that you will never find duplicate records in the tabular output from SUMMARIZECOLUMNS() because, by default, it groups data. Consequently, if you wish to use SUMMARIZECOLUMNS() to output data at its most granular level, you will need to include a unique field (or a combination of fields that guarantees uniqueness) – even if these are not used in the report output.

It follows that, to extract data in ways that allow effective report creation, it is essential to learn to use a whole range of DAX table functions. 

Click through for a list of functions and how to use them.
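To make the granularity point concrete, here’s a sketch of the kind of detail-level query you’d write instead: SELECTCOLUMNS() returns one row per row of its source table, duplicates and all, where SUMMARIZECOLUMNS() would group them away. I’ve wrapped it in Invoke-ASCmd from the SqlServer PowerShell module to run against a dataset’s XMLA endpoint; the workspace, dataset, and Sales table are all hypothetical:

```powershell
# A sketch: run a detail-level DAX query against a Power BI dataset via its
# XMLA endpoint. Workspace, dataset, and the Sales table are hypothetical.
$dax = @'
EVALUATE
SELECTCOLUMNS (
    Sales,
    "Order ID", Sales[OrderID],
    "Amount", Sales[Amount]
)
'@

Invoke-ASCmd -Server 'powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace' `
    -Database 'MyDataset' -Query $dax
```

Do note that the XMLA endpoint requires a Premium or Premium Per User workspace.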

Creating Powershell Objects from C#

Robert Cain mixes languages:

In the last two installments of this series, I covered the various ways to create objects using the PSCustomObject. We saw how to create it using the New-Object cmdlet, then how to add your custom properties to it using the Add-Member cmdlet. In the subsequent post we saw how to add new methods to it.

In this post, we’ll cover something new, creating an object based on C# code!

Click through to see how. And also to see the relic of pretended multi-language support, where you have a -Language parameter that can take only one input, and you aren’t going to see another.
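For reference, here’s a minimal sketch of the technique, -Language parameter included; the ServerInfo class is a made-up example:

```powershell
# A minimal sketch of Add-Type with an inline C# type definition.
# CSharp is, in practice, the only -Language value you'll pass.
$source = @'
public class ServerInfo
{
    public string Name { get; set; }
    public int CoreCount { get; set; }

    public string Describe()
    {
        return string.Format("{0} has {1} cores", Name, CoreCount);
    }
}
'@

Add-Type -TypeDefinition $source -Language CSharp

$srv = [ServerInfo]::new()
$srv.Name = 'SQL01'
$srv.CoreCount = 16
$srv.Describe()   # SQL01 has 16 cores
```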

Somebody in the community has created an alternative to support F#, though.

Choosing a Bar Chart Orientation

Amy Esselman says to rotate that chart:

Your lesson on choosing an appropriate visual covers a variety of available bar charts. When should I use a horizontal bar chart, and when should I use a vertical bar chart?

When it comes to the horizontal vs. vertical decision, our founder Cole has an admitted penchant for horizontal bar graphs, for a couple of reasons:

Click through for those reasons why bar charts are good, but stick around for the reasons why column charts are good. Both have their specific places in the world.

Scheduling Azure ML Compute Instance Start-Up and Shut-Down

I have a post correcting a statement I made before:

The single biggest problem I have with compute instances is that there is no auto-stop functionality to them. This is really frustrating because you’re paying for that virtual machine like you would any other, so if you forget to turn it off when you go home for the weekend, it’ll cost you. I wish there were a built-in option to shut off a compute instance after a certain amount of inactivity. Instead, you’ll need to start and stop them manually.

It turns out that you can, and so I wanted to write a post to correct the record.

Click through to see how you can do this. You can bet that I’ve got it enabled now.
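For a taste of what the setup looks like, here’s a hedged sketch using the Azure ML CLI (v2), where start/stop schedules live in the compute instance’s YAML definition. Every name here is hypothetical, and it’s worth double-checking the current computeinstance schema before relying on it:

```powershell
# A sketch: define a weeknight auto-stop schedule for a compute instance
# and create it with the Azure ML CLI (v2). All names are hypothetical.
@'
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: kf-compute
type: computeinstance
size: Standard_DS3_v2
schedules:
  compute_start_stop:
    - action: stop
      trigger:
        type: cron
        expression: "0 19 * * 1-5"   # 7 PM, Monday through Friday
        time_zone: "Eastern Standard Time"
'@ | Set-Content -Path .\compute-instance.yml

az ml compute create --file .\compute-instance.yml `
    --resource-group my-rg --workspace-name my-ml-workspace
```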

Behind the Powershell Pipeline

Jeff Hicks has some new content:

There is an intangible side to PowerShell that can help you understand why you should use PowerShell, in addition to the how. What does it mean to “manage at scale?” Why should you document your code, and what are some best practices? How can you take PowerShell profiles to the next level? These are some of the questions I want to tackle in a new newsletter I’m calling “Behind the PowerShell Pipeline.”

I want to take my years of PowerShell education experience and create genuine premium content. And I want to be able to afford to take the time to develop deep content. This new venture is available now on Substack at jeffhicks.substack.com. Premium content will only be available through a paid subscription. You are welcome to sign up for a free subscription, but that will limit your content.

I’m interested in seeing this succeed, especially given how strong the norm is of giving away technical content. I like that ethos but also want premium content to be viable, as I think that is good for the long-term health of technical content development.

Database Mirroring Compatibility and Availability Groups

Sean Gallardy checks out the past:

Around 2005, mirroring was born. It was an evolution of log shipping, which is taking log backups, moving them around, and restoring them all in an automated fashion to different servers. Mirroring upped that game and created a dedicated network channel between servers (you could only have 1 principal and 1 mirror, so 2 total) so that there wasn’t this funny business of copying and restoring; additionally, it allowed the mirror server to be a highly available copy with automatic failover. Since Microsoft marketing is terrible at naming things, it was originally called “Real Time Log Shipping,” which was then changed to “Mirroring,” and in typical fashion you can find the unofficial “Real Time Log Shipping” name all over the place where it was never updated. (I can’t really blame them here, though; it’s hard to find all the little places you’re putting this moniker and then have some other team tell you to change it all at some way later point.)

Read the whole thing. It’s a fun read, a little sad, and it helps us understand a bit of availability group behavior which might bite the unaware. I will definitely defend Microsoft’s backward-compatibility emphasis: it makes life so much easier for developers than in a lot of other languages and environments. In the R and Python worlds, breaking changes are the norm, meaning that when you update packages, you can expect something to break, and now that “20-minute” package upgrade ticket becomes 3 days of trying to sort out what went wrong.
