Press "Enter" to skip to content

Author: Kevin Feasel

Writing DAX for Paginated Reports

Adam Aspin shows us how to use DAX functions in Power BI paginated reports:

In the previous articles, you learned – or revised – the basics of using DAX as the query language to populate paginated reports with data from Power BI datasets. However, as befitted an introduction, the focus was essentially on getting up and running. Specifically, the only DAX table function you looked at was SUMMARIZECOLUMNS().

Despite its undeniable usefulness, this function is far from the only DAX function that you can use to query Power BI Datasets when creating Paginated Reports. Moreover, it has limitations when you wish to deliver complete lists of results as it is an aggregation function. This means, for instance, that you will never find duplicate records in the tabular output from SUMMARIZECOLUMNS() as, by default, it is grouping data. Alternatively, if you wish to use SUMMARIZECOLUMNS() to output data at its most granular level, you will need to include a unique field (or a combination of fields that guarantee uniqueness) – even if these are not used in the report output.

It follows that, to extract data in ways that allow effective report creation, it is essential to learn to use a whole range of DAX table functions. 

Click through for a list of functions and how to use them.
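To make the grouping behavior concrete, here is a minimal sketch against a hypothetical Sales table; Sales[OrderID] stands in for whatever unique key your own model has:

EVALUATE
SUMMARIZECOLUMNS (
    Sales[OrderID],        -- a unique key forces one row per sale
    Sales[CustomerName],
    Sales[OrderDate],
    "Line Total", SUM ( Sales[LineAmount] )
)

-- Without Sales[OrderID], rows sharing a customer and date would be
-- grouped into a single output row.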

Creating Powershell Objects from C#

Robert Cain mixes languages:

In the last two installments of this series, I covered the various ways to create objects using the PSCustomObject. We saw how to create it using the New-Object cmdlet, then how to add your custom properties to it using the Add-Member cmdlet. In the subsequent post we saw how to add new methods to it.

In this post, we’ll cover something new, creating an object based on C# code!

Click through to see how. You’ll also see the relic of pretended multi-language support: a -Language parameter that accepts exactly one value, with no sign that another is coming.
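For a concrete picture, here is a minimal sketch using Add-Type, the standard cmdlet for compiling inline C# into a PowerShell session; the Employee class and its members are made up for illustration:

# Define a C# class inline and compile it into the current session.
$source = @'
public class Employee
{
    public string Name { get; set; }
    public int Id { get; set; }
    public string Describe()
    {
        return string.Format("{0}: {1}", Id, Name);
    }
}
'@

# -Language is the parameter mentioned above; CSharp is effectively
# the only value you can pass these days.
Add-Type -TypeDefinition $source -Language CSharp

$emp = New-Object Employee
$emp.Name = 'Anna'
$emp.Id = 42
$emp.Describe()    # 42: Anna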

Somebody in the community has created an alternative to support F#, though.

Choosing a Bar Chart Orientation

Amy Esselman says to rotate that chart:

Your lesson on choosing an appropriate visual covers a variety of available bar charts. When should I use a horizontal bar chart, and when should I use a vertical bar chart?

When it comes to the horizontal vs. vertical decision, our founder Cole has an admitted penchant for horizontal bar graphs, for a couple of reasons:

Click through for those reasons why bar charts are good, but stick around for the reasons why column charts are good as well. Both have their specific places in the world.

Scheduling Azure ML Compute Instance Start-Up and Shut-Down

I have a post correcting a statement I made before:

The single biggest problem I have with compute instances is that there is no auto-stop functionality to them. This is really frustrating because you’re paying for that virtual machine like you would any other, so if you forget to turn it off when you go home for the weekend, it’ll cost you. I wish there were a built-in option to shut off a compute instance after a certain amount of inactivity. Instead, you’ll need to start and stop them manually.

It turns out that you can, and so I wanted to write a post to correct the record.

Click through to see how you can do this. You can bet that I’ve got it enabled now.
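If you prefer to script it rather than click through the studio, something like this sketch with the v2 Python SDK (azure-ai-ml) should set a stop schedule; the class and parameter names come from the SDK as I understand it and are worth verifying against your version, and the workspace details and instance name are placeholders:

from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ComputeInstance,
    ComputeSchedules,
    ComputeStartStopSchedule,
    RecurrencePattern,
    RecurrenceTrigger,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Stop the instance at 18:00 every weekday, so a forgotten VM
# doesn't keep billing all weekend.
stop_schedule = ComputeStartStopSchedule(
    action="stop",
    trigger=RecurrenceTrigger(
        frequency="week",
        interval=1,
        schedule=RecurrencePattern(
            hours=[18],
            minutes=[0],
            week_days=["monday", "tuesday", "wednesday", "thursday", "friday"],
        ),
    ),
)

instance = ComputeInstance(
    name="my-compute-instance",
    size="Standard_DS3_v2",
    schedules=ComputeSchedules(compute_start_stop=[stop_schedule]),
)
ml_client.compute.begin_create_or_update(instance)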

Database Mirroring Compatibility and Availability Groups

Sean Gallardy checks out the past:

Around 2005, mirroring was born. It was an evolution on log shipping, which is taking log backups, moving them around, and restoring them all in an automated fashion to different servers. Mirroring upped that game and created a dedicated network channel between servers (you could only have 1 principal and 1 mirror, so 2 total) so that there wasn’t this funny business of copying and restoring; additionally, it allowed the mirror server to be a highly available copy with automatic failover. Since Microsoft marketing is terrible at naming things, it was originally called “Real Time Log Shipping,” which was then changed to “Mirroring,” and in typical fashion you can find the unofficial “Real Time Log Shipping” name all over the place where it was never updated. (I can’t really blame them here, though; it’s hard to find all the little places you’re putting this moniker and then have some other team tell you to change it all at some way later point.)

Read the whole thing. It’s a fun read, a little sad, and it helps us understand a bit of availability group behavior which might bite the unaware. I will definitely defend Microsoft’s backward-compatibility emphasis: it makes life so much easier for developers than in a lot of other languages and environments. In the R and Python worlds, breaking changes are the norm, meaning that when you update packages, you can expect something to break, and that “20-minute” package upgrade ticket becomes 3 days of trying to sort out what went wrong.

Behind the Powershell Pipeline

Jeff Hicks has some new content:

There is an intangible side to PowerShell that can help you understand why you should use PowerShell, in addition to the how. What does it mean to “manage at scale?” Why should you document your code, and what are some best practices? How can you take PowerShell profiles to the next level? These are some of the questions I want to tackle in a new newsletter I’m calling “Behind the PowerShell Pipeline.”

I want to take my years of PowerShell education experience and create genuine premium content. And I want to be able to afford to take the time to develop deep content. This new venture is available now on Substack at jeffhicks.substack.com. Premium content will only be available through a paid subscription. You are welcome to sign up for a free subscription, but that will limit your content.

I’m interested in seeing this succeed, especially given the strong norm of giving away technical content for free. I like that ethos, but I’d also like to see premium content remain viable, as I think that is good for the long-term health of technical content development.

Negative Blocking Session IDs

Bob Dorr explains what those negative session IDs actually mean:

SQL Server may report a blocking session id as a negative integer value. SQL Server uses negative session ids to indicate special conditions.

Click through for the table. Bob also includes information on -5, the “any task/session can release the latch” scenario. The post covers the latches themselves as well and is worth keeping around in case you run into an issue at some point.
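If you want to spot these on your own system, a simple query against the standard DMVs will do; this sketch just filters for the negative values:

SELECT
    r.session_id,
    r.blocking_session_id,   -- negative values indicate special conditions
    r.wait_type,
    r.wait_resource,
    r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id < 0;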

Monitoring Power BI Queries with Log Analytics

Chris Webb continues a series on using Log Analytics:

It’s actually very easy to build a simple KQL query to look at query activity on your datasets: you just need to look at the QueryEnd event (or operation, as it’s called in Log Analytics), which is fired when a query finishes running. This event gives you all the information you need: the type of query (DAX or MDX), the duration, the CPU time, the query text and so on. The main challenge is that while you also get the IDs of the report and visual that generated the query, you don’t get the names of the report or visual. I wrote about how to get a list of visual and report IDs here and here, but how can you use that information?

Read on to see how.
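To give a flavor of the KQL involved, here is a rough sketch; it assumes the PowerBIDatasetsWorkspace table the integration populates and the ApplicationContext JSON layout Chris describes, so verify the names against your own workspace:

PowerBIDatasetsWorkspace
| where OperationName == "QueryEnd"
| extend Context = parse_json(ApplicationContext)
| extend ReportId = tostring(Context.Sources[0].ReportId),
         VisualId = tostring(Context.Sources[0].VisualId)
| project TimeGenerated, DurationMs, CpuTimeMs, EventText, ReportId, VisualId
| sort by DurationMs desc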

Azure Synapse Analytics Integration Points

Warner Chaves takes us through several integration points with Azure Synapse Analytics:

Azure Stream Analytics allows for in-flight querying of streaming data from Blob storage, Data Lake Storage, IoT Hub or Event Hubs. The querying is done through an easily adoptable SQL language and it really speeds up the development of a streaming solution.

The nice thing here is that Stream Analytics allows the use of a Synapse SQL Pool table as the target for the results of the streaming query. So, this is another way to do near real-time analytics by passing data from a streaming source through a Stream Analytics job and into a Synapse table. You could do this to pre-aggregate data on the fly, score data in real-time, perform real-time calculations over specific time or event windows, etc.

Click through for several examples of this.
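For a flavor of what that looks like, here is a hedged sketch of a Stream Analytics query; [device-events] and [synapse-sqlpool] are placeholder aliases for an input and a Synapse SQL pool output defined on the job, and the column names are made up for illustration:

SELECT
    deviceId,
    AVG(temperature) AS avg_temperature,
    System.Timestamp() AS window_end
INTO [synapse-sqlpool]
FROM [device-events] TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(minute, 5)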

Identifying R Package and Function Use in GitHub Repos

Bryan Shalloway does a search:

TLDR: funspotr provides helpers for spotting the functions and packages in R and R Markdown files and associated GitHub repositories. See Examples for catalogues of the functions/packages used in posts by Julia Silge, David Robinson, and others.

This is an interesting project. I’d imagine that, with enough different code bases, you could develop a programming profile and understand people’s strengths across a variety of characteristics: which functions they use, what they choose when alternatives exist (e.g., the “functional-friendly” map versus the *apply series versus loops), and how familiar they are with certain packages. I could see this becoming an advanced technique for figuring out what you should learn next: you clearly have familiarity with packages A, B, and C, but it appears you don’t know about E or K, and you might learn them to replace some of the work you’re doing with C.
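As a quick taste, here is a minimal sketch; spot_pkgs() and spot_funs() follow the package README as I read it, so treat the exact signatures as assumptions to verify, and the file path is a placeholder:

library(funspotr)

# Which packages does a script load, and which functions does it call
# (and from which package does each come)?
spot_pkgs(file_path = "analysis.R")
spot_funs(file_path = "analysis.R")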
