Press "Enter" to skip to content

Month: November 2022

Fitting 10 Pounds of Data into a 5-Pound Power BI

Chris Webb does some compacting:

Power BI can handle large data volumes, but just how much data can you load into Power BI? Can Power BI handle big data? How big is “big” anyway? These are questions you may have when you’re starting out on a new Power BI project and the answers can be hard to find. Indeed the answer in most cases is “it depends”, which isn’t very helpful. In this post I will try to explain the various limits on Power BI dataset size and how you can know if you’re likely to hit them.

Click through to learn more about these limitations.


Disentangling a Diverging Bar Chart

Simon Rowe tackles a problem:

Have you ever found yourself looking at a graph for the first time and felt immediately overwhelmed by the sheer volume of information presented?  It can leave you wondering not only how to understand it, but also what decisions led to the creation of such a complex visual in the first place.  

Nobody sets out to make a confusing communication. Most dashboards or visuals start out quite simple…but over time they may be leveraged to do more, provide more info, and support more requirements all at once. After all that, by the time you encounter it for the first time, it’s thoroughly impenetrable.  

Read on for one extreme example of scope creep and how to draw out the most important messages.


Making Sets and Lists with KQL

Robert Cain is making a list and checking it twice:

In previous posts, I’ve mentioned using certain functions and operators to investigate conditions in your system. Naturally you’ll need to create lists of those items, based on certain conditions.

For example, you may want to get a list of the counters associated with an object. Or, you may want to get a list of computers where a certain condition is met.

In this article we’ll see how to get those lists using the Kusto make_set and make_list functions.

Read on to see how these two functions work, as well as their conditional brethren.
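As a rough sketch of what these queries can look like (the Perf and Heartbeat tables and their columns are a standard Log Analytics-style schema assumed for illustration, not examples taken from Robert's post):

// make_set returns the distinct CounterName values for each ObjectName;
// make_list would keep duplicates as well
Perf
| summarize Counters = make_set(CounterName) by ObjectName

// the _if variants (the "conditional brethren") only aggregate rows matching a predicate
Heartbeat
| summarize LinuxComputers = make_list_if(Computer, OSType == "Linux")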


Backup Options for MySQL

Lukas Vileikis continues a series on backing up MySQL. Part 2 involves Percona XtraBackup:

As already stated above, Percona XtraBackup is one of the primary offerings for MySQL & Percona database administrators developed by Percona. The tool is an open-source backup utility that does not lock our databases during the backup processes it performs. Percona says that their tool can provide automatic verification of backups that have been taken, offer fast dumping and restore times, and, above all, is supported by their award-winning consulting services, helping us make sure that our data and its backups are in safe hands by day and by night.

Part 3 covers mysqlpump:

mysqlpump is a backup utility that is used via the command-line interface. The tool is very similar to mysqldump in that it provides us with the ability to take logical backups, but it is also different at the same time: the goal of mysqlpump is to be an extendable, parallel-supporting replacement for mysqldump. In their blog from 2015, the MySQL team said that one of the primary aims of introducing mysqlpump was to not be forced to implement legacy functionality that is provided by mysqldump.

Read on to see how both of these work.
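As a minimal sketch of invoking each tool (paths, parallelism values, and the lack of explicit credentials are placeholder choices for illustration, not settings from Lukas's posts):

# physical, non-locking backup of a running server into a target directory
# (connection credentials can come from ~/.my.cnf or --user/--password options)
xtrabackup --backup --target-dir=/data/backups/full/
# "prepare" the backup so it is consistent and ready to restore
xtrabackup --prepare --target-dir=/data/backups/full/

# logical dump of all databases using four parallel worker threads
mysqlpump --default-parallelism=4 --all-databases > all_databases.sql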


Table Designer and Query Plan Viewer in Azure Data Studio

Erin Stellato announces two features in GA for Azure Data Studio:

Azure Data Studio provides users with the ability to complete operational tasks such as deploying a database, creating tables, and writing queries.  A logical next step for many users is troubleshooting or improving query performance, a task that is now easier with the general availability of Query Plan Viewer.  From the query editor, you now have the ability to display the estimated or actual plan for a query or set of queries.  This graphical plan provides a visual map to understand the steps the SQL Server engine takes when it retrieves or modifies data.  Saved plans can also be viewed in Azure Data Studio, and for enhanced troubleshooting, two plans can be compared to understand differences and more easily identify problems. 

The lack of a good execution plan viewing tool was a major limitation in Azure Data Studio (and the SQL Sentry plugin wasn’t a good fix even when it was available).


Data Scaling Thoughts for Power BI

Paul Turley starts a series:

The Power BI service can handle a lot of data, but just because your data sources are big doesn’t mean that your Power BI datasets will also take up a lot of space. If the data model is designed efficiently, even terabytes of source data will usually translate into megabytes or, at most, a few gigabytes of dataset storage. As the industry has largely made the transition from on-prem SQL Server Analysis Services and AAS tabular models to Power BI datasets in Premium capacity, the size limits in the cloud service are notable. The following reference chart from the Microsoft Learning docs shows that a P1 Premium dedicated capacity is limited to 25 GB per dataset. That’s a lot, but there are Premium capacity SKUs that can handle up to 400 GB of compressed data in an in-memory data model.

Click through for Paul’s introductory thoughts and stay tuned for part 2.


Azure Data Studio November Update

Timi Oshin and Erin Stellato have an update for us:

In this release of Azure Data Studio, we have exciting news to share across several of our core features and extensions. The first is the announcement of the general availability of Table Designer and Query Plan Viewer. We would like to extend a huge thank you to our engineering teams who have worked tirelessly over the past few months on improvements to these features. We would also like to thank the MVPs and community members who provided feedback on these features. We are grateful for continued engagement from users as we work to make Azure Data Studio the tool of choice for cloud database management across multiple platforms.

There’s a lot in this release, so check out the full changelog.


Troubleshooting Caching in Shiny

Thomas Williams sheds some light on the caching process:

Caching in R Markdown is a valuable step to get your app, report or visualisation more production-ready. There are one or two potential issues to watch out for, especially when deploying a cache-enabled R Markdown file to a Shiny server – in this post I’ll go over some of these “gotchas”, and how you could address each one.

Click through for those three gotchas.
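For context, "cache-enabled" here means turning on knitr's chunk caching in the R Markdown file; a minimal example of such a chunk (the chunk label and computation are invented for illustration) looks like this:

```{r expensive-summary, cache=TRUE}
# with cache=TRUE, this chunk's results are written to a cache folder and
# reused on later renders until the chunk's code changes
summary_data <- aggregate(len ~ dose, data = ToothGrowth, FUN = mean)
summary_data
```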
