Press "Enter" to skip to content

Author: Kevin Feasel

Backup Options for MySQL

Lukas Vileikis continues a series on backing up MySQL. Part 2 involves Percona XtraBackup:

As already stated above, Percona XtraBackup, developed by Percona, is one of the primary backup offerings for MySQL and Percona database administrators. The tool is an open-source backup utility that does not lock our databases during the backup processes it performs. Percona says that their tool can provide automatic verification of backups that have been taken, offer fast backup and restore times, and, above all, it’s supported by their award-winning consulting services, which help make sure that our data and its backups are in safe hands day and night.
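
To make the non-locking behavior concrete, here is a minimal sketch of a full backup with XtraBackup; the user, password, and target directory are placeholders, and exact flags can vary by version:

    # Take a full backup without locking InnoDB tables (credentials and paths are placeholders)
    xtrabackup --backup --user=backup_user --password='...' --target-dir=/data/backups/full

    # Apply the redo log so the backup is consistent and ready for restore
    xtrabackup --prepare --target-dir=/data/backups/full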

Part 3 covers mysqlpump:

mysqlpump is a backup utility that is used via the command-line interface. The tool is very similar to mysqldump in that it provides us with the ability to take logical backups, but it is also different at the same time – the goal of mysqlpump is to be an extendable, parallel-supporting replacement for mysqldump. In their blog from 2015, the MySQL team said that one of the primary aims of introducing mysqlpump was to not be forced to implement legacy functionality that is provided by mysqldump.
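
As a rough illustration of that parallel design, a mysqlpump run can spread the dump across several threads; the thread count, credentials, and output file below are placeholder values:

    # Dump all databases using four parallel threads (counts and names are placeholders)
    mysqlpump --user=root --password --default-parallelism=4 --all-databases > full_dump.sql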

Read on to see how both of these work.

Azure Data Studio November Update

Timi Oshin and Erin Stellato have an update for us:

In this release of Azure Data Studio, we have exciting news to share across several of our core features and extensions. The first is the announcement of the general availability of Table Designer and Query Plan Viewer. We would like to extend a huge thank you to our engineering teams who have worked tirelessly over the past few months on improvements to these features. We would also like to thank the MVPs and community members who provided feedback on these features. We are grateful for continued engagement from users as we work to make Azure Data Studio the tool of choice for cloud database management across multiple platforms.

There’s a lot in this release, so check out the full changelog.

Data Scaling Thoughts for Power BI

Paul Turley starts a series:

The Power BI service can handle a lot of data, but just because your data sources are big doesn’t mean that your Power BI datasets will also take up a lot of space. If the data model is designed efficiently, even terabytes of source data will usually translate into megabytes, or at most a few gigabytes, of dataset storage. As the industry has largely made the transition from on-prem SQL Server Analysis Services and AAS tabular models to Power BI datasets in Premium capacity, the size limits in the cloud service are notable. The following reference chart from the Microsoft Learn docs shows that a P1 Premium dedicated capacity is limited to 25 GB per dataset. That’s a lot, but there are Premium capacity SKUs that can handle up to 400 GB of compressed data in an in-memory data model.

Click through for Paul’s introductory thoughts and stay tuned for part 2.

Troubleshooting Caching in Shiny

Thomas Williams illuminates us on the caching process:

Caching in R Markdown is a valuable step to get your app, report or visualisation more production-ready. There are one or two potential issues to watch out for, especially when deploying a cache-enabled R Markdown file to a Shiny server – in this post I’ll go over some of these “gotchas”, and how you could address each one.
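
For context, chunk-level caching in R Markdown is switched on per chunk. This is a generic sketch with a made-up chunk name and computation, not one of the post’s specific gotchas:

    ```{r expensive-model, cache=TRUE}
    # Re-runs only when the chunk's code or cache options change
    model <- lm(mpg ~ wt + hp, data = mtcars)
    summary(model)
    ```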

Click through for those three gotchas.

Tips for Large Table Data Archival

Aaron Bertrand follows up on a prior post:

As soon as you realize your growth rates are higher than expected, you need to plan to buy or allocate more disk space. There is no way around this—more data means more disk. You can delay the inevitable for a little bit with better compression, but this is not a long-term fix, and it can impact query performance in different ways (trading CPU for I/O).

Once more disk is in place, you can plan your growth better.
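
As a starting baseline for that planning (a generic sketch, not necessarily Aaron’s approach), a query like this shows each file’s current size and autogrowth setting:

    -- Current size and autogrowth settings per database file
    SELECT DB_NAME(database_id) AS database_name,
           name AS logical_file_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    ORDER BY size_mb DESC;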

Click through for some guidance on how to plan that growth.

“The Function Requested Is Not Supported” Errors on Availability Groups

David Fowler troubleshoots an issue:

Checking the logs on the secondary, it was littered with ‘Database Mirroring login attempt failed with error: ‘Connection handshake failed. An OS call failed: (80090302) 0x80090302(The function requested is not supported).’ messages. The primary server wasn’t able to authenticate with the secondary, but why? Everything looked ok as far as I could see.
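
If you hit the same message, one generic first check (not necessarily David’s eventual fix) is to confirm how the mirroring endpoint authenticates on each replica:

    -- Diagnostic sketch: endpoint authentication mode on each replica
    SELECT name, role_desc, state_desc, connection_auth_desc
    FROM sys.database_mirroring_endpoints;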

Click through for the fruits of David’s labor.

Transaction Log File Autogrowth in SQL Server 2022

William Assaf mentions a welcome change to SQL Server 2022:

Starting with SQL Server 2022, transaction log file growth events up to 64 MB in size can benefit from instant file initialization (IFI). As usual, the transaction log is otherwise unable to benefit from instant file initialization. 

This should be a big performance improvement if your transaction log files unexpectedly grow. Of course, you should try to avoid autogrowth events altogether. 

The prior default of 10% autogrowth has led to so many problems over the years. I’d like new database files (MDF and NDF) to have a similar default as well.
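
If you want to set a fixed increment yourself, something along these lines (database and logical file names are placeholders) swaps percentage growth for the 64 MB size that can use IFI in SQL Server 2022:

    -- Replace percentage autogrowth with a fixed 64 MB increment (names are placeholders)
    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 64MB);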
