Press "Enter" to skip to content

Category: Administration

Reviewing Table-Level Statistics in Postgres

Muhammad Ali explains what information you can find regarding tables in Postgres:

Database monitoring, performance tuning and query optimization are critical operations for maintaining an efficient database system. A key component in PostgreSQL for this purpose is the pg_stat_all_tables view.

pg_stat_all_tables offers real-time statistics on table activities, such as the number of sequential and index scans performed on a table, updates, deletes, inserts, and more. It also offers information on dead tuples, along with vacuum and analyze stats, which enables DB admins to make data-driven decisions. Here’s a table detailing the columns in the pg_stat_all_tables view:

Click through for that table about tables.
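
If you’d like to see the view in action before clicking through, here’s a minimal sketch in Python using psycopg2 (the connection string is a placeholder) that pulls a few of the activity and vacuum columns for user tables:

    # Minimal sketch: surface scan counts, dead tuples, and recent
    # autovacuum/autoanalyze activity from pg_stat_all_tables.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT schemaname, relname, seq_scan, idx_scan,
                   n_dead_tup, last_autovacuum, last_autoanalyze
            FROM pg_stat_all_tables
            WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
            ORDER BY n_dead_tup DESC
            LIMIT 10;
        """)
        for row in cur.fetchall():
            print(row)
    conn.close()

Tables floating to the top of that list are good candidates for a closer look at your vacuum settings.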

Comments closed

SQL Server on Azure Arc Performance Dashboards

Lance Wright shows off a dashboard:

At Ignite 2023, we announced the public preview of performance dashboards for SQL Server enabled by Azure Arc. With these performance dashboards, DBAs and IT Admins now get performance monitoring right from within Azure. No need to set up and log in to another tool or remotely connect to the SQL Server to run performance queries. Let Azure Arc do the heavy lifting so you can get to your performance metrics faster.

This is another tool in the toolbox of DBAs, IT admins, and cloud specialists looking to gain better visibility into their hybrid and multi-cloud workloads. If a SQL Server enabled by Azure Arc meets the requirements for data collection (see “How to enable performance dashboards”), Azure Arc will automatically collect the following types of data from the Dynamic Management Views (DMV) datasets of the SQL Server:

Click through to see what it includes and how to enable it.

Comments closed

Enabling Microsoft Fabric

Tomaz Kastrun continues a series on Microsoft Fabric:

If you have used Power BI services in the past, you will be on board immediately. The outlook is the same as it is with Power BI. You will only need additional credentials to access the services. In general, you will need an Azure subscription, the Power BI service already enabled, and the ability for your organization to enable Fabric with Admin roles.

Click through to see how to enable Microsoft Fabric in your environment.

Comments closed

Benchmarking Memory Usage in Shiny Apps

Ryszard Szymanski tracks memory utilization:

R/Shiny allows you to prototype a working web application quickly and easily. However, with increasing amounts of data, your app may become slow and, in extreme cases, crash due to insufficient memory.

When the worst-case scenario happens, we need to figure out a way to lower the memory usage of our app to avoid those crashes.

A crucial part of optimization efforts is benchmarking how much memory our app is consuming. This allows us to check if the changes we made to the app are indeed moving us in the right direction.

In this step-by-step guide, we will describe how to do that based on an example application.

Read on for the process and how you can diagnose memory problems. H/T R-Bloggers.

Comments closed

Replacing the Workspace Summary with Azure Workbooks

Josephine Bush works around a deprecation notice:

I’m saddened that the Workspace Summary is being deprecated in Log Analytics Workspace. I am trying to reproduce it in workbooks. While it isn’t an exact match, workbooks provide me with enough information to use and share with others.

Click through to see what you could get from the Workspace Summary and how to re-implement a fair amount of that in Azure Workbooks.

Comments closed

Controlling Fallback Behavior in Direct Lake

Sandeep Pawar talks about fallback options:

When you create a Direct Lake semantic model, by default it is in Direct Lake mode, i.e. you will directly query the delta table from the lakehouse/warehouse. This is what we want because the query performance will be very much comparable to the import mode. However, under certain circumstances, the DAX query can fall back to DirectQuery if Direct Lake limitations are hit.

Read on to learn more about circumstances in which this could happen and ways to change the default behavior.

Comments closed

Tracking Inaccessible Azure SQL DB Databases and Customer Key Cycling

Rod Edwards is watching:

This is the first follow-up post from Azure SQL TDE and Customer Keys (BYOK). Microsoft?…your name isn’t down, so you aren’t coming in. (sqlrod.com), which explained how to use Customer Keys with Azure SQL DB (and Managed Instance), and some of the dangerous pitfalls that you can face. We need to know when there may be trouble on the horizon, so key (pun fully intended) to this is monitoring.

Yes, I’m rambling on again about monitoring…but I like monitoring.

Be seeing you, Number Six.
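
As a starting point for that kind of monitoring, here’s a minimal sketch in Python with pyodbc (the server name, login, and driver are placeholders) that polls sys.databases on the logical server and flags anything that is not ONLINE, which is how a database that has lost access to its customer-managed TDE key will generally surface:

    # Minimal sketch: flag Azure SQL databases that are not ONLINE, e.g.
    # ones left inaccessible after a TDE customer key was cycled or revoked.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=master;"
        "UID=monitor_login;PWD=<password>"  # placeholder credentials
    )
    cursor = conn.cursor()
    cursor.execute(
        "SELECT name, state_desc FROM sys.databases WHERE state_desc <> 'ONLINE';"
    )
    for name, state in cursor.fetchall():
        print(f"ALERT: database {name} is {state}")
    conn.close()

Wire the output of something like this into your alerting mechanism of choice and you won’t be the last to know.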

Comments closed

Methods for Authenticating to PostgreSQL

Samab Tariq builds a list:

Authentication is the process of verifying the identity of a user or system attempting to access a database. In the realm of PostgreSQL, authentication serves as the first line of defense, ensuring that only authorized individuals or applications gain entry. As the gateway to sensitive data, robust authentication is imperative, safeguarding against unauthorized access and fortifying the foundation of data protection. In this blog, we delve into the significance of authentication in PostgreSQL, unraveling its critical role in securing valuable information.

PostgreSQL supports various authentication methods to secure access to its database. The exact methods available may depend on the version of PostgreSQL you are using. In this blog, we have mentioned a few of the most-used authentication methods in PostgreSQL.

Read on for the listing and some ideas on how to use the various options.
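
If you want to review which of those methods your own server is actually using, here’s a minimal sketch (Python with psycopg2; the pg_hba_file_rules view requires PostgreSQL 10 or later and, by default, superuser-level privileges) that reads the parsed contents of pg_hba.conf:

    # Minimal sketch: list the authentication rules PostgreSQL loaded
    # from pg_hba.conf, via the pg_hba_file_rules system view.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT line_number, type, database, user_name, address, auth_method
            FROM pg_hba_file_rules
            ORDER BY line_number;
        """)
        for rule in cur.fetchall():
            print(rule)
    conn.close()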

Comments closed

Scheduling Fabric Capacity Pause/Resume with Azure Logic Apps

Soheil Bakhshi doesn’t want to forget to turn off the power at night:

In the previous blog post, I explained Microsoft Fabric capacities, shedding light on diverse capacity options and how they influence data projects. We delved into Capacity Units (CUs), pricing nuances, and practical cost control methods, including manually scaling and pausing Fabric capacity. Now, we’re taking the next step in our Microsoft Fabric journey by exploring the possibility of automating the pause and resume process. In this blog post, we’ll unlock the secrets to seamlessly managing your Fabric Capacity with automation that helps us save time and resources while optimising the usage of data and analytics workloads.

Right off the bat, this is a rather long blog, so I added a bonus section at the end for those who are reading from the beginning to the end. With that, let’s dive in!

To spoil the bonus a little bit, Soheil shows us not only how to turn things on and off on a schedule, but also how to ignore certain days of the week. Read the whole thing to get that.
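
For a sense of what the Logic App’s HTTP actions are doing under the hood, here’s a minimal sketch in Python that calls the Azure Resource Manager suspend/resume endpoints for a Fabric capacity. The subscription, resource group, and capacity names are placeholders, and the api-version is an assumption to verify against the current Microsoft.Fabric REST reference:

    # Minimal sketch: pause or resume a Fabric capacity via the ARM REST API,
    # the same calls a scheduled Logic App would make. Names are placeholders
    # and the api-version is an assumption -- check the Microsoft.Fabric docs.
    import requests
    from azure.identity import DefaultAzureCredential

    SUB, RG, CAPACITY = "<subscription-id>", "<resource-group>", "<capacity-name>"
    API_VERSION = "2023-11-01"  # assumed; confirm before use

    def set_capacity_state(action: str) -> None:
        """action is 'suspend' or 'resume'."""
        token = DefaultAzureCredential().get_token(
            "https://management.azure.com/.default"
        ).token
        url = (
            f"https://management.azure.com/subscriptions/{SUB}"
            f"/resourceGroups/{RG}/providers/Microsoft.Fabric"
            f"/capacities/{CAPACITY}/{action}?api-version={API_VERSION}"
        )
        requests.post(url, headers={"Authorization": f"Bearer {token}"}).raise_for_status()

    set_capacity_state("suspend")  # schedule at night; call "resume" in the morning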

Comments closed