Press "Enter" to skip to content

Category: Administration

Azure Database for PostgreSQL Replicas

Gauri Mahajan takes us through replica creation in Azure Database for PostgreSQL:

Azure Database for PostgreSQL is Azure’s offering of the open-source Postgres database. Many databases and data warehouses are derived from or compatible with Postgres, and read replicas are often employed when migrating from Postgres to one of those other flavors. The replicas are read-only, since replication is one-way from the master database to the replicas, and they reduce the load on the primary transactional database in production environments. Replicas are typically used as migration sources, as sources for reporting and ad-hoc analytics, and for other purposes. Let’s go ahead and learn how to create and manage read replicas in Azure Database for PostgreSQL.

Click through for the process.


Accessing Network Shares from SQL Server

Daniel Hutmacher engages in chicanery:

If you use a local service account for your SQL Server service, the server won’t automatically have permissions to access other network resources like UNC paths. Most commonly, this is needed to be able to perform backups directly to a network share.

Using a domain account as your SQL Server service account will allow the server to access a network share on the same domain, but if the network share is not on your domain, like an Azure File Share, you need a different solution.

There’s a relatively easy way to make all of this work, though.

Click through to see how, as well as several methods to make it work within SQL Server.
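
Once the share is reachable by the service account, the backup itself is ordinary T-SQL against a UNC path. Here is a minimal sketch with made-up share and database names; getting the service account access to the share in the first place is what Daniel’s post covers:

    -- Hypothetical share and database names; the SQL Server service account
    -- must already be able to reach the share.
    BACKUP DATABASE [SalesDB]
    TO DISK = N'\\fileserver01\sqlbackups\SalesDB_full.bak'
    WITH COMPRESSION, CHECKSUM, INIT;

    -- Confirm the backup on the share is readable.
    RESTORE VERIFYONLY
    FROM DISK = N'\\fileserver01\sqlbackups\SalesDB_full.bak';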


Receiving Notifications on Cosmos DB 429 Errors

Hasan Savran wants to remain in the loop:

Developers like to know when things go wrong in applications. It is an easy and simple solution to send an email when a bad error occurs. Things can go wrong easily in Cosmos DB, and one of the most common errors you will get from it is the “Request rate too large (429)” exception. This error says that you do not have enough request units to run a query, and it usually occurs at peak times. The usual cause of 429 errors is the configuration of the Request Units setting: you need to scale up your application or optimize your queries.

It takes more time to retrieve data from Cosmos DB when error 429 occurs. You should get a notification when this happens, but you do not want an email every single time either; 429s on 1-5% of requests are acceptable. You can always open the Cosmos DB monitoring tools and keep an eye on it, or you can create Cosmos DB Alerts to get emails.

Click through for a demonstration of how to use Cosmos DB Alerts.


Contrasting In-Place and Side-by-Side Upgrades for SQL Server

Chad Callihan is not the biggest fan of in-place upgrades:

How many SQL Server upgrades have you been a part of? Was there much discussion about whether to do an in-place upgrade or a side-by-side upgrade? I’ve been part of both, and to me there is only one safe way to perform upgrades. Let’s go over some pros and cons of each and see if you come to the same conclusion.

I did kind of spoil the answer, but click through to understand why one is so much riskier than the other.


E-Mailing a DBA upon Object Creation in System Databases

Thomas Williams has another script for us:

Next on the list when commissioning a SQL Server is to create a trigger in the master system database to send an e-mail to the DBA when objects are created in that database.

In general I’d expect this never to happen – but if it did, I’d want to follow up. It’s possible to go even further and prevent creating objects in the master system database completely.

My experience is that about 5% of the time, people intentionally put objects into the master database, and 95% of the time, I…err, they…forgot to switch databases before running the script.
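
The general shape of such a trigger is a database-scoped DDL trigger in master that calls Database Mail. The following is a simplified sketch rather than Thomas’s script, and the mail profile and recipient are placeholders:

    USE master;
    GO
    -- Notify the DBA when common object types are created in master.
    CREATE TRIGGER trg_notify_object_created_in_master
    ON DATABASE
    FOR CREATE_TABLE, CREATE_PROCEDURE, CREATE_VIEW, CREATE_FUNCTION
    AS
    BEGIN
        DECLARE @event xml = EVENTDATA();
        DECLARE @body nvarchar(max) =
            N'Object created in master: '
            + @event.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)')
            + N' by ' + ORIGINAL_LOGIN();

        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = N'DBA Mail Profile',  -- placeholder profile
            @recipients   = N'dba@example.com',   -- placeholder recipient
            @subject      = N'Object created in the master database',
            @body         = @body;
    END;
    GO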


SQL Server Monitoring via Zabbix

Reitse Eskens digs into using Zabbix to monitor SQL Server:

In one of the projects I’m working on, we needed to have some sort of monitoring solution for SQL Server, but there wasn’t budget for a commercial monitoring solution. There’s a small number of freeware, open-source solutions, but these are all difficult to get working. In this blog I’ll show you what Zabbix offers by default and what you can add yourself.

I’m not the biggest fan of Zabbix, but if it’s what you have, better to use the tools you have than not.
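
As an example of the kind of custom item you can add yourself, a Zabbix ODBC check or user parameter can poll a small T-SQL query and alert on the result. The query below is an illustration of the idea, not part of Zabbix’s built-in SQL Server template:

    -- Count sessions that have been blocked for more than 30 seconds;
    -- a Zabbix trigger can fire when this returns anything above zero.
    SELECT COUNT(*) AS long_blocked_sessions
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0
      AND wait_time > 30000;  -- wait_time is in milliseconds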


Dealing with Non-Yielding Schedulers

Sean Gallardy breaks up the party:

One of the most common items that will cause a memory dump in SQL Server is a non-yielding scheduler (generally referred to as NYS). What the heck does that mean? Why would it cause a memory dump? Is there anything that can be investigated? Good questions, let’s take a look.

Read on to learn what these are, why they’re not something you want to deal with on a regular basis, and how you can get more information out of the dump file about what happened. That information will also help Microsoft support staff diagnose and correct the underlying issue (if possible).
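
As a quick first check, SQL Server keeps a list of the dump files it has produced in a DMV, which at least tells you when and how often dumps (non-yielding scheduler or otherwise) have been generated; the real analysis still happens in the dump file itself:

    -- List memory dumps generated by this instance, newest first.
    SELECT [filename],
           creation_time,
           size_in_bytes / 1048576 AS size_mb
    FROM sys.dm_server_memory_dumps
    ORDER BY creation_time DESC;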


Sending an E-Mail upon Database Creation

Thomas Williams has a job for SQL Server:

As part of commissioning a SQL Server, I create a server trigger to send an e-mail when a new database is created (adapted from https://www.mssqltips.com/sqlservertip/2864/email-alerts-when-new-databases-are-created-in-sql-server/), and a SQL Agent alert which e-mails when a database is restored (inspired by Jeremy Dearduff’s comment at https://www.brentozar.com/archive/2017/06/tracking-restores-hard/#comment-2446362).

After receiving the e-mail I can follow up and include the database in an inventory. See below for the trigger and alert scripts – feel free to use these as a basis for your own monitoring and inventory.

Click through for the script, as well as an important disclaimer.
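
For a rough idea of the trigger half of this, here is a simplified sketch (not Thomas’s script; the mail profile and recipient are placeholders). The restore notification is handled separately with a SQL Agent alert, since a restore won’t fire this trigger:

    -- Server-scoped DDL trigger: e-mail the DBA when any database is created.
    CREATE TRIGGER trg_notify_database_created
    ON ALL SERVER
    FOR CREATE_DATABASE
    AS
    BEGIN
        DECLARE @event xml = EVENTDATA();
        DECLARE @body nvarchar(max) =
            N'Database '
            + @event.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)')
            + N' was created by ' + ORIGINAL_LOGIN();

        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = N'DBA Mail Profile',  -- placeholder profile
            @recipients   = N'dba@example.com',   -- placeholder recipient
            @subject      = N'New database created',
            @body         = @body;
    END;
    GO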


Projecting Disk Space Available

Constantine Kokkinos predicts the future:

The first question I wanted to model out was a bigger issue with on-premises databases – when are we going to run out of storage?

Back in the day I’d cheat with msdb backups, comparing compressed sizes to actuals, and moving on. However, I don’t have a historical reference for Stack Overflow… so what can I do?

Taking a look at the tables, we see a commonality in many of them – CreationDate! It looks like the rows are faithfully stamped when they are created.

At the end, Constantine hits on something we tend to forget: most operations in life aren’t quite linear. We often get lucky in that certain stretches are close enough to linear that we can model them that way, but even in this dataset, you can see the effects of polynomial growth slowly build up. Still, this is a good way of taking us through what an analysis and projection can look like.
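
To give a flavour of the approach, here is a rough sketch of bucketing rows by CreationDate month so you have a growth series to fit a trend against; the table name follows the Stack Overflow schema Constantine is using:

    -- Rows created per month in the Posts table; the output is the series
    -- you would fit a linear or polynomial trend to for a storage projection.
    SELECT DATEFROMPARTS(YEAR(CreationDate), MONTH(CreationDate), 1) AS month_start,
           COUNT_BIG(*) AS rows_created
    FROM dbo.Posts
    GROUP BY DATEFROMPARTS(YEAR(CreationDate), MONTH(CreationDate), 1)
    ORDER BY month_start;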


Changing the Slow Query Log Threshold in RDS

John McCormack wants to know about those slow queries:

The slow query log will record all queries which are above the threshold level. The default value is 10 (seconds) but you can set it higher or lower depending on your requirements. It is useful for finding slow queries and allows you to pick out candidates for tuning.

If you set the threshold too low, it can increase I/O overhead on your instance and use a lot of valuable disk space. If you set it too high, it might not capture enough useful information.

This is a setting in Amazon Relational Database Service (RDS) and mimics functionality in MySQL.
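
On the MySQL side, the relevant server variables look like this; note that on RDS you change them through a custom parameter group rather than with SET GLOBAL (shown here only for a self-managed server):

    -- Check whether the slow query log is enabled and what the current threshold is.
    SHOW VARIABLES LIKE 'slow_query_log';
    SHOW VARIABLES LIKE 'long_query_time';

    -- On a self-managed MySQL server you could lower the threshold directly:
    -- SET GLOBAL long_query_time = 5;
    -- On RDS, set slow_query_log and long_query_time in the DB parameter group instead.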
