Press "Enter" to skip to content

Category: Administration

Attributing Redshift Costs to Users

Jason Pedreza, et al., show how you can break down query utilization by user in an Amazon Redshift database:

In its simplest form, cost attribution can be determined from the amount of storage assigned to individual objects, using object ownership to map objects to teams. But the downside of this approach is that it doesn’t give a true picture of resource usage. For example, let’s say Team 1 has a total object size of 1 TB, whereas Team 2 has 100 GB in total. Team 1’s members run 10 queries daily, and Team 2 runs 1,000 queries per day. Clearly, Team 2 uses more resources than Team 1.

The Amazon Redshift RA3 architecture allows you to pay for compute and data warehouse storage capacity separately, so storage alone doesn’t reflect the resources teams use and isn’t enough for cost attribution.

Click through to see how.
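
As a hedged illustration of what query-level attribution can look like (this is my sketch, not the authors’ exact approach), something like the following sums recent runtime per user from the stl_query system table, which retains a few days of query history; the seven-day window is arbitrary:

    -- Attribute recent query runtime to users from Redshift's query history.
    SELECT u.usename AS user_name,
           COUNT(*) AS query_count,
           SUM(DATEDIFF(second, q.starttime, q.endtime)) AS total_runtime_sec
    FROM stl_query q
    JOIN pg_user u ON u.usesysid = q.userid
    WHERE q.starttime >= DATEADD(day, -7, GETDATE())
    GROUP BY u.usename
    ORDER BY total_runtime_sec DESC;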

Learn, Test, Forget

Erik Darling shows us the bright side of a memory equipped to forget:

I often have to talk clients down from odd ledges. At this point, there’s about 20 years of advice about SQL Server out there, and for various reasons received wisdom often gets migrated en masse, whether it’s still applicable or not.

Outdated trace flags and settings are pretty common, and don’t get me started on coding practices.

But I get it — that stuff came along with advice that still makes sense — like parallelism and memory settings, separating off tempdb, etc.

Unfortunately, it often leads people to the “it used to be fast so it should always be fast” trap based on what used to work, rather than what makes sense currently. Or even what the current problem is.

Read the whole thing. Also, in a rare double-quotation, I want to cite this line from Erik:

“I don’t know, xxVanWilderFan420xx said they were bad and we should avoid them.”

Never fully trust a person going by the pseudonym xxVanWilderFan420xx. However, if it were xxSolomonKaneFan420xx, 100% believe.

Azure Database for PostgreSQL Replicas

Gauri Mahajan takes us through replica creation in Azure Database for PostgreSQL:

Azure Database for PostgreSQL is Azure’s offering of the open-source Postgres database. Because many databases and data warehouses are derived from Postgres, read replicas are often employed when migrating from Postgres to a Postgres-compatible database or data warehouse. The replicas are read-only, since replication is one-way from the primary database to the replicas, and they serve to decrease the load on the primary transactional database in production environments. Replicas are typically used as migration sources, as reporting and ad hoc analytics sources, and for other purposes. Let’s go ahead and learn how to create and manage read replicas in Azure Database for PostgreSQL.

Click through for the process.
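
Once a replica is up, you can sanity-check replication from SQL itself, assuming your server exposes the standard Postgres views (a sketch of my own, not from the article):

    -- On the primary: list connected replicas and how far behind they are.
    SELECT client_addr, state,
           pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;

    -- On a replica: returns true, confirming read-only recovery mode.
    SELECT pg_is_in_recovery();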

Accessing Network Shares from SQL Server

Daniel Hutmacher engages in chicanery:

When you use a local service account for your SQL Server service, the server won’t automatically have permissions to access other network resources like UNC paths. Most commonly, this is needed to perform backups directly to a network share.

Using a domain account as your SQL Server service account will allow the server to access a network share on the same domain, but if the network share is not on your domain, like an Azure File Share, you need a different solution.

There’s a relatively easy way to make all of this work, though.

Click through to see how, as well as several methods to make it work within SQL Server.
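
One common pattern is to map the share from within the SQL Server service’s own session and then back up to the UNC path. This is a sketch under assumptions (the share name, account, and key are placeholders, and Daniel’s post may take a different route):

    -- Enable xp_cmdshell (consider the security implications first).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;

    -- Map the Azure File Share using its storage account credentials.
    EXEC master..xp_cmdshell
        'net use \\mystorage.file.core.windows.net\backups /user:AZURE\mystorage <storage-account-key>';

    -- The UNC path is now reachable from this service session.
    BACKUP DATABASE MyDatabase
    TO DISK = '\\mystorage.file.core.windows.net\backups\MyDatabase.bak';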

Receiving Notifications on Cosmos DB 429 Errors

Hasan Savran wants to remain in the loop:

Developers like to know when things go wrong in applications, and sending an email when a bad error occurs is an easy, simple solution. Things can go wrong easily in Cosmos DB; one of the most common errors you will get from Cosmos DB is the “Request rate too large (429)” exception. This error says that you do not have enough Request Units to run a query, and it usually occurs at peak times. The usual cause of 429 errors is the configuration of the Request Units setting: you need to scale up your application or optimize your queries.

It takes more time to retrieve data from Cosmos DB when error 429 occurs. You should get a notification when this happens, but you do not want an email every single time; having 1-5% of requests return 429 is acceptable. You can always open the Cosmos DB monitoring tools and keep an eye on it, or you can create Cosmos DB alerts to get emails.

Click through for a demonstration of how to use Cosmos DB Alerts.

Contrasting In-Place and Side-By-Side Updates for SQL Server

Chad Callihan is not the biggest fan of in-place upgrades:

How many SQL Server upgrades have you been a part of? Was there much discussion about whether or not to do an in-place upgrade versus a side-by-side upgrade? I’ve been part of both and to me there is only one safe way to perform upgrades. Let’s go over some pros and cons of each and see if you come to the same conclusion.

I did kind of spoil the answer, but click through to understand why one is so much riskier than the other.

E-Mailing a DBA upon Object Creation in System Databases

Thomas Williams has another script for us:

Next on the list to commission a SQL Server is to create a trigger in the master system database to send an e-mail to the DBA when objects are created in the database.

In general I’d expect this never to happen – but if it did, I’d want to follow up. It’s possible to go even further and prevent creating objects in the master system database completely.

My experience is that about 5% of the time, people intentionally put objects into the master database, and 95% of the time, I…err, they…forgot to switch databases before running the script.
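
As a rough sketch of the idea (Thomas’s actual script will differ, and the mail profile and recipient here are placeholders), a DDL trigger in master might look like:

    USE master;
    GO
    CREATE TRIGGER trg_notify_object_created
    ON DATABASE
    FOR CREATE_TABLE, CREATE_PROCEDURE, CREATE_VIEW, CREATE_FUNCTION
    AS
    BEGIN
        -- Pull the offending statement out of the trigger's event data.
        DECLARE @cmd nvarchar(max) = EVENTDATA().value(
            '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DBA Mail',         -- placeholder mail profile
            @recipients   = 'dba@example.com',  -- placeholder address
            @subject      = 'Object created in master',
            @body         = @cmd;
    END;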

SQL Server Monitoring via Zabbix

Reitse Eskens digs into using Zabbix to monitor SQL Server:

In one of the projects I’m working on, we needed some sort of monitoring solution for SQL Server, but there wasn’t budget for a commercial one. There are a small number of free, open-source solutions, but they are all difficult to get working. In this blog, I’ll show you what Zabbix offers by default and what you can add yourself.

I’m not the biggest fan of Zabbix, but if it’s what you have, better to use the tools you have than not.
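
If you do go this route, Zabbix can run arbitrary T-SQL through an ODBC item, so a hypothetical custom check might be as simple as counting blocked requests:

    -- Returns a single number Zabbix can graph and alert on.
    SELECT COUNT(*) AS blocked_requests
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;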

Dealing with Non-Yielding Schedulers

Sean Gallardy breaks up the party:

One of the most common items that will cause a memory dump in SQL Server is a non-yielding scheduler (generally referred to as NYS). What the heck does that mean? Why would it cause a memory dump? Is there anything that can be investigated? Good questions, let’s take a look.

Read on to learn what these are, why they’re not something you want to deal with on a regular basis, and how you can get more information about what happened out of a dump file. That information will also help Microsoft staff diagnose and correct the underlying issue (if possible).
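
Before cracking a dump open in a debugger, you can at least see whether your instance has produced any; modern versions of SQL Server expose them through a DMV:

    -- List memory dumps the instance has produced, newest first.
    SELECT [filename], creation_time, size_in_bytes / 1048576 AS size_mb
    FROM sys.dm_server_memory_dumps
    ORDER BY creation_time DESC;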

Sending an E-Mail upon Database Creation

Thomas Williams has a job for SQL Server:

As part of commissioning a SQL Server, I create a server trigger to send an e-mail when a new database is created (adapted from https://www.mssqltips.com/sqlservertip/2864/email-alerts-when-new-databases-are-created-in-sql-server/), and a SQL Agent alert which e-mails when a database is restored (inspired by Jeremy Dearduff’s comment at https://www.brentozar.com/archive/2017/06/tracking-restores-hard/#comment-2446362).

After receiving the e-mail I can follow up and include the database in an inventory. See below for the trigger and alert scripts – feel free to use these as a basis for your own monitoring and inventory.

Click through for the script, as well as an important disclaimer.
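
The database-creation half of this might look something like the sketch below (again, Thomas’s script will differ, and the mail profile and recipient are placeholders):

    CREATE TRIGGER trg_notify_database_created
    ON ALL SERVER
    FOR CREATE_DATABASE
    AS
    BEGIN
        -- Grab the new database's name from the trigger's event data.
        DECLARE @db nvarchar(128) = EVENTDATA().value(
            '(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)');
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DBA Mail',         -- placeholder mail profile
            @recipients   = 'dba@example.com',  -- placeholder address
            @subject      = 'New database created',
            @body         = @db;
    END;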
