Category: Administration

Securing Amazon Managed Streaming for Kafka

Stephane Maarek has some security advice for us:

AWS launched IAM Access Control for Amazon MSK, which is a security option offered at no additional cost that simplifies cluster authentication and Apache Kafka API authorization using AWS Identity and Access Management (IAM) roles or user policies to control access. This eliminates the need for administrators to run an unfamiliar system to control access to Apache Kafka on Amazon MSK, and learn intricate details and specific commands to manage Apache Kafka access control lists (ACLs).

This is a game-changer from a security perspective for AWS customers who use Apache Kafka: I recommend Amazon MSK customers use IAM Access Control unless they have a specific need for using mutual TLS or SASL/SCRAM authN/Z.

Read on to see how it works.

Availability Groups and Logins

Andrea Allred runs into a post-failover issue:

While doing a planned Availability Group failover, the application stopped talking to the database. After checking the SQL Server log, we found that all the SQL Logins were failing with an “incorrect password” error. The logins were on the server, the users were in the databases, and the passwords were even right, so what was wrong? It all comes down to SIDs (Security Identifiers).

Read on for the cause and the solution. I’d also recommend Sync-DbaAvailabilityGroup as a good dbatools cmdlet to use.
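
If you'd like to see the mismatch for yourself, here's a minimal sketch (the AppUser name is made up); run it in the affected database on the new primary:

-- Find SQL logins whose server-level SID differs from the database user's SID
SELECT sp.name, sp.sid AS server_sid, dp.sid AS database_sid
FROM sys.server_principals AS sp
JOIN sys.database_principals AS dp
    ON dp.name = sp.name
WHERE sp.type = 'S'        -- SQL-authenticated logins
  AND sp.sid <> dp.sid;

-- Remap an orphaned user to its login in place, without drop-and-recreate
ALTER USER [AppUser] WITH LOGIN = [AppUser];

The durable fix is to create the logins on every replica with matching SIDs, which is exactly the sort of thing Sync-DbaAvailabilityGroup takes care of.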

An Introduction to Latches

Paul Randal starts a series on latches:

In some of my previous articles here on performance tuning, I’ve discussed multiple wait types and how they are indicative of various resource bottlenecks. I’m starting a new series on scenarios where a synchronization mechanism called a latch is a performance bottleneck, and specifically non-page latches. In this initial post I’m going to explain why latches are required, what they actually are, and how they can be a bottleneck.

Read on to learn what a latch is, why it is useful, and how latches work at a high level.
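
While waiting for the rest of the series, the DMVs give a rough first look at whether latches are a problem on your instance; a quick sketch:

-- Aggregate latch waits since the last restart (or stats clear).
-- PAGELATCH_% waits are page latches; LATCH_% waits are the
-- non-page latches Paul's series focuses on.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'LATCH[_]%'
   OR wait_type LIKE N'PAGELATCH[_]%'
ORDER BY wait_time_ms DESC;

-- Break non-page latch waits down by latch class
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;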

Working with High Virtual Log Files

Chad Callihan explains the notion of Virtual Log Files and has a process to handle them when they multiply like rabbits:

Today, I want to go over what Virtual Log Files are and how to handle them if you have too many in your databases.

A SQL Server log file is made up of smaller files called Virtual Log Files (VLFs). As the log file grows, so will the count of VLFs. I haven’t seen or heard of a calculation that can be worked out to determine how many VLFs you should have or how many is too many for a database. I’ve heard that you shouldn’t have more than a few hundred. I’ve also heard to not worry about VLFs until you break 1000. If you check your databases and have thousands in a database, I would say it’s best to get that count lowered whether you’re seeing issues yet or not.

Read on to see how.
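
For reference, the usual approach looks something like this sketch (database and file names are hypothetical, and you may need a log backup first so the active portion of the log can move):

-- Count VLFs per database (sys.dm_db_log_info is available in 2016 SP2 and later)
SELECT d.name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;

-- Shrink the log as far as it will go, then regrow it in one large step
-- so it is rebuilt with far fewer, larger VLFs
USE [YourDatabase];
DBCC SHRINKFILE (N'YourDatabase_log', 1);
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8GB);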

Managing Azure DevOps via Azure Logic Apps

Stuart Ainsworth has a process:

A big part of my job these days is looking for opportunities to improve workflow. Automation of software is great, but identifying areas to speed up human processes can be incredibly beneficial to value delivery to customers. Here’s the situation I recently figured out how to handle:

1. My SRE team uses a different Azure DevOps project than our development team. This protects the “separation of duties” concept that auditors love, while still letting us transfer items back and forth.
2. The two projects are in the same organization.
3. The two projects use different templates, with different required fields.
4. Our workflow process requires two phases of triage for bugs in the wild: a technical phase (provided by my team), and a business prioritization (provided by our Business Analyst).
5. Moving a card between projects is simple, but there were several manual changes that had to be made:
– Assigning to a Business Analyst (BA)
– Changing the status to Proposed from Active
– Changing the Iteration and Area
– Moving the card.

To automate this, I decided to use Azure Logic Apps.

Read on to see how Stuart did this.

Pre-Loading SSAS Databases into Memory Post-Restart

Nigel Foulkes-Nock explains why that first query after restarting SSAS can be slow:

When the SQL Server Analysis Services (SSAS) Tabular service is started, it can take a long time before it is ready to be queried. This can cause delays to service, not to mention confusion.

This blog post will explain what is happening during this time and a method that can be used to improve it. It’s worth mentioning that the SSAS Tabular databases this has been used on are quite large (> 100 GB).

Click through for the answer, as well as a technique to warm up those servers so an end user doesn’t wind up being the one to pay for this wait.

Read-Ahead Reads

Chad Callihan provides some helpful tips around read-ahead:

What are read-ahead reads and how do they impact SQL Server performance? Read-ahead reads allow SQL Server to think ahead to pull pages into the buffer cache before they are actually requested for a query. Up to 64 contiguous pages from a file can be read and the ability to read-ahead can be used for both data pages and index pages. Once data is in the buffer cache, it will not need to be pulled in for future queries unless it has been pushed out by other SQL Server tasks.

Click through to see what they are and how you could disable them if you really need to.
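
To watch read-ahead in action yourself, a small sketch (the table name is made up):

SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.SomeBigTable;
-- The Messages output reports "read-ahead reads N" alongside
-- logical and physical reads.

-- Trace flag 652 disables read-ahead; the session-scoped form shown
-- here can be handy for comparing cold-cache behavior, but it's rarely
-- something you'd want in production:
DBCC TRACEON (652);
SELECT COUNT(*) FROM dbo.SomeBigTable;
DBCC TRACEOFF (652);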

Shrinking an Azure SQL Database

Joey D’Antoni wants to take it down a notch:

You will note that I didn’t mention that “your log file grew because of a large index rebuild”. That’s because that is probably roughly (this is a really rough rule of thumb) how big your transaction log needs to be. But, anyway, we’re talking about Azure SQL Database, so you don’t need to worry about your transaction log file. Microsoft takes care of that for you: ‘Unlike data files, Azure SQL Database automatically shrinks log files since that operation does not impact database performance.’

Read on for the twist at the end.
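
For the data-file side, the pattern looks roughly like this sketch (the target size is made up; check used space before picking one):

-- Compare allocated vs. used space per file
SELECT file_id, name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;

-- Shrink a specific file toward a target size in megabytes
DBCC SHRINKFILE (1, 100000);   -- file_id 1 down toward ~100 GB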

Altering Columns in Temporal Tables

Meagan Longoria explains the process around table alteration when it’s a temporal table:

System-versioned temporal tables were introduced in SQL Server 2016. They provide information about data stored in the table at any point in time by storing an effective dated version of each row rather than only the data that is correct at the current time.

You can alter a temporal table to add or change columns, but you must first turn off system versioning. Let’s look at an example.

The example here relates to a computed column, so a bit more work has to happen due to the way you define computed columns.
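
The general shape of the operation, sketched with hypothetical table and column names (assuming FirstName and LastName are nvarchar(50)):

BEGIN TRANSACTION;

-- 1. Turn off system versioning so the schema can change
ALTER TABLE dbo.Customer SET (SYSTEM_VERSIONING = OFF);

-- 2. Add the computed column to the current table...
ALTER TABLE dbo.Customer ADD FullName AS (FirstName + N' ' + LastName);

-- 3. ...but the history table can't contain computed columns, so it
-- gets a regular column of the computed column's type instead
ALTER TABLE dbo.CustomerHistory ADD FullName nvarchar(101);

-- 4. Turn system versioning back on, pointing at the same history table
ALTER TABLE dbo.Customer
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

COMMIT;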
