

Pure Storage FlashArray Snapshot Torture Test

Argenis Fernandez puts SQL Server snapshots on a Pure Storage FlashArray to the test:

Look, I’m not here to fight your religious war about how snapshots should not be called backups. I’m just gonna call them fast-as-fast restores(*) and be done with it. Because let’s be honest, with Pure Storage there’s absolutely nothing faster than a storage snapshot to recover a volume. Or volume(s). You get the idea. It’s about how fast you recover, every time.

Yes, I do understand that there are a million considerations for something to be called a “backup”. We’ll get to those little by little – don’t expect a thorough post on that debate right now. Today I want to focus on one question: Are Pure Storage FlashArray snapshots stable and trustworthy enough that I can take them without pausing I/O against my database? Can I trust that the database will come online every time from a snapshot?

Read on for the answer. For additional fun, read the whole article with your mental voice sounding like Argenis.
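Not from Argenis’s post, but as a rough sketch of the kind of sanity check you might run after bringing a database online from a crash-consistent snapshot (the database name here is hypothetical):

```sql
-- Hypothetical database recovered from a storage snapshot.
-- Confirm crash recovery brought it ONLINE, then verify physical consistency.
SELECT name, state_desc
FROM sys.databases
WHERE name = N'SnapTestDB';

DBCC CHECKDB (N'SnapTestDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```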


Finding the Binding: I/O or CPU as the Constraint

Erik Darling lays down a lesson for us:

When you’re looking for queries to tune, it’s important to understand which part is causing the slowdown.

That’s why Actual Execution plans are so valuable in newer versions of SQL Server and SSMS. Getting to see operator timing and wait stats for a query can tell you a lot about what kind of problem you’re facing.

Let’s take a look at some examples.

Let’s, shall we?
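As a quick complement to reading the actual plan, comparing CPU time to elapsed time (and watching the read counts) for a single query can also hint at whether you’re CPU-bound or I/O-bound. A minimal sketch, with a placeholder table name:

```sql
-- If elapsed time is much higher than CPU time and reads are heavy,
-- the query is likely waiting on I/O; if CPU time roughly equals
-- elapsed time, the query is CPU-bound.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT_BIG(*)
FROM dbo.SomeBigTable;  -- placeholder table

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```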


A Primer on Prometheus

Nikhil Varghese provides an introduction to Prometheus:

Prometheus is an open-source monitoring system for processing time series metric data. It collects, organizes, and stores metrics using unique identifiers and timestamps. DevOps teams and developers query that data using PromQL and then visualize it in a UI such as Grafana.

Read on to learn more about how the pieces fit together and some of the key terminology.


Deciding on Forced Parameterization or Optimize for Ad Hoc Workloads

Erik Darling hosts a showdown:

I often speak with people who are confused about what these settings do, and which one they should be using to fix certain problems.

The first myth is that Optimize For Ad Hoc Workloads has some special effect on queries run outside of stored procedures: it does not. It’s very poorly named in that regard. There are no special optimizations applied because of that setting.

Read the whole thing.
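For reference (not from Erik’s post), here’s where each setting lives. Forced parameterization is a per-database option, while optimize for ad hoc workloads is an instance-level configuration; the database name is a placeholder:

```sql
-- Database-level: automatically parameterize literal values in most queries.
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;

-- Instance-level: cache only a small plan stub on a query's first execution,
-- and cache the full plan if the same query is seen again.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;
```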


Latches to Know

Paul Randal wraps up a series on latches with a few miscellaneous entries:

When either a heap or an index is being accessed, internally there’s an object called a HeapDataSetSession or IndexDataSetSession, respectively. When a parallel scan is being performed, the threads doing the actual work of the scan each have a “child” dataset (another instance of the two objects I just described), and the main dataset, which is really controlling the scan, is called the “parent.”

When one of the scan worker threads has exhausted the set of rows it’s supposed to scan, it needs to get a new range by accessing the parent dataset, which means acquiring the ACCESS_METHODS_DATASET_PARENT latch in exclusive mode. While this can seem like a bottleneck, it’s not really, and there’s nothing you can do to stop the threads performing a parallel scan from occasionally showing a LATCH_EX wait for this latch.

Click through to read the whole thing.
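If you’re curious how much of that latch activity shows up on your own instance, here’s a quick look (illustrative, not from Paul’s post) at the latch statistics DMV:

```sql
-- Cumulative waits per latch class since the last restart or stats clear.
SELECT latch_class,
       waiting_requests_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class LIKE N'ACCESS_METHODS_DATASET%'
ORDER BY wait_time_ms DESC;
```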


The Reason for Tail Log Backups

Chad Callihan explains why we need tail log backups:

When you are migrating a database from one server to another, how can you be sure to back up all transactions? Sure, you can notify the client and let them know “there will be a short outage at 8AM so please stay out of the application at that time.” Can you really trust that? Of course not. Let’s demonstrate the steps needed to include all transactions with the tail-log backup.

Protip: if you build your application such that nobody wants to use it, you can migrate the database much more easily. Assuming you don’t want to follow that outstanding advice, Chad has you covered.
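For context, the tail-log backup itself is a single statement; the database name and path below are placeholders. WITH NORECOVERY is the key part, since it leaves the source database in RESTORING so no further transactions can sneak in after the backup:

```sql
-- Back up the tail of the log and leave the database in RESTORING,
-- so nothing else can be written to it before the migration completes.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_taillog.trn'
WITH NORECOVERY, INIT;
```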


Monitoring Azure Data Factory, Integration Runtimes, and Pipelines

Sandeep Arora monitors all the things:

For effective monitoring of ADF pipelines, we are going to use Log Analytics, Azure Monitor and Azure Data Factory Analytics. The above illustration shows the architectural representation of the monitoring setup.

The details of setting up log analytics, alerts and Azure Data Factory Analytics are further discussed in this section.

If you manage Azure Data Factory in your environment, give this a read.


Azure Linux VM Agent Vulnerability

Nir Ohfeld finds another vulnerability:

Wiz’s research team recently discovered a series of alarming vulnerabilities that highlight the supply chain risk of open source code, particularly for customers of cloud computing services.

The source of the problem is a ubiquitous but little-known software agent called Open Management Infrastructure (OMI) that’s embedded in many popular Azure services.

When customers set up a Linux virtual machine in their cloud, the OMI agent is automatically deployed without their knowledge when they enable certain Azure services. Unless a patch is applied, attackers can easily exploit these four vulnerabilities to escalate to root privileges and remotely execute malicious code (for instance, encrypting files for ransom).

This has been patched, but it’s really ugly. H/T Ben Stegink.


Persist Sample Percent Bugfix in SQL Server

John Sterrett has good news for us:

Hi Everyone, this is John Sterrett. I am a SQL Server Consultant in Austin, TX. Last year I blogged about a feature called Persist Sample Percent. It had a nasty bug that could negatively impact performance. I have great news to share. The fix is now rolled into SQL 2016 SP2 CU17 and SQL 2019 CU10. Pedro Lopes let me know that with the fix now queued for SQL 2017 CU26, this becomes fixed in all versions.

Read on to see what this means and why it’s important.
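As a reminder of what the feature looks like in practice (the table and statistic names here are made up), the sample rate gets persisted on the UPDATE STATISTICS statement:

```sql
-- Sample 10 percent now and remember that rate for subsequent automatic
-- and manual updates of this statistic (the behavior the bug broke).
UPDATE STATISTICS dbo.SomeLargeTable (IX_SomeLargeTable_SomeColumn)
WITH SAMPLE 10 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
```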


An Overview of Bicep

Steve Jones pumps and he pumps:

Bicep is a transpiler, meaning it takes one language and translates it into another. In this case, the Bicep language will move code into the ARM JSON templates. JSON is really for machines, not humans, so the idea is to give sysadmins and developers an easy way to describe resources they need to deploy into Azure.

The language is new, and it’s on Github. This is a DSL (domain specific language), which means it was designed for a specific purpose. With the 0.3 release, the language is built into the Azure CLI and Azure PoSh utilities, so this will do the transpilation for you. There’s also a decompiler to go from an ARM template back to Bicep. It’s also supported by Microsoft, which is always a plus if you need to call for some issue.

Click through for more information.
