Press "Enter" to skip to content

Category: Administration

Capturing SQL Server Login Details with Extended Events

Jack Vamvas shows how to track SQL Server logins:

I have to capture logon information details for a specific logon on a SQL Server. Specifically – the client_hostname, nt_username & username. What I’m looking for is a log recording a successful connection made to the server. The event should be triggered a) when a connection is made & b) from a connection pool.

Click through to see how.
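As a rough sketch of the kind of session described above (the session name, filtered login, and target file path are all made up here; the exact event fields Jack uses are in the linked post), an Extended Events definition might look something like this:

    -- Sketch only: capture successful logins for one specific login.
    -- The sqlserver.login event also fires for pooled connections; its is_cached
    -- field (where available in your version) distinguishes pool reuse from a fresh login.
    CREATE EVENT SESSION [TrackSpecificLogin] ON SERVER
    ADD EVENT sqlserver.login (
        ACTION (sqlserver.client_hostname, sqlserver.nt_username, sqlserver.username)
        WHERE (sqlserver.username = N'AppLogin')   -- hypothetical login name
    )
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\TrackSpecificLogin.xel')
    WITH (STARTUP_STATE = ON);
    GO

    ALTER EVENT SESSION [TrackSpecificLogin] ON SERVER STATE = START;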

Comments closed

Comparing CPU Activity and Diagnosing the Cause

Joe Obbish has a tutorial for us:

Sometimes I have a need to run a quick CPU comparison test between two different SQL Server instances. For example, I might be switching from old hardware to new hardware and I want to immediately see a faster query to know that I got my money’s worth. Sometimes I get a spider sense while working with virtualized SQL Server instances and want to check for problems. Yesterday, I was doing a sort of basic health check on a few servers that I hadn’t worked with much and I wanted to verify that they got the same performance for a very simple query.

Click through for an easy test script and a good amount of diagnosis to understand why there is a significant difference between two instances.
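Joe’s actual test script is in the linked post; as a rough stand-in for the idea, a single-threaded, CPU-bound query like this gives an elapsed-time number you can compare across instances:

    -- Not Joe's script: just a simple MAXDOP 1 query that is almost entirely CPU.
    SET STATISTICS TIME ON;

    SELECT COUNT_BIG(*)
    FROM master.dbo.spt_values AS a
    CROSS JOIN master.dbo.spt_values AS b
    OPTION (MAXDOP 1);

    SET STATISTICS TIME OFF;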

Comments closed

Pure Storage FlashArray Snapshot Torture Test

Argenis Fernandez puts SQL Server snapshots on a Pure Storage FlashArray to the test:

Look, I’m not here to fight your religious war about how snapshots should not be called backups. I’m just gonna call them fast-as-fast restores(*) and be done with it. Because let’s be honest, with Pure Storage there’s absolutely nothing faster than a storage snapshot to recover a volume. Or volume(s). You get the idea. It’s about how fast you recover, every time.

Yes, I do understand that there are a million considerations for something to be called a “backup”. We’ll get to those little by little – don’t expect a thorough post on that debate right now. Today I want to focus on one question: Are Pure Storage FlashArray snapshots stable, trustworthy enough that I can take them without pausing I/O against my database? Can I trust that the database will come online every time from a snapshot?

Read on for the answer. For additional fun, read the whole article with your mental voice sounding like Argenis.
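If you run a similar torture test yourself, a quick sanity check after bringing a database up from a cloned snapshot volume (the database name below is made up) is to confirm it is online and passes consistency checks:

    -- Sketch only: verify that crash recovery completed and the database is consistent.
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = N'SnapshotTestDB';

    DBCC CHECKDB (N'SnapshotTestDB') WITH NO_INFOMSGS;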

Comments closed

Finding the Binding: I/O or CPU as the Constraint

Erik Darling lays down a lesson for us:

When you’re looking for queries to tune, it’s important to understand which part is causing the slowdown.

That’s why Actual Execution plans are so valuable in newer versions of SQL Server and SSMS. Getting to see operator timing and wait stats for a query can tell you a lot about what kind of problem you’re facing.

Let’s take a look at some examples.

Let’s, shall we?
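One cheap way to see the same kind of signal outside of a graphical plan (my sketch, not Erik’s; it assumes SQL Server 2016 or later) is to check the waits your own session accumulated while the query ran:

    -- Run the query under test in this session first, then:
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_exec_session_wait_stats
    WHERE session_id = @@SPID
    ORDER BY wait_time_ms DESC;
    -- Heavy PAGEIOLATCH_* waits point at I/O; lots of SOS_SCHEDULER_YIELD and
    -- high signal waits point at CPU pressure.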

Comments closed

A Primer on Prometheus

Nikhil Varghese provides an introduction to Prometheus:

Prometheus is an open-source monitoring system for processing time series metric data. It collects, organizes, and stores metrics using unique identifiers and timestamps. DevOps teams and developers query that data using PromQL and then visualize it in a UI such as Grafana.

Read on to learn more about how the pieces fit together and some of the key terminology.

Comments closed

Deciding on Forced Parameterization or Optimize for Ad Hoc Workloads

Erik Darling hosts a showdown:

I often speak with people who are confused about what these settings do, and which one they should be using to fix certain problems.

The first myth is that Optimize For Ad Hoc Workloads has some special effect on queries run outside of stored procedures: it does not. It’s very poorly named in that regard. There are no special optimizations applied because of that setting.

Read the whole thing.
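For reference, both settings are a one-liner to turn on (the database name below is a placeholder); which one, if either, actually helps with your problem is the question Erik’s post answers:

    -- Server-level: cache only a small plan stub the first time an ad hoc query runs.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Database-level: replace literals with parameters so similar queries share plans.
    ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;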

Comments closed

Latches to Know

Paul Randal wraps up a series on latches with a few miscellaneous entries:

When either a heap or an index is being accessed, internally there’s an object called a HeapDataSetSession or IndexDataSetSession, respectively. When a parallel scan is being performed, the threads doing the actual work of the scan each have a “child” dataset (another instance of the two objects I just described), and the main dataset, which is really controlling the scan, is called the “parent.”

When one of the scan worker threads has exhausted the set of rows it’s supposed to scan, it needs to get a new range by accessing the parent dataset, which means acquiring the ACCESS_METHODS_DATASET_PARENT latch in exclusive mode. While this can seem like a bottleneck, it’s not really, and there’s nothing you can do to stop the threads performing a parallel scan from occasionally showing a LATCH_EX wait for this latch.

Click through to read the whole thing.
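If you want to see how much of this latch your servers actually wait on (a quick sketch of my own, not from Paul’s post), the cumulative numbers are in sys.dm_os_latch_stats:

    -- Cumulative since the last service restart (or since DBCC SQLPERF cleared the stats).
    SELECT latch_class, waiting_requests_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_os_latch_stats
    WHERE latch_class = N'ACCESS_METHODS_DATASET_PARENT';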

Comments closed

The Reason for Tail Log Backups

Chad Callihan explains why we need tail log backups:

When you are migrating a database from one server to another, how can you be sure to back up all transactions? Sure, you can notify the client and let them know “there will be a short outage at 8AM so please stay out of the application at that time.” Can you really trust that? Of course not. Let’s demonstrate the steps needed to include all transactions with the tail-log backup.

Protip: if you build your application such that nobody wants to use it, you can migrate the database much more easily. Assuming you don’t want to follow that outstanding advice, Chad has you covered.
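The core step is a log backup taken WITH NORECOVERY, which both captures the tail of the log and leaves the source database in a restoring state so nothing new can sneak in afterward (names and paths below are made up; the full migration sequence is in Chad’s post):

    -- Tail-log backup on the source server; the database goes into RESTORING state.
    BACKUP LOG [MigrationDB]
    TO DISK = N'C:\Backups\MigrationDB_tail.trn'
    WITH NORECOVERY, INIT;

    -- On the new server, restore the full backup and log chain, finishing with the tail:
    -- RESTORE LOG [MigrationDB] FROM DISK = N'C:\Backups\MigrationDB_tail.trn' WITH RECOVERY;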

Comments closed

Monitoring Azure Data Factory, Integration Runtimes, and Pipelines

Sandeep Arora monitors all the things:

For effective monitoring of ADF pipelines, we are going to use Log Analytics, Azure Monitor and Azure Data Factory Analytics. The above illustration shows the architectural representation of the monitoring setup.

The details of setting up log analytics, alerts and Azure Data Factory Analytics are further discussed in this section.

If you manage Azure Data Factory in your environment, give this a read.

Comments closed

Azure Linux VM Agent Vulnerability

Nir Ohfeld finds another vulnerability:

Wiz’s research team recently discovered a series of alarming vulnerabilities that highlight the supply chain risk of open source code, particularly for customers of cloud computing services.

The source of the problem is a ubiquitous but little-known software agent called Open Management Infrastructure (OMI) that’s embedded in many popular Azure services.

When customers set up a Linux virtual machine in their cloud, the OMI agent is automatically deployed without their knowledge when they enable certain Azure services. Unless a patch is applied, attackers can easily exploit these four vulnerabilities to escalate to root privileges and remotely execute malicious code (for instance, encrypting files for ransom).

This has been patched, but it’s really ugly. H/T Ben Stegink.

Comments closed