Press "Enter" to skip to content

Category: Error Handling

LOB Data and Replication in SQL Server

Mark Beaumont diagnoses an error:

Recently, one of our clients encountered an issue while running a data update in SQL Server. The operation failed immediately with a configuration error, specifically targeting Large Object (LOB) data:

Length of LOB data (169,494) to be replicated exceeds configured maximum 65,536. Use the stored procedure sp_configure to increase the configured maximum value for max text repl size option, which defaults to 65,536. A configured value of -1 indicates no limit, other than the limit imposed by the data type.
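
If raising the limit really is the right call for your environment, the fix the message suggests is a quick sp_configure change. A minimal sketch (run as a sysadmin; -1 removes the cap entirely):

-- Show the current setting (the default is 65,536 bytes)
EXEC sp_configure 'max text repl size';

-- Raise the limit, or pass -1 for no limit beyond the data type's own maximum
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;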

The tricky part was that the client wasn’t using replication. Read on to learn about the culprit.

Dealing with a Full Transaction Log

Rebecca Lewis performs some troubleshooting:

It’s 2am. Your phone wakes you. Rub your eyes, check your email, and there it is:

Error: 9002, Severity: 17, State: 4
The transaction log for database 'trading' is full due to 'LOG_BACKUP'.

The database is still online. Looks ok. You can read from it. But every INSERT, UPDATE, and DELETE fails. Production night-trading is effectively down.

The good news: It’s fixable, but the fix depends entirely on what’s preventing log truncation.
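
As a hedged first step (the database name comes from the error above; the backup path is just a placeholder), check why the log cannot truncate, and if the answer really is LOG_BACKUP, take a log backup:

-- Why can't the log truncate? LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, ...
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'trading';

-- If the wait reason is LOG_BACKUP, a log backup lets the log space be reused
BACKUP LOG trading
TO DISK = N'X:\Backups\trading_log.trn';  -- placeholder path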

Click through for a choose-your-own-adventure story.

Lessons Learned in a SQL Server 2025 Upgrade

Aaron Bertrand shares some lessons learned:

We recently upgraded multiple systems to SQL Server 2025. The engine upgrade itself was smooth, but three unexpected issues surfaced in our lower environments as we planned out production. None of these issues prevented the upgrade from completing, but all three could easily derail an otherwise smooth in-place upgrade to SQL Server 2025. What were these issues, and how can you avoid hitting them?

My biggest surprise out of this is that full-text search actually got upgraded.
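
Not from Aaron's post, but a quick sanity check before and after an upgrade, to confirm full-text search is installed and see which catalogs exist, might look like this:

-- Is full-text search installed on this instance? (1 = yes)
SELECT SERVERPROPERTY('IsFullTextInstalled') AS fulltext_installed;

-- Full-text catalogs in the current database
SELECT name, is_default
FROM sys.fulltext_catalogs;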

Sequence Integer Overflows and BIGINT in PostgreSQL

Laurenz Albe performs a migration:

In a previous article, I recommended using bigint for sequence-generated primary keys (but I make an exception for lookup tables!). If you didn’t heed that warning, you might experience integer overflow. That causes downtime and pain. So I thought it would be a good idea to show you how to monitor for the problem and how to keep the worst from happening.
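
A minimal monitoring sketch in PostgreSQL (assuming the sequences in question feed integer columns; Laurenz's post shows a more precise check) could look like this:

-- How close is each sequence to the integer maximum (2,147,483,647)?
SELECT schemaname,
       sequencename,
       last_value,
       round(100.0 * last_value / 2147483647, 1) AS pct_of_int_max
FROM pg_sequences
WHERE last_value IS NOT NULL
ORDER BY pct_of_int_max DESC;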

Read on for the downtime-rich solution (thanks to table blocking), as well as a solution that requires less downtime.

Diagnosing DirectQuery Connection Limit Issues

Chris Webb goes troubleshooting:

To kick off my series on diagnosing Power BI performance problems with Performance Analyzer in the browser (which I introduced last week with my post on vibe-coding a custom visual to visualise Performance Analyzer data), I want to revisit a subject I blogged about two years ago: how hitting the limit on the maximum number of connections to a DirectQuery data source can lead to queries queuing for an available connection and performance problems. In my original post on this topic I showed how you can use the Execution Metrics event in Profiler/Log Analytics/Workspace Monitoring to see when this queuing happens. In this post I will show how you can do exactly the same thing with Performance Analyzer.

Read on to learn how.

Diagnosing SQL Audit Failure

Alyssa Montgomery troubleshoots an issue:

Message: 

SQL Server Audit failed to create an audit file related to the audit ‘AuditName_ServerAudit’ in the directory ‘C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\Log’. Make sure that the disk is not full and that the SQL Server service account has the required permissions to create and write to the file. 

Based on the error, the solution would be to free up drive space or grant the user or service account permissions on the file path. Unless you are initially setting up an audit, though, permissions are typically not the issue.
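
As a generic starting point (not the specific diagnosis from the post), you can see what state each server audit is in and where it is trying to write:

-- Current status and target file path of each server audit
SELECT name, status_desc, status_time, audit_file_path, audit_file_size
FROM sys.dm_server_audit_status;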

Read on for an example and how to resolve this issue.

Troubleshooting a Distributed Availability Group Failure

Jordan Boich digs in:

To give some background on this topology, they have a DAG comprised of two individual AGs. Their Global Primary AG (we’ll call this AG1) has three replicas, while their Forwarder AG (we’ll call this AG2) has two replicas. The replicas in AG1 are all Azure VMs in the same subnet; the replicas in AG2 are all Azure VMs in their own separate subnet.

By the time we got logged in and connected, the Global Primary Replica was online and able to receive connections. The secondary replicas in the Global Primary AG, however, were unable to communicate with the Global Primary Replica. This is problem 1. The second problem is that several databases on the Forwarder Primary Replica were in crash recovery. This is problem 2. Since problem 2 was happening in the DR environment, it was put aside for the time being.
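
A generic sketch (not the queries from the post) for checking replica connectivity and synchronization health from a given replica looks something like this:

-- Connection and synchronization health of every replica this instance can see
SELECT ag.name AS availability_group,
       ar.replica_server_name,
       rs.role_desc,
       rs.connected_state_desc,
       rs.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS ar
  ON ar.replica_id = rs.replica_id
JOIN sys.availability_groups AS ag
  ON ag.group_id = rs.group_id;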

Read on for the troubleshooting process and solution.

Accessing Home Assistant’s InfluxDB Instance from R

Martin Stingl looks for some data:

I’m running a HomeAssistant instance at home. I’ve configured it to log data into an InfluxDB database so that I can retrieve historical data for analysis later on. By default, HomeAssistant aggregates historical data for storage reasons.

So now I want to access the InfluxDB database from R to perform custom analyses. HomeAssistant is still using InfluxDB version 1. To connect to InfluxDB from R, I thought I could use the influxdbr package, but I got some errors because the package seems to be outdated.

Read on for the error message and how Martin was able to get around this. H/T R-Bloggers.
