Press "Enter" to skip to content

Category: Error Handling

Troubleshooting Data Factory Errors

Ginger Grant discusses Azure Data Factory errors:

Unfortunately, while developing Data Factory I became very familiar with errors. All of the errors show up at the end and provide very little insight as to what in the process failed. Here’s an example.

Database operation failed on server ‘Sink:DBName01.database.windows.net’ with SQL Error Number ‘40197’. Error message from database execution : The service has encountered an error processing your request. Please try again. Error code 4815. A severe error occurred on the current command. The results, if any, should be discarded.

This sounds like a classic Microsoft error message: “An error occurred. Here is a code you can put into Google and hope desperately that someone has already figured out the answer. Good luck!”


Error Severity Levels Greater Than 18

Manoj Pandey was debugging an Informatica ETL and got back an uncommon error message:

So, to identify the cause I tried to execute the above MERGE statement directly and I got the same error:

EXEC spMergeTables 'STG.ABCtblXYZ','ABC.tblXYZ'

(0 row(s) affected)
Msg 2754, Level 16, State 1, Procedure spMergeTables, Line 107
Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option.

This is a case in which an immediate error obscured the actual error.
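The rule itself is easy to reproduce. Here's a minimal sketch (the message text is made up); severity 19 and above requires both sysadmin membership and the WITH LOG option:

-- Fails with Msg 2754 when WITH LOG is omitted, and for
-- non-sysadmin callers even when it isn't:
RAISERROR('Critical failure in the nightly load.', 19, 1);

-- Succeeds for a sysadmin and also writes to the SQL Server error log:
RAISERROR('Critical failure in the nightly load.', 19, 1) WITH LOG;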


SET XACT_ABORT On By Default

Tibor Karaszi notes an important change to SQL Server Management Studio with the July 2016 release:

Now, exposing one more SET option can’t be bad, right? But Erland Sommarskog found out that this is checked by default. Now, using XACT_ABORT might now [sic] be a bad thing, au contraire. See this one of Erland’s series of error handle articles.

Making this change after all of these years is a little odd. Making it in a regular update is very odd. Using SET XACT_ABORT ON is a smart move in general, but there are times when you don't want to roll back immediately after an error; the problem is, are all of those places in your code well-documented?
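The behavior difference is easy to see in a minimal sketch (the tables and the failing insert are hypothetical):

SET XACT_ABORT ON;  -- the option the July 2016 SSMS exposes (and checks by default)

BEGIN TRANSACTION;
    INSERT INTO dbo.Parent (ParentID) VALUES (1);
    INSERT INTO dbo.Child (ParentID) VALUES (99); -- suppose this violates a foreign key
COMMIT TRANSACTION;

-- With XACT_ABORT ON, the error aborts the batch and rolls back both
-- inserts. With it OFF (the server default), only the failing statement
-- rolls back; execution continues, the COMMIT runs, and the first
-- insert is quietly committed.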


Try-Catch Doesn’t Handle Everything

Tara Kizer notes that there are limits in what TRY/CATCH blocks handle in SQL Server:

It’s well documented in Books Online (BOL). If you’re like me, then tl;dr. Are we even calling it Books Online these days? I still say “bookmark lookup” instead of “key lookup”. I suppose I’ll be saying Books Online for quite some time too. At least these days it really is online.

Here’s a shortened version:

  • Warnings or informational messages that have a severity of 10 or lower

  • Errors that have a severity of 20 or higher that stop the session

  • Attentions

  • When a session is KILLed

It’s important to know that not everything gets caught, particularly major issues.
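The first of those limits is easy to demonstrate; a minimal sketch:

BEGIN TRY
    -- Severity 10 and lower is informational: the message prints,
    -- but control never transfers to the CATCH block.
    RAISERROR('Just letting you know.', 10, 1);
    PRINT 'Still in the TRY block.';
END TRY
BEGIN CATCH
    PRINT 'This never runs for severity 10 or lower.';
END CATCH;

The others are harsher: severity 20+ errors, attentions, and KILLs end the session itself, so there's no code left running to catch anything.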


Watch Named, Nested Transactions

Gail Shaw finishes her outstanding series on transactions:

The error was thrown by the ROLLBACK statement. As such, the transaction is still open, the locks are held and the transaction log space can’t be reused. Unless the application that called this was checking for open transactions, that transaction could potentially be left open for quite some amount of time, causing blocking and/or the transaction log to grow.

It’s not just that someone in the future might call the code from another stored proc within a transaction, it’s also that it might be that the code is called from an application which started a transaction. Or called from SSIS which started a transaction. It’s very hard to ensure that code is never called from within an existing transaction.

Read the whole thing.
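One defensive pattern that falls out of the series, sketched below (an illustration of the idea, not Gail's code): only open a transaction if you don't already have one, and in the CATCH block only roll back what your scope owns.

CREATE PROCEDURE dbo.DoWork
AS
BEGIN
    DECLARE @StartedTran bit = 0;

    -- Only start a transaction if the caller hasn't already.
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;
        SET @StartedTran = 1;
    END;

    BEGIN TRY
        -- ... the actual work goes here ...

        IF @StartedTran = 1
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Roll back only if this scope opened the transaction and one
        -- is still open; otherwise re-throw and let the caller decide.
        IF @StartedTran = 1 AND XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        THROW;
    END CATCH;
END;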


Check Bulk Insert Errors

Tom Staab points out that bulk insert allows up to 10 errors by default:

The issue was that the last row in a text file contained the row count, so he needed to bulk import all but that last row.

My solution was to set maxerrors to 1 so the import would ignore the last row due to the error. Any other row with an error would still fail the import. This reminded me of one of my least favorite defaults in SQL Server, so I decided to write about it here as well. A lot of people don’t realize this, but by default a bulk insert will only fail after 10 errors. Why not 0? I’ve never understood that. If you don’t change the default and then bulk insert 20 rows of data from a file, it will only fail if over half of the rows cause an error.

Keep track of error incidence and what that means for your data.  The default of 10 errors does seem rather strange.
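If you want fail-fast behavior, say so explicitly. A minimal sketch (the path, table, and terminators are hypothetical):

-- MAXERRORS = 0 fails the import on the first bad row; Tom's
-- MAXERRORS = 1 tolerates exactly one (his trailing row-count line).
BULK INSERT dbo.ImportTarget
FROM 'C:\Data\input.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 0  -- the default is 10 if you leave this out
);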
