
Category: Error Handling

Error Log Column Size

Cody Konior tests how long messages in the error log can be:

Stupid question… what’s the schema of a table with sys.sp_readerrorlog output? Well you might be surprised if you’re used to using nvarchar(max) or nvarchar(2048).

There’s a datetime (modern datetime2(3)) obviously. ProcessInfo is either “Server” or “spidxxxy” where xxx is an int (max of 11 characters including minus) and y is an optional single character suffix. But as for the text…

Let’s try to max it out!

Moral of the story: keep those error messages as short as possible while still being meaningful.
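For reference, here is a minimal T-SQL sketch of capturing that output into a table; the column sizes are assumptions for illustration only, since finding the real limit of the text column is the point of Cody's post.

    -- Minimal sketch: capture sys.sp_readerrorlog output into a temp table for searching.
    -- Column sizes here are assumptions, not documented limits.
    CREATE TABLE #ErrorLog
    (
        LogDate     datetime2(3)  NOT NULL,  -- log entry timestamp
        ProcessInfo nvarchar(20)  NOT NULL,  -- "Server" or "spid<n>" plus an optional suffix; size is a guess
        LogText     nvarchar(max) NOT NULL   -- deliberately generous upper bound for the message text
    );

    INSERT INTO #ErrorLog (LogDate, ProcessInfo, LogText)
    EXEC sys.sp_readerrorlog 0, 1, N'error';  -- current log, SQL Server error log, filter on "error"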


Powershell Errors

Shane O’Neill explains what the $error variable does:

Now ignoring the fact that you already know what is wrong, this tells me that there is either something wrong with the $Database variable, the $sql variable or the syntax statement. Maybe even something else though!
This is not helpful and I’m going to have a bad time.

I encountered this lately and, thanks to Chrissy LeMaire, I was introduced to the $error variable.

Shane also has an interesting side note around error colors.


SQL Server Error Handling Scenarios

Dan Guzman goes into some detail about error handling in SQL Server:

T-SQL and ADO.NET data access code must work in concert with one another to ensure SQL errors are detected in application code. The T-SQL constructs used in multi-statement batches can affect if, when, and how errors are reported by ADO.NET during batch execution. I’ll start by citing core T-SQL error handling objectives, which can be summarized as:

1) Ensure a multi-statement T-SQL batch doesn’t continue after an error occurs.
2) Roll back the transaction after an error.
3) Raise an error so that the client application is aware a problem occurred.

Read the whole thing.
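As a hedged illustration of those three objectives, the sketch below shows one generic pattern; it is not Dan's code, and the table and statement are hypothetical placeholders.

    -- Illustrative pattern only; dbo.SomeTable and the UPDATE are hypothetical.
    SET XACT_ABORT ON;  -- objective 1: stop the batch and doom the transaction on most errors

    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.SomeTable
        SET    SomeColumn = 1
        WHERE  SomeKey = 42;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- objective 2: roll back after an error

        THROW;  -- objective 3: re-raise so the client (e.g. ADO.NET) sees the failure
    END CATCH;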


Troubleshooting SSRS 4xx Errors

Jeff Pries explains how to troubleshoot various Reporting Services configuration errors:

After installing SQL Server Reporting Services (SSRS), are you receiving an Error 404, Error 400, “Invalid Request” error, or “Bad Connection” error on first visiting the SSRS web portal (the error message seems to vary based on version, browser, and whether accessing via http/https or /reports vs /reportserver)?

I’ve run into this a few times, so I’m listing the steps I’ve used to fix it. For me, the root cause of this error has been the SSRS Configuration Wizard automatically configuring SSRS to use HTTPS but assigning an invalid machine SSL certificate. The fix is to self-generate a new, valid SSL certificate for the SSRS website to use.

Jeff then provides step-by-step instructions.


Jepsen: MongoDB 3.4.0-rc3

Kyle Kingsbury takes a new look at MongoDB:

In April 2015, we discussed stale and dirty reads in MongoDB 2.6.7. However, writes appeared to be safe; update-only workloads with majority write concern were linearizable. This conclusion was not entirely correct. In this Jepsen analysis, we develop new tests which show the MongoDB v0 replication protocol is intrinsically unsafe, allowing the loss of majority-committed documents. In addition, we show that the new v1 replication protocol has multiple bugs, allowing data loss in all versions up to MongoDB 3.2.11 and 3.4.0-rc4. While the v0 protocol remains broken, patches for v1 are available in MongoDB 3.2.12 and 3.4.0, and now pass the expanded Jepsen test suite. This work was funded by MongoDB, and conducted in accordance with the Jepsen ethics policy.

Mongo has grown up when it comes to data integrity, though be sure you’re using the v1 replication protocol.


Memory-Optimized Table Warnings

Robert Davis looks at messages in the error log related to memory-optimized tables:

The server on which we are running in-memory OLTP is a really hefty server with 128 logical cores and 1.5 TB of RAM (1.4 TB allocated to SQL Server). We are limiting in-memory’s memory usage with Resource Governor, which also makes it easy to see how much it is using. Needless to say, even a limited percentage of 1.4 TB of RAM is still a lot of memory. The highest I have seen in-memory usage reach for this one database at peak activity levels is ~43 GB. In production, when the heavy in-memory OLTP processes complete, I see the system reclaim the in-memory buffers pretty quickly, though not completely. During a normal day, I often see in-memory memory usage hovering between 1 and 3 GB even when there is virtually no traffic.

When testing in-memory on a dev server that only I was using before deploying to production, I noticed that the memory usage would stay at whatever high level it reached. This makes me believe that in-memory buffers are cleaned up and reclaimed as needed, and if not needed, they just hang around as in-memory buffers. And it appears that some of the buffers end up hanging around. Perhaps they wouldn’t if the server was memory starved. I have not tested that theory.

It’s a conjecture, but it seems pretty solid. Also worth reiterating: these are warnings, not errors.
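For context, limiting in-memory OLTP memory with Resource Governor looks roughly like the sketch below; the pool name, percentage, and database name are placeholders, not values from Robert's environment.

    -- Sketch only: names and numbers are placeholders.
    CREATE RESOURCE POOL InMemoryPool
        WITH (MAX_MEMORY_PERCENT = 40);

    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Bind the memory-optimized database to the pool.
    EXEC sys.sp_xtp_bind_db_resource_pool
        @database_name = N'InMemoryDb',   -- hypothetical database name
        @pool_name     = N'InMemoryPool';

    -- The binding takes effect the next time the database is brought online.
    ALTER DATABASE InMemoryDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE InMemoryDb SET ONLINE;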


Doomed Transactions

Michael Swart talks about doomed transactions:

So the procedure was complicated and it used explicit transactions, but I couldn’t find any TRY/CATCH blocks anywhere! What I needed was a stack trace, but for T-SQL. People don’t talk about T-SQL stack traces very often. Probably because they don’t program like this in T-SQL. We can’t get a T-SQL stack trace from the SQLException (the error given to the client), so we have to get it from the server.

Michael shows how to get stack trace information and provides some advice on the process (mostly, “don’t do what we did”).
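As a hedged sketch of what “doomed” means at the T-SQL level (illustrative only, not Michael's procedure), a CATCH block can test XACT_STATE():

    -- Illustration only: the work inside the transaction is a placeholder.
    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... statements that may raise a doom-inducing error ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = -1
        BEGIN
            -- The transaction is doomed: it cannot be committed, only rolled back.
            -- This is where ERROR_PROCEDURE(), ERROR_LINE(), and ERROR_MESSAGE()
            -- can be logged to build the sort of T-SQL "stack trace" the post describes.
            ROLLBACK TRANSACTION;
        END
        ELSE IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- not doomed, but roll back anyway before re-raising

        THROW;
    END CATCH;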


SSIS Fast Load

Chris Taylor runs into an issue with the OLE DB Destination’s fast load option in Integration Services:

What I do want to bring to your attention is the difference between the two when it comes to redirecting error rows, specifically rows that are truncated. One of the beauties of SSIS is the ability to output rows that fail to import through the error pipeline and push them into an error table, for example. With fast load there is a downside to this: the whole batch will be output even if only one row fails. There are ways to handle this, and a tried and tested method is to push those rows into another OLE DB Destination, where you can retry them in progressively smaller batches or simply run that batch row by row to eventually output the one error you want. Take a look at Marco Schreuder’s blog for how this can be done.

One of the issues we have experienced in the past is that any truncation of a column’s data in fast load will not force the package to fail. What? So a package can succeed when in fact the data itself could potentially not be complete!? Yes, this is certainly the case; let’s take a quick look at an example.

Read on for details and potential workarounds.
