We’ve seen how setting errors.tolerance = all will enable Kafka Connect to simply ignore bad messages. When it does, by default it won’t log the fact that messages are being dropped. If you do set errors.tolerance = all, make sure you’ve carefully thought through if and how you want to know about message failures that do occur. In practice that means monitoring/alerting based on available metrics, and/or logging the message failures.
The simplest approach to determining whether messages are being dropped is to tally the number of messages on the source topic against the number written to the output.
Read on for a few different tactics and how you can implement them.
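On the logging side, Kafka Connect can record failures rather than drop them silently. A minimal sketch of a connector configuration, assuming a sink connector (the connector name and dead-letter topic name here are placeholders, not from the original post):

```json
{
  "name": "my-sink-connector",
  "config": {
    "errors.tolerance": "all",
    "errors.log.enable": true,
    "errors.log.include.messages": true,
    "errors.deadletterqueue.topic.name": "dlq-my-sink",
    "errors.deadletterqueue.context.headers.enable": true
  }
}
```

With errors.log.enable set, failures show up in the Connect worker log, and the dead letter queue topic gives you something concrete to count and alert on.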
So, are you seeing this error?
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
If you read the error, it might freak you out a bit. The key words memory and corrupt can be a bit… concerning. Fortunately, in this case they are also rather misleading.
Click through to understand what’s going on and how you can fix the problem if you see this error.
Exception deserializing the package “Operation is not valid due to the current state of the object.”. (Microsoft.DataTransformationServices.VsIntegration)
As a professional consultant who has been blogging about SSIS for 12 years and authored and co-authored a dozen books related to Microsoft data technologies, my first response was:
That is a reasonable first response. Fortunately, Andy also had a second response which was more helpful in finding the root cause.
An error message has started appearing in the SQL Server Error Logs during a nightly full backup.
Could not clear ‘DIFFERENTIAL’ bitmap in database ‘Database1’ because of error 9002. As a result, the differential or bulk-logged bitmap overstates the amount of change that will occur with the next differential or log backup. This discrepancy might slow down later differential or log backup operations and cause the backup sets to be larger than necessary. Typically, the cause of this error is insufficient resources. Investigate the failure and resolve the cause. If the error occurred on a data backup, consider taking a data backup to create a new base for future differential backups.
Click through for the root cause and solution.
The message should help the author fix the code, but sometimes the text suggests a possible action without describing the underlying issue. The goal of this article is to explain the more common DAX error messages by providing a more detailed explanation and by including links to additional material. If some terms are not clear, look at the linked articles or consider some free self-paced training such as Introducing DAX.
Click through for several examples.
So how does this relate to error tables? Like most well-documented APIs, the U.S. Census Bureau API has a page devoted to listing and describing all the possible response codes that can be returned by their service. I take this information and build an internal table within the query that defines and describes these response codes in my own words. I’m now able to throw custom messages that make the difference between a 400 response code and a 404 response code more obvious.
For example, in the code below, I use the Error.Record function to create individual records that allow me to catch these unsuccessful requests and throw my own custom error messages to the user. I then create an extra field in each record called ‘Status’, which maps each HTTP response code returned by the API to a corresponding error message of my choosing.
There’s a bit of work, but the end result is a fairly simple explanation for end users.
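The original is Power Query M, but the shape of the pattern translates to other languages. A minimal Python sketch of the same idea; the status codes and message wording below are illustrative, not the Census Bureau API’s actual documentation:

```python
# Map HTTP response codes to friendlier, more specific messages,
# mirroring the Error.Record pattern described above.
STATUS_MESSAGES = {
    400: "Bad request: check the variable names and geography in your query.",
    404: "Not found: the dataset path in the URL does not exist.",
    500: "Server error: the API itself failed; retry later.",
}

class ApiError(Exception):
    """Custom error carrying a descriptive message per status code."""
    def __init__(self, code: int):
        # Fall back to a generic message for codes we haven't mapped.
        detail = STATUS_MESSAGES.get(code, f"Unexpected HTTP status {code}.")
        super().__init__(f"HTTP {code} - {detail}")
        self.code = code

def check_response(code: int) -> None:
    """Raise a custom, descriptive error for any non-success status."""
    if code != 200:
        raise ApiError(code)
```

The end-user-facing payoff is the same as in the post: a 404 stops reading as generic noise and starts explaining itself.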
I’m executing code using SQLCMD from a batch file. The code points to a SQL file and there is also an output file.
SQLCMD -E -S MYSERVER\INST1 -i "setup_job_entry.sql" -o "setup_job_entry.log"
But I noticed that if the actual SQLCMD returns an error, for example if I’m connecting to a server which doesn’t exist, this error message will appear in the output file, but there will not be an ERROR number which would allow me to trap and return an appropriate message.
There is a way and Jack shows us how.
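One trappable signal is the process exit code: sqlcmd’s -b flag makes it exit non-zero when a SQL error occurs, even when the output file contains no ERROR number. A sketch in Python rather than a batch file (server and file names are placeholders, and sqlcmd may not be installed where this runs, so the demonstration uses a stand-in command):

```python
import subprocess
import sys

def sqlcmd_argv(server: str, script: str, logfile: str) -> list:
    """Build the sqlcmd command line; -b makes sqlcmd exit with a
    non-zero code on a SQL error, which is what lets a caller trap it."""
    return ["SQLCMD", "-E", "-S", server, "-b", "-i", script, "-o", logfile]

def run_and_check(argv: list) -> int:
    """Run a command and hand back its exit code for the caller to branch on."""
    return subprocess.run(argv, capture_output=True).returncode

# Demonstrate the trap with a stand-in command that exits with code 3;
# a real run would pass sqlcmd_argv(r"MYSERVER\INST1", "setup_job_entry.sql",
# "setup_job_entry.log") instead.
code = run_and_check([sys.executable, "-c", "import sys; sys.exit(3)"])
if code != 0:
    message = f"Command failed with exit code {code}"
```

In a batch file the same branch would be an `IF ERRORLEVEL` check after the SQLCMD call.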
To build a robust BI system, you need to cater for errors and handle them carefully. If you build a reporting solution whose refresh fails every time an error occurs, it is not a robust system. Errors can happen for many reasons. In this post, I’ll show you a way to catch potential errors in Power Query and how to build an exception report page to visualize the error rows for further investigation. The method you learn here will save your model from failing at refresh time: you get the dataset updated, and you can catch any rows that caused errors in an exception report page. To learn more about Power BI, read the Power BI book from Rookie to Rock Star.
There’s a lot of work, but also a lot of value in doing that work.
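The post does this with Power Query’s try … otherwise, but the core move, splitting rows into a clean set and an error set instead of failing the whole refresh, can be sketched in any language. A minimal Python version (the sales-parsing transform is illustrative, not from the post):

```python
def split_rows(rows, transform):
    """Apply transform to each row; collect failures instead of raising.

    Returns (good_rows, error_rows), where error_rows keeps the original
    row plus the error text, ready to feed an exception report page.
    """
    good, errors = [], []
    for row in rows:
        try:
            good.append(transform(row))
        except Exception as exc:
            errors.append({"row": row, "error": str(exc)})
    return good, errors

# Illustrative data: one row has a non-numeric Sales value.
rows = [{"Sales": "100"}, {"Sales": "N/A"}, {"Sales": "250"}]
good, errors = split_rows(rows, lambda r: {"Sales": int(r["Sales"])})
```

The refresh-equivalent still succeeds with the two good rows, while the bad row lands in `errors` with enough context to investigate.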
And there we go: you get the table name and the column name as well as the value. Notice that the message ID changed from 8152 to 2628.
Msg 2628, Level 16, State 1, Line 20
String or binary data would be truncated in table ‘truncatetest.dbo.TruncateMe’, column ‘somevalue’. Truncated value: ‘33333’.
The statement has been terminated.
So it looks like it only returns the first value that generates the error. Let’s change the first value to fit into the column and execute the insert statement again.
It’s not perfect, as it only shows one column from the first failed row, but that is still a lot more information than we had before, and I’m happy that this is making it into the product.
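If you want every offending value rather than just the first one the engine reports, a pre-check on the client side can enumerate them before the insert. A rough Python sketch; the column name echoes the example above, but the width limit and rows are illustrative:

```python
# Column maximum lengths, as illustrative stand-ins for the table schema.
COLUMN_MAX = {"somevalue": 4}

def find_truncations(rows):
    """Return every (row_index, column, value) that would be truncated,
    not just the first one the engine happens to report."""
    problems = []
    for i, row in enumerate(rows):
        for col, value in row.items():
            limit = COLUMN_MAX.get(col)
            if limit is not None and len(value) > limit:
                problems.append((i, col, value))
    return problems

rows = [{"somevalue": "33333"}, {"somevalue": "ok"}, {"somevalue": "toolong"}]
```

Running `find_truncations(rows)` flags both over-length values, which is more than Msg 2628 currently surfaces in one pass.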
Ever seen the error below? Until this week I hadn’t. So, I figured I’d take a little time and introduce it to those who had not.
Error Description: Length of LOB data (65754) to be replicated exceeds configured maximum 65536. Use the stored procedure sp_configure to increase the configured maximum value for max text repl size option, which defaults to 65536. A configured value of -1 indicates no limit
We ran into an issue with a customer this week: this error was flooding the error log. After a little digging, I found it had to do with transactional replication (this also applies to Change Data Capture) they had set up, which included LOB data.
Read on to see what you can do to resolve this error. Also, check out the comments and be glad you’re not in that boat…unless you are, in which case…
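The error text itself names the knob to turn. As a sketch, raising the limit looks like the following; whether to use -1 (no limit, per the message text) or a specific byte count is a judgment call for your environment:

```sql
-- Raise the replication LOB limit; -1 means no limit.
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;
```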