Press "Enter" to skip to content

Category: Error Handling

Throwing Exceptions in T-SQL

Chad Callihan plays hot potato:

When it comes to error handling or troubleshooting a long stored procedure, RAISERROR is an easy statement to use that gets the job done. Way back in SQL Server 2012, Microsoft wanted to replace RAISERROR with the new (arguably less convenient) THROW statement. I thought it would be worth looking at an example using THROW and what it takes to have custom messaging with a parameter.

THROW can be a little less convenient, but conceptually speaking, I do think it’s better than the alternative. The part that makes it tricky in practice is that so many types of T-SQL errors are non-catchable, so as a developer, you have to stay on your toes about it.
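
For reference, a parameterized THROW tends to look something like this minimal sketch (the table, error number, and message here are hypothetical, not Chad’s example):

    BEGIN TRY
        DECLARE @OrderID int = 42;

        UPDATE dbo.Orders
        SET    Status = N'Shipped'
        WHERE  OrderID = @OrderID;

        IF @@ROWCOUNT = 0
        BEGIN
            -- Unlike RAISERROR, THROW has no printf-style placeholders, so the
            -- parameterized message is built first.
            DECLARE @msg nvarchar(2048) =
                CONCAT(N'No order found with OrderID ', @OrderID, N'.');

            -- THROW takes an error number (at least 50000), a message, and a
            -- state; the statement before it must end with a semicolon.
            THROW 50001, @msg, 1;
        END;
    END TRY
    BEGIN CATCH
        -- A bare THROW inside CATCH re-raises the original error unchanged.
        THROW;
    END CATCH;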


Is sysname Case-Insensitive?

Solomon Rutzky tries Betteridge’s Law of Headlines:

Over time I’ve used a variety of SQL Server versions with a variety of instance-level collations. This has led me to conclude that sysname had to be all lower-case in some earlier versions if the instance-level collation was either binary or case-sensitive. However, it was not clear exactly which versions and what scenarios truly affected the behavior of sysname name resolution, so I played it safe and continued to always specify that particular data type in all lower-case, even leaving comments in scripts that it must remain as all lower-case (just in case someone goes through and tries to make it upper-case to be consistent with the other data types).

But then, while researching another topic, I recently found the following in some old documentation (Breaking Changes to Database Engine Features in SQL Server 2005):

Read on for the results of Solomon’s archaeological expedition.
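
For a concrete sense of what’s at stake, here’s a quick sketch (not from Solomon’s post) of the behavior in question; whether the second declaration resolves depends on the version and the instance collation, which is exactly what he digs into:

    -- The lower-case spelling resolves everywhere; the upper-case spelling may
    -- not resolve on an instance with a binary or case-sensitive collation,
    -- depending on version.
    DECLARE @LowerCase sysname = N'fine everywhere';
    DECLARE @UpperCase SYSNAME = N'may not resolve on some instances';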


Categorizing Why Bugs Can Be Tricky

Julia Evans has a list:

Hello! I’m very slowly working on writing a zine about debugging, so I asked on Twitter the other day:

If you’ve run into a bug where it felt “impossible” to understand what was happening – what made it feel that way?

Of course, bugs always happen for logical reasons, but I’ve definitely run into bugs that felt like they might be impossible for me to understand (until I figured them out!)

I got about 400 responses, which I’ll try to summarize here. I’m not going to talk about how to deal with these various kinds of “impossible” bugs in this post; I’ll just try to classify them.

Click through for the major categories, as well as explanations and sub-categories. I think an interesting follow-up to this is to ask why we find ourselves in situations where we get these sorts of bugs and what (if anything) we can do to minimize or eliminate the likelihood of their appearance.


When the Version Store Fills tempdb

David Fowler takes us through a mental exercise:

Well, there is something else that I’ve seen have a habit of filling TempDB. If you’re using Read Committed Snapshot or RCSI then you’ll also have a version store in your database. I’m not going to go into the details of exactly how version store works, there is plenty of documentation on it out there if you’re interested (perhaps I’ll write a post on it sometime).

The key bit that you need to know is that although SQL will keep version store trimmed down and only keep the rows that are needed, it can only clear rows that are older than the oldest transaction. This is because SQL has no way of knowing what rows are going to be needed by the transaction. Do you see a potential issue here?

Read on for enlightenment.
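
If you want to check your own servers, something along these lines (a rough sketch, not David’s queries) can show whether version store growth lines up with a long-running snapshot transaction:

    -- Roughly how much of tempdb is currently version store?
    SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
    FROM   tempdb.sys.dm_db_file_space_usage;

    -- Which snapshot transactions have been open the longest (and so keep
    -- older row versions from being cleaned up)?
    SELECT TOP (5)
           session_id,
           transaction_id,
           elapsed_time_seconds
    FROM   sys.dm_tran_active_snapshot_database_transactions
    ORDER  BY elapsed_time_seconds DESC;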


Spark SQL and Merge Errors from Multiple Source Rows Matched

Manoj Pandey explains an error message in Spark SQL:

UnsupportedOperationException: Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to https://docs.microsoft.com/azure/databricks/delta/delta-update#upsert-into-a-table-using-merge

The above error says that while doing a MERGE operation on the Target table, there shouldn’t be any duplicates in the Source table. This check is applied implicitly by the SQL engine to avoid unnecessary updates and inconsistent data.

Read on for a reproduction and what you can do to resolve the issue.
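
As a hedged sketch of the usual fix (the table and column names here are made up, not Manoj’s), you can de-duplicate the source inside the USING clause so that each target row matches at most one source row:

    -- Keep only the latest source row per key before merging into the Delta table.
    MERGE INTO target_table AS t
    USING (
        SELECT id, col1, col2
        FROM (
            SELECT id, col1, col2,
                   ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
            FROM   source_table
        ) ranked
        WHERE  rn = 1
    ) AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.col1 = s.col1, t.col2 = s.col2
    WHEN NOT MATCHED THEN INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);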


Ranger and Jersey Clients

Jon Morisi troubleshoots an irksome issue:

Just a quick blog here about an issue I had with HDP-3.1.4.0.  I was recently setting up a new user with specific rights in Ranger for Hive access.  After creating the new policy and attempting to validate it, I received an error message stating that the hive user does not have use privilege.  This error was produced even though I had just created the policy specifically granting those privileges.

Upon further review I noticed that the plugin was downloading the policy, but not applying it.  

Read on to learn what the problem was and how Jon corrected it.


Enumerating Breaking Changes to Power BI Reports

Brett Powell gives us a list of things which might cause breaking changes in Power BI reports:

A breaking change, which we can define as any change to a dataset which causes either reports to render errors or the dataset to fail to refresh, can severely impact business workflows and reflect poorly on those responsible for the solution. Given significant investments in other areas of the organization’s data estate such as Azure Synapse Analytics, a simple, easily avoidable oversight in a Power BI deployment may not be tolerated.

Read on for the list.


Error Handling Patterns in Kafka

Gerardo Villeda gives a few options for handling errors in an Apache Kafka topic:

Apache Kafka® applications run in a distributed manner across multiple containers or machines. And in the world of distributed systems, what can go wrong often goes wrong. This blog post covers different ways to handle errors and retries in your event streaming applications. The nature of your process determines the patterns, and more importantly, your business requirements.

This blog provides a quick guide on some of those patterns and expands on a common and specific use case where events need to be retried following their original order. This blog post illustrates a scenario of an application that consumes events from one topic, transforms those events, and produces an output to a target topic, covering different approaches as they gradually increase in complexity.

Click through for the list. Each explanation is pretty short, but opens the door for further analysis.
