Andy Leonard troubleshoots an issue:
If I had a nickel for every time I’ve encountered “Cannot load script for execution,” I’d have a lot of nickels.
Save your nickels and read Andy’s post.
Comments closed
Comments closed

Jack Vamvas answers a question:
Question: I’m attempting to create a backup of a SQL Server read-only database, but am getting an error message: BACKUP LOG cannot be performed because there is no current database backup
Why is this happening and are there any workarounds?
Read on to understand the problem, as well as the red herring in the question.
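As a hedged sketch of the error (database and file names here are illustrative, and this assumes the database uses the FULL recovery model), a log backup fails until a full backup exists:

```sql
-- Illustrative repro: a transaction log backup requires a prior full backup.
CREATE DATABASE DemoReadOnly;
ALTER DATABASE DemoReadOnly SET READ_ONLY;

-- Fails with: "BACKUP LOG cannot be performed because there is no current database backup."
BACKUP LOG DemoReadOnly TO DISK = N'C:\Backups\DemoReadOnly.trn';

-- Take a full backup first (a full backup of a read-only database is allowed)...
BACKUP DATABASE DemoReadOnly TO DISK = N'C:\Backups\DemoReadOnly.bak';

-- ...after which the log backup can succeed.
BACKUP LOG DemoReadOnly TO DISK = N'C:\Backups\DemoReadOnly.trn';
```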
Comments closed

Solomon Rutzky tries Betteridge’s Law of Headlines:
Over time I’ve used a variety of SQL Server versions with a variety of instance-level collations. This has led me to conclude that sysname had to be all lower-case in some earlier versions if the instance-level collation was either binary or case-sensitive. However, it was not clear exactly which versions and what scenarios truly affected the behavior of sysname name resolution, so I played it safe and continued to always specify that particular data type in all lower-case, even leaving comments in scripts that it must remain all lower-case (just in case someone goes through and tries to make it upper-case to be consistent with the other data types). But then, while researching another topic, I recently found the following in some old documentation (Breaking Changes to Database Engine Features in SQL Server 2005):
Read on for the results of Solomon’s archaeological expedition.
Comments closed

Hello! I’m very slowly working on writing a zine about debugging, so I asked on Twitter the other day:
If you’ve run into a bug where it felt “impossible” to understand what was happening – what made it feel that way?
Of course, bugs always happen for logical reasons, but I’ve definitely run into bugs that felt like they might be impossible for me to understand (until I figured them out!)
I got about 400 responses, which I’ll try to summarize here. I’m not going to talk about how to deal with these various kinds of “impossible” bugs in this post; I’ll just try to classify them.
Click through for the major categories, as well as explanations and sub-categories. I think an interesting follow-up to this is to ask why we find ourselves in situations where we get these sorts of bugs and what (if anything) we can do to minimize or eliminate the likelihood of their appearance.
Comments closed

David Fowler takes us through a mental exercise:
Well, there is something else that I’ve seen have a habit of filling TempDB. If you’re using Read Committed Snapshot or RCSI then you’ll also have a version store in your database. I’m not going to go into the details of exactly how version store works, there is plenty of documentation on it out there if you’re interested (perhaps I’ll write a post on it sometime).
The key bit that you need to know is that although SQL will keep version store trimmed down and only keep the rows that are needed, it can only clear rows that are older than the oldest transaction. This is because SQL has no way of knowing what rows are going to be needed by the transaction. Do you see a potential issue here?
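As a hedged sketch of how you might check this yourself, the following T-SQL queries the version store DMVs (`sys.dm_tran_version_store_space_usage` requires SQL Server 2016 SP2 or later):

```sql
-- How much tempdb space is the version store using, per database?
SELECT database_id, reserved_page_count, reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

-- Alternative: total version store pages across tempdb files.
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Long-running snapshot transactions that can prevent version store cleanup.
SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;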
Read on for enlightenment.
Comments closed

Manoj Pandey explains an error message in Spark SQL:
UnsupportedOperationException: Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to https://docs.microsoft.com/azure/databricks/delta/delta-update#upsert-into-a-table-using-merge
The above error says that when performing a MERGE operation on the Target table, no two rows in the Source table may match the same Target row. The SQL engine applies this check implicitly to avoid ambiguous updates and inconsistent data.
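One common way to preprocess the source, as a hedged sketch (table and column names here are hypothetical), is to keep only the latest source row per merge key before merging:

```sql
-- Collapse the source to at most one row per key, keeping the most recent row,
-- so the Delta MERGE cannot match multiple source rows to one target row.
MERGE INTO target t
USING (
  SELECT id, name, updated_at
  FROM (
    SELECT src.*,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
    FROM source src
  ) ranked
  WHERE rn = 1
) s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);
```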
Read on for a reproduction and what you can do to resolve the issue.
Comments closed

Jon Morisi troubleshoots an irksome issue:
Just a quick blog here about an issue I had with HDP-3.1.4.0. I was recently setting up a new user with specific rights in Ranger for Hive access. After creating the new policy and attempting to validate it, I received an error message stating that the hive user does not have use privilege. This error was produced even though I had just created the policy specifically granting those privileges.
Upon further review I noticed that the plugin was downloading the policy, but not applying it.
Read on to learn what the problem was and how Jon corrected it.
Comments closed

Brett Powell gives us a list of things which might cause breaking changes in Power BI reports:
A breaking change, which we can define as any change to a dataset which causes either reports to render errors or the dataset to fail to refresh, can severely impact business workflows and reflect poorly on those responsible for the solution. Given significant investments in other areas of the organization’s data estate such as Azure Synapse Analytics, a simple, easily avoidable oversight in a Power BI deployment may not be tolerated.
Read on for the list.
Comments closed

Gerardo Villeda gives a few options for handling errors in an Apache Kafka topic:
Apache Kafka® applications run in a distributed manner across multiple containers or machines. And in the world of distributed systems, what can go wrong often goes wrong. This blog post covers different ways to handle errors and retries in your event streaming applications. The nature of your process, and more importantly your business requirements, determines which patterns fit.
This blog provides a quick guide on some of those patterns and expands on a common and specific use case where events need to be retried following their original order. This blog post illustrates a scenario of an application that consumes events from one topic, transforms those events, and produces an output to a target topic, covering different approaches as they gradually increase in complexity.
Click through for the list. Each explanation is pretty short, but opens the door for further analysis.
Comments closed

Pieter Humphrey gives us a few techniques for handling data pipeline errors when running Apache NiFi:
The more complex the model, the more possible sources of problems exist. Forecasting every single potential problem is, of course, impossible. Identifying the most important ones and providing self-solving solutions can greatly reduce the operational uncertainty of our NiFi pipeline and improve its robustness.
To see how to do this analysis, we will consider four possible strategies: one external and three internal. They certainly do not cover all potential error scenarios; they are just examples that we can extrapolate from to inform how to handle other potential failure domains.
Click through for an overview of the topic as well as those four techniques.
Comments closed