
Day: June 18, 2021

Reinvestment Risk and Yield to Maturity

Sang-Heon Lee looks at reinvestment risk:

From this post, we can learn about the reinvestment risk of a coupon bond. It is worth noting that 1) the YTM is attainable only when the roll (reinvestment) rate is the same as the YTM, and 2) the argument that the coupon rate equals the YTM at issuance (par yield) applies only to a standard coupon bond with an in-arrears interest payment schedule. Unlike a standard coupon bond, a coupon bond with in-advance interest payments has a YTM higher than its coupon rate at issuance.

Click through for the explanation as well as the R code used. H/T R-Bloggers.
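The first point is quick to verify with a bit of algebra. As a sketch in my own notation (not taken from the post): for an n-year annual-pay bond with price P, coupon C, and face value F, the value of the investment at maturity depends on the rate r at which each coupon is rolled over:

```latex
% Value at maturity when coupons are reinvested ("rolled") at rate r:
FV(r) = C \sum_{t=1}^{n} (1+r)^{n-t} + F
      = C \, \frac{(1+r)^n - 1}{r} + F
% The realized compound yield y solves:
P \, (1+y)^n = FV(r)
% Discounting the cash flows at the YTM gives
% P \, (1+\mathrm{YTM})^n = FV(\mathrm{YTM}),
% so y = \mathrm{YTM} holds exactly when r = \mathrm{YTM}.
```

Reinvest the coupons at anything below the YTM and the realized yield comes up short; that is the reinvestment risk.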


The Enterprise Eats Software

Jessica Kerr explains why software from large firms is so often terrible:

Software is hard to get right. And every time we don’t, customers leave.

Appointment scheduling that sends a calendar invitation “Join the Zoom” without a link. Checkout screens that delete my credit card number when I change the shipping address. Complete Order that comes back with “Please try again later.” Items can’t all be shipped, can’t all be picked up, and this is a maze to figure out (also Lowe’s). A shopping cart that pops up a generic error modal when any single call to the server fails.

Read the whole thing. A lot of this sounds like an incentive alignment problem: each sub-group within a large firm optimizes for its own benefits, but the sum total of those choices leads to a sub-optimal result for the firm itself, as in the case of bad software driving customers to Amazon.


Spark SQL and Merge Errors from Multiple Source Rows Matched

Manoj Pandey explains an error message in Spark SQL:

UnsupportedOperationException: Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to https://docs.microsoft.com/azure/databricks/delta/delta-update#upsert-into-a-table-using-merge

The above error says that when performing a MERGE operation on the target table, there shouldn't be any duplicates in the source table. This check is applied implicitly by the SQL engine to avoid unnecessary updates and inconsistent data.

Read on for a reproduction and what you can do to resolve the issue.
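As a hedged illustration of the remedy the message itself suggests (preprocessing the source so each key appears only once), here is a minimal sketch; the table and column names are hypothetical, not from Manoj's repro:

```sql
MERGE INTO target t
USING (
  -- Deduplicate the source first: keep one row per key,
  -- e.g. the most recent by updated_at, so no target row
  -- can be matched by two different source rows.
  SELECT id, val, updated_at
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
    FROM source
  ) ranked
  WHERE rn = 1
) s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET val = s.val, updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (id, val, updated_at) VALUES (s.id, s.val, s.updated_at);
```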


Foreign Key Constraints and Blocking

Paul White takes a look at blocking due to foreign key checks:

This article covers one such consideration that does not receive much publicity: To minimize blocking, you should think carefully about the indexes used to enforce uniqueness on the parent side of those foreign key relationships.

This applies whether you are using locking read committed or the versioning-based read committed snapshot isolation (RCSI). Both can experience blocking when foreign key relationships are checked by the SQL Server engine.

Under snapshot isolation (SI), there is an extra caveat. The same essential issue can lead to unexpected (and arguably illogical) transaction failures due to apparent update conflicts.

This article is in two parts. The first part looks at foreign key blocking under locking read committed and read committed snapshot isolation. The second part covers related update conflicts under snapshot isolation.

Definitely worth reading the whole thing.
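To make the mechanics concrete before you click through, here is a hedged sketch of the kind of two-session repro Paul builds (a toy schema of my own, not his exact demo):

```sql
CREATE TABLE dbo.Parent
(
    ParentID integer NOT NULL CONSTRAINT PK_Parent PRIMARY KEY
);

CREATE TABLE dbo.Child
(
    ChildID  integer NOT NULL CONSTRAINT PK_Child PRIMARY KEY,
    ParentID integer NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (ParentID)
);

INSERT dbo.Parent (ParentID) VALUES (1);

-- Session 1: delete the parent row but leave the transaction open,
-- holding an exclusive lock on that key.
BEGIN TRANSACTION;
DELETE dbo.Parent WHERE ParentID = 1;

-- Session 2: the foreign key check must read the parent's unique index
-- with a current (locking) read, even under RCSI, so this insert blocks
-- until session 1 commits or rolls back.
INSERT dbo.Child (ChildID, ParentID) VALUES (1, 1);
```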


Reasons to Switch to Power BI Gen2

Teo Lachev gives us five reasons to switch to Power BI Gen2:

More memory
Imported models are memory-resident, so memory is usually the most constraining factor. With Gen2, the capacity maximum memory applies to the resource itself and not collectively across all resources in the capacity. Let's say you are on a P1 plan, which has a maximum memory capacity of 25 GB. With Gen1, you won't be able to have two datasets, let's say 20 GB and 10 GB, loaded at the same time. However, Gen2 will apply the 25 GB limit to each dataset. So, each resource (dataset, report, dataflow) will be boxed within 25 GB. This feat is possible because Gen2 uses a SaaS approach, which means datasets are scattered across multiple cluster nodes instead of being associated with a dedicated capacity. A potential downside, however, could be a "noisy neighbor," because a P3 cluster node may co-host datasets from different customers.

Read on for the full list of reasons, as well as three things Teo would like to see improved in a future release.


SQL Injection and Square Brackets

Erik Darling is not amused:

I see a lot of scripts on the internet that use dynamic SQL, but leave people wide open to SQL injection attacks.

In many cases they’re probably harmless, hitting DMVs, object names, etc. But they set a bad example. From there, people will adapt whatever dynamic SQL worked elsewhere to something they’re currently working on.

Click through for a demonstration of the problem.
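The general shape of the problem, as a hedged sketch rather than Erik's exact demo: the unsafe version concatenates a caller-supplied name directly into the string, while QUOTENAME turns the whole value into a single bracketed identifier:

```sql
DECLARE @table sysname = N'Users; DROP TABLE dbo.Users;--';

-- Unsafe: everything after the object name executes as its own statement.
DECLARE @sql nvarchar(max) =
    N'SELECT COUNT(*) FROM dbo.' + @table + N';';
-- EXEC sys.sp_executesql @sql;  -- would run the injected DROP TABLE

-- Safer: QUOTENAME wraps the value in square brackets and doubles any
-- embedded ], so the payload becomes part of one (nonexistent) identifier
-- and the query fails instead of executing it.
SET @sql = N'SELECT COUNT(*) FROM dbo.' + QUOTENAME(@table) + N';';
EXEC sys.sp_executesql @sql;
```

Checking the supplied name against sys.tables before concatenating is a common extra safeguard.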


Updating dbatools

Chad Callihan shows us how to update the dbatools Powershell module:

The first sentence on the dbatools download page references the belief in releasing early and releasing often. While SQL Server and SQL Server Management Studio may get a handful of updates every year, dbatools averages a few every month. Fortunately, staying up to date with dbatools is easily manageable, as we'll see below.

Read on to see how you can tell which version of the module you have and then how to update it.
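For reference, the standard PowerShellGet route looks something like this, assuming the module was originally installed with Install-Module (a sketch; Chad's post has the details):

```powershell
# See which version(s) you currently have installed.
Get-InstalledModule -Name dbatools

# Grab the latest release from the PowerShell Gallery.
Update-Module -Name dbatools

# Update-Module installs new versions side by side; list them all with:
Get-InstalledModule -Name dbatools -AllVersions
```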
