The latest version of the rOpenSci av package includes some useful new tools for working with audio data. We have added functions for reading, cutting, converting, transforming, and plotting audio data in any popular audio/video format (mp3, mkv, aac, etc.).
The functionality can either be used by itself, or to prepare audio data for further analysis in R using other packages. We hope this clears an important hurdle to using R for research on speech, music, and whale mating calls.
Consider a person who applies for a loan with a financial company, but their application is rejected by a machine learning algorithm used to determine who receives a loan from the company. How would you explain the decision made by the algorithm to this person? One option is to provide them with a list of features that contributed to the algorithm’s decision, such as income and credit score. Many of the current explanation methods provide this information by either analyzing the algorithm’s properties or approximating it with a simpler, interpretable model.
However, these explanations do not help this person decide what to do next to increase their chances of getting the loan in the future. In particular, changing the most important features for prediction may not actually change the decision, and in some cases, important features may be impossible to change, such as age. A similar argument applies when algorithms are used to support decision-makers in scenarios such as screening job applicants, deciding health insurance, or disbursing government aid.
This has the potential to be a great library. One of the issues with machine learning as it stands today is that you can get an answer, but understanding how to change that answer requires a human to understand the model. This looks like a good first step. It’s only available in Python.
Last year, you finally retired the last of your SQL Server 2008R2 instances. Congratulations! But are you taking advantage of everything that your new instances have to offer? Unless you did a review of all of the T-SQL in your applications, I’m guessing not.
At one time or another, we all find ourselves having to do some string parsing, especially splitting strings on a delimiter. Nearly all of us have one (or two, or a dozen) functions for doing this somewhere on every instance of SQL Server. But since SQL Server 2016, we’ve had an official way to do it – the STRING_SPLIT function.
Andy’s example involves splitting strings, but there are plenty of functions which have come into the T-SQL lexicon over the years. It might be worth doing a quick review of the available system functions, just to see if there’s something useful which slipped in with a newer version of SQL Server.
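For reference, here is the built-in function in action; the table and column in the second query are hypothetical, made up for the example:

```sql
-- STRING_SPLIT (SQL Server 2016+, compatibility level 130 or higher)
-- returns a one-column result set named value, one row per piece.
SELECT value
FROM STRING_SPLIT('alpha,bravo,charlie', ',');

-- A common use: filtering against a delimited parameter list.
DECLARE @Ids varchar(100) = '3,5,8';

SELECT t.*
FROM dbo.SomeTable AS t   -- hypothetical table for the example
WHERE t.Id IN (SELECT TRY_CAST(value AS int) FROM STRING_SPLIT(@Ids, ','));
```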
Recently, I had to purge some parent records from a table. In this case, the parent table had foreign keys referencing it, which by itself isn’t an issue. The fact that there were more than 30 of them was. While SQL Server will happily tell you that you are violating a foreign key if a child record is present when deleting the parent record, finding all of those child records can be cumbersome. This is even more true when you have a larger number of foreign keys.
Thankfully, SQL Server can tell us a lot of information about foreign keys including both the parent and child tables as well as the column used. From this information, we can dynamically create a SELECT statement that would tell us the number of child records that are tied to the parent ID.
Click through for the solution.
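As a rough sketch of the approach (not the linked solution itself), the catalog views can generate one COUNT query per referencing foreign key; the parent table name and ID value below are hypothetical, and single-column keys are assumed:

```sql
-- sys.foreign_key_columns describes every foreign key column pairing.
-- Note: in this view, parent_object_id is the *referencing* (child) table.
DECLARE @ParentId int = 42;   -- hypothetical ID we intend to purge

SELECT N'SELECT ''' + OBJECT_NAME(fkc.parent_object_id) + N''' AS child_table, '
     + N'COUNT(*) AS child_rows FROM '
     + QUOTENAME(OBJECT_SCHEMA_NAME(fkc.parent_object_id)) + N'.'
     + QUOTENAME(OBJECT_NAME(fkc.parent_object_id))
     + N' WHERE ' + QUOTENAME(COL_NAME(fkc.parent_object_id, fkc.parent_column_id))
     + N' = ' + CAST(@ParentId AS nvarchar(12)) + N';' AS count_query
FROM sys.foreign_key_columns AS fkc
WHERE fkc.referenced_object_id = OBJECT_ID(N'dbo.Parent');  -- hypothetical parent
```

Each generated statement can then be run (or concatenated and executed with sp_executesql) to see which child tables still hold rows tied to that ID.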
I hope this script does not become infamous for the wrong reasons! Please use caution.
I had to help a team recreate everything in a database and test their scripts, but leave the roles and role grants in place. Basically, this meant that I could have scripted out the permissions and recreated the database, but I thought it would be easier and more re-runnable to just drop everything else except the permissions.
Caution here means making sure you have good backups beforehand, ensuring that you pick the right database, and double-checking everything.
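The linked script covers many object types; as a heavily simplified sketch of the idea, here is how one might generate (not execute) DROP statements for user tables, leaving roles and grants alone since those live at the database level:

```sql
-- Build DROP TABLE statements for every user table. PRINT and review first;
-- note that PRINT truncates long strings, and foreign keys would need to be
-- dropped before their referenced tables in a real run (omitted here).
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N';' + CHAR(13) + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;

PRINT @sql;   -- once reviewed: EXEC sys.sp_executesql @sql;
```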
One thing to remember is that cost in SQL Server is always an estimate. This is a number SQL Server calculates when considering multiple potential plans to determine which would be the best. But the number of rows it expects a given operation to return, or how many times that operation runs, can be off. All of that is based on statistics.
SQL Server doesn’t then go back and update the cost number later if those estimates turn out to be incorrect. So while we can use cost as an indicator of which query or operator we should focus on, don’t completely tunnel-vision on that one thing.
This kind of cost mismatch allows something to look awful on an execution plan but not actually be a problem, or (in the case of most user-defined functions prior to SQL Server 2019) vice versa.
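One easy way to see the gap for yourself is that the actual execution plan carries both numbers. A minimal sketch, with a hypothetical table:

```sql
-- SET STATISTICS XML ON returns the actual plan, which shows the estimated
-- row counts (the basis for cost) next to the actual row counts. The cost
-- figure itself is never revised after execution.
SET STATISTICS XML ON;

SELECT COUNT(*)
FROM dbo.Orders            -- hypothetical table
WHERE CustomerId = 42;     -- with stale statistics, the estimate here can
                           -- land far from the actual row count
SET STATISTICS XML OFF;
```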
No problem: we just added the correct IP range to the possible addresses for the vNet, added a new subnet, and moved the VMs over to the new subnet (which caused the VMs to reboot, but that was expected).
It turns out that BDC in SQL Server 2019 doesn’t like having the IPs changed for the AKS nodes. The problem stems from the fact that BDC generates its certificates from the IP address of the node, so if the IP address of the node changes (even if you are using DHCP for on-prem nodes and DHCP hands you a new IP address), your BDC won’t respond.
Read on for three possible solutions.
The idea is that I would have eight instances of an application (SQLQueryStress), each running on a dedicated scheduler and handling only the data destined for a specific partition / filegroup / data file, ~100 million rows at a time.
It was interesting seeing how to pull that off. Aaron also has some extended findings, so check it out.
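Aaron’s post has the full build-out; the skeleton of that kind of layout looks roughly like this (all names, types, and boundary values are hypothetical):

```sql
-- Eight filegroups, one per loader instance. RANGE RIGHT with seven
-- boundary values yields eight partitions of ~100 million rows each.
CREATE PARTITION FUNCTION pf_LoadRange (bigint)
AS RANGE RIGHT FOR VALUES
    (100000000, 200000000, 300000000, 400000000,
     500000000, 600000000, 700000000);

CREATE PARTITION SCHEME ps_LoadRange
AS PARTITION pf_LoadRange
TO (FG1, FG2, FG3, FG4, FG5, FG6, FG7, FG8);  -- filegroups created beforehand

CREATE TABLE dbo.LoadTarget
(
    Id      bigint       NOT NULL,
    Payload varchar(100) NOT NULL
) ON ps_LoadRange (Id);   -- each SQLQueryStress instance targets one range
```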