Press "Enter" to skip to content

April 11, 2019

K-Nearest Neighbors in Python

Hardik Jaroli shows how to use the k-Nearest Neighbors algorithm with scikit-learn:

K Nearest Neighbors is a classification algorithm that operates on a very simple principle. It is best shown through example! Imagine we had some imaginary data on Dogs and Horses, with heights and weights.

Training Algorithm:
1. Store all the Data

Prediction Algorithm:
1. Calculate the distance from x to all points in your data
2. Sort the points in your data by increasing distance from x
3. Predict the majority label of the “k” closest points
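
Hardik's post implements this in Python with scikit-learn. Just to make the prediction steps concrete in this site's native tongue, here is a rough T-SQL sketch of the same brute-force lookup, against a hypothetical dbo.Animals table of labeled heights and weights (k = 5):

DECLARE @height float = 1.2, @weight float = 300;  -- the new point x

SELECT TOP (1) nearest.Label
FROM
(
    SELECT TOP (5) a.Label  -- the k = 5 closest stored points
    FROM dbo.Animals AS a
    ORDER BY SQUARE(a.Height - @height) + SQUARE(a.Weight - @weight)  -- distance from x, ascending
) AS nearest
GROUP BY nearest.Label
ORDER BY COUNT(*) DESC;  -- predict the majority label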


Apache Flink 1.8.0 Released

Aljoscha Krettek announces the general availability of Apache Flink version 1.8.0:

SQL pattern detection with user-defined functions and aggregations: The support of the MATCH_RECOGNIZE clause has been extended by multiple features. The addition of user-defined functions allows for custom logic during pattern detection (FLINK-10597), while adding aggregations allows for more complex CEP definitions, such as the following (FLINK-7599).

There are several really nice changes. I pointed out this one to get people to vote up Itzik Ben-Gan’s feedback item to get row pattern recognition in SQL Server.
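
For anyone who hasn't seen MATCH_RECOGNIZE before, the new aggregation support looks roughly like this; the Ticker table, pattern, and thresholds below are invented for illustration, not lifted from the Flink docs:

SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(A.rowtime) AS start_time,
            AVG(A.price) AS avg_price       -- aggregation in a measure
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (A+ B)
        DEFINE
            A AS AVG(A.price) < 15          -- aggregation in a pattern definition
    ) AS T;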


Tracking Database Recovery with Extended Events

Jason Brimhall takes us through the extended events which show progress on database recovery:

Recently, I wrote a rewrite of my database recovery progress report script. That script touches on both the error log and some DMVs along with some fuzzy logic to join the data sets together. That script may not be the most complex script out there, but it is more complex than using the power of XE.

Database recovery (crash recovery) is a nerve-wracking situation under the wrong conditions. It can be as bad as a root canal and just as necessary to endure that pain at times. When the business is waiting on you waiting on the server to finish crash recovery, you feel nervous at best. If you can be of some use and provide some information back to the business, that anxiety dissipates and the business becomes calmer as well. While the previous script can help you get that information easily enough, I want to introduce the easiest method to capture that information currently available.

Click through for more information, as well as a couple of scripts.
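
As a taste of how simple the XE route can be, a minimal session might look something like the following. I'm assuming the database_recovery_progress_report event (Debug channel, SQL Server 2016 and later); Jason's post has the complete, tested version:

CREATE EVENT SESSION [RecoveryProgress] ON SERVER
    ADD EVENT sqlserver.database_recovery_progress_report
    ADD TARGET package0.event_file (SET filename = N'RecoveryProgress')
WITH (STARTUP_STATE = ON);  -- start with the instance, so it's already running during crash recovery
GO

ALTER EVENT SESSION [RecoveryProgress] ON SERVER STATE = START;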


Blob Storage for Database Backups

Randolph West has a couple of tools to help upload and download database backup files:

I wrote it because AzCopy was weak and inconsistent. It was fragile, needing constant attention and monitoring in case a journalling file got stuck. Also, AzCopy didn’t keep files in sync. If a file was deleted locally (as part of a cleanup to delete old backups), AzCopy was unable to delete files remotely, so it was messy to maintain files in Blob Storage containers. The uploader was written to keep files in sync, and not have to fuss with AzCopy.

The real value of this tool though, is being able to recover the latest backup files (full, differential and transaction logs where available) which are needed to recover from a catastrophic failure. Without any knowledge of the backups, just knowing the database name, it can parse the list of files in Azure, download the necessary ones to recover, and build a T-SQL script to restore them. Literally all you need to do is run the downloader, then run the restore script.

Randolph talks about how the state of AzCopy has changed and offers up some new guidance as well as tooling updates.
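
For reference, the restore script the downloader builds is a standard full / differential / log chain, something along these lines (database and file names hypothetical):

RESTORE DATABASE MyDb FROM DISK = N'C:\Restore\MyDb_FULL.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE MyDb FROM DISK = N'C:\Restore\MyDb_DIFF.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'C:\Restore\MyDb_LOG_01.trn' WITH NORECOVERY;
RESTORE DATABASE MyDb WITH RECOVERY;  -- bring the database online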


SQL on Linux with Local Repositories

Kevin Chant explains how you can use local repositories to make installing SQL Server on Linux easier when you have servers lacking Internet access:

Now, some of you more experienced Linux users probably know there is another way to do this. When I previously had to create a Hadoop environment I had to implement a local repository.

In other words, I copied the contents of the online repository locally and created my own repository on a server.

That way I could manage installing software onto multiple clients using the same secured files in a local location instead of online, which means I could install using the same method I would have used with internet access.

Kevin provides links and some notes on the process.
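
On yum-based distributions, the gist is to copy the packages locally, index them with createrepo, and drop a .repo file pointing at that directory; a minimal sketch with hypothetical paths (Kevin's links walk through the real process):

# /etc/yum.repos.d/mssql-server-local.repo
[mssql-server-local]
name=Local SQL Server repository
baseurl=file:///var/repo/mssql-server
enabled=1
gpgcheck=0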


Extended Event Duration Units of Measure

Emanuele Meazzo shows how we can find out whether that duration column is milliseconds or microseconds:

Even though I use Extended Events almost every day, I always forget the unit of measure of each duration counter, since they’re basically arbitrary: seconds, milliseconds, microseconds? Whatever, it depends on the dev who implemented that specific counter.

That’s why I’ve added to the Tsql.tech GitHub repository the following code, which extracts the descriptions from the XE DMVs in order to identify the unit of measure.

Click through for the script as well as the results against Azure SQL Database.
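
The core of the trick is that the XE metadata DMVs document each column, units included; a stripped-down version of the idea (Emanuele's repository has the full script) is:

SELECT
    c.object_name AS xe_object,
    c.name AS column_name,
    c.description  -- typically states milliseconds or microseconds
FROM sys.dm_xe_object_columns AS c
WHERE c.name = N'duration'
ORDER BY c.object_name;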


Fixing Transaction Logs Having Too Many Virtual Log Files

Max Vernon has a script to reduce the number of virtual log files you have in your transaction log:

The script shrinks the existing log file down to the smallest size possible, then grows it back to the prior size using a sensible growth increment. If the database is in simple recovery model, the script runs a checkpoint command to attempt to force transaction log truncation before growing the file. If the database is in full recovery model, one or more transaction log backups are taken to the NUL: device, in addition to the checkpoint operation, to allow the necessary transaction log truncation to occur. Ensure you take and test a full backup of all databases you run this script against prior to running the script. For databases in full recovery model, you’ll need to take a database backup after the script runs, since the prior log chain will be broken by the log backups that have been taken to the NUL: device.

Click through for an important warning as well as the script itself.
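
Boiled down, the core steps look like this; Max's script wraps them in the appropriate checks and growth-increment logic, and every name and size below is hypothetical:

USE MyDb;
GO

-- WARNING: the NUL: backup below breaks the existing log backup chain.
IF (SELECT recovery_model_desc FROM sys.databases WHERE name = N'MyDb') = N'SIMPLE'
    CHECKPOINT;                         -- encourage log truncation
ELSE
    BACKUP LOG MyDb TO DISK = N'NUL:';  -- force truncation without a real backup

DBCC SHRINKFILE (N'MyDb_log', 0);       -- shrink the log as small as possible
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 8GB, FILEGROWTH = 8GB);  -- regrow in a sensible increment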


Telling a Story When Data is Always Changing

Meagan Longoria explains how you can tell a story with data when the data is not static:

If you are a BI/report developer, you know this challenge well. You may follow all the guidelines: choose a good color palette, make visuals that highlight the important data points, get rid of clutter. But what happens when your data refreshes tomorrow or next month or next year? It’s much easier to make a chart with static data. You can format it so it communicates exactly the right message. But out here in Automated Reporting Land, that is not the end of our duties. We have to make some effort to accommodate future data values.

Meagan uses the Phone-a-Friend option and gets an interesting inversion of the normal solution.
