Press "Enter" to skip to content

Curated SQL Posts

Persistent Storage and Kubernetes

Chris Adkin explains the concepts behind persistent storage in containers:

A question that often crops up is “Can I use local storage?”, and the answer is “It depends”. Kubernetes is essentially a container scheduler at its most basic and fundamental level. The ‘Pod’ is the unit of scheduling; containers in the same pod share the same life cycle and always run on the same node. For stateless pods life is reasonably simple and straightforward; for stateful pods, life is a bit more nuanced. If for any reason a node fails, the pods that ran on that node have to be rescheduled to run on a working node, and their storage needs to follow them. This involves un-mounting the volume from the failed node and then mounting it on the node the pod(s) are rescheduled to run on. With basic vanilla hyper-converged storage, i.e. storage and compute in the same chassis, this will ultimately lead to scheduling problems. However, software-defined solutions exist that enable this kind of infrastructure to be turned into a storage cluster which allows state to follow pods around the cluster. Some people automatically associate HDFS with local storage, probably because “back in the day” the most cost-efficient way for Google to scale out its infrastructure was via commodity servers with local disks.

Read the whole thing.

Comments closed

Saving Time with Chocolatey

Craig Porteous explains how you can use Chocolatey to manage Windows packages:

In my own words, it’s the quickest way to get a machine up and running, ready to start working. Chocolatey is a package manager for Windows that allows you to script out the installation of (what feels like) almost anything. I was able to get up and running on a new laptop in just a few hours, while I worked on something else. Forget constant Next > Next > Next installation wizards. I had all the tools I needed to do my job, not just apps but PowerShell modules too, all installed automatically.

I’ve pulled packages but never created one. It’s definitely something that should go on my to-learn list given how often I build and rebuild machines and VMs.

Comments closed

K-Nearest Neighbors in Python

Hardik Jaroli shows how to use the k-Nearest Neighbors algorithm with scikit-learn:

K Nearest Neighbors is a classification algorithm that operates on a very simple principle. It is best shown through example! Imagine we had some imaginary data on Dogs and Horses, with heights and weights.

Training Algorithm:
1. Store all the Data

Prediction Algorithm:
1. Calculate the distance from x to all points in your data
2. Sort the points in your data by increasing distance from x
3. Predict the majority label of the “k” closest points
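
Hardik works through this in Python with scikit-learn; purely to make the three prediction steps concrete for the SQL-minded reader, here is a toy T-SQL sketch of the same logic, assuming a hypothetical dbo.Animals table with height, weight, and label columns:

DECLARE @k int = 5, @height float = 30.0, @weight float = 180.0;

WITH nearest AS
(
    SELECT TOP (@k)
           label,
           SQRT(SQUARE(height - @height) + SQUARE(weight - @weight)) AS distance
    FROM dbo.Animals   -- hypothetical training data: one row per labeled point
    ORDER BY distance  -- steps 1 and 2: distance from x to every point, sorted ascending
)
SELECT TOP (1) label   -- step 3: the majority label among the k closest points
FROM nearest
GROUP BY label
ORDER BY COUNT(*) DESC;

In practice you would let scikit-learn's KNeighborsClassifier handle this, along with feature scaling, but the steps map one-to-one.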

Comments closed

Apache Flink 1.8.0 Released

Aljoscha Krettek announces the general availability of Apache Flink version 1.8.0:

SQL pattern detection with user-defined functions and aggregations: The support of the MATCH_RECOGNIZE clause has been extended by multiple features. The addition of user-defined functions allows for custom logic during pattern detection (FLINK-10597), while adding aggregations allows for more complex CEP definitions, such as the following (FLINK-7599).

There are several really nice changes. I pointed out this one to get people to vote up Itzik Ben-Gan’s feedback item to get row pattern recognition in SQL Server.
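
To make the aggregation part concrete, here is a minimal Flink SQL sketch of my own (not from the release notes), assuming a hypothetical Ticker table with symbol, price, and rowtime columns:

SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(A.rowtime) AS start_time,
            LAST(A.rowtime)  AS end_time,
            AVG(A.price)     AS avg_price_in_drop  -- aggregation inside the pattern (new in 1.8)
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (A+ B)
        DEFINE
            A AS A.price < 10,   -- a run of "cheap" rows
            B AS B.price >= 10   -- ended by a recovery row
    ) AS T;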

Comments closed

Tracking Database Recovery with Extended Events

Jason Brimhall takes us through the extended events which show progress on database recovery:

Recently, I wrote a rewrite of my database recovery progress report script. That script touches on both the error log and some DMVs along with some fuzzy logic to join the data sets together. That script may not be the most complex script out there, but it is much more complex than using the power of XE.

Database recovery (crash recovery) is a nerve-wracking situation under the wrong conditions. It can be as bad as a root canal and just as necessary to endure that pain at times. When the business is waiting on you, and you are waiting on the server to finish crash recovery, you feel nervous at best. If you can be of some use and provide some information back to the business, that anxiety dissipates and the business becomes calmer as well. While the previous script can help you get that information easily enough, I want to introduce the easiest method to capture that information currently available.

Click through for more information, as well as a couple of scripts.
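
If you want to poke around before reading Jason's session definition, a quick sketch of my own is to ask the XE metadata which recovery-related events your build exposes (names and availability vary by SQL Server version):

SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
    JOIN sys.dm_xe_packages AS p
        ON p.guid = o.package_guid
WHERE o.object_type = N'event'
      AND (o.name LIKE N'%recovery%' OR o.description LIKE N'%recovery%')
ORDER BY o.name;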

Comments closed

Blob Storage for Database Backups

Randolph West has a couple of tools to help upload and download database backup files:

I wrote it because AzCopy was weak and inconsistent. It was fragile, needing constant attention and monitoring in case a journalling file got stuck. Also, AzCopy didn’t keep files in sync. If a file was deleted locally (as part of a cleanup to delete old backups), AzCopy was unable to delete files remotely, so it was messy to maintain files in Blob Storage containers. The uploader was written to keep files in sync and not have to fuss with AzCopy.

The real value of this tool though, is being able to recover the latest backup files (full, differential and transaction logs where available) which are needed to recover from a catastrophic failure. Without any knowledge of the backups, just knowing the database name, it can parse the list of files in Azure, download the necessary ones to recover, and build a T-SQL script to restore them. Literally all you need to do is run the downloader, then run the restore script.

Randolph talks about how the state of AzCopy has changed and offers up some new guidance as well as tooling updates.
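
For context, the restore script the downloader generates ends up being a standard restore-from-URL sequence. A hand-written sketch of that shape, assuming a SAS-based credential for the container already exists and using made-up file names:

-- full, then differential, then logs; only the final restore comes online
RESTORE DATABASE [SalesDB]
FROM URL = N'https://myaccount.blob.core.windows.net/backups/SalesDB_FULL.bak'
WITH NORECOVERY, REPLACE;

RESTORE DATABASE [SalesDB]
FROM URL = N'https://myaccount.blob.core.windows.net/backups/SalesDB_DIFF.bak'
WITH NORECOVERY;

RESTORE LOG [SalesDB]
FROM URL = N'https://myaccount.blob.core.windows.net/backups/SalesDB_LOG_01.trn'
WITH NORECOVERY;

RESTORE LOG [SalesDB]
FROM URL = N'https://myaccount.blob.core.windows.net/backups/SalesDB_LOG_02.trn'
WITH RECOVERY;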

Comments closed

SQL on Linux with Local Repositories

Kevin Chant explains how you can use local repositories to make installing SQL Server on Linux easier when you have servers lacking Internet access:

Now, some of you more experienced Linux users probably know there is another way to do this. When I previously had to create a Hadoop environment I had to implement a local repository.

In other words, I copied the contents of the online repository locally and created my own repository on a server.

That way I could manage installing software onto multiple clients using the same secured files in a local location instead of online, which means I could install using the same method I would have done with internet access.

Kevin provides links and some notes on the process.

Comments closed

Extended Event Duration Units of Measure

Emanuele Meazzo shows how we can find out whether that duration column is milliseconds or microseconds:

Even though I use Extended Events almost every day, I always forget the unit of measure of each duration counter, since they’re basically arbitrary: seconds, milliseconds, microseconds? Whatever, it depends on the dev that implemented that specific counter.

That’s why I’ve added to the Tsql.tech GitHub repository the following code, which extracts the descriptions from the XE DMVs in order to identify the unit of measure.

Click through for the script as well as the results against Azure SQL Database.
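
The idea, in a minimal sketch of my own rather than Emanuele's full script, is that the XE metadata spells out the unit in each column's description, so you can simply ask for it:

SELECT c.object_name AS event_name,
       c.name        AS column_name,
       c.description  -- the unit of measure (milliseconds vs. microseconds) is stated here
FROM sys.dm_xe_object_columns AS c
WHERE c.column_type = N'data'
      AND c.name LIKE N'%duration%'
ORDER BY c.object_name, c.name;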

Comments closed

Fixing Transaction Logs Having Too Many Virtual Log Files

Max Vernon has a script to reduce the number of virtual log files you have in your transaction log:

The script shrinks the existing log file down to the smallest size possible, then grows it back to the prior size using a sensible growth increment. If the database is in simple recovery model, the script runs a checkpoint command to attempt to force transaction log truncation before growing the file. If the database is in full recovery model, one or more transaction log backups are taken to the NUL: device, in addition to the checkpoint operation, to allow the necessary transaction log truncation to occur. Ensure you take and test a full backup of all databases you run this script against, prior to running the script. For databases in full recovery model, you’ll need to take a database backup after the script runs since the prior log chain will be broken by the log backups that have been taken to the NUL: device.

Click through for an important warning as well as the script itself.
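
Max's script handles the recovery-model checks and the regrowth loop for you; the core pattern underneath it, as a bare-bones sketch with hypothetical names and sizes, is shrink then regrow:

USE [SalesDB];

CHECKPOINT;  -- in simple recovery, encourages log truncation so the shrink can succeed

DBCC SHRINKFILE (N'SalesDB_log', 1);  -- shrink the log file as small as possible

-- grow it back to the intended size; fewer, larger growth steps mean fewer VLFs
ALTER DATABASE [SalesDB]
    MODIFY FILE (NAME = N'SalesDB_log', SIZE = 8192MB);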

Comments closed