Press "Enter" to skip to content

Category: Storage

Scaling HDFS to an Exabyte

Konstantin Shvachko, et al., explain some of the changes to the Hadoop Distributed File System needed to scale to one exabyte of data:

LinkedIn runs its big data analytics on Hadoop. During the last five years, the analytics infrastructure has experienced tremendous growth, almost doubling every year in data size, compute workloads, and in all other dimensions. It recently reached two important milestones.

1. LinkedIn now stores 1 exabyte of total data across all Hadoop clusters.

2. Our largest 10,000-node cluster stores 500 PB of data. It maintains 1 billion objects (directories, files, and blocks) on a single NameNode serving RPCs with an average latency under 10 milliseconds, making it one of the largest (if not the largest) Hadoop cluster in the industry.

From the early days of LinkedIn, Apache Hadoop was the basis of our analytics infrastructure. Many teams assisted in this effort to make Hadoop our canonical big data platform.

Read on for the different techniques they’ve used, as well as the code changes implemented in HDFS to support data at this scale.

One…Million IO Requests

Sean Gallardy wins the jackpot:

If, somehow, you’ve managed to see this error in your errorlog then congratulations, you’ve won an instance of SQL Server that probably won’t be doing much.

I found out about this message a few months ago, but it has been in the product for years and I went this long without ever even knowing it existed (congrats me!) until I was asked about it and coincidentally ended up finding it in an errorlog the same week. Clearly, I have too much fun packed into my weeks. I asked around, only one other person had ever found this in an errorlog before… that’s either impressive, depressing, or some perfect quantity of both – mellow it out to a smooth melancholy.

Click through to see more information about the 1,000,000 I/O requests error message and when you might find it.

Using Logic Apps to Send Multiple Attachments

Rayis Imayev has a project:

In my real project, I need to build a Logic App to send email messages with a set of files attached from my Azure Storage Account. I was able to find similar examples from other Power Platform developers; however, they lacked a critical part that I needed: my set of files had to be dynamic: whether 2 files or 102 files, the Logic App should be able to support this.

So, here, I would like to share my brief journey in creating such an Azure Logic App:

Read on to see how Rayis solved this.
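
Rayis builds the dynamic attachment list inside the Logic App itself, so treat the following only as an illustrative sketch of the underlying idea in Python with the azure-storage-blob SDK: enumerate whatever blobs happen to exist and turn each one into an attachment entry, whether that is 2 files or 102. The connection string and container name are placeholders, not anything from Rayis’s post.

```python
# Illustrative sketch only -- the post implements this as a Logic App, not Python.
# The idea: list whatever blobs exist right now and build one attachment entry
# per blob, so the attachment count is dynamic.
import base64
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name.
container = ContainerClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    container_name="email-attachments",
)

attachments = []
for blob in container.list_blobs():
    content = container.download_blob(blob.name).readall()
    attachments.append({
        "Name": blob.name,
        "ContentBytes": base64.b64encode(content).decode("ascii"),
    })

print(f"Prepared {len(attachments)} attachment(s) for the outgoing message")
```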

Deploying a Storage Solution to a Kubernetes Cluster

Chris Adkin continues a series:

Before we dive into deploying a storage solution to our Kubernetes cluster, we need to understand the basics of storage in the world of Kubernetes, which can appear to be both exotic and mysterious to the uninitiated. To dispel some confusion around Kubernetes and storage, the storage IO path is exactly the same as that with common garden vanilla variety Unix or Linux. The Kubernetes storage ecosystem introduces two extra things we need to concern ourselves with above and beyond conventional Unix/Linux storage, firstly there are some extra layers of abstraction between the physical storage and filesystems that pods use, what I like to refer to as . . .

Read the whole thing. And that was a particularly mean cut-off point on my part, if I do say so.
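
Chris covers the real deployment details, but if it helps to hang concrete names on those extra layers of abstraction before you click through: a pod mounts a PersistentVolumeClaim, the claim binds to a PersistentVolume, and the volume is typically provisioned on demand by a StorageClass. Here is a minimal sketch of that request via the Python kubernetes client; the storage class name and size are assumptions of mine, not from Chris’s series.

```python
# Minimal sketch of the Kubernetes storage abstraction: a pod never mounts a
# disk directly -- it mounts a PersistentVolumeClaim, which binds to a
# PersistentVolume, usually provisioned on demand by a StorageClass.
# The storage class name and size below are assumed for illustration.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="sql-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="my-block-storage",  # assumed StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```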

Splitting SQL Server Drives on Modern SANs

Chris Taylor checks up on some older advice:

Back in the day, “when I was a lad”, the recommendation for SQL Server was to split your data, logs and tempdb files onto separate drives/LUNs to get the most out of your storage. Jump forward to 2021: is this still relevant, and should I be splitting my SQL Server drives onto separate LUNs on new SAN storage? It is a question often asked not just by customers but also by their third-party managed service providers and hosting companies. The question can also be along the lines of, “Why can’t we just put everything on C:\ because the backend is all on the same LUN?” This is slightly different, as they’re questioning the drive lettering more than creating separate LUNs, but it is still relevant to this topic.

Click through to learn what Chris has found.

Enabling Multiple Lifecycle Policies on S3

Sheldon Hull has a hoarding problem to solve:

In my case, I’ve run into 50 TB of old backups due to tooling issues that prevented cleanup from being successful. The backup tooling stored a SQLite database in one subdirectory and the actual backups in another.

I preferred at this point to only perform the lifecycle cleanup on the backup files, while leaving the sqlite file alone.

Click through to see how to do this using PowerShell.
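
Sheldon does this with PowerShell, so the snippet below is only a rough sketch of the same idea expressed with boto3: scope the expiration rule to the backup prefix so the directory holding the SQLite database is never touched. The bucket name, prefixes, and 30-day retention are assumptions for illustration.

```python
# Rough sketch of the idea in boto3 (the post itself uses PowerShell):
# attach the expiration rule only to the backup prefix, leaving the prefix
# that holds the SQLite catalog untouched. Names and retention are assumed.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},   # only the backup files
                "Expiration": {"Days": 30},
            },
            # No rule targets the catalog prefix, so the SQLite file is kept.
        ]
    },
)
```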

Things to Know about Storage

Monica Rathbun gives us a primer on storage concepts:

“One Gerbil, Two Gerbils or Three Gerbils?” is a common DBA joke about server and storage performance. No matter how many gerbils power your storage, you need to know what type they are and the power that they provide. Storage is not about gerbils; it is about IOPS, bandwidth, latency, and tiers.

As a DBA it is important for you to understand and know what kind of storage is attached to your servers and how it is handling your data. It is not important to master everything about it, but it is very advantageous to be able to talk to your storage admins or “Gerbil CoLo, LLC” provider intelligently, especially when you experience performance issues. Here is a list of things I encourage you to know and ask.

Click through for the cheat sheet.
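
One back-of-the-envelope relationship worth keeping in your pocket for those conversations (my addition, not from Monica’s post): at a fixed I/O size, throughput is roughly IOPS multiplied by the I/O size. The 8 KB size below matches a SQL Server page; the IOPS figure is made up.

```python
# Rough relationship between two of the terms on Monica's list:
# throughput ~= IOPS * I/O size (at a fixed I/O size).
# 8 KB matches a SQL Server page; the 20,000 IOPS figure is invented.

def throughput_mb_per_sec(iops: int, io_size_kb: float) -> float:
    """Approximate throughput implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size_kb / 1024

print(throughput_mb_per_sec(20_000, 8))  # ~156 MB/s
```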

Saving Money on Backups to Azure Blob Storage

John McCormack has a few tips for saving some cash:

You have 5 databases on a SQL Server Instance. You take daily full backups of each database on your instance. You also take log backups every 15 minutes as each database is in full recovery mode. This means in 1 week, you will have 35 full backups and 3,360 transaction log backups. This multiplies to 1,820 full and 174,720 t-log backups over 52 weeks. Multiply this for 7 years or more and the costs can get very expensive.

Click through to see how you can save a considerable amount with a bit of planning.
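
John’s arithmetic is easy to reproduce, and a few lines like the sketch below show how quickly the file counts balloon before you even get to pricing (the per-GB rates would be assumptions, so this only counts the backups):

```python
# Reproducing the backup-count arithmetic from the quoted paragraph:
# 5 databases, one full backup per day each, plus a log backup every
# 15 minutes per database.
databases = 5
full_per_week = databases * 7                    # 35 full backups per week
log_per_week = databases * (60 // 15) * 24 * 7   # 3,360 log backups per week

weeks = 52
print(full_per_week * weeks)  # 1,820 full backups per year
print(log_per_week * weeks)   # 174,720 log backups per year
```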

Disk Performance Testing in 2020

Glenn Berry gives us some CrystalDiskMark results:

Recently, I built a new AMD mainstream desktop system with some existing parts that I had available. This system has six storage drives, with various levels of technology and performance. I thought it would be interesting to run CrystalDiskMark 7.0.0 on each of these drives. So, here are some quick comparative CrystalDiskMark results in 2020 from those six drives.

This system has a Gigabyte B550 AORUS MASTER motherboard, which is actually a great choice for a B550 motherboard, especially if you want extra storage flexibility. AMD B550 motherboards only have PCIe 4.0 support from the CPU, not from the B550 chipset.

Glenn gets some outstanding performance from one drive and reminds us once more of how beautiful SSD and M.2 drives are.

AzureTableStor: Table Storage in R

Hong Ooi announces a new package on CRAN:

I’m pleased to announce that the AzureTableStor package, providing a simple yet powerful interface to the Azure table storage service, is now on CRAN. This is something that many people have requested since the initial release of the AzureR packages nearly two years ago.

Azure table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because table storage is schemaless, it’s easy to adapt your data as the needs of your application evolve. Access to table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.

If that sounds like a fit for you, check out the package.
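
AzureTableStor itself is an R package, but if you just want a feel for the schemaless key/attribute model it wraps, the same idea looks roughly like this through the Python azure-data-tables client; every name below is made up for illustration.

```python
# Illustration of the schemaless key/attribute model behind Azure table storage,
# using the Python azure-data-tables client rather than the AzureTableStor R
# package from the post. Table and connection details are placeholders.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string(
    "<storage-account-connection-string>"
)
table = service.create_table_if_not_exists("people")

# Entities only need a PartitionKey and RowKey; the remaining attributes are
# free-form, and two rows in the same table can carry different attributes.
table.create_entity({"PartitionKey": "emp", "RowKey": "1", "name": "Ada", "team": "data"})
table.create_entity({"PartitionKey": "emp", "RowKey": "2", "name": "Grace", "laptops": 2})
```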
