Press "Enter" to skip to content

Category: Storage

Common SQL Server Mistakes: Default Auto-Growth

Hemantgiri Goswami takes a look at auto-growth:

Auto Growth is a feature that allows database files (primary, secondary, and log) to expand automatically when they become full, without manual intervention.

The Auto Growth feature is handy when we do not want to increase the size of database files manually. There are two ways to set auto growth: using SQL Server Management Studio (SSMS hereafter) or T-SQL. Auto Growth can be configured in percent or in megabytes.

Auto-growth isn’t a problem on its own, though growth sizes, especially in older versions of SQL Server, were far too low for medium- and large-sized databases.

I don’t particularly like the 2.5 MB example Hemantgiri shows. I have a quick rule of thumb: 64 MB for small databases, 256-512 MB for medium-sized databases, and 1 GB for large databases (assuming my underlying disk is fast). This limits the number of auto-growth events and, for log files in particular, keeps virtual log file counts more reasonable.
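For reference, here is what setting a fixed-size increment looks like in T-SQL; a minimal sketch, with placeholder database and logical file names:

-- Set a fixed 256 MB growth increment on a data file and a log file.
-- [MyDatabase], MyDatabase_Data, and MyDatabase_Log are placeholder names.
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 256MB);

ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_Log', FILEGROWTH = 256MB);

-- Verify the settings; growth is in 8 KB pages when is_percent_growth = 0.
SELECT name, type_desc, is_percent_growth, growth
FROM [MyDatabase].sys.database_files;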


Tiered Storage in Apache Kafka

Matthew de Detrich explains the value behind tiered storage in Apache Kafka:

Tiered Storage is arguably one of the most sought-after features of Kafka 3.6, allowing Kafka’s core data to be stored in other locations, such as object storage, in addition to hard disks, in a transparent manner, without any changes to Kafka’s producers or consumers. The Kafka brokers control whether the data is stored on local disks (fast but expensive and limited) or in alternative storage locations, such as Amazon S3. When Tiered Storage is properly configured, you get the best of both worlds: recent data is stored on local fast disks (as it is currently), and older, less frequently accessed data can be stored elsewhere, where it’s cheaper and space requirements are less of a concern (sometimes unlimited!).

Read on to learn more about the official version of tiered storage, as well as a forward port of two prior implementation attempts to Kafka version 3.3.
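As a rough sketch of what enabling it involves (the property names are from KIP-405 as shipped in Kafka 3.6; the actual remote storage manager plugin, such as an S3-backed one, is configured separately and is an assumption here):

# Broker configuration (server.properties): turn on the tiered storage subsystem.
# A RemoteStorageManager plugin must also be configured via
# remote.log.storage.manager.class.name (plugin-specific).
remote.log.storage.system.enable=true

# Per-topic: opt in and keep only recent segments on local disk,
# e.g., 1 hour locally, 7 days in total:
# kafka-configs.sh --bootstrap-server localhost:9092 --alter \
#   --entity-type topics --entity-name my-topic \
#   --add-config remote.storage.enable=true,local.retention.ms=3600000,retention.ms=604800000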


Azure Blob Storage Operating System Error 86

Jose Manuel Jurado Diaz 86’d that option:

Today, I worked on a service request where our customer got the following error message: Cannot open backup device 'https://XXX.blob.core.windows.net/NNN/YYY.bak'. Operating system error 86 (The specified network password is not correct.). RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3201). Below, I would like to share some details about why this issue occurs and the activities done to resolve it.

Read on to get a better understanding of what this error actually means and how you can fix it.
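For general context (this is not necessarily the fix from the article): backup and restore to a URL authenticate through a credential named after the container, so one common repair is recreating that credential with a fresh SAS token. A sketch, reusing the placeholder names from the quote:

-- The SECRET must be the SAS token without the leading '?'.
IF EXISTS (SELECT 1 FROM sys.credentials
           WHERE name = 'https://XXX.blob.core.windows.net/NNN')
    DROP CREDENTIAL [https://XXX.blob.core.windows.net/NNN];

CREATE CREDENTIAL [https://XXX.blob.core.windows.net/NNN]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = 'sv=...&ss=b&sig=...';  -- placeholder SAS token

RESTORE HEADERONLY
    FROM URL = 'https://XXX.blob.core.windows.net/NNN/YYY.bak';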


Choosing a Data Serialization Format

Rathish Kumar says more than “JSON and Parquet”:

In the world of software, we often work with different types of data like lists, tables, and more. These data structures are designed to be fast and efficient when our computer programs use them. However, sometimes we need to move this data out of our computer’s memory, like when we want to save it to a file or send it over the internet. To do this, we have to change the data into a special format made up of 0s and 1s, which is quite different from those in-memory data structures. This process is what we call encoding or serialization.

In this article, we’ll explore the world of encoding and decoding, which is the reverse process of turning that special format back into usable data. We’ll also take a look at different ways to do encoding and decoding, as well as important things to think about when choosing the right method for your software projects.

Sadly, ORC (Optimized Row Columnar) doesn’t make the cut, as Parquet ended up taking over that market.
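To make the encode/decode round trip concrete in this blog’s usual dialect (a T-SQL sketch, not an example from the article, which surveys cross-language formats):

-- Encode: serialize a rowset into a JSON string.
DECLARE @encoded nvarchar(max) = (
    SELECT v.id, v.name
    FROM (VALUES (1, N'alpha'), (2, N'beta')) AS v(id, name)
    FOR JSON PATH
);

-- Decode: turn the JSON string back into typed rows.
SELECT id, name
FROM OPENJSON(@encoded)
     WITH (id int '$.id', name nvarchar(50) '$.name');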


Troubleshooting a Downed SQL Server

Kevin Hill takes us through an issue:

This is a stand-alone bare-metal server located at a hosting facility with some local storage as well as NAS storage. SQL Server 2016 Standard Edition, 8 cores, 128 GB RAM. The primary database is nearing 1 TB and has multiple filegroups (Active, Archive, Indexes, etc.). This database and the apps that touch it ARE the company.

Read on for Kevin’s process, which was a solid bit of troubleshooting.
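Kevin’s post has the real diagnosis; as a generic first look when an instance responds but a database is suspect, something like the following is a common starting point (a sketch, not Kevin’s actual process):

-- Any databases not ONLINE (RECOVERY_PENDING, SUSPECT, RESTORING, ...)?
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> N'ONLINE';

-- Recent errors in the current SQL Server error log.
EXEC sys.xp_readerrorlog 0, 1, N'error';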


Storing Log Analytics Queries in Azure Blob Storage

Gilbert Quevauvilliers wants some long-term storage:

Following on in my series, in this blog post I am going to demonstrate how to store Log Analytics Queries in Blob Storage.

This allows me to store the Power BI queries externally from Log Analytics and to have an easy way to get the data into my Fabric Lakehouse in later steps. To do this, I am going to use a Logic App in Azure.

In this series I am going to show you all the steps I did to have the successful outcome I had with my client.

Read on to see what Gilbert used for the task.


Persisting Data for SQL Server on Docker Swarm

Andrew Pruski saves the day, or at least the data:

In my last couple of blog posts (here and here) I talked about how to get SQL Server running in Docker Swarm. But there is one big (and show-stopping) issue that I have not covered. How do we persist data for SQL Server in Docker Swarm?

Docker Swarm, like Kubernetes, has no native method to persist data across nodes…so we need another option, and one of the options available to us is Portworx.

So how can we use Portworx to persist SQL Server databases in the event of a node failure in Docker Swarm?

Read on to find out how.
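As a hedged sketch of the shape of the idea (volume name, size, and image tag are placeholders; pxd is Portworx’s Docker volume driver): create a replicated Portworx volume, then mount it at SQL Server’s data path so the service can be rescheduled onto a surviving node.

# Create a Portworx volume replicated across 3 nodes.
docker volume create -d pxd --name sqldata --opt size=20 --opt repl=3

# Run SQL Server as a swarm service with the volume mounted at /var/opt/mssql.
docker service create --name sqlserver \
  --mount type=volume,source=sqldata,destination=/var/opt/mssql,volume-driver=pxd \
  -e ACCEPT_EULA=Y -e MSSQL_SA_PASSWORD='Str0ngP@ssw0rd!' \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest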


Immutable Blobs in Azure Storage

Khushbu Gandhi won’t change:

A lot of my customers have a business requirement that –

  • once a document is written in storage account, nobody shall be able to modify or delete it, including the administrators
    AND / OR
  • audit data can be written only once, and nobody shall be able to alter them.

These kinds of business requirements are called WORM (Write Once, Read Many) and are common in various industries, such as ISVs, financial services, or healthcare. Have you run into these mission-critical requirements? If yes, then immutable storage would be the solution.

Read on to see how this works, using retention policies. But as Khushbu notes, if you set an absurd retention policy (up to 400 years), you can’t change the data while that policy is active.
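For reference, a time-based retention policy can be applied from the Azure CLI; a minimal sketch, with placeholder resource group, account, and container names, and the period in days:

# Create a 7-day time-based retention policy on a container.
az storage container immutability-policy create \
    --resource-group myresourcegroup \
    --account-name mystorageacct \
    --container-name auditlogs \
    --period 7

# Locking makes it a true WORM policy: it can then be extended but never
# shortened or deleted. <etag> comes from the create step's output.
az storage container immutability-policy lock \
    --resource-group myresourcegroup \
    --account-name mystorageacct \
    --container-name auditlogs \
    --if-match "<etag>"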


Putting tempdb on an Azure VM Temp Disk

Daniel Hutmacher uses a temp disk for a temp database:

Almost all Azure virtual machine sizes come with a temporary disk. The temporary disk is a locally attached SSD drive that comes with a couple of desirable features if you’re installing a SQL Server on your VM:

  • Because it is locally attached, it has lower latency than regular disks.
  • IO and storage are not billed like regular storage.

As the name implies, the temporary disk is not persistent, meaning that it will be wiped if you shut down your VM or if the VM moves to another VM host (as part of maintenance or troubleshooting). For that reason, we never want to put anything on the temporary disk that we need to keep.

I’d say this was a lot more popular several years ago, back when spinning disk was the default for Azure storage. There can still be benefits from doing this, though if you’re using Premium storage with high IOPS, the biggest remaining benefit is around latency.
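For reference, the move itself is just a file relocation plus an instance restart; a minimal T-SQL sketch, assuming the temporary disk is D: (the usual default on Windows VMs) and that a D:\tempdb folder is recreated at startup, for example by a scheduled task or the SQL IaaS agent:

-- Point tempdb's files at the local temporary disk (takes effect on restart).
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'D:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'D:\tempdb\templog.ldf');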
