Press "Enter" to skip to content

Category: Storage

Mounting Data Lake Storage from a Spark Pool

Kamil Nowinski runs into some trouble:

Last weekend, I played a bit with Azure Synapse, mounting Azure Data Lake Storage (ADLS) Gen2 in a Synapse notebook with the API in the Microsoft Spark Utilities (MSSparkUtils) package. I wanted to do a simple test, hence I followed the documentation from Microsoft: How to use file mount/unmount API in Synapse.
Having an ADLS Account already created in a subscription – should be easy peasy, right?

Read on to understand how things can be a little more complicated than they seem, and more frustrating once you see the cause of the problem.
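For context, the heart of the API in question is a single mount call. Here's a minimal sketch of what the post exercises, assuming a linked service already points at the storage account (the container, account, mount point, and linked service names are all placeholders):

    # Minimal sketch of the MSSparkUtils mount API in a Synapse notebook.
    # All names below are placeholders.
    from notebookutils import mssparkutils

    mssparkutils.fs.mount(
        "abfss://mycontainer@myadlsaccount.dfs.core.windows.net",
        "/mydata",
        {"linkedService": "MyADLSLinkedService"}
    )

    # Mounts are scoped to the job, so reads go through the synfs scheme:
    job_id = mssparkutils.env.getJobId()
    df = spark.read.load(f"synfs:/{job_id}/mydata/myFile.csv", format="csv")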


Don’t Store Files in the Database

Josh Darnell provides timeless advice:

As Deborah’s invite post suggests, this is a “that one time at that client” story. I was working at a consulting firm, and we had written an app for a particular client. Part of this application’s workflow involved users uploading images alongside some other information. These were not particularly large images in the grand scheme of things – they were taken by a microscope, and were a few kilobytes each, maybe.

However, this app had been in use for a long time. And as you might have guessed from the title of this post, each of these images was stored in a single table in the database that backed this application.

Yeah, that’ll be a problem… Read on for some recommendations on how to avoid the issue. One thing I would add is FileTable, which came out in SQL Server 2012. In that case, the files are actually stored on disk but are queryable via T-SQL. It introduces its own set of problems, but I do have some fond memories of using FileTable in the past.
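For the curious, a minimal FileTable sketch looks like this, assuming FILESTREAM is already enabled at the instance level and the database has a FILESTREAM filegroup (the database, table, and directory names are illustrative):

    -- Illustrative only: assumes FILESTREAM is enabled on the instance
    -- and the database already has a FILESTREAM filegroup.
    ALTER DATABASE ImageDB
        SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'ImageDB');
    GO

    CREATE TABLE dbo.MicroscopeImages AS FILETABLE
        WITH (FILETABLE_DIRECTORY = 'MicroscopeImages');
    GO

    -- Files copied into the Windows share land on disk, yet remain queryable:
    SELECT name, file_type, cached_file_size
    FROM dbo.MicroscopeImages;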


Enumerating Azure Storage Replication Types

Arun Sirpal has a list:

Storage Accounts are pretty much integrated into so many different designs in Azure – whether you are using Azure Synapse, a third-party product like Snowflake, or event streaming designs, we need them.

When you create a storage account, there are five different replication types you should know about: LRS, ZRS, GRS, RA-GRS, and GZRS. Lots of abbreviations here; let’s explain further.

Read on for the explanation.
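The replication type is simply the SKU you pick when creating (or later updating) the account; a quick sketch with the Azure CLI, using placeholder names:

    # Placeholder names; the --sku flag carries the replication type.
    az storage account create \
        --name mystorageacct \
        --resource-group my-rg \
        --location eastus2 \
        --sku Standard_GZRS   # or Standard_LRS / Standard_ZRS / Standard_GRS / Standard_RAGRS

    # Switching replication later is an update to the same property
    # (some conversions have extra requirements):
    az storage account update \
        --name mystorageacct \
        --resource-group my-rg \
        --sku Standard_ZRS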


Using S3 Object Storage in MinIO with SQL Server 2022

Anthony Nocentino takes us through an example of integrating SQL Server with MinIO via its S3-compatible storage:

In this post, I will walk you through how to set up MinIO, so you can use it to work with SQL Server 2022’s S3 object integrations. Working with S3 and SQL Server requires a valid and trusted TLS certificate. This can be a pain for some users and environments, so I’m writing this post so you can hit the ground running with this new feature set in SQL Server 2022. The certificate we’re working with here is self-signed. You could get a real certificate for your environment, and that’s encouraged. But this walk-through intends to get you up and running fast so that you can test out SQL Server’s S3 object integrations. We’re using MinIO’s free GNU AGPL v3 edition and running it in a Docker container for our S3-compatible object storage, and SQL Server 2022 CTP 2.0, which is also running in a container.

Click through for the demo, in which Anthony sets everything up and then backs up a database in SQL Server 2022 to MinIO.
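Once MinIO is up with a trusted certificate, the T-SQL side boils down to a credential plus a backup to URL. A hedged sketch, with placeholder endpoint, bucket, and keys:

    -- Placeholder endpoint, bucket, and keys. The credential name must match
    -- the URL, and the secret takes the form 'accesskey:secretkey'.
    CREATE CREDENTIAL [s3://10.0.0.10:9000/sqlbackups]
    WITH IDENTITY = 'S3 Access Key',
         SECRET   = 'minioadmin:minio-secret-key';
    GO

    BACKUP DATABASE AdventureWorks2019
    TO URL = 's3://10.0.0.10:9000/sqlbackups/AdventureWorks2019.bak'
    WITH COMPRESSION, STATS = 10;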


The Benefits of Parquet

Maria Zakourdaev explains why the Parquet file format is so useful:

Parquet files organize data in columns, while CSV files organize data in rows.

Columnar storage allows much better compression, so Parquet data files need less storage: 1 TB of CSV files can be converted into 100 GB of Parquet files, which can be a huge money saver when cloud storage is used. This also means that scanning a Parquet file is much faster than scanning CSV files: less data is scanned, there is no need to load unneeded columns into memory, and aggregations run faster. Parquet files contain both data and metadata – information about the data’s schema and structure. When you load the file, having that metadata helps the querying tool define proper data types.

Click through for an example of when Parquet makes sense. It’s not the best format for everything—it’s a columnar file format, so writes are typically slower than row-store formats like CSV or Avro—but it and ORC are outstanding for analytical processing, not least because of the metadata these formats contain.
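To make the column-pruning point concrete, here’s a small sketch with pandas (file and column names are made up; requires pyarrow installed):

    # File and column names are made up; requires pandas with pyarrow.
    import pandas as pd

    df = pd.read_csv("events.csv")
    df.to_parquet("events.parquet")  # columnar, compressed, schema stored in the file

    # Reading back only the columns you need skips the rest of the file;
    # a CSV reader would still have to parse every row in full.
    subset = pd.read_parquet("events.parquet", columns=["user_id", "event_time"])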


Seeding AG Replicas from Snapshots in SQL Server 2022

Anthony Nocentino is excited about using storage snapshots in SQL Server 2022:

But what if I told you that you could seed your Availability Group from a storage-based snapshot and that the re-seeding process can be nearly instantaneous?

In addition to saving you time, this process saves your database systems from the CPU, network, and disk consumption that comes with direct seeding and using backups and restores to seed.

The process described in this post is implemented on Pure Storage’s FlashArray and works in cloud scenarios on Pure’s Cloud Block Store.

Click through to see how.
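The SQL Server 2022 half of this pattern is T-SQL snapshot backup; the array snapshot itself happens outside SQL Server with Pure’s own tooling. A rough sketch, with placeholder database and path names:

    -- Freeze write I/O so the storage snapshot is consistent.
    ALTER DATABASE MyAGDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

    -- Take the FlashArray volume snapshot here, using the array's tooling.

    -- Record a metadata-only backup; this also resumes the database.
    BACKUP DATABASE MyAGDatabase
    TO DISK = 'D:\Backup\MyAGDatabase.bkm'
    WITH METADATA_ONLY;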


Azure Shared Disk with Zone-Redundant Storage

Dave Bermingham runs some tests:

What makes this interesting is that you can now build shared-storage-based failover cluster instances that span Availability Zones (AZs). With cluster nodes residing in different AZs, users can now qualify for the 99.99% availability SLA. Prior to support for ZRS, Azure Shared Disks only supported Locally Redundant Storage (LRS), limiting cluster deployments to a single AZ and leaving users susceptible to outages should an AZ go offline.

There are, however, a few limitations to be aware of when deploying an Azure Shared Disk with ZRS.

Dave also checks how the performance of ZRS shared disks compares to locally-redundant storage.
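For reference, a zone-redundant shared disk is just a ZRS disk SKU plus the max-shares setting; a sketch with placeholder names (Premium_ZRS availability varies by region):

    # Placeholder names; check regional availability for ZRS disks.
    az disk create \
        --name shared-data-disk \
        --resource-group my-rg \
        --location eastus2 \
        --size-gb 1024 \
        --sku Premium_ZRS \
        --max-shares 2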


Power BI Dataflows and Storage Considerations

Teo Lachev has some things for us to consider:

Over the past few years, the BI industry has come up with new file formats, such as Parquet, ORC, and Avro, which are widely used today. To facilitate its vision for cross-industry data integration, Microsoft introduced the Common Data Model (CDM) and CDM Folders a few years ago. Power BI dataflows output CSV files to CDM folders, and each table is saved in its own folder. You can bring your own data lake to directly access these files. If you do so, you’ll find the following folder structure:

Although accessing the dataflow files might open all sorts of data integration scenarios, here are some things to watch for concerning the dataflow output:

Read on for five things.
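For orientation, a CDM folder generally looks something like the sketch below (names are illustrative; Teo’s post shows the real structure): each dataflow folder carries a model.json with the metadata, plus one folder per table holding the CSV snapshots.

    <container>/<workspace>/<dataflow>/
        model.json                  <- CDM metadata: entities, schema, partitions
        Customers/
            Customers.csv.snapshots/
        Orders/
            Orders.csv.snapshots/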


Zero Records but Lots of Space Used

Jeff Iannucci solves a riddle:

Anyhow, it’s worthwhile to occasionally review the tables in a database to see which ones are growing every day, using the most space.

But what if during a review you see the largest table looks like this?

That’s around 24 GB of sweet drive space allocated for 0 records. But…how?

Let me show you how.

Click through to see how. My initial thought was LOB craziness, but Jeff’s example doesn’t even need that.
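Jeff’s repro is in the post and worth the click. For flavor, one classic way to land in this state (not necessarily Jeff’s) is a heap whose rows were all deleted without a table lock, which can leave the empty pages allocated. A sketch:

    -- A heap: deletes without a table lock may not deallocate empty pages.
    CREATE TABLE dbo.BigHeap (id INT, filler CHAR(8000));

    INSERT INTO dbo.BigHeap (id, filler)
    SELECT TOP (10000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b;

    DELETE FROM dbo.BigHeap;          -- no TABLOCK hint

    EXEC sp_spaceused N'dbo.BigHeap'; -- 0 rows, yet reserved space remains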
