Gauri Mahajan shows how we can read data in Azure Blob Storage from Azure Databricks:
Since our base setup, comprising Azure Blob Storage (with a .csv file) and an Azure Databricks service (with a Scala notebook), is in place, let’s talk about the structure of this article. We will demonstrate the following:
1. We will first mount the Blob Storage container in Azure Databricks using the Apache Spark Scala API. In simple words, we will read a CSV file from Blob Storage into Databricks (see the mount-and-read sketch after this list)
2. We will apply some quick transformations to the data and move the processed data into a temporary SQL view in Azure Databricks. We will also see how to use multiple languages in the same Databricks notebook (see the temp-view sketch below)
3. Finally, we will write the transformed data back to the Azure Blob Storage container using the Scala API (see the write-back sketch below)
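For step 1, here is a minimal sketch of what the mount-and-read code looks like in a Databricks Scala notebook. The storage account name, container name, mount point, file name, and secret scope below are all hypothetical placeholders, not values from the article:

```scala
// Mount the Blob Storage container via WASB. The account name, container,
// mount point, and secret scope/key names here are hypothetical.
dbutils.fs.mount(
  source = "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/",
  mountPoint = "/mnt/blobdata",
  extraConfigs = Map(
    "fs.azure.account.key.mystorageaccount.blob.core.windows.net" ->
      dbutils.secrets.get(scope = "blob-scope", key = "storage-account-key")
  )
)

// Read the CSV into a DataFrame, treating the first row as a header.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/mnt/blobdata/sample.csv")

df.show(5)
```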
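For step 2, a transformation plus temporary view might look like the sketch below. The column names ("City", "Population") are assumptions for illustration and depend on the actual CSV schema:

```scala
import org.apache.spark.sql.functions.{col, upper}

// A couple of illustrative transformations; "City" and "Population"
// are hypothetical column names, not from the source article.
val transformed = df
  .filter(col("Population") > 100000)
  .withColumn("City", upper(col("City")))

// Register the result as a temporary view so other languages can query it.
transformed.createOrReplaceTempView("city_data")
```

This is where the multi-language piece comes in: in a separate notebook cell, the `%sql` magic command switches languages, so `%sql SELECT * FROM city_data LIMIT 10` queries the same view without writing any Scala.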
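And for step 3, writing the transformed data back through the mount point could look like this sketch (the output folder name is hypothetical):

```scala
// Write the transformed DataFrame back to the mounted container.
// coalesce(1) produces a single CSV part file, convenient for small results.
transformed
  .coalesce(1)
  .write
  .mode("overwrite")
  .option("header", "true")
  .csv("/mnt/blobdata/output/")
```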
It’s just a few lines of code. One of the best things Microsoft and the Databricks team did for Azure Databricks was to ensure that it felt like a first-party offering: everything feels a little more integrated than Databricks for AWS.