Shared Access Signatures

Arun Sirpal explains what an Azure Shared Access Signature is:

Using a Shared Access Signature (SAS) is usually the best way to control access rights to Azure storage resources (like a container for backups) without exposing the primary / secondary storage keys. It is based on a URI and this is what I want to look at today.

Read on to learn about the components which make up a Shared Access Signature.
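
To make those components concrete, here is a minimal Python sketch using the azure-storage-blob package (the account, container, and key below are made-up placeholders, not anything from Arun's post) that generates a container-level SAS and prints the resulting URI:

    from datetime import datetime, timedelta
    from azure.storage.blob import generate_container_sas, ContainerSasPermissions

    # Hypothetical storage account, container, and key, purely for illustration.
    account_name = "mystorageaccount"
    container_name = "sqlbackups"
    account_key = "<primary-or-secondary-storage-key>"

    # Grant read, write, and list access to the container for 24 hours.
    sas_token = generate_container_sas(
        account_name=account_name,
        container_name=container_name,
        account_key=account_key,
        permission=ContainerSasPermissions(read=True, write=True, list=True),
        expiry=datetime.utcnow() + timedelta(hours=24),
    )

    # The SAS is just a query string appended to the resource URI, carrying
    # components such as sv (service version), sp (permissions), se (expiry),
    # and sig (the signature computed from the storage key).
    print(f"https://{account_name}.blob.core.windows.net/{container_name}?{sas_token}")

The key point is that the storage key itself is never handed out; only the signed, time-limited token is.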

Azure Backup for SQL Server VM Pains

John Sterrett has run into a few issues with Azure Backup for SQL Server VMs:

Having the ability to back up new databases automatically is taken for granted, so much so that I noticed Azure Backup for SQL Server VMs will not automatically back up new databases for you. That’s right. Make sure you remember to go in and detect and select your new database every time you add a database, or you will not be able to recover it.

Azure Backup for SQL Server VMs has an interesting feature called Autoprotect. This should automatically back up all your databases for you. Unfortunately, this does not work. Yes, I double-checked by enabling autoprotect for a VM and adding a new database. The database didn’t get backed up, so I had to add it manually.

Some of these issues seem easy enough to fix, so hopefully the product gets better over time.

Databricks versus Mapping Data Flows

Helge Rege Gardsvoll contrasts Azure Databricks, Azure Data Factory Mapping Data Flows, and SQL Server Integration Services:

Mapping Data Flows
One of the many data flows from Microsoft these days, providing, for the first time, data transformation capabilities within Data Factory. This is not a U-SQL script or Databricks notebook that is orchestrated from Data Factory, but a tool integrated into it. This means that you can reuse (many of) the datasets you have defined in Data Factory, while in Databricks you can’t.

Mapping Data Flows runs on top of Databricks, but the cluster is handled for you and you don’t have to write any of that Scala code yourself.

Read on for the full comparison.

On-Prem Data, Azure Apps

Jamie Wick helps us figure out how to keep our data local while using Azure services:

One of the challenges many organizations face when beginning to work with Azure applications (PowerBI, PowerApps, Flow, etc.) is that their data is on-premises and the applications are hosted in the cloud. Moving the data to the cloud is often cost-prohibitive, and there can be operational requirements that prevent the data, or the systems hosting it, from being relocated to the cloud.

So, how can on-prem data be used with Azure apps?

Read on for more.

Using Azure Storage Explorer

Arun Sirpal takes us through Azure Storage Explorer:

I only ever use Storage Explorer when managing my blobs, files, and queues within storage accounts. It is your single-view access point for all your storage needs, and I totally recommend downloading it and using it (https://azure.microsoft.com/en-gb/features/storage-explorer/).

Why do I like using it? I am sure there are more reasons, but these are personal to me.

Click through for Arun’s reasons as well as installation basics.

Backup to URL in Azure

Joey D’Antoni recommends backing up databases to Blob Storage via URL in Azure:

Unlike in your on-premises environment, where you might have up to a 32 Gbps fibre channel connection to your storage array and then a separate 10 Gbps connection to the file share where you write your SQL Server backups, in the cloud you have a single connection to both storage and the rest of the network. That single connection is metered and correlates to the size (and $$$) of your VM. So bandwidth is somewhat sacred, since backups and normal storage traffic go over the same limited tunnel. This doesn’t mean you can’t have good storage performance, it just means you have to think about things. In the case of the customer I mentioned, by writing backups to the file system and then having the Azure backup service back up their VM, they were saturating their network pipe and making regular SQL Server I/Os wait.

Read on to see what the alternative is.
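
For a rough idea of what backup to URL looks like in practice, here is a minimal sketch run through Python and pyodbc (the server, storage account, and database names are assumptions for illustration; the embedded T-SQL is the part that matters):

    import pyodbc

    # Hypothetical connection string; BACKUP cannot run inside a transaction,
    # so autocommit is required with pyodbc.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=master;"
        "Trusted_Connection=yes;",
        autocommit=True,
    )

    container_url = "https://mystorageaccount.blob.core.windows.net/sqlbackups"

    # A credential named after the container URL lets SQL Server authenticate
    # to Blob Storage with a Shared Access Signature (no leading '?' in the secret).
    conn.execute(f"""
        CREATE CREDENTIAL [{container_url}]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
        SECRET = '<sas-token-without-leading-question-mark>';
    """)

    # Write the backup straight to Blob Storage instead of the local file system.
    conn.execute(f"""
        BACKUP DATABASE [MyDatabase]
        TO URL = '{container_url}/MyDatabase.bak'
        WITH COMPRESSION, CHECKSUM;
    """)

In Joey's scenario, the win is that the backup bytes go straight to Blob Storage rather than landing on the VM's disks and then being read all over again by the VM-level backup.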

Migrating to Azure SQL DB Serverless

John Morehouse shows how we can move an Azure SQL Database instance to serverless:

In a previous post, I discussed the public preview of Azure SQL Database Serverless. This is a newer product from Microsoft for the Azure ecosystem. Moving to this new product is really easy to do, and I thought that I’d show you how.

John makes it easy to follow what’s going on, so check it out.
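
As a hedged aside, switching an existing database to a serverless service objective can also be scripted in T-SQL rather than done through the portal. A minimal sketch via Python and pyodbc, with made-up server and database names:

    import pyodbc

    # Hypothetical Azure SQL Database logical server; connect to master to scale.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=master;"
        "UID=sqladmin;PWD=<password>;",
        autocommit=True,
    )

    # GP_S_Gen5_1 = General Purpose, Serverless, Gen5 hardware, 1 vCore maximum.
    # The scale operation completes asynchronously.
    conn.execute(
        "ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_1');"
    )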

Migrating to SQL Managed Instances with dbatools

Jovan Popovic shows how we can perform an offline migration from on-prem/IaaS SQL Server to a SQL Managed Instance using dbatools:

Typically, the offline migration process looks like:

– You need to create an Azure Blob Storage account that will be used to temporarily hold the database backups that will be moved from SQL Server to Managed Instance.
– You need to back up the databases to Azure Blob Storage and restore them from Azure Blob Storage to Managed Instance.
– You need to migrate server-level objects such as logins and agent jobs from the source to the destination instance.

In this article, I will use Azure PowerShell to create and manage the necessary Azure resources, and the dbatools PowerShell library to initiate the migration.

Read on for the process, including the PowerShell scripts and dbatools calls needed.
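
The article itself drives this with Azure PowerShell and dbatools; purely as a sketch of the first step above (staging storage for the backups), here is a rough Python equivalent using azure-storage-blob, with a made-up account and container:

    from azure.storage.blob import BlobServiceClient

    # Hypothetical connection string for the storage account that will
    # temporarily hold the backups during the migration.
    connection_string = (
        "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;"
        "AccountKey=<key>;EndpointSuffix=core.windows.net"
    )

    service = BlobServiceClient.from_connection_string(connection_string)

    # Create the container SQL Server will back up to and the
    # Managed Instance will restore from.
    service.create_container("migration-backups")

The backup, restore, and login migration steps themselves are the dbatools pieces the article walks through.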

AzCopy, Batch Files, and the Task Scheduler

Randolph West shares the results of persistent, relentless experimentation:

This coincidentally has caused an issue if we are using Windows Task Scheduler to schedule the synchronization process, especially if we use a SAS (Shared Access Signature) token, which can be quite long. What then happens is we have a command that is longer than Windows Task Scheduler allows, and the task will fail with a very unhelpful error message:

Task Scheduler failed to execute task "\AzureBlobStorageSync". Attempting to restart. Additional Data: Error Value: 2147942487.

Click through to see how Randolph fixed this problem, which created a new problem that Randolph then had to solve as well.

Working with Tables in Databricks

Brad Llewellyn shows us how to build tables (temporary and permanent) and views in Azure Databricks using each of the main languages:

Simply put, an External Table is a table built directly on top of a folder within a data source.  This means that the data is not hidden away in some proprietary SQL format.  Instead, the data is completely accessible to outside systems in its native format.  The main reason for this is that it gives us the ability to create “live” queries on top of text data sources.  Every time a query is executed against the table, the query is run against the live data in the folder.  This means that we don’t have to run ETL jobs to load data into the table.  Instead, all we need to do is put the structured files in the folder and the queries will automatically surface the new data.

Each language has its own way of doing things, but they all use the Hive metastore under the covers.
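
As a tiny illustration of the external table idea, here is a PySpark sketch (the folder path and table name are made up) that builds a table directly on top of a folder of CSV files, so new files dropped into that folder show up in queries without any load step:

    # Assumes a Databricks notebook, where `spark` (the SparkSession) is predefined
    # and the (made-up) folder below is already mounted.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS sales_external
        USING CSV
        OPTIONS (header 'true', inferSchema 'true')
        LOCATION '/mnt/datalake/sales/'
    """)

    # Each query reads the live files in the folder, so there is no ETL load step.
    spark.sql("SELECT COUNT(*) FROM sales_external").show()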
