Press "Enter" to skip to content

Category: Cloud

Another Look at Cosmos DB Indexing

Hasan Savran revises some indexing recommendations based on changes to Cosmos DB:

Lazy indexing used to be an option. It’s not in any Cosmos DB documentation anymore. By using Lazy indexing, you could save 20 to 30 percent on Request Units. Just like anything else in life, you get what you pay for when it comes to Lazy indexing. By selecting Lazy indexing, you are saying that indexes will eventually be updated. If indexes are not updated, your queries might not return all the data, since some data might not be indexed yet. Lazy indexing is still an option, but nobody talks about it, and for good reason. In my opinion, it should be listed as an obsolete feature, or there should be better documentation about how it works and why it might not be a good option for your solutions.

     If you use Lazy Indexing to reduce Request Units in your solution, change it to consistent now unless you have a really good reason!

Read on for more advice in this vein.
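To make that switch concrete, here is a minimal sketch using the Python azure-cosmos SDK to flip a container from lazy to consistent indexing. The account URL, key, database, container, and partition key path are all placeholders, not anything from Hasan's post.

    from azure.cosmos import CosmosClient, PartitionKey

    # Placeholder connection details -- substitute your own account, key, and names.
    client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<account-key>")
    database = client.get_database_client("SensorsDb")
    container = database.get_container_client("Readings")

    # Read the current container definition and change the indexing mode.
    definition = container.read()
    indexing_policy = definition["indexingPolicy"]
    indexing_policy["indexingMode"] = "consistent"

    # replace_container applies the updated indexing policy; the partition key
    # must match the existing one. Cosmos DB rebuilds the index in the background.
    database.replace_container(
        container,
        partition_key=PartitionKey(path="/sensorId"),
        indexing_policy=indexing_policy,
    )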

Comments closed

Dimensional Load with Databricks

Leo Furlong shows how we can load an Azure SQL Data Warehouse dimension with Databricks:

Ingesting data into the Data Lake occurs in steps 1 and 2 in our architecture.  Azure Data Factory (ADF) provides an excellent mechanism for loading data from source applications into a Data Lake stored in Azure Data Lake Store Gen2.  In fact, Microsoft offers a template in the ADF Template gallery which provides a metadata driven approach for doing so.  The template comes with a control table example in a SQL Server Database, a data source dataset and a data destination dataset.  More on this template can be found here in the official documentation.

I appreciate that this is a full walkthrough of the process, not just one step.
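For a rough feel of the later steps, here is a hedged PySpark sketch (not code from Leo's walkthrough) that reads staged Parquet files from the lake and writes a dimension to Azure SQL Data Warehouse via the Databricks SQL DW connector. The paths, JDBC URL, credentials handling, and table name are placeholders, and the cluster is assumed to already have access to the storage accounts.

    # Read staged data from the Data Lake; `spark` is provided by the Databricks notebook.
    dim_customer = (
        spark.read.parquet("abfss://staging@mydatalake.dfs.core.windows.net/customer/")
             .select("CustomerKey", "CustomerName", "Country")
    )

    # Load the dimension into Azure SQL Data Warehouse through the SQL DW connector,
    # which stages data in Blob Storage (tempDir) and loads it with PolyBase.
    (dim_customer.write
        .format("com.databricks.spark.sqldw")
        .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydw")  # credentials omitted
        .option("forwardSparkAzureStorageCredentials", "true")
        .option("dbTable", "dbo.DimCustomer")
        .option("tempDir", "wasbs://tempdir@mystorageaccount.blob.core.windows.net/tmp/")
        .mode("overwrite")
        .save())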

Comments closed

Managed Instance Challenges

Joey D’Antoni has a few real-world challenges with migrating to Azure SQL Managed Instances:

While DMS is pretty interesting tooling, I had mostly ignored it until recently. Functionally, the tool works pretty well. The problem is that it requires a lot of privileges. You have to have someone who can create a service principal, and you need to have the following ports open between your source machine and your managed instance:

– 443
– 53
– 9354
– 445
– 12000

While the scope of those firewall rules is limited, in a larger enterprise, explaining why you need port 445 open to anything is going to be challenging.

The technology is intriguing, though it does seem like there are still some kinks to work out.

Comments closed

Using the Cosmos DB Change Feed

Hasan Savran shows us how we can use Cosmos DB’s change feed to track changes to documents in a container:

This is great but I want to do more than that. How am I going to access the changed data? What should I do if there is more than one change or insert? In my case, I need to access the SensorCode attribute so I can do something about this alert. To answer these questions, you need to know more about Azure Functions. If you can see the number of modified documents from this code, that means you are done with the Change Feed functionality. First, we need some kind of loop so the code can process multiple changes. To do that, I will use a simple foreach loop.

The thing which comes to mind when I hear about Cosmos DB’s change feed is Kafka, with that immutable log of actions you can read through.
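For a sense of what that loop looks like, here is a minimal sketch of a change feed-triggered Azure Function in Python (Hasan's example uses C#). The SensorCode attribute comes from his scenario; the trigger binding details are assumptions and live in function.json rather than in this code.

    import logging
    import azure.functions as func

    # Cosmos DB change feed trigger. The binding (connection setting, database,
    # container, and lease container) is configured in function.json, not shown here.
    def main(documents: func.DocumentList) -> None:
        if not documents:
            return
        logging.info("Documents modified: %d", len(documents))
        # Loop over every changed or inserted document in this batch,
        # mirroring the foreach loop in the original C# example.
        for doc in documents:
            sensor_code = doc.get("SensorCode")  # attribute name from Hasan's scenario
            logging.info("Alert raised for sensor: %s", sensor_code)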

Comments closed

Keeping S3 and Blob Storage in Sync

Sheldon Hull shares with us a technique to keep an S3 bucket in sync with Azure Blob Storage:

Moving data between two cloud providers can be painful and can require provider-specific scripting if you're making API calls directly. For this, you can benefit from a tool that abstracts those calls behind a seamless synchronization layer.

I’ve used RClone before, when I needed to deduplicate several terabytes of data in my own Google Drive, so I figured I’d see if it could help me sync up 25GB of JSON files from Azure to S3.

You’ll have to do a few of the steps on your own, but this looks like a good way of parking data in two clouds.
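Once the remotes are defined, the sync step itself is a single rclone command. Here is a hedged sketch that shells out to it from Python; the remote names and the container and bucket names are placeholders that depend entirely on your own rclone config.

    import subprocess

    # Assumes `rclone config` has already been run to define two remotes,
    # here called "azblob" (Azure Blob Storage) and "s3" (AWS S3).
    # Rename the remotes, container, and bucket to match your setup.
    subprocess.run(
        ["rclone", "sync", "azblob:my-container", "s3:my-bucket", "--progress"],
        check=True,
    )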

Comments closed

Using AZCopy for SQL Backups

John McCormack shows how you can use AZCopy to move SQL Server backups into Azure Storage:

AZCopy is a useful command line utility for automating the copying of files and folders to Azure Storage Account containers. Specifically, I use AZCopy for SQL backups, but you can use AZCopy for copying most types of files to and from Azure.

In this blog post example (which mirrors a real-world requirement I had), the situation is that whilst I need to write my SQL backups to a network share, I also want to push them up to Azure Storage (in a different region) to allow developers quicker downloads/restores. This is why I need to use AZCopy. If I only needed my backups to be written to Azure, I could have used BACKUP TO URL instead.

Read on to see how John did it.
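As a rough sketch of the copy step (not John's exact script), the following Python wrapper pushes a backup folder from the network share up to a Blob Storage container with azcopy copy. The UNC path, storage account, container, and SAS token are all placeholders.

    import subprocess

    # Placeholders: replace the UNC path, storage account, container, and SAS token.
    source = r"\\backupserver\sqlbackups\PROD01"
    destination = "https://mystorageaccount.blob.core.windows.net/sqlbackups?<sas-token>"

    # azcopy copy with --recursive uploads the folder and everything beneath it.
    subprocess.run(["azcopy", "copy", source, destination, "--recursive"], check=True)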

Comments closed

Changes to Azure SQL Database SLA

Arun Sirpal notes a change to the Azure SQL Database Service Level Agreement:

I am sure many missed the updates to the Azure SQL Database SLA (Service Level Agreement). It used to be 99.99% across all tiers, but split between two different high-availability architectural models: the Basic, Standard, and General Purpose tiers had one model, and the Premium and Business Critical tiers had a different one.

Read on to see the change.

Comments closed

Oracle Data Guard on Azure

Kellyn Pot’vin-Gorman’s worlds continue to collide:

So, as most people know, I’m not a big fan of Oracle RAC (Real Application Clusters). My opinion was that it was often sold for use cases that it doesn’t serve (such as HA), and that the resource demands between the nodes, as well as what happens to the remaining nodes when one is evicted, are not in the best interest of most use cases. On the other hand, I LOVE Oracle Data Guard, active or standard, doesn’t matter; the product is great and it’s an awesome option for those migrating their Oracle databases to Azure VMs.

Read on to see what Oracle Data Guard is and where you might use it.

Comments closed

Notebooks in Azure Databricks

Brad Llewellyn takes us through Azure Databricks notebooks:

Azure Databricks Notebooks support four programming languages: Python, Scala, SQL, and R.  However, selecting a language in this drop-down doesn’t limit us to only using that language.  Instead, it sets the default language of the notebook.  Every code block in the notebook is run independently, and we can manually specify the language for each code block.

Before we get to the actual coding, we need to attach our new notebook to an existing cluster.  As we said, Notebooks are nothing more than an interface for interactive code.  The processing is all done on the underlying cluster.

Read on to learn how heavily Databricks leans on the notebook metaphor in how you interact with it.
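To illustrate the per-cell language override Brad describes, here is a small example assuming a notebook whose default language is Python and which is attached to a running cluster (Databricks provides the spark session automatically):

    # Cell 1: runs as Python because Python is the notebook's default language.
    df = spark.range(5).withColumnRenamed("id", "n")
    df.show()

    # Cell 2 (a separate cell): a magic command on the first line overrides the
    # default language for just that cell, for example:
    # %sql
    # SELECT current_timestamp() AS run_time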

Comments closed