Press "Enter" to skip to content

Category: Cloud

Diagnosing Virtual Machine Cloning Issues

Jack Li walks through a few common problems when creating Azure VMs based off of captured images:

When you create a VM from a captured image, the drive letters for data disks may not be preserved.  For example, if you have system database files on the E: drive, they may get swapped to the H: drive.  If this is the case, SQL Server can't find the system database files and will not start.  If the drive letter mismatch occurs on user database files, then the user databases will not recover.  After the VM is created, you just need to go to Disk Management and change the drive letters to match your original configuration.

Read the whole thing if you're thinking about copying your on-premises server to an Azure VM.
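
If you want to double-check which paths SQL Server expects before (or after) remapping drive letters in Disk Management, querying sys.master_files lists every database file location the instance knows about.  Here's a minimal sketch using pyodbc; the connection string is an assumption, so adjust the server and authentication details to your environment.

```python
# Minimal sketch: list the file paths SQL Server expects for every database,
# so you know which drive letters to restore in Disk Management after cloning.
# The connection string is an assumption; adjust server and authentication.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute(
        "SELECT DB_NAME(database_id) AS db, name, physical_name "
        "FROM sys.master_files ORDER BY database_id"
    )
    for db, logical_name, physical_path in cursor.fetchall():
        print(f"{db:25} {logical_name:25} {physical_path}")
```

This only works while the instance is up, of course, so it's most useful as a pre-capture inventory of where everything should live.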

Stretch Databases

Tim Radney looks at Stretch Database functionality in RTM:

Prior to learning about this new billing method for DSU, I could make the argument that using Stretch Database would be a very cost-effective method for storing cold data (unused data) in the cloud. By stretching this data into Azure, you could migrate a large portion of older data, which would decrease the size (and thus cost) of your local backups. In the event you had to restore a database, you would simply have to establish the connection to Azure for the stretched data, thus eliminating the need to restore it. However, with the minimum cost being nearly $1,000 per month for the low-end DSU scale, many organizations will find that it is much cheaper to retain the data on a less expensive tier of storage within their data center and find other methods for HA such as mirroring, log shipping, or Availability Groups.

Read the whole thing.  Maybe V2 of stretch databases will fix some of the biggest problems (the cost, needing to pull all of your data back down before you make any schema changes, etc.) and become a viable feature, but I can’t see it being one today.

HDInsight Tool For Eclipse

Xiaoyong Zhu reports that the HDInsight tool for Eclipse is now generally available:

The HDInsight Tool for Eclipse extends Eclipse to allow you to create and develop HDInsight Spark applications and easily submit Spark jobs to Microsoft Azure HDInsight Spark clusters using the Eclipse development environment.  It integrates seamlessly with Azure, enabling you to easily navigate HDInsight Spark clusters and to view associated Azure storage accounts. To further boost productivity, the HDInsight tool for Eclipse also offers the capability to view Spark job history and display detailed job logs.

Check out the link for videos and additional resources.
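
The tool itself is aimed at Scala and Java development inside Eclipse, but if you want a feel for the kind of Spark job you'd be submitting to an HDInsight cluster, here's a minimal PySpark word count.  The wasb:/// path points at the Gutenberg sample text HDInsight clusters typically ship with; treat the exact path as an assumption.

```python
# Minimal sketch of the kind of Spark job you would submit to an HDInsight
# Spark cluster: a word count over a sample file in the cluster's default
# blob storage. The wasb:/// path is an assumption; adjust it to your data.
from pyspark import SparkContext

sc = SparkContext(appName="SampleWordCount")

lines = sc.textFile("wasb:///example/data/gutenberg/davinci.txt")
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)

# Print the ten most frequent words.
for word, count in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, count)

sc.stop()
```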

Running Compiled Code In Azure ML

Max Kaznady shows how to use R or Python scripts to call compiled code within Azure ML:

In this post, we focus on sourcing R and Python’s external dependencies, such as R libraries and Python modules, which are not already installed on Azure ML and require code compilation. Commonly the compiled code comes from a variety of other languages such as C, C++ and Fortran. One could also use this approach to wrap their compiled code with R or Python wrappers and run it on Azure ML.

To illustrate the process, we will build two MurmurHash modules from C++ for R and Python using the following two implementations on GitHub, and link them to Azure ML from a zipped folder.

Link via David Smith.  I knew it was possible to call compiled C code from Python and R, but didn’t expect to be able to do it within Azure ML, so that’s good to know.
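
The post builds proper MurmurHash packages for R and Python from the C++ sources; as a rough illustration of the general pattern of calling compiled code from Python, here's a ctypes sketch.  The library name and the exported function signature are hypothetical, not taken from the post.

```python
# Rough sketch of wrapping compiled code from Python with ctypes.
# The shared library name and the exported function signature below are
# assumptions for illustration; the post itself builds proper MurmurHash
# modules for R and Python from the C++ sources on GitHub.
import ctypes

lib = ctypes.CDLL("./libmurmurhash.so")  # hypothetical compiled library

# Assume the library exports: uint32_t hash32(const char *data, int len, uint32_t seed)
lib.hash32.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_uint32]
lib.hash32.restype = ctypes.c_uint32

def murmur32(text: str, seed: int = 0) -> int:
    data = text.encode("utf-8")
    return lib.hash32(data, len(data), seed)

print(murmur32("hello azure ml"))
```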

Using Azure Data Catalog

Melissa Coates has some good advice if you start using Azure Data Catalog:

Register only data sources that users interact with. Usually the first priority is to register data sources that the users see; for instance, the reporting database or DW that you want users to go to rather than the original source data. Depending on how you want to use the data catalog, you might also want to register the original source. In that case you probably want to hide it from business users so it's not confusing. Which leads me to the next tip…

Use security capabilities to hide unnecessary sources. The Standard (paid) version will allow you to have some sources registered but only discoverable by certain users & hidden from other users (i.e., asset level authorization). This is great for sensitive data like HR. It’s also useful for situations when, say, IT wants to document certain data sources that business users don’t access directly.

This is a good set of advice.

Overlapping Ranges Using U-SQL

Michael Rys explains how to merge overlapping ranges of data using U-SQL:

If you look at the problem, you will at first notice that you want to define something like a user-defined aggregation to combine the overlapping time intervals. However, if you look at the input data, you will notice that since the data is not ordered, you will either have to maintain the state for all possible intervals and then merge disjoint intervals as bridging intervals appear, or you need to preorder the intervals for each user name to make the merging of the intervals easier.

The ordered aggregation is simpler to scale out, but U-SQL does not provide ordered user-defined aggregators (UDAGGs) yet. In addition, UDAGGs normally produce one row per group, while in this case, I may have multiple rows per group if the ranges are disjoint.

Luckily, U-SQL provides a scalable user-defined operator called a reducer which gives us the ability to aggregate a set of rows based on a grouping key set using custom code.

There are some good insights here, so read the whole thing.
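
The ordered-merge logic that the custom reducer implements is easy to see outside of U-SQL.  Here's a small Python sketch of the same idea: sort each user's intervals by start time, then fold overlapping ones together.  The sample data is made up.

```python
# Sketch of the ordered-merge logic a per-user reducer implements:
# sort each user's intervals by start time, then fold overlapping or
# touching intervals into one. The sample data is made up.
from collections import defaultdict

events = [
    ("alice", 1, 5), ("alice", 4, 9), ("alice", 12, 15),
    ("bob",   2, 3), ("bob",   3, 6),
]

by_user = defaultdict(list)
for user, start, end in events:
    by_user[user].append((start, end))

merged = {}
for user, intervals in by_user.items():
    intervals.sort()                      # order by start time
    result = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= result[-1][1]:        # overlaps (or touches) the previous interval
            result[-1][1] = max(result[-1][1], end)
        else:
            result.append([start, end])
    merged[user] = [tuple(r) for r in result]

print(merged)
# {'alice': [(1, 9), (12, 15)], 'bob': [(2, 6)]}
```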

Azure Cortana Intelligence Suite Walkthrough

Rolf Tesmer gives us a high-level walkthrough of the Azure Cortana Intelligence Suite, using management of a wind turbine farm as an example problem:

Event Hub

What is it

https://azure.microsoft.com/en-us/services/event-hubs/

Fully managed service (PaaS) for ingesting events/messages at massive scale (think telemetry processing from websites, IoT, etc.).

What does it do in our wind farm

Provides a “front door” to our wind farm application to accept all of the streaming telemetry being generated from the turbines.  Event Hubs won't process any of this data per se; it's just ensuring that it's being accepted and queued (short term) while other components can come in to consume it.

Before you dig deeply into particular services, it’s nice to see how they fit together at a higher level.
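
To make the “front door” idea concrete, here's a minimal sketch of pushing one turbine telemetry reading into Event Hubs using the azure-eventhub Python SDK.  The connection string, hub name, and payload shape are all assumptions for illustration.

```python
# Minimal sketch of sending a telemetry event to Event Hubs with the
# azure-eventhub Python SDK (v5). The connection string, hub name, and
# payload shape are assumptions for illustration.
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://<your-namespace>.servicebus.windows.net/;"
             "SharedAccessKeyName=<policy>;SharedAccessKey=<key>",
    eventhub_name="windfarm-telemetry",
)

reading = {"turbineId": "T-042", "rpm": 14.2, "powerKw": 1875.0}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)
```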

Polybase Setup Errors

Murshed Zaman on the Azure CAT team covers a number of Polybase configuration errors:

SSMS Error:

Any Select query fails with the following error.
Msg 106000, Level 16, State 1, Line 1
Java heap space

Possible Reason:

Illegal input may cause the Java out-of-memory error.  In this particular case, the file was not in UTF8 format. DMS tries to read the whole file as one row since it cannot decode the row delimiter, and so runs into a Java heap space error.

Possible Solution:

Convert the file to UTF8 format since PolyBase currently requires UTF8 format for text delimited files.

I imagine that this page will get quite a few hits over the years, as there is currently limited information on how to solve these issues if you run into them, and some of the error messages (especially the one quoted above) have nothing to do with the root cause.
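
The UTF8 fix itself is simple enough to script.  Here's a small sketch that re-encodes a delimited text file to UTF-8 before PolyBase reads it; the source encoding is an assumption, so set it to whatever the file actually uses.

```python
# Small sketch: re-encode a delimited text file to UTF-8 before loading it
# through PolyBase. The source encoding is an assumption; adjust it to
# whatever the file actually uses (UTF-16 is common for bcp -w exports).
SOURCE_ENCODING = "utf-16"
src_path = "export.txt"
dst_path = "export_utf8.txt"

with open(src_path, "r", encoding=SOURCE_ENCODING) as src, \
     open(dst_path, "w", encoding="utf-8", newline="") as dst:
    for line in src:
        dst.write(line)
```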

Netflix Billing Architecture

The Netflix tech blog discusses moving their billing infrastructure entirely into the cloud (AWS, in this case):

Cleaning up Code: We started chipping away at existing code, breaking it into smaller, efficient modules, and first moved some critical dependencies to run from the Cloud. We moved our tax solution to the Cloud first.

Next, we retired serving member billing history from giant tables that were part of many different code paths. We built a new application to capture billing events, migrated only necessary data into our new Cassandra data store and started serving billing history, globally, from the Cloud.

We spent a good amount of time writing a data migration tool that would transform member billing attributes spread across many tables in Oracle into a much simpler Cassandra data structure.

We worked with our DVD engineering counterparts to further simplify our integration and got rid of obsolete code.

Purging Data: We took a hard look at every single table to ensure that we were migrating only what we needed and leaving everything else behind. Historical billing data is valuable to legal and customer service teams. Our goal was to migrate only necessary data into the Cloud. So, we worked with impacted teams to find out what parts of historical data they really needed. We identified alternative data stores that could serve old data for these teams. After that, we started purging data that was obsolete and was not needed for any function.

All in all, a very interesting read on how to migrate large databases.  Even if you’re moving from one version of a product to another, some of these steps might prove very helpful in your environment.
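
The "many Oracle tables into a much simpler Cassandra structure" step is, at heart, denormalization: collapse the rows that describe one member into a single document keyed by member.  Here's a toy sketch of that shape; every table and column name is invented for illustration.

```python
# Toy sketch of denormalizing billing rows from several relational tables
# into one document per member, the general shape of an Oracle-to-Cassandra
# style migration. Every table and column name here is invented.
from collections import defaultdict

plan_rows = [
    {"member_id": 1, "plan": "standard", "currency": "USD"},
]
payment_rows = [
    {"member_id": 1, "amount": 9.99, "paid_on": "2016-05-01"},
    {"member_id": 1, "amount": 9.99, "paid_on": "2016-06-01"},
]

members = defaultdict(lambda: {"plan": None, "payments": []})
for row in plan_rows:
    members[row["member_id"]]["plan"] = {"name": row["plan"], "currency": row["currency"]}
for row in payment_rows:
    members[row["member_id"]]["payments"].append(
        {"amount": row["amount"], "paid_on": row["paid_on"]}
    )

# Each value in `members` is now the single row/document you would write
# to the new data store, keyed by member id.
print(dict(members))
```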
