
Category: Cloud

Sharing Power Query Queries

Chris Webb shows how to use Azure Data Catalog to share queries from Power Query:

While I’m really happy to have this functionality back, and I think a lot of people will find it useful, there’s still a lot of room for improvement. Some thoughts:

  • This really needs to be extended to work with Power BI Desktop too. In fact, it’s such an obvious thing to do it must be happening soon…?

Given how quickly the Power BI team iterates, that’s probably the case. Anyhow, read the whole thing.


Cases For Using Azure Analysis Services

Melissa Coates enumerates several reasons why you might want to use Azure Analysis Services:

Varying Levels of Peak Workloads

Let’s say during month-end close the reporting activity spikes much higher than the rest of a typical month. In this situation, it’s a shame to provision hardware that is underutilized a large percentage of the rest of the month. This type of scenario makes a scalable PaaS service more attractive than dedicated hardware. Do note that currently Azure SSAS scales compute, known as the QPU or Query Processing Unit level, along with max data size (which is different from some other Azure services, which decouple those two).

Read on for more use cases.


Azure VM Auto-Shutdown

Dave Bermingham shows how to configure automatic shutdown of Azure VMs:

If you are like me, I try to make my Azure MSDN subscription credits stretch the entire month. I’m typically just building labs to try out new features or to demonstrate SQL Server Failover Clusters in Azure. A lot of the time I am testing some pretty large instance sizes with plenty of premium storage. As you can imagine, you can burn through $150 pretty quick with a few GS5 instances running.

I try to be mindful and shut down or destroy instances once I am done with them, but occasionally I’ll get pulled away for other business, only to log in the next day and see my credit has expired because I forgot to turn off the VMs.

Click through for details, including a warning about storage.
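The post covers the portal’s Auto-shutdown settings; purely as an illustrative alternative, here is a minimal Python sketch you could run from any nightly scheduler. It assumes the azure-identity and azure-mgmt-compute SDKs, and the subscription ID, resource group, and VM name are placeholders:

```python
# A minimal sketch, not the post's method: deallocate a lab VM on a schedule.
# Assumes the track-2 azure-identity and azure-mgmt-compute packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deallocate rather than merely power off: a stopped-but-allocated VM still
# bills for compute. Even when deallocated, the VM's disks keep accruing
# storage charges, which is the storage warning mentioned above.
client.virtual_machines.begin_deallocate("my-lab-rg", "my-lab-vm").result()
```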


Scheduling VM Backups

Jens Vestergaard shows how to schedule Azure VM backups:

In this wizard we are presented with three areas of configuration. First we need to decide if it’s in Azure or On-Premises. By selecting Azure, we are left with Virtual Machine as the only option for the backup. On-Premises has more options: SQL Server, SharePoint, and Hyper-V VMs, among others. This example will be about Azure VMs, hence we selected accordingly.

Step 2 is about the backup policy, or in other words frequency and retention. I am going with the default settings here, but the options are flexible, as you can configure retention ranges for weekly, monthly, and yearly backups in parallel.

It’s easy and, like any other backup, it might save your bacon later.


Scaling Kinesis Streams

Allan MacInnis shows how to scale Amazon Kinesis streams using the UpdateShardCount API call:

You also need to adjust the alarm threshold to accommodate the new shard capacity automatically. For this example, update the alarm threshold to 80% of your new capacity (or 3200 records per second) by setting a CloudWatch alarm with an action to publish to an SNS topic when the alarm is triggered.

You can then create a Lambda function that subscribes to this SNS topic and executes a call to the new UpdateShardCount API operation while adjusting the CloudWatch alarm threshold. To learn how to configure a CloudWatch alarm, see Creating Amazon CloudWatch Alarms. For information about how to invoke a Lambda function from SNS, see Invoking Lambda Functions Using Amazon SNS Notifications.

This is pretty cool.
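To make the two moving parts concrete, here is a minimal boto3 sketch of the calls described above; the stream name, shard counts, and SNS topic ARN are hypothetical placeholders:

```python
# A sketch of scaling a stream and re-pointing its alarm; names are placeholders.
import boto3

kinesis = boto3.client("kinesis")
cloudwatch = boto3.client("cloudwatch")

# Double capacity from 2 to 4 shards (each shard ingests ~1,000 records/s).
kinesis.update_shard_count(
    StreamName="my-stream",
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)

# Alarm at 80% of the new 4,000 records/s capacity, i.e. 3,200 records/s.
# With a 60-second period and a Sum statistic, that is 192,000 records/minute.
cloudwatch.put_metric_alarm(
    AlarmName="my-stream-incoming-records",
    Namespace="AWS/Kinesis",
    MetricName="IncomingRecords",
    Dimensions=[{"Name": "StreamName", "Value": "my-stream"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=3200 * 60,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-scaling-topic"],
)
```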


Stream Computing Platform

Ravi Peri shows how to set up the Stream Computing Platform for .NET (SCP.Net) library and kick off a job:

SCP.Net generates a zip file consisting of the topology DLLs and dependency jars.

It uses Java (if found in the PATH) or .NET to generate the zip. Unfortunately, zip files generated with .NET are not compatible with Linux clusters.

If you’re interested in working with a Storm topology while writing .NET code, check this out.


Calling Cognitive Services With R

David Smith has written a go-to guide for connecting to Azure Cognitive Services using R:

There’s no official R package (yet!) for calling Cognitive Services APIs. But since every Cognitive Service API is just a standard REST API, we can use the httr package to call the API. Input and output is standard JSON, which we can create and extract using the jsonlite package.

(There’s also an independent R interface to the text APIs. And there are already Python SDKs for many of the services, including the Face API.)

This approach is also useful for other REST APIs when there isn’t already a pre-built package to do most of the translation work for you.
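David’s examples use R’s httr and jsonlite, but the same raw-REST pattern carries over to any language. As an illustration only, here is a Python sketch against the Text Analytics sentiment endpoint; the region, API version, and key are assumptions, so substitute the values from your own Azure portal:

```python
# A hedged sketch of calling a Cognitive Services REST API with plain HTTP;
# the endpoint region/version and the key are placeholders, not guaranteed.
import requests

endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
headers = {"Ocp-Apim-Subscription-Key": "<your-cognitive-services-key>"}
body = {"documents": [{"id": "1", "language": "en",
                       "text": "Azure Cognitive Services is easy to call."}]}

resp = requests.post(endpoint, headers=headers, json=body)
resp.raise_for_status()
print(resp.json())  # standard JSON comes back; parsed here with .json()
```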


Processing Azure Analysis Services

Bill Anton shows how to process an Azure Analysis Services tabular model:

This post contains a list of various methods that can be used to process (i.e., load data into) an Azure AS tabular model. As you will see, not much has changed from the regular on-premises version (which is a very good thing, as it softens the learning curve).

Read on if you’re looking at putting an Analysis Services model into Azure.
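For a taste of what processing can look like, one common method is submitting a TMSL refresh command over XMLA. This is just one such method, not a summary of Bill’s list; the database name below is a placeholder, and the JSON would be run from an SSMS XMLA query window or any XMLA-capable client connected to the Azure AS server:

```python
# Build a TMSL full-process command as JSON; run it from any XMLA-capable
# client (e.g., an SSMS XMLA query window). Database name is a placeholder.
import json

tmsl_refresh = {
    "refresh": {
        "type": "full",  # other refresh types include "dataOnly" and "calculate"
        "objects": [{"database": "MyTabularModel"}],
    }
}

print(json.dumps(tmsl_refresh, indent=2))
```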


Sparklyr On HDInsight

Ali Zaidi has a walkthrough on using sparklyr on HDInsight:

The majority of Spark is written in Scala (~80% of Spark core), which is a functional programming language. Functional programming languages emphasize functional purity (the output only depends on the inputs) and strive to avoid side-effects. One important component of most functional programming languages is their lazy evaluation. While it might seem odd that we would appreciate laziness from our computing tools, lazy evaluation is an effective way of ensuring computations are evaluated in the most efficient manner possible.

Lazy evaluation allows Spark SQL to highly optimize the queries. When a user submits a query to Spark SQL, Spark composes the components of the SQL query into a logical plan. The logical plan is basically a recipe Spark SQL creates in order to evaluate the desired query. Spark SQL then submits the logical plan to its highly optimized engine called Catalyst, which optimizes this plan into a physical plan of action that is executed inside the Spark computation engine (a series of coordinating JVMs).

Read on for more description and code.
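To see the laziness described above in action (illustrated here with PySpark rather than sparklyr, and with a hypothetical flights table), note that the transformations below only build up a plan; nothing executes until an action like show is called:

```python
# Lazy evaluation sketch in PySpark; the "flights" table is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lazy-demo").getOrCreate()

flights = spark.table("flights")                    # no data read yet
delayed = (flights
           .where(F.col("dep_delay") > 15)          # still just a logical plan
           .groupBy("carrier")
           .agg(F.avg("dep_delay").alias("avg_delay")))

delayed.explain()  # Catalyst's optimized physical plan; still no execution
delayed.show(5)    # an action: only now does Spark actually run the query
```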


Elastic Database Jobs

Mark Vaillancourt looks at Elastic Database Jobs in Azure:

The new Elastic Database Jobs are designed to echo the functionality that folks working with SQL Server are accustomed to on-prem with SQL Agent. But it’s even better than that. There are many features that are just baked in that you no longer have to worry about. I’ve presented on the new Elastic Jobs as part of a larger presentation on the overall Elastic tools associated with Azure SQL Database a handful of times. That presentation is called Azure SQL Database Elastic Boogie and references Marcia Griffiths’s hit song Electric Boogie (The Electric Slide). Yeah. I know. That will explain the use of the word boogie all over the place.

Even with it just being a very new private preview, my experience has been a great one. Huge kudos to Debra and her team on that.

This sounds pretty good. I really like the dynamic resolution portion and wish that on-prem SQL Agent jobs could do the same out of the box.
