Press "Enter" to skip to content

The SQL Has Landed

Drew Furgiuele’s SQL Server in Space project was a success:

We took quite a few readings to try and account for any variations, and when we were done we ended up with a little more lift than we calculated, but not by much. We then proceeded to tie off the balloon. This is by far the most stressful part of the entire operation. Weather balloons have what amounts to a very long, thick neck where you inflate them from. Unlike, say, a party balloon, you don’t just knot the end and call it a day. Instead, you need to tie one end of the neck off to prevent the escape of the helium, then bend the neck back. Then, you tie that off. You’re left with a “U” shaped bend at the bottom of the balloon that you then attach your payload train to: the parachute, then the reflector, and ultimately the payload. Oh, and you have to do this while, you know, trying to wrangle a 7 foot diameter balloon.

Tagged “cloud” even though it was a bright and sunny day.

Using Azure Active Directory With Azure SQL Database

Shannon Lowder shows how to integrate Azure Active Directory with Azure SQL Database:

Now that our instance is set up to use AAD, let’s connect to it from SSMS. If you’re running Management Studio 2016 and SQL Server Data Tools for 2015 (14.0.60311.1) or later, you should have noticed there are some extra authentication methods available in SSMS now. We’re going to cover these out of order, since some of these options take more work than others.

Active Directory Password Authentication looks similar to a SQL authentication, but it accepts AAD User names and passwords.  When you choose this method, your credentials are sent over to Azure and end at your AAD instance.  Once your username and password are validated, AAD will check to see if you have rights to connect to the instance.  If so, you will connect.  If not, you will get an error message that you’ll need to google (bing) to find out what it really means.

With the steps we took in the last section, you should be able to log in to your Azure SQL Server as an administrator by using Active Directory Password Authentication.

Click through for the process, as well as links to additional resources.
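If you want to see what Active Directory Password Authentication looks like from application code rather than SSMS, here’s a minimal sketch using pyodbc; the driver version, server, database, user, and password are all placeholders, so substitute your own:

    import pyodbc

    # All names and secrets below are placeholders for illustration.
    conn_str = (
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:yourserver.database.windows.net,1433;"
        "Database=yourdatabase;"
        "Authentication=ActiveDirectoryPassword;"  # the AAD-specific keyword
        "UID=someuser@yourdomain.com;"
        "PWD=<password>;"
        "Encrypt=yes;"
    )

    conn = pyodbc.connect(conn_str)
    # Should echo back the AAD user name if authentication succeeded.
    print(conn.cursor().execute("SELECT SUSER_SNAME();").fetchone()[0])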

SQL Server In Kubernetes

Andrew Pruski shows us how to spin up SQL Server containers within Kubernetes running on Azure Container Services:

Looks pretty good to me! SQL is up and has accepted the config value within our yaml file to change the SA password. But how are we going to connect to it?

What we need to do now is define a Kubernetes service. A service is a level of abstraction over pods to allow connections to the containers within the pods regardless of where the pod is located in the cluster. So let’s setup a service.

Andrew does a good job of taking us through the process step by step.
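If you’d rather script the service definition than hand-edit YAML, here’s a rough equivalent using the official kubernetes Python client; the service name and the app=sqlserver pod label are assumptions standing in for whatever labels your deployment actually uses:

    from kubernetes import client, config

    config.load_kube_config()  # uses your current kubectl context

    # Expose port 1433 of any pod labeled app=sqlserver behind a public IP.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="sqlserver-service"),
        spec=client.V1ServiceSpec(
            selector={"app": "sqlserver"},  # assumed pod label
            ports=[client.V1ServicePort(port=1433, target_port=1433)],
            type="LoadBalancer",  # asks Azure for an external IP
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="default", body=service)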

Basics Of Azure SQL Data Warehouse

Minette Steynberg has an article introducing Azure SQL Data Warehouse:

Azure SQL DW is best used for analytical workloads that make use of large volumes of data and need to consolidate disparate data into a single location.

Azure SQL DW has been specifically designed to deal with very large volumes of data. In fact, if there is too little data it may perform poorly because the data is distributed. You can imagine that if you had only 10 rows per distribution, the cost of consolidating the data will be way more than the benefit gained by distributing it.

SQL DW is a good place to consolidate disparate data, transform, shape and aggregate it, and then perform analysis on it. It is ideal for running burst workloads, such as month end financial reporting etc.

Azure SQL DW should not be used when small row by row updates are expected as in OLTP workloads. It should only be used for large scale batch operations.

Azure SQL Data Warehouse is fantastic when you’ve got a setup like the one above and are willing to pay a premium to make things faster. And with appropriately distributed data, it certainly does get faster.
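To make the distribution idea concrete, here’s roughly what declaring a hash-distributed table looks like, submitted here via pyodbc; the DSN, table, and column names are made up for illustration:

    import pyodbc

    conn = pyodbc.connect("DSN=MySqlDw")  # assumed DSN pointing at SQL DW

    # The DISTRIBUTION clause spreads rows across the warehouse's 60
    # distributions; hashing on CustomerId co-locates each customer's rows.
    conn.execute("""
        CREATE TABLE dbo.FactSales
        (
            SaleId     BIGINT NOT NULL,
            CustomerId INT NOT NULL,
            Amount     DECIMAL(18, 2) NOT NULL
        )
        WITH
        (
            DISTRIBUTION = HASH(CustomerId),
            CLUSTERED COLUMNSTORE INDEX  -- the default, good for large scans
        );
    """)
    conn.commit()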

Azure SQL Database Compatibility Level Change

Joe Sack reports that new Azure SQL Databases will have a compatibility level of 140 by default:

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 130 (or lower), please follow the instructions detailed here: View or Change the Compatibility Level of a Database.  For example, you may wish to ensure that new databases created in Azure SQL Database use the same compatibility level as other databases in Azure SQL Database to ensure consistent query optimization behavior across development, QA and production versions of your databases. We recommend that database configuration scripts explicitly designate COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior.

For new databases supporting new applications, we recommend using the latest compatibility level (140).  For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article, Change the Database Compatibility Mode and Use the Query Store.  Note that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for SQL Server and Azure SQL DB.

It’s good to hear, and as Joe mentions, you have the ability to move back down to 130 if you need it.
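Taking Joe’s advice about being explicit, a database configuration script can pin the level instead of trusting the default. A minimal sketch, again with a placeholder connection:

    import pyodbc

    # ALTER DATABASE can't run inside a transaction, hence autocommit.
    conn = pyodbc.connect("DSN=MyAzureSqlDb", autocommit=True)  # assumed DSN

    # Designate the compatibility level explicitly rather than relying
    # on the server default.
    conn.execute("ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;")

    # Verify the change took.
    print(conn.execute(
        "SELECT compatibility_level FROM sys.databases WHERE name = DB_NAME();"
    ).fetchone()[0])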

Scale-Up In Azure SQL Database

Arun Sirpal walks us through how to bump up the scale of an Azure SQL Database:

We should all be aware that when changing the service tier and/or performance level of a database, it creates a replica of the original database at the new performance level and then switches connections over to the replica. No data is lost during this process, but during the brief moment when we switch over to the replica, connections to the database are disabled, so some transactions in flight may be rolled back. So it is probably best that you do this during a period of low activity.

It’s pretty easy to do, but heed Arun’s warning, as you can cause queries to fail during switch-over.  My experience has been that scaling up is pretty slow, too:  I seem to recall it taking 10-20 minutes for a moderately sized database.
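For reference, the scale request itself can be issued from T-SQL as well as from the portal. A sketch with placeholder names; note the connection is to the logical server’s master database, and the ALTER statement returns quickly while the replica is built in the background:

    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:yourserver.database.windows.net,1433;"
        "Database=master;UID=youradmin;PWD=<password>;Encrypt=yes;",
        autocommit=True,  # ALTER DATABASE can't run inside a transaction
    )

    # Request the new performance level; the switch-over to the replica
    # happens later, which is when in-flight queries can fail.
    conn.execute(
        "ALTER DATABASE [yourdatabase] "
        "MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');"
    )

    # Poll this to see when the new service objective is in effect.
    print(conn.execute(
        "SELECT CAST(DATABASEPROPERTYEX('yourdatabase', 'ServiceObjective') "
        "AS NVARCHAR(64));"
    ).fetchone()[0])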

Automating Index Maintenance On Azure SQL DB

Arun Sirpal shows how to use Azure Automation to rebuild indexes on an Azure SQL Database database:

The answer is via Azure Automation.

At a high level this is what I did.

  • Create an Automation Account.

  • Create a credential.

  • Create a PowerShell Runbook which has the code for index rebuilds.

  • Create a schedule and link it to the above.

  • Configure parameters within the schedule (if any).

  • Configure logging level (if desired).

Click through for the detailed steps.
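Arun’s runbook is PowerShell, but the heart of any such runbook is the index-rebuild T-SQL. Here’s a sketch of that inner loop in Python instead; the 30% fragmentation threshold and the DSN are assumptions, not Arun’s actual values:

    import pyodbc

    conn = pyodbc.connect("DSN=MyAzureSqlDb", autocommit=True)  # assumed DSN

    # Find indexes worth rebuilding; 30% fragmentation is an assumed threshold.
    rows = conn.execute("""
        SELECT s.name, o.name, i.name
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
        JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        JOIN sys.objects o ON o.object_id = i.object_id
        JOIN sys.schemas s ON s.schema_id = o.schema_id
        WHERE ps.avg_fragmentation_in_percent > 30 AND i.name IS NOT NULL;
    """).fetchall()

    for schema_name, table_name, index_name in rows:
        conn.execute(
            f"ALTER INDEX [{index_name}] ON [{schema_name}].[{table_name}] REBUILD;"
        )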

Building Azure Resource Manager Templates

Ed Elliott gives a brief overview of Azure Resource Manager templates and puts together a sample template:

The ARM API deploys resources to Azure, but doesn’t deploy code onto those resources. For example, you can use ARM to deploy a virtual machine with SQL Server already installed, but you can’t use ARM to deploy a database from an SSDT DacPac.

To save time when designing solutions, it is important to understand that ARM API is used simply for resources and we need to use some other technology such as DSC or PowerShell to manage the deployments onto the infrastructure once it is deployed.

This is a nice overview of the topic, and because it’s Ed (who is much better about this than most), he goes into how to test before even getting into how to create.
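To see that resources-only boundary in practice, here’s roughly what submitting a template deployment looks like with the Azure Python SDK (azure-identity plus azure-mgmt-resource; the subscription ID and resource group are placeholders). Notice that nothing in it touches what runs on the resources afterwards:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    # Placeholder subscription; DefaultAzureCredential picks up your login.
    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # A deliberately empty ARM template: ARM deploys the resources declared
    # here, but configuring them afterwards is a job for DSC or PowerShell.
    template = {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [],
    }

    poller = client.deployments.begin_create_or_update(
        "my-resource-group",  # assumed resource group
        "example-deployment",
        {"properties": {"mode": "Incremental", "template": template}},
    )
    print(poller.result().properties.provisioning_state)  # e.g. "Succeeded"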

Taking Control In The Cloud

Arun Sirpal advises you to enjoy the change in control when moving to Azure:

The key for me has been to “embrace the change”. I have come from a traditional DBA background, with backups, consistency checks, and server-level configuration at the heart of things. Yes, I no longer care about SQL backups and things of that nature when operating within Azure, but I have different tasks to think about.

Sceptical? Please read on.

Do read on.

Analyzing Twitter Data With Storm In HDInsight

Nischal S shows how to configure an HDInsight cluster to process tweets, followed by loading them into a Power BI dashboard:

When we need to process streams of real-time data, Storm is a great contender. Examples of streaming data are the number of consumer clicks and navigations on a website, IIS or user logs, IoT data, and social network information. In all these scenarios, we use real-time data processing. Apache Storm can process real-time unbounded streams of data.

The term “unbounded” defines streams of data with no start or end. Here, the processing of data is continuous and in real-time. Twitter is a good example. Twitter data is continuous, has no start or end time, and is provided in real-time by millions of Twitter users around the world.

Storm wouldn’t rank in my top three technologies for doing this, but it certainly does the job.
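The “unbounded” idea is easy to sketch: the source never ends, and processing happens per element as data arrives. Here’s plain Python standing in for a Storm spout and bolt, with a simulated tweet source rather than the real Twitter API:

    import itertools
    import random
    from collections import Counter

    def tweet_stream():
        """Simulated unbounded source: yields tweets forever, like a spout."""
        tags = ["#sqlserver", "#azure", "#storm", "#powerbi"]
        while True:
            yield f"sample tweet {random.choice(tags)}"

    # Like a bolt: process each element as it arrives; there is no "end".
    counts = Counter()
    for tweet in itertools.islice(tweet_stream(), 1000):  # islice just stops the demo
        for word in tweet.split():
            if word.startswith("#"):
                counts[word] += 1

    print(counts.most_common(3))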
