Many conversations I have with customers go like this:
Customer: “We cannot deploy on Azure until we know that appropriate defenses are in place.”
Me: “I agree 100%, so let’s build a threat model for the proposed design and see what you need to do, and what Microsoft provides.”
A couple of days pass as we build and iterate on the threat model.
Now here’s where the customer has an “a-ha” moment. At the end of the process, we have a list of defenses for each part of the architecture, and we all agree that the defenses are correct and appropriate.
It’s at that point the customer realizes that they can deploy a cloud-based solution securely.
My tongue-in-cheek response: of course a customer can deploy a cloud-based solution securely if they have Michael Howard walking them through it. Michael does include some links to Azure security configuration and threat modeling resources, so check those out.
This tells us that four of the tables in the sample do not have clustered indexes, and Azure SQL Database insists on a clustered index for every table. So, without warranty, here is the script I used to refactor my on-prem AdventureWorksDW2014 database. After making these fixes, I was able to deploy to Azure SQL DB from SSMS with no errors.
Julie includes the script she used to fix up AdventureWorksDW.
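Julie’s script is the authoritative fix-up; as a minimal sketch of the diagnostic half of the problem, something like the following PowerShell will list the heaps you’d need to address. It assumes the SqlServer module’s Invoke-Sqlcmd, and the server and database names are placeholders.

```powershell
# List tables without a clustered index (heaps) before migrating.
# "localhost" and the database name are placeholder assumptions.
Import-Module SqlServer   # provides Invoke-Sqlcmd

$query = @"
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.type = 0;  -- index type 0 = heap (no clustered index)
"@

Invoke-Sqlcmd -ServerInstance "localhost" -Database "AdventureWorksDW2014" -Query $query
```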
How do you work out what DocumentDB will cost?
Well, 1 RU corresponds to a GET of a 1 KB document. You then need to know the specifics of your workload, such as the number of documents, average document size, how the documents are accessed (query patterns), and how often documents are updated.
I suggest loading your data and running your standard queries in the Azure Portal to see what RUs your workload generates, and using that in your calculations.
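To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. Every number in it is an illustrative assumption (including the 5x write multiplier); the portal measurement described above is the reliable way to get real figures.

```powershell
# Back-of-the-envelope RU/s estimate. All figures are illustrative
# assumptions -- measure your actual workload in the portal instead.
$avgDocSizeKB = 4     # assumed average document size
$readsPerSec  = 100   # assumed GETs per second
$writesPerSec = 20    # assumed writes per second

# 1 RU ~= a GET of a 1 KB document, so read cost scales roughly with size.
$ruPerRead  = [math]::Ceiling($avgDocSizeKB)
# Writes cost several times more than reads; 5x here is purely an assumption.
$ruPerWrite = 5 * $ruPerRead

$estimate = ($readsPerSec * $ruPerRead) + ($writesPerSec * $ruPerWrite)
Write-Output "Provision roughly $estimate RU/s for this workload"
```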
Rolf also has some comparisons to SQL Server queries and some hints with integrating SQL Server with DocumentDB.
The most fundamental form of disaster recovery is database backups and restores. Setting up backups is typically a lot of work: DBAs need to make sure there’s enough storage available for backups, create schedules that accommodate business operations and support RTOs and RPOs, and implement jobs that execute backups according to those schedules. On top of that, there is all the work of handling failed backups and making sure disk capacity stays large enough. It is a huge investment, but a necessary one, as losing a database can spell death for a company.
This is one of the HUGE strengths of Azure SQL Database. Since it is a service offering, Microsoft has already built out the backup infrastructure for you. All that stuff we talked about in the previous paragraph? If you use Azure SQL Database, you do not have to do any of it. At all.
What DBAs still need to manage is being able to restore databases if something happens. This is where PowerShell comes into play. While we can definitely perform these actions using the portal, it involves a lot of clicking and navigation. I would much rather run a single command to perform my restore.
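As a sketch of what that single command can look like with the AzureRM cmdlets (the resource group, server, and database names below are placeholders), a point-in-time restore to a new database might run like this:

```powershell
# Point-in-time restore to a new database. "MyRG", "myserver", and the
# database names are placeholder assumptions.
$db = Get-AzureRmSqlDatabase -ResourceGroupName "MyRG" `
                             -ServerName "myserver" `
                             -DatabaseName "MyDatabase"

Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-2) `
    -ResourceGroupName $db.ResourceGroupName `
    -ServerName $db.ServerName `
    -TargetDatabaseName "MyDatabase_Restored" `
    -ResourceId $db.ResourceId
```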
The PowerShell cmdlets are easy to use, so spin up an instance and give it a try.
Unlike with SSMS, Microsoft does support connecting to SQL Data Warehouse from Visual Studio, via the database engine features in SSDT. Once you’re in the Visual Studio/SSDT environment, open SQL Server Object Explorer, which is similar to Object Explorer in SSMS. From there, click the Add SQL Server button.
When the Connect dialog box appears, provide the server name, select SQL Server Authentication, and then specify the login name and password.
It is a bit surprising that you can’t easily connect via SSMS 2014. Maybe that’s changed with SSMS 2016?
Q: Is there going to be down time when I scale up/down? What’s going to happen to my existing connections?
Note that changing the service tier and/or performance level of a database creates a replica of the original database at the new performance level, and then switches connections over to the replica. No data is lost during this process, but during the brief moment when we switch over to the replica, connections to the database are disabled, so some transactions in flight may be rolled back. This window varies, but is on average under 4 seconds; in more than 99% of cases it is less than 30 seconds. Very infrequently, especially if there are large numbers of transactions in flight at the moment connections are disabled, this window may be longer.
The duration of the entire scale-up process depends on both the size and the service tier of the database before and after the change. For example, a 250 GB database changing to, from, or within the Standard service tier should complete within 6 hours. A database of the same size changing performance levels within the Premium service tier should complete within 3 hours.
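If you’d rather script the change than click through the portal, one hedged sketch with the AzureRM cmdlets (the names and the Premium/P2 target are placeholders) looks like this:

```powershell
# Scale a database to a new tier/performance level. "MyRG", "myserver",
# "MyDatabase", and the Premium/P2 target are placeholder assumptions.
Set-AzureRmSqlDatabase -ResourceGroupName "MyRG" `
                       -ServerName "myserver" `
                       -DatabaseName "MyDatabase" `
                       -Edition "Premium" `
                       -RequestedServiceObjectiveName "P2"
```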
A video by Joe Idziorek on service tiers and how to scale up and down using the Azure Portal is available here.
Read the whole thing. There are some great questions and answers in this set.
This week, we’re excited that Forrester recognized Microsoft Azure as a leader in its Big Data Hadoop Cloud Solutions evaluation. Apache Hadoop has become popular among organizations looking to unlock insights from data of all sizes, shapes, and speeds. Hadoop powers solutions that help businesses improve their performance, educators better connect with the needs of their students, medical professionals improve the quality of their care, and researchers accelerate new advancements in science.
As an example, Ultra Tendency uses Hadoop to achieve something not possible before: visualizing more than 27 million distinct sensor readings to give Japanese citizens accurate, up-to-date information about the radiation contamination from the Fukushima nuclear plant meltdown. More and more organizations are also deploying Hadoop in the cloud, with 47% of respondents to a 2015 Forrester survey increasing their cloud deployments either by 5-10% (37%) or by more than 10% (10%). This makes sense, because the cloud allows you to scale elastically on demand to handle the processing of any amount of data.
AWS and IBM also have very good solutions, and Google is trying to get a stronger foothold in the cloud game.
The access tiers available for blob storage accounts are “hot” and “cool”. In general, hot data is data that is accessed frequently and needs to be highly durable and available. Cool data, on the other hand, is data that is infrequently accessed and long-lived. Cool data can tolerate slightly lower availability, but it still requires high durability and access times and throughput similar to hot data. For cool data, a slightly lower availability SLA and higher access costs are acceptable trade-offs for much lower storage costs. Azure Blob storage now addresses this need for differentiated storage tiers with distinct pricing models: you can choose the Cool access tier to store less frequently accessed data at a lower storage cost, and the Hot access tier to store more frequently accessed data at a lower access cost. The Access Tier attribute is set at the account level and applies to all objects in that account, so if you want both a hot tier and a cool tier, you will need two accounts. If the usage pattern of your data changes, you can switch between these access tiers at any time.
It looks like there shouldn’t be a performance difference between the two; it’s more of a cost difference, where you might be able to save money by choosing your tier wisely.
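The tier can be set when the account is created and changed later. A minimal sketch with the AzureRM cmdlets, where the resource group, account name, and location are all placeholders:

```powershell
# Create a Blob storage account in the Cool tier; names and location
# are placeholder assumptions.
New-AzureRmStorageAccount -ResourceGroupName "MyRG" `
                          -Name "mycoolblobstore" `
                          -Location "West US" `
                          -SkuName "Standard_LRS" `
                          -Kind "BlobStorage" `
                          -AccessTier Cool

# Switching tiers later is an account-level change.
Set-AzureRmStorageAccount -ResourceGroupName "MyRG" `
                          -Name "mycoolblobstore" `
                          -AccessTier Hot
```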
Walking through this, we just need to create a secure string for our password and then call the Set-AzureRmSqlServer cmdlet, passing the secure string to the -SqlAdministratorPassword argument. Easy as that, and we don’t even need to know what the previous password was. With this in mind, I also want to call out that you can only change the password, not the admin login name. While this is not such a big deal, be aware that once you pick an admin login name, you are stuck with it.
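A minimal sketch of that pattern (the server and resource group names, and of course the password, are placeholders):

```powershell
# Build a secure string and hand it to Set-AzureRmSqlServer.
# "MyRG", "myserver", and the password are placeholder assumptions.
$password = ConvertTo-SecureString "N3w-P@ssw0rd!" -AsPlainText -Force

Set-AzureRmSqlServer -ResourceGroupName "MyRG" `
                     -ServerName "myserver" `
                     -SqlAdministratorPassword $password
```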
Mike promises that his next blog post won’t take a month to publish. Here’s hoping he’s right.
I may blog about that solution in the future, but with the Future of SharePoint event rapidly approaching, my fellow BI Focal collaborator, Jason Himmelstein, convinced me that there was something more interesting we could do with this. How about near-real-time monitoring of Twitter conversations for the event? All of the pieces were in place.
We rolled up our sleeves, and in relatively short order, had a solution. Jason has written about the experience on his SharePoint Longhorn blog, and he has included the videos that we put together, so I can be a little less detailed in this post.