Press "Enter" to skip to content

Category: Cloud

Cost Governance via Azure Policy

Felipe Binotto helps us save a bit of money in Azure:

Cost governance is an essential aspect of managing any cloud infrastructure. Azure Policy is a powerful tool that can help implement cost governance measures within your Azure environment. With Azure Policy, you can define and enforce rules to control costs, monitor usage, and optimize your resources.

These policies can be used to prevent the creation of resources that are not compliant with cost-saving measures or to apply tags to resources that identify them as cost-related resources. You can also use policies to track resource usage and generate alerts when certain thresholds are reached, allowing you to take proactive measures to optimize your resources and control costs.

Throughout this article I will provide some examples of Azure Policies you can use for cost optimization.

There’s some solid advice in here. Most of it boils down to knowing what you have running so things don’t slip through the cracks.
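
To make that concrete, here is a hedged sketch of the two most common cost-control rules, a VM SKU allow-list and a required cost tag, expressed as Python dicts mirroring the JSON of an Azure Policy definition. The alias comes from the built-in “Allowed virtual machine size SKUs” policy; the SKU list and tag name are illustrative placeholders, not taken from Felipe’s article.

```python
# Deny VM sizes outside an approved (cheaper) list.
deny_large_vm_skus = {
    "mode": "All",
    "policyRule": {
        "if": {
            "allOf": [
                {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
                {
                    # Alias used by the built-in "Allowed virtual machine size SKUs" policy
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "notIn": ["Standard_B2s", "Standard_D2s_v5", "Standard_D4s_v5"],
                },
            ]
        },
        "then": {"effect": "deny"},
    },
}

# Deny any resource created without a cost-center tag, so spend can be
# attributed later. "Indexed" mode limits evaluation to taggable resources.
require_cost_center_tag = {
    "mode": "Indexed",
    "policyRule": {
        "if": {"field": "tags['costCenter']", "exists": "false"},
        "then": {"effect": "deny"},
    },
}
```

Register rules like these as policy definitions (portal, CLI, or SDK) and assign them at subscription or management-group scope; the deny effect then blocks non-compliant deployments at submission time.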


Putting tempdb on an Azure VM Temp Disk

Daniel Hutmacher uses a temp disk for a temp database:

Almost all Azure virtual machine sizes come with a temporary disk. The temporary disk is a locally attached SSD drive that comes with a couple of desirable features if you’re installing a SQL Server on your VM:

  • Because it is locally attached, it has lower latency than regular disks.
  • IO and storage are not billed like regular storage.

As the name implies, the temporary disk is not persistent, meaning that it will be wiped if you shut down your VM or if the VM moves to another VM host (as part of maintenance or troubleshooting). For that reason, we never want to put anything on the temporary disk that we need to keep.

I’d say this was a lot more popular several years ago, back when spinning disk was the default for Azure storage. There can still be benefits from doing this, though if you’re using Premium storage with high IOPS, the biggest remaining benefit is around latency.
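
For reference, relocating tempdb is a pair of ALTER DATABASE statements followed by a service restart. A minimal sketch, run through pyodbc for illustration; the connection string and D:\tempdb paths are assumptions, and real installs often have several data files:

```python
import pyodbc

# ALTER DATABASE cannot run inside a user transaction, so use autocommit.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes;TrustServerCertificate=yes",
    autocommit=True,
)
cur = conn.cursor()

# Point the default tempdb files at the local temporary disk. Repeat the
# first statement for each additional data file (temp2, temp3, ...).
cur.execute(r"ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb\tempdb.mdf');")
cur.execute(r"ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\tempdb\templog.ldf');")

# The change takes effect at the next service restart. Because the temporary
# disk is wiped on deallocation, something must recreate D:\tempdb before SQL
# Server starts (the SQL IaaS Agent extension or a startup task can do this).
```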


Changes to the IaaS Agent for SQL Server on Azure VMs

Aditya Badramraju has an announcement:

SQL Server on Azure Virtual Machines is powered by the SQL IaaS Agent extension which provides many features that make managing your SQL Server easy. This blog will discuss new features and changes we’ve recently released in this extension. 

Click through for those changes. I was prepared, upon seeing the “Retiring Modes” section, to have a cynical response about forcing everyone into what was effectively Full mode, but that proto-take ended up being way off base and the truth is much nicer.


Well-Architected Framework Cost Optimization

Brandon Wilson cuts costs:

Hi everyone! Brandon Wilson (Cloud Solution Architect/Engineer) here to follow up on the post I authored previously for the Well-Architected Cost Optimization Assessment offering, with another customer offering we have, known as the Well-Architected Cost Optimization Implementation. This offering can be considered a continuation/“part 2” of sorts for the Well-Architected Cost Optimization Assessment, where the goal is to help you implement some of the findings relating to Azure Reservations, Azure Savings Plans, Azure Hybrid Benefits, along with cleaning up some of that cloud waste sitting around.

Just as before (and in case you are a new reader), we’ll touch a little bit on the Azure Well-Architected Framework (WAF), along with the Cloud Adoption Framework (CAF), and then go over what is covered in the Well-Architected Cost Optimization Implementation offering itself.

Some of this is Microsoft-internal tooling, though the WAF assessments themselves are available to the general public and well worth going through.
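
One piece of the “cloud waste” bucket is easy to check yourself: managed disks that aren’t attached to anything, which keep billing for their full provisioned size. A minimal sketch with the Python management SDK, assuming the azure-identity and azure-mgmt-compute packages; the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every managed disk in the subscription and flag the orphans.
for disk in compute.disks.list():
    if disk.disk_state == "Unattached":
        print(f"{disk.name}: {disk.sku.name}, {disk.disk_size_gb} GB")
```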


Provisioning an Azure Key Vault

Andy Leonard takes us through building an Azure Key Vault:

One way to keep confidential information confidential is to store confidential values in Azure Key Vault.

This post describes one way to provision an Azure Key Vault.

In addition to other values, I use Azure Key Vault to store login usernames – as well as passwords. Why? I don’t like storing half of the login information – the username – in plain text. In case I haven’t shared this with you, you should know I use a password generator to create usernames and passwords. In Azure, it’s common to use the same username and password in multiple locations, so when I change access credentials (you are regularly changing passwords, at least, right?), I can update both values in a central location.

One nice thing about most Azure services is that they make Key Vault access fairly easy, especially if you use the managed identity account to grant vault access.
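
As a sketch of that pattern, here is what the client side looks like when a managed identity reads the username/password pair Andy describes, using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a managed identity automatically when this
# runs on an Azure service that has been granted access to the vault.
secrets = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

username = secrets.get_secret("sql-login-username").value
password = secrets.get_secret("sql-login-password").value

# Rotating credentials means updating the central copies, not every consumer:
# secrets.set_secret("sql-login-password", new_password)
```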


TDE with Customer-Managed Keys in Azure SQL Database

Mirek Sztajno announces a public preview:

In this scenario, a key that is stored in a customer-owned and customer-managed Azure Key Vault (AKV) can be used for each database within a server to encrypt the database encryption key (DEK), called the TDE protector. The feature provides the ability to add keys, remove keys, and change the user-assigned managed identity (UMI) for each database. For more information on identity management, see Managed identity types in Azure.

Click through for more details on how it works and what’s currently not supported in the public preview.


Automating Self-Hosted Integration Runtime Deployment

Jonathan D’Aloia doesn’t want to click next-next-next:

Welcome to my blog on how to fully automate the deployment of a Self-Hosted Integration Runtime using Terraform!

The title of this blog is very much self-explanatory, but I hope you find the contents useful and are able to apply this to your projects in some respect.

Click through for a brief overview of self-hosted integration runtimes, the process to follow, and a link to the repo.
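
Jonathan’s automation is Terraform-based; as a rough Python-SDK equivalent of the two key steps (assuming the azure-mgmt-datafactory package; every resource name below is a placeholder), the pipeline has to create the self-hosted IR inside the factory and then fetch the authentication key that the installer on the VM uses to register the node:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

# Step 1: declare the self-hosted integration runtime in the data factory.
adf.integration_runtimes.create_or_update(
    "my-rg",
    "my-factory",
    "my-shir",
    {"properties": {"type": "SelfHosted", "description": "created by automation"}},
)

# Step 2: grab an auth key; the IR installer on the VM consumes it to
# register the node against the factory.
keys = adf.integration_runtimes.list_auth_keys("my-rg", "my-factory", "my-shir")
print(keys.auth_key1)
```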


Change Data Capture and the Cosmos DB Analytical Store

Mark Kromer and Revin Chalil show off an interesting preview feature:

Making it super-easy to create efficient and fast ETL processing in the cloud, Azure Data Factory has invested heavily in change data capture features. Today, we are super-excited to announce that Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB, in public preview!

This capability, available in public preview, allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from the analytical store. CDC is seamlessly integrated with Azure Synapse Analytics and Azure Data Factory, providing a scalable no-code experience for high data volumes. As CDC is based on the analytical store, it does not consume provisioned RUs, does not affect the performance of your transactional workloads, provides lower latency, and has lower TCO.

Click through to see how it works.


Choosing a SKU for Azure Data Explorer

Brian Bønk makes a choice:

When creating the clusters from the Azure portal, you are presented with 3 options when choosing the compute specification.

The compute specification is the method of setting up the clusters for the specific workload you are planning to put on the Kusto cluster.

The portal gives you these three options:

Read on for the options, as well as some recommendations on when you might choose each.


Restoring an Azure SQL Database

Andrea Allred recovers from a mistake:

Recently, the wrong table got dropped and we needed to bring it back. I had never done a restore in an Azure Managed Database before so I learned something really fast.

Click through for the process. And yeah, it is quite easy, though I’ve noticed that restore times are a bit slower than if you were using local hardware on-premises.

One quirk with database restores in Azure SQL DB: you can’t restore over an existing database, something a client wanted me to do last week. What you can do, however, is restore the database under a new name, so we might have messedupdb and then messedupdb_restore. Well, in this case, messedupdb had no changes since “the incident,” so what we were able to do was rename messedupdb to messedupdb_dropme and rename messedupdb_restore to messedupdb. Azure SQL DB happily rolls on with this, and after ensuring that the database was now in prime condition, we could drop the old version. It’s a little more complex than simply restoring over the existing database, but all the relevant metadata Azure SQL DB needs stayed in sync along the way, so the process was smooth.
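
For reference, the rename dance is plain T-SQL run against the logical server’s master database. A minimal sketch via pyodbc; the server name and credentials are placeholders, and the point-in-time restore itself was done beforehand (portal, PowerShell, or CLI):

```python
import pyodbc

# Connect to master on the logical server; ALTER DATABASE must run there
# and cannot be wrapped in a user transaction, hence autocommit.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=server_admin;PWD=<placeholder>;Encrypt=yes",
    autocommit=True,
)
cur = conn.cursor()

# Move the damaged original aside, then give the restored copy its name.
cur.execute("ALTER DATABASE [messedupdb] MODIFY NAME = [messedupdb_dropme];")
cur.execute("ALTER DATABASE [messedupdb_restore] MODIFY NAME = [messedupdb];")

# Only after verifying the restored copy:
# cur.execute("DROP DATABASE [messedupdb_dropme];")
```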
