Press "Enter" to skip to content

Category: Cloud

Interacting with Microsoft Graph API via Synapse

Paul Hernandez starts a new series:

In this and the next post I want to show you how to connect to the Microsoft Graph API, request some data, process it and store it in a database using Synapse Analytics. 

This first post presents a sample use case, briefly introduces the Graph API, and shows how to create a linked service to it and how to start querying data. The next post will describe a sample Synapse pipeline that grabs some data and copies it into some target tables. Finally, I will create a sample query to showcase the newly imported data.

Because it’s a common point of confusion: the Graph API is completely different from the idea of graph databases.
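To give a sense of what the linked service is wrapping, here’s a minimal Python sketch of a direct Graph API call via the client credentials flow. This is my own illustration rather than Paul’s code; the msal and requests libraries, and all of the IDs and the secret, are placeholder assumptions.

```python
# A minimal sketch of calling the Microsoft Graph API directly via the
# client credentials flow. All IDs and the secret are placeholders; in
# Synapse these would live in the linked service / Key Vault instead.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "app-registration-secret"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# The app authenticates as itself, using whatever application permissions
# its app registration has been granted.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "token acquisition failed"))

resp = requests.get(
    "https://graph.microsoft.com/v1.0/users?$top=5",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()

# Graph collection responses wrap their results in a "value" array.
for user in resp.json()["value"]:
    print(user["displayName"], user.get("mail"))
```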


Premium Azure SQL DB Performance

Reitse Eskens is moving on up:

The standard tier starts at 125 DTUs and goes up to 4000. DTUs are made up of a magic mix of CPU, memory, read IOPS and write IOPS. An IOP (Input Output oPeration) should be a 4 KB (disk cluster size) read or write. 125 DTU translates to 500 KB/sec, going up to 32,000 KB/sec at 4000 DTU. As we’re used to data pages, which are 8 KB in size, you could say these databases are able to pull 62 to 4,000 pages per second from disk. When there are simultaneous writes, you’ll share the performance. At least that’s my interpretation of the IOP. For the DTU part, I’m still struggling to get a good grip on what it exactly is, beyond the magic mix.

It’s also a good idea to compare this to what the Standard tier has to offer. The general data patterns look similar with respect to elbows but the magnitudes are quite different, with Premium P1 starting out around Standard S4 in the test for insertion but more like S3 for selects.
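To make the arithmetic in the quote concrete, here’s a trivial back-of-the-envelope sketch (my own, not Reitse’s) converting the documented throughput endpoints into 8 KB data pages per second:

```python
# Back-of-the-envelope: convert the throughput range quoted for the DTU
# tiers into 8 KB data pages per second.
PAGE_SIZE_KB = 8

for dtu, kb_per_sec in [(125, 500), (4000, 32_000)]:
    pages_per_sec = kb_per_sec / PAGE_SIZE_KB
    print(f"{dtu:>4} DTU: {kb_per_sec:>6} KB/s = {pages_per_sec:,.1f} pages/s")

#  125 DTU:    500 KB/s = 62.5 pages/s
# 4000 DTU:  32000 KB/s = 4,000.0 pages/s
```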


SQL Server Backup and Restore Operations for S3

Hugo Queiroz shows off something new in SQL Server 2022:

Backup and restore to Simple Storage Service (S3)–compatible object storage is a new feature introduced in SQL Server 2022. It grants the user the capability to back up or restore their databases using S3-compatible object storage, whether that be on-premises or in the cloud.

There are some differences from other backup operations, so you should definitely read up on it before using it. One interesting side benefit I got to try out recently is that Pure Storage’s FlashBlade product has an S3 API, allowing you to use that interface for backup/restore operations as well as data virtualization.
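For a feel of the new syntax, here’s a hedged sketch of the credential-plus-backup pattern against an S3-compatible endpoint, driven from Python with pyodbc. The server, bucket, and keys are all placeholders; check the documentation for the full option list.

```python
# A minimal sketch of BACKUP TO URL against S3-compatible storage in
# SQL Server 2022. Server name, bucket endpoint, and keys are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql2022host;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;",
    autocommit=True,  # BACKUP/RESTORE cannot run inside a transaction
)
cursor = conn.cursor()

# One-time setup: the credential name is the URL prefix it covers, and the
# secret is the access key and secret key separated by a colon.
cursor.execute("""
CREATE CREDENTIAL [s3://s3.mybucketendpoint.com/sqlbackups]
WITH IDENTITY = 'S3 Access Key',
     SECRET   = 'myAccessKeyId:mySecretAccessKey';
""")

# The backup itself then looks like any other BACKUP TO URL.
cursor.execute("""
BACKUP DATABASE [WideWorldImporters]
TO URL = 's3://s3.mybucketendpoint.com/sqlbackups/WideWorldImporters.bak'
WITH COMPRESSION, STATS = 10;
""")
```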


Azure SQL MI Error Loading Backup Retention Policies

Paloma Garcia Martin troubleshoots an error:

When you try to create a new database (*) in the Azure Portal using unsupported characters, you will see an error indicating which characters you cannot use in the database name.

But if you use SSMS, the tool doesn’t include this character checking and will not prevent you from using these unsupported characters in the database name.

Click through for an example of this error in action.
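If you wanted to guard against this on the SSMS side, a pre-flight check on proposed names is simple enough. A minimal sketch follows; note that the disallowed character set here is purely illustrative, not the authoritative list from the portal’s error message.

```python
# Illustrative pre-flight check for a proposed database name. The
# disallowed set is a made-up example, not the authoritative list from
# the Azure documentation.
DISALLOWED = set('<>*%&:\\/?')

def check_database_name(name: str) -> list[str]:
    """Return any characters in the name that the portal would reject."""
    return sorted({ch for ch in name if ch in DISALLOWED})

bad = check_database_name("Sales/Reporting?")
if bad:
    print(f"Rejected characters: {bad}")  # Rejected characters: ['/', '?']
```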


CI/CD for Synapse Link for SQL Server 2022

Kevin Chant makes some changes:

In another post I showed how you can use CI/CD to update both ends of Azure Synapse Link for SQL Server 2022 using Azure DevOps, allowing you to update both a SQL Server 2022 database and an Azure Synapse Analytics dedicated SQL pool in the same deployment pipeline.

By my own admission, that method can become complex. Plus, I showed some more advanced concepts in that post. With this in mind, I have decided to cover an easier way in this post.

Read on for the simpler technique.


Understanding Purview Pricing

Rolf Tesmer disambiguates:

Like all services in Azure, there are costs associated with using the service, and naturally Microsoft Purview is no different. If you’re interested in reading about the standard pricing model for Microsoft Purview, it has been outlined here, and it follows a similar layout to all Azure price models.

However, as a result of such a broad range of capabilities, its pricing model is one of the more difficult to understand!

Read on for a PDF which hits the various charges you’ll see.


Performance on Azure SQL DB Standard Tier

Reitse Eskens continues a series on performance comparisons for Azure SQL DB:

This tier is more expensive than Basic but, starting at 12 euros per month and going up to 3723 euros, you have a wider range of ways to spend your money and, with that, a wider performance range. The standard tier is suited for general purpose workloads and can be compared with the general purpose tier, except that the latter works with cores while the standard tier works with DTUs. The concept of a DTU is a difficult one, as the documentation states it’s a blend of CPU, memory, reads and writes. If you hit a limitation, you’ll be throttled. You can read more about the DTU model here.

One thing I wish Reitse had done in the images was to show them in log scale: there’s a consistent L curve for each (which is good), but it makes it hard to see anything after about Standard S4.
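For what it’s worth, replotting on a log scale is a one-liner in most charting tools. Here’s a matplotlib sketch with made-up runtimes in the spirit of those charts, showing linear and log axes side by side:

```python
# Made-up numbers in the spirit of the DTU charts: a log-scale y-axis
# spreads out the tiers that a linear axis squashes into the elbow.
import matplotlib.pyplot as plt

tiers   = ["S0", "S1", "S2", "S3", "S4", "S6", "S7", "S9", "S12"]
seconds = [4000, 1600, 800, 400, 200, 100, 50, 25, 12]  # hypothetical runtimes

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
for ax, scale in [(ax_lin, "linear"), (ax_log, "log")]:
    ax.plot(tiers, seconds, marker="o")
    ax.set_yscale(scale)
    ax.set_title(f"{scale} scale")
    ax.set_ylabel("runtime (s)")
plt.tight_layout()
plt.show()
```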


Cost Optimization Tips for Azure

Marc Kean saves us money:

I constantly see customers with so many managed disks which are unattached and orphaned. The recommendation here would be to delete these if you know you can. Otherwise, from a VM within Azure in the same region as the disks (to save on egress costs), use Azure Storage Explorer to download the managed disks as VHD files, then copy them to an Azure Storage account and mark the storage account as Archive (tape storage backend).

Archive storage is estimated at less than 10% of the cost of managed disk storage. Note that VHDs can be brought back and imported again as managed disks at any time if they are needed.

Pricing can be confirmed by using the Azure Pricing Calculator.

There’s a lot of solid guidance in here.
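On the unattached-disks point, taking an inventory first is cheap. Here’s a read-only sketch using the Azure SDK for Python; the subscription ID is a placeholder, and nothing gets deleted:

```python
# List managed disks that nothing is attached to, with their sizes, so you
# can decide what to delete or archive. Read-only: nothing is modified.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

total_gb = 0
for disk in client.disks.list():
    if disk.disk_state == "Unattached":
        size_gb = disk.disk_size_gb or 0
        total_gb += size_gb
        print(f"{disk.name:40} {size_gb:>6} GB  {disk.location}")

print(f"Total unattached: {total_gb} GB")
```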


Rebuilding a Dedicated SQL Pool via Azure DevOps

Sarath Sasidharan clones an Azure Synapse Analytics dedicated SQL pool:

There are many scenarios where you want to create a new Synapse dedicated SQL pool environment based on an existing one. This may be required when you need to create a development or test environment based on your production environment, copying complete schemas without copying the data.

Note that this process won’t move the data itself—given that you’re starting with terabytes for an effective dedicated SQL pool, trying to create a bacpac would be an exercise in misery.
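If you’re curious what the schema-only piece looks like outside of a full Azure DevOps setup, SqlPackage handles it from any shell. Here’s a hedged sketch driving it from Python: the connection strings are placeholders, I’m assuming SqlPackage is on the PATH, and Extract takes schema without table data by default.

```python
# Schema-only clone via SqlPackage: extract a dacpac from the source pool
# (Extract skips table data by default), then publish it to the target.
# Connection strings are placeholders.
import subprocess

SOURCE = ("Server=prod-synapse.sql.azuresynapse.net;Database=prodpool;"
          "Authentication=Active Directory Default;")
TARGET = ("Server=dev-synapse.sql.azuresynapse.net;Database=devpool;"
          "Authentication=Active Directory Default;")

# Pull the schema out of the production pool into a dacpac.
subprocess.run(
    ["SqlPackage", "/Action:Extract",
     f"/SourceConnectionString:{SOURCE}",
     "/TargetFile:prodpool.dacpac"],
    check=True,
)

# Deploy that schema to the (empty) development pool.
subprocess.run(
    ["SqlPackage", "/Action:Publish",
     "/SourceFile:prodpool.dacpac",
     f"/TargetConnectionString:{TARGET}"],
    check=True,
)
```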
