Press "Enter" to skip to content

Curated SQL Posts

Azure SQL Database and the Well-Architected Framework

Jason Bouska has a big announcement:

Microsoft Azure SQL Database is a fully managed cloud database (PaaS) that handles many database management tasks without user intervention. Tasks such as patching, upgrading, taking backups, and monitoring can be configured to the specific needs of the workload and are performed in the background. Azure SQL Database runs the latest stable version of SQL Server on a patched OS, with 99.99% availability. The intelligent automated functions built into the database free up the user to focus on other important tasks.

Today I am introducing the Azure Well-Architected Service Guide for Azure SQL Database. Like other service guides, this guide for Azure SQL Database contains design considerations, checklists, and detailed configuration recommendations that can assist cloud architects in deploying optimal Azure SQL workloads in line with the guiding tenets of the Well-Architected Framework: security, reliability, cost management, performance efficiency, and operational excellence.

I’ve found that the Well-Architected Framework (whose overloaded acronym is still annoying) works best once you’re far enough along to have a good idea of your workload characteristics, meaning it’s not for the pre-planning stage. Also, a full review might take hours or days and require several people to complete, not just a DBA.


Verifying a Database Backup Has Occurred

Lee Markum reviews backup logs:

As data professionals responsible for SQL Server, it is drilled into our heads that we need to take backups. But, how do we know we actually have backups available to us when we need them? How can we verify that a backup has been taken? Some types of auditing that an employer has to undergo might require proof of backups. How will you provide it?

Read on for three options. Note that this post is about finding the existence of backups, not checking to see if the backups are any good.
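
To give a flavor of what that looks like, SQL Server records backup history in msdb by default, so a query along these lines returns the most recent full backup per database. This is a minimal sketch, not necessarily one of Lee’s three options:

    SELECT d.name AS database_name,
           MAX(b.backup_finish_date) AS last_full_backup
    FROM sys.databases d
        LEFT OUTER JOIN msdb.dbo.backupset b
            ON b.database_name = d.name
            AND b.type = 'D'  -- 'D' = full; 'I' = differential; 'L' = log
    WHERE d.name <> 'tempdb'  -- tempdb is never backed up
    GROUP BY d.name
    ORDER BY last_full_backup;

A NULL in last_full_backup is the real red flag for an auditor: a database with no recorded full backup at all.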


Errors Sending Subscription E-mails in SSRS

Garry Bargsley sorts out an e-mail problem:

Recently, I was tasked with creating an email subscription to a new SSRS report in an environment that I was not familiar with. I have created my fair share of subscriptions in my day, and this one was very straightforward.

I found the report, clicked on Manage, and went to the Subscription page. Clicked on New Subscription and filled in all the information, easy peasy.

The subscription is ready to go when the schedule kicks in the next day, or so I thought.

Turns out that wasn’t quite the case. Read on to see what happened and how Garry fixed the problem.
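
If you ever need to pull subscription outcomes yourself, the Report Server catalog database tracks the last result of each subscription. Here is a quick sketch, assuming the default ReportServer database name; the failure filter is illustrative, since the status text varies by delivery extension:

    SELECT c.[Path] AS report_path,
           s.[Description] AS subscription_description,
           s.LastStatus,
           s.LastRunTime
    FROM ReportServer.dbo.Subscriptions s
        INNER JOIN ReportServer.dbo.[Catalog] c
            ON c.ItemID = s.Report_OID
    WHERE s.LastStatus NOT LIKE 'Mail sent%'  -- anything other than a clean send
    ORDER BY s.LastRunTime DESC;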


The Basics of Slowly Changing Dimensions

Soheil Bakhshi explains what slowly changing dimensions are:

Another example is when a customer’s address changes in a sales system. Again, the customer is the same, but their address is now different. From a data warehousing standpoint, we have different options to deal with the data depending on the business requirements, leading us to different types of SCDs. It is crucial to note that the data changes happen in the transactional source systems (in our examples, the HR system or a sales system). We move and transform the data from the transactional systems via extract, transform, and load (ETL) processes and land it in a data warehouse, where the SCD concept kicks in. SCD is about how changes in the source systems are reflected in the data warehouse. These kinds of changes in the source system do not happen very often, hence the term slowly changing. Many SCD types have been developed over the years; covering them all is out of the scope of this post, but for your reference, we cover the first three types as follows.

Click through for depictions of the first three types as well as implementation details and pains.
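
To make Type 2 (the keep-history option) concrete, here is a minimal T-SQL sketch using hypothetical DimCustomer and staging tables: expire the current row when the address changes, then insert a fresh current row. Brand-new customers fall out of the same insert:

    BEGIN TRANSACTION;

    -- Expire the current dimension row for customers whose address changed
    UPDATE d
    SET d.EndDate = SYSDATETIME(),
        d.IsCurrent = 0
    FROM dbo.DimCustomer d
        INNER JOIN stg.Customer s
            ON s.CustomerID = d.CustomerID
    WHERE d.IsCurrent = 1
        AND d.[Address] <> s.[Address];

    -- Insert a new current row for changed and brand-new customers
    INSERT INTO dbo.DimCustomer (CustomerID, [Address], StartDate, EndDate, IsCurrent)
    SELECT s.CustomerID, s.[Address], SYSDATETIME(), NULL, 1
    FROM stg.Customer s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DimCustomer d
                      WHERE d.CustomerID = s.CustomerID
                          AND d.IsCurrent = 1);

    COMMIT TRANSACTION;

Type 1, by contrast, would just be an UPDATE in place, losing the old address entirely.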


The Basics of Network Tracing

Will Aftring shows how to put together a network trace:

I know it can be tempting to spin up Wireshark and jump right into looking at traces, but asking questions is just as important as, if not more important than, the traces themselves.

I usually like to group these questions into two groups: technical and general. 

Note: I will be using the terms client and server to refer to the sender and receiver. The client is always the sender; the server is always the receiver.

Read on to get an idea of why we might create a network trace, what we intend to learn from it, and then how to do it.


Data Sharing and Secure Cleanrooms in Databricks

Craig Porteous reviews a couple of announcements from Data + AI Summit:

Having worked with many organisations across different industries and sectors, I’ve seen that sharing data with partners and vendors is always a pain point, and one that all too often results in both parties not quite getting what they want or need. This isn’t restricted to my experience, however, which is why Databricks announced Delta Sharing back at DATA + AI Summit 2021.

Coming to this year’s conference, Delta Sharing has been established as the foundation for many new features, with the announcements of Databricks Marketplace and Cleanrooms, for example, both built upon the Delta Sharing protocol. We’ll explore Cleanrooms below, and I’ll look at the Databricks Marketplace in its own post.

Read on for Craig’s thoughts on two of the bigger announcements at this year’s summit.


Updating a SQL Agent Job Step across Multiple Servers

Chad Callihan needs to deploy a job step change to multiple servers:

I recently needed to update the command of a job step for a job existing on multiple servers. Logging into each server to make the same changes can be time-consuming. Thankfully, there are better methods.

Let’s look at a couple of ways to update a job step more efficiently.

There is also the option of a TSX/MSX (target server / master server) arrangement, which would keep the jobs in sync as well. If the job is the same across each instance and you have enough cases where you need to think about this sort of thing, MSX is a good thing to look into.
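
On the T-SQL side, msdb.dbo.sp_update_jobstep is the procedure that does the work; point a multi-server query window (Registered Servers or a Central Management Server) at the relevant instances and run something like this once. This is a sketch rather than necessarily what Chad shows, and the job name and command are made up for illustration:

    EXEC msdb.dbo.sp_update_jobstep
        @job_name = N'Nightly ETL',  -- hypothetical job
        @step_id = 2,
        @command = N'EXECUTE dbo.LoadWarehouse @FullLoad = 0;';  -- the new step command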


The Basics of Snowflake Architecture

Arun Sirpal lays out the foundation of Snowflake DB’s architecture:

At the most basic level, Snowflake has three important components: the cloud services layer, the centralised storage layer, and the compute layer.

Cloud services – they call this the “brains” of Snowflake. This is where infrastructure management takes place, and where the (cost-based) optimiser, metadata management, and security (authentication and access control) are handled.

Read on to learn about the other two layers and how they fit together.
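
The practical upshot of separating compute from storage is that a warehouse (compute) is something you create, resize, and pause independently of the data. A minimal sketch in Snowflake SQL, with an illustrative name and sizing:

    CREATE WAREHOUSE IF NOT EXISTS reporting_wh
        WAREHOUSE_SIZE = 'XSMALL'  -- resize later without touching the data
        AUTO_SUSPEND = 60          -- idle seconds before pausing (and billing stops)
        AUTO_RESUME = TRUE;        -- wake automatically when a query arrives

Multiple warehouses like this can query the same centralised storage at the same time without contending for compute.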


T-SQL Tuesday 152 Round-Up

Deb Melkin casts a wide net:

It’s time to do the round up of this month’s T-SQL Tuesday entries. It was great to see so many people responding, including at least one new participant. I think there are a lot of kindred spirits here, as in we’ve all felt each other’s pain. (I know that I personally can relate to way too many of these things.) And I truly enjoyed reading everyone’s post.

Click through for the full rundown.
