Press "Enter" to skip to content

April 7, 2023

Well-Architected Framework Cost Optimization

Brandon Wilson cuts costs:

Hi everyone! Brandon Wilson (Cloud Solution Architect/Engineer) here to follow up on the post I authored previously for the Well-Architected Cost Optimization Assessment offering, with another customer offering we have known as the Well-Architected Cost Optimization Implementation. This offering can be considered a continuation, or "part 2" of sorts, of the Well-Architected Cost Optimization Assessment, where the goal is to help you implement some of the findings relating to Azure Reservations, Azure Savings Plans, and Azure Hybrid Benefit, along with cleaning up some of that cloud waste sitting around.

Just as before (and in case you are a new reader), we’ll touch a little bit on the Azure Well-Architected Framework (WAF), along with the Cloud Adoption Framework (CAF), and then go over what is covered in the Well-Architected Cost Optimization Implementation offering itself.

Some of this is Microsoft-internal tooling, though the WAF assessments themselves are available to the general public and well worth going through.


Audit Operations with Triggers in PostgreSQL

Ryan Lambert creates a trigger:

I rarely use triggers. I don’t hate triggers, I just think they should be used sparingly. Like any specialized tool, you should not expect to use triggers for every occasion where they could be used. However… there is one notable use case where I really like triggers: audit tables. Part of the magic of using triggers for auditing data changes in Postgres is the JSON/JSONB support available.

I think this attitude toward triggers is correct. Triggers are really useful, but they come with a few drawbacks: tools don’t make it readily obvious which tables have triggers associated with them, the effects of triggers stay “hidden” until execution, and execution itself can go wrong (e.g., performance issues, incorrectly modifying data, or trigger code written as though only one row will ever change at a time).
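To make the audit-table use case concrete, here is a minimal sketch of a JSONB-based audit trigger in Postgres. The table and function names are my own for illustration, not taken from Ryan’s post:

```sql
-- Hypothetical audit table: one row per change, old/new versions stored as JSONB.
CREATE TABLE audit_log (
    audit_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    table_name text NOT NULL,
    operation  text NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION audit_changes()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation, old_row, new_row)
    VALUES (TG_TABLE_NAME, TG_OP,
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$;

-- FOR EACH ROW fires once per affected row, so multi-row statements are
-- audited correctly (one of the pitfalls mentioned above).
CREATE TRIGGER orders_audit
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION audit_changes();
```

Because `to_jsonb()` serializes the whole row, the audit table needs no schema changes when the audited table gains or loses columns.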


Blamestorming with PostgreSQL Errors

Robert Haas points the finger:

If an error message shows up in the PostgreSQL log, what program is malfunctioning? It’s easy to conclude that the answer is PostgreSQL, but that’s too simplistic.

For example, suppose these messages show up in my log file:

ERROR:  new row for relation "orders" violates check constraint "orders_d_check"
DETAIL:  Failing row contains (1975-03-16, 42).
STATEMENT:  insert into orders values ('1975-03-16', 42);

Error logs are always a “reading is fundamental” scenario and an exercise in applied logic. Sometimes the exercise is harder than others, such as the notorious “String or binary data would be truncated” in SQL Server or “Object reference not set to an instance of an object” in .NET. But in many cases, there’s enough information at least to get an idea of culprit and cause.


Building a Time Dimension in Power BI

Allison Kennedy shares a variant of DimTime:

 Today’s post is going to be short and sweet. I’m simply publishing my version of a DimTime table for Power BI.

Thanks to Radacad for posting a great article on why we need a DimTime table, along with their script for creating one in Power BI. 

If you don’t already have a DimDate table, you can find my version in my DimDate: What, Why and How blog article. I update the script occasionally with new requests. 

Click through for the script.


Optimizing for Mediocre

Erik Darling is always optimal:

Using the OPTIMIZE FOR UNKNOWN hint, or declaring variables inside of a code block to be used in a where clause have the same issue, though: they make SQL Server’s query optimizer make bad guesses, which often lead to bad execution plans.

You can read great detail about that here.

Read on for a bit of a deserved rant and an example to show why OPTIMIZE FOR UNKNOWN often doesn’t solve the problem.
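As a quick illustration of the pattern Erik describes (table and column names here are hypothetical), both of the following give the optimizer an average-density estimate rather than a plan tailored to the actual runtime value, which is exactly what hurts on skewed data:

```sql
-- Local variable: its value is unknown at compile time, so the optimizer
-- guesses row counts from column density instead of the histogram for 42.
DECLARE @CustomerID int = 42;

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID;

-- Same effect, made explicit with the hint: the sniffed parameter value
-- is ignored and the density-based "average" estimate is used instead.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID
OPTION (OPTIMIZE FOR UNKNOWN);
```

If one customer has 5 orders and another has 5 million, that single “average” plan is likely to be mediocre for both.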


Provisioning an Azure Key Vault

Andy Leonard takes us through building an Azure Key Vault:

One way to keep confidential information confidential is to store confidential values in Azure Key Vault.

This post describes one way to provision an Azure Key Vault.

In addition to other values, I use key vault to store login usernames – as well as passwords. Why? I don’t like storing half of the login information – the username – in plain text. In case I haven’t shared this with you, you should know I use a password generator to create usernames and passwords. In Azure, it’s common to use the same username and password in multiple locations, so when I change access credentials (You are regularly changing passwords, at least, right?), I can update both values in a central location.

One nice thing about most Azure services is that they make Key Vault access fairly easy, especially if you use the managed identity account to grant vault access.
