Press "Enter" to skip to content

Locking Azure Resources

Stuart Moore shows us how we can lock Azure resources to prevent accidental mistakes:

A resource deletion may not sound like too big a deal if you’re deploying Infrastructure as Code: hey, we’ll just terraform apply again and it’ll pop back up.

In theory that’s a great idea, with just one big problem: the new resource isn’t the old resource!

For example, an Azure SQL Database server is a unique resource. If you delete one, you lose any backups you’ve taken, as they’re hosted on the server. Spinning up a new one isn’t going to get them back!

Stuart shows us how to put these locks in place. Because I have my doubts that everyone has all of their infrastructure prepped as Terraform scripts, Stuart’s point is even more relevant.


Automate VM Shutdown

Meagan Longoria has a script to shut off an Azure VM when a SQL Agent job finishes:

The runbook sets the Azure context to the appropriate subscription (especially important when you are a guest user in someone else’s tenant). Then it checks if the VM is started. If it is, it goes into a do-while loop. This task isn’t super time sensitive (it’s just to save money when the VM isn’t in use), so it’s waiting 60 seconds and then calling the child runbook to find out if my SQL Agent job is running. This makes sure that the child runbook is called at least once. If the result is that the job is not running, it stops the VM. If the job is running, the loop starts over, waiting 60 seconds before checking again. This loop is essentially polling the job status until it sees that the job is completed.

Click through for the script.


Managing Database Changes with Scripts

Ed Elliott continues a series on automated database deployments with a quick change process:

In my blog post here https://the.agilesql.club/blogs/ed-elliott/2019-06-10/steps-to-automated-database-deployments I described the steps you need to go through to build up your confidence that you are capable of deploying databases using automation. I mean, after all, knowing that it is possible to automate your deployments and having confidence that they will succeed are two very different things.

Even with the best tooling in the world, automated database deployments are still a struggle, and there is one key thing that you can do, no matter what tools you choose: make the deployments re-runnable. (Insert discussion here on the word idempotent and how it means re-runnable but sounds far cooler and more intellectual.) If you make your deployments re-runnable then you can, by their very definition, re-run them.

Click through for two options. I definitely prefer option number 1 as well.
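To make “re-runnable” a little more concrete, here’s the sort of guard clause I have in mind (a bare-bones sketch with made-up table and column names; Ed’s post walks through his two actual options):

IF NOT EXISTS
(
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer')
        AND name = N'LoyaltyTier'
)
BEGIN
    -- Only add the column if it is not already there, so the script can run
    -- any number of times without erroring out on the second pass.
    ALTER TABLE dbo.Customer ADD LoyaltyTier TINYINT NULL;
END;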


Analyzing Data by the Numbers

I am close to wrapping up my series on forensic accounting techniques:

Round number analysis focuses on the final digits of a number, specifically looking at how many 0s there are at the end after rounding off the decimal. We’re interested in the percentage of records in each rank (or Type). For example, $10,820 is a Type 1 because there is a single 0 at the end. $8,300 is a Type 2 because there are two 0s at the end. $11,000 is a Type 3. And $9,009 is a Type 0 because there are no 0s at the end.

With that in mind, how do we do this kind of calculation in SQL Server? With one of my favorite operators: APPLY.

Click through to watch me look at round numbers, last digits, and first digits.
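For a taste of the technique, here’s a simplified version of the round number classification (table and column names are invented for the example, and the CASE expression caps out at Type 3; the series itself goes further):

SELECT
    li.Amount,
    r.RoundedAmount,
    CASE
        WHEN r.RoundedAmount % 1000 = 0 THEN 3
        WHEN r.RoundedAmount % 100 = 0 THEN 2
        WHEN r.RoundedAmount % 10 = 0 THEN 1
        ELSE 0
    END AS RoundNumberType
FROM dbo.LineItem li
    CROSS APPLY
    (
        -- Round off the decimal portion first, per the definition above.
        SELECT CAST(ROUND(li.Amount, 0) AS BIGINT) AS RoundedAmount
    ) r;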


Dropping Database Objects with Aplomb

Pamela Mooney has a two-part series on dropping database objects. Part one includes a big setup script:

Some months ago, a fellow DBA came to me and expressed her concern over the normal housecleaning process that occurred at her company.  Developers or product owners would submit changes for objects which were no longer used, only to find that sometimes, they were.  No damage had been done, but the possibility of creating an emergent situation was there.

I could well understand her concern.  Most of us who have been DBAs for any length of time have probably come across a similar scenario.  I thought it would be fun to write a process that reduced the risk of dropping database objects and made rollbacks a snap.

Part 2 handles the actual drops:

Now, the objects in the table will be dropped after 120 days.  But what if you need them to be dropped before (or after)?  Both options work, but I’ll show you the before, and also what happens if there is a problem with the drop.

Check it out and drop with impunity. Or at least the bare minimum of punity.
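If you want the gist before clicking through, the shape of the approach is something like this (a stripped-down sketch with invented names; Pamela’s scripts do considerably more, including making rollback easy):

CREATE TABLE dbo.ObjectsToDrop
(
    SchemaName sysname NOT NULL,
    ObjectName sysname NOT NULL,
    DateLogged datetime2(0) NOT NULL
        CONSTRAINT DF_ObjectsToDrop_DateLogged DEFAULT SYSUTCDATETIME()
);

-- Candidates whose 120-day grace period has expired and are up for review.
SELECT SchemaName, ObjectName, DateLogged
FROM dbo.ObjectsToDrop
WHERE DateLogged < DATEADD(DAY, -120, SYSUTCDATETIME());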


Contrasting Flink with Kafka Streams

Sourabh Verma contrasts Apache Flink with Kafka Streams:

Initially, I would like you all to focus on a few questions before comparing the frameworks:
1. Is there any comparison or similarity between Flink and Kafka Streams?
2. What could be better in Flink than in Kafka Streams?
3. Is it the problem or the system requirement that drives using one over the other?

I’m generally happy with both technologies as well as Spark Streaming. But as Sourabh points out, there are differences to keep in mind.


Against Surrogate Keys on Junction Tables

Lukas Eder explains the costs of surrogate keys on tables intended to join multiple tables together:

There is really no point in adding another column FILM_ACTOR_ID or ID for an individual row in this table, even if a lot of ORMs and non-ORM-defined schemas will do this, simply for “consistency” reasons (and in a few cases, because they cannot handle compound keys).

Now, the presence or absence of such a surrogate key is usually not too relevant in everyday work with this table. If you’re using an ORM, it will likely make no difference to client code. If you’re using SQL, it definitely doesn’t. You just never use that additional column.

But in terms of performance, it might make a huge difference!

Lukas makes a good argument here.
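To picture the difference, here’s roughly what the two designs look like in T-SQL (names are my approximation of the film/actor schema Lukas uses, and the parent Film and Actor tables are assumed to exist):

-- Compound primary key on the two foreign keys: the clustered index is exactly
-- the data you search on, with no extra column or index to carry around.
CREATE TABLE dbo.FilmActor
(
    FilmID INT NOT NULL CONSTRAINT FK_FilmActor_Film REFERENCES dbo.Film (FilmID),
    ActorID INT NOT NULL CONSTRAINT FK_FilmActor_Actor REFERENCES dbo.Actor (ActorID),
    CONSTRAINT PK_FilmActor PRIMARY KEY (FilmID, ActorID)
);

-- Surrogate key version: every row pays for an extra column and an extra index
-- that queries joining films to actors never actually use.
CREATE TABLE dbo.FilmActorSurrogate
(
    FilmActorID INT IDENTITY(1, 1) CONSTRAINT PK_FilmActorSurrogate PRIMARY KEY,
    FilmID INT NOT NULL CONSTRAINT FK_FilmActorSurrogate_Film REFERENCES dbo.Film (FilmID),
    ActorID INT NOT NULL CONSTRAINT FK_FilmActorSurrogate_Actor REFERENCES dbo.Actor (ActorID),
    CONSTRAINT UQ_FilmActorSurrogate UNIQUE (FilmID, ActorID)
);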


Thoughts on Certification

Eugene Meidinger is certifiable:

This being a complex topic, I thought I’d lay out the various factors to give a more comprehensive answer than you can easily fit in a tweet.

So the first two questions we need to answer are “Why do certs exist?” and “Why do people take them?”. Without these, we can’t give a good answer to whether you should take them. Certifications often exist for reasons that have nothing to do with your personal best interest. It is necessary to understand that fact.

Giving the economist’s spin, certifications are imperfect signals of reputation. When you know nothing else about a candidate, business partner, vendor, or ranting homeless person on the street, that cert can let you update your priors about the person. The exclusivity of the certification goes a long way in building credence: the MCM (or MCSM) has such a positive reputation even years after its cancellation because it was so difficult an exam that the only way a person could pass is if that person really knew the topic extremely well. By contrast, the old MCSE certifications from the early 2000s were a joke because anybody could memorize a brain dump, spit out answers, and get a cert.

The economist in me also says that certifications tend to be a net drain because you’re spending time on an imperfect signal when there are probably better imperfect signals out there. Your blog, YouTube/Twitch channel (assuming you’re not just playing Slay the Spire all day), and GitHub repo are going to tell me more about your interests and technical capabilities.

Read what Eugene has to say. I think we agree on the broad strokes but I’m probably more in the “not worth it” camp than he is with the exception of cases where it’s necessary to land a business contract (e.g., needing to be a Microsoft Gold Partner).


Disable Lightweight Pooling

Randolph West explains why enabling lightweight pooling in SQL Server is almost always a bad idea:

When can I enable lightweight pooling then?
Don’t. But if you must, these are the conditions under which Microsoft suggests it may be useful:
– Large multi-processor servers are in use.
– All servers are running at or near maximum capacity.
– A lot of context switching occurs (greater than 20,000 per second).

We can measure context switching with a performance counter in Performance Monitor on Windows (Context Switches/sec), so the last two items on this list can be monitored.

I’m sure there were a few customers who benefited from this, but I’ve neither seen nor heard of a case where it did actually help.
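If you want to confirm where your instances stand, something like this does the trick (lightweight pooling is an advanced option, and changing it only takes effect after a service restart):

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- config_value and run_value should both be 0 (disabled) on almost every system.
EXEC sys.sp_configure N'lightweight pooling';

-- If someone has flipped it on, this puts it back to the default.
EXEC sys.sp_configure N'lightweight pooling', 0;
RECONFIGURE;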
