Press "Enter" to skip to content

Category: Uncategorized

Recovering Deleted Indexes

Eric Cobb has an enhancement to the SQL Server Metrics Pack:

I recently had a case at work where a database was restored, and several important indexes were accidentally removed. But because we are using SQL Server Metrics Pack to track the indexes on that server, we were easily able to recover all of the deleted indexes.

In order to spotlight the feature a little, I wanted to answer a few questions and provide some queries to help explain how to use this new feature.

Read on to learn more, and check out the GitHub repo as well.
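
The Metrics Pack has its own tables and procedures, but the underlying idea is to snapshot index metadata so that definitions can be rebuilt after an accidental drop. As a rough sketch of that idea (not the project's actual schema; STRING_AGG requires SQL Server 2017 or later):

    -- Illustrative only: capture enough metadata to rebuild key index definitions later.
    SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
            OBJECT_NAME(i.object_id)        AS TableName,
            i.name                          AS IndexName,
            i.type_desc                     AS IndexType,
            i.is_unique                     AS IsUnique,
            STRING_AGG(c.name, N', ')
                WITHIN GROUP (ORDER BY ic.key_ordinal) AS KeyColumns
    FROM sys.indexes AS i
        INNER JOIN sys.index_columns AS ic
            ON ic.object_id = i.object_id
            AND ic.index_id = i.index_id
        INNER JOIN sys.columns AS c
            ON c.object_id = ic.object_id
            AND c.column_id = ic.column_id
    WHERE i.index_id > 0                  -- skip heaps
        AND ic.is_included_column = 0     -- key columns only
    GROUP BY i.object_id, i.name, i.type_desc, i.is_unique;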

Generating Unique File Names

Slava Murygin gives us unique file names:

It is a pretty common task to generate new files with a timestamp in their names. That gives you the ability to identify them easily, sort them, and make them fairly unique. However, if you have a very busy process, it is possible that a duplicate name will be produced and you might lose some data.

To avoid that situation, I've come up with the following solution.

It is difficult to envision this solution going wrong.
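
Not necessarily Slava's exact approach, but the general shape of a collision-resistant name is a high-resolution timestamp plus a random suffix. A minimal T-SQL sketch (the file name pattern is made up):

    -- Timestamp to the millisecond, plus the first 8 characters of a GUID so
    -- that two files generated in the same millisecond still get distinct names.
    DECLARE @FileName nvarchar(100) =
        N'Export_'
        + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss_fff')
        + N'_'
        + LEFT(CONVERT(nvarchar(36), NEWID()), 8)
        + N'.csv';

    SELECT @FileName;  -- e.g., Export_20190613_141502_347_9F1C2B3A.csv

The GUID fragment is what saves you when two files land in the same millisecond.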

Locking Azure Resources

Stuart Moore shows us how we can lock Azure resources to prevent accidental mistakes:

A resource deletion may not sound like too big a deal if you're deploying Infrastructure as Code: hey, we'll just terraform apply again and it'll pop back up.

In theory that’s a great idea, just with one big problem. The new resource isn’t the old resource!

For example, an Azure SQL Database server is a unique resource. If you delete one, you lose any backups you've taken, as they're hosted on the server. Spinning up a new one isn't going to get them back!

Stuart shows us how to take these locks. Because I have my doubts that everyone has all of their infrastructure prepped as terraform scripts, Stuart’s point is even more relevant.

Automate VM Shutdown

Meagan Longoria has a script to shut off an Azure VM when a SQL Agent job finishes:

The runbook sets the Azure context to the appropriate subscription (especially important when you are a guest user in someone else’s tenant). Then it checks if the VM is started. If it is, it goes into a do-while loop. This task isn’t super time sensitive (it’s just to save money when the VM isn’t in use), so it’s waiting 60 seconds and then calling the child runbook to find out if my SQL Agent job is running. This makes sure that the child runbook is called at least once. If the result is that the job is not running, it stops the VM. If the job is running, the loop starts over, waiting 60 seconds before checking again. This loop is essentially polling the job status until it sees that the job is completed.

Click through for the script.
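
One way to make that "is the job running?" check, however the child runbook happens to implement it, is a query against msdb. A minimal sketch, with a hypothetical job name:

    -- Does the named SQL Agent job have an execution that started but has not finished?
    SELECT CASE WHEN EXISTS
           (
               SELECT 1
               FROM msdb.dbo.sysjobactivity AS ja
                   INNER JOIN msdb.dbo.sysjobs AS j
                       ON j.job_id = ja.job_id
               WHERE j.name = N'NightlyETL'   -- hypothetical job name
                   AND ja.start_execution_date IS NOT NULL
                   AND ja.stop_execution_date IS NULL
                   AND ja.session_id = (SELECT MAX(session_id)
                                        FROM msdb.dbo.syssessions)  -- current Agent session
           )
           THEN 1 ELSE 0 END AS JobIsRunning;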

Managing Database Changes with Scripts

Ed Elliott continues a series on automated database deployments with a quick change process:

In my blog post here https://the.agilesql.club/blogs/ed-elliott/2019-06-10/steps-to-automated-database-deployments I described the steps you need to go through so you can build up your confidence that you are capable of deploying databases using automation. I mean, after all, knowing that it is possible to automate your deployments and having confidence that they will succeed are two very different things.

Even with the best tooling in the world, automated database deployments are still a struggle, and there is one key thing that you can do no matter what tools you choose: make the deployments re-runnable. (Insert discussion here on the word idempotent and how it means re-runnable but sounds far cooler and more intellectual.) If you make your deployments re-runnable then you can, by their very definition, re-run them.

Click through for two options. I definitely prefer option number 1 as well.
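
Whichever option you choose, the re-runnable property comes from guarding each change so that a second execution is a no-op. A minimal sketch, with illustrative object names:

    -- Safe to run any number of times: the ALTER only fires if the column is absent.
    IF NOT EXISTS (SELECT 1
                   FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Customers')
                       AND name = N'LoyaltyTier')
    BEGIN
        ALTER TABLE dbo.Customers
            ADD LoyaltyTier tinyint NOT NULL
                CONSTRAINT DF_Customers_LoyaltyTier DEFAULT (0);
    END;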

Analyzing Data by the Numbers

I am close to wrapping up my series on forensic accounting techniques:

Round number analysis focuses on the final digits of a number, specifically looking at how many 0s there are at the end after rounding off the decimal. We’re interested in the percentage of records in each rank (or Type). For example, $10,820 is a Type 1 because there is a single 0 at the end. $8,300 is a Type 2 because there are two 0s at the end. $11,000 is a Type 3. And $9,009 is a Type 0 because there are no 0s at the end.

With that in mind, how do we do this kind of calculation in SQL Server? With one of my favorite operators: APPLY.

Click through to watch me look at round numbers, last digits, and first digits.
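
As a sketch of the calculation (table and column names are illustrative, and this is not necessarily the exact query from the series), APPLY lets you derive the whole-dollar amount and then its trailing-zero type in readable steps:

    SELECT t.Amount, r.RoundType
    FROM dbo.LineItems AS t
        CROSS APPLY (SELECT CAST(ROUND(t.Amount, 0) AS bigint) AS WholeAmount) AS w
        CROSS APPLY (SELECT CASE
                                WHEN w.WholeAmount % 1000 = 0 THEN 3
                                WHEN w.WholeAmount % 100  = 0 THEN 2
                                WHEN w.WholeAmount % 10   = 0 THEN 1
                                ELSE 0
                            END AS RoundType) AS r;

Extend the CASE upward if you want Type 4 and beyond as separate buckets.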

Dropping Database Objects with Aplomb

Pamela Mooney has a two-part series on dropping database objects. Part one includes a big setup script:

Some months ago, a fellow DBA came to me and expressed her concern over the normal housecleaning process that occurred at her company.  Developers or product owners would submit changes for objects which were no longer used, only to find that sometimes, they were.  No damage had been done, but the possibility of creating an emergent situation was there.

I could well understand her concern.  Most of us who have been DBAs for any length of time have probably come across a similar scenario.  I thought it would be fun to write a process that reduced the risk of dropping database objects and made rollbacks a snap.

Part 2 handles the actual drops:

Now, the objects in the table will be dropped after 120 days.  But what if you need them to be dropped before (or after)?  Both options work, but I’ll show you the before, and also what happens if there is a problem with the drop.

Check it out and drop with impunity. Or at least the bare minimum of punity.
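
Pamela's process is considerably more thorough, but the kernel of the idea, recording enough information to re-create an object before dropping it, looks something like this (names are illustrative, and OBJECT_DEFINITION only covers modules such as procedures, views, and functions, not tables):

    -- Stage an object for deletion by saving the script that re-creates it;
    -- a rollback is then just re-running the saved definition.
    CREATE TABLE dbo.ObjectsToDrop
    (
        ObjectName sysname       NOT NULL,
        ObjectDef  nvarchar(max) NOT NULL,
        MarkedOn   datetime2(0)  NOT NULL DEFAULT SYSDATETIME()
    );

    INSERT dbo.ObjectsToDrop (ObjectName, ObjectDef)
    SELECT N'dbo.usp_OldReport',   -- hypothetical object
           OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_OldReport'));

    -- A scheduled job can then drop anything staged more than 120 days ago.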

Contrasting Flink with Kafka Streams

Sourabh Verma contrasts Apache Flink with Kafka Streams:

Initially, I would like you all to focus on a few questions before comparing the frameworks:
1. Is there any comparison or similarity between Flink and Kafka?
2. What could be better in Flink over Kafka?
3. Is it the problem or the system requirements that drive you to use one over the other?

I’m generally happy with both technologies as well as Spark Streaming. But as Sourabh points out, there are differences to keep in mind.

Against Surrogate Keys on Junction Tables

Lukas Eder explains the costs of surrogate keys on tables intended to join multiple tables together:

There is really no point in adding another column FILM_ACTOR_ID or ID for an individual row in this table, even if a lot of ORMs and non-ORM-defined schemas will do this, simply for “consistency” reasons (and in a few cases, because they cannot handle compound keys).

Now, the presence or absence of such a surrogate key is usually not too relevant in everyday work with this table. If you’re using an ORM, it will likely make no difference to client code. If you’re using SQL, it definitely doesn’t. You just never use that additional column.

But in terms of performance, it might make a huge difference!

Lukas makes a good argument here.
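
For reference, the compound-key version of such a junction table (rendered here in T-SQL with illustrative names, rather than Lukas's Sakila DDL) needs nothing beyond the two foreign keys:

    CREATE TABLE dbo.FilmActor
    (
        FilmId  int NOT NULL,
        ActorId int NOT NULL,
        CONSTRAINT PK_FilmActor PRIMARY KEY (FilmId, ActorId),
        CONSTRAINT FK_FilmActor_Film  FOREIGN KEY (FilmId)  REFERENCES dbo.Film  (FilmId),
        CONSTRAINT FK_FilmActor_Actor FOREIGN KEY (ActorId) REFERENCES dbo.Actor (ActorId)
    );

A nonclustered index on (ActorId, FilmId) covers lookups from the actor side; a surrogate ID column would add width to every one of those index entries without ever appearing in a predicate.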
