Press "Enter" to skip to content


Migrating from Azure SQL DB to an Azure SQL Managed Instance

Eitan Blumin walks us through the process of moving from Azure SQL Database to an Azure SQL Managed Instance:

What we cannot do:

– You cannot back up and restore from Azure SQL DB to a Managed Instance.
– You cannot use Azure SQL DB as a source in Azure Data Migration Service (DMS).
– You cannot use Azure SQL DB as a source in Data Migration Assistant (DMA).
– You cannot use Azure SQL DB as a source in DB Mirroring, AlwaysOn, Transaction Log Shipping, or Replication.
– Implementing a custom T-SQL, SSIS, or ADF (Azure Data Factory) solution will be too complicated and will require unacceptable development overhead.

That last one is a bit iffy, though Eitan’s two solutions are going to be easier than a custom solution.


Surviving an Audit: Backup History

Dave Bland talks about surviving an audit:

Don’t volunteer information

Only provide the evidence they are asking for, nothing more. You want to avoid giving them things that they did not ask for; if you do, you might expose something that you don't want them to see. The last thing you want to do is give them a reason to document a finding. Another thing to remember is that audits have a defined scope.

This is some of the best advice about audits out there, particularly when combined with the other points Dave makes about being honest and doing the right thing. Dave also ties in some information about backup history as a particular auditing example.
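
Dave's post has his own queries; as a generic illustration, the backup history an auditor asks about lives in msdb, and you can scope the evidence to exactly what was requested (the 30-day window below is my assumption):

```sql
-- Pull recent backup history from msdb as audit evidence. Note that msdb only
-- retains this history until someone prunes it (e.g., via sp_delete_backuphistory).
SELECT bs.database_name,
       bs.type,                     -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
WHERE bs.backup_finish_date >= DATEADD(DAY, -30, GETDATE())
ORDER BY bs.backup_finish_date DESC;
```

The date filter doubles as scope control: you hand over only the window the auditor asked about.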


TDE and Managed Instances

Tim Radney notes a change to the way new Azure SQL Database Managed Instance databases are configured:

Another recent change is that all newly created databases have “Encryption Enabled” set to True. This enables Transparent Data Encryption. By default, if you haven’t created your own key, it will use the service-managed key. A bug I’ve found in SSMS is that even though you set “Encryption Enabled” to false, it will still enable TDE. This is not the behavior when using T-SQL.

Read on to understand the ramifications of this change.
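
If you would rather verify the encryption state yourself than trust the SSMS checkbox, the standard catalog views make it a quick check (the database name below is hypothetical):

```sql
-- Check which databases have TDE enabled and the state of their encryption keys.
SELECT name, is_encrypted
FROM sys.databases;

SELECT DB_NAME(database_id) AS database_name,
       encryption_state,            -- 1 = unencrypted, 3 = encrypted
       percent_complete             -- non-zero while an encryption scan is running
FROM sys.dm_database_encryption_keys;

-- The T-SQL path Tim contrasts with SSMS: turning encryption off explicitly.
ALTER DATABASE [MyNewDatabase] SET ENCRYPTION OFF;
```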


Preventing Unexpected Failover with AG Patching

Josh Darnell dealt with an issue with Availability Groups:

I had a 2-node availability group (AG) + fileshare witness system experience an unexpected failover recently.

The synchronous secondary was being patched, and when it came back up from a reboot, the current primary unexpectedly failed over. We weren’t done with all the patching on the secondary, so this caused a short outage, and we had to fail back to the original primary to finish the patching (which is of course another short interruption in availability).

The root cause was interesting enough that I decided to share the story here, and provide some general advice and debugging tips along the way.

Click through to understand why this happened and how you might be able to avoid the pain Josh experienced.
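
Separately from Josh's root cause, a pre-patching sanity check against the AG DMVs is cheap insurance. A generic version (not from Josh's post) might look like this:

```sql
-- Confirm which replica is primary and that every AG database is synchronized
-- and healthy before taking a node down for patching.
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       ars.role_desc,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.availability_groups AS ag
    JOIN sys.availability_replicas AS ar
        ON ag.group_id = ar.group_id
    JOIN sys.dm_hadr_availability_replica_states AS ars
        ON ar.replica_id = ars.replica_id
    JOIN sys.dm_hadr_database_replica_states AS drs
        ON ar.replica_id = drs.replica_id;
```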


Blocking Inbound Connections to SQL Server

John Morehouse shows one quick way of preventing anybody else from connecting to your SQL Server instance:

We even tried to restart the instance into single user mode; however, every time we did, something else would take the connection before we could get into the instance. We eventually restarted the SQL Server instance to normal operation so that we could investigate why we could not get a connection while in single user mode.

It turns out that, given the production nature of the instance, the client's large farm of application servers was connecting to it faster than we could. We discovered this using sp_who2, though you could also use the DMV sys.dm_exec_connections to see what is connecting to the instance. So we needed a way to block incoming connections without doing anything as invasive as shutting down the application servers or making a large network change.

This is where the brilliance comes in.

Click through for the idea. This is the type of thing you keep in your back pocket in a real pinch, but hope never to need to use.
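
John's exact approach is behind the link; as general background, here are two documented mechanisms in the same spirit (server, database, and application names below are hypothetical):

```sql
-- Option 1: win the single-user race by restricting the lone connection to a
-- specific application name. Add this startup parameter and restart the instance:
--     -m"SQLCMD"
-- Now only a client reporting its application name as SQLCMD can connect:
--     sqlcmd -S MyServer -E

-- Option 2: at the database level, kick out ordinary connections and keep them
-- out. Only sysadmin, dbcreator, and db_owner members can connect afterward.
ALTER DATABASE [MyDatabase] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
```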


Ten Comments from a DBA

Kevin Chant spins the DBA archetype around:

2. So glad we test our backups.

Now this is something that every DBA should do, or at least persuade whoever is responsible for backups to do it.

Otherwise you may find yourself in a situation where a database is corrupt and a restore is not possible, which means that you have to try to recover the database using other methods like the one here.

I can tell you from experience that this is definitely not the best situation to be in unless you enjoy working for over twenty-four hours straight. So, if your backups are not being tested at the moment then I highly recommend you change that.

And if you are a DBA who can’t say this, Kevin has some advice for each of the ten.
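
For the backup-testing item in particular, "testing" means more than checking that the job succeeded. A minimal sketch, with hypothetical file paths and logical file names:

```sql
-- Step 1: verify the backup file is readable (necessary, but not sufficient).
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\Sales.bak';

-- Step 2: actually restore it somewhere disposable and run CHECKDB, since a
-- backup you cannot restore (or that restores corrupt pages) is not a backup.
RESTORE DATABASE [Sales_RestoreTest]
    FROM DISK = N'X:\Backups\Sales.bak'
    WITH MOVE N'Sales'     TO N'X:\RestoreTest\Sales.mdf',
         MOVE N'Sales_log' TO N'X:\RestoreTest\Sales_log.ldf',
         REPLACE;

DBCC CHECKDB (N'Sales_RestoreTest') WITH NO_INFOMSGS;
```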


Getting Row Counts After Execution

Grant Fritchey gives us a few ways to find row counts after a query has finished:

But, if you really want to get picky, batches and procedures frequently have multiple statements. This means that the row count you're getting isn't for a particular table. If you want that, then you need to capture the statement-level stuff: sp_statement_completed or sql_statement_completed, depending on whether you're looking at procedures or batches.

However, this is one of those “be careful what you wish for” moments. Because, if you capture all statements in a system, you’re going to have a very large amount of data to deal with. This is where filtering is your friend.

Click through for Grant’s full answer.
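
In the meantime, here's a minimal sketch of the kind of filtered session Grant describes; the session name, database filter, and file target are my assumptions, not his:

```sql
-- Capture per-statement row counts inside stored procedures, filtered hard so
-- the session doesn't drown you in events.
CREATE EVENT SESSION [statement_row_counts] ON SERVER
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'MyDatabase'
      AND row_count > 0
)
ADD TARGET package0.event_file (SET filename = N'statement_row_counts');
GO
ALTER EVENT SESSION [statement_row_counts] ON SERVER STATE = START;
```

Swap in sqlserver.sql_statement_completed if you're after ad hoc batches rather than procedures.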


Handling Cross-Database Transactions

Michael J. Swart explains how cross-database transactions work on a single instance:

The transaction is touching two different databases. So it makes sense that the two actions should be atomic and durable together using the one single transaction.

However, databases implement durability and atomicity using their own transaction log. Each transaction log takes care of its own database. So from another point of view, it makes sense that these are two separate transactions.

Which is it? Two transactions or one?

Click through to read the article, and then check the comments for a cautionary tale around database mirroring and cross-database transactions.
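
To make the scenario concrete, here's the shape of the question, using two hypothetical databases on one instance:

```sql
-- One BEGIN/COMMIT spanning two databases. SQL Server keeps this atomic, yet
-- each database hardens its own portion in its own transaction log.
BEGIN TRANSACTION;

    INSERT INTO SalesDB.dbo.Orders (OrderId, Amount)
    VALUES (1, 100.00);

    INSERT INTO AuditDB.dbo.OrderLog (OrderId, LoggedAt)
    VALUES (1, SYSDATETIME());

COMMIT TRANSACTION;
```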


Removing and Refilling All Tables in a Database

Phil Factor has a couple of T-SQL scripts for us to remove and reload the data in a test database:

Copying a database using a BCP dataset from a DOS script or from PowerShell is fairly quick and trivial, but what if the data is already in another copy of the database on the same instance? Imagine you have a database build that is a copy of an existing database on the same instance and you want to run a ‘destructive’ test on the data, and do it over and over again.

Deleting all the data in a database should, you'd have thought, be trivial. Truncation doesn't work because SQL Server won't truncate tables that are referenced by foreign keys, even if those constraints are disabled. DELETE is safer, but you will then need to reset any identity columns. Deleting large tables in one go isn't scalable, so you have to do it in smaller chunks. I like to disable triggers while I do this as well. The only disadvantage comes if you have triggers that are supposed to stay disabled, because the simple code just re-enables them all.
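
Phil's actual scripts are in the post; the skeleton of the pattern he describes looks something like this, with a hypothetical table, no handling of foreign key ordering, and the caveat that sp_MSforeachtable is undocumented:

```sql
-- Disable triggers, delete in manageable chunks, then reseed the identity.
EXEC sp_MSforeachtable N'ALTER TABLE ? DISABLE TRIGGER ALL';

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable;   -- small chunks keep the log in check
    SET @rows = @@ROWCOUNT;
END;

DBCC CHECKIDENT (N'dbo.BigTable', RESEED, 0);  -- next identity value will be 1

-- Phil's caveat applies here: this blanket re-enable also wakes up any triggers
-- that were meant to stay disabled.
EXEC sp_MSforeachtable N'ALTER TABLE ? ENABLE TRIGGER ALL';
```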

This is, I think, one of the biggest selling points for containers where the database is built into the container image. You spin up a container based off of an image, perform your destructive testing, and destroy the container afterward. The next time you need to run these tests, spin up a new container. And if you need to change the data, update the image. This introduces new challenges, like how SQL Server on Linux has limitations which don't exist on Windows, but for supported functionality, it's a nice solution.
