Our ERP database has been chosen by the IT gods to get moved to the shiny new flash storage array, off the old spinning-rust SAN. This is fantastic news for the business users. But lo, the executives warn us, “You must do this with no downtime!” (said in my best Brent Ozar PHB-imitation voice). Of course when we tell them that’s impossible, they say, “OK, you must do this with minimal downtime.” That’s mo’ betta’.
So what are our typical options for doing a database migration? Or, more specifically, a data file migration. See, we’re not moving to a new server, and we’re not moving a bunch of databases together; we’re just moving this one ERP database. And we’re keeping it on the same SQL instance; we’re just swapping the storage underneath.
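One common strategy for that same-instance file swap is the offline move: repoint the file metadata, take the database offline briefly, copy the files, and bring it back online. A minimal sketch, assuming a hypothetical database name (ERPDB), logical file names, and drive letters:

```sql
-- Hypothetical database, logical file names, and paths for illustration.
-- 1. Repoint the file metadata; this takes effect the next time the files open.
ALTER DATABASE ERPDB
    MODIFY FILE (NAME = ERPDB_Data, FILENAME = 'F:\FlashData\ERPDB.mdf');
ALTER DATABASE ERPDB
    MODIFY FILE (NAME = ERPDB_Log, FILENAME = 'G:\FlashLog\ERPDB.ldf');

-- 2. Take the database offline -- this is the downtime window.
ALTER DATABASE ERPDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- 3. Copy the physical files to the new array (outside SQL Server,
--    e.g. with robocopy), then:

-- 4. Bring it back online; SQL Server opens the files at the new paths.
ALTER DATABASE ERPDB SET ONLINE;
```

The downtime here is the file copy itself, which is why this counts as "minimal downtime" rather than none.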
Click through for some discussion on options, followed by implementation of a particular strategy.
I could not read the error log on one of my local SQL Servers when I executed the following code: EXEC sp_readerrorlog
I received the below:
Msg 22004, Level 16, State 1, Line 2
Failed to open loopback connection. Please see event log for more information.
Msg 22004, Level 16, State 1, Line 2
Error log location not found.
Fortunately, the error logs had a bit more detail, so Arun has the answer for you.
About once a month, I get a support ticket regarding SQL Server dropping an application’s or a user’s connection. The problem is, SQL Server does not just randomly drop a connection and continue to work normally. Some force outside the control of SQL Server breaks the connection. By default, SQL Server does not record when this event occurs.
In my history of working with SQL Server, only in extreme situations have I ever seen SQL Server drop its connections. The most common example is when SQL Server is in the process of shutting down.
Click through for the trace flag and details.
At first sight, it seems obvious: Cassandra is a distributed data store, and you can upgrade each node independently. But it’s also a bit tricky, because Cassandra has so many concepts and moving parts; when introducing such a major change, you’ll probably be anxious about how not to break anything.
Also, as with every database upgrade, the most important outcome is how your application behaves. Support for older protocol versions might be removed in future releases. Storage might work in a way the application doesn’t expect. There can be a lot of pitfalls. So, before we can start getting the benefits of the upgrade, we have to be 200% sure that the application works, and at least that it won’t work any worse with the new database.
The whole process is straightforward but there do seem to be a couple places where you can shoot yourself in the foot.
If, for whatever reason, the Windows Computer Name is changed after SQL Server is already installed, then @@SERVERNAME and the information in sysservers would not automatically reflect the change.
This means that @@SERVERNAME contains the incorrect value for the machine name.
Sometimes, and especially in production environments, the value in that global variable is important and is used as part of business processes.
And if @@SERVERNAME doesn’t reflect the actual server name, it could cause problems.
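The usual repair pattern is to drop the stale server entry and re-add the real machine name, then restart the SQL Server service. A sketch, with placeholder names (OldName and NewName are hypothetical):

```sql
-- Check what SQL Server currently thinks its name is
-- versus the actual Windows machine name.
SELECT @@SERVERNAME AS CurrentName,
       SERVERPROPERTY('MachineName') AS ActualMachineName;

-- Replace the stale metadata (OldName/NewName are placeholders).
EXEC sp_dropserver 'OldName';
EXEC sp_addserver 'NewName', 'local';

-- Note: @@SERVERNAME reflects the change only after
-- the SQL Server service is restarted.
```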
Read on for that script.
In our Oracle environment, we can see every background process with its own PID: along with the process monitor (pmon), db writer (dbwr), and log writer (lgwr), we also have archiving (arcx), job processing (j00x), performance, and other background processing. I didn’t even grep for the Oracle executable, so you can see how quickly we can identify what is running.
In the SQL Server environment, we only have two processes: our parent process is PID 7 and the child is PID 9 for SQL Server, with nothing to distinguish what they are actually doing. If we use the pmap utility to view what the parent and child processes are doing, we see only sqlservr as the mapping information.
I imagine that things like this will improve over time for SQL Server, but Oracle definitely has a leg up in this regard.
One Pester test running!
What I like about this is that it can easily be dropped into a job scheduler (e.g., Jenkins), and then you’ve got a way to routinely check (and correct) all the configuration settings of the SQL Server instances that you monitor.
Pester would not have been my first thought for configuration checking, but it does serve as another useful option.
But, let’s take a minute and think about this a bit closer. Should we? Should we dump everyone out of the database? Are you sure?
- Well, it’s dev and we’ve talked to the development team. No one is doing anything right now.
- It’s prod so we’ve been extra careful. We arranged an outage with the business and confirmed that now is a good time.
Well, in theory, we’ve been told no one is using the database. However, that’s theory; this is reality, and in fact, someone is in the database. But what are they doing? Is it just a leftover connection? Did someone leave a query window open and forget about it? Does the application create a connection and hold on to it 24×7, even when it’s not doing anything?
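Before kicking everyone out (say, with SET SINGLE_USER WITH ROLLBACK IMMEDIATE), it’s worth seeing exactly who those connections are. A quick sketch against sys.dm_exec_sessions, with a placeholder database name:

```sql
-- Who is connected to the database, and how stale are those sessions?
-- 'YourDatabase' is a placeholder.
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       s.status,
       s.last_request_end_time
FROM sys.dm_exec_sessions AS s
WHERE s.database_id = DB_ID('YourDatabase');
```

A session whose last_request_end_time is hours old and whose status is sleeping is probably that forgotten query window; an application pool holding idle connections will show up under its program_name.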
Click through for more thoughts on the topic.
Jason Squires has a series on database migrations using dbatools. The first part covers capacity planning:
Great results: I now know that I cannot move the database from one server to another on any of the drives shown without expanding the drive volume. Even though this was awesome output, I wanted more. I know the data and log drives I wanted to focus on, so I decided to dump everything into an array.
In this post, I am going to show you the steps that I decided to take to do this particular migration. This migration was going to be a bit different: I am merging two systems together, so I had to really think the process through and ensure I got everything I needed. I also did not want to roll back, due to the systems being legacy and unsupported. Some of the hosts were going to be pushing storage limits, so I had to ensure everything went flawlessly so that I could decommission the legacy systems quickly.
Both of these have scripts and explanations attached, making it easy to follow along.
I’ve been hearing throughout my entire DBA career that 300 seconds is a good counter for PLE (Page Life Expectancy) to be above.
Paul Randal calls this “utter nonsense”. If anyone has the right to say that, it’s Paul.
It is good for the number to be above 300, but that’s like saying it’s good to have your brand new car last a year and a half.
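For reference, a quick way to check the current value yourself is via sys.dm_os_performance_counters, which reports PLE per buffer node (so NUMA servers show one row per node):

```sql
-- Current Page Life Expectancy, in seconds, per NUMA buffer node.
SELECT [object_name],
       instance_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND [object_name] LIKE '%Buffer Node%';
```

The more useful habit is to trend this number over time for your own workload, rather than comparing it against a fixed 300-second threshold.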