Press "Enter" to skip to content

Category: Administration

Restoration With Replacement

Joey D’Antoni tests whether RESTORE WITH REPLACE is functionally different from dropping a database and performing a restoration:

I recently read something that said using the RESTORE WITH REPLACE command could be faster than dropping a database and then performing a RESTORE, because the shell of the file could be used and therefore skip file initialization. I did not think that was the case, but Books Online wasn’t clear about the situation, so I went ahead and built a quick test case, using ProcMon from Sysinternals. If you aren’t familiar with the Sysinternals tools, you should be—they are a good way to get under the hood of your Windows Server to see what’s going on, and if you’re old like me, you probably used PSEXEC to “telnet” into a Windows server to restart a service before RDP was a thing.

Read on to see how the processes compare.
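
For reference, the two approaches being compared look like this in T-SQL; a minimal sketch, with a hypothetical database name and backup path:

    -- Option 1: drop the database, then restore it.
    -- The data and log files are deleted and must be recreated.
    DROP DATABASE SalesDemo;
    RESTORE DATABASE SalesDemo
        FROM DISK = N'D:\Backup\SalesDemo.bak'
        WITH RECOVERY;

    -- Option 2: restore over the existing database in a single step.
    -- WITH REPLACE overwrites the existing database and its files.
    RESTORE DATABASE SalesDemo
        FROM DISK = N'D:\Backup\SalesDemo.bak'
        WITH REPLACE, RECOVERY;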


Automatic Identity Value Reseeding

Tracy Boggiano shows how to build an automated identity column reseeding solution:

You have tables that have a lot of data inserted into them and deleted that use identity values and run out of integers to use. I have over 3,000 databases where this can occur, so we have an alert set up that checks the tables, then checks a whitelist table to see if each one is set up to be auto-reseeded, based on whether our database engineers have indicated it is safe to do so. If it is, the table is auto-reseeded either to one or to the maximum negative number for the data type; otherwise, we are alerted and we check with our database engineers on how to handle that table. Keep in mind we are reseeding tables that have been deemed OK to reseed automatically.

Click through for the code, which includes reseeding logic, a job to run reseed operations, and a whitelist table for the tables which you want to allow automatic reseeding.
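
As a rough sketch of the check-and-reseed logic described (this is not Tracy’s code; the table name is hypothetical and an int identity column is assumed):

    -- How much headroom is left before the int maximum?
    SELECT IDENT_CURRENT(N'dbo.EventLog') AS CurrentIdentity,
           2147483647 - CAST(IDENT_CURRENT(N'dbo.EventLog') AS bigint) AS ValuesRemaining;

    -- Reseed a whitelisted table to the maximum negative int,
    -- which roughly doubles the usable range of the column.
    DBCC CHECKIDENT (N'dbo.EventLog', RESEED, -2147483648);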


Use Fixed Filegrowth Settings

Andy Galbraith notes that you should use fixed-increment filegrowth settings for log and data files:

As you probably already know, the key flaw to percentage-based FILEGROWTH is that over time the increment grows larger and larger, causing the actual growth itself to take longer and longer. This is especially an issue with LOG files because they have to be zero-initialized before they can be used, causing excessive I/O and file contention while the growth is in progress. Paul Randal (blog/@PaulRandal) describes why this is the case in this blog post. (If you ever get a chance to see it, Paul also does a fun demo in some of his classes and talks on why zero initialization is important, using a hex editor to read the underlying contents of disk even after the old data is “deleted”.)

Andy also has a script to change filegrowth to fixed-increment growth depending upon the size of the file, so check that out.
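
The change itself is a one-liner per file; a sketch with hypothetical database and file names:

    -- Find files still using percentage growth (is_percent_growth = 1)
    SELECT name, type_desc, growth, is_percent_growth
    FROM sys.master_files
    WHERE database_id = DB_ID(N'SalesDemo');

    -- Switch the log file to a fixed 512 MB growth increment
    ALTER DATABASE SalesDemo
    MODIFY FILE (NAME = N'SalesDemo_log', FILEGROWTH = 512MB);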


Finding Broken Views

Bill Fellows has a script to test each view to see if it is broken:

Shh, shhhhhh, we’re being very very quiet, we’re hunting broken views. Recently, we were asked to migrate some code changes and after doing so, the requesting team told us we had broken all of their views, but they couldn’t tell us what was broken, just that everything was. After a quick rollback to snapshot, thank you Red Gate SQL Compare, I thought it’d be enlightening to see whether anything was broken before our code had been deployed.

You’ll never guess what we discovered.

Read on to see what they discovered (spoilers:  broken views) and how Bill fixed the problem.
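
This is not Bill’s script, but a minimal sketch of the technique: compile each view with a zero-row SELECT and catch whatever fails:

    DECLARE @schema sysname, @view sysname, @sql nvarchar(max);

    DECLARE view_cursor CURSOR FAST_FORWARD FOR
        SELECT s.name, v.name
        FROM sys.views AS v
        JOIN sys.schemas AS s ON s.schema_id = v.schema_id;

    OPEN view_cursor;
    FETCH NEXT FROM view_cursor INTO @schema, @view;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- TOP (0) forces the view to compile and bind without returning rows
        SET @sql = N'SELECT TOP (0) * FROM '
                 + QUOTENAME(@schema) + N'.' + QUOTENAME(@view) + N';';
        BEGIN TRY
            EXEC sys.sp_executesql @sql;
        END TRY
        BEGIN CATCH
            PRINT QUOTENAME(@schema) + N'.' + QUOTENAME(@view)
                + N' is broken: ' + ERROR_MESSAGE();
        END CATCH;
        FETCH NEXT FROM view_cursor INTO @schema, @view;
    END;

    CLOSE view_cursor;
    DEALLOCATE view_cursor;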


More On The New Service Model

Randolph West summarizes the new SQL Server patching model:

  • Every twelve months after GA, the installation files will be updated to contain all the Cumulative Updates in what is effectively now a service pack, but won’t be called that. This will also become the slipstream update. In other words, you’re more likely to be up to date when installing from scratch, later in the release cycle.

  • Customers on the GDR (General Distribution Release) release cycle will only get important security and corruption fixes, as before. You can switch to the standard CU release cadence any time, but once you do, you can’t switch back to GDR.

Brent Ozar thinks CU12 might become the new SP1 in the minds of managers:

So now fast forward to late 2018, early 2019. You’re about to build a new SQL Server for a project, and you have two choices:

  • SQL Server 2018 – which is basically the new dev branch, getting monthly updates, or
  • SQL Server 2017 (or 2016, or 2014) – which is the stable branch, getting quarterly updates

Once a version has hit CU12, and it only gets updates once a quarter, it might be considered Good Enough For Our Apps. Managers might see 2017/2016/2014 interchangeably at that point – which might be great for the second most recent version’s adoption.

It will be interesting to see how companies adopt this new model.
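
If you want to see where a given instance sits in this servicing cycle, SERVERPROPERTY exposes it (ProductUpdateLevel is populated on recent builds):

    SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- e.g. 14.0.3045.24
           SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM or SPn
           SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; -- e.g. CU12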


Unattended Installation Of SQL Server 2017 On Linux

Denzil Ribeiro walks us through an unattended installation and configuration of SQL Server 2017 on Linux:

SQL 2017 bits are generally available to customers today. One of the most notable milestones in the 2017 release is SQL Server on Linux. Setup has been relatively simple for SQL Server on Linux, but often there are questions around unattended install. For SQL Server on Linux, there are several capabilities that are useful in unattended install scenarios:

  • You can specify environment variables prior to the install that are picked up by the install process, to enable customization of SQL Server settings such as TCP port, Data/Log directories, etc.

  • You can pass command line options to Setup.

  • You can create a script that installs SQL Server and then customizes parameters post-install with the mssql-conf utility.

The sample script link seems like it’s broken, but you can see it all in Denzil’s GitHub repo.
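
For reference, the general shape of an unattended install is something like the following (Ubuntu shown; repository registration is omitted, and the password, port, and paths are placeholders):

    # Install the package non-interactively
    sudo apt-get install -y mssql-server

    # Environment variables drive a no-prompt setup
    sudo MSSQL_PID=Developer \
         MSSQL_SA_PASSWORD='YourStrong!Passw0rd' \
         MSSQL_TCP_PORT=1433 \
         /opt/mssql/bin/mssql-conf -n setup accept-eula

    # Customize further post-install with mssql-conf, then restart
    sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /var/opt/mssql/data
    sudo systemctl restart mssql-server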


Limiting Docker Container Resources

Andrew Pruski shows how to cap the resources available to a container:

What I’ve done here is use the cpus and memory switches to limit that container to a maximum of 2 CPUs and 2GB of RAM. There are other options available, more info is available here.

Simple, eh? But it does show something interesting.

I’m running Docker on my Windows 10 machine, using Linux containers. The way this works is by spinning up a Hyper-V Linux VM to run the containers (you can read more about this here).

Read on to learn more.
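
The switches in question look like this (the image tag and password here are examples):

    # Cap the container at 2 CPUs and 2 GB of RAM
    docker run -d --name sql-limited \
        --cpus=2 --memory=2g \
        -e ACCEPT_EULA=Y -e SA_PASSWORD='YourStrong!Passw0rd' \
        -p 1433:1433 \
        mcr.microsoft.com/mssql/server:2017-latest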


Running Out Of Ints

Paul Randal explains the unlikelihood that you’d run out of bigints in a table:

So with 1 million rows per second, you’ll be generating 1 million x 3,600 (seconds in an hour) x 24 (hours in a day) = 86.4 billion rows per day, so you’ll need about 1.4 terabytes of new storage per day. If you’re using the bigint identity as a cluster key, each row needs new space, so you’ll need almost exactly 0.5 petabytes of new storage every year.

At that rate, actually running out of bigint values AND storing them would take roughly 150 thousand petabytes. This is clearly impractical – especially when you consider that storing *just* a bigint is pretty pointless – you’d be storing a bigint and some other data too – probably doubling the storage necessary, at least.

By contrast, if you have a staging table that flows 10 million rows a day (meaning 10 million leave and 10 million new ones enter), you’ll overflow an int column in less than a year.  It’s worth thinking about data sizes before deciding on the type of a surrogate key.  Bigint is the safest, and if you think you’ll need it, go with it.  But there is that storage overhead.
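
If you want to sanity-check the arithmetic behind those claims:

    -- Days to exhaust an int at 10 million new identity values per day: ~215
    SELECT 2147483647 / 10000000.0 AS DaysToOverflowInt;

    -- Days to exhaust a bigint at 1 million rows per second
    -- (86.4 billion per day): about 107 million days, or ~292,000 years
    SELECT 9223372036854775807 / 86400000000.0 AS DaysToOverflowBigint;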


Connecting To Named Instances Sans Name

Derek Colley shows what to do when trying to connect to a named instance without specifying its name:

This is a good question. There are a few ways to address it:

  • We could configure two different connection strings and on failover, the alternative is used (by any arbitrary application mechanism)
  • For mirroring, we can use the Failover Partner= element in the connection string – but this is transactional replication being used for DR, and failover should be manual
  • We could set the instance names to be the same – however in this case it was too late since changing the instance name cleanly in SQL Server is practically impossible and breaks all sorts of things

None of these solutions was suitable.  However, on each server, there was only one named instance and no default instance.  What can we do?

Read on for a fourth solution.
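
Not to spoil the article, but one well-known technique in this space (which may or may not be Derek’s fourth solution) is to give each instance a fixed TCP port and connect by server,port, which sidesteps instance-name resolution (and the SQL Browser) entirely. The server name and port here are hypothetical:

    # Connect by a fixed port instead of the instance name
    sqlcmd -S SQLHOST01,1433 -E

    # The same idea in a connection string:
    # Server=SQLHOST01,1433;Integrated Security=SSPI;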


Pausing Versus Stopping SQL Servers

Arun Sirpal demonstrates what the Pause option does on a SQL Server:

As stated via official Microsoft documentation “Pausing the Database Engine service prevents new users from connecting to the Database Engine, but users who are already connected can continue to work until their connections are broken. Use pause when you want to wait for users to complete work before you stop the service. This enables them to complete transactions that are in progress”.

Very handy! Let’s see it in action and compare it to a STOP.

Read on for more.
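
If you want to follow along, the service controls are one-liners in PowerShell (default instance shown; a named instance’s service is MSSQL$InstanceName):

    # Pause: refuse new connections while existing sessions keep working
    Suspend-Service -Name MSSQLSERVER

    # Resume accepting connections
    Resume-Service -Name MSSQLSERVER

    # Stop: shuts the service down, terminating all connections
    Stop-Service -Name MSSQLSERVER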
