
Category: Administration

Collations

Robert Sheldon has an article on collations:

The ideal solution is to choose a collation when setting up SQL Server that can be used for all your user databases and character columns. Using one collation removes any issues you might encounter when querying the data in different ways. It can also be the best approach in terms of performance if multiple collations impact your queries. However, this approach works only if the same language and collation settings are appropriate for all your users and applications—or at least a good majority of them.

If you support multi-cultural environments, you’ll need to take into account a number of considerations. To begin with, you should pick collations that support the most users, and you should use Unicode data types where possible because they can help avoid code page conversion issues. Just keep in mind the storage requirements that come with Unicode’s two bytes per character.

My inclination is to say Unicode everywhere possible. There are cases in which Unicode doesn’t fit, but it’s easy to implement, and if you have enough data to worry about the extra bytes that Unicode characters take up, Unicode compression is available.
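
As a rough sketch of that last point, with hypothetical table, column, and instance names: Unicode compression piggybacks on row (or page) compression, so turning that on for a table with nvarchar columns claws back much of the two-bytes-per-character overhead.

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'TestDb' -Query @"
CREATE TABLE dbo.Customers (
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CustomerName nvarchar(200) COLLATE Latin1_General_100_CI_AS NOT NULL,
    Notes        nvarchar(max) NULL   -- (max) columns do not get Unicode compression
);

-- Row compression enables Unicode compression for eligible nchar/nvarchar columns.
ALTER TABLE dbo.Customers REBUILD WITH (DATA_COMPRESSION = ROW);
"@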


Finding Identity Columns Close To Overflow

Rob Sewell demos another dbatools feature:

It would be very useful to be able to quickly see what the current values of the identity columns are and how close they are to being full so that we can plan for and be able to take action before we end up with shouty smart suits at our desk. If we could do it with just one line of code that would be even easier.
Step forward dbatools. This PowerShell module is a community-based project written by excellent, brilliant people in their own time and available to you for free. To find out more and how to use and install it visit https://dbatools.io
There is a command called Test-DbaIdentityUsage. This command was created by Brandon Abshire. You can find Brandon blogging at netnerds.net. Thank you, Brandon.

That’s quite useful for taking a quick look at identity columns across a database or instance.
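
If you want to try it out, the call really is about one line; the instance name below is hypothetical, and parameter names can vary a bit between dbatools versions.

Test-DbaIdentityUsage -SqlInstance sql01 | Format-Table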


Re-Incrementing Identity Columns

Kenneth Fisher explains how to change the increment value of a table which already has an identity column:

The first thing to do is remember that all tables are partitioned. Sort of. What we think of as non-partitioned tables are really just tables with a single partition. Every table is listed in sys.partitions and in fact you can use it to quickly see how many rows there are in a table. Since there is no partition scheme/function we can’t do splits or merges, but we can do a SWITCH.

What we are going to do is create a new, virtually identical table, then switch the data from the old table (partition) to the new table (partition). The trick is that while in order to do the switch almost everything has to be identical, the properties of the identity column are part of that almost.

I love these types of solutions:  hacks in the most positive connotation of the term.
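
Here is a minimal sketch of the pattern Kenneth describes, with hypothetical object names; it assumes dbo.Orders has a matching clustered primary key and sits on the same filegroup as the new table, since SWITCH demands that nearly everything line up.

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'TestDb' -Query @"
-- New table is identical except for the identity seed and increment.
CREATE TABLE dbo.Orders_New (
    OrderID   int IDENTITY(1000000, 10) NOT NULL PRIMARY KEY CLUSTERED,
    OrderDate datetime2 NOT NULL
);

-- Metadata-only move of every row from the old table into the new one.
ALTER TABLE dbo.Orders SWITCH TO dbo.Orders_New;

-- Swap names so callers keep using dbo.Orders; reseed with DBCC CHECKIDENT if needed.
EXEC sp_rename 'dbo.Orders', 'Orders_Old';
EXEC sp_rename 'dbo.Orders_New', 'Orders';
"@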


Last Known Good DBCC CHECKDB In Powershell

Rob Sewell shows off a cmdlet to check DBCC DBINFO for each database to get the last known good CHECKDB run:

This time we get more information: the server name, database name, when the database was created, the last good DBCC CheckDb, how long since the database was created, how long since the last known good DBCC CheckDb, a status and a Data Purity enabled flag. If you look at the image above, it shows that the DBA_Admin database has a status of “New database, not checked yet” even though it has a date for the last known good DBCC CheckDb. This is because it was restored after this server was upgraded from CTP 1.3 to CTP 1.4 and there has not yet been a DBCC CheckDb run. The system databases have a status of “CheckDb should be performed”. This is because the last known good DBCC CheckDb is more than 7 days ago. Let’s run a DBCC CheckDb and check again

Do read the caveats, and also check out a previous Arun Sirpal blog post on DBCC DBINFO.
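
The cmdlet itself is a one-liner to run; the instance name here is hypothetical. Under the covers it reads the dbi_dbccLastKnownGood field from DBCC DBINFO for each database.

Get-DbaLastGoodCheckDb -SqlInstance sql01 | Format-Table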


Troubleshooting Cluster Creation Errors

Mark Broadbent diagnoses an error which seems misleading at first:

One such problem is when you use the New-Cluster command to add all your nodes in one go.

New-Cluster -Name magrathea -Node server5,server6,server7 `
    -StaticAddress 192.168.1.70

Simple right? Well no. In this instance I ran into the following error:

New-Cluster : There was an error adding node 'server7' to the cluster
the node cannot be contacted. Ensure that the node is powered on and is connected to the network.

Read on for an example of piecemeal debugging. Mark’s advice is to keep things simple because, in this case at least, you can’t count on the error messages coming back being completely accurate.
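
A sketch of that piecemeal approach, following the names in the example above: create the cluster with a single node and then add the others one at a time, so that when something breaks you know exactly which node to blame.

New-Cluster -Name magrathea -Node server5 -StaticAddress 192.168.1.70
Add-ClusterNode -Cluster magrathea -Name server6
Add-ClusterNode -Cluster magrathea -Name server7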


Operating System Error 3

Stacy Brown provides common reasons for why you might get Operating System Error 3:

Sometimes the users of SQL Backup Master may face the following error while executing the database backup job:

Msg 3201, Level 16, State 1, Line 1
Job Execution Error: Cannot open backup device ‘’ Operating System error 3 (The system cannot find the path specified.)

Now, there can be various possible reasons behind the occurrence of this error. Therefore, in the following sections, all possible reasons with their respective solutions are discussed. A user can refer to them to solve this SQL Server operating system error 3 (the system cannot find the path specified).

Click through for solutions to several potential causes of this error.
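
One of the more common culprits is simply a backup folder that does not exist (or is not visible) from the SQL Server service account’s point of view. Here is a rough pre-flight sketch with hypothetical paths; note that Test-Path runs as you, not as the service account, so it only catches the most obvious cases.

$backupDir = '\\backupserver\sqlbackups\MyDatabase'
if (-not (Test-Path $backupDir)) {
    # Create the folder if it is missing; permissions still need to allow the service account.
    New-Item -ItemType Directory -Path $backupDir | Out-Null
}

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
BACKUP DATABASE MyDatabase
TO DISK = N'\\backupserver\sqlbackups\MyDatabase\MyDatabase.bak'
WITH INIT, CHECKSUM;
"@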


Cleaning Up LOB

Raul Gonzalez reminds us that rebuilding a clustered index onto a new filegroup doesn’t move LOB data:

In previous posts I have explained how having dedicated filegroups for user data can improve our RTO by, in case of disaster, recovering critical data first and then the rest.

The thing is, when you deal with databases which were not created this way, you need to move the data from one filegroup to another before you can apply this kind of technique.

Here is where this post can show you one of the gotchas you can find during this process.

Read on for a demo of this.
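
To make the gotcha concrete, here is a hypothetical sketch: rebuilding the clustered index onto a new filegroup moves the in-row pages, but the LOB allocations stay put, which you can confirm by checking sys.allocation_units afterwards.

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'TestDb' -Query @"
-- In-row data moves to FG_UserData; LOB_DATA allocations do not.
CREATE CLUSTERED INDEX CIX_Documents
    ON dbo.Documents (DocumentID)
    WITH (DROP_EXISTING = ON)
    ON [FG_UserData];

-- See which filegroup holds each allocation type for the table.
SELECT au.type_desc, fg.name AS filegroup_name, au.total_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
    OR (au.type = 2 AND au.container_id = p.partition_id)
JOIN sys.filegroups AS fg
    ON fg.data_space_id = au.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.Documents');
"@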


SQL Client Aliases

Andrew Pruski explains how to use a lesser-known feature in SQL Server, client aliases:

One of the problems that we ran into when moving to using containers was how to get the applications to connect. Let me explain the situation.

The applications in our production environment use DNS CNAME aliases that reference the production SQL instance’s IP address. In our old QA environment, the applications and SQL instance lived on the same virtual server so the DNS aliases were overwritten by host file entries that would point to 127.0.0.1.

This caused us a problem when moving to containers as the containers were on a separate server listening on a custom tcp port. Port numbers cannot be specified in DNS aliases or host file entries and we couldn’t update the application string (one of the pre-requisites of the project) so we were pretty stuck until we realised that we could use SQL client aliases.

This is definitely a place that you’d want to document changes thoroughly, as my experience is that relatively few DBAs would even think of looking there.
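
For reference, the alias itself is just a registry entry (the same thing cliconfg.exe writes); here is a hypothetical example pointing an alias named PRODSQL at a container host on a custom TCP port. 32-bit applications read the Wow6432Node hive, so the entry usually needs to exist in both places.

$value = 'DBMSSOCN,containerhost01,15789'   # TCP protocol, host, custom port

foreach ($regPath in 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo',
                     'HKLM:\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo') {
    if (-not (Test-Path $regPath)) { New-Item -Path $regPath -Force | Out-Null }
    New-ItemProperty -Path $regPath -Name 'PRODSQL' -Value $value -PropertyType String -Force | Out-Null
}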


Database File Sizes In Powershell

Rob Sewell has a nice post on checking database file sizes using dbatools in Powershell:

As always, PowerShell uses the permissions of the account running the session to connect to the SQL Server unless you provide a separate credential for SQL Authentication. If you need to connect with a different Windows account you will need to hold Shift down and right-click on the PowerShell icon and click run as a different user.

Let’s get the information for a single database. The command has dynamic parameters which populate the database names to save you time and keystrokes.

It’s a great post, save for the donut chart…  Anyhow, this is recommended reading.
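
If you just want the raw numbers without the chart, something along these lines works; the instance and database names are hypothetical, and the cmdlet naming assumes a reasonably current dbatools build (the post itself may use an older command name).

Get-DbaDbFile -SqlInstance sql01 -Database AdventureWorks2014 | Format-Table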


Cloned Database Security

Parikshit Savjani explains what happens when you run DBCC CLONEDATABASE on databases with various security measures activated:

Transparent Data Encryption (TDE)

If you use TDE to encrypt data at rest on the source database, DBCC CLONEDATABASE supports cloning of the source database, but the cloned database is not encrypted by TDE. Thus, the backup of the cloned database will be unencrypted. If it is desired to encrypt and protect the cloned database backup, you can enable TDE on the cloned database before it is backed up, as shown below.

It’s a good read which covers several technologies.
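
A hedged sketch of that TDE workflow, with hypothetical database, certificate, and path names: clone, flip the clone to read-write, create a database encryption key against an existing server certificate, turn encryption on, and then back it up.

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
DBCC CLONEDATABASE (SalesDb, SalesDb_Clone);
GO
-- Clones are created read-only; TDE needs the database writable.
ALTER DATABASE SalesDb_Clone SET READ_WRITE;
GO
USE SalesDb_Clone;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;   -- assumes TDECert already exists in master
GO
ALTER DATABASE SalesDb_Clone SET ENCRYPTION ON;
GO
-- Wait for sys.dm_database_encryption_keys to show encryption_state = 3 before backing up.
BACKUP DATABASE SalesDb_Clone
TO DISK = N'C:\Backups\SalesDb_Clone.bak'
WITH INIT, CHECKSUM;
"@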
