The table was around 500GB with over 900 million rows. Based on the average number of inserts a day on that table, I estimated that we had eight months before inserts on that table would grind to a halt. This was an order entry table, subject to round-the-clock inserts due to customer activity. Any downtime to make the conversion to BIGINT was going to have to be minimal.
This article describes how I planned and executed a change from an INT to a BIGINT data type, replicating the process I used in a step-by-step guide for the AdventureWorks database. The technique creates a new copy of the table, with a BIGINT data type, on a separate SQL Server instance, then uses object-level recovery to move it into the production database.
There’s a way to do this without any downtime, though the trigger logic gets a little more complex and it does take longer.
Today, I received a request to find the manager for a whole bunch of users. This was a list of names (not UserIds) in an Excel worksheet.
It’s not actually that complex to do:

- Locate the AD user based on the name
- Check the Manager property
- Look up AD again for the Manager to get the name
Click through for the script. This does, of course, assume that the information is already in Active Directory somewhere.
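The three steps above can be sketched with the ActiveDirectory module like this (the name and output format are illustrative, not the author’s actual script):

```powershell
# Requires the RSAT ActiveDirectory module
Import-Module ActiveDirectory

# Illustrative name; the real script reads names from the Excel worksheet
$name = 'Jane Doe'

# 1) Locate the AD user based on the name
$user = Get-ADUser -Filter "Name -eq '$name'" -Properties Manager

# 2) Check the Manager property (it holds a distinguished name)
if ($user.Manager) {
    # 3) Look up AD again for the manager to get the name
    $manager = Get-ADUser -Identity $user.Manager
    "{0} -> {1}" -f $user.Name, $manager.Name
}
```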
You know you can have multiple filegroups, right? You might have a separate filegroup for the data (the clustered index & heaps) and another for the indexes (non-clustered indexes). Or maybe you want to separate your data tables from the system tables. There are any number of reasons why you might want to have multiple filegroups; however, there will always be a primary filegroup and it will always be the default if you don’t specify otherwise. Right? Wrong.
I’ve never seen a way to remove primary or to move the system objects in it. However, you can change the primary filegroup.
Having a separate filegroup for your tables and another for indexes (or splitting things up some other way) can help get a database back online faster, as you can restore the system tables first and then restore filegroups as needed.
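Changing the default is a single statement; a sketch, assuming a database with an existing secondary filegroup named [Data] (both names here are made up):

```sql
-- [Data] must already exist in the database
ALTER DATABASE [MyDatabase] MODIFY FILEGROUP [Data] DEFAULT;

-- From here on, tables and indexes created without an ON clause
-- land in [Data]; the system objects remain in PRIMARY.
```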
A gMSA is a sMSA [standalone managed service account] that can be used across multiple devices, and where the Active Directory (AD) controls the password. PowerShell is used to configure a gMSA on the AD. The specific computers that it is allowed to be used on are configured using some more PowerShell commands. The AD will automatically update the password for the gMSA at the specified interval – without requiring a restart of the service! Because the AD automatically manages the password, nobody knows what the password is.
Not all services support a gMSA – but SQL Server does. During a SQL Server installation you can specify the gMSA account. The SQL Server Configuration Manager (SSCM) tool can be used to change an existing SQL Server instance to use a gMSA. After entering the gMSA account you simply do not enter a password. The server automatically retrieves the password from the AD.
This is a nice way of improving service account security in a scenario where, for example, you can’t or don’t want to use virtual service accounts.
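A sketch of the AD-side setup (account, host, and group names are made up; assumes the ActiveDirectory RSAT module and an existing KDS root key in the domain):

```powershell
Import-Module ActiveDirectory

# Create the gMSA; only members of the 'SQLServers' group
# may retrieve its managed password
New-ADServiceAccount -Name 'SqlProdSvc' `
    -DNSHostName 'SqlProdSvc.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'SQLServers' `
    -ManagedPasswordIntervalInDays 30

# On each allowed computer, install and verify the account locally
Install-ADServiceAccount -Identity 'SqlProdSvc'
Test-ADServiceAccount -Identity 'SqlProdSvc'
```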
But how fast were they? Can we get faster? Thankfully R provides `system.time()` for timing code execution. In order to get faster, it makes sense to use all the processing power our machines have. The ‘parallel’ library has some great tools to help us run our jobs in parallel and take advantage of multicore processing. My favourite is `mclapply()`, because it is very easy to take an `lapply()` and make it multicore. Note that `mclapply()` doesn’t work on Windows. The following script runs the `read_clean_write()` function in a for loop (boo, hiss), `lapply()`, and `mclapply()`. I’ve run these as list elements to make life easier later on.
It’s interesting reading, particularly because I had expected lapply to do a little bit better. Also interesting is the relative overhead cost of mclapply in this scenario: going from 1 core to 4 cut the time to approximately 1/3, not 1/4.
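The comparison can be sketched like so (`read_clean_write()` and `files` stand in for the post’s actual function and file list):

```r
library(parallel)

# Time the same work three ways; stored as list elements for later comparison.
# mc.cores = 4 assumes a four-core machine (and a non-Windows OS).
timings <- list(
  for_loop = system.time(for (f in files) read_clean_write(f)),
  lapply   = system.time(lapply(files, read_clean_write)),
  mclapply = system.time(mclapply(files, read_clean_write, mc.cores = 4))
)
```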
Folks in the Linux world are used to moving SSH keys to and from systems enabling password-less authentication using SSH user keys. Let’s take a minute to look at what it takes to use PowerShell to distribute SSH user keys to remote systems.
In the OpenSSH package there’s a command, ssh-copy-id, which is a bash script that copies a user’s public key to a remote system. There’s a little intelligence in the script to set things up properly on the remote system for password-less, key-based authentication. If the appropriate directory and key file aren’t set up, ssh-copy-id will create the directory and key file with the correct permissions on the remote system. As far as I can tell, ssh-copy-id has not been implemented in the Win32-OpenSSH port. So that leaves us with implementing this functionality ourselves, in PowerShell.
Read on to see how he does it.
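For a rough idea of what ssh-copy-id does, the core can be approximated from PowerShell by piping the public key over ssh (user, host, and key path are illustrative):

```powershell
# Assumes Win32-OpenSSH's ssh.exe is on the PATH
$pubKey = Get-Content "$env:USERPROFILE\.ssh\id_rsa.pub"

# Create ~/.ssh with safe permissions if needed, then append the key
$pubKey | ssh user@remotehost "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
```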
If you have experience with PowerShell, some properties of Bash variables will feel familiar. In Bash, variables are denoted with a `$` just like in PowerShell, but unlike PowerShell the `$` is only needed when they are being referenced. When you are assigning a value to a variable, the `$` is left off:
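A minimal sketch of the snippet, using the `my_var` and `printf` the excerpt references:

```shell
my_var="Hello World"
printf '%s\n' "${my_var}"
# prints: Hello World
```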
Above we assigned a value to `my_var` without using the `$`, but when we then referenced it in the `printf` statement, we had to use a `$`. We also enclosed the variable name in curly braces. This is not required in all cases, but it is a good idea to get in the habit of using them. In cases where you are using positional parameters above 9 (we’ll talk about this later) or you are using a variable in the middle of a string, the braces are required, but there is no harm in adding them every time you use a variable in a string.
The basic syntax is pretty familiar to most programming languages, and there’s nothing scary about outputs, even when Mark starts getting into streams.
Now, not every environment is the same. Instances get configured differently or things change just due to the nature of the business. In a previous life I would routinely have to backup a database and restore it to another server. However, the server I was using to restore to had a different drive configuration. It happens. Anyway, I wanted a script that would give me
- A backup statement
- A restore statement with the appropriate MOVE switch
This would allow me to easily execute the backup statement and then on the target server execute the restore statement. Since the restore statement already had the appropriate MOVE switch provided, I didn’t have to manually compose the statement.
Click through for the script.
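The shape of the generated pair looks roughly like this (database, logical file names, and paths are illustrative, not the author’s script output):

```sql
-- Backup statement to run on the source server
BACKUP DATABASE [Sales]
TO DISK = N'\\backupshare\Sales.bak'
WITH COMPRESSION, CHECKSUM;

-- Restore statement for the target server, with MOVE
-- remapping each file to that server's drive layout
RESTORE DATABASE [Sales]
FROM DISK = N'\\backupshare\Sales.bak'
WITH MOVE N'Sales_Data' TO N'E:\Data\Sales.mdf',
     MOVE N'Sales_Log'  TO N'F:\Log\Sales_log.ldf',
     RECOVERY;
```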
```sql
-- 1) Create the Certificate:
CREATE CERTIFICATE [Permission$ViewServerState]
    ENCRYPTION BY PASSWORD = 'UseBetterPassword!'
    WITH SUBJECT = 'VIEW SERVER STATE permission',
    EXPIRY_DATE = '2099-12-31';
/*
  The password is used to protect only the Private Key, not the Public Key
  or the Certificate in general. The default expiration date is 1 year from
  the date the Certificate is created. However, in many situations, such as
  with Module Signing, the expiration date is actually ignored. Still, I
  prefer to set the expiration date to 2099.
*/
```
The DBA handling this server in the year 2100 is going to be really upset. Certificate signing works wonders in conjunction with stored procedures, as you can grant procedures all kinds of rights needed for your application but users cannot run ad hoc queries expecting those same rights.
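The rest of the pattern, sketched with an illustrative procedure name, maps the certificate to a login, grants the permission, and signs the procedure:

```sql
-- 2) Create a login from the certificate and grant it the permission
CREATE LOGIN [Permission$ViewServerState]
    FROM CERTIFICATE [Permission$ViewServerState];
GRANT VIEW SERVER STATE TO [Permission$ViewServerState];

-- 3) Sign the stored procedure; callers of dbo.MyMonitoringProc now get
--    VIEW SERVER STATE without holding the permission themselves
ADD SIGNATURE TO dbo.MyMonitoringProc
    BY CERTIFICATE [Permission$ViewServerState]
    WITH PASSWORD = 'UseBetterPassword!';
```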