Something isn’t right… as DBAs we think of things in rows and columns. So we’re going to count across the top and think the 7th column is going to yield the 7th column and its data for each row, right? Well, it will, but data processed by awk is whitespace-delimited by default and is processed row by row. So the 7th column in the second line isn’t the same as the output in the first line. This can be really frustrating if your row data has spaces in it… like, you know, dates. So let’s fix that. The output from the DMVs via dbfs is tab-delimited, and we can define our delimiter for awk with -F, which allows for whitespace in our data by breaking it only on the tabs. Let’s hope there aren’t any tabs in our data!
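The difference is easy to see with a couple of lines of sample data; the file name and columns here are made up for illustration, not actual dbfs output:

```shell
# Two tab-delimited rows; the second field (a datetime) contains a space.
printf 'db1\t2017-05-01 10:00:00\tOK\ndb2\t2017-05-02 11:30:00\tFAIL\n' > /tmp/sample.tsv

# Default whitespace splitting: the datetime breaks into two fields,
# so $3 lands in the middle of it.
awk '{print $3}' /tmp/sample.tsv
# prints: 10:00:00
#         11:30:00

# -F'\t' breaks only on tabs, so $2 is the whole datetime.
awk -F'\t' '{print $2}' /tmp/sample.tsv
# prints: 2017-05-01 10:00:00
#         2017-05-02 11:30:00
```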
I’m a little surprised that these metrics don’t end up in /proc, but I imagine there’s a reason for that.
In many cases, you can easily provision resources in the web-based Azure portal. If you’re never going to repeat the deployment process, then by all means use the interface in the Azure portal. It doesn’t always make sense to invest the time in automated deployments. However, ARM templates are really helpful if you’re interested in achieving repeatability, improving accuracy, maintaining consistency between environments, and reducing manual effort.
Use ARM templates if you intend to:
Include the configuration of Azure resources in source control (“Infrastructure as Code”), and/or
Repeat the deployment process numerous times, and/or
Automate deployments, and/or
Employ continuous integration techniques, and/or
Utilize DevOps principles and practices, and/or
Repeatedly utilize testing infrastructure then de-provision it when finished
Melissa walks through an example of deploying a website with backing database, along with various configuration changes.
ON is the default and is what you would expect: trailing spaces are saved in VARCHAR, and in CHAR additional spaces are added to fill out the column’s full width. When ANSI_PADDING is OFF, those additional spaces are not saved… unless the column is CHAR and NOT NULL.
So there’s the first reason not to turn ANSI_PADDING off: most people expect the ON results, and the OFF results can be… let’s just say confusing.
Click through for more details, including how painful it is to change the setting on a column after the fact.
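You can see the behavior for yourself with a couple of temp tables. This is my sketch, not from the post: the server name is a placeholder, and it runs via sqlcmd against tempdb.

```shell
# Hypothetical demo: DATALENGTH shows whether trailing spaces survived the insert.
demo_ansi_padding() {
  sqlcmd -S "$1" -d tempdb -Q "
    SET ANSI_PADDING ON;
    CREATE TABLE #pad_on (v VARCHAR(10));
    INSERT INTO #pad_on VALUES ('ab   ');
    SELECT DATALENGTH(v) AS len_on FROM #pad_on;    -- 5: trailing spaces kept

    SET ANSI_PADDING OFF;
    CREATE TABLE #pad_off (v VARCHAR(10));
    INSERT INTO #pad_off VALUES ('ab   ');
    SELECT DATALENGTH(v) AS len_off FROM #pad_off;  -- 2: trailing spaces trimmed
    SET ANSI_PADDING ON;"
}
# Usage: demo_ansi_padding 'myserver\myinstance'
```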
Your servers have many-core CPUs, right? And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes? Damn right, you paid $3k or more per core in freaking licensing costs! “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’.” (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits). “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”
Nate has a few recommendations here, as well as a picture of kittens.
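The knob being described is cost threshold for parallelism (with max degree of parallelism as its partner). A quick way to inspect both via sp_configure, with the server name as a placeholder:

```shell
# Both settings are advanced options, so enable 'show advanced options' first.
show_parallelism_settings() {
  sqlcmd -S "$1" -Q "
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism';
    EXEC sp_configure 'max degree of parallelism';"
}
# Usage: show_parallelism_settings 'myserver'
```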
Performance problems for a SQL Server based application are likely to be caused by environmental factors and not buggy code.
Whether it is a configuration you can change in SQL Server, Windows Server, VMware, or the network, the first course of action is likely a quick assessment of the environment. This is where understanding the various configurations and best practices is key. Knowing what to look for can save tons of time.
A mistake I often see is a performance issue passed off to someone else (more senior), and that engineer assumes a lot of things without checking. People are going to relay the problem as they see it – not as it actually is. This leads to skipping over some elementary checks which can save time and frustration from tracking down imaginary bugs.
Start troubleshooting with a quick environmental check.
There are quite a few checks here.
I had a recent project to enable backup encryption on all our servers. Then a question came up from the storage team: will this require additional space? Well, by then I had already enabled it on all our test servers, so I wrote a query that compares the average size of backups before encryption to the average size after. Keep in mind we keep only two weeks of history in our backup tables, so this is a fair comparison. If you don’t have maintenance tasks to clean up your backup history, then you should add backup_start_time to the WHERE clauses to get more accurate numbers, and set up a maintenance task to keep your msdb backup history in check.
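The shape of such a comparison against msdb’s backup history looks something like this. To be clear, this is my sketch, not Tracy’s actual query: the server name is a placeholder, and I’m treating a non-NULL backupset.key_algorithm as the “encrypted” marker.

```shell
# Average full-backup sizes per database, split by whether the backup was encrypted.
compare_backup_sizes() {
  sqlcmd -S "$1" -d msdb -Q "
    SELECT database_name,
           AVG(CASE WHEN key_algorithm IS NULL     THEN backup_size END) AS avg_plain_bytes,
           AVG(CASE WHEN key_algorithm IS NOT NULL THEN backup_size END) AS avg_encrypted_bytes
    FROM dbo.backupset
    WHERE type = 'D'  -- full backups only
    GROUP BY database_name;"
}
# Usage: compare_backup_sizes 'myserver'
```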
Unfortunately, Tracy leaves us in suspense regarding whether they did increase.
Nothing special here, simple syntax, but the seasoned PowerShell remoting pro will notice that we’re using a new parameter here: -HostName. Normally in Windows PowerShell you have the -ComputerName parameter. Now, I don’t know exactly why this is different, but perhaps the PowerShell team needed a way to differentiate between OpenSSH- and WinRM-based remoting. Further, Enter-PSSession has a new parameter, -SSHTransport, which at the moment doesn’t seem to do much, since the remoting cmdlets currently use OpenSSH by default. But if you read the code comments here, it looks like WinRM will become the default and we’ll use this switch parameter to specify SSH as the transport.
Once we execute this command, you’ll have a command prompt to the system you passed as a parameter to -HostName. The prompt below indicates you’re on a remote system by putting the name of the server you’re connected to in square brackets, followed by your normal PowerShell prompt. That’s it, you now have a remote shell. Time to get some work done on that server, eh? Want to get out of the session? Just type exit.
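For reference, SSH-based remoting also needs a PowerShell subsystem registered with sshd on the target machine. The paths and names below are typical for a Linux install and are my assumption, not taken from the post:

```shell
# In /etc/ssh/sshd_config on the remote machine (the pwsh path may differ):
#   Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile
# Then restart sshd:
#   sudo systemctl restart sshd

# From the client (hostname and user are made up):
#   Enter-PSSession -HostName server01.contoso.com -UserName admin -SSHTransport
#   [server01.contoso.com]: PS /home/admin> exit
```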
It’s interesting to see how well Microsoft is integrating Linux support into PowerShell (and vice versa, but that’s a different post).
David Alcock has a couple posts covering installation of SQL Server on a brand new Ubuntu VM. First, David installs Ubuntu:
The system requirements for running SQL Server on Ubuntu 16.04.2 contain the following note: you need at least 3.25 GB of memory to run SQL Server on Linux (for other system requirements, see System requirements for SQL Server on Linux).
On the create VM window, the memory is currently set to 1024 MB, so by clicking the Customize Hardware button I can change the allocated memory to 4 GB (4096 MB), as in the screenshot below:
Let’s break it down a little bit. First, sudo, which runs a particular command with root permissions. This is as opposed to sudo su, which I had to do later on in the install to switch to superuser mode for the session.
Next is apt. Apt is a command-line tool which works with the Advanced Packaging Tool and enables you to perform installs, updates, and removals of software packages. In this case we’re installing curl, so we use the install command.
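With curl in place, the rest of the install follows the same apt pattern. Here’s a sketch of the steps as I recall them from Microsoft’s docs for SQL Server 2017 on Ubuntu 16.04, wrapped in a function so nothing runs until you call it; verify the URLs against the current docs before running.

```shell
install_mssql() {
  # Import the Microsoft package-signing key.
  curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
  # Register the mssql-server repository for Ubuntu 16.04.
  sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list)"
  # Refresh the package index and install the engine.
  sudo apt-get update
  sudo apt-get install -y mssql-server
  # Post-install configuration: edition, EULA, SA password.
  sudo /opt/mssql/bin/mssql-conf setup
}
# Run install_mssql on the target VM when you're ready.
```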
I think Microsoft did a good job of simplifying the installation process on Linux and making it “Linux-y,” with an easy installation and then post-installation configuration.
Restoring 27 databases: they all restored properly, but 15 of them gave a warning along these lines:
Warning: Wordbreaker, filter, or protocol handler used by catalog ‘FOOBARBAZ’ does not exist on this instance. Use sp_help_fulltext_catalog_components and sp_help_fulltext_system_components to check for mismatching components. Rebuild catalog is recommended.
Read on for the solution.
In my head I’m thinking of all the things that I can do to try and troubleshoot this problem.
- Trace my session with Extended Events,
- Ask my Senior DBA,
Then I realize that I’m jumping the gun again, so I slow down and check the first error message again. This time without the developers shouting in my ear about permissions.
This is a great example of why it’s important to troubleshoot using a methodical, logical process. If you get it stuck in your head that the answer is quite obviously something, you lose a bunch of time if it turns out that it isn’t quite as obvious.