Brent Ozar Unlimited has a couple whitepapers out about working with SQL Server in Google Compute Engine. First, Brent and Tara Kizer create an Availability Group:
In this white paper we built with Google, we’ll show you:
How to build your first Availability Group in Google Compute Engine
How to test your work with four failure simulations
How to tell whether your databases will work well in GCE
Relax. Have a drink. In this white paper we built with Google, we’ll show you:
How to measure your current SQL Server using data you’ve already got
How to size a SQL Server in Google Compute Engine to perform similarly
After migration to GCE, how to measure your server’s bottleneck
How to tweak your SQL Server based on the performance metrics you’re seeing
If you’re looking at GCE as a potential migration destination, you’ve got some extra reading material.
Acquiring the physical location of a row
SQL Server 2008 introduced a new virtual system column: “%%physloc%%”. “%%physloc%%” returns the file_id, page_id and slot_id information for the current row, in a binary format. Thankfully, SQL Server also includes a couple of functions to split this binary data into a more useful format. Unfortunately, Microsoft has not documented either the column or the functions.
Read on for two functions you can use to format this data more nicely, as well as a short re-write Wayne did to improve performance of one of them.
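As a quick taste of what the post covers, here is a sketch of both undocumented functions in action; because they are undocumented, the behavior could change between versions, and dbo.YourTable is a placeholder table name:

```sql
-- fn_PhysLocFormatter returns the location as (file:page:slot) text:
SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS physical_location, t.*
FROM dbo.YourTable AS t;

-- fn_PhysLocCracker returns the same data as separate columns:
SELECT p.file_id, p.page_id, p.slot_id, t.*
FROM dbo.YourTable AS t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS p;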
So the installation of SQL Server is now fairly straightforward. The wizard does a nice job of guiding you along the way. 2016 even includes best practice suggestions for tempdb and instant file initialization. Along the way, Microsoft has given us ways to automate the installation of SQL Server. You can sysprep an instance, but this does not really automate the installation; it just helps create a template of an instance, and at the end of the day you still need to do things manually. You can also use a configuration file to assist here. This is a great step forward, but it does not allow for all of the things you need to do to configure a SQL Server.
PowerShell does. Desired State Configuration (DSC) is functionality built into PowerShell that allows for the installation and configuration of SQL Server.
Chris includes his script as well as a link for more information on DSC in case you aren’t familiar with the concept.
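For a flavor of what such a script looks like, here is a minimal sketch assuming the SqlServerDsc module (earlier releases were named xSQLServer, so resource and property names may differ by version) and setup media at C:\SQLMedia — both assumptions, not Chris’s actual script:

```powershell
Configuration InstallSqlServer {
    Import-DscResource -ModuleName SqlServerDsc

    Node 'localhost' {
        # SqlSetup drives setup.exe with the declared configuration
        SqlSetup 'DefaultInstance' {
            InstanceName        = 'MSSQLSERVER'
            Features            = 'SQLENGINE'
            SourcePath          = 'C:\SQLMedia'             # assumed media path
            SQLSysAdminAccounts = @('CONTOSO\DBAdmins')     # assumed AD group
        }
    }
}

# Compile the configuration to a MOF and apply it
InstallSqlServer -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait -Verbose
```

The appeal of DSC is that the same configuration document can be reapplied: if the instance already matches the declared state, nothing happens.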
If you ever need to move a copy of a SQL database in Azure across servers then here is a quick easy way.
So let’s say you need to take a copy of a database called [Rack] within Subscription A that is on server ABCSQL1 and name it [NewRack] within Subscription B on a server called RBARSQL1 (the SQL Servers are in totally different data centers, too).
Read on for the answer.
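The core building block is the database copy statement, sketched below; this alone handles the same-subscription, cross-server case, while the cross-subscription part (matching logins and moving between subscriptions) is what the post walks through:

```sql
-- Run in the master database of the destination server (RBARSQL1).
-- Requires a login with sufficient rights (e.g., dbmanager) that exists
-- with a matching SID on both servers.
CREATE DATABASE [NewRack] AS COPY OF ABCSQL1.[Rack];
```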
A fundamental component of SQL Server is the security layer. A principal player in SQL Server security comes via principals. In a previous article, I outlined the different flavors of principals, focusing primarily on users and logins. You can brush up on that article here. While I touched lightly on the concept of roles in that article, I will expound on roles a bit more here, primarily in the scope of the effects on user permissions due to membership in various default roles.
Let’s reset back to the driving issue in the introduction. Frequently, I see what I would call a gross misunderstanding of permissions by way of how people assign permissions and role membership within SQL Server. The assignment of role membership does not stop with database roles. Rather, it is usually combined with a misconfiguration of the server role memberships as well. This misunderstanding can really be broken down into one of the following errors:
The belief that a login cannot access a database unless added specifically to the database.
The belief that a login must be added to every database role.
The belief that a login must be added to the sysadmin role to access resources in a database.
Worth reading. Spoilers: database roles are not like Voltron; they don’t get stronger when you put them all together.
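The anti-Voltron point is easy to demonstrate; this sketch uses a hypothetical login/user named TestUser:

```sql
CREATE USER TestUser FOR LOGIN TestUser;

-- Membership in db_datareader alone grants SELECT on every user table:
ALTER ROLE db_datareader ADD MEMBER TestUser;

-- Stacking the deny role on top does not "add up" -- DENY wins:
ALTER ROLE db_denydatareader ADD MEMBER TestUser;
-- TestUser can no longer SELECT, despite still being in db_datareader.
```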
With the Standard Deviation in hand, and a quick rule of thumb that says roughly 95% of all values are going to be within two standard deviations of the mean, I can determine that a value of 16 on my Cost Threshold for Parallelism is going to cover most cases, and will ensure that only a small percentage of queries go parallel on my system, but that those which do go parallel are actually costly queries, not some that just fall outside the default value of 5.
I’ve made a couple of assumptions that are not completely held up by the data. Using the two, or even three, standard deviations to cover just enough of the data isn’t actually supported in this case because I don’t have a normal distribution of data. In fact, the distribution here is quite heavily skewed to one end of the chart. There’s also no data on the frequency of these calls. You may want to add that into your plans for setting your Cost Threshold.
This is a nice start. If you’re looking for a more experimental analysis, you could try A/B testing (particularly if you have a good sample workload), where you track whatever pertinent counters you need (e.g., query runtime, whether it went parallel, CPU and disk usage) under different cost threshold regimes and do a comparative analysis.
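The arithmetic itself is a one-liner once you have the costs collected; this sketch assumes you have already pulled estimated subtree costs from the plan cache into a temp table, as the post does (#QueryCosts and its column name are assumptions):

```sql
-- Suggested starting point: mean plus two standard deviations of
-- the estimated subtree costs gathered from the plan cache.
SELECT AVG(EstimatedCost) + 2 * STDEV(EstimatedCost) AS SuggestedCostThreshold
FROM #QueryCosts;
```

Remember the caveat from the excerpt: this assumes a roughly normal distribution, which plan-cost data usually is not, so treat the result as a starting point rather than an answer.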
The memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).
For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.
Randolph has a couple good clarifications on memory limits outside the buffer pool, making this worth the read.
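If you want to see or adjust the cap on your own instance, the knob in question is max server memory; the value below is an example, not a recommendation:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example value only: cap the buffer pool, leaving headroom for the OS
-- and for memory consumed outside the buffer pool.
EXEC sp_configure 'max server memory (MB)', 120832;
RECONFIGURE;
```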
Target Server Memory (KB) is the amount of memory SQL Server is willing to allocate to the buffer pool under its current load — its potential. Total Server Memory (KB) is what SQL Server currently has allocated.
I am using SQL Server 2014 Developer Edition (64-bit). My machine has 12GB of RAM, maximum server memory is currently set to 8GB, and for the purposes of this post I have set minimum server memory to 1GB (Lock Pages in Memory has not been set).
Read on for a nice description laden with Perfmon pictures.
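If you’d rather pull these counters from T-SQL than from Perfmon, both are exposed through a DMV:

```sql
-- Both Perfmon counters are available in sys.dm_os_performance_counters;
-- LIKE is used because counter_name is a fixed-width, space-padded column.
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Target Server Memory%'
   OR counter_name LIKE 'Total Server Memory%';
```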
This weekend I set up some SQL vNext virtual machines, two on Windows and one on Linux so that I could test some scenarios and build an availability group.
IMPORTANT NOTE: The names of dbatools commands with a Sql prefix WILL CHANGE in a later release of dbatools. dbatools will use Dba throughout in the future, as the sqlserver PowerShell module uses the Sql prefix.
I used PowerShell version 5.1.14393.693 and SQL Server vNext CTP 1.3, running on Windows Server 2016 and Ubuntu 16.04, in this blog post.
There’s some fancy footwork in this post; if you’re looking for ways to compare instance configurations (specifically, sp_configure settings), check it out.
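The general shape of such a comparison is roughly the following sketch; given the rename warning above, the exact dbatools command name depends on your release (check Get-Command -Module dbatools), and the instance names here are placeholders:

```powershell
# Pull sp_configure settings from two instances and diff them.
$a = Get-DbaSpConfigure -SqlInstance SQL2016N1
$b = Get-DbaSpConfigure -SqlInstance SQLLINUX1

Compare-Object -ReferenceObject $a -DifferenceObject $b `
    -Property ConfigName, RunningValue |
    Sort-Object ConfigName
```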
I saw this week that there was a new CTP (v1.3) of SQL Server vNext. I haven’t had a lot of time to work on the Linux version lately, but I thought I’d try it and see how well the upgrade went.
There’s an install and upgrade page at Microsoft you can use, but on Ubuntu, things are easy. First, connect to your system and run this: sudo apt-get update
That will download updated packages and get the system ready. You can see that I have a lot of stuff to update on this particular system.
One small change I’d make to the command in that snippet is sudo apt-get update && sudo apt-get upgrade. They do different things, both of which are useful. I do hope that Microsoft keeps with the Linux-friendly upgrade process when it comes to CUs and SPs.