NOTE: The most important habit you can start with in Azure is putting everything into discrete, planned Resource Groups. These make management so much easier.
Once the account is set up, the first thing you need to do is create a Runbook. There is a collection of them available for your use within Azure, but none of them are immediately applicable for what I need, so I'm just writing a really simple PowerShell script to do what I want:
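As a minimal sketch of what such a runbook can look like (the parameter name and the operation are illustrative assumptions, not the script from the original post):

```powershell
# Hypothetical minimal Azure Automation runbook; names are placeholders.
param (
    [Parameter(Mandatory = $true)]
    [string] $ResourceGroupName
)

# Authenticate with the Automation account's Run As connection (the classic pattern).
$connection = Get-AutomationConnection -Name 'AzureRunAsConnection'
Connect-AzAccount -ServicePrincipal `
    -Tenant $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint

# Do something simple: list the resources in the target resource group.
Get-AzResource -ResourceGroupName $ResourceGroupName |
    Select-Object Name, ResourceType
```

Anything you can script in PowerShell against Azure can be dropped into a runbook body like this and scheduled.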
Runbooks are an important part of Azure maintenance, and this is a gentle introduction to them.
But then I saw Mike use the following syntax (sans PROFILE and XML):
SET STATISTICS IO, TIME, PROFILE, XML ON;
SET STATISTICS IO, TIME, PROFILE, XML OFF;
Wow, that is a lot easier! (And yes, using SQL Prompt I can set up a snippet to cut down on the typing, but this saves space in any case.)
Read on for more examples.
The recommendation is to set up a Standard 50 eDTU pool. I am convinced that this pool is a new pricing tier. Even though the cost saving is small, it is still clever that it suggests this. I assume the analysis done in the background really does understand my utilization patterns; as we know, those patterns are absolutely crucial when using elastic pools, so it is definitely something to consider.
With the click of a button, the portal will create it for you.
It’s interesting that the feature can actually save you money rather than just telling you that you need to buy more expensive services.
Two very solid reasons (there are others) that every DBA should be learning and using PowerShell:

1 – It's very useful for administration at the OS level.
At my current client I am team lead of System and SQL Admins, along with doing any of the work that comes our way. This means we need to be able to manage the modest server farm we have. It's big enough that we can't log onto every server every day, but small enough that nobody wants to buy a proper monitoring toolset. So…PS to the rescue!
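As one illustrative sketch of that kind of OS-level check (the server names are placeholders, and this assumes WinRM/CIM access to each machine):

```powershell
# Hypothetical example: report free space on every fixed drive of each server.
$servers = 'SQL01', 'SQL02', 'SQL03'

Invoke-Command -ComputerName $servers -ScriptBlock {
    # DriveType = 3 limits the query to local fixed disks.
    Get-CimInstance -ClassName Win32_LogicalDisk -Filter 'DriveType = 3' |
        Select-Object DeviceID,
            @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } }
} | Sort-Object PSComputerName, DeviceID
```

One scheduled script like this replaces a daily round of manual logons across the farm.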
Read on for the other reason. I think the relatively poor PowerShell tooling with SQL Server (with respect to other groups like Exchange) limited general acceptance, but they've made some big improvements over the past year and there are some sharp minds in the community working to make PowerShell even more important for DBAs.
I decided to change this around so that it no longer uses string formatting, but instead a SqlConnectionStringBuilder. I had a couple of reasons for this:
It will eliminate redundant code. There are several common elements in each of the ConnectionStrings above. If more complex logic is needed, there are potentially more copies of this ConnectionString kicking around.
It’s prone to copy/paste and other editing errors. If there’s a change that affects both versions of the ConnectionString and the developer just copies the line from one branch of the if statement to the other, code will be lost or invalid values will be substituted because of positioning.
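To sketch the idea, the same .NET class can be used from PowerShell; the server, database, and credential values here are made up for illustration:

```powershell
# Hypothetical sketch of the SqlConnectionStringBuilder approach; all values are placeholders.
$useWindowsAuth = $true

$builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder

# Common elements are set once, instead of being duplicated in each branch.
$builder['Data Source']     = 'SQLPROD01'
$builder['Initial Catalog'] = 'AdventureWorks'

# Each branch changes only the properties that actually differ.
if ($useWindowsAuth) {
    $builder['Integrated Security'] = $true
}
else {
    $builder['User ID']  = 'app_user'
    $builder['Password'] = 'not-a-real-password'
}

$builder.ConnectionString
```

Because each setting is a named property rather than a position in a format string, a copy/paste between branches can no longer silently put a value in the wrong slot.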
This is something I’d like to see make it to the main cmdlet.
This pattern works in SQL Server 2014 and higher. And it even works in Standard Edition of 2014.
Some folks will see the word ‘Switch’ in this pattern and assume it is Enterprise Edition-only for versions before SQL Server 2016 SP1.
However, oddly enough, you can use partition switching even in Standard Edition, as long as the tables only have one partition.
And all rowstore tables have at least one partition! That happens automagically when you create a table.
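A minimal sketch of the trick, with made-up table names: both tables below have exactly one partition (the default), so the switch is legal even in Standard Edition before SQL Server 2016 SP1, and it is a metadata-only operation.

```sql
-- Illustrative only: table names are placeholders, and the target of a
-- switch must be empty and have a matching schema.
CREATE TABLE dbo.Sales_Staging (SaleId INT NOT NULL, Amount MONEY NOT NULL);
CREATE TABLE dbo.Sales_Live    (SaleId INT NOT NULL, Amount MONEY NOT NULL);

-- ... load dbo.Sales_Staging ...

-- Swap the staged data in near-instantly; no data movement occurs.
ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales_Live;
```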
Read the whole thing.
We asked to see your papers, and 2,898 people from 66 countries answered.
Download the raw data in Excel, and you can slice and dice by country, years of experience, whether you manage staff or not, education, and more.
Community bloggers have already started to analyze the results:
There were several entrants and some good posts, so check it out.
You’d really like to know what tables are affected without having to wait. Luckily(?), this corruption was recorded in msdb.dbo.suspect_pages, and having just recently read Paul Randal’s post here, we know we can use DBCC PAGE to determine this information. And, after having read my prior blog post, you know that we can automate DBCC PAGE, so we can use our new friend “WITH TABLERESULTS” to find out what objects have been corrupted.
The suspect_pages table, documented here, has three particular columns of interest: database_id, file_id and page_id. These correspond nicely to the first three parameters needed for DBCC PAGE. To automate this, we need to know what information we need to return from the page – and from Paul’s post, we know that this is the field “METADATA: ObjectId”. For this code example, let’s assume that this corruption is on page 11 of the master database (just change “master” to the name of your 2TB database).
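The single-page version of that lookup can be sketched as follows (the temp table column sizes and the exact casing of the Field value are assumptions; with TABLERESULTS, DBCC PAGE does not require trace flag 3604):

```sql
-- Sketch: find the object that owns page 11 of master, via INSERT...EXEC.
CREATE TABLE #PageInfo (
    ParentObject NVARCHAR(255),
    [Object]     NVARCHAR(255),
    Field        NVARCHAR(255),
    [VALUE]      NVARCHAR(255)
);

INSERT INTO #PageInfo
EXEC ('DBCC PAGE (''master'', 1, 11, 0) WITH TABLERESULTS;');

-- Pull out the object the page belongs to.
SELECT OBJECT_NAME(CAST([VALUE] AS INT)) AS AffectedObject
FROM #PageInfo
WHERE Field = 'METADATA: ObjectId';
```

Joining this against database_id, file_id and page_id from msdb.dbo.suspect_pages is what turns it into the automated check described next.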
Read on for a script, including a script which checks all such suspect pages, and the possibly-better solution as of SQL Server 2012.