We turn to two cmdlets: Get-Variable and Set-Variable. They may seem redundant, as we get and set variables all the time without them; maybe you have never even heard of these two cmdlets.
Well, here they prove their usefulness: we will set the variables using their names, and use the current value as a starting point.
An important rule to remember is this:
$ is a token to indicate we're dealing with a variable, but it is not part of the variable name!
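To make the rule concrete, here is a quick sketch (the variable name `counter` is just an example):

```powershell
# The -Name argument takes the bare name: no $ token
Set-Variable -Name counter -Value 1

# Read the current value and use it as the starting point for the new one
$current = Get-Variable -Name counter -ValueOnly
Set-Variable -Name counter -Value ($current + 1)

Get-Variable -Name counter -ValueOnly   # 2
```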
It’s worth reading the whole thing.
Use the Get-Variable cmdlet to find them, and keep in mind that these variables are not found in Windows PowerShell 5.x.
The results display four variables, but let's pay attention to three of them. Below are the variables with their default values:
These three variables help identify which operating system the script is being executed on; from there, you just add the necessary logic to take the correct action.
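The post lists the variables themselves, but they appear to be the cross-platform automatic variables introduced in PowerShell Core: $IsWindows, $IsLinux, and $IsMacOS. Assuming that, a branching sketch:

```powershell
# $IsWindows / $IsLinux / $IsMacOS exist only in PowerShell Core (6.0+),
# not in Windows PowerShell 5.x
if ($IsWindows) {
    $tempPath = 'C:\Temp'        # hypothetical Windows-specific logic
}
elseif ($IsLinux -or $IsMacOS) {
    $tempPath = '/tmp'           # hypothetical *nix-specific logic
}
```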
Read on for a code example showing how to use these variables.
So here are the steps that I use to schedule my tasks:
Create a Windows-based Login in SQL Server
Ensure dbatools is available to the account
Create a SQL Server Credential
Create the Agent Proxy
Create the PowerShell .ps1 file
Create the Job and Job Step
Chrissy walks you through step by step, making the whole thing easy.
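The middle steps might look something like this with dbatools itself; this is a sketch, not Chrissy's exact script, and the instance, credential, and proxy names are all hypothetical (the Windows login and the .ps1 file are assumed to exist already):

```powershell
# Create a SQL Server credential mapped to the Windows login
New-DbaCredential -SqlInstance 'SQL01' -Name 'dbatools_cred' `
    -Identity 'DOMAIN\svc_dbatools' -SecurePassword (Read-Host -AsSecureString 'Password')

# Create a SQL Agent proxy that uses the credential
New-DbaAgentProxy -SqlInstance 'SQL01' -Name 'dbatools_proxy' `
    -ProxyCredential 'dbatools_cred' -SubSystem CmdExec

# Create the job and a job step that runs the script under the proxy
New-DbaAgentJob -SqlInstance 'SQL01' -Job 'Nightly dbatools task'
New-DbaAgentJobStep -SqlInstance 'SQL01' -Job 'Nightly dbatools task' `
    -StepName 'Run script' -Subsystem CmdExec `
    -Command 'powershell.exe -File C:\Scripts\task.ps1' -ProxyName 'dbatools_proxy'
```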
The first file you need to create is a manifest file. This file tells who wrote the module, what version of PowerShell should be used, and the module version. You will need to open PowerShell ISE or PowerShell with Run as administrator to run this command. Change the path to match the name you want your module to have, along with the module name and author. You may have to create the folder in your Modules folder (DBAFunctions) first, depending on the security on your machine.
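The manifest step might look like this; the path, names, and versions below are placeholders:

```powershell
# Creates the manifest for a module named DBAFunctions; adjust path and author to taste
New-ModuleManifest `
    -Path "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\DBAFunctions\DBAFunctions.psd1" `
    -RootModule 'DBAFunctions.psm1' `
    -ModuleVersion '1.0.0' `
    -Author 'Your Name' `
    -PowerShellVersion '5.0'
```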
There isn’t too much to the process, but there is a big benefit: code portability.
Standard edition is limited to the lesser of 4 sockets or 24 cores, with a maximum memory of 128 GB, and a few truly Enterprise-level features like Compression, Availability Groups, Partitioning, etc. are off limits. I would say most places would fall under this threshold for "Standard" but feel inferior to say they run "Standard"! I don't, especially when money matters.
But, all kidding aside, most shops don't even realize that they do not use any Enterprise features on 90% of their instances but pay the Enterprise price anyway! If you don't trust me, go check for yourself at your place; we did, on hundreds of SQL Server instances! I painfully built the infrastructure to do this type of thing using PowerShell; it can scan hundreds of servers/instances in seconds, if not a few minutes.
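The per-database check itself is simple; one way to sweep many instances is sketched below. This is not the author's exact script: the instance names are hypothetical, and the dbatools cmdlets are assumed to be available.

```powershell
# sys.dm_db_persisted_sku_features lists Enterprise-only features in use per database
$query = 'SELECT DB_NAME() AS DatabaseName, feature_name FROM sys.dm_db_persisted_sku_features'

foreach ($instance in 'SQL01','SQL02') {
    foreach ($db in (Get-DbaDatabase -SqlInstance $instance)) {
        Invoke-DbaQuery -SqlInstance $instance -Database $db.Name -Query $query
    }
}
```

An empty result set for a database means nothing Enterprise-only is persisted there.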
There’s a lot here, so if you’re thinking about downgrading in a post-2016 SP1 world, Jana’s post is a must-read. But even with the new features, there are still quite a few enterprise-level features that make it so I don’t want to live without Enterprise Edition.
It is even easier to use cmdkey with PowerShell. IT can build a small wrapper script that can manage cached credentials on one remote computer at a time and perform the action just as quickly on multiple computers at once.
The following example uses a PowerShell module called PSCredentialManager. IT pros can download the module from the PowerShell Gallery by running Install-Module.
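Grabbing the module is one line; -Scope CurrentUser avoids the need for elevation:

```powershell
# Pull PSCredentialManager down from the PowerShell Gallery
Install-Module -Name PSCredentialManager -Scope CurrentUser
```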
Read the whole thing.
But anyway, on to the TSQL2sDay posts
— Rob Sewell (@sqldbawithbeard) September 12, 2017
What a response. You wonderful people. I salute you with a Rimmer salute
Read on for an amazing 34 entries.
Defines a list of two servers (PowerShell lists are as easy as “”,””)
Pipes them to the
Gathers information about the last full backups for all of your databases on that instance.
Restores the backups to the Destination with a new name. If no Destination is specified, the originating server will be used.
The database is restored as “dbatools-testrestore-$databaseName” by default, which you can change with the -Prefix parameter.
The internal file names are also renamed to prevent conflicts with the original database.
A DBCC CHECKTABLE is then performed.
And the test database is finally dropped.
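Putting the list together, the cmdlet being described appears to be dbatools' Test-DbaLastBackup (an inference from the dbatools-testrestore- prefix mentioned above); a sketch with hypothetical instance names:

```powershell
# Two source instances piped in; restores land on sql2016test instead of the source
'sql2016a','sql2016b' | Test-DbaLastBackup -Destination sql2016test -Prefix 'dbatools-testrestore-'
```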
The entire log shipping functionality is now split across five functions. Four of them are used internally and are not exposed as public functions, because you can easily break things if they're not used correctly.
The main function, Invoke-DbaLogShipping, is available in the dbatools module for anyone to use.
If you open the GUI for log shipping, you have lots of choices, but most of them come with defaults already supplied in the GUI itself, and you can decide whether to use them or not.
The whole idea behind the functionality was that it would let you quickly set up log shipping using a lot of defaults, like you can in the GUI, but if you're more experienced you can change any setting to your preferences.
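In that spirit, a minimal invocation might look like the sketch below; the instance names and backup share are hypothetical, and everything else is left to the function's defaults:

```powershell
# Set up log shipping for one database, accepting the built-in defaults
Invoke-DbaLogShipping -SourceSqlInstance sql1 -DestinationSqlInstance sql2 `
    -Database AdventureWorks -BackupNetworkPath '\\sql1\logshipping' -Force
```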
Read on for an example of how to use this. It looks pretty snazzy.
Now let's say that we are working in an environment that does not allow us to change the server configurations, meaning that OPENROWSET() is closed to us.
Being completely honest, my spreadsheet is only 8 rows, so I would just insert the data manually; but what happens if it's a few thousand rows? Not a viable option.
So, with most of my T-SQL ideas exhausted, let's look at PowerShell!
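One PowerShell route (an assumption, not necessarily the author's approach) is the ImportExcel module to read the file, then dbatools to bulk-load it; the file, instance, and table names below are hypothetical:

```powershell
# Read the spreadsheet into objects, then bulk copy them into SQL Server
$rows = Import-Excel -Path 'C:\temp\People.xlsx'
$rows | Write-DbaDataTable -SqlInstance sql1 -Database Scratch `
    -Table dbo.People -AutoCreateTable
```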
The other all-time important question, of course, is how to get data from something else into Excel.