If you’ve ever looked at sys.sysprocesses or sys.dm_exec_requests (or a number of other DMVs), you’ve noticed there is a column called “sql_handle” that contains some binary gobbledygook. Books Online gives the (un)helpful definition as “Hash map of the SQL text of the request.”
“Binary gobbledygook” is probably the best description of the sql handle available.
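To see the gobbledygook decoded for yourself, you can pass the handle to sys.dm_exec_sql_text. A minimal sketch against the DMVs mentioned above:

```sql
-- For each currently running request, translate sql_handle
-- back into the SQL text it hashes.
SELECT
    r.session_id,
    r.sql_handle,
    t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- exclude this query itself
```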
Did you know that you can change the password on the SQL Server service account that is running your SQL instance without a reboot or restart? Turns out this is true. We have a new round of password requirements, which means we need to change passwords on servers more often. But since we need our servers up, and reboots have to be heavily planned, we needed a solution that kept us from having to restart an instance after a password change. This lovely MSDN article explains all the details, but let me give you the CliffsNotes version.
This is helpful for those max uptime scenarios where even a momentary service restart requires planned downtime.
A client asked me recently why he should back up the SSISDB database. While you can recreate everything inside the SSISDB, it will take time, and you will have to remember exactly how all of your variables were set. Restoring a backup avoids this problem, and having a backup allows a server to be redeployed quickly. When you back up the database, make sure that you also back up the database certificate, which is created when the SSISDB is created, as you will need it to perform a restore. By default, the recovery model of the SSISDB is set to Full. If the packages in SSISDB were changing minute by minute, Full would make sense, but given that an SSISDB contains packages which run on a scheduled basis, changes are most likely infrequent. Change the recovery model to Simple.
SSISDB is a real database, just like ReportServer, so don’t neglect it just because you didn’t create it.
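The catalog's encryption is also protected by a database master key created along with SSISDB, which is worth backing up alongside the certificate. A minimal sketch of the pieces involved (file paths and the password are placeholders, not a prescription):

```sql
USE SSISDB;

-- Back up the database master key that protects the catalog's encryption.
-- Path and password here are placeholders.
BACKUP MASTER KEY TO FILE = N'C:\Backups\SSISDB_key'
    ENCRYPTION BY PASSWORD = N'StrongPasswordHere';

-- Switch to simple recovery if the catalog changes infrequently.
ALTER DATABASE SSISDB SET RECOVERY SIMPLE;

-- Take the database backup itself.
BACKUP DATABASE SSISDB TO DISK = N'C:\Backups\SSISDB.bak';
```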
The shorter the period of time you keep backups, the more often you need to run DBCC CHECKDB. If you keep data for two weeks, weekly is a good starting point. If you take weekly fulls, you should consider running your DBCC checks before those happen. A corrupt backup doesn’t help you worth a lick. Garbage backup, garbage restore. If your data only goes back two weeks, and your corruption goes back a month, best of luck with your job search.
Erik provides some good guidelines on where to begin, but as always, your answer will depend upon your particular circumstances.
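If you're scheduling checks ahead of your weekly fulls, the check itself can be as simple as the following (the database name is a placeholder):

```sql
-- Integrity check before the full backup: suppress informational chatter,
-- but report every error found rather than stopping early.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```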
The CSS SQL Engineers have a new series called “It Just Runs Faster.” In their first post, they discuss DBCC improvements:
Internally, DBCC CHECK* uses a page scanning coordinator design (MultiObjectScanner). SQL Server 2016 changes the internal design to CheckScanner, applying no-lock semantics and a design similar to those used with In-Memory Optimized (Hekaton) objects, allowing DBCC operations to scale far better than previous releases.
Making DBCC checks significantly faster for large databases makes administration that much easier. I’m looking forward to seeing this. James Anderson, however, is concerned that things might be worse at the extreme low end.
The viewer can consume blocked process report events captured by any extended events session, as long as that session has a target of event_file. For example, if you set up your extended events session the way Jeremiah Peschka did in Finding Blocked Processes and Deadlocks using SQL Server Extended Events, then you would use the viewer like this:
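For context, a minimal sketch of an extended events session satisfying the event_file requirement; the session name, file path, and threshold are illustrative, and this is not necessarily the exact setup from Jeremiah's post:

```sql
-- The blocked process report only fires once a threshold is configured.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;  -- report after 5 seconds
RECONFIGURE;

-- An event session with the event_file target the viewer requires.
CREATE EVENT SESSION [blocked_process] ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\blocked_process.xel');

ALTER EVENT SESSION [blocked_process] ON SERVER STATE = START;
```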
NC_TABLE1 is 36 total extents: 288 8 KB pages, or 2.25 MB. It can be read in 5 reads – one read for each contiguous run.

CI_TABLE1 is comprised of 48 extents: 3 MB. It can be read in 11 reads – again, one for each contiguous run.

The SQL Server instance has the -E startup option enabled. Without that startup option, the proportional fill algorithm would distribute incoming data to the 8 data files with a potential smallest allocation of a single 64 KB extent before rotating round-robin to the additional files in the filegroup. With the -E startup option, the smallest possible new allocation by proportional fill is sixty-four 64 KB extents – 4 MB.

That means if I can rebuild the indexes into new space at the tail of the files, the contiguity should be improved considerably.
I had never considered the scenario described here before, so this was definitely interesting.
When multiple files are involved, and if these are ideally located on different physical spindles on the underlying disk subsystem, then rather nicely performing data striping can be achieved for the database. If proportional fill kicks in and starts to focus on files with more free space, then you may get hot spots in those files. However, nowadays with auto-tiering SANs, SSDs, and (abstracted) cloud storage (for IaaS deployments), this is beginning to matter less and less.
This is a good introduction to proportional fill, including what happens when you add files later. If you are counting on proportional fill, it’s a good idea to make sure all files are the same size and grow them all at once.
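Equal sizing is straightforward to set up. A sketch of a filegroup with identically sized files and identical growth increments, so proportional fill stripes evenly (database name, file names, sizes, and paths are all examples):

```sql
ALTER DATABASE YourDatabase ADD FILEGROUP UserData;

-- Same SIZE and FILEGROWTH on every file keeps free space balanced,
-- which is what lets proportional fill behave like a round-robin stripe.
ALTER DATABASE YourDatabase ADD FILE
    (NAME = UserData1, FILENAME = N'D:\Data\UserData1.ndf', SIZE = 4GB, FILEGROWTH = 1GB),
    (NAME = UserData2, FILENAME = N'D:\Data\UserData2.ndf', SIZE = 4GB, FILEGROWTH = 1GB)
TO FILEGROUP UserData;
```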
With the SQL Server 2005 end-of-support date approaching, you may find yourself migrating databases. One of the gotchas with any database migration is orphaned users. Below is a script I put together, and have been using for years, to help me resolve issues with orphaned users:
I’m not a fan of the sp_msforeachdb in there, because there are known issues which can cause it to skip databases. If you have installation authority, Aaron Bertrand’s sp_foreachdb is a better alternative, and if you need to do INSERT…EXEC calls, there’s a newer version which removes the INSERT/EXEC from that procedure.
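The per-database looping aside, the core detection and fix can be sketched like this (not the author's script; the user and login names in the remap are placeholders):

```sql
-- Find orphaned users in the current database: SQL-authenticated database
-- users whose SIDs no longer match any login on the server.
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE dp.type = 'S'                -- SQL users
  AND dp.authentication_type = 1   -- instance-authenticated
  AND sp.sid IS NULL;              -- no matching login

-- Remap a user to the matching login (names are placeholders).
ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];
```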
There may come a time when you need to generate a list of SQL logins and the last time their passwords were changed. By using LOGINPROPERTY against the names in sys.server_principals, we can generate such a list.
Note: This will only work with SQL logins.
If you want (or need) to know that passwords are being updated but can’t turn on password policies, this at least answers the initial question.
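A minimal sketch of such a query:

```sql
-- List SQL logins (type 'S') with the date each password was last set.
SELECT
    name,
    LOGINPROPERTY(name, 'PasswordLastSetTime') AS password_last_set
FROM sys.server_principals
WHERE type = 'S';
```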