I trust the utilization reported by “Processor Info”. Note that the greatest reported “Resource Pool Stats” utilization (approaching 120%) is when total “Processor Info” utilization is near 100% across all 12 physical/24 logical cores. Nominal rating of the core is 3.06 GHz, top SpeedStep is 3.46 GHz. That would give a maximum ratio of 3.46/3.06 = 113%, which is still under the number reported by SQL Server (for Default pool alone, I’ll add). Even if the numbers made it seem possible that SpeedStep was responsible for more than 100% utilization reported by SQL Server, I don’t think SpeedStep is the culprit. The older Intel processors were by default conservative with SpeedStep, to stay well within power and heat envelope. And no-one’s been souping this server up for overclocking 🙂
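If you want to compare the two sources yourself, the Resource Pool Stats numbers are also exposed through `sys.dm_os_performance_counters`. A minimal sketch, assuming a default instance (on a named instance the `object_name` carries an instance prefix), and remembering that "CPU usage %" is a ratio counter that must be divided by its matching base counter:

```sql
-- "CPU usage %" is a ratio counter: divide the raw value by the
-- matching "CPU usage % base" counter to get the actual percentage.
SELECT rp.instance_name AS pool_name,
       100.0 * rp.cntr_value / NULLIF(rb.cntr_value, 0) AS cpu_usage_pct
FROM sys.dm_os_performance_counters AS rp
JOIN sys.dm_os_performance_counters AS rb
    ON  rb.object_name   = rp.object_name
    AND rb.instance_name = rp.instance_name
    AND rb.counter_name  = 'CPU usage % base'
WHERE rp.object_name LIKE '%Resource Pool Stats%'
  AND rp.counter_name = 'CPU usage %';
```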
So… if my database engine will give 110% (and sometimes more…) I guess I better, too. 🙂
Math is hard.
PowerShell is an ideal tool for doing health checks of a collection of SQL Server instances, and there are several examples around, but few acknowledge that individual DBAs have their own priorities for tests and need something easily changed to suit circumstances. Omid’s Healthcheck lets tests be written in either SQL or PowerShell, and customizing it requires only adding, altering, or deleting files in directories.
Grab the tool from his GitHub repo.
Should I use WITH OVERRIDE?
That was an easy post. All done! Thanks for reading.
…Oh, right. Let’s talk about why. Books Online actually has the answer:
If that’s too heavy a read, you can spend the day on the Transformers Wiki. I mean, it is Friday, after all.
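For the curious, the classic Books Online example of where WITH OVERRIDE matters is pushing a configuration value past its sanity check; a minimal sketch (the value 90 is just an illustration):

```sql
-- Plain RECONFIGURE rejects a "recovery interval" above 60 minutes
-- as a non-recommended value; WITH OVERRIDE skips that validation.
EXEC sp_configure 'recovery interval', 90;
RECONFIGURE WITH OVERRIDE;
```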
Step 3: Easily Stop SQL Server Service.
It is very possible that you performed some heavy, memory-intensive operation on your local SQL Server instance and it ate all your free memory. You do not need it anymore, but SQL Server won’t easily give the memory back.
The easiest way to reclaim all your memory is to stop your SQL Server.
There are some easy ways of doing it:
– The first way is using SQL Server Management Studio: just right-click your local SQL Server instance and choose “Stop”.
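Not mentioned in the post, but worth knowing: you can also stop the instance from a query window with the T-SQL `SHUTDOWN` command, assuming you hold the sysadmin or serveradmin role:

```sql
-- Orderly shutdown: checkpoints each database first.
SHUTDOWN;

-- Immediate shutdown, skipping checkpoints; dirty pages are
-- recovered from the log on the next startup.
-- SHUTDOWN WITH NOWAIT;
```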
The use case for Slava’s advice is a scenario in which you have SQL Server installed on a local machine with very little RAM.
Here’s one of my favorites, which searches for code within stored procedures, functions, and views:
SELECT
    OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName,
    OBJECT_NAME(sm.object_id) AS ObjectName,
    CONCAT(OBJECT_SCHEMA_NAME(sm.object_id), '.', OBJECT_NAME(sm.object_id)) AS FullName,
    CONCAT(OBJECT_SCHEMA_NAME(sm.object_id), '.', OBJECT_NAME(sm.object_id), ',') AS CommaSeparatedName,
    definition
FROM sys.sql_modules sm
WHERE sm.definition LIKE '%DEFINITION%'
    --AND OBJECT_SCHEMA_NAME(sm.object_id) = 'Something'
    --AND OBJECT_NAME(sm.object_id) = 'Something'
ORDER BY SchemaName, ObjectName;
Andy Galbraith has a tale of woe and a cautionary message:
Paul’s blog post “Issues around DBCC CHECKDB and the use of hidden database snapshots” discusses the need to have certain permissions to be able to create the snapshot CHECKDB uses. I checked the DATA directory and the SQL Server default path and found that the service account did have Full Control to those locations.
What happened next ultimately resolved my issue, and it reflects something I constantly tell people when they ask me how I research things relatively quickly (most of the time anyway :)) – whenever you read a blog post or article about a subject, MAKE SURE TO READ THE FOLLOW-UP COMMENTS! Sometimes they are nothing beyond “Great Article!” but quite often there are questions and answers between readers and the author that add important extra information to the topic, or just “Don’t Forget This!” style comments that add more detail.
The dilemma we all run into is: what level of SAMPLED statistics is appropriate? The answer is that you have to test, but that is not always feasible, and in the case of Microsoft CSS we generally don’t have historical histogram states to revisit.
Microsoft CSS is engaged to help track down the source of a poorly performing query. A common step is to locate possible cardinality mismatches and study them more closely. Studying the statistics dates, row modification counters, atypical parameter usage, and the like are among the fundamental troubleshooting steps.
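Experimenting with sample levels looks roughly like the following sketch; the table and statistic names here are hypothetical placeholders:

```sql
-- Hypothetical names. Capture the current histogram first so there
-- is a "before" state to compare against after resampling.
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID) WITH HISTOGRAM;

-- Rebuild the statistic at a chosen sample rate, then re-test the query.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH SAMPLE 25 PERCENT;

-- The full-scan baseline for comparison:
-- UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;
```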
Good post and great scripts, even if he Microsoftly nouns the verb “ask.”
The way Microsoft has implemented the Always Encrypted feature is to keep all the data in the tables encrypted. An application that needs to look at the data has to use the new enhanced ADO.NET library, which gives your application the methods to encrypt and decrypt data.
This means the only way to insert data into a table containing encrypted columns is to use parameterized insert statements from your application. It is not even possible to insert data from SQL Server Management Studio; if we try, the statement will fail.
This way we ensure that only the people using the application can see unencrypted data, thus reducing the number of people with direct access to sensitive data.
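For context, declaring an encrypted column looks like the sketch below. The table and key names are hypothetical; the column master key and column encryption key must already exist, and deterministic encryption requires a BIN2 collation on character columns:

```sql
-- Hypothetical names (dbo.Patients, MyCEK). DETERMINISTIC allows
-- equality lookups; RANDOMIZED is stronger but not searchable.
CREATE TABLE dbo.Patients
(
    PatientID int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = MyCEK,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```

A plain `INSERT` with a literal SSN fails here, because the value arrives at the server unencrypted; only a parameterized statement from a client with column encryption enabled works.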
If you go down this route, it looks like the only method available for modifying data is going through ADO.NET, although that could change later. My biggest concern here is how much of a performance hit—if any—systems will take.
Pieter Vanhove has published his Policy-Based Management-based DBA Morning Checklist and has some post-Summit additions:
Optimize for Ad Hoc Workloads
The policy checks whether the server setting Optimize for Ad Hoc Workloads is set to True. By default, this setting is False.
The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single-use ad hoc batches. More information can be found at https://msdn.microsoft.com/en-us/library/cc645587.aspx
I don’t see any downside to enabling this setting.
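Flipping the setting itself is a two-step `sp_configure` call, since it is an advanced option:

```sql
-- "optimize for ad hoc workloads" is an advanced option,
-- so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```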
Not many shops use PBM, so I’m happy to see Pieter contributing this to the general community.