SQL Server tracks untrusted foreign keys in sys.foreign_keys with a column called is_not_trusted. There are a number of reasons why a foreign key may have become untrusted; below are a couple of examples:
- The foreign key was disabled using the ‘NOCHECK’ option, then re-enabled using ‘CHECK’ (not to be confused with ‘WITH CHECK’)
- The foreign key was disabled using the ‘NOCHECK’ option, primary key data was deleted, and the foreign key was re-enabled using only ‘CHECK’ (again, not to be confused with ‘WITH CHECK’)
So what happens when you try to enable a foreign key ‘WITH CHECK’ (which checks existing data for referential integrity)? If the data is consistent, this is going to succeed. However, if rows have been deleted and the primary key data no longer exists but the foreign key data does, for example, then this is going to fail miserably.
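The scenario above can be sketched in T-SQL (the table and constraint names here are hypothetical):

```sql
-- Disable the foreign key; referential integrity is no longer enforced
ALTER TABLE dbo.OrderLines NOCHECK CONSTRAINT FK_OrderLines_Orders;

-- ... parent rows get deleted from dbo.Orders while the constraint is disabled ...

-- Re-enabling with CHECK alone does NOT validate existing rows,
-- so the constraint stays untrusted
ALTER TABLE dbo.OrderLines CHECK CONSTRAINT FK_OrderLines_Orders;

-- Re-enabling WITH CHECK validates existing data; this fails if orphaned
-- foreign key rows exist
ALTER TABLE dbo.OrderLines WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders;

-- Verify trust status
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE name = N'FK_OrderLines_Orders';
```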
What I like about this post is that he does more than just say “hey, here’s how you get the key constraint to be trusted again;” he goes further and shows you how to figure out beforehand whether it will work.
Extended Events are the all-around smart choice. They take a little bit of time to get used to, however. With thousands of new events and data points, it can be difficult to create an event session in a pinch. That is why it is important to have event sessions pre-scripted or pre-implemented on your SQL Server instances. A little bit of up-front work can save you a lot of time when you need information on the spot. Having them pre-scripted also prevents you from jumping back to Profiler, which has a much heavier footprint on your server.
When I create Extended Event sessions, I tend to use the SQL Server Management Studio wizard to find the events and actions (additional fields) that I want. Then, I will script it out and save it for later.
Below are five Extended Events sessions that I have found particularly useful and recommend you add to your toolbox.
Click through for all of those scripts, as well as queries to shred the resulting XML.
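As a rough idea of what a pre-scripted session looks like, here is a minimal sketch that captures statements running longer than five seconds (the session name, threshold, and chosen events are all assumptions, not taken from the linked post):

```sql
-- Hypothetical pre-scripted session: statements taking longer than 5 seconds
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.client_hostname)
    WHERE duration > 5000000  -- duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel')
WITH (STARTUP_STATE = OFF);  -- script it now, start it only when needed
```

Keeping STARTUP_STATE off means the session sits idle until you need it, at which point `ALTER EVENT SESSION ... STATE = START` brings it online instantly.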
First, we need to create a table to store our information on the caches we would like to clear on an automated basis and populate it with values.
For example, we clear SQL Plans if 10,000 or more ad hoc or prepared plans take up 5 GB of memory, or the count of single-use plans is greater than 10,000, or the memory used for ad hoc or prepared plans is more than 50% of memory. We clear the Transactions cache if it is more than 2 GB, and Lock Manager : Node 0 if it is more than 2 GB.
Read on for the script.
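The cleanup itself presumably comes down to targeted DBCC FREESYSTEMCACHE calls, something along these lines (the exact cache names come from sys.dm_os_memory_clerks and may differ in the linked script):

```sql
-- Clear ad hoc and prepared plans only, leaving other plan caches intact
DBCC FREESYSTEMCACHE('SQL Plans');

-- Clear other cache stores by the name reported in sys.dm_os_memory_clerks
DBCC FREESYSTEMCACHE('Transactions');
DBCC FREESYSTEMCACHE('Lock Manager : Node 0');
```

Targeting individual caches this way avoids the sledgehammer of `DBCC FREEPROCCACHE`, which empties the entire plan cache.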
Instance level evaluates the following:
- which databases are memory-optimized
- if running Enterprise, if there are any resource groups defined, and which databases are bound to them
- version/edition of SQL Server
- ‘max memory’ setting
- whether or not instance-level collection of execution statistics has been enabled for all natively compiled stored procedures
- memory clerks for the buffer pool and In-Memory OLTP
- the value of the committed_target_kb column from sys.dm_os_sys_info
- display any event notifications (because they conflict with deploying In-Memory OLTP)
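A couple of the instance-level checks can be sketched as straightforward DMV queries (a simplified illustration, not the script itself):

```sql
-- Memory clerks for the buffer pool and In-Memory OLTP (XTP)
SELECT [type], name, pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'MEMORYCLERK_SQLBUFFERPOOL', N'MEMORYCLERK_XTP');

-- The instance's target committed memory
SELECT committed_target_kb
FROM sys.dm_os_sys_info;
```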
Database level evaluates the following:
For each memory-optimized database:
- database files, including container names, size, and location
- indexes on all memory-optimized tables
- count of indexes per memory-optimized table
- natively compiled stored procedures
- whether or not the collection of execution statistics is enabled for any natively compiled procedures
- count of natively compiled procedures
- if using the temporal feature for memory-optimized tables, the amount of memory consumed by hidden temporal internal tables
- memory structures for LOB columns (off-row)
- average chain length for HASH indexes
Ned provides the script on his blog, so click through to get that. This looks great if you’re trying to build up some basic information on how developers in your environment use memory-optimized objects.
So just remember: the only difference when analyzing settings is the Query Store Capture Mode. For Azure it is set to AUTO, whereas with locally installed SQL Servers it is set to ALL.
What does this mean? ALL means Query Store captures all queries, whereas AUTO ignores infrequent queries and queries with insignificant cost; the thresholds for execution count, compile duration, and runtime duration are determined internally.
Read on to learn more, including how to change these settings.
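Checking and changing the capture mode is a two-statement affair (the database name below is a placeholder):

```sql
-- Check the current capture mode; run this in the target database
SELECT actual_state_desc, query_capture_mode_desc
FROM sys.database_query_store_options;

-- Switch to AUTO, the mode Azure uses by default
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
```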
There was something that popped up today that called for a PowerShell script and the Get-ADGroupMember cmdlet – get a list of users from a list of groups. Some users are in there more than once so this needs to be a distinct list, unless you are into manually cleaning up things like this, and then I will be sad for you. Because that is kinda sad.
I originally wrote a script with two arrays (one for the initial list and one for the de-duped list of users), but even though this is quick and dirty, that was a little too dirty. Enter the Group-Object cmdlet – it takes this list of names and groups them. No black magic this time. Just a cmdlet that comes baked into PowerShell, giving me what I need.
Click through for the script.
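The Group-Object approach can be sketched like this (the group names are hypothetical, and this is a guess at the shape of the script, not the script itself):

```powershell
# Collect members from several AD groups; duplicates appear when a user
# is in more than one group
$members = 'GroupA', 'GroupB', 'GroupC' |
    ForEach-Object { Get-ADGroupMember -Identity $_ }

# Group-Object collapses the duplicates: one bucket per distinct
# SamAccountName, from which we keep the first member object
$distinct = $members |
    Group-Object -Property SamAccountName |
    ForEach-Object { $_.Group[0] }
```

No second array, no manual de-duplication pass – the grouping does the work.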