I use a pattern that includes four fields on all transactional tables, and that absolutely includes lookup tables. The two table types that are exceptions to this pattern are audit tables and error tables; I’ll cover why later in this article.
The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The date columns should be datetime2. CreatedOn is the easiest to populate: you can add a default constraint on the field that calls GETDATE().
This is a common pattern and works pretty well. The trick is making sure that you keep that metadata up to date.
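As a sketch of that pattern (the table, column choices, and constraint names here are my own illustration, not a prescription from the original article):

```sql
-- Illustrative only: a transactional table carrying the four audit columns.
CREATE TABLE dbo.Orders
(
    OrderId    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OrderTotal money NOT NULL,
    CreatedOn  datetime2 NOT NULL
        CONSTRAINT DF_Orders_CreatedOn DEFAULT (GETDATE()),
    CreatedBy  sysname NOT NULL
        CONSTRAINT DF_Orders_CreatedBy DEFAULT (SUSER_SNAME()),
    UpdatedOn  datetime2 NULL,
    UpdatedBy  sysname NULL
);

-- The defaults handle inserts; every UPDATE has to maintain the
-- metadata itself (or you can do it in a trigger):
UPDATE dbo.Orders
SET    OrderTotal = 19.99,
       UpdatedOn  = GETDATE(),
       UpdatedBy  = SUSER_SNAME()
WHERE  OrderId = 1;
```

The defaults make CreatedOn/CreatedBy essentially free; the UpdatedOn/UpdatedBy pair is where the "keep that metadata up to date" discipline comes in.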
What about adding a clustered index and then dropping it? Nooooooo, and again, I learned something new. This causes two rebuilds of the non-clustered indexes: they are rebuilt when the cluster is added, and rebuilt again when the table changes back to a heap (to record the new heap row locations). That’s crazy, and certainly not what we want.
Also read Matthew Darwin’s comment, as “Don’t do X” usually has an “Except when Y” corollary.
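The usual "except when Y" here is a heap full of forwarded records. On SQL Server 2008 and later you can rebuild the heap directly; the non-clustered indexes are still rebuilt, but only once instead of twice (the table name below is a placeholder):

```sql
-- One-time heap rebuild: removes forwarded records without the
-- add-then-drop clustered index dance described above.
ALTER TABLE dbo.MyHeap REBUILD;
```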
The topic of baselines in SQL Server is one that I’ve had an interest in for a long time. In fact, the very first session I ever gave back in 2011 was on baselines. I still believe they are incredibly important, and most of the data I capture is still the same, but I have tweaked a couple of things over the years. I’m in the process of creating a set of baseline scripts that folks can use to automate the capture of this information, in the event that they do not have/cannot afford a third-party monitoring tool (note, a monitoring tool such as SQL Sentry’s Performance Advisor can make life WAY easier, but I know that not everyone can justify the need to management). For now, I’m starting with links to all relevant posts and then I’ll update this post once I have everything finalized.
If you don’t know what “normal” looks like, you’ll have a hard time discerning whether something is wrong. The better you understand a normal workload, the easier it is to spot issues before end users call you up.
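One minimal sketch of baseline capture is snapshotting wait stats into a table on a schedule; the table name and column selection here are my own assumptions, not the finalized scripts mentioned above:

```sql
-- Illustrative baseline table: run the INSERT from an Agent job
-- (e.g. every 15 minutes) and compare deltas against "normal".
CREATE TABLE dbo.WaitStatsBaseline
(
    CaptureDate         datetime2    NOT NULL DEFAULT (GETDATE()),
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    signal_wait_time_ms bigint       NOT NULL
);

INSERT dbo.WaitStatsBaseline
    (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM   sys.dm_os_wait_stats;
```

Since the DMV values are cumulative since the last restart, it is the difference between two captures, not the raw numbers, that shows you what "normal" looks like.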
Brent Ozar walks through one way to reduce SA account usage.
In a perfect world, you’d create a minimally privileged AD login that only has limited access to specific databases.
However, when you’ve got a toddler running with scissors and razors, sometimes you’re happy just to get the razors out of their hands first, and then you’ll work on the scissors next. One step at a time. Preferably not running.
For now, create another SQL account with DBO permissions on all of the databases involved with the application. (If you’re dealing with multiple different tenants on the same server, give them each their own SQL login.) Let them be complete owners of their databases for now.
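A sketch of that interim step (login, password, and database names are placeholders; `ALTER ROLE ... ADD MEMBER` requires SQL Server 2012 or later):

```sql
-- Dedicated SQL login to replace the app's use of SA.
CREATE LOGIN AppOwner WITH PASSWORD = 'use-a-strong-generated-password';
GO
USE AppDatabase;
GO
-- Map the login into the database and make it the complete owner for now.
CREATE USER AppOwner FOR LOGIN AppOwner;
ALTER ROLE db_owner ADD MEMBER AppOwner;
```

Repeat the `CREATE USER`/`ALTER ROLE` step in each database the application touches, one login per tenant if you have multiple tenants on the server.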
Power User: “EVERYTHING IS DOWN! THE SA ACCOUNT PASSWORD ISN’T WORKING! DID YOU RESET IT?”
Me: “Of course not. You told me not to.”
Power User: “THEN WHO DID IT?”
Me: “Oh, I have no way of knowing. Anyone who uses the account can change the password with the ALTER LOGIN command. And you said everyone has it, right?”
That’s a nice account you have; it’d be a shame if something…unfortunate…were to happen to it.
If you are new to being a database administrator, or being a DBA is not the primary focus of your job, you may see benefits in shrinking a database automatically. If the database shrinks by itself, it might look like self-management; however, there is a problem with doing this.
When you shrink a data file, SQL Server goes in and recovers all the unused pages, and during the process it gives that space back to the OS so it can be used somewhere else. The downstream effect is that your indexes become fragmented. This can be demonstrated with a simple test.
Friends don’t let friends auto-shrink.
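Checking for and disabling the setting is quick (substitute your own database name):

```sql
-- Find databases with auto-shrink enabled.
SELECT name
FROM   sys.databases
WHERE  is_auto_shrink_on = 1;

-- Turn it off.
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;
```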
So we can clearly, and without any doubt, say that COUNT(*) and COUNT(1) are the same and equivalent.
Both of these are different from COUNT(SomeColumnName), though.
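A quick illustration of the distinction, using a throwaway table variable:

```sql
-- COUNT(*) and COUNT(1) count every row; COUNT(column) skips NULLs.
DECLARE @t TABLE (SomeColumnName int NULL);
INSERT @t VALUES (1), (NULL), (3);

SELECT COUNT(*)              AS CountStar,   -- 3
       COUNT(1)              AS CountOne,    -- 3
       COUNT(SomeColumnName) AS CountColumn  -- 2 (the NULL is excluded)
FROM   @t;
```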