ETL. The spec said only Address Line 1 needed to be loaded, so the developers brought in only that line (plus name, city, etc.). Fast forward 8 years, and I get a request on my desk: “Please add Address Line 2 to import, and all tables. Oh, and we need historical data for previously loaded files. And for all address types”.
No normalization in this database (which is just one of about 40 databases with SIMILAR structure, but not identical).
Read on for the damage done, as well as another example of foresight saving the day.
We’re still dealing with the same problems because we’re dealing with the problems in the same way.
I think it can be cultural and can propagate from the senior-level DBAs right on down to the new hires. Sometimes it’s just a lack of knowledge or understanding. Sometimes it’s pure laziness: not wanting to do a deep dive and find a better solution to a recurring problem.
Here is a pretty extreme example but I think it portrays all of these.
Given some of the things I’ve seen, I’d say his example is not at all extreme.
After years of using SQL Server Management Studio (and its predecessor Query Analyzer), I’m struck by how incapable the results grids still are. Unlike Excel, you can’t sort them, you can’t filter them, you can’t search within them, and you can’t easily change their font size. In any commercial software product, grid tools are table stakes. For some reason vendors still like to build their own, but they’re never a differentiator. That’s because you can just buy a grid component and use it in your application. Even the basic grid control which came with .NET 2.0 could sort.
Click through to read more, and also check out the Trello board that Riley mentions.
When you detach a database with Change Data Capture enabled on SQL Server 2014 or below and attach it to a SQL Server 2016 instance, you could run into the error mentioned below while executing Change Data Capture (CDC)-related procedures.
Msg 22832, Level 16, State 1, Procedure sp_cdc_enable_table_internal, Line 639 [Batch Start Line 0]
Could not update the metadata that indicates table [<schema name>].[<object name>] is enabled for Change Data Capture. The failure occurred when executing the command ‘insert into [cdc].[captured_columns]’. The error returned was 213: ‘Column name or number of supplied values does not match table definition.’. Use the action and error to determine the cause of the failure and resubmit the request.
Read on to learn more, including how to fix the issue.
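The usual shape of the fix is to bring the database’s CDC metadata up to the new instance’s expected schema. A minimal sketch (the database name is hypothetical; confirm the exact steps in the linked post before running this in production):

```sql
-- Run in the context of the database that was attached to the newer instance.
USE [MyAttachedDb];
GO
-- sys.sp_cdc_vupgrade upgrades the CDC system tables (including
-- cdc.captured_columns) to the schema the new engine version expects.
-- It is normally invoked by the engine during an in-place upgrade, but can
-- be run manually after an attach/restore from an older version.
EXEC sys.sp_cdc_vupgrade;
GO
```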
The primary system I deal with on a daily basis was originally developed as a DOS application and several of the above examples are drawn from it. Looking at the core tables and columns, it’s easy to identify those that began life in those early days – they all have 8-character names. Time moved on and the system grew and evolved. DOS to Windows. Windows to the web. But the database, and the practices and patterns used in the database, haven’t come along for the ride.
Data schema conversions can be hard and disruptive – you need to update your application, your stored procedures, and provide customers/users with a clean migration path. Code changes require testing. Complexity and cost grows every time you introduce changes. I get that.
There’s a lot of effort in Andy’s advice, but it’s well worth it.
What do you do? Hopefully you recognize the issue and can fix it. Maybe more importantly, you have a backup of the missing certificate.
Most people don’t deal with encryption, but you never know when your backup job might start failing, perhaps writing to a damaged file that appears to work (if you write as a device) but really isn’t capturing the backup file. Perhaps you don’t know that your backups are being written to a location and deleted a day later, but the process that is supposed to copy them to tape or a remote file share is broken.
Any number of things can happen. The point is that you want to be sure that you are actually getting usable backup files.
That means testing restores.
Read the whole thing.
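For the “testing restores” point, a minimal sketch of what that can look like in T-SQL (paths, database names, and logical file names here are hypothetical; `RESTORE VERIFYONLY` alone is not sufficient, which is why the actual restore follows):

```sql
-- Step 1: quick sanity check that the backup file is readable and complete.
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\Sales_FULL.bak';

-- Step 2: VERIFYONLY does not prove the data restores cleanly, so restore
-- into a scratch database and run a consistency check against it.
RESTORE DATABASE [Sales_RestoreTest]
FROM DISK = N'X:\Backups\Sales_FULL.bak'
WITH MOVE N'Sales'     TO N'X:\RestoreTest\Sales_RestoreTest.mdf',
     MOVE N'Sales_log' TO N'X:\RestoreTest\Sales_RestoreTest.ldf',
     REPLACE;

DBCC CHECKDB (N'Sales_RestoreTest') WITH NO_INFOMSGS;
```

Automating this on a schedule (restore the latest backup, check it, drop the scratch copy) turns “we have backups” into “we have restores.”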
The need for this setting came about because MAX_CPU_PERCENT is not applied unless the server is busy. This could lead to a situation where queries in a low-priority resource pool start running while the server is idle and are allowed to consume all the CPU they can. Then high-priority queries spin up, and they can’t immediately get the CPU they need because the low-priority queries were never capped. CAP_CPU_PERCENT came along and was designed to set a hard limit that the queries in a pool cannot exceed even if the server is idle. For example, if you cap the CPU at 25%, the queries in the pool will not exceed 25% no matter how idle the server is.
Problem solved, right?
When the end of a section is a yes/no question, the answer is usually “no.” Read on before this burns you.
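For reference, the Resource Governor setup being described looks like this (pool and group names are hypothetical; CAP_CPU_PERCENT requires SQL Server 2012 or later):

```sql
-- A low-priority pool with both limits from the excerpt above.
CREATE RESOURCE POOL LowPriorityPool
WITH (
    MAX_CPU_PERCENT = 25,  -- soft limit: enforced only under CPU contention
    CAP_CPU_PERCENT = 25   -- hard limit: enforced even when the server is idle
);

-- Route the low-priority workload into that pool.
CREATE WORKLOAD GROUP LowPriorityGroup
USING LowPriorityPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;
```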
Installation defaults that are going to bite you (not version-specific, and the installer is getting better):
Files all on the C drive
One TempDB data file (improved in SQL 2016)
Backups on C drive
No automated backups
Allow SQL to use ALL the memory
Allow SQL to use ALL the CPUs
Builtin\Administrators group not default*
Compressed backup set to OFF
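A few of these defaults can be corrected right after install with `sp_configure`. A hedged sketch (the values are illustrative, not recommendations for every server):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Don't let SQL Server take all the memory: leave headroom for the OS.
-- 26624 MB is an example value for a 32 GB box.
EXEC sp_configure 'max server memory (MB)', 26624;
RECONFIGURE;

-- Flip the default for backup compression from OFF to ON.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```

The file-placement items (data, log, tempdb, and backups off the C drive, multiple tempdb data files) are best handled on the installer’s directory and tempdb pages rather than after the fact.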
There’s good advice here, so read on.
I was reminded of this recently in my little SQL Server world. I have a number of garden variety alerts set up, plus some other more custom monitoring stuff, which is mostly tied to DDL triggers and event notifications. The one thing all of them have in common is database mail. You can probably guess where I’m going with this. Yep, database mail stopped working. A couple weeks passed before I realized it. Fortunately, out of all the alerts I should have been notified about, none of them were serious.
How would I prevent this happening in the future? I guess I could build another system to monitor my monitoring system. Something like System C, which monitors System B, which monitors System A. But where would that end? System D? System E? Where should the line be drawn? I don’t know that there’s a right answer here, although admittedly, the farther into the alphabet you get, the more absurd it sounds.
At some level, process becomes the answer. In my case, not before I create a few more systems…
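Short of building System C, one low-effort process improvement is to periodically check whether Database Mail itself is failing. A sketch using the real `msdb` mail views (the 24-hour window is an arbitrary choice):

```sql
-- Mail items that failed to send in the last day.
SELECT mailitem_id, recipients, subject, send_request_date
FROM msdb.dbo.sysmail_faileditems
WHERE send_request_date > DATEADD(DAY, -1, SYSDATETIME());

-- Errors logged by the Database Mail system itself.
SELECT log_date, description
FROM msdb.dbo.sysmail_event_log
WHERE event_type = 'error'
  AND log_date > DATEADD(DAY, -1, SYSDATETIME());
```

Of course, a query you have to remember to run has the same failure mode as the alerts themselves; surfacing it through some channel other than Database Mail is the point.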
In my case, I created a new VM with a fresh installation of Windows Server 2012 R2. I also ran Windows Update to ensure it had everything Windows Server thought it required. Figure 3 reflects this status.
It’s never quite as easy as “just run this patch,” so do read on for another gotcha.