Category: T-SQL Tuesday

Death Marches Roundup

Jeff Mlakar takes us on a tour of bad project planning:

This month we had 15 post submissions about this daunting topic. Two of the posts were from people who had never posted before. To them I say welcome and I hope you enjoyed the experience.

I think, in general, you were all brave to write about this sensitive topic. I know a lot of you are consultants (which is a great way to expose yourself to a death march project) and must be careful about telling stories that could be misconstrued by clients. Nonetheless, with enough obfuscation and redaction you have brought forth some truly horrifying posts! I am going to group the submissions by the most terrible themes that many posts shared.

It is best to read this post in Vincent Price’s voice.

Analysis Of A Failed Project

Eugene Meidinger looks back at a big project which fell apart:

So the first issue was that the software was built in-house by another company in the same industry. Imagine, for example, if a large bakery had created an ERP system and another large bakery wanted to move to that system. Sounds great, right? Well, you run into two issues in that scenario.

First, a bakery is not an independent software vendor. Programming, by definition, is not their core competency. Which means that you may run into fragility or issues that you wouldn’t run into with a commercial piece of software. It also means that there isn’t going to be any documentation on migrating to the software or implementing it. Why would there be? If you built software for one company, why would you create scaffolding to move other companies onto it?

Second, not every business is the same. A lot of the fundamentals are the same, but you will run into many edge cases. We do invoices this way. They do work orders this way. We handle purchase orders this way. They handle inventory that way.

The way that I think about it is like a sea shell. It’s this intricate curve that’s grown over time, organically, to fit that creature. If you just try to fit a different snail or mollusk in that shell, it may not work out.

Read the whole thing.

Trigger Spirals

David Fowler tells a story of woe, one which is totally not his fault:

To do this, a trigger was created which would send all the details via a Service Broker message to another SQL Server. That SQL Server was used to hold details of the AD accounts and, from there, changes were automatically propagated out to AD.

This was working well until one day when it was realised that any changes to account permissions in AD weren’t reflected in the personnel database.  To solve this, another trigger was created to send a Service Broker message back to the personnel database with details of the change.

This was where I came in: it was noticed that the system had started to run slower and slower, and not only that, but permissions seemed to be constantly changing for no obvious reason.  Were the machines finally waking up and taking over?

There’s a reasonable explanation here, for some definition of reasonable.
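
To picture the setup, a trigger that pushes a change notification through Service Broker looks roughly like the minimal sketch below. The table, message type, contract, and service names are all invented here, and the Service Broker objects would already have to exist:

-- Hypothetical sketch only: a trigger that queues a Service Broker message
-- for each personnel change. The message type, contract, and services are
-- invented names and would need to be created beforehand.
CREATE TRIGGER trg_Personnel_NotifyAD
ON dbo.Personnel
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;

    DECLARE @handle UNIQUEIDENTIFIER;
    DECLARE @payload XML = (SELECT * FROM inserted FOR XML AUTO, ROOT('Changes'));

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [//Personnel/ChangeSender]
        TO SERVICE '//AD/ChangeReceiver'
        ON CONTRACT [//Personnel/ChangeContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE [//Personnel/ChangeMessage] (@payload);
END;

Once a second trigger on the downstream server sends a similar message back whenever AD changes, each side’s trigger can end up firing the other’s, which is roughly the spiral the post describes.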

Triggers: Good, Bad, Mostly Ugly

Bob Pusateri walks us through a poorly-written DDL trigger:

First, the scope. While the application that deployed this trigger has its own database, AppDB, this trigger is firing for events on the entire server, which is what the ON ALL SERVER line means. Any qualifying event on this server, even if it pertains to another application with a separate database, will be written into this database. And what is a “qualifying event”? Literally any DDL statement. The line AFTER DDL_EVENTS specifies the very top of the event hierarchy used by DDL triggers.

So to recap on scope, this application is capturing all DDL statements on the entire server and saving a copy for itself. This application is seeing (and recording) plenty of events that it has no need to see. If this were a healthcare application or a system that dealt with PII, it would be a legal nightmare, but fortunately it isn’t.

However, scope isn’t the only issue.

Worth the read.  If you use DDL triggers on the instance level, make sure you know what you’re looking for and limit yourself as much as possible.
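
As a hedged illustration of that advice (not Bob’s code), a DDL trigger can be scoped to a single database and to specific event groups rather than ON ALL SERVER with DDL_EVENTS; the audit table and object names below are made up:

-- Illustrative only: scope the trigger to one database and to specific
-- event groups instead of ON ALL SERVER ... AFTER DDL_EVENTS.
-- dbo.DDLAudit is an assumed logging table.
CREATE TRIGGER trg_AppDB_AuditDDL
ON DATABASE                                   -- fires only for events in this database
FOR DDL_TABLE_EVENTS, DDL_PROCEDURE_EVENTS    -- only the event groups you care about
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @evt XML = EVENTDATA();

    INSERT INTO dbo.DDLAudit (EventTime, LoginName, EventType, ObjectName, TSQLCommand)
    VALUES (
        @evt.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime2'),
        @evt.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;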

Tracking Database Logins: 5 Methods

Eugene Meidinger has a medley of options for tracking server logins:

I once had to do some auditing for a customer, and it was a complicated, multi-stage process. We had to be able to demonstrate who had admin access and what kind of activity was going on, on the server. But before we could do any of that, we first had to identify who was actually logging on.

We get a brief walkthrough of each, and an important warning.
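
For flavor, one of the simpler options in this space (not necessarily one Eugene recommends) is a server-level logon trigger writing to an audit table; the table name here is invented, and note that a failing logon trigger can block every login on the instance:

-- Hypothetical sketch of one login-tracking option: a logon trigger.
-- Be careful: if this trigger fails (for example, the audit table is
-- unavailable), it can block logins server-wide.
CREATE TRIGGER trg_TrackLogins
ON ALL SERVER
FOR LOGON
AS
BEGIN
    INSERT INTO AuditDB.dbo.LoginHistory (LoginName, LoginTime, HostName, AppName)
    SELECT ORIGINAL_LOGIN(), SYSDATETIME(), HOST_NAME(), APP_NAME();
END;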

Server-Level Triggers

Shane O’Neill makes me wish Policy-Based Management had ever gotten the love it needed:

Now, my normal attitude with regard to triggers tends to run to the negative. Which is horrible, because triggers are just like any other tool; neutral by themselves and only good or bad based on how we use them.

So, with that being said, I’ve forced myself to think of a positive use for them. So here is a time when I’ve used triggers for a “good” cause and used them to get some visibility on when new databases are created.

DDL triggers can be useful things, as Shane shows us.
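
The general idea, sketched loosely rather than as Shane’s exact implementation, is a server-scoped DDL trigger on the CREATE_DATABASE event that logs who created what:

-- Loose sketch, not Shane's exact code: log every database creation on the
-- instance. DBA.dbo.DatabaseCreationLog is an assumed table.
CREATE TRIGGER trg_LogDatabaseCreated
ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @evt XML = EVENTDATA();

    INSERT INTO DBA.dbo.DatabaseCreationLog (DatabaseName, CreatedBy, CreatedAt, Command)
    VALUES (
        @evt.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime2'),
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;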

Saving Table History With Triggers

Bert Wagner shows us a way of saving table history in SQL Server using triggers:

Triggers are something that I rarely use. I don’t shy away from them because of some horrible experience I’ve had; rather, I rarely have a good need for them.

The one exception is when I need a poor man’s temporal table.

Check it out.  My main comment is, make sure you write the triggers to handle updating multiple rows; otherwise, you’ll be disappointed when rows go missing.
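
The multi-row caveat exists because the inserted and deleted pseudo-tables contain every row touched by a statement, so the trigger has to be written set-based. A minimal sketch, with invented table and column names:

-- Minimal sketch of a history ("poor man's temporal table") trigger.
-- It is set-based: a single UPDATE touching 500 rows writes 500 history
-- rows, because it reads the whole deleted pseudo-table.
CREATE TRIGGER trg_Customer_History
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Customer_History (CustomerId, Name, Email, ValidTo)
    SELECT d.CustomerId, d.Name, d.Email, SYSDATETIME()
    FROM deleted AS d;   -- prior version of every affected row
END;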

Table Swaps With Triggers

Jay Robinson walks through the process of making a breaking change to a large, active table with limited downtime:

I can only recall one time in the past several years (at least a decade) that I’ve found triggers to be useful. It involves data migration.

The problem: You have a massive, high-activity table. Let’s call this Table_A. You need to make significant changes to it. For example, the clustered index needs to change. How do you accomplish this?

I’ve used a similar process with good success in the past.
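
Without speaking to Jay’s exact steps, the general shape of a trigger-assisted swap is: build the new table, keep it in sync with a trigger, backfill, then rename both tables in one short transaction. A simplified sketch with invented names (batched backfill and error handling omitted):

-- Simplified outline of a trigger-assisted table swap; all names invented.

-- 1. Create the new table with the desired clustered index.
CREATE TABLE dbo.Table_A_New (
    OrderId    BIGINT    NOT NULL,
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL,
    CONSTRAINT PK_Table_A_New PRIMARY KEY CLUSTERED (CustomerId, OrderDate, OrderId)
);
GO

-- 2. Keep the new table in sync while the old one stays live.
CREATE TRIGGER trg_Table_A_Sync
ON dbo.Table_A
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DELETE tgt
    FROM dbo.Table_A_New AS tgt
    JOIN deleted AS d ON d.OrderId = tgt.OrderId;

    INSERT INTO dbo.Table_A_New (OrderId, CustomerId, OrderDate)
    SELECT i.OrderId, i.CustomerId, i.OrderDate
    FROM inserted AS i;
END;
GO

-- 3. Backfill existing rows in batches (not shown), then swap the names in
--    one short transaction during a quiet window.
BEGIN TRAN;
    EXEC sp_rename 'dbo.Table_A', 'Table_A_Old';
    EXEC sp_rename 'dbo.Table_A_New', 'Table_A';
COMMIT;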

The Costs And Benefits Of Triggers

Rob Farley hits on my biggest complaint about triggers:

But the problem isn’t what they do, it’s that you didn’t know it was there. Just like asbestos.

People complain about nested triggers, about triggers that are overly complex, about triggers which do too many things… but stored procedures have all these faults too. It’s just that you knew that the stored procedure was there, and you didn’t know that the trigger was there.

I wish that triggers were more visible inside the tools (Management Studio / Operations Studio / Visual Studio), right alongside stored procedures and procedural functions (those ones that use BEGIN & END and are bad, rather than inline functions). They’re code and should be treated as such.

Hiding triggers under tables makes it easy to forget about them, at least until you get some unexpected results.
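
A cheap way to drag them back into view is to query the catalog views, which list every DML and DDL trigger regardless of where the tools bury them:

-- Make the triggers visible: DML and database-scoped DDL triggers in the
-- current database, followed by server-scoped triggers.
SELECT t.name                    AS trigger_name,
       OBJECT_NAME(t.parent_id)  AS parent_table,   -- NULL for database-level DDL triggers
       t.is_disabled,
       t.is_instead_of_trigger
FROM sys.triggers AS t
ORDER BY parent_table, trigger_name;

SELECT st.name AS server_trigger_name,
       st.is_disabled
FROM sys.server_triggers AS st;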

The Benefits Of DML Triggers

Kendra Little tells us a tale of trigger value:

When I first edged my way into a Junior DBA-ish role, I worked with a complex application with many large databases. Customers loaded and configured data into a (mostly) OLTP-style database, and then that database was replicated to multiple other systems: some to publish data to an ad-serving platform, and some to transform the data for reporting.

Triggers were used extensively in these downstream systems to detect changes. It went like this:

  • Transactional replication publications were set up on the OLTP  (ish) database.

  • Transactional replication subscriptions were set up on each downstream server. A dedicated database was used for replication articles on each instance.

  • After replication was initialized, DML triggers were created on each article in the subscriber database. For each modification, the trigger would insert a narrow row into a “delta” table related to that article.

  • The “delta” tables were in their own schema, and contained row identifiers, timestamp columns, and custom indexes for processing. This enabled batches to be efficiently pulled from these tables for processing into the related system.

  • Cleanup processes periodically pulled processed rows out of the delta tables in the background (and indexes were designed to prevent the cleanup process from fighting with data processing jobs and inserts).

Read the whole thing.  There are some things that triggers can do easily which would be difficult to handle otherwise, but they can also be dangerous in the wrong hands.
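
The delta-table pattern described above reduces to something like the following sketch; the schema, table, and column names are my guesses, not Kendra’s:

-- Sketch of the delta pattern with invented names: a narrow change-tracking
-- table plus a trigger on the subscribed article. Assumes a "delta" schema.
CREATE TABLE delta.Customer_Delta (
    DeltaId     BIGINT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    CustomerId  INT       NOT NULL,                  -- row identifier from the article
    ChangedAt   DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    IsProcessed BIT       NOT NULL DEFAULT 0,
    INDEX IX_Customer_Delta_Unprocessed (IsProcessed, DeltaId)
);
GO

CREATE TRIGGER trg_Customer_Delta
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- One narrow row per affected key; a downstream job pulls unprocessed
    -- rows in batches and the cleanup process removes them later.
    INSERT INTO delta.Customer_Delta (CustomerId)
    SELECT CustomerId FROM inserted
    UNION
    SELECT CustomerId FROM deleted;
END;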
