There are two types of security in Service Broker: dialog and transport. Dialog security establishes a secure, authenticated connection between Service Broker services, or dialog endpoints. Transport security establishes an authenticated network connection between SQL Server instances, or Service Broker endpoints. Clear as mud, right? Don’t worry; these are easily mixed up by both novice and experienced Service Broker admins. To illustrate, let’s go back to our taxes scenario. You’ve completed your forms, stamped your envelope, and you’re ready to mail it in. You drop it in your nearest mailbox, and what happens next? A postal worker picks it up, it gets loaded into a truck, and it’s shipped between various sorting facilities (as you might have noticed, I have no clue how the USPS works) until it is finally delivered to the IRS by yet another postal worker. Now, those postal workers all have the authority to transport your tax return from point to point. However, they do not have the authority to open up and read your return. That’s what transport security is. The IRS agent on the other end, though, does have the authority to read your return. That’s dialog security.
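To make the distinction concrete, here’s a minimal sketch in T-SQL of where each kind of security gets configured. All of the names (endpoint, certificate, binding, service, user) are hypothetical, and this assumes certificate-based authentication:

```sql
-- Transport security: the "postal workers." An authenticated endpoint
-- carries messages between SQL Server instances on the network.
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (
        AUTHENTICATION = CERTIFICATE BrokerTransportCert,
        ENCRYPTION = REQUIRED
    );

-- Dialog security: the "IRS agent." A remote service binding ties a local
-- database user (and that user's certificate) to a remote service, so the
-- conversation itself is authenticated end to end.
CREATE REMOTE SERVICE BINDING IRSBinding
    TO SERVICE 'IRSService'
    WITH USER = IRSProxyUser, ANONYMOUS = OFF;
```

Note that the endpoint secures the hop between instances, while the binding secures the conversation regardless of how many hops it takes.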
It’s also worth noting that transport security is only needed in a distributed environment, just as you wouldn’t need to go through the USPS if the IRS agent lived with you. But that’s just weird.
This wraps up Colleen’s Service Broker series. If you do find yourself interested in Service Broker, this series is a great way to get your feet wet.
Here, we will show you how you can use the asynchronous execution mechanism offered by SQL Server Service Broker to ‘queue’ up data inside SQL Server which can then be asynchronously passed to a Python script, and the results of that Python script then stored back into SQL Server.
This is effectively similar to the external message queue pattern but has some key advantages:
- The solution is integrated within the data store, leading to fewer moving parts and lower complexity
- Because the solution is in-database, we don’t need to make copies of the data. We just need to know what data has to be processed (effectively a ‘pointer to the data’ is what we need).
Service Broker also offers options to govern the number of readers of the queue, thereby ensuring predictable throughput without affecting core database operations.
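The “pointer to the data” idea above is what keeps this lightweight: the message carries only a key, not a copy of the row. A minimal sketch of enqueuing such a message, with hypothetical service, contract, and message type names:

```sql
-- Hypothetical names throughout. The message body carries only the key of
-- the row to process; the Python-side reader looks the row up by that key.
DECLARE @handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [RequestService]
    TO SERVICE 'ProcessingService'
    ON CONTRACT [ProcessingContract]
    WITH ENCRYPTION = OFF;

-- Send just the ID (here, row 42), not the data itself.
SEND ON CONVERSATION @handle
    MESSAGE TYPE [ProcessRequest]
    (CAST(N'42' AS VARBINARY(MAX)));
```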
There are several interconnected parts here, and Arvind walks through the entire scenario.
Policy-Based Management (PBM) has a PR problem, even worse than DQS and MDS. A lot of people don’t know what it is, or if they do, they’re not sure if it’s still supported. Microsoft doesn’t even mention the feature in its list of Editions and supported features of SQL Server 2016. The PR problem is so bad, in fact, that I’ve come across community forum postings asking whether PBM still exists.
Despite its silent role, PBM is still a supported feature and appears to be available to all SQL Server editions, although I suspect it caters primarily to organizations with a large number of similar SQL Server implementations.
Microsoft introduced PBM in SQL Server 2008 to provide a policy-based system for managing one or more SQL Server instances. You can, for example, use policies to grant permissions on user databases, disable lightweight pooling, or choose an authentication mode. You can also target policies at specific SQL Server instances, databases, tables, or indexes.
Quick hits: Data Quality Services and Master Data Services have been lapped by the competition and aren’t really competitive products anymore. Policy-Based Management is still useful, and I wish it’d get some internal love to extend roles & features. Management Data Warehouse is a tire fire that you should avoid at all costs. Service Broker is useful in the right circumstances, but it needs a few big changes to become a great broker. Unfortunately, I don’t see big changes happening and so there are superior alternatives for data flow (like Apache Kafka). But read Robert’s take on these features.
Please note, there’s one thing I need to make super abundantly clear for this demo: You’d never, ever configure these components like this for production. There’s so much more to consider, like setting up RabbitMQ to use SSL, writing actual applications and services instead of PowerShell to handle message listeners, and putting more effort into Service Broker. This should serve fine as a proof of concept, but if you want to actually implement something like this, make sure you do your research and follow best practices for each component.
This is the warning label on the chainsaw that says “normally you’d grab it by the other end.” This is a great post, giving an introduction to tying Service Broker to RabbitMQ. My biases lead me to Apache Kafka over RabbitMQ in most cases, but that’s just me.
I’ve wanted to do some Event Notification testing for SQL Server deprecation events for quite some time. The thought process here is that I could send myself an alert to identify usage of SQL Server features that will be removed from the next major version (or future version) of SQL Server. I could then forward this info to development and let them take action…or not (I kid, I kid). Today is the day I finally got around to the testing. I didn’t get very far, though.
Without rehashing the basics of event notifications (this post may help if you need some context), I created an EVENT NOTIFICATION for the TRC_DEPRECATION event group. That group includes the DEPRECATION_FINAL_SUPPORT and DEPRECATION_ANNOUNCEMENT child events. I also created a QUEUE, a SERVICE, and an activation PROCEDURE (for the QUEUE). The proc is simplistic. It takes the EVENTDATA() XML data, transforms it into an HTML <table>, and emails the info to me.
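The event notification setup described above can be sketched like so. The object names are my own, but the service contract URI and the TRC_DEPRECATION event group are real:

```sql
-- Queue and service that will receive the event notification messages.
CREATE QUEUE dbo.DeprecationQueue;

CREATE SERVICE DeprecationService
    ON QUEUE dbo.DeprecationQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Server-scoped notification covering DEPRECATION_ANNOUNCEMENT and
-- DEPRECATION_FINAL_SUPPORT via the TRC_DEPRECATION event group.
CREATE EVENT NOTIFICATION DeprecationNotification
    ON SERVER
    FOR TRC_DEPRECATION
    TO SERVICE 'DeprecationService', 'current database';
```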
Watch out, though: Dave discovered something quite funny when he set this up.
When an Event Notification is created, one or more conversations are created between the SQL Server database engine and a Service. I tend to think of these as “message channels”. When the related event occurs, SQL Server calls the EVENTDATA() function, returning the event information results (as a variable of type XML) to a Service. The Service in turn writes the information to a Queue. A Queue is a FIFO data structure. Conceptually it is similar to a table of rows and columns. You can SELECT from it, but you can’t directly insert or update its rows. You “delete” rows from a Queue via the RECEIVE statement.
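The RECEIVE behavior described above looks like this in practice. This is a minimal sketch with a hypothetical queue name, not Dave’s code:

```sql
-- RECEIVE both reads and removes the row, unlike SELECT, which only reads.
DECLARE @body XML;

WAITFOR (
    RECEIVE TOP (1) @body = CAST(message_body AS XML)
    FROM dbo.DeprecationQueue
), TIMEOUT 5000;  -- give up after 5 seconds if the queue is empty

-- Pull a field out of the EVENTDATA() XML payload.
SELECT @body.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname') AS EventType;
```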
Dave has a full example worked out at the link.
One of the first questions you might ask when distributing Service Broker solutions across multiple machines is “how does SQL Server know where the other service is?” And that’s where routes come in. When we distribute a Service Broker solution, we use routes to tell SQL Server the server name and endpoint port of a remote service on the network.
For example, in our taxes solution, we would create a route in the Taxpayer database that points to the IRS service, and a route in the IRS database that points to the Taxpayer service.
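Those two routes might be sketched like this, with hypothetical server names and the conventional Service Broker port:

```sql
-- In the Taxpayer database: where to find the IRS service on the network.
CREATE ROUTE IRSRoute
    WITH SERVICE_NAME = 'IRSService',
         ADDRESS = 'TCP://irs-server.example.com:4022';

-- In the IRS database: the return path back to the Taxpayer service.
CREATE ROUTE TaxpayerRoute
    WITH SERVICE_NAME = 'TaxpayerService',
         ADDRESS = 'TCP://taxpayer-server.example.com:4022';
```

The SERVICE_NAME must match the remote service exactly (it’s case-sensitive), and the port must match the remote instance’s Service Broker endpoint.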
Good stuff. A big part of Service Broker’s value is its ability to communicate across servers, not just databases on the same instance.
With that setup complete, we now need to build an activation stored procedure to process the messages from the queue and perform the operations generated by Ola’s scripts. At first I thought this would be pretty straightforward, but it actually required a fair amount of additional logic to prevent poison messages and queue deactivation from occurring when multiple online index rebuilds were attempted on the same table. When this occurs the engine raises Error 1912:
Could not proceed with index DDL operation on %S_MSG ‘%.*ls’ because it conflicts with another concurrent operation that is already in progress on the object. The concurrent operation could be an online index operation on the same object or another concurrent operation that moves index pages like DBCC SHRINKFILE.
which causes the XACT_STATE() of the operation to become –1, which only allows the transaction processing the command to be rolled back, leading to a poison message within Service Broker. To mitigate that, I had to add a check against sys.dm_tran_locks for any ALTER INDEX command to identify whether a conflicting lock is currently being held. If one is, we simply requeue the message and increment the retry count held within the message, using a WAITFOR to incrementally back off from attempting to run the command each time it encounters a conflicting lock. Since Ola’s scripts log the command type, database name, object name, and command to be executed to the CommandLog table in the master database, all we need to queue in our message is the ID of the command to be processed by Service Broker; we can get the rest of the pertinent information from the CommandLog table directly. As part of each task’s execution, our activation stored procedure will also update the StartTime and EndTime of the operation for historical trending of execution durations.
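The conflict check and incremental back-off could be sketched roughly as follows. This is my own hedged reconstruction, not Jonathan’s actual code; the table name and retry handling are hypothetical:

```sql
-- Assumed to run inside the activation proc before executing ALTER INDEX.
DECLARE @RetryCount INT = 3;  -- carried inside the requeued message
DECLARE @ObjectId   INT = OBJECT_ID(N'dbo.SomeTable');  -- hypothetical target

IF EXISTS (
    SELECT 1
    FROM sys.dm_tran_locks
    WHERE resource_type = 'OBJECT'
      AND resource_associated_entity_id = @ObjectId
      AND request_mode = 'Sch-M'  -- a competing DDL/index operation
)
BEGIN
    -- Incremental back-off: wait longer each retry (10s, 20s, 30s, ...).
    DECLARE @Delay CHAR(8) =
        CONVERT(CHAR(8), DATEADD(SECOND, @RetryCount * 10, 0), 108);
    WAITFOR DELAY @Delay;
    -- ... then SEND the message back to the queue with @RetryCount + 1 ...
END
```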
This is a clever use of Service Broker, and I’m glad Jonathan shared his code. Also check out his Pluralsight course for another good use of Service Broker.
The other way to automate this stored procedure is by attaching it to the queue itself so that Service Broker directly executes the procedure, in a process called activation. With activation, Service Broker starts an application whenever there is work to do (e.g., when messages are in the queue). When we enable activation for a queue, SQL Server spawns a queue monitor process to keep an eye on the queue and check every few seconds whether there are messages available for processing. There are two types of activation:
Internal activation – A stored procedure like the one above is attached to the queue to process messages automatically as they arrive.
External activation – An external program is notified that messages need to be processed.
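Internal activation is configured on the queue itself. A minimal sketch, with hypothetical queue and procedure names:

```sql
-- Attach an activation procedure to the queue. Service Broker launches up
-- to MAX_QUEUE_READERS concurrent copies of the proc as messages arrive.
ALTER QUEUE dbo.TargetQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.ProcessMessage,
    MAX_QUEUE_READERS = 5,
    EXECUTE AS OWNER
);
```

MAX_QUEUE_READERS is also the throttle mentioned earlier: it caps how many readers can work the queue at once, which keeps throughput predictable.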
Colleen focuses on internal activation, which calls a stored procedure for each message received. Check her post out.
This type of situation, a message that can never be processed successfully, is known as a poison message. The name kind of makes it sound like there’s a problem with the message itself. And there might be. Perhaps the message format is wrong for what the receiving code was expecting. But maybe the problem is with the receiving code itself. Regardless of what causes the poison message, it has to be dealt with.
SQL Server has a built-in mechanism for handling poison messages. If a transaction that receives a message rolls back five times, SQL Server will disable the queue. That means all processing that depends on that queue will cease. Nice, huh? Because of this, it behooves you to make sure you include proper error handling in your message processing code. And how exactly you handle errors will depend on several factors:
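One common defensive pattern is to commit the RECEIVE even when processing fails, stashing the bad message in a table instead of letting repeated rollbacks disable the queue. A hedged sketch with hypothetical object names:

```sql
-- Receive inside a transaction; on failure, commit anyway and quarantine
-- the message rather than rolling back toward the five-rollback limit.
DECLARE @handle UNIQUEIDENTIFIER, @body XML;

BEGIN TRANSACTION;

WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @body   = CAST(message_body AS XML)
    FROM dbo.TargetQueue
), TIMEOUT 5000;

BEGIN TRY
    EXEC dbo.ProcessMessage @body;  -- your real processing logic
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Committing removes the message from the queue; log it durably so a
    -- human can inspect it later.
    INSERT INTO dbo.FailedMessages (ConversationHandle, Body, ErrorMessage)
    VALUES (@handle, @body, ERROR_MESSAGE());
    COMMIT TRANSACTION;
END CATCH
```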
Handling errors safely is a huge part of asynchronous programming.