We have an AFTER INSERT and UPDATE trigger on one of our tables. The trigger builds a JSON payload and enqueues it in a RabbitMQ system.
Today a large insert script was run on the table (over 50,000 inserts). This scenario had not been tested or accounted for and now we are having performance problems on that database.
We notice the records trickling into RabbitMQ slowly over a long period of time, even though the data has been in place for a while (since it is an AFTER trigger). It seems as if the AFTER trigger events have been queued up somehow and are very slowly working through the system.
How are the AFTER trigger events tracked for execution? Are they queued somewhere? Is there a way I can clear them out?
How to solve:
Method 1
All triggers fire within the scope of the same transaction as the INSERT statement that generated them. Therefore, if the transaction of the INSERT statement you executed ran to completion, then so did the AFTER INSERT trigger fired by that statement. In short, a trigger executes in the context of the calling transaction, and a ROLLBACK in a trigger will roll back the calling transaction.
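A minimal T-SQL sketch (table and trigger names hypothetical) illustrating that an AFTER trigger runs inside the calling transaction, so a ROLLBACK in the trigger undoes the INSERT itself:

```sql
-- Hypothetical table and trigger, for illustration only.
CREATE TABLE dbo.Demo (Id INT PRIMARY KEY);
GO
CREATE TRIGGER trg_Demo_AfterInsert ON dbo.Demo
AFTER INSERT
AS
BEGIN
    -- Rolling back here rolls back the calling INSERT's transaction too.
    ROLLBACK TRANSACTION;
END;
GO
-- This INSERT fails with "The transaction ended in the trigger."
INSERT INTO dbo.Demo (Id) VALUES (1);
GO
SELECT COUNT(*) FROM dbo.Demo;  -- 0: the INSERT was rolled back
```

The point is that the trigger is not deferred work: the INSERT does not commit until its trigger has finished.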
So my guess is that either your original INSERT is still executing (in which case it can be aborted and rolled back), or the issue lies somewhere between your trigger completing and the mechanism you're using to push the data into RabbitMQ.
You can use sp_WhoIsActive to determine whether your INSERT statement is still running and to get the SPID of the process so you can abort it and roll it back. To abort, you'd run KILL 123 (where 123 is the SPID of your INSERT).
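A sketch of those two steps, assuming sp_WhoIsActive (Adam Machanic's free stored procedure, installed separately) is available and that the database name and SPID below are placeholders:

```sql
-- List active sessions, filtered to the database in question
-- ('YourDatabase' is a placeholder).
EXEC sp_WhoIsActive @filter_type = 'database', @filter = 'YourDatabase';

-- Once you've identified the session_id (SPID) of the long-running INSERT:
KILL 123;  -- replace 123 with the actual SPID; this aborts the statement
           -- and rolls back its transaction
```

Note that KILL triggers a rollback, which for a large in-flight INSERT can itself take a significant amount of time.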
Side note: if by 50,000 inserts you mean 50,000 rows in a single INSERT statement, that is small and should be performant. If you actually mean 50,000 separate INSERT statements, that's a different story and can take much longer to complete.
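The difference matters because an AFTER trigger fires once per statement, not once per row. A sketch of the contrast (table and column names hypothetical):

```sql
-- Set-based: one statement, so the AFTER trigger fires once, with all
-- 50,000 rows visible in the "inserted" pseudo-table.
INSERT INTO dbo.Orders (OrderId, Payload)
SELECT OrderId, Payload
FROM dbo.StagingOrders;

-- Row-by-row: 50,000 statements, so the trigger fires 50,000 times,
-- building and enqueuing 50,000 payloads one at a time.
-- INSERT INTO dbo.Orders (OrderId, Payload) VALUES (1, '...');
-- INSERT INTO dbo.Orders (OrderId, Payload) VALUES (2, '...');
-- ...
```

If the trigger (or the code it calls to reach RabbitMQ) does meaningful work per firing, the row-by-row form multiplies that cost 50,000-fold.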
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.