Hi All,
I came across a nice article on triggers from SQL-Server-Performance.com by Brad McGehee. http://www.sql-server-performance.com/tips/trigger_tuning_p1.aspx
If you need to implement cascading referential integrity (such as cascading deletes) in your SQL Server databases, use cascading referential integrity constraints instead of triggers to perform the cascading delete, as they are much more efficient and can boost performance. If you have an older application (SQL Server 7.0 or earlier) that you have moved to SQL Server 2000/2005 and it used triggers to perform cascading deletes, consider removing the triggers and using cascading referential integrity instead.
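For example, a cascading delete can be declared directly on the foreign key. A minimal sketch (the table and constraint names here are just for illustration):

-- Hypothetical parent/child tables; deleting an order automatically deletes
-- its detail rows, with no trigger required.
CREATE TABLE dbo.Orders
(
    OrderID INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.OrderDetails
(
    OrderDetailID INT NOT NULL PRIMARY KEY,
    OrderID       INT NOT NULL,
    CONSTRAINT FK_OrderDetails_Orders
        FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
        ON DELETE CASCADE   -- cascading delete handled by the engine itself
);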
*****
While INSTEAD OF triggers are technically interchangeable with conventional AFTER triggers, the main reason to use an INSTEAD OF trigger is that it allows you to update certain types of views. What does this mean for performance? Assuming most of the triggers you write roll back transactions only rarely, you will want to continue using AFTER triggers, because the overhead of an INSTEAD OF trigger is higher than that of an AFTER trigger when rollbacks are rare. But if rollbacks are common (more than half the time), an INSTEAD OF trigger is the better deal, because its overhead is less than that of an AFTER trigger that rolls back. So for most triggers, stick with the conventional AFTER trigger and save INSTEAD OF triggers for updating views.
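A sketch of the view-updating use case (the view, tables, and trigger name below are invented for illustration, and the example assumes dbo.Customers and dbo.Orders already exist):

-- A view joining two tables cannot accept a plain INSERT, so an INSTEAD OF
-- trigger intercepts the insert and routes the rows to the base table.
CREATE VIEW dbo.vCustomerOrders
AS
SELECT c.CustomerID, c.CustomerName, o.OrderID, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerID = c.CustomerID;
GO

CREATE TRIGGER dbo.trg_vCustomerOrders_Insert
ON dbo.vCustomerOrders
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- The inserted pseudo-table exposes the view's columns; write them
    -- to the appropriate base table instead of the view.
    INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM inserted;
END;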
*****
SQL Server 2000/2005 allows you to somewhat control the order in which triggers are fired. I say "somewhat" because you don't have full control. You can specify which trigger fires first and which fires last, but if you have more than two triggers on a table, you can't control the order in which the others fire. So how can selecting the order in which a trigger fires help your application's performance? To optimize trigger performance, specify the trigger most likely to roll back (for whatever reason) as the first trigger to fire. This way, if the trigger does cause a rollback, only the first trigger is affected.
Let's say you have three triggers on a table, but instead of specifying the trigger most likely to roll back as the first trigger, you specify it as the last. In this case, assuming it rolls back, all three triggers have to be rolled back. But if that trigger were the first rather than the last, only one trigger, not three, would have to be rolled back. Reducing the number of triggers rolled back reduces SQL Server's overhead and boosts performance.
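The firing order is set with the sp_settriggerorder system stored procedure. A short sketch, with hypothetical trigger names on a hypothetical dbo.Sales table:

-- Fire the trigger most likely to roll back first, so a failure undoes the least work.
EXEC sp_settriggerorder
     @triggername = 'trg_Sales_ValidateUpdate',
     @order       = 'First',
     @stmttype    = 'UPDATE';

EXEC sp_settriggerorder
     @triggername = 'trg_Sales_AuditUpdate',
     @order       = 'Last',
     @stmttype    = 'UPDATE';
-- Any remaining triggers on the table fire in an undefined order between First and Last.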
*****
The amount of time that a trigger takes to run is mostly a function of the number of tables referenced in the trigger and the number of rows affected by the code inside the trigger. Because of this, always try to minimize the number of tables referenced in a trigger, and minimize the number of rows being affected.
In addition, keep the code in your triggers to the very minimum to reduce overhead. This is important because triggers typically fire during INSERTs, UPDATEs, and DELETEs, all of which can be common occurrences in OLTP applications. The more code that runs in the trigger, the slower each INSERT, UPDATE, and DELETE that fires will be.
*****
If your trigger includes a WHERE clause, don't forget to include any appropriate indexes for it to use. WHERE clauses hidden in triggers are easy to forget, and, as with any other statement, indexes can significantly affect their performance.
One way to do this is to run your trigger code from Query Analyzer or Management Studio and then check the resulting execution plan. Doing so will quickly tell you if you need to add an appropriate index.
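For example, a trigger that filters or joins on a non-key column needs an index on that column. A sketch, where the table, column, and trigger names are all made up for illustration:

-- Hypothetical AFTER UPDATE trigger that flags related rows in another table.
CREATE TRIGGER dbo.trg_Orders_Update
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE s
    SET    s.NeedsReview = 1
    FROM   dbo.Shipments AS s
    JOIN   inserted      AS i ON s.OrderID = i.OrderID   -- predicate hidden inside the trigger
    WHERE  s.Status = 'Pending';
END;
GO

-- Without this index, the join and WHERE inside the trigger may scan
-- dbo.Shipments on every UPDATE of dbo.Orders.
CREATE INDEX IX_Shipments_OrderID_Status
    ON dbo.Shipments (OrderID, Status);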
*****
If you have an INSERT, UPDATE, or DELETE statement that seems to be taking longer to run than you would expect, be sure to check whether there is a trigger associated with that table. The performance problem you are seeing may well be caused by the trigger, not the data modification statement itself. Don't forget to tune trigger code just as you would any other code. Because trigger code is "hidden," many people forget about it and don't realize the performance problems it can cause.
You can use Profiler and Query Analyzer/Management Studio to find out how triggers in your database are working.
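A quick way to see whether a table has triggers at all (the table name below is just a placeholder):

-- List any triggers defined on the table you suspect.
EXEC sp_helptrigger 'dbo.Orders';

-- Or, on SQL Server 2005, query the catalog view directly.
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.Orders');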
*****
Don't use a trigger to enforce referential integrity if you have the option to use SQL Server's built-in referential integrity instead. Using SQL Server's built-in referential integrity is much faster than using a trigger to perform the same task.
*****
If you have a choice between using a trigger or a CHECK constraint to enforce rules or defaults within your SQL Server databases, you will generally want to choose the CHECK constraint, as it is faster than a trigger performing the same task.
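For example, a rule such as "quantity must be positive" can be declared directly; the table and constraint names here are illustrative:

-- Declarative rule: checked inline by the engine, no trigger needed.
ALTER TABLE dbo.OrderDetails
    ADD CONSTRAINT CK_OrderDetails_Quantity CHECK (Quantity > 0);

-- The trigger-based equivalent only runs after the row has been written,
-- and then has to roll the statement back, which is the slower path.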
*****
Try to avoid rolling back triggers because of the overhead involved. Instead of letting the trigger find a problem and roll back a transaction, catch the error before it can get to the trigger (if your code allows for it). Catching an error early (before the trigger fires) consumes far fewer server resources than letting the trigger roll back.
Catching errors before a transaction can fire a trigger can be done through code in the initiating transaction, and it sometimes can be accomplished by adding a constraint to the table to catch common mistakes. If a constraint catches an error, the trigger will not fire.
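A sketch of the idea, using hypothetical procedure, table, and column names: validate in the initiating code (or let a constraint reject the row) so the trigger never has anything to roll back.

-- Check the business rule up front instead of letting a trigger discover
-- the problem and roll back the whole transaction.
CREATE PROCEDURE dbo.usp_AddOrderDetail
    @OrderID  INT,
    @Quantity INT
AS
BEGIN
    SET NOCOUNT ON;

    IF @Quantity <= 0 OR NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderID = @OrderID)
    BEGIN
        RAISERROR('Invalid order detail.', 16, 1);
        RETURN;   -- nothing is written, so no trigger fires and no rollback occurs
    END;

    INSERT INTO dbo.OrderDetails (OrderID, Quantity)
    VALUES (@OrderID, @Quantity);
END;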
*****
Sometimes, for performance reasons, it is necessary to maintain denormalized data. For example, you might need to maintain derived data (such as cumulative data), in a table because it is too time consuming to calculate it on the fly from within SELECT statements. One way to easily maintain denormalized data is to use triggers. For example, every time a new sale is added to a Sales table, a trigger could fire, adding the value of the sale to a SalesTotal table.
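A sketch of that SalesTotal example; only the two table names come from the tip, so the column names here are assumptions:

-- Maintains a running total in dbo.SalesTotal each time rows are added to dbo.Sales.
CREATE TRIGGER dbo.trg_Sales_Insert
ON dbo.Sales
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.SalesTotal
    SET    TotalAmount = TotalAmount
                         + ISNULL((SELECT SUM(SaleAmount) FROM inserted), 0);
END;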
*****
The code inside an UPDATE trigger runs every time its related table is updated. In most UPDATE triggers, the code only cares about certain columns, not all of them. Because of this, it is pointless (and a waste of SQL Server resources) to run all of the code in the trigger if the column or columns you are interested in have not been updated. In other words, even if only a column you are not interested in is updated, the UPDATE trigger will still fire and run its code.
To help reduce the unnecessary running of code in an UPDATE trigger, you can take advantage of one of two functions: UPDATE() (available in SQL Server 2000/2005) or COLUMNS_UPDATED() (available in SQL Server 7.0 and 2000/2005). Either function can be used to test whether a particular column you are interested in has changed, so you can write your trigger to run its code only when that column has changed and skip it otherwise. This reduces the amount of work the trigger needs to do, boosting the overall performance of your database.
The UPDATE() function is used to check only one column at a time. The COLUMNS_UPDATED() function can be used to check multiple columns at a time.
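A short sketch of the UPDATE() pattern, assuming a hypothetical Price column on a hypothetical dbo.Products table:

-- Only do the expensive work if the Price column was part of the UPDATE statement.
CREATE TRIGGER dbo.trg_Products_Update
ON dbo.Products
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF UPDATE(Price)   -- true when Price appears in the SET list of the UPDATE
    BEGIN
        UPDATE h
        SET    h.LastPriceChange = GETDATE()
        FROM   dbo.ProductHistory AS h
        JOIN   inserted           AS i ON h.ProductID = i.ProductID;
    END;

    -- COLUMNS_UPDATED() returns a bitmask covering all columns, which is the
    -- option to reach for when you need to test several columns at once.
END;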
Friday, September 4, 2009