Optimizing the Distribution clean up job

12/04/2015

In some environments the distribution cleanup job, “Distribution clean up: distribution”, may fail to complete due to deadlocks, or may take a long time to complete due to contention, typically with the log reader’s writer thread inserting commands into the MSrepl_commands table. I have seen posts suggesting increasing the batch size specified in the TOP (N) clause of the DELETE loop from 2000 to a larger number in order to improve the performance of the cleanup process. The DELETE command (see the code snippet below) is executed in procedure sp_MSdelete_publisherdb_trans (sp_MSdistribution_cleanup –> sp_MSdistribution_delete –> sp_MSdelete_publisherdb_trans) WHILE...
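The batched DELETE pattern the excerpt refers to can be sketched as follows. This is a minimal illustration, not the exact code of sp_MSdelete_publisherdb_trans; the WHERE clause, the @cutoff_seqno variable, and the lock hint are illustrative assumptions:

```sql
-- Sketch of the TOP (N) delete loop used by the cleanup procedure.
-- @cutoff_seqno is hypothetical: the real procedure derives the cutoff
-- from MSrepl_transactions and the distribution retention settings.
DECLARE @batch_size int = 2000;   -- the default batch size discussed above
DECLARE @cutoff_seqno varbinary(16);
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (@batch_size)
    FROM MSrepl_commands
    WHERE xact_seqno <= @cutoff_seqno;   -- delivered commands past retention

    SET @rows = @@ROWCOUNT;   -- loop ends when a batch deletes nothing
END
```

Raising @batch_size reduces the number of loop iterations, but each larger batch holds locks longer, which is why it can worsen contention with the log reader’s writer thread.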

My favorite Profiler trace template

08/04/2015

SQL Server Profiler is installed with 8 default trace templates that you can choose from. You can also add your own template by selecting the events and columns you want; you will then have to start and stop the trace, which allows you to save your selection as a new trace template that can be reused. This is convenient if there is a particular set of events and columns that you prefer, and it saves you the time of going through the events/columns selection over and over again. Here is my favorite trace template that can be...

Get the most of SQL Trace

07/04/2015

SQL Trace is a SQL Server Database Engine technology. Many DBAs are not aware of, or get confused about, that important fact and tend to mix it up with SQL Server Profiler, which is actually a .NET application (and a great one) that uses system stored procedures exposing the functionality of SQL Trace. What’s the big deal, and why is it important to be aware of that fact and distinguish between the two? It is because SQL Trace exposes 2 Trace I/O Providers which send the data to the final destination that we can work with: the File Provider and the...
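To make the Profiler/SQL Trace distinction concrete, here is a hedged sketch of creating a server-side trace directly with the documented sp_trace_* system procedures, writing through the File Provider. The file path and the chosen event/column IDs are example assumptions:

```sql
-- Server-side trace via SQL Trace system procedures (no Profiler involved).
DECLARE @traceid int, @maxfilesize bigint = 50, @on bit = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is added automatically.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\my_trace', @maxfilesize;

EXEC sp_trace_setevent @traceid, 10, 1,  @on;  -- RPC:Completed / TextData
EXEC sp_trace_setevent @traceid, 10, 13, @on;  -- RPC:Completed / Duration

EXEC sp_trace_setstatus @traceid, 1;           -- 1 = start the trace
```

Profiler builds calls like these behind the scenes, but it consumes the trace through the rowset provider rather than the file provider, which is part of why the distinction matters.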

Get row count and size for replicated tables in Transactional Replication

28/03/2015

Here is a query to get the row count and size for your articles. Note that you can omit the last INNER JOIN (aliased as repl) and then work on all tables, not just replicated tables. If a table belongs to more than a single publication, it will be returned more than once in the result set. If you don’t care about the publication name, then you can omit the last INNER JOIN and comment in the last section that queries sysarticles to limit the query to replicated tables only.   USE published_db; SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL...
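The full query is truncated above, so here is a sketch of the idea under assumptions: row count and reserved size come from sys.dm_db_partition_stats, and the join to the replication metadata table sysarticles restricts the result to published tables. The database name is hypothetical, and this is not necessarily the author’s exact query:

```sql
USE published_db;  -- hypothetical published database name
SET NOCOUNT ON;

SELECT
    OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
    OBJECT_NAME(ps.object_id)        AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb  -- 8 KB pages -> MB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN dbo.sysarticles AS repl     -- omit this join to cover all tables
    ON repl.objid = ps.object_id
GROUP BY ps.object_id;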

Get current executing jobs

26/03/2015

Here is a useful query to see all the jobs that are currently running, eliminating the need to use the UI for that. SELECT DISTINCT j.name, ISNULL(a.last_executed_step_id, 0) + 1 AS current_running_step, s.step_name, start_execution_date, GETDATE() AS , DATEDIFF(minute, start_execution_date, GETDATE()) AS elapsed_minutes_total FROM ...
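The query above is truncated (and one column alias was lost in extraction), so here is a reconstruction of the idea under assumptions, built on msdb job metadata: a job is running when its row in the latest agent session has a start time but no stop time. The query_time alias and the exact joins are illustrative:

```sql
-- Sketch: currently executing SQL Agent jobs, from msdb metadata.
SELECT DISTINCT
    j.name,
    ISNULL(a.last_executed_step_id, 0) + 1 AS current_running_step,
    s.step_name,
    a.start_execution_date,
    GETDATE() AS query_time,   -- hypothetical alias; elided in the excerpt
    DATEDIFF(minute, a.start_execution_date, GETDATE()) AS elapsed_minutes_total
FROM msdb.dbo.sysjobactivity AS a
INNER JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = a.job_id
LEFT JOIN msdb.dbo.sysjobsteps AS s
    ON s.job_id = a.job_id
   AND s.step_id = ISNULL(a.last_executed_step_id, 0) + 1
WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;
```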

Optimize your distribution database in Transactional Replication – Part 3

In this post I show a way to overcome the contention typically experienced between the log reader agent and the “Distribution clean up: distribution” job, both accessing the busiest table in the transactional replication infrastructure, MSrepl_commands, which serves as a queue table for the commands to be delivered. The log reader’s writer thread writes new commands to the table while the cleanup job reads from and deletes from it. In a busy environment they often block each other, slowing down both processes, and in some cases they also run into deadlocks where the victim...

Optimize your distribution database in Transactional Replication – Part 2

24/03/2015

In this post I present indexes that I typically add to a newly created distribution database to reduce IO and improve performance. Before we go on, note that adding indexes on MSrepl_commands and MSrepl_transactions can lead to deadlocks due to the high activity around these tables; that of course depends on the specific environment. The indexes I present here are safe in my experience and provide benefit on busy servers.   MSrepl_commands (one index) -- MSrepl_commands CREATE NONCLUSTERED INDEX ON .( ASC, ASC, ASC) INCLUDE...

Optimize your distribution database in Transactional Replication – Part 1

When thinking about optimizing the distribution database, the very first and most basic thing that comes to mind is to check the setting of the "immediate_sync" property of the publication. Having "immediate_sync" set to true means that delivered commands are not deleted from the distribution database by the next run of the cleanup job (Distribution clean up: distribution), but only when the distribution retention period has been crossed. That period defaults to 3 days (72 hours) and can be modified using sp_changedistributiondb, see the example below. This means that on a busy server the distribution size will grow...
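Changing the retention period mentioned above looks like this. A hedged example: the 48-hour value and the database name are illustrative, and sp_changedistributiondb is run at the Distributor:

```sql
-- Lower the maximum distribution retention from the default 72 hours.
EXEC sp_changedistributiondb
    @database = N'distribution',       -- the distribution database name
    @property = N'max_distretention',  -- maximum retention period, in hours
    @value    = N'48';                 -- illustrative value
```

Delivered commands older than max_distretention become eligible for deletion by the next run of the cleanup job.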

Use sp_dbfiles to easily get all database file properties

23/03/2015

In this post I share a stored procedure (coded by a colleague, DBA Boaz Goldstein) which I find very useful in my regular daily tasks. I simply use it in the context of the database that I am interested in, as follows: -- Get database information for the current database USE my_db; EXEC sp_dbfiles;   But given the input parameters of the procedure, @dblist and @drivelist, you can also use it to get information for a list of databases or all databases (and the same applies for drives), as follows: -- List of databases EXEC sp_dbfiles @dblist = 'master; model; msdb'; --...

Get the heaviest queries with missing indexes

20/03/2015

Here is a cool method to improve database performance. We can get the heaviest queries with a missing index in their execution plan by querying sys.dm_exec_query_stats. The query below does just that by joining (using the APPLY operator) sys.dm_exec_sql_text to get the query text and sys.dm_exec_query_plan to get the execution plan, and then using XQuery’s exist() method to search for a missing index in the XML plan. This is a powerful tool which I find very efficient, but don’t get too excited adding indexes all over the place. As always, when adding indexes try...
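Since the post’s query is truncated, here is a sketch of the technique it describes. The TOP (20) cutoff and the total_worker_time ordering are assumptions; the showplan XML namespace and the exist() test for a MissingIndex element are the documented mechanics:

```sql
-- Heaviest cached queries whose plan reports a missing index.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    qs.total_worker_time,
    qs.execution_count,
    st.text       AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//MissingIndex') = 1     -- XQuery exist() on the plan XML
ORDER BY qs.total_worker_time DESC;
```

Note this only sees plans still in the cache, so results are skewed toward recently and frequently executed queries.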