Perform a count comparison on replicated tables between the publisher and subscriber – enhanced

Monday, April 23, 2012

Here is an improvement to the code posted in an older post, Perform a count comparison on replicated tables between the publisher and subscriber. The code introduced in this post also generates (prints) the INSERT commands required to close the diff/gap between the subscriber and publisher. The script is designed to be executed on the subscriber, and you will have to edit 2 parameters: @LinkedServer and @database, which represent the linked server used for accessing the publisher and the published database. ...
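The basic idea can be sketched as follows; the linked server name and dbo.Orders are placeholder assumptions, not the actual script from the post:

```sql
-- Sketch: compare row counts for one published table, publisher vs. subscriber.
-- Run on the subscriber. MYPUB and dbo.Orders are illustrative names only.
DECLARE @LinkedServer sysname = N'MYPUB',       -- linked server to the publisher
        @database     sysname = N'PublishedDB'; -- the published database

DECLARE @sql nvarchar(max) = N'
SELECT pub.cnt AS publisher_rows,
       sub.cnt AS subscriber_rows,
       pub.cnt - sub.cnt AS diff
FROM (SELECT COUNT(*) AS cnt
      FROM ' + @LinkedServer + N'.' + @database + N'.dbo.Orders) pub
CROSS JOIN
     (SELECT COUNT(*) AS cnt FROM dbo.Orders) sub;';

EXEC sp_executesql @sql;
```

The full script repeats this comparison per replicated table and prints the INSERT commands for the missing rows.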

Modify check constraints to NOT FOR REPLICATION

Sunday, December 4, 2011

Here is a script to modify the database schema and define all check constraints as NFR (NOT FOR REPLICATION). The script does not execute anything; it only generates the commands. NFR is required in 2-way transactional replication and in P2P replication. In 1-way transactional replication NFR is not required, but it is good practice to have the subscriber’s schema set as NFR. For additional information regarding NFR: /* 1....
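A minimal sketch of the generation query (not the post's exact script): since a check constraint's NFR flag cannot be altered in place, the generated command drops and re-adds each constraint with NOT FOR REPLICATION.

```sql
-- Sketch: generate (not execute) commands that redefine every check
-- constraint as NOT FOR REPLICATION.
SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' DROP CONSTRAINT ' + QUOTENAME(cc.name) + ';'
     + ' ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' WITH CHECK ADD CONSTRAINT ' + QUOTENAME(cc.name)
     + ' CHECK NOT FOR REPLICATION ' + cc.definition + ';' AS cmd
FROM sys.check_constraints cc
INNER JOIN sys.tables t ON t.object_id = cc.parent_object_id
WHERE cc.is_not_for_replication = 0;  -- only constraints not yet NFR
```

Review the generated commands before running them, since dropping and re-adding a constraint briefly removes its protection.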

Define identity columns as NOT FOR REPLICATION

Monday, October 31, 2011

In SQL Server 2000 there was no way other than the painful, costly method of creating the table under a temporary name while defining the identity column as NFR (NOT FOR REPLICATION), moving the data, dropping the original table, renaming the new table and recreating all other objects related to the table, such as constraints, triggers, etc. However, SQL Server 2005 introduced the system procedure sp_identitycolumnforreplication, which allows you to achieve the same result without the data movement process described above. -- Mark Ident column as NOT FOR REPLICATION for...
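A usage sketch, with dbo.Customers as a placeholder table name:

```sql
-- Mark the identity column of a table as NOT FOR REPLICATION
-- without rebuilding the table (SQL Server 2005 and later).
-- dbo.Customers is a placeholder; substitute your own table.
DECLARE @object_id int = OBJECT_ID(N'dbo.Customers');

EXEC sys.sp_identitycolumnforreplication
     @object_id = @object_id,
     @value     = 1;  -- 1 = NOT FOR REPLICATION on, 0 = off
```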

Transactional Replication – Transactions per hour

Monday, March 14, 2011

  Knowing what goes on inside the distribution database can be helpful for various reasons. Here is a script that groups transactions per hour and per database.

-- Transactions per hour
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

DECLARE @StartTime datetime, @EndTime datetime;
SELECT @StartTime = DATEADD(minute, -60, CURRENT_TIMESTAMP), @EndTime = CURRENT_TIMESTAMP;

SELECT
     p.publisher_db
    ,p.publication
    ,DATEPART(Hour, t.entry_time) AS [Hour]
    ,COUNT(*) AS cnt
FROM MSrepl_transactions t
INNER JOIN MSrepl_commands c ON t.xact_seqno = c.xact_seqno AND t.publisher_database_id = c.publisher_database_id
INNER JOIN MSpublisher_databases d ON d.id = t.publisher_database_id
INNER JOIN MSpublications p ON p.publisher_db = d.publisher_db
WHERE 1=1
--AND t.entry_time > DATEADD (Hour, -1,...

Monitor Merge Replication Triggers

Tuesday, October 12, 2010

When it comes to monitoring production systems, I usually try to think in the direction of "what else could possibly go wrong here?" (and no, I am not pessimistic but realistic). This leads me to new ideas of what else can be monitored, and of how to improve on the current situation and prevent future failures. Merge replication is trigger based; when a table is added to merge replication, the Snapshot Agent creates 3 DML triggers (for INSERT, UPDATE and DELETE) when generating the snapshot. The triggers capture the DML statements executed against the table and insert...
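One simple monitoring idea along these lines is checking that the 3 merge triggers still exist on every published table; a sketch, assuming the conventional MSmerge_ins/upd/del trigger-name prefixes:

```sql
-- Sketch: list merge replication DML triggers per table, so a monitor
-- can alert when a published table no longer has all 3 of them.
SELECT OBJECT_NAME(tr.parent_id) AS table_name,
       tr.name                   AS trigger_name
FROM sys.triggers tr
WHERE tr.name LIKE 'MSmerge[_]ins[_]%'
   OR tr.name LIKE 'MSmerge[_]upd[_]%'
   OR tr.name LIKE 'MSmerge[_]del[_]%'
ORDER BY table_name, trigger_name;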

Monitor Identity Ranges in Merge Replication

Saturday, September 18, 2010

  Merge replication has a built-in identity range management mechanism that automatically assigns identity values to identity columns of tables participating in the replication. The identity ranges are stored in the published database in the table MSmerge_identity_range, and the history of the range allocations is stored in the distribution database in the table MSmerge_identity_range_allocations. For a more detailed explanation check out this post. This mechanism works well, but when things go wrong you may be in deep trouble, because the severity of such an incident is typically high, resulting in INSERT statements failing and downtime of a production system. If you see...
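A starting point for monitoring is simply inspecting the ranges; a sketch, with the column list assumed from the MSmerge_identity_range system table:

```sql
-- Sketch: inspect current identity ranges in a merge-published database.
-- Run in the published database; column names assumed from
-- MSmerge_identity_range.
SELECT r.artid,
       r.range_begin, r.range_end,            -- current allocated range
       r.next_range_begin, r.next_range_end,  -- standby range
       r.max_used
FROM dbo.MSmerge_identity_range r
WHERE r.is_pub_range = 0;
```

A monitor could compare max_used against range_end and alert before a range is exhausted.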

Peer to Peer Transactional Replication using Pull Subscribers

Friday, January 8, 2010

While the UI in SSMS (SQL Server Management Studio) supports setting up a P2P topology using push subscribers only, it is actually possible to set up a P2P topology that uses pull subscribers; this can be done via scripts instead of the UI. Push subscribers provide easier management and are supported by the UI, while pull subscribers perform better. As long as the network connecting the peers is a LAN, the improved performance provided by pull is not really a factor, but if the connectivity is over a WAN it can have a noticeable impact, depending on several factors such...
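The scripted setup can be sketched roughly as follows; server, database and publication names are placeholders, and the exact parameter values would come from the full post:

```sql
-- Sketch: registering one peer as a pull subscriber (names are placeholders).

-- At the publisher (PEER1), register the subscription as pull:
EXEC sp_addsubscription
     @publication       = N'P2P_Pub',
     @subscriber        = N'PEER2',
     @destination_db    = N'PeerDB',
     @subscription_type = N'pull',
     @sync_type         = N'replication support only';

-- At the subscriber (PEER2), create the pull subscription and its agent job:
EXEC sp_addpullsubscription
     @publisher    = N'PEER1',
     @publisher_db = N'PeerDB',
     @publication  = N'P2P_Pub';

EXEC sp_addpullsubscriptionagent
     @publisher    = N'PEER1',
     @publisher_db = N'PeerDB',
     @publication  = N'P2P_Pub',
     @distributor  = N'PEER1';
```

Repeat the pair of steps for each peer in the topology.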

Restoring a backup set of a published database under a new name removes replication settings from the published database

Sunday, October 18, 2009

Recently, together with Meir Dudai, a database expert from Valinor with expertise in replication, we found the answer to a situation we had faced, which remained unresolved for 4 weeks until 3 days ago. The problem was that 2 pull agents of a 2-way transactional replication had disappeared, causing the replication queue to grow, which in turn caused the monitor that watches the replication queue to alert. When I looked at the UI of SSMS I could not see the pull subscriber jobs, so I looked at msdb.dbo.sysjobs, but none of the 2 jobs were...

Replication Performance and the @status parameter in sp_addarticle

Monday, March 30, 2009

When replicating data over a WAN using a push subscription in transactional replication, there is one parameter related to the article’s properties that has a significant impact on the transaction delivery rate: the @status parameter defined with sp_addarticle. The tests I have done (see the table below) show that if your replication is configured with @status set to 8, you should gain a noticeable improvement in performance by setting a value of 16 or 24 instead. To modify the article’s property use sp_changearticle, and to view the article’s properties use sp_helparticle. This is a copy of the @status in...
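A sketch of setting the parameter at article creation time; publication and article names are placeholders (8 = include column names in INSERT statements, 16 = use parameterized statements, 24 = both):

```sql
-- Sketch: add an article with @status = 24 (16 + 8).
-- N'MyPublication' and N'Orders' are placeholder names.
EXEC sp_addarticle
     @publication   = N'MyPublication',
     @article       = N'Orders',
     @source_object = N'Orders',
     @status        = 24;

-- Review the current value (the status column of the result set):
EXEC sp_helparticle
     @publication = N'MyPublication',
     @article     = N'Orders';
```

For an existing article, sp_changearticle changes the property, but requires dropping to the procedure's own property/value syntax rather than the numeric bitmask.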

Boost up replication performance with SQL Server 2008

Monday, March 9, 2009

A couple of months ago I came across a Microsoft white paper that talks about a performance improvement that can be gained by using SQL Server 2008 on top of Windows Server 2008 when replicating data over a WAN (Wide Area Network). The paper describes an improvement made in Windows Server 2008 to the TCP stack; SQL Server 2008, as an application, benefits from these changes in the communication stack exposed by the operating system. The boost in performance is mainly derived from the fact that the TCP Receive Window is now...