Get row count and size for replicated tables in Transactional Replication


Here is a query to get the row count and size for your replicated articles. Note that you can omit the last INNER JOIN (aliased as repl), which queries sysarticles, and then work on all tables, not just replicated ones. If a table belongs to more than one publication it will appear more than once in the result set; if you don't care about the publication name, the same INNER JOIN can be commented out to restrict the query to replicated tables only.   USE published_db; SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL...
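Since the full query is truncated above, here is a minimal sketch of the general pattern, not the author's exact query (all column choices and the `published_db` name are assumptions): row counts come from sys.partitions, sizes from sys.allocation_units, and the join to dbo.sysarticles in the published database restricts the result to replicated tables.

```sql
-- Sketch only: row count and size per replicated table (assumed names,
-- not the exact query from the post)
USE published_db;

SELECT  t.name AS table_name,
        SUM(CASE WHEN p.index_id IN (0, 1) THEN p.rows ELSE 0 END) AS row_count,
        SUM(au.total_pages) * 8 / 1024 AS size_mb
FROM    sys.tables t
JOIN    sys.partitions p        ON p.object_id = t.object_id
JOIN    sys.allocation_units au ON au.container_id = p.partition_id
JOIN    dbo.sysarticles repl    ON repl.objid = t.object_id  -- comment out to include all tables
GROUP BY t.name
ORDER BY size_mb DESC;
```

Commenting out the sysarticles join returns every user table, matching the note above.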

Get current executing jobs


Here is a useful query to see all the jobs that are currently running, eliminating the need to use the UI for that. SELECT DISTINCT ,ISNULL(a.last_executed_step_id, 0) + 1 AS current_running_step ,s.step_name ,start_execution_date ,GETDATE() AS ,DATEDIFF(minute, start_execution_date, GETDATE()) AS elapsed_minutes_total FROM ...
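The fragment above loses some names to truncation; a working version of the same idea (my column aliases, not necessarily the author's) can be assembled from msdb.dbo.sysjobactivity for the latest Agent session:

```sql
-- Sketch: currently executing SQL Server Agent jobs, based on the fragment above
SELECT DISTINCT
       j.name                                  AS job_name,
       ISNULL(a.last_executed_step_id, 0) + 1  AS current_running_step,
       s.step_name,
       a.start_execution_date,
       GETDATE()                               AS time_now,
       DATEDIFF(minute, a.start_execution_date, GETDATE()) AS elapsed_minutes_total
FROM   msdb.dbo.sysjobactivity a
JOIN   msdb.dbo.sysjobs j ON j.job_id = a.job_id
LEFT JOIN msdb.dbo.sysjobsteps s
       ON s.job_id = a.job_id
      AND s.step_id = ISNULL(a.last_executed_step_id, 0) + 1
WHERE  a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND  a.start_execution_date IS NOT NULL
  AND  a.stop_execution_date IS NULL;
```

The filter on the latest session_id matters: sysjobactivity keeps rows from previous Agent sessions, which would otherwise show long-finished runs as active.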

Optimize your distribution database in Transactional Replication – Part 3

In this post I show a way to overcome the contention typically experienced between the Log Reader Agent and the "Distribution clean up: distribution" job, both of which access the busiest table in the transactional replication infrastructure, MSrepl_commands, which serves as a queue table for the commands to be delivered. The Log Reader's writer thread writes new commands to the table while the cleanup job reads and deletes from it; in a busy environment they frequently block each other, slowing down both processes, and in some cases they also run into deadlocks where the victim...

Optimize your distribution database in Transactional Replication – Part 2


In this post I present indexes that I typically add on a newly created distribution database to reduce IO and improve performance. Before we go on, note that adding indexes on MSrepl_commands and MSrepl_transactions can lead to deadlocks due to the high activity around these tables; that, of course, depends on the specific environment. The indexes I present here are safe in my experience and provide benefit on busy servers.   MSrepl_commands (one index) -- MSrepl_commands CREATE NONCLUSTERED INDEX ON .( ASC, ASC, ASC) INCLUDE...

Optimize your distribution database in Transactional Replication – Part 1

When thinking about optimizing the distribution database, the first and most basic thing that comes to mind is to check the setting of the "immediate_sync" property of the publication. Having "immediate_sync" set to true means that delivered commands are not deleted from the distribution database by the next run of the cleanup job (Distribution clean up: distribution) but only once the distribution retention period has been crossed. The retention period defaults to 3 days (72 hours) and can be modified using sp_changedistributiondb, see the example below. This means that on a busy server the distribution size will grow...
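The example itself is cut off above; for reference, changing the maximum retention with sp_changedistributiondb looks roughly like this (the values are illustrative, not a recommendation), run at the distributor:

```sql
-- Run on the distributor: lower the maximum retention from the 72-hour
-- default to 48 hours (illustrative value only)
EXEC sp_changedistributiondb
     @database = N'distribution',
     @property = N'max_distretention',
     @value    = 48;
```

The retention value is expressed in hours; shortening it lets the cleanup job delete delivered commands sooner, at the cost of a smaller window for re-synchronizing subscribers.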

Use sp_dbfiles to easily get all database files properties


In this post I share a stored procedure (written by a colleague DBA, Boaz Goldstein) which I find very useful in my regular daily tasks. I simply run it in the context of the database I am interested in, as follows: -- Get database information for the current database USE my_db EXEC sp_dbfiles;     Given the procedure's input parameters, @dblist and @drivelist, you can also use it to get information for a list of databases or for all databases (and the same applies for drives), as follows: -- List of databases EXEC sp_dbfiles @dblist = 'master; model; msdb'; --...

Get the heaviest queries with missing indexes


Here is a cool method to improve database performance. We can get the heaviest queries with a missing index in their execution plan by querying sys.dm_exec_query_stats. The query below does just that by joining (using the APPLY operator) sys.dm_exec_sql_text to get the query text and sys.dm_exec_query_plan to get the execution plan, and then using XQuery's exist() method to search for a missing index in the XML plan. This is a powerful tool which I find very efficient, but don't get too excited and add indexes all over the place. As always, when adding indexes try...
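A sketch of the query described above (my column choices and TOP value; the exact shape in the post may differ): the showplan XML namespace must be declared for the exist() test on the MissingIndex element to match.

```sql
-- Sketch: heaviest cached queries whose plan reports a missing index
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       st.text       AS query_text,
       qp.query_plan
FROM   sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE  qp.query_plan.exist('//MissingIndex') = 1   -- plan suggests an index
ORDER BY qs.total_worker_time DESC;
```

Ordering by total_worker_time surfaces the CPU-heaviest offenders first; you could just as well order by total_logical_reads depending on what hurts your server.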

Drop indexes not in use


Index maintenance is an ongoing task in SQL Server, simply because most databases are read-write (as opposed to read-only) and are alive: the schema keeps changing, columns are added or dropped, and the data being preserved grows over time. So performing an index tuning session periodically will in most cases improve database performance. A common index tuning session will typically target 2 aspects of index maintenance: 1. Adding missing indexes (see this post) 2. Dropping indexes that are not in use     Adding indexes requires thought and we should always...
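For the second aspect, a common starting point (not necessarily the author's script) is sys.dm_db_index_usage_stats: nonclustered indexes with no seeks, scans, or lookups, yet a high update count, are maintenance cost with no payback. Note these counters reset on every SQL Server restart, so judge them only over a representative uptime window.

```sql
-- Sketch: nonclustered indexes with no reads since the last restart
SELECT OBJECT_NAME(i.object_id)  AS table_name,
       i.name                    AS index_name,
       ISNULL(u.user_updates, 0) AS writes_maintained
FROM   sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats u
       ON u.object_id   = i.object_id
      AND u.index_id    = i.index_id
      AND u.database_id = DB_ID()
WHERE  i.type_desc = 'NONCLUSTERED'
  AND  i.is_unique = 0             -- keep constraint-backing indexes
  AND  ISNULL(u.user_seeks, 0) + ISNULL(u.user_scans, 0)
     + ISNULL(u.user_lookups, 0) = 0
ORDER BY writes_maintained DESC;
```

The LEFT JOIN matters: an index that has never been touched at all has no row in the DMV, and an inner join would hide exactly those candidates.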

Verify database backups using RestoreChecker tool


RestoreChecker is a utility that restores SQL Server backup files in order to verify that they can be successfully restored, and in addition performs data integrity checks on the restored databases, while offloading the intensive IO activity from your production servers. The utility handles full database backups and partial backups. The concept of the tool is to verify with 100 percent certainty that a backup file can actually be restored, and this can only be done using the RESTORE command; all other methods, such as RESTORE VERIFYONLY or monitoring the suspect_pages table, do not give you a 100 percent guarantee that...
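Done by hand, the guarantee described above amounts to an actual restore followed by an integrity check, roughly as below (database, logical file, and path names are placeholders, not anything the tool prescribes):

```sql
-- Restore the backup to a scratch database and check it (placeholder names)
RESTORE DATABASE VerifyTarget
FROM DISK = N'X:\backups\my_db.bak'
WITH MOVE N'my_db'     TO N'Y:\data\VerifyTarget.mdf',
     MOVE N'my_db_log' TO N'Y:\log\VerifyTarget_log.ldf',
     RECOVERY;

DBCC CHECKDB (VerifyTarget) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Running this on a non-production server is what offloads the heavy IO, which is the point of automating it.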

SQLScripter version 3.1 is now available for download


A new version of SQLScripter is now available for download. The new release is compatible with SQL Server 2014, and the main feature is the capability to script an unlimited number of servers from a single location. This was one of the most popular requests we received from existing customers, and it is now available. What does it actually mean? It means that you install the utility (takes 2 minutes) on your preferred server (a single server) and configure it to script as many servers as you wish (an additional 3 minutes). The result is that now you...