Find some text in all databases on the instance

04/05/2015

If you need to search for specific text that may exist in a stored procedure or another object in all databases on the instance, you have a few options. You can query sys.syscomments, sys.sql_modules or INFORMATION_SCHEMA.ROUTINES as in the code snippet below, but these views work at the database level, so if you need to search all databases you have to reissue the query again and again, once per database.
-- INFORMATION_SCHEMA.ROUTINES
SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE N'%some_text%';
-- sys.sql_modules
SELECT * FROM sys.sql_modules WHERE definition LIKE N'%some_text%';
-- sys.syscomments
SELECT OBJECT_NAME(id), * FROM sys.syscomments WHERE...
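The excerpt is cut off above; one common way to run such a search across every database on the instance is the undocumented sp_MSforeachdb procedure. The sketch below is an assumed illustration of that approach (the temp table name and search string are placeholders), not necessarily the exact solution from the post:

-- Hedged sketch: search sys.sql_modules in every database via sp_MSforeachdb.
-- #results and 'some_text' are illustrative placeholders.
IF OBJECT_ID('tempdb..#results') IS NOT NULL DROP TABLE #results;
CREATE TABLE #results (db_name sysname, object_name sysname, definition nvarchar(max));

EXEC sp_MSforeachdb N'
USE [?];
INSERT INTO #results (db_name, object_name, definition)
SELECT DB_NAME(), OBJECT_NAME(object_id), definition
FROM sys.sql_modules
WHERE definition LIKE N''%some_text%'';';

SELECT * FROM #results;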
no comments

Use backupset to get database backup history information

27/04/2015

The msdb database has some very useful tables that I use quite often in my day-to-day work. The most popular of these by far is backupset, which provides useful information about backups and serves as a focal point of information for database backups. Here you can track unusual growth in a backup’s size, or an increased duration that points to degraded backup performance. You can also see the backup destination path, the user who issued the backup command, whether the backup is damaged and more… I use this snippet which provides...
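The snippet itself is truncated in the excerpt; a minimal sketch of that kind of backup history query, assuming a join to backupmediafamily for the destination path (not the post's exact snippet), might look like this:

-- Hedged sketch of a backup history query against msdb.
SELECT bs.database_name,
       bs.type,                                   -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
       bs.backup_size / 1048576.0 AS backup_size_mb,
       bs.user_name,
       bs.is_damaged,
       bmf.physical_device_name                   -- backup destination path
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY bs.backup_start_date DESC;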
no comments

Improve SQL Server backup time

26/04/2015

There are a few ways you can try to improve your backup time when issuing the SQL Server BACKUP command. I tested them all a while ago and unfortunately did not document my tests, but the one option that boosts backup time most significantly when dealing with large databases is backing up to multiple files. It allows SQL Server to get the most out of your disk(s) and also to back up to multiple drives, though this is less common as you will typically have a single drive for backups. I experienced a significant improvement of 40% in...
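For reference, backing up to multiple files is done by listing several DISK targets in a single BACKUP command; the database name and paths below are placeholders, not the post's actual example:

-- Hedged sketch: striped backup to multiple files (placeholder names and paths).
BACKUP DATABASE MyDatabase
TO  DISK = N'E:\Backup\MyDatabase_1.bak',
    DISK = N'E:\Backup\MyDatabase_2.bak',
    DISK = N'E:\Backup\MyDatabase_3.bak',
    DISK = N'E:\Backup\MyDatabase_4.bak'
WITH COMPRESSION, STATS = 10;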
2 comments

Purge SQL Server backups using the forfiles command

25/04/2015

The sp_PurgeFiles stored procedure (coded by a colleague DBA, Boaz Goldstein) offers a reliable way to purge old backup files. Its advantage is that it uses the Windows forfiles command, which is available on all servers as part of the installed OS, making it a simple, reliable and portable solution. To implement it, just create the sp_PurgeFiles stored procedure and copy and edit the sample execution below into your backup job as an additional job step. Create the sp_PurgeFiles stored procedure and set it as a system stored procedure: USE ; SET ANSI_NULLS...
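The procedure body is not shown in the excerpt; the general pattern is to shell out to forfiles and delete files older than a retention cutoff. A minimal sketch of that idea, assuming xp_cmdshell is enabled and using placeholder path, file mask and retention values (not the actual sp_PurgeFiles code):

-- Hedged sketch of the forfiles purge pattern.
-- Deletes *.bak files older than 7 days from a placeholder folder.
DECLARE @cmd varchar(1000) =
    'forfiles /P "E:\Backup" /M *.bak /D -7 /C "cmd /c del @path"';
EXEC master..xp_cmdshell @cmd;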
one comment

Optimizing Distribution clean up job

12/04/2015

In some environments the distribution cleanup job, “Distribution clean up: distribution”, may fail to complete due to deadlocks, or may take a long time to complete due to contention, typically with the log reader’s writer thread inserting commands into the MSrepl_commands table. I have seen posts suggesting to increase the batch size specified in the TOP (N) clause of the DELETE loop from 2000 to a larger number in order to improve the performance of the cleanup process. The DELETE command (see the code snippet below) is executed in procedure sp_MSdelete_publisherdb_trans (sp_MSdistribution_cleanup –> sp_MSdistribution_delete –> sp_MSdelete_publisherdb_trans): WHILE...
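The snippet is cut off above; the general shape of such a batched delete loop, as an illustrative sketch of the pattern rather than the actual Microsoft procedure code, is:

-- Illustrative sketch of a TOP (N) batched delete loop run in the distribution database.
-- @max_xact_seqno is a hypothetical cutoff; the real procedure derives it from retention settings.
DECLARE @max_xact_seqno varbinary(16) = 0x0;
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (2000) dbo.MSrepl_commands      -- the 2000-row batch size mentioned above
    WHERE xact_seqno <= @max_xact_seqno;
    SET @rows = @@ROWCOUNT;
END;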

My favorite Profiler trace template

08/04/2015

SQL Server Profiler is installed with 8 default trace templates that you can choose from. In addition, you can add your own template by selecting the events and columns you want; after starting and stopping the trace you can save your selection as a new trace template that can be reused. This is convenient if there is a particular set of events and columns that you prefer, and it saves you from going through the events/columns selection over and over again. Here is my favorite trace template that can be...
no comments

Get the most of SQL Trace

07/04/2015

SQL Trace is a SQL Server Database Engine technology. DBAs are often unaware of, or confused by, that important fact and tend to mix it up with SQL Server Profiler, which is actually a .NET application (and a great one) that uses the system stored procedures exposing the functionality of SQL Trace. Why is it important to be aware of that fact and distinguish between the two? Because SQL Trace exposes 2 Trace I/O Providers which send the data to the final destination that we can work with: the File Provider and the...
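As a reference point for the system stored procedures the post alludes to, here is a minimal server-side trace sketch that writes to a file via the File Provider; the path, events and columns are placeholder choices, not the post's recommendation:

-- Hedged sketch: a minimal server-side trace using the SQL Trace procedures (File Provider).
DECLARE @traceid int, @maxsize bigint = 100;    -- 100 MB per file
EXEC sp_trace_create @traceid OUTPUT, 2, N'E:\Traces\my_trace', @maxsize;  -- 2 = file rollover

DECLARE @on bit = 1;
EXEC sp_trace_setevent @traceid, 12, 1,  @on;   -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 13, @on;   -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @traceid, 10, 1,  @on;   -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 13, @on;   -- RPC:Completed, Duration

EXEC sp_trace_setstatus @traceid, 1;            -- start the trace
SELECT @traceid AS traceid;                     -- keep this id to stop/close the trace later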
one comment

Get row count and size for replicated tables in Transactional Replication

28/03/2015

Here is a query to get the row count and size for your articles. Note that you can omit the last INNER JOIN (aliased as repl) to work on all tables, not just replicated tables. If a table belongs to more than a single publication it will appear more than once in the result set. If you don't care about the publication name, you can omit the last INNER JOIN and instead comment in the last section that queries sysarticles, so the query remains limited to replicated tables only. USE published_db; SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL...
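The query itself is truncated in the excerpt; a minimal sketch of the same idea, assuming sys.dm_db_partition_stats for the counts and sizes and dbo.sysarticles to restrict to replicated tables (not necessarily the post's exact query):

-- Hedged sketch: row count and reserved size for replicated tables, run in the published database.
USE published_db;   -- placeholder database name from the excerpt
SELECT t.name AS table_name,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.tables AS t
        ON t.object_id = ps.object_id
INNER JOIN (SELECT DISTINCT objid FROM dbo.sysarticles) AS repl
        ON repl.objid = t.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;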
no comments

Get current executing jobs

26/03/2015

Here is a useful query to see all the jobs that are currently running, eliminating the need to use the UI for that.
SELECT DISTINCT j.name
    ,ISNULL(a.last_executed_step_id, 0) + 1 AS current_running_step
    ,s.step_name
    ,start_execution_date
    ,GETDATE() AS
    ,DATEDIFF(minute, start_execution_date, GETDATE()) AS elapsed_minutes_total
FROM ...
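The FROM clause and one column alias are lost in the excerpt above; a hedged sketch of how such a query is typically built from msdb's job activity tables (assumed structure and alias names, not the post's original) is:

-- Hedged sketch: currently executing SQL Agent jobs, based on msdb job activity tables.
SELECT DISTINCT j.name
    ,ISNULL(a.last_executed_step_id, 0) + 1 AS current_running_step
    ,s.step_name
    ,a.start_execution_date
    ,GETDATE() AS now_time                     -- assumed alias
    ,DATEDIFF(MINUTE, a.start_execution_date, GETDATE()) AS elapsed_minutes_total
FROM msdb.dbo.sysjobactivity AS a
INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = a.job_id
LEFT JOIN msdb.dbo.sysjobsteps AS s
       ON s.job_id = a.job_id
      AND s.step_id = ISNULL(a.last_executed_step_id, 0) + 1
WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;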
no comments

Optimize your distribution database in Transactional Replication – Part 3

In this post I show a way to overcome the contention typically experienced between the log reader agent and the “Distribution clean up: distribution” job, both accessing the busiest table in the transactional replication infrastructure, MSrepl_commands, which serves as a queue table for the commands to be delivered. The log reader’s writer thread writes new commands to the table while the cleanup job reads and deletes from it, and in a busy environment they often block each other, slowing down both processes; in some cases they also run into deadlocks where the victim...
one comment