Use sys.dm_db_stats_properties to get statistics information


The sys.dm_db_stats_properties DMF was introduced in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 and offers a much more detailed view of statistics objects. Before this DMF there was no real way to get the information required if you wanted to update statistics manually based on the number of modifications made to a table since the last statistics update. Because the information was not available, I have seen implementations where DBAs updated statistics manually based on the last statistics modification date, but this is...
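As a minimal sketch of the idea (not the author's script), the DMF can be cross-applied to sys.stats to surface the modification counter per statistics object:

```sql
-- Sketch: statistics freshness for all user tables in the current database,
-- ordered by the number of modifications since the last update.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter   -- modifications since the last statistics update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;
```

A maintenance job could use modification_counter (e.g. relative to rows) to decide which statistics actually need a manual UPDATE STATISTICS.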

A new update to sp_helpindex2 is now released

The update fixes a bug where, on a partitioned table, the partitioning key was returned as part of the index keys even though it was not an index key. In addition, I added two new columns to the output: the first shows the data compression description and the second shows the number of partitions in the table. I also slightly modified the way the index size is calculated, which had a minor inaccuracy. To download sp_helpindex2, go to the original post here. -Yaniv

Using SQL IO to test InfiniBox performance


I was recently asked by a customer to conduct an IO test comparing three storage systems: an older EMC system that was about to be retired, a new EMC VNX 5800, and an InfiniBox F6000 by INFINIDAT, a new SAN vendor that is not yet widely known, but wait till you see the test results. On the EMC VNX 5800 I tested a LUN built on top of a pool consisting of 100 drives: 85 15k SAS and 15 SSD. The InfiniBox architecture is...
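The exact test parameters are not in the excerpt; as an illustration only (file path, duration and thread count are hypothetical), a typical SQLIO invocation for a random-read latency test looks like:

```shell
# Hypothetical SQLIO run: 5 minutes of 8 KB random reads, 4 threads,
# 8 outstanding IOs per thread, with latency statistics (-LS).
sqlio -kR -s300 -frandom -t4 -o8 -b8 -LS F:\testfile.dat
```

Comparable runs (same block size, queue depth and duration) against a LUN on each array are what make the results meaningful.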

Get row counts and size from the DMVs


Here is a useful query that lets you see the size and row count of all tables in the database at a glance. The important point is that all the data is retrieved from the DMVs without accessing the data and index pages of the tables, so no shared locks are acquired and none of the heavy IO of scanning all rows with a COUNT(*) query is generated. SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT DB_NAME() ...
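The query itself is truncated above; a sketch of an equivalent DMV-only query (not necessarily the original) can be built on sys.dm_db_partition_stats, which holds cached row and page counts per partition:

```sql
-- Sketch: row counts and reserved size per user table, straight from the DMVs.
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT  OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
        OBJECT_NAME(ps.object_id)        AS table_name,
        -- rows are counted once, from the heap (index_id 0) or clustered index (1)
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb  -- 8 KB pages -> MB
FROM sys.dm_db_partition_stats AS ps
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;
```

Because everything comes from cached metadata, the query returns instantly even on very large tables.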

Workload analysis using Distributed Replay


I have had the chance to use Distributed Replay several times in the past. On one occasion we wanted to predict the impact of changing the default isolation level from READ COMMITTED to SNAPSHOT on an OLTP database to improve concurrency, and recently I used the tool again to compare a workload between SQL Server 2012 and SQL Server 2014. The goal in this case was to predict the impact of upgrading a SQL Server 2012 instance to SQL Server 2014; we wanted to understand the impact of the new CE (Cardinality Estimator) introduced in SQL Server 2014 on our...
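As a rough sketch of the knobs involved in the two scenarios (database and table names are hypothetical), the replayed workload can be run under each configuration:

```sql
-- Scenario 1: make READ COMMITTED use row versioning before replaying the workload
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Scenario 2: run the workload under the new CE (compatibility level 120 = SQL Server 2014)
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 120;

-- ...and force an individual query back to the legacy (pre-2014) estimator
-- to compare plans side by side:
SELECT COUNT(*) FROM dbo.SomeTable
OPTION (QUERYTRACEON 9481);  -- trace flag 9481 = legacy cardinality estimator
```

Replaying the same captured workload once per configuration is what turns these settings into a before/after comparison.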

Load SQL Trace files to table


Throughout my daily DBA tasks I sometimes need to load multiple trace files into a database table for querying and analysis. For example, I may have a series of 100 files, each 100 MB in size. Loading this many files into a table as a single operation would be a light operation on high-end systems, but on other systems it may cause the transaction log to grow more than we would like. The script I share below loads the files into a table on a file-by-file basis, so referring to my example the...
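The script itself is cut off above; a sketch of the building block it relies on (paths and table name hypothetical) is fn_trace_gettable, called with 1 as the second argument so each file is read on its own rather than following rollover files:

```sql
-- First file creates the destination table.
SELECT * INTO dbo.trace_data
FROM sys.fn_trace_gettable(N'C:\Traces\mytrace_1.trc', 1);

-- Each subsequent file is appended in its own, smaller transaction,
-- keeping transaction log growth bounded per file.
INSERT INTO dbo.trace_data
SELECT * FROM sys.fn_trace_gettable(N'C:\Traces\mytrace_2.trc', 1);
```

Wrapping the INSERT in a loop over the file names gives the file-by-file behavior the post describes.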

Capture a trace for a replay of the Distributed Replay tool or SQL Profiler


The Distributed Replay tool accepts trace files as input in its first step, the Preprocess step, and generates a workload file to be used in its next step, the Replay step. Valid trace files are files that have been captured using Profiler's predefined trace template called TSQL_Replay. Using Profiler may be good enough and meet your requirements, but if you need to capture data over a period longer than just a few minutes on a busy server, the trace file is likely to become too big, and since Profiler limits you to capturing data to a single file you...
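The server-side alternative this is leading toward can be sketched with sp_trace_create, which, unlike Profiler, supports file rollover; the path and sizes below are hypothetical, and the TSQL_Replay template's events and columns still have to be added with sp_trace_setevent:

```sql
DECLARE @TraceID int;

EXEC sp_trace_create
     @traceid     = @TraceID OUTPUT,
     @options     = 2,                        -- TRACE_FILE_ROLLOVER: new file when maxfilesize is hit
     @tracefile   = N'C:\Traces\ReplayTrace', -- .trc extension is appended automatically
     @maxfilesize = 500,                      -- MB per file before rolling over
     @filecount   = 20;                       -- keep at most 20 rollover files

-- ...register the TSQL_Replay events/columns with sp_trace_setevent, then start:
EXEC sp_trace_setstatus @TraceID, 1;
```

Rollover files produced this way can be fed together to the Preprocess step.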

Find some text in all databases on the instance


If you need to search for specific text that may exist in a stored procedure or other object in all databases on the instance, you have a few options. You can query sys.syscomments, sys.sql_modules or INFORMATION_SCHEMA.ROUTINES as in the code snippet below, but these work at the database level, so if you need to search all databases you will have to reissue the query again and again, once per database.

-- INFORMATION_SCHEMA.ROUTINES
SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE N'%some_text%';
-- sys.sql_modules
SELECT * FROM sys.sql_modules WHERE definition LIKE N'%some_text%';
-- sys.syscomments
SELECT OBJECT_NAME(id), * FROM sys.syscomments WHERE...
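One way to avoid reissuing the query per database is a sketch built on the undocumented sp_MSforeachdb procedure (the search string is a placeholder):

```sql
-- Runs the sys.sql_modules search once in every database on the instance.
-- sp_MSforeachdb substitutes each database name for the [?] token.
EXEC sp_MSforeachdb N'
USE [?];
SELECT DB_NAME() AS database_name, OBJECT_NAME(object_id) AS object_name
FROM sys.sql_modules
WHERE definition LIKE N''%some_text%'';';
```

Being undocumented, sp_MSforeachdb is unsupported; a cursor over sys.databases with dynamic SQL is the supported equivalent.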

Use backupset to get database backup history information


The msdb database has some very useful tables that I use quite often in my day-to-day work. The most popular of these tables by far is backupset, which provides useful information about backups and serves as a focal point of information for database backups. Here you can track unusual growth in a backup's size, or an increased duration that points to degraded backup performance. You can see the backup destination path, the user who issued the backup command, whether the backup is damaged, and more... I use this snippet which provides...
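The author's snippet is cut off above; as an illustration of the kind of query backupset supports (not necessarily the original), joining backupmediafamily adds the destination path:

```sql
-- Sketch: recent backup history with size, duration, issuing user and destination.
SELECT TOP (50)
       bs.database_name,
       bs.type,                                  -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
       bs.backup_size / 1048576.0               AS backup_size_mb,
       bs.user_name,
       bmf.physical_device_name                  -- backup destination path
FROM msdb.dbo.backupset        AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_finish_date DESC;
```

Trending duration_sec and backup_size_mb over time is what surfaces the unusual growth and degraded performance mentioned above.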

Improve SQL Server backup time


There are a few ways you can try to improve backup time when issuing the SQL Server BACKUP command. I tested them all a while ago and unfortunately did not document my tests, but the one option that boosts backup time most significantly when dealing with large databases is backing up to multiple files. It allows SQL Server to get the most out of your disk(s) and also to back up to multiple drives, although this is less common since you will typically have a single drive for backups. I experienced a significant improvement of 40% in...
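As a sketch (database name and paths are hypothetical), striping the backup across multiple files looks like this:

```sql
-- Each DISK target gets its own writer, letting SQL Server parallelize the backup.
BACKUP DATABASE MyDb
TO  DISK = N'E:\Backup\MyDb_1.bak',
    DISK = N'E:\Backup\MyDb_2.bak',
    DISK = N'E:\Backup\MyDb_3.bak',
    DISK = N'E:\Backup\MyDb_4.bak'
WITH STATS = 10;  -- progress report every 10%
```

All of the files together form one backup set, so all of them must be supplied to the RESTORE command.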