SSAS Performance tips – Lessons learned – Part 1

May 15, 2010

2 comments

I’ve been researching a few performance issues lately on an Analysis Services 2005 instance hosting 20 (almost) identical databases, each weighing roughly 2.5GB and containing 6-7 cubes with approximately 400 partitions per cube. As you can imagine, this adds up to quite an impressive number of files stored on disk. SSAS stores everything in files, and unlike its relational relative (no pun intended), which stores hundreds of GB of data in a fairly small number of large files (I know this oversimplifies how one should plan data file allocation on SQL Server, but it’s good enough and true enough in most cases for the sake of this post), SSAS stores a single database like the ones described above in hundreds and hundreds of files. The number of files is determined by the number of partitions, dimensions, cubes and aggregations, and since we have lots and lots of those elements, we eventually have to deal with a heavily congested file system structure.
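To get a feel for how congested the file system actually is, you can simply count the files under each database folder in the SSAS Data directory. Below is a minimal Python sketch of that idea; the real Data directory path (something like `C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data`) varies per installation and is an assumption here, so the demo runs against a throwaway directory tree instead.

```python
import os
import tempfile

def count_files_per_database(data_dir):
    """Count files under each top-level folder of the SSAS Data
    directory (one folder per database)."""
    counts = {}
    for entry in os.listdir(data_dir):
        path = os.path.join(data_dir, entry)
        if os.path.isdir(path):
            total = 0
            for _root, _dirs, files in os.walk(path):
                total += len(files)
            counts[entry] = total
    return counts

# Demo on a throwaway tree that mimics the database/cube/partition
# folder nesting; names and file extensions are made up for illustration.
demo = tempfile.mkdtemp()
for db in ("Sales.0.db", "Finance.0.db"):
    part_dir = os.path.join(demo, db, "cube.0", "partition.0")
    os.makedirs(part_dir)
    for i in range(3):
        open(os.path.join(part_dir, f"data.{i}.map"), "w").close()

print(count_files_per_database(demo))
```

Pointing the same function at a real Data directory would show how many files each database contributes to the total.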

We identified that a similar instance, containing only 10 databases, performed better in both processing time and query response time. One thing I should note: on the instance with the 20 databases, only 10 were in use; the rest were there purely for historical reasons (we actually moved those 10 idle databases to a new instance – the instance that now performs much better).

The first thing we had to verify was that both instances were operating under the same server properties, on the same disk system, and with identical resources (CPU and memory) available to them. Indeed, all of these parameters were similar.

Each SSAS database carries a metadata overhead: when a database is initialized, its metadata is loaded into a non-shrinkable section of SSAS memory. If you have many databases, initialization takes longer, and if those databases are complex (i.e. contain many dimensions and partitions), initialization may become a more serious issue. However, we did not experience memory issues on the machine running the 20-database instance. Still, given that the disk was loaded with thousands of files, we figured it was best to drop all the unnecessary databases and see what happened. Once the 10 idle databases were dropped, performance on this instance became equivalent to that of the first 10-database instance.
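Dropping a database can be done from Management Studio, or by sending the standard XMLA Delete command to the instance. Here is a small sketch that just builds that command as a string (the database ID used is made up for illustration); you would run the resulting XMLA from an XMLA query window in SSMS or a command-line tool against the instance.

```python
def xmla_drop_database(database_id):
    """Build the XMLA Delete command that drops an SSAS database.
    Only the command text is produced here; nothing is executed."""
    return (
        '<Delete xmlns="http://schemas.microsoft.com/'
        'analysisservices/2003/engine">\n'
        "  <Object>\n"
        f"    <DatabaseID>{database_id}</DatabaseID>\n"
        "  </Object>\n"
        "</Delete>"
    )

# "LegacySales2008" is a hypothetical ID for one of the idle databases.
print(xmla_drop_database("LegacySales2008"))
```

Scripting the command this way makes it easy to generate a drop for each of the idle databases rather than clicking through them one by one.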

I do not have a clear-cut answer as to why this was the silver bullet we were looking for. As I said, the 10 databases we dropped were not in use in any way – not for querying and certainly not for processing. Still, dropping them and leaving only 10 “live” databases proved to be the trigger we were looking for to improve performance on this instance.

So, there you have it: make sure you do not have too many databases on a single instance, especially not “complex” databases that contain many dimensions, partitions and aggregations.

BTW – if any of my readers can shed some light on this, please do – it will be much appreciated!


(This is post number 20 for week number 24 of my SQL blog challenge)

Comments

  1. Chris Webb, May 16, 2010 at 23:00

    There was an issue with large numbers of partitions (1000+) on 2005 that I believe got fixed in 2008, which meant SSAS would regularly freeze while a particular file got loaded into memory. But without knowing more about the database design, the queries and looking at some Profiler traces it's impossible to say what's really wrong here…

  2. Sam Kane, March 21, 2011 at 16:01

    Here is this article, along with some others on SSAS performance: