Set the Slice on your SSAS Cube Partitions now!

I am pretty sure most of you have heard or read about this at least once in your SSAS career, but how many of you actually set slices on your partitions? Well, I am also guilty as charged and seldom set slices, until a recent issue.

[Image: Setting the slice on SSAS cube partitions]

Situation: Duh, that's strange!
It's a sunny day in Charlotte, and I was playing around with SQL Server Profiler to learn some performance tuning. I executed this simple MDX query against the sample Adventure Works database, for a purpose that is beyond the scope of this post:

select {[Measures].[Internet Sales Amount]} on 0,
       {[Ship Date].[Calendar].[Month].&[2008]&[4]} on 1
from [Adventure Works]
While going through the Profiler trace, I noticed something strange: three partitions (2005, 2006 and 2008) were being read, whereas I expected just one partition (2008) to be read, since my query had only April 2008 on the rows.

[Image: Multiple partitions being read in the Profiler trace]
For people who are not familiar with the Adventure Works cube: there are four partitions in the Internet Sales measure group, one for each year from 2005 to 2008, as shown below.

[Image: Partitions in the Internet Sales measure group]

I decided to take a detour from my actual purpose to investigate this issue.

Debugging: Need to dig up that grave!
I was aware that SSAS generates automatic data slices for each partition, and that this information is stored in the info.*.xml file of the partition data files (you can usually find them at C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data, followed by the cube, measure group and partition name, if you haven't changed the default settings; note that I am using SQL Server 2008 R2). I decided to have a look at these three partition data files.

[Image: Partition automatic data slices]

As seen above, there was clearly a difference between the partitions for 2005/2006 and 2008 (which have the names Internet_Sales_2001.0.prt / Internet_Sales_2002.0.prt and Internet_Sales_2004.0.prt; these names come from the partition IDs and can be seen in the ID property of the partitions).

[Image: Partition ID property]

It seemed that the auto-slices were not being created for the 2005 and 2006 partitions, while they were for the 2008 partition. This was confirmed by opening and inspecting the info.*.xml files in Notepad.

Knee-jerk Solution: Or is it?
I went into research mode on Google and soon came across a property in the msmdsrv.ini file that limits the creation of auto-slices based on the number of rows in the partition: the IndexBuildThreshold property.

[Image: Modifying the msmdsrv.ini file]

By default the value is 4096 rows, and both the 2005 and 2006 partitions had fewer than 4,000 rows. This explained why the auto-slices were not being created, so I lowered the threshold to 1000. On reprocessing the measure group, I could see that the auto-slices were now generated in the partition data files and only the correct partition was being read in the Profiler trace.

Actual Solution: The real deal!
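In short, the real fix (described below) is to define each partition's Slice property explicitly. As a sketch, the Slice of each Internet Sales partition is an MDX expression resolving to the matching year; the attribute and member names here are assumed from the standard Adventure Works 2008 R2 sample, so adjust them to your own cube:

```mdx
// Slice property for each partition, set in the partition's properties
// in BIDS or via XMLA/AMO. Member names below are illustrative,
// based on the Adventure Works 2008 R2 sample.
// Internet_Sales_2005 partition:
[Date].[Calendar Year].&[2005]
// Internet_Sales_2006 partition:
[Date].[Calendar Year].&[2006]
// Internet_Sales_2007 partition:
[Date].[Calendar Year].&[2007]
// Internet_Sales_2008 partition:
[Date].[Calendar Year].&[2008]
```

With explicit slices in place, the storage engine can eliminate the 2005 and 2006 partitions for the April 2008 query even though no auto-slices exist for them.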
I decided to gloat over my findings to my friend and colleague Javier Guillen (blog | twitter), who was sitting in the next room. When I described the problem to him, the very first question he asked was whether I had set the partition slices, and I was like: sheesh, I missed that cardinal rule, as usual. I was under the impression that as long as a partition holds a single data member (as in this case, a single year), it should just work. But I later found out that partitions for which auto-slices are not generated will be scanned anyway, unless the partition slices are explicitly defined. I went back, set the slice of each partition correctly to its respective year, and reverted the IndexBuildThreshold property to its original value of 4096.

[Image: Setting the partition slices]

Once this was done, I processed the measure group and confirmed from the trace that only the correct partitions were being read.

Reading Materials
I took some time to read and learn more about partition slices; if you made it this far, you might be interested in these too.

1) Get most out of partition slices – Mosha Pasumansky
2) SSAS Partition Slicing – Eric Jacobsen
3) Partition Slice Impact on Query Performance – Jesse Orosz


  1. shiva says:

    Thanks a ton, Jason, for the detailed post on partition slices. I was searching all over the web for it.

  2. Hi Jason,
    Hopefully this additional info might be of some use… by Chris Webb.

    Unexpected partition scans

    Even when you have configured your partitions properly it's sometimes the case that Analysis Services will scan partitions that you don't expect it to be scanning for a particular query. If you see this happening the first thing to determine is whether these extra scans are making a significant contribution to your query times. If they aren't, then it's probably not worth worrying about; if they are, there are some things to try to attempt to stop it happening.

    The extra scans could be the result of a number of factors, including:

    The way you have written MDX for queries or calculations. In most cases it will be very difficult to rewrite the MDX to stop the scans, but the following blog entry describes how it is possible in one scenario:
    The LastNonEmpty measure aggregation type may result in multiple partition scans. If you can restructure your cube so you can use the LastChild aggregation type, Analysis Services will only scan the last partition containing data for the current time period.
    In some cases, even when you've set the Slice property, Analysis Services has trouble working out which partitions should be scanned for a query. Changing the attributes mentioned in the Slice property may help, but not always. The section on Related Attributes and Almost Related Attributes in the following blog entry discusses this in more detail:
    Analysis Services may also decide to retrieve more data than is needed for a query to make answering future queries more efficient. This behavior is called prefetching and can be turned off by setting the following connection string properties: Disable Prefetch Facts=True; Cache Ratio=1
    More information on this can be found in the section on Prefetching and Request Ordering in the white paper Identifying and Resolving MDX Query Bottleneck available from
    Note that setting these connection string properties can have other, negative effects on query performance.

    You can set connection string properties in SQL Management Studio when you open a new MDX Query window. Just click the Options button on the Connect to Analysis Services dialog, then go to the Additional Connection Parameters tab. Note that in the RTM version of SQL Management Studio there is a problem with this functionality, so that when you set a connection string property it will continue to be set for all connections, even though the textbox on the Additional Connection Parameters tab is blank, until SQL Management Studio is closed down or until you set the same property differently.
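    Putting the two properties mentioned above together, the string to paste into the Additional Connection Parameters tab would look like this (property names taken verbatim from the text above):

    ```
    Disable Prefetch Facts=True;Cache Ratio=1
    ```

    As noted, these settings can have negative effects on other queries, so test them before adopting them broadly.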

  3. Michael says:

    This is what I was looking for. I tried this example and am still getting a scan of all partitions with the slice property set according to the example (when not hitting cache). Am I missing something? I am running SSAS 11.0.3000.0 in EVAL mode.


  4. I have a cube with these specifications:

    10 millions of records per day
    partition by day with usage-based aggregation
    17 dimension using hierarchies
    50 regular measures
    4 months of history data

    Unfortunately the cube is very slow. Can I partition by day and slice by month? What happens if I do that?

  5. Hi Jason,

    I have a cube with 12 partitions, and I have to add a column to it, sourced from the underlying RDBMS. The only way I know of is to update all 12 partition queries. Do we have any better method to do this?


  6. […] as a bonus, here is this little link that explains really well the point of this […]
