Year: 2012

Risk Matrix Chart in SSRS


December 30, 2012

Recently, I have been getting mails from readers asking whether I have forsaken Reporting Services for Analysis Services on my blog. I want to reassure all of you that this is not the case and that I will be focusing equally on both technologies. It is just that there is a lot of development happening on the AS side, and it is important for all of us to keep expanding our skillsets and building our expertise. That said, I am writing this post on how to create a Risk Matrix chart in SSRS as a new year gift from my side.

Risk Matrix chart in SSRS

A risk matrix chart, as the name suggests, is used for performing risk analysis. Typically, we combine the Likelihood and Impact ratings of an event to arrive at a risk score, which aids in deciding what action to take in view of the overall risk. For the purpose of this post, I have just got 3 levels each for Impact and Likelihood (this can be increased or decreased as per your requirement). For both Likelihood and Impact, a rating of 1 means Low, 2 means Medium and 3 means High. Now I have categorized the risk score into Low, Medium, High & Critical and would like to plot my events as a scatter chart against this. For that, follow the steps below:-

1) Create an image with the required risk scores in Excel, PowerPoint or Paint. I came up with the following image.

background image for SSRS Chart

This will serve as the background image of our scatter chart. Add this image to the report.

Add image to report

2) Create a report with the required data sources and dataset. In this case, I have just made a sample dataset

SELECT 'Project A' AS Project, 0.5 AS Likelihood, 0.8 AS Impact
UNION ALL
SELECT 'Project B' AS Project, 0.8 AS Likelihood, 1.3 AS Impact
UNION ALL
SELECT 'Project C' AS Project, 0.9 AS Likelihood, 2.5 AS Impact
UNION ALL
SELECT 'Project D' AS Project, 1.4 AS Likelihood, 0.9 AS Impact
UNION ALL
SELECT 'Project E' AS Project, 1.5 AS Likelihood, 1.5 AS Impact
UNION ALL
SELECT 'Project F' AS Project, 1.2 AS Likelihood, 2.9 AS Impact
UNION ALL
SELECT 'Project G' AS Project, 2.1 AS Likelihood, 0.2 AS Impact
UNION ALL
SELECT 'Project H' AS Project, 2.4 AS Likelihood, 1.4 AS Impact
UNION ALL
SELECT 'Project I' AS Project, 2.7 AS Likelihood, 2.9 AS Impact
UNION ALL
SELECT 'Project J' AS Project, 1.6 AS Likelihood, 1.2 AS Impact
UNION ALL
SELECT 'Project K' AS Project, 2.2 AS Likelihood, 2.2 AS Impact
UNION ALL
SELECT 'Project L' AS Project, 1.1 AS Likelihood, 0.7 AS Impact

3) Go to the toolbox and drag and drop a chart item into the report body. Select the chart type as Scatter chart and click on OK.

scatter chart in ssrs

4) Drag and drop Likelihood from the dataset fields list into the Values and then select Impact as the X Value in the chart. Also drop Project field into the Category Group of the chart.

Add x and y values as well as category of scatter chart

5) Change the axis titles to Likelihood and Impact for the Y and X axis respectively. Then go to the properties of both axes and set the Minimum to 0, the Maximum to 3 and the Interval to 1.


6) Now click on the chart area and add the RiskMatrix image as the background image.

Add background image

You might also want to hide the major gridlines for the horizontal and vertical axis.

remove major gridlines

7) Now, with a few cosmetic changes to the marker colours, we arrive at the end result. It is good practice to enable tooltips on the points, so that we can just hover the mouse over a marker to find the project name.
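The tooltip is just an expression set on the scatter series' Tooltip property; a minimal sketch using the fields from the sample dataset (the exact wording is an assumption, adjust as needed):

```
=Fields!Project.Value & " (Likelihood: " & Fields!Likelihood.Value
    & ", Impact: " & Fields!Impact.Value & ")"
```

With this in place, hovering over a marker shows the project name along with its coordinates.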

Risk matrix chart

The beauty of this approach is that it can be extended to a lot of other scenarios. For example, I remember using this approach to answer a question about a scatter chart with four quadrants in the MSDN forum.

scatter chart with 4 quadrants

Ok, this is all from me for this year. As for my new year wish, I would like to quote Ann Landers-

“Let this coming year be better than all the others. Vow to do some of the things you’ve always wanted to do but couldn’t find the time. Call up a forgotten friend. Drop an old grudge, and replace it with some pleasant memories. Vow not to make a promise you don’t think you can keep. Walk tall, and smile more. You’ll look ten years younger. Don’t be afraid to say, ‘I love you’. Say it again. They are the sweetest words in the world.”

Wishing all of you a very prosperous new year! Smile

Posted by SQLJason, 2 comments
Introduction to Excel GeoFlow (Beta 1)


December 24, 2012

I have been playing around for a couple of weeks with the beta release of Excel GeoFlow and I have got very mixed feelings about the product. No, I am not trying to be the Grinch this Christmas season, and I must say that I do like the direction in which Microsoft and this product are going – it is just that it is not even half as ready as I would have expected (but to be fair, it is only the beta 1 release). That said, there are a couple of features that I am really excited about, and I thought of jotting down a quick introduction to GeoFlow for those who are interested.


1) How to get the beta release of GeoFlow
If you would like to try GeoFlow, send a mail to and wait for the team to respond. You must have Excel 2013 for GeoFlow to work.

2) Current Limitations
·         Related tables are not supported in this release. All the data should be in one table.
·         Backend data changes are not reflected in the visualization in this release; this means deleting and recreating the visualizations.
·         No Undo/Redo is supported in this release.
·         No way to slice and dice data within the visualization.
·         No drill down functionality from a higher level (say, states) to a lower level (say, counties), as is available in tools like Power View.

3) Create a basic GeoFlow report
GeoFlow requires your data to have some geographical fields in it, which can be in a variety of formats including latitude and longitude, street address, city, zipcode, state, country, etc. Be aware that you will need an internet connection for the geocoding to take place. For the purpose of this demo, I have got the population and personal income data by year for all the counties in the United States. Follow the steps below to create a basic GeoFlow report:-

a) After installing GeoFlow, you should be able to see the 3D Map button in the Insert tab. Click on any cell within the data table and then select the 3D Map button in the Insert tab. Click on the Explore in 3D option as shown in the image below.

1 Insert 3D Map button

b) Choose the geographical field based on which you need the map. In my case, it is the StateName field. Click on Map It button to proceed.

2 Choose geo field

c) Click on the Population field to see the states visualized by this measure.

3 Basic map of population by states

By default, you get the Column chart visualization (which can be clustered or stacked if a category is present). This can also be changed to a bubble map or a heat map visualization.

4 Bubble map

4) Navigating the 3D Map
Navigating the 3D map can be tricky so it is useful to know all the navigation features. To use a mouse to navigate in the GeoFlow 3D environment, do any of the following:
·         Double click rapidly on any portion in the globe to zoom closer to it.
·         Use the scroll wheel on your mouse to zoom in and zoom out.
·         Click and drag the globe in any direction to pan without changing the pitch.
·         Hold the Alt key, and then click and drag to change the pitch.
·         If you lose your point of focus, zoom all the way out with the scroll wheel to reset the globe and view.

To use the keyboard to navigate in the GeoFlow 3D environment, do any of the following:
·         Tap the arrow keys up, down, left and right to pan and spin the globe.
·         Hold the Alt key and then tap the left and right arrow keys to orbit the current target, or the up and down arrow keys to increase and decrease the camera pitch.
·         Tap the plus (+) and minus (-) keys to zoom in and out.

5) Interesting Features
There are quite a few interesting features in GeoFlow, and these are some of the main ones in my opinion:-

a) Handles multiple geographical levels:- We can add multiple geographical levels to GeoFlow. To add the counties (AreaName field) to our basic report, click on the Edit button in the Layer Manager as shown below.


Then select the AreaName field and map it to the County in Geography section. Then click on Map It button.


Now you should see the map visualized by counties. It might take some time for all the data to be processed on the map and it depends on the number of rows that you have. When the processing is done, you should see the message as Finished in the green bar below the map.


You can easily toggle back to the states by selecting the StateName radio button in the Map by section. However, as I mentioned before, drill downs are not possible (as far as I know, in this release). Vote up this request on Connect by clicking here.

b) Visualizing Data over time:- We can visualize the data over time if we have a date or time field. In my dataset, I have the year, but this is not enough as GeoFlow requires a date or time field. So I had to make a new field called DateYr which converts the year to a date by adding 1/1/ before every year. Vote here to up the request to have the time play axis use any level of the time dimension. Now in GeoFlow, drag and drop the DateYr field to the Time section. You will now get an additional section for time settings which has values like Time Accumulation, Instant and Persist the last.
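If the source data is in an Excel table, the DateYr column can be a simple calculated column; a sketch, assuming the year column is named Year:

```
="1/1/" & [@Year]
```

Alternatively, =DATE([@Year], 1, 1) produces a proper date value directly, which avoids any text-to-date conversion ambiguity.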


You can also see the time play axis at the bottom. Click on play and you can see the data changing over time.

c) Adding Annotations and Textboxes:- This is definitely one of my favourite features. We can now add an annotation to a particular data point by right clicking on it and then selecting the Add annotation option.


Similarly, you can add a textbox. The only difference between them is that an annotation is bound to a data point while the textbox is more like a sticky note on the screen; the textbox stays at the same position even when the map is panned or zoomed.

d) Find a Location:- The find a location feature is pretty handy in case you have a lot of data points and want to zoom directly to a particular location.


e) Creating and Saving Tours:- I have obviously saved the most exciting feature for the last. Now you can create tours by clicking on the New Tour button and then including the scenes in the tour by selecting the Capture Scene button.


Scenes transition automatically and effects are automatically applied. A scene is auto-saved when you capture a new scene. If you want to change a scene, for example to show a different angle or a different time subset, select the scene, make the changes and then click on Save Scene. Once you have created a tour, you can play it by selecting the Play Tour button. I have posted a video of a sample tour that I created.

Excel GeoFlow

After creating a tour in GeoFlow, the tour is automatically saved back to your workbook. You can create multiple tours by clicking on New Tour. Saving your Excel workbook will save each of the GeoFlow tours you created with the worksheet it was created from. Now you will be able to see a new option to Manage Tours also.


To summarize, there are some exciting features but clearly, a lot of work needs to be done before it can be used by data professionals for analysis. With this, we come to the end of today’s post. Wishing all of you a very Merry Christmas from my side and be good!

Posted by SQLJason, 14 comments
Querying Attributes and Measures in DAX Multidimensional


December 2, 2012

The past week has been pretty exciting for Analysis Services geeks – the Microsoft SQL Server 2012 With Power View For Multidimensional Models Customer Technology Preview (CTP) has been made available for download (and if you haven't tried it out yet, download it from this link). The obvious part is that Power View can now consume data from SSAS Multidimensional objects, which was a long-standing request from the community. This will enable businesses to use their existing investments in SSAS Multidimensional with the latest end user tools like Power View. The best part is that all of this is achieved through native support for DAX in Multidimensional – there is no translation of DAX into MDX. This opens up a whole new door of possibilities. For example, we should be able to design SSRS reports using DAX queries on top of SSAS Multidimensional objects (and who knows, we might be able to replace some of the slower MDX calculations with faster DAX equivalents). Welcome to the world of DAX Multidimensional (or DAXMD)!

Querying in DAXMD

The purpose of this post is to introduce you to querying Multidimensional objects in DAX, and you should be able to get a good overview of how multidimensional objects are mapped in tabular from here. But there is something extra in this post that the official documentation has not mentioned when it comes to querying attributes which don't have the same name and key values (for SSAS beginners, an attribute in SSAS MD can have different values for its key and name, while in SSAS Tabular, you have to specify the key and name as different columns). For the purpose of this post, I am using the Adventure Works DW 2008R2 database and querying using SQL Server Data Tools (SSDT).
Before we start, let us have a look at the object mapping summary.

Object mapping summary - MD to Tabular

Armed with this knowledge, let us start querying in DAXMD.

1) Querying Attributes with same Key and Name

Let us take the example of Calendar Quarter of Year in the Date dimension.

Calendar Quarter of Year

We can write a simple DAX query as shown below

evaluate
values ( 'Date'[Calendar Quarter of Year] )
Querying Calendar Quarter of Year

Note that Date is a role playing dimension in the cube and you will need to use the name of the cube dimension as the table name (and not the database dimension name).

2) Querying Attributes with different Key and Name

Let us take the example of Calendar Year.

Querying Calendar Year

Now let us see what the query below will result in.

evaluate
values ( 'Date'[Calendar Year] )
We will get an error – Column [Calendar Year] is part of composite key, but not all columns of the composite key are included in the expression or its dependent expression.

Composite Key Error in DAX

I spent hours trying to figure out what this error means and finally managed to figure out that it happens only when the key and name are different. It made sense too, as DAXMD will recognize the single attribute as two different columns and not as one. But I had no idea of what syntax to use. Enter Jeffrey Wang (blog) from the Analysis Services dev team –

“This is a design decision for good performance. A single MDX attribute can produce several DAX columns, some visible, some not, but you can query them in SSMS regardless of whether they are visible or not. In general, a MDX attribute, named A, can produce several columns like A.Key0, A.Key1, A. The design requires you to group by at least A.Key0 and A.Key1 if you want to group by any of the three columns. So to query attribute A in DAX, you can do either Summarize(‘Table’, [A.Key0], [A.Key1]) or Summarize(‘Table’, [A.Key0], [A.Key1], [A]). In simple cases where an attribute, named B, only generates a single DAX column, which will also be named B, you can simply query Values([B]) like regular tabular columns.

This constraint helps us achieve good performance since we can send all queries to the MDX engine, which only groups by the entire attribute regardless of how many DSV columns were used to create this attribute. Otherwise we would have to eliminate duplicate values on the DAX side after the MDX query returns the resultset. Hope this makes sense.”

What this means is that your query will have to follow the below format

evaluate
summarize ( 'Date', 'Date'[Calendar Year.Key0], 'Date'[Calendar Year] )
Querying Calendar Year the right way

Adding the .Key format to the column name is a new addition to the DAX language as far as I know. I am reasonably sure that this is how Power View also issues the DAX when such attributes are used in the model, though I haven't tested it so far. If any of you have seen the profiler traces of Power View, feel free to comment below.

3) Querying Attributes with Multiple Keys

Let us take the example of Calendar Quarter.

Calendar Quarter
As mentioned previously, we will have to group by at least all the keys, and then the name if needed.

evaluate
summarize (
          'Date',
          'Date'[Calendar Quarter.Key0],
          'Date'[Calendar Quarter.Key1],
          'Date'[Calendar Quarter]
)

Querying Calendar Quarter
4) Querying Measures in a Measure Group

Let us take the example of Internet Sales Amount in the Internet Sales measure group.

Internet Sales Amount

We can write a simple DAX query to display the Internet Sales Amount measure by Calendar Quarter of Year as shown below

evaluate
summarize (
                   'Internet Sales',
                   'Date'[Calendar Quarter of Year],
                   "test", 'Internet Sales'[Internet Sales Amount]
)
Querying Internet Sales Amount

We can also refer to the measure without the table name in the query. Also note that we should not wrap any aggregation around the measure, else we might get the following error – Column 'Internet Sales Amount' in table 'Internet Sales' cannot be found or may not be used in this expression. This makes sense, as the engine should use the aggregation that is defined in the multidimensional cube.

5) Querying Measures without a Measure Group

This section refers to those calculated measures in the multidimensional cube which are not associated with a measure group. Since all the calculated measures in the Adventure Works cube are associated with some measure group or the other, I made a simple calculated measure called Test in the calculated member script of the cube as shown below

Create Member CurrentCube.[Measures].[Test] As 2;

Now I can refer to this Test measure in a DAX query as shown below

evaluate
summarize (
                  'Date',
                  'Date'[Calendar Quarter of Year],
                  "test", 'Measures'[Test]
)
Querying custom measure

Hopefully, this should get you started thinking in DAXMD now! Smile

Update 3/1/2013: In the same vein, Gerhard Brueckl has blogged on DAXMD and Default Members. Interesting read, check it out!

Posted by SQLJason, 2 comments
Classifying and Solving the Events in Progress Problem in PowerPivot


November 27, 2012

Last week, I was going through the PowerPivot forums as usual and was answering a question on how to handle a count of data with “valid to/from” dates in DAX. This was a classic case of the “Events in Progress” problem, or finding the number of active events between a start date and an end date (in case you haven’t read them, Chris Webb (blog | twitter) has got some great posts on Events in Progress solutions using both DAX and MDX). I replied with a possible solution and soon, Marco Russo (blog | twitter) came up with a much more elegant one. But more than the solution, what caught my eye was that our results were not matching, and after a few email exchanges with Marco (and a big thanks to Marco for taking time off to answer my questions), we came to the conclusion that it was because of the difference in our interpretations of the Events in Progress problem. That is when I thought of classifying the different interpretations of “Events in Progress” as well as extending Marco’s solution for each of them.


In a typical Events in Progress problem, there are fact rows with a start and end date and each row denotes an event. The requirement would be to find the number of events that were in progress on any given date. At a date level, there is no ambiguity as the number of events in progress for a particular day, say D would be the number of those events whose start date is less than or equal to D and the end date is greater than or equal to D (Actually, you can further sub-categorize this into whether the start dates and end dates are inclusive or exclusive i.e. whether the start date is less than or less than or equal to D and whether end date is greater than or greater than or equal to D. For the sake of simplicity, we are assuming that both start and end dates are inclusive). However, the moment you start analysing on a level above the dates, say a month or a year, the question arises on which date of the month to use for comparing the start and end date. Should we be using the first date of the month? Or the last date of the month? Or should we be using a combination of both? There is no wrong answer here and the right definition completely depends on the business requirement. So let us look at the four possible combinations that are possible:-


To give an example, say we are analysing the events in progress for January 2012. For scenario S2, this would mean the number of events which have a From Date (or start date) less than or equal to 1 January 2012 (the minimum date of January 2012) and a To Date (or end date) greater than or equal to 31 January 2012 (the maximum date of January 2012). Here also, we could further sub-categorize by whether the start and end dates are inclusive or exclusive, but this is not done for the sake of simplicity and also because our solution can easily be modified by removing the equal to symbol. It might be easier to understand the relationship between the four scenarios with the help of a Venn diagram.
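In terms of date comparisons (using the Min and Max of the dates in the selected period), the four combinations work out as follows, matching the measures defined later in this post:

```
S1: FromDate <= Min(period) AND ToDate >= Min(period)   -- active on the first day
S2: FromDate <= Min(period) AND ToDate >= Max(period)   -- spans the whole period
S3: FromDate <= Max(period) AND ToDate >= Min(period)   -- active on at least one day
S4: FromDate <= Max(period) AND ToDate >= Max(period)   -- active on the last day
```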


As you can see, scenario S3 is the superset and includes any event that was active on at least one date in the selected time period. Scenario S1 requires that the event had started by the first day of the time period (it can end on any day on or after that first day). Conversely, scenario S4 requires that the event was still active on the last day of the time period (it can start on any day up to that last day). Scenario S2 is the intersection of scenarios S1 and S4 and requires that the event starts on or before the first day and ends on or after the last day of the time period; events starting or ending within the selected time period will not be counted. Let us have a look at the four scenarios from a different visualization perspective.


Now that you have got a good idea of the different scenarios, let us look at some sample data that we can use for testing the solution. Given below is a simple fact table which contains Order Numbers with a From Date and To Date and also the Order quantity.


The PowerPivot data model that we are going to use is given below

1 PowerPivot model

I just have a simple calendar table (where I have just populated days from 2011 to 2013, again for simplicity) and the above Orders fact table. It is interesting to note that we will not be creating any relationships between the two tables as our solution does not require them. Now follow the steps below to get the solution for the four scenarios:-

1) Before we start, it might be a good exercise to review the original formula that I wrote and why we are moving to a different solution. The original formula I wrote was

OldMsr:=Calculate (
                  CountRows ( Fct ),
                  Filter (
                          Fct,
                          CountRows (
                                          Filter (
                                                  Values ( Calendar[DateKey] ),
                                                  ( Calendar[DateKey] >= VALUE ( Fct[FromDate] ) ) && ( Calendar[DateKey] <= VALUE ( Fct[ToDate] ) )
                                          )
                          ) > 0
                  )
)
To explain this formula, I am dividing it into 3 parts. The outer CountRows ( Fct ) gives us the count of the fact rows, which is the count of events in progress, provided the correct set of rows in the fact table is passed as a table filter. To pass the correct table filter, we need to filter the fact table (the outer Filter), and the condition used to filter it is the inner CountRows over the Calendar dates. Since we don't have any relationships between the tables, what I am doing is filtering for all the fact rows which have at least one date in the current context that falls between the From Date and the To Date. When we are analysing by month, this means that even if there is a single day in that month which satisfies the condition, that fact row is included. This is exactly our scenario S3. Now, some of the problems associated with this:-

– Formula is difficult to understand and write
– Formula can’t be easily changed to that of other scenarios
– Formula doesn’t perform well

2) Now let us look at the formula which was given by Marco in the forums

S3:= Calculate (
  CountRows ( Fct ),
  Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Max ( Calendar[DateKey] ) ),
  Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Min ( Calendar[DateKey] ) )
)

You can already see how much simpler it is to understand. We are just passing the From dates which are less than or equal to the max of the time period, and the To dates which are greater than or equal to the min of the time period, as table filters to the Calculate statement; there is no need to filter the fact table explicitly. Not only that, it performs much better as it doesn't have to parse through the entire fact table – it just has to parse through two small tables containing the distinct values of From Date and To Date respectively (always remember, it is faster to parse through two smaller tables than one big table). The best part is that this can easily be extended to the other scenarios as shown below

S1:= Calculate (
CountRows ( Fct ),
Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Min ( Calendar[DateKey] ) ),
Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Min ( Calendar[DateKey] ) )
)

S2:= Calculate (
CountRows ( Fct ),
Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Min ( Calendar[DateKey] ) ),
Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Max ( Calendar[DateKey] ) )
)

S4:= Calculate (
CountRows ( Fct ),
Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Max ( Calendar[DateKey] ) ),
Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Max ( Calendar[DateKey] ) )
)
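As mentioned earlier, if the business definition treats the start and end dates as exclusive, the same pattern works with the equality removed; a sketch for a strict version of S2:

```
S2Strict:= Calculate (
    CountRows ( Fct ),
    Filter ( Values ( Fct[FromDate] ), Fct[FromDate] < Min ( Calendar[DateKey] ) ),
    Filter ( Values ( Fct[ToDate] ), Fct[ToDate] > Max ( Calendar[DateKey] ) )
)
```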

Wasn’t that simple?

3) Now let us have a look at the results to make sure everything is coming correctly.


I have filtered for January 2012 and you can see that the dates in that table are showing the same values for all five measures. Let us pick 10 January 2012 as an example: only 11001, 11003 and 11004 satisfy the conditions, and hence the measures are all 3. The results also come up correctly if we look at the month or year level.


Let us take January 2012 as an example, so the max will be 31 Jan 2012 and the min will be 1 Jan 2012. The order numbers which match the given conditions are given below and match the measure counts.

S1 = 11001, 11002, 11003
S2 = 11001, 11003
S3 = 11001, 11002, 11003, 11004, 11005, 11006
S4 = 11001, 11003, 11006

4) Another benefit of this pattern is that it can easily be applied to other problems where you might need to find the sum, distinct count or any other aggregation between a from date and a to date. Let us say I want to find the sum of Order Quantity according to scenario S1; all I need to do is the following

S1Sum:= Calculate (
Sum ( Fct[OrderQty] ),
Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Min ( Calendar[DateKey] ) ),
Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Min ( Calendar[DateKey] ) )
)
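The same shape also works for a distinct count; a sketch for scenario S1, assuming the fact table has an order number column named Fct[OrderNo] (a hypothetical name):

```
S1Dct:= Calculate (
    DistinctCount ( Fct[OrderNo] ),
    Filter ( Values ( Fct[FromDate] ), Fct[FromDate] <= Min ( Calendar[DateKey] ) ),
    Filter ( Values ( Fct[ToDate] ), Fct[ToDate] >= Min ( Calendar[DateKey] ) )
)
```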

If you want to try this on your own, feel free to download the completed excel file from here.

Posted by SQLJason, 2 comments
Changing PivotTable Names in Excel 2013 + Bug Alert


November 23, 2012

Today is Black Friday here in the US and I don't have time to write a regular post (as I am scouring for deals online and planning to get out for shopping as soon as the insane lines outside the stores dwindle in numbers). So this is just going to be a quick tip for those who have tried upgrading their PowerPivot models with linked tables from Excel 2010 to Excel 2013 and found out that their PivotTable names don't match anymore.

1 Changing PivotTable names in Excel 2013

For those who are not familiar, let me give an account of the issue. If you upgrade a PowerPivot model which has linked tables from Excel 2010 to 2013, you will notice that the table names in the field list don’t match the table names in your model. As per my testing, this doesn’t break the measures that you might have created using the old table names (as the measures still use the table names in the data model and this name change looks to be more of a front-end display name) but might be a real inconvenience. Follow the steps below to reproduce and solve the issue:-

1) For the purpose of this demo, let us use the PowerPivot file that I created in Excel 2010 for my last post – Measure Selection using Slicers in PowerPivot. You can download it from this link. If you open it in Excel 2010, you can see the original table names in the field list as well as the linked tables used to create the model above.

2 Table names in Excel 2010

2) Now close the file and then re-open in Excel 2013. Julie Koesmarno (blog | twitter) has put a step by step post to upgrade this workbook from 2010 to 2013 here. Also have a look at this post – Upgrade PowerPivot Data Models to 2013 for any potential issues while upgrading your workbook. Now once you have done that, you will see that all the table names have changed in the field list.

3 Changed table names in Excel 2013

However, there is no change to the table names in the underlying PowerPivot data model.

4 Name in PowerPivot model

3) To change the name in the PivotTable field list, all you have to do is to select a cell in the respective linked table, click on Design and then enter the original / required name in the Table Name cell as shown below

5 Rename table

4) You can see that the changes are reflected in the PivotTable field list now.

6 Name change in Excel 2013

You could avoid these steps in Excel 2013 if you renamed your linked tables in Excel 2010 in the same way before upgrading. If not, the field list will just take the name of the linked table when you upgrade. Apparently, this is a known issue and will be fixed in the next release as per this thread.


If you paid close attention to the file that you just upgraded, you will notice that the slicer for the measure name is not working. This seems to work for people who have installed Office Professional Plus (v 15.0.4420.1017), but for people like me who have installed the Excel Preview (v 15.0.4128.1025), the slicers don't work. It turns out that the FORMAT function doesn't play well within the measure expression. All you have to do is change the measure expression for MsrValue from

MsrValue:=switch (TRUE,
Min ( Msr[MsrId] ) = 1,
FORMAT ( Sum ( Fct[Sales] ), "$#,#0" ),
Min ( Msr[MsrId] ) = 2,
FORMAT ( Sum ( Fct[Quantity] ), "#,#0" ),
Min ( Msr[MsrId] ) = 3,
FORMAT ( Sum ( Fct[Profit] ), "$#,#0" )
)

to
MsrValue:=switch (TRUE,
Min ( Msr[MsrId] ) = 1,
Sum ( Fct[Sales] ),
Min ( Msr[MsrId] ) = 2,
Sum ( Fct[Quantity] ),
Min ( Msr[MsrId] ) = 3,
Sum ( Fct[Profit] )
)

Of course, this does mean that you lose the formatting. Now you can see that the slicers are working. This clearly is a bug, as it works in Excel 2010 (as well as in the Professional Plus edition of Excel 2013). I have raised a Connect issue here; please vote for it so that it can be rectified in the next release. Ok, time to run for the Black Friday shopping now!

Posted by SQLJason, 2 comments
Measure Selection using Slicers in PowerPivot


November 21, 2012

As a Business Intelligence professional, I find it really fascin(/frustr)ating working with Excel users. You get exposed to a completely different way of thinking when you work with them, not to mention that they are extremely smart and demanding. From the moment I started working with PowerPivot and interacting with Excel users, I knew it was not going to be easy tackling their questions and requirements. You simply can't escape by saying that the required feature is not available, because they will push you till you find a workaround. This was the case when one of my users came back to me asking for a slicer which would dynamically let them select measures. I replied saying that slicers are not intended for that purpose and should be used for filtering the data (and not for just selecting or deselecting values like measures, which can easily be done from the Field List pane). However, the user pushed back saying that he would like to create a dashboard for the top management and didn't want them to be using the Field List pane. That is when I had to put on my thinking cap.

1 Measure selection using slicers in powerpivot

For the purpose of this post, I am going to use some simple mocked up data which involves three measures (Sales, Quantity and Profit) by country and year.

2 Data model and sample data

Follow the steps below to recreate a slicer which can be used to select or deselect the measures in a pivot table:-

1) Create a table which will have the list of measure names needed in the slicer in the MeasureName column and their representative integer ids in the MsrId column. Import this table into PowerPivot using Linked Tables and name it Msr.

3 Create Linked table in powerpivot

There is no need to make any relations with the existing tables; Msr should be a standalone table.

2) Now make a new measure called MsrValue with the formula given below

MsrValue:=switch (TRUE,
Min ( Msr[MsrId] ) = 1,
Sum ( Fct[Sales] ),
Min ( Msr[MsrId] ) = 2,
Sum ( Fct[Quantity] ),
Min ( Msr[MsrId] ) = 3,
Sum ( Fct[Profit] )
)

4 New measure

This measure will check the value of the MsrId column and then use the appropriate measure. However, it should be noted that when there is more than one value of MsrId (or if more than one measure is selected), only the measure having the minimum value of MsrId will be displayed. So how will we display more than one measure in the pivot table? Read on.

3) With this very simple setup, we are ready to view the results in the pivot table. Drag and drop the MeasureName column into the Slicer, Year into the Column Labels and Country into the Row Labels. Then drag and drop MsrValue into the Values and we should have something similar to the image below. I have also included the fact rows above the pivot so that we can compare.

5 Drag column to pivot table and insert slicer

Now there are a couple of problems with this pivot table. You will have noticed that the pivot table works correctly if only one measure is selected, but if there is more than one selection for the measure name, it will only show the measure with the minimum MsrId (e.g., Sales in the image above). Also, we would like to see the name of the measure in the column above for clarity. To solve both of these problems, follow the next step.

4) Just drag and drop the MeasureName column into the Column Labels above the Year.

7 Field list pane

Now you can see that multiple measures are displayed and also that the measure names are displayed in the columns above.

6 Measure getting selected in pivot table as per the slicer selection

You can also change the formatting of the measures if needed

MsrValue:=switch (TRUE,
Min ( Msr[MsrId] ) = 1,
FORMAT ( Sum ( Fct[Sales] ), "$#,#0" ),
Min ( Msr[MsrId] ) = 2,
FORMAT ( Sum ( Fct[Quantity] ), "#,#0" ),
Min ( Msr[MsrId] ) = 3,
FORMAT ( Sum ( Fct[Profit] ), "$#,#0" )
)

8 formatted measures

Luckily, that wasn't as hard as I thought (which means that I can still use my free evenings to focus on more important problems, like whether the chicken or the egg came first!). If you don't have such important problems to focus on, you can download the completed file from this link and test it out on your own!
) 8 formatted measures Luckily, that wasn’t as hard as I thought (which means that I could still use my evening free time to focus on more important problems like whether the chicken or egg came first into the world!). If you don’t have such important problems to focus on like me, you can download the completed file from this link and test it out on your own!
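The measure-switching logic above is easy to reason about outside DAX. Here is a rough Python analogue (the measure names, ids and totals below are made up for illustration): for whatever set of MsrId values survives the slicer selection, only the measure with the minimum id is returned, which is exactly why each measure name needs its own column in the pivot to show more than one measure at a time.

```python
# Hypothetical stand-ins for the Msr lookup table and the fact aggregations
MEASURES = {1: ("Sales", 1000), 2: ("Quantity", 50), 3: ("Profit", 200)}

def msr_value(selected_ids):
    """Mimics MsrValue := SWITCH(TRUE(), MIN(Msr[MsrId]) = 1, ...).

    selected_ids is the set of MsrId values in the current filter
    context; only the measure with the minimum id wins."""
    name, total = MEASURES[min(selected_ids)]
    return name, total

# With Quantity and Profit both selected in a single cell's context,
# only Quantity is returned, matching the behaviour noted in step 2.
print(msr_value({2, 3}))
```

Once MeasureName is on Column Labels, each column's filter context contains a single id, so every selected measure gets its own column.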

Posted by SQLJason, 29 comments
PASS Summit 2012 Hangover

PASS Summit 2012 Hangover

November 18, 2012

November has always been my favourite month of the year, though I don’t really have a clear-cut reason for that. Maybe it is because my birthday falls in this month and I look forward to opening all those lovely gifts that I usually get. Or maybe because it is the time of the year when you get to see nature in her splendid fall colours and her transition to winter. Or could it be the simple reason that you get 50% off Halloween candy? (In case you didn’t know, I have a BIG sweet tooth.) Anyways, I always look forward to November eagerly. Moreover, this year’s November was extra special – I was going to attend my first PASS Summit.

Summit2012_header_passlogo

I started my baby steps in Microsoft BI six years back, although I was blissfully ignorant of whatever community activities were happening then. I have always been an inquisitive mind (or a heckler, depending on whom you ask) and used to ask a lot of questions. Usually, my technical doubts were answered by the senior consultants in my company, and it was by chance that I got redirected to the MSDN forums (the truth is, they were tired of my constant questions). I think that was when I started becoming cognizant of the SQL world and the community. I asked more questions, started recognizing the big names in the BI world, read more of their blogs, and before I knew it, I was hooked on this technology. I started answering questions in the forums apart from just asking them, and I really enjoyed the kick that you get out of helping people. It also fuelled my personal growth and soon, I decided to share my learning by starting a blog. Fortunately for me, that was the time I moved to London, UK, and I got my first real taste of user groups. I started attending the bi-annual SQLBits conference and it sort of inspired me to start speaking. However, I had never given a talk before (not even in my company) and my bouts of stage fright were legendary in my friend circles.
But when you wish for something really hard, the entire universe conspires to make it happen. I decided to mail Chris Webb (Blog | Twitter) asking him if I could speak at one of the user groups, and he set me up for my first talk (I don’t think I would have ever had the guts to ask again if I had been rejected then). He also encouraged me to submit my abstract for SQLBits (which eventually got picked), and I slowly started getting a little more confident about public speaking. This was one of the many pushes forward that I got from my #sqlfamily (oh yes, that is what they are to me now, not just a community). When I moved to the US, I started submitting to SQL Saturdays with the encouragement of many people in the community, and I am proud to say that the Washington DC one is going to be my 10th SQL Saturday this year (and this is apart from the user group, code camp and virtual chapter talks that I gave). For a person whose knees would shake if he was standing and speaking in front of 4 people, this definitely was a big achievement, and I have my #sqlfamily to thank for it.

For me, this cooperation and mutual encouragement was best epitomised at SQL Saturday New York City. I was the next speaker after Kevin Kline (Blog | Twitter), and just when I was setting up my laptop, I realized that I didn’t have my HDMI-to-VGA connector for projecting (and I could have sworn that I had checked it before leaving the speaker room; I must have dropped it somewhere). I was totally freaked out – no one in the speaker room had a converter and I had the demos only on my laptop. I was on the verge of cancelling my session. As a last resort, I asked Kevin, who was on his way out, whether he could lend me his laptop (which had a VGA port) so that I could set up a live meeting between his laptop and mine, and project the demos from his.
Not only did he agree, but he came to my room to calm my nerves, helped me set up the laptops and was by my side throughout the session. The session went great and many attendees told me it was really good. If it wasn’t for Kevin, I would have been in an embarrassing situation, and he saved my day (and to put things in perspective, he barely knew me!!!). I couldn’t even say thanks properly to him because he was already late for some other appointment and had to rush off. That is how selfless people in the #sqlfamily are, and I really wanted to be a part of the biggest family reunion of the #sqlfamily – the PASS Summit.

I reached Seattle on Tuesday afternoon and went straight to the Chapter Leaders meeting, where we discussed how we can make our user groups more effective. It was very useful to hear from other chapter leaders on how they are organizing their user groups in terms of finance, attendees, marketing, speakers, etc. I also made sure to attend the #sqlFirstTimers meeting, where all of the first-time attendees were given advice on how to network and make the most of the PASS Summit.

PASS Photos1

Over the next three days, all the attendees were presented with a rich selection of sessions across all tracks and difficulty levels. I chose the Intermediate to Advanced topics for Business Intelligence and was truly happy with the results. It was great to attend sessions by Chris Webb, Marco Russo, Alberto Ferrari, Stacia Misner, Allan Folting & Akshai Mirchandani, Paul Turley, Melissa Coates, etc., and I learned a lot from these great speakers. It was very difficult making a choice between all the great sessions, and I hope to catch up with the rest of the sessions through the recordings.

PASS Photos2

As I mentioned before, the PASS Summit was also a place to meet my #sqlfamily. There were a lot of people I had met over Twitter, and it was a fabulous opportunity to associate faces with those Twitter handles.
I was also able to reconnect with my friends from the UK and India, and it was so much like meeting your family again after a long interval.

PASS Photos3

The various parties after the sessions kept me busy in the nights too. It was awesome getting my first dose of #sqlkaraoke, and I was up all 4 nights – thanks to the Quizball event by SQL Sentry, the Hard Rock Cafe event by Pragmatic Works, the Experience Music Project event by Microsoft and of course, the traditional #sqlkaraoke event at Bush Gardens.

PASS Party Photos

I also made sure to stay a couple of days after the event to explore the city, as it was my first time in Seattle. I must say the weather gods took mercy on us and we had some nice sunshine, so unlike Seattle :)

PASS Photos4

All in all, it was a great trip, and it was hard to bid goodbye to the city when it all ended. People usually say that when you go somewhere with a lot of expectations, you usually end up disappointed. However, the PASS Summit 2012 was above my expectations and I really savoured every moment of it. Guess it is time to end my long post. Hope to see all of you at the next PASS Summit, which will be held in the beautiful city of Charlotte. It will be much easier for me as I am based out of Charlotte, and I will be there to welcome all of you to our annual family reunion!

Posted by SQLJason, 0 comments
SQL Saturday Charlotte Journal

SQL Saturday Charlotte Journal

November 6, 2012

Was it fun? Oh yeah, it was. Was it perfect? Hmm, maybe not. But was it awesome? Definitely yes! Let me tell you, it was an amazing experience being a part of the whole process. SQL Saturday Charlotte was my 9th SQL Saturday of this year, but there was a big difference from the other ones – this time, I was wearing the Organizer hat and not the Speaker hat.

logo1

It all started around March of this year, when Javier Guillen (blog | twitter) and I were seriously contemplating starting a BI user group in Charlotte. We pulled in our common friend, Melissa Coates (blog | twitter), and she in turn informed us that Rafael Salas (blog | twitter) was also trying to do the same. We joined forces, and this marked the start of the evening #BeerAndBI rituals, where we started to come up with ideas and plans for our new user group.

Beer and BI

In one of those sober (?) rituals, the SQL Saturday idea was introduced, and there was no turning back from there. We informed Karla Landrum (twitter) of our intentions and she was extremely supportive. We started checking venues around June, and I must say we were really lucky to have Alberto Botero of Central Piedmont Community College as our friend. He acted as a bridge between the college and us, and ensured we had a venue to start off with. We confirmed the dates for SQL Saturday with Karla by the end of June and started our preparations. Having a big team has its own advantages: it was easier to split tasks without over-burdening anyone. Rafael was in charge of sponsors and food, Javier was in charge of finance and the venue, Melissa was in charge of social media, the website, emails and general coordination, and I was in charge of recruiting speakers and making the schedule. Even though we were assigned specific tasks, it was great to see everyone pooling in their efforts wherever they could. We also got Shannon Lowder (blog | twitter) to help us coordinate all the volunteers.
And when the day finally came to set up the venue, there were a lot of unforgettable moments – stuffing the bags on the eve of the event with Javier, Rafael setting off the fire alarm when we were already running late for the speaker dinner, setting up the tables and the cute little raffle boxes made by Melissa, last-minute printing with the help of Shannon’s iPhone, etc. The speaker dinner was well attended and fabulous; we even managed to have a few shots before we left.

Event pics

On the D-day, we reached the venue around 6:45 am, and from then till the afternoon we were so busy that we couldn’t even speak much with each other. I had always been of the mentality that we had planned more than necessary and that things would flow pretty easily during the event. I couldn’t have been more wrong, and it really took a lot of effort from our side as well as our volunteers’ to make sure everything went well. This made me appreciate all the more the great work done by all our volunteers and by the organizers of past SQL Saturdays. At the end of the day, it felt great to hear all the positive responses about the event from our attendees. A big thanks to all our sponsors, speakers and volunteers who made this possible.

Some of the volunteers of #sqlsat174

It was fun, and although it took me around a week to recover completely, I sort of miss all the madness involved in planning a SQL Saturday right now. Anyways, I am sure there will be much more to come with the start of the new BI group in Charlotte.

CBIG

Time to pack up my bags for the PASS Summit and reward myself with a relaxing week in Seattle!

Posted by SQLJason, 0 comments
SSRS Charts with Data Tables (Excel Style)

SSRS Charts with Data Tables (Excel Style)

October 30, 2012

Ok, this post was supposed to come out a lot earlier, but I was just too lazy. Complacency is a struggle we all have to fight, and I would be lying if I said that I am immune to it. This post is a direct result of a few forum posts where people said that charts with data tables are not possible in SSRS (unless you add a tablix beneath the chart and spend umpteen hours trying to align the two!). Working with SSRS is like making love to a woman – you need to know the woman and what makes her click well for a fabulous time. Well, you can still manage without knowing her well enough, just that it won’t be that awesome! ;)

1 Charts with Excel style data tables

For people who are familiar with excel charts, adding a data table to a chart is a one-click affair. It is nice to see the data table just beneath the chart and it also serves as a legend. A simple example is shown below:-

2 Excel chart with data table

In SSRS, having a tablix beneath the chart serves as an ok workaround as long as the number of fields in the category is fixed. When the number of fields changes (let’s say, when the value of a report parameter changes), the alignment goes for a toss. Here, I will be demonstrating a much more elegant way to add the data tables than trying to align the tablix and chart.

1) Create a simple dataset from the AdventureWorks cube, displaying the Month, Category and Internet Sales Amount. Add a parameter for Calendar Year with a default value of CY 2007.

3 Dataset query

2) Drag and drop a matrix to the designer pane and then add the Category field on the row, Month on the column and Internet_Sales_Amount on the data as shown below

4 SSRS Matrix with fields

3) Add two rows outside the group, above the Category row, as shown below.

5 Add two rows outside group for matrix

4) Now you can delete the top row (but not the associated group) containing the Category and the Month Header as shown below

6 Delete header rows

5) Once that is done, drag and drop the Month field to the textbox above the data value. Here, I just want to display the first three letters of the month (instead of having the month followed by the year), so I am using the expression below directly:-

=Left(Fields!Month.Value, 3)
You might also want to do some quick formatting like hiding the textbox borders for the top row.

7 Hiding textbox borders and setting month expression

6) Once this is done, the only part remaining is to make a chart, format it and then place it in the top right textbox. For that, drag and drop a stacked column chart outside of the matrix, and then drag Internet_Sales_Amount to the Values, Month to the Category Groups and Category to the Series Groups as shown below.

8 Chart

7) Right click on the vertical axis, select the Vertical Axis Properties and then modify the expression for Maximum from Auto to the expression written below

=1.2 * max(Fields!Internet_Sales_Amount.Value, “DST_Main”)

9 Chart vertical axis max property
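Fixing the maximum (instead of leaving it on Auto) matters because, once the chart is placed inside the tablix, every month gets its own copy of the chart; if each copy auto-scaled to its own data, the bars would not be comparable across months. A tiny Python sketch of the idea behind the expression, with made-up sales figures:

```python
def shared_axis_max(values, headroom=1.2):
    """Common axis maximum for every chart copy: the largest data
    point across the whole dataset plus 20% headroom, mirroring
    =1.2 * Max(Fields!Internet_Sales_Amount.Value, "DST_Main")."""
    return headroom * max(values)

# Every per-month mini-chart plots against this same maximum,
# so bar heights stay comparable across the row of charts.
monthly_sales = [1200.0, 950.0, 3100.0, 780.0]
axis_max = shared_axis_max(monthly_sales)
print(axis_max)
```

The 20% headroom simply keeps the tallest bar from touching the top of the chart area.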

8) Delete the axis titles, chart titles and the horizontal axis from the chart

10 Delete Axis titles

9) Hide the labels for the vertical axis.

11 Hide Labels

10) We need to place this chart within the tablix and make it repeat for each month. But before that, we need to make sure that there are no white spaces within the chart area, and hence change the following properties for the Chart Area.

12 Custom Inner Plot Position

Also set the BorderStyle property for the Chart as None and disable the MajorTickMarks for the Vertical Axis. This will also help in making the charts look continuous.

11) Now, as a last step, we need to ensure that the vertical axis is only displayed for the first chart. Hence, set the LineStyle property for the vertical axis as shown below

=iif(Fields!Month.Value = First(Fields!Month.Value, "DST_Main"), "Solid", "None")

13 Vertical axis property

12) Drag and drop the chart into the tablix cell

14 Chart in tablix

13) To add a color as a column next to the Category, we can use the Custom Code technique for consistent colors described here. Then ensure that the BackgroundColor property for the column, as well as the series color for the chart, is set to the expression shown below


15 Color of column as well as series

14) Now preview the report and you will get the intended result. Change the parameter value from 2007 to 2008, and you will see that the alignment is still proper (even though the number of bars has changed).

Charts with data tables

You can download the completed rdl file from here. Now you know which post to point to in case someone asks how to create charts with data tables in SSRS :)


Update: 10 September 2015

For those looking to replicate this for line charts, I have written a new post:

SSRS Line Chart with Data Tables (Excel Style)

Posted by SQLJason, 61 comments
Difference from Maximum of Running Totals in PowerPivot

Difference from Maximum of Running Totals in PowerPivot

October 18, 2012
I don’t know how many of you are Formula 1 fans here, but this season has been pretty good. Sebastian Vettel and Fernando Alonso are going neck and neck for the title, and there are just 4 races remaining. I try to follow F1 as much as I can, but I had missed watching the last couple of races, and there were some pivotal moments as Vettel took the lead from Alonso. I really missed knowing how the drivers were doing at the end of each race and how far the others were from the race leader. That is when I decided to relive the experience by mocking up the data in PowerPivot.
PowerPivot Running Total Max
So what I want to do is get a running total of the points for each driver across the races, so that I can see how each driver has been doing throughout the season. Also, it would give me a better idea of their performance if I could get the difference of the points for each driver from the race leader at the end of that race. Interesting, huh?
The source for our exercise can be downloaded from here and contains the following data:-
1. RaceFact
This table contains the positions for each driver across all the races in this season. The columns are Driver, Race, Position and a flag whether they crashed or not.
2. RaceOrder
This table contains all the completed races in this season and the order in which they happened. The columns are Race and Order.
3. Points
This table contains the points for each position. The columns are Position and Points.
4. Driver
This table contains the list of all drivers and just has one column – Driver.
Race data model
1. The running total of the points for each driver should be displayed race by race as shown below.
Running Total of Points

2. The difference of each driver from the leader (as of that race) should be displayed.

Diff from max of Running Total

Implementation

1. Build the PowerPivot model for the race data as shown in the diagram above. For simplicity, I have already created the model in the source file available for download here.

2. For creating the running total across the races, we will be following the technique detailed in my previous blog. For that, create two calculated columns in the RaceFact table called Points and Order respectively.

=CALCULATE ( SUM ( Points[Points] ) )

=CALCULATE ( SUM ( Races[Order] ) )

calc columns

Notice that the foreign keys in the fact table (Driver, Race, Position) are hidden, as it is a best practice to hide foreign keys. This way, the users will not get confused about whether to use the columns in the master tables or in the fact table.

3. For finding the running total, we can create a new measure with the formula below.

RT_Points := CALCULATE (
    SUM ( RaceFact[Points] ),
    FILTER (
        ALLEXCEPT ( RaceFact, Driver[Driver] ),
        RaceFact[Order] <= MAX ( RaceFact[Order] )
    )
)

When this measure is used with the drivers on rows and the races on columns, it will display the running total of the points for each driver across the races.

4. Now, for getting the difference of each driver from the leader, we need a measure which will give us the points of the race leader at the end of every race. This measure would be equal to the maximum of the running total that we just created, and it should be the same for all drivers for a particular race. Create the measure with the DAX formula given below.

MaxRT_Points := CALCULATE (
    MAXX ( VALUES ( Driver[Driver] ), [RT_Points] ),
    ALL ( Driver[Driver] )
)

The result is shown below.

MaxRTPoints

5. Now all we have to do to get the final requirement is subtract the MaxRT_Points measure from the RT_Points measure as shown below.

DiffFromLeader := [RT_Points] - [MaxRT_Points]

The result is shown below.

Diff from MaxRTPoints

6. With a bit of formatting and adding a chart for the top 6 drivers, I can now clearly see how the battle for the top place has been going.

Chart

The final version of the PowerPivot file can be downloaded from here. It is interesting to see that Fernando Alonso has been the most consistent driver, as can be seen from the above chart, even though he is in second place. With this information, I am going to bet my money on Alonso winning the Drivers’ Championship (and not because I am a biased Ferrari fan :P). What do you reckon?
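Outside DAX, the two measures boil down to a cumulative sum of points per driver plus, for each race, a subtraction from the leading cumulative total. Here is a minimal Python sketch of that logic, using made-up points (not the actual 2012 standings):

```python
from itertools import accumulate

# Points per race, in race order, for each driver (made-up numbers)
race_points = {
    "Vettel":   [25, 18, 25],
    "Alonso":   [18, 25, 18],
    "Hamilton": [15, 15, 25],
}

# RT_Points: running total of points per driver across the races
running = {d: list(accumulate(p)) for d, p in race_points.items()}

# MaxRT_Points: the leader's running total as of each race
n_races = len(next(iter(running.values())))
leader = [max(running[d][i] for d in running) for i in range(n_races)]

# DiffFromLeader: RT_Points - MaxRT_Points (0 means that driver leads)
diff = {d: [running[d][i] - leader[i] for i in range(n_races)]
        for d in running}

print(diff["Alonso"])
```

A driver whose difference is 0 for a race was leading the championship at the end of that race, which is exactly what the pivot in the post shows.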

Posted by SQLJason, 4 comments