Archive for the ‘SSAS’ Category

MDX #46–6 different ways to detect a particular member in MDX script

May 14, 2015

There are many great tips in MDX with SSAS 2012 Cookbook

The book MDX with SSAS 2012 Cookbook has many great tips for MDX script writers.

Here are two that are from Chapter 5 Navigation:

  • detecting a particular member by comparing object with keyword IS is better than comparing name
  • using SCOPE() statement in MDX script is a more “permanent” solution than calculations in MDX query

MDX script writers frequently need to include or exclude a particular member in a calculation. The first step is to determine whether the member exists in a hierarchy.

The book provides great recipes on how this can be done in MDX queries. In this post, I'll focus on how it can be done in MDX scripts.

If you have questions about the following concepts, please refer to Chapter 5 Navigation:

  • Iteration on query axes
  • Currentmember function
  • IS keyword
  • SCOPE() statement
  • Member’s unique name
  • Why we should avoid comparing names
  • Why using SCOPE() statement is a better way in MDX script

6 different ways it can be done in MDX script

Suppose that we need to detect the NA member in the Color hierarchy of the Product dimension. The result should show us TRUE for color NA only.

(Screenshot: the query result, showing TRUE for the NA color only.)


Here are 6 different ways it can be done in an MDX script (there are more ways if you insist).

  1. Member is detected 1 – Name: compare the Name property of the current member
  2. Member is detected 2 – INTERSECT-COUNTING: intersect with the current member and then use Count()
  3. Member is detected 3 – Uniquename: compare the Uniquename property of the current member
  4. Member is detected 4 – VALUE: compare the full value of the current member
  5. Member is detected 5 – IS: use IS to compare member object
  6. Member is detected 6 – SCOPE: use SCOPE() statement

The solutions improve from 1 to 6, with number 1 being the worst and number 6 the best. Avoid numbers 1 and 2; numbers 3, 4, and 5 are comparable; number 6 is the best choice in an MDX script.
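As a sketch of the two best options, assuming an Adventure Works-style cube where the NA member's key is &[NA] (the calculated measure names here are illustrative), numbers 5 and 6 might look like this in an MDX script:

```mdx
-- Number 5: compare the member object with IS (robust; avoids string comparison)
CREATE MEMBER CurrentCube.[Measures].[Member is detected 5 - IS] AS
    [Product].[Color].CurrentMember IS [Product].[Color].&[NA];

-- Number 6: a SCOPE() assignment, the more "permanent" MDX-script solution
CREATE MEMBER CurrentCube.[Measures].[Member is detected 6 - SCOPE] AS False;
SCOPE ([Product].[Color].&[NA]);
    [Measures].[Member is detected 6 - SCOPE] = True;
END SCOPE;
```

By contrast, number 1 would compare [Product].[Color].CurrentMember.Name = "NA", a string comparison that can break when another member happens to share the same name.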

Become familiar with the SCOPE statement

Check out this link:

Chris Webb did a wonderful job in this “Fun with Scoped Assignments in MDX” video.

5th Blogging Year

December 31, 2014

It’s hard to believe that it’s been 5 years since my first ever post, in which I simply spelled Business Intelligence. Thanks to everyone for being with me since then!

I blogged about my daily experience as a Data Warehouse and Business Intelligence developer. Most of my posts are about how to use the tools, i.e. the Microsoft SQL Server Business Intelligence stack, with more productivity and efficiency, and, of course, about the two major data query languages, SQL and MDX. I enjoy working as a DWH and BI developer for the creativity I find in all I do. I mostly praise the tools and the query languages we use, but occasionally my posts complain about the inefficiency and even bugs in the tools.

In 2013, I co-authored the book MDX with SSAS 2012 Cookbook. As much as I'd love to write again and blog more, in 2014 I felt that I needed more time for myself to relax and recharge, and to spend more time with my family.

Looking forward to next year, I would like to think that being a DWH and BI developer will continue to give me professional satisfaction; I do not need to find a new career. I also hope that 2015 will give me inspiration for a new book for DWH and BI developers.

My posts cover a lot of ground in SSIS, SSRS, SSAS, SQL Jobs, SSMS, Visual Studio's SSDT, and SQL and MDX. But it's interesting that the top 5 posts that received the most views are all in SSIS, according to WordPress's annual report. More interestingly, these top SSIS posts are all about how to use variables in SSIS. I will consider writing about those topics again.

Once again, thanks for following and commenting on my posts in 2014. I look forward to seeing you all online again in 2015! Happy New Year!


Sherry Li

SSAS #39–Building Optimal and Well-Tuned Dimensions

December 14, 2014

The Analysis Services Performance Guide, written by a group of Microsoft Analysis Services experts, has been updated since the 2005 edition, for 2008 R2 and for 2012.

  • Microsoft SQL Server 2005 Analysis Services Performance Guide
  • Microsoft SQL Server 2008 R2 Analysis Services Performance Guide
  • Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide

I highly recommend the guide. It is short, to the point and it’s FREE.

Here is my summary of how to build optimal dimensions from the Part 1 of the guide.

  1. Efficient Use of Attributes Is a Key Design Skill to Master
  • Dimensions are composed of attributes, which are related to each other through hierarchies.
  • Studying and implementing the attribute relationships available in the business model can help improve cube performance.
  • Using the KeyColumns, ValueColumn, and NameColumn properties effectively can reduce processing time. They also reduce the size of the dimension and the likelihood of user errors. This is especially true for attributes that have a large number of members, that is, greater than one million members.
  • KeyColumns: define the attribute and uniquely identify each instance of the attribute. It is a best practice to assign a numeric source field, if available, to the KeyColumns property rather than a string field. Furthermore, use a single-column key instead of a composite, multi-column key.
  • NameColumn: displayed to end users; if empty, defaults to the KeyColumns property.
  • ValueColumn: typically used for calculations; strongly typed and can be accessed through the MemberValue MDX function.
  • Using the ValueColumn and NameColumn properties eliminates the need for extraneous attributes. This reduces the total number of attributes in your design, making it more efficient.
  • Consider using numeric key fields for the KeyColumns property, rather than alphanumeric fields.
  2. Consider Hiding Most Attribute Hierarchies
  • Set AttributeHierarchyVisible = false for most attribute hierarchies.
  • Hiding the surrogate key is a typical example.
  3. Consider Disabling the Attribute Hierarchy
  • If an attribute is included only for ordering purposes, set AttributeHierarchyEnabled = false and AttributeHierarchyOptimizedState = NotOptimized to save on processing operations.
  4. Consider Disabling the Ordering of an Attribute
  • For a hidden attribute used only for implementation purposes, where you don't care about its ordering (the surrogate key is one such case), set AttributeHierarchyOrdered = false to save time during processing of the dimension.
  5. Set Explicit Default Attribute Members Properly (Carefully)
  • Default Attribute Members: Any query that does not explicitly reference a hierarchy will use the current member of that hierarchy.
  • The default behavior of Analysis Services is to assign the All member of a dimension as the default member.
  • Explicitly set default members only on dimensions with single hierarchies or in hierarchies that do not have an All level.
  6. Remove a Meaningless All Level
  • It can even be expensive to ask for the All level of a dimension if there is no good aggregation to respond to the query.
  • You can disable the All member in a hierarchy. You do this by setting the IsAggregateable = false on the attribute at the top of the hierarchy.
  • Note that if you disable the All level, you should also set a default member as described in the previous section–if you don’t, Analysis Services will choose one for you.
  7. Identify Attribute Relationships Other Than the Relationships to the Primary Key
  • Bushy attribute relationship: By default, all attributes are related to the primary key, and the attribute relationship diagram represents a “bush” where relationships all stem from the key attribute and end at each other’s attribute.
  • Redefine attribute relationships: You can optimize performance by defining hierarchical relationships supported by the data.
  • Attribute relationships help performance in 3 ways:
    • Cross products between levels (for example the cross-product between Subcategory and Category) in the hierarchy do not need to go through the key attribute, saving CPU time during queries.
    • Aggregations built on attributes can be reused on related attributes, saving resources during processing and for queries.
    • Auto-exits can more efficiently eliminate attribute combinations that do not exist in the data.
    • If the attribute relationship is defined, the Analysis Services engine knows beforehand which category each subcategory belongs to via indexes built at process time.
  8. Flexible vs. Rigid Relationships: Consider Using Rigid Relationships
  • A flexible attribute relationship (customer-city) is one where members can move around during dimension updates, and
  • A rigid attribute relationship (month-year) is one where the member relationships are guaranteed to be fixed.
  • Processing a flexible attribute relationship is expensive: when a change is detected during processing of a flexible relationship, all indexes for partitions referencing the affected dimension (including the indexes for attributes that are not affected) must be invalidated. This is an expensive operation and may cause Process Update operations to take a very long time. Indexes invalidated by changes in flexible relationships must be rebuilt after a Process Update operation with a Process Index on the affected partitions; this adds even more time to cube processing.
  • Flexible relationships are the default setting. Carefully consider the advantages of rigid relationships and change the default where the design allows it.
  9. Consider Unnatural User Hierarchies
  • There are two types of user hierarchies: natural and unnatural.
  • Unnatural hierarchies have at least two consecutive levels that have no attribute relationship. They are commonly created for drill-down paths of commonly viewed attributes that do not follow any natural hierarchy; an example is a gender-education hierarchy.
  • Unnatural hierarchies are not materialized on disk, and the attributes participating in unnatural hierarchies are not automatically considered as aggregation candidates. Rather, they simply provide users with easy‐to‐use drill‐down paths for commonly viewed attributes that do not have natural relationships.
  • By assembling these attributes into hierarchies, you can also use a variety of MDX navigation functions to easily perform calculations like percent of parent.
  10. Consider Accessing Attributes via Member Properties Only
  • Disable the attribute's hierarchy by setting the AttributeHierarchyEnabled property to False if you only want to access an attribute as a member property.
    • Member properties provide a different mechanism to expose dimension information.
    • For a given attribute, member properties are automatically created for every direct attribute relationship.
    • For the primary key attribute, this means that every attribute that is directly related to the primary key is available as a member property of the primary key attribute.
  • After you verify that the correct relationship is in place, you can disable the attribute’s hierarchy.
  • Disabling the attribute hierarchy can improve performance and decrease cube size, because the attribute will no longer be indexed or aggregated. This can be especially useful for high-cardinality attributes that have a one-to-one relationship with the primary key.
  • High-cardinality attributes such as phone numbers and addresses typically do not require slice-and-dice analysis.
  • By disabling the hierarchies for these attributes and accessing them via member properties, you can save processing time and reduce cube size.
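To illustrate the ValueColumn point from item 1 above, here is a sketch of accessing a strongly typed value through the MemberValue MDX function. The dimension and attribute names are assumptions for illustration, not from the guide; it assumes a [Customer].[Yearly Income] attribute whose ValueColumn is numeric:

```mdx
-- MemberValue returns the strongly typed ValueColumn, usable in calculations
WITH MEMBER [Measures].[Income Value] AS
    [Customer].[Yearly Income].CurrentMember.MemberValue
SELECT
    [Measures].[Income Value] ON COLUMNS,
    [Customer].[Yearly Income].Members ON ROWS
FROM [Adventure Works];
```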


MDX #43–Find an MDX Calculation Script

November 12, 2014

Finding out MDX calculation scripts is a common task

A co-worker recently asked me what the calculation for a calculated measure is in our reporting cube.

If you have the Analysis Services project in Visual Studio locally, it is easy to find what the calculation script is from the Calculations tab in the cube designer.

But what if you don't have the VS project handy, and you only have access to the cube from SQL Server Management Studio?


Here is a simple script you can run to quickly get the calculation script.

This script queries the SSAS Dynamic Management View $SYSTEM.MDSCHEMA_MEASURES. The EXPRESSION field returns the actual calculation script. Note that you need to run DMV queries in the MDX query editor, not the SQL query editor.
Here is the result.
Here is the query in text.

where    MEASURE_UNIQUE_NAME = ‘[Measures].[Ratio to Parent Product]’
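Putting the pieces together, the query probably looked something like this sketch. The selected columns are an assumption; the WHERE clause is the one shown above:

```mdx
-- Run this in an MDX query window in SSMS, not a SQL window
SELECT MEASURE_NAME,
       MEASURE_UNIQUE_NAME,
       EXPRESSION
FROM   $SYSTEM.MDSCHEMA_MEASURES
WHERE  MEASURE_UNIQUE_NAME = '[Measures].[Ratio to Parent Product]'
```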

Use $SYSTEM.DBSCHEMA_COLUMNS to find all the columns in a DMV

You might ask how I know what all the columns in this view are. There is a DMV view, $SYSTEM.DBSCHEMA_COLUMNS, that you can query to find out all the columns in any DMV view.

Here is the result.
Here is the query in text.
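A sketch of such a query; the TABLE_NAME filter value assumes we are inspecting the MDSCHEMA_MEASURES rowset:

```mdx
-- Lists the columns exposed by a given DMV rowset
SELECT COLUMN_NAME,
       DATA_TYPE
FROM   $SYSTEM.DBSCHEMA_COLUMNS
WHERE  TABLE_NAME = 'MDSCHEMA_MEASURES'
```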


$SYSTEM.DISCOVER_SCHEMA_ROWSETS is the only DMV name you need to remember

You might also ask how I would know to use the view $SYSTEM.MDSCHEMA_MEASURES to find the calculation script.

The answer is to remember just one view, $SYSTEM.DISCOVER_SCHEMA_ROWSETS.

The $SYSTEM.DISCOVER_SCHEMA_ROWSETS view shows you all the DMV views that you can use to get metadata about your dimensions and cubes.
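Listing every available rowset is a one-liner:

```mdx
-- Returns one row per DMV, including its name and the restrictions it accepts
SELECT * FROM $SYSTEM.DISCOVER_SCHEMA_ROWSETS
```

From that list you can spot MDSCHEMA_MEASURES, and then use $SYSTEM.DBSCHEMA_COLUMNS to see which of its columns to query.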


DMVs can be very useful for documenting SSAS databases and monitoring usage and activity. To learn more about how to use these DMVs, check out our book MDX with SSAS 2012 Cookbook.

Categories: MDX Challenges, SSAS

Book Review: Why You Should Read Expert Cube Development With SSAS 2012 Cover to Cover

June 15, 2014

Expert Cube Development with SSAS 2012 Multidimensional Models was published earlier this year by Packt Publishing. It’s the second edition of the very successful book on SSAS cube development by three well-known industry leaders, Chris Webb, Alberto Ferrari and Marco Russo.

This book is not a tutorial book on using SSAS as a tool. It is more of a guided tour through the lifecycle of building an Analysis Services solution with an informed commentary telling you what to do, what not to do, and what to watch out for.

Reading this book cover to cover

If you are an SSAS cube developer, you will want to read this book cover to cover, no matter what level you are at, with the exception of absolute beginners who do not yet understand basic Analysis Services concepts, such as what a cube and a dimension are.

I bought the first edition a few years ago, but didn't read it cover to cover because at the time I didn't find some of the topics relevant to my work. Earlier this year I bought the second edition and found myself unable to put the book down. Before I knew it, I had read it cover to cover, with pages of notes in Microsoft OneNote. Knowing that my cube development skills could have progressed much faster, I wish I had read the book cover to cover a few years ago.

So don’t repeat the same mistake I made. Whether you already have the first edition or just bought the new 2012 edition, go ahead and start reading it now.

What I enjoyed about the book

I don’t wish to spoil your fun with the book, so I’ll just gloss over a few key points about the book.

  • Beginner developers might think that cube development is all about how to use SSAS as yet another tool. This book will change your mind. A big chunk of Chapter 1 focuses on data modeling for Analysis Services. The book then moves on to Chapter 2 to show you how to build basic dimensions and cubes. More complex dimension modeling is covered in Chapter 3, and data modeling for measures and measure groups in Chapter 4. What I enjoyed the most is how the book presents the challenges we all encounter in our day-to-day work and provides best practices for data modeling in the Analysis Services multidimensional model.
  • Microsoft Analysis Services is not a standalone technology; it's part of a family of technologies and disciplines that work together to make it possible for end users to do interactive data analysis, reporting, and visualization. From a developer's point of view, these technologies include the SQL Server engine, Reporting Services, and Analysis Services, with Integration Services in the middle as the glue. The disciplines include, but are not limited to, data warehouse data modeling, multidimensional modeling, and design and implementation for performance and a good user experience. I personally find that fitting all these techniques and disciplines together in the lifecycle of building an Analysis Services solution is not an easy task. Throughout the book the authors did a fantastic job of showing how each technique and discipline fits seamlessly into building high-performance cubes.
  • As a tool, Analysis Services is very easy to use; some might say too easy. Dimensions and cubes are built with various wizards, with properties already filled with default values. You can have a cube up and running in a matter of minutes. Some properties are for the cube's client tools to consume, but many are the cube's metadata and end up having some impact on processing performance, query performance, and/or storage engine performance. Assuming your cube has started its life with a good design, a good portion of a cube developer's job is to understand what those impacts are and to make informed trade-off decisions. This is a life-saving book that tells you what those properties mean, what to do with them, what not to do, and what to watch out for.
  • Bad cube query performance can be detrimental to your Analysis Services projects. The book devotes an entire chapter, Chapter 8, to query performance tuning. The concept of query performance tuning is very familiar to SQL Server developers, but cube query performance tuning methodology has its own twists and turns, such as the Formula Engine vs. the Storage Engine, partitions and aggregations, and tuning an algorithm in MDX. The book explains in detail what to do with each methodology, and even the right tools and scripts to use to get the job done correctly.
  • I also like the many links in the book to other very detailed white papers, such as “The Analysis Services 2008 R2 Performance Guide” and “The Many-to-Many Revolution”. Many blog posts are also referenced in the book, such as posts from Mosha Pasumansky, who was considered the most influential person in MDX.

No coverage of SSAS Tabular models

As you may know, as of SQL Server 2012, there are two versions of Analysis Services: Multidimensional and Tabular. Although both of them are called Analysis Services and can be used for much the same purposes, the development experience for the two is completely different.

I bought the first edition a few years ago. Although this is basically the same book as the first edition, I still went ahead and bought the 2012 edition because it has a new section on DAX query support in the SSAS 2012 multidimensional model. Don't get me wrong: this book only covers SSAS Multidimensional models. But it's nice to have a new section on how the SSAS 2012 multidimensional model supports not only MDX queries but also DAX queries.

No substantial changes in this second edition

Since there are no substantial changes in this second edition, it's probably not worth buying a copy if you already have the first edition. What is covered in the first edition should work perfectly fine in SSAS 2008 and 2012, and even in 2014. This is because Microsoft has not added anything substantially new to SSAS Multidimensional models since the 2008 version. But if you don't have the 2008 edition, I'd recommend buying this new 2012 edition, even if you are still working on cubes in SSAS 2008.

Not a book for absolute beginners

If you still need to learn basic Analysis Services concepts, such as what a cube and a dimension are, then this book is not the book for you. This book does not take the form of a basic tutorial, either.

Authors’ personal experience and thoughts are invaluable

Chris Webb, Alberto Ferrari, and Marco Russo are well-known in the SSAS and MDX community. This is an invaluable book because it contains their personal experience and thoughts. I visit Microsoft Books Online (BOL) very often, but if a book is solely derived from BOL then it is not very useful to me, as I can read BOL myself. I am putting this review on my blog, and I also plan to post it on Amazon and Barnes & Noble, hoping that all cube developers will read the book cover to cover.

Packt Publishing

Packt Publishing is one of my favorite tech book publishers. Their books focus on practicality, recognizing that readers are ultimately concerned with getting the job done. They also offer a subscription service, which I personally use. Good job putting out Expert Cube Development with SSAS 2012!


MDX+SSAS #37–Ignore unrelated dimension or not

February 7, 2014

In the Adventure Works cube (2008 or 2012), on the Dimension Usage tab, you will see that many intersections of measure groups and dimensions are blank. For a specific measure group, the dimensions with blank intersections are unrelated to that measure group. As a matter of fact, you will see blank intersections more often than not.

As the term unrelated dimension suggests, these measure groups have no relationship to these dimensions. SSAS, however, gives us a measure group property, IgnoreUnrelatedDimensions (true or false), to control how we want these measures to show for those unrelated dimension members.

IgnoreUnrelatedDimensions is a boolean property, so how hard can it be? It turns out that this property is not as straightforward as it seems, and it might not be as effective as you would think, either.

When IgnoreUnrelatedDimensions = True, the measure repeats for all members at all levels of the unrelated dimensions

IgnoreUnrelatedDimensions = True is the default setting for measure groups. The measure behavior is not the best (it is actually very confusing to users), but it is consistent and easy to remember.

When IgnoreUnrelatedDimensions = True,

  • the measure repeats with the same value for all members at all levels of any unrelated dimension,
  • regardless of whether the dimension has a default member or a root [All] member;
  • this behavior is the same as that of the VALIDMEASURE() function, which also repeats the measure with the same value for all members at all levels of any unrelated dimension.

Example 1:

  • Exchange Rates measure group is not related to the Promotion dimension
  • Exchange Rates measure group’s IgnoreUnrelatedDimensions property is set as default True
  • Promotion has no default member


    Example 2:

  • Exchange Rates measure group is not related to the Scenario dimension
  • Exchange Rates measure group’s IgnoreUnrelatedDimensions property is set as default True
  • Scenario has a default member of Actual


    Example 3:

  • Exchange Rates measure group is not related to the Sales Channel dimension
  • Exchange Rates measure group’s IgnoreUnrelatedDimensions property is set as default True
  • Sales Channel IsAggregatable is set to False; no root [All] member


When IgnoreUnrelatedDimensions = False, the measures behave differently depending on whether the unrelated dimension…

…has a default member or not, and has a root-level [All] member or not.

Example 4:

  • Sales Target measure group is related to the Date dimension, but only at the Calendar Quarter level
  • (I've disabled the calculation in the MDX script "Sales Quota Allocation", which allocates values down to the Month level)
  • The measure shows at the Quarter level and rolls up to all the ancestor levels.
  • The VALIDMEASURE() function overrides this behavior and forces the measure to repeat the same value for all months and days.

Example 5:

  • Sales Target measure group is not related to the Promotion dimension
  • The Promotion dimension has an [All Promotions] root-level member
  • The measure shows for the [All Promotions] member only.
  • The VALIDMEASURE() function overrides this behavior and forces the measure to repeat the same value for all members.


Both Example 4 and Example 5 produce an easy-to-remember result: the setting restricts the measure to the root member, the member the measure is linked to, and all the ancestor members from that point up.

However, Examples 6 and 7 are a bit confusing, because the setting IgnoreUnrelatedDimensions = False is totally ignored, just because the unrelated dimension

  • has a default member, or
  • has no root [All] member

Example 6:

  • Sales Target measure group is not related to the Scenario dimension
  • Scenario dimension has a default member Actual
  • The setting IgnoreUnrelatedDimensions = False is totally ignored as if IgnoreUnrelatedDimensions = True.
  • The VALIDMEASURE() function behaves the same as always and repeats the value for all members.


Example 7:

  • Sales Target measure group is not related to the Sales Channel dimension
  • Sales Channel IsAggregatable is set to False; no root [All] member
  • The setting IgnoreUnrelatedDimensions = False is totally ignored as if IgnoreUnrelatedDimensions = True.
  • The VALIDMEASURE() function behaves the same as always and repeats the value for all members.

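The VALIDMEASURE() behavior described in Examples 4 through 7 can be reproduced with a small query. A sketch, assuming the standard Adventure Works [Sales Amount Quota] measure from the Sales Target measure group (the calculated member name is illustrative):

```mdx
WITH MEMBER [Measures].[Quota Forced] AS
    -- Forces the measure to repeat across the unrelated Promotion dimension
    VALIDMEASURE([Measures].[Sales Amount Quota])
SELECT
    { [Measures].[Sales Amount Quota], [Measures].[Quota Forced] } ON COLUMNS,
    [Promotion].[Promotion].Members ON ROWS
FROM [Adventure Works];
```

Comparing the two columns side by side shows which cells the IgnoreUnrelatedDimensions setting actually affects.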

Reporting-Driven Decision – relate or not

I recently worked on a request to pull some measures on a dimension that is unrelated to the measure group.

Although the original cube design didn't link the measure to the dimension, I can see that the design might have missed this reporting requirement. I had a few choices to meet the new requirement:

  1. Set IgnoreUnrelatedDimensions = False.
  2. Create an MDX calculation to show the measure for the appropriate members of the unrelated dimension.
  3. Link the measure to the dimension in the relational data warehouse, not at the leaf-level members but at a middle level, similar to Example 4, where the Sales Target measure group is related to the Date dimension but only at the Calendar Quarter level.

After some experimenting, I abandoned both options 1 and 2 and opted for option 3.

This blog explains why I didn't choose option 1: the IgnoreUnrelatedDimensions setting for measure groups is not a reliable way to meet my reporting requirement, with so many moving parts.

Option 2 seemed very attractive to me at first. The reason I didn't go for it will be the topic of another blog post.

When SSAS is not enough

After running the above queries with different IgnoreUnrelatedDimensions settings for different scenarios, I recalled a chapter in our book, MDX with SSAS 2012 Cookbook, called “When MDX Is Not Enough”. Here I'd like to say that sometimes settings in SSAS can seem to provide simple solutions but are actually not enough.
