
Archive for December, 2011

SSRS #64 – Remove default values for parameters on report server

December 31, 2011

In my previous blog, SSRS & SSAS #63 – Stop SSRS 2008 MDX Query Designer from Overwriting Parameter Queries, I mentioned that you can manually remove the parameter default values in BIDS before you deploy your reports to your report server.

But what if you have already “infected” your report server with unwanted default values for report parameters? It has happened to me many times, so I thought I should share this little annoyance about default values for parameters on the report server.

Here is a screenshot from the report server showing the unwanted default values.

[screenshot]

Speaking of unwanted or unwelcome guests, these default values are exactly that: they will stay on the report server unless we take the time to remove them.

Simply uncheck the box “Has Default”.

[screenshot]

Categories: SSRS Expert

SSRS & SSAS #63 – Stop SSRS 2008 MDX Query Designer from Overwriting Parameter Queries

December 30, 2011

In a previous blog, SSRS & SSAS #61 – SSRS and SSAS Integration: how bad it is and our hope with two types of data providers, I complained about how bad the integration between Reporting Services and Analysis Services is.

Complaining by itself will not solve any issue, but it drove me to focus on the frustration and to research solutions or workarounds.

Issue – The MDX query designer in SSRS 2008 overwrites my parameter queries

When using Analysis Services as a data source, the query designer is a very attractive tool for assembling MDX queries. When I decide to use parameters, it even creates the parameter queries for me. All is good, except that I found my customized parameter queries got overwritten over and over again.

There are many reasons why I write customized parameter queries. Here are a few:

1. The least important reason is to format my MDX queries for better readability. If you have read machine-generated MDX queries, such as those produced by Excel, ProClarity, or SSRS, you would not blame me for spending some extra time on readability.

2. The auto-generated parameter queries typically retrieve the top-level ALL member of the hierarchy along with its child members. I personally do not like this behavior, so I often modify the parameter queries to exclude the ALL member (see the sketch after this list).

3. To implement cascading parameters, the auto-generated parameter queries must be modified.     
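To make the second point concrete, here is a minimal sketch of a hand-customized parameter query that returns only the members on a single level and therefore skips the ALL member. The cube, hierarchy, and measure names below come from the Adventure Works sample and are purely illustrative:

WITH
  MEMBER [Measures].[ParameterCaption] AS [Product].[Category].CURRENTMEMBER.MEMBER_CAPTION
  MEMBER [Measures].[ParameterValue]   AS [Product].[Category].CURRENTMEMBER.UNIQUENAME
SELECT
  { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
  -- [Category].[Category].ALLMEMBERS returns only the members on the Category level,
  -- so the [All] member never shows up in the parameter drop-down
  [Product].[Category].[Category].ALLMEMBERS ON ROWS
FROM [Adventure Works]

The auto-generated version typically uses [Product].[Category].ALLMEMBERS instead, which puts the All member at the top of the parameter list.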

Unfortunately, if I open the main data set again in the MDX query designer, all the customized parameter queries will be overwritten. 

A Workaround – Add a tag SuppressAutoUpdate = true to the RDL file

There are very few books on the integration between SSRS and SSAS, and even fewer resources on solutions to the issues we face. Thanks to some dedicated bloggers, I found the following article online regarding this annoying issue:

This link, MDX Query Designer Overwrites Parameter Queries, is actually a bug report on Microsoft Connect. It seems that this issue, new in SSRS 2008, has never been fixed. I could be wrong about this; I would appreciate it if anybody could let me know whether it has been fixed in a later release of SSRS 2008.

It turns out that the workaround to preserve my changes to the parameter queries is to modify the report’s RDL source code, adding <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate> between the <Query> and </Query> tags:

<Query>
   …
   <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate>
</Query>

Here are some simple steps to accomplish this:

1. Right-click the report in BIDS 2008 and select View Code.

[screenshot]

2. Search for the string </DataSet> in the RDL/XML file.

Then add <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate> just before the closing </Query> tag.

[screenshot]

3. Repeat step 2 for every data set (or only the data sets you do not want to be updated).

Default values for parameters still need to be manually removed

The above workaround prevents SSRS 2008 from overwriting my parameter queries, but it still leaves the default values on my parameters.

[screenshot]

This is actually not a bad thing for testing purposes, because it saves me a lot of mouse clicking to select parameters. When I am ready to deploy my report to the report server, I can just open the properties window for each parameter and change it to No default value on the Default Values tab.

[screenshot]

So I guess I will keep complaining about SSRS-SSAS-MDX till I can find either workarounds or solutions.

Categories: SSRS Expert

SSRS & SSAS #62 – Create a data source on Report Server for an Analysis Services database

December 30, 2011

The team I recently moved into had never had a data source created on their report server for an Analysis Services source.

A co-worker helped me get a data source created with Microsoft SQL Server Analysis Services as the data source type.

For the connection credentials, we correctly selected “Credentials stored securely in the report server” and entered the right user name and password.

Our initial attempt to use the newly created data source failed, however. After fixing two issues, the data source configuration finally worked. The final configuration is shown below.

[screenshot]

Since I don’t do this kind of task every day, I thought it would be a good idea to write down how we fixed the issues.

1. The user name for the connection should be part of the server role

This is the error message from the report server, before we made sure that the user name was part of the server role on the Analysis Services instance.

[screenshot]

In SSMS, right-click the Analysis Services server instance and select Properties. On the Security tab, make sure that the user name has been added. I have some doubt about making the user part of the server administrators; it seems like too much privilege to me.

[screenshot]

2. Check "Use as Windows credentials when connecting to the data source”

Before we checked this option, here is the error we got:

[screenshot]

I guess this has to do with the fact that we can only connect to an Analysis Services (and also an Integration Services) instance via Windows Authentication.

(Note: in the screenshot above, ignore the red warning about the invalid password. I changed the user name to a fake one for this blog.)

Categories: SSRS Expert

SSIS #105 – Loop through rows in a SQL table (with multiple columns)

December 23, 2011

I am a SQL purist, meaning that I have vowed never to use cursors in T-SQL to loop through rows record by record.

In SSIS package design, however, I have used the Foreach Loop Container very often. I would not say that I have broken my vow, though, for the following reasons:

1.  The record set I loop through is usually relatively small.

2. The Foreach Loop Container in Integration Services consumes a variable of type System.Object (not the physical SQL table), an in-memory object that is native to Integration Services, so I trust that Microsoft's engineers have done a wonderful job in terms of performance.

In Integration Services, I have used two ways to populate this System.Object variable for the Foreach Loop Container to consume.

One way is to use an Execute SQL Task. I have blogged about this way in the past.

Here are two screenshots that show how to configure the Execute SQL Task to receive the query result into a variable with a data type of System.Object.

[screenshot]

[screenshot]

Although you can map the query result to more than one System.Object variable, that would not work for the problem I am facing now.

Challenge

1. Here is a pseudo query that retrieves a collection of new queue names (QueueName) and their parent IDs (a hypothetical sketch of such a query appears below).

[screenshot]

2. I want to take both the QueueName and the QueueParentId, pass them through a Foreach Loop Container to another task, and use them there as query parameters.

For the second challenge, where more than one column must be split out of a single System.Object, I cannot use an Execute SQL Task with a Full result set configuration.
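For reference, here is a minimal hypothetical sketch of what such a source query could look like. The table and column names (dbo.SourceQueue, dbo.DimQueue) are made-up stand-ins for the real query shown in the screenshot above:

-- Hypothetical query: return the new queue names and their parent IDs
-- that do not yet exist in the Queue dimension.
SELECT  s.QueueName,
        s.QueueParentID
FROM    dbo.SourceQueue AS s
WHERE   NOT EXISTS (SELECT 1
                    FROM   dbo.DimQueue AS d
                    WHERE  d.QueueName = s.QueueName);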

Recordset Destination comes to the rescue

 

[screenshot]

The ADO recordset fits my problem perfectly.

A Design Pattern (Loop through rows with multiple columns) with Recordset Destination and Foreach Loop Container

1. First, I need three package-level variables. vLoopSet is the System.Object variable that will store the ADO recordset. The other two variables, one per column in the recordset, will receive the column values for the current row.

[screenshot]

2. Here is the Control Flow I will need. The first Data Flow will run some query against my data source and send the results to a Recordset Destination.

[screenshot]

3. Here is the Data Flow.

[screenshot]

Configuring the Recordset Destination is simple enough. The interface on top is mostly useless; the only thing you need to set is the VariableName.

[screenshot]

You also need to select the Output columns.

[screenshot]

4. For the Foreach Loop Container, select the Foreach ADO Enumerator, choose User::vLoopSet as the ADO object source variable, and select Rows in the first table.

[screenshot]

For the variable mappings, use the zero-based index to map the columns to the variables: index 0 is the first column in the query, and index 1 is the second column.

[screenshot]

5. Finally, we are ready to use User::vQueueName and User::vQueueParentID as dynamic parameters for the task inside the Foreach Loop Container.

[screenshot]

Here we are using the famous question mark again to represent our dynamic parameters. Note that the ? is position sensitive: the first ? represents the first variable from the mapping above, and the second ? represents the second variable in the mapping from the Foreach Loop Container configuration.
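As an illustration only (the table name dbo.DimQueue is made up), the parameterized statement inside an Execute SQL Task in the loop might look like the sketch below, with the first ? mapped to User::vQueueName and the second ? mapped to User::vQueueParentID on the task's Parameter Mapping page:

-- Hypothetical statement executed once per row of the recordset.
-- Parameter 0 maps to User::vQueueName, parameter 1 maps to User::vQueueParentID.
INSERT INTO dbo.DimQueue (QueueName, QueueParentID)
VALUES (?, ?);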

I hope that after reading this post, you will rush off to open your own SSIS packages and find opportunities to use this Loop Through Rows with Multiple Columns pattern.

Categories: SSIS Best Practices

SSIS #104 – When Row Count Transformation is invaluable

December 21, 2011

The Row Count transformation is a pretty straightforward transformation. It does the following two things:

1. It counts rows as data passes through a data flow, and

2. it stores the final count in a variable. (Note that the variable is only populated when the data flow is finished, not while the data flow is still in the middle of transferring data.)

[screenshot]

Here is a data flow design pattern I use very often.

[screenshot]

At the end of the data flow, my variable @vNewQueueCount is populated with the row count from the data source.

Then I use this variable @vNewQueueCount in a Precedence Constraint expression, as shown below, to decide conditionally whether I need to process my Queue dimension or not.

[screenshot]

Configuring the Row Count transformation is simple enough. I'll show you how later in this post.

What I want to say first is why the Row Count transformation is invaluable in this specific design pattern.

In this design, I've decided to use the Recordset Destination, which saves data in memory in an ADO recordset stored in an Integration Services package variable of the Object data type. In my next post, I'll show you how to use a Foreach Loop container with the Foreach ADO enumerator to process the recordset one row at a time.

The Recordset Destination is a handy component to use when you do not need (or want) to persist your data. One drawback of using the in-memory ADO recordset is that Integration Services does not provide any out-of-the-box method to get the row count of the recordset. This is where the Row Count transformation becomes invaluable to me. Without too much work on my side, I am able to accomplish the following:

1. Retrieve data from the source and store it in an ADO recordset.

2. Also store the row count in a variable.

3. Finally, be able to use the row count variable to conditionally decide whether I need to process my dimension or not.

Here are a few simple steps to configure the Row Count transformation.

1. Define a variable vNewQueueCount.

[screenshot]

2. Create a new data flow.

[screenshot]

3. Drag the Row Count transformation onto the data flow design surface. In the Advanced Editor, for VariableName select User::vNewQueueCount.

[screenshot]

Finally, you can use the variable in a precedence constraint expression to do conditional processing:

@vNewQueueCount > 0

[screenshot]

Have fun with the Row Count transformation!

Categories: SSIS Best Practices

SSRS & SSAS #61 – SSRS and SSAS Integration: how bad it is and our hope with two types of data providers

December 19, 2011

A few months ago, I wrote a post, SSAS #21 – Steps to create a SSRS report with MDX accessing a Cube. If you have worked on creating SSRS reports against cubes for a little while, you already know that my post doesn't really help you much in terms of solving real problems in your work.

The integration between Reporting Services and Analysis Services is a never-ending source of pain and frustration for many BI developers. Before I talk about the pain and frustration I've experienced, I want to give you some hope first by showing you the two types of data providers we can use in SSRS to access cubes.

Data Provider 1 – Microsoft SQL Server Analysis Services

Many blogs, including my post above, show you the steps to create SSRS reports accessing cubes in an Analysis Services database without telling you a very important piece of information: they are using Microsoft SQL Server Analysis Services as the data provider for the integration between SSRS and SSAS.

[screenshot]

This default provider choice can be very beneficial to you, or it can create frustration, depending on the nature of your project. If your cubes are designed well and your data navigation flows naturally the way you want, the drag-and-drop nature of this type of integration can be very beneficial. The well-liked SSAS query designer can be a life-saver for developers who are not yet experienced in writing MDX queries by hand. On the other hand, when you start to experience serious performance issues with your SSRS reports, you will start to struggle with the unoptimized MDX queries that the integration has generated for you.

1) This is because this type of direct integration between SSRS and SSAS assumes that CROSSJOIN works for all data. Even without much experience in MDX, we all know that CROSSJOIN is absolutely not the best candidate for optimal performance in either T-SQL or MDX (a sketch of the kind of query the designer generates appears after the screenshot below).

2) It also assumes that we need help creating MDX queries for every report parameter. Any customized MDX queries you manage to write for your report parameters will be overwritten without any notification or consent. It also assumes that we have no interest in seeing the datasets it has created for our parameters, so it hides them from us. This can create serious performance problems when many unused, hidden datasets are left in the report.

[screenshot]
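To make point 1) concrete, here is a rough sketch of the kind of MDX the query designer typically generates for a parameterized dataset. The cube, hierarchy, and measure names come from the Adventure Works sample and are purely illustrative:

SELECT NON EMPTY { [Measures].[Internet Sales Amount] } ON COLUMNS,
       NON EMPTY { [Date].[Calendar Year].[Calendar Year].ALLMEMBERS *
                   [Product].[Category].[Category].ALLMEMBERS }
       DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT ( STRTOSET(@ProductCategory, CONSTRAINED) ) ON COLUMNS
       FROM [Adventure Works] )
CELL PROPERTIES VALUE, FORMAT_STRING, FORMATTED_VALUE

The * between the two ALLMEMBERS sets is the CROSSJOIN in question: every attribute you drag onto the grid is simply crossjoined with the others on the ROWS axis, whether or not that combination performs well on your data.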

Couple the above two “helps” from this type of direct integration with serious performance issues in your reports, and there is nothing but pain and frustration.

(To be fair, this type of direct integration provides lots of benefits too. This post intends to focus on the pain and frustration, so I’ll ignore the benefits for now.)

Data Provider 2 – Microsoft OLE DB Provider for Analysis Services (9.0 for AS 2005 or 10.0 for AS 2008)

The OLE DB provider is not exactly better. What interests me is that now I can apply a little more customization and optimization to the MDX queries. With more customization, my hope for performance improvement (without any modifications to the many shared/existing dimensions in the OLAP database) has increased a notch (if not soared).

[screenshot]

[screenshot]

Categories: SSRS Expert

SSIS #103 – Learn SSIS NOW

December 11, 2011

When I wrote the blog SSIS # 102 – Don’t be afraid to leverage TSQL in SSIS package, it was purely “inspired” by an ETL design I was working on during the day, and also an SSIS training piece I was reading from Pragmatic Works.

However, I was pleasantly surprised by some comments about my blog from the readers.

Here is one:

No… I’m not badmouthing SSIS. I’m looking for someone to give me a good honest answer to give me incentive to spend valuable time learning to do something a different way when I already know of a good way to accomplish ETL for millions of rows per batch in a very high performance manner.

Here is another one:

I’m afraid this may become a religious issue. When there are a number of ways to do things, some better in some situations, others better in other situations, it can become a messy conversation.

I can tell that both comments come from readers who already have a significant amount of experience in their profession.

I have been thinking about these two comments for a few days now.

These two comments seem to be unrelated at first glance.

The first reader seems to need some more persuasion before he (or she) will willingly spend time learning SSIS (and other BI tools).

The second reader is reminding me of the sometimes contentious discussion among developers: the all-SSIS solution vs. the all-T-SQL solution vs. the mix. I used to work on such a development team. The lesson I learned from that experience is that it's the end result that matters. I was able to deliver an ETL solution within the deadline with a mixed approach of SSIS and T-SQL (meaning stored procedures). Another developer was also able to deliver his portion of the ETL solution with 100% hand-coded T-SQL. However, the developer who insisted on an SSIS-only solution failed to deliver. But that failure is totally unrelated to our discussion of SSIS vs. T-SQL; it happened because the developer totally underestimated the complexity of the data.

You might wonder where I am heading with this post. What I really want to do is convince the first reader to start learning SSIS NOW.

I don't want to go into the technical aspects of SSIS vs. T-SQL; there are plenty of resources out there that do an excellent job explaining what SSIS can do. What I want to say is totally non-SQL and non-technical, and also very simple: the world is evolving. I was an application programmer, then a database administrator, then a SQL developer, then a SQL developer with BI skills. I was forced to evolve along the way, because as an independent contractor I had no single stable job I could go to every day, every month, or every year.

The second reason for the first reader to start learning SSIS (and other BI tools) NOW is the point the second reader is making: the more skills you have in your toolkit, the better your chances of succeeding in your career. Forget about the religious issue and the messy conversation; we need to focus on enhancing our skills and collecting valuable tools for our toolkit along the way.

I will be very happy if the first reader one day decides to join the Microsoft BI bloggers…

Categories: SSIS Best Practices