OLEDB is the native client for SQL Server, and every SSIS developer is familiar with it. Almost every task in Integration Services can use an OLEDB connection. As a matter of fact, it is the preferred client to use when your backend is SQL Server.
However, there is one exception: the Transfer SQL Server Objects Task, which accepts only an SMO Server connection.
If you don’t know this before you use the transfer task, it can cause confusion during both development and deployment.
Here are my lessons learned.
- Visual differences between the OLEDB and the SMO Server connections in Integration Services 2012 help
In SSIS 2012, I am happy to see the visual differences between the two connection types. If you are still stuck in 2008, you will not see any visual differences.
- Name your SMO Server connections properly
A good practice is to name the SMO Server connection descriptively. I have used names such as:
- SMOServer_Staging (this could be the source server for the transfer task)
- SMOServer_Reporting (this could be the destination server for the transfer task)
- Rename the SMO Server connection properly
In SSIS 2012, you have three different places to create connections.
- At project level: connections are available to all packages in the project
- At package level: available only to the package in which the connection is created
- At task level: also available only to the package in which the connection is created
The best practice is to create connections at the project level.
However, no matter where you create the connections, the SSIS UI always prefixes the connection name with the server name:
- For OLEDB – servername.databasename: the best practice here is usually to remove the server name, leaving just the database name.
- For SMO Server – servername: since we cannot simply remove the server name, the best practice here is to rename the connection as suggested in the naming section above.
- Confusion during deployment over where to switch to the production/test server
In 2008, during deployment, I’ve seen DBAs do the following:
- Create new connections: because of the confusion about OLEDB vs. SMO Server connections, and the even bigger confusion if your connection names contain your development server name, DBAs would most likely choose to create new connections.
- Rename the connections: again, if you have not named your SMO connections properly, they will tend to rename them.
Neither of the above practices is good. If you as a developer have done it properly, DBAs will not need to mess with the connections much:
- Name your SMO Server connections properly so they can tell them apart from the OLEDB connections.
- Keep your development server name out of the connection names so they don’t need to rename anything.
- In 2012, the deployment is a lot easier
In 2012, deployment is much more straightforward because
- Connections can be created at the project level: there is only one place to manage the connections, including the connection name, server name, database name, and a bunch of other connection properties
- Of course, you can take even greater advantage of features such as project-level parameters if you have chosen to deploy packages to the SSIS Catalog
More about SMO connection manager
Quote from the MSDN article:
An SMO connection manager enables a package to connect to a SQL Server Management Objects (SMO) server. The transfer tasks that SQL Server Integration Services includes use an SMO connection manager. For example, the Transfer Logins task that transfers SQL Server logins uses an SMO connection manager.
When you add an SMO connection manager to a package, Integration Services creates a connection manager that will resolve to an SMO connection at run time, sets the connection manager properties, and adds the connection manager to the Connections collection on the package. The ConnectionManagerType property of the connection manager is set to SMOServer.
You can configure an SMO connection manager in the following ways:
- Specify the name of a server on which SQL Server is installed.
- Select the authentication mode for connecting to the server.
More about Transfer SQL Server Objects task
Besides the data flow, the Transfer SQL Server Objects task is another task you can use to transfer not just data but also other types of objects between instances of SQL Server. For more info, check out this MSDN article.
MSOLAP is a native OLE DB provider for Analysis Services
A common task during ETL is to process the OLAP dimensions and the cube after all the dimension and fact tables are loaded into the data warehouse. Integration Services provides two special control flow tasks for this purpose, the Analysis Services Processing Task and the Analysis Services Execute DDL Task.
Before you can use either of these two tasks, you will need to create an SSIS Connection Manager that knows where the Analysis Services server is. The Connection Manager is just the name of the GUI in Integration Services. Behind the fancy GUI, it’s just a connection string that uses the native data provider, the Analysis Services OLE DB provider (MSOLAP DLL for short).
Location in the connection string causes “connection cannot be made”
In the above screenshot, I provided not only the Server name but also the Location, which I set to the same value as the server. Testing the connection in the GUI was successful.
After the GUI steps are completed, the final connection string looks like this:
Provider=MSOLAP.5;Data Source=localhost\MULTI2012;Integrated Security=SSPI;Initial Catalog=SSAS2012DEMO;Location=localhost\MULTI2012
The control flow task Analysis Services Execute DDL Task is usually the more flexible choice for processing Analysis Services objects than the Processing Task. But to make sure that my connection string was correct, I used the Analysis Services Processing Task, which is the GUI version of the DDL task, to test the connection string. I ran into this “connection cannot be made” error.
It turned out the problem was the Location. Removing Location from the connection string solved the connection issue.
As a matter of fact, the shortest connection string that can pass the connection test is just this:
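In other words, presumably just the provider and the server, along these lines (derived from the full connection string above by dropping the optional properties, not a string verified against the original post):

```text
Provider=MSOLAP.5;Data Source=localhost\MULTI2012
```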
No explanation for Location in connection strings for MSOLAP and ADOMD.NET provider
I checked the connection string documentation in the MSDN library for both MSOLAP and ADOMD.NET, and couldn’t find much information about what Location is supposed to be.
In the first link, for the ADOMD.NET provider, it simply states:
Location: Sets the name of the server.
In the second link, for the MSOLAP provider, I couldn’t find any mention of Location at all.
Three types of data providers for Analysis Services
There are really three types of data providers for Analysis Services.
Analysis Services provides three data providers for server and data access. All applications connecting to Analysis Services do so using one of these providers. Two of the providers, ADOMD.NET and Analysis Services Management Objects (AMO), are managed data providers. The Analysis Services OLE DB provider (MSOLAP DLL) is a native data provider.
More ways to access Analysis Services in SSIS than the DDL Task and the Processing Task
Check out this book if you want to know about more ways than the DDL Task and the Processing Task to access Analysis Services in SSIS.
Quote from the book:
* Using the control flow object Analysis Services Processing Task is the most straightforward approach, and is discussed in the next section. This allows any SSAS objects (that involve processing) and their processing method to be selected. A few drawbacks are listed in the next section.
* Although the Analysis Services Execute DDL Task may sound as if it only allows objects to be modified, created, or deleted, its functionality goes way beyond Data Definition Language (DDL). This task runs an SSAS XMLA script. XMLA includes the capability not just to run DDL, but also to query and process. Therefore, this task is very useful for SSAS processing, because the XMLA can be modified in an SSIS package before it is executed.
* Another approach involves using the Execute Process Task to call the ASCMD executable that comes with SSAS. This executable can run MDX, Data Mining Expressions (DMX), and XMLA, and also contains some command-line switches to make your processing more dynamic.
* The data flow in SSIS includes two destinations: the Dimension Processing destination and the Partition Processing destination. These allow data directly from the pipeline to be pushed into either an SSAS dimension or SSAS partition, respectively. Unlike all the other SSIS approaches discussed here, this approach is the only one where data is pushed directly to SSAS. The other approaches essentially tell SSAS to start pulling the data from a data source. Like the other SSIS approaches, this approach is described later in this chapter.
* Finally, the programming API for SSAS, called Analysis Management Objects (AMO), allows the interface into the SSAS object model and can leverage the processing methods to process any of the SSAS objects described. To use AMO for processing, you must leverage either the Script Task in the control flow, or the Script Component in the data flow.
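To give a feel for the XMLA approach: a processing script that the Execute DDL Task could run might look like the sketch below. The database ID reuses the SSAS2012DEMO catalog mentioned earlier in this post; the cube ID is made up for illustration.

```xml
<!-- Sketch of an XMLA processing command; CubeID is a hypothetical name -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>SSAS2012DEMO</DatabaseID>
      <CubeID>SalesCube</CubeID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```

Because the DDL Task accepts the script as a string (directly, from a file, or from a variable), an SSIS package can modify it at run time, which is exactly the flexibility the quote above describes.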
A reader asked me to share a C# script I used in the post SSIS #114 – How many ways can you do INSERT-UPDATE?.
The code snippet determines whether the original value of MyFirstColumn differs from the incoming data in the pipeline. It is used in the UPDATE-DELTA design described last.
The Script Component needs some configuration. Without the proper configuration, the above code will not work.
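Here is a sketch of what such a Script Component transformation typically looks like. The original snippet is not reproduced here, so the column and output names (MyFirstColumn, Original_MyFirstColumn, UpdateRow, IgnoreRow) are assumptions based on the description in this post; the code compiles only inside the SSIS-generated script project, where Input0Buffer and the Direct… methods are generated from the configured inputs and outputs.

```csharp
// SSIS Script Component (transformation) sketch.
// Requires two configured asynchronous-style outputs named UpdateRow and
// IgnoreRow; column/output names are hypothetical.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Compare the looked-up original value with the incoming pipeline value.
    // The _IsNull checks guard against NULLs from the lookup.
    bool changed =
        Row.Original_MyFirstColumn_IsNull != Row.MyFirstColumn_IsNull ||
        (!Row.MyFirstColumn_IsNull &&
         Row.Original_MyFirstColumn != Row.MyFirstColumn);

    if (changed)
    {
        // Only genuinely changed rows go down the UpdateRow path.
        Row.DirectRowToUpdateRow();
    }
    else
    {
        // Unchanged rows are diverted and effectively ignored.
        Row.DirectRowToIgnoreRow();
    }
}
```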
For the purpose of this blog, my answer is 4.
I’ve always wanted to do a benchmark comparison so I can proudly say that my “normal” way is the best in terms of run time.
- STAGE-TSQL: Use the data flow to bring the raw data into staging, and do the INSERT-UPDATE in TSQL. This is my “normal” way, and the best way according to my testing. The name STAGE-TSQL implies: 1) two steps are involved, 2) raw data is staged first, 3) the INSERT-UPDATE is done in TSQL only.
- UPDATE-ALL: Only one step is involved. INSERT-UPDATE is done in one data flow step. 1) Transformation OLE DB Command is used for UPDATE, 2) Destination OLE DB Destination is used for INSERT.
- UPDATE-STAGE: Two steps are involved. 1) INSERT is done in the data flow step, 2) but the matching rows are saved to a staging table, and UPDATE is done in TSQL using the matching rows.
- UPDATE-DELTA: similar to number 2, UPDATE-ALL. Instead of directly sending all matching rows to the OLE DB Command transformation for UPDATE, a Script Component transformation is used to determine which rows have actually changed. Data is sent to the OLE DB Command transformation only if there are changes in the matching rows.
Here is the Control Flow.
There is nothing exciting about the Data Flow. All it does is to bring the source data into a staging table.
The TARGET_TSQL task does all the wonderful work of INSERT-UPDATE. STAGE-TSQL is the winner with a run time of only 3 seconds and 885 milliseconds.
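A sketch of what such a TSQL INSERT-UPDATE typically looks like, assuming a business key for matching. Only MyFirstColumn comes from this post; the table and key names are made up for illustration.

```sql
-- Sketch of the set-based INSERT-UPDATE; table/column names are hypothetical.
-- Update rows that already exist in the target.
UPDATE t
SET    t.MyFirstColumn = s.MyFirstColumn
FROM   dbo.TargetTable  AS t
JOIN   dbo.StagingTable AS s
       ON s.BusinessKey = t.BusinessKey;

-- Insert rows that do not yet exist in the target.
INSERT INTO dbo.TargetTable (BusinessKey, MyFirstColumn)
SELECT s.BusinessKey, s.MyFirstColumn
FROM   dbo.StagingTable AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.TargetTable AS t
                   WHERE t.BusinessKey = s.BusinessKey);
```

Because both statements are set-based, they run once over the whole staging table, which is why this approach beats the row-by-row alternatives below.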
The Control Flow is very simple. Only one Data flow is used. No truncating staging tables because no staging tables are used.
But the Data Flow step has two more transformation components: Lookup and OLE DB Command. The Lookup transformation is used to split outputs into No Match Output and Match Output. No Match Output are the rows that will be inserted, and Match Output are the rows that will be updated.
This seems to be an elegant solution, but the run time is the worst: almost 3 minutes, compared to only about 4 seconds for STAGE-TSQL.
The OLE DB Command transformation is to blame for the poor performance here: it executes its UPDATE statement once for every row in the pipeline.
Since I blamed the OLE DB Command transformation for the poor performance, I am going to swap it for a staging table and finish the UPDATE in a TSQL task.
In the Data Flow step, matching rows are no longer sent to the OLE DB Command transformation; instead, they are saved into a staging table. The actual UPDATE is done in the TSQL task TARGET_MyChangingTable_UPD.
The run time has improved from almost 3 minutes to only 8 seconds and 291 milliseconds.
Here is another way to improve that 3 minutes run time.
The Control Flow looks the same, but I’ve added a Script Component (SC) transformation. The SC is used to create two outputs. Only the rows with changed data are sent down the UpdateRow path; the other rows are sent to a Row Sampling transformation and ignored. Adding the SC has indeed improved the run time from 3 minutes to only 7 seconds and 494 milliseconds.
STAGE-TSQL is the winner!
It’s a relief knowing that our skills in TSQL are serving us well.
- My testing involved only INSERT-UPDATE. No DELETE is done in the data flow or in TSQL.
- I’ve made sure that all four runs are against the same Source data and Target data so that the run time comparison is valid.
- I don’t think the data size matters much in my testing. No data size is given.
- Using staging tables means more disk space is needed. Most teams would prefer faster run time over disk space for their ETL processes.
I worked on an SSIS package a few months back to retrieve data from an SSAS cube with MDX queries. A co-worker was recently developing an SSIS package to perform a similar task, but couldn’t get it to work. The data flow was stuck somewhere and the package could never finish loading the data.
This is what it will look like in the Data Flow task. The OLE DB Source will stay yellow forever.
I couldn’t remember right away how my SSIS package was configured to make the data flow work. I had to find my old SSIS package and open it to refresh my memory.
Need to set Format=Tabular in the Data Link Extended Properties
After choosing the OLE DB Provider for Analysis Services 10.0 as the provider for the connection manager, we need to go to the All tab to set Format=Tabular in the Data Link Extended Properties.
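With that property set, the resulting connection string gains an Extended Properties entry, along these lines (the provider version, server, and catalog names here are placeholders, reusing names from earlier in this post):

```text
Provider=MSOLAP.4;Data Source=localhost\MULTI2012;Integrated Security=SSPI;Initial Catalog=SSAS2012DEMO;Extended Properties="Format=Tabular"
```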
There are many blogs on this little “secret” that developers discovered. Some claim that it is a fix for a bug in SSIS 2005 SP1. I had to do this in both SSIS 2005 and 2008 to make it work, so I am not sure whether it is caused by a bug or is a designed “feature” in SSIS.
Neither can I find more information about what this property value means. You are welcome to share if you have more insight on this.
The happy green color is what you will see once you set Format=Tabular in the Data Link Extended Properties.
A couple of notes:
1. About the data type: whenever you try to access the column information, you will get a “nice” warning from SSIS that data type DT_WSTR (nvarchar) will be used instead. Again, if you have more insight on this, you are welcome to share.
2. I used a Data Conversion transformation to convert the DT_WSTR to DT_STR or DT_NUMERIC accordingly.
There is no IIF statement in SSIS scripting functions
If you are looking for IIF() for SSIS expressions, you’ll be disappointed.
Most of us are familiar with the IIF statement in SSRS expressions (or in many other scripting languages).
But in SSIS, the Program Flow functions are unmistakably missing. Look at all the function categories in the Expression Builder in SSIS: Program Flow is not one of them.
We can achieve IIF() using ( Condition? Value_when_true : Value_when_false)
There is a very good explanation for the missing IIF statement in SSIS. Before I go too far on this topic, I want to give you the good news first: Integration Services does give us a tool to accomplish the same thing as the IIF statement, only in a different disguise.
Here is what we can use:
( Condition? Value_when_true : Value_when_false)
You can write any acceptable expression in the Condition part, but it only makes sense for the expression to include at least one variable, in order to achieve the dynamic behavior you set out to get from IIF in the first place.
The Value_when_true and Value_when_false parts can obviously include variables too.
Here is an example. For a user variable varSourceServerPrefix, I want to set it to the alias name of a linked server in our development environment, but set it to blank on the production server.
( @[User::varProduction] == 0? @[User::varLinkedServer] : "")
Processing flows are largely controlled by the Precedence Constraints
There is an even better reason why Integration Services is missing the entire category of Program Flow functions.
In Integration Services, processing flows are largely controlled by the Precedence Constraints between tasks. These Precedence Constraints are in turn controlled by conditions expressed with variables, and variables in Integration Services can be manipulated by Script Tasks.
For examples of how variables can be manipulated by Script Tasks, and examples of Precedence Constraints, see:
- SSIS #109 – Wait for data with For Loop Container
- SSIS #96 – Everybody Needs Integration Services Expression Reference
Note: the parentheses are not strictly required; I don’t think they are required even in nested conditional expressions. It certainly doesn’t hurt to keep them in nested conditional expressions for better readability.
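For instance, a nested conditional expression (the variable names here are made up) could look like this, with the inner parentheses kept purely for readability:

```text
@[User::varEnv] == "PROD" ? "" : (@[User::varEnv] == "TEST" ? @[User::varTestServer] : @[User::varDevServer])
```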
Here is the MSDN link on this: http://msdn.microsoft.com/en-us/library/ms141680.aspx