Creating Tasks in the DAC for New or Modified Informatica Workflows

You need to perform this step for all new workflows you create in Informatica and for all workflows that you modify.
To create a task in the DAC for new or modified Informatica workflows
  1. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  2. In the DAC, create custom logical and physical task folders for the custom folder you created in the Informatica repository.
    1. In the DAC, navigate to Tools, then select Seed Data, then select Task Folders.
    2. To create a custom logical folder, click New.
    3. In the Name field, enter a name for the custom logical folder, for example, Custom Logical.
    4. In the Type field, select Logical.
    5. To create a custom physical folder, click New.
    6. In the Name field, enter a name for the custom physical folder, for example, Custom Physical.
    7. In the Type field, select Physical.
  3. Register the folders you created in Step 2 in the Source System Folders tab.
    1. Navigate to Design, then select Source System Folders.
    2. Click New.
    3. In the Edit child tab, enter the name of the custom logical folder in the Logical Folder field.
    4. Enter the name of the custom physical folder in the Physical Folder field, and click Save.
  4. Create new tasks for the workflows.
    1. Navigate to Design, then select Tasks, and click New in the top pane toolbar.
    2. In the Edit child tab, enter the workflow name as it appears in Informatica Workflow Manager.
    3. Right-click and select Synchronize Tasks.
    4. Select Selected Record Only, and click OK. Click OK in the informational message box.
      This command synchronizes the source and target table information between the DAC and Informatica.
    5. In the Tasks tab, enter the remaining information required for the task.

DAC: Customizing the Data Warehouse

You can add tables, columns, and indexes to the data warehouse, and you can modify these existing objects. Customizing the data warehouse in this way requires using DAC and Informatica client tools. For more information about using Informatica client tools to customize the Oracle Business Analytics Warehouse, see the Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.
Figure 8-1 shows the major steps required for adding a new object to the data warehouse or modifying existing objects. As shown in Figure 8-1, you can begin the customization process by adding or modifying the new data warehouse object in DAC and then using the DAC's Data Warehouse Configurator to create or update the object in the data warehouse. Alternatively, you can add or modify the object directly in the data warehouse database and then use the DAC's Import from Database command to add the new object to DAC.
Figure 8-1 Process Flow to Add New Object to Data Warehouse

Adding a New Table and Columns to the Data Warehouse

As shown in Figure 8-1, there are two alternative process flows for adding a new object to the data warehouse. You can enter the table and column definitions in DAC and then use the DAC's Data Warehouse Configurator to create the table and columns in the data warehouse database.
Alternatively, you can add the new table and column definitions directly in the data warehouse database and then use the DAC's Import from Database command to add the new table and columns to DAC.
To add a new table and columns to the data warehouse using the DAC's Data Warehouse Configurator
  1. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  2. From the Menu bar, select Views, then select Design, then select Tables.
  3. Create the new table.
    1. In the Tables tab, click New.
    2. In the Edit child tab, enter the appropriate information about the table, and click Save.
      For a description of the fields in this tab, see the DAC documentation.
  4. Add the columns for the new table.
    1. In the Columns child tab, click New.
    2. Enter the appropriate column information for each column you want to add to the table, and click Save.
    3. Enter the appropriate foreign key table and column information.
      Note:
      For performance purposes, it is recommended that you do not add more than 254 columns to a dimension or fact table.
  5. Create the new tables and columns in the data warehouse database.
    1. Select Tools, then select ETL Management, then select Configure.
    2. Select the appropriate Source and Target database platforms, and then click OK.
    3. In the Data Warehouse Configuration Wizard, select Create Data Warehouse Tables, and then click Next.
    4. Enter the required information, and then click Start.
      An informational message reports whether the process was successful. For information about the process, you can review the createwtables.log file in the OracleBI\DAC\log\config folder.
To add a new table and columns using the DAC's Import command
  1. Add the new table and column definitions into the data warehouse database.
  2. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  3. From the Menu bar, select Views, then select Design, then select Tables.
  4. Import the new table definition.
    1. Right-click and select Import from Database, then select Import Database Tables.
    2. In the Import Tables dialog, select DataWarehouse.
    3. Optionally, enter filter criteria to identify the table name you entered in Step 1. See the DAC documentation for available filter commands and operators.
    4. Click Read Tables.
    5. In the list of tables displayed, select the Import check box for the tables you want to import.
    6. Click Import Tables.
      An informational message indicates whether the process was successful.
  5. Import the new column definitions.
    1. In the Tables tab, query for the table you imported in Step 4.
    2. With the table highlighted, right-click and select Import from Database, then select Import Database Columns.
    3. In the Importing Columns... dialog, select Selected Record Only, and then click OK.
    4. In the Import Table Columns dialog, click Read Columns.
      The Changes column describes the difference between the database and the DAC repository:
      • The object was added to the database. The column is in the database but not in the DAC repository. Importing the column adds it to the DAC repository.
      • The object was added to the repository. The column is in the DAC repository but not in the database. Importing the column deletes it from the DAC repository.
      • The object was modified. The column definition in the database does not match the definition in the DAC repository.
    5. In the list of columns displayed, select the Import check box for the columns you want to import.
    6. Click Import Columns.
      An informational message indicates whether the process was successful.
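Step 1 of this procedure is ordinary DDL against the data warehouse database. As a minimal sketch (the table name, column names, and datatypes below are hypothetical, not part of any shipped schema):

  CREATE TABLE WC_EXAMPLE_D (
    ROW_WID            NUMBER(10),      -- surrogate key
    INTEGRATION_ID     VARCHAR2(80),    -- natural key from the source system
    DATASOURCE_NUM_ID  NUMBER(10),      -- identifies the source system
    ETL_PROC_WID       NUMBER(10),      -- ETL run identifier
    X_EXAMPLE_NAME     VARCHAR2(100)    -- custom attribute
  );

Once the table exists in the database, steps 4 and 5 read its definition into the DAC repository.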

Adding an Index to the Data Warehouse

Follow this procedure to add a new index to the data warehouse.
To add a new index to the data warehouse
  1. Add the new index definition into the data warehouse database.
  2. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  3. From the Menu bar, select Views, then select Design, then select Tables.
  4. Query for the table for which you want to import index definitions.
  5. Right-click and select Import from Database, then select Import Indices.
  6. Choose to import indexes for a selected table or for all the records retrieved in your query, and click OK.
  7. In the Import Indices dialog, select DataWarehouse from the Data Sources drop-down list.
  8. Click Read Indices.
  9. In the list of indexes displayed, select the Import check box for the indexes you want to import.
  10. Click Import Indices.
    An informational message indicates whether the process was successful.
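As with tables and columns, step 1 of this procedure is plain DDL; a minimal sketch, using a hypothetical unique index on the natural key columns of a custom dimension:

  CREATE UNIQUE INDEX WC_EXAMPLE_D_U1
    ON WC_EXAMPLE_D (INTEGRATION_ID, DATASOURCE_NUM_ID);

After the index exists in the database, the Import Indices command brings its definition into the DAC repository.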

DAC: SQL Type Parameter

Follow this procedure to define a parameter using the SQL data type. This procedure applies to parameters defined at both the source system and task levels.
To define a SQL type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the SQL data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select a Logical Data Source.
  7. Enter a SQL statement, and click OK.
  8. (Optional) To inactivate the parameter, select Inactive.
  9. Click Save.
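For example, the SQL statement entered in step 7 might look like the following (the table name is hypothetical); at runtime the DAC evaluates the statement against the selected logical data source and uses the result as the parameter value:

  SELECT MAX(PURCH_DT) FROM WC_PURCHASE_F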

DAC: Timestamp Type Parameter

Follow this procedure to define a parameter using the Timestamp data type. This procedure applies to parameters defined at both the source system and task levels.
To define a Timestamp type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the Timestamp data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following options:
    • Static. This option specifies a value that remains constant for all ETL runs.
    • Runtime. This option specifies that the value is updated by the DAC before each ETL run.
    • SQL. This option specifies that the value is set by a SQL statement that the DAC executes against a logical data source.
  7. If you selected the Static option:
    1. Click in the Date field to open the Date dialog.
    2. Enter a date and time, and click OK.
  8. If you selected the Runtime option:
    1. Click in the Value field to open the Enter Parameter Value dialog.
    2. Select a Variable from the list.
    3. From the Function list, select a format to which the DAC will convert the date. If you select Custom, enter a custom date format.
      If you select SQL Syntax or SQL Syntax (Date Only), select a Connection Type.
  9. If you selected the SQL option:
    1. Click in the SQL field to open the Enter Parameter Value dialog.
    2. Select a Logical Data Source from the list.
    3. Enter the parameter definition and click OK.
  10. Click OK to close the Enter Parameter Value dialog.
  11. (Optional) To inactivate the parameter, select Inactive.
  12. Click Save.
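For example, if you take the SQL option in step 6, the parameter definition entered in step 9 is simply a query that returns a single timestamp (the table name here is hypothetical):

  SELECT MAX(LAST_UPDATE_DATE) FROM WC_EXAMPLE_D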

DAC: Database Specific Text Type Parameter

Follow this procedure to define a parameter using the DB Specific Text data type. This procedure applies to parameters defined at both the source system and task levels.
To define a database specific text type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the DB Specific Text data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following Connection Type options:
    • @DAC_SOURCE_DBTYPE. This option specifies a source database connection.
    • @DAC_TARGET_DBTYPE. This option specifies a target database connection.
  7. To define a parameter that applies to all database types:
    1. Click in the Default field to open the Default text box.
    2. Enter the parameter definition, and click OK.
  8. To define a parameter specific to a particular database type:
    1. Click in the appropriate database type field to open the text box.
    2. Enter the parameter definition, and click OK.
  9. Click OK to close the Enter Parameter Value dialog.
  10. (Optional) To inactivate the parameter, select Inactive.
  11. Click Save.
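A typical use for this parameter type is an expression whose syntax differs by database platform, for example a current-timestamp expression (illustrative values only; enter each one in the corresponding database type field):

  Default:              CURRENT_TIMESTAMP
  Oracle:               SYSDATE
  Microsoft SQL Server: GETDATE()
  DB2:                  CURRENT TIMESTAMP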

DAC: Text Type Parameter

Follow this procedure to define a parameter using the Text data type. This procedure applies to parameters defined at both the source system and task levels.
To define a text type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the Text data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following options:
    • Static. This option specifies a value that remains constant for all ETL runs.
    • Runtime. This option specifies that the value is updated by the DAC before each ETL run.
  7. If you selected the Static option, enter a text value in the text window, and click OK.
  8. If you selected the Runtime option, select a DAC Variable from the list, and click OK.
  9. (Optional) To inactivate the parameter, select Inactive.
  10. Click Save.

OBIA 11.1.1.7.1 Part 2 : Category 2 Changes

In the last blog I went through a very basic customisation, adding a new column to an existing dimension table. That served as an overview of how the BI Apps mappings are packaged in ODI for use in a dynamic ETL. This blog will go through the creation of a dimension and fact table, illustrating some of the concepts used to maintain data integrity. Most of these are similar to concepts used in previous releases of BI Apps, but modified for use in the new tool. For this example I will be adding two new tables to an existing EBS solution and loading them in a single ETL run. The first is a dimension based on RETURN_REASON; the second is a fact based on SALES_RETURNS.
The first step is to create the source and target table definitions in the ODI model if they don’t already exist. Remember that you can just specify the table name and then use the Reverse Engineer feature to get the columns. The only constraint is that the table definitions are made in the correct models, but it’s worth grouping them into sub-models so that they can be navigated easily.
There are sample tables seeded in the repository for dimensions, staging tables and facts. These tables indicate the recommended naming convention (prefixing with WC_ instead of W_) as well as required system columns for warehouse tables. Below is a screenshot of the columns from the sample dimension table. All of these were included in tables for previous releases of BI Apps.
  • ROW_WID: Surrogate key for dimension tables.
  • INTEGRATION_ID: Natural key from the source table; often a concatenation of keys when several tables are used to populate a dimension.
  • DATASOURCE_NUM_ID: The identifier of the source system the data was extracted from. This allows for multiple sources to populate the same warehouse table without conflict.
  • ETL_PROC_WID: Run identifier for the load.
  • EFFECTIVE_FROM_DT/EFFECTIVE_TO_DT: These can be used to enable SCD type 2 dimensions.
  • CREATED_ON_DT/CHANGED_ON_DT: These dates (and all of the auxiliary changed dates) come from system columns on the source system and are used to extract only newly changed information. The auxiliary dates can be used to extend this logic to changes derived from several source tables.
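Putting those system columns together, the DDL for my custom dimension looks broadly like the following sketch (WC_RETURN_REASON_D and the two attribute columns are my own names, and the datatypes are assumptions; compare it against the seeded sample dimension table):

  CREATE TABLE WC_RETURN_REASON_D (
    ROW_WID             NUMBER(10)    NOT NULL,  -- surrogate key
    INTEGRATION_ID      VARCHAR2(80)  NOT NULL,  -- natural key from RETURN_REASON
    DATASOURCE_NUM_ID   NUMBER(10)    NOT NULL,  -- source system identifier
    ETL_PROC_WID        NUMBER(10),              -- run identifier for the load
    EFFECTIVE_FROM_DT   DATE,                    -- SCD type 2 support
    EFFECTIVE_TO_DT     DATE,
    CREATED_ON_DT       DATE,                    -- source dates used for
    CHANGED_ON_DT       DATE,                    -- incremental extraction
    RETURN_REASON_CODE  VARCHAR2(30),            -- custom attribute
    RETURN_REASON_DESC  VARCHAR2(240)            -- custom attribute
  );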
In addition to the table and column definitions, some other attributes need to be configured in order for the load plan generator (LPG) to calculate the dependencies. The only input the user gives the LPG is the set of fact groups to load. From there, the following logic is used to generate the plan:
  • Flexfield OBI Fact Group can be set on fact tables to link them to configuration groups.
  • Staging tables are identified from the naming convention, e.g. XX_D will assume a staging table of XX_DS.
  • Required dimensions for a fact are identified by reference constraints defined in the ODI model.
So for my example, I needed to set the fact group flexfield on the fact table as well as the constraints between the foreign keys and the newly created dimension table.
There is a fact group X_CUSTOM_FG which is included in each functional area, and it is recommended that generic customisations are included in this group. You can set this on the datastore definition as above. In addition to this, create the following constraints on the new datastores (a conceptual DDL sketch follows the list).
  • Staging Tables: Primary Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Dimension Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Fact Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
    • References for each foreign key to the ROW_WID of the parent table
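Expressed as DDL, those keys would look something like the sketch below, although in BI Apps they are defined as constraints on the ODI datastores rather than necessarily being enforced in the database (the staging, fact and foreign key column names here are my own):

  ALTER TABLE WC_RETURN_REASON_DS ADD CONSTRAINT WC_RETURN_REASON_DS_PK
    PRIMARY KEY (INTEGRATION_ID, DATASOURCE_NUM_ID);
  ALTER TABLE WC_RETURN_REASON_D ADD CONSTRAINT WC_RETURN_REASON_D_PK
    PRIMARY KEY (ROW_WID);
  ALTER TABLE WC_RETURN_REASON_D ADD CONSTRAINT WC_RETURN_REASON_D_U1
    UNIQUE (INTEGRATION_ID, DATASOURCE_NUM_ID);
  ALTER TABLE WC_SALES_RETURN_F ADD CONSTRAINT WC_SALES_RETURN_F_PK
    PRIMARY KEY (ROW_WID);
  ALTER TABLE WC_SALES_RETURN_F ADD CONSTRAINT WC_SALES_RETURN_F_U1
    UNIQUE (INTEGRATION_ID, DATASOURCE_NUM_ID);
  ALTER TABLE WC_SALES_RETURN_F ADD CONSTRAINT WC_SALES_RETURN_F_F1
    FOREIGN KEY (RETURN_REASON_WID) REFERENCES WC_RETURN_REASON_D (ROW_WID);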
After the datastores are configured, it's time to create the SDE interfaces and packages. Create these in the Custom_SDE folder as well, so that they are kept separate from any prebuilt logic. Most of the columns can map across directly, but it is important to use the global variable for DATASOURCE_NUM_ID. Variables are referenced by prefixing them with #, but they can also be inserted using the GUI expression editor.
The other important thing for the SDE mappings is to add a filter for the extract logic. Previously, this was done using two workflows and overriding the logic on one of them. Now we only need one interface as we can use a global function (seeded in the repository) to perform the logic. The logic used in the example is as follows:
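The actual expression uses one of the seeded global functions, so I won't reproduce its exact signature here, but conceptually the filter is equivalent to something like this (the source column and the date format are illustrative only):

  ('#IS_INCREMENTAL' = 'N')
  OR (SRC.LAST_UPDATE_DATE > TO_DATE('#LAST_EXTRACT_DATE', 'YYYY-MM-DD HH24:MI:SS'))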
where #IS_INCREMENTAL is derived from querying a system table, W_ETL_LOAD_DATES. Once the mappings are made, they should be included in a package that refreshes the IS_INCREMENTAL and LAST_EXTRACT_DATE variables first. This is typical of all the extract mappings and can be built by simply dragging the necessary objects across, defining one of the variables as the first step, and joining them with the green connectors. For all staging mappings, choose the BI Apps Control Append IKM on the Flow tab of the interface designer. There are BI Apps versions of all the default IKMs, which have some additional features.
The SIL mappings are created in much the same way, but require an update key to be selected. It's important that this is the key defined over INTEGRATION_ID and DATASOURCE_NUM_ID. The update key can be set on the target datastore properties. In order to populate the ROW_WID, a sequence needs to be created. The prebuilt mappings all use native sequences stored in the data warehouse. The sequence can then be imported into ODI and referenced using NEXTVAL(SEQUENCE_NAME).
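For example (the sequence name here is my own), the sequence is created in the warehouse with plain DDL and then mapped onto ROW_WID in the SIL interface:

  CREATE SEQUENCE WC_RETURN_REASON_D_SEQ START WITH 1 INCREMENT BY 1 CACHE 500;

  -- target expression for ROW_WID, after importing the sequence into ODI:
  NEXTVAL(WC_RETURN_REASON_D_SEQ)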
The other main difference for the SIL mappings is that they use the BI Apps Incremental Update or BI Apps Slowly Changing Dimension IKMs. For dimensions, the IKM has a useful option to insert the unspecified row automatically. For fact mappings (and some dimensions) it will be necessary to perform lookups. This procedure is done very simply by clicking the magnifying glass icon in the interface designer. That will open a wizard which allows you to select the table and the join condition. After that, any column from the lookup table can be used in the target expressions.
The SIL interfaces also need to be put into packages, although only the IS_INCREMENTAL variable is required for refresh. Once all of the packages have been created, scenarios need to be generated for each of them. This can be done for all packages at once by choosing to generate scenarios at a higher folder level; existing packages will be regenerated. These new scenarios need to be added to the master load plan, in the X_CUSTOM plans for SDE Dims, SDE Facts, SIL Dims and SIL Facts. Add the step by selecting the highest level and choosing the Run Scenario step from the add menu. Then set the restart mode to Restart from failed step.
Once all of this has been done, the load plan must be edited to include the X_CUSTOM_FG fact group. This is done through the Configuration Manager, where the plan can also be regenerated. After running the plan, I could see all of the tasks being included in the appropriate order, and the data was successfully loaded into the fact table with the foreign keys resolving correctly.
That concludes the guide to customisations in the new BI Apps. Hopefully it was helpful in explaining the overall process of making these customisations and why some of the steps are necessary. The Oracle documentation is very thorough and is certainly worth a look for some of the finer details. Conceptually, a lot is in common with previous BI Apps releases; the main change is the new tool, which brings some very good new features.

OBIA 11.1.1.7.1 Part 1: Category 1 Changes

Over the last couple of days I've been taking a look into the new BI Apps package Oracle have released, which uses ODI instead of Informatica. This blog will focus on how you can make your own customisations in ODI and relate them back to concepts from previous BI Apps releases. If you want to follow along with the examples, you will need an installation configured up to the point of generating a load plan to load one or more facts. I won't repeat the steps for that configuration, but will go through how to generate the load plan to include your custom packages. The fact group that I am selecting to load is Inventory Transactions (INVTRX_FG).
The most basic and typical type of customisation is simply adding a column to an existing table, called a Category 1 change. For this, I’ll go through a very simple addition onto W_INVENTORY_PRODUCT_D, just adding a new column to hold the current date. The first step required is to create some new sub folders to hold these custom mappings. This mirrors previous versions and is done for the same reason: to separate customisations from prebuilt metadata in order to allow for an easier upgrade path. This can be done in the Designer tab, using the Projects pane.
New Folders
It is also recommended to edit the release tags for the new folders to register them to the correct BI Apps sources and targets. These tags allow for shortcuts to be made in ODI, and all of the objects relating to specific tags to be referenced together. You can edit release tags by clicking on the icon in the top right hand side of the Designer tab.
Next, find the interface (akin to an Informatica mapping) to be customised in its relevant extract folder. In this case I'm using EBS 12.1.3 as a source and modifying SDE_ORA_InventoryProductDimension. Copy the whole subfolder; that way you get the interfaces as well as the packages (similar to an Informatica workflow). At this point I added the custom column, X_CURRENT_DATE, to the database tables:
  • W_INVENTORY_PRODUCT_DS
  • W_INVENTORY_PRODUCT_D
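The DDL for this is straightforward; something like the following, with the DATE datatype being my assumption:

  ALTER TABLE W_INVENTORY_PRODUCT_DS ADD (X_CURRENT_DATE DATE);
  ALTER TABLE W_INVENTORY_PRODUCT_D ADD (X_CURRENT_DATE DATE);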
It's still worth prefixing new columns with "X_" to denote customisation. ODI has the capability to import these changes into the model, similar to importing source and target definitions in Informatica. Open up the Models pane on the left hand side. This contains all of the table definitions for all schemas and is organised by source and then by table type.
Models
After opening a table definition you can edit several attributes, including the column definitions. The easiest way to do this is to use the Reverse-Engineer functionality, which reads the table definition from the database and imports it into ODI. Another interesting feature of ODI is the ability to choose the OLAP type. This has options of Fact, Dimension (SCD 1) and Slowly Changing Dimension (SCD 2). When set to Slowly Changing, you can edit the column properties to set their update behaviour. This way you can very easily alter a dimension to be SCD type 2 or vice versa.
Reverse Engineer
Slowly Changing Options
Once the table definition has been saved, the new column can be referenced when editing interfaces. Adding new columns is relatively simple in that you can drag the desired column across into the target datastore. Furthermore, you can use expressions which reference variables and functions defined in ODI. In this example I've simply set the new column to be CURRENT_DATE in the extract (SDE) interface. This column can then be brought through the load (SIL) interface. Often, the BI Apps interfaces will use yellow interfaces (as indicated by their icon) as their sources. A yellow interface is an ODI mapping which doesn't load into a defined datastore; instead, you define the target columns in the interface itself and ODI creates a temporary table. This interface can then be used as a source in another mapping. This can be chained as many times as necessary and hence can replicate the flow-based mappings which were frequent in Informatica. Typically, they are used for similar purposes to a source qualifier in previous releases.
Interface Designed
The interface is run as part of a package which can include other steps using functionality from ODI, the database or the OS itself. This is equivalent to a workflow in Informatica. One of the key differences, however, is that only one package is required for both full and incremental loads, whereas in Informatica we had two workflows. This is because of the existence of functions and variables defined globally in ODI, whereas previously parameters were defined at a mapping and workflow level. The mechanics of this will be described in part 2 of this blog series. The package for the interface is accessible from the Projects pane as well.
Package
The next step is to right click on the package and generate a scenario which will be executed by the load plan. Note that before doing this, it is worth changing the user parameter Scenario Naming Convention to %FOLDER_NAME(2)%_%OBJECT_NAME%. This will ensure the new scenarios match the out of the box ones. The final step is to ensure that the new mappings will be included in the generated load plan. As part of the configuration for BI Apps, you are asked to select which fact groups to include in the generated load plan. This is equivalent to adding subject areas to an execution plan and then generating the dependencies. This version of BI Apps provides similar functionality through its Load Plan Generator. The mechanics of the load plan generator will be described further in the next part of the blog. In order for the generator to pick up the new mappings, they need to be added to the master plan, which is a superset containing all interfaces without any particular order. The master plan can be edited in the Load Plans and Scenarios pane of the Designer tab. The master plan information is under BIAPPS Load Plan/Load Plan Dev Components, split into the three extract phases and then subsequently into fact and dimension groups. In this case, I've edited the INVPROD_DIM load plans for the SDE and SIL folders. Open the load plan and navigate to the steps section. Here we can modify the relevant task to use the scenario from our custom folder. This is the same as changing the logical folder for a task in the DAC.
Load Plan
Now you can go back to the BI Apps Configuration Manager, navigate to Load Plans and regenerate the plan. This will include the custom scenario instead and you can reset data sources and run the plan to load the custom column.
In the next part of the blog I will go through how to do a category 2 customisation, creating a new dimension and fact table and adding that to the load plan.

OBIA 11.1.1.7.1 – Use of ODI11g for ETL

In the two previous postings in this series on the Oracle BI Apps 11.1.1.7.1, we looked at the release at a high-level, and then at the product architecture including the new configuration and functional setup tools. From a technology and developer perspective though probably the most interesting thing about this new release is its use of Oracle Data Integrator as the ETL tool rather than Informatica, and the doing-away with the DAC for load orchestration and monitoring.
This introduction of ODI brings a number of potential benefits to customers and developers and gives Oracle the opportunity to simplify the product architecture. Bear in mind, though, that there's no migration path from the earlier 7.9.x releases to this version, with Informatica customers instead having to wait until the "patch set 2" version due in the next twelve months; even then, migration between tools won't be automatic, with existing Informatica-based installations expected to stay on Informatica unless they choose to re-implement using ODI.
So how does ODI work within this new release, and how has the DAC been replaced? Let’s take a look in this final piece in our short series on Oracle BI Apps 11.1.1.7.1, starting by looking at the overall role that ODI plays in the platform architecture.
Odi arch
Existing ODI developers will know that the tool uses two repositories, known as the Master and Work repositories, to store details of data sources and targets, mappings, data models and other aspects of an ETL project. Within the BI Apps these two repositories are stored in a schema called prefix_ODI_REPO, for example DEV_ODI_REPO, and are accompanied by a new schema called prefix_BIACOMP, again for example DEV_BIACOMP. The BIACOMP schema contains tables used by the various new WebLogic-based BI Apps supporting applications, and holds details of the functional setup of the BI Apps, load plans that have been generated and so forth. There's also another schema called prefix_BIACOMP_IO which is used for read-write access to the BIACOMP schema, and all of these are held in a repository database alongside the usual schemas used for OBIEE, MDS and so forth.
The major difference in using ODI within this environment is that it's treated as an "embedded" ETL tool, so that in most circumstances you won't need to use ODI Studio itself to kick off load plans, monitor their execution, set up sources and targets and so forth. This was the original vision for Informatica within the original BI Apps, but Oracle are able to do this far more effectively with ODI as they own all parts of the tech stack, can alter ODI to make it easier to embed, and they've got control over ODI's various metadata APIs and so forth. What this means in practice is that the setup of the ODI topology (to connect to the ERP sources, and the target data warehouse) is done for you via a web-based application called the Oracle BI Applications Configuration Manager, and you can kick off and then monitor your running ETL jobs from Configuration Manager and from ODI Console, the web-based operator tool that's been around since the 11g release of ODI. The screenshot below shows Configuration Manager setting up the source database ODI topology entry, with the details that you provide then being pushed through to the ODI master repository:
Setting up a new BI Apps system involves using the Configuration Manager to define the connections through to the various source systems, then select the BI Apps modules (Financial Analytics, for example, and then the various subject areas within it) that you wish to implement. There are then a number of steps you can perform to set up system-wide settings, for example to select default currencies or languages, and then you come to run your first ODI load plan – which in this instance copies settings from your source system into the relevant tables in the BIACOMP schema, performing automatically the task that you had to do via the various domain configuration spreadsheets in the earlier 7.9.x releases – the screenshot below shows this ODI load plan listed out and having run successfully.
You can then view the execution steps and outcome either in ODI Console (embedded within Configuration Manager), or over at ODI Studio, using the Operator navigator.
Moving over to ODI Studio, the folders (or "adapters") that in Informatica used to hold workflows and mappings for the various source systems are contained within the BI Apps project, in the Work repository under the Designer navigator. In the screenshot below you can also see the Fusion Apps adapter that's not supported in this particular release, and the ETL Data Lineage adapter that should get enabled in an upcoming patch release.
In the screenshot above you can also see that one of the loading tasks, SDE_ORA_APAgingBucketsDimension, is a package that (were you to expand the Interfaces entry) makes reference to a regular, and also a temporary, interface.
Packages in ODI perform the same role as Informatica workflows in earlier releases of the BI Apps, and each package runs some steps to refresh variables, work out if it's doing a full or incremental load, and then call the relevant ODI interface. Interfaces in ODI for the BI Apps typically load from other, temporary interfaces, with these temporary interfaces performing the role of mapplets in the Informatica version of the BI Apps, as you can see in the screenshot on the left below. On the right, you can see the flow for another mapping, along with one of the custom KMs that come as part of the BI Apps 11.1.1.7.1 package.
Individual packages are then assembled into the equivalent of BI Apps 7.9.x “execution plans” through a new JEE application called the Load Plan Generator, which also gets installed into ODI Studio as a plug-in so you can develop new data loading routines away from the full production setup. As you can see in the final screenshot below, these load plans are then visible from within ODI Studio (whether you generated them there, or from Configuration Manager), and like all ODI 11g load plans you can view the outcome of each load plan instance run, restart it if this feature is enabled, and so forth.
So there you have it – how ODI is used within the BI Apps 11.1.1.7.1. I’m going to take a break now as it’s almost time for the Atlanta run of the Rittman Mead BI Forum 2013, but once I’m back in the UK I’ll try and put something together for the blog on pulling together your first ETL run. Until then – have fun with the release.

OBIA 11.1.1.7.1: use of ODI11g for Data Loading

In previous posts I looked at the new OBIA 11.1.1.7.1 release, which uses ODI11g in the background to perform data loads rather than Informatica PowerCenter. Since then I've covered the install and an initial data load, with the aim being to provide readers with a "minimum viable install" OBIA 11.1.1.7.1 system. What's particularly interesting is how elements of ODI are used (and extended) to provide a proper, embedded ETL tool that Oracle can presumably take forward in a way that they couldn't with Informatica. In the next couple of blog posts, then, I'm going to look at this ODI integration in a bit more detail, starting today with "lifting up the lid" on the configuration and data load process, to see just how the various BI Apps platform tools integrate with ODI and control how it works.
If you’ve worked with earlier, Informatica-based releases of the BI Apps, you’ll be familiar with the many, many manual steps involved in configuring and integrating the various platform components, including two initial configuration phases that are particularly involved and prone to operator error:
  • Linking all of the ETL, ETL control (DAC), database and other elements together, so that the DAC can run tasks that, in turn, run Informatica workflows that, in turn, load data into the BI Apps data warehouse
  • Preparing lots of CSV files to contain the various domain values, segment codes, flex field details and so on for your source systems, even though you know these same details are held in tables in EBS, for example.
BI Apps 11.1.1.7.1 simplifies this setup process greatly, though, by automatically pushing all source and target configuration details entered into the web-based Configuration Manager application directly into ODI’s repository. You can see this in action when defining the EBS data source in our cookbook example, where in the screenshot below connection details to the EBS database are entered into Configuration Manager, and then appear thereafter in ODI Studio’s Topology navigator.
Configuration Manager stores copies of these settings in its own metadata tables, in this instance C_DATA_SERVER in a schema called DEV_BIACOMP, which can be thought of as similar to some of the DAC repository tables in BI Apps 7.9.x; in this instance though, Configuration Manager automatically pushes the values through to the ODI repository held in the DEV_BIA_ODIREPO schema, rather than you having to manually create the same entries in Informatica Workflow Manager yourself.
It's a similar story with the setting up of domain values – what happens with BI Apps 11.1.1.7.1 is that you configure a special type of load plan, analogous to BI Apps 7.9.6.4's execution plans, to read from the various EBS (in this instance) metadata tables and set up the domain values automatically. In the screenshots below, the one on the left shows one of the special "domain extract only" load plans being set up for a particular set of fact table groups, while the screenshot on the right shows the same load plan being executed, loading domain values into the Configuration Manager metadata tables.
Notice also that what ODI11g is using to load data into these tables is a standard ODI11g 11.1.1.5+ load plan, and it's presumably to support the BI Apps and their more complex loading routines and dependencies that load plans were introduced in the first place. And, because these are standard load plans, when they go wrong they're a lot easier for an ODI developer to diagnose and debug than Informatica/DAC load routines, which left log files all over the place and used Informatica terminology for the load, rather than ODI's familiar ELT approach. In the screenshots below, this domain-load process has failed at the point where data starts to get read from the EBS instance, and looking at ODI's error message view, you can quickly see that the error was caused by the EBS server being unreachable.
So far the screenshots of ODI's internals that you've seen are from ODI Studio, the Java thick-client developer tool that all ODI developers are familiar with. But the BI Apps tries to hide the details of the ETL tool from you, treating it as an embedded, "headless" server that ideally you administer as much as possible from Configuration Manager (for system-wide configuration) and Functional Setup Manager (for application-specific configuration). To achieve this, Configuration Manager can run ODI Console embedded within its web view, so it's possible to click on a load plan, view its status and drill into the individual steps all from your web browser, with no ODI Studio install needed.
So how do these load plans get generated, when there's no DAC and no historic way within ODI of dynamically generating load plans based on metadata? What enables this dynamic creation of load plans is a new utility included with BI Apps 11g called "Load Plan Generator", which ships as a JEE library within the WebLogic domain and as a plug-in to ODI Studio, for creating test load plans as part of the ETL customisation process.
The Load Plan Generator JEE back-end utility is described elsewhere in a fair bit of detail, but in summary you can think of the load plan generation process as going like this:
1. The ODI developer decides to create a new load plan, for loading data about inventories from EBS, for example. As shown in the screenshots before, domain values for this area within EBS will need to have been loaded prior to this, but assuming this is in place, the developer first selects one or more fact table groups, with each fact group a kind of “subject area” containing one or more data warehouse fact tables.
2. In the background, ODI doesn’t have the concept of fact groups, but it instead uses flex field metadata for each fact table to specify which fact group each belongs to. You can see what’s included in each fact group by looking at the warehouse domains view in Configuration Manager, and you can see the fact table flex fields in ODI Studio, when you view details of the table (or “model”) in the Designer navigator.
3. So when the BI Apps ODI developer tells Configuration Manager to create a new load plan, the steps it goes through and the metadata it consults look like this:
4. Giving you, in the end, a load plan with a standard set of phases, and optimally-selected and orchestrated load plan steps to load data into the required fact groups.
5. Then, once the load plan runs, you’ve got the same in-built restartability capabilities that you get in all other load plans, together with the concept of load plan instances, exceptions and so forth, so again as an ODI developer all of this is fairly obvious and fairly transparent to debug.
In the background, WebLogic Server hosts the ODI agent within an ODI-specific managed server and within the overall BI Apps WebLogic domain, with ODI’s security linked to WebLogic so that the same developer and operator IDs work across the whole system, and with a set of additional Fusion Middleware Security application roles for the various levels of administrative access.
So – you could say that BI Apps 11.1.1.7.1 is a "developer's dream" for ODI professionals, as it brings the productivity and source system compatibility benefits of the BI Apps to the familiar, more efficient world of ODI. Everything is connected, and you can see where everything is stored in the various underlying repository tables. But – and this is probably the most obvious next thought from experienced BI Apps developers – how do you go about customising these ETL routines, adding new data sources, and upgrading the system over time? And where does the new integration with GoldenGate come in, giving us the ability to break out of restrictive load windows and potentially move BI Apps into the cloud – watch this space for more details.