Creating Tasks in the DAC for New or Modified Informatica Workflows

You need to perform this step for all new workflows you create in Informatica and for all workflows that you modify.
To create a task in the DAC for new or modified Informatica workflows
  1. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  2. In the DAC, create custom logical and physical task folders for the custom folder you created in the Informatica repository.
    1. In the DAC, navigate to Tools, then select Seed Data, then select Task Folders.
    2. To create a custom logical folder, click New.
    3. In the Name field, enter a name for the custom logical folder, for example, Custom Logical.
    4. In the Type field, select Logical.
    5. To create a custom physical folder, click New.
    6. In the Name field, enter a name for the custom physical folder, for example, Custom Physical.
    7. In the Type field, select Physical.
  3. Register the folders you created in Step 2 in the Source System Folders tab.
    1. Navigate to Design, then select Source System Folders.
    2. Click New.
    3. In the Edit child tab, enter the name of the custom logical folder in the Logical Folder field.
    4. Enter the name of the custom physical folder in the Physical Folder field, and click Save.
  4. Create new tasks for the workflows.
    1. Navigate to Design, then select Tasks, and click New in the top pane toolbar.
    2. In the Edit child tab, enter the workflow name as it appears in Informatica Workflow Manager.
    3. Right-click and select Synchronize Tasks.
    4. Select Selected Record Only, and click OK. Click OK in the informational message box.
      This command synchronizes the source and target table information between the DAC and Informatica.
    5. In the Tasks tab, enter the remaining information required for the task.

DAC: Customizing the Data Warehouse

You can add tables, columns, and indexes to the data warehouse, and you can modify these existing objects. Customizing the data warehouse in this way requires using DAC and Informatica client tools. For more information about using Informatica client tools to customize the Oracle Business Analytics Warehouse, see the Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.
Figure 8-1 shows the major steps required for adding a new object to the data warehouse or modifying existing objects. As shown in Figure 8-1, you can begin the customization process by adding or modifying the new data warehouse object in DAC and then using the DAC's Data Warehouse Configurator to create or update the object in the data warehouse. Alternatively, you can add or modify the object directly in the data warehouse database and then use the DAC's Import from Database command to add the new object in DAC.
Figure 8-1 Process Flow to Add New Object to Data Warehouse

Adding a New Table and Columns to the Data Warehouse

As shown in Figure 8-1, there are two alternative process flows for adding a new object to the data warehouse. You can enter the table and column definitions in DAC and then use the DAC's Data Warehouse Configurator to create the table and columns in the data warehouse database.
Alternatively, you can add the new table and column definitions directly in the data warehouse database and then use the DAC's Import from Database command to add the new table and columns in DAC.
To add a new table and columns to the data warehouse using the DAC's Data Warehouse Configurator
  1. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  2. From the Menu bar, select Views, then select Design, then select Tables.
  3. Create the new table.
    1. In the Tables tab, click New.
    2. In the Edit child tab, enter the appropriate information about the table, and click Save.
  4. Add the columns for the new table.
    1. In the Columns child tab, click New.
    2. Enter the appropriate column information for each column you want to add to the table, and click Save.
    3. Enter the appropriate foreign key table and column information.
      Note:
      For performance purposes, it is recommended that you do not add more than 254 columns to a dimension or fact table.
  5. Create the new tables and columns in the data warehouse database.
    1. Select Tools, then select ETL Management, then select Configure.
    2. Select the appropriate Source and Target database platforms, and then click OK.
    3. In the Data Warehouse Configuration Wizard, select Create Data Warehouse Tables, and then click Next.
    4. Enter the required information, and then click Start.
      An informational message reports whether the process was successful. For information about the process, you can review the createwtables.log file in the OracleBI\DAC\log\config folder.
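If you take the second approach (described next), the table must already exist in the data warehouse database before you import it. The following is only a minimal sketch of such a definition: the table and column names are hypothetical, Oracle syntax is assumed, and the system columns shown are the ones conventionally carried by warehouse tables.
  CREATE TABLE WC_EXAMPLE_D (
    ROW_WID            NUMBER(10)    NOT NULL,   -- surrogate key
    INTEGRATION_ID     VARCHAR2(80)  NOT NULL,   -- natural key from the source system
    DATASOURCE_NUM_ID  NUMBER(10)    NOT NULL,   -- identifies the source the row came from
    ETL_PROC_WID       NUMBER(10),               -- identifier of the ETL run that loaded the row
    EXAMPLE_CODE       VARCHAR2(30),             -- hypothetical attribute column
    EXAMPLE_NAME       VARCHAR2(240)             -- hypothetical attribute column
  );
Once the table exists in the database, the Import from Database command described below picks up both the table and its columns.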
To add a new table and columns using the DAC's Import command
  1. Add the new table and column definitions into the data warehouse database.
  2. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  3. From the Menu bar, select Views, then select Design, then select Tables.
  4. Import the new table definition.
    1. Right-click and select Import from Database, then select Import Database Tables.
    2. In the Import Tables dialog, select DataWarehouse.
    3. Optionally, enter filter criteria to identify the table name you entered in Step 1.
    4. Click Read Tables.
    5. In the list of tables displayed, select the Import check box for the tables you want to import.
    6. Click Import Tables.
      An informational message indicates whether the process was successful.
  5. Import the new column definitions.
    1. In the Tables tab, query for the table you imported in Step 4.
    2. With the table highlighted, right-click and select Import from Database, then select Import Database Columns.
    3. In the Importing Columns... dialog, select Selected Record Only, and then click OK.
    4. In the Import Table Columns dialog, click Read Columns.
      The Changes column displays a description of column changes, as follows:
      • The object was added to the database: The column is in the database but not in the DAC repository. Importing it will add the column to the DAC repository.
      • The object was added to the repository: The column is in the DAC repository but not in the database. Importing it will delete it from the DAC repository.
      • The object was modified: The column definition in the database does not match the definition in the DAC repository.
    5. In the list of columns displayed, select the Import check box for the columns you want to import.
    6. Click Import Columns.
      An informational message indicates whether the process was successful.

Adding an Index to the Data Warehouse

Follow this procedure to add a new index to the data warehouse.
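Step 1 of the procedure below assumes the index definition already exists in the data warehouse database. A minimal sketch, reusing the hypothetical WC_EXAMPLE_D table from the earlier example (index name is also hypothetical; Oracle syntax):
  CREATE INDEX WC_EXAMPLE_D_U1
    ON WC_EXAMPLE_D (INTEGRATION_ID, DATASOURCE_NUM_ID);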
To add a new index to the data warehouse
  1. Add the new index definition into the data warehouse database.
  2. In the DAC toolbar, select the appropriate source system container from the drop-down list.
  3. From the Menu bar, select Views, then select Design, then select Tables.
  4. Query for the table for which you want to import index definitions.
  5. Right-click and select Import from Database, then select Import Indices.
  6. Choose to import indexes for a selected table or for all the records retrieved in your query, and click OK.
  7. In the Import Indices dialog, select DataWarehouse from the Data Sources drop-down list.
  8. Click Read Indices.
  9. In the list of indexes displayed, select the Import check box for the indexes you want to import.
  10. Click Import Indices.
      An informational message indicates whether the process was successful.

DAC: SQL Type Parameter

Procedure to define a parameter using the SQL data type. This procedure applies to parameters defined at both the source system and task levels.
To define a SQL type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the SQL data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select a Logical Data Source.
  7. Enter a SQL statement, and click OK.
  8. (Optional) To inactivate the parameter, select Inactive.
  9. Click Save.
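For example, a SQL type parameter might hold a statement such as the following (the table name is hypothetical); the statement is evaluated against the selected logical data source, and the result becomes the parameter value at runtime:
  -- returns the most recent change date in a custom dimension table (hypothetical example)
  SELECT MAX(CHANGED_ON_DT)
  FROM   WC_EXAMPLE_D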

DAC: Timestamp Type Parameter

Procedure to define a parameter using the Timestamp data type. This procedure applies to parameters defined at both the source system and task levels.
To define a Timestamp type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the Timestamp data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following options:
    • Static. This option specifies a value that remains constant for all ETL runs.
    • Runtime. This option specifies the value will be updated by the DAC before each ETL run.
    • SQL. This option specifies that the value is derived from a SQL statement, which is run against a logical data source before each ETL run.
  7. If you selected the Static option:
    1. Click in the Date field to open the Date dialog.
    2. Enter a date and time, and click OK.
  8. If you selected the Runtime option:
    1. Click in the Value field to open the Enter Parameter Value dialog.
    2. Select a Variable from the list.
    3. From the Function list, select a format to which the DAC will convert the date. If you select Custom, enter a custom date format.
      If you select SQL Syntax or SQL Syntax (Date Only), select a Connection Type.
  9. If you selected the SQL option:
    1. Click in the SQL field to open the Enter Parameter Value dialog.
    2. Select a Logical Data Source from the list.
    3. Enter the parameter definition and click OK.
  10. Click OK to close the Enter Parameter Value dialog.
  11. (Optional) To inactivate the parameter, select Inactive.
  12. Click Save.
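For example, if you select the SQL option, the parameter definition is a statement that returns a single timestamp. A minimal sketch, assuming an Oracle logical data source:
  -- resolves to midnight seven days before the current date
  SELECT TRUNC(SYSDATE) - 7
  FROM   DUAL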

DAC: Database Specific Text Type Parameter

Procedure to define a parameter using the DB Specific Text data type. This procedure applies to parameters defined at both the source system and task levels.
To define a database specific text type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the DB Specific Text data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following Connection Type options:
    • @DAC_SOURCE_DBTYPE. This option specifies a source database connection.
    • @DAC_TARGET_DBTYPE. This option specifies a target database connection.
  7. To define a parameter specific to all database types:
    1. Click in the Default field to open the Default text box.
    2. Enter the parameter definition, and click OK.
  8. To define a parameter specific to a particular database type:
    1. Click in the appropriate database type field to open the text box.
    2. Enter the parameter definition, and click OK.
  9. Click OK to close the Enter Parameter Value dialog.
  10. (Optional) To inactivate the parameter, select Inactive.
  11. Click Save.
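DB specific text parameters are typically used for expressions whose syntax varies by platform. The values below are illustrative only (Oracle and SQL Server shown); you would enter each one in the field for the corresponding database type:
  -- Oracle
  TO_CHAR(SYSDATE, 'YYYY-MM-DD')
  -- SQL Server
  CONVERT(VARCHAR(10), GETDATE(), 120)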

DAC: Text Type Parameter

Procedure to define a parameter using the Text data type. This procedure applies to parameters defined at both the source system and task levels.
To define a text type parameter:
  1. Do one of the following:
    • To define a source system parameter, from the Views menu, select Design, and then select Source System Parameters.
    • To define a task level parameter, from the Views menu, select Design, then select Tasks, and then click the Parameters subtab.
  2. Click New.
  3. Enter a parameter name.
  4. Select the Text data type.
  5. Click in the Value field to open the Enter Parameter Value dialog.
  6. Select one of the following options:
    • Static. This option specifies a value that remains constant for all ETL runs.
    • Runtime. This option specifies a value that will be updated by the DAC before each ETL run.
  7. If you selected the Static option, enter a text value in the text window, and click OK.
  8. If you selected the Runtime option, select a DAC Variable from the list, and click OK.
  9. (Optional) To inactivate the parameter, select Inactive.
  10. Click Save.

OBIA 11.1.1.7.1 Part 2 : Category 2 Changes

In the last blog I went through a very basic customisation, adding a new column to an existing dimension table. That served as an overview for how the BI Apps mappings are packaged in ODI for use in a dynamic ETL. This blog will go through the creation of a dimension and fact table, illustrating some of the concepts used to maintain data integrity. Most of these are similar to concepts used in previous releases of BI Apps, but modified for use in the new tool. For this example I will be adding two new tables to an existing EBS solution and running it as a single ETL load. The first is a dimension based on RETURN_REASON, the second is a fact based on SALES_RETURNS.
The first step is to create the source and target table definitions in the ODI model if they don’t already exist. Remember that you can just specify the table name and then use the Reverse Engineer feature to get the columns. The only constraint is that the table definitions are made in the correct models, but it’s worth grouping them into sub-models so that they can be navigated easily.
There are sample tables seeded in the repository for dimensions, staging tables and facts. These tables indicate the recommended naming convention (prefixing with WC_ instead of W_) as well as required system columns for warehouse tables. Below is a screenshot of the columns from the sample dimension table. All of these were included in tables for previous releases of BI Apps.
  • ROW_WID: Surrogate key for dimension tables.
  • INTEGRATION_ID: Natural key from the source table. It is often a concatenation of keys when several tables are used to populate a dimension.
  • DATASOURCE_NUM_ID: The identifier of the source system the data was extracted from. This allows for multiple sources to populate the same warehouse table without conflict.
  • ETL_PROC_WID: Run identifier for the load.
  • EFFECTIVE_FROM_DT/EFFECTIVE_TO_DT: These can be used to enable SCD type 2 dimensions.
  • CREATED_ON_DT/CHANGED_ON_DT: These dates (and all of the auxiliary changed dates) are from system columns on the source system. These are used to extract only newly changed information. The auxiliary dates can be used to improve this logic to derive from several tables.
In addition to the table and column definitions, some other attributes need to be configured in order for the load plan generator (LPG) to calculate the dependencies. The only data the user gives the LPG is the set of fact groups to load. From there, the following logic is used to generate the plan:
  • Flexfield OBI Fact Group can be set on fact tables to link them to configuration groups.
  • Staging tables are identified from the naming convention, e.g. XX_D will assume a staging table of XX_DS.
  • Required dimensions for a fact are identified by reference constraints defined in the ODI model.
So for my example, I needed to set the fact group flexfield on the fact table as well as the constraints between the foreign keys and the newly created dimension table.
There is a fact group, X_CUSTOM_FG, which is included in each functional area, and it is recommended that generic customisations are included in this group. You can set this on the datastore definition as above. In addition, create the following constraints on the new datastores:
  • Staging Tables: Primary Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Dimension Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Fact Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
    • References for each foreign key to the ROW_WID of the parent table
After the datastores are configured, it's time to create the SDE interfaces and packages. Create these in the Custom_SDE folder as well, so they are kept separate from any prebuilt logic. Most of the columns can map across directly, but it is important to use the global variable for DATASOURCE_NUM_ID. Variables are referenced by prefixing them with #, but they can also be inserted using the GUI expression editor.
The other important thing for the SDE mappings is to add a filter for the extract logic. Previously, this was done using two workflows and overriding the logic on one of them. Now only one interface is needed, because a global function (seeded in the repository) can be used to perform the logic, where #IS_INCREMENTAL is derived from querying a system table, W_ETL_LOAD_DATES. Once the mappings are made, they should be included in a package which refreshes the IS_INCREMENTAL and LAST_EXTRACT_DATE variables first. This is typical of all the extract mappings and can be made by just dragging the necessary objects across, defining one of the variables as the first step and joining them using the green connectors. For all staging mappings, choose the BI Apps Control Append IKM in the flow tab of the interface designer. There are BI Apps versions of all the default IKMs which have some additional features.
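The exact seeded function isn't reproduced here, but the effective logic of such an extract filter is along the following lines. This is a sketch only: the source alias, the LAST_UPDATE_DATE column and the date format are assumptions, and the variables are the ones refreshed by the package.
  -- full load: no restriction; incremental load: only rows changed since the last extract
  ('#IS_INCREMENTAL' = 'N')
  OR (SRC.LAST_UPDATE_DATE > TO_DATE('#LAST_EXTRACT_DATE', 'YYYY-MM-DD HH24:MI:SS'))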
The SIL mappings are created in much the same way, but require an update key to be selected. It’s important that this is the key defined over the INTEGRATION_ID and DATASOURCE_NUM_ID. The update key can be set on the target datastore properties. In order to populate the ROW_WID, a sequence needs to be created. The prebuilt mappings all use native sequences stored in the data warehouse. This can then be imported into ODI and referenced by using NEXTVAL(SEQUENCE_NAME).
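As a sketch (the sequence name is hypothetical), the warehouse-side sequence and the corresponding ROW_WID target expression might look like this:
  -- native sequence created in the data warehouse schema
  CREATE SEQUENCE WC_RETURN_REASON_D_SEQ START WITH 1 INCREMENT BY 1;
  -- after importing the sequence into ODI, reference it in the ROW_WID target expression as:
  -- NEXTVAL(WC_RETURN_REASON_D_SEQ)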
The other main difference for the SIL mappings is that they use the BI Apps Incremental Update or BI Apps Slowly Changing Dimension IKMs. For dimensions, the IKM has a useful option to insert the unspecified row automatically. For fact mappings (and some dimensions) it will be necessary to perform lookups. This procedure is done very simply by clicking the magnifying glass icon in the interface designer. That will open a wizard which allows you to select the table and the join condition. After that, any column from the lookup table can be used in the target expressions.
The SIL interfaces also need to be put into packages, although only the IS_INCREMENTAL variable is required for refresh. Once all of the packages have been created, scenarios need to be generated for each of them. This can be done for all packages at once by choosing Generate Scenarios at a higher folder level; existing packages will be regenerated. These new scenarios need to be added to the master load plan, in the X_CUSTOM plans for SDE Dims, SDE Facts, SIL Dims and SIL Facts. Add the step by selecting the highest level and choosing Run Scenario Step from the Add menu. Then set the restart mode to Restart from failed step.
Once all of this has been done, the load plan must be edited to include the X_CUSTOM_FG fact group. This is done through the configuration manager where the plan can also be regenerated. After running the plan, I could see all of the tasks being included in the appropriate order. The data was successfully loaded into the fact table, with the foreign keys resolving.
That concludes the guide to customisations in the new BI Apps. Hopefully it was helpful in explaining the overall process of how to do these customisations and why some of the steps are necessary. The Oracle documentation is very thorough and is certainly worth a look for some of the finer details. A lot is conceptually in common with previous BI Apps releases; the main change is the new tool, which brings some very good new features.

OBIA 11.1.1.7.1 Part 1: Category 1 Changes

Over the last couple of days I've been taking a look into the new BI Apps package Oracle have released using ODI instead of Informatica. This blog will focus on how you can make your own customisations in ODI and relate them back to concepts from previous BI Apps releases. If you want to follow along with the examples in this blog, the initial configuration takes you through to the point of generating a load plan to load one or more facts; I won't repeat the steps for that configuration, but will go through how to generate the load plan to include your custom packages. The fact group that I am selecting to load is Inventory Transactions (INVTRX_FG).
The most basic and typical type of customisation is simply adding a column to an existing table, called a Category 1 change. For this, I’ll go through a very simple addition onto W_INVENTORY_PRODUCT_D, just adding a new column to hold the current date. The first step required is to create some new sub folders to hold these custom mappings. This mirrors previous versions and is done for the same reason: to separate customisations from prebuilt metadata in order to allow for an easier upgrade path. This can be done in the Designer tab, using the Projects pane.
New Folders
It is also recommended to edit the release tags for the new folders to register them to the correct BI Apps sources and targets. These tags allow for shortcuts to be made in ODI, and all of the objects relating to specific tags to be referenced together. You can edit release tags by clicking on the icon in the top right hand side of the Designer tab.
Next, find the interface (akin to an Informatica mapping) to be customised in its relevant extract folder. In this case I'm using EBS 12.1.3 as a source and modifying SDE_ORA_InventoryProductDimension. Copy the whole subfolder; that way you get the interfaces as well as the packages (similar to an Informatica workflow). At this point I added the custom column, X_CURRENT_DATE, to the database tables (see the sketch after this list):
  • W_INVENTORY_PRODUCT_DS
  • W_INVENTORY_PRODUCT_D
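A sketch of the corresponding DDL, assuming an Oracle warehouse:
  ALTER TABLE W_INVENTORY_PRODUCT_DS ADD (X_CURRENT_DATE DATE);
  ALTER TABLE W_INVENTORY_PRODUCT_D  ADD (X_CURRENT_DATE DATE);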
It’s still worth prefixing new columns with “X_” to denote customisation. ODI has the capability to import these changes into the mode, similarly to importing source and target definitions in Informatica. Open up the Models pane on the left hand side. This contains all of the table definitions for all schemas and is organised by source and then by table type.
Models
After opening a table definition you can edit several attributes including the column definitions. The easiest way to do this is to use the Reverse-Engineer functionality. This will read the table definition from the database and import it into ODI. Another interesting feature of ODI is to choose the OLAP type. This has options of Fact, Dimension (SCD 1) and Slowly Changing Dimension (SCD 2). When set to Slowly Changing, you can edit the column properties to set their update behaviour. This way you can very easily alter a dimension to be SCD type 2 or vice versa.
Reverse Engineer
Slowly Changing Options
Once the table definition has been saved, the new column can be referenced when editing interfaces. The process of adding new columns is relatively simple in that you can drag the desired column across into the target datastore. Furthermore, you can use expressions which reference variables and functions defined in ODI. In this example I've simply set the new column to be CURRENT_DATE in the extract (SDE) interface. This column can then be brought through the load (SIL) interface. Often, the BI Apps interfaces will use Yellow interfaces (as indicated by their icon) as their sources. This is an ODI mapping which doesn't load into a defined datastore; instead you define the target columns in the interface itself and ODI will create a temporary table. This interface can be used as a source in another mapping. This can be chained as many times as necessary and hence can replicate the flow-based mappings which were frequent in Informatica. Typically, they are used for similar purposes to a source qualifier in previous releases.
Interface Designed
The interface is run as part of a package which can include other steps using functionality from ODI, the database or on the OS itself. This is equivalent to a workflow in Informatica. One of the key differences however, is that there is only one package required for both full and incremental loads whereas we had two Informatica mappings. This is because of the existence of functions and variables defined globally in ODI, whereas previously parameters were defined at a mapping and workflow level. The mechanics of this will be described in part 2 of this blog series. The package for the interface is accessible from the Projects pane as well.
Package
The next step is to right click on the package and generate a scenario which will be executed by the load plan. Note that, before doing this, it is worth changing the user parameter Scenario Naming Convention to %FOLDER_NAME(2)%_%OBJECT_NAME%. This will ensure they match the out of the box scenarios. The final step is to ensure that the new mappings will be included in the generated load plan. As part of the configuration for BI Apps, you are asked to select which fact groups to include in the generated load plan. This is equivalent to adding subject areas to an execution plan and then generating the dependencies. This version of BI Apps provides similar functionality through its Load Plan Generator. The mechanics of the load plan generator will be described further in the next part of the blog. In order for the generator to pick up the new mappings, they need to be added to the master plan, which is a superset containing all interfaces without any particular order. The master plan can be edited in the Load Plans and Scenarios pane of the Designer tab. The master plan information is under BIAPPS Load Plan/Load Plan Dev Components. They are split into the three extract phases and then subsequently split into fact and dimension groups. In this case, I've edited the INVPROD_DIM Load Plans for the SDE and SIL folders. Open the load plan and navigate to the steps section. Here we can modify the relevant task to use the scenario from our custom folder. This is the same as changing the logical folder for a task in DAC.
Load Plan
Now you can go back to the BI Apps Configuration Manager, navigate to Load Plans and regenerate the plan. This will include the custom scenario instead and you can reset data sources and run the plan to load the custom column.
In the next part of the blog I will go through how to do a category 2 customisation, creating a new dimension and fact table and adding that to the load plan.

OBIA 11.1.1.7.1 – Use of ODI11g for ETL

In the two previous postings in this series on the Oracle BI Apps 11.1.1.7.1, we looked at the release at a high-level, and then at the product architecture including the new configuration and functional setup tools. From a technology and developer perspective though probably the most interesting thing about this new release is its use of Oracle Data Integrator as the ETL tool rather than Informatica, and the doing-away with the DAC for load orchestration and monitoring.
This introduction of ODI brings a number of potential benefits to customers and developers and gives Oracle the opportunity to simplify the product architecture, but bear in mind that there's no migration path from the earlier 7.9.x releases to this version, with Informatica customers instead having to wait until the "patch set 2" version due in the next twelve months; even then, migration between tools won't be automatic, with existing Informatica-based installations expected to stay on Informatica unless they choose to re-implement using ODI.
So how does ODI work within this new release, and how has the DAC been replaced? Let’s take a look in this final piece in our short series on Oracle BI Apps 11.1.1.7.1, starting by looking at the overall role that ODI plays in the platform architecture.
Odi arch
Existing ODI developers will know that the tool uses two repositories, known as the Master and Work repositories, to store details of data sources and targets, mappings, data models and other aspects of an ETL project. Within the BI Apps these two repositories are stored in a schema called prefix_ODI_REPO, for example DEV_ODI_REPO, and are accompanied by a new schema called prefix_BIACOMP, again for example DEV_BIACOMP. The BIACOMP schema contains tables used by the various new WebLogic-based BI Apps supporting applications, and contains details of the functional setup of the BI Apps, load plans that have been generated, and so forth. There's also another schema called prefix_BIACOMP_IO which is used for read-write access to the BIACOMP schema, and all of these are held in a repository database alongside the usual schemas used for OBIEE, MDS and so forth.
The major difference in using ODI within this environment is that it's treated as an "embedded" ETL tool, so that in most circumstances you won't need to use ODI Studio itself to kick-off load plans, monitor their execution, set up sources and targets and so forth. This was the original vision for Informatica within the original BI Apps, but Oracle are able to do this far more effectively with ODI as they own all parts of the tech stack, can alter ODI to make it easier to embed, and they've got control over ODI's various metadata APIs and so forth. What this means in practice is that the setup of the ODI topology (to connect to the ERP sources, and the target data warehouse) is done for you via a web-based application called the Oracle BI Applications Configuration Manager, and you can kick-off and then monitor your running ETL jobs from Configuration Manager and from ODI Console, the web-based operator tool that's been around since the 11g release of ODI. The screenshot below shows Configuration Manager setting up the source database ODI topology entry, with the details that you provide then being pushed through to the ODI master repository:
Setting up a new BI Apps system involves using the Configuration Manager to define the connections through to the various source systems, then select the BI Apps modules (Financial Analytics, for example, and then the various subject areas within it) that you wish to implement. There are then a number of steps you can perform to set up system-wide settings, for example to select default currencies or languages, and then you come to run your first ODI load plan – which in this instance copies settings from your source system into the relevant tables in the BIACOMP schema, performing automatically the task that you had to do via the various domain configuration spreadsheets in the earlier 7.9.x releases – the screenshot below shows this ODI load plan listed out and having run successfully.
You can then view the execution steps and outcome either in ODI Console (embedded within Configuration Manager), or over at ODI Studio, using the Operator navigator.
Moving over to ODI Studio, the folders (or “adapters”) that in Informatica used to hold workflows and mappings for the various source systems, are contained with the BI Apps project within the Work repository and the Designer navigator. In the screenshot below you can also see the Fusion Apps adapter that’s not supported in this particular release, and the ETL Data Lineage adapter that should get enabled in an upcoming patch release.
In the screenshot above you can also see one of the loading tasks, SDE_ORA_APAgingBucketsDimenson, is a package that (were you to expand the Interfaces entry) makes reference to a regular, and also a temporary, interface.
Packages in ODI perform the same role as Informatica workflows in earlier releases of the BI Apps, and each package runs some steps to refresh variables, work out if its doing a full or incremental load, and then call the relevant ODI interface. Interfaces in ODI for the BI Apps typically load from other temporary interfaces, with these temporary interfaces performing the role of maplets in the Informatica version of the BI Apps, as you can see in the screenshot on the left below. On the right, you can see the flow for another mapping, along with one of the custom KMs that come as part of the BI Apps 11.1.1.7.1 package.
Individual packages are then assembled into the equivalent of BI Apps 7.9.x “execution plans” through a new JEE application called the Load Plan Generator, which also gets installed into ODI Studio as a plug-in so you can develop new data loading routines away from the full production setup. As you can see in the final screenshot below, these load plans are then visible from within ODI Studio (whether you generated them there, or from Configuration Manager), and like all ODI 11g load plans you can view the outcome of each load plan instance run, restart it if this feature is enabled, and so forth.
So there you have it – how ODI is used within the BI Apps 11.1.1.7.1. I’m going to take a break now as it’s almost time for the Atlanta run of the Rittman Mead BI Forum 2013, but once I’m back in the UK I’ll try and put something together for the blog on pulling together your first ETL run. Until then – have fun with the release.

OBIA 11.1.1.7.1: use of ODI11g for Data Loading

In the previous postings in this series we looked at how OBIA 11.1.1.7.1 uses ODI11g in the background to perform data loads, rather than Informatica PowerCenter, and walked through the install and an initial data load, with the aim being to provide readers with a "minimum viable install" OBIA 11.1.1.7.1 system. We also saw how elements of ODI are used (and extended) to provide a proper, embedded ETL tool that Oracle can presumably take forward in a way that they couldn't with Informatica. In the next couple of blog posts then, I'm going to look at this ODI integration in a bit more detail, starting today with "lifting up the lid" on the configuration and data load process, to see just how the various BI Apps platform tools integrate with ODI and control how it works.
If you’ve worked with earlier, Informatica-based releases of the BI Apps, you’ll be familiar with the many, many manual steps involved in configuring and integrating the various platform components, including two initial configuration phases that are particularly involved and prone to operator error:
  • Linking all of the ETL, ETL control (DAC), database and other elements together, so that the DAC can run tasks that in turn, run Informatica Workflows that in turn, load data into the BI Apps data warehouse
  • Preparing lots of CSV files to contain the various domain values, segment codes, flex field details and so on for your source systems, even though you know these same details are held in tables in EBS, for example.
BI Apps 11.1.1.7.1 simplifies this setup process greatly, though, by automatically pushing all source and target configuration details entered into the web-based Configuration Manager application directly into ODI’s repository. You can see this in action when defining the EBS data source in our cookbook example, where in the screenshot below connection details to the EBS database are entered into Configuration Manager, and then appear thereafter in ODI Studio’s Topology navigator.
Configuration Manager stores copies of these settings in its own metadata tables, in this instance C_DATA_SERVER in a schema called DEV_BIACOMP, which can be thought of as similar to some of the DAC repository tables in BI Apps 7.9.x; in this instance though, Configuration Manager automatically pushes the values through to the ODI repository held in the DEV_BIA_ODIREPO schema, rather than you having to manually create the same entries in Informatica Workflow Manager yourself.
It’s a similar story with the setting up of domain values – what happens with BI Apps 11.1.1.7.1 is that you configure a special type of load plan, analogous to BI Apps 7.9.6.4′s execution plans, to read from the various EBS (in this instance) metadata table and set up the domain values automatically. In the screenshots below, the one on the left shows one of the special “domain extract only” load plans being set up for a particular set of fact table groups, while the screenshot on the right shows the same load plan being executed, loading domain values into the Configuration Manager metadata tables.
Notice also that what ODI11g is using to load data into these tables is a standard ODI11g 11.1.1.5+ load plan, and it's presumably to support the BI Apps and their more complex loading routines and dependencies that load plans were introduced. And, because these are standard load plans, when they go wrong, as an ODI developer they're a lot easier to diagnose and debug than Informatica/DAC load routines, which left log files all over the place and used Informatica terminology for the load, rather than ODI's familiar ELT approach. In the screenshots below, this domain-load process has failed at the point where data starts to get read from the EBS instance, and looking at ODI's error message view, you can quickly see that the error was caused by the EBS server being unreachable.
So far the screenshots of ODI's internals that you've seen are from ODI Studio, the Java thick-client developer tool that all ODI developers are familiar with. But the BI Apps tries to hide the details of the ETL tool from you, treating it as an embedded, "headless" server that ideally you administer as much as possible from Configuration Manager (for system-wide configuration) and Functional Setup Manager (for application-specific configuration). To achieve this, Configuration Manager can run ODI Console embedded within its web view, so it's possible to click on a load plan, view its status and drill into the individual steps all from your web browser, no ODI Studio install needed.
So how do these load plans get generated, when there's no DAC and no historic way within ODI of dynamically-generating load plans based on metadata? What enables this dynamic creation of load plans is a new utility included with BI Apps 11g called "Load Plan Generator", which ships as a JEE library within the WebLogic domain and as a plug-in to ODI Studio, for creating test load plans as part of the ETL customisation process.
The Load Plan Generator JEE back-end utility is described elsewhere in a fair bit of detail, but in summary you can think of the load plan generation process as going like this:
1. The ODI developer decides to create a new load plan, for loading data about inventories from EBS, for example. As shown in the screenshots before, domain values for this area within EBS will need to have been loaded prior to this, but assuming this is in place, the developer first selects one or more fact table groups, with each fact group a kind of “subject area” containing one or more data warehouse fact tables.
2. In the background, ODI doesn’t have the concept of fact groups, but it instead uses flex field metadata for each fact table to specify which fact group each belongs to. You can see what’s included in each fact group by looking at the warehouse domains view in Configuration Manager, and you can see the fact table flex fields in ODI Studio, when you view details of the table (or “model”) in the Designer navigator.
3. So when the BI Apps ODI developer tells Configuration Manager to create a new load plan, the steps it goes through and the metadata it consults look like this:
4. Giving you, in the end, a load plan with a standard set of phases, and optimally-selected and orchestrated load plan steps to load data into the required fact groups.
5. Then, once the load plan runs, you’ve got the same in-built restartability capabilities that you get in all other load plans, together with the concept of load plan instances, exceptions and so forth, so again as an ODI developer all of this is fairly obvious and fairly transparent to debug.
In the background, WebLogic Server hosts the ODI agent within an ODI-specific managed server and within the overall BI Apps WebLogic domain, with ODI’s security linked to WebLogic so that the same developer and operator IDs work across the whole system, and with a set of additional Fusion Middleware Security application roles for the various levels of administrative access.
So – you could say that BI Apps 11.1.1.7.1 is a “developer’s dream” for ODI professionals, as it brings the productivity and source system compatibility benefits of the BI Apps to the familiar, more efficient world of ODI. Everything is connected, and you can see where everything is stored in the various underlying repository tables. But – and this is probably the most obvious next thought from experienced BI Apps developers – how do you go about customising these ETL routines, adding new data sources, and upgrading the system over time? And where does the new integration with GoldenGate come in, giving us the ability to break-out of restrictive load windows and potentially move BI Apps into the cloud – watch this space for more details.

OBIA 11.1.1.8.1: Mapping GL Accounts to Group Account Numbers

Group Account Number Configuration is an important step in the configuration of Financial Analytics, as it determines the accuracy of the majority of metrics in the General Ledger and Profitability module. Group Accounts in combination with Financial Statement Item Codes are also leveraged in the GL reconciliation process, to ensure that subledger data reconciles with GL journal entries. This topic is discussed in more detail later in this section.
You can categorize your General Ledger accounts into specific group account numbers. The GROUP_ACCT_NUM field denotes the nature of the General Ledger accounts.
Note:
The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:
Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.
Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.
Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.
  • file_group_acct_codes_jde.csv - this file maps General Ledger accounts to group account codes.
The associations in this file are used in conjunction with the values defined for the following Domains:
  • W_GL_GROUP_ACCOUNT
  • W_GL_ACCT_CATEGORY
  • W_FIN_STMT
These Domain values and the mappings between them classify accounts into sub-groups, like Revenue and Cost of Goods Sold, as well as dividing accounts between Balance Sheet and Profit and Loss. Before you load your data, you must ensure that the account values are mapped consistently across these three collections. In particular, the GROUP_ACCOUNT_NUM domain that is specified in Oracle BI Applications Configuration Manager must contain valid members of the W_GL_GROUP_ACCOUNT Domain. Those values, in turn, are mapped to members of the W_GL_ACCT_CATEGORY and W_FIN_STMT Domains.
You can categorize the General Ledger accounts in Oracle's JD Edwards EnterpriseOne into specific group account numbers. The group account number is used during data extraction as well as front-end reporting.
The GROUP_ACCT_NUM field in the GL Account dimension table W_GL_ACCOUNT_D denotes the nature of the General Ledger accounts (for example, Cash account, AR account, Long Term Debt account, Payroll account). For a list of the Group Account Number domain values, see Oracle Business Analytics Warehouse Data Model Reference.
The mappings to General Ledger Accounts Numbers are important for both Profitability analysis and General Ledger analysis (for example, Balance Sheets).
Using the file_group_acct_codes_jde.csv file, you can specify which group account (among the available group accounts) the object account is associated with. The Company column in this CSV file is the actual company the object account belongs to.
In addition to the From Account and To Account range, the system uses the incoming company as a parameter for the association. If the incoming company has not been configured in the group account flat file, the system inserts 00000 as the default value for Company for lookups. You can choose to not configure group accounts for any company other than 00000 if you are using a single global chart of accounts. However, if you configure group accounts for additional companies, you must configure all possible From Account and To Account ranges for these companies. In addition, you must always configure the entire range of accounts for company 00000.
Table B-20 below shows example values specified in the file_group_acct_codes_jde.csv file.
Table B-20 Example of file_group_acct_codes_jde.csv
COMPANY   FROM ACCT   TO ACCT   GROUP_ACCT_NUM
00000     4100        4190      AP
00000     1200        1299      AR
00000     2120        2195      ACC DEPCN
00000     4200        4211      ACC LIAB
00000     1100        1121      CASH
00000     4900        4910      CMMN STOCK
00000     1401        1469      FG INV
00000     3990        3990      GOODWILL
00000     4690        4690      LT DEBT
00000     3900        3940      OTHER ASSET
00000     1310        1400      OTHER CA
00000     4212        4550      OTHER CL
00000     4950        4950      OTHER EQUITY
00000     4610        4685      OTHER LIAB
The Domain mapping from W_GL_GROUP_ACCOUNT to W_FIN_STMT specifies the relationship between a group account number and a Financial Statement Item code.
Table B-21 shows the Financial Statement Item codes to which Group Account Numbers must map, and their associated base fact tables.
Table B-21 Financial Statement Item Codes and Associated Base Fact Tables
Financial Statement Item Codes   Base Fact Tables
AP                               AP base fact (W_AP_XACT_F)
AR                               AR base fact (W_AR_XACT_F)
COGS                             Cost of Goods Sold base fact (W_GL_COGS_F)
REVENUE                          Revenue base fact (W_GL_REVN_F)
OTHERS                           GL Journal base fact (W_GL_OTHER_F)
By mapping your GL accounts against the group account numbers and then associating the group account number to a Financial Statement Item code, you have indirectly associated the GL account numbers to Financial Statement Item codes as well.

Add a New Metric in the Logical Table Fact – Fins – GL Journals Posted

If you add new Group Account Numbers to the file_group_acct_codes_<source system type>.csv file, then you must also use Oracle BI EE Administration Tool to add metrics to the Oracle BI repository to expose the new Group Account Numbers, as described in this section.
This example assumes that you have a new Group Account Number named Payroll (Domain member code 'PAYROLL'), and you want to add a new metric to the Presentation layer called 'Payroll Expense'.
To add a new metric in the logical table Fact – Fins – GL Journals Posted:
  1. Using Oracle BI EE Administration Tool, edit the BI metadata repository (that is, the RPD file).
    For example, the file OracleBIAnalyticsApps.rpd is located at:
    ORACLE_INSTANCE\bifoundation\OracleBIServerComponent\coreapplication_obis<n>\repository
    
  2. In the Business Model and Mapping layer:
    1. Expand the Core\Fact - Fins - GL Journals Posted\Sources\ folder and double click the Fact_W_GL_OTHER_GRPACCT_FSCLPRD_A source to display the Logical Table Source dialog.
    2. Display the Column Mapping tab.
    3. Click the 'Add New Column' icon to display the Logical Column dialog.
    4. Display the Column Source tab.
    5. Select the Derived from existing columns using an expression radio button, then click the 'Edit Expression' icon.
    6. In the Expression Builder, select 'Logical Tables' in the Category list.
    7. Use the Expression Builder to specify the following SQL statement:
      FILTER("Core"."Fact - Fins - GL Journals Posted"."Transaction Amount"
      USING "Core"."Dim - GL Account"."Group Account Number" = 'PAYROLL')
      
    8. Click OK to return to the Logical Column dialog.
  3. Click OK to save the details.
  4. To expose the new repository objects in end users' dashboards and reports, drag the new objects from the Business Model and Mapping layer to an appropriate folder in the Presentation layer.
To add a new metric in the logical table Fact – Fins – GL Balance:
  1. Using Oracle BI EE Administration Tool, edit the BI metadata repository (that is, the RPD file).
    For example, the file OracleBIAnalyticsApps.rpd is located at:
    ORACLE_INSTANCE\bifoundation\OracleBIServerComponent\coreapplication_obis<n>\repository
    
  2. In the Business Model and Mapping layer:
    1. Create a logical column named 'Payroll Expense' in logical table 'Fact – Fins – GL Balance'.
      For example, right-click the Core\Fact – Fins – GL Balance object and choose New Object, then Logical Column, to display the Logical Column dialog. Specify Payroll Expense in the Name field.
    2. In the Column Source tab, select the Derived from existing columns using an expression radio button.
    3. Click the Expression Builder icon to display Expression Builder.
    4. Use the Expression Builder to specify the following SQL statement:
      FILTER("Core"."Fact - Fins - GL Balance"."Activity Amount" USING "Core"."Dim - GL Account"."Group Account Number" = 'PAYROLL')
      
      The filter condition refers to the new Group Account Number 'PAYROLL'.
  3. Save the details.
  4. To expose the new repository objects in end users' dashboards and reports, drag the new objects from the Business Model and Mapping layer to an appropriate folder in the Presentation layer.

Map Oracle GL Account Numbers to Group Account Numbers

This section explains how to map Oracle General Ledger Account Numbers to Group Account Numbers.
Note:
If you add new Group Account Numbers to the file_group_acct_codes_<source system type>.csv file, you must also add metrics to the BI metadata repository (that is, the RPD file).
To map Oracle GL account numbers to group account numbers:
  1. Edit the file_group_acct_codes_ora.csv file.
    Note:
    The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:
    Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.
    Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.
    Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.
  2. For each Oracle GL account number that you want to map, create a new row in the file containing the following fields:
    • CHART OF ACCOUNTS ID: The ID of the GL chart of accounts.
    • FROM ACCT: The lower limit of the natural account range. This is based on the natural account segment of your GL accounts.
    • TO ACCT: The higher limit of the natural account range. This is based on the natural account segment of your GL accounts.
    • GROUP_ACCT_NUM: The group account number of the Oracle General Ledger account, as specified in the warehouse domain Group Account in Oracle BI Applications Configuration Manager. For example, 'AP' for Accounts Payables, 'CASH' for cash account, 'GEN PAYROLL' for payroll account, and so on.
    For example:
    101, 1110, 1110, CASH
    101, 1210, 1210, AR
    101, 1220, 1220, AR
    
    Note:
    You can optionally remove the unused rows from the CSV file.
  3. Ensure that the values that you specify in the file_group_acct_codes_ora.csv file are consistent with the values that are specified in Oracle BI Applications Configuration Manager for Group Accounts.
  4. Save and close the CSV file.

Overview of Mapping Oracle GL Accounts to Group Account Numbers

Group Account Number Configuration is an important step in the configuration of Financial Analytics, because it determines the accuracy of the majority of metrics in the General Ledger and Profitability module. Group Accounts in combination with Financial Statement Item Codes are also leveraged in the GL reconciliation process, to ensure that subledger data reconciles with GL journal entries. This topic is discussed in more detail later in this section.
You set up General Ledger accounts using the following configuration file:
  • file_group_acct_codes_ora.csv - this file maps General Ledger accounts to group account codes.
Note:
The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:
Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.
Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.
Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.
You can categorize your Oracle General Ledger accounts into specific group account numbers. The group account number is used during data extraction as well as front-end reporting. The GROUP_ACCT_NUM field in the GL Account dimension table W_GL_ACCOUNT_D denotes the nature of the General Ledger accounts (for example, cash account, payroll account). For a list of the Group Account Number domain values, see Oracle Business Analytics Warehouse Data Model Reference. The mappings to General Ledger Accounts Numbers are important for both Profitability analysis and General Ledger analysis (for example, Balance Sheets, Profit and Loss, Cash Flow statements).
The logic for assigning the group accounts is located in the file_group_acct_codes_ora.csv file. Table B-3 shows an example configuration of the file_group_acct_codes_ora.csv file.
Table B-3 Example Configuration of file_group_acct_codes_ora.csv File
CHART OF ACCOUNTS ID   FROM ACCT   TO ACCT   GROUP_ACCT_NUM
1                      101010      101099    CA
1                      131010      131939    FG INV
1                      152121      152401    RM INV
1                      171101      171901    WIP INV
1                      173001      173001    PPE
1                      240100      240120    ACC DEPCN
1                      261000      261100    INT EXP
1                      181011      181918    CASH
1                      251100      251120    ST BORR
In Table B-3, in the first row, all accounts within the account number range from 101010 to 101099 that have a Chart of Account (COA) ID equal to 1 are assigned to Current Asset (that is, CA). Each row maps all accounts within the specified account number range and within the given chart of account ID.
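Conceptually, the assignment works like a range lookup: each GL account whose natural account segment falls inside a configured range, for the matching chart of accounts ID, picks up that row's group account number. The SQL below is only a sketch of that matching rule, not the actual ODI mapping; the staging table and the join column names are assumptions:
  SELECT acct.INTEGRATION_ID,
         grp.GROUP_ACCT_NUM
  FROM   W_GL_ACCOUNT_D       acct
  JOIN   GROUP_ACCT_CODES_STG grp                              -- hypothetical staging of the CSV rows
    ON   acct.CHART_OF_ACCOUNTS_ID = grp.CHART_OF_ACCOUNTS_ID  -- column names are assumptions
   AND   acct.NATURAL_ACCOUNT BETWEEN grp.FROM_ACCT AND grp.TO_ACCT;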
If you need to create a new group of account numbers, you can create new rows in Oracle BI Applications Configuration Manager. You can then assign GL accounts to the new group of account numbers in the file_group_acct_codes_ora.csv file.
You must also add a new row in Oracle BI Applications Configuration Manager to map Financial Statement Item codes to the respective Base Table Facts. Table B-4 shows the Financial Statement Item codes to which Group Account Numbers must map, and their associated base fact tables.
Table B-4 Financial Statement Item Codes and Associated Base Fact Tables
Financial Statement Item Codes   Base Fact Tables
AP                               AP base fact (W_AP_XACT_F)
AR                               AR base fact (W_AR_XACT_F)
COGS                             Cost of Goods Sold base fact (W_GL_COGS_F)
REVENUE                          Revenue base fact (W_GL_REVN_F)
TAX                              Tax base fact (W_TAX_XACT_F) (see Footnote 1)
OTHERS                           GL Journal base fact (W_GL_OTHER_F)
Footnote 1: E-Business Suite adapters for Financial Analytics do not support the Tax base fact (W_TAX_XACT_F).
By mapping your GL accounts against the group account numbers and then associating the group account number to a Financial Statement Item code, you have indirectly associated the GL account numbers to Financial Statement Item codes as well. This association is important to perform GL reconciliation and to ensure the subledger data reconciles with GL journal entries. It is possible that after an invoice has been transferred to GL, a GL user might adjust that invoice in GL. In this scenario, it is important to ensure that the adjustment amount is reflected in the subledger base fact as well as balance tables. To determine such subledger transactions in GL, the reconciliation process uses Financial Statement Item codes.
Financial Statement Item codes are internal codes used by the ETL process to process GL journal records during the GL reconciliation process against the subledgers. When the ETL process reconciles a GL journal record, it looks at the Financial Statement Item code associated with the GL account that the journal is charged against, and then uses the value of that code to decide which base fact the GL journal should reconcile against. For example, when processing a GL journal that charges a GL account associated with the 'AP' Financial Statement Item code, the ETL process goes against the AP base fact table (W_AP_XACT_F) and tries to locate the corresponding matching AP accounting entry. If the GL account is associated with the 'REVENUE' Financial Statement Item code, the ETL process goes against the Revenue base fact table (W_GL_REVN_F) and tries to locate the corresponding matching Revenue accounting entry.
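As an illustration only (this is not the actual ETL code), the 'AP' case can be thought of as a lookup along the following lines. The GL_ACCOUNT_WID and ROW_WID join columns are assumptions for the sake of the sketch, and in practice every group account number that maps to the 'AP' Financial Statement Item code would be included, not just the literal value 'AP':
-- Conceptual sketch: locate AP accounting entries for GL accounts whose
-- group account number maps to the 'AP' Financial Statement Item code.
-- GL_ACCOUNT_WID and ROW_WID are assumed join columns.
SELECT ap.*
FROM   W_AP_XACT_F ap
WHERE  ap.GL_ACCOUNT_WID IN
       (SELECT acct.ROW_WID
        FROM   W_GL_ACCOUNT_D acct
        WHERE  acct.GROUP_ACCT_NUM = 'AP');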

Add Dates to the Order Cycle Time Table for Post-Load Processing

Before you add more dates, you need to understand how the Order Cycle Time table is populated. If you want to change the dates loaded into the Order Cycle Time table (W_SALES_CYCLE_LINE_F), you must modify the full load and incremental load interfaces that take the dates from the W_* tables and load them into the Cycle Time table.
To add dates to the Cycle Time table load:
  1. In ODI Designer Navigator, expand Models - Oracle BI Applications - Oracle BI Applications - Fact.
  2. Find W_SALES_CYCLE_LINE_F and add a column to store the date that you want to add (see the SQL sketch after these steps).
    For example, if you are loading the Validated on Date in the W_SALES_CYCLE_LINE_F table, then you need to create a new column, VALIDATED_ON_DT, and modify the target definition of the W_SALES_CYCLE_LINE_F table.
  3. Save the changes.
  4. Open Projects - BI Apps Project - Mappings - PLP folders.
  5. Find the PLP_SalesCycleLinesFact_Load folder and modify the interfaces under it to select the new column from any of the following source tables and load it into the W_SALES_CYCLE_LINE_F target table:
    • W_SALES_ORDER_LINE_F
    • W_SALES_INVOICE_LINE_F
    • W_SALES_PICK_LINE_F
    • W_SALES_SCHEDULE_LINE_F
  6. Modify the temp interfaces and the main interfaces for both a full load and an incremental load.
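The new column must also exist in the physical W_SALES_CYCLE_LINE_F table in the data warehouse. If it does not, a statement along the following lines adds it; this is a minimal sketch that assumes an Oracle warehouse database and uses the example column name VALIDATED_ON_DT from step 2 with a DATE datatype:
-- Sketch: add the example VALIDATED_ON_DT column to the physical warehouse table.
-- Assumes an Oracle database and a DATE datatype for the new column.
ALTER TABLE W_SALES_CYCLE_LINE_F ADD (VALIDATED_ON_DT DATE);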

Make Corrections to the Group Account Number Configuration for PeopleSoft

Note: Refer to 'How to set up Group Account Numbers for PeopleSoft' for general concepts about group account numbers and Financial Statement Item codes.
When a user maps a GL natural account to an incorrect group account number, incorrect accounting entries might be inserted into the fact table. For example, if the natural account 620000 is mistakenly classified under the 'AR' group account number when it should be classified under 'AP', the ETL program tries to reconcile all GL journals charged to account 620000 against subledger accounting records in the AR fact table (W_AR_XACT_F). Because these GL journal lines did not come from AR, the ETL program cannot find the corresponding subledger accounting records, so it inserts 'Manual' records into the AR fact table on the assumption that these GL journal lines are manual journal entries created directly in the GL system and charged to AR accounts.
To make corrections to the group account number configuration for PeopleSoft, correct the mapping of the GL natural account to the appropriate group account in the input CSV file, file_group_acct_codes_psft.csv.
If you add values, then you also need to update the BI metadata repository (that is, the RPD file).
Note:
The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:
Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.
Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.
Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.
For example, before correction, a CSV file has the following values (Incorrect Group Account Number assignment):
  • BUSINESS_UNIT = AUS01
  • FROM ACCT = 620000
  • TO ACCT = 620000
  • GROUP_ACCT_NUM = AR
After correction, account '620000' should now correctly point to 'AP' group account number, and the CSV file would have the following (corrected) values:
  • BUSINESS_UNIT = AUS01
  • FROM ACCT = 620000
  • TO ACCT = 620000
  • GROUP_ACCT_NUM = AP
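In file_group_acct_codes_psft.csv itself, the correction amounts to changing a single row. The following before-and-after sketch assumes the column order BUSINESS_UNIT, FROM ACCT, TO ACCT, GROUP_ACCT_NUM shown above; check the delivered file for the exact layout:
AUS01,620000,620000,AR    (before correction)
AUS01,620000,620000,AP    (after correction)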
Based on the group account corrections made in the CSV file, the next ETL run reassigns the group accounts correctly and fixes the entries that previous ETL runs made to the fact tables.

Deploy Objects in E-Business Suite for Exploding the Bill Of Materials

The Bill of Materials (BOM) functional area enables you to determine the profit margin of the components that comprise the finished goods. BOM enables you to keep up with the most viable vendors in terms of cost and profit, and to keep your sales organization aware of product delivery status, including shortages.
To deploy objects in E-Business Suite for exploding the BOM, ensure that the E-Business Suite source environment meets the minimum patch level for your version, as follows:
  • Customers with Oracle EBS version R12.2.x must be at or above patch level 17457141:R12.BOM.D.
  • Customers with Oracle EBS version R12.0.x or OPI patch set A must be at or above patch level 16507117:R12.OPI.A.
  • Customers with Oracle EBS version R12.1.x or OPI patch set B must be at or above patch level 16507117:R12.OPI.B.
  • Customers with Oracle EBS version 11i must be at or above patch level 16506948.
Refer to the System Requirements and Supported Platforms for Oracle Business Intelligence Applications for full information about supported patch levels for your source system.
Note: Systems at or above these minimum patch levels include the package OPI_OBIA_BOMPEXPL_WRAPPER_P or OBIA_BOMPEXPL_WRAPPER_P in the APPS schema, and include the following tables in the OPI or BOM schema with alias tables in the APPS schema:
  • OPI_OBIA_W_BOM_HEADER_DS or OBIA_W_BOM_HEADER_DS
  • OPI_OBIA_BOM_EXPLOSION or OBIA_BOM_EXPLOSION
  • OBIA_BOM_EXPLOSION_TEMP
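To confirm that these objects are present in your source environment, you can query the Oracle data dictionary. The following is a minimal sketch that assumes you are connected as a user with access to ALL_OBJECTS, for example the APPS user:
-- Check for the BOM explosion wrapper package and its supporting tables.
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_name IN ('OPI_OBIA_BOMPEXPL_WRAPPER_P', 'OBIA_BOMPEXPL_WRAPPER_P',
                       'OPI_OBIA_W_BOM_HEADER_DS', 'OBIA_W_BOM_HEADER_DS',
                       'OPI_OBIA_BOM_EXPLOSION', 'OBIA_BOM_EXPLOSION',
                       'OBIA_BOM_EXPLOSION_TEMP');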

How to Configure the Bill of Materials Explosion Options

The Bill of Materials (BOM) functional area enables you to analyze the components that comprise the finished goods. BOM enables you to determine how many products use a particular component. It also enables you to get visibility into the complete BOM hierarchy for a finished product.
Note: To run the ETL as the apps_read_only user, you must first run the following DCL commands from the APPS schema:
Grant insert on opi.opi_obia_w_bom_header_ds to &read_only_user;
Grant analyze any to &read_only_user;
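When prompted for the &read_only_user substitution variable, supply the name of your read-only user. For example, with the apps_read_only user mentioned above, the statements resolve to:
Grant insert on opi.opi_obia_w_bom_header_ds to apps_read_only;
Grant analyze any to apps_read_only;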
You can explode the BOM structure with three different options:
  • All. All the BOM components are exploded regardless of their effective date or disable date. To explode a BOM component is to expand the BOM tree structure.
  • Current. The incremental extract logic considers any changed components that are currently effective, any components that are effective after the last extraction date, or any components that are disabled after the last extraction date.
  • Current and Future. All the BOM components that are effective now or in the future are exploded. The disabled components are left out.
These options are controlled by the EXPLODE_OPTION variable. The EXPLODE_OPTION variable is preconfigured with a value of 2 (explode the Current BOM structure).
There are five different BOM types in a source system: 1 - Model, 2 - Option Class, 3 - Planning, 4 - Standard, and 5 - Product Family. By default, only the Standard BOM type is extracted and exploded. You can control this selection using the EBS_BOM_TYPE parameter.
The SDE_ORA_BOMItemFact_Header mapping invokes the OPI_OBIA_BOMPEXPL_P or OBIA_BOMPEXPL_P package in the EBS database to explode the BOM structure. Table B-2 lists the variables used to control the stored procedure.
Table B-2 Variables for the BOM Explosion Stored Procedure
Input Variable      Preconfigured Value   Description
BOM_OR_ENG          1                     1 = BOM; 2 = ENG
COMMIT_POINT        5000                  Number of records to trigger a commit.
COMP_CODE           Not applicable        This parameter is deprecated and no longer affects the functionality of the procedure.
CST_TYPE_ID         0                     This parameter is deprecated and no longer affects the functionality of the procedure.
EXPLODE_OPTION      2                     1 = All; 2 = Current; 3 = Current and Future
EXPL_QTY            1                     Explosion quantity.
IMPL_FLAG           1                     1 = Implemented Only; 2 = Implemented and Non-implemented
LEVELS_TO_EXPLODE   10                    Number of levels to explode.
MODULE              2                     1 = Costing; 2 = BOM; 3 = Order Entry; 4 = ATO; 5 = WSM
ORDER_BY            1                     Controls the order of the records. 1 = Operation Sequence Number, Item Number; 2 = Item Number, Operation Sequence Number.
PLAN_FACTOR_FLAG    2                     1 = Yes; 2 = No
RELEASE_OPTION      0                     0 = Include Revised Items with a status of 6 (Implemented); 1 = Include Revised Items with statuses 4, 6, and 7 (Scheduled, Implemented, and Released); 2 = Include Revised Items with statuses 1, 4, 6, and 7 (Open, Scheduled, Implemented, and Released); 3 = Include items regardless of status.
STD_COMP_FLAG       0                     0 = Exclude all components; 1 = Explode only standard components; 2 = Explode all components; 3 = Explode only optional components. Note: STD_COMP_FLAG is used only when MODULE = 3 (Order Entry).
UNIT_NUMBER         Not applicable        When entered, limits the components exploded to the specified unit.
VERIFY_FLAG         0                     This parameter is deprecated and no longer affects the functionality of the procedure.