Saturday 25 April 2020

And then there's the Waterfall Chart in PBCS

Ever found yourself looking to create a Waterfall chart in PBCS Dashboards only to be left disappointed?  Sure, you can have this done when Management Reporting eventually gets rolled into PBCS to replace Financial Reporting.  What if you need to have the chart now?  Well, you've come to the right blog!  I've managed to create one for one of my projects.  So, hang on tight and I'll walk you through how I got PBCS to produce the Waterfall Chart.



The above chart visualises the evolution of the selected account (Operating Costs in this case) in Dec from what it was in Jan across all the entities.  The extreme left and right bars (in blue) are the anchors showing the starting and ending positions of the Operating Costs for Total Australia, while the bars in between represent the movements (up or down) for each child entity of Australia; increases are shown in green and decreases in red.

You might be wondering what's the big fuss about this?  It looks just like the normal column chart that we can already produce in PBCS.  You're right.  It is a column chart but with a twist!

The twist...

You see, unlike the column chart we are used to, the movement bars need to be plotted relative to the position of the previous bar.  For instance, the bar for Victoria (second from left) needs to begin from where the previous bar ends (in this case at $226M for Australia in Dec FY18).  The third bar (South Australia) needs to begin where Victoria left off, and so on and so forth.  How can we do that with a column chart, which always begins plotting from $0 on the Y-axis?  So how can we trick the chart to do what we want?  The secret lies in manipulating the empty bars below the red and the green bars.  Notice that I have set the background to a light grey?  That's not for aesthetics: there are hidden bars, coloured the same as the background, plotted below the red and green bars, and they all start from 0.  This gives you the illusion that the movement bars are plotted relative to the bar on their immediate left.  Gotcha!  The screenshot below shows you the hidden bars (which I have set to black here to reveal them).



You might also be wondering why I didn't simply set the background and the hidden bars to transparent?  Well, I did, but unfortunately they turned black the next time I accessed the dashboard.  A little bug for Oracle to solve.  Now let's dive into the details of this solution, shall we?

Nuts and Bolts

This example requires users to make the following selections to perform the comparison:

  • Comparison Scenario 1 & 2
  • Comparison Version 1 & 2
  • Comparison Year 1 & 2
  • Comparison Period 1 & 2
  • Parent Entity
  • Account
The chart then compares the selected account across the children of the selected entity between the Comparison 1 and Comparison 2 data sets.  You could vary this to compare across other dimensions that you may have (e.g. Cost Centres, Departments, etc.).


Remember how I mentioned manipulating the blank bars earlier?  Well, there are other elements that we need to handle too, and here's the list of what's required:

  • Anchor - the starting and ending values for the chart.
  • Delta - the difference between Comparison 1 and 2.
  • Increase - contains the absolute positive delta value.
  • Decrease - contains the absolute negative delta value.
  • Blank - the value of this element is dependent on the value in the previous bar (we will discuss the logic in more detail later in this blog).
These elements need to be created in a separate dimension and they are in the Measures dimension in this example.

A business rule is set to launch each time the dashboard is loaded to ensure the latest numbers are shown.  Let's examine the rule.

The Heart of the Solution...

In a nutshell, the rule covers the following processes:


  • Step 1 - Populate all relevant measures and accounts for Comparison 1 Scenario, Version, Year, Period.
  • Step 2 - Populate all relevant measures and accounts for Comparison 2 Scenario, Version, Year, Period.
  • Step 3 - Calculate the Delta, Increase, Decrease, and Blanks for the Waterfall Chart.

Step 1





Step 1 - Comparison 1
  • Step 1a - Clear Previously Populated Values
  • Step 1b - Populate Anchor Measure (Parent Entity Only)
  • Step 1c - Transfer the Income Statement Values to the Waterfall Accounts (Child Entity)
We start off by clearing data from the waterfall-related measures to ensure that all previously populated numbers are removed.  This is followed by populating the anchor measure with the selected parent entity value for each of the Income Statement accounts for the Comparison 1 data set (Scenario, Version, Year, and Period).  We finally transfer the Income Statement account values to their respective Waterfall accounts.  Such accounts are required should you have other Waterfall Charts, to prevent one waterfall chart from overriding the data of another.  This example demonstrates the waterfall chart for entities, so I've created a set of waterfall accounts for entity, for instance:
  • NPAT
  • Revenue
  • Operating Costs
  • Non-Operating Costs
  • etc..
You'd create another set of waterfall accounts for another chart, e.g. another set for Cost Centre.

Step 2



Step 2 - Comparison 2
  • Step 2a - Clear Previously Populated Values
  • Step 2b - Populate Anchor Measure (Parent Entity Only)
  • Step 2c - Transfer the Income Statement Values to the Waterfall Accounts (Child Entity)
These are identical to the processes described in Step 1; the difference is that in Step 2 we are dealing with the Comparison 2 data set.

Step 3



Step 3 - Calculate the Measures for the Waterfall Chart
  • Step 3a - Calculate the Delta
  • Step 3b - Determine the Increase/Decrease
  • Step 3c - Create Blocks
  • Step 3d - Determine the Blanks
This step deals with the logic to produce the movement bars.  We begin by calculating the difference between Comparison data sets 1 and 2 for all the children of the selected entity.  The delta is then evaluated to assign its absolute value (without signs) to either the increase or the decrease measure.  Once done, we proceed to create the necessary blocks before we determine the values for the blank bars.  Here, we first need to determine whether the anchor value is positive or negative, as the formula to calculate the blank differs for a positive anchor and a negative one.  Let's review the logic for each:

Positive Anchor

For the first child of the selected entity, the blank measure is derived from the starting anchor value.  So, in this example, Victoria (the first child in the Australia entity hierarchy) will have the anchor value minus its own decrease assigned to its blank measure.



All subsequent entities will have the following formula to calculate the blank measure:


Negative Anchor

The logic for the first child of the selected entity with a negative anchor is shown below.  Differences are highlighted:



The blank calculations for the remainder of the child entities, again with differences highlighted:



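Putting the positive-anchor logic together, here is a minimal sketch of the Blank/Increase/Decrease arithmetic in plain Python rather than the actual business rule (the function and field names are mine, and the negative-anchor case would mirror this below the axis):

```python
def waterfall_series(start_anchor, deltas):
    """Compute the Blank, Increase, and Decrease bars for a waterfall
    with a positive starting anchor.

    Each movement bar is stacked on an invisible Blank bar so that it
    appears to start where the previous bar ended.
    """
    rows = []
    running = start_anchor  # where the previous bar ended
    for delta in deltas:
        increase = max(delta, 0)
        decrease = -min(delta, 0)  # absolute value of a negative delta
        # An increase sits on top of the running total; a decrease hangs
        # below it, so its Blank is the running total minus the decrease.
        rows.append({"Blank": running - decrease,
                     "Increase": increase,
                     "Decrease": decrease})
        running += delta
    return rows, running  # `running` ends up at the ending anchor
```

For the chart above, `start_anchor` would be the $226M starting anchor and each delta a child entity's movement; the returned running total should equal the ending anchor bar.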
Forms and Dashboard

To produce the chart in a Dashboard, we need to create two forms: one to show the grid and the other to plot the Waterfall Chart.  The form to show the grid has the IChildren of the selected entity across the columns with 3 measures in the rows, namely:
  • Anchor
  • Increase
  • Decrease
The form for the chart, on the other hand, requires the same settings for the columns, but the rows contain the following measures:
  • Anchor
  • Blank
  • Increase
  • Decrease
For the dashboard, create one and include the above 2 forms, with the latter form (for the chart) set with a Chart Type of Column.  Remember to change the colour for the data series:
  • Series 1 (Anchor) - Blue
  • Series 2 (Blank) - Light Grey
  • Series 3 (Increase) - Green
  • Series 4 (Decrease) - Red
And please remember to set the background colour for the chart to be Light Grey as well so that the Blank bar appears hidden.




There you go.  The complete solution to produce the Waterfall Chart.

Before we go, some parting remarks...

Did you realise that this example covers only cases where the starting and ending numbers (anchors) share the same sign?  It works perfectly when both anchors are positive or both are negative.  Additional logic and measures are required to address other data conditions such as:
  • Starting anchor is positive and ending anchor is negative or vice versa.
  • A movement crosses from positive to negative or vice versa.
To keep this post short and precise, I've decided to start off with the easy stuff and cover the complex cases in another post.  So, stay tuned for my next post where I'll discuss the logic to cover the above conditions.  Till then, stay safe and stay home! #Covid-19

Thursday 18 July 2019

Data Phasing and Allocation

Today, I'd like to discuss techniques I've used to implement rules for cost allocation and data phasing.  Sticking to the "no hard-coding" mantra, here are some of the methods I've used in my PBCS projects.

The examples in this blog rely on a PBCS application with the following dimensions and members:
  • Entity
  • Cost Centre
  • Account
  • Measure
  • Period
  • Scenario
  • Version
  • Cycle
  • Cycle_Line
All the dimensions contain members you would usually find in any PBCS application.  The exceptions are in the Measure, Cycle, and Cycle_Line dimensions which I'll discuss in more detail in sections below.

Phasing of Plan Data

Introduction

More often than not, you'd find yourself getting requests to build planning models that are quick to prepare, and the most requested function I've found is the phasing of budget or forecast data in the outer years based on trends derived from Actuals.  In this instance, planners enter their annual planned numbers and rely on a business rule to phase those plans across the months of the outer years based on the historical trends they have selected.  This ensures the plans reflect the actual trend.  In one such implementation of mine, the client requested a phasing model that allows them to assign one of the following phasing methods to each account they wish to phase:

  • Last Year Actual
  • Last 2 Years Actual Average
  • Last 3 Years Actual Average
Here's a sample of how the phasing works:

In the example above, the Annual Amount of $20,000 for "Acc001" in FY19 is phased based on "Last Year's Actual" while "Acc004" uses the "Last 2-Years Actual Average" as the phasing basis (third row of "Acc004").  So, for "Acc001" the calculation simply takes the "Amount to be Phased", multiplies it by the monthly last-year actual number for "Acc001", and divides by the total last-year actual amount for "Acc001".  The formula is shown below:

While "Acc004" uses the Average of Last 2 Years' Actual to phase the annual data.
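Both methods boil down to the same proportional spread; here is a minimal sketch of the arithmetic in plain Python (the real logic lives in the business rule, and the function and argument names are my own):

```python
def phase_annual_amount(annual_amount, actual_years):
    """Spread an annual amount across 12 months in proportion to the
    average monthly profile of one or more years of actuals.

    actual_years: one 12-element list of monthly actuals per year of
    history -- pass one list for "Last Year Actual", two for the
    2-year average, three for the 3-year average.
    """
    # Average the actuals month by month across the selected years.
    profile = [sum(year[m] for year in actual_years) / len(actual_years)
               for m in range(12)]
    total = sum(profile)
    # Each month gets its share of the annual amount based on the trend.
    return [annual_amount * month / total for month in profile]
```

By construction the phased months re-total to the annual amount, which is the invariant the business rule must also preserve.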

Here's what I did:

Tagging of Phasing Method

A webform was created to expose the relevant accounts that are subject to the phasing logic, with a measure (associated with a Phasing_Method smartlist) displayed in the column.  The Phasing_Method smartlist contains the 3 methods mentioned above.  This form gives users the flexibility to vary the phasing method by account.

Another form was defined to capture the annual planned amount and this amount is captured against the "BegBalance" period member.

The Rule

The phasing of the data is done through a business rule that evaluates the phasing method associated with each account and then applies the necessary logic to spread the annual plan data across the periods using the user-selected basis.  Here's a snapshot of the business rule:

Inter & Intra-Company Cost Allocation

Introduction

Another use case for the above implementation is cost allocation.  I realise that I have written a similar post about cost allocation when discussing reverse-mapping techniques; you can read the post here.  In this blog, I have expanded the description to include some examples to further illustrate how the allocation model works.  So, read on.

In building such a model for one of my clients, I was determined to build one that gives them full control over how the allocation is performed.  By this, I mean one that allows the users to determine the source of costs to be allocated and the targets for the allocated costs.  The model also accepts the allocation ratio for each source/target combination.  The spreadsheet below illustrates the allocation of costs from Entity A Cost Centre 001 to Entity A Cost Centres 002, 003 & 004.



And here's a sample of the Inter-Company allocation where I will allocate part of the cost of Entity A to entities B, C, & D.  Note that 40% of the cost from Entity A is retained within the same entity.

Now, let's examine the building blocks for this model:

Dimensions & Members

As mentioned earlier, I'll focus on the Measure, Cycle, and Cycle_Line dimensions in this section.

Measure Dimension

The main function of this dimension is to enable us to break an account down further to delineate the origin of a piece of data.  Consider the following hierarchy in the Measure dimension:

  • Measure
    • Total_PL
      • Total_PL_Before_Alloc
        • Loaded_Amount
        • Direct_Amount
        • Phased_Amount
        • Workforce_Amount
        • Project_Amount
      • Allocation_Amount
        • Alloc_In
        • Alloc_Out
    • Allocation_Drivers
      • Phasing_Method
      • Cycle_Description
      • Source_Entity
      • Source_Cost_Centre
      • Source_Account
      • Target_Entity
      • Target_Cost_Centre
      • Target_Account
      • Alloc_Ratio
The "Loaded_Amount" member is used to store data loaded from external data sources (e.g. Actuals from ERP).  The "Direct_Amount" member is an input member capturing inputs from users during the planning process, while the "Workforce_Amount" and "Project_Amount" members store the data coming from the Workforce and Project models respectively.  The "Allocation_Amount" member, which is further divided into "Alloc_In" and "Alloc_Out", is used in the Allocation model.  All these members roll up to "Total_PL" to arrive at the total P&L number.

Extending on the example discussed in the introduction, I am adding the Measure dimension to the Intra-Company allocation table:

The measure "Total_PL_Before_Alloc" is a roll-up of amounts from "Loaded_Amount", "Direct_Amount", "Workforce_Amount", and "Project_Amount".  The example above shows that the last 3 members ("Direct_Amount", "Workforce_Amount", and "Project_Amount") contain values of $400, $300, and $300, and therefore "Total_PL_Before_Alloc", being the parent, holds the sum of $1,000.  We will use this roll-up as our source for allocation.

As a result of the allocation process, the model allocates to the target Cost Centres (002, 003 & 004) based upon the user-defined ratios, and the allocated numbers are stored in the "Alloc_In" measure.  The whole amount from the source "Total_PL_Before_Alloc" is also reversed and posted to the source Entity/Cost Centre/Account combination, with the reversal (-$1,000 in this example) stored in the "Alloc_Out" measure.  So, when reported at the "Total_PL" level, the Overheads from the source Entity/Cost Centre combination (Entity A, Cost Centre 001) will be fully allocated to the respective targets because the sum of "Total_PL_Before_Alloc" ($1,000) and "Allocation_Amount" (-$1,000) results in zero.
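Numerically, the Alloc_In/Alloc_Out netting works like this; a minimal Python sketch (the names are mine, not the application's):

```python
def allocate(source_amount, ratios):
    """Allocate a source amount to targets and post the offsetting reversal.

    ratios: {target: ratio} -- the ratios are expected to sum to 1.0.
    Returns the Alloc_In postings per target and the Alloc_Out reversal
    posted back against the source combination.
    """
    alloc_in = {target: source_amount * ratio for target, ratio in ratios.items()}
    alloc_out = -source_amount  # reverses Total_PL_Before_Alloc at the source
    return alloc_in, alloc_out
```

With the $1,000 example, the Alloc_In amounts for Cost Centres 002, 003 & 004 total $1,000 and Alloc_Out is -$1,000, so the source nets to zero at the Total_PL level.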


Cycle & Cycle_Line Dimension

These dimensions allow users to define different allocation rules (in Cycle) and multiple lines per allocation rule (Cycle_Line).  The members in both the dimensions are just running sequential numbers.

  • Cycle
    • C001
    • C002
    • C003
    • ...

  • Cycle_Line
    • CL001
    • CL002
    • CL003
    • ...

Source/Target & Ratios Definition

As mentioned in the introduction, this model serves intra-company as well as inter-company allocation.  The mapping table that I've defined via a webform captures not only the source and target accounts, but also the source/target Entity & Cost Centre combinations.  Here's an illustration of the webform defined to capture the source/target combinations:

The top part of the diagram shows the form to define the Allocation Cycle while the bottom part shows the detailed allocation lines for each allocation cycle.  Notice that both forms capture the combinations against the NA members for Entity, Cost Centre, and Account.  You might be wondering how I then capture the different combinations of source and target Entity, Cost Centre, and Account?  The answer lies in smartlists (see the next section for details).

Columns 3 through 8 of the Allocation Line form are measures with smartlists tagged to each of the members.  This enables users to pick the source/target entity, cost centre, and account of their choice from the associated smartlists.  Column 9 is an input column allowing users to enter the allocation ratio for each allocation line.  The above definition reflects the Intra-Company Allocation example mentioned in the introduction earlier.


Smartlists

The following hierarchy-driven smartlists are required to support the allocation model.  The smartlists will be assigned to the Allocation_Driver members in the Measure dimension:

  • Source Entity
  • Source Cost Centre
  • Source Account
If you have Target Entities, Cost Centres, or Accounts that are different from the source, you'd also need to define Target smartlists for them:
  • Target Entity
  • Target Cost Centre
  • Target Account

A sample of a hierarchy-driven smartlist is shown below.  Please refer to my blog on Hierarchy-Driven Smartlists here for more in-depth description on the process to build such smartlist.

The Business Rule

To code the allocation logic, I have decided to use the @XWRITE function.  The cool thing about the model is that the rule is able to create the necessary target blocks based on what the users have selected for their source/target combinations via the webform.  This helps minimise any chance of a block explosion, where blocks are created unnecessarily and in turn adversely affect the performance of the cube in general.  Keep a look out for my post on block creation, which is still a work in progress.  Also, the fact that all the source/target combinations are captured against Entity_NA, Cost_Centre_NA, and Account_NA means that the rule will run faster as it does not need to cycle through all the different entity, cost centre, and account base members.  My rule is fixed to the NA members only, using @XWRITE to create the necessary blocks and perform the required allocation.  Here's the rule:

Notice that I've used the @XWRITE function to both create the blocks and perform the allocation.  The first part of the rule posts the reversal amount to the Source Entity, Cost Centre, and Account combination and to the "Alloc_Out" measure.  The value of "Total_PL_Before_Alloc" is multiplied by -1 before posting the results.

The second section of the rule deals with the allocation of the "Total_PL_Before_Alloc" amount from the Source Entity, Cost Centre, and Account combination to the Target Entity, Cost Centre, and Account combination and to the "Alloc_In" measure.  The value of "Total_PL_Before_Alloc" is multiplied by "Alloc_Ratio" before posting the results to the targets.
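The table-driven shape of the rule can be sketched in plain Python standing in for the @XWRITE calls: each cycle line captured on the form contributes one ratioed target posting, and each distinct source gets one full reversal (the record layout and function names below are my own invention):

```python
def build_postings(cycle_lines, get_source_amount):
    """Turn user-maintained allocation lines into Alloc_Out/Alloc_In postings.

    cycle_lines: records resolved from the smartlist-driven form, e.g.
        {"src": ("EntityA", "CC001", "Overheads"),
         "tgt": ("EntityA", "CC002", "Overheads"), "ratio": 0.25}
    get_source_amount: looks up Total_PL_Before_Alloc for a source tuple.
    """
    postings = []
    reversed_sources = set()
    for line in cycle_lines:
        amount = get_source_amount(line["src"])
        # Post the full reversal once per source combination (Alloc_Out)...
        if line["src"] not in reversed_sources:
            postings.append((line["src"], "Alloc_Out", -amount))
            reversed_sources.add(line["src"])
        # ...and write the ratioed share to each target (Alloc_In).
        postings.append((line["tgt"], "Alloc_In", amount * line["ratio"]))
    return postings
```

Provided the ratios on a source's lines sum to 1 (including any line that retains a share within the source entity), the postings net to zero overall, which is exactly the Total_PL behaviour described above.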

That's it folks for this blog.  Hope you've enjoyed reading this as much I have enjoyed writing it.  Do leave a comment and let me know what else you want me to cover.

Until then, sayonara for now.


Wednesday 26 June 2019

To Hard-Code or Not To-Hard-Code - The Sequel

Keeping to the theme from my previous post, I thought I'd share more tips and tricks on how to avoid hard-coding your business rules.  In this post, I'll share with you the following techniques that I have used in the various projects I've worked on:

Reverse-mapping using Smartlist

This essentially relies on dimension-based smartlists to resolve to the intended member.  This option gives the users the ability to maintain the mappings via a webform without the underlying business rules hard-coding any target members.  This method can be applied to any allocation-type business rules.  Details of this technique can be found in this post and this one.

User Variables & Planning Expressions

The User Variables feature in Planning can be a useful tool to give users more control over what they wish to view in a webform.  Forms configured to use these variables will give users the ability to dynamically select a member at runtime (e.g. a parent member of a hierarchy) which could then be used to display the descendants of the selected parent.  To take it one step further, we can use the user variables in business rules as well.  This helps us to limit the calculation to only the selected member(s), thus, making the rule run faster.

Let's see how this can be achieved in the business rule.  The trick here is to use a Planning Expression to resolve the selected member of the User Variable.  The Planning Expression in question is [[PlanningFunctions.getUserVarValue("User_Variable")]].


Consider this.  I have defined the following User Variables:

And I wish to limit my calculation scope to level 0 members of the selected Entity and Cost Centre parents.  Here's how I would code my rule:

The "PlanningFunctions.getUserVarValue" Planning Expression will resolve to the base members of the entity and cost centre that the user has selected at runtime.  You could also use other Planning Expressions to further your quest to avoid any hard-coding in your rules.  Here are some other examples of Planning Expressions that you could possibly use:


Periods

The following is a list of Period-related Planning Expressions:

I have personally not used any of the above Planning Expressions except for [[Period("FIRST_PERIOD")]] & [[Period("LAST_PERIOD")]].  Here's an example of how I've used them in my rule to control how I get the prior period value for an account.
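For context, the reason those two expressions are handy is the year boundary: the prior period of the first period in the year lives in the last period of the previous year.  A rough Python sketch of that wrap-around logic (generic month names for illustration, not the actual rule):

```python
def prior_period(period, year, periods):
    """Return the (period, year) immediately before the given one.

    periods: the application's period order -- effectively the
    FIRST_PERIOD .. LAST_PERIOD range of the app.
    """
    idx = periods.index(period)
    if idx == 0:
        # First period of the year: wrap to the last period of the prior year.
        return periods[-1], year - 1
    return periods[idx - 1], year
```

In the rule itself, comparing the current period against [[Period("FIRST_PERIOD")]] plays the role of the `idx == 0` test, and [[Period("LAST_PERIOD")]] supplies the wrap-around target.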



Scenarios

Here's a list of Planning Expressions linked to Scenarios:

The last time I used the above expressions, they required the Scenario Name to be a string (i.e. hard-coded), but the latest reference guide from Oracle shows that the expressions now accept variables such as Runtime Prompts (RTPs).  Try it out and let me know whether we can now use variables.  It would be awesome if this is true.

Here's an example code I've lifted from the Oracle's guide.  Looks like it accepts RTPs.

That's all folks for this blog post.  Let me know if you find this post useful; any other feedback is also welcome.  Thank you and see you in my next post.

Friday 21 June 2019

To hard-code or not to hard-code?


I'm not a fan of hard-coding my business rules as I find that more often than not, those hard-coded beasts will turn around and bite you.  You'd struggle to remember which hard-coded rules need updating when new hierarchies are added to your planning application.  Sure, hard-coded scripts tend to be easier to read but I'd still steer away from having hard-coding where possible.

Enough said, let's get down to business...  In this post, I'd like to share with you a technique I often use to resolve an assumption that is set globally for a group of common parent members (e.g. Entities).  Normally, one would create an "NA" member to hold the global assumptions for these members, as shown below:

I could write a rule that hard-codes to ANZGROUP_NA to resolve the global assumption like this:

Or, I could create a rule that will dynamically resolve to ANZGROUP_NA. 

The code resolves to ANZGROUP_NA by first resolving to the ANZGROUP node using the @ANCEST("Entity",2) function, then concatenating the resulting member name with the suffix "_NA".  @MEMBER then converts the concatenated string to a member before it is used in the cross-dim with "Annual_Salary".  The series of functions results in the following cross-dim:

"Monthly Salary" = "Annual_Salary"->"ANZGROUP_NA" / 12;

This, of course, is a very simple example with the entity dimension having just 3 generations (Entity, Group, and the individual level-0 entities), and you might ask what's the big deal with hard-coding the rule to point to ANZGROUP_NA?  You're right, it isn't a big deal to have it hard-coded in this instance as the chances of adding another NA member to this hierarchy are quite minimal.  However, it would be a different conversation if we had an entity structure like the one shown below:
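The same dynamic resolution can be sketched outside of Essbase.  Given a parent map for the entity hierarchy, pick the ancestor at the desired generation and append the "_NA" suffix -- a toy Python stand-in for @MEMBER(@CONCATENATE(@NAME(@ANCEST("Entity", gen)), "_NA")), with a made-up hierarchy:

```python
def na_member(entity, parents, generation):
    """Resolve the assumption member for an entity: find its ancestor at
    the given generation (generation 1 = the dimension root) and append
    the "_NA" suffix.
    """
    # Walk up to the root to build the entity's ancestry chain.
    chain = [entity]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    chain.reverse()  # chain[0] is the dimension root, i.e. generation 1
    return chain[generation - 1] + "_NA"
```

With generation 2 this resolves to ANZGROUP_NA for every entity; switching to generation 3 (as discussed below for the multi-country hierarchy) resolves to the country-level NA member instead.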

Each of the country entities (i.e. AU and NZ) has its respective NA member to capture country-specific assumptions.  The hard-coded rule needs to be updated to evaluate whether the current entity being calculated is part of the AU or the NZ hierarchy and then apply the correct country-specific assumption.

Any subsequent addition to the entity dimension (i.e. adding new countries) will require the rule to be amended to also consider the newly added countries.  Now, compare it with the non-hard-coded rule.  You would update the code to use Generation 3, which will resolve to either "AU" or "NZ" before concatenating with the "_NA" suffix.

With just 3 lines of code, the rule will resolve to AU_NA for all entities under the AU hierarchy and to NZ_NA for all NZ entities.

With the above code, any subsequent addition of siblings to the AU or NZ nodes will NOT require you to amend your rules as it will work perfectly as long as you maintain the same hierarchy structure as we have for AU or NZ.

There! A short and sweet post.  Hope you found it useful and keep a look out for more tips and tricks on how to avoid hard-coding in your rules.  Till then, adios!

Sunday 16 September 2018

Integrating Oracle Financials Cloud into PBCS and a Drill Through back to source

First off, sorry for the long hiatus from blogging about Essbase.  To resume where I left off, I thought I'd share my first attempt at configuring the data integration between Oracle Financials Cloud and PBCS, with a drill through back to the data source.  Here goes...


Establish the link to Oracle Fusion in PBCS via Data Management
Start with setting up the connection to Oracle Fusion to allow us to then load the GL balances from the ERP system.  We will begin with adding a Source System in Data Management.  Select "Oracle Financials Cloud" from the "Source System Type" dropdown list and then click "Save".

Click on "Configure Source Connection" to define the login credentials and the URL of the source system.  The Oracle Fusion user you use here should be granted the "OA4F_FIN_GL_DETAIL_TRANSACTIONS_ANALYSIS_DUTY" role.  Fill in the details and click "Test Connection" to ensure a successful login to the source system.  Click "Configure" to conclude the configuration.

Click on the "Initialize" button to create the necessary Essbase cube required to stage the data extracted from Oracle Financials Cloud.

This step will create the Essbase cube that will appear in the Target Applications section of Data Management.


Create the Target Applications to store the extracted data
This section of Data Management contains all the target systems where the raw extracted data and the final mapped data are stored.  The "COA" is a Target Application created by Data Management when we initialized the connection between Oracle Financials Cloud and PBCS.  Leave the settings as they are.  We do not need to make any changes to the settings for "COA".


You will now need to manually add the Target Applications for the final destination of the extracted data.  We will define one for an Essbase ASO cube and another one for the PBCS BSO Planning cube.

Let's begin with the ASO cube called "FinRpt".  Here are the settings you need to be aware of to ensure that data is loaded to the ASO cube and that you have the ability to drill back to the source data.  Note that we need to select "Essbase" as the Type for an ASO Target Application.  Click the "Create Drill Region" check box for the relevant dimensions.  Please ensure that the following dimensions are tagged with the correct "Target Dimension Class":
  • Account_Segment should be tagged as Account
  • Cost_Centre_Segment should be tagged as Entity (if this dimension was defined as an Entity type dimension in PBCS)
  • Scenario should be tagged as Scenario (note that the Data Table Column Name for this dimension will disappear when it is tagged to the Scenario dimension class; this is fine, as the scenario in Data Management is derived from the Category code)
  • Version should be tagged as Version
  • Years should be tagged as Year

Failing to do this may result in an unsuccessful drill through.  Click "Save" when done.

Click on the "Application Options" tab and turn on the Drill Through.  Click "Save" when done.

Repeat the same for the Planning cube Target Application (in this example, named "Fin").

Define the Import Formats required
Now that you have the Target Applications defined, let's proceed to add the required Import Formats.  These allow you to define the links and dimension mappings between the source and target applications.  Let's add one for the FinRpt cube.


Just select the relevant Source Dimensions from the dropdown list and map them to the correct Target.  You can also define Expressions to pad the values with zeroes when importing into Data Management.  Click "Save" when done.  Repeat the same process for the Fin Planning cube.

Build the Locations
Locations allow you to create different data load rules and mappings using the same Import Format.  In the example, we will be creating three locations for the ERP_FinRpt Import Format and a single location for the ERP_FinPlan Import Format.

For ERP_FinRpt, we will create

  • ERP_FinRpt_PL (for P&L load)
  • ERP_FinRpt_BS_CB (for Balance Sheet Closing Balance load)
  • ERP_FinRpt_BS_Mvmt (for Balance Sheet Movement load)

We will create ERP_FinPlan_PL for P&L load.

Let's start with ERP_FinRpt_PL.  Add the location and link the ERP_FinRpt Import Format to this location.  Click "Save" when done.

Now repeat for the rest of the locations.  Remember to select the ERP_FinPlan Import Format for the ERP_FinPlan_PL location.

Confirm the period mappings
The Period Mapping is used to map to the correct Period and Year in the Target Application.  In this example, the application is defined with the Financial Year starting in July and ending in June the following year, so make sure that the Year is mapped correctly.  Click "Save" when done.

Build the Category Mappings (if required)
Categories in Data Management map to the scenario dimension in the Planning or ASO applications.  Data Management automatically creates several categories, and you can add new ones or update the ones it created as required.  Just make sure that the name of the Category matches the Scenario name in your Target Applications.  As usual, remember to click "Save" when you are done.

Create Data Load Rules
For each Location that you've defined, create a Data Load Rule and define the filters for each of the dimensions from the Source System.  Click "Save" when you are done.


When selecting the filters, please make sure you use fully qualified member names to avoid the duplicate member names that are possible in Oracle Financials Cloud.

Repeat for the rest of the locations that you've defined.


Configure the Dimension Mappings
Define the mappings like you would for any Data Management configuration for each of the locations created.  Remember to click "Save" when done.

Repeat the mappings for the rest of the dimensions.

Extract and Load the Data into PBCS
We are now ready to extract the data from Oracle Financials Cloud and load them into PBCS.  You can either do this via the Workbench or the Data Load Rule.  In this example, I'm loading the data via the Data Load Rule screen.  Click "Execute".

Fill in your selections then click "Run".

Drilling back to the source!  Woohoo!!
Now that we've loaded the data into PBCS, let's look at how we can drill through from the summary balances in PBCS to the underlying details that exist in Data Management.

Notice that each cell that is "drillable" has an indicator at the top-right of the cell.  Right-clicking on the drillable cell will reveal an option for you to "Drill Through".

Select "Drill Through" from the pop-up menu and the underlying details of the selected cell will be revealed.

The top half of the screen shows the data intersection of the selected cell while the bottom half of the screen shows the underlying records that make up the amount shown in the selected cell.

There you go, a step-by-step guide to integrating Oracle Financials Cloud data into PBCS with the ability to drill back to the data source.  Hope you find this blog useful.  Cheers!
