BPC450 - EN - Col18 - v1 SAP Business Planning and Consolidation Embedded Model
PARTICIPANT HANDBOOK
INSTRUCTOR-LED TRAINING
Course Version: 18
Course Duration: 3 Day(s)
e-book Duration: 4 Hours 55 Minutes
Material Number: 50151727
Course Overview
TARGET AUDIENCE
This course is intended for the following audiences:
Systems Architect
Project Manager
Application Consultant
Development Consultant
Lesson 1
Positioning BPC, Discussing Key Terms and Components
Lesson 2
Describing BPC Implementation Options
Lesson 3
Introducing SAP HANA
Lesson 4
Introducing BW and How it is Used for Planning
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Course Landscape
The figure, Training Landscape, only shows the main user interfaces. Keep in mind that data
can be consumed using the SAP BusinessObjects (BI) suite:
SAP Lumira Designer can be used as a Web-based planning application, including the
execution of planning functions.
Note:
There are no feature differences between 11.0 and 11.1. The main purpose of
Version 11.1 is to provide compatibility with BW/4HANA 2.0.
For this reason, this document will refer to the version being used as 11.0.
Note:
SAP Business Planning and Consolidation (SAP BPC) applications include:
SAP Business Planning and Consolidation, version for the Microsoft platform
When SAP Business Planning and Consolidation, version for SAP NetWeaver, is
shipped with embedded BW under SAP S/4HANA, it is referred to as SAP
Business Planning and Consolidation, add-on for SAP S/4HANA.
The table, Key Terms, outlines some of the key terms related to BPC Embedded
Consolidation:
Dimension: General term for master data.
Characteristic: BW term for master data.
Properties: Related fields. Equivalent to BW attributes.
Hierarchy: Rollup and selection mechanism.
Descriptions: BW term for text master data.
BW Query: A report definition with rows and columns. Contains no data.
CompositeProvider: An InfoProvider that is a view on other InfoProviders, or SAP
HANA information models. Replaces MultiProviders.
Models: The model types are Ownership, Rate, Standard, Financial, and Generic.
BPC Standard Consolidations reference, and require, the Ownership and Rate
model types.
Note:
BPC Standard was initially named Classic. BPC Embedded was initially named
Unified.
In version 11.0, there are two main options: standard and embedded. Standard is similar to
what we knew as SAP Business Planning and Consolidation 10.0, which uses account-based
tables (one data column). In contrast, the embedded model is based on SAP BW Integrated
Planning tables (multiple data columns). All functionality of SAP BW Integrated Planning
is already available in the embedded option. Most standard SAP BPC features are available in
the embedded option.
The Planning Application Kit (PAK) is a subset of BPC embedded functionality, and is used
within BPC BW/4HANA.
The decision of whether to use the standard or embedded option should be based on the
business requirements of each customer’s planning scenario. However, both models can be
used in parallel in one system. The same license applies for both standard and embedded.
SAP HANA software offers many advantages with regard to data acquisition. Smart Data
Access refers to accessing data virtually. The Enterprise Information Management solution
includes Smart Data Access, as well as Smart Data Integration. Operational Data
Provisioning is the new paradigm for loading data into BW-related tables. Data lakes refer to
unstructured data, such as Twitter feeds.
Note:
Workspaces allow advanced business users to do simple data loads and modeling
with BPC Embedded.
The table, BPC Standard vs Embedded Options, provides a comparison of BPC standard and
BPC embedded, according to a number of features.
The following list provides comments on the table, BPC Standard vs Embedded Options:
Database:
Self-explanatory.
BW Namespace:
BPC Standard BW objects have the following prefix: /cpmb/ ...
For example, an advanced DataStore Object id could be /CPMB/XXYYYABCD, with
similarly named characteristics. In BW, delivered objects have an id that begins with a
number. For example, 0COSTCENTER is the technical name for a cost center that is part
of the 0CCA1 advanced DataStore object. If you want to share or merge data from the
advanced DataStore objects 0CCA1 and 0CCA2, it is relatively easy because of the
common naming convention.
Data Integration:
As BPC Standard uses a reserved/unique namespace, sharing data with non-BPC
Standard BW tables is much more difficult.
Consolidation Engine:
All options use the same engine. This includes business rules and consolidation
components in the BPC Web Client, such as the consolidation monitor, controls monitor,
ownership manager, and journals.
Planning Engine:
The BPC Standard planning engine consists of the script logic engine, input forms, and the
data manager. Data is only locked when saving data input or running planning functions.
Integrated Planning has its own formula builder, called FOX code, and many other planning
functions. Data is locked when viewing in write mode and, therefore, provides the ability to
simulate. For example, you can run a currency translation function in analysis mode and, if
you are only testing, go back to the last saved state.
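The simulate-and-revert behavior described above can be pictured with a minimal sketch (conceptual Python only; this is not an SAP API — changes accumulate in a buffer, "save" persists them, and "reset" discards them):

```python
# Conceptual sketch of the Integrated Planning buffer (not an SAP API).
# Changes are collected in a buffer; "save" persists them, "reset"
# discards them and returns to the last saved state.

class PlanningBuffer:
    def __init__(self, saved_data):
        self.saved = dict(saved_data)   # last saved state in the database
        self.buffer = {}                # unsaved changes (simulation)

    def revalue(self, pct):
        """Simulate a revaluation planning function in the buffer only."""
        for key, value in self.saved.items():
            self.buffer[key] = value * (1 + pct / 100)

    def reset(self):
        """Discard the simulation and return to the last saved state."""
        self.buffer.clear()

    def save(self):
        """Persist the buffered changes."""
        self.saved.update(self.buffer)
        self.buffer.clear()

    def current(self):
        return {**self.saved, **self.buffer}

plan = PlanningBuffer({("CC100", "2024"): 1000.0})
plan.revalue(10)                      # simulate a +10% revaluation
plan.reset()                          # only testing: go back
assert plan.current() == {("CC100", "2024"): 1000.0}
```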
Excel Add-In:
The Enterprise Performance Management add-in has baked-in functionality for BPC
Standard. It can only be used with BPC Embedded with permission from SAP.
IT Support:
As BPC Standard does not use IP, for example, and the BW objects are created very easily
from the BPC Web Client, less IT support is required.
BW Queries:
These are required in Integrated Planning to perform manual input planning. BW Queries
make up the OLAP layer of BW, and provide a rich layer of reporting features, including
variables, structures, and calculated key figures. They are maintained primarily by IT or
more technical business people.
Data can be exported from BPC to SAP Analytics Cloud, so that advanced analysis and
forecasting tools can be used. Enhanced data can then be imported back into BPC.
One of the most appealing features of BPC is that you can do so many activities from the
Excel interface, including reporting, analysis, planning, and data loads.
Other benefits of the planning and consolidation application are as follows:
Business process-centric.
Configurable Business Process Flows (BPFs) guide users and drive process consistency.
For more information on SAP BPC 11.0, see the official documentation: https://
help.sap.com/viewer/p/SAP_BPC_VERSION_BW4HANA
For information on what is not possible with SAP BPC 11.0, consult the following: https://
blogs.sap.com/2017/11/28/whats-not-possible-with-sap-bpc-11.0/
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Implementation Options
Path 1
Path 2
Multiple BPC Standard systems on any database: consolidate the systems and migrate
to BPC 11.0.
This option, most importantly, involves migration to the SAP HANA database. The BPC
environment, and related objects, can be backed up via transaction UJBR. They can then
be restored in the BW/4HANA system. The migration path is from 10.1 to 11.0.
Path 3
New implementation.
This is a bare metal scenario. There are a number of starter kits for BPC Standard, such as
Environment Shell and the USGAAP Starter kit. For BPC Embedded, there are no starter
kits. The only delivered content is the function types.
Install the SAP BW/4HANA starter add-on in the BW powered by SAP HANA system.
Planning objects, queries, InfoObjects, and analysis workbooks, for example, do not need
to be converted as long as you keep the MultiProvider ID the same as the
CompositeProvider ID.
When you install the starter add-on, SAP BW objects for data modeling, as well as processes
and user interfaces, are primed for use with an SAP HANA database. Data modeling is
restricted to the small number of objects that are suited for modeling the layer architecture of
a data warehouse on SAP HANA (LSA++). As the name suggests, this add-on provides an
intermediate step during a migration to SAP BW/4HANA.
Carry out the database migration and conversion of legacy BW objects to BW/4HANA.
Set up the BW security – this includes the BPC-related BW standard authorization objects
and the analysis authorization objects that can be enhanced in BPC.
Carry out the database migration and conversion of legacy BW objects to BW/4HANA.
Create an EDW model, or use an existing EDW model, in BW. Create the embedded
environment in BPC.
Create teams, and maintain security settings and assignments in the NW backend.
Set up a work status, and move the work status locking data from the 10.0 table to the 10.1
embedded tables.
Enable data audit. Data audit logs from 10.0 cannot be moved to 10.1 embedded.
Set up BPF templates. Existing BPF instances from 10.0 cannot be moved to 10.1
embedded.
This scenario might be rare, but it could happen that a customer is using BPC 10.0 and wants
better EDW integration. This scenario essentially starts from bare metal, as the 10.0
objects in BW have their own namespace. Consequently, there is no existing tool to migrate a
BPC 10.0 NW environment to an 11.0 Embedded environment. In addition, a 10.0 customer
could be using a conventional database or SAP HANA. The steps above do not include the
database migration.
The key concept is to use, in the BPC stand-alone implementation, the InfoObjects that
are delivered with SAP BPC optimized for S/4HANA Finance in SAP Simple Finance.
After a complete S/4HANA implementation, including the activation of the SAP BPC
optimized for S/4HANA Finance content, these InfoObjects will exist in the SAP Simple
Finance target system, as well as “standard” time and unit InfoObjects such as
0CALMONTH, 0FISCPER, or 0UNIT.
S/4HANA: /ERP/ namespace; master data is read from SAP HANA views at runtime.
https://launchpad.support.sap.com/#/notes/2243472
SAP provides tools to assist with upgrades and migration.
The Database Migration Option (DMO) of the Software Update Manager (SUM) combines the
upgrade and database migration to SAP HANA.
The migration cockpit is a very useful set of programs for pre- and post-migration tasks. It
also includes a program to check whether planning functions are processed by SAP HANA.
The Planning Function Check tool detects whether planning functions are executed in ABAP
or in SAP HANA. It uses the program RSPLS_PLANNING_ON_HDB_ANALYSIS, which must be
installed on your system; otherwise, you will see a screen stating: "Program
RSPLS_PLANNING_ON_HDB_ANALYSIS does not exist".
With the Planning Application Kit and BW on SAP HANA-based, in-memory enabled planning,
planning functions and disaggregation in queries are executed directly in SAP HANA.
Summary
Migrating from BPC Standard 10.x to 11.0 Embedded is a start-over scenario.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
In traditional applications (especially from SAP), the database is largely used as a data store
mechanism only. Massive queries bring large amounts of data back to the application server
for processing. Lots of application execution time is spent in the application server, looping
over records and performing exclusions, calculations, and so on.
The key to the best application performance in SAP HANA is to push as much of the logic
execution into the database as possible. We now "trust" the database. Keep all data-intensive
logic in the database as SQL, SQLScript, and HANA views.
SAP HANA permits OLTP and OLAP workloads on the same platform, by storing data in high-
speed memory, organizing it in columns, and partitioning and distributing it among multiple
servers. This delivers faster queries that aggregate data more efficiently, yet avoid costly full-
table scans and single column indexes.
64-bit processors are designed so that their arithmetic logic unit can process 64 bits (8
bytes) simultaneously during a cycle. Furthermore, the instruction set is designed
consistently on 64 bits, unless backward-compatible legacy support (see x86 architecture) is
present. This also applies to the standard addressing modes. The bit width of the arithmetic
logic unit, in principle, may differ from the address width of the unit (as with most 64-bit
CPUs).
To accelerate data processing further, manufacturers have come up with different
acceleration techniques. These techniques range from reducing write operations on
the outer tracks of the disk sectors, through preprocessing data in or on the hard drive
itself, to large caches that are designed to reduce the actual number of hits on hard drives.
These techniques assume that data is stored on the hard drives, and they try to speed up
access. Large, affordable amounts of memory are available thanks to modern 64-bit
operating systems. With 32-bit addressing, the address space is limited to 4 GB of memory,
while 64-bit addressing can address far more memory than fits in any server.
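The 4 GB limit follows directly from the address width. A quick sanity check (plain Python arithmetic):

```python
# Why 32-bit addressing caps memory at 4 GB: each address selects one byte,
# and a 32-bit address can distinguish 2**32 distinct bytes.
addressable_32 = 2**32
print(addressable_32)                  # 4294967296 bytes
print(addressable_32 / (1024**3))      # 4.0 GiB

# A 64-bit address space (2**64 bytes = 16 EiB) far exceeds any server's RAM.
addressable_64 = 2**64
print(addressable_64 / (1024**6))      # 16.0 EiB
```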
However, all the data in main memory is useless if the CPU does not have enough power to
process this data. To address this, there has been a change from complex CPUs to multicore
processor units. For this innovative computing power, software has to be written in a specific
way. SAP HANA splits the overall task into many small process strands (threads), which use a
large number of parallel cores. For optimal data processing, it is necessary to provide
optimized data structures.
SAP HANA systems require a fixed CPU-to-RAM ratio for production systems. It is
clearly defined by SAP at 256 GB/socket for analytic use cases like SAP BW, and 768 GB/
socket for SAP Business Suite or SAP S/4HANA. As a result, there is a maximum of 4 TB for
OLAP scenarios, or 20 TB for ERP scenarios with modern hardware, in a single system
(02/2017).
Following the scale-up approach, we deploy a single system with as many resources as
possible. Scale-out architectures connect a cluster of smaller SAP HANA systems together
into one clustered database. SAP HANA is a shared-nothing architecture, so there must be
shared storage for data persistence. Note that in a scale-out environment, data is distributed
across the nodes.
In SAP BW, you can distribute large fact tables across multiple nodes, and place dimension
tables together on a single node. One master node is used for configuration tables. This
configuration is excellent at dealing with the major disadvantage of scale-out, which is the
cost of internode network traffic for temporary datasets.
We recommend that you scale up before considering scale-out, to reduce complexity.
However, in terms of performance, SAP BW scale-out works and scales exceptionally well.
In addition to a classical row-based data store, SAP HANA is able to store tables in its column-
based data store. You must understand the differences between these two methods, and why
column-based storage can accelerate certain types of data processing. The concept of column
data storage has been used for quite some time. For example, the first version of SAP Sybase
IQ, a column-based relational database, was released in 1999.
Historically, column-based storage was mainly used for analytics and data warehousing,
where aggregate functions play an important role. On the other hand, using column stores in
Online Transaction Processing (OLTP) applications requires a balanced approach to the
insertion and indexing of column data, so as to minimize cache misses. The SAP HANA
database allows the developer to specify whether a table is stored column-wise or row-wise. It
is also possible to change an existing column-based table to row-based, and row-based to
column-based.
Conceptually, a database table is a two-dimensional data structure with cells organized in
rows and columns. Computer memory, however, is organized as a linear structure. To store a
table in linear memory, the following two options exist:
Row-based storage: the fields of each record are stored contiguously, one record after
another.
Column-based storage: the values of each column are stored contiguously, one column
after another.
For example, you want to aggregate the sum of all sales amounts using a row-based table.
Data transfer from main memory into CPU cache happens in blocks of fixed size, called cache
lines (for example, 64 bytes). With row-based data organization, each cache line could
contain only one sales value (stored using 4 bytes). The remaining bytes are used for the
other fields of the data record. For each value required for the aggregation, a new access to
main memory is required.
With row-based data organization, the operation slows down due to cache misses that cause
the CPU to wait until the required data is available. With column-based storage, all sales
values are stored in contiguous memory, so the cache line contains 16 values, which are all
needed for the operation. Memory controllers can use data prefetching to minimize the
number of cache misses because the columns are stored in contiguous memory.
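The difference can be sketched in plain Python (a conceptual model of the two layouts, not HANA internals):

```python
# Conceptual sketch of row-based vs column-based storage using plain
# Python lists. In both cases the table is laid out in linear memory.

records = [
    # (order_id, dealer_id, sales_amount)
    (1, "D01", 100.0),
    (2, "D02", 250.0),
    (3, "D01", 175.0),
]

# Row store: whole records are contiguous; summing one column steps over
# every field of every record.
row_store = [field for record in records for field in record]
row_sum = sum(row_store[i] for i in range(2, len(row_store), 3))

# Column store: each column is contiguous; the aggregation scans only the
# sales_amount column, which is cache-friendly.
column_store = {
    "order_id":     [r[0] for r in records],
    "dealer_id":    [r[1] for r in records],
    "sales_amount": [r[2] for r in records],
}
col_sum = sum(column_store["sales_amount"])

assert row_sum == col_sum == 525.0
```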
SAP S/4HANA is the next generation business suite to help lines of business and industries
run simply. SAP S/4HANA combines the most recent innovations (SAP HANA platform, SAP
Fiori UX) with over 40 years of experience in mastering complex industry challenges in a new
suite that caters to the digital, networked economy.
With both ECC and BW on the same SAP HANA database, for example, data can be accessed
on a real-time basis. Therefore, there should be less of a need to load data between ECC and
BW related tables.
Attribute views are used to give context. This context is provided by text tables, which give
meaning to data. For example, if a fact table or an analytic view in a car sales database only
contains a numeric ID for each dealer, you can use an attribute view to provide information
about each dealer. Using this method, you could then display the dealers' names and
addresses, thus giving context to the data.
Analytic views are used to model data that includes measures. For example, an operational
data mart representing sales orders would include measures for quantity, sales order value,
and more.
Calculation views can be used in scenarios where the analytic view does not satisfy the
business requirements. For example, when you need to combine the result sets from two
different tables or views (with the graphic view designer, or with the SQL UNION operator).
As part of an SAP HANA implementation, you can use the modeling capabilities of SAP HANA
to build flexible information models and easily report on your data.
Experience has proven that the data transfer between different SAP HANA engines can result
in bottlenecks. This can cause performance disadvantages. For this reason, starting from
version SPS9, SAP has enhanced the scope of SAP HANA calculation views so that they can
take over the role of the OLAP engine and join engine. Therefore, calculation views are now
SAP's best option for developing data models for all master data, as well as for OLAP
scenarios. If a complete application is modeled as a calculation view, the data transfer
between the different engines can be completely avoided.
When you create an SAP HANA calculation view, there is a new Data Category setting, which
manages the services provided by this SAP HANA view.
SAP HANA 2.0 was introduced in November 2016. It innovates the SAP HANA platform in
terms of the maintenance strategy. It also has a crucial impact on the modeling concepts of
SAP HANA. In the future, modeling will be executed in the new SAP web-integrated
development environment (SAP Web IDE), which provides a calculation view monitor.
Modeling in Eclipse, based on SAP HANA studio, will no longer be strategic. The SAP Web IDE
was introduced with SAP HANA 1.0 SPS11.
Enrich data sets via joins, calculated and restricted measures, and data type conversions.
BW Modeling Perspective
Eclipse is an open-source, integrated development environment (IDE). It contains a base
workspace, and an extensible plug-in system for customizing the environment. Eclipse is
written mostly in Java and is primarily used to develop Java applications. However, it can also
be used to develop applications in other programming languages (including ABAP through the
use of plug-ins).
The SAP BW Modeling Tools (BWMT) are an example of these plug-ins, and are a separate
perspective in SAP HANA Studio. They provide a new, integrated modeling environment for
the management and maintenance of SAP BW ABAP metadata objects. The main objective of
this is to support SAP BW metadata modelers in increasingly complex BI environments. SAP
BW does this by offering flexible, efficient, and state-of-the-art modeling tools. These tools
integrate with the ABAP development tools available in SAP HANA Studio. The tools also
integrate with SAP HANA modeling, and the consumption of SAP HANA elements in SAP BW
metadata objects.
When using the SAP BW Modeling perspective, establish a system connection to an existing
SAP BW system (technically managed by a corresponding SAP BW project). The SAP BW
perspective enables access to both SAP HANA Studio-based and GUI-based SAP BW
Modeling editors. The connection details are all taken from SAP Logon; therefore, SAP
Logon must be available on the client.
The SAP BW Modeling perspective defines the initial set and layout of tools (views and
editors) in the SAP HANA Studio. In this way, it provides a set of functions aimed at
accomplishing SAP BW modeling tasks. In particular, it enables working with SAP BW
metadata objects that are managed by an SAP BW back-end system.
The SAP BW Modeling perspective is designed for working with SAP BW metadata objects
that the user can access using SAP BW projects. It consists of an editor area, where the BW
metadata object editors are placed. The perspective also contains several views, each of
which provides a different function.
In the SAP BW Modeling perspective, you can open and edit all BW metadata objects that are
displayed in BW projects. However, for a few of the SAP BW metadata objects, such as
Transformations or DTPs, the SAP GUI editor runs inside the SAP HANA Studio-based IDE.
While using SAP BW 7.4 or SAP BW 7.5 on SAP HANA, you had the choice to either work in
SAP GUI or BWMT. This is no longer the case in SAP BW/4HANA. The modeling options of the
classic Data Warehousing Workbench are gone, and modeling can only be done in the BW
Modeling Tools.
You can create and edit BW DataSources in the BW Modeling Tools. This is available for all
source systems supported by SAP BW/4HANA. The SAP GUI transaction RSDS is still
available as a fallback option for now. However, SAP recommends that you fully leverage the
BW Modeling Tools for DataSource maintenance.
You can create BW source systems in the BW Modeling Tools. However, the Data
Warehousing Workbench still offers the full functions for source system setup and
customizing. Hence, source system maintenance should be managed using a combination of
both user interfaces.
SAP BW Modeling Tools provide you with a simple way of creating, editing, and documenting
data flows, and objects in data flows. The data flow is the central entry point for modeling in
SAP BW/4HANA. You can use the data flow to model objects, and their relationships with
each other, for a particular scenario. There are two types of data flows in SAP BW/4HANA.
Summary
The Administration Console perspective can be used to access database tables and
Information Models.
Before SAP HANA, planning solutions were not integrated; they were more like vertical silos.
Although BI Integrated Planning has many important features, it also has some pain points:
No process control.
No comments functionality.
With BPC Embedded, there is just one platform for many types of planning. With just one
license, customers can use either the standard or embedded solution.
The BI Integrated Planning engine, with the Planning Application Kit (PAK) built in, runs on
SAP HANA.
The BPC 11.0 Embedded solution features the Integrated Planning functions that have been
optimized to run on SAP HANA via the Planning Application Kit. Although IP is IT-intensive, it
has the advantage of being very well integrated into BW, and it uses BW queries.
In the figure, BPC Planning Solutions, financial planning refers to planning with the SAP
Simple Finance data structures and delivered content. Generic planning can be financial in
nature also, and it can include HR planning, for example.
In the figure, BPC Planning and Consolidation Option Comparison, you can see a high-level
comparison between the four main options. BPC MS is the solution that resides exclusively on
the Microsoft platform.
With the release of SAP Business Planning and Consolidation 11.0, version for SAP BW/
4HANA (SAP BPC 11.0), a more flexible, hybrid solution is offered to customers for cloud
integration. It combines planning capabilities from both worlds with the advanced analytical
features of SAP Analytics Cloud (SAC).
After a one-time job of connection creation and data region mapping, planning users in SAC
can perform planning activities directly on BPC data in SAC, via the version management
pane. They can extract the latest data from BPC, or submit new planning data to BPC,
anytime and anywhere.
By leveraging delta region-based data synchronization, a near-live experience can be
achieved.
Customers can still leverage planning capabilities, such as logic scripts, from BPC.
In the meantime, customers can also enjoy flexible planning capabilities, such as spreading,
distributing, predictive forecast, and powerful visualization features from SAC.
To leverage the new type of data acquisition between BPC and SAC:
Your BPC version should be SAP Business Planning and Consolidation 11.0, version for
SAP BW/4HANA.
You need to set up the model mapping between BPC and SAC from the very beginning.
The following comments will give you a feel for SAC and BPC integration. When you are done
with data manipulation in SAC, you can use the version management panel to sync data
between BPC and SAC. The panel shows the last synchronized timestamp for each public
version. You can do planning directly on the public version and save it to BPC directly. The
system will first submit the data change within the mapped data region to BPC, run the
default script logic, and then retrieve the data changes made in other clients to SAC. You can
also first copy to a private version and do planning there. Afterwards, save the private version
to its originating public version to sync with BPC.
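The save sequence described above can be sketched as follows (conceptual Python; the class and function names are invented for illustration and are not the SAC or BPC API):

```python
# Conceptual order of operations when saving a public version to BPC.
# All names here are invented for illustration; this is not an SAP API.

class FakeBpc:
    """Stand-in for the BPC side of the synchronization."""
    def __init__(self):
        self.data = {"CC100": 100.0}
        self.log = []

    def submit(self, changes):
        # 1. Receive the data change within the mapped data region.
        self.data.update(changes)
        self.log.append("submit")

    def run_default_logic(self):
        # 2. Run the default script logic on the submitted data.
        self.log.append("default_logic")

    def fetch_changes(self):
        # 3. Return changes made by other clients back to SAC.
        self.log.append("fetch")
        return dict(self.data)

def sync_public_version(sac_changes, bpc):
    """Sketch of the save-to-BPC sequence: submit, run logic, retrieve."""
    bpc.submit(sac_changes)
    bpc.run_default_logic()
    return bpc.fetch_changes()

bpc = FakeBpc()
result = sync_public_version({"CC100": 120.0}, bpc)
assert bpc.log == ["submit", "default_logic", "fetch"]
assert result["CC100"] == 120.0
```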
The model in SAC is an exact copy of BPC. The model and its dimensions cannot be extended.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
BW InfoObjects
In BW, fields are referred to as InfoObjects. There are two main types of InfoObjects:
Key Figures:
InfoObjects that provide values to be calculated or evaluated, such as Quantity
(0Quantity) or Amount (0Amount).
Characteristics:
InfoObjects that are business reference objects used to analyze key figures, such as
Cost center (0CostCenter) or Material (0Material).
Characteristics represent master data in general.
Attributes
Text
Hierarchies
Time
Technical
Units of measure
Time characteristic InfoObjects form the time frame for many data analyses and evaluations.
They are delivered with BW content. Examples of time characteristic InfoObjects include:
Calendar day (0CalDay), Calendar year (0CalYear), or Fiscal year (0FiscYear). Delivered
InfoObjects are preceded by a 0 (zero).
In general, custom InfoObjects are preceded by a letter, such as Z.
Characteristics can have related fields called Attributes. Attributes can either be for display,
or they can be navigational. For example, the responsible person is an attribute for cost
center.
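As a mental model (plain Python, not actual BW metadata objects), a characteristic carries master data together with display and navigational attributes:

```python
# Mental model of a characteristic with attributes (not BW metadata
# objects). Each master data value has attributes; navigational attributes
# can additionally be used for filtering and drill-down in queries.

cost_center = {
    "name": "0COSTCENTER",
    "attributes": {
        "RESPONSIBLE_PERSON": {"navigational": False},  # display only
        "COMPANY_CODE":       {"navigational": True},   # usable in queries
    },
    "master_data": {
        "CC100": {"RESPONSIBLE_PERSON": "J. Smith", "COMPANY_CODE": "1000"},
        "CC200": {"RESPONSIBLE_PERSON": "A. Jones", "COMPANY_CODE": "2000"},
    },
}

# Only navigational attributes are available for query navigation:
nav_attrs = [a for a, p in cost_center["attributes"].items()
             if p["navigational"]]

# Filtering master data by a navigational attribute value:
in_1000 = [cc for cc, attrs in cost_center["master_data"].items()
           if attrs["COMPANY_CODE"] == "1000"]

assert nav_attrs == ["COMPANY_CODE"]
assert in_1000 == ["CC100"]
```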
BW InfoProviders
InfoProviders provide data to reports. They can be tables that contain data, or they can be
views on tables. They are grouped into folders called InfoAreas.
Examples of InfoProviders include:
CompositeProviders:
- Do not contain data.
- Used to union or join InfoProviders and HANA views.
Aggregation Levels:
- Do not store data.
- Used to determine the level of granularity when planning.
Characteristics:
- Represent dimensional data, such as cost center.
LESSON SUMMARY
You should now be able to:
Lesson 1
Designing Aggregation Levels 38
Lesson 2
Configuring Basic Planning Functions 41
Lesson 3
Creating Reporting and Planning Queries 45
Lesson 4
Using the Planning Buffer 59
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Aggregation Levels are used to determine the level of detail on which data can be entered or
changed, either manually through user input or automatically by a planning function.
In the preceding figure, the underlying InfoProvider has both year and period. If the scope of
the planning activity is annual planning, then only year should be included; for period-level
planning, period is included as well.
Aggregation Levels can be created on:
CompositeProviders.
LocalProviders.
When creating the Aggregation Level, you can access the source InfoProvider in the General
tab and, in the Output tab, select which fields to include. The Aggregation Level must be
activated to be available for use.
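The effect of an aggregation level can be sketched as a projection plus aggregation over the underlying InfoProvider's data (conceptual Python, not BW objects):

```python
# Conceptual sketch of an aggregation level (plain Python, not BW objects).
# The underlying InfoProvider has year AND period; an annual-planning
# aggregation level projects period away, so data is planned per year.

from collections import defaultdict

fact_table = [
    # (year, period, costcenter, amount)
    ("2024", "001", "CC100", 100.0),
    ("2024", "002", "CC100", 150.0),
    ("2024", "001", "CC200", 200.0),
]

def aggregation_level(rows, fields):
    """Project to the chosen fields and aggregate the key figure."""
    totals = defaultdict(float)
    for year, period, costcenter, amount in rows:
        record = {"year": year, "period": period, "costcenter": costcenter}
        key = tuple(record[f] for f in fields)
        totals[key] += amount
    return dict(totals)

# Annual planning: period is not part of the aggregation level.
annual = aggregation_level(fact_table, ["year", "costcenter"])
assert annual == {("2024", "CC100"): 250.0, ("2024", "CC200"): 200.0}
```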
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
The planning functions that are the easiest to use are the predefined, or basic, planning
functions. They can be set up and maintained by simply entering the required information in
the planning modeler (for example, by entering a target in a copy function). These planning
functions provide a defined behavior that can only be influenced within the limits set by the
Customizing options.
FOX formulas can be used to create more powerful planning functions. As they provide
functionality that is usually only available in programming languages (such as loop
statements or if statements), they can cover a wider range of business requirements than
the predefined planning functions. On the other hand, they are more complex, as the
processing logic has to be defined when creating the planning function.
Customer-defined planning functions are created using ABAP. They therefore offer the
functionality of a full-blown programming language. This type of planning function is definitely
the most complex, but also the most flexible.
Planning functions are used to perform mass updates in planning scenarios, such as copying
actual to plan, deleting data, or calculating revenue.
Standard planning functions, such as copy, delete, and revaluation, can be created based on
off-the-shelf function types.
In a typical planning scenario, there is a need for a wide range of automatic functions to
change or create plans. BW Planning offers different types of planning functions that can be
used to create all necessary automatic functions:
Predefined planning functions: a set of planning functions with fixed behavior that can be
easily set up.
Planning function types are procedures that allow you to change transaction data in the
context of Integrated Planning in BI in a particular way. For example, you can distribute data
by reference data, copy, delete or revaluate it.
Each planning function type consists of a definition that describes the general properties of
the function type, and an ABAP class that implements a particular operation (algorithm).
The most important element of the definition part are the function type parameters. These
determine how the algorithm is applied to the transaction data. For the function type
"Revaluation", for example, there are revaluation factors as parameters. Each parameter has
a particular parameter type. The complete set of parameters defined for the planning function
type is called a parameter record. To apply a planning function type to concrete transaction
data (in an aggregation level), you create a planning function with the relevant function type.
With regard to transport, Business Content and activation, planning function types behave
like other metadata objects of the BI system.
In each planning function, you have to specify which characteristic values in the data records are to be changed by the planning function. If we copy from one version to another and from one InfoProvider to another, the characteristics Version and InfoProvider have to be indicated as characteristics to be changed.
Conditions can be used to define which actions the planning function performs, depending on
which set of data is actually being changed. In the example above, no conditions are used.
Try to minimize the use of conditions that include multiple values, since they may prevent the function from executing in HANA.
Note:
Conditions should always be avoided because they cause sub-optimal processing.
Revaluation
You use the Revaluation function type to increase or decrease key figures by a percentage
figure. No characteristic values are changed. In characteristic value usage, you can only select
characteristics as condition characteristics. You can choose whether you want to enter a
common percentage for all key figures or revaluate key figures with individual percentages. In
both cases you can either enter the percentage directly or use variables. The percentage is
interpreted as delta; the system does not expect you to enter a percentage sign.
For example, if you enter 15.4, the system performs the following calculation: new value = old
value + 15.4% * old value.
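The revaluation calculation above can be sketched as follows (a minimal illustration of the formula in the text, not actual system code):

```python
def revaluate(value, percentage):
    """Revaluation: the percentage is interpreted as a delta,
    so new value = old value + percentage% * old value."""
    return value + percentage / 100.0 * value

# The example from the text: entering 15.4 increases the key figure by 15.4%.
print(round(revaluate(1000.0, 15.4), 2))  # -> 1154.0
```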
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
For analyzing consolidated results, most queries can be created on the CompositeProvider.
The queries can then be consumed in a variety of BI Suite applications:
The preceding figure displays a BW Query in the BW Modeling perspective in SAP HANA studio.
Columns: Here is where the query objects (key figures or characteristics) must be placed if
you want them to appear in the columns of the results set.
Rows: Here is where the query objects (key figures or characteristics) must be placed if
you want them to appear in the rows of the results set.
Free Characteristics: Put the characteristics that you want to offer to the user for
navigation purposes in this pane. These characteristics do not appear in the initial view of
the query result set; the user must use a navigation control to make use of them. You do
not define the filter values here.
The Dependency tab displays global objects that are used in the query.
Fixed Values: Here you define the characteristic filter values that apply to the entire result set.
Default Values: In this pane, you define the characteristic filter values that should be used for
the initial view of the result set. The user may choose to modify these filters in the result.
Variables are used to make queries more dynamic. They can be used as prompts to the user
or can be filled via programs.
A BW variable with an SAP exit means that processing of the variable is controlled by a
function module. An SAP exit is merely a programming solution that exits the normal program
and performs a lookup, for example.
In the preceding image, the same key figure is being used to display both actual and plan data.
You can create formulas that are query-specific or InfoProvider-specific and therefore reusable. Whenever you need the % symbol in the output, use the off-the-shelf percentage functions. These calculations are recalculated upon each refresh.
Users can simply launch Analysis from their start menu or desktop. They would then need to log on to BW, for example (assuming single sign-on is not being used).
If a BPC user is in the web client, it is very convenient to access Analysis from the settings icon on the upper right. No additional login is necessary.
SAP BusinessObjects Analysis for Microsoft Office is a Microsoft Office add-in that allows multidimensional analysis of OLAP sources. It consists of the following components:
Analysis Plug-in.
The Analysis plug-in allows multidimensional analysis of OLAP sources in Microsoft Excel, MS
Excel workbook application design, and intuitive creation of BI presentations with MS
PowerPoint. The Plug-in is available for the following Microsoft Office versions:
The crosstab is the data region in the worksheet where the query output resides.
In the Analysis plug-in, you can use BW Queries, HANA views and InfoProviders as data
sources. The data is displayed in the workbook in crosstabs. You can insert multiple crosstabs
in a workbook with data from different sources and systems. If the workbook will be used by
different users, it is also helpful to add info fields with information on the data source and filter
status.
Using the design panel, you can analyze the data and change the view on the displayed data.
You can add and remove dimensions and measures to be displayed easily with drag and drop.
To avoid single refreshes after each step, you can pause the refresh to build a crosstab. After
ending the pause, all changes are applied at once.
In order to retain inserted queries, formatting, formulas and so forth, you should save the
workbook. In our scenario, the BW server is the most logical location. Workbooks in My
Documents are like favorites, only you can see them. If you want to publish a workbook, save
it to a role.
Using the business intelligence platform enables you to save workbooks and presentations
with their navigation state in a central management system and to reuse these analysis views
in other applications such as SAP Crystal Reports or Analysis, OLAP edition.
Disaggregation.
Inverse Planning.
Hierarchies.
BW Queries are used for both reporting and data input when it comes to planning.
When creating queries for BW reporting scenarios, you can always pick and choose which
InfoObjects to include in the query. If you don't need to report by Flow, you can simply choose
not to include it in the query.
When creating queries for manual input however, the rules are a little different. In this case,
you need to include every InfoObject in the aggregation level to allow for manual input
planning. The reason for this is that during manual planning, every cell must represent a single
value for each characteristic. That means that any characteristic that is not the rows or
columns must be included in the filter and restricted to single values.
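The rule above can be sketched as a small validity check, assuming a hypothetical representation of the query definition (all names are invented for illustration; this is not an actual BW API):

```python
def cell_is_input_ready(agg_level_chars, drilldown_chars, filters):
    """A cell can be input-ready only if every characteristic of the
    aggregation level that is NOT in the rows/columns drilldown is
    restricted to exactly one value in the filter."""
    for char in agg_level_chars:
        if char in drilldown_chars:
            continue  # visible in rows/columns, so each cell already pins it
        if len(filters.get(char, [])) != 1:
            return False  # ambiguous: the cell would not map to a single record
    return True

chars = ["year", "version", "material_group"]
# version is not in the drilldown, but it is restricted to a single value:
print(cell_is_input_ready(chars, ["year", "material_group"], {"version": ["PLAN"]}))  # -> True
```

If version were left unrestricted (or restricted to two values), the same call would return False, which mirrors why such a query opens in display-only mode.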
Start query in input mode: You can set whether an input-ready query starts in change mode or in display mode. This property is in the Query Properties on the Planning tab page. If there is at least one input-ready query component, the query is started in change mode, provided that no setting has been made to the contrary.
Symmetrical calculation mode: when using queries for inverse planning on calculated
key figures, this setting controls the behavior of related key figures. For example, if
selected and quantity is increased by 10%, then revenue is also increased by 10%.
New lines on every level: This feature is used for ad hoc input forms. In a plannable
query with key figures set to disaggregation and characteristics set to posted values,
users can add rows of data as they please. For example, from an empty data set, input
material group values while material is in the free characteristics. See Exercise: Ad Hoc
Planning and Reporting with Analysis.
Always disaggregate to all valid combinations: Use this setting to disaggregate not only
to posted data records but to all valid combinations of member IDs.
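The disaggregation behavior described above can be illustrated with a small sketch: a total is distributed proportionally to the reference (posted) values, and when nothing is posted yet, spreading equally over all valid combinations corresponds to the "all valid combinations" setting. This is a conceptual model only, with invented names.

```python
def disaggregate(total, reference):
    """Distribute `total` across members proportionally to their
    reference values. With no posted reference data, spread equally
    over all valid combinations (the 'all valid combinations' case)."""
    ref_sum = sum(reference.values())
    if ref_sum == 0:
        share = total / len(reference)
        return {member: share for member in reference}
    return {member: total * value / ref_sum for member, value in reference.items()}

posted = {"GM1": 300.0, "GM2": 100.0}
print(disaggregate(800.0, posted))  # GM1 receives 600.0, GM2 receives 200.0
```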
First Steps from BW-IP and PAK towards SAP BPC 11.0, version SAP BW/4HANA planning
with Embedded Model: https://blogs.sap.com/2017/06/19/first-steps-from-bw-ip-and-
pak-towards-bpc4-planning-with-embedded-model/
Not Input Ready (Not Relevant for Locking): Use this to read the data but not block others
from changing it while you are viewing it.
Not Input Ready (Relevant for Locking): Use this to read the data but lock it so no one else can change it while you are viewing it.
Input Ready (Relevant for Locking): Use this to allow write access and block others from
changing it while you are viewing it or changing it.
In the preceding figure, the business requirement calls for the rows to be available for input.
Queries normally only display characteristic values if they have postings in the InfoProvider.
Of course, since this is a planning scenario and there isn't any data yet, the characteristics are
set to display based on their master data. To do that, just select Fiscal Year/Material Group in
the rows, go to the properties on the upper-right, and in the Extended tab choose Master
Data.
Master Data means that characteristic combinations from the master data are displayed regardless of whether or not transaction data records exist for them.
Characteristic Relationships means that the input help only displays correct business
selections.
Characteristic extended properties.
Master Data: this setting is used to display members based on the master data table.
Posted values: this setting is used to display members based on the transaction data table.
Planning in Analysis
This is an introduction to the planning features in Analysis.
If you are using version 2.6 (and prior) of Analysis for Office, the default profile may not have
the planning group activated. So, the first time you access Analysis, you create your own
profile and turn on the planning group. From then on, the planning group will be available on
the machine where you make this setting.
After planning functions are added to the workbook, they can be run from the context menu or via push buttons. The first function inherits the ID PF_1, and so on. The VBA for the push buttons references the planning function ID.
To restrict the data region for planning functions, you can either use variable values or filter
values. The filter characteristics belong to the query but they can also be used for the
planning functions if needed.
When you run a planning function from Analysis, for example, the data does not update the database until you save it. Of course, after running the planning functions, you could also change the data manually, and then either revert to the latest database value or save the data. The point is that you can do all the testing you need without saving any data.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Planners need to perform calculations without saving the results to the database.
The data needs to be held constant during the planning process for the planner.
When users create data, it is first stored in the user's local memory or buffer. They can do
their testing for example and then choose to either save the data to the database or clear the
buffer.
When a user requests transaction data in change mode, this data has to be locked
exclusively for the user.
All the records in the selection are locked (for example: US / 2018 / Version B / Revenue).
If the selection is empty (*), each data record is locked, since no restrictions exist.
For characteristics outside the aggregation level, selection * (all) is always locked.
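The collision check between two lock selections can be sketched as follows: two selections collide unless there is at least one characteristic on which their value sets are disjoint. This is a conceptual illustration with invented names, not the actual ABAP lock algorithm.

```python
def selections_collide(sel_a, sel_b):
    """Two lock selections collide if their value sets can overlap on
    every characteristic. '*' stands for an unrestricted selection
    (as used for characteristics outside the aggregation level)."""
    for char in set(sel_a) | set(sel_b):
        a = sel_a.get(char, "*")
        b = sel_b.get(char, "*")
        if a != "*" and b != "*" and not (set(a) & set(b)):
            return False  # disjoint on this characteristic: no collision
    return True

user1 = {"country": {"US"}, "year": {"2018"}, "version": {"B"}}
user2 = {"country": {"US"}, "year": {"2018"}, "version": {"A"}}
print(selections_collide(user1, user2))  # -> False (different versions can be locked in parallel)
```

A selection that leaves a characteristic unrestricted ("*") collides with every other selection on that characteristic, which is why empty selections lock everything.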
Locking Example
Option 1: The selection tables are compressed and stored on the SAP standard lock server. The collision check for locks is not performed by the lock server, but by an ABAP program.
Option 2: The selection tables are not compressed. They are stored in a shared object memory area that is connected to an application server. Option 2 offers better performance than option 1. However, you have to ensure that the server that holds the lock table is always available when you want to lock transaction data using selections. You can set the size of the shared object memory using the profile parameter abap/shared_objects_size_MB.
Summary
LESSON SUMMARY
You should now be able to:
Lesson 1
Learning About the BPC Web Client Features 64
Lesson 2
Modeling BPC Environments and Models 68
Lesson 3
Maintaining BW Master Data 73
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
The BPC Web Client can be used for several different activities:
Business Users
- Run Consolidation Tasks.
- View system reports for Data Audit and Work Status for example.
- Maintain dimensional data.
- Create Local Providers.
- Access Activities
Administrators
- Manage periodic tasks such as Business Process Flow Instances.
Developers/Consultants
- Maintain Models.
- Configure Business Rules.
- Set up Work Status.
The product supports the SAP HANA database and uses SAPUI5 user interface technology. The SAPUI5 user interface is based on standard HTML5, which provides all the benefits of HTML5 and supports languages such as Hebrew and Arabic that are written and read from right to left. SAP BW/4HANA is a new, next-generation data warehouse product that is optimized for the SAP HANA platform, delivering real-time, enterprise-wide analytics that minimizes the movement of data and connects all the data in an organization into a single logical view.
SAP Business Planning and Consolidation 11.0, version for SAP BW/4HANA also introduces
the new Belize user interface that has a clean and consistent layout. Belize is the refined SAP
Fiori visual language whose calm color tones help users stay focused on daily business tasks
and content. With delightful visual details and typography, Belize conveys content with clarity
and makes the user experience richer.
BPC Embedded has several off-the-shelf reports that you can access from the Web Client.
This includes:
Data Audit.
Work Status.
1. Consolidation Monitor: This is used to perform month end closing tasks such as currency
translation and eliminations.
Business rules are parameter-driven functions within SAP Business Planning and
Consolidation models for calculating and posting monetary amounts in support of common
accounting activities, such as intercompany booking, currency translation, and eliminations
and adjustments.
After adding a business rule type to a model, you must customize it to meet your needs by
specifying parameter values. For example, in Eliminations and Adjustments, you can indicate
which balances to read before calculating an amount, or under which account and audit
member to post the calculated amount. In addition to customizing default business rules,
delivered with the IFRS starter kit for instance, you can define new rules for various types of
business processes and add them to a model.
Business Rules Management: https://help.sap.com/viewer/
22b7cf3842074145818abb62fe10f5a4/11.0/en-US/
8a3b7f5f47f74934b81bd038e39f3df5.html
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Before creating a model, the underlying BW InfoProviders must be maintained first. A model can have one or more InfoProviders attached to it.
BPC 11.0 Embedded Models can be created on:
CompositeProviders.
LocalProviders.
The BAdI provider is an InfoProvider that you can enhance by means of ABAP
implementation.
After creating a model, you can view the associated meta-data. This includes:
Dimensions.
Aggregation Levels.
Related CompositeProviders.
Dimension Mapping.
The dimension mapping is particularly useful when the underlying InfoProviders do not
contain identical characteristics.
A dimension is a collection of related data members that represent one aspect of a business,
for example, accounts, products, or entities. In BW terms, this is a Characteristic.
In the preceding figure, the system graphic on the left is generated from the BW Modeling
perspective. In the context menu of the Aggregation Level simply choose Explore Data Flow.
Summary
After adding a Model, dimensional data can be maintained in the BPC web client.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
There is a need to maintain the master data for characteristics used in a BPC Model.
The master data is stored in the characteristic’s BW master data table in the HANA database (it has nothing to do with ECC).
Figure 71: Maintain Master Data from the BPC Web Client
Traditionally, master data has been maintained centrally. If the business need calls for a more agile solution, then the process must be coordinated to ensure high data quality.
From the BPC web client, just go to a dimension’s member sheet to view the master data. If the underlying characteristic is time-dependent, you will see a date from and a date to field. You can add new member IDs simply by typing or via copy and paste.
If you update the master data from the BPC web client, the characteristic’s BW master data table is updated in the HANA database.
Displaying Dimension Structures: You can view the structure of a dimension used in an
InfoProvider that has been assigned to a model within an environment. You can see the
dimension description and the dimension properties that have been assigned.
Maintaining Hierarchies: You can add a new hierarchy, maintain an existing hierarchy
structure, or delete an existing hierarchy.
Creating Local Dimensions: You can create local dimensions based on existing central
dimensions, as well as remove unnecessary properties and add new properties during the
creation process. Local dimensions can also be deleted when they are no longer useful.
You can maintain the time-dependent property and text when time dependency is
activated in the BW backend.
You can maintain the members of a compounding dimension after it has been activated in the BW backend. When you add a new member for a compounding dimension, you can select an existing member for superior dimensions from the value help.
You can maintain the member of a dimension property when the property itself has master
data. To maintain a member for the dimension, view the dimension structure and click
directly in the property ID.
You can add a new member that does not exist in a global dimension and maintain its
properties.
You can overwrite property values for members inherited from a global dimension.
The attribute values can exist exclusively in the dimension member data table or come from the attribute check table. You can add attribute values as shown in the preceding figure.
Hierarchies can be built with members from the current dimension or from an external dimension. External dimensions are defined in BW.
The member selector displays not only the ID and description but also the properties. Sorting and filtering by property makes adding new hierarchy nodes much easier.
Use drag and drop to move nodes; multiple-row selection is supported. You can also use the toolbar buttons to move nodes.
The time range is displayed when you mouse over the row. You can maintain the time range directly in the row after clicking the Change button.
You can create a new hierarchy that exists exclusively in the characteristic’s master data
tables. You can also copy an existing hierarchy and change it.
LESSON SUMMARY
You should now be able to:
Lesson 1
Configuring Stand Alone LocalProviders 79
Lesson 2
Configuring Integrated LocalProviders 87
Lesson 3
Integrating Local into Global Data 96
UNIT OBJECTIVES
Model LocalProviders
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Model LocalProviders
No BW InfoObjects needed.
Advanced business users can easily create LocalProviders from the BPC Web Client.
You must use a CSV file. Set the file parameters according to the source data (separator, header, and so on).
In step 3: Map Dimensions - turn on data audit to track who made changes to the data. This
causes the system to add audit fields into the HANA table. Key Figures will be created in
HANA as data fields.
Use the Type column to select the data type for each field. For characteristics, the Type
options are: Character String with Leading Zeroes, Date (saved as yyyymmdd), Time(saved
as hhmmss), InfoObject.
If you select InfoObject, you can select a characteristic and also choose whether to use its
conversion routine or not. For the key figure fields, the Type options are: Integer, Decimal,
Floating Point.
Figure 81: Validate the Master Data and Create the Model
You can create the local model at the same time as the LocalProvider to cut down on the cost
of development somewhat.
In the preceding figure, in the review step, the system displays important information
regarding the generated components. In this version, even though data audit is on, the log will
list it as Off.
You can view the generated Aggregation Level in HANA Studio. In the preceding figure you
can see the generated field names beginning with @3B. Although they appear as InfoObjects
they do not exist in the normal BW data dictionary but only in HANA.
The description Workspace Aggregation Level of Local Provider serves as a reminder that
local providers first came on the scene as a component of Workspaces. You can proceed to
create planning functions and sequences on the 'local' Aggregation Level at this point. When
you create a query on the local provider, it can be found in the InfoArea: Workspace Area.
The data audit report shows who entered the data, the timestamp, the data region as well as
the values. Data audit can be turned off in administration.
Figure 85: Create Query on Local Aggregation Level and Plan in Analysis
The planning enabled query on the local Aggregation Level has all of the same settings as any
other planning query ... the only difference is that the InfoObject technical names are
different.
You can preview the data from the admin console perspective. The data includes the audit data if it was enabled. For manually entered data, the query is listed as the source (F9996 field in the preceding figure).
There are four key fields in LocalProviders related to data audit:
Figure 87: View the table structure and Runtime for the LocalProvider
When the LocalProvider is created, the system generates the table in HANA. The field data
types and field lengths can be viewed in the preceding figure.
The data comes into Delta Storage initially and is then merged into Main Storage. Queries will
access data from both locations.
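The delta/main storage behavior can be sketched as a toy model: inserts land in a write-optimized delta store, a merge moves them into the compressed main store, and a read returns the union of both. This is a conceptual illustration, not actual SAP HANA code.

```python
class ColumnTable:
    """Toy model of a HANA column-store table's read/write path."""
    def __init__(self):
        self.main = []    # compressed main storage (simplified as a list)
        self.delta = []   # write-optimized delta storage

    def insert(self, row):
        self.delta.append(row)        # inserts always go to the delta store

    def merge(self):
        self.main.extend(self.delta)  # the delta merge moves rows into main
        self.delta = []

    def read(self):
        return self.main + self.delta  # queries see both storage areas

t = ColumnTable()
t.insert({"material": "M1", "qty": 5})
t.merge()
t.insert({"material": "M2", "qty": 3})   # not yet merged
print(len(t.read()))  # -> 2 (one merged row plus one delta row)
```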
LESSON SUMMARY
You should now be able to:
Model LocalProviders
LESSON OBJECTIVES
After completing this lesson, you will be able to:
BW InfoObjects are needed to allow planning on a union of local and global member IDs.
Local members represent new material items for example that do not yet exist in the
source system.
The local planning data will be integrated into the central planning data.
In the integrated scenario, the fields from the flat file are mapped to existing InfoObjects. This
has three benefits:
1. The planning can be carried out on local as well as global member IDs.
3. The fields in the local HANA table inherit data types and field lengths from the InfoObjects.
The characteristic we are using in class is Local Material Group as shown in the preceding
figure. In real life scenarios, you could easily use Material, Material Group, Cost Center, and so
on as the referenced characteristic. This characteristic should have the following settings:
It should be an InfoProvider.
BW Workspaces were originally designed to provide the business user with the means to
develop their own models within SAP BW.
BPC LocalProviders use BW Workspaces to provide planning users with their own planning models within SAP BW.
This is a dedicated area in BW for advanced business users to do ETL and data modeling.
One of the BW business user interfaces is the NetWeaver Business Client web application. You can access the web screens of the NetWeaver Business Client via transaction code NWBC.
BW Workspace
Since we want to plan on local as well as global member IDs, the MATGRP field is mapped to
the P00CH_LMG characteristic. Once a local dimension is created, it can be referenced again
in subsequent LocalProviders (in step 3 in the preceding figure, choose Local Dimensions).
A characteristic can only be referenced once per BPC Environment (but its local dimension
can be referenced multiple times).
It isn’t always necessary to map each field to an InfoObject. In this scenario however, we want
to generate the HANA fields with the same meta-data as the InfoObjects because the local
table will later be integrated with the central planning objects in a CompositeProvider.
In the preceding figure, the flat file contains a new version member ID as well as a new
material group.
The local dimension members can be easily added from the web client. The SID table for the
referenced characteristic does not include the local members until they are posted.
Note:
SID = surrogate ID. This is a machine-generated integer that is created for each unique member ID.
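The SID assignment described in the note can be sketched as follows (a conceptual model with invented names, not the actual BW SID table logic):

```python
class SidTable:
    """Toy model of surrogate-ID assignment: each unique member ID
    receives a machine-generated integer the first time it is seen."""
    def __init__(self):
        self._sids = {}
        self._next = 1

    def sid_for(self, member_id):
        if member_id not in self._sids:
            self._sids[member_id] = self._next  # generate a new SID
            self._next += 1
        return self._sids[member_id]            # reuse the existing SID

sids = SidTable()
print(sids.sid_for("LM01_1"))  # -> 1 (local member posted for the first time)
print(sids.sid_for("GM01_1"))  # -> 2
print(sids.sid_for("LM01_1"))  # -> 1 (already assigned)
```

This mirrors why local members only appear in the SID table of the referenced characteristic once data is posted to them.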
Besides the capability to generate a local dimension during the creation of a LocalProvider, it is also possible to create a local dimension directly in the dimension list. By selecting an existing central dimension, users can create a local dimension directly. Local attributes can also be added to the local dimension.
To be able to maintain local master data, users need the authorization object RSBPC_IOMA. Users can add a new local member with text and property values, and can also overwrite property values inherited from the global master data.
Figure 95: Viewing the Local Member IDs in the SID Table
The SID table for the referenced characteristic includes the local members when they are
posted to. However, they are not visible in planning and reporting unless the local dimension
is used.
The ...K... table contains the local hierarchy data.
Local hierarchies are available from BW7.4 SP13. Local hierarchies are not visible when the
referenced characteristic is used.
Text nodes are parent members that only exist in the hierarchy they belong to. Non-text nodes are members.
In the query on the local aggregation level, the local dimension can be configured to display
the local hierarchy.
If a hierarchy exists on the referenced characteristic, it can also be used in the query.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Plan data in the LocalProvider needs to be copied into the advanced DataStore Object.
Plan data on local members needs to be moved to the final / global member IDs.
The following process is an example of how to integrate local into global planning data. The solution may vary depending on the circumstances.
In order to integrate local into global data, we will carry out the following steps:
Add the calculation view to a CompositeProvider that already includes the central planning
aDSOs.
In the CompositeProvider, map the HANA view fields to the central InfoObjects.
Create a planning enabled query that includes both the local and global plan data.
Create a copy planning function to copy the local revenue/quantity plan data from the
native HANA table into the plan aDSO.
Create a repost planning function to move revenue/quantity plan data from local material items to the materials that reside in the material master.
There are several modeling options when working with calculation views. In our case, there is a very straightforward need for a calculation view so that the LocalProvider data can be made available to an existing CompositeProvider.
In earlier releases of SAP HANA, calculation views were part of a family called Information Views. There were three members of that family: Attribute View, Analytic View, and Calculation View.
Each of these view types had its own unique features, and typically, all three view types were
required. However, since the calculation view has inherited all the features of the two other
views, we no longer develop attribute or analytic views. In fact these types of views can be
migrated to calculation views using the supplied tools. The calculation view can now do it all,
which means we no longer have to be concerned about which view type to use.
There are four possible data categories / star join combinations:
2. Cube without star schema: this is for transaction data where there is no need to join to dimensional data.
3. Cube with star schema: this is for transaction data where there is a need to join to
dimensional data.
4. Blank: this is for master data where there is no need to report on it.
Projection nodes are used to fine-tune the source objects before we use them in subsequent nodes like union, aggregation, and rank. We can choose which fields to include, and we can also filter the data and create additional columns if needed.
Figure 104: Add all fields to the output of the projection node
In this situation, we need to include all fields in the output of the projection node.
Figure 105: Add all fields to the output of the aggregation node
In this situation, we need to include all fields in the output of the aggregation node.
In general, aggregation nodes are used to perform aggregation on specific columns based on the selected attributes.
This is a useful blog on aggregation nodes: http://teachmehana.com/aggregation-node-sap-hana-calculation-view/
In general, semantic nodes allow you to define and maintain the data types for each field and
also make use of advanced features like variables, hierarchies and input parameters. In our
case we only need to assign the attribute data type to the attribute fields and the measure
type to the value fields.
After activating the view you can see the data by choosing Data Preview from the context
menu of the semantic node. In the Analysis tab drag a dimension or two into the Label axis
and a measure or two into the Value axis and choose which graphic you like.
You can view the data in tabular format via the Raw Data tab.
Figure 109: Map the Fields from the HANA View to the Target InfoObjects
Since the calculation view has just been added to the CompositeProvider, its fields need to be
mapped to the output fields.
After activating the CompositeProvider, you can preview the data by choosing Data Preview.
The preview for CompositeProviders has navigation symbols as shown in the prior figure ... for
version 1.18 of the BW Modeling Tool. In 1.19, there are no navigation buttons.
The Aggregation Level in this case includes a subset of the CompositeProvider InfoObjects.
After creating the Aggregation Level, you can create the query. Each restricted key figure is
assigned either the underlying calculation view (local data) or the aDSO (global data).
After creating the Aggregation Level, you can create the planning functions.
The copy function will be used to copy the local data from the underlying calculation view
(local data) to the aDSO (global data).
The repost function will be used to move the local data from the LM##_1/2 local material groups to the GM##_1/2 global material groups. GM##_1/2 represents the final material group IDs in the source system’s material master.
In #1 in the preceding figure, you can see the data before running any planning functions. This
data was uploaded from the flat file and then the LM##_1/2 data was added in Analysis.
The copy function copies the data from the calculation view (local data) to the aDSO (global
data).
The repost function moves the local data from the LM##_1/2 local material groups to the
GM##_1/2 global material groups.
LESSON SUMMARY
You should now be able to:
Lesson 1
Administering Work Status 107
Lesson 2
Positioning and Designing Business Process Flows 116
Lesson 3
Administering Security 128
Lesson 4
Administering Transports 137
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
As key phases of the planning process are passed, you need to control who can change
data by setting a work state.
You need work states such as In Process, Submitted, and Approved.
You need to set the work state by year, version, and material group, for example.
The owner of each profit center, for example, can set the work status for their profit center.
The concept of owner and manager plays a vital role in the work status setup.
In the work status setting, a dimension (such as profit center) is assigned as the owner
dimension. Profit Center contains an owner attribute with a user or team to determine the
owner.
The system derives a user as a manager for a profit center based on a hierarchy. The
manager of a Profit Center is the Owner of the parent of the Profit Center.
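The derivation rule above (the manager of a profit center is the owner of its parent node) can be sketched as a simple lookup. The hierarchy, profit center IDs, and user names below are hypothetical.

```python
# Hypothetical illustration of the owner/manager derivation: the manager of
# a profit center is the owner of its parent node in the hierarchy.

hierarchy = {"PC_US": "PC_NA", "PC_CA": "PC_NA", "PC_NA": "PC_WORLD"}  # child -> parent
owners = {"PC_US": "ALICE", "PC_CA": "BOB", "PC_NA": "CAROL", "PC_WORLD": "DAVE"}

def manager_of(profit_center):
    """Derive the manager: look up the parent node, then read its owner attribute."""
    parent = hierarchy.get(profit_center)
    return owners.get(parent)
```

For example, the manager of PC_US is CAROL, because CAROL is the owner of its parent node PC_NA.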
The simplest way to determine owners is to use attributes of a characteristic such as profit
center that is contained in the advanced DataStore Object involved in the planning activity.
This characteristic is assigned the role of the owner dimension in the work status
configuration. After the work status is configured, assign the model to the workbook, save
it, close it, and then reopen it.
Once the model is assigned to the workbook and the workbook is saved and then re-opened,
you can access the work status options and view the work status criteria, for example. This
shows the data region relevant for the work status setting.
After you set the work status to approved for example, no data changes are possible. There is
a different style called SAPReadOnlyDataCell for approved cells.
After the workbook is configured for work status, you can also turn on Analysis workbook
protection, which prevents any data entry for approved data intersections. Workbook or
worksheet protection is not mandatory, however; without it, the user would still receive a
message that the data slice protects the characteristic combination. (Data slice refers to the
locked work state.)
If a planning function is executed for a locked data region, you will receive the same message
and no changes will occur.
The Controlled By setting determines who can change a work state. If it is set to Owner, then
only the owner of the United States, for example, can change the work status. If it is set to
Manager, then only the manager of the US can change the work status, and so on.
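The Controlled By rule can be sketched as a small check. This only illustrates the concept, not BPC's actual implementation; the configuration keys and role names are assumptions.

```python
# Sketch of the Controlled By rule: a work state transition is allowed only
# for the user in the configured role. Keys and values here are illustrative.

def can_change_status(state_config, user, owner, manager):
    """state_config['controlled_by'] is 'Owner', 'Manager', or 'Both'."""
    controlled_by = state_config["controlled_by"]
    if controlled_by == "Owner":
        return user == owner
    if controlled_by == "Manager":
        return user == manager
    return user in (owner, manager)  # 'Both'

# Example: the Approved state is controlled by the Manager only.
approved = {"controlled_by": "Manager"}
```

With this configuration, only the derived manager of the data region may set the Approved state.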
In the Manual Entry column of the preceding figure, a value of All for Unlocked would mean
that anyone can change the data. In the list that follows you can see the possible manual entry
values and what they control:
These are the tables that store the work states: RSBPCW_STATCODE (Code) and
RSBPCW_STATCODET (Description). Other metadata tables can be found via SE16 with
RSBPC*.
The Controlled By setting determines who can change the work status.
Manual entry includes manual data input via planning enabled queries as well as planning
functions.
The locking dimensions are, in essence, used to record the work status and determine the
users and teams.
In the preceding figure, there are four key points:
Owner Dimension: This characteristic is used to look up the owner or team. It must have a
hierarchy along with an attribute to store the owners and an attribute to store the teams.
Hierarchy: This is the hierarchy used to derive the manager (the owner of a parent node is
the manager of the child node).
Owner Property: The work status owner is the attribute of the characteristic being used to
look up the owner.
Team Property: The work status team is the attribute of the characteristic being used to
look up the team.
If you include characteristics in the other dimensions with hierarchies, then you can set the
work status on nodes of that dimension.
Keep in mind that an Embedded model can reference more than one InfoProvider; work
status can therefore be set for all InfoProviders in a model or just for a specific
InfoProvider. If you activate Set Work State by InfoProvider, then you will need to specify the
InfoProvider when you set the work status.
To clear all work states, simply turn work status off and save it.
These are the options to change work states:
The work status report can be found at the system report area of BPC Web Client. Also, by
creating a BW query on top of the work status InfoProvider, work status reports can be
viewed in SAP Analysis for Microsoft Office for example.
In 10.1, when you turn on work status for a model, the system automatically generates a BW
VirtualProvider that is used to feed the work status report. If you go into BW to transaction
To determine owner and manager, an external characteristic will be used. In the preceding
figure, the external PC (profit center in this case) has an attribute for owner and one for
reviewer. Reviewer will be used to determine the manager. (In this example, the same look-up
table is also being used for business process flows, which need a reviewer.)
BAdIs are maintained from transaction code SE19. In this example, the RSBPCB_SETUSER
enhancement spot is being used. The BAdI filter is restricted to the Environment (appset_id)
and the Model (appl_id). The implementing class contains the ABAP code.
In the preceding figure, we are using the standard set user code. Only the environment and
the attributes are custom.
Figure 132: Work Status BAdI External and Internal Characteristic Assignment
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Business Process Flows can be used to improve communication in the planning process.
Comments are available when changing the status of a step. BPF comments are
independent from BPC (planning assumption) comments.
When users submit, complete, approve, or reject a BPF activity, they are able to enter a
comment for the action. Comments are displayed in the My Activities list and in the Process
Monitor. It is also possible to enter a comment when reopening an activity.
In the preceding figure, there is an example of a BPF being used as a launchpad (from the Web
Client). The user can easily pick from a list which activity they need to perform and the link
takes them to the user interface to perform that task.
You can also access your activities from the EPM Add-in if the BusinessObjects Planning and
Consolidation plug-in is installed. You can verify this via the Excel File menu > About Analysis
> Plug-ins.
The BPC plug-in is a component of SAP Analysis for Microsoft Office, as of version 2.3. The
plug-in supports only SAP Business Planning and Consolidation 10.1 Support Package 10,
version for SAP NetWeaver, or higher. The plug-in is available in the interface as a pane named
Activity.
From Analysis you can access your BPC Environment and any associated BPF activities that
you are responsible for.
From your activities list in Excel, you can simply launch queries or Analysis workbooks with a
single click. From the lower left of the activity pane you can bring up your activity list as well as
launch the web client BPF related pages.
Typically, only one template per corporation would be needed for each type of process. For
example, one for planning and one for consolidation.
The BPF template includes roughly 4 to 6 key milestones or activities. Each activity is
assigned an owner, for example, and a workspace that contains the action groups used to
complete that activity. Activities should be built in a flexible manner to allow for normal
process variation. The identity dimensions determine the data region in which activity
statuses are recorded. For example, you complete the labor planning activity for the US, for
2017, for plan version Plan17.
Identity Dimensions determine the data region used to track the status of the activities.
Identity dimensions are characteristics that belong to the InfoProviders assigned to the
models in the environment.
Process Owners can run the process monitor to check on the progress.
Activities can be reopened if need be. However, there is a setting called Check only the current
step when reopening. When it is selected, the step can be reopened if the user is the performer
or reviewer of the current step. If it is not selected, the user needs to be the performer or
reviewer for both the current step and the previous step to reopen the current step.
1. All: All regions must complete step x before any region can do step x+1.
Allow Reopen: if selected, users may reopen a completed step. The updated status is then
visible to process monitors.
The driving dimension is used to look up the owner and reviewer for the activity. The driving
dimension can either be a characteristic in an InfoProvider used in one of the models
(internal), or an independent characteristic (external) can be used to determine the owner and
reviewer. The internal characteristic must have the same members as the external
characteristic.
In the preceding figure, the external material group has been mapped to internal material
group in the driving dimension settings.
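The external-to-internal mapping and the performer/reviewer lookup can be sketched as follows. All member IDs, attribute names, and user names below are invented for illustration; they are not the actual course values.

```python
# Hypothetical sketch of the driving-dimension lookup: an external
# characteristic carries performer/reviewer attributes, and its members are
# mapped to the internal characteristic used in the InfoProvider.

external_attrs = {
    "MG_JUICE": {"performer": "USER20", "reviewer": "USER30"},
    "MG_SODA":  {"performer": "USER21", "reviewer": "USER31"},
}
internal_to_external = {"ZMG_JUICE": "MG_JUICE", "ZMG_SODA": "MG_SODA"}

def lookup(internal_member, role):
    """Resolve the performer or reviewer for an internal driving-dimension member."""
    external = internal_to_external[internal_member]
    return external_attrs[external][role]
```

Because the lookup goes through the mapping, the internal characteristic must have the same members as the external one, as stated above.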
The performer can be determined via properties such as performer user and performer team
or by a customer implementation.
You can use due dates in BPF steps. When the due date has passed, the activity is no longer
viewable.
If a reviewer is used, the activity performers can only submit their activities for approval.
Reviewers can use the same workspace as the performer or a different one.
You can preview activity instances to see the users selected for each driving dimension
member.
When using an external characteristic as a lookup for the performer and reviewer, use custom
attributes for each one. There are no system reserved characteristics for owner or reviewer
attributes.
Analysis Office - this is used for BW Queries or Analysis workbooks. If any variables are
involved, there will be an option to link the context to the variable as a way of providing a
default value.
Work Status - this is used to set work states. For this action, you need to select the model
as well as the context.
External Resources - this is used to access URLs and external web applications such as
SAP Lumira Designer planning applications.
The URL action could be used to provide a useful link to a related website.
Open External Web-Based Application: When you are using a Business Process Flow in BPC
10.1 for NetWeaver, Embedded version, it is fairly easy to call Analysis for Office or EPM
workbooks, as both are available as direct link types in a BPF workspace. But what if you want
to do Web-based planning? Simple input forms can be called directly, but they only provide
access to a single input-enabled query and not to entire planning applications. In that case, you
would rather link a Design Studio (DS) application, and in many cases you want to use the
same selection in the DS application as in the BPF step. The following how-to paper shows
how a DS application can be called with the parameter from the BPF context:
https://www.sap.com/documents/2016/12/b61f6c24-9f7c-0010-82c7-eda71af511fa.html
When using the work status activity, you can choose to have the work status set automatically
when the step is changed. For example, if the step is completed, the work status can
automatically be set to Approved.
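The automatic work status behavior can be sketched as a simple status mapping. The step statuses and work state names below are illustrative, not the exact values used by BPC.

```python
# Illustrative mapping from a BPF step status change to an automatic work
# state, as described above. All names here are assumptions for the sketch.

step_to_work_state = {
    "Completed": "Approved",
    "Submitted": "Submitted",
    "Reopened":  "In Process",
}

def on_step_change(step_status, region, work_states):
    """When a BPF step changes status, set the mapped work state for the region."""
    if step_status in step_to_work_state:
        work_states[region] = step_to_work_state[step_status]
    return work_states

states = on_step_change("Completed", "US", {})
```

Completing the step for a region thus locks that region's data via the Approved work state, without a separate manual step.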
The instance can be started immediately, on a future date, or manually. If manual, the
instance is in suspended mode until activated on a certain date. Users cannot access an
instance unless it is started.
If an instance has deadlines, the activity can be set to either Wait (the activity will still be
available) or Close (the activity will not be available after the deadline).
Figure 149: The Business Process Flow Monitor and Managing Instances
BPF Summary
b) Enter the user name, go to change mode, enter the email address, and save.
Any SMTP server will work.
e) Choose Continue .
f) Choose Start Condition and Schedule the job to run immediately and save it.
3. Test
a) Change the status of a BPC step.
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
Security Overview
BPC Security Key Points
BPC uses off-the-shelf, standard authorization objects to control activities.
BPC also has data profiles that can be used, if needed, to further restrict BW analysis
authorizations.
RSBPC_BBPF: Manage and use a BPF. Fields: Environment name, BPF template name.
Activities: 03, 16, 23, 70.
Note:
There are other BPC authorizations for business rules and journals.
03 Display
06 Update
16 Execute a BPF
In transaction code SU21, all of the Class/Objects that start with RSBPC are for BPC
Embedded.
Users need to be assigned to an environment in order to access the BPC Web Client. This
assignment is done in the BW transaction code RSECADMIN (or PFCG) by creating a
role with the RSBPC_ID authorization object. The role is then assigned to the users.
You can view the role shown in the preceding figure in the training server. Go to PFCG and
display role: U00 Role for BPC Data Authorization.
Data Authorizations
To control access to data with BPC Embedded, you need to make the characteristic
authorization relevant. This can be done in the BW Modeling perspective.
After making the characteristic authorization-relevant, the initial BW analysis
authorizations are maintained in transaction code RSECADMIN. In the preceding figure, you
can see an analysis authorization for the characteristic P00C_MG (like Material Group).
In the Auth. Structure are the following characteristics / dimensions:
P00C_MG - this is used to provide the BW data restriction. Under the Intervals column, the
green * means that all characteristic values will be available.
0TCAIPROV - this contains the assignment of the infoprovider. In this case it is the
underlying plan InfoProvider.
0TCAVALID - this is used to control the validity date of the authorization. In the example,
it is set to all (the green *).
Once the analysis authorization is activated then it is assigned to the user(s) based on input
from the LOB. In the preceding figure you can see that the USERP20 user has been assigned
the U00AA_MG analysis authorization.
This can be carried out in the RSECADMIN transaction code: in the User tab → Individual
Assignment → Change → select the analysis authorization → select the Manual or Generated
tab → choose Insert → Save.
The preceding figure is based on the idea that the LOB (Line of Business) can take a more
active role when it comes to administering data authorizations.
On the lower left, the EDW authorizations (BW) are maintained initially by IT in
RSECADMIN. These analysis authorization objects are assigned to users or roles.
Then the environment user authorizations are maintained by the LOB in the BPC Web
Client data access profiles. The Analysis workbook must have the BPC model assigned for
this to work.
BPC data access profiles are used to control read and write access to transaction data in the
model/InfoProvider.
BPC Embedded relies initially on the underlying BW analysis authorizations (maintained by
IT); then the line of business (LOB) can further restrict a user’s data access in BPC by using
data access profiles.
In other words, the BW analysis authorizations can be merged with the BPC data access
profiles, to allow the LOB to further restrict a user’s access in BPC, thereby providing a more
practical business solution if needed.
The analysis authorizations are unioned with the environment authorization, and the
combined result is intersected with the BPC data access profile.
In the preceding figure (Security – Analysis Authorizations Concept), there are two
dimensions: Customer and Country. The possible values for the Customer dimension are 1, 2,
3, and 4, and the possible values for the Country dimension are DE and FR. The access rights
for these dimensions are shown in the preceding figure.
After the data access profile is added to the combined result of the previous step, only
customer 2 has authorizations for country FR because that is the only customer that appears
in both the combined result and the data access profile.
The data access profile does not include any permissions for country DE (there is no
intersection between the combined result of the previous levels and the data access profile).
Because of that, the permissions for country DE are entirely excluded from the final
combination of authorizations.
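The union-then-intersection logic of the example above can be sketched with Python sets. The (customer, country) pairs mirror the figure's example; this models only the merge concept, not the actual BW authorization engine.

```python
# Sketch of the merge logic: BW analysis authorizations are unioned with the
# environment authorization, and the result is intersected with the BPC data
# access profile. Authorizations are modeled as sets of (customer, country).
# The specific pairs below are illustrative values based on the example.

analysis_auth       = {("1", "DE"), ("2", "DE")}
environment_auth    = {("2", "FR"), ("3", "DE")}
data_access_profile = {("2", "FR"), ("4", "FR")}

combined = analysis_auth | environment_auth   # union of the first two levels
final    = combined & data_access_profile     # intersection with the profile
```

As in the example, only customer 2 for country FR survives, and country DE drops out entirely because the profile contains no DE permissions.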
The BPC Environment/Model must be assigned to the Analysis workbook for the BPC Data
Access Profile to be processed by the system.
Embedded BPC also allows the analysis authorization to be assigned to the environment. This
is performed manually in the RSECENVI transaction code.
In RSECENVI: select the environment → Change → select the analysis authorization → choose
Insert. RSECENVI grants the inserted analysis authorization to users who have been assigned
to the environment via the RSBPC_ID authorization object.
Note:
You can find additional information here: http://ims-bw.wdf.sap.corp:50815/
wiki/index.php/BPC_Authorizations_-_New_Concept .
BPC data access profiles can only further restrict the BW analysis authorizations. For
example, if BW allows write access to material group Juice, Soda, and Water, then the BPC
data access profiles can only be used to further restrict that access.
You can select All Members, Selected Members, or Aggregation. Aggregation allows users to
see data in total but not by ID.
The access rights include Read and Write access.
In order to use the BPC data access profile, the model needs to be assigned to the workbook.
When users open the workbook and refresh the query, they will be able to use only those
material groups (for example) they have access to in the BPC data access profile. If they try to
access an ID outside of what BPC allows, no data is returned.
Matrix security is the ability to control data access for a data intersection. The concept of
matrix security: you want to give a user access to all cost centers for one account, and a
specific cost center for all accounts.
For example, in the preceding figure, the user ran a report that shows all accounts (COGS,
LABOR, REVENUE, SERVICES, and Not Assigned) for the ADMIN cost center. If they try to see
all accounts for any other cost center, they get an authorization error.
In the bottom part of the slide, the report displays only the COGS account for all cost centers.
If they try to see any other account for all cost centers, they also get an authorization error.
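The matrix security concept (all accounts for one cost center, plus one account for all cost centers) can be sketched as a rule check. The rule format and IDs below are invented for illustration; the real check is performed by the BW authorization engine.

```python
# Sketch of matrix security: access is granted per (cost center, account)
# intersection, e.g. all accounts for ADMIN plus COGS for all cost centers.
# The "*" wildcard notation is an assumption made for this sketch.

allowed = [
    ("ADMIN", "*"),   # all accounts for the ADMIN cost center
    ("*", "COGS"),    # the COGS account for all cost centers
]

def authorized(cost_center, account):
    """Return True if any rule matches the requested intersection."""
    return any(
        (cc == "*" or cc == cost_center) and (ac == "*" or ac == account)
        for cc, ac in allowed
    )
```

A request for LABOR under ADMIN succeeds, as does COGS under any cost center, but LABOR under a non-ADMIN cost center fails, matching the authorization errors described above.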
LESSON SUMMARY
You should now be able to:
LESSON OBJECTIVES
After completing this lesson, you will be able to:
BPC Embedded Environment - this can be used to transport environments along with
models, teams, BPFs, and workspace objects.
BPC Embedded Model - this is used to transport models as well as the data audit setting
and work status.
BPC Team - this is used to transport teams along with their user assignments.
Note:
Application Set (BPC Transport) is no longer used.
To transport an environment, for example, open the BPC Embedded Environment folder
and search for the environment you need to transport. Transfer it into Collected Objects and
set the grouping to Objects Afterwards.
To access the transport collector, go to: RSA1 → Transport Connection → Object Types →
More Types → select an object type → select objects → Transport Object → enter the
transport request ID.
If a model has multiple InfoProviders that belong to a CompositeProvider, you have the option
of transporting the Model, the InfoProviders, and the CompositeProvider all at the same time.
Set the Collection Mode to Start Manual Collection. Set the Grouping to Data Flow Afterwards
and execute.
You can right-click an object to access context menu options such as Transport All Below,
and so forth.
LESSON SUMMARY
You should now be able to: