Pega Client Lifecycle Management for Financial Services '24
CONTENTS
Get started
Release notes
    Pega Client Lifecycle Management and KYC 24.1 enhancements
    Issues addressed in the release
    Known issues in the release
    Deprecated and withdrawn rules
Product overview
    Pega Client Lifecycle Management (CLM) and KYC
        Manager Tools
        Systems and environment
        Deployment options
    Pega CLM and KYC Customer Journeys and Workflows
        Example customer journeys
        Standard case types
        Case Designer overview of main onboarding case
        Customer Journey Stages
            Capture stage – Retail Banking
            Enrich stage – Retail Banking
            Capture stage – Corporate and Investment Banking (CIB)
            Enrich stage – Corporate and Investment Banking
            Due diligence stage
            Fulfillment stage
            Abandon Journey
    Pega Know Your Customer due diligence cases
    Roles, portals, and dashboards in Pega CLM and KYC
        Client lifecycle management roles
        Client lifecycle management portals
        Case search
        Case overview
        Customer summary
        Customer search
    Primary data entities
Install
    Completing the prerequisite tasks
    Preparing your environment
    Installing the application
        Importing the application
        Optional: Importing the sample data application
        Optional: Enabling sample operator accounts
    Configuring your application
Update
    Completing the prerequisite tasks
    Preparing your environment
    Updating the application
        Importing the application
        Optional: Importing the sample data application
        Optional: Enabling sample operator accounts
        Resuming background processing
    Configuring your application
        Application access
        Building your application
    Post-update tasks
        Pega Client Lifecycle Management and KYC post-update tasks
    Hotfixes
        Hotfixes for Pega Client Lifecycle Management and KYC '23
        Hotfixes for Pega Client Lifecycle Management and KYC 8.x
            Release 8.8 hotfixes
            Release 8.7 hotfixes
            Release 8.6 hotfixes
            Release 8.5 hotfixes
            Release 8.4 hotfixes
Release notes
Pega Client Lifecycle Management and KYC 24.1 provides several enhancements to the application, including but not limited to the following.

Generate Customer Summary powered by Pega GenAI™ from the Customer Master Profile screen. This feature uses GenAI to automatically create comprehensive, insightful reports on your customer data, empowering you with faster, more informed decision-making, proactive risk management, and seamless regulatory compliance.

Equifax OneView

In the 24.1 release, CLM/KYC enhanced the KYC questionnaire interface to provide users with a more intuitive and streamlined experience. The solution introduces counters in KYC questionnaires, offering analysts a quick overview of incomplete, approved, rejected, and pending mandatory items. These counters provide visual cues for areas that require action, making the Due Diligence process more efficient.

For more product information, and a list of additional documents available for this release, see the Pega Client Lifecycle Management and KYC product page.
For a complete list of new Pega Platform features, see the Pega Platform Release Notes.
For each issue, a reference number is provided, and the prefix of the reference number
indicates the issue type. You can use the reference number of an issue in your related
conversations with Pega Support.
INCs
Customer-reported incidents. For example, INC-183895.
SRs
Support requests, which were used instead of incidents in older releases. For
example, SR-D79601.
ISSUEs
Pega-identified issues. They might or might not be related to customer-reported
incidents. For example, ISSUE 654263 (which might also be written as just 654263).
SEs
Sustenance engineering activities. For example, SE-60265.
Starting in Q2 2021, all customer-reported issues are logged as INCs. You can view INCs
that you logged in the My Support Portal. INCs logged by other Pega customers, and all
other issue types (SR, ISSUE, and SE), are available in Pega internal tracking systems, in
addition to these release notes.
Tip: You can look up Pega Platform resolved issues in the Pega Platform
Resolved Issues.
Issue: The Save for Later button on the Manage related party screen does not retain the related party data.
Description: An enhancement is planned for the next release to address this issue.
Note: To submit new issues or find out more about known issues, or to
request a hotfix, go to the Pega Product Support Community. Look up or
subscribe to your Support Requests (INCs) in My Support Portal. Ensure that
you refer to the issue ID in all communications.
Product overview
• Pega Client Lifecycle Management (CLM) and KYC
This Pega Client Lifecycle Management and KYC Product Overview describes the default
features and benefits, cases, data model, and various preconfigured roles and portals
available in Pega Client Lifecycle Management and KYC.
The Pega Client Lifecycle Management and KYC solution consists of a layer of standard
components applicable for onboarding activities across all market segments of the
Financial Services industry. These standard components form journeys for different
types of customers, such as individuals or organizations. Pega has customized the
application to meet the specific needs of the Corporate and Investment Banking market
and the Retail Banking market segment. Retail Banking may be referred to as Consumer Banking or Personal Banking, depending on the geographic region in which the institution operates. The Pega Client Lifecycle Management and KYC solution currently
supports specialized customer journeys for Corporate and Investment Banking (CIB)
and Retail Banking segments. The following diagram illustrates how the features of the
Pega applications that Pega bundles together into Pega Client Lifecycle Management
and KYC interact to provide organizations with a robust set of rules, analytics, and
planning tools to streamline onboarding for financial services institutions.
• Manager Tools
• Deployment options
Manager Tools
Operational structure landing page
Pega Client Lifecycle Management and KYC provide an easy-to-use tool to define the
operational taxonomy of financial services institutions. Configuring the taxonomy
enables you to build a full operational chart with the necessary access groups, work
queues, and departments to drive, route, and manage work.
Product matrix
Pega Client Lifecycle Management and KYC automatically track operator actions during
their work. Case narratives and histories provide clear insight into actions taken as part
of the customer journey. Any data item can be tracked when configured, including
important metadata around the source of change, time, and (where appropriate)
manual comments. A data auditing tool allows for the exploration of the history of any
piece of data being tracked, which is available directly from the summary screen for the
customer.
Pega Client Lifecycle Management and KYC include a wide variety of standard reports
and graphs that provide real-time information about processes, work, assignments, and
historical data that customers use to analyze their users’ performance. Powerful, drill-
down analysis capabilities enable you to travel from a summary view of your entire
operation to the details of a single onboarding case.
Role-based access
Managers can control access to the application, specific features, and data elements
using a combination of configurable roles and privileges. Authorized users can easily
configure dialogs, coaching tips, and expert skills for service requests. These system
tools are immediately available to users as allowed by their permissions.
Provided through Pega Foundation for Financial Services, the Event-Driven Architecture
facilitates the reception of customer-based events through an API. All events are
validated, enriched, and processed based on functional needs. Perpetual KYC uses the
Event Driven Architecture to ensure that the necessary due diligence is executed for
inbound events. The Event-Driven Review customer journey can be configured to
provide event-specific behavior for screening, documentary requirements, and targeted
due diligence questions. Similar events can be bundled together for more efficient processing.
Example events include, but are not limited to, Legal name update, Full name update,
Registered address change, Document expiry, and Periodic due diligence review.
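The bundling behavior can be pictured with a short sketch. This is a minimal illustration only, assuming a simple (customer, event type) grouping key; the names and structures below are hypothetical, not the actual Pega Event-Driven Architecture API.

    # Illustrative sketch only: names and structures are assumptions,
    # not the actual Pega Event-Driven Architecture API.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class CustomerEvent:
        customer_id: str
        event_type: str   # e.g. "LegalNameUpdate", "DocumentExpiry"
        payload: dict

    def validate(event: CustomerEvent) -> bool:
        # Events are validated before processing.
        return bool(event.customer_id and event.event_type)

    def bundle_like_events(events: list[CustomerEvent]) -> dict:
        # Similar events for the same customer are grouped so that a single
        # Event-Driven Review journey can process them together.
        bundles = defaultdict(list)
        for event in filter(validate, events):
            bundles[(event.customer_id, event.event_type)].append(event)
        return bundles

    events = [
        CustomerEvent("C-100", "LegalNameUpdate", {"newName": "Acme Ltd"}),
        CustomerEvent("C-100", "LegalNameUpdate", {"newName": "Acme Limited"}),
        CustomerEvent("C-200", "DocumentExpiry", {"documentId": "D-42"}),
    ]
    for key, group in bundle_like_events(events).items():
        print(key, len(group), "event(s) bundled")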
Provided through the Pega Foundation for Financial Services, the Customer Risk
Assessment is a declarative network that tracks various data elements that impact
customer risk. One or more elements are linked to single risk factors, such as Product,
Country, or Related Parties. These lower-level risk factors roll up to a single overall risk
for the customer.
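As a rough illustration of the roll-up idea, the following sketch assumes a simple "highest factor wins" policy; the real declarative network, its risk factors, and its weighting rules are defined by the product and your configuration.

    # Minimal sketch of a risk roll-up under a "highest factor wins" assumption.
    RISK_LEVELS = ["Low", "Medium", "High"]

    def factor_risk(values: list[str]) -> str:
        # Each risk factor (Product, Country, Related Parties, ...) is derived
        # from one or more underlying data elements.
        return max(values, key=RISK_LEVELS.index)

    def overall_risk(factors: dict[str, list[str]]) -> str:
        # Lower-level risk factors roll up to a single overall customer risk.
        return factor_risk([factor_risk(v) for v in factors.values()])

    customer_factors = {
        "Product": ["Low", "Medium"],
        "Country": ["High"],
        "Related Parties": ["Low"],
    }
    print(overall_risk(customer_factors))  # -> "High"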
Risk override
Users can override the automated overall customer risk as part of the risk model. The manually overridden risk is then used to drive any risk-based activities instead of the automated risk.
Note: The system continues calculating the risk level in the background in case the user wishes to undo the risk override.
The automated risk can be manually overridden from the Case Summary or by using a
local action when working on the case.
Case navigation
A bank onboarding a customer can update core driver data when the case is already in due diligence, so that the system can handle data corrections later in the process. This enables a customer to select a local action to update core driver data while the main case is in the Due diligence stage. The case then navigates back to the Enrich stage, where the desired data is updated. The case returns to the Due diligence stage, leading to a recalculation of the appropriate due diligence activities.
CLM users can select the most appropriate Due Diligence orchestration mode while
considering the level of complexity of the financial institution’s organizational model.
Upon selection, the chosen orchestration mode persists for subsequent customer journeys. Any in-flight customer journeys started before the change in orchestration mode are completed using the previous orchestration mode.
To align with global business taxonomy, the previous Single mode is now known as
Standard mode. This mode is most suitable for the largest organizations working across
multiple regions catering to a mix of complex product offerings. In this case, the
regulations to be addressed in due diligence are handled at the lowest granular level,
distributing work across many teams.
Compared to the Standard mode, the Intermediate mode goes up one level of
granularity, so there are significantly fewer subcases for AML. Instead of potentially five
subcases across product and tax regulatory work (in Standard mode), Intermediate
mode creates one subcase per area. Therefore, this mode is best suited to
organizations operating in only one or two geographical regions and with a product mix
of medium complexity.
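A small sketch can make the difference in granularity concrete. The mode names come from the text above; the regulation names and counting logic are illustrative assumptions only, matching the "five subcases versus one per area" example.

    # Hypothetical illustration of how orchestration mode affects the number
    # of due diligence subcases; the real case structure is product-defined.
    regulatory_work = {
        "Product": ["Dodd-Frank", "EMIR", "MiFID"],
        "Tax": ["FATCA", "CRS"],
    }

    def subcase_count(mode: str) -> int:
        if mode == "Standard":
            # One subcase per individual regulation: finest granularity.
            return sum(len(regs) for regs in regulatory_work.values())
        if mode == "Intermediate":
            # One subcase per area (Product, Tax): fewer cases to manage.
            return len(regulatory_work)
        raise ValueError(mode)

    print(subcase_count("Standard"))      # 5 subcases
    print(subcase_count("Intermediate"))  # 2 subcases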
Provided through the Pega Foundation for Financial Services, the Customer Master Profile represents the local copy of all core data that directly drives customer journeys. This includes, but is not limited to, customer risk profiling, compliance elements (Pega Know Your Customer for Financial Services profile information), jurisdictional aspects, product-related data, and relevant related parties. The complete customer-related data resides in the financial institution's systems of record, which are likely to be external to the application; the application relies on a local copy of the driving data so that it can function without depending entirely on those external systems of record.
In the Corporate and Investment Banking (CIB) business, there are instances where a group of clients being onboarded share some elements of their onboarding journeys, such as operational data or KYC items. In these cases, the application can (see the sketch after this list):
• Auto-propagate data and documents that are known to be common for all
entities, including products, related parties, and documents.
• Select one entity from the group to use as a source of KYC data for specific KYC
data fields. The system uses a preconfigured auto-propagation of data for defined
data fields in the Capture and Enrich stages in the onboarding journey. The other
entity onboarding request can subsequently use the data auto-propagated from
the source entity. The scope of fields subject to auto-propagation depends on the
type of group and relationship among entities.
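A minimal sketch of the auto-propagation idea follows; the field names and merge policy are hypothetical, since the actual scope of propagated fields depends on the type of group and the relationships among entities.

    # Sketch of auto-propagation with hypothetical field names; the real
    # scope of propagated fields is preconfigured in the application.
    PROPAGATED_FIELDS = {"products", "relatedParties", "documents"}

    def propagate(source_entity: dict, target_entity: dict) -> dict:
        # Copy only the preconfigured common fields from the source entity
        # onto the target entity, leaving entity-specific data untouched.
        merged = dict(target_entity)
        for field in PROPAGATED_FIELDS & source_entity.keys():
            merged.setdefault(field, source_entity[field])
        return merged

    fund_manager = {"name": "Alpha FM", "products": ["Custody"], "documents": ["LEI cert"]}
    fund = {"name": "Alpha Fund I"}
    print(propagate(fund_manager, fund))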
Supporting evidence
Know Your Customer Data Change Control (KYC DCC) provides required control on due
diligence data changes. It automatically identifies modified data, highlights changes,
and enables efficient reviewer actions. This improves data accuracy, accelerates the
review process, and provides a robust audit trail for compliance.
KYC DCC consists of a mechanism that constantly compares two different pieces of
data:
Current Data
This is the current version of the data, representing the value that the KYC item holds at the moment of evaluation. It can change further when:
• the user updates the data in the user interface
• the data programmatically propagates from Enrich into Due Diligence
• the associated KYC items from previous journeys are modified
Baseline Data
The latest version of the KYC item response that has undergone the review
process. It serves as a reference point for constant comparisons. This includes
both KYC item responses and supporting evidence.
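Conceptually, the comparison works like the following sketch, which assumes KYC item responses are simple key-value pairs; real KYC items and their supporting evidence are richer objects.

    # Minimal sketch of the current-vs-baseline comparison idea.
    def changed_items(current: dict, baseline: dict) -> dict:
        # Flag every item whose current value differs from the reviewed
        # baseline, including items that are new since the last review.
        return {
            item: (baseline.get(item), value)
            for item, value in current.items()
            if baseline.get(item) != value
        }

    baseline = {"sourceOfFunds": "Salary", "pepStatus": "No"}
    current = {"sourceOfFunds": "Salary", "pepStatus": "Yes", "taxResidency": "DE"}
    for item, (old, new) in changed_items(current, baseline).items():
        print(f"{item}: {old!r} -> {new!r}  (requires review)")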
The Pega CLM application allows financial institutions to separate duties regarding the
input and review of data in cases.
When performing KYC due diligence on a customer, there may be a large amount of data for the "checker" to review. If an error is found or additional information is required on any KYC item, the Analyst (also referred to as the "maker") shouldn't have to review all the questions again to figure out what has been flagged by the Reviewer (also referred to as the "checker").
Additionally, auditors will have further evidence that each KYC item was reviewed in the
context of KYC due diligence cases. This is done by showing one of the below labels
next to each KYC item:
• Approved
• Rejected
• Amended
Pega Business Process Management (BPM) streamlines your operations so you can
reduce costs and improve business agility. Leading analyst firms recognize Pega as the
most comprehensive and unified BPM platform. Rules and processes automatically
resolve work wherever possible.
Case management
Pega industry-leading case management helps you simplify and automate work. Case
management helps you keep your promise to your customers, connecting all the
people and systems required to resolve each customer inquiry. It tracks related
information, automates and assigns outstanding tasks, and connects front- and back-
office activity for end-to-end resolution.
Visual workflow
Pega Directly Capture Objectives (DCO) visual tools capture every aspect of how work
gets done, including processes, user interface, rule creation, and integration. You can
easily configure workflows using the Pega process modeler by dragging and dropping
from a library of smart shapes. You can quickly add new service requests, automatically creating the infrastructure that Pega Client Lifecycle Management and KYC require.
You can translate business requirements into finished applications without manual
programming, reducing implementation time and bridging the gap between technical
and business resources.
Pega Live Data simplifies the use of data in business processes by delivering the right
data, in the right place, at the right time. It manages data requests behind the scenes so
that data flows to the right process steps. It is easy to change and adapt to new data
sources and new applications across thousands of users.
Integrations with your existing systems of record are crucial to any Customer Relationship Management implementation. Pega customers can choose to use Pega as the system of record or orchestrate their data from multiple sources for delivery through the Customer Service application. In the latter case, Pega Live Data manages the data requests behind the scenes, as described above, so that data flows to the right process steps regardless of its source.
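Conceptually, this is the on-demand, cached data access pattern shown in the sketch below; the function and source names are hypothetical, and Pega data pages provide this behavior declaratively rather than in code.

    # Conceptual sketch of on-demand, cached data access (what a data page
    # provides declaratively); names here are hypothetical.
    from functools import lru_cache

    @lru_cache(maxsize=128)
    def customer_profile(customer_id: str) -> dict:
        # In Pega, the data page would call the configured system of record;
        # here we fake the lookup.
        print(f"fetching {customer_id} from system of record...")
        return {"id": customer_id, "segment": "CIB"}

    customer_profile("C-100")  # triggers the fetch
    customer_profile("C-100")  # served from cache; no second fetch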
Situational layer
With Pega Situational Layer Cake, you can automatically tailor applications to the
business context in which they operate. Scale variation is all handled in one application
version, using a coherent, layered architecture. Pega efficiently delivers repeatable,
differentiated solutions by re-using standard policies and procedures in multiple
business units, channels, geographies, and customer segments.
Multiple languages
Language packs support application localization through translated field values for
buttons, prompts, and labels.
Deployment options
Cloud Choice and Pega Cloud
The following image shows the case summary of a new onboarding customer journey in
the due diligence stage. A clear overview of the progress and status of the overall
journey is available, including details on tasks being executed across the business in
different groups.
The following image shows how an active customer can be onboarded with additional
products. All data captured in the initial onboarding, including relevant due diligence
information, will be reused if still valid.
The following image shows the initial confirmation screen of a Customer periodic
review journey. A set review date has passed for this customer, and the customer
journey has been automatically created using event-driven architecture. A detailed
review of the core customer data and due diligence now takes place.
Maintain Business relationship – Add Fund: Adds a Fund to an existing Fund manager business relationship
Offboard existing customer – Exit products or locations: Exits products or locations for an existing customer
Customer Review – Customer periodic review: A periodic review of the customer's due diligence based on the "next review date"
Standalone due diligence – Conduct due diligence: Conducts a standalone due diligence review for a customer already in the system
These base assets are specialized for the Retail Banking and Corporate and Investment Banking market segments:
• Fulfillment stage
• Abandon Journey
After entering a customer name, the Pega Client Lifecycle Management and KYC
application returns a list of existing customers or a customer that is in the process of
onboarding; appropriate customer journeys are then presented.
After the customer answers a simple set of questions, the Pega Client Lifecycle
Management and KYC application determines whether onboarding can continue via the
self-service channel or if the potential customer must visit a branch.
Select products
The user sees a list of appropriate products with relevant information to choose from.
Enter the minimum data about the customer required to proceed with the account
opening application.
Personalize
The user enters account personalization details such as a PIN, a password for online
banking, and statement preferences.
The user provides documents as required to progress the onboarding process. They
can provide files from the local file system or use pictures from a mobile device.
Review submission
The user is presented with final terms and conditions related to the chosen products
and services.
Confirmation
The user is presented with a message confirming their next action in the onboarding
process.
In some instances, the third-party eKYC customer validation process requires further
review. A back-office user is presented with all the necessary information to accept,
reject, or escalate the request.
Using data captured and standard templates, the application generates a welcome pack
of appropriate documents attached to the case, and this is sent by email to the
customer.
Pega Client Lifecycle Management and KYC generate a set of business approval cases
and route them to the relevant regional business sponsor for processing and an
approval process appropriate to the products chosen.
Customer synchronization
Following successful approvals from the business sponsors, all the core data captured
during the first two stages of the customer journey is appropriately persisted into the
systems of record. A sample database comes built into the Pega Client Lifecycle
Management and KYC application to represent this process.
Pega Client Lifecycle Management and KYC can search a list of existing customers or a
customer that is in the process of onboarding; appropriate customer journeys are then
presented.
Entering basic information about the customer initiates the appropriate onboarding
journey:
• Entity type
• High-level industry classifications
• Country of incorporation address
• Primary point of contact details
Add products
Selecting the Jurisdiction and/or Booking Entity presents a list of products available to
onboard the customer. This product list is driven by the operator’s alignment to the
Financial Institution’s operational structure and, as appropriate, supports selections of
multiple products across multiple jurisdictions.
Related parties
While normally collected during the Enrich stage, related party data can also be added
using the Manage related parties local action. See Enrich Data for Due Diligence for
further details.
The review of the initially required documents (based on the currently captured
customer data) is available using the Review required documents local action. This list
of required documents can grow significantly as additional data is captured during the
Enrich stage.
Note: Before version 8.5 of the Pega CLM application, this was an explicit workflow step. From version 8.5 onward, requirements are displayed on the right-hand side of onboarding cases.
Business case
The business case information captured from this relationship includes expected
service date, investment objectives, investment outlook, and expected value of the
business to the Financial Institution.
Resolve duplicates
Stakeholder notifications
To enhance the data collection process during the Enrich stage, Pega Client Lifecycle
Management and KYC allow you to easily configure third-party services such as VEDaaS
(AVOX), SWIFT KYC, Dun & Bradstreet (via Encompass) and Markit (KYC.com) to retrieve
documentation and information automatically. After completing the configuration for
the desired entity, all documents can be retrieved and optionally attached to the parent
case for future processing. For individuals, Pega Client Lifecycle Management and KYC
automatically provides options for four business services: eIDVerification, eFraudCheck,
eCreditProfile, and eScreening. By default, Pega Client Lifecycle Management and KYC
configure these as connected to Equifax, but the connection to similar service providers
can be updated to meet your specifications or requirements.
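The pluggable-provider idea behind these services can be sketched as follows; the class and method names are illustrative assumptions, not the application's actual integration interfaces.

    # Hedged sketch of the pluggable-provider idea; names are illustrative.
    from typing import Protocol

    class IdVerificationProvider(Protocol):
        def verify(self, person: dict) -> bool: ...

    class EquifaxProvider:
        # Default provider in the shipped configuration.
        def verify(self, person: dict) -> bool:
            return bool(person.get("nationalId"))

    def run_eid_verification(person: dict, provider: IdVerificationProvider) -> bool:
        # The journey depends only on the interface, so the connection can be
        # repointed to a similar service provider without changing the process.
        return provider.verify(person)

    print(run_eid_verification({"nationalId": "X123"}, EquifaxProvider()))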
This placeholder step represents the generation of a welcome pack that meets your
specifications or requirements for the customer being onboarded.
eScreening
... business relationships and human networks. A separate eScreening case is spun off for
parallel handling from the primary parent case.
Refinitiv World-Check One Adverse Media API enables Pega CLM users to seamlessly
query, adjudicate, and exchange adverse media (also known as negative news) findings
on a single unified platform.
When enabled, Pega CLM users are presented with a separate Adverse Media subcase in parallel with eScreening cases. As with eScreening, the Adverse Media case is spun off simultaneously with the main parent case to allow users to complete this and other vital data enrichment in parallel.
Sales Support can build on the customer data captured during the initiation of the
onboarding journey. The wide range of required enrichment data is split into nine
logical categories to aid processing.
One area of data enrichment is related parties. To drive the necessary due diligence
processes, the Sales Support staff can relate individuals and organizations to the
customer being onboarded. They either select existing parties or create and then select new ones. They then assign each related party a category from the available categories, any of which can have custom attributes to drive later business processes.
Where appropriate, data captured in the Enrich stage is automatically mapped to the
answer in the due diligence questions during the Due Diligence stage. The Pega Client
Lifecycle Management and KYC application presents customer data undergoing
maintenance journeys, including those for brand new customers or locations for your
organization. The Credit and Legal data categories are currently placeholders scheduled
to be enhanced in future Pega Client Lifecycle Management and KYC application
releases.
Pega Client Lifecycle Management and KYC generate a set of business approval cases and route them to the relevant regional business sponsor for processing and an approval process appropriate to the products chosen.
Customer synchronization
Following successful approvals from the Business Sponsors, all of the core data
captured during the first two stages of the customer journey is appropriately persisted
into the systems of record. A sample database comes built into the Pega Client Lifecycle
Management and KYC application to represent this process.
Fulfillment stage
Product Fulfillment cases are generated for each jurisdiction, followed by wrap-up activities and the final stakeholder notifications typical of the type of organization in which the onboarding journey is based. Such fulfillment cases must be completed before you can resolve the overall customer journey.
Abandon Journey
An Abandon Journey stage is available as a local action during the Capture and Enrich
stages. It allows any user to put in a request to abandon the journey. This journey
option presents a list of preconfigured reasons and the ability to provide background
information for the request. Upon approval by the necessary manager, the Pega Client
Lifecycle Management and KYC application “rewinds” any data changes made up until
that point and enables you to resolve the case appropriately.
Every Pega Know Your Customer for Financial Services case has the following stages:
• Related party KYC: Generates additional cases for all relevant related parties.
This generation iterates down through the related parties’ network based on
configuration.
• Global KYC: Collects Global KYC data.
• KYC review: Reviews/completes authorizations for Global KYC collected data.
• Local due diligence: Generates separate cases for each jurisdiction, including
Anti-Money Laundering (AML) checks, any checks required according to the local
regulatory requirements, and a Relationship Manager review.
Product-based regulations, such as IIROC, Dodd-Frank, FINRA, EMIR, and MiFID, are
handled in specific Regional and Local due diligence cases.
Tax-specific cases for FATCA and CRS are available using the KYC engine and regulatory
rules.
The Pega Client Lifecycle Management and KYC application carries out an automatic
legal due diligence process based on the nature of the product, relationship, and type
of organization being onboarded. The Pega Client Lifecycle Management and KYC
application generates simple placeholder cases with representative stages.
• Case search
• Case overview
• Customer summary
• Customer search
Global KYC – User: User carrying out data collection for Global Know Your Customer due diligence
Global KYC Review – User: User carrying out reviews of Global Know Your Customer work
Local KYC Review – Manager: Manager in charge of the team carrying out Local Know Your Customer reviews
Local KYC Review – User: User carrying out Local Know Your Customer review
Each preconfigured role has access to portals designed to present information and help
drive decisions. The end-user portals are as follows:
CLM Back Office Manager: Review and manage the work of others in areas such as business approvals, product fulfillment, legal and credit approvals
CLM Back Office Operator: Carry out back-office tasks such as business approvals, product fulfillment, legal and credit approvals
CLM KYC Manager: Used to review and manage the work of the Know Your Customer (KYC) team
CLM KYC Operator: Used to carry out all appropriate types of Know Your Customer work
CLM Sales Support Manager: Used to review and manage the work of those in Sales Support
CLM KYC Rule Manager: Used to create, update, or delete KYC Types
CLM Sales Support Operator: Used to carry out Sales Support work
Branch Portal (Retail Banking): Used by branch front office users in Retail Banking
This section offers a concise overview of the diverse dashboards and landing pages
accessible through the available end-user portals.
The retail onboarding dashboard within the branch portal offers a view of successfully onboarded products, the current real-time status of customer applications seeking onboarding, and a dynamic action list for necessary follow-up steps.
The onboarding process within the web self-service platform incorporates interactive
guidance text, and the user interface is designed to enhance the self-service experience
for retail customers throughout the entire journey, from product selection to successful
onboarding completion.
Case search
This function allows operators and managers to search for cases and filter the list by
core fields.
Case overview
It gives users a clear indication of the current stage, who owns the current task, the key
participants, case progress, and access to further information using the Customer
summary in the case header.
Customer summary
Client Lifecycle Management (CLM) allows users to gain a unified and comprehensive
understanding of their customers - both individuals and organizations - via the
Customer Summary functionality. The Customer Summary includes various
components, from basic customer data to ownership structure, to details of ongoing
journeys.
You can now generate succinct and insightful summaries of customer profiles with the
help of Pega GenAI™. The Generate Customer Summary feature provides a
comprehensive report that can be easily accessed from the Customer Master Profile
(CMP) screen. This eliminates the need to navigate through multiple pages, significantly
enhancing customer onboarding, review processes, and overall value chain interactions.
The Generate Customer Summary feature is readily accessible on the top right-hand side of the CMP screen.
Key Features
• Basic information: Fundamental details about the customer, such as their name,
contact information, and any essential identifiers.
• Products: Various products or services associated with the customer. This has
details such as product type, client jurisdiction, and booking entity.
• Ownership structure: It applies exclusively to entities, illustrating the primary
aspects of ownership and control within the organization.
• Risk: Overall risk profile of the customer, including country risk, business risk,
product risk, related party risk, secondary risk, relationship duration risk and
external data risk.
• Existing business relationships: Ownership and control, management, and
authorized representative roles of the customer in other relationships.
• Current journey details: A snapshot of the cases created in the current journey, along with information about the number of open subcases.
Customer search
You can search for existing customers using a basic or advanced search and then
access their customer summary. Search across a single or all customer types with
results across multiple tabs.
Where licensed appropriately, you can use a series of third-party data partners to
retrieve details of a prospective customer. The onboarding process then starts with all
available data prefilled, thus accelerating the enrichment process.
In addition to the end-user portals, there are two additional business configuration
portals:
KYC Rules Manager Studio: Configure all rules related to Know Your Customer
A business, holding company, or corporation that can be a customer, contact, or prospect to the financial institution. It can consist of one or more organizations that can hold many accounts. Some examples include corporations, partnerships, associations, funds, trusts, family
Install
This document describes how to install the Pega Client Lifecycle Management and KYC
application.
1. Before starting an install, and before backing up your system, review the database
policies and application permissions that are used by your Pega Platform
installation. Determine whether the application is permitted to update the
database automatically or if you must generate the database scripts that your
organization will use to manually make schema changes.
2. Determine which language packs are applicable to your product and check for
availability. For additional information, see Pega Marketplace.
3. Install the latest version of Pega Platform 24.1 and ensure that you can log in as
an administrator. For more information, see the Install and update Pega Platform
for your environment.
4. Install Pega Foundation for Financial Services 24.1. For information, see the Pega
Foundation for Financial Services installation guide on the Pega Foundation for
Financial Services product page. Apply any hotfixes for this application. For the list
of the required hotfixes, see the Pega Foundation for Financial Services hotfixes page.
5. Install Pega Product Designer for Financial Services 24.1. For information, see the
Pega Product Designer for Financial Services installation guide on the Pega
Foundation for Financial Services product page. Apply any hotfixes for this
application. For the list of the required hotfixes, see the Pega Product Designer for
Financial Services hotfixes page.
6. Obtain the latest Pega Client Lifecycle Management and KYC application media.
For the list of the required hotfixes, see the Pega Client Lifecycle Management for
Financial Services Hotfixes.
Complete any prerequisites required for Pega CLM and KYC. Ensure that all necessary
conditions are met before initiating the installation.
Complete the prerequisites for this installation. For information, see the
"Completing the prerequisites" topic earlier in this guide.
Note: Set the value to true for the following DSS settings to avoid any
SRS indexing issues during installation.
Setting Description
To import the sample configuration for Pega Client Lifecycle Management and KYC,
complete the following steps.
1. In the header of Dev Studio, click Configure > Application > Distribution > Import.
2. Select the SampleData/PegaCLMFSSampleBasic.jar file from your distribution
media.
3. Select the check box to enable the advanced mode and ensure that the existing
sample operators are updated with the new configuration.
4. Follow the wizard instructions to import the sample data.
5. To import sample customers, repeat the previous steps for SampleData/
PegaCLMFSSampleCustomers.jar in standard mode.
1. In the header of Dev Studio, click Configure > Org & Security > Authentication >
Operator Access.
2. In the Disabled operators section, select the check box next to the operator ID to
enable.
3. Click Enable selected.
The Enable Operator dialog is displayed.
4. Click Submit to confirm that you want to enable the selected operator ID.
5. Click OK to close the dialog box.
Application access
After installing Pega Client Lifecycle Management (CLM) and KYC, assess the base
application for out-of-the-box functionality. Utilize the provided standard access groups
and operators as a reference point for your settings. The base application provides
several standard access groups that can facilitate access to the application.
• If you installed the sample application and enabled the operators in the
installation step, you can use these credentials to access the application.
• If you did not install the sample application, create operators aligned with the
standard access groups listed below.
To make the base product meet your specific business needs, create an implementation layer. Creating an implementation layer is a multi-step process involving various tasks such as:
• Defining a strategy
• Building a new application
• Configuring access groups and operators
Update
This document describes how to update the Pega Client Lifecycle Management and KYC
application.
• Post-update tasks
• Hotfixes
1. Before starting an update, and before backing up your system, review the
database policies and application permissions that are used by your Pega
Platform update. Determine whether the application is permitted to update the
database automatically or if you must generate the database scripts that your
organization will use to manually make schema changes.
2. Determine which language packs are applicable to your product and check for
availability. For additional information, see Pega Marketplace.
3. Update to the latest version of Pega Platform and ensure that you can log in as an
administrator. For more information, see the Install and update Pega Platform for
your environment.
4. Update Pega Foundation for Financial Services 24.1. For information, see the Pega
Foundation for Financial Services update guide on the Pega Foundation for
Financial Services product page. Apply any hotfixes for this application. For the list
of the required hotfixes, see the Pega Foundation for Financial Services hotfixes page.
5. Update Pega Product Designer for Financial Services 24.1. For information, see the
Pega Product Designer for Financial Services update guide on the Pega
Foundation for Financial Services product page. Apply any hotfixes for this
application. For the list of the required hotfixes, see the Pega Product Designer for
Financial Services hotfixes page.
6. Obtain the latest Pega Client Lifecycle Management and KYC application media.
For the list of the required hotfixes, see the Pega Client Lifecycle Management for
Financial Services Hotfixes.
Complete any prerequisites required for Pega CLM and KYC. Ensure that all necessary conditions are met before initiating the update.
To avoid inconsistencies and data integrity issues during the manual import, pause any
background process that might trigger the running of rules before the update is
complete. Otherwise, the application might start running rules before the new rule
base is consistent and ready for use.
Review the following background processes that are provided and active by default, and
that might be in use by your application.
Consider any other background process or inbound flow that your organization might
have implemented and that might trigger the execution of logic. For example,
processes of this type might be the remote creation of cases through Pega API or the
reception of events through data flows.
Back up the database to ensure that you can revert to the last working version of the
system if you encounter an issue. The update process modifies both the data schema
and the rules schema. Complete an offline backup procedure that preserves the data
schema and the rules schema. For details, follow the specific instructions provided by
your database vendor.
Importing the new version of the application might require the execution of column
and declare-index population jobs. These jobs run in the background, populating new
columns and declare-indexes that are imported with the application, which sometimes
requires the update of many records. In PostgreSQL installations, this massive update
of records requires additional temporary disk space. Ensure that you have enough disk
space available for the database.
Complete the prerequisites for this update. For information, see the "Completing
the prerequisites" topic earlier in this guide.
4. Depending on which version you are updating from, the wizard might alert you of
some pre-existing rules. Select the Replace check box to replace pre-existing rules
and continue with the wizard.
5. Depending on your environment, the system might determine during the import that database schema changes are required. For schema changes, depending on your site's configuration and internal policies, select either Automatic to have the wizard make the changes on your behalf, or Manual to generate the SQL to be executed by your DBA.
If you select Manual, see Viewing and applying schema changes.
6. When the import is complete, click Done.
7. Repeat steps 2 to 6 for:
• Rules/PegaKYC.jar
• Rules/PegaKYCRC.jar
• Rules/PegaCLMFS.jar
• Rules/PegaCLMFSCIB.jar
• Rules/PegaCLMFSRet.jar
• Upgrade/PegaCLMFSUpgrade.jar
8. Apply the required hotfixes by using Hotfix Manager. For more information about
applying hotfixes, see Applying hotfixes.
Note: Set the value to true for the following DSS settings to avoid any SRS indexing issues during the update.
Setting Description
To import the sample configuration for Pega Client Lifecycle Management and KYC,
complete the following steps.
1. In the header of Dev Studio, click Configure > Application > Distribution > Import.
2. Select the SampleData/PegaCLMFSSampleBasic.jar file from your distribution
media.
3. Select the check box to enable the advanced mode and ensure that the existing
sample operators are updated with the new configuration.
4. Follow the wizard instructions to import the sample data.
5. To import sample customers, repeat the previous steps for SampleData/
PegaCLMFSSampleCustomers.jar in standard mode.
1. In the header of Dev Studio, click Configure > Org & Security > Authentication >
Operator Access.
2. In the Disabled operators section, select the check box next to the operator ID to
enable.
Start the out-of-the-box queue processors, job schedulers, and agents that your application used, as well as any other background processes that you stopped before. Reopen access to the application from external systems.
Note: If this update was part of a bigger update process involving other applications and you stopped processes from other applications, you can resume them now.
Application access
After installing Pega Client Lifecycle Management (CLM) and KYC, assess the base
application for out-of-the-box functionality. Utilize the provided standard access groups
and operators as a reference point for your settings. The base application provides
several standard access groups that can facilitate access to the application.
• If you installed the sample application and enabled the operators in the
installation step, you can use these credentials to access the application.
• If you did not install the sample application, create operators aligned with the
standard access groups listed below.
• Defining a strategy
• Building a new application
• Configuring access groups and operators
• Application access
Post-update tasks
The new version of Pega Client Lifecycle Management and KYC that you have just updated to comes with new features that you may want to adopt. A comprehensive list of those features is available at Pega Client Lifecycle Management and KYC Post-update tasks. Review those lists to ensure that your implementation uses the latest features available as soon as possible and transitions smoothly to the new version of the product.
For more information about the required configuration, see the Pega Client Lifecycle
Management for Financial Services implementation guide.
Feature Description
Customer audit data retrieval enhancements: The CLM customer audit data retrieval and display has been enhanced, resulting in a substantial improvement in performance and overall user experience. For more information about the configuration...

KYC rule database tables: KYC rules are stored in the platform tables. The update to 8.6 automatically migrates the rules into new dedicated database tables (..., kyc_supporting_evidence) that provide more flexibility to the solution. In the unlikely event that you have some custom functionality...

Pega Know Your Customer for Financial Services rules deprecation: In 8.6, Pega Know Your Customer for Financial Services is sunset as a product, and its capabilities and rules are now part of Pega Client Lifecycle Management and KYC. The rules that made up Pega Know Your Customer for Financial Services...

Requirements and document collection: The 8.6 version of the application includes major changes in the requirements and document collection functionality, Requirements Next Generation (RNG). By default, this new functionality is not active after the update. The system behaves in the same way as in previous versions, and the feature requires activation and configuration. For more information, see the Pega Client Lifecycle Management for Financial Services implementation guide.

Relationship codes and entity types: The new version of the application includes changes in the existing relationship codes and entity types. If you use the default values, you do not need to do anything to pick up these changes; they are visible automatically. If your organization changed the default values, you might need to assess how the new values impact your implementation. For more information about access to these two data types, see the Pega Client Lifecycle Management for Financial Services implementation guide.

Master Profile loading mechanism: To reduce the memory footprint of the application, the data page that gives access to the Master Profile imposes a limit on the number of active instances in memory. You can configure default thresholds and the availability of this functionality. For more information, see the Pega Client Lifecycle Management for Financial Services implementation guide.

KYC declare indexes: The declare indexes that enabled the reporting on KYC types and items, which present significant performance issues, are deprecated. Disable the indexes by changing the PegaKYCFS/EnableDueDiligenceIndexing dynamic system setting to false. If your organization needs to report across KYC types, consider exporting the data for its consumption in an external reporting system. For more information, see the Pega Know Your Customer Engine.
To evolve the application functionality and improve its maintenance, some of the rules
might be deprecated or withdrawn in the new version. These changes do not affect
standard implementations, but may affect customized implementations. Review the
deprecated and withdrawn rules to ensure that they do not impact your
implementation of Pega Client Lifecycle Management and KYC. For a detailed list, see
Deprecated and withdrawn rules.
Hotfixes
Hotfixes are software updates that Pega creates to fine-tune the behavior of a release.
Import each type of hotfix in the listed order during the Pega Client Lifecycle
Management and KYC installation or upgrade:
• Apply Pega Platform hotfixes immediately after the Pega Platform installation or
upgrade.
• Apply Pega Client Lifecycle Management and KYC hotfixes just after you complete
the application bundle import.
For hotfix installation details, see the readme that is included in the hotfix. Tables that
contain hotfix information for each release are provided below:
HFix-84795 to 8.8 on customer request.
8.7.4 HFix-B294: Added the Save button in the Manage related parties screen to enable temporary save of related parties' data.
HFix-82141: Change in the KYCDueDeligenceInformation section of class PegaKYC-Data-PolicyProfile. Replaced pxCreateDatetime with the AppliedDatetime property, as AppliedDateTime holds the right date-time value of KYC cases.
1. Removed unwanted portal associated with the access group.
2. Added error handling when the eScreening/Adverse Media provider component is not present in the stack.
3. Removed redundant versions of pyCaseHeader in different eScreening/Adverse Media classes.
4. Withdrew overridden/customized case types and stage rules in the CIB/RET rulesets and specialized them only by circumstance.
5. World-Check secondary fields enhancements/issues.
1. Removed unwanted portal in Dev or App Studio associated with the configuration of access groups.
2. Provided error-handling functionality for the scenario when the World-Check integration component is not present in the stack.
3. Removed redundant versions of pyCaseHeader in various eScreening/Adverse Media classes.
4. Withdrew overridden case types and stage rules in the PegaCLMFSCIB/PegaCLMFSRET rulesets.
• Mandatory Question Fields
• Multi-line Edit Fields
• KYC/Review Link Labels
• KYC: HFIX-70091
• KYC-RC: HFIX-70092
... to a user after submitting a waive request.
24. Error thrown on UI when a user attempts to submit related party data when the onboarding case is open in a separate session.
Important: The KYC HotFix-61554 is a prerequisite for this hotfix.
... Fund Manager onboarding case.
• Resolved abandoned cases are not hidden after proceeding to the due diligence stage for the second time.
• Single Page attributes are not captured during PDF Snapshot generation.
• Contains organization chart units, operators, work baskets, and workgroups to support the target operating model for Argentina, Brazil, Cayman Islands, Chile, Colombia, Dubai, Isle of Man, Israel, Japan, Malaysia, Mexico, New Zealand, Peru, Philippines, Qatar, Russia, Saudi Arabia, South Africa, Taiwan, Thailand, Turkey, UAE
• Compatibility with Know Your Customer Regulatory Compliance 18.2 release.
• Support 22 new APAC jurisdictions: Argentina, Brazil, Cayman Islands, Chile, Colombia, Dubai, Isle of Man, Israel, Japan, Malaysia, Mexico, New Zealand, Peru, Philippines, Qatar, Russia, Saudi Arabia, South Africa, Taiwan, Thailand, Turkey, UAE.
• Created CIP requirements for 22 new jurisdictions.
• Withdrew unnecessary rules related to the document handling-based Requirements functionality from CLM to allow the use of the equivalents from PFFS by default.
• Implemented CLM-specific specializations by creating appropriate extension points in PFFS for the document handling-based Requirements functionality.
• Contains organization chart units, operators, work baskets, and workgroups to support ...
• Compatibility with Know Your Customer Regulatory Compliance 18.1 release.
• Support 4 new APAC jurisdictions: China, India, Indonesia, and South Korea.
• Created CIP requirements for the 4 new APAC jurisdictions.
• Capture of Local Entity, including validations, for India and Indonesia jurisdictions only for onboarding journeys.
• Compatibility with Pega Know Your Customer Regulatory Compliance for Financial Services 17.4.
• Compatibility with Pega Foundation for Financial Services (PFFS) 7.32.
• Support UI ToolKit 10 through PFFS 7.32.
• Compatibility with Pega Know Your Customer Regulatory Compliance for Financial Services 17.4.
• Support 7 new EU AML jurisdictions.
• Updated CIP requirements for 6 existing countries of EU AML jurisdictions.
• Compatibility with Pega Foundation for Financial Services (PFFS) 7.32.
• Support UI ToolKit 10 through PFFS 7.32.
... Content Management Interoperability Services (CMIS) and Alfresco server.
Implement
Pega Client Lifecycle Management and KYC (Know Your Customer) is a robust solution that expedites the onboarding of new customers at financial institutions. It significantly reduces the effort required to handle intricate regulatory requirements while ensuring a seamless customer experience.
The primary objective of Pega Client Lifecycle Management and KYC is to empower
various stakeholders involved in the onboarding process, such as relationship
managers and sales support teams. By providing enhanced capabilities for capturing
customer information and conducting due diligence, this application enables these
professionals to perform their tasks more efficiently and accurately.
• Application Configurations
• Perpetual KYC
• Customer Review
• Supporting functions
• Application performance
• Reference
The Building a new application chapter describes the step-by-step process to create your implementation layer and start using the application. The remaining parts of the guide explain configuration and extension functionalities.
The configuration tasks given are optional and are only required when you want to meet specific business needs. For more information, see the Pega Client Lifecycle Management for Financial Services knowledgebase articles.
Define your implementation strategy and prepare an operator that can be used
to run the new application wizard.
Result:
The New Application wizard opens when the operator logs in.
4. Click Use this application type to reuse resources from the application, such as case types or data types, and import the resources into your new application. To reuse resources:
a. In the Select case types section, select the checkbox next to each applicable case type. You can select all case types at once, or add or remove them later from the Case Types section of your new application.
b. Click Continue.
c. In the Select data types section, select the checkbox next to each relevant data type. By default, the list includes the most relevant data types used by the base application, but depending on your business needs, you might not need all of them. You can select all the available data types at once, or add or remove them later from the Data Types area of your new application.
d. Click Continue.
e. On the Name your application page, enter the name of the application, and
then click Advanced configuration.
f. In the Organization settings section, enter the Organization name, Division
name, and Unit name for this application.
Result:
The New Application wizard creates the application class structure for you based on the organization settings that you enter. For more information, see Class layers and Class hierarchy and inheritance.
g. Click Save.
h. Click Create application.
Result:
Journeys
Cases that implement the end-to-end business journeys that the application manages.
Journeys are instantiated as top-level cases and might require other case types to
complete their work.
In the base product, journey cases reside under PegaCLMFS-Work-CLM, a class that
defines the generic case lifecycle used as a base by most journeys. For example, the
Client Lifecycle Management case in the Case types area of Dev Studio and App Studio.
The class structure identifies each specialized journey case.
If you select all the case types when you create your application, you should have the
following journey cases available under your main work pool (for example, UPFS-MyApp-
Work):
Each of these journeys implements different variations, named journey subtypes. The journeys are implemented as classes and case types. A subtype is a qualifier (a property) on the case that can slightly change the behavior of the journey.
For example, all customers can be onboarded using the Onboard new customer
journey. However, the onboarding process for an individual will present some
differences compared with the process for an entity. The process for the main entity will
also be different from, for example, a subsidiary. The journey subtypes manage all the
minor variations over the main journey in the system.
Supporting cases
These cases support the process defined by a journey. Most supporting cases are
instantiated as subcases of the journey case, while others are created as top-level
cases. However, they all need a journey case to exist. The following supporting cases
are available out of the box:
Case type Class
FATCA UPFS-MyApp-Work-Tax-FATCA
CRS UPFS-MyApp-Work-Tax-CRS
eScreening UPFS-MyApp-Work-Screening
Requirement UPFS-MyApp-Work-Requirement
You must perform these steps to make a new journey type available. For more
information, see the list of available Journey types and subtypes.
Create a new class under the root class for your journeys in Pega Client Lifecycle
Management and KYC.
For example, if you plan to create a specific journey to manage the renewal of
documentation, you can create a class with the following characteristics:
Attribute Value
Class UPFS-MyApp-Work-CLM-DocumentCollection
The system refers to the class that you create in many different operations, and so the
class must be registered under the D_AppExtension data page used by the application
for DCR.
As the last step of the registration, the class must be added to the map value
MapForCustomerJourney.
1. If this is the first time that you create a new journey, copy the rule
MapForCustomerJourney from the CLM layer (PegaCLMFS-Work-CLM) into your
implementation layer. If this is not the first journey, the rule should already be
there.
2. Open the rule in the implementation layer and add an entry setting the new
journey type (for example, DocumentCollection) in the left column, and a reference
to the new class in the right column (for example,
D_AppExtension.WorkClass_DocumentCollection).
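For illustration only (the actual artifact is a Pega map value rule, not code), the registration behaves like a lookup from the journey type to the class reference registered through D_AppExtension:

# Illustrative sketch only: the MapForCustomerJourney map value rule
# behaves like a lookup from journey type to the class reference
# registered through D_AppExtension. Names follow the examples above.
map_for_customer_journey = {
    "DocumentCollection": "D_AppExtension.WorkClass_DocumentCollection",
}

def resolve_journey_class(journey_type: str) -> str:
    # Return the work class registered for the given journey type.
    return map_for_customer_journey[journey_type]

print(resolve_journey_class("DocumentCollection"))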
After creating the new class, you must create the three field value rules that register the
class as a journey type, with the following attributes:
For example, after creating a journey, you can create a first journey subtype to support
a periodic collection of documents. Create an additional field value rule to register a
new journey subtype. For additional field values, see the following table:
Users can trigger journeys on the existing customer through the Actions menu for
customer search results. The Actions menu lists all journey types and subtypes
applicable to that specific customer. The applicability of the journeys is driven by a set
of when rules, which when executed against the customer data, determine which of the
journeys should appear.
To add a journey type and subtype available in the Actions menu, create a new when
rule. For example, to make your journey available only to active customers, create a
when rule with the following parameters:
If you do not trigger your journey from the UI but create it through a background process (for example, a recurrent job or an external service call), create the when rule so that it always returns false.
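As a hedged sketch of the decision logic only (the actual artifacts are Pega when rules; the "status" property name below is hypothetical):

# Sketch of the applicability logic that the when rules express.
def journey_available(customer: dict) -> bool:
    # Make the journey available only to active customers.
    return customer.get("status") == "Active"

def background_only_journey(customer: dict) -> bool:
    # A journey created only by background processes never appears
    # in the Actions menu, so its when rule always returns false.
    return False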
In the same way that you determine when your journey type is available, you must
specify the conditions that make your journey subtype available. For example, to make
a journey subtype available only to your organization, create another when rule with
the following parameters:
You can also create journeys on the customer profile screen, known as the Customer
360 or Master Profile screen. On the customer profile screen, the Actions menu in the
top-right corner of the screen lists all journey types and subtypes applicable to that
customer.
1. Make a copy of the rule to your implementation layer. If this is not the first
journey you create, you might already have a copy available.
2. Create a new entry in the navigation rule for the journey subtype that you want to
add to the menu. For example, Periodic Document Collection.
3. Configure the visibility condition of the new entry with a new when rule that
determines when the journey subtype is applicable. The when rule must apply to
the class of your customer search case type. You can use the rule
CanCreateNewPrincipalSubJourney as a reference.
4. Configure the following actions associated to the Click event that you associate
with the new entry:
JourneySubtype: DocumentCollectionPeriodic
SetCustomerIDFromWorkContext: true
ClassName: UPFS-MyApp-Work-CLM-DocumentCollection
FlowName: pyStartCase
For more information on the available journey types and their subtypes, see Journey types and subtypes. To create a new journey subtype on an existing journey, complete the instructions in the previous chapter, Creating a new journey type:
Single application
Under this model, the application operates within a unified environment with a
unique implementation layer managing segments and locations. Cases, regardless
of segments, are stored in a shared database and distributed based on the
operational structure. Customer information and documentation are accessible
across segments and locations, including risk assessments. A financial institution,
such as uPlus, operates in both retail and commercial banking segments across
multiple locations. Under this model, the application serves all segments and
locations within a single environment.
Segregated implementation
If data storage constraints apply to the customer, the customer may consider a three-server model, distributed across various locations or data centers. While offering infrastructure flexibility, this setup limits data sharing between locations, creating data silos. This strategy can also be applied to distinct business segments. For example, a customer chooses to deploy separate instances of the application in Australia, the United Kingdom, and the US to comply with data residency regulations.
Hybrid implementation
The customer may explore intermediate solutions where a single server hosts
multiple applications catering to distinct business segments or locations. The
primary advantage of this approach lies in the ability to share customer data
among applications (e.g., risk profiles, documentation) while retaining the
flexibility to independently manage each application.
From a Pega Client Lifecycle Management perspective, a SoR is any external system hosting customer information, either required by Pega Client Lifecycle Management or collected by Pega Client Lifecycle Management and needed by other systems of the enterprise. Depending on the architecture, Pega Client Lifecycle Management will need to integrate with one or more SoRs to retrieve and persist customer data as required by the onboarding and maintenance journeys.
It is important to note that each financial institution will have different data categories
falling under different groups, something that will determine the different data-flows
and integration patterns required in the enterprise to support the Pega Lifecycle
Management function.
…external SoRs. For that purpose, Pega Foundation for Financial Services (PFFS) comes with a set of sample tables that can be used to read and write the data that Pega Client Lifecycle Management expects to exchange with a SoR. The following is the list of database tables and their content.
It is very important to understand that these database tables exist only to support demos and the initial phases of the implementation. They cannot be used in a production environment. Financial institutions are responsible for identifying the different pieces of information that should be managed and/or stored in external systems, defining the appropriate data flows and integrations, and overwriting the rules listed in the next section to make Pega Client Lifecycle Management point to the organization's SoRs instead of these tables.
• Data retrieval
• Data maintenance
Data retrieval
Pega Client Lifecycle Management makes use of customer data available in the SoR to initialize, enrich, and progress onboarding and maintenance journeys. Some of the scenarios in which existing customer data in the SoR is used are:
Scenario Description
Onboarding/maintenance of customers in SoR before CLM implementation: Many financial institutions already have a SoR with customer information by the time they implement CLM. On the day of the CLM implementation, customers already exist in the SoR but not in CLM. When a CLM user initiates a customer journey, the system automatically registers the customer in CLM (see Master Profile) and initializes it with the information coming from the SoR.
Onboarding/maintenance of customers created after CLM implementation: For customers that are new to the organization (i.e., they do not exist in the SoR), CLM collects some initial data during the onboarding journey and, at a certain point in time, sends the data to the SoR for synchronization. At the same time, when a CLM user initiates a new maintenance journey for an existing customer, the system retrieves the data from the SoR and uses it during case processing.
Adding a related party to a customer: While onboarding or maintaining a customer, new or existing related parties in the SoR can be added. When an existing customer is added as a related party, the data associated with the related party is retrieved from the SoR. When a new party is added, the data collected for that party is in turn sent to the SoR for future reference.
Customer search: The search of customers in Pega Client Lifecycle Management is done against the SoR instead of against its internal records (Master Profile). That gives financial institutions the chance to discover and manage customers that might not have gone through CLM yet.
In order to support these functions, financial institutions must modify the data pages below, which by default give access to the PFFS sample tables, and make them point to their SoR.
By default, the Pega Client Lifecycle Management application ships a few sample customers with the required information that the application captures while onboarding a customer. 'New Wave Energy Solutions' is one such sample customer.
New Wave Energy Solutions customer details, like the Customer ID (9912345999) and Tax ID Number (190819746), can be used to run the data pages below to understand and analyze the responses, and to make the appropriate changes so that similar data is returned when the data pages are overridden to point to a client-specific SoR.
• Inputs: Customer ID, Legal Identifier, and Fund Name. The sample fund 'Market defensive funds' with customer ID '5432100009', shipped with the CLM application, can be used to review the response sent by the data page and make changes to return similar data when the data page is updated to point to a client-specific SoR.
• Output: The data page retrieves the list of funds matching the input, with complete details from the SoR, including address and communication details.
5. Data page: D_CustomerSearch
• Usage: The data page is used in the customer search module.
• Inputs: Name, Customer ID, Tax ID Number, Phone Number, Customer Type, Zip Code, Email ID, and Contact ID. During a search, more than one input can be provided to the data page to filter the results.
• Output: The data page retrieves the list of customers matching the input, with complete details from the SoR, including address and communication details.
6. Data page: D_CustomerProspectSearch
• Usage: The data page is used in the customer search module to get the list of customers based on the search criteria.
• Inputs: Name, Customer ID, Tax ID Number, Phone Number, Customer Type, Zip Code, Email ID, Contact ID, and CustomerOrProspect. During a search, more than one input can be provided to the data page to filter the results. The input CustomerOrProspect decides whether an active customer record or a prospect is searched.
• Output: The data page retrieves the list of customers matching the input, with complete details from the SoR, including address and communication details.
7. Data page: D_GetCustomerBasicDetails
• Usage: Used while forming the related party relationship structure to get only the basic details of the customer or party required for the relationship structure.
• Inputs: Customer ID
• Output: The data page retrieves the list of customers matching the input, with very basic details for quick retrieval from the SoR.
8. Data page: D_CustomerShortDetails
• Usage: Used while forming the related party relationship structure to get only the basic details of the customer or party required for the relationship structure.
• Inputs: Customer ID
• Output: The data page retrieves the list of customers matching the input, with very basic details like Name, Customer ID, and Tax ID Type.
9. Data page: D_GenerateUniqueSampleCustomerID
• Usage: Used while creating a new customer ID when onboarding a new customer or adding a new related party. The generated Customer ID plays a central role in the Pega Client Lifecycle Management application: it is a unique identifier for a customer and is used for synchronization and retrieval of customer data from the SoR and other supporting database tables in the application. Implementation teams need to review the unique ID generation logic and make changes to comply with the standards that your financial institution follows for customer IDs.
• Inputs: None
• Output: The data page outputs a unique Customer ID. The Customer ID follows a sequence during creation.
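As a hedged sketch (Python, for illustration only; the record fields and the in-memory list stand in for your actual SoR integration), an overridden customer search is expected to AND-combine whatever inputs are supplied and return the matching records:

# Sketch of the search contract an overridden customer-search data
# page is expected to honor: any combination of inputs filters the
# result set.
from typing import Optional

def search_customers(sor_records: list[dict],
                     name: Optional[str] = None,
                     customer_id: Optional[str] = None,
                     tax_id: Optional[str] = None) -> list[dict]:
    results = []
    for record in sor_records:
        if name and name.lower() not in record.get("Name", "").lower():
            continue
        if customer_id and record.get("CustomerID") != customer_id:
            continue
        if tax_id and record.get("TaxID") != tax_id:
            continue
        results.append(record)
    return results

# Example: the sample customer shipped with the product.
sample = [{"Name": "New Wave Energy Solutions",
           "CustomerID": "9912345999", "TaxID": "190819746"}]
print(search_customers(sample, customer_id="9912345999"))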
Data maintenance
Data is persisted (synchronized) to the SoR at the end of the Enrich and Fulfillment stages. The process is orchestrated by the CustomerSynchronization flow, which prepares the data and persists it. Two different types of operations are performed during that synchronization: additions and updates, to reflect in the SoR the addition of new parties and the modifications to existing ones, and the removal of party information that might no longer be relevant.
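As a rough sketch of those two operation types (the real orchestration is the CustomerSynchronization flow; the functions below are hypothetical stand-ins for your SoR integration):

def sor_upsert(party_id, data):
    # Stand-in for your SoR integration (for example, a REST call).
    print("upsert", party_id, data)

def sor_delete(party_id):
    # Stand-in for your SoR integration.
    print("delete", party_id)

def synchronize(case_parties: dict, sor_parties: dict) -> None:
    # Additions and updates: every party on the case is written to the SoR.
    for party_id, data in case_parties.items():
        sor_upsert(party_id, data)
    # Removal: parties in the SoR that are no longer on the case are deleted.
    for party_id in set(sor_parties) - set(case_parties):
        sor_delete(party_id)

synchronize({"P1": {"name": "New Wave Energy Solutions"}},
            {"P1": {}, "P2": {}})   # P1 is upserted, P2 is deleted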
Additions/Updates
Pega Client Lifecycle Management uses the two rules below to add or update customer data in the PFFS sample SoR tables. Financial institutions are expected to overwrite the logic in these rules to make them point to their specific SoR.
Data removal
After making changes to customer or party data, there are scenarios where existing data in the SoR might no longer be valid. The data could become out of sync with the latest customer or party data on the Pega Client Lifecycle Management cases and Master Profile. In those situations, the data needs to be deleted from the SoR so that no stale data remains associated with the customer.
For example, consider a scenario where a new customer or party is being onboarded. The details of the customer or party that are collected during the Enrich stage are persisted to the SoR at the end of that stage, and the case progresses to the Due diligence stage. Then, a user can perform a case re-navigation and send the case back to Enrich, where changes to the party data can be made again. In those changes, new data can be added, and existing data can be updated or removed. At that point, the data available in the case will differ from the data available in the SoR. If any data was removed (for example, a party was removed), the system should ensure that it is deleted from the SoR as well. That way, it guarantees full synchronization between Pega Client Lifecycle Management and the SoR, and avoids data inconsistencies.
Pega Client Lifecycle Management uses the rule below to delete any unwanted
customer/party data at the SoR. Financial institutions are expected to overwrite the
logic to point to their specific SoR.
In a similar manner, when a customer journey has been abandoned for some reason, and customer or party data has already been synchronized to the SoR, the system needs to roll back the changes to the state they were in when the journey started. The rule below is used out of the box in Pega Client Lifecycle Management to perform this action. Financial institutions are expected to overwrite the logic to point to their specific SoR.
Introduction
Reference data in the application can change over time, with instances that were active becoming inactive. For instance, in the Exchanges list reference data type, which maintains an approved list of exchanges worldwide, the Chicago Stock Exchange is one of the approved exchanges. If this entry is delisted due to regulatory issues, then it should not be displayed as an option in the approved exchanges dropdown while collecting data for new customers. However, for already onboarded customers associated with the Chicago Stock Exchange, the entry should remain available so that the selection can be displayed in a read-only format.
While the reference table contains active and inactive records, their availability depends
on the user scenario. During data collection (editable forms), inactive records are
hidden, while for resolved cases (read-only forms), both active and inactive records are
displayed.
To view the complete list of reference data types used in the Pega Client Lifecycle Management and KYC application, see the Database table catalogue.
Reference data maintenance is performed using a soft delete mechanism. Soft delete,
also known as logical/virtual delete, is a mechanism where data is not permanently
removed from the system but is instead marked as deleted or deactivated. The data
remains in the database, but its status is changed to indicate that it is no longer active
or visible in the application. This helps in retaining historical information or when the
data is associated with other records and removing it completely could lead to data
integrity issues.
With soft delete, inactive data is filtered out of the reference data rather than removed. Each reference data type has a column in the database table that indicates whether a record is Active (valid) or Inactive (no longer applicable). The data pages corresponding to each reference data type retrieve data from the reference tables using the following parameters:
If the use case is to show only the active reference data records, update the data page invocation to pass ReturnOnlyActive set to True and the SectionReadOnly parameter as either null or 0.
If the use case is to show all reference data records, update the data page invocation to pass ReturnOnlyActive set to True and the SectionReadOnly parameter set to 1.
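The following sketch shows one plausible reading of these parameters, assuming that a read-only context (SectionReadOnly set to 1) returns all records so that resolved cases can still display inactive selections; verify the exact semantics against the product data pages:

# Sketch of the retrieval semantics described above (interpretation
# assumed, not taken from the product rules).
def fetch_reference_data(records, return_only_active=True,
                         section_read_only=0):
    if section_read_only == 1:
        return list(records)                     # read-only form: all records
    if return_only_active:
        return [r for r in records if r["Status"] == "Active"]
    return list(records)

exchanges = [{"Name": "Chicago Stock Exchange", "Status": "Inactive"},
             {"Name": "NYSE", "Status": "Active"}]
print(fetch_reference_data(exchanges))                      # editable form
print(fetch_reference_data(exchanges, section_read_only=1)) # resolved case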
Extensions
The data pages and report definitions corresponding to the following data types have
been updated to ensure that the parameters mentioned above are passed. If there are
any customizations related to these data types in the implementation layer, it is advised
that clients update the references to the data pages or report definitions accordingly.
After updating the reference data, you must update the in-flight cases utilizing that data to ensure they do not contain any inactive information. The system offers the following extensions to facilitate the upgrade of in-flight cases to the most recent reference data.
Application Configurations
After running the new application wizard, your application has most of the main
components required for basic functionality. However, you can perform a few
additional adjustments and configurations for a complete functional Pega Client
Lifecycle Management and KYC application.
Choose the segments your application will manage. For more details, see
Implementation strategy for CLMKYC. The base product comes with four different
application rules on which you can base your implementation:
PegaCLMFS
This is a segment-agnostic application. It does not contain any segment
specialization. All cases are run based on a generic journey.
PegaCLMFSCIB
This is a Corporate and Investment Banking (CIB) application that contains CIB-specific functionality and behaviors for cases created under this segment.
PegaCLMFSRet
This is a retail application that contains retail-specific functionality and behaviors
for cases created under this segment.
PegaCLMFSUnified
This is a multisegment application that caters to both Retail and Corporate and Investment Banking (CIB) functionality. It is applied to cases according to their segment configuration. Use this application rule if you want to make the most of the available segments.
To change the application that your new implementation is based on, complete the
following steps:
1. In the header of Dev Studio, click Application: <your application> > Definition.
2. In the Built on application list, replace PegaCLMFS with the application rule that
you select from the list above.
3. Click Save.
Result:
Sample operators
The sample data package in Pega Client Lifecycle Management and KYC includes default operators, called sample operators. You can either use the sample operators to understand the application or create customized operators to suit your implementation needs.
These sample operators are pre-configured for testing and demonstration purposes.
While they cannot access the custom implementation layer you might create for
specific needs, they can leverage the pre-configured layers in the application. This
makes them useful for running sample journeys or scenarios before your own
customized implementation is fully operational.
If you choose not to import the sample data package, you can create your own
operators within the appropriate access groups to match your implementation needs.
The following table provides details about the operators and their access groups.
Application operators
To access the new application created with the help of the application wizard, you must create operators. For that purpose, the system generates a set of access groups, all prefixed with the name given to the new application, for example, <app>:CLMFSSysAdmin.
The access groups assigned to application operators ensure that users have the
appropriate level of access and control based on their roles, such as administrators,
end users, or system integrators.
1. In the header of Dev Studio, click Configure > Org & Security > Organization >
Operators.
2. In the Operators tab, click New.
3. Create an operator in each of the following access groups:
The operators that you create give access to the main roles and system functions of the application, but they are not configured to use any specific operating structure. Operators might not have access to all the workgroups and workbaskets required for a fully functional deployment. This access must be granted later, once the operating structure of your organization is configured.
During the setup of the New Application wizard, the system duplicates access groups
that represent various user roles within the application. If you've created an application
named "MyApp," the following access groups are automatically generated:
By default, the system configures these access groups with new roles, inheriting from
pre-defined ones. This setup allows you to create custom privileges and security
settings for built-in resources. Besides the general configuration common to Pega
applications, two key configurations are crucial to prepare your operators:
Access groups
An operator must be associated with one of the access groups mentioned in the table above. While Pega supports linking operators to multiple access groups, the CLM portals cannot switch between groups; therefore, an operator can only access the group marked as default. For instance, if you want an operator to function as a manager of the Relationship Managers group, assign them the MyApp:CLMFSCIB_RM_Manager access group.
Organizational unit and work group
Connect your operators with the operational structure; see Configure your operating structure. Assign the operator to a specific organizational unit and link the work group to one of the established departments. For instance, if an operator is responsible for Relationship Managers at the Frankfurt booking entity, assign them to the UPFS-GM-EMEA-DE-UPFSFR organizational unit and the UPFS_GM_EMEA_DE_RelMgmt work group.
In the header of Dev Studio, click Configure > Org & Security > Organization > Operators to configure your operators.
The DCR mechanism involves reusing logic across different layers by copying and
modifying data as needed. This approach enables efficient and consistent
implementation of functionality in various parts of an application, ensuring seamless
integration with the DCR mechanism while configuring the necessary settings for your
implementation.
During the creation of the application, the system creates a new class and table to store
the work of the new implementation (class group). For example, if you named your
application UPFS, the new class group has the name UPFS-Work.
For example:
5. Click Save.
Result:
After the changes, the circumstanced case type appears in the Case Type
designer under the case type CLM Requirement.
Create field value rules within the newly created journey class in your application to
register a journey or sub-journey. In the base implementation layer, this class is named
PegaCLMFS-Work-CLM, while in your new implementation layer, it should be named
UPFS-Work-CLM. This registration process ensures the correct recognition of journeys or
sub-journeys in your application's context.
You can modify field values for customer journeys. By modifying these values, users can
align the application more precisely with their specific needs, enhancing the accuracy
and relevance of customer journey tracking and management.
Class: PegaCLMFS-Work-CLM
Field name: CustJourneyType
Field value: MaintainBusinessRelationship
4. Click Save.
5. Repeat steps 1 to 3 for the following field values:
Class: PegaCLMFS-Work-CLM-FulfillClusteredProducts
Field name: CustJourneySubtype
Field value: FulfillProductsByLocation
What to do next:
After you create your application, enable and extend your application's
configurable functionality and features to meet your business needs.
• Supporting infrastructure
• Configuring validations
The following key modules make up the related party management solution:
Data model
In an organization, related parties are governed by and are entitled to only those
privileges that their roles dictate. A related party can be associated with the customer in
more than one role.
For example, a party can be a beneficiary and at the same time can also be a primary
contact.
The solution allows you to add a party and list all the roles and their specific attributes within the same relationship, thereby making it easy to view and maintain the relationship.
Directional relationship
The system identifies two distinct relationships, Is or Has, that represent how a related
party is associated with the contracting party in a given role and relationship.
For example, if you add Y as a party to X, the relationship has the following view:
For example:
Party Y:
Is a Beneficiary of X
Is a Primary Contact of X
After you add a related party to a customer, it is essential to evaluate the related parties that the related party in question might have, so that the system carries out the required due diligence procedures and considers the associated risk. The solution represents such relationships on the related party grid by marking them as indirect.
When you add an existing related party Y to a contracting party X, the system navigates the entire relationship network. All related parties that are KYC significant and associated with Y or any of its related parties are brought in and associated with the contracting party. This logic is executed at each level until the system reaches the last related party in the network.
You can also manually add indirect related parties to a contracting party by selecting an
intermediary party from the list of related parties that are added and already
associated with the customer.
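As an illustrative sketch of that traversal (the data shapes and names are hypothetical; the real logic runs in the product's background processing), the walk visits each level, keeps only KYC-significant parties, and stops at a depth guard:

# Sketch of the recursive walk over the relationship network: every
# KYC-significant party reachable from the contracting party is
# collected as an indirect related party. The depth guard mirrors the
# ten-level default that the visualizer applies.
def collect_indirect_parties(party_id, relationships, max_depth=10,
                             depth=0, seen=None):
    seen = set() if seen is None else seen
    if depth >= max_depth:
        return seen
    for related in relationships.get(party_id, []):
        if related["kyc_significant"] and related["id"] not in seen:
            seen.add(related["id"])
            collect_indirect_parties(related["id"], relationships,
                                     max_depth, depth + 1, seen)
    return seen

network = {"X": [{"id": "Y", "kyc_significant": True}],
           "Y": [{"id": "Z", "kyc_significant": True}]}
print(collect_indirect_parties("X", network))   # {'Y', 'Z'}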
Enforced pairs
There are certain roles that have a corresponding implied role. Such implied roles are
called enforced pairs and can typically be represented in a pair of Is and Has roles.
When you add or update related parties to have a particular role, the system detects any implied roles and automatically adds them to the relationship. For example, party Y being a majority shareholder in party X implies that X is a subsidiary of Y. For the relationships of the enforced pairs, see the following example:
For example:
Party Y:
Is a Majority Shareholder of X
Has X as Subsidiary
Each role comes with its own set of attributes like percentage of ownership, KYC
significance, controlling nature, and so on. The terms of association of the related party
to a contracting party are a consolidation of all the attributes that each of these roles
may bring.
For example, a party Y can be a Minority Shareholder for Contracting party X with 10
percent ownership and may own 15 percent in X through its relationship with Z. The
party Y, therefore, becomes a 25 percent owner and may hence be evaluated to be KYC
significant or deemed controlling and be subject to due diligence.
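A minimal sketch of that consolidation, using the figures from the example above (the threshold is illustrative, not a product default):

# Sketch of ownership aggregation across relationship paths: party Y
# holds 10% in X directly and 15% in X through Z, so Y's consolidated
# ownership is 25% and may cross the significance threshold.
def aggregate_ownership(paths: list[float]) -> float:
    return sum(paths)

threshold = 25.0   # illustrative threshold only
total = aggregate_ownership([10.0, 15.0])
print(total, total >= threshold)   # 25.0 True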
User interface
Relationship visualizer
The related parties grid presents a quick yet detailed summary of all the related parties
associated with the customer and key attributes of the relationship such as type of
relationship, KYC significance, controlling nature, percentage of ownership, and status
of the related party.
The grid is also a central location from where you can add, update, or delete related parties. Intelligent background processing supplements each of these actions and, among many other tasks, performs the key functions of aggregating the attributes and building the relationship network by bringing into the case, or deleting from it, all the indirect parties that a particular related party implies.
The related parties grid is available throughout the Capture, Enrich, and Due Diligence stages through the Action menu. You can add or update related parties in any of the stages mentioned above, and the system makes the necessary adjustments to the due diligence processes as required.
Case orchestration
All related party cases, direct or indirect, are direct subcases of the main due diligence case for the contracting party.
Based on the roles, direction of relationship, and aggregated or derived attributes such
as Risk, AML CDD Profile, KYC Significance, and Controlling flags, related parties must
undergo various levels of due diligence. The application does not just handle and
maintain the due diligence on the related parties that are added upfront but also
adapts to the slightest change that might happen during the due diligence phase and
reacts by updating or lowering the level of due diligence or even withdrawing the due
diligence cases when they are no longer required.
For example, as the system evaluates the due diligence cases for the related parties, the cases may increase the risk of the contracting party, which, at run time, affects the due diligence levels for all the related parties. In another scenario, a change in the related party data through the related parties grid could render a related party non-KYC significant, in which case the corresponding due diligence case is withdrawn.
Supporting infrastructure
The out-of-the-box Pega Client Lifecycle Management and KYC 8.5 application version
introduces the possibility of adding multiple roles for a related party and adding
indirect related parties. The following data model infrastructure supports the
functionality.
While the preconfigured data mapped to the above page list is enough to cover most
regular cases provided out of the box, there might be some situations where the
Relationship Manager might require extra data for processing in the background or
displaying on the UI and, in those scenarios, the above data model infrastructure can
be used.
While the preconfigured logic for automatic mapping of attribute value covers most
regular cases, there might be some situations where the Relationship Manager might
want to set attribute values for more attributes. To add more attribute values, complete
the following steps.
• For example, if the Relationship Manager adds Majority Shareholder (MSH) role to
a party relationship, the system automatically adds the Subsidiary (SUB) role
accordingly.
• For example, if the Relationship Manager adds Subsidiary (SUB) role, the system
automatically adds Majority Shareholder (MSH).
While the preconfigured logic for adding the enforced pairs covers most regular cases,
there might be some situations where the Relationship Manager might need to add
more enforced relationship pairs. To add more enforced pairs combinations, perform
the following steps.
While the preconfigured logic for displaying the list of intermediary related parties
covers most regular cases, there might be some situations where the Relationship
Manager might need to override the automated logic. To update the logic, perform the
following steps.
Configuring validations
While adding a related party, the Relationship Manager typically collects basic data,
roles, and attributes. Certain validations are implemented in the product during this
process so that the data collected is consistent and appropriate.
While the preconfigured validation logic covers most regular cases, there might be
some situations where the Relationship Manager might need to override the validation
logic to add custom validations. The following rules can be configured to modify the behavior of the default validations.
While the preconfigured logic for related party completion logic covers most regular
cases, there might be some situations where the Relationship Manager might need to
override the automated completion logic. To update the completion logic, perform the
following steps.
When a party is added as a related party, limited data is enough to perform the different assessments (for example, risk, the KYC significant flag, the AML CDD profile, and so on). In these situations, retrieving the complete customer data from the System of Record (SoR) introduces an unnecessary performance overhead that, when working with a high number of related parties or with slow integration points, might have an impact on user experience.
The Pega Client Lifecycle Management and KYC comes configured by default to use the
same integration point in both situations. This configuration facilitates the
implementation of the application by reducing the number of integrations with the SoR.
However, if your organization wants to optimize the access to the SoR, it can use a new
integration to perform a light retrieval of customer data.
If your organization manages a high number of parties or uses a slow integration point
with the SoR, the process might have a significant impact on user experience in the
three points where the logic is used: case creation, manage related parties action, and
network visualizer. Some organizations optimize this process and use specific
integration points to retrieve all the data in a single access.
Data transform
LoadRelationshipNetwork: Use to retrieve a full list of related parties with StructureRecursiveQuickDB.
While the preconfigured logic for enriching the component provided related party data
typically covers most regular cases, there might be some situations where the
Relationship Manager might need to enrich with more data or enrich from a different
component that is not part of the Pega Client Lifecycle Management and KYC product.
The following rules can be configured to modify both default mapping behavior and
add custom mappings.
The visualizer comes preconfigured with default icons, color coding, filters, and data
that UI displays. The system provides some extension points that you can use to
customize them based on customer needs. If you need to make changes to these
extension points, review the following rules.
The visualizer can navigate down to the bottom-most node of the relationship network and present all the relationships on the UI. By default, the system is configured to navigate and present parties that are only ten levels deep from the contracting party. To configure the setting based on the customer's needs, perform the following steps.
While the preconfigured actions cover most regular cases, there might be some
situations where the Relationship Manager might need to view more related party data
or add more actions. The following rules can be configured to modify the default
behavior to display extra related party data on the related parties’ grid or add more
actions.
While the preconfigured logic for assessing this flag typically covers most regular cases,
there might be some situations where you may want to tune the automated
assessment. To update the KYC significance logic, perform the following steps.
4. In the Table tab, add new conditions or update existing ones to return the true value based on your custom logic.
5. Click Save.
Deemed controlling related parties are entities or individuals that are considered to
have major ownership or control over the customer, in the sense that they have a
powerful influence or control over the operations and decision-making of the customer.
Financial regulators may require disclosure of deemed controlling related parties, and
to conduct proper due diligence on them to ensure compliance with regulatory
requirements.
Parties are classified as deemed controlling based on the rule RelevantPartyDecision that applies to the PegaFS-Data-RelCodes class, which the system invokes while adding a related party.
While the preconfigured logic for assessing this flag typically covers most regular cases,
there might be some situations where the user might need to override the automated
assessment. To set the IsDeemedControlling to true or false, perform the following
steps.
The system manages both scenarios by classifying the changes as material or non-
material and performing the corresponding actions.
For a material change, the contracting party global KYC case is restarted from the
related party stage so that the related party cases can be orchestrated. The contracting
party global KYC case can wait for the related party cases to absorb the results of due
diligence on the related parties.
For a non-material change, the system loops through all the KYC types on the contracting party global KYC case and executes the refresh data transforms that are configured on them. For more information, see the Know Your Customer Engine on the Pega Client Lifecycle Management and KYC product page.
Related parties are assessed for nature of change based on properties listed in the
ChangeDeterminationFieldsMaterial or ChangeDeterminationFieldsNonMaterial rule that
applies to the PegaFS-Data-Party class. Any change to the values of the fields listed in
the corresponding rules results in a Material or a Non-material change for the related
party.
Similarly, the contracting party is assessed for the nature of change based on the rules PegaCLMFS-Work.ChangeDeterminationFieldsMaterial and PegaCLMFS-Work.ChangeDeterminationFieldsNonMaterial. By default, the system sets the contracting party to have a material change if any of its related parties has a material change or a KYC significant party is added or deleted, and a non-material change when any of the related parties has a non-material change.
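As a hedged sketch of this classification (the assignment of fields to the material and non-material lists below is invented for the example; the actual lists live in the rules named above):

# Sketch of the change classification: compare old and new party data
# over the two watched-field lists. Any difference in a material field
# makes the change material; otherwise, differences in the
# non-material list make it non-material.
MATERIAL_FIELDS = ["TaxIDNumber", "Nationality", "DateOfBirth"]  # illustrative
NON_MATERIAL_FIELDS = ["Phone", "Address"]                       # illustrative

def classify_change(old: dict, new: dict) -> str:
    if any(old.get(f) != new.get(f) for f in MATERIAL_FIELDS):
        return "Material"
    if any(old.get(f) != new.get(f) for f in NON_MATERIAL_FIELDS):
        return "Non-material"
    return "None"

print(classify_change({"Phone": "111"}, {"Phone": "222"}))   # Non-material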
Your implementation may require watching fewer or more fields for material or non-material changes based on customer needs. To update the configuration, perform the following steps.
Individual: First name, Last name, Middle name, Tax ID number, Nationality, Date of birth, Country of birth, Phone, Address
Organization: Full legal name, Legal ID, Phone, Entity type, Address
Flag Value
Assessed false
Source Case
ActionTaken Add, Update, or Remove
2. Invoke the data transforms below based on the action that needs to be performed.
Parameter Usage
The system encapsulates the logic in the PegaCLMFS-Work.UpgradeData rule and invokes it from the platform APIs PegaCLMFS-Work.pyUpgradeOnOpen and PegaCLMFS-Work.pyUpgradeOnPerform. These APIs ensure that the system performs an update when any assignment is opened or submitted. In addition, the update module is executed at the following points:
• Related party cases are resolved and data such as risk is propagated back up to the contracting party global due diligence cases.
• A surgical policy update is applied to the in-flight cases.
• Cases are queued for asynchronous processing.
If required, this module can be invoked from any additional invocation points that your implementation may need. The module is also designed with a few extension points that can be specialized to adapt the module's behavior to your business needs. If you need to make changes to these extension points, review the following rules.
The topics in this section describe the risk profile and scorecard rules to calculate the
risk.
The risk profile is set in Pega Foundation for Financial Services by using scorecards. A
scorecard contains the weighted values for each risk. For example, you can create a
scorecard rule to calculate customer segmentation that is based on age and income,
and then map particular score ranges to defined results.
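As an illustration of that idea only (the weights, contributions, and ranges below are invented for the example, not product values), a scorecard reduces each factor to a weighted contribution and maps the total score to a result:

# Sketch of a scorecard: each factor contributes a weighted value and
# the total score maps to a result through defined ranges.
def score_customer(age: int, income: float) -> str:
    score = 0
    score += 20 if age < 30 else 10          # age contribution
    score += 30 if income > 100_000 else 15  # income contribution
    ranges = [(0, 25, "Low"), (26, 40, "Medium"), (41, 100, "High")]
    for low, high, result in ranges:
        if low <= score <= high:
            return result
    return "Unknown"

print(score_customer(28, 120_000))   # High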
For more information about existing risk factors, see Customer risk assessment.
To add new or edit existing risk factors in your risk profile, complete the following tasks.
The risk engine is associated with the master profile of the customer. The risk engine is
triggered when you make changes to specific properties. If you extend the risk engine
with additional properties, map the customer data to the related data transform.
Create a declare expression for the new risk factor to trigger the calculation of the risk
profile.
Create a scorecard rule for the new risk factor to calculate, for example, customer
segmentation based on age and income and then map particular score ranges to
defined results.
Map the scorecard rule that you created for the new risk factor to either an individual
or an organization risk scorecard.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the Scorecard tab, click Add a row, and then add the necessary logic to
calculate the risk.
4. Click Save.
RelatedPartiesRisk
Calculates the risk based on related parties.
DurationOfRelationship
Calculates the risk based on duration of the business relationship.
BusinessCodeRisk
Calculates risk based on business code.
CountryRelatedRisk
Calculates the risk based on countries in which business is carried out.
ProductRelatedRisk
Calculates the risk based on related products.
CustomerRiskAssessmentForOrg
Calculates the aggregated risk for a commercial banking customer based on
all risk factors.
CustomerRiskAssessmentForInd
Calculates the aggregated risk for an individual customer based on all risk
factors.
CustomerRiskAssessmentForFund
Calculates the aggregated risk for a fund customer based on all risk factors.
BusinessCode
Calculates the business score value based on the business code.
InitialWorkUrgency
Assesses initial urgency based on several customer- and work-related
factors.
5. Case navigation
6. Funds specialization
The need to understand risk exists not just during the initial onboarding but over the lifetime of the client through to offboarding. Many events, such as periodic risk-based reviews, data changes, document expiry, and ownership updates, result in the need to reassess risk.
Due diligence can vary from relatively simple to highly complex, depending on global
regions, types of products offered, and market segments served. Any solution that
serves these use cases must be flexible based on the level of complexity a financial
institution needs to support to operate.
Due diligence can be divided into many categories and subcategories such as AML
(Global/Local/Regional), Tax (FATCA/CRS), or Product Regulatory (MIFID/FINRA). A
customer may be subject to one or more of these due diligence categories based on
various factors such as country of residence, selected products, jurisdictions where the
products are onboarded, related parties, and so on.
For example, a US-registered company, New Wave Energy, buys products in the US, France, and Australia and has John Woods as its majority shareholder (see the Due diligence categories image). During the onboarding process, the company attracts the following due diligence categories: some of them apply to the company itself and some to its KYC significant related parties (in this case, the majority shareholder). Each of these categories may contain one or more questionnaires, depending on the complexity of the due diligence category and the profile of the customer.
Based on its business complexity and operational structure, a financial institution may
have a single department for handling all the due diligence or different departments
may be dedicated to specific categories and subcategories. Most of the time, financial
institutions adopt a unique operational model for all business scenarios. However, in
some situations, financial institutions may want to drive the operational model based
on certain factors associated with the case. For example, they may want to involve
specialized departments only for investigations on their high-risk clients, while a
generic due diligence department still evaluates the low-risk clients.
The grouping and routing of due diligence work to those departments is managed in
Pega Client Lifecycle Management and KYC by using due diligence cases. Each case
generated by the application is routed to a specific department and contains one or
multiple questionnaires covering the due diligence categories managed by that
department.
Depending on the operational structure of the financial institution, the application can generate a single case with all the due diligence questionnaires or can generate multiple cases containing the questionnaires specific to each department. For example, a financial institution may opt to manage the due diligence of the scheme shown in the diagram above through a single case routed to a department where all due diligence is conducted. On the contrary, the financial institution may opt for a distributed approach where the application generates ten cases, one per subcategory, and routes each case as appropriate to different specialized due diligence teams.
By default, Pega Client Lifecycle Management and KYC provides three main due
diligence categories: Anti-Money Laundering (AML), Product Regulatory, and Tax. There
are two additional categories, Credit and Legal, that are provided as placeholders that
customers can extend in their implementation layer. The granularity and structure of
the cases that drive the processing of each of these categories can be configured in a
flexible manner based on your business needs. The configuration can be completed by
performing the tasks described in the following two sections.
Next topic:
AML CDD profile categorization
Standard
The most comprehensive mode, under which dedicated due diligence cases are created for each category and their subcategories when they apply. The case structure under this mode is shown in the following diagram:
Standard mode
The application creates one case per due diligence subcategory. For example, in the diagram above, the system creates one case to manage the AML of the contracting party (GKYC-1), one case for each of the related parties to manage their AML due diligence (for example, GKYC-2), and one case per local AML due diligence to be conducted (for example, LKYC-1). In addition, it creates two cases to manage product regulatory due diligence (REG-1, REG-2), two cases for tax due diligence (TAX-1, TAX-2), and two cases as placeholders for legal and credit due diligence (CRE-1, LEG-2). Each of these cases can be routed independently to different departments according to your business needs.
The mode is best suited for firms that have specialized departments for each due diligence category. It is, however, not required to have separate departments to make use of this mode. The mode can also be used to keep the questionnaires segregated yet assigned to the same department, or to different people within the same department.
Intermediate
Under this mode, cases are created for each due diligence category rather than for each subcategory. The case structure in this mode is shown in the following diagram:
Intermediate mode
Under this mode, the application groups under the same case all questionnaires
related to the same category and party. For example, all the AML due diligence for
the contracting party is managed under the same case (GKYC-1). The
questionnaires for product regulatory due diligence are all conducted under
another case (REG-1), and so are tax-related ones (TAX-1).
Simplified
A highly simplified case orchestration mode that creates a single case for all the due diligence categories and subcategories. The case structure in this mode is shown in the following diagram:
Simplified mode
Under this configuration, the system groups all AML, Product Regulatory, and Tax-
related due diligence under the same case (GKYC-1). This case can be routed to a
single department where the due diligence is conducted.
This mode can be utilized by institutions that handle all due diligence within a single department or by a single person, or that do not have separate departments.
You can configure the orchestration mode in three ways: at the application level, at the business segment level, or at the case level.
Application level
The same orchestration mode applies to all cases created in the system. The
configuration is controlled by the Dynamic System Setting clmfs/
DDCaseOrchestrationMode, which can take Standard, Intermediate, and Simplified
values. The value is read and available to the application through the data page
D_AppExtension (see the AppExtension_DDOrchestrationSettings data transform),
which can be customized to set the value dynamically.
Business segment level
The orchestration mode is set based on the business segment of the onboarding
case. The Dynamic System Setting clmfs/<business-segment>/
DDCaseOrchestrationMode, drives this behavior and accepts business segments
CIB, Retail or Unified. If not configured, the system uses the application-level
configuration.
Case level
The application and business segment level configuration can be overridden at
each case by using the extension point UpdateDDOrchestrationMode_Ext.
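As an illustration, and assuming the setting names exactly as documented above (the values are examples only), a configuration in which retail cases use the simplified mode while all other cases use the standard mode might look as follows:

clmfs/DDCaseOrchestrationMode = Standard
clmfs/Retail/DDCaseOrchestrationMode = Simplified

When the segment-level setting is absent, the application-level value applies.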
For more information on how these two mechanisms are configured and supported in
the system, see Applying KYC types to a case in KYC Engine.
Configuration mechanisms
Once the orchestration mode is configured, the next step is to present relevant
questionnaires within each due diligence case created. Therefore, it is important to
carefully configure the applicability of the KYC Types so that they appear in the right
cases. The KYC Core Engine provides two methods of defining the applicability of KYC
Types.
Standard Filtering
The applicability conditions of the KYC Type are defined on the KYC Type rule. The
KYC Type determines when it is applicable or not.
Applicability Matrix
The applicability is defined in a centralized decision table that provides a
comprehensive view of the KYC types and the ability to specify the applicability per
case type. In addition, the KYC applicability matrix brings a tremendous
performance improvement over the standard filtering mechanism.
You choose between the two applicability mechanisms by configuring the Dynamic System Setting PegaKYC/DDSmartFilter. A true value enables the applicability matrix, and a false value activates the standard filtering applicability mechanism.
The Pega Client Lifecycle Management and KYC application relies on the Pega Know Your Customer Regulatory Compliance application for its KYC questionnaires and comes pre-configured with a KYC Applicability Matrix that drives the applicability of the KYC types provided with that product in the three supported orchestration modes.
• KYC locators
The locators have two key attributes: Purpose and Key. The Purpose is presented as the
category, and the Key is analogous to the subcategories. The following table includes
some examples of locators that are used in Pega Client Lifecycle Management and KYC.
AML US AML US
AML EU AML EU
Regulatory US Reg US
Regulatory EU Reg EU
When the application creates a due diligence case, it initializes the locators that
determine the KYC Types managed by that case (DDPurpose and DDKey properties).
These locators are compared against those provided in the KYC Applicability Matrix to
display the appropriate KYC questionnaires to the case.
[Table: example DDPurpose and DDKey combinations per orchestration mode; the AML purpose with keys such as US, AU, SP, and EU; the Regulatory (REG) purpose with keys US and EU; and the TAX purpose with keys FATCA and CRS.]
In the intermediate mode, a single Global KYC/AML case is created to address all the
global, regional and local regulations. Thus, while setting the DDPurpose and DDKey on
the case, all the keys that apply to the customer under the AML category are set in the
Global KYC/AML case itself, thereby driving the applicability of the corresponding KYC
Types. The same applies to the simplified mode, where all the relevant purposes and keys
are populated on the single Global KYC/AML case that is created to cover all the due
diligence categories and subcategories.
You can update the applicability of the KYC types provided with Pega Know Your
Customer Regulatory Compliance or configure the applicability of your custom KYC
types by copying and updating the KYCApplicabilityMatrix decision table and the
GetKYCTypeLocators data transform in your implementation layer classes. Alternatively,
you can configure Standard Filtering applicability. However, unlike Standard Filtering,
the KYC Applicability Matrix lets you configure the applicability without overriding the
KYC Types, so the matrix is the preferred approach.
KYC locators
This article guides an implementer through the configuration of locators to enhance
the Applicability Matrix provided by Pega Client Lifecycle Management.
For example, a financial institution might be responsible for managing KYC Types for
AML, Tax and Regulatory needs. All KYC Types can be presented in a single assignment
to a single user, but it is likely that the organization prefers to group the KYC Types by
category, and route them to different users. Each of these groups can, in turn, be
further subdivided into smaller groups. For example, AML KYC Types can be divided
into Global AML, US AML, UK AML, and so on.
To meet these operational needs, the system provides the ability to define locators, an
abstract construct that facilitates the dynamic grouping of KYC Types without having to
make significant changes to the orchestration layer.
The application can use locators to group related KYC Types under the same case. The
locators represent the logical groups into which the KYC types are organized. You
must provide the system with two pieces of configuration information to make use of
locators.
The first piece of information that you must provide is the definition of the locators that
will be managed by a certain case. For example, if your application has a case type
specifically created to manage AML KYC Types, you must configure that case to register
the locator that pulls only the KYC Types that are related to AML.
The locators in a case are implemented using two properties: DDPurpose (which can be
thought of as a group), and DDKey (which represents a subgroup). These two fields are
of type page group, with the first field hosting pages of class PegaKYC-Data-
TypeLocator-Purpose, and the second field hosting class PegaKYC-Data-TypeLocator-Key.
The clipboard structure that supports locators is as follows:
.DDPurpose(AML)
   .DDKey(US)
   .DDKey(EU)
   .DDKey(CA)
.DDPurpose(REG)
   .DDKey(US)
   .DDKey(EU)
   .DDKey(CA)
.DDPurpose(TAX)
In this example, the case that is being configured manages the KYC Types created for:
AML for the US, EU and CA jurisdictions, for regulatory purposes (REG) for the same
jurisdictions, and for tax purposes (TAX) in all jurisdictions.
This structure must be programmatically populated on the case page by specializing the
GetKYCTypeLocators data transform. You can define your own locators and the logic to
populate them, provided that the structure matches the one shown above.
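For orientation, the same structure can be rendered as a small Python mapping. This is a sketch only; the real structure consists of clipboard pages of the classes named above.

dd_locators = {
    "AML": ["US", "EU", "CA"],
    "REG": ["US", "EU", "CA"],
    "TAX": [],  # no keys: every key under TAX is considered active
}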
Once the case has been configured to use certain locators, you can refer to them from
the applicability matrix. The KYCApplicabilityMatrix template rule includes a locator
column that can be used for that purpose. For example, if you want a certain KYC Type
to apply in cases configured for AML Global (DDPurpose=AML and DDKey=Global), you
can add the following condition to that column: @KYCUtils.LocatorExists("AML","Global").
The LocatorExists utility function checks against the locators defined for the case. Both
parameters (DDPurpose and DDKey) must pass the check. The function traverses
through the locator structure of the case to determine if the passed DDPurpose and
DDKey are present on the case. If the parameters are found, the engine evaluates the
rest of the conditions configured in the row, and applies the KYC Type to the case if all
conditions are satisfied.
The function follows an inclusive approach: a purpose with no keys defined on
the case implies that all the keys for that purpose are active. For example, in the
structure defined above, the case takes all KYC Types for TAX regardless of the
jurisdiction (no keys are associated with TAX). With that configuration, the system only
checks the first parameter, and the second parameter is ignored.
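A minimal Python sketch of this check, assuming the case locators are held in a mapping like the one shown earlier. This is assumed logic, not the product implementation of LocatorExists.

def locator_exists(dd_locators, purpose, key=""):
    # The purpose must be registered on the case.
    if purpose not in dd_locators:
        return False
    keys = dd_locators[purpose]
    # Inclusive approach: a purpose with no keys means all keys are active,
    # so the second parameter is ignored.
    if not keys:
        return True
    return key in keys

dd_locators = {"AML": ["US", "EU", "CA"], "REG": ["US", "EU", "CA"], "TAX": []}
print(locator_exists(dd_locators, "AML", "Global"))  # False: Global is not an AML key
print(locator_exists(dd_locators, "TAX", "US"))      # True: TAX has no keys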
Standard Filtering follows the same paradigm as the KYC Applicability Matrix.
Applicability is defined in the decision tables, but with one major difference: the
decision tables are associated with the KYC Questionnaire in the KYC Type rule form as
applicability condition. As each KYC type can have a different applicability condition
configured, the system must locate and execute the applicability condition each time it
has to determine the applicability of a KYC Type, even if all of them refer to the same
decision table. The Standard Filtering decision tables can potentially be executed as
many times as there are KYC Types in the application.
Pega Client Lifecycle Management and KYC provides three decision tables that
represent the three due diligence categories that the application supports. Each of
these decision tables is associated with the KYC Types that fall under that category. The
decision tables are described in the following table:
Category    Decision table
AML         AMLRegulationMasterAppliesWhen
Tax         TaxRegulationMasterAppliesWhen
Each decision table includes one row per KYC Type, with the factors driving applicability
listed as columns. The factors include customer type, customer risk, entity type, and
jurisdiction code, and even allow the execution of a complex when condition.
The most important columns in the tables are the KYC Type name and the KYC Case
type. The KYC Type name column defines the KYC Type for which the applicability
condition is evaluated; a single row drives the applicability only for that type.
The KYC Case type column defines the case type for which the applicability is
evaluated, in essence representing the subcategories.
The tables also include columns such as the KYC Case type (for example, LKYC for
local/jurisdictional cases), the Customer Type (0=Individual, 1=Organization, 2=Fund),
Related Party, and Enhanced DD, as well as when conditions such as
@evaluateWhen("IsForeignFI"), @evaluateWhen("IsCorrespondentAccount"),
@evaluateWhen("IsUSAMLFFIApplicable"), @evaluateWhen("AnyProductLifeInsurance"),
@evaluateWhen("EntityIsAMutualFund"), @evaluateWhen("GKYCInsuranceCompany"), and
@evaluateWhen("IsJurisdictionUS"). Note: the two-letter ISO country code can be
replaced with that of any supported jurisdiction.
For example, a row might determine that a KYC Type applies when the booking location
is Canada and EDD_Applies is true. The EDD_Applies value is driven by the AML DD
Profile of the party; when certain questions are answered with certain values, they
indicate that the party is a Politically Exposed Person (PEP).
If you are using Pega Know Your Customer Regulatory Compliance in your
implementation, you can change the applicability of the standard KYC Types by copying
the decision tables into the implementation layer and updating the conditions or
columns based on specific business needs. However, if the implementation does not
use Pega Know Your Customer Regulatory Compliance, or demands the creation of new
KYC Types, then applicability needs to be driven by creating equivalent logic,
including when rules, decision tables, and decision trees, and associating them with the
KYC Types.
The application provides a new case type, Tax (PegaCLMFS-Work-Tax), to support the new
intermediate case orchestration mode. This new case type is configured as a child case
of the Client lifecycle management case type. To ensure seamless support for the
intermediate mode, you must perform the following two steps:
1. Create a Tax case at the implementation layer and register the new case class in DCR.
For more information on registering a new case class, see Updating Dynamic Class
referencing.
2. Modify the Client lifecycle management case type in your implementation layer to
include this newly added class as a subtype, and add the PropagateDataToTax data
propagation rule.
The case orchestration modes are backward compatible, and you can switch to any
of the new case orchestration modes at any time. The system ensures that a change
in the case orchestration mode takes effect only for newly created cases, and that all
in-flight cases from your previous version retain their existing case structure.
However, ensure that the applicability of the KYC types is adjusted based on the
mode that you select.
Though the Standard Filtering applicability mechanism continues to work after the
upgrade, rewriting the applicability logic using the new KYC Applicability Matrix
provides the previously stated benefits of a comprehensive view, easier
maintenance, and increased performance.
Note: As you update the Client lifecycle management case type in your
implementation layer, the system automatically copies the associated stage
rules from the PegaCLMFS-Work-CLM class to the corresponding
implementation class. However, the new copies of the stage rules in the
implementation layer can mask their specialized versions in the child
classes in the product. Hence, you must review the stage rules in the product
classes that extend from PegaCLMFS-Work-CLM and manually copy all the
stage rules to the corresponding implementation layer classes.
To provide an efficient process, the organization’s systems must identify each customer
party, make ongoing assessments, and ensure that the required level of due diligence
activities is carried out. Any business rules that drive the identification and assessment
processes must be configurable.
The system categorizes every party as either a contracting or a related party. Specific
logic is then applied to determine an Anti-Money Laundering (AML) Customer Due
Diligence (CDD) profile across Exempt, Simplified, Full, or Enhanced levels of due
diligence.
For example, the Enhanced Due Diligence profile for related parties (RPEDD) indicates
enhanced due diligence applied to related parties.
At the time of creating the KYC cases for a given party, Pega Client Lifecycle
Management and KYC uses the AML profile to determine whether the cases must be
created (SDD, FDD, and EDD profiles), or not (NoDD).
During the creation of a new case, the system automatically transfers the AML profile
information to the KYC engine to configure the applicable KYC types for that case, and
to show and hide questions according to the profile. For example, an SDD form is a leaner
version of the FDD form and skips more than half of the FDD questions. For more
information, see KYC due diligence profile.
The system performs the profile calculation during the synchronization of the Master
Profile. The system reacts dynamically to any changes in the data that might trigger
changes in the AML profile, for example, a change in risk. By default, the value is based
on risk, but you can take other variables into consideration.
For example, the orchestration engine automatically ensures that the appropriate level
of due diligence is carried out using eScreening results and the current customer risk
assessment. The logic is fully configurable to meet your business needs.
If changes are required in the logic that calculates the AML profiles (for example, a
new data variable is required, or the values are different), you can overwrite the rules
and implement the necessary logic.
In addition, if you need to use another variable instead of the type of party to
categorize the logic across multiple tables, you can overwrite the
AMLCDDProfilePartyRouter decision table.
The Pega Know Your Customer Regulatory Compliance application is the source of
regulatory intelligence for Pega Client Lifecycle Management and KYC and contains KYC
types configured to manage the same AML profiles that Pega Client Lifecycle
Management and KYC does (CPSDD, RPSDD, and so on).
The system configures the AML profiles by using the KYC Due Diligence Profile Suite rule
type that is available in Know Your Customer Core Engine. All profiles are recorded
under an instance of the Default rule and placed under the PegaKYC-Data-Type-Policies-
Global rule type.
Although the AML profiles represent the same business scenarios, the profiles under
the PegaKYC-Data-Type-Policies-Global rule have different codes than those that the Pega
Client Lifecycle Management and KYC gives. The mapping between the Pega Client
Lifecycle Management and KYC AML profiles and the Pega Know Your Customer
Regulatory Compliance AML profiles is done in the PegaFS-Work.DDProfilesMap decision
table. If the AML profiles of the application are changed, this rule must be modified
accordingly. For more information, see Adding a new AML CDD profile to the mapping
rule.
Add the new AML CDD profile to the decision table. The application allocates the new
AML profile according to the business logic in the modified decision table.
4. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
5. In the decision table, modify an existing row or create a new one if new logic is
required.
6. In the Return column, modify the decision table to either include or edit a value,
for example, MediumCDD.
7. In the CDD AML Profile rational column, enter the name of a field value that
describes the new AML profile value.
8. Click Save.
If the new logic requires data that is not available in the PegaFS-Data-Party class,
make a copy of the data transform with the same name as the decision table (in
this case AMLCDDProfileContractingOrg), with the Init suffix (for example,
AMLCDDProfileContractingOrgInit), and then initialize the data.
Create a decision table if the logic of your new AML CDD profile does not fit well in any
of the existing decision tables.
5. Click Save.
6. Create a data transform in the PegaFS-Data-Party class with the same name as the
decision table and add the suffix Init.
For example:
Enter AMLCDDProfileContractingDivInit
7. Add any logic to the data transform that might be required to initialize the data
that the decision table uses.
Leave the data transform empty if no logic is required, for example, all required
data is available in the PegaFS-Data-Party class.
8. Make a copy of the AMLCDDProfilePartyRouter decision table and add a row with
business logic that triggers the execution of the new table.
Ensure that the new row is reachable as expected because the decision table
stops executing after the first applicable row is found.
You can start referring to the new AML CDD profile from the appropriate KYC types.
Open the KYC type form and create a local profile configured as inheritance. For more
information, see KYC due diligence profile.
1. In the header of Dev Studio, click Launch portal > KYC Rule Manager Studio.
2. In the KYC Rule Manager Studio, select and open the KYC type that you need to
modify.
3. In the Type Definition tab, in the Profiles list section, click Add Local Profile.
4. In the Profile ID field, enter a new unique profile identifier, for example CPMDD.
The unique identifier can differ from what the modified decision tables return.
5. In the Description field, provide a description.
6. From the Active Based On drop-down list, select the Inheritance value.
7. Click Submit.
8. In the Item Definition tab, open the items that you want to show or hide based on
this new profile, and configure them accordingly.
9. Save the KYC type.
10. Optional: If the majority of your KYC types are impacted, overwrite the Default KYC
DD Profile Suite that Pega Know Your Customer Regulatory Compliance provides and
add the profile there instead, by performing the following actions:
a. In the navigation pane of Dev Studio, click Records.
b. Expand the Technical category and click KYC Profiles Suite.
c. Open the rule with name Default that is applied to the PegaKYC-Data-Type-
Policies-Global rule type. Make a copy of the Default rule in your
implementation ruleset.
For more information, see Copying rule or data instance.
d. Click Add Profile and enter the profile data as described in step 4.
e. Save the new copy of the Default KYC Profiles Suite.
Result:
The new profile is available to all of the KYC types under the PegaKYC-Data-
Type-Policies-Global class.
Modify the DDProfilesMap mapping rule to translate the profile, as defined in Adding
the new AML CDD profile to the decision table, to its definition in the KYC Due Diligence
profile that you created earlier.
1. In the header of Dev Studio, in the search box, enter DDProfilesMap and select
the decision table that is applied to the PegaFS-Work class.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the Decision Table tab, add a row to the decision table by clicking the Insert
Row button.
4. In the Customer AML Profile column, enter the value returned by the AML
decision table that you created in Adding a new AML CDD profile to the
decision table.
For example:
Enter MediumCDD.
5. In the Return column, enter the value that is entered in the Local Profile Suite or
in the Default Profile Suite, for example, CPMDD.
6. Click Save.
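Conceptually, the mapping maintained by this procedure behaves like a simple key-value translation. A Python sketch using the example values from these steps:

dd_profiles_map = {
    "MediumCDD": "CPMDD",  # AML decision-table value -> KYC DD Profile Suite identifier
}
print(dd_profiles_map["MediumCDD"])  # CPMDD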
1. In the header of Dev Studio, in the search box, search for
AMLCDDProfileContractingOrg and select the decision table that is applied to
the PegaFS-Data-Party class.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the Table tab, add a row to the decision table by clicking the Insert Row button.
4. In the Organization Entity Type column, specify the new entity type.
5. In the Client Overall Risk column, specify the risk factor associated with the new
entity type.
6. In the Return and CDD AML Profile rational columns, specify the level of due
diligence that the entity requires.
7. Click Save.
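For illustration, a row added by this procedure might carry values like the following. The entity type, risk, and rationale values are hypothetical placeholders; MediumCDD reuses the example from the earlier procedure.

new_row = {
    "OrganizationEntityType": "<new entity type>",    # placeholder
    "ClientOverallRisk": "Medium",                    # placeholder
    "Return": "MediumCDD",
    "CDDAMLProfileRational": "Medium due diligence",  # placeholder description
}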
The following data transforms support the creation of local and regulatory cases:
PropagateDataToLocalKYCRegulatory – Used to propagate the data that is required
during the creation of local regulatory cases.
PopulateLocalCasesList – Used to prepare the data based on which local and
regulatory cases are created.
PopulateLocalCasesListData – Used to create regulatory cases for organizations and
individual customers.
PopulateLocalCasesListDataForFunds – Used to create regulatory cases for funds.
To collect additional information, you can create a new subcase based on existing KYC
due diligence subcases. For example, you can add a new due diligence section,
Miscellaneous, to the Due Diligence stage that contains a single subcase to collect
Crown Dependencies and Overseas Territories (CDOT) regulation information.
For more information about how the existing case is implemented, see Due diligence
case creation.
4. In the Label field, enter a short description or a title for the new record.
5. In the Class Name field, enter PegaCLMFS-Work-Misc-CDOT.
6. Click Create and open.
7. Click Save.
To include work types for the CDOT case type that you created, add the subcase to the
parent case in the case type rule. In the case type form, add a data propagation data
transform to propagate data from the child to the parent class.
The new Miscellaneous case type requires new case-specific work parties in the
pyCaseManagementDefault rule.
Create a flow to prevent the subcase from completing until the parent KYC cases are
complete.
Note: Open and check KYC, Legal, or Tax due diligence flows for
examples of how to create your flow.
Create an activity that checks for a valid work party role. If the party role is available on
the case, then the activity checks for the default workbasket that is associated with that
work party and assigns the assignment to that work party. If the work party role is not
available on the case, then the case is routed to routing_error@clmfs, and the system
sends an error message.
For example, if the SSManager role is passed as a parameter, then the activity rule
checks the pyWorkParty(SSManager) property rule for a routing workbasket name and
assigns the assignment accordingly. If pyWorkParty(SSManager) is not available, then the
assignment is routed to routing_error@clmfs.
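The following Python sketch illustrates the routing behavior just described. It is assumed logic, not the actual ToCasePartyWB activity, and the workbasket name is a hypothetical example.

def route_to_case_party_workbasket(work_parties, role):
    # If the role is present on the case, route to its default workbasket;
    # otherwise route to the error workbasket and report the problem.
    workbasket = work_parties.get(role)
    if workbasket:
        return workbasket
    print(f"Error: work party role {role} is not available on the case")
    return "routing_error@clmfs"

case_parties = {"SSManager": "ss_manager_wb@clmfs"}  # pyWorkParty pages, simplified
print(route_to_case_party_workbasket(case_parties, "SSManager"))  # ss_manager_wb@clmfs
print(route_to_case_party_workbasket(case_parties, "Reviewer"))   # routing_error@clmfs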
1. In the header of Dev Studio, in the search box, enter ToCasePartyWB, and then
select the activity rule that is applied to the PegaCLMFS-Work-Tax class.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the Applies to field, enter PegaCLMFS-Work-Misc.
4. Click Create and open.
5. Edit the conditions for the when rules as necessary.
6. Click Save.
Create a data transform to populate a list of subcases that are initiated from your new
due diligence subcase.
Case navigation
Pega Client Lifecycle Management and KYC uses the Enrich stage to capture the core
data that drives the due diligence activities that are generated as cases in the Due
diligence stage. In the Due diligence stage, you can change this data by using
the Update core information action, which moves the main case back to the Enrich
stage to capture the desired data and then re-initiates the stage accordingly.
When the case moves back to the Enrich stage, the system checks whether users are
actively working in due diligence cases. If so, the system displays a list of the active
cases that must be closed. The main case does not navigate back to the Enrich stage
until all active due diligence cases are closed and unlocked.
When all cases are closed or resolved, the system advances the case. The system shows
a confirmation message that explains the impact that moving to the Enrich stage has
on KYC cases that have not completed the review step. The temporary policy memory
storage mechanism collects the KYC types and type-associated data in those cases and
reuses them when returning to the Due diligence stage. By default, the system
deletes policy memory older than 30 days (you can configure this threshold by updating
the PolicyMemoryCleanupInDays dynamic system setting). When the corresponding KYC
cases are resolved, the system deletes the KYC types that are stored in the policy
memory and synchronized with the policy profile. For more information, see the Know
Your Customer Engine on the Pega Client Lifecycle Management and KYC product page.
Once the user proceeds, the system withdraws all due diligence KYC cases, and the
main case navigates back to the Enrich stage. In the Enrich stage, only the necessary
steps are available and reused. You can update all data categories as each journey
allows. The Monitor document collection and Business sponsorship
approval steps reflect your changes to the data. For example, if you add a product
from a jurisdiction in a new region, the user is prompted for additional documents and
the system generates a Business Sponsor Approval case for the new region.
After the Enrich stage is completed a second time, the case can then proceed to the
Due diligence stage again, leading to a recalculation of the appropriate due
diligence activities. The created due diligence cases are placed in the correct
assignments, based on the state of the corresponding due diligence case when the
system withdrew it. The STP mode configuration drives this behavior of the due
diligence case.
The STP mode has three possible values: NOSTP, REVIEWSTP, and FULLSTP. These mode
values are determined based on the completion of the KYC types and the status of the
original KYC case when the system withdraws it (this status is captured in the policy
memory when the onboarding case navigates back to the Enrich stage). The modes
behave as follows:
FULLSTP
The FULLSTP mode implies that the case is automatically resolved. A KYC case uses
this mode when all the KYC types in the case are complete and approved.
REVIEWSTP
The REVIEWSTP mode moves the KYC case to the review assignment. A KYC case is
set to this mode if its corresponding case was at the review step before navigating
back to the Enrich stage.
NOSTP
The NOSTP mode stops the case at both the Data collection and Review
assignments. This mode is activated when the original case was at the Data
collection assignment when it was withdrawn. This mode also overrules the other
two modes if the KYC types on the case are incomplete.
These modes are not just limited to case navigation, but also apply to the KYC case
flows.
During onboarding and maintenance, banks need to capture data from customers. This
data can be both required for general use and essential to determining which due
diligence activities must occur. If you need to update core driver data later in the Due
diligence stage, the system refreshes this driving logic and updates the new activities
accordingly.
The functionality is designed with a few extension points that can modify the behavior
of the module to meet your business needs, for example, to recognize the STP modes.
Funds specialization
The topics in this section describe a dynamic system setting that enables you to control
which tax or due diligence cases get created while managing fund managers and funds.
Previous topic:
Case navigation
3. In the Dynamic System Settings tab, search for and select the clmfs/
SpinoffRegulatoryDDCasesForFunds dynamic system setting.
4. In the Value field, enter a value that meets your business needs:
• To create the regulatory cases (LKYC) for each fund selected for that
particular journey, enter Unique.
• To create the regulatory cases (LKYC) only for the fund manager, enter
Inherited.
• To customize the creation of the regulatory cases, enter the name of a valid
when rule, for example, TaxCaseCreationWhenRuleEvaluation.
Note: If the result of the when rule is true, the default value is Unique.
If the result is false, the default value is Inherited.
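A small sketch of the three accepted configurations; the setting name and values come from the steps above, and the dictionary form is illustrative.

spinoff_setting = {"clmfs/SpinoffRegulatoryDDCasesForFunds": "Unique"}
# "Unique"           - create LKYC cases for each fund selected for the journey
# "Inherited"        - create LKYC cases only for the fund manager
# "<when rule name>" - for example, TaxCaseCreationWhenRuleEvaluation:
#                      true resolves to Unique, false resolves to Inherited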
Perpetual KYC
Automatically generate the necessary due diligence activities for customer events.
Financial institutions (FIs) must be able to manage the high number of customer-
related events that occur across their clientele every day. The impact of these events
can range from insignificant to greatly increasing risk to the business or requiring
legally mandated actions and reporting. Example events that can be generated from a
variety of internal or external systems include a change of address from Pega
Customer Service, a full name change of an individual from a third-party data provider,
or an update in beneficial ownership for an entity.
The FI must ensure that every event is received, processed, and actioned in a reliable
way that is appropriately audited and can be reported upon. Different events have
different business and regulatory needs based on the event types, event subtypes, and
their content. The resulting processing and actions for different events vary in
areas such as prioritization, level of interaction required, or complexity of work. For
example, a simple mailing address update may trigger a review process that can be
completed without human interaction (STP), whereas a Legal Name – Update may
require a set of manual actions, like a re-screening of the customer or the completion of
additional due diligence.
Perpetual KYC (PKYC) provides a framework to manage the reception and processing
of events. It seeks to reduce unnecessary customer interactions or back-office work
where appropriate, while helping the FI remain compliant with internal business rules
and external laws and regulations.
From an architectural perspective, Perpetual KYC utilizes the Event Driven Architecture
(EDA) for the reception and audit of events, as well as to implement the process that
will result in the creation of the Customer Review journeys that will address the specific
business and regulatory needs of the different event types.
This document seeks to outline how PKYC ensures that the correct due diligence
activities are carried out in direct response to specific customer-based events, and how
the framework can be configured to support different events and business needs.
Reference events
The event subtype Legal Name – Update will be used throughout the following
sections to illustrate the different functional behaviors that can be implemented under
the Perpetual KYC framework and how those can be configured.
Throughout the following sections, this document refers to some key terms that should
be understood upfront. Some of them represent Event Driven Architecture entities and
concepts and are listed here for your convenience (see Event Driven Architecture for
more details). Others are terms introduced by Perpetual KYC itself.
• Event Type – Events can be organized in event types and subtypes according to
their nature and needs. The event type is the top level of that hierarchy of events.
For example, an organization may decide to group all the customer data changes
under a generic event type Confirmed Customer Data Changes (CCDC), while
having all the events that represented unconfirmed data changes under a group
Unconfirmed Customer Data Changes (UCDC).
• Event Subtype – The event subtype is the second level of the hierarchy of events.
Along with the event type, it uniquely identifies a specific type of event and
determines the payload and therefore the data that comes with the events under
that subtype. For example, within an event type of Confirmed Customer Data
Changes (CCDC), an organization may need to manage two different subtypes of
events: Legal name – Update (LGN-UPD) and Residential address – Update (REA-UPD).
Each event subtype has its own data and its own processing options.
• Event Qualifier – Event subtypes can be further qualified upon their reception to
drive different behaviors. For example, a Registered address – Update event may
contain a different country than that of the previous registered address of an
organization. The financial institution may associate different risks in doing
business with that different country and drive the due diligence behavior
accordingly.
• Process - A sequence of logic to be applied to the events upon reception based on
their type and subtype. For example, PKYC Event reception or PKYC bundle
preparation.
• Domain - A domain represents a logical group that the processes can be grouped
into, typically associated to one application and security context (access group).
• Channel - The interface through which events are submitted. A JSON/REST API is
provided by default with the Event Driven Architecture, but this can be extended
to other integration mechanisms such as KAFKA or SOAP.
• Event Producer – A system that generates and sends events to the EDA channel.
There can be multiple and very different event producers depending on the
organization’s architecture and business needs. In the case of Perpetual KYC,
these systems are likely to be related to the management of customer data, for
example, CRM, Screening, third-party data providers, and so on.
• Validation – Validation to ensure that the inbound events are correctly structured
and contain the necessary minimum fields for their processing. It is implemented at
two levels. A preliminary validation at EDA checks the presence of basic attributes
like the event type or subtype. A more granular validation at the Perpetual KYC level
conducts additional checks, for example, in a Legal name – Update, that the
customer ID provided actually refers to an organization.
• Enrichment – The process of bringing in additional data related to the event from
other systems so that further processing can continue. It is also articulated in two
phases: one executed at the EDA level, where generic data can be brought in to drive
the EDA process, and a Perpetual KYC-specific one.
• Bundling – The ability to group together, for tactical purposes or efficiency
gains, all those events to be actioned in a similar way. The bundling of events
is executed after a configurable delay from the event reception time, which
opens a time window that gives time to other events of similar
characteristics to arrive and be part of the same bundle. Bundles result in
the creation of new Customer Review journeys or, if there are already
Customer Review journeys in-flight for the customer, in their merge (see
Merging) or in their scheduling for later processing after the existing journey
is finished.
• Merging – When a new bundle of events is created and there is an in-flight
Customer Review journey for the customer, the system can decide, based on
configuration, whether to wait until the completion of the existing journey to
create a new one, or to merge the events of the new bundle into the existing
Customer Review journey.
Process overview
The overall process for EDA and PKYC
The following diagram represents the main elements in terms of data, modules and
processes that take part in the Perpetual KYC solution.
• Event producers – The events are generated by the event producers and sent to
Perpetual KYC through an available EDA channel.
• Event reception – The EDA processes in charge of the reception of the events,
their grouping under bundles, and the creation and update of Customer Review
journeys.
• Customer Review – Customer journey that handles the appropriate due diligence
for the received events.
Event reception and Customer Review are explained in detail using the end-to-end
Legal name – Update use case, one of the three use cases shipped as part of the solution.
The primary contact for Hawthorne Investments has contacted their UPlus Bank
relationship manager to notify them that the organization’s legal name has changed to
Platinum Advisory Inc. The relationship manager completes the legal name update
using the organization’s CRM system and concludes the call.
The due diligence activities appropriate to this customer change must now be carried
out in a timely fashion to ensure that UPlus Bank has an up to date understanding of
the risk of doing business with this customer. For that purpose, the CRM system
produces a Legal name – Update event that requires the following tasks to be executed,
most of which are handled in the resulting Customer Review journey in Pega Client
Lifecycle Management:
• Risk calculation - Recalculate the risk assessment and AML DD Profile value of
the customer based on the updated data. This will be carried out automatically.
• Document collection - KYC Analyst to collect new documentation to confirm the
new legal name – For example, the updated formation documents of the entity.
• Screening and adverse media - Automatically execute screening and adverse
media searches on the new legal name for the KYC Analyst to review and assess.
This is necessary to assess if the new name represents any higher risk to UPlus
Bank via links to Politically Exposed Persons (PEPs), presence in Sanctions lists, or
subject of Regulatory and Law Enforcement actions, as well as presence in
Adverse Media sources.
• Targeted due diligence - KYC Analyst to answer a small set of targeted due
diligence questions specific to the change of legal name event. For example,
analysts should provide a reason for the change and any supporting evidence that
may be required.
Event reception
The full process related to event reception.
The following section outlines the process conducted under the Event Driven
Architecture engine, from the event submission, receipt, and processing to the point
where the Customer Review journey is created.
Event Submission
The event producer – in this case, the CRM system where the name change was
performed – submits an event request for a Legal name – Update to the Event Driven
Architecture (see REST/JSON channel). The payload of the event will look similar to the
following:
{
  "ReferenceKey": "CustomerID",
  "ReferenceValue": "158001",
  "EventType": "CCDC",
  "EventSubtype": "LGN-UPD",
  "ExternalID": "9473856385",
  "Source": "Customer Service",
  "Data": {
    "Description": "Legal name change event",
    "Details": "Details of legal name change",
    "NameType": "Legal Name",
    "PreviousName": {
      "LegalName": "Hawthorne Investments"
    },
    "NewName": {
      "LegalName": "Platinum Advisory Inc"
    }
  }
}
This request contains a range of information that will be used by EDA to correctly
receive the event and ensure the correct treatment is applied.
As per the example above, the request includes, but is not limited to, a reference to the
customer that the request is for (see ReferenceKey and ReferenceValue), an event type
and subtype (see EventType and EventSubtype), a name that identifies the event
producer (see Source), as well as the actual data payload containing the legal name
change.
The event type and subtype provided with the payload are used by the EDA to find the
configuration associated with that specific event type, which drives the steps described
below.
Base validation
Events can be configured to have an initial validation performed at the EDA level. In the
specific case of Legal name – Update, this base validation ensures that the mandatory
fields provided with the event are present.
Base enrichment
Events can be configured to be enriched upon reception at the EDA layer. In the specific
case of Legal name – update, there is no enrichment logic associated. If, for any reason,
the enrichment process fails, the event is marked with a status of Failed and made
available to the administrators at the EDA Operations console.
For more details about the configuration of this enrichment step, see Creating and
registering a new event type.
Process validation
Upon reception of the event, the system can perform some additional data validations.
In the specific case of Legal name – Update, the system checks that the customer that is
referred by the event reference key and value is of type organization. If the validation
fails, the Perpetual KYC process run on the event is marked with a status of Failed and
made available to the administrators at the EDA Operations console.
For more details about the configuration of this validation logic, see Configuration of
the Perpetual KYC module.
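As an assumed-logic sketch of this process validation: the ReferenceValue and customer data reuse the sample payload shown earlier, while the customer lookup and its Type attribute are hypothetical.

def validate_legal_name_update(event, customers):
    # The referenced customer must exist and be an organization.
    customer = customers.get(event.get("ReferenceValue"))
    return customer is not None and customer.get("Type") == "Organization"

customers = {"158001": {"Type": "Organization", "LegalName": "Hawthorne Investments"}}
event = {"ReferenceKey": "CustomerID", "ReferenceValue": "158001"}
print(validate_legal_name_update(event, customers))  # True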
Process enrichment
Upon reception of the event by the EDA processes created for Perpetual KYC, the
system can perform some additional data enrichment. For example, a change in
entity type for an organizational customer must be enhanced to determine whether the
new entity type is riskier than the previous one. The specific case of Legal name – Update
does not require any specific enrichment, and the process therefore continues.
For more details about the configuration of this enrichment logic, see Configuration of
the Perpetual KYC module.
Bundle preparation
To drive efficiency, bundles allow for multiple events for a specific customer to be
handled under the same Customer Review case. Events are usually bundled together
based on similar types of processing needs.
For example, customer data changes already confirmed in the financial institution's
System of Record do not require that a user explicitly accepts the data.
The Customer Review journey can start directly by asking the user to conduct the
due diligence that applies to the type of event(s) received (for example, targeted due
diligence or screening). All events that follow that same processing model and that can
be processed together under a single Customer Review case can be grouped under the
same bundle type.
Another example could be customer data changes triggered by a third-party data
provider, where the Customer Review journey should start by asking the user to review
the changes and explicitly accept them for the organization. Events associated with that
type of change should be managed under a different bundle type.
When an event is received, Perpetual KYC determines the type of bundle associated to
the event and the associated bundle delay (see Creating and registering a new bundle
type). With those two pieces of information, it takes the following steps:
• It checks whether there is already an open bundle in the system for the same
customer and the same bundle type, for example, a bundle created by a previously
received event associated with the same bundle type and customer that has not been
processed yet.
• If there is no open bundle, the system creates a new one for the given customer
and bundle type and schedules its processing according to the delay specified in
the event configuration. For example, if the event is configured to have a delay of
one hour, the bundle process is scheduled for an hour later.
• If, on the contrary, there is an open bundle, the system checks whether the delay
time configured for the event is shorter than the time to the execution of the bundle.
If it is, the bundle schedule is changed according to the shorter delay time.
Otherwise, the bundle schedule remains unaltered. For example, if the event is
configured with a delay of one hour but the open bundle was initially scheduled
for a time that is still two hours away, the bundle is rescheduled so that it is
executed after an hour.
The diagram below shows an example of how the reception of different events for the
same customer and bundle type drives the creation, scheduling, and execution of the
bundle.
The first point in the timeline represents the arrival of an event configured to use
bundle CCD1 with a delay of 12 hours. The second point shows the reception, two hours
later, of an event also configured to use CCD1 and with a delay of 4 hours. This makes
the system reschedule the bundle created initially so that it is processed 4 hours
after the reception of the second event. The third point represents the execution of the
bundle, which collects the two events waiting for it and puts them under the
appropriate Customer Review journey.
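The following Python sketch reproduces the scheduling arithmetic of this example. It is illustrative only, not the product scheduler.

from datetime import datetime, timedelta

def schedule_bundle(current_schedule, received_at, delay_hours):
    proposed = received_at + timedelta(hours=delay_hours)
    # Create a new schedule, or pull the existing one forward if the new
    # event's delay expires earlier; otherwise leave it unaltered.
    if current_schedule is None or proposed < current_schedule:
        return proposed
    return current_schedule

t0 = datetime(2024, 1, 1, 8, 0)
schedule = schedule_bundle(None, t0, 12)                          # first event: T+12h
schedule = schedule_bundle(schedule, t0 + timedelta(hours=2), 4)  # second event
print(schedule)  # 2024-01-01 14:00:00, four hours after the second event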
Bundle execution
The first thing that a bundle does when it is executed is collect all the events that are
waiting for it, that is, events received for the same customer and configured for the
same bundle type. Then, it determines whether there is already a Customer Review
journey in flight for the same customer and acts accordingly: if there is no in-flight
journey, a new Customer Review journey is created; if there is an in-flight journey and
the configuration allows it, the events of the bundle are merged into that journey;
otherwise, the bundle will wait until the Customer Review journey is completed and
then will be executed again.
Following the example use case, when the change of legal name arrives for
Hawthorne Investments, the process checks for an active Customer Review case for that
customer. No such case exists, so a new case is created, and the Perpetual KYC EDA
process completes. The event is marked as Resolved at EDA.
Customer Review
The Customer Review case that processes customer changes.
The Customer Review journey ensures that the required due diligence activities are
carried out based on the specific event or events included within the case.
When events are transferred into a new or an existing Customer Review case, there is
an event adoption logic that is executed. The logic mainly consists of:
• Data mapping – The data coming with the payload of the event is mapped into
the Customer Review journey as required. For example, the event that represents
the change of name of Hawthorne Investments has been classified as a confirmed
customer data change (CCDC), which means that the change has already been
verified by a reputable source – a trusted third-party data provider or an employee
of UPlus Bank – and that the change can be implemented without additional
approvals. That being the case, the mapping logic associated with this event type
maps the new name of the organization into the customer data available in the
Customer Review, which displays the new name from that moment on.
• Actions preparation – As explained in subsequent sections, each event
can trigger specific actions in the Customer Review journey. The preparation of
those actions – for example, the invalidation of previously obtained e-screening
results – is performed during the adoption of the events. Although Perpetual
KYC supports only certain specific actions, customers can add their own custom
logic to meet their specific needs.
Once the event adoption is completed, the Customer Review journey begins. In
normal circumstances, the case will have all the data required to meet the
organization's requirements and will proceed directly into the Due Diligence stage,
where, if required, the first manual steps will be waiting. If the case is able to conduct
all due diligence automatically, it then proceeds to Fulfillment and subsequent
closure.
Coming back to the example use case, the Legal name – Update event for Hawthorne
Investments has been configured with a few associated actions that prevent the
journey from proceeding automatically beyond Due Diligence. The next sections describe
the actions and what is expected from the user.
Documentary Requirements
For example, according to the out-of-the-box configuration, the reception of the Legal
name – Update event for Hawthorne Investments must force the re-triggering of all the
requirements where the name of the organization is critical. The screenshot below
shows the content of the Requirements gadget after the event has been received: all
the requirements that were previously satisfied appear as complete, except the one to
collect the Formation documents. This specific requirement must be completed to
provide documentary evidence of the new legal name. The analyst must open it and
complete it.
Events can also be configured to influence the behavior of the Screening and Adverse
media functionality in the Customer Review journey (see Screening and adverse media).
For example, during the original onboarding and any subsequent periodic reviews,
UPlus Bank will have carried out screening activities on the name Hawthorne
Investments. This was necessary to check whether the customer represented a high risk
to do business with. The system will have automatically searched a variety of sources.
However, the update of the customer's name to Platinum Advisory Inc represents a
potential new risk for UPlus Bank. New screening and adverse media checks must be
conducted against the new name. The out-of-the-box configuration for Legal Name –
Update is such that the system automatically creates new cases to conduct this due
diligence. The screenshot below shows the two new cases created for that purpose.
Events can be configured in Perpetual KYC to trigger Targeted Due Diligence (TDD).
Under that configuration, the system adds an additional KYC type (Global – AML – Entity
TDD) to those KYC Types applicable during the AML due diligence conducted under the
Global KYC of the contracting party. The content of the TDD KYC Type varies depending
on the events that triggered the creation of the Customer Review journey. The name
also varies based on the party type of the customer, for example, Global – AML –
Individual TDD for a customer of type individual.
For example, the event Legal Name – Update is preconfigured to trigger Targeted Due
Diligence. When an event of this type is received, the system takes note, and when the
case reaches the Due Diligence stage of the Global KYC, it adds the new TDD KYC Type
to the case.
When the KYC analyst opens the case, they will be presented with a questionnaire
covering a small set of questions related to Legal Name – Update. This will be in addition
to the relevant questionnaires completed during onboarding or potentially updated
during other customer reviews.
Completion
After all the steps described above are completed, the Customer Review journey moves
into the Fulfillment stage and executes all the related wrap-up activities including, in the
case of Legal Name – Update, the propagation of the new name of the organization into
any external system that might require it, such as the System of Record. All the
necessary due diligence will be considered completed and the Customer Review case
automatically resolves.
The previous sections of this document described the functional paradigm that
Perpetual KYC introduces. They also referred to the three sample events that are
provided out of the box and that can be used as a reference by financial institutions to
implement a comprehensive Perpetual KYC solution. The following sections describe
the steps to configure processes and events for Perpetual KYC.
The following sections explain how to configure new events so that they can be added
to the solution. Although the existing sample event Legal name – Update is used as
reference throughout these sections, the steps listed below are applicable to all new
events. These are instructions to configure new events, not to make the existing sample
event work – no additional configuration is required for that.
The process to configure new events can be organized in two main phases: the
registration of the event type at the EDA engine, and the configuration of the Perpetual
KYC process for that event type.
All the configurations can be performed from the EDA Configuration landing page
available at Dev Studio (Configure >> Financial Services >> Event Management >>
Configuration), which contains a series of tabs that host different types of
configurations.
Note: The following sections refer to basic EDA concepts and resources that
should be well known before proceeding with the Perpetual KYC configuration
and therefore the reading of Event Driven Architecture is highly recommended.
This default domain can be used to receive and process events within your
implementation. You just need to create a new access group equivalent to
PegaCLMFSCIB:EDA, point it to your implementation application, set up the right
roles and associated configuration, and modify the Perpetual KYC EDA domain to refer
to it.
After this configuration is implemented, the two EDA processes created for Perpetual
KYC – one for the processing of events and a second for the processing of bundles –
will be launched within the context of the new access group and therefore of your
application.
Creation and registration of a new event type within the EDA engine.
Once the EDA domain has been configured, it is time to create the infrastructure to
support the new event type. The Event Driven Architecture documentation contains all
the details needed to complete that configuration, from the creation of its base class,
the definition of its data model and validation rules, to its registration at the EDA
engine.
You can also use the configuration of the sample event Legal name – Update (outlined
below) as reference. The configuration is available at the EDA Configurations landing
page (see Configure >> Financial Services >> Event Management >> Configuration >>
Event Registry).
These are the most relevant attributes to be provided during the event registration:
• Event name – The end user-facing name for the event. For example, Legal name –
Update. This name will appear at the EDA Configuration and Operations landing
pages, and also at the events gadget that will be made available within the
Customer Review journey to access the events.
• Type – A short set of characters to identify the high-level type of the event. Each
organization can maintain their own taxonomy of events and use their own event
types and subtypes to organize and manage their events. The sample events
shipped with Pega Client Lifecycle Management are all within an event type CCDC
(Confirmed Customer Data Change), which represents changes made to customer
data already committed into the System of Record of the organization and where
Pega Client Lifecycle Management will only be used to conduct the associated due
diligence. Other types of events that could have been considered are events
triggered due to data changes in 3rd party data providers or events triggered due
to new matches in external periodic screening processes.
• Subtype – A short set of characters that identifies the subtype of the event. For
example, LGN-UPD for Legal name – Update. The subtype, along with the event
type, determines the exact type of event that is being registered.
• Classification – Whether the event that is being registered represents a single
event or a bundle for multiple events. In this scenario where a new event is being
registered, this value should always be set to Event.
• Description – Full description of the event. This description is just for
documentation and administration purposes, and it is not used at runtime.
• Payload configuration class – Class that defines the data model of the event and
therefore the payload of the event message that will be received and parsed at
EDA.
• Enrichment data transform – Name of the data transform rule that enriches the
event with the additional data necessary to process it. For example, in
Legal name – Update, the enrichment logic retrieves the existing full customer
name based on the customer reference number provided with the event. It is
important to note that this enrichment process is executed at the EDA level – it
runs every time the event is received, regardless of the processes that might
subsequently be executed on the event – and should therefore not contain any
Perpetual KYC-specific logic. If Perpetual KYC-specific logic is required, it can
be configured within another data transform registered later in the Perpetual KYC
event configuration.
• Validation data transform – Name of the data transform rule that contains the
validation logic for this event. For example, the data transform associated with Legal
name – Update enforces that the mandatory fields provided for the event are
present. As with the enrichment data transform, this logic is executed at the EDA
level and should not contain any process-specific logic. Logic specific to Perpetual
KYC can later be configured within the Perpetual KYC event configuration.
• Processes to execute – All Perpetual KYC events should be configured to be taken
through the Perpetual KYC – Event process that is shipped out-of-the-box. If there
are additional EDA processes that the event needs to go through, they can be
configured along with the Perpetual KYC one.
After this configuration, the EDA engine will be able to receive the registered event
through one of its channels, parse it, validate it, and route it to the Perpetual KYC
process.
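For orientation, the registration described above for the sample event might be summarized as follows. The type, subtype, classification, and process name come from the text; the entries marked as placeholders are hypothetical.

event_registration = {
    "EventName": "Legal name – Update",
    "Type": "CCDC",                       # Confirmed Customer Data Change
    "Subtype": "LGN-UPD",
    "Classification": "Event",            # always Event for single events
    "Description": "Update of the legal name of an organization customer",
    "PayloadConfigurationClass": "<payload class>",            # placeholder
    "EnrichmentDataTransform": "<enrichment data transform>",  # placeholder
    "ValidationDataTransform": "<validation data transform>",  # placeholder
    "ProcessesToExecute": ["Perpetual KYC – Event"],
}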
Creation and registration of a new bundle type within the EDA engine.
All the Perpetual KYC events are managed through bundles (see PKYC Event - Bundle
preparation). The different types of bundles that will be managed by the application
need to be defined upfront and registered in the system as events of type Bundle. This
will allow the system to maintain a catalogue of bundle types and refer to them from
the Perpetual KYC configuration that will be used later in this process.
By default, Pega Client Lifecycle Management comes with one type of bundle created for
the management of events of type CCDC/CCD (Confirmed Customer Data Change).
Organizations can use this default bundle type or create their own bundle types to
group and manage events in different manners.
For example, an organization can decide to bundle together all the events that are
related to screening under a new bundle type called Screening and identify that bundle
type in the system with type/subtype CSCR/Bundle. This configuration would give the
organization the ability to manage all the screening-related events that could be
received in a given time window under the same Customer Review journey, a much
more effective approach from an operational perspective than creating multiple
independent journeys and managing them in a sequential manner. In addition, it would
give the organization the opportunity to specialize certain behaviors of the Customer
Review journey based on the new bundle type. For example, if the organization wanted
the journey to stop at an initial step and review all the screening changes there before
taking any other action, a simple check on the bundle type that triggered the Customer
Review case would suffice to drive the process.
The following is the configuration of the default bundle type CCDC/CCD that is shipped
with Pega Client Lifecycle Management and that can be used as a reference for new bundle
types (see Configure >> Financial Services >> Event Management >> Configuration >>
Event Registry).
• Event name and description – Short and long description of the event that
represents the bundle. Used only for administration purposes.
• Type/subtype – Type and subtype given to the bundle type. No restrictions are
applied on these fields, so it is up to each organization to make use of them to
uniquely identify event types in a meaningful manner.
• Classification – When registering a bundle type, the classification should be set to
Bundle. Events with this classification will later be available for selection within the
Perpetual KYC configuration during the configuration of the bundling and delay
options (see Bundling and delay).
• Payload configuration – All bundles must point to the out-of-the-box class
PegaFS-Data-EDA-EventPayload-Bundle. That class defines the data model that
the Perpetual KYC engine will need to generate and track bundles. Configure
EDADisplayDataPayload as the display section.
• Processes – All bundles must be configured to trigger the execution of the out-of-
the-box EDA process Perpetual KYC – Bundle. This process is the one in charge of
retrieving associated events, detecting in-flight Customer Review cases, creating
new cases, and so on.
After this configuration has been completed, the Perpetual KYC engine will be able to
refer to the new bundle type (see Bundling and delay).
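For illustration only, the hypothetical Screening bundle type described above could be
registered with values along these lines. The registration itself is done through the
Event Registry forms; the JSON is just a compact way of showing the settings:
{
  "EventName": "Screening bundle",
  "Type": "CSCR",
  "Subtype": "Bundle",
  "Classification": "Bundle",
  "PayloadClass": "PegaFS-Data-EDA-EventPayload-Bundle",
  "DisplaySection": "EDADisplayDataPayload",
  "Processes": ["Perpetual KYC - Bundle"]
}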
The Perpetual KYC-specific configuration of each event can be made from the Custom
process configuration tab of the EDA Configuration landing page (Configure >> Financial
Services >> Event Management >> Configuration), where a list of all Perpetual KYC
events is maintained. Each line in the table contains identifying information along with
a summary setting for each of the configurable areas.
Any new event type registered at EDA and configured to be routed to the Perpetual KYC
process must also be registered in this list. The Configure new event button, available at
the bottom of the list, can be used to register new event types.
The configuration details of each of the entries can be accessed through the View and
Configure options, available in the contextual menu to the right of each entry. This gives
access to the five main groups of configurations that are described in the subsequent
sections of this document.
General settings
The screenshot below shows the general settings configured for the Legal name –
Update event.
• Enrichment data transform - Used for Perpetual KYC event-specific data
enrichment. In the case of Legal name – Update, it is empty, as all the required
event enrichment was already done at the EDA level (see Creating a
new event and associated rules).
• Validation data transform – Used to maintain Perpetual KYC specific validations.
For example, the event Legal name – Update has been configured with a data
transform that ensures that the customer reference number exists in the system
of record.
• Repeatability settings – This configuration determines how the Customer
Review journey that will eventually manage the event behaves when it contains
multiple events of the same type and subtype. The three options are outlined
below (see the sketch after this list).
◦ Use all occurrences - Information and data from all events of the same type
and subtype will be present after bundling and transferring them into a
Customer Review journey case. This configuration can be used when
multiple events of the same type and subtype represent different episodes.
For example, if an event represents a transaction, all the events of the same
type should be kept and managed, as one transaction event does not
invalidate the previous transactions. The Customer Review journey therefore
should contain all the transactions that might have been received.
◦ Use most recent occurrence only - Only information and data from the most
recent event will be used. This configuration can be used when the reception
of an event invalidates previous events of the same type and subtype that
might have been received before. For example, if a customer changed the
email address twice in a very short period of time and the two events fall
under the same bundling window, there is no point in processing the initial
change and only the second one should be considered for due diligence.
◦ Use Data Transform to determine behavior – For complex scenarios where
advanced logic is required, developers can implement the desired custom
behavior in a designated data-transform rule.
• Event data mapping data transform – When an event is transferred into a
Customer Review journey for its subsequent processing, it is likely that the data of
the event payload needs to be mapped into the internal data model of that
journey. For example, a Legal name – Update event will require that the previous
customer’s name loaded when the journey is initialized is overwritten with the
new name received with the event. The data transform rule that can be specified
in this attribute is the one in charge of that data mapping.
• Custom configuration – During the event adoption process, some organizations
might need to implement additional logic on top of the one provided out-of-the-
box. That logic can be implemented in a data transform and referenced from this
piece of configuration.
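As a sketch of the Use most recent occurrence only option, suppose the following two
events of the same type and subtype arrive within the same bundling window. The
EML-UPD subtype and the Data content are invented for the example. Only the second
event would be carried into the Customer Review journey, as it invalidates the first:
[
  {
    "EventType": "CCDC",
    "EventSubtype": "EML-UPD",
    "ExternalID": "E-001",
    "Data": { "NewEmail": "j.doe@example.com" }
  },
  {
    "EventType": "CCDC",
    "EventSubtype": "EML-UPD",
    "ExternalID": "E-002",
    "Data": { "NewEmail": "jane.doe@example.com" }
  }
]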
The bundling configuration allows Perpetual KYC events that are to be processed in a
similar manner to be bundled and transferred into a single Customer Review journey.
• Bundle name – Reference to the bundle type that will manage the events of the
given type and subtype. For example, the Legal name – Update event is configured
to be processed as part of the default bundle type PKYC Bundle. See Creating and
registering a new bundle type for more details.
• Initial execution delay – Length of the bundling time window that opens
upon the reception of the event. The window length can be set in days, hours, and
minutes. If, when the event is received, there is no bundle of the same type already
open, the system creates a new one and schedules its execution using this initial
execution delay. If there is already an in-flight bundle of the same type,
the system uses the delay set in this configuration to readjust its schedule, leaving
it as it was before the event arrived or moving its execution forward. See PKYC
Event - Bundle preparation for more details.
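A worked example with invented timings: assume the delay is configured as 4 hours.
An event received at 09:00 opens a new bundle scheduled to execute at 13:00. If a
second event of the same bundle type arrives at 11:00, the system reapplies the 4-hour
delay and either leaves the bundle scheduled for 13:00 or moves its execution forward
to 15:00, depending on the readjustment behavior. Expressed as an illustrative
configuration fragment (the property names are hypothetical):
{
  "BundleName": "PKYC Bundle",
  "InitialExecutionDelay": { "Days": 0, "Hours": 4, "Minutes": 0 }
}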
Screening and adverse media configuration for the Perpetual KYC module.
The screenshot below shows the available settings related to Screening and Adverse
media cases created by the Customer Review case.
Both the Screening and Adverse Media settings support the following working modes:
An event like Legal name – Update, for example, immediately invalidates the previous
screening results and requires that both screening and adverse media checks are
carried out again, this time on the new name.
The reception of certain events might require the execution of targeted due diligence.
The configuration below allows users to determine whether the event in question
should trigger due diligence or not. For example, the screenshot below shows how Legal
name – Update has been configured to trigger targeted due diligence.
As soon as one of the events in the Customer Review journey has this configuration
active, a new KYC Type in charge of managing targeted due diligences
(Global_AML_Entity_TDD or Global_AML_Individual_TDD) is made available and displayed
along with the other AML KYC Types that might apply (for example, CDD/EDD).
This new KYC Type has been configured for sample purposes. It has an item group for
each of the sample event subtypes that are shipped out-of-the-box. For example, the
event Legal name – Update has a correspondent item group at that KYC Type that is
visible when an event of this type is present in the Customer Review journey (see rule
G_ETDD_LegalNameUpdate).
Organizations can decide to implement targeted due diligence using this default
mechanism – new item groups can be created under the existing Global_AML_Entity_TDD
type and made visible based on the presence of certain events in the case – or,
if they have more complex needs, implement their own mechanisms where the use of
different KYC Types might be required.
Documentary requirements
The reception of certain events might require that previously satisfied documentary
requirements are triggered again. For example, a documentary requirement that was
satisfied in the past as a proof of identity is likely to be invalid after a change of legal
name.
The Documentary requirements tab of the configuration allows users to determine which
documents are not valid anymore and which requirements and requirement sets need
to be retriggered. The configuration below is the one provided for the sample Legal
name – Update.
In addition to the list of documents to be invalidated, the system also allows users to
select an Invalidate all associated requirements option. With this alternative
configuration, the system invalidates not only the documents that were listed, but also
all the requirements where those documents might have been used for their satisfaction.
As soon as an event is transferred into a Customer Review journey, all this configuration
is applied, and the documentary requirement engine is executed.
As explained in the introduction of this section, each of the event types configured for
Perpetual KYC has an entry in the Perpetual KYC configuration list (see Configure >>
Financial Services >> Event Management >> Configuration >> Custom process
configuration). Each entry in that list defines how that specific event type behaves when
taken through the Perpetual KYC process. However, there might be situations where an
organization expects a different behavior for events of the same type. For example, an
organization might decide that repeating the screening is not required for a low-risk
customer, whereas it is for a high-risk one.
In this kind of situation, organizations can make use of the specialization mechanism
that Perpetual KYC provides. This mechanism allows a single event
type to have different configurations based on what is called the event qualifier. The
qualifier is nothing but a property (EventQualifier) that can be set on the event during
the EDA enrichment stage. There are no restrictions on the values that this qualifier can
take or on how this specialization mechanism can be used. It is up to each organization
to decide whether specialization is required, which values the qualifier should take, and
how to articulate it based on its specific business needs.
The sample event type Registered address – Update has been configured for your
reference to use the risk of the customer to maintain different configurations.
In this specific example, the event type is configured through three different entries:
a base configuration (the one with event qualifier NULL), one for high-risk
customers, and one for medium-risk customers.
To create a specialized version of a configuration, use the Specialize option
available in the contextual menu to the right of each entry in the list.
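For illustration, an enrichment data transform could set the qualifier on a Registered
address – Update event based on customer risk. The High value is just an example of a
qualifier an organization might choose, and the Data content is abbreviated:
{
  "ReferenceKey": "CustomerID",
  "ReferenceValue": "31013",
  "EventType": "CCDC",
  "EventSubtype": "REA-UPD",
  "EventQualifier": "High",
  "Data": { "AddressType": "Registered address" }
}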
The configuration instructions provided so far in this section are applicable to the
scenario where there is a single instance of Pega Client Lifecycle Management deployed in
the environment. There could be organizations, however, that need to implement
multiple instances of Pega Client Lifecycle Management in the same environment to
manage different regions or segments (this might not be the preferred deployment
approach, but it is an option). In that kind of situation, organizations need to make
some additional changes to the configuration steps listed above.
In addition, a multiple-domain scenario requires the registration of the Perpetual
KYC processes under the new domains. The out-of-the-box application comes with two
EDA processes:
• Perpetual KYC – Event
• Perpetual KYC – Bundle
The same principle must be observed for the registration of bundle types (see Creating
and registering a new bundle type). In a multiple domain scenario, the event
representing the bundle type should point to all the domains where that specific
bundle is applicable. Under this configuration, the system will trigger the bundling
option throughout all the domains.
Reference events
Details of the reference events provided out-of-the-box with Perpetual KYC.
The following sections include the JSON payloads to submit the three reference events
shipped out-of-the-box, along with some basic default configuration. For more
details about the specific configuration for each of them, see the configuration available
at the Event Driven Architecture Operations console (Configure >> Financial Services >>
Event Management >> Configuration >> Custom process configuration).
Legal name – Update (type/subtype CCDC/LGN-UPD)
JSON Payload:
{
  "ReferenceKey": "CustomerID",
  "ReferenceValue": "158001",
  "EventType": "CCDC",
  "EventSubtype": "LGN-UPD",
  "ExternalID": "9473856385",
  "Source": "Customer Service",
  "Data": {
    "Description": "Legal name change event",
    "Details": "Details of legal name change",
    "NameType": "Legal Name",
    "PreviousName": {
      "LegalName": "Hawthorne Investments"
    },
    "NewName": {
      "LegalName": "Platinum Advisory Inc"
    }
  }
}
Base validation: The previous legal name and the new legal name cannot be empty.
Full name – Update (type/subtype CCDC/FLN-UPD)
JSON Payload:
{
  "ReferenceKey": "CustomerID",
  "ReferenceValue": "101004",
  "EventType": "CCDC",
  "EventSubtype": "FLN-UPD",
  "ExternalID": "BUG24234-713608-18",
  "Source": "Customer Service",
  "Data": {
    "Description": "Full name Update",
    "NameType": "Full Name",
    "NewName": {
      "FirstName": "Ravi",
      "MiddleName": "B",
      "LastName": "Teja"
    },
    "PreviousName": {
      "FirstName": "Iqbal",
      "MiddleName": "",
      "LastName": "Mohamad"
    }
  }
}
Registered address – Update (type/subtype CCDC/REA-UPD)
JSON Payload:
{
  "ReferenceKey": "CustomerID",
  "ReferenceValue": "31013",
  "EventType": "CCDC",
  "EventSubtype": "REA-UPD",
  "ExternalID": "REG150001FIdfds",
  "Source": "Customer Service",
  "Data": {
    "Description": "Registered address update",
    "AddressType": "Registered address",
    "NewValue": {
      "Country": "Iran",
      "AddressLine1": "299 Park Avenue",
      "AddressLine2": "8th, 9th and 11th Floors",
      "CityOrTown": "New York",
      "StateOrProvince": "New York",
      "PostalCode": "10171"
    },
    "PreviousValue": {
      "Country": "United States",
      "AddressLine1": "600 Washington Blvd",
      "AddressLine2": "test",
      "CityOrTown": "Stamford",
      "StateOrProvince": "CT",
      "PostalCode": "06901"
    }
  }
}
Functional components
In each journey, the system determines the applicable documentary requirements
based on the assessment of different drivers, like jurisdictions, product types,
customers, and related-party roles or risks. The system determines which documents
are required in each specific scenario and adjusts the requirements as the data changes
during the journey lifecycle.
The functionality is structured in the following four main functional components:
Requirements definition
The first step to use the functionality is to have the rules that drive the
documentary requirements defined in the system. This is usually
done at design time using the Requirements Portal or by developers using
Dev Studio.
Synchronization engine
This component is in charge of keeping the documentary requirements specific to
a given case up to date. The system executes it automatically at multiple points of
the application, including synchronizations with the Master Profile.
CLM Requirements
The layer implements the advanced requirements functionality that CLM requires
to orchestrate the satisfaction of documentary requirements. The layer is
implemented on top of Pega FS Requirements as an extension of that module.
Although the design of the module supports the use of the functionality by other
applications, it is currently provided with CLM and can only be used within the
context of that application. The rulesets that make up this layer are
PegaCLMFSRNGFS (CLM-specific logic) and PegaCLMFSRNGBase (application-agnostic
capabilities), both referenced from the PegaCLMFS application.
Pega FS Requirements
The layer implements the basic requirements functionality shipped with Pega
Foundation for Financial Services. This base functionality is the one used by CLM
until the introduction in 8.5 of the CLM Requirements layer, and it is still in use to
support in-flight cases during upgrade scenarios and CLM customers that are not
ready to move to the new CLM Requirements functionality. The module is
implemented in two main rulesets, including PegaFSRequirements (provided with
Pega Foundation for Financial Services).
By default, customers installing Pega Client Lifecycle Management and KYC inherit
all the capabilities of these two layers by just building their applications on top of
the PegaCLMFS application.
Generation 1G vs 2G
The Requirements functionality was initially developed in the early versions of Pega
Client Lifecycle Management and KYC, with Pega FS Requirements providing the core of
the functionality. Although the module comes with some basic functionality that covers
most of the customer's immediate needs, there are some important aspects that are
not considered in this version of the module. This version is the one referred to above
as Pega FS Requirements, and it is known as 1st Generation (1G).
In the 8.5 version, Pega Client Lifecycle Management and KYC introduced a new version
of the functionality to overcome some of the most important limitations. The module
was refactored to enable financial institutions to take the document management
process to the next level. This new version of the module is accessed through the layer
referred to above as CLM Requirements, and it is known as 2nd Generation (2G) or
Requirements Next Generation (RNG).
The parallel existence of these two versions of the functionality affects customers
differently depending on where they are in their journey with Pega Client Lifecycle
Management and KYC.
Fresh install
Financial institutions installing Pega Client Lifecycle Management and KYC for the
first time automatically adopt the latest version of the functionality (2G) without
having to do any specific configuration.
Update
Financial institutions updating from previous versions of the product must
perform some manual tasks to activate the new functionality. Until the manual
tasks are completed, the legacy version of the functionality (1G) is used. For more
information, see "Updating from previous version to 8.5" in
the chapter Updating from a previous version to 8.5.
Note: Once you work in the new version of the functionality (2G), stay in that
version. Any attempt to deactivate the new version and to resume work in the
legacy version might have unpredictable results, and it is not supported.
For more information about how to move from 1G into 2G, see Updating from 1G into
2G.
• Requirements definition
• Synchronization engine
• Requirements case
• Background processes
• General settings
Requirements definition
The definition of the documentary requirements is articulated through three main
data entities: requirement sets, requirements, and documents.
The maintenance of these three data entities can be performed from the following
locations:
Requirements Portal
The portal allows users to maintain requirement sets, requirements, and
documents in a user-friendly manner. It is available by default for the standard
CLM administrative access groups and can be added to other access groups as
required (look for the RequirementsPortal portal).
Dev Studio
Developers can manage the documentary requirements from Dev Studio by
accessing the three data entities, all available under Records > Requirements.
Users can configure most of their documentary requirements by using the forms
available from these two points. However, some advanced capabilities require
additional configuration steps, which are described in the following sections.
The Requirements Portal helps users to maintain the requirement sets, requirements,
and documents.
The requirement sets and requirements are implemented in the system as rules that
must be associated with a specific application, ruleset, ruleset version, and class. At run
time, these values are determined by the configuration loaded through
D_ReqConfiguration.
Use the available extension point LoadConfiguration_Ext to set the correct values.
pyRuleSet
Ruleset where new and updated rules are stored.
pyRuleSetVersion
Ruleset version where new and updated rules are stored.
pySelectedApplicationName
Name of the application that the ruleset belongs to.
RequirementAssetsConfigurationClass
Class used by the requirements application; by default,
PegaReq-Data-ReqVerify-.
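A minimal sketch of the values an implementation might set through
LoadConfiguration_Ext, with all values hypothetical and shown as JSON for illustration
only:
{
  "pyRuleSet": "MyOrgRequirements",
  "pyRuleSetVersion": "01-01-01",
  "pySelectedApplicationName": "MyOrgCLM",
  "RequirementAssetsConfigurationClass": "PegaReq-Data-ReqVerify-"
}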
The second stage in the processing of a requirement is data capture. In that stage, the
user performs two basic operations: the first confirms that the collected
document is in good order, and the second captures the data that the document might
require according to its type.
The definition of the attributes required for each specific document type is done in two
different places.
After completing the changes, the new section appears in the data-capture stage for
the documents of the selected type. The defined attributes are captured through that
section and automatically stored along with the document. The information is stored in
two different places:
The activity opens the definition of the document identified by DocId and invokes
SetDocAttributeRelevance to register the attributes.
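As an illustration of the kind of information that might be captured and stored along
with a document, with a hypothetical attribute shape and invented values:
{
  "DocumentIdentifier": "Passport",
  "Attributes": [
    { "Name": "DocumentNumber", "Value": "X1234567" },
    { "Name": "ExpiryDate", "Value": "2027-05-01" }
  ]
}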
While the persistence of attributes in the document metadata table is essential for the
good behavior of the functionality, the attribute indexes are not currently in use. If
your organization is not planning to implement any functionality based on this table, it
can be disabled to reduce the overall database footprint. The following
when rules can be used to disable the functionality.
If, on the contrary, you want to use the attribute index but would like to
change the way the index is generated, you can make some adjustments.
Synchronization engine
During the lifecycle of a customer journey, the system runs the synchronization
engine multiple times to ensure that the status of the documentary requirements is up
to date and in sync with the customer and related-party data. The process consists of
two main phases: determination and implementation.
During the first phase, no action is taken. The system just builds up the data structure
with the requirements that the relevant case parties need to satisfy. This structured
data is known as the DocGroup. The main tasks executed in this part of the process are:
Applicability assessment
After the relevant requirement sets are identified and retrieved, and the Context
is set, the system assesses each individual requirement set and requirement and
determines their applicability.
Pega Client Lifecycle Management and KYC comes with a fully functional configuration
of the module for its CIP requirements. Financial institutions, however, might want to
make changes to the standard configuration or expand the use of requirements to
other purposes and types of documents.
Pega Client Lifecycle Management and KYC considers all the KYC Significant parties as
subject to documentary requirements. To implement a different logic, modify the
following rules.
To find the relevant requirement sets for a given purpose and party, the system uses a
composite key made up of three values: purpose, locator key, and locator modifier. The
system compares the key generated during the synchronization against the key defined
in the requirement set at the creation time.
For example, a requirement set is defined with the following values: purpose CIP,
locator key Entity, and locator modifier Australia. At the time of synchronization,
the system uses this information to assess this requirement set only during the
evaluation of CIP requirements, when the customer is an entity and only if there are
products in the Australian jurisdiction.
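The composite key from the example can be pictured as follows; this is an illustrative
representation, not a literal data format:
{
  "Purpose": "CIP",
  "LocatorKey": "Entity",
  "LocatorModifier": "Australia"
}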
A given party can trigger the search of requirement sets based on multiple locators. For
example, a customer with products not only in Australia but also in the US triggers a
search for requirements sets with multiple locators: purpose CIP, locator key Entity,
and locator modifier Australia or US.
It can also be specified that, if no requirement set is found for the provided locators,
the system searches again using a locator with the same purpose and locator key as
the one provided and a generic modifier with the value Default.
Pega Client Lifecycle Management and KYC sets the following locator values:
• Purpose CIP/RetailCIP
• Locator key Entity/Individual
• Locator modifier Default
The applicability of each relevant requirement set is assessed for each relevant party.
The assessment is executed using the Context, a data set that contains all the pieces of
data that drive the applicability logic.
The Context is used during the following three assessments. Any piece of data that
those assessments require must be passed through the Context.
In Pega Client Lifecycle Management and KYC, the party information drives the
applicability logic of CIP requirements. The most relevant pieces of information are
listed below:
• Customer Type
• Main customer type
• Regulated organization flag
• Listed organization flag
• Existence of business address
• Existence of screening results
• PEP indicator
• Customer subtype
• Jurisdictions
If you need to pass additional data to the applicability logic and passing it through the
Context is not a good option, consider adding a new parameter using these two rules:
In this phase, the system iterates through all the requirement sets and requirements
that were deemed applicable in the previous phase and creates or reuses cases for
them. In addition, the system withdraws all those cases that, according to the results of
the previous phase, are not required anymore. The second phase executes the
following tasks:
Case initialization
In the situations where the synchronization engine determines that a new case is
required, the system creates the new instance and initializes it with all the data
required for its later processing.
Case reopening/reuse
If the system determines that the requirement is already managed in the current
customer journey and, therefore, there is a case for it in the system, it proceeds to
reuse the requirement (if it is in flight) or reopen it (if it is already resolved).
Case withdrawal
After the system ensures that all applicable requirements have their
corresponding case, it withdraws all those cases that were created in previous
runs of the synchronization engine and whose requirements are not applicable
anymore.
Requirements reuse
If a given requirement was processed before for the same party and within the same
Context in a previous customer journey, it can be reused.
Documents reuse
As part of the case initialization, the system checks if the targeted party already
has some of the documents requested in the requirement. If any match is found,
it is automatically attached to the case and the case progresses to the next
applicable stage.
A few extension points are provided in this part of the process for financial
institutions to change the default behavior of the engine.
During the initialization of requirement cases, the synchronization engine transfers all
the requirement-relevant information, along with party data, routing information, and
so on.
You can reuse requirements completed in previous customer journeys for the same
party and under the same context (the same contracting party).
To enable or disable this functionality based on the case data or the journey type, use
the following when rule:
If the functionality is enabled, the system will search in the requirements metadata
table for any completed requirement created in the past that meets the following
criteria:
Same party
The requirement is for the same party.
Same context
The main journey is for the same contracting party.
Same requirement
The requirement has the same name as per the requirement definition.
Same scenario
The requirement is resolved with the same scenario that is currently active.
No changes in the requirement
The requirement definition is the same since its use.
To implement the last of the checks – determining whether the requirement definition
has changed since the requirement was resolved – the system uses the Satisfaction
Logic Indicator. This indicator is a string that is set at the time of the resolution of the
requirement and which is later used during the requirement reuse logic.
By default, the system sets the last update time of the requirement definition
(pxUpdateDateTime) as the Satisfaction Logic Indicator, so any update in the definition
prevents the reuse of previously resolved requirements.
While this conservative approach works well for many financial institutions, you might
want to create a more sophisticated logic to generate the string. For
example, generate a hash from the satisfaction logic and its associated documents to
give more flexibility to the reuse process and therefore reduce the operational load of
the teams.
To implement your own Satisfaction Logic Indicator, use the following rule:
At the time of the assessment, the system also considers if any one of the previously
provided documents to satisfy the requirement is expired or stale. If that is the case,
the requirement cannot be reused. To change this behavior, extend the following rule:
RNGIsReqReuseInvalidDate (When)
The rule returns false by default, so a requirement with a past valid end date
cannot be reused.
When a requirement case is created, the system checks whether the customer has
already provided any of the documents requested by the requirement. The system
automatically attaches any such existing documents and, depending on the status of
the document and the satisfaction logic, makes the case progress to the appropriate
stage.
For example, if a requirement expects either a passport or a driver license from a given
party, and that party already provided the passport in a previous journey, the system
can automatically attach the document and try to make the case progress. As only one
document is required to satisfy the requirement, the case leaves the initial Satisfy
stage and goes into the next Data capture stage. If the passport already went through
the Data capture stage in the past, the system determines that the second stage is not
required and makes the case progress to the third one, the Verification stage.
However, if the logic of the requirement is different and requires both documents,
passport and driver license, the system can attach the document automatically but
cannot make the case go beyond the first Satisfy stage, where the case waits for the
second document to be uploaded.
To change some of the behavior of this functionality, use the following rules:
RNGIsReuseOfStaleAndExpiryDoc (When)
The rule determines whether the system must reuse expired or stale documents.
By default, the rule returns false, so the system does not consider expired
documents.
Invocation points
Pega Client Lifecycle Management and KYC executes the requirements synchronization
engine in most of the synchronizations with the Master Profile.
To change the behavior of the functionality and restrict its execution to only certain
stages or steps in the process, use the following rules:
RNGSynchronizeRequirementsEnable (When)
The rule acts as a general switch for the synchronization and can be turned on
and off based on certain conditions. By default, it is always turned on. Use it to
disable the synchronization across the application without modifying all the
invocation points, or to do it selectively under certain circumstances. The decision
should be based on case data.
ReadyToTriggerRequirementSubcases (When)
The rule gives some control to processes that want to skip or include the
synchronization based on certain conditions. The calling process can set the
bTriggerRequirements property, which the rule evaluates. Use the
bTriggerRequirements property to enable or disable the synchronization from
specific processes.
Requirements case
Requirements cases implement the process to collect and verify documents from
customers. They are created by the requirements synchronization engine when it
determines that a requirement needs to be satisfied by a given party.
Document collection
A document collector reviews what is needed to satisfy the requirement, uploads
a document, or requests a waiver, and submits the case as having all the
necessary documents provided. This stage sources documents into the party
document library.
Data capture
A data collector determines whether the document is in good order, captures any
data that is defined as being needed, and, if correct, submits the case as having all
the document data captured. If the document is not in good order, the data
collector rejects the document, and the system returns the case to the document
collection stage. This stage collects data that is stored in the party document
library.
Verification
A document verifier determines that the document meets the requirement’s
criteria and, if correct, submits that the document met the requirement. If the
document is incorrect for this requirement, the verifier rejects the document, and
the system returns the case to the collection stage. The document status is
specific for the requirement that it is verified for, and the same document can
have a different status in any requirements using it.
Waiver review
A document verifier reviews any waivers that were requested and if correct,
submits the case as having the waivers verified. If the waiver is not appropriate,
the document verifier rejects the waiver and the system returns the case to the
document collection stage.
Approval
A document approver reviews that all documents and waivers meet the
requirement’s criteria and, if correct, submits the case as being approved. If
documents or waivers are rejected, and the satisfaction logic is not met, then the
case is returned to the document verifier for review. The document status is
specific for the requirement that it was approved for, and the same document can
have a different status in any requirements using it.
Pega Client Lifecycle Management and KYC comes with a fully functional configuration
for this component which does not require additional changes to work. However,
financial institutions may need to modify the base behavior of the component to meet
their specific business needs. The following sections describe some of the most
common configurations made to this part of the functionality.
The different actions that a user can perform in the process are given by the privileges
available to that user. It is important to note that a single user can perform multiple
roles in the process and therefore act in more than one process stage.
These privileges have been grouped out-of-the-box into a series of roles that
represent the different personas that can be involved in the process. The roles
available for direct use by applications are:
RequirementsDocumentCollector
Represents an operator that can upload
documents in the initial stage of the process.
Associated privileges:
RNGGatherInformation
RNGRequestWaiver
RNGShowCaseAttachmentPrivilege
RNGShowExistingDocPrivilege
RNGShowViewHistoryPrivilege
RequirementsDataCollector
Represents an operator in charge of the
capture of data from the document.
Associated privileges:
RNGDataCollection
RNGShowViewHistoryPrivilege
RequirementsVerifier
Represents an operator that can verify
documents in a requirement. Associated
privileges:
RNGVerification
RNGShowViewHistoryPrivilege
RequirementsProcessWaiver
Represents an operator that can verify and
approve waiver requests. Associated
privileges:
RNGProcessWaiver
RNGShowViewHistoryPrivilege
RequirementsApprover
Represents an operator that can perform the
final approval on the case. Associated
privileges:
RNGApproval
RNGShowViewHistoryPrivilege
RequirementsMultiRole
Represents an operator that can perform
the first four stages of the process.
Associated privileges:
Associated privileges:
RNGGatherInformation
RNGRequestWaiver
RNGDataCollection
RNGVerification
RNGProcessWaiver
RNGShowCaseAttachmentPrivilege
RNGShowExistingDocPrivilege
RNGShowViewHistoryPrivilege
Pega Client Lifecycle Management and KYC configures its access groups to use these
roles in the following manner:
CLMFSCIBSysAdmin – PegaRequirements:RequirementsMultiRole
CLMFSRetSysAdmin – PegaRequirements:RequirementsApprover
CLMFSRet_Branch_User – PegaRequirements:RequirementsDocumentCollector
CLMFSCIB_RM – PegaRequirements:RequirementsMultiRole,
PegaRequirements:RequirementsApprover
CLMFSCIB_SS_Manager, CLMFSRet_SS_Manager – PegaRequirements:RequirementsApprover
CLMFSCIB_SS_User, CLMFSRet_SS_User – PegaRequirements:RequirementsMultiRole
If your organization needs a different configuration to meet its own operational needs,
you can distribute the roles and privileges listed in this section across the access groups
and roles of your organization.
By default, the application executes the stages of the requirement case in the order
listed above. However, some financial institutions might determine that some of the
stages of that process are not required to meet their business needs and might
want to skip them. For that purpose, the following rules are made extension points:
Configuring routing
Pega Client Lifecycle Management and KYC establishes a routing model based on the
operational structure of the application. For more information, see Configuring
operating structure.
Stage – Workbasket suffix
Satisfy – SS_Docs
Validate – SS_Docs
Approval – SS_Docs_Approve
As in any other routing in Pega Client Lifecycle Management and KYC, the system looks
in the operating structure for an available workbasket with the same suffix as the one
in the table and routes the assignments to it.
For example, Pega Client Lifecycle Management and KYC provides the UPlus operating
structure with two workbaskets, UPFS_GM_AME_US_SS_Docs and
UPFS_GM_AME_US_SS_Docs_Approve, created under Global Market – America – US and
serving that jurisdiction. Equivalent workbaskets are created in many other
jurisdictions and are used by the system according to the operator configuration.
Although customers are expected to use this routing model based on the operating
structure, they can always opt for their own logic, or keep the out-of-the-box model
but configure it to route to different workbaskets. Look at these rules to implement the
required changes.
The following sections describe the main metadata tables, as well as briefly enumerate
those built to complete the functionality.
MetadataId
Document metadata identifier, unique for each instance.
DocumentIdentifier
Type of document; must match the unique identifier of one of the documents
defined in the system.
DocumentName
Label given to the document when it is uploaded into the system.
PartyRefs
List of customer identifiers of associated related parties.
WorkAttachKey
pzInsKey of the document in the out-of-the-box attachments table
(Data-WorkAttach-File).
Status
Status of the document in the system: Uploaded, Uploaded-Collected,
Uploaded-Rejected.
StaleDate
Date on which the document becomes stale.
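A hypothetical document metadata record, shown as JSON purely for illustration; all
values are invented, and the actual storage is a database table:
{
  "MetadataId": "DM-1001",
  "DocumentIdentifier": "Passport",
  "DocumentName": "Passport - Jane Doe",
  "PartyRefs": ["158001"],
  "WorkAttachKey": "DATA-WORKATTACH-FILE EXAMPLE-KEY",
  "Status": "Uploaded-Collected",
  "StaleDate": "2026-01-01"
}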
The following records are created when requirement cases are created and are
updated as the cases progress to completion.
MetadataId
Requirement metadata identifier, unique for each instance.
RequirementId
Requirement identifier as defined at design time.
RequirementName
Requirement name as defined at design time.
RequirementCaseID
Requirement case pyID.
RequirementCaseKey
Requirement case pzInsKey.
Status
Status of the requirement: New, InProgress, Resolved-Completed.
ActiveScenario
Name of the scenario active at the time of resolution.
ContextKey
Identifier that determines the context in which the requirement is satisfied.
ValidEndDate
The earliest date on which one of the documents expires, is deleted, or becomes
invalid (the date applies to the document, not the requirement SLA).
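Similarly, a hypothetical requirement metadata record, with all values invented:
{
  "MetadataId": "RM-2001",
  "RequirementId": "REQ-PassportCheck",
  "RequirementName": "Proof of identity",
  "RequirementCaseID": "R-123",
  "RequirementCaseKey": "PEGACLMFS-WORK-REQUIREMENT R-123",
  "Status": "Resolved-Completed",
  "ActiveScenario": "Default",
  "ContextKey": "158001",
  "ValidEndDate": "2027-05-01"
}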
In addition to the two main metadata tables, there are some additional tables that are
used to support relationships between data entities – such as the relationship between
documents and requirements – and additional functionality.
Extension points
A few extension points are provided to support organizations that need to add
additional properties to these tables.
Supported scenarios
The default configuration comes out of the box, and no changes are required.
This scenario can be easily configured using Pega Platform capabilities and
does not require changes in the Requirements module. For more information, see
the instructions available at File and content storage.
Physical documents stored in DMS and metadata stored in both Pega and DMS
The model extends the previous configuration by implementing a data
propagation mechanism that transfers the metadata captured through the Pega
application into the DMS system. The metadata synchronization process ensures
that all metadata maintained internally by the Pega module is made available to
the enterprise through the DMS application.
This configuration requires, in addition to the configuration for the previous
scenario, the implementation of certain rules.
Integration points
The integration with DMS involves two main data flows: the selection of documents in
DMS and document metadata propagation.
When a user opens a requirement case and selects Attach, the system shows by default
the following three different tabs:
Note: A fourth tab can be made available for organizations using integration
with DMS.
The DMS selection tab (labeled as Other document stores) shows a list of documents
available for the customer in DMS. Upon user selection, the document is registered in
Pega in two different ways:
D_FetchDocumentMetaDataFromDMS (Data page)
Retrieves the list of documents available for a given customer in DMS.
DisplayDocumentsFromDMS (Section)
Displays the list of documents available for the customer in DMS.
IsDMSDocLibraryEnabled (When)
Conditionally displays the DMS tab. Overwrite LoadConfiguration_Ext and set the
IsDMSDocLibraryEnabled property in the D_ReqConfiguration configuration page
to the true value (no need to change the when rule).
Every time the metadata associated with a given document changes, it is persisted into
the document metadata table. The system can be configured to synchronize the data in
that table with an external DMS system, in one of two modes:
Synchronous
The propagation of the data happens at the very same moment of writing the
entry in the document metadata table. It ensures immediate synchronization
between Pega and DMS, but comes with a certain performance overhead.
Asynchronous
The propagation is scheduled using a queue processor, which takes care of its
execution in an asynchronous manner. This is the recommended configuration in
most situations.
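For example, to enable outbound propagation in the recommended asynchronous
mode, the corresponding D_ReqConfiguration flags (described under General settings
later in this document) would be set as follows, shown as JSON for illustration only:
{
  "IsDMSOutboundEnabled": true,
  "IsDMSOutboundAsyncModeEnabled": true
}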
Background processes
After installing Pega Client Lifecycle Management and KYC, the system activates by
default the background process that detects the expiration of documents. If you want
your organization to make use of it, you must configure the following job scheduler to
point to an access group in your implementation.
When the expiration of a document is detected, the system ensures that a new version
of the document is collected. The system detects in which context the
document was used and then creates review cases for those customers who were acting
as a contracting party when the document was collected.
For example, during the onboarding of customer A, the system creates a requirement
case for a related party B. The requirement is satisfied with a document that expires
after some time. When the expiration is detected, the system creates a new customer
review case for the contracting party at the time of collection (customer A). The new
review case, in turn, creates requirement cases for all related parties, including
party B, which enforces the collection of a new version of the document.
The sections below describe the process; analyze it and schedule the activation of the
functionality accordingly. The activation of the 2G functionality does not
necessarily need to happen immediately after installing the Pega Client Lifecycle
Management and KYC 8.5 version.
Activating 2G
The first step in the process is the activation of 2G. This configuration ensures that all
new requirements are created using the new capabilities and that the engine follows
the new rules.
At the time of activating the functionality in the production environment, there are two
active sets of rules:
Legacy requirements
The system uses the requirement sets, requirements, and documents defined
before activating 2G to resolve the in-flight cases that might exist at the time of
activation.
New requirements
The new engine uses the new set of rules to create new requirement cases taking
advantage of all the new capabilities provided by 2G.
If you are using the out-of-the-box requirements shipped with Pega Client Lifecycle
Management and KYC, these two sets are immediately available after installing the new
version of the product. If, on the contrary, you are using your own
requirements, you need to create the new set of rules.
The new functionality introduces an entirely different paradigm. Trying to replicate the
current arrangement of sets, requirements, and documents in 2G without adopting that
new paradigm can have significant implications in terms of performance and
operations.
Consider the following differences at the time of creating the new set of rules
for your organization:
Requirement-document cardinality
In the new version, requirements can manage multiple documents. The
requirements can also manage different business scenarios and a satisfaction
logic that can dynamically change as the data of the case changes. The new
approach significantly changes how to use the requirement sets and
requirements.
For example, you might have two different groups of requirement sets, one group
created for customer identification and another for tax purposes. The customer
identification sets do not need to be considered in an offboarding journey.
We can tag the group of requirement sets created to collect documents for customer
identification with the purpose Customer Identification Purpose (CIP) and then
configure the engine to consider the CIP purpose only under certain circumstances.
For the actual creation of requirement sets, requirements, and documents, you can use
Dev Studio or the Requirements Portal. If you are going to use the latter, you must
configure it to generate the new rules under the appropriate rulesets. For more
information, see Configuring ruleset for the portal in the chapter Requirements definition.
After the new requirements are defined, configure the synchronization engine. If you
are using the requirements that come out of the box with Pega Client Lifecycle
Management and KYC, the synchronization engine is already configured. If you are
using your own requirements, you must configure the engine to ensure they are
considered during the synchronization.
For more information about how the engine works and what you must configure, see
the chapter Synchronization engine. Take the following tasks under consideration:
The new version of the requirement case type drives new functionality that is
implemented in the requirement cases. The new version is a circumstanced version of
the legacy case type, which might not be accessible from your implementation layer.
To ensure that your application uses the right version of the case type, make a copy of
the circumstanced version. For more information, see "Copying requirements case type
to the new class structure" in the chapter Application Configurations.
When the application runs in 1G in your production environment, the system collects
multiple documents and metadata that are stored in the legacy data structures. The new
version of the functionality comes with new data structures that provide flexibility and
enable the future evolution of the module.
The migration process ensures that all documents and metadata captured in 1G
become available for use in 2G. Two upgrade strategies are supported:
On-demand
Under this model, the system, as required, steadily migrates the metadata
associated with the documents. Every time one journey requires access to the
documents of a given customer, the system executes an on-demand migration
that copies all the document metadata available for that customer in the legacy
data structures into the new ones. The process is transparent to final users, who
do not know about the migration happening behind the scenes. The on-demand
migration strategy is the default.
Bulk migration
The application provides a set of tools to do a bulk migration of document
metadata. The tools must be executed right after the activation of the 2G
functionality.
To determine which migration strategy is the more appropriate for your organization,
consider the following factors:
• On-demand migration
• Bulk migration
On-demand migration
The main objective of the on-demand migration process is to migrate document
metadata from its current storage in the Data-WorkAttach-File class (pc_data_workattach
database table) into the following new data structures:
PegaReq-Data-Metadata-Document
New database table that hosts the document metadata. For more information, see
"Document metadata table" in the chapter Document and requirements metadata.
PegaReq-Data-Document-Migration
New table that records those customers that already have their document
metadata migrated. It allows the process to avoid double executions of the
migration on customers that are already migrated.
The system executes the logic of the migration every time the documents of a customer
are requested through one of the following two data pages:
• D_RNGFetchMetadataDocumentsForCustomer
• D_RNGDisplayExistingDocumentsForCustomer
In the current version of the application, these are the three invocation points:
Customer 360
While rendering the Documents tab in the Customer 360 view
Synchronization engine
During the assessment of documents for reuse during requirements case creation
Manual selection of documents
When the customer is uploading documents in the Satisfy stage, where they
can select from the list of existing documents for that party
Once the migration has been completed for a customer, it is never executed again.
For a given customer, the migration is triggered only once, from whichever of the three
invocation points listed above the user or the system reaches first.
Activation
This migration strategy is set by default with the installation of Pega Client Lifecycle
Management and KYC 8.5. Ensure that the following dynamic system setting has the
expected value:
Setting: Requirements/LegacyDocUpgradeStrategy (ruleset PegaCLMFS)
Value: OnDemand
During the migration process, the system can automatically map document types that
are used in 1G requirements to the new ones created for 2G. For example, you can
configure the system to unify document types that are specific to a jurisdiction in 1G
(for example, Passport Australia) under a single type (for example,
Passport).
Use the following rules to change the default behavior of the application:
Bulk migration
The bulk migration is an accelerated version of the on-demand. It executes a logic
equivalent to the on-demand migration but in a bulk manner. For a complete view of
the process and how to execute it, see Bulk Migration of Document Metadata guide on
the Pega Client Lifecycle Management and KYC product page.
After the migration process is executed, ensure that the following dynamic system
setting has the expected value, which prevents the system from looking at the legacy
data structures anymore:
Setting: Requirements/LegacyDocUpgradeStrategy (ruleset PegaCLMFS); for the
expected value, see the Bulk Migration of Document Metadata guide.
General settings
The configuration of the module is maintained in the D_ReqConfiguration data page.
Most of the settings are loaded from dynamic system settings that can be changed as
required. The remaining settings are set through the data page initialization data
transform. Use the LoadConfiguration_Ext extension point to make changes to the
following settings (the most relevant properties available in the D_ReqConfiguration
data page):
LastGeneration
Source: DSS – PegaCLMFS Requirements/LastGeneration
Default Value: 2
ActiveGeneration
Source: DSS – PegaCLMFS Requirements/ActiveGeneration
Default Value: 2 (8.5 installs), 1 or <empty> (upgrades to 8.5)
LocatorModifierUseDefault
Source: DSS – PegaCLMFS Requirements/LocatorModifierUseDefault
Default Value: true
RNGSkipApproveStage
Source: DSS – PegaRequirements Requirements/SkipApproveStage
Default Value: false
Used during the processing of requirement cases to automatically skip the approval
stage. It can be used in combination with the extension point
RNGSkipApprovalStage_Ext, which can be configured to skip the approval in specific
circumstances.
RNGSkipValidateStage
Source: DSS – PegaRequirements Requirements/SkipValidateStage
Default value: false
IsDMSDocLibraryEnabled
Source: DSS – PegaRequirements Requirements/DMS/DocLibraryEnabled
Default value: false
Used to display an additional tab that allows users to select documents from DMS
repositories.
IsDMSOutboundEnabled
Source: DSS – PegaRequirements Requirements/DMS/
OutboundSynchronizationEnabled
IsDMSOutboundAsyncModeEnabled
Source: DSS – PegaRequirements Requirements/DMS/OutboundAsyncModeEnabled
Default value: false
LegacyDocUpgradeStrategy
Source: DSS – PegaCLMFS Requirements/LegacyDocUpgradeStrategy
Default value: OnDemand
RNGReqPrefix
Source: Initialization Data-transform (LoadConfiguration)
Default value: R
RNGDocPrefix
Source: Initialization Data-transform (LoadConfiguration)
Default value: D
pyRuleSet
Source: Initialization Data-transform (LoadConfiguration)
Used by the Requirements Portal to determine the ruleset where new and
updated rules are saved.
pySelectedApplicationName
Source: Initialization Data-transform (LoadConfiguration)
Default value: <empty>
Used by the Requirements Portal to determine the application where new and
updated rules are saved.
pySelectedBranchIdentifier
Source: Initialization Data-transform (LoadConfiguration)
Default value: <empty>
Used by the Requirements Portal to maintain new and updated rules in a branch.
RequirementAssetsConfigurationClass
Source: Initialization Data-transform (LoadConfiguration)
Default value: <empty>
Set this value in the LoadConfiguration_Ext rule to use a specific class for all the
rules.
ReqCaseWorkClass
Source: DCR - D_AppExtension.WorkClass_Requirement
Default value: PegaCLMFS-Work-Requirement
Customer Review
In the course of business with customers, many events can occur from internal
and external sources. A financial institution must identify and process each change,
such as a customer updating their address or an identification document expiring.
Failing to do so can lead to non-compliance and increased risk. Forcing manual checks
on every change can delay or suspend revenue generation for the financial
institution's customers and incur high operational costs.
Pega Client Lifecycle Management comes with a default Customer Review journey that
enables financial institutions to review customers upon the occurrence of certain
events and therefore ensure compliance. The default configuration supports five
business scenarios, which are articulated in the application through the Customer
Review journey and the four journey subtypes under it. The following table shows the
relationship between the business scenarios and the underlying journey subtypes used
to support them.
Each journey subtype presents slight behavioral differences in aspects like straight-
through processing, partial versus full review, or data capture. These differences
are explained in detail in the sections dedicated to each subtype.
Financial institutions with business needs that might not be addressed by any existing
business scenarios and journeys can create their own triggering points and journey
subtypes to implement their own processes (see Journey subtype). It is important to
note, though, that Perpetual KYC already offers a highly flexible framework that can
accommodate many different scenarios, and it is likely to support any business
scenario not covered in the default offering. It is therefore highly recommended that it
be considered when implementing new processes (see Perpetual KYC).
For example, let’s say a customer was onboarded into the financial institution on the
1st of January 2016. Based on the risk profile of the customer and the internal
organizational policies, the system sets the next review date to the 1st of January 2018.
When that date is reached, if no other journey was run on the customer before, the
application automatically triggers a Customer Review journey on the customer.
If, for any reason, another journey was initiated before the date is reached – for
example, Maintenance, Add product, or a Customer Review journey under any of the other
business scenarios – the next review date is recalculated, and the waiting period for a
new customer review restarts. By default, any journey conducted on a customer resets
the next review date. Financial institutions can modify this logic and bring different
data elements into the equation, like journey type, type of change, or customer risk
(see Process configuration).
The next review date is calculated using business rules that can use multiple data
elements. By default, the application calculates the new date using the customer’s risk,
but this logic can be configured to factor in other important elements, like the business
segment of the products that the customer is associated with, and jurisdictional and
regulatory factors. The following table shows the default configuration (see Process
configuration for changes to this table).
Customer risk   Next review interval
Low             5 years
Medium          3 years
High            1 year
Extreme         1 year
The customer’s next review date can be found in the Customer 360 view of that
customer.
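As an illustration only, the following Python sketch mirrors the default risk-to-interval
mapping shown above; the actual logic lives in the MapCustomerNextReviewDate decision
table, not in code.

from datetime import date

# Default review intervals per customer risk, in years (mirrors the table above).
REVIEW_INTERVAL_YEARS = {"Low": 5, "Medium": 3, "High": 1, "Extreme": 1}

def next_review_date(risk: str, from_date: date) -> date:
    """Return the next review date for the given risk (naive with respect to leap days)."""
    return from_date.replace(year=from_date.year + REVIEW_INTERVAL_YEARS[risk])

print(next_review_date("High", date(2016, 1, 1)))  # 2017-01-01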
Customer identification
Periodic customer reviews are triggered from a background process that scans the next
review date of all customer master profiles. The system will trigger a customer review
journey when the next review date is reached. The background process in charge of the
detection of customers is implemented under a job scheduler called
CreateCustomerReviewEvents and configured to run by default once every six hours.
The process starts by finding the list of customer master profiles to act on. The list
contains all the master profiles that meet the following criteria:
• The master profile has a next review date that falls into a time window that starts
at the time of execution and whose length in days is determined by configuration
(by default, one day – see Process configuration). For example, if the
configuration has a value of three days, the report returns all master profiles
whose next review date expires in the following three days.
• The master profile is not already queued for processing from a previous execution
of the detection process. This mechanism avoids triggering duplicate customer
review journeys for the same customer.
Once the list of master profiles is in place, the process iterates through it and creates
one new event for each of its entries. The new events are of type FEV1 and are routed
through the legacy Pega FS - Event Driven Architecture so that the second part of the
process, the actual creation of the Customer Review journey, can be completed. The
events are scheduled for execution using a configuration existing in the system, which
specifies the number of days before the events should be processed (by default, one
day - see Process configuration).
Customers are onboarded into various business segments, such as corporate and
investment banking (CIB) or retail. It is important for the system to track the
customer's assigned business segment and ensure that periodic reviews are conducted
within that corresponding segment.
The business segment of a customer not only determines the segment that the periodic
review will be associated with, but also the access group under which the journey will
be created. For example, in an organization where there is a single application to
manage all segments, all periodic reviews will be created under the same access group,
even when the created journeys are associated with different segments based on
customers’ business segments. Other organizations, however, may have different applications for
different segments. In that case, the system will use different access groups to create
the different review journeys.
When the Event Driven Architecture queue processor receives the FEV1 event, it is taken
through a centralized piece of logic that manages the automatic creation of Customer
Review journeys for all business scenarios (except Perpetual KYC, which has its own
journey creation mechanism). For more details about this process, see Customer
journey creation.
Process configuration
There are a few aspects of the process that organizations can configure to meet their
specific business needs.
By default, all journey types are configured to reset the next review date of the
customer. The decision table EvaluateCustomerNextReviewDate can be used to change
that behavior.
The default configuration that determines the next review date of a customer is
maintained at the decision table MapCustomerNextReviewDate. Make changes to this
table to change the time intervals or add more data elements to the decision.
The length of the time window used by the system to detect customers whose next
review date is approaching is configured using the Dynamic System Setting
NextReviewWithin. The default configuration is one day. Organizations can change it to
anticipate the next review date further.
The Dynamic System Setting, EventLeadDays, sets the delay in the process of FEV1
events. The default configuration is one day.
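For example, to have the detection process look three days ahead and delay the
processing of FEV1 events by one day, the settings could look as follows (the owning
ruleset shown here is an assumption; verify it in your environment):
PegaFS NextReviewWithin=3
PegaFS EventLeadDays=1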
During a customer journey, the risk associated with the customer can change
depending on various factors – related party risk, screening, due diligence, and so on.
When that happens, that change in the risk must be propagated to all those customers
where the customer whose risk changed plays the role of a related party.
For example, if the risk of Customer A changes from Low to High during a Customer
Review journey, all the customers in the organization where Customer A plays a specific
role should be reviewed as well so that changes in the risk and, therefore, in the
regulatory obligations are detected and managed. That could result, for example, in the
creation of Customer Review journeys for Customer B and Customer C, where Customer A
plays the role of beneficial owner.
Pega Client Lifecycle Management checks at the end of every journey run on a customer if
there was a change in risk and, if there was, creates the appropriate Customer Review
journeys.
Customer identification
The application looks for changes in the customer’s risk at the end of the journeys, at
the Wrapup step of the Fulfillment stage (see Create Risk Assessment Cases).
At that point, the application checks the audit history of the customer for any change in
the risk that occurred. If there is no change, the process finishes. If there was a change,
the system retrieves all the party-party relationships where the current customer plays
a specific role and, after checking that the counterparty in that relationship is active,
generates an FEV3 event, which is then routed through the legacy Pega FS - Event
Driven Architecture so that the second part of the process, the actual creation of the
Customer Review journey, can be completed.
When the Event Driven Architecture queue processor receives the FEV3 event, it is taken
through a centralized piece of logic that manages the automatic creation of Customer
Review journeys for all business scenarios (except Perpetual KYC, which has its own
journey creation mechanism). For more details about this process, see Customer
journey creation.
When the risk associated with a country changes, the customers related to that country
must be reviewed as well. For that purpose, the system monitors the maintenance of the
Pega FS country reference table and, when there is any change in the risk, triggers a
process that results in the creation of Customer Review journeys on all customers
related to that modified country.
Customer identification
The business scenario is triggered when there is a change in the sensitivity level of any
of the countries maintained at the Pega FS country reference table (in Dev Studio,
see Configure > Financial Services > Reference Data).
For each customer in the resultant list, the system generates an FEV4 event, which is
routed through the legacy Pega FS - Event Driven Architecture so that the second part
of the process, the actual creation of the Customer Review journey, can be completed.
All the business scenarios where a Customer Review journey needs to be created use the
Pega FS - Event Driven Architecture framework except Perpetual KYC, which has its own
journey creation mechanism. The process starts with the reception of one of the four
event types currently supported and then proceeds as follows:
1. The system determines the operator used as context for creating the new journey.
By default, the operator used is EventDrivenAgent. The configuration of that
operator (e.g., the workgroup the operator is associated with or the business
calendar configuration) will be used during the initial steps of the journey until the
journey is routed to a worklist or workbasket for manual intervention or, in
straight-through processing scenarios, until its resolution. If the operator
configured is unavailable, the process ends here. To configure this operator, see
Process configuration.
2. Using the event type, the system determines the journey subtype used during the
case creation. To make changes to the logic used for this determination, see
Process configuration.
3. The system then retrieves a user-friendly description for the event so that it can be displayed on
different screens throughout the new journey. To change these descriptions or
add a new event type, see Process configuration.
4. The system then creates a new Customer Review journey associated with the
customer for whom the event was generated; it sets its journey subtype with the
value obtained in the previous steps and initializes it with all the event information
available (event details and description).
Process configuration
There are a few aspects of this process that organizations can configure to meet their
specific needs. The following sections list those aspects and the rules used to manage
that configuration.
Context Operator
By default, the system tries to use an operator with the name EventDrivenAgent. The
system can be configured to use a different operator with equivalent privileges. The
operator must be referenced in the AgentOperatorId map value.
If you decide to use the default EventDrivenAgent, you should note that this operator is
not shipped with the base product. It is only available with the sample package shipped
out-of-the-box, a package that must not be deployed in a production environment.
Journey subtype
The determination of the journey subtype based on the event type and, therefore, on
the business scenario is done at the map value MapForCustomerReviewCustSubJourney. If
your organization wants to use different event types or journey subtypes, this rule must
be modified accordingly.
Event description
The user-friendly description of the events is maintained at the map value MapFSEvents.
Make changes to this rule if you want to change an existing event's description or add a
new event type.
Each business scenario is represented in the system by an event type (for example,
FEV1). Events are registered at the map value MapFSEvents. Add the new event type
with a meaningful description to that rule.
Create the appropriate rules to detect the business scenario. Each business scenario
will require a different infrastructure. For example, a scenario where the user explicitly
requests a customer review on a customer can be implemented by a local action
made available on the Customer 360 screen. However, a scenario where the system
needs to periodically scan customers looking for those related to Financial Crime
investigations will likely be implemented using job schedulers or other background
processes.
Regardless of the process implementation, the outcome of the process should be the
generation of customer events that can be routed to the second part of the process, the
actual creation of the Customer Review journey. The generation of events can be done
by simply invoking the CreateEvent activity (use activity
RNGCreateCustomerDocExpiryEvents as a reference, for example). For more details about
the generation of the event, see Pega FS - Event Driven Architecture.
When the event is received at the Event Driven Architecture, it should be routed to the
centralized logic in charge of automatically creating Customer Review journeys. That
routing can be done at the decision table FSIFEventDrivenProcess, which takes the
event type as input and returns as output the name of the activity that will take care of
that process – in this case, CreateCustomerReviewCase.
The final step is configuring that centralized logic to manage the new event, registering
the description of the new event type so it can be displayed later in the customer
review journeys, and adding the necessary logic to establish the journey subtype for the
new event (see Process configuration).
Journey configuration
No additional configuration is required if you can use any of the standard journey
subtypes shipped out-of-the-box. However, your organization might need a different
process flavor to manage this specific business scenario. Depending on how much the
functionality that you require differs from the out-of-the-box behavior, you may want to
consider these two options:
• If the differences between the organization’s needs and one of the out-of-the-box
journey subtypes are insignificant, simple conditional logic on the event type –
available anytime throughout the journey – would be the best approach. It would
minimize the configuration and maximize reuse.
• If the differences are significant, a new journey subtype can be created under the
Customer Review journey (see Creating a new journey subtype).
Client outreach
The Client outreach case type enables the client to provide required data or documents
through a web self-service mechanism accessed from a laptop, tablet, or phone. Data
and documentation received from the client can then be used in the ongoing customer
journey.
The following tasks enable you to extend the client outreach functionality. For more
information about this functionality, see Client outreach.
During the creation of new Client outreach cases, the system displays a list of
customers that the case can be assigned to. By default, the list contains the main
customer of the case (base Pega Foundation for Financial Services implementations) or
any party associated with the case with a Contact role category (Pega Client Lifecycle
Management and KYC). Organizations can change this behavior to send cases to
different parties.
1. In the header of Dev Studio, in the search box, search for and select the
ClientOutreachRecipientListPrepare_ext data transform.
Note: This rule is the extension point for the data transform that
retrieves the list of contacts for organizations and the actual client for
individuals.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Add parties that can be assigned with the Client outreach case.
4. Save your changes.
Upon selecting a recipient, the system uses the ConsolidationKey property (a
combination of customer and recipient identifiers) to determine whether there is
already an active case for the current client and the selected recipient, and lets the
user create a new case accordingly. New cases cannot be created if there are in-
flight cases with the same consolidation key as the one built after recipient selection.
The consolidation key can be modified to consider customers only, or it can be
removed altogether to avoid consolidation, as illustrated in the sketch after the
following step.
a. In the header of Dev Studio, in the search box, search for and select the
CalculateConsolidatedKey_ext data transform.
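For illustration only, the following Python sketch shows the kind of key that logic
combining customer and recipient identifiers might produce; the real logic lives in the
CalculateConsolidatedKey data transform and its _ext extension.

def consolidation_key(customer_id: str, recipient_id: str) -> str:
    """Build an illustrative consolidation key; drop recipient_id to consolidate per customer only."""
    return f"{customer_id}:{recipient_id}"

print(consolidation_key("CUST-001", "CONTACT-042"))  # CUST-001:CONTACT-042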
When an active case is edited, the data captured on the screen (items and instructions)
is transferred into the case to eventually reach the customer. If additional data is
required, changes can be made to the PropagateEditedData_ext data transform.
1. In the header of Dev Studio, in the search box, search for and select the
PropagateEditedData_ext data transform.
Note: This rule is the extension point for the PropagateEditedData data
transform.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Add data to the case.
4. Save your changes.
Client outreach cases can be withdrawn either automatically by the system or manually
by the user. In both situations, cases are closed, and the assignments created for
customers to fulfill the requests are deleted. Some additional logic (for example, to
send a notification to the recipient or to send a message to the front-end system to
remove notifications for the customer) can be added to the application by extending the
two rules ClosePendingClientOutreachCases_ext and WithdrawPost_ext.
1. In the header of Dev Studio, in the search box, search for and select the
ClosePendingClientOutreachCases_ext activity.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Make changes to the behavior of automatically withdrawn cases.
4. In the header of Dev Studio, in the search box, search for and select the
WithdrawPost_ext activity.
Note: This rule is the extension point for the WithdrawPost activity that is
used to manually withdraw the Client outreach case using a local action.
5. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
6. Make changes to the behavior of manually withdrawn cases.
7. Save your changes.
Configuring notifications
The system generates an email notification to the recipient when the case is created
and when there are changes made to the case. In addition, the system can send extra
notifications when the case reaches a certain goal or deadline Service Level Agreement
(SLA) in the Fulfillment stage.
Change the following rules to modify the content of all email notifications.
1. In the header of Dev Studio, search for one of the following correspondence rules.
NotifyContactCODeadline (Correspondence) – sent to notify the recipient that the
Client outreach case has reached its deadline SLA.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Make changes to the correspondence rules.
4. Save your changes.
The logic to build the list of recipients for a given case can be modified or extended. In
the Pega Client Lifecycle Management and KYC layer, the list is built using the
D_ClientOutreachRecipientList data page, which pulls any associated party from the
case with a role category of Contact.
To change this logic, modify the data page or the data transform that feeds its content,
and then save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
All the extension points for the Client outreach case can be overridden using a ruleset
specialization (saved in the same class, using a different ruleset). However, if customers
want to introduce new types of items (the system currently supports only basic data
and documents), they need to create their own classes to support those types. The
classes need to be created either under the same hierarchy as the default ones or in a
new class structure. To keep a clear distinction between the base product (Pega
Foundation for Financial Services and Pega Client Lifecycle Management and KYC) and
the customer implementation, apply the second approach.
Pega Client Lifecycle Management and KYC provides an example of how these classes
can be extended. New classes are created under the PegaCLMFS rulesets to add new
types or change the behavior of the existing types. Customers can follow a similar
pattern.
Modify the content of the email to meet your organization’s business needs.
1. Log in to Dev Studio using the credentials with access to the Retail layer.
2. In the header of Dev Studio, in the search text field, enter
NotifyCustomerProduceWelcomePack, and then select the correspondence
rule from the results.
3. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
4. Review the template correspondence and modify it to contain information specific
to your organization.
5. Click Save.
1. Log in to Dev Studio using the credentials with access to the Retail layer.
2. In the header of Dev Studio, in the search text field, enter
PrivacyDisclosureNotice, PersonalAccountAgreement, UPlus Bank
Signature Card, or UPlus Welcome Letter and then select the eForm File
rule from the results.
3. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
4. Click Download file.
5. Review the template correspondence and modify it, if needed.
6. Upload the modified file.
In the client risk and due diligence space, an increasing number of external data
providers offer wide ranges of valuable data and services to speed up these processes.
Multiple providers offer the same types of data and services, but they are chosen based
on factors such as geographic focus, specialized knowledge, and price.
Pega Client Lifecycle Management and KYC provides implementers with a common set
of functional business services to obtain these types of data and services from one or
more external data providers. Where already integrated, existing providers can be used
with little effort. New providers can also be added with limited impact on the application.
The following sections describe how Pega Client Lifecycle Management and KYC
implements access to external data providers, where and how they are used in the
main CLM journeys, and how they can be configured in the system.
Integration components
The layer where the technical interfacing assets and functionality reside. Pure
technical connectors to integrate or interface with external data providers are
implemented in this layer. The layer is Pega industry and data model agnostic,
and it is intended to contain reusable assets across all Pega industries and
applications. Integration components are available at Pega Marketplace and must
be installed and active in the environment where they are used.
Industry abstraction layer
The layer where the actual business services are implemented as abstract,
provider agnostic, components. In addition to the abstract services, the mapping
between the Pega Foundation for Financial Services data model and the external
data provider-specific data model is implemented at this layer.
Orchestration layer
In this case, Pega Client Lifecycle Management and KYC, which consumes the
business services implemented in the industry abstraction layer at different points
of its case types and processes to achieve the efficiency required by your
business.
The architecture guarantees that Pega Client Lifecycle Management and KYC can use the
business services without being tightly coupled with the actual external data providers
that implement those services. When a data provider changes or a new data provider is
required, only the two layers at the top require configuration, while the orchestration
and user experience remain unaltered.
Pega Foundation for Financial Services implements a set of abstract business services.
A few third-party data providers are supported natively by Pega Foundation for
Financial Services. Each of them implements different business services and is used
at various points of the application.
ID Verification – Equifax
In certain situations, users need to obtain more information about a given entity or
individual. The application provides functionality where users can search for customers
in external data providers. The data obtained from the search and any relevant
documents found for the customer in the external data provider are available for users
to enrich new or existing cases. You can access functionality from two different
locations in the application:
• From the main customer search functionality available at the user’s portal, see
Customers > Actions > Search external providers. You can search for customers
by name and, if required, create new onboarding cases, which can be initialized
with the data and documents obtained from the search.
• From existing in-flight cases, see Actions > Enrich with external data. Using the
customer name, you can search for a customer and select which pieces of
information, documents, or both you want to transfer to the contracting party case.
This option is available only for customers of type organization and
under certain journey types (for example, onboarding, maintenance, customer
review). For more information, see P_EnrichCustomer_Required.
During the process of a case, the application can automatically access external data
providers to enrich the customer information of the contracting party. There are two
points of the application prepared for this purpose.
In the Retail segment, the application can invoke various business services
(eIDVerification, eFraud, eOFAC, eCredit), enrich the case with the additional customer
information obtained from those services (see the eKYC step in the Capture stage) and
take automated decisions based on the new data.
Customer investigation
The application uses the Customer Investigation step in the Enrich stage of the
main journey case types to conduct additional investigations on the contracting
customer. The two types of investigation currently supported are Sanctions/PEP/Law
Enforcement and Adverse Media Information.
For each of these types, the system creates e-Screening subcases that, after
accessing the external data providers, bring back the different matches that might
result from the inquiries. You can then review the matches, assess the associated risk,
and determine whether the match is ultimately positive or false. The outcome of the investigations
is propagated up to the main case and, from that moment on, becomes available for
consideration during the assessment of attributes like customer risk or due diligence
profile.
The application also conducts investigations on those parties deemed KYC significant
during the due diligence process. The investigation on those parties is triggered in the
Customer Investigation stage of the Global KYC cases created for them.
For a business service invocation to complete, the conditions described in the following sections must be met.
The first step to activate a given business service is to configure a general switch that
enables or disables the invocation of the service from the process. To implement the
switch, use a Dynamic system setting with a name built according to the following
standard: PegaFS fs/services/<business-service>/enable=true|false
For example, if you need to activate the invocation to eScreening from your process, set
the Dynamic system setting accordingly: PegaFS fs/services/eScreening/enable with the
true value.
If multiple Pega FS applications reside in the same server, configure the activation of
the service at the application level to avoid any impact on other applications that your
change of configuration might bring. For example, to enable the eEnrichment
functionality for Pega Client Lifecycle Management and KYC, create a Dynamic system
setting with a name: PegaCLMFS clmfs/services/eEnrichment/enable=true|false.
For the names given to the Dynamic System Settings of each specific business service,
see Configuring supported data providers.
After the business service is active, a default external data provider for the business
service must be configured. Applications can also explicitly pass the name of the
provider based on certain case data (for example, use one provider for customers of
type individual and another one for organizations). If the provider is not explicitly
mentioned during the invocation of the business service, the system uses the name of
the provider set by default for that service.
The configuration of the default provider for a given business service is implemented in
a very similar manner to its activation, by using a Dynamic system setting built
according to the following standard: PegaFS fs/services/<business-service>/
Provider=<provider-name>
For example, if you need to register Equifax as the default data provider for electronic
ID verification, create a Dynamic System Setting with the following name: PegaFS fs/
services/eIDVerify/Provider with the EQ value.
As with the activation, the configuration of providers can also be set at the application
level. The same configuration for Equifax can be done only for Pega Client Lifecycle
Management and KYC by using the following Dynamic system setting name: PegaCLMFS
clmfs/services/eIDVerify/Provider with the EQ value.
For the names given to the Dynamic System Settings of each specific business service,
see Configuring supported data providers.
Applications may have more complex use cases where the name of the provider must
be determined at run time based on case or application data. For example, a certain
business service might use one data provider for customers of type individual, while a
different data provider takes care of those of type organization. To cater to these
complex use cases, Pega Client Lifecycle Management and KYC provides a set of
extension points that financial institutions can override to implement their own logic
and bypass the default provider configuration.
For more information about extension points, see Configuring supported data
providers.
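As a sketch of the kind of logic such an extension point might implement (the function
and provider names here are hypothetical; the real extension points are Pega rules):

def select_provider(customer_type: str) -> str:
    """Pick a data provider from case data; mirrors the individual/organization example above."""
    if customer_type == "Individual":
        return "ProviderA"  # hypothetical provider for individuals
    return "ProviderB"      # hypothetical provider for organizations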
For testing or demoing purposes, it is sometimes desirable to run the business service
even if the external data provider is not available. Pega Foundation for Financial
Services provides a set of basic simulators for the natively supported business services.
The simulators can be switched on or off using Dynamic System Settings. For more
information, see Configuring supported data providers.
Markit
If you license Markit, download its corresponding integration component from the
Markit for PFFS product page at Pega Marketplace. Install and configure the component
by following the instructions in the user guide available on the product page.
After the integration component is installed, set the following Dynamic system
setting to activate the business service and register the provider.
PegaCLMFS clmfs/services/eEnrichment/enable=true
AVOX
If you license Refinitiv AVOX, download its corresponding integration component from
the Refinitiv AVOX for PFFS product page at Pega Marketplace. Install and configure the
component by following the instructions in the user guide available on the product page.
After the integration component is installed, set the following Dynamic system setting
to activate the business service and register the provider.
PegaCLMFS clmfs/services/eEnrichment/enable=true
Note: In addition to the configuration steps that are listed in the integration
component user guide, the outbound IP of your servers must be added to the
AVOX allow list. Discuss with your IT department and AVOX how to implement
the changes.
SWIFT KYC
If you license SWIFT KYC, you can request the integration component for SWIFT KYC by
raising a Service Request with Pega. Once you have the component, install and configure
it by following the instructions that are provided in the user guide supplied with the
component.
After the integration component is installed, set the following Dynamic system setting
to activate the business service and register the provider.
PegaCLMFS clmfs/services/eEnrichment/enable=true
Equifax
The Equifax OneView and InterConnect products detect and retrieve fraud, credit risk,
and OFAC alerts for a customer.
Prior to the Infinity '24.1 release, the CLMFS application retrieved customer credit,
fraud check, ID verification, and OFAC alert services via the Equifax InterConnect
service. In the Infinity '24 release, the CLMFS application adopted Equifax’s OneView
service, which assesses credit risk with access to a holistic view of a consumer via a
single delivery to improve decision making.
For more details on Equifax OneView, refer to the Equifax OneView User Guide. For
more details regarding Equifax InterConnect, refer to the Equifax InterConnect User
Guide | Pega.
A new DSS, EQService, has been introduced to select either the OneView or the
InterConnect service.
If you license Equifax, download its corresponding integration component from the
Pega Marketplace product page. Install and configure the component by following the
instructions in the user guide available on the product page.
After the integration component has been installed, set the following Dynamic system
settings to activate the business services and register the provider.
• PegaCLMFS fs/services/eOFAC/enable=true
• PegaCLMFS fs/services/eCredit/enable=true
• PegaCLMFS fs/services/eIDVerify/enable=true
The corresponding extension points for these services are:
• BusinessServiceFraudCheckGetDataSetProvider
• BusinessServiceCreditCheckGetDataSetProvider
• BusinessServiceOFACGetDataSetProvider
World-Check
After the integration component is installed, set the following Dynamic system settings
to activate the business services and register the provider.
• PegaFS fs/services/AdverseMedia/enable=true
• PegaCLMFS clmfs/services/eScreening/enable=true
• PegaCLMFS clmfs/services/AdverseMedia/enable=true
• PegaFS fs/services/AdverseMedia/Provider=Powered by Refinitiv World-Check
• PegaFS ConnectorDemoAdverseMedia=false
The corresponding extension point is BusinessServiceAdverseMediaGetDataSetProvider.
Note: Once the World-Check services are activated, and screening cases are
created, the application starts using certain assets like classes, sections, and
flow actions from the component to store and present the information on the
cases and the master profile. Removal of the component from the stack
renders these assets inaccessible and leads to data validation issues for any
cases and master profiles that carry eScreening information. Hence, you must
ensure that the component is never removed from the application stack and
only the settings listed above are used to deactivate the service.
Encompass
You can configure the number of times that the application tries to connect to the
eScreening server. If the number of attempts to connect to the server exceeds the limit,
the screening case is automatically resolved. Set the following Dynamic System Setting
with the desired number of attempts: PegaFS eScreeningAttemptsLimit=<number of
attempts>.
For example:
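PegaFS eScreeningAttemptsLimit=3 (an illustrative value that resolves the screening case after three failed connection attempts)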
You can configure the amount of time that the eScreening data stored at the Master
Profile for a given customer is valid. At the point of the process where the application
determines whether the customer goes through e-Screening, the system checks if the
data is already available for that customer on the Master Profile and, if it is, checks its
validity time. If the data is still valid, the system bypasses the creation of the case and
reuses the existing data. If the validity time is exceeded, the system creates a new case
and invokes the eScreening business service to get fresh results. The following Dynamic
System Setting can be used to establish that validity period: PegaFS
ScreeningValidity=<validity period in days>.
For example:
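PegaFS ScreeningValidity=30 (an illustrative value that reuses existing eScreening data for 30 days)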
Data propagation
Pega Client Lifecycle Management and KYC comes with default data propagation logic
to move the results of the eScreening cases to the main journey case. That way, the
data becomes available to any subsequent logic that might require it, for example,
customer risk. If you want to propagate more data than the application currently
propagates out of the box, you can modify an extension point created for that very
purpose; see the SynchronizeScreeningData_Extension data transform.
The settings must be replaced with the corresponding ones available at the new
configuration mechanism.
Pega Client Lifecycle Management and KYC application provides a new case type,
Adverse Media, implemented using PegaCLMFS-Work-Screening-AdverseInformation class.
This new case type is configured as a child case to the Client lifecycle management case
type and is used to manage the results obtained through Adverse media services. If you
choose to use this new case type in your implementation, you must perform the
following steps:
1. Create an Adverse Information class at the implementation layer and register the
new case class in DCR. For more information about registering a new case class, see
Updating Dynamic Class referencing.
2. Modify the Client lifecycle management case type at your implementation to have
this newly added class as a subtype, and add the PropagateDataToScreening data
propagation rule.
3. Activate the adverse media business service by following the steps defined in
"Business services activation" for World-Check.
Note: As you update the Client lifecycle management case type in your
implementation layer, the system automatically copies the associated stage
rules from the PegaCLMFS-Work-CLM class to the corresponding
implementation class. However, the new copies of the stage rules in the
implementation layer can mask their specialized versions from the child
classes in the product. Hence, you must review the stage rules in the product
in the classes that extend from PegaCLMFS-Work-CLM and manually copy all the
stage rules to the corresponding implementation layer classes.
Introduction
The reception of business events has become critical in an application like Pega Client
Lifecycle Management, which needs to manage events coming from different systems
and modules to ensure continuous regulatory compliance. Customer data changes,
ongoing e-screening alerts or periodic review deadlines are just a few examples of types
of events that the application might need to manage.
For this purpose, the application includes an infrastructural component that facilitates
the implementation of event-driven processes. The component manages the reception
of events from different sources and routes them to the appropriate applications and
processes in charge of their resolution. This component is called Event Driven
Architecture (EDA), and it was refactored into its current form in the 8.8 version of
Pega Client Lifecycle Management.
• Introduction
• Entry channels
• Processing channel
• Administration
• Implementation
• Reference
Solution Overview
The solution involves multiple actors, data entities and processes, which all together
facilitate the end-to-end process of events. The following diagram represents the most
significant elements in the solution.
Entry Channels
Depending on the architecture and business needs of each organization, events can be
produced by very different systems in the enterprise. Many of them can be external
systems to Pega Client Lifecycle Management interacting with the application through
the EDA REST/JSON interface or through any other channel that might have been
enabled. Some examples of external systems that can generate events are CRM
applications, transaction monitoring systems and screening services. We call those
systems event producers.
Pega Client Lifecycle Management, the recipient of the events generated by these
systems, can be considered an event producer itself, as it can generate new events and
inject them into the EDA engine for their subsequent processing. For example, during
the onboarding of a customer, an organization might determine that a full customer
review of a related party is required. The event to trigger that customer review journey
could well be routed through EDA using the queue processors available for that
purpose.
The nature of the events that can be processed through the EDA engine can vary
significantly. While some events might represent simple changes in customer data (for
example, a change of address or a change of legal name), others might represent new
findings detected by an ongoing screening service or changes to the risk level
associated to a given country. Therefore, each event type has its own characteristics,
including a very specific data model, validation rules and enrichment logic.
In addition, each event type can have very different needs in terms of processing. One
change of address event, for example, might require the creation of a new Event Driven
Review journey to assess the impact of the change on the customer. A country risk
change event, however, might translate into something completely different like a
remediation process for all the customers associated to that country. Some events
might even trigger multiple processes at the same time – for example, a customer
review plus a notification to an external organization. Each event type will be configured
in EDA to be taken through one or more processes.
Processing channel
Regardless of the nature of the events, the entry channel through which they are
received and the processes to be executed on them, all events go through a similar
process called the EDA channel. The process is made up of the following seven basic
phases.
• Validation – During this phase, the system ensures that the event is delivered
with the minimum metadata required (for example, event type and event
subtype), that the payload of the event meets the data model defined for the
event type and that the event was not sent before. If the event does not pass this
check, it is rejected and, if the event was sent through a channel where a response
is expected (e.g., REST/JSON), it is communicated to the producer in the response.
• Registration – Once the system has determined that the event data is complete,
it allocates a unique identifier to the event and registers it within the event
tracking table. This will be used from that point on to track the progress and
status of the event.
• Enrichment – The event data is then enriched in a number of different ways,
including the retrieval of additional data - from other systems or from internal
sources like the Customer Master Profile -, or the generation of data that can be
used later during the next steps of the event lifecycle (for example, an event
qualifier indicating the risk associated to the event or the customer type).
Any new event received in the system goes through these seven phases. If one of the
phases fails, the system stops the execution, sets the status of the event to Error and
records the latest phase executed so that the process can be retried later from that
point on.
The execution of all the phases of the channel should take milliseconds in most
cases. The channel process has been articulated in different phases to facilitate the
implementation of certain features like a retry mechanism or the dispatching of events.
However, all phases are executed one after the other, with an overall
response time ideally under one second. Therefore, this phase-based orchestration
process cannot be considered a regular flow of stages and steps. It is just a convenient
arrangement of the logic that in similar modules is traditionally implemented under a
single activity or automation rule.
The EDA engine makes use of a set of database tables to maintain event and process
configurations. The tables are called registry tables. They are maintained at design time
by administrators and used at run time at many points of the process, from the
reception of events and data model validation to the determination of the processes to
be executed on the events.
In addition, during the management of events, the EDA engine feeds three tables that
facilitate the tracking of the status of the main entities managed by the module: events,
processes and bundles. These tables are called tracking tables. The first two, events and
processes, have been already briefly described in the process overview above. The third
one, bundles, is a very light infrastructure to help those processes that require event
bundling to keep track of them in the system.
Administration
EDA offers a configuration console for administrators to maintain the registry tables
and general processing settings. From the console, administrators can register new
event types and new processes at design time, configure domains and access groups,
change engine processing settings, activate and deactivate configurations, and so on.
Along with the configuration console, administrators can access the content of the
tracking tables through the operations console. The console provides a list of all the
events received in the system and the processes that were applied on those events. It
also gives the ability to trigger some corrective actions like the retry of a certain event
or its re-injection into the system.
Entry channels
The EDA engine currently supports two different mechanisms to receive events: a REST/
JSON service and direct invocation. The first mechanism is intended for external
applications that need to inject messages into the engine (for example, transaction
monitoring systems or e-screening services), while the second one is intended for the
injection of events from other Pega Client Lifecycle Management modules and from the
EDA engine itself. Additional custom reception channels can be added as required to
integrate through other mechanisms like Kafka, JMS or SOAP/HTTP.
• REST/JSON channel
• Direct invocation
REST/JSON channel
The REST/JSON service shipped with the EDA engine provides two action methods that
can be used by event producers to interact with the engine.
Event injection
This action method is used for the injection of events into the system and can be
accessed as a POST on:
https://<eda-host:port>/prweb/api/EDA/v2/Events
The method expects the event to be injected into the system as a JSON message in the
body of the POST request. The expected JSON message should contain the following
attributes:
{
"ReferenceKey": "CustomerID",
"ReferenceValue": "9912345955",
"EventType": "FINS",
"EventSubType": "TXN-INT",
"ExternalID": "f0215bb3-391d-49e5-8e4b",
"Source": "Transystem",
"Data": {
"Amount":"2345.90",
"Currency":"EUR",
"SourceAccountName":"Felipe Perez EXTRA",
"SourceAccountNumber":"ES341234876512348765",
"DestinationAccountName":"Armando Jaleo",
"DestinationAccountNumber":"ES355554876512347777",
"Notes": "Monthly payment"
}
}
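For reference, the following is a minimal Python sketch of how an event producer could
inject this event over the REST/JSON channel. The host, port, operator credentials, and
the use of Basic authentication are assumptions; adjust them to your Service Package
configuration.

import requests

# Hypothetical endpoint; replace host and port with your environment's values.
EDA_URL = "https://eda-host:8443/prweb/api/EDA/v2/Events"

event = {
    "ReferenceKey": "CustomerID",
    "ReferenceValue": "9912345955",
    "EventType": "FINS",
    "EventSubType": "TXN-INT",
    "ExternalID": "f0215bb3-391d-49e5-8e4b",
    "Source": "Transystem",
    "Data": {"Amount": "2345.90", "Currency": "EUR"},
}

# POST the event; the response carries the EventID and the status of the processes.
response = requests.post(EDA_URL, json=event, auth=("operator-id", "password"))
print(response.status_code)  # 2xx on total or partial success, 400 on error
print(response.json())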
The following example represents a change of legal name of an organization. In this
one, the event producer requests that the EDA engine skip the duplicate check at the
Validation phase.
{
"ReferenceKey": "CustomerID",
"ReferenceValue": "9912345999",
"EventType": "CCDC",
"EventSubtype": "LGN-UPD",
"ExternalID": "f0215bb3-391d-49e5-8e4a",
"Source": "Customer Service",
"Data": {
"Description": "Name Update",
"NameType": "Legal Name",
"NewName": {
"LegalName": " John Flower’s Corporation"
},
"PreviousName": {
Upon reception of the JSON message, the EDA engine executes the entry channel
process as described before in the solution’s overview. It starts by parsing the message,
validating the payload, registering the event, and then proceeding with the remaining
phases of the process.
The EDA engine takes the event as far as it can before sending a response back to the
event producer. The response to the producer can be generated at different points of
the process, depending on how far that specific event makes it in the overall channel
process.
If the event fails at any time before the registration – for example, during the validation
of the event data – the engine generates a response that consists of a copy of the
original message and an error attribute indicating the reason for the failure. For example,
the message below is the response to a message that was sent with an unexpected
attribute in the payload.
{
"Status": "Error",
"ReferenceKey": "CustomerID",
"Messages": [
{
"Content": "The payload contains attributes not supported by event payload cl
ass",
"Level": "ERR",
"Code": "EDAERR206"
}
],
"EventType": "CCDC",
"ReferenceValue": "9912345999",
"ExternalID": "f0215bb3-391d-49e5-8e4a-49944562",
"Channel": "REST",
"Data": {
"Description": "Name Update low 1",
"NameType": "Legal Name low 1 ",
"NewName": {
"LegalName": "Tesco corp 1"
},
"InvalidAttribute": "Force error",
"PreviousName": {
"LegalName": "Tesco 1"
}
},
"Source": "Customer Service",
"StatusDescription": "Errors encountered in process(s) execution",
"EventSubtype": "LGN-UPD"
}
If the event passes the initial phases, gets registered in the system, and then fails, the
system generates a response message that consists of the unique identifier given to the
event at the EDA tracking tables and its associated status. The message below, for
example, is a response for an event that was accepted but that failed during the
enrichment process.
{
"Status": "Error",
If the event manages to go through all the phases of the channel, the response
message includes status information of the processes that were executed on the event.
These processes might have finished successfully, with errors or might have not
finished yet if they have been dispatched for execution under other domains. For
example, the message below is a response to an event that was accepted and
dispatched to another domain for its execution of the process PKYC.
{
"Status": "Waiting",
"StatusDescription": "Process is currently in queue",
"ExternalID": "f0215bb3-391d-49e5-8e4a-544947384",
"EventID": "27b83494-0e55-4557-ba17-6b0e0ca31625",
"Processes": [
{
"Status": "Dispatched",
"ProcessName": "PKYC",
"StatusDescription": "Event dispatched to destination application for executio
n",
"ProcessDomain": "Perpetual KYC",
"ProcessQualifier": "Event",
"ProcessID": "4eddbc64-91a4-4068-a0c7-73aa604b056b"
}
]
}
The HTTP responses generated by the service are in the 200-299 range when the event
is accepted and there is total or partial success. A 400 status is returned in case of error.
Event status check
This action method can be used by event producers to check the status of a previously
injected event. The method is available by invoking the GET method of the following
endpoint:
https://<eda-host:port>/prweb/api/EDA/v2/Events/{EventID}
The EventID to be passed via the parameter is the unique identifier that was returned
when the event was injected using the Event injection method. The response message
that the engine generates has the same format as the Event injection service.
{
"Status": "Waiting",
"StatusDescription": "Process is currently in queue",
"ExternalID": "f0215bb3-391d-49e5-8e4a-554485900",
"EventID": "66c73f29-f065-45ba-b8b3-9eb81fe630fa",
"Processes": [
{
"Status": "Waiting",
"ProcessName": "PKYC",
"StatusDescription": "Process is currently in queue",
"ProcessDomain": "Perpetual KYC",
"OutcomeType": "Bundle",
"ProcessQualifier": "Event",
"ProcessID": "288825dc-73d3-44c2-92ae-85392c2ffc76",
"OutcomeValue": "2727a928-6e11-4ea1-8507-532e38029e53"
}
]
}
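As an illustration, an event producer could poll this endpoint with the EventID returned
at injection; the host, port, and credentials below are placeholders.

import requests

EDA_HOST = "https://eda-host:8443"  # placeholder host and port
event_id = "66c73f29-f065-45ba-b8b3-9eb81fe630fa"  # EventID returned by the injection call

# GET the current status of the event and of its associated processes.
response = requests.get(
    f"{EDA_HOST}/prweb/api/EDA/v2/Events/{event_id}",
    auth=("operator-id", "password"),
)
status = response.json()
print(status["Status"])
for process in status.get("Processes", []):
    print(process["ProcessName"], process["Status"])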
Both services are linked to a Service Package called EDA (see Records > Integration-
Resources > Service package). The configuration of the Service Package should be
reviewed and changed to make the services available to external systems.
Regardless of the authentication type configured above, the system uses the
credentials given by the event producers at each invocation to find the operator and
access group under which the EDA engine runs for that specific invocation. For
example, in a service configured to use Basic authentication, the event producer needs
to provide the operator identifier and password with each of the requests. Once the
credentials have been validated, the system uses the operator identifier being passed
with the request to find the operator and sets the appropriate access group under
which the EDA engine will run.
Organizations might opt for maintaining a single operator to facilitate the access of all
the event producers, or multiple operators - one per event producer – to facilitate the
independent maintenance of each of the producers (for example, to disable/enable the
access to the service or for tracking/audit purposes).
Regardless of the number of operators, each operator should be pointing to the access
group that will be used to run the EDA engine. In a common setup, organizations want EDA
to execute the processes associated to the event at the time of reception and, with that
purpose, they make the operators used for injection point to the access group of the
domain of the processes (see Registering a new process in the registry).
If, on the contrary, they want the processes associated to the event to be dispatched to
a different domain and executed asynchronously after reception, the operators used
for the injection can be configured to point to a different access group. It should be
noted that this type of configuration introduces a delay in the execution of processes
and makes it impossible to return a final process status in the response of the REST
call. Under this scenario, the event producer receives an initial response with the event
identifier, and an additional call is required to check the status of the event and its
final status (see Event status check).
If during the installation of Pega Client Lifecycle Management the demo package was
installed, the operator CLMFSEDA will be available for your reference. Make as many
copies of the operator as required according to the approach decided – one global
operator or one per event producer - and make them point to the right access groups
as explained before. If the demo package was not installed, just create new operators
and follow the same guidelines regarding the configuration of their access group.
Direct invocation
Event injection
Pega Client Lifecycle Management, either from its case management module or from
the EDA engine itself, can also inject events directly. During the injection, the engine
expects as input an instance of the event tracking table (PegaFS-Data-EDA-Tracking-Event).
The module or function generating the new event should first create and pre-register
the event in the tracking table and, if required, put the event into one of the queue
processors available for this purpose.
Mandatory identifier. Used by the event- and process-specific logic but not by the engine.
The module invoking the activity should consider that the activity does not commit any
of the database or queuing operations; it is up to the module to manage the errors
(which will be raised through the activity status) and commit or roll back accordingly. It
is important to note that, depending on the Local parameter, the scope of the commit
or rollback will be different. In a local execution, the scope includes the end-to-end
process of the event. In a deferred process, however, the scope goes only from the
initial pre-registration until the event is queued for its later processing.
After the event has been injected, the module can access the status of the event by
using the EventID returned by the activity (see data-page D_EDAGetEventDataByEventID).
For more details around the invocation of this activity, see activity EDAReinjectAsNew,
which uses EDAInjectEvent to put events from the event tracking table through the EDA
channel for a second time.
If the module invoking the activity opts for a deferred execution (Local set to false),
the events are put in a queue. The application comes with two queue processors
that provide similar services (see Records > System > Queue-Processors).
There is no need to make any changes to these queue-processor rules to have the EDA
engine working. However, some organizations might want to review the parameters to
meet their specific needs. These are some of the most important parameters to be
considered:
It should be noted that there is a limit to the number of threads that can be available
for a given queue processor in a cluster. For example, having three background nodes
with a configuration of five threads per node does not guarantee 15 threads available.
The limit is imposed by the size of the underlying Kafka topics associated with a queue-
processor, which by default is seven. Look at the PEGA0137 alert to determine whether the limit at cluster level should be adjusted and, if required, use the activity pxAlterStreamPartitions.
The logic associated to an event entry channel should be implemented in the system as
an extension of the class PegaFS-Data-EDA-Channel. For example, the logic specific to
out-of-the-box REST/JSON channel is implemented in class PegaFS-Data-EDA-Channel-
REST, where two subclasses have been created to support each of the two services
provided (PegaFS-Data-EDA-Channel-REST-Event and PegaFS-Data-EDA-Channel-REST-
EventStatus). To add an additional service to the REST/JSON channel or manage different
message formats, more subclasses must be added to PegaFS-Data-EDA-Channel-REST to
contain the appropriate logic.
The new class should contain all the logic to manage the events received through that
channel. That does not mean that the class should contain the reception logic as such
(for example, in the case of Kafka, the actual logic to connect to the Kafka topic can be
easily managed by a Kafka data-set and a data-flow), but it should contain the
management of the events received by that channel. The main responsibilities of the
new class are:
• Message parsing and format translation required to make the event message fit
into the EDA message format (see REST/JSON channel for more details).
• The invocation to the EDA engine to process the event and the corresponding
error-handling required to ensure a robust integration.
• The generation of a response as expected under the integration protocol being used. This might imply the immediate generation of a response in a synchronous protocol (for example, SOAP/HTTP) or a deferred one in an asynchronous protocol (for example, Kafka).
In order to fulfill the second of the responsibilities listed above, the invocation of the
EDA engine, the channel should meet the following requirements:
• The received event should be mapped into the property .Event, a page property of
class PegaFS-Data-EDA-Data-Event. The property should have all the properties
populated as specified in the main REST/JSON interface i.e., event type, event
subtype, data, and so on.
• Once the .Event property has been populated, the channel should invoke the EDA
engine activity EDAChannelEntry. This activity takes care of the processing of the
event, including all the phases of the process i.e., validation, pre-registration,
enrichment, and so on.
• During the execution of the EDA engine, the extension point EDAInitializeChannel is
invoked. The new channel class needs to specialize this rule to set the
property .Event.Channel to the name of the new channel (for example, Kafka).
• Once the EDA engine activity has been executed, errors must be checked by using
StepStatusFail or checking for exceptions. If there are any, it means there was
some unexpected and uncontrolled error that was not managed by the EDA
engine. Generate a generic error response message if required by the integration
protocol.
• If the activity did not return any error, check the status of the event and, if
required, generate a response message. Depending on how far the event made it,
there can be two different scenarios:
◦ If the event did not pass the validation and registration, the status of the
event should be taken from the .Event page itself, which will contain some
additional fields with the errors.
◦ If the event passed the registration, the status of the event should be
obtained from the event tracking entry, accessible through the data-page
D_EDAGetEventDataByEventID and the event identifier available
at .Event.EventID.
To facilitate the management of these two scenarios and the generation of the
response message, the system provides the rule EDAPrepareResponseEventStatus. This
rule generates a response data structure at the property .EventResponse of class PegaFS-
Data-EDA-Tracking-Event. The new channel class can use that resultant data structure to
then generate the response message in the appropriate format and send it to the event
producer.
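The following Java sketch models these responsibilities conceptually. The class and method names are assumptions; in the real system the equivalent logic is implemented through Pega rules such as EDAChannelEntry, EDAInitializeChannel, and EDAPrepareResponseEventStatus.

    // Conceptual skeleton of a custom entry channel (for example, Kafka),
    // modeled outside of Pega for illustration only.
    public abstract class CustomEdaChannel {

        // Minimal stand-in for the .Event page of class PegaFS-Data-EDA-Data-Event.
        public static class EdaEvent {
            public String type, subType, channel, eventId;
        }

        // Message parsing and format translation into the EDA event envelope.
        protected abstract EdaEvent parse(String rawMessage);

        // Stand-in for invoking the engine activity EDAChannelEntry.
        protected abstract void invokeEngine(EdaEvent event) throws Exception;

        // Stand-in for EDAPrepareResponseEventStatus plus protocol formatting.
        protected abstract String buildResponse(EdaEvent event);

        public String onMessage(String rawMessage) {
            EdaEvent event = parse(rawMessage);
            event.channel = "Kafka"; // done via the EDAInitializeChannel extension in Pega
            try {
                invokeEngine(event);
            } catch (Exception e) {
                // Unexpected, uncontrolled error not managed by the engine:
                // return a generic error response if the protocol requires one.
                return "{\"status\":\"ERROR\"}";
            }
            return buildResponse(event);
        }
    }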
• Configuring events
• Configuring processes
Configuring events
The types of events that can be managed by the application in EDA are defined in the
system through two main artifacts:
• Event data class - The class that represents the data model of each event type
and handles basic aspects like the validation or the visualization of the event via a
gadget on the UI.
• Events registry - The registry that contains all the different types of events, identifies them uniquely, and associates them with their corresponding data classes and with the processes that need to be run on them.
In order to add a new event type to the system, both artifacts need to be configured as
explained in the following sections.
The event data class (PegaFS-Data-EDA-Data-Event) and its standard properties act as a generic envelope for the actual payload of the event, which is expected under the property Data. This property is the one that
brings to the data model the polymorphism required to manage event types of very
different nature. For example, in an event representing a change of name, the property
Data will likely contain the previous name and the new name; while in an event
representing a transaction, the property will consist of the transaction identifier, the
amount, the parties involved and other related attributes.
The polymorphic nature of this property is managed through the payload class. Any
new event type that needs to be accepted in the system must be associated to a
payload class. The association to the payload class is made during the registration of
the event, but at that time, the class should already exist in the system.
The first thing to be considered during the creation of a payload class is the location.
The EDA engine is expecting that all payload classes extend from PegaFS-Data-EDA-Data-
EventPayload. For example, the payload class for an event that represents a change of
name can be created as PegaFS-Data-EDA-Data-EventPayload-NameUpdate.
Payload classes can be arranged as in any other class hierarchy, having inheritance,
encapsulation, and extensibility as the main drivers for that arrangement.
If there are multiple event types that share a significant part of their data model, they
can all be arranged under an intermediate node that acts as container for that data
model. For example, if the application needs to manage different types of names (legal vs commercial), we can consider the creation of a root class PegaFS-Data-EDA-Data-EventPayload-NameUpdate and, under it, two different payload classes, one for each of the specific types: PegaFS-Data-EDA-Data-EventPayload-NameUpdate-Legal and PegaFS-Data-EDA-Data-EventPayload-NameUpdate-Commercial.
There could also be situations where different event types can be represented using
exactly the same data model. In those cases, a single payload class can be defined and,
at the time of registering the event types in the registry, make them all point to it.
The payload classes are not going to be persisted by themselves, so they must be
created as abstract classes so that all the artifacts associated to concrete classes –
history class, database tables, and so on – are not created.
Once the payload class has been created for the event type, the data model that will
represent the event must be created. For example, for an event that represents a
change of name, a few properties to maintain the old name, the new name, the date
when the change was made effective, and so on should be created.
This process should not be different from the creation of any other data model in a Pega application. The only aspect to consider is that the data model defined at this class is the one that the event producer will need to provide at the time of injecting events into the system. For example, if the application is using the out-of-the-box REST/JSON interface, the data model defined here will in turn define the JSON message that will be expected when an event is received. The data model should therefore be easy to read, manipulate, and parse with the Pega parsing and mapping rules.
The following is an example of the data model defined for the payload of an event that
represents a name update.
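In the application, this data model is defined as Pega properties. As a conceptual illustration, an equivalent shape expressed in Java might look like the following sketch; the field names are assumptions, not the shipped data model.

    // Illustrative shape of a name-update payload; in the application these are
    // properties on an abstract class under PegaFS-Data-EDA-Data-EventPayload.
    public class NameUpdatePayload {
        public String oldName;                    // name before the change
        public String newName;                    // name after the change
        public String nameType;                   // for example, Legal or Commercial
        public java.time.LocalDate effectiveDate; // date the change became effective
    }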
Users and administrators might need to access the data of the event payload, either
from the EDA operations console or from the modules that make actual use of the
events. For that purpose, the payload class should implement a section rule that
displays its data. The section will be later associated to the event type during the
registration of the event type.
After the reception of the event, at the Validation phase, the channel performs an initial
validation of the event payload. The validation logic should ensure that the data is
received from the event producer as expected. There are two types of data that usually
need to be validated: basic event data – for example, the existence of a reference key or
a reference value – and data specific to the event type payload.
For example, in order to validate an event type that represents the change of name, a data transform named ValidateFullNameEventPayload can be created under PegaFS-Data-EDA-Data-Event. The data transform can start by validating the presence of the
basic fields (for example, reference key) and then proceed with the event type specific
fields (for example, old and new names).
If the validation logic establishes that the event is not valid, it should add an EDA
message to the event by using the EDAMessageLibrary data transform (see EDA Message
Library). The EDA engine will detect the presence of the message and reject the event.
The following is an example of a validation data transform that checks the presence of
a field NameType at the payload and, if not present, adds a message to the event so that
it is rejected.
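The actual rule is a Pega data transform; the following Java sketch only models the same check conceptually, and the message code used is a hypothetical example.

    // Sketch of the validation described above: reject the payload when the
    // NameType field is missing.
    import java.util.ArrayList;
    import java.util.List;

    public class NameUpdateValidator {
        // Returns the EDA messages to add to the event; an empty list means
        // the payload passed this validation.
        public static List<String> validate(String nameType) {
            List<String> messages = new ArrayList<>();
            if (nameType == null || nameType.isBlank()) {
                // In Pega this would invoke the EDAMessageLibrary data transform;
                // the engine rejects the event when a message is present.
                messages.add("APP-0001: NameType is required");
            }
            return messages;
        }
    }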
The validation logic at event type level that is being discussed here is a generic one that should ensure that the basic data of the event is present and valid. There can be, however, additional validations performed at each of the processes that will be executed on the event. This validation should therefore focus on the essential data of the event.
In the same way that the event data can be validated at reception, it can also be enriched. Upon reception of the event and after its registration, the system invokes the enrichment logic, which should have been previously encapsulated in a data transform under PegaFS-Data-EDA-Data-Event and associated to the event type during its registration in the event registry.
The enrichment logic can be used to retrieve additional data about the event – for
example, the segment of the customer that the event is related to or some additional
details about a transaction that might be referred from the event. It can also be used to
prepare the data so that later it is easier to consume by the processes associated to the
event.
If there is any kind of error during the enrichment process, the EDA engine should be notified through the activity status. The engine will detect the situation and mark the event with an error.
The following is an example of an enrich data transform that, among other things,
retrieves the registration address available at the customer master profile in a change
of address event.
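Again, the actual rule is a Pega data transform; the following Java sketch models the same kind of enrichment conceptually, with the master profile lookup mocked and the field names assumed for illustration.

    // Sketch of enrichment for a change-of-address event: copy the current
    // registration address from the (mocked) master profile onto the event
    // data so that later processes can compare old and new addresses.
    import java.util.HashMap;
    import java.util.Map;

    public class AddressChangeEnricher {
        public static Map<String, String> enrich(Map<String, String> eventData,
                                                 String customerId) {
            eventData.put("RegisteredAddress", lookupRegistrationAddress(customerId));
            return eventData;
        }

        // Stand-in for the master profile access; in Pega, a failure here would
        // be raised through the activity status so that the engine marks the
        // event with an error.
        private static String lookupRegistrationAddress(String customerId) {
            return "1 Main Street"; // mocked value
        }

        public static void main(String[] args) {
            Map<String, String> data = new HashMap<>();
            data.put("NewAddress", "2 High Street");
            System.out.println(enrich(data, "CUST-001"));
        }
    }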
The additional data that might be produced during this enrichment phase will be available later to all the processes. The event itself will be persisted in the event tracking table with the additional data. It is therefore recommended that the enrichment implemented at this level covers only data that is generic enough to be used by all the processes. If some data is only required by a specific process, it is recommended to obtain it during the early stages of the process itself. That will reduce the database footprint of the events in the system and therefore improve performance.
Registering the event at the registry
To add a new event type to the registry, click Add event and provide the following information:
• Event name - Short description of the event type being registered. It does not need to be unique but, given that it will be used to help users distinguish between different types of events, a unique name is highly recommended.
• Type and subtype – The different types of events are uniquely identified in the
system by their type and subtype. When a new event is received at the EDA
channel, the engine extracts its type and subtype from the message and uses it to
find the entry in the registry with its associated configuration. The pair type and
subtype is the one that determines how an event will be processed.
Applications can follow different conventions here. Some might use the type to indicate the generic type of change that the event represents (for example, change of customer data, customer activity, screening) and the subtype to determine the exact nature of the event (for example, change of address, card replacement request, or new PEP matches). Some others might follow a completely different approach.
Regardless of the approach taken with types and subtypes, the system expects in these properties a short, meaningful identifier without spaces. For example, a customer data change type can be represented as CDC and a change-of-legal-name subtype as LGN-UPD.
• Processes to execute – The processes that are to be executed upon the event. The system will show a list of all the processes available in the process registry. One or more processes can be selected for a given event. To add a new process to the registry, see Configuring processes. Upon reception of the event, the EDA engine will ensure that all selected processes are executed, either immediately during the channel process or by dispatching them to a different domain. It is important to note, however, that the system does not guarantee the order in which they will be executed, and some of them might even execute in parallel threads. The order established in this selection is not relevant at runtime.
The following is an example of an event of type Legal name update that is configured
with its own payload class, to be validated and enriched with its own logic, and to be
taken through the Perpetual KYC process for events.
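Conceptually, the resulting registry entry carries the information captured on the Add event form; the Java record below is only an illustration of that shape, with assumed values in the comments, not a Pega rule.

    // Conceptual shape of an event-type registry entry.
    import java.util.List;

    public record EventTypeRegistration(
            String name,           // "Legal name update"
            String type,           // for example, "CDC"
            String subType,        // for example, "LGN-UPD"
            String payloadClass,   // "PegaFS-Data-EDA-Data-EventPayload-NameUpdate"
            String validationRule, // validation data transform
            String enrichmentRule, // enrichment data transform
            List<String> processes // identifiers of processes from the process registry
    ) {}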
After the event type has been registered, the EDA engine is ready to start receiving
events of that type.
If the generic event envelope needs to be extended with an additional property, the following changes are required:
• Property creation – A new scalar property must be created under the class PegaFS-Data-EDA-Data-Event. Another new property of the same name and type must be created as well under the class PegaFS-Data-EDA-Tracking-Event.
• Complete data mapping – To ensure that the property is persisted and retrieved
as required, add the property to the mapping rules between the incoming event
and the event tracking entry: EDAGenerateEventTrackingInfo_Ext and
EDAConvertToEvent_Ext.
• Expose at the UI – If the new property needs to be added to the UI of the EDA
Operations console, use the UI inspector to navigate through the UI sections and
add the new property there.
After these changes, your application will be able to receive and persist the new
property.
Configuring processes
Processes are defined in the system through different artifacts:
• Process manager – The process manager is the class in the system in charge of
the execution of the process. It contains the logic to be executed on the events.
• Process configuration – Some processes may require process specific
configuration for the events that are going to be managed. The data model, user-
interface and governing rules for that configuration should be managed in the
system by a process configuration class.
• Process registry – Processes should be registered in EDA for their use by the
engine and execution on EDA events.
Processes are implemented as activities in the process manager classes (see Creating a process manager activity). Applications can opt for different approaches when distributing these activities, and therefore the associated processes, across process managers. The following are three models that can be considered:
• One process manager per process – Under this model, the application defines
one process manager per process at the application. As good practice, it is
recommended that the process managers of the applications extend from a
common class where the application reusable logic is placed (see Creating a
process manager class). This model works well for most applications with fairly
complex processes that benefit from isolation in process specific classes.
• One process manager per application – Under this model, the application
defines a generic process manager to contain all its processes and related rules.
This model works well for applications with simple processes that do not require a
complex data model.
• One process manager for all applications – EDA already comes with a default
process manager (class PegaFS-Data-EDA-Engine-ProcessManager) where
applications can create their processes. This model works well for extremely simple setups, but the process manager can quickly become a container of logic of different natures and purposes that is difficult to maintain; in general, the previous two options are recommended over this one.
If the application being implemented opts for one of the first two options, a process manager class will need to be created (see Creating a process manager class). Otherwise, the next step is the creation of the activity associated to the process (see Creating a process manager activity).
Creating a process manager class
All process manager classes should extend from the class PegaFS-Data-EDA-Engine-ProcessManager, which contains some of the logic to manage the invocation from the EDA engine.
The classes can be created directly under this root class under a class structure that
maximizes reuse across different processes. For example, if a module for Perpetual KYC
implements different processes, creating an intermediate class PegaFS-Data-EDA-Engine-
ProcessManager-PKYC to act as a container of reusable logic is highly recommended.
Specific process managers can then be created as subclasses, in the form of PegaFS-Data-EDA-Engine-ProcessManager-PKYC-Process1, PegaFS-Data-EDA-Engine-ProcessManager-PKYC-Process2, and so on.
The process managers are just containers of logic and have a very brief lifecycle. The
process manager classes are instantiated by the EDA engine when their logic needs to
be executed on a certain event and then immediately destroyed. Therefore, the classes
should be created as abstract classes and the logic should consider that each event will
produce and use a fresh instance of the class.
Creating a process manager activity
Under the process manager class, there must always be an activity in charge of the execution of a process. There is no restriction on the name of the activity, which will later be identified and registered along with the process (see Registering a new process in the registry). However, the activity should meet certain specifications in its input and output parameters (for example, the Retry parameter mentioned below).
Regarding the body of the activity, there are also some specifications that should be
observed:
• Process status – The execution of the process manager is associated to one entry
in the process tracking table. It is the responsibility of the process manager activity
to update the status of that entry before the process completes. For that purpose,
the process can use the savable data-page D_EDASaveEventProcessTrackingRecord
and the process tracking identifier that will be available under the
property .ProcessId. There are three important properties that should be set:
◦ Status – Final status of the process after its execution. Successful execution
should result in a status of Success. If the event still requires some further
processing, the status of the tracking entry can be set to Pending. In case of
error, see Error handling below.
◦ OutcomeType – Each process can record the outcome of the process. The
type of outcome will vary significantly between processes. Some examples of
outcome types can be Case (to indicate that the process resulted in the
creation of a case) or Bundle (to indicate that the event was left for its later
processing by a bundle). This outcome type is just a label and there are no
restrictions in the value it can take. It is important though that it is
meaningful as it will be presented to administrator users and might be used
later by other processes.
◦ OutcomeValue – The process should return the actual value of the outcome.
If, for example, the outcome type set before was Case, the outcome value
should contain something like the pzInsKey of the case that was created. If it
was Bundle, it can contain the bundle identifier. As with the outcome type,
there is no strict imposition from the engine on the value that this property
can take.
• Error handling – In case of any error during the process, the activity should raise
it using the activity status. The EDA engine will detect the circumstance and will
automatically update the status of the process to Error.
An error scenario does not trigger the retry of the execution, which can only be triggered by an explicit request of the process through its Retry parameter. The process will need to decide which scenarios require a retry (probably those caused by the temporary unavailability of a certain resource, like the lock on a case) and which are error scenarios (errors in the data or case status that will not be resolved by simply retrying).
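The following Java sketch models the contract described above (final status, outcome type and value, and retry request). The names are illustrative; in the real system this information is set on the process tracking entry through D_EDASaveEventProcessTrackingRecord and the activity status and parameters.

    // Model of the information a process manager activity must leave behind.
    public class ProcessResult {
        public enum Status { SUCCESS, PENDING, ERROR }

        public Status status;              // final status of the process
        public String outcomeType;         // for example, "Case" or "Bundle"
        public String outcomeValue;        // for example, the pzInsKey of a created case
        public boolean retryRequested;     // ask the engine to re-queue the event
        public Integer retryDelaySeconds;  // optional waiting interval for the retry

        public static ProcessResult caseCreated(String caseKey) {
            ProcessResult r = new ProcessResult();
            r.status = Status.SUCCESS;
            r.outcomeType = "Case";
            r.outcomeValue = caseKey;
            return r;
        }
    }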
Creating a new process manager configuration
The EDA engine provides a placeholder for process managers that require a custom configuration mechanism so that the configuration user interface can be exposed at
the EDA Configuration console. That is to say, the only value provided here by EDA is
the exposure of the configuration user interface and the centralization of all the
processes configuration at a single point. Everything else is the responsibility of the
process manager: from the user interface to enter the configuration, its storage in the
database and its retrieval at runtime.
In order to create a custom configuration for a process manager, a new class must be
created. The class must extend PegaFS-Data-EDA-Configuration-Process and must have a
section that manages the configuration data capture. Both class and section will be
associated with the process manager during the registration of the process.
If the process manager requires some custom storage to store the configuration (for
example, a database table to store the configuration for each of the expected event
types), a class under PegaFS-Data-EDA-Configuration-Storage can be created. This is
optional, and its only purpose is to keep all the EDA process configuration centralized; the process manager can, for example, store its configuration somewhere else.
Registering a new process in the registry
To add a new process to the registry, click Add process and provide the following information:
• Identifier – Unique identifier of the process in the registry. A single process manager can be registered for execution in different domains; in that case, each registration requires a different identifier. For example, if the Perpetual KYC process is going to run in two separate applications, one for CIB and one for Retail, the process needs to be registered twice: once with an identifier such as PKYC-CIB and again with PKYC-Retail.
• Qualifier – The qualifier of a process allows users to configure different process managers depending on whether the process is run on a single event or on a bundle of events. If the process manager being registered does not support bundling, Event must be selected. If the process manager supports bundling, register the process twice: once selecting Event and once selecting Bundle, each time selecting the appropriate process manager.
• Domain – If no value is entered in the domain field, the EDA channel always
executes the process as the event is received. Otherwise, if a domain is specified,
the EDA channel ensures that the process is executed under that specific domain.
This can result in the immediate execution of the process if the EDA channel is
running under the same access group as the one of the required domain, or in the
dispatching of the event to a different domain when the EDA channel is running
under a different access group.
In very simple setups, using an empty domain could simplify the configuration.
However, it is highly recommended that processes are associated to specific
domains. That will ensure that no matter how the EDA channels are configured
now or in future, the processes will always run in the right context.
If the process that is being registered should run in a domain that is not present in
the list, add a new domain to the registry (see Registering a new process domain
in the registry).
• Name – Short description of the process. It should represent the purpose of the
process and will be displayed in a few places in the EDA Configuration and
Operations consoles.
• Description - Long description of the process. Used in the process registry to
provide some more information to users about the purpose of the process.
• Process manager class and activity – Users should select the process manager class and the activity within that class that will handle the execution of the event. The system will list all classes extending from PegaFS-Data-EDA-Engine-ProcessManager and, once a class has been selected, all the activities within that class. If the required process manager is not in the list, see Creating a new process manager.
• Custom configuration settings – If the process manager requires custom configuration, this option should be checked and the class that implements it selected. The system will list all the classes that extend from PegaFS-Data-EDA-Configuration-Process and, once one is selected, the list of section rules available at that class. If the required configuration class is not present in the list, see Creating a new process manager configuration.
Once the new process has been registered, it automatically becomes available for its
selection during the registration of new event types in the event registry (see
Registering the event at the registry).
Registering a new process domain in the registry
To add a new domain to the registry, click Add domain and provide the requested information.
It is important to understand that the EDA engine can be deployed and used in many different manners.
Processing channel
As explained in the Solution overview, the EDA channel takes the event through a
process made up of multiple phases. The sections below describe briefly the
functionality performed in each of these phases and how to modify or extend their
behavior.
Validation phase
The validation phase starts with the identification of the event type and its
corresponding configuration in the event registry. The system extracts the event type
and subtype from the event and uses that pair as the key to find the event type in the
registry. If the event type or subtype is not present in the event, or they do not match any of the event types registered, the process fails, and the event is rejected.
Once the event type configuration is retrieved, the system reads the event's external identifier (ExternalID) and, if it is present and the event producer did not explicitly indicate otherwise (see parameter SkipExternalIDCheck), checks for any event in the event tracking table with the same external identifier. If there is one, the process fails and the event is rejected.
The system then tries to cast the event payload received into the payload class associated to the event type (see Creating the payload data model). If there is an error during this process (for example, if the payload received does not match the data model specified for the event payload class), the process fails, and the event is rejected.
Finally, the system checks if there is any validation data transform configured against
the event type (see Configuring the event validation logic). If there is one, it executes it.
If the event page has an EDA message after its execution, the system determines there
was a validation error and rejects the event.
Registration phase
Once the system has determined that the event data is complete and valid, it allocates
a unique identifier to the event and registers it within the event tracking table. This will
be used from that point on to track the progress and status of the event. If there is any
problem during this process, the event is rejected.
Enrichment phase
The system then checks if there is an enrichment data transform associated to the
event type in the registry (see Configuring the event enrichment logic). If there is any, it
is executed. If the execution finishes with an error, the status of the event is set to error
and the channel process jumps directly into the wrap-up phase. Otherwise, it persists
the new event data under the event tracking entry created at the previous phase.
Preparation phase
The next step is the elaboration of the list of processes that will run on the event. The default behavior is that the system obtains the list of processes from the event type registry (see Registering the event at the registry).
It should be noted that this mechanism is based on the static linkage between event types and processes that EDA provides through the event and process registries. There could be, however, some more complex use cases where the determination of that linkage must be carried out dynamically at the reception of the event.
For example, an organization opts for having two instances of Pega Client Lifecycle Management in the same environment: one to manage the business in Australia and the other for New Zealand. There is a single event producer, and the EDA engine needs to determine on the fly whether an event needs to be routed to the instance associated with Australia or to the one for New Zealand; the determination needs to be made based on where the customer was initially onboarded.
This dynamic logic cannot be configured using the standard event and process
registries. For that kind of scenario, the system includes the extension point
EDABuildProcessesList_Ext, which has access to all the event data available and can
dynamically elaborate the list of processes and therefore domains where the events
need to be processed.
Another important step in this phase is the classification of the processes into those that will be executed in the later Processing phase of this same channel run and those that will be dispatched for later execution under a different domain. The distinction is made based on the domain associated to the process in the process registry (see Registering a new process in the registry). If the domain points to the same access group that is being used to execute the current EDA channel, the process is considered local and will be executed at the Processing phase. Otherwise, the process is flagged as queued, and it will be executed under a different domain after the event is queued in the next Dispatching phase.
Dispatching phase
At this phase, the system iterates through the list of processes associated to the event and determines whether any of them were flagged in the previous phase as queued. If so, it prepares a list of domains that the event needs to be dispatched to.
It should be noted that if there are multiple processes configured against the same domain, the domain is included in the list only once, so that the event is not dispatched twice. For example, if, in an EDA channel running in domain A, an event is associated with three processes, two of them configured to run under domain B and a third one configured to run under domain C, the system will determine that the event needs to be dispatched to domains B and C.
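A minimal sketch of this de-duplication logic, with the class and method names assumed for illustration:

    // One dispatch per distinct remote domain, no matter how many processes
    // point at it.
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    public class Dispatcher {
        public static Set<String> domainsToDispatch(List<String> processDomains,
                                                    String localDomain) {
            Set<String> targets = new LinkedHashSet<>();
            for (String domain : processDomains) {
                if (!domain.equals(localDomain)) {
                    targets.add(domain); // set semantics: each domain only once
                }
            }
            return targets;
        }

        public static void main(String[] args) {
            // Two processes on domain B and one on domain C, channel running in A.
            System.out.println(domainsToDispatch(List.of("B", "B", "C"), "A")); // [B, C]
        }
    }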
If the event does not need to be dispatched to any other domain, the phase is finished, and the system moves into the next one. Otherwise, the event tracking entry is queued once per domain in the list elaborated before. The queue that the event is sent to is EDAProcessEvent, the same one that is used for the direct injection of messages (see Configuring the queue-processors).
Already under the new domain (access group), the queue processor will pick up the event tracking entry and take it through the EDA channel again. However, as the event has already completed the first phases of the channel, the channel will push the event directly into the Processing phase, where the processes intended for the new domain will be executed.
This dispatching mechanism brings significant flexibility in the way the EDA engine can
be deployed and process events. It brings some important factors as well that should
be considered when using it:
• Multiple processes can be run in parallel for the same event under different
access groups. This means that, once the dispatcher phase has been completed,
the access to the event tracking table is not safe anymore and should always be
done using a mutual exclusion area (implemented in the application by a lock on
the event tracking identifier).
• The execution of processes in other domains will bring a certain delay to give time
to the queue processor to pick the dispatched events. By default, the system will
queue them for immediate execution. There is a Dynamic System Setting that can
be used to introduce a certain delay and to give some time to the main channel to
complete before the process in other domains is started (see Settings).
• The final status of the event cannot be known until all the processes in all the
domains have been completed. This means that the status should be re-evaluated
at the end of the channel process of each of the domains where the event has
been executed (see Wrap-up phase).
Processing phase
The next step for the channel is the execution of the processes considered local. For
each of them, the system loads the configuration from the process registry, instantiates
the associated process manager (see Creating a new process manager), and executes
its main activity. The execution will result in three possible scenarios:
• If during the execution of the process, there is any error detected, the status of
the process tracking entry associated to the process is set to error and the system
moves into the next process.
• If there is no error in the execution, but the process requested its re-execution
(see Retry parameter at Creating a process manager activity), the system evaluates
if the retry request should be accepted based on the maximum number of
attempts associated to the process. The number of attempts is set at the
Preparation phase using a Dynamic System Setting (see Settings), and it is
persisted with the process tracking entry.
If the number of attempts has not been exhausted, the event is queued for the
same domain using the standard queues (see Configuring the queue-processors).
If the process established a waiting interval along with the request, it will be used. Otherwise, a default waiting interval will be used as established at the system settings (see Settings, and the sketch after this list).
If the number of attempts has been exhausted, the process status is set to error and the system moves on to the next process.
• If no error occurred and no retry request has been received, the process is
considered completed and the system moves into the next process in the list.
After all processes have been executed, the system moves into the next phase.
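A minimal sketch of the retry decision described in the second scenario above. The delay default mirrors the 30-second default of D_AppSettings.EDAProcessRetryDelay documented under Settings; the maximum number of attempts is an illustrative value, not the shipped default.

    // Sketch of the retry decision at the Processing phase.
    public class RetryPolicy {
        static final int MAX_ATTEMPTS = 3;        // illustrative; see D_AppSettings.EDAProcessRetryAttempts
        static final int DEFAULT_DELAY_SECS = 30; // see D_AppSettings.EDAProcessRetryDelay

        public static String onRetryRequested(int attemptsSoFar, Integer requestedDelaySecs) {
            if (attemptsSoFar < MAX_ATTEMPTS) {
                int delay = (requestedDelaySecs != null) ? requestedDelaySecs : DEFAULT_DELAY_SECS;
                return "REQUEUE after " + delay + "s"; // event re-queued for the same domain
            }
            return "ERROR"; // attempts exhausted; the engine moves to the next process
        }
    }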
Wrap-up phase
During this phase, the system consolidates the event status based on the status of the associated processes; the consolidated status can take values such as Success, Partial Success, or Error.
Along with the consolidated status, the system also calculates the elapsed time to process the event, the processing time of the last channel run, and other metrics that can be accessed through the EDA Operations console.
Administration
The administration of the EDA engine is carried out through two consoles, the EDA Operations console and the EDA Configuration console, which are described in the following sections.
• Operations
• Configuration
Operations
The EDA engine exposes some of its data and actions through the EDA Operations console (see Configure > Financial Services > Event Management > Operations). The console has three different tabs, Bundles, Dashboard, and Events, which provide different information.
Bundles
The Bundle tab of the EDA Operations console gives access to all the bundles generated
by EDA processes that implement this feature.
Given the high number of bundles expected in a regular system, the list of bundles is
pre-filtered using some pre-defined date ranges. The ranges available are: Last day, Last
3 days, Last 7 days and Custom range. By default, the landing page shows the bundles
received during the last day, but this default behavior can be changed using Dynamic
System Settings (see Settings).
At any time, users can change the date range used to filter the bundles. However, the use of wide date ranges in systems with a high volume of bundles can present performance issues. Narrowing down the date ranges as much as possible is highly recommended. If there are concerns about the infrastructure supporting certain volumes, imposing a restriction on the date ranges should be considered.
Each of the rows in the table displays information about the status of one bundle, the
domain under which it should be executed, its type and subtype and other related
properties.
It also contains information about the dependencies that the bundle needs to observe before it is executed (see Dependency Type and Dependency Value). Three different scenarios are currently supported:
• No dependency – The bundle does not have any dependency and can be executed at any time. The property Dependency Type has no value.
• Time – The bundle is waiting until a certain point in time to be executed. Under this scenario, the property Dependency Type has a value of Time and the property Dependency Value contains the point in time until which the bundle will wait before executing.
• Customer subscription – The bundle is waiting for all the cases associated to a given customer to finish before it can be processed. Under this scenario, the property Dependency Type has a value of Subscription and the property Dependency Value contains the unique identifier of the customer.
The EDA engine does not implement any dependency management mechanism by
itself. It provides the data structures required for the processes to implement these
dependencies. For example, the above scenarios (Time and Customer subscription) are
taken from the Perpetual KYC process module implemented on EDA as part of Pega
Client Lifecycle Management, which sets and reads these two properties to implement its
bundling functionality.
Dashboard
This tab of the EDA Operations console implements a demo dashboard to show the
kind of data and reports that can be produced from the data managed and stored by
EDA.
Given the high volumes expected in many of the implementations and the complexity of the database configuration required for an effective retrieval of the data, the dashboard is shipped as a demo asset. It is therefore necessary to carry out a proper performance assessment under the specific conditions of the implementation.
It is for this reason and to avoid undesired incidents in production environments that
the Dashboard tab is hidden by default. Organizations can easily make it visible by
overriding the when rule EDAShowDashboardAdministrationCharts.
Events
The Events tab of the EDA Operations console gives access to all the events that have been received in the system. It provides status information and shows the processes that were executed on them.
Given the high number of events expected in a regular system, the list of events is pre-
filtered using some pre-defined date ranges. The ranges available are: Last day, Last 3
days, Last 7 days and Custom range. By default, the landing page shows the events
received during the last day, but this default behavior can be changed using Dynamic
System Settings (see Settings).
At any time, users can change the date range used to filter the events. However, the use of wide date ranges in systems with a high volume of events can present performance issues. Narrowing down the date ranges as much as possible is highly recommended. If there are concerns about the infrastructure supporting certain volumes, imposing a restriction on the date ranges should be considered.
As in many other Pega tables, users can sort by different columns, filter the results,
show or hide columns, group records and save a certain view definition to be used in
future.
For each of the events, the system presents a list of actions that can be accessed through the icon at the very right of each event record. The actions in the list are enabled or disabled based on the specific status of each event.
View data
The view data action gives access to the details of the event tracking entry. The data presented contains some general event data (type, subtype, time of reception, reference key and value, and so on), as well as some processing details (status, phase, execution times, elapsed times, and so on).
The action also gives access in its second tab to the payload of the event, both using the
section view associated to the event (see Configuring the visualization of the event
payload), and to its XML payload.
The XML returned when the user clicks View XML is that of the event tracking entry, which is an enriched version of the original event that was received. The XML will contain the original event as received from the producer, plus the data that might have been added during enrichment, plus all the control properties that the EDA engine has added.
View process(es)
This action provides a view of the processes that have been executed on the event. It
contains some basic processing information (for example, the domain under which the
process was executed, the number of attempts made, the maximum number of
attempts allowed, execution times and, of course, the overall status), along with the
actual outcome of the process (in the example below, a case - see Creating a process
manager activity for more details about possible outcomes).
Of all the data on the screen, there is a pair of properties that deserves special mention: Dependency Type and Dependency Value. These two properties determine whether the process can be executed straight away or whether there is some sort of dependency that needs to be observed. For example, in those processes that implement bundling, some events cannot be executed straight away at their reception. They must wait until the appropriate bundle picks them up and processes them along with other related events. In those cases, the Dependency Type property can be set to Bundle and the Dependency Value to the bundle identifier.
It should be noted, though, that the EDA engine does not implement any dependency
management mechanism by itself. It just provides the data structures required for the
processes to implement these dependencies. The above example (Bundle) is taken from
the Perpetual KYC process module implemented on EDA as part of Pega Client Lifecycle
Management, which sets and reads these two properties to implement its bundling
functionality. Other processes might use different values to implement bundling or
other functionality that involves the management of dependencies.
View XML
This action shows the XML payload of the event tracking record, which is an enriched version of the original event that was received. The XML will contain the original event as received from the producer, plus the data that might have been added during enrichment, plus all the control properties that the EDA engine has added. This XML is also available from the View data action.
Re-queue failed process(es)
This action is available for events with a status of Error or Partial Success. It gives users the ability to retry the process from the point where it failed. The system re-queues the event into the EDA channel, which takes up the process from the last phase that was executed. For example, if an event failed during the enrichment process, the system skips the first two phases of the EDA channel (Validation and Registration) and goes directly into Enrichment to continue the process from there. If the event failed at the Processing phase, the EDA channel goes directly to that phase and retries the execution of those processes that failed (those that succeeded will not be executed again).
This action is available for events with a status of Error or Partial Success, and its purpose is very similar to that of Re-queue failed process(es). The only difference is
that in this action the system does not re-queue the event into the EDA channel.
Instead, it reprocesses the event straight away under the requestor and session of the
user launching the action. This gives users the ability to trace the execution of the EDA
channel and find some errors that might not be visible at the console.
It is the responsibility of the user launching the action to ensure that the access group
being used has full access to all the processes associated to an event and has all the
required privileges. Otherwise, the event can fail due to a lack of an appropriate
security context instead of the original reasons.
By default, the system does not show this option in production environments (systems with production level 5). Organizations can modify the default behavior to make the action available in production environments and/or to restrict access to only certain users by updating the when rule IsEDAAdvancedOperationsEnabled.
Re-inject as new
There might be situations where an event was processed and, for some reason, it needs to be processed again from the very beginning. For example, the enrichment process
that took place at the reception of the event did not retrieve the right data or a process
that was executed was successful but did not produce the expected results. In those
situations, administrators might have to re-inject the event as if it was a completely new
one reaching the EDA channel. This action reinitializes the EDA control properties of the
event and puts the event in the queue again for its subsequent process.
However, there are some risks in the use of this action. The first one is that the system
will automatically disable the external identifier check to allow for a second execution. If
the external identifier check was the reason for a first failure, the re-injection will be
somehow bypassing that logic and letting duplicates into the application. The second
risk is that a second processing of an event might have undesired consequences. The
processes associated to the event might put the application at risk if the process is
unable to manage the second execution of the event. It is the responsibility of the user
launching the action to ensure that the re-injection will not have an adverse impact on
the application.
By default, the system does not show this option in production environments (systems with production level 5). Organizations can modify the default behavior to make the action available in production environments and/or to restrict access to only certain users by updating the when rule IsEDAAdvancedOperationsEnabled.
Configuration
The EDA engine can be configured at the EDA Configuration console (see Configure >
Financial Services > Event Management > Configuration). The configuration is
organized in different tabs according to their nature. The following sections describe each of them.
Domain registry
The domain registry shows all the domains registered at the system for their use by the
EDA engine. For more details about the role of domains in the system and the different
values that need to be provided at their creation, see Configuring processes and
Registering a new domain at the registry.
Process registry
The process registry gives access to all the processes registered at the EDA engine. For
more details about the role of processes in the system and the different values that
need to be provided at their creation, see Configuring processes and Registering a new
process at the registry.
Event registry
The event registry gives access to all the events that have been registered with the EDA
engine. For more details about the role of events in the system and the different values
that need to be provided at their creation, see Configuring events and Registering the
event at the registry.
Process configuration
Some EDA processes can be configured to have custom configuration (see Creating a new process manager configuration). The system displays under this tab all the processes configured to have a custom configuration and embeds here the custom configuration user interface specifically defined to manage it.
General configurations
This tab provides access to two main types of settings: logging categories and miscellaneous settings.
Logging categories
The EDA engine is made up of multiple background processes that are sophisticated in
nature. In order to facilitate tracking and troubleshooting any issues, the activities and
rules that make up the core of the engine have been configured to produce a significant
number of entries in the log. The entries provide information about when the events
are received, their main attributes, the key decisions taken by the EDA engine and other
similar information.
This logging information can be significant, so by default the engine logging is disabled. If the need for engine logging arises, it can be activated by increasing the level of the engine's logging categories to INFO.
In addition to these logging categories, different processes built on the EDA engine might have their own categories. For example, the Perpetual KYC process built on EDA and shipped with Pega Client Lifecycle Management comes with a logging category CLM.PKYC that can be activated as required.
Miscellaneous settings
The miscellaneous settings area of the tab gives access to the different Dynamic System
Settings that can be used to modify the default behavior of the EDA engine. The
settings that are currently supported are the following:
• Minimum execution delay time (mins) – For processes that implement bundles, the minimum time in minutes that new bundles will need to wait until they can be executed. By default, 2 minutes. Data-page: D_AppSettings.EDABundleMinDelay
• Maximum process retry attempts – Maximum number of times that the execution of a process can be attempted when retries are requested (see Processing phase). Data-page: D_AppSettings.EDAProcessRetryAttempts
• Wait time to retry for lock (secs) – Number of seconds that the EDA engine will wait for the re-execution of a process that requested a retry, if no waiting period was specified in the request (see Creating a process manager activity). By default, 30 seconds. Data-page: D_AppSettings.EDAProcessRetryDelay
• Wait time to execute after dispatch – Number of seconds that the EDA engine will wait until an event that has been dispatched to another domain is processed (see Dispatching phase). By default, 5 seconds. Data-page: D_AppSettings.EDAProcessDispatchDelay
• Retry attempts while accessing mutual exclusion areas – Maximum number of retries to access a mutual exclusion area (see MutexStart). Data-page: D_AppSettings.EDAMutexAttempts
• Retry interval while accessing mutual exclusion areas – Number of milliseconds that the engine will wait between retries to access a mutual exclusion area (see MutexStart). By default, 200 milliseconds. Data-page: D_AppSettings.EDAMutexInterval
• Retry attempts while accessing event tracking mutual exclusion areas – Maximum number of retries to access the mutual exclusion area associated to an event tracking entry (see EDAComputeEventStatus). Data-page: D_AppSettings.EDAMutexEventAttempts
• Retry interval while accessing event tracking mutual exclusion areas – Number of milliseconds that the engine will wait between retries to access the mutual exclusion area associated to an event tracking entry (see EDAComputeEventStatus). By default, 200 milliseconds. Data-page: D_AppSettings.EDAMutexEventInterval
• Retry attempts while accessing customer mutual exclusion areas – Maximum number of retries to access the mutual exclusion area associated to a customer (see EDACustomerMutexStart). Data-page: D_AppSettings.EDAMutexCustomerAttempts
• Retry interval while accessing customer mutual exclusion areas – Number of milliseconds that the engine will wait between retries to access the mutual exclusion area associated to a customer (see EDACustomerMutexStart). By default, 300 milliseconds. Data-page: D_AppSettings.EDAMutexCustomerInterval
• Default time window size for events at EDA Operations console – Size of the date range used by default to filter events in the EDA Operations console. By default, 1 day. Data-page: D_AppSettings.EDAOperationsEventsFilterDateType
• Default time window size for bundles at EDA Operations console – Size of the date range used by default to filter bundles in the EDA Operations console. By default, 1 day. Data-page: D_AppSettings.EDAOperationsBundlesFilterDateType
As all the settings are loaded and accessed through the D_AppSettings data-page,
organizations that want to use different settings under different applications or
contexts can easily customize the loading process to do so (see data transform
AppExtension_EDASettings).
Implementation
Implementing a process
The sections dedicated to the Event Driven Architecture describe the main data entities and processes that the EDA engine manages. They also provide details about their default behaviors and the different ways to configure and extend the functionality.
The implementation of a new process on top of the EDA engine follows a high-level plan that touches the main configuration elements described throughout this document: creating the process manager class and activity, registering the domain and the process in their registries, creating the event payload class with its validation and enrichment logic, and registering the event type. Many additional steps will depend on specific business needs and are described in the corresponding sections.
Reference
Tracking tables
There are three main data entities managed and tracked by the EDA engine: events,
processes, and bundles. Each of them has its own dedicated database table and data
model. The following sections include the details about their implementation.
Events are tracked in the system using instances of the class PegaFS-Data-EDA-Tracking-Event, which is configured to point to the database table eda_tracking_event.
Processes are tracked in the system using instances of the class PegaFS-Data-EDA-Tracking-Process, which is configured to point to the database table eda_tracking_process.
Bundles are tracked in the system using instances of the class PegaFS-Data-EDA-Tracking-Bundle, which is configured to point to the database table eda_tracking_bundle.
Data-pages
The EDA engine provides multiple data-pages to access the different data entities
managed by the application. Many of them will be used internally by the application,
while others might be required by EDA processes implemented on the engine.
The following section lists the most important data-pages by data entity and module, and indicates whether they are expected to be used exclusively within the engine or whether EDA processes might need them to implement their functionality.
• Event tracking – Data-pages on the class PegaFS-Data-EDA-Tracking-Event (for example, D_EDAGetEventDataByEventID), used by the EDA channel and by processes that need access to the event data.
• Process tracking – Data-pages on the class PegaFS-Data-EDA-Tracking-Process (for example, D_EDASaveEventProcessTrackingRecord), used by the EDA channel and by process managers to update the process status.
• Bundle tracking – Data-pages on the class PegaFS-Data-EDA-Tracking-Bundle, used by processes that implement bundling.
• Operations console – Data-pages on the classes PegaFS-Data-EDA-Data-Operations-Search-Events and PegaFS-Data-EDA-Data-Operations-Search-Bundles, used to search for events and bundles in the EDA Operations console.
Settings
Settings are loaded and accessed through the D_AppSettings data-page (see Miscellaneous settings and the data transform AppExtension_EDASettings).
Registries
The event, domain, and process registries are accessed from both the EDA Configuration console and the EDA channel through data-pages implemented on the classes PegaFS-Data-EDA-Config-Storage-DomainRegistry, PegaFS-Data-EDA-Config-Storage-ProcessRegistry, and PegaFS-Data-EDA-Config-Storage-EventsRegistry.
EDA Message Library
The EDA engine maintains its own specific set of codes and messages to describe the different situations that an event can go through. They are used throughout the EDA engine to implement effective error handling. Events and processes built on the EDA engine need to make use of them to flag error situations that need to be managed by the engine and, if required, propagated back to the event producers.
The codes and messages are maintained in two decision tables. The first one, EDAMessageLibrary, contains the messages that the EDA engine produces. The second one, EDAMessageLibrary_Ext, is an extension point given to applications to register their own message codes that can be invoked, for example, from their event validation logic.
Each decision table takes a code as an input and generates three pieces of information
that are added as a message page: the input message code, the level of the message
(for example, ERR) and a text describing the message.
When a piece of logic needs to add a message to a page (for example, validation logic on an event), it invokes the data transform EDAMessageLibrary and passes the parameter code with the code of the message to be added. The data transform first checks if the code is available in the decision table EDAMessageLibrary. If it is there, it adds the message to the page and exits. If the code is not there, it then tries the EDAMessageLibrary_Ext decision table.
The Customer Master Profile (CMP) holds the customer data maintained by the application and can be extended to potentially store a copy of external Customer System of Record (SoR) data for efficient maintenance, avoiding constant interaction with external systems. The following sections cover its lifecycle:
• Creation
• Storage
• Retrieval
• Persistence
Creation
Every customer that has gone through any CLM journey will have an instance of the
Master Profile uniquely identified by a Master Profile ID (CMP-X) and linked to the
customer through Customer ID. The Master Profile of a customer is created the first
time a CLM journey is run for the customer in the application, regardless of whether the
customer is already known to the organization and exists in the Customer SoR. Once
the Customer Master Profile is created, it is used in the initialization of new journeys for
that customer as well as to automatically trigger the creation of journeys based on its
data (for example, periodic reviews).
Storage
The Master Profile is represented by the PegaFS-Data-Party-MasterProfile class and is stored in the pc_work_masterprofile table. By default, it contains a full copy of the customer data that is maintained at case level; it can be skimmed down to contain only the required driving data to minimize the storage footprint. All the information resides in a single data instance (BLOB), and some of the main data entities (for example, business relationships) are exposed via declare indexes.
Retrieval
The Master Profile of a customer can be retrieved using either of the two APIs described below. Both APIs rely on the customer ID, which is a unique identifier of a customer, to retrieve the CMP. The Pega Client Lifecycle Management product uses these APIs to retrieve the master profile during case processing.
D_GetMasterProfile
    A data page scoped at thread level that takes the customer ID as an input parameter and retrieves the corresponding master profile if it exists. This rule is also equipped to invoke the Master Profile caching mechanism described later in this document.
PegaFS-Work.GetMasterProfile
    A data transform that uses the customer ID stored on the case (pyCustomer) and passes it to the data page D_GetMasterProfile for the actual retrieval. Unlike the data page, this rule also performs additional steps, such as the synchronization of case data with the profile.
Persistence
Within the Pega Client Lifecycle Management product, most of the updates to a CMP are done through a case. The data is collected or modified as part of a case and is later synchronized to the CMP. The synchronization process with the master profile occurs at various stages throughout a case's lifecycle. This iterative approach ensures that data remains current and readily accessible for other cases and processes. The master profile update is performed through the following APIs, which not only persist the data but also update the retrieval cache to ensure that it is ready to serve subsequent accesses to the CMP. The API also invokes the data synchronization module described in the following sections.
Note: Though a CMP can be persisted like any other data instance using a savable data-page, it is recommended to always persist it through SaveWorkFolder, which ensures that the synchronization happens with all required attributes and that the retrieval cache is updated.
Business problem
In Client Lifecycle Management, synchronizing cases and the Master Profile involves
complex data processing. This synchronization operation involves intricate data
handling and multiple concurrent processes, such as recalculating risk scores and
assessing Anti-Money Laundering (AML) profiles. Frequent synchronizations can lead to
significant performance strain and data mapping complexities. This results in
undesirable performance overheads, impacting operational efficiency and user
satisfaction.
The Solution
To optimize the data processing during synchronization, the process has been
configured to accept a series of parameters. These settings allow applications to choose
which parts of the customer profile should be synchronized and which parts can be left
out from synchronization.
For example, if a user makes changes to data that is not considered by the application risk model, there is no need to trigger the risk and AML profile recalculation, and the synchronization of the risk can be disabled.
Note: If this parameter is not configured, the system executes the complete synchronization process.
Parameter Structure
The user has the option to initiate synchronization for either the entire category or
specific subcategories within the category. The supported sub-categories include:
Code Category
G General Data
C Contact Data
R Regulatory Data
P Product Data
F Financial Data
D Credit Data
L Legal Data
S Secondary Risk
Code Category
E External Risk
P Product Risk
O Occupation Risk
I Subindustry Risk
U Country Risk
Configuration details
• If you want a category to be processed, enter it in the CMPCategory page group of the parameter page. Otherwise, the logic associated with the category is not executed.
• If you want to execute a category fully, set its Mode property to Full. By default, the property has a value of Partial, and the subcategories to be considered are passed in the PartialDataSyncCoded property of the category.
For example, if an application wants to synchronize the General Client Data fully, it will
pass CMPParamsPage to SaveWorkFolder as follows:
CMPParamsPage
.CMPCategory(GCD)
.Mode=Full
If the application wants to execute only a few subcategories (for example, general data, contact data, and regulatory data), it passes them as follows:
CMPParamsPage
.CMPCategory(GCD)
.Mode=Partial
.PartialDataSyncCoded=G,C,R
Multiple categories can be combined in the same page. For example, to synchronize selected client-data subcategories together with selected risk subcategories:
CMPParamsPage
.CMPCategory(GCD)
.Mode=Partial
.PartialDataSyncCoded=G,C,R
.CMPCategory(RAP)
.Mode=Partial
.PartialDataSyncCoded=S,B,R,E
Categories can also be mixed with different modes. For example, to add a full synchronization of another category (REQ) to the previous configuration:
CMPParamsPage
.CMPCategory(GCD)
.Mode=Partial
.PartialDataSyncCoded=G,C,R
.CMPCategory(RAP)
.Mode=Partial
.PartialDataSyncCoded=S,B,R,E
.CMPCategory(REQ)
.Mode=Full
Pega Client Lifecycle Management makes use of this functionality during the initial steps of the main journeys. You can take PegaCLMFS-Work.PostCollectInitialData as a reference.
Synchronization process
Extensions
Your application can use this mechanism for synchronization. You can create customized categories and subcategories and manage them without registration, as long as their codes do not conflict with existing ones.
During the synchronization process, you can use when rules to determine whether a category or subcategory is active. If customization has been done in that part of the application, a proper impact assessment is required.
The new model keeps the interface with the rest of the application unaltered.
Customers are still expected to retrieve the Master Profile using GetMasterProfile (data
retrieval plus synchronization) and D_GetMasterProfile (data retrieval). The persistence of
the data is still expected through SaveWorkFolder.
The main changes are implemented when loading D_GetMasterProfile, a thread level
data page loaded in every interaction. The changes ensure that the master profile is
only loaded from the database when data has changed. Otherwise, it reuses
information already in memory.
D_GetMasterProfileCache
(Requestor) – Data page to store the last version of the master profile loaded or
saved for a given customer.
D_GetMasterProfileCommitTime
(Interaction) – Data page that returns the last time the master profile was
persisted into the database (the last time that changes were made).
To make the most of this mechanism and to automatically adopt it in future related
features, it is important that customers let the application manage the master profile
or, if required, access it using the existing interface as defined above (GetMasterProfile/
SaveWorkFolder).
To enable customers to find the right balance between memory footprint and performance in their applications, the Master Profile access mechanism supports two different working modes:
No cache
Under this working mode, D_GetMasterProfile bypasses the cache and retrieves the
data directly from the database. This working mode benefits customers with
powerful database infrastructure and with complex Master Profiles such that an
in-memory copy of the Master Profile from the cache takes longer than access to
the database.
Cached
Under this mode, D_GetMasterProfile checks first the cache and, if the customer
Master Profile requested is available, it retrieves it from there. This working mode
benefits customers with simple Master Profiles, where an in-memory copy is more
efficient than retrieval from a database. The application uses this configuration by
default.
To keep the growth of the cache under control, the application limits the number of Master Profiles that can be kept in the cache. By default, the application manages no more than ten Master Profiles, but this threshold can be changed through configuration.
The Pega GenAI customer summary helps users gather essential information, enhancing client onboarding, review processes, and overall value chain interactions. The summary includes the following sections:
◦ Basic information: Fundamental details about the customer, such as their
name, contact information, and any essential identifiers.
◦ Products: Various products or services associated with the customer. This
has details such as product type, booking jurisdiction, and booking entity.
◦ Ownership structure: It applies exclusively to entities, illustrating the primary
aspects of ownership and control within the organization.
◦ Risk: Overall risk profile of the customer, including country risk, business
risk, product risk, related party risk, secondary risk, relationship duration risk
and external data risk.
◦ Existing business relationships: Ownership and control, management, and
authorized representative roles of the customer in other relationships.
◦ Current journey details: A snapshot of the cases created in the current journey, along with the number of in-progress subcases.
To know more about Pega GenAI customer summary, see Generate Customer Summary
with Pega GenAI in the Product Overview.
To use this capability, you must activate the Pega GenAI feature. Activating Pega GenAI enables you to view the Generate Customer Summary icon on the Customer Master Profile (CMP) screen in CLMKYC. The journey snapshot includes the number of open cases; the following extension points let you include additional information in this snapshot.
You can download PDFs for further analysis and future reference.
Benefits of externalization
• Process of externalization
• Solution overview
Process of externalization
The externalization of third-party data allows organizations to strategically manage the
influx of external data during the customer lifecycle. It utilizes a dedicated table and a
declare trigger to seamlessly persist, retrieve, and synchronize external data for
informed decision-making.
This approach stores external data separately from the Master Profile, while keeping key aggregated properties within the profile for quick access.
Data storage
Data persistence
Once the master profile is saved, the declare trigger ExternalizeThirdPartyData manages
the process of moving third-party data to a dedicated table. Each response from the
service provider, identified by its execution timestamp, is stored as a distinct record in
the table. Upon successful externalization, the data is removed from the master profile,
which is persistently saved. To enhance efficiency, this logic is also invoked during the
retrieval of the customer master profile, ensuring externalization occurs both when
reading and writing data.
Data retrieval
Solution overview
When the externalization mechanism is active, the third-party data which gets
synchronized to the Customer Master Profile is externalized to a dedicated table
corresponding to PegaFS-Data-ExtProvider-Storage class. The table has a primary key
consisting of four different properties.
Data persistence
To accelerate the positive impact of the externalization, this logic is also triggered during the retrieval of the Customer Master Profile. That is to say, the externalization takes place both when reading and when writing the Customer Master Profile.
Data retrieval
IsThirdPartyDataExternalizationEnabled
    Setting that enables the externalization mechanism. You can override the AppExtension_ExtSettings extension to set the IsThirdPartyDataExternalizationEnabled flag using a different source.
The business service, provider, and execution timestamp are part of the key of the external storage table. While externalizing the data, the system expects the provider and the timestamp to be available on the page representing the third-party data. In scenarios where these values are missing, default values are set. The following extensions can be updated to set the default provider and execution timestamp based on the implementation.
The key of the external storage table consists of the following properties:
• CustomerID – Unique identifier of the customer.
• BusinessService – Type of business service externalized for the customer.
• Provider – Provider of the external data.
• ExecutionTimeStamp – Time at which the external data was retrieved.
The extension rules also receive MapFromPageRef, a reference to the page containing the third-party data.
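As an illustration, assuming PostgreSQL and a hypothetical physical table name for the PegaFS-Data-ExtProvider-Storage class (the table name, column names, and sample values below are assumptions; check the table mapped to the class in your environment), a record for a given customer and business service can be inspected as follows:

-- Hypothetical table and column names derived from the key properties above.
select *
from pegadata.pr_extprovider_storage
where customerid = 'CUST-100001'        -- assumed sample customer ID
  and businessservice = 'CreditCheck'   -- assumed sample business service
order by executiontimestamp desc;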
That third-party data also needs to be managed at retrieval time, when the Customer Master Profile APIs are used with the GetThirdPartyData parameter set to true. Use the following rules to retrieve externalized, implementation-specific third-party data.
Supporting functions
The topics in this section describe supporting functions, which enable you to search data, identify and track data objects, and check the detailed view of a case.
• Data traceability
You can use the Relationship Manager Portal to perform searches on cases in your
environment. You can add fields to the search results to help refine your search.
For information about how case searches are implemented, see Case searches.
1. In the header of Dev Studio, in the search text field, enter pyWorkSearch and
select the Work- report definition rule.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Click Add column, and enter the name of a custom property that you want to add,
for example, pyCustomerName.
4. Click Save.
1. In the header of Dev Studio, in the search text field, enter pysearch and select
the Work- custom search properties rule.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Select the custom property that you added, for the pyCustomerName instance, to
include it in the search results.
4. Click Save.
Include the custom property column in the results grid in section pyWorkSearchResults
as required to show in the search results screen.
1. In the header of Dev Studio, in the search text field, enter useDataInstances
and select the Pega-Searchengine Indexing/usedatainstances dynamic system
settings rule.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Enter the true value in the Value field.
4. Click Save.
You must re-index all work in order to see the new search fields.
1. In the header of Dev Studio, click Configure > System > Settings > Search.
2. Click the Re-index button for All work.
For more information about re-indexing, see Rebuilding a search index.
Data traceability
Financial Services institutions are subject to several regulations that vary between
jurisdictions. In addition to meeting these regulations globally, institutions need to
prove to regulatory auditors how specific decisions are made. A large volume of data
that drives those decisions is captured from various sources, such as internal
databases, customer self-service, manual entry by employees, and third-party systems.
The business needs to track where the data was initially captured and how it changes
over time.
Use the data traceability feature to identify and track data objects, and then configure auditable entries on a particular data object. The data change tracking engine scans for changes and saves them in an exposed, easily accessible data change repository.
For information about how data traceability is configured, see Data traceability.
To extend data traceability, complete the following Extending tracked security changes task: configure auditable entries on the particular data object that you want to track, for example, business goals.
The process of onboarding a new customer can vary depending on factors, such as the
customer type, their location, and the products they use. Increasing regulation in the
financial services industry adds pressure to understand the progress of ongoing work,
related information, and the main parties involved in onboarding activities. This
context-specific view of the overall parent case or separate units of related work must
provide relevant data to the user to help them continue their task or understand
blockers to progress.
For more information about how the Case summary feature is configured, see Case
summary.
The progress gadget shows the percentage of completion of the case and the status. By default, the indicator is red if the task is delayed and green if the task is on track. If you want to modify the colors, complete the following steps.
1. In the header of Dev Studio, in the search text field, enter simple-percentage-
chart and select the HTML or CSS file.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the CSS file, change the color properties to a color of your choice.
4. Click Save.
The progress gadget shows the percentage of completion of the case and the status.
You can configure at which percentage of completion a case is considered delayed.
The progress gadget shows the percentage of completion of the case and the status.
You can configure the conditions for when the deadline for completion has passed.
1. In the header of Dev Studio, in the search text field, enter SLADeadlineIsPast
and select the when rule.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. In the Advanced tab, modify the conditions of the when rule, as needed.
4. Click Save.
You can configure which data categories appear on the Case summary. To modify these
categories, edit the data transform and when rules for the categories.
If you use parallel fulfillment, the system creates the related sibling fulfillment cases for a jurisdiction when all due diligence activities for that jurisdiction are completed.
The related cases are displayed based on the ObjectType property, which equals the pzInsKey of the main Pega Client Lifecycle Management and KYC case.
If you want to change the condition on which related cases should be visible, modify
the data page and report definitions.
At the end of the Capture stage, the system creates a unique requirement case for
every requirement. You can view a list of pending requirements in the bottom-right
corner of the Case summary screen from the Enrich stage onwards.
The requirements that apply to the customer depend on the conditions configured in the requirement set rule. If you want to update the applicability of a requirement, complete the following steps.
You can use event-driven architecture to define and configure business events. Certain
events are processed automatically, while others require manual attention. When an
agent rule identifies an event, it immediately triggers the respective validation and
decision mechanisms to take appropriate actions.
Overriding agents
You must save the scheduling agents into your application layer in order to use them in
an event-driven architecture operation.
1. In the header of Dev Studio, expand System > Operations > Agent Management.
2. Filter the Queue Class to PegaCLMFS and save it into your ruleset.
3. Stop each agent.
4. Click on the name of an agent. The system opens an agent rule that you can copy
into your implementation ruleset for changes.
5. Review the configuration of the agent (frequency, access group, and so on) and
save the changes.
You can create new events for your financial institution. In this example, the documents
provided by the customer during onboarding are set to expire, and the financial
institution has to ask the customer to provide the latest documents to adhere to due
diligence processes.
1. Create a new property to store the valid end date of the uploaded document.
For example, call the property validenddate. Place this property on the document collection UI screen. If the collected documents are stored in a content management solution (such as Alfresco), ensure that while accessing the customer documents, the validenddate property is also brought into the journey.
2. Create an event code and event type for the document review case and map it in
the MapFSEvents map value.
3. Create an advanced agent which runs at required intervals.
These intervals are determined by the duration in which the documents are valid.
Use the CreateCustomerReviewEvents agent rule as a basis for this rule.
4. Create an activity that retrieves the customer profiles whose documents are
expired. Link this activity to the advanced agent in step 3, which calls it
periodically.
5. Create a report definition that retrieves the customers whose documents are about to expire, based on the validenddate property.
6. Create an activity to check if a case has already been created for customers with
documents that are about to expire.
7. Create an activity which creates a document review case and add this activity to
the FSIFEventDrivenProcess decision table.
For processes where the waiting period is excessive, the system uses asynchronous processing. This gives control back to users as soon as they take action on the case, so that they can keep working on other cases while the system processes the slower case. The system works asynchronously until human intervention is required or the case is resolved.
What to do next:
If you do not want all of the default asynchronous processes to run, you can specify
which individual processes you want to run asynchronously. For example, you can
decide that only two of them should be enabled from the processes configured by
default, while others must remain running in the foreground as they did in previous
versions of the product.
The configuration is stored in a decision table that stores all the trigger points
(processes) where asynchronous processing is used.
1. In the header of Dev Studio, in the search text field, search for the
AsyncProcessingSwitchTriggerPoint decision table rule.
2. In the row corresponding to the trigger point, enable or disable a certain trigger
point by setting the value to true (enabled) or false (disabled).
By default, all processes are enabled. If you set clmfs/AsyncProcessingSwitch to false, the configuration of this table is irrelevant, and no process runs asynchronously. The following asynchronous processes are enabled by default.
CaptureToEnrich
Navigates from the Capture stage to the Enrich stage
EnrichToDuediligence
Navigates from the Enrich stage to the Due diligence stage
It includes the synchronization with the system of record and the generation
of all the Global KYC, Regulatory, Tax, Credit, and Legal due diligence cases,
including their automated solution, if applicable.
LKYCCases
Creates LKYC cases after the completion of the initial stages of Global KYC
cases (Related party Global KYC and Customer investigation)
1. In the header of Dev Studio, in the search box, search for the AsyncProcessing or
AsyncProcessingDelayed rules and select the queue processor.
2. Save the rule into your implementation layer by clicking Save as.
For more information, see Copying rule or data instance.
3. Make the necessary changes.
4. Click Save.
5. To ensure that the system recognizes the new configuration and that the new System Context mechanism considers your application, complete the following steps.
a. Open the application model by clicking <application name> > Definition.
b. Expand the Advanced section and ensure that Include in background
processing is selected.
1. In Dev Studio, add the AsyncProcessing sub-flow to the flow of the stage to which
you want to add asynchronous processing.
2. Double-click the AsyncProcessing flow shape, and name the trigger point in its
parameters.
3. Search for the AsyncProcessingSwitchTriggerPoint decision table, add the new
trigger point, and set it to true to enable it.
4. Search for the AsyncProcessingConfig decision table to configure the number of retry attempts, how often to retry, and so on.
What to do next:
Configure the following rules to modify the behavior of both default processes and
custom processes.
AsyncProcessing
The flow is used to run a process in the background using agents and queue
processors. It must be invoked as a sub-flow, which creates an event and waits at
an assignment shape, and proceeds in the background with the help of an agent.
The flow requires a TriggerPoint parameter, which needs to be passed when the
flow is invoked. This trigger point is used to capture the location where the flow is
invoked. The same parameter needs to be configured in the
AsyncProcessingSwitchTriggerPoint decision table. While customers are expected to
add this flow into their own processes, it is not expected that the flow itself is
modified to obtain custom behaviors.
AsyncProcessingSwitchTriggerPoint
The decision table is used for capturing all trigger points where asynchronous
processing is used. When the AsyncProcessing flow is invoked in each stage, it
passes a parameter from this decision table.
AsyncProcessingPreSetValues
An extension data transform can be used to set values on the case before it is
queued for asynchronous execution.
AsyncProcessingPostSetValues
An extension data transform can be used to set values on the case after it is
queued for asynchronous execution.
For more information about the core capabilities, see Know Your Customer Engine.
Know Your Customer for Financial Services 8.2, which is provided with this product, contains some significant changes to the way customer policy profiles are stored. These policy profiles include the KYC questions and answers collected from KYC cases, which are stored for audit and reuse purposes.
In previous versions of the application, customer KYC data was stored in their master
profile. You can now store the KYC Types associated with a customer in a new policy
profile repository. By default, the system uses the new policy profile to store the KYC
data of new customers. Existing customers' KYC data stored in their master profile are
automatically migrated into the new repository the next time their master profile is
saved. This migration process automatically transfers all KYC data into the new
repository.
This change significantly reduces the footprint of the master profile and improves the
overall performance of the application. However, implementations with custom master
profile functionality need to assess the impact of the changes and act accordingly.
For more details about how to configure this functionality, including how to disable it
and restore the legacy mode, see the Configuring the policy profile section of the Pega
Know Your Customer for Financial Services Implementation Guide.
Pega Client Lifecycle Management and KYC is required to keep the KYC cases updated with the latest KYC policies so that the application can remain compliant with regulatory rules. Pega Client Lifecycle Management and KYC uses the Surgical Policy Update (SPU) in both background and manual update mode, and thus ensures that in-flight cases receive changes to regulatory policies.
When a policy update is applied to a KYC case, an in-flight case in the review stage
might be rendered incomplete due to the application of new or updated policies. These
cases must be reverted to the data collection assignments for the KYC operator to enter
the required information. Restart the in-flight case to ensure that the STP mode
determines the correct assignment for the case. See Case navigation to learn more
about the STP mode.
When a policy is updated on a case, the assignee is shown a message on both the
confirmation and the data collection screens to ensure that the assignee is informed of
the update and can perform the next actions as required.
Approved and resolved KYC policies are isolated from subsequent changes in the rules by the KYC Engine policy-freezing mechanism. Thus, resolved cases and the master profiles present the KYC types in the form in which they were approved in the case.
For more information about Surgical policy updates, see the Surgical policy update
topic in the Pega Know Your Customer for Financial Services Implementation Guide.
1. Ensure that you have a copy of the CLM and KYC case types in your
implementation layer rulesets.
Result:
When two users access the same assignment in the same case simultaneously,
and the first user completes the assignment, the second user receives a
message saying that the case has been modified. The system, however, leaves
the second user in the original assignment instead of checking the validity of the
assignment. It directs the second user to a review harness with a message
stating that someone else already performed the assignment.
When a background process, such as the one that manages service-level agreements or correspondence, takes control of the case for a few milliseconds to update it (for example, to increase the urgency), users that have the assignment open get an error saying that the case was modified and needs to be refreshed to regain control of it. Refreshing the case may result in the loss of some data entered in that assignment.
The access to products relies on two additional data types: Product matrix inclusions
(listing rules for product eligibility per jurisdiction) and Product matrix exclusions (listing
exceptions to these rules). To determine the products displayed for a user, the
application checks the user's jurisdiction, determined by the organization unit
configured at the operator ID record, against the rules in these data types. However, in
some cases, financial institutions may wish to override the jurisdiction-based product
access rules for certain users. There are two ways to implement this change to the
default behavior.
• Grant access using a role: Organizations can grant access across jurisdictions by
allocating a certain role to the user groups with the specific requirement.
1. To grant access using a role, click Records in the navigation pane of Dev Studio.
2. Expand the Security category, and then click Access Group.
3. In the Access Group name column, select the access group for which multi-
jurisdiction products are required and click to open the access group.
4. In the Available roles section of the access group, click the Add role link, add the PegaCLMFS:MultiJurisdictions role, and then click Save.
Pega Client Lifecycle Management and KYC generates two types of PDF documents.
Document styling
By default, Pega applications apply the UI styling (CSS) during the generation of PDF
documents using either the “Create PDF smart shape” or HTMLtoPDF API. However, the
UI styling is usually heavy and might introduce performance issues. To accelerate the
creation of PDF files and improve rendering, the latest versions of Pega Client Lifecycle
Management and KYC use compact styling. Compact styling offloads heavy application
CSS and uses a simple CSS file with minimum styling applicable only to the context in
the document.
If you modify your application to generate your own PDF documents, you can also enable the use of compact styling.
Application performance
To ensure optimal performance under demanding load and volume conditions, Pega Client Lifecycle Management and KYC comes with database indexes on the relevant tables and columns.
Database indexes
Certain database indexes cannot be automatically created in all environments and database vendors. The following table lists database indexes that address common performance-related issues that you might experience, depending on your environment, load, and use of the system.
associateinfo, pxobjclass
    Index on the associateinfo and pxobjclass columns, created with the varchar_pattern_ops option.
pxinsname, pxobjclass
    Index on the pxinsname and pxobjclass columns, created with the varchar_pattern_ops option.
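As an illustration only, the following statements show how such indexes can be created on PostgreSQL. The index names and the target table are assumptions (the work attachments table pc_data_workattach is used here as an example); adjust them to the table flagged in your own performance analysis.

-- Illustrative sketch: index names and target table are assumptions.
-- varchar_pattern_ops lets PostgreSQL use the index for LIKE 'prefix%' filters.
create index idx_workattach_associateinfo
    on pegadata.pc_data_workattach (associateinfo varchar_pattern_ops, pxobjclass);
create index idx_workattach_insname
    on pegadata.pc_data_workattach (pxinsname varchar_pattern_ops, pxobjclass);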
Determine which items from the following list apply to you. For each of them, open the equivalent class in your implementation layer (the examples below use UPlus) and register them in the External Mapping tab.
Reference
The topics in this section provide reference to the list of journey types and subtypes
supported out-of-the-box and dynamic system setting rules.
• Background processes
For more information about the functional behavior of these journeys, see the Product
Overview available on Pega Client Lifecycle Management for Financial Services product
page.
• PegaCLMFS-Work-CLM-MaintainExistingCust
• PegaCLMFS-Work-CLM-CustDueDiligence
• Reactivate group / ReactivateGroup
Async processors
The following dynamic system settings in the PegaCLMFS ruleset control asynchronous processing and related case-creation behavior:
• clmfs/EnableAsynchronousProcess
• clmfs/AsyncProcessingSwitch – when set to false, the trigger-point configuration is ignored and no process runs asynchronously.
• clmfs/RetryOnLock
• fs/FSIFTrackingFunctionalityEnabled
• clmfs/ret/OnlineBranch
• fs/EventLeadDays
• fs/DocumentExpiredWithin
• fs/NextReviewWithin
• LicensedToUseEScreening
• clmfs/bSpinoffFulfillment
• clmfs/SpinoffFulfillmentBy
• clmfs/EnableSpinoffFulfillment
• KYCTypeRootClass
• EnableDueDiligenceIndexing
KYC Engine (ruleset PegaKYC)
• RegulatoryImportInProgress – used while a regulatory import is in progress; set to false on completion.
• yearsKYCvalid/highriskcustomers – number of years for which KYC remains valid for high-risk customers, used when creating review cases.
KYC/DD (ruleset PegaCLMFS)
• clmfs/LicensedToUseKYC – only licensed users can use this functionality.
• clmfs/SpinoffRegulatoryDDCasesForFunds
• clmfs/CreateAllLocalRegulatoryUpfront
• PegaCLMFS/SkipValidationForDemo
• clmfs/SpinoffTaxCasesForFunds
• clmfs/bEnableMaterialChangeAssessment
• clmfs/OECDResidenceCountry
KYC/DD (ruleset PegaKYCFS)
• GenerateDueDiligencePDF – controls the generation of a Due Diligence PDF for each KYC Type after case completion.
• yearsKYCvalid/lowriskcustomers – number of years for which KYC remains valid for low-risk customers.
• RPNewGeneration/UseQuickDBAccessForRelNetFetching
• RPNewGeneration/UseQuickDBAccessForCustomerData
• RPNewGeneration/maxDepthRelNet
• RPNewGeneration/enableNewVisualizer – enables the new relationship network visualizer.
Requirements (ruleset PegaCLMFS)
• Requirements/ActiveGeneration – determines which generation of the requirements functionality the application uses.
• Requirements/LastGeneration
• Requirements/DMS/OutboundAsyncModeEnabled – determines whether the synchronization with the document library happens synchronously or asynchronously.
• Requirements/DMS/OutboundSynchronizationEnabled – controls the synchronization of documents with the requirements library; the default value is false.
• Requirements/LocatorModifierUseDefault – determines the locator modifier that the system uses when processing requirements.
Requirements (ruleset PegaRequirements)
• Requirements/SkipValidateStage – skips the Validate stage.
• Requirements/SkipApproveStage – skips the Approve stage.
Routing (ruleset PegaCLMFS)
• clmfs/DefineRMInChargeLevel – defines the level at which the relationship manager in charge is captured (0 – single level; 1 – optionally at business level).
• clmfs/ret/yrsAtPrimaryAddress – years at the primary address considered when providing certain services.
WSS (ruleset PegaCLMFSRet)
• clmfs/ret/wssdemomode – provides a consistent demo experience when starting journeys from the WSS or Mobile demo applications (true – demo mode; false – standard application behavior).
EDA engine
• LoggingFunctionalityEnabled – determines if logging is enabled.
• eda/operations/events/defaultdatefilter – default date filter in the events view of the Operations console.
• eda/operations/bundles/defaultdatefilter – default date filter in the bundles view of the Operations console.
• eda/engine/processretry/delay
• eda/engine/processretry/attempts
• eda/engine/mutex/interval
• eda/engine/mutex/attempts
• eda/engine/mutexcustomer/attempts
• eda/engine/processdispatch/delay
• SPUMode – controls how surgical policy updates are applied.
KYC Engine (ruleset PegaKYC)
• QualityControlEnable – true (default) enables the KYC Quality Control functionality; false disables it.
• ItemApplicabilityMode – determines which KYC items are available for Quality Control (All, Editable, or None).
• QualityControlMode – sets the granularity at which Quality Control is performed (Item, the default, or ItemGroup).
• QualityControlMetadataReuse – determines whether KYC Quality Control metadata is reused across customer reviews (true – reused; false, the default – not reused).
• EncompassConnectionTimeout – timeout, in seconds, for the connection to the Encompass provider.
• DCCEnable – true (default) enables the functionality system-wide; false disables it.
Background processes
• Queue processors
• Job Schedulers
Queue processors
The following is the list of queue processors that are available in the Pega Client Lifecycle Management and KYC application. By default, the application makes use of most of the queue processors in this list, and organizations should verify at implementation time that the default configurations meet their needs. Queue processors that are used only for certain optional functionalities are highlighted in the following table so that they can be disabled as required.
Each queue processor is described by its functionality and its attributes. Among them, the document synchronization queue processor in the PegaRequirements ruleset is controlled by the Requirements/DMS/OutboundAsyncModeEnabled and Requirements/DMS/OutboundSynchronizationEnabled settings.
Number of threads
    Number of threads per node available for processing of the configured activity.
Initial delay (in minutes)
    The number of minutes that the processor waits before retrying to process an item. The default value is 1.
If, for any reason, you do not want the system to run a particular queue processor,
make a copy of the queue processor and disable it.
Job Schedulers
The following is the list of job schedulers that are available in the Pega Client Lifecycle Management and KYC application. By default, the application makes use of most of the job schedulers in this list, and organizations should verify at implementation time that the default configurations meet their needs. Job schedulers that are used only for certain optional functionalities are highlighted in the following table so that they can be disabled as required.
Each job scheduler is described by its functionality and its attributes. For related background, see the Watchdog Infrastructure section in KYC Types loading optimization.
CreateCustomerReviewEvents
    Access group: CLMFS_Agents. Frequency: multiple times a day (every hour). Identifies customers subject to customer periodic review and creates events for EDR case creation; the events are queued for processing.
CreateCustomerDocExpiryEvents
    Access group: CLMFS_Agents. Frequency: multiple times a day (every six hours). Identifies active customers subject to customer review due to the expiration of documents (1G) and creates events for EDR cases.
RNGCreateCustomerDocExpiryEvents
    Access group: CLMFS_Agents. Frequency: multiple times a day (every six hours). Identifies active customers subject to customer review due to the expiration of documents (2G) and creates events for EDR cases; runs when the migration strategy is configured as Bulk in the D_ReqConfiguration data page.
To ensure that these job schedulers run within the context of your application, you can
update the access group CLMFS_Agents to point to your application. As an alternative,
you can copy the job schedulers rules and make them point to your own access group.
There is no need to make any changes to these job scheduler rules. However, some organizations might want to review parameters such as the access group and frequency to meet their specific needs.
If, for any reason, you do not want the system to run a particular job scheduler, make a
copy of the job scheduler and disable it.
For more information about managing these background processes, see the Managing
background processing resources.
This catalogue provides comprehensive details regarding the usage, table keys,
database indexes, class keys, and estimated data storage size for each table. For more
information, see Database table catalogue.
Resources
• Process overview
About
Intended audience
In Pega Client Lifecycle Management and KYC, the activation of the new Requirements
functionality (2G) requires that document metadata collected with the previous (1G)
version of this functionality is migrated to the new (2G) data structures. Such metadata
can be migrated by following two different strategies, either in bulk or on demand. This
guide describes the differences between those two strategies, so that organizations can
decide which strategy best suits their needs. In addition, this guide describes the steps
which must be followed by administrators who carry out the process of bulk migration
using the tools supplied by Pega.
Process overview
• Migration strategies
Migration strategies
During the activation of the new version of the Requirements functionality (2G),
organizations which make use of the previous version of this functionality (1G) need to
decide how to manage the document metadata that they have been collecting in their
production environments up to the time of that activation. Pega supports the following
two upgrade strategies:
The legacy version of the Requirements functionality (1G) stores document metadata
inside the BLOB of Data-WorkAttach-File instances. Although this mechanism is very
efficient in many cases, it may not be optimal in the case of bulk migration, where this
process typically runs in restricted time windows, and therefore the retrieval of the
BLOB from the database can introduce significant delays.
Note that running the Extractor component is optional. If you decide not to run the
Extractor component, the retrieval of the document metadata from the BLOB takes
place during the Bulk Migration process. In organizations where the binary of the
attachments is stored externally (for example, on a DMS or Web Storage), this decision
may not have a significant impact on your migration strategy. However, it will have a significant impact if your organization stores the binary of the attachments in the table shipped out-of-the-box by Pega. In that case, the retrieval of metadata from the BLOB can make your Bulk Migration exceed the time window that you have available for activation. In such a case, we recommend using the Extractor component.
This utility implements the actual migration process. It must be run after your system
has been upgraded to the version of the software that ships the 2G functionality (8.5 or
a later version), and it must be run at the moment of activation of the 2G functionality
(see Upgrading from 1G into 2G in the Implementation Guide on the Pega Client Lifecycle Management and KYC product page). If you run this utility after 2G has been active in the production environment and documents have been collected, this can result in the creation of duplicates and loss of data.
The Bulk Migration utility iterates through all the documents collected in 1G and copies their document metadata into the new 2G tables. If the Extractor component was run before the Bulk Migration utility, the utility uses the output from the Extractor component to accelerate the process. If the Extractor component was not run, or if a particular document was added after the Extractor component was run, the Bulk Migration utility opens the BLOB of that document and retrieves the metadata from there.
Batch data flows are executed as background processing in the nodes of the cluster
configured for that purpose. The two different types of nodes where data flows can be
run are:
• Batch – If you have CDH installed in your environment and you have nodes with a
classification of Batch, these nodes can be used for the execution of the data
flows.
• Background – If you do not have CDH installed, or you do not have Batch nodes,
you can still run the tools using the standard Pega background nodes. However,
bear in mind that these data flows are then sharing the load with other
background processes.
To optimize the speed of the process, batch data flows work with “partitions”. These partitions determine the maximum number of parallel threads across the cluster that can be used during the execution of the data flows.
For example, if you have two nodes available for the process, and you estimate that
each of the nodes can manage ten concurrent threads each, you should configure the
process to use 20 partitions. It is important to note that having more threads than
partitions does not bring any benefit in terms of performance. You must adjust the
number of partitions, threads and nodes to achieve the required performance and
throughput.
Downloading Extractor
The Extractor is publicly available for download on Pega Marketplace. If you are an on-premise customer and have access to the distribution of Pega Client Lifecycle Management, you can also find the Extractor in the OptionalComponents folder of the distribution (RNGBMExtractor_v1.1.zip).
Importing Extractor
Because it can take a significant period of time for the Extractor component to
complete, you must run the Extractor component prior to the activation of the new
version of the Requirements functionality (2G). Running the component does not
require you to shut down the server. You can run the component whenever the
production system is up, and users are accessing it. However, in order to not impact on
performance for users, consider running the component so that it uses fewer
resources, for example, by running it in an isolated node.
1. Open your application rule, and then click the Manage Components button.
3. Navigate to the folder where you downloaded the component, and select the file.
4. After the import is successful, you will see the component in the list. To dismiss
the dialog box, click Ok.
5. In the Available components section of your application, you will see that this
component has the status Enabled.
The Extractor component uses an input data table as the index of the documents to be
processed. The Extractor component also uses an output data table where the results
of the process are stored. In order to prepare these two tables, carry out these steps in
the following order:
1. To create a replica of the attachments table with only the fields required to kick off the process, execute SQL statements of the following form in the database (the target table, bm_extractorindex, is the input index table that the component reads):

create table pegadata.bm_extractorindex as
select pzinskey as recordkey,
       mod(CAST(floor(random()*100000) AS INTEGER), 5) as partitionkey
from pegadata.pc_data_workattach;

Note that the value of “5” matches the number of partitions to be used during the execution; the random expression distributes the records evenly across those partitions. Adjust this number according to the number of partitions that you are planning to use.
2. Once the new table is created, set the relevant indexes by executing the following
SQL statements:
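By analogy with the bm_inputindex statements shown later in this guide, the statements take the following form (the index name here is an assumption):

alter table pegadata.bm_extractorindex add primary key (recordkey);
create index bm_extractorindexpartition on pegadata.bm_extractorindex (partitionkey);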
Depending on the number of records in the table, the indexing process can take a significant amount of time.
3. To delete any records from the output table that might be there from previous
executions, execute the SQL statement below. This statement ensures that the
output tables are empty. Otherwise, the system issues an error message due to
duplicated keys.
truncate pegadata.bm_extractordata;
It is now time to run the Extractor component. Complete the following steps:
1. Go to Configure > Decisioning > Decisions > Data Flows > Batch Processing, and
click on New to create a new instance.
Then select the access group for the execution, and set the service instance name
based on the classification of the nodes where the process will run
( BackgroundProcessing or Batch ). Based on the number of server
instances and the number of partitions that will run this process, set the number
of threads to be the optimal number. For more details, see the section on Data
flows and partitions above.
3. You can now start the process. Click on Start, and wait until the process is
finished. Depending on various factors (number of documents, partitions/threads,
hardware capacity, and so on.), the process can take from a few seconds to a few
days.
4. After the process has finished successfully, verify the output of the Extractor
component by executing the following SQL statements:
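A minimal sketch of such a verification, comparing record counts between the output and input tables:

select count(*) from pegadata.bm_extractordata;   -- output records
select count(*) from pegadata.bm_extractorindex;  -- input records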
At the end of the execution, there should be as many records in this table as there
were originally in the bm_extractorindex input table.
You can also check the results for specific documents by filtering on a document identifier and/or expiration date. If there are many documents, make sure that you filter the rows based on count.
You must make this ruleset available in your application by adding it to your
implementation layer application rule. As an alternative, you can create a new
temporary application rule based on your implementation application, or directly on
the CLM base (PegaCLMFS). Note that, for the alternative approach, you will also need to
create an access group to run the utility in the context of this new application.
The Bulk Migration utility uses an input data table as the index of the documents to be processed. It also uses an output data table where the results of the process are stored. In order to prepare these two tables, carry out these steps in the following order:
1. Start by dropping the input table that comes with the standard distribution.
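Assuming the table name used in the later statements, the drop takes the following form:

drop table if exists pegadata.bm_inputindex;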
2. Create the index table that is used by the process to iterate through the
documents. The generation of the table depends on whether (or not) you
executed the Extractor component previously. Depending on which migration
strategy you are following, either execute all the queries below for Case 1, or
execute all the queries below for Case 2. Do not execute the queries for both Case
1 and Case 2.
In the second query, note that the value of “10” matches the number of partitions to be used during the execution. Adjust this number according to the number of partitions you are planning to use.
3. Once the new table is created, which can take a few seconds, set the appropriate indexes by executing the following SQL statements:

alter table pegadata.bm_inputindex add primary key (recordkey);
create index bm_inputindexpartition on pegadata.bm_inputindex (partitionkey);

Depending on the number of records in the table, the indexing process can take a significant amount of time.
4. To delete any records from the output tables that might be there from previous executions, execute the SQL statements below. These statements ensure that the output tables are empty. Otherwise, the system issues an error message due to duplicated keys.
truncate pegadata.req_migration_output;
truncate pegadata.req_metadata_document;
truncate pegadata.pr_index_pegareq_data_partyref;
At this point, it is important to remember that you should always run the Bulk
Migration tool immediately after the 2G functionality has been activated and
before any documents have been collected using the new functionality.
Otherwise, these SQL operations will result in loss of data.
It is now time to run the Bulk Migration utility. Complete the following steps:
1. Go to Configure > Decisioning > Decisions > Data Flows > Batch Processing, and
click on New to create a new instance.
2. Select the class that the data flow belongs to ( Data-Req-BM-Input-
Attachment ) and the instance ( BulkMigrate ).
Then select the access group for the execution, and set the service instance name
based on the classification of the nodes where the process will run
( BackgroundProcessing or Batch ). Based on the number of server
instances and the number of partitions that will run this process, set the number
of threads to be the optimal number. For more details, see the section on Data
flows and partitions above.
3. You can now start the process. Click on Start, and wait until the process is
finished. Depending on various factors (number of documents, partitions/threads,
hardware capacity, and so on.), the process can take from a few seconds to a few
hours.
4. After the process has finished successfully, verify the output of the process by
executing the following SQL statements:
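A minimal sketch of such a verification, using record counts on the two 2G tables populated by the migration:

select count(*) from pegadata.req_metadata_document;
select count(*) from pegadata.pr_index_pegareq_data_partyref;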
The two tables above should contain the same number of records as there are
attachments in the pc_data_workattach table with a customer-id in the
AssociateInfo column.
• Overview
• Appendix
Intended audience
This guide describes the configuration required to access the Client Outreach
functionality available in Pega Client Lifecycle Management and KYC for Commercial
Banking from a Web Self-Service application using Pega Mashup. Although it provides a
high-level description of the functionality, it is intended for technical architects in
charge of the configuration of the module.
Overview
Overall process
The Client Outreach case type makes it possible for the client to provide the required
data or documents through a Web-based self-service mechanism which can be
accessed from a laptop, tablet,or phone. Data and documentation received from the
client can then be used in the ongoing customer journey.
The case type articulates this data and document retrieval process in four main stages. For detailed information about the overall process and the stages, see the Client Outreach articles available on Pega Community.
Fulfillment channels
Of the four stages, Fulfillment is the only one where an interaction with the client is expected. Three different channels are provisioned to support that interaction:
• WSS – Financial institutions can embed in their Web Self-Service (WSS) applications a simple to-do list with all the Client Outreach cases addressed to a client. Clients can access cases through that gadget and action them. The integration between the financial institution's WSS application and Pega is done by using Pega Mashup (see Pega Community for additional information).
• REST API - Financial institutions can build their own applications to manage the
interaction with the clients. These applications can get and edit Client Outreach
cases by using the provided REST API. This enables organizations to use their own
systems to contact clients. See REST API for additional information.
• Back-office portals - If a back-office operator has collected data from the client
through an external channel, the operator can open the assignment from a
workbasket, and enter this data as required.
The scope of this guide is the configuration required to enable the Web Self-Service
(WSS) channel.
Pega provides a demo Web Self-Service application for customers to explore Pega Mashup capabilities. The application is available through a public URL and comes in different flavors for different industries and segments. In this case, you will use the one created for Commercial Banking:
https://pegasystems.github.io/uplus-wss/commercial_bank/index.html
This URL can be used to test against different Pega applications or servers. You can set different configurations in different browser sessions (the configuration is stored locally in a browser cookie). Configurations can be exported and imported for a quick switch between environments and demos. For detailed information about the application and how to configure it, see UPlus Web Self-Service.
The application can be run directly from the URL provided above. However, there are
some situations where financial institutions need to download it and install it in their
own systems; for example, when they customize the application to demonstrate a
certain capability, or to integrate it with third-party web-based tools like eID
Verification (see eID Verification). In those cases, the application should be
downloaded from its Git project homepage.
Server-side configuration
Before accessing the Pega application from the WSS application, some configuration is
required on the Pega server.
In all the tasks under this section, the assumption has been made that the functionality
is going to be tested against the out-of-the-box version of the application. Access to
the application is done through the PegaCLMFSWSS application, which is built on
PegaCLMFS and contains all rules specific to access through WSS.
If you are planning to test against your own application, you will need to adjust your
stack to include the out-of-the-box PegaCLMFSWSS as a built-on application. For
example, if you have a UPlus application, you must create a UPlusWSS application that
includes both PegaCLMFSWSS and your base application.
For those tasks listed below where access to Dev Studio is required, use an operator
with administrative rights. If you are working directly against the out-of-the-box
application, use an operator pointing to the access group CLMFS_WSS_User (if you
installed the sample application, use the operator [email protected] as
reference).
The configuration consists of the following tasks:
• Application configuration
• Channel configuration
• Client-side configuration
• Verification
Application configuration
If you are using the out-of-the-box configuration, you do not need to do any of the tasks
described in this section. In all other cases, you need to register the URL of the WSS
demo application as a trusted origin in your application.
For that purpose, open your application rule and register the URL of the WSS
application in the Integration & Security area.
In addition, register the WSS URL as an allowed frame-ancestor website in the Content
Security Policy rule of your application.
If you are testing against the out-of-the-box application, configure one operator to point
to the access group CLMFS_WSS_User (if you installed the sample application, use the
operator [email protected] as reference). If you are testing against your
own application, create a new operator with an access group pointing to your
implementation layer (for example, UPlusWSS).
Channel configuration
The last step in the configuration of the application is the creation of the channel. Open
the channels landing page (Application > Channels and interfaces), and select Web
mashup. In the New Web mashup interface screen, provide the following information
(but leave the default configuration for those fields not listed here):
Name
Name of the channel.
Example – CLMWSS
URL
URL used by the WSS application to access Pega. It is autogenerated by the system.
Ensure that the host name and protocol match the public URL of the server (the
hostname should be the one used by the server certificate). If it does not match, log
in to Dev Studio and correct it.
Example – https://lab.pega.com/prweb/app/CLMWSS_2350/
Class
Data-Portal
Harness
pyToDoList
Skin
CLMWSS
Click on Save to persist the changes and then, before leaving the screen, click on
Generate mashup code. The system opens a modal window with two code snippets.
Look at the second one, the one at the bottom (Mashup code), and copy the value of
the data-pega-channelID parameter.
Save the value, as it will be used later during the configuration of the Web Self-Service
application.
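For reference, the generated mashup code looks roughly like the following sketch. The
gadget name is illustrative, and the URL and channel ID are the example values used in
this guide; always copy the exact code generated for your own channel:

<div data-pega-gadgetname="ClientOutreachToDo"
     data-pega-action="display"
     data-pega-action-param-harnessname="pyToDoList"
     data-pega-action-param-classname="Data-Portal"
     data-pega-channelID="MASHUP63942705ac7a4e7e85e27dd41a868ac4"
     data-pega-url="https://lab.pega.com/prweb/app/CLMWSS_2350/">
</div>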
Client-side configuration
Now that the server is ready to accept requests, it is time to open the Web Self-Service
application and configure it. Open the application using the following URL:
https://pegasystems.github.io/uplus-wss/commercial_bank/index.html
Then scroll down to the bottom of the screen and click on Settings.
Configuring users
The Users tab presents a configuration screen where you can set the Pega operator
that will be used to access the Pega server. The demo application comes with two
different users that can be used to maintain different profiles. Select one of them, and
provide the following information:
Username
Name of the user that will be used to log in to the Web Self-Service application. This
username is fictitious and does not need to exist in any system; it represents the user
identifier of the client when accessing the WSS application.
Example – [email protected]
Password
Password of the client, used to authenticate the client during login.
Example – Password
Pega user id
Pega operator identifier (see Access groups and operators).
Example – [email protected]
Contact ID
Customer identifier of the client. For a given Client Outreach request, this identifier
must match the identifier (in the system of record) of the recipient of that request.
Example – 7777777792
To Do component
As the last step in the configuration, you must instruct the Web Self-Service application
to include a To Do gadget that points to your application. The gadget appears
immediately after login, embedded in the landing page for the client. It lists the Client
Outreach requests waiting to be fulfilled, and gives access to them for their resolution.
Action
Display
URL
URL generated by the system during the creation of the channel interface (see
Channel configuration).
Example – https://lab0650.lab.pega.com/prweb/app/CLMWSS_2719/
Channel ID
Channel identifier generated by the system during the creation of the channel
interface (see Channel configuration).
Example – MASHUP63942705ac7a4e7e85e27dd41a868ac4
Verification
After configuration, it is time to verify the access. Complete the following steps:
3. In the Add products screen, click on Actions > Create client outreach. Select the
primary contact as the recipient of the request, provide the data and
documentation expected from the client, and submit the request.
4. Open the Web Self-Service application and log in with the user registered
before (see Configuring users), providing the user identifier and password.
5. After logging in to the application, the system should display a landing page with a
To Do gadget that lists any Client Outreach requests that were created before.
6. Click on Begin, provide the required information, and submit the request.
To Do Widget
The parameters set during the configuration of the client side (for example, the
customer identifier) are received by the Pega application in the section where the To
Do list is rendered. Unfortunately, those parameters cannot be passed directly to the
data page that brings back the list of requests for the customer.
To work around that limitation, a new section, ClientOutreachWSSCtx, has been
created. This section transfers all the parameters in the parameter page into a
clipboard page that can later be referred to from the data pages. If you need to pass
additional parameters, they should all be made available through this mechanism. For
more details, see the following rules:
PegaFS-Data-ClientOutreach-WSSCtx (class)
Class of the page that contains the parameters received from the WSS web
application so that they can be used by the gadget. Contains a RecipientId.
Appendix
REST API
For more information about these methods, in Dev Studio, click <application name> >
Channels and interfaces > select API > select the Application tab. Alternatively, see the
article "Client outreach API" on Pega Community (Client outreach API | Pega).
• Supporting evidence
• Policy profiles
• Policy memory
Overview
It is critical for financial institutions to understand the risk of doing business with their
portfolio of clients. Internal compliance teams need to ensure that internal processes
are sufficient to satisfy the scrutiny of external regulators. Failure to do so can result in
regulators issuing hefty fines and strict timelines to remediate raised compliance
issues. In the worst-case scenario, this can even impact the financial viability of the
institution.
The need to understand risk is not just during the initial onboarding, but over the
lifetime of the client, right through to offboarding. Many events such as periodic risk-
based reviews, data changes, document expiry, and ownership updates will result in the
need for re-assessment of risk.
In the Pega Client Lifecycle Management and KYC application, Pega Know Your
Customer is a utility component that delivers the core dynamic questionnaires that
implement and enforce regulations, policies, and procedures by geography, line of
business, and product. These questionnaires, also known as KYC Types, facilitate the
collection of the relevant customer due diligence information in order to ensure
compliance with local laws and regulations like AML/CTF Rules, FATCA, CRS, and
MiFID II.
This article describes how to configure the Pega Know Your Customer Engine, a set of
generic, industry-agnostic capabilities that financial institutions can use either to
extend and maintain the KYC Types that Pega Client Lifecycle Management and KYC
provides, or to create KYC Types of a different nature to support their business needs.
KYC Item
KYC Item represents each of the questions in a questionnaire. Each item defines
not only the text associated with the question, but also the property where the
answer will be stored, its visibility and mandatory conditions, associated actions,
and many other options that the engine uses to drive the data capture.
KYC Group
Items can exist by themselves in a questionnaire, but they can also be grouped
under KYC Groups. The KYC Group displays items on the screen under a
collapsible section that facilitates the navigation and maintenance of the KYC
Type. Groups can be shown or hidden based on visibility conditions.
KYC Type
The KYC Type is the container that puts together all items and groups under a
questionnaire. It is also used to maintain questionnaire level configuration such as
the applicability rules of the KYC Type, initialization and data propagation logic, or
the definition of profiles.
KYC Types are defined at design time using the KYC Rule Manager Studio, a tailored
version of Dev Studio that facilitates the navigation through the existing KYC Types and
their organization. After KYC Types have been defined, the Pega Know Your Customer
engine instantiates them by using the applicability rules associated with each KYC
Type, making them appear as required at run time.
The following image is an example of how a KYC Type is displayed in run time and how
the different elements defined in design time are used during that rendering.
Pega Know Your Customer provides by default a structure organized by global regions
and regulations, and it has a corresponding set of rulesets organized in the same
manner. In addition to the structure that the application provides, there are pre-
configured due diligence rules that can serve as a design model or as reusable assets in
production implementations.
A tree structure occupies the left-side navigation of the portal with the different nodes
that make up your KYC Types hierarchy: a root node Policies, regulation level nodes,
regional nodes, and KYC Types.
The hierarchical structure is supported internally by a regular class structure. The one
that the application provides by default has PegaKYC-Data-Type-Policies as its root
element, which is represented as Policies in the navigation tree above. The regulatory
nodes and regional nodes are subclasses of the root class (for example, PegaKYC-Data-
Type-Policies-Global, PegaKYC-Data-Type-Policies-Global-APAC). All the KYC Types that the
product includes are under one class of this class structure and are instances of the
custom rule Rule-PegaKYC-Type.
You can use this hierarchy to maintain the KYC Types that the product provides by
default and your own KYC Types. The structure can be easily extended to customize
and facilitate your implementation. You can create your own regulatory or regional
nodes as needed.
If the class structure does not meet your business needs, you can always create your
own. However, it is important to note that defining the right class structure at the
beginning of an implementation is key to facilitating maintenance and promoting reuse
in the future. The class structure provided by default is a rich one and, most of the time,
creating a new structure is not required. Take time to review the existing structure in
detail before creating your own.
If you decide to use the class structure provided by default, there is nothing else to do
before you start maintaining your KYC Types. If you opt to create your own class
structure, follow the instructions provided in the following sections of this chapter.
Once you have created your root class, along with any subclasses that might be
required, it is time to register it so that the KYC Rule Manager Studio can use it to
render the appropriate navigation tree. The registration is done by using the Dynamic
System Setting KYCTypeRootClass that is associated with the KYC ruleset. For example,
if you created your class structure under MyCo-Data-Type-Policies-, you should configure
the Dynamic System Setting as follows:
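A sketch of the resulting setting (the ruleset association follows your installation; only
the purpose and value below are taken from this example):

Dynamic System Setting
Setting purpose: KYCTypeRootClass
Value: MyCo-Data-Type-Policies-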
After these changes are done, the hierarchy displayed at the KYC Rule Manager Studio
will be the one defined by your class structure.
You will then need to decide the unique identifier of the rule, and the ruleset and class
where the KYC Type will reside. KYC Types are regular rules that follow the same
rule-resolution principles as other rules: instances of the rules in higher versions of
rulesets hide instances in lower versions, instances in rulesets higher in the stack take
precedence over those below, branches take precedence over rulesets, and so
on. It is important to consider all these factors when creating the rule.
After you have provided these basic parameters, the KYC Type form will be presented.
The form is organized in three tabs: Type Definition, Item Definition, and History. The
following two sections describe the different elements of configuration that can be set
through the first two tabs. The third tab, History, is the standard one for all rules and
does not have anything specific to KYC Types.
• The KYC Types become available for application-wide use only after they are
checked in. The run time engine does not process the KYC Types in personal
rulesets.
• New KYC Types are considered for new cases only. For existing cases that already
have KYC Types applied to them, new KYC Types and new versions of the types are
not available until either the re-evaluation routine runs (something that is triggered
only from specific questions), or the Surgical Policy Update detects the
change in the regulatory rulesets and forces the re-initialization of the case (see
Surgical policy updates).
• Similarly, KYC Types that have been discontinued or that are no longer applicable
in specific business scenarios remain in in-flight cases until the re-evaluation
routine runs or the Surgical Policy Update forces the re-initialization of the case
(see Surgical policy updates).
• Type definition
• Item definition
• Item validations
Type definition
The Type Definition tab allows you to maintain questionnaire-level configuration. The
configuration elements presented here are of a different nature and are used in
different ways by the run-time engine.
Applies when condition
The KYC Engine determines at run time which of the available KYC Types are
applicable in each specific business scenario. This determination can be driven in
two different ways: using standard applicability conditions or the Applicability
Matrix (see Defining applicability conditions). If you use standard applicability, you
can set here the decision rule type and instance that determines the applicability
of this specific KYC Type. If you use the Applicability Matrix, this configuration has
no effect, as the applicability of the rule is determined by the matrix.
When using this configuration, a KYC Type is not applicable by default if no
condition is configured. If a condition is configured, the KYC Type is applicable
when the configured rule returns true.
You can use several types of decision rules, including when, map value, decision
table, and decision tree rules. For ease of configuration, a condition of Always is
available to specify that the KYC Type must always be selected for every case.
Display Order
You can use this attribute to configure the horizontal (or vertical) display order of
the KYC Type in the case user interface. Multiple KYC Types can share the same
ordering sequence number; in that case, the secondary ordering is alphabetical,
from lowest to highest (A-Z). To view the ordering of all available KYC Types, click
the Info icon, which displays the list of all the KYC Types in the system as
they would appear if they were all applicable at the same time.
Data Transform
This data transform is invoked when the KYC Type is loaded into a case. It is used
to initialize KYC Items, usually by pulling data from the case data structure or
other data sources. This feature is used to prevent users from re-entering data
that is already available in the system. For example, if there is a KYC Item where
the users are asked to enter the full name of a customer, and the name is already
available at the case, there is no point in asking the user to enter the data again.
This data transform can take the value available in the case and map it into the
property associated with the mentioned question.
Profile suites and Profiles
These two configuration elements support the use of due diligence profiles in the
KYC Type. For detailed information about this functionality, see Due diligence
profiles.
The Profile suites list contains all the profile suites that will be considered during
the execution of the KYC Type. When the KYC Type is created, the list is initialized
with a default profile suite called Default. You can add additional profile suites to
add more dynamism to your questionnaire.
The Profiles list that appears on the right of the screen is a consolidated list of all
the profiles defined in the profile suites. The order and consolidation process is
driven by the order in which the suites appear in the profile suites list. Profiles
defined in suites at the top of the list take precedence over profiles of the same
name defined in suites lower in the list. In addition to the profiles inherited from
the profile suites, you can also define local profiles that will take precedence over
those coming from the suites.
You can expand a Profile by clicking on its name, which reveals a few definition
elements. Those elements include the decision rule that determines when that
specific profile is active and can therefore be used to drive the visibility and
mandatory conditions of the KYC Items, which is the ultimate purpose of the due
diligence profile.
Item definition
The Item definition tab allows you to define the actual content of the questionnaire:
the questions and groups of questions that will be displayed to the user. For example,
in the following image, the system shows the content of a KYC Type that contains
seven groups (in orange), where Group 5 (Cooperative) has eight different questions
(in green).
When applying this definition at run time and rendering the questionnaire for the user,
some of these groups and items might not be applicable and therefore might not be
displayed.
There are four main options that can be accessed from the top of the Item Definition
tab:
Add
Click on this option to add a new item or item group. The new element is created
immediately after the item or group selected on the screen. The configuration of
the new element should be done as explained later in this section.
Import
This utility automatically creates items and groups from an Excel spreadsheet. For
more details, see Adding items and groups from Excel spreadsheet.
Export
The groups and items of the KYC Type can be exported into a spreadsheet so
that they can be shared with users with no access to Pega.
Delete
Click on this option to remove multiple groups or items from the list. Select each
of the elements to be removed (using the checkbox at the left of the line that
contains the element) and click the button to proceed with the deletion.
At the same time, each of the elements in the questionnaire presents some additional
options through the information icon and the three-dots icon displayed at their far
right. Most of the options are there to support easy management of the elements
(for example, cut or paste) and to give access to their configuration.
• Configuration of item groups
• Configuration of items
Configuration of item groups
The following elements need to be configured when creating a new item group:
Item/group #
The number of the selected group within the KYC Type. It is presented as read-only
and cannot be changed.
Field value
Field value that contains the name of the group that will be used during the
rendering of the questionnaire in run time. This is the name that the final user will
see on the screen. Click the open rule button to create or edit a field value rule.
Profiles association
You can use this option to display the item group based on the active due diligence
profiles. The profiles association type can be inclusion, exclusion, or show all. If the
association type is inclusion, the item group is displayed when the selected
profiles are active. If it is exclusion, the item group is hidden for the selected
profiles. If it is show all, the item group is visible regardless of the active profiles.
For example, assume there are three due diligence profiles: Simplified, Full, and
Enhanced. If the item group has a profile association type of inclusion and the
selected profiles are Full and Enhanced, the item group and its associated items
are displayed only when one of those two due diligence profiles is active.
Display conditions
In addition to the use of profiles, the visibility of an item group can be driven by a
decision rule. When rules, map values, decision trees, and decision tables are
available for use. Use the autocomplete control to select an existing decision rule,
or enter a new rule name and click Edit to create it. If no decision rule is configured,
the item group is displayed by default.
It is important to note that the system uses this configuration in combination with
the profile association. The item group is visible when both configurations,
profiles and display conditions, allow visibility. If either of the two determines that
the group should not be visible, the group is not displayed.
Audit note
Enter audit information when adding, deleting, or updating existing KYC Item
groups. The configuration allows for change tracking and reporting across
versions of KYC Types.
Configuration of items
You can configure items in a very similar manner to item groups, but items have their
own specific pieces of configuration. The system presents a modal dialogue to facilitate
the configuration.
Item/group #
The number of the selected item within the KYC Type. It is presented as read-only
and cannot be changed.
Item property
This field references the response property that defines the data to be displayed
and captured at run time. Use the control to select an existing property, or to
create a new one in your class hierarchy.
Depending on the type of property, the item is classified as simple (scalar
properties) or complex (page and page-list properties), a classification that
introduces some variances in behavior.
Item properties must reside in the hierarchy under the same class as the KYC
Type that is being edited, or under one of its parent classes. It is important to keep
the reuse of item properties under consideration, and to create and move them as
required to maximize reuse.
Display and Edit section
The display and capture of complex items and of simple items with special needs
is done through ad-hoc sections created for that purpose. Use the open rule
control to select the section in your class hierarchy that will manage the display
and capture of your item.
Is Complete Condition
The system can easily determine whether a simple item is complete by just
checking whether the associated property has a value. However, it cannot do so
without additional configuration for complex items. The Is Complete Condition
allows you to specify the when rule that determines whether a complex item is
complete. For example, in a complex item created to support an address with
multiple fields (for example, address lines, city, postal code, country, and so on),
the Is Complete when rule could check for the presence of the very minimum data
to consider the address complete (for example, address line 1 and postal code).
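As a sketch, assuming illustrative property names, such a when rule could be built
on an expression like the following, which returns true (item complete) only when
both properties have a value:

.Address.AddressLine1 != "" && .Address.PostalCode != ""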
Item Text
Internal name given to the question and used to identify the question within the
KYC Type in design time. This text is not displayed to users in run time.
Display condition
In addition to the use of profiles, the visibility of an item can be driven by a display
condition. When rules, map values, decision trees, and decision tables are available
for use. Use the autocomplete control to select an existing decision rule, or enter a
new rule name and click Edit to create it. If no decision rule is configured, the item
is displayed by default.
It is important to note that the system uses this configuration in combination with
the profile association. The item is visible when both configurations, profiles and
display conditions, allow visibility. If either of the two determines that the item
should not be visible, the item is not displayed.
Required condition
Use this configuration to drive the mandatory condition of the item. If a decision
rule is associated and it returns true at run time, the item is considered mandatory
and is denoted with the default icon. If there is no rule configured, or the rule
returns false, the item is considered optional.
This attribute works only for simple items. Complex items require mandatory
conditions to be configured within the edit section attribute associated with it (see
above Item property and Display and Edit section).
Read-only condition
This configuration enables control of the edit permissions on the item by using
decision rules. If a decision rule is associated with the item and it returns true at
run time, the item is presented as read-only. Otherwise, the item is in read-write
mode.
It is important to note that other factors can also drive the read-only condition of
an item. For example, when a KYC Type is displayed for review in a maker-checker
process, all items in the KYC Type are presented as read-only. Another mechanism
to control this condition is to procedurally set the IsReadOnly property from
one of the data transforms used by the engine (for example, the initialization data
transform).
Supporting evidence
This configuration enables the capture of supporting evidence for the item (see
Supporting Evidence). When configured, the item is presented on the screen along
with a link that lets users select supporting evidence from various sources (for
example, attachments, requirements). At design time, select the KYC Supporting
Evidence rule that enables the appropriate sources and the decision rule that
determines when to make the option appear. The KYC Supporting Evidence rules
available out-of-the-box are CustomerDocuments, Requirements, and
CustomerDocumentsandRequirements.
Audit note
The audit information provided here is used to keep track of changes across
different versions of the KYC Type. It is only used at design time and has no impact
at run time.
Adding items and groups from Excel spreadsheet
The import utility is accessed through the Import button at the top of the Item
Definition tab. There are two conditions that should be met to have the button enabled:
• The KYC Type rule should be ready for editing, either because it belongs to a
ruleset that does not require check-out or because it was already checked out.
• The KYC Type rule should not have been checked out for private edit. Such rules
should be copied over into an open application ruleset or branch and checked out
from there.
When a user clicks on the button, the system opens the following modal dialogue:
The following three actions can be taken in the modal dialogue:
Download template
The import utility uses a pre-defined Excel spreadsheet template that must be
downloaded and filled in. As there can be minor changes in the template across
product versions, downloading the latest version of the template before initiating
any new import process is highly recommended. A user guide for the template is
also available for download. Take the time to go through the guide to discover all
the capabilities of the utility.
Select file
Once you have downloaded and filled in the template as per your needs, you can
select the file and click the Next button to start the process.
The next step in the process is the validation of the data uploaded through the
workbook. The system performs some basic validation to ensure that all the items in
the spreadsheet are properly configured. If any item does not have a correct
configuration, it is flagged to the user, who needs to amend the workbook and
upload it again.
After all the items that are going to be created have been reviewed, the process can be
initiated by clicking the Import button. Once the process is completed, the system
presents a confirmation message.
The export utility is available at the top of the Item Definition tab of the KYC Type rule
form, through the Export button. When the user clicks the button, the system
generates an Excel spreadsheet.
It is important to note that the format of this Excel spreadsheet is very different from
the one used for the import (see Adding items and groups from Excel spreadsheet). The
amount of data, and the nature of some of that data, is different, and the spreadsheet
cannot be used as-is to initiate an import process on another KYC Type.
Item validations
The answers provided for the different items that make up a questionnaire are
validated at run time against the definition of the properties associated with the KYC
Items. If, for example, a property has been configured with an edit-validate rule, the
associated validation logic is executed during the data capture process to ensure that
the value is according to expectations. In the same manner, if another property is
configured as a date, the system ensures that its value has the right format and
represents a valid date.
The nature of the validations that can be implemented in these rules varies.
Configuring case processes
Applications must be explicitly configured to use KYC Types at the appropriate points
of their processes. For example, an onboarding application consisting of multiple
stages might only require the dynamic nature and audit capabilities of the KYC Types
in one of those stages. You must configure that stage, with its associated cases and
processes, in order to present the applicable KYC Types to the users. Depending on
particular business and operational needs, each application can have multiple points
where KYC Types are used.
At the same time, the application needs to determine which KYC Types are relevant at
each of these points of the process. For example, a KYC Type created to capture the
information from customers of type individual during an onboarding process is not
relevant if the customer that is being onboarded is an organization. In addition, the KYC
Type might be relevant in a business-specific scenario but, according to the operational
model of the financial institution, this KYC Type must not appear at certain points in the
process.
The following two sections explain how to configure these two important aspects of the
KYC Types: the points of the application where they are used, and the business logic
that drives their applicability.
The engine exposes this functionality through two standard flows, described below.
You can include them both as two independent steps in one of the stages of the
process, or you can wrap them under a single step that orchestrates the invocation of
the two flows.
ApplyKYCTypes
This flow runs the initialization process that determines which KYC Types are
applicable to the case and brings them into it.
CollectKYCData
This flow implements an assignment where users can complete the KYC Types
that are deemed applicable during the initialization process executed in the
previous step. The assignment presents all applicable KYC Types to the user, and
ensures that they are completed before the process continues. After completion,
the KYC Types are stored in the customer Policy Profile for future use.
These two flows can be added to as many stages and case types as required, in
accordance with the client's business requirements.
Defining applicability conditions
Until version 8.6, the standard applicability condition mode was the only mode
available. Version 8.6 introduced the applicability matrix, which is now the mechanism
used by default.
• Standard applicability conditions
• Applicability matrix
Standard applicability conditions
The configuration can be done in the definition tab of the KYC Type (see Applies when
condition), and allows users to specify the decision rule that drives the applicability of
the KYC Type. The decision rules available for selection are When rules, Decision Tables,
Decision Trees, and Map Values. You can also select Always to apply the KYC Type
without any conditions.
At runtime, the engine traverses all the available KYC Types in the application,
executing the applicability condition associated with each of them to determine
whether they are applicable to the current case. Once the applicability is established,
the KYC Types are copied to the case, processed, prepared, and presented to the user
for data collection.
Applicability matrix
With this approach, the applicability of the KYC Types is configured in the applicability
matrix decision table that gives a comprehensive view of all the KYC Types managed by
the case or the application. Using this configuration, the engine does not need to
traverse all the KYC Types in order to determine their applicability. Instead, it evaluates
the central decision table and, in a single execution, determines the applicability of all
the KYC Types. This approach offers significantly better performance than using
standard applicability conditions.
You can add any condition that you deem appropriate to drive the applicability of the
KYC Types. The decision table is run in the context of the case where the applicability
process is invoked (see Configuring case processes), and has access to all the case
data, including any customer information that might be required. For example, if a
certain KYC Type is only applicable to customers of type individual, a condition can be
added to the table and configured to check against the property in the case that holds
the customer type.
The applicability logic ensures that the KYC Types that are shown to the customer are
applicable in the specific business scenario defined by the case data. However, it can
also be used to drive the operational needs of the financial institution by distributing
KYC Types across multiple cases that can be routed to different departments based on
the nature of the data to be captured and assessed.
For example, a financial institution might be responsible for managing KYC Types for
AML, Tax and Regulatory needs. All KYC Types can be presented in a single assignment
to a single user, but it is likely that the organization prefers to group the KYC Types by
category, and route them to different users. Each of these groups can, in turn, be
further subdivided into smaller groups. For example, AML KYC Types can be divided
into Global AML, US AML, UK AML, and so on.
To meet these operational needs, the system provides the ability to define locators, an
abstract construct that facilitates the dynamic grouping of KYC Types without having to
make significant changes to the orchestration layer. For more information, see the
subchapter "Configuring locators" below.
The applicability matrix is used by default in new installations. If you upgraded from
version 8.5 or earlier, and you want to make use of the applicability matrix, you must
enable this functionality.
A Dynamic System Setting is available to switch the use of the matrix on and off.
If you want to make the configuration specific to an application, instead of applying the
configuration to the whole system, you can override the value of the Dynamic System
Setting in the KYCSettings_DDSmartFilter data transform.
Note that this switch allows you to choose the applicability mechanism that your
application will use. However, it does not automatically migrate the applicability of the
KYC types from one method to another. Therefore, before you switch from one
mechanism to another, you must ensure that the corresponding applicability
conditions are properly configured.
The applicability matrix is implemented as a decision table that holds the applicability
conditions for each of the KYC Types that must be applied to a case. The KYC engine
provides a template version of the decision table (called KYCApplicabilityMatrix), which
is available under the Work- class. This template rule defines how the applicability
matrix should look.
In the table, every row represents the logic that makes one KYC Type applicable. One
KYC Type can have multiple rows if its applicability logic cannot be expressed in a single
row. All the rows in the decision table are evaluated. If the conditions of a row are met,
the associated KYC Type, which is defined in the last two columns of the table, is added
to the list of applicable KYC Types of the case. At the end of the process, the list of
applicable KYC Types has one entry for each row in the table that was satisfied.
To define your own applicability logic, make a copy of KYCApplicabilityMatrix under your
class hierarchy. You can choose to have a single copy under your main workpool if you
want to have a single point of configuration. If your logic is too complex, or the number
of KYC Types is too large to be maintained in a single decision table, you can specialize
it by class by creating copies under the subclasses supporting the different case types.
Once you have your own copy of the applicability matrix, you can add rows to define
the applicability of your KYC types. You can also add new columns for any new
condition that you would like to evaluate. However, you cannot remove the last two
columns of the table, which are used to add the KYC Type name and class to the list of
applicable types.
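As an illustration, assuming hypothetical condition values and a hypothetical KYC Type,
a row in a specialized copy of the matrix could look like the following. When all the
conditions in the row evaluate to true for the case, the KYC Type named in the last two
columns is added to the list of applicable types:

Customer type     Locator                                   KYC Type name         KYC Type class
= "Individual"    @KYCUtils.LocatorExists("AML","Global")   AMLGlobalIndividual   MyCo-Data-Type-Policies-Global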
Configuring locators
The application can use locators to group related KYC Types together under the same
case. The locators represent the logical groups into which the KYC Types are grouped.
You must provide the system with two pieces of configuration information to make use
of locators.
The first piece of information that you must provide is the definition of the locators that
will be managed by a certain case. For example, if your application has a case type
specifically created to manage AML KYC Types, you must configure that case to register
the locator that will pull only the KYC Types which are related to AML.
The locators in a case are implemented using two properties: DDPurpose (which can be
thought of as a group) and DDKey (which represents a subgroup). These two fields are
of type page group, with the first field hosting pages of class PegaKYC-Data-TypeLocator-
Purpose, and the second hosting pages of class PegaKYC-Data-TypeLocator-Key. The
clipboard structure that supports locators is as follows:
.DDPurpose(AML)
.DDKey(US)
.DDKey(EU)
.DDKey(CA)
.DDPurpose(REG)
.DDKey(US)
.DDKey(EU)
.DDKey(CA)
.DDPurpose(TAX)
In this example, the case that is being configured manages the KYC Types created for:
AML for the US, EU and CA jurisdictions, for regulatory purposes (REG) for the same
jurisdictions, and for tax purposes (TAX) in all jurisdictions.
This structure must be programmatically populated on the case page by specializing
the GetKYCTypeLocators data transform. You can define your own locators and the
logic to populate them, on condition that the structure matches the one shown above.
Once the case has been configured to use certain locators, you can refer to them from
the applicability matrix. The KYCApplicabilityMatrix template rule includes a locator
column that can be used for that purpose. For example, if you want a certain KYC Type
to apply in cases configured for AML Global (DDPurpose=AML and DDKey=Global), you
can add the following condition to that column: @KYCUtils.LocatorExists("AML","Global")
The LocatorExists utility function checks against the locators defined for the case. Both
parameters (DDPurpose and DDKey) must pass the check. The function traverses
the locator structure of the case to determine whether the passed DDPurpose and
DDKey are present on the case. If the parameters are found, the engine evaluates the
rest of the conditions configured in the row, and applies the KYC Type to the case if all
conditions are satisfied.
The function follows an inclusive approach, where a purpose with no keys defined on
the case implies that all the keys for that purpose are active. For example, in the
structure defined above, the case takes all KYC Types for TAX regardless of the
jurisdiction (no keys are associated with TAX). With that configuration, the system only
checks that the first parameter matches, and the second parameter is simply ignored.
We highly recommend that you move to the new applicability matrix mechanism as
soon as possible. The applicability matrix not only provides performance benefits, but
also a centralized and case-specialized mechanism that makes it easy to manage the
applicability of the KYC Types.
Let us assume that, during the Standard Due Diligence process in an onboarding
application, the system presents to the user the question: "Is the customer a politically
exposed party (PEP)?". If the user provides an affirmative response, depending on the
business policies of the financial institution, the system probably needs to carry out a
number of automatic actions.
The following sections describe how the items in the KYC Types can be configured to
support these kinds of dynamic changes in the KYC questionnaires.
The engine ensures that every response to any question is followed by an automatic
re-evaluation of the conditions associated with all questions in the KYC Type. This
means that, every time an answer is provided by the user, a process to re-evaluate the
basic attributes of the items is triggered. This process starts automatically and does
not require any special configuration.
The different processes listed above are only executed by the system when the
response to the associated KYC Item changes. Most of the time, that change in the
response is made by the user using the standard user interface of the application.
However, there are other ways in which a response to a question can be changed. The
following scenarios can lead to a change in a response:
User-Initiated
When a user responds to a question using the user interface of the application,
the system triggers the on-change actions associated to that question.
KYC Type Initialization
When a KYC Type is brought into a case, it can be pre-populated using responses
previously stored in the Policy Profile or Policy Memory of the customer, or set
through the initialization data transforms of the KYC Type. The engine registers
the changes made to the responses during this initialization process, and executes
the on-change actions associated to the pre-populated questions.
On-Change imposed
There can be situations where a response to a question results in a change to
another question. For example, a response to a question about "Country of Birth"
can be used to automatically populate a question about the relevant "Country of
Citizenship". These scenarios are usually implemented using the on-change
launch data transform associated with the first question. However, the changes
made to the second question in that data transform are not automatically
identified by the engine. To trigger the on-change actions that the second
question might in turn require, the change should be flagged to the system.
This can be done by invoking the RegisterPropertyForChange data transform,
passing the name of the question that has been changed (see the sketch after
this list).
Visibility driven
Setting the response to a question triggers the re-evaluation of the simple
conditions of all the questions within the KYC Type. This can result in some
questions either gaining visibility or losing visibility. When visibility is lost from a
question that was already answered, this is considered to be a change. In addition,
when the question regains its visibility, the previous responses are recovered, and
this event is also recorded as a change by the system. Both changes, that is, that
the item has lost its visibility and that the item responses were recovered, then
trigger the execution of the on-change processing logic associated to the
questions.
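As a sketch of the "Country of Birth" example above, the on-change launch data
transform of the first question could contain steps like the following (step wording and
property names are illustrative, not actual rule content):

1. Set .CountryOfCitizenship equal to .CountryOfBirth (populate the second response).
2. Apply the RegisterPropertyForChange data transform, passing the name of the
changed property (CountryOfCitizenship), so that the engine triggers the on-change
actions of the second question.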
To avoid any infinite loops resulting from a chain reaction of linked on-change actions,
these actions are executed in a sequential manner, starting from the first question and
ending with the last one. This means that any changes made by a question to a
question above it in the KYC questionnaire are not taken into account for the execution
of the on-change actions. However, these changes are still taken into account for the
execution of simple conditions, because simple conditions do not pose a risk of
generating infinite loops.
With 8.6, the KYC engine registers all the possible types of changes and reacts to them.
The programmatic execution of the engine routines from the on-change data
transforms is therefore no longer required; it can be replaced with a call to
SetupResponseChangeList, passing the name of the property that you want to set
programmatically.
Due diligence profiles
Financial institutions typically conduct due diligence under three different profiles or
categories: simplified, standard, and enhanced.
This categorization can be used to dynamically shape the different KYC questionnaires
shown to the user, making sure that only the relevant questions are displayed
according to the profile of the customer. Simplified due diligence, for example, is a
leaner version of the standard one and skips many of the questions of its standard
sibling.
The KYC Engine provides the ability to create different profiles based on business
needs. These profiles determine which questions are shown when the profiles are
active. Profiles can be viewed as simple flags that are turned on or off by configurable
decision rules, and that in turn drive the visibility of questions. The module that
enables this functionality is called KYC Due Diligence Profiles.
For more information, see the article on "Know Your Customer due diligence profiling"
at Pega Know Your Customer for Financial Services knowledgebase articles.
Users can define a set of business-related profiles under a custom KYC Profile Suite rule.
The rule contains a list of profiles and the conditions and business rules that make
those profiles active or inactive.
Users can create a suite with three main profiles (simplified, standard, and enhanced)
and define the rules for each of them to be active or inactive (for example, an enhanced
profile is for high-risk customers, a standard profile is for medium-risk customers, and
a simplified profile is for low-risk customers).
1. In the header of Dev Studio, click Create > Technical > KYC Profiles Suite to create
a due diligence profile suite.
2. In the Label field, type a short description of the profile suite.
3. In the Context section, select a context.
4. In the Add to ruleset list, select the specialized application’s associated ruleset.
5. Click Create and open.
6. Click Add Profile.
7. Enter the Profile ID and Description and select the Active based on logic.
8. Save and check in the profile suite.
Result:
Once a KYC Profile Suite is created, it can be referenced from all the KYC Types
under the same class hierarchy. The profiles are all recorded under a Profile
Suite named Default and placed under PegaKYC-Data-Type-Policies-Global. Users
can choose to either use this suite, modify it, or create a new one.
Profiles that are not used widely across the application can be defined directly as local
profiles under the specific KYC Type where they are used. Open the impacted KYC Types
and manually add the required profile or profiles. The suites associated with that KYC
Type and the local profiles together define the profiles that the KYC Type manages.
Each KYC Type has a list of applicable profiles, either because the Type was configured
to use certain profile suites, because profiles were defined locally in the KYC Type, or
both. Users can then associate each item or item group with one or many profiles,
using either the inclusion or exclusion association type. The inclusion association type
renders the associated items and item groups when the profile is active; conversely,
the exclusion association type hides them if the profile is active.
Supporting evidence
The KYC questions sometimes must be supplemented with relevant evidence. This
evidence includes scanned documents, screen shots, and even another case associated
with the customer. The KYC application provides an extensible Supporting Evidence
framework that you can use to configure the type of evidence with which you want to
supplement the KYC questions.
You can customize these evidence types or add new evidence types that better
represent your business needs.
Financial institutions may require that each of the questions on the KYC Type have a
distinct supporting evidence rule. Each supporting evidence rule can be specialized by
the application layers so that each layer in a multi-layered application structure can
have different evidence types without having to specialize the KYC Type.
1. Create a class that represents the instances of the new evidence type (for
example, a URL from an external system, a reference to a certain type of case, and
so on). The new class should extend the class PegaKYC-Data-SupportingEvidence-
Evidence and populate the properties that it defines.
2. Add a new evidence type to the application by creating a new abstract class
inheriting directly from PegaKYC-Data-SupportingEvidence-EvidenceType. For the
evidence type to be recognized and interpreted by the engine, this newly created
class must contain the specialized versions of the following assets:
Data Source
A data source that can retrieve the pieces of evidence that the user may
want to link to the question. This data source should be a data page. The
items retrieved through this data source should be of the evidence class that
was initially created (for example, URL) and stored under the Evidence page-
list property.
Initializer
Each evidence type must have a unique ID, name, and description. These should
be initialized by creating a pyDefault data transform and setting the pyID, pyLabel,
and pyDescription properties respectively. Ensure that the ID of the evidence type
is not changed once it is set and rolled out to production. Changing the ID will
cause it to be processed as a new evidence type and may result in loss of data.
Setup
If the data source requires parameters, or if you wish to do any preprocessing
before the evidence types are displayed, then you may specialize the
SetupHandler data transform. To make the parameters available to the data
source, you must also create the relevant properties in this class and set them in
the SetupHandler data transform, to be later used by the data source.
User Interface
The KYC application ships a reusable user interface component that can present
and facilitate the selection of the evidence. Using the default user interface
ensures consistency across all evidence types and seamless access to the
preconfigured components created for selection and deletion.
Note: To use this component, you must comply with the directions listed
above for the data source and specialize the ListSupportingEvidence
section, which should include, within the context of the data source, the
AvailableSupportingEvidence section. To implement any behavior related
What to do next:
In addition to list-based evidence, you can add complex evidence such as screen
shots or video recordings. However, adding such evidence types requires you to
create advanced user interface components that enable you to capture, select,
and delete such evidence. If you create complex evidence types, you must
ensure that the instances selected by the user are stored in the temporary
TempSupportingEvidence.EvidenceType(<Evidence Type Id>).Evidence(1) page. This
enables the system to seamlessly integrate complex evidence types into KYC
processing.
Optionally, if the evidence type and evidence need to be audited, you can use the
audit traceability features available in Pega Foundation for Financial Services. To
enable this configuration, the corresponding class must contain the
FISIFTrackSecurityChanges data transform that lists the properties to track.
For more information, see "Creating a new evidence type" in Supporting evidence.
When a question is configured to use a certain supporting evidence rule, users can use
(as evidence) the different evidence types registered in that rule. To create a new
instance of the supporting evidence rule, complete the following steps:
1. In the navigation pane of Dev Studio, click Records > Technical > KYC Type and
select the KYC Type that contains the KYC Item to which you want to add
supporting evidence.
2. Click the Item Definition tab and double-click the KYC Item.
3. Save the KYC item into your ruleset.
a. In the Label field, type a short description of the supporting evidence.
b. In the Context section, select a context.
4. In the Supporting evidence field, select one of the following types of evidence to
require.
• CustomerDocuments
• Requirements
• CustomerDocumentsandRequirements
5. In the Supporting evidence condition list, select the condition that determines
whether the evidence is required, such as always, or depending on a when rule.
Design Time Migration
With this approach, the migration is performed for a KYC Type when it is copied
from an older version of the engine to Pega Know Your Customer for Financial
Services 8.5 or higher. This ensures that a KYC Type uses supporting evidence
both in the form of the rule and in the case.
Runtime Migration
With this approach, the KYC types rule instances do not need to be re-saved, but,
instead, their run-time copies are migrated as they are applied to the case. This
approach ensures that all the newly created cases use Supporting Evidence in a
uniform way. However, this approach does not guarantee uniformity of the KYC
rules at design time.
If your business demands uniformity of the KYC type rule instances, then you can
migrate all the KYC types by re-saving them all in a higher ruleset version.
After a KYC type has been updated to use the Customer Documents handler, the data
that could have been collected before 8.4 for that KYC Type will be automatically
migrated at the time of initializing new KYC cases. The system will automatically change
the supporting documents into supporting evidence, and ensure seamless reuse of the
data.
Any open cases that use the older version of the KYC types will continue to use
supporting documents, unless they are updated using the Surgical policy updates
feature.
Policy profiles
The Master Profile is a central source of data, which is referred to and synced multiple
times in the lifecycle of a case. During the persistence of the Master Profile, the
application extracts all the policies (processed KYC Types) of that customer and stores
them separately in the Policy Profiles repository, an independent database table used
to maintain all the policies of the customers. Under this configuration, the system
keeps the KYC-specific data separated from the other application classes and database
tables, bringing significant benefits in terms of performance and data segregation.
For customers upgrading from previous versions of the application (8.1 or earlier), there
are three different options available.
Bulk Migration
You can migrate all of the KYC Types currently stored in the existing Master
Profiles into the new Policy Profiles by using a bulk re-save process of the Master
Profile instances. During that process, the system extracts the policies and places
them in the new repository.
On-demand Migration
If no bulk migration is done, the policies are migrated into the new repository as
the Master Profile instances are updated for different reasons (for example,
changes in contact information, products, and so on). This approach facilitates the
implementation, as no bulk migration is required, but it takes some time to have
all the Master Profiles moved into the new table.
Disabling the Policy Profile
This option is not recommended and can have a significant impact on performance. This is a legacy configuration that should be considered only when upgrading from a previous version of the application that was implemented with custom logic that reads KYC data directly from the master profile. See "Disabling the Policy Profile" in Policy profiles.
Turning the Policy Profile off may affect application performance. Disable this enhancement only if you have specific customizations that prevent you from using it.
CAUTION:
Disabling the policy profile increases the size of the master profile and
adversely affects system performance. It is hence recommended that you do
not disable the policy profile. Usage of the policy profile ensures that any
enhancements made to the policy profile and KYC Type data in subsequent
releases are seamlessly available for consumption. Disabling the use of the
policy profile disables functionality such as KYC Type audit, data reuse, and so
on.
Policy memory
The policy profile is the verified source of data for maintaining a record of the KYC policies associated with a customer. Only policies that have been reviewed and approved can be stored in the policy profile. You may, however, need to store temporary unapproved KYC policies for reference or reuse. Policy memory serves as an alternative storage mechanism for them.
Policy memory has many uses. For example, if a KYC case is withdrawn and recreated due to a change in the driver data, you can use the policy memory to temporarily store KYC policies and reuse them when a similar case is triggered, thus preventing the loss of valuable KYC information.
You can create a record in the policy memory by invoking the Policy Memory Creator method. This method, located in the Work- class, uses a customer ID, a memory ID, and a source page as parameters. The API creates a policy memory that is uniquely identified by the customer ID and the memory ID, and loads it with the KYC policies supplied on the source page.
The memory ID can be any value that signifies the purpose of creating the policy memory for a customer and can uniquely identify it. For example, in a complex onboarding case created for a customer, the memory ID can be set to the ID of the onboarding case, implying that all of the KYC cases created for the onboarding case will use the same policy memory identifier. At the time of retrieving records from the policy memory, the system can use the customer ID and memory ID to ensure that the KYC types stored as temporary data in one onboarding case are not picked up and reused from another.
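To illustrate this keying scheme, the following Java fragment is a minimal conceptual sketch of a store in which each record is addressed by the combination of customer ID and memory ID. The class and method names are hypothetical and do not represent the actual Pega engine API.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Conceptual model of the policy memory store; not the actual Pega API.
    public class PolicyMemoryStore {

        private final Map<String, List<String>> records = new HashMap<>();

        // Records are addressed by the combination of customer ID and memory ID.
        private String key(String customerId, String memoryId) {
            return customerId + "|" + memoryId;
        }

        // Models the Policy Memory Creator method: stores the KYC policies
        // supplied on the source page under the composite key.
        public void create(String customerId, String memoryId, List<String> kycPolicies) {
            records.put(key(customerId, memoryId), kycPolicies);
        }

        // Models D_GetPolicyMemory: both parameters are mandatory, so the KYC
        // types stored for one onboarding case are never picked up by another.
        public List<String> get(String customerId, String memoryId) {
            return records.get(key(customerId, memoryId));
        }
    }

Setting the memory ID to the onboarding case ID, as described above, keeps all the KYC cases of one case hierarchy on the same record.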
The engine does not impose any restrictions on the number of policy memories that can be created for a customer. However, for maximum reuse of the KYC policy data, it is recommended that you use one policy memory record per customer per case hierarchy.
The policy memory creator API also accepts a KYC status as a parameter. This parameter can be used to denote the status of the type when the policy memory was created (for example, in initial capture, in review, and so on). The status can be used later to decide the action to be taken on the case after the retrieval of KYC types from policy memory.
You can access policy memory by invoking the D_GetPolicyMemory data page, which uses the customer ID and memory ID as mandatory parameters. Most of the time, customers prefer using the Policy Profile interface to automatically consolidate the results from the policy profile and the policy memory, instead of making an explicit invocation of the policy memory. To get that consolidated view, you must invoke the D_GetPolicyProfile data page, passing the memory ID along with the existing customer ID parameter. If the memory ID parameter is not passed, D_GetPolicyProfile continues to present pure policy profile information: data from policy memory is not considered and no consolidation is made.
In the process of consolidation, if a KYC type exists in both the policy profile and policy
memory, then the most recently updated one is preferred, thus ensuring the latest
information is on the resultant consolidated page.
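As a conceptual sketch of that preference rule (the class and field names are illustrative, not the Pega implementation), the consolidation can be pictured as a merge that keeps, for each KYC type, the copy with the most recent update time stamp:

    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    // Conceptual sketch of the consolidation performed when D_GetPolicyProfile
    // receives a memory ID; class and field names are hypothetical.
    public class PolicyConsolidator {

        public record KycType(String typeId, Instant lastUpdated) {}

        // If a KYC type exists in both sources, the most recently updated copy
        // wins, so the consolidated page always holds the latest information.
        public static Map<String, KycType> consolidate(Map<String, KycType> policyProfile,
                                                       Map<String, KycType> policyMemory) {
            Map<String, KycType> consolidated = new HashMap<>(policyProfile);
            for (KycType candidate : policyMemory.values()) {
                consolidated.merge(candidate.typeId(), candidate,
                        (existing, incoming) ->
                                incoming.lastUpdated().isAfter(existing.lastUpdated())
                                        ? incoming : existing);
            }
            return consolidated;
        }
    }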
The KYC engine has been updated to use this new consolidated data page in its processing. To reuse the KYC type information from the policy memory in the KYC case, the system must be configured to use the KYC Policy Profile, and the pyCustomer and PolicyMemoryID properties must be available on the work object's primary page.
If your implementation cannot make use of the job scheduler, or you wish to clean up the policy memories, use the CleanPolicyMemoryArchive API, present in the work class. Similar to D_GetPolicyProfile, this API uses the customer ID and the memory ID to locate and delete the policy memory records. The method has been designed to delete only those KYC types from the policy memory that are present on the page from which the API is invoked. The API must be plugged into the KYC case flow right before the KYC type data is synchronized to the policy profile and the case is resolved. The method automatically deletes the record in the policy memory when no more KYC types are available on it.
Surgical policy updates
All KYC cases are created with the latest KYC policies. If there is a change in the policies
while a case is in progress, the KYC Engine ensures its compliance against the latest
regulatory policies through the Surgical Policy Update (SPU) engine.
You can use the SPU engine to define a list of rulesets that contain the KYC policies and
associated assets. This list of rulesets is called the regulatory stack and every KYC case
refers to the regulatory stack that was active when the case was created.
The SPU engine monitors the total rule count of all regulatory stack rulesets. If there are any additions to the overall rule count in any of the configured regulatory rulesets, the SPU engine takes it as a change in policy and updates the policy on in-flight cases with the latest version of the stack.
For example, you configure your application to have two rulesets in the regulatory stack: KYCRCEMEA and KYCRCAPAC. The first implementation of the application goes to production with ‘X’ rules in version 01-01-01 of these rulesets. A few months later, either via a hotfix or a new release, if you import a new version of a ruleset (for example, KYCRCEMEA 01-01-02) that has new or updated rules in it, or you add a new ruleset to the regulatory ruleset list (KYCRCAmerica) with certain rules in it, the SPU engine detects the change and updates the policy on all the in-flight cases. If a new release adds no rules to any of the rulesets in the regulatory ruleset list, the SPU engine makes no changes to the policies, as there are no regulatory changes.
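The detection logic can be pictured with the following simplified Java model, written under the assumption that the engine compares the overall rule counts of the configured regulatory rulesets; the class and method names are hypothetical:

    import java.util.Map;

    // Illustrative model of the SPU change detection: any change in the overall
    // rule count of the regulatory rulesets is treated as a policy change.
    // Names are hypothetical; the Pega engine implements this internally.
    public class RegulatoryStackMonitor {

        // Example counts: {"KYCRCEMEA": 120, "KYCRCAPAC": 95}
        public static boolean policyChanged(Map<String, Integer> countsAtCaseCreation,
                                            Map<String, Integer> currentCounts) {
            int baselineTotal = countsAtCaseCreation.values().stream()
                    .mapToInt(Integer::intValue).sum();
            // A new ruleset in the registry (for example, KYCRCAmerica) raises
            // the current total and therefore also triggers a policy update.
            int currentTotal = currentCounts.values().stream()
                    .mapToInt(Integer::intValue).sum();
            return currentTotal != baselineTotal;
        }
    }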
The cases that are identified for a policy update are processed in the background, to avoid disruption to users. If users are actively working on KYC cases while the SPU background processing is in progress, those cases are skipped. To address this scenario and ensure that all cases are completed with the latest policies, the SPU engine also includes a manual update processor utility, KYCPolicyUpdate. This utility can be plugged into the KYC data collection flows to ensure that each KYC case is checked for the latest regulatory compliance before it is submitted. If required, the case is stopped for an update.
When a case needs to be updated, the SPU engine makes an internal copy of the data
in the policy memory (for more details see Policy memory). After the reinitialization of
the KYC Types, the SPU engine uses that data to pre-populate the answers to the
questions in those types.
The regulatory stack is also used to isolate resolved cases from later changes in the rules. The SPU engine can freeze the read-only display of the KYC types so that they appear in the form in which they were applied to the case.
• Defining in-flight cases
• Manual updates
The rulesets listed in this registry are monitored by the SPU engine for the addition or
deletion of ruleset versions. When a change is detected, a policy update is triggered.
Similarly, the addition or deletion of rulesets in the registry is treated as a policy change
and thus is considered for a policy update.
1. In the header of Dev Studio, in the search text field, enter PegaKYC-Data-RegulatoryStack.KYCRegulatoryRulesetsRegistry and select the data transform.
2. Create an application-specific version of the data transform if you have not already created one.
Note:
A ruleset that is added to the regulatory stack should not be removed from the list or the application. Doing so adversely affects the policy-freezing capability of the SPU engine.
Defining in-flight cases
The SPU engine considers for update only those cases that are marked as in-flight. It is important for each application to define the criteria that qualify a KYC case as an in-flight case. The KYC engine comes with basic criteria based on the status of the case. If you need to change those criteria, you can extend the Work-MapStatusWorkToKYCCaseStatus decision table. This decision table must be configured to return “Inflight” when the KYC case meets the specified criteria.
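Conceptually, the decision table behaves like the following sketch; the statuses shown are examples only, and an implementation should substitute its own qualifying criteria:

    // Conceptual equivalent of the Work-MapStatusWorkToKYCCaseStatus decision
    // table. The statuses below are examples only; an implementation should
    // encode its own criteria for what qualifies as an in-flight KYC case.
    public class InflightMapper {

        public static String mapStatus(String workStatus) {
            switch (workStatus) {
                case "New":
                case "Open":
                case "Pending-Investigation": // hypothetical example status
                    return "Inflight";        // case is eligible for an SPU update
                default:
                    return "NotInflight";     // resolved or otherwise excluded
            }
        }
    }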
The queue processors are configured by default to run with multiple concurrent threads – a configuration that you may need to adjust to your needs.
This is the number of concurrent threads that the system processes per node in the
cluster. One thread translates into one case being processed. A configuration of 3
threads in, for example, a cluster of 4 nodes configured for background processing,
provides an overall throughput of 12 simultaneous cases. You may need to adjust this
number based on your volumes and the size and performance of your system.
The queue processors are also configured to manage error scenarios. This is the
configuration that comes with KYC by default:
Max Attempts: 3
In some situations, the system can encounter errors while processing a case. For example, an external system may be down, or an unexpected situation prevents the process from finishing. Retrying 3 times is usually enough to resolve those situations. After the maximum number of retries is reached, the case goes to the broken-process status and an administrator needs to take action.
Initial delay (in minutes): 5
This is the time that the system waits between a first failed execution attempt and
a second attempt.
Delay factor: 2
This is a correction factor applied to the initial delay between subsequent retries.
For example, if the initial delay was 1 and the delay factor is 2, the system waits 1
minute between the first attempt and the second, 2 minutes between second and
third, 4 minutes between the third and fourth attempts and so on.
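The resulting wait times follow the pattern initial delay × delay factor^(retry − 1), as the following small sketch illustrates for the example above:

    // The wait before retry n is: initial delay * delay factor^(n - 1).
    // With an initial delay of 1 minute and a delay factor of 2, this prints
    // 1, 2, and 4 minutes, matching the example above.
    public class RetryDelays {

        public static void main(String[] args) {
            double initialDelayMinutes = 1;
            double delayFactor = 2;
            int maxAttempts = 3;
            for (int retry = 1; retry <= maxAttempts; retry++) {
                double delay = initialDelayMinutes * Math.pow(delayFactor, retry - 1);
                System.out.printf("Wait before retry %d: %.0f minute(s)%n", retry, delay);
            }
        }
    }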
You can modify these settings in the implementation to yield a higher or lower throughput, as desired. To change the settings, do the following steps.
What to do next:
Any queue processor that runs for more than fifteen minutes generates a
PEGA00131 alert. If you have deployed your application on Pega Cloud, this alert
results in marking the node as unhealthy, and in shutting the node down.
Therefore, to limit how long the SPU Master queue processor runs, the processor is configured to re-queue after processing every 5,000 records, which takes approximately 10 minutes to execute. You can configure the threshold for the number of records processed before the SPU master is re-queued, and therefore the duration, by updating the PegaKYC/RecordsThresholdForSPURequeue Dynamic System Setting.
• Trigger the SPU Master processor to start the SPU update processing.
• Update the D_Types data page to ensure the latest regulatory hash is available.
The default KYC watchdog, KYCWatchDog, is fully equipped to process policy updates on KYC cases. This watchdog must, however, be specialized into the implementation layer and updated to use your application-specific access group. To specialize the watchdog, perform the following steps.
Manual updates
Cases may be skipped by the SPU queue processors due to various issues, ranging from locked cases to cases with processing errors. To ensure that such cases are not left
with outdated policies, the case flows must be updated to include the KYCPolicyUpdate
activity. This activity checks and applies the latest policies to the cases if they are
outdated.
An import of regulatory rules that is still in progress could result in the system using a partial list of KYC Types while applying them to new cases, and could also result in an inconsistent surgical policy update for the in-flight cases.
Therefore, you must pause the regulatory watchdog throughout the duration of the installation or upgrade, and resume it only when the installation or upgrade is completed successfully. The regulatory watchdog can be paused or resumed by toggling the dynamic system setting PegaKYC/RegulatoryImportInProgress to true or false, respectively. To access the setting, complete the following steps:
The data stored on the node-level data pages includes attribute conditions such as applies-when and read-only, sequence numbers, and initializer data transform rule references.
• KYC Type attributes that include the KYC type identifiers and the applicability
conditions are loaded and stored on the D_Types data page. This data page is an
index of all the types available to the application and their main applicability
conditions.
• The actual content of each of the types – all KYC items, conditions, triggers and
data transforms, and so on – is stored under the D_TypeObject data page. The
system maintains one instance of this data page per KYC Type in the system.
To keep these node-level data pages up to date after changes in the KYC Types, and to minimize the performance impact on users, the system uses two different resources.
The first one is a declare trigger that cleans the node-level data pages and forces them to reload as soon as there is a change in an existing type or when a new type is created. It is important to note that this trigger is only used if the operator making the changes has KYC in the stack. For example, an import of rules using a generic administrator without KYC will not trigger the reload of the data pages.
In addition to the trigger, the system also periodically refreshes the data pages. Every
day, the data pages become stale and need to be reloaded. Without any special
configuration, the data pages are reloaded the next time that they are invoked, usually
during a user session. To avoid the impact on the first user of the day, the application
has an agent that can be configured to run at night to reload the data pages.
The system also includes a watchdog under the new KYC Watchdog infrastructure. This watchdog monitors the KYC rule type table and flushes the data pages if any rule has been added or modified.
Watchdog Infrastructure
The Pega Client Lifecycle Management and KYC Engine's watchdog infrastructure simplifies the implementation of all jobs that must be executed at regular intervals. The watchdog is driven by the KYCWatchDog job scheduler. This job is executed periodically based on the configuration of the scheduler (by default, every 5 minutes). In each execution, it iterates through the different watchdog classes registered in the system and executes them.
To add a new watchdog, copy the KYCWatchDog job scheduler to your implementation and update it to have an access group that best represents the application in the implementation layer. You may require one copy of the job scheduler per application if there is more than one application in the implementation that needs an application-specific watchdog.
Once the job scheduler is copied and updated, create a subclass of PegaKYC-Data-WatchDog-Logic (for example, PegaKYC-Data-WatchDog-Logic-Types) and create an activity named PerformWatchTask under it. This activity must contain the logic that the watchdog is intended to execute.
Without any additional configuration, the system executes the logic that is defined
under each of the logic classes in the PerformWatchTask activity.
For a more advanced configuration, where the watchdog must keep track of the last execution of the task and drive the next execution based on it, use the PegaKYC-Data-WatchDog-Log log class. The logic to read or write these instances should be included in the PerformWatchTask activity and should consider the following fields of the class:
Last Run
Last execution time stamp of the watchdog
Watchdog Name
A standard short name for the watchdog. For example, KYCTypes
Watch Class Name
The class that was used for implementation of the watch dog. For example,
PegaKYC-Data-WatchDog-Logic-Types
A user can use this data in the next run of the PerformWatchTask activity by accessing the D_WatchDogLog data page.
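The following Java sketch models how a PerformWatchTask implementation can use the last execution time stamp to drive an incremental check; the names model the records described above and are not the actual Pega API:

    import java.time.Instant;

    // Conceptual sketch of a PerformWatchTask implementation that uses the
    // watchdog log fields (Last Run, Watchdog Name, Watch Class Name) to run
    // an incremental check. This is not the actual Pega API.
    public class KycTypesWatchdog {

        private Instant lastRun = Instant.EPOCH; // read from D_WatchDogLog in practice

        public void performWatchTask(Instant latestKycTypeChange) {
            if (latestKycTypeChange.isAfter(lastRun)) {
                // A KYC type was added or updated since the last run: flush
                // D_Types and D_TypeObject so that they reload on next access.
                flushNodeLevelDataPages();
            }
            lastRun = Instant.now(); // written back to the watchdog log
        }

        private void flushNodeLevelDataPages() {
            System.out.println("Flushing D_Types and D_TypeObject");
        }
    }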
The application includes a watchdog to detect changes in KYC Types and reload node
level data pages accordingly. The watchdog uses the last execution time stamp of the
watchdog to determine if there are any new KYC types that were added to or updated
in the system since the last run of the watchdog. The watchdog flushes the D_Types and
D_TypeObject data pages if there are any new types added or updated in the system.
Since the D_Types data page uses rule resolution, it would be prohibitively complex and time-consuming for the watchdog to determine whether the application was impacted by a change. Instead, the watchdog flushes all node-level data pages and relies on the system to load each of the types on the data pages when the first user accesses them. Therefore, every day, and every time a KYC type is added or updated in the system, the first user of each of the KYC types may experience a slight delay while applying the KYC types to the case.
For example, the answer to the question about the political exposure of a party can be used to analyze the demographics of the financial institution’s clients by determining the percentage of clients that are politically exposed. Organizations can use this information to make business or operational decisions that support their needs.
Although Pega Platform provides some reporting and analytical capabilities, this analysis is usually carried out in an external data mining system. These systems require data to be supplied to them in a predefined format that can be interpreted and analyzed. To expose the KYC data to those systems, we recommend using Pega Business Intelligence Exchange (BIX).
The following sections describe how the KYC data can be exported. You should consider two main options:
• The first option is to take a full extract of the data in the Policy Profile.
• The second option reduces the performance overhead of the extraction by targeting a reduced subset of the data.
Review the following sections to determine which option best suits your needs, and refer to the material available on the Pega Community page on data extraction for details about how to configure BIX.
However, it is important to note that the exported data will also include associated metadata, such as visibility conditions or on-change actions, that is used for the display and processing of the KYC type and items. This metadata contains the names of decision rules and properties that, although necessary at runtime, may not be of any value in your data mining systems. This metadata can amount to up to 70% of the size of an average Policy Profile. Therefore, we recommend that you only follow this approach if some of that metadata is required for your reporting (for example, if you need to analyze the literal wording of the questions in the KYC questionnaire), or if you want to export KYC data quickly and the size of the extract is not a concern. If the size of the extract is a concern, see Configuring a selective data extract.
• Enabling BIX
Enabling BIX
To configure a full extract, the first step is to include the BIX ruleset in your application stack. For details about how to carry out this step, see Enabling the BIX ruleset.
After the ruleset has been made available, you must create the assets that are used for generating the BIX extracts for your due diligence data. These assets include the extract rule, and a job scheduler to periodically run the extract rule in order to generate the extracts.
To extract all the data available in the Policy Profile, you must create the extract rule in the PegaKYC-Data-PolicyProfile class, which is where all the KYC data is stored. For a partial extraction of data, see Configuring a selective data extract. The extract rule has three important configuration elements that must be carefully coordinated:
Target data
This configuration is available in the Definition tab of the extract rule, and allows you to select the data to be extracted for each object in the extract. BIX gives you the possibility to select an output format (CSV, database, or XML) and the individual properties to be extracted. However, given the very large number of properties that the KYC Types hold and the nature of the data structure that supports them, the only output format that can be used in this specific scenario is XML (and with all the current properties). For this purpose, select XML as the output format, and select the Get all properties check box.
Filter Criteria
When the extract rules are executed, they generate by default an output file with all the objects of the class that they belong to. If used with this default configuration, the size of the output file could grow steadily over time and become difficult to handle. Therefore, it is important to carefully pick your
filtering criteria from the Filter criteria tab on the Extract rule. We recommend
that you extract KYC data by generating incremental extracts, where each extract
holds only those objects that were updated since the last extract. This can be
easily configured by checking the Use last updated time as start check box.
File name
This configuration is available in the File Specification tab of the extract rule. It allows you to specify where the output file will be placed and what the name of the file will be. If you choose to generate automated extracts on a periodic basis, it is important that each output file has a unique name. Otherwise, each new output file overwrites the previous one, which results in losing data. You can ensure that each output file has a unique name by including the %t wild card, which appends the extraction time stamp to the file name.
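For example, under the assumption of a file specification such as KYCExtract_%t.xml (the base name here is purely illustrative), each run would produce a distinct file along the lines of KYCExtract_20240714T221716.xml instead of overwriting a single KYCExtract.xml; the exact format of the timestamp that replaces %t is determined by BIX.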
For more details, see the Pega Community article Using Job Scheduler rule to Extract Data with BIX.
As part of the configuration of the job scheduler, you must determine the optimal
frequency of generating extracts that best suits your business needs and technical
infrastructure. The frequency of extracts must be dictated by the size of the data that
you wish to extract, and by the time that it takes to extract it. We recommend that you
schedule frequent runs with smaller data extracts that last for a number of minutes,
rather than a single run that can last for a matter of hours.
For example, a financial institution that onboards fifty thousand customers every day, and has a job scheduler that executes daily, may need around four hours to generate a single extract that contains the fifty thousand Policy Profiles. If there is an error or a system outage during the execution of the extraction job, the job is aborted and the output file may not be saved, or may be corrupted. After that, the job scheduler starts again from the beginning, which wastes valuable time. This situation can be avoided by:
• either reducing the size of the extracts by minimizing the data that is extracted (by
deleting any data which is not required from the Policy Profiles, see Configuring a
selective data extract)
• or increasing the frequency of executions to, for example, every one hour or every
two hours, which leads to more lightweight extractions that are easier to maintain.
Configuring a selective data extract
Before following this approach, it is important to understand that all the data stored in the Policy Profile is required by the system for a range of different purposes (for example, for the initialization of new KYC cases, or for the purposes of audit). Therefore, manipulating the data to reduce the size of the Policy Profile cannot be done directly on the Policy Profile itself; it takes place on an alternate storage area with a trimmed-down version of the Policy Profiles that can be used subsequently for extraction.
You can generate the reduced version of the data by creating the assets described
below. Use the steps defined in Configuring a full extract as the basis of the process,
but apply the steps that are specific to configuring a selective data extract.
Create a concrete class with a database table mapped to it. To avoid the creation of properties that may already be present, you should extend this class from the PegaKYC-Data-PolicyProfile class, and you must copy its associated database table from the one used by the Policy Profile class. This ensures maximum reuse of the assets from the PolicyProfile class, as well as a physical separation of extraction and application needs.
Once the storage class is ready, the next step is to populate this storage with the copies
of the Policy Profiles. This must be done in the background to avoid any performance
impact on case processing. Create a declare trigger on the Policy Profile class and
configure the trigger to execute on commit and in asynchronous mode (In background
on copy).
The trigger invokes an activity that copies the policy profile to the new table that you created before and, during that process, cleans up the metadata by iterating over the Types and deleting the Items property list. Once the trimmed-down version of the Policy Profile is complete, the activity saves the new instance into the database so that it can be used later by the extraction process.
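Conceptually, the trimming performed by the trigger activity can be sketched as follows; the classes and fields are illustrative stand-ins for the Pega structures described above:

    import java.util.ArrayList;
    import java.util.List;

    // Conceptual sketch of the trigger activity: copy each Policy Profile into
    // the extraction table while deleting the Items property list, which holds
    // the display and processing metadata. Class and field names are
    // illustrative stand-ins, not the actual Pega rules.
    public class PolicyProfileTrimmer {

        public static class KycType {
            String name;
            List<String> items = new ArrayList<>(); // metadata-heavy property list
        }

        public static List<KycType> trimCopy(List<KycType> types) {
            List<KycType> trimmedCopy = new ArrayList<>();
            for (KycType type : types) {
                KycType trimmed = new KycType();
                trimmed.name = type.name;
                trimmed.items = new ArrayList<>(); // Items property list deleted
                trimmedCopy.add(trimmed);
            }
            return trimmedCopy; // saved into the extraction table for BIX
        }
    }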
As the last step, you must create an extract rule in the new class, and this rule must be executed through a job scheduler following the general guidelines. For more information, see the Configuring a full extract section.
• Solution overview
Introduction
In Pega Client Lifecycle Management and KYC, while performing due diligence for contracting and related parties, KYC questionnaires are routed to the KYC analysts for completion. Questionnaires are of different natures – Global KYC, Local KYC, Regulator KYC, and Tax – and contain a combination of mandatory and non-mandatory items spread across simple and complex KYC items.
During the completion of the KYC questionnaires, there may be scenarios where KYC questions are considered mandatory for the due diligence process to complete, but are not applicable from a business perspective. In those situations, KYC analysts tend to respond to those questions with random or inaccurate values, a practice that compromises the quality of the KYC data. The KYC Item Applicability functionality bridges this gap and provides flexibility for KYC analysts to flag any irrelevant questions as Not applicable, so that those questions are considered completed even when no answer is provided.
Solution overview
In Pega Client Lifecycle Management and KYC, the KYC item applicability feature
provides an option for the KYC analyst to mark KYC items as Not applicable, so that the
KYC item is considered completed and does not impact the due diligence process of the
party. The applicability feature is invoked through a flow action available in the contextual action menu of each KYC item.
On launching the Applicability action for a particular KYC item, users need to confirm whether the item is applicable or not, as well as provide the reasoning behind the decision. These details are stored along with the item and later made available to the KYC reviewer during the subsequent approval process and in any future use of the KYC item.
When a KYC Item is marked as Not applicable, the control to capture its response is
made read-only and a tag is displayed along with the question to highlight this
circumstance.
It is important to note that the same action that is used to declare an item as Not applicable can also be used to revert the situation. Based on the action taken by the user, the KYC engine executes the following sequence of steps:
1. Recording the applicability decision and the supporting details provided by the user on the KYC item.
2. Clearing the existing KYC item response along with any supporting evidence.
3. Re-assessing the KYC Type associated with the KYC item:
• Re-assess the completion and read-only logic for the KYC item.
• Re-assess any dependent KYC items based on the new blank response.
4. Updating the KYC type screen to visually indicate the latest changes by displaying the Not applicable tag, the additional details link, and the item completion icon.
It is important to note that the applicability tag and the additional details provided by the KYC analysts are persisted along with the KYC Item in the KYC Policy Profile, and are therefore used during the initialization of KYC Types in subsequent journeys on the same party, such as maintenance or periodic reviews.
Setting Description
These settings are loaded at runtime during the creation of the main journey case and then propagated to the different due diligence cases. The configuration is maintained using the data transform and data page listed below. These rules can be modified to obtain more flexible behaviors – for example, the use of different modes based on customer or journey data.
Rule Description
Different financial institutions have varied approaches to quality control in Know Your
Customer (KYC) processes. Previously, Pega Client Lifecycle Management and KYC only had a case-level quality control process. KYC reviewers had to assess the entire case,
either approving it or sending it back with comments. While this method is
straightforward, it becomes cumbersome in complex KYC scenarios with extensive
questionnaires.
The KYC Quality Control process offers more flexibility. Reviewers can now approve,
reject, and comment at the item or item group level. This allows analysts to easily
correlate feedback with specific items or groups, streamlining the process. Moreover,
the interaction history is recorded as KYC remarks, ensuring transparency and
compliance with operational and regulatory standards.
• Configuration settings
• Extension settings
During that process, users can take certain actions on the KYC cases, items, and item groups based on their roles and the pre-configured working modes. The following sections provide more details about the internals of this maker-checker process and its main configuration options. The possible statuses for a KYC case are outlined below.
Case Statuses
Users can perform various actions on the KYC cases, items, and item groups based on
their roles and predefined working modes. The various statuses of the case include:
Status Usage
Review modes
The system provides three different working modes for a KYC Reviewer to complete the
approval of a KYC case. The main differences are in the granularity at which the KYC
Quality Control actions are performed.
• Case level – Under this working mode, the KYC Reviewer has to accept or reject the entire KYC case as a whole. The KYC Reviewer does not have the ability to partially accept the questionnaires or make comments directly against specific items or groups. The approval or rejection applies to the entire KYC case.
• Item level – The KYC Reviewer can approve or reject individual KYC items. If all the KYC Items are approved, the KYC case can be approved and subsequently resolved. Otherwise, if any item is rejected, the KYC case is taken back to the KYC Analyst, indicating which items were rejected and the reasoning behind the rejection.
• Item group level – The KYC Reviewer can only approve or reject all items under
the same KYC item group. The KYC case can be approved and resolved when all
item groups are approved. Otherwise, the case can only be rejected and taken
back to the KYC Analyst for further action.
In the 24.1 release, the Due Diligence process has been enhanced to provide analysts with a clear view of incomplete, approved, rejected, and pending mandatory items, making the process more efficient and faster.
• KYC banners: Based on the action taken on a KYC item, a status is set. This helps the KYC Analyst know which items are completed and which are pending action. Mandatory items with responses are highlighted in green, while items with Incomplete, Rejected, Pending, and Approved statuses display corresponding banners. The number of items is displayed only for the Rejected and Incomplete labels.
• KYC counters: KYC counters at the item group level provide a visual indication of the number of items completed and the number of items pending action. This offers KYC Analysts a quick overview of essential details within each group. KYC Analysts and KYC Reviewers can view the status of each item by expanding the item group.
In the onboarding case, since the KYC Reviewer has not taken any action, only the
Complete/Incomplete label will be displayed to the KYC Analyst.
Note: Counters are only visible to KYC Analysts, while KYC Reviewers will see
counters of approved items out of the total items in the group. During
maintenance journeys or post-renavigation, or if a case returns from the KYC
Reviewer, then Approved, Rejected and Pending labels are displayed, and
counters are updated accordingly upon item response updates.
Configuration settings
Depending on business needs, organizations can configure the KYC Quality Control functionality to meet different requirements. All the KYC Quality Control configurations are maintained using Dynamic System Settings that are loaded and read through the rules listed below; these rules can be modified to load the configuration from a different source if required.
Setting Description
The following sections describe the different points of configuration and the different behaviors that can be obtained from them.
Based on the needs of an organization, KYC Quality Control can be enabled or disabled application-wide. The Dynamic System Setting below drives this configuration. By default, the KYC QC solution is enabled in new installations.
Setting Description
If KYC Quality Control is disabled, KYC cases still go through the process where KYC analysts fill in the questionnaires and KYC reviewers approve or reject them, but the entire process is done at the case level – users do not have the ability to work at the KYC item or item group level.
KYC Quality Control supports two working modes: one at the item level and the other at the item group level (see Review modes for additional details). The Dynamic System Setting below lets users select the mode that they want their applications to work under.
Setting Description
If an organization wants to conduct its quality control activities at the case level, the Quality Control feature must be disabled as described in the previous section (see Enabling the functionality).
By default, the decisions and comments made in the context of a particular KYC case are not reused in subsequent cases on the same party and KYC type. That data remains in the case; it is not persisted into the Policy Profile and is therefore not used in future initializations of new KYC cases. That default behavior can be modified by changing the following Dynamic System Setting.
Setting Description
Extension settings
The KYC Quality Control feature can be extended to support different regulatory,
compliance or business needs. Some of the common and basic extensions that can be
done are explained in the following sections.
By default, the quality control actions that users can perform are triggered by a group of buttons with labels such as Approve, Approved, Reject, and Rejected. The buttons are maintained under the rule KYCQualityControlActions.
At run time, based on the response given to an item, different combinations of buttons are displayed, as shown in the image below. Initially, if no action was performed on an item, the Approve and Reject buttons are displayed. Once an item is approved, the Approved button is displayed instead of Approve. Similarly, when an item is rejected, the Rejected button is displayed instead of Reject.
However, there might be situations where an organization wants to use a different set of labels. For example, an organization might want Accept, Accepted, Deny, and Denied labels instead of those provided out of the box. In those situations, the following field values can be updated at the implementation layer.
After changing the labels, the quality control actions are shown as in the following image.
Approve and Reject are the two out-of-the-box actions provided by the KYC Quality Control solution. However, based on an organization’s business needs, different or additional actions might be required. For example, an organization might want an additional action to waive KYC items or groups, so that any non-mandatory question without a response can be waived by the KYC reviewer instead of being approved.
The following is a high-level implementation plan to include the new action on top of the out-of-the-box actions. It does not contain all the steps to be performed, as many of them will depend on specific business needs, but it points to the main elements of configuration needed to make the basic process run.
• To add and control the action button, modify the section KYCQualityControlActions. The section must be customized in the implementation layer to add the new actions, as shown in the image below.
• After adding the new action button, the conditions that drive the visibility of those actions have to be defined. Create a new when rule in the implementation layer (for example, PegaKYC-Data-Item.DisplayKYCQualityControlWaiveButton) and use it in the section to drive the visibility of the buttons.
• Configure the onclick event of the new buttons to invoke the out-of-the-box data transform KYCQualityControlProcessResponseWrapper. Pass Waived as the action value in the call.
• The KYC Quality Control counters must be updated to count Waived items. To implement this, create a new property of type number to hold the count (for example, PegaKYC-Data-Item.CountOfWaivedItems). Then, the logic to maintain the counter can be added to the out-of-the-box PegaKYC-Data-Item.KYCQualityControlEvaluateCounters.
• When an item is marked as Approved, Rejected, or Waived, the KYC engine must process the action and set the appropriate KYC Quality Control status, which is used later by the UI/UX engine to display the banners. Modify the out-of-the-box data transforms PegaKYC-Data-Item.KYCQualityControlProcessResponse and PegaKYC-Data-Item.KYCItemStatusAmendment to manage the new action and set the new status accordingly (a conceptual sketch of this status handling follows this list).
• When the Waived action is performed, the system should display the new status banner for Waived. Modify the section PegaKYC-Data-Item.KYCItemActions to add the new banner for the new status, and create a when rule (for example, PegaKYC-Data-Item.IsKYCItemWaived) to drive its visibility.
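As referenced in the list above, the following conceptual sketch pictures the status handling for the new action. The enum and method are illustrative only, since the real logic lives in the KYCQualityControlProcessResponse and KYCItemStatusAmendment data transforms.

    // Conceptual sketch of handling the new Waived action alongside the
    // out-of-the-box Approve and Reject; illustrative only.
    public class QualityControlResponse {

        public enum QcStatus { PENDING, APPROVED, REJECTED, WAIVED }

        public static QcStatus process(String action) {
            switch (action) {
                case "Approved": return QcStatus.APPROVED;
                case "Rejected": return QcStatus.REJECTED;
                case "Waived":   return QcStatus.WAIVED; // the new custom action
                default:         return QcStatus.PENDING;
            }
        }
    }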
The following sections give you an understanding of KYC Data Change Control (DCC) in CLM KYC: its purpose, configuration, and extension settings.
During the customer onboarding, KYC reviewers ensure that the data provided by the
customer is properly captured and meticulously reviewed. However, during the
subsequent customer review journeys, when the customer goes through the due
diligence again, the reviewer needs to go through the entire review process and revisit
these extensive questionnaires. This process strains operational efficiency and
escalates regulatory vulnerability due to the potential for inaccuracies.
To mitigate this challenge, the Data Change Control (DCC) mechanism flags altered
responses, enabling KYC reviewers to focus solely on modified data elements instead of
reviewing the entire questionnaire. This optimization not only streamlines operations
but also fortifies regulatory compliance.
Organizations do not want KYC reviewers to repeatedly review the same data in certain customer journeys, such as Maintenance or Add Products. In such cases, they rely on the system to conduct due diligence. This is called Straight-Through Processing (STP). However, relying solely on the system is not suitable for every situation. Hence, the following two requirements need to be met for automated due diligence:
Data control
KYC DCC provides a structured mechanism to manage due diligence data modifications,
offering precise control over comparing the following two distinct data sets:
• Current data
This is the latest version of the data, representing the value that the KYC item
holds at the moment of evaluation. It can further change when:
◦ the user updates the response in the user interface
◦ the data is programmatically propagated from Enrich into Due Diligence
◦ the associated KYC items from the previous journeys are modified.
• Baseline data
This is the version of a KYC item response that has undergone the review process. It serves as a reference point for constant comparisons. This includes both KYC item responses and supporting evidence.
Note: Both the KYC item responses and their supporting evidence are included in these comparisons for the data sets mentioned above.
During the initialization of a KYC case, the system decides if the DCC (Data Change
Control) should be used for that case. This depends on whether the case is set to use
Straight-Through Processing (STP) by default. STP means the process can continue
automatically if all needed KYC information is complete and correct. Most processes like Onboarding and Maintenance use STP, while others like Periodic Custom Review need human review. DCC works for STP cases only. In such cases, baseline data is initialized for comparison.
Once the system determines the need for baseline data, it initializes the baseline data of the KYC items. Depending on the source of the KYC items, the following values are initialized:
The values initially set for the baseline are used for DCC comparisons until they are
explicitly approved or rejected. If KYC Quality Control (KYC QC) functionality is active,
the rebaseline occurs when the reviewer clicks either Approve or Reject on an item or a
group. The approved or rejected value is then stored as the new baseline for future
comparisons. When KYC QC is disabled, the rebaseline happens when the entire KYC
questionnaire is approved or rejected. This replaces all previous baseline values with
the newly approved or rejected ones.
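The rebaseline rule described above can be summarized in a small conceptual sketch (the class and parameter names are illustrative, not Pega rules):

    // Conceptual summary of when the DCC baseline is replaced: per item or
    // group when KYC Quality Control is enabled, or for the whole
    // questionnaire when it is disabled. Names are illustrative only.
    public class RebaselinePolicy {

        public static boolean shouldRebaseline(boolean qcEnabled,
                                               boolean itemOrGroupDecided,
                                               boolean questionnaireDecided) {
            // With QC on, each Approve/Reject stores that value as the new
            // baseline; with QC off, all baselines are replaced at once when
            // the entire questionnaire is approved or rejected.
            return qcEnabled ? itemOrGroupDecided : questionnaireDecided;
        }
    }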
Data comparison
During the lifecycle of a due diligence case, DCC compares the current data and the baseline at three different steps, each comparison serving a different purpose.
• Data collection
The KYC Analyst submits due diligence cases for review. The KYC Reviewer takes the necessary actions and can route the case back to the KYC Analyst if the data provided needs any modification. When a previously reviewed item is modified, the comparison flags it as changed so that the KYC Reviewer can focus on the modified data.
DCCEnabled/enable (owning ruleset: PegaKYC)
This dynamic system setting enables or disables KYC Data Change Control.
PegaKYC-Data-Settings.LoadDynamicSettings_Ext
Data transform rule that acts as an extension and is invoked from Declare_PegaKYC_Settings.
Note: This setting is loaded at runtime while creating the main journey case. From there, it is propagated to the various due diligence cases. Hence, ongoing cases created before enabling KYC Data Change Control proceed to resolution unaffected by DCC.
PegaCLMFS-Work-CLM.DisableDCCForJourney (when rule)
Update this rule to return true to disable DCC for the designated journey.
When a due diligence case is approved and resolved, the data is synchronized to the customer’s policy profile and becomes the DCC baseline. In subsequent customer journeys, based on the nature of the journey, the sub-journey, and the material/non-material aspects of the case, the due diligence process is either stopped for manual intervention or taken through Straight-Through Processing (STP). In scenarios where the case is stopped for manual intervention, the process can be controlled using the TentativeSTP mode.
While DCC compares both KYC item responses and their supporting evidence, there
might be instances where you want to limit comparison to responses only or perform
supporting evidence comparison selectively. To achieve this, consider the following
extension:
A KYC Item can be either simple (such as a single-value scalar) or complex (like a page or a page list). When comparing scalar properties, the system uses an equality comparison. However, for complex items, the system employs the pxComparePages function. This function can take blacklist properties as input, effectively excluding them from the comparison process. You can use the following extension to specify the properties that should be excluded from the comparison.
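In the spirit of that comparison, the following simplified, map-based model shows how blacklisted properties can be excluded; this is an illustration, not the pxComparePages implementation:

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Objects;
    import java.util.Set;

    // Simplified model of comparing a complex KYC item while excluding a
    // blacklist of properties.
    public class PageComparator {

        public static boolean isModified(Map<String, Object> baseline,
                                         Map<String, Object> current,
                                         Set<String> blacklist) {
            Set<String> properties = new HashSet<>(baseline.keySet());
            properties.addAll(current.keySet());
            for (String property : properties) {
                if (blacklist.contains(property)) {
                    continue; // excluded from the comparison
                }
                if (!Objects.equals(baseline.get(property), current.get(property))) {
                    return true; // comparison result: Modified
                }
            }
            return false;
        }
    }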
Data transform
Based on the comparison result, an item’s status is set to Amended when the comparison result is Modified. This can be further extended to meet different business needs. For instance, you might want to track the count of modified items within a group. This extension can be used to update the count based on the comparison result.
To help Analysts see the incomplete, rejected, and pending mandatory items that need action and skip the others, counters are added to the KYC questionnaires.
This solution provides an intuitive, visual way to perform the due diligence process faster, significantly reducing the time required to complete it.
• Solution
Solution
The counters are added at the item group level for a quick overview of the details in each group. Users can now find the status of the underlying items by expanding the item group. Answered mandatory items have the regular green mark, and the Approved, Rejected, or Amended banners are shown.
In the onboarding case, if the KYC Reviewer has not taken any action, only the Complete/Incomplete label is displayed to the KYC Analyst.
Onboarding case
Complete and Incomplete labels are displayed irrespective of the Quality Control (QC) and Data Change Control (DCC) functionality.
The number of items is shown only for the Rejected and Incomplete labels.
This solution does not evaluate non-mandatory items; the Incomplete/Complete labels are evaluated only for the mandatory items.
The final statuses Approved and Complete are shown when the relevant fulfillment logic is met (no counters are needed there; this applies to all items).
The color of the counter helps to highlight the items that require action.
When the case comes back from the KYC Reviewer, during maintenance journey scenarios, or post-renavigation, the KYC Analyst can see the Approved/Rejected/Pending labels. Whenever an item response is updated, the respective label and counter are updated.
Onboarding case
The following table shows example scenarios for an item group with 10 mandatory fields and the labels displayed to the KYC Analyst.
Scenario (the item group has 10 mandatory fields) → UI for KYC Analyst
4 items are approved, but for 6 items there was no action from the KYC Reviewer – All items are answered → Pending | Complete
All the items are approved – All the items are answered → Approved | Complete
All the items are approved – 3 new ones are displayed and not answered → Approved | 3 Incomplete
4 items are rejected – 2 new ones are displayed and not answered → 4 Rejected | 2 Incomplete
The case is opened for the first time – No items are answered → 10 Incomplete
The case is opened for the first time and all the items are answered → Complete
There was no action from the KYC Reviewer on any of the items – All items are answered → Pending | Complete
The following table shows the logic determining which value (Approved, Rejected, or
Pending) gets displayed.
Total mandatory items | Approved | Rejected | Pending | Label displayed
10 | 6 | 4 | 0 | Rejected
10 | 5 | 3 | 2 | Rejected
10 | 10 | 0 | 0 | Approved
10 | 7 | 0 | 3 | Pending
10 | 0 | 10 | 0 | Rejected
10 | 3 | 7 | 0 | Rejected
10 | 0 | 0 | 10 | Pending
In this table the counters are visible only to the KYC Analyst.
The KYC Reviewer can see the counters of the items that are Approved in the group.
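The label logic captured in this table reduces to a simple rule, sketched below (names are illustrative): any rejected item yields Rejected, all items approved yields Approved, and anything else yields Pending.

    // Sketch of the label logic captured in the table above.
    public class GroupLabel {

        public static String labelFor(int totalItems, int approved, int rejected) {
            if (rejected > 0)           return "Rejected";
            if (approved == totalItems) return "Approved";
            return "Pending";
        }

        public static void main(String[] args) {
            System.out.println(labelFor(10, 6, 4));  // Rejected
            System.out.println(labelFor(10, 10, 0)); // Approved
            System.out.println(labelFor(10, 7, 0));  // Pending
        }
    }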
Onboarding case
• Product regulatory
• Tax
AML/CFT
Nature and purpose of the business relationship
Related data
The N&P comprises primarily, but not exclusively, information about the product(s) that the customer requested. The information to collect depends on legal requirements as well as on the policy and procedures of the financial institution. There are two main categories of information:
However, in certain situations, the nature and purpose of the business relationship can
be assumed. Usually the N&P can be considered self-evident:
• When the product is designed to serve a specific scope only, such as savings
account, leasing, safe deposit box, and pension scheme.
• Based on the type of customer, which is normally considered to be a combination
of entity type and overall customer risk being low.
Dual goal
1. It feeds the determination of the overall customer risk profile and the AML profile
1 applied to the customer.
2. It is a factor in the set-up of the detecting logic in the Transaction Monitoring (TM)
system.
The relationship of the N&P towards goals 1 and 2 is bi-directional, because the overall customer profile, as well as the alerts generated by the TM system, can lead to a review of the N&P and/or an increase in the amount and depth of information collected to fulfill and substantiate the N&P requirement.
1 The “AML profile” refers to the level of due diligence applied to the customer or its
related parties. If they are not exempted from any due diligence, in CLM there are three
due diligence levels: Simplified (SDD), Full (FDD), and Enhanced (EDD).
Challenges
The Target Operating Model of financial institutions as well as their policy and
procedures can be very different with respect to the nature and purpose of the
business relationship: what to ask, when to ask it, how and by whom, can easily
constitute a challenge when you want to comply with requirements, but remain
operationally efficient and provide a smooth customer experience.
• Does the customer directly provide some of the answers that the FI employee must then check?
If yes, at which point in time within the onboarding/review process are the
answers provided?
• Because the extent to which the N&P is investigated might depend on the customer risk profile, how do you calibrate this dependency with the overall customer risk, which may not be final at the time the N&P information is collected?
Within Pega CLM application, the N&P can be handled in different places:
Note: The figures in this section show a customer being an entity, but similar
fields apply also to customers being an individual.
Add products is the screen where the products requested by the customer are
indicated, together with the relevant booking jurisdiction, which is the country where
the product is opened.
Add Products
By default, the products are indicated by type and not by instance; therefore it is not
possible, for example, to enter a cash liquidity management product three times for the
same booking jurisdiction, because only one entry for type ‘Cash management’ in a
specific country is allowed. This out-of-the-box behavior can be modified during the
implementation of CLM.
The GKYC sub-case contains fields and questions related to CDD and, if applicable, to
EDD.
Within the CDD questionnaire, the item group ‘Customer risk’ contains the N&P field
(“Describe the purpose for establishing/continuing the business relationship.”) as well
as other N&P related fields and questions, like source of funds/assets (to be) used in
the business relationship.
Customer risk
The item group ‘Product/Service and Transaction risk’ also contains some N&P fields and questions, such as the purpose of the product(s) and how they are intended to be used, and the expected level of product activity on a monthly basis. Regarding the expected level of product activity, for each product/booking jurisdiction combination it is required to enter, in a pop-up window (called ‘Product activity details’), the volume, value, currency, frequency, and countries involved in the incoming and outgoing transactions.
• The item group ‘Product/Service and Transaction risk’ is displayed after the item group ‘Customer risk’, which is where the N&P field “Describe the purpose for establishing/continuing the business relationship” is displayed.
• The same ‘Product activity details’ window appears for every type of product, but the FI can tailor the data required for each different type or category of product.
• Because it is placed in the Due Diligence stage, the data in the ‘Product activity
details’ window is entered by the KYC Analyst.
However, it is possible that at implementation FIs require users having the sales/
commercial relationship with the customer to enter such product activity data,
since they would know or have already gathered this data. Therefore, the fields
related to product activity can be configured to be displayed in the Enrich stage.
Within the EDD questionnaire there is a whole item group on ‘Source of funds/assets’
(triggered when the source of funds/assets is not considered logical and commensurate
with the customer's profile, as per answer in the CDD questionnaire).
Source of funds/assets
The EDD questionnaire also contains, in the item group ‘Customer risk’, the question of whether both the purpose for establishing/continuing the relationship and the customer’s financial profile depict a legitimate business reason for such a business relationship.
2 The CDD questionnaire comprises fields that apply to all the three due diligence
levels, i.e. SDD, FDD, and EDD.
3 The EDD questionnaire comprises fields that apply only to customers and related
parties subject to Enhanced Due Diligence.
In their out-of-the-box behaviour, the answers to the N&P components do not directly affect the calculation of the overall customer risk rating, although they could do so indirectly if the user leverages the assessment of the N&P to justify a risk override.
However, the FI can also configure the N&P as either an independent risk factor or as
feeding into another existing risk factor, depending on the risk model they apply.
Different from the overall customer risk rating (although very often there is a 1-to-1
match with it) is the Due Diligence (DD) profile applicable to the customer. In that
respect, certain answers to the GKYC questionnaire affect the DD profile of the
customer, and can possibly raise it to Enhanced Due Diligence (EDD) level.
Implementation approaches
The most frequent implementation approaches to N&P we have seen within the
CLM/KYC application include the following:
1. The Add products screen is enhanced to cater for extra due diligence data only for those products that require further information, and not for those where the N&P can be considered self-evident.
Such additional product related data then pre-populates the relevant fields in the
item group ‘Product/Service and Transaction risk’ of the Global questionnaire. The
pre-populated data is made either read-only or editable by the KYC Analyst.
and Transaction risk’ item group or in a dedicated place of the GKYC sub-case) can
be completed.
However, such additional product-related data is collected only for the purpose of setting up the detecting logic in the TM system. Therefore, it does not appear in the GKYC sub-case, but it feeds the TM system.
If later on the TM system (for example, due to a serious discrepancy between the
actual usage of the product and what the initial expectation was) raises an alert,
such alert could lead to opening a review of the customer file, including a possible
modification of the product related data, which in turn refines the detecting logic
of the TM system.
Product regulatory
• Dodd-Frank Wall Street Reform and Consumer Protection Act (DFA)
Introduction
The DFA covers, among other aspects, trading activity between US persons and trading activity of a non-US person with a US person.
CLM supports the following aspects of DFA with a questionnaire that is applicable only to entities:
Triggers
• First scenario
1. The product selected is an OTC product1
and
2. The booking location for the OTC product is United States of America.
• Second scenario
1. The product selected is an OTC product1
and
2. The booking location for the OTC product is not "United States of America"
and
3. The user answers Yes to the question about the OTC product that is
displayed in the Regulatory details step within the Enrich stage, as a result of
fulfilling triggers 1 and 2.
1 The assumption is that the list of products that the financial institution uses contains this flag.
Configure the applicability logic for the DFA questionnaire by using the
KYCApplicabilityMatrix decision table. For information on general configuration of this
rule, see the Applicability matrix section of the implementation guide.
The other important decision logic rule is the isDoddFrankApplicable When rule.
Configure the specific DFA applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, when the second scenario
applies – see section Triggers – the user (who, depending on the financial institution’s
Target Operating Model, can be for example the back-office department responsible for
performing due diligence on the customer, or the Relationship Manager or a Mid-office
employee) must answer the question about whether, in relation to the OTC product
booked outside of the USA, the customer has a US nexus that requires obtaining a
Cross-Border Representation Letter.
Regulatory details
The financial institution can configure this field in the Regulatory details step according
to their procedures.
In its out-of-the-box behaviour, the input entered within the ‘Regulatory details’ step
does not pre-populate the relevant DFA item in the Global AML questionnaire of the
customer (i.e., the GKYC sub-case) within the Due Diligence stage, but that can be easily
configured.
The answer Yes to the following DFA item in the GKYC sub-case of the customer
determines whether the DFA questionnaire is displayed.
The DFA questionnaire is encapsulated in a KYC Type which is available in the Due
Diligence stage, and contains the following item groups:
DFA Cross-Border Representation Letter
DFA Cross-Border Representation Letter is the item group in which the user enters
the data that the DFA Cross-Border Representation Letter (CBRL) contains.
The goal of the CBRL is to determine whether the customer has a US connection
that makes them a “U.S. person” according to the definition of the CFTC.
Firm and counterparty classification
Firm and counterparty classification is where, among other things, the customer is
classified as either Major Swap Participant (MSP), Swap Dealer (SD), or Non-SD/Non-MSP.
For MSP and SD customers, the classification is the steppingstone from which
other DFA requirements derive, like capital, margin, reporting, record keeping, and
operational requirements.
Special Entities
Special Entities is the item group that contains information for customers who
fall under the definition of “Special Entities”, as there are additional duties to be
satisfied when the CP is classified as such.
Institutional suitability
Institutional suitability is the item group that focuses on determining whether
the customer or its agent can independently evaluate the risks of the swap or the
risks of the trading strategy.
• European Market Infrastructure Regulation (EMIR)
Introduction
EMIR affects all market participants of a derivative transaction, and from an applicability
perspective its geographical perimeter is not limited to counterparties established in the
European Economic Area (EEA)1, but extends also to:
Because of these geographic considerations, EMIR can have certain extra-territorial
effects.
1 The European Economic Area (EEA) consists of the EU member states as well as
Iceland, Liechtenstein, Norway, and the United Kingdom.
CLM supports with questionnaires – one for entities and one for individuals – not only
the determination of the classification to which the customer belongs, but also the three
main EMIR obligations:
Triggers
1. The product selected is an EMIR-eligible product3
and
2. The fields about the EMIR products that are displayed in the ‘Regulatory details’
step within the Enrich stage are answered in a certain way.
EMIR applicability
3 The assumption is that the list of products used by the financial institution contains
the EMIR flag.
Configure the applicability logic for the EMIR questionnaire by using the
KYCApplicabilityMatrix decision table. For more information on general configuration of
this rule, see the Applicability matrix section of the implementation guide.
The other important decision logic rule is the ApplyEMIR When rule.
Configure the specific EMIR applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, the user (who, depending on the
financial institution’s Target Operating Model, can be for example the back-office
department responsible for performing due diligence on the customer, or the
Relationship Manager or a Mid-office employee) must perform an initial assessment to
distinguish the class and asset class of the requested products, to determine whether
or not EMIR applies. The relevant fields of such initial assessment are displayed when
the product selected is EMIR eligible. For more information, see section Triggers.
The fields in the Regulatory details step can be configured by the financial institution to
suit their procedures.
In its out-of-the-box behaviour, the input entered within the ‘Regulatory details’ step
does not pre-populate the relevant EMIR items in the Global AML questionnaire of the
customer (i.e., the GKYC sub-case) within the Due Diligence stage, but that can be easily
configured.
The customer's answers to the EMIR items in the GKYC sub-case determine whether or
not the system displays the EMIR questionnaire.
The EMIR questionnaire is encapsulated in a KYC Type in the Due Diligence stage. The
questionnaire has six item groups:
• Product applicability
• Jurisdiction
• Classification
• Clearing obligation
• Risk mitigation requirements
• Reporting
Product applicability
Product applicability indicates the reason why the EMIR questionnaire is displayed,
that is, because in the Global questionnaire it has been indicated that the customer
wants to trade certain OTC products subject to EMIR.
Risk mitigation requirements
The Risk mitigation requirements item group is where data related to risk mitigation
techniques is collected.
• Financial Industry Regulatory Authority (FINRA)
Introduction
FINRA's main goal is to protect investors from losses caused by fraud and misconduct,
thereby guaranteeing the safety and fairness of the financial markets.
FINRA Rule 2090
Commonly referred to as the “Know Your Customer Rule” or “KYC Rule”, this rule
requires financial advisors, at the opening of an account and throughout the
customer lifecycle, to use reasonable diligence to know the customer and to retain
the essential facts concerning the customer and the authority of each person acting
on their behalf, prior to providing any recommendation.
FINRA Rule 2111
This rule requires financial advisors to make recommendations on a transaction or
investment strategy that are in the best interest of the customer, based on the
information that the advisors obtain.
These two rules work together, because suitability can only be determined if the
financial advisor knows the customer. As such, the customer risk profile limits
investments to those that are suitable for that customer.
CLM supports the following four aspects of the FINRA KYC and suitability rules with a
questionnaire that is applicable to both entities and individuals:
• The data to be collected to build the investment profile of the customer, which
partly varies depending on the type of customer.
• Suitability assessments, covering the three obligations that compose the suitability
requirement, i.e., reasonable basis, customer-specific, and quantitative.
• KYC
The FINRA questionnaire does not cover the FINRA Rule 2090 KYC obligation to
understand the authority of each person acting on behalf of the customer, because this
essential information is collected within the CLM application during the 'Add related
parties' step, and by the relevant AML questionnaire in the corresponding Related Party
sub-case. The information is also obtained during collection of documents that are
relevant to such persons.
Triggers
1. The product selected is a Security product (based on the assumption that the list
of products used by the financial institution contains this flag)
and
2. The booking location for the Security product is the United States of America
and
3. The user answers Yes to the question, displayed in the Regulatory details step
within the Enrich stage, about whether the product is sold or recommended by a
US-registered broker-dealer or its associated person.
Configure the applicability logic for the FINRA questionnaire by using the
KYCApplicabilityMatrix decision table. For more information on general configuration of
this rule, see the Applicability matrix section of the implementation guide.
The other important decision logic rule is the IsUSFINRAApplicable When rule.
Configure the specific FINRA applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, the user (who, depending on the
financial institution’s Target Operating Model, can be the back-office department
responsible for performing due diligence on the customer, or the Relationship Manager
or a Mid-office employee) must answer the question on whether the Security product
booked in USA is sold or recommended by a US-registered broker-dealer or its
associated person – see section Triggers.
US FINRA Assessment
The financial institution can configure the US FINRA Assessment radio button in the
Regulatory details step in accordance with their procedures.
In its out-of-the-box behaviour, the input entered within the Regulatory details step
does not pre-populate the relevant FINRA item in the Global AML questionnaire of the
customer (i.e., the GKYC sub-case) within the Due Diligence stage, but that can be easily
configured.
The answer Yes to the following FINRA item in the GKYC sub-case of the customer
determines whether the FINRA questionnaire is displayed.
The FINRA questionnaire is encapsulated in a KYC Type available in the Due Diligence
stage. This questionnaire has five item groups:
• FINRA applicability
• Reasonable basis and institutional customer
• Customer-specific suitability
• Quantitative suitability
• Know Your Customer
FINRA applicability
FINRA applicability merely indicates the reason why the FINRA questionnaire is
displayed, that is, because in the Global questionnaire it has been indicated that the
Security product having USA as booking jurisdiction has been sold or recommended
by a US-registered broker-dealer or its associated person.
Reasonable basis and institutional customer
Reasonable basis and institutional customer is the item group focusing on the first
type of suitability (that is, the reasonable-basis obligation) and the exception from
the obligation to satisfy the customer-specific suitability for those classified as
institutional customers.
Customer-specific suitability
Customer-specific suitability is where the user enters the data delineating the
investment profile of a particular customer, which includes – though it is not limited
to – their financial situation, liquidity needs, as well as the customer’s investment
objectives, experience, and time horizon.
Quantitative suitability
Quantitative suitability is the item group about the quantitative suitability obligation.
Such obligation requires that, when recommending a transaction or a series thereof,
financial advisors evaluate whether it is suitable not in isolation but viewed as a
whole in relation to the investment profile of the customer.
Know Your Customer
Know Your Customer is the item group where the user can register whether any
special handling instructions for the account have been provided by the customer.
• Markets in Financial Instruments Directive (MiFID)
Introduction
MiFID requires investment firms and banks operating in the European Economic Area
(EEA)1 financial markets to:
MiFID also sets out other requirements, such as standardising the regulatory
disclosures and reporting to the relevant authorities.
In terms of financial instruments covered by MiFID, the initial list (see letter C of Annex I
of Directive 2004/39, which included amongst others: transferable securities; money-
market instruments; options, futures, swaps, forward rate agreements having certain
characteristics) has been expanded by MiFID 2 to include certain Over The Counter
(OTC) derivatives.
1 The European Economic Area (EEA) consists of the EU member states as well as
Iceland, Liechtenstein, Norway, and the United Kingdom.
CLM supports with questionnaires – one for entities and one for individuals – the MiFID
obligations related to:
1. Product categorization.
2. Determination of the MiFID category to which the customer belongs.
3. The request from the customer to be assigned a different category; this is the
so-called opt-down/opt-up process.
4. The suitability and appropriateness of the investment for the customer.
During a review of an existing customer file (for example, Periodic Review, and Event
Driven Review), users can re-categorise the customer. Reasons for re-categorisation
include an incorrect categorisation or an entity becoming regulated or de-regulated.
Triggers
1. ‘Investment services' and activities (“core” services) and/or ‘ancillary services’ (“non-
core” services) are provided2.
and
2. The combination of the following three factors is as per the following table.
• The Registered address country/Nationality of the customer.
• The EEA status of the financial institution offering the product.
• The booking location of the product.
Factors
2 The assumption is that the list of products used by the financial institution
contains the MiFID flag.
Configure the applicability logic for the MiFID questionnaires by using the
KYCApplicabilityMatrix decision table. For more information on general configuration of
this rule, see the Applicability matrix section of the implementation guide.
The other important decision logic rule is the ApplyMIFIDCaseWhen When rule.
Configure the specific MiFID applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, the user (who, depending on the
financial institution’s Target Operating Model, can be for example the back-office
department responsible for performing due diligence on the customer, or the
Relationship Manager or a Mid-office employee) has the option to indicate whether the
financial instrument(s) selected as a product falls within the MiFID list of services and
activities.
MIFID Assessment
For customers that are entities, the user can also indicate the relevant MiFID party type.
The financial institution can configure the fields in the ‘Regulatory details’ step
according to their procedures.
In its out-of-the-box behaviour, the input entered within the ‘Regulatory details’ step
does not pre-populate the relevant items of the MiFID questionnaire in Due Diligence
stage, but that can be easily configured.
The MiFID questionnaire is encapsulated in a KYC Type available in the Due Diligence
stage. It has the following six item groups:
1. MiFID categorisation
2. Product categorisation
3. Customer categorisation
4. Opt-down/up
5. General suitability and appropriateness
6. Suitability and appropriateness for investment advice and/or portfolio
management
MiFID categorisation
MiFID categorisation is pre-populated based on the input of other item groups – as a
result, it is read-only – and it provides the user with the overview of the outcome of the
MiFID analysis, which is often used for reporting purposes.
Product categorisation
Product categorisation is where the user can indicate what services and activities
related to the financial instrument(s) are provided to the customer, that is, investment
services and activities and/or ancillary services.
Customer categorisation
Customer categorisation is where the determination of the MiFID classification –
“Eligible Counterparty” (selectable only for customers being an entity), “Professional”, or
“Retail” – occurs.
Opt-down/up
Opt-down/up is where the user handles the possible request from the customer to be
assigned a different MiFID classification than the one determined by the financial
institution.
This item group also contains the relevant data and documents to substantiate the
customer request and also the decision by the financial institution to either accept or
refuse such request.
The last two item groups are both about suitability and appropriateness for, respectively,
Retail customers (item group 5) and customers different than Eligible Counterparties
being provided investment advice or portfolio management (item group 6).
General suitability and appropriateness
Suitability is the process by which it is assessed whether the customer’s knowledge and
experience in the relevant investment field, their financial situation including their
ability to bear losses, and their investment objectives match the product or service
offered to or demanded by the customer.
Suitability and appropriateness for investment advice and/or portfolio management
Appropriateness is the process by which it is assessed whether, for investment services
other than investment advice or portfolio management, the customer possesses the
necessary knowledge and experience to understand the specific type of product or
service offered or demanded, as well as the risks related to it.
Tax
• Common Reporting Standard (CRS)
Introduction
CRS is also sometimes referred to as the FATCA for the rest of the world.
For more information about jurisdictions that participate in CRS, see the OECD website.
CLM supports with questionnaires – one for entities and one for individuals – only the
determination of the tax status of the customer, which is the steppingstone from which
other CRS requirements (for example, reporting) derive.
The questionnaires follow the Model Competent Authority Agreement (MCAA), which
provides the legal framework for the exchange of information, but it also defines the
information to transfer and its modalities, as well as the responsibilities of authorities.
Because country tax laws and CRS guidance can modify aspects of the customer
onboarding and review process, financial institutions must ensure that they comply
with all relevant local laws and CRS requirements and therefore they need to configure
the questionnaires accordingly.
Triggers
Out-of-the-box, CLM triggers the display of the relevant CRS questionnaire in every
customer onboarding, regardless of the products that the customer requests. In fact,
most financial institutions prefer to already have the relevant tax classification in the
customer file at the time of their onboarding, in case the customer triggers the CRS
classification and reporting obligations after the onboarding.
CLM can also support the other approach, in which the relevant CRS questionnaire is
displayed only as strictly dictated by the regulations, that is when the product
requested by the customer falls within the CRS definition of “financial account”.
In both approaches, the financial institution that performs the tax classification of the
customer is located in the country where the product is booked, if that country is a CRS
participating jurisdiction. However, if the financial institution of the product booking
location is a branch in a non-CRS participating jurisdiction, but their head office is in a
CRS participating jurisdiction, then that branch must also be CRS compliant.
Configure the applicability logic for the CRS questionnaires by using the
KYCApplicabilityMatrix decision table. For more information on general configuration of
this rule, see the Applicability matrix section of the implementation guide. The other
important decision logic rule is the ApplyCRSCaseWhen When rule.
Configure the specific CRS applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, the user (who, depending on the
financial institution’s Target Operating Model, can be the back-office department
responsible for performing due diligence on the customer, or the Relationship
Manager, or a Mid-office employee) has the option to perform an initial CRS
assessment of the customer based on information that the financial institution already
has – for example, because it was collected during onboarding – or that is publicly
available. In such cases, because the FI does not have to obtain and process the
customer’s self-certification form, completed and signed, the time needed to determine
the CRS status of the customer is shortened, which increases operational efficiency.
For customers who are individuals, the user can indicate whether the customer is tax
resident in a reportable jurisdiction. If the individual is in a reportable jurisdiction, the
user enters the data about the relevant tax residence country/ies and the Tax
Identification Number (TIN) or equivalent number.
For customers who are entities, the user can indicate the relevant CRS classification.
Financial institutions can configure this section according to their procedure and local
law, which might allow for all CRS classifications or only a subset of them (for example,
those that make the customer non-reportable) to be selected without first collecting the
customer’s self-certification form.
In its out-of-the-box behaviour, the input entered as initial CRS assessment within the
‘Regulatory details’ step does not pre-populate the relevant items of the CRS
questionnaire (i.e., the CRS sub-case) in the Due Diligence stage, but that can be easily
configured.
The CRS questionnaire is encapsulated in a KYC Type, which is available in the Due
Diligence stage.
For customers who are individuals, the questionnaire contains the following item
groups:
• CRS status
• Self-Certification form
For customers that are entities, the questionnaire contains the following item groups:
• CRS status
• Classification based on available information
• Self-Certification form
CRS assessment
Classification based on available information is where the user can indicate the CRS
classification whenever it is possible not to collect the self-certification form from the
customer.
Self-Certification form is where the user registers the relevant CRS classification of the
customer based on the form that the financial institution receives.
If the customer is an individual, the user enters the tax residence country/ies and the
TIN or equivalent number of the customer.
If the customer is an entity, additional mandatory fields are displayed when the CRS
classification is either “Passive NFE” or “Professionally Managed Investment Entity in a
non-CRS participating jurisdiction”. These additional fields capture the following
information:
• The tax residence country/ies and the TIN or equivalent number of the customer.
• The identification details about the controlling person(s) of the customer and their
tax residence country/ies as well as the TIN or equivalent number. “Controlling
person” indicates the natural person who exercises control over the customer.
• Foreign Account Tax Compliance Act (FATCA)
Introduction
FATCA applies also to Participating FFIs (Foreign Financial Institutions), which are
financial institutions that, although located in a country that has not signed an IGA,
have decided to willingly cooperate with the IRS.
For an updated list of countries that have signed an IGA, see the Foreign Account Tax
Compliance Act.
CLM supports only the determination of the tax status of the customer with
questionnaires, one for entities and one for individuals. The tax status is the first step
toward other FATCA requirements (for example, reporting).
The questionnaires are prepared based on both Model 1 and Model 2 IGAs, and do not
address any other US tax obligations, specific IGA implementing legislation, or the US
Treasury Department’s own specific FATCA regulations. Therefore, should a financial
institution need to ensure compliance with other relevant local laws and FATCA
requirements, they have to configure the questionnaires accordingly.
Triggers
Out-of-the-box, CLM triggers the display of the relevant FATCA questionnaire in every
customer onboarding journey, regardless of the products that the customer requests.
In fact, most financial institutions prefer to already have the relevant tax classification in
the customer’s file at the time of their onboarding, in case the customer triggers the
FATCA classification and reporting obligations after the onboarding.
CLM can also support the other approach, in which the relevant FATCA questionnaire is
displayed only as strictly dictated by the regulations, that is when the product
requested by the customer falls within the FATCA definition of “financial account”.
In both approaches, the financial institution that performs the tax classification of the
customer is located in the country where the product is booked, if that country has
signed an IGA. However, if the financial institution of the product booking location is a
branch in a non-IGA country, but their head office is located in an IGA country, then that
branch must also be FATCA compliant. The same rule also applies if the financial
institution is a Participating FFI.
Configure the applicability logic for the FATCA questionnaires by using the
KYCApplicabilityMatrix decision table. For more information on general configuration of
this rule, see the Applicability matrix section of the implementation guide.
The other important decision logic rule is the ApplyFATCACaseWhen When rule.
Configure the specific FATCA applicability logic for the financial institution in the above-
mentioned rules.
In the ‘Regulatory details’ step within the Enrich stage, the user (who, depending on the
financial institution’s Target Operating Model, can be the back-office department
responsible for performing due diligence on the customer, or the Relationship
Manager, or a Mid-office employee) has the option to perform an initial FATCA
assessment of the customer based on information that the financial institution already
has – for example, because it was collected during onboarding – or that is publicly
available. In such cases, because the FI does not have to obtain and process the
customer’s self-certification form, completed and signed, the time needed to determine
the FATCA status of the customer is shortened, which increases operational efficiency.
For customers who are individuals, the user can indicate whether the customer is a US
citizen, and if so, enter their Tax Identification Number (TIN).
For customers that are entities, the user can select one of the following four values as
the relevant FATCA classification: “Active NFFE”, “Reporting Model 1 FFI”, “Reporting
Model 2 FFI”, and “None of the above”. Financial institutions can configure this section
according to their procedure and local law, which might allow for other FATCA
classifications to be selected without first collecting the customer’s self-certification
form.
In its out-of-the-box behaviour, the input entered as initial FATCA assessment within the
‘Regulatory details’ step does not pre-populate the relevant items of the FATCA
questionnaire (i.e., the FATCA sub-case) in the Due Diligence stage, but that can be
easily configured.
The FATCA questionnaire is encapsulated in a KYC Type that is available in the Due
Diligence stage.
For customers who are individuals, the questionnaire contains a maximum of four item
groups, which are displayed based on the answers given:
• FATCA status
• Citizenship and residency
• Tax certification form
• U.S. indicia
For customers that are entities, the questionnaire contains a maximum of four item
groups, which are displayed based on the answers given:
• FATCA status
• Classification based on available information
• Tax certification form
• Classification based on tax certification form
FATCA status
‘FATCA status’ is pre-populated based on the input entered in other item groups. As a
result, the content is read-only. This item group provides users with the overview of the
outcome of the FATCA analysis, often for reporting purposes.
Tax certification form
‘Tax certification form’ is where the user indicates the type of form that they received,
performs an assessment of its validity, and, if applicable, enters the customer's US TIN.
Citizenship and residency
‘Citizenship and residency’ is where the user assesses whether the customer is
considered a US taxpayer by either citizenship or residency.
U.S. indicia
‘U.S. indicia’ is where the user assesses possible indicators that the customer has some
affiliation with the United States, for example, when it is their country of birth or when
their mailing address is in the US.
Classification based on available information
‘Classification based on available information’ is where the user states the customer's
FATCA classification, whenever it is possible to do so without collecting the relevant tax
form from the customer.
Classification based on tax certification form
‘Classification based on tax certification form’ is where the user enters the Chapter 3
Status of the customer and their FATCA classification, as well as any additional
information relevant to certain tax classifications.
When the customer is classified as either “Owner Documented FFI” or “Passive NFFE”,
the system displays additional mandatory fields to capture identification details of the
Specified US Persons/Substantial US Owners, if there are any, as well as their US TIN.
• Overview
• Appendix
Intended audience
This guide describes the configuration required to expose the customer onboarding
journey available in Pega Client Lifecycle Management and KYC for Retail through a
Web Self-Service application, using Pega Mashup. Although it provides a high-level
description of the functionality, it is intended for technical architects in charge of the
configuration of the module.
Overview
• Introduction
• Overall process
Introduction
Many financial institutions provide to their customers a Web Self-Service application
(WSS) where customers can take actions like managing their contact information,
adding or removing products, providing required documentation, and other similar
operations. In addition, such applications usually support the onboarding of new
customers interested in getting products or services.
Pega Client Lifecycle Management and KYC makes it possible for financial institutions to
expose Pega CLM’s functionality and processes directly to customers through these
Web Self-Service applications, in this way creating a direct channel between the
customer and Pega Client Lifecycle Management and KYC. The integration between the
financial institution’s WSS application and Pega is done using Pega Mashup (see Pega
Community for additional information).
Overall process
In order to demonstrate this capability, Pega Client Lifecycle Management and KYC for
Retail implements an onboarding journey that can be accessed through WSS
applications. Such an onboarding journey has four stages (Capture, Enrich, Due
Diligence and Fulfillment), and customers can complete the first stage independently by
using the WSS application.
After the first stage, a case transitions into Enrich, is routed to the back-office services,
and follows the same process as if it had been created through any of the other
available channels (e.g. branch officer portal).
https://pegasystems.github.io/uplus-wss/retail_bank/index.html
This URL can be used to test against different Pega applications or servers. You can set
different configurations in different browser sessions (the configuration is stored locally
in a browser cookie). Configurations can be exported and imported for a quick switch
between environments and demos. For detailed information about the application and
how to configure it, see UPlus Web Self-Service.
The application can be run directly from the URL provided above. However, there are
some situations where financial institutions will need to download it and install it in
their own systems. For example, that is the case when financial institutions customize
the application to demonstrate a certain capability, or to integrate with third-party
web-based tools like eID Verification (see eID Verification). In those cases, the
application should be downloaded from its Git project home page.
• Application configuration
• Channel configuration
• Client-side configuration
• Verification
Server-side configuration
Before accessing the Pega application from the WSS application, there are some
configurations required on the Pega server.
In all the tasks under this section, the assumption has been made that the functionality
is going to be tested against the out-of-the-box version of the application. The access to
the application is done through the PegaCLMRetWSS application, which is based on
PegaCLMFSRet and contains all rules specific to access through WSS.
If you are planning to test against your own application, you will need to adjust your
stack to include the out-of-the-box PegaCLMFSRetWSS as a built-on application. For
example, if you have a UPlus application, you must create a UPlusWSS application that
includes both PegaCLMFSRetWSS and your base application.
For those tasks listed below where access to Dev Studio is required, use an operator
with administrative rights. If you are working directly against the out-of-the-box
application, use an operator pointing to the access group CLMFSRetWSSSysAdmin. If
you are working against your own implementation application, create an equivalent
one.
Application configuration
If you are using the out-of-the-box configuration and accessing the WSS demo
application from its Pegasystems location, you do not need to do any of the tasks
described in this section. In all other cases, you need to register the URL of the WSS
demo application as a trusted origin in your application.
For that purpose, open your application rule and register the URL of the WSS
application in the Integration & Security area.
In addition, register the WSS URL as an allowed frame-ancestor website in the Content
Security Policy rule of your application.
If you are testing against the out-of-the-box application, configure one operator to point
to the access group CLMFSRet_WSS_User (if you installed the sample application, use
the operator [email protected] as reference). If you are testing against your
own application, create a new operator with an access group pointing to your
implementation layer (for example, UPlusWSS).
Channel configuration
The last step in the configuration of the application is the creation of the channel. Open
the channels landing page (Application > Channels and interfaces), and select Web
mashup. In the New Web mashup interface screen, provide the following information
(leave the default configuration for those fields not listed here):
Name Description
Name
Name of the channel.
Example – CLMWSSRetail
URL
URL used by the WSS application to access Pega. It is autogenerated by the system.
Example – https://lab.pega.com/prweb/app/CLMRetWSS_7983/
Skin
CLMWSS
Click on Save to persist the changes and then, before leaving the screen, click on
Generate mashup code. The system opens a modal window with two code snippets.
Look at the second one, the one at the bottom (Mashup code), and copy the value of
the parameter data-pega-channelID.
Save the value, as it will be used later during the configuration of the Web Self-Service
application.
Client-side configuration
Now that the server is ready to accept requests, it is time to open the Web Self-Service
application and configure it. Open the application using the following URL:
https://pegasystems.github.io/uplus-wss/retail_bank/index.html
Then scroll down to the bottom of the screen and click on Settings.
• Configuring users
• Quick Links
Configuring users
The Users tab presents a configuration screen where you can set the Pega operator
that will be used to access the Pega server. The demo application comes with two
different users that can be used to maintain different profiles. Select one of them, and
provide the following information:
Name Description
Username
Name of the user that will be used to log in to the Web Self-Service application. This
username is fictitious, and does not need to exist in any system. It represents the user
identifier of the client when accessing the WSS application.
Example – [email protected]
Password
This is the password for the client. This password is used to authenticate the client
during the login.
Example – Password
Pega user id
Pega operator identifier (see Access groups and operators).
Example – [email protected]
Quick Links
The Web Self-Service application comes with a series of quick-links which is made
available on the landingpage immediately after login, and these quick-links can be used
to embed different Mashup actions. For example, you can configure one of these links
to start the onboarding journey. Click on the Quick Links tab, and provide the following
information:
Name Description
URL
URL generated by the system during the creation of the channel interface (see Channel
configuration).
Example – https://lab.pega.com/prweb/app/CLMRetWSS_7983/
Channel ID
Channel ID generated by the system during the creation of the channel interface (see
Channel configuration).
Example – MASHUP63942705ac7a4e7e85e27dd41a868ac4
Pega user id
Pega operator identifier (see Access groups and operators).
Example – [email protected]
Verification
After configuration, it is time to verify the access. Complete the following steps:
• Open the Web Self-Service application, and click on the Learn More button that
appears immediately under the Home Hero offer (“Set goals, save more”).
• The application then shows the mashup gadget with the first step of the flow
(Display Products) that has been initiated in Pega. Click on Learn more in any of
the products.
• The system shows a screen with some details of the selected product. Click on
Apply Now to start the onboarding process.
steps of an onboarding that, in an access through a branch portal, are executed in the
form of a regular flow.
certain period of time (by default, 1 day).
Appendix
• eID Verification
eID Verification
Pega Client Lifecycle Management can be configured to work with the third-party eID
Verification system, which provides a video-based solution for the authentication of
customers and documentation collection. In order to enable this configuration, you
must perform some additional steps. Once completed, the capture screen flow
presents an additional step between Show Legal Information and Collect Initial Data,
where the customer goes through the eID authentication process.
The process starts with the application guiding the customer through a video-recorded
process to obtain facial biometric information and an identity document, which is
automatically validated and compared against the biometric data (for more
information, see eID Verification).
In a subsequent step, executed as a background task, the application accesses the eID
Verification server and retrieves from there the customer identity document, as well as
any data that could have been extracted from it. For example, if the customer has
provided a passport, the system extracts the customer name, date of birth, and other
similar pieces of information. The customer information in the onboarding case is
enriched with this additional data, and the identity document is attached to the case.
https://<server-name>/uplus-wss/retail_bank/index.html
Knowledgebase articles
Pega Client Lifecycle Management and KYC expedites onboarding and time to transact
while managing KYC due diligence of the customer.
• Asynchronous processing
• Case searches
• Case summary
• Client outreach
• Data traceability
• eScreening
• Reports
Asynchronous processing
The asynchronous processing functionality is implemented by using Pega queue
processors that process cases according to certain parameters. Customers might need
to change the default configuration to meet the needs of their specific
implementations.
Both queue processors come with the following configuration (available under Records
> SysAdmin > Queue Processors):
• Number of threads per node: 3 – This is the number of concurrent threads that
the system can process per node in the cluster. One thread equals one case being
processed. For example, a configuration of three threads in a cluster of four
nodes, configured for background processing, would provide 12 simultaneous
cases. This number might need to be adjusted based on your volume and the size
and performance of your system.
• Max Attempts: 3 – In some situations, the system can find errors while processing
a case. For example, an external system might be down or there is an unexpected
situation that prevents the process from finishing. Retrying three times is usually
enough to resolve those situations. After the maximum number of retries is
reached, the case goes to the broken-process status and an administrator needs
to take action.
• Initial delay (in minutes): 1 – This is the time that the system waits between a first
failed execution attempt and a second attempt.
• Delay factor: 2 – This is a correction factor applied to the initial delay between
subsequent retries. For example, if the initial delay was 1 and the delay factor is 2,
the system waits one minute between the first attempt and the second, two
minutes between second and third, four minutes between third and fourth, and
so on.
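For illustration, the wait before each retry can be derived from these two settings. The following Python sketch is a hypothetical rendering of that schedule, not Pega's internal implementation:

def retry_delay_minutes(attempt: int,
                        initial_delay: float = 1,
                        delay_factor: float = 2) -> float:
    # Delay before retry number `attempt` (1-based), in minutes.
    return initial_delay * delay_factor ** (attempt - 1)

# With the default settings, the waits between attempts grow as 1, 2, 4 minutes.
for attempt in range(1, 4):
    print(f"Wait before retry {attempt}: {retry_delay_minutes(attempt)} min")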
Most of the time, the default parameters are sufficient. If you need to make changes to
these parameters, see the Pega Client Lifecycle Management for Financial Services
Implementation Guide.
Note: The queue processors shipped with the application are both configured
to only run in those nodes of the cluster that are configured with the
BackgroundProcessing node type. If required, change the associated node type
in the new Queue Processor rule or configure some of the nodes of your
cluster to use that node type.
Case searches
Onboarding activities can result in the creation of large volumes of work that are
commonly carried out by multiple groups within the organization. It is important to
quickly and easily find this work.
In addition to the following default search parameters, you can implement Onboarding
for Financial Services to use additional search parameters:
• Name
• Short name
• Phone number
• Email address
• LEI
• Case ID
• Address Line 1
In addition to the available filter fields, the results list has been enhanced to include the
customer name for easier identification. Cases can involve a high number of sub-cases.
You can exclude sub-cases from the search results.
For more details about how to extend search properties, see the Implementation Guide
on the Pega Client Lifecycle Management for Financial Services page.
Implementation details
Case summary
The Case summary feature contains a detailed view of the stages, steps, and the most
relevant data for a case across its lifecycle.
The process of onboarding a new customer can vary depending on factors such as the
type of customer, their location, and the products they use. Increasing regulation in the
financial services industry adds pressure to understand the progress of ongoing work,
related information, and the main parties involved in onboarding activities. This
context-specific view of the overall parent case or separate units of related work must
provide relevant data to the user to help them continue their task or understand
blockers to progress.
The Case summary feature solves these business problems by providing an overview of
the case. The Case summary screen has the following core areas.
Stages area
• The completed, current, and future stages. The current stage is expanded by
default to show the detailed steps within. Completed and future stages are
collapsed by default but can be expanded manually.
• Within the current stage each assignment shows the operator (if assigned); a
Complete, New, or Waiting status; and an Action button. The actions change to
Continue, Begin, or Update as appropriate for the state of the assignments.
• If any process has multiple assignments, those are shown separately under that
process.
• The key internal parties working on the case. This differs appropriately depending
on whether you are viewing the main parent Customer Lifecycle Management
case or one of the subcases.
• CLM case.
◦ Customer journey sub type, Relationship manager name, and Sales support
manager name.
• Subcase, for example, Global KYC, FATCA, or CRS.
◦ Link to the parent case, the customer journey subtype, an area-specific
department (such as Global KYC department or Tax department), the
Assessment manager name, the Review department name, or the Review
manager name.
Note: These categories can be clicked to view a read-only view of the data
captured to date. Access to read-only versions is only possible when the
Enrich data for due diligence assignment is in the Enrich stage.
If the user selects the parallel fulfillment option during Capture stage, upon completion
of all due diligence activities for a jurisdiction, sibling fulfillment cases are generated
and displayed.
The rules used to implement the case summary screen are outlined in the following
table:
summary information for the pyWorkPage rule in the clipboard.
Client outreach
As a client moves through onboarding, maintenance, or offboarding journeys with a
financial institution, a wide variety of data and documentation is captured for them and
their related parties. Data or documentation can be gathered from third-party
providers, reused if on file and still valid, or, in many cases, sourced directly from the
customer.
The Client outreach case type enables the client to provide required data or documents
through a web self-service mechanism accessed from a laptop, tablet, or phone. Data
and documentation received from the client can then be used in the ongoing customer
journey.
With the content of the request, the system creates a new client outreach case, which
starts by sending an email communication to the recipient of the request. The
communication contains a link to the self-service website where customers can provide
the requested data and upload the requested documents. Upon data submission, the
case is routed back to the financial institution user for its review. Users can then use the
data and documentation provided by the customer. Data can be moved into the main
customer journey and documents can be attached.
The communication with the client, which is done through the fulfillment stage, has
been enabled through the following channels:
• A back-office operator can collect the data from the customer outside the system,
then open the provided assignment from a workbasket and enter the data.
• Financial institutions can embed in their web self-service (WSS) applications a
simple to-do list with all of the Client outreach cases addressed to a customer.
Clients can access cases through that gadget and action them. The integration
between the financial institution WSS application and Pega is done by using Pega
Mashup.
• Financial institutions can build their own applications to manage the interaction
with the customer. These applications can get and edit Client outreach cases by
using the provided REST API. This enables organizations to use their own systems
to contact clients.
Users can initiate the process to request data from customers from the review and
perform harnesses of the main customer onboarding journey cases (for example,
onboarding CIB cases). The Actions menu in those harnesses has been configured to
include a new option, Create client outreach, which enables users to review existing
Client outreach cases and create new ones.
By default, the system consolidates all of the requests regarding a certain client and
addressed to a certain recipient under a single case. Only one case can be active for a
given pair of clients and recipients.
Customers can use the consolidation key to modify the logic of the consolidation of
cases. By default, the consolidation key is formed as “Client-Recipient”. Customers can
change the way in which the key is built and remove, for example, the recipient
identifier (which makes the system consolidate cases only at the customer level). New
data elements can be added to the key to make the constraint more restrictive. The
extension point CalculateConsolidatedKey has been created for that purpose.
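As a purely illustrative sketch of this idea (the real logic lives in the CalculateConsolidatedKey extension point, and the parameter names here are assumptions), the key could be built as follows; the separator matches the caseConsolidatedID values shown in the API samples later in this document, for example "180#0000169171":

def calculate_consolidation_key(client_id, recipient_id=None, extra_elements=None):
    # Default behaviour: one key (and therefore one active case) per
    # client/recipient pair. Dropping the recipient identifier consolidates
    # cases at the customer level; extra elements make the key more restrictive.
    parts = [client_id]
    if recipient_id:
        parts.append(recipient_id)
    parts.extend(extra_elements or [])
    return "#".join(parts)

print(calculate_consolidation_key("180", "0000169171"))  # 180#0000169171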
When the user selects a recipient from the list of available contacts, the system checks
if there are active client outreach cases (not completed) for that customer (the
contracting party of the journey) and recipient. If there are active cases, the system
shows a message in the screen giving access to the case and disables the Create New
button. If no cases are found, the Create New button is made available and new cases
can be created.
The user can review a list of cases for the current customer. Users can decide to open
and edit an existing active case or create a new one addressed to one of the recipients
in the list.
When Client outreach cases exist for a given customer, users can see a list of all cases
with their statuses in the review harnesses of all the customer journey cases related to
that customer.
The user can select any of the items from the list and preview its contents.
Process overview
The client outreach case implements the request of information (data or documents)
from a certain customer that is addressed to a specific recipient, an identified contact of
that customer. The request goes through four stages.
In this first implementation of the functionality, the Review stage has been left as a
placeholder for future implementation. This means that cases are created and sent to
the customer (Capture), fulfilled (Fulfillment), and immediately resolved (Wrap-up).
Users are able to access resolved cases from the main customer journey cases, but
cannot reopen them (new requests must instead be created).
For more details about the process, see the following sections.
Capture stage
The Capture stage is started immediately after the user has requested the creation of a
new Client outreach case (see Client outreach case creation). Its main purpose is the
initialization of the request for the customer (Data Collection process) and its
notification to the customer (Notify Recipient process).
The Data Collection process starts with an assignment where the user enters the list of
data and documents being requested from the recipient (see the following details).
After the data has been collected, the case, which was originally created as a temporary
case and has not yet been persisted, is persisted into the database. Then, a message
that the Client outreach case has been created is posted to Pulse and an audit note is
added.
During the collection of the data, users can provide the following data:
To send the request to the customer, the user must provide at least instructions and
one question or one document.
Each of the items added to the case is added to the Items list as a new page of the
PegaFS-Data-ClientOutreach-Item type. The exact name of each of the entries depends
on the type (data or document) and the DCR configuration.
• Class: PegaFS-Data-ClientOutreach-Item-Data-Basic
• DCR Reference: DataClass_ClientOutreach_Item_Doc_Basic
Each of the entries in the Items list contains the following data elements:
After the initial data required by the Client outreach case has been collected, an email is
sent to the recipient informing them that there is a request for additional data and/or
documents waiting in the Web Self-Service (WSS) application.
The email includes the link to the WSS application where the customer is able to
complete the request. The link provided by the default implementation points to a Pega
Demo WSS URL. Clients must modify this link in their implementations to make it point
to their own corporate WSS websites.
The body of the email that is sent to the customer can be edited in the
NotifyRecipientCO correspondence rule.
After generation, the email is added as an attachment to the case and is available from
the Actions menu after selecting Case attachments.
In the Fulfillment stage, the system creates the Fulfillment assignment, which is routed
to the Client Outreach workbasket.
The system takes the Party (RMHead) and IntendedWB (ClientOutreach) parameters
and searches the organization structure until it finds the appropriate workbasket. The
CLM sample application includes a workbasket at the UPFS-GM level. Customers need
to define their own workbaskets in their implementations.
The assignment can be retrieved by clients through the Web Self-Service application or
by operators through PegaFS/CLM portals, by directly accessing the workbasket.
In the Fulfillment assignment, the client provides requested data and can attach
documents. There must be at least one question answered or one document attached
to proceed further. If this is not the case, then the comment with explanation becomes
mandatory.
The functionality for the Review stage will be implemented in future releases.
Currently, the system bypasses this stage.
The functionality for the Wrap-Up stage will be implemented in future releases.
Currently, the system bypasses this stage and resolves the case with a Resolved-
Completed status.
If the Client outreach case is not resolved, the user can modify the list of data items and
documents by clicking Edit Data in the Review and perform harnesses of the Client
outreach case.
A modal window opens, which enables the user to add requests for additional data, or
delete or modify the current list. After the update, the new items list is created and
saved, and a new notification is sent to the client.
Note that this action does not have any impact on the life cycle of the Client outreach
case. There is no change of stage or step, and the case remains where it was (the
ClientOutreach workbasket).
The user can withdraw a Client outreach case before it is resolved. The case-wide action
Withdraw (flow action) is available and can be triggered from the Action menu of both
the Review and Perform harnesses.
A modal window opens, and the user must provide the reason why the case is being
withdrawn and then confirm the withdrawal by clicking the Submit button. Afterward,
the system confirms that the case is closed with the status Resolved-Withdrawn.
This functionality has only been implemented for the Client Lifecycle for Financial
Services application and is triggered at the time of closing CLM customer journey cases
(for example, CIB or Retail cases). At that moment, all pending Client Outreach cases
that might be related to that journey – cases for the main contracting party of the
journey – are closed with a status of Resolved-Withdrawn, unless there is another
journey still open for the same client. The list of Client outreach cases related to a given
customer journey is visible from the review harness of that case.
This functionality is invoked in the Wrap up flow after the Customer Synchronization
subprocess (when the case is resolved) and in the RewindAssets flow (in case the user
decides to abandon the journey). A utility shape triggers the
ClosePendingClientOutreachCases activity, which checks whether there is only one
pending CLM-related case for the client. If that is true, the activity checks for all pending
Client outreach cases and closes them.
The ClosePendingClientOutreachCases activity also has the extension point for the
client to add additional functionality (the ClosePendingClientOutreachCases_ext
activity).
After the Client Outreach case is automatically resolved, the relevant audit note is
visible in the case history.
For more information about each of these methods, in Dev Studio, click <application
name> > Channels and interfaces, select API, and then select the Application tab.
API methods
GET /clm/clientoutreach/v1/cases/{recipientId}
Parameters
Response codes
ClientOutreach-Data-ClientList class.
Sample messages
Messages
Request
url https://{serverURL}/prweb/PRRestService/
CLMFS/v1/clm/clientoutreach/v1/cases/
0000169171
Response
{
"data": {
"clients": [
{
"clientName": "clientABC",
"clientID": "3117",
"case": [
{
"caseStatus": "Resolved-Completed",
"caseID": "CO-14007",
"caseConsolidatedID": "3117#0000169171"
}
]
},
{
"clientName": "NiceCo",
"clientID": "180",
"case": [
{
"caseStatus": "Resolved-Completed",
"caseID": "CO-27003",
"caseConsolidatedID": "180#0000169171"
},
{
"caseStatus": "Pending-Fulfillment",
"caseID": "CO-27004",
"caseConsolidatedID": "180#0000169171"
}
]
}
]
},
"messages": []
}
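For example, a client application could call this service as in the following hypothetical Python sketch; the server URL and credentials are placeholders, and basic authentication is assumed:

import requests

server_url = "https://lab.pega.com"  # placeholder server
recipient_id = "0000169171"

# List all Client outreach cases addressed to the recipient.
response = requests.get(
    f"{server_url}/prweb/PRRestService/CLMFS/v1/clm/clientoutreach/v1/cases/{recipient_id}",
    auth=("operator", "password"),  # placeholder credentials
)
response.raise_for_status()
# Walk the response structure shown above: clients, then their cases.
for client in response.json()["data"]["clients"]:
    for case in client["case"]:
        print(client["clientName"], case["caseID"], case["caseStatus"])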
PUT /clm/clientoutreach/v1/cases/{recipientId}/{caseId}
Parameters
the cases to be returned by the service.
Response codes
Messages
Request
url https://{serverURL}/prweb/PRRestService/
CLMFS/v1/clm/clientoutreach/v1/cases/
0000169171/CO-27004?
itemGUID=649d27b7-362f-435c-ada8-
b2f7d1a18265&itemValue=street 456,
BO&resolveRequest=false
Response
{
  "data": {
    "item": [
      {
        "itemValue": "street 456, BO",
        "itemType": "Data item",
        "itemName": "Please provide full address",
        "pyGUID": "649d27b7-362f-435c-ada8-b2f7d1a18265"
      },
      {
        "itemValue": "",
        "itemType": "Document",
        "itemName": "Please provide proof of address",
        "pyGUID": "2345d264-dbb9-44c8-859b-27123928422c"
      },
      {
        "itemValue": "",
        "itemType": "Data item",
        "itemName": "Please provide TAX ID",
        "pyGUID": "bf322441-59ff-4366-8360-8f63c071cfb5"
      }
    ],
    "recipientID": "0000169171",
    "clientDetails": {
      "clientName": "NiceCo",
      "clientID": "180"
    },
    "caseStatus": "Pending-Fulfillment",
    "caseID": "CO-27004",
    "caseConsolidatedID": "180#0000169171"
  },
  "messages": []
}
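A corresponding sketch for this method, reusing the constants from the previous example; the itemGUID, itemValue, and resolveRequest query parameters are taken from the sample request above.

def update_outreach_item(recipient_id, case_id, item_guid, item_value, resolve=False):
    # PUT /clm/clientoutreach/v1/cases/{recipientId}/{caseId}
    params = {
        "itemGUID": item_guid,
        "itemValue": item_value,
        "resolveRequest": str(resolve).lower(),
    }
    response = requests.put(f"{BASE_URL}/cases/{recipient_id}/{case_id}",
                            params=params, auth=AUTH)
    response.raise_for_status()
    # The response echoes the full item list and the case status.
    return response.json()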
GET /clm/clientoutreach/v1/document/{docGUID}
Parameters
The parameter identifies one of the items of a case.
Response codes
Sample messages
Request
url: https://{serverURL}/prweb/PRRestService/CLMFS/v1/clm/clientoutreach/v1/document/2345d264-dbb9-44c8-859b-27123928422c
Response
{
  "data": {
    "itemValue": "proofOfAddress",
    "itemType": "Document",
    "itemName": "Please provide proof of address",
    "pyGUID": "2345d264-dbb9-44c8-859b-27123928422c",
    "itemContent": "dGVzdA==\r\n"
  },
  "messages": []
}
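In the sample response, itemContent carries the document base64-encoded ("dGVzdA==" decodes to "test"). A minimal download-and-decode sketch, again reusing the placeholder constants above:

import base64

def download_outreach_document(doc_guid, target_path):
    # GET /clm/clientoutreach/v1/document/{docGUID}
    response = requests.get(f"{BASE_URL}/document/{doc_guid}", auth=AUTH)
    response.raise_for_status()
    item = response.json()["data"]
    # itemContent is base64-encoded; decode before writing to disk.
    with open(target_path, "wb") as f:
        f.write(base64.b64decode(item["itemContent"]))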
POST /clm/clientoutreach/v1/document/{docGUID}
Parameters
The parameters provide the document value, the file name, and whether to resolve the request.
Response codes
Sample messages
Request
url: https://{serverURL}/prweb/PRRestService/CLMFS/v1/clm/clientoutreach/v1/document/2345d264-dbb9-44c8-859b-27123928422c?docValue=NewProofOfAddress&fileName=NewproofOfAddress.pdf&resolveRequest=false
body: dGVzdA==
Response
{
  "data": {
    "itemValue": "NewProofOfAddress",
    "itemType": "Document",
    "itemName": "Please provide proof of address",
    "pyGUID": "2345d264-dbb9-44c8-859b-27123928422c"
  },
  "messages": []
}
Implementation
Service infrastructure
• Portals: no portals
• Roles: PegaCLMFS:WorkMgr, PegaCLMFS:API (New). Additional roles can be added as required.
Class Structure
The API is built under its own class structure, with two main branches that contain all of
the classes.
As there are multiple operations that are common to each method, there is a single point
of entry – the SvcLauncher activity – that is responsible for them. To recognize which
method should be run, the SvcActivity parameter must be passed to SvcLauncher; it
should contain the name of the data transform that implements the specific logic that
executes the service.
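Conceptually, the dispatch works like the following Python sketch (illustration only; in the product, SvcLauncher is an activity and the handlers are data transforms, and the handler names below are hypothetical):

def handle_get_cases(request):
    ...  # service-specific logic implemented by one data transform

def handle_update_item(request):
    ...

HANDLERS = {
    "GetCases": handle_get_cases,      # hypothetical names
    "UpdateItem": handle_update_item,
}

def svc_launcher(svc_activity, request):
    # Common operations (authentication, access checks, message setup) run here,
    # then control passes to the logic named by the SvcActivity parameter.
    handler = HANDLERS.get(svc_activity)
    if handler is None:
        raise ValueError(f"Unknown service: {svc_activity}")
    return handler(request)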
Authentication
Apart from the default authentication, which is based on the service package configuration,
the API should always ensure that the customer making the call can access the recipient
that the cases are addressed to. The different services use the CheckAccessToRecipient
activity, an extension point that checks that the identifier provided as the authentication
ID is the same as the recipient of the case or cases targeted by the services. Customers
can override this rule if they have other rules that drive the visibility of cases.
CheckAccessToRecipient activity
• Class: PegaFS-Int-ClientOutreach-Svc
• Parameters: AuthorizationId, RecipientId
Response messages
Services return different messages (errors and warnings). Messages are stored in the
messages property available in the response class. Service rules can add messages by
using a specific code; each code is categorized by a decision table (error/warning/info)
and given a description.
• Parameters: Code, Body
Using the retrieved code, the decision table returns the type (error/warning/info) and the
description (with optional details from the Body parameter), and the code and description
are appended to the messages list in the response.
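The mechanism resembles this sketch, where a dictionary stands in for the decision table and the message codes are hypothetical:

MESSAGE_TABLE = {
    "MSG001": ("error", "Case not found"),          # hypothetical entries
    "MSG002": ("warning", "Case already resolved"),
}

def append_message(response, code, body=""):
    # Categorize the code via the table, enrich with optional Body details,
    # and append code plus description to the messages list in the response.
    msg_type, description = MESSAGE_TABLE.get(code, ("error", "Unknown code"))
    if body:
        description = f"{description}: {body}"
    response.setdefault("messages", []).append(
        {"code": code, "type": msg_type, "description": description})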
Tracking
The system provides auditing capability that can be used by both IT departments and
auditors to show which calls were made through the API. The audit data resides in the
database so that it can be stored and accessed in a structured way and protected from
direct access. Entries in the audit tracking can be found in the PegaFS-Int-
ClientOutreach-Audit class.
Among the audited fields, CallerId stores the authenticated ID of the caller.
To demonstrate this capability, customers can use the new Pega WSS demo application.
https://pegasystems.github.io/uplus-wss/commercial_bank/index.html
Note: This application is not part of the product itself and therefore is not
supported.
To configure the web application, scroll down to the bottom of the home page of one of
the industries and click Settings.
There are multiple options; some of them are generic, while others are specific to
particular Pega applications. The most important tabs for a CLM application are:
• To Do Widget – Configures the widget that displays in the central area of the
landing page after login. In the widget, the user can see the list of the Client
outreach cases that are addressed to them.
• Users – Configuration of the users supported by the WSS application, in addition to
the Pega operators used to access the Pega application.
In addition to the actual configuration options, there are two icons in the right-top
corner that enable users to download the configuration file and upload a configuration
from an existing file. Note that all changes to the configuration are lost after the
browser session is finished. Browsers opened in an incognito session cannot view the
configuration made under other sessions. As a best practice, always keep a copy of the
configuration file for future reuse.
To Do Widget
In the To Do Widget area, provide the configuration that is listed in the following table.
• Action: Display
• URL: https://lab0525.lab.pega.com/prweb/app/PegaCLMFSWSS_1105/
• Classname: Data-Portal
The URL of the application, which must be an HTTPS-secure URL, can be obtained in the
Mashup configuration.
Users
Configure the users that can access the web application and map the Pega operators.
Multiple users can be configured to use the same Pega operator but with different
customer IDs. Set the contact ID as the identifier of the customer in the Pega CLM
database.
Server configuration
The Pega server should be exposed through a valid name and a valid certificate. Access
through non-secure HTTP, or to an HTTPS server that uses a self-signed certificate, is
neither allowed nor recommended.
In addition, the server should have the following attributes disabled: the same site
cookie attribute and the CSRF token check (Dev Studio > System > Settings > Cross Site
Request Forgery).
Application configuration
The URL of the web application needs to be added to the application as a trusted origin
(Integration & Security). Customers who use the CLMFSWSS application shipped with
CLM do not need to make any changes because this configuration is already present.
In addition to these changes, there are certain actions (for example, open assignment)
that also need the server listed in the Content Security Policy of the application under
frame-ancestors.
Implementation at CLM
• Name: PegaCLMFSWSS
• Build on: PegaCLMFS
• Rulesets: PegaCLMFSWSS (New), PegaFSCOWSS (New)
• Trusted Origin: https://pegasystems.github.io/uplus-wss/
Access Group
• Name: CLMFS_WSS
• Application: PegaCLMFSWSS 8
• Portals: no portals
Operators
• Name: CLMFSWSS
• AccessGroup: CLMFS_WSS
• Name: CLMFSWSSSysAdmin
• AccessGroup: CLMFS_WSS
During the rendering of a section, the parameter page cannot be used to pass
parameters to a data page. As a workaround, the section itself can iterate through all the
parameters during section generation and put them in a clipboard page that is accessible
during the configuration of the data page. For that purpose, a new container class was
created, along with a custom HTML section, to move the values from the parameter page
into a top-level clipboard page.
The following articles detail the default customer risk assessment factors and whether
they apply to individual or organization customers, or to both.
The risk assessment of both individual and organization customers is calculated using
the following scorecards, which combine and weigh the score of each factor.
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The following table displays the default weightings for the risk factors.
• Product related: 1, 1
• Secondary risk: 1, 1
Each risk factor is driven by a specific scorecard rule, which is part of the decision rule
category. The Pega Financial Services Industry Foundation provides a declarative
network that automatically invokes the required rule in order to assess the needed
value.
In this rule, the SynchronizeRiskProfile data transform is used to invoke the risk engine
and reassess the customer risk.
There are two declare expressions that are configured on the properties
CustomerRiskCode and CustomerRiskScore that trigger the calculation of risk profile.
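The overall calculation can be pictured with this minimal sketch, where each factor score is multiplied by its weight and summed, as a scorecard would do; the factor names, weights, and risk bands here are illustrative only:

def customer_risk_score(factor_scores, weights):
    # Weighted sum of the individual factor scores.
    return sum(weights.get(name, 1) * score for name, score in factor_scores.items())

def customer_risk_code(score):
    # Illustrative banding; the real bands come from the scorecard results.
    if score < 50:
        return "Low"
    if score < 100:
        return "Medium"
    return "High"

scores = {"ProductRelated": 50, "SecondaryRisk": 20, "CountryRelated": 25}
weights = {"ProductRelated": 1, "SecondaryRisk": 1, "CountryRelated": 1}
print(customer_risk_code(customer_risk_score(scores, weights)))  # "Medium"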
• SaveWorkFolder – Used for the creation of and to update the master profile of the
customer. This activity uses the pyWorkParty(Customer) copy of Customer data
from the current work case as source data, which is then synchronized into the
Customer Master Profile. Synchronization is performed by means of the
SynchronizeDriverDataToMasterProfile data transform.
• RiskProfileInfoInd (section for Individual)
• RiskProfileInfoOrg (section for Organization)
• RiskProfileInfoFund (section for Funds)
The business code risk assessment factor helps to predict and identify simple mistakes
that can jeopardize a company's future. You can identify the risks facing your business
and get detailed steps on how to resolve them. It is applicable only to organization
customers. The business risk is calculated in the BusinessCode scorecard rule, based on
the BVulnerableBusinessCode and BusinessCodeSensitivity declare expressions.
The rule uses the …deOpen data page to retrieve records from the FSF_Sample_Businesscode table.
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The country related risk factor is applicable for individual and organization customers.
The risk is calculated by the CountryofNationalityScore scorecard rule, which makes the
calculation by using the CountryOfBirthScore, CountryOfResidenceScore,
CountryOfNationalityScore, CountryOfIncorporationScore, and BusinessCountriesScore
declare expressions.
The following list details the possible values for the SensitivityCode, CountryOfNationality,
and CountryOfIncorporation properties.
• No = no sensitivity
• M = medium sensitivity
• H = high sensitivity
• SP = special sensitivity
Each expression first checks for a country code; if the value is blank, the rule skips the
lookup. The scores are populated on the PegaFS-Data-RiskProfile page from the customer
master profile (WorkFolder) page.
The CountryOfNationalityScore property on the PegaFS-Data-RiskProfile page is populated
from the country of nationality and its related information (sensitivity and whether the
country is sanctioned or not); the expression also checks the CountryOfNationality value.
If no country of birth is provided, the CountryOfBirth value is 0 and does not impact the
score. If the value is not 0, the expression checks the SensitivityCode value.
One expression checks that the TaxIDType property has a value of 2, and also checks the
CountryOfIncorporation value.
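To make the evaluation concrete, here is a sketch of how a single country's contribution might be derived from its sensitivity code and sanction status; the point values are assumptions, not the shipped configuration:

SENSITIVITY_SCORES = {"No": 0, "M": 25, "H": 50, "SP": 75}  # illustrative values

def country_score(country_code, sensitivity_code, is_sanctioned):
    # A blank country code skips the lookup and does not impact the score.
    if not country_code:
        return 0
    score = SENSITIVITY_SCORES.get(sensitivity_code, 0)
    if is_sanctioned:
        score += 50  # assumed surcharge for sanctioned countries
    return score

print(country_score("XX", "SP", True))   # 125
print(country_score("", "H", False))     # 0 - no country provided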
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The external data risk factor is only applicable for individual customers. It is calculated
based on third-party data using the ExternalDataScore scorecard rule. The inputs are
calculated using the following declare expressions, which return the risk as low, medium,
high, or extreme.
• One declare expression determines the external fraud check value (third-party data)
from the master profile page (WorkFolder), as input to the assessment rule.
• Another declare expression determines the external credit score assessment value
(third-party data) from the master profile page (WorkFolder).
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The occupation-related risk is calculated from the PositionCodeScore and YearsOfWork
rules, which assess the customer's occupation risk. The value is synchronized in the
SynchronizeRiskProfile data transform, and the inputs have a weight of 1:
• PositionCodeScore – contains the weight value returned from the MapPositionCodeScore
map value rule.
• YearsOfWork – stores how many years the customer has worked at the company.
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The product related risk factor applies to individual and organization customers. Direct
costs, for example, claims, discounts, and product recalls, can have a huge impact on a
company. Indirect costs, such as loss of market share and deteriorating reputation, can
have a similar impact.
The product related risk is calculated based on product vulnerability in the Add
products step. A flag is set as Yes or No for each vulnerable product in the product
matrix table. If the product is vulnerable, it adds a score of 50 to the risk calculation. If it
is not vulnerable, the added score is 0.
The risk for each product is calculated whenever you click Save in the case.
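The contribution can be sketched as follows; the IsVulnerable flag name is hypothetical and stands for the vulnerability flag in the product matrix table:

def product_related_score(products):
    # Each vulnerable product adds 50 to the risk calculation; others add 0.
    return sum(50 if p.get("IsVulnerable") == "Yes" else 0 for p in products)

print(product_related_score([{"IsVulnerable": "Yes"}, {"IsVulnerable": "No"}]))  # 50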
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The related parties risk factor applies to individual and organization customers. It is
calculated using the relevant relationship score, which is, in turn, dependent on the
relevant related parties for the customer.
Use the RelevantPartyDecision decision table to decide whether the related party is
relevant.
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The relationship duration related risk factor applies to individual and organization
customers. The risk is assessed based on the duration of the relationship between the
customer and the company.
The risk is calculated using the DurationOfRelationShip scorecard rule and is based on
the DurationOfRelationShipScore declare expression rule, which evaluates the duration
of the relationship with the customer in years. The calculation measures the difference
between the RelationshipStartDate value and the current date.
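The duration calculation amounts to the difference in whole years between RelationshipStartDate and the current date, as in this sketch:

from datetime import date

def relationship_duration_years(start, today=None):
    today = today or date.today()
    years = today.year - start.year
    # Subtract one year if this year's anniversary has not been reached yet.
    if (today.month, today.day) < (start.month, start.day):
        years -= 1
    return years

print(relationship_duration_years(date(2019, 6, 1), date(2024, 5, 31)))  # 4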
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
The secondary risk assessment factor calculates the risk for the customer whenever
Know Your Customer due diligence is performed. The ScoreCardSecondaryRisk
scorecard rule uses the SecondaryDDRiskCode and SecondaryDDRiskScore declare
expressions.
For more details about how to extend customer risk properties, see the Implementation
Guide on the Pega Client Lifecycle Management for Financial Services page.
Data traceability
Financial service institutions are subject to a high number of complex, frequently
changing regulations that vary greatly from jurisdiction to jurisdiction. Institutions
might have to prove to regulatory auditors how specific decisions were made. A large
volume of data that drives those decisions is captured from a variety of sources such as
internal databases, customer self-service, manual entry by employees, and third-party
systems. Institutions must be able to track where the data was initially captured from
and how it changes over time.
The Data traceability feature enables you to identify and track data objects and to
configure auditable entries on those particular data objects. Afterward, the data change
tracking engine scans for changes and saves them in an easily accessible data change
repository.
For more details about how to extend data traceability properties, see the Data
traceability extension details on the Pega Client Lifecycle Management for Financial
Services page.
Customer Lifecycle Management data change tracking functionality extends the Pega
Platform data change auditing infrastructure by:
• Capturing the following information for each change:
◦ Source information
▪ Source Operator – the operator who performed the change. This is left
empty if the change is automatic.
▪ Source Thread Node – the node on which the change is performed. The
information is retrieved from the thread system page.
▪ Source Thread IP – the IP address from which the change occurred. The
information is retrieved from the thread system page.
◦ Target information
▪ Customer ID – customer identifier for the tracked change.
▪ Case ID – work case identifier for the tracked change.
▪ Target Context – data vehicle tracked, for instance, customer master
profile.
▪ Target Element – data element changed, for instance, city.
▪ Target Element Path – path to the data element changed within target
context, for instance, Address(Home).
◦ Values
▪ Previous Value – target element value previous to the change, for
instance, Cambridge.
▪ Final Value – target element value after the change, for instance,
Somerville.
◦ Comment
▪ Change Comment
◦ Custom properties
▪ Two custom properties available for extension/specialization
◦ Change timestamp
▪ Timestamp
• Enabling you to extend and customize for more specialized needs by means of
two custom properties exposed in the mapped table.
Foundation for Financial Services provides a reusable, common report definition for
showing data change history. This report definition allows filtering by:
• Case ID
• Customer ID
• Source context
• Target context
• Target element ID
• Target element path
For more information about how data traceability is implemented, see Data traceability
- implementation details.
Audit Creation
fs/FSIFTrackingFunctionalityEnabled
Use the fs/FSIFTrackingFunctionalityEnabled dynamic system setting to enable or
disable tracking functionality. The IsTrackingFunctionalityEnabled when rule
evaluates the dynamic system setting. If the setting has a value of true, it tracks
changes.
FSIFTrackSecurityChanges
The FSIFTrackSecurityChanges activity uses the FSIFTrackChangesSetContext
decision table to configure the target context based on tracking requirements,
such as the customer master profile. It collects scalar and embedded pages
defined in the FSIFTrackSecurityChanges data transform, starting with the
MasterProfile target context.
For scalar properties, the logic sets and stores the old and new values in the
fsf_changetracker table. On embedded pages, the activity launches corresponding
track definitions from the FSIFTrackSecurityChanges data transform in that class.
The tracking functionality is repeated through the Pega Platform invoke activity
function until all scalar and embedded properties defined in the data transforms
are covered.
For embedded properties, the activity identifies content changes when an item is
removed from a list or group. Before storing the tracked change, the activity
retrieves the value from the database (in this case, the master profile) and
compares it with the current clipboard value. If the two values differ, a record for
this property is created in the fsf_changetracker table, with the old value from the
database and the new value from the clipboard. After running this activity, the
original commit is made to the database.
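The comparison logic can be pictured with this sketch, which contrasts the persisted values with the current clipboard values and emits one change record per differing tracked property (property and column names follow the descriptions above; the record layout is simplified):

def track_scalar_changes(db_page, clipboard_page, tracked_properties, target_context):
    # Compare the value retrieved from the database with the current clipboard
    # value; record old and new values for every property that differs.
    records = []
    for prop in tracked_properties:
        old, new = db_page.get(prop), clipboard_page.get(prop)
        if old != new:
            records.append({
                "targetcontext": target_context,
                "targetelementid": prop,
                "previousvalue": old,
                "finalvalue": new,
            })
    return records  # persisted to fsf_changetracker before the original commit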
MapChangeTrackerDataInfo
The MapChangeTrackerDataInfo data transform helps to build the required
structure before the record is saved to the fsf_changetracker table. This data
transform (on Embed_Historymemo) uses the history memo to store
targetelement. The previous value and final value are stored in a particular format
and are positioned based on whether they are changed, added, or removed. This
also brings in other details such as operator, source application, and so on, using
the D_SourceDataInfo data page. A data transform is applied to this data page,
which helps to define a source application, IP address, operator, source node, and
source context using Pega Platform system pages to get corresponding
information.
Foundation for Financial Services provides a dedicated database table to store all
changes on scalar or embedded properties that are defined in the data transform.
This table has a composite fsf_changetracker_pkey primary key with the following
columns.
• targetcontext
• targetelementid
• targetelementpath
• timestamp
For better performance, the database has indexes on the following columns.
• targetcontext
• targetelementpath
• targetelementid
• customerid
• caseid
• sourceContext
Audit Creation
Foundation for Financial Services tracks the PegaFS-Data-Party-MasterProfile class for
changes to any of its scalar and embedded properties. To achieve this, the
FSIFTrackSecurityChanges declare trigger is created on the PegaFS-Data-Party-
MasterProfile class and calls the FSIFTrackSecurityChanges activity. This trigger is
configured to invoke the activity each time the master profile is saved. When the activity
is called, it provides the appropriate value for Customer ID.
When invoking the tracking functionality, you can also:
◦ Define the entry path from the primary page on which the change
tracking begins. This entry path is passed from the consuming context
within the TargetElementPath parameter.
◦ Override the dynamic timestamp to be persisted in the table by passing
it within the CurrentTS parameter.
◦ Pass custom information and map it to the CustomProperty1 column in
the table by using the value within the CustomProperty1 parameter.
◦ Pass custom information and map it to the CustomProperty2 column in
the table by using the value within the CustomProperty2 parameter.
◦ Override the dynamic value of the Comment column by passing the
value within the CustomComment parameter.
The user interface presents various elements to navigate the changes: for example,
history icons for scalar properties to view the different value changes that the property
underwent, and hyperlinks to drill down into non-scalar properties (pages, page lists,
page groups).
The rendering of the user interface is driven by a tree data structure where the root
node represents the customer master profile, the intermediate nodes of the tree
represent the properties on that profile, and the leaf nodes contain the actual audit
entries for those properties. This data structure is maintained in the
D_GetAuditTreeForCustomer data page.
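The shape of that tree can be sketched as follows (structure only; the actual pages of D_GetAuditTreeForCustomer may differ):

def build_audit_tree(audit_entries):
    # Root = customer master profile; one intermediate node per property;
    # leaves = the individual audit entries for that property.
    tree = {"name": "MasterProfile", "properties": {}}
    for entry in audit_entries:
        prop = entry["targetelementid"]
        node = tree["properties"].setdefault(prop, {"name": prop, "leaves": []})
        node["leaves"].append(entry)
    return tree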
Extension Points
To customize this view to suit your business needs, use the following extension points:
• PrepareDataToBuildAuditEntry_Ext – This data transform rule is invoked as the initial
step for every customer audit entry before it becomes a node in the audit tree. It can be
updated to modify or enhance the audit entry for display purposes. For instance, the
SkipAuditRecord parameter can be set to true using this extension to filter out specific
audit records based on their values. Additionally, this rule can retrieve additional
customer data based on properties, including custom ones, available on the audit entry.
• BuildAuditNodeForSimpleProperty_Ext – While building the tree, this data transform
rule is invoked when a scalar property's audit entry is encountered for the first time and
a node is created for it that represents all of its changes. This extension can be used to
enhance the node to suit business needs, such as providing additional functional
information about the property. Set up the required details using this extension; they
can then be used to enhance the display.
• BuildAuditLeafNode_Ext – This data transform rule is invoked while forming the leaf
node (the transformed version of the actual audit entry).
• IsTrackingFunctionalityEnabled – False disables data traceability.
• FSIFTrackSecurityChanges – Defines the properties to be tracked. For example,
BusinessObjGoalsList is a scalar property and Address is a page group; list both of these
in this data transform. Create a data transform with the name FSIFTrackSecurityChanges
in its applies-to class (PegaFS-Data-Party), then add to it the list of properties to be
tracked (properties specific to Client Lifecycle Management or Know Your Customer).
• D_SourceDataInfo – Used to get contextual information about source data.
The tracked classes and properties are:
• PegaFS-Data-Party-MasterProfile – CountryOfInc, BusinessCode, CountryOfCitizenship, NextKYCReviewDate
• PegaFS-Data-Address – CountryCode, ZipCode, AddressLine1, City
• PegaFS-Data-Country – CountryName, SensitivityCode, pyDescription, SanctionDesc, IsEUCountry
• PegaFS-Data-PartyPartyXRef – actingAs, Party2Id, RelCode, RiskCode
• PegaFS-Data-EmploymentHistory – Title
• PegaFS-Data-Tax – TaxNewStatus, TaxOldStatus, Status, OverriddenReason, OverriddenBy, OverriddenDate, JurisdictionList
eScreening
Businesses must comply with laws regarding know your customer (KYC), anti-money
laundering, countering the financing of terrorism, and anti-bribery-corruption. As a
result, they are obliged to collect information about whom they are doing business with
on an ongoing basis. Client onboarding and KYC services must be dynamic and fast in
order to provide a competitive advantage. However, the main factor hindering these
services is the need for client data gathering and subsequent validation on that data.
This process usually takes days or weeks of diligent human work.
• Integration layer (base layer), where the technical interfacing assets and
functionality reside. Technical connectors to integrate and interface with external
data providers are implemented here. This layer is Pega industry and data model-
agnostic, and it is intended to contain reusable assets across all Pega industry
applications.
• Industry abstraction layer (middle layer), where the actual business services are
implemented as abstract, provider-agnostic components for a given Pega
industry. In addition to the abstract services, some abstract data elements
required to support the services are present. Those abstract elements are coupled
with the Pega Foundation for Financial Services data model and features so that
they can be used in applications built on top of the foundation.
• Consuming application layer (top layer), from where the actual business
applications built on top of Pega Foundation for Financial Services can consume
and use the industry business services, regardless of their technical complexity or
the provider in use.
All financial services applications can use the business services in a decoupled way (in
relation to the actual data provider). When a provider changes, for a new location or for
a new Customer implementation, the consumption of the service does not require any
change. Only the two lower layers require configuration. The business side of the
service is isolated from its actual implementation.
• The eScreening service in the Pega Foundation for Financial Services Industry
layer can be consumed by any application built on the foundation.
• Additionally, the eScreening service can be consumed by different cases and
functions within the applications like the Screening subcase or directly by the New
Business or Onboarding cases, and so on.
• The eScreening service can consume different data providers by using the
configurable connectors in the integration layer, or multiple sources at the same
time. For example, the eScreening service can either use World-Check Connector
or Equifax Connector or any other financial information data provider. It can also
use all of them to merge the results and get better insights into the risk of
onboarding an individual or an organization.
Integration layer
For in-depth details of how this layer integrates with the World-Check component, see
the Pega Foundation for Financial Services 8.1 Screening Using World-Check One Guide.
This article primarily describes the Industry and Consuming application layers of the
architecture.
Industry Layer
This layer contains all of the business services and all of the classes specific to the
financial services industry. The data model is specific neither to the provider of the
information nor to the consuming application. The eScreening service is implemented in
the PegaFS-Data-ExtProvider-BusService-eScreening class and is used for individual or
organization customers.
The rest of the properties used by the business service belong to the inheritance tree
and are also reused by other services.
The eScreening case can be created in two stages:
• Enrich stage
• Due diligence stage
The eScreening case in the Due diligence stage is triggered when it is not created in the
Enrich stage.
For Organizations, eScreening is triggered for the related parties in the Due diligence
stage.
CustomerInvestigation – A process in the Enrich stage that triggers the eScreening case
by using the Create case shape in the process flow.
The screening flow connects with the World-Check component to get matches. If the API
returns screening matches, the flow progresses further and resolves the step, then starts
the next step, Investigation. If no result is returned after invoking the API, the flow
counts the number of attempts the system makes and the consecutive wait time before
reattempting to find matches. In the event of a failure to connect, the Screening success
decision table waits for the results and initiates the GetWCScreeningMatches data
transform with the required parameters by using the pyContinueAfterWait flow action
until it gets the result.
If the eScreening is successful but returns no results, the user is informed. This is
achieved by invoking the resolve case utility, which switches the flow to initiate the
Synchronization stage, where the details about the eScreening are persisted in the
Master profile. This data transform in turn calls GetWCScreeningMatches.
Based on whether the results have a positive or a negative match, KYC cases are
triggered in the Due diligence stage. If the match returns positive results, the
relationship manager can mark the customer being onboarded as high risk, which leads
to rigorous due diligence. If the match result is negative, the relationship manager can
proceed with a low-risk customer and ask for fewer KYC details and documents.
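The retry behavior described above resembles this sketch; the attempt count and wait time are illustrative, since in practice they are driven by the flow configuration and the pyContinueAfterWait flow action:

import time

def get_screening_matches(invoke_api, max_attempts=3, wait_seconds=60):
    # Reattempt the screening call, waiting between attempts, until a
    # result arrives or the attempts are exhausted.
    for attempt in range(1, max_attempts + 1):
        matches = invoke_api()
        if matches is not None:
            return matches  # flow resolves the step and starts Investigation
        time.sleep(wait_seconds * attempt)  # wait grows with each attempt
    return None  # no result yet: the flow keeps waiting for the results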
During a Maintain journey, eScreening is configured to trigger only if the previous
screening expires during a certain timeline. This timeline is configurable in the
application by using system settings (the "…Validity" settings), apart from the condition
of whether the customer is licensed to use eScreening services.
The rules in the following table provide the conditions that trigger the case; the
HasValidScreeningPages when rule must satisfy them.
Reports
Pega Client Lifecycle Management for Financial Services provides a set of reports that
show customer master profile data and customer journey data.
• CLM Abandonment
• CLM Customer History
• CLM Customer Journey History
• CLM Customer products
• CLM Customer Risk
• CLM Performance
The CLM Abandonment, CLM Customer Journey History, and CLM Performance reports
get data from work objects and display the results in a summary view. Case details,
such as case ID, customer type, customer name, journey type, and journey subtype, are
displayed in a detailed view.
The Case ID column has a link that opens the work object to verify additional case data.
The CLM Customer History, CLM Customer products, and CLM Customer Risk reports
get data directly from the customer master profile and display the results in a summary
view. Customer details, such as customer ID, customer type, customer name, and
status, are displayed in a detailed report.
The Customer ID column has a customizable link that opens the Customer master
profile to verify additional customer data.
The rules used to implement the CLM Reports are outlined in the following table. All
rules are in the PegaCLMFS ruleset.
• Joins FSF_SAMPLE_COUNTRY (pegadata.FSF_SAMPLE_COUNTRY database table) on
CustomerID, PegaFS-Data-Party-MasterProfile (pegadata.pc_work_masterprofile database
table) and Index-PegaFS-Data-Product (pegadata.pr_Index_PegaFS_Data_Product
database table) on CustomerID, and PegaFS-Int-FSF_SAMPLE_PRODUCTMATRIX
(pegadata.FSF_SAMPLE_PRODUCTMATRIX database table) on ProductID.
• Joins PegaCLMFS-Work (pegadata.obfs_pegaclmfs_work_clm database table) and
PegaFS-Data-Party-MasterProfile (pegadata.pc_work_masterprofile database table) on
CustomerID, and Index-PegaFS-Data-ProductAdoption
(pegadata.pr_Index_PegaFS_PrdAdoption database table) on pzInsKey.
• Joins … (database table) on pyUserIdentifier (pegadata.obfs_pegaclmfs_work_clm
database table).