This document provides an overview of guidelines for designing enterprise-grade integration scenarios with SAP Integration Suite. It discusses key qualities like high availability, loose coupling, error handling, and reusability. Specific techniques covered include implementing exactly-once message processing, asynchronous decoupling using message queues or data stores, creating reusable script collections and message mappings, and implementing retry mechanisms. The guidelines help create integration flows that are robust, reliable, and maintainable for critical business processes.

PUBLIC

openSAP
Modernize Integration with SAP Integration Suite

Week 4 Unit 1

00:00:05 - Hello, everyone. Welcome to week four.


00:00:08 In this first unit, we will learn how we can design enterprise-grade integration scenarios.
00:00:15 Before we go into details, it is very important to understand the relevance
00:00:19 of enterprise-grade integration scenarios. I think all of you agree that it is quite easy to design

00:00:26 and run a simple integration flow. For learning purposes,


00:00:30 I think running an integration flow could be a goal. But when you'd like to run
00:00:34 your integration flow productively, you'll need to take care of non-functional aspects, as well,
00:00:40 as a poorly designed integration flow can lead to errors. Applying these learnings
00:00:46 to all your productive integration flows is important, as they share tenant resources
00:00:52 that can indirectly affect one another. In the worst case, the integration flow breaks,
00:00:59 resulting in service disruption for the business processes, and these design guidelines help to maintain
00:01:05 overall business process availability. For example, you can design your integration flows
00:01:11 in many different ways to implement a certain integration scenario.
00:01:14 However, there are certain guidelines that help you to optimize the performance of your scenario.
00:01:24 Now that we have understood the importance of enterprise-grade integration flows,
00:01:30 let us understand the qualities that make an integration flow
00:01:34 an enterprise-grade integration flow. One of the key qualities of a cloud service
00:01:39 is high availability. In line with this,
00:01:42 it is crucial to construct integration flows that never break the business processes.
00:01:48 By adopting a resilient approach, we can mitigate the impact of failures
00:01:53 and maintain a smooth flow of operations. As tenant resources are often limited,
00:01:59 effective resource management is also very essential in cloud integration, to prevent leaks
00:02:05 and optimize performance. Integration flows should be designed
00:02:12 with resource management in mind. Loose coupling reduces dependencies on external components
00:02:17 and prevents cascading failures. Anticipating and handling temporary failures is vital
00:02:23 when integrating with external systems. Integration flows should gracefully handle failures
00:02:30 without disrupting the business process. Errors must be caught and appropriately managed
00:02:36 to ensure uninterrupted operations. SAP provides pre-packaged integration content
00:02:42 to promote reusability. This includes integration flows,
00:02:46 value mappings, and documentation. By leveraging this content,
00:02:50 you can accelerate project development. In conclusion, building enterprise-grade integration flows
00:02:57 requires attention to key qualities and by considering these factors,
00:03:02 we can create integration flows that are robust, reliable and maintainable.
00:03:10 To design such enterprise-grade integration flows, SAP provides clear and easy-to-apply guidelines
00:03:15 in order to safeguard a company's mission-critical business processes.
00:03:20 For example, applying the highest security standards, keeping your integration flows readable

00:03:26 and easy to understand, handling errors in a good way, applying application-specific guidelines, and so on.
00:03:32 It also contains guidelines to implement the most common enterprise integration patterns
00:03:37 such as content-based routing, aggregator pattern, splitter, and so on,
00:03:43 and specific integration patterns like exactly-once scenarios,
00:03:46 asynchronous decoupling, as well. For each guideline, one or more example integration flows
00:03:52 are provided that help you to quickly understand the topic. These integration flows are kept as simple as possible
00:03:59 and can be set up and executed very quickly. The example integration flows
00:04:04 are contained in a set of dedicated integration packages published on the SAP Business Accelerator Hub,
00:04:10 with proper documentation. It also comes with a Postman collection
00:04:15 that helps you to easily run the scenarios on your own. Let's explore some of the latest design guidelines
00:04:24 as we won't be able to cover all of them in detail here. To start off, we take a look at the implementation
00:04:31 of the exactly-once quality of service in your integration scenarios.
00:04:35 This ensures that a message is delivered and processed at the receiver system exactly once.
00:04:42 A unique identifier is a prerequisite to be able to identify duplicates
00:04:47 and for this, Cloud Integration supports message ID mapping and idempotent process call as a flow step in the modeling.
00:04:55 Message ID mapping is required whenever the sender and receiver
00:04:59 require different message ID formats. If the sender sends the same message again,
00:05:03 the ID mapper must guarantee that the same source ID is mapped to the same target ID.
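As an illustration of that guarantee, the target ID can be derived deterministically from the source ID, for example with a name-based UUID. A minimal Groovy sketch, in which the function name and prefix are hypothetical:

    import java.util.UUID

    // The same source ID always produces the same target ID,
    // so a resent message maps to the identical target identifier.
    def String targetIdFor(String sourceId) {
        return UUID.nameUUIDFromBytes(("order:" + sourceId).getBytes("UTF-8")).toString()
    }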
00:05:09 This is required in particular in a split or multi-cast scenario where one sender ID
00:05:14 needs to be mapped to multiple IDs on the receiver side. The idempotent process call
00:05:21 and local idempotent process is required where the receiver system is unable to identify
00:05:28 and discard duplicate messages, and when redundant processing
00:05:32 may lead to undesired side effects. By checking the message ID in the idempotent repository,

00:05:39 duplicate detection can be performed and the processing can be stopped if duplicates are found.
00:05:46 All duplicate checks are logged in the message processing log, as well.
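As a sketch of how a step after a local idempotent process call might react to the duplicate flag in a Groovy script - assuming the header name SAP_IsDuplicate from the product documentation, with the property consumed by a subsequent router step being illustrative:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // Assumption: the idempotent process call marks duplicates in this header.
        def isDuplicate = message.getHeaders().get("SAP_IsDuplicate")
        if (isDuplicate != null && Boolean.valueOf(isDuplicate.toString())) {
            // Record the check so it is visible in the message processing log.
            def log = messageLogFactory.getMessageLog(message)
            log?.setStringProperty("DuplicateCheck", "duplicate detected, processing stopped")
            message.setProperty("skipProcessing", true) // evaluated by a router step
        }
        return message
    }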
00:05:54 In cases where some sender systems are unable to perform message retries
00:05:58 in the event of transfer failures, we can ensure reliability by decoupling inbound
00:06:03 and outbound processing using message queues or data stores. By leveraging these storage options,
00:06:10 the message is persisted at the start of the processing sequence
00:06:13 allowing for faster processing and immediate response to the sender
00:06:18 with HTTP code 202 - that is: accepted. Subsequent processing steps are then
00:06:24 executed asynchronously, providing loose dependencies
00:06:28 and implementing a retry mechanism. The new Data Store sender adapter

00:06:33 also facilitates this asynchronous decoupling through the data store and is suitable for lighter usage.
00:06:41 While both message queues and data stores can be used for asynchronous decoupling with a retry pattern,
00:06:47 message queues are generally recommended as they provide high-speed messaging with high throughput.
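Where the immediate acknowledgment is implemented in a script rather than a Content Modifier, a minimal Groovy sketch could look like this; CamelHttpResponseCode is the standard Apache Camel header for the HTTP response code, and the response body is illustrative:

    import com.sap.gateway.ip.core.customdev.util.Message

    // Runs right after the message is persisted to the queue or data store:
    // the sender receives 202 Accepted while processing continues asynchronously.
    def Message processData(Message message) {
        message.setHeader("CamelHttpResponseCode", 202)
        message.setBody('{"status":"accepted"}')
        return message
    }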
00:06:57 The next set of guidelines ensures better reusability and maintainability of your integration flows.
00:07:03 In Cloud Integration, a script collection serves as a bundle of scripts that provides many benefits
00:07:09 over local scripts - like increased reusability, the ability to avoid failures, reduced maintenance effort,
00:07:16 and decreased file size and memory usage. It supports script resources such as Groovy scripts,
00:07:22 JavaScript, and JAR files. These script resources can be referenced
00:07:26 by different integration flows within the same package or across different packages.
00:07:31 When working with a script collection, it is very important to avoid making incompatible changes,
00:07:36 like adding a new parameter, or renaming a parameter type
00:07:40 or a function in the referenced script resources. Instead of that, create a new function.
00:07:46 This ensures that the integration flows utilizing the script collection
00:07:49 continue to function as intended. For more detailed guidelines on scripting best practices,
00:07:55 it is recommended to refer to the General Scripting Guidelines.
00:07:59 These guidelines provide valuable insight and recommendations for script development.
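To illustrate the rule about incompatible changes, a sketch of how a script collection can evolve compatibly; the function names and logic are hypothetical:

    // Existing function, referenced by deployed integration flows.
    // Its signature stays untouched.
    def String formatOrderId(String id) {
        return id.trim().toUpperCase()
    }

    // A new requirement (a prefix) is added as a new function, so existing
    // callers of formatOrderId keep working unchanged.
    def String formatOrderIdWithPrefix(String id, String prefix) {
        return prefix + formatOrderId(id)
    }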
00:08:04 It's worth noting that currently there is no auto-deployment feature available
00:08:08 for referenced script collections. Therefore, any changes
00:08:12 or updates made to the script collection also need to be manually deployed
00:08:16 in order for the changes to take effect. On a similar note, you can also create a message mapping
00:08:24 as a dedicated artifact that can be referenced by different integration flows
00:08:28 within the same or different packages. Not only does it promote reusability, but it adds the flexibility
00:08:35 to dynamically assign message mapping artifacts using a header or property, or via the Partner Directory.
00:08:42 This way, you can execute different message mappings from a single integration flow.
00:08:47 Similar to the script collection, it is also important to avoid making incompatible changes
00:08:53 to the referenced message mapping artifact, and also, there's no auto-deployment possible as of today.
00:09:00 Like script collections, you need to first deploy the message mapping artifact
00:09:03 and then the integration flow. In an integration world, retry is an important pattern.
00:09:12 Connections to external endpoints or even entire integration scenarios
00:09:16 can experience temporary issues, leading to failures. In some cases, the sender systems
00:09:22 may not have built-in retry capabilities, making it essential for Cloud Integration to handle retries.
00:09:30 By implementing a retry mechanism on the Cloud Integration side,
00:09:34 we can address these temporary failures. The retry process ensures that messages are redelivered
00:09:41 to external targets without reprocessing the entire integration flow.
00:09:45 This helps to minimize disruption and ensures the smooth flow of data.

00:09:51 Let's take a look at some of the specific adapters and scenarios where a retry mechanism can be configured.
00:09:58 For outbound integration flows, the XI receiver adapter provides configurable options
00:10:03 for delivery assurance, such as Exactly Once. Additionally, the Kafka receiver adapter
00:10:09 can be configured to handle retries. The SuccessFactors receiver adapter
00:10:15 also retries the connection every three minutes for a maximum of five times.
00:10:20 On the inbound side, the XI sender adapter supports configurable delivery assurance,
00:10:25 including both the Exactly Once and At Least Once options. Similarly, the AS2 or AS2 MDN sender adapter
00:10:33 and the AS4 sender adapter can be configured for delivery assurance.
00:10:38 It is worth noting that all polling sender adapters, such as SFTP, FTP, mail, Ariba, and SuccessFactors,
00:10:45 support retries. Additionally, sender adapters that connect to a message broker,
00:10:50 such as JMS, AMQP, and Kafka, as well as the Data Store sender adapter,
00:10:55 provide retry capabilities. Now, let's jump into the demo,
00:11:01 where we see how to access the design guidelines, sample integration flows, and some latest guidelines
00:11:07 that we have learned today. Let's go to the SAP Integration Suite tenant
00:11:12 and then on the top right, there's a link to the online guide
00:11:18 and once it opens, just search for the design guidelines, and it will open the entire Design Guidelines section.
00:11:25 Here, it has multiple sections, like how you can work with this.
00:11:28 You can learn the basics. There's also a lot of guidelines
00:11:31 to design enterprise-grade integration flows, like keep readability in mind,
00:11:36 and how to implement different enterprise integration patterns. And in the sample integration flows, across each category,
00:11:44 there's an integration package name. Let's move to the Enterprise Integration Patterns package.
00:11:53 There's a Postman collection; you can download that to your computer and then go to your Discover tab,
00:12:02 where you can discover the integrations, search for the same package,
00:12:06 that is Enterprise Integration Patterns. So once you find that,
00:12:10 just simply copy it into your workspace. Once it is copied, navigate to your Design tab
00:12:20 and then you can see it has 33 artifacts. Here, we have to deploy the generic receiver,
00:12:27 and on the quality of service, let's open one scenario. Here if you see, we get an order ID
00:12:34 and based on this order ID, we are checking the duplicate.
00:12:37 In the idempotent process call, there is a Camel header,
00:12:42 using which also you can check whether it's a duplicate message or not
00:12:47 and based on that, you can take a decision. Let's also deploy this sample integration flow.
00:12:56 And once you deploy, let's wait for it to get started. That means we can simply trigger this
00:13:06 from the downloaded Postman collection. So extract the Postman collection
00:13:10 and then import into your Postman and once you import,
00:13:15 you can see that the collection contains multiple endpoints
00:13:21 for your multiple sample integration flows. So let's open Scenario 6A.
00:13:26 There is a sample body, a payload, and there are also some headers
00:13:30 which are passed here, like credentials. Let's create one environment file,
00:13:35 where we give mainly three variables: the host - this is your Cloud Integration run-time host

00:13:43 and then a username. It could be your client ID
00:13:46 and then your password - it could be your client secret. And once you did that,
00:13:53 also we need to create a user credential with the name OWN so that we can call the generic receiver
00:13:59 from the sample integration flow. Once all the settings have been done,
00:14:02 just trigger the endpoint. So you can see that the order has been created successfully.
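For reference, the environment file created a moment ago might look like the following; the host, client ID, and client secret are placeholders, and the exact export schema can differ between Postman versions:

    {
      "name": "cloud-integration-demo",
      "values": [
        { "key": "host",     "value": "<tenant>.it-cpi.cfapps.<region>.hana.ondemand.com", "enabled": true },
        { "key": "username", "value": "<client-id>",     "enabled": true },
        { "key": "password", "value": "<client-secret>", "enabled": true }
      ]
    }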
00:14:07 Both the iFlows got completed. And if we now try to send the same purchase order again,
00:14:14 it tells us that it's a duplicate because of the idempotent process call.
00:14:19 Similarly, now we can check another design guideline: how we can relax dependencies using JMS queues.
00:14:32 Copy the package, go to your Design tab and similarly, just deploy the generic receiver
00:14:46 and JMS inbound and JMS outbound iFlows. So once you deploy that, let's open the iFlow to see what it is doing.
00:14:56 So it's a simple timer-based flow where it is storing a message in the JMS
00:15:00 and there's another flow which is reading from the same JMS and processing and storing and calling the other ProcessDirect sample generic receiver.
00:15:09 So because it's a timer-based flow, automatically both iFlows got completed,
00:15:13 and also the corresponding data store entries have been made as well. With this, we come to the end of this session.
00:15:21 In summary, remember that designing an enterprise-grade integration flow
00:15:25 is not a one-time activity. It's a practice that requires attention to key qualities
00:15:31 for safeguarding mission-critical processes. We learned some design guideline patterns today,

00:15:36 like how to implement exactly-once scenarios with the help of message ID mapping and idempotent process calls.
00:15:43 You can also asynchronously decouple your integration interfaces with a retry pattern
00:15:48 using the message queues or data stores. For better re-usability and ease of maintenance,
00:15:54 use script collections and message mapping artifacts. Many adapters also provide retry options
00:16:01 that help you with temporary connectivity issues. You can learn more about these
00:16:06 and keep yourselves up to date by regularly referring to the design guidelines.
00:16:11 Let's keep these principles in mind as you move forward with your integration projects.
00:16:16 Thank you for watching. Have a great day ahead.

Week 4 Unit 2

00:00:06 Hello everyone, and welcome to this unit about simplifying non-SAP connectivity.
00:00:12 As mentioned in the earlier units, SAP Integration Suite is our strategic integration platform
00:00:19 for the integration of SAP, as well as non-SAP applications. And as you can see here, on the right-hand side of this picture,
00:00:28 SAP Integration Suite is clearly positioned, and also used by our customers, for the integration of all kinds
00:00:34 of SAP and non-SAP systems. Like cloud applications, on-premise applications,
00:00:40 public authorities, e-documents scenarios, business networks, social media channels, business partner integration,
00:00:48 and all or many of these systems are non-SAP applications. And for this, we need to have a good set, a strong set
00:00:55 of adapters, as well as prepackaged content, to further simplify the integration
00:01:01 with non-SAP applications. So the focus of this session is on the capabilities
00:01:08 of SAP Integration Suite, namely adapters, as well as content that we provide to ease the integration
00:01:17 for SAP and non-SAP applications. And on this slide you can see the long list
00:01:24 of adapters that we provide out of the box with SAP Integration Suite, more than 250 adapters,

00:01:30 which are provided to you, which you can use with SAP Integration Suite.
00:01:35 Technical adapters like HTTP, HTTPS adapter, FTP, FTPS adapter, mail adapter,
00:01:41 OData - we support version two and version four - AMQP adapter, AS2, AS4, Kafka, RabbitMQ adapter,
00:01:48 as well as many application adapters. Integration with our own SAP applications,
00:01:54 like Ariba adapter, MDI adapters, SuccessFactors adapters. But also many adapters for
00:02:01 non-SAP integrations, like AWS with the flavors of S3, SQS, SNS, SWF, Salesforce adapter,

00:02:10 Microsoft Dynamics, Microsoft Azure storage adapter, with four flavors for Blob, File, Queue and Table,
00:02:16 SharePoint adapter, Dropbox, ServiceNow, Workday adapter. So the list of applications is long.
00:02:22 Even here, you can see even more adapters are available for SAP Integration Suite, organized according to application areas
00:02:30 like CRM, eCommerce, ERP applications. And of course, partners also develop adapters
00:02:37 for SAP Integration Suite. You can see an example list.
00:02:40 Some good news is that you don't have to use Eclipse anymore to build adapters for SAP Integration Suite.
00:02:46 So you can use any development environment to create adapters and deploy adapters
00:02:53 for SAP Integration Suite. And additional good news is,
00:02:58 for the deployment of the adapters, you can directly use the SAP Integration Suite design time environment,
00:03:05 to also deploy the adapters to your tenant. And previously, you might remember
00:03:13 from third-party adapters, from our OEM providers, you had to download the adapters
00:03:20 from the Service Marketplace and to deploy them separately via an Eclipse tooling environment to the tenant.
00:03:26 This is not required anymore. You can directly leverage and consume the adapters

00:03:32 from the design time environment of SAP Integration Suite, and deploy them and maintain them
00:03:38 from the web tooling environment of SAP Integration Suite. So let's now take a look at how SAP Integration Suite
00:03:46 supports prepackaged integration content, as well as adapters, out of the box.
00:03:54 So we can search for prepackaged integration content, first via SAP Integration Suite directly,

00:04:01 namely in the discover area. Here you will be able to see the thousands
00:04:08 of prepackaged integration scenarios. So we navigate to the Integrations Discover section
00:04:14 on our SAP Integration Suite tenant. You can see here, yeah, thousands, 3000,
00:04:20 plus prepackaged integration scenarios, as well as adapters delivered out of the box
00:04:26 with SAP Integration Suite. And another option is also to search for prepackaged content
00:04:34 and adapters on the SAP Business Accelerator Hub. So here we can click on the Discover Integrations section
00:04:43 here, and then we can see, when we scroll down here, on the SAP Business Accelerator Hub,

00:04:49 yeah, all the SAP and non-SAP integration scenarios. And the list of non-SAP integrations
00:04:58 that are supported out of the box with SAP Integration Suite is also long.
00:05:01 And in our example, let's search now for ServiceNow related integration content.
00:05:05 And we can see, even here, we have content for service integration with SAP and non-SAP integration scenarios.
00:05:11 So let's take a look here at SAP S/4HANA integration with ServiceNow.
00:05:15 We can find, yeah, this integration package here. And this integration package comes with two integration flows,
00:05:23 one for the replication of cost center, the other one for the replication of exchange rate.
00:05:31 So, besides the integration artifacts, documentation is delivered,
00:05:37 which describes clearly how to configure this integration scenario, what to do in the backend applications,
00:05:45 namely SAP S/4HANA, ServiceNow, as well as on the SAP Integration Suite tenant.
00:05:52 So for example, get the technical user in the SAP S/4HANA system,
00:05:56 do the SOA Manager configuration. And yeah, once we want to use this package,
00:06:02 we can directly copy it from the SAP Business Accelerator Hub to our own workspace.
00:06:08 We just have to initially configure which tenant we would like to use for this copy function.
00:06:13 And then we can directly copy this integration package into our own workspace,
00:06:19 the design area of our SAP Integration Suite. And we can see here now in the Design area,
00:06:24 this integration package has been copied. Before we get started, only one more activity.
00:06:30 Namely, we search now here directly in the Discover area, integrations for the adapter,
00:06:38 for the ServiceNow adapter. Again, here in the description we can see, for example,
00:06:41 which authentication methods are supported via the ServiceNow adapter,
00:06:44 and which formats it supports, XML and JSON for example. So there are multiple benefits to using a specialized adapter
00:06:52 for the integration with ServiceNow. We can copy it also here to the Design area.
00:06:59 And once we go back to the Design area, we can find the adapter,
00:07:03 and there's only one activity: namely, we now deploy this adapter, as a one-time activity,
00:07:08 to our own tenant, without Eclipse, directly here, from the Design area of SAP Integration Suite.
00:07:14 And now we are ready to go. We go to the integration package.

00:07:19 For the integration scenario, we can see again here, our two integration artifacts
00:07:26 for the replication of cost center and exchange rate. We would like to use the one for cost center replication,
00:07:34 which is a scheduled integration flow. We can see it, so it's a timer-based event.
00:07:39 So we need to configure the timer: how often and in which interval the data shall be picked,
00:07:45 that is, when new cost center data shall be picked from the S/4HANA system.
00:07:48 And then data will be mapped to the cost center structure of the ServiceNow application.
00:07:53 And then we can insert or update the cost center in ServiceNow.
00:07:56 And also out of the box, an error handling integration flow is delivered for error handling purposes,
00:08:02 in case there's an error with the integration flow. For example, an administrator can be informed via email.
00:08:09 Yeah, so let's configure this scenario. So we just need to configure here the scheduler, basically.
00:08:14 So in which time interval we would like to pull new cost center information
00:08:18 from the SAP S/4HANA system. And then we need to configure three receivers.
00:08:24 One is the ServiceNow receiver with the endpoint, and the ServiceNow adapter.
00:08:33 And secondly, we need to configure also the integration with the S/4HANA cloud system.
00:08:39 So also here endpoint information, the credentials, the authentication method that we would like
00:08:45 to use, to integrate with the SAP S/4HANA system. And last, but not least, we also need to configure the mail adapter.
00:08:55 Like with the mail server, we need to provide here the address, as well as
00:09:02 the to and from email addresses for the error iFlow.
00:09:08 And once we have configured the entire integration flow, we can deploy the integration flow,
00:09:14 and once it has been successfully deployed, it can be used. So what we need to do now is, since it's a triggered,
00:09:21 a timer-based scenario, we need to now go into the SAP S/4HANA system and create a cost center.
00:09:29 So in our case, we create a new cost center now, here via the transaction KS01.
00:09:36 So we provide here just the cost center, as well as a date
00:09:45 from when it should be valid. This is the initial screen.
00:09:51 And then on the second screen, we can now provide further information
00:09:54 on the cost center, like a name, the description, as well as additional information.
00:10:02 For example, the responsible, and the hierarchy, and the business area.
00:10:11 And once we have saved the data, the cost center will be created.
00:10:22 You can see that the cost center has been created. And now, after the cost center has been created successfully,
00:10:31 there should also be a message being triggered, once the next scheduled time is due, when
00:10:38 this iFlow is being scheduled. So let's go to the monitoring area.
00:10:42 We can see here already one successfully processed message. So this will be our message after the cost center
00:10:49 has been created in the SAP S/4HANA system. So we can open this integration flow.
00:10:57 Now it has just been executed. We can also go to the trace here,
00:11:05 so that we can see all the pipeline steps. On the left-hand side, we can see all the pipeline
steps.
00:11:11 And on the right-hand side, we can nicely see via which path the iFlow has been executed.

00:11:16 So cost center has been pulled, data has been pulled from the SAP S/4HANA system, and then it will be mapped
00:11:23 to the structure, cost center structure, of the ServiceNow application.
00:11:27 And then, depending on whether the cost center already exists, it will be updated, or in case the cost center
00:11:34 does not exist yet, it will be inserted. A new cost center will be inserted
00:11:38 in the ServiceNow application. So with this demo, we could hopefully show how
00:11:47 SAP Integration Suite supports the integration with prepackaged content and adapters for the integration
00:11:53 of SAP, as well as non-SAP applications. Like in this case, a ServiceNow application.
00:12:00 With this, I hope I could show to you how simple it is to integrate non-SAP applications
00:12:07 using the prepackaged content and the rich set of adapters that are available for SAP
Integration Suite.

Week 4 Unit 3

00:00:05 - Hello everyone, welcome again to yet another interesting and essential topic.
00:00:10 In week four, unit three, we will learn about different monitoring options
00:00:15 for the Cloud Integration capability of SAP Integration Suite,
00:00:19 and how you can effectively operate a Cloud Integration tenant
00:00:23 and Integration scenarios. Monitoring is essential
00:00:29 for ensuring the smooth and reliable operation of Integration processes within a cloud-based environment.
00:00:36 It plays a crucial role in every heterogeneous landscape, particularly when an integration middleware is in place.
00:00:44 Cloud Integration provides various built-in monitoring functionalities
00:00:48 in the standard monitoring dashboard that cater to the needs of Integration developers and technical users.
00:00:56 However, these monitoring options are often perceived as technical and may have limited relevance
00:01:02 for business users, who prioritize functional aspects and end-to-end monitoring.
00:01:08 To bridge the gap and cater to the diverse needs of various users and use cases, SAP offers multiple monitoring
00:01:16 possibilities, like centralized monitoring and alerting using SAP Cloud ALM,
00:01:22 archiving data to a customer-owned archiving system, logging monitoring data to an external persistent solution,
00:01:28 and so on. We will learn about all these options
00:01:32 in detail in the subsequent slides. The standard monitoring dashboard of Cloud Integration
00:01:40 provides essential monitoring features, using which you can monitor messages
00:01:45 and integration artifacts at run time. The monitoring tab consists of various sections,
00:01:52 and each section contains tiles, which show filter settings
00:01:56 and the number of messages or artifacts that correspond to the filter settings.
00:02:02 The tiles in the Monitor Message Processing section show the number and status of processed messages
00:02:07 within a specific time window, allowing users to view details of individual messages.
00:02:14 The Manage Integration Content section provides information on deployed integration artifacts,

00:02:20 including their deployment status and corresponding endpoints.


00:02:25 The Security section allows you to manage certain tasks related to the setup of secure connections
00:02:30 between your tenant and remote systems, whereas Manage Stores monitors temporary data storages,
00:02:37 like data stores, variables, queues, and number range objects.
00:02:43 Access logs provide access to the system log files, to analyze errors that occurred
00:02:48 during inbound HTTP processing. Finally, in Manage Locks, you can manage message locks,

00:02:55 as well as design-time artifact lock entries. The standard monitoring dashboard also provides
00:03:03 enhanced filtering options for processed messages, in addition to the standard filters.
00:03:09 The standard filter options in Cloud Integration monitoring include time, status, and artifact IDs.

00:03:16 Additionally, users can also set additional filter criteria

00:03:19 using additional properties that can be written to message processing logs at run time,
00:03:24 using a Content Modifier step. By this, the Integration developer can add business context
00:03:31 to the logs. These fields include sender, receiver, custom status,
00:03:36 application message ID, and application message type.
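In a Content Modifier step, these fields are written as exchange properties. A sketch of such a property table - the reserved names follow the documented convention, while the values are illustrative:

    Action  Name               Source Type  Source Value
    Create  SAP_Sender         Constant     WebShop
    Create  SAP_Receiver       Constant     ERP_Backend
    Create  SAP_ApplicationID  XPath        /Order/OrderNumber
    Create  SAP_MessageType    Constant     PurchaseOrder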
00:03:44 These filters allow users to narrow down their search and focus on specific messages.
00:03:50 Moreover, users can utilize a script step to set custom header properties, which are name/value pairs.
00:03:56 This allows the inclusion of business or payload-related information in the message processing log,
00:04:01 to search for messages based on a specific business context.
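A minimal Groovy sketch of such a script step; addCustomHeaderProperty is the documented scripting API, while the property name and the header it reads are illustrative:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def log = messageLogFactory.getMessageLog(message)
        // The name/value pair becomes a searchable custom header property
        // of the message processing log.
        log?.addCustomHeaderProperty("PurchaseOrderNumber",
                String.valueOf(message.getHeaders().get("poNumber")))
        return message
    }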
00:04:10 Certain industries and jurisdictions have regulations that mandate the storage of specific data for an extended duration, like seven years,
00:04:14 whereas Cloud Integration monitoring data are typically stored for 30 days.
00:04:20 For such use cases, the Cloud Integration archiving feature can be used.
00:04:24 The archiving feature allows customers to store message processing logs
00:04:28 in a customer-owned archiving system, such as a content management system, also known as a CMS.
00:04:35 The external archiving system must support the CMIS API standard version 1.1.
00:04:42 To use the archiving feature, you need to configure a destination
00:04:45 to your archiving system, activate archiving on the tenant
00:04:49 using the activate archiving configuration on an OData API, and finally enable configuration
00:04:55 of the archiving settings in the user interface. You can archive data for persist step content,
00:05:02 adapter payloads, and attachments. The system job also records
00:05:06 specific key performance indicators, KPIs, of the archiving runs.
00:05:11 You can use these indicators, for example, to check for recurring errors
00:05:15 or potential performance problems. You can use the official OData API to query the KPIs.
00:05:23 SAP Cloud Application Lifecycle Management, also known as SAP Cloud ALM, is a comprehensive solution
00:05:29 that provides central end-to-end business and IT monitoring for hybrid SAP-centric landscapes

00:05:35 that bring the technical flow closer to the business context.
00:05:40 SAP Cloud ALM offers integration and exception monitoring for integration scenarios,
00:05:45 ensuring end-to-end visibility across different systems. It enables the correlation of messages,

00:05:51 providing a holistic view of the data flow. It also provides the capability to report failed messages
00:05:58 and deployment exceptions. Users can also track messages based on business attributes,
00:06:04 with automated alerting for proactive monitoring. Additionally, SAP Cloud ALM
00:06:10 enables context-sensitive jump-ins to standard monitoring tools,
00:06:14 streamlining issue resolution. The technical health monitoring capabilities
00:06:19 of SAP Cloud ALM track the overall Cloud Integration tenant health
00:06:23 based on the JMS resources and validity of the certificates and KPIs
00:06:28 with a possibility to customize the metric threshold as well.
00:06:35 In addition to the built-in standard monitoring features of Cloud Integration,
00:06:39 there may be scenarios where you require external logging capabilities
00:06:43 to further enhance your monitoring and troubleshooting processes.
00:06:47 External logging allows customers to push message processing logs in real time

00:06:51 to external customer-owned persistent solutions, like Splunk, regardless of database storage availability.
00:07:00 By capturing and analyzing log data in an external logging system,
00:07:04 you can meet compliance requirements, use it for high-throughput scenarios
00:07:08 and high-volume queries, which are otherwise limited by the system's database.
00:07:14 And it is also suitable for central observability by consolidating logs of multiple tenants
00:07:21 and the possibility to create advanced analytics dashboards. To activate external logging
00:07:27 on your Cloud Integration tenant, you will need to follow a few simple steps.
00:07:32 First, set up Splunk as your designated external logging destination.
00:07:37 As a tenant admin, with the role ExternalLogging.Activate,
00:07:40 you have the necessary permissions to proceed. Then, utilize the activateExternalLogging function
00:07:47 available in the official OData API to initiate the activation process.
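As an illustration, the activation call might look like the following request; the function name comes from this unit, while the /api/v1 base path and host are assumptions to be verified against the API documentation:

    POST https://<tenant-management-host>/api/v1/activateExternalLogging
    Authorization: Bearer <token of a user with the ExternalLogging.Activate role>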
00:07:53 The log level for integration flows can be independently configured for both standard monitoring
00:07:58 and the external logging system. The OData APIs of SAP Cloud Integration provide a set
00:08:06 of standardized interfaces that allow developers to interact with and manage various aspects
00:08:11 of the integration platform programmatically. These OData APIs are documented
00:08:17 on the SAP Business Accelerator Hub and provide a powerful and flexible way to interact
00:08:22 with SAP Cloud Integration, enabling developers to build custom integrations, automate processes,
00:08:29 and extend the capabilities of the platform. They provide a wide range of operations, and using these,
00:08:36 you can achieve multiple use cases, like customized monitoring, alerting on failed integrations
00:08:42 or expiring keys, notification of high JMS resource usage, CI/CD pipelines, and so on.
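For example, a custom monitor that alerts on failed integrations could periodically poll the MessageProcessingLogs entity of that API; the query below is a sketch in which the host and date are placeholders:

    GET https://<tenant-management-host>/api/v1/MessageProcessingLogs
        ?$filter=Status eq 'FAILED' and LogEnd gt datetime'2024-01-01T00:00:00'
        &$orderby=LogEnd desc&$top=50&$format=json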


00:08:47 The topic of CI/CD, and DevOps in general, is very important
00:08:56 when you want to create or maintain integration content, be it integration artifacts on Cloud Integration,
00:09:02 or API proxies and other elements on API Management. Due to variations in landscape and processes
00:09:09 among customers, the CI/CD pipelines may also vary. Rather than endorsing a single approach, our focus is
00:09:16 on enabling multiple methods to implement DevOps. The simplest and most basic way is to use the OData APIs
00:09:24 for Cloud Integration and API Management that are available on the SAP Business Accelerator Hub.
00:09:31 We do also provide ready-to-use pipelines for Jenkins-Git infrastructure to accommodate
00:09:37 individuals who may not be familiar or comfortable with scripting.
00:09:42 We provide pre-built scripts that require minimal configuration changes for your environment.

00:09:48 Our pipelines serve as valuable examples for experts to exchange or modify as needed.
00:09:55 You can access these resources through the SAP Business Accelerator Hub community.
00:10:01 Alongside the pre-built pipeline scripts, we have made contributions to the Piper library
00:10:06 by adding commands for Cloud Integration and API Management capabilities.
00:10:10 SAP Continuous Integration and Delivery, a low-code/no-code service on SAP BTP
00:10:17 also provides a predefined pipeline for SAP Integration Suite.
00:10:25 SAP provides tooling for CI/CD via Project "Piper" to simplify and streamline the setup

00:10:31 and execution of CI/CD pipelines. It focuses on promoting best practices
00:10:37 and reducing the complexity associated with setting up CI/CD pipelines.
00:10:41 As I mentioned earlier, the Cloud Integration and API Management capabilities of SAP Integration Suite
00:10:47 offer many "Piper" steps, leveraging the underlying OData APIs.
00:10:55 The delayed software update feature of Cloud Integration provides a self-service option
00:11:00 that allows a one-week delay in production tenant updates and increments,
00:11:04 effectively mitigating regression risk for custom Integration flows.
00:11:08 By delaying the software update in the production tenant and consuming the regular updates in the test tenant,
00:11:14 organizations can ensure a smooth transition and minimize disruptions.
00:11:19 Tenant admins can enable this in their productive tenants by navigating to the Integration Settings tab.
00:11:27 It is important to note that this delay is applicable only for monthly increments
00:11:31 and specifically for Cloud Integration run time only. To facilitate easy comparison of versions between test
00:11:39 and production tenants, the About dialogue in the tenant displays the Cloud Integration run-time version.
00:11:48 Now let's jump into the demo, where we see how you can activate the delayed software update feature.
00:11:54 And then we also create a CI/CD Jenkins pipeline using the "Piper" commands for Cloud Integration.
00:12:01 I'm in the landing page of SAP Integration Suite. Let's navigate to the Integration Settings tab.
00:12:05 There you see a Software Updates tab. Just click on Edit, toggle the delayed software update
00:12:11 to enable it in your productive tenants, and save it.
00:12:16 Now we are in another tenant where this setting has been enabled,
00:12:21 and in this productive tenant you can see it also shows you the next date
00:12:27 when this tenant would be updated. And in the About dialogue,
00:12:30 you can also now see the Cloud Integration run time and the design time, different versions,
00:12:34 as this is only applicable for Cloud Integration run times. So you see this is on a lower version than
00:12:39 the Cloud Integration design time, and it also shows you the date
00:12:43 on which this would be updated. Now, if we see a test tenant
00:12:50 where this setting has not been enabled - as you can see here, the setting has not been enabled -
00:12:59 and if I go to the About dialogue, I should see both the Cloud Integration design time
00:13:05 and the run-time versions are the same. Now, we will see how we can quickly execute a CI/CD pipeline
00:13:11 for Cloud Integration using the "Piper" command. I'm on the Project "Piper" landing page.
00:13:15 Here you can see the library steps related to API Management, like apiProxyList, apiProviderDownload,
00:13:21 and the library steps for Cloud Integration. And there's also a detailed blogpost
00:13:27 from my colleague Mayur, where you can also learn how you can quickly create a CI/CD pipeline
00:13:32 using these "Piper" commands. So I am in a GitHub repository,
00:13:37 where I have already configured a Jenkinsfile. This is a Groovy script where I have defined the stages.
00:13:43 And what this will do, it just utilizes three "Piper" commands for Cloud Integration.

00:13:47 One is to upload the integration flow from this Git repository to the tenant.
00:13:51 Second, it will deploy, and then it checks the MPL status. And this is a sample flow, which it will upload.
00:13:58 And this is a config.yml file where for each library step
00:14:02 we have defined the attributes, like the credential ID
00:14:07 the iFlow ID, the package ID, and this credential is already created in the Jenkins server.
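A minimal sketch of such a Jenkinsfile and config.yml, assuming the Project "Piper" library steps integrationArtifactUpload, integrationArtifactDeploy, and integrationArtifactGetMplStatus; the library reference, credential ID, artifact IDs, and file path are illustrative:

    // Jenkinsfile (Groovy) - three stages mirroring the demo pipeline
    @Library('piper-lib-os') _

    node {
        stage('Upload')     { integrationArtifactUpload script: this }
        stage('Deploy')     { integrationArtifactDeploy script: this }
        stage('MPL Status') { integrationArtifactGetMplStatus script: this }
    }

    # .pipeline/config.yml - per-step attributes read by the "Piper" steps
    steps:
      integrationArtifactUpload:
        cpiApiServiceKeyCredentialsId: 'cpi-service-key'
        integrationFlowId: 'SampleFlow'
        packageId: 'CICD'
        filePath: 'SampleFlow.zip'
      integrationArtifactDeploy:
        cpiApiServiceKeyCredentialsId: 'cpi-service-key'
        integrationFlowId: 'SampleFlow'
      integrationArtifactGetMplStatus:
        cpiApiServiceKeyCredentialsId: 'cpi-service-key'
        integrationFlowId: 'SampleFlow'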
00:14:13 So see, there's a package ID CI/CD. So in this target system,
00:14:18 there's no package as such. So let me quickly create a package CI/CD.
00:14:23 I put all the details: name, description, version, vendor, just save it.
00:14:28 So now let's go to the Jenkins server. I create a new pipeline.
00:14:32 Just give it a name, openSAP Piper demo, and create it as type pipeline.
00:14:40 Again, provide the description. And as you have seen, right,
00:14:45 we have created a Jenkins pipeline on source code management. So I use Pipeline script from SCM
00:14:50 and it is specifically Git. I provide the Git repository URL,
00:14:54 the branch name, and that's all. So now,
00:15:01 if you see here, I'm going to the monitoring page,
00:15:06 where, if you see, we have not deployed this. Now I'll trigger this pipeline.
00:15:13 And as you remember, it'll upload from the Git to the Integration Suite tenant,
00:15:18 Cloud Integration specifically, and then deploy and run.
00:15:22 See: This got completed, and the MPL status is also Completed.
00:15:26 And if you see in the monitoring, see this iFlow got completed
00:15:31 and this is in a completed state. In summary, remember that Cloud Integration offers a range of possibilities
00:15:41 when it comes to monitoring. SAP Cloud ALM enables end-to-end integration monitoring,
00:15:46 provides alerting capabilities, and tracking of messages using business attributes.
00:15:51 It also tracks the overall health of the Cloud Integration tenants. External logging allows you to push Cloud Integration logs
00:15:58 to your own external persistent solutions, like Splunk. This enhances your observability,
00:16:05 and analysis capabilities. To embrace a DevOps approach with SAP Integration Suite,
00:16:11 you can leverage pipeline scripts or explore Project "Piper" to kickstart your automation journey.
00:16:18 Additionally, if you need more control over your production updates, you have the flexibility to delay them by a week,
00:16:25 through a convenient self-service option. And that concludes this session,
00:16:31 empowering you with valuable insights on how you can effectively monitor and operate your Integration scenarios.
00:16:38 Thank you for watching. Have a great day ahead.

Week 4 Unit 4

00:00:06 Welcome to our new unit, Generate Business Partner Interfaces.


00:00:11 My name is Marco Ertel and I work in the product management of the SAP Integration Suite.
00:00:17 Let's have a look at how B2B integrations are different from A2A integrations. In B2B integrations, we have several challenges
00:00:31 where you see that B2B integrations are different from A2A integrations. One is complexity.
00:00:40 Complexity because we have a lot of systems you have to deal with
00:00:45 and a lot of communication in between. Standardization.
00:00:50 At first glance, it makes it easier to integrate because you have standardized interfaces, standardized messages.
00:00:58 But on the other side, the standardization makes the interfaces also kind of complex
00:01:02 because you have to fulfill several requirements, and you also have to note
00:01:07 that a lot of the fields of these standardized interfaces are optional, and therefore these messages might be different from partner to partner.
00:01:18 Security, because you're dealing with external companies, your trading partners,
00:01:23 and therefore you're sending messages over a public network. Therefore, you have to secure your messages.
00:01:30 And also, different trading partners have different requirements for security. Partner management in general is important
00:01:40 because if you have a lot of different trading partners, let's say not only hundreds, but thousands or several thousands,
00:01:47 you have to store a lot of information and you have to keep in mind that all these trading partners might also have update cycles, and so on.
00:01:57 And this has to be maintained also in your systems. Scalability is another important topic
00:02:04 because in the B2B world, a lot of messages are flowing. And as normally the trading partner numbers are increasing,
00:02:13 the number of messages is also increasing. Also from a cost perspective,
00:02:19 dealing with external companies makes it more complex because these external companies also have different update cycles than your own company,
00:02:28 and you also need expertise to deal with all these things. Let me give you an example
00:02:37 of the advantages but also of the problems of standards and customization. Here we have as an example an IDoc purchase order.
00:02:47 An IDoc purchase order in total has around 720 data elements, 60 code lists with around 4,000 to 5,000 values inside,
00:02:57 but most often you need less than 5% of all of these possibilities. But which ones?
00:03:05 You need to know which of these elements are really required, which are optional, what is the exact meaning of each of these elements,
00:03:14 and what are the constraints and conditions between these different elements. During our research, we found that there are several influencing factors.
00:03:24 We call these influencing factors business context. This consists of the different industries,
00:03:31 like the chemical industry, automotive industry, but also the different products and services you're offering,
00:03:37 and of course, the different countries. Because of the different laws in different countries,
00:03:43 you also have to fulfill or you have to store different data, sometimes even in different formats for these different countries.
00:03:56 If you have a look at the project phases and also at the costs and duration of such a B2B integration project,

00:04:04 you see these different blocks. In the first big block - the business requirements,
00:04:10 the message implementation guidelines, and the mapping guidelines - most often, business domain experts are working in Office tools.
00:04:18 So working in Excel, in Word, consulting the standards in different PDF documents, and so on.
00:04:26 In our example, this phase took about four days. Please keep in mind that these numbers are examples from projects
00:04:34 and your numbers might differ from these. In the next phase, the technical implementation phase,
00:04:41 an integration expert is working in a technical system, for example, in Cloud Integration,
00:04:47 to implement what has been defined in the phase before. That might take one day or so.
00:04:54 After this, testing and correction takes place. Here, very often, external tools are used,
00:05:00 and that might take around two days because these structures are very often lengthy and complex,
00:05:06 and therefore need thorough testing. And very often, something occurs.
00:05:11 This means implementation has to be changed. But here we already see the first problem.
00:05:17 You change the implementation, but the documentation which has been written in the phase before
00:05:24 is very often lost. This means if a project has been done,
00:05:30 maybe responsibilities are changing and so on, but your documentation and your implementation are running out of sync.
00:05:39 This makes maintenance over years much harder and costs a lot of time and effort to later find out what is really implemented,
00:05:46 why is it different from the documentation, and so on. And the last phase, the deployment, that's relatively fast.
00:05:53 You deploy your implemented stuff on Cloud Integration, then you're good to go. But you already see that because of these different roles,
00:06:02 the business domain expert, integration experts, and so on, you have some breaks in between.
00:06:07 And because of changing the different tools, like Office tools, like Cloud Integration, external testing tools,
00:06:15 you have a lot of media breaks, and changes are not always reflected between these different systems.
00:06:26 Therefore, with the SAP Integration Suite, we are offering several tools to make this better.
00:06:32 One part of this, one capability, is the Integration Advisor. The Integration Advisor consists of four pillars,
00:06:40 the library of type systems, the message implementation guideline editors,
00:06:45 the mapping guideline editors, and the artifacts. In the library of type systems,
00:06:51 we are storing the type systems like X12, like EDIFACT, but also SAP standards like IDocs in a kind of normalized manner,
00:07:01 this means every time it looks the same. In the message implementation guideline editor,
00:07:08 you're then able to create your structures, your interfaces. In the mapping guideline editor,
00:07:14 you're then creating the mappings from source to target. With all these things, you're then generating the so-called artifacts,
00:07:23 which are on the one side runtime artifacts, which can be used in Cloud Integration,
00:07:27 but also documentation. Let's have a deeper look into these parts.
00:07:35 In the message implementation guideline editor, you first select for which type system
00:07:42 you want to create such a structure, like X12. You get then a normalized structure where all mandatory fields are already marked.

00:07:53 Of course, you can then change it up to your needs. But you also have the possibility to ask our proposal or Knowledge Base
00:08:01 on how such structures normally look in your business context, because this is something you also define
00:08:07 during the generation of such a MIG, message implementation guideline, that, for example, your company
00:08:13 is in the industrial machinery and components industry in the US and you're acting here as a seller.
00:08:19 With that knowledge, the Knowledge Base can give you a proposal of how such structures very often look.
00:08:27 You, of course, have then to refine the structure till it fits your needs. So which fields are necessary?
00:08:33 What are the exact meanings of all of these fields? You can also add semantics by qualification.
00:08:39 This means the DTM field, the date/time field, can be in such a structure several times for several reasons.
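For example, in an X12 850 the same DTM segment can occur with different qualifier codes; the code values and dates below are illustrative for version 4010:

    DTM*002*20240115    (002 = delivery requested)
    DTM*010*20240108    (010 = requested ship)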
00:08:47 This is called then the qualification here. And you also have the possibility,
00:08:52 and we really recommend that you also use that, to add already on that level documentation,
00:08:58 because with that, your documentation and your implementation stay in sync,
00:09:02 and it makes maintenance in the future much easier. After you have created your MIG for
source and target,
00:09:09 for example, an S/4 IDoc orders 05, and an X12 850 message as a purchase order,
00:09:18 you are then going to the mapping guideline editor. Here you see on the left side the source structure,
00:09:24 on the right side the target structure. Of course, you can now create lines
00:09:29 between all these elements to generate your mapping, or you have the possibility also to ask the Knowledge Base to give a proposal
00:09:38 and to select the best proposal for how such mappings in this context look.
00:09:45 You have then here also the possibility to modify these proposals. And you can also add documentation on this level
00:09:55 on why you have mapped A to B here, to have later a much easier way to maintain all these things.
00:10:04 This documentation can then be also exported in different formats, like, for example, PDF,
00:10:09 and also to be sent to your trading partner if necessary and helpful. If you take a complete look at the Integration Advisor,
00:10:20 here you see the different parts. On the one side, the editors for the MIG, MAG, and the artifacts.
00:10:27 On the other side, the knowledge graph with the two lines for the proposal and for the contribution.
00:10:33 Because as already said, if you create a MIG, you select your library, like X12,
00:10:41 you select the message, 850 for purchase order, you select which version, let's say version 4010 in X12.
00:10:49 You define the business context for this structure. You then ask the knowledge graph to give you a proposal
00:10:58 of how this normally looks in your business context. You get a proposal, you refine it, add documentation.
00:11:04 You do this until you think your interface is defined. You have then the optional possibility
00:11:10 to contribute back this knowledge to the knowledge graph. Of course, it is anonymized because this database is not only for your tenant,
00:11:19 but also for other tenants of other customers. After you have done this for the source and target MIG,

00:11:25 you switch to the mapping editor. You select these as source and target.
00:11:30 Again, ask the knowledge graph how such a mapping looks. You refine it and have again an optional step
00:11:36 to contribute this knowledge back to the knowledge graph. After you've done all that,
00:11:41 you can then generate the runtime artifacts, which will be not only the mappings
00:11:46 but also validation steps to make it easier to find
00:11:51 if errors or problems occur in the communication with your trading partner. Of course, here you're also then able to
00:11:58 create the documentation in different formats. With the help of the Integration Advisor, our project phases have now changed.
00:12:09 You see, the media changes or the media breaks are limited because everything runs in the Integration Suite
00:12:15 in the different capabilities, Integration Advisor and Cloud Integration. We also have a kind of simulation in the Integration Advisor,
00:12:23 and also the times which are needed are much shorter than before. Now, let's have a look at how this looks in the system.
00:12:36 Now we're in the Integration Suite. Let's start with creating a MIG.
00:12:41 Therefore, we add a new MIG by selecting the type system. Select the message, in our case an 850 message,
00:12:49 a purchase order in version 4010. Here we have the possibility to upload payload example data.
00:12:56 We skip that in this example. We give this a name and also a direction.
00:13:01 In our case, it can be used for incoming and outgoing messages, and we define now the business context.
00:13:09 Here we have a company which deals in the industrial machinery and components area,
00:13:14 and that company is coming out of the United States. Business process role will here be Seller.
00:13:26 And with that knowledge, we can already create standard X12 850 messages,
00:13:32 where all the mandatory fields are already marked. And we are now asking our Knowledge Base to give us a proposal
00:13:39 on how such a mapping normally looks in the business context. We have here now the confidence intervals
00:13:47 and have the possibility to add the so-called qualification. We will do this at first for the currency,
00:13:52 we will add a currency for the buying party, and can now refine all the fields which are inside
the currency.
00:14:01 We will also do this for another field, for the N1 field, and we will here then have the possibility
to add several qualifiers.
00:14:10 This makes mappings much easier because we have here several instances of these N1
nodes,
00:14:17 which can also be instantiated differently. On the right side, we can then add documentation,
00:14:24 which will later appear also in the documentation which can be exported. After we've saved it,
00:14:48 we then go to the MAG editor because we have now defined a source and a target MIG.
00:14:54 We select now, after adding a new MAG, our source and target MIG. In this example, the
source will be an S/4 IDoc,
00:15:05 and the target will be that X12 message we have defined before. This creates now the editor
where we can add the mapping from source to target.
00:15:16 So on the left, the source message, on the right, the target message.
00:15:20 But again, we will ask our Knowledge Base on how such a mapping normally looks in that area.

00:15:28 Therefore, we go to Proposal, Get Proposal, and get now in the lower part of the screen
00:15:35 all the entries with a confidence interval for how likely it is
00:15:39 that a given source field is mapped to a given field in the target message. We can select these lines here step by step,
00:15:47 or we can also ask the Knowledge Base, what is the best proposal here? We get automatically
a mapping
00:15:55 which is very often not a 100% solution, but a good starting point which can be used.
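Conceptually, the "best proposal" behaves like picking, for every target field, the source field with the highest confidence. The sketch below only illustrates this idea; the real proposal comes from the knowledge graph service, and the field names are plausible examples (IDoc segments versus X12 elements), not demo output:

```python
# Conceptual sketch of "best proposal" selection: for each target field,
# keep the source field with the highest confidence value.
proposals = [
    # (source_field, target_field, confidence)
    ("E1EDK01/CURCY", "CUR02", 0.97),
    ("E1EDK02/BELNR", "BEG03", 0.91),
    ("E1EDK01/CURCY", "BEG03", 0.12),
]

best = {}
for source, target, confidence in proposals:
    if target not in best or confidence > best[target][1]:
        best[target] = (source, confidence)

for target, (source, confidence) in sorted(best.items()):
    print(f"{source} -> {target} (confidence {confidence:.0%})")
```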
00:16:02 We're now uploading some test data to check how our source and target message look.
00:16:10 And we get here therefore an additional column, Simulation Data, where we see that, in this
example,
00:16:15 the date format has been changed from source to target. Of course, you can change as it fits
your needs
00:16:22 and can add here some documentation, which will then appear later also in our exported
documentation.
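The date-format change visible in the Simulation Data column boils down to a transformation like the one below (a minimal sketch; the concrete source and target patterns are assumptions for the example, since the generated mapping handles this for you):

```python
from datetime import datetime

# Minimal sketch of a date-format transformation like the one shown in
# the Simulation Data column. The patterns are assumptions for the example.
def convert_date(value: str,
                 source_fmt: str = "%Y%m%d",          # e.g. an IDoc date, 20230415
                 target_fmt: str = "%y%m%d") -> str:  # e.g. a 6-digit EDI date
    return datetime.strptime(value, source_fmt).strftime(target_fmt)

print(convert_date("20230415"))  # -> 230415
```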
00:16:30 We save it, and create such documentation.
00:16:37 We can not only generate the documentation here, where you see everything
00:16:44 we have added in our MIG and our MAGs, but we can also generate from here
00:16:52 the runtime artifacts which are used for the runtime later on.
00:17:02 With that, you have now seen how easy it is to use the Integration Advisor to generate
business partner interfaces.
00:17:09 And with that, thanks a lot for your attention.

Week 4 Unit 5

00:00:06 Welcome to the next unit on how to manage trading partner configurations with the SAP
Integration Suite.
00:00:13 In the last unit, you've seen how to set up the MIGs and MAGs
00:00:16 using the Integration Advisor. With the Trading Partner Management, we are going one step
further
00:00:23 on how to make it easier to set up B2B integrations. Here you see a kind of standard approach
00:00:32 that we see at several customer sites. Very often we have one or several systems,
00:00:39 either in the cloud or on-prem, like an S/4 system.
00:00:45 And on the other side, we have several other systems, like trading partners,
00:00:50 which expect X12 or EDIFACT messages. And there are very often a lot of these trading partners.
00:00:55 And for all of these, you're creating integration flows using the Cloud Integration as a runtime.

00:01:02 But for most of these integration processes, you need one to several integration flows
00:01:06 to fulfill all the needs of this integration. And we have seen that these integration flows
00:01:12 are very often more or less the same. A message comes in, it is checked, it is mapped,
00:01:19 it's checked again, and then sent out to the trading partner, or vice versa for incoming messages,
00:01:24 more or less the same. This means we have here integration flows
00:01:28 which are very often nearly the same, but the mappings and the validation steps,
00:01:33 and of course the addresses, are slightly different. This was the reason to create the Trading
Partner Management
00:01:40 as an additional capability of the Integration Suite. So you see here how the three capabilities
of the Integration Suite work together.
00:01:50 At the top, the Integration Advisor, the Trading Partner Management,
00:01:54 and at the bottom, the Cloud Integration. You have already seen in the last unit how the Integration Advisor works.
00:02:02 So we set up there the MIGs and MAGs, and from that we can generate runtime artifacts
00:02:08 either directly to use in the runtime, in the Cloud Integration, or it can be then referenced in the
Trading Partner Management.
00:02:16 So this means the Trading Partner Management is here in the middle, getting data from the
Integration Advisor
00:02:22 for the runtime artifacts, for the validations, but also for the mappings.
00:02:27 But it also stores additional data, and we will see on the next slide which data these are.
00:02:33 The Cloud Integration itself is the runtime for all of these things. And for that usage,
00:02:39 we have developed the so-called Generic Integration Flow, which uses the Partner Directory,
00:02:45 which is a database on each Cloud Integration tenant, from where this Generic Integration
Flow gets all the needed data for the runtime.
00:02:56 This means the Generic Integration Flow is, technically speaking, several integration flows.
00:03:02 One is for getting the messages in, one is for doing the mappings and some additional
handling,
00:03:07 and one is for sending the message out. So that flow gets the message,
00:03:13 checks whom that message is from and to whom it should be sent, then asks the Partner Directory database,
00:03:20 goes to the next step, also gets the mapping from the Partner Directory database, and so on.
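Cloud Integration also exposes the Partner Directory through an OData API, which gives an impression of the kind of lookups the Generic Integration Flow performs. The sketch below is illustrative only: the tenant host, credentials, and the partner ID are placeholders, so check your tenant's API documentation before relying on it:

```python
import requests

# Sketch: reading a trading partner's parameters from the Partner Directory
# via the Cloud Integration OData API. Host, credentials, and the partner ID
# (Pid) are placeholders; verify the API details on your own tenant.
TENANT = "https://<your-tmn-host>"
AUTH = ("<user>", "<password>")  # in practice, OAuth is the usual choice

partner_id = "TP_PARTNER_A"  # hypothetical Pid maintained via TPM
resp = requests.get(
    f"{TENANT}/api/v1/StringParameters",
    params={"$filter": f"Pid eq '{partner_id}'", "$format": "json"},
    auth=AUTH,
)
resp.raise_for_status()
for entry in resp.json()["d"]["results"]:
    print(entry["Id"], "=", entry["Value"])
```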

00:03:27 This Partner Directory database is then filled with data from the Trading Partner Management
00:03:32 where you have to maintain four new entities. You will see in the next slide
00:03:37 which entities we have for maintaining these values. Additionally, we have also added in the
Cloud Integration
00:03:44 the so-called B2B Monitoring because we see that the monitoring in the B2B world is slightly
different
00:03:51 from the technical monitoring of Cloud Integration. You will see more details on this in the next unit.
00:03:58 But let's have a deeper look at the entities you have to maintain in the Trading Partner
Management.
00:04:06 In the Trading Partner Management, we have added four new entities - the company profile,
the trading partner profiles, so both are profiles.
00:04:15 The agreement templates and the agreements - these are also a little bit similar, as you can see in the pictures.
00:04:23 In the company profile and the trading partner profile, you're storing data which are relevant
00:04:27 for your own company and your trading partners. In the agreement templates and the
agreements,
00:04:32 it's more about how to deal with the process. How is the choreography of this process?
00:04:38 Let's have a deeper look inside. Let's start with the profiles -
00:04:42 either company profile but also trading partner profile because technically they are very similar.

00:04:48 We are storing more or less the same data. The only difference is,
00:04:51 company profile is for your own company, trading partner is for all your trading partners.
00:04:57 So this means you're storing data here, mostly metadata, like addresses, names of responsible
persons,
00:05:04 because that makes it much easier if a problem occurs during a process to directly connect or
contact that person.
00:05:15 But of course, we are storing here also B2B-relevant information, like identifiers of your sender
and receiver.
00:05:24 And you have to know that we have to store here for each of these, for the company itself and
for the trading partner, at least two identifiers
00:05:33 because you have to identify yourself, but also your trading partner, in your own business system, for example an S/4 system,
00:05:42 and also then later on in that interchange, like in the X12 message. These are very often
different.
00:05:51 Next is business system access information on how to access that system, like what is the
URL,
00:05:56 what is the protocol, how to reach that system, is it with basic authentication, is it with
certificates, and so on.
00:06:04 And very often, because of security, we have to use certificates. This means we also
store the configuration for the certificates
00:06:15 in the Trading Partner Management in the profiles. Also, we need to know which kind of
formats and versions
00:06:24 your trading partner and your own company can deal with. This means you can say, okay,
00:06:28 this trading partner is supporting X12 in version 4010, 5010, and so on. You also find there
other information, like the number range configuration,
00:06:39 and which other parameters there are, for example for the interchange handling. Let's switch to
the next big entity, the trading partner agreement template.
00:06:54 The trading partner agreement template can be used for several trading partners. That's the
reason why we call it "template".

00:07:01 At the moment you define the template, you have no idea with whom you will later use it.
00:07:07 This means you define here the choreography, that is, which transactions or interchanges are needed for that process.
00:07:17 There can be one or several. And also, are these one-way or two-way transactions?
00:07:24 This means either a message is sent out and nothing else, or a message comes in, nothing
else,
00:07:29 or two-way, something is sent out and you get something back or vice versa if it's an incoming
message.
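As a mental model, a template's choreography could be sketched like this (illustrative Python only, not the actual TPM entities; the second transaction is a hypothetical example of a two-way exchange):

```python
from dataclasses import dataclass

# Illustrative model of a template choreography; not the TPM data model.
@dataclass
class Transaction:
    name: str
    direction: str   # "outbound" or "inbound"
    two_way: bool    # True if a reply interchange is expected

template_choreography = [
    # One-way: a message is sent out and nothing comes back.
    Transaction(name="Purchase Order (850)", direction="outbound", two_way=False),
    # Two-way (hypothetical example): a message goes out, a reply comes back.
    Transaction(name="Order with response", direction="outbound", two_way=True),
]
```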
00:07:36 So here you already know, of course, your own company data, like which identifier you will have to deal with
00:07:43 and which formats you are using. And because we know our own company information here,
00:07:50 you already know the formats you're using in your company. This means you're already
referencing from the Integration Advisor
00:07:57 the MIG you have defined before. But of course, in that step, you have no idea
00:08:03 with which trading partner you later on will use that template. That's the reason why here, on
the right part, everything is in gray.
00:08:14 In the next step, you're instantiating that template to a trading partner agreement.
00:08:20 This means now you know your trading partner, you have defined all the necessary
information before
00:08:26 in the trading partner profile. You also have defined before in the Integration Advisor the MIG
and also the MAG
00:08:32 so that you can then really deal with that trading partner. And because, coming from the
template,
00:08:37 you know the choreography, this process needs this and this and this transaction, you have
also the possibility to say, okay, with this trading partner,
00:08:45 we want to have one transaction less, or something like this. So we have here the
flexibility.
00:08:51 But everything from the company configuration in that template is carried over into your agreement.
00:09:00 This means you're setting up that agreement, you define with which trading partner, you define
the IDs, and so on.
00:09:09 You define which communication channels you want to use. All of these come from the trading partner profile.
00:09:16 And with that, you have then only one step. You save it, you activate it,
00:09:21 and this extracts all needed information, pushes it to the database, to the Partner Directory
database,
00:09:28 and then your Generic Integration Flow can work with the data and your B2B process can
start.
00:09:35 And with this, let's have a look in the system how to set this up for a new trading partner.
00:09:41 Let's onboard a new trading partner. Therefore, we are going to the B2B Scenarios in the
Design part.
00:09:47 We see the company profile of our own company. We see there's already one trading partner.

00:09:53 We are now adding a new one. We have to give this a name and also a short name.
00:09:59 We can already add more data here, like in which country this trading partner resides, and so on.
00:10:07 We save that, and add now the identifiers.
00:10:10 As said, you need at least two identifiers for each of these, one from the S/4 system, and one
in the interchange,

00:10:19 in our example, the X12 interchange. Here we are defining the identifier from the X12.
00:10:28 In that example, it will be a Dun & Bradstreet (DUNS) number. We save this.
00:10:35 We are now adding the second identifier, which identifies the trading partner in our S/4 system.
00:10:42 We will use here the S/4 on-prem IDoc system. Save this also.
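Conceptually, the two identifiers just maintained for this trading partner sit side by side like this (illustrative values only; the DUNS number and the IDoc partner name are invented):

```python
# Illustrative only: one trading partner, two identifiers, one per context.
trading_partner_identifiers = {
    # Identifier used in the X12 interchange; qualifier "01" commonly
    # denotes a Dun & Bradstreet (DUNS) number. The number is invented.
    "x12_interchange": {"qualifier": "01", "id": "123456789"},
    # Identifier used inside our own S/4 system (IDoc partner); invented name.
    "s4_idoc": {"scheme": "S/4 on-prem IDoc", "id": "TRADINGPARTNER01"},
}
```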
00:10:50 Now we have to define the systems of this trading partner, where we can then also see the purpose,
00:11:00 in our case, it will be development system. And add now which type systems are supported by
this trading partner.
00:11:09 In our example, this trading partner accepts X12 messages in the version 4010, and we are
now defining the communication channel.
00:11:19 In our example, we will use AS2 as a protocol. And the direction here will be that we are
sending out.
00:11:32 This means that trading partner is a receiver. We're adding here the URL of the system of this
trading partner,
00:11:46 and add additional details, like which kinds of authentication are here necessary. Of course,
we need to add more fields
00:11:54 which are mandatory in the AS2 adapter, like the IDs of the own system and of the partner,
00:12:01 the message subject, and the own email address. These are necessary fields for the AS2 communication.
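Taken together, the communication channel covers roughly the following fields (a sketch with placeholder values; the key names do not match the AS2 adapter's real parameter names):

```python
# Sketch of an outbound AS2 channel configuration; all values are placeholders
# and the key names are invented for this example.
as2_channel = {
    "protocol": "AS2",
    "direction": "outbound",                 # the trading partner is the receiver
    "url": "https://partner.example.com/as2",
    "authentication": "client_certificate",  # or basic authentication
    "own_as2_id": "MYCOMPANY-AS2",
    "partner_as2_id": "PARTNER-AS2",
    "message_subject": "X12 850 Purchase Order",
    "own_email": "edi-team@mycompany.example",
}
```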
00:12:13 In the next step, let's have a look at the template. So we see there's one template.
00:12:18 This is the template we will now instantiate. And we see there's already another trading partner
there.
00:12:25 So this has been instantiated one time. Let's add another one.
00:12:29 We select that template we have in that system. We're changing, of course, the name.
00:12:36 And we are selecting now the details of our newly created trading partner, like the name, the
system, the type system,
00:12:43 the version, the identifiers, both in the X12 exchange but also in the S/4.
00:12:51 Save this, and go now to the scenarios.
00:12:54 Here we have a simple one, this is only one transaction, one outgoing transaction, a one-way
transaction.
00:13:01 And on the left side we see the company stuff, which is already predefined from our template,

00:13:06 but we now need to define it for this trading partner. We're selecting now the MIG, which
comes out of the Integration Advisor.
00:13:14 This is the MIG we have defined before. We have here the possibility to enable
00:13:20 also functional acknowledgements and a payload validation. And we are now selecting the
MAG
00:13:27 to do the correct mapping from our source to our target message. Now we save it.
00:13:32 And with the Activate button, we push all the necessary information to the Partner Directory
database.
00:13:40 And now this process can start to run to integrate with our newly onboarded trading partner.
00:13:47 So now you have seen how to onboard a new trading partner, you now know which entities are
necessary to maintain,
00:13:55 and how to push the data from the Trading Partner Management into the Partner Directory
database
00:14:01 to get these messages running and to get the Generic Integration Flow running. Thanks a lot
for your attention.

Week 4 Unit 6

00:00:06 Welcome back to the next unit on how to monitor the business partner integrations.
00:00:14 As you have already seen, B2B integrations are a little bit different
00:00:19 compared to A2A integrations, and you will now see how to monitor them given these challenges.
00:00:28 Let me first start with what the challenges are. The first challenge we have seen in B2B integrations is the data volume.
00:00:41 Because of lots of trading partners, sometimes up to several thousand, the volume and the number of messages is really high, up to enormous.
00:00:53 This means finding the correct messages is a really important factor. The sheer number of systems
00:01:01 is another factor. This means the selection of which system is integrated with which,
00:01:10 and how to find the correct message for this, is another one. Network connectivity, because you're transferring data
00:01:19 over public networks or over VANs (value-added networks). It also means you have to do a good
00:01:26 check on how these networks are working. Data integrity, because you're sending sensitive
data
00:01:34 from your company to your trading partner, or also receiving sensitive data from these
companies.
00:01:41 This means you should have a good way to check whether these data are really acceptable,
00:01:49 to validate them, to check the content, and so on. And timeliness,
00:01:55 because very often these integrations have strict SLAs and reaction times, for example,
by when an acknowledgement must be sent at the latest,
00:02:06 or whether it is already overdue, which might interfere with how your business is working.
00:02:15 The starting point of the monitoring is a comprehensive set of filter possibilities. As said, the data volume
might be quite high,
00:02:26 and normally you are not interested in messages where everything is okay. You're really
interested in specific messages and specific interchanges
00:02:35 with maybe a specific trading partner or in specific error categories.
00:02:40 And therefore, we have added here a lot of filtering possibilities. For example, around the
status, is the message Completed or is it Overdue?
00:02:50 Of course, the creation time, you can here select different time frames when that message was
sent or received.
00:02:59 The error category, so maybe you're only interested in messages where a validation error
happened.
00:03:06 Or of course, you can also select for sender or receiver name, the interchange names, and
much more,
00:03:12 to really have a quick way to find, among all of the messages you received and sent,
00:03:17 the ones you want to have a deeper look inside.
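If you picture the interchange list as a collection of records, the filters behave roughly as below (a conceptual Python sketch; the real monitor is a UI on top of the tenant, not a local list):

```python
# Conceptual sketch of the B2B monitor's filters applied to interchange records.
interchanges = [
    {"status": "COMPLETED", "error_category": None,
     "sender": "MyCompany", "receiver": "PartnerA"},
    {"status": "FAILED", "error_category": "Interchange Validation Failed",
     "sender": "PartnerB", "receiver": "MyCompany"},
]

def matches(record, status=None, error_category=None, sender=None):
    """Return True if the record satisfies every filter that is set."""
    return ((status is None or record["status"] == status)
            and (error_category is None or record["error_category"] == error_category)
            and (sender is None or record["sender"] == sender))

failed = [r for r in interchanges
          if matches(r, error_category="Interchange Validation Failed")]
print(failed)  # only the interchange with the validation error
```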
00:03:23 Once you have found that message, the next step is the B2B details. So at first, you see directly here at
the top
00:03:31 the status of that message, but also a kind of sub-status, like the processing status,
00:03:38 but also the technical and the functional acknowledgement status because these different
statuses might be different and are important for you.

00:03:47 Here we see a kind of boring message because it's completed and everything's green, but it gives you an idea of what details you can find here.
00:03:57 You see here also the information when the message handling started, when it ended, what
kind of transaction type it was,
00:04:04 with whom you have sent that message, also what was the adapter type,
00:04:10 which control number has been used, what kind of format you have used, and much more.
00:04:15 And here on the tabs, you also see the Events tab, the Acknowledgement tab, and the
Payload.
00:04:21 So with the Events tab, maybe you remember the generic integration flow we are using,
00:04:26 which is logically one big integration flow in the Trading Partner Management. But technically,

00:04:33 we use several integration flows, and each of these integration flows sends events,
00:04:38 and you have access to these events here. Under Functional Acknowledgement, you get more details on when it was sent, and so on.
00:04:47 Under the Interchange Payload, you have access to the data which went over the line as the payload.
00:04:57 The events I already talked about, which come from the technical integration flows,
00:05:02 of which there are several in that generic integration flow, can be used to drill down more into the technical monitoring,
00:05:09 that is, the monitoring of Cloud Integration, to see exactly what happened inside there if a problem occurs.
00:05:17 So you have more details here. You also have the normal possibility to set this integration flow to Trace
00:05:25 if you really need to jump into these technical details. The Integration Advisor is not only delivering the mapping to you
00:05:36 as a kind of XSLT; because of the knowledge we have in the MIGs,
00:05:41 the qualifications, the code lists, and much more, we also generate validation material as artifacts.
00:05:48 This validation material can be enabled, as we have seen in the Trading Partner Management, with an additional check mark.
00:05:55 And this can be used here to make it much easier than before to find problems in the
messages.
00:06:02 Like here you see in the overview, in the interchanges, there's the box, Interchange Validation
Failed.
00:06:08 This means something in that message was incorrect. If you now take a deeper look
inside,
00:06:15 you see exactly the Interchange Validation Failed in that second box here. What exactly was
wrong in that example?
00:06:24 For example, a value did not come from a code value list although that field requires one,
00:06:31 or maybe it was too long because there is a length restriction, or something like this.
00:06:37 So this really helps you to find out more about this interchange where you had some issues inside.

00:06:48 Also in the B2B world, acknowledgements are a kind of standard and are normally used everywhere.
00:06:54 This was the reason to add the possibility in the agreement to switch on receiving functional acknowledgements.
00:07:04 And this is also then reflected in the monitoring, like you see here in the right box,
00:07:10 that we switched on Receive Functional Acknowledgements. And here, in the monitoring, we have received the functional acknowledgement,

00:07:18 but unfortunately, our receiver rejected that message for whatever reason. And with
that knowledge and the data from the Trading Partner Management,
00:07:28 because you know which company you have dealt with, you have the information from that
company,
00:07:33 you have also maybe maintained the contact persons, so you can now easily contact that responsible person
00:07:41 to find out why this message has been rejected. And with that, let's have a look in the system
how the monitoring looks.
00:07:52 Let's now open the monitoring, the B2B Monitoring, in our tenant. You find it under Monitor,
B2B Scenarios.
00:08:00 You see how many interchanges have been handled in the past 24 hours. And you see here
the possibilities of our filtering,
00:08:06 like you can search for Failed, Retry messages, Processing, Completed, or also for Waiting for
Acknowledgement messages.
00:08:14 You can adjust the time frame when that message was sent, so that you can deal with that
large number of messages.
00:08:22 You can further drill down and select which error category you're searching for, like
Interchange Validation Failed.
00:08:31 You can search for the sender and receiver name, and many more things. Now you can, in the
next step, select the message,
00:08:39 in our case, a Completed message, and find out more details of this message,
00:08:44 like what kind of message type was used and what the ID inside it was. But you see
already here the events of the generic integration flow
00:08:55 and the access to the payload, which can also be downloaded. Let's now select a different
message,
00:09:05 where we see that the acknowledgement has been rejected.
00:09:10 So our receiver has not accepted that message. Over here, the validation has failed.
00:09:18 So you see here that the message which had been sent out has used the currency DOLLAR
instead of USD,
00:09:26 which is what we expected here, and which had been defined in the code value list.
00:09:32 So we get here different error messages around that value because one, it's too long.
00:09:38 It's only allowed to have three letters, here like USD for dollars or EUR for euros. So it's
already too long.
00:09:47 And you see in the other lines that it's not in the code value list. This means this value is not accepted here.
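The two error lines follow directly from the field's restrictions. Here is a minimal sketch of these checks, using the length limit and the (abbreviated) code value list from the currency example:

```python
# Minimal sketch of the two checks that fail for "DOLLAR" in the demo:
# a length restriction and a code value list. The list is abbreviated.
CURRENCY_CODES = {"USD", "EUR"}   # the code value list
MAX_LENGTH = 3                    # the field's length restriction

def validate_currency(value: str) -> list[str]:
    errors = []
    if len(value) > MAX_LENGTH:
        errors.append(f"'{value}' is too long: at most {MAX_LENGTH} characters allowed")
    if value not in CURRENCY_CODES:
        errors.append(f"'{value}' is not in the code value list")
    return errors

print(validate_currency("DOLLAR"))  # both checks fail
print(validate_currency("USD"))     # no errors
```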
00:09:54 With that information, depending on whether the message comes from you or from a trading partner,
00:10:01 you can then either modify your systems or mappings, or contact your trading partner
00:10:09 to check with them what exactly happened on their side. And you, of course, see in which field this occurred.
00:10:24 As said, our generic integration flow is technically several iFlows, and you have here the possibility to drill down to that technical iFlow monitoring,
00:10:36 and you then also see here in the monitoring what exactly has gone wrong in that message,
00:10:46 why it cannot be accepted by your company or by your trading partner, depending on which direction you are using at the moment.
00:10:56 With that, you have now seen how to monitor your business partner integrations
00:11:02 in a kind of comparison between the technical monitoring of Cloud Integration and the added
value of our B2B Monitoring,

00:11:10 like the acknowledgements, like the validations, like the access to the payload,
00:11:15 and many more things, and of course, the filtering to deal with our challenges
00:11:19 we have seen here at the beginning of the unit. If you're interested
00:11:25 and need more knowledge about B2B integrations, because it's a huge field,
00:11:30 we're at the moment working on a B2B openSAP course, which will also be available very soon.
00:11:37 So stay tuned, and hopefully see you there. Thanks a lot for your attention.

Week 4 Unit 7

00:00:05 So here we come to the end of our openSAP course Modernize Integration with SAP
Integration Suite.
00:00:12 Learning is always a joyful journey, and I hope this course fulfilled
00:00:17 all your SAP Integration Suite learning needs. We, as the Integration Suite product management team,
00:00:24 had a great time bringing this course to you all. To wrap up, SAP Integration Suite
00:00:30 provides our customers with a best-in-class, reliable, and holistic enterprise-grade
00:00:36 integration platform as a service to manage ever-growing complex
00:00:41 and heterogeneous landscapes. In the first week,
00:00:46 we introduced the SAP Integration Suite to you and highlighted the strategic positioning
00:00:52 of SAP Integration Suite within SAP's overall integration strategy.
00:00:58 You could also learn more about the key benefits of SAP Integration Suite
00:01:03 and how it competes in the enterprise integration market. And additionally, we offered
overview sessions
00:01:10 about the key integration scenarios that are offered by SAP Integration Suite,
00:01:15 such as API-led integration, event-driven integrations,
00:01:19 as well as all the flavors of process integrations, such as application-to-application integration,

00:01:26 integration of business partners, and integration of government agencies.


00:01:31 In week two, we learned about how you can elevate and future-proof your integrations
00:01:36 with SAP Integration Suite. This also includes the step-by-step procedure
00:01:41 to provision an SAP Integration Suite tenant. We then deep dived into the newly introduced
00:01:47 integration assessment capability of SAP Integration Suite, which is based on a proven
Integration Solution Advisory Methodology.
00:01:55 Then in the latter half of the week, we learned how you can assess
00:01:59 and migrate your SAP Process Orchestration scenarios to the cloud. Lastly, we went through the
procedure
00:02:05 to migrate your integrations and APIs from the Neo
00:02:09 to the Cloud Foundry environment. In week three,
00:02:13 we learned how SAP Integration Suite supports evolving market needs today,
00:02:19 be it API-led integration or event-driven integration patterns.
00:02:23 API Management is a capability of Integration Suite that supports full lifecycle API
management,
00:02:30 providing you with out-of-the-box policies to set API security.
00:02:34 We also learned that SAP provides an end-to-end event-driven ecosystem consisting of event
sources,
00:02:43 eventing infrastructure, and event consumers. Last but not least,
00:02:48 we learned how EDAs support a wide variety of industry-specific use cases.
00:02:56 In week four, the focus was on application, business partner, and e-government integrations.
00:03:02 We started with the concepts that help you to design
00:03:05 enterprise-grade integration scenarios, and then we deep dived
00:03:09 into the simplified non-SAP connectivity options. We then touched upon the monitoring
00:03:14 and operating possibilities of your integration scenarios. Lastly, we learned about SAP
Integration Suite capabilities
00:03:22 for your B2B integrations, including Trading Partner Management

00:03:26 and B2B Monitoring. We have gone above and beyond
00:03:31 to create a course that not only encompasses the entire spectrum of SAP Integration Suite,
00:03:37 but also delivers the most up-to-date and cutting-edge content. I hope you enjoyed this
exciting journey
00:03:45 and equipped yourself with the knowledge that sets you apart in the world of integration.
00:03:51 Thank you from all of us, and best wishes for your final exam.

© 2023 SAP SE or an SAP affiliate company. All rights reserved.
See Legal Notice on www.sap.com/legal-notice for use terms,
disclaimers, disclosures, or restrictions related to SAP Materials
for general audiences.
