Welcome, everyone, to our webinar today. My name is Martin Schaffler, I'm Product Manager at ASAPIO for the ASAPIO Integration Add-on. Today we will see a presentation from Christian Fristler from Home Shopping Europe on real-time integration of SAP Retail with Google Cloud. So, over to you, Christian.

All right, thank you, and good morning, everyone. I will start my presentation, give me a second. All right, perfect. So yes, I'm going to tell you a little bit about our journey of integrating our SAP Retail system with Google Cloud Pub/Sub, and how we integrated data from the SAP Retail system into our e-commerce platform. Just give me a second, I wanted to change the presenter view so you can still see me. Here we go. Sorry, my webcam is always a little dark, but at least you can see me now.

So, what was our motivation? Back in October 2021, we migrated our SAP landscape from an on-premise data center to Google Cloud, and with that we wanted to make more use of the cloud-native services provided by Google Cloud, such as Google Cloud Pub/Sub. In addition to the move from the data center to Google Cloud, our internal landscape also changed a little. In the past, our SAP CRM system was the main backend for our e-commerce platform: all orders were placed first in the SAP CRM system and then transferred via middleware to our SAP Retail system. That integration also changed, meaning our e-commerce platform now talks directly to the SAP Retail system via an OData API; all orders, order updates and so on are placed in SAP Retail via OData. And with that, we also wanted to change how the communication between the SAP Retail system and the e-commerce system is designed.

Maybe as background: HSE Home Shopping Europe runs 24 hours a day, seven days a week, 365 days a year. The motivation behind integrating via Pub/Sub was obviously that we want to show the latest information to our customers in the e-commerce platform without any further delay, such as the current order status, the current available stock, and additional data.

Okay, let's quickly go to the next slide. As I mentioned, we have the SAP Retail system on the one side, and on the other side the HSE e-commerce platform, and we wanted to integrate them using Google Cloud Pub/Sub, so an event-driven architecture approach. Why did we choose this kind of scenario? Overall, in our architecture guidelines we try to decouple systems and processes as much as possible. Obviously we do have an OData API that the e-commerce platform could also call for specific information, but our SAP Retail system has planned downtimes, even though they are early in the morning, and in the worst case there could also be unplanned downtime. And if you're upgrading your system, applying new support packages or even EHP updates, the downtime will be longer than just a few hours and you won't be done overnight. So we wanted to make sure that the platform gets the information via events and then also stores the information on its side, to be a little more independent from our system.
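As a rough sketch of what this pull-based consumption of events can look like on the receiving side, here is a minimal example using Google's Python client for Pub/Sub. The project and subscription names are placeholders, not HSE's actual configuration, and the e-commerce platform's real implementation is not shown in the talk.

```python
from google.cloud import pubsub_v1

# Placeholder identifiers; HSE's real project and subscription names are not part of the talk.
PROJECT_ID = "example-project"
SUBSCRIPTION_ID = "order-status-events-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

# Synchronous pull: fetch up to 100 messages, process them, then acknowledge.
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 100}
)

ack_ids = []
for received in response.received_messages:
    # The client library hands over the payload as raw bytes;
    # base64 encoding only exists on the REST transport level.
    print(received.message.data.decode("utf-8", errors="replace"))
    ack_ids.append(received.ack_id)

if ack_ids:
    # Acknowledged messages are removed from the subscription backlog.
    subscriber.acknowledge(
        request={"subscription": subscription_path, "ack_ids": ack_ids}
    )
```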
And before we decided to go for ASAPIO, or even before we started looking at ASAPIO, there was also the question in the room: maybe we could build it ourselves. But from my background before HSE, where I was a product manager for SAP add-ons at a different company, I know from experience that building things is the easy part; maintaining it, documenting it, and adding functionality like monitoring, error handling and so on is actually the hard part. We had some brief discussions internally, and on the other side you also need to keep up with Google Cloud Pub/Sub: what if they change their API, and other things you need to be aware of. So it was clear for us relatively quickly that we would not build this solution on our own. We found the ASAPIO Integration Add-on, first the NetWeaver enablement add-on for SAP Event Mesh, and when we talked to ASAPIO they luckily also had the corresponding connector for the Google Cloud Pub/Sub that we're using.

So what did we start with? The most important part for us was that whenever an order status is updated, an event should be made available to our e-commerce platform. This was also our proof of concept, which we finished in about two days, getting everything up and running; it was rather quick and quite successful. We are not pushing directly to the e-commerce platform; they are pulling the data from our Pub/Sub subscription rather than getting it pushed. We then added additional financial events that are relevant, as well as the stock delta events, and all of these are pulled by the HSE e-commerce platform via the subscriptions. Overall, the final implementation took us about three to four weeks, and then we were done with all the implementations. If you're wondering about this little icon here, this is just additional information: the event specification we are following is CloudEvents, which SAP also uses for its event formats. If you want further details, you can check out cloudevents.io.

Okay, so these are the events that we are publishing. What I will do now is show you a demo in our system, just to give an overview of what is happening and how the process looks. I'm just going to stop sharing, quickly close PowerPoint, and give me one second to get into our environment. Now you should see the environment. This is our staging environment, and sorry, since the main focus of Home Shopping Europe is the DACH region, so Germany, Austria and Switzerland, we don't have an English UI. But basically, here you see the orders with all the different statuses that we have: these orders have already been shipped, this one has been cancelled, and this one is still in progress. What I'm going to do now is create a new order; let's buy the makeup one more time. Then I'll show you what happens in the background of our SAP Retail system and in the ASAPIO add-on.

All right, the order has been placed, and now we're going to check what's happening in the background of our system. Okay, here we go, this is the order; this is now the order number from our SAP Retail system. Okay, sorry about that, someone was not muted. So, the SAP Retail system. What actually happens once the order has been placed is that the e-commerce platform calls our OData API and the order is saved.
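The status events written in the next step follow the CloudEvents specification mentioned earlier. As a rough illustration, such an envelope could look along these lines; all event type names, URIs and field values below are made up for illustration and are not taken from HSE's actual payloads.

```python
# Hypothetical CloudEvents envelope for an order-status event.
# Attribute names follow the CloudEvents spec (cloudevents.io); the values are illustrative only.
order_status_event = {
    "specversion": "1.0",
    "type": "com.example.retail.salesorder.status.changed",  # assumed naming, not HSE's
    "source": "/sap/retail/erp",                              # assumed source URI
    "id": "6f1c7e6a-0001-4b2b-9f3a-demo",
    "time": "2023-04-20T08:15:30Z",
    "datacontenttype": "application/json",
    "data": {
        "orderNumber": "0000012345",  # illustrative field names and values
        "itemNumber": "000010",
        "status": "SHIPPED",
    },
}
```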
And then, at a specific customer exit, we write a status event, but the status event is not sent directly. The first thing that happens, and you are probably all aware of the BDCP2 change pointer table, is that a change pointer is created. We can already see that this one has been processed; the processed flag is set once the ASAPIO job has run, and that job runs every minute, so to say. And I can even check: this is the ASAPIO monitor where you can monitor the transactions, and I'm just going to look at the events. This is the event we just fired and sent to Google Cloud Pub/Sub. You could even have a look at the payload, but since Pub/Sub requires the data to be sent base64-encoded, it looks like gibberish. You could decode it and look at what the real data says, but for us that's not relevant right now.

So nothing has happened yet; we just placed the order in our customer-facing front end, and it still says it's in progress. What I will do now is process this order; this is something I'll do manually for the demo. Just quickly removing the delivery block as a first step, and then I guess most of you know the drill, the usual transactions, VA02, creating the delivery, and so on. Okay, I have posted the transaction now. So what we will see... sorry, it still says "in process"; that is still from the OData API call where I removed the order block. Let me just verify that what I did was actually correct. Sorry about that, I have to use a backup recording. So that's what we already did, we processed the order; let me just skip to that part. Yes, we already saw that. Okay, now basically creating the delivery, which is what I thought I had just done, but apparently it did not work. And this is the step I wanted to show you: under my user, an additional change pointer got created, and it has also been processed. Here we see it again: we see the additional events going to the Google Cloud Pub/Sub system, and once such an event is processed we can also see it in the ASAPIO monitoring. And then we can see the status update immediately in the UI of the e-commerce platform: the status is updated to "sent". So apologies once more that I couldn't show the full demo live, but I think you get an understanding of how the process works.

I will just quickly show you the high-level view once more. In the SAP Retail system, the status of the order item was updated. This causes a change pointer to be created. The ASAPIO framework then processes the change pointers, and from the change pointer data it reads the relevant data that is needed. What is actually nice with Google Cloud Pub/Sub: we don't have to send each event individually. You know, this is the staging environment, where not much is going on; on our production environment this looks completely different, we have hundreds and up to thousands of events per second. What the Pub/Sub connector allows us to do is not send each event individually, but collect, let's say, 200 events, and once we have these 200 events they are sent to Pub/Sub in one go. Pub/Sub then splits the payload into individual events again. The events are stored inside the subscription, and our e-commerce platform pulls those events and processes them.
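The batching described here can be illustrated with the Pub/Sub REST publish endpoint, which accepts many messages in one call, each with a base64-encoded data field. The following is only a sketch of the general mechanism, with placeholder project and topic names and simplified token handling; it is not the connector's internal implementation.

```python
import base64
import json
import requests

# Placeholder names; the real project, topic and authentication are not part of the talk.
PROJECT_ID = "example-project"
TOPIC_ID = "order-status-events"
ACCESS_TOKEN = "..."  # e.g. obtained via google.auth; omitted here for brevity

def publish_batch(events):
    """Publish a list of JSON-serializable events in a single HTTP call.

    Pub/Sub stores each entry of 'messages' as an individual message again,
    so subscribers still receive one event at a time.
    """
    url = f"https://pubsub.googleapis.com/v1/projects/{PROJECT_ID}/topics/{TOPIC_ID}:publish"
    body = {
        "messages": [
            {"data": base64.b64encode(json.dumps(e).encode("utf-8")).decode("ascii")}
            for e in events
        ]
    }
    resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.json()["messageIds"]

# Example: one HTTP round trip instead of 200 individual calls.
# message_ids = publish_batch(collected_events[:200])
```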
I just have to switch the screen once more. Yes, and then I guess all of you are interested in what this means for operations; implementing is one thing, but then you also need to run it. Maybe just to give you an idea: today we process millions of events per day, really without much system impact. That is something we monitored closely, especially at the beginning. Our SAP Basis team in particular, faced with a new solution and additional processing, was worried that this integration would take up too many work processes and put too much load on the CPU, et cetera. We are still monitoring our systems closely, but even at night, when we have some huge batch jobs running that make massive changes to sales orders and other objects, we see the CPU go up maybe five to ten percent higher than before; overall, the system impact has been very small for us.

And what are we doing today to make sure everything is working and we get notified in case things are not running? First of all, we monitor the job execution itself in the SAP Retail system. We do that with an external tool; some of you might be familiar with UC4. We check the frequency and the duration, and if the duration is too long, or the job has not even started as expected, we get email alerts to our on-call duty team, so that even at night, if something breaks, they will check it and try to fix it. But we don't only monitor our SAP Retail system: Google actually provides best practices for Pub/Sub monitoring. On the one side, we check whether we are continuously sending events from the SAP Retail system and whether they are actually received by Pub/Sub, so the publisher health is checked to make sure the events really arrive. Additionally, even though it's not fully our responsibility (in the end it's the e-commerce team that is responsible for pulling the events), we also check whether the subscribers are healthy, meaning we don't suddenly end up with a huge backlog of events that haven't been processed. We check whether events are older than one hour and whether a certain threshold of events has accumulated in the subscription. If these thresholds are exceeded, our team gets alerts and then connects with the e-commerce team to check that everything is okay and running as expected.
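The subscriber-health checks just described map onto standard Pub/Sub metrics in Cloud Monitoring, such as subscription/oldest_unacked_message_age and subscription/num_undelivered_messages. Below is a rough sketch of how such thresholds could be queried with Google's Python monitoring client; the project, subscription and threshold values are placeholders, not HSE's actual setup.

```python
import time
from google.cloud import monitoring_v3

# Placeholder values; HSE's real thresholds and resource names are not shown in the talk.
PROJECT_ID = "example-project"
SUBSCRIPTION_ID = "order-status-events-sub"
MAX_AGE_SECONDS = 3600          # "events older than one hour"
MAX_BACKLOG_MESSAGES = 100000   # assumed backlog threshold

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 300}}
)

def latest_value(metric_type):
    """Return the most recent data point of a Pub/Sub subscription metric."""
    results = client.list_time_series(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": (
                f'metric.type = "{metric_type}" '
                f'AND resource.labels.subscription_id = "{SUBSCRIPTION_ID}"'
            ),
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        if series.points:
            return series.points[0].value.int64_value
    return 0

age = latest_value("pubsub.googleapis.com/subscription/oldest_unacked_message_age")
backlog = latest_value("pubsub.googleapis.com/subscription/num_undelivered_messages")

if age > MAX_AGE_SECONDS or backlog > MAX_BACKLOG_MESSAGES:
    # In a real setup this would raise an alert to the on-call team.
    print("Alert: subscriber backlog thresholds exceeded")
```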
So now I think we come to the questions and answers from your side. All right, if you have a question, please unmute yourself and just ask.

Yeah, I have a question, I hope you can hear me. Yes, I do hear you. Thank you for your presentation, by the way. I have just a little question: why did you decide to group the events before sending them to Pub/Sub and then split them again in the Pub/Sub broker? Is this to save time, or because you have a lot of traffic, or what is the logic behind this decision? Yes, especially during the night when the batch jobs are running, we send hundreds of thousands of events in a very short time. Even though all the traffic runs internally in Google Cloud, each REST API call still means the SAP Retail system opens a connection and closes it again, so grouping the events allows us to reduce the load on the ICM and other important parts of the system. Okay, thank you.

Hello, I do have a question. In your landscape, what is the rationale behind grouping the events into batches and then delivering them to Pub/Sub rather than delivering them immediately? Did you revert to these batches because you saw a performance impact, or was the decision to send them in batches made during the implementation phase itself? Actually, our first POC was not even with Pub/Sub; we used SAP Event Mesh. With SAP Event Mesh you always have to make one call to its REST API for each individual event, and there we saw that, in order to handle the additional load, we would have had to deploy additional cloud infrastructure. We did deploy some of it. But with the Pub/Sub connector, and since we're running the job every minute, true real time is not super crucial for us; we are more near real time. So it was okay for us to wait for one minute, collect all the events, and thereby reduce the number of external calls made by the ICM and other components inside the SAP system. Okay, so you mean to say that it does not cause a performance degradation within SAP; you have done it to reduce the number of HTTP calls, right? Yes, exactly, to minimize the HTTP overhead. Okay, totally got it, thank you very much. All right, you're welcome.

Martin, you're on mute. Thank you so much. One thing I wanted to add on that point: you had these bursts of events during certain runtimes at night, where some batch jobs did mass updates that still produced the same kinds of events. That is typically a scenario where it is very good if the connector can do this kind of batching, because it greatly reduces the overhead. Otherwise you would basically try to separate out these events, so that the batch job produced different events than during normal working hours, which are more easily handled in true real time. Right, right. And maybe just to add: from our experience since we went live last year, in June I think, we never had any issues. Sometimes we get alerts that the subscribers are not keeping up with the event flow, and then we have a quick talk with our e-commerce team, but that's a different part of the integration. Overall, we never had any issue where the Google Cloud Pub/Sub connector didn't manage to handle the workload or our system. So we're happy and it's just running. And actually, in May we're starting the next integration of events, regarding invoice data that will be sent to the e-commerce platform.

Perfect, any more questions from the attendees? If there are no more questions, then thank you very much for your demo and the whole presentation. I will show a few slides on the upcoming release. We don't have that many minutes left, but you can get an idea of what is coming in our next release, which will be released next week, I think Tuesday. One point that did come up in Christian's presentation was the show-trace function for the Google connector. It has actually been in place since the October release last year, and you will now get a decoded view of the payload so that you have a better idea of what was actually sent. But that's just a minor detail.
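Since Pub/Sub message data travels base64-encoded, the "gibberish" payload seen in a raw trace can be decoded in one step. A tiny sketch, with a made-up sample payload:

```python
import base64
import json

# Illustrative only: a base64-encoded JSON payload as it might appear in a raw trace.
raw_trace_data = base64.b64encode(b'{"orderNumber": "0000012345", "status": "SENT"}').decode()

decoded = json.loads(base64.b64decode(raw_trace_data))
print(decoded["status"])  # -> "SENT"
```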
And as you said, Christian, since you don't have any problems, you probably never look into those traces anyway. Right, we only did that at the beginning during implementation, but since then it's just running. Yeah, so if you upgrade you will get an improvement there, and you can actually see the failures. Perfect, good to know.

But let me also share my screen so we can go through a few bullet points on what's coming in the next release. We have of course been working on the tool, and I will mostly focus on what's coming in the core framework. As you might know, around the core framework there is always also the connector integration, and there are different features for the different connectors; those I will only touch on briefly, because we only have about five minutes left, I think.

One big feature we added in the April release (it was already included as a kind of beta in the October release we had last year) is the payload designer, which should simplify and improve how you can do the codeless definition of your payload and of the data extraction, so that you get a better UI for combining different tables and also for specifying the renaming for the JSON that will be produced. This is now part of the April release coming next week. This will also allow us to give you some preconfigured payload packages where we have already defined the right extractions for typical objects that we see a lot: sales orders, purchase orders, invoicing data, and other master data. These will be made available on the download portal as templates that you can copy from, to give you a head start if you design your own events and are not quite sure which tables are involved. So you get a little bit of guidance there to get you started.

We also added new support for AsyncAPI specifications. This is currently an export, in AsyncAPI format, of the configuration that you do in our framework. In case you are not aware of it, AsyncAPI is a standardization effort very similar to OpenAPI, which covers the REST-based, more synchronous interfaces; AsyncAPI is the counterpart for asynchronous, event-based APIs. This works when you have a configuration-based payload and extraction, so you have to use either our database view extractors or the new payload designer with its extractor to actually get the schema export and everything you need for the AsyncAPI document. If you're using some API management tool, this can be really helpful: you don't have to create those specifications manually anymore, you now get a report where you can export them.

And then, of course, there is a set of general improvements and fixes in the overall framework. What I want to highlight are the improved formatting options. We already had some renaming options for the database-view-based payloads, but with the new release you can also influence the hierarchy of the payload and have a few more options for how the payload will actually look in the end. I think we only have time to touch briefly on the multi-table topic: the payload designer gives you a drag-and-drop UI where you basically build the joins of the involved tables, and you then get a UI to choose all the fields. We built it so that it runs without us having to generate code for it: there is no coding generated, and no data dictionary objects or other dictionary objects are created for it.
So you don't have to create a new database view; you just click together which tables you want to extract and which fields to take from those tables, and then you're good to go. The extractor then builds the extraction at runtime based on your configuration, and the same goes for the formatting.

We also added the new formatting options. One thing that was already available is table and field renaming; we restructured it a little, and there is a report coming with it, so if you already use it you can switch to the new way of configuring it. There is a new option for camel casing: typical SAP names are separated by underscores, and you can now automatically camel-case them for your JSON, so the underscores are removed and the following character is uppercased. That's an easy option if you already have a lot of fields and don't want to define a separate rename for every field. You also get a new option to skip fields in the payload: sometimes you have fields like the client field in an SAP table, or other fields that you only need for joining the tables but don't actually want to output in the payload, and you can now skip them when the payload is generated. And you can define how the tables relate to each other and what the hierarchy in the payload should look like. As you might know, the formatter always tries to structure the JSON and create sub-objects for the different sub-tables; this now gives you control over the level at which each sub-section appears. These options are included in all the database-view-based formatters. So far we had quite a few different formatters for the different connectors; there is now also a new formatter that is used especially for the payload designer but also works for database-view-based extractions, and it basically combines all these features in a better structured way, so you can rely on that one formatter instead of the different formatters you might have used before.

Yeah, I think with that we are close to the end, so I will thank you again for attending, and Florian, I hand back to you for any last words or anything else we want to say. Thank you. So I'll close with a final slide on what we presented today, and many thanks to Christian and Martin for preparing all of that and demonstrating it. If it resonated with you, there is more on our website, and please feel free to reach out to us any time. Thank you.