Yes, as Peter mentioned, Kevin Wilson from Microsoft over here. Firstly, I want to thank you guys for putting this together. We love having the opportunity to share with the broader community, and especially what we do here at Microsoft. I think Florian gave you a good idea from an external, Microsoft-facing perspective, so you know how we're impacting customers. I work internally within the cloud supply chain organization, so we're responsible for procuring all the components that go into the servers you see here, in the data centers that run the Azure cloud. We then track that process all the way through to the actual manufacture of those servers that get included in these racks and are ultimately installed in the data centers. So let me show you what's going on over here at Microsoft. Firstly, we'll cover a couple of things. We'll tell you why we actually went down this path and then ultimately how we came about the solution. And I'll just say right off the bat, it took us less than a year to find ASAPIO, engage ASAPIO, and actually implement it into production in the environment, which is kind of an unheard-of scenario within the Microsoft world. We move carefully in the back-end organization to ensure the integrity of the solutions that we're delivering, right? So it just goes to show the simplicity of the solution that sits in the back end. We'll show you the high-level architecture. We won't be able to show a demo today; these are internal systems, so we'll just run through the components that we're actually using and demonstrate the solution benefits. And funny enough, Simeon mentioned that Siemens went through the same process, with the same solution that we chose, right out of the box. It seems like it's an industry-wide problem, tracking your purchase orders through payment.
And then we'll just talk at a high level about what our way forward is, and we hope that other folks can resonate with the same journey that we're going on. So let's take a look, generically, at our problem statement. I'm going to tie it back to the procure-to-pay process, because that was the initial process that we chose to implement, but it's basically any process that sits in the back end if you're using the traditional mechanisms to interface with SAP. We simply did not have real-time access to those SAP events. And in today's world, if that's your position, you're going backwards very, very fast. So we needed real-time access to the status changes of our business objects occurring in SAP. And much like Siemens, we have a very large SAP footprint in the back end. We obviously have a lot of other systems too, including D365 to manage certain components within the supply chain, but a lot of our supply chain is run on SAP. From a monitoring perspective, since we don't have that real-time access to those events, we don't have any real-time monitoring of issues that are happening in the supply chain as and when they occur. So when you hear about exception detection and response, or as I like to call it, insight to action: insight being when we actually get insight into an issue occurring in our extended supply chain. If something happens way upstream, I need to have insight into that exception as and when it occurs, and then the ability to quickly trigger corrective action. If you don't have that immediate insight, the corrective action options change. If you don't pick it up in time, there's nothing you can do, so you accept the consequence, and ultimately your customer suffers. We suffered from this a lot. And there was a long lead time to deploy: our previous approach involved coding for every event against every business object.
And so we had a big amount of code, and anytime we have to change code, we have to go through a lot of hoops to make that happen. So there's a long lead time, and often the business chooses not to go down that path, or they go down an alternative path and don't adopt best practice to get the data into their hands. They'll start to ping databases directly and misbehave. All right, from a resourcing perspective, and purely because of the mode we were using, different systems were polling SAP to ask: is there a purchase order change? Is there a purchase order change? Is there a purchase order change? And we would do that several times over in parallel, just because different downstream systems all wanted the same answer. That is obviously not very good from an SAP perspective. We had external systems impacting performance on SAP because they were just looking and hoping there was a change, and if there was, they would do the read and off it goes. That's just not a good use case, or a good experience, for anybody. So a couple of examples, and we'll run through this in a little bit. We want timely notifications in our downstream systems to suppliers, informing them of any changes in POs. We also dealt with the accruals, so the whole invoicing side of things; we went all the way from procure to pay. And we want early warnings from an ASN and goods receipt perspective. This is obviously key for us: the shipping of products from our tier-two suppliers all the way through, ultimately, to our system integrators who are building the servers for us. If there are any delays in shipments or receipts, or any receipt discrepancies, we need to react accordingly and in time. So what did we come up with? The combination of the SAP system that runs our cloud supply chain, together with the ASAPIO add-on and Azure Event Grid, right?
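To make the polling problem above concrete, here is a small illustrative sketch (not Microsoft's actual systems; all numbers are hypothetical): with polling, SAP load scales with the number of subscribers times the polling frequency, whether or not anything changed; with eventing, it scales only with the number of actual changes.

```python
# Illustrative sketch of the polling-vs-eventing load argument.
# All subscriber counts and rates below are made-up examples.

def polling_reads(num_subscribers: int, polls_per_hour: int) -> int:
    """Every downstream system polls SAP independently, change or not."""
    return num_subscribers * polls_per_hour

def event_driven_reads(num_changes: int) -> int:
    """SAP publishes once per change; subscribers consume from Event Grid,
    so SAP load no longer depends on how many subscribers there are."""
    return num_changes

# Five downstream systems each polling once a minute vs. seven actual
# PO changes in that hour:
print(polling_reads(num_subscribers=5, polls_per_hour=60))  # 300 SAP hits
print(event_driven_reads(num_changes=7))                    # 7 SAP reads
```

The point of the sketch is that under polling, adding a downstream consumer adds load on SAP; under publish/subscribe, it does not.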
Initially, when we engaged with ASAPIO, literally just last year, there wasn't that direct connectivity with Event Grid; it was going through SAP Event Mesh, or Enterprise Messaging as it was known at the time. And from a Microsoft perspective, at that time we were building a world around Event Grid for our internal solutioning, so we could have one place where all of our systems can tap in and subscribe to those events. So it made sense for us to have direct connectivity between SAP and Event Grid. In addition, maybe down the road, we'll also connect SAP to SAP Event Mesh. Certain applications, I would imagine, are just going to raise their events directly in Event Mesh: cloud-based applications, maybe Ariba or IBP or something like that, will go there, and then we'll simply use the connectivity that's already out there between SAP Event Mesh and Event Grid. But we'll continue to have Event Grid be kind of the center of our universe for our downstream systems, right? So we wanted to move away from a polling infrastructure to a more modern, flexible, agile, efficient one; Simeon spoke to the same points at Siemens, and I can't say anything more than that. That's literally what we're trying to do. We're just moving towards an environment we can actually work with, where there isn't an impact on SAP from external systems. It's basically fire and forget: let the downstream systems decide how they want to consume and use the data, and SAP is not part of that process. And maybe that's a philosophy that we follow and most folks should think about: the sending system, the system raising an event, should not really take part in how the downstream systems are using the data. In other words, no more point-to-point connectivity; let's publish the information and let the downstream systems decide how and when they want to pick up that information and actually use it.
Florian mentioned the CloudEvents standard; he's actually involved in the formation of that standard. From a cloud supply chain perspective, we leveraged that standard. We brought together a couple of other internal organizations within Microsoft who are also producing and consuming events within Event Grid, and we set a global standard for Microsoft which is just the same format as the CloudEvents standard that Florian spoke to. So that's the standard we actually use, and I'll give an example on a later screen. And as with anything at Microsoft, we believe in our products and an Azure-first approach, so let's bring the data to Event Grid. You saw exactly how easy it was to consume within Teams; we've got Olga Bruchholt on the call over here, and I think she was the one who actually built and developed that demo for ASAPIO using the ASAPIO add-on. On the last slide I'll give a link to that on YouTube, so go and have a look at it; it literally took like an hour to configure. So, I don't know if it was clear enough earlier, but let me just explain the two types of events that we deal with here at Microsoft using the ASAPIO add-on. For both of these I'm going to use the same business object, BUS2012 for the PO create; it's the same example that was shown earlier. The first type of event is what we're terming a notification event. It's a small little payload; you can see it there on the right-hand side. That's literally the size of our payload. We have a couple of custom properties in there that help the downstream systems subscribe to the purchase orders that are of interest to them. For example, by purchase order type: you may want to subscribe only to capital POs or expense POs and take it from there. So the pure notification event is very small; you just literally saw the configuration. That's it.
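A notification event like the one described above might look roughly like the sketch below, shaped along the lines of the CloudEvents 1.0 JSON format. The event type name, the custom properties, and all values are illustrative assumptions, not the actual payload shown on the slide.

```python
import json

# Hypothetical notification event for a PO create (business object BUS2012),
# loosely following the CloudEvents 1.0 attribute names. All values and the
# custom filter properties below are made up for illustration.
notification_event = {
    "specversion": "1.0",
    "type": "com.example.sap.BUS2012.Created",  # assumed type naming scheme
    "source": "/sap/ecc/purchasing",
    "id": "4500012345-001",
    "time": "2021-06-15T10:30:00Z",
    # Small custom properties so subscribers can filter without a read-back,
    # e.g. subscribe only to certain PO document types:
    "podocumenttype": "NB",
    "ponumber": "4500012345",
    "data": {},  # notification only -- no heavy payload attached
}

print(json.dumps(notification_event, indent=2))
```

The key property of this pattern is the empty `data` body: the event carries just enough context for a subscriber to decide whether it cares, and whether to fetch details afterwards.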
There's no payload. You publish it out to Event Grid, and a subscribing application reads that data and then gets to choose: do I need to go get further information? In our solution, there are some events where we don't need to go get more information. For example, if you subscribe to a PO deletion event, unless you need the reason for rejection, there's no reason to go back to the database and read it with a BAPI call. Your subscribing system can subscribe to that and just deal with it however it wants, whether that's a notification to the supplier that you're deleting, or whether you just want to show an analytics count of orders deleted, and so forth. Whatever the purpose of the subscribing application, it can choose whether to go back and get the data or not. And if it gets the data, it has the option to store and persist that data, again, in Azure land. The second type of event that we use is the data event. There are certain events where we know we need the data to be persisted downstream and we're going to be using the data in its entirety for our downstream processes. In that instance, when we get, for example, a PO create, the settings within SAP say: let's go get all the data that we need in order to send it. So we go get the data, we populate it in the same CloudEvents format, but now the payload is a lot heavier. Then, within the Azure Event Grid framework, because it's not just Event Grid, we have a couple of other components sitting around it, we store the data. So when the subscribing application then reads that event, there's no need to go back to SAP: we have the data persisted in Azure, and we can fetch it from Azure and not SAP. With both of these patterns, you can see how SAP is less impacted. There isn't the continuous polling.
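The two consumption patterns above can be sketched from the subscriber's side. This is an illustration only: `fetch_from_sap()` stands in for a BAPI read-back, and `AZURE_STORE` stands in for the payload persisted alongside Event Grid for data events; none of these names come from the actual solution.

```python
from typing import Optional

# Stand-in for the payload store that holds data-event payloads in Azure.
AZURE_STORE = {"4500012345": {"po": "4500012345", "items": 3, "vendor": "V-77"}}

def fetch_from_sap(po_number: str) -> dict:
    # Placeholder for a BAPI call back to SAP -- only needed for
    # notification events where the subscriber wants full details.
    return {"po": po_number, "items": 3, "vendor": "V-77"}

def handle_event(event: dict) -> Optional[dict]:
    if event["pattern"] == "data":
        # Data event: payload already persisted in Azure, skip SAP entirely.
        return AZURE_STORE[event["ponumber"]]
    # Notification event: the subscriber decides whether details are needed.
    # A PO deletion, for example, may need no read-back at all.
    if event["type"].endswith("Deleted"):
        return None
    return fetch_from_sap(event["ponumber"])

print(handle_event({"pattern": "data", "ponumber": "4500012345",
                    "type": "BUS2012.Changed"}))
print(handle_event({"pattern": "notification", "ponumber": "4500012345",
                    "type": "BUS2012.Deleted"}))  # None: no SAP read needed
```

In the data-event branch SAP is read exactly once (at publish time); in the notification branch SAP is read only when a subscriber genuinely needs the detail.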
There's only one read of the data, especially in the data event pattern. Okay, so what did we go live with? From a My Orders perspective, we're talking about purchase order visibility, very similar to Siemens. We basically wanted to surface any exceptions around that. And just before I continue, maybe I want to mention SAP has their tool, SAP Event Management, something I've also been working with for many years. It also uses events to find and uncover exceptions within your process, specifically within procure-to-pay; for example, when things are not going well, it can raise those events. We are not using that particular solution here right now, but when we do, we will use the same technique. We'll just raise a business object event when we uncover the fact that, for example, a purchase order acknowledgement has not been posted. That's something you can't do natively in ECC, but you can do with ECC and Event Management on top of it: you can determine when things are not happening when they should be happening. I would use the same technique, just raise a business object event that would then go up to Event Grid, and from there I would be able to reach out to a supplier to say: listen, your service-level agreement for sending a purchase order acknowledgement back to us has expired; you need to send us a confirmation, right? Or if an ASN was supposed to ship at a certain time and it hasn't, Event Management would pick up that fact, trigger an ASAPIO business object event, and send it to Event Grid so we can react to it. So back to the scenario: we're talking about roughly 60,000 purchase order changes per year. These are all internal notifications that we want to tap into. It's again just our pilot project, but there was significant churn in our organization around getting real-time access to this, and it was pretty simple to set up. I always like to look at what success looks like.
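The "expected event did not happen" idea above (an overdue PO acknowledgement raising an event) can be sketched as a simple SLA check. This is a hedged illustration of the concept, not SAP Event Management itself; the field names, event type, and dates are all invented.

```python
from datetime import datetime, timedelta

def overdue_acknowledgements(pos: list, now: datetime,
                             sla: timedelta) -> list:
    """Return one event per PO whose acknowledgement SLA has expired.
    In the real pattern, each returned event would be raised as a
    business object event and published up to Event Grid."""
    events = []
    for po in pos:
        if po["acknowledged"] is None and now - po["sent_at"] > sla:
            events.append({"type": "PO.AcknowledgementOverdue",
                           "ponumber": po["ponumber"]})
    return events

now = datetime(2021, 6, 15, 12, 0)
pos = [
    # Sent five days ago, never acknowledged -- SLA breached:
    {"ponumber": "4500011111", "sent_at": datetime(2021, 6, 10),
     "acknowledged": None},
    # Acknowledged in time -- no event:
    {"ponumber": "4500022222", "sent_at": datetime(2021, 6, 14),
     "acknowledged": "ok"},
]
print(overdue_acknowledgements(pos, now, sla=timedelta(days=2)))
```

The interesting part is that the trigger is the *absence* of an event within a window, which is exactly what native change-driven eventing in ECC cannot express on its own.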
So obviously we're looking at a reduction in the number of tickets, which are almost exclusively folks calling just to figure out the progress of an order, right? Where is my order, what's the status of my order? We want all of those downstream systems to be immediately updated with the latest information and available to users. You can see our current experience of 21 hours to actually get the data through all of its downstream systems; we're getting that down to seconds now. And then reducing the polling: we had literally about 70,000 hits to the SAP database per hour just from systems looking for status changes on purchase orders. So what does it look like from a large-scale perspective? We have the publisher: from within SAP ECC we have the ASAPIO add-on, and I know it's available for S/4, but we're leveraging SAP ECC for our POs. So we implemented that add-on, and then we've got, let's just call it a little platform, that ingests those events and pushes them out to the subscribers. That's not just Event Grid, but that's our initial endpoint. Basically, in there we're enriching the data and then routing it to the receivers. That's the platform we're dealing with, and within that environment is where we're looking for high throughput. It's based on the CloudEvents schema; we're talking about the push-pull model, and we've built in telemetry, archiving, and reprocessing: all of the functionality you need for consuming those messages and sending them forward. So from the high-level architecture you can see over here the SAP integration, most of them notification events out to Azure Event Grid. As a side note, we also push the data out to other data marts, which is why we don't have to use data events for all of them. When it gets to Azure Event Grid, we've got our events there, though we haven't gotten to external events yet.
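The enrich-and-route step of the ingestion platform described above can be sketched as follows. The subscription names, filter rules, and enrichment field are assumptions for the sketch; the real platform sits around Event Grid with telemetry, archiving, and reprocessing that this toy version omits.

```python
# Toy enrich-and-route pipeline: take a raw SAP event, enrich it, then fan
# it out to every subscription whose filter matches. Names are hypothetical.

SUBSCRIPTIONS = {
    # A supplier portal that only cares about purchase order events:
    "supplier-portal": lambda e: e["type"].startswith("BUS2012"),
    # An analytics sink that wants every event:
    "analytics": lambda e: True,
}

def enrich(event: dict) -> dict:
    enriched = dict(event)
    enriched["region"] = "us-west"  # hypothetical routing metadata
    return enriched

def route(event: dict) -> dict:
    """Return {subscriber_name: enriched_event} for each matching filter."""
    enriched = enrich(event)
    return {name: enriched
            for name, matches in SUBSCRIPTIONS.items() if matches(enriched)}

deliveries = route({"type": "BUS2012.Created", "ponumber": "4500012345"})
print(sorted(deliveries))  # ['analytics', 'supplier-portal']
```

Because routing is filter-driven, adding a new downstream consumer is a configuration change in the platform, with no change to the SAP publisher.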
The subscribing applications are able to consume those SAP events, and if they want to pull additional data, they can pull it from the data hub or BW, so they're not necessarily going back and hitting the ERP. You can understand there's a little bit of latency there; if you need real-time access to that data, then from Azure Event Grid, once the event is published, you can also call directly back to SAP. In our supply chain world we don't necessarily need that second-or-minute type response, so we're happy to get the data from the data hub as and when it arrives. You can see some of those downstream systems; some of that is our future-thinking piece. Basically, once it's in Azure Event Grid, any system can consume those SAP-related events, and any of those systems can publish their own events back onto Azure Event Grid, and we intend to consume those events within SAP ECC in the coming stages of the project. Okay, so let me focus on just the My Orders app that we dealt with (I've got five minutes left, Peter). Just at a high level, from a benefits perspective, these are real benefits that we achieved in a very short amount of time. The ability to maximize the usage of data: not only did we maximize the usage of the data, we also maximized the amount of event data available. And when I say the amount of event data, it's the events themselves; before, the events themselves were not getting to the right place at the right time. So with more data available in less time, we were able to maximize the usage of that data, and that insight-to-action metric literally became a measure that we were now proud of. On efficiency, we greatly increased the event throughput by adopting this architecture. We reduced our Azure resource usage, again by eliminating that whole polling mechanism; that vastly reduced the usage of our resources, and I'm talking about compute, storage, and so forth.
And we increased our real-time access to that data. The solution itself is very flexible. I saw a question in the chat a little earlier about whether this can be done for any object: absolutely, even a custom object, and for multiple use cases. It's exciting. It's opened up a world of thought for us; it's just, what's next, and you can implement things pretty quickly. So from an agile perspective: time to enable, time to insight, time to action, all of those were reduced greatly. Time to enable is more of an IT-type focus: to build something and stand it up in production is a matter of weeks, even at Microsoft, as opposed to months when you need downtime to load code into SAP. You don't need that anymore. And from a scale perspective, it's a very scalable solution that's offered here. I think this is maybe the last slide: our path forward, from a high-level perspective. What we did with ASAPIO: there was the initial discovery. How can we do this? Does this fit our security and our thinking going forward? Once we moved past that: let's move with a pilot, which is this My Orders application; let's stand up some monitoring and put the DevOps procedures in place. So all of that's there. Now, what we want to do, if you think about it: we need to expand the footprint of the actual events that are in Azure Event Grid so that our downstream systems can actually use them. So we'll be pulling more data out of SAP. We also have blockchain running some of our supply chain processes, so we'll also be publishing blockchain events alongside the SAP ones. In fact, we're also working with ASAPIO to build that blockchain adapter for us and our partners on the blockchain. So that's in the works, and pretty exciting. And then going forward, once we have a good amount of data covering our supply chain processes, we'll look at expanding those downstream applications.
So leveraging Logic Apps, like you saw in the example earlier, API apps, web apps, all of that functionality, and then expanding into machine learning and artificial intelligence. We've got a lot of this already in place, but we need to tie it in with all the new sets of data that we'll actually be pulling in from phase three. So I think, Peter, I've got one minute left. These are a couple of other interesting pieces. Feel free to subscribe to Holger's great SAP on Azure video podcast; you can see it down there on the bottom left. At the top left, that is the video of the demo from a little bit earlier. And then there's another one, another SAP blog. So you're welcome to go and check out some further information on this solution.