Welcome back. I hope you got a moment to step away for a second and grab a glass of water, because we've got a really great presentation coming up for you in just a minute. I'm going to give everybody about 60 seconds to get back to their desks and get ready for this session, which is going to be one of the last of the day. Just a reminder that you can add your questions in our chat, and we will save a few minutes at the end to talk through them. So we'll just give it another 30 seconds or so. All right, everybody. Well, listen, in the interest of time we're going to go ahead and get going. I'd like to turn it over to Hugo and Bomsi, who are going to present to you about connecting the dots: Red Hat Service Interconnect as a modernization enabler. Over to you guys.

Perfect, thank you very much, Greg. We really appreciate this opportunity to talk about very interesting topics like application modernization and the connectivity challenges Service Interconnect addresses. That's something we have been looking at and trying to help customers with. So today with me is Bomsi, who is going to talk through this topic with me. Bomsi, welcome.

Hey, thanks, Hugo. Bomsi Ravla, technical marketing. Some of you may have been in one of the previous sessions, where I did a session on API management and Service Interconnect, but this is a whole different topic: modernization and how Service Interconnect can help. So for people who are rejoining, welcome back; for people who are joining new, thank you for joining, and I hope you like it.

Yeah, and a reminder that people will be able to see this recording on the Red Hat Developer YouTube channel as part of this event. We really appreciate that you're with us today. Any questions, please put them in the chat and we'll try to answer them. We'll go over some slides, and we also want to show you a quick demo on this topic. So let's get started.

We're going to be talking about modernization and why it is important right now. We know there has been a trend, some changes and shifts in the way we have been developing applications. It's almost two decades since microservices came knocking on the door, then Kubernetes came into play, and now it's containers and modern application development that we're covering today, right? But if you're not a brand-new company — if you already have a set of assets in your organization that you need to move forward — how do you get the most out of these new paradigms and the benefits of economies of scale? How do you get into that journey? There are different kinds of strategies, right? The first one is lift and shift: you just move your application from one platform to another. You can do some kind of refactoring. Or you can extend your current applications and augment them through what we call anti-corruption layers, using the capabilities of microservices to help you move the parts that are most important for your business.
One of the ways we used to work in the past is that we had everything running on a single instance, on a single server, in a single data center. That had some benefits, as well as some limitations and challenges that microservices, with the new infrastructure capabilities, are now making easier to deal with. And obviously you can always do the full rewrite, the full replace: start from scratch and recreate most of the functionality without all the technical debt, in one big bang. However, if you do a big bang rewrite, the only thing it guarantees is a big bang. So you need to be aware that, even though it's an option, if you're really thinking "I'm going to rewrite everything from scratch because I don't want to keep using the old framework it was built on — I want to redo the whole thing in Rust" — yes, it's something you can do, but it requires a lot of management. Usually what you do instead is take an approach that moves in a steadier way, right? You're going to be running a marathon when moving your monolith to a new microservices architecture. It's not going to be a quick sprint — sprinting is very demanding, and only a few people in the world can really be champions at it. But marathons? Lots of people run marathons, which tells us it's something we can achieve at a steady pace. And this is one of the things we have seen: if you have read books like Monolith to Microservices by Sam Newman, or Modernizing Enterprise Java by Markus Eisele and Natale Vinto, colleagues of ours here at Red Hat, you see that a transitional approach of decomposing functionality makes it easier to move to this kind of application. So there's a series of steps we can follow that help us get onto this modernization path. If you go by the book, you will find some combination of these steps, right? One, get your hands into your monolith, your existing application, and identify its logical components. Two, start to flatten or refactor those components. Three, check exactly what the component dependencies are — build your graph so you can identify who is calling whom and who depends on whom. Four, start grouping those components: separate them out in a mind map showing the relationships. Five — and this is the important one — create an API for the remote user interface of each group. That's a very important part that a lot of people neglect when talking about modernizing applications, because they just say, "Oh yes, APIs, that's going to be remote; I'm using HTTP, and that's enough." Then you go on to step six: migrate the component groups to microservices, move the component groups to separate projects, and make separate deployments. Step seven, keep carving those groups down into proper microservices. And step eight, repeat steps six and seven until complete. But as I was saying, there are some caveats, and this is what we're seeing now. This is an example from the Monolith to Microservices book by Sam Newman, where they're decomposing an application that has certain functionality.
So you have invoicing, you have payroll, inventory management, user notifications — this is basically a way to start mapping the features and the capabilities. And one of the interesting things he points out — it's the part highlighted in the diagram — is the calls that we want to intercept inside the monolith. Basically, these could be sockets or in-memory calls, the kind of thing we used to do within the same deployment or the same service. But that's not the only kind. When we're talking about monoliths — for example, applications written in Java — the communication between services is not just APIs. It could be EJBs (Enterprise JavaBeans), session façades, RMI, CORBA perhaps. It is not just REST; there were different protocols available there. And when we start stripping down, when we start carving out those particular pieces of functionality, we might still need to reuse some of those connections and establish a way for those services to interact. So what we have in the next slide is that process: you take an asset, move it to a microservice, and redirect the calls. But the important part — and this is where the rest of this session will focus — is the upper part. The part where, if you notice, nothing is connecting those pieces. And this is because there are different strategies for solving this part: how I handle the call going into my application, or my applications, depending on whether we are on the left side or the right side of this modernization journey. Most of the modernization approaches that you have heard about in the past — and we have a lot of great tooling within Red Hat, like the Migration Toolkit for Applications, as well as other capabilities and books — focus mostly on the features, the code, the functionality, the domains, et cetera. But the connectivity part is, as I was saying, very important, and sometimes it is underestimated, because there are different challenges that put us on the spot and that we need to overcome. And this is because, when we really want to get the most out of these new infrastructure trends, these new platforms and clouds available to us, we end up with a mix of environments. We would love for everything to be homogeneous, for everything to have the same type of infrastructure, but that's not reality. Most of the time, even when our application was running locally in our own data center, we used to have different versions of Linux systems, right? Perhaps RHEL 6, some old RHEL 5s, somebody already deploying to RHEL 9, or other kinds of Linux and Unix systems. And even once you have moved to this new infrastructure, you have different Kubernetes versions — multiple versions of OpenShift, multiple versions of Kubernetes — plus access to different services like mainframes and old Unix systems. That makes it complex.
So that means it's not going to be the exact same way to connect, because the networking is different and these applications will be running on different infrastructure. And why is that? Because, by default, most of the time you will need to live with the reality that we are in a hybrid cloud world. I was actually chatting with a team at a customer in sports betting, and they were deploying almost all of their infrastructure and applications to one of the cloud providers. However, for reasons related to compliance, they had to run some of the services in a different cloud provider. Even though they were trying to go all in on just one cloud, the reality was that they couldn't. And there are other reasons why your organization might require more than one provider. It could be security and compliance, if you're in a highly regulated industry with restrictions on where your data is located or from where it can be accessed. It could be IT agility — the vendors and services you work with run on one cloud or another. Flexibility, as with this customer that wanted the benefits of another cloud provider. Data gravity is one of the major ones: we have seen that people are not able to just move to the cloud, or from one cloud to another, because when we're talking about big databases — terabytes of data, your data lakes — it sometimes becomes more costly to move the data than to keep it running where it is. And mergers and acquisitions sometimes force us into this kind of scenario too. So moving just the applications is possible, but most of the time we're going to be living in this hybrid world. And so we were saying: OK, if we're living in a hybrid world, if we need to deploy applications across these different clouds and providers, what are the options? What choices do I have as an architect for how we're going to connect them? And I will say there are options — different ways to tackle this problem — and the right one will depend on how you're working and what your constraints are, right? You can always go with public IP networks, where you implement load balancers and public IPs for your services. But that's costly, right, and sometimes limited. Still, it's one of the approaches. You can set up your own VPN and get network isolation, but you will need to handle all the different rules, because everybody on that segment can see everybody else: you need to check ports, figure out which IPs you need, and have the privileges of a network administrator to reconfigure the network afterward, and so on. Or, if your cloud provider offers a solution for isolating networks, like a VPC, that's also something you can do.
You can go in and configure the right privileges to enable those permissions, keep track of every single one of those rules, and perhaps get the benefits of some infrastructure as code — but again, it still costs money, it uses resources, and you will need a dedicated team. What we are proposing with this session is that you also take a look at the overlay network, or VAN — the virtual application network — which gives you, again, network isolation, but fine-grained, at layer 7. It removes some of the complexity of handling the connectivity between your applications, it removes friction by letting non-administrators configure and access services according to their privileges, and it's built on top of your existing network stack, from the physical layer all the way up to the application layer. So this is the kind of proposal we have for hybrid cloud.

And we will start with a simple scenario, right? We have an application running in your data center on Red Hat Enterprise Linux, where certain aspects of the application are already decomposed. We have extracted the database into one place. We have broken out some of the logic, so there's a payment processor service that's already extracted as an independent application, and we have a nice, decent, modern UI living in its own application as well. This is how it works, and this is how we want to deploy it. Now say we decide to go to a cloud provider — the number one cloud provider — and deploy our application there, because we want the benefits of the reach they have, the presence, the caching, and the ability to deploy very fast, right? However, because of restrictions, as we mentioned, on the data we can move into the cloud, or because of the way we're handling our services right now and the budget, we need to keep some of those in the data center. So if we want to connect both environments — for the UI to be able to access information in the database as well as in the payment service — one of the usual approaches is exposing APIs: creating an API gateway and opening some ports so the application can connect and access the web service that's running the payment service. However, because API gateways usually work at the HTTP level, you are not going to be able to tunnel the database connection just over HTTP. Unless you're using TLS — but that means you would need to enable TLS on the database as well as in the application, and you're not doing that, because it would mean changing the code. So what you end up doing is creating a REST-to-database service to handle those calls. And if you notice, we now have two new pieces added to the solution that need to be managed, deployed, and in one case built, and your architecture becomes more and more complex relative to what you really want — which, as the next slide shows, is simply for the application to behave the same way it behaved before, just deployed across different networks and different data centers. Because in the end, it's the same application we're working on. So how can we achieve this?
How can we get back that feeling of working in a single data center, while keeping things deployed independently? This is where Service Interconnect helps us, in four easy steps. Say we have two clusters, with Service A and Service B deployed. Step one: we init Service Interconnect using the Skupper CLI, which deploys a router in each of the clusters. Those routers help us establish connectivity between the sites and manage the services shared over them. Only one of the clusters needs a public endpoint to create the connection — and one of the interesting things here is that we get a bi-directional connection even though only one of the clusters is exposed. Step two: on one of the clusters, we create what we call a token, which carries the credentials the other cluster requires. Step three: the other cluster takes that token and creates the link between the sites, with mutual TLS between the routers — the token includes certificates that guarantee the identity of the router that is connecting. So we create the link using that token. Step four: once the connection is established between the two clusters, we work at the service level and expose the services we want to use. Unlike other kinds of solutions that automatically expose everything connected to the same namespace or network segment, Service Interconnect works more like an allow list: we explicitly tell the network which services we want to expose, and those are shared across all the connected clusters. In this case we only have two, but we can have a mesh of three, four, or more clusters. And we can do that by simply exposing deployments — exposing a deployment creates the service that shows up in the other clusters. So the next step is for Bomsi to show us how all this works in real code. Bomsi, are you ready?

Sorry, I was talking on mute. Can you guys hear me? — Yeah, we can. Absolutely. Go for it. — So thank you, Hugo, that was a great setup. For the demo itself, as Hugo mentioned, we'll look at the different scenarios, connectivity challenges, and use cases an organization faces while modernizing their applications — breaking them down into microservices and deploying them across the hybrid cloud. First things first: actually creating those hybrid connections. Irrespective of the environment your services are deployed in, they should all feel like they're deployed in the same environment, without exposing your services directly to the internet. That is one key capability of Service Interconnect. So in our case, we have a patient portal application deployed on an OpenShift cluster in AWS. Then we have a database and a payment processor: the database has the list of patients, and the payment processor is used to make bill payments for the patients. The front end is in the OpenShift cluster, and the database and the payment processor are in the RHEL data center.
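For reference, Hugo's four steps map to just a handful of Skupper CLI commands. Here's a minimal sketch, assuming the Skupper v1 syntax; the deployment name and port are illustrative, not from the demo:

```bash
# Step 1: initialize a Service Interconnect router in each site
skupper init                         # run in each cluster/namespace

# Step 2: on the site with a public endpoint, create a connection token
skupper token create ~/site-a.token

# Step 3: on the other site, use the token to create the mutual-TLS link
skupper link create ~/site-a.token
skupper link status                  # verify the link reports "connected"

# Step 4: explicitly expose only the services you want on the network
skupper expose deployment/backend --port 8080   # hypothetical deployment
```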
And let's see how we can create that overlay layer 7 network using Red Hat Service Interconnect. Jumping directly into the demo, you should be able to see three different terminals here. The green one is for AWS, the blue one is for Azure — which I'll come back to soon — and the orange one is for the RHEL environment, where both the database and the payment processor are deployed as containers. Containers are not necessary for the connectivity, but for this demo I've deployed them that way: you can see the payment processor and the patient portal database are both running under Podman. Now let's go ahead and start creating the connections. Before I do, I'd like to show you the patient portal front end. Once we create the connection to the database, it should show a list of patients and doctors, and eventually we should be able to click on a patient, make some bill payments, and go through the different scenarios of how Service Interconnect can help in the modernization journey of this application. We are not showing how to break the monolith into microservices here, because that's not the scope — we're focusing on the connectivity and some of the other modernization challenges that come up.

OK, so in this OpenShift cluster on AWS, let me first check that I'm on the AWS project — yes, that confirms it. I'm going to initialize the Service Interconnect router using the Skupper CLI. I'm copy-pasting some commands in the interest of time, and so we don't make any silly errors and end up debugging them, so bear with me there. It'll take a few seconds to initialize the Skupper router. Then we also initialize the Skupper router on the RHEL machine. There you go — it sometimes takes a couple of minutes due to connectivity, but let's give it a second or two. Yes, Skupper is installed for the lab user on the RHEL machine. Now let's go ahead and create the token. As Hugo mentioned in his slides, we have to create a token and exchange it in order to create the connectivity between the routers. So I'm creating the token — that again takes a few seconds. Once we have it, I'll cat the token to display it, create a file directly on the RHEL VM, and copy the token across — that's our transfer mechanism for this demo. Let me copy the token again... and that should be it. Hopefully we didn't make any errors while copying. Now that we have the token in both environments, let's jump into our RHEL machine and create the link. The site is configured, and the Skupper link is created, as you can see. Then I can check the Skupper status here. Just a second — I think there's some network issue at my home, but this should be pretty instant. Let's check if the connectivity is established. Again, forgive me for the connectivity issues on my end.
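For anyone following along, the commands in this first part look roughly like the following. This is a sketch rather than the exact demo script: it assumes the Skupper v1 CLI, with the RHEL side running Skupper's Podman support, whose flags and syntax can differ between versions:

```bash
# On the OpenShift cluster in AWS (green terminal):
skupper init                      # deploy the router in the current project
skupper token create ~/aws.token  # credentials for the other site
cat ~/aws.token                   # display the token so it can be copied over

# On the RHEL VM (orange terminal), using the Podman platform:
export SKUPPER_PLATFORM=podman
skupper init --ingress none       # this site has no public endpoint
# ...paste the token into a file, e.g. ~/aws.token, then:
skupper link create ~/aws.token
skupper link status               # wait until the link reports "connected"
```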
We can check the Skupper link status here — and yes, the link status shows AWS2VM is connected. That's how we know we've established the connection. Once the link is established, let's go ahead and expose the services. As we learned earlier, you have to explicitly expose each service to make it available on this layer 7 network; by default, no services are exposed. So I have exposed both the database and the payment processor. Now, since we've established the connection... oh gosh, there seem to be some connectivity issues again. Let me check in an incognito window in case something's being cached. There's something missing here, so let me check how I exposed the database... Skupper link status says there's no connector to the other sites. OK, let me just tear down the network quickly so we can establish the connections again — there seems to be some glitch in my network or in my configuration. Let me quickly re-create all of this: first initialize the Skupper router in AWS, then initialize the router again here. Good, that's initialized. Let me create the token, and remove the secret we created earlier so we can create a new one — there might have been an issue with me copying the token correctly before, so that's another thing to look out for. I'm going ahead and creating it, then creating the link and exposing — I'm doing all the steps together so we don't waste time. OK, good. Now let's expose our services again and see if that works. Oh, sorry, my bad — I forgot to create the corresponding virtual services in the AWS environment, and that's why we can't see our database and payment processor. When you're connecting from a Podman site on RHEL to your AWS cluster, you have to create the corresponding virtual services on AWS for the connections to show up. Now, hopefully... there you go, the patient list shows up. Let's try to make a bill payment for Angela Martin. Let's go ahead and pay — and if you look at the processor information, it says it has been processed at the data center. It knows the data is coming from the data center, and that the payment processor is located in the data center.

The next thing we'll look at is the high availability and failover use cases. In the context of this patient portal organization — which is building this software for hospitals and other healthcare organizations — they've decided: OK, now that we've modernized our application, how do I make it highly available? How do I deploy my payment processor in some other cloud, in case there's high load or one of the instances fails? First, I'm going to show you a load balancing scenario. As you can see here, I'll show you the Azure cluster, where we have deployed our patient portal payment processor.
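For reference, the expose step on the Podman side, plus the "virtual services" on the cluster side, can look something like this. It's a sketch assuming Skupper v1's Podman support: the host alias, service addresses, and ports are illustrative, and newer releases may sync the services to the cluster automatically without the manual service create:

```bash
# On the RHEL VM: publish the two local containers on the service network
skupper expose host host.containers.internal --address database --port 5432
skupper expose host host.containers.internal --address payment-processor --port 8080

# On the AWS cluster: create the matching service entries ("virtual
# services") so the front end can resolve them by name
skupper service create database 5432
skupper service create payment-processor 8080
```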
And I've logged into the Azure cluster here using the blue tab. I will first create a token on our AWS cluster — basically, we're going to establish this connection that you see here; let me get a pointer. These are the steps we're going to run through. So I'm going to create a token in our AWS cluster; we already have the router created there. Let me double-check that I'm on the right project — yes, it's the Azure project, so we're on the Azure cluster. Let's initialize the Red Hat Service Interconnect router here on Azure. There you go, the router is up. Now let's create the link between AWS and Azure. The link is configured — it's linked to our AWS cluster. And let's expose the payment processor using the skupper expose command. What we're telling the Service Interconnect network is: there's another instance of the payment processor out there, in case high load comes and you want to load balance. So I'm exposing the payment processor we deployed on Azure — as you can see in the image, this is what I'm exposing right now with the command. Let's give it a couple of seconds. Good.

Now what I'll try to do is make, say, a hundred calls to the payment processor and see how the load gets distributed between the Azure payment processor and the RHEL data center payment processor. I'm initializing a terminal on the AWS cluster to make the calls. I'm going to call the payment processor's internal service and store the responses in a file, and then count how many times we got a response from the Azure cluster and how many times from the data center. Let's give the terminal 30 seconds or so to load. As you can see in the slides, we have two replicas of the payment processor in Azure, so the load should get distributed. Still waiting on the terminal — it's usually a minute or so, sometimes a couple; or we could jump into the pod and do it directly if needed. This terminal is testing my patience now... there you go. OK, let me check the OC project. Right, now I'm going to make the calls: a bunch of calls, say 100, to the payment processor API, storing the results in response_result.txt, and then I'll count how many responses came from the data center and how many came from Azure. Give it a few seconds for the calls to process — we're making 100 calls to the payment processing service here. Now let me grep the file to see how many responses came from the data center versus the Azure cluster. As you can see, the load was evenly distributed.
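The test itself is just a shell loop. A sketch, assuming the service prints which site processed the request in its response body; the URL, port, and grep strings here are illustrative:

```bash
# Fire 100 requests at the payment processor through its Skupper address
for i in $(seq 1 100); do
  curl -s http://payment-processor:8080/ >> response_result.txt
done

# Count which site served each response
grep -c "data center" response_result.txt   # responses from the RHEL site
grep -c "Azure" response_result.txt         # responses from the Azure site
```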
49 of the responses came from the data center and 51 came from the Azure cluster, so the load was evenly distributed, right? That's what Service Interconnect does: as soon as it sees a replica of the same service in a different environment that's part of the network, it distributes the load, in case one instance can't take it all. So we've achieved that. But what if the payment processor on the RHEL machine goes down? Will it automatically switch to Azure? It should, ideally. So I'm going to unexpose the payment processor from the RHEL machine. What I mean by unexpose is telling Service Interconnect to take the payment processor out of this network, so payments can only be processed on the Azure cluster. Let me log into my RHEL machine, clear the screen so it's easier to see, and unexpose the payment processor using the skupper unexpose command. Once I do that, it should drop out of the network. And when we try to make a payment — let's log out and make a payment for another patient we haven't paid yet, Jim Halpert's bills — it says it was processed at Azure. That's the point, right? And if you expose the RHEL instance back again, it will say it was processed at the RHEL data center. So the result changes depending on which environment the service is being reached in.

The next scenario is about cloud connectivity resilience. What I mean by that is: we've established direct connections between our OpenShift cluster on AWS and the OpenShift cluster on Azure, and also between our RHEL machine and the Azure cluster. What if a direct connection breaks? For example, the network between two clouds goes down — say the connection between AWS and Azure. What would happen? Red Hat Service Interconnect will look for any indirect connection and route the calls through it. For example, if you see here, the green links are the direct connections. If one of the direct connections goes away, then for this legacy app to reach the bare metal service, the traffic has to come all the way through the indirect connections, because the direct connection has been broken by the network outage. So it not only enables high availability for the services, it also enables high availability of the network itself, in case one of the network links breaks. In our case, we already have a direct connection between AWS and Azure. What we're going to do is cut that connection, establish a connection between the RHEL data center and Azure, and see whether traffic takes the path through the RHEL site to Azure. So we need to do two things: break this link, and create the link between the RHEL data center and Azure Cloud. Let's go ahead. First, I'll delete the connection between AWS and Azure — let me clear this again; the link from AWS to Azure has been removed — and then I'll create a token for establishing the connection between Azure and the VM.
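A sketch of the failover and re-wiring steps, again assuming the v1 CLI. The link name is whatever skupper link status reports — shown here as a hypothetical "link1" — and the unexpose syntax mirrors the earlier Podman-side expose:

```bash
# On the RHEL VM: take its payment processor off the network; traffic
# fails over to the Azure replica
skupper unexpose host host.containers.internal --address payment-processor

# On the cluster that created the AWS <-> Azure link: break it
skupper link status          # note the link name, e.g. "link1"
skupper link delete link1

# Then link the RHEL site to Azure so only an indirect path remains:
#   on Azure:  skupper token create ~/azure.token
#   on RHEL:   skupper link create ~/azure.token
```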
The token is created; let's cat it so we can copy it over to the RHEL machine. I'm copying the token here, pasting it, and we'll use it to establish the connectivity. Now let's create the link between our RHEL VM and Azure. Fantastic. So what we've done is remove this connection and establish this one. Now we have to check whether, instead of looking for a direct connection between AWS and Azure, the front end can realize that the link is broken and take the alternative path. We can test that in two ways. One, we can run skupper status on AWS, and if you can read it: Skupper is enabled for the AWS namespace, and it is connected to two other sites, one indirectly. So it already knows it has an indirect connection to Azure. And two, we can confirm it by making a payment — let's pick a patient, Kevin Malone in our case, and make the payment. You see: payment processed at Azure. If there were no path at all, we would not have been able to make the payment; it says processed at Azure because Service Interconnect realized there is an alternative path and took it to establish the connection.

That brings us to the end of the demo, so let me summarize the capabilities of Service Interconnect we used here. It's application-focused connectivity: you can virtually connect to services on any platform and make TCP or HTTP calls as if they were local, without complex VPNs. The connections between the routers — which are the key to the connectivity — are encrypted using mutual TLS. It abstracts the application layer and makes your networks portable: it is agnostic of your environments and IP versions, and it enables portability of your applications by abstracting away the complexities of the underlying networks. And, as I mentioned, it uses layer 7 addressing: instead of routing IP packets, it routes on application addresses. So even if you move a service from one environment to another and its IP address changes, as long as the application address stays the same, Service Interconnect can still find the service. That's how it was able to realize that the payment processor in Azure was also available, right? It also has a console — which for some reason I haven't initialized properly today; let me check if it's available... yeah, there are some issues with my console today, it wasn't working in another session either. But it has a console where you can visualize all these connections: as soon as you create a Skupper link, all the sites appear immediately. That's how you can see all of this at a glance. And it has an OpenShift operator, so if you're deploying on OpenShift you can automate all of this and create sites with simple resource definitions. Finally, these are the key takeaways from the session: we've looked at common modernization patterns — lift and shift, refactor, augment — the steps in modernizing an application, and Service Interconnect's role in modernization, right?
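On that declarative, operator-based path: in Skupper v1, the operator watches for a ConfigMap named skupper-site and creates the site from it. A minimal sketch — the site name and option values are illustrative, and newer releases move toward full CRDs:

```bash
# Declaratively create a site (and enable the console) via the operator
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site
data:
  name: aws-site          # illustrative site name
  console: "true"         # enable the web console
  flow-collector: "true"  # collect the traffic data the console displays
EOF
```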
It solves issues such as connectivity in the hybrid cloud, load balancing and failover, portability, and cloud connectivity resilience.