Hello everyone, welcome to this session. Today we're going to be talking about how to easily connect applications across clouds with Service Interconnect. And here with you is Bamsi. Bamsi is a Principal Technical Marketing Engineer at Red Hat. So Bamsi, do you want to introduce yourself? Hi, I'm Bamsi, doing technical marketing at Red Hat, focusing on application foundations and Service Interconnect. That's perfect. And I'm also part of the developer advocate team in the Application Services BU. So we want to welcome you to this session. We hope you're enjoying the rest of the sessions, you enjoyed the keynote, and perhaps you're participating in the labs. So stay with us, there's going to be a lot of content still during the day. Let's get started with this session. OK, we're going to go over some slides at the beginning and then we want to show you a demo. Remember that you can ask questions in the Q&A area at the top right side of your window, or you can just write something in the chat; we are going to monitor that. So when we talk about the challenges of application connectivity, and perhaps you heard Mark earlier talking about some of these connectivity patterns, it's that every single day we are adding more and more components and more and more sites to our architecture. We are building distributed applications, moving away from the traditional single mainframe, single data center, into multiple applications. So we are relying on different environments across which our applications are distributed. And those environments are very diverse: they're not homogeneous, and they are very different from each other. Perhaps you're still using legacy systems, old Unix systems, mainframes, and such. Or perhaps you're already using some virtualization, VMware perhaps, but every single VM that you're using is different on its own.
Or you have already started using some Kubernetes offering, which can differ between the providers giving you access to those. Or, if you decide to standardize on top of Red Hat OpenShift, it could be that you're handling multiple versions of OpenShift and multiple clusters of OpenShift. Perhaps you started with OpenShift 3 and now you're on OpenShift 4, perhaps moving into a managed service like ARO or ROSA, our offerings on top of Azure and AWS. So this brings us to the challenge of the hybrid cloud and distributed applications across different clouds. At the beginning of this cloud journey we thought that perhaps we would just need one single cloud to fulfill all our requirements for distributed applications. However, what we realized in the end was that that worked for certain kinds of businesses that were able to build cloud-native applications. But for more traditional organizations and industries, and for those that require heavy compliance with regulations and laws, we realized that going to one single provider was perhaps not the solution we were looking for, because we have different challenges, right? Starting with compliance and security: we cannot take data out of the country or out of the region, and we need to keep certain information within a specific region. Perhaps there isn't even a region of the chosen provider in our country, so we cannot comply with that. Or the data that is generated and stored in the specific cloud we have chosen just keeps growing and growing, and moving that data is not easy. That's data gravity. But you also want to be looking for the best solution in each area. You know that a certain cloud provider has great AI APIs, so you want to use them; another has a very good storage system and storage APIs and services, so you want to get the best of that.
So in the end, you end up with this connectivity challenge, because you will need to handle all those different environments: different clouds and your own data center. And the problem is that not all of them are public, and not all of them are going to be totally open for you to connect to. That's where you have two main challenges. The first one, on the left, is hybrid cloud connectivity. You already have your own data center, your own private cloud, and you want to connect to the public cloud, where you get more services, scalability, and so on. Most of the time you won't be opening ports on your private cloud or adding firewall rules to allow people to enter. So how do you solve that challenge? The other one is when you need to connect sites like branch offices or locations spread across multiple regions and physical locations, and you want some way to connect from one edge site to another, for example to check inventory from one of your stores across all the stores in the region, perhaps to offer a better experience for your customers. This is the kind of edge-to-edge connectivity we are looking for. And to solve these challenges there are multiple options, and the one we're going to present today is just one of them. We think it's a simpler and better approach for some users, because of the things that Bamsi and I are going to show later, but there are options and choices. You can use public IP networks: just open firewalls, poke holes, and expose certain services directly, using things like NAT services, VIPs, and such. Or you can create your own VPN network, right? You can set up software to extend your different networks into different sites and maintain the routing tables between the different network segments.
Or you can go with a cloud provider and use things like a VPC for network isolation. The problem is that this is mostly dependent on the cloud provider: what if I want to mix providers? How do I merge and attach to those networks when I'm using a different cloud vendor? And there's obviously the cost related to that. The approach we propose is based on an overlay network, or service network, which is where the solution that Red Hat offers, called Service Interconnect, focuses. So let's get into the details of the first challenge: working in the hybrid cloud. Say I'm part of the IT team that manages the old data center, the bastion, the firewall, where everything's running. And we have a simple application here. It's a simplification, but it could be a more complex system, where I have a database, say a DB2 database or an Oracle database, or perhaps even a RAG system, running in my own data center, which most of the time I won't be taking to the cloud because of the data gravity we mentioned. But I also have a UI that allows me to handle my patients' information, and then I have a payment processor for patients to be able to pay their bills. As you can imagine, we want to move our UI, which does not carry PII compliance challenges or things like that, into the cloud. We're going to use a cloud provider, call it public cloud number one, or AWS, and move the UI over there. However, as you can imagine, if my UI was previously accessing the database directly using a JDBC connection over a TCP socket, along with my payment processor API, then if I want to expose those services externally I will need to implement something like an API gateway to control access to my services: a single point of entry, security, some policies, and such.
But an API gateway is mainly focused on HTTP, so I realize that if I want to access the database, I will need something like an intermediate service that translates REST calls to database calls and back, so I'm able to query the information there. That means a lot of extra components, a lot of development that I will need to write, test, and deploy to fulfill this challenge. But that's not what I want. What I really want is to continue using my application as-is: my UI connecting directly, using the TCP socket and the JDBC connection, to my database back in my data center, just like at the beginning, when everything was in the data center and everything was local. So how can Service Interconnect help us do that? Well, this is what we're going to see. What we want to do first is get connectivity between those two sites: my data center and my public cloud one. But as you can imagine, we don't want to open ports on my data center. So what I will do is establish a connection from my more secure data center out to my public cloud, where I can expose (or already have) load balancers and the API gateway, because they're going to be part of my UI access infrastructure. I will establish a connection using Service Interconnect to first connect my clouds, and then I will be able to expose the services. We can do this in four easy steps. Step number one: we just need to have our clouds, and then initialize using the skupper CLI, which is basically the Service Interconnect CLI, on both sites. What that does is start the data plane and the control plane on each of the sites and deploy our routers. The routers are the ones that will do the proxying between my connections.
The second step is to create a token. The token is basically the credentials that my private cluster will use to connect to my public cluster. We're going to share those credentials through this token, which in the end just sets up a mutual TLS connection where the certificates of the connection are validated. In the third step, we establish the link, and that link will use those credentials to connect both sites in a bidirectional channel, but using just one single outbound connection. The next step is to expose my services to the network, so I'm able to connect to this overlay network, the service network, and have services exposed and shared across it. I can expose service A in the public cluster as well as service B in the private cluster, and both of them will be available for anyone on the network to consume. And finally, we want to show you how to do this, and for that we have Bamsi, who has already set up this environment and will be sharing it with you. So, Bamsi. Thanks, thanks Hugo. Can you see my screen when I switch between the terminals? Yeah, I can see it, so all good. OK, so the setup here is: this is our AWS cloud, where we have deployed OpenShift and we have deployed our frontend. If I do `oc get pods`, you can see that we have deployed the frontend here, and this is how our patient frontend portal currently looks. Currently it's empty, you can't see any patient values or doctor values, and the goal is for us to go ahead and make a connection between the data center and the AWS cloud so that the UI can access both the database and the payment processor. So this is what we want to do.
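The first three steps above can be sketched with the skupper v1 CLI. This is a sketch, not the exact commands from the demo: the kubeconfig context names (`aws`, `dc`), the token file path, and the link name are assumptions for illustration.

```shell
# Step 1: initialize Skupper (control plane + data-plane routers) on each site.
# First on the public AWS cluster...
skupper init --context aws

# ...and then on the private data-center site.
skupper init --context dc

# Step 2: on the *public* site, create a token carrying the mTLS credentials.
skupper token create ~/aws.token --context aws

# Step 3: from the *private* site, use the token to link outbound to the
# public site. The connection is initiated from the data center, so no
# inbound ports need to be opened there.
skupper link create ~/aws.token --name dc-to-aws --context dc
```

The direction matters: the token is minted where inbound connections are acceptable (the public cloud), and the link is created from the locked-down site, which is how the demo avoids opening any firewall holes in the data center.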
Though they are residing in different clouds and different environments, we want to make it seem like they're all part of a single domain, not different networks or different clouds. Let's see how Service Interconnect enables us to do that, right? So this is our AWS cloud. As Hugo mentioned, these are the steps I'm going to do. First, let's initialize Service Interconnect using the skupper CLI on one side. I'm enabling it on our AWS site; this will set up the routers in AWS, what we like to call cloud one. One thing I forgot to mention: I have different color coding for the different environments. The green color is the AWS cloud, the blue color is the Azure cloud, which we'll talk about later, and the orange is the data center. So when I'm switching between the different CLIs, you'll know which environment I'm applying these commands to. So I've initialized Skupper on AWS, and now I'm going back to my data center to initialize the Service Interconnect routers there. Good. Another thing to remember: I'm already running both my containers, the database and the payment processor, in my data center, as the image shows. So basically we've just moved the UI to the AWS cloud, and now we're trying to establish the connectivity. So let's initialize Skupper. Now what next? As you've seen in the diagrams that Hugo showed you, we now have to create a token in the AWS cloud and send it to the data center to create this mutual-TLS-based connection between both sites. So let me go ahead and create the token here. I'm creating the token, give it a minute. Let me see... okay, I have the token here. This is the token, as you can see; it's very important for establishing the mutual TLS.
I will go ahead and copy that token into my data center, where both my database and payment processor are running. Let me copy it. There are better ways to transfer the token, but I'm going to take the simpler route here. I'm going to save the token. Now, using this token, I'm going to establish a link between both sites by issuing the skupper link command, and I'm also giving a name to this link between the two sites. I'm going to go ahead and create that link. And it says the site is configured to link, and it shows the Kubernetes cluster that it has established the connection with. So now we've established the connection between both these sites, but my frontend doesn't know what services are exposed. We have to explicitly tell the network which services we're exposing, so that only those become available for the frontend to consume. So let's go ahead and do that. I'm going to do the skupper expose for both the database and the payment processor, for them to be available on the network. Let me go ahead and hit... okay, there's something wrong, as usually happens with demos, but let me try this again. "Service database already defined": okay, I typed host twice, there was a typo in the command. Let me go ahead and expose my payment processor. That should work, yeah. So what I've essentially done is tell the Skupper network in my data center to expose both the payment processor and the database to the network. But at the same time, we have to create services with the same names that map to both of them in my AWS cloud too. The router will call this the local service, and the local service in turn will route to the services it finds on the network. So we'll create the services with the same names.
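The expose step Bamsi describes looks roughly like this. The resource names and ports are assumptions based on the demo's Postgres database and payment API, and note that with skupper v1's service sync, exposed services usually appear on linked sites automatically; the manual `skupper service create` shown here mirrors what the demo does explicitly.

```shell
# On the data-center site: publish the local workloads onto the service network.
skupper expose deployment/database --port 5432
skupper expose deployment/payment-processor --port 8080

# On the AWS site: create service entries with the *same names*, so the
# frontend resolves them through ordinary Kubernetes DNS. The local Skupper
# router then forwards traffic to wherever the service lives on the network.
skupper service create database 5432
skupper service create payment-processor 8080
```

From the frontend's point of view, `database:5432` is just another in-cluster service; the JDBC/TCP traffic is carried over the mTLS link to the data center.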
And as soon as you're done with that, if everything works as we expect, when I refresh you should see the patient values coming in. So essentially what we have done here is establish this connectivity and expose both of these over the network, and the UI is actually accessing both the database and the payment processor as if they're on the same network. And we're not building any REST-to-DB conversion and all that complex stuff, right? It all behaves like a single network, even though you've moved across two different environments. Let's also go check if the payment is working, because we exposed two services here. That should work. Yeah, I've been having a little bit of caching issues with my browser, so let me just spin down this pod and bring it back up. Here's the pod, give it a minute while it loads up. Let's go back and try to access our patient here. Wills... no Wills. No more Wills. Yeah, see, so what is technically happening is, once I go ahead and make the payment, it calls the payment service, and the way we know the payment succeeded is that the date paid and the processor ID show up. If it's only a number, that means it's coming from the payment processor in the data center. In the later demos, we'll move the payment processor to another cloud and see how we can create that connection and what the significance of doing that is. So what we've done here is connect our services across different environments: a RHEL data center, different Kubernetes clusters, and also legacy systems. Basically, that's what Service Interconnect provides: it helps you link different applications and services across different environments in three to four simple steps, by exchanging tokens and exposing services. Now let's talk about portability across clouds.
As you know, we've deployed the payment processor here. Let me just be mindful of time. Yeah, good. So we've established the payment processor here, and at the same time we want to move the payment processor to another cloud, because of some kind of regulatory issues, for example, right? Let's see how we can do that without losing connectivity or having any downtime. Currently, the payment processor gives us a number, which is the ID of the processor; that means it is coming from the data center. But if our UI uses the payment processor that is in the Azure cloud, you should ideally get a message that the payment happened at Azure, or something like that, right? So let's see how to do that. First, I will try to establish a link between my AWS cloud and the Azure cloud. Again, we have to go ahead and create a token, as we usually do. This is the token I'll use to establish the connection between AWS and my Azure cloud. Let's go to our Azure cloud, the blue color terminal. We've already deployed our payment processor here, as you can see. So what I'll do now is initialize Skupper here; I'll just wait a minute while Skupper initializes. And now I will use that secret that I created on the AWS cluster to create the link between Azure and AWS. So what I'm technically doing, let me pick a pointer here, is establishing this link here, right? Let's see how that goes. I'm going to create the link... okay, there's a naming restriction, but that's fine: link names should not have any capitals, so no capitalized "AWS to Azure". This should be okay now.
There you go. So we've established this link that we've been talking about, but we also need to expose the payment processor that is in the Azure cloud over the Skupper network. How do we do that? We just issue the skupper expose command, and yeah, it's been exposed. So now what next? Currently this is our state: we've established connections between the Azure cloud, the AWS cloud, and the RHEL data center. Now let's take out the payment processor in our data center and see if traffic falls back to the Azure cloud. This way, when you have two different instances of the same payment processor and one of them goes down, or when you want to migrate to the cloud, in cases of high availability or failover, the Azure cloud will take over, because it is part of the same network and has the same service name. So let's see how we do that. First I will go ahead and unexpose, that is, remove, the payment processor from the network in the data center, using the skupper unexpose command. Let's do that. It looks good, okay. So what we've done is remove the payment processor from here, and let's see if that works, right? Let's try another patient. Because we no longer have the payment processor in the data center, the processor field should show that the payment went through the Azure cluster, because that's where the other instance of the payment processor is located. So there you go. See, before it was showing a number, because that number was coming from the data center. Now, since we've killed the payment processor in the data center and only kept the one in Azure, emulating a scenario where, say, the payment processor in the data center goes down, would it default to Azure? That's what's happening here, you see.
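The failover Bamsi demonstrates reduces to a single command on the data-center side. As before, the deployment name and port are assumptions mirroring the earlier expose step.

```shell
# On the Azure site (done just before this step): expose the second instance
# of the payment processor under the *same* service name.
skupper expose deployment/payment-processor --port 8080

# On the data-center site: withdraw its instance from the service network.
skupper unexpose deployment/payment-processor

# The service name "payment-processor" remains on the network; new calls from
# the UI are routed to the remaining instance, the one in Azure.
```

Nothing changes in the frontend: it keeps calling the same service name, and the Skupper routers redirect traffic to whichever sites still back that name.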
So now you see one Angela Martin, who's a patient: her payment was processed in the data center, and the number shows that it's coming from the data center. But as soon as the payment processor in the data center went down, the payment service in Azure took over, and the other patient, Dwight, could use the Azure cluster without knowing it; the UI just called the payment processor at Azure to process the payment. So that's another use case where Skupper is super helpful, providing high availability and support for failovers. Now, the next use case we'll see, and I'm just being mindful of time here, yeah, I think we have more time, is cloud connectivity resilience. What that means is that Skupper also enables indirect connections. What I mean by that is: if you don't have a direct connection between two sites, but you do have an indirect path, then if the direct connection goes down, the Skupper network finds a way to reach the service through the indirect paths. To explain that in our example: if the direct connection between the Azure and AWS clouds goes down, where the payment processor is sitting, what Skupper does is say, okay, the AWS cloud is connected to the RHEL data center, and the RHEL data center is connected to the Azure cloud. So even though there is no direct connection, since all of these are part of the same Skupper network, Skupper will route the request through this indirect path. So let's see how that happens, right?
Before that, let's establish a connection between the data center and the Azure cloud, which we haven't done yet, right? Let's go back to our terminal. First, let's create a token in our Azure cloud; this is the Azure cloud, the blue color one. Okay, this is the token. Let's go to our data center and copy the token, and let's create the link. So now what we are essentially doing is creating a link between the Azure cloud and our data center. That's the first step. Yes, we've created that. So we've established this link here, and now what I'll do is take down the direct link between AWS and Azure. By default, Skupper will find the alternative route and send traffic through the indirect path. So let's take down the link between AWS and Azure. Let's go to our Azure cluster, and let me just clear these commands so that you see better. Let's first do skupper link status; I'm doing this to get the name of the link. Okay, so it's aws-to-azure, and we see the link aws-to-azure is still connected. I'm going to go ahead and delete that link: skupper link delete aws-to-azure, right? It has been removed; when you do skupper link status, you don't find it anymore. So what I've done here is take down the direct link, but there is an indirect path from AWS to the data center and from the data center to the Azure cloud. So even though these are not directly connected, the payment should still go through. Let's see if our payment works. Let's try another patient, right? Let's say Kevin, submit payment, and the payment processed through Azure.
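The link teardown in this step can be sketched as follows; the link name `aws-to-azure` is taken from the demo, and the surrounding topology (a second link via the data center) is assumed to already exist.

```shell
# On the Azure site: list the links to find the name of the one to remove.
skupper link status

# Remove the direct AWS <-> Azure link.
skupper link delete aws-to-azure

# Confirm it is gone.
skupper link status

# Requests from the UI on AWS now reach the Azure payment processor over the
# remaining indirect path: AWS -> RHEL data center -> Azure.
```

The rerouting is automatic: the routers flood reachability information across the network, so any surviving path to a service name is used without reconfiguring the application.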
So what we are essentially seeing here is that even though there is no direct connection, Skupper establishes the connection by finding indirect routes, so that it supports high availability not just for the services, like we've seen in the previous case, but for the connections themselves. So we've seen three different use cases here, just to sum up. The first is basic connectivity: establishing the connections and making it feel like everything is part of the same network. The second use case is having the payment processor at two different sites and providing high availability that way: if the payment processor goes down in one site, say the data center, the payment processor in the other site will take over. And the last use case is high availability for the connections themselves, which means that if one connection goes down, Service Interconnect finds alternative paths through indirect connections and makes sure that the calls from the UI reach the payment processor, as long as there is some means of indirect connectivity. So with that, I think that's the end of the demo. I'll pass it on to Hugo for the next set of slides. Thank you, Bamsi. That was a great, great demo, where we have seen a couple of scenarios that Service Interconnect can help you with. An interesting point that Bamsi was making is that you get two different kinds of resilience. The first is resilience between the services: the payment processor can be in two different clouds, and your UI can connect to either one of those, doing load balancing, perhaps a round-robin kind of access, or based on which service is closer. So you can manage things like cost, and see which one has less latency and process through that one first.
But if that service is under too much stress, you can overflow and move to the other one. And if one fails, you ultimately get rerouted to the other one. The second scenario for high availability is between the links, your cloud connectivity. Say you lose the load balancer, you lose the connection between those sites; you can still get access through your data center. Because you still have established MPLS networks between your data center and those clouds, you can still reroute through those using Service Interconnect. So that's pretty interesting. Now, to recap Service Interconnect: this is application-focused connectivity, which means layer-7 addressing. We just create links and name services using addresses that become available for service discovery through Kubernetes DNS, as well as on the Podman site we showed running in the data center. That means we create this abstraction layer on top of your current topologies and your actual cloud networking, and you as a developer or architect can design your applications thinking in terms of a single application domain. It doesn't matter where your applications are living, because one of the things we see when people talk about multicloud is that they just have the same services duplicated in different clouds and regions. But when we think about shared applications, each service lives in the place where it belongs, and your applications are still able to consume and access those services. And finally, one of the remarks we made at the beginning, but want to highlight: every single connection works over mutual TLS. So you have a secure service network between the connections, between the sites, between the clouds.
But your services and your applications keep using the standard protocols they were using before. One of the things we didn't show is that we are using a traditional Postgres database for this demo, exposing just the Postgres protocol on port 5432, without TLS. And we are spreading our application, the UI into one cloud and the payment processor into another, but we are not adding TLS to those services. Those services require no changes, no additional sidecars, nothing extra to get this kind of connectivity. The other thing we mentioned is that we were using user-scoped access to these clusters. We don't need to be admins to create this configuration. As Bamsi was mentioning, in the cloud where he was deploying, he was just a regular OpenShift user, not a cluster admin, when creating this networking. Now, the magic behind it, as we can see in the next slide, is that we are using open source projects that are very well established in the community, like the Apache Qpid community, where the Qpid Dispatch Router allows us to create these connections and establish these messaging patterns across wide-area networks, so we can send information across these links. It's a very well-established project, going since 2014, with plenty of activity. And then we have the newer project called Skupper, the upstream community for Service Interconnect, which provides the controller for these routers, the control plane, and helps us configure the Qpid Dispatch Router to establish these links, as well as provide protocol translation between AMQP and HTTP as well as TCP. It's a more recent project, still very active, and it has been cooking for a couple of years and is available now as Service Interconnect.
You can see there's plenty of activity going on here, with regular releases on the Service Interconnect side and the router side, as well as the control plane. As you can see, we can run this configuration with Docker or Podman on a local environment, perhaps a VM or a Linux machine, and you can run it on OpenShift. This is also going to be supported on other Kubernetes distributions, if you want to start moving out services that are perhaps a bit of a burden for the cloud services. And as we mentioned at the end, the benefits: no code changes, so it's the same application that was running in the data center, now spread across the cloud. No network changes, so the only requirement is basic connectivity at the underlying layers, to be able to expose this at layer 7. And no cluster admin requirements, so pretty good. Some final capabilities of Service Interconnect: one is the console. We didn't show it, but the console gives you a better view of your network, the different services, the different sites that are connected, and the traffic flowing through your service network. You can see how much traffic each service handles and which sites are receiving it: very good information for network admins or, in general, for service network supervisors. And finally, we mostly used the CLI commands, but there's also a Red Hat operator for Service Interconnect that allows a more GitOps approach, using config maps (rather than custom resources) for the operator to create and configure sites, establish the connections, and so on. So there's plenty of stuff here to see and follow up on; there are very good scenarios you can try yourself. So finally, if you want to reproduce this demo, let's go one slide back.
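For the GitOps-style approach Hugo mentions, a site can be declared with a ConfigMap named `skupper-site` that the operator reconciles into a running router configuration. This is a minimal sketch; the namespace and the data values are illustrative assumptions, not taken from the demo.

```shell
# Declare a Skupper site declaratively instead of running `skupper init`.
# The operator watches for a ConfigMap named "skupper-site" in the namespace.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: skupper-site
  namespace: my-app        # hypothetical namespace
data:
  name: aws-site           # site name shown in the console
  console: "true"          # enable the Service Interconnect console
  flow-collector: "true"   # collect traffic metrics for the console
EOF
```

Because the site is just a ConfigMap, it can live in a Git repository and be applied by your usual GitOps tooling, which is the point of the operator-based workflow.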
We invite you to use the Developer Sandbox at developers.redhat.com, where you can get access to a free OpenShift cluster, well, not a cluster, but a namespace, where you can try to reproduce these kinds of demos. Bamsi has a very good example that you can follow: follow some instructions, run Docker or a Podman site on your own laptop, deploy that on your machine, and then connect and see how you can expose your local laptop services into the cloud using the skupper gateway and so on. And if you just want to play with OpenShift, the Developer Sandbox is a great way to go. Bamsi, I think we are at the end of the session, so if you want to share some closing remarks, and then we can open for questions. If there are no questions, I'd like to add one more final part: we haven't talked about the ephemeral nature of Skupper networks. If the network you created with Skupper is not needed anymore, or it was created by somebody who doesn't need the access anymore, all you have to do is run skupper delete, and it will tear down all the configuration you've done, and all the accesses are revoked. So that's something I just thought I'd show: you go to each of the different sites, do skupper delete, and all the configuration is torn down, there's nothing left over, and you're all good. So yeah, that's just one more thing I wanted to add. If there are any questions, we are open for that; if not, I hope you all enjoyed the session. Yes, again, thank you very much for staying with us for this session. There are going to be more sessions still, so get out there, see the different stages and the different sessions, join the labs, and we hope you continue to enjoy the rest of the event. So thank you everyone, and see you later.