Hi! Hello and welcome to this session. Today we will be talking about how to do interconnectivity between applications across multiple clouds, multiple clusters, and even going all the way to the edge. First, a little bit about myself. My name is Hugo. I'm Mexican, currently based in Massachusetts in the United States, and I am an open source advocate at Red Hat. I also consider myself a history, travel, and food enthusiast, and I'll leave you my Twitter handle here if you want to follow up on the conversation beyond this session. I will be in the chat answering some of the questions, but you can also contact me outside of this conference. Today we will be covering three main topics. First, we will talk about the challenges of creating architectures where our applications span multiple clusters and even reach out to the edge. The second topic is how we do this interconnectivity of applications, and why running things as if they were local can be useful when designing your architecture. For that, we'll cover the concept of virtual application networks, or VANs. And finally, we'll introduce a project for creating virtual application networks to interconnect these applications. We will do a quick overview of the project and how it relates to Kubernetes and the edge. And finally, as I mentioned, we will see a demo of this kind of implementation. So, let's get started. I do like this picture because it really reflects the story and the transition of how I start to feel when suddenly we are able to create our applications, run them first locally, and then deploy them into one cluster. Most of the time we just package our applications from our CI/CD pipelines, and that application is now available in our cluster.
And it really feels like in the Gremlins movies, when you get Gizmo for the first time: this little cutie that you want to hang around with and be friends with. Most of the time this is because you feel comfortable, and that's because the single-cluster or single-cloud type of deployment allows us to keep our applications under control from the connectivity perspective. If I have an application, for example a UI or another service that needs to consume from my API, and that API needs to connect to a database, it is very easy to route our connections through the same space, because most of the time our cluster or our cloud shares a single networking configuration. So most of the time we will have segments, or even IP addresses and hostnames, that are easily reachable within our cluster. Basically, we have everything within reach. However, things change when we need to have this application replicated across clusters running in different regions; or when we need to leverage the services of one hyperscaler instead of another, with certain components of our application deployed in one cloud provider and the rest in another cloud provider, or even in our own private data center, because of government regulations or because data gravity pulls our information into certain places; or when we suddenly need to cover edge-related use cases, running a cluster in a branch location or even in a disconnected environment. Suddenly, this beautiful thing that you had in the past becomes very, very complex and scary. It's as if your Gizmo got fed after midnight and became this monster that just creates chaos and makes your life miserable.
So, this is how I feel when I suddenly realize that my application has to cover all these scenarios. And this is because the real world has a lot of brownfield applications. You might think of going to the cloud, using things like AWS Lambda, and suddenly everything is cloud native and super easy to deploy. But that's not true for most organizations. They have a mix of environments that covers not only traditional deployments on virtual machines and bare-metal servers, with legacy systems, all kinds of unique systems, even mainframes and Tandems that have never stopped running in the last ten years. We also see organizations embracing things like Kubernetes, deploying multiple versions, multiple flavors, and even multiple distributions of Kubernetes, right? Because it's like Linux: there's not one single flavor or one single provider. There are several different vendors, and Red Hat, for example, offers an enterprise-grade one through OpenShift. And when we talk about connecting to other systems, other applications, and other components, we certainly face the different, complex network topologies that have been created over time. You will have firewalls, a mix of IP protocols, VPNs, VPCs, and also some conflicting networks, because the IP address space was limited at the time, so you will need to work around those kinds of limitations. And that becomes a connectivity challenge. Two of the most common challenges that we see in the field are these. One is connecting from the public cloud into the private cloud: as I mentioned, we have data gravity sitting on top of our old legacy database or the mainframe, and suddenly we need to scale.
And for that, we make use of services from the cloud providers, and we need to retrieve that information in a safe way and be able to connect both worlds. The other one is edge-to-edge connectivity. We know that we need to connect two edge devices, but the networking between them is not reliable enough, and we want to use the public cloud as a way to connect these edge devices while keeping the security and the control that we expect from a point-to-point connection. And for that, we have a possible solution that can help us. One of the things we have been doing for a long time is the concept of a VPN or VPC, where we are able to use the same range or segment of the network for the services running on those different clouds, right? This is the most common approach that we can find. In recent years we have also seen the rise of gateways, because one of the challenges of the VPN is that when you start connecting your segments and your networks, you need to keep building up firewall rules to protect things and be sure that only certain IP ranges are reachable from the public cloud. The gateway tries to fix that by simplifying the points exposed in your network so you can protect them more easily, but you still need to know exactly where you're connecting: the protocol, the IP, the routing, and so on. So you still depend on external connectivity components. In the case of Kubernetes, one of the solutions some projects propose is to have a flattened network across the different clouds, where you need an additional third party serving as a control plane that manages and articulates the networking and configuration of your clusters on both sides of the network.
So, projects like Submariner take that approach. And this works when you set up everything from the beginning, but in the end it gets too complicated: you have to keep checking that there are no collisions, you need to handle the global CIDRs, and so on. So we will try to avoid going all the way down to the networking level to solve these kinds of things. And that's the point where things like virtual application networks get really interesting. Looking at this picture, this is how I visualize how a virtual application network works. If you have seen this movie, you will remember that the Jedi Council has all these Jedi Masters present in the meeting, but some of them are not really there; it's only their holograms sitting there. At the same time, they are able to communicate, listen, and see what is going on in the room. So they are virtually there: people can interact with them, but in reality they are away from that place. That's what a virtual application network looks like. In this case, the virtual application network is an abstraction layer between the networking layer and the application services and processes. The idea is that your applications don't need to be modified. It is something that needs to work with different protocols: HTTP, gRPC, messaging, and general TCP connectivity, for example JDBC and databases. The virtual application network needs to be able to handle this kind of connectivity, because other solutions focus more on the specific networking layer or just on HTTP. The idea is to have something flexible enough. And obviously, it needs to play well with Kubernetes, with container runtimes and engines like Docker, and even with native deployments on top of edge devices.
And the most important thing about the virtual application network is that it gives us an application connectivity topology that is independent from the network implementation underneath. We can have different network segments and different network administrators, but from the application perspective everything looks like one thing. And this is where Skupper becomes handy, because Skupper is a project that implements this concept of the virtual application network. As they describe it themselves, Skupper provides secure connectivity at the workload or application level between different clouds, different sites, and even different networks, instead of at the infrastructure level: they focus just on the application or workload level. You can check the details of the project at skupper.io or follow them on Twitter. And Skupper, as I mentioned, is an implementation of a virtual application network. It runs as a deployment on top of Kubernetes, vanilla Kubernetes, OpenShift, and other providers, as we will see in the demo, and it can also run on a local container runtime, for example on your laptop. It is built on top of the Apache Qpid Dispatch router. This is a network router that uses AMQP as a transport protocol, so it uses a session-level protocol to implement the transport for TCP, for example. It's a very mature project that has been out there for more than six years as part of the Apache Foundation. The benefit of these Skupper components is that they are very lightweight and fast. Their small footprint also makes them very appealing for direct deployment on low-resource devices, and they can be deployed independently, without having to be used as sidecars. So, as we have been saying, there are two main parts.
We have the integrated router coming from the Apache Qpid project as the mature data plane that implements this kind of connectivity. And on the other side we have the Skupper controllers and operators, which form the control plane. That's the newer addition to the project, and it allows us to leverage all the benefits of the router and to automate the configuration and deployment details on Kubernetes, for example. It gives you, the owner of the application, control and agility over how to create your networking topology. And this is very useful in heterogeneous environments and indirect topologies, when we need to route a connection through a network but we don't want that network to access or be able to reach those services, as in the edge-to-edge case. We also mentioned running on top of Kubernetes or OpenShift. There, something like a service mesh is super useful when you're working on a single OpenShift cluster, and even across different Kubernetes distributions you might be able to use something like Submariner or the same service mesh. But when we go outside the Kubernetes space and suddenly need to incorporate bare metal or virtual machines into our topology, and expand our services across these disparate components, that's where a virtual application network is easier to configure and easier to set up. So now that we have talked about what Skupper is, let's see a demo of how it works, and I hope this makes it easier for you to take a look at Skupper. Well, now it's time to see a virtual application network in action. In this case, I have an environment including three clouds.
One is a private cloud running on my laptop, so it has no external exposure and nobody can actually connect to my local environment. Then we have an AKS cluster running on Azure. And finally, we have an OpenShift cluster running on AWS. In this case, we will start with the local machine, creating a namespace called local. So we create the namespace local, check first that there are no pods running, and initialize this namespace using the Skupper CLI in cluster-local mode. Now it's time to go to Azure. Let's create a namespace here called azure, and then initialize it using the console with unsecured authentication, so it can provide a load balancer IP for external access. Now, with OpenShift, we can create an aws namespace and then initialize it using the Skupper CLI. In this case the CLI is running locally, so we need to run it from the local folder: ./skupper. And now it's initialized too. Okay. Now we can check the pods being deployed, and we can see that there's a Skupper router and a Skupper service controller in my namespace. Let's then check the status. Currently, Skupper here is in cluster-local edge mode, and it's not connected to any other system yet. Let's check the pods running on Azure: again, we have the router. Then let's check the status, and it is available with no connections. Time to check AWS with OpenShift. We can see that the router pods are running, and then we can check the status: it's available and ready to connect. Now, for the connection, we need to get the service information. We can see that the controller has an external IP, so we can take that IP on port 8080, go to our browser, and use that IP address on port 8080 to open the Skupper web console. This is the web console, currently listing the services and some more information about our configuration. Let's now create the connection token for Azure.
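The initialization steps above can be sketched roughly as follows. This is an illustrative fragment assuming the Skupper v0.x CLI used at the time of the talk; exact flag names may differ between versions.

```shell
# Local cluster on the laptop: create the namespace and initialize
# Skupper in cluster-local edge mode (no external exposure).
kubectl create namespace local
kubectl config set-context --current --namespace local
skupper init --cluster-local --edge

# Azure (AKS): initialize with the console enabled, unsecured
# authentication, and a load balancer IP for external access.
kubectl create namespace azure
kubectl config set-context --current --namespace azure
skupper init --console-auth unsecured

# AWS (OpenShift): create the namespace and initialize a third site.
kubectl create namespace aws
kubectl config set-context --current --namespace aws
./skupper init

# On each site: verify the router and service-controller pods,
# then check the connection status.
kubectl get pods
skupper status
```

Each `skupper init` deploys a router and a service controller into the current namespace; `skupper status` reports how many other sites the local site is connected to.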
So this will create a secret with the information required to connect to Skupper: it has the endpoint name and the secret. Let's download this file, called azure-secret, as a YAML file. We can download it from the AKS cluster to our local system, so now it's on my local drive. We can then take a look at the YAML file: let's open azure-secret with vi. And we can see it's a Kubernetes Secret in YAML, with the information on how to connect to this cluster. So let's use this YAML file locally and edit a file here on my local system. In this case, we do vi again with the azure-secret YAML file, paste in the content of the file we just downloaded, insert all the content, and save the file. Then we can make the connection: it's going to be skupper connect, and we just provide the YAML file with all the information. In this case, we'll name the connection local-to-azure so we can identify it easily. We hit enter, and the configuration is ready and connected. We can check the status with the Skupper CLI, and we can see that it's connected to one other site, with no services exposed yet. Let's do the same thing in AWS, in the OpenShift cluster. In this case, we create the azure-secret file, paste the exact same content (the information is the same), save the file, and then run skupper connect with aws-to-azure as the connection name. Now we can check the Skupper status for the OpenShift cluster, and here we can see that this cluster is now connected to two other sites, and one of those sites is indirectly connected through the Azure site. So let's see on the console how we can now look at these three clouds, these three Kubernetes clusters, running in my environment.
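The token-and-connect flow above looks roughly like this. Again, a sketch under the assumption of the v0.x CLI; the file name and connection names are the ones used in the demo.

```shell
# On the Azure site: generate a connection token. This is a
# Kubernetes Secret containing the endpoint and credentials
# needed to join this site's virtual application network.
skupper connection-token azure-secret.yaml

# On the laptop, after copying azure-secret.yaml over:
skupper connect azure-secret.yaml --connection-name local-to-azure
skupper status    # now connected to 1 other site

# On the OpenShift/AWS site, using the same token:
skupper connect azure-secret.yaml --connection-name aws-to-azure
skupper status    # connected to 2 other sites, one indirectly
```

Note that the laptop and the OpenShift cluster never connect to each other directly; both dial out to Azure, and Skupper's router network bridges them indirectly through that site.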
We can see the AWS cluster, the local one, and AWS connected to Azure, and they are all bridged together. So let's get back to the local environment and create a deployment. It's going to be a backend service that just replies "hello world". This is created in two sites: we create it in the local site and also in the OpenShift site. We create the deployment here, and then we have pods in both for high availability: in the local site we have the hello-world-backend pod, and in the OpenShift cluster we also have the hello-world-backend pod running. Now it's time to deploy the frontend. See, there are no pods running on Azure. Let's create the frontend deployment. This is going to be the only pod running on the Azure cloud, so there's only the hello-world-frontend; no other pods running here besides the router. So it's time to check the services. Again, only the Skupper services are available. Now let's start exposing the backend deployment through Skupper, both here and in the OpenShift site, as part of the Skupper services. This will create the service across all the clouds. Let's see: in the local site, the service is already created, the hello-world-backend service; in the OpenShift cluster there's also a hello-world-backend service created; and now this same service should be available everywhere. If we look at the list of exposed services, we see the hello-world-backend service and the address the application is listening on in the Azure cloud. We can look at the services here and see that there's a hello-world-backend service created in the Azure cloud too. So we have the service in all three clouds. But there's no backend pod running here; it's just the frontend. We have the service, a virtualized service. It's time to expose the hello-world-frontend deployment on port 8080.
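The backend deployment and its exposure through Skupper can be sketched as below. The image name is an assumption taken from the Skupper example applications; the talk only says the backend replies "hello world".

```shell
# In the local and OpenShift sites: deploy the backend.
# (Image is illustrative; any HTTP service on 8080 would do.)
kubectl create deployment hello-world-backend \
  --image quay.io/skupper/hello-world-backend

# Expose it through Skupper rather than kubectl, so a virtual
# hello-world-backend service appears in *every* connected site,
# including Azure, where no backend pod actually runs.
skupper expose deployment hello-world-backend --port 8080

kubectl get services    # hello-world-backend now listed locally
```

This is the key difference from a plain Kubernetes Service: `skupper expose` announces the address across the whole VAN, and the routers transparently carry requests to whichever site has pods behind it.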
And then the type will be LoadBalancer. Okay, this creates the service for the deployment. So if we look at the services now, we see the backend service, and the frontend service has an external IP address. We can use that to access the frontend service, which is just a simple GET call. We go to our browser, type in the external IP address on port 8080, and we can see the application load. Let's make it a little bigger. It's a simple message showing the name of the pod that is currently answering these requests. If we refresh, we can also see a counter and information about the pod name. So let's see which one is answering. In the local cluster, we can get the pods, and we can see that the pod name corresponds to the pod name shown by the application. So in this case, the frontend application is being tunneled through Skupper to the local cluster running on my laptop, and it's able to access that service there. If we go to the console, we can see that the destination of the traffic is my local site, so that's the cluster currently answering those requests. And how does it behave for high availability? Let's scale the hello-world-backend deployment, setting the replicas to zero, so we can simulate an outage in the system. This will scale down the service. We can try calling the application again: it will keep responding until the pod is terminated. The moment the pod is terminated, the application automatically reroutes from the local cluster, where the backend is no longer available, to the OpenShift cluster. Now we can see there's a new pod answering back, and we saw the update on the screen. Let's just verify that it's the same pod that is running. We get the pods, and yes, the name is exactly the same as the pod currently running in the OpenShift cluster.
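The frontend exposure and the failover experiment can be sketched like this; the commands are illustrative and assume the namespaces and deployment names from the demo.

```shell
# Azure site only: expose the frontend publicly via a cloud
# load balancer (this is a plain Kubernetes Service, not Skupper).
kubectl expose deployment hello-world-frontend \
  --port 8080 --type LoadBalancer
kubectl get services        # note the frontend's EXTERNAL-IP

# Simulate an outage of the local backend; in-flight traffic
# fails over through the VAN to the pod in the OpenShift cluster.
kubectl --context laptop scale deployment hello-world-backend --replicas 0

# Restore the local backend, then take down the OpenShift one
# to watch traffic fail back to the laptop.
kubectl --context laptop scale deployment hello-world-backend --replicas 1
kubectl --context openshift scale deployment hello-world-backend --replicas 0
```

The `--context laptop` / `--context openshift` names are placeholders for whatever kubeconfig contexts point at each cluster. No frontend change is needed for failover: it keeps calling the same hello-world-backend address, and the routers pick a site that still has pods.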
So, for sure, it's now calling the AWS cluster. Let's bring back the local service and do a failback. Let's restart it and check: it's still the OpenShift cluster answering back. And we can see it's still that one responding. Then let's scale down the service running on top of OpenShift: scale deployment hello-world-backend, replicas equals zero. Now that there's no pod running on OpenShift, we should fail back to the local cluster running on my laptop. And here it is: the cluster answering back is now my own laptop. Amazing, right? Okay, we have seen how this load balancing across different clouds works with Skupper: private-to-public cloud connectivity and load balancing, the kinds of things that a virtual application network gives us without having to mess with the underlying network topology. These are the three key takeaways that I really want you to leave this session with. It is best to use Skupper when your data sources are running in a mix of different environments and you cannot move them, for example because of the data gravity of your databases. Or when you cannot get access to your data sources by changing the network at layer 3: services that are already available at certain IPs or hostnames need to keep being accessed in the same way, but your applications need to outgrow that constraint. And finally, you can use Skupper when you need to span administrative domains that you don't control: when you want to connect your two namespaces, but you are not the owner of the underlying networking infrastructure and are limited to just your own space, that's where Skupper can be leveraged to get this type of connectivity. All right, I think our time is over. I really appreciate that you hung out with us during this time.
And here is some more information, some more links if you want to follow our work on YouTube, Twitter, or our blog posts, and contact us offline. Now I think we can take some questions and answers and maybe continue the conversation. So thank you very much, and see you later.