Hello all, welcome to this webinar and thanks for attending. Today I'm going to talk about edge computing, why it is important for multiplayer gaming, and then I'm going to show a solution to deploy multiplayer game servers at the edge, based on different open source technologies. In particular, we will look at how to deploy Kubernetes clusters at the edge based on the K3s distribution, and then we are going to use Agones, an open source platform built on top of Kubernetes to deploy game servers. I will also talk about some tools by OpenNebula that allow us to deploy resources at the edge.

Let me introduce myself a bit. I work at OpenNebula, a company based in Madrid that focuses on building open source solutions for edge and cloud computing. I work remotely from Lecce, which is in the south of Italy, as a cloud technical evangelist, so I engage with the community and I speak at events and webinars to showcase open source technologies related to cloud computing, edge computing, Kubernetes, containers and so on.

Okay, so let's move to edge computing. It is not just a buzzword, it is a real new computing paradigm, and it poses some technological challenges, because what we need to do is bring computation and data storage closer to the location where they are needed, in order to improve response times, save bandwidth and reduce data transfer. Edge computing plays an important role in different sectors: gaming, which is what we are going to talk about today, but also broadcasting and streaming, the Internet of Things, smart cities and virtual desktop infrastructure. Edge computing is important for applications that require ultra-low latency, high bandwidth, fast response and real-time analytics, and these are the main benefits that we can get from this new paradigm.

If we look at multiplayer gaming, edge computing can be a game changer. Today multiplayer online gaming represents a big percentage of entertainment in general, and some types of games are very popular: first-person shooters like Destiny 2 or Call of Duty, multiplayer online battle arenas like League of Legends or Dota 2, and also very popular battle royale games such as Fortnite or Apex Legends. All of those games are played worldwide by millions of players.

So how do they work? Usually there is a matchmaking system: people join a queue, the system matches players from the pool, and when there is the right number of people, which can go from 10 to hundreds, a game is created. In order to start a game, a game server is deployed; a game server is a dedicated server for the match between the players that are involved. When a game starts, all players connect to this dedicated game server and they send information to it during the whole game, and the game server processes all the actions coming from the players. So if a player is jumping, running or shooting, all this information is sent to the game server, which then runs the full simulation, the physics, taking into account all the player actions. Then the server transmits data about the game state to the clients, so each client has its own accurate version of the game state to be displayed, to be rendered by the client's GPU.

So edge computing is important for multiplayer online gaming because, in the case of these fast-paced games, we need to lower latency as much as possible in order to provide a satisfying gameplay experience to the players.
Now, we cannot use an approach based on a central data center where all game servers are deployed, because this would increase latency, decrease game response time and also the perceived game quality, and players would not tolerate this kind of service. Instead, by using an edge computing paradigm, we can improve the gaming experience. Why? By provisioning game servers as close as possible to the pool of users that participate in the game, so we can drastically reduce latency and transmit data faster than with a large centralized data center where all game servers are deployed. We also have to take into account that if a game has a global deployment, so it is played worldwide, there is also a need to scale these resources dynamically in order to satisfy the demand at particular times. At a particular time in one time zone you can have a peak, maybe it is the evening after work time, while in another time zone there is low demand for resources because most of the people, for example, are sleeping. So we have to take all of this into account: for multiplayer online gaming we have to consider resources that must be created and deleted dynamically, and we also have to consider latency.

Now, in order to set up an edge computing solution for multiplayer gaming, we are going to use several technologies. Let's start with Agones. Agones is an open source platform for deploying, scaling and orchestrating game servers for multiplayer games, and it has been built on top of Kubernetes. Agones extends Kubernetes, so you can use standard Kubernetes tooling and APIs, like kubectl, to create, run, manage and scale dedicated game servers.

The second technology that we are going to look at is K3s. K3s is now an official CNCF Sandbox project. It is a certified Kubernetes distribution and it is ideal for edge deployments because it is packaged as a single binary of less than 40 megabytes, so it has fast provisioning and comes with minimal to no operating system dependencies. It can also run on several processor architectures, from Intel x86 to ARM. K3s has two components: the K3s server, which is in charge of managing the cluster and deploying containers as pods, and the K3s agent, which has the function of a worker and is in charge of running and executing pods.

Okay, so now let's look at the solution that we are going to show, also with a demo, in a few minutes. We are going to use a couple of tools also developed by OpenNebula. One is called OneProvision. OneProvision is a tool that allows you to dynamically grow a cloud infrastructure with physical resources that run on remote cloud providers, so we can create resources, for example, on Equinix Metal, on AWS and on other cloud providers. With OneProvision we can deploy on these cloud providers a fully functional OpenNebula cluster, with computing, storage and networking resources, and this cluster will be managed by using OpenNebula. The computing resources will be configured to use Firecracker, which is an open source solution by Amazon Web Services that has been integrated in OpenNebula to create and manage secure, multi-tenant container-based services and applications. Then, with another tool from OpenNebula that is called OneFlow, we are going to deploy K3s clusters on the resources provisioned by OneProvision. The K3s clusters are deployed as Firecracker micro-VMs, and we have to take into account that Firecracker micro-VMs have a very fast startup time and a very low memory overhead compared to traditional virtual machines.
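To make the two K3s components I just described a bit more concrete, here is a minimal sketch of how the server and the agent are typically started on the command line. The flags shown are standard K3s options and the variables are placeholders; the exact options used in the demo may differ.

```bash
# Minimal sketch of the two K3s components described above (placeholders, not the demo's exact script).

# On the node acting as control plane: start the K3s server
# (it manages the cluster and schedules containers as pods).
k3s server --node-external-ip "$PUBLIC_IP"

# On each worker node: start the K3s agent and join it to the server,
# using the join token the server writes to /var/lib/rancher/k3s/server/node-token.
k3s agent --server "https://${SERVER_IP}:6443" --token "$NODE_TOKEN"
```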
In order to deploy a K3s cluster, we start from a K3s Docker image that has been built with a Dockerfile. Then we can define a service template that will be instantiated to deploy a K3s cluster on the edge resources. When the cluster is deployed, we can use a standard Kubernetes tool like kubectl and we can deploy Agones in the cluster, and then we can still use kubectl to deploy a game server with Agones. For example, in the demo we are going to use Xonotic, which is a free and open source first-person shooter, and we are going to use the Xonotic game client to connect to the game server.

Okay, so now let's see a demo. I will show how this solution works for multiplayer gaming at the edge. Let's start with the demo. Here we have OneProvision, which is the tool by OpenNebula to provision resources at the edge, on remote cloud providers. Here I have already defined providers: we are considering AWS, in the London facility. I will show you how to create another provider. We are going to select in this case Packet, now Equinix Metal, and we are going to select the template for the Amsterdam facility. Then here we can change the name, for example, Amsterdam, and here we are going to configure the connection. So I go to the console of Equinix Metal; here I created a project, and if I go here, this is the project ID, so we can copy the project ID and put it here. I also defined an API key for the demo, so I can copy this one and paste it here. Okay, now we can finish, and here the provider has been configured.

Once the provider has been configured, we can provision resources on that provider. So let's go and provision resources on Packet by using Firecracker as the hypervisor technology. We select here the provider in Amsterdam, here we can put the name, for example, ams1, and then we can configure the inputs: the number of hosts that we would like to create, the number of public IPs, for example, I set four; here you can select different sizes for the servers, and here you can choose, for example, the operating system. So let's finish this, and now the provision will start. OneProvision uses Terraform to create the resources and then uses Ansible to configure the hosts, and it is going to create the computing resources, in this case two servers, the storage, so the datastores for the VMs, and then the networks, the public networks with the public IPs that we can use.

Okay, now let's go to the OpenNebula Sunstone, the graphical user interface of OpenNebula. I will show you that in the infrastructure there is now a cluster that has been created by OneProvision; I called it packet-ams1. Here you can also see the hosts that are going to be provisioned. I will also show you the project on Equinix: if I go to servers, here you can see that the hosts are being created on Equinix Metal. I selected the Amsterdam location, the CentOS operating system and the t1.small configuration for the instances.

Okay, so this is how to provision resources on a provider. Now let's see how we can deploy the Kubernetes clusters once we have a cluster available. First of all, what we have to do is create a Docker image and import it into a datastore in OpenNebula. Here I have already imported a K3s image, but I will show you how you can create an image starting from a Dockerfile, for example. So let's call this one k3s-new, and here we can define the size.
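As a reference for the Agones installation step mentioned a moment ago (we will actually run it later in the demo), this is roughly what the standard kubectl-based install looks like. The release branch in the URL is only an example, not necessarily the version used in the demo.

```bash
# Create the namespace Agones runs in, then apply the upstream install manifest.
# The release branch in the URL is an example; use the version you want to deploy.
kubectl create namespace agones-system
kubectl apply -f https://raw.githubusercontent.com/googleforgames/agones/release-1.6.0/install/yaml/install.yaml

# Check that the Agones controller pods come up
kubectl get pods --namespace agones-system
```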
Then here I have a Dockerfile that I can use to build an image containing K3s, which will be downloaded from the GitHub repository, and here I'm going to use version 1.17. Okay, so once you click create, what happens is that OpenNebula builds this image and contextualizes it with the contextualization packages of OpenNebula, so we can have SSH, networking and so on. So this image will be enriched with the contextualization packages of OpenNebula.

When we have the image, we can define a couple of templates for the micro-VMs that will be deployed on the resources with Firecracker. We have defined two templates. One is for the server component: here in the template you can define the memory and the CPU, and we are going to associate the storage, so the Docker image, with this template. We can associate a network; in this case it is automatic selection, which means that when we deploy at runtime, the network belonging to the cluster will be selected. An important thing is the context part: here we have defined a start script. This start script will get some information about the VM, like for example the public IP, and then it will start the K3s server. Also, when we launch the K3s server, a token is generated, and this token will be put as a key in the metadata server of OpenNebula that is called OneGate, because this token will be used by the agents in order to start and connect to the server.

We have also defined a template for the agent. For this one too you can define memory and CPU, we associate the same image that we defined for the server, and here also the network is automatic. The difference is in the context: the start script is different. In this case the script is going to get, by using OneGate, the metadata server of OpenNebula, the IP of the server and the token, and then, instead of the server, it is going to start the agent.

Once we have defined these two templates, we can define a template for the service, so for the cluster, where we have defined the two roles, one for the server and one for the agent, and we associate the two templates that we have previously defined. In this case, when we instantiate the service, we have a cardinality for the server equal to one, which means one server will be deployed, and we will deploy two agents. For the agent role I have defined, for example, a minimum number of VMs equal to one and a maximum of ten, with a cooldown of two seconds between each scaling operation. Using OneFlow you can scale at runtime, so for example, if we need more agents, we can scale out more agents.

Okay, so now let's instantiate the service. Here we have an attribute that we are going to pass to the template: it is the name of the cluster where we would like to deploy the K3s cluster. In this case we are going to deploy on Packet, on the cluster packet-ams1. And let me go and create the service. Okay, so now the service is going to create the server first and then the agents. You see it is in a pending state while it creates the K3s server. Let me show you here: as you can see, the provision has finished, and here we now have the cluster in a running state. Also in the OpenNebula Sunstone interface you can see that the hosts are now on, so they are available.
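To illustrate how the two start scripts and OneGate fit together, here is a minimal sketch. The variable names, the jq paths into the OneGate JSON and the helper logic are assumptions made for illustration; only the general pattern (the server publishes the token via OneGate, the agent reads it and joins) reflects what was described above.

```bash
# Sketch of the server role's start script (illustrative; names and paths are assumptions).
PUBLIC_IP=$(hostname -I | awk '{print $1}')   # in the demo the IP comes from the VM context

# Start the K3s server in the background
k3s server --node-external-ip "$PUBLIC_IP" &

# Wait for K3s to generate the node join token, then publish it through OneGate
while [ ! -f /var/lib/rancher/k3s/server/node-token ]; do sleep 2; done
onegate vm update --data "K3S_TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)"
onegate vm update --data "K3S_SERVER_IP=$PUBLIC_IP"
```

```bash
# Sketch of the agent role's start script (illustrative; the jq paths are assumptions).
# Find the VM of the "server" role through OneGate and read what it published.
SERVER_ID=$(onegate service show --json \
  | jq -r '.SERVICE.roles[] | select(.name=="server") | .nodes[0].vm_info.VM.ID')
SERVER_IP=$(onegate vm show "$SERVER_ID" --json | jq -r '.VM.USER_TEMPLATE.K3S_SERVER_IP')
TOKEN=$(onegate vm show "$SERVER_ID" --json | jq -r '.VM.USER_TEMPLATE.K3S_TOKEN')

# Join the cluster as a worker
k3s agent --server "https://${SERVER_IP}:6443" --token "$TOKEN"
```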
So we can deploy the K3s cluster here. Let's look back at the service. Here the K3s server is now going to boot and run, and we can see that it has a public IP that we will now use to connect and deploy Agones.

Okay, so let's first download the configuration file for connecting. The file is still not available, so the server is still starting; let's copy the IP. Yes, you can see that it has now started, so we can copy the configuration file used to connect to the Kubernetes cluster. Let me change the local IP to the public IP in the configuration file, and now we can export the file and connect. So, for example, let's see which nodes there are. At the moment only the master is available, so only the server. If we look here, you can see that the agents are now running, so the start script is also starting the agents. Let's check back again: now we also have the agents that joined the cluster.

So now what we are going to do is deploy Agones on this cluster. We can create the namespace and then we can install Agones in that namespace. Okay, now let's check the pods: you can see that some are already running and some are being created, and there they are. Once Agones has been deployed, we are now going to deploy a game server. I have defined a file; this is a fleet, so we can deploy several game servers. We can define, for example, the replicas and also the type of scheduling: we can choose between Distributed and Packed, according to whether the cluster is dynamic or static. In this case I will deploy this fleet, and two game servers will be deployed on the cluster.

So let me apply this first, and now we can check. First of all, let's check the fleet: here we have two desired and two current, and we don't have any game servers ready yet. So now we can check the game servers; as you see, we are using kubectl here, because Agones extends Kubernetes, so you can use the Kubernetes API. Let's check the game servers. This will take some time, because in this case we don't have the Xonotic image on the Kubernetes cluster yet, so it is taking some time to pull the image. We can check this by using describe; let's see here. Right, as you can see, it is still pulling, so we need to wait a bit for the image to be pulled.

Meanwhile, I just want to say that here we have deployed the cluster on Packet. Clearly, if we would like to deploy the cluster on another provider, for example on AWS, the first thing is to also provision the resources, for example, on AWS. So here, for example, let me also start the provisioning on AWS London. Okay, so let's call it service-london. Here we can define again the number of servers and the number of public IPs; this is the AMI for CentOS, and here we can select the instance type for the provisioning, in this case a bare metal server from Amazon. When I click finish, the provisioning starts for this one as well. I will show you also in this case: another cluster has been created here, and also another host; in this case I chose just one host to be provisioned. I also have here the instances, still not running, and Terraform is going to run to create the instance in London. So let's see. Okay, so now it's running. Okay, so let's go back and see if the game server is now ready.
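For reference, the fleet I applied above looks roughly like the following. The structure is the standard Agones Fleet resource; the container image, tag and port follow the public Agones Xonotic example and are assumptions as far as this specific demo is concerned.

```bash
# Hypothetical sketch of the Xonotic fleet used in the demo (image tag and port are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: xonotic
spec:
  replicas: 2
  scheduling: Packed            # or Distributed, as mentioned above
  template:
    spec:
      ports:
      - name: default
        portPolicy: Dynamic
        containerPort: 26000    # Xonotic's default server port
      template:
        spec:
          containers:
          - name: xonotic
            image: gcr.io/agones-images/xonotic-example:0.8
EOF

# Inspect the fleet and the game servers it creates
kubectl get fleets
kubectl get gameservers
```

Scaling then works with the standard tooling, for example "kubectl scale fleet xonotic --replicas=4", which is what the demo does next.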
Okay, so meanwhile the image has been pulled, and now we have two game servers that are ready. Now let me show how we can connect with the client. This is the Xonotic client, the first-person shooter. What we can do now is just join one of these game servers. So let me put here the address and then the port. Now we can join: it is loading, and now it is joining the game server. Let's see, it should start. And here I am, you can see that I am in the game and I'm playing it now. Here, for example, the game server has been created with some bots in it, so you can see some bots in the game. Let me see if I can reach this one, and if I can shoot this one. Maybe not. Okay, let me quit this.

So we can use kubectl, for example, to also scale game servers. Let's say we need more game servers: what we can do is use kubectl scale, and then, for the Xonotic fleet, let's say four replicas. Now, if we check the fleet, we see that we have four desired, four current and only two ready, but in a few seconds we will have four ready. This is faster because we have already downloaded the Xonotic image on the servers. Then maybe we want to shut down some game servers: we can set replicas equal to one, and now what happens is that the extra game servers are shut down. So that's how it works: we can scale game servers.

I will also show you how to scale, for example, the K3s cluster. If we need more agents, we can do this by using OneFlow. For example, we can click on the agent role and then click on scale, and let's say, for example, that instead of two we would like to have three agents. What happens now is that a new agent is deployed on the resources, and it will also join the cluster.

Okay, so this is it for the demo, and I think that this is all. I hope that you enjoyed this webinar and had some fun with this deployment. You can also check the website, one-edge.io, where you can find information about the edge cloud architecture by OpenNebula. Okay, thanks to all for attending this webinar. Bye.