And we are live. First of all, welcome everyone, and thank you for joining me. Let's see how many people will actually join the live stream. And a quick disclaimer right away to get it out of the way: this is an official live stream of the CNCF and as such is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, just be respectful of your fellow participants, presenters, and so on.

So as I said, thank you to all of those who joined the live stream. I think this is gonna be a really interesting session. We have a lot of interesting topics to discuss around my favorite subject, which is Kubernetes, and I'm also really looking forward to your questions. So let's start the presentation right away. Awesome.

Some of you probably already know me from my YouTube channel, Tech World with Nana, where I talk about Kubernetes and many other DevOps tools. For those who don't know me yet: I am Nana, I'm a DevOps engineer, and I actually dedicate a lot of my time to creating free content as well as online courses and educational programs for those who want to get started in DevOps and learn the different tools in the DevOps space, because they're thinking about switching to a DevOps career or they have to learn these tools for their work. I will also be very happy to connect with you on any social media platform, and I think we have the link to all my social media profiles in the chat, so you can find it there.

As I said, one of the most fascinating tools for me personally, which I have focused on for the last few years both as an online educator and as a practitioner, is Kubernetes. And since KubeCon is around the corner, as many of you already know, I decided to make this live stream to talk about Kubernetes and all the tools that have been evolving, or are evolving, around it, and to categorize them based on the concepts they belong to. So you get a nice overview of all the tools in the Kubernetes world, all the concepts, and everything else that is happening in this world. And I also hope this can better prepare you for KubeCon, because most of the talks and technologies that you will see on the schedule will belong to one of the categories that we're gonna go through today. Also, share in the chat whether you are actually going to KubeCon, and what your expectations are, or what your experiences with previous KubeCon conferences have been.

And with that overview, let's actually jump right in, and let's start with the common truths that everybody already knows. We know that Kubernetes is gaining popularity at a crazy speed. I have seen this with the companies that I work with and support, with the people that are following my channel, and basically through all the feedback: it is being adopted by more and more companies and projects. And logically enough, we see this whole huge ecosystem evolving around it, with lots of startups and new projects that implement solutions for different problems in the Kubernetes world. And as you know, Kubernetes is not the easiest tool to learn, set up, or maintain. It is really complex to understand, it's difficult to set up, and it's also difficult to maintain for the teams who actually have to run and manage it.
But what I think makes up most of the complexity is not Kubernetes itself, which is complex enough and its own topic. What adds complexity are the integrations with all the other tools and technologies, right? And we're gonna talk about that throughout the live stream, what things we need to integrate Kubernetes with and so on. Also, because Kubernetes is a platform that applications run in, and there are all sorts of applications, from databases to web applications to monitoring, logging, and security services, et cetera, there are lots of things happening within Kubernetes. Plus we have the whole lifecycle around Kubernetes, like a CI/CD pipeline that deploys applications to Kubernetes, a CI/CD pipeline for configuring Kubernetes itself and updating the platform, maybe even the infrastructure underneath. And as a result, you see all these technologies evolving around it, and you see this list of technologies you've probably never heard of before.

And I know that for a lot of people this can be overwhelming to keep up with, because people usually ask me: do I have to learn all these tools? Where do I start? How much do I actually need to learn? Where do I stop? And so on. So I thought I'd make this a little bit easier: first try to understand what the issues related to Kubernetes are and what the main concepts are, and only after that see what technologies are out there that actually solve these issues and relate to those concepts. And after that, you can decide where you want to deepen your knowledge, which concepts you want to learn, which technologies you want to focus on, and basically improve your skills in one of these areas. Awesome, I'm just going through some of your questions, and great.

So let's actually start with the first topic around Kubernetes, which is container runtimes. What is it that runs the actual workloads in Kubernetes? When you schedule an application in Kubernetes, it could be a database, a logging application, or a web application; under the hood they will run as containers, right? So in Kubernetes you need a container runtime technology underneath. The first one was Docker. Kubernetes was actually created with Docker support integrated in the code itself, and that was logical, because Docker was the technology that made containers popular and mainstream in the first place. But with time, in parallel to Docker, more container technologies emerged that were more lightweight than Docker; containerd and CRI-O are some examples. And the reason is that Docker itself is more than just a container runtime: it is used to build images, it has its own command line interface, and more features keep getting added to it to make it a self-sufficient technology. But you don't need any of that extra stuff in Kubernetes; you only need the container runtime. So preference was given to these more lightweight runtimes like containerd and CRI-O that are just container runtimes. And this change happened in many places: the major cloud platforms like AWS, Google Cloud, and Azure that provide managed Kubernetes services actually use containerd as the container runtime, and CRI-O, for example, is used by OpenShift.
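By the way, if you're ever curious which runtime your own cluster uses, you can see it right in the node list; this is just standard kubectl, nothing special:

```
kubectl get nodes -o wide
```

The CONTAINER-RUNTIME column in the output will show something like containerd://1.6.8 or cri-o://1.26.1 for each node.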
But I have to note here that when you work with Kubernetes, you actually don't need much knowledge of the container runtime itself, because even in a self-managed Kubernetes cluster, once you've installed the cluster and the container runtime and everything is set up, you don't need to work with the runtime anymore, right? It just runs and does its job quietly in the background; you just work with Kubernetes. And another note on Docker itself: even though it is not the first option for Kubernetes as a container runtime anymore, it still remains a super popular tool, because, as I said, it is used to build the images that will then run in Kubernetes as containers, no matter which container runtime the cluster uses. So again, if you see any talks on the KubeCon schedule that are about container runtimes: I wouldn't say this is a core concept that you need to know about Kubernetes, but it is part of the Kubernetes ecosystem. And let me see again if there are any questions. All right.

So let's move on to another topic, which is very interesting for me personally. As I said, Kubernetes was actually created as a container orchestration tool for the use case of running hundreds or maybe thousands of containers, right? And the question now is: in which use cases do we have so many containers? One of the perfect use cases for this is actually a microservices architecture, right? Microservices applications. This was itself a shift that happened in the application architecture world: instead of one big monolith application, we have the same application, but divided into smaller, cleaner, and more manageable pieces that can run in isolation. These are microservices, and these microservices often have other services that they talk to. So we end up having loads of containers that need to be managed, and we basically deploy all of this on Kubernetes.

And since in a microservices architecture these small apps get deployed as their own isolated containers, the challenge becomes: how do these microservices talk to each other in this Kubernetes environment? But not only that: how do they talk to each other efficiently, with no single point of failure, securely, and with high scalability? So that even if you have thousands of services, and you scale to 10,000 microservices, all the traffic between them doesn't slow down the whole network and break everything. All these challenges are what service mesh technologies actually address. And not surprisingly, there are many different service mesh implementations out there. The most popular ones currently are Istio, Linkerd, and HashiCorp's Consul, which you can all deploy and use in Kubernetes. And again, these are technologies that were developed in this ecosystem, and they all have really good integrations with Kubernetes, so you can easily deploy and use them there, even though they are all tools that can be deployed and managed independently of Kubernetes as well. And they solve the same challenges in different ways. I also get this question very often: when there are 50 technologies that basically do the same thing, the usual question is, which one do you learn, right? Which one do you use for your project? And again, as I said, these tools will solve the same challenges, but in different ways.
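Just to show how approachable the Kubernetes integration can be: with Istio, for example, you typically just label a namespace, and Istio injects its sidecar proxies into every pod deployed there. A minimal sketch, with a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: online-shop            # hypothetical namespace for your microservices
  labels:
    istio-injection: enabled   # Istio injects its sidecar proxy into pods created here
```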
So depending on what approach you want to use, and depending on which features are more important for you, you basically have to decide on a per-technology basis which one you want to test and try out for your application. But personally, I think you just start with one of the tools, you learn all the concepts, and then you can test another tool and compare how they work. Right, so again, if you are running a microservices application in Kubernetes, then learning about service mesh technology, especially one of those that actually integrate with Kubernetes, will really be interesting for you.

All right. Now let's continue from the microservices architecture and talk about persisting data, because whenever you are deploying applications, whether it's a monolith or a microservices application, you need some kind of data persistence, right? Usually you will have a database, or multiple databases, connected to your application that persist different types of data. This could be an in-memory database, a caching database to make your application faster, a persistent SQL database, or a NoSQL database; basically you can have multiple ones for your application. In any case, it's important to know how data persistence works in Kubernetes.

And a very important thing that may be a little bit confusing for people who start working with Kubernetes is that Kubernetes does not offer or implement persistent storage for application data. Instead, Kubernetes basically tells you: you know what, I don't care where you store your data physically, but I will give you an interface so that you can connect your physical storage, wherever it's configured, whatever that is, to the applications running inside Kubernetes, using Kubernetes components like persistent volume, storage class, persistent volume claim, and so on.

So the question is: if you need to persist data for your MySQL or Elasticsearch database, for example, how do you do that? What are the steps? The first and most important step is that you, as an administrator, decide what kind of storage you want to use to persist the data. And that could depend on what data you are persisting for which database. This could be storage from the cloud platforms themselves, like Elastic Block Store from AWS, Google Cloud storage, Azure storage, and so on, or it could be a cloud native storage tool like CephFS or GlusterFS, which are scalable, distributed storage solutions for when you want to scale the data persistence, and you can also have on-premise storage, right? It could be an NFS server, or it could be a very simplistic file system storage. So physically, you go and configure the storage where the data from the Kubernetes applications will end up, and you actually have a pretty wide choice of where that storage can physically be, which gives you the flexibility to select whatever storage backend you want to persist your data with. And once the storage is configured, by you or whoever is responsible for that in your company, you can make it available in Kubernetes simply by creating and using a Kubernetes component called a persistent volume, right? So it's a YAML definition where the storage backend address and other data is configured.
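To make that concrete, here is a minimal sketch of such a definition, assuming an NFS server as the storage backend; the server address, path, and sizes are hypothetical:

```yaml
# The PersistentVolume registers the physical storage with Kubernetes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:                              # the storage backend and its address
    server: nfs.example.internal
    path: /exports/app-data
---
# The PersistentVolumeClaim is what an application pod references
# to actually get a piece of that storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
spec:
  storageClassName: ""              # bind statically to the PV above, not via dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```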
So in the persistent volume you basically say: I want storage from Elastic Block Store, which is available at this address, or from NFS storage available at this address, and this is how much storage I need. And once Kubernetes has the storage information through the PV, the persistent volume, you can then attach or link that physical storage to a workload in Kubernetes, again using Kubernetes native configuration. So as I mentioned, the challenge and the main takeaway here is learning about the different types of storage that Kubernetes supports, and which storage type is appropriate for what type of application data. And I believe the official Kubernetes documentation, in the persistent volumes section, actually has a list of all the supported storage backends, where you can check which ones you can use.

There is also another very important thing I want to mention here: in practice, even though Kubernetes theoretically supports all this and everything is great, until now, and this could change later, many people decide to run and manage their databases outside Kubernetes, especially in production environments. And this is because they want to avoid the headaches of setting up data persistence in Kubernetes and chaining the whole thing together, from the actual storage backend all the way to the application. So they just decide: you know what, let's leave our MySQL database outside the cluster, and the applications inside Kubernetes just connect to this outside database. But I believe this will change in the near future, and there are actually lots of trends pointing that way, because it is going to become easier to use storage, and people are going to get more confident running stateful applications in Kubernetes and using storage and data persistence in Kubernetes. So I think we're going to see that shift very soon as well, right?

So moving on. At this point we have our modern microservices applications running in Kubernetes, we have the data persistence configured, and so on. But what about deploying applications into the Kubernetes cluster? Because when you have, let's say, 1,000 microservices for your project, and this is actually a very moderate number for lots of projects, all these microservices could be updated by different teams or by a handful of teams, it doesn't matter. Obviously you don't want to deploy these microservices to the cluster individually and manually, right? Or using some scripts. You want the whole process to be automated, with a CI/CD pipeline for your applications that basically watches for your commits, then tests and builds the application, produces a new container image version, and automatically deploys that to Kubernetes. So the question now is: how do you automate updating application versions in Kubernetes, or deploying new application versions to the cluster? And that's where integrating Kubernetes with different CI/CD tools comes into the picture, right? We of course have the traditional tools like Jenkins, but also more recent ones like GitLab CI/CD, CircleCI, et cetera. And as I mentioned, these tools all have integrations with Kubernetes; of course, some of them have better ones than others.
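As a rough idea of what the deploy step of such a pipeline can look like, here's a minimal GitLab CI sketch. It assumes the runner already has credentials for the cluster (for example via a KUBECONFIG CI variable) and that an earlier stage built and pushed the image; the registry, app, and deployment names are hypothetical:

```yaml
deploy:
  stage: deploy
  image: bitnami/kubectl:latest   # small image that just provides kubectl
  script:
    # roll the Deployment over to the image that was built from this commit
    - kubectl set image deployment/my-app my-app=registry.example.com/my-app:$CI_COMMIT_SHORT_SHA
```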
If you have already tried to set this up, you know that Jenkins does have an integration with Kubernetes, but since it was developed way before Kubernetes, it tries to fit into this whole cloud native world and really struggles with it. But some newer CI/CD tools are being developed that are actually purpose-built for Kubernetes, right? CI/CD tools that work directly with Kubernetes and are super easy to integrate, and so on. So we see a lot of changes and a lot of new tools evolving in that area as well.

And as a continuation of the topic: we have CI/CD pipelines that automatically deploy application code changes to the cluster, but we also have pipelines that automatically apply platform or infrastructure changes to Kubernetes, right? Traditionally, when managing and configuring servers or the platforms where applications run, and by platform I mean Kubernetes in this case, administrators would use scripts or do all these configuration and maintenance tasks basically manually. But if you have infrastructure where you run and manage thousands or tens of thousands of servers, because with Kubernetes it's so simple to scale that far, then obviously managing all of it manually or with scripts is not feasible, right? And we saw that infrastructure became more complex: we are running applications on much more complex infrastructure, it became more virtual, and cloud platforms became very commonly used and popular. Therefore we now need somewhat more sophisticated, higher-level tools that help us manage all of this instead of just doing the work manually, right?

Another very important change in parallel to that, or maybe because of that, is the culture of collaboration between engineers itself, right? You no longer have just a couple of engineers doing their own thing on their laptops and remotely configuring servers; you have this whole culture of team collaboration, where the system engineers need to talk to the developers and the security engineers and the DevOps engineers, and everyone has to work together, and so on. And as a result, they don't only collaborate on infrastructure changes, but on application changes as well. So the best way to address this was writing code for infrastructure provisioning and configuration, which emerged into a term called infrastructure as code, which you've probably all heard of, and which later extended to configuration as code, policy as code, X as code, whatever as code. And that was part of the team collaboration culture switch, as well as simply a solution to infrastructure becoming so complex, where you basically need a tool to manage 10,000 servers and not just 10 of them.

So now, instead of isolated manual changes, we have team collaboration and more transparency into what changes are actually made to the infrastructure. Again, for clarity: if you're using a cloud platform, the infrastructure would be, for example, the AWS infrastructure, and then on top of the infrastructure you have the platform, like Kubernetes, that runs on it, right? So you have more transparency into what changes you're making to the infrastructure as well as the platform, and you get that by using version-controlled code to express all these changes.
And as a logical continuation of that: because you now have code that represents your infrastructure, platform, or configuration changes, we have the same kind of CI/CD pipelines for applying those changes, just like we apply application changes using such pipelines. And that process got the name GitOps. So GitOps is a concept of using a version control system like Git for making any changes to infrastructure, platforms, and so on, and automatically testing and applying those changes using a CI/CD pipeline once the code is committed to the repository. And again, there are many tools that implement the GitOps concept, especially in the Kubernetes world, such as Argo CD, Flux CD, and so on, and many of them are purpose-built for Kubernetes, right? They actually evolved specifically for this use case.

Now, GitOps is a process that cannot be implemented for just any project within a day, because it requires engineers getting familiar with the tools and the concepts. I wouldn't say it is a difficult or complex concept, but it's just so different from what we have been working with that teams sometimes need time to adjust to this new way of working. So again, this goes back to the cultural shift in how teams collaborate, and this is probably one of the challenges of implementing GitOps in projects. And that means: if you and your team, for example, plan such a transformation, where you want to move in the direction of GitOps, then familiarizing yourself with one of the GitOps tools will be a really good way to get started.
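To make that a bit more tangible, this is roughly what an Argo CD Application manifest looks like: you point it at a Git repository, and Argo CD keeps the cluster in sync with whatever is committed there. A minimal sketch, with a hypothetical repository and app name:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd           # namespace where Argo CD itself runs
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git  # hypothetical config repo
    targetRevision: main
    path: k8s                 # folder containing the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:                # sync automatically on every commit
      prune: true             # delete resources that were removed from Git
      selfHeal: true          # revert manual changes made directly in the cluster
```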
All right, let me check for any questions. Awesome, again, moving on. Now we have all our microservices applications running in the cluster, plus any third-party applications that your microservices use; could be databases, authentication services, messaging, whatever. And finally, you have CI/CD tools that are connected to Kubernetes and do automatic configuration updates as well as application updates. So a lot of things going on inside Kubernetes and around it, right? And of course, the cluster itself is running on some kind of infrastructure, like a cloud platform or an on-premise data center. So you have the infrastructure level, the platform, Kubernetes, on top of it, the applications inside, and then the integrations with external applications.

And this actually means lots of potential security issues. Every new tool, new application, new integration with Kubernetes increases the attack surface, right? Now you have a new place that could potentially become a security issue. And you need to manage the security issues on multiple levels: as I said, there is the infrastructure lying underneath, then you have the Kubernetes platform on top of it, then you have the applications inside the cluster, and you have all the applications that Kubernetes is integrated with, right? Which either push something to Kubernetes, like deploying an application from Jenkins to Kubernetes, or pull external changes into Kubernetes, like in the case of Argo CD. So all of these can potentially open up security vulnerabilities. You can imagine that this creates quite a challenge for teams to secure the cluster and the applications inside. And again, not surprisingly, there are loads of tools that address different aspects of security in the Kubernetes world.

I actually went through the schedule as well, and I was surprised to see that a lot, really a lot, of the talks at KubeCon are around security in Kubernetes on different levels, whether it's integrations, the cluster itself, or across clusters, and there are actually lots of tools listed there. So this will be an interesting topic for those who are running really critical applications in their clusters and want to be confident that everything is secure and running according to security best practices. And in addition to all the security concepts, we also have a new term called DevSecOps, which basically embeds security solutions within the automated DevOps processes. And we saw that security became a foreground issue: instead of just being in the background and the responsibility of the security engineers, suddenly it became, okay, now we have to actually integrate security concepts and security solutions throughout our automated processes, throughout the whole chain, because we need to really secure each and every step of the application lifecycle as it gets into the cluster, and the infrastructure underneath, and so on. And just like GitOps, DevSecOps is also, in some ways, a cultural shift in how teams of engineers work together and share the responsibility for addressing security issues, instead of saying this is just for the security engineers, or the system administrators have to take care of all this.

All right. Now, Kubernetes clusters can get very complex: again, thousands of applications inside. So the question is, how do I monitor and observe stuff to make it transparent, for me or whoever is responsible for Kubernetes, what state my cluster is in, right? So we need some observability in our cluster, and not only to identify issues when they happen, like an application not being accessible, crashing, or being under heavy load, or the Kubernetes cluster running out of resources. We can identify what caused such issues very easily using monitoring tools, but that's not the only goal or purpose of monitoring; it's rather to prevent these issues from happening in the first place. So we monitor the values, and we alert the right people in the team to take action when there is a chance that some issue will arise. And this is a very effective way of managing and maintaining your cluster, especially in a production environment, because you can actually utilize all these tools to avoid unexpected things happening in your cluster, and to avoid users being the first ones to find out that your cluster has issues, instead of the team that is responsible for Kubernetes. And that's exactly where monitoring tools like Prometheus, Nagios, Datadog, et cetera come in. I personally have used Prometheus with Grafana in lots of my projects, and it really takes a lot of maintenance work and effort off the Kubernetes administrators, because it automates all this issue detection for you. So again, if you're running large clusters, you may want to consider using one of these monitoring tools to make your own life easier in managing Kubernetes.
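As a small sketch of what that alerting side looks like with Prometheus: if you install it via the Prometheus Operator (for example the kube-prometheus-stack Helm chart), which is an assumption here, you can define alert rules as Kubernetes resources. The threshold and names are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-memory-alert
  namespace: monitoring
spec:
  groups:
    - name: node-resources
      rules:
        - alert: NodeMemoryHigh
          # fires when a node's memory usage stays above 90% for 10 minutes
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} memory usage is above 90%"
```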
Now, in addition to observing the cluster and the infrastructure, when I am administering Kubernetes, I want to observe my applications as well. So let's say with monitoring tools I find out that the cluster is running out of resources: 70% of the CPU and RAM of my whole cluster is used up, and there are more applications to deploy. So I know to take action and add more resources, right? But let's say everything is running fine and the cluster components are healthy, but I want to know how my applications themselves are doing, right? So you want to see the logs of your applications, and especially in case of application errors, like when you're getting HTTP error messages, you want to be able to see what actually happened inside the applications, in the chain, that resulted in that error. And this is especially critical when you have microservices that are all tied and connected to each other, and you basically have to retrace the request to see where the error happened, what caused it, and how it propagated through the application.

Now, managing the logs of one application is obviously simple, because you just find the application, check the logs, see the issue in the logs, and that's it, right? You know the reason. But if you have thousands of microservices, you can't just manually check the logs of each application to find the issues and see if any application has problems. You need some way to collect the logs of all the applications and view them in a more organized way, in a way that lets you search, filter, and logically make connections between the logs of different applications, right? An example would be: I want to see all the logs, from all the applications, that happened within this timeframe and have this request message or header inside, right? And then you can see the logs across the applications and what the trace looks like. For solving this challenge of managing the logs of thousands of applications in Kubernetes, there are log collector tools, some of the more popular ones being Fluentd, Fluent Bit, Logstash, and so on. There are also tracing tools that are specifically for helping you analyze the trace of a request across applications, which could be an additional tool that you may want to use in your cluster, especially with microservices applications. All right?

Now that we've talked about what runs inside the cluster, let's see where the Kubernetes cluster itself will run. I quickly mentioned the infrastructure underneath Kubernetes, and AWS is one example, but you actually have lots of options. And again, as with any other tools, this area is also evolving, and lots of new technologies and cloud platforms are offering new services for this. So when you are thinking of moving to Kubernetes, it's really important to know what options you have for where you can run Kubernetes, and what the advantages and disadvantages of each of these options are, right?

Now, broadly speaking, you have two options. The first one is a self-managed Kubernetes cluster, where you get several virtual machines and install all the Kubernetes binaries on them. You make each virtual machine into a control plane node or a worker node, and you just join it to the cluster. This way you can add any number of nodes to the cluster; you can extend it or shrink it based on what your applications require, so you can flexibly scale the cluster size up or down according to how much resources your applications need.
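For a rough idea of what that self-managed path involves: with kubeadm, for example, you'd bootstrap the first control plane node from a configuration like the sketch below; the version, endpoint, and pod subnet are hypothetical and depend on your setup:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.28.0"
controlPlaneEndpoint: "k8s-api.example.internal:6443"  # e.g. a load balancer in front of the masters
networking:
  podSubnet: "10.244.0.0/16"  # must match what your network plugin expects (e.g. Flannel's default)
```

You would then run kubeadm init with this file, install a network plugin, and join the worker nodes with kubeadm join.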
But the disadvantage of self-managed Kubernetes clusters, where you basically spin up virtual machines, whether in your own data center or in the cloud, and then install and deploy everything on them yourself, the main disadvantage is that you have to manage the cluster yourself, including the main controlling parts of the cluster, which are the master or control plane nodes. You have to install the binaries, deploy the cluster network plugin, manage the cluster certificates, and make sure the master processes are replicated, because if one master process, like the API server for example, dies or has issues and you don't have a replica for it, your cluster will be inaccessible. And if you lose another master process, like etcd, your cluster will simply not function, because etcd holds all the information about the cluster, and without that, nothing works. So you will have to take care of replicating all the master processes, and you will basically need someone in the team who knows how to administer a Kubernetes cluster. And as I said at the beginning, the challenge with Kubernetes is not just setting up the cluster; you have to maintain it the whole time, right? You have to upgrade it, you have to renew the certificates, and if things break and don't work, you obviously have to fix them, and so on.

In many cases, it could be that you and your team just want to get started with Kubernetes as fast as possible, right? You have an application, could be a startup, and you just want to deploy it and make it accessible for users without the overhead and effort of going through the whole process of learning Kubernetes, setting up Kubernetes, and all these headaches. Basically, you don't want to handle the cluster administration yourself. And there are many projects like this, as I said. In this case, a great alternative, and this is option number two, is to use a managed Kubernetes service. Lots of cloud providers, especially the major ones, actually offer managed Kubernetes services: you have EKS, the Elastic Kubernetes Service, from AWS, you have Azure Kubernetes Service, Google Kubernetes Engine, and so on, but also many smaller cloud platforms. And I'm seeing more and more cloud platforms adding a managed Kubernetes service to the services they offer their customers, which I believe again speaks for the popularity of Kubernetes, and especially the need for Kubernetes as a managed service.

Now, I should mention some additional benefits of using a managed Kubernetes service, especially on the popular cloud environments like AWS, Google, et cetera. For example, if you use the AWS EKS service, your cluster will automatically have integrations with many other AWS services. So when you create a load balancer service in Kubernetes, for example, AWS will automatically provision an Elastic Load Balancer, which is an AWS native service, with all the right configuration for you. So you don't have to do that manually, which you would have to do with a self-managed cluster. Or another example: if you want to encrypt your Kubernetes secrets, you can use one of the AWS services to do that much more easily than trying to deploy something yourself and piecing everything together to automatically encrypt the secrets every time they get created.
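Going back to that load balancer example for a second, because it shows nicely how little you have to write: this plain Kubernetes manifest is all you create, and on EKS the cloud provider integration provisions the actual AWS load balancer behind the scenes. The app name and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # on a managed service like EKS, this triggers provisioning of a cloud load balancer
  selector:
    app: my-app        # routes traffic to pods carrying this label
  ports:
    - port: 80         # port the load balancer exposes
      targetPort: 8080 # port the application container listens on
```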
So basically, you get all these extra benefits of the cloud platform's own services, in addition to your Kubernetes cluster, as natural integrations. Of course, a downside of that is that if you want to migrate your Kubernetes cluster to another platform, that will be difficult, because you're kind of tied to that platform. But again, you probably have to decide on a project-specific basis whether that's something you will want to do in the future, or whether you're fine saying: you know what, I want to automate as much as possible and make my life as easy as possible, and basically let AWS, or any other platform, manage all this.

And let me check the questions again. I think we're coming to one of the final topics, which is also one of the big trends. I actually saw lots of talks at KubeCon around the topic of multi-tenancy, which is a very interesting, very hot topic today, and let's actually see why. So what is multi-tenancy, first of all, and why did it become so in demand? Suppose you or your team of administrators set up a production-grade cluster, right? With all the security best practices configured, and the control plane processes replicated. Again, we're talking about a self-managed cluster where you kind of do everything yourself, though in some cases this could also be a managed service. You have monitoring and observability configured, you have logging set up. So basically, you have a super well configured, well tested Kubernetes cluster, right?

Now, all of that is a lot of administration and maintenance effort, right? So it makes sense that you may want to use that same cluster for multiple projects in your company. You don't want to be setting up and managing a separate cluster for each project, because imagine what kind of effort that would be for the Kubernetes administrator team, or for each project team having to manage its own Kubernetes cluster. And this actually sounds logical to many people. So instead, you want to host multiple independent projects, projects that have nothing to do with each other, individual projects within the same company, in this same production-grade, super well set up cluster. So you have multiple tenants, multiple projects, in the same cluster, and that's where the term multi-tenancy comes from.

But of course, the challenge here is: how do you isolate these independent projects from each other in the same cluster? Because, as I said, these could be projects that have nothing to do with each other, and it could also be very important to actually isolate them, right? For security reasons. So how do you make sure each project team only has access to their own project's resources and applications? How do you make sure the cluster resources are distributed fairly among all the projects' workloads, so that one project doesn't basically use up all the resources of the cluster? And of course, how do you make sure that any security issues or application issues that happen in one project, say one project team introduced a security vulnerability and it just messed up the whole thing, don't affect the applications of all the other projects? So how do you configure and guarantee this isolation between the projects in the same cluster? These are all the challenges that come with multi-tenancy in Kubernetes.
So the question now is, how does Kubernetes itself, with its own existing resources, help you with these issues? First of all, Kubernetes has a namespace resource, which helps you separate each tenant and their Kubernetes resources into their own group, where each namespace has its own isolated network. So pods from one namespace do not directly see pods or other components, like services, config maps, and so on, from other namespaces. You can think of namespaces as virtual clusters that share the same control plane processes: they're managed by the same processes, but they're isolated from each other, virtual clusters within the same physical cluster.

And now that you have these namespaces, you go a step further, and on the namespace level you configure more isolation, right? Because, as I said, a namespace gives you network isolation, but you need some security isolation as well, right? You want to make sure that applications cannot access each other via their fully qualified domain names, and you want to make sure that resource quotas or resource limitations are set for each project. So on the namespace level, again using Kubernetes' own resources, no external applications or services needed, you can use Kubernetes policies, and there are different types of policies in Kubernetes. One of them is resource quotas; you also have security policies, you have network policies, and so on. You can use these policies to restrict what containers are allowed to do, you can restrict what traffic from and to the applications is allowed, and using resource quotas, you can constrain the resource usage of the workloads within each namespace.

So, with a specific example: you create namespaces for each project team, and you say each team gets this amount of resources, and you define that using the resource quota component. Then you say that applications within this namespace cannot be accessed by, or cannot themselves access, applications in other namespaces. And then you also define the constraints on what the applications are allowed to do within that namespace using the security policies; so you can say containers are not allowed to run as root, or pods are not allowed to escalate privileges, and so on. You can configure all of that on the namespace level. So as a result, each tenant gets its own namespace, and they all share the control plane resources, but they have their isolated environments, with their own resource usage, network, and security isolation for the workloads. So that's basically how multi-tenancy will work.
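As a small sketch of that setup, with hypothetical team names and numbers: a namespace per tenant, plus a ResourceQuota that caps what that team's workloads can consume:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"      # total CPU all pods in team-a may request
    requests.memory: 20Gi   # total memory all pods in team-a may request
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"              # maximum number of pods in the namespace
```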
Now, for many projects, what Kubernetes itself offers is not enough isolation. So there are many new technologies that go a step further to really make sure the isolation between these tenants is there. We have the concept of virtual clusters, which is implemented by different technologies, one of them being Loft, for example, and we also have concepts like multi-cluster, and multi-tenancy within a multi-cluster setup, and so on, where you have multiple Kubernetes clusters basically connected to each other. And again, if you go through the list of KubeCon talks, you will see a bunch of topics about these concepts, which again shows how important and how popular this topic is.

All right, so let me check the questions again. I see one that asks what my favorite talks would be, which ones I'm looking forward to attending. I haven't decided; for me it's really difficult, because I found most of the talks very interesting. I personally like talks that are more concept-oriented, that are not specific to a technology itself, because I want to know, okay, what are the concepts and how are different technologies trying to solve them? Or if I already have knowledge of a concept, then I would check out whether there's a new tool out there. But I haven't actually decided exactly which talks I'm gonna attend. The multi-tenancy and multi-cluster topics are definitely interesting, so I'm gonna be checking some of those out. And security is something that I personally do not have much expertise in, and that could also be something to look into, because, as I said, I was really surprised to see how many security topics there are on the talk list. And I also recognize this is a very important thing, because you have so many things happening there, and you have to make sure that you don't compromise security for all these super cool setups and configurations. Yes, that is absolutely true.

I see a comment: we can also restrict apps in one namespace from accessing a service in another namespace using network policies. That is absolutely true. You can configure network policies in your cluster; by default in Kubernetes, the network is allow-all, but using network policies you can actually limit which applications can talk to which other applications in other namespaces, or receive traffic from which ones. And an important note on that: network policy support depends on the network plugin that you're using inside Kubernetes, whichever network plugin you install in a self-managed Kubernetes cluster, for example. Flannel, for example, doesn't support network policies, I believe, but most others should. So that was a good point.
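To sketch what such a policy looks like: this hypothetical NetworkPolicy in a team-a namespace only allows ingress traffic from pods in that same namespace, so applications in other namespaces can't reach them (again, assuming the network plugin enforces network policies):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # applies to every pod in the team-a namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods from this same namespace may connect
```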
Someone writes: I love gVisor in terms of multi-tenancy. I don't know that tool, but is it actually in the talks? I think I saw it there. I see, mm-hmm. Yes, and another question: control plane services are not tenant-aware, for example the API server, DNS, the scheduler; what would be the best practices to prevent a bad tenant from abusing the shared resources? That is actually a very good question. I believe it's really difficult to achieve with Kubernetes' own resources, because you may, for example, even have another namespace where the monitoring is set up, right? For example, you have a logging namespace or a monitoring namespace where the monitoring or logging applications are running, and resources in other namespaces can have access to that. So how do you make sure they don't mess something up there? One thing would be restricting access to any other namespaces: basically giving each team their own users that only have access to their namespace, and then restricting that access for the applications within the namespace. But as I said, it is actually difficult to 100% avoid that. And that's why there is, for example, the virtual clusters concept that I mentioned, and I believe Loft is one of those tools. Their approach is to have a virtual cluster where you don't only have a namespace with workloads, but you also have the master processes in that virtual cluster, right? So you have a complete cluster, with the control plane and the worker processes, as a virtual cluster. So basically, you can't break out of that, and you have much better isolation than with a namespace, because they don't just use a namespace in the background. So I believe for that, you would actually have to look into some of the additional tools out there that try to solve exactly these problems.

Next comment: Flannel plus Calico, Canal, is now very common too. Do you mean using them at the same time, like in combination? Flannel on its own is a very, very common one. I have used the Weave Net network plugin, which has worked pretty well for me. But again, if you're using a managed Kubernetes service, you obviously don't have control over that, because it's managed; it is already installed and managed for you by the cloud platform. So it could be interesting to check out. The network plugin application itself also runs as pods, so if you print out the pods in the kube-system namespace, or in all namespaces, depending on the network plugin, you will actually see those pods running, and you will see which network plugin the cluster uses. And usually they will be deployed as DaemonSets, so each node will have its own network plugin pod. So you can check it like that.

Any other questions? I would actually be very interested to know which talks or which concepts you find the most interesting or most relevant for your current projects. All right. Well then, let's actually wrap this up. This is actually the end of our topics. I really hope I was able to give you some new, interesting insights into the Kubernetes world and its whole ecosystem, which, again, is huge and evolving super fast. And even though this talk was a super high-level overview, I can imagine that it's still a lot to take in, and still a lot of concepts, and lots of things were probably missing from it. So you may want to learn a bit more and dive into each of these areas, right? Like what GitOps is, or DevSecOps, or service mesh, et cetera. And many of the topics that we went through today I have actually already covered on my YouTube channel, so if you're interested in any of them, you can check the channel and find those videos there. And with that, here is the information on when KubeCon is, and you also have the links there. With that, I want to say thank you to all of you for hanging out with me today, and for all your questions and nice comments. And that's the end. Thank you very much.