Thank you very much, Christina. Good morning, good afternoon, good evening. Thank you everybody for joining this webinar today. My name is Beal Kingston. I'm a Solutions Architect at F5. Here at F5, I spend a lot of time helping customers and our community adopt NGINX and F5 technology. From looking at many environments and topologies, we see many patterns in modernization, microservices and collaborative technologies. So today, I'm going to speak primarily about the fundamentals of microservices. Now I know this is quite a broad topic, so I will do my very best to go through some of the most popular trends and technologies that we see when speaking to the community and to our customers. There are so many reasons why companies would shift from a more traditional monolithic architecture to a more cloud-native, microservices-based architecture. It could be to reduce costs, it could be to avoid single points of failure, it could be to shorten the application development cycle in a DevOps environment perhaps, but there are so many reasons why. So I will explain at a very high level some of the technologies that we see in some of the most popular applications in the world today. I will briefly describe what a modern app is, why some companies should modernize, and why DevOps is important. Of course, we cannot talk about microservices without mentioning things like containers and Kubernetes. We'll spend a little bit of time going through what an ingress controller is. We will also talk about service mesh, which is a very hot topic right now. Do you need one? What is it? Are you ready for one perhaps? And finally, it's important to understand why production-grade applications and solutions can save you time, simplify your architecture and reduce costs. So let's get started. This slide in particular here is a maturity model that represents different stages where companies tend to sit. The first stage is a monolith, or traditional application.
And this is usually an application that is built as a single unit. A good example of this would be a very simple application that has a database, a client-side user interface, HTML pages, for example, and a server-side application, let's say a PHP application. To make any alterations to this system, a developer may need to build and deploy an updated version of the server-side application, which can be very slow. It's very difficult to scale. And overall, making one change to the application might require a big bang release. The next stage is what we like to call hybrid, which is essentially a mix and match. We have some modern components and we have some traditional components. Most customers I've spoken with have fallen within this umbrella, because obviously we have some modern applications, but traditional applications are not going away anytime soon. So let's say, for example, you want to create some microservices-based functionality, but the core of the application is still the monolith. For example, maybe you have created an authentication service using modern technologies. A mobile application might be a good example of a hybrid application. In front of you, it looks like a very modern app, but behind the scenes, it might be a traditional app with old technology doing the bits and pieces. And then we get to the next stage, what we call microservices. This is a modern application, usually built from the ground up as multiple isolated services that are stitched together. Usually it's a single application, perhaps it's born in the cloud, but these services are most definitely developed, deployed and tested with a very sophisticated CI/CD pipeline. There's usually automated testing and release orchestration, maybe they're in Kubernetes, but usually these are very specific digital services, maybe something specific to your industry or business unit.
So as I mentioned, most of the companies that I've worked with have fallen into the hybrid and monolithic buckets, I suppose. We're seeing more and more companies adopt a hybrid model where they're trying to transition to more modern environments, whereas microservices is the ideal state that many companies shoot for. So here is an example of a monolith. This is a taxi application. And as you can see, we have components for payments, trip management, billing and so on and so forth. Most likely a very large chunk of code with many components, but here it's a single unit. It might have a single shared database. Releases are very slow, perhaps a waterfall methodology, and because all of the components of the application are linked together, if you wanted to make an update to one component, you might have to bring down the entire application. Maybe you're releasing a new update every six months as a big bang release. Services are very tightly coupled, they're very dependent on each other, and communication between each of these services is done using synchronous method calls. So we flipped this monolith on its head a little bit. We separated these components. We now have smaller pieces of code per service and maybe they're connected via APIs. You can see the REST API logo here. So now we have a microservices environment where each microservice runs in its own process. These may be deployed in containers or pods in Kubernetes, and they all communicate with each other using a mechanism such as a REST API. But the idea, to simplify, is one microservice for each function. This is not going to happen in a single step. It could be very expensive, very risky, and you don't want to rearchitect your entire application in one go. So this is going to take time. There is a pattern known as the Strangler pattern that you may be familiar with. The idea is that you add small pieces of functionality as microservices and repeat the process.
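To make the Strangler pattern a bit more concrete, here is a toy sketch in Python of the routing decision that sits in front of a partially migrated application: paths that have been carved out go to new microservices, and everything else still hits the monolith. The service names and paths are made up for illustration.

```python
# Toy Strangler-pattern router: paths already migrated out of the
# monolith are sent to their new microservice; all other paths still
# go to the legacy application. Names here are illustrative only.

MIGRATED_PREFIXES = {
    "/auth": "auth-service",        # e.g. the authentication service
    "/payments": "payments-service",
}

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, service in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return service
    return "legacy-monolith"  # everything else still hits the monolith
```

Over time, more prefixes move into the migrated table until the monolith handles nothing, which is the "strangling" the pattern describes.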
For example, the authentication service I mentioned earlier. But it's very important to adopt a DevOps mentality here. So having proper source control, having automation, having the teams organized around service ownership. Services in a microservices environment should all work together as loosely coupled rather than tightly coupled services. And each service should have one job, and it should do that job very well. They're isolated. So each microservice might have its own data, so it can evolve and scale by itself. And if you needed to update the application, you can just update that specific microservice. So with microservices, the idea is to take an application, take specific components, and compose them into loosely coupled and independently deployed services. Usually microservices are very maintainable and testable. They're usually smaller, self-contained, loosely coupled. We're using APIs, of course; modern applications oftentimes need an API. And this could be message brokers or event streamers also. It's possible that each microservice might have its own language. Maybe I have one microservice written in Java and another microservice written in PHP. But usually they're organized around business capabilities, separating services so that each has a specific capability. And obviously having teams organized so that you have specific teams managing specific microservices can definitely help. But coming from F5 and Nginx, proxying solutions for these environments are changing, in that a traditional load balancer might now be known as an API gateway, or an ingress controller in Kubernetes, or perhaps you're using a service mesh. So this is why we hear terms like API management, Kubernetes and service mesh solutions like Istio.
So there are many different areas of change in the migration from a monolith to microservices: we're moving to APIs, we're moving to the cloud perhaps, we're moving to containers, we're moving to more lightweight protocols like a RESTful API, and release cycles are changing. If you're within a DevOps environment, you might be releasing multiple times per day. With a monolith, the bigger the application gets, the longer and less frequent the releases become. As you move to microservices, we're releasing multiple times per day. You can have development of different microservices happen in parallel, which is a huge advantage, which brings me to collaboration across teams. Teams are managed differently; we're moving to a DevOps culture where DevOps teams are more involved with the entire release process, you have automation, and you can use many different programming languages of your choosing. Just to take a little step back here and focus a little bit on some of the key trends we're seeing within the microservices landscape. Nothing should be too surprising here, but I just wanted to share our perspective, what we are seeing in the enterprises. Organizations are modernizing at a rapid pace: about three quarters of enterprises in our State of Application Services survey reported that they are modernizing internal or customer-facing applications, with APIs and containers as the primary method, given their ability to combine modern and traditional components. DevOps is on the rise. DevOps is obviously very critical to agility, and things like automation can speed things up. You might be familiar with automation tools for infrastructure rather than applications, such as Terraform; with a tool like Terraform, you can set up and manage your infrastructure via APIs. So just to mention Kubernetes very briefly: as Kubernetes adoption continues to increase, at Nginx we're closely tracking the Kubernetes and cloud native journeys.
We often ask our community a number of questions around Kubernetes adoption, and we did so last year: 35% of our community said that they were using Kubernetes in production, another 35% said that they were actively exploring Kubernetes, and 30% said that they haven't adopted Kubernetes yet. When asked, "when do you plan to implement Kubernetes?", 72% reported plans to put Kubernetes into production within the next 12 months. So yeah, Kubernetes adoption continues to accelerate. It's a common strategy in modern app initiatives, and it's definitely a very important part of that microservices journey. Now let's take another step back and focus a little bit on the microservices technologies. So, containers. One of the most important technologies that allow microservices at scale is the container. Why are containers so popular? Well, compared to virtual machines, containers are quick to build, and they're small, which means that they can be stored and transported over a network. They're very well defined, they can run anywhere, and they're stateless as well most of the time. So containers are a key component of that microservices journey. They are a solution to the problem of how to get software to run reliably when moved from one infrastructure to another. To put it simply, a container consists of an entire runtime environment: an application, its dependencies, libraries and other binaries, and all the configuration files that are needed, bundled into one package. By containerizing the application and its dependencies, differences in OS distributions and infrastructure are abstracted away. If you look at a virtual machine, traditionally when using a virtual machine for applications, you're taking an entire operating system as well as the application. So you might have a physical server that runs three virtual machines: you have a hypervisor and a separate operating system running in each virtual machine.
So that's heavy. By contrast, on a server running containerized applications with Docker, for example, you have a single operating system, and each container shares that operating system kernel from the machine. So that means the containers are much more lightweight and they use far fewer resources than virtual machines. So this is the foundation for bringing portability to microservices applications, as well as some legacy applications, of course. It is possible to put legacy applications into a container; the difference obviously comes down to the size of the application, its dependencies and so on and so forth. There is an analogy known as cattle versus pets that you might be familiar with. The idea is that in the old way of doing things, we treated our servers like pets. For example, a mail server: you might have a mail server downstairs in the server room. If that server goes down, it's all hands on deck, the CEO can't get their email and it's a big problem. In the new way of doing things with microservices, it's more like cattle in a herd. If a service goes down, you just replace it there and then; you just spin up another container and it's not really a big deal. In Kubernetes, for example, you scale your applications horizontally. If a container completes its job, you can either destroy it or scale the deployment down. So it's a very dynamic environment; it's much different. Now, this isn't a Kubernetes course, so I'll do my very best to explain this at a very high level. Kubernetes is the orchestrator for containers. It's the magic that makes it all happen. When it comes to Kubernetes, there are multiple components. First of all, there are multiple nodes. You would have a Kubernetes master node, you would have Kubernetes worker nodes, and you'd have an internal network. You might have an ingress controller. You might have a load balancer.
So the master node will contain the key Kubernetes components. Your worker nodes will contain the containers of your applications. Each of these interacts via an internal Kubernetes pod network. And of course, you might have an ingress controller bringing traffic in. This is your layer seven tool. But the idea is that Kubernetes is all about container orchestration. It's about managing and coordinating the lifecycle of containers, especially in large dynamic environments. Software teams use tools like Kubernetes to control and automate many tasks. Some examples could be provisioning containers, deploying containers, scaling your containers up or down, moving containers from one node to another, exposing containers to the outside world, load balancing, and monitoring containers. The list goes on and on. The idea is that the Kubernetes cluster is the management and control plane for your containers running inside of the cluster. More on the ingress controller later. Now, there are absolutely some drawbacks when implementing a modern microservices architecture. There are so many changes happening in this environment. For example, a traditional application might have function calls that are very easy to configure, and everything makes sense. Whereas now, with a microservices application running in Kubernetes, for example, communication between your services is much more complicated. There are now network calls rather than function calls. Debugging is very difficult because it's not just one application on a single machine anymore. It could be one application spread across multiple machines. That application might have multiple microservices, and each microservice has its own set of logs and tracing, so finding the source of a problem can be very difficult. Then it comes to testing.
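As a rough illustration of the orchestration idea, here is a toy Python sketch of the reconciliation loop at the heart of systems like Kubernetes: compare the desired number of replicas to what is actually running, and compute the actions needed to converge. This is a simplification for intuition, not how Kubernetes is actually implemented.

```python
# Toy reconciliation: an orchestrator continuously compares desired
# state to observed state and emits the actions needed to converge.

def reconcile(desired: int, running: int) -> list:
    """Return the actions needed to move `running` replicas to `desired`."""
    if running < desired:
        # Scale up: start enough containers to close the gap.
        return ["start-container"] * (desired - running)
    if running > desired:
        # Scale down: stop the surplus containers.
        return ["stop-container"] * (running - desired)
    return []  # already converged, nothing to do
```

A real controller runs this comparison in a loop against the cluster's API, which is why Kubernetes self-heals: when a container dies, the observed count drops below the desired count and the loop starts a replacement.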
It might be easier to unit test with microservices, or test the functionality of individual components, whereas integration testing, meaning testing the entire application, is more difficult because the components are now distributed. So developers cannot test an entire system from their individual machines anymore. We are updating the application a lot more. Yes, that's a good thing, but you need to spend time automating and learning how to roll back if there are issues. If you're adopting a DevOps mentality, and you're writing an Ansible playbook or a Chef script for an automation task, you have to write another for rolling back if there are issues. That takes time. If a microservice has its own API, the APIs need to be consistent. If you're updating the API of one microservice, the other microservices need to be in sync and understand the new API version. One thing that isn't mentioned here that I should mention is that what we're seeing more and more is that companies are adopting a multi-cloud strategy when they're deploying their microservices. So they might be deploying application services across multiple clouds like AWS, Azure, GCP, and they might have some on-prem. When different microservices are spread across multiple different clouds, it can be very difficult to keep track. So all of these points here relate to that. Here's an example of an environment that we often see. Some of the most popular applications we see have something similar to this. It's important to note that every environment is different, but the flow is oftentimes very similar. So you have multiple components, and many teams looking after each of those individual components. You have the application teams looking after the application, the DevOps teams ensuring that updates are happening and the CI servers are working, and you have all your automation scripts.
You might have a security team looking after the web application firewall, or the infrastructure or networking team looking after everything else networking-related. Maybe you're using open-source solutions or enterprise solutions. Maybe you're using an Nginx ingress controller or an F5 BIG-IP for security. Perhaps you're using monitoring tools like Grafana and Prometheus, authentication tools like Okta or Keycloak, different security products, automation tools like Ansible, DevOps tools like GitLab. The list goes on and on. But this is the flow that we often see. You have your code repository, you have your CI/CD pipeline, we're deploying the application within containers to a Kubernetes environment, and you have external tools for monitoring, for security, for logging, and so on and so forth. What I'm trying to say here is that this is the data plane. The data plane is there to control and monitor how traffic is sent to and routed within our microservices application. So this is probably the most important point: we depend on the data plane. Kubernetes and microservices are often used hand-in-hand; they're almost synonymous these days, I guess. Kubernetes has emerged as the favored container orchestration platform. So it is the gold standard for modern container-based microservices, but the data plane, which is your traffic flow to your applications, handles all traffic from the client to the application container. And that includes load balancing, proxying, security, analytics, open tracing. All of those things are very important, and usually there are specific teams looking after specific areas within that flow: developers, infrastructure engineers, security teams, operations teams, and so on and so forth. So very briefly, some of the challenges and concerns that we've seen. We asked customers recently, what are your biggest concerns around Kubernetes? We got a multitude of answers, ranging from very small details to very broad concerns about configuration.
Learning curve was obviously very popular. How you handle persistent data in a Kubernetes cluster was another very popular point that was made. But the four big ones were knowledge, complexity, security, and scalability. So, knowledge: the biggest concern was not being able to understand the technology and how it works. And that makes sense, because Kubernetes networking is hard. Kubernetes security is hard. So there's a pretty steep learning curve for someone who's new to Kubernetes. You have to understand container networking. It's a completely different language. So that leads me into complexity. Even when Kubernetes is deployed in its out-of-the-box form, without management tools like OpenShift or Tanzu, it is complex. Kubernetes is pretty well documented, but the networking is completely different from what came before it. Containers are still a relatively new technology, and there are other complications like certificate management. Security is then the next point. To be honest, Kubernetes, when it's deployed out of the box, basically has no security turned on, which is quite a risk. Enterprises are learning Kubernetes as they deploy applications, and security is probably one of the main reasons why things take time, why adoption is slow. When I say security doesn't come out of the box, I mean that if you're deploying applications in Kubernetes with no web application firewall, you might start exposing applications directly with no proxy, and that can be a security risk. And finally, scalability. This is kind of ironic, because the idea is that you have a Kubernetes cluster and you can scale your applications or your pods at will. What we mean by scalability here is that because Kubernetes is so complex, a lot of the concerns are around the platform, scaling the platform. It's very challenging to operate a Kubernetes cluster at scale.
If you have multiple nodes, multiple pods within each node, and they're continuously scaling, you need to have resources for that. You need to have a team looking after the resource configuration of Kubernetes. It can be quite complex, and it does become a concern. Okay, so the ingress controller. Now, as with microservices, containers have become very popular because they provide a massive benefit to the application development process. They're very dependable, they scale, they provide a nice isolation layer, and many microservices applications rely on this technology to operate in Kubernetes. So traffic management into and within Kubernetes is often handled by a load balancer that we call an ingress controller. An ingress controller is responsible for bringing traffic into your Kubernetes cluster. Think of it as an Nginx proxy for now. It's configured a little differently to an Nginx proxy, but it's essentially an Nginx load balancer sitting in front of the Kubernetes cluster. It's primarily a layer seven HTTP proxy: it brings traffic in, deals with north-south traffic, anything you would use Nginx for, like TLS termination, load balancing and much, much more. That is where you would use Nginx. It uses something called an ingress resource to configure itself, and it does a lot more than just load balancing. Obviously it can scale like other containers do. It can monitor the status of your pods, do health checks, TLS termination and so on and so forth. But it's essentially your layer seven load balancer that brings external traffic into your cluster. It's important to note that there may also be another load balancer in front of the ingress controller. This could be a DNS load balancer, it could be a cloud TCP load balancer perhaps. Every environment is different, of course. It could be a BIG-IP if it's on-prem. So usually you have your DNS service that routes traffic to an ingress, and then your ingress handles all of the application traffic.
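To give a feel for what an ingress controller does at layer seven, here is a toy Python sketch of host- and path-based routing, the kind of rules you would declare in an ingress resource. The hostnames and service names are made up for illustration.

```python
# Toy layer-seven routing of the kind an ingress controller performs:
# match (host, path prefix) pairs against rules, most specific first,
# and return the backend service. Hostnames here are illustrative.

ROUTES = [
    ("app.example.com", "/api", "api-service"),
    ("app.example.com", "/", "web-service"),
]

def pick_backend(host: str, path: str):
    """Return the backend service for a request, or None if unmatched."""
    for rule_host, prefix, service in ROUTES:
        if host == rule_host and path.startswith(prefix):
            return service
    return None  # no rule matched; the controller would return a 404
```

In a real cluster these rules come from ingress resource manifests rather than a hard-coded table, and the controller reloads them as the manifests change.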
Now, one struggle that we're often seeing as companies adopt microservices within Kubernetes in a dynamic environment is the infrastructure, scaling the infrastructure. Another problem is that the actual application teams are finding it very complicated to operate, because they're designing the entire environment around application complexity: web application firewall policies, routing rules, rate limiting, API management and so on and so forth. Teams do struggle to organize around these complexities. I myself have more of an application slash DevOps background; I used to be a developer. So I'm very familiar with deploying applications, writing applications and getting them ready for production. But what I'm seeing a lot more is the complexity around networking and security when you start deploying these applications in Kubernetes. You have IP tables and security policies and all of these things. Application teams are starting to feel more like infrastructure teams, and security teams are starting to feel more like application teams, when getting stuck into Kubernetes. So, service mesh, very briefly. Let's return to our distributed microservices application here. The ingress controller is responsible for controlling traffic coming into the application, and this is known as north-south traffic. It has no visibility or control over traffic flowing within the application. We call this east-west. So if you have a microservices-based application here, and each of those microservices is deployed in containers and they're all communicating with each other via REST APIs, this is usually east-west traffic. So Nginx brings traffic in; once that traffic is within the Kubernetes cluster, it's handled within the application rather than by the load balancer. So this is where a service mesh might come into play, but it does depend on the application. Oftentimes the ingress controller can do everything you need to do.
But let's say you wanted to have mutual TLS, security between all of the application traffic within Kubernetes. Let's say you wanted to limit traffic from microservice one to microservice two. Very, very granular requirements we're talking about here, but that is why you would start looking at a service mesh. You use a sidecar proxy for those scenarios. So let's simplify it. You have your layer seven ingress controller. Traffic enters the cluster via the ingress controller, and this is all layer seven as traffic comes in. When traffic passes from the ingress controller to the service and from the service to the pods, it's layer three, layer four. So this could be okay if your application is simple and you don't have the requirements for encryption perhaps, but adding a service mesh here would give you the ability to manage east-west traffic. So this could be mutual TLS between your pods. It could be for better granularity, like open tracing, where you want to see traffic communicating within each microservice. It could be for DevOps methodologies, or things like A/B testing, canary and blue-green upgrades perhaps. You might decide to do some rate limiting between your pods. There are many reasons why you would do this. The biggest reason is oftentimes encryption, encrypting traffic within the cluster. So, some of the use cases of a service mesh, very briefly. It's important to understand that a service mesh only solves a very particular set of problems, like I mentioned. If these types of features are not required, you may be moving too fast. If you need mutual TLS or client-side authentication between services within Kubernetes, from the ingress to the egress, or for east-west traffic, then a service mesh might help you. If you need advanced load balancing, traffic splitting and A/B testing and access control within your cluster, that could be a use case also. Open-source tooling like Istio and Grafana, all those things.
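One of the east-west controls mentioned above, rate limiting between pods, can be sketched as a token bucket, the sort of policy a sidecar proxy could enforce between microservice one and microservice two. This is a toy Python illustration with made-up parameters, not any particular mesh's implementation.

```python
# Toy token-bucket rate limiter: the bucket refills at a steady rate
# and each allowed request spends one token, so short bursts pass but
# sustained traffic is capped. Parameters are illustrative.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                 # timestamp of the last check

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A sidecar applying this per source service gives you exactly the "limit traffic from microservice one to microservice two" behavior described above, without either application knowing about it.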
Perhaps you want to view metrics and analytics on the traffic patterns you see in your cluster. That is something you could do with a service mesh. Oftentimes, you don't need a service mesh: once traffic hits your ingress controller and the ingress distributes it across your individual application pods, that is all you need. So you should ask yourself: if you could say yes to these items here, then maybe you could benefit from a service mesh; if not, maybe not yet. It's a very complex technology. We're not trying to scare you away from this, but we are seeing a lot of companies trying to adopt a service mesh before they need one, and it causes a lot of complexity and headaches. If you've only started using Kubernetes, then maybe a service mesh isn't needed yet. If you have deployed an ingress controller and it works very well to deliver your applications, then maybe you don't need one yet. But if you have a fully automated CI/CD pipeline, you're using Kubernetes, you have an ingress controller deployed, you want to add mutual TLS for a zero-trust environment, or you want really granular traffic control within your cluster, then yes, a service mesh could definitely help you there. And the idea is that you inject a sidecar proxy within your Kubernetes environment to have more granular control in your environment. So let's have a quick review of what we learned today. Microservices is not a destination. There will be different clouds. You might adopt microservices on-prem. You might use different tooling. The number of tools in the industry is extremely high: automation tools like Chef, Ansible, Puppet; cloud platforms like Azure, GCP, AWS; load balancing tools and ingress controllers like Nginx, F5, HAProxy; the list goes on and on. You have multiple DevOps tools, CI tools like Jenkins, a DevOps platform like GitLab and so on. There is actually a periodic table of DevOps tools out there that's worth looking at.
But you will see DevOps methodologies. You will look at API management solutions. You will certainly come across Kubernetes, but it will take time, and it's too risky and expensive to move too fast. We spoke about service mesh and ingress controllers primarily because they are key to production-ready microservices: the ability to control how traffic reaches your applications or microservices applications is very important. That's where the data plane lies. So tools like Nginx and other ingress controllers play a very important part there also. Okay, that is the end of the presentation. Let's have a quick look at the Q and A. Okay. So, question here: both monoliths and microservices have their pros and cons. Do you think that monoliths will completely disappear in the future, or will the number of them just shrink? That's a great question. And the answer is, I'm not sure, because when it comes to all of the applications I've worked with, the monolith isn't going away anytime soon. Oftentimes, some of the most modern applications you see out there today are built around microservices, but some of the core components are still monoliths. I do think they will shrink personally, but not anytime soon. I think some applications are actually perfectly fine to be a monolith. It's important to understand that too. My previous slide went through some of the cons of moving to microservices, and all of those are relevant; things like complexity, having multiple containers for each of your application components, may not be needed if you have a very simple application that does one job. For example, let's say you have an online blog, and it essentially contains articles, blog posts and images. It's pretty static and you update it not too often. Then a three-tier monolithic application would be absolutely perfect for that: database, server-side application and client-side content. Okay.
Yes, there's a comment here: on top of Kubernetes concerns, security is most scary. Yeah, and that's a good point, because when it comes to security, you hear things like zero trust. You hear things like service mesh for mutual TLS between your application containers. What we're seeing a lot more is the ability to deploy a web application firewall inside Kubernetes. I know at Nginx, we do have the ability to deploy our application firewall on the ingress controller itself. There are multiple ways you could add security to the Kubernetes platform. You could put security outside, on the external load balancer bringing traffic into Kubernetes. You could bring security to the ingress controller, as I mentioned. You can start encrypting all the traffic within Kubernetes, so that no traffic can be accessed without encryption. There are a lot of ways to do it. But yeah, we're seeing a lot of companies adopt a zero-trust model. Next question: are microservices a viable option for small teams or individual application developers? Yeah, so I think my answer to the previous question answers that also. Some applications are perfectly fine to be a monolith. You can deploy an application using microservice methodologies also, but if the application is small and it doesn't need to be scaled horizontally, then you might not necessarily need a microservices environment. It depends on how you want to deploy it also, because if you want to deploy things in Kubernetes, or if you want to deploy things in containers, then obviously you want to use a more lightweight language. Modern languages like Node.js and Python, for example, are much more container friendly. Okay, what essentially is the difference between an ingress controller and a service mesh like Istio? It's a good question. There is confusion around it too, and oftentimes we see people mixing them up, actually, because an ingress controller, the idea is that it brings traffic in. It's layer seven.
It's an HTTP load balancer bringing traffic into your Kubernetes cluster. It works the same way as an Nginx server sitting outside or inside, and it does your TLS termination and load balancing and so on and so forth. But it focuses more on North-South traffic, bringing traffic in, whereas a service mesh focuses more on East-West traffic, which is traffic that's being distributed inside the cluster. So you might have an ingress bringing traffic into a microservice. That microservice might send requests to another microservice. The ingress has no visibility of that microservice-one-to-microservice-two request. With a service mesh, you could add another proxy within that layer to keep track of those traffic patterns, to add encryption to those traffic patterns, and a lot more than that. So: ingress, North-South; service mesh, East-West within Kubernetes.

Let's go into the questions here. Yes, so there's a question here: in the networking world, BIG-IP is used for load balancing. Can Nginx be a substitute for BIG-IP? Yeah, good question. I think they're both slightly different in terms of how you want to deploy them. Yes, both solutions can do load balancing. Nginx, I often think, is more associated with the application, so it's closer to the application, whereas BIG-IP is usually more of the network entry point into your application portfolio. In the previous example where we had a DNS service bringing traffic into an ingress controller, we actually used both of them side by side. BIG-IP could be used as your external load balancer, you can add a security layer to that also, and then that's managed by the NetOps teams. And you could also have an Nginx ingress controller or an Nginx proxy that's managed by the application teams, doing different things like JSON web token authentication or more layer seven features, I suppose. But it depends on the environment. You could use BIG-IP by itself or you could use Nginx by itself, depending on what you need.
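To make the North-South picture concrete, here is a minimal sketch of a standard Kubernetes Ingress resource that an ingress controller would act on. The hostname, secret and service names (`cafe.example.com`, `cafe-tls`, `coffee-svc`) are hypothetical placeholders, not from the webinar:

```yaml
# Minimal Ingress: the ingress controller terminates TLS and
# routes requests for cafe.example.com to the coffee-svc Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx          # which ingress controller should handle this
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-tls           # Secret holding the TLS certificate and key
  rules:
  - host: cafe.example.com         # FQDN-based routing, as described above
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: coffee-svc       # traffic is load balanced across this Service's pods
            port:
              number: 80
```

This manifest is sent to the Kubernetes API, and the ingress controller watches for it and configures its data plane (the underlying proxy) accordingly.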
Okay, there's a really good question here about the different use cases for an ingress controller versus an ingress gateway. That's a really good point, because Kubernetes is adopting a new type of resource called Gateway. So, when you use an ingress controller, you configure an ingress resource in Kubernetes: a manifest file that is sent to the Kubernetes API, and that configures the load balancer. The new Gateway resource for Kubernetes is a different way of configuring the load balancer: instead of using an ingress resource, you use the Gateway object. And there are plans to add that to Nginx and other ingress controllers out there. What are the benefits? What are the different use cases? I would say that if you write an ingress resource in Kubernetes, oftentimes it's the same regardless of what ingress controller you're using. Some ingress controllers have more features than others, and in order to use those extra features, let's say for example a WAF policy or mutual TLS, you might need to add an annotation in Kubernetes to extend the functionality, whereas I think an ingress gateway would allow for more custom or more advanced use cases like that natively.

Is it mandatory to use an ingress controller in Kubernetes? Technically no, it's not mandatory, but it's recommended. Here's why: within Kubernetes, if you deploy an application container, that container is running within the internal Kubernetes network, and there is no way to access the container externally unless you configure a way to get access to it. And within Kubernetes, how you do that is via a service. You create a service within Kubernetes, and that allows you to expose your application container that's running within that pod network. But if you expose your application container from the pod network directly, then you are exposing your app with no proxy. That can be okay depending on the application.
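As a rough illustration of the Gateway approach mentioned above, here is a sketch of an HTTPRoute from the Kubernetes Gateway API. The route, gateway and service names (`cafe-route`, `my-gateway`, `coffee-svc`) are hypothetical, and the exact `apiVersion` depends on which Gateway API release is installed in your cluster:

```yaml
# Gateway API sketch: an HTTPRoute attaches to a Gateway (defined
# separately) and describes routing rules, replacing the Ingress
# resource's host/path rules with richer, first-class fields.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cafe-route
spec:
  parentRefs:
  - name: my-gateway               # the Gateway object this route binds to
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    backendRefs:
    - name: coffee-svc             # backend Service, as with Ingress
      port: 80
```

One design point of the Gateway API is role separation: a platform team owns the Gateway object, while application teams own their own HTTPRoutes, rather than everything living in one ingress resource with controller-specific annotations.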
But let's say you have multiple applications in the Kubernetes cluster, which is the whole point of having a Kubernetes cluster: to allow microservices and apps to be spread across multiple nodes. You don't want to be exposing different pod IP addresses for each application or each microservice. You need to have some form of entry point, like an ingress controller, to bring traffic into the applications. That's one reason why an ingress controller is recommended: to have one load balancer to distribute traffic to all of your pods. Second of all, you most likely have a DNS service for the application. So when a request comes in to your website's DNS name, the ingress controller is responsible for resolving that and sending that request to the relevant application that matches the FQDN. That is something that would be a lot more difficult without an ingress. So I would say I wouldn't recommend it, but it can be done. If you have a single application that runs as a single container and has built-in TLS termination, then maybe you can just expose it directly without an ingress. But of course, that does depend on what you're trying to achieve.

What makes a language container friendly? Yeah, that's a good question. I don't think I have a single answer to that. I think it depends entirely on the dependencies of an application. If you're looking at a traditional application, it most likely has an external database. It might have external dependencies for the server side. There might be an app server. It can be quite difficult to containerize something like that. And when it comes to having a database within a Kubernetes environment, it's difficult to have a database within a container and scale it, for example. Containers are meant to be stateless, whereas a database isn't really stateless. So I think for something to be container friendly, it needs to be able to be destroyed and spun up again without losing data. It needs to be lightweight.
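The Service object mentioned above, which exposes pods on the internal network, can be sketched like this (the names `coffee-svc` and `app: coffee` are hypothetical placeholders):

```yaml
# Minimal Service: gives the pods matching the selector a single
# stable virtual IP and DNS name inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee        # matches the labels on the application pods
  ports:
  - port: 80           # port the Service listens on
    targetPort: 8080   # port the container actually serves on
  # type defaults to ClusterIP, which is reachable only inside the
  # cluster; type: NodePort or type: LoadBalancer would expose the
  # pods externally with no ingress proxy in front of them.
```

An ingress controller then routes external traffic to Services like this one, rather than each Service being exposed directly.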
So some traditional applications aren't lightweight. They might require lots of libraries and dependencies, and they just don't fit well in a container because they're too heavy. If a container is over a gigabyte in size, then that might not be very container friendly, depending on the app. But it's a very good question. It's a broad discussion, but the first two criteria I thought of, I suppose, are that containers are supposed to be stateless and that the dependencies and libraries need to be lightweight.

Okay, Christina, I think that's all we have time for today. Thank you very much for your time. I hope the webinar was useful. And please, if there are any questions, reach out to the contact page after the call.

Great, well, thank you so much to Mihal for his time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you're able to join us for future webinars. Have a wonderful day.