Hi everybody, thanks for joining this webinar on monoliths and microservices with Bite-sized Kubernetes. My name is Tamao Nakahara, VP of Developer Experience at Weaveworks, the company where GitOps was created. We're really excited to bring together two fantastic speakers on this topic. So many people we talk to in the Kubernetes community, and especially around the Flux project that we created and maintain, still struggle to move their monolith to microservices, and that challenge has brought many of you to Kubernetes, which people like Chris and Leo agree is a natural fit. That doesn't mean it's easy, though. So we're excited to bring these two speakers together: first to unpack what microservices actually mean for you, so that the journey can hopefully be a little less daunting. Chris Richardson is the creator of microservices.io and has been working in this space for over 10 years, so Chris will break that down in the first section. Then we'll move to the part about how Kubernetes itself can be adopted in small steps, which Leo Murillo, principal solutions architect at Weaveworks, will be sharing. I hope you enjoy it. Thanks.

Welcome to my talk on the microservice architecture. In this talk I'm going to answer the following questions: what is the microservice architecture, why should you use it, when should you use it, and how do you adopt it? But first, let me introduce myself. I've done a number of things over the past 40 years. For example, I developed Lisp systems in the late 80s and early 90s, I created the original Cloud Foundry back in 2008, and since 2012 I've been focused on what eventually became known as the microservice architecture. I help organizations around the world use microservices.

Here's the agenda for my talk. First, I'm going to describe why we need to deliver software rapidly, frequently, and reliably. After that, I'll describe the monolithic and microservice architectures. Next, I'll describe how to refactor an existing monolithic application to microservices. Finally, I'll talk about how the microservices pattern language can be your guide when designing an architecture.

These days the world is crazy, or more specifically it's volatile, uncertain, complex, and ambiguous. Not only do businesses have to deal with unexpected new competitors, they also have to deal with pandemics, wars, and so on. In order to thrive, businesses need to be nimble, agile, and able to innovate faster. Modern businesses are powered by software. This means that if you're responsible for a business-critical application, you're under immense pressure to deliver software rapidly, frequently, and reliably. Specifically, your organization needs to be high performing as defined by the DORA metrics. There are four DORA metrics. The first is deployment frequency, the rate at which changes are deployed or released into production; this needs to be high. The second is lead time, the time from commit to deploy; it must be low. The third is time to restore service: you need to be able to recover quickly from production outages. The fourth is change failure rate, how often a change to production causes an outage; this obviously needs to be low. In other words, you need to move fast and not break things. Unfortunately, your reality is probably very different.
Deployments are infrequent, painful, and often result in production outages. What's more, your monolithic technology stack is out of date.

To deliver software rapidly, frequently, and reliably you need what I call the success triangle. First, you need the right development process, specifically DevOps as defined by the DevOps Handbook. For example, developers commit changes frequently, and an automated deployment pipeline builds and tests each change and deploys it into production. Second, you need the right organizational structure: a loosely coupled network of cross-functional, autonomous teams. By loosely coupled I mean that a team can get their work done without constantly having to coordinate with other teams. The book Team Topologies is a must-read on this topic. And finally, you need an architecture that supports DevOps and loosely coupled teams.

You might consider asking Twitter whether to use the monolithic architecture or the microservice architecture. As you might expect, on Twitter there are lots of opinions, some more helpful than others. In reality, the answer to this question is that it depends. But on what? What are the criteria you should consider when selecting an architectural style? To answer that question, I now want to talk about architecture patterns for modern software.

The software development community is divided by what Neal Ford calls the suck/rock dichotomy: your favorite technology sucks, mine rocks. Much of the microservices versus monolithic architecture debate is driven by this mindset. A powerful antidote to the suck/rock dichotomy is patterns. They provide a valuable framework for making architectural decisions. A pattern is a reusable solution to a problem occurring in a context, along with its consequences. It's a relatively ancient idea, first described in the 70s by the real-world architect Christopher Alexander, and then popularized in the software community by the Gang of Four book in the mid 90s. What makes patterns especially valuable is their structure. In particular, a pattern has consequences: it forces you to consider both the benefits and the drawbacks of a particular approach. It also requires you to consider the pattern's issues, which are the sub-problems created by applying the pattern. A pattern typically references the successor patterns that solve those sub-problems. And finally, a pattern must also reference alternative patterns, which are different ways of solving the same problem. Later on I'll describe some specific patterns.

Sometimes patterns that are related through the predecessor-successor relationship and the alternative relationship form a pattern language: a collection of patterns that solve problems in a particular domain. Nine years ago I created the microservices pattern language with the goal of helping architects use microservices more appropriately and effectively. On the left are the monolithic architecture and microservice architecture patterns; they are alternative architectures for your application. All of the other patterns are direct or indirect successors of the microservice architecture pattern: they solve the problems that you create for yourself by using microservices. The pattern language can be your guide when defining an architecture. The way you use it to solve a problem in a given context is as follows. First, you find the applicable patterns. Next, you assess the trade-offs of each pattern. You then select a pattern and apply it.
Applying the pattern updates the context and creates one or more sub-problems. You then repeat this process recursively until you have designed an architecture.

I now want to describe the first two patterns: monolithic architecture and microservice architecture. These two patterns are alternative solutions to the same problem. The monolithic architecture structures the application as a single deployable or executable component, while the microservice architecture consists of multiple components, or services. The two patterns share the same context and forces. The context is the environment within which you are developing modern applications: as I described earlier, the need for loosely coupled DevOps teams to deliver software rapidly, frequently, and reliably, as measured by the DORA metrics.

Let's now talk about the problem that these two patterns solve. Roughly speaking, the problem is to design an application architecture, but more specifically we can frame it as how to group the application's sub-domains to form executable or deployable components, also known as services. A sub-domain models and implements a slice of business functionality, sometimes known as a business capability. Each sub-domain is owned by a small team that's responsible for its development. The sub-domains must be grouped to form executable or deployable components: a monolithic architecture consists of a single component, while a microservice architecture consists of multiple components or services.

Let's now look at the patterns' forces. In order for a network of small, autonomous DevOps teams to deliver software rapidly, frequently, reliably, and sustainably, you need an architecture with several key quality attributes. For example, the authors of the Accelerate book describe how testability, deployability, and loose coupling are essential. In addition, if you're building a long-lived application, you need an architecture that lets you easily upgrade its technology stack. I've generalized these architectural requirements into what I call the dark energy and dark matter forces. Dark energy and dark matter are concepts from astrophysics, but they are good metaphors for the conflicting forces or concerns that you must resolve when designing an architecture. Dark energy is an anti-gravity that's accelerating the expansion of the universe; it's a metaphor for the repulsive forces that encourage you to put sub-domains in separate services. These forces include team autonomy, a fast deployment pipeline, the need to support multiple technology stacks, and so on. Another dark energy force is the need to segregate sub-domains by their characteristics, such as resource requirements, business criticality, or regulatory requirements. Dark matter is an invisible matter that has a gravitational effect on stars and galaxies; it's a metaphor for the attractive forces that encourage you to put sub-domains in the same service. These are primarily generated by the operations that span sub-domains, and they include simple, efficient interactions between services, minimizing design-time and runtime coupling between services, and preferring ACID transactions to eventual consistency.

Let's now look at each pattern's solution in a little more detail. The monolithic architecture is an architectural style that structures the application as a single executable component. A monolithic application typically consists of a single code repository, with multiple teams working on different modules of the same application.
There's a deployment pipeline that builds and tests the application and deploys it into production. This architecture has numerous benefits and drawbacks. Because the monolithic architecture consists of a single component, it resolves the dark matter attractive forces: all interactions between the application's modules are local, and so they're simple and efficient; there's no runtime coupling; and the application can implement operations using ACID transactions, which are simple and familiar. But whether the architecture resolves the first three dark energy forces depends on the size of the application and the number of teams developing it. As the monolith grows, it becomes more complex. It takes longer to build and test, so the single deployment pipeline just gets slower and slower; even the application's startup time can impact the deployment pipeline. Also, as the number of teams increases, their autonomy declines, since they are all contributing to the same code base in a single repository. Even something as simple as pushing changes to the code repository can be challenging due to contention. Some of these issues can be mitigated through design techniques such as modularization and through sophisticated build technologies such as an automated merge queue and clustered builds. Ultimately, however, it's likely that the monolithic architecture will become an obstacle to rapid, frequent, and reliable deployment. Furthermore, the monolithic architecture cannot resolve the last two dark energy forces. It can only use a single technology stack: you need to upgrade the code base in one go, which can be a significant undertaking. And since there's only a single component, there's no possibility of segregating sub-domains by their characteristics. The monolith is inherently a mixture of sub-domains with different scalability requirements, security requirements, and business criticality.

The microservice architecture is an architectural style that structures the application as a set of components, in other words services. Each service is loosely coupled, independently deployable, implements one or more business capabilities, and is often owned by a small team. A service has an API that consists of operations and events. An operation is a behavior that can be invoked by a client, either synchronously using a protocol like REST or asynchronously using messaging. Events are published by a service when something notable occurs, such as the creation or updating of a business entity. A service can collaborate with other services: it can invoke their operations and consume their events. A service consists of code in a source code repository, and there are two types of code: application code and infrastructure code. The application code, written in a language such as Java or Golang, is the service's implementation. The infrastructure code, such as Kubernetes YAML or Terraform, configures the infrastructure needed to run the service. A service has a deployment pipeline that builds, tests, and deploys it. At runtime, the service consists of one or more service instances, typically containers, as well as infrastructure such as databases and message queues.
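To make the application-code versus infrastructure-code distinction concrete, here is a minimal sketch of what a single service's infrastructure code might look like as Kubernetes YAML. The order-service name and image registry are hypothetical placeholders, and a real deployment pipeline would stamp the image tag per release.

```yaml
# Infrastructure code for one service: a Deployment (the running instances)
# and a Service (the stable endpoint other services call).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 80
      targetPort: 8080
```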
An essential characteristic of the microservice architecture is that each service is independently deployable. This doesn't just mean that a service is packaged as a container image; it means that a service can and should be tested in isolation, using test doubles for its dependencies. It's then deployed into production without any slow and brittle end-to-end tests.

A service is loosely coupled. There are two types of coupling: design-time coupling, which I'll talk about later, and runtime coupling. Runtime coupling between service A and service B is the degree to which the availability of service A is affected by the availability of service B. For example, the order service is tightly coupled to the customer service if it cannot respond to a create-order request until the customer service responds to it. This reduces the availability of the create-order operation. Ideally, the order service should be able to respond to an HTTP POST without waiting for a response from the customer service. This is known as the self-contained service pattern, and it typically requires a service to use asynchronous collaboration patterns such as Saga.

Design-time coupling is the degree to which service A is forced to change in lockstep with service B. Two services are tightly design-time coupled when they regularly change in lockstep for the same reason. Tight design-time coupling reduces productivity because it requires time-consuming API changes and coordination between teams. One way to minimize design-time coupling is to design services so that they look like icebergs: an iceberg service has a small, stable API that encapsulates a much larger implementation. This enables the service's team to make changes without regularly impacting the service's clients. Loose design-time coupling also means that your services should not share database tables. For example, the order service should not access the customer table directly; instead, it should use the customer service's API. This is the database-per-service pattern.

Services are typically organized around business functions or capabilities. This follows from my definition of a service as a group of sub-domains, each of which corresponds to a business function or business capability. Services should rarely implement a technical function; for example, it's a red flag if a service is simply a wrapper around a database.

The relationship between teams and services is an interesting topic. Since one of the goals of the microservice architecture is team autonomy, it's common for each service to be owned by a single team. This is the service-per-team pattern. However, sometimes a service might be owned by two or more teams, especially when it resolves dark matter forces such as efficient interactions. It's important to remember that a service being owned by a small number of teams does not necessarily reduce team autonomy. Moreover, it can help you avoid an excessively fine-grained architecture, which is known as the more-the-merrier anti-pattern. In fact, a team should own more than one service only if doing so solves a tangible problem. For example, the fraud team might need an additional Python service in order to run a Python-based machine learning model.

The microservice architecture has various benefits and drawbacks. Compared with the monolithic architecture, the benefits and drawbacks have flipped: the microservice architecture pattern can resolve the dark energy repulsive forces, but potentially not the dark matter forces. You need to carefully design the microservice architecture, in other words the grouping of sub-domains into services and the design of the operations that span multiple services, in order to resolve those forces.
You should consider using the microservice architecture when one or more of the following is true: your application is large; a large number of developers are working on it; you need to use multiple technology stacks; or it's beneficial to segregate sub-domains by their characteristics. For example, segregating sub-domains by their resource requirements might improve scalability, and segregating them by their business criticality can improve availability.

So let's imagine that one or more of these criteria apply to your monolithic application. How exactly do you adopt the microservice architecture? There are numerous principles for migrating a monolith to microservices; I want to talk about these six.

The first principle is: make the most of your monolith. Remember, it's not an anti-pattern. If software development is slow, then improve your process; you will need to do that anyway when adopting microservices. Adopt DevOps as defined by the DevOps Handbook. Automate your deployment pipeline. Read Team Topologies and improve your organization. Similarly, if your application's technology stack is out of date, don't automatically assume that you should modernize it to microservices; sometimes migrating to a modern monolith is sufficient. You should only migrate to the microservice architecture if you have truly outgrown your monolith.

The second principle is: adopt microservices for the right reasons. You should only do it in order to resolve one or more dark energy forces. For example, one reason to use microservices is to improve team autonomy. Another is to support multiple technology stacks. You might also adopt microservices to segregate sub-domains by their differing characteristics.

The third principle for refactoring to microservices is that you should define a draft target architecture up front. Assemblage is the name of the architecture design process that I like to use. It takes your application's requirements as input and defines a service architecture that consists of one or more components. It's important to remember, however, that this target architecture is not set in stone. As you do the migration you'll learn more about both your application and the microservice architecture, so you should expect the target architecture to evolve.

The fourth principle is that the refactoring should be done incrementally, using the strangler application pattern. The evolution of your architecture looks something like this: at the beginning you just have the monolith; over time, more and more functionality is migrated out of the monolith into services, and you can also implement new features directly as services. The monolith gradually shrinks and might ultimately disappear.

The fifth principle is that you should focus on migrating the functionality that gives you the highest return on investment. Migrating a module out of the monolith into a service is time-consuming; for example, you need to untangle dependencies. As a result, you should only migrate a module if there's a benefit to doing so, in other words if it resolves one or more of the dark energy forces, such as improved team autonomy or a faster deployment pipeline. A good way to visualize priorities is to place the application's modules on a cost-benefit matrix and focus on the modules in the top-right quadrant.

The sixth principle is that you should measure success using the right metrics. Success is not measured by counting the number of services.
There's no inherent value in having services; what matters are the benefits of using them. There are two types of metrics that measure success. The first are improvements to the DORA metrics: you want to see a reduction in lead time, the time from commit to deploy; you want to see an increase in deployment frequency; and at the same time you want deployments to become much more reliable. The second are improvements to the other -ilities, such as scalability.

In the final part of my talk I want to explain how the microservices pattern language can be your guide when designing an architecture. Deciding to use the microservice architecture is just the beginning of the architecture definition process. The pattern language consists of solutions to the numerous problems that you create for yourself by deciding to use microservices. There are three categories of patterns. The first category is application-focused patterns, which include patterns for designing operations that span multiple services, such as the Saga pattern and the CQRS pattern. The third category is infrastructure-focused patterns, which include the deployment patterns, except, as it happens, serverless. In between is the second category: patterns that are a combination of application code and infrastructure. For example, developers write application code that uses infrastructure services; most observability patterns fall into this category. The serverless deployment pattern is also in this category, since it's a combination of a programming model and infrastructure.

Let's now look at a few of these patterns. There are two database architecture patterns. The first is the shared database pattern. While this approach seems simple, it creates design-time and runtime coupling; as a result, it's almost always an anti-pattern. The second is the database-per-service pattern. A service's database schema is part of its implementation, and so is hidden behind the service's API. This pattern reduces design-time coupling between services. One drawback, however, is that transaction management is more complicated: an operation that spans multiple services cannot be implemented as an ACID transaction. Instead, you must use eventual consistency.

There are four patterns for implementing operations that span services. A command, which is an operation that updates data, can be implemented using the Saga pattern and/or the command-side replica pattern. A query, which is an operation that retrieves data, can be implemented using either the API composition pattern or the CQRS pattern. One drawback of these four patterns is that they are eventually consistent; as a result, using them is more complex than using ACID transactions.

The pattern language contains numerous patterns related to inter-service communication. The communication style patterns are remote procedure invocation and messaging. Remote procedure invocation style communication, such as REST, is simple, familiar, and easy to use. One drawback, however, is that it can create excessive runtime coupling, which reduces availability; as a result, it's a pattern you should use very carefully. The other communication style pattern is asynchronous messaging. There are various flavors of asynchronous messaging, including events. The application typically uses a message broker such as Apache Kafka or RabbitMQ; alternatively, it might use a brokerless messaging mechanism such as webhooks. Asynchronous messaging is more complex.
However, a key benefit is that it reduces runtime coupling between services.

The pattern language includes several deployment patterns, including service-per-VM, service-per-container, and serverless deployment. Each pattern has different trade-offs. My recommendation is to use serverless deployment on a public cloud, provided that it's a good fit for your application. Otherwise, I'd recommend using containers, or more specifically Kubernetes.

That's my part of this presentation. In summary, the best architecture for your application depends on the details of your application and your organization. The microservices pattern language is your guide when designing an application architecture, and the dark energy and dark matter forces are a very useful set of criteria for deciding between the monolithic and microservice architectures. If you decide to migrate your monolithic application to microservices, it's important to follow the refactoring principles; in particular, it's essential that you migrate to the microservice architecture incrementally. At this point, I'm going to hand over to Leo, who will talk about deploying services using Kubernetes.

All right. Thank you so much, Chris. That was very insightful, and I hope you all appreciate the really cool knowledge Chris just shared: microservices, monoliths, when to pick one or the other, and the differences between these architectural patterns. That was really awesome. Thank you, Chris. I want to take everything Chris just shared with you and bring it down to the world of Kubernetes. I want us to talk about how we can, and likely will have to, build, run, operate, and maintain workloads that follow both architectural patterns, and how we can do that on top of Kubernetes using all the capabilities that Kubernetes can offer.

Before I get to that, though, I'd like to introduce myself. My name is Leonardo Murillo, and I'm a principal solutions architect at Weaveworks. We are the GitOps company: Weaveworks coined the term GitOps, and we specialize in application life cycle and developer experience on top of Kubernetes. We have a very strong presence in the open source community. This webinar is sponsored by the CNCF, the Cloud Native Computing Foundation, and we have donated a few of our projects to the CNCF: Flux, which is a GitOps toolkit, and Flagger, for progressive delivery. We also build products such as Weave GitOps Open Source and Weave GitOps Enterprise to enhance and extend the capabilities of our open source solutions for GitOps deployment, policy as code, cluster life cycle management, and many other things. Look us up; we're doing a lot of cool things that will likely help your enterprise as well.

Now that I've mentioned enterprise, let's talk a little about what the reality of the enterprise is. It's very likely that for most of you listening to me now, greenfield is far from your reality. Most of the people I work with, most of our clients, are enterprise-level organizations with a lot of legacy code bases. They have huge monoliths that are mission critical, and those monoliths are basically not going anywhere anytime soon. It is very important to realize that, as much as there are all these different architectural patterns we can leverage today, monoliths are mission critical and they're going to stick around for a while.
That's because it takes effort and time to strangle them, or to build new capabilities using a different architectural pattern to replace them over time. These are applications that have been around a long time and, as I mentioned, they're mission critical; the risk and effort associated with modernizing them is non-trivial. It's important for everybody who has to deal with these applications, the developers building them as well as the operations, DevOps, and infrastructure teams managing them, to realize, as we probably already have, that it will take time to go from having monoliths in our infrastructure and our solutions to having just microservices.

The idea is that we want to choose a platform that will simplify the process of moving from one to the other, and into the future. Because while we're talking about monoliths and microservices today, Chris briefly mentioned functions as a service, or serverless as it's also known, and there will likely be other patterns that we'll have to adapt to, operate, and build for. The whole idea is that whatever we can do to abstract complexity away from these heterogeneous, complex environments is better for everybody: better for the developers building those services and better for the people operating those solutions. It's indispensable to reduce the friction so that all these different generations of technology can coexist. That's where I think Kubernetes provides us with a unique advantage.

Let's look at our agenda; this is exactly what we're going to cover today. First, we'll talk about the different solutions and capabilities out there that allow us to use Kubernetes as our holistic workload scheduler. Whether you're running VMs, containers, or functions (serverless), you can basically run anything and everything on the same platform, and that has a host of benefits we'll look into. A lot has already been said about running microservices in Kubernetes; it is, after all, their most friendly environment. So we're also going to look at the specifics of how you can run your monoliths on Kubernetes, and see that containers are not just for microservices: you can containerize monoliths too, and there's a whole slew of benefits you get from running your monoliths containerized on a common platform together with the rest of your code bases and services. What this enables you to do is reduce complexity as you start modernizing and moving forward. We'll see how to reduce the complexity of the pattern that extracts capabilities and puts them in their own little bounded contexts, strangling those applications, using Kubernetes primitives: services, ingresses, and other such objects. Then, once we're living in this new world where monoliths and microservices coexist on a common platform, we'll see how we can incrementally make changes to increase the cloud-nativeness of your applications as you go through this iterative, incremental process of modernizing from monoliths to microservices, or eventually just choosing to have both live on in your architecture.

Let's talk about Kubernetes as your holistic control plane. What I mean by that is Kubernetes as the control plane that manages everything in your organization.
There are solutions and capabilities out there that let Kubernetes become the single API for operating any type of resource you need to operate. Here are just a few of the options that I think are very relevant and valuable. We're not going to go in depth into any one of these, but I think it's important for you to be aware, if you're not already, of the different tools out there that allow you to use Kubernetes as the API for everything. There are other CNCF talks that are very valuable here as well, so look them up.

Kubernetes allows you to manage virtual machines just as you would any other workload. KubeVirt, a project also donated to the CNCF, allows you to declare virtual machines as Kubernetes objects; we'll see a sketch of this in a moment. This is one path for integrating workloads that run on VMs with containerized workloads, managing their life cycle, their discoverability, and how to reach them using a common pattern driven by Kubernetes: declarative configuration, services, and all the other primitives Kubernetes enables. Crossplane, another CNCF project, goes beyond VMs: you can use it to declare any managed cloud resource, across pretty much any public cloud out there, using the same patterns. And if you need to run functions as a service within your cluster, there are open source functions-as-a-service projects for that too. Underlying all these different types of workloads, of course, are containers, and we now have the Open Container Initiative, OCI, the standard for container images. The point being, Kubernetes is a growing ecosystem where you can effectively run any type of workload on the platform, following a common operating model, using declarative configuration, and basically reducing the complexity and friction between all the different teams that manage different areas of your architecture.
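As a concrete illustration of the KubeVirt idea mentioned above, here is a minimal sketch of a virtual machine declared as a Kubernetes object. The VM name and the containerDisk image are hypothetical placeholders; a real monolith would boot from its own disk image.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-monolith-vm          # hypothetical name
spec:
  running: true                     # start the VM as soon as it is created
  template:
    metadata:
      labels:
        kubevirt.io/domain: legacy-monolith-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example disk image
```

Once applied, the VM is managed through the same kubectl and declarative workflows as any pod, which is exactly the point: one operating model for old and new workloads.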
Now, let's talk about monoliths and microservices and how they can coexist, alongside any other cloud resource you have, serverless functions, or anything else. One way for a monolith to coexist is the one we just covered: if you're already running a monolith on a virtual machine, you can use KubeVirt. But let's take it a step further and think about containers. Your microservices are already containerized; they're already stateless and, hopefully, following the twelve factors and cloud native. It is possible to containerize your monolith as well, with caveats, if you know where to look and how to approach it. This is one key step toward having your monolith and your microservices living together as containers on a common platform.

Let's first look at the four different areas you're going to have to look into; we'll double-click on each of them as we go. Stickiness and statefulness is the first. Most monoliths were built in a world where you weren't really aiming for statelessness: they actually hold state, and because they hold state, you need session affinity. If I make a request, all my subsequent requests need to go to the same instance of that workload, because that's where my state is being kept. So when you containerize a monolith, you need to look at how to handle stickiness and statefulness.

Because of the way state and sessions are managed, you can't always use the same patterns you use for scaling microservices when you're running a containerized monolith; cloud native patterns don't always apply. What I mean by that is that you usually can't do horizontal autoscaling. Horizontal scaling, for anybody who doesn't know, is when you create more instances of the workload to handle more capacity, as opposed to increasing the resources given to any one instance. So you're going to have to look at how to scale your monolith in this new environment.

You'll also have to look at how to build the artifact. Most monoliths depend on, and assume, a lot of characteristics being met by the underlying nodes that run the workload, and those nodes are usually not immutable: they're not replaced with new versions, but rather reconfigured to run the new version when a release happens. So we're going to have to look at how to go from mutable to immutable, and how to handle configuration.

And of course, build size and time. Monoliths are usually larger than microservices; they're larger code bases and they do a whole bunch of different things. So we want to make sure we're optimizing these new artifacts we're building, not just for build size, which is important and we'll soon see why, but also for the time it takes to build, which has an impact. Chris talked about the DORA metrics: how quickly you can get a new version built, and how quickly you can enable your developers to iterate and release small changes, will be critical to the adoption of this new pattern and platform as you start to unify your microservices and your monoliths.

Let's double-click on each one of these, starting with stickiness and statefulness. Just to make sure we're all on the same page: stickiness means that whenever I make a request to a service, all my future requests will hit that same instance. And that's usually related to statefulness: there's something about my request that is kept and known by that instance, and my next request has to hit something that knows that same thing. There are a couple of places where you can enable session stickiness in the deployment of your monolith in Kubernetes, and which you use depends on where you need it; often you'll need it in both places. It can be done at the service level or at the ingress level. We'll look a little later in this talk at what those different components are; for now, just remember you can do it in both places. And as with most things Kubernetes, it's either just a change in the configuration of the object you're deploying or just an annotation. Annotations are, of course, dependent on the ingress controller you're using. Below is a quick example of how you do it with the NGINX ingress controller. You can do it with service meshes as well: you can do it with Istio, and you can do it with other ingress controllers; they each have a slightly different way of doing it. The bottom line, and it's very important, is that once you're going into this world of containerizing your monolith and running it together with your microservices, you need to make sure that your underlying infrastructure, whatever you're doing for ingress, supports this type of capability.
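Here is a minimal sketch of both options, assuming the ingress-nginx controller; the service name, host, and timeout values are hypothetical placeholders.

```yaml
# Option 1: stickiness at the service level, pinning by client IP.
apiVersion: v1
kind: Service
metadata:
  name: monolith
spec:
  selector:
    app: monolith
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # keep a client pinned to one pod for 3 hours
---
# Option 2: cookie-based stickiness at the ingress level (ingress-nginx annotations).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monolith
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "monolith-session"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monolith
                port:
                  number: 80
```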
Now, as I mentioned, session stickiness is mostly related to statefulness. There is a kind of object that allows you to manage the deployment of stateful workloads in Kubernetes: the StatefulSet. With StatefulSets you get stable, unique network identifiers; storage attached to your workload remains even across restarts; and deployment and scaling are ordered. Using these configuration areas together, you can manage the statefulness and stickiness of your containerized monolith within Kubernetes.

Now, we talked about scaling your monolith in Kubernetes, meaning enabling it to handle more load. With microservices you usually rely on horizontal pod autoscalers, which increase the number of replicas, the number of pods serving your workload. That's not always possible with monoliths, because of what we just talked about: state, stickiness, and so on. This is where the vertical pod autoscaler comes into play. Rather than spinning up more copies of your workload, it gives each instance more resources: more memory or more CPU. So the vertical pod autoscaler is what you usually want to use for monolithic deployments where stickiness and statefulness are relevant.

There's some stuff you need to watch out for, though. Because the vertical pod autoscaler keeps increasing the capacity requested by each of those workloads, you might reach a point where you're requesting more capacity than any one node is able to provide, which would make your workload unable to be scheduled anywhere. So that's something to be really mindful of when you configure your vertical pod autoscaler and when you define your node capacity; we'll talk shortly about how to isolate monoliths from microservices in terms of nodes. Another very important thing: you don't want to use the horizontal pod autoscaler and the vertical pod autoscaler together. That's just a recipe for disaster.
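Here is a minimal sketch of a VerticalPodAutoscaler targeting the monolith's StatefulSet, assuming the VPA components are installed in the cluster (they're an add-on, not part of core Kubernetes). The names are hypothetical, and the maxAllowed block is one way to guard against the node-capacity problem just mentioned.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: monolith-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: monolith               # hypothetical workload name
  updatePolicy:
    updateMode: "Auto"           # apply recommendations by recreating pods
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        maxAllowed:              # cap requests below node capacity so pods stay schedulable
          cpu: "4"
          memory: 8Gi
```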
Next, dependencies and configuration. We mentioned how monoliths are usually built in a way where they expect a lot from the underlying hardware, the machine or VM or whatever is running the workload. What I mean by that is they expect packages to be installed, they expect files to be at certain paths, and they're usually not immutable: the VM and its configuration are managed through change. Whenever a new package needs to be installed, it's installed on top of the VM; whenever a new version of the artifact that represents your monolith is built, it's copied over and replaces the previous version on the VM that's running it. This is called mutability: you're actually mutating the VM. Container images, by contrast, are immutable, and there's a whole bunch of value to immutability that I encourage you all to dig deeper into. We're not going to cover the benefits of immutability here, but being immutable means you're packaging everything your application needs to run into a single artifact, the container image.

Now, that means there's going to be some effort when you sit down to write the Dockerfiles, which, for anybody not familiar, are the configuration files that define how to build your container image, for the image of your monolith. You're going to have to identify what packages it relies on, and what operating system it expects on the VM it currently runs on, so that you can use a similar container base image. There are ways to accelerate that process: if there's already configuration management handling those configurations, Ansible, Puppet, or any of those, it serves as a great foundation for identifying what to put in the Dockerfile so that the resulting image satisfies the expectations of the monolith artifact, the binary you're copying into that container image.

Now, there's a critical aspect to consider here, and that is that a lot of monoliths need to do things at the node level. I know that's ambiguous, but this is where security contexts come into play. A container running within Kubernetes is usually, or should be, very much isolated from the underlying node: it should have no rights to see, perform, or otherwise do anything at the node level. That's not always possible with monoliths. So if you're going to run both monoliths and microservices on a common infrastructure, a common Kubernetes cluster, it's important to isolate them, ideally onto individual nodes that only run that type of workload. This is also beneficial for capacity: you'll need a different balance of capacity for your microservices than for your monoliths, and you'll need different security constraints. Running your microservices and your monoliths in separate environments, on separate groups of nodes, is super important, particularly when your monoliths, or any of your workloads, require privileged security contexts. There are primitives in Kubernetes that let you do this: you can use taints, affinity, and security policies, both to specify exactly which nodes your containerized monolith can run on, and to deny any traffic that originates from those nodes, or from the namespace running your monolith, to any other node group or namespace where you're running non-privileged workloads. A sketch of the scheduling side follows below.
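Here is a minimal sketch of pinning the containerized monolith to a dedicated, tainted node pool. All names, labels, and the image are hypothetical: the taint keeps other workloads off those nodes, while the toleration plus node selector keep the monolith on them.

```yaml
# Applied once to the dedicated nodes (hypothetical node name and label):
#   kubectl taint nodes monolith-node-1 workload=monolith:NoSchedule
#   kubectl label nodes monolith-node-1 workload=monolith
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monolith
  namespace: monolith
spec:
  serviceName: monolith
  replicas: 1
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      nodeSelector:
        workload: monolith           # schedule only onto the labeled monolith nodes
      tolerations:
        - key: workload
          operator: Equal
          value: monolith
          effect: NoSchedule         # tolerate the taint that repels everything else
      containers:
        - name: monolith
          image: registry.example.com/monolith:1.0.0   # hypothetical image
          securityContext:
            privileged: true         # only if the monolith genuinely needs node-level access
```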
And we also talked about build size and build time. Monoliths are usually much larger than microservices because of what they do: they do a whole bunch more than a single service, which means the build is usually slower and the container images are usually larger. This has an impact on your storage. If you're following proper practices when writing those Dockerfiles, container images are built very efficiently: layers you've built in the past can be reused, so you don't consume all that capacity when storing them; every new version only stores the delta against the previously used layers. But as you probably know, every time a new container image is downloaded onto one of your nodes, it's stored on that node.

So now that we're doing immutability, which we weren't before, we have a large binary that changes all the time, even though the underlying dependencies might not be changing. If you're writing your Dockerfiles right, you've built them so that those dependency layers are cached and not rebuilt every time. But if the binary you're building for your monolith is pretty large, and you're doing multiple deployments, each with a new version of the container image for each release of your application, that means your node is going to hold multiple versions of your application at any given time, and if they're large, they can consume a lot of disk. So you need to be mindful of understanding the size of your build, use proper Dockerfile practices to reduce it to the smallest possible size, and make sure you have a pruning mechanism on your nodes so you can get rid of old images you're no longer using.

Now, it's very important for your initiative to get buy-in from your development teams. Your development teams are motivated by fast delivery. They're going to appreciate being able to run the monolith locally, easily, and they're going to appreciate how container images enable those DevOps benefits: the same artifact runs anywhere and operates the same way, fully automated. But you need to make sure you're quantifying build and release time, because if there's one thing I know developers hate, it's having to wait for builds and deployments. The faster you can make this cycle, the better for gaining traction within your organization, and, for any developer listening, I guarantee you'll agree with me here, the better for the developers now using this new platform to buy in. So make sure your build pipeline and deployments happen as quickly as they can; we'll see how GitOps can help with that.

And this basically gives us what a heterogeneous architecture will look like. You have a common API. You're using ingresses and services to route requests to different types of workloads: those that scale horizontally, your microservices running in their own namespace, and those that scale vertically and use StatefulSets and other capabilities, your containerized monoliths. All of those run on different nodes, with policy between them. So what does this look like? First of all, we have Kubernetes, the single API you're going to use to operate your heterogeneous architecture no matter what you're running. You create individual node groups for your different services: a specific set of nodes with specific capacity for your containerized monolith, which scales vertically. And you isolate your monolith from your microservices, because they might need different security contexts and they have different requirements, using namespaces and network policies, which disable any direct communication between these different workloads.
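Here is a minimal sketch of that namespace isolation, assuming a CNI plugin that enforces NetworkPolicy; the namespace names are hypothetical. It denies all ingress to the monolith's pods except traffic coming from the ingress controller's namespace.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-monolith
  namespace: monolith                # hypothetical monolith namespace
spec:
  podSelector: {}                    # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # only the ingress controller may connect
```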
And then, lastly, we'll talk next about how you can use ingresses and services to send requests to the proper destination, whether that's a microservice or a containerized monolith, and how you can use this pattern to start sending traffic to the right place as you strangle the monolith and build new capabilities over time.

So: services, ingresses, and incremental twelve factors. The whole idea here is that the service abstracts whatever is running behind it. It could be a containerized monolith, it could be a set of microservices, but it doesn't matter, because you're putting a service in front. The service is what handles session affinity or whatever else your monolith needs; the service is what abstracts access to it, and it's all that any other service, or any outside consumer coming through an ingress, needs to know.

Now, in this scenario we have our containerized monolith running three different sets of features or functionality: activity, users, and configuration. Very basic idea. And because this is a monolith coming from a world where it ran on a mutable VM, its configuration is actually a file on the file system. So we're going to use more Kubernetes capabilities to run it, with very little change, in this new containerized environment. We're going to mount a ConfigMap, which is a Kubernetes object that holds content, in this case let's imagine a JSON file, into the file system. That's very little change to the containerized monolith. The service is using session affinity, the pod is vertically autoscaling, so state is managed, stickiness is managed, and we didn't even have to change how the monolith loads its configuration, because it still loads it from the file system, with the ConfigMap as the origin of that configuration.

Now let's imagine we've extracted the users capability, because we now have a users microservice. This is that path of strangling the application: we had all the capabilities in a single monolith, and we're taking bounded contexts out of it and creating small services from them. These new services are actually cloud native, and they follow the twelve factors. So how would that look? In the previous slide we had one ingress sending all traffic to one service, with the monolith doing its thing. Now we're going to split something out: users is no longer going to live in the monolith. We use the ingress for that. We add a rule that says anything under /users now hits the users microservice, which has different rules: it's not using session affinity, and the same ConfigMap that we had mounted as a file in the containerized monolith's file system is now injected into the environment of this new microservice, and we can do horizontal scaling with it. A sketch of this setup follows below. This is the pattern you can use to eliminate components from the containerized monolith over time and push them into other services, sharing the same configuration but injecting it differently into different types of services as we go through this process of modernization.
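Here is a minimal sketch of that routing split and of the second way the shared ConfigMap is consumed; the service names, paths, and image are hypothetical placeholders.

```yaml
# Route /users to the new microservice; everything else still reaches the monolith.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users            # new twelve-factor microservice
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monolith         # session-affine service from earlier
                port:
                  number: 80
---
# The same app-config ConfigMap the monolith mounts as a file is injected
# here as environment variables, the more twelve-factor-friendly approach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 3
  selector:
    matchLabels:
      app: users
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
        - name: users
          image: registry.example.com/users:1.0.0   # hypothetical image
          envFrom:
            - configMapRef:
                name: app-config
```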
And of course, we need to talk about GitOps, because we're the GitOps company. So let's talk about GitOps as a common deployment pattern. To accomplish all of this, we need configuration, and that's the beauty of GitOps. Whether it's VMs, a containerized monolith with a vertical pod autoscaler and session affinity, or a fully cloud native microservice, no matter what type of workload it is, the configuration itself is declarative and can be stored as code. GitOps is all about that desired state being stored as code in an immutable repository, with GitOps automation handling its continuous deployment through reconciliation: comparing the actual state of the runtime against the desired state, and making sure that what you've declared in your code is consistent with what is actually running in the cluster. This is now the common pattern you use no matter what type of architecture, or how many different architectures, you have to support and operate.

The objective here is for you to be able to move fast without breaking things. I think that's one of the key risks in modernizing a legacy application or strangling a monolith: there's a lot of fear of breaking stuff, and it is high risk. Digging into legacy code bases and extracting components into their own new services is non-trivial; it takes a lot of effort, and it takes a lot of planning and strategy. We want to be able to do that without breaking anything. By using declarative configuration, we can do that much more safely, and using Kubernetes and containers as a common runtime platform makes it even easier. With tools such as Weave GitOps, you can automate where you feel comfortable, keep manual gates by using pull requests and similar mechanisms to give humans a place to interface with your delivery and deployment process, and integrate with any number of audits, alerts, and other mechanisms you need to make sure that you, and your developers, understand what's going on.
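As a minimal sketch of what that reconciliation loop looks like in practice with Flux, here are a GitRepository and a Kustomization; the repository URL and path are hypothetical placeholders.

```yaml
# Flux watches this repository for the declared desired state...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/platform-config   # hypothetical repo
  ref:
    branch: main
---
# ...and continuously reconciles the cluster against the manifests at this path.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: workloads
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/production
  prune: true                 # delete cluster objects that were removed from Git
```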
So what are the takeaways? First, monoliths and microservices can, and likely will, coexist for quite some time. Beyond that, operating all those different architectures on top of a common platform, Kubernetes, really reduces complexity and streamlines the path: it makes it easier for you to migrate and refactor over time. Because of all the native capabilities it offers, Kubernetes is ideal for this. And containers are not just for microservices; you can use them for monoliths as well. If you choose to run your monoliths as containers in Kubernetes, you need to be mindful of node isolation, capacity planning, and network policies, so that when they're running privileged workloads and less-than-ideal scaling mechanisms, they're running in a protected environment where they can't negatively impact other workloads that follow different architectural patterns. So thank you very much. I hope this was insightful. Please look us up: you can go to weave.works and check out our different tools, and look me up if you have any questions or want to talk about any of this. Thank you all.

I hope that was really useful. It was quite information packed. And again, we're here to help you both on your monolith-to-microservices journey and with how Kubernetes fits into it. If you have any follow-up questions, we have the contact information here for Chris and Leo, and I'm Tamao Nakahara; if you have any questions for me, I'm happy to help. We'll leave you with one of our recent blogs that hopefully will be useful if you're getting started with Kubernetes; it covers some core concepts and components. Thanks for joining us, and please do reach out to us if you have questions. Thanks.