All right, welcome everyone. We're back for our next session, Distributed Monoliths and Quarkus, with Mike Battles and John Keen. I'm going to turn it over to the two of them and let them roll with this great session. Just as a reminder, if you have any Q&A, please post it into the comments section and we'll try to field those at the end of the session. And as a reminder, if you would like to go back and watch this or any of the other sessions, they will be posted to the Red Hat Developer YouTube channel in the coming weeks. It'll probably take a couple of weeks; it might be into the first part of January with the holidays pending here. But look for it there, even if there was another session at the same time you wanted to attend; you'll definitely be able to go get that as well. So with that, I'll turn it over to Mike and John. Have a great session.

Thank you. All right, welcome everybody. This is Distributed Monoliths and Quarkus: how to properly modernize legacy workloads. Fifteen years ago, we were both in consulting, prior to the Red Hat acquisition, working on legacy modernization. So John, just to kick things off here: what is the oldest monolithic application you've ever worked on?

Ooh, that had to have been a ColdFusion app. It had SQL queries right in the UI. Or, I've got another answer: moving business logic out of stored procedures into Java services, making them easier to reason about, debug, and deploy. They actually were more performant, too, and we had to prove that with real data, so I'm not just saying that. What about you? Yeah, hands down:
we had a legacy Sybase EAServer CORBA application that we converted to JBoss on OpenShift. That was a really great story of how we were able to take a very monolithic code base and refactor it to work better with containers. But then we had a chance to work together on a value-stream-aligned team in 2014: a small, nimble, ten-person team building microservices with a front end, where the team owned the end-to-end feature delivery.

Good times. I remember those. Yeah, indeed. But even that project was modernization of a monolith. We've always been doing legacy modernizations, but legacy looked a lot different back then. Yeah, the definition of legacy changes; the definition of modern is a constant moving target. So I think an important insight here is to understand the fact that you're on a continuous journey, never really hitting a point destination.

There's a great quote here by Mr. Martin Fowler himself, and I think a necessary prerequisite for all of this is good code quality: "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." I really love that. It's communication: code is communication, for your future self, your future maintainer, and future people coming in to fix bugs and add features. So really being clear in the code is so critically important. And to expand on that: and maintain, and enhance, and support in perpetuity. Really, this talk is about this concept of technical equity. Everybody talks about technical debt; technical equity is the value of the old systems that need to be maintained, regardless of the choices that were made at the time.

So as software engineers, there's much we can learn from other engineering disciplines. I often look to civil engineering due to the similarities between architecture and architectural approaches. So this is the Golden Gate Bridge, which you may or may not know. I didn't know that. So there's a rumor that painters paint it completely in one direction and then turn around and come
all the way back, looping back and forth forever. So while not completely accurate, it's actually pretty close to the truth, and the truth is that while it's not a continuous loop, it is painted continuously. The water and salt from the sea cause rust and steel corrosion, creating a need for this constant upkeep and maintenance. And the paint was actually changed a couple of times, too. In 1980, a more corrosion-resistant paint came along, and in 1990, well, they figured out lead wasn't good in paint. So again, they had to change and use a whole new system of paint, and the processes to back that up.

Now, while painters get all the glory, and you can see it's pretty impressive here, about 28 painters in all, they actually need cross-functional teams to perform this painting job. These teams consist of engineers, ironworkers, operating engineers, electricians, laborers, and even carpenters.

So why do I say all this? What can we learn from all this? Well, how do we do things like that with continually changing requirements? Dynamic market forces, new privacy and data regulations, new laws in our business domain, new frameworks, new tooling. So pause here, everyone attending this talk today. Take a moment of self-reflection and ask yourself: how is your organization continuing to paint the bridge? How do we continue to do this in the face of new policies, new technology, new paint, new processes? Modernization is a continuum. You're never truly done.
You're always painting the bridge, and if you stop painting the bridge for too long, it rusts again. If you stop modernizing for too long, you get regression bugs, delivery times take longer, and code begins to churn. To avoid stagnation, organizations need to figure out their key metrics for success. You cannot improve or change what you do not measure. This slide is from the Konveyor community, an open-source community sponsored by IBM and Red Hat that is chartered to help organizations modernize and migrate their existing workloads. Lots of tools and techniques to help, so you should definitely check them out. But that community, in this self-reported survey, reported reliability and scalability as the number one and two ways to measure success. Metrics are paving the way for the need to change.

All right. Okay, John. So we all know what we're here for, and that is Quarkus. So let's get into it. What is Quarkus, and why is it so exciting? What is Quarkus? Well, Quarkus is a cloud-native Java framework crafted from best-of-breed Java libraries and standards. So being a modern Java framework is especially important. Java has stood the test of time over the years and has only gotten better and better. Now with Quarkus, if you're writing an application destined to run in the cloud, you can use Java, meaning you don't have to reskill, retrain, or re-hire your team, and you can use your existing knowledge and investments to start writing faster, slimmer apps today. So that's the what. Why Quarkus?
Well, let's bring Java into the cloud. I'm going to quote directly from quarkus.io: Quarkus was created to enable Java developers to create applications for a modern, cloud-native world. So let's go into how it does this. First, it boots extremely fast, and its internal components are all optimized for speed, even compiling down to native machine code. So you can see here the traditional cloud-native stack versus Quarkus running on OpenJDK, and then at the top there is Quarkus using GraalVM, aka compiling down to native machine code.

Now, it also is slimmer: a smaller memory footprint. There are advanced compiler optimizations that remove dead and unused code paths and classes, so that results in not only a faster-booting binary, but also one that uses less memory.

So John, are there any concerns about using native compilation with existing code? Yeah, so there are some issues, reflection being a big one, if you're using a third-party library that you don't necessarily have access to. So let me take a step back. Because of the optimizations that it does, it strips out code that it identifies as not being used at compilation time. So at runtime, when you're using reflection to actually load or use something else, the compiler isn't aware of that. So there are techniques we can use, like adding annotations to say: hey, I'm going to need this, leave it in. However, if you're using third-party libraries where you can't necessarily jump into the code and add those annotations, then you might have to stick to the kind of traditional OpenJDK method there, pictured in the middle. Okay, cool. So how about developer productivity?
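The annotation John mentions is Quarkus's `@RegisterForReflection`. For third-party classes you cannot annotate, GraalVM also accepts reachability metadata in an external JSON file. A minimal sketch, with a made-up class name, of a `reflect-config.json` entry:

```json
[
  {
    "name": "com.example.thirdparty.LegacyMapper",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

Placed where the native-image build can find it, this tells the ahead-of-time compiler to keep those members even though no static call site references them, which is the workaround when you can't touch the library's source.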
Yes, with both of us being developers, this is near and dear to my heart. So developer joy is an important concept, an important aspect of Quarkus. There are some things that we want from a modern framework, right? Live reload is the number one. I don't want to sit around waiting for the compiler to finish so I can see my changes. Quarkus has got that. Convention over configuration: I don't want to spend a lot of time writing configs. I want sensible defaults; I want things to work out of the box. Quarkus has got that. Dev UI: it's a way to visualize the extensions that you're using and some of the aspects of the code, to make it easier to see what you're actually doing. And the last one I really like: Dev Services, aka Testcontainers. An example: let's say I'm using a database. Well, now I have to figure out how to run that database. I could either pull it down and run it manually, or Quarkus can actually look at some of your configuration, see that you're using Postgres, and pull down a container and run it for you, so that I can do things more easily, like running tests, and just have the database there for me. It's so cool.

So what types of applications can I build with Quarkus? Yeah, this ability to support you wherever you are is why I love Quarkus, because it can support you anywhere in this modernization journey: from traditional apps to cloud-native apps to microservices to serverless and EDA. It supports the MicroProfile specification, so what you're already used to, writing Java for the web for instance, you can reuse here. Those annotations all still apply, and you can get up and go quickly. It also supports Camel and functions as a service.

So now that we've gone over some of those low-level details, some of the new paint you could be using, what about some of the new painting techniques and approaches? Mike, what are some of the high-level architectural approaches that we can use beyond just the language and runtime?
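The Dev Services behavior John describes is driven by configuration alone. A sketch of an `application.properties`, with hypothetical production values:

```properties
# Declare the kind of database the app needs.
quarkus.datasource.db-kind=postgresql

# No JDBC URL is configured for dev and test, so in dev mode Quarkus
# Dev Services notices the gap and starts a throwaway Postgres
# container automatically.

# Real connection details only for production (values are made up).
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://db.internal:5432/shop
%prod.quarkus.datasource.username=shop_user
```

The key design point: leaving the dev/test URL unset is what opts you in; supplying one switches Dev Services off and uses your own database.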
Great. So there's this great quote here, from the Monolith to Microservices book by Sam Newman, that most people don't start from nothing; most people don't start with a blank sheet of paper when it comes to building a system. So it would be naive of us to just say you can go and do everything in Quarkus and you're totally modernized. So Gartner coined these five Rs, then AWS came around and made it six or seven Rs, but really what they are is patterns that document various ways to migrate legacy workloads to the cloud. And for us there are three that are relevant to our conversation: rehost, replatform, and refactor. So John, tell us about rehosting your legacy application.

With this approach, application changes are kept to a minimum, but you start to benefit from running the application on a cloud platform. Some examples are moving from a private data center to a public cloud environment, or migrating from VMs onto OpenShift Virtualization, where you can run VMs and containers alongside each other. Now, developers might not feel this impact, but that's exactly the point: it can be an initial, low-effort step to move towards modernization. So Mike, what is replatforming?

I thought you would never ask. Replatforming is when you take the existing legacy application and make minor changes to it in order to deploy it onto a container. Replatforming of applications can open up opportunities to improve not only the core technology foundation but also the delivery workflows, creating efficiencies and increased consistency. Legacy black-box functionality can be preserved, maintaining that technical equity. With this approach, application changes are kept to a minimum, but you start to take advantage of the functionality of a Kubernetes container orchestration platform like Red Hat OpenShift. But with this approach, your developer productivity is still not greatly enhanced: changes can still break unrelated parts of the system, and testing is complex. So John.
What is the next modernization stage? Refactoring. Uh-oh, rewrite everything! No, just kidding. After you've rehosted and replatformed, that next step usually is to start modernizing the application itself, looking at the code. Now, this is probably happening in some shape or form already, right? You're fixing bugs, you're adding features, you're learning how to do things better, you're refactoring to update the code for the new context of things that you're constantly learning. And it's a big topic, so let's put some time to it.

So Martin Fowler has a perfect quote from his book about refactoring; that's twice now for Martin Fowler here: "Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior." I think Mike has the book. Oh, yeah. All right, but let's zoom out and talk about the architecture supporting that refactor.

Okay, let's put on our comp sci 101 hats here. The terms coupling and cohesion refer to the concept of the interrelation between code modules. This was first coined in the book Structured Design by Yourdon and Constantine, which has this funky green cover. Coupling is an abstract concept that is the measure of the strength of interconnection: how much of one module must be known in order to understand another module? Cohesion refers to the degree to which elements in a module belong together. High cohesion means the elements are closely related and work together to achieve a specific goal or functionality. So good is high cohesion, low coupling, and we have a couple of examples of this.

All right, so this is an n-tier application, probably an architecture that a lot of you here are familiar with. We have the database tier, the back-end tier, and the user interface tier. This is a reflection of Conway's law: we tend to organize our systems in the same way
we organize our communication, aka our software looks like our teams. A common byproduct of this architecture tends to be that changes in one tier have a tendency to impact the other tiers. For example, a new field on the front end requires the back end to be aware of this change, and then of course, to persist it, you need a new database column. Your simple change now spans every tier. It can be said that these n-tier applications have high coupling between tiers, where minor changes propagate through the entire stack, and low cohesion of business functionality, where all tiers are aware of all aspects of every domain.

Now let's compare and contrast that with something like a microservices architecture. So microservices are services that are modeled around a specific business domain, and they encapsulate and own their own data. Think good OOP, object-oriented programming: keeping data concerns within the business domain, not letting them leak, and only making that data available through well-defined interfaces. Now, this necessitates practices such as strong interfaces, think interface-first development, and also outside-in thinking. So you can see here that the changes we're introducing in one service tend to stay in that one service. So it can be said that there is low coupling between services, where only changes to the interface impact other services and changes are kept local to that service, and high cohesion of business functionality, where all business functionality around that particular business domain is again in that one place. So another important aspect of all this is that the artifacts be independently deployable. So Mike, why might this be important?
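The encapsulate-and-own-your-data idea can be sketched in plain Java. The service, types, and IDs below are invented for illustration, and the in-memory map stands in for the service's private database:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The service's public contract: callers see only this interface and the
// read-only summary it returns, never the internal order storage.
interface OrderService {
    OrderSummary placeOrder(String customerId, List<String> skus);
}

// What crosses the well-defined interface: an immutable view, safe to share.
record OrderSummary(String orderId, int lineCount) {}

class DefaultOrderService implements OrderService {
    // Data owned exclusively by this service; in a real microservice this
    // would be its private database rather than an in-memory map.
    private final Map<String, List<String>> orders = new HashMap<>();

    public OrderSummary placeOrder(String customerId, List<String> skus) {
        String id = "ord-" + (orders.size() + 1);
        orders.put(id, List.copyOf(skus)); // defensive copy: internals never leak
        return new OrderSummary(id, skus.size());
    }
}

public class Main {
    public static void main(String[] args) {
        OrderService svc = new DefaultOrderService();
        OrderSummary summary = svc.placeOrder("cust-42", List.of("sku-a", "sku-b"));
        System.out.println(summary.orderId() + " lines=" + summary.lineCount());
    }
}
```

Other services never see the `Map`; only the summary record crosses the boundary, so the owning service can change its storage freely.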
Okay, so this is a fundamental point of this talk here about monoliths versus microservices. We compare them based on their level of modularization, which is also a measure of developer productivity, versus the number of deployable units, which is also a measure of operational complexity. So you can imagine a system, a big old ball of code. I need to click. Yeah. Where all the code, it could be tens of thousands or hundreds of thousands of lines of code, is all compiled into a single deployable unit. We call that a single-process monolith. Any change to a single line of code requires redeployment of the entire monolith. This requires that the entire monolith is recompiled, retested, and redeployed, and you run into bottlenecks as multiple features compete for limited testing resources.

Contrast this with a well-architected microservices-based system: each unit has a properly defined domain and is independently deployed and released. Each service is highly cohesive to its business functionality. This approach tends to push concerns away from the developer and onto the platform. A common pitfall is that if you haven't properly defined your domains, you will fall into the dreaded distributed monolith, which fails to achieve the developer productivity due to the poorly defined domains. With this pattern you get the worst of both worlds: the increased operational complexity of a distributed system, while not achieving the deployment independence, due to reliance on shared components, i.e.
a database. Another logical starting point is to build a modular monolith, by focusing on breaking up the internals to be more modular in nature, really focusing on your business domains, but importantly leaving the deployment topology as is and not increasing the operational complexity yet, until you have determined a performance reason to scale out. This approach can be very beneficial, as you will get the developer productivity gains through the restructuring of the monolith without prematurely incurring the operational complexity of a distributed system.

Okay, so Mike, both you and I have done this type of modernization work more than a few times over our careers. What are some real-life, hands-on recommendations we can give our audience here today? Yeah, so here's our quick guidebook on how to start your modernization journey. First, starting with plan it: make some immediate changes to your existing systems that provide value and improve the delivery processes. Then decompose it, which is breaking out functionality into a new microservice. Then you need to isolate it: once the functionality has been broken apart, isolate it, untangle it, separate it out into independently deployable units. And lastly, instrument it: once they're separate, don't lose visibility.
Don't lose sight of them; instrument them. So if you remember that quote from Sam Newman: most people don't start with a blank sheet of paper when it comes to building a system. Prior to starting your modernization journey, there are some things you need to start out with, and really these are table stakes. If you take nothing else away from this conversation, listen to the next few recommendations. Legacy code is often stuck with antiquated practices and processes. Before embarking on a microservices modernization effort, you should perform the following things as table-stakes activities to gain immediate value. The point of a modernization exercise is to retain value in your existing systems, and that can begin today with changes to how your system functions. Every modernization effort that I start, I start the same way, by performing the following actions.

Now, this recommendation is opinionated and open to much conversation and interpretation, but the recommendation is that you adopt a monorepo. This term essentially refers to all the source code for an organization residing in a single source code management repository. Most famously, everything at Google is in a monorepo, from AdWords to self-driving cars. But also notably, they don't use Git. That is true; they have their own Google fu. But for our purposes, the monorepo should include all the application code being managed by the same team. Instead of pulling in dependencies using your build tool, collate all that source code together into a single repository. The advantage is that this allows your developers to more easily refactor using their IDEs, restructure functionality into common areas, and build boundaries around business logic domains. In one modernization effort I worked on, this step took months, as we kept finding source code in various couch cushions in order to corral the entire system together. In my opinion, going back to Conway's law, your source control should mirror your organization's development teams. As you embark on your
microservices modernization, you will likely have one team running many microservices until they mature, and eventually you may splinter the development teams to match. But for the initial development, a single source of truth can be very beneficial in the early stages of microservices development.

You know, kind of a non sequitur, but the term monorepo reminds me of that Simpsons episode where they had that intensive three-week class that finished with the teacher explaining that mono means one and rail means rail. That is simple; I like it. Mono means one, repo means repo.

To expand on that, okay: CI/CD. This is an extremely popular buzzword. I'm sure everyone here has heard this, if not already doing it, but it is table stakes for an application being developed and/or maintained in 2023. I will usually start with development of a continuous integration environment that compiles the application into a release artifact. I've worked on systems that had legacy build scripts where only advanced members of the team were able to perform release builds. This can be a huge hurdle to productivity. It can also require the compilation of individual per-environment application release binaries. These issues will also need to be resolved before the migration effort can be undertaken. This also implies that you are using a modern build process.
I.e., excuse me, Maven for Java-based applications. Now, this slide refers to the popular moniker CI/CD, which conflates continuous integration with continuous delivery. The former, continuous integration, is an integration of the development process for the production of consistent release binaries, and the latter, continuous delivery, is taking those release artifacts into a production environment. Microservices are as much an organizational workflow as they are a lightweight runtime design pattern. By eliminating the blockers in the organization's release process, you pave the path for microservices teams to deploy their own releases. This does not need to be fully automated, i.e. releasing immediately into production, but you should strive for a process in which any build could be delivered into a production environment.

Random question for you: what was the first automated testing or CI/CD tool that you used? Um, I think CruiseControl. Yeah, I remember that; that was a good one. All right, next question: besides Hudson and Jenkins, who is the most famous butler? Um, probably Alfred from Batman. But did you ever watch the show Soap? Nope. The butler there was named Benson, and I always liked that name. I wish they'd gone from Hudson to Benson, but we're stuck with Jenkins. Yeah, I also liked Soap. Not the XML SOAP, but regular soap. All right, so testing is important. All right, cool, next slide.
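One table-stakes practice worth sketching: a unit test that substitutes an external dependency (the mocking idea) with an in-memory fake, so the business logic can be verified with no database at all. Plain Java, every name invented:

```java
import java.util.Optional;

// The external dependency (in real life, a database) hidden behind an interface.
interface CustomerRepository {
    Optional<String> findName(String customerId);
}

// The business logic under test depends only on the interface.
class GreetingService {
    private final CustomerRepository repo;
    GreetingService(CustomerRepository repo) { this.repo = repo; }

    String greet(String customerId) {
        return repo.findName(customerId)
                   .map(name -> "Hello, " + name + "!")
                   .orElse("Hello, guest!");
    }
}

public class Main {
    public static void main(String[] args) {
        // The "mock": a hand-rolled fake, so no database is needed.
        CustomerRepository fake =
            id -> id.equals("c1") ? Optional.of("Ada") : Optional.empty();

        GreetingService svc = new GreetingService(fake);
        System.out.println(svc.greet("c1"));      // known customer
        System.out.println(svc.greet("absent"));  // unknown customer
    }
}
```

In practice you would write this inside a JUnit test and perhaps use Mockito to generate the fake, but the underlying mechanism is exactly this substitution.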
I have a little bit more. Yeah, so obviously testing is important. However, many legacy systems lack sufficient testing. When modernizing legacy systems, it is important that the business logic is retained and has equivalent functionality after the migration, and it is important that all layers of the testing triangle are properly represented. So, starting at the base: unit tests are tests at the method or class level, which can confirm that the algorithmic nature of your application works as expected. At this level it is common to substitute external services, such as databases or other APIs, using a process known as mocking. Integration tests are tests that validate that several classes or modules work together to perform business logic. At this level you are testing the integration with additional components such as the database, which may require setup or teardown scripts to properly initialize the expected state. API tests are a special kind of integration test that validate that the APIs perform as expected. These tests can also double as a sort of documentation for how to properly use the APIs. And then finally, at the top of the triangle, you have UI tests, which are the automated invocation of the user interface, which ensures customers will have a good experience in the application. These are the most expensive types of tests to automate, and they are typically slower than direct invocation of the underlying code or APIs. Testing is crucial in order to ensure that you are not introducing regressions into your application; sometimes this suite is known as the regression test suite. Remember: if you change it, you're going to have to test it.

Yeah, I really like your comment about documentation on how to use and call APIs by looking at the tests. I find that extremely valuable, especially when hitting a new code base, to look at the tests to see even how functions are called and how they're used. Yeah, definitely, tests can be a source of documentation. Okay, so once
you have performed these steps, you've restructured your code into a monorepo, you've developed a CI/CD pipeline, and you've enhanced the robustness of the automated testing, you are finally ready to start decomposing your monolith into microservices. It is important to remember that you are starting from a functioning system, and that you want to incrementally evolve this system into microservices by taking manageable steps. After each step, the system should pass the regression test suite and still function. Try to avoid adding functionality and decomposing in the same iteration. Focus on the task at hand and evolve the system organically. I like that.

Okay, domain-driven design refers to a process of deeply understanding and accurately representing the underlying real-world domain. This uses the domains which already exist within the business. The goal is to identify meaningful separation between business entities that corresponds to the known organization. For example, in this diagram we depict a common e-commerce platform. This defines four distinct bounded contexts: orders, which is how customer orders are received and processed; stock, which is how the physical warehouse inventory is managed; banking, which is how the money is invoiced and processed; and authentication, which is how users interact with the system and are granted access to functionality. Within each bounded context, the definitions of similar roles may be different. For example, a singular person could represent an authenticated user, the customer, and the payer. However, the use and the way each of these roles are handled will vary wildly depending on the bounded context. The separation of concerns ensures that each domain has high cohesion.

All right, so you're saying that even though a customer in real life is a real human, a singular human, you're saying in code we could potentially represent that human in multiple ways? Yeah, that's exactly right. You know, IRL, humans are multifaceted.
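A quick sketch of that multifaceted-human idea in Java, with one small record per bounded context; all fields here are invented for illustration:

```java
import java.util.Set;

// The same human, modeled three different ways depending on the bounded context.
record AuthUser(String username, Set<String> roles) {}          // authentication
record Customer(String customerId, String shippingAddress) {}   // orders
record Payer(String payerId, String preferredCurrency) {}       // banking

public class Main {
    public static void main(String[] args) {
        // One person, three context-specific representations; no context
        // needs, or even knows about, the fields of the others.
        AuthUser auth = new AuthUser("jdoe", Set.of("user"));
        Customer cust = new Customer("cust-7", "12 Elm St");
        Payer payer = new Payer("pay-7", "USD");
        System.out.println(auth.username() + " / "
            + cust.customerId() + " / " + payer.payerId());
    }
}
```

Note what is missing: there is no single God `Person` class shared by every domain, which is exactly the coupling the bounded contexts are designed to avoid.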
You could be a customer, an employee, a co-worker, a brother, etc. In each context or domain, you will show up in a different way based on social norms, and in the same way, in each bounded context an individual will show up in different ways based on that context. I like it.

Start small. So there's a quote here: a journey of a thousand miles begins with a single step. Now, one of the key takeaways from this talk is to remember that you need to start with a single service. As long as you're making progress that is measured in days and weeks, and not months, quarters, or years, you'll be able to keep that momentum up, which will fuel you towards your modernization journey. I really love this; it speaks to the power of lots of small improvements over time equaling big improvements.

Okay, so there are a ton of patterns covering legacy modernization, far too many to cover in this talk, but one special pattern is the strangler fig. In nature, the strangler fig is a special type of plant that grows around a stronger, more mature plant, using it for support. In software, this term was popularized by Martin Fowler. That's the fourth Martin Fowler quote. Third? We love Martin Fowler around here. It depicts the incremental replacement of a legacy system by building a new system around it, gradually replacing the legacy system until the old system is deprecated. In contrast to this, a popular approach is to just start from scratch and do a greenfield rewrite of existing system functionality. But this can be very risky and costly, as you may not re-implement the business logic to function the same way as before, which introduces regressions or other bad things to your business. So in order to reduce the risk, you can implement a strangler fig to steadily provide value with frequent iterations. Even net new systems should be designed in such a way that they could be augmented with a strangler fig pattern in the future, to avoid future rewrites. So, this is a very cool looking tree.
Have you ever seen this tree in real life? I don't know; I don't think so, and if I did, I probably didn't realize what I was looking at. Fair.

All right. So once you break apart the functionality, next we want to isolate it. Don't let that functionality or data bleed into each other. So let's start with data. While it might be tempting or easy to have a single database with multiple services updating it, don't. Avoid that temptation. Properly carving out a business domain means that a single service is responsible for a single business domain's functionality, and also is singularly responsible for that data. In this way, the data can be properly represented in a way that makes sense for that service, and all of its business rules properly enforced. In this image here, the catalog microservice should not be allowed to update the basket's data. Yeah, and again, sharing of databases is probably the number one anti-pattern with microservices, and it may be a sign that you actually have a distributed monolith and not proper microservices. Very, very true.

So the next strategy for isolation is to employ something called an API gateway: a single entry point for all your APIs. The idea here is that the actual APIs being exposed by your services can and will most likely change, as a result of consolidation or creation of new services, most likely due to your modernization efforts. So by adding an abstraction layer in the form of an API gateway, we can shield ourselves from these low-level API changes. Also, adding an API gateway allows us to move our cross-cutting concerns, like authn, authz, rate limiting, or other policies that you may have, to a centralized location before any real services are hit. Lastly, the API gateway can act to group and expose the underlying services in a shape that's most meaningful for your users. So let's say you have an admin user base: you could potentially regroup all your admin-related functionality behind a /admin URL and wrap policies around
this. And similar to an API gateway is the concept of a service registry, which can document your services, enforce schema validation on incoming requests, and perform service discovery, with a catalog the developers can use when integrating with new APIs. That's true; there's a whole ecosystem that can be formed around APIs.

Next: the anti-corruption layer. It's a really fancy term, but in simple words, it's the idea that one business domain should not impact or corrupt another. I think this makes the most sense in the example we're going to walk through here. So in this image, we have subsystem B on the right, and it's interfacing with subsystem A on the left. Now, say subsystem B was written many, many years ago, while subsystem A on the left is part of the modernization effort and has been fully converted into new microservices with new business domains. Now say that subsystem B has legacy models and objects that don't exist in the new world in A. So instead of corrupting A with these legacy models, and hand-jamming them into a domain where they don't make sense, we instead add a translation layer that can convert and connect the two worlds. We call this concept the anti-corruption layer, and it's a powerful technique when modernizing a large code base. This isn't necessarily a permanent fixture, either, because as soon as subsystem B can be modernized, we could potentially retire that anti-corruption layer. However, sometimes that modernization effort for B takes a long time, there isn't budget yet, maybe it's never modernized, so that anti-corruption layer might end up existing for a while. But this technique still allows you to modernize various parts of the system without impacting the entire system as a whole. So I think I get it, but John, could you give a real-world example of an anti-corruption layer? A real-world example. Okay.
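Such a translation layer fits in a few lines of Java. This sketch borrows the thermos-and-bucket analogy that follows; both model types are invented for illustration:

```java
// Legacy model still used by subsystem B: it only knows about water.
record Thermos(double litresOfWater) {}

// Modern model in subsystem A: generalized to hold any contents.
record Bucket(String contents, double litres) {}

// The anti-corruption layer: the only code that knows both models,
// so B's legacy shapes never leak into A's domain.
final class AntiCorruptionLayer {
    static Bucket toBucket(Thermos t) {
        return new Bucket("water", t.litresOfWater());
    }

    static Thermos toThermos(Bucket b) {
        if (!"water".equals(b.contents())) {
            throw new IllegalArgumentException("a thermos only holds water");
        }
        return new Thermos(b.litres());
    }
}

public class Main {
    public static void main(String[] args) {
        Bucket inA = AntiCorruptionLayer.toBucket(new Thermos(0.75));
        System.out.println(inA.contents() + " " + inA.litres() + "L");
        Thermos backInB = AntiCorruptionLayer.toThermos(inA);
        System.out.println(backInB.litresOfWater() + "L of water");
    }
}
```

The design point is the single chokepoint: when subsystem B is finally retired, only this one class is deleted, and subsystem A is untouched.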
I'll give you an arbitrary one from the physical world, using things on my desk. So let's say subsystem B is aware of a thermos object. The thermos object is nice; it holds water. But let's say subsystem A doesn't have a thermos object anymore. We've moved away from thermoses, and now we have buckets. Instead of just holding water, they can hold anything: other liquids, non-liquids. The bucket is just a better abstraction. So how do we connect the two? Well, we have the anti-corruption layer: when we send thermoses to subsystem A, we turn the thermos into a bucket. I don't have a bucket here, but you can imagine me creating a bucket and then pouring the liquid into it, and now subsystem A can use the bucket. Okay, and then vice versa: when A talks to B, we get out a thermos and hopefully pour whatever's in the bucket into the thermos, and there you go.

Yeah, so this could also be an example of a strangler fig as well.

I like that, connecting the dots. All right, next we want to instrument it. We want to instrument the system so that we still have visibility and can see what's going on. The practices we're about to talk about should be standard practices, but they become that much more important in a cloud-native context.

Good logging is an important practice, and hopefully one y'all are doing already. You should be pulling logs: collecting them, parsing them, transforming them, and pulling them into a centralized location, then normalizing, storing, and indexing them, and using some sort of visualization tool to actually see the valuable data you're collecting. Now, there are many, many logging stacks out there, but the point here is that you should be shifting left on this and making it part of your workflow earlier rather than later. Because, as I said earlier, when you have more, smaller services, it can be harder to find and dig through these logs, right?
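John's thermos-and-bucket analogy can be sketched as a small anti-corruption layer in plain Java. All of the type names here (`LegacyThermos`, `Bucket`, `ThermosBucketTranslator`) are hypothetical, purely for illustration; the point is that one translator class is the only code that knows about both models, so subsystem B's legacy shapes never leak into subsystem A's domain.

```java
// Hypothetical sketch of an anti-corruption layer; all names are illustrative.

// Legacy model owned by subsystem B: a thermos only holds water, in ounces.
record LegacyThermos(double waterOunces) {}

// Modern model owned by subsystem A: a bucket holds any contents, in liters.
record Bucket(String contents, double liters) {}

// The anti-corruption layer: the only place that knows both models.
class ThermosBucketTranslator {
    static final double LITERS_PER_OUNCE = 0.0295735;

    // B -> A: wrap the legacy thermos in the richer bucket abstraction.
    Bucket toBucket(LegacyThermos thermos) {
        return new Bucket("water", thermos.waterOunces() * LITERS_PER_OUNCE);
    }

    // A -> B: only water fits back into the legacy model.
    LegacyThermos toThermos(Bucket bucket) {
        if (!"water".equals(bucket.contents())) {
            throw new IllegalArgumentException(
                "a legacy thermos can only hold water, not " + bucket.contents());
        }
        return new LegacyThermos(bucket.liters() / LITERS_PER_OUNCE);
    }
}

public class AntiCorruptionLayerDemo {
    public static void main(String[] args) {
        ThermosBucketTranslator acl = new ThermosBucketTranslator();

        // Subsystem B hands over its legacy object...
        LegacyThermos fromB = new LegacyThermos(32);
        // ...and subsystem A only ever sees a Bucket.
        Bucket inA = acl.toBucket(fromB);
        System.out.printf("A sees: %.2f liters of %s%n", inA.liters(), inA.contents());

        // Going the other way, A's bucket is poured back into a thermos for B.
        LegacyThermos backToB = acl.toThermos(inA);
        System.out.printf("B sees: %.1f ounces of water%n", backToB.waterOunces());
    }
}
```

If subsystem B is eventually retired, only the translator class is deleted; neither domain model has to change.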
These problems become even more compounded in a cloud-native context. For example, if you're not disciplined in your logging practices, you might end up hunting through multiple places and grepping through thousands of log files to find one particular stack trace that happened maybe a week ago.

Yeah, I've definitely been there, with the Perl scripts and the remote copy commands collating all the logs together.

Yeah, a logging stack makes this a lot easier. Sure, definitely.

Closely related to the idea of logging is monitoring. It's different, though: here we're collecting metrics. It's vital to be able to see the health of all your services at any moment, and this becomes harder to do, as you can imagine, as you have more services. So, like logging, shift left on this and do it earlier rather than later. Create a practice of plugging all your services into a centralized monitoring system. This allows you to keep tabs on the health of all your systems and services, so that you can make more intelligent and proactive decisions, like scaling. That lets you take advantage of one of the main benefits of these smaller deployable units, which is to independently scale them as needed. In addition to standard CPU, memory, network, and file I/O metrics, you might also want to expose custom metrics, like the number of active sessions or queue depth, which can correlate to real-world traffic usage.
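The custom-metrics idea can be sketched in a few lines of dependency-free Java. In a real Quarkus service you would typically register gauges with a metrics library such as Micrometer and let a monitoring system scrape them; this standalone `GaugeRegistry` is just a hypothetical stand-in with the same shape: a named value, like queue depth, that is read live from the running system on every scrape.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.DoubleSupplier;

// Minimal, dependency-free sketch of custom gauge metrics. A real service
// would register these with a metrics library and expose them for scraping;
// the shape is the same: a named value read on demand from the live system.
class GaugeRegistry {
    private final Map<String, DoubleSupplier> gauges = new LinkedHashMap<>();

    void register(String name, DoubleSupplier supplier) {
        gauges.put(name, supplier);
    }

    double read(String name) {
        return gauges.get(name).getAsDouble();
    }

    // Render all gauges in a Prometheus-like text format.
    String scrape() {
        StringBuilder sb = new StringBuilder();
        gauges.forEach((name, s) ->
                sb.append(name).append(' ').append(s.getAsDouble()).append('\n'));
        return sb.toString();
    }
}

public class CustomMetricsDemo {
    public static void main(String[] args) {
        BlockingQueue<String> workQueue = new ArrayBlockingQueue<>(100);
        workQueue.add("order-1");
        workQueue.add("order-2");

        GaugeRegistry registry = new GaugeRegistry();
        // Custom business metric: current queue depth, read live on scrape.
        registry.register("work_queue_depth", workQueue::size);

        System.out.print(registry.scrape()); // prints "work_queue_depth 2.0"
    }
}
```

Because the gauge holds a supplier rather than a snapshot, every scrape reflects the queue's depth at that moment, which is exactly what an autoscaler would want to act on.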
Very, very true. Next, after logging and monitoring, we want to continue increasing our visibility into the system with tracing. We spoke about it earlier, but moving concerns into the infrastructure can potentially make it harder to find problems. Say you had a long chain of services being called, and one of them failed. Which one failed? Do we know why? Maybe the app crashed, but the platform has already self-healed and spun up a replacement. So having good tracing, and being able to visualize which services are being called, which ones are failing or succeeding, and when, is critically important. This also lets us do things like root-cause analysis. It's probably worth a mention here that the OpenShift Container Platform comes out of the box with an integrated logging, monitoring, and tracing stack that all plays nicely with Quarkus.

Love it, love it. All right, so this was our guidebook for how to start your modernization journey. At a high level, it's four steps. One: plan it, because failing to plan is planning to fail. Two: decompose it; start to break things apart. Three: isolate it; continue breaking things apart and untangling them. Four: instrument it; make sure that as you break things apart, you don't lose visibility into them.

Great, thank you, John. So I think with that, we're open for questions. I don't see any in the chat, so I've got one for you, John: what color is the Golden Gate Bridge?

Yes, I've got this committed to memory. Just kidding, I don't. It's actually called International Orange, by Sherwin-Williams. If you wanted to mix it yourself, the CMYK values are: cyan, zero percent; magenta, 69 percent; yellow, 100 percent (you probably guessed that); and black, six percent.

Cool, I'll be running out to the paint store right after this session. Actually, we've got a couple more minutes to go, so we'll wait until then.

Yeah, I'm not seeing any Q&A in the session here. So, great presentation. Thanks, Mike and John, for a very compelling and interesting session.
We certainly want to welcome everyone to hang around for our next session, which is "Going from Containers to Pods to Kubernetes: Help for Your Developer Environments," or feel free to hop over to one of the other tracks as well if there's something there that may interest you. As a reminder, these will all be available on the Red Hat Developer YouTube channel in a handful of weeks; it might possibly slip to the new year, depending on how things go with processing on the back end. But again, thanks, guys, for the session. We're going to drop the stream for the moment, and then we'll come back on at the top of the hour with our next session. Thanks a lot.

Thanks, guys.