I'd like to welcome you to our next session. It's called Distributed Monoliths with Quarkus: How to Properly Modernize Legacy Workloads. My name is David VanBalen, and it is my pleasure to introduce you to John Keem and Mike Battles. They are both architects at Red Hat, and they will be presenting this session. Before we start, a little logistics: if you have any questions that come up during this session, please put them in the comments, and we'll set aside some time at the end to answer them. Also, I want to let you know that this session is being recorded and will be available on the Red Hat Developer YouTube channel shortly after DevNation ends. With no further ado, let me hand it off to John and Mike. Great, thank you. All right, John, let's get going. 15 years ago, I met John. We were both working for a company, Amentra, that was acquired by Red Hat. Amentra's focus was really on legacy modernization. John, what is the oldest monolithic workload you've ever worked on? I've actually got two. One was a ColdFusion app where we had SQL queries right in the UI. Awesome. It was. And another one was where we were moving business logic out of stored procedures, making it easier, obviously, to debug and deploy, and actually more performant, which is something we had to prove to the customer at the time because they were unconvinced, but the data didn't lie. What about you? I worked on this Y2K-era system. It was a Sybase EAServer with CORBA, and we were able to migrate that to JBoss EAP on OpenShift, and it was a really great story of how we were able to take a monolithic code base and refactor it to work better with containers. But I think the first time we actually worked together was in 2014. We worked on a value-stream-aligned team for a microservice with a UI, and it was really interesting seeing that paradigm shift between the legacy way of working and a microservice-oriented approach. 
But I feel like we're always modernizing on some level, right? I think that's sort of the bread and butter of what we do. Yeah. And the interesting thing is that the definition of legacy has really changed over time, and with that, the definition of modern has also changed over time. So acknowledging the fact that you're on this continuous journey is, I think, really a critical one. You're never trying to reach a point destination; you're just continuing to move forward. So for those of you who don't know, this is the Golden Gate Bridge. I didn't know that. Oh, all right. There is a rumor, though, that the way it's painted is they start on one end, and they paint all the way in one direction, and then they turn around, and they come all the way back. Now, that's not entirely true, but it is closer to the truth than you'd think, because it is actually continuously being painted. And the reason is that the water and the salt from the sea air cause rust and steel corrosion, creating this need for continuous upkeep and maintenance. And the paint was actually changed a few times. In 1980, they shifted over to a more corrosion-resistant paint. And then again, in 1990, regulations came out where they had to remove the lead from the paint. And now, while painters get all the glory, and there are about 28 or so painters in all, they actually need to work on cross-functional teams. They work on teams consisting of engineers, ironworkers, operating engineers, electricians, laborers, and even carpenters. So what can we learn from all this? What can we do in software? How do we keep up with things like changing requirements, dynamic market forces, new privacy and data regulations, new laws, new frameworks, new tooling? And so for right now, I want everyone here on this DevNation talk to take a moment of self-reflection. How is your organization continuing to paint the bridge? How do you continue to do that in the face of these new policies and new technology? 
So modernization is a continuum. You're never truly done. You're always painting the bridge. If you stop painting the bridge for too long, it will rust over again. If you stop modernizing your code base for too long, you'll get technical debt, regression bugs, longer delivery times, code churn. To avoid that stagnation, organizations need to figure out their key metrics for success. You cannot improve or change what you do not measure. So this slide is actually from the Konveyor community, which is an open source community sponsored by Red Hat and IBM that is chartered to help organizations modernize and migrate their existing workloads. They have lots of tools and techniques, so check them out. But here, they show that the number one and two ways of measuring success are reliability and scalability, and metrics are paving the way for the need to change. All right, John. So you've talked about paint. Everybody here wants to talk about Quarkus. I learned today from Burr's keynote that Quarkus was invented in 2019. But what is Quarkus, and why is it so exciting? Yes. What is Quarkus? I'm going to steal this directly from the website here. Quarkus is a cloud native Java framework crafted from best-of-breed Java libraries and standards. So being a Java framework is especially important. Java has really stood the test of time and has only gotten better over the years. And with Quarkus, if you're writing an application destined for the cloud, you can use Java, meaning you don't need to reskill, retrain, or rehire your team. You can use your existing knowledge and investments to start writing faster, slimmer apps today. Okay, so I've talked about what it is, but why is Quarkus? Well, it's to bring Java into the cloud. Again, from the website here, Quarkus was created to enable Java developers to create applications for a modern cloud native world. So let's go through how it does this. Yes, how does it do this? 
One, it's extremely fast. Its internal components are all optimized for speed, even being able to compile down to native machine code. Next, there are advanced compiler optimizations going on, like removing dead and unused code paths and classes. So this ends up resulting in not only a faster-booting binary, but also one that uses less memory. So John, are there any concerns with native compilation if you have existing code? Yeah, for your older types of applications, maybe you're using a third-party library, something that makes use of a lot of reflection. Well, that might get stripped out during compilation. So as a result, you might not be able to compile to native, because you would need to go in and add annotations so the compiler knows what to do with all those classes. So since you may not be able to edit potentially older third-party libraries, you'd probably opt for non-native compilation and use the OpenJDK, where you'll still get a lot of the benefits. You can see the graphs on here, but those are some of the gotchas. Gotcha. So what about developer productivity? Oh, yes. So we're both developers. I think this speaks near and dear to our hearts. You know, it's got to not only run well, but it's got to be easy and fun for us as developers to write in. And I think Quarkus delivers that in multiple ways. I think the first one is the ability to live reload. Nobody wants to sit around waiting for long compilation times while coding. You just want to save and see the change. Quarkus has also got convention over configuration. That means less boilerplate code. There's the Dev UI, being able to visualize, when you're in developer mode, the different components that you're actually using in your application. And last is something I really like. It's called dev services. 
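As a rough sketch of what this looks like in practice, here's a hypothetical application.properties fragment, assuming the quarkus-jdbc-postgresql extension is on the classpath. The behavior described is Quarkus Dev Services; treat the exact keys as an assumption to double-check against the Quarkus documentation:

```properties
# Hypothetical sketch: declare the kind of database you want, but no URL.
quarkus.datasource.db-kind=postgresql
# With no quarkus.datasource.jdbc.url configured, Quarkus dev and test
# modes detect the gap and start a throwaway PostgreSQL container for
# you (Dev Services), wiring its URL and credentials in automatically.
```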
So if you're using a database dependency or something, that database will actually be pulled down as a container and started and run automatically when you're doing things like running your tests. So that makes it super easy for you to just focus on the code and get going. All right. So it's Java based. It's fast. It's lightweight. It has a lot of developer productivity tools built into it. What types of applications can we build with Quarkus? So this is another great point about Quarkus. It's a tool that can support your journey wherever you go, from monoliths to cloud native to microservices to serverless to event-driven architecture. It can support and deploy any of those types of use cases. It even supports the MicroProfile specification, so if you're familiar with that, you'll be very at home, and it supports Camel and functions as a service as well. So we've gone over some of the lower-level details, some of the new paint you could use. But what about the new painting techniques and the new painting approaches? So Mike, what are some of the high-level architectural approaches that we can use beyond just the language and runtime? Sure. So there's this quote from a great book, Monolith to Microservices by Sam Newman: most people don't start with a blank sheet of paper when it comes to building a system. It's sort of a great segue here, because it would be naive of us to say, just go forth and do everything in Quarkus. You have existing technical equity that you need to maintain in your systems. How do you do that? Gartner coined the five Rs; I think AWS sometimes has six Rs or even seven Rs. These are the recommended patterns for migration of legacy workloads to modern cloud native applications. And for our talk today, there are three that are relevant to us. So John, tell us about rehosting your legacy application. Yes, rehosting. Moving to the cloud. 
So with this approach, application changes are kept to a minimum, but you start to benefit from running the application on a cloud platform. So some examples of this might include moving from a private data center to a public cloud environment, or migrating from VMs onto something like OpenShift Virtualization so you can run your VMs alongside your containers on a common control plane. So the nice thing is that developers might not feel this, but that's exactly the point. It's the initial low-effort, small incremental step toward modernization. So with that, Mike, what is re-platforming? So re-platforming is when you take the existing legacy application and make minor changes to it in order for it to be deployable in a container. Re-platforming applications can open up opportunities to improve not only the core technology foundation, but also the delivery workflows, creating efficiencies and increased consistency. Legacy black-box functionality is preserved to maintain that technical equity. And with this approach, application changes are kept to a minimum, but you start to take advantage of the functionality of a Kubernetes container orchestration platform like Red Hat OpenShift. But still, with this approach, your developer productivity is not greatly enhanced. Changes can still break unrelated parts of the system, and testing is complex. So John, what is the next step on the modernization journey? Oh, yes. So once you've rehosted and re-platformed, the next step could potentially be refactoring the application itself. So this is probably happening in some shape or form already. You're fixing bugs, you're adding features, you're learning how to do things better. But refactoring is a big topic. So let's take some time and dedicate some meat to it. Yeah. All right. 
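Before diving in, here's a tiny, self-contained sketch of what refactoring means at the code level: the internal structure changes, but callers get identical results. The pricing rules here are invented purely for illustration.

```java
// A tiny behavior-preserving refactor: the "before" tangle of inline
// arithmetic is restructured into named methods, but callers see the
// exact same results. (Hypothetical pricing rules, for illustration.)
public class PriceCalculator {
    // Before: one opaque expression.
    static double totalBefore(double price, int qty) {
        return price * qty * (qty >= 10 ? 0.9 : 1.0) * 1.07;
    }

    // After: the same computation, with the intent spelled out.
    static double totalAfter(double price, int qty) {
        return withTax(withBulkDiscount(price * qty, qty));
    }

    static double withBulkDiscount(double subtotal, int qty) {
        return qty >= 10 ? subtotal * 0.9 : subtotal;  // 10% off big orders
    }

    static double withTax(double amount) {
        return amount * 1.07;                          // flat 7% tax
    }

    public static void main(String[] args) {
        // External behavior is unchanged: both versions agree.
        System.out.println(totalBefore(5.0, 12) == totalAfter(5.0, 12)); // prints true
    }
}
```

Note that the "after" version is also much easier to unit test, since each rule can now be exercised in isolation.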
So Martin Fowler has a great quote from his book about refactoring, where refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. So let's zoom out and talk about the architectures supporting refactoring. Yes. So let's do a quick refresher on the terms coupling and cohesion, which are going to be important to our conversation. The terms refer to the concept of interrelation between code modules, and they were first coined in this book, Structured Design by Yourdon and Constantine, which has this awesome sort of green cover with the illustration here. Coupling is an abstract concept that is the measure of the strength of the interconnection: how changing one thing requires a change to another thing. Cohesion refers to the degree to which elements in a module belong together. The code that changes together stays together. So clearly there is an interrelation between these concepts. The greater the cohesion of the individual modules in the system, the lower the coupling between the modules will be. And that's what we're looking for. We want high cohesion and low coupling. So let's talk about a very classic architectural pattern, the N-tier application. Yes. This is an architecture I'm sure a lot of you here are familiar with. We have things like a database tier, and then a back-end tier, and then maybe a user interface tier. So this in a lot of ways is a reflection of Conway's law, where we tend to organize our systems in the same way we organize our communication. Said another way, our software tends to look like our teams. So a common byproduct of this type of architecture tends to be that changes in one tier have a tendency to impact the others. For example, adding a new field on the front end requires the back end to be aware of this field. And of course, to persist it, you need a new database column. 
So your seemingly simple change has really spanned every tier now. So it can be said that N-tier applications have high coupling between tiers, where minor changes propagate through the entire stack, and low cohesion of business functionality, where all tiers are aware of all aspects of every domain. All right. So let's compare and contrast that with something like a microservice architecture. Microservices are services that are modeled around a specific business domain and encapsulate their own data. So think good OOP, object-oriented programming: keeping the data concerns within the business domain, not letting them leak out, and only making that data available through well-defined interfaces. Now, this necessitates practices such as strong interfaces, think interface-first development and outside-in thinking. And when you do this, changes in one service tend to stay in one service. So you can think about, in this particular scenario, we are modifying a customer, where that new field or new functionality is within that service and doesn't propagate to the other services. Now it can be said, because of this, that there is low coupling between services, again, where only changes to the interface impact other services, and high cohesion of business functionality, where all business functionality around a particular business domain is in one place. Now, another important aspect of this is that artifacts be independently deployable. So Mike, why is this important? Yes. So this is a fundamental point of this talk about monoliths versus microservices. And we will compare them on the Y axis by the level of modularization, which is also a measure of developer productivity, meaning how changes to code impact unrelated code and the cohesiveness of the application, versus the X axis, the number of deployable units, which is also a measure of operational complexity. 
So if you can imagine you have a system, a big old ball of code, where all the code, which could be tens of thousands or hundreds of thousands of lines, is compiled into a single deployable unit, we'll call that a single-process monolith, or a layered monolith like the N-tier application. Any change to a single line of code requires redeployment of the entire monolith. The entire monolith needs to be recompiled, retested, redeployed, and you run into bottlenecks as multiple features compete for limited testing resources. If you contrast this with a well-architected microservices-based system, each unit has a properly defined domain and is independently deployed and released. Each service is highly cohesive to its business functionality. This approach tends to push concerns away from the developer and onto the platform. A common pitfall we'd like you to avoid is that if you haven't properly defined your domains, you will fall into the dreaded distributed monolith anti-pattern, where you fail to achieve developer productivity gains because of the poorly defined domains. With this pattern, you get the worst of both worlds: the increased operational complexity of a distributed system, while not achieving deployment independence due to reliance on shared components, a database being a common concern here. A better starting point would be to build what's known as a modular monolith. By focusing on breaking up the internals to be more modular in nature, you're really focusing on your business domains, but importantly, leaving the deployment topology as is and not increasing the operational complexity yet, until you identify a performance reason that justifies scaling out. This approach can be very beneficial, as you get the developer productivity gains from the restructuring of the monolith without prematurely incurring the operational complexity of a distributed system. 
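To make the modular monolith idea concrete, here's a minimal, self-contained Java sketch; the module and method names are invented for illustration. Two business domains live in one process and one deployable, but one reaches the other only through a narrow interface, never through shared internals.

```java
// A toy modular-monolith sketch: one process, one deployable, but the
// Billing module reaches Customer data only through an interface.
// (Module names and data are hypothetical, for illustration only.)
import java.util.Map;

interface CustomerDirectory {            // the Customer module's only public surface
    String emailFor(String customerId);
}

class CustomerModule implements CustomerDirectory {
    // Internal state stays private to the module; nothing else may touch it.
    private final Map<String, String> emails = Map.of("c-1", "ada@example.com");

    @Override
    public String emailFor(String customerId) {
        return emails.getOrDefault(customerId, "unknown");
    }
}

class BillingModule {
    private final CustomerDirectory customers;  // depends on the interface only

    BillingModule(CustomerDirectory customers) {
        this.customers = customers;
    }

    String invoiceRecipient(String customerId) {
        return customers.emailFor(customerId);
    }
}

public class ModularMonolithSketch {
    public static void main(String[] args) {
        BillingModule billing = new BillingModule(new CustomerModule());
        System.out.println(billing.invoiceRecipient("c-1"));
    }
}
```

If a performance reason later justifies it, CustomerModule can move behind a network boundary and the interface becomes the service contract, which is exactly the deployment flexibility the modular structure buys you.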
All right, John. So yes, you and I have done this type of work numerous times over our careers. So at this point, what are some real-life, hands-on recommendations we can give our audience here today? Yeah, we wanted to leave you with some tangible next steps. And the first thing that you can do is start with planning it out. So the first step toward modernization of an old system is to corral all your existing source code, across multiple legacy systems and releases, into a single universal structure known as a monorepo, a mono repository. I once worked on a modernization effort where this part took months, as we kept finding more and more source code and kept adding to the monorepo. Keeping everything together in a monorepo enables the code to be easily refactored and managed together as it is modernized. Another important concern is testing. Many legacy systems lack sufficient testing. This is an important time to improve the testability of the system. So you can add unit tests for complex business logic, and integration tests and functional acceptance testing can also improve the confidence of any refactoring activities. So remember, if you change it, you're going to have to test it. And if you don't already have them, this is also a good time to modernize your build process and install a shared CI/CD pipeline across all the components of the monolith. Next up, you have your plan. You're ready to actually start decomposing it. This begins with domain-driven design, a methodology that focuses on building software by deeply understanding and accurately representing the underlying real-world domain. We did talk briefly about this earlier, but the proper identification of bounded contexts for your business functions is crucial to actually obtaining the benefits of microservices without accidentally creating the distributed monolith anti-pattern. But the most important thing you can do is start with a single service or domain. 
It doesn't have to be perfect, because you will learn a lot on your first attempt. As long as you're making progress that's measured in days and weeks, and not months, quarters, or years, you'll be able to keep that momentum up for your modernization journey. And we really don't have time to cover all the migration patterns here, but one of our favorites is the strangler fig. This pattern incrementally replaces a legacy system by building a new system around it and gradually redirecting functionality to it, until the old system is ultimately deprecated. Yeah. And once you break apart that functionality into smaller units, keep them isolated and don't let that functionality or data bleed between them. All right. We spoke earlier about thinking outside-in and focusing on the interfaces between the services. So to that end, we can use patterns like the API gateway for services that we're exposing to our end users, where we can add things like security policies, authentication and authorization, and rate limiting, and we can also add adapters in between services that perform context translations, so for domain-incompatible services, one service doesn't influence or corrupt the other. Next, we have instrumenting it. Logging and monitoring should already be standard practice, but for a microservice architecture, where you have multiple deployable units and functionality spread across all sorts of places, this becomes that much more important. Monitoring also becomes more critical, as you can now independently scale your services, seeing which services are under load or over-provisioned so you can scale them up or down in real time, and tracing is especially vital. As Mike previously mentioned, a microservice architecture tends to push concerns away from the developer and onto the platform. So if and when things go wrong, it can be a bit harder to perform root cause analysis across these services. 
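As a toy illustration of what distributed tracing tooling automates for you, here's a self-contained sketch of correlation-ID propagation. The service names and log format are invented: a request ID minted at the edge is passed along every downstream call, so logs from different services can be stitched back into a single request trace.

```java
// Toy sketch of correlation-ID propagation, the manual version of what
// tracing tools automate. (Service names and log format are invented.)
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class TraceSketch {
    // Stand-in for each service's log output, collected in one place.
    static final List<String> LOG = new ArrayList<>();

    static void log(String service, String traceId, String msg) {
        LOG.add(service + " [" + traceId + "] " + msg);
    }

    // The "order service" handles the request, then calls the
    // "payment service", forwarding the same trace ID.
    static void handleOrder(String traceId) {
        log("order", traceId, "order received");
        chargePayment(traceId);
    }

    static void chargePayment(String traceId) {
        log("payment", traceId, "payment charged");
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString(); // minted at the edge
        handleOrder(traceId);
        // Both log lines share the ID, so the request can be reassembled.
        LOG.forEach(System.out::println);
    }
}
```

In a real system the ID would travel in an HTTP header and the stitching would be done by a tracing backend, but the principle is the same.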
So tracing which services are being called, combined with good monitoring and logging, will help you immensely. So these have been just a few of our tips and tricks, and with our couple of minutes remaining here, we can potentially dive into some Q&A. Yeah. So David, do we have any questions in chat? So I have a question for you, John. What color is the Golden Gate Bridge? All right. I'm glad you asked. I've been burning to say it. It's called International Orange, and it's by Sherwin-Williams. And the formula is open, non-proprietary. So if you wanted to use it yourself or mix your own, this is the formula. All right. Everyone's got their pens and papers out. So the CMYK values are cyan 0%, magenta 69%, yellow 100%, and black 6%. All right. Cool. All right. Thanks, guys, for that great presentation. We don't have any questions in the chat right now, so I think we're good. If you missed that formula for the paint color, I'll remind you that we are recording this session and it will be available on the Red Hat Developer YouTube channel, so you can re-watch it later on. I definitely will. This is great. Definitely a good reminder that we shouldn't just put stuff on the cloud; we should do it the right way. So yeah, join us in around five minutes for our next session. It's around Quarkus Funqy, which is serverless for Quarkus. I will be moderating that one as well. So see you on this stage in around five minutes, or feel free to join the other stage, which has some other chats going on in parallel. Thanks again, Mike and John, and I'll see you guys later. Thanks everybody.