Hello and welcome to another OpenShift Commons briefing. This time, we have one of my favorite Red Hatters, James Faulkner from the Red Hat Middleware team, coming in, and he's gonna talk to us about building cloud native applications using some of the RHOAR work being done at Red Hat and WildFly Swarm. So I'm gonna let James introduce himself. The format of this event is that we do the Q&A in chat and there's a live demo, and at the end of the whole session, when he finally takes a breath, we will have live Q&A. So James, I'm gonna let you take it away. I know there's a lot of content and I'm really looking forward to this one. Okay, great. Thanks, Diane. So hello, everyone. As Diane mentioned, my name is James Faulkner. I am a technical marketing manager in the Red Hat Middleware group. Today we're gonna be talking about building cloud native applications with WildFly Swarm and RHOAR. I'll give you a brief intro to RHOAR and WildFly Swarm, but most of the time we'll spend doing a hands-on demo so you can get a feel for what we're actually talking about here. So I won't spend too much time on the marketing fluff at the beginning. Just briefly, there were a couple of other OpenShift Commons briefings late last year around RHOAR and building cloud native apps with Spring Boot. You can see the listing here. We also did one on a monolith-to-microservices journey describing how you can modernize your existing applications using RHOAR. So check those out if you're interested in more in-depth detail around modernization, or if you're interested in Spring Boot or RHOAR in general. So again, I will give a brief introduction to RHOAR. RHOAR stands for Red Hat OpenShift Application Runtimes, and it's a new product from Red Hat.
It's a collection of curated, certified, and supported runtimes and frameworks targeting microservice and cloud native application development, for companies who either have existing applications they want to modernize or are building net-new cloud native applications. You can use the components within RHOAR to do that. So here's a marketecture diagram of what RHOAR actually contains. At the top, you see tested and verified frameworks. These are popular frameworks that have proven to be very useful and effective in building cloud native applications, like Spring Boot. There are also a number of open source projects from Netflix around fault tolerance and other aspects of microservice development. Underneath, we have the supported microservice runtimes, which include JBoss EAP, which we've had for a number of years. We've also added other popular runtimes like Eclipse Vert.x for building reactive applications. We have Node.js for building reactive JavaScript applications, and we also have embedded Tomcat for building more lightweight web applications. And then lastly, we have WildFly Swarm, which we'll get to in a moment. So all of these are packaged into a product from Red Hat and supported through our normal support channels. We also have a launch experience, which I'll get to in a moment. But getting down to the nuts and bolts: when we talk about these frameworks, how are they actually delivered to you as a customer or as an open source developer? Well, three of them at least are Java based, so you typically use things like Maven or Gradle. At Red Hat we focus on Maven, so we have artifacts in Maven repositories which deliver these components to you. For non-Java, like Node.js in this case, it's delivered as a container on the Red Hat Container Catalog. So we have official supported channels for getting the bits into your projects, and you'll see this in the demo as well.
So I also mentioned the launch experience. Before you get deep into building cloud native apps with RHOAR, if you want to see what RHOAR is all about, head over to developers.redhat.com/launch. We have a guided, wizard-like experience for generating new projects, and these projects exemplify cloud native aspects like fault tolerance with circuit breaking, externalized configuration, securing microservices, and so forth. We call these things boosters. This launch site will walk you through and allow you to select the type of what we call a mission that you want to see an example of. You can also select which runtime you want, whether it's Spring Boot or Vert.x or Swarm or Node.js. You can deploy it to your local OpenShift instance or to OpenShift Online as well. So check that out if you wanna get started without too much fuss and without having to download and install too many things. Okay, so let's focus on WildFly Swarm. Swarm is one of the runtimes in RHOAR. WildFly Swarm targets Java microservices. As you can guess from the name, it has a lineage going back to the upstream WildFly project, which is an open source Java EE application server. WildFly Swarm brings in a number of components from WildFly itself to implement features that microservice developers need. More importantly, it brings in Java EE features, so if you are a Java EE developer looking to implement microservices, WildFly Swarm is a great choice because you can leverage the existing expertise you already have. Again, it brings in components from WildFly, but it also brings in components from outside the WildFly ecosystem for doing a number of things, which we'll see in the demo in a moment. It is also an implementation of MicroProfile, which is a new specification championed by a number of companies, and we'll have some more detail on that in a moment.
So getting down into the nuts and bolts of WildFly Swarm: it is a way to package just enough of the app server components needed to run your application. These components are packaged into what we call fractions. These are things that provide the small amount of functionality that you need. The way you build your application is you declare the fractions that you need, or you let WildFly Swarm auto-detect what you need based on your source code, and then it packages everything into a single runnable JAR file, which we call a fat JAR or uber JAR. So again, you can add the specific fractions you need and only package those things. That makes it a very small runtime, easily deployable to a container orchestration platform like OpenShift, for example. Each of the fractions is also configurable independently, so each fraction may expose some configuration knobs that you can turn to modify things. For tooling, as I mentioned, we focus on Maven in the WildFly Swarm project. There's also a Gradle plugin, but you'll see the Maven stuff in this demo. We have a plugin for Maven, the WildFly Swarm plugin, which is one aspect of the tooling. We also have IDE integration via JBoss Forge, a scaffolding project that integrates with your IDE: you can click a few buttons and it generates a bunch of scaffolded code for you. In the Swarm case it will do the same; it'll generate a project for you, or it will annotate your existing project with WildFly Swarm functionality and get you up and running very quickly. There's also swarmtool, which lets you wrap an existing WAR file: if you have an existing application and you can't rebuild it for whatever reason, you can wrap it into a runnable JAR file using swarmtool. And then lastly, there's a project generator, very similar to what you get in the Spring ecosystem with start.spring.io.
If you head over to wildfly-swarm.io/generator, you can choose your dependencies and click a button, and it'll generate a project for you which you can download, and then you're off on your way to building awesome stuff with WildFly Swarm. We won't use the generator today, but we will use the Maven plugin and the JBoss Forge add-on. Here's a quick screenshot of the project generator. As you can imagine, you just type in some identifiers, choose from an auto-completed list of dependencies, click generate project, and you're off and running. For building cloud native applications with Swarm in particular, there are a number of features within Swarm that come from either the upstream WildFly Java EE project or from other projects like Netflix's, implementing features that specifically target cloud native development. The definition of cloud native is somewhat ambiguous, but essentially there are a number of aspects that make an application cloud native: it can be deployed to a cloud environment with changing network conditions, and it can move easily through different environments, from the developer's desktop to a QA environment to staging to production. The features you see listed here are exposed in WildFly Swarm through various fractions or functionality within Swarm itself. I'll go through two of them today. There are a number of others that we won't get to because we don't have enough time, and you'll probably get bored watching screens fly by after 20 or 30 minutes. Okay, so I mentioned MicroProfile. WildFly Swarm is an implementation of MicroProfile. This is a set of specifications championed by a number of influential communities within the Java ecosystem, including Red Hat, IBM, Payara, Tomitribe, et cetera. You can see the list there.
Swarm is our implementation of this specification, and it again targets Java microservice developers who might not need the complete set of specifications coming out of Java EE, or who might be using a legacy platform like WebLogic or WebSphere and want to move off of that onto a logically smaller implementation using a subset of what they know. You can see the list of specifications included in the latest release of MicroProfile, version 1.3, which came out, I believe, about a week ago. It has support for health checks, for example, as well as CDI and fault tolerance. Interestingly, some of these specifications actually started in the WildFly community and were donated to the MicroProfile community. Obviously that makes things easier for WildFly Swarm, but in other cases specifications came from IBM and from the rest of the MicroProfile community members working on this. So it's a small, focused set, targeted specifically at microservices, and it's very easy to use with Swarm and with any of the other implementations. Okay, so let's get on to the demo. The demo is based off of some source code on GitHub; you can see the URL here, on my GitHub account. What I'm gonna do is take an existing application, swarmify it, and then we're going to break it apart into microservices and show you how you can build some cloud native aspects into those microservices using WildFly Swarm, and then we'll tie them all together. Essentially what we'll be doing is what we call strangling the monolith. This is a pattern championed by Martin Fowler a number of years ago; it allows your projects to slowly evolve over time into a more modular architecture without having to rewrite everything from the ground up. Okay, so let's get out of slide mode and head over to my trusty developer environment. What I have here is a number of projects.
I'm really only gonna touch a couple of them today. The first one is this monolithic application. This is what I was talking about: a Java EE application with a typical Maven build file. You can see I have two dependencies: a dependency on Java EE itself, because I've built this application over a number of years and I'm a Java EE expert, and a dependency on a database. In this case, we're using an in-memory database for simplicity. So the first thing I wanna do as a Swarm evaluator is get started with Swarm. What's the easiest way to do that? Well, the JBoss Forge project has an integration with my IDE. I've installed it, and now I have a nice little set of menu items I can use to get started quickly. The first one is called setup. I'm gonna just click this and go ahead and click finish. And now my project has been swarmified. You can see what it's done; let me scroll up and highlight exactly what it did here. It added a new property to my project for the version of WildFly Swarm that it's gonna use. It added a plugin definition for the WildFly Swarm plugin, which integrates with Maven, which I'm using. And then it also added a set of dependencies in a BOM, a bill of materials, which brings in the WildFly components. As I mentioned, this is a Java EE project that was written five years ago. I am not gonna touch the source code at all. I'm simply going to build and run it with WildFly Swarm now that I've added these three very small pieces. So I'm gonna open up a menu over here which gives me access to my Maven targets. I'll open up my project here. WildFly Swarm is a plugin, so it appears under the plugins directory, and I'm just gonna simply run it with wildfly-swarm:run. What this is gonna do is package up the application into a runnable JAR file and then run it using java -jar. So, very simple.
You can imagine how that would integrate with your existing build systems and existing CI/CD pipelines, because it is a fat JAR, just like you would get with Spring Boot or Dropwizard or other projects that use the concept of a fat JAR. So right now it's building. You can see up here that it did the auto-detection. Out of the box, it auto-detects: it looked at my source code, looked for annotations or specific files, and decided which fractions it needed to import. So it imported a number of fractions that I never even specified, simply by auto-detecting them. It installed these fractions, started up the project, and it looks like it should be up and running now. You can see here it's running locally. So if we go back to my browser, open a new tab, and go to localhost:8080, it should hopefully be up and running. It is. So here's my monolithic application. I didn't touch a single line of source code; I simply wrapped it in a runnable JAR file with WildFly Swarm, let it auto-detect what it needed, and it created the runnable JAR file and ran it. So I can do things here. This is an online shopping cart, as you can see. I can add things to my cart, I can remove things, and I'm all happy and everything looks good. Okay, so that's pretty fun. Now let's move into the cloud native aspect of this. It's still a monolith, but now I want to move it onto a container orchestration platform. In this case, we're gonna be using OpenShift, Red Hat's container orchestration platform built on Kubernetes. To deploy it, I'm going to use another plugin, from Fabric8. Fabric8 is an upstream project for building microservices with Red Hat technologies and other technologies, and it has a Maven plugin as well. So I'm gonna go ahead and turn on that plugin, and then it should appear in the menu here. And it did.
So here's my Fabric8 Maven plugin. But before I deploy, one of the first things you should do with cloud native applications is create a health check. This implements the health check pattern; it allows container orchestration platforms to discover whether your application is healthy or not, which is really important for avoiding downtime during an update, or for detecting when something's gone wrong and automatically healing the application, replacing the broken parts and bringing up a new instance. So I wanna add a health check, and it just so happens that WildFly Swarm and MicroProfile have that notion of a health check. To use this in WildFly Swarm, you need to add a new fraction. So I'm gonna use my trusty Forge plugin here. Actually, before I do that, I'm gonna stop this running process, and now I'm gonna add the fraction that I need for my health check. I'm gonna click the magic button again, bring up the Swarm command tool, and say add fraction. There are almost 200 different fractions you can add to Swarm. The one I want is called monitor, and this is the fraction that enables health checking. So I'll click finish here. The only thing it did just now is add this dependency information to my build file. I could have done that manually, and you probably will do it manually once you become more familiar with Swarm and the fractions it has, but it added it for me, and now I can go ahead and deploy my application with the health check. I actually wanna add some logic to my health check, so I'm gonna quickly create a new class in my monolith which is gonna implement it. I'll create a new Java class and call it InfraEndpoint; this is an infrastructural endpoint. It's going to be a JAX-RS endpoint. Was there a question?
I think that we were having a little debate in the chat about whether it was IntelliJ or Eclipse that you're using. Ah, it is IntelliJ IDEA. I lose. Okay. So the Forge plugin actually has support for IntelliJ IDEA as well as Eclipse, and I think there's a NetBeans one as well, but I'm using IntelliJ here. So I'm gonna create my health check endpoint, which is gonna be a RESTful endpoint. I need to add a path here of infra, and I'm gonna create a simple single endpoint called check, and it's gonna return a HealthStatus, which is another class within WildFly Swarm. It's gonna be extremely simple: I'm just gonna return healthy, which actually has value in itself, because this won't work until the application is up and running. So we will just do HealthStatus.named(…).up(). I could add a bunch of logic up here to inspect the database or do something else before I return up, and I can return down if it's down, but I'm just gonna go ahead and return up. Then I need to annotate this with the RESTful endpoint annotations: @GET, meaning HTTP GET; the path is gonna be health; and then the magic @Health annotation, which is a WildFly Swarm annotation that identifies this endpoint as a health check. You can have multiple of these in your application, and WildFly Swarm will aggregate them all and return an aggregated value depending on all of those health checks; as long as they all pass, Swarm will be happy. Okay, so I've implemented my simple health check and added my fraction for supporting health checks, so I'm ready to deploy. Before I deploy, I'm gonna go ahead and create a new project in OpenShift itself. So I'm gonna log into OpenShift, which I have running here locally, and create a new project called coolstore. I have my new project here, and it's got nothing in it, obviously, because I just created it.
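To make the aggregation behavior James describes concrete, here is a minimal plain-Java sketch of the idea: each check reports up or down, and the aggregate is up only if every individual check passes. The class and method names loosely mimic the Swarm API, but this is an illustrative sketch, not the real WildFly Swarm types.

```java
import java.util.List;

// Illustrative sketch of health-check aggregation; NOT the WildFly Swarm API.
public class HealthSketch {
    public enum State { UP, DOWN }

    // A named health check result, roughly analogous to Swarm's HealthStatus.
    public record HealthStatus(String name, State state) {
        public static HealthStatus up(String name)   { return new HealthStatus(name, State.UP); }
        public static HealthStatus down(String name) { return new HealthStatus(name, State.DOWN); }
    }

    // The aggregate is UP only if every individual check is UP, which loosely
    // mirrors how multiple @Health endpoints are combined into one result.
    public static State aggregate(List<HealthStatus> checks) {
        return checks.stream().allMatch(c -> c.state() == State.UP)
                ? State.UP : State.DOWN;
    }
}
```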
So we'll just go ahead and deploy it with Fabric8: fabric8:deploy on my monolith. This plugin will package the application with WildFly Swarm, package that into a Docker container image using a builder image, ship it off to OpenShift, and then deploy it out to OpenShift. Let me make sure I'm logged in to OpenShift. Yes, I am. Okay, so what it's doing now is actually building it in OpenShift. If I switch back to OpenShift, we can see that build in progress. You can see it actually already finished. The build finished, and now Fabric8 will create the necessary Kubernetes objects to deploy it and get it up and running in OpenShift: the deployment config object, the service object, and a route. It looks like it finished now. So let's switch back to OpenShift, and you can see the application is now running; it's actually on the way up. While it's coming up, I can show you the health check. Within the OpenShift web console, you can look at the health checks, and you can see this path defined here, /health. You'll remember the health check I defined was on /infra/health, and this is the aggregated version of that, which will return true once it's up and running here, hopefully. It looks like it's up and running, so we can check out the logs here and make sure everything looks okay. Oh, you know, I forgot one thing. WildFly Swarm out of the box does auto-detection, which you saw a moment ago, but by adding an explicit declaration of this fraction, I implicitly turned that off, because now it thinks I know what I'm doing and will declare all the dependencies I need. But I actually don't know what I'm doing. So in order to turn the auto-detection back on, we need to configure this real quick. Let me do that: set the fraction detect mode to force.
So I will go ahead and undeploy that one and then redeploy it. Again, because I declared the monitor fraction, it turned off its automatic detection of the other fractions needed, and I obviously need other fractions, because I haven't declared any others to depend on. So I'll undeploy that and then redeploy it, and it should come back up, hopefully. Switch back over to OpenShift here. It looks like that one's gone and my project is empty again, and now it will rebuild and come back up. You can see it's doing its detection properly this time, because I forced it to auto-detect, so the application will come up completely. We'll wait for that to finish; it should only take maybe 30 seconds or so. What's gonna happen is it's gonna deploy that same exact monolith to OpenShift, but with a health check. We haven't talked about microservices yet, and that's the next topic. In our monolithic application, which is a retail store, there are different components, like a catalog for the products that you have for sale, and an inventory system to tell you how many of each product you have left. We're gonna split those out into individual microservices and then tie them into the monolith, so that we effectively start the process of strangling the monolith: we can remove the inventory and catalog components because they're now independently developed by our inventory team and our catalog team and are no longer part of the monolith. So that's what's gonna happen here. It looks like the new version is now finally up and running. If I click this link here, I should get my monolith, and I see it's running now on OpenShift, so all is well and good. Again, this UI is actually built from a number of different subsystems within the monolith, which we're now gonna break out into microservices.
So we're gonna do two of them. One of them I've already done: the inventory service. I've hired a bunch of JavaScript developers who love Node.js, and so they've implemented the inventory microservice as a Node.js application. You can see it running here, and you can see the source code to it here. It's a very simple microservice written in Node.js and JavaScript, and I've already deployed it. If I switch back to OpenShift and go to this other project here called service, you can see the inventory is already up and running. If I curl it, I can simply go like this. So I accessed this Node.js endpoint, giving a product ID of this giant number, and it returned the fact that I have 337 of these products left in Idaho. It actually doesn't matter what number I choose here, because my lazy developers just return the same thing whatever I give them. Product 4545 also happens to have 337 in Idaho. Every single product is gonna be in Idaho and have 337 copies of it. We're gonna use that to show you some interesting things with circuit breaking. Okay, so that's my inventory service; it's up and running already. So we'll skip that one and move over to the catalog service. This is the second and final microservice we're gonna develop, and this one is developed with WildFly Swarm. I've already created the project. It has a RESTful endpoint for getting products, and the way it returns products is very simple. It's dumb, actually; the method is called something like dumbGetQuantity, and it calls into the inventory system, gets the inventory of each product, and adds that to the return value. So let's go ahead and deploy that out to OpenShift as well. I will close that one down, open up the catalog microservice, go to plugins, fabric8, and deploy it out to OpenShift. So again, this is a simple catalog endpoint.
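As an aside, the always-337-in-Idaho behavior of the stubbed inventory service can be captured in a few lines. This is an illustrative plain-Java equivalent of what the Node.js stub does, not its actual code; the names are hypothetical.

```java
// Sketch of the stubbed inventory lookup described in the demo: whatever
// product ID you ask for, you get 337 units in Idaho. A real service would
// query a database; this demo stub hard-codes the same answer every time.
public class InventoryStub {
    public record Inventory(String itemId, String location, int quantity) {}

    public static Inventory getInventory(String itemId) {
        // Lazy developers: ignore the itemId entirely and return a constant.
        return new Inventory(itemId, "Idaho", 337);
    }
}
```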
It returns the products in the catalog with the inventory added to each entry by calling into this inventory service, with very little error checking. In fact, there's no error checking, because it's dumb and my developers are lazy, and I guess I don't pay them enough. Okay, so that's building, hopefully. It should take maybe 20 seconds or so to build and deploy, and then we're gonna test it out and see what happens. More importantly, we're gonna see what happens when the inventory service is not working properly, and we should hopefully get a big giant failure. Okay, so while that's deploying, let's collapse this down, and the catalog service should appear here momentarily once the build is complete. Just like we built a WildFly Swarm monolith, we're building a WildFly Swarm microservice for the catalog service. It looks like it's coming up now. It's been deployed and it will start to boot up here. Again, I can watch the paint dry if I want. You can see the OpenJDK container image running my WildFly Swarm microservice, and it looks like it's coming up and should be up momentarily. It looks like it's ready to go here, hopefully. Yes, okay, the catalog service is up and running. So let's go ahead and test that real quick; again, we'll curl it here. Okay, so my new catalog microservice returned a complete array of products. Each product has the same quantity of 337. Everything looks good. Now my developers went off on holiday, and all of a sudden the inventory service has a problem. So I'm gonna go ahead and shut off the inventory service, and let's see what happens if I try my product list again. So now you can see that it's sitting there, waiting, for a long time.
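The hang just demonstrated follows directly from the shape of the "dumb" endpoint: every product triggers an inline inventory call with no timeout and no fallback, so one unresponsive dependency stalls the entire response. Here is a hedged plain-Java sketch of that shape; the names are hypothetical, not the demo project's actual code.

```java
import java.util.List;
import java.util.function.Function;

// Sketch of the "dumb" catalog endpoint: it enriches every product with a
// quantity by calling the inventory lookup inline, with no timeout, retry,
// or fallback. If inventoryLookup blocks (inventory service down), the whole
// response blocks with it, which is the failure mode seen in the demo.
public class DumbCatalog {
    public record Product(String id, int quantity) {}

    public static List<Product> getCatalog(List<String> ids,
                                           Function<String, Integer> inventoryLookup) {
        return ids.stream()
                  .map(id -> new Product(id, inventoryLookup.apply(id)))
                  .toList();
    }
}
```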
And you can imagine what would happen if this was a real world application like amazon.com: you went there to buy a birthday gift for your best friend, you got this annoying timeout, and you'd head over to a competitor's website and purchase from there instead. So that's not good; that's unacceptable. It's actually gonna time out after about 30 seconds. What's happening here, obviously, is that because I shut off the inventory service, the catalog service is unable to get the inventory, and it eventually timed out. So let's fix that. What we're gonna do is use Hystrix. Hystrix is an open source project from Netflix. It does circuit breaking and bulkheading, and essentially allows you to do defensive programming. Essentially we're adding an error check, right? If the service is down, then do something else. I've already got the code in here, but before we can uncomment it, we need to add the WildFly Swarm fraction. So I head over to my pom.xml; you can see I have a number of dependencies already declared here, and I wanna add a new one. Again, I'll use the Swarm Forge tool plugin in my IDE here, hopefully. Yes, there it is: swarm add fraction. This time we're going to use Hystrix, so I'll scroll down this giant list. It would be nice to have auto-complete, but we don't. So I'll just click Hystrix, then finish, and what it will do is add that to my pom.xml, right here. Very simple; again, you probably don't need a tool to do that for you, but it's fun and good eye candy for demos. Okay, so I've added that. Now let me go back to my dumb endpoint and change it to a smart endpoint. I'm gonna uncomment the imports that I need, just to speed things up. I've added some code for logging, just to output some stuff to the log file so we can see what's happening. And now I'm gonna comment out my dumb parallel stream and bring to life my not-dumb endpoint.
So this is going to use a HystrixCommand, which essentially wraps that call to the inventory service with a circuit breaker. It protects it. If things are not operating correctly, then the fallback will kick in, and Hystrix will manage the checking of the inventory service so that when it comes back, it can start sending traffic to it again. So it allows the system to essentially recover. The scenario here is that you have an overloaded inventory service, or a bug in the code causes it to fail, and you need a backup solution. Our backup solution is a very simple hard-coded value. You would normally do something like check a cache or go to an alternate system, or you would have multiple instances of the inventory service up and running, so that if one fails it doesn't bring the whole system down, and the load balancer can rebalance and send traffic to the healthy instances. But for now, we're just gonna use this simple workaround. Okay, so I've got my code. I'm gonna go ahead and deploy that out again: undeploy the old one and redeploy the new one. That shouldn't take too long, and then we'll see what happens when the system is healthy and unhealthy. Okay, so undeploy the old one, and I'm gonna go ahead and deploy the new one now. Let me clean it first to make sure it recompiles and picks up all the new stuff. Okay, then I'll go ahead and deploy, if I'm in the right project here. Yep, I am. Deploy, and while it's going, I'm gonna bring the inventory service back online so we can demonstrate the failure scenario later. Okay, so the inventory service is coming back, and the new catalog service is being deployed at the moment. Actually, I think it's building at the moment, so let's switch back and have a look at that.
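The fail-fast-then-recover behavior described here can be illustrated with a tiny hand-rolled circuit breaker in plain Java. This is a sketch of the concept only, not the Hystrix API; real Hystrix adds timeouts, thread-pool bulkheads, and an automatic half-open probe state rather than the manual reset used below.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch showing the Hystrix idea: after enough
// consecutive failures the circuit opens and the fallback (quantity -1, as
// in the demo) is returned immediately without calling the service at all.
public class CircuitBreakerSketch {
    private final int threshold;   // consecutive failures before opening
    private int failures = 0;
    private boolean open = false;

    public CircuitBreakerSketch(int threshold) { this.threshold = threshold; }

    public int call(Supplier<Integer> inventory, int fallback) {
        if (open) return fallback;           // short-circuit: skip the call
        try {
            int result = inventory.get();
            failures = 0;                    // success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) open = true;
            return fallback;                 // fail fast with the fallback
        }
    }

    // Models the service being declared healthy again; Hystrix does this
    // automatically by letting a probe request through after a sleep window.
    public void reset() { open = false; failures = 0; }
}
```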
Yep, looks like it's still building, and now it's going to generate the necessary Kubernetes build config objects to invoke the build, and then it's going to deploy that out to the cluster. Okay, so what should happen is once this gets deployed, we'll test it again; obviously, with the inventory service running, everything will look good. Then we'll shut off the inventory service and get to see Hystrix manage the fallback when things are not healthy, and manage the revival of the inventory service once it comes back online. And then the last thing we'll do is tie all of this into our existing monolith, which we already have running, to demonstrate the final strangulation step, which turns out to be extremely easy. Okay, so it looks like the new catalog service is deploying and coming up at the moment; it should be up momentarily. Once it's ready to go here, let's just take a look at the log file and make sure everything looks okay. Looks like it's up and running. Go back here. Yep, okay, so it looks like it's been deployed now. I'll just collapse these two, and let's go test our new circuit breaker. So let's hit it again. Okay, everything looks good. Let me pipe that to a pretty-printer. Looks like everything's good. I can hit this as many times as I want, and it looks good. So now let's shut off the inventory service and see what happens. Go ahead and turn this guy off. Okay, now let's hit it again. Remember, the last time we did this, it timed out after 30 seconds. This time there was just a brief hiccup where the call from the catalog to the inventory timed out or failed, and then Hystrix decided the circuit needed to be opened because the inventory system is unhealthy. So we get these negative ones here, and this is what we expect. If we were to tie this into the UI, which we'll do, you'll see the effect of this.
So let's just bring the inventory service back online, and hopefully it should recover here. Right now it's probably still deciding whether the service is back up and running again. So we'll just hit it a couple more times, and once it decides that the service is healthy again, we should start getting 337 again, which indeed we do here. Okay, so that is the circuit breaker in action. In fact, if you look at the log file for the catalog, which is where the circuit breaker is defined, you can see the circuit breaker states there: fallback, success, short-circuited. At the time the inventory system was down, it was short-circuiting, and then ultimately when it came back, it returned success again. Okay, so that's the circuit breaker in action. All of the timeouts and thread pool sizes and everything else that's tunable within Hystrix can be configured through WildFly Swarm and its fraction configuration. I'm just using the defaults here for demonstration purposes, but you would tune those based on the expected load, the size of your cluster, and the number of instances of each service you have running. So, okay, the final step is to tie this into the UI. And as I mentioned, it turns out to be extremely easy. We have these two services, catalog and inventory, but our monolith running over here has no idea they exist, and we're still getting values like 736, 512, 256 because the existing monolith has no clue these services are out there. So in order to tie them in, we can use OpenShift's software-defined networking. When I load this monolith in my browser, it makes callbacks to itself to get the catalog and inventory information. I can intercept that and redirect it to my new microservices through software-defined networking. So let's go ahead and do that. Let's shift over to my new project here.
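As an aside on those tunables: they correspond to standard Hystrix configuration properties. The values below are Hystrix's documented defaults; exactly how you surface them (system properties, or a WildFly Swarm fraction's configuration) will depend on your setup.

```properties
# Time before a wrapped call is considered failed
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=1000
# Minimum requests in the rolling window before the circuit can trip
hystrix.command.default.circuitBreaker.requestVolumeThreshold=20
# Error percentage at which the circuit opens
hystrix.command.default.circuitBreaker.errorThresholdPercentage=50
# How long to stay open before letting a probe request through
hystrix.command.default.circuitBreaker.sleepWindowInMilliseconds=5000
# Size of the thread pool used for isolation
hystrix.threadpool.default.coreSize=10
```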
What I wanna do is create what we call a new route. It's a path-based route, which allows you to intercept calls to a specific hostname and path and redirect them to a different service running in my cluster. So I have two routes here for the catalog and inventory. I'm gonna create a third route called redirect; it doesn't matter what you call it. The hostname is gonna be the hostname of the monolith. The path I'm going to override is /services/products, which is the path the monolith uses when it makes a callback to itself. And I wanna redirect that to my new catalog service, which in turn will call the new inventory service and then return those values back to the UI. With this route in place, any call from my monolith (I can switch back to the monolith and see it) that calls back into itself is actually being redirected to my new microservices, so it's getting the values from my new microservice and the corresponding back-end inventory system, which is now controlled by an independent team rather than the monolithic development team. And so I can continue this process: I can take the price service, I can take the shopping cart service, turn those into microservices, and similarly strangle the monolith so that it uses these new services. Ultimately I can turn off the monolith, and I can fire all the developers that developed it... maybe not fire, but find different jobs for them... and continue on with my independently developed and autonomously deployed microservice teams. Okay, so that's it for the demo. Again, all of this code is on GitHub if you wanna have a look, and there's a solution branch in there with the code I added to this project, so you can get started quickly and see exactly what I did. Okay, so switching back to the final slide here. So that was the demo.
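In YAML terms, the path-based route described above looks roughly like this. The hostname and service name are placeholders; substitute the actual route host of your monolith and the name of your catalog service.

```yaml
apiVersion: v1
kind: Route
metadata:
  name: redirect                      # arbitrary name, as mentioned above
spec:
  host: www.coolstore.example.com     # the monolith's existing hostname (placeholder)
  path: /services/products            # intercept only this path
  to:
    kind: Service
    name: catalog                     # send matching traffic to the new catalog service
```

Because the router prefers the most specific path match, requests to /services/products on that hostname go to the new catalog service while every other path continues to reach the monolith.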
So, last summary slide: WildFly Swarm is super awesome. It's built on the upstream WildFly Java EE community, and essentially it keeps Java EE developers in the game. If you have experience with Java EE and you wanna continue using that in a microservice world, in a new modular cloud native world, WildFly Swarm is a great option for you because it provides that path forward. It also implements standards like MicroProfile, which has a huge community behind it and a lot of momentum. And we see great things coming forward with WildFly Swarm. Okay, so that's it. So I'll check if we have any questions. Let's see... it looks like the fonts keep getting smaller. Yeah, that was it. The disappearing fonts were the thing, but I didn't wanna interrupt you; that was a good flow. You mentioned the GitHub repo. Can you pop over there in your browser and just show where all the code is for this demo? That would be a good thing to have in here. WildFly's been around for a while, so I suspect that a number of people on this call are already WildFly people. So... I couldn't remember if it was a WildFly Swarm example or a Roar example. So there's the URL. I'll hit it in my browser here so you can see it. Actually, I use this code for a couple of other things. We have a Vert.x microservice and a Spring Boot microservice which we do a similar process for, so you can see that code in there as well. But here's the monolith code, which you can deploy to any OpenShift or Kubernetes using Fabric8. Here's the Node.js inventory and the WildFly Swarm catalog that we used in the demo. So that's all there. Well, I think you did a pretty awesome job with the demo even with the fonts shrinking, but I think we all expanded our screens and could see it. There aren't any questions in the chat that I see. I'm giving people a couple of minutes to ask. If you can also now pop back to your thank-you slide...
Then I think we're almost to the end of our hour, and we will probably have more Roar talks coming up in the future, so stay tuned for that. And I'm gonna respect everybody's time. Thank you very much, James, for another awesome presentation. Thanks for having me, Diane, it was fun. Yeah, it always is. You're always very entertaining, with interesting content, so I'm very appreciative. It's like having a private tutorial every single time. So, yeah, that webpage there, developers.redhat.com/rhoar, you can get a lot more information about Swarm and Roar in general, and details on what the components are and how they're supported from Red Hat. Cool. All right. Well, we look forward to hearing more from people who are running WildFly Swarm or building applications with that and Roar and want to talk about their use cases. Please let me know and I'll give you the podium, because we'd love to hear from you. All right. Thanks very much. Okay, bye-bye.