Hello everybody, and welcome to another OpenShift Commons briefing. I'm really pleased to have with me one of the leads from the RHOAR team, James Falkner, who's going to give us a talk on what RHOAR — Red Hat OpenShift Application Runtimes — is, and the practical side of it. I'm going to let James introduce his topic and take it away. We can have chat in the online chat — I'll try to answer any questions — and we'll have live Q&A at the end. So with that, James, please take it away.

Okay, thanks, Diane. Oh, I'm getting some feedback — I'm going to turn off mine. How's that sound? Okay, let's try it again. Yep, much better. Okay, great. Hello everyone. My name is James Falkner. I'm a Senior Technical Marketing Manager in the Red Hat Middleware group, focusing on the RHOAR project, Red Hat OpenShift Application Runtimes. Today I'm going to talk to you about using RHOAR in the context of transitioning from monolithic application architectures to microservices architectures. There were a couple of briefings a few weeks ago by some of my colleagues: John Clingan, the Product Manager for RHOAR, went through a number of introductory pieces on RHOAR — what the development goals were and why this product came to light — and my colleague Thomas Qvarnström did a deep dive on Spring Boot within the context of RHOAR. Today I'm going to cover RHOAR in general, focusing on that transition: what to do with existing applications. I have a demo, about 20 to 30 minutes, in which I'm going to demonstrate all of the runtimes we have under the RHOAR product umbrella and show you how you can use them to start that migration from monoliths to microservices.

So we'll start with a definition. Basically, modernization approaches can be split into two buckets. Number one, modernizing existing applications.
That means reusing as much as possible — sometimes achieving 100% reuse — but also moving apps to an environment where they can benefit from more automation and continuous integration, setting yourself up for a future modernization effort with microservices. The other bucket is what some organizations do when they make a concerted effort to build net-new apps: essentially a rewrite from the ground up, employing modern application development frameworks and architectures, and developing a process to get those apps to production very quickly and with less downtime in case something fails.

So we'll start with modernization. There are three options here: rehost, replatform, and refactor. Rehost is simply moving the application as-is, with little to no changes, to a more modern application platform like JBoss EAP along with OpenShift. Replatforming is similar to rehosting, but is a slightly more invasive approach where you start taking advantage of some of the platform's features — in OpenShift, for example — for better performance, scalability, or manageability, or for adding new business value incrementally, as opposed to the complete rewrite. That rewrite is the third option, the refactor option, using modern app-dev technologies, methodologies, and tools. Determining which one you do requires some analysis and prioritization, based on the business benefit you expect to achieve and the risk of moving that application. For example, small apps that are essentially frozen — where maybe all the developers have left — are good candidates for a rehost, whereas apps requiring high scale, which you'd like to be able to revise very quickly, are where you'd look at a replatform or a refactor, depending on the resources and time that you have. The other options are not interesting to us; they're essentially "do nothing."
And as you've seen countless times in the past, that is really not an option for businesses that want to innovate in this digital modernization era. So, a couple of diagrams that show exactly what I'm talking about here. Rehosting, again, is taking an existing app — say a Java EE application on a legacy platform like WebLogic or WebSphere — and taking it to a modern EE platform like JBoss EAP, and then ultimately onto a container-based orchestration platform like OpenShift, where you can apply advanced deployment and CI/CD techniques and gain the benefits of these things while making virtually no changes to the application itself. Replatforming, again, is doing the same thing, but starting down the path of replacing functionality of the application using advanced development techniques like microservices, to bring additional business value, additional performance benefits, and the ability to get bits to production quicker.

Now, we talk a lot about microservices, and I'll talk more about them today, but the reality is they bring more complexity to the application. There's no more single database, for example — no single source of truth — and there are a lot more moving parts that need to be integrated. So for customers who aren't ready for that, the "majestic" or fast-moving monolith might be a good option, and some actually argue that it's best to always start with a monolith, even for new greenfield applications. That gives your developers a familiar environment in which to iterate, get the domain boundaries correct, and get the domain models correct, without the additional complexity of distributed microservices. A key example here is KeyBank. You may have heard of them — it's an old bank; they've had a number of acquisitions over the years and inherited a lot of applications.
One of their particular applications was a 15-year-old Java EE application deployed on WebSphere which, as you can imagine, grew into this huge, monstrous thing with really big maintenance costs, and they could only get it out the door once a quarter. So as part of their modernization effort, they refactored it into a more modern application — still a monolith, but separating the front end, an AngularJS app, from the back-end services, which are all RESTful services consumed by the front end. Again, still a monolith, but they were also able to containerize that monolith, move it to OpenShift, and wrap it with a deployment pipeline. Instead of the 70 manual steps required in the past, they were able to push a button and, in a few minutes, get bits to production much quicker and in a more automated fashion. So they moved their production cycle times from once a quarter — three months — to one week. And not only did they achieve that one-week delivery, they also cut their production failure rates in half, which is super critical. One of the benefits of a modernization effort is assuming you will fail — and you will fail — but having the tools and processes underneath to recover from that and keep the business moving, keep business continuity in the face of those failures, with a modern deployment platform like OpenShift.

Okay, so the classic example of this incremental replacement that KeyBank was setting themselves up for is called strangulation — the strangler pattern — and I'm going to demo it in a moment. Instead of rewriting the application entirely, small bits are re-implemented using things like microservices. Over time, you can migrate the entire application from a monolith to a microservices application in small bits, which means less risk, less downtime, and more time to get things right.
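The core of the strangler pattern is a routing layer in front of the monolith that sends already-migrated paths to their new services and everything else to the old application. Here's a minimal sketch of that routing decision in plain Java — all of the path and service names are hypothetical illustrations, not taken from the talk:

```java
import java.util.Map;

// Hypothetical strangler-style router: paths that have been carved out of
// the monolith go to their new microservice; everything else stays put.
public class StranglerRouter {

    // Paths already re-implemented, mapped to the service that now owns them.
    private static final Map<String, String> MIGRATED = Map.of(
        "/services/products", "catalog-service"
    );

    public static String route(String path) {
        // A real gateway would do longest-prefix matching; a simple
        // prefix check keeps the sketch short.
        for (Map.Entry<String, String> e : MIGRATED.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return "monolith"; // default: untouched functionality stays in the monolith
    }

    public static void main(String[] args) {
        System.out.println(route("/services/products")); // catalog-service
        System.out.println(route("/services/cart"));     // monolith
    }
}
```

As more functionality is re-implemented, entries move into the migrated map one at a time, which is exactly the "small bits, less risk" property described above. In practice this routing usually lives in an OpenShift route, an API gateway, or a reverse proxy rather than application code.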
It also means you can introduce new business value to the application along the way, because simply rewriting an app brings no additional business value on its own. But if you are able to add business value — which, again, I'll demo in a moment — you can really justify the cost of the overall strangulation effort.

The last option is the refactor. This is the complete rewrite. It's generally more expensive, with a lot more upfront cost, but it can also provide the most benefit. There are a number of choices to be made when you go down this path, around language, framework, and development approach — this is where RHOAR comes in — and those choices need to be made before the first line of code is rewritten.

So, as you can imagine, each of these comes with a number of different trade-offs. Rehosting, again, is generally the cheapest and takes the least amount of time, but also bears the least amount of fruit: it's still the existing application, and all of that application's existing bugs remain. Replatforming is in the middle. It gives you a chance to start down the path of a complete modernization, but it comes at the additional cost of moving the application, introducing new services, and rewriting incremental parts of the application, versus just lifting and shifting and not touching it. And lastly, the rewrite is of course the most expensive, but when done correctly it has the best bang for your buck.

Again, deciding which approach you take involves some analysis and answering some key questions, like: what's your overall business objective for app modernization, and are you able to measure that business objective, so you know whether you're going in the right direction? There are a number of other questions here; I guess the last one is probably the most interesting: considering regulatory requirements that may dictate the types and locations of deployments you're allowed to make.
For example, you may not be able to host your customers' data across the ocean in a different country. Or you may have regulatory requirements around networking, where certain types of traffic cannot pass through certain types of deployments or certain nodes. Those can all be handled, of course, with a modern deployment platform like OpenShift.

So we talk a lot about building microservices and fast-moving monoliths, but the reality is that well before you start considering microservices, there are a number of other things that have to occur up front. Simply writing a microservice application using a 30-year-old process that takes a quarter — three months — to release software is not going to do you much good. Number one: accepting, and reorganizing around, a quote-unquote DevOps approach, where you break down that wall of confusion between developers and operators. They start speaking the same language — for example, using Linux containers — and they both agree that they're responsible for their own bits of code all the way from the developer's desktop out to production. Getting developers efficient at pushing bits to production essentially means getting out of their way, so self-service, on-demand infrastructure — where developers can order new development environments in minutes instead of weeks — is critical to meeting those goals of getting bits to production faster. Once you have developers working quickly, you need a way to automate the builds of those applications in a consistent manner, using things like Red Hat Ansible or Puppet or Chef or some other automation framework. And once you are able to build consistently, you need to be able to deploy and deliver consistently and continuously. This is where your automated delivery pipeline comes into place, using CI/CD platforms like OpenShift and Jenkins.
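A delivery pipeline like the one described here is typically expressed as a Jenkinsfile checked in next to the application code. This is a minimal, hypothetical sketch — the agent label, stage names, and the fabric8 deploy goal are assumptions for illustration, not taken from the talk:

```groovy
// Hypothetical declarative Jenkins pipeline: build the app with Maven,
// then hand image build and rollout over to OpenShift.
pipeline {
    agent { label 'maven' }   // a Jenkins agent that has the JDK and Maven
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Deploy to OpenShift') {
            steps { sh 'mvn -B fabric8:deploy' }
        }
    }
}
```

A real pipeline would add stages for tests and for the blue-green or canary rollout steps discussed next, but even this two-stage shape replaces a long list of manual steps with a single, repeatable button press.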
Once you have builds moving quickly through a pipeline, you need to be able to land them in production safely. Advanced deployment techniques like blue-green and canary deployments allow you to minimize the risk of bad code making it to production — it will happen. With these advanced deployment techniques, you'll be able to minimize the impact of those changes, possibly prevent them, but more importantly be able to undo them if they do occur. Once you have all that, then you can start talking about microservices and fast-moving monoliths and modernizing the applications themselves. But again, you still need to consider the frameworks and APIs you'll need — and this is what RHOAR brings us.

So today, the new digital architecture is done in the context of all these buzzwords you see here. APIs are front and center. They're super critical for integrating individual small applications together, using contracts, well-defined APIs, API versioning, and things like that. And the number of frameworks, languages, and technologies you can use to do this dwarfs what was available even five years ago. What we've seen in the industry is a move away from the traditional monolithic Java EE application server — which contained both the middleware and the operational platform, enclosed in a handful of industry-standard application servers like JBoss EAP, Oracle WebLogic, IBM WebSphere, or even servlet containers like Tomcat — toward a separation of the operational platform from that middleware tier. Essentially, a bifurcation of the functionality previously supplied by the app server: it's now provided by an operational cloud platform like OpenShift and by a set of middleware services — JBoss middleware, containerized middleware services on OpenShift — that provide the same level of functionality in a more efficient, scalable, and modular way. So this is where Red Hat OpenShift Application Runtimes comes in.
That blue tier at the top — the runtimes supporting those applications — Red Hat OpenShift Application Runtimes is a curated collection of those time-tested frameworks and runtimes, specifically targeting cloud-native microservice applications. The product contains a number of frameworks and runtimes you'll undoubtedly be familiar with, in two groups. The supported runtimes are fully supported by Red Hat — we provide lifecycle management and support contracts — for example JBoss EAP, WildFly Swarm, Eclipse Vert.x, and Node.js. The other group consists of frameworks that Red Hat tests and verifies to make sure they run smoothly on OpenShift, like Spring Boot, Spring Cloud, and Netflix Hystrix and Ribbon. As we go forward, more parts of those libraries will fall under the supported umbrella and be integrated with Red Hat technologies.

RHOAR also provides Launch. Launch is a project generator based on a collection of cloud-native samples, using the supported and the tested-and-verified frameworks, to provide a very efficient and robust initial developer experience — and you'll see that in the demo as well. So let's skip over to it. Here's Launch itself, and I wanted to briefly demonstrate it. Launch is a set of samples in the cloud that not only provide you with project starting points, but will actually deploy them for you onto OpenShift. It's essentially a wizard-based interface that runs on OpenShift itself and deploys to OpenShift Online, as well as to your local OpenShift Container Platform if you have it, or OpenShift Origin if you're running that. All of the runtimes we have in RHOAR are supported: Spring Boot, Vert.x, WildFly Swarm, and Node.js. So what this looks like, essentially — here's the website: developers.redhat.com/launch. So you can launch your project.
You can select either OpenShift Online or, if you want to build and run it locally, you can do that as well — I'll choose that option just for brevity. It will walk me through a number of options for the different runtimes. If I reload this... it might have gotten logged out. Yep, it looks like it got logged out, so let me go ahead and log back in and try this one more time. Okay, so I've selected my deployment type: build and run locally. I can select the mission type — here are the microservice missions: you can choose circuit breaker, externalized configuration, health check, and so forth. I'll go with, say, circuit breaker, and then I can choose the runtime; you can see what's available for the different runtimes. We'll go with Vert.x. I'll click next, provide the project information, and click next. In this case, since I'm doing a local install, it will download it: I click download, it downloads a zip file, and I can then unzip that, load it in my IDE, and go take a look at the example code. Again, this is not something you're going to use in production — it's really an initial developer experience to get you up and running quickly. But it's more than just a project generator: it takes typical microservice concerns — things like health checks and the fault-tolerance features you find in typical microservice applications — and applies them to the individual runtimes within RHOAR. So that's the starter, and you can quickly get going with that.

For advanced developers who are going beyond the getting-started experience, the way you consume RHOAR from Red Hat depends on the technology. Three of the runtimes are Java-based — Vert.x, WildFly Swarm, and Spring Boot — and the typical way you build Java applications is with Maven or Gradle.
So the artifacts you'll download for the RHOAR product come from the Maven repositories that Red Hat hosts — maven.repository.redhat.com is our official Maven repository — in addition to the upstream repositories for the unsupported components of the runtimes, which you'll see in a moment. Node.js is obviously not a Java application; the way you consume it is through the Linux container image we host on the Red Hat Container Catalog, at registry.access.redhat.com. So essentially these are the release channels these bits go through that you can use in your projects.

A quick example, and then we'll get to the demo. Here's an example of consuming WildFly Swarm using Maven: in your pom.xml you declare the repository the bits will come from, then you declare a dependency on the BOM, or bill of materials — this brings in all of the dependency information for WildFly Swarm within the context of RHOAR — and from there you can specify the individual components within WildFly Swarm that you want to use. In this example we're using a fraction called monitor; I'll tell you what fractions are in a moment. You can bring in the different parts and pieces of functionality you need from WildFly Swarm, or from Spring, or from Vert.x, using the same technique. With Node.js, you'll make a change to your package.json file, which, again, you'll see in the demo.

Okay, enough talk — let's get to the meat of this presentation, which is four or five examples. The code is on GitHub if you want to follow along. There are two branches: the master branch contains the starting point from which I will start, and if you get stuck or need help, you can check out the solution branch, which has the solutions to these different exercises.
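The pom.xml wiring described a moment ago — repository, BOM, then individual fractions — looks roughly like this. A hedged sketch: the repository URL and BOM coordinates follow the public WildFly Swarm conventions, but check the RHOAR documentation for the exact, current values and versions.

```xml
<!-- Sketch of consuming WildFly Swarm via RHOAR; versions are illustrative. -->
<repositories>
  <repository>
    <id>redhat-ga</id>
    <url>https://maven.repository.redhat.com/ga/</url>
  </repository>
</repositories>

<dependencyManagement>
  <dependencies>
    <!-- The BOM (bill of materials) pins versions for all the fractions -->
    <dependency>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>bom</artifactId>
      <version>${version.wildfly.swarm}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- Pull in only the fractions you need, e.g. health checks via monitor -->
  <dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>monitor</artifactId>
  </dependency>
</dependencies>
```

Because the BOM is imported into dependencyManagement, the individual fraction dependencies need no version of their own — that's what keeps the per-fraction entries to two lines each.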
Okay, so we'll start with WildFly Swarm — one slide on what Swarm is. If you think about it from the developer's perspective, shifting to microservices comes with a lot of changes: the infrastructure is changing to the cloud, and the app architecture is moving to more modular, distributed services. WildFly Swarm tries to provide a familiar path to microservices for Java EE developers — that's important to remember. It's targeted at Java EE developers who are building Java microservices, in order to maximize their productivity and let them use their existing Java EE skills, building microservices with a subset of Java EE. The complaint with Java EE in the past was that the spec moved really slowly and the app servers were really big and bloated. With Swarm, it's essentially just enough of the app server that you need for your microservice applications. So it's particularly useful if you have an existing application — say, a monolith — and you want to move it to a microservices application architecture over time. You can combine Java EE and non-Java EE technologies using Swarm, essentially reusing your Java EE knowledge while bringing in the microservices functionality you need in the application.

Not everything in Swarm is based on a standard, and not all of its components come from the Java EE world. Standards are good, but sometimes they don't move as quickly. You may have heard of MicroProfile. MicroProfile is a collection of specifications that are very useful for Java EE developers who are writing microservice applications in Java. A number of vendors have come along and grouped together to form a set of microservices specifications in addition to, or alongside, Java EE — so it's not part of Java EE. These vendors are interested in microservices applications and in moving Java technology forward to meet these modern business challenges. Red Hat, of course, is involved, along with a number of others that you
can see on the slide. So it's not just Red Hat — it's a community of not only vendors but also communities themselves, like the London Java Community or the Brazil Java community, which have come together to set this in motion. A new release was just announced last week: version 1.2. MicroProfile 1.2 contains a number of technologies which we won't specifically get into today; just know that MicroProfile is a set of specifications, and WildFly Swarm is our implementation.

Within the RHOAR context, here's the support picture for WildFly Swarm. We're targeting microservices, so you'll see a number of fractions. Fractions are the components of WildFly Swarm that encapsulate certain functionality — you can have a fraction for health checks, a fraction for topology, or a fraction for externalized configuration — and you bring those in using the pom.xml technique I showed you earlier. The supported fractions and the certified fractions — which, again, are tested and verified to work well with Swarm — are there as well, and then you can see the upstream components, which are currently unsupported. As time goes on, and we hear back from the community and from our customers, some of those may move into supported status.

Okay, so what does this mean for existing applications? Let me show you. Let me go back to this and bring up my notes here, so I don't miss anything. What we're going to do is use WildFly Swarm to essentially wrap an existing application. In this first demo I have an existing application — it's a monolith; it's basically a storefront. Let me just go ahead and run it first, so you can see exactly what I'm talking about. I can basically do a Maven clean package, and this will build my monolith. So this is an existing Java EE monolith. I have a number of services, as you can see in the source code here: I have some
stateless EJBs for handling the product catalog, and stateful EJBs for handling the individual person's shopping cart. I've just built this application, so I can look at it here — here's my monolith.war. I can take this WAR file and deploy it to any Java EE application server; this is ten-plus-year-old technology. To modernize this application, what I want to do is wrap it with Swarm. So the first thing I'm going to do is look at the pom.xml, and let's take a look at the changes needed to start using Swarm. Here's the Maven project file — very simple. The only dependency it has at the moment is on Java EE 7. That brings in — I don't know how many JSRs are in Java EE; on the order of 40 or 50, maybe more — all of that. What I want to do with Swarm is bring in just the parts that I need, and I don't quite know what those are, because this application was written a long time ago and we don't quite know what it's using. WildFly Swarm has an interesting feature for moving from monoliths to microservices: automatic detection, or auto-detection, of the Java APIs you're using. It will automatically bring in the components of WildFly Swarm needed to resolve those dependencies, without bringing in all of Java EE.

So let's take a look at what I need to do. I have a plugin for my IDE — this is available for Eclipse and JBoss Developer Studio, as well as IntelliJ and NetBeans — and the plugin makes it very simple for me to set up WildFly Swarm. I choose that option and click finish, and what's actually happened is it's added two things to my pom.xml: the WildFly Swarm Maven plugin itself, and the bill of materials for WildFly Swarm. That's all I need to do. It also adds a version property — the version of WildFly Swarm I'm using — which you can change over time. That's pretty much it. If I want to build and run this, I can run the Maven wildfly-swarm:run goal, and that's essentially going to build my application using WildFly Swarm and do that auto-detection, looking at the source code and figuring out which components are needed. You'll see this in the output here: as it builds, you can see right here that it's detected a number of fractions that I need. I'm using CDI for injecting some resources, I'm using EJBs, and I'm also using JAX-RS to expose the RESTful API to my front end. It then packages that up into a single runnable, or fat, JAR. You can actually see this if I take a look — here's my fat JAR right here, monolith-swarm.jar. And if I go back to this other terminal, you can see it's running now. If I load this in my browser — I'll just go to localhost:8080 — you can see this is my monolith, my ten-year-old application written in Java EE, running on WildFly Swarm using only the components I need. So this is a very, very quick way for an existing Java EE application to leap into the future using WildFly Swarm.

But let's stop there. Let me stop this one, and now what we want to do is deploy it to OpenShift, because I want to wrap a Jenkins pipeline around this monolith. That's easy as well. We're using another plugin that I've already installed, called Fabric8. Fabric8 is an open source project championed by Red Hat which provides integration for projects: it has a Maven plugin to integrate your projects with Kubernetes and OpenShift very easily. I already have a local OpenShift running, so I'm going to go ahead and create a new project — this should be familiar to all of you OpenShift fans watching. I'll create a new project called swarm. Okay, so I've got my new project; now I can simply do mvn fabric8:deploy, and that's going to deploy this existing WildFly Swarm application, which is
wrapped around my monolith, out to OpenShift. As that builds, the same thing occurs: it does the auto-detection, looking for fractions; it found the fractions I was using and brought in a number of others as transitive dependencies of those dependencies, and then it starts the build using OpenShift. While that's building, let's shift over to OpenShift and take a look. Here's my new project, the swarm project. Nothing's happening yet — there's a build in progress; you can see the build here running. This is my Fabric8 build of my application, ultimately going to be run using the Java S2I image provided by Red Hat. Once that build completes — looks like it's done — it will then deploy the application. You can see the application being deployed at the moment, and it looks like it's up now.

Now, if I click on this, you'll notice I get an error here, and as OpenShift experts you can probably guess what it is: the lack of a health check. My application has no health check, and that's one of the first things you're going to want to add when you're moving from monoliths to microservices. I'm going to show you how easy that is to do with Swarm. If I reload this, eventually it will be available — and here's the application now running. But again, that health check is super important, not just to avoid that error message, but also when you're doing things like rolling upgrades or canary deployments: OpenShift needs to know when the application is healthy and when it's not. So let's go ahead and add a health check with WildFly Swarm, and I'll show you how easy this is. Again, we're going to invoke our little plugin — which doesn't do a whole lot, and I'll show you exactly what it does once it's done — and I'm going to add a fraction. I'll click add fraction and I get a list of fractions. There are a number of fractions in here; some of them are supported, some of them are unsupported
upstream fractions from the community. But the one I want is supported; it's called monitor. So I'm going to click monitor and click finish. What this basically did — the only thing it did to my pom.xml — was bring in this org.wildfly.swarm monitor dependency. That then gives me the ability to define health checks. So let me go ahead and define a health check real quick in this application. I'll create a new Java class; we'll call it InfraEndpoint — this is my infrastructure endpoint. It's a RESTful endpoint, so I need to give it a path; we'll give it a path of infra. Then, within my endpoint, I want a health check endpoint, so I'm going to create a JAX-RS GET endpoint whose path is health. It doesn't matter what I call these — I can call them whatever I want; you'll see in a moment how Swarm detects this. It's the @Health annotation, with a public method that returns this custom HealthStatus object. You can just return strings if you want, but I'm going to return a HealthStatus, because you may want to do something fancier than simply saying "hello, I'm alive." I'll just call the method health, and in this case I'm just going to return a HealthStatus named "foo" that's up. What you can do is actually create multiple health checks. Obviously, health checks shouldn't be too invasive — you don't want to actually change the state of the application — but you may want to check one or more things in different modular methods, for example, and then aggregate them with a single health endpoint that looks at all of them; if one of them is down, the application will be considered not healthy. In this case, though, I'm just doing something very simple.

One last thing I need to do, back in my pom.xml: when I start declaring Swarm fractions explicitly, the auto-detection is turned off by default, because it assumes you know what you're doing and no longer need auto-detection — you're telling Swarm exactly which fractions to bring in, which is a good way to avoid surprises, since some fractions might not have the most robust auto-detection code. But in this case I want that auto-detection to continue, so I'm simply going to configure the plugin to keep doing it, by setting the detection mode to force. Okay, I'll save that and deploy it again. Instead of using the command line, I'll use the Maven integration in my IDE and just run the Fabric8 deploy again — this does essentially a mvn fabric8:deploy, but I can double-click it instead of actually typing it, because I'm sure you're sick and tired of seeing me type. It's going to rebuild the application; it still does the auto-detection — you can see it picked up my force configuration — still detects the fractions I'm using, also brings in the ones needed as transitive dependencies, and then rebuilds the application and redeploys it out to OpenShift.

If I shift back over to OpenShift, I can see that once build number two completes, the application will be redeployed. One interesting thing to note during the build: if I go back to the build here, you can see the health check was automatically added. Because I was using that monitor fraction and the associated developer annotations for WildFly Swarm, it automatically knows how to add the Kubernetes/OpenShift endpoints for health checking, and then it uses that in the application itself. As the new version of the application comes up, we can take a look at the log file, and hopefully we'll see the health check automatically wired up correctly for us. I can see it's still in the process of coming up... and there's my health endpoint that was added — oops, lost that one. So it basically took this health check that I declared with an annotation and added an automatic endpoint to the
It also declared and defined the health check for OpenShift. If I go back to the overview screen I can see my application is up now, and when I hit it, it actually is up; the health check is defined in the deployment config, which you can see here, automatically defined for me. So that's a very simple way for a monolithic application to start down the path toward microservices using WildFly Swarm. Okay, let's shift back and go on to the next runtime. We'll go back to our slides, if I can find them... here we go. So that's WildFly Swarm; what we're going to do next is talk about Spring. Spring Boot is an opinionated framework for building microservices using the Spring Framework, along with Java EE technologies like JAX-RS and JPA and a few other Spring projects. Red Hat has already certified Spring Boot apps on OpenShift using OpenJDK, and JBoss Web Server includes support for it. Going forward we're going to continue to certify more and more Red Hat technologies to be used with Spring Boot, like Hibernate or Infinispan. Spring in RHOAR is basically the same Spring that you know and love, but tested and verified by Red Hat's QE department; that includes Spring Boot, Spring Cloud Kubernetes, Ribbon, and Hystrix. The Red Hat components that are part of the Spring ecosystem are fully supported, like embedded Tomcat, Hibernate, or Apache CXF. We also have single sign-on technology with Keycloak and Red Hat SSO, and messaging capabilities with AMQ. More importantly, we have native Kubernetes and OpenShift integration: much like you saw with WildFly Swarm, we have a similar set of features for Spring and Spring Boot. We also support the Spring Boot runtime in Launch, the developer experience website that you saw a moment ago, and we have a number of starters that we've contributed to the Spring ecosystem. Starters are
basically a simplified way of bringing dependencies from the Spring ecosystem into your project via pom.xml entries. And again, as I mentioned, as we move forward additional functionality like support for transactions or JBoss AMQ will be integrated into RHOAR as well. So let's take a look at what that looks like in reality. The next demo is a Spring and Spring Boot demo. What we're going to do is take a piece of the monolith that you saw a moment ago, so let me go back to the monolith and show you. These components are all part of the monolith, but we want to start splitting them out into individual microservices. For example, the catalog service, the thing that gives us the names of the products, the images, and the descriptions, like this one over here: we want to turn that into a microservice, and in addition add some business value along the way, and we're going to do that with Spring Boot. So I've taken the catalog functionality from my monolith and split it into a Spring Boot application. You can see the application here; let me close these other ones so they're out of the way. This is my Spring application: I have my Spring Boot application declared here, a very simple main file, and I have one controller. This is what provides the list of products; it essentially goes out to its own database, gets a collection of product descriptions, and feeds that back as part of a RESTful interface, and that's basically it. So let me go ahead and deploy this and see what happens. I'll go back to my Maven integration, the Spring Boot plugins... let me create a new project first, oc new-project spring, and then I'm going to deploy this microservice out to OpenShift. I'll just do fabric8:deploy again; fabric8 is an upstream open source project which knows about
OpenShift and Kubernetes and is able to take Java applications, package them up and build them using S2I, then deploy them out to OpenShift, and provide additional functionality like creating config maps, service accounts, secrets, and things like that. This build shouldn't take too long. I've also created a very simple UI on top of this microservice, just for demo purposes. Okay, it looks like my build is complete, so let's go back to OpenShift and see what we got. I go back to my overview and I've got a new project called spring. Here's my catalog spinning up; it looks like it just completed, and if I hit the external route for that application, here's my simple user interface for the catalog. It's basically a grid of the different pieces of information in the catalog database, and I can click this Fetch button to refetch the catalog as needed. The scenario here is that we want to not only split out the catalog but also add new business value. Our supply chain is pretty weak: the products we're getting, like the Red Hat Fedora or the JBoss Forge community project sticker, oftentimes have a lot of quality problems from the manufacturers who create these kinds of tchotchkes that you give out at a trade show. We're constantly getting product recall notices, and we need to be able to quickly remove products from our catalog. The problem is that our catalog backend is 30-year-old technology that takes weeks to get changes into. So what we want to do is provide a new interface, and we're going to do that with Spring Boot and OpenShift through a ConfigMap; that's the typical way you do externalized configuration in OpenShift. I'm going to show you how easy that is to do in the code itself. This is the code that actually returns the list of products, and we want to be able to filter that list in our microservice, because this is a simple call
back to a simplified database; but imagine, again, a super old, very large system that takes weeks and months to get changes into. We want to do the filtering here. In order to do that, we need some configuration, so we're going to create a new Java class; we'll call it StoreConfig. For this to participate in the Spring ecosystem, I'm going to declare it to be a @Component, and not only a component but also a set of @ConfigurationProperties with a prefix of store; you'll see what this is used for in a moment. Basically, anything in the config file that starts with store will be considered part of this configuration class. My StoreConfig is going to contain one thing, a list of recalled products, and I'll generate getters and setters for it. So there's my bean, a Spring bean which encapsulates my externalized configuration. Now that I have that, I can inject it into my controller using Spring's @Autowired capability: private StoreConfig config. And now that I have my configuration, I can filter my list: .filter(...), filtering the list of products down to only those whose item ID does not appear in the config's list of recalled products. It's essentially a lambda expression: any product that appears in the recalled-products list will be filtered out. I think that's in place, so the last thing I need to do is bring in the piece of the Spring ecosystem that supports this, Spring Cloud Kubernetes in particular. In my pom.xml I've already typed it in for demo purposes, so I'll just uncomment that, save it, and now I should be able to redeploy this out to OpenShift. I'll double-click the deploy button again and run it; what will happen is it will deploy to OpenShift, create the ConfigMap (which will contain the list of recalled products), and the filter I just wrote will remove recalled products from the catalog.
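The filter just described can be sketched in plain Java. This is not the demo's actual controller code (that one injects the Spring @ConfigurationProperties bean); it is just the same stream-and-lambda idea with a stand-in Product type and illustrative IDs:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CatalogFilter {

    // Stand-in product type for illustration; the demo's real entity differs.
    public record Product(String itemId, String name) {}

    // Keep only products whose itemId is NOT on the recalled list,
    // mirroring the lambda filter added to the controller above.
    public static List<Product> withoutRecalled(List<Product> all,
                                                List<String> recalledIds) {
        return all.stream()
                  .filter(p -> !recalledIds.contains(p.itemId()))
                  .collect(Collectors.toList());
    }
}
```

Because the recalled-ID list comes from externalized configuration, pulling a product is a ConfigMap edit rather than a code change, which is the whole business value of the exercise.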
So that looks like it's in progress; while that's going, let's switch back to OpenShift and I'll show you the ConfigMap. The ConfigMap has been created; here's my list of recalled products, completely empty at the moment. My application looks like it's in the process of being redeployed and should come up momentarily. Take a look at the log file, just to make sure nothing crazy happened; you see the Spring banner, and it looks like everything's working. Let's go ahead and hit this endpoint again: here's my application, and all my products are there. So let's remove the first item, item 329299. We go over here and edit our ConfigMap, which now gives us externalized configuration and the ability to remove products. If I edit this ConfigMap and add that number, it's saved, the configuration is automatically reloaded, and if I go back to my new application and re-fetch the catalog, you can see the Red Hat Fedora has now disappeared from the list of products. I can edit that list to add and remove products. This provides huge business value, because it saves the business's reputation by not distributing junk-quality materials, and my business is happy. Now let's think back to the monolith. This is my monolithic application, my real business-critical application, not the toy application I just created, but it has the same interface. The Red Hat Fedora in this case is still in the product catalog, and that's because the monolith obviously has no idea that I just created this microservice. So let's tie them together and start the strangulation of my monolith into a microservices architecture. We'll keep our existing code as is: we're not going to change our monolith at all, and we're not going to change our new microservice. We're simply going to tie them together using OpenShift and its ability to do clever software-defined networking.
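Before moving on, the ConfigMap edited above can be sketched roughly as follows. The object name, data key, and property layout are assumptions for illustration; the actual map generated by the fabric8 plugin may look different:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalog            # assumed name; fabric8 derives it from the app
data:
  application.yml: |
    store:
      recalledProducts:    # bound to StoreConfig via the "store" prefix
        - "329299"         # item IDs listed here disappear from the catalog
```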
What I'm going to do is work in the list of routes that I have for this application. Remember, I have the catalog and I have... where is it... my monolith. Actually, I need to deploy the catalog to the same project, so let me switch projects and redeploy it, because the two services are going to be talking to one another and I haven't set up the ability to talk between different projects. So I'll just redeploy the exact same microservice into my monolith's project. Once that comes up, I'll be able to do the clever routing I was talking about, which will essentially strangle the monolith: remove the catalog functionality from the monolith and replace it with my Spring Boot microservice from RHOAR. Let's make sure the build... looks like it just completed, so it's now being deployed, and here's my catalog in the same project as my monolith. Now I'm going to create what is essentially a redirection route; you're all familiar with routes from the OpenShift world. I want to take any request that comes into my monolith on the path the monolith expects to get its list of products from, and redirect it to my new piece of functionality written as a microservice. So I'll create a new route, call it redirect, use the monolith's hostname as the hostname, and set the path to the products path; any time this path is hit, it goes to the catalog service instead, on the same port and the same RESTful addresses. Hit Create, and now I have this redirection in place. If I then hit my monolith... oh, I need to edit the ConfigMap again, obviously, to remove that product in this project, because I've redeployed the service.
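The redirection route created in the console can be sketched as a plain Route object. The hostname, path, service name, and port here are all illustrative placeholders, not values from the demo:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: redirect
spec:
  host: monolith.apps.example.com   # the monolith's existing hostname (placeholder)
  path: /services/products          # path the monolith UI calls (placeholder)
  to:
    kind: Service
    name: catalog                   # the new Spring Boot microservice
  port:
    targetPort: 8080                # same port as before (placeholder)
```

Because the route matches on host plus path, only the catalog calls are peeled away; every other request still reaches the monolith untouched, which is what makes the strangler approach incremental.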
Let me go ahead and edit that and add my Red Hat Fedora product ID to the list of recalled products. I go back to my monolithic application, and you can see as I reload it that the Fedora is now gone. So I've started the strangulation process on my monolith, and you can do the same with a number of the other components, like the pricing and inventory services and the ratings and reviews if you had those, and ultimately you get to the point where you've gone completely from monolith to microservices using RHOAR. We've got about 10 minutes left; I actually have two more demos, but I think I'll just do one. I'm going to focus on Vert.x, because that's the more interesting runtime, and we'll go through this quickly. Vert.x is the third runtime within RHOAR. It's really great for high-performance, low-latency, high-concurrency applications, web applications in particular, and the reason is the nature of reactive programming and event-driven, asynchronous execution models. To briefly illustrate that, take a look at the execution model of a single-threaded synchronous application. This application has three tasks to perform: blue, green, and red. In a single-threaded synchronous model, blue runs until it's completely done and exits, then green runs, then red runs. Not much to say here, except that it's going to be really, really slow, especially for applications that spend a lot of time waiting on disk or network or some other resource. The second model is the threaded model. This is the traditional model you're probably familiar with if you're a Java EE or Java developer: the tasks run in parallel on different cores or threads of a computing system, and the CPU is able to switch between them freely at any time. That means that as a developer, if you're writing code in this model, it has to be thread-safe; you have to deal with synchronization, locks and mutexes, and
you have to coordinate between those threads so that the state is what you expect, and you don't corrupt state or get things like race conditions or deadlocks. This is also called preemptive multitasking. The third model is the asynchronous model, the one we'll concentrate on. This is where the developer controls the interleaving; it's also called cooperative multitasking. It gets rid of those really nasty things like race conditions, deadlocks, and blocks of synchronized code. It does this through a mechanism you'll see in a moment, but it's important to know that it's been around for a long time; it's not like Vert.x or Node.js invented this stuff. It was used on the Space Shuttle 30 years ago, where when you push the button to fire a thruster, it had better fire the moment you push the button and not a few seconds later. It was also used in Windows, Windows 3.x in particular. Essentially, the bits of code from blue, green, and red in my example each run until they reach a good stopping point, for example when you go out to a disk or a database. Once that occurs and you're waiting on a callback, other code can run, like the red code or the blue code in this example. It's important to note that your code runs uninterrupted until you tell it to stop. So if you write user interface code and you block, you get things like this: you end up painting the screen with a dragged dialog box. It's a terrible bug, but I'm sure you've all seen it in the past; it's because the red code blocked when it shouldn't have in the UI, and that produces artifacts like this. That should be avoided. But consider what this model buys you. If blue, green, and red were completely CPU-bound, it would buy you absolutely nothing, because they would just run until they're done and there's no waiting for anything. It does buy you something when there are wait times, for example web servers or user input or some other thing that takes a non-trivial amount of time. It
also is a big benefit when you have a large number of these tasks, because then the interleaving can happen in such a way that you save a lot of time. If you need to run blue, green, and red, and each task takes a certain amount of time, then compared to the synchronous case you can save a lot of time by interleaving the bits of code that are waiting for callbacks: when blue stops to wait for a callback from a database call, green can run; blue runs some more when it gets its callback; green runs some more when blue stops again; and you get the idea, you can interleave all of this, ultimately (on the right side of the slide) saving a chunk of the total time needed to run all of blue, green, and red. This is what Vert.x does; this is the basis of reactive systems and asynchronous, event-driven programming. Vert.x is a reactive toolkit for the JVM, and it supports a number of languages on the JVM, like Java, JavaScript, Ruby, Ceylon, Scala, Kotlin, and a number of others. Again, it's ideal for high-concurrency, low-latency services, where you have a lot of people hitting a website or a lot of machines talking to one another. It does this through event-driven, non-blocking I/O.
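The interleaving just described can be sketched with a toy single-threaded event loop in plain Java. This is not Vert.x code; it only illustrates the idea that each chunk of a task runs uninterrupted until it yields (as if waiting on I/O) and the loop then runs whichever continuation is ready:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy single-threaded event loop illustrating cooperative multitasking.
public class ToyEventLoop {
    private final Deque<Runnable> queue = new ArrayDeque<>();

    public void submit(Runnable task) { queue.addLast(task); }

    // Run tasks one at a time until nothing is left; a task models
    // "waiting on a callback" by submitting its continuation.
    public void run() {
        while (!queue.isEmpty()) {
            queue.removeFirst().run();
        }
    }

    // Demo: blue and green each run in two chunks and interleave.
    public static List<String> interleaved() {
        List<String> order = new ArrayList<>();
        ToyEventLoop loop = new ToyEventLoop();
        loop.submit(() -> {
            order.add("blue-1");                     // runs until it "waits"
            loop.submit(() -> order.add("blue-2"));  // continuation after the callback
        });
        loop.submit(() -> {
            order.add("green-1");
            loop.submit(() -> order.add("green-2"));
        });
        loop.run();
        return order;
    }
}
```

A single thread produces the order blue-1, green-1, blue-2, green-2: green's first chunk runs while blue is "waiting", with no locks, no mutexes, and no preemption.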
Those non-blocking libraries run throughout the entire set of components within Vert.x, so there's no blocking; well, there is waiting, obviously, but there's no blocking. There are things like promises, futures, and callbacks to make applications much more flexible, more resilient to failure, and more performant. Within RHOAR, here's the list of supported components for Vert.x. You can see we again target microservice developers: externalized configuration, circuit breaker, health check, service discovery, and a number of other components within the Vert.x ecosystem that target microservices and reactive microservices specifically. For example, if you're familiar with RxJava or you use RxJava, there's a set of reactive extensions that are in Tech Preview but on the road to being supported, and likewise reactive streams. If you're integrating with AMQ, we support both the AMQP protocol and MQTT. For cluster management we support Infinispan, an open source project championed by Red Hat and exposed in JBoss Data Grid. And of course there's Vert.x core itself, which consists not only of the core web interfaces but also a shared event bus that lets you do distributed messaging across different Vert.x instances in your cluster. The current release of Vert.x is 3.4.2, and that is what's currently included and supported within RHOAR. To use these components, it's very similar to Swarm: you simply declare some pom.xml entries. We also have boosters, as you saw, which demonstrate a number of microservice concepts within the realm of Vert.x, reactive programming, and reactive systems, and the examples, including the one I showed you, are available along with a number of others at the website listed at the bottom. For the last demo we have time for (I also have a Node.js demo, but that one's relatively simple and you can check it out after viewing this one), the last thing we're
going to do is Vert.x. Let me create a new project to hold my Vert.x example. What I have is, again very similar to the Spring Boot example, a catalog, the same product catalog, implemented not with Spring Boot but with Vert.x, backed by an external database. The first thing I'll do is deploy that database, very briefly: oc project to make sure I'm in the right project, then oc process on a template and create it; that will deploy MongoDB as the database. The structure of a Vert.x project is very different from your typical Spring project. The core component within Vert.x is called a verticle, which contains basically your business logic. You can split it across a number of different verticles, but effectively you write your code in a verticle, much like you would in a Spring Boot component. So here's my simple verticle; it's a web verticle that exposes a set of RESTful APIs, so I can say GET /products and it will give me the list of products from the catalog. What we're going to do is add a circuit breaker, using the supported version of the Vert.x circuit breaker within RHOAR. To add it, the first thing to do is declare it in the pom.xml, so let me find a good spot and put it here: a new dependency, vertx-circuit-breaker, very simple, just like all the other dependencies. Once I have that, I can uncomment the code I've already written, bringing in a circuit breaker object which gets configured here; let me import that properly, and here's the circuit breaker object itself. There are a number of configuration options for a circuit breaker, like how many times it can fail before it opens the circuit. If you don't know what a circuit breaker is,
it basically protects calls to some other service. If that call fails a number of times, the breaker does what's known as opening the circuit and falls back to some other strategy to get equivalent data. This prevents subsystems and microservices from being overloaded by too many requests: if a service gets overloaded, the circuit opens and gives it a chance to recover, through things like OpenShift scaling, or detecting that a pod is unhealthy and killing it and replacing it with a new pod. Then, once the service is ready to go again, the breaker closes. So I have my circuit breaker object, and here's the API call I want to protect: the call to the database. This is reactive code already; what we want to do is wrap it with the circuit breaker, and I'll show you the effect. Instead of making the call to get products and waiting for a response through my callback, I'm going to call circuitBreaker.executeWithFallback. Let me fill out the code, and then I'll explain exactly what's going on. This is the call to my circuit breaker to protect the call to the database: you give it some code to call, you give it some code to call when the original call fails (which is called the fallback), and then you give it some code to deal with the result, whether it came from the fallback or from the original code when it succeeded. We essentially just need to fill in these three things. So let's fill in the code that calls the database: we take the existing code and copy and paste it here. Here's my existing call; in this case I don't care about handling errors inline, because if it fails I want it to fail properly and trigger the fallback, so I'm going to remove the error-checking code.
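The behavior just described, open after a number of failures and serve a fallback, can be sketched in plain Java. This is not the Vert.x API (vertx-circuit-breaker is asynchronous and future-based); the names and the manual reset method are illustrative only:

```java
import java.util.function.Supplier;

// Toy circuit breaker: opens after maxFailures consecutive failures and
// serves a fallback value. Illustrative only; not the Vert.x implementation.
public class ToyCircuitBreaker<T> {
    private final int maxFailures;
    private final T fallback;
    private int failures = 0;
    private boolean open = false;

    public ToyCircuitBreaker(int maxFailures, T fallback) {
        this.maxFailures = maxFailures;
        this.fallback = fallback;
    }

    public boolean isOpen() { return open; }

    // Run the protected call; on failure, count it, open the circuit after
    // maxFailures, and serve the fallback instead of propagating the error.
    public T execute(Supplier<T> call) {
        if (open) {
            return fallback;          // circuit open: don't even try the call
        }
        try {
            T result = call.get();
            failures = 0;             // success resets the failure count
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (failures >= maxFailures) {
                open = true;          // too many failures: open the circuit
            }
            return fallback;
        }
    }

    // A real breaker closes again on its own after a reset timeout; here we
    // expose it manually to keep the sketch short.
    public void reset() { open = false; failures = 0; }
}
```

The key property is the open state: once tripped, the protected service stops receiving calls at all, which is what gives it room to recover.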
I only care that, if it fails, it fails properly and goes to my fallback, so I remove the code that deals with error checking, and I also remove the code that responds back to the client, because here I only want to return the value from this call. I do that by completing the future, events.complete, and I complete it with the list of products from my database; that list is encapsulated in this JSON object, so I complete with that. So that's my existing code making the call and returning the value by completing the future. There's a lot of reactive machinery I'm glossing over here, we don't have time to cover it completely, but this is essentially a callback: I call getProducts, and when it's ready to return, this code completes. If that code fails, I want to return something else, and that's my fallback. In this case we'll just hard-code it: we return a new JSON array containing a single product, and just for demo purposes we'll give it the name "fallback product", a fallback description, a made-up item ID, and for the price we'll just make it a million. In reality your fallback will do something a little more interesting than this; it's not going to return something hard-coded in most cases, it's going to do something like check a cache or go to an alternate service. But in our case, for demo purposes, we just return this hard-coded value. So there's the return for my fallback, and lastly, the code that sends the result back to the client is the same code from down here, so we'll just copy and paste that code in, except instead of this object we're going to
call event.result().encodePrettily() on it. So there's the code; let me delete the old code, and here's my new circuit-breaker-enabled response code using Vert.x and RHOAR. A lot of this can be simplified; you'll notice my IDE is telling me I can replace these with simplified lambda expressions, so I'll go ahead and do that, and a couple of others in here, and I think that's it for now. Okay, it looks like I'm good: I've essentially wrapped the call to my database with a circuit breaker configured in the code you saw earlier. Let's try it out. I'll deploy this out to OpenShift again using the same IDE integration, make sure I'm on the right project, and deploy it to the new project I created. What should happen is that when I hit this /products API, the call to the database is wrapped with the circuit breaker, so you can imagine what this demo is going to be: I run it, it should look fine, I kill the database, and then hopefully we see the fallback employed. Again, the fallback in my example is a very simple hard-coded list of products, this fallback product here, but in a real-world application you would do something a little fancier. Okay, it looks like that's been deployed, so let's go back to OpenShift and to my new Vert.x project down here. My database is up, my new catalog microservice is up, so let's hit it. Here's my catalog; I can click Fetch Catalog and get the same exact set of products I had before. Now let's kill the database and witness what happens and how that fallback works. I'll take the database down by scaling it to zero. If I go back to my microservice and click Fetch Catalog, you see it took a couple of seconds and then the fallback
was employed to return this fallback product; again, you would do something fancier in a real-world application. Let me bring the database back up. After the configured timeout, I think it's five seconds in the circuit breaker, plus the health checks that need to pass in OpenShift once this application comes back up, when I hit the service again it will retry the call to the database; this time it should succeed, because the circuit is closed again, and the application goes about its normal business. It looks like the database is still coming up, so if I hit this endpoint it should still fail... Fetch Catalog, still failing... so once the database comes back up; looks like it's back up now. There are a number of timeouts in play here to give the service a chance to come back to life, so it could take up to 20 or 30 seconds for this catalog to come back; we'll just keep hitting it, and hopefully it will eventually come back, unless I have a problem in my code, which is not unlikely. And there: the database came back, the circuit was closed by the Vert.x circuit breaker, and my business is back up and running. So that's it for the demos. Again, I had a Node.js demo which we don't have time for today, but you can check it out at the code pointer I gave earlier, github.com slash James Faulkner slash roar-examples. Last slide, the summary: essentially, I've shown you how you can take monolithic applications and move them to microservices either in a big-bang approach using RHOAR or incrementally, preserving the value you've already invested in your existing applications. There are multiple technical solutions for this modernization, depending not only on how much time and resources you have but also
regulation and the amount of risk you want to take; not everyone moves at the same speed. RHOAR and Red Hat in particular are designed to support you whether you're doing traditional Java EE with stateful workloads or modern cloud-native workloads. With Red Hat and RHOAR we provide a trusted solution both for today's existing business-critical apps and as a supported path to modern application architectures with microservices and the popular frameworks we talked about today. So I think I'm done, Diane; I know I went over time, I appreciate it, and I'll hand it back to you. The only question that anyone had... and you just roared, to use a bad phrase, through those demos, and I am so impressed, because they were all live and nothing crashed. I was just waiting for something, but no, it kept going and kept going, like the Energizer Bunny on top of OpenShift. So, wonderful. If you could go back one slide: the one question was about where the files were. You had a slide there somewhere that had the directory where everything was hiding; not the slides, but the GitHub repo with the examples. Yes, this one here. Yes, and that would be the great spot, because everybody was looking for where all this code was so they could go play with it themselves. You've done a great job on this, so thank you. I'm totally psyched; I learned a ton of stuff, and I love the metaphor of the strangulation, I hadn't heard that one before, so I'm so going to use that again. That was a great way to strangle those monoliths; it's a bit morbid, but it aligns nicely with Unix killing processes and zombie processes and things like that. The perfect Halloween session, so thank you very much. I'm definitely going to have you back on to do some more, because you just rocked it today. Thank you very much, and I think there's one other thing in the chat, but I'm betting... yeah, fantastic presentation,
really, really well done. So thanks; I'm going to end the recording and