Well, hello everybody, and welcome again to another OpenShift Commons briefing. This time I'm really psyched to have one of my colleagues from Red Hat, Siamak, who's up in Stockholm, with us to talk about modern architectures: going beyond just microservices and doing some full-stack demos, and by full-stack we really mean full stack. We're going to try to get him to show off a lot of the middleware and other pieces that really do the heavy lifting for some of the applications that folks are building. We're going to have Q&A in the chat and Q&A after his presentation. He can't see the chat, so I may have to interrupt him once in a while and ask a question if somebody is really stumped, but otherwise we'll save most of the questions for after he's finished with the presentation. So without any more preface, I will let Siamak introduce himself and take it away, and we'll look forward to this journey.

Thank you, Diane, and hello everybody. I'm going to talk about modern architecture beyond microservices today, and show you a full-stack demo of essentially what is beyond microservices. By now everybody knows what microservices are, but in order to build services in an enterprise environment with this architecture and take them into production, what other layers of the stack are required? What else do we need to do to be able to take these things into production? Before that, let me introduce myself in just a few sentences. I'm Siamak Sadeghianfar, I do technical product marketing for OpenShift at Red Hat. OpenShift is Red Hat's container platform, and you can reach out to me by email or Twitter after the session, of course, with any questions you might have in the future. But let's get started immediately. I'm going to turn off my video after a little while so that it doesn't block any part of the slides or the things I'm showing you.
So, microservices. A lot of people have been talking about microservices for the last two years. It has taken over the whole blogosphere, and half of the sessions at any developer conference you see are about microservices. There are good reasons for that, and I have listed a few of the reasons people are so interested in microservices, for the use cases where it makes sense, of course. It's not a silver bullet for everything, but generally for complex applications it helps with faster time to market, with being more efficient, and with scaling more easily. Because you can break an application down into smaller services, you can develop them independently, deploy them independently, and scale them independently. That simplifies things and also makes management a lot simpler. If an application is built of 1000 classes and hundreds of them have changed and you want to make a deployment of that, it's a lot more complex than a service that is 10 or 20 classes of which two have changed. So the risks are a lot lower, and that's among the benefits we see around microservices, and why a lot of people are trying, or striving, to build applications using this architecture where it makes sense.

The application I'm going to use in this demo is called the CoolStore application. It's an online shop selling certain cool products from Red Hat, like the Red Hat polo shirt or the Red Hat fedora. It is a web-based app, it's polyglot, using different types of technologies for building the various services, and it uses the microservices architecture. Some of the services are based on Node.js, some on Java using different types of frameworks, and they're all deployed as containers. Let me switch to the application to show you. This is the web application I'm talking about. You have some really nice products in it, you see the inventory for each of these products, and you can add them to your shopping cart. These fedoras are actually really popular.
I'll add a couple more of them for my friends. You can go to the shopping cart and see what has been added. I change my mind about a sticker; I don't want that anymore, I just want the fedoras, and I can just make a purchase and go on. It's a web shop, as you would expect.

The architecture of this application is built from each of these pieces being a microservice of its own, deployed independently. We have the web UI, the front end we were just browsing, based on Node.js and AngularJS, and then we have three backend services. One is the inventory service, a microservice that uses Java EE and runs on JBoss EAP, backed by a Postgres database; the stock status, the blue number that shows how many are left, comes from that inventory service. Then we have the catalog service that gives the list of products; that's a microservice that uses JAX-RS to create REST services and runs on Tomcat. And we have the cart service, also a REST service, running on JBoss EAP. The front end of our application does not talk to these services directly. We are using agile integration: a component based on Apache Camel and Spring Boot that aggregates these APIs. The gateway, that Spring Boot service, makes calls to all these backend services, gets the data, cleanses it, transforms it to the JSON format that we want to consume in our web UI, adds or removes the information it needs, and sends JSON data to our web front end to be visualized. That builds the whole service. So the inventory part is one microservice on its own, the list of products comes from another service, and the shopping cart functionality is also a service on its own. And these services are all deployed as containers, independently, on OpenShift.

What is OpenShift? I keep mentioning it, but I haven't really explained what it is. OpenShift is Kubernetes for the enterprise.
So it is a container platform based on Docker containers and the Kubernetes orchestrator, plus all the other pieces you require to be able to build containers and run and manage them in production at scale. It provides self-service. It's a polyglot platform: you can deploy different types of languages on it, and we will look at JBoss middleware, Spring, and Node.js in this example. You can automate a lot of things around it. And most important of all, it is a secure platform for running containers. Security is one of the big issues around containers; if a container is not secure, you shouldn't be able to run it on the platform, so the platform makes sure that unqualified or non-compliant containers cannot just be deployed.

If you look at our application, each of these services is packaged in a container and deployed on OpenShift, and they expose a REST API that other services call, like they call the gateway service, or it's the web front end calling that REST API to collect the data and visualize it. These services might be Node.js, Spring Boot, Python, whatever it really is; it could be a combination of those, and the platform works with those containers regardless of what they are. It also supports .NET Core 1.1; since Microsoft has open-sourced it and it runs on Linux, you can even run .NET applications on the platform.

So let's take a look at OpenShift and how these containers are deployed, how this application is deployed on the platform. I have a number of projects that we're going to go through, and what I want to show right now is the production environment of this application. We have a series of service groups; each service group is one or a number of containers that I have grouped together to be able to easily understand and see them. So I have the web UI running in its own container.
And then I have the gateway, the catalog service, the cart, and the rest of the services, as you see. I get some monitoring information, metrics: how much memory, CPU, and network is being consumed by this specific container, and also how many containers are backing this service. We have built-in load balancing for each of these services. Right now I have one container running for my web front end. If I click on that, I get a list of containers for that web front end, just one right now. Click on that and I get the details of that container. There's some really valuable information: for example, which IP this container is assigned and which node it is actually running on. All these containers are running on a bunch of virtual machines, and I can see exactly which virtual machine this specific container is scheduled on. On the right side, I see some good information about which image was used for deploying this container and how much memory and CPU is assigned to it. And of course this container can burst, so it can consume more CPU than it's allocated if there is enough capacity, for example. And I see its current state: it's running.

Then we have the metrics tab. I get monitoring information about my container so I can identify anomalies over time. I see right now there is one gigabyte of memory allocated to this container, but only about 68 megabytes is consumed, and the same goes for the CPU and network. It helps me debug this container if suddenly I see a spike in the numbers, but it also helps me size the resource allocation a little better. As you can see, I have too much memory assigned to this container. If it consumes only about 100 megabytes, I can probably reduce that one gigabyte to some lower number without affecting the performance of this container, and the same goes for the CPU. So these container-specific metrics help me a lot with better utilization of resources for the container, and also with debugging things.
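The sizing observation above translates into the container's resource settings in the pod spec. This is a hedged sketch only: the container name and numbers are illustrative, not the demo's actual resource.

```yaml
# Hypothetical right-sizing of the web-ui container after reading the
# metrics tab; the name and values are illustrative, not from the demo.
spec:
  containers:
  - name: web-ui
    resources:
      requests:
        memory: "128Mi"   # comfortably above the ~68 MB actually observed
        cpu: "100m"
      limits:
        memory: "512Mi"   # down from the original 1 GiB allocation
        cpu: "500m"       # the container may burst up to this if capacity allows
```

Requests are what the scheduler reserves; limits are the ceiling the container can burst to, which matches the behavior described in the talk.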
The logs tab helps me look at the logs of the application. You see it's the Node.js application running, the node server.js command. It's really helpful for debugging and seeing what's going on in the container. And in OpenShift you actually have a central log management solution built in as well, based on Elasticsearch and Kibana, so even if this container is removed, the logs are pushed into Elasticsearch and will be available for later analysis.

The terminal is also a really handy tool. Within the web console, or through the command-line environment, I can directly get a remote shell inside the container without any extra tool. Right now I'm actually inside that Node.js container. If I run a ps ax command to see which processes are running, I see that there is a node server.js. I could do some kind of debugging and access some information inside the container. This is really helpful, especially during the development phase, or when debugging in production and you want some info about what is happening inside that container.

Let me go back to the overview of my production environment. That was the front end; let's take a look at some other components that we have. In this application we have the cart service, which is a microservice running on JBoss EAP. If I go to the logs, I see that this container is using the JBoss EAP 7 application server container image, a certified image that comes from Red Hat, patched and secured to make sure no vulnerabilities are in it, and the service is deployed on that container. We have the other service as well, the catalog service, which is running on a Tomcat instance. Logs... oh, I got the cart again. There: you see that this service, the catalog service, is the REST service deployed on top of Tomcat.
So as you can see, the middleware, the programming language, or the type of runtime used for the application doesn't really matter from a deployment perspective. It's a completely polyglot environment: you deploy whatever type of application you have built, with whatever type of framework you have built it.

Let's drill down a little into the CoolStore gateway. The gateway, remember, was the API aggregator based on Spring Boot and Apache Camel, and these two are actually part of JBoss Fuse Integration Services, a middleware product from Red Hat that comes with a supported Spring Boot and a supported Apache Camel for doing lightweight integration. Let's drill down into that container and look at the logs. We see the popular Spring Boot logo in the logs, so it is running on that. As I mentioned, this is agile integration, and we are doing that aggregation of the APIs in this container, but we don't see much of that right here. Let's take a look at the details of the integration flows running inside that container. I can click on Open Java Console on the Spring Boot Camel container, the CoolStore gateway API, and that takes us to the Hawtio console, which is specifically for JBoss Fuse Integration Services, to be able to look into what integrations are available inside that container, what is implemented, what their status is, and how they're doing.

Now we're inside the Fuse Integration Services console. We see a list of all the Camel routes that are implemented. A Camel route is basically a composition of different microservices, or calls to different APIs. If you look at how we can compose microservices using JBoss Fuse: we can define a series of steps using Apache Camel, where each of those steps can use one of the enterprise integration patterns.
What are those? For example: call these four microservices and wait for the responses, or split the message that comes in into smaller pieces, transform them, and merge them together. There is a known list of enterprise integration patterns, based on the Enterprise Integration Patterns book, that Apache Camel implements with its own DSL, and it makes it really easy to define these complex integrations with a few lines of XML or Java code.

If I go to one of these Camel routes, for example the product route that is aggregating product information from the inventory service, I can get some information and stats about this integration flow: for example, what the endpoint is if some integration flow wants to send a message to this one, how many exchanges have happened, and what the processing time is, max, mean, averages, and so on. Looking at it this way is a little difficult; I think the easier way is to visualize it as a standard integration flow diagram. So you see the standard notation of enterprise integration patterns, and it also shows me how many exchanges have happened so far through that flow. I can see exactly how this integration is done: a message comes in, then we make a choice based on what type of request it is, we transform the message, change the body of that message, and send it to the inventory service in the back end, which is the inventory microservice we're calling to get the data back, transform it, and provide it to the front end. I can also take a look at the source code for that integration flow. As you can see, it's about 30 lines of XML that I have implemented for exposing the REST API, getting the requests from the front end, transforming the message, making the call to the backend system, transforming the back end's response, and sending it to the front end.
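To give a feel for what such a route looks like, here is a rough sketch in Camel's XML (Spring) DSL. This is a hedged illustration, not the demo's actual source: the route id, hostnames, ports, and paths are all made up.

```xml
<!-- Hypothetical sketch of a gateway aggregation route in Camel's XML DSL.
     Route id, hostnames, and paths are illustrative, not the demo's code. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <!-- expose a REST endpoint for the web front end to call -->
  <rest path="/api/products">
    <get uri="/">
      <to uri="direct:products"/>
    </get>
  </rest>
  <route id="productRoute">
    <from uri="direct:products"/>
    <!-- call the catalog microservice for the product list -->
    <to uri="http4://catalog:8080/api/products"/>
    <unmarshal><json library="Jackson"/></unmarshal>
    <!-- a split/enrich step would go here to add each product's stock
         status from the inventory service, as the demo's route does -->
    <marshal><json library="Jackson"/></marshal>
  </route>
</camelContext>
```

The point of the DSL is exactly what the talk describes: REST exposure, transformation, and the backend call are each one declarative step rather than hand-written plumbing code.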
So that whole flow, which is generally really complex to implement in code, is summarized in a very compact, concise implementation in XML, using Camel and JBoss Fuse Integration Services. What we used to do before for integration was centralized integration through a very dedicated team, with really long delivery times for implementing integrations, through IDEs specific to that integration suite. All those things are gone and not usable when you're building a microservices application and you have integration needs. Camel running in Spring Boot, running in a container, allows us to take advantage of that microservices style of architecture, but at the same time use enterprise integration patterns to build these integration flows into our application, and it simplifies integrating services together.

Let's go to the OpenShift console again, back to our containers. All right, so we have all our containers deployed, and it is running as microservices. But as I mentioned before, this is an online shop, and in online shops it is extremely important to provide a really good experience for the users, otherwise they will switch to some other online shop. The easiest thing for a user to do is to google the same product, find the next shop, or Amazon, or somewhere, and make the order.
So conversion is always a very difficult matter in an online shop, and because of that we have to be extremely careful about failures in online shops, in e-commerce websites. Failures happen anyway, right? But we have to make sure that we have built an architecture for our application, and an infrastructure, that considers the different types of failures that happen, isolates the scope of those failures to the services that are affected, and does not propagate them to the rest of the application. These failures happen at several levels, and we need to think about all those layers, both from the infrastructure perspective and on the application side, and have solutions in place to minimize the effect of those kinds of failures.

Running as containers on OpenShift provides some level of that resilience and fault tolerance. For example, in OpenShift we can easily scale up our containers to have more instances of a service running, so that if one of them crashes, you have other instances providing that service to your front end and to your users. We have the inventory service running in this container, and right now it's running as one container. I can click on the arrow up and scale that to two containers, and OpenShift makes sure that after that container is up, it is automatically added to the load balancer, so that requests to the service go in a round-robin fashion to each of those containers. If I click more, it will create more instances of that container, and in the same way I can scale the service down. So I have two containers running, and if one of them crashes, we have the other one that can provide the service.
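Under the covers, those up and down arrows are just changing a replica count that OpenShift then reconciles. A rough sketch, with illustrative names rather than the demo's actual resource:

```yaml
# Hypothetical fragment of the inventory deployment configuration;
# the name is illustrative. Scaling in the console edits this value.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: inventory
spec:
  replicas: 2   # two containers behind the service's built-in load balancer
```

The command-line equivalent on OpenShift of this vintage would be along the lines of `oc scale dc/inventory --replicas=2`.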
We can actually automate this process through what we call the autoscaler in OpenShift. You define a certain metric, say network traffic or CPU: if the CPU goes beyond a certain threshold, let's say 80 percent, you want OpenShift to automatically scale this inventory service up, to a maximum of five containers, and if the CPU drops back below the threshold you have defined, it scales the service back down to the initial number you had defined. So we can automate this scaling up and down based on the load on an application, and simplify even the management of scale. This is the first thing we can do: scale our application up and have several instances in case a fault happens.

But we don't really stop there; OpenShift also manages the health of these containers. If I click on the service, I see that there are two pods, two containers, backing this service: one of them deployed an hour ago and another about a minute ago, this is the new one. I'll click on the container that was deployed earlier, and I want to go and manually delete it, as if a crash had happened in that container and it was stopped for some reason.
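The autoscaler described above can be written down as a resource definition. This is a hedged sketch: the names are illustrative and the exact API fields vary by OpenShift version, but the 80 percent CPU target and the one-to-five replica bounds mirror the example in the talk.

```yaml
# Hypothetical autoscaler for the inventory service, matching the talk's
# example: between 1 and 5 containers, targeting 80% CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: inventory
spec:
  scaleTargetRef:
    kind: DeploymentConfig
    name: inventory
    apiVersion: v1
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```

The same thing can be done from the CLI with `oc autoscale dc/inventory --min 1 --max 5 --cpu-percent=80`.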
I delete the container in our production environment, and if I go back to the overview of the production environment, you see that OpenShift has immediately sensed that something happened. We had defined the number of containers for the inventory service to be two, and we removed one of them. OpenShift has a health check mechanism that by default checks whether the container is up and running; you can also override it based on your application, so that the health check makes sense for the logic of your application. As soon as OpenShift sees that a container is not healthy, or doesn't exist, it spins up new containers to bring the service back to the number of instances we had defined for it. We had defined two containers, I removed one of them, and OpenShift immediately discovered that one container is missing and is scaling back up to two containers. It shows three temporarily, because one of them is being deleted while another is being started; after the operation finishes, we will again have two containers running at the same time. So that's the second level of resilience we can provide around the containers: whatever number of containers we have told this service to scale to, OpenShift makes sure that number of containers is running at all times, even if a crash happens or something goes wrong in the application.

But that's not really application-level fault tolerance; that's the container-level fault tolerance that OpenShift takes care of. We still have to make sure that, from the application perspective, we also consider what happens if one of the services we depend on fails. Usually in the world of microservices we have services calling each other: a client calls the first service, that service relies on other services, and those services in turn might call other backend services. If one of these services is slow, this usually directly affects the
calling service's performance: the first service has to wait for the second service until the response comes back, and if every call takes 15 seconds, it means that my client, in this case the front end, and so the user, has to wait 15 seconds to get the response. One slow service makes the entire application slow. Even worse than that, you might have a service failing, and when that service fails, it causes the next service calling it to fail, which causes the next service to fail, until all of that is propagated to the user. That's what we call cascading failures in your application. These are just a few of the scenarios you would experience, and that you need to think about, in order to build service resilience into the architecture of the application.

In JBoss Fuse Integration Services and Apache Camel there is built-in support for integrating your integration flows with some of the components of Netflix OSS, like Hystrix for the circuit breaker pattern and a Turbine server for collecting and aggregating its data. So what does Hystrix do in this example? If one of these dependent services is failing, and we're using Hystrix in the scenario, Hystrix makes sure that on the next call we don't call the service again for a number of seconds or minutes, whatever is defined; it blacklists the service and doesn't call it. If the first service calls us, we immediately return a fixed response that was previously defined, or fall back to another service, whatever the integration developer has implemented through Camel. This way we give the failing service some time to come back up, or get debugged. After that blacklist period finishes, we give the service a try again; if it works, we remove it from the blacklist and start calling it again. This scenario is called a circuit breaker: when the service fails, we open
the circuit at this stage, so that calls don't get propagated to the failing service. Hystrix also provides a dashboard that can visualize all this. I'm going to click on that to take a look at the circuits we have in our CoolStore gateway. You see we have one circuit per REST API that we're calling on the backend services: one for the cart service deleting items, one for getting them, the product inventory, and so on. If I refresh the page a couple of times so that we get more calls on these REST APIs, we see that the number of successful calls has gone up, and all the circuits are closed, which means everything is healthy: all the calls are going to the backend services normally, as expected.

But let's go and take down some of these services, as if there was a proper crash in our containers and we cannot recover from it. Right now we have two containers backing our inventory service; I'm going to scale that down to zero so that our entire inventory service is gone and we cannot show stock status for any of the products. If we hadn't built service resilience into our services, that would usually cause a failure in the application: when the web page is refreshed, we make a call to get the inventory status for each of these products, and since that service failed, there would be an exception for the entire website and the whole CoolStore web shop would come down. That's pretty much the worst thing we can do for our conversion, because when an e-commerce shopper goes to a shop and gets a Java exception, or the whole site is down, I can guarantee you that the user never comes back to that shop. It's the best way to lose customers, and of course we don't want that to happen.

What we could do instead is isolate, using the circuit breaker, that failure to only the inventory status. The way we implemented the service resilience, using Hystrix and Camel in the CoolStore gateway, is that if the inventory service is down, we just show a fixed text, static content, instead. So we cannot show the inventory status for the products, but we do not stop our users from making orders: we allow them to still browse the website, make orders, and shop, and we take the tiny risk that maybe they order something that is out of stock. If that happens, there's always a chance that we could back-order those products from our suppliers and send them to our customers a little later, or just apologize and offer some other product. That would upset a significantly smaller portion of our customers than bringing the whole website down and losing significant revenue.

Let's refresh this page a couple of times while the inventory service is down and see how the circuit breaker kicks in. As you see, we don't get any delays even though the inventory service is down: since Hystrix already knows that the service is failing, it doesn't even make the call to that service anymore, it directly returns a response. So we still have a very responsive web shop despite one of our services being down, and we have only partially shut down some of the functionality. If we go to the Hystrix monitor, we see that the circuit for the inventory service is open, which means we're not making any calls to that back end anymore. Everything else is functioning: all the circuits are closed except the inventory service. So when the pages are refreshed, we just stop calling the inventory service and show the static content.

Let's scale the service back up. It takes a second until our inventory service comes up. As soon as it's up, we can go and check our web shop to see if Hystrix has noticed that the service is up and can again show the inventory status for us. Refresh the page a couple of times... it's back up. Now we see the inventory information is displayed again, and Hystrix has removed the
inventory service from the blacklist and closed the circuit. Since the call was successful, it now starts passing all the calls to the backend service again. So by using Netflix Hystrix, and integrating it into Camel with Spring Boot as part of JBoss Fuse Integration Services, we can bring more resilience inside our application as well, by not bringing the whole application down when faults happen and isolating them as much as possible to only the services that are failing.

All right, so that's an overview of the application we are working with. Here is what we want to do in the rest of this demo. There has been an issue with this polo shirt that you see in our CoolStore, some problem with the color used to produce it, so the manufacturer has asked us to recall this product from our CoolStore online store. The way recalling products works in e-commerce is that you usually get a deadline for taking the product down, and if you don't take it down by that date, you have to pay financial damages for every day the product stays up, because it damages the manufacturer's, the supplier's, reputation. So we need to take this down as soon as possible. But the inventory status and the list of products come from an ERP system in the back end that is about 20 years old. We have made a request to remove the product from the catalog, but they have given us two weeks to do it, and that's one week later than the deadline we've got. So we have decided, as a workaround in the meantime, to modify our inventory service for this product and set the inventory to zero. We basically intercept the calls to this backend system, return a zero inventory, and prevent our users from buying this product, and so avoid all the financial damages. I'm the developer this issue is assigned to, to make that change and
push it to production right away, without causing any downtime; this is happening during the day and I don't want to cause any disruption to the production traffic. The code is hosted on Gogs, which is a Git service similar to GitHub, and this whole Gogs service is actually running inside OpenShift as a container, with a Postgres back end as well. I'm going to log in with my developer account. If I go to Explore, you see there is a team repository called CoolStore microservice, which is all the code for all the services running in this application. But the way we work in this project is that developers don't have direct access for committing code into the team repository. We have more senior developers who have commit access, and they can review the code written by other developers; if it's all right, they can then merge it into the team repository. This is the process they use to maintain code quality for our application. The process became popular on GitHub, and a lot of teams, including ours, are using it. So we have a team repository, and I, as a developer, fork that team repository to get a copy in my own personal space. I can clone it, make code changes, and test them; when I'm happy with the changes, I commit them back to my fork repo, and when I'm ready, I send a pull request to the team repository, so that a senior developer, a code reviewer, can take a look at that pull request. We can discuss different issues, maybe modify some parts to make sure it complies with our code conventions and best practices, and after we get approval from a number of code reviewers, a code reviewer can merge the change into the team Git repository. We'll use that process in this demo as well. So, since I don't have access to commit anything to this Git repo, I need to fork it to my own personal developer account. OK, fork. Now we see that I have a copy
of the CoolStore microservice repository under my own account, and it also says that it's forked from the team CoolStore microservice. Let's copy the URL of this Git repo, go to my IDE, clone the code, and make some changes. In the Git view: this is JBoss Developer Studio, which is essentially Eclipse with the JBoss tools plugins and some other plugins installed on it. I'm going to clone the Git repo; by default it reads from the clipboard, so the default URL is correct there. I enter my credentials and clone the Git repo. I don't need all the branches, I just want to work on the master branch, and the default location is fine, so I put it there. It's a big repo, so it takes a few seconds to pull down all the bits. You could also use any other IDE here; if you're used to IntelliJ or other tools, it doesn't really make any difference, you just clone the code and start working with it.

So I have cloned the CoolStore microservice fork into my own workspace, and I want to modify the inventory service. The first thing I do is import it as a Maven project; this was a Java project running on JBoss EAP, based on Maven. I switch to the Java view, and I have my inventory service open. It is a standard Java project with a number of REST services. Since we do test-driven development in our team, I'm going to start by writing a test that verifies that the inventory status for those recalled products is zero, and afterwards I'm going to write the code that makes that unit test pass. We have an inventory test that was prepared beforehand that does exactly this check: it takes a list of the recalled products and tests whether the inventory status for those products is zero. Right now it is ignored in our test suite, so I'm going to remove the ignore annotation so that it is included and runs as part of our unit tests. I can run that unit test immediately inside the IDE as well. I say run, and I see that it fails, as expected, because we haven't really made any
code change yet I just enable the test that test those products are actually recalled and the message of course is that the product be expected to have zero inventory but there are 10 of them let's go to inventory service and remove the stock like basically removes return zero for the stock status of those specific products we have some commented code here that actually does that for us so we are circumventing those data that we get from the ERP system and if it's those products that are recall we set the quantity to zero in the inventory and return the result in the front and after the ERP system is updated and we can remove this code and let the data pass through from the backend system okay let's run this unit test once more this is green so the code seems to be working let's commit that and push that to our github repository I have to add these files that I've changed to the list of changes that I want to commit moved from stock and I commit and push them to my fork repo okay take a look at our repository refresh the page you see that the last commit is the one that we just made and those code changes are made so now that I have made the code changes I have to make a pull request to send these changes for a review to the team repository I can do that by clicking the screen button to create a pull request give it a title products call products to move from inventory I can also see the bottom the list of commits including this pull request and what files have been changed create the pull request okay my job as a developer is done I will log out and log in as a code reviewer back into Gogg's Git server I see some logs here I see that the developer has sent the pull request to this repo could go to the repo and take a closer look as you can see that commit doesn't exist in the team repo yet but we have one pull request waiting here to be reviewed and that's the one that was sent a few seconds ago by the developer click on it I see the name of it some description I 
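The fix described above, forcing recalled products to report zero stock until the ERP system is corrected, might look roughly like this. This is a minimal sketch, not the actual CoolStore code; the class name, method names, and item IDs are all illustrative:

```java
import java.util.Set;

public class InventoryService {

    // Item IDs of the recalled products (hypothetical values)
    private static final Set<String> RECALLED_ITEMS = Set.of("329299", "329199");

    // Stand-in for the real lookup against the backend ERP system
    static int quantityFromErp(String itemId) {
        return 10;
    }

    // Quantity returned to the front end: zero for recalled items,
    // otherwise whatever the ERP system reports
    static int getQuantity(String itemId) {
        if (RECALLED_ITEMS.contains(itemId)) {
            return 0; // intercept stale ERP data for recalled products
        }
        return quantityFromErp(itemId);
    }

    // Mirrors the unit test enabled in the demo: recalled products must
    // show zero inventory, everything else passes through unchanged
    public static void main(String[] args) {
        if (getQuantity("329299") != 0)
            throw new AssertionError("recalled item must show zero stock");
        if (getQuantity("165613") != 10)
            throw new AssertionError("other items must pass through");
        System.out.println("inventory tests passed");
    }
}
```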
I can see the list of commits in this pull request and the list of files that have been changed. Generally, every team has a convention for how many people have to review and approve a change before it can be merged into the team repository; in our team, one reviewer is enough. I give it a +1 and add a comment to show my approval for this change, and since one approval is enough, we can merge this pull request into the team Git repo. If I go to the list of commits now, I can see that the commit is available in the team repository.

All right, now that we have merged the change into our team repo, we want to push it into our production environment, and we generally do that through a continuous delivery pipeline. OpenShift has support for building pipelines that automate delivering the application, pushing it through different stages of development and into production. Of all the services the CoolStore application is built from, we are working on the inventory service, so part of our pipeline focuses on testing the inventory service in an isolated environment, and part of it tests it together with the rest of the services.

This is the layout of the pipeline. On every change to the team Git repository, like the merge we just did for the pull request, the inventory test environment is used first: we build a JAR file, build a Docker image for the inventory service with the changes, deploy it on its own server, and run a couple of tests. If all of that is successful, we promote the Docker image to the CoolStore test environment, where we test the entire CoolStore application with all the services together; it's a kind of system test or user acceptance testing environment. After that succeeds, we promote the inventory image into the production environment and make a production deployment. At that stage we do not replace the live inventory service: we deploy the new container side by side with the live inventory service, without touching production traffic. At that point we wait for approval from a release manager, or whoever is authorized to approve switching traffic and going live in production. This step is usually integrated into your IT workflow management, whether that is ServiceNow, a JIRA workflow, or something else, or even ChatOps in Slack or Rocket.Chat: you get a notification or a task in that system saying a deployment is ready in production and waiting for go-live approval, you click somewhere, and the traffic is switched to set the new service live in production.

Let's take a look at our projects. We have a number of projects in OpenShift for different environments. There is the inventory test environment, where the inventory service is tested in isolation, with only the inventory service and its database deployed. Then there is the test environment, where the entire CoolStore application is deployed; this pipeline doesn't touch any services there except the inventory, so we only update the inventory container in the test environment and then test all the services together. Then there is the production environment, running the application live; that's the one we have been looking at. And there's a CI/CD project holding all the infrastructure for our CI/CD: the Gogs server with its Postgres backend, with persistent storage attached to Postgres so the data doesn't disappear when a container dies or moves around; Jenkins, which runs our pipeline; and Nexus as our Maven repository manager. If I go to Builds and then Pipelines, I see the OpenShift pipeline running in the CI/CD project.
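Expressed as a Jenkins pipeline, the stages just described might look roughly like the following. This is a hedged sketch, not the actual pipeline from the demo repository: the project names, service names, Git URL, and oc invocations are assumptions, and the real Jenkinsfile may use different steps.

```groovy
// Hypothetical scripted-pipeline sketch of the stages described above
node('maven') {
  stage('Build JAR') {
    git url: 'http://gogs:3000/coolstore/coolstore-microservice.git'
    sh 'mvn package -f inventory-service -DskipTests'
  }
  stage('Deploy to inventory TEST') {
    // Feed the JAR into an OpenShift binary build; the deployment then
    // rolls out the resulting image in the isolated test project
    sh 'oc start-build inventory -n inventory-test ' +
       '--from-file=inventory-service/target/inventory.jar --follow'
  }
  stage('Test inventory service') {
    sh 'mvn verify -f inventory-service'
  }
  stage('Promote to CoolStore TEST') {
    // Promote the tested image into the system-test environment
    sh 'oc tag inventory-test/inventory:latest coolstore-test/inventory:test'
  }
  stage('Deploy to PROD (no traffic)') {
    // Deploy the candidate next to the live container; the route still
    // sends 100% of the traffic to the old version
    sh 'oc tag coolstore-test/inventory:test coolstore-prod/inventory:prod'
  }
  stage('Approve Go Live') {
    // Pauses the pipeline until a release manager approves
    input message: 'Go live in production?', ok: 'Go Live'
  }
  stage('Go Live') {
    // Switch the router so all traffic hits the new container
    sh 'oc set route-backends inventory inventory-blue=100 inventory-green=0 -n coolstore-prod'
  }
}
```

The `input` step is what produces the manual "waiting for approval" pause shown later in the demo.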
The pipeline has already finished building the container for the new inventory service with the change included; it has run the tests successfully and promoted the image into the test environment to run all the services together and test them, and since that has also been successful, it has promoted the image into production and deployed it into a container that is not live. So we deployed into a new container in production, we did not replace the old one, and we start running smoke tests, maybe even manual tests, against this new container. At that point we wait for a manual approval for the go-live.

This process is called a blue-green deployment, and we use it to deploy to production without causing any downtime. Right now, at the "deploy production with no traffic" step, we have deployed into a new container while the production traffic is still going to the old container. After approval happens, we switch the traffic to the new container and still keep the old container up and running, so that we can test the new container against the complete production data and production traffic; if something happens, we can roll back to the previous version just by pointing the router at the previous version. This allows us to go back and forth easily between different versions of our application running in parallel in production, without disrupting production traffic.

The pipeline has continued, the smoke tests against the new container in production have succeeded, and we are waiting for an input, an approval, to go live with these changes. Let's take a look at our production environment and see how that looks. If I click on the inventory live service, we see that there are actually two containers providing the inventory: one called inventory-green, deployed about an hour ago, and inventory-blue, deployed three minutes ago. The blue one is the new container we just set up with the changes, but as you can see in the traffic split, 100% of the traffic is going to the older container, which doesn't contain our change, and 0% is going to the new container. So we can test the new change in production, with the production data, without really affecting any of the users, because we're sending all the traffic to the previous version.

We could even use other patterns of deployment. For example, with a canary release, instead of putting 0% of traffic on the new container, we could send something like 5% of the traffic to it, see how the new service reacts to that portion of the production traffic, and progressively increase it to 100% once we are confident the new service functions as expected. What we are doing in this demo, though, is a blue-green deployment: we don't send any traffic to the new container before we are sure everything is functioning, and then we switch the traffic completely over to it.

Let's take a look at our CoolStore. We wanted to take this polo shirt out of stock, and we see that the change hasn't happened yet, because the inventory service that is live right now doesn't contain any of the changes we made in the code. Let's go back to our pipeline; I click on "input required". This pipeline uses the Jenkins pipeline DSL for describing continuous delivery pipelines; it's a very powerful syntax and very popular for building pipelines. In this demo I haven't integrated it with ServiceNow or any other workflow system, so we're going to go directly into Jenkins, running on OpenShift, and approve the go-live there. Since Jenkins is running on OpenShift, authentication is also integrated with OpenShift, so I can log in to Jenkins with my OpenShift credentials, and since I'm authorized to approve this go-live, I click Go Live, which switches the traffic to the new deployment. OK, the pipeline has executed successfully.
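The switch the pipeline performs, along with the rollback and the canary variant mentioned above, comes down to adjusting route weights. Some illustrative oc commands follow, assuming a route named inventory backed by services inventory-green (old) and inventory-blue (new) in a coolstore-prod project; the names are assumptions from this demo's convention, and the flag syntax can vary between oc client versions:

```shell
# Blue-green go-live: move all traffic to the new container at once
oc set route-backends inventory inventory-blue=100 inventory-green=0 -n coolstore-prod

# Instant rollback: point the router at the previous version again
oc set route-backends inventory inventory-green=100 inventory-blue=0 -n coolstore-prod

# Canary variant: send 5% of production traffic to the new version first,
# then increase the weight as confidence grows
oc set route-backends inventory inventory-green=95 inventory-blue=5 -n coolstore-prod
```

Because only route weights change, the switch is effectively atomic from the clients' point of view, which is what makes the instant rollback described in the demo possible.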
Let's go back to the production environment and take a look at how the containers look now. As you can see, it has switched: 100% of the traffic is on the blue container, the inventory-blue container deployed five minutes ago through the pipeline, and 0% is going to the previous version, the green container deployed an hour ago. So now, if I go to the CoolStore web page, any call to the inventory service is answered by the new version of the container. If I see that something is not functioning properly, I can always switch back from the blue container to the green container and have the previous version live again in a matter of seconds. It doesn't cause any disruption of traffic in production, and I can atomically switch traffic between the previous and new versions, rolling back and rolling forward quite easily, directly in the production environment.

Let's refresh the page and see if the product has been taken off the website. As you can see, the inventory now says unavailable. We still have inventory for all the other products, but we have intercepted the call to the ERP system, set the inventory for this polo product to zero, and avoided all the financial damage that the faulty ERP system in the backend would have caused. So this is how we can make a change and quickly push it into production, as a developer or as someone authorized to deploy to production, without causing any downtime; we can do this directly in production during the daytime, with no weekend maintenance window required.

As a developer, I'm also interested in making sure that my application is secure. I hear a lot about high-profile CVEs that break out on the internet, like Heartbleed or POODLE or Shellshock, and as a developer I just write code, so I don't know too much about how to make containers secure. But I don't want my containers to have these issues, and OpenShift provides a way to handle that easily as well.
As part of its offering, OpenShift provides a management suite called CloudForms that gives you more insight into the containers you are running. With the CloudForms management engine you can manage OpenShift and containers, and also other types of infrastructure, whether you're running on Amazon, VMware, Red Hat Virtualization, or something else; it's a single pane of glass for managing any type of infrastructure. What I want to show you right now is what it can do for containers. It is looking at my OpenShift environment and gives me some stats: how much CPU I have left in the pool of resources in my OpenShift environment, and how much memory. Memory-wise it's a little orange; I'm at about 80% usage without much left, so I should add more nodes to my OpenShift environment. There's also a list of all the resources in use: how many projects there are, how many images, and so on. If I click on projects, I see the list of projects and how many resources are in each of them, how many pods and containers. Right now I'm logged in as an admin, so I can see everything; we could restrict access so that a person can only see their own projects and their own containers and nothing else. Let's go back to the overview.

What we wanted to check is that the containers I've deployed are secure and do not contain any CVE issues. I go to the list of images available in my OpenShift environment, and we see the inventory image that we built and deployed up on the list. I can click on it, and it gives me some really valuable information about this image. First of all, it tells me in which projects, pods, containers, and nodes this particular image is deployed. This is really helpful, because sometimes you have one image deployed in several containers, and CloudForms immediately tells me that this specific image is deployed in those five projects, so I have an overview of which teams are using the image I have made available.

If we scroll down a little, we see something interesting: a compliance check was run on this image about seven minutes ago, and we are compliant. But what does that mean, and what compliance is being checked? On the right side you see the compliance policies configured in CloudForms. At the moment we are using OpenSCAP, which is a standard way of checking CVEs and vulnerabilities against a Linux system, and since these are Linux containers, the same rules apply to them as well. We see that 421 rules have been checked, and right now we are compliant; we are not exposing any high-profile issue. I can also download the vulnerability-scanning report as a really nicely formatted HTML file, to send to people and share within the organization if needed. Let's download it, open it, and take a look. It gives me the name of the benchmark, the list of the rules used to verify this image, and how many rules passed and how many failed: 420 rules are green, and one rule of medium severity failed. By default, my CloudForms environment is configured to prevent containers from being deployed only if a high-severity vulnerability exists in the container; since this one is medium, we don't block it from being deployed. I can get a list of all the rules that were checked, with their severity, and if I click on one, I see which CVEs relate to that rule and what the result of the check was, to get more information.

So we have made sure that this container is secure, that it doesn't contain any serious vulnerability, and we can always share this report within the organization to make sure everybody is aware of which containers in our application are compliant; and if something is not compliant, CloudForms can prevent it from being deployed on the OpenShift platform.
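Outside of CloudForms, the same kind of OpenSCAP scan can be run by hand with the oscap-docker wrapper from the openscap-utils package. These commands are illustrative: the image name and data-stream path are assumptions, and the available sub-commands vary by OpenSCAP version:

```shell
# Scan a local container image for known CVEs
oscap-docker image-cve coolstore/inventory:latest

# Evaluate the image against an XCCDF benchmark and produce the same kind
# of HTML report that was downloaded from CloudForms in the demo
oscap-docker image coolstore/inventory:latest xccdf eval \
    --report inventory-scan.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```

Running the scan from the command line like this is handy for checking an image before it ever reaches the registry, while the CloudForms policy enforces the same checks continuously on everything that is deployed.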
I took a little more time than expected, which didn't leave much room for Q&A, so I think I'm going to stop at this point and answer any burning questions if there are any.

Wow, well, my mind is blown. This is going to be the best demo I've seen in ages in terms of covering pretty much every aspect of the full stack. There was one question about the Hystrix container, and maybe this is a good way to bring some closure here: someone was looking for the container on GitHub; he found the CoolStore repository but could not find the Hystrix container. Can you pop over to where the code for all this demoing lives?

Absolutely, and I have a slide for exactly that: you can absolutely try this at home, and please do. You can go to the coolstore-microservice repository under the jbossdemocentral organization on GitHub. Let me show you. Just be careful that we have a couple of branches: if you're running on a different version of OpenShift, for example 3.4, you can go to the stable branch to make sure it doesn't break in the middle of things, because it's a very active repository and we keep working on it. You also get a lot of information there about how to deploy the whole thing. So the application code is in this repo, but for Hystrix we use the images that are available on Docker Hub, and on OpenShift there are a bunch of templates that we use to deploy them: there is a template for the Netflix OSS services that takes those containers from the Docker Hub registry and deploys them on OpenShift. In the same repo there is also a guide for deploying the whole demo, and a provisioning script that helps you run it and sets up the whole thing in whatever OpenShift environment you have.

Well, you really covered pretty much every base; there were a couple of other questions, and you hit them too while you were talking, so my mind is blown. You really touched on a lot of things. I hadn't seen the Hystrix stuff before, and I love that you closed with the OpenSCAP material; that's one of my favorite things to remind people to do. This has been pretty awesome. I also think that a number of pieces and parts of this could be full demos themselves, so I look forward to doing some deeper dives and drill-downs on different parts of this as well. For those of you who are listening, or who are watching this later, please do reach out to us, and if you have questions, send them either to the OpenShift Commons mailing list or directly to Sineq. Do you want to throw your contact information back up on the screen so we end with that slide? And if people have issues with the demo, should they just log an issue on GitHub?

Either of those works, absolutely. The first choice is to just file an issue on GitHub, or drop me an email; either would work. If you have problems setting it up or you find bugs, definitely contact me.

And are you by any chance planning on coming to KubeCon in Berlin in a couple of weeks, and to the OpenShift Commons gathering? I know you're in Stockholm, so you're on the right continent at least.

Yeah, absolutely, I'm looking forward to it; I'm actually counting the weeks. I'm going to hang out at the OpenShift booth there, so definitely come by if any of you are there: come talk to me, give me feedback, and we can have a chat about any of the pieces we presented today.

Perfect, so I'll definitely get some face time in with you, and everybody else can too at the OpenShift Commons gathering on March 28th; and on the 29th and 30th you'll probably be captive in the Red Hat OpenShift booth. So thanks again, Sineq, for doing this, and we look forward to everybody's feedback and to seeing some versions of the CoolStore out there in the universe, tweaked out with your own cool stuff. Thanks again, take care everybody.

Thank you, Diane. Bye.