All right, awesome, exactly four o'clock. Thanks very much for being here. I'm going to be doing an introduction to OpenShift: talk a little bit about what it is, what runs underneath, some of the technologies. It's just going to be a few slides, five slides max, because I'm sure you're interested in actually seeing the product. For those of you doing the workshop right now, very happy to have you here; feel free to continue your workshop or watch the presentation. It's all being recorded for later use anyway. Thanks very much.

So, for those of you using OpenShift right now, these are the layers that make up the platform. The first layer is the operating system layer. OpenShift runs anywhere Linux runs, so anywhere Red Hat Enterprise Linux or CentOS runs. That's the first layer, and that's where your containers are running: everything you do in OpenShift runs inside a container. You don't necessarily have to create the container yourself; sometimes you don't even know things are running in containers, because the platform does it for you. In OpenShift we have two options to run your containers. The first is Atomic Host, a very lightweight operating system, about 200 MB on disk, that runs just containers. The other option is the traditional Red Hat Enterprise Linux that you all know and love.

The layer on top of that is the container runtime layer. The container runtime orchestrates and negotiates with the operating system kernel the isolation of your process. Remember that a container is an isolated process: you're isolated from other processes, and also from things that might harm you, so your process cannot be accessed by other processes in the same operating system. That's the basic explanation of containers: a process that's isolated from others. That means you can have your own share of memory and your own share of compute capacity, of CPU. And it's secure: it uses SELinux so that the process you're running, or its memory space, won't be invaded by other processes.

The layer on top of that is the container orchestration and cluster management layer, and the technology we use for this is Kubernetes. I'm sure you've all heard of Kubernetes, the most popular technology out there today. The objective of Kubernetes is to orchestrate across multiple nodes and multiple containers. If you're running a single container on your machine, or let's say ten containers in a production environment, you probably don't need Kubernetes; your environment likely doesn't justify installing a container orchestrator. The same way that if you have just one virtual machine, or five virtual machines, you probably don't need VMware or Red Hat Enterprise Virtualization; you'd be fine with VirtualBox.
But once you get to an enterprise environment that's more regulated, where you're deploying hundreds or thousands of applications across thousands of nodes in different data centers, with different security groups and people that control the process, you need a more capable container and cluster management solution, one that allows you to control who can do what, and where. If you're deploying an application under a regulated environment, for example a PCI-regulated environment, you want to make sure applications land on specific nodes that are regulated (I'll show a node-selector sketch in a moment). The technology that does this in OpenShift is Kubernetes: it controls networking and storage, includes a container registry, and includes logging and metrics as well. And security is a cross-cutting concern on the platform: we look at security from the moment you run your container process to the moment you expose it to the outside via, for example, an API management solution.

So this is all about running containers. If we go a level up, we get into the use cases of building containers and automating the deployment of containers, plus a catalog where I can pick base images and base content for my containers. You can always build your containers yourself and bring a container to OpenShift, which is one of the use cases you see in the workshop. But most of the people we interact with already have a build process where they build WAR files or JAR files, and they want a container image created from that process. So in OpenShift we have a few technologies that create this container for you. As a developer, you don't need to create the container image yourself; the platform does it for you. And we have learned that transferring the responsibility of container creation to the platform allows you to add more governance. We don't believe it's a recommended practice for a container that a developer builds on his own machine to be taken to a production environment; we prefer having a central place where we apply governance to the building of containers.

So those are the three layers we have in OpenShift, and I'm now going to get into the product. My session is going to take, I think, 15 more minutes at most. The first thing to notice when you enter an OpenShift cluster: as you can see here, the version I'm running is OpenShift Online, which means it's deployed on a cloud provider, in this case AWS, but it could be running on any cloud provider. The version of OpenShift running in OpenShift Online is the exact same version you can install on your own laptop, that our customers install in their data centers, that our partners install and run. It's the exact same bits. We're running an online environment with the exact same bits you get to install in your own environment. This is Red Hat saying: what we sell for you to run, we run for ourselves. It's something you can trust.

So the first thing you see in OpenShift is that you need a place to run your applications.
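On the earlier point about regulated environments: pinning workloads onto specific nodes is done with node labels and a node selector. A minimal sketch, assuming a hypothetical `compliance: pci` label; the names and the label are my own illustration, not something from the demo:

```yaml
# Sketch only: pin a workload to nodes labeled as PCI-compliant.
# The label key/value and all names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: payment-service            # hypothetical workload name
spec:
  nodeSelector:
    compliance: pci                # nodes would carry this label, e.g. via `oc label node <node> compliance=pci`
  containers:
  - name: payment-service
    image: registry.example.com/payments/payment-service:1.0   # hypothetical image
```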
For us, that place is a project. You're asked to create a project, and mine is going to be called "alexproject", just because I like Alex. I just give it a project name. What a project is, is an isolated area: isolated from a compute capacity perspective, from a network perspective, and from a roles and capabilities perspective, from other groups. A project is like your execution space. We see customers and partners using projects to represent things like different environments: for example, one project that represents a production environment and another that represents a development or staging environment. We see companies that give projects to their developers, so they have an area where they can play and develop applications. But you have to keep in mind that, essentially, it's a quota-bound, resource-controlled area where you can run things.

So I'm going to start with a very basic example: I'm going to run a Node.js application here, because I like Node.js. I select the version of Node.js, and there are essentially three ways you can bring content into OpenShift. The first way is that you give us your git repo, like in this case. Here it has to be a publicly accessible repo, because I'm on the internet, but in your own environment it just needs to be any git repo the cluster has access to; if it's in your company, a git repo the cluster can reach inside your company. That's the first way. The second way is if you have a WAR file, a binary, that you want to bring: for example, you have Jenkins building your WAR files and EAR files, and you want to bring that WAR file to OpenShift. You can embed that WAR file into a container, like a Tomcat container or a JBoss EAP container; we've seen customers running WebSphere containers, for example. And the third way is essentially bring your own container image: if you've built your container image someplace else, you can bring that image to OpenShift. Those of you doing the workshop did one example like this, going to a container registry, getting the container image, and running it on OpenShift.

For the sake of this example, I'm going to use a git repository and create an application based on a publicly accessible git repo. It's going to be a JavaScript application, and "my-js-app" is the name of my application. The objective of the OpenShift platform is to automate every single step from code to production. So I could click Create here, but if I open the advanced options, it gives me more choices about where in the git repo I want to bring the content from. It also asks very nice things, for example: would you like a route to be created for your application? A route is an externally accessible way to reach your application. Another question it asks, of course, is whether you'd like to secure that route. And since we're actually going to build the container image, there are options that deal with the build configuration.
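Under the covers, what the console generates from these choices includes a build configuration for Source-to-Image (S2I). A rough sketch; the repository URL and the builder image tag are assumptions of mine:

```yaml
# Sketch of an S2I BuildConfig like the one the console generates;
# the repo URL and Node.js builder tag are assumptions.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-js-app
spec:
  source:
    git:
      uri: https://github.com/example/my-js-app.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:8              # assumed builder image version
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-js-app:latest        # pushed to the integrated registry
```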
As I mentioned in the slides, one thing is running containers, but we've learned that in enterprise environments you're going to be building containers all the time, so we also have facilities to make building containers very approachable to developers, DevOps groups, or any user. And once you've built a container, you also have options to configure and specify how you want to run things. For the sake of this example, I'm just going to keep the defaults and create my application.

It says it's creating my application, and if I go back to my project, what it's doing right now is getting the source code from the public git repo and cloning it. In this case it's a Node.js application, so it's going to fetch all the dependencies for that source code and package the Node.js application inside a container image. It's building a container image, a Docker image, for you. You don't have to do that yourself. If you want to do it, if it's part of your company policies to build container images, great, just bring your container image to the platform. But in OpenShift you don't have to; we build the container image for you. As I said, we've learned that once you have a centralized place to build all your container images, the same way you have a place to build your JAR files and WAR files, you can add more governance, more control, more checks to that process.

So in this example we're building the container image. What happened is: it identified it was a Node.js application, it read the Node.js modules it needed, it executed a package command to fetch those modules, it created an image, and then it started to push that image into a container registry. At this moment my image is already in a container registry and can be used by anyone. If I come back to the overview, I'll see that my application is very likely running. Yes, here it is, my application is already running. So a lot of things happened here: it went to a git repo, cloned the repo, downloaded the dependencies, packaged the application, added it to a container image, pushed the image to the registry, pulled the image from the registry, found a node to run my application, created a route, and ran the application. I'd say this whole process took about one minute and thirty seconds. And we've learned from some of the companies we interact with that simple things, like having a DNS-addressable route created for your application, can take a long time: "I need to open a ticket to get a DNS-addressable route, and that's going to take 15 days." We believe that things that can be automated should be automated, and we do that in OpenShift.

So this is, as I said, a very simple application; there's nothing much here. It's a Node.js application, here it is running, and I can go to the git repo and see that this is the same content.
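That automatically created route is itself just another object on the platform. A minimal sketch of what it might look like; the hostname is an assumption, and the edge TLS termination corresponds to the "secure route" option on the create form, which I didn't actually enable in the demo:

```yaml
# Sketch of the route the platform created; hostname is assumed.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-js-app
spec:
  host: my-js-app.apps.example.com   # hypothetical DNS-addressable host
  to:
    kind: Service
    name: my-js-app
  tls:
    termination: edge                # only if the "secure route" option is chosen
```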
Back on the overview screen you see some interesting things: a number in a circle, a pod, and an arrow pointing up. That kind of gives you the idea that you can click the arrow and things will happen, and that's exactly it. Today you have one container running for this application, and if you want to scale it to, say, two containers, it's as easy as clicking the up arrow. That means you now have two containers running this application, and my route, because I have a load balancer, was already updated with the addresses of the two containers. I didn't have to do anything other than click the arrow.

I'm sure all of you are asking: "But I did have to click the arrow. Can I have auto scaling for my applications?" Yes. I'm going to show you autoscaling real quick, because we all love this. The first thing we have to do to configure autoscaling is set some limits for the application, because the platform needs to know what to base the scaling on. So for this Node.js application I'm going to set the resource limits; this is me telling the platform how much memory and CPU the application can consume. If you remember the virtual machine days, the least amount of compute power you could give to an application was one vCPU. In OpenShift you actually have millicores, one core divided by a thousand, so you get that level of granularity. So I'm going to say this application can use 256 MB of RAM, and the CPU will be calculated for it. And I'm going to save this.

Now something very nice is happening, something people love, called a rolling deployment. It brings up a new deployment of the application while the old deployment is still running. The reason I need a new deployment is that we believe in immutable configuration for applications: I didn't go into a running container and change its specification; I actually created a new container with the new resource limits, and I did that in a rolling fashion. It brought a new container up, added it to the router, then took an old one down; brought another up, added it to the router, and took the next one down. Your service level wasn't disrupted, because there were always containers responding on the route.

So now, if I go to my application's deployments, I can see that this specific deployment is version 2; everything is versioned. I can also have metrics for my application; it takes a minute or two to start gathering them, so you'll see the resources I'm using for my application show up here soon.
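As a reference, the limits I just set live in the deployment configuration, alongside the rolling strategy. A sketch of the relevant fields only; the names are assumptions, and a real object also needs selectors and template labels:

```yaml
# Sketch: relevant parts of the deployment configuration after setting limits.
# Values match the demo; everything else is assumed.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-js-app
spec:
  replicas: 2
  strategy:
    type: Rolling                  # new pods come up before old ones are taken down
  template:
    spec:
      containers:
      - name: my-js-app
        resources:
          limits:
            memory: 256Mi          # the memory limit set in the demo
            cpu: 500m              # half a core, expressed in millicores
```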
Let's continue with the autoscaling scenario, because we all love autoscaling, and it's pretty easy now that we have limits defined. My application is consuming at most 256 MB of RAM, and it's configured to run half a core, if I'm not mistaken. I can check how much I'm using right now under resource quota: my project is currently using around 512 MB of RAM out of the two gigs available.

So let's go to the deployments of my application and set up autoscaling. As I said, it's pretty easy: I pick my deployment configuration and I say "Add Autoscaler". This is the moment where I tell the platform the minimum and maximum number of containers, or pods, that I want running for my application. Just before I do that, let me show you something real quick: let me bring this back to one replica, so I can show you the autoscaler kicking off. OK, scaling back to one; it's asking the pod to be terminated, which will happen real soon.

So let me go back to my deployments, edit the deployment configuration again, add an autoscaler, and say that the minimum number of pods is two; remember that I currently have one. So I'll have a configuration that says the minimum is two, let's put the maximum at 10, and my CPU target is going to be 50%. Whenever CPU utilization across all containers passes 50%, it brings a new container up; if it goes below, and you can specify the thresholds, it will terminate a pod. A pretty easy way to do autoscaling. I save this, and you can see I have a horizontal pod autoscaler configured for this application, currently showing two desired against one running; it already has metrics for my application, and my autoscaler says the minimum is two and the maximum is 10. It takes about a minute for the autoscaler to start reading those metrics. By the way, we recommend that in production environments you pre-warm applications: if you know load is coming in, say, over a weekend, you don't have to prepare for the whole load, but you should always prepare yourself for more than you currently run.

Now the autoscaler has identified that the minimum is two, and it's actually spinning up a new container to run the application. I didn't have to do anything, which preserves the resiliency of the application. And the CPU measurement is aggregated across all pods: if just one of them is behaving strangely, it still considers all pods in the math when deciding whether to scale. You can also have other types of checks on a pod. For example, you can add a health check so that if the pod becomes unresponsive, you kill it and bring up a new one; I'll show you that in a moment.

As I said, I can come here to the monitoring of my application. I'm not going to force an autoscale, because autoscaling is one of those things where it works and then, all right, it worked; it's kind of boring to watch, and I'd have to send traffic at it. But you believe me, right? This is enterprise software. So these are the metrics for my application; it has started collecting CPU, memory, and network. It also integrates with monitoring tools like Prometheus, which I'm sure you've heard of, so you can bring Prometheus in to monitor your applications.
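For reference, the autoscaler I just configured corresponds to an object roughly like this. A sketch; the resource names are assumptions, the numbers are the ones from the demo:

```yaml
# Sketch of the horizontal pod autoscaler from the demo; names are assumed.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-js-app
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: my-js-app
  minReplicas: 2                       # never fewer than two pods
  maxReplicas: 10                      # never more than ten
  targetCPUUtilizationPercentage: 50   # scale out when average CPU across pods passes 50%
```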
Now I'm going to go to the second part of my demo, which is very nice: something called blue-green deployments. A/B deployments are when you have two versions of the application on the same route. In order to do this, I'm going to need a second version of my application, slightly different. If you remember, the current one is the "Welcome to your Node.js application on OpenShift" page with a grayish background; I'm going to deploy another version, slightly modified with a green background. I'm pointing to my personal git repo here, and this is the green one, just so we know. The whole process kicks off again: it goes to your git repository, as it's doing right here, and the build is running. Let's take a look at the full logs of the build. It's cloning the repository, and it's pulling an image, because it needs a base image to layer your code on top of. Red Hat provides those images, and we maintain their life cycle: if there's a CVE in any of the dependencies of the image, for example glibc, which had a recent vulnerability, we fix that vulnerability for you, and you can always stay updated.

Awesome, I love it when this happens live: the build failed, so let me run the build again. While it does that, let me talk a little bit about the container catalog. Red Hat was the first company to create a health index for containers: we take containers and analyze whether any vulnerability has been identified in them. For example, I can look here at the Red Hat Atomic base image and see that this image was updated six hours ago. And as I said, since you're building applications that are now based on containers and contain all the dependencies of your application, from the Java dependencies down to the operating system dependencies, you need to make sure the security vulnerabilities in all those dependencies are always addressed. So we created this health index for the images, and we always work to keep those images safe. For this image you can see there was a bug advisory, and if we actually go into the image and look at its contents, we can see why the image was upgraded, the packages that were upgraded, and their versions. This one fixes a specific Bugzilla, and you can see a CVE was fixed; again, keeping you safe all the time.

Let's go back to my application, the green one. It didn't like the build process, so I'm just going to start the build again; for some reason it didn't like that one, so I'll just create a new one here. While it does that, I'm going to show you another very cool thing; I have about eight minutes left, but this is very nice. It's the concept of health checks: verifications on your application that run before OpenShift does something with it. So we have our very nice Node.js application here, and I can see that this application has no health checks. I'm about to add some, and the sketch below shows roughly what they look like.
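A sketch of readiness and liveness probes as fields on the container spec; the path, delays, and timeouts here are assumptions, roughly matching what I'll pick in the console:

```yaml
# Sketch of the probes added in the demo; path and timings are assumed.
readinessProbe:             # gate: only add the pod to the router once this passes
  httpGet:
    path: /
    port: 8080
  periodSeconds: 10
livenessProbe:              # if this fails, kill the pod and start a new one
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  timeoutSeconds: 1
```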
Out of the box, a container just doesn't have any health checks, and OpenShift and Kubernetes use health checks to make sure your application is in a healthy state. For example, you only want to add your application to the router, to the load balancer, after it has passed a health check: I want to make sure the application is running before I add it to the router, and before I do a rolling deployment. There are several things it can do: a pre-start check, and also a check that runs every so many seconds.

So let's add health checks to this application, and the first one is a very quick one: a readiness probe. It just queries an endpoint, and if I get a response from that endpoint, like a 200 OK, it means my application is ready. Before OpenShift adds your application to the router, it verifies: are you good to receive requests? We see people using that, for example, for cache warm-up: you want your cache warmed up first, so you add a readiness probe. In this case I'm just going to do a GET on port 8080, with no particular timeout. I'm also going to add a liveness probe, which verifies that the application is still running, with an initial delay and a timeout. All right, so now my application has health checks.

Because I changed the configuration of my application, remember, when we change configuration we deploy again, so it kicked off a new deployment; now I have the third version of my application's deployment running. And what happens here is an interesting case: before adding the application to the route, it actually waits a little bit longer. If I didn't have the health checks, it would see that the container was running and make it addressable right away, and if I went to the URL I could possibly not get the complete experience. Now that I've added a health check, it was only added to the route once the check passed. So I can come here and see that my application is running.

The other application I created for my A/B test is here, the green one; if we open it, we see it has a green background, because I want to show you blue-green. To test this, I'm going to bring up an incognito tab and disable cookies in this browser. The reason is that we have session affinity, implemented with cookies, so with cookies enabled I wouldn't be able to show you A/B. Let's leave that tab there and go back to my blue-green deployment. It's very easy: I take the application I consider the main one, in my case my-js-app, and I change the route configuration for it to add another deployment to that same route. Note that I'm not changing the deployment or the build, only the route configuration, and I'm going to split traffic for that route across the two applications. So I chose the green one, and here I say what balance I want; the resulting route looks roughly like the sketch below.
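A sketch of the route once the traffic split is in place; the service names, the 50/50 weights I'm about to pick, and the optional affinity annotation are assumptions of mine:

```yaml
# Sketch of a route splitting traffic between two versions; names assumed.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-js-app
  annotations:
    haproxy.router.openshift.io/disable_cookies: "true"  # optionally turn off cookie-based session affinity
spec:
  to:
    kind: Service
    name: my-js-app            # the original "blue" version
    weight: 50
  alternateBackends:
  - kind: Service
    name: my-js-app-green      # hypothetical name for the green version
    weight: 50
```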
Normally for A/B deployments you want to test a feature or a capability, so you'd assign, say, 1% or 2% of the traffic. Just for the sake of this demo, I'm assigning 50% of the traffic to each version of the application. I save this, and it gives me a very nice graph showing 50% of traffic going to the green app and 50% going to the JS app. This is, again, done at the router level; the router is already configured to do it. That means that when I copy this URL into the incognito tab with no cookies and refresh... oops, my cookie thing is not working... there we go: every time I refresh, I get a different version. It's the same URL, but traffic is being sent to different applications. So let's say you want to test a feature, or you have a marketing campaign, something you want to test on only a percentage of your users: this is how you do it.

So these are the capabilities I wanted to show you in OpenShift. I showed building an application from source code, scaling using the autoscaler, setting resource limits, health checks, and A/B, blue-green deployments on OpenShift. And my time is done, so thanks very much for your presence here. Thank you.