I love working there. So the Mac should always work out of the box. I'll try the VGA; I got an adapter for VGA. Okay, do whatever you want. Five minutes? So it's 25. At 50, just show me the five minutes. Thank you all for being at my talk. It's great, so many people. Some of them I paid, but I don't know if this works. So, who am I? I'm Jorge Morales. I'm a developer advocate for the OpenShift team. I come from Spain, that's why my weird accent, and these are the places where you can find my work. Okay. So application development is changing. It's been changing lately. We have been going through this shift from legacy to this microservices thing. Developers have also been changing how they package applications. They have been packaging them as archives and as VMs, deploying VMs; now they are starting to deploy applications in containers. And the application operational environments are also changing a lot; we are seeing a move into the cloud. As developers, when we write our applications we now need to think about a lot of things that we usually didn't think about that much, because those were more the operational types of things. And the development methodologies have also changed. We are in a moment where DevOps is really important; as developers, we are getting a lot of visibility into the operational platform our applications run on. How are we at Red Hat supporting all of these changes? Through a set of open source technologies, and in the middle of them sits Kubernetes, which is the most important one. Docker provided us the container format for packaging and running applications in containers, and Kubernetes has given us the ability to run them at scale. And Red Hat supports all of that through OpenShift. But the new developers are not just simple developers anymore.
That means that as developers you need to understand and focus on some other things. You need to understand how your application will be running, because there will be some things that are now the responsibility of the developers. If you think about what types of workloads you can have in OpenShift, there are multiple types of workloads, multiple ways you can deploy your application. We'll focus on replica sets, which is deploying your application instances as cattle rather than pets. That means that they will be replaceable, which usually gives you the possibility of having your application running while replacing it frequently. This type of workload gets described in OpenShift in a thing called a deployment config; in Kubernetes that's a deployment. It provides some functionality: the number of instances, the description of what the application will look like, the containers that make up your application. And one of the most important things is how you will be rolling out new versions of your application. If you think as an application developer, in this DevOps world I want to deploy my applications frequently, I will be making changes very constantly, and I want to roll them out to production without the users noticing; so the strategy is a key thing that you need to understand. There are three different types of strategies that the platform supports out of the box: recreate, rolling, and custom. The recreate strategy will just tear down all the instances of your application and then deploy the new version of your application. Of course, this type of strategy doesn't allow for zero downtime, because you will just be tearing down whatever you have and bringing it back up; there will be some time when your application will not be live. There is another strategy that is rolling deployment.
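In a deployment config, the strategy, the replica count, and the timing knobs come together roughly like this. A minimal sketch; the name, image, ports, and numbers are all assumptions, not from the talk:

```yaml
# Hypothetical OpenShift DeploymentConfig sketch
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: hello-app                 # assumed application name
spec:
  replicas: 3                     # number of instances
  selector:
    app: hello-app
  strategy:
    type: Rolling                 # or Recreate; Rolling allows zero downtime
    rollingParams:
      maxSurge: 1                 # extra new-version pods during the rollout
      maxUnavailable: 0           # never drop below the desired replica count
      timeoutSeconds: 600         # how long to wait for a new pod to be ready
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      terminationGracePeriodSeconds: 30   # time to shut down before SIGKILL
      containers:
      - name: hello-app
        image: example/hello-app:latest   # placeholder image
        ports:
        - containerPort: 8080
```

Switching `strategy.type` to `Recreate` gives the tear-down-then-start behavior described above.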
That means that it will gradually introduce instances of the new version of your application, and once the new version is up and running it will tear down the old version. So it will gradually move from one version of the application to the next one. Of course, there are a lot of additional capabilities, like giving you the power to roll back on an error and things like that. Does this strategy allow you to do zero downtime? Yes, it does. And there is also a different strategy, custom, where in the end you define how the rollout is done. So depending on how you create that strategy, you will be able to achieve zero downtime for your application or not. But what's the problem now? The problem is that when a container starts in OpenShift, that's not really when your application starts. There is a time after the container is ready when your application may still be starting up, and the platform needs to understand the difference between the container start and the application start. The rolling deployment strategy, as it waits for the new pods, needs to understand when the application is really up and functional to do that rolling deployment. For that it uses two kinds of health checks. The readiness probe tells the platform when the application is ready to accept requests. That means that if for any reason your application breaks, the platform will just take it out of service, but the application will still be live. And the liveness probe is a different type of check: it tells the platform when your application is healthy. If the application is not healthy, the platform will replace it with a new instance. Even when you are using raw replication controllers, if the liveness check fails, the platform will just spin up a new instance of your application. These checks can be made in three different ways.
It can be an HTTP check: the platform will do a query to your application, and if it gets an HTTP response code between 200 and 399, it's okay; otherwise it's treated as a failure. There is a container execution check, which means that the platform will just execute a command in your container. So you can provide a script, or anything in your container, that the platform will run to answer whether your application is healthy or ready to accept requests. Because it's a command, it uses the zero return code for success and anything else for failure. And there are the TCP socket checks: if there is a port listening, that means that maybe my application is ready. So here is how it works. For the readiness probe there is a lot of configuration, but the readiness probe just says when my application is ready to accept requests. If my application fails a certain number of readiness checks, which I configure, it will be taken out of service. It will not be redeployed; it will only be taken out of service. The platform will keep checking your application constantly, and when the readiness check succeeds again, the application will be taken back into service. Here is a graphical representation with load-balanced services. You have, say, three instances of your application; if the check for one of the instances fails, what will happen is that no requests will go to that instance, but it will still be live. The instance will be live, the platform will keep querying your application until it's ready, and whenever it's ready it will put it back into service.
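The three check types described above are configured per container. A sketch, where the endpoint paths, ports, script name, and timings are assumptions:

```yaml
# Container-level probe sketch (hypothetical paths and timings)
containers:
- name: hello-app
  image: example/hello-app:latest
  readinessProbe:                # failing pod is taken out of service, not restarted
    httpGet:                     # HTTP check: 200-399 counts as success
      path: /health/ready
      port: 8080
    periodSeconds: 5
    failureThreshold: 3          # failed checks before removal from service
  livenessProbe:                 # failing container gets replaced
    exec:                        # container execution check: exit code 0 = healthy
      command: ["/bin/sh", "-c", "/opt/app/health.sh"]
    initialDelaySeconds: 30      # give the application time to start
  # The TCP socket variant would instead be:
  # livenessProbe:
  #   tcpSocket:
  #     port: 8080
```

Note how `initialDelaySeconds` is what bridges the gap between "container started" and "application started" that the rolling strategy needs to understand.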
The liveness probe, on the other hand: if there is a failure on the liveness probe, the platform will say this application is not functional, it cannot or will not recover from the problem, so it will just spin up a new instance of that application. But there is a problem whenever your instances get killed: usually, when you are using for example application servers, when you are using databases, you need time to release the resources you are using in a safe way, so they can be reused, so they can be properly terminated. So whenever you create your application, you need to take into account that there is a period of time between getting the signal that you are going to be terminated, so you stop accepting requests, and finishing releasing your resources safely; there is a period of time when your application will still be up but without requests, so you can close all of that. This is something that you need to take into account. And of course, because the application is running in a container, you need to propagate the signals: the TERM signal that the platform sends to your container needs to reach your process, so you should use exec to propagate that signal to the started process. How this works: whenever the container gets a SIGTERM signal, the pod is taken out of service after the configured number of failed readiness checks, the container starts stopping, and after some time, if the process has not stopped gracefully, it will be killed. On top of all of that, OpenShift deployments provide additional functionality: whenever you are doing this through deployments, it will be done in a continuous way. Okay, so we will put all of that previous information to work. When there is a trigger for a
redeployment, either because you want to release a new version or because the platform wants to redeploy based on something you have done, at some point in time the redeployment process starts. The first thing that will happen is that a new instance of the new version of the application gets started; that's the upper graph. Once that application has started and the readiness checks succeed, the old version is removed from service, so no new requests come in to it, and eventually the old version gets released. There is a period of time where the platform waits until that version is released, so you need to take into account all the timings involved in this process, and you need to take into account how you configure this as an application developer, so that your application can stay constantly up whenever you redeploy it. For example, there is a short time where two versions are available. Could I have two sockets listening on the same port? No, these are different instances. That means that as one instance comes up, the other instance goes down; so from this point in time, the moment you get a new request it will always go to the new instance, the one which is up. Of course, all of the client sockets that you already have open to the old instance will still be there, and your application has time to stop gracefully; if you are in the middle of a transaction, you have time for that transaction to finish. And this is usually the graph that you will see whenever you are doing a continuous test during a redeployment: at this point in time you see the redeployment of a new version happen, and all the result codes from querying the application constantly are 200. So I will test this, I will show it. I
start a local cluster. Is this big enough? I am using oc cluster up locally on my laptop, with a script to make it more developer friendly. I have a cluster called devconf, so I will start it up. Once the cluster is up and the application is deployed, which takes into account in its code everything about how to release the application and creates all these probes, the platform will be able to handle the redeployment, and I have three instances to show this. The sample is a really simple example, but that makes showing this really easy; it's just a hello world. What I have is SoapUI running somewhere with a load test: a number of threads, 10 threads, that will be querying constantly without delay, so this will be more frequent, and the load strategy is Thread. As I started this test, I have some assertions to validate that what I am getting is the output of the application, so if at any point in time any of the requests fails to get a 200 HTTP status, or doesn't get the hello world internally, it will just show as an error. So I can now do a redeployment. To do a redeployment I can just redeploy the same version of the application, just as an example, or I could deploy a new version of the application. I will just redeploy, and I will do it from the console, because the command changed in the last release and I haven't tested it. You go here and we just deploy the version, and if we watch the pods we will see some pods creating, and as soon as we have one new pod running there will be one terminating. There is one running and one terminating. Why do we see some of them terminating, and keep terminating for a while? Because of the graceful shutdown: I am not killing the old application instantly, I have to give it room to die gracefully and release all its resources. So right now I have three instances running, two of them have already terminated, and
the third one will terminate soon. If I go back to SoapUI, I should see that there were no errors, in this column, which is not big enough, but there are no errors. So I did an update of my application without the clients noticing; we achieved zero downtime. What if there is persistence? This all looks fine for microservices, for stateless services, but what if there is persistence? As a developer, what you need to take into account is that if you are also modifying something related to persistence, like a configuration or a database, you need to try to design all of that to be backwards compatible. And not only backwards compatible: you also need to take into account that your upgrades may be rolled back, and whenever you roll back your application you may not be rolling back your database, so changes to schemas need to be compatible. What if those changes cannot be compatible, or you need to change the database schema as well? If it's compatible, you can use pre-deployment hooks, which come with the deployment and will make the change to the database just before you update the application itself. You can use blue-green deployments, or you can use an orchestrated pipeline where you include, for example, a job that does the database update, and in the pipeline, if there is a rollback, you can also include a job to do the rollback of the database. If there is a change that is not backwards compatible, of course, you will not be able to achieve zero downtime in any way; there will be some gaps, but that is not the topic of this talk. So, will you be able to achieve zero downtime if you are changing the database? It depends, but probably. So remember: as a developer you need to choose the appropriate strategy. Usually for development it's very handy to use recreate, because you will use fewer resources and it will be faster, but whenever you want zero downtime you always need a rolling
update. You need to create health checks, so your application needs to have specific endpoints providing the liveness of your application, and it needs to have them available, because otherwise every time you update your application you will have failures for your customers, for your clients. You need to propagate the signals to the internal process and provide graceful shutdown, so you can release all the resources. Yes, so the question is whether all the images that OpenShift provides have readiness checks. Maybe not the Python one; the SCL images don't. The languages don't, but the products do: WildFly, EAP, those provide these. The language images don't, because in the end what you will be providing is your specific application; but if you think about the checks, the HTTP and socket ones don't really need anything specific from your application. Of course, for HTTP what you need is to create an endpoint that provides that logic; if you go for the container execution check, that means you need a script in your image providing the healthiness of your application. If there is data involved, try to make the updates backwards compatible. That means that if you break the schema, or if you do something in SQL that is not compatible, you may be able to roll forward but you may not be able to roll back; so don't do that. Always keep in mind that rolling forward is as important as rolling back. So test under load: test the roll forward under load, as well as the rollback. The rollback is also very important for development. And automate everything if you can, through the pipelines that the platform provides. These are the resources; the GitHub project is over there, and this is the script that my team has created to make oc cluster up developer friendly. Okay, so, any questions? Sorry, that depends on the application
actually. With WebSockets, what you will need is for your client side to reconnect, because WebSockets are persistent connections, so on updates that connection will break. You will need to take into account reconnecting in the application, which may be transparent or not, depending on what it is doing at the time. When does the platform start serving based on the liveness probe? In a redeployment, it is the readiness probe that gets the pod into service; both probes can run at the same time or with different timings. Your endpoint will be hit when it's ready: with the readiness probe, the endpoint gets added to the service endpoint load balancing. Usually what I've seen is that most people use the same probe for readiness and for liveness. Most of the templates that we provide have that out of the box; EAP, for example, has an internal script doing the checks using the CLI. Any more questions? That was the last one, so thank you. Did you copy the presentation to the USB? No? That's for the website.
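A recurring point in the talk is propagating the platform's SIGTERM to the application process by using exec in the container entrypoint. A minimal sketch of such a wrapper; the file path and the wrapped command are assumptions for illustration:

```shell
# Write a minimal entrypoint wrapper. The key line is `exec "$@"`:
# exec replaces the wrapper shell with the application process, so the
# SIGTERM the platform sends to the container's main process reaches the
# application directly instead of dying with an intermediate shell.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
# Any setup (environment, config rendering) would go here, then:
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# Demonstration: the wrapped command runs as the entrypoint process itself.
/tmp/entrypoint.sh echo "app started"   # prints: app started
```

In a real image this would be the Dockerfile's `ENTRYPOINT`, with the application server as the wrapped command.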