Hello, everyone. My name is Shiyang, and my colleague here is Yu Xiangzhu; we're both from the Beijing office. Today's topic is continuous delivery with OpenShift. If you are a developer, you must be familiar with CI, CD, and maybe OpenShift, so today we want to talk about how we at Red Hat use continuous delivery together with OpenShift. The presentation is split into two parts. In the first part I want to talk about the problems we had before moving everything onto OpenShift. In the second part my colleague Yu Xiangzhu, who is from PnT DevOps, will talk about how we solved those problems. So without further ado, let's jump into the problems. There are two major problems, and I will break them down from different perspectives. The first one relates to environments. In a traditional environment, bugs cannot be reproduced easily, because for historical reasons, in some of our internal projects at Red Hat, it is hard to keep the dev, QE, stage, and production environments similar. Here's a story. A developer makes a code change, maybe a feature implementation; he writes the unit tests, functional tests, and integration tests; everything compiles, passes CI/CD, and gets released. Then it becomes the quality engineer's responsibility. The QE fetches the release, maybe as an RPM, but unfortunately he finds a bug. That's a little sad. And because of the precondition I mentioned, the developers and the QEs are not sharing the same environments; their environments are different. So when the QE submits a bug, maybe on Bugzilla, it is hard for the developer to reproduce. This can be a severe problem. Sometimes it takes four or five days of back and forth, the QE and the developer collaborating to troubleshoot and figure out whether the issue is related to the environment or to the code change.
Here comes the second one: traditional environments are hard to clone, provide, rebuild, or deliver as an identical or mirror environment. Your customer may want to clone your environment for testing. You may also want to clone all the service dependencies of your project, which I will talk about a little later, for other teams. And in an extreme condition, what if your environment is lost due to a hardware failure? Is there an easy way for you to rebuild it? Previously, some teams had really long documentation, steps one, two, three, four, five, telling you how to rebuild, clone, or set up the correct environment. Some teams did even better and automated it with scripts and automation tools, but each team reinvented the wheel, which is not really good. And even when there is a high-level decision to consolidate all those tools into one, that one tool is often hard to fit every team's needs. So is there a more generic way to solve this problem, even without writing any tools or code? The third environment problem is that traditional environments sometimes lower performance and resource utilization. Many of your teams or projects may already have CI pipelines, and some of them are on Jenkins. For various reasons, you might have a fixed number of slaves, and this is not really flexible. If you want to run, let's say, a destructive case, and your slave is a VM, you have to wait for the previous person to finish their tests and recover the whole environment before you can occupy the machine. But ideally, at Red Hat we want to use each machine as much as possible: run multiple jobs on one machine, VM or bare metal, simultaneously and in parallel, and use as few resources as possible.
The second problem I want to talk about is service dependencies. When I say service dependency, I don't mean the OpenShift Service object, or how good the waiter or waitress service was at dinner last night. I mean microservices, which are a best practice for deploying your application in a cloud-native way. So imagine a huge company: as your projects get bigger and bigger, the number of microservices keeps increasing. Your service depends on someone else's service, and someone else's service may depend on yours. How do you make sure an upgrade of one service will not break the others? Let me explain it more simply. Say we have two teams, one working on a downstream component and one on an upstream component; you can also think of them as two services that depend on each other. They are both pretty productive, making code changes frequently, maybe every day or every few hours. But due to the system complexity and the many interleaved service dependencies, a release can be really hard, because you have to make sure all those components, all those dependencies, pass CI. It's a complicated release process, and it can take four or five days. That delay means your code changes accumulate for four or five days, so each release carries a lot of changes. This is not good, and it makes the release a high-risk task, because according to the philosophy and best practice of continuous delivery, we want each push to be as small as possible, so you can isolate the problem and troubleshoot more easily. And second, we want to release as frequently as possible, even every minute, so you can roll out and find problems as soon as possible.
So with all those problems, how do we solve them? My colleague Yu Xiangzhu from PnT DevOps will talk about how we solved them at Red Hat. Thanks, Shiyang. I want to share some experience from Red Hat about how we use the development practice of continuous delivery on OpenShift to solve all the problems Shiyang just mentioned. Before we talk about the concept of continuous delivery, let's think about the concept of continuous integration, or just CI. I think most of you have probably already heard this term, or your team already has some CI pipelines running every day. Let's recall what things were like without CI. In the old days, some development teams had a role some people called the build person. You have a development team, maybe five or more developers working on a project. When your manager asks you to release a new version, the build person gathers the files from every developer and merges them together, and hopefully the shared code base builds and works. If anything goes wrong, you probably need a very long time to figure out what went wrong. It's a slow procedure; it may take several weeks before you can actually get your build to succeed. With continuous integration, we require every developer to integrate their changes into a shared repository as soon as possible, maybe several times a day. This encourages developers to make small changes to the source code and test them against the whole project. So if anything goes wrong, you can easily figure out what the problem is and fix it. That's the idea of continuous integration.
But the problem is, even with CI, although your shared source code is buildable and even works perfectly on a developer's workstation or in the development environment, nobody has the confidence that it will be fine if you really put the new version of your software into the production environment. The developers are always eager to deploy their new features to production, but the operators may not be happy, because nobody knows what we are getting. So now the concept of continuous delivery comes into play. Continuous delivery is also a development practice, an extension of the concept of CI, which requires developers to rapidly deliver their changes to a production-like environment and run automated tests to ensure that if we deploy the new version of the software into real production, it will be fine. Here I want to make a clear distinction between two very similar terms: one is continuous delivery, the other is continuous deployment. In continuous deployment, every change from a developer is constantly deployed into production automatically. Some internet companies like Facebook have had continuous deployment in practice for a long time. In continuous delivery, we can actually do that, but we may choose not to; it's more of a business decision. We can deploy our shared code base into production if we want, but we may choose not to. That's the difference between continuous delivery and continuous deployment. So how do we implement continuous delivery? I would say the key ingredient of continuous delivery is automation. Basically, you need to automate everything you can imagine: automated builds, automated tests, all the scripts to deploy your application into different environments, automatic configuration, database setup, machine provisioning, everything. It's a lot of work, and it's actually not easy to automate everything. But today we have OpenShift.
In the following section I will introduce some features and advantages that OpenShift brings to help you implement your continuous delivery pipelines. In OpenShift, your application is built, shipped, and managed in the form of containers and container images. The container image has your program and all its dependencies, the whole filesystem, inside the image, so you can actually minimize the difference between different environments and different deployments. OpenShift also has a feature called OpenShift templates. With a template, you can describe all your environments in one file. If people from other teams want to clone your environment, they can just use your template to start a new deployment with exactly the same configuration, just like one click. And you can leverage all the hardware resources of the whole cluster, so you don't need to worry about the hardware, about provisioning machines, or about upgrading your hardware configuration; it's all automatic. The most exciting thing is that OpenShift has a feature called OpenShift Pipeline, which can help you implement your CD pipelines in a manageable and fancy way. OpenShift Pipeline is something that brings the power of Jenkins into OpenShift. Jenkins is a very powerful CI/CD tool, and it has a feature called Jenkins Pipeline, which allows you to write your pipeline code in a text file called a Jenkinsfile. OpenShift extends this technology by providing a new domain-specific language, so in your Jenkinsfile, written in the Groovy language, you can use the OpenShift DSL to make your pipeline code interact with the OpenShift API server. And once you create a new OpenShift pipeline job, a Jenkins job will be automatically created and linked to the OpenShift job; it has a built-in synchronization mechanism. Here's a real example.
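To give a feel for that DSL, here is a minimal sketch of a pipeline stage using the OpenShift client plugin from a Jenkinsfile; the project name "myapp-dev" and the build config name "myapp" are hypothetical placeholders, not from the talk.

```groovy
// A sketch of one Jenkinsfile stage that talks to the OpenShift API
// through the OpenShift client plugin DSL, instead of shelling out to oc.
stage('Build image') {
    steps {
        script {
            openshift.withCluster() {                 // use the cluster Jenkins runs in
                openshift.withProject('myapp-dev') {  // switch to the target project
                    // trigger a new build from the BuildConfig and follow its logs
                    def build = openshift.selector('bc', 'myapp').startBuild()
                    build.logs('-f')
                }
            }
        }
    }
}
```

The selector/startBuild calls return objects you can keep querying, which is what makes this more convenient than parsing `oc` output by hand.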
At Red Hat, we implemented a CD pipeline for WaiverDB. It's an open source project hosted on Pagure; if you're interested in the CD pipeline implementation, you can go to pagure.io and search for WaiverDB. Here's the high-level workflow of the CD pipeline. Every time a developer submits a change to the SCM, the change goes through the whole pipeline. The pipeline first performs some basic checks and runs unit tests to ensure your code looks good, then builds RPMs and runs RPM checks. Finally, the pipeline builds a container image from the source code. The image is then deployed into a temporary environment, where we run functional tests and other higher-level tests against it. If all tests pass, we tag the image as latest and push it to the container registry. When the image is tagged as latest, we have the confidence that the image is good enough to deploy into the development environment. The next steps promote this container image into the stage environment, and finally the production environment. To promote an image from dev to stage, we pull the dev version of your application and the stage versions of all its dependencies, then run end-to-end tests and other high-level integration tests to ensure that once you promote the image from dev to stage, you won't break anything. A similar procedure happens when you promote the image from stage into production, but we introduce a switch here so that a human can decide whether to really promote a stage image to production. If you remove this switch, it's continuous deployment; if you keep it, it's continuous delivery. This is how the pipeline looks in the OpenShift dashboard: if you create an OpenShift pipeline job and click the build pipeline button, you will see all the pipeline builds.
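The human switch in that promotion step can be sketched roughly like this in a Jenkinsfile; the image stream names "myapp:stage" and "myapp:prod" are hypothetical, and the real WaiverDB pipeline is more elaborate.

```groovy
// Sketch of the stage -> production gate described above.
stage('Promote to production') {
    steps {
        // the "switch": pause the pipeline until a human approves
        input message: 'Promote the stage image to production?'
        script {
            openshift.withCluster() {
                // re-tag the approved stage image as the production image
                openshift.tag('myapp:stage', 'myapp:prod')
            }
        }
    }
}
```

Removing the `input` step turns this exact stage into continuous deployment; keeping it is what makes the workflow continuous delivery.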
And if you click into a pipeline build, you can see all the stages in your pipeline, like this. So now, how do you write your OpenShift pipeline? We need to introduce the Jenkinsfile first. A Jenkinsfile is basically a text file written in the Groovy programming language; it's a feature provided by Jenkins. You create a text file like this: the agent part specifies which node you want to run your pipeline on, and in the stages part you define the different stages of your pipeline. In every stage, you write some Groovy source code for your pipeline workflow. In OpenShift, you can then create a build config, set the build strategy to the Jenkins pipeline strategy, and put your Jenkinsfile there. Actually, you have two options. One option is to copy the content of the Jenkinsfile into the build config object. The other option is to store the Jenkinsfile in an SCM, like GitHub, and reference it from the external Git repo. Here I have an example build config; you can download my slides and take a look. If you have a running OpenShift cluster, you can add the YAML file to your cluster and click the build button. Let's see how it works. This is the YAML file. You can see we include the Jenkinsfile here, and we define the agent. The agent is actually an OpenShift pod template. The metadata part gives the name of your OpenShift pipeline job, and here you define the pod template, so your pipeline will run in a pod on OpenShift. This feature is supported by a Jenkins plugin. Then you add this build config to OpenShift, and in the OpenShift dashboard you can see a job is created, like this. You can click the start pipeline button, or use the command line, oc start-build with the pipeline job name, to start a new build of your OpenShift pipeline job. And here a new build has started. When you click the view log button, OpenShift will navigate you to the Jenkins master page.
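The shape of such a build config, with the first option (Jenkinsfile embedded inline), looks roughly like this; the job name and the pipeline body are placeholders, not the example from the slides.

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-pipeline            # name of the OpenShift pipeline job
spec:
  strategy:
    type: JenkinsPipeline         # use the Jenkins pipeline build strategy
    jenkinsPipelineStrategy:
      # Option 1: embed the Jenkinsfile content directly in the object
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps { echo 'building...' }
            }
          }
        }
  # Option 2 (instead of the inline jenkinsfile above): point
  # spec.source.git.uri at your repo and set
  # jenkinsPipelineStrategy.jenkinsfilePath to the file's path in it.
```

Creating this object is what makes the linked Jenkins master automatically grow a matching Jenkins job.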
So you can see how your pipeline runs in Jenkins. OpenShift automatically starts a new pod based on the pod template you provided and runs all the steps inside that pod. Going back to the OpenShift dashboard, you can see all the steps here. Because your OpenShift pipeline job is configured as an OpenShift object in the form of YAML or JSON, you can actually use an OpenShift template to package your pipeline job as an OpenShift application. With this approach, you give your team members and downstream developers the flexibility to deploy their own pipeline jobs for their application forks. With OpenShift Pipeline, every time you trigger a new build of your job, a new pod is created and provisioned, so you can run multiple builds of your pipeline in parallel; you don't need to worry about someone else occupying the machine. When you have a very long and complex pipeline, this dramatically speeds up your development cycle. And with this approach, all your pipeline code, secrets, and the whole build environment are kept in SCM, so you don't need to worry that a failure will cost you your build and test environment. You can store your OpenShift pipeline jobs as build config YAMLs in SCM, and your credentials can be kept as OpenShift secrets. If you create a generic OpenShift secret with the right label, the secret will be automatically synced to the Jenkins master. So you don't need to back up the data on the Jenkins master; all you need is to keep your YAML files and you will have everything. Let me summarize all the benefits we get with OpenShift Pipeline. You can use the OpenShift pipeline DSL to interact with the OpenShift API server without calling the oc command and parsing its output manually. And you can use the Kubernetes plugin to run your pipeline's Jenkins slaves as OpenShift pods, so you don't need to worry about provisioning new machines to run your pipeline.
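As a sketch of the credential syncing just mentioned: the OpenShift Jenkins sync plugin watches for a specific label on secrets and mirrors them into Jenkins as credentials. The secret name and values below are made-up placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-registry-token      # hypothetical credential name
  labels:
    # the sync plugin picks up secrets carrying this label and
    # creates a matching credential on the Jenkins master
    credential.sync.jenkins.openshift.io: "true"
type: kubernetes.io/basic-auth
stringData:
  username: builder
  password: not-a-real-password
```

Because the credential lives as an OpenShift object, it can be versioned and restored along with the build config YAMLs, which is why the Jenkins master itself needs no backup.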
And you can define OpenShift templates for all the environments of your application, so you don't need to worry about an environment being destroyed or lost. That's all we have for today; it's Q&A time. Thank you, guys. If you have any questions, please just ask. Can we use GitLab instead of Jenkins? You mean GitLab? GitLab CI? Oh, GitLab CI. Actually, this feature is highly integrated with Jenkins. Every time you create a new build config, OpenShift automatically creates a Jenkins job on the linked Jenkins master. So as of now, the pipeline actually runs in Jenkins, and Jenkins runs as OpenShift pods. So your question is whether we know how the pipeline works across different namespaces or projects? Yeah. Okay, so your question is, if you have many projects, how do you deal with that situation? Actually, the OpenShift build config is a namespaced resource, so when you create a pipeline in a project, OpenShift looks for a service called jenkins inside that project. If you have defined such an OpenShift service in your project, it finds the Jenkins master using the endpoints of that service. So you can have one Jenkins master with a service inside every project, all linked to that same Jenkins master, and the jobs will be created on it; or you can have different Jenkins masters. OpenShift just looks for the OpenShift service called jenkins to find the Jenkins master. Can a build in one project trigger, on a successful run, a build in a different project? Like, once the dev build passes and you want to push that image to stage, do you have to trigger the stage build? So the question is: in the promotion workflow, if we push the image to dev, how do we trigger the next stage?
So currently, in our internal project, we have a microservice called repotracker which tracks registry tag changes. If there is a new tag change, like the latest tag moving to another container image, we will know, and we set up a trigger to trigger the next stage. What about the results of the tests? Where are they kept? Is there some archive place for the Jenkins slaves, in NFS or somewhere else, to keep the history? So your question is where we keep the test logs and other results. The logs are actually still in Jenkins, on the Jenkins master. The Jenkins slave is ephemeral, so when the build is finished the slave is killed, but the Jenkins master has persistent storage configured, so everything on the Jenkins master is persistent. In Jenkins Pipeline there is a step called archiveArtifacts to copy content from your Jenkins slave to the master. The master is also a pod, but in OpenShift you can create a persistent volume claim and mount the volume into Jenkins, so all the files are persisted in the persistent volume. If you kill the Jenkins master, OpenShift will start a new pod for the new Jenkins master, but it will still mount the same storage, so everything is kept. You mean the two stages, when one is finished? Yeah, in Jenkins Pipeline you define something like this; this is taken from the official Jenkins Pipeline documentation. If you write stages in the stages section, stage one, stage two, these are called sequential stages: once the first stage is done, Jenkins automatically starts the next stage. And there is another form of stages called parallel stages, so you can define multiple stages that start concurrently. So, yeah. Yeah, yeah. Troubleshooting.
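The sequential-versus-parallel distinction from that answer can be sketched in a declarative Jenkinsfile like this; the stage names and echo steps are placeholders.

```groovy
// Sketch: sequential stages run one after another; stages nested
// under a parallel block start concurrently.
pipeline {
    agent any
    stages {
        stage('Build') {                 // sequential: runs first
            steps { echo 'build' }
        }
        stage('Tests') {                 // starts only after Build finishes
            parallel {                   // its child stages run concurrently
                stage('Unit tests') {
                    steps { echo 'unit' }
                }
                stage('Functional tests') {
                    steps { echo 'functional' }
                }
            }
        }
    }
}
```

On OpenShift, each concurrent branch can get its own ephemeral slave pod, which is what makes parallel stages cheap to run.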
So the question is: with the pipeline and the deployments, your applications run in OpenShift pods, and sometimes your job fails and the pod is killed, so how do you do troubleshooting? There are many ways. Firstly, in your Jenkins pipeline you can write logs to the Jenkins console, and in most situations you can figure out what went wrong based on the logs. There are other tricks too. For example, if you don't want the Jenkins slave to be killed, then when you find that a pipeline build failed, you can use the replay functionality in Jenkins to rerun the build with exactly the same parameters; you can actually change the pipeline code on the Jenkins replay page and add something like a sleep to keep the pod running. And I want to add one more thing. I'm one of the CLI guys, so I know there's an oc debug command which can help a little bit, and there are also a lot of great CLI tools for you to debug. I just mentioned some tools OpenShift provides to help you debug problems inside OpenShift, but that's a more generic topic. Thank you, I have a question.