Wait, can you guys hear me? So I thought it would be a talk, but it's going to be a conversation, so let's get started. My name is Samvaran Kashyap Rallabandi; I go by SK. On the Red Hat internal IRC my nick is SK, and on Freenode I am samvaran. I work on the Continuous Productization team, and my work mostly revolves around OpenShift: building CI pipelines, and building tools to optimize those pipelines.

As you can see, the talk is "OpenShift CI pipelines for dummies", and I don't assume any knowledge of OpenShift or containers, or even of software per se, although I guess you might know about software since you are at DevConf. Our agenda: what software is and what the problems with it are; version control and why we need it; common terminology you come across when you start with CI pipelines, such as continuous integration and continuous delivery; how we distribute code on PyPI; and how to build your own pipeline using Jenkins on OpenShift. That will be followed by a small demo, which actually needs 20 minutes, but I'll try to fast-forward it for the talk.

Going ahead: what is software? Since this is a conversation, do you guys have any definitions of software? Yeah, that's from Wikipedia; it doesn't count. In my opinion, software is just code written in a file, which runs on my computer or on any other device that has computing power. It can be in my washing machine; it can be a television. Thanks to Samsung and the other tech companies, software is running everywhere. So that's how software is defined.
Ultimately it comes down to a piece of code, a set of instructions that you tell a machine to work on. And these are the usual problems with software. First: does it install properly? Second: does it install properly on my machine? You might be working on MacBooks, on Fedora, or on any other machine. Then: does it work, and does it really work? There is a subtle difference between those two, because each person using a piece of software has their own use case. If you take a paint program, a child would use it for painting random diagrams, while a professional would use it for other things, like clipping or cropping. So when we ask "does it really work?", we are asking whether the software works for many use cases. And then: where do we store the software, how do we store it, and how do we verify all the things the previous questions mention?

Another nightmare, one that sysadmins see every night; are any of you sysadmins, by any chance? Okay, have you ever got "it works on my machine" from one of your customers? That happens to me all the time, because I'm one of the maintainers of a project called LinchPin, and most of the GitHub issues we see say "it doesn't work on my machine". Some people say it works on Fedora but not on CentOS, but we want to make it work on CentOS. That is the usual nightmare we have to deal with. The next big question is: where do we store the software?
There are many options for storing software. During my undergrad days I used to mail my code, which I'm totally embarrassed about; we didn't have GitHub then, or maybe I just wasn't aware of version control systems at that moment. We used to mail code around using Gmail, but thanks to Gmail they have started blocking code in zip files these days, so that is not an option anymore. Later we started using traditional storage providers like Google Drive, Dropbox, and OneDrive, but the inherent problem with all of these is that we couldn't maintain versions of the software. For example, say I made a change ten days back and I want to get that change back right now. With Google Drive I can't, because every time I upload my files it simply overwrites them. Google Docs is the exception; it has recently introduced versioning where you can go through different revisions, but that still didn't feel like a proper way to store my code. And then came Git.

I just wanted to share this man page of Git, where it calls itself "git - the stupid content tracker". It's not stupid anymore; it's the best content tracking, or version control, software I have ever experienced. So how does it work?
It has many, many features, but all I do is memorize four or five commands: git pull, git add, git commit, git push. These are the four commands you need to know in order to maintain your software with Git. There's an interesting joke definition of version control: "the basic idea is the homeomorphic endofunctor mapping submanifolds of a Hilbert space", which I don't understand either; maybe the creators do. The advantage of version control, as I found it, is that you get a continuous backup of your software, to the extent that you can revert the software, and not just from ten days back; you can literally time-travel from one checkpoint to another.

So this is Git 101. There is a remote repository, on GitHub or somewhere on your own hosted server, and you have your working directory. You use the git add command to add changes from the working directory to the staging area, then you use git commit to actually finalize a version of your software, and then you push it back to the remote repository. That's how you do Git, and whenever you have a reliable backup of your software, you always have the confidence to move on to the next steps, like installing the software or solving other problems with it.

Going ahead, let's see the terminology you need to know before we get into CI pipelines, the actual topic. One term is containers; the others are continuous integration, continuous delivery, and continuous deployment. There are many "continuous" things these days: our team at Red Hat is called Continuous Productization, and there is "continuous improvement" coming up too.
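Before we move on to the terminology: the Git 101 flow just described can be sketched in a handful of commands. This is a minimal sketch using a hypothetical throwaway repository under /tmp; the file name and commit message are made up for illustration.

```shell
# Create a throwaway repository (hypothetical path, for illustration only)
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q .

# Commits need an identity; set one locally for this repo
git config user.email "sk@example.com"
git config user.name "SK"

# Make a change in the working directory
echo 'print("hello")' > hello.py

git add hello.py                  # stage the change
git commit -q -m "Add hello.py"   # finalize a version (a checkpoint)
git log --oneline                 # each line here is a checkpoint you can return to

# With a remote repository configured, "git push" would upload the commit
# and "git pull" would fetch what others pushed.
```

The time-travel part is `git checkout <commit>`, which puts the working directory back at any earlier checkpoint.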
There are many things which you do continuously, and we will go through the definitions of some of them. We should also know what Jenkins is, what OpenShift is, and the different types of configurations in OpenShift.

Coming to the continuous things: what has been bothering everyone for decades is that whenever software is released, there will be more versions of it, with a lot of enhancements, a lot of bug fixes, and a lot of features coming up. So the basic trend which now goes around all software is continuous integration, continuous delivery, and continuous deployment. Continuous integration comes into the picture when you keep merging every commit a developer makes, creating a build of the software that continuously stays aligned with the actual production repository. Nowadays every piece of software has a lot of dependencies, pieces which you need to put together like Lego, and if one of the pieces fails, the whole software fails. So continuous integration is one process for making sure everything keeps working.

The other two terms, and the most ambiguous ones, are continuous delivery and continuous deployment. There is a subtle difference between them. Delivery means reliably releasing your software: whenever I say software X has released 1.0,
it is the most stable version, and I'm going to deliver it to all the distributions out there so that people can install it. Continuous deployment, on the other hand, applies when there is a running piece of software, like an Apache server or a Python-based server, and you continuously update the existing packages while the software is running, with staging, production, and test environments all working together, making sure the deployment runs as per the tests which were run against that particular application.

Going ahead: why continuous? Because when software is released, it doesn't mean the software is perfect. It is a process of evolution, through which it turns into production-quality software that people can use, via all the test mechanisms and everything else. The basic reasons for all these continuous practices are: to ensure standard practices; to ensure the software is delivered at a faster rate, so the time to market is reduced; and, if we fail, to fail consistently, so that we can fix it in the next version. That is the important takeaway: keep going with the continuous improvement of the existing software.

Going ahead, this is the CI/CD loop which every piece of software now follows: you plan and code the software, you build it, and later on you test the software, release it, and deploy it into production; customers use it, and you monitor it. Then, depending on the feedback you get, you plan again, re-code, rebuild everything, and enhance it.

So what are containers? Containers are one of these buzzwords that came into the picture recently. They are isolated user spaces.
If you have a very big machine, say 128 GB of RAM and 8 terabytes of disk, and you want to share the whole machine between different processes, you need to create some isolation. Previously that used to happen with virtual machines, but the problem with virtual machines is that they are very heavy in nature: each one has its own complete operating system running, plus a lot of unnecessary software, just to serve as an isolated environment. Containers lessen that burden by creating small user spaces using a kernel technology called cgroups. I'm not going to go into detail, but it simulates the whole server environment in terms of a process, and it can give you the feel of a whole operating system within a container. The result: previously people used to say "it works on my machine"; now people have started saying "it works on my container".

This is an example Dockerfile, which is used to create your own container. What it does is pull the Fedora image down from the main repository, run a dnf install command to install software on top of that image, and declare which path will be the working directory you start in. If you run the container without the last statement there, it would just run and then stop after a few seconds, because a container needs something to do; each container has the purpose of running a command, a web server, or something else. If you want to keep the container running permanently, you use a command like tail -f /dev/null, which acts like an infinite loop and keeps the container running forever.
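Reconstructed from that description, such a Dockerfile might look like the sketch below. The package being installed and the working directory are illustrative guesses, not the exact file from the slide.

```dockerfile
# Pull the Fedora base image from the main repository
FROM fedora:latest

# Install software on top of the base image (package name is illustrative)
RUN dnf install -y python3 && dnf clean all

# The directory you land in when the container starts
WORKDIR /tmp

# Without this, the container exits after a few seconds; tail -f /dev/null
# never returns, so it keeps the container running forever
CMD ["tail", "-f", "/dev/null"]
```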
Finally, Jenkins. Jenkins is a tool which people started using for automation in general. What it can achieve, it achieves by means of plugins, shell scripts, and Jenkins job files: scheduled builds, triggers, notifications, automated scripts, and pipelines. It also does multi-node delegation: if there are different types of operating systems you want to test the software on, Jenkins is the tool you go to, because it can delegate; if you want to run a script on a Fedora slave, it can do that too. And it has a logging mechanism, where it records every instruction you run against a particular slave. In gist, Jenkins is a tool which automates stuff with the help of scripts, plugins, and job files.

How does Jenkins come into the picture with OpenShift? Jenkins, as I said, builds software, or does anything you want it to. Jenkins takes the code from a Git server, it has a feature to build a Docker container, and it can also run that container on OpenShift. So OpenShift comes with a CI server that can build and deploy the containers for you. Nowadays, in order to make sure the software is always running, we deliver it inside a container instead of as an RPM package or a script from some other repository; containers have become a reliable means of software distribution because of their isolated-environment nature, which makes it certain they will work at any point in time.

So OpenShift, which recently changed its upstream name to OKD, the Origin Kubernetes Distribution, is built around Docker containers and
the Kubernetes platform, which manages all the different kinds of containers and can also load-balance across them. OpenShift has many configurations which you can make use of to build containers, to deploy containers, to store different metadata about the containers, and also to create pipelines and pods. We'll see those in detail.

In OpenShift terminology, each configuration is a template for doing a specific task. A build config is how you tell OpenShift to build the image of a container. A deployment config is about the way you deploy, or run, the container, with all the different load balancers, health checks, triggers, and so on. Config maps are something we use to store credentials or any other metadata related to the container, which is made available at the time of running the container, as environment variables inside it. Finally, OpenShift also has pipelines; previously this was a preview, but now OpenShift supports it. A pipeline is a series of user-defined steps which can be run through Jenkins inside containers, using an OpenShift domain-specific language and a separate OpenShift plugin, so that you can achieve a whole software deployment or release process from steps defined in a Jenkinsfile. And a pod is nothing but a collection of containers which you want to run standalone instead of running as a deployment. The advantage of a pod is that it is the best mechanism to test your container: you just create a pod file, copy-paste it into the OpenShift environment, it runs as a container, and you can delete the pod at any time. A deployment, however, is different.
By its default configuration, a deployment always tends to be self-replicating in nature, even if you accidentally delete it. So deployments are something you use for a production environment.

Finally, how do we deploy the containers, and how do we define all these deployments? The best part is that OpenShift talks YAML. Instead of long JSON or XML definitions, OpenShift has the simplified nature of defining things using YAML.

Before that: if you want to have OpenShift running on your local machine, there is an instruction set for installing Minishift on your current Fedora machine. If you want to do it on your Windows machine or any other distribution, there are documentation instructions on the OpenShift website. To run OpenShift in your local environment for testing things out, we install the libvirt dependencies, make sure your current user name is added to the libvirt group so it can manage KVM and the virtual machines, download the Minishift binary from the Minishift GitHub repository, and just start it with the minishift start command.

This is an example build config for building the Dockerfile which was already shown before. Most of the time, what any person would do is just copy-paste a working YAML file and try to edit it while understanding it. Build configs are pretty intuitive in nature, because if you read through the whole build config, you can understand it from the key-value pairs. For example, this one uses API version v1, it's a template, it has been labeled with a template name of fedora, and it has different annotations
which can be safely ignored; even if you don't mention them, these are generated by OpenShift inherently. Each config has its own objects. For example, we are using an image stream to build the whole Fedora container, and this image stream is referred to again as the output for the container, so whenever a container image is built, it is pushed to the OpenShift image stream. The section called source is used to build S2I images, source-to-image images, from Git itself: if I mention a GitHub repository URL as a parameter, it can go and pull that repository and put it inside the container, or it can refer to a Dockerfile which lives remotely on the GitHub repository or any other external Git server. It also supports different strategies; currently we are using the Docker strategy there, with noCache equal to true, which says that whenever a build request comes to OpenShift, it should build from scratch instead of using the cached layers inside Docker, the pre-populated, previously run steps. You can also have different kinds of triggers: you can make OpenShift do things whenever a person pushes to the repository, whenever a commit is created, and so trigger a run of the whole build config. For more information you can just refer to the OpenShift documentation, which is much more detailed.
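Put together, a build config along those lines might look like the following sketch. The object name, labels, repository URL, and webhook secret here are placeholders I've made up, not the exact values from the slide.

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: fedora-builder          # placeholder name
  labels:
    template: fedora
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/example-repo.git   # placeholder repository
  strategy:
    type: Docker
    dockerStrategy:
      noCache: true             # always build from scratch, ignore cached layers
  output:
    to:
      kind: ImageStreamTag
      name: fedora-builder:latest   # push the built image to this image stream
  triggers:
    - type: GitHub              # rebuild whenever a commit is pushed
      github:
        secret: placeholder-webhook-secret
```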
This was a basic example of creating a Fedora builder container using a build config. Going ahead, you can pass parameters to a build config using the parameters attribute, which is nothing but key-value pairs that you pass to a container while it is being built. The next part is the deployment config; as I said, deployment configs are very self-replicating in nature, so if there is any accidental delete in the environment, it tries to recreate things again, so that the customers who are using the deployment are not affected. There are also config maps, which, as I said before, are mostly used for passing metadata or credentials as environment variables inside the containers.

Finally, the pipelines. Pipelines are where you declare, as I said, the different stages of a particular Jenkins environment, so that Jenkins can pull the Jenkinsfile down from the external repository and run those steps within the containers. The example pipeline looks like this, and what I said before about build configs applies to pipelines too: they can also pull from remote Git repositories. This is an example repository which was created, and this is a sample file using the Groovy DSL, where you declare different stages. In this current example we have used stages like: build the container, deploy the container, wait, clone the particular source code, install the source code, test the source code, start building another container using the same source code, and create a release by running commands like twine, for a PyPI release, or any shell script. Finally there is a stage called the cleanup stage: once the whole work is done, you can directly clean up.
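A declarative Jenkinsfile of that shape might look roughly like the sketch below. The stage names and shell commands are illustrative assumptions, not the exact demo file.

```groovy
// Illustrative Jenkinsfile sketch; stage names and commands are assumptions.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'python3 setup.py sdist bdist_wheel'  // build the package
            }
        }
        stage('Test') {
            steps {
                sh 'python3 -m pytest'                   // run the test suite
            }
        }
        stage('Release') {
            steps {
                sh 'twine upload dist/*'                 // upload to PyPI
            }
        }
    }
    post {
        always {
            cleanWs()   // the cleanup stage: wipe the workspace when done
        }
    }
}
```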
As I said before, you can run pods as individual containers, without anything related to deployment configs or build configs, and this is how you do it. So let's create a project, create a build config, create a pipeline, and start building.

I have a demo which simulates a whole distribution pipeline, where I use a Python package called gummybears, which is already on the PyPI repository and does nothing except print "hello there" when you run gummybears. So it is a PyPI package; let me start the demo. As I mentioned in the examples, this is one of the Dockerfiles I've been using to create the whole pipeline release, and each of the steps in the pipeline builds the whole package, later tests the package, and deploys it to the PyPI repository. And this is an OpenShift environment, where you just go ahead and copy-paste: there is an option called "add YAML to the project" where you can just paste the build configs, and it creates the whole images, it creates the deployment configurations, and it also creates the pipelines. So now we are creating the pipeline, and as I said, we are just copy-pasting the whole YAML files and changing the parameters accordingly. Once the pipeline is created, we need to start it, but in this case the pipeline just failed, because the credentials were not available. So I created a config map which supplies the credentials, and now the pipeline is started again. Let me just fast-forward it. As you can see, it's currently building inside the Jenkins container, and the container is built from scratch.
It starts from dnf update, installs the packages, and installs the whole PyPI package. As you can see, there is an error involved here, because there is no change in the repository. So I created a commit which updates the version of the particular software and reran the build. Once the build is rerun, it uploads to the PyPI repository again with the help of the pipeline. There you go: we got version 0.0.3. And that's it. Any questions?

Yeah, OpenShift has an image catalog where Jenkins images are already there. Once you deploy the Jenkins image, it automatically detects the pipelines, and it comes pre-packaged with the OpenShift plugin, which identifies the OpenShift resources from within Jenkins. Yes, that's it. Any questions?