How long do you want the Q&A at the end? We'll start at half past; I'll probably take 40 minutes, then 20 minutes of Q&A. That sounds fine. That's what I prepared. We end at 16:20, so at 16:10 give me a signal, so that I know there are 10 more minutes and it's time to start the Q&A. I assume you don't need me sitting anywhere in particular? Yeah, I think so. You don't need to be here.

This talk is more about Jenkins on OpenShift, but we have people from FeedHenry sitting here. I won't be talking about FeedHenry at all, because there will be a lot of simplifications; FeedHenry is really big. Tomorrow, look for Wei Li. He will be presenting about microservices again, and he might have more FeedHenry content, including how we are using CI.

Okay, so let me start and introduce myself. My talk is the story of continuous integration in Red Hat Mobile. I'm one of the QE leads in the Red Hat Mobile team, and, well, it's an ongoing story. We don't really have continuous integration yet, but we are building things, and this talk will lean toward the technical, demo-showing side, because we want to share this with you. It's basically a thinly veiled advertisement for Jenkins on OpenShift, because that's what we are planning to use and what we are prototyping.

First I will go through a few slides describing the situation. I will make a lot of simplifications. I will pretend that we only run things on OpenShift; in reality we have large SaaS deployments orchestrated with Chef, but I don't think you would care that much if I spent 20 minutes talking about that. You might assume that I'm talking about stuff that actually, really works.
It doesn't work that well yet. But you should be able to run it, because we have a lot of these configurations public on our GitHub. I will also pretend that all of our repositories are open source. We still do a lot of development behind closed doors, though that is slowly changing; the repositories I will show you are of course already open. And of course I will try not to show you any AWS secrets or anything like that.

I will also pretend that we currently have a single service we care about, because explaining the migration from monolith to microservices that John talked about in his presentation would keep us here all day just to explain the mess we are in. John covered some of that in his talk two hours ago, and I don't want to repeat it.

So I have this little service, the mBaaS. Let's call it a stand-in for our product, because it's a nice little service. Well, actually it's a nice little single endpoint; behind it there are four microservices running on OpenShift, and it provides a kind of back end to our drag-and-drop mobile questionnaires, a zero-code kind of application. Even here I'm simplifying. This is how the questionnaire application looks: for example, you would submit something and save it, then somebody else could download a CSV or PDF. And this is how the mBaaS service looks deployed on OpenShift: you see those four services, and the fifth thing is just the MongoDB. This is more or less the first thing of ours that is genuinely built as microservices, and that's why it's what our tooling team looks at when we develop the whole CI/CD pipeline around it.

So how do we currently build our stuff? We have unit tests triggered on PR, and this PR build usually, in the end, creates some sort of deployable artifact, most often a Docker image.
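As a rough illustration of such a PR build, here is a minimal scripted-pipeline sketch in Jenkinsfile-style Groovy; the node label, stage names, and image name are made up for the example, not our actual job configuration:

```groovy
// Hypothetical sketch of a PR build: run unit tests, then produce the
// deployable artifact (a Docker image). Names here are illustrative.
node('nodejs') {
    stage('Checkout') {
        checkout scm                      // check out the PR branch
    }
    stage('Unit tests') {
        sh 'npm install && npm test'      // run the service's unit tests
    }
    stage('Build image') {
        // tag the image with the Jenkins build number so each PR build
        // produces a distinct, traceable artifact
        sh "docker build -t example/mbaas-service:${env.BUILD_NUMBER} ."
    }
}
```

This is only a sketch of the shape of the job; the real builds described in the talk were defined in Jenkins Job Builder YAML rather than a Jenkinsfile at this point.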
And we have kind of semi-automatic configuration management, more or less like this: in the end there is the Docker image, and we have a service template, which is the config file you throw at OpenShift so it does the deployment. So that's the first thing. Then you need to merge the pull request so that you have the service template you deploy with OpenShift, and then, as maybe another more or less manual, separate step, you would run integration tests when you want to make a release of this mock service. In reality, of course, we have three different repositories, because each service has its own build path, and so on. In the end it looks like this: three different images referenced in the service template, OpenShift downloads those images, and then we have the running mBaaS.

As you can hear, I'm talking a lot about things we do manually. The pull request build is nice, but we can do better, and we want to do better on Jenkins 2.0 and OpenShift. We have three really pragmatic reasons for the upgrade. First, we currently run this pull request build on Jenkins 1.0, on an ordinary AWS node running somewhere, but Jenkins 1.0 is no longer really updated. Second, we are heavily invested in OpenShift, so it makes sense to use more of OpenShift to do what we want with the platform. And third, we think this might enable us to have a proper CI pipeline, without four manual steps in between the different builds.

Do I need to cover what OpenShift is? Has anybody not seen it already? One thing I want to emphasize: the really nice thing about OpenShift is that it adds build capabilities. I think tomorrow there is a whole workshop about using OpenShift builds, and this is some of the stuff we are using from that toolbox.
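To give a flavor of what an OpenShift build definition looks like, here is a hypothetical minimal BuildConfig sketch, not our actual template; the name and Git URL are made up. It pulls a repo containing a Dockerfile, builds the image inside the cluster, and pushes it to the internal registry:

```yaml
# Hypothetical minimal BuildConfig (name and URL are illustrative).
apiVersion: v1
kind: BuildConfig
metadata:
  name: mbaas-service
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/mbaas-service.git  # repo with a Dockerfile
  strategy:
    type: Docker                  # build straight from the repo's Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: mbaas-service:latest  # image lands in the cluster's own registry
```

The point of this shape is that no external registry is involved: the cluster itself fetches the source, builds, and stores the result.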
So OpenShift builds are really cool because, as I showed you previously, we had three static images that we needed to produce and upload to some Docker Hub or similar. OpenShift, in theory, can just pull your repo, build everything, and deploy the new thing without the middleman: without Docker Hub in between, without you needing to upload anything. You can use Dockerfiles, you can use source-to-image, you can have all of this directly in the giant YAML you throw at OpenShift, or you can have it sourced from a Git repo somewhere.

So what do we actually have so far? We have figured out what I think is a really nice way to do the Jenkins configuration, we have a prototype of our release pipeline, and we have some plans for continuous integration on OpenShift. I will be talking about how we do the Jenkins config, and I might do a little demo here, since I actually have the config with me.

So let's create a new project that will be just CI. Will this work? Awesome. And let's create a new application that will be basically all of our CI. Our plan is that in the end the whole deployment of our CI infrastructure would look like this, because OpenShift templates are quite sophisticated and can hold quite a lot of information. So I will just create a new template. What did I forget? Oh well, fortunately I can cheat even more: I already have one prepared, otherwise I would be sadly debugging here. What I mean by "a lot of information you can store in the config" is that we don't have just the running application, which is a single Jenkins; we also use build configs, currently two kinds of build configs. One kind of build config specifies things about Jenkins itself, like plugins and so on.
The second set of build configs is about our slaves, where we can specify, based on a repository, what we want to have on our Docker slaves. This way, any time you need to do any sort of update, it's really simple: you just push the new Dockerfile into the repo, click Start Build here, it does a rebuild, and suddenly you have a new slave with the new configuration the moment you start a new job. I think that's really cool. This is how we use source-to-image; that's kind of dry text, but it's still what OpenShift can really do for you: you just specify your configuration and OpenShift takes care of the rest. Same idea for our slaves.

What is nice is that you can really simply declare different slaves in the template just by supplying an annotation, and Jenkins, when it gets built, knows right away: all right, I will use this Docker image for our Java jobs, and this other Docker image for our Ruby jobs, based on the annotation. I already showed you the slave update: you would just click rebuild.

Another thing we currently rely on really heavily is that our configuration is entirely in Jenkins Job Builder. It's a small Python project that takes a pile of YAML configuration, parses it, and spits out the XML that Jenkins needs to have its jobs configured. We quite like it because it enables code reuse, everything is in a Git repo, and we can have PR reviews on our job updates. That's really nice, because previously a random colleague would clone a random job and nobody would know what was happening where. Another nice thing is portability, which means that now, while we are developing this fancy new Jenkins on OpenShift, we still have a lot of configuration from our old Jenkins and we can just port it over if we want.

So I can show you: for example, here I have three jobs that actually come from our old Jenkins, and this is running on the Jenkins CI that I have here running
from OpenShift. And if I wanted, I could just... hopefully... maybe it won't work, but fortunately it worked an hour ago... maybe this one. Yeah, no, error. Okay, that's a pity; I should have left these three alone and used one I knew worked. So basically you have to take my word that I created these three jobs with this command that now fails for some unspecified reason. I could try to run one; maybe that won't work either. So let's try to build something... pending, okay, okay, it creates the slave pod. I think I should actually be able to see it here. Yeah, a few seconds. So when I run the job, it live-creates a new pod, a new slave; everything is in a clean environment based on the configuration I supplied, like, two hours ago when I was rehearsing with the template.

So now I could talk more about Jenkins job configs, because I think they are really cool. This is the simplest, smallest job you could create. It would produce something like this; if you ever had to manually edit Jenkins XML, you know you don't want to read this. This is a more realistic Jenkins job, and you can start to see that it's quite a complex configuration, even though we like that it is configuration and not clicking in a GUI. It even has some templating capabilities, which we like, but it can get weird: sometimes you think you have a template, then when you try to use it you forget to pass in a parameter that you defined in the template, and it doesn't give you a reasonable error. That's the reason we actually have a lot of problems with Jenkins Job Builder. It is really hard to learn, and even though we have this technology, it still often happens that colleagues just copy-paste a Jenkins job in the GUI, because that's so much simpler than trying to understand this weird templating YAML system. If it throws an error, it's just an exception somewhere in the depths of the Python executable, and you have no idea what happened. Last but not least, regarding that quick command where I tried to update a job,
it is actually quite simple to overwrite somebody else's jobs, because by default, when you run it, it updates all the jobs in your repository. And it's also quite hard to install. We are solving this in two ways: usually we either use a Jenkins job to update Jenkins jobs, so that you don't have to run this Pythonic thing yourself, or we have a Docker image you can run instead that contains it.

Okay, so I tried to show you this demo of porting a config and it failed, so I'm afraid I can't really show you that, but at least I have Jenkins running.

The second thing we have considered really strongly is: what if we just move the configuration into a Jenkinsfile? That's a big new feature of Jenkins 2.0. It actually existed previously via plugins, but with Jenkins 2.0 it's front and center, and it makes most jobs configurable with a Groovy script. Remember that weird demo thing in Jenkins Job Builder? It would look like this: far fewer lines, and it's even somewhat more readable; you know roughly what happens. There are still some quirks, like the properties-list thing, but all in all I would consider the old one much less readable, and this a much-improved version of the demo job. Unfortunately, it looks like we won't be able to get rid of Jenkins Job Builder entirely, because you still need to configure somehow how the jobs get created. Maybe we will reconsider, but currently it looks like the solution is that we will configure where the Jenkinsfiles live from Jenkins Job Builder scripts, and so far that looks fine.

There is one more thing to consider when using Jenkinsfiles and this pipeline script. You might think it's Groovy, the same programming language, and you might be tempted to do things like: hey, I want to run npm install on several nodes, so, parallel, for each label, do npm install, or something like this. Unfortunately you can't, because the Groovy that runs inside Jenkins is quite heavily hacked, mostly to allow suspend and resume: if
you have a long-running job and suddenly your Jenkins crashes, you want to be able to resume and repeat only the parts of the pipeline that haven't run yet. So instead you need to do things like this: a regular for loop with a temporary variable, iterating beforehand, and all of this happens in the first second of the job run, so that when you actually ask Jenkins to start spawning all the nodes in parallel it already knows the config. When it crashes, the hope is that it reruns only those builds we created here that haven't run or didn't finish yet. It's something you need to keep in mind: you might think it's Groovy, but it's not really.

I started talking about this a bit earlier: we still want to configure things with Jenkins Job Builder, and you actually can include pipelines in Jenkins Job Builder just fine. You don't need to supply the DSL inline, of course; you can point it to a path in the Git repo you checked out.

The last thing we needed to figure out is that we sometimes really need to supply secrets to our jobs. For example, some of our jobs provision things on AWS, and we don't want that to be public, but we do want it somewhat configurable, the same as access to private GitHub repos, private npm registries, and so on. We have a workaround for this. One day, when we have enough time, it might become another Jenkins Job Builder plugin, because even though I don't like Jenkins Job Builder that much, it's really easy to extend. Currently we have this literal Groovy file; all right, XXX is not our real AWS access key, of course, but you get the idea. We just POST this file to the Jenkins script URL, it interprets it just fine, and it creates what we need. It might be that in the end we will configure more and more things auxiliary to jobs with Groovy scripts like this; currently we use it mostly for AWS, GitHub, and so on.

Last but not least, what are we actually
trying to do with all this? As I showed you at the beginning, we currently have this sort of disjointed step where you still need to merge a pull request into your template config repo, and only then can you throw it at OpenShift. That leads to several problems, because sometimes our developers don't merge or rebase properly. So we realized we need a proper pipeline, and instead of triggering on every single pull request, we decided we will trigger a release and build, in parallel, all the components that have changed since the previous release. Then we collect the artifacts and update the template (oh, I forgot to change the slide; it says Chef repo, but it's the template). In this case we would use a Jenkins pipeline job and Groovy, some GitHub API calls to collect what has changed and what hasn't, and maybe reuse our old build configs from Jenkins Job Builder to do this massive rebuild. It's still a proof of concept, which is why I don't even try to show it.

As you noticed, there isn't really any OpenShift in this, because we don't need to push OpenShift into everything. A few things are easier because we use OpenShift: triggering 15 builds at once is easier when you have dynamically spawned, Docker-based slaves. But you could do this without OpenShift; the Kubernetes plugin is completely standalone.

Then there is the other question. Setting our POC aside, the way we currently do things, you can update the service on every PR, which means that any time a developer needs to deploy something just to see if their update is running, they shouldn't need a massive rebuild of everything; we don't want to put our developers through that. Fortunately, we can still use OpenShift builds for this. Even though we still build the components the usual way with Jenkins, currently even reusing the old configs verbatim, we can create a new template that just references the
component's GitHub repo, and it can actually host a Dockerfile that builds the component right in OpenShift. With this, if a developer just wants to try something out, do some integration testing or the like, they simply supply, alongside the Git URI, the branch they're on, and that's it: they throw it at OpenShift and that's where the building happens. And because OpenShift with Docker is quite good at caching previous layers, fortunately this can take very little time. This is actually how I was developing those slaves and the updates to them: I just spun up all of our proof-of-concept Jenkins things and then pushed commit after commit, little updates here and there; rebuild, rebuild, done in a second, rerun the thing I needed to integrate with. As you can hear, it's still not ideal, but it's better than the alternative we have now. In the future we will probably move to something that allows more local development, but this is currently the best thing we have for running something on OpenShift. So, quickly, this is the small build pipeline inside OpenShift: you have the template that can consume, for example, the mBaaS PR branch, it does the image build, and then OpenShift can deploy the image.

There are still problems. Even in our proof of concept we don't have integration tests in the pipeline yet, and we are not sure how we will do that, so we are looking forward to tomorrow's workshop. And second, yes, currently in our proof of concept a developer really needs to push every single line they want to work with into their own repo somewhere. That's kind of suboptimal, and we are still searching for a nice solution.

I went through this fast, so in conclusion: we think OpenShift build configs are really cool, Jenkins pipelines are really cool, and Docker-based slaves are really cool; the rest of it we kind of duct-taped together with Jenkins Job Builder and Groovy scripts, and we are hoping to find more elegant solutions. Oh, speaking of duct tape, maybe I should show you how a real Jenkins Job Builder pipeline file can look. This is the proof of concept we currently have. It has about 200 lines, and this 200-line behemoth represents what I had in a nice little graph as, like, three nodes. So we still have some way to go before we have something we would be comfortable using to trigger deploys and integration tests at the end.

So I sped through my 40 slides quite quickly; I thought I would spend at least a minute on each of them. In that case, thank you for your attention, and I'm looking forward to your questions.

There are some technologies that cannot be containerized, like Windows, for instance, or mobile-related builders. How does this strategy of deploying on OpenShift help you with that? Are there any significant wins?

To be honest, if you have no reason to run your stuff on OpenShift at all, there are not that many wins, unless you really like the configuration aspect of this and you think you would be updating your Jenkins so often that you need CI/CD for your Jenkins itself; then OpenShift is a really good CD platform and would help you. It doesn't hinder you in any way, because even on OpenShift it's still a full Jenkins: you can load any plugins you want, and you can hook it up to any infrastructure you have besides your OpenShift. Which means, if you need to provision elsewhere: you've seen that we are using AWS, we will have configuration for OpenStack as well, and we are planning to plug in Mac VMs to do our iOS testing. So it might not help you, but fortunately it doesn't hinder you. That would be my answer to this question.

Currently we don't have such large pipelines in our proofs of concept that it would provide value, but once we get to the stage where we actually want to trigger integration tests or deploys, there are always things that are hard to automate but
really easy to check manually, and that's where a manual step would go: it pings you by email and says, hey, the build finished, do you really want to proceed, and you click a button and say yes or no. Currently we are glad that we have the beginning, and at the beginning the deploy... where did I have it... I think it's like this. You can see that we have this separation: all right, this part has finished and we have the service template, and then somebody else maybe deploys it and does the testing, and so on. What we have solved right now is that it finally is not three disjointed processes that often clash and require care from the developer to merge correctly; it is suddenly a pipeline. Of course, the logical next step, since these two things are named almost the same and fit together like two Lego pieces, is to put them together, and that is where your manual step would fit nicely: it would ask, do you really want to deploy, do you really want to run load tests, stress tests, integration tests, end-to-end tests, and so on.

I think the question was: we are using a lot of duct tape in our proofs of concept, so what is the largest piece we duct-taped together? Currently it's the secret storage and secret configuration management, so if anybody has a better idea than throwing Groovy files at Jenkins to interpret, I'd like to hear it. Okay, awesome, thank you.

Currently, more or less, yes. In these proofs of concept you need to have it in a repository that is, not public, but accessible to OpenShift, somewhere. But because the configuration is the same for everything, you just run the new build template for the repo you have somewhere. And yeah, if this happened because you had a pull request and that got triggered and deployed somewhere, then you would see it as a result of your pull request, but nothing would stop
you from running your own on the OpenShift we have, well, if you have access to our OpenShift, that is. That's why I showed you just our more or less open source stuff. But it's something we are thinking about. Our current thinking is that we will probably, in the end, enable authentication with GitHub to our public infrastructure, but we have not decided on that yet. In that case, if you really were a contributor of ours and we added you to the right GitHub group, we would have no problem with you looking at things. We are currently investigating how to differentiate users: all right, do we want anonymous users to see build results? Probably yes; if it's somebody who just created a pull request, we probably want them to see the build result, but maybe we don't want them to be anonymous, maybe we want them at least logged into GitHub, and maybe we then want to differentiate between external contributors and people who actually work at Red Hat, and so on. Currently we are looking at GitHub as an auth provider, because OpenShift provides this, and Jenkins can take the auth information from OpenShift and use it.

Or do you mean in our proofs of concept? How fast is our proof of concept? If you go from a clean state, it's terrible; it can take like 20 minutes. If you have at least something cached on OpenShift, it can get radically better, but it really depends on what you changed. So we are still investigating how to optimize the build config itself, and we are probably going to change the last thing I showed you... you can see... was this it?
No... did I forget to plug in... oh, I think my PC just died, so I can only tell you. We currently, in the proof of concept, use just the Docker build, and that's not optimal if you are building things from source. Source-to-image builds have a good caching strategy, and we are investigating how to use them, because those are consistently faster.

Seconds, yes, in the case I've used this for: the development of the slave images. I used literally the same process, where when I update a slave image I push an update to the Git repo and that triggers a rebuild. The good-case scenario is literally the same as if I triggered a Docker build, which, when cached, can take like 4 or 5 seconds, because if you add just the final layer to a Docker image, that's fast, and OpenShift does basically the same thing. So, seconds.

Okay, any more questions? In that case, because we have two more minutes, if anybody has a last question... and if not, unfortunately my PC died anyway, so I wouldn't be able to show you anything more. So thank you very much.

Could you put the presentation somewhere? Do you have a similar one, or something like this one? I have one in my Google Slides, but I don't want to share it; it's just a draft.