So, I'll be talking about how we can use Docker and Chef together to set up a complete CI/CD pipeline. But before I do that, I'm just curious: how many of you here are developers? How many of you are from the ops background? Very few. And the others, those who are neither ops nor developers? Okay, so you're from the management side. And how many of you have some knowledge of Docker, or have worked with Docker? Okay. And Chef? Okay. So, since I see there aren't many who know Docker or Chef, would you like me to explain a little about what Docker is, what Chef is, and where they are used? I can start from there. That's good, right? So, I have primarily been a developer for several years. Until about three years ago, I was doing development for Chef itself. I wrote some Chef plugins and made a lot of contributions to the Chef knife plugins, like knife-windows and its WinRM support. While doing that development, my domain was DevOps, and that's where the entire DevOps theme started to seem very interesting to me. I gradually shifted my focus from doing pure development to being a real DevOps practitioner. So I have seen DevOps gradually evolve from people writing ad hoc scripts onward. From the ops perspective, I wouldn't claim to be an expert sysadmin, but since I was doing a lot of development for Chef, I was exposed to many use cases where Chef was being used and to the kinds of problems people face in the delivery pipeline. So eventually I started providing DevOps consulting. Right now I'm working with a company called WhiteHeads Technologies in Pune, where we provide DevOps as a service, and I head that vertical. We believe that in the DevOps space right now there is a gap at every level, right from the definition of what DevOps is to how you can do it right for your particular business case.
So, there is a big gap between knowing what to do and the many tools that are available. Docker and Chef, I would say, are two of the tools available in the DevOps space, but how to use them and where to use them is the question. There are a lot of other tools too, like Jenkins, and you might have heard terms like CI, CD, continuous testing and so many things, but what makes sense for your particular business case? We try to consult and provide solutions about how you can do DevOps right. In that process I've also worked a lot on Docker; we're authorized Docker consulting partners and also authorized Chef consulting partners. Basically, technology, innovation and the urge to keep learning drive me, and since the DevOps space and the things happening in this field are pretty dynamic, with a lot of new developments, I feel that I'm at the right place. I love to travel, read and write, and I have two kids back home. So, this is a rough agenda of what we will cover. I'll talk about what Chef is, how it evolved and where it is used, followed by containers and a little bit about Docker, then whether there is a need to use Docker and Chef together and how we can actually do it. I'll try to provide some examples of how you can do it. For those who know Chef, there is something called a cookbook, and there are Docker cookbooks with which you can actually integrate Docker and Chef together; I'll give you an idea about that. So, long ago, maybe about ten years ago... how many of you here have more than ten years of experience? Quite a few. So, if you compare the way a product was written and delivered about ten years ago versus the demands on the delivery pipeline today, things have really changed: there is a need for continuous delivery, ten deployments per day, and everything else you've been hearing since morning.
So, rapid development, fast, accelerated deployment: there are a lot of these buzzwords and keywords you keep hearing, and there is a real change behind them. Ten years ago, when system admins were told to deploy something, the method that was followed went like this. There was this famous wall of confusion, a partition between teams: the developers write their code and then hand it over to a build engineer. The build engineer sits at his desktop, gets that code from somewhere, and builds it. This build process itself used to be a nightly job, and the next morning the build was passed on to another team who took it from there. If it was Java code, it would be a WAR file; they would take it and go put it in production. And it's not that simple when you are touching anything in production; there are certain norms that have to be followed. Then consider the case where bad code has been pushed into production and you want to roll it back. What those system admins used to do, if they had a Linux machine they were deploying to, was maintain something called golden images. They would take a snapshot of the entire server, and that one image would run into gigabytes of storage; for every release, for every change, they maintained huge amounts of such data. The storage devices of that time were also a constraint: if the physical server went down, or if you wanted to copy that backup from one machine to another, that too posed a problem. And there were a lot of other issues. When you're talking about a server, a lot of things come into play, like which ports should be open and what security policies should be applied.
A whole lot of what you would call the system administration side used to be manual, and when things are manual, everything slows down and is also prone to a lot of bugs and errors. So system administration, or what is more maturely called configuration management, was the problem of that time, and that is what Chef set out to tackle. Chef was written by Adam Jacob. He was doing consulting, and he had written Chef for himself, internally, because it helped him provide better service across the entire delivery pipeline. That's when he met Jesse Robbins; he showed this code to him, and then together with Barry, Nathan and Joshua, the five of them founded Chef, sometime around 2008. The first release of Chef was available in 2009, so it's been around in the market for a very long time. Initially the company was called Opscode and Chef was the product name; I think two years ago they renamed Opscode to Chef, so now we just say Chef, but initially it was Opscode Chef. Does anyone here know what Marionette is? In the beginning, Adam Jacob was thinking of calling Chef "Marionette", but it turned out to be very awkward to type: on the CLI, if you have to keep typing a long word like "marionette", it's painful. So eventually he replaced it with Chef, and then all the components of Chef got kitchen names: recipes, cookbooks, even knife plugins. So actually, even before Opscode, it was called Marionette; that story is written up online. And what about Facebook, Nordstrom and Disney, any guesses? Yes, they are among the biggest clients of Chef.
And they are among the Fortune 500 companies. So Chef initially was solving the configuration management problem. The situation is like this: if a system admin has to manage, say, 50 servers, he has to SSH into each server, set up certain ports and configurations, and ensure the application is right for that machine; a DB server needs different settings than a web server, and so on. Some smart system admins had started writing shell scripts for all of this, but managing that was still getting complex. So Chef came with tools and processes that let you do this better, as code. Eventually it was not just for on-prem servers; Chef also offers a lot of cloud management, so you can provision virtual machines in the cloud using Chef, and gradually Chef started offering anything around infrastructure. I think last year they announced Chef Delivery as their end-to-end product; it's a licensed product, whereas Chef itself is completely open source. Chef Delivery is what they are concentrating on right now. However, one of the biggest things driving Chef is the Chef community. Chef is completely open source and has a huge community. Typically, as a DevOps consultant, if I tell a company "use Chef, it's good", they find it good and start using it; then tomorrow they hit a problem, and whom do they talk to? Because it's open source. So what I believe is the biggest driver of Chef is that huge community: you just put up a post saying "I am facing this problem" and people respond. Now, how many of you are clueless about what Docker or containers are? Anybody? Okay, not many. So, containers are a form of virtualization.
A container is not exactly a virtual machine; it's an isolated box, a container, running on top of an OS. I'll get to what it is in the next few slides. But if you look at the container era, it has a long history. Back in 2000 there were Jails, released for FreeBSD, and later Linux-VServer for Linux. Then came the evolution of cgroups, control groups, which provided better control and resource isolation for containers, because at that time security and portability were still issues. Then LXC containers came along, which started making containers genuinely useful. And in 2013 Docker was founded; it initially made use of LXC, but made it a lot more usable. Docker has been written so well that in 2014, Docker 1.0, the first stable version, was released. Today it's at 1.10, with a lot of features in it. Docker has seen tremendous growth in the last two to three years. In fact, I presented this very talk, baking Docker using Chef, one year ago at the Chef conference in Santa Clara, in the US. And when I was invited to present it here today, I found that in this one year the entire Docker ecosystem has changed so much that it is not possible to present exactly what was presented last year; there's hardly anything that still holds relevance, and I had to make a lot of changes to fit today. Exactly one year ago, the state of the question was: is Docker really good enough to be run in production? And if Docker is good, can Chef work together with it, and if yes, how? It was more like a research or R&D kind of project at that time. But now we see lots and lots of companies using Docker in production. And not just that: Docker is not just used for DevOps; it's also used extensively by developers to write better code, to test better, to do their setup.
So Docker sees a lot of applications. Now, while considering whether you should go in for Docker: there were already a lot of configuration management tools, not just Chef but also Puppet, catering to the whole infrastructure-as-code space, and then suddenly with Docker, people started evaluating: do you need both? So these are some of the points that come into the picture. Configuration management tools like Chef have a steep learning curve; to make Chef useful there is a lot of knowledge you need before you can really try it out. Docker, on the other hand, is supposed to be very simple: you just take it and get started with it. You don't need a lot of systems knowledge or a lot of underlying setup; you run a few commands and you have a machine ready on your desktop, or anywhere, and you can just play around with it. Also, Docker is open source, and so is Chef, but with Docker there is no huge investment needed at the early stage. Consider a startup, or a big company considering changes to its infrastructure: there is a lot of money required to change things or even to start something new. Typically you start with small chunks, maybe five or ten servers, and at that scale Docker plays very well because the learning curve is smaller. But once the number of servers starts growing, that's where Chef, even today, still has the upper hand. In fact, for some time now there has been something called Docker Swarm, which caters to how you can manage many Docker containers. Just last week somebody published an experiment claiming you can start up to a million Docker containers with Swarm, and that it does much better than Kubernetes, the orchestrator from Google.
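To make the Swarm idea concrete, here is a minimal sketch of scaling containers with Docker's orchestration. Note the assumptions: this uses the modern swarm-mode CLI (the standalone Swarm of the talk's era worked differently), and the service name, image and replica counts are made up. The commands are echoed rather than executed, so the flow can be read without a Docker daemon; drop the `run` wrapper to execute them against a real swarm manager.

```shell
# Sketch only: echo each docker command instead of running it,
# so the flow is visible without a Docker daemon.
run() { echo "$@"; }

run docker swarm init                                                 # make this host a swarm manager
run docker service create --name web --replicas 100 -p 80:80 nginx    # 100 container replicas of one image
run docker service scale web=500                                      # grow the fleet with one command
```

The point the speaker is making is exactly this last line: once containers are the unit, going from 100 to 500 instances is one command, and the open question becomes who manages the hosts underneath.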
That side is still being experimented with, whereas Chef is already proven there. So if Docker is now being considered in huge numbers, particularly in production, for big products with a huge number of servers, there is still an unanswered question: now I have 100 containers, how do I manage them? Okay, so what is Docker, basically? Docker is a Linux container technology. It's open source, and it can create lightweight, self-sufficient containers for any application. And the benefits are, first, speed: if you have to start a virtual machine versus a Docker container, the Docker container is up in less than a second. Second, portability: Docker can run on any underlying OS; now it even runs on Windows, though until last year it did not. And third, density. Has anybody tried starting virtual machines using VirtualBox or VMware on a laptop? Typically you can start one, two, at most three machines; after that your laptop stops working. Whereas on one machine you can run many more Docker containers; Docker can even run inside a virtual machine. On the same underlying hardware you can run a much larger number of Docker containers than virtual machines, so the density is high. Now, I made a loose statement earlier that this is virtualization, that a container is like a virtual machine. It's actually not. The difference between Docker and a virtual machine is the hypervisor and the guest OS: the guest OS typically takes some gigabytes of memory, and that is not needed with Docker. Docker makes use of the underlying kernel to start a container, and that container gives you a complete, independent, isolated space where you can run your application, and that's all you need. What does Docker need? Of course, you need something for the containers to run on top of.
The only requirement is that your underlying machine should have the Docker Engine installed on it; then you can run containers. So consider that the host OS is Linux. The Docker Engine, written by Docker, is installed and set up on Linux. Then on top of that you run a container, the yellow box you see in the diagram. Inside that container there will be another OS, and that container OS can be Ubuntu or another Linux; it cannot be Windows right now, because the container shares the host's kernel. On Windows itself there was initially no Docker Engine available; now you can run containers on top of it too, and you can run any Linux variant inside. That's how it is portable. And portability is not just about the OS. Sometimes there is a requirement, say a Java application or a Ruby application, where backward compatibility is not there: the Ruby version moves from, say, 1.5 to 1.6, and an app written for 1.5 will not run on 1.6, that kind of thing. I'm giving this example just to say: sometimes when you upgrade the OS from some version X to Y and try to run your application on it, it may not run. However, if you run your application inside a Docker container which has its OS pinned at a particular version, you don't have to worry what your underlying machine's OS is. You can keep upgrading it, or change it from CentOS to Ubuntu to any other Linux, and your application still runs correctly, the way it has to. Another example: maybe you are using a Mac for your development and Red Hat is used in production. As a developer, I want to test against that. Instead of creating a VM, you can run your application inside that container, and it becomes a lot easier for a developer to test. Any questions? Okay. So basically, how do you start, how do you define, how do you create a Docker container?
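Before looking at the slide, it may help to see the shape of such a Dockerfile. This is a reconstructed sketch, not the actual slide content: the base image matches what the talk describes (ubuntu:14.04), but the package, paths, port and the hello-scala jar are my assumptions for illustration.

```dockerfile
# Hypothetical reconstruction of the kind of Dockerfile discussed in the talk
FROM ubuntu:14.04                             # predefined base image from Docker Hub
RUN apt-get update && \
    apt-get install -y openjdk-7-jre-headless # install the packages the app needs
COPY conf/ /etc/hello-scala/                  # copy configuration from the local checkout
COPY target/hello-scala.jar /opt/hello-scala.jar
EXPOSE 8080                                   # port the app listens on
CMD ["java", "-jar", "/opt/hello-scala.jar"]
```

You would then build it with `docker build -t hello-scala .` and run it with `docker run -p 8080:8080 hello-scala`, which is the flow the next part of the talk walks through.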
Basically, with a Dockerfile like the one you see here. If you look at this Dockerfile, it's just some Linux commands, the commands you want run, specified with a few basic keywords. "FROM ubuntu:14.04" says: take a predefined base image. Then it says: install some packages on my machine, and copy some configuration from my local checkout into it. This is how you define the image: you codify the machine, or rather the container, that is going to run; you state what is going to go inside it, essentially as a set of bash commands. This file is called a Dockerfile, and all you have to do to build an image is run "docker build" with a tag like hello-scala, and it will build that image for you. It builds on a predefined base image, the kind published on Docker Hub by Docker, and on top of that you start making your changes. So "ubuntu" is the OS that is going to run in your container, and on top of that you set up whatever you need, Java or Node.js or anything else, so you install the JDK or JRE 1.7 and all of those things right here. You create an image, and then you can distribute that image to whomever you want. From the developer's perspective, or the DevOps perspective, if they have to define how the production web server is going to look, all of that is in there, and if there are any changes, the definition is cheap to share, because it's just this Dockerfile. (Audience question: is Scala the language there? Yes, yes.)

So initially, the first use I had heard of for Docker was mainly testing, and it seemed like the best application for Docker would be testing, because in a test environment you typically end up polluting the entire environment, and then if you want to reset it you don't know what to do other than format your machine or disk; how do you get back to a clean state? Especially with regression testing, when lots of tests are being run, that's where Docker fit. But then gradually, when people started thinking about microservices and using Docker in production, they realized it is actually good to break your monolithic application down into small chunks, put them into Docker containers, and run them as separate services, as separate containers.

Then there is CaaS, Containers as a Service. Before that, let me talk about PaaS. There are tools like Travis CI, hosted applications that make underlying use of Docker to provide services. Travis CI is a hosted CI service that provides you machines on which you can run your builds. For example, if I want to continuously build and test my application on different versions of Java, on different machines, I tell Travis CI, and Travis CI internally spawns Docker containers, does the processing, and provides that as a PaaS offering. CaaS is what Docker has started calling their own offering of the entire Docker ecosystem: initially it was just the single-container story, but then there are a lot of other problems, managing the underlying hardware and so on, so now they provide containers as a service. Whatever your business case, whatever your use case for Docker, there is an entire ecosystem with solutions available: you run a container, you get services.

Now, I mentioned earlier the golden images, the system administrators taking server snapshots and maintaining them. Docker is also based on images, however the way those images are created, as layers, layers of images, is what makes it simple, lightweight as well as fast. With old-style configuration management you were controlling the environment with a full system image, and there is a trade-off between flexibility and management. If you keep the entire server image with you, you are sure that if you just bring it back up it will be exactly the way it was running before. But if you have done the same thing using configuration management like Chef, you have written code which is going to create your server; if your server just crashes one day, you run your Chef cookbooks and recipes, whatever Chef code you have written, and it recreates the same thing on top. There is a trade-off in what you are managing, and the Chef way is a lot more flexible and more manageable. So configuration management is one way into DevOps: an evolution from writing small shell scripts to a more mature way of doing configuration management using Chef. There is also the concept of immutable infrastructure: the idea that infrastructure should not change in place; instead of changing it, you replace it, and that's where images and snapshots win.

So if you compare Chef and Docker by what they are actually tackling, they attack rather different problems. When Chef came, it replaced manual human tasks, or scattered shell scripts, with a more sophisticated way of working. Just as an analogy: this conference has a convention for submitting proposals, and code has been written around that convention. Suppose that convention were not there; we could all still have registered and submitted proposals somehow, there is still a way to go about it, it just becomes cumbersome. Writing code in an organized manner helps you do things much faster, and that is what Chef was tackling. Docker, however, attacks the same end goal in a different way, in a way that can be more efficient. I hope that point is clear; it was a little confusing, so let me make it concrete with a demo.

To give you an idea of what is being done here: there is some Node.js code, an app server which just prints "hello world". If you make a change to that code, a Jenkins build is triggered; Jenkins is a CI server. The build creates a Docker image from your application, saves that image to Docker Hub, and then deploys it to an AWS instance. Let me just go through that. So, one way, and some companies still do it this way, it's not that everybody has to change, is to use Chef. What Chef does is, you write code such that it configures your server the way you want; you write that configuration as Chef code, and it configures the server accordingly. If you want to recreate your server, you just run your Chef recipes and cookbooks, your Chef code. That is option one. The other way doesn't use Chef at all: whatever your server configuration is, the container itself becomes the server. Your application runs inside it, and that container's configuration is done in a Dockerfile, like the one I showed as an example, a small script in which you set up your "server". In this case the server is not the underlying machine; the container is what you call the server. The underlying host machine, and how you start that, is a separate question; the only requirement on it is that Docker is running, and apart from that it can run any container.

So in this story, whenever there is a new build, a Docker image is created. Strictly speaking, a Docker image holds the definition of what is there; only when you run that image is it a container. Now, when you build your app, to make it concrete: with Maven you might do "mvn clean package" to create a WAR file; for Node you would run "npm install" and "npm run"; if it's Scala, you run "sbt build", or whatever. Basically it creates an output, a packaged application, the package that is actually going to run. Option one: you build this application and Chef takes the package and runs it on your server. Option two: you completely rewire your thinking; you take that package and put it inside a Docker image itself, and when you have to run it, you run the Docker container. When a new build comes in, you get a new image, and you decide: okay, it's a new image, let me run this; you stop the existing running container and start a new one. If you have to roll back, you just start the old one again. That is the simplest case, and this second option can be done without using Chef at all.

So you see, the pipeline creates a Docker image, and Jenkins, or any other tool, pushes this image to some repository, because you have to save all these images; it's like saving different builds. You are saving different Docker images corresponding to different builds. The default open-source, free option for public repositories is Docker Hub; again there are different ways, there is Docker Trusted Registry, there is Artifactory, and a lot more. Once you have those images, those builds, ready, how would you deploy one? One popular way people use Docker is with Ansible; has anybody used Ansible or heard of it? Ansible is often used just for that deployment step. Otherwise you don't need any big tool; you have a basic script, and what that script does is: get me the latest image, or the particular build that I want; stop the existing container; run the one I just pulled. Just three commands to run, and it starts your newer version. I'll show you this in the demo, but let me rush through a few more points first and come back to it; the demo will probably clarify things, and then you may have more questions.

The problem with the pipeline I showed is that in actuality it is a very simple case; reality is not that simple. For understanding, it is a very simple demo that I will give here, but in actuality your application typically has more parts: for example, an application server, a web server and a DB server. Again a simple, but more practical, case. Now, as a developer, when you run your application, you tell it: connect to my dev DB server. When you run the same thing in production, you tell the application: now connect to my prod DB server. So there is some information you have to give to the application at runtime, and yet you are going to run the same container. How do you provide that information? A lot of these small issues, actually big issues, start coming up. There are different credentials, packages, software, databases, files, environment-specific configuration, custom steps; how would you embed all of that inside the image? Because the image is supposed to have everything running the way you want, but some things you want to keep configurable. How do you do that? That is still a challenge, in some ways, right now.

Then there is the concept of different environments: there is dev, there is pre-prod, there is prod. Do you promote the same image? For example, a developer wrote some code and pushed it, and it works fantastically; the same image is given to QA and it works very well; the same image goes to prod and it works well there too. But should you use the same image or different images? The images may be the same, but the configurations are different, and that starts adding a lot of complexity. Typically the same image should move through the pipeline. And again, if a username and password has to be provided, how do you do that in Docker images? That's a problem right now. There is something called Docker Compose, with which you can specify an environment file. But where do you put your credentials? In environment variables? That is the recommended way, but it's not a nice one: somebody just logs in, runs "env", and sees all the passwords. Chef has a solution for this part: something called data bags, with which Chef ensures your passwords and secrets are kept more secure. Docker is still working on that part.

One more thing: as I mentioned, Docker is progressing, growing and evolving; initially there was just Docker, and now the entire ecosystem around it is growing. But what is still not really solved is how you provision the underlying hosts. Consider you have thousands of AWS instances on which thousands of containers are running; who will start those instances, and how? There is something called Docker Machine, but it is meant to be used only by developers; you can play with Docker Machine, but it is not used in production for provisioning. That's where Chef is still recommended: there are knife plugins which make it easy to create a server for you. Now this will get a little deeper, so I will just skim through these slides, unless anybody wants to go into detail with this cookbook; maybe after the demo I will come back. So this is the Docker cookbook, and this is the Chef code I was talking about, a glimpse of recipes in a cookbook. It's a DSL, a Ruby DSL, written in this form. If you see "docker_registry" there, that's where you say: this is the registry address, and push my image to the repository called hello-scala. Then there are the Docker credentials, the email and password for connecting to my repository, and these come from data bag values, where the passwords are encrypted; that is the credential-management solution Chef has right now. Again, this is Chef code where it is building the Docker image. I had mentioned the other ways of building a Docker image: the CLI is one; otherwise there are plugins available in Jenkins, or Maven plugins, which create the Docker image for you. Chef can also create a Docker image for you, and Ansible probably has something similar too.
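To ground this, here is roughly what such a recipe looks like using the community docker cookbook's resources. This is a sketch under assumptions: the resource names (`docker_registry`, `docker_image`, `docker_container`) follow the docker cookbook being discussed, but the registry URL, repository name and data bag items are hypothetical, and it only runs under chef-client with that cookbook available.

```ruby
# Hypothetical Chef recipe sketch (requires the community 'docker' cookbook).
# Credentials come from a data bag, Chef's answer to secret storage.
creds = data_bag_item('docker', 'registry_creds')

docker_registry 'https://index.docker.io/v1/' do
  username creds['username']
  password creds['password']
  email    creds['email']
end

docker_image 'hello-scala' do
  source '/opt/src/hello-scala'   # directory containing the Dockerfile
  tag    'latest'
  action :build_if_missing        # build only when the image is absent
end

docker_container 'hello-scala' do
  repo   'myorg/hello-scala'
  port   '8080:8080'              # host:container port mapping
  action :run
end
```

With something like this in a cookbook, a chef-client run both builds the image and runs the container, which is the "baking Docker with Chef" idea of the talk.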
Finally, this is the code to create the Docker container — a Docker image, when it runs, is what you call a container. And here, this is where you provide things like port mapping. Any questions on this? Yes — the port mapping there maps the container port to the host port, because when the Docker container is running, it is utilizing the host's resources, including the network.

Now, there is the Dockerfile that is used to create a Docker image — as an example, if you remember, it had steps like "install the JVM," or "get the latest build from somewhere," or "copy configurations," as bash commands. Chef can generate that for you as well, because it can generate files from templates. If you have to generate a Dockerfile, there is a template — this is actually a template — and using this template you can create a Dockerfile on the fly. That is also something Chef provides. So on top of the basic Docker flow, Chef provides a lot; all of these are called resources, or providers, in Chef.

That simple pipeline we saw, in a slightly more mature form, would look like this: you build the application, you build a Docker image — but that Docker image is built using the cookbook — and then it is pushed to Docker Hub using the Chef cookbook itself. Then you can run different cookbooks on the other side which do the corresponding thing: fetch the latest or needed build version from Docker Hub and run it. All of this can be done using Chef; using Chef is one way to do this.

Now, on to the demo. In some of the examples there was a hello-scala; along similar lines, there is a hello-node here. Here is the repository. All it does is print "hello node" on the screen — this is what you see, this is my application running. Jenkins will be building it, so if you see,
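The template-driven Dockerfile generation described above can be illustrated with plain Ruby ERB — this is not the actual cookbook template; the variable names and Dockerfile contents are my own illustration of the mechanism a Chef `template` resource uses:

```ruby
require 'erb'

# A template like the one a Chef template resource would render.
# The <%= %> placeholders are filled from variables at run time,
# producing a Dockerfile "on the fly".
DOCKERFILE_TEMPLATE = <<~ERB
  FROM <%= base_image %>
  EXPOSE <%= port %>
  CMD ["<%= command %>"]
ERB

base_image = 'centos:7'   # illustrative values, not from the demo
port       = 8080
command    = 'npm start'

dockerfile = ERB.new(DOCKERFILE_TEMPLATE).result(binding)
puts dockerfile
```

In Chef, the same idea is expressed declaratively: the template file lives in the cookbook, and the `variables` property supplies the values, so one template can emit different Dockerfiles per node or per environment.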
there are a lot of things here. This hello-node — I have changed it to print a welcome message. I just pushed some code, and on the UI you can now see the welcome message. In Jenkins there is a build job; this is what you will see — there is a simple build, and now it is building.

I should be showing you the Dockerfile here. This is the Dockerfile: it is CentOS-based, I have installed Node on it, then it copies the source code and runs `npm install`. The port mapping is something I will show here: this is my particular port, and my application is going to run on 8080. If you look at this, it is exactly the same thing: starting from the CentOS image, the image is built in steps. Step one, take the CentOS image; step two, and so on — if you count, the Dockerfile I have has exactly seven steps, and the build runs in exactly those seven steps.

So it starts building the image. If there is a problem in one of your Dockerfile steps — say I have written some wrong step in here — it will fail at that particular step. Next time, when you fix it and come back to rebuild, the image layers for the earlier steps are already built; Docker internally has those images ready and saved on your machine, so it does not run those steps again — it picks them up and starts from the step that failed. Then I used a plugin in Jenkins to push the image to the Docker registry. On Docker Hub you get one free private repository; more than that is paid.

How many of you know what continuous integration is? All of you are developers, so here is something you might like to explore: people are thinking about how to take continuous integration one step further. Today I have one Jenkins which does continuous integration for all of you as one
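The Dockerfile walked through above would look something like this — a rough sketch, since the speaker's actual file isn't shown; the base tag, install commands, paths, and entry point are my assumptions:

```dockerfile
# Sketch of the demo's CentOS-based hello-node image
FROM centos:7

# Install Node.js ("I have installed node on it")
RUN yum install -y epel-release && \
    yum install -y nodejs npm

# Copy the source code and install dependencies
COPY . /app
WORKDIR /app
RUN npm install

# The app listens on 8080 inside the container
EXPOSE 8080
CMD ["node", "app.js"]
```

Each instruction becomes one build step and one cached layer, which is exactly why a failed build resumes from the broken step instead of redoing the earlier ones.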
team — one repo, one Jenkins. Instead of that, you provide Jenkins inside a Docker container to every developer, so each of you can keep testing everything the way you want. Whatever this setup is doing, everything is provided to the developer as a Docker container. To run unit tests, a developer just runs two or three steps, and while you have a cup of coffee it does continuous integration and some automated testing — all of that is ready for you, and you will be able to debug as well. You don't have to push to your GitHub repository for any of this.

So, what I mentioned — this just got pushed here. Continuous integration, as it is typically done today: consider that we are all part of this hello-node project on GitHub, all developing against one particular repository. We all make changes, and whenever there is a change in that repository, something like Jenkins — a CI server — fires the build, packages your changes, may or may not deploy (let's leave deployment aside and talk about continuous integration), and then runs some unit tests, a few functional tests, whatever — some automated tests.

Now, as a developer, can I get that integration locally? Why should I push everything to GitHub for the CI job to fire and run some testing before I see the result? Why can't I do it on my local machine? What we are trying to work on is getting this entire CI setup inside a container, giving it to every developer, and telling them: run these two or three steps after you are done; if you want unit testing or certain other tests, just run it. All of this is still not being used anywhere — it is all in the conceptualization phase — but it is something you can think about and read about. It would take just your local code, so you don't have to push anything anywhere to do any kind of testing. That is one direction.

Now, since our image is ready, we can
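The "CI in a container per developer" idea could be sketched like this — purely hypothetical, since the speaker says it is still at the conceptualization stage; the image name, ports, and volume names are illustrative:

```shell
# Give each developer a throwaway Jenkins that tests the local
# working copy directly, so nothing needs to be pushed to GitHub first.
docker run -d \
  --name dev-ci \
  -p 8081:8080 \
  -v "$PWD":/workspace \
  -v dev_jenkins_home:/var/jenkins_home \
  jenkins
```

The local checkout is bind-mounted into the container, so the containerized CI job builds and tests exactly what is on the developer's disk, then the whole thing can be thrown away.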
directly deploy. This is a one-line command, just for demo purposes. It will SSH into your node and run something called chef-client. If you see, I am going to run hello-nodejs — I will show you what it is; it is a very simple recipe — and I give it the password and whatever else is required. Basically it just SSHes into the machine — it is a test node that it is going to access, matched by `name:` — and `knife ssh` is the Chef plugin being used. The thing that was missing before is that I hadn't given it a configuration file, a .rb file, so the deployment actually failed; I will tell you what the problem was. You could just log into that machine, check what happened, fix it, and bring it up — but that is a workaround; ideally this should have passed.

Let's go through what happened here. If you try to debug what we have done: we told it to SSH to the node and run chef-client, and we tell chef-client, "please update my node with this" — the hello-nodejs recipe — and it starts running. You can see it trying to run the default recipe from the hello-nodejs cookbook. Here there was a problem; let's try to debug it. It looks like the shell command failed. My recipe was actually a set of bash commands. All it did was try to stop the existing running Docker container — and it couldn't stop it. That is why it failed, and that is why the port was still in use. This is the test node — there is a little bit of a network problem — and this is the container that is running. Yes, it was not able to stop this one. If you look at what is happening inside it, it says the application is running, because that is the only thing I print from my application, saying
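The demo one-liner is roughly of this shape — a sketch, with the node query, user, and password as placeholders (the speaker's actual values aren't shown):

```shell
# SSH to the matching node and ask chef-client to converge
# the hello_nodejs cookbook's default recipe.
knife ssh 'name:testnode' \
  'sudo chef-client -o "recipe[hello_nodejs]"' \
  --ssh-user chefuser --ssh-password 'secret'
```

`knife ssh` searches the Chef server for nodes matching the query and runs the given command on each; `-o` overrides the node's normal run-list for just this run, which is handy for a targeted demo deploy like this one.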
where it is running. I will stop this — `docker stop <container-id>` — and it stops. You could then go and do a `docker run` by hand, but instead of doing that, I am going to run it through Chef again: the bash recipe in hello-nodejs. If you look at the images, you will see a lot of Docker images. We have to run Docker in detached (daemon) mode, and I am going to do the port mapping. When it was running before, there was a port mapping done: inside the container we had exposed port 8080, but while running the container we told it to listen on a different host port.

I will show you the cookbook. This is basically what we did over here: on this particular machine we ran some Docker commands, since this is a simple example. We did a `docker ps`, which shows you what is running; a `docker stop <container-name>`, which stops it; and we also did a `docker run`. All of this was packaged together into a Chef cookbook. What we did is: `docker stop`, followed by `docker tag` — one command, which is like renaming an image; it takes the latest and tags it as the current one that is running — and then you run the `docker run` command. All of this is basically the simplest Chef recipe one can write: you say it is a `bash` resource, and you give it the commands it should run.

Typically, what happens is that there is a chef-client running on the target node, and you tell the chef-client what job needs to be done: now you have to deploy, now you have to update yourself. You just give instructions to that node through Chef — that is how you manage all of those Docker commands.

This is one last point I would like to touch on. What we covered just now was a very basic Chef cookbook doing basic Docker work. In reality, there are a lot more things that are a problem.
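The "simplest recipe one can write" described above would look something like this — a sketch, assuming Chef's built-in `bash` resource; the container name, image names, and port mapping are illustrative, not the speaker's actual cookbook:

```ruby
# Restart the app container with plain docker commands,
# packaged as a single Chef bash resource.
bash 'redeploy_hello_nodejs' do
  code <<-EOH
    docker stop hello-node || true    # stop the running container, if any
    docker rm   hello-node || true    # free the name for the new container
    docker tag myorg/hello-node:latest myorg/hello-node:current
    docker run -d --name hello-node -p 49160:8080 myorg/hello-node:current
  EOH
end
```

This is deliberately crude — shelling out to the Docker CLI rather than using the docker cookbook's resources — but it shows how any Docker workflow can be pushed to a node through chef-client with a few lines.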
application. Debugging is still a problem: if the container has stopped, how do you get inside it and see what happened? How do you figure out what it did? Then there is Docker networking, there is the question of where your code gets copied, and there are a lot more issues. And when you have many containers that are all identical — all the same application — how do you manage all of it the way you want? That can get a lot more complicated. There are still some problems on the Chef side and some on the Docker side; both tools are still maturing, and there are a lot of options to evaluate.

One more use case that might interest you: suppose you want to check whether your particular build works on three different OS versions. You can do that without any additional machines. With just a few lines of change — two or three lines maybe — you specify which three OS versions you want to test your build on; it will create three different images, run three different containers, and your build will be running on all three.
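The multi-OS test idea above could be sketched with Docker Compose — a hypothetical layout, since the speaker doesn't show the actual config; the service names, base images, and test script are placeholders:

```yaml
# Run the same build/test script against three OS versions at once.
version: "2"
services:
  test-centos6:
    image: centos:6
    volumes: ["./:/app"]
    command: /app/run_tests.sh
  test-centos7:
    image: centos:7
    volumes: ["./:/app"]
    command: /app/run_tests.sh
  test-ubuntu:
    image: ubuntu:14.04
    volumes: ["./:/app"]
    command: /app/run_tests.sh
```

Changing the set of OS targets really is a two-or-three-line edit — swap the `image:` tags — which is the point being made: containers make cross-OS build verification nearly free compared to provisioning three VMs.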