Hello everyone, my name is Kamalika. Today I will be giving you a brief introduction to Docker and Chef. Are you familiar with Docker or Chef? Have you worked with them? Are you familiar with Linux containers, or at least heard about these things? So, to start with: what are the challenges we face on a day-to-day basis while setting up our development environments? Every developer faces them. There are too many manual steps, and it takes too much time, maybe an hour, just to set up your laptop and get your code working on it. Once the machine is set up and you actually start developing and testing on it, at times it becomes slow, the machine has to be restarted, and you have to deal with various performance problems. You don't get fast feedback when you test your application on it. And this kind of testing on your local machine is definitely not the same as your production environment. A production environment is composed of multiple nodes; it has more than one machine, and these machines are integrated with each other so that your application works across them. But on your local machine, you are doing everything on a single machine. So you are not able to test, on your machine, the exact code that is deployed in your production environment. What happens then? Your code works on your machine, it goes to the production environment, and it breaks, and you have no idea what is happening: everything works on your machine but not in the actual environment. So how do we handle these kinds of issues? First, it helps if you can get a replica of the production environment, a lightweight replica, running on your own machine.
Second, it helps if you are not bothered about a machine getting restarted, or crashing, or an entire setup going bad; you want the freedom to create your environment as and when it is required. Third, automated configuration management: once you have your virtual environment, you want automated configuration of the application, the environment, and the setup required on that machine. After your virtual environment is set up, you configure it and deploy your application right away. This also gives you a faster deployment cycle: the faster you can test your code, the faster you get feedback on your local machine, the faster you can commit into your Subversion or Git repository, and the faster you can deploy to production. And obviously you get better performance out of your machine. So in order to achieve these things, two things are definitely needed: environments on demand, and configuration management. So what is Docker? Docker is a lightweight tool built to do this kind of work using Linux containers. It is a tool written to create, destroy, and set up your environments on demand. And when I talk about these environments, they are actually Linux containers, and they are very lightweight. What is the difference between containers and virtual machines? You might ask: why Linux containers and why not virtual machines? There is a difference. In a virtual machine, you have a full operating system set up. Suppose you have three VirtualBox VMs on a machine and you give each of them one GB of RAM. Then three GB of RAM is dedicated to those VMs and they will be using it; you will not be able to utilize those three GB for anything else on the host.
With a typical hardware machine with a few GB of RAM, you will not be able to run more than three or four VMs before the machine gives performance issues. Linux containers, in contrast, are very lightweight: they share the operating system and the binaries, but each has its own isolation. So you can have tens to hundreds of Linux containers on the same machine. Within your Linux environment you get isolated zones; these are not actual virtual machines, they are just sharing the environment. It is very fast to create them, destroy them, and run various operations on them. Docker also provides a simple command line, which makes it easy to script. You need not do manual operations every time: once you know how to set up your Linux environment, you script it down, and you have one script that can be shared across your team and also used for setting up environments on your servers. The last interesting thing is the Docker registry, which is like a Docker repository. It is similar to an image repository where you can save your own Docker images. There is a public registry, but you can also set up a local, private registry, just like you do for apt repositories: there are public apt repositories, but you can also set up a private one. What is the advantage of a private Docker registry? You might not want to use an image that has been uploaded by someone else, because you don't know what has gone into it, and you may not be comfortable using arbitrary public images. And sometimes you want a certain configuration of software on your images, perhaps with licenses you don't want to share with anybody else. So you set up a local Docker registry. And suppose you have four teams with 10 members each, and each team has its own specification.
Eventually team A would like to use the image which has been created by team B; the image one team built might be useful for another. So you want to share it among the teams, right? You have a centralized repository, and when you add a certain configuration to those images, you upload them to the registry, and the other team can just pull them and do their testing. One-time setup, and then everyone can utilize those templates, the templates you are using for your environments. So at this point, a couple of straightforward Docker tasks that you can do. A few key notes first. Typically Docker runs on all the Linux distros. It was built on Ubuntu, but it also works on CentOS and Red Hat; I have personally tested it on CentOS 6.4, Red Hat Enterprise Linux 6.4, and Ubuntu. To use it on Mac or Windows, you can't install it directly: Docker is built on top of LXC, so you need the LXC libraries and services installed on your machine, and there is no LXC for Mac or Windows. So what you do, and what I have done, is have a virtual machine running one of these Linux distros, Ubuntu, CentOS, Fedora, install Docker onto it, and then you can have many Linux containers and images inside it. So this is my Vagrant virtual machine where I have installed Docker; Vagrant lets you create such machines. Now, `docker images`: this command lists whatever image repositories you have on the local machine. It functions similar to Git. You can do a git clone, git pull, and git push, right? Similarly with Docker, each of these things is a repository: once you create an image, you tag it and then you do a docker push to your repository. So the command line is functionally similar to what you have in Git. Here I want to see what images I have in the CentOS repository.
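The Git-like workflow just mentioned can be sketched as follows. This is only an illustration: the registry host `registry.example.com:5000` and the image names are placeholders, not from the demo, and the commands are printed as a plan rather than executed, since they need a running Docker daemon and a reachable registry.

```shell
# Hypothetical registry and image names -- adjust for your own setup.
REGISTRY=registry.example.com:5000
IMAGE=centos
TARGET=$REGISTRY/$IMAGE:base

# The workflow mirrors git: pull is like clone, tag names the image for
# the registry, and push publishes it there.
cat <<EOF
docker pull $IMAGE
docker tag $IMAGE $TARGET
docker push $TARGET
EOF
```

Once an image is pushed this way, any teammate can `docker pull` the tagged name from the shared registry instead of rebuilding it.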
So you might have a question: do you need internet access on your machine to use Docker? It's one time: once you have pulled the repository and have the image on your machine, you don't need internet access anymore, you can continue from there. And if you build a local repository yourself, you don't need internet access at all; it's just the one time for the initial pull. If you look here, there are various images, because this is a public repository that I have pulled; you can see multiple images which have been contributed by various people. How do you create a Docker instance? When you create an instance, you have to say which image you want to create it from; you specify: I want to create an instance from this image. [Audience: We have not done a docker search here, right? Where is it searching?] It is searching the local pull that I have taken; that's why we are able to search it without an internet connection. Once you pull it, you can do all your operations locally, just like once you git clone a repository you can do whatever you want until you need to push. I haven't shown the pull itself because I don't have internet access here. Now I want to start a container and enter its shell. First you need to know which containers are already running on your machine: you run `docker ps`, which shows you the container IDs in a running state. These are the three containers we have. Now I want to enter the shell of another container, so I'll just run one from centos; that is the repository, or image, which you want to use to create this container.
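The two commands used in this part of the demo, written out as a dry-run sketch: the helper function is mine, and `centos` is simply the image used on stage.

```shell
# Compose the command that starts a container from an image and attaches
# an interactive shell, as in the demo. The helper itself is illustrative.
enter_shell_cmd() {
  echo "docker run -i -t $1 /bin/bash"
}

# `docker ps` lists the containers currently in a running state.
echo "docker ps"
enter_shell_cmd centos
```

Running the composed command on a Docker host drops you into a bash prompt inside a fresh container; exiting the shell stops it.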
So you see we had three containers before, and now one more has been added. It shows it is using centos, and the command I have run is /bin/bash. Each Docker container that you run actually runs as a process on your local host machine, so you can see them as processes as they spin up. So 100 Docker instances does not mean, as I said earlier, that you have 100 virtual machines; these are isolated zones within your Linux environment. Key things to note about Docker: it is still under development, and it is not recommended for production. There are some security issues as well: if someone breaks into your guest system, they might get access to your host machine. Those things are still being worked on, and that's why it is not recommended for your actual production environment. But you can use it in your QA and staging environments, as long as those environments are not publicly exposed to that problem. As for Docker networking: it uses Linux bridging, and it creates a bridge interface called docker0. Each container that you create gets pinned to that bridge interface, and since Docker has its own local IP pool, it assigns an IP from that pool to the container, so each container has its own IP. It is possible to change this: you can create another bridge interface and assign it a different IP pool. Any questions? [Audience: Could you use a different configuration management tool?] Yes, you can; you can choose any configuration management tool. The reason I chose Chef is that I have worked mostly with Chef and I am more confident about it. Chef is a configuration management tool that can be used to configure your environments however you want. What it gives you is that once you have your infrastructure as code, you can use it for all kinds of environments; you don't need a different setup for different environments.
You have one single set of code and use it across all your environments. You don't need to manually install your software: if you need some software in an environment, you don't have to log into each and every virtual machine, install it, and then configure it by hand. With Chef you have cookbooks and recipes, which are scripted to install various software and configure it as per your requirement. The same code can be used for your local machine, for UAT, staging, production, any environment. Chef also has plugins for various public and private cloud platforms: there are Chef plugins for OpenStack, for EC2, and likewise for Docker. So what does it give us? Some of the concepts in Chef: recipes, cookbooks, roles, environments, data bags. How do you structure your Chef code? You can have environments in Chef, say CI and staging, and you will have multiple roles within each environment: a web server role, a front-end app server role, as per your application. Then, based on those, you have the recipes. Suppose for the web server role you are using nginx: you will have nginx recipes. You want nginx to run on port 80, or 8080, and so on; you want some specialized configuration generated for nginx; all of that you can put in the nginx recipes and attach them through the role. Then you apply those recipes onto your containers. So you have similar roles across your environments, and those are applied to your containers or virtual machines, whichever you are using. That way you have consistency across your environments, and the initial problem goes away: your code will work on your machine as well as in your production environment.
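As a concrete illustration of the role/recipe structure described above, here is a minimal web server role in Chef's JSON role format. The role name, the nginx recipe, and the port attribute are illustrative choices of mine, not taken from the demo repository.

```shell
# Write a minimal Chef role file: a "webserver" role that pulls in an
# nginx recipe and sets the listen port as a role-level attribute.
mkdir -p roles
cat > roles/webserver.json <<'EOF'
{
  "name": "webserver",
  "chef_type": "role",
  "json_class": "Chef::Role",
  "run_list": ["recipe[nginx]"],
  "default_attributes": {
    "nginx": { "port": 8080 }
  }
}
EOF
echo "wrote roles/webserver.json"
```

Every environment then reuses this same role; only environment-level attributes differ, which is what gives the consistency across machines.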
At least you will have that much confidence that you can push your code, and you don't need to think, oh, I don't have an actual environment on my machine, how do I debug the issues? You can even do integration testing on your development machine, and then push your code and run the integration tests for your other environments. So there are benefits like that. If you combine Docker and Chef, you get environments on demand and you get very fast feedback. You can have two repositories: your application code on one side and your infrastructure code on the other, plus your Docker images. So: create your instance from an image, apply the infrastructure code, deploy your application. And this is very fast; within minutes, say five minutes, you will get your Chef run feedback, and you can see if there is any error or bug in your code. [Audience: Can you snapshot that? You don't want to spend five minutes every time.] Yes, you can snapshot it. And it might be five minutes, but only at first: you don't need to destroy your instance once you deploy your application, you can just rerun your chef-client, it's just running the script again. [Audience: If I finish my work and restart my machine in between, will the environment still be there?] It will be there. It is just a container, a process that got created; as long as you have not destroyed that container or removed it from your machine, it will remain as it is. And chef-client will not take the same time for subsequent runs. The first run might take longer, because it is initially setting up its cache and installing your software. But once your software is installed, if you run the same Chef code it will not install it again; it will check whether the package is already available.
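The convergence behaviour described here, checking the current state and acting only on drift, can be sketched in a few lines of shell. The helper is my own illustration, not Chef's actual implementation.

```shell
# Idempotent "resource": only act when the desired state is not yet met,
# the way chef-client skips a package that is already installed.
ensure_installed() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 already installed, nothing to do"
  else
    echo "installing $1"   # a real recipe would invoke the package manager here
  fi
}

ensure_installed sh   # present on any POSIX system, so this run is a no-op
```

This is why a five-minute first run shrinks to seconds on reruns: most resources are already in the desired state and get skipped.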
It will check: say it queries for nginx, and it finds nginx already installed at the same version, then it will not remove it, it will just proceed to the next step. So the run is mostly just checking. Our Chef code that takes, say, five minutes on the first run might take two minutes, or even ten seconds, on the next run, depending on how much has changed; if it is just a small change, it will not take much time, it will be faster. [Audience: So you need chef-client to be installed in the container?] Yes. What you can do is create a minimal container, install chef-client on it, push it to your registry as a base image, and let people use that. The first container that you want to use will need to have chef-client on it; once you have done that, you don't need to do it again. Any more questions? [Audience: How do you test your Chef code?] That's a nice question. I have not included that in this talk; if I get selected again, I'll be covering that as well, but yes, it is possible to write tests for your Chef code too. You can write RSpec-style tests against your Chef recipes, with RSpec or ChefSpec, depending on which one you choose. Any more questions? [Audience: What about using it in the production environment?] The thing is, the software is still under development. It is pretty stable; we have used it for our acceptance-test and staging environments, and it has worked pretty fine. But since it is under development, it is not recommended for production, because production is a very critical environment for you; you can't afford any kind of issue happening there. So it depends on how much importance you give to that. And the second thing is security.
If you are running a public website and you have not taken care of all the security measures, and people break into your guest, there is a possibility they can get access to the host, which might compromise your entire environment. That is why it is not recommended for production environments. But what you can do is have an exact replica of the production environment, do the exact same level of testing there, and then deploy to the production machines, right? As long as your code is safe and your configuration is safe, it is just a matter of spinning up VMs or containers. VMs become a commodity item for you: you can throw VMs away, you can create VMs, and you no longer need to think, this is my production VM, I need to keep it safe. As long as you have everything automated, you have your environment-on-demand code and your configuration management code, and they are consistent across everything. So yes, it is not recommended for production yet. Okay, any further questions? [Audience: What kind of machine do you need to make all this work?] I'll show you the specification of the machine I have. Docker is an RPM; you can install that RPM on your machine and you are good. If you have any Linux distro on your local machine, you can install it directly; if you don't, just use a virtual machine. Mine has 2 GB of RAM. [Audience: You don't really need a full Linux distro for this, right? It's based on Linux.] Yes, I understand, but there is also something called boot2docker. It is a tiny Linux distro, I think around 50 MB or something; you can use boot2docker and deploy Docker on it. [Audience: If I have access to the host machine, can I see everything that's happening inside the Docker container?] Yes, you can log into the Docker container. [Audience: But what if I am deploying this on-site, where I give them my Docker image?]
[Audience, continuing:] But I don't want them to go further inside and mess it up. Is there something for that? [Speaker: You don't want them to go from the host machine into the guest machine?] I don't even want to give them access from the host machine. No, it's their machine, right? I'm installing my software on-site: I have a Docker image which I deploy to the on-site server. I don't want them to get inside the Docker image and change it. [Speaker: As long as they don't have commit rights to your Docker repository, they won't be able to do much; any changes will be only on their machine, they won't change anything upstream.] But how about preventing read access, you know, going inside? Is there anything that Docker provides for this? [Speaker: I guess you could use iptables.] Yes, there is something with iptables, and that's very good, but it's not enough, because these are all processes running on the host machine itself, right? There might be a way to get into the Docker images. [Speaker: So don't give root-level access to the host machine.] Okay, but beyond that: how about not giving them any access at all, not even access to the host? Even then, they would still be able to see those Docker instances. The root-level permissions you give to parts of the system, that's one set of challenges that I understand. But how about even preventing a basic SSH into the container to see what's happening inside it? [Another audience member: That's possible. Actually, it depends on how you want to deploy it, and I'm talking not really about Docker but about LXC. With LXC, what we used to do in one of our products, instead of using the default setup where you bridge to the guest, is use the container as an individual VM.]
[Same audience member:] You need to switch to something called macvlan. In the network type, instead of veth, which is the default configuration, you use macvlan: you create a macvlan interface so that the container has a standalone interface, and it shows up as its own network interface. You know what macvlan does? It creates a virtual interface, not an LXC-internal one; it creates a clone of a physical network interface. So the container stands alone, as good as a VM. You would need to switch to something of that sort; set up that way, nobody can get in from the host side. [Speaker: But what is the problem you're trying to solve here? If they can't log into that machine, how will they access it?] [First audience member:] No, it's their machine, their host machine. Basically, instead of creating separate distributions, we are using Docker to push our software onto their machine. But if they have access to it, they can read whatever is there, or see the other processes that are running. [Speaker: So even if you prevent them from accessing that machine, what would they do with the Docker image on their machine? If you don't want them to see anything, what do you want them to do with it? Do they just access the application?] Yes, we just want it to run there; they don't have to see what is running inside it. For example, on EC2, the provider doesn't see what I'm running just because they have access to the hardware. [Speaker: But the thing is, you don't need to SSH into Docker to see what's going on in a Docker container anyway.]
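For reference, the macvlan setup suggested in this exchange would look roughly like the following. The commands are only printed, not executed (they need root and a real parent NIC), and the interface name, parent device, and address are all illustrative.

```shell
PARENT=eth0   # host NIC to clone; illustrative
# Commands that would clone a standalone macvlan interface off the host
# NIC, giving the container its own presence on the network.
CMDS="ip link add macvlan0 link $PARENT type macvlan mode bridge
ip addr add 192.168.1.50/24 dev macvlan0
ip link set macvlan0 up"
echo "$CMDS"
```

With the container attached to such an interface, it appears on the LAN as its own host rather than sitting behind the docker0 bridge.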
If you look at these instances, when I run `docker ps` it actually shows what processes or commands are being run on each container. So you don't have that level of opacity here: each Docker command runs as a process on the host machine, so they can see what is going on anyway. But if the problem is that they can log in, do their own work, corrupt that image, and you are worried about it, then I would say: first, if they don't have access to the actual Docker repository, they can't really corrupt your actual base images. And second, if you have to fix their local machine problem, why not just throw away that image and install it again? It is much easier and faster to recreate the environment when you have automation than to sit back and debug what someone has done. [Audience: The containers that I create can only be of the distro which is underlying, in this case the VirtualBox VM? Say you have brought up CentOS in the virtual machine; do the containers I deploy on top of it have to be the same?] They share that kind of operating system, the kernel of the distro you are using; but if you want to deploy multiple different distros, yes, with LXC containers that is possible. You can have Ubuntu and CentOS both on the same Docker host; it depends on what you have in the Docker image. [Audience: What about image sizes?] With a minimal CentOS install, truly minimal, you will get around 400 MB, but it keeps increasing as you install things on it. My images are 1 GB because they have chef-client installed and Ruby installed and all those things; that's why they have grown from 400 MB to 1 GB. The size will increase. But are you asking about the usual base Docker image as such?
[Audience: The base one, the very minimal one, is around 400 MB?] No, 400 MB is too big; you can create one that is 8 MB. [Speaker: But you don't get everything in it, right?] Yeah, it's a base image; from there, it's up to you to add or strip whatever you want. [Speaker: Are you talking about an Amazon base image or a Docker image?] A simple Docker base image that you can build whatever you want on; I think 8 MB is enough. [Speaker: Maybe, but the image that I have used is minimal enough; it will actually serve most of the purposes that you would want.] [Audience: So every time I spin up one of these things, it's going to take up 400 MB of disk space?] No, not when instances come up; that space won't be taken up again. This is just the base image, which is saved once on your local machine. It is not creating another virtual machine each time. But if you keep pulling repositories, that is going to occupy space on the machine: if you keep pulling those images and downloading those templates, you are basically doing a download and keeping it on your machine, and that will occupy disk space. The solution is to have a centralized registry and keep things over there, instead of pulling everything onto your local machine. [Audience: What exactly is inside the image?] The Docker image will have a Linux container image; it will not have its own operating system, it will have the things required to spin up your LXCs on it, the chroot environments and so on, set up properly so that Docker can spin up those Linux containers. [Audience: The kernel is the underlying one, and you don't have any different versions, right? So in that case, does it...] So, LXC is a user-space tool, that's it.
The whole concept is based on namespaces. Namespaces are a kernel concept present in every Linux distro, with kernel 2.6 or whatever it has. [Audience: So my question is, if I have a particular distro installed on my laptop, can I spin up multiple containers which are of different distros?] Yes. The thing with LXC is that it is not installing an operating system separately; it is just using various namespaces and chroot environments. Docker is based on LXC. LXC uses a template, and in that template it basically says: I want a CentOS-based release, that kind of thing, which is not a full operating system; the rest is all chroot environments, backed by the host machine's tools. Now, I should be getting into the shell; I am inside the pseudo-terminal, although it is not showing a prompt. Inside this, you see the LXC container has got an IP as well; this IP is actually pulled from the bridge. If you go and look at the bridge configuration, it has an IP pool defined. So you can play around with it: you can remove the docker0 bridge, create a new bridge, and assign your own IP pool. [Audience: Can you connect from a remote machine into this container?] If you enable access, you can. What is actually happening here: if you do `brctl show`, this is the actual bridge. You have eth0 on your host machine, and then Docker creates a bridge, docker0, and all the LXC containers that come up get their interfaces attached to it. What you can do is, instead of Docker using the default interface, create a new bridge interface and tell Docker: this is my interface, use this one, so that you do not depend on the defaults. Yes, and if you are calling nginx on localhost from another container, then you have a problem.
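Replacing docker0 with your own bridge, as just described, would go roughly like this. The commands are printed rather than executed (they need root), the bridge name and subnet are made up, and the `-b` daemon flag is from the Docker of this era; newer daemons spell it `--bridge` or set it in the daemon config.

```shell
BRIDGE=bridge0
SUBNET=192.168.5.1/24   # illustrative address pool for containers
# Plan: create the bridge, give it the address containers will draw from,
# bring it up, and start the Docker daemon pointed at it instead of docker0.
PLAN="brctl addbr $BRIDGE
ip addr add $SUBNET dev $BRIDGE
ip link set $BRIDGE up
docker -d -b $BRIDGE"
echo "$PLAN"
```

After the daemon is restarted with the custom bridge, `brctl show` would list new containers' interfaces under `bridge0` rather than `docker0`.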
Any more questions? I don't have a full setup running right now; I have a Chef code setup, but in order to run it I would need to do some extra wiring with my code. This is a very minimal Chef repository that I had: you have your environments, you have your roles, and then you have your cookbooks. I haven't run it end to end here, because it needs some setup in between.