Hi everybody, I'm Shankar. Let's start with a quick demo before we get into the session. Let me get my laptop set up first. All right. Let's do something interesting. Let's see if Icinga is up. All right. And let's get the username and password for it. Okay, yes, here it is. So it took me what, a minute max, to get Icinga going for you on Rancher. So, yes, let's imagine a world that runs on Docker, where everything is a container. That's an ideal world; most of us don't have the luxury of running everything in Docker on day one. We all have to start somewhere. The idea with this session is to show you the flexibility Docker gives you, and Rancher, which is a management platform for running your Docker apps. What do I do? I'm a freelance consultant. I work with enterprises and startups on their operational issues. I also do a lot of work on Apache CloudStack; I build private and public clouds using it. Previously I was at InMobi and Yahoo, and I've been involved with open source for 15 years, even before I started working. All my efforts are on open source, and I support open source wherever possible. A show of hands first: how many of you have thought about using Docker? Excellent. How many are using Docker in production? Not too great. Okay, so that's about five people using Docker in production. So much for "developers love the user experience"; there must be something missing here. The point I would like everybody to take away at the end of this session is that yes, we can get things done very efficiently with Docker. It doesn't have to be complicated. And why should you be doing it? From a developer perspective, developers make changes all the time, and most often it becomes the operations person's responsibility to manage that change. There's a conflict of interest when that happens: developers want to push code as fast as possible, and the operations team wants stability. I come from an operational background, and I think most of us attending are also from operational backgrounds, so what can we do to make things more robust? We had the BoF session yesterday about automation, and everybody had the same concerns: some machines go out of sync, one change breaks things, and so many other things. So here's a perspective. What we want is a consistent image, right from the time the developer does his development on a laptop, all the way into production. And we want deployments to be very quick. You saw in the demo earlier that I launched Icinga with less than a minute of overhead; it took more time to find the username and password than to actually launch Icinga. That's a key takeaway from an operational perspective: that's how long a deployment takes you. Which means you can now do an upgrade or a rollback in the same instant. Fifty milliseconds or one second is too short, if you ask me, even for Icinga or Nagios to catch, because your pollers run every 60 seconds. So if you can roll forward and roll back within a minute, excellent; nobody is going to see a blip. And if you can do that, you can run every app as a microservice, scale up, scale down, and do all those interesting things we often hear large companies talk about, right?
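For a flavour of what that demo amounts to from the command line, here is a minimal sketch of launching and rolling back a containerized app. The icinga/icinga2 image name, the port mapping, and the icinga-old container name are illustrative assumptions, not what Rancher actually generated on stage:

```bash
# Launch a containerized Icinga (image name and port are illustrative assumptions)
docker pull icinga/icinga2
docker run -d --name icinga -p 80:80 icinga/icinga2

# Rolling back is just as fast: stop the new container, start the previous one
# ("icinga-old" is a hypothetical name for the earlier version you kept around)
docker stop icinga && docker start icinga-old
```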
But given that only five people raised their hands when asked whether they use Docker in production, what are the most common reasons nobody wants to go with Docker at this point? The biggest challenge right now, at least for the teams I work with on Docker adoption, is that there is no decent orchestration and management tool. If you are on AWS or Google Compute Engine, they provide Docker as a service. But what about everybody else who is multi-cloud, or has physical bare-metal servers, or is developing on their own laptops in VMs? There are lots of people who use VirtualBox or VMware for development. What tools are available for them? Now, if you do development on Docker, or rather put Docker into production, there are other issues to look at: monitoring tools, log management, and my personal favourite, networking. Docker assigns itself its own IPs. You need to manage port forwarding and which ports to expose. How does app A running in container A talk to container B? There are so many pieces you have to plug in before you actually have a complete working environment. Then storage management. Docker apps are very ephemeral: you stop them and start them, and they go back to the same state. You still have to maintain the configuration and the data. Imagine MySQL: you have your /var/lib/mysql, and if you keep that inside the container, there is not going to be any consistency. You need to manage the data and the configuration for your apps separately. And user management, though from an operational perspective I would rather solve the bigger issues of storage management, networking, and the actual UI to manage the whole thing. I can forgo multi-user management, but yes, Rancher does support all of that. Essentially, it's an excellent framework. It's brand-new technology; it's only been around for six or seven months, and it can do a lot for people who want to run Docker in production. So how does it work? It supports multiple clouds. It supports bare metal, like I mentioned. It takes care of your lower building blocks, which are your networking and storage services, and does user management, resource management, and service discovery. Service discovery is going to be based on etcd; it's on the roadmap and coming soon. Orchestration it does right now, and for monitoring you get, not Icinga-style monitoring, but the stats that Docker itself provides. When you run docker stats it tells you a bunch of things, and that's what gets exposed to the management infrastructure. So how do we get started? We install Rancher. Rancher itself is a Docker app, so you launch an EC2 instance or a VM, do a docker pull, and start it. It's already set up to connect to the default Docker registry. Many people I know who run Docker in production use their own private registries; Rancher can talk to private registries too, so if you have your own customized Tomcat with a WAR and so forth, you can point it at your local registry and continue to use Rancher. And where are you actually going to deploy your containers? They could be anywhere: in your physical data center, say in Netmagic or any of the local data centers, with a public cloud provider you've tied up with, or in your own private cloud.
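As a rough sketch of that bootstrap step, assuming the rancher/server image on the default registry (the image name and port reflect Rancher releases of that era; adjust if you mirror images into a private registry):

```bash
# Rancher itself is just a container: one docker run and the management UI is up
docker run -d --restart=always -p 8080:8080 rancher/server

# Then browse to http://<host>:8080 and add hosts (EC2, DigitalOcean,
# bare metal, VMs) from the UI.
```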
All Rancher requires is a machine that supports Docker, and if you're using anything released in the last two years, Ubuntu, CentOS, Docker support is probably built in. If not, apt-get install docker or yum install docker from the Docker repo, and you're set. Once you have added physical or virtual machines, you can start deploying apps using Rancher. My personal problem with Docker has been managing its networking, like I mentioned earlier. For people running Docker in production, the IP addresses assigned to a Docker container are private. Take Amazon EC2 as an example. Your host has a public IP or an EIP assigned to it; that's one. It has a private RFC 1918 address assigned to the physical interface. Then you have the docker0 bridge and the containers behind it, each with their own private IP addresses. And somebody has to manage exposing those ports to the public. Normally, when you write your Docker Compose file, you say expose port 8080 or 80 on the private IP of the container itself. That's one host. But what happens when you're running multiple containers on multiple machines? Or when you migrate a service from machine A to machine B, who keeps track of all of it? The way Rancher solves this is by creating an overlay network. The network spans all the physical and virtual infrastructure you have added to the management system. If you use Google Compute Engine, you'll see they provide a subnet that spans all regions, so you don't have to worry about VPCs between regions, running VPN tunnels, and so on. Similarly, when you add hosts to the Rancher management framework, it sets up IPsec tunnels between all the physical and virtual machines and gives you a single namespace from an IP perspective. Every machine is pingable. You don't have to worry about any kind of NAT, VPC tunnels, or physical leased lines; everything goes over IPsec. It's like your own private SDN, software-defined networking. And once you have a single overlay network, you can start putting load balancers anywhere. You can have your load balancer terminating in, say, the Amazon Singapore region, and it can talk directly to a container running in your physical data center. That's brilliant. Today most of us can't imagine using an AWS ELB and adding an instance from another region behind it; you have to use GSLBs or something like that to get it done. But thanks to the overlay network, it doesn't matter which physical data center or public cloud hosting provider you are on; you can connect between machines anywhere in your overlay network. The other major building block is storage. You can manage volumes and snapshot directly to S3, and the idea is that snapshots are consistent: if you throw away your app, you can still restore it from the snapshot because the data is kept in the snapshot. The app by itself is stateless. It's the data that gives more meaning to an app than just Tomcat; it's the data you write or process and the MySQL or other DB you talk to that makes things meaningful. You can throw away the MySQL service itself, but as long as you plug it back into the right data store, you're back up and running. Does that kind of make sense?
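To make the "data lives outside the container" point concrete, here is a small sketch, assuming a stock Ubuntu host, the official mysql image, and a /data/mysql directory on the host; all three are illustrative choices, not prescriptions:

```bash
# Install Docker from the distro repo (Ubuntu shown; use yum install docker on CentOS)
sudo apt-get update && sudo apt-get install -y docker.io

# Run MySQL with its data directory bind-mounted from the host, so the
# container stays disposable while the state survives in /data/mysql
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v /data/mysql:/var/lib/mysql \
  mysql:5.6
```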
Docker Compose. Here we define a load balancer using an image and map port 8080, and it links to web; web uses a PHP image and links to a database and ZK; ZK in turn uses the ZooKeeper image, and the DB uses MySQL. And developers should be able to do this, right? Define the environment right on their laptops. The way hiring happens in companies, there are freshers, there are experienced people, and there are the people who actually do the day-to-day management of the production environment, and all of them need to understand the environment they're working with. If I share this file with anybody in the organization, they get an idea of how the app deployment actually works. Today, if you go into an organization, somebody starts drawing on the whiteboard: here is where the application is, here is where traffic comes in and goes out, and suddenly somebody says, oh, we just added a new component yesterday, or ten years back, and everybody forgot about it. Now imagine if we can describe our environment in a YAML file. Brilliant. Everybody is on the same page, and if I say to somebody, look, use Docker, you can build this environment on your laptop and have an end-to-end functional environment. It hopefully changes the way you do development and end-to-end testing. So everybody uses GitHub: you pull your code, you build, you store your Docker Compose YAML wherever you want in GitHub, people check out the same Git repo, run the docker-compose.yaml, and use Rancher to deploy it onto their local box or even into production. Though we wouldn't want somebody to deploy directly into production; go through a change management process. But the important thing is that we can now build environments in a very, very lightweight way without too many dependencies. I would say no dependencies: as long as your machine can run Docker, you're set. It may not be the same class of machine you would run in production, but from a functionality perspective you can build the whole thing out. Now, when you move the same thing to production, you might have additional things: you want to scale up and scale down, you have health checks, and so on.
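Here is a minimal sketch of the kind of docker-compose.yml described above, written out from the shell so the whole thing is copy-pasteable. Every image name, port, and credential is an illustrative placeholder rather than the exact stack from the talk (tutum/haproxy is assumed here because it auto-configures itself from linked containers):

```bash
# Describe the environment in one YAML file (Compose v1 syntax);
# image names and the password below are illustrative assumptions.
cat > docker-compose.yml <<'EOF'
lb:
  image: tutum/haproxy
  ports:
    - "8080:80"
  links:
    - web
web:
  image: php:5.6-apache
  links:
    - db
    - zk
zk:
  image: jplock/zookeeper
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: secret
EOF

docker-compose up -d        # bring the whole stack up on a laptop
docker-compose scale web=3  # scaling the web tier for production-like tests
```

The same file checked into Git is what everybody else pulls and runs, which is the whole point.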
Scaling and health checks are all very pertinent for very large-scale deployments that have high peaks and want to optimize on cost, but don't get hassled by this. Most of us on day one don't care about auto-scaling; it's about getting the product out, making it consistent, and being able to do updates. But yes, for the most advanced people, probably 1%, scaling up and scaling down is a lot more important than having a sane environment, and yes, we have the flexibility to do that. A very important thing in linking up applications is service discovery. Spinning services up and down creates a lot of overhead. I think somebody made the point a little earlier: can I start listening to events and then spin up Icinga checks based on those events? Because people are spinning things up everywhere; from an operational perspective you don't know what new machines have been spun up and what old machines have been destroyed. That's a fundamental problem in the cloud world, because you can click and start and click and destroy. Everybody has, say, Amazon console access, and they just spin things up, and the next thing somebody comes and says that service stopped working, and we in operations realize that nobody told us somebody had spun up a service like that. So service discovery is something I'm very, very keen on, and one of the fundamental aspects of Rancher is that everything has to be a plug-and-play kind of model. Are we there yet? That's a different thing altogether, but yes, it's on the roadmap to make it happen. All right, I'll be happy to take questions before I go on to the second demo, the one I actually wanted to show. A question from the audience: before Rancher came in, or rather after Rancher came in, Amazon ECS came along, and ECS does pretty similar things to what Rancher does. What difference do you see? At what level will Rancher be capable enough for me to do it on my own instead of going with a service like ECS? So the question was: what's the big deal between Rancher and, say, Amazon's container service? Rancher is multi-cloud. Simple. You're not stuck. If you wanted to move your workload to Google, what would you do? Let me ask you a simple question: can you use ECS there? Not exactly. So what we're proposing, or rather what the project wants to do, is have a single platform for orchestrating Docker. It doesn't matter where your resources are physically located: your laptop, your physical bare-metal servers, VMware instances, XenServer, CloudStack, OpenStack, Google Compute Engine, DigitalOcean droplets, anywhere. Another question, on the networking part: is it compatible, in a plugin format, with Open vSwitch, Pipework, Weave, flannel, or any of the other networking tools out there right now? There is absolutely no integration. It is a dedicated overlay network; it's an IPsec tunnel that runs between all the participating instances. And another: if I'm not wrong, the agent running on the instance for Rancher, does it run in privileged mode, take the namespaces and cgroups of that particular instance or VM, and get the information from there for the other containers?
Okay, so let's do something else. After this session, how many of you actually plan to try Docker in production? I want to first make sure, because we have a BoF later, that people understand the difference between containers and virtual machines. That's the key mindset change that has to happen if you want to move to containers. We'll get to what you're asking for; I think it's an excellent technical discussion, but from a practical perspective, at least my personal challenge is to get everybody interested in containers. The implementation details are something we can talk about in the BoF session later, because we are having one. A comment from the audience: in fact, the reason I'm asking is that we use Docker in production; we consolidated onto it, but somewhere it didn't work out. Wonderful, sure. So now is a good time to ask questions about Docker itself before we get into the demo. Question: what's the difference between Docker and Google's Kubernetes? That's an excellent question. Kubernetes is more of a cluster management framework, not necessarily an infrastructure management framework. Rancher supports linking Docker containers across hosts, which Docker does not natively support; even Docker Swarm does not support linking containers across hosts, they have to run on the same host. Does Rancher solve that problem? Yes, we create an overlay network, and because we are on an overlay network, everything is one large contiguous pool of CPU, memory, and disk, or whatever you want. We have another question here: does Rancher still have any support for a Docker Swarm-style feature, like, here are the IP addresses of the servers I need to set up a cluster with Docker Swarm? Absolutely. So if there are no more questions on Docker itself, or on why we have Docker, maybe I'll explain a bit about how we got here, just to give you an idea of where we were previously and where we are right now. Once upon a time we used to deploy from tarballs. We would download the latest Tomcat, wget it, make, make install; it would go into /usr/local, and you would have a start script, probably a stock start script copied into /etc/init.d, and things would start up. But the time would come when somebody needed to upgrade, and then you wouldn't know how to go about it. You would recompile it, or you would have to reverse-engineer what somebody had previously done. That's how we all started. Then somebody came and told us, hey, there's something called a package manager; you should be doing rpm -ivh or dpkg -i. But where would you get the actual package from? You would go to freshrpms or rpmfind, download it onto your disk, and then run dpkg -i or rpm -i. Then it would complain about a hundred dependencies, and you would start fetching libraries one at a time to satisfy them, or just say, I'm not worried, I'll do a force install. Then somebody said, you know what, we have something called a repository: you can do an apt-get update or a yum update, and suddenly everything was packaged in a repository on the network. You could install packages, it would solve your dependency issues, and we had a much saner, more stable environment with consistency between different machines. Then people said, hey, look, I need to do rollbacks and roll-forwards, and if I use the vendor-provided Tomcat or Apache, it doesn't have all the fixes I need. So people started
building their own packages, and when they built their own packages, those would conflict with the stock ones provided by the vendor. So then you'd say, okay, why don't I build it with a different path; I'll call it /opt/mycompany/whatever, and all the artifacts started going into /opt and so on. That's evolution for you: we're still using the same package management system, we're doing things differently, but at the end of the day it's the same result. Then you said, okay, now I have things in a different place with the same package manager; what will happen if I do a yum upgrade or an apt-get upgrade? Will it destroy whatever I've done so far? So now you tell the ops people: don't ever upgrade any system, just stick to what you have; if you want to do an upgrade, call me and then we'll do the upgrade. That's where we are today. So smart ops people decided, okay, let's do things differently: why don't we start using jails or a chrooted environment, a completely different environment running on the same box that is network accessible? You would chroot into a different folder, do your yum or dpkg or whatever, and it wouldn't spill over onto the system area that your ops people wanted to keep patched. The next step of the evolution was containers. Containers have existed in Solaris for ages; in Linux they came very recently. FreeBSD had jails for a very, very long time, and at Yahoo I was personally deploying on FreeBSD jails for a very long time, but Linux didn't have an equivalent. Finally Linux learned that there are better ways of doing things, containers came about, and that's where we are today. The container strategy is an app strategy: you no longer worry about the OS itself, you put everything together. What the developer really wants to run is Tomcat or Apache with PHP; he's not really interested in whether it's Ubuntu 14.04 or CentOS or whatever. It just so happened that because he was stuck with the packaging system provided by the operating system, he started insisting that he wants x, y, and z. But if he were given the flexibility to run whatever he wanted, and a way to manage it, you could do development and deployments just as easily as deploying packages. You would simply say: here is a manifest file, build a container; the container has the app; I don't care where it runs, because whatever the developer wants in terms of a userland environment is already in the container. So that's where we are right now: we have containers for applications, and we no longer care about virtual machines. Actually, virtual machines came first. We would say, here is a VM, take a snapshot of it, it has a running copy of our application, deploy it to as many places as you want. Then you worry about things like hostnames: every machine has the same hostname, every machine has the same SSH key, so from a monitoring perspective you are suddenly seeing multiple events from machines all called the same thing, because you're cloning machines left, right, and centre. So that's where we are today, and hopefully Docker can deploy the application environment in a much saner way. Sure. The mic is not on, I think. The question is about the performance of Docker: suppose we are deploying Docker in production, how should people do density management on the host? As of now we try to calculate density for VMs, say this is a 32 GB, 32-core box, we should run this many VMs on it, based on application benchmarks.
So how do you calculate the density for Docker containers; if you have a 32 GB, 32-core box, how many containers will you run on it? That's one question. The other query: do you recommend updating the same image again and again, or do you feel you should always use fresh images for deployments? Okay, I'll take the first one, which was about resource management. See, in a VM you are doing strict partitioning of CPU, virtual CPU and virtual memory, so it's a very different ball game there, and in terms of performance it depends on the hypervisor itself, whether you're using HVM or paravirtualized VMs. Technology has come a long way; CPUs now support VT, and the CPU vendors claim there is absolutely no overhead provided you use paravirtualization drivers. So my take is that the performance difference is going to be so negligible that I wouldn't bother doing benchmarks on VMs. And actually I'm not talking about VMs at all, I'm talking about Docker; even from a VM perspective I don't care too much about performance overheads. Now, when it comes to Docker, in general it's the same system cache that you use, so whatever is buffered in main memory gets buffered. That's actually one of the security concerns around Docker: you are sharing resources, it's not perfect isolation. So my take is that resource utilization ultimately depends on the application you run. Something like Solr, which I actually have a client running, is a beast. Running Solr in containers is great from a developer perspective, but from a resource consumption perspective it continues to be the same whether you run it on bare metal or in a container: it's going to eat the amount of memory it wants to eat. So from a performance and resource utilization perspective, I would say it's the same: if you run Solr on bare metal and it takes X amount of CPU and X amount of time, it's going to take the same in a container. That's fine, comes the follow-up, but there should still be some mechanism to do capacity planning for containers, right? Because deploying 100 containers on a single box doesn't work either. Can you deploy 100 apps on a single box? So does that mean we're saying one container per box? It's just an app, right? If your app uses X amount of CPU, you need to figure out how much your other app is going to use. But in the Docker world you can do CPU bindings, right? Yes, you can do that, but it still uses cgroups at the end of the day to enforce the limits. My point is that's not what we're supposed to be doing: we're supposed to be using it as a regular app, just in an isolated userland. That's my takeaway. If you want a cluster resource management system, that's where you run something like Kubernetes to manage your resources. That's how I would look at it. I wouldn't use Docker itself to do things like that; I would use a much more intelligent system that is capable of managing resources. The final question I have: with Docker itself, people are building Rocket (rkt) containers now, CoreOS is building Rocket, so what is your take on that? Why do they feel Docker is becoming a monolithic system rather than
microservices? What is your thought process on that? Is the question that Docker is becoming monolithic? The feeling is that this is the one system that does the image building, uploads the images, and manages the containers as well, so if you have any take on it, why did Rocket come into the picture? See, Rocket versus Docker is a political thing, I would say. I don't think it's a technology requirement at all. I mean, it's good to have competing frameworks, but I think CoreOS just has a different way of doing it. Probably like Icinga and Nagios: Icinga does much better than Nagios, and that's why Icinga exists. I use both; I use the right tool for the right job, and that's how we in operations should always be. If you're stuck on one particular technology, I think it's going to bit-rot at the end of the day. You have seven more minutes, so do you want to finish the demo? Yep, let's get to the demo. One more question first: the way it's being marketed today, it's the silver bullet, so it would be nice if you could highlight the cases where you should not use Docker. Okay, I'll start with where I am using Docker. I'm using Docker to containerize applications which have absolutely no developer interest, from an ops perspective. These are all legacy applications which nobody cares about and which are unmaintained, and if the machine hosting them goes down or whatever, nobody knows how to put things back. These are the things I've found companies interested in: nobody loves this application, but we want to make sure it's running because it still makes money; nobody knows how to set it up; if it goes down, what can we do about it? So all these legacy applications are great targets for moving into a container, because you can always store the image in your Docker repository or your private registry and then say, okay, you want a copy of some PHP 4.x app that somebody wrote some time back? Here you go, it'll run out of the box. Now, where am I finding it tough to get developers to adopt it? When each developer is developing a different module of the same app. They've written very big monolithic apps where everything gets compiled into one big JAR or something, and lots of developers have to get together to build something for you. For them, they don't want to use Docker because they feel they're doing very dynamic things: I don't know what his perception is, he's doing something, I'm doing something. I think it's more of a distributed development issue than a Docker issue, but when you try to tell people, it's all right, start using Docker even for things like that, they don't want to get into it. They think it's an overhead for them to start Dockerizing at that point; once they're done with the development, they'll Dockerize it. So that's one challenge I'm seeing from developers. I don't think there are any technical challenges at this point in time. Well, there is one technical challenge I've seen developers raise: because they're used to SSHing into a machine to check logs, they find that convenience missing. That's again a workflow thing, not a technical thing. One of the major things that systems like these otherwise handle well is when you start accessing disk. One of the things you do in Docker is a mount: you expose file systems or Unix domain sockets into the container, and the problem is how you ensure consistency of the disk across those
boundaries. Somebody else changes it from outside. You've mounted a file system into your Docker container and now you're reading from and writing to that disk for whatever purpose; you cannot simply say your app should not write there. There are a lot of legacy apps that actually use the file system, and once you do that, the whole Docker paradigm falls apart, and then you're left with the challenge of also handling the availability of that data, which is not part of Docker. Absolutely. So data: you don't keep data in containers. Yes, but what is there to handle in it? You don't want to containerize data; at the end of the day you want the data to be outside, and that's where you would like to keep it. The app will write to /tmp or /var/tmp or /data or something; those are the things we have to make sure are outside of the container itself, so we have to make those mounts available, and consistently so. Well, that's the process of containerizing an app, is what I would say. All right, so the screen is back. I'm going through the UI now. You do a docker run on any of your VMs or EC2 instances and you get the UI up and running. Once you have the UI, you add hosts to it. In the interest of time, I already added two EC2 instances before coming in. It took a bit of time, probably because the docker pull takes a while, I guess, but it's up, and we deployed Icinga Web a little earlier. So let's go through the process of adding a new host. Right now there is ready integration with DigitalOcean and EC2: you can give your AWS IAM credentials and it can provision an EC2 instance for you, and the same with DigitalOcean, you give your credentials and it will provision a droplet for you. By default it uses the Ubuntu 14.04 image, you choose the size, the region, and private networking, and you have your host up. Or, if you're using the custom option, say you have a physical instance: you build the physical machine, make sure the latest Docker is running on it, and you install the agent using the command shown. As long as your management server is reachable on the network, you will be able to control the resources you want to run on your physical machine or your virtual machine. It takes roughly a minute and a half for it to come up if the network is good, because it does a docker pull, and the first docker pull takes a bit of time. Once the host is added, it picks up whatever hostname you have assigned to it. I assigned a hostname of r1 to one, and the other one was left empty. The first thing that comes up is the network agent, which is responsible for running your overlay network. 10.42.0.0/16 is the subnet I am using, and all the machines that are part of Rancher end up on the same subnet; because they are on the same subnet, they can all talk to each other no matter where they are physically located. It goes over IPsec, so that is UDP 500 and UDP 4500. Now, once you are done with this, you can always add a container. So let's start with something like Tomcat. All right, my Tomcat is up. The first time you install a container it takes a long time because it has to do a docker pull, but every subsequent run comes up in a few milliseconds. I'm going to do the same thing on the other machine and call this tomcat2, same Docker image. All right, tomcat2 is up, and you can see the IP addresses assigned to them on the overlay subnet, ending in .29 and .79.
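Roughly what those UI clicks correspond to on the command line, as a sketch: the agent registration command, including its token URL, is generated for you by the Rancher UI, so the server address and token below are placeholders, and the Tomcat image tag is just an example.

```bash
# On each host Rancher should manage, run the agent command the UI hands you
# (<rancher-server> and <registration-token> are placeholders):
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent http://<rancher-server>:8080/v1/scripts/<registration-token>

# The Tomcat containers launched from the UI are ordinary Docker containers:
docker run -d --name tomcat1 tomcat:7
docker run -d --name tomcat2 tomcat:7
```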
Next I set up a load balancer. Let me make the window a bit smaller, it's too big for me. I choose where I would like to run the HAProxy load balancer, I choose the two containers I want behind it, and I map, for example, port 80 to the Tomcat port. Once it's added, it shouldn't take very long, and I should be able to access it. I mapped the elastic IP to the hostname r2, and now I have a public Tomcat load balancer with two Tomcats running behind it. So it is as easy as four or five clicks. Icinga was two clicks: the first to launch the Icinga instance, and the second to get its username and password. That's it. Not yet; we are at version 0.16, I think. It's completely open source, go to GitHub and check it out; the license is actually the Apache license (ASL). Now, to go back to the presentation, I just want to quickly cover one other thing. In order to make deploying Docker itself easier, the Rancher folks have come up with an OS, RancherOS, which does nothing but run the latest and greatest Docker and runs everything in a container. The idea is that upgrading Docker becomes super easy, and you can manage all the services because everything is containerized anyway; you can use the Docker APIs to make changes and do whatever you want. Yes, but no, it is standardized; there is nothing special, there is no Rocket or anything like that. I mean, the whole point with CoreOS, sorry, is that we actually run etcd for our own service discovery, but CoreOS seems to be doing its own thing; they don't want to do anything with Docker, so that's what is happening. The other thing is that CoreOS does not run everything in a container: it runs Docker itself, but, for example, DHCP inside CoreOS is not a container. With RancherOS you can upgrade everything as a Docker container; you can say, Docker, upgrade my OS, so your OS upgrade itself is a Docker update. Sorry, I have to cut you off, we are running late, so you will have to wrap it up. So, to put the whole thing into perspective: from an enterprise perspective, these are all the things we want to take control of. Today it is possible for us to do all these things using, say, VMware's fancy enterprise systems, or CloudStack or OpenStack, to manage the physical resources and make the data center much more efficiently managed. What Rancher wants to do, along with its set of projects, is make application management the next way of looking at environments. So how do you get started? Check it out from Git and join the community. Thank you.