So, ADB is an environment that is easy to start with for building and packaging containers. It comes with all the dependencies installed — Docker, Kubernetes, OpenShift, Mesos, Marathon, all the infrastructure and container orchestration engines — and it has everything needed to build Docker applications and package them as Nulecules. If you have ever set up any of these platforms yourself — Kubernetes, OpenShift or Marathon — you know it is not a very friendly or easy process. So ADB is there to help you. It is a Vagrant box, and you just run one command to set up your environment and start working.

Do I have to speak about containers? I think you've heard so much about them already. "No, please don't." Yeah.

So, coming to containers and packaging: we have a Dockerfile where we add a few lines, and we build a container image from it. It seems simple and clean, but is it beautiful? Because it depends on the people who make it. And we are seeing that in the number of images in the public Docker registry: there are too many. For a simple application you can find hundreds of images, mainly because most of the images don't have proper documentation you can get started with, and every image is tweaked for one particular user's use case. It's not generic; you cannot just download the image and use it in any environment. That forces you to make yet another image out of it and push your own custom image to Docker Hub. It's crazy out there.

Just as an example: if you search for MariaDB images, technically there should be only one that works everywhere. But it's not like that. Every community and every organization has its own MariaDB, most of them might not work, and most of them come without documentation. Over time it just keeps on growing, because everyone needs their own MariaDB image. As a user coming into the container ecosystem, it's confusing. I just want one. That's why we go for the official images, right? But sometimes the official images also don't work: they are not tested thoroughly for every release cycle, and you'll find issues reported against them. So how do I have confidence when I move into this ecosystem? This is a blocker for me as a user.

But nonetheless, containers are fun, because you can package everything into one thing and ship it. We just haven't agreed yet on best practices for how we package containers. Let's take a real-world application as an example: WordPress. How many of you know about WordPress? Everyone, right? Or everyone has used it, at least. In a typical WordPress setup we download the RPM or deb for WordPress and then configure things. It doesn't work. We configure things again, and after a lot of trial and error, we get it working. With containers, WordPress just needs a DB. So I have my MariaDB container image; I run a container from that image, specifying the database name and the password for the database. Then I run my WordPress application by linking it to the MariaDB container I just booted up. Then I just need to find the host port to which port 80 of my WordPress container is exposed, and once I find it, I hit the browser and I see the WordPress setup screen. No setup, no configuration; everything I need is just there, running for me. So it's pretty easy.
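To make that flow concrete, here is a minimal sketch of the linked-container WordPress setup described above. The image names, environment variables and port handling are assumptions based on the public Docker Hub images, not the exact commands used in the talk.

```
# database container (assumed official mariadb image and variables)
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=wordpress \
  mariadb

# link WordPress to the database container and publish its port 80
# to a random free host port (-P)
docker run -d --name wp \
  --link db:mysql \
  -e WORDPRESS_DB_HOST=mysql \
  -e WORDPRESS_DB_PASSWORD=secret \
  -P \
  wordpress

# find out which host port was bound, then open it in the browser
docker port wp 80
```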
But here we see two containers coming together to deliver a richer application. What if we take it beyond two? We get a lot of containers coming together to deliver a much richer application. A typical scenario: I'd like to have my entire production app, say an e-commerce website, on containers. I need databases for it, I need cache servers for it, I need multiple app servers for it, and finally a load balancer in front. Everything can be containerized and linked together to deliver the whole website on containers. As an example, look at GitLab. GitLab is an open source clone of GitHub; it's very well known and used in-house by many organizations. At a superficial level, GitLab depends on a database — Postgres, MariaDB, whatever it might be — and on a caching server, Redis. And Redis, again, could run in master-slave mode, so it depends on two other nodes: one master node and one slave node.

So we come to microservices, and containers help us shape microservices. When you deploy containers, the image is the same, right? If we build our images properly so that we separate the metadata of the image from the static image, we can deploy the same image into multiple environments by configuring the metadata for each environment. And thus we can orchestrate containers for different environments. As a result, a bunch of orchestration tools have appeared in our ecosystem for exactly this: deploying the same image with different parameters in different environments.

But while we have gotten rid of one problem — having a static image that works across environments — we now have to ship a variable piece of data along with it, the metadata. And all the orchestration providers have their own definitions of this metadata. The metadata from one platform to another doesn't look the same at all; they are totally different. For example, if I have to deploy on Kubernetes, I have to write the template format that contains the metadata and deploy it to my Kubernetes server. So we have container orchestration engines like OpenShift, Compose, Mesos plus Marathon, Kubernetes, Terraform, and so many more yet to come, and we still do not have a clear leader in the container orchestration space. But everyone has taken the liberty to invent a new templating language, which makes it even more confusing for users. As a developer, why should I even bother with their language? I just need to get my application into production. It is not that easy.

So how can the metadata change across environments? For example, I have multiple environments for my application: staging, test, production. If I take the metadata for a MySQL or PostgreSQL, I might need to run it on different IPs, on different ports if all the environments are on the same host, and with different credentials, and I need to be able to configure all of that on the fly. And if we look at the guestbook example, the multi-container application that the Kubernetes demos showcase, it's not small: it's something like 725 lines long. That's not a quick-start guide; you have to go through all of that just to get started.
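As a hypothetical illustration of that "same image, different metadata" problem: the image stays fixed while the host port, addresses and credentials change per environment. The names and values below are made up for the sketch.

```
# staging: non-default host port, throwaway credentials
docker run -d --name db-staging -p 3307:3306 \
  -e MYSQL_ROOT_PASSWORD=staging-secret \
  mariadb

# production: same image, different port, credentials pulled from the environment
docker run -d --name db-prod -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD="$PROD_DB_PASSWORD" \
  mariadb
```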
So here we are introducing Nulecule. Nulecule is a spec for building and packaging multi-container applications. It's basically a pattern that guides you in how to build multi-container applications and ship them. It encourages composability: you can compose a Nulecule application out of multiple smaller Nulecule applications. The benefit is that once you package an application as a Nulecule, you can just forget about that work. Once you have packaged a DB properly, you never need to do it again; you just consume it. That's how Nulecule encourages composability.

And it is orchestration-platform agnostic. Nulecule has one common language and works as-is across different orchestration platforms: it works on OpenShift, it works on Marathon, it works on Kubernetes, it works on Docker. We have a pluggable provider model that lets you hook in any provider you want; there is an API, you write your module, and you get support for your own provider. So you're speaking the same language in the world of Nulecule, and you don't have to bother with the native artifact languages of the different providers. So it's a spec, it's open, it's orchestration-engine agnostic, it encourages composition, and it lets you parameterize your deployments for the different environments you deploy to.

So why did we start building yet another tool? Because at the time we started working on it, there were no tools like it that were truly provider agnostic, orchestration-tool agnostic, so we had a need to build one. Currently, I think Ansible Container is doing something similar that is also sort of orchestration-provider agnostic; it just came later, as far as I know. But last year there were no such tools. As a developer, when I define my application architecture, I want to do it at a much higher level: I want to say, okay, this is the application, it depends on this DB, it depends on this cache, now just deploy it. But when I talk about a high-level definition, I don't want to lose the power of doing low-level tweaks. Nulecule gives you a high-level overview of your entire application architecture, but it doesn't stop you from customizing the low-level bits if needed. And because it is high level, it's easy for a junior developer to take a Nulecule and deploy to production without the fear of breaking something unknowingly.

And it integrates with other tools. As I said, it integrates easily with other orchestration providers; you can write modules to add support for your own orchestration provider. We already have support for Marathon and Kubernetes, and maybe some X orchestration platform will come along in the future. The spec is open and the implementation is also open; it's open source, so anyone can look into it, improve it, and customize it as needed.

Nulecule is modeled as a directed acyclic graph, so that the dependencies among the application components are laid out properly. For example, the guestbook would depend on Redis, and the Redis master itself depends on two Redis slaves. So we define an application as a graph, and the dependencies are self-evident from the definition itself.
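A minimal, hypothetical Nulecule file for the WordPress example might look like the sketch below. The field names follow my reading of the Nulecule 0.0.2 spec and the image names and artifact paths are illustrative, so treat it as a sketch rather than the project's exact file. The `source` entry is what makes composition work: an already-packaged Nulecule is pulled in as a black box.

```
cat > Nulecule <<'EOF'
---
specversion: 0.0.2
id: wordpress-app
metadata:
  name: WordPress multi-container application
graph:
  # reuse an existing, already-packaged Nulecule as a dependency
  - name: mariadb
    source: docker://projectatomic/mariadb-centos7-atomicapp
  # a component defined in this Nulecule, with per-provider artifacts
  - name: wordpress
    artifacts:
      docker:
        - file://artifacts/docker/wordpress_run
      kubernetes:
        - file://artifacts/kubernetes/wordpress-pod.yaml
EOF
```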
And we manage dependencies: when you install a Nulecule application, if the dependencies are not on your system, the engine itself takes care of pulling all of them and just running the thing. You don't need to worry about whether you have the dependencies pre-fetched; it takes care of that for you.

And as I said, when you deploy into different environments you need to be able to configure the metadata, and we support parameterizing the metadata. Based on the environment, you can customize the variables, so to say, that decide what data is used for the deployment. For example, when I define a hello-apache app here, I have defined certain parameters under the key params. One is the image, which says what image to use to run the httpd app; here I'm setting a default for it, centos/httpd, and this can be overridden. I'm also defining the host port, the port on the host to which the container's port 80 is bound. If I'm running multiple instances on the same machine, I cannot bind port 80 of every container to port 80 of the host, so in that situation I can customize it to bind to some other port. All the nodes in the graph have their own parameters like this.

I can also put restrictions on the parameters. If you look at the constraints under the hostport section, I have added a constraint with a regex saying the port can only be a number of this pattern, so if you try to enter a wrong value, it will not be accepted. Are we really using regexes for this? We can; most of us don't, but it's a feature that allows it, and people often just skip it. Also, the defaults are there for user friendliness: when you know the Apache server runs on port 80, you don't want to have to configure that. The defaults let you override values if needed, but you can just proceed with them if you know your environment will allow it.

With the params section we define what data my application, or Nulecule, needs to be instantiated: what data do I need to run? Then there is the answers file. The answers file can be in different formats — it could be JSON, it could be YAML — and it's basically a hash of hashes, where you have sections, and each section, if I go back to the previous spec definition, is named after one of the node names.
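Before looking at the answers file itself, here is a sketch of what that params section for the hello-apache component could look like. The key names (params, default, constraints, allowed_pattern) follow my reading of the Nulecule spec, and the artifact paths and scratch file name are made up, so treat it as an illustration rather than the project's exact file.

```
# fragment of a Nulecule graph entry, written to a scratch file for illustration
cat > helloapache-graph-entry.yaml <<'EOF'
- name: helloapache
  params:
    - name: image
      description: httpd image to run
      default: centos/httpd
    - name: hostport
      description: host port to bind the container's port 80 to
      default: 80
      constraints:
        - allowed_pattern: "[0-9]+"
          description: hostport must be numeric
  artifacts:
    docker:
      - file://artifacts/docker/hello-apache_run
    kubernetes:
      - file://artifacts/kubernetes/hello-apache-pod.json
EOF
```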
For example, helloapache: there's a section for helloapache inside the answers file. ("Yep, got it. Does this also translate into, like, an OpenShift template?") We'll come to that in a bit.

So we have various sections in an answers file where you can pass the data you want to override. We can either supply data for the params or override the default values. For example, here I'm using the defaults: I had centos/httpd as the default, so it just picks up centos/httpd. But if I want to run this on, say, an httpd container based on Fedora, I can do that. I can even override the host port that my container gets bound to. So I can configure my answers file to match the requirements and specifications of my various environments.

The Nulecule specification also has room for various providers — Kubernetes and so on — and it points to the artifact files, meaning Kubernetes templates, OpenShift templates, or Docker commands. The thing with Nulecule is that it's pretty much explicit; nothing is magic. So to the question of whether it generates an artifact file or template for OpenShift or Kubernetes: it does not. You have to do a one-time effort; you have to write the templates, which can then be configured. I'll show this. The templates we talk about are the artifacts that get deployed on the orchestration platforms: we have templates for Kubernetes and OpenShift, we have different kinds of files for Marathon, and for Docker we have commands, which is one line. We can have placeholders in a template, like a $image, which get populated from the values supplied in the answers file, creating dynamic artifact files from the templates we ship. But yes, you have to create the templates manually.

When we scoped Nulecule, we wanted it not to be magic; it should not do too much magical stuff. It should be explicit, and you are in full control of what you do. You have to write the template once, but once you ship it, apart from the answers file you don't need to touch anything for the different providers. It's explicit, and that was one of the decisions we took so we could keep the tooling lean and simple, not magical, because magic brings a lot of uncertainty, and when things go wrong you don't know what happened. Here, we just render the templates with the data you provide and deploy them; as simple as that.

So, coming to Atomic App. Nulecule is a specification; it doesn't do anything by itself, it just says how you package your multi-container applications and ship them. But we need something to run it. Atomic App is a reference implementation of Nulecule that installs, runs, and manages the life cycle of a Nulecule app on a particular orchestrator. You can fetch Nulecule applications with Atomic App, you can run them, you can stop them on the provider, and you also use Atomic App to build Nulecule images. One of the decisions we took for building Nulecule images is that a Nulecule image should be standalone: the image should contain the code necessary to run and deploy the thing, so you don't need to install anything apart from Docker on a machine to run Nulecule images. You don't even need an Atomic host or the atomic CLI command. So whenever we make a Nulecule image, we base it on the Atomic App image, and on top of that we add the metadata files — the Nulecule file, the Dockerfile, and the artifact files — into the container during the build process.
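To make that concrete, here is a sketch of the two explicit pieces just described: an answers file that overrides the defaults, and a one-line Docker artifact template whose $image and $hostport placeholders get filled in from those answers. The section names, keys and file paths are illustrative, based on the helloapache example, not copied from the project.

```
# answers file: a [general] section plus one section per graph node
cat > answers.conf <<'EOF'
[general]
provider = docker

[helloapache]
image = fedora/httpd
hostport = 8080
EOF

# Docker artifact: a one-line template; $image and $hostport are substituted
# from the answers/params before the command is executed
mkdir -p artifacts/docker
cat > artifacts/docker/hello-apache_run <<'EOF'
docker run -d -p $hostport:80 --name hello-apache $image
EOF
```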
So it is standalone: the image contains the code needed to manage the application itself. We have chosen the container format and distribution mechanism to deliver Nulecule images.

Now let's look at some demos, which should make things much clearer. Most of the demos were made by Thomas, my colleague. I need to switch to my terminal for this. I am in an empty directory, and I will show you various methods of running a Nulecule application. I will take a sample WordPress Nulecule application, which consists of a database container and the WordPress application container. I have the atomic CLI with me.

So let me try to run WordPress on Docker. All I need to do is sudo atomic install... it is very difficult to type like this... atomic install wordpress, is that correct? Sorry, I just need to run this. Yeah, that's what happens. I have no data with me right now, and Atomic App is designed in such a way that it asks me for any missing data. This is not for real deployments, just for getting started. So I will just enter some random values. Will it just run, or ask first? Yep. I will set it up... the cloning screen is better, I will change this. Is it visible, or do I need to increase the font size? That's good.

I have one parameter, provider: docker. It says that my application now lives in a directory generated on the fly under /var/lib/atomicapp, named after projectatomic-wordpress-centos7-atomicapp plus a random suffix. That is the directory where the state of my application lives. Let me do a docker ps and grep for atomic... I see that my WordPress and database containers are running, and I can see the host port on which the WordPress application is exposed. So I hit that port on localhost in my browser, and yes, WordPress is running; I just need to configure the language, fill in the form, and I'd be set up. That is on Docker, which is pretty simple.

Now I will kill my application. I started my deployment, and now I will undeploy it. ("So you are going to roll back now?") Yeah. I just point at the folder where the state of the application resides and run the stop with the image name. It says it is stopping the images one by one... it is taking some time... it is taking a lot of time for Docker; it should not take that long. There, it has undeployed the containers from my Docker engine, and if I do docker ps and grep for atomic again — yeah, it has been successful. So now I will try to do the same on the Marathon provider.
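Roughly the command sequence from that Docker demo, reconstructed as a sketch. The image name comes from the talk, but the exact flags and the stop syntax may differ between Atomic CLI and Atomic App versions, so treat this as an approximation.

```
# install/run prompts interactively for any answers that are missing
sudo atomic install projectatomic/wordpress-centos7-atomicapp
sudo atomic run projectatomic/wordpress-centos7-atomicapp

# check that the database and wordpress containers are up
docker ps | grep atomic

# undeploy, pointing at the directory where atomicapp stored the application state
sudo atomic stop projectatomic/wordpress-centos7-atomicapp \
  /var/lib/atomicapp/<generated-state-directory>
```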
For that I am using our ADB Marathon box, which comes with Marathon already set up, so I don't need to set it up myself; I have already done vagrant up. I do a vagrant ssh to log in to the Marathon box and switch to the Marathon setup. I'll just remove the old WordPress folder from there and use Atomic App to fetch the application again.

This time I will be doing an unattended deployment, so that I don't need to enter values manually; I will be using an answers file to deploy it automatically, which is how you would do it in production. First I fetch the image: I run projectatomic/wordpress-centos7-atomicapp in fetch mode and give the destination as a local wordpress directory. It downloads the image and extracts it into the local wordpress directory. That gives me a pre-populated sample file with all the parameters the WordPress application needs, but if I just cat it, most of the values are empty, so I will set them now and then try to start the WordPress containers on Marathon.

First I copy answers.conf.sample to answers.conf and edit it — I don't have vim on this machine, so... Let me set some easy passwords; I'll use wp for the name of the database and the user, and 172.17.0.1 for the host — I don't have DNS set up on my Marathon box, so I am just using the Docker bridge address directly. I use the password again, the centos/mariadb image, user name wp again, DB name wp, and I set the provider to marathon. Seems good.

So let me run sudo atomic run with a dot for the current directory. It says it has deployed the application; I did the run in the current directory — that's why I gave the dot — so the state is preserved there rather than in a random directory under /var/lib/atomicapp. Let's look at the Marathon UI. It says my DB and WordPress apps are running now. Let me click and see what's there... yes, WordPress is now running on Marathon, the same Nulecule application, and I could do the setup, but I won't go into that.

Now let me undeploy it. I'll run the same image with the mode stop and a dot for the current folder, and it will undeploy it. If I go back to Marathon... yeah, it has torn things down; I do not see my containers running here anymore.

So now it's time for Kubernetes, and there is another way to run it. Until now I was using the atomic CLI to run Atomic App. As I said, we have a reference implementation of Nulecule called Atomic App that runs and manages Nulecule applications, and the atomic CLI indirectly calls that code to run it from the image. But if you want access to the newest features, which are being developed day by day, you can use Atomic App directly; I personally use atomicapp because that's the latest code, and it takes some time for features to land in the atomic CLI. So I'll show you how to use the atomicapp commands directly. Atomic App supports more commands than those exposed through the atomic CLI; that's why with the CLI we have an option called mode, where I specify fetch sometimes, run sometimes, stop sometimes, whereas in atomicapp these are first-level subcommands on the command line.
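A sketch of that unattended flow with the atomic CLI. The --mode and --destination options are how the talk describes it, but treat the exact flag spellings and defaults as assumptions that may vary across Atomic App releases.

```
# fetch the Nulecule image and unpack it into ./wordpress
sudo atomic run projectatomic/wordpress-centos7-atomicapp --mode fetch --destination ./wordpress
cd wordpress

# fill in credentials, the database host, and provider = marathon
sudo cp answers.conf.sample answers.conf
sudo vi answers.conf

# deploy; '.' keeps the application state in the current directory
sudo atomic run projectatomic/wordpress-centos7-atomicapp .

# undeploy when finished
sudo atomic stop projectatomic/wordpress-centos7-atomicapp .
```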
So I can fetch directly, as I'll do now, with the destination set to wordpress, and run it with sudo; I don't need the atomic CLI to use Atomic App, I can use the atomicapp command directly. I cd into wordpress again and do the same thing: copy the answers.conf.sample file to answers.conf and edit it, and I'll just use some sample values again. I don't need to override the MariaDB host this time, because Kubernetes comes with services, which can map names to IPs, and the provider I'm specifying by default is kubernetes.

Let me check whether Kubernetes is running anything... no pods, and no services either... oh, there is the kubernetes service running. So I'll do sudo atomicapp run with a dot. It says it has done the deployment, so I'll watch kubectl get pods... they already came up; wow, this didn't come up so fast yesterday.

("When it launches the application into Kubernetes, can you specify which project, which namespace?") Since we use the raw template files for Kubernetes, you can do literally anything; you can define the namespaces it gets deployed to. Here the namespace is default by default, but we can override it. Let me have a look at the services with kubectl get services. I see the WordPress service and the address I can reach it on; let me go and find it... yes, WordPress is running on Kubernetes now. I can now do sudo atomicapp stop with a dot, and it will undeploy. If I do kubectl get services, the services have been taken down, and the pods are being taken down too; they're terminating right now and will finish in a moment.

Now, something interesting: I'd like to showcase one of the new features I have worked on. We have these Nulecule images now, and the library of them is up on GitHub, so we need a way to list them — an index of Nulecule images where you can look up the featured providers and find the name of the image you have to download to run and consume the Nulecule application. So we have a command called index. Am I using the right version? I'm using an updated version built from master; has the index command change been merged? I don't know why it's not showing up... yeah, it's there now.

So we have a command called index list, which points to the upstream nulecule-library repository and lists the Nulecule images uploaded there. This is the repository that contains all the Nulecule images we officially support, and the command reads from that repository. But if I want, I can generate my own index. All I need is a local nulecule-library repository, or something similar; I have a local checkout, so I can run atomicapp index generate with the location, and it generates and updates the index file from that repository. So if you have your own index repository, you can build the index yourself on your machine. If I run atomicapp index list again, it shows the entries from this repository, which on my laptop is the same thing. It shows the different providers each image supports — O for OpenShift, K for Kubernetes, M for Marathon — and it also tells you the image name to pull from the Docker public registry. That's the information available from the index command.
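The same flow using the atomicapp command directly, plus the index feature, sketched from the demo above. The subcommand names follow what was shown in the talk, but the exact options may differ by release.

```
# fetch and configure
sudo atomicapp fetch projectatomic/wordpress-centos7-atomicapp --destination wordpress
cd wordpress
sudo cp answers.conf.sample answers.conf   # set provider = kubernetes, db credentials

# deploy against the Kubernetes provider, keeping state in the current directory
sudo atomicapp run .
kubectl get pods        # watch the wordpress and db pods come up
kubectl get services    # find the service endpoint to open in the browser
sudo atomicapp stop .   # undeploy

# list the officially supported Nulecule images, or rebuild the index
# from a local checkout of the nulecule-library repository
sudo atomicapp index list
sudo atomicapp index generate /path/to/nulecule-library
```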
To the point mentioned earlier about generating the templates: we do not want to make Atomic App complicated or bloated by making it do a lot of magic, which would also pull a lot of extra files into the process; we want to keep it a small tool. But we are considering the option of automatically generating the artifact files for different providers from a common language — something in the spirit of Docker Compose, but not exactly Docker Compose. We are thinking along the lines that there should be an easier way to get into this; right now every platform is totally different, you learn one and then realize the tooling isn't for you. There should be a high-level template definition that describes how to deploy a multi-container application on the orchestration providers, irrespective of which one, something simple to write and simple to understand. So we are thinking about something that takes a universal format and generates the artifact files for you, and then ships them using Nulecule, because Nulecule does what it does quite well and it's simple. We want small, well-scoped pieces that clamp together, make it work, and deliver a bigger story.

Here are some pointers: we have a website at projectatomic.io, the repositories are on GitHub at projectatomic/nulecule and projectatomic/atomicapp, and you can reach us on the mailing list at container-tools@redhat.com. Does anyone have questions? We can discuss the spec.

("One of the big problems with constructing a generic spec is that for very basic functionality it's possible to have a sort of universal high-level language, but once we get into things like mapping ports, storage, other areas where the different orchestration frameworks differ radically from each other, it seems a lot more challenging to come up with a generic approach. In the existing world that's an issue that's unresolved. Any thoughts on what that would look like in the future?")

The challenge in this space is that the underlying technologies are so different, and they are moving so fast, that it's very difficult to level the surface and build a common spec. But even if you do that — so we are coming totally from the developer side: we want near-zero barrier to entry for developers to get their containers to production without having to learn anything so deep. From the developer side, I don't want to be bothered with which ports I expose my service on, or which networks; for example in Docker Compose you can specify which networks your container joins, but as a developer, why do I care? Choosing networks and all of that is the job of IT. ("But right now, what that works out to is developers writing a bunch of shell scripts that get thrown over, which is not really a solution.") It's not a solution, but what we are targeting is that developer audience, with a spec that is very easy to discover and very high level, without pushing the details of the platform on you. You define everything in an abstract way. For example, networking in Docker is a way to isolate applications, but how you isolate applications is totally an IT decision, an
infrastructure decision. You can isolate applications either by putting firewall rules in place or by creating private networks; it's an infrastructure decision, and a developer shouldn't have to bother with it. What the developer should specify instead is: okay, I have application A and it depends on application B, so A needs to access B, and the DB, for example, doesn't need to know about the application at all — if the DB tries to access the application, something is wrong. So you define your dependencies at a high level and let the tooling underneath, or the ops people, decide how best to achieve that. We are actually discussing exactly these kinds of issues right now: how we can define applications in a developer's language and separate the platform-level details from the high-level architecture specification. It will take some time, because as I said, all the tools are totally different, and we have to get people on board when we talk about such a thing; if we just do it inside our own organization, it's not going to make sense, we have to get everyone on board. And leveling the surface is difficult: Mesos plus Marathon is a totally different take on how you deploy things from Kubernetes. And even if you level it, the goalposts are all moving fast and will keep changing. It's not like when we came up with the virtualization world, where things had settled, people knew what to do, and things were not moving that fast; in the current ecosystem things move so fast that even if you keep leveling, you will be running along with the bulldozer, leveling the road the whole way. But still, such a tool is necessary. I have spoken to a lot of developers and they feel the need for it, because I have met a lot of people who try a platform, invest energy into doing things that platform's way, only to realize later that it doesn't work for them — and then they don't want to move off it, because they have an investment in the platform. If you have a tool like this, it lets people experiment and choose the tool that actually suits their purpose. Does that partially answer your question, or not at all?

("Kind of. I mean, that piece is good, but we need the other piece, so that the ops people can actually deploy what the developers build.") Yep. ("And the other piece is going to be a little more complicated, because we're going to need what effectively amounts to a separate driver for each orchestrator. And there are some things that people will never be able to cover; I was looking at what it would take to support Kubernetes PetSets in Nulecule, and the first question is whether we even want to go there.")

So for now we want to confine ourselves to just the packaging of multi-container applications; we don't want to do the magic bits. But yes, a tool for doing the magic bits, one that can generate the artifacts for you, is needed; that's going to be complicated, but that's where the fun is. We take the hardships so the life of the users gets easier.

("What about the different layers of the stack?") Do you mean composability? Nulecule supports composability, as I said. Nulecule works like this: you have a big app, it depends on apps A and B, and apps A and B
could again be collections of Nulecules. Once you package something as a Nulecule, it's a black box for you: you don't need to look inside it anymore.

("So you said it's a black box for developers, right?") Or I'd say it's a black box for the consuming Nulecule application: if Nulecule application A is consuming B, it doesn't need to know what B is made of.

("Yes, but you said developers don't have to care about infrastructure. So me, as the infrastructure guy who has to back up, for example, that MariaDB: I don't want to know what the developers do there, but I need to, because backups are very much my job, and developers mostly don't want to deal with backups. And that's not solved here, I don't think.")

Yeah, those kinds of things we haven't addressed yet. I think we need to talk with more people; this is one of the requirements you just came up with, so maybe we need more mindshare around this tool so that we can hear your needs and come up with the tooling necessary for these kinds of issues. Currently we don't have a direct solution for that. In Nulecule you can do it at the artifact level — you can write Kubernetes templates and so on, and run something there that backs up your databases — but it's not a solution provided by Nulecule directly. If it is a requirement, we would love to look into it and create a tool that actually solves problems for both dev and ops; you are welcome to open an issue on Nulecule. I think it's time — the other speakers are waiting — so I'll end here. Thanks.