I'm Milindu from Sri Lanka. I'm a contributor at SCoRe Lab, which is a popular organization among Google Summer of Code and Google Code-in students, and also a contributor at LeopardsLab, which is a very young organization, only a few months old. At LeopardsLab we have been working on several projects, small tools that help people who work on the dev and ops side. The tool I'm going to share with you today is something we started a few months back.

Before going into the tool, let me describe the problem we are trying to solve. Most of you have used at least one task-runner tool in your life, right? Maybe Grunt, maybe Gulp, maybe Ant, maybe Makefiles. I'll take Grunt as an example. With Grunt you can run any kind of JavaScript-based task, but when it comes to binaries we have problems. We have to install those binaries on our machines, and sometimes that is not very straightforward. Once the project is done we probably don't need those tools again, but we forget to uninstall them, or, because installing them took so many steps, we find it hard to remove them completely. The next problem is that when I share my code with someone else, they have to go through all of that again. And when it comes to continuous integration, like building on Jenkins, I have to install all those binaries on the Jenkins instance as well, which I don't like: I can always just format my own machine and remove things, but on Jenkins that's much harder. My approach is to use Docker for this, and Docker does a really nice job here.
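The containers-as-binaries idea can be sketched like this. This is a minimal illustration, not my exact scripts; the image names, paths, and the `bootstrap_project` helper are examples I'm making up here:

```shell
#!/bin/sh
# Sketch of using containers as throwaway binary tools instead of
# installing the tools on the host machine.

bootstrap_project() {
    # Mount the project into a container, run one tool, and discard the
    # container afterwards (--rm). Nothing gets installed on the host.
    docker run --rm -v "$(pwd)":/app -w /app composer composer install
    docker run --rm -v "$(pwd)":/app -w /app php:7 php artisan key:generate
}

# Only attempt this where Docker is actually available.
if command -v docker >/dev/null 2>&1; then
    bootstrap_project
fi
```

Nothing leaks onto the host: when the project is over, there is nothing to uninstall, and a teammate or a Jenkins box only needs Docker itself.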
So I usually have several scripts that run Docker containers as my binary tools. These are very simplified, stripped-down versions of my actual code. You can see I'm using a Bower container to run `bower install`, then a Composer container to run `composer install`, and then a PHP container to run `php artisan key:generate`; this initializes the project. And when you come to deployment, you can see there are a lot of environment variables being passed, and maybe a lot of volumes we need to mount, so things get a bit complicated with plain scripts too. So we thought: can't we have a small tool that does all of this, where we only have to define everything in one small YAML configuration file? That is exactly what Dana does.

This is a simple Dana configuration file. You can see I have defined two tasks here, one called build and one called deploy. My build task has three steps: the first is `bower install`, the next is `composer install`, and the third is `php artisan key:generate`. You can see that in my build task I'm using three separate containers, three separate Docker images. And in my second task I'm using the AWS CLI to deploy my application to Elastic Beanstalk, passing environment variables very easily. At the end, all I have to do is say `dana do build` to run the build task, or `dana do deploy` to run the deployment task.

Here is a slightly more complex example. You can see I have moved `composer install` out into a separate task, so I can just say `dana do compose`, because I need to run composer install again every time I add a new dependency. And in the build task, the second step is named `@compose`, which means every step of the compose task will be imported into the build task at that point.
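Based on that description, such a configuration might look roughly like this. This is a hypothetical sketch: I'm guessing at the exact key names and layout, which may differ from the real tool's format.

```yaml
# Hypothetical Dana configuration (key names are assumptions)
tasks:
  compose:
    steps:
      - name: composer install
        image: composer
        command: composer install

  build:
    steps:
      - name: bower install
        image: bower              # example image name
        command: bower install
      - '@compose'                # import every step of the compose task
      - name: generate app key
        image: php:7
        command: php artisan key:generate

  deploy:
    steps:
      - name: deploy to Elastic Beanstalk
        image: amazon/aws-cli     # example image name
        command: aws elasticbeanstalk update-application --application-name
```

With a file like this, `dana do build` would run the bower step, the imported composer step, and the key generation, each in its own container.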
So when I say `dana do build`, it will first run bower install, then composer install, and then the PHP artisan step. Now let's come to the compose task. You can see here I am mounting my volumes. You can mount any number of volumes to your task like this, with normal YAML arrays. And when it comes to deploy, you can see I am passing environment variables. If you look at them, the last one, `AWS_DEFAULT_REGION`, is a simple one: I'm just setting it to `us-east-1`. But for the first two, the access key and the secret, I am not hard-coding my actual keys in my Dana configuration, because I'm going to push this file to my GitHub account, and I don't want those secrets in my source code, right? What actually happens is I'm telling Dana to pick these AWS keys up from the host machine's environment variables. And the good thing is you can also have a `.env` file: define the AWS key and secret in your `.env` file, and Dana will pick them from `.env`, or, if they are not there, from the host environment.

And here you can see I'm passing a parameter: I'm saying `dana do deploy production`. In the deploy task, the last line is the relevant one. When I say `dana do deploy production`, the command is `aws elasticbeanstalk update-application --application-name`, and `production` becomes the next argument. This way we can pass parameters to our commands and customize them with runtime arguments.

And here you can see I'm mounting several folders into my container. On one of them I've written `rw`, which means it is a read-write mount. For folders like logs you need a way to get your logs back onto your host machine, so I'm mounting my logs folder as writable. The other one I'm mounting read-only, so nothing inside the container can change my SSH keys. Read-only is the default.
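Putting those pieces together, the deploy task with volumes, environment passthrough, and runtime arguments might look something like this. Again, this is a guessed sketch of the syntax (paths, key names, and the `:rw` suffix placement are assumptions), not verified against the real tool:

```yaml
# Hypothetical sketch (key names and syntax are assumptions)
tasks:
  deploy:
    volumes:
      - ~/.ssh:/root/.ssh              # no suffix: read-only by default
      - ./storage/logs:/app/logs:rw    # rw makes the mount writable
    environment:
      - AWS_ACCESS_KEY_ID              # no value: taken from .env or host env
      - AWS_SECRET_ACCESS_KEY
      - AWS_DEFAULT_REGION=us-east-1   # hard-coded, nothing secret here
    steps:
      - image: amazon/aws-cli
        command: aws elasticbeanstalk update-application --application-name
        # `dana do deploy production` would append "production" here
```

The point of the bare `AWS_ACCESS_KEY_ID` entries is that the secret values live outside the committed file, in `.env` or the host environment.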
So you don't have to bother writing anything extra if you want a mount to be read-only. And here's a cool thing. In the build task you saw we use three separate containers, right? Usually what we do is pull one container and run its task, then pull the second container and run its task, and so on. But with async pulling, Dana first goes through all the steps and pulls all the images, from Docker Hub or whatever registry you have, and this happens in parallel. So if you have ten containers in one task, all ten images are pulled from Docker Hub at once, which saves a lot of time. Then the steps start running one by one, sequentially.

And the good news is that this morning we released version one. This is a very young project: we have only worked on it for three months, with just a few developers. So if you are interested in using Dana, you can install it with `snap install dana`, which will give you a working binary of the Dana tool. We are also planning apt support, with a private PPA. We were working on that but couldn't get it done in time, so hopefully before the conference ends we'll have apt support as well. Brew is also on the way, so anyone using a Mac will be able to use Dana on their machine too.

And this is really why we are here. Dana is a very new tool, so we need feedback from you, because you are open source contributors. We need a lot of people contributing, and we need a lot of feedback and your expertise. Maybe you have better ideas or better suggestions for us, or maybe you know similar tools that do the same thing, or better ones. So come talk to us, give us feedback, and help define how we should go forward with our development.
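The pull-then-run scheduling described above can be simulated in a few lines of shell. This is an assumed sketch of the behavior, not Dana's actual code; `docker pull` is replaced with an `echo` so the sketch runs anywhere:

```shell
#!/bin/sh
# Simulation of the scheduling described above: all images for a task
# are pulled in parallel first, then the steps run one by one in order.

pull_image() {
    echo "pulled $1"    # stand-in for: docker pull "$1"
}

run_task() {
    # Phase 1: start every pull in the background, then wait for all
    for img in "$@"; do
        pull_image "$img" &
    done
    wait
    # Phase 2: execute the steps sequentially, in declared order
    for img in "$@"; do
        echo "step done in $img"
    done
}

run_task bower composer php
```

Because the waits happen up front, a slow registry costs you only the time of the slowest pull, not the sum of all of them.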
So feel free to go to this URL and give us some feedback, so we know which features we should be implementing in the future. Also, Dana is just a code name, not the final name. Dana simply stands for "Docker runner", so we need your ideas for a cool name too. Come to our GitHub issue: you can start contributing from there, give us suggestions, and we can have discussions over there. Once we finalize a name, we can have a logo, and maybe we can ship you some swag, stickers with whatever name we settle on. So please help us with this small project, which is very young at the moment. I hope you will try it and give us some feedback today.

And that's it, actually. You can go to the Dana repository on GitHub, and if you like, you can contribute. If you are expecting to participate in GSoC, we are under SCoRe Lab, and SCoRe Lab will have slots for Dana too. So if you are expecting to participate in GSoC this year, maybe you can come contribute to Dana and get a slot there. You can find me as Agent Milindu, on GitHub and on Twitter.

We have time, right? So one small extra thing: installer.to is also a kind of pet project. You know how we install some tools with a one-liner like `curl get.docker.com | bash`, a small script that installs the tool on your machine? In the same way, we are planning to have Dana on installer.to. You run `curl installer.to/dana | bash` and it will install Dana on your machine, whether via apt, via brew, via yum, whatever; we will have a small script uploaded there. installer.to is a pet project of LeopardsLab, so if you have tools of your own and you want to publish them with us, you can come to installer.to under LeopardsLab and create a PR for your small script.
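A script like that has to figure out which package manager the machine has before it can route the install. This is only a sketch of what such an installer.to-style script might do; the real script may work differently:

```shell
#!/bin/sh
# Sketch of an installer script: detect an available package manager
# and route the install through it. (Assumed behavior, not the real
# installer.to script.)

detect_pm() {
    if command -v apt-get >/dev/null 2>&1; then
        echo "apt-get"
    elif command -v brew >/dev/null 2>&1; then
        echo "brew"
    elif command -v yum >/dev/null 2>&1; then
        echo "yum"
    else
        echo "none"
    fi
}

pm=$(detect_pm)
echo "would install dana via: $pm"
# a real script would now invoke the detected manager, e.g. apt-get install
```

Piping it through `bash` then gives the same one-line install experience on Ubuntu, macOS, or a Red Hat box.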
So you will be able to install your tool with `curl installer.to/<your tool name> | bash`. So that's it from me. Questions? I think that's where we are. Wow, you have covered the subject so completely that there is nothing left to ask. Seriously, nothing? Nope. Going once, going twice. All right, in that case, thank you very much.