All right, I'll get started so you can go find a beer in 20 minutes. OK, hi. I'll be as brief as possible. I tend to speak very softly, so if you cannot hear me or my English is not clear, just tell me, show me. So I'm going to talk about how we build features at Serv. When you're starting from stories, the traditional Scrum stories, you have many things happening at the same time, and the way you usually build this into staging servers is all or nothing: either everything goes in or nothing goes in. That doesn't make us very reactive. The day we are on microservices it will be another story, because we can really decouple things as much as possible. But in the meantime we had to find a solution, because we have business pressure. So how did we manage to build a monolithic app and still be able to deliver feature by feature, or at least demo feature by feature to the product guys? That's the talk I'm going to give tonight. It's teamwork, I'm not alone, there are people behind this, so I want to mention that. About me: I'll skip this. So that's the agenda: what we do at Serv and the stack we have; the problem we are solving; then the feature build, with a few slides explaining what's happening, and I can also do a live demo of it. Then back to the problem for some retrospective: did we solve the problem? What are the trade-offs of this kind of thing? What are the limits of this solution, and what are the potential enhancements? There's a lot of DevOps in this thing. It's very trendy to use the word DevOps. I'm not a DevOps guy, I'm just a CTO, which in the end can be described as doing whatever the team doesn't want to do. So I ended up doing DevOps. So what do we do at Serv? We are a Singapore-based startup building a SaaS platform for SMEs.
So typically, the air-conditioning company that comes to service your aircon three times a year. Most likely they're not using a system. These are the kind of customers we have: small industrial companies that nobody really cares about when it comes to technology. Before us they were using a whiteboard and paper to organize their business; with us they use technology. We also have a mobile aspect and so on and so forth, but I'm going to skip this. So that's our technology stack: everything in Java Spring on the backend, AngularJS on the frontend, and the data lives in MongoDB. Tomcat is where we execute the web code. The rest, Bootstrap and so on, we don't care about. We don't have native mobile yet, which is why we have Ionic for now, but we are moving to native. DevOps-wise, what do we use to build? We're using Amazon Web Services, of course. We're using Packer and Terraform from HashiCorp; they're doing cool stuff. We're using Jira for project management and that kind of thing, SonarQube for quality, and Jenkins to build, of course. All the tools I'm mentioning here are actually working under the hood to do what I'm going to present today. So that's what we try to solve: having many features, and going from the build-everything-or-nothing state to building feature by feature, so we can present them feature by feature. Here's how it works for us. It starts from a Jira ticket, which is a feature. It goes into GitHub. Then we trigger a command on Slack. Slack talks to Jenkins, and Jenkins creates a one-time, throwaway build job for that specific feature. Then we invoke Terraform, and Terraform builds the whole infrastructure from scratch. We are not recycling environments: we build everything every time and destroy it at the end, per branch. I will show you in detail, don't worry.
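To give an idea of the glue involved, here is a minimal sketch, not our actual scripts, of the kind of mapping the pipeline needs: turning a branch name into a per-feature Jenkins job name, DNS hostname, and Terraform directory. All prefixes and the domain are hypothetical.

```python
# Hypothetical sketch: derive per-feature resource names from a Git branch.
# The prefixes and the domain are made up for illustration.

def feature_names(branch: str, domain: str = "server.sg") -> dict:
    """Map a branch like 'jira-687' to the names the pipeline would use."""
    slug = branch.lower().replace("/", "-")
    return {
        "jenkins_job": f"feature-build-{slug}",   # throwaway Jenkins job
        "hostname": f"{slug}.{domain}",           # Route 53 record for the stack
        "terraform_dir": f"features/{slug}",      # per-feature Terraform workspace
    }

names = feature_names("jira-687")
print(names["hostname"])
```

Everything downstream (the Jenkins job, the Route 53 record, the Terraform workspace) can then be created and destroyed from these deterministic names.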
Provisioning-wise, then, sorry: the job is created, and the developer executes the job when the code is ready to be shipped. Then we provision through Ansible; I mean, Ansible is provisioning the stuff. In the end, what we want is to give the product guys a URL like feature-name.server.sg. Then they can look at what has been built and map it directly to the story described in Jira. If it's good, then it's passing: we merge the code, and we can actually push it directly as a hotfix, for instance, and make it happen in production in the end. That's how we bypass the limitation we have from building a monolith. When we delete the branch, it destroys everything: the infrastructure that was set up just for this gets torn down, as well as the Jenkins job. So should I... where is it? I didn't track the time, so... It all starts in Jira. We have a ticket; the developer branches out and names the branch after it. In Slack, a simple command: /serv branch build. What happens is some scripts detect that a new branch has been created, then create the Jenkins job and configure it properly, then build the infrastructure. It takes a bit of time for Terraform, which knows what the technology stack looks like, to actually build it up, a few minutes. Then it takes care of the DNS and all that stuff on Amazon through Route 53. Once that's done, the thing is ready to be deployed. The first time, the developer has to ship: he has to build the specific branch using the Jenkins job that has just been created. So that's the infrastructure we build every time. From zero, we get this: on the provider we have three security groups, two instances for now, simple: one web instance and one DB. EIPs associated with each part of the stack, and a few DNS records. That's what is done in the background.
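As a rough illustration of what Terraform is describing here, a per-feature stack like the one above could be sketched as follows. This is not our actual configuration: the AMI, zone ID, instance types, and names are placeholders.

```hcl
# Sketch of a per-feature stack; AMI, zone ID, and names are placeholders.
provider "aws" {
  region = "ap-southeast-1"
}

resource "aws_instance" "web" {
  ami           = "ami-00000000"   # pre-baked image built with Packer
  instance_type = "t2.medium"
}

resource "aws_instance" "db" {
  ami           = "ami-00000000"
  instance_type = "t2.medium"
}

resource "aws_eip" "web" {
  instance = aws_instance.web.id
}

resource "aws_route53_record" "feature" {
  zone_id = "Z0000000"             # hosted zone for the staging domain
  name    = "jira-687.server.sg"
  type    = "A"
  ttl     = 300
  records = [aws_eip.web.public_ip]
}
```

Because everything is declared in one place, `terraform apply` brings the whole stack up and `terraform destroy` tears it all down again when the branch is merged or deleted.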
And this is only for this specific feature. In Jenkins, it looks like that. We provision the branch through Ansible. That URL is what we give to the product guy, actually; that's the end result we're going to have. And when we destroy it, you just go into GitHub and either merge the branch or delete it; that runs the Slack command again, and it destroys everything and cleans up. In Slack it looks like this, so maybe I'll show you how it looks in real life. I'm going to start in GitHub. I have a clone of our project, and I'm just going to do something simple: create a new branch. Let's call it jira-687. I have my new branch. Now I go into Slack and run the command. We also did something more, for the fun of it: we have the scripts push information into Slack about what's going on. So it's going to detect our branch and build the stuff; it takes two to three minutes. What is important is that, if we look at a previous run, you get this link. That's the Jenkins link, and I'm going to get one soon. If I click on the link, I get to the unique... Someone asked: is this the Jenkins integration for Slack? No, not directly; that's something we wrote ourselves. There is also a Jenkins integration for Slack, but we're not using it. It's just our script getting the URL that was created for the occasion through the Jenkins API and pushing it to Slack. So now the infrastructure is being built. Once this is done, I go back to Jenkins, I was just there, and I just need to run that Jenkins job: it builds the binary and finishes provisioning the stack, because for now we've just built bare servers from scratch with Terraform, but we don't have anything inside.
We still haven't set the DNS hostname, for instance, and on the database side we haven't loaded the dump we're interested in. That's what happens at the Jenkins job stage. And once that's done, we're ready to connect to the server. So yeah, it takes a few seconds. Any questions in the meantime? Someone asked: your Jenkins job is named after the branch, so what happens if you run two jobs for the same branch? When you invoke the Slack command, it checks whether there is already a job running for this branch, and if one is already running, it stops. Okay, cool. So now I go back there and I can run the build. The thing is, we may have one build or maybe five builds for the same feature: if there's any back and forth between product and dev, "you did this, but I'm not very happy", you just work again on the same branch, modify stuff, and rebuild again from that job until everybody is happy. That job disappears as soon as the feature is either deemed ready to ship, acceptable from a product perspective, or dropped, and then they delete it. If we look, and I guess you're all familiar with Jenkins, that's what's going on here: now it's Ansible running, actually. We chose Ansible rather than Puppet or Chef and the like because we didn't want a master/agent kind of relationship; with Ansible you don't need to have that kind of infrastructure behind it, and it's good for this. You can see what it does: we're using some binaries like supervisord, for instance, to run some Node.js kind of things on the web stack. Configuration is happening at that point, on both servers. And it crashed. And it's the first time it crashed.
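The provisioning steps just described, setting the hostname, shipping the WAR into Tomcat, keeping the Node.js helpers under supervisord, loading the database dump, could look roughly like this in an Ansible playbook. This is a hypothetical fragment: the paths, group names, and variables are invented for illustration.

```yaml
# Hypothetical playbook fragment; paths, groups, and variable names are made up.
- hosts: web
  become: true
  tasks:
    - name: Set the per-feature hostname
      hostname:
        name: "{{ feature_name }}.server.sg"

    - name: Deploy the application WAR into Tomcat
      copy:
        src: "builds/{{ feature_name }}/app.war"
        dest: /var/lib/tomcat/webapps/ROOT.war

    - name: Ensure the Node.js helper runs under supervisord
      supervisorctl:
        name: node-helper
        state: restarted

- hosts: db
  become: true
  tasks:
    - name: Load the reference dump into MongoDB
      command: mongorestore /opt/dumps/staging
```

Because Ansible is agentless and works over SSH, nothing has to be pre-installed on the freshly built instances beyond Python, which fits the build-from-scratch approach.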
But usually when it crashes, it's in the Jenkins Cargo step that ships the WAR into Tomcat. Sometimes it does that, and it hadn't done it for a while. So what if I run it again? It doesn't resolve the host; my screen is very small, so let's see what happens if I run it again. Let me look back at the slides. I was afraid of Murphy's law. We've been using this for three months and it's the second time this happens, honestly. I need to check that. Okay, another run, and if it doesn't work, I'll move on. You will see that things have been created. That's also the power of tools like Packer and Terraform: you don't have to go into AWS and do things manually. You just use those binaries. For Terraform, for instance, you describe what your infrastructure looks like: I have this load balancer, it sits under this VPC, then I have this security group. You describe that and link them together in a configuration file, and once you're done, you just run Terraform, and Terraform takes charge of building it. It does that for AWS, but you can apply the same kind of execution to many providers in parallel. You could do that on DigitalOcean and others at the same time. So if you're using Amazon Web Services in one case and another provider for whatever reason, you can actually integrate them together and drive them concurrently. If you look at this, our stuff is here: we have one machine here, the DB here. And if I look at Route 53, for instance, I will find the same thing. The good thing with Terraform also is that once I delete or merge the branch and the destroy command runs, it cleans all that up, which is cool. There are many ways to do this in other manners, using other technologies. This is one of the things we tried; I'm not saying it's the best, but it worked. Yeah, okay, now it works.
So if I go back to this one, what's the address? It's jira-687. So now I can actually check it and log in. I don't know why it crashed earlier, but now I have our platform with just that feature. And that result is what we wanted: the product guy knows that exactly what is described in this feature on the Jira side is what he's seeing. He doesn't even have to wonder whether it's this plus that, or whether a related bug fix got pushed at the same time. For us it's a good trade-off, a good in-between from monolith to microservices. I don't think this will last once we're on microservices, for instance; we'll have to redefine it. So now let's imagine the product guy is okay with this. I'm just going to go and delete the branch and run the Slack job again. Someone asked: how do you manage state for each Terraform run? So you know Terraform, right? It has the state file that is created on the first run. We don't change the actual infrastructure in the meantime, so it just refers to the file it created on the first run. When it's destroying, it looks at that file, which has been saved on our integration server, and says: okay, this one, I'm destroying it. Is that in the Jenkins workspace? No. The way it works, we have an overlay of a series of scripts: for each feature we create a directory, and Terraform is run inside that directory. We have a lot of templating happening in the background to match the DNS kind of stuff and modify the configuration. Terraform is set up in such a way that we have one master file with a lot of placeholders, then we have profiles, and the profile is generated automatically; behind it there is a templating run. We're using Jinja2. So now, if I look back at Jenkins, this job doesn't exist anymore. It shouldn't exist anymore.
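The master-file-with-placeholders idea can be illustrated like this. They use Jinja2; this sketch swaps in Python's standard-library `string.Template` so it stays self-contained, and the placeholder and resource names are invented.

```python
from string import Template

# Master Terraform file with a placeholder (invented names, for illustration only).
MASTER_TF = Template("""
resource "aws_route53_record" "feature" {
  name = "${feature_name}.server.sg"
  type = "A"
}
""")

def render_feature_profile(branch: str) -> str:
    """Generate the per-feature Terraform profile from the master template."""
    return MASTER_TF.substitute(feature_name=branch.lower())

print(render_feature_profile("JIRA-687"))
```

The generated profile is then written into the per-feature directory, so each Terraform run (and later its destroy) operates only on its own files and state.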
Yeah. So I don't have it anymore, right? You don't see it either. If I look back at Amazon, I shouldn't see anything anymore: my instances should be terminated, and it's the same for all the resources we created for this thing. Going back to the slides, I'll finish quickly. So what are we learning from this? It's more a walking prototype than anything else, right? It's one proposed approach, but as I said, there are many. Delivering by feature, nothing is new there. The thing we are tending towards here is immutable infrastructure. The idea we are trying to illustrate is: what about, instead of updating your different servers with your new update, your patch, a new binary, whatever, we just rebuild everything from scratch every time? How does it work? Tools like Packer and Terraform help to do that, and you have others that have been around for some time; there's this new .io thing that launched last month, and another one whose name I forgot, but there are a few doing this. So that's maybe a way to go towards immutable infrastructure. We are in between, actually: we deliver by feature, which is not new, combined with immutable infrastructure. I haven't seen that so many times; I'm not even sure I've seen it at all, so if anyone is doing it, I'm curious to know. There are benefits from a business and product perspective. We did it to sustain the product, actually, and it's cool because we are very responsive compared to what we were before; in this kind of context, doing Java, we are a bit more agile. No confusion about what's actually deployed: you know it's that feature and only that feature, so it takes a lot of questions out of the equation. And a branch can be hotfixed once it's approved, which is a good thing if we need to push it quickly. From a technical perspective, everything is an image on Amazon Web Services, which is super good.
And it's more predictable, since we are addressing it 100% programmatically; we are not going into the console, which is nice. We don't do environment recycling, with all the problems you might find with that kind of approach, so we know exactly what we have, and it's as reproducible as we can make it, of course. We use Packer to craft the image that Terraform uses to build and run the system: it's a pre-baked image. We're not using the Amazon CentOS 7 base image directly; we already process that image into something else that has as many of the common things we need as possible. For instance, we do the system updates at that point, we assign the IOPS at that point in time as well, and a few things like this. A few limits, of course. How do you keep track of the full life cycle? We are increasing the single points of failure by doing this, because the more systems you add around it, the more of that kind of problem you have. On the Packer side, when you craft the image, you need to manually describe the targets, which is a bit tedious. Ansible is super hard to maintain: the first time you do it, it's fine, but when you need to update it, it becomes more complex, and it's hard to test, very hard to test actually; you have to do everything manually and test by doing it. So that one is a bit annoying. Terraform also, you have to describe everything manually, so it's very tedious. Dependency on Jenkins: Jenkins can be quite tricky; you can lose track of everything you've put in it, and you can't really show what your Jenkins does anymore. So that could also be a problem. A lot of templating too, and that's one of the things that could be avoided, especially all the dynamic stuff like the EIP addresses, names, DNS and so on and so forth. So, many moving pieces. I'm not sure. So the question is: is it worth it?
I mean, do we benefit enough to take this kind of risk? Because we have so many moving pieces, you can question it, actually; I wouldn't blame you for thinking it's maybe overkill. And yeah, there's a lot of syncing with the product team to be done. What else? Enhancements. We could use more tools to orchestrate, something a bit higher level: StackStorm, Spinnaker maybe, I don't know. Or build a microservice for this; we have a lot of options. Choosing the branch to build from Slack: right now the thing just looks for all the branches that have been branched out and builds them; we didn't code it in such a way that we can target one specific branch. We need to work more on the Slack side: you cannot just pass a Slack argument through, it doesn't work like this; you need a man-in-the-middle service that takes the command, translates it into something else, and pushes the argument on. Ansible might be a solution there, I don't know. And we are moving to microservices, so how do we evolve this to cater for microservices? And that's pretty much it. Sorry I was a bit... Thank you very much. Yeah.
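That man-in-the-middle idea for slash-command arguments could be sketched like this. This is hypothetical, not our code: it just parses the text a slash command would send and decides whether to build or destroy, and which branch, before anything is forwarded to Jenkins.

```python
# Hypothetical sketch of the "man-in-the-middle" service for Slack commands:
# parse the text after "/serv" (e.g. "branch build jira-687") into an action.
# The command grammar and field names are invented for illustration.

def parse_slash_command(text: str) -> dict:
    """Turn slash-command text into an action plus an optional target branch."""
    parts = text.split()
    if len(parts) >= 2 and parts[0] == "branch" and parts[1] in ("build", "destroy"):
        return {
            "action": parts[1],
            # No branch given means "scan all new branches", as the talk describes.
            "branch": parts[2] if len(parts) > 2 else None,
        }
    return {"action": "unknown", "branch": None}

print(parse_slash_command("branch build jira-687"))
```

A small HTTP service receiving Slack's payload could call a parser like this and then hit the Jenkins API with the resolved job name.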