This talk is about how we went from shipping code about once or twice a month to shipping code every single hour, and the journey of how we got there. First, a bit about Haptik itself. Haptik started out as a personal-assistant application on the app store, and from there it has grown into a full-fledged chatbot development platform. The app is still out there, going strong, among the highly rated apps in India. The entire platform now processes about 1.4 million messages every single day, and we have 40-50-plus bots deployed across multiple aspects of the business: the B2B side, where we build chatbots and deploy them directly for consumers and businesses; the application itself, the Android and iOS apps; as well as SDK packages deployed in other partner apps like Dynzo India and Nordic Dynzo. Across all of these applications we have already reached 20 million users, and the DAUs keep growing; that's the scale we're at.

To give you an idea of the problem we faced when trying to build these bots and ship features to the platform, let me talk briefly about how the teams operate. There are certain teams which are completely independent. The machine-learning folks are working by themselves on the models, figuring out ways to build smarter chatbots. The app team is shipping the SDK, which works independently of the chat models themselves. And there are product teams building out the chatbots. Each team here is shipping its own set of bots, its own set of features. So across the entire company there are about seven or eight different feature lines being built, with multiple bots being built at the same time.
And everyone has their own release cycle, their own set of timelines, everything they need to ship. So what initially used to happen was that a release date would be set, everyone pushed into one big release, and that created one massive release that had to go out on a single day. As you can imagine, that caused a whole lot of problems, and we eventually moved away from it.

On the stack: we're running a Python monolith. There's a lot of asynchronous message processing; we use Celery as a consumer for RabbitMQ, with a message delivery system on top. We have a few different databases: Mongo, Elasticsearch, MySQL. It's a pretty deep tech stack in that sense, because as a chatbot platform we have to be present on every surface: iOS, Android, web, Facebook, Slack, a whole bunch of other platforms. So we deal with a lot of technologies.

This is where we started out: the big monolithic release, where seven or eight teams put everything together on one staging environment, trying to get things stable while QA tries to test it all so we can ship. And when so many teams commit so much to the same release and it all merges at the same time, the moment something breaks it's extremely hard to figure out what to roll back. You're debugging multiple features at once, trying to roll back the ones that don't work, and some feature ends up delaying everyone else's release. Even after you'd fixed things on staging and released to production, you'd find yet more things broken. It was firefighting of all kinds, every time we released code.
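To make the message-processing shape concrete, here is a stdlib-only sketch of the producer/queue/worker pattern just described. In production the queue is RabbitMQ and the worker is a Celery consumer; everything here is an illustrative stand-in, not Haptik's actual code.

```python
# Stdlib stand-in for the pipeline: a producer puts chat messages on a
# queue (RabbitMQ's role) and a worker thread consumes them (Celery's role).
import queue
import threading

messages = queue.Queue()   # stands in for a RabbitMQ queue
replies = []

def process_message(user_id, text):
    # In the real platform this would run NLU and pick a bot response.
    return {"user_id": user_id, "reply": f"echo:{text}"}

def worker():
    while True:
        item = messages.get()   # stands in for a Celery worker consuming
        if item is None:        # sentinel: shut the worker down
            break
        replies.append(process_message(*item))
        messages.task_done()

t = threading.Thread(target=worker)
t.start()
messages.put((42, "hi"))
messages.put(None)
t.join()
```

The same decoupling is what lets the delivery system fan out to iOS, Android, web, Facebook, and Slack without the producers knowing about any of them.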
And even after the production release was done, you'd realize that someone on some other team had included some other package, or put in some OS-level dependency, and other teams didn't know about it. When they pulled master back into their branches, they'd find their development environment out of sync and breaking as well. So at some point it became clear: we've got to do something about this. We've got to be able to ship faster, and with far less pain than today.

So when we started out, we asked: what do we want to achieve here? What are the goals for this entire system? First, it had to have zero downtime, migrations or whatever it may be; downtime is just not acceptable for us as a company. Even in the middle of the night we have a certain number of users online, and we have to keep serving them. Second, we wanted to change from those batched large releases to lots of high-frequency, low-risk deployments. That leads to the third point: your integration problems get minimized. If you're shipping code a lot faster, you know it's just your code that went out right now, and whatever went wrong can be traced to that, not to five other things that went live at the same time. The moment you ship faster, your integration problems immediately shrink. Next, you have to manage all the dependencies and record them in a single place. All the OS-level installations we did also had to be automated: if someone on some other team installs something, it should be documented somewhere and available to everyone.
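One way to capture that "record every install in one shared place" idea is a versioned manifest that provisioning expands into install commands on every environment. The manifest format and package names below are illustrative assumptions, not Haptik's actual files.

```python
# A machine-readable dependency manifest, checked into the repo (or S3),
# replaces ad-hoc installs done by hand on one developer's machine.
import json

MANIFEST = json.loads("""
{
  "apt": ["libxml2-dev", "imagemagick"],
  "pip": ["celery==4.4.7", "pymongo==3.11.0"]
}
""")

def install_commands(manifest):
    """Expand the manifest into the shell commands a provisioning
    script (or CI image build) would run identically everywhere."""
    cmds = []
    if manifest.get("apt"):
        cmds.append("apt-get install -y " + " ".join(manifest["apt"]))
    for pkg in manifest.get("pip", []):
        cmds.append(f"pip install {pkg}")
    return cmds
```

Because the manifest is data, a new dependency added by one team is automatically picked up by every other environment on the next provision.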
The last point is more of a thought process. Engineers would just build something, pass it on, throw it over the fence, and treat it as not their problem anymore. Culturally, that's not something we want. We want people to be responsible for, and owning, their code right from conception to the moment it goes into production. That creates a culture of ownership: even if something goes wrong in production, whatever the cause, I'm still responsible for it; I wrote this code and I'm responsible for shipping it. That mindset needed to come in.

So this is the first version of what we built for CI. Each team gets its own isolated development environment. We automated the OS-level dependencies: the configurations live on S3, so anyone can go in and add to them, and anyone else can pull the latest version and push updates. The purple arrows you see here are the configurations of the environment itself.

But here's the thing: we started shipping continuously without doing any testing, and that just blew up. We were shipping more and more to staging, and I actually think it initially made things worse. The realization was immediate: you just can't have CI without tests. You can't push at that frequency, on an hourly basis, without tests that tell you you haven't broken anything else. And this became a real problem for us.
Because here we have this very large codebase with low coverage; how do you go back and start writing tests to bring that coverage up? How do you establish a baseline of, hey, this is the value right now, and grow it from there? And it's not just the developer anymore; the PM and everyone else has to buy in. Your shipping culture now has to include a testing philosophy: every change has to carry not just the code, but also a set of tests with it. So we actually had to go backwards and start writing tests, at least for key components, payments, user registration. And it was a good thing.

A key point we had missed was data: each environment's own data that had to be kept in sync across multiple environments. One of the key things was chat data. The machine-learning models we had, and the data they were trained on, would have to be re-entered in every environment, or a developer had to figure out how to manually move it across; otherwise the bot would end up broken in the staging environment. Even the configuration data of the bot, what the responses are, had to move across environments. The thing we had to realize, especially for us, was that shipping code and shipping data are at the same level. Sometimes you just need to ship a new model or ship new responses, and that's just as important as the bot itself. Data was a first-class citizen that had to move across environments, so we built that pipeline. Data goes into staging, but it also has to come back, because other people are building other features, other data is going in, and that has to be available to everyone. And we also had to make sure there are no conflicts: if two people are working on the same thing and make edits from different places, you can't have that data collide so that suddenly there are multiple versions of the same thing.
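A minimal sketch of what such promotion-with-conflict-detection could look like, assuming a simple version field on each document; the schema is hypothetical, not Haptik's actual data model.

```python
# Optimistic-concurrency check when promoting bot data (responses, model
# config) from staging to production: refuse to clobber concurrent edits.
class ConflictError(Exception):
    pass

def promote(staging_doc, production_doc):
    """Copy a staging document to production only if the staging edits
    were based on the version currently live in production."""
    if staging_doc["base_version"] != production_doc["version"]:
        raise ConflictError(
            f"staging is based on v{staging_doc['base_version']}, "
            f"but production is at v{production_doc['version']}"
        )
    return {
        "version": production_doc["version"] + 1,
        "data": staging_doc["data"],
    }
```

On conflict, the edit goes back to staging to be rebased on the live version, instead of silently forking the data.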
There has to be some sort of conflict resolution for all this data as well. Data has to be given that first-class importance; it's not just code moving from A to B, the data has to move with it.

So those were the learnings from the first version. From a process perspective, after a few months we had things set up with some sanity: the environments ran, and things were fairly stable. The focus now was on making the current set of environments work in a way that gives you confidence. It shouldn't make you more nervous that more people are shipping code. We had to have the confidence to say that the code going out is safe, even though it's shipping faster than before; and if it requires manual involvement every time, hey, this isn't working, that isn't working, it's not going to scale. So we looked at what to focus on and build upon from there.

The first thing, obviously, was tests: tests had to block releases, really testing everything before it went out, so we could be hands-off about the whole thing at a fairly strict level. The second was how each PR shipped. We set up our staging pipeline so that for every pull request the tests were run, and the results were attached to the PR along with code coverage, whether coverage had increased or decreased based on the code and the tests added in that PR. You can see it all in the labels.
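The coverage-delta label can be sketched as a tiny pure function; the thresholds and label names here are assumptions, not Haptik's actual CI output.

```python
# Compare the branch's coverage percentage against the base branch and
# emit the label that gets attached to the pull request.
def coverage_label(base_pct, branch_pct):
    delta = round(branch_pct - base_pct, 2)
    if delta > 0:
        return f"coverage-up (+{delta}%)"
    if delta < 0:
        return f"coverage-down ({delta}%)"
    return "coverage-unchanged"
```

The point is that the number is computed per PR, so the feedback lands on the person who wrote the change, while it's still fresh.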
This actually created a huge psychological impact across the whole org. Suddenly everyone can see whose PR is adding coverage and whose PR is reducing coverage. That made everyone more motivated to write more tests: hey, I'm contributing to improving the codebase; when my PR goes in, it's not going to break anything. We could then deploy automatically: the bot would just deploy to staging on its own, and we didn't have to think about it.

Then there's the server side. I don't know if you're familiar with spot instances; they're great. These are AWS servers that AWS can take back from you at any point, so you have servers coming up and down, and it's a great way to save cost. What Spotinst does for you is manage those servers automatically. You say, I need ten servers running at all times; Spotinst finds those servers at the lowest cost and deploys them for you. When a server is reclaimed, you get a short termination notice, and Spotinst brings up another server to replace the one that just went down. So it maintains a certain capacity level for you; it handles scaling, it handles rolling deployments so you don't have downtime, all out of the box, while saving a whole lot of money. You tell it the set of instance types compatible with your application, and it picks the cheapest among those based on current prices. So we put Spotinst in charge of our production fleet; I'll go deeper into the deployment itself in a bit, but essentially it pulls in the latest code and keeps a single index point for all the configuration.
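The reclaim warning mentioned above comes from EC2 instance metadata: the spot termination endpoint returns 404 until a reclaim is scheduled. This sketch shows how a worker could poll it; the endpoint is real AWS instance metadata, but the drain logic and injectable `fetch` parameter are illustrative assumptions.

```python
# Poll EC2 instance metadata for a spot termination notice (the signal
# Spotinst also reacts to when replacing reclaimed servers).
import urllib.request
import urllib.error

TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

def termination_scheduled(fetch=None):
    """Return True if AWS has scheduled this spot instance for reclaim.

    `fetch` is injectable for testing; by default it hits instance
    metadata, which 404s until a termination is actually scheduled.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=1) as resp:
                return resp.status
    try:
        return fetch(TERMINATION_URL) == 200
    except (urllib.error.URLError, OSError):
        return False

# A real worker loop would use this to stop taking new messages, finish
# in-flight work, and let the fleet manager replace the server.
```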
So staging became the place where you enter all the data for our bots, all the models, and then we transfer it to production. You could not change anything directly in production: you first go to staging, change what you need, and then transfer it to production. Staging became a sandbox and testing environment for us. Now say there's a certain question, a certain phrase, for which the bot is not able to provide an answer; that gets surfaced to the assistants in operations, across all of the bots. They see, hey, the bot is not replying because this data is missing or isn't mapped correctly. They create the missing entity, push that back to staging, test the new model on staging, and then promote it the same way to production.

The configuration piece has stayed structurally the same for us; it was working fairly well. We've recently moved to a new service for environment management that has just come out; it manages the environment variables across environments, and it has a fairly nice console and security configuration to hide those environment values from anyone who goes into the service.

Now, the deployment process we've been using. Everything runs on Spotinst, and for every server group there is one on-demand server which acts as a template of that entire group. So if you have your web servers, you have one on-demand server for your web servers; you have one on-demand server for your Celery workers, which run separately. Every server group has one on-demand server. Now, when you want to make a deployment:
You take that one on-demand server out of the load balancer and run the install of the requirements on it; any packages that need to be installed, that information now lives in the repository itself. You run the database migrations from it as well. Since it's out of the load balancer and not taking any production workload, you can run basic sanity checks and clear it at that point. So when you push to production now, you know it's good, it's solid, because you've tested the real thing against production. You hand it to Spotinst, and it starts bringing servers up from that on-demand server's image, rolling the fleet over. And because that on-demand server sits against real production data, QA can go in, run sanity and regression tests, try out the current set of features, test this and test that, before the rollout. And there was one thing we realized was still a big challenge.
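The canary flow just described can be sketched as a simple gating function; every name here is a placeholder, not Haptik's actual tooling.

```python
# One on-demand "template" server is prepped out of the load balancer;
# the spot fleet only rolls over if sanity tests on it pass.
def deploy(canary, fleet, sanity_ok=lambda host: True):
    """Return the ordered deployment steps, aborting before the fleet
    rollout when the canary fails its sanity tests."""
    steps = [
        f"detach {canary} from load balancer",
        f"install requirements on {canary}",
        f"run migrations from {canary}",   # safe: no production traffic
    ]
    if not sanity_ok(canary):
        steps.append("abort: sanity tests failed, fleet untouched")
        return steps
    steps.append(f"roll {len(fleet)} spot servers from {canary} image")
    return steps
```

The key design choice is that migrations and sanity tests run on a machine serving zero production traffic, so a bad build never reaches a user.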
A big challenge for us was testing the entire chatbot itself. That's something the rest of the industry might not struggle with as much, but each time you make changes to something fundamental, testing is extremely time-consuming; with 70 bots it's just not possible to check everything by hand. And I'm not talking about changes to one particular bot or one particular chat flow; there are changes that are more fundamental, and testing everything against those takes days. You can't just test inputs against expected output text, because the copy is also changing; it's constantly tweaked by product managers and content folks to see what works, so you can't rely on those copies. What we settled on was testing the flow based on just the logic, keeping the full text aside: given this input, I should reach the next step. We test the main chat flow, the way the chatbot works, and can be sure nothing is broken without being tied to production copy.

These are the things that largely worked for us. Teams now have their own independent release cycles and are able to ship whenever they want. And there's follow-through: instead of everyone converging on a release date, when someone ships something forward and it goes wrong, they're on top of it, they make sure it gets fixed, and that improves code quality over time. We've also taken machine learning completely out of the single-release cycle; models ship on their own, you can try all sorts of different models and make sure things still work, and you get that flexibility of experimenting.

So these are the takeaways, and one thing people focus on a lot is tooling: should I use Jenkins, should I use this,
should I use that. Honestly: use what works for you. They all work fantastically well. Don't get bogged down by the tools, Ansible, Jenkins, whatever. Focus on how you're actually shipping and the culture that CI brings in, and how you bring this along not just to engineers but to everyone else involved in the process, because they're also impacted; their timelines are impacted. The tools just happen; pick your favorite.

This is where we are right now, and a couple of things we're striving for. One is what I like to call functional coverage, not just coverage for the sake of coverage: a unit test should be testing something of actual business value, actually validating something, not just touching lines to move the number. Another is taking the manual verification we still do on staging and reducing it until it can run in production as well, so that after a deployment we get further validation that nothing is going to break. And one thing I've been thinking of is growing this intelligently: taking the responses and questions we have in our historic data and just throwing them at the bots, randomly, constantly, to make sure nothing is breaking, with the set of answers and questions constantly growing. That sort of chaos testing among the bots is where we're heading.
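Both of those testing ideas, checking a chat flow by its structure rather than its copy, and continuously replaying historic questions, can be sketched together. The flow format, queries, and bot stub below are all illustrative assumptions, not Haptik's actual system.

```python
# (a) Test a chat flow by which step follows which, not by response text,
# since copy is constantly tweaked by PMs and content writers.
# (b) Replay randomly sampled historic queries and check the bot always
# answers *something*, without pinning the exact wording.
import random

FLOW = {
    "start":        "ask_city",
    "ask_city":     "show_weather",
    "show_weather": None,           # end of flow
}

def walk(flow, start="start"):
    """Follow the flow's next-step pointers; tests assert this path."""
    path, node = [], start
    while node is not None:
        path.append(node)
        node = flow[node]
    return path

HISTORIC_QUERIES = ["hi", "order status", "refund please", "talk to a human"]

def bot_reply(text):
    # Stand-in for the real bot; a fallback guarantees a non-empty answer.
    canned = {"hi": "Hello!", "order status": "Your order is on the way."}
    return canned.get(text, "Sorry, let me connect you to an agent.")

def chaos_round(n=100, seed=0):
    """Return sampled queries that got no reply (ideally an empty list)."""
    rng = random.Random(seed)
    return [q for q in (rng.choice(HISTORIC_QUERIES) for _ in range(n))
            if not bot_reply(q).strip()]
```

Because the assertions are about structure and non-emptiness, PMs can keep rewriting copy without breaking the test suite.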