Good afternoon, fellow Stackers. Today we're here to present how we have enabled lifting and shifting some of our e-commerce applications to the cloud platform. I'll start by introducing myself. I am Rupesh Dalbisoy, working as a senior architect on the cloud platform at Walmart. I've been very focused on how we take some of our non-cloud-native applications and move them to the cloud platform.

Hi, I'm Jeral Buthalo. I'm a senior manager at Walmart Labs. I manage the cloud operations teams that run the OpenStack clouds, and I also manage the middleware team, which basically runs asda.com, sams.com, and walmartcanada.com. We manage those platforms. Primarily we are here today to talk about how we have taken some of these apps, these websites, and lifted and shifted them onto the cloud. Rupesh has played an active role as an architect with my team, which has enabled us to do that, and that's what we are here to talk about today.

These are a few of the topics we are going to cover: the problem we encountered and some of the challenges we had; our approach, and how you can lift and shift applications by making just a few tweaks — that's what we did. We deliberately didn't want to rewrite apps to move them to the cloud, so it had to be the smallest number of changes we could make. Then the key ingredients for accepting change — what does it take to move these apps over? What we achieved, why we used OneOps, and last, Q&A. If you have questions, hold them; we have sufficient time at the end, and we'll try to finish a little ahead of time so there's more room for Q&A.

So what you see here is a nice bus. It doesn't look like a modern-day bus, and we're using it as an analogy: we have these legacy environments.
And we have the apps, the websites, sitting on these environments. This bus runs really well. It looks nice, it functions perfectly well. But today, the business and the developers want functionality delivered quickly. They want additional benefits, additional functionality, and they want to scale — and all of it has to happen quickly. So every year we take this bus, we drive it, and we run the sites on it. But how can we go quicker? How can we go faster?

This bus is basically what happens to us every year during the holidays. The app teams want to add more functionality. With that functionality, and with year-over-year growth, you have more customers, so you have to scale horizontally. You add more benefits, more functionality, and you have all these various teams working together on this bus with you — on the same environments. You have environments, you have business functionality to be delivered, and we're all on this bus together. Then you've got to do the stress test and make sure everything is working fine. And again, with year-over-year growth, there's additional traffic — additional customers who are also on this bus with you. So how do you go faster?

This is what our Cyber Monday, our Black Friday, looks like. You have this beautiful bus, all these teams, the functionality; you have scaled up, and it still works well. But now you have everybody on this one environment. And as most of us in the software world know, you want to go fast, you want to automate, and you want to scale quickly. So in the analogy, you add all this hardware and you're running with all of it on this truck; you add all the functionality, you scale, and then you get all the customers on it too.
But what does the business want? The question comes back: customers want quick delivery, speed to deliver. You have all these lines of application code, this whole variety of applications. You want automation, you want one-click deployment, you want everything automated from a centralized location. But with so many different environments, with all the bare metal in place, you end up with separate automation for gold copies, separate automation for monitoring, separate for scaling, separate for performance testing. You have bits and pieces all over the place, and that's where you need multiple teams to take over.

Right, just to continue on that: in any traditional environment, before a server becomes useful to the actual application, quite a few functional teams have to work together. It starts with the systems team, who basically lay out the operating system. Then it goes to the networking team, who lay out the VLANs, put the right protocols in place, and set up the right firewall rules. Then it goes to the middleware team, who configure the app stack — it could be a web server, it could be Tomcat, it could be other middleware components. Then it goes to the release engineering team, who lay your latest, greatest artifact onto it. Only once all four or five of these functions come together is that server, that environment, ready for functional testing, or useful for the dev or QA team. So it takes anywhere from weeks to months. Now, I'm not saying these are all heavy-touch manual tasks — every functional team has its own automation, and every team does its best to automate its work. But it is not a single end-to-end automation system.
You have to go through various different teams to get your product finalized. So we all face very similar challenges. On the legacy platform it's very difficult to scale. Scaling the hardware: by the nature of bare metal provisioning, it takes a good amount of time. And scaling the team: you need a sizable team to provision those servers and environments and have them ready on time for your application and QA teams to use. When you're in a legacy, bare-metal mode, you also incur a lot of infrastructure cost. Since you cannot scale frequently, you end up procuring a lot of hardware and keeping it around. All of that hardware may not be useful every day — it may only be needed for your peaks and your holidays — but since the ability to scale isn't there, you end up over-procuring. So there is a lot of infrastructure overhead cost that way, on top of the same long lead times for expansion and the limits of automation.

Now that we've established we're all dealing with very similar problems, let's look at how we have been tackling them. We all understand that moving from a legacy, bare-metal environment to a cloud platform will address some of these issues. The question is how. Do I have to rewrite my entire application? Should I rebuild it as microservices so it can run on the cloud platform? Does my application have to be fully cloud-native before I can even consider deploying it to the cloud? Well, the answer is no — we have been doing this without any of that.
We have taken some of these applications — non-cloud-native, used to running on bare metal — converted them, and lifted and shifted them to the cloud platform. Now let's talk about how we did that. What was our approach? To start with, we formed a small team — we call it an enabler team — a combination of cloud architects partnered with associates from the application architecture side. We talked through the various application stacks and went deeper with a series of workshops to understand how each application is configured: what the different steps are for the application to function as required, how one app talks to another, and what the dependencies are between a single compute instance and the other services it connects to. These were the exercises we did to identify what it takes to run in the cloud, because the cloud runs a slightly different model than your bare metal. We took those requirements and mapped them to how we were going to run in the cloud.

Now, some things will not work as-is in the cloud. For example, on bare metal we used NFS for any file sharing. We understood we were not going to use NFS in the cloud, so we talked with the application teams about how to replace it. Not that the application has to be rewritten, but it certainly has to be tweaked a little to make it work on the cloud platform. As part of the workshops, we figured out the three things we had to do to make the migration successful. First, the application does not need to be completely rewritten. Second, the application does have to change certain deployment strategies — and we'll talk about which strategies we followed.
And the third and key factor: we definitely need an orchestration tool that works on top of the cloud and also helps us operationalize there.

Talking more about the changes we considered at the application layer: in the cloud, we understand any VM can go down at any point in time, but that must not impact your application's availability. So the very first point we focused on was taking your application and deploying it into multiple clouds, multiple data centers, so that we maintain very high availability. Now, certain applications were designed not to run in a multi-instance fashion. That could simply be because it was never tried, or there could be some real technical stickiness behind it that prevents it from working. So we tried to break that stickiness, so the application can run as any number of instances and be spread horizontally without impacting its functionality.

I gave the example of NFS: say one application shares data with another through a shared file system. We broke that dependency by moving to a JMS-based approach, where the information is shared through messaging. That way, no application is sticky to a particular VM. Likewise, we removed — avoided — any host-to-host communication. That is another key point: the moment we removed that, we ensured every application talks to the other applications through a load balancer.
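The NFS-to-messaging change described here can be sketched in miniature. This is not Walmart's code — it's a hypothetical illustration using Python's standard-library queue as a stand-in for a real JMS broker, just to show the shape of the change: instead of one app writing a file to a shared NFS mount and another app polling that mount, the producer publishes a message and the consumer receives it, so no VM is pinned to shared state.

```python
import json
import queue

# Hypothetical stand-in for a JMS queue. In production this would be a
# real broker; here an in-process queue shows the pattern.
broker = queue.Queue()

def publish_order_export(order):
    """Producer side: replaces 'write a file to the NFS mount'."""
    broker.put(json.dumps(order))

def consume_order_export():
    """Consumer side: replaces 'poll the NFS directory for new files'."""
    raw = broker.get(timeout=1)
    return json.loads(raw)

publish_order_export({"order_id": 42, "total": 19.99})
msg = consume_order_export()
```

The key property is that neither side needs to know which host the other runs on — the broker (or, in the talk's other example, a REST object store) decouples them, which is exactly what removes the VM stickiness.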
So whenever we deployed an application — even one that had been a single instance — we first made a point of deploying it as more than one instance, so we could put a load balancer on top, and all inter-application communication could go through the load balancer. That way, if a VM goes up or down, nothing is impacted, because every request goes through the load balancer.

Another key point: the application has to be fully configurable through some kind of automation or orchestration tool. This matters because when a VM goes down for some reason — and there are various reasons a VM can go down — its replacement has to come back to exactly the state it was supposed to be configured in: whether the VM was running a web server or an app server, the different settings on Tomcat, the different settings on my data source — everything has to come back as-is. If you do not have some kind of automated provisioning system, this will hurt you, because in the cloud, by its nature, a VM can go down at any time, and we can't rely on hand-holding or a manual process to build another VM, reconfigure it, and put it back into the cloud.

And of course, we avoid any kind of static IPs, because a static IP makes your VM very sticky to a particular endpoint. So we removed static IPs from the VMs. These, at a very high level, are the tweaks we made to the applications to make them work on the cloud platform.
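The deployment rules just listed can be summarized as a pre-flight checklist. The sketch below is purely illustrative — the field names are invented for this example and are not OneOps attributes — but it captures the four constraints the talk describes:

```python
# Hypothetical pre-flight check for the cloud-readiness rules above.
# All keys ("instances", "clouds", etc.) are made-up illustration names.
def cloud_ready_violations(app):
    violations = []
    if app.get("instances", 0) < 2:
        violations.append("must run more than one instance")
    if len(app.get("clouds", [])) < 2:
        violations.append("must span more than one cloud / data center")
    if not app.get("fronted_by_load_balancer", False):
        violations.append("peers must talk through a load balancer, not host-to-host")
    if app.get("static_ips"):
        violations.append("no static IPs: a replaced VM must rejoin seamlessly")
    return violations

# A typical legacy app fails all four checks:
legacy_app = {"instances": 1, "clouds": ["dc1"], "static_ips": ["10.0.0.5"]}
issues = cloud_ready_violations(legacy_app)
```

Running the checklist against a legacy single-instance app surfaces every rule that has to be addressed before the lift and shift.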
As you can see, these changes don't require your entire application to be rewritten; they are high-level changes to the deployment — how you deploy your application, how one application talks to another. That's where the focus is. We'll talk about how we handle the entire end-to-end automation and orchestration when we dig more into OneOps.

So, as Rupesh mentioned, that's how we looked at the applications and how we approached moving them to the cloud. Every year we have additional functionality, more traffic, more business; there's growth, and we have to scale these environments. We decided this couldn't keep being done the same way every year — with e-commerce traffic and the business increasing, we had to have a better, faster, more efficient way. At least one of my teams is deeply involved in scaling these environments, installing the apps, managing the gold copies, and going the full nine yards. The lead time from procuring a server to racking it, cabling it, and installing the apps was just too long. So we did a proof of concept where we actually tweaked these apps, moved them to the cloud, orchestrated with OneOps, and went back to management and the business and said: hey, we can actually run these environments on the cloud. But as you have been hearing all morning, technology comes last — it's people, process, and technology. So the key question is: how do you manage this change?
How do you get acceptance to move off the bus we showed you earlier — perfectly running, good availability, it scales, but it takes time and lacks automation? Why would you take something that's working and move it to the cloud, which we had never tested and which could be disruptive? So we approached the proof of concept both top-down and bottom-up. We held multiple workshop sessions where we could demo how these applications actually worked on the cloud, and then showcase the benefits: it's low cost, it's easy to scale horizontally, you can onboard apps quickly, and it's easy to use.

But then came the question: hey, will my apps actually perform on this cloud? As you see in the picture, there was a bit of skepticism — will this work or won't it? So we said: no worries, we won't disrupt what's working today; it's running fine. In parallel, we built a dev/test environment, a load test environment, and even a full-fledged production environment — not taking live traffic, but close to a real production environment — and we ran full-fledged stress tests on it. We tried to beat it up from different angles, and we had very impressive results. We went back and showed the results and demoed it to the business, to management, to the various dev teams, and everybody said: yes, this really works. That's how we drove change — we got everyone to believe it actually works and to give it a shot. And we didn't go in full blast; we started moving traffic over slowly.
Once we got the buy-in, we also showcased how this reduces the number of teams involved every year — for scaling up, for building, for functionality, for automation — down to a handful. It basically enables the self-service model that every app developer out there wants. Every app developer wants to manage their own app, automate it, and go the full nine yards with it — and this model enables exactly that: from the time you design it, build it, install it, and deploy it to production, you can then also manage it yourself. You're not dependent on ten different teams, because once you're in the cloud, the policies, the networking, and the systems work are taken care of by the infrastructure teams. That lets the app teams go quicker and faster.

So how did it actually turn out after we migrated our applications to the cloud? These are some of the performance metrics, and I'll go through each in detail. To be clear, this is not a straight comparison of running the applications on legacy bare metal versus on an OpenStack cloud. When we migrated from bare metal to the OpenStack cloud, we also cleaned up and addressed some of our tech debt — where we felt a piece of the stack or a technology wasn't working for us, we removed it. I'll give an example. We used to run a load balancer, under the load balancer an Apache web server, and that Apache talked to JBoss as the application server.
We figured out that the Apache tier wasn't earning its keep: pretty much all it was doing for us was load balancing the JBoss instances, offloading the SSL certificates, and doing some redirects. So when we moved to the cloud platform, we removed the entire Apache tier and load balanced all our JBoss workers directly through the load balancer. That alone gave us a boost in performance.

So how different is it? We have various pages, from browse and search pages to heavy transactional pages like cart and checkout. Browse pages are mostly cached on the CDN, but the real transactions come in on cart and checkout. Across the board we saw better performance after moving to the cloud platform: some pages improved by 20 to 30 percent, and some performed far better than that. Overall site performance improved in the range of 30 to 40 percent.

Another key metric we measured is instance throughput — and by instance I mean a single JVM; we run Tomcat and JBoss. When the applications ran on bare metal, since we weren't virtualized, a single bare metal box would run multiple instances on different ports. Compare that with the cloud, where a single VM runs a single JVM instance. So we compared: for a given capacity, a given throughput — say 100 orders coming in — how many instances did I need on the legacy platform versus how many do I need now on the cloud platform?
We saw a significant reduction in the number of cores we need on the cloud platform. And that could be for several reasons, not necessarily the cloud per se — for one, the migration doubled as a tech refresh, so the cloud platform runs on the latest and greatest hardware versus the older bare metal. Either way, it helps reduce the overall data center operational cost, because we simply don't need as many instances. It also helps in automating a lot of the applications, which we'll talk about more at the OneOps layer.

Post-cloud migration, as the picture shows: we used to have an army of people running around doing this work, and now the minions are pretty much sleeping. You get self-healing capability, auto-replace capability, and a lot more ease in your day-to-day work, because now you have DevOps teams and much more automation in place. This enabled us to have a very smooth holiday season. It gave us stress-free day-to-day operations, where the number of tickets, issues, and incidents decreased significantly, because everything is now centrally located, there's far more automation, and far fewer touch points along the way. It was very much a success, and the numbers speak for themselves: 99.999 — those five nines — was the availability for the migrated sites. We were able to handle billion-plus page views, as mentioned earlier. Now, if any of you have been in e-commerce or worked on e-commerce apps, you know there are CDN solutions to take care of caching page views.
But within these apps there are millions and millions of transactions — once you log in, it hits inventory management systems, order management, your cart, your checkout — and those sessions and transactions cannot be cached. For those, your cloud has to perform. You have to meet the SLAs on the transactions and the connections between these apps so that you don't get a ton of timeouts. And we met all those SLAs. It worked really well: we could take traffic, and we could scale easily — auto-scale up and down.

It also reduced server startup time, which comes in very handy if you run into an issue and need to auto-scale on the fly: you can scale up really quickly, because our instances now come up in a matter of minutes. If your instance startup takes a long time, you can still auto-scale, but if it takes 10, 15, 20 minutes before all those instances are in traffic, you'll be down before they arrive. So this really helped: we could scale in and out, up and down, conveniently.

Right — one question you might be asking here: how does moving to the cloud actually reduce server startup time? It doesn't seem to make sense at first. As I was explaining before, on bare metal — a box with a good amount of cores and memory — we used to run multiple JVMs per machine. Now it's a single JVM per compute instance, and that helps a lot; it substantially reduces server startup time. And that directly improves our MTTR — mean time to recover — because the instances come up faster.
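The point about startup time and autoscaling can be made concrete with some back-of-the-envelope arithmetic. The numbers below are purely illustrative, not Walmart's: the question is whether new capacity arrives before the existing headroom is exhausted by a surge.

```python
def minutes_until_overload(headroom_requests, surge_per_minute):
    """How many minutes current spare capacity lasts under a surge."""
    return headroom_requests / surge_per_minute

# Say spare capacity can absorb 30,000 extra requests at a surge of
# 3,000 extra requests/minute -- i.e. 10 minutes of headroom.
headroom_minutes = minutes_until_overload(30000, 3000)

# A 20-minute instance startup means new capacity arrives after the
# site is already overloaded...
slow_boot_ok = 20 <= headroom_minutes
# ...while a 3-minute startup (instances coming up "in a matter of
# minutes", as described above) arrives in time.
fast_boot_ok = 3 <= headroom_minutes
```

With ten minutes of headroom, the slow boot misses the window and the fast boot makes it — which is exactly why instance startup time matters for auto-scaling and MTTR.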
So when we have an incident, we bring the sites back very quickly, because the instances come up so fast.

Moving on: this has been a great help in optimizing our capacity. This is just an illustration — it doesn't depict the actual numbers — but say we now have multiple sites running on the same cloud platform, and every site has a different peak. Maybe one is running Boxing Day in Canada while another is running Easter in the UK, or a promotion somewhere else — each expecting a traffic surge on its website. What helps us here is that we can very effectively move capacity from one site to another: the capacity used by site A for its peak season gets scaled down and moved to site C when site C needs it. So we don't need to maintain capacity for the peak of site A plus B plus C plus D; we keep a balanced pool and move the capacity around as each site needs it.

With all that — how did we manage to do all this? We used OneOps. OneOps was recently open-sourced by Walmart Labs, and it is basically the orchestrator that manages all our VMs across the various clouds. It can be an OpenStack cloud or a public cloud — Azure, AWS, or Rackspace — and it enables the app teams to deploy their VMs across any of those clouds of choice. It basically comes down to one-click deployment. So how does OneOps play its role? We look at it in three different sections. For the business executive side, it enables you to do cloud shopping.
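The capacity pooling Rupesh described a moment ago — scaling site A down and moving that capacity to site C — comes down to a simple bit of arithmetic: dedicated hardware must be sized for the sum of every site's peak, while a shared pool only needs the worst combined hour. The numbers here are toy values for illustration:

```python
# Illustrative hourly demand (in VMs) for three sites whose peaks --
# say Boxing Day, Easter, and a promotion -- land at different times.
site_a = [40, 90, 40, 40]   # peaks in hour 1
site_b = [40, 40, 95, 40]   # peaks in hour 2
site_c = [40, 40, 40, 85]   # peaks in hour 3

# Dedicated bare metal: every site is provisioned for its own peak.
dedicated = max(site_a) + max(site_b) + max(site_c)

# Shared cloud pool: provision for the worst *combined* hour, since
# capacity freed by one site can be shifted to another.
pooled = max(a + b + c for a, b, c in zip(site_a, site_b, site_c))
```

Because the peaks don't coincide, the pooled figure is well below the sum of individual peaks — which is the whole economic argument for running the sites on one shared cloud platform.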
You can go to any cloud provider, and OneOps will let you manage your application lifecycle in that cloud. It totally avoids vendor lock-in, and it's open source, so you can go use it. It gives you time to market — the speed to deliver you want — at low cost and great value. Then, moving along, what does it give the developers? It gives them self-service on demand: they can build their packs, build their applications, deploy and maintain them, and see all the stats — CPU, memory, all that visualization — in OneOps in one place. It lets them do complete application lifecycle management, and it enables them to build their DevOps teams. The third part is IT operations: it reduces cost with effective cost management, it has the governance you need — you can enforce all your policies in there — and it controls and secures your cloud usage. It's safe, it's usable, and it's proven: we have used it for three holidays now, and it works really well.

So let's go a little deeper into OneOps. As I said, OneOps is our application lifecycle management, and the application lifecycle is managed across three phases: design, transition, and operations. You take your application and define how it should be. Your application needs an nginx, it needs a web server; it needs MaxClients set to 200, or maybe it runs on port 8080. These are the configurations you set at the design layer. And you can version-control it — make it part of your source control — so that for any release you can tie the design to it.
Design is kind of a template engine where you define what your application looks like — the gold copy, so to say, as Gerald mentioned earlier. You define: for this application, this is how it should look. In the design phase you can say: I need a web server, I need an app server, and so on, and how the communication happens between the web server and the app server. All of that you define at the design layer.

Moving on to the transition layer: you take that same design, realize it, and create environments out of it. In the transition layer you apply the same design — keep in mind, the same design — to your dev environment, your QA environment, your stage environment, your production environment. That's what makes it highly reliable: you are assured the exact same design is applied to all of them. Now, there are certain environment variables that change from one environment to another, and you can always set those at the transition layer. But the base design — say, every instance of my application has to run on this particular OS, or every JVM has to run with two gigs of max heap size — that you define and enforce at the design layer.

And the final phase is operations. Once you've created the environments and selected, okay, I'm going to deploy into cloud one, cloud two, cloud three, cloud four — how do you operate the application? The operations phase gives you that directly: how do you start your servers? How do you restart your instances? How do you take an entire cloud out of traffic? How do you monitor? What is the utilization of all your applications?
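The design-versus-transition split described above can be sketched as a tiny template engine. This is a hypothetical illustration, not OneOps code — the attribute names are invented — but it shows the idea: one base design (the gold copy) is applied to every environment, and only the declared per-environment variables may differ.

```python
# Hypothetical "gold copy" design: enforced identically everywhere.
BASE_DESIGN = {
    "web": {"server": "nginx", "max_clients": 200},
    "app": {"server": "tomcat", "port": 8080, "jvm_max_heap_gb": 2},
}

def realize(environment_overrides):
    """Transition phase: same design, per-environment variables merged in."""
    env = {tier: dict(cfg) for tier, cfg in BASE_DESIGN.items()}
    for tier, overrides in environment_overrides.items():
        env[tier].update(overrides)
    return env

# Dev may override an environment variable (here, the app port)...
dev = realize({"app": {"port": 9090}})
# ...but production gets the base design exactly as defined.
prod = realize({})
```

Because every environment is derived from the same versioned design, a dev/QA/stage/prod drift problem becomes structurally impossible for anything not explicitly declared as an environment override.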
All of that comes under the operations phase. So those are the three phases of the application, and the application is managed end-to-end through OneOps. Here's a simpler view of OneOps: you have an application, you go through the design layer and create the different configurations; at the transition layer you have multiple environments; then you go to operations, where you select, okay, I want to deploy into cloud one and cloud two. The key callout is that you can deploy to any cloud, and you can move your workload from cloud A to cloud B seamlessly from within OneOps. It could be a private cloud or a public cloud — if a cloud isn't already integrated with OneOps, you can integrate it — so you can move your workload at any point in time.

Again, some of the key features. Application lifecycle management: starting from VM creation all the way up — what security groups need to be there, what VLAN has to be set up, and on top of that, what application needs to be configured there — maybe a web server, maybe a Tomcat running on it. From creating the VM all the way to creating the application, everything is done through OneOps.

Auto-repair and auto-replace are other great features. You can set up a lot of monitors and thresholds within your application. If your application has a heartbeat problem, or some particular threshold isn't being met, OneOps tries to auto-repair that VM — it's a self-healing concept where it takes care of repairing the VM by itself. If that doesn't work, it tries to replace it.
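The repair-then-replace escalation just described can be sketched as a tiny state machine. This is a hypothetical illustration of the concept, not OneOps internals — the function, fields, and threshold are all invented for the example:

```python
# Hypothetical sketch of the auto-repair / auto-replace escalation:
# a missed-heartbeat threshold triggers a repair attempt; if the VM
# cannot be repaired, it is replaced and reconfigured from the design.
def remediate(vm, missed_heartbeats, threshold=3):
    actions = []
    if missed_heartbeats < threshold:
        return actions            # healthy enough: leave it alone
    actions.append("repair")      # e.g. restart processes in place
    if not vm["repairable"]:
        actions.append("replace") # rebuild the VM, reapply the design
    return actions

healthy = remediate({"repairable": True}, missed_heartbeats=1)
repaired = remediate({"repairable": True}, missed_heartbeats=5)
replaced = remediate({"repairable": False}, missed_heartbeats=5)
```

The important design point, echoed in the next part of the talk, is the last step: a replacement is only safe because the new VM is reconfigured from the versioned design, so it comes back exactly as the application was defined.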
So you have to be very sure about your application in the design, about what you have declared for how the application has to be built. Once auto-replace generates the new VM, it goes through the design and reconfigures your application in exactly the same way. Autoscale is very useful when you are going through peak seasons and a lot of traffic is coming in; we set up certain thresholds based on which we can scale horizontally. Policy enforcement: like some of the deployment rules I was talking about, for example that each application has to be deployed to more than one cloud and more than one data center. Those are some of the policies you can enforce. So the key takeaway, again, since we are trying to address how we lifted and shifted, is three things. The application does not need to be rewritten, and it does not need to be microservice-based. There are certainly some changes we had to make to the deployment mechanism and around removing some of the static IPs. And you definitely need some kind of orchestration tool like OneOps. So that pretty much brings us to the end of the session. We'd like to open it up for any questions. You'll have to use the mic.

You were talking about having to re-engineer or change the apps, like no static IPs, things like that, right? Did you crack the code open and do code mods, or is there something you wrap around it to make that possible?

No, pretty much, I don't think the code level needs to have the details about the IP. We basically looked at the application layer and the different configuration layers to see how one application is talking to another application, or even, on a server, if there is an IP there, how the application is using that IP address.
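The "removing some of the static IPs" tweak from the takeaways can be illustrated with a hedged sketch. The hostnames and helper below are made up for the example: the point is that the app carries a stable service name in its configuration and resolves it when it connects, so an auto-replaced VM (new IP, same name) needs no code or config change.

```python
# Hypothetical illustration of the "no hard-coded IPs" tweak: resolve a
# stable service name at connect time instead of pinning a VM address
# in configuration. Hostnames here are invented for the example.

CONFIG_BEFORE = {"inventory_service": "10.12.4.17:8080"}            # brittle
CONFIG_AFTER  = {"inventory_service": "inventory.prod.local:8080"}  # stable

def endpoint(config, service):
    """Split a 'host:port' config entry into a (host, port) pair."""
    host, port = config[service].rsplit(":", 1)
    return host, int(port)

# The app resolves the name (e.g. via DNS) when it connects, so a
# replaced VM simply comes back up under the same service name.
```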
I understand at the app layer, but if I have an app that already does that, how do I shift it into this environment without cracking the code open and making changes?

Okay, so your question is, my app is definitely dependent on a static IP address?

NFS, any of these. You had one slide, make sure it doesn't do these things, right?

So, NFS, like I talked about: we removed that and used the object store instead. Where the app talked to an NFS mount on the bare metal, we tweaked the application to use REST-based object storage. So yes, there is some tweaking we had to do at the application layer, not necessarily at the functional layer, but more in the communication, in how the pieces talk to each other.

Okay, thank you.

Does that answer the question? Thank you. The whole aim is to keep it really, really simple, so you don't have to tweak a whole lot of things.

Thanks for your presentation, this is interesting. But to follow on from that question: you were talking about Apache and that almost stateless layer, and you were also talking about transactions. Where does the data live? Are the databases involved in your lift and shift as well?

Yeah, good question. We talked purely about the applications today; I didn't touch much on the database. At this point the database is running on bare metal; it is not virtualized. This migration was purely about moving the application stack, so the database was not running on the cloud at that point.

What kind of issues did you have when tackling availability for the applications, taking them from legacy to the cloud? VMs can die much more easily than the bare metal they are running on. Right.
So how did you handle those scenarios?

Right, it is true that VMs die more frequently than bare metal. But the way we handle it is that we deploy the application to more than one instance. We scale the application out and make it more resilient, so that a single VM going down has no impact on the application's availability. In fact, we increased the application's availability by moving there, because we spread across multiple zones and multiple data centers, and that helps us get more availability.

Just to add to Rupesh's answer: OneOps has this awesome feature of auto repair. So you lose a VM, or let's say the app has an issue, and it will try to do a number of repairs, like three auto repairs. It will try to restart your Tomcats and try to keep rebooting it. If it doesn't come back, it will do an auto-replace, so it basically replaces your VM with a new VM, with the code and the deployment on it.

I just had a real quick one to follow up again. Do you have plans for shifting your database into the cloud?

I'm sorry, I missed the question.

If you're not doing the database at this stage, and yet you've outlined all these advantages, do you have plans for shifting the database into the cloud?

Absolutely. We have already started that work; it is in progress. We are currently running all the databases in the non-production environments, but there is still a lot of work that needs to be done to make it work in production. So in non-production, like the dev and QA environments, we have started running them on VMs.

Okay, thank you.

I think you have a question. So thank you for doing this, it's very informative. My question is related to the applications themselves when you're moving them to the cloud. Generally, there's an expectation of horizontal scalability, right?
So did you run into any applications which were not designed to be horizontally scalable, where there was an inherent assumption of vertical scalability in the design? And if so, how did you manage to get around that? Session stickiness and all these other things.

Yeah, definitely. Not every application was designed for horizontal scalability. For some of the applications, we had to make certain changes like I was talking about: we made them more message driven, so that we could break that single dependency, the stickiness of a session to a single instance, and make them horizontal. But there were certainly some applications where we could not do that. Those applications, we found, just needed to run as one or two instances, and we kept them outside of the cloud platform.

I see. Okay, that answers my question. Thank you.

The aim was not to force something in, because if you force it in and it breaks, you'll end up taking down the entire site.

So there was an assumption there that the application did not require significant changes?

Correct. That was one of the main requirements: that we not have to significantly rewrite or redesign the app to put it in the cloud.

Okay, perfect. Thank you.

Thank you very much again. Once again, we'd like to thank all of you, and for any questions or clarifications, reach out to us at the email address. All right, thanks. Thank you.