All right, let's get started. Hello, everybody. My name is Gaurav Gupta. I am VP of Engineering for cloud and infrastructure at a company called Snapdeal, and in this presentation I want to talk about the journey Snapdeal took in moving from a public cloud to a hybrid cloud based on OpenStack. So what are we going to do in the next 45 minutes? I'll give you a brief introduction to who we are and what we do, so you get an idea of why we needed to build our own OpenStack cloud and why we did it the way we did. Then a technical overview of the Snapdeal cloud, some key learnings and inside details, and, if time permits, a brief demo of our infrastructure-as-code capabilities.

So, Snapdeal. We are an e-commerce company. We run one of the largest e-commerce marketplaces in India, with more than 50 million products in our catalog. We have been in hyper-growth mode for the last few years, with a million daily transacting users coming to our site and buying products. Because we are a marketplace, we also have a base of more than 300,000 sellers who come to our platform to offer their products to our customers. In the last few years we have seen phenomenal growth in our business and in the volume of transactions on our platform, and we are just getting started. To give you an idea, India right now has about 370 million people connected online, of which only about 50 to 70 million are transacting. So there is a huge population that is yet to start transacting online, and that is all traffic that we will receive. If you extrapolate a few years out, projections say we will cross 500 million internet users who will be transacting. Most of them will come from mobile, and that is what we already see on our platform: more than 70% of the transactions on our site come from mobile devices.

So what did we do with regard to OpenStack? We built our own cloud. We call it Cirrus, and it is a private cloud based on OpenStack. We have built more than 16 petabytes of storage for our infrastructure. We are running in three different regions with a capacity of more than 100,000 cores. Our networking is 40 Gb from the servers up to the top-of-rack switches and 100 Gb above that, so it is a very fast network. And we are doing all of that with 100% automation, deploying everything using Ansible and automation scripts we have written ourselves. We did all of this in the last one year. When we came out a few months ago and launched our cloud, we learned that we are in the top 4% of OpenStack deployments globally, which is both a proud moment and a little nerve-wracking, knowing we are running such a large OpenStack cloud. But we are happy to be here, happy to connect with the community, and we would like to get more engaged. So far we have been working in a silo, primarily because there was a large amount of work to do in a very short amount of time. We are very happy with our OpenStack deployment and would like to share some more details on it. We are running very high core density racks, more than 3,500 cores per rack, in a completely redundant, pod-based architecture.

So why did we build a private cloud? Cost was one of the major factors. Like I was explaining, we are a very large, hyper-growth company.
And as our business grew, our infrastructure requirements continued to grow with it, and our public cloud bills were phenomenal. We had to find ways to reduce or control that bill, and doing it within the public cloud was very challenging. When we did the analysis, it was very clear to us that at a certain growth, at a certain scale, public clouds stop being cost effective. They are fine if your growth is unpredictable; when you are starting up, I think you should go to the public cloud and get your business established. But once you hit an inflection point, you need to start looking at alternatives, and for us that was building our own private cloud. When you do that, the economies of scale kick in, and you start getting the cost benefits. So cost was definitely one of the key reasons for us to start looking at the problem.

But how did we make it cost effective? Firstly, our entire stack is built on open source technology. We did evaluate enterprise technologies like VMware, and also commercial OpenStack distributions, and when we looked at the cost at the scale we were running, it was very clear that we had to build it ourselves, using open source components. We also took into account the operational cost of running a cloud. We did this with a very small team of engineers who have spent a lot of time building enterprise products, so that they could not just use what OpenStack offers but really understand and embrace it. It was very important to make sure the team was the right one to build and maintain this cloud platform. We also converted a lot of our capex into opex. For example, all the hardware equipment we buy is taken on an opex model, and we have negotiated power consumption with the colocation provider so that it is all bundled into the operational cost of the data center. By doing this, we are able to do an apples-to-apples comparison of the cost of running a private cloud versus the cost of us running in a public cloud, and clearly there were cost savings.

The next big reason, besides cost, was performance and security. We wanted to get more performance from our infrastructure, and in a public cloud you are restricted because you are in a shared-tenant environment, and there is only so much performance public clouds offer. They can offer you a lot more, but at a very high cost, which is restrictive at the large scale we were looking at. By building our own private cloud, we were able to optimize for our own use. We are also able to put advanced security appliances, DDoS prevention, and intrusion detection into our data center, so it is definitely a step up from the security that a public cloud offers.

And lastly, data sovereignty and compliance was also a big reason for us. Snapdeal is an ecosystem: we also have a digital wallet, which requires that all the data, and the money, that we store in the wallet remains within the boundaries of India. At that time, the public cloud provider we were using did not have a region in India, so at least that particular application had to be hosted within the country. That was another reason for us to start thinking about building our own private cloud.
So to summarize, those were the four reasons we built it: cost, performance, security, and data sovereignty. But we didn't stop there. Our private cloud is 100% hybrid. What that really means for us is a definition we have set, and I'll read it verbatim: a true hybrid cloud expands seamlessly to a public cloud and abstracts the underlying infrastructure away from the applications, so they can be dynamically assigned and reassigned to run in different parts of the cloud. Some of you might have seen the keynote today where a workload was launched on OpenStack and AWS at the same time; what we are trying to do is that on steroids. We practically have a hybrid cloud that extends from a public cloud all the way into our data center, and our applications are launched seamlessly either on the OpenStack-based cloud or on the public cloud provider. The application does not know where it is running; we have completely abstracted the infrastructure layer away from the application layer. Because of this, we use OpenStack purely as an IaaS provider. We are not using many of the PaaS projects in OpenStack, because they cater to OpenStack-based clouds but not to public clouds. We have written our own PaaS layers on top of OpenStack that handle both transparently.

So why did we build a hybrid cloud? One of the main reasons is bursting. We are an e-commerce company with seasonal traffic; just as you have Thanksgiving in the United States, in India we have Diwali, the festive season that is going on right now. A lot of traffic comes to our site during that time, and we wanted the capability to burst out into a public cloud if we ever run out of infrastructure in our private cloud. The second big use case for us was disaster recovery. Given that we are now running in a private cloud, we would need another similar data center or an extended region to create a disaster recovery zone, which would mean twice the capex and twice the opex. Instead, we decided to use the public cloud as our disaster recovery zone. Everything we do, all the data and all the applications, is seamlessly copied to the public cloud all the time, and if a disaster happens and our primary data center goes down, we will be able to move 100% into the public cloud.

All right, so let's jump into some of the technical details. Snapdeal was born in the cloud. We are about six years old, and we use pretty much every popular open source project under the sun. This is not even an exhaustive list; it is something I put together this morning to give you an idea that building a cloud, for us, was about more than building infrastructure. It was about how we migrate and run all of these applications in our cloud. We were already running them in the public cloud; now we had to migrate them and then run them seamlessly at the same performance. We are also microservices based, with 500-plus services running in our infrastructure and talking to each other. We had to first understand their architecture and their dependencies on each other, and then create a mechanism to migrate these applications from public to private without taking any downtime for our business. That was the key requirement for us.
So overall, only half of this project was infrastructure; the other half was making sure the applications were handled properly.

If I look at the architecture of a single node, a single server, this is how it looks. We are using Ubuntu as our host operating system. We evaluated Ubuntu and CentOS and chose Canonical's Ubuntu because of its deep integration with OpenStack; a lot of large deployments like ours were using Ubuntu, and the kernel versions were better tested against the OpenStack releases. On top of that we run OpenStack itself. We are on Kilo, point release 4, so still a fairly old release relative to what has just been launched, but the reason is that stability mattered more to us than features. For the reference architectures we built and the amount of testing we did on Kilo, we found it satisfied our requirements, and at the size of architecture we had, we wanted to stick with stability. We are using KVM and QEMU as our hypervisor, and all of this is deployed automatically using Ansible. So that is a single box in our infrastructure. We have the capability today to add hundreds of physical servers to our infrastructure in a single day; we can build out a single pod, a single rack, completely automated, and add it to our cloud within hours. And we do that all the time.

On top of that, we use CentOS as the operating system for our virtual machines. Given that we are a private cloud and control the environment, we have standardized all our applications to run on CentOS. The first thing we do on top of CentOS is service discovery for our applications, using an open source project called SmartStack, which gives us service discovery as well as load balancing. In the web world, most of our applications are multi-tiered: a layer of web servers talking to some sort of database, MySQL, Mongo, Cassandra, whatnot, and offering services that other services access over RESTful APIs. This creates a very dense graph of service dependencies. In a public cloud you would typically have a layer 4 / layer 7 load balancer running as a central load balancing service: when service A wants to talk to service B, it talks to the load balancer, and the load balancer distributes the traffic. In our architecture we have made this completely distributed. We run a small HAProxy on every web server we launch, and the application running on that server talks to its local HAProxy whenever it wants to reach out. This creates a very distributed load balancing architecture, but the challenge is that with thousands of nodes, you have to program all of the endpoints on each of those HAProxy instances. We use ZooKeeper to propagate that information, and SmartStack is the piece that programs it.

We are also using another open source project called Terraform. If you are familiar with Heat, it is similar to that: the orchestration tool that talks to OpenStack and to the public cloud to launch the infrastructure. It has integrations with public clouds like AWS and Azure, so from a single pane of glass (there is no UI for us; we are still using the CLI) we are able to orchestrate virtual machines in different clouds seamlessly.
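To make that concrete, here is a toy sketch of the idea, illustrative only: this is not our actual tooling and not Terraform itself, and every name in it (InstanceSpec, OpenStackBackend, and so on) is made up. The point is simply that the application describes what it needs, and the orchestration layer decides which cloud fulfils it.

```python
# Toy sketch of a provider-agnostic launch layer (hypothetical names; not the
# real automation). One instance definition, two possible backends.

from dataclasses import dataclass

@dataclass
class InstanceSpec:
    name: str
    cpus: int
    memory_gb: int
    image: str          # e.g. a CentOS image published to both clouds
    network: str        # tenant network (private) or VPC subnet (public)

class OpenStackBackend:
    def launch(self, spec: InstanceSpec) -> str:
        # In reality this path goes through Nova/Neutron (or Terraform's
        # OpenStack provider); here we only simulate it.
        print(f"[openstack] booting {spec.name} ({spec.cpus} vCPU / {spec.memory_gb} GB)")
        return f"openstack://{spec.name}"

class PublicCloudBackend:
    def launch(self, spec: InstanceSpec) -> str:
        # Would map the same spec onto the public cloud's instance types.
        print(f"[public-cloud] booting {spec.name} ({spec.cpus} vCPU / {spec.memory_gb} GB)")
        return f"public://{spec.name}"

def launch(spec: InstanceSpec, burst_to_public: bool = False) -> str:
    """The application never knows which cloud it lands on."""
    backend = PublicCloudBackend() if burst_to_public else OpenStackBackend()
    return backend.launch(spec)

if __name__ == "__main__":
    spec = InstanceSpec("hermes-app-01", cpus=4, memory_gb=8,
                        image="centos-7", network="hermes-net")
    launch(spec)                          # normal case: private cloud
    launch(spec, burst_to_public=True)    # bursting or disaster recovery
```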
Our application stack is managed by Salt, and we have written our own custom scripts and automation software in Python. So that top block is the application stack through which we control everything; it does not include the application itself, which sits on top of it.

On the infrastructure side, we have made the OpenStack control plane completely redundant. This diagram shows three, but in production we run five independent OpenStack controllers. We have made sure that our control plane and data plane are completely separate; we test this by shutting down the entire control plane and checking that our OpenStack cloud stays up, because a control plane dying should not have any impact on the data layer where all our applications are running. That was critical for us, so we designed for it. This diagram shows how a VIP fails over from a primary node to a secondary node; it happens transparently, and all the services, RabbitMQ and everything else, run on essentially all of the control nodes.

We have also designed a highly scalable storage layer. When we were working on the storage design, it was evident that we had to build multiple kinds of storage, because one kind of storage was not going to meet all of our needs. The biggest storage we have is based on Ceph, and we have two separate kinds of Ceph storage. One uses magnetic disks; this is the largest Ceph cluster we have, and it is primarily used for booting the virtual machines, so all our root volumes come from the magnetic Ceph storage. We also use it for things like logs, which are low performance but need a lot of disk, and that keeps the cost of running this cluster down. If an application requires high-performance, highly redundant storage, we also have an SSD Ceph cluster. Yesterday, in one of the talks, Walmart described a very similar architecture of SSD-backed Ceph for high-performance applications and storage, and ours is much the same. In total, about 10 petabytes of our storage comes from Ceph.

But we also had a requirement from many applications that needed local SSD. For example, there is a database called Aerospike, a low-latency key-value store, which wants a raw disk handed to the database application, and there were several others like that in our environment. So we have created a host aggregate where virtual machines can get a local SSD as well. If you look at the cost of these options, SSD is the most expensive storage, so given the size we had to balance between them: for SSD Ceph we use a replication factor of 2, and for magnetic Ceph a replication factor of 3. For local SSD storage, we say that redundancy of the data has to come from the application itself; the infrastructure will not provide redundancy if the disk fails. So we have three different kinds of SLA that we give to the applications. Most of the applications using local SSD are already clustered; Aerospike, for example, has its own replication mechanism that replicates data across multiple nodes, and it can tolerate a node failure very easily.
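To make those three SLAs concrete, here is a rough sketch of how a provisioning layer might map an application's declared needs onto a storage tier; the tier names, thresholds, and selection rules are illustrative, not our actual scheduler logic.

```python
# Rough illustration of the three storage SLAs described above
# (names and rules are made up for illustration, not the real code).

STORAGE_TIERS = {
    "ceph-magnetic": {"replicas": 3, "media": "hdd"},   # root volumes, logs
    "ceph-ssd":      {"replicas": 2, "media": "ssd"},   # high-performance shared volumes
    "local-ssd":     {"replicas": 1, "media": "ssd"},   # raw disk for apps like Aerospike
}

def pick_tier(needs_high_iops: bool, app_replicates_its_own_data: bool) -> str:
    """Map an application's declared needs onto a storage tier.

    Applications that handle their own replication (Aerospike, Cassandra, ...)
    may take local SSD, where the infrastructure provides no redundancy.
    """
    if not needs_high_iops:
        return "ceph-magnetic"
    if app_replicates_its_own_data:
        return "local-ssd"
    return "ceph-ssd"

print(pick_tier(needs_high_iops=False, app_replicates_its_own_data=False))  # ceph-magnetic
print(pick_tier(needs_high_iops=True,  app_replicates_its_own_data=True))   # local-ssd
print(pick_tier(needs_high_iops=True,  app_replicates_its_own_data=False))  # ceph-ssd
```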
But what is very important when you are running clustered applications, which is pretty much all of our applications, is that you have to be very careful about VM placement. If, in a cluster, you ever place two virtual machines on the same host, then when that single node fails, two virtual machines die, and most likely you will have data loss. So what we have built on top of OpenStack is the capability to define anti-affinity rules when you launch a cluster. Our anti-affinity rules go like this: first there is anti-affinity at the pod level, then at the rack level, then at the server level. A pod, and I'll come back to this in a later slide, is basically a combination of three racks. So if I am launching, say, a six-node cluster of anything, let's say MySQL, the first rule that applies is that I will place the six virtual machines across six different pods. If I can find six different pods, I place the six virtual machines across them, because the chance of an entire pod failing is relatively low. But if I am launching a cluster of, say, 50 or 100 nodes, I may run out of pods; in that case I at least make sure I have redundancy across racks. You can extend this logic: if I can't get that, then at least I will have redundancy across servers, and if even that is not available, I will fail the operation. I am not going to create that cluster for you, because that would be extremely dangerous. We use these kinds of techniques to make sure we can recover from a single node failure.

Moving on, we also have a very large data lake for our big data platform. We are using HDFS for that, built, again, on spinning drives; this is petabytes of storage available for our big data platform. As an e-commerce company, we do a lot of analytics on the data we collect. Every click, every scroll, every view is data for us, and we crunch it, so we are generating terabytes of data daily that goes into the Hadoop platform. And lastly, we do have a little bit of enterprise storage as well. We would like to avoid it, but there are some use cases for which enterprise storage works very well. For example, we use it for keeping backups of our data; data is extremely important for us, so we give in at that point and keep a copy of our data on enterprise storage, plus a few more use cases like that. But the usage is very, very minimal.

One very important thing that I want to point out, which I have also discussed with a couple of other companies who have built OpenStack clouds, is that we are using Ceph storage for our root volumes. So 100% of the instances in our OpenStack cloud now boot from volume. That creates a big failure domain and a dependency on Ceph itself, but so far, in our experiments and in our experience, it works reasonably well, and it gives us the flexibility to do things like live migration.

For networking, we are using a Clos network architecture, basically a spine-leaf architecture. We have spine switches at 100G, which connect to the top-of-rack switches in each rack, which in turn connect to each server with 40G connectivity. This gives us the redundancy I was talking about, and it also gives us a very high-performing network all the way from any server to any other server, which is what lets us serve all that storage over the network; we hardly require any directly attached storage at all. It also gives us redundancy at every point. For north-south traffic we have high-end load balancers, because of the volume of traffic that comes into our data center, and all the firewalls and routers are likewise in a redundant architecture. The three racks you see per pod are what I was referring to earlier.
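Coming back to the VM placement rules for clustered applications for a moment: since pods, racks, and servers are the failure domains, the pod-then-rack-then-server fallback can be sketched roughly like this. The data structures and logic here are illustrative only; the real mechanism is built on top of OpenStack's scheduler, not a standalone script.

```python
# Simplified sketch of the pod -> rack -> server anti-affinity fallback
# described above (illustrative; not the production placement code).

from collections import namedtuple

Host = namedtuple("Host", ["name", "rack", "pod"])

def place_cluster(hosts: list[Host], cluster_size: int) -> list[Host]:
    """Spread cluster_size VMs across distinct pods, else racks, else hosts.

    Raise if even host-level anti-affinity cannot be satisfied: placing two
    members of a cluster on one host risks data loss on a single failure.
    """
    for level in ("pod", "rack", "name"):           # widest failure domain first
        buckets = {}
        for h in hosts:
            buckets.setdefault(getattr(h, level), h)   # keep one host per bucket
        if len(buckets) >= cluster_size:
            return list(buckets.values())[:cluster_size]
    raise RuntimeError("cannot satisfy anti-affinity; refusing to launch cluster")

hosts = [Host(f"node-{i}", rack=f"rack-{i % 9}", pod=f"pod-{i % 3}") for i in range(36)]
print([h.name for h in place_cluster(hosts, 3)])    # lands across 3 different pods
```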
So I just want to quickly show you a high-level view of what an application migration looks like for us. This is a very simplistic view, but on the left-hand side, let's say this is the public cloud view of a single application: an L7 load balancer on top, a bunch of app servers it is load balancing across, and those app servers accessing data from, say, a MySQL cluster with a bunch of read servers and a single write server.

The first thing we do is create a replica database in our private cloud and set up replication between the two. That means the data is now continuously backed up from the public cloud onto our own infrastructure. The next thing we do, using Terraform, our orchestration layer, is launch a replica of the same application into our own cloud. At this point there are two copies of the same application running, but the data is still being served by the primary, which is in the public cloud; we just have a replica with the exact same data, fully in replication sync. To make that happen, the process of launching an application had to be 100% automated, because this could not have been done manually, and that is what I will show you in the demo: from the point a developer checks in code to the point it gets deployed on a server, the entire pipeline is automated for us. Next, we open up this new copy of the application read-only so we can do basic verification checks: simple checks to make sure the connectivity is there and the data is consistent. Once verification is done, we point the DNS from the primary application to the new application, which is now running in our cloud, and then we blow away the primary copy. At that point the application is, for all practical purposes, migrated onto our infrastructure. After that, we can delete the older databases and either launch a new database or reuse one of the old ones as a read copy. This goes back to the disaster recovery use case I was talking about: for every database running in our private cloud, there is a copy of that database in the public cloud as well, for disaster recovery purposes.
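In schematic form, that cutover sequence looks roughly like this. The helper here is just a stub and the step names are made up; the real pipeline is driven by Terraform, Salt, and the custom Python automation described earlier.

```python
# Schematic of the migration sequence described above (hypothetical stubs;
# not the real Terraform/Salt pipeline).

def step(msg: str) -> None:
    print(msg)

def migrate_application(app: str) -> None:
    # 1. Create a replica database in the private cloud and keep it in sync.
    step(f"create replica DB for {app} in private cloud; replicate from public cloud")
    # 2. Launch an identical copy of the application stack via the orchestration layer.
    step(f"launch {app} application stack in private cloud (Terraform)")
    # 3. Open the new copy read-only and run verification checks.
    step(f"verify connectivity and data consistency for {app}")
    # 4. Point DNS at the private-cloud copy; traffic moves over with no downtime.
    step(f"switch DNS for {app} to private cloud; retire primary in public cloud")
    # 5. Keep a DB replica in the public cloud afterwards, for disaster recovery.
    step(f"re-establish DB replication from private cloud to public cloud for DR")

migrate_application("hermes")
```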
So what were some of the key learnings from this one-year journey of migrating from a public cloud to a private cloud? The first one, for OpenStack, is: own it, don't just operate it. We practically know the OpenStack code, so that if a problem strikes, we can go in and debug it, fix it, and patch it ourselves. That is very, very important for us, because if it happens at, say, two in the night, there is nobody else we can reach out to who will come and fix it for us; our entire business depends on it. And that is the key learning for anybody trying to use OpenStack in a production environment: you have to really own it. It is your code, it is your software, you can't point fingers at anyone, so you need to learn how to fix it as well.

Keep it simple. That is what we did, at least. We are not jumping into any of the advanced features or advanced projects of OpenStack yet; we might do that over time, and many of those things we wrote ourselves. Our mantra from day one was that we will only deploy something we understand. Design the control plane to be highly redundant; we talked about that. And automate upgrades and test frequently, which we do a lot. We patch our OpenStack build, we do bug fixes on it, and we regularly put it into production, in an automated fashion, without taking any downtime on the infrastructure. I highly recommend that you test the upgrade path, because that is what is going to bite you in the ass when it comes to upgrading to a major version or shipping a bug fix in production. We have also built the capability to launch OpenStack on OpenStack, which means that any time we want to test a bug fix, we can launch a brand new OpenStack environment on our cloud itself. That gives us a lot of ability to do continuous integration and testing.

This next one was an eye-opener for us: understand your applications very well, because you will be surprised at what is underneath. We found that a lot. We thought we knew our applications and how they behaved, but once we did the migration, they did not work on the first attempt. There were all sorts of things going on, especially around the dependency graph: applications that were bypassing the API layer and talking directly to databases, which would break; cron jobs that were not part of CI/CD, which would break once you do the migration. So there was a lot of learning for us. One thing we did as part of this migration is that we did not migrate as-is: we fixed our applications while they were still running on the public cloud, and then we migrated them. Rather than just inheriting the mess, we cleaned up the mess and then migrated. And again, automate and monitor everything. That was the key for us; we were talking about thousands of servers and hundreds of applications, and we could not possibly have migrated them unless we knew what was going on and how they were behaving, so building automation and monitoring on top was extremely critical.

All right, so with this, I'll give you a quick demo and walk you through how we launch applications in our own cloud. Let me just do a quick time check. In this demo, we are launching a brand new application called Hermes, completely from the CLI, using infrastructure as code. The first thing we do is run our "create new service" tooling, which generates a YAML file describing the different properties of this application. Right now we are just saying that it is called Hermes, and giving the email addresses of the owner of this application and the group that owns it. And this is the list of platform components that are deployed automatically using the code we have written; you can see Aerospike, Cassandra, Elasticsearch.
In this particular case, let's say it is a simple Tomcat service; we select that service type and launch it. Here we are specifying the repository the code for this application will come from, and the port, 8080. And once we look at the YAML file produced by that script, this is what it looks like. It is a very simplistic view of an application, but you can see the basics: the name of the component, which is Hermes; the type, which is an Nginx service; where the repository for this application lives in GitHub; and which ports it runs on. The load balancer port ties back to what I was describing with SmartStack, where we run a local HAProxy; that is what it is for. It also specifies the minimum and maximum number of instances you need, and of course the CPU and memory configuration. That CPU and memory information is interpreted by our code: when we launch a virtual machine, it launches a virtual machine of exactly that size. The code also keeps track of how many instances of these VMs are running, and it will always make sure that the count you specified as the minimum is met. So if you specify that you want a minimum of 10 instances, it tracks that at any point in time you have at least 10 instances running, and if one dies for some reason or the other, it relaunches it automatically.

In the interest of time, I am going to fast-forward some of this. In this demo, by the way, we created the application and a MySQL as well, combined together so that there is a dependency between the two. Once you check in this file, we push it to our infrastructure repository, which creates a merge request in GitLab. We then have to accept that request, which runs a bunch of automated tests for us. I'll skip this part; it is just showing that Jenkins accepts it, runs the pipeline of CI/CD tests, accepts the merge request, and hands it to the orchestration code, which then talks to OpenStack or the public cloud to launch it. In the checks, we verify things like: there are no port conflicts, and whatever you have specified, the size of the virtual machine, the dependencies, the applications, all actually exist. At the end, it creates a new tenant in our OpenStack cloud. Given that we are a private cloud, we could potentially run everything in a single-tenant environment, but what we have done instead is run each component in its own tenant network, completely restricted from access by any other tenant. This was done for security reasons, and also to make sure we understand what our applications are doing. If an application needs to talk to something else, that has to be specified in the same YAML file I was talking about, saying I want to talk to service B on port X, Y, Z, and only then is the specific security group opened.
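To give a feel for it, here is a made-up service definition along those lines, plus toy versions of the dependency-to-security-group mapping and the minimum-instance check. The field names, schema, and helpers are illustrative, not the real format or code.

```python
# Illustrative service definition and helpers (made up to mirror the
# description above; not Snapdeal's real schema or automation).

import yaml   # PyYAML

SPEC = yaml.safe_load("""
component: hermes
type: nginx
repo: "git@gitlab.example.com:platform/hermes.git"
ports:
  service: 8080
  load_balancer: 9090          # local HAProxy port (SmartStack)
instances: {min: 10, max: 20}
resources: {cpu: 4, memory_gb: 8}
depends_on:
  - {service: mysql-hermes, port: 3306}   # opens a security group rule
""")

def security_group_rules(spec: dict) -> list:
    """Translate declared dependencies into per-tenant security group rules."""
    return [{"from": spec["component"], "to": dep["service"], "port": dep["port"]}
            for dep in spec.get("depends_on", [])]

def instances_to_launch(spec: dict, running: int) -> int:
    """Return how many VMs of this component must still be launched."""
    return max(spec["instances"]["min"] - running, 0)

print(security_group_rules(SPEC))     # [{'from': 'hermes', 'to': 'mysql-hermes', 'port': 3306}]
print(instances_to_launch(SPEC, 7))   # 3
```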
So in this particular example, since we were launching a new component, a new tenant environment was launched in OpenStack and virtual machines were created inside it, all of it done by the scripts. This is showing that there is now a project called Hermes, and we can go inside it and see that two virtual machines were created: one is a MySQL server and the other is the Tomcat, or Nginx, server. We can access that Tomcat to see whether any application is running on it, and it simply shows an application connected to the database that was launched. This entire thing could have been done in the public cloud as well; you could have specified any number of instances and it would have launched them for you. So that concludes the presentation; we can take questions now. Thank you.

Yes? Yes, and we actually understand that challenge. What we do is keep ourselves on the Kilo train, so we do pull all the changes that are happening in Kilo. Any change that we make on top of it lives in a separate code base, so we understand what changes we are making and what code we are picking up from the community. We actually hit that problem when we moved from Kilo 2 to Kilo 4; it was a large change for us, a fairly hairy merge, to reconcile the fixes we had made with what we were picking up from the community. But at that point we did not have an option, because we did not want to move to Liberty. We built this cloud in a matter of about one year, which meant there were many intermediate steps; for example, we launched a Dev and Test cloud environment first, which went into production around the January-February timeframe. What we did not want to do was launch a production environment on OpenStack software we had not tested, so there were reasons to stay where we were: we trusted the code we were running, and we understood it. It will be a challenge when we move to a major version like Liberty, but at that point what we will do is create a parallel environment, probably install Liberty or one of the later, more stable versions of OpenStack, and then migrate our workloads onto it rather than upgrading in place. So it is a trade-off we understand, but given what we were doing at the time, we found it more appropriate to own it rather than keep up with the latest code.

Yes, I think you asked first. On top of SSDs, yes, many applications are running on top of SSDs. When you say databases, MySQL is actually not one of them; MySQL uses a local disk, because some of our databases require very high throughput, and a local SSD gives you that kind of performance. But many others, Mongo and Cassandra for example, are using Ceph SSD, and there are other databases, as well as message queues like Kafka, that use Ceph-based SSD storage.

I have a relatively small team; my total cloud platform team is about 25 people, and that includes the data center folks who do the racking and stacking and maintain the physical infrastructure.

Yes, I'll come to that. Based on the experience we have had, I don't see that there is a certain application that needs to run in a certain kind of public or private cloud environment.
But if I were to think about it right now, I would say that some enterprise software, ERPs and Oracle and things like that, is better suited to a private environment than a public one, though that is also changing. For us, what was more important is that our applications are able to run on either of the platforms, and that is what we tried to achieve.

Yes, I can share a little bit on that. Like I was telling you, we did the migration in a few phases. First we migrated our Dev and Test environment, which was also fairly large and growing, and the ROI on the Dev and Test data center was only eight months; within eight months we had recouped all the investment that went into it, and from that point on it was a very small incremental operational cost. For the large production data center we have built, our ROI calculation is 1.6 years, which again is extremely cost beneficial.

Yeah, this question here. The question is: if you shut down the control plane, how does the north-south traffic keep flowing? We are using provider networks, so our north-south traffic goes through a separate network entirely, and our control plane network is completely isolated. Even if the control plane shuts down, or there is an issue there, our data plane traffic continues to work.

Yes? Somebody had a question? Yes? Why did we do the migration? Because, like I said, we were running 100% in a public cloud. We created a private cloud, and we had to migrate our applications to it. We could not simply shut down our use of the public cloud and create brand new infrastructure, because our business runs on it. So the entire thing had to be migrated, and that migration also had to happen transparently, meaning that when traffic comes to our site, users don't know where they are landing. But I think that's all the time we have. Thank you, everybody. All right.