Good morning. Everybody's paying attention — that's great. Welcome to our third sponsored session today, from Cisco. We're very fortunate to have Ashiq Khan from Rakuten here to talk about some of the work they are doing. I'm going to keep this brief so we give Ashiq as much time as possible, and we will have some time for Q&A at the end. So away we go.

Thanks, Gary, for the introduction. First, it's great to be back; I've seen lots of faces I know from before. Today I'll walk you through what we are doing at Rakuten Mobile. Many of you probably saw in the newspapers that it decided to become the fourth mobile network operator in Japan. Rakuten Mobile was so far an MVNO on top of NTT Docomo, but it decided to become an MNO itself. This presentation is about how we have built our network, and how we are still building it.

I'll take you back to 2013. This is what we did in 2013 — this you all know. After that, in order to build this ecosystem, the conventional telco players made their products. I see Chris over here — we did OPNFV as well, to do open source development of these. And this architecture — we actually do not call it an architecture, we call it a framework per se — different deployment implementations of it came onto the market. The point here is that, as Rakuten, we are following this architecture that was standardized in ETSI NFV in 2013 — if I remember correctly, at the Bonn meeting in June.

Now, the reason I showed that diagram is because of this. This is Rakuten's ecosystem, and this is what our network looks like. The first thing is, as you saw in the title, in our network we do not have any physical network function. Everything is virtualized on top of OpenStack and KVM. So let's start from there.

First, as we are basically a new operator, we do not have any legacy systems. We are using commercial off-the-shelf servers. I wouldn't say you'll be able to go and buy them in a shop down the street, but these are basically Intel servers; you can buy them from anywhere, generally speaking. On top of that, we have this horizontal layer, what we call the telco software. This is the OpenStack/KVM part that we are procuring from Cisco. Cisco calls it CVIM, the Cisco Virtualized Infrastructure Manager. And on top of that, we have our 4G core network. The 5G core will appear next year; let's ignore that for the moment. The EPC core part is coming from Cisco. The whole IMS stack is coming from Nokia. And the hardware is coming from Quanta — I don't know if you know Quanta; it's a Taiwanese server manufacturer.

From the VNF manager point of view, I think we had many discussions around the industry about the generic VNF manager versus the specific VNF manager. At present, for the Cisco stack and the rest of the ecosystem, we use Cisco's VNF manager, which is called ESC. For the IMS part, we use Nokia's CBAM, Nokia's VNF manager. But Rakuten's strategy is that within one year we will move all of the VNFs to ESC. We will have only one single VNF manager — a specific VNF manager — for the whole NFV stack, I would say. On top of that is where our innovation sits: there are two innovations we have that are not coming from conventional vendors, let's say. The vRAN part I will explain a little later.
The other is the OSS system. An operator, by definition, operates. One good thing about us: as we didn't have a legacy system, we actually developed our OSS system largely in-house, together with a company called Innoeye. When I was working on the ETSI NFV architecture, the reason you have the NFVO and the OSS separated is that all network operators at that time had legacy OSS systems, right? They couldn't really replace them with a virtualization-aware OSS, which is why we had to augment them with the NFVO. But for us it's all virtualized, so our OSS is virtualization-aware. It basically handles all the virtualized elements; it's not handling any physical elements per se. So this is our in-house OSS system that we have developed.

And then we have the virtualized RAN. For the first time in the world, we are the first mobile network operator that has commercially deployed — not in the lab, commercially deployed, now in three major cities in Japan — the 4G vRAN on a CU/DU split architecture: you have the centralized unit and you have the distributed unit. And, as I will explain a little later, we also have a centralized data center and edge cloud physical deployment.

So this is what our NFV architecture looks like, and it's very much a multi-vendor ecosystem. You can see we have Quanta, we use Intel FPGA cards, Nokia's IMS, Cisco's whole virtualized infrastructure — Cisco CVIM on OpenStack — and we have our own OSS system.

Now, this is — I don't call it an architecture — this is actually the physical deployment, right? What we have is a centralized data center, what we call the CDC, the Central Data Center. I'll show you later. This is where the EPC core part (the Cisco EPC), the Nokia IMS part, and also other security gateways are hosted, and this is also virtualized on OpenStack. The gray one — we have two central data centers in Japan, and Japan is a very disaster-prone country. One thing I want to mention: you have seen many kinds of redundancy in your life, I'm sure — one-plus-one, N-plus-K, N-plus-one. Maybe something you haven't seen: we have one central data center in the Tokyo area, which is the center of Eastern Japan, and the gray one, let's say it's in the Osaka area, which is the central location in Western Japan. If Tokyo goes down in a magnitude-nine earthquake, we can actually switch over to our Osaka data center, and the network will still be serving. So it's not only redundancy at the VNF level, in a one-plus-one fashion; it's also redundancy among, or between, the clouds themselves. The whole of Tokyo can go down, but we'll still be able to serve our customers by switching to the Osaka data center.

And then we have the edge clouds. "GC" is local terminology, please ignore it — it stands for group centers. We are using them for our edge clouds to host our virtualized RAN; I'll show it to you later. The BBUs have all been put into virtual machines and are deployed on Cisco's edge CVIM platform. And the only physical thing — I have said everything is virtualized, but you can't really virtualize the antennas, right? They're physical poles, so they are physical, that I must admit.

Now, I think in a previous presentation Chandra and Ian explained in detail Cisco's CVIM as a product. The way it works is that we have different pods, and each pod is one OpenStack-run cloud, okay? And you can have different sizes of pods. We dimension our pods according to the necessity of the VNF.
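To make the cloud-level redundancy described a moment ago concrete, here is a minimal sketch of a Tokyo-to-Osaka switchover check. The endpoints and the selection logic are hypothetical illustrations, not Rakuten's actual mechanism, which the talk does not detail.

```python
import urllib.request

# Hypothetical health endpoints for the two central data centers (CDCs).
# These URLs and this selection logic are illustrative only.
CDCS = {
    "tokyo": "https://cdc-tokyo.example.net/health",
    "osaka": "https://cdc-osaka.example.net/health",
}

def is_alive(url: str, timeout: float = 2.0) -> bool:
    """Probe a CDC health endpoint; any error counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def select_active_cdc(preferred: str = "tokyo") -> str:
    """Prefer one CDC; fail over to the other if it is unreachable."""
    for name in [preferred] + [n for n in CDCS if n != preferred]:
        if is_alive(CDCS[name]):
            return name
    raise RuntimeError("no central data center reachable")

if __name__ == "__main__":
    # A real deployment would repoint traffic via DNS or routing updates;
    # here we only report which site a monitoring loop would select.
    try:
        print("active CDC:", select_active_cdc())
    except RuntimeError as err:
        print(err)
```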
Back to pod sizing — this is just an example, please, not the real numbers. For example, if one element of the Nokia IMS requires 100 servers to host the VNF, we will make a pod which has 100 servers. That's how we dimension it; it depends on the VNF that is running. Depending on the VNF, the size changes, and each of these pods is run by one OpenStack instance from Cisco. The pods themselves have redundancy: if one pod fails, another pod will take over. The VNFs themselves — the software part — have their own redundancy, so inside the pod, if one VM fails, another VM takes over. The whole network is designed for at least one failure. It doesn't matter if one element fails; the network will still keep on working, and we don't have to replace hardware instantly — we can wait for a week. That's how the whole network is built.

So this is generally what our cloud looks like. You have pods, and that has some benefits. One is sandboxing. I wouldn't say that after ten years of 4G — we rolled out 4G ten years ago — there is any rogue VNF from a legitimate vendor; but still, if one goes wrong, it stays within the pod itself. And you can also enforce different types of security policy on different pods, so it has that benefit. And just as an example — again, I won't give you the exact number — in one central data center you have around 30 pods, around 3,000 to 4,000 servers, hosting the 4G core network.

Now, virtualization is, in my very own personal view, somewhat more complex than, let's say, a straightforward physical implementation. And in our case everything is virtualized, end to end. How do we handle that complexity? Please remember that Rakuten Mobile is a very new company; we are growing, but it's a few hundred people, not a few thousand people yet. And we did this within one year — it could be a record, I would say, because I worked for other, much larger operators before, and it takes three to five years to develop one generation of mobile network and then commercially roll it out. We did it in one year. How are we doing it?

The point is our OSS; we built it from scratch. In what we call the infrastructure database, the whole country, the whole of Japan, is abstracted and put in the database. IP address generation, host name generation — these are automated. I'll give you an example you would understand. Let's say you have Denver, Chicago, and New York. We have our inherent logic that when you're deploying a pod in Denver, the host names start with, let's say, DN; then if it's Nokia, maybe NK; then if it's EPC, EP; and if it's instance one, 01. So there is no network administrator sitting in front of a console writing host names. And this is a complete IPv6 network, so for the IPv6 addressing scheme we also put in our own logic for how to allocate addresses geographically and automatically to Denver. We are not doing anything manually here.

So you have the infrastructure database in the OSS, and this is the pod — and in this picture the pod is bare-bones, okay? The gray means there is no software installed on the servers yet. Now, depending on the VNF, you may need some number of compute nodes and some number of storage nodes. Based on that, what we do — actually, we don't do anything; what happens is this. This is what Cisco calls setup.yaml.
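Before going on to the setup file, here is a minimal sketch of the kind of naming and addressing logic just described. The site, vendor, and function codes follow the Denver example from the talk, and the IPv6 prefix layout is an illustrative assumption, not the real scheme.

```python
import ipaddress

# Illustrative code tables following the convention described above
# (site -> DN, vendor -> NK, function -> EP); not Rakuten's real scheme.
SITE = {"denver": "dn", "chicago": "ch", "newyork": "ny"}
VENDOR = {"nokia": "nk", "cisco": "cs"}
FUNCTION = {"epc": "ep", "ims": "im"}

# Hypothetical per-site IPv6 prefixes carved out by geography.
SITE_PREFIX = {name: ipaddress.ip_network(f"2001:db8:{i:x}::/48")
               for i, name in enumerate(SITE)}

def hostname(site: str, vendor: str, function: str, instance: int) -> str:
    """e.g. ('denver', 'nokia', 'epc', 1) -> 'dnnkep01'."""
    return f"{SITE[site]}{VENDOR[vendor]}{FUNCTION[function]}{instance:02d}"

def ipv6_address(site: str, instance: int) -> ipaddress.IPv6Address:
    """Allocate deterministically inside the site's geographic prefix."""
    return SITE_PREFIX[site][instance]

print(hostname("denver", "nokia", "epc", 1), ipv6_address("denver", 1))
```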
That setup.yaml file basically writes down how the pod should be configured. The host names that are auto-generated by our OSS system are written into this file, and the file defines how many compute nodes should be in this pod, how many storage nodes, what their names are, and what their IP addresses are. And we feed it in — as Cisco probably already explained, there is a management server in the pod. Using that setup.yaml auto-generated from our OSS system, this management server then builds the pod: okay, let's say in this case the VNF needed five compute nodes — data-processing nodes — and two storage nodes. That's it. The setup.yaml config file comes in from the OSS, it is fed into the management server, and the pod is up. We don't do much in setting up a cloud.

Now, that was a single example of deploying, or configuring, one cloud. At present we are commercially serving the three major cities in Japan — Tokyo, Nagoya, and Osaka — with a few thousand base stations, a few hundred edge clouds, and of course two central clouds. How do we do that? It's exactly the same principle; I'll just explain it to you. I spent about one hour making this animation, actually. So, the setup file that I explained before: in the OSS, you have to actually select the data centers, right? Denver, Chicago, or New York, wherever — you just select that. And the OSS system, from the infrastructure database, which is digitized, creates the relevant setup.yaml file. Basically 80% of it looks the same; the differences are the host names and the IP addresses, and which DHCP server, DNS name server, and NTP/PTP server they should connect to. The logic for those differences is also in our OSS. So you create the relevant setup file, and this is what you do — I'll try to be very quick: you send the setup file to one edge cloud, and the edge cloud is made. It's the same every time; I'll just keep going, okay?

So this is how it is done. I'm not doing much. Well, sometimes things go wrong; we troubleshoot, we fix things. In the middle of a CVIM installation, a server may have problems, and we may have to replace the server or do troubleshooting. Those things we do, but generally speaking we have, I would say, completely automated the deployment part of this cloud. So now I have a few hundred edge clouds on Cisco CVIM; within the next two years I'll have a few thousand. If we didn't do this, it would be physically impossible for human beings to set up a few thousand edge clouds sitting on a chair with one console. So automation is a big part of Rakuten Mobile's innovation, and I'll come back and show you more later.

So once these clouds have been deployed, you have your 4G core network inside the central cloud and your vRAN virtual machines running on the edge clouds. From there, to actually make the antenna broadcast the physical radio signal, without any human touch — I'll show you the flow later — takes 15 minutes. It takes 15 minutes to commission one base station at Rakuten. And, without naming any names, go to any other operator and it takes three to five days to commission one base station. So that's the automation we have been putting a lot of effort into from basically day minus one.
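A minimal sketch of what such an auto-generated pod definition could look like follows. The field names are illustrative guesses in the spirit of the setup.yaml described above, not Cisco VIM's actual schema, and a real flow would serialize to YAML rather than print JSON.

```python
import json

# Hypothetical generator for a pod definition in the spirit of the
# auto-generated setup.yaml described above. The field names are
# illustrative guesses, NOT Cisco VIM's actual schema.
def make_pod_definition(site: str, n_compute: int, n_storage: int) -> dict:
    def name(role: str, i: int) -> str:
        return f"{site}-{role}-{i:02d}"      # e.g. "dn-compute-01"
    return {
        "pod_name": f"{site}-pod",
        "servers": (
            [{"host": name("compute", i + 1), "role": "compute"}
             for i in range(n_compute)]
            + [{"host": name("storage", i + 1), "role": "storage"}
               for i in range(n_storage)]
        ),
        # Roughly 80% of the file is the same everywhere; per-site values
        # like these (and the host names above) are what the OSS varies.
        "dns_server": "2001:db8::53",
        "ntp_ptp_server": "2001:db8::123",
    }

# The five-compute, two-storage example from the talk; the result would be
# fed to the pod's management server, and the pod comes up.
print(json.dumps(make_pod_definition("dn", 5, 2), indent=2))
```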
And this is how our operation — or let's say bootstrapping the network — works. Here I have to go into a little more detail on our vRAN, which I'll also explain further later. You have your antennas on the left-hand side; these are your antenna cell sites, and usually one cell site has three sectors, right? You cover 120 degrees with one sector. We are using eCPRI fronthaul over Ethernet. And these are our servers from Quanta, hosted in the edge data centers. The platform is again OpenStack from Cisco, and KVM, and on top of that you instantiate virtual machines from the OSS, through what I explained before with the setup config files. What else to explain from here... this may be rather sensitive information, but okay, it's written here: one virtual machine — basically one vDU — covers six sectors. That's huge; that's a huge achievement. And then you have your centralized unit, which connects to the EMS. We still have an EMS for the vRAN part, hosted in the central data center. So this is basically our vRAN, which I'll explain in a little more detail on the next slide.

Have any of you been to a cell site? Yeah — of course you have, guys. Okay, thank you. So, this is a 40-minute presentation, right? So I have time. I also handle the Ministry of Telecommunications, because in Japan you do not bid in an auction for spectrum, right? You get spectrum from the government as a social responsibility. That means you have a lot of oversight from the Ministry of Telecommunications. So they wanted to see one of our sites, and I took them to one of our first sites, a live one. They came in and they said: your site is not complete. It's not a completed site. And I had to prove to them, showing a test terminal, that look, we are transmitting on our spectrum and it is a live site.

The reason is this: we have virtualized the RAN part. This is the conventional cell site, okay? You have three antennas facing three different directions. Let's forget the battery and the power board; the main part is here — a huge cabinet of baseband processing units. That's what we have virtualized. It goes to an edge data center, let's say between 10 and 30 kilometers away, and one edge data center can accommodate, let's say, a few tens of cell sites. So when you move this huge physical cabinet onto basically a few servers — around 10 to 20 servers, depending on the size of the edge data center — your cell site looks like this. Compare the number of elements: one, two, three, four, five, six, versus two. Those two are physical; I cannot get rid of them. When you compare these two sites, it's very easy for us — and that's why, when the Ministry of Telecommunications people saw this place, they said it's incomplete, you are lying.

So there are two benefits here, and it's not only about virtualization. With virtualization, of course, deployment is easy: when 5G comes, all you will have to do is install new antennas for 5G and just instantiate new virtual machines for 5G. The point is that Japan is a very densely populated country, right? Finding a cell site in Tokyo — it would be easier for me to find, I don't know, diamonds on a street corner.
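The figures quoted above allow a quick dimensioning check; the 40-site edge data center below is a hypothetical, chosen to match the "few tens of cell sites" mentioned earlier.

```python
import math

# Back-of-the-envelope check on the vRAN figures quoted above.
SECTORS_PER_SITE = 3   # one cell site typically has three 120-degree sectors
SECTORS_PER_VDU = 6    # one vDU virtual machine covers six sectors

def vdus_needed(cell_sites: int) -> int:
    """Minimum number of vDU VMs to serve a given number of cell sites."""
    return math.ceil(cell_sites * SECTORS_PER_SITE / SECTORS_PER_VDU)

# A hypothetical edge data center serving 40 cell sites ("a few tens"):
print(vdus_needed(40))   # -> 20, i.e. one vDU VM per two three-sector sites
```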
So, coming back: when the cell sites are compact, it's easier for us to find sites — and because we are a new operator, we have to find all of our cell sites. The second benefit is operating cost. You don't need to go to the cell site; there is nothing to do there. And just to give you an example that the benefit of virtualization is not only virtualization itself — there are other benefits too. There is a rule in Japan — Japan is a very safe, well-organized, regulated country: one person can carry 20 kilograms, okay? If it's more than 20 kilograms, you need two people. It's not that we calculated for it, but what we found is that to build this site I only need one person, because there is nothing heavier than 20 kilograms in this structure — because you have moved the BBU, the huge cabinet, to your edge data center, which is a kind of centralization of your baseband processing. So virtualization is not only giving us flexibility and ease of operation; the ease of operation actually translates into a reduction in operating cost. And you should try to reduce operating cost in the radio access network part, because the split is roughly 30/70: generally speaking, your core network takes about 30% of the cost, and your radio access network takes about 70% of the cost. So that's where you should try to reduce cost. This is what our RAN sites look like compared to the legacy.

This slide actually had a video. In the previous slide I showed you the antenna. The antenna is a physical element; it's sending electromagnetic waves, right? But the element — or let's say the machine — that generates that electromagnetic wave, can you actually see it? This is the RRH. We have developed it with our partners, and the RRH is a one-person job. What the installer does — there is no jumper cable or anything — is carry it through a very narrow tunnel up to the rooftop; there's a hoop on top, and he connects it to the two connectors behind the antenna element. That's it; your cell site is done, with one person. You don't need a huge construction company to build your cell sites. So this is also some of the innovation we did within less than one year.

One more thing to re-emphasize: inside the data center we are using the Cisco ACI fabric. That's fine — many of you know ACI, and it is well known around the world: a Cisco proprietary SDN solution for data centers and transport networks. So you have the APIC controllers, we have the pods, and you have the top-of-rack switches. One thing that is also our innovation, in cooperation with Cisco: there are two parts to ACI, right? You have the network inside the pod, you have the network among the pods, and then you are going out of the data center through your ingress/egress routers. For the VLANs and everything that you set up inside the pod, which hosts your VNF, we can now do it from OpenStack, from CVIM. This was the development we did with Cisco: before that, the pods and the OpenStack instantiation were done separately, and then someone had to go in afterwards and set up the VLAN or VXLAN networks. Now we have automated it, and CVIM — the OpenStack instance from Cisco — itself sets up the network inside those pods.
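As an illustration of the kind of manual step this integration removes, here is a sketch using the openstacksdk to create a provider VLAN network through Neutron. The cloud name, physical network mapping, VLAN ID, and prefix are placeholders; in the setup described, the fabric side would be programmed automatically by the corresponding ACI mechanism driver rather than configured by hand.

```python
import openstack

# Connect to one edge pod's OpenStack endpoint; "edge-pod-01" is a
# placeholder entry that would live in clouds.yaml.
conn = openstack.connect(cloud="edge-pod-01")

# Create a provider VLAN network through Neutron. In the integration
# described above, the fabric side is realized automatically, so nobody
# logs into the switches to set up VLANs or VXLANs.
net = conn.network.create_network(
    name="vnf-signaling",                  # placeholder network name
    provider_network_type="vlan",
    provider_physical_network="physnet1",  # placeholder physnet mapping
    provider_segmentation_id=210,          # placeholder VLAN ID
)
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=6,
    cidr="2001:db8:210::/64",              # placeholder IPv6 prefix
)
print(net.id, subnet.id)
```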
One more thing — and I would also like to take some questions, so I'll be very quick. We are most probably the first network in the world that is 100% IPv6 from day zero. That's what this shows: from the RRH, the antenna side, to the central data center, it's all IPv6. This slide actually has an animation; I'll not walk through it. In the flow you are seeing, the RRH is your antenna, ESC is Cisco's VNF manager, and NSO is the NFVO — we have Cisco NSO as another, higher, end-to-end orchestration layer — plus our OSS system and the vRAN EMS. This is what I said takes 15 minutes: commissioning one RRH, in a completely automated flow with no human intervention required. Actually, the person who did it is standing right over there. So that's how we commission a cell site without any human intervention, and we'll keep doing that all over the country until our network is complete — and that will be 30,000 to 70,000 base stations, RRHs.

So this is my summary. From day zero we decided that we would be a software-centric network, a software-centric mobile network operator; that we would be 100% automated — our ultimate goal is 100% automation; and that we would be fully virtualized, and we are. And in the future, too, there is no reason for us — at least I don't see a reason — not to stay virtualized. Those are the three key takeaways, our philosophy, let's say, of making a mobile network from scratch. That's it from me. Thank you for your attention.

Thank you, Ashiq. We do have about seven or eight minutes for some questions. Okay, here we go.

Thank you for the valuable sharing; I have two questions. One is on the OpenStack control plane — can you elaborate a little bit? Is it a distributed control plane across the central data center and the remote edge data centers? That's one question. The second one: can you share your perspective on the technical challenges of adopting vRAN — besides real time, any other technical challenges? Thank you.

I have the chief architect of CVIM standing right beside me, Chandra. The first question is about the control plane between the central data center and the edge data centers. The OpenStack control plane does not need to interact between the central data center and the edge data centers. One of the things we have done — because, understand, this is mobile, this is mobility, and a lot of critical messages are flowing — is that when we designed this, we made sure the failure domains, the fault domains, the blast radius of a fault, are limited. That means every pod has its own control plane. Now, to your point: how do you then manage so many of these across the entire country? Remember, in the previous slide we talked about every pod having the same API. All the OSS/BSS is centralized into this one OSS/BSS system. Based on the pod location, a pod gets labeled with a particular name, and that is the basis on which the Rakuten OSS/BSS system calls the API for a particular pod; that's pretty much how it happens. The advantage is that if any one site fails, it's contained: okay, some sites are down, but the rest of the network is up. Now, that does add the challenge of burning three servers at the edge just for control.
What we have done — and this is where one of the innovations working with Rakuten happened — is we came up with this thing called the edge pod, which I talked about in my talk: we take only two cores out of the edge pod for control, and the same servers act as both controller and compute. That's one of the innovations we did as part of this program. So we kind of cheated the system to still have controllers locally, so that our failure zones are localized, but without burning whole servers on control — and we are not doing DCN, the remote-compute-node approach, which is a technology that is still evolving as we speak.

Yep, thanks, Chandra. So every pod is a standalone cloud — that's the summary. Your second question is about our challenges on the vRAN side, is it? At present — listen, performance-wise we do not see any problem or issue yet. We already launched commercially, right, in a very high-quality market like Japan. We got only 20 megahertz of spectrum as the first allocation, and we are getting close to 200 Mbps, pretty much. If you ask me what our challenge in vRAN is — as I own the whole cloud infrastructure — it is covering the whole country with a few tens of thousands of sites. Yeah, go ahead.

I would say there are two parts. The deployment part is done. Pre-deployment, obviously, there was hardware offload: we worked with Intel to bring in the Vista Creek, the N3000 card, and we had to do a lot of innovation within the cloud to make sure the 30-microsecond round trip, from when the packet enters to when it leaves, is met. We started at 1,000 microseconds — so from there we had to get to 30. Those things are done, and that's a good thing. I think the real challenge now is operations: 10 or 20 people operating this number of clouds, at a scale of thousands of clouds. Even though it's a common API, there has to be a whole tracking system — which cloud got updated to what version of CVIM, what version of the Altiostar VNF; tomorrow, if an FPGA update is needed, think about it: while it's all automated, that automation still has to crawl through thousands of clouds to make sure the new software gets rolled out. That, I think, is the challenge.

Yeah — as Chandra was explaining, one thing is build, the other is operation, and sometimes build is easier: you do a lot of testing beforehand. We are in the build phase. When we get to the operation phase — when you virtualize a system, you have a larger number of software elements in the ecosystem. Tracking them, making sure I'm doing the right update, automating that: this will be the challenge not only for vRAN but for our whole ecosystem. And also how agile and parallelized our OSS system is, because the OSS system itself is evolving. Those will be the challenges for vRAN, whereas, as Chandra said, performance-wise we are not seeing any issues at the moment. Thank you.

Are there any specific technical implementations you can share with us — how you dropped from 1,000 to 30 microseconds, what data-plane acceleration technologies you used?

Yeah, I will share what is open and public already. Obviously, this cloud is not running a standard operating system; it's running a real-time kernel — number one. Number two, we do know that everything cannot be done in software. That's why every server here has an FPGA offload, which is the Intel N3000 card.
So if you think about it — I'll go back to February of this year, when Rakuten did a soft launch. It was all software-based, not based on any hardware offload, and we could only do about five megahertz at that point; we could not scale up to 20 megahertz. So that's another innovation we brought in. Intel has another technology called Cache Allocation Technology, which is part of the FlexRAN licensing piece; we've incorporated Cache Allocation Technology as well. Also, some of these vDUs need dedicated cores, so we've made changes in Nova to make sure there's no preemption on those cores — they're used only for vDUs or vCUs, exactly. Those are the kinds of innovations we have done. Obviously, one of my key developers is back there. This was an amazing one-year project. I think about eight to ten innovations came out of it, and we have five patents out of this. I do want to acknowledge the partnership we had; it was unlike anything — it was real time, we were always talking on Viber and whatnot. So thank you.

Thanks, Chandra. I remember one of the panels — at OPNFV, I think — about building an airplane while flying. Through my experience of the last year — I've been on board for one year now — we are indeed building an airplane while flying. Seriously. We need to find the solution to a problem on a three-to-five-day timeline. In that sense it's not only Cisco; our partners Nokia, Altiostar, Intel — we are really thankful to all of them for the response times we get. I'm pretty sure that in the history of humankind, no one has rolled out a mobile network in one year — with, in my view, at least 50% of it untested technology that was never done before. So we are building an airplane while flying. Of course, sometimes there are mistakes, but we are fixing them very fast.

One thing I did not touch on here is that we have built a testing facility, which I call MiniMe. It is a complete replica of our commercial network; just the scale is smaller. If something goes wrong, or there's something new you want to try, or suddenly a new use case comes in, we actually do it over there. We have a complete replica, and if it passes, we have a CI/CD pipeline and it goes into production. That's how we got the speed to do this in one year. Now we have to cover space, right? We have to cover all of Japan, which will take another two to three years. But the fundamentals and the basics are in place, we envision building on top of them, and most probably we're heading in the container direction soon. Thank you.

One more question. Yes, just a very quick question. I understand that the whole solution is full-stack Cisco — from OpenStack to the fabric, ACI, the underlay, and the SDN. We have a similar scenario where we want to build a cloud, but it's not full-stack — say, for example, Red Hat as the OpenStack and Cisco ACI for the SDN. What drove you to use a full stack?

When you say full stack, what do you mean? From OpenStack to the fabric itself — the underlay, the ACI, and the SDN. Are you pointing at how we did the integration? Yes. Okay, right. So yes — Cisco is obviously not a producer of OpenStack per se, right?
They are using an OpenStack, okay, let's say from Red Hat. But OpenStack itself is not always sufficient for a telco service. You require monitoring features, security features, password management features. That's where Cisco comes into play — and not only Cisco; other OpenStack suppliers come into play there too. Now, how we did the integration across this diverse supplier spectrum — this is my own, very personal view: everyone was excited. That's what I found. In the past, sometimes you have to force, or make an agreement, that a vendor must participate in integration with a particular company that is actually its competitor. What I have seen in one year, as we are doing something very new and very exciting — in my view, it's all virtualized, right — is a very positive response from all our ecosystem partners. Nokia participated in common development with Cisco, Cisco participated in common development with Altiostar, and Altiostar did the same with Red Hat. What you see in an open source community, I have seen inside my company. An open source community is by nature organic, right? You do things because you feel like it. As Rakuten declared from day zero what our network was going to be — all virtualized — I think everyone was excited and wanted to be a part of it, to make history, because no one had done this before. So from an integration point of view, apart from the technical challenges — and there were; people went through a lot of tough times — there was also an inherent, natural urge to make this thing successful. That's something I really want to appreciate in all our ecosystem partners.

So, we knew the biggest risk was on the vRAN side. So Cisco and Altiostar — I mean, with Intel, and then when we did Red Hat's real-time kernel, all of that — irrespective of what the contracts said, we had NDAs, like N-way NDAs, and we just had daily meetings and daily sync-ups and all of that. Altiostar had engaged with this even before the program officially became formal. We started that because we knew it was the longest pole — the integration of Vista Creek and all of this was not going to happen overnight. Even now, to this day, we have a weekly sync-up with Intel; I have a daily sync-up with Altiostar, a weekly sync-up with Red Hat on the real-time kernel, and obviously a weekly sync-up with Rakuten in which all the vendors are there. And then we have Viber groups on which we talk in real time, whatever is needed. So yeah — when you bring best of breed, best of breed is a theory, and you really have to get it tested. One of the things Ashiq mentioned — the Kiba lab that we have for testing, which Rakuten actually invested in — is absolutely instrumental, essential. Otherwise this best-of-breed is a myth, because every time anybody makes a change, how do you test it at scale? Even if it's a scaled-down replica, it's still scale. How do you do that, when patches are coming left, right, and center? That is the most important thing. Otherwise best of breed is just theory: like I said, all these plus-ones will not add up to something greater than the sum of the parts if you don't have automated testing.

All right, I'm going to give them the hook at this point. Ashiq, we have to get ready; we have another session starting in about four minutes. Ashiq, thank you. Thank you very much.
Chandra and Ashiq will continue the conversation out on the hallway track. And I hope you can stay for our fourth session, on the new Cisco Container Platform; Ashiq will be presenting in about three minutes. So it's time to get warmed up.