No? So what Arun and I are going to talk about today — I believe you've heard a lot about NFV, network functions virtualization, what it is. And since you saw all of that written in the session abstract, I know you're in the field and you know what we're talking about, so I'm not going to bore you with any of it. But there have been a lot of conversations around how we virtualize network functions, and over the past year or year and a half there has been a lot of traction in the industry. I'm sure you're following it. I see some familiar faces — folks I've spoken to when I went to their organizations to talk about it. And a lot of the work done so far has been in the lab. We've essentially been doing POCs — proofs of concept that what we're trying to do is doable. But now we're getting to a stage where it's ready to graduate, and we're getting ready to take these virtual functions from the lab into some contained field trials. And there are some carriers and operators who have been a bit bolder, who have actually come out and said that by a specific date, before the end of the year, they'll have these networks deployed and running in production. So we thought we'd take the next 38 minutes or so to talk through and share what we've been hearing about what it takes to get out of the POC and into a production world. And I know you absolutely know this a lot better than us — we provide solutions for it, for what it takes to productionize and operationalize — but we thought we'd share what we've been hearing and how some of these things are being addressed. By the way, my name is Tarek Khan, part of HP's network functions virtualization business unit. And within this small business unit that HP has created, my task is to help the cloud folks, i.e., the OpenStack folks.
A lot of you are here — to figure out, or provide input on, what features are required to host telco applications on OpenStack. And I'm joined by my colleague, Arun Thulasi. Hello, Tarek. I'm also in the network functions virtualization business unit, and my focus is on how I can take the great OpenStack that he builds and take it to market as a product or as a vertical solution. OK, thanks, Arun. And I know we're not going to need it too much more — or you can leave his mic off, I'll just talk. OK, thank you. I'm OK with his being off. So I know it's a bold statement, but maybe some of you get all the things that we keep getting. In fact, just earlier this week, quite likely in preparation for this summit, Forrester came out with a report. No one's representing Forrester over here, right? And we're not recording, right? In any case, I got this title from the report: OpenStack is ready. Forrester did an analysis, and they're saying that OpenStack is ready. That doesn't mean everyone is ready to start putting it into production. The point they were trying to make — and this summit attests to it — is that OpenStack has graduated now. People are no longer asking whether it's going to last. So it is ready. But it doesn't mean that every single feature we require to run every single workload is there. So here's what we thought: we're not going to bore you with enterprise-class deployment and what's happening over there. We're just going to focus on a narrow piece within three categories — security, some carrier-grade features, and multi-data center, which is inherent to telcos. What is perhaps missing, and what work is going on to make OpenStack ready for telco applications? And with that, I'm going to hand it over to Arun to talk about the first part. I guess my mic is on — is it OK?
So as Tarek mentioned earlier, the goal is to try to identify some of the common use cases that our customers are telling us about, and to work with the community to see how we can bring that into what could be a carrier-grade OpenStack. And we'll start with security. Going back to the earlier point — is OpenStack ready? Is OpenStack ready, in a number of areas, to face security challenges? Enterprises need to be aware of what's going on in open source; you need to consistently watch for new issues coming out. And the community does a very good job of ensuring that any new issue — the VENOM issue, for instance — gets the attention of the decision makers. There is a process to get the fix, there is a patching mechanism already provided — you can patch your OpenStack services at run time — and OpenStack provides a way to log these events. So there are a number of areas where OpenStack is already addressing security concerns. But going to a carrier-grade environment, what are some of the primary asks? We have put them into three large buckets: issues that impact the host, issues that impact the network — which forms the core of what a telco is — and issues that impact the virtualization layer. If you look at a classic deployment, you have a host, it runs a number of VMs, and over a network it connects to a bunch of different hosts. In essence, that's what a telco cloud does. So what are some of the challenges we're seeing in each of these domains? On the host side, today in the non-telco world — in the financial sector, for instance — people use Security-Enhanced Linux, or some variant of it, to make sure the host is protected. That allows you to harden the host and deploy applications in a much safer environment.
But OpenStack, for instance, hasn't really played well with SELinux. So how do we bring together the capabilities that OpenStack has and make them work well with an enhanced security platform such as SELinux? That's something customers are asking us. The second is the ability to have a full-fledged role-based access control mechanism. Keystone is doing a lot of good work around hierarchical tenants, the ability to have unique admins for each tenant, and so on, coming up in Keystone V3 and beyond. But in an environment where the telcos have already deployed a data store, already deployed a security engine, and are already using some kind of role-based access control, how can that flow into a native security system such as Keystone? I think that's a key challenge we're trying to address. Today, the OpenStack configuration files, for instance, are available as plain text files. And as much as they are protected by user, group, and owner permissions, some customers have come out and said: how do I encrypt my host so that whatever files I have — the configuration files, the data files — sit on a fully encrypted platform on which my OpenStack services can run? Again, these are issues that impact the environment at the host level. Moving on — the network, again, forms the core of what a telco is. It requires an intrusion detection system so that the environment is fenced; the number of threats faced by telcos these days is enormous. So the ability to have an integrated intrusion detection mechanism has been a consistent ask. Denial of service: being able to protect the endpoints — for instance, various products that HP has use HTTPS to protect the endpoints. But how can we ensure that all the services OpenStack already runs are protected? This forms part of what we're calling network security. And going forward, we have virtualization security.
Whatever runs in the virtualized plane needs to be protected as well. You've protected the host, you've protected the network that carries the data, and then you finally protect what we call the VNF layer, or the virtualized layer. Again — good or bad timing — the VENOM issue just came to the forefront, where every single instance of KVM is potentially under threat. So how do we address challenges at the virtualization layer? Only by addressing the host level, the network level, and the virtualization level would we be able to provide a fully secure environment for telcos to deploy into. And lastly, OpenStack in itself, in the way it addresses challenges, has a number of benefits — and on the flip side, those benefits can at times be the challenges of a community-driven effort. Any time there is a serious issue, whether in KVM or SSH or any other key component, the community is able to respond much faster. But by the same token, defects seem to get into OpenStack more easily because of the rapid pace at which the applications are developed. And because the information is out in the open all the time, there is the potential for an issue to be exposed before the fix is ready, thereby threatening the entire deployment. So the community needs to find a way to ensure defects don't get in, and if defects are exposed, that the solution comes as soon as possible. Gone are the days when open source meant ten different people working in ten different countries, each building some small component of an open source stack. Today, enterprises are contributing heavily to open source. And that's a boon on one side and a bane on the other. Because of the large engineering workforces these enterprises bring in, it's much easier for us to respond to challenges or build new functionality.
But by the same token, each large industry vendor has a specific direction they would like to take a service or a product in, and that causes issues, specifically in the security domain. Lastly, the ability to spin out individual services so that they can get focused attention. Today we're talking about Tap-as-a-Service; there have been talks about high availability as a service. Every individual requirement from a telco has the potential to be spun out into a service of its own — which is, again, a good thing, because it gets individual attention. But at the same time there's a challenge, because it becomes an island of its own, quite unaware of what's going on in the other domains. Is there duplication of work? Is there a consistency issue with another parallel project? These are some of the challenges open source needs to address so that it can build a security platform that carriers can easily adopt. With that, I'll pass it back to Tarek for his section. Thank you. After security, I wanted to talk a little bit about carrier-grade features. Do people know what carrier-grade means? Any show of hands? Of course. Thank you. And please, I apologize if I repeat a lot of things that are already ingrained in your minds. There are at least 1,000 definitions, right? Exactly. But that's the fun of working in open source and standards: there are people a lot more intelligent than I am who have looked at the problem and have a very thought-out way of addressing it. One of these is under the Linux Foundation, where there are specifications for Carrier Grade Linux — the latest one is version 5. Not many Linuxes out there — what's the plural of Linux, anyone know? — but very few Linux distributions actually meet the version 5 specification. And beyond that, we have not gone. What does a carrier-grade KVM mean? What does a carrier-grade OpenStack mean?
We have not gone there yet. But when we look at it, it essentially boils down to some differences that carrier or telco workloads have from enterprise workloads, and that we've got to be able to run. Richard reminded me that everyone coming in has been re-explaining what NFV is, so I'm not going to go into that at all. But here are some characteristics of NFV workloads and how they differ from IT workloads. An IT workload is about aggregation: I'm going to take a lot of resources, pool them together, and when I deploy something I don't care where it lands — I just want it to run. That doesn't work in networks. In networks, you need locality. If you have a service chain and the firewall is sitting here, the router is sitting there, and the optimizer is sitting somewhere else, you have a problem. And there's a flip side as well: sometimes you want locality — you want different components of a network service to sit together — but sometimes you want them separated, when you need to provide high availability and such. You need them next to each other but in different racks, or in the same rack but in different blade enclosures or servers. So those things are important. And when you deploy — if you look at the Nova filter scheduler right now, some very interesting filters were added just in the Kilo release. But for the most part, when the Nova scheduler looks at a request, it just says, find me the best host to run this on; there are not many placement constraints you're able to express. You can customize the filters to do something, and we'll touch on that in a couple of minutes. So there are some inherent differences in network workloads. And the sum of these capabilities — again, loosely grouping them into carrier-grade capabilities — is that you want fault detection, and the reaction to faults, to be a lot faster than what upstream capabilities provide.
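The locality and affinity placement just described can be sketched as a custom scheduler filter. This is a standalone toy, not the real Nova `BaseHostFilter` API — the `rack` attribute and the hint names are invented purely for illustration of the idea.

```python
# Simplified sketch of a locality-aware scheduler filter, loosely modeled
# on the shape of a Nova host filter. HostState, the `rack` attribute, and
# the hint keys are illustrative assumptions, not real Nova APIs.

class HostState:
    def __init__(self, name, rack):
        self.name = name
        self.rack = rack

class RackLocalityFilter:
    """Pass hosts that satisfy same-rack or different-rack placement hints."""

    def host_passes(self, host, hints):
        # hints looks like {'same_rack_as': 'rack1'} or
        # {'different_rack_from': 'rack1'}; an absent hint means "don't care".
        same = hints.get('same_rack_as')
        diff = hints.get('different_rack_from')
        if same is not None and host.rack != same:
            return False
        if diff is not None and host.rack == diff:
            return False
        return True

    def filter_all(self, hosts, hints):
        return [h for h in hosts if self.host_passes(h, hints)]

hosts = [HostState('cn1', 'rack1'), HostState('cn2', 'rack2')]
f = RackLocalityFilter()
# Service-chain locality: put this VM on the same rack as its peer.
print([h.name for h in f.filter_all(hosts, {'same_rack_as': 'rack2'})])        # ['cn2']
# Anti-affinity for HA: keep this VM off the peer's rack.
print([h.name for h in f.filter_all(hosts, {'different_rack_from': 'rack2'})])  # ['cn1']
```

In a real deployment the same logic would live in a Nova filter plugged into the scheduler's filter chain, with the hints carried in the boot request.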
And when I say upstream capabilities, I mean unmodified capabilities. If you look at it, I think one minute may be a generous estimate for fault detection. If there's a fault in a compute node or the host OS, yes, OpenStack — Horizon — will eventually update the status of the host to unreachable or down. But it's done on a polling mechanism, and people don't look to the Nova scheduler database for the real-time status of nodes. You want to know about a fault as soon as possible so you can act on it. Same thing with VMs: yes, OpenStack is able to detect VM failures — if the compute node goes down, the VMs go down, and if you go to the Horizon dashboard or use the API, they'll be marked as unreachable. But it takes time. And then performance: the vSwitch. We know that running a cloud without some kind of virtual switch is basically going back to the flat world of networking, where everyone sits on the same network — which doesn't work. You put in the virtualization layer to get the flexibility, but that comes at a cost. When we look at network workloads, you're getting just 1 to 2 Gbps out of a 10 gig pipe. And by the way, a lot of testing has been done on this — happy to share the test results if anyone is interested. But if you get 2 Gbps out of a 10 gig pipe with upstream OVS and unmodified, unoptimized KVM, that's amazing; normally you don't go beyond that. And that doesn't work — you want to get as close to line rate as possible. And by the way, to even get to 1.5 Gbps on a two-socket, 24-core server, you may be using as many as 18 to 20 cores. If you're using that much for switching, where are you going to run your VMs? There are options: you can bypass the switch, you can use SR-IOV, you can use PCI passthrough. But we want to be able to use a switch that can do this.
So we get the benefits of line speed, but with the flexibility of a switch. I'm not going to touch on all of these — you can read everything — but this is the slide I wanted to use to answer what we consider the definition of carrier grade in the network, telco, or NFV context. Essentially it falls into three buckets. The first is availability and reliability: you want the platform — the OpenStack control plane, as well as the data plane where the VMs are running, the tenant space — to be capable of going north of five-nines availability. Now, if you look at what OpenStack has done with high availability, there's a high-availability best-practices document out there that provides guidance on how you should architect it — and by the way, many of us have contributed heavily to it. I think if you follow the HA guidelines, you can get up to four nines. Getting five or six nines in the control plane is very hard. But that is what's needed, because telcos are unlike enterprises here — and I'd like your validation on this — what carriers have told us is that if you don't have visibility into the service, you consider the service to be down. That's very different from enterprise. In enterprise, if the visibility — meaning monitoring — is down, no one considers monitoring mission-critical; we consider the actual service mission-critical. If monitoring is down, we're not going to raise a category-one alarm. In telco, it's different: the platform, as well as the services you're running on it, need to be highly available. And you cannot get that without self-healing and the quick detection and recovery I talked about. And something like live VM migration helps you remediate issues once you find them. So these things become very important in availability and reliability.
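To put numbers on "four nines versus five or six nines": each extra nine cuts the allowed downtime per year by a factor of ten. A quick sketch of the arithmetic:

```python
# Allowed downtime per year for a given number of "nines" of availability.
# Four nines allows ~53 minutes/year; five nines only ~5 minutes/year.
MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years

def downtime_minutes_per_year(nines):
    availability = 1 - 10 ** (-nines)       # e.g. 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} min/year")
```

This is why sub-second fault detection matters: a single one-minute polling cycle already spends a five-nines budget several times over.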
Now performance — I touched on this a little earlier. Performance as close to line rate as possible is part of what we consider a carrier-grade platform. So you not only want an accelerated virtual switch, you want to make sure that if a link failure happens, the link state is propagated upstream so that other components are aware of it. And then, also under performance: for carriers, when you're making a voice call or watching a video, what you want is predictable performance. Virtualization inherently tries to do the best job it can to provide the same amount of compute time, but if you look at the variance between the average case and the worst case with an upstream KVM, the delta between worst and average is something like 40 times — which is just not livable. What you want is for the average latency to come down and for the variance to come down, so that you have predictable performance. We consider that part of carrier grade too. And the third bucket is that a lot of manageability is required. In-service upgrade is the norm for carriers and operators; you've got to be able to do the same for the platform that now runs all the network functions. And then — Arun touched on a lot of the security capabilities — to solve some of the performance and availability problems, we've got to be able to share memory and not have to initialize memory every time. For folks who have worked with kernel-level stuff: if you have a fast disk and fast memory, the longest part of a VM coming up is grabbing its memory and saying, this is mine, no one else touch it. You want that to be faster. But to make it faster, you've got to use huge pages and shared memory — which raises the problem that with shared memory, this VM and that VM can both use it.
That has security constraints. So you've got to balance the security requirements, put hardening around it, and still provide the manageability carriers are looking for. All of these things together — this is what we at HP feel is carrier grade, and we're trying to get a common understanding so that as a community we can call it carrier grade. Now, I know you each responded with 20 different definitions of carrier grade — did this fit at least one of them? We have 19 others to talk about. So from this, what we started looking at is: for us to provide carrier-grade capabilities, how do we go about it? And I think the reason we're all here is that we've already internalized that open source is the way to provide the innovation and collaboration that's required. So you start with open source components — a common base everywhere — and use as many capabilities of the underlying components as you can. You start with Carrier Grade Linux; there's an open specification for it. You start with a carrier-grade KVM — the Carrier Grade Linux version 5 specification I mentioned does talk about some carrier-grade capabilities in KVM. Then OpenStack, obviously, as the other standards-based component. These form the basis for starting to build a carrier-grade platform. After that, since we want to provide predictable performance, you need to add real-time extensions, especially to KVM. Real-time extensions — for folks who have dabbled in Linux — are again open source, standards-based code that's available; you can make almost any Linux real-time, and there are patches you need to add to KVM to enable it. What this lets you do is create a preemptible kernel. And what that means is — when I'm doing some work, let me explain.
If you remember from your kernel class — mine was 25 years ago — there's something called non-maskable and maskable interrupts. I/O is non-maskable: when the I/O wants service, the CPU has to stop what it's doing and handle it. That takes a lot of time. You've got to be able to make some tasks uninterruptible — to say, while I'm doing this, no one come and talk to me. And you assign a few cores to the things you care about. What you care about are the operating system tasks — I don't want them wandering everywhere. And the virtual switching — I want to pin it, because, well, this is telco, and network latency is important. So that's where you start. Once these are added, you have a platform that can provide some level of predictable performance, and your jitter will be a lot lower. Then, to move to the next level, you need to provide some kind of accelerated vSwitch. Now, if you look at this discussion, it's almost Betamax-versus-VHS camps. Some people say, I don't want to use vSwitches, because a vSwitch can never match what I get with PCI passthrough or SR-IOV, which is becoming popular. The other camp says, no, it's all about abstraction — I want the abstraction. I think there's a place for both. If you have 100 gig switches and 100 gig NICs, which are not too far away, and all you want is packet passing — no processing, a packet comes in one place and goes out another — then no matter how efficient a switch you bring in, 100 or 200 gig is going to throttle pretty much any accelerated-switch technology today.
But if you're talking about what most of us will end up using — 10 gig, which is the norm today — you can use Intel's DPDK, poll mode drivers, and a few other capabilities to get very close to line rate with a reasonable amount of CPU overhead, meaning two or three cores. In our tests, we were able to see 20 gig of throughput using only two cores. So if you pin one core for system tasks and two cores for virtual switching, you still have 23 cores left — no, my math is bad — 21 cores left for all the other tasks. So there's still a lot you're able to do. Now, once you've done the plumbing — all this work in the host operating system — you can start leveraging it to deploy your VMs, or VNFs, that take advantage of it. But depending on how much you're willing to touch your VM — or how much the different VNF vendors, the virtual network function vendors providing these applications, are willing to touch theirs — your gain will differ. Broadly, right now, there are four performance levels. The first level starts with: I don't want to touch my VM at all. By the way, the VM I was running isn't even Linux, so even if I wanted to change it, I couldn't — a lot of these virtual routers run on each vendor's own OS: Cisco's on IOS, HP's on Comware, Juniper's on Junos. So even if you wanted to, it's not as easy as patching Linux. And carriers and operators can't go make those changes themselves — the vendor needs to. Absolutely. But even running unmodified VMs, you'll see some performance gain. It may be only 2x — peanuts, you might say, just double the earlier speed — but going from 1.5 Gbps to maybe 3 Gbps is still very good.
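The core-budget arithmetic above is simple but worth writing down: on a two-socket, 24-core server, reserving cores for host housekeeping and the accelerated vSwitch leaves the remainder for VNF workloads.

```python
# Core budgeting on a 24-core compute node, using the numbers from the talk.
total_cores = 24
system_cores = 1    # pinned for host OS / housekeeping tasks
vswitch_cores = 2   # DPDK-accelerated vSwitch (the quoted test: ~20 Gbps on 2 cores)

vnf_cores = total_cores - system_cores - vswitch_cores
print(vnf_cores)  # 21 cores left for VNF workloads

# Contrast with the unoptimized case mentioned earlier, where switching
# alone could consume 18-20 cores just to reach ~1.5 Gbps:
unoptimized_vnf_cores = total_cores - system_cores - 19
print(unoptimized_vnf_cores)  # only 4 cores left for actual workloads
```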
The next level is where you use some kind of custom virtual NIC driver in the guest kernel — a kernel loadable module. It's like when Intel, or Mellanox, or Broadcom comes out with a new NIC. I know the Mellanox guys were talking about 40 gig — or 100 gig NICs? 100 gig, yes. Even with physical hardware, you can't use a 100 gig NIC without their driver. Similarly, this virtual NIC I'm talking about is a new driver, which is the easiest change to make. Our tests show that if you use a kernel loadable module for this, you may get a 6, 7, 8 percent gain in speed. But the real benefits start coming when you use DPDK — user-mode drivers — in the guest as well. It sounds like a lot, but for the programmers here, it's literally — you've seen the DPDK SDK — three lines you need to add. You need to add them in a few places, but it's really three or four lines, so it's not a big deal. Now, I know a techie's "big deal" is different from what the business and support folks think. But my point is that the changes are not that big, and the advantages are huge. And if you use DPDK, and a carrier-grade Linux in the guest as well, with a poll mode driver — another term, "poll mode": normally, when a packet comes in, it has to interrupt the processor: hey, I'm here, do something with me. That takes time. A poll mode driver means the CPU is polling the NIC directly: got something for me? Got something for me? As soon as it does, it immediately picks the packet up and processes it. The negative of a poll mode driver is that it uses all of the processor's capacity — it keeps running.
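The interrupt-versus-poll-mode difference just described can be sketched as a busy-poll loop draining a NIC receive ring: instead of waiting for the device to signal, the core keeps asking "got something for me?" This is a conceptual sketch only, not DPDK code — `FakeNic` and the hop budget are invented for illustration.

```python
from collections import deque

# Conceptual poll-mode receive loop: the CPU busy-polls the NIC's RX ring
# instead of sleeping until an interrupt fires. The trade-off from the talk:
# minimal latency, but the polling core runs at 100% even when idle.
class FakeNic:
    def __init__(self, packets):
        self.rx_ring = deque(packets)

    def poll(self):
        # Non-blocking: return a packet if one is queued, else None.
        return self.rx_ring.popleft() if self.rx_ring else None

def poll_mode_rx(nic, budget):
    """Drain up to `budget` packets in one polling pass."""
    received = []
    for _ in range(budget):
        pkt = nic.poll()
        if pkt is None:
            break  # ring empty; a real PMD would immediately poll again
        received.append(pkt)
    return received

nic = FakeNic(["p1", "p2", "p3"])
print(poll_mode_rx(nic, budget=32))  # ['p1', 'p2', 'p3']
```

In real DPDK the equivalent pass is a receive-burst call in a tight loop on a dedicated, pinned core — which is exactly why the poll mode driver consumes a whole core.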
But the positive is that with a poll mode driver, a carrier-grade Linux, and all the underlying plumbing, you can now go from 1.5 Gbps to close to 10 Gbps on a 10 gig pipe. In our tests we were able to see — and I completely understand these are what's called hero tests: doing it in a lab doesn't mean it'll work that way in your environment — but even as a hero test, getting close to 9.0 Gbps is pretty damn good. So once you do all of these things, you're able to get there. And by the way, that was just addressing performance and jitter. Now that you've done all of this, you need to add some carrier-grade management and middleware capabilities. Some of these are done, and some will need to be done, by adding extensions to existing OpenStack services — there's no other way of doing it. And by the way, there's a reason OpenStack has a pluggable architecture: it expects different vendors and suppliers to extend its capabilities. So you use OpenStack-approved ways of extending and providing these capabilities. What good is CPU core pinning if you can't specify, for this VM, pin it to such-and-such cores? But in addition, the middleware needs to add things OpenStack so far doesn't do anything about. One of them is high availability. For folks who have used high-availability frameworks — Pacemaker, for instance — I think the minimum polling interval you can set in Pacemaker is something like 30 seconds, and Keepalived does it some other way. If you use the traditional approaches, you can never get sub-second detection and recovery. So you've got to do something outside of them: you need to provide a framework the control plane can use, and then make that framework available to your VNFs as well.
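On the "specify, for this VM, pin it to such-and-such cores" point: in OpenStack this kind of request is typically expressed through flavor extra specs. `hw:cpu_policy` and `hw:mem_page_size` are, to my knowledge, real Nova extra-spec keys from around this era, but treat the exact names and values as assumptions and verify them against your release's documentation. A small helper that builds such a spec dict:

```python
# Sketch: build Nova flavor extra specs for a pinned, huge-pages-backed VNF.
# The keys hw:cpu_policy / hw:mem_page_size exist in Nova around the Kilo
# timeframe, but check your release's docs before relying on them.
def carrier_grade_extra_specs(pinned=True, huge_pages=True):
    specs = {}
    if pinned:
        specs['hw:cpu_policy'] = 'dedicated'   # pin each vCPU to a host core
    if huge_pages:
        specs['hw:mem_page_size'] = 'large'    # back guest RAM with huge pages
    return specs

print(carrier_grade_extra_specs())
# With python-novaclient you would then apply these to a flavor, roughly:
#   flavor.set_keys(carrier_grade_extra_specs())
# and any VM booted from that flavor gets the pinned, huge-pages treatment.
```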
So for your VNF, if you want the VNF to react, you can use the SDK to hook into the framework and make decisions based on it. Again, this requires changes to your applications, but the platform provides a way of doing it. And the last point is that you don't want to be tied to one specific flavor of Linux; you should be able to do this with pretty much any upstream Linux out there. The requirement is still Linux, though — it hasn't gone to Windows or other things, and not many people are asking for that; I don't know why. With that, I wanted to hand it back to my colleague Arun — and we want to keep some time at the end for questions — to talk about the third part. Oh boy, we're near the end already. So: we've talked about how important security is for carriers, and about why a carrier-grade platform is important. We also want to talk about how a carrier-grade distributed environment can be deployed — again, a key ask from carriers. Is OpenStack ready? Yes, OpenStack is already helping enterprises by supporting various mechanisms for large deployments. You can use regions; you can use host aggregates, availability zones, and cells. In essence, you can continue to grow your control plane to support an ever-expanding compute base. But that isn't adequate for what carriers are asking of us. A common deployment usually involves a number of data centers running local functions, with one or more data centers running global functions that manage the whole estate. And compare the failure modes: when enterprise IT fails, thousands of people are affected. If one data center goes out at a carrier, millions of people are affected. You can't update your Facebook, you can't watch that next big movie that's out — you're pissed at your carrier. So it's very important for them to be able to support a full multi-data-center deployment.
An example of how multiple data centers are used — the classic use case we picked is the customer edge. There are various ways you can deploy it, but each of them requires you to deploy a cloud operating system across multiple data centers that are geographically apart, completely different from how traditional IT has been run. So what are customers telling us when it comes to a carrier-grade distributed environment? Again, we're following the common theme of identifying some of the top asks. The first is service chaining. There are no more VMs, no more VNF components, no more VNFs — it's the service that matters. You could deploy VMs in ten different ways, but the key is being able to deploy a service end-to-end across these multiple data centers. Intent-driven traffic steering — we've heard about this a lot. Traditionally, a number of the elements that govern traffic are set during the instantiation of the VM itself: we estimate how the VM is going to perform and set the appropriate rules in our controllers. But that has to evolve: the intelligence that controls the network needs to move closer to the network and be applied at runtime, not only at instantiation. Lastly, the complete disaggregation of components, with common interfaces to access each one. What do we mean by disaggregated components? Today, you buy a physical network function that contains a number of different services embedded in a single appliance. Customers are looking to break up even the core networking functions so they can pick and choose — disaggregated at that level, with sufficiently well-defined northbound and southbound interfaces to integrate into their existing environment. So you, as a customer with an orchestrator of your own, should be able to bring in a mixture of these components and have a common language to talk to each one of them.
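The "it's the service that matters" point — a chain of VNFs deployed end-to-end across data centers — can be sketched as a simple data model. All the names here (the VNF types, the data-center labels, the chain name) are illustrative, not from any real orchestrator:

```python
from dataclasses import dataclass

# Minimal sketch of a service chain spanning data centers: an ordered
# sequence of (VNF, data center) hops, traversed in order by the traffic.
@dataclass
class Hop:
    vnf: str          # e.g. 'firewall', 'router', 'wan-optimizer'
    datacenter: str   # where this function is placed

class ServiceChain:
    def __init__(self, name, hops):
        self.name = name
        self.hops = hops

    def path(self):
        """Ordered traversal of the chain, as the traffic sees it."""
        return [(h.vnf, h.datacenter) for h in self.hops]

    def datacenters(self):
        """Every data center this one service touches."""
        return {h.datacenter for h in self.hops}

chain = ServiceChain("edge-service-gold", [
    Hop("firewall", "edge-dc"),
    Hop("router", "edge-dc"),
    Hop("wan-optimizer", "core-dc"),
])
print(chain.path())
print(chain.datacenters())  # the single service spans two data centers
```

An orchestrator deploying this would need exactly what the talk asks for: placement across geographically separate sites, plus common northbound/southbound interfaces to each disaggregated component.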
In the interest of time, I'm going to pass it back to Tarek. No, let's just go to the next slide. There's something we're asking for your support on. For the operators in here — I'd say there are at least a couple — we would love to hear a prioritized list from you. And for contributors, which a number of us are, we would love to build on that and try to move OpenStack, among the other things it's trying to optimize, toward providing an environment that's ready for telcos. And we'd love for as many of you who care about this to participate in the telco working group, whose meeting is held every Wednesday at, I believe, 1400 UTC — whatever that time comes out to locally. And I think we have about 30 seconds, unless they start pushing us out — time for a question or two, if there are any. So, there's the Oslo project within OpenStack. And yes, we at HP were until recently leading that — our PTL was from HP; we're not now, others are running it. But yes, absolutely. And OPNFV as well, which is an interest group trying to push the telco agenda in OpenStack. So both of them, yes. Any other questions? What I see is that there are some innovative vendors doing this. Some have gone public with their solutions, some have not — some startups. So newer vendors are absolutely moving in the direction of disaggregating and disrupting the market. I wish I could say more. There's a reality in the industry today — unless there's an operator here who feels otherwise — which is that most organizations are taking their existing monolithic applications. No one's going to deploy 5G or 4G solutions on a no-name, new way of doing IMS. You're going to use some of the established players. But it's open for innovation; that's why some smaller, software-only companies have come in and are providing this. We'll see more disaggregation happening over time.
But in the near term, quite likely, just being able to disaggregate at the VM level is a huge thing for networks. It's a new way of running the network that operators need to learn. They have to become system integrators now, as opposed to being more like a venture capitalist — we'll give this money to this person to come and run it for us. Is that a question? A comment? Well, folks, thank you very much. I know we're a little over time. Thanks again.