See if we can get going then. All right, welcome. Welcome to the NFV session. I will do this presentation. I'm Jens Söderström, and I will have the help of Alan Kavanagh here to present NFV and how OpenStack comes into that. So I'm the suit guy: I'll do the introduction, and then Alan will come and do the software stuff, right? So, this one to run. There we go. Ericsson. I bet not all of you know what we do. Many people might think that we still make telephones, right? We used to make telephones. We made telephones for 134 years. We stopped making telephones a couple of years ago; we did some Android telephones together with Sony, and Sony has taken that over. We do networks. That's what we do. So we have a very big market in mobile networks. About 40% of all mobile traffic goes through networks that we have built. That goes for 2G and 3G; for 4G it's even more, 50%. So we're huge in mobile networks. On top of that come, of course, the management and OSS systems for managing those networks, and services to run those networks. And lately we are adding IPTV and media as another number-one area. So these are areas we have developed in the past; we will come to how this plays out for cloud as well. Ericsson is actually, if you look at the bottom, a big software house. Depending on how you count, we are number five in terms of how much software we crank out, how much revenue we get from software, or how many software engineers we have in our R&D. Of course, those software engineers have been working to develop our native systems, proprietary software. Now we are turning them over to work much more in a horizontal and open-source world. I said that we sell a lot of products, but we also run a lot of networks. About 1 billion subscribers worldwide are served through our managed-services network contracts. So basically, if you like, we are an operator with 1 billion subscribers.
Although we are not fronting with our name; it's the operator doing that. So we have huge operations in this company, a lot of people, and a reach into most countries in the world: 180 countries with our own presence. So it's a global operation. Hopefully, with this short introduction slide, you know a little bit more now: Ericsson does not make telephones, though we have done. We do the networks. Okay, so what have we been up to then? We are a startup at the age of 138 years. The first 100 years we spent building out fixed-line telephony. It took about 100 years to give 1 billion households a telephone connection. Then it took another 10 to 20 years, as we invented mobile telephony, to give sort of every user, 5 billion people, their own telephone connection. So you see the ramp-up there, up to 5 billion. We were actually quite surprised when the number of subscriptions overtook the number of people on this planet, but it has: there are more mobile subscriptions than people. And now, as we come into the next phase, it's about connecting all the machines, machine-to-machine communication. We have projected about 50 billion connections by the year 2020. This was a projection we did a couple of years ago, actually, and it is looking really good. So the Networked Society is, for us: what can be connected will be connected; what has the benefit of being connected to the internet will be connected to the internet. And moving on, then, what we really see as the main ingredients in this Networked Society are, of course, mobile and wireless devices connected over a broadband wireless access network, LTE and 5G or whatever comes after that. But then, of course, bringing it into the cloud. The cloud is the third cornerstone in this transformation, and that's why we are here. That's why we're investing heavily now in making sure the cloud will also cater for the telecom users, the telecom applications, and the public networks.
So when we talk to our dear customers — we primarily cater for the operators of the world — what kind of challenges do they have that we need to help them with? Look at the bottom first: efficiency, effectiveness, radically simplify the network. This basically means having fewer people handling the networks even though the bandwidth increases and the number of endpoints increases. The NFV initiative was launched a year and a half ago — Robert Waldforum and Bruno Jacobfeuerborn talked about this — and it is really about efficiency and effectiveness, making sure you can run all these networks much more cost-efficiently. That's one big thing our carriers care about, and we're helping them with it. In the middle there, of course, is agility: getting new software and new applications out to the end users more rapidly. It can take anything from nine months up to two and a half years for a new application to come in the traditional way into an operator's network. That has to come down radically, and virtualization, the cloud, and a horizontal infrastructure make this possible. We will talk more about that. And at the top, innovation and superior performance. One thing that really stands out for carriers is that they deliver five-nines reliability: you can always make your telephone call. Five nines means about five minutes per year, maximum, that the service can be down. Superior performance can also mean quality-of-service guarantees through your piece of the network, and so on. So all three of these are the change drivers, and a big reason why you see such big initiatives from operators like AT&T and Verizon driving towards this Networked Society using cloud. For us, it's a new paradigm for how we will sell, build, and roll out products. We'll still have the three domains within an operator.
There's the telecom private cloud, where they have the typical telecom functions; the IT side, where they have OSS, media, and other IT functions; and the enterprise cloud, where they can sell IaaS and other software as a service directly to their enterprise customers. On the right-hand side they have not been tremendously successful so far, but things are really happening there as well. All of these network functions — the OSS, BSS, and the enterprise functions — should be supportable from the same type of infrastructure, the same type of technology: cloud infrastructure and cloud management catering for real-time performance, supported by software-defined networking to tie all the endpoints together in an efficient manner. This will also require professional services in the integration phase, just as any complex system does. We are building products, of course, and as any vendor does, we do the applications to be virtualized: the general Ericsson telecom applications, the IMS nodes, the packet core nodes, the OSS nodes, all kinds of nodes. They will be virtualized so they can run on a general-purpose cloud. But we also build the system as such: the cloud execution environment based on OpenStack — that's our statement — the cloud manager to deploy all these virtual network functions on the cloud, and a platform-as-a-service function as well. So we will have a full-stack offering in this space, but we will be able to sell these components independently if needed. This will cater for telecom-grade, real-time operation, both centralized cloud in big data centers and also the ability to run workloads out in central offices, or even down at enterprise sites, in an efficient manner. I think I've covered most of it there. This is my last slide before handing over to Alan. We have significant capability engagements here.
We have about 1,000 cloud developers developing our cloud system and the virtual network functions, and we are adding more — there are openings within Ericsson; you can find out more at our booth. We are, for example, also investing a lot in our own data centers. We are consolidating all of our server parks into three major global ICT centers worldwide — one in Canada, two in Sweden — investing one billion to make that efficient. And there we will use our own technologies as well. We are adding capabilities to the company through acquisitions and partnering. We have bought ConceptWave, Telcordia, and Mediaroom from Microsoft, but we have also partnered with Ciena, Mirantis, etc., to make sure we have full capabilities. I want to end with the box on the lower right, the industry ecosystem. Being a global player that has built sort of the fixed-line telephony for one billion users and the mobile telephony networks for five billion users, we always want to make sure we get global standards, to cater for scale and for global adoption of the technologies. In the past that has been through standardization forums like 3GPP; going forward into cloud, it will be through this kind of community — OpenStack, etc. — that caters for this adoption. So we are part of the ETSI NFV forum; we are part of the DMTF for the OVF format; of course there is OpenStack, which is why we are here; OpenDaylight for the SDN controller; and we have been part of the ONF for OpenFlow from the start. The latest addition is Cloud Foundry for platform as a service — we joined there about a month ago. So we are really committed to the ecosystem and to these communities. Today we are here for OpenStack, talking about how OpenStack needs to improve to handle the telecom applications and how the telecom applications will work on OpenStack. And to help me with that, I have Alan, who has been engaged in OpenStack basically since it started — right, Alan?
Okay, hi. My name is Alan Kavanagh. I'm based in Canada, where one of the data centers we're building — what we call the global ICT centers — is actually going to be located. So I'm going to talk to you about some of the features, requirements, and functionality that telecom applications and telecom vendors like ourselves need to have in OpenStack before we can deploy OpenStack in a real production environment. Today, we think we're still somewhat far away from that. This is not an exhaustive shopping list; I'm just going to put a couple of features on the table, and if anybody wants to grab me after the session, feel free — I'm around all week. So I'm going to go through a list of some of them. If I can... yeah. Okay. First one: motherhood and apple pie. Why do we really want to do NFV? Actually, we've been doing this for quite some time. Some of our nodes already run on x86 architectures — our CDN and transparent caching, for example, have been running on x86 infrastructure for a long time. So NFV for us is not really a new paradigm and not a new mindset. The part that is new, and a little more challenging, is fulfilling some of the NFV requirements within the OpenStack API pluggable framework. That is really the challenge we see, because we see a lot of features missing from OpenStack for us to provision, deploy, configure, manage, and onboard our telco applications. It's miles away from where we really want it to be. So what we're here for today is to give you examples of some of the feature sets that we're upstreaming — writing blueprints, committing code — to OpenStack to help achieve that.
If you remember, the classical example is basically what we want to decouple in the telco world: we build a dedicated appliance, with software custom-built for that appliance. That's typical for a BNG, a PDN gateway, a core router — pretty much all of the telco nodes and applications today. So, as Jan was saying, we're now endorsing NFV and making sure all our applications will run in a virtualized VNF model. And we're going to use OpenStack to configure, provision, deploy, and onboard those applications. So we've moved away from the traditional locked-in vendor appliance boxes and made sure that the software can run on any x86 architecture. And read between the lines here: in a heterogeneous data center, not all of the blades will be the same. I'm going to talk a little bit about that, because that's the interesting part that we want people to understand, and where we want help to get it supported in OpenStack. Because we have really different requirements in the telco world than people have in the IS/IT environment. When packets are lost in IS/IT — say, for an Apache server — it's good enough; people will just refresh the browser. But people are not going to refresh their phone when they're making real voice calls. So we have really different paradigms, requirements, and hard limits that we need OpenStack to endorse. There are three main types of technology that we really want to talk about. The cloud itself, right — what's the cloud really for? Flexible deployments, scaling up, scaling down, elasticity of the application. A lot of the telco applications today actually already support that.
So it's not a new feature that we want to build; we just want to make sure those applications can be onboarded and make use of the existing scaling mechanisms that OpenStack already provides, as an example. If I talk about NFV, one of the main parts we really want to focus on is relocating VNFs for network efficiency. What does that really mean? It's not just about power consumption; it's not just about powering up a VM and moving it to another data center. It's also about making sure that the VNF is placed more intelligently, based on its requirements. One example could be transparent caching. With transparent caching, you want to push it as close to the end user as possible. So, in a distributed data center, you want to be able to move the VNF container around based on the hit rate you're getting in the cache, or based on utilization and load. You could do that ahead of time, if you were intelligent enough to take those decisions, or you could do it in a live, runtime execution environment. But OpenStack doesn't have the features to take those intelligent decisions. That's the problem we see. And if we talk about SDN: as Jan mentioned, the OpenDaylight controller. OpenDaylight is one of the bodies that we are investing heavily in. We are building an SDN platform using open source as the new standard. And that's one thing I really want to emphasize to everybody today: we have been doing standardized technologies for years — 3GPP, IETF, Broadband Forum, ETSI, OMA. None of this is new. What is new is that open source is becoming the new reference for standards in some of these technologies, like SDN.
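The placement decision described here — moving a caching VNF toward the users based on observed hit rate — is exactly the kind of logic OpenStack lacks today, so an external orchestrator would have to make it and then drive Nova. A minimal sketch of that decision, where the site names, fields, and threshold are all illustrative assumptions rather than any OpenStack API:

```python
# Sketch: choosing where a transparent-cache VNF should live.
# Prefer edge sites (lowest latency) whose observed cache hit rate
# justifies keeping a cache there; otherwise fall back to the
# central data center. All names/values are hypothetical.

def pick_cache_site(sites, min_hit_rate=0.6):
    candidates = [s for s in sites if s["hit_rate"] >= min_hit_rate]
    if not candidates:
        return "central-dc"            # no edge site earns its cache
    return min(candidates, key=lambda s: s["latency_ms"])["name"]

sites = [
    {"name": "edge-1",     "latency_ms": 5,  "hit_rate": 0.72},
    {"name": "edge-2",     "latency_ms": 8,  "hit_rate": 0.40},
    {"name": "regional-1", "latency_ms": 25, "hit_rate": 0.81},
]
print(pick_cache_site(sites))  # edge-1: above threshold and closest to users
```

In a real deployment the orchestrator would re-run this periodically against live telemetry and trigger a migration when the answer changes.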
One of the parts that we're leading, that we're really endorsing, and where we actually have products available today, is service chaining. This is really interesting, because typically, when operators deploy applications or services behind the mobile packet core gateways today, they're statically chained. What SDN allows us to do is reconfigure them, either during runtime or before runtime, and remove the fact that when we want to deploy a firewall or a load balancer, it has to be a specific box: you place an order, it takes four to six weeks to arrive, then you cable it, install it, test it, and only then put it into the system. This is what all three of these technologies give us. So what we're really saying is that this reduces our time to market for launching existing and new applications in the telco cloud. So why is Ericsson believing in and backing OpenStack? The reason is pretty simple. Like I said, we see that open source is becoming the new standard; that's very clear to us as a company. We're here, we're pushing more and more people in, and we're actively recruiting — let me make a statement here: anybody who is an innovative, good software developer and wants to come work for us, please come and talk to us. We'd love to talk to you and recruit you. We believe in the power of the OpenStack community. We believe that numbers count — numbers really do. Yes, sometimes we get into disagreements: people don't agree because we come from the telco world and not the IS/IT world, and people with IS/IT requirements don't agree with the telco requirements. But one of the things we're trying to do is marry the two together, in OpenStack, right?
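The point about service chaining is that the chain becomes data instead of cabling: inserting a firewall or load balancer is an edit to an ordered list, not a six-week hardware order. A toy sketch of that idea (function names and VNF labels are made up; a real SDN controller would translate the list into flow rules):

```python
# Sketch: an SDN service chain as an ordered list of network functions.
# Reconfiguring the chain is a data operation, which is what removes
# the order-cable-install-test cycle for a new appliance.

def build_chain(*functions):
    return list(functions)

def insert_function(chain, new_fn, before):
    """Insert a VNF into an existing chain in front of `before`."""
    idx = chain.index(before)
    return chain[:idx] + [new_fn] + chain[idx:]

chain = build_chain("pgw", "firewall", "nat")
chain = insert_function(chain, "load-balancer", before="nat")
print(chain)  # ['pgw', 'firewall', 'load-balancer', 'nat']
```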
And that's probably the friction — the chalk and cheese — that I've seen in OpenStack for the last three years: we have been coming in as a telco company, as have others, trying to push requirements, but people treat it as if we need to do things the IS/IT way. We actually can't; that's the thing. So we need to come together as a community, leverage the existing projects we have inside OpenStack, and drive it as an industry standard: one cloud distribution for enterprise and telco applications. What we would not like to see as a company is two distributions. We don't want one for managing the telco applications and another, within the same data center, for managing enterprise applications. That kind of doesn't make sense, right? So what we're trying to do is come inside the OpenStack community and make sure we can leverage, push, and get as many of those feature requirements as possible endorsed in OpenStack itself, so we have one distribution that can provide telco and enterprise application onboarding and configuration. A classical example: there are different layers, as we know, within OpenStack. One of the things we probably need to get away from is looking to OpenStack to solve all the problems. It won't. It's an API framework, a pluggable framework. But what we shouldn't do is reinvent the wheel. What do I mean by that? If we take a look at OpenStack today, the classical example is compute, network, storage. Fault management and performance management — it kind of doesn't have those parts, right? So should we solve those in OpenStack, or do we need to do them outside? Actually, we kind of need to do both: we need fault management and performance management for the resources we're maintaining in the OpenStack layer.
But on the physical layer, we can make a conscious decision whether we want OpenStack to solve that, or whether we want to expose that information up to northbound services — an OSS/BSS system, a cloud service manager — to take care of the orchestration and the onboarding of the applications through OpenStack. So, to give you a quick flavour of some of the examples I want to walk through, let's take resource allocation and optimization. What exactly do I mean by that? If you have a distributed data center, you want to make sure that when your VNF — your telco application or enterprise application — is instantiated on a blade, whether virtualized or bare metal, there is more information that we need to know ahead of time. What do I mean by that? In some cases there are telco applications that work really well — that have deterministic values — when they run within a given parameter set. That might mean: we run with this specific PCI device, with this specific driver, and when we get that specific configuration, this is the SLA and these are the deterministic values we can actually guarantee. Networking. We're really excited about Neutron, but I think Neutron still has a huge problem. Beyond not being able to do VPN orchestration — which we think OpenDaylight will largely solve for us through the ML2 plugin for OpenDaylight — there are other missing features that we need to address. One example: on the physical layer, telco applications typically need to have no single point of failure. The reason is regulatory requirements.
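The "deterministic only within this parameter set" idea can be made concrete as a requirements match: the SLA is only guaranteed on a host that reports the exact PCI device, driver family, and configuration the VNF was qualified against. This is a hypothetical sketch — the keys below are illustrative, not Nova scheduler fields, though some of it loosely maps onto flavor extra specs such as PCI passthrough aliases:

```python
# Sketch: matching a VNF's qualified hardware profile against what a
# compute host reports. The SLA is only deterministic on hosts that
# satisfy every requirement. All field names are hypothetical.

vnf_requirements = {"pci_device": "intel-82599", "driver": "ixgbe", "hugepages": True}

def host_satisfies(host, req):
    return (host.get("pci_device") == req["pci_device"]
            and host.get("driver", "").startswith(req["driver"])
            and host.get("hugepages") == req["hugepages"])

host_ok  = {"pci_device": "intel-82599", "driver": "ixgbe-3.19", "hugepages": True}
host_bad = {"pci_device": "generic-nic", "driver": "e1000",      "hugepages": False}
print(host_satisfies(host_ok, vnf_requirements))   # True: SLA can be guaranteed
print(host_satisfies(host_bad, vnf_requirements))  # False: must not be scheduled here
```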
Governments actually set requirements saying that when your telco network is provisioned and up and running, you have to make sure there is no physical single point of failure. And we kind of don't have that in OpenStack. We have no way of expressing: can you please guarantee that you actually have duplicated ToR (top-of-rack) switches, switch fabric, PCI cards, etc. — not just on the virtual overlay, but on the physical underlay as well. High availability. I think this is one of the parts where, within telco, we are really specific, and we really want to nail this point home. The big problem we have is that when we request a resource and the resource gets instantiated — regardless of whether it's networking, compute, or storage — we want to be able to say that when the system crashes, we have automatic recovery: the system is able to detect that there is a fault and make calls through an orchestration system sitting on top, which then asks: should I spin up another VM? Should I move my block storage? Should I migrate the VMs because I've got a link failure? Carrier-grade security. Now a really interesting topic. Yet again, remember what I said: when we deployed applications the old, traditional way, we had them on custom boxes, and that box was dedicated to that application, that service, and nothing else. Now that we're moving to a virtualized infrastructure, there are a couple of additional things to take care of: making sure that the host operating system, for example, and the firmware of the PCI card are authenticated — so that before we deploy the telco application on a specific compute blade, the integrity of the host and of the PCI card firmware has actually been validated.
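The "no physical single point of failure" guarantee reduces to a check OpenStack cannot express today: given the physical path behind each candidate host (rack, ToR switch, chassis, and so on), an active/standby pair must not share any element. A sketch of that check, assuming the orchestrator somehow has the physical topology — which is precisely the data OpenStack does not expose:

```python
# Sketch: rejecting a placement where active and standby share any
# physical element (rack, ToR switch, chassis...). The topology data
# is hypothetical; OpenStack has no API for it today.

def shares_spof(path_a, path_b):
    """True if the two hosts share any physical element."""
    return bool(set(path_a) & set(path_b))

active  = ["rack-1", "tor-1", "chassis-3"]
standby = ["rack-2", "tor-2", "chassis-7"]
bad     = ["rack-1", "tor-1", "chassis-4"]

print(shares_spof(active, standby))  # False: fully disjoint, acceptable
print(shares_spof(active, bad))      # True: same rack and ToR, rejected
```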
If it's not, we don't instantiate the application. The reason, yet again, is telco and government requirements. One example is lawful intercept. When we do lawful intercept, we have a whole different set of security requirements that we typically need to adhere to. In addition, when we talk about multi-tenancy, we typically just talk about VXLAN, NVGRE, IP-in-IP, or VLANs (802.1Q). That's just not good enough for some telco applications, because typically they also want to be able to say: yes, I can do my isolation, but I also want my traffic encrypted. Why would people do this? Lawful intercept is one classic example. They need to make sure that the traffic is not just isolated but also encrypted, so that nobody administering the OpenStack data center network is able to capture those packets, sniff them, and see the subscriber information being captured. So these are features we need to bring into OpenStack, right? And the other one is IPMI, for example. One of the other things we're really concerned about, when it comes to telco applications being deployed in the cloud, is out-of-band management — and IPMI is an out-of-band management interface. One of the things we need to address is making sure that the call from the IPMI controller to the BMC is validated and authenticated. We can already do that through shared secrets, certificates, et cetera, but that's something we need to support in Ironic for OpenStack, as an example. Service continuity. This is probably one of the big things that we have solved in the telco world over the last 50, 60 years, right?
The point is that when we get service degradation, when we get a failure in the network, the system continues to run. One example: when the Nova agent crashes, the VMs just keep running. No migration, no movement, nothing. You report an alarm, but you continue the service; you do not take an action on the service immediately. Assurance. This one in particular is one where we have really hard, detailed requirements. What does that mean? Assurance for us is all about SLA enforcement: being able to run within a deterministic value and time step. It means that when you deploy the application, you can say: here are the SLA boundaries that you have to guarantee me when my application is deployed in OpenStack and managed by OpenStack — you have to guarantee that this SLA will always be enforced. Auditing and troubleshooting. This is one that we have really struggled to find in OpenStack. Has everybody tried to instantiate a VM, and you continuously poll, continuously poll, continuously poll? And then, if the VM is not instantiated, what do you do? For us in the telco world, when we place an API call, it's not enough to get back HTTP 200 OK. We want to make sure that the service has actually been instantiated, and if it has not, we want some recovery mechanism to be triggered. So we need to add watchdogs pretty much throughout the whole system, so that when we see a fault, we can take a corrective action. It's not a human clicking refresh.
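The watchdog idea described here could be sketched as a loop around instantiation: the initial HTTP acknowledgement only means "request accepted", so the watchdog keeps checking real state and escalates to the orchestrator on ERROR or timeout instead of a human refreshing a screen. The `get_status` callable below stands in for a real Nova status query; the return values are illustrative:

```python
# Sketch: a watchdog around VM instantiation. HTTP 200/202 only
# acknowledges the request; this loop verifies the real state and
# escalates on failure or timeout. `get_status` stands in for a
# Nova API call; `sleep` is injectable so tests need not wait.
import time

def watch_instantiation(get_status, timeout_s=300, poll_s=5, sleep=time.sleep):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_status()
        if state == "ACTIVE":
            return "ok"
        if state == "ERROR":
            return "recover"   # orchestrator decides: respawn, relocate...
        sleep(poll_s)
    return "recover"           # timed out: treat as a fault, don't keep polling

# Simulated Nova: BUILD twice, then ACTIVE.
states = iter(["BUILD", "BUILD", "ACTIVE"])
print(watch_instantiation(lambda: next(states), sleep=lambda s: None))  # ok
```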
It means the orchestration system can take intelligent decisions about when a resource is to be instantiated and placed in OpenStack. So these are just a handful of items that we really want to push into OpenStack, so that when we deploy telco NFV-type applications, we can rely on OpenStack to fulfill all those requirements. Some of the contributions, then. To give you a quick example of some of the blueprints we are submitting at the Juno summit: one, for example, is a disk-cleaning agent for Ironic. Why do we feel this is important? I raised this at the last summit and a lot of people basically blanked it. The reason it's really important is that when you lease a blade as a service, and it has an onboard disk, you need to make sure, before you bring the disk back into service — into the pool, to be scheduled again to somebody else — that the tenant's information is completely erased. That matters because, again, we have legal and government requirements that specify: if the disk is going to be shared, brought back into a pool, and shared between enterprise and telco, you need to make sure there is no resident information left — it is completely erased. And then we actually need to check and validate that there is no information left. So there are small, fine-grained items that are probably not important in the IS/IT world but are very, very critical for us in the telco world. And then, if I look at Neutron in particular: one of the things we'd really like to see a lot more of, and that we're trying to bring in, is quality of service.
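The disk-cleaning requirement has two distinct steps — overwrite, then independently verify the overwrite before the disk re-enters the pool — and the second step is the part that tends to be skipped. A toy sketch of the essence (a real Ironic cleaning agent would operate on block devices with multiple passes; here a bytearray stands in for the disk):

```python
# Sketch: the disk-cleaning step proposed for Ironic, reduced to its
# essence. Overwrite the tenant's data, then *verify* the overwrite
# rather than trusting that it happened. A bytearray stands in for
# the physical disk; this is illustration, not a cleaning agent.

def scrub(disk):
    disk[:] = b"\x00" * len(disk)     # overwrite pass

def verify_scrubbed(disk):
    return all(b == 0 for b in disk)  # independent check before reuse

disk = bytearray(b"tenant-secret-data")
scrub(disk)
print(verify_scrubbed(disk))          # True: safe to return to the pool
```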
I know Sean has been pushing quality of service a lot, but it's in the experimental trunk. For telco applications, it's all about resiliency, and it's all about SLAs as well. We need to be able to set quality-of-service markings on a per-VM, per-tenant basis. We need this exposed on the Neutron API, so we can say: when this specific VNF application is deployed and provisioned, here is the quality of service that you need to deliver. Another example is pinning a VNF to a single core. In the enterprise world, you don't typically pin a specific VM or container to a specific core; you share it. But we want to deliver deterministic SLA values, and one way to do that is to make sure the VM, or the VNF container, is pinned to a specific CPU core. Some of the other examples, then: automatic compute device discovery and registration. Why is this important? Automation is really key in the telco world and also in the cloud, and this is where I think the two worlds can actually benefit each other. One of the things we would like to bring to OpenStack is that, when you have a heterogeneous data center, you want minimal to zero touch — no human intervention. The reason is error; it's error. Typically, when telco applications are provisioned and deployed, it's done through templating and orchestration by management systems. We want to be able to do that in Nova as well. So one of the things we want is for the compute devices — the Nova agents — to be able to register the PCI devices, the PCI device capabilities, the different drivers, and even the hypervisor and the types of virtual switches you have.
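The effect of CPU pinning can be sketched as a simple bookkeeping exercise: each vCPU gets an exclusive physical core taken out of the shared pool, which is what makes latency deterministic. In later Nova releases this intent is expressed through flavor extra specs such as `hw:cpu_policy=dedicated`; the assignment function below is purely illustrative, not Nova's scheduler:

```python
# Sketch: pinning a VNF's vCPUs to dedicated physical cores. Each
# vCPU gets an exclusive pCPU instead of a shared run queue, which
# is what makes the latency deterministic. Illustrative only.

flavor_extra_specs = {"hw:cpu_policy": "dedicated"}  # the Nova-style intent

def pin_vcpus(num_vcpus, free_pcpus):
    if len(free_pcpus) < num_vcpus:
        raise RuntimeError("not enough free physical cores for pinning")
    assignment = {vcpu: free_pcpus[vcpu] for vcpu in range(num_vcpus)}
    remaining = free_pcpus[num_vcpus:]   # pinned cores leave the shared pool
    return assignment, remaining

assignment, remaining = pin_vcpus(2, [4, 5, 6, 7])
print(assignment)  # {0: 4, 1: 5}
print(remaining)   # [6, 7]
```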
Jan mentioned the global ICT centers. I'm a great believer in eating our own dog food; I really think that's important. So one of the things we're driving inside Ericsson is that the OpenStack-based cloud execution environment we're building will be used for our own purposes inside all of our global ICT data centers. Jan, sorry. We are coming to the close. So, just summarizing quickly: I talked about the Networked Society, and how mobility, broadband, and cloud define this future. The network functions will be virtualized, and this will also require the cloud to be real time, to cater for the media plane applications, for example. On OpenStack: Alan told you, and you saw, what we need from the community and from OpenStack to make sure of this. We really bet on OpenStack to be the foundation for this real-time cloud. But we can't do it alone. We need you to help us bring these requirements into the main branch as well. That summarizes it, and I think we have one more, right? So, as Jan mentioned, we're recruiting people. Anybody who wants to talk to us, please send us your resume. You can scan the QR code, come to our booth, drop off your resume. We'd love to hear from you. We're a very innovative company — as Jan says, we don't make telephones; we do a lot more than make telephones, okay? And we have great weather in Stockholm, but we also recruit here in the US and other places as well. All right, that's it. Does anybody have any questions? I'm sure people have lots of questions. Yes, go ahead. So, I think you hit the nail on the head — you should probably repeat the question here. Right, so the gentleman is asking: what role does Heat play in OpenStack and NFV?
So to give you an answer: yes, but the problem is that the Heat project is very immature. It doesn't support all the full template configurations that we'd like to see; OVF is one example. We only see HOT templates; we don't see a lot of support for TOSCA either, right? And the other issue is that the auto-scaling mechanism it offers today is really only for enterprise-class applications, right? So those are stateless managed applications and objects. For Heat to take on the telco applications, there is a lot more work that really has to be done. So what I actually see happening is that, within Ericsson, we have developed our own orchestration engine. We can't wait for Heat to mature, develop and get ready, so we've developed our own orchestration engine, which we're actually using for scaling the VNF applications and containers. So do I see Heat taking care of application-based scaling in the telco world? Probably not, unless you're already there. Any other questions? Yep. Do you see a design philosophy difference between the telco world and the cloud world? What I mean is, in the cloud, especially in the public cloud, what you see is that hardware fails, and a lot of the intelligence is in the software. So if the hardware fails, you just fail over, or you just move to the next one — horizontal scaling as opposed to vertical scaling. What you're talking about in the telco world is, hey, there's a difference even in the hardware, and you're trying to build for that. So do you see a design difference? Yeah, there is a different paradigm, right? So I'm actually glad you brought that up, right?
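The distinction drawn above — Heat-style autoscaling driven by infrastructure metrics versus telco scaling driven by application-layer signals (e.g. a trigger from an IMS node the cloud layer knows nothing about) — can be sketched like this. It assumes nothing about Heat's real internals; both functions and all the field names are hypothetical.

```python
# Hedged sketch of two scaling philosophies. Not Heat code, not Ericsson's
# orchestration engine -- just an illustration of the contrast described above.

def metric_driven_decision(cpu_utilization: float, threshold: float = 0.8) -> str:
    """Enterprise-style: scale on an infrastructure metric the cloud can see."""
    return "scale_out" if cpu_utilization > threshold else "hold"

def app_signal_decision(app_signal: dict) -> str:
    """Telco-style: scale on an application-layer signal the infrastructure
    cannot see, e.g. session load reported by the VNF (an IMS core) itself."""
    sessions = app_signal["active_sessions"]
    capacity = app_signal["licensed_capacity"]
    if sessions > capacity * 0.9:
        return "scale_out"
    if sessions < capacity * 0.3:
        return "scale_in"
    return "hold"

# The host looks idle, so a metric-driven engine holds...
print(metric_driven_decision(0.55))  # hold
# ...while the application itself is near session capacity and must scale.
print(app_signal_decision(
    {"active_sessions": 95_000, "licensed_capacity": 100_000}))  # scale_out
```

The example shows why the two can disagree: a VNF can be near its licensed session capacity while its CPU utilization still looks comfortable, which is exactly the case an infrastructure-only autoscaler misses.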
So if you can imagine, today, in a typical three-tier web application, you would replicate the VM for different Apache servers and put a load balancer in front, and you would manage your database separately for your objects. The problem is that telco applications are not designed like that. Typically what you'll see is that a lot of them run in a 1+1 state, or they run in an N+1 state. And the other thing you need to make sure of is that you'll have geographic redundancy requirements, right? So you'll have to say: my active BNG or BRAS will run in this specific availability zone, in this site, but my backup will run in a completely redundant site, totally geographically away. So there are completely different mindsets in the telco world versus the public cloud for enterprise applications. Yeah, fully agree. Maybe I could add that, of course, as the capabilities of the cloud mature, we can utilize more and more of that, but for quite some time we will also need typical telecom resilience mechanisms, combined with what is being brought up from the cloud as well. So we see a combination. Yes, so, coming to that: you said that you can't contribute back to Heat because Heat is not moving the way you want. So don't you see that there is a need for a separate body for NFV, rather than just an OpenDaylight-based, carrier-grade implementation? So, well, let's not confuse Heat with ODL, right? I mean, they are separate things. I think ODL will do a good enough job for the SDN part, for what we need in the telco world. I think the problem with Heat is that it looks at everything as basically managing stateless applications, right?
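The 1+1 geographic-redundancy requirement described above — an active BNG/BRAS in one site, its standby in a geographically separate site — can be sketched as a small placement helper. The site names and the function itself are hypothetical; this is not OpenStack scheduler code, and real geo-redundancy would also check distances, not just capacity.

```python
# Hypothetical sketch of 1+1 geo-redundant placement: pick two distinct
# sites with enough free capacity, reserve capacity in both, and return
# the (active, standby) pair. Sites and capacities are invented.

def place_one_plus_one(sites: dict, vcpus_needed: int):
    """Return (active_site, standby_site), guaranteeing distinct sites."""
    candidates = [name for name, free in sites.items() if free >= vcpus_needed]
    if len(candidates) < 2:
        raise RuntimeError("geo-redundancy needs capacity in two sites")
    active, standby = candidates[0], candidates[1]
    sites[active] -= vcpus_needed   # reserve for the active instance
    sites[standby] -= vcpus_needed  # reserve for the hot standby
    return active, standby

sites = {"stockholm-az1": 16, "dallas-az1": 16, "stockholm-az2": 4}
active, standby = place_one_plus_one(sites, vcpus_needed=8)
print(active, standby)  # stockholm-az1 dallas-az1
```

The essential constraint is the one the sketch enforces: the scheduler must never collapse the active and standby onto the same failure domain, which is the opposite of the consolidation a typical enterprise scheduler prefers.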
And like I was saying to the previous gentleman, what we don't have in Heat is this: typically, when applications are scaled today in telco, it's done at the application layer. So, for example, will Heat actually support getting a call from an IMS system to start scaling? Heat won't even know about the IMS system itself. That's really where the OSS and orchestration area will come in, to do the actual fault monitoring, the performance monitoring and the metering, right? So I don't think Heat will take care of these, at least today. We'll take one more question and then we'll give out some prizes. Yep. About lawful interception and all the regulatory provisions that exist in telecommunications, particularly about keeping records for a very, very long time: do you see that all network functions will have the capacity to be virtualized, or will there be pockets that remain entrenched? Yeah, we don't know yet how fast it goes. We know enough to say that we are virtualizing basically all of our major core network functions over the next two years. But then we don't exactly know how fast it will go, and which ones will actually all be virtualized and which ones will stay in native format. So it's still open. I think we will see quite a bit of virtualization for most of the applications, but for some of these specific functions we will have to work with them and see how it moves forward. There are several open issues in the community around those kinds of things. So, okay, last question; we're being cued to get off. I just saw your architecture there, and I didn't see a lot of hypervisors called out. Is it the intention to do a lot of bare-metal implementation?
No, no, I mean, that's not really part of the OpenStack area, a bit like OVS, right? So, good point. To give you a short answer: there are going to be different layers of virtualization in the cloud, right? You'll have bare metal, you'll have hypervisors on bare metal, then one layer up you might have LXC-type containers like Docker, and then you'll have full virtualization, for example. So yes, you'll have different layers. And like I said, if you remember what I've been saying, the key part is the SLA and the deterministic values that you actually want, and that will define the actual layer you want to deploy on, right? A good question. Okay, we have here a camera, I think. The versatile camera that can be put on your helmet or something when you skydive. Or you have to wear this around OpenStack all weekend. Absolutely, you can have it as proof. And the first prize goes to Vishal Morgam, Morgai, Kevjum. All right, give him a hand. Thanks a lot. Okay, I'll do the next one. Next one is Matt Nussom. Matt Nussom, where are you? Do we have him in the room? He just left, okay. Tough, too bad. Next one on the list: Chris Stiles at Macy's.com. Chris Stiles? No, okay, he just missed it. Patrick Lopez, CEO of Core Analysis. Patrick Lopez. Yes, that's you. All right, congratulations. Okay, good, thanks. Okay, thanks everybody. Thanks for coming.