Right. I suppose I'll start. My name is Seamus Keen. I'm with Ericsson, in our cloud business unit. I'm going to take about 35 to 40 minutes today to talk to you about the economics of cloud and some of the learnings we've had from putting NFV on top of OpenStack for operator customers. Walking around the booths for the last couple of days, I know a lot of people don't seem to know a huge amount about Ericsson. I'm assuming this audience has a reasonably good idea, but Ericsson's vision is that we see ourselves as a company that can help any industry and any individual achieve their full potential. And for us, cloud is a big part of that. In particular, cloud is an enabler that allows companies to compete more effectively, to accelerate the speed at which they can bring their products to market, and to disrupt and be successful in the future. That's what a lot of our operators are trying to use cloud for as well. But of course, the theory and the practice are two different things, and the question is, how do you actually get there? When it comes to NFV, there are two types of questions I'm hearing from operators when we go out and talk to them. The first is, how do we actually do this transformation? What's involved from a technology and roadmap perspective? How do we make that journey from A to B? And secondly, and more immediately, can you help us explain why we should do this and justify it? And that's not just what we hear. There's a report here from Infonetics, and 52% of carriers identified cost benefits as a particular issue when looking at adopting NFV and going down that road. So in looking at what we can do, one of the first things you'll find is that people want to know the TCO benefits of doing this. Well, you go back and look at how this has worked out in the IT world.
Cloud in the IT space has been around for a long time; it's relatively mature and very well understood, and that gives us a good starting point. But the problem, of course, is that when you look at NFV and telco cloud, it's a very different environment and a very different endpoint you're trying to reach. And the key thing is the very different demands. The first, from an application point of view, is that a telco network has a much smaller number of applications, but they're very tightly integrated, with a much greater degree of interdependency than you would find in a standard IT data center. Secondly, we're fundamentally talking about moving a network into the cloud. That network has a shape, and it has very complex service chains and requirements, which must also be managed and carefully transitioned over. Carrier-grade performance is a term that means different things to different people, but it brings requirements around latency, predictability, and reliability that also have to be met. And then, from a technology standpoint, we've come to the understanding that it's not possible to virtualize everything today. When you look at a telco network, the OSS, the BSS, and elements of the control plane can be done today. But if you move right out to the edge of the network, you'll see that it's not yet possible to virtualize the access. It's very specialized, with very high performance requirements, and we're not really ready to do that just now. That's probably going to be the last step on the journey to NFV. So looking at all of that, while the IT experience and IT learnings gave us a starting point, we came to the recognition that we had to build a model that looks at this properly from a telco point of view. Ericsson is building our NFV architecture primarily on OpenStack, and we've been building a TCO model around that.
So we've built a model that asks: what are the TCO implications for an operator, coming at it purely from an NFV point of view and not just from IT? The numbers I'm going to talk about today are based around this starting point. We look at the OSS, the BSS, OpenStack, the NFV functionality itself, and then a range of the core network functions within that which would be virtualized over time. For each of those, we pull out CAPEX and OPEX, and we break those down into different layers. Within CAPEX: hardware, software, and SI, which we reckon are probably the most important components to look at. From an OPEX point of view: again hardware and software, facilities, which is more around the data center, heat, light, and power, and then operations and support costs as well. From an actual cost numbers perspective, we took a top-down approach, because it's probably the fastest way to do this, and we start by looking at the overall cost structure for the whole operator. Just to give an example on CAPEX, and I won't drill into the numbers, we start with a typical operator: about 10 to 15% CAPEX to sales, about 75% of that is your access and your backhaul, so the towers and the radio units, and about 25% of that is the core and the management, the OSS and BSS. We start from that 25% and break it down into the major elements, OSS, BSS, EPC, communication services, IMS, and so on, and break those into, again, hardware, software, and SI. We've done something similar around the OPEX. So it's important to note, when you see the numbers here, that I'm not talking about the cost of a particular function; we're looking at the full cost for an operator to actually run and manage this on an annual basis. Okay?
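The top-down decomposition described above can be sketched in a few lines. The percentages are the rough figures quoted in the talk; the revenue figure is a made-up example, not any operator's real number:

```python
# Illustrative top-down CAPEX decomposition, per the talk's rough ratios.
# The revenue figure is a hypothetical example.
annual_sales = 10_000_000_000           # example: $10B operator revenue
capex_to_sales = 0.125                  # "about 10 to 15%" -> midpoint
access_backhaul_share = 0.75            # towers, radio units, backhaul
core_and_mgmt_share = 1 - access_backhaul_share  # OSS/BSS, EPC, IMS, ...

total_capex = annual_sales * capex_to_sales
nfv_addressable_capex = total_capex * core_and_mgmt_share

print(f"Total CAPEX:            ${total_capex / 1e9:.2f}B")
print(f"NFV-addressable (core): ${nfv_addressable_capex / 1e9:.3f}B")
```

On those assumptions, only about a quarter of CAPEX is even in scope for NFV, which is why the access pool dominates the charts later in the talk.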
Then in the modeling, the key inputs are current CAPEX, current OPEX, and the two big drivers on top of those. The first is growth estimates: most of our operators are seeing very large growth in very dynamic markets, so that has to be factored in. Then we take the plans for virtualization: what should be done first and at what rate. And the most important thing, of course, is the actual impacts of virtualization, and I'll spend a bit of time talking about that. We run all of that through the model and get a set of output TCO and NFV numbers. Okay? On the virtualization impacts, these are the big variables that drive the case for any particular operator, and some of the key ones we've identified are here. Probably the primary one most people start thinking about is on the hardware side: to what degree will the servers consolidate? We have 100 today; how many will we have tomorrow? Automation, and the degree of SI associated with it, is another huge variable, and here we've found a great degree of variability between operators and between the plans they have. If you're willing to accept a large amount of SI and not willing to do a significant amount of automation, that's going to change the CAPEX-to-OPEX mix. Transformation and migration options are also tied to that: how fast do you want to move, and what do you want to do first? Is IMS the most important thing to you, or do you want to do EPC? Then there's distributed versus centralized: when you move to an NFV architecture, you don't necessarily assume that everything is going to run out of a centralized data center. One of the capabilities NFV does unlock is the ability to push an awful lot of processing out to the edge of the network, where it makes most sense to do so. So again, what is the best way to do that? And that does have cost implications.
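A minimal sketch of the kind of year-by-year model described here might look like the following. All the numbers and the saving/overhead factors are my own illustrative assumptions, not Ericsson's model: spend scales with capacity growth, and the virtualized share of the estate earns a saving factor while paying a platform overhead.

```python
# Toy year-by-year TCO projection. Inputs mirror the talk's structure
# (current spend, growth, virtualization plan, virtualization impacts),
# but all factor values are illustrative assumptions.
def tco_model(base_spend, growth, virt_share_by_year,
              virt_saving=0.25, platform_overhead=0.05):
    """Return projected annual spend for each modeled year."""
    spend = []
    for year, virt_share in enumerate(virt_share_by_year):
        capacity_factor = (1 + growth) ** year          # capacity growth
        legacy_part = 1 - virt_share                     # untouched estate
        # Virtualized estate: cheaper to run, but carries platform cost.
        virt_part = virt_share * (1 - virt_saving + platform_overhead)
        spend.append(base_spend * capacity_factor * (legacy_part + virt_part))
    return spend

# Scenario loosely mirroring the talk: ~12.5% yearly capacity growth,
# virtualization ramping from 20% of the estate to 100% over five years.
projection = tco_model(100.0, 0.125, [0.2, 0.4, 0.7, 0.9, 1.0])
```

Plotting `projection` against the do-nothing curve `100 * 1.125**year` reproduces the shape of the graphs discussed next: near-parity early on, with the gap opening once the estate is fully virtualized.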
And then there are a few others: scaling, disaster recovery, continuity, and so on. A point I would make is that we've found this is very sensitive to the particular operator's starting point. We have operators out there who've never installed IMS. We have operators who have it and aren't quite happy with it, because there's a big SI overhead on it. So as I show you some numbers here, this is a typical case for an operator we've already looked at, but different operators will have different sets of numbers coming out of this. So, a real-world scenario, and this is a case we modeled recently. It's an operator we're working with, over about 50 million subscribers. They've got 2G, 3G, and 4G services, a big focus on mobile data, and they're growing quite aggressively: about 10 to 15% capacity growth needed in any given year. We modeled out about five years. From the discussions we had with this operator, their ambition would be to virtualize everything by about 2020, so they've got a three- to four-year timeline to complete their move to NFV. We modeled a couple of different scenarios, and the one I'm going to talk through today assumes you start with EPC and virtualize about 20% of it in your first year, so you'd have both legacy and NFV architectures operating alongside each other. Then you'd bring additional applications onto the platform, IMS, OSS, BSS, and the rest of the core functionality, taking about another three years or so, so that by the end of about four years you'd have virtualized the full network. The last set of numbers you'll see here is then the fully virtualized environment, so you get some idea of the overall cost impacts once the transition is complete. Okay. So this is the first graph I'll show with numbers. Green is non-virtualized, your do-nothing case, and orange is virtualized.
Looking at it from a CAPEX point of view, what you can see through the period from about 2015 to 2018, while the network is being virtualized, is that there's not a significant difference in cost. The reason is that with the first applications you're just doing EPC, and only a small part of it, so there's not a massive overhead in cost, and there are some savings. But one of the things we've learned is that once you build your platform, and there is a cost to establish it, there are benefits that get unlocked, and I'll talk more about those as we go on. As those benefits come downstream, there are synergies as you bring more and more applications onto the platform. So while in some ways you're increasing your investment as you virtualize more of those applications, you're also seeing more and more benefits, and the two broadly cancel out until the point where your network is fully virtualized and you're no longer putting in the investment to build the platform. Then you start to see the overall gains. When we ran the model in this case, over the full five years the total difference in spend would be only about 1 or 2%. But by the time the network is fully virtualized, your annual CAPEX is down about 8 or 9% on where it would have been at the starting point. So some gains there. The bigger difference, though, is on the OPEX side. Here again, in the first year or two, because you're only doing a limited number of applications, there aren't huge benefits. But as you go further, more and more of the operational gains start to come, and you begin to see a very rapid divergence as you bring more apps onto the platform and move to a fully virtualized telco cloud. I'll talk a little more about that.
But again, by the time the full network is complete in 2019, we were seeing about a 25% reduction in OPEX for the core, OSS, and BSS components. All right? So, looking at where the benefit is coming from. On the CAPEX side, the access isn't zero; it's a rather large pool of spend, so I've just compressed it on the chart. This is what you're spending non-virtualized by 2019: if you had not virtualized anything, this would be your spend. And this is where you end up once you've moved everything over to an NFV implementation on OpenStack. The biggest reduction in an individual component is the hardware, which comes down by about 80% overall. That's based on industry expectations, analyst views, and our own internal modeling, and it gives you the biggest chunk of the reduction. But at the same time, there's an offset in the SI needed to manage the more complex environment you've now built, and that claws back about half the gains. So some savings, but not massive. Then down here we have the additional software costs, primarily for implementing your cloud platform. And over here we've got the operational side. Again, non-virtualized: where would you be if you have to add your 15% capacity each year for five years, versus where you come out fully virtualized. And again, the access isn't zero. There are some savings on hardware, about half what you get on the CAPEX side, because while you can save by moving from NEBS to COTS, you still have to maintain a similar, or only somewhat smaller, number of servers. There's some increase around software, some reduction on facilities, but very big savings around operations, about 25 to 30% there. That's where the key gains are coming from.
And that's in the support and management costs, and the cost to launch and manage new services on the cloud platform you've built. Okay. So let's take a minute or two on the CAPEX. As I said, these are two sets of numbers: your starting point today, a non-virtualized network, and then five years later with everything fully virtualized. You can see overall CAPEX has gone up, but there are two factors happening at the same time. First, you're having to continue to increase capacity; as I said, we're assuming about 10 to 15% capacity growth in the network year by year, so you have to account for that. But at the same time you're virtualizing and moving into a cloud NFV architecture. So on the one hand you're adding capacity, and on the other you're making savings through the virtualization. If you balance those out, the main thing you can see is that the hardware component, as a percentage of your total spend, drops from about a fifth down to about 5%. So there's a big saving there, but as I said, you continue to spend on SI and on software as you go along. So while there are net savings, they're being offset by continued spend, primarily because you're building capacity on the network. This is not a static picture comparing like with like between the start and the endpoint. The other thing is that the mix of software and SI in there is very dependent on the types of applications you're looking at. Take something like OSS and BSS: there's a very high SI component on them today, even in a non-cloud environment, and when you virtualize them, that's not going to change significantly. You'll still have to do a large amount of SI even on cloud versions of those.
Whereas with other applications, you will be able to automate, and you'll be able to instantiate them in a much faster, simpler, and cheaper way. Okay. Then if we look at the OPEX side: here the biggest reductions are again on the operations side, but that's partially because about 70% of your cost is actually in operations and support. Spare-parts management and software update patching aren't really that significant; going out and managing the applications and the services is where the primary cost is today, and that's what we've seen from the numbers. Again, this depends a little on the customer, their situation, and to what degree they can automate the services and the applications; we worked through a number of different scenarios for the case we're looking at here. But in terms of ongoing SI, vendor support, and network DevOps, there are big opportunities to reduce the cost. And even though the network capacity has gone up, and you're servicing more users and greater data volumes, the total OPEX has actually gone down over time, as opposed to the CAPEX side, where you still have to spend an increasing amount. There's some increase around software and so on for the virtualization, but it's not significant. Then the most important point: I didn't put these two graphs on the same slide, but a typical OPEX-to-sales ratio for an operator is about 60 to 65%. So against the potential CAPEX gains, in annual terms, what you're going to save on OPEX is about 10 to 15 times what you would save on CAPEX per year. What that means is that even if you were able to make very large savings on CAPEX, they're far outweighed by the cash potential you can free up on the OPEX side instead. Okay.
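That 10-to-15-times claim can be sanity-checked with back-of-envelope arithmetic. The CAPEX and OPEX ratios below come from the talk; the share of total OPEX sitting in the core/OSS/BSS pool is my own illustrative assumption, since the talk doesn't state it:

```python
# Back-of-envelope check: why OPEX savings dwarf CAPEX savings.
# Ratios are from the talk; core_opex_share is an assumed value.
sales = 100.0                       # normalize revenue to 100

capex = sales * 0.125               # ~10-15% CAPEX to sales
core_capex = capex * 0.25           # core + management share of CAPEX
capex_saving = core_capex * 0.09    # ~8-9% annual reduction when fully done

opex = sales * 0.625                # ~60-65% OPEX to sales
core_opex_share = 0.20              # ASSUMPTION: core/OSS/BSS share of OPEX
opex_saving = opex * core_opex_share * 0.25   # ~25% reduction on that pool

print(f"Annual CAPEX saving: {capex_saving:.2f} per 100 of sales")
print(f"Annual OPEX saving:  {opex_saving:.2f} per 100 of sales")
print(f"Ratio: {opex_saving / capex_saving:.1f}x")
```

With that assumed OPEX split, the ratio lands around 11x, inside the 10-to-15x range quoted in the talk.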
So, to quickly summarize some of the key points. The first is that there are CAPEX gains there, as I said, about 10% on an annual basis, but they're not massive. I know that when NFV first appeared on the radar two or three years ago, there was a feeling that this was an area where an awful lot of savings could be generated, but the main benefits are actually coming on the OPEX side. It's the operational benefits that are going to generate the greatest opportunity and potential within the business for freeing up cash. The other thing we found is that the synergies are very important. With the first couple of applications, you set up your cloud platform and bring on an EPC or an IMS or something like that; you won't really see significant gains, and you'll probably cost yourself money. But as you bring more and more applications onto the platform, you build up momentum, and you really start to build up speed in terms of the benefits and achieving the potential you want. Also, the numbers I've put up here are very dependent on the particular situation. We had an operator here who was quite advanced, who had put a lot of work into building and operating a relatively mature network. Different operators will have different configurations and different plans, so how exactly it works out will depend on the customer. So if I take all of that, a key takeaway is that getting the TCO gains isn't really a technology problem, and that's what we've learned from looking at these numbers. You can take your legacy architecture and move to a fully virtualized one; as I said, we're looking at moving all our applications onto OpenStack, and you can put that in and swap legacy for virtualized. But if that's all you do, you won't really see significant cost benefits. There's a lot more required.
Putting in OpenStack gives you a platform, but you still need to do a lot more to unlock the potential that's in there. And that is primarily on the operational side: automation of processes and services, new service creation, and looking at how you run and manage your network. Those are the things that will generate the big savings and really give you the benefits the platform provides. Coming from that point, we've been looking at this for a while, and it's driving some of the areas Ericsson has been focusing on. The first is orchestration. We've decided that total orchestration is key here, so cloud automation, governance, provisioning, security, and management are all important, because they drive the operational benefits and generate the long-run savings we're looking for. The second thing is the revenues. I haven't really touched on this today, because I was primarily talking about the costs. Putting in cloud as a platform, yes, you can save some on the hardware side and you will save some operational costs. But one of the biggest reasons to do this is to accelerate your ability to roll out services. In the near term, operators are going to have to do this to compete more effectively for their own customers. And if you look wider at the OTT and IT players who are coming into this area more and more, we need to be able to launch and iterate services in days and weeks rather than the months and years it has traditionally taken. That's one of the key drivers here: what can the platform do in terms of allowing you to generate new revenues?
There are also other benefits in terms of greater reliability and service resilience, because that drives customer retention, which is also one of the big OPEX cost drivers that wasn't directly covered in the model. And the final thing is implementation, which is another key one. The migration has to be managed properly as well, and this is one of the technology areas. There's a large number of risks associated with implementing a large technology stack like this: multi-vendor integration, new applications, new technology, the implementation itself, and ensuring full-stack interoperability is key there. We've been able to do a number of things on this in the past, and we've also announced the OPNFV certification initiative this week. So I'm just going to hand over to Shanganyu, and she'll talk to you very quickly about that and how it also benefits this. Yeah, great, thanks. Okay, hi everyone. My name is Shanganyu. I'm working for Ericsson in Group Function Technology on cloud, and I've been working with mobile network planning, implementation, and optimization for the past 20 years. As was just mentioned, what we see as the challenge for operators today in going ahead with virtualization and cloudification is truly the concern about interoperability and multi-vendor compliance, and again, performance guarantees. I guess you all recall that in the main hall session there was a question to the telecom representatives about what the OpenStack community should focus on to help telco operators be confident that they can speed up their virtualization and cloudification activities.
As mentioned, last Thursday Ericsson made an announcement, a press release, that we would like to launch what we call the OPNFV certification program, with the focus on leading the community in a joint effort to address this interoperability and multi-vendor challenge. And we want to highlight: this is not an Ericsson certification, it's not a vendor-specific certification. We want to drive this together with the community as an industry certification. The ambition is very much to have an environment where we can jointly certify all the vendors for compliance to the standards and compliance to the reference platform. That's the thinking on multi-vendor compliance, and also, as was mentioned, full-stack interoperability, so end-to-end performance should be the focus. On the VNF side, in general, we should be able to identify which VNF workloads we want to work on, so that, as an ambition, we set the performance requirements clearly from the start, and then jointly with all the vendors we can address the challenges and make sure we truly have a working, production-compliant solution together. In terms of performance, as I said, this is very much the challenge operators face today. It's not only about end-user requirements; it's very much about compliance with regulatory requirements, because, as we all understand, five-nines or six-nines requirements are very much the most challenging task telecom operators face today. A minute of downtime not only causes loss of revenue; it also affects their compliance with those regulatory requirements. In the end, our focus, as I mentioned, will very much be VNF portability and NFVI compliance.
In terms of how we're going to set up the testing environment and how we're going to work jointly with all the partners here in the community, that is very much our focus for the coming months. People have been asking why we call it the OPNFV certification program. The thinking is very much that, as was mentioned, the Linux Foundation announced the OPNFV initiative some months ago, and our ambition is to align closely with that initiative and treat it as the standard reference platform for us to work on jointly. And of course, we know that other industry forums are also driving interface and functionality alignment, and so on, so our focus is also on working across the industry, with the ambition of jointly aligning priorities. For example, phase one of OPNFV is very much about creating a reference platform so you can truly start working towards a production solution. The second phase will focus very much on the MANO part, which has been strongly requested by operators. So in general, as a summary, our ambition is, as I mentioned, to work together with all of you here in the community, so that we provide not only an open-source-based solution but a truly working solution for our telecom customers. Today, as I mentioned, no single vendor can guarantee performance; that's the challenge for the community jointly, and we need to work together on it. And in the end, as mentioned just now, in terms of processes, methods, and tools, we need to rethink; there will be re-engineering, and there are a lot of things we need to work on together.
In particular, I would say, this relates to onboarding tools, to automation, and, again, going back to planning, implementation, and optimization of the network. That's the biggest challenge, I would say, for the community in moving forward. So finally, we would like to highlight that Ericsson, again, as the leading telecom vendor, definitely believes this is our responsibility. We would like to take the lead and drive the community forward as a joint effort. We are investing in our existing multi-vendor verification capability and the interoperability testing facilities we have today worldwide. We have many labs around the world, but we would like to start focused: we provide this environment in a distributed cloud fashion, so that all customers worldwide have access to it, but we will initially start with equipment in two locations, one in Europe and one in the US. So this is very much the effort we would like to see as a community effort, and we welcome all our customers and partners to come to us so we can jointly start this activity. Yeah, okay, thank you. Okay, so we have a few minutes if there are questions. One question on your TCO study: did you assume the software as freeware, or did you assume a supported distribution, like from Red Hat or Ubuntu? This would have assumed the Ericsson implementation, so we would run OpenStack for the virtualization platform, and we have some of our own software around the cloud system to manage the network applications running on it once it's set up. So there'd be a license fee around that? Yeah, there would be, and on the graphs there was a small increase in software costs, which was primarily to cover that. Great. And then another question, in terms of certification: who would be the holder of that certification?
Would Ericsson grant that certification, or would it be a test that everybody agreed to and that would be generally available? Yeah, this would definitely be community work, like you said, but Ericsson would like to take the lead to initiate it, so currently we're working closely with the OPNFV test and performance community. Would other test locations be available, say in Asia as well? For sure, for sure. As I said, we have labs around the globe, and the ambition is to start with a few locations; it truly depends on the requirements from the customers. As I said, operators in Asia will benefit from this from day one. Even if the equipment is actually in the US or in Europe, it doesn't really matter, because that's the distributed cloud thinking, yeah. So any company could do the test and just say, okay, we're compliant with this, and they could do it at Ericsson's place, or somewhere else as well? You should be able to; that's the discussion ongoing, on what exactly would be the right way, the best way, for the customers and partners to have the environment truly working. For example, in the Ericsson booth demo today, we actually have the equipment in Montreal, but we're showing it here in Paris, and it works perfectly fine. Great. Yeah. Thanks. Okay. Thanks. Regarding your TCO calculation, maybe I missed the first part: what is the foundation of this? Is it a real customer, an operator case, or is it a lab condition? This was a real operator case. We took the live cost base from an existing operator, looked at their existing network, worked through a number of different scenarios for how we would recommend they virtualize, and then did the analysis with them. We've done it with a couple of other customers, but I just selected that one as a typical case for today.
And does the operator you worked with agree with this calculation? Their view was that the numbers were pretty much in line with what they had expected and what they had calculated internally, yeah. We took a slightly bigger picture: they were looking at it on an application-by-application level, while we looked across the full network and their total cost base, but they were pretty happy that there was nothing radically different from what they had already calculated internally. So if I understood correctly, it's not one operator, it's several operators? This was one operator, although we have done a similar modeling exercise with a couple of other operators as well. Thank you very much. Hi, I'm Mari Polodini from HP. I'm in ETSI NFV and so on, and also OPNFV. We had the technical meeting yesterday, and definitely testing is central: OPNFV has this objective of having a packaged solution and infrastructure at the Linux Foundation to allow hackfests, interoperability testing, and so on. There's a team defining the tests, so all of that is very much in line with the objectives of the industry. At the same time, every vendor has its own labs; HP has three worldwide, and so on. So I guess many vendors will be doing testing and certification according to the specifications. The requirement for the telcos is really to deploy telecom solutions and telecom functions. So do you intend to have a live network as a test bed, like a live mobile network, or something like that? That's exactly the point. I think I probably wasn't clear in the last slides. In terms of mobile, again, multi-vendor verification and interoperability testing, Ericsson has been doing this together with all the vendors, or potential vendors, for the past 20 years.
So we've been doing that very much in a live environment. We have extensive facilities worldwide, and that's exactly the ambition: we believe we need a live network environment in order to do the proper testing. This is not a simple demo or simulation. We want to do live network testing, to truly make sure that once you certify in this environment, you actually are ready for production. That's very much the thinking.

Yeah, but I think that's a very interesting thing to communicate in documents: what is the type of live network that you put in place? Whether it's 3G, LTE, broadband...

We have everything today. Yes, we have everything today. And that's exactly our belief, that instead of creating something new from scratch... and you're definitely right, there are many vendors today who have their own certifications. We're not saying we're going to replace all the existing ones, but we want to...

It's very complementary, probably.

Exactly. We want to take, like you're saying, a complementary approach, but with a focus on live network traffic and all the interoperability-testing types of scenarios we need to consider. Many thousands of test cases have been run across the different labs, but now we need to hold the whole effort together. That's the thinking. Thank you.

As you are leading this certification program, I wonder how you want to drive this multi-vendor verification when there is no standards body dealing with the actual orchestration and management of the VNFs. How do you want to convince the telcos, which are seeking an actual open source implementation of VNF orchestration, since OPNFV excluded VNF orchestration and management? Are you proposing that you're providing a product which will be open for multi-vendor VNFs, and placing your product on the market to do all the orchestration?
Or are you looking for some open source implementation to make this part of OpenStack? Or do you try to go with Heat or TOSCA and things like that? So what's the idea here to make this interoperability with multiple vendors work?

Right, that's a very good question. In terms of the thinking, you're definitely right. The challenge, as we also realized from Ericsson's perspective, is truly the multi-vendor challenge. Open source very much encourages all the vendors to contribute code and ideas about functionality, interfaces, et cetera. That's why we call the certification OPNFV certification: the thinking is that we need a certain standard behind it, and OPNFV is very much aligned with the ETSI NFV initiative. There are many forums today working on how we're going to align each other's functionality and each other's solutions. On orchestration, like you're saying: phase one of OPNFV is very much about speeding up the ongoing activity so we actually have a baseline and can figure out how to have a working environment, so that not only OpenStack but the other communities as well can deliver a solution the operators can start working on. And then phase two is very much focused on orchestration, and that is where much of the OPNFV community is working today. Even so, we need to realize that for many operators, the migration from a legacy environment to a virtualized environment is a big step. So we need to be very careful in choosing, like you're saying, which VNFs we should focus on first.
Different workloads: if you compare a virtual EPC with a virtual IMS, they have different requirements in terms of the underlying solutions. That's the part we're working on together with the community. We don't believe any one vendor can drive this alone, neither Ericsson nor Huawei nor Cisco nor HP. We need to work together.

So you're basically saying that in phase two, OPNFV is also picking up orchestration?

Yes, yes, that's exactly the focus. If you look at OPNFV phase two, that's exactly the focus.

The OPNFV slides? Yes, the previous one. Yes, the one here. So that's phase two; phase one has a different focus.

Okay, so in phase two it will shift.

Yes, exactly.

Okay, but the certification we want to start is truly full-stack thinking. VNF portability is very much the focus for us as the initial activity, because we believe, like you said, this is a crucial part.

Okay. Thank you.

Okay, great. Any other questions? Okay, thank you very much for coming today. Myself and Shanang will be here if you have anything else you want to ask. Thanks.
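As an aside to the Heat/TOSCA orchestration question above: a minimal sketch of what Heat-based orchestration of a single VNF can look like. This is purely illustrative and not from the talk; the resource names (`vnf_server`, `vnf_port`), parameter names, and the small validation helper are hypothetical. Heat templates are normally written in HOT YAML, but Heat also accepts JSON, so the template is shown here as a Python dict for a self-contained example.

```python
import json

# Hypothetical Heat Orchestration Template (HOT) for one VNF VM with its
# own Neutron port. All names and defaults are illustrative assumptions.
hot_template = {
    "heat_template_version": "2014-10-16",
    "description": "Sketch: single-VM VNF with a dedicated port on a tenant network",
    "parameters": {
        "image": {"type": "string", "description": "Glance image for the VNF"},
        "flavor": {"type": "string", "default": "m1.medium"},
        "net_id": {"type": "string", "description": "Neutron network UUID"},
    },
    "resources": {
        "vnf_port": {
            "type": "OS::Neutron::Port",
            "properties": {"network_id": {"get_param": "net_id"}},
        },
        "vnf_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": {"get_param": "image"},
                "flavor": {"get_param": "flavor"},
                "networks": [{"port": {"get_resource": "vnf_port"}}],
            },
        },
    },
    "outputs": {
        "vnf_ip": {
            "description": "First address on the VNF port",
            "value": {"get_attr": ["vnf_server", "first_address"]},
        },
    },
}


def undeclared_params(template):
    """Return get_param references that have no matching parameter declaration."""
    declared = set(template.get("parameters", {}))
    referenced = set()

    def walk(node):
        # Recursively collect every {"get_param": "<name>"} intrinsic.
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "get_param" and isinstance(value, str):
                    referenced.add(value)
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(template.get("resources", {}))
    return sorted(referenced - declared)


# Sanity-check the sketch before handing it to an orchestrator.
print(undeclared_params(hot_template))   # → []
print(json.dumps(hot_template)[:60])     # JSON form a Heat endpoint could accept
```

The point of the helper is the multi-vendor concern raised in the question: before a VNF package from one vendor is onboarded into another party's orchestrator, even trivial consistency checks like "every referenced parameter is declared" catch a class of portability errors early.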