So, good afternoon. I think I'll start. My name is Carsten Rossenhövel. I'm co-founder of EANTC, the European Advanced Networking Test Center. We're based here in Berlin, and that's one of the reasons for the title of this presentation: it's now 29 years and five days since the Berlin Wall came down. I remember it well. So, we're an independent test lab, just as a very brief intro and propaganda if you don't know us. We're not selling any hardware or software. We're not a channel partner with anybody. We're completely independent. We test with vendors and manufacturers, with service providers, and also with large enterprises and government agencies. We have been doing only quality assurance in the network and telecom sector for the last 20 years, and NFV is one of our areas. We do most things between layer two and layer seven, so anything where you see packets, in the mobile space or in the fixed network space, that's our area. And NFV and SDN are the two main areas leading towards 5G deployment, which we're actually involved in.

So, let me start with why we need interoperability at all. That was one question from Vanessa's presentation, the previous one here. Why do we need interoperability at all? In the enterprise space, people don't need interoperability in most cases. Just to remind you, the telecoms always want to have multi-vendor solutions because they can't afford vendor lock-in. They don't want to have any single-source issues. If something doesn't work as advertised, they have to have a backup solution, because they're actually making money with the network, so they cannot afford outages. And that's where this whole multi-vendor requirement is actually coming from. So if you look at the NFV reference model on the left side of this slide, you can already see, highlighted with these gray boxes, where there would typically be different vendors, and how the service providers are trying to split up the market so that they can actually buy different parts of their infrastructure from different sources. So here you can see vendor one might be providing the OSS; vendor two, the orchestration; vendor three, the infrastructure; and vendors four, five, six, and so on, the actual applications, the virtualized network functions.

The same diagram in a different graphical representation comes from an official ETSI document, ETSI NFV TST 007. This is a document which actually defines interoperability guidelines for NFV orchestration. Typically, each of these functional blocks, the FUTs, the functions under test, is seen as a distinct unit. And this standard defines, in around 100 pages, what kind of test cases to run, how to achieve interoperability, how to evaluate pass/fail criteria, and so on. Typically we have three major areas: the orchestration, the VNFs, and the infrastructure; and the VNF managers are kind of in between, and their association varies. Sometimes they're associated with VNFs, sometimes, rarely, with the orchestrator. So these are the interoperability points that we see, and you can already see there are three things that are tested with each other in each combination.
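To make the combination problem concrete before the next point, here is a minimal sketch of how the number of test combinations grows once a third party enters the picture. The vendor counts are purely hypothetical.

```python
from itertools import product

# Hypothetical vendor counts for each functional block under test.
orchestrators = ["MANO-A", "MANO-B", "MANO-C"]
vnfs = ["VNF-1", "VNF-2", "VNF-3", "VNF-4", "VNF-5"]
infrastructures = ["NFVI-X", "NFVI-Y", "NFVI-Z"]

# Classic pairwise interop (e.g. two routers peering BGP): a 2-D matrix.
pairwise = len(vnfs) * len(infrastructures)

# NFV/MANO interop: every (orchestrator, VNF, infrastructure) 3-tuple.
triples = list(product(orchestrators, vnfs, infrastructures))

print(f"pairwise combinations: {pairwise}")      # 15 with these counts
print(f"3-tuple combinations:  {len(triples)}")  # 45 with these counts
```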
And traditionally, when any interop testing was done, let's say in the router space, for example, you test whether the BGP implementation works: you have two entities, they peer, you do many pairs, and you end up with a two-dimensional matrix, an Excel sheet. But in the MANO case, you have a three-dimensional Excel sheet, because there are three parties in each combination. It's a 3-tuple, which means, of course, that the number of combinations is far larger. So that's already one indication that the traditional way of doing interop testing in a static, manual way, going to the lab once a year and coming back after two weeks with lots of successes, doesn't really work well in NFV, because we have way more combinations. There's too much effort; we have to automate it. The other reason is that because things are still under development, there are way more releases. Companies don't just release one software version per year. They release maybe ten or more, depending on the type of company and type of vendor. So that also increases the frequency of testing, another reason why automation is important.

And when I talk about interoperability, most people just think interoperability means we plug something together, like it's a plugging thing. But in fact, it's much more than plugging. Plugging is fine; it's the first mandatory requirement. But you cannot run a network with things that just plug together successfully. You need to test many, many more things before you can actually go to production, and that's one of the reasons why service providers are not going to production at large scale yet. There is data plane performance that needs to be tested. There is the scale of the services. There is high availability: if some component fails, there must be some sort of failover; it shouldn't just fail completely. There's the whole topic of manageability, provisioning, service assurance, all of these things. Then the agility of the service: how can it grow and shrink and adapt to different requirements? And of course, last but not least, the security aspects, which have been undervalued for a long time; people are now looking at security much more closely. And with these kind of chaotic arrows here, you can see that not every functional block is affected by each and every area, but most are affected by most. So there is a ton of testing that goes beyond functional interoperability.

So what's the situation today? I don't know if you know the Gartner hype cycle. I always like this. Gartner defined this hype cycle a long time ago, for totally different technologies. It says that whenever a new technology is introduced, it starts with a hype. It's triggered; everybody thinks, wow, this is going to solve all our problems. Then after a short while people figure out, oh, it's not going to solve all our issues, and they get a little depressed. That's what they call the trough of disillusionment. Then some things die at that point, and other things continue to live and get to the slope of enlightenment, where people can actually make money and deploy them. So now the question is, where is NFV today? In 2013, a good five years ago, it started with a big bang. A lot of operators, specifically European and North American ones, came together and said, we're going to align our plans, we want to throw out all our legacy equipment, we want to deploy NFV. Wonderful.
So the vendors started preparing something. Only one, two, three years later, they came up with a lot of success stories. Wow, it works, it works. We have POCs running. It's great. There were like 30 or 40 public POCs that were published through ETSI alone. These were success stories, and everybody was like, wow, we're done, this is done, we can deploy it. And then people started to try and deploy it. And they found that although functional interoperability was achieved in these POCs, it is by far not enough. So if you look at the picture, there are two more steps. The industry noticed a few years back, in 2016 I would say, that the scaling and the integration are way more challenging than they thought, and there is a lot of testing effort that each of the service providers was forced to do, because nobody else did.

And today, I would say we're at the very bottom of the trough of disillusionment. The vendors and service providers are talking openly about issues, and more about issues than success stories today. I came back from a conference in the Netherlands a couple of weeks back, one of the major NFV and SDN conferences, and there are vendors such as Ericsson, for example, who put out slides in public which say, I'm quoting from memory, apologies, that NFV is far behind our expectations; it doesn't really work as we hoped it would. And I sat there and said, wow. But like in the stock market, things can only go up if people understand what's going wrong. As long as everybody was cheerful and you couldn't say anything bad, things could only become worse. And I'm pretty hopeful that we're reaching this bottom right now, that a lot of the entities in the market, on the open source side and especially on the commercial side, understand much better what it takes to deploy NFV and what needs to be done, and it will hopefully be solved. I think we have a good chance with the very important application use case of 5G, that there is a certain drive to get NFV done, because there is not going to be any large-scale 5G deployment without NFV. So hopefully in two or three years the world will look very different. At least at EANTC, we're doing our bit to support the quality assurance on the standards side and on the implementation side. Now, if this does not work, then I would predict, it's not on the slide, but I would predict that in three or four years, if we still have the same situation as today, we will probably go back to the big vendor, big supplier scheme. I don't want to name anybody, but the big vendors in Europe, in North America, in Asia are going to win a lot if the interoperability problem is not solved. And that's another reason why I think the service providers are very interested in solving it.

So in the ETSI NFV group, there was a survey recently from the Network Operators Council, the NOC: what are the main barriers and pain points? They asked around all of the service providers that are ETSI NFV operator members. Number one was mission-critical operational needs of the network, for example service assurance and performance. So you see, the number one challenge is not the functional part, it's the performance part. A use case, a business case, can only work if one understands how many resources are needed.
And if the resources for a virtualized firewall are like triple the resources with which the same performance can be achieved on the traditional physical network function, then there is no point in migrating, or it's much harder to justify the business case. The second point is integration with the orchestrator, with the management and network orchestration and next-gen OSS and BSS. So the integration of the management is also a major problem, and it's a multi-vendor challenge. Because even if one starts with a single vendor, even if one says we'll buy everything from vendor X, it will only take one or two years until something happens. The vendor doesn't make this product anymore, or the vendor acquires another company which has a different type of product, or you go to a different market where maybe the vendor sells much more expensively and you need to issue a new RFP. These things happen faster than one thinks. So there is always a need for multi-vendor. I don't want to go through each of these topics, but you can see most of these things are actually a lack of interoperability, you know, integration problems. And issues with VNF lifecycle management, that's also something that Vanessa pointed to: there is no standard for VNF descriptors. So the lifecycle management, how to ramp up a VNF and instantiate it, sounds like a very simple task, but first you need to define the VNF characteristics in the descriptor and tell the orchestrator how to do this, and if there is no standard, that's a big source of configuration problems and misunderstandings.

But we have been testing; for five years we've been testing, and there are so many different groups that are testing. There are vendor testing programs, there's testing in the open source groups; in OpenStack there has been a lot of testing, and I think that's mostly successful and done for NFV purposes, I would say. There has been integration testing in the OPNFV group for a long time. There have been tons of service provider POC tests. But the problem is these are mostly isolated. The open source groups do work together, but each of the vendors has their own ecosystem, they are not aligned with each other, and they are often not well defined: you can't really know what it means if a vendor logo is part of an ecosystem. What does it actually mean? And the service providers, of course, are always testing in an isolated and confidential way, and that leads to a situation where many of the basic tests are repeated over and over again, and few people get to the really advanced tests. So collaboration between open source and commercial programs could be improved. There is no pipeline of testing; there's a pipeline of code integration, but people don't normally talk about a CI/CD testing integration pipeline. The one positive exception is OPNFV, and to some extent Red Hat is integrating with OpenStack in an automated way as well, but these are kind of exceptions. The vendor programs are lacking transparency; sometimes they are very simple: the purpose of some of these programs, not all, is just to have as many logos as possible as part of the ecosystem, to simplify the sales process. Sometimes they are great, but they are only one-time efforts. So let's say in 2016 a vendor came on board and was very thoroughly tested, but almost no single line of code is the same as in 2016, so that certification doesn't have any relevance anymore.
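Coming back to the descriptor point from above for a moment, here is a minimal sketch of the kind of information a VNF descriptor has to convey to an orchestrator before it can instantiate anything. The field names are purely illustrative for this sketch; they do not follow ETSI SOL, TOSCA, or any vendor's actual schema.

```python
# Illustrative only: the kind of data a VNF descriptor carries.
vnf_descriptor = {
    "vnf_name": "virtual-firewall",
    "version": "2.1.0",
    "image": "fw-image-2.1.0.qcow2",
    "vdus": [
        {
            "name": "fw-vdu-1",
            "vcpus": 4,
            "memory_gb": 8,
            "storage_gb": 40,
            # Acceleration hints the infrastructure must honour for line rate.
            "acceleration": ["dpdk", "sr-iov"],
        }
    ],
    "connection_points": ["mgmt", "untrusted", "trusted"],
    "scaling_policy": {
        "metric": "sessions_per_instance",  # what triggers a scale-out
        "scale_out_threshold": 100_000,
        "max_instances": 4,
    },
}


def validate_descriptor(vnfd):
    """Return a list of problems an orchestrator would stumble over."""
    problems = []
    for field in ("vnf_name", "image", "vdus"):
        if field not in vnfd:
            problems.append(f"missing mandatory field: {field}")
    return problems


print(validate_descriptor(vnf_descriptor))  # [] if everything is present
```

Without an agreed schema, every orchestrator expects this information in a slightly different shape, which is exactly the source of the onboarding problems described above.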
Service providers always retest the same basics, and the business cases relate to all of these other non-functional topics that are rarely taken into account in these kinds of programs. So that explains why you can test a lot and still not actually get to the end of it. We also need to improve the definitions, we need to expand standards for testing, and we need to look at the real use cases and define scenarios from the use cases, not starting from the data sheets.

The good thing is that I think the ETSI group has been one of the leading groups, or the leading group, to really understand this industry-wide interoperability need. This is also a slide from the Network Operators Council. They've said that the upcoming work of the NFV Industry Specification Group in the next two years is going to focus a lot on interoperability testing. I recolored this red so that everybody sees it clearly. Of course there are many things that need to work together, but interoperability testing is the main focus area, and I think that's positive to see.

Now, I tried to summarize the state of this multi-vendor aspect. Can I give you any insight, can I tell you, okay, you can check off data plane performance but not manageability, or something like that? The problem is I can't. We've run a number of test campaigns, and there are a few more slides on this, but there is no uniform state that I can tell you about. If somebody asked me in private, what do you think of this combination, MANO X, VNF Y, infrastructure Z, then maybe we would have some data. But overall it's very difficult to predict. Take data plane performance, for example: if there is a VNF, the VNF has to be programmed efficiently. That's the start. Then the VNF needs to use the right libraries, like DPDK for example. Then the infrastructure needs to provide support for DPDK with the right hardware; it needs to have the right SR-IOV support, for example. The orchestrator also needs to understand, from the descriptor point of view, what the VNF actually wants and how to instruct the infrastructure to provide what the VNF wants. And then there must be good configuration of all of this. We've seen things fail for days until somebody noticed, oh, you know, with this hardware flavor you need to set a BIOS variable in a different way, and the default was set in a way that it could never work. So even just data plane performance is one topic which is difficult to predict. So none of these deployments and integration projects are out-of-the-box things that will work just like that: you download, you deploy, you're done. Each of them requires thorough preparation, quality assurance, and integration work. And the main problem, as so often, is the number of options. There are so many gazillion options in the market. It's not that we have one hardware platform and one orchestrator. It's the choices which make things difficult.

Another problem that was also in the pain point list was the way VNFs are programmed. One would think that after five years the VNF vendors would actually have taken their code from the physical network function days, let's say from a physical firewall to take this example again, and revamped it in a way that it's cloud native, which means it's multi-threaded, it's using microservices, and it can actually scale up and down.
It's well prepared for a system which is an x86 system and has a lot of cores and sockets and so on. But actually it seems not everybody has achieved this yet. In this survey from the NOC group at ETSI NFV, 55% of the respondents said that their VNFs are actually using repurposed code. So basically you take the code from the PNF and you port it; in many cases it was already x86 code, so there is not too much effort: you adapt it to the libraries and so on, done. And less than 20% said that their code was really cloud native. So even on the VNF implementation side, we cannot assume that the application implementations are perfectly prepared, and that's only one of the components. So there is a lot of work to be done. What's good about this is that people understand how much work there is, and people talk openly about it, so things can be improved. And also there is really no way back. The kind of revolution that was kicked off in 2013 for the telco industry was kind of shaky in the first one or two years. Like, hmm, are people really going with that? But actually now the service providers have invested so much, the vendors have invested so much, there is no real way back. We can't say, oh, tomorrow we'll deploy 5G with a physical evolved packet core, we'll just buy a single-vendor thing, one huge block from one winner. That doesn't work anymore, because even the architectures for 5G are already aligning with virtualization. So these things have to be fixed. That was the reason for the other part of the title, the need to succeed. There is no way to fail, really.

So how are we helping? What kind of interoperability testing programs exist these days? First, I would like to point out one thing that we participate in happily: the ETSI NFV Plugtests. I think there is also a presentation from Sylvia from ETSI tomorrow, if I remember correctly, or from Pierre. So there are the ETSI NFV Plugtests, which provide interoperability testing campaigns. There have been three so far since 2017. These are confidential, so they have an engineering benefit: we go there to beta test our automation platforms and to network with different vendors in the space, and no results get published other than anonymized results. They focus on all of the components, and it's basically an environment where you can register your requests and then test for a few hours with any other partner and gain some knowledge out of it. The results look like this. In the left diagram, you can see that the blue line shows how many different test sessions were run: more than 1,000 test sessions in two weeks. And the success level was, if I interpret this correctly, between 30% and 40%. Maybe it's the other way around; if I'm mistaken, then you'll hear it officially tomorrow. So basically, there is a lot of multi-vendor network service interoperability testing, and it has been growing from the first to the second to the third Plugtest. The success levels that you can see on the right-hand side are almost 100% positive for the basic lifecycle management of onboarding a VNF, that is, downloading the image, and also the bottom three for instantiation, termination, and delete. So these are things that work in most cases. But when it comes to more advanced features, related to scaling for example, then there are way fewer combinations tried, and some of them also fail.
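One reason those scaling cases are attempted less often is how quickly the scenario space fans out once you combine the scaling method with the trigger and the direction, which the next part goes into. A rough sketch, with purely illustrative categories:

```python
from itertools import product

# Illustrative categories only; real test plans distinguish more dimensions.
scaling_methods = [
    "scale up inside the VM (add resources)",
    "add a VM/VDU to the existing VNF",
    "add another VNF instance alongside",
]
triggers = [
    "manual request via the orchestrator",
    "infrastructure metric (e.g. CPU load, bandwidth)",
    "VNF application metric (e.g. sessions per instance)",
]
directions = ["scale out", "scale in"]

scenarios = list(product(scaling_methods, triggers, directions))
print(f"{len(scenarios)} scaling scenarios per MANO/VNF/NFVI combination")
# 18 scenarios here, and each one has to be repeated for every 3-tuple under test.
```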
And that's an indicator of what I said before: when you come together for a campaign, you can only reach a certain limit. There is no way you can test everything, including the advanced scenarios, in two weeks. That's why ETSI is now moving to some sort of remote events, a different type of event.

EANTC has also been involved in the New IP Agency interoperability tests. The New IP Agency is a not-for-profit organization based in the US, with vendor and service provider members from the US, Europe, and Asia. The goal is to close the gap between the open source programs and the service provider POCs. We are publishing all results there. So whenever anything is positive and working, we publish these combinations exactly: what was tested, exactly which combinations were successful. And then vendors can also use those to go to their customers. I'm showing you results from something that is quite old, from 2015 and 2016, when we ran our first campaign. The reason I'm showing this old data is that some things are still valid, some things haven't changed, but also to show you how slow progress is. And okay, I can blame myself, but I think it's also something where the market really just needs time to get to more advanced scenarios. So we had a success rate of 65% in 2015 and 2016, which was already good, but it was for basic onboarding and lifecycle management. We started network service testing with orchestrators in 2016 and had a number of multi-vendor combinations that actually worked for really elaborate scenarios and which we could document in detail. We then had a major orchestrator interoperability test in 2017, last year, with seven participants. And when you really want to document transparent results, of course, a lot more effort needs to go into each combination before all of the vendors are okay with publishing it, because everything has to have really been checked off, actually been tested, and verified by the lab as working. So that was a good result, but it also provided a lot of learnings and findings.

These integration efforts are always non-trivial. It's not just, okay, you get one combination going and then the next combination will be easier. It's always an uphill experience to integrate these 3-tuples with different vendors. Specifically, the support for the scaling test cases varied and still varies very much. Unfortunately, there are many different types of scaling. There's manual scaling in different ways, there's auto-scaling in different ways. And the monitoring aspects, like why does something auto-scale, on which condition does it auto-scale, that's what makes the differences, and that's also what creates the number of different scenarios. And then the way of scaling out is also different. You can scale inside a virtual machine, just scaling the resources; you can add a VM to a virtual network function; or you can basically create another virtual network function alongside the existing one. So these are three different options, and then there are three additional options for triggering the auto-scaling. That's already a lot of testing. All of this requires automation; otherwise, there is no chance to get all of the conditions tested. And we solve this by connecting to the orchestrator from the northbound side, from the OSS perspective, to trigger services and to understand what's going on. And unfortunately, these northbound interfaces have not been standardized yet.
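Because that northbound interface differs from orchestrator to orchestrator, the practical consequence for a test framework is a thin adapter per product. Here is a minimal sketch of what such an abstraction might look like; the interface is hypothetical and does not reflect any vendor's or ETSI's actual API.

```python
from abc import ABC, abstractmethod


class OrchestratorAdapter(ABC):
    """Hypothetical northbound adapter used by a test automation framework.

    Each orchestrator product gets its own implementation until a common
    standard (e.g. the ETSI SOL APIs) is widely adopted.
    """

    @abstractmethod
    def onboard_vnf_package(self, package_path: str) -> str:
        """Upload a VNF package; return the package identifier."""

    @abstractmethod
    def instantiate_network_service(self, nsd_id: str, params: dict) -> str:
        """Instantiate a network service; return the NS instance identifier."""

    @abstractmethod
    def get_operational_status(self, ns_instance_id: str) -> str:
        """Return e.g. 'INSTANTIATED', 'SCALING', 'FAILED', 'TERMINATED'."""

    @abstractmethod
    def terminate_network_service(self, ns_instance_id: str) -> None:
        """Tear the network service down and release resources."""


class ExampleOrchestratorAdapter(OrchestratorAdapter):
    """Stub standing in for one vendor-specific implementation."""

    def onboard_vnf_package(self, package_path: str) -> str:
        # Real code would call the vendor's REST API here.
        return "pkg-0001"

    def instantiate_network_service(self, nsd_id: str, params: dict) -> str:
        return "ns-instance-0001"

    def get_operational_status(self, ns_instance_id: str) -> str:
        return "INSTANTIATED"

    def terminate_network_service(self, ns_instance_id: str) -> None:
        pass


if __name__ == "__main__":
    adapter = ExampleOrchestratorAdapter()
    ns_id = adapter.instantiate_network_service("nsd-firewall", {"flavour": "small"})
    print(ns_id, adapter.get_operational_status(ns_id))
```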
So for each and every orchestrator, we always have to create an adapter. That may be getting better with the ETSI SOL API standards, which are being developed right now and trialed in the Plugtests. We'll see if the market takes up this kind of solution. If so, that would be really great and would ease our life for interoperability testing.

So how do we reach multi-vendor interoperability and especially dependable performance? To do this, there needs to be something that people can understand and say, this is a well-defined set of tests that we trust have been executed. It's not something where you say it's a success story between vendors. It's something where you can say, this is relevant for my business, this is relevant for deployment, I'm going to take that result, and I trust that I do not need to repeat the exact same tests again. And that's typically the case when certification comes up. Certification is exactly this kind of trust statement. So the certification should reduce the testing efforts for each individual service provider, so they don't have to repeat everything. It should speed up deployment, because if the certification has been done before, things can be deployed faster. And it should improve the quality of these multi-vendor solutions, because if people get used to testing more frequently against a well-defined set of test cases, and not against whatever they feel like doing that day, then the quality will improve. Our idea is to run this certification on behalf of vendors, with service providers guiding the process and receiving the results, and also with open source projects being integrated. Because obviously certification has to take place with the commercialized solutions that people deploy. Service providers, if you take AT&T and China Mobile away, rarely deploy, how do you say, plain vanilla open source solutions. Almost nobody just downloads an open source project, installs it, and runs production services on it. People want to have supported stuff. Even if it's derived, as in most cases, from OpenStack, they still want to have a supported solution, which is commercial. But if there is anything that can be fed back to an open source project, because we find a problem that recurs multiple times, then of course we need to provide feedback and get it solved at the source.

So that's the testing integration pipeline. Typically, open source testing comes first. OpenStack is the most mature part of it all, and that's because they started testing the earliest. Then there is commercial ecosystem testing: each vendor takes the code, branches off, adds something, does their own competitive differentiation, and tests again. Then there are industry-wide test programs; that's where, for example, the New IP Agency or the ETSI Plugtests are located. And of course, in the end, there will always be operator-led individual testing. But the goal is to remove all those repetitive things, remove all the basics from the operators' individual testing, to really focus on things that validate the operator's individual differentiation. So the integration level increases: OpenStack is tested first, then the ecosystems, OpenStack plus the commercial orchestrator from a single vendor, then maybe the industry-wide test program tests in a multi-vendor environment, and the service provider tests the most specific individual configuration. However, the effort also increases.
On the left-hand side, we have only one open source code base from each project. On the right-hand side, we have hundreds of service providers worldwide. So the further to the right we start testing, the more expensive it becomes for the industry. Upstreaming tests, as far as possible, reduces the cost and the effort. And that's exactly why service providers are interested in these kinds of programs: because they hope to upstream things. Upstreaming also means that we can basically hand over test plans. So when we find some problems in, let's say, the New IP Agency program, and it looks like they are related to OSM or ONAP or OPNFV or OpenStack, then we basically upstream test plans through standardization or through direct peering and allow those projects to integrate these testing efforts there, so that we won't see these kinds of problems in the future. So it's kind of analogous to the code integration pipeline; that's why I'm introducing this testing integration pipeline.

Now, what would be covered in such a certification program? Obviously, we're basing it on the interoperability testing standard. The first thing one can cover is the two-party testing, the simple one, which is infrastructure with a VNF, just to see if the basic interoperability between OpenStack and the virtualized network function works. The second part is the network services certification, which is a 3-tuple, as I mentioned in the beginning, and also includes the orchestrator. Of course, one could think about how to integrate the OSS and the service orchestration and multi-domain orchestration, but that's yet another level of integration that we'll keep for the future. What is part of this kind of certification framework? Of course the primary lifecycle: image management, instantiation, teardown, and termination. So, basically, the basics. And one would be surprised how many problems still exist with these basics when it comes to different hardware attachment options, different types of descriptors being used, and so on. More advanced tests are in the network service lifecycle management, especially scale-in and scale-out, the operational status of a network service, healing, and termination. And I think the most complex part of this is really the monitoring of performance, which needs to be handed over via these different interfaces for an orchestrator to understand when it should scale out or scale in. So it needs to understand from the infrastructure management: is there maybe too much load on the CPU, has the designated maximum bandwidth been exceeded, so I need to scale out? Or the VNF maybe says: I'm really busy and I have too many sessions per instance, please help me scale out. This kind of information flows between the functional blocks, and it needs to be tested for interoperability as well. So this is basically the focus of the certification we're looking at.

And one additional aspect beyond the functional part is, of course, performance. Unfortunately, performance can only really be tested per application type of the virtualized network functions involved. So when we go for performance, we can't just say all VNFs are the same. In normal interoperability testing, we say, oh, we don't care: whether it's a virtual EPC, a virtual firewall, or a virtual router, for the lifecycle management of instantiating and tearing down, they all look the same.
Maybe some of them are more complex internally, okay, a different descriptor, but I don't care what they're actually doing. However, for performance, we need to actually look into the application layer, because otherwise there is no way to test it. For a virtual router, it's very easy. And a virtual firewall is almost like a virtual router with some limitations on the packets that it forwards; looking at it in a very oversimplified way, I can just send IP packets. But if I want to test a virtual EPC, 4G or 5G, I cannot just simply send packets to it; it just won't be ready to accept that. I need to register a UE, a terminal, I need to set up channels, I need to define slices, and so on and so on. So there is a lot of application layer preparation needed. And then the test cases for performance are also very different, because a router mostly looks at throughput performance; it's a data plane game to a large extent. But a route reflector looks only at the control plane: a route reflector sits there and has zero packet throughput, it only peers with BGP instances. So it's almost the same implementation in a different use case, and the test cases already look very different. A session border gateway, which transcodes voice over IP or whatever, has yet again very, very different use cases. And so on and so on.

So there are basically these nine different areas of VNF types that were identified in the survey I quoted before, from the ETSI NFV group. Most people are really concerned about the virtual EPCs and the IMS parts, both parts of the 4G and 5G core. So that's really where a lot of testing will happen in the future. There is a substantial, useful number of people who also use virtualization for BNGs, broadband network gateways, for virtualized CPE deployments in business VPNs, for deep packet inspection, firewalls, and so on. And the topic of SD-WAN is also growing. From EANTC's perspective, we're covering almost all of these application areas. And when we run performance tests, so far we've been doing this individually, in single-vendor environments. Mostly the reason is marketing: in a single-vendor environment, we can easily review the performance, have a test plan, agree on how to execute a test, and understand, if something doesn't work as expected, whether it's the problem of the vendor or of our test equipment, because there are only a few parties involved. In a multi-vendor environment, public performance reports are way more difficult, because there are no pass/fail criteria. If a vendor comes to us and says, I want you to test my VNF performance, and we choose three, four, five different infrastructures, and it works better on one infrastructure than on another, it's kind of difficult to define whose fault that is. So multi-vendor performance testing is not only a question of doing it and taking on the effort, it's also a question of how to analyze and publish the results. That's one of the challenges and goals for 2019: to expand the pure NFV interoperability program into multi-vendor performance testing at larger scale. And in this area, we're working pretty closely with the Intel Network Builders program, who are very supportive of us.
We're also working with the Broadband Forum and VMware there to create a lab especially for multi-vendor performance and interoperability testing for fixed network, fixed access, and edge network scenarios in Europe, where we hope to support not only the mobile side but, in this case, actually the fixed virtual BNG and other related applications. So that's basically all from me for this short overview. If there are any questions, I think we have two or three minutes for questions. Can you switch on the microphone, please? No, not you, but the guy in the back. OK, try again.

Thank you for the presentation. About the certification, you mentioned ETSI. I'm wondering what you see about, you know, we have the OVP program in OPNFV, which is doing the verification. We have the whole toolchain to do the whole certification in an automated way. So what's your perspective on OVP, and do you think that we could perhaps do the certification program together, using the tools that OPNFV provides?

Yes, definitely. So thanks for mentioning that. We've been looking at the OVP program. I think there had been plans for it for a while, and we're happy that it seems to take off now. And as I said, it's not about competition or anything. We definitely want to set up this testing pipeline, and I see the OPNFV certification testing in this open source area, where we can basically base our test efforts on this pre-integration, like in this pipeline. I hope this works out, and I'd really welcome intensifying our communication with the OPNFV group. Any other questions? OK, I think I've kept you long enough. Thanks very much for joining. Have a great evening in my hometown of Berlin, and I hope you enjoy the rest of the week. Thank you.