So let's begin. I'm Toby Ford from AT&T. I'm Chris Price from Ericsson. We're going to talk about a few things: an update on AT&T and Ericsson's collaboration since the last summit, an update on how we've been going, and then a few things about OPNFV, the integration of OpenDaylight and OpenStack to create one of the key services that we're doing in the telco space, and then some final steps. So to begin, I'm going to remind everybody about why we're here and give you some context setting. For AT&T, right now we're in a real struggle. The demands on our network and our services are growing dramatically. Over the last seven or eight years since the iPhone was introduced, we've had about a 100,000% increase in network traffic. That growth cannot be maintained without us really addressing the cost basis and the efficiency that we operate at. We have to make changes. And that, in part, is about NFV and the transition from very specialized hardware to more of a disaggregated, open, commodity, software-based sort of approach. So that's an important part of it. But also an important part of it for us is that we need to be able to make new services much faster. We can't be spending 18 months talking about it, 18 months making a business case for it, and then however long actually building it and testing it. That doesn't work. Some of the things that we've introduced recently took us, in some cases, six years from conception to implementation. That's just not tenable. And we now have new competitors nipping at our heels that are doing things in a much different way and at a much different cost basis. So we have to move faster. One of the pieces in this picture of how we change that I want to focus on today is the delivery of a new service. 
It's about integrating concepts of the network together with what we think of as infrastructure as a service and platform as a service with OpenStack, and actually making something real that way. We're going to talk about one of the services, one of the first things that we're doing, which is network on demand and VPN as a service. People often say, when we bring the network into the equation, that it's just not possible. How can we possibly run 100 terabits of traffic over x86 boxes? In my view, it's inevitable. We shouldn't be thinking per se about what we can do today and what we're limited by, but about what Moore's Law, and the few more cycles still left in Moore's Law, has available for us in the next five years. In 2020, will you say the same thing about x86 boxes? I don't think so. Today, 100 terabits of traffic, I think, can be run on top of about 50,000 servers. In that same timeframe, I think we're going to see that number go down dramatically. And then think about it this way: when we talk about running our current network on 50,000 servers, maybe with scaling we stay at that level. An Amazon data center today is 80,000 servers, and they're deploying something like 100,000 servers a quarter. So clearly, we have to find new workloads. We don't have enough workloads for the demand that we have to compete with those types of entities, like a Google or an Amazon. It's a sort of recursive circle: we have to be able to make services faster. So this is what we've all been talking about. A lot of our partnerships, like AT&T's strong partnership with Ericsson, are all about how we make this real. Over the last two summits, starting in Atlanta, we put this notion out there that we needed help to make NFV real on top of OpenStack, and we presented a number of items that needed to change. 
Then at the Paris summit, we actually demonstrated one of the first VNFs running on top of OpenStack, and we identified more gaps and more solutions that would come up in the interim. Today is going to be the same thing. This was a chart that we started with back in Atlanta. We laid out some of the key gaps we had with OpenStack back then that needed to be worked through and solved so that we could run VNFs. In the interim, we made some good progress with the Juno release, and since then we've made more progress with the Kilo release, solving some key problems. Certainly one of those areas is around networking: being able, over these cycles, to process more throughput and more functions, more service chaining through the network this way. Also, many people believe the carriers have a unique resiliency or reliability need. I don't actually think of it as any different from a bank or a large e-commerce site; they have the same expectation that their service is running all the time and in a highly performant way. But there are certain notions of high availability that the carriers have, and we needed to fill the gaps in that area. I think we've done a good job, especially in the scheduler, addressing some of the unique affinity needs that VNFs have. We've also added a number of things to this picture around security, restorability, and upgradeability of this environment. Now that said, over the last year there has still been something missing. There are aspects and other software needed to actually build a platform for NFV. That's where the OPNFV effort has come to fruition, and it's made great progress augmenting this picture with aspects of networking, deployment, and other such things to make it a more complete platform. That's what Chris is going to talk about. 
So I'm going to talk a little bit about OPNFV. I guess hands up if you haven't heard about OPNFV — this summit, we've been sharing our story. Thank you, Ian. Essentially, coming back to what Toby was talking about, at the end of the day what we want to do in OPNFV is take the foundation pieces, work them through, figure out deployment challenges, figure out networking challenges, and build out so that the platform becomes suitable for large, country-wide telco networks. The real idea here is a simplification of the management and utilization of those networks. If we can't make it easier, we're not really helping solve the problem. From an OPNFV perspective, you can see on the right what the OPNFV project does, and on the left the basic architecture. The basic architecture doesn't change. We're not here to reinvent anything. We're really here to try and bring things forward as quickly as possible so that we can start to get our applications running effectively on this type of platform. We do three main things. Build and integration: we try to get a platform together that has the components we feel we need in an NFV-type deployment. Deployment and testing: we run continuous deployment — we have 21 labs connected across the globe already for bare-metal deployment. And we work with new requirements and features: high availability, fault management. You've heard a lot of this through the summit as the OPNFV folks have come out and said, this is what we need to do, this is how we're trying to solve it. Really, what we want to do is get those into Liberty, re-consume them once they're in the Liberty release, bring them back into the platform, start deploying and testing and running our apps on that, and then iterate again and again until we get the types of performance and behaviors that we really want. It derives very much from the ETSI framework. 
For anyone that knows the ETSI NFV ISG reference architecture, this is essentially a representation of it that we're using in OPNFV. The focus for us in the short term is the NFVI and VIM layer. We just want to get the platform running. We want to get our methods and our procedures in place, figure out how to work upstream, get our house in order if you like. Then we'll start to address other things in the stack, other areas that need some acceleration or development. But for now, we're very much focused on the platform itself. Status to date: we have release one coming out in the coming weeks. It's called Arno, named after the river in Tuscany. It deploys an OpenStack- and OpenDaylight-based control plane with KVM and OVS, of course. We have two deployment tools supported, Foreman and Fuel, and that gives an indication of what we're about. We're not about KVM being the right answer. We're not about OpenDaylight being the right answer. We're about facilitating components to come in and help find the right answer in the community. The SDN controller, for instance, is one area where we expect to have a number of controllers coming in. We have OpenDaylight as part of release one. ONOS is part of release two. OpenContrail is part of release two, already intending to integrate with the platform. We have 21 integrated Pharos labs — Pharos is what we call our hardware project. And moving forward, we have started to establish something we're calling the compliance and certification committee. These folks are going to look at what it actually means for us to have a platform. We can cobble stuff together; we can say now it works fine, now it doesn't work fine. But at the end of the day, OPNFV has to stand for something: how do we establish a trademark, how do we establish what it means? That's what this group is going to be working on. I'll pass it back to Toby. 
So one of the example VNFs, at a very simple level, that we're working on today — we're actually in field trials with it right now — is something that I think of as VPN as a service. Essentially, it's making a dedicated connection that has reservation and separation for a customer, and expanding it to include any of their offices or any of their clouds, and then possibly also integration with third-party clouds. This type of service we've done in the past with typical hardware routers, and now those routers are changing to become more virtual: routers that can be CPEs and PEs and can actually run on top of a cloud. When we look at it, for AT&T, we're doing a lot of work setting up and pre-configuring the facilities and the connectivity ahead of time. We're putting a lot of investment into laying fiber, especially to multi-tenant buildings, and having them ready for this type of service. When we can do this in a more virtual way, we can spin up resources and different types of services quickly as this evolves — hopefully firewalling, different types of security services, maybe CDN; these types of things can be added easily. So that's an example of a service that we're looking at. Today in our networks we use MPLS pretty extensively to do this; we essentially sell MPLS as a service today. What we're looking for is to extend that notion from our clouds into our customers' data centers. Chris will talk about the specifics. Yeah, from a conceptual perspective, what we set out to achieve when we started working with AT&T was to figure out, okay, how do we deploy a BGP stack with MPLS capabilities on top of an SDN solution in a data center? You want to be able to basically abstract it off the physical switches, so SDN forms a key component of that. 
We put together essentially a demonstration, or a proof of concept, and have since then been bringing that out in the upstream communities, trying to expose it to the community for consumption and bring it forward to OpenStack, so that we have a mechanism to start to deploy VPN services on top of layer 2 tenant networks. The way we've done it today, because we only have ML2: if you look at the orchestrator function on the right, we talk to OpenStack and use ML2 to set up the tenant network. Then we have to bypass OpenStack and go straight to the ODL SDN controller via REST to spin up the VPN services. As you bring up tenants in the network, the controller is notified via OpenFlow, the tenants form part of the network and are chained into it, and we advertise via BGP to the data center gateway that these tenants are now available to be routed to. That's, at a high level, how we bring it together. Yeah, the key being also that the routers themselves along the way are running on OpenStack clouds. Okay, so let's walk through that process. Again, say you're coming into OpenStack and you want to deploy a new service. We're starting to work with OpenStack — we have a review tomorrow morning on a BGP VPN blueprint that we're trying to promote so that we can actually bring this into the community. If you start from the top: I want to deploy a new service. Okay, we create a network, we create a subnet, and then we're at that ML2 level. Then we need to create a VPN. The VPN is a new object. What you need to identify here are the import route targets and the export route targets for that tenant, potentially a route distinguisher — but that's a longer discussion — and then your WAN VPN name and your MPLS network and router subnet construct. After that, you create a port. Then you essentially attach that port to an image and boot a VM. 
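The steps just described — network, subnet, VPN object with route targets, port, VM — can be sketched as a small data model. This is an illustrative sketch only: the class names, fields, and the `deploy_vpn_service` helper are assumptions for the example, not the actual blueprint API under review.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical objects mirroring the talk's workflow; not the real Neutron API.

@dataclass
class Network:
    name: str
    subnets: List[str] = field(default_factory=list)

@dataclass
class BGPVPN:
    # Route targets control which routes a tenant imports and exports;
    # the route distinguisher keeps overlapping tenant prefixes distinct.
    name: str
    import_targets: List[str]
    export_targets: List[str]
    route_distinguisher: Optional[str] = None
    networks: List[str] = field(default_factory=list)

@dataclass
class Port:
    network: str
    vm_image: Optional[str] = None  # set when a VM is booted on the port

def deploy_vpn_service():
    # 1. Create the tenant network and subnet (the ML2 level).
    net = Network("tenant-net", subnets=["10.0.0.0/24"])
    # 2. Create the VPN object with its import/export route targets.
    vpn = BGPVPN("wan-vpn-1",
                 import_targets=["64512:100"],
                 export_targets=["64512:100"],
                 route_distinguisher="64512:1")
    vpn.networks.append(net.name)
    # 3. Create a port, attach an image, boot a VM.
    port = Port(network=net.name)
    port.vm_image = "vrouter-image"
    return net, vpn, port

net, vpn, port = deploy_vpn_service()
print(vpn.import_targets)
```

The point of the sketch is the ordering: the VPN object carries only routing identity (targets, distinguisher), while attaching it to a tenant network is a separate step, which is what lets the same VPN span networks in different regions.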
When you boot the VM, the port will notify the controller via OpenFlow that it exists. The OpenFlow controller will then push rules into the network to establish that port as part of the tenant network. Then, from the virtual router, you will advertise BGP routes to the gateway in order to route to it. So the process of enabling this is not so different from what we do in OpenStack today; we just have another layer in between, and then there's the BGP stack essentially coupling us directly. The plugin architecture — I'll go pretty quickly through this — but in general, the ML2 plugin exists. The L3 VPN plugin is what's under discussion at this summit to actually try and bring in. The way we're implementing it is in OpenDaylight, so we have the ODL mechanism driver. We need to add layer 3 support for the L3 VPN plugin to the ODL mechanism driver, and then it will drive down to the OpenDaylight controller so that we no longer need to bypass it and go directly to the northbound interface of OpenDaylight to get the VPN services in place. We can do it directly through OpenStack when we create the network. If you look at how this works: essentially your operator admin will create the VPN and the external networks, and then the tenant will create a router, connect the router to the external network, and then, as you create the internal network, expand it or contract it, the router will essentially publish those endpoints using BGP. I won't go into the details of the API. I'll talk us through this one. Sure. So in the end, in each of these regions, whether they're in our data centers or in our customers' facilities, we've essentially replicated virtually what we would have done before with PEs and CEs in the MPLS scheme. Now a little more technical detail on the OpenStack–ODL integration, from the pure mechanics of software and how we're going to be putting it together. 
On the OpenStack side, you have the API layer and then the plugin layer. Under the plugin layer, we have a mechanism driver, and the mechanism driver is there to basically translate the plugin calls directly into what OpenDaylight expects to see. On the OpenDaylight side, we have something called Neutron services. OpenDaylight natively exposes RESTCONF interfaces, and they're not natural for OpenStack to consume, so we have a Neutron services function which essentially exposes an OpenStack-compliant interface, which then talks down to the BGP, FIB, and layer 2/3 services, including service chaining as needed. On the OpenDaylight side, what we've been working on over the last six months is essentially the red spots in the middle there: the VPN management, the Next Hop Manager, the multi-path BGP components, the FIB service, and the label management service. As mentioned, today they're invoked via the northbound interface of OpenDaylight, but we hope to wrap them up under the OpenStack Neutron service once we get that capability into OpenStack. Until OpenStack can speak to us, we can't really listen anyway, so we have to do them in parallel. Those are the planned activities for moving from the OpenDaylight Lithium release to the OpenDaylight Beryllium release: we'll essentially be formalizing this so that it just happens naturally as part of integrating with OpenStack. I think that's all I have. Yeah, so that's an example of one type of VNF, and how we would go about integrating it with OpenDaylight and OpenStack. So I'll talk further about one of the next frontiers of work that we have to do in this space. Traditionally, in the telco world, there was a lot of infrastructure — software infrastructure, OSS infrastructure — around this thing called policy. Essentially, policy means: I have this constraint or expectation, and I want to make sure the infrastructure lives up to that expectation. 
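The translation step the mechanism driver performs can be sketched as: take the dict the Neutron plugin hands it and reshape it into the JSON body a northbound OpenDaylight service would consume. The URL path, payload keys, and function name here are assumptions for illustration, not ODL's actual RESTCONF schema.

```python
import json

# Hypothetical ODL endpoint; a real deployment would use the controller's
# actual RESTCONF config path.
ODL_BASE = "http://odl-controller:8181/restconf/config"

def translate_vpn_create(neutron_vpn):
    """Map a Neutron-side VPN dict onto an ODL-style REST payload.

    This mirrors the mechanism driver's job: Neutron-flavored names in,
    controller-flavored names out. Field names are illustrative.
    """
    payload = {
        "l3vpn": {
            "name": neutron_vpn["name"],
            "route-distinguisher": neutron_vpn["route_distinguisher"],
            "import-rts": neutron_vpn["import_targets"],
            "export-rts": neutron_vpn["export_targets"],
        }
    }
    url = f"{ODL_BASE}/l3vpn:vpn-instances/{neutron_vpn['name']}"
    return url, json.dumps(payload)

url, body = translate_vpn_create({
    "name": "wan-vpn-1",
    "route_distinguisher": "64512:1",
    "import_targets": ["64512:100"],
    "export_targets": ["64512:100"],
})
print(url)
```

Once the L3 VPN plugin lands, this translation would live inside the mechanism driver, which is exactly what removes the need to call the ODL northbound interface directly.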
Across a telco and its VNFs — any of the network functions — there are a lot of these types of expectations and rules that we want to make sure are complied with. On one end, they can be things around performance or availability: some number of nines, or some amount of throughput through a particular flow. They can also be very much about security, assuring some level of — you know, is the thing encrypted, is it authorized, who can do what on these kinds of connections. Those expectations then get processed and, in a sort of top-down way, get manifested in some level of configuration or state-monitoring or configuration-monitoring system that maintains that expectation or policy. That has been a long tradition in telcos, and now we have to take that and make it real within OpenStack and within the OPNFV context. A lot of work still has to happen to make this real, and that's why we're quite excited about something like Congress, which is helping — not just with OpenStack, but with other things — to create a definition language for this policy and then help us propagate it into the enforcement points of the systems. So that's an exciting effort. There are also a number of other interesting tools out there, and there's a great debate, right? Can you just add policy after the fact — go into all of the different systems and make sure things are as you expect? That's one way, and very likely the way we have to do it for some time, because it has been an after-the-fact issue. Then there are other systems — and I'm quite excited about what the folks at Apcera have done in this way — where they started with policy at the beginning as a first-class thing, as the basis of the whole system, and then everything in container management and PaaS was built around policy. 
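The "after the fact" style of policy checking can be illustrated in a few lines: declare an expectation, then scan the current state for violations. This is a toy sketch in the spirit of what Congress does with Datalog rules over state tables; the table shape and field names here are made up for the example.

```python
# Toy state table: connections observed in the infrastructure.
connections = [
    {"id": "c1", "encrypted": True,  "authorized": True},
    {"id": "c2", "encrypted": False, "authorized": True},
]

# Policy expectation: every connection must be both encrypted and
# authorized. A Datalog-style engine would express this as a rule;
# here it's a simple predicate applied to the state.
def violations(state):
    return [c["id"] for c in state
            if not (c["encrypted"] and c["authorized"])]

print(violations(connections))  # -> ['c2']
```

The policy-first alternative discussed next inverts this: instead of scanning for violations afterward, the system refuses to create state that fails the predicate in the first place.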
So I think this is another tidbit from the telco world that we need to digest in the open world. Yeah, so just as a summary: we showed some of the aspects of what it takes to deploy an L3 type of VPN as a VNF, some of the enhancements needed across the continuum of OPNFV, and then introduced the policy concept. For further next steps, we need to continue to enable VPN integration and distributed L3 forwarding. What we're doing now is very much a first step. It's an enabler, it's a proof point. We know we have technical debt to carry forward that we have to keep promoting and bringing out through the community. We have an OPNFV proposal, which is really going to look at that over the longer term. We have the OpenDaylight VPN Service project, which is also looking to evolve and bring forward those technologies. And in OpenStack, we have our first blueprint up there that we're working on with a number of others in the industry, and we hope to continue to find the right parts of that networking component to bring up into OpenStack so that we can get what we need out of the tenant networks. Actually, one thing I really like about the OPNFV website and wiki is the community pages, which bring together and focus — let's say for OpenStack developers — what needs to happen: a list of items that are the focus points in the OPNFV community that we're trying to upstream and make real. They also help developers with guidance about how to do that for each of these projects, because it's somewhat unique across all the projects we're talking about. Giving that guidance, we think, is a helpful aspect of OPNFV. We hope it is. So, a little bit about what we're doing then. You can see the darker areas are activities we're working on with customers and collaborators in OPNFV which we're targeting to bring upstream into OpenStack. 
There are other areas which we haven't really exposed as OPNFV requirements yet, but which we know from being a telco trying to do what we're doing. For a number of years now, since before the inception of OPNFV, we have had activities that we're trying to bring forward. As we bring them through, of course, we'll be consuming them back into the open platform, demonstrating them, and sharing them with others. But just from the perspective of where we've come from, you can see the areas that we've been focusing on and those which are exposing themselves through the OPNFV channels as well. We hope to see all of this coming out through OPNFV as we get it into OpenStack and start to consume it downstream again. I'd just like to give a little overview as well. Ericsson is one of those application vendors: we make VNFs. They have traditionally been built as monolithic functions sitting on a big piece of metal, and we're working extremely hard to bring them into the cloud environment. As you see, the requirements that come from us arise because we're trying to do this, trying to walk this path, and we need to work with the OpenStack community to help us understand the best way to solve it; then we take that back to our application developers. As part of that journey, we've been downstairs working on demonstrating, showing what we're up to and where we are. Some things are just coming through now; they don't yet have the performance and resiliency we need. Others are far more advanced. There is a list, and I won't flick through it, but for anyone interested in these types of things, on the web page there will be a list of all the demos we were showing, and you can click through to the videos and have a look if you'd like to see what we're up to. All right, I think that's it for today. This slide is from another presentation — sorry about that. Anyway, any questions for us? Yeah. Good, Mike, please. Yeah, it'll be great. 
So you gave the example of Layer 3 VPN as a service. Have you looked into Layer 2 VPN as a service, and do you intend to, in the future? That hasn't been our focus or our area, but we know it's of interest to others. Yeah, I mean, for America, yes. So is the goal eventually to really convert each of the existing services that you have on physical networks today to...? Yeah, the VNFs that we're focused on right now — I'll give you some sense of it. There's certainly a lot of the voice services, SIP, which is a challenge in and of itself, around IMS and the universal services platform; that's one area. There's also the packet core aspect of moving bits between your phone and others' phones, and between your phone and the internet, and so on. That whole system is probably our highest priority right now, and alongside that there are a lot of things that are really applications that augment the various mobility services. Then we'll see what happens with DirecTV, but certainly the TV space is going to get cloudified here as well. Yeah, so the question is why would we do some and not others? In the telco world there are business cases to be had and done, so we have to really justify what we do, and it's non-trivial. There are some that are really obvious that would help us, and others maybe not so obvious, so there's that business prioritization. Then for others there's still skepticism about whether it's doable or not. Some areas are clearly doable with this type of VNF model, and in others there's some skepticism, so that also impacts our priorities. I think it's also difficult to make the judgment call that, okay, virtualizing everything and putting it in general-purpose data centers is going to answer all of our questions. 
Traffic growth is not slowing down, and I don't think it will slow down. Maybe all the initiatives and innovation occurring in the silicon space come through, and we start to see some silicon options that we would actually like to take advantage of that will help accelerate again. You never know; it's always a balance, and you can't just say this is how it is from now on — that's never really been an option. So, a question about SDN controllers: I've heard in other talks mention of controllers at different levels, like a global controller or a local controller. When you talk about adding support for other controllers in the second release, are you thinking of them playing different roles, or of multiple controllers that could all play exactly the same role? How do you envision that? Good question. Thank you. You can go to the back of the queue. So, all controllers are not equal, right? From an OPNFV perspective, right now we're just doing ML2; we don't have a lot of stuff in the OPNFV platform right now. We've really used it as a developmental way to get ourselves off the ground, more than to say here's the answer to everyone's prayers. 
Moving forward, we hope to create an environment where we can bring in different controllers, where those controllers can essentially show how they solve problems in the network. OPNFV at a macro level needs to look at, for instance, how it peers with other data centers, and how it aggregates into the data center from a wide-area aggregation network. This is the use-case level that we really want to be looking at now. I can use different SDN controllers to solve those problems in different ways, and then I can actually see how effective that is at delivering what I'm trying to deliver. And further to the point: an application is not equal to any other application, and some applications might like one controller better than another, because they happen to be architected in a way that takes advantage of the benefits of that controller. So there's no winner and loser; there is equality at a macro level, but if you try to measure equality at a micro level, you start to make everyone look exactly the same, and that's not what we want to happen. I'll answer the question, and then feel free to add. So, since we created this term — this is too high for me — to add to what Chris was saying: the reason we came up with local controllers is that we're finding there are network elements diverging into different architectures, which causes controllers to be very specific to them. For example, you have a virtual router in the hypervisor, and we're finding you need a specific type of controller for that. When you start doing leaf-spine, you might have a different type of controller for that. And I can even go on: we're actually working on an open ROADM — this is the optical world — and there are controllers that can be specific to that. That's why we started using this term, local controller; and then what we call the global controller was really OpenDaylight. 
The point is you've got all these controllers, and then you have all these different industry open-source controllers that are all saying they're going to be the controller. Our view, from an AT&T point of view, is that we are, for good or bad, using all these controllers for different reasons, and we're noticing they come from different sweet spots, different types of core competency, if I can call it that. They all want to be the controller, and to us it's not obvious that they can be. That's why we started doing this whole global-and-local split, and we'll see in a year or two if it really can all merge, or whether you just need these specialized local controllers. From experience, specialized functions do a specialized job much better than a general function. So yeah, horses for courses. Well, the telco world and the IT world both have no shortage of overloaded terms. Yeah, Prakash from Futurewei. Just to get away from the SDN debate, to the policies: in policies you've also got fixed networks, mobile networks, wireless networks — different kinds of networks. So have you thought about — what's your take on policies from the point of view of different types of networks, mobile networks, fixed networks? How do you want to address that, at least looking forward? Well, one part of it is that I don't like redundancy, and I don't like having two or three different domain-specific policy managers. I'd like to see something that is as common as possible across the domains. But clearly, whether it's wired access or wireless access, they're going to have some unique aspects when it comes to what the expectations are and what's possible, and when those expectations are manifested in policy, they'll be different; there will be some level of domain specificity in that realm. Thank you. Can you expand a little bit on what you have in mind from a telco perspective for VNF HA, as well as what we have in mind from an OPNFV point of view? Yeah, VNF HA is one of my favorite topics. Today, many 
of the VNFs show up coming from a vertically scaled mindset, because they were trapped in a box, and that box had to be vertically scaled to some extent. Now, that's probably a bit too harsh, because many things, like routing, were among the first things that were truly horizontally scalable, so it's not entirely true. But a lot of the VNFs that we've worked with come to this picture with a bias toward vertical scaling. Getting them to break out and be more horizontally scaled, into what I call the cattle model, is a trick; we're having to do a lot of work to convince people to go that way and to rely more on application-level diversity across many sites too. Now, there truly is a difference between a large website that's distributed across many locations and access-type services, because typically we don't dual-home the access to your house. There is a single point there, and in those cases there's a limit to what we can do when it comes to true horizontal scaling. So the circle still goes back to some level of making that one access point as resilient as possible. There's going to be a mix, but the ideal is obviously to make this as cloud-like as possible. I'll chime in and add that, at least in my experience over the last 12 to 14 years, when we've been building applications, we've been building them horizontally scalable anyway. The challenge is that we've been building them horizontally scalable on proprietary platforms, and the mechanisms that we use couple to the hardware, so you still end up with vertical scaling even if you're horizontally scaling. The mechanisms that you have been using don't fit into a cloud architecture, and from that perspective, even the horizontally scaling applications which are capable of doing that still need to learn to adjust to the capabilities that are available in a cloud environment — or you have to deploy your middleware as a PaaS over the cloud environment, which adds 
overhead, which is unnecessary. It's a journey that we have to take.

Thank you. I know this was a layer 3 into Neutron discussion, and you talked about the inter-data-center layer 3 VPN as one of the VNFs. Are there other VNFs you can share with us that are a priority, in terms of the ones you expect to be deploying?

Yeah, like I was mentioning before, anything related to the network, L3/L4-plus kinds of functions such as firewalls, load balancers, IPS and IDS, is fair game for being a VNF, and many of those have already been done that way for a while anyway. What I think is interesting is how they become part of the SDN service chain, and maybe reduce the overhead of packet processing in many locations: finding the right balance for the number of places where you do label switching versus opening the packet up and doing something with it. That's the opportunity of the future in that space. But obviously, in the telco world, the packet core and all of the 3GPP kinds of functions, the IMS, the USP, all of those things, are fair game for this space as well, so SIP-based traffic too. And in our world, the TV space, encoding and moving content around, that whole space as well. For us, mobility is a priority, and getting more value out of our wired assets is high up there. We have a new toolkit that we haven't mastered; it's going to take a little bit of time.

Yes, a more general OPNFV question. Many different projects were discussed this week, and through the sessions I learned that the level of achievement is quite different between projects. On the other hand, you're going to ship the first release, Arno, soon, and there are going to be periodic releases. My question is: what are the relations between the projects, the releases, and the requirements? Will each release list which projects are included? And what is the life cycle of
the project? And what is the goal of a project: is the goal to get it included in OpenStack, or to include it in your release?

I'll answer that holistically, and hopefully you can derive the answers from that; if there are any gaps, let me know. We have this concept of a requirements project, and a requirements project is really there to address a need we see on the platform. The requirements projects have the task of working upstream with whichever upstream community helps them solve those problems, and our requirements projects are going through their first iteration. We work with upstream communities, and in some cases the way we've approached a problem fits really smoothly into those communities; in other cases we've approached a problem in a way where we need some feedback, and then we have to iterate, because maybe there's a better way to approach the problem and its solution. As you say, some have been extremely successful, some not so successful. Again, we're not even eight months old, and our requirements projects are doing this for the first time. After this activity we intend to regroup, come together and discuss what was successful, what wasn't, and why it wasn't, share that information with each other, and then try to find a more successful model as we move forward, constantly iterating and improving.

What a requirements project should deliver is the ability to execute a capability on the platform. Requirements projects really intend to work in upstream communities so that we can consume that work and fit it together to form an end-to-end use case. If I were to start a requirements project that said, "I need to be able to press a button to turn this light green," that project would probably last one iteration. If I were to start a requirements project that said, "I want to deal with high availability in an OPNFV architecture," I would expect that project to last a long, long time as we iterate and as we improve,
and so how long a project lasts really depends on its scope and on how quickly we are able to arrive at the desired state. Even then, the project might not disappear, because maybe those usability concerns start to work up the stack rather than down the stack. It's difficult to claim a project would disappear at any given time; if it's an important area, it's an important area from now on, if you like.

Are you actually taking this L3VPN to production, and will it co-exist with your current L3VPN service, so that you have a brownfield-type solution where some endpoints come in on your existing L3VPN and some endpoints come in on the new one?

Yes, this is something we were putting out there, and yes, we have a brownfield problem, there's no doubt. There's going to be a long, arduous process of transitioning from old services to new ones this way, and one of the bigger problems we have to deal with is the co-existence of old platforms with new ones.

All right, I think that was a lot of questions. Now you're getting in the way of beers. Yeah, exactly. All right, thank you, everybody. Thank you, everyone.
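The vertical-versus-horizontal scaling point in the VNF HA discussion above can be made concrete with a back-of-the-envelope availability calculation. This is my own illustration, not from the talk: the figures (a single "three nines" box versus three independent "two nines" instances) are assumptions chosen for the example.

```python
# Illustrative arithmetic for the "cattle" model: a pool of n independent
# active-active instances stays up as long as at least one instance is up,
# so pool availability is 1 - (1 - a)**n.
# The availability numbers below are assumptions for the example.

def pool_availability(a: float, n: int) -> float:
    """Availability of n independent active-active instances, each with availability a."""
    return 1 - (1 - a) ** n

single_box = 0.999                       # one vertically scaled box, "three nines"
horizontal = pool_availability(0.99, 3)  # three commodity instances, "two nines" each

print(f"single box : {single_box:.6f}")
print(f"3x cattle  : {horizontal:.6f}")  # 0.999999, better than the big box
```

Under this independence assumption, three cheap instances beat one expensive box; real access networks break the assumption exactly where the answer notes, at the single-homed last mile, which is why that access point still has to be hardened individually.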
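The service-chaining trade-off mentioned above, how many hops actually need to open the packet versus simply label-switching it onward, can be sketched in a few lines. This is a toy model: the VNF names, the packet dict, and the "parsed" counter are all invented for illustration and do not correspond to any real SDN API.

```python
from functools import reduce

def firewall(pkt):
    # Must open (parse) the packet to inspect the destination port.
    pkt = dict(pkt, parsed=pkt.get("parsed", 0) + 1)
    if pkt["dst_port"] in {22, 23}:
        pkt["dropped"] = True
    return pkt

def load_balancer(pkt):
    # Must open the packet to pick and rewrite a backend destination.
    pkt = dict(pkt, parsed=pkt.get("parsed", 0) + 1)
    pkt["backend"] = hash(pkt["src"]) % 2
    return pkt

def label_switch(pkt):
    # A label-switched hop forwards without opening the packet at all;
    # that is the processing overhead the answer suggests minimizing.
    return pkt

def run_chain(chain, pkt):
    # Apply each VNF in order, stopping early if the packet is dropped.
    return reduce(lambda p, vnf: p if p.get("dropped") else vnf(p), chain, pkt)

out = run_chain([firewall, label_switch, load_balancer],
                {"src": "10.0.0.1", "dst_port": 80})
print(out["parsed"])  # 2: only two of the three hops opened the packet
```

Finding "the right number of places" to do deep processing, in this model, is just minimizing how many chain elements increment that counter while still delivering each function's service.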