Welcome. My name is Toby Ford. I work at AT&T, and I'm also on the board of OpenStack. Welcome, everybody. Today we're going to talk a bit about OPNFV. I have an esteemed group of panelists joining me to talk about this newer project, a newer effort, but one that holds a great amount of possibility. Joining me today, we have Chris Price from Ericsson, Chris Donley from CableLabs, John Zannos from Canonical, and Margaret Chiosi from AT&T. Why don't we go through a few introductions? Talk about yourself a little, and then your role in OPNFV.

Sure. I'm Chris Price. I work at Ericsson in Stockholm. I chair the Technical Steering Committee for OPNFV, so I work a lot with the various projects that we have and with trying to establish relations upstream, I guess.

And I'm Chris Donley. I'm the director of virtualization and network evolution at CableLabs, and I'm a silver-member end-user director for OPNFV, focusing a lot on virtual CPE and lab issues.

I'm John Zannos with Canonical. I'm located in Boston. I'm on the OpenStack board, and I'm a silver member on the OPNFV board, and, like Chris, an elected participant on the board.

Hi, I'm Margaret Chiosi from AT&T. I'm responsible for Domain 2.0 SDN virtualization onboarding and realization, and I'm also the president of OPNFV. One of my goals, or responsibilities, in OPNFV is to help shape the vision.

Thanks. So why don't we start with Margaret. Tell us a little bit about where OPNFV came from, and a little back history about yourself and standards: MEF, your work there, how that evolved into ETSI and all that, and where OPNFV came from.

Okay, so let me state something first: my role has always been to not do external forums, and I failed miserably. So a few carriers got together, I forget exactly how, three years ago, and we were all trying to virtualize our different functions.
So we decided to see if we could create an industry forum where the vendors and the users in the space could participate together. We had a debate about where to do it, and we ended up converging on ETSI, creating this organization called the ISG NFV, because an ISG was set up as a task force where you could organize any way you wanted. We had very clear views of how we wanted to operate. Given our backgrounds, we wanted to pick a place where most people were already members, but we were also looking for an environment where anyone could join without pain, where we could agree on concepts and so forth by consensus rather than formal voting, and one that had structures for IPR and administration, things like that. That, of course, took off beyond our wildest dreams. We just thought we'd get a few companies together, have a discussion, and see how we evolved. It actually ended up revolutionizing, I think, the whole networking industry. Anyhow, as we were going down this path, ETSI NFV was only supposed to be a two-year stint, if I can call it that, and we wanted to get to implementation faster. So we started discussing how else to do it besides that forum, and open source became part of the discussion. We started working with a lot of the other open source consortiums to get advice, and we decided that, as users, we needed an organization where the industry could come together to implement a platform quickly. I always joke that when we pivoted the industry to virtualization, we ended up disrupting the industry so much that it pulled everyone down to the platform. It pulled us users down to worrying about the platform, it pulled the network vendors down to worrying about platforms, and then, of course, you had the platform people getting involved.
So the goal of OPNFV, in our view, was to get an industry forum where we could actually work with code, with upstream code, and work things out together as an industry, to have a voice, sort of like a voting bloc, across all the different open source pieces. OpenStack by itself is not sufficient for us, right? It's necessary but not sufficient. We need an NFV infrastructure. We need controllers, which are critical for us. We need things like DPDK, on and on and on. That whole set of functions is necessary for us to really get on with the life of creating new services. So that's why we created OPNFV: an industry forum which allows us to implement what we actually envisioned in the ETSI ISG NFV.

So next, Chris Donley, or maybe I'll call you CD now; we're really not allowed to have two people with the same name on the same panel. So, CD, tell us a little bit about CableLabs, what it does, maybe how similar or different it is to OPNFV, and what it does for the cable industry.

Sure. CableLabs is an R&D consortium made up of the large cable MSOs around the world. We currently have 57 members representing about 158 million subscribers worldwide, and we've been involved with SDN and NFV since about 2012, about the same time. We first started off with some internal use cases, we joined up with the SDN and NFV work in 2013, and we've been expanding our influence in the organization. When we looked at the use cases, we found that the real key for our members, the real benefit of SDN and NFV, is being able to deploy new services much faster than we could in the old environment, and the key there is in developing data models and APIs. This all sounded really good, so I told my team to try it: how fast can you give me a virtual CPE? We have a demo coming up in two months; can you give me something by then? And they met it.
And we found how powerful open source software is in terms of being able to take advantage of the SDN and NFV ecosystem to drive value for the industry. So when OPNFV was getting off the ground, we were really excited about it as a vehicle for doing open source development for the industry and helping move this technology forward.

Thanks, Chris. Mr. Price, to get into the nitty-gritty, tell us: what does OPNFV really do? When we talk about open source, the code involved, or the testing involved, what is it really doing?

So OPNFV is an open source project, but it's a little unique. Well, it's not unique, but it's not what you typically think of when you think of an open source project. We see ourselves as a midstream project, which means that we work with open source communities and bring them together to create something from them. We don't want to build OpenStack; we want to come and help you build OpenStack, and then we want to use it. That's really where we spend most of our time. We're trying to build, essentially, a reference platform for the NFV domain. And as Margaret alluded to, that's not quite the same as what you want in an enterprise domain or a private cloud domain. There are differences. At a high level they don't look so different, but once you actually dig in a little bit, there's significant difference, and significant cost in moving between what you need in each of those domains. So we're midstream, we're focused on telco, we're focused on the NFV domain. We work a lot with the ETSI organization to understand what we need to achieve as an industry, and then, working with the open source communities, we try to actually build it. So, at the end of the day: we're eight months old, and we're in the process of getting our first release out the door. It will come out very soon.
And what we've learned in that process is the real difficulty and challenge of bringing together all of the upstream components and putting them together in such a way that you can then deploy them to physical infrastructure and essentially build out a data center. In the last couple of weeks we've gotten to the point where we have automated deployment running and we're able to hit a number of labs; we have 21 labs around the globe. We have 59 members, I think, and a lot of those members come wanting to do something, so they come with a lab: you know, we've got a data center, can we get the OPNFV stuff running there? So we hook everything in. We have a CI pipeline that basically fans out to these various labs, and you can auto-deploy. We want to prove that we can deploy easily and well, so we're running automated deploys every four hours at a number of the labs, and nightly deploys at other labs, essentially just trying to prove out the platform we've put together. In parallel to that, we have a team working on requirements. In other words: okay, we've built the platform, but it doesn't do everything we wanted, so we start to study how we're going to achieve the features we need in the platform in the future. We have a process there which is kind of like what you would do before you started writing a blueprint, I guess, and that process leads into writing blueprints. So we start to sketch out: okay, this seems like it needs to be done in OpenStack; let's write a blueprint for OpenStack. Then we come here, we talk to you about what we want to do, and we set about getting it done so that we can then pull it back downstream. So once Liberty's out the door, we're going to pull it down and try to run it. We're going to try to run applications on it and see how we go. Excellent. Thank you.
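As an aside, the deploy cadence Chris describes (four-hourly automated deploys at some labs, nightly deploys at others, each followed by verification) could be sketched as a CI trigger configuration. This is only an illustrative fragment: the job names, lab names, and step names below are invented for illustration and are not taken from the actual OPNFV CI setup.

```yaml
# Hypothetical CI configuration illustrating the cadence described above.
# All names are invented; the schedule syntax is Jenkins-style cron.
jobs:
  - name: deploy-verify-lab-a
    lab: lab-a                  # a lab on the four-hourly prove-out cycle
    triggers:
      - timed: "H */4 * * *"    # every four hours
    steps:
      - deploy-platform         # push the integrated stack to bare metal
      - run-smoke-tests         # verify the deployment actually came up
  - name: deploy-verify-lab-b
    lab: lab-b                  # a lab on the nightly cycle
    triggers:
      - timed: "H 2 * * *"      # once a night
    steps:
      - deploy-platform
      - run-smoke-tests
```

The point of the pattern is simply that every lab runs the same deploy-then-verify job on a fixed timer, which is what lets the project claim the platform deploys repeatably across heterogeneous labs.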
John and I are on the board of OpenStack together, so we've talked a lot in the past about how to focus on a particular vertical and solve specific vertical problems with OpenStack. John, tell me a little about OpenStack's role in OPNFV, and, since we're at the Summit today, how can the OpenStack community help OPNFV?

Excellent question, Toby. I think there's an opportunity to bring information back and forth. When we started OpenStack, obviously it was a cloud platform, and we've all invested lots of time in it and been active in it from the beginning. What we realized is that as NFV became more and more prevalent, starting from the ETSI framework and going on into OPNFV, it became a very viable use case. As many of you who are part of the OpenStack community know, there is a telecom/NFV working group trying to understand how you map OpenStack into it. The opportunity, I think, is that OPNFV has started to put that framework together and flesh it out a little more. So on the one hand, OPNFV can bring that knowledge into the working groups that exist within OpenStack, bringing, I'll say, a carrier-vertical-centric perspective. I also think information flows the other way, to Toby's point: the OpenStack community has to bring its expertise to the carrier side and point out what we're able to do, what is not currently operational but can be, what features need to be added, and what features may need to be improved. Ultimately, I think it's a balancing act: balancing the importance of the collaboration and communication while at the same time putting pressure on both organizations to figure out a way to move this forward. It's been interesting for me, as a relatively new addition to the OPNFV board, to see the carriers wrestling with the incorporation of the open source model.
And at the same time trying to live with the pressures they're dealing with, and the competitive landscape, of making virtualization of the network an actual reality. I think one of the things the OpenStack community can bring to this effort is an understanding of what's available today, helping prioritize the stuff that is doable and important, and collaborating with OPNFV to accelerate the things that would make network virtualization a reality. It's certainly an important use case for OpenStack in general.

Thanks, John. Margaret, we've talked about this many times as well: the scope of OPNFV. Why did we select the NFVI as the boundary? Why not get into more of the VNFs? Given my view that the VNFs are not cloud-like enough, that area needs quite a bit of help. So what are you doing with the VNFs today, and what's the rationale for leaving the boundary at this level?

So the dotted line is initial, and it's really about what a platform is. The goal was to try to create a platform that we all can use, which really goes all the way up, if I can use the term, to the MANO stack, the Cloud Foundrys and things like that. That's the vision: a platform where I can instantiate an application in a virtualized environment or a container environment, manage its lifecycle, and then, as that application moves for whatever reason, have the network access move with it. That's simply what the platform should do. Now, whether you call it MANO, NFVI, and so forth, I actually don't care; whatever it takes to do all of that. Actually getting into the VNFs per se, where we start getting involved in open-sourcing VNFs, is not really the focus. I guess if we solve the platform, fine, we can evolve to that, but I think the platform part is the hardest piece right now.
And again, like I said, once we get the platform complete: instantiating that application in a virtualized environment, moving the application and the access that goes with it, and all the FCAPS that goes with managing the application as well as the platform. Once that's solved, I think our job is done, and then we can move up to the next step.

All right, thank you. If I can chime in, one of the other reasons is, well, you need a scope to start with that you can actually achieve. But also, coming from a VNF vendor: you can do a lot of things with the VNFs, but you really have to know what the VNFs are running on if you're going to do it well. So establish that foundation, work with the platform vendors, make sure we have a community that is starting to normalize how these things are going to be deployed and how they're going to look, so that the VNF developers can start to take advantage of what we're learning there and build on it. Then we can start to accelerate what's changing at the VNF level as well. That's a vendor's view, I don't know.

That makes sense. So it's an 80/20 problem: 80% of the code delivers about 20% of the value, so it makes a lot of sense to get together as a community and work on that 80% of the code together, so that the individual vendors can focus on where their specific value is. Sure. In this context, John specifically, I want to know: as a vendor, how can you play in this space when there are other competitors? Chris has talked about not picking winners for particular modules. So, in the case of deployment tools, tell me a bit about how Juju plays a role, and how not picking a winner would work.

Yeah, I think there are a couple of very important points in your question.
One is tied to the question everybody was just responding to, because, first of all, even though there's an initial focus, some of the work being done is certainly transferrable up the stack, and the work that is transferrable up is about deployment, automation, and simplification. As somebody that supplies an Ubuntu OpenStack distro and in a way fulfills that vendor role, I think the way OPNFV is looking at this as a loosely coupled architecture is very positive. Our view is that there's a certain amount of pluggability you want to preserve, which allows people to talk about different deployment tools and different mechanisms. What we've been trying to do with Juju, as an open source service-modeling tool, is to deploy applications. One of those applications can be OpenStack itself, and another category is certainly the VNFs. But there are other models that are equally applicable, so we had this conversation, actually, and I think Chris used a good analogy: rather than trying to select winners, simply put, we want to allow options to be evaluated, and the options that can be successful and can solve the problems at hand will get accelerated naturally. Any time you actually try to presumptuously pick the winner, you make the wrong decision; I think that's the only thing we do definitively, both in OpenStack and on other boards I've been involved with. None of us are particularly good at predicting the future. What I think is very positive about OPNFV is that it's creating a framework for people to propose options, and an open playing field, so to speak, for people to demonstrate what will and won't work.
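To give a concrete flavor of the service-model approach John describes, a Juju deployment is typically declared as a bundle: the applications, the charms that implement them, and the relations between them. The fragment below is only an illustrative sketch; the charm choices and unit counts are simplified assumptions, not a tested OPNFV deployment.

```yaml
# Hypothetical Juju bundle fragment: a few OpenStack control services
# modeled as applications and related to each other, rather than
# hand-wired. Charm names follow the public OpenStack charm naming,
# but the selection and options here are illustrative only.
applications:
  keystone:
    charm: cs:keystone               # identity service
    num_units: 1
  nova-cloud-controller:
    charm: cs:nova-cloud-controller  # compute control plane
    num_units: 1
  mysql:
    charm: cs:percona-cluster        # shared database
    num_units: 1
relations:
  - [keystone, mysql]                # keystone gets its database
  - [nova-cloud-controller, mysql]
  - [nova-cloud-controller, keystone]  # nova registers with identity
```

A bundle like this would be deployed with a single `juju deploy ./bundle.yaml`, which is the "deploy applications, one of which can be OpenStack itself" idea in practice; a VNF would just be another modeled application in the same bundle.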
And that's really how we see ourselves participating: not, so to speak, lobbying for a particular end state, but participating from the perspective of encouraging a framework in which multiple solutions can be looked at for solving the problem, and the one that is best at actually doing it has the opportunity to move forward.

Sure, makes sense. Thanks. So there's a pretty crisp boundary when it comes to deployment tools like a Foreman, a Fuel, or Juju, but when it comes to SDN and OpenDaylight, tell me a little bit, no matter who answers this: why ODL first, and what are some options going forward? Is it just going to be ODL, or are we going to see the Contrails, OVNs, and other things show up as well?

I can take a spin at this one. One of the things we wanted to do was ensure that we're able to integrate different components, so we didn't just want to start with the easiest possible thing. We said: let's bring in an SDN controller, let's make sure that we can use it and deploy it, and let's make sure it's one that's focused on WAN-centric use cases as well. Not only that, let's find one with a good community, one we know we can collaborate with. A number of members of OPNFV are also members of OpenDaylight, so it was totally natural for those guys to want to do that. It wasn't a pick-the-winner; it was just, well, we have people working here, we know they can help us, let's start here and get the ball rolling. It's certainly not a question of picking a winner. We ran Arno as a very constrained activity. We have a total of five projects actively involved in the Arno release activity, out of 31 projects active in the community, which will of course come up in release two. For release one, the point was not to find the right platform; the point was to build out our global labs, build out our CI pipeline, and make sure we can deploy.
Eight months ago we had nothing but a PowerPoint presentation saying we wanted to do something, and the first release was really about getting that architecture and pipeline in place, I would say the deploy architecture and the CI, so that we can then do more. Once Arno is out the door, well, we already have a project from the OpenContrail team, so there will be an OpenContrail solution, there will be an OpenDaylight solution, there will be an ONOS solution, and those are just the three that have put their hands up already. Anyone can put their hand up and say: okay, we have an SDN solution that we'd like to put into this and evaluate how it works with the use cases you're trying to define. It's an open door, an open invitation.

Yeah, so we had this big debate about whether we could, or should, do multiple things at once, but the problem is that it's challenging. So we picked these initial pieces, some of which actually became a debate, just to see if we could get them to work, and it has been a challenge. But our goal for future releases is to make sure we have a build structure that allows us to customize based on customer needs. And whatever components people bring to the table, they have to participate. So if you have a favorite component that you want to get fleshed out in this build structure, definitely participate, and it'll help us make a more robust build structure.

If I could just expand, because it was interesting for me: we were encouraged to participate in OPNFV in the last 30 or 60 days or so, and I did have a chance to talk to Chris and Margaret about this very point, because my concern was: by being prescriptive, are you freezing the marketplace?
And I think the OPNFV board has been very thoughtful in viewing it as a pluggable structure or framework, and, equally if not more important, has been very consistent in communicating that and making sure everybody understands that the first decisions aren't necessarily the last decisions. That's a very important part of how this is going to move forward, because broadly, nobody wants to stifle innovation; you want to encourage it. NFV is certainly in its very early days, but on a really expedited timeline, so we could all benefit from lots of innovation.

Thanks. Yeah, one part of OPNFV that I really like is the community pages, and how they bring your eye and your focus to the set of things that an OpenStack or an ODL can do to help make the telco use cases better. Can you talk a little bit about that, and how it's helping to facilitate and streamline the upstreaming of code from the telcos and the telco ecosystem, which has maybe been problematic in the past?

Sure. You're right, we have community pages and community focus groups. At the moment we have four that have been evolving and emerging: OpenStack was the first, and OpenDaylight, Open vSwitch, and ONOS are the other current community pages. The idea of the pages is that that's where, within the OPNFV project, we pull from upstream. So if you go to the OpenStack page, you'll see all the OpenStack processes. For blueprint writing you'll see all the template links. You'll see everything on how to work with OpenStack: what these guys expect, and how they want to see things when we walk in the door with a requirement. So we try to pull in from the upstream there, and we try to make sure we pull some people with us.
So we have people from the OpenStack community actually engaged in those groups, working with us and helping us understand not only how to get the blueprints upstream, but what actually makes sense to upstream, right? There's no point coming to OpenStack and asking for an implementation that belongs somewhere else. The other thing we do is map our features, requirements, and projects to the blueprints, so we keep track. If someone looking at a blueprint sees that it came from OPNFV, they can come back to our page, look at the blueprint, and it will link them back to all the work we've done beforehand, making everything contextual to what we're trying to achieve. But primarily it's the engineers we have working in the community, and the engineers that come in from the OpenStack group to work with our engineers, that make it successful. The rest is just reference material; it's a place to collaborate.

Sure. So, CD, tell us a little bit about the actual labs that you've put in place, and how you've maybe helped out the OPNFV community by contributing to making it better.

Sure. CableLabs has been doing testing and certification for our members for 15 or 20 years now, trying to help the vendor community bring solutions to the marketplace and make sure they work in an interoperable fashion. So this was a natural extension for us. What we've been doing first is contributing some of the governance models that we've developed: some of the interoperability arrangements for how two vendors can work together in the same lab without stepping on toes or running afoul of the other vendor's IPR. Beyond that, we're standing up our own lab. We have one pod that we're currently installing in our Louisville, Colorado facility, and we're looking to expand into Sunnyvale as we get a little bit further on.
And we're currently in the process of tying our CI/CD system in with OPNFV's.

Thanks. Margaret, tell us a little bit about the participation in OPNFV. It's always quite surprising to me when I look at the website: there are a lot of people there, a lot of vendors. So tell us a little bit about what that group is made up of.

Well, if you look at the different pieces of a platform, you need all those different components. You have the chip vendors, the server vendors, the operating system vendors, the orchestrator vendors, and of course the virtualization vendors and the network function virtualization vendors. And then of course you have carriers. This is an ecosystem, and it's difficult to just say we're going to pick these three slices of the market and they'll solve the whole industry. The industry is churning a lot, which is why we say we're trying to build an environment that you can customize. Even within AT&T, right, you talked about the controller: AT&T is actually in probably all the controller spaces. We group it into two classes, a global controller and local controllers, and there are a lot of different local controllers. You have the local controllers for white-box switches, which might be the ONOSes of the world. You have the local controller for your OVSes or your virtual routers, which might be the Nuages, the Contrails, and the like. And then you might have a global controller, which is more of an ODL point of view. And we ourselves, because the industry is all over the place, realize that you have all these different sweet spots for all these functions. AT&T is huge; our target is to virtualize 75% of our network by 2020, getting it onto an SDN/NFV platform, and the goal is to have 5% by the end of this year.
So we're doing it everywhere, and it's pulling in all these different pieces, the best of breed of all these pieces, together, and it's not just one. Just within AT&T we're finding we have multiple different combinations of the platform that we need, and so we're building these multiple versions. As an industry, you can imagine, it's even broader. So yes, it is a lot. Now, one thing I would expand on for some folks: even though it says "carrier," I really do firmly believe that whether it's enterprise or carrier, the requirements aren't as drastically different as folks might think. We're actually encouraging end users, whoever the end users are, to participate, versus just carriers. OPNFV, even though you see a lot of carriers, really is for anyone who is focused on virtualizing network functions, or virtualizing functions that have access moving with them. And of course the aim is to implement. The broader the user base we have, I think, the more robust the platform will be.

And Toby, if I could just add a point: what's been interesting for me to observe is the deep involvement of the carriers. In doing so, they bring a laser focus on a particular use case, which the OpenStack effort in general doesn't really have, because it's a platform that goes very broadly, horizontally. Clearly NFV is a big use case, but having the carriers actually at the table, talking about the challenges they have and the priorities and the timelines, brings a very sharp focus on what needs to be done when. Bringing that perspective into the dialogue of what happens in the broader OpenStack community is, I think, very valuable.

So in that same context, though, for Canonical: why? Why do OPNFV? What's the value for you guys?

So, a couple of things. Obviously there's a business objective here for everybody that participates, so I'll check that box and nobody should be surprised.
But more importantly, I think what we see is a real use case with a big challenge for the carriers in general. It has multiple challenges. One is the incorporation of open source in an environment, a market segment, that open source is just getting into in a broad sense, though the rate of adoption has been great and very quick. Second, there's the virtualization of the network; just the concept is no small thing to digest. And lastly, there's the effort to automate, simplify, and accelerate the release of services, so the carriers as a group can compete with the scale-out providers, the Googles and Facebooks of the world. Our particular interest: we grew up as a company in a cloud world. We're active in Amazon, Azure, all the major cloud environments, and our interest is to bring that perspective, the perspective of scale, of automation, of reuse, into the carrier world. When I look at NFV as an objective in the broadest sense, that's the goal, and from our perspective as a company, we think we have real value and real experience to bring to that discussion.

Thanks. So OPNFV's upcoming release is called Arno. It's named after, I guess, a river in Florence. Who came up with this concept for naming releases?

We put it to the community. There was a vote: we have a community, we were trying to figure out how to name the releases, so we put it to a vote. There were a number of different options, and everyone decided rivers were the way to go, and then the next vote was, okay, which river?

Telefónica had some influence over it.

No, not so much, no. You should probably ask the director for the details of why Arno specifically, but it was put to a vote; it was a community activity. We had a number of names out there: famous rivers, you know, actual rivers.
Can I vote for Potomac for the next one? Can I make Potomac the next one?

It's got to start with a B.

B, as in Baker. Good point. It's a "B" release. Thank you.

So anyway, more seriously: Arno is a bit late in everyone's view. When can we expect some level of consistency, a guaranteed release cadence like you see with OpenStack? That's one of the things that I think is a strength of OpenStack. You're the TSC chair.

So, with maturity comes predictability. This is the first time we all sat down together to do this, and we had hoped to ship a little earlier than now, but we will ship very soon. We are not where we wanted to be, but not far off all the same. The teams have been doing an amazing job of bringing everything together. Some of the things we're trying to do you don't usually see in these sorts of environments, and it comes from the breadth of our community: we have hardware vendors, platform vendors, application vendors. They all want to participate, and we want them all to participate. So even for our first release, we started development in one lab, moved it to another lab, and for the final release we're in yet another lab, because we want to make sure that the platform we're deploying, the one we claim we can click a button and ship out to any lab, actually does do that. If you just did it in one place, you couldn't really guarantee it; in fact, you'd know it's not going to ship between labs. So for six weeks now we've been working on those types of issues. What happens when you change the foundational infrastructure? How does that affect the next layer? How do you then make sure that the effects on the next layer don't affect the application deployment? Because at the end of the day, it's about being able to bring up applications seamlessly, applications that have strict requirements on the platform and advanced networking needs, and about automating that so we can make things move faster.
That's what we're really trying to achieve. So the delays are in the unknowns. As we move forward to release two, we're going to get more organized. We're going to have more projects, and having more projects probably reduces predictability, but we're going to have a release manager who will help us and drive things through, rather than relying on the projects to coordinate among themselves. Not only that, we've learned a lot. We now know what's going to happen when we change labs; we've done it a few times, we know what's going to happen, and it's going to form part of our process. We now have something we can actually work from, so if we get a new lab, the first thing we can do is move the release there and see if it works. And for the second release, which is going to be an evolution of the software layer, we can take those learnings with us and be, I expect, more predictable moving forward.

So maybe to help Chris out here a little: with Ubuntu, we release on six-month cycles, and for everybody in the audience who's been active in the OpenStack release cycle, that's hard, right, to be that predictable and steady. As somebody more recently added to OPNFV, watching from the outside in, I think you've done a very commendable job on version one, to get it out, or soon to be out. The first one always involves a lot of learning. We talked about this in the board meeting yesterday: the first version of any piece of software is a bit of an exercise in uncertainty, and establishing the cadence is going to be hard work, but very important.

All right, I want to open it up; we have three or four minutes, so I want to open it up for questions from the audience. Anyone want to volunteer? I want to thank the panel for their answers, some of them sweat-inducing answers, so I appreciate that. Thank you very much. Thank you.