My name is Toby Ford. If you weren't at the keynote earlier, I work for AT&T. And for this session we also have Mats Carlson from Ericsson, who's going to talk about some specifics with regard to OpenStack and how to make it work for the telcos and for what we're doing with NFV. I'm going to start with a brief review of what I talked about earlier regarding what we're doing at AT&T on NFV. Generally, as I was describing, and I'll try not to say this over and over again, the worlds of IT and telco are really coming together. Things are changing pretty dramatically for the telcos, because the functions that we perform in mobility and in the TV space are getting caught up in the ongoing onslaught of Moore's law, automation, consolidation, commoditization, and agile programming methods. All of these things are converging to drive change in a space that had traditionally been owned by hardware vendors that built bespoke hardware for specific functions, whether in mobility or in the TV space. So now we're reevaluating all of those things and looking at how we can take them and put them onto a public cloud, or onto a cloud infrastructure run by OpenStack. In doing so, we find that many of these elements have very similar characteristics to workloads we've dealt with before, whether it's scale-out "pets versus cattle" architectures, or the many aspects of load balancing, firewalls, and application servers that have corollaries in these types of systems. So we're working through the process of how to take those and move them over. Before the last eight months, I really had no involvement at all in the mobility or TV space or with the vendors that work there.
Over the last eight months I've been able to work very closely with folks at Ericsson and other traditional vendors in this space and really get to know what's happening. I came from very much a hosting and cloud background before that, so for me it's been a lot of education about what all these systems are about and how they work. I'm going to talk today in more detail about SDN and what we've been doing with it, starting within the data center and then expanding to what we're doing within the WAN. Then I'll talk about NFV and some specifics in that regard, and we'll finish by talking about how Ericsson is helping to work on OpenStack and evolve it as a cloud enabler for NFV. Generally, SDN is something I've worked on for the last four years. We were early adopters of a number of different technologies in this space; we were early adopters of Nicira and worked very closely with VMware on this subject. We've been looking at this problem of scaling data center networks beyond their natural limitations for 10 years: spanning tree issues, VLAN limitations, all of these things. This is where we came from. Where I came from, we were trying to run a multi-tenant hosting network, with lots of different tenants having L2 separation as much as possible end to end, from their app to their headquarters, from their app to their users. In that process, we recognized right away why SDN was important. If you look at Microsoft's VL2 paper, it's a very clear enumeration of the problem of running a data center at hyperscale, and of how you need to evolve it with more virtual methods. That work started with overlay protocols like GRE and STT, and over the last few years it has evolved into things like VXLAN and other aspects. Obviously all of that has to work on top of vSwitches and hypervisor networking.
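One concrete piece of what's described above is the common provisioning API that Neutron introduced. As a hedged sketch (the `build_*` helper names are my own, not part of any SDK, and the endpoint and token in the commented-out call are placeholders), the request bodies for creating an L2 network and a subnet through Neutron's v2.0 REST API look roughly like this:

```python
# Sketch of provisioning a network and subnet via the OpenStack Neutron
# v2.0 REST API. The build_* helpers are illustrative names; the actual
# POST (commented out) would need a real endpoint and a Keystone token.

def build_network_payload(name, admin_state_up=True):
    """Body for POST /v2.0/networks."""
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def build_subnet_payload(network_id, cidr, ip_version=4):
    """Body for POST /v2.0/subnets, attached to an existing network."""
    return {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": ip_version}}

net = build_network_payload("tenant-net")
subnet = build_subnet_payload("NETWORK_UUID", "10.0.0.0/24")

# import requests
# requests.post("http://neutron-host:9696/v2.0/networks",
#               json=net, headers={"X-Auth-Token": TOKEN})
```

The point of the talk's argument is that before this API existed, there was no vendor-neutral way to script these two calls at all.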
As I was saying earlier today, this is an interesting concept. For me, SDN means four or five different elements of functionality. It's not just overlays and what VL2 describes as a way to solve for a data center network. It also represents being able to talk, one VM to another, network-wise inside of a host, without needing to go out to the physical network. It also extends to other things: maybe the switch can be split into a separate hardware device and a separate piece of software, rather than both being produced by one vendor. And, as we talked through with OpenStack and tried to get OpenStack to work, it's all about orchestrating and creating a common API and mediation layer for provisioning networks. Before OpenStack there was nothing like this; there was no reliable API set you could depend on to actually provision L2 or L3 networks at all. All of these things, to me, represent what has come to be known as SDN. Now, recently I presented this to some of our WAN-oriented network folks, and they thought none of this is what SDN is; to them it's something else. So, in that regard, many of the things in this picture represent what I was just describing as my view of SDN. But beyond the data center, there's a whole other realm of software-defined networking happening in Layer 1 through Layer 3 in the WAN. And that, in a similar way, has characteristics like what I described in the data center. The switches and routers are sitting there as an opportunity to be run with more commodity hardware and more independent OSs, and the same goes for the control mechanisms and the way we orchestrate those things. One example I have from the WAN context is how we provision VPNs: VPNs between us and our customers, between us and our partners. It's one of the products we offer today.
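To make the overlay idea above concrete: VXLAN wraps each tenant frame in an outer Ethernet, IPv4, UDP, and VXLAN header. The sketch below (assuming standard header sizes and no 802.1Q tag on the outer frame) totals the per-packet cost of that encapsulation:

```python
# Per-packet overhead added by VXLAN encapsulation, assuming standard
# header sizes and no outer VLAN tag.
OUTER_ETHERNET = 14  # dst/src MAC + EtherType
OUTER_IPV4 = 20      # IPv4 header without options
OUTER_UDP = 8
VXLAN_HEADER = 8     # flags + reserved bits + 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(overhead)  # 50 bytes added to every encapsulated frame
```

That 50-byte tax is one reason overlay designs care so much about MTU configuration on the physical underlay.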
We enable other clouds to work on our customers' VPNs, so that a customer could have a service from our cloud or hosting facility, cloud facilities from a third party, or even SaaS offerings from third parties, all on the same VPN, all with separate connectivity end to end. Now, in the past it took a lot of work to set this type of thing up between all of us; it was a month-long process. Over time we've worked to orchestrate that end to end, as well as produce APIs that allow us to stitch these things together. This is a very simple example of what's possible, but it gives you a sense of where we're heading. Now we're able to make WAN provisioning more real-time. Beyond VPNs on networks that were already provisioned, you'll see us start to get into how we make Ethernet-to-Ethernet connectivity happen between companies, and try to make that a more real-time experience as well. So, as SDN happens in this space, it's as much about API exposure and real-time services as it is about hardware/software disaggregation or data plane/control plane disaggregation. That's SDN, and for me it deals with the Layer 1 through Layer 3 aspect of virtualization in the network space. Beyond that is where I think of NFV. This slide describes the situation today within a telco, or even within a cable provider or a number of the larger service providers. Many of the pieces of equipment that make up our networks, even large firewalls, large NAT devices, even DNS, run on hardware that is very specific to that function. It costs a lot, it takes a very long time to evolve, and it takes months if not years to upgrade these environments. The utilization is always built for the worst-case scenario. These kinds of dynamics are not really competitive anymore.
As I was saying before, as the IT concepts start to show up and actually scale to this level of requirement, the obvious question is: why not virtualize? The next page describes the benefits we get by taking functions from the NFV side and virtualizing them, putting them in containers, putting them on bare metal, whatever it takes to better orchestrate them and use the hardware more optimally: higher levels of utilization, easier upgrades, more continuous integration and continuous deployment dynamics. And the ecosystem around it isn't a small number of vendors; it's hundreds of vendors. OpenStack plays an integral role in making that happen, and as you see just in the pluggability within Neutron, we have a lot of participation in how that has evolved. Now we would like to see Neutron go to the next level, doing even more of Layer 3 and WAN connectivity and covering those types of things, and that's where we really need your help. This last slide, before I pass it to Mats, is about an effort where we're now in the middle of surveying the market about whether or not this makes sense. We're proposing creating a new group within the Linux Foundation's collaborative projects arm called NFV Open Platform, with an aim to drive the same dynamics with NFV functions as what's happened in OpenStack and in OpenDaylight, and also, for the underlying pieces in OpenDaylight or KVM or OpenStack, to have a mechanism to aggregate requirements, push them, and have a way of funding more resources to work on these things. This is an effort we're now asking people for opinions on, to see if it will fly or not. So thank you, and I'll pass it off to Mats.

So we're actually reusing the slides from myself and Toby here from the keynote.
Looking at OpenStack, I think we really see it as the heart of NFV and the glue that keeps the NFV environment together. We really need something that is multi-vendor, in the sense that we should be able to run applications from different vendors, use different guest OSes, use different hypervisors, and connect different types of plugins from different hardware vendors, whether server vendors or networking or storage vendors. We need this multi-vendor cloud ecosystem, and I think this is key for NFV, because that is most likely how the environment will look on the NFV side. If we take some of the core capabilities that we mentioned: pluggability, I think, is more or less the same thing as talking about multi-vendor. Pluggability is really the flexibility, and this is where I believe OpenStack is very unique, with its pluggable architecture. Of course you don't get a complete product, but you get the tools to actually build the perfect environment for NFV. So the pluggability provides the flexibility that we need. Programmability: we have to deliver on the promises of NFV. NFV is really about speed and simplification in handling NFV resources and NFV applications. Without programmability you can't really automate, and you can't efficiently deliver that speed. One part is the whole area of provisioning resources and applications, and the management of them: being faster to provision, faster to launch new services, and lowering the operational cost through the automation this framework brings you. Innovation and speed: this is really about the community. No single vendor can outperform the speed that we see in OpenStack.
I think the good thing now, and I think it was also shown by Toby, is that with this open source and Linux Foundation work, there is good alignment among a lot of telecom operators and telecom vendors to go down the OpenStack track. There are a lot of things we need to do, but there is good alignment that this is the bet on the future. Reliability and security: one of the differences we actually have in NFV is that we have a lot of stateful applications. Stateful means you can't just shut down a VM; you have to actually handle state, migrate state, and so on. Most of the telephone calls you make depend on state in some telecom application, and if you just shut down that VM, that call will be lost. So we need to be able to handle stateful applications, because there are something like 20 years of design behind them. But we also need to move these applications, historically deployed on boxes, over to a new type of cloud deployment environment where we get the benefits of faster deployment and more automation. Telco expertise and features: this is really what we bring to the table. I think we have a good understanding now of what we think needs to be done and how it can be done. It's also a good opportunity to combine a lot of things happening in other industries that get contributed to OpenStack, and by doing this I think we can leverage OpenStack even further. I also believe that some of the features we have in mind will benefit other industries that place high demands on the infrastructure. Distributed and scalable: when you look at an operator's network today, there are not two or three data centers; there are hundreds of data centers. They have all these different locations, central offices, which a lot of operators want to use in a distributed way.
They want to move workloads across this huge network of large data centers and smaller, more regional data centers. Actually giving the capability to move and distribute workloads in a setup where you maybe have hundreds of data centers is a challenge, and at the same time we have to provide more or less the same SLAs on the end services that we have today with the classical box type of device. Scalability: how do you scale to millions of subscribers with stateful applications? There are a number of issues that need to be solved to do this in a good way. We need to be able to optimize along a lot of different dimensions: cost, energy, latency. For instance, where do we put the workload from a topology perspective? And all of this optimization we need to be able to orchestrate from an OpenStack-type layer. In a classical data center you more or less optimize on a single parameter, but here you need to optimize across a number of data centers, and you need, for instance, to take into account strict requirements on latency, which is a bit of a challenge. Because then we're coming back to what Toby said: the wide area network, for instance, needs to be part of the orchestration engine. So I'll go through some of the areas where we believe we need to add capabilities. We also believe a lot of these capabilities will be usable for other industries and other verticals. If we start with resource allocation and resource isolation: we need to be able to orchestrate VM and network resources based, for instance, on end-to-end network SLAs and on subscriber policies. We need a much more advanced environment.
How do we actually allocate and handle VMs or network connectivity? For this we need APIs on the Nova and Neutron side. Resource isolation is another aspect. When you have all these different network functions, these NFV applications, you actually need complete isolation from a performance perspective: one application shouldn't be able to impact the performance of another. That means you need resource isolation in terms of compute resources, memory resources, and network resources, so that with complete isolation there is no impact between the applications. Networking: as we said, we need to bring wide area network orchestration in as part of the whole solution. We also need, for instance, to be able to terminate the wide area network into a dedicated VM within the data center. That means we need proper support for MPLS tunneling straight into a VM. If you only have a data center, you maybe need to connect two data centers; but here there is a large number of sites you need to connect, and you need to be able to move workloads. That's why the wide area network needs to be an orchestrated resource exactly as VMs or storage are. Real-time response: we still have a lot of packet-based, conversational-voice types of applications, where you need, for instance, an interrupt-based scheduler in the hypervisor. You need latency figures in the virtual switch and the hypervisor together that more or less give you the same type of figures for real time and throughput as a native deployment. High availability: if you just take a standard OpenStack reference type of implementation, you will most likely hit figures of three to four nines on the availability of the controller and the setup.
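One existing mechanism for the compute-side isolation described above is Nova flavor extra specs for CPU pinning and hugepage-backed memory. The sketch below is illustrative rather than a complete flavor definition; the flavor name is made up, while the `hw:*` keys are standard Nova extra-spec keys:

```python
# Nova flavor extra specs that pin a guest's vCPUs to dedicated host
# cores and back its memory with hugepages, so one NFV workload cannot
# steal CPU time from, or thrash the memory of, its neighbours.
# The flavor name is illustrative; the hw:* keys are real Nova keys.
isolated_flavor = {
    "name": "nfv.isolated.4c8g",
    "vcpus": 4,
    "ram_mb": 8192,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",  # 1:1 vCPU-to-pCPU pinning
        "hw:mem_page_size": "large",   # back guest RAM with hugepages
    },
}
```

Network-side isolation (bandwidth guarantees per port) is a separate problem, which is part of why the talk keeps returning to Neutron QoS.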
What we are looking for is an infrastructure environment that is five nines, and that means we need more support for fault monitoring, health checks, and mitigation functionality that can, within milliseconds, move workloads from one place to another or increase the network resources. These are things we need to add to the framework. Then there are functions that are more generally needed: carrier-grade security; multitenancy. When we talk about multitenancy, it's really that we have a number of NFV applications residing next to each other. They may have different security perspectives, different performance requirements, and different security requirements for the whole network. So we need to provide a complete end-to-end multitenancy environment, all the way from the applications to the VMs and all the way out into the network; it needs to be a completely isolated pipe from the other network functions sitting next to it. Software management and upgrade support: good things are happening now, but we actually need hitless and automated upgrades of the environment. Backup and restore: this goes back a bit to the things we have in ordinary operator networks, with more automatic backup and a lot of built-in automatic recovery actions. Audit and troubleshooting: things like audit logging, a lot of monitoring capabilities, troubleshooting. Because if these things break, a lot of the telecom services break: you can't make a phone call. And being a vendor to some of the major operators, when an operator can't make a telephone call, more or less they are contacting our CEO. So this is really important stuff; things need to work all the time. You can't really have hours of breakdown.
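The fault-monitoring-plus-mitigation pattern described above can be sketched as a simple supervision loop. Everything here is illustrative: the function names, the failure threshold, and the simulated probe are my own, not an OpenStack API:

```python
# Minimal sketch of a fault-monitoring loop: probe a service, and after
# N consecutive failed health checks trigger a mitigation action (e.g.
# restart or relocate the workload). All names and thresholds are
# illustrative, not an OpenStack API.

def supervise(health_check, mitigate, max_failures=3, rounds=10):
    """Run `rounds` checks; call `mitigate` after max_failures in a row."""
    consecutive = 0
    mitigations = 0
    for _ in range(rounds):
        if health_check():
            consecutive = 0
        else:
            consecutive += 1
            if consecutive >= max_failures:
                mitigate()
                mitigations += 1
                consecutive = 0  # assume mitigation restored service
    return mitigations

# Simulated probe: the service fails on rounds 3-5, healthy otherwise.
responses = iter([True, True, False, False, False,
                  True, True, True, True, True])
count = supervise(lambda: next(responses), lambda: None)
print(count)  # 1 mitigation triggered
```

The carrier-grade version of this has to close the loop in milliseconds, which is exactly the gap the talk says needs to be added to the framework.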
Because if you look at the standard requirements for a telecom network, we're talking about five nines, and five nines means the system can't be down for more than about five minutes per year, including planned upgrades. There are no upgrade windows in telecom; you have to do all the upgrades and everything else that has to be done within those five minutes. These are pretty tough requirements. Assurance: we need things that do continuous performance monitoring, beyond what we can do in Ceilometer today. So those are the general requirements that we see, but we are also looking into what we actually do, what type of contributions we plan to make. And of course we need the community's help, and I think it's very good now, with Toby's keynote and so on, that we get the NFV requirements on the table, along with all the opportunities that arise within NFV. I won't go into all the details here, but of course there are a lot of things on the networking side: we need to add the capability to terminate and stitch together the wide area network, we need to look at how we integrate with SDN controllers in a good way, and there are a lot of things around quality of service. In Nova it could be things like dynamic logging, to turn logging on and off without stopping the services. We need a lot more information coming up from the compute nodes, so we actually know exactly how a device is configured and how many PCI buses it has connected, or how much storage, or whatever; we really need that information. In Ceilometer, things like being able to do metering on a per-tenant basis, with different metering capabilities for different tenants.
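The five-nines figure quoted above falls out of simple arithmetic: a 99.999% availability target leaves 0.001% of the year as the entire downtime budget, upgrades included:

```python
# Annual downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes(availability):
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_minutes(0.99999), 2))  # five nines  -> ~5.26 min/year
print(round(downtime_minutes(0.999), 2))    # three nines -> ~525.6 min/year
```

The contrast between the two lines is the whole point: the three-to-four-nines figure of a stock controller setup allows roughly a hundred times more downtime than the telecom target.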
Ironic: things like cleaning up the disks before handing that hardware over to another tenant, because you don't want data left on the disks. You need, for instance, APIs and functionality that clean the disk before you hand the hardware over to another VM. And we talked about the framework and high availability. I think this is very important, because high availability is not only about replication; of course it's very much about reliable software too, but it's really built on the capability to monitor what is actually happening. I think I skipped that one. So, when we look at the carrier-grade building blocks that we see, starting in the middle: I think we all see that this multi-vendor ecosystem is the key, because operators will have a mixture of plugins, different types of hardware vendors, different types of guest OSes. It needs to be multi-vendor. And over time I think we also need to find a form of interoperability, and this is where the Linux Foundation work comes in; of course interoperability is a challenge, but it needs to be multi-vendor, and we need to be able to plug whatever the operator needs into this framework. Security and reliability: we need to retain the level of security and reliability that we have today, and of course there is a bit of a challenge with stateful applications. Resource optimization and resource utilization in an end-to-end network perspective: we really need to support both distributed and centralized cloud deployment scenarios, a unified and distributed resource pool. The operators have a large number of sites out there, and we need to be able to treat all those sites, and all the servers and network equipment standing in all those places, as one pool of resources.
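As a toy illustration of the disk-cleaning step (this operates on an ordinary temp file purely for demonstration; Ironic's actual node cleaning does the equivalent on whole block devices, and the helper names here are my own):

```python
# Toy sketch of sanitizing storage before handing it to the next tenant:
# overwrite the old contents with zeros and verify. Illustrative only;
# real bare-metal cleaning works on block devices, not files.
import os
import tempfile

def zero_wipe(path, chunk=4096):
    """Overwrite every byte of the file at `path` with zeros, in place."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(b"\x00" * n)
            remaining -= n

def is_wiped(path):
    """True if the file contains only zero bytes."""
    with open(path, "rb") as f:
        return all(byte == 0 for byte in f.read())

# "Previous tenant" leaves data behind; wipe it before reuse.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"secret customer records")
zero_wipe(path)
print(is_wiped(path))  # True
os.remove(path)
```

In production this single-pass zeroing would be replaced by whatever erasure policy the operator's security requirements demand.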
So, really being able to have the complete flexibility to move workloads across the network: if you have, for instance, a simple service like a content distribution network, you should be able to move that function very close to where the load is, wherever it is optimal to place the CDN function within the network, and that of course is a balance between bandwidth and end-user experience. I think there are a lot of functions that can be balanced in a very distributed way, and with this unified and distributed resource pool, all the network functions, including all the load balancers, all the security gateways, all the NATs, need to move along in this network, so you need to be able to orchestrate the whole set. Federation: we also believe that, both within an operator and maybe between an operator and other enterprises, you need to be able to share and federate the environment. You should be able, for instance, to put an enterprise workload close to the NFV applications. I think there are a lot of opportunities in the federation aspects of OpenStack. As mentioned, we have a lot of this high-level SLA work, but we are moving to more end-to-end SLAs: we can't really have the SLA on a per-box basis, because the box will not exist any longer. We will have a horizontal way of building the systems, and we need to move over to providing end-to-end SLAs on the service. But I can assure you that the service-level requirements will still be five nines, because that is what you are used to when making phone calls. We have to deliver on the promises of NFV, the simplification, the speed, the reduction in OPEX, and that of course means rapid provisioning and more automated management.
We are working on a number of proofs of concept where we are trying to pinpoint some of the tricky areas we need to understand more about. Live infrastructure upgrade: that could be looking at host OSes and seamless host OS upgrades, or hot migration of VMs, including how we actually move state. High availability: how do we make sure we have a highly available cloud controller that can also be upgraded, getting that cloud controller to the five-nines level? Alarm handling: how do we provide SNMP traps from the execution environment up to the orchestration layer, so we actually know that something has gone wrong? Centralized identity and access management; another view of identity and access management is local access: if you have an earthquake, for instance, and you lose all your connectivity to a site, you may also need emergency access. Virtual switch performance enhancement: here we are talking about really getting close to line rate. Right now we have a proof of concept where we are reaching 10 gigabit line rate at 240-byte packet size without any dedicated hardware; it's only the accelerated vSwitch and the hypervisor. There is a lot that can be done, and reaching 10 gigabit line rate to a VM at 240-byte packet size without dedicated hardware is, I think, good. There are more areas we need to evolve to really get this virtualized environment as good as the native environment we have today. So, questions; let's bring Toby up to the table again. Should I answer?
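For context on that line-rate claim, the packet rate implied by 10 Gbit/s at 240-byte frames can be computed directly. The sketch below assumes the standard 20 bytes of per-frame wire overhead (7-byte preamble, 1-byte start delimiter, 12-byte inter-frame gap) on top of the frame itself:

```python
# Packets per second needed to saturate a link at a given frame size.
# Assumes 20 bytes of per-frame wire overhead (7B preamble + 1B start
# frame delimiter + 12B inter-frame gap) in addition to the frame.
WIRE_OVERHEAD_BYTES = 20

def line_rate_pps(link_bps, frame_bytes):
    """Frames per second at full line rate for the given frame size."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return link_bps / bits_per_frame

pps = line_rate_pps(10e9, 240)
print(f"{pps:,.0f} packets/second")  # ~4.8 million pps per 10G port
```

Sustaining nearly five million packets per second through a software vSwitch, per port, is what makes the claim notable.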
I think it's a bit complex, because I think we will actually need a couple of different SDN domains which need to be able to talk to each other. You need a transport domain that can talk both to OpenStack and to an NMS layer, for instance. So, at least in my view, it's not simple: we will have a number of SDN domains, a transport domain, a data center domain, with an SDN controller in each one of them, and they need to be able to talk to an orchestration layer that glues it all together, because I don't think you can have one SDN controller that spans everything; then it gets too complex. And it definitely gets into an area where there is a small number of vendors that have very specific solutions for what you are describing today, but there is not a lot of broad visibility, whereas when we talk about higher-end NFV functions, the spec is out there and there is a lot of development happening in that space. It's a bit of a different dynamic, so I agree with Mats: it's a really different domain and it needs a different type of group to focus on it.

The question is to both of you. Can you hear me? The question is to both of you gentlemen; this is in regard to vision. We have been seeing a lot of POCs and POCs and POCs. Have you really considered, or do you visualize, deployment in a year, two years, three years, five years? What kind of NFV deployments are you seeing in the AT&T network, and what is your CTO office saying to the vendors?
The question was: there has been a lot of talk of proofs of concept within telcos, lots of proofs of concept for this or that, and the real question is, are we ever going to really do it in live production? Which is a good question. There are two or three things we have to deal with. Part of it is more than just the technology, because we have proofs of concept that show we can do some of these things at the speeds and reliability that we need. The thing is making the justification for it, and that's often tricky given the sunk investment in the existing hardware-based setups. So it's about finding the right mix, and this is really one of the bigger problems we are dealing with today: do we try to do a net-new deployment, all separate and distinct, the greenfield where unicorns live, or do we do something in the brownfield, where we are actually augmenting existing elements with virtualized elements and having them interwork? One is easy to work on; the other requires a lot more integration and effort. The financial model points more toward the brownfield model, and that's what you will see us working on here in the next year. It's a year or two out before some of these things happen for real at massive scale. We will have point solutions for certain things in the mobility network here and there, but not at scale. On TV it's a whole different subject; on the TV side it's today, and if we don't do it today then we have a problem.

I can give the vendor perspective as well, because, as I said, we are planning to virtualize a variety of our core applications, and I agree with Toby: my view as well is that big-volume production deployments are maybe two years away, where we see them in major volume. There are a lot of trials and tests all over the place trying the concept out, but it will be some time before we have the full production environments in place as well. Exactly.

Core applications inside the
4G mobile network at AT&T, Verizon, and other networks are already using software-defined NFV functionality for Diameter signaling routers, home subscriber servers, and the policy control server. But many of those are very specialized, and they have the availability built into the application. When you describe availability for NFV broadly, are you looking to put that availability at the application level, or are you looking to get it from the infrastructure so that it can be scaled across multiple applications?

I think it's both, actually. If you look at telecom applications, and if you look at 3GPP standardization, you see that all applications more or less own their own data; it's very different compared to an IT installation where you have front ends and back ends. I think we need to support how applications are built. But having the state logic in the application when it's not really needed adds complexity, so we need to find the balance: where should we have the state logic within the application, and where should we go for a more front-end/back-end type of solution? As I said, though, the starting point is that most of the NFV applications are stateful; that is the starting point. Yeah, I agree. At this point it's really a mix of things. Ideally, everything in my universe is scale-out and cattle, but that's a very idealistic way of thinking; right now there are elements we have to support in the infrastructure to maintain reliability and availability.

You mentioned a working group within the Linux Foundation, and there are other groups working on some of the standards and strategies in that space. Why the Linux Foundation proposal, and what do you hope to get out of that connection? Sure. The work that's part of the NFV Open Platform, the survey we're doing to see if there's interest in this area, is actually an extension of the work that's
happening within the ETSI ISG, the working group that has defined NFV. The NFV group realized, in large measure because of OpenStack's success, that we needed something like OpenStack to achieve the goals at the NFV layer: the same goals, a multi-vendor ecosystem, standard APIs that are meaningful; not open standards so much as a reference-implementation kind of approach to standards. So then the question was: do we make a new foundation, do we become part of OpenStack or ODL, or how do we do this? That's really the question on the table: what do people think is the right approach? I believe it doesn't make sense to create another foundation; having gone through it once, it's a lot of work. Certainly the Linux Foundation and its collaborative projects framework is very interesting as a way to bring new things to bear, and some of the examples have worked very effectively; ODL especially has been a very effective approach. That's why we started there, and the Linux Foundation is set up to do this type of surveying and make that kind of pitch to the marketplace. I think it's also a major change for the telecom apparatus, because we were used to writing the specification and agreeing on it, and then everyone went home for two years and came back with a product. If you're going to do something quicker than that type of standardization, you actually need a living reference implementation that pushes it and drives the speed. Exactly, and that is really the ETSI ISG: there's a very good center of gravity in that group, around a good portion of telcos globally and all of their vendors, that to me covers what you're saying about the specific buy-off. I think that is a good group to work with. Any other questions? Is time running out? So your question is: is there any functionality that can actually enhance Neutron? What do you think about that one? Functionality that would enhance Neutron?
Well, I think we need, as I said, this area of connecting the wide area network together with the data center networking piece; there are a number of things we need to improve in the Neutron area. There is also a lot of discussion about how much we should put into the Neutron layer, how much should be above it, and how much should be below it, and there will probably be a lot of discussion at upcoming summits as well. So I think Neutron probably needs to be worked out a bit more. But there are existing vendor solutions and telco solutions out there, and we need to compel them to incorporate into this scheme; it's there, and getting them to bring it into the open source world is going to be part of it. All right, they're giving us the signal. Thank you, everybody; we appreciate the time.