Okay, welcome everyone. We'll get started, we're at 20 past and we've got about 40 minutes, so we wanted to kick off this session. My name is Jennifer Lin and I lead product management, and I'm joined by Don Keeler, who is Senior Vice President at Hitachi. We're going to talk to you today about a partnership we've been engaged in. I'll kick it off with a little bit on the Juniper Contrail side; the majority of the session will be led by Don, and he'll talk more specifically about aspects of the solution.

There's been a lot of discussion at the conference already about the differences between broader cloud platforms, infrastructure and platform as a service, versus the specific needs of carriers with NFV. We conceived a lot of what we've been doing with Contrail around that dichotomy, so we'll talk more about that. There are some very specific needs in the 4G packet core, and we're going to cover what problem we're solving, what we've done to date, and why this partnership makes the sum of the parts greater. We're really here to talk about what we've already done, not "wouldn't it be great if NFV could be addressed in the context of OpenStack?" We made that decision probably two years ago, and when we kicked off this partnership I think we were both convinced that the growing momentum around cloudification and the specific challenges of the mobile packet core could come together in a nice way. So we'd like to share the fruits of the effort over the last couple of years, and if we have time at the end we'll open it up for Q&A.

Some of you may be familiar with OpenContrail. We kicked off this open source project driven largely by Juniper Networks' acquisition of a small startup called Contrail Systems, and Contrail was really looking at how we do cloud network service automation. A lot of the discussion we've had with our large carrier customers is how we take the best practices of building out an elastic architecture, one that scales out and is standards-based and interoperable, in a way that doesn't force us to displace the goodness we already have in a pervasive IP network. And as many of you know, a lot of the standards work around Contrail, the L3VPN end-system architecture, was co-authored with AT&T and Verizon some years ago to define, for cloud data centers and virtualized data centers, a standards-based interoperable environment, so that we can have all the benefits of a highly distributed, elastic architecture but still solve the issues around resiliency, availability, multi-tenancy and security. We'll talk a little more about that.

There's really a balance among the drivers here. From a technology perspective, everybody understood that the need for a horizontally scalable, scale-out architecture was not how networks were built in the past, where you built for peak capacity and filled it up as things came on. The move to cloud, which really empowers the end user, means you can't build for an upper bound. So what the cloud providers have done is build a very horizontally scaled-out infrastructure that essentially grows as usage grows, billing back the user.
And in mobility, obviously, we as mobile users already see that paradigm; the backend infrastructure was not really built with that driver in place. For the last few years, a lot of the focus in IT environments has been on cost reduction, but increasingly that's balanced with agility. So with NFV, as we work with our carrier customers around service creation, of course you have to bring the cost down. That's the minimum pay-to-play. But what people increasingly measure is business agility and this notion of fail fast, which hasn't been comfortable in the carrier markets but is a key tenet of how the cloud providers think. There are mistakes that will come along the way. The resiliency of the application versus the resiliency of the infrastructure, the cattle-versus-pets discussion, is an area where, as this market matures, we're seeing more examples of how you balance elastic scale-out with the resiliency and HA that carriers expect. At the same time, organizational issues and a different lexicon for cloud, a lot of those layer-eight issues, have prevented folks from embracing this as quickly as the technology was available, and there obviously were some gaps to address: technical maturity, and some retraining in a networking industry that grew up with proprietary operating systems based on per-device CLI commands. A lot of what we're doing with system-level SDN and cloud network automation is simplifying network configuration in a way that automates the delivery of services, and we'll show one very specific example.

We've had incredible success with OpenContrail, and the folks who initially moved into deployment haven't necessarily been the large-scale carriers. We're sharing a lot of the actual deployments in a user group session tomorrow morning, where six of our customers who are cloud builders are going to share their evaluations, the benchmark testing they did, and their operational best practices, and we are really growing this community. At the beginning, pretty much all of the developers were Juniper employees coming from the Contrail team. Some of the customers we're starting to showcase have already contributed back into the open source community a lot of the enhancements they've made, and that sort of effort is fairly new in the networking industry, especially for carriers. Juniper is a founding member and platinum sponsor of efforts like OPNFV, and in a lot of those Open Platform for NFV type discussions, people are looking not just to agree that this is a good idea but for data-driven solutions and data-driven decision making. From a software architecture standpoint, things need to be loosely coupled and API-driven, and the notion that started in the SDN discussion, programmatic control for network systems, has really evolved now. A lot of the APIs that are presented, either as Neutron APIs or as services APIs, be it load balancing as a service or other specific network services, are really maturing. A lot of these things, when we started this journey as Gold Members of OpenStack, were not yet defined as clean, well-behaved APIs.
When we talk about the vRouter capability in Contrail, which is essentially distributed virtual routing in the Linux kernel, that was a somewhat different kind of discussion about the convergence of compute, storage and networking, but obviously there are a lot of sessions at this summit around that. So I think we are converging as an industry on why that makes sense: how we can achieve a loose coupling between the different planes of the network, the data plane, the control plane, the service plane, the management plane, and how we can do it as an open ecosystem.

As I mentioned, the work around Contrail started even before the founding of Contrail as a company and was driven through the IETF. We're now in the seventh iteration of that work in a fully fledged IETF working group, but the key tenet was a vendor-agnostic architecture for an overlay-based network that would interoperate day one with existing network infrastructure, and not single-vendor network infrastructure. I think we've proven that with the ability to scale into actual production deployments. We've been deployed not only on Juniper MXes and Juniper switching infrastructure but in many cases on a mixed-vendor environment of switching and routing. The constant is the notion of an IP VPN at the Layer 3 gateway, and that's a key tenet of how the Contrail architecture was conceived: we didn't want to compromise on the ability to achieve secure multi-tenancy in a virtualized environment. At the same time, the reason these large carriers got behind this approach was because it interoperates with existing WAN gateways day one. So the notion of a hybrid cloud is not some future roadmap item. We are already interconnecting overlays within a specific data center to the wide area network, MPLS VPNs, IP VPNs, Ethernet VPNs, in a standards-based way, and proving the interoperability of every layer of the network.

Increasingly, too, the notion of a Linux kernel module that natively does routing, not just for scalability and latency reduction but also for service delivery, is making a lot of sense to folks. The most mature thinking about this comes from the real-time application folks. We do a lot of work with real-time analytics, gaming providers, entertainment folks, who increasingly are moving away from a hypervisor environment, maybe to a Linux container environment, but are very latency-sensitive. In many cases they're using Layer 3 routing protocols like BGP not just to scale but to do service insertion and policy enforcement right at the host level. That's proven to be a key differentiator for us, and the fact that we have an open-source code base where we're starting to see a lot of co-creation with our broader community has helped adoption.

The way we do network virtualization and attach policies to abstracted virtual networks is also enabled by the way we do service chaining. It's a similar approach: we have a unified control plane, which is good old next-hop routing, and whether it's a physical appliance or a virtualized appliance, we can essentially route to that service. We can present it as a service, whether it's a firewall, a load balancer, a caching service or a mobility service. And when the group-based policy is created, it's created at a vendor-agnostic level. We then use service templates to push down the low-level configurations and make sure the policy is enforced in the architecture. So a lot of that service automation and dynamic service delivery, the lessons learned from the cloud builders, becomes very relevant in the way we do distributed service delivery in a cloud.
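To make that concrete, here is a minimal sketch of creating two abstracted virtual networks and attaching a vendor-agnostic policy between them, using OpenContrail's Python bindings (vnc_api). The hosts, credentials, tenant and network names are placeholders, and exact class names can vary by release, so treat this as illustrative rather than canonical:

```python
# Minimal sketch using OpenContrail's Python bindings (vnc_api).
# Hosts, credentials and names are placeholders.
from vnc_api import vnc_api

api = vnc_api.VncApi(username='admin', password='secret',
                     tenant_name='demo', api_server_host='192.0.2.10')
project = api.project_read(fq_name=['default-domain', 'demo'])

# Two abstracted virtual networks; the vRouter realizes them as overlays.
web = vnc_api.VirtualNetwork('web-tier', parent_obj=project)
app = vnc_api.VirtualNetwork('app-tier', parent_obj=project)
api.virtual_network_create(web)
api.virtual_network_create(app)

# A vendor-agnostic policy: allow TCP/8080 between web-tier and app-tier.
rule = vnc_api.PolicyRuleType(
    direction='<>', protocol='tcp',
    src_addresses=[vnc_api.AddressType(
        virtual_network=':'.join(web.get_fq_name()))],
    dst_addresses=[vnc_api.AddressType(
        virtual_network=':'.join(app.get_fq_name()))],
    src_ports=[vnc_api.PortType(-1, -1)],        # any source port
    dst_ports=[vnc_api.PortType(8080, 8080)],
    action_list=vnc_api.ActionListType(simple_action='pass'))
policy = vnc_api.NetworkPolicy(
    'web-to-app', parent_obj=project,
    network_policy_entries=vnc_api.PolicyEntriesType([rule]))
api.network_policy_create(policy)

# Attach the policy to both networks; the system then renders and
# enforces the corresponding low-level configuration.
attach = vnc_api.VirtualNetworkPolicyType(sequence=vnc_api.SequenceType(0, 0))
for vn in (web, app):
    vn.add_network_policy(policy, attach)
    api.virtual_network_update(vn)
```

The same networks are also visible through the Neutron API via the Contrail plugin, so orchestration written against Neutron doesn't need to know about any of this.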
There's lots of data, as I mentioned, around each of these tenets. We have a couple of customers doing sessions here at the summit to share their benchmark testing and the scaling tests they did. Over the past year we've gathered a lot of best practices, both on the operational side and in how these tests are being conducted to evaluate SDN-type solutions. And the term SDN, as many of you know, has evolved quite a bit since it first appeared a couple of years ago. We look at it more as network virtualization that is policy-based and automated, plus the ability to dynamically insert services, and the ability to do that in a way that natively interoperates with a mixed-vendor physical infrastructure, so that we can start to converge physical and virtual, emerging and legacy, cloud and virtualized. That's really the key to this.

At the solutions level, too, a lot of our partners are building their own best practices, and we're capturing that with a reference architecture approach, sharing not only the hardware and software bill of materials but also the configurations. As an example, many of our customers are using a non-Juniper Layer 3 gateway. We've done interoperability testing and shared those configurations back, just to show that if it's an RFC 4364-compliant IP VPN router, we can interoperate with it day one. That eliminates the need for extra gateways, which means extra troubleshooting, extra complexity and extra cost, and puts a bottleneck in the system that, for many of these real-time video and mobility applications, adds further complexity.

So there are a few key design principles, and a lot of the discussions and debates around NFV keep coming back to these tenets: as we deliver NFV services, we can't compromise, number one, on what IP networks can already do in a secure and resilient way, while still addressing the new problems. Some of these are a little bit motherhood and apple pie, but when you go back and evaluate each of the alternatives against these directives, there's a scoring system that we've seen some of our more mature carriers start to put together. And lastly, we've had a lot of success in growing the ecosystem, I think partially because we use native IP principles in how we do this. Obviously we're highlighting one of our very strategic partners here today in terms of the services we deliver, and we wanted to share that, because in the keynote this morning there was a lot of discussion about the differences between NFV and broader cloud services. As an ecosystem we're learning that iteratively and trying to share it with the broader OpenStack community and OPNFV; I'm on the board of OpenDaylight, and there are lots of discussions in those open forums. And with OpenContrail being its own sub-community on some of this, we can get deeper into a lot of the data and the comparisons. So with that, I'll turn it over to Don and we'll get into the meat of it.

Right, thank you.
My name is Don Keeler, and as Jennifer mentioned, I'll share with you some of the experiences we've had working with Juniper in a trial we've been conducting for a number of months now at a mobile operator. One of the things we're focusing on is the mobile packet core. For those not familiar with the components that make up a mobile packet core, the one we'll specifically talk about today is the MME/SGSN, which facilitates mobility: tracking mobiles, paging devices, and authenticating the devices through the HSS or HLR. Then you have the gateways, which effectively handle the bearer traffic.

As we look at moving to NFV and virtualizing these nodes, we wanted to look at how we make this cheaper for the operator, because they're worried about total cost of ownership, as well as less complex to build out, especially when it comes to scale, because each time you add or grow one of these nodes, you have to touch a number of nodes around it. And it has to be carrier grade: we have to maintain the five-nines reliability that operators expect these nodes to provide, even though they're now being offered in a virtual environment.

One of the things we were seeing operators concerned about was the fact that call models are always changing; they're very unpredictable. Some of our operators found that the active-to-idle and other state transitions a mobile makes were four times what they had anticipated. So you had a call model that was constantly varying, especially with the introduction of new devices, while ARPU stays relatively flat, and these operators need to figure out how to continue to grow their services. AT&T was at an SDN/NFV conference in Dallas just a couple of weeks ago, where they said they anticipate their network's data capacity will grow more than four times in the next four years, while keeping their capital outlay flat over that time. So how do you continue to deliver the services expected of an operator with that kind of growth when revenues are relatively flat?

So obviously, looking to NFV, what we're saying is: okay, we want to run on COTS hardware, but it still has to maintain the reliability we expect of these network elements. It has to support all the generations, 2G, 3G, 4G, cost-effectively, and it has to scale out as needed when you see what's happening. Now, one of the things we found as we went through all this was that we couldn't just take the app we had running on ATCA, port it onto a VM and say, there it is, it's virtual. The bottleneck it had when it was sitting on a purpose-built ATCA hardware platform was the same bottleneck it experienced when we simply ported it to a virtual environment.
So what we really had to do was figure out how best to deconstruct our application and make it so it could be, if you will, cloudified, where we could scale in multiple different directions depending on what was happening in the call model.

I just want to give a quick background on who Hitachi is. I'm actually based in Dallas. Hitachi has been in the communications business for over 70 years. We provide, mainly in Japan, CDMA, WiMAX and LTE network components, and Hitachi CTA is part of the network and telecommunications group of Hitachi Limited. In North America we're the entity that brings networking equipment from Hitachi Japan into the US market, so we did things like ATM switches and PBXs. In 2008, Hitachi and Nortel got together and won the KDDI LTE network; KDDI is the second largest mobile operator in Japan. When Nortel went bankrupt, that was problematic for Hitachi and KDDI, so Hitachi purchased the Nortel EPC assets at the end of 2009. I was part of that acquisition: we brought over the whole team that was building the Nortel LTE components, and that's how Hitachi came to build, in our case, the MME/SGSN as well as the gateways in Japan. We delivered our first prototype of an MME/SGSN utilizing some of Juniper's technology at the end of 2012, showed it at Mobile World Congress 2013, and now we're ready to deploy our MME/SGSN as a VNF in 2014.

So what did we do? As I mentioned, we had to decompose our MME/SGSN into various different VMs, which allows us to scale in the various dimensions. Typically, for a control device, we have to scale based on subscribers, on throughput, and on signaling. When you want to add a bunch of eNodeBs, that takes up capacity as well as memory space; throughput applies mainly to the 2G/3G SGSN side; and then there's the ability to add subscribers from a call-processing perspective. What we found with our ATCA offering is that we would always hit the bottleneck in just one of those dimensions. Deconstructing the app allowed us to scale each dimension independently: not only when you hit the maximum subscribers, but you can also continue to scale your data throughput and your signaling events. If you were looking at a radar diagram, we could scale quite significantly, independently in each of the various domains.

The other thing we really had to make sure of is that we could still maintain five-nines reliability. However you draw the ring fence around the NFVI and the VNF, you still have to deliver five nines. It has to be deployable and you have to be able to upgrade it. One of the challenges is system upgrades: in the app we can certainly upgrade each component in-service, but we have to be able to keep doing that, including for the NFVI. Pulling this all together, we've been able to demonstrate this in a lab environment with the operators right now.
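To illustrate the idea (and only the idea; the VM role names, metrics and thresholds below are invented placeholders, not Hitachi's actual design), independent scale-out per dimension might look like this:

```python
# Hypothetical illustration of scaling each dimension independently.
# Role names, metrics and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class Dimension:
    role: str        # VM role that absorbs this kind of load
    used: float      # current load
    capacity: float  # capacity of the currently running VMs

    def needs_scale_out(self, headroom=0.8):
        return self.used > headroom * self.capacity

def scale_decisions(dimensions):
    """Return the VM roles to add. An ATCA chassis would instead hit
    the first bottleneck and force a whole-node expansion."""
    return [d.role for d in dimensions if d.needs_scale_out()]

state = [
    Dimension('session-vm',   used=9.2e6, capacity=10e6),   # subscribers
    Dimension('signaling-vm', used=4.0e4, capacity=1.2e5),  # events/sec
    Dimension('userplane-vm', used=18.0,  capacity=40.0),   # Gbps (2G/3G)
]
print(scale_decisions(state))  # -> ['session-vm']
```

Only the dimension that is actually near its limit grows; the other two keep running on the VMs they already have.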
We find that it really does reduce the OPEX complexity an operator has, because of the way we scale. We focus on having a single IP point of presence, so that even as you scale all these VMs, it doesn't look like anything changed to an eNodeB or to the HSS/HLR. You're able to scale, and yet you don't have to touch any of the nodes around you; all you need is hardware available to build it out. In our labs we're running on all sorts of different hardware, so we really don't care what it's on, whereas almost everybody else's version of an MME, SGSN or P-Gateway is on purpose-built hardware. So you can see that moving to a virtual infrastructure really lets you capture the value of COTS hardware. And because of this isolation and the ability to add capacity, we really can deploy these things much faster, and we've seen that as we've moved along.

The other thing we noticed is that we could scale down. We never could scale down very well before, because on an ATCA chassis you had to deploy so much hardware just to serve the very first subscriber, whereas now, on a single server, you can have an MME, SGSN and P-Gateway all running and fully redundant. So you're able to scale down, and obviously continue to scale up in quite a significant way.

What's really been nice in our partnership with Juniper is that we've been able to take this VNF we've been showing and demonstrate to the operator that it is carrier grade, that it is ready to be stood up as an application in the network. It is highly available: we are seeing five-nines reliability as we beat it up, both in our labs and in the operator's lab. It can scale out, from a small configuration of just two servers up to one with 20 servers, so you can see the various degrees to which we can scale the application. And finally, you really can see the reduced total cost of ownership. One, because of our single IP point of presence: before, if you had to scale with ATCA, as soon as you put the next box down, you had to get everybody else involved to put it into the network and make sure everybody else could see it. So we're able to scale in a much more cost-effective way. And because we have multiple VMs that can scale independently of one another, we're not leaving resources unused the way we did when we deployed a single node; you utilize the servers' resources much more efficiently by adding only the VMs you need as you go.

I want to talk a little about some of the things we had to do. One challenge is that an MME has to be very deterministic about its capacity. The operator expects that you quote a capacity and that you meet it; otherwise you provide more hardware. Typically we'd like what you see on your left, where there are multiple VMs, it doesn't matter which CPU they're on, and everything works fine. But we know from experience that the scheduler isn't where it needs to be, and we were finding that our VMs were being starved; they're very integral with one another, and you can't have one starving another. So what we've done, right now, is pin cores, so that we get the deterministic behavior we're expecting of the application.
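For the curious, here is a minimal sketch of what pinning looks like at the lowest level, assuming a Linux host and Python 3; in a real deployment this is typically arranged through the hypervisor and scheduler rather than inside the guest itself:

```python
# Minimal sketch: pin the current process (say, a packet-processing
# worker) to dedicated cores so the kernel scheduler cannot migrate it
# and noisy neighbors cannot starve it. Core numbers are illustrative.
import os

os.sched_setaffinity(0, {2, 3})    # pid 0 means "the calling process"
print(os.sched_getaffinity(0))     # -> {2, 3}
```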
The other thing, as Jennifer talked about, is that Open vSwitch doesn't do what we need it to do. Juniper's vRouter solution provides us the robust networking capability that the application requires. I'm in the business of building an app, so we really need a partner that has the infrastructure that's needed, and we've seen that in what Contrail has brought us, whereas we couldn't get that when we were looking at just Open vSwitch.

So what we've been looking at is how we deliver something that's simple to deploy; that's really been our goal. We wanted to make sure that as operators scale, it scales in a meaningful way, but not an operationally complex way, and we think we've been able to do that. Because of the way we've re-architected the application, we can scale it to meet the needs of the changing call models we see, whether from subscribers or from the introduction of small cells, where operators want hundreds of thousands of cells attached to the network. And we're beginning to look at mechanisms to auto-scale it as needed by utilizing more analytics, so you can look at and plan for these unplanned events. With Contrail, we're using the APIs that are provided to us, and we see that as something we can continue to move forward with, so that as a VNF we can run on any NFVI the operator provides or requests. And most importantly, it has to be carrier grade. That's the thing: it will never get stood up if you can't meet the five nines that the operators want.
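On the analytics point, OpenContrail exposes the state it collects (UVEs) over a REST API on the analytics node, which is one way to feed auto-scaling decisions like the ones Don describes. A hedged sketch; the port and paths follow OpenContrail's documented conventions of this era but may differ by release:

```python
# Hedged sketch: polling Contrail's analytics REST API (UVEs) as an
# input to scaling decisions. Host/port and endpoint paths follow
# OpenContrail conventions (analytics API on :8081) but may vary.
import requests

ANALYTICS = 'http://192.0.2.20:8081'

# List virtual-machine UVEs, then pull per-VM statistics from each.
vms = requests.get(ANALYTICS + '/analytics/uves/virtual-machines').json()
for vm in vms:
    detail = requests.get(vm['href']).json()
    print(vm['name'],
          detail.get('UveVirtualMachineAgent', {}).get('if_stats'))
```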
Anyway, that's all we have. If you have any questions, we'd be willing to take them. Not all at once, please.

I have a question. You talked about the five nines, and you said you were doing things at the HA level and potentially other ways as well. Could you give more technical details on what techniques you're using to get to the five nines?

Well, I'll let Jennifer talk about what they've done in Contrail. We do typical things from a VNF perspective, where the app is fully redundant. Depending on the type of VM, it's either one-to-one, load sharing, or N+1. So we've done that in the app, and then we've looked to Contrail to make the NFVI robust.

Yeah, from a network perspective, with Juniper's broader routing and switching infrastructure we've been through this quite a bit in terms of maintaining SLAs with five-nines availability, and a lot of the lessons learned there were carried forward. We did a ground-up BGP implementation; that's where we get the unified control plane, so we don't have to reinvent how you ensure no single point of failure for every aspect of our software architecture. We have resiliency built in, whether it's the control nodes for the control plane function, the configuration management nodes, which are also completely scale-out, or the analytics nodes; for any given host, if you lose any one of those nodes, you never interrupt the service. That's a lesson we've learned in routing: when you lose a router, traffic doesn't stop. That was actually one of the key issues with some of the generation-one SDN controllers that we heard about from a lot of the carriers; it was unacceptable to take a step backwards on the expectations for resiliency.

The other aspect, once again a lesson carried forward from large carrier-grade networks, is the ability to do in-service software upgrades. We can have two different versions of the code running, in a salt-and-pepper type configuration, while you do the upgrade, and we've documented a lot of the best practices for large-scale upgrades across the physical and virtual infrastructure. The other aspect, which Don alluded to, is that we've exposed a lot of diagnostic and troubleshooting information. The instrumentation, which goes all the way down to the Linux kernel, gives the network administrator visibility that's unprecedented for overlay-type technologies: we can look at five-tuple flow statistics for every virtual machine, we can do pings and traceroutes, and we can look at latency between virtual machines or aggregated across virtual networks. So there's a huge amount of visibility, diagnostics and feedback loop there to ensure the resiliency of the system. Automation is also a key aspect, because with traditional device-level CLI commands there's always the ability to fat-finger a mistake that brings down the availability of the entire system. As we've learned in mobility controllers over the years, having a system-level topology view and automating those best practices cuts down on human error quite a bit.

Any other questions? Okay, well, thanks. Oh, yes. Well, I'm not the right guy to give you the technical information, but we can certainly provide some information on it. Yeah, we can share that as a follow-up. In the Juno release there is quite a good discussion, more broadly in OpenStack, around CPU pinning and some of the best practices. And then, specifically for the Hitachi example, I think the main thing is balancing the system, because the idea of pooling resources runs counter to the idea of pinning a CPU core. But there are some best practices being baked into the templates that may address some of that. But yeah, we can do a follow-up there.

Yes, yes. So right now we're running OpenStack in a Havana environment, with the Neutron plugin based on Contrail, as well as the VNF from Hitachi. Yeah, Contrail is upstream as of the Juno release, and a lot of the testing and deployment that we've done has been based on the Neutron APIs. There are some things we've done specific to this mobility solution; over time we expect more of those to go upstream.
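As a pointer on the CPU-pinning follow-up promised above: in OpenStack releases after this talk, the hand-tuned pinning Don described was formalized as flavor extra specs (hw:cpu_policy arrived around the Kilo release). A sketch with python-novaclient, with placeholder credentials:

```python
# Hedged sketch: expressing dedicated-CPU placement as flavor extra
# specs via python-novaclient. hw:cpu_policy landed after this talk
# (around Kilo); credentials and URLs are placeholders.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'demo',
                     'http://keystone.example.com:5000/v2.0')

flavor = nova.flavors.create(name='vnf.pinned', ram=16384,
                             vcpus=8, disk=40)
flavor.set_keys({'hw:cpu_policy': 'dedicated',        # pin vCPUs to pCPUs
                 'hw:cpu_thread_policy': 'isolate'})  # avoid SMT siblings
```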
Okay, I have a question. I'm from China, actually from the China community here. We've talked about OpenStack for NFV with a couple of friends from Huawei and ZTE, and the key challenge for them is how to ensure availability when they implement OpenStack, because there are different levels of availability: VM level, L2, L3, or even the gateway or other virtualized network components. So when I look at this presentation, I think you enable the hardware plugin in OpenStack, right? So the key is to shift the availability work to hardware. My question is, how long has this plugin been in development, and what's the maturity level of these plugins?

Well, said another way, one of the key tenets from a Contrail perspective is: don't reinvent the wheel if something works well. Part of the long pole in delivering NFV solutions is that there are new protocols trying to address problems that are actually already addressed. We just came out of a three-week bake-off in Asia with one of the largest carriers, and they did a very detailed, very thorough benchmark test specifically around availability for OpenStack-enabled systems. That stressed the physical network infrastructure, which was multi-vendor, as well as the virtual overlay, as well as the capabilities of certain services, which were a mix of mobility-type services and broader cloud services: firewall, load balancer, et cetera. That was probably one of the best data-driven bake-offs we've seen specifically around HA, for the network component, but also, more broadly, around where the gaps are. There are some well-known gaps in other aspects of OpenStack that came out of that study, and I believe some of that will be shared in future OpenStack sessions, because the intent was to share it with the community more broadly. As some of the non-carrier but deployed cloud builders start to share their benchmark tests and contribute them back upstream into the OpenStack community as well as the OpenContrail community, we've had an open discussion about which gaps are well known and which are already served by the physical network infrastructure and don't need to be duplicated in the virtualized overlay.

Okay, I guess with that we're probably out of time. Thanks a lot for joining us, and enjoy the rest of the week. Thank you.