Okay — thanks, everyone, for being here. My name is Jennifer Lin; I lead product management for the Contrail team at Juniper, and this is our sponsor session. What I'm going to do here is give a little bit of what we're seeing in the overall marketplace and talk specifically about OpenContrail, which, as many of you know, is an open-source project that we started when we released Contrail into the open-source community. I'll give a little update on what's going on there — we've made some enhancements that I'd like to make sure I review with the team here — and also talk a little about the market more broadly. I was in Düsseldorf, Germany two weeks ago at the SDN & NFV World Congress, and that's obviously a very different forum than this group here, but a lot of the discussions, learnings, and questions that came up there are relevant, so I want to make sure we leave time at the end to get a little feedback from the folks here as well. For those that don't know: there's been a lot of talk about what problem we're solving, and a lot of these things, for us on the Contrail team,
haven't really changed — but obviously there is a lot of change going on, not just in the networking industry but in the IT industry more broadly. This discussion about old IT and new IT is very relevant from a network perspective: moving from a model of device-level configuration to system-level automation is a fairly large change. We've seen controller-based architectures before, but I was just in the panel around Neutron, and I think the question about overlays is not really a question anymore — people are embracing overlays. But there are still some gaps between what an overlay can do and what it needs to do as part of a broader network system. One of the questions that came up in that panel was: I don't want something that's specific to a data center implementation; I want this to work on a global basis, but I need to be able to localize the policies and localize the identity stores. That's exactly the balance we want to talk about here: how do we build a hierarchical architecture that enables distributed applications to do a lot of this automation?

The other major change for the networking industry is that suddenly everybody is embracing open-source technologies. OpenStack continues to be one of the largest open-source projects in the industry, but for the networking industry overall, open source was not necessarily mainstream. A lot of the open initiatives — OpenDaylight (I'm on the board of OpenDaylight for Juniper), OPNFV (where Juniper is also a founding member and platinum sponsor), and obviously OpenStack — and a lot of the standards-based initiatives in the past were an open industry dialogue, but it's a very different landscape now. How do we get to this model of exposing vendor-agnostic APIs to enable an ecosystem to play together?
That's fairly different for the networking industry. From a Contrail perspective, we serve three very distinct segments, and the deployment and adoption cycles are actually quite distinct. We draw a big distinction between the types of customers that are somewhat greenfield — maybe some of the SaaS providers who are building their own applications and have a little more flexibility in terms of the application environment — and the traditional enterprise, whose version of a virtual private cloud is really about enabling IT as a service. Even at Juniper, we're building our core services on an OpenStack cloud; we do our development on our own technology. It allows the developers to move with more agility. Open source is one key piece of that, and it was something our development team really wanted in order to keep the pace. But it's also the transparency and the engagement with the broader community — as it grows, we're seeing that as a real proof point for OpenContrail as well. That said, it's also an operational change for a lot of the segments you see here.

And then finally, the carrier space — obviously a good chunk of Juniper's core business. With this discussion about hybrid cloud and cloud migration, and how to federate these environments, a lot of even the non-carrier segments come to us and say, "well, you guys understand the carrier space." As the enterprise tries to do IT as a service — and the big banks tell us this a lot — they really need to think like carriers.
They need to support things like multi-tenancy; they need to bill back different business units based on usage, which the cloud providers are doing as well. But this is a new concept: as IT moves from a cost center to carrying a P&L and charging back each of the development teams or business units, that's a pretty big business-model change. Many in the cloud industry will say cloud is more of a business model than a technology. There are some very distinct things in terms of the level of user empowerment — the ability to go to a self-service portal and get resources on demand, without filing a trouble ticket. That is a very big change.

One of the things that is a key tenet from a Contrail perspective is not to reinvent the wheel in terms of how networks are done. This slide is fairly high level, but this notion of workload mobility — whether within clusters, across data centers, or across different types of cloud environments — was a design principle from day one, and a lot of the thinking around the standards work that went into Contrail with the L3VPN end-system draft was essentially to not break what's already working. L3VPNs and BGP peering have been in large-scale networks for some time. If you look at what folks like Google Compute Engine are doing, the reason they can spin up 200 instances in less than 30 seconds is that they've distributed a lot of the technology. Obviously they're running 300,000 servers per data center, and their applications are maybe somewhat more monolithic and greenfield than those of some of the large enterprises — the large banks who still have mainframes in their infrastructure. However, that simplicity and that level of automation is something where we're taking a page out of the book of both the large-scale carriers and the largest of the cloud builders, and we believe there are
very few architectures finding that balance: doing it the way the emerging cloud builders do — to keep the agility and support new applications, things like Docker containers, from the beginning — but also in a way that, day one, ties into the existing physical networks. So we can peer directly, in a VPN, with the existing layer 3 gateway that's already there, whether it's a Juniper MX, a Cisco ASR 9K, or someone else's router.

When we started this journey with Contrail, we saw OpenStack as the project in the industry with the most momentum. It was still fairly early in the days of OpenStack, and Neutron — or Quantum, at the time — was not the central discussion. Now, what you're seeing in a lot of the BoFs and discussions around Neutron is that the networking model needs to get to the point where it serves these large-scale systems, and the networking discussions are actually quite different than they were in the early days of Quantum. We're not talking about L2 adjacency as the answer anymore. We're talking about: how do we do distributed routing? How do we do multiple service chaining? How do we do very distributed policy enforcement for very dynamic policies that can't be loaded statically up front?

The networking problems are one piece, but the system-level challenge with a software stack is something OpenStack has done a very good job on. Many of our more network-centric customers have asked us to play a bigger role in looking at the system overall — the components of the stack, the architecture around it, the best practices, etc. — and I'll talk a little more about that. In the middle, yes, there's this notion of not just the Neutron component but the entire software stack as a binary that gets loaded in. There are a lot of best practices in terms of how you actually rack and stack this.
How do you deploy it? In what order do you do upgrades? How do you troubleshoot the physical and virtual network? There are a lot of questions still in the networking industry as this transition happens. When Juniper acquired Contrail in December of 2012, the benefit we had as Juniper was a fairly software-centric mindset with Junos: we had a real-time kernel, a modular operating system, and a very good cadence in terms of new software delivery and code quality. That's allowed us to take the best of what we can do as a broader company while not tying in any hard way to Junos — loosely coupling, but showing a lot of the benefits at the system level. Now, the reality in a lot of our Contrail deployments today is that we've deployed in production in completely mixed-vendor environments — from a switching perspective, a routing perspective, and a services-appliance perspective.

The architecture was defined that way. More than three years ago, one of our Contrail co-founders, Pedro Marques, co-authored the L3VPN end-system draft in the IETF with AT&T, Verizon, and other large players. The problem they were trying to solve was to define a vendor-agnostic architecture and software that would solve the issues in a virtualized data center around multi-tenancy, access control, and workload mobility. A lot of the paradigms you'll see in the data models we use come from a mobility mindset — and in this case we're not talking about wireless 802.1X clients; we're talking about virtual machines that may move from one cluster to another, or across clouds. IP has solved that problem, and we carry a lot of those learnings forward in the Contrail architecture.

At the beginning, when we showed this slide, it was really to reinforce the fact that we have abstracted at various layers and could support any hypervisor.
We could run it on any x86 Linux environment; we could support any encapsulation. What I wanted to update folks on is that we have now implemented on various hypervisors: we started with KVM, we've done Xen with other stacks, we've implemented ESXi with the vRouter implementation, and we've also done a lot with bare metal and Linux container environments without a hypervisor. As many of you know, the Googles and the Facebooks of the world don't use hypervisors. So we're serving a lot of these web-centric customers who essentially are not solving the same IT problem that much of the traditional enterprise was — they found a way to solve the segmentation problem and still get the effect of pooled resources across compute and storage without the hypervisor. That's why there's so much attention around Docker.

Once again, I think there's a lot of FUD out there that we only work with a Juniper hardware and software infrastructure. As I mentioned, a lot of our early deployments prove, from the customer perspective, that this absolutely works. We've even started sharing some of the configs for non-Juniper gear in terms of best practices, both on the layer 3 gateway side and on the switching architectures.

The other piece is that, from the beginning, a lot of the intellectual property was around the control plane: to enable system-level automation, the control plane needed to be well defined. In the early days of SDN, there was probably a disproportionate amount of discussion around "what is the encapsulation?" Today we support many different types. We started with GRE, because that's pervasively supported; we will support MPLS over UDP, and we will support VXLAN.
We see a roadmap as this evolves; obviously there are some that are looking at MPLS over MPLS. The piece of MPLS that we retain is the 20-bit label to identify the virtual network. As we've seen — and there was a recent SDN blog by one of the founders of SDN, Scott Shenker, where he reinforced this — SDN, too, should not reinvent the goodness of what we've learned to do with things like MPLS. But we have to solve the new problem, and we have to do it in a way that drives rapid adoption without waiting another three years for protocols to be pervasive in the network.

On the segmentation problem: in the Contrail architecture, the concept of VLANs goes away, and segmentation becomes this notion of virtual networks. What used to be somewhat expensive when tied to physical hardware is, in a software architecture, virtually unlimited: you can define a virtual network in a way that's relevant from an application and user-context perspective and make that the model of segmentation. You can have many different slices of virtual networks and enforce policies based on those abstractions. As I mentioned, we co-authored the group-based policy blueprint within OpenStack, probably over a year and a half ago, really trying to balance this: how do we address some of the host-level enforcement issues while simplifying a lot of the policy administration, so that it's logically centralized but with distributed enforcement?

That also means that, from a security perspective, you hear a lot about microsegmentation and the ability to do a distributed firewall in the Linux kernel. We have supported those models from the beginning — tenant isolation down to the Linux kernel with the vRouter. And it's partially from an application-optimization perspective, with things like localized QoS.
It's also, from the other side, a very good way of doing distributed security enforcement — whether that's blocking certain types of traffic, inspecting before allowing, etc.

The other thing that has made it possible to move into deployments quickly is that we keep the expectations of the physical switching infrastructure relatively straightforward. Like the large cloud players, we expect any-to-any IP connectivity in an IP fabric. Obviously not everybody has built a Clos-fabric, Google-style data center, and as we make that transition with many of our enterprise customers, we can support the Contrail architecture — but many of them are moving to that type of ECMP, Clos-style fabric. Because with the web loads, you're not keeping 80% of the traffic within a VLAN anymore; if you look at Amazon and Google and the patterns of their traffic — even Hadoop clusters — we're now in a very distributed model, constantly moving east-west traffic. So if you look at a lot of the video-analytics companies and the gaming companies, they're really excited about this notion of running essentially a localized routing table, so they can do a local lookup and move from one host to another without some central bottleneck. That's also been a lot of the issue with some of the alternative solutions that use software gateways, which not only add cost and complexity but also put an extra bottleneck into the network infrastructure. So with an optimized layer 3 fabric, we're trying to reduce the number of hops, so that these real-time applications can take advantage of an efficient network.
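The "localized routing table, local lookup" idea above can be sketched in a few lines. This is a minimal illustration, not the vRouter's implementation: the table entries and next-hop names are hypothetical, and real forwarding happens in a kernel module, but the longest-prefix-match-on-the-host principle is the same.

```python
import ipaddress

# Hypothetical per-host forwarding table: each entry maps a prefix in a
# tenant's virtual network to a next hop, so lookups happen locally on
# the host instead of at a central software gateway.
LOCAL_FIB = {
    ipaddress.ip_network("10.1.1.0/24"): "tunnel-to-host-A",
    ipaddress.ip_network("10.1.0.0/16"): "tunnel-to-gateway",
}

def local_lookup(dst: str) -> str:
    """Longest-prefix match against the host-local table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in LOCAL_FIB if addr in net]
    if not matches:
        raise LookupError("no route to " + dst)
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return LOCAL_FIB[best]

print(local_lookup("10.1.1.7"))   # matches the more specific /24 entry
print(local_lookup("10.1.9.9"))   # falls back to the /16 entry
```

Because every host holds the routes it needs, a packet to another host takes the tunnel directly — no extra hop through a central bottleneck.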
I mentioned the reference-architecture approach. A lot of that has to do with not just the technology itself: a lot of the challenges of OpenStack are in the provisioning, in the troubleshooting, in the correlation between the physical side and the virtual side — understanding, is it an application problem? A network problem? A storage problem? So the goodness of OpenStack in converging the IT systems is one piece, but we were hearing again and again that the network guys are losing visibility. They can't ping a VM; they can't do a traceroute; they can't do a packet capture; they can't understand aggregate throughput and flows for virtual machines within a virtual network. That's the type of thing we've really focused on — the analytics, the troubleshooting, the reference architectures, etc. — and we work with many different partners in this journey. We've open-sourced our code; we've made our bug tracking fully available to our partners.
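The visibility problem described above — aggregate throughput and flows per VM within a virtual network — is at its core a roll-up over flow records. A toy sketch, with made-up record fields and names (not the Contrail analytics schema), might look like:

```python
from collections import defaultdict

# Hypothetical flow records as an analytics collector might receive them:
# (virtual_network, vm, bytes). All values here are illustrative.
flows = [
    ("vn-red",   "vm-1", 1200),
    ("vn-red",   "vm-1",  300),
    ("vn-red",   "vm-2",  500),
    ("vn-green", "vm-3",  800),
]

# Roll per-flow byte counts up to per-VM, then to per-virtual-network totals,
# so an operator can drill down from a network to the VMs inside it.
per_vm = defaultdict(int)
per_vn = defaultdict(int)
for vn, vm, nbytes in flows:
    per_vm[(vn, vm)] += nbytes
    per_vn[vn] += nbytes

print(dict(per_vm))
print(dict(per_vn))
```

The point is the two levels of aggregation: the same records answer both "how busy is this virtual network?" and "which VM in it is responsible?"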
We've put all of our test scripts and our documentation out there, which has helped us not only engage our customers but, just as importantly, engage some of our technology ecosystem partners. The rate at which we're moving with these releases — probably every six to eight weeks — is definitely an agile software-development cycle.

If you look at the empowerment-of-the-user concept that was a big thread in mobility: the cloud user is king. He or she gets to do what they want, when they want, from a self-service portal, and it's served up immediately. At the same time, the folks that are going to be the tenants of a virtual private cloud are the folks developing applications, and maybe network services for NFV, and they want to be able to access the infrastructure and go live without a lot of overhead in terms of IT and administration. So this next role — the cloud administrator — is really important in terms of setting up front what those policies are. We worked with one major SaaS provider who, from the beginning of the engagement, gave us about 40 application and user templates that needed to be enforced in the overlay and on the physical network. Those included network and security policies and things like load-balancing policies. They have a three-tier web-application architecture: the back-end database should not be talking directly to port 80 traffic — if you see that, block it; all port 80 traffic should go through the web-server tier. Those types of high-level rules can be defined up front and then enforced at the time of the transaction.

We are a software team with deep networking knowledge, so it's about understanding this broader problem and then bringing it back to empower the network guy to do the troubleshooting and understand what's going on in the physical and virtual network.
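The three-tier rules described above are easy to express as data that gets enforced at transaction time. This is a hedged sketch — the rule fields, tier names, and ports are hypothetical, not the Contrail or group-based-policy schema — but it shows the "define once up front, enforce per transaction" shape:

```python
# Ordered rule list: first match wins, default-deny otherwise.
# (src_tier, dst_tier, dst_port, action) — all names are illustrative.
POLICY = [
    ("internet", "web", 80,   "allow"),
    ("web",      "app", 8080, "allow"),
    ("app",      "db",  5432, "allow"),
    ("internet", "db",  80,   "deny"),  # the DB must never see port-80 traffic
]

def evaluate(src_tier: str, dst_tier: str, dst_port: int) -> str:
    """Check a transaction against the pre-defined policy templates."""
    for s, d, port, action in POLICY:
        if (s, d, port) == (src_tier, dst_tier, dst_port):
            return action
    return "deny"  # anything not explicitly allowed is blocked

print(evaluate("internet", "web", 80))  # allowed through the web tier
print(evaluate("internet", "db", 80))   # explicitly denied
```

Because the policy is logically centralized data, the same rules can be distributed to every enforcement point without being re-authored per host.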
I'll talk through some of the demos. One of the things we're showing is, essentially, how you do correlation across the physical and virtual topologies — how you do topology discovery with LLDP, so that if something goes wrong you can quickly identify which flows, on which VMs, map to which port on the switch, etc. And that's a vendor-agnostic problem.

On some of the key capabilities from a Contrail perspective: it's not that these services are new services — the way they're being applied is quite new. We do IPAM, DNS, and DHCP in the vRouter; if a customer already has those in an existing environment, they can apply them, but in many cases localizing these capabilities is much more efficient and much more scalable, and allows more portability as well. There is still ongoing discussion about how much routing and switching we need to be doing in a host-level Linux kernel. What we can say is that, with more and more of these real-time emerging applications, the level of native layer 3 capability going into the host is increasing. We have a local video-analytics company in Mountain View, California that was recently acquired; they're running BGP in every host, essentially for video-latency reasons. They then had a fairly large issue in terms of how to efficiently distribute the routes and avoid full route-prefix population down at the host level. I think we've balanced that: we only populate route prefixes in the vRouter for the locally relevant workloads. That's taking a page out of the book of MPLS VPNs once again — a lot of the problem folks were trying to solve there was to federate many different subnetworks with a common architecture, and now we're seeing that a lot in cloud. Amazon's not going away; OpenStack private clouds are not going away.
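The "only populate locally relevant prefixes" idea above is the same filtering trick MPLS VPN PEs use with route targets. A minimal sketch, with made-up virtual networks, prefixes, and label values (the only real constraint carried over is the 20-bit label space mentioned earlier):

```python
# Full overlay routing table as the controller sees it.
# Each virtual network is identified by a 20-bit label (0 .. 2**20 - 1).
ALL_ROUTES = [
    # (virtual_network, prefix, label) — illustrative values
    ("vn-red",   "10.1.1.0/24", 0x00011),
    ("vn-red",   "10.1.2.0/24", 0x00011),
    ("vn-green", "10.2.0.0/16", 0x00022),
    ("vn-blue",  "10.3.0.0/16", 0x00033),
]

def populate_vrouter(local_vns):
    """Install only routes for virtual networks with workloads on this host,
    avoiding full route-prefix population down at the host level."""
    return [r for r in ALL_ROUTES if r[0] in local_vns]

# A host running only vn-red and vn-green workloads never carries vn-blue state.
fib = populate_vrouter({"vn-red", "vn-green"})
print(len(fib))
```

Each vRouter's table stays proportional to what is actually scheduled on its host, which is what keeps the per-host state bounded as the cluster grows.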
This needs to work over a common network infrastructure.

We also focus a lot on things like service chaining. The network control plane we use is very consistent, so when we insert either a physical or a virtual service, we essentially next-hop route to it. That allows us to be extremely flexible in the types of services we can insert, but it also allows us to define a policy once and then just change the configuration model as new services are inserted. As OpenStack continues to evolve with things like Load Balancing as a Service, and those APIs get tighter in terms of definition, we will obviously support more of them. Today we're already supporting LBaaS APIs for HAProxy, and we're working on physical F5s as well as virtual F5s — some of those demos are being shown.

A lot of the definition comes with understanding how this architecture was built: the balance between what's distributed and what's centralized, and how we build a scale-out infrastructure. There is no upper bound. In the broader discussions around Neutron, there was some talk about the network conking out once you get to 200 nodes. Well, in a Contrail infrastructure, some of the demos are showing a thousand vRouters for a thousand nodes and still not hitting any type of scaling limitation. The other piece — and this is something we've learned from large routing systems, which are probably the largest distributed systems out there — is that you can't have a single point of failure. When you lose any single node,
you can't interrupt the traffic. So we've worked a lot over the last few months, in a lot of different bake-offs in a lot of different countries, on HA and how it works, both from an OpenStack perspective and from a Contrail architecture perspective. I just talked to another network provider; they're getting squarely behind OpenContrail relative to some of the other SDN solutions, because day one, no matter whose network it is, we can interoperate. That's always nice to see. I mean, we compete in a way, but the network's job is to interconnect — so when another networking company, which may or may not have had success building an SDN controller, comes and says, "we're going to get behind this because we just need something that works, and works with our stuff," that matters.

I mentioned the enterprise problem, and it is a little bit different. If you think of the web guys, they have a Clos data center today, but they don't have a lot of legacy mainframes; they don't have a lot of layer 2 storage keepalives. A lot of that is a networking challenge as well. I think we learned some time ago in IP networks — when you had SNA and Token Ring and AppleTalk and DECnet — that you don't rip it out. What we're saying is: federate those environments.
That's why the layer 3 gateways are still extremely important. We can do enforcement at the first hop, or — if it's a native environment where we already have the Linux kernel vRouter — we can do it there. That is again a way of using the hierarchical architecture to address these different workload environments. OpenStack is becoming a de facto framework, at least for how people are thinking about heterogeneous system orchestration. So we don't just support OpenStack — we're using it as a very good proxy for how we federate these environments from a network perspective, while also anticipating the application changes that are going on, where more and more of this will be automated.

One of the specific things our carrier customers are very interested in is network function virtualization, and we've been involved in a lot of bake-offs around that. I presented another session two days ago here at the OpenStack Summit with one of our partners, who has built a composite virtual network function for the 4G packet core, and they had to rewrite some aspects of that application — because it's not enough to take an application, put it in a virtual machine, and ship it as a virtual appliance. A lot of legacy applications can't take advantage of a scale-out model, and there's no reason to rip them out and put them into a cloud if the application doesn't take advantage of an elastic architecture. But many of those services do, and as we work with, for instance, our own firewall service, our scale-out caching service, and partners around load balancing, that model starts to look very consistent: create a service template, define it once, and then just instantiate it as new workloads come up. That allows us to balance the definition with the policy application. The
other thing I mentioned we've really been working on is the physical-to-virtual interconnect, and the bare-metal question. Initially a lot of the use cases were around virtual machines; now, increasingly, they're around supporting either bare-metal servers directly or Docker containers without a hypervisor. I can't tell you how many financial-services-type questions come up where they say, "we were moving from the proprietary hypervisor to the open-source hypervisor and everything was going really well; now we may just beeline straight to a Docker container environment, because our application teams are telling us they need a Docker-enabled cloud within four months." In a lot of the deployments we've done, we got in fairly late in the game and got through a bake-off, and when there's a real problem, what many of our customers have found is that we can meet the scaling and availability expectations while addressing this new problem. The way we address things like Docker containers is exactly the way we do segmentation and policy enforcement for virtual machines — but now, instead of a tap interface, we're essentially using Linux namespaces and Linux containers. So there are many ways: we can have a vRouter sitting on the bare-metal server as a Linux kernel module, or we can increasingly use the top-of-rack switch as a gateway. We're showing another demo where we're supporting OVSDB to essentially bridge a bare-metal segment into the top-of-rack switch — not just our top-of-rack switch, though preferably ours — and then, once again, without extending all the way down to the host, we can map it to a virtual network and enforce a policy. So the physical reality is that you have lots of virtual machines in unpredictable locations, or Linux containers — maybe a hundred per host — and they can be in any server,
in any rack, in any cluster, in any data center, and this architecture will still hold up. The broader problem as this evolves is: how do we now also apply the services that the network administrators and security administrators need, for compliance purposes or whatever it is? The way we do that, as I mentioned, is through next-hop routing, so we have a lot of flexibility in how we insert a service: we steer the traffic to that next hop and apply the service. Say traffic needs to go from virtual network red to virtual network green, but through policy it's enforced that it first has to go through a firewall service and a load-balancing service. When you change that firewall service from one vendor to another, the policy doesn't change — though the service template may change, with the new configuration parameters. As Firewall as a Service APIs get better, there will obviously be a common way to push the configuration down. Today we've found a lot of success with the service-orchestration and service-template model in how people define that service chaining in their data centers. As part of this transition, one of the SaaS providers we're working with was essentially using layer 2 tromboning to a physical F5 load balancer; as they evolve to a more native layer 3 architecture, they'll have more distributed load-balancing services inserted. And very often that's incremental — it's not a replacement.
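The red-to-green chain just described can be sketched as a policy lookup that splices service instances in as next hops. The instance names are invented for illustration — the point is that swapping the firewall vendor changes a service entry, not the policy itself:

```python
# Policy: traffic from vn-red to vn-green must traverse a firewall, then a
# load balancer. Service instance names are hypothetical placeholders.
SERVICE_CHAIN = {("vn-red", "vn-green"): ["fw-instance", "lb-instance"]}

def resolve_path(src_vn, dst_vn):
    """Return the hop-by-hop path, inserting any chained services as
    next hops between the source and destination virtual networks."""
    services = SERVICE_CHAIN.get((src_vn, dst_vn), [])
    return [src_vn] + services + [dst_vn]

print(resolve_path("vn-red", "vn-green"))  # via firewall, then load balancer
print(resolve_path("vn-green", "vn-red"))  # no chain defined: direct
```

Replacing `"fw-instance"` with another vendor's instance leaves `resolve_path` and the policy untouched — only the service template behind that name changes.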
We have one large financial-services customer that has about 3,000 security virtual instances in their implementation, and that doesn't displace the pair of firewalls that sit at the demarc between the WAN and the data center — they're putting firewalls between tenant groups.

I have a snapshot here, for instance, just showing that we're scaling to a thousand nodes and beyond with this architecture — to make the point that we don't see this as a "wish we could"; it's something we actually have working today as customers scale. Obviously there's been a lot of discussion about whether you should cap it at 200 nodes and just add new controllers for each set of a couple hundred nodes. We don't believe that's the right thing from a cost and efficiency perspective. So we're doing a lot of bake-offs now, and we would ask that our customers and partners hold us to the data and to data-driven decisions. There's been a lot of emotion and a lot of lobbying in the open efforts, and now we're starting to see a lot of the data. Some of our customers are even sharing the test scripts and test plans they've used, so there can be best practices around what we're actually testing for and where we need to improve the model and the architectures.
The other thing I mentioned is the ability to do physical topology discovery and then map flows — drill down into specific ports, into specific virtual machines, and aggregate that back up. From the very beginning, as we moved into operational environments, many of our customers were asking for more guidance on not just the overlay piece but the interaction with the physical underlay.

In terms of Docker: this is something we started with a blog, and we've now put out more information on OpenContrail and the types of requirements we're seeing there. The Docker guys endorsed us at the last OpenStack Summit; they like the way the networking is simplified but also highly distributed. If you're looking at Contrail, I'd encourage you to take a look — we've shared a lot of the configurations, and we have some videos out there now. Obviously there's a lot of excitement around Docker. Thanks to our marketing team, there's also a host of new demos being put out — just a couple of minutes each — to show the capabilities we've added, some of the services partners, and how they're used. Some of those are posted on YouTube; others you'll see show up through OpenContrail.
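The container-attachment model mentioned earlier — a network-namespace interface in place of a hypervisor tap interface — can be modeled very simply. This is a toy data model with invented names, not the vRouter agent's API; it just shows that a container port maps to a namespace plus a virtual network, which is why VM-style segmentation and policy carry over unchanged:

```python
class VRouter:
    """Toy model of a host vRouter tracking container attachments."""

    def __init__(self):
        self.ports = {}  # interface name -> (network namespace, virtual network)

    def attach(self, container, vn):
        """Record a veth-style interface plumbed into the container's
        namespace and bound to a tenant virtual network."""
        ifname = "veth-" + container
        self.ports[ifname] = ("netns-" + container, vn)
        return ifname

    def virtual_network_of(self, ifname):
        """Policy lookup key: which virtual network does this port belong to?"""
        return self.ports[ifname][1]

vr = VRouter()
port = vr.attach("web-1", "vn-red")
print(port, "->", vr.virtual_network_of(port))
```

A hundred containers per host just means a hundred such port entries — the enforcement point and the policy model are the same as for VMs.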
We're trying to color that too with blogs by the users. Many of our customers and partners have already blogged about not only the good experiences they've had but also the gaps they'd like to see us address. This afternoon we have our OpenContrail advisory board meeting with many of the folks here at the summit, and that discussion covers both business and technology aspects: we're looking for active feedback around community governance, roadmap, and lessons learned. We've opened up some of the community support to folks that have already deployed. What we're finding is that while the code is the same between OpenContrail and Juniper Contrail, the support model is what we monetize from a Juniper perspective. A lot of our customers start with OpenContrail, and they like the fact that it's an open, transparent model, but they want to interact with us commercially as Juniper because they're looking to scale and want to make sure it's supported.

There's a picture of our booth; we've had a lot of traffic through there, and it got particularly busy when we hosted the beer bash. This is just a snapshot of some of the partners we've been working with. What we've found is that, both from a systems integration perspective and from a layer 4 through layer 7 virtual services perspective, it's been fairly easy for folks to interact with the Contrail architecture, but I think we have more work to do in terms of how we're documenting and testing all of that. This also just recaps: it's been about two years now, and things have moved very quickly.
We GA'd Contrail in December of 2012, actually September of 2012 as open source, and you're starting to see some SDN players open-sourcing now. One of the key questions we asked ourselves when we started OpenContrail is that if you don't open-source when you start, it's very hard to do it successfully after the fact. Now that everybody has validated the open-source model, I think we will see more SDN players open-sourcing, but if you look at the way our development team uses the open-source model for their own agility, it's very hard to adopt after the fact; it kind of comes across as desperation when you do it too late in the game. For us it's become not only an internal differentiator, in terms of how quickly we're moving with new releases, but it's also allowed some of our partners and customers to already contribute code back. We had a case study with Cloudwatt, who's already in production; they've contributed quite a bit in terms of how they're doing load balancing, packaging, the SSL implementation, and so on, and they're also participating more broadly in meetups and that kind of thing.

The base infrastructure was the first thing, in terms of implementing the layer 3 VPN and systems architecture. On top of that, we submitted the AWS VPC API blueprint into OpenStack. Today that's still an open question, in terms of the extent to which OpenStack embraces other cloud environments from a network perspective.
We have to embrace that model. The notion there was to essentially take the same virtual private cloud APIs as one has in the Amazon cloud, run them in a private cloud on OpenStack, and not change the scripts. Whether it's that way or through Heat templates and other models, I think that networking problem is definitely one we want to differentiate ourselves with.

The other thing is that a lot of our customers are testing the Contrail implementation against the Neutron v2 APIs. Many of our customers have said a baseline requirement is that we stress the Neutron v2 APIs as part of their benchmark, not just for functionality and implementation of the APIs, but also for how we're doing API versioning and release management against those APIs, so we can keep the architecture loosely coupled. That was also a key debate topic in the last panel. At the last OpenStack Summit, a lot of folks were talking about the challenges with the default Neutron implementation, to the point that folks were questioning Neutron itself. If we start to make data-driven decisions and don't take certain things for granted, I think it doesn't make sense to say we should do away with Neutron; it makes sense to fix the issues.

The first part of this year we were officially upstreamed with OpenStack Juno. We kicked off the OpenContrail advisory board with some of our key customers like NTT, AT&T, Symantec, and Workday, and that list is still growing; we're seeing a lot of active involvement in that environment. And then there's the Contrail reference architecture: as I mentioned, we're not just looking at the networking piece, but the networking piece in the context of the broader architecture, as well as how this works with distributed storage.
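The kind of baseline Neutron v2 benchmark described above can be sketched as a small CRUD smoke test. The sketch below is written against the call shapes of the python-neutronclient v2.0 client (`create_network`, `list_networks`, `delete_network`); the `FakeNeutron` stub is hypothetical, just so the test can run without a live cloud, and the test function itself would accept a real `neutronclient.v2_0.client.Client` in the same way.

```python
def network_crud_smoke_test(neutron, name="contrail-smoke"):
    """Create a network, verify it is listed, delete it, verify it is gone.

    `neutron` is any object exposing the python-neutronclient v2.0 call
    shapes; pass a real Client to run against a live OpenStack cloud.
    """
    net = neutron.create_network({"network": {"name": name}})["network"]
    listed = neutron.list_networks(name=name)["networks"]
    assert any(n["id"] == net["id"] for n in listed), "network not listed"
    neutron.delete_network(net["id"])
    remaining = neutron.list_networks(name=name)["networks"]
    assert all(n["id"] != net["id"] for n in remaining), "network not deleted"
    return net["id"]


class FakeNeutron:
    """Hypothetical in-memory stand-in mimicking the v2.0 client calls."""

    def __init__(self):
        self._nets = {}
        self._next_id = 0

    def create_network(self, body):
        self._next_id += 1
        net = {"id": str(self._next_id), "name": body["network"]["name"]}
        self._nets[net["id"]] = net
        return {"network": net}

    def list_networks(self, **filters):
        nets = [n for n in self._nets.values()
                if all(n.get(k) == v for k, v in filters.items())]
        return {"networks": nets}

    def delete_network(self, net_id):
        del self._nets[net_id]


if __name__ == "__main__":
    print(network_crud_smoke_test(FakeNeutron()))  # prints "1"
```

Because the test only depends on the call shapes and not on a concrete client class, the same script keeps working across plugin swaps and API point releases, which is the loose coupling the benchmark is meant to check.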
We're assuming that not everybody is putting large EMC arrays into their clouds, and so on, but we have to balance all of that. I mentioned the scaling, the HA testing, and the correlation between physical and virtual; a lot of those things are ongoing. The native IPv6 implementation with the overlay was definitely a goal for us in calendar year 2014 that we hit. I think there were actually two other sessions from folks who took it upon themselves to do an open-source project around a distributed vRouter, and I think there is some convergence there. We like seeing those types of things in other communities, and we can divide and conquer with folks who have already tackled parts of that problem.

Okay, I think I've left a little bit of time for any questions or comments, so I'll open it up. Anything that you didn't hear about that you wanted us to address as sponsors this time? Yeah, please. I'm just wondering how you guys are actually cooperating with the other
relevant projects, like for instance Congress or even Neutron. Is it a kind of separate approach you're pursuing now, or do you really try to cooperate with the other related projects?

Well, I think they're quite different. For Neutron, we actually have a core contributor on the Contrail team, and we're now an official upstream Neutron plug-in with the Contrail piece, so we're obviously very invested in Neutron; our core contributions have been in Neutron. As far as Congress goes, this notion of audit and compliance is something we also take very seriously, and as that evolves I think we'll probably be more active specifically in Congress. But we actually co-authored the group-based policy blueprint from three summits ago, so this idea of the interplay with policy definition in a way that's not so granular but abstracted up to a group, or a virtual network as we think of it, is definitely not a new concept for us; that discussion has been going on for a while.

We're working with some of our key customers on this, especially in the financial services industry. We had a very large financial services customer advisory board meeting in New York with 18 of the major banks, and the thing they said to us from an IT perspective is: we're in the business of risk management, so we need an SDN-type efficiency model and automation, but you can't make any compromises. If you tell me it's not compliant, if I lose my SOX compliance, or if I'm a retailer and I lose my PCI compliance, I'm not even going to start. Right now OpenStack has this broader issue, and you've probably seen some of the security breakout teams talking about how we do OpenStack compliance.
We'd like to take an active role in that; a lot of our security team has been working closely with some of our Contrail customers to define the cloud security audit and how we implement that with Contrail and the broader network teams.

Yeah, good question. Well, yes, we compete directly with NSX. However, what I'm talking about here is the hypervisor support with ESXi. What many of our customers have said is that as they move from a VMware environment to an OpenStack cloud, they want to make sure they don't have to recertify workloads. You see this especially in the government sector: they've certified a workload for an ESXi hypervisor, and they want to move that virtualized workload over to an OpenStack cloud. We support that today. The other aspect is the vCenter APIs, which I think we're also demoing out in the booth. The vCenter APIs have evolved quite a bit from the early days, and I think you see VMware embracing OpenStack, so as that evolves it's getting easier to support as well.

But I think it really is about the mindset. Many of the customers who have VMware workloads today run them quite well in VMware; they don't necessarily need to rip those out and move them into an OpenStack environment. Where we're seeing folks build the OpenStack cloud environment is where they need that cloud, and they can basically have the new clusters come up in a native OpenStack environment. From a network perspective we can federate that to the VMware environment quite well, and if a customer wants to swing those workloads over, we support that as well. Okay, any other questions?
Yes. So I showed one slide, and those are actually the key points of differentiation: scaling, service chaining, an open, standards-based implementation that works without extra gateways, the flexibility to pull in new services without running a development program around integrating each service, being vendor-agnostic, cost, and agility. We're hearing a lot about this, and we really like the Nicira guys; they obviously kicked off the whole Quantum discussion within OpenStack. VMware paid 1.2 billion dollars for Nicira specifically to change the discussion in networking, and it definitely did; as a community, our networking discussions have definitely changed. But from a competition point of view, some of our customers are sharing their results, and as that matures in terms of data plane scaling, control plane scaling, the ability to do HA, the ability to insert services, cost, complexity, and network troubleshooting, I think the picture will get clearer over time.

All right. Yeah, thanks again for joining. I'm glad you found the room.