Thank you. Yeah, it's a light show. All right. Well, thank you for making time today. This is the Canonical track session on NFV, SDN, Ubuntu OpenStack, Juju, and a combination of Canonical and Juniper. I'm John Zannos with Canonical. I manage our Alliance organization, which includes our team focused on telecom as well as our SDN partnerships. And I'm Jennifer Lin. Good afternoon. I lead the product management team for Contrail at Juniper Networks. So we're going to walk you through a couple of things: a little bit about what we're doing in NFV, OpenStack, and SDN, how we're collaborating together, as well as a bit of live demo, which we always like to do. First, we want to start off with why NFV, why automation, why is this relevant to an OpenStack Summit? First of all, we see NFV as probably one of the key use cases for OpenStack. A number of carriers have approached both of us asking how to use it in the context of network function virtualization. They're being driven by a market reality that puts them in competition with companies they haven't competed with in the past, companies like Google and Facebook. If you've been following the news, it's clear that Google has aspirations to be a service provider in some shape or form. The challenge for the telcos is that they compete in a market where Google, Amazon, Airbnb, and WhatsApp can deploy thousands and thousands of machines with a single administrator, because they do a great deal of automation and simplification. The telcos are trying to get there, but right now a typical system administrator can deploy hundreds of machines. Yeah, and as many of you know, network function virtualization is really an attempt by the carriers to take advantage of a lot of the lessons learned in cloud computing and use low-cost commodity x86 hardware with a Linux operating system and virtualized services. They need to find ways to grow their business with new revenue-generating services that they can roll out very quickly, without reinventing the wheel. For those of you that don't know what Contrail and OpenContrail are about: it's an open-source codebase, and we're very focused on network virtualization and enabling services in an automated way. Part of the segment we're really going after with this offer is the carriers who are looking to grow their managed services business and offer cloud-based services, generally taking their existing managed services and evolving their platform toward a core telco cloud environment. As they do that, one of the main requirements, as John mentioned, is much higher levels of automation. So we're not doing device-by-device CLI configuration, but configuring systems in a much more automated way. And things like OpenStack are becoming quite popular because they leverage a lot of the vendor-agnostic and automated paradigms that the large web-scale players have brought to the table. So when we started our journey together as two companies, if you may or may not remember, we announced this six months ago in Paris. And it was driven by the fact that we were both being approached by carriers who were asking for help competing in this new world. Once again, they were looking to automate things.
They were trying to make sure they were able to reuse, scale, and simplify, and the fact that both of our offerings, OpenContrail, Ubuntu OpenStack, and our tooling like Juju, are all open source was very important to them. So the carriers first got together in a standards body called ETSI to build a framework for NFV, network function virtualization. It's designed to allow them to coalesce and move forward. This architecture was then adopted by OPNFV; we're both member companies of OPNFV, which is a group initiated by a set of carriers to figure out how you actually produce code that makes this happen. And it's not actually all about the code; it's about a framework that's loosely pluggable, so you can put multiple pieces in, like multiple SDNs. But they're certainly interested in the partners and vendors that can help the carrier community move this forward. Our approach is to look at a couple of elements that you start overlaying on this. There's Ubuntu OpenStack, which is part of our joint solution. Then you look at the virtualization layer, which can be KVM or LXD, for those of you that sat through the session where we just announced our next-generation container technology, with a "lightervisor" managing those containers. Then Cinder, Neutron, and the SDN. In this case, we're working together with OpenContrail on the SDN specifically, because we think OpenContrail is furthest along in terms of addressing some of the shortcomings that exist within Neutron. On top of that, we see an opportunity to use MAAS, our Metal as a Service, to deploy operating systems, recognize nodes, and turn them on and off, so things are very scalable. And then we use Juju as a deployer of VNFs. For us, Juju is fundamentally a service-modeling tool that deploys applications; in this case, the applications just happen to be network functions. What we're not doing, and this is important in the context of our discussions with the carriers, is supplying a true SLA-based orchestrator. That is the responsibility of others; our joint solution plugs into it. It can be an open source one, like the MANO work Telefónica is doing, or a large-scale one typical of the OSS/BSS world, like the work Amdocs has done. We see ourselves as enabling the commoditized structure underneath, i.e. the open source elements that can sit under a complex SLA-based orchestrator that understands the business decisioning the telco provider wants to apply. And then lastly, we're looking to team with the VNF vendors to put their logic into the charms that Juju delivers. So we're encapsulating the VNF, we're deploying it with Juju, and we're sitting it on top of this OpenStack and OpenContrail cloud. That's our approach to an NFV architecture. You want to start with this one? Sure. So as folks have done this, we took the approach of: how do we simplify the infrastructure, obviously, but in particular the network? When the OpenStack effort started, the Neutron project probably started with a slightly different goal than what it has evolved into. And Neutron has faced some challenges in terms of scale: how do you avoid a single point of failure? How do you ensure resiliency, so that as these systems grow out, large-scale carriers don't have to compromise on the SLAs they've committed to?
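To make concrete what's at stake here: these are the tenant-facing Neutron calls that have to keep behaving the same as the system scales, whichever implementation sits underneath. A minimal sketch using the openstacksdk client; the cloud name and addressing are illustrative placeholders, not the demo environment:

    # Minimal sketch with openstacksdk; "telco-cloud" refers to a
    # clouds.yaml entry and the CIDR is a placeholder.
    import openstack

    conn = openstack.connect(cloud="telco-cloud")

    # The same two calls are served by upstream Neutron or by the
    # OpenContrail plugin sitting behind the Neutron v2 API.
    net = conn.network.create_network(name="corporate")
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr="10.20.0.0/24", name="corporate-subnet")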
Beyond scale, there are the capabilities themselves: in a world where you're virtualizing the network functions, the VNFs, we want to do service chaining in a way that's not tied to a vendor implementation. From a Contrail point of view, we took the network system as an abstraction and developed a Neutron plugin that we feel addresses many of the issues not yet addressed in upstream Neutron. A lot of the scalability challenges folks were running into in alternative solutions, we were able to address by having essentially a layer 3 and layer 2 paradigm where we scale our control nodes horizontally. So we have really focused on building out large, carrier-class systems. Obviously, all of these folks started with small POCs and trials, and we're now seeing a lot of the RFPs looking quite similar. We're not just getting invited to POCs and trials anymore; we're responding to fairly comprehensive RFPs where, for instance, one large provider is looking at 12 projects in which they want to leverage this telco cloud infrastructure for mobility services, for managed services like security, caching, CDN, and firewalls, as well as for their own internal enterprise IT environment. So at the end of the day, what we recognize is that there are a number of shortcomings with Neutron; any of you working with it know that. We're of the camp that treats Neutron as an API and then plugs the OpenContrail SDN underneath it. We as Canonical are partnering with multiple SDNs, and our strategy is to make automation available to them so they can deploy their SDN with different versions of OpenStack and let the carrier make the decision. We started this journey with Juniper because, in our assessment, they were very far along this journey as an SDN, and they started working with us very early. And this is where the presentation of Neutron as a set of APIs matters: we're seeing a lot of our customers testing our Contrail implementation against the Neutron v2 APIs. We were upstream as a core Neutron plugin in the Juno timeframe, but the important distinction is the loose coupling of the APIs from the upstream Neutron implementation, where there have been a lot of challenges and limitations in terms of scaling and resiliency of services. There's been a lot of debate about that distinction in the Neutron design summit sessions, for those of you that have been there in the last week or in previous sessions. And as Neutron has evolved from a pure VLAN segmentation model to questions like how do we do distributed routing, how do we do layer 3 service chaining, how do we scale out services where we don't know the upper-bound peak capacity, those are all hard problems that we have tried to address in a carrier-class system. So at the end of the day, we actually built a consolidated solution with the objectives we talked about. We wanted to make it easy to deploy, we wanted to make it easy to scale, and we wanted to make sure that the OpenContrail SDN works seamlessly with Ubuntu OpenStack. This architecture leverages a couple of tools we have: we're using Landscape and Juju to deploy and automate the deployment of OpenStack and the SDN, and we have MAAS underneath, Metal as a Service, as I said, as a way to recognize the different nodes and put the operating system on or off them.
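As a rough sketch of what driving such a deployment looks like in practice, wrapped in Python here just for scripting; the bundle and application names are illustrative placeholders, not the exact artifacts from this demo:

    # Sketch: drive a Juju deployment from Python via the juju CLI.
    import subprocess

    def juju(*args):
        # Run one juju command, raising if it fails.
        subprocess.run(["juju", *args], check=True)

    # Deploy a bundle: a YAML description of charms, units, and relations.
    juju("deploy", "./openstack-contrail-bundle.yaml")

    # Relate two services; Juju fires each charm's relation hooks.
    juju("add-relation", "neutron-api", "contrail-controller")

    # Scale out: each new unit becomes a node that MAAS provisions.
    juju("add-unit", "nova-compute", "-n", "4")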
Obviously we connect OpenContrail into this architecture. And at the end of the day, as compared to just announcing a partnership that was nothing more than logo swapping, we've been doing a lot of engineering work to make this a reality. So when you think about Juju as the tool to simplify the deployment of an OpenStack cloud with OpenContrail, this is a snapshot of what it looks like. Each one of these boxes is a charm, as you can see: Ceph, Cinder, Nova, et cetera, plus a set of charms that Juniper created for OpenContrail. Just so you understand the broad concept here: each charm represents a service. It can be an application, it can be a database, a logging stack, it can be Hadoop, it can be any element of OpenStack. Each of these services has relationships, and that's what Juju deploys as a service model. At the end of the day, the charm fundamentally represents the DNA of the application. It's a way for any ISV, or in this case Juniper, to bundle in the essence of their application in whatever code they're writing. They can write it in Bash or Go; they could be using Puppet scripts or Chef scripts. It then becomes the representation of the application on the canvas that Juju deploys. And the way we've structured this, the ISV, in this case Juniper, actually owns the charm. So every time they update their code, they update the charm, and that's why it's a real-time representation of how best to deploy their code. Beyond just OpenStack and the SDN itself, it's designed to deploy any virtual network function, because for us they're all just applications. And again: deploy, scale, integrate; that's what we're trying to accomplish. Ultimately we create a model with reusable components that can be deployed and reused again. The OpenContrail charms are available for anybody to deploy. That's deploying OpenStack using Juju. So we're going to do a bit of a demo together, and we have confidence that it'll work seamlessly. First, just to give you a sense, this is a Juju canvas. It could reside on an OpenStack cloud, a set of servers in the data center, or a public cloud like AWS. We've created bundles, as you saw before in the static picture of OpenStack, and you can just drag and drop a bundle and deploy OpenStack; it'll come up. This is a bit of a demo representation, so it deployed very quickly. In real life, it would be turning on the nodes associated with each application. Now in this first orange box, right, Scott, we actually have a live deployment of OpenStack with actual OpenContrail charms that I want you to describe a little bit. Yeah, I think part of the reason why this relationship and partnership was important is that, in the previous world, you had very siloed teams working on the infrastructure components, network, compute, and storage, and then fairly packaged applications that would sit on top. As we converge network, compute, and storage, and as the application environment becomes more distributed and dynamic, a lot of what's been hard about this is getting an abstraction for the various services and then, from a network perspective, applying policies to sets of application components.
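To ground the charm concept from a moment ago: at its simplest, a charm is metadata plus hooks that Juju runs at each lifecycle step, in whatever language the author prefers. Here is a minimal sketch of an install hook in Python, assuming the charmhelpers library; the package name is a placeholder:

    #!/usr/bin/env python3
    # hooks/install: a minimal, illustrative charm hook.
    import subprocess
    from charmhelpers.core.hookenv import log, status_set

    def install():
        log("Installing the VNF package")  # goes to the Juju unit log
        subprocess.check_call(["apt-get", "install", "-y", "example-vnf"])
        status_set("active", "VNF installed and ready")

    if __name__ == "__main__":
        install()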
So, coming back to that abstraction: where we've been successful with something like Contrail is the ability to take a back-end database tier, like your MongoDB, assign it as a virtual network, and attach a network and security policy to it. A lot of the value in doing this is to simplify and automate, and not have to redefine the interrelationships between application components and the declarative rules applied to those components. At the same time, as different services are brought in, the service-specific configurations go into templates, but the policy rules never change and the interrelationships between those policy groups don't change. And that's where, coming at it from two different angles, there's been very good alignment between the way we approach this with OpenContrail and the way Canonical was thinking about simplifying the application and infrastructure environment. So ultimately, not only does this bundle represent all the work that both companies have done, and it's already available to others, it also represents a framework to deploy OpenStack and Contrail and ultimately to scale it up. All you would do is add units; each unit represents a node, and you can multiply it. You could also connect this through an API to a policy engine or orchestration engine that could say: I see the network stressing, I need to add more nodes, add nodes A, B, C, D. Now, this is a live environment, so we're actually seeing the OpenStack dashboard, an overview of the compute side, an overview of the network topology. Oh, admin, admin. Now, what's really interesting, which I'll let Jennifer come up here and run, is that we actually have OpenContrail running live. So here it is, go ahead. Oh, sorry. So yes, in the Horizon dashboard we keep the virtual networks quite simple: you have a public virtual network and a corporate virtual network. And then we have a separate UI for the network administrators, who may need to drill into the components you saw there: the control nodes, the compute nodes, and the Contrail infrastructure. We have a lot of visibility here, not only into components of the infrastructure (everything timed out) but also diagnostic information. It's very important for the administrators to be able to, for instance, see the aggregate traffic between one virtual machine and another, or pull that up and say: give me the aggregate throughput between my back-end database tier and my front-end web tier. We've done a lot of work jointly in carrier environments where they're dealing with very low-latency, stateful application environments. So you'll hear this term "deterministic NFV," where there's a very big sensitivity to throughput, latency, packet loss, jitter, all of those things. If you're talking about a voice application or a gaming application, the types of views these administrators need are obviously a big step up from what they've previously had (I'm going to keep the time on it) in a traditional environment. So from that dashboard view you can see up top, you can drill into a single component and see a lot of statistics around it, as well as get a broader view of the virtual networks that exist. In this case, we have an Ubuntu virtual network and a public network, and we define the policies by which those two segments can talk to each other.
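As a rough sketch of that abstraction through OpenContrail's Python bindings (the vnc_api package); the API server address and credentials are placeholders, and the policy attachment itself is elided:

    # Sketch: model an application tier as a Contrail virtual network.
    from vnc_api import vnc_api

    vnc = vnc_api.VncApi(username="admin", password="secret",
                         tenant_name="admin",
                         api_server_host="192.0.2.10")

    # By default this network is isolated; nothing can reach it until a
    # network policy is created and attached to allow specific traffic.
    vn = vnc_api.VirtualNetwork("mongodb-tier")
    vnc.virtual_network_create(vn)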
If you look at a lot of the other Neutron implementations, the challenge has been that we're recreating very low-level configurations in a new OpenStack infrastructure. What we tried to do here is keep the level of abstraction really high and hide the complexity of all the low-level configuration that goes on underneath. And that helps as we move into a cloud environment where you get towards one administrator handling a converged infrastructure of compute, storage, and networking, who may not be as deep an expert in one domain as we've previously seen in siloed functional groups. And for those of you that may have been at the OPNFV day yesterday, we were asked on short notice to do a demonstration. The reason we were able to do it, in what amounted to a day and a half, is that we were using tools and charms that already existed. So we demonstrated deploying OpenStack, as we've just shown. We demonstrated deploying OpenDaylight, because, as you may know, OPNFV started with OpenDaylight. But we also demonstrated deploying OpenContrail, because those charms had already been developed by Juniper. And we were able to very simply do a demonstration in less than a day and a half because we didn't really have to create anything new. That's the elegance of the tooling, right? It treats applications just as applications. We're able to put the SDN with OpenStack. And the next thing we'll show is an actual running environment of Juju with virtual network functions. In this case, it's Clearwater, the open-source IMS from Metaswitch. And what we'll do is deploy a charm. Commit, automatically place, confirm. Now, for those of you that know the telco space, it's usually not this easy to deploy a service. The service is deploying. We're going to build a relationship. Juju helps you figure it out; I'm not a network expert like Jennifer, but I obviously know now that it belongs here. This is where the relationship goes. I commit that relationship, add it, confirm. Another service is added. So what we've been able to show very quickly is that what the telcos want to achieve in NFV, scalable services, automation, making it a lot easier to deploy, and, most of all, the same tooling that allows them to compete on a fair footing with the Googles of the world, isn't months and quarters and years beyond their reach. It's basically available now, and that's what we've bundled together. So. And one of the things we've heard, for instance, is that as capacity grows or the number of users increases, folks were spending a disproportionate amount of time configuring VLANs when rolling in new services. Once these rules are defined, that elastic scaling of a service is easier to automate. So we also have an auto-scaling capability in Contrail for the network services: as capacity grows and you hit your service definition, it spins up another virtual machine from the Glance image and adds more firewall capacity or IMS capacity or session border controller capacity. That's what folks are trying to get to in a virtualized environment: not having to rethink the access control policies and the security policies for each service every time. That was turning into probably 80% of the time they were spending to deploy a new service.
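Here is a sketch of what that elastic scaling amounts to, expressed against Juju for simplicity; the metric source, service name, and threshold are placeholders rather than Contrail's actual service-monitor implementation:

    # Illustrative elastic-scaling loop: watch load, add capacity.
    import subprocess
    import time

    MAX_SESSIONS_PER_UNIT = 10000

    def current_sessions():
        # Placeholder: in practice this would query the VNF's own
        # counters or the Contrail analytics API.
        return 0

    units = 1
    while True:
        if current_sessions() > units * MAX_SESSIONS_PER_UNIT:
            # One more unit of the service, e.g. a session border
            # controller; another VM boots from the existing image,
            # and the already-defined network policies apply to it.
            subprocess.check_call(["juju", "add-unit",
                                   "session-border-controller"])
            units += 1
        time.sleep(60)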
So if you click on any one of these charms and want to drill a little deeper into it, not only can you see the service relationships, but you can drill into the actual machines and go deeper and deeper. So that's the live demo part, as you saw with the repeated need for admin, admin. It was active and timed out, we timed back in, but the services deploy that quickly. OpenStack deploys that easily, and ultimately you can scale it up, as I showed, by taking one unit to five nodes, to 5,000 nodes. Let me just go back to presenting here. So in the course of six months, not only have we sat down together and framed out what we wanted to do, how we would approach the market, and how we would present deploying OpenStack, deploying the SDN, and automating the deployment of VNFs, we've actually had two or three joint customers. One joint customer we'll take a moment to talk about is PEER 1, which is deploying a combination of Ubuntu OpenStack, our tooling, and OpenContrail as the SDN. Yeah, so PEER 1 was founded in 1999, and they're actually a local company with a presence here in Vancouver, but a global presence in terms of a secure network backbone with private peering points to the internet exchanges. Over the last few years, really almost over a decade, they've grown public and private hosting businesses across their 19 data centers. A lot of their core competence is around large-scale networks and security, and as the folks there will tell you, they have a very strong preference for a multi-vendor environment. They are a very good customer of Juniper Networks and have deployed a lot of Juniper. One thing about Contrail that has been very important in our adoption timelines is that we support any vendor underlay, as long as it's a standards-based, interoperable underlay that supports IP VPNs. From a Contrail perspective, that's been key to adoption. As PEER 1 has grown, they've evolved their services, and one of their differentiators is that their customers in the hosting environment wanted more and more customization. One size did not fit all, and they really pride themselves on the ability to meet the needs of more demanding customers, like video analytics and gaming customers, who tend to want to outsource the infrastructure but still retain control over how those environments are run. So when they approached both of us, it was with the purpose of migrating to an OpenStack-based cloud infrastructure with OpenContrail as the SDN. This diagram shows a little bit about where they were and where they're trying to go. Unfortunately, we had Gary lined up from PEER 1, their senior architect; he did a good job in the morning but came down with a bit of a headache this afternoon, so Jennifer and I are going to try to represent his synopsis here. Yeah, what this shows is their existing environment. Today, as I mentioned, they have a private hosting and private cloud business based primarily on VMware. They have served a lot of primarily enterprise and SMB customers with that infrastructure. But as I mentioned, they found that they needed more flexibility in the types of services being offered, and they wanted to move towards a more open cloud environment where they didn't have a proprietary, single-vendor approach to their cloud infrastructure.
So as they've embraced openness in other aspects, they're now getting started in an OpenStack environment, looking at how they can map a lot of their more emerging customers onto the OpenStack hybrid cloud model. This morning, in another session, we talked through some of the use cases we're seeing, and PEER 1 had done a study a couple of months ago which found that 78% of their customers expect to be in a hybrid cloud environment within the next three years. So the notion that you build it once and have the formula for the next couple of years doesn't hold: they found that a lot of their customers were willing to host some of their loads in PEER 1's public or private cloud service, but those same customers expected to be served in AWS VPC and SoftLayer and Google Compute Engine and DigitalOcean, and how they built the network had to accommodate that notion of federated domains. That plays in very well with the way PEER 1 has traditionally built their infrastructure: essentially multi-tenant networks with MPLS VPNs and IP VPNs, which allow them to be very flexible in how they interconnect those domains. And when we chatted with them, fundamentally, they came to both of us looking for an expedited path, and we actually used a product we have called BootStack, which deploys an OpenStack cloud with us supplying a remote managed service to accelerate the deployment. At the end of the day, the idea of the network spanning both private and public clouds is really about application portability. What we recognize is that you'll have people who want to start on a private cloud and extend into a public cloud when a use case triggers spiky behavior, like the web front end, but you want your data to sit back on your private cloud because you want to control it, so the databases reside there. And this idea of being able to move between public and private and back again is paramount to the success of a hybrid cloud, and ultimately this architecture demonstrates that it's actually within reach. And already they have an environment where they have to support proprietary hypervisors, open-source hypervisors, containers, and bare-metal servers. So from a network perspective, both the physical and virtual infrastructure in the joint architecture need to accommodate that in a federated model. That was also a key criterion for them. So, to wrap this up and open it up to questions, we're of the opinion of a couple of key things. We think Neutron is great as an API, shortcomings aside, especially when you plug an SDN underneath it. We believe that if you can supply automation, and not only automation but a way to reuse what they've developed, a way to simplify deployment and management, and ultimately a way to balance between open source and proprietary software, we're going to be able to help carriers accelerate to a hybrid cloud model and leverage a model where an SDN supplies an extensible network attached to OpenStack. So with that, I think we'll leave you with those closing points and open it up to any questions. Would you like to use the mic? Yeah, thank you. Yeah, can you hear me now? Now we can. So can you kindly point us to which of these problems are addressed right now with this OpenContrail solution? Well, I think we've shown some leadership in all of these categories.
And actually at the last OpenStack Summit session, we had a couple of our customers lay out what they had tested and what they saw as the playing field. I'll start with scale. As one of the bullets there shows, there are some providers that will say that once you get beyond 200 nodes, you're on your own. We've already shown well beyond that with an OpenContrail architecture, and we haven't seen any issues yet, partially because we use a mature approach with a BGP control plane, which is how routers talk to each other today. What that allows us to do is have distributed intelligence without creating an upper bound: we just scale the control nodes horizontally. So we start very small, say in POCs and trials, and as the number of compute nodes grows, if you get past a thousand, you add another control node, and you can just keep going. That was actually a major problem in the generation-one SDNs, where you created a single point of failure and a highly centralized architecture, and the first packet of every flow had to go through the controller. We've achieved that balance of distributed forwarding and control with centralized administration, to get the simplification. From a performance perspective, we've published a lot of blogs on opencontrail.org. We have near line-rate throughput, and we've done a lot of work, for instance, with DPDK optimizations to improve our performance in packets per second; I think it's been more than 10x since we did those optimizations. From an availability perspective, a lot of that HA and resiliency takes a page out of the book on how carrier-class networks work today. We can do things like in-service software upgrades, where you have two control nodes on different versions of a release and you never stop forwarding traffic, because you can upgrade one and run a salt-and-pepper environment for some time. Those are the types of things; we can definitely spend a lot more time with you on this, but we believe those are differentiators in how we've approached Contrail. Good, does anybody else have a question? If so, can you go to the mic, please? Could you talk a little bit about what MAAS provides you specifically? Is it anything like Ironic? Where does that fit? So, fundamentally, MAAS has attributes that are similar to Ironic. We view it as a way to bring the cloud to bare metal. It identifies the node, and it allows you to turn it on and off. It's connected through Juju: the application defines what it needs, Juju makes a call to MAAS and says, I need a physical node, and MAAS identifies the node that matches the requirements and deploys the appropriate operating system. It could be Ubuntu, it could be Windows, it could be RHEL, it could be SUSE. Then it turns the node on or off and makes it available where it's needed. And the reason that was so important: it's a piece of open-source software, and other people are contributing to it. A sidebar story: the London Stock Exchange actually needed to use MAAS with SUSE, so they contributed the code for MAAS to deploy the SUSE operating system. But we needed that to be available so that Juju and the application have access to the nodes when needed. So hopefully that answers your question. Yeah, for sure. Day-one provisioning has been a challenge, and I think you'll see lots of flavors.
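As a small sketch of the flow just described, Juju turning an application's requirements into a MAAS allocation; the constraint values here are illustrative:

    # Sketch: ask Juju for hardware; against a MAAS cloud, MAAS picks a
    # matching physical node, images it, and powers it on.
    import subprocess

    subprocess.check_call([
        "juju", "add-machine",
        "--constraints", "cores=8 mem=32G",
    ])

    # The same constraints can be attached to a service, so every unit
    # of it lands on matching hardware.
    subprocess.check_call([
        "juju", "deploy", "nova-compute",
        "--constraints", "cores=8 mem=32G",
    ])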
On this sort of day-one provisioning, I think that's where, as this system comes together, folks don't have to reinvent it. Even with what we had done with our Ubuntu OpenStack plus Contrail networking, we had written our own server manager capabilities to do the initial deployments. As customers see the capabilities of Juju, we're finding that in many cases we don't need to reinvent how that is done. But at the end of the day, people are going to have different preferences for how they want provisioning done. What we're trying to do as a community is share some of those best practices back so people don't spend months learning the hard way. Yeah, one side comment from us: what we've found in interacting with the carriers, compared with past practices, is that right now they're strongly tilting toward open source. They've almost all gone to the point where that's a requirement. They don't make it a full gating requirement, but they put a preference on open source, and in certain dialogues we have with them, it's clear that preference is very real. Maybe one last question, then we should break, because I know we're clearly a bit over our time. Okay, it's still about the question about MAAS and Juju. So you said that basically it helps with, I would say, day-one provisioning. That's okay for servers, I get it, but I don't get how it's done with the classical isolation between, for example, different networks, different VLANs to isolate traffic. For a server you can define different NICs with different VLANs, but still, on the network side you have something to do. What happens there? From a network perspective, a lot of the reason that, for instance, the SaaS providers have adopted OpenContrail is the tenant isolation and the VPC constructs. We once again take a page out of the book of IP VPNs: by default, when we create a virtual network, which you can think of as a replacement for a VLAN, it's a logical abstraction, and it cannot talk to anyone else. This is the methodology that folks use today for large financial services wide-area networks with MPLS VPNs, right? It was based on work by one of our Contrail co-founders in the IETF around L3VPN systems. And this aspect of secure multi-tenancy, tenant isolation, and the ability to attach a policy to a tenant group is one of the key things we bring to the table with the Contrail architecture. And that's why it's so important that we wove all these pieces together, right? The intent was not to imply that MAAS was managing that part of the network. Well, thank you, everybody, for your time today. If you have any follow-up questions, Jennifer and I are available. Thank you.