OK, so let's go ahead and get started. Good afternoon, everyone. My name is Pratik Rajadri. I'm one of the product managers for OpenContrail within Juniper Networks. I lead some of the SDN, NFV, and cloud initiatives within the team. And I'll be co-presenting this session along with Matthew Rohan.

Hello, everybody. My name is Matthew Rohan. I'm working at Orange Labs, and I will tell you about the EasyGo Network project that we are launching in a trial for now, and probably in production soon.

So today we are going to talk about simplifying and automating branch networking. As Matthew mentioned, in doing so we will look at the journey and approach that one of the tier-one service providers is taking, the requirements and expectations they have in delivering this solution, and how they are doing it by leveraging existing technologies and services like L3 VPNs. We will also look at OpenContrail, which is a cloud network automation initiative within Juniper. Essentially, we will start off by introducing the two companies, Orange and OpenContrail, and then look at some of the requirements in terms of network services to enable such a solution. We will also look, from a Contrail perspective, at the requirements we are seeing from the diverse customers that we have. We'll then look at the EasyGo Network solution in a lot more detail, and from an OpenContrail perspective, at the architecture and features that can enable a solution like that. Finally, we will conclude with Matthew summarizing Orange's perspective and the lessons they've learned along the way.

First of all, OpenContrail itself is completely open source. It is available under the Apache v2 license.
It is our open-source initiative towards cloud network automation, which means that we solve the networking challenges that exist in cloud environments. The product is built using standard protocols, BGP, XMPP, OVSDB, and so on, and that enables interoperability with different kinds of environments; it enables multi-vendor, vendor-agnostic system integration. It is completely API-driven. Automation is very key for several of our customers, so every component of Contrail exposes APIs, and that enables automation for our customers. And since service providers are a big set of customers for us, we have made this product truly carrier-grade in terms of performance, availability, scalability, and so on. So a very important aspect of the product is its carrier-gradeness.

Now, I have listed several of our customer segments, and the same product caters to all of this wide variety of customers. We have cloud services and emerging companies on one side, we have traditional enterprises, and we have service providers, whether cable MSOs, telcos, or hosting companies. On the cloud services side, I've listed several of our customers. SaaS and IT-as-a-service are the primary use cases: there are large SaaS companies, security enterprises, social networking, software enterprises, large industrial internet and enterprise gaming companies, and so on. Cloudwatt, one of the subsidiaries of Orange, uses OpenContrail. Their requirement is basically: how can I launch VMs and containers and provide micro-segmentation across them? Essentially, create virtual networks and provide micro-segmentation across those VMs. IPAM, DNS, and DHCP are some of the features they look for, along with the ability to create security policies. They also look at VNFs, virtual network functions, wherever necessary.
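Since every Contrail component is API-driven, a configuration object such as a virtual network is ordinarily created with a plain REST call. The sketch below only builds the request body; the resource shape (`fq_name` under a parent project) follows OpenContrail config-API conventions, but the exact field names and the endpoint shown in the comment are assumptions to be checked against the API reference.

```python
import json

def virtual_network_payload(name, project="default-project", domain="default-domain"):
    # Contrail config-API resources are addressed by a fully qualified
    # name (fq_name) under a parent object. This schema is a sketch,
    # not the authoritative spec.
    return {
        "virtual-network": {
            "fq_name": [domain, project, name],
            "parent_type": "project",
        }
    }

# The actual call would be a plain REST request (not executed here),
# e.g. POST http://<api-server>:8082/virtual-networks
body = json.dumps(virtual_network_payload("blue"))
```

Because the API is just REST plus JSON, any northbound layer, not only OpenStack, can drive it the same way.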
In terms of enterprises, their use cases are bare metal as a service, mostly focusing on enterprise migration, moving from a legacy environment to a more cloud-based environment. One of the examples is Juniper IT, which actually uses this technology, with OpenStack and OpenContrail, to enable its build servers. And then, most relevant for this particular session, there are the service providers. Their use cases are varied. In the last session, you probably heard more about ESI. We have lots of telcos; we have made several announcements of different telcos all over the world whose use cases are network function chaining or service function chaining. They have vCPE and SD-WAN as some of those use cases. So with that, I'll let Matthew talk a little bit about Orange.

Thank you. So I will introduce Orange. It's a large telco company, a French one, and a worldwide company providing services to more than 2 million business clients. We provide IP VPN services, internet services, now cloud services also, and voice services. Our main concerns are high availability, security, and SLAs. Before talking about the service, I want to start with customer expectations: what do customers expect of the network services on top of their IP VPNs? Mainly, they want to be able to manage their network services on demand. They don't want to have to wait several weeks to be able to run a firewall, for instance, or any network service. They also want to be able to manage their network services on their own, being able, for instance, to add new rules to a firewall and have the whole thing operating as quickly as possible. This is one of the main reasons why we launched the EasyGo Network service: to provide agility and automation to our customers, because this is what they ask for today.

So from an OpenContrail perspective, as I mentioned, we have a wide variety of customers.
And the requirement that we see from our customers is that they have heterogeneous environments: how can you provide the networking glue that connects these multiple heterogeneous environments, and on top of that provide a vendor-agnostic policy abstraction? These heterogeneous environments could be legacy, VLAN- or VMware-based environments; that's the traditional environment. Then, service providers are building distributed data centers. Every CO and POP is becoming a mini or micro data center, and they also have the centralized data centers. So they are essentially building out multiple of these distributed data centers, which have virtual machines and containers, bare metal servers, and storage. There are physical service appliances, physical firewalls, physical load balancers, and there are virtualized instances of those. Then there are public clouds, some of which are offered by the service providers themselves, as well as the enterprise branch. All of these are heterogeneous environments, and they need to be connected to each other.

So the requirements that we see for OpenContrail are: Legacy interconnect. How do you connect your traditional environment with your next-generation, modern, distributed data center? V plus P interconnect. How do you extend the concept of virtual networking across virtual machines as well as bare metal? Multi-DC distributed cloud and service function chaining. Essentially, how do you extend a virtual network across multiple sites, and also create service chains which can span multiple sites? V plus P service insertion. How do you create service chains which can have virtualized instances as well as physical instances? So VNF plus PNF integration. Hybrid cloud. How do you interconnect all these modern data centers with public cloud environments? And vCPE.
How do you connect all these environments? How do you extend this to the customer branch? These are some of the use cases that we see from our customers. Of course, vCPE is one of them, and we are going to talk more about the EasyGo Network.

So concerning the planning that we had for the EasyGo Network service, we had quite an aggressive schedule to deliver a live service trial. It took almost one year to work on the engineering part and the design, and to onboard the marketing and operations teams, with the trial opening in May 2015. So now let me talk about the design of the EasyGo Network service. Here is how an IP VPN service looks: you have several customer sites, with customer equipment connected to an IP VPN. Customer sites are able to talk to each other through the IP VPN, and they are also able to talk to the internet through this IP VPN. The first thing that we needed for the service was the virtualization of the network services. We didn't want to have to deal with appliances sitting in the customer site; we rely on network services in the backbone to provide the on-demand experience for the customer. We also wanted to be able to chain those services, and we chose to have a service chain per access. So when a customer wants to access its IP VPN, traffic will go through a service chain before reaching another customer site, for example. The customer will be able to manage each of its per-access service chains, but it will also be able to manage a global IP VPN service chain, with several network services in it, that will be used to access the internet through the IP VPN.

So what were the building blocks that we needed for this service? Most of them are components that you are already aware of, because you are at the summit. We need a self-care portal, of course, so that the customer can manage its network services and the chaining of those services.
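The per-access versus global chain split described above can be sketched as two ordered VNF lists and a function that decides which hops a flow traverses. The VNF names are hypothetical placeholders echoing the topology discussed later in the talk, not actual EasyGo identifiers.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceChain:
    # Ordered list of VNFs that traffic traverses.
    vnfs: List[str] = field(default_factory=list)

def hops_for(access_chain: ServiceChain, vpn_chain: ServiceChain,
             to_internet: bool) -> List[str]:
    # Site-to-site traffic crosses only the per-access chain;
    # internet-bound traffic also crosses the global IP VPN chain.
    hops = list(access_chain.vnfs)
    if to_internet:
        hops.extend(vpn_chain.vnfs)
    return hops

# Hypothetical VNF names for illustration only.
site_access = ServiceChain(["debian-firewall", "dpi"])
global_vpn = ServiceChain(["vsrx-firewall"])
```

With this split, the customer edits its access chain without touching the shared internet-facing chain, which matches the self-management goal of the service.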
We rely on plug-and-play customer equipment with no added value in it; most of the value is provided by network services inside the backbone. So we needed a set of network services, firewalls, antivirus, all kinds of network services that are available as VNFs, as virtual components, running on top of commodity hardware, mainly x86 servers. We need to manage those resources with a virtual infrastructure manager. We also need a VNF manager and configurator to configure the network services, the virtual appliances. We also need an SDN controller to manage the steering of the traffic and the chaining between the different VNFs, the different virtual appliances. And finally, we need an orchestrator so that we can synchronize the provisioning between the OSS, the BSS, and the infrastructure.

So here is how it looks in reality. We developed a self-care portal for the customer, and here are the components that map to the concepts I described on the previous slide. We mostly use open source components. Of course, we use OpenStack as the virtual infrastructure manager. We are using OpenContrail as the SDN controller. We have some virtual appliances, virtual services, running on top of Debian, for firewalling, for example. We are also using some Versa virtual appliances, for deep packet inspection, for instance. And we are also using some virtual SRX appliances from Juniper to provide a virtual firewall in front of the internet access. So when a packet comes from a customer site, in this topology, it will go through the Debian firewall first and then through the deep packet inspection service before going to another customer site. And if it has to go to the internet, it will go through the Juniper vSRX before leaving the IP VPN.

So if we focus on the SDN controller part, the main requirement that we had from an SDN controller perspective was the ability to chain the services through APIs. This was provided by OpenContrail.
And also the ability to easily integrate those service chains into our BGP IP VPNs. This was also provided by OpenContrail. So I leave the floor to you.

So now that you have heard the story, I'll talk a little bit about the Contrail architecture that can enable a solution like this. In terms of the architecture, it's very simple. We have created a Neutron plugin, and that's how we integrate with OpenStack. It enables the Neutron v2 APIs, and it also has a bunch of other APIs that let it do things which are not available in Neutron today, for example service chaining. Essentially, that lets the OpenStack components talk to the Contrail controller, which is a logically centralized but physically distributed controller system. It talks BGP east-west, and that's how it can scale. It lets operators define policies; this is the policy definition layer. Policies such as: create two virtual networks, which can contain VMs and bare metal servers, in such a fashion that any traffic going from the blue virtual network to the red virtual network has to go through a firewall, which is also a virtual machine. That kind of logical policy can be defined.

The policy enforcement happens at the data plane. There is a lightweight kernel-loadable module called the vRouter, which sits in every x86 host. It could also sit in a CPE, for example, if the CPE happens to be x86- or ARM-based. Essentially, what it lets you do is create overlay virtual networks through the use of overlay tunnels. All of these tunnels terminate at something we call a gateway, and that lets you reach either the internet or an L3 VPN, for example. The tunnels can also terminate on a top-of-rack switch, to which the Contrail controller talks OVSDB and creates bridge domains. That's how you can have bare metal servers as part of the same virtual network.
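The blue-to-red-through-a-firewall policy described above can be sketched as a small data structure: an allow rule between two virtual networks that also names the service instances traffic must traverse. The field names here are illustrative, not the exact Contrail network-policy schema.

```python
def network_policy(src_net, dst_net, services):
    # A logical policy rule: allow traffic between two virtual networks,
    # but force it through an ordered list of service instances.
    # Schema is a sketch, not the authoritative Contrail format.
    return {
        "policy-rule": {
            "src-network": src_net,
            "dst-network": dst_net,
            "action": "pass",
            "services": list(services),
        }
    }

# "firewall-vm" is a hypothetical service-instance name.
rule = network_policy("blue", "red", ["firewall-vm"])
```

The point of the abstraction is that the operator states intent at this level, and the controller compiles it into routes and tunnels that every vRouter enforces in the data plane.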
So we extend the concept of virtual networking not just to VMs and containers, but also to bare metal servers. That is the architecture. One important thing to note here is that since the Contrail controller and the OpenContrail components talk standard protocols, users get a vendor-agnostic system integration: whether you're talking about different flavors of x86 servers, different flavors of Linux operating systems or hypervisors, different kinds of gateways which talk BGP, top-of-rack switches which can talk OVSDB, or different kinds of orchestrators. For example, we expose every component through APIs, so it's not just OpenStack; there are other orchestrators that can talk to the Contrail controller. The same goes for the network functions themselves. So it provides a vendor-agnostic, multi-vendor kind of system integration: a bunch of loosely coupled components coming together and providing an entire cloud orchestration system.

Now, one of the reasons why service providers actually like us is that if you look at something they have been offering to their end customers for a long time, IP and MPLS VPNs, the architecture there is very similar to the architecture that we offer in Contrail, and we provide a single unified control plane. What we have done, with that single unified control plane using BGP, is extend the L3 VPN constructs all the way to the hosts in a data center. It could also be extended to a CPE environment, if the operator so chooses. This is one of the reasons why service providers like this environment.

Now, in terms of the Contrail product features, we have a rich set: routing and switching features, which are not only IPv4- but also IPv6-enabled.
We have IPAM, DNS, DHCP, source NAT, floating IP, which provides one-to-one NAT, and quality of service. All of those features are provided within the vRouter. We provide ECMP-based load balancing, which enables you to scale VNFs horizontally. One important thing about all these features is that they are provided in a very distributed fashion, because the vRouter itself is distributed. There is security policy enforcement through the use of a distributed firewall; we have stateful firewall capabilities within the vRouter itself. As I mentioned, we can run third-party network services there; we saw an example of several third-party network services running in the case of the EasyGo Network. There are gateway services, L2 and L3 services, and analytics, one of the strong points of Contrail, which enables you to monitor and troubleshoot the environment. Another aspect of analytics is overlay-underlay correlation, which enables you to map overlay flows to actual underlay paths, so you can essentially see which overlay flows are taking which underlay paths. Very important for troubleshooting, especially. Service chaining is provided whether we are talking about layer 2 or layer 3 services, virtual or physical services, or even IPv6 services. High availability: as I mentioned, it is carrier-grade, so we provide a high level of availability. And every component of Contrail offers API services that let you do the automation, the cloud network automation, that we talked about.

So I did mention that Contrail is open source. Now, how open is Contrail, really? We have a single source code repository on GitHub. You can actually go to GitHub and search for OpenContrail, and you can get the entire source code.
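The ECMP-based scale-out mentioned above rests on a simple idea: hash each flow's 5-tuple so all packets of one flow stick to the same VNF instance while distinct flows spread across instances. A minimal sketch of that selection logic, with made-up instance names:

```python
import hashlib

def ecmp_pick(instances, five_tuple):
    # Hash the flow's 5-tuple; every packet of a given flow maps to the
    # same instance, while different flows spread across all instances.
    key = "|".join(map(str, five_tuple)).encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return instances[digest % len(instances)]

firewalls = ["fw-1", "fw-2", "fw-3"]            # horizontally scaled VNF instances
flow = ("10.0.0.1", "10.0.1.2", 6, 51512, 443)  # src, dst, proto, sport, dport
```

Flow stickiness is what lets stateful services such as firewalls be scaled horizontally: a flow's state only ever lives on the one instance its hash selects.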
So we are completely open source, as you can see. We do not have a fork: there is a single GitHub source code repository, from which we derive the community releases as well as the Juniper-supported releases. That's where Juniper monetizes. Then you have a Launchpad bug database, which is an open bug database, where you can view all the bugs submitted by customers or developers. And then you have, of course, the community developing a lot of code. There is also an OpenContrail advisory board that oversees all of this, and that advisory board consists of veterans in the industry and users of OpenContrail. Again, there is a lot more information about this on opencontrail.org, so please feel free to take a look at that website.

Yeah, thank you. To conclude, I would like to talk about Orange's perspective, and mainly about why the open source components are so important for us in this project. We needed them to be able to move faster: to fix bugs when we found some, to deep-dive into the code. That allowed us to develop some specific features on top of OpenContrail or any other open source component, like OpenStack. We are very active in OpenStack, too. It's really key not to have to wait for the goodwill of the vendor to get a bug fix or a specific feature. We also want to move toward standardization of APIs. As I already said, the key components for us are service chaining, for this service, and BGP VPN interconnection. So there is a networking service function chaining project in Neutron that is leading the API work around service chaining in Neutron, and we are also leading the BGP VPN project in Neutron, which aims at providing a standardized API to attach Neutron networks to BGP VPNs.
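The Neutron BGP VPN project mentioned above (networking-bgpvpn) exposes BGP VPN objects through a REST extension. The sketch below only builds a create-request body; the endpoint and field names follow that project's API as I understand it, so verify them against the official reference before relying on this. The VPN name and route target are made-up example values.

```python
import json

def bgpvpn_create_body(name, route_targets, vpn_type="l3"):
    # Body for POST /v2.0/bgpvpn/bgpvpns as exposed by the
    # networking-bgpvpn Neutron extension (field names assumed, verify
    # against the project's API reference).
    return {"bgpvpn": {"name": name,
                       "route_targets": list(route_targets),
                       "type": vpn_type}}

body = json.dumps(bgpvpn_create_body("easygo-vpn", ["64512:100"]))
```

Once such a BGP VPN object exists, Neutron networks are associated with it, which is exactly the "attach Neutron networks to BGP VPNs" goal the speaker describes.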
So to conclude, this project was disruptive for Orange, for the operations teams, because we switched from a router world, where we manage a router by CLI or by network configuration protocols, to a server world managed by APIs, a world managed more like IT. And we are moving, of course, from filing a ticket with the vendor to a Launchpad world where anyone can file a bug, work on bug issues, and propose new patches. That was one of the most interesting parts of this project. Thank you.

So that's all we had. Again, there's a lot more information on the opencontrail.org website, and I encourage you to go take a look at it. If there are any outstanding questions, please feel free. Yes?

I have a question for each of Pratik and Matthew. Pratik, do you think compatibility with ODL is important for OpenContrail? And Matthew, what would you expect from deploying this kind of vCPE solution in the cloud: should the KPI be OPEX reduction, or some revenue increase? What would it be?

I can take your first question. First of all, we are a silver member of ODL, and we look at it as a very complementary approach. In fact, there are customers where ODL takes care of multi-vendor device management and device configuration management, whereas we take care of the SDN environment. So we see ourselves as very complementary in that case.

But you do not provide ODL-compatible northbound APIs?

ODL-compatible northbound APIs. Yes, in an earlier release, we had a southbound plugin to OpenContrail. But northbound from where?

The usual way you program, your SDK, based on ODL or something.

So here's the thing, right? Contrail has REST APIs. Every component in Contrail has APIs enabled, so any kind of layer can talk to it from a northbound perspective. And we've done lots of integrations with different kinds of orchestration systems.
On this question, I would say that the standard API to manage both ODL and OpenContrail might be Neutron. That's the way we see things, and that's why we are working on BGP VPN: to standardize this kind of API and this kind of use case, which can be provided both by OpenContrail and by ODL and all other SDN controllers.

So that means the telcos do not mind which vendor's API it is, as long as it works? Yeah, of course. And then, what is your KPI?

Concerning the KPI and the cost reduction: of course, it's something that we want to address with this project. But as I said in the introduction, I think the most value in this project is the capability for the customers to really have a self-care portal to manage their services dynamically and on demand. That is the main issue that we want to address through this project. Enhance the functionality for the customers, and give agility to our customers.

Any other questions? All right, if there are no other questions, thank you so much for your time. Thank you.