Hello, everyone. Good morning. My name is Aniket Daptari, and I work as a product manager in the Contrail Cloud Networking Group. I have with me my co-presenters, Stefan Kapelle and Mikhail Glace, the nice guys from Nice in France. They work for IBM, by the way. And the topic here is going to be how we use and extend OpenStack Heat to deploy security policies and network function service chains. I'm hoping that you have heard about security policies and network function service chains, but in case you haven't, I'll try to give you a little bit of an idea of what these network function service chains are. We'll keep the agenda simple and easy. First, a quick background of the products we are talking about: I'll focus on OpenContrail, and my colleagues from IBM will talk about the IBM Cloud Orchestrator. We'll give a brief background of the relevance of OpenStack Heat to network function service chaining, and then we'll see some use cases. Then my colleagues from IBM will help us see these things in action with a demonstration of OpenContrail integrated with OpenStack Heat and the IBM Cloud Orchestrator. And if we have time, we'll also do some Q&A; we'll try to keep time for that. So, a quick background. I'll leave it to the IBM guys to talk about the Cloud Orchestrator; they are best equipped for that, and these two slides summarize what the Cloud Orchestrator enables. Now, I'll quickly talk about Contrail. This is a typical solution architecture that you will see when Contrail is deployed alongside OpenStack. So what's the role of OpenStack, or for that matter, IBM Cloud Orchestrator? It plays the role of a framework for expressing app intent.
So the app developer uses IBM Cloud Orchestrator or OpenStack as an interface to express what they need from the infrastructure in a simple, high-level, abstract fashion. This abstract definition of the app intent is handed to something like an SDN controller, which you see in the slide underneath the orchestrator piece. The network component of the infrastructure requirements that the app developer has specified gets handed to the controller, and the controller then plays the role of a network compiler. So what do we mean by a network compiler? The app developer has expressed the intent in a high-level, abstract fashion: what they want from the infrastructure, and particularly from the network. How might they specify this abstractly? Say they are deploying multiple tiers of an application and they want a network for every tier. That intent gets handed to the controller, which translates this high-level, abstract definition of the infrastructure requirements into low-level constructs: routes, firewall filters or ACLs, routing instances or VRFs. These are low-level constructs that an app developer either may not be familiar with or may not want to be familiar with. So the role of the compiler and the orchestrator together is to abstract the implementation of the networking constructs away from the app developer or app deployer. The controller compiles the intent, spits out the low-level constructs, and then programs those into distributed forwarding elements. As distributed forwarding elements, we are all familiar with the Linux bridge or Open vSwitch, but in the Contrail solution, we have our own take on the distributed forwarding element.
We call it the vRouter, and it sits as a kernel module in every x86 node in your cloud data center. That is where the distributed forwarding is implemented, as well as all the security policies that the app developer specified in the abstract fashion; they get applied and enforced in a fully distributed manner. So the vRouter becomes a fully distributed forwarding element and a fully distributed firewall. That's where the security policies are implemented. Now, we are going to talk about network function service chains, and the controller and the vRouter are jointly responsible for making those service chains happen as well; we'll see a little more about what that means. So that's really the responsibility of the controller and the vRouter. The good part about the solution that Contrail implements is that we are stitching overlay networks. We are creating overlay networks using IP VPNs, and we are using BGP as the control plane. That allows this solution to be completely agnostic to what's running in your physical underlay, and you can make changes in the overlay without having to store any tenant state in the physical underlay — whether it is tenant VLANs, tenant ACLs, or tenant firewall policies, you don't have to store any of those in the underlay. The other good part of the solution is that it applies seamlessly across the vehicles you use to deploy your compute workloads. The transition to a modern cloud data center does not happen overnight, so you have applications deployed on bare metal servers, as you can see on the left part of the slide; on the right, you see some virtualized application workloads; and you also see some containerized application workloads. Modern infrastructure tends to have compute workloads in all three of these compute vehicles.
The good part about Contrail is that because we are using IP VPNs, we can extend the same network primitives across these different compute workloads. In the top box, you see different orchestrators. Because the networking primitives are exposed via REST APIs, the integration with any orchestrator — whether it be Kubernetes for containers, OpenStack, IBM Cloud Orchestrator, or VMware's vCenter — is via those RESTful APIs. That's the other nice part about using Contrail. Now, let's quickly come to how an application developer interfaces with the orchestrator. Look at the top half of this slide. The application developer here is trying to deploy a three-tier web application, and all the application developer knows is that each tier needs to be isolated from the others, and that there are certain policies. For example, in my three-tier web application, I have a front end implemented by the green virtual machines, and I want the green virtual machines to be able to talk among themselves; but in order to talk to any other tier, there should be an explicit policy. What should that policy be? Now, there are other tiers: a caching tier, the blue virtual machines B1, B2, B3, and a database tier, the yellow virtual machines Y1, Y2, Y3. The application developer knows that under no circumstances should the front end communicate directly with the database; all communication has to first go through the caching tier. That's one of the most important policies. So you can see there is no policy connecting the front-end network to the back-end network, but there is a policy connecting the front-end network to the middle tier. And by the way, this is a web application, so the application developer only expects to see HTTP traffic. That's the other policy they're going to specify.
Allow only HTTP traffic between the front and the middle tier. Then this is the most interesting part: between the middle tier and the database tier, they want to make sure that the traffic is cleaned by a virtualized firewall before being sent to the database. That's all the application developer wants; how that is implemented, they are completely agnostic to. Here the orchestrator and the Nova component come into the picture: where the application virtual machines G1, G2, G3, B1, B2, B3, Y1, Y2, Y3 are deployed is a decision Nova takes based on its scheduling algorithms. The firewall also happens to be virtualized, so where the firewall VMs are launched is also a decision Nova takes; it runs its algorithm and available compute is chosen to launch the firewall VMs as well. Now, this happens to be an example with a virtualized firewall, but you could have other network functions. You could have a load balancer, and maybe you have a physical load balancer racked somewhere in your data center. So the problem with these modern cloud data centers is: how do you steer traffic through a sequence of services? Often in web applications, you have a number of services back to back. For example, you could have a sequence that says: I want to send traffic through a load balancer, then through a firewall, then through WAN acceleration, and then out to the WAN. That's an example sequence of services I may want to specify. These may be physical or virtualized network functions, and they could be launched anywhere; they could be racked anywhere in your data center.
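In Heat terms, the networks and the HTTP-only policy from this example can be sketched roughly as follows. This is a minimal sketch under assumptions: the `OS::ContrailV2::*` resource types are from one version of the Contrail Heat plugin and their verbose property names vary between releases, and the `default-domain:demo:...` network names are hypothetical; check your plugin's resource reference before using.

```yaml
heat_template_version: 2015-04-30
description: Sketch - front-end and caching-tier networks with an HTTP-only policy

resources:
  frontend_net:
    type: OS::ContrailV2::VirtualNetwork
    properties:
      name: frontend

  caching_net:
    type: OS::ContrailV2::VirtualNetwork
    properties:
      name: caching

  # Pass only TCP port 80 between the two tiers; everything else is
  # dropped because no other policy connects these networks.
  http_only_policy:
    type: OS::ContrailV2::NetworkPolicy
    properties:
      name: allow-http-front-to-mid
      network_policy_entries:
        network_policy_entries_policy_rule:
          - network_policy_entries_policy_rule_direction: '<>'
            network_policy_entries_policy_rule_protocol: tcp
            network_policy_entries_policy_rule_src_addresses:
              - network_policy_entries_policy_rule_src_addresses_virtual_network: default-domain:demo:frontend
            network_policy_entries_policy_rule_dst_addresses:
              - network_policy_entries_policy_rule_dst_addresses_virtual_network: default-domain:demo:caching
            network_policy_entries_policy_rule_dst_ports:
              - network_policy_entries_policy_rule_dst_ports_start_port: 80
                network_policy_entries_policy_rule_dst_ports_end_port: 80
            network_policy_entries_policy_rule_action_list:
              network_policy_entries_policy_rule_action_list_simple_action: pass
```

Note that defining the policy is not enough by itself: it also has to be referenced from both virtual networks (a network-policy reference on each `VirtualNetwork`) before the rule takes effect.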
How do you actually make sure traffic is stitched across these different network functions — which may be launched anywhere in your data center, maybe even in your service provider's data center or in a different data center — in the order that the application developer has specified? That's where something like the OpenContrail controller comes into the picture. It programs the necessary next hops so that if the traffic is coming from a blue virtual machine and destined toward a yellow VM, the next hop is actually the virtualized firewall. That programming of the next hops is taken care of by the controller, and it is programmed into the vRouter, the forwarding element, which makes sure that the traffic follows the path specified by the application developer in an abstract fashion. So that is what we call service chaining. And like I mentioned, I'll quickly recap: service chaining is not confined to one service. You could have multiple services; you could have both physical and virtualized network functions in a sequence; you are not confined to a particular data center, nor to a particular vendor of the network function. All these different permutations and combinations are possible, and it is the joint responsibility of the controller and the vRouter to make the traffic follow that path of services. So let's take an example of some traffic that violates the policy: it gets blocked right at the vRouter in the host. That's a fully distributed firewall acting at the host level.
And then when you take traffic that actually has to traverse the service chain, because the next hops have been programmed into the vRouters, the vRouter will make sure that traffic originating from the blue virtual machine B2 and destined for the yellow virtual machine Y3 is first sent to the firewall and then on to its ultimate destination. So this is service chaining. Now, by the way, if you're familiar with OpenStack Neutron, you'll realize that this is not possible with stock Neutron. That's the value OpenContrail has provided. We have not only implemented the APIs that Neutron specifies, we've also gone ahead and extended the Neutron API specification. So we have a whole set of new APIs that implement functionality like these network function service chains, along with the ability for you to specify what a service chain should look like and then actually instantiate it. That's what Contrail does: it extends the Neutron API spec. But you may ask: what's this got to do with OpenStack Heat? Let's quickly talk about OpenStack Heat. This being OpenStack Summit, it needs no introduction; everyone here is familiar with it. But it's basically another abstraction framework that allows us to express entire application stacks. In the previous slide, I was trying to deploy a three-tier web application with a virtualized firewall. That was my entire application stack, and Heat provides me a mechanism to express it: using Heat templates, I describe the stack, and when I launch that stack, all the components of the application get deployed.
But the important part is service chaining: with stock OpenStack Heat from the top of the tree, you won't be able to deploy the network function service chains I described in the earlier slides. So that's where we come in again. Because we extended the Neutron API specification, and we wanted our customers to be able to do the same things using OpenStack Heat, we also went ahead and extended OpenStack Heat. Essentially, all we've done is introduce some new Heat resources that map to the new constructs we have introduced: virtual networks, network policies, service templates that let you specify a template of the service you're trying to deploy, and then an actual instance of that template, called a service instance. These are the APIs we have extended on the Neutron side, and we created corresponding resources within Heat. So this is what it looks like. You have the Heat engine, and underneath the Heat engine you have the Heat templates. You have the built-in Heat resources that you get from OpenStack, and then in the center, you see the Heat plugin that Juniper Contrail has implemented. Within those Heat resources, you'll see the resources corresponding to the Neutron extensions Contrail has added: namely, network policy, the ability to attach a policy, and the ability to create a service template and actually deploy a service based on that template. That's really the value we have added, and that's what we are here to talk about and also demonstrate. How is this made possible? By virtue of the APIs. Like I mentioned, the integration with any orchestrator happens via northbound APIs: we have APIs for config, operational state, and analytics. At the top of this slide, you see an orchestration application; Heat is an example of an orchestration application.
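As a concrete illustration of those new resources, a firewall service chain can be sketched in a Heat template roughly as follows. Again, this is a sketch, not the authoritative schema: the `OS::ContrailV2::*` type and property names are taken from one version of the Contrail Heat plugin and may differ in yours, and the image and project names are placeholders.

```yaml
resources:
  # A template of the service: a VM-based, in-network firewall
  # with left and right interfaces.
  fw_service_template:
    type: OS::ContrailV2::ServiceTemplate
    properties:
      name: fw-template
      service_template_properties:
        service_template_properties_service_mode: in-network
        service_template_properties_service_type: firewall
        service_template_properties_service_virtualization_type: virtual-machine
        service_template_properties_image_name: vsrx-image   # placeholder image
        service_template_properties_interface_type:
          - service_template_properties_interface_type_service_interface_type: left
          - service_template_properties_interface_type_service_interface_type: right

  # An actual instance of that template, wired between the two tiers.
  fw_service_instance:
    type: OS::ContrailV2::ServiceInstance
    properties:
      name: fw-instance
      service_template_refs: [{ get_resource: fw_service_template }]
      service_instance_properties:
        service_instance_properties_interface_list:
          - service_instance_properties_interface_list_virtual_network: default-domain:demo:caching
          - service_instance_properties_interface_list_virtual_network: default-domain:demo:database

  # A policy whose action steers matching traffic through the service
  # instance; attaching it to the caching and database networks is what
  # creates the chain.
  chain_policy:
    type: OS::ContrailV2::NetworkPolicy
    properties:
      name: mid-to-db-via-fw
      network_policy_entries:
        network_policy_entries_policy_rule:
          - network_policy_entries_policy_rule_direction: '<>'
            network_policy_entries_policy_rule_protocol: any
            network_policy_entries_policy_rule_src_addresses:
              - network_policy_entries_policy_rule_src_addresses_virtual_network: default-domain:demo:caching
            network_policy_entries_policy_rule_dst_addresses:
              - network_policy_entries_policy_rule_dst_addresses_virtual_network: default-domain:demo:database
            network_policy_entries_policy_rule_action_list:
              network_policy_entries_policy_rule_action_list_simple_action: pass
              network_policy_entries_policy_rule_action_list_apply_service:
                - default-domain:demo:fw-instance
```

The design point is that the three resources mirror the three Neutron extensions mentioned above: the template describes the service, the instance launches it, and the policy's `apply_service` action is what tells the controller to program firewall next hops between the two networks.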
And the way Heat integrates — the way we implement the new resources for the Neutron extensions we have added — is by invoking the corresponding VNC APIs that are not part of the standard OpenStack Neutron API set. That's how the integration is made possible. What about use cases? We understand the concept of service chaining, but where do you use it? Where do you see it in action? Let's take a simple example. Again, the top half here shows the logical picture. Say I'm a distributed enterprise with multiple branch locations sitting across an L3 VPN, and some of my enterprise employees are trying to access the internet. As the administrator of this enterprise, I define a policy that says all traffic going to the internet needs to be sent through a firewall and then through a deep packet inspection engine. I happen to be leveraging my service provider's data center, and I've launched these network functions there. So I want traffic originating from my enterprise location and destined for the internet to be sent through that sequence of services running in my service provider's data center. The logical picture on the top is how you express your intent, and the picture on the bottom is how that actually manifests in the physical world; what you see here is an example of service chaining. Similarly, if one enterprise location across the L3 VPN is trying to communicate with another enterprise location, and you want traffic crossing the L3 VPN to be sent through a sequence of services that may be running in a service provider's data center, you make that happen by specifying the top half in the abstract fashion, and the bottom half is how it manifests. Here's another example: say an enterprise has different branch locations, but they also have their own data center.
And let's say they are running some internal web application in their data center, and I want traffic that's trying to access the web portal to go through a load balancer and a firewall. That's, again, how I express the intent and how it manifests. So that's essentially all I wanted to talk about. At this point, I'll hand over to my friends from IBM, and what they'll show you is how, using IBM's Cloud Orchestrator, you deploy the network function service chains that you saw here. So at this point, I'd like to invite Stefan. Hello to you all, and thank you, Aniket, for the great presentation. My name is Stefan Kapelle. I work for the IBM Client Innovation Center in Nice, France. Now it's time for me to present a live demonstration — I hope, because I just lost my Wi-Fi connection — regarding the use of Heat templates in a complete, flexible IT provisioning and orchestration workflow. Please take a few seconds to write down my email or our new Twitter handle. This is my agenda. First, I will present the IBM Client Innovation Center, which is a new IBM GTS initiative to support top-technology IT deployments. Then I will briefly explain our vision regarding IT provisioning and orchestration, followed by a presentation of our orchestration tool, IBM Cloud Orchestrator, and of the Contrail SDN solution from Juniper, and I will explain how they are integrated together based on Heat templates. To finish, I will briefly describe the infrastructure used for the live demo and the step-by-step scenario. So, this first slide is about the IBM Client Innovation Center networking services. Two centers have been created in April: one in Dallas in the US and one in Nice in France. The focus technologies are network function virtualization, so we will speak about NFV and VNFs, and software-defined networking, so we will speak about SDN.
These centers also focus on open-source platforms and open-source software such as OpenStack. This slide shows our vision regarding IT provisioning and IT orchestration. As you can see, we can divide this into three periods of time. The first one started three years ago, and we call it the standardization period. It was a time when new open standards emerged: obviously OpenStack, but also the notion of fabric in the data center space, and in addition, tunneling mechanisms like VXLAN or MPLS over GRE. Based on these open standards, we are now able to deploy an end-to-end solution by selecting best-of-breed technology. The second period is the industrialized provisioning period. Basically, if we adopt SDN, we can provision and manage all network devices from a centralized point, the SDN controller. But we still need to operate and manage the SDN controller manually. So if we want to gain benefits here and reduce time to market, we need an orchestrator on top of these products: an industrialized provisioning tool that will orchestrate and provision all the components for us, so that we can avoid manual configuration and intervention — maybe not all the time, but for repetitive tasks we use an orchestrator to play a kind of musical score, so to speak. The last period is the automation period, where we speak about cloud automation: we can automate the IT provisioning or even the change management system. To achieve this, for instance, we can combine our orchestrator with a real-time monitoring system. Basically, a real-time monitoring system monitors an IT infrastructure, and when an alarm is received or a threshold is crossed, there is an alert action. This alert action can be a pop-up window, a text message, or an email informing the administrator or the end user of an outage.
The point here is that the real-time monitoring system's alert action will trigger the orchestrator, and the orchestrator will dynamically adapt the IT infrastructure. So, to summarize: we cannot automate everything, because some things are too technical or too complex to be automated. But if we focus on repetitive tasks without any added technical value, that is where there is a real benefit in using an orchestrator. Now I will go into a little more detail on our IBM Cloud Orchestrator tool. It is really an important piece in this picture because it sits on top of our partners' networking solutions, so on top of SDN controllers and VNF solutions. Thanks to native OpenStack support, ICO — IBM Cloud Orchestrator — today supports multiple SDN controllers, such as the Contrail SDN solution from Juniper. ICO is a multi-layer orchestration tool: it focuses not only on the networking part, but also on compute and storage, and even business application management and provisioning. Then we have a second layer named workload orchestration, which is more about pattern management. This is where we define a workflow, and then the engine of IBM Cloud Orchestrator runs this workflow to provision the IT environment step by step. The top layer of our orchestrator is service orchestration. It comes with the self-service portal, like a web page giving access to the tool; it could be directly accessible from the application itself, or it could be integrated into an intranet. Inside this self-service portal, we find the self-service catalog: a collection of offerings that the service provider develops for a specific client. It adds an abstraction layer, as Aniket said, to avoid having to enter technical parameters and to focus on the service level we want to present to the client.
Now, two slides regarding the integration between the orchestrator and the Juniper SDN solution called Contrail. Historically, we had two different ways to do it. The first was launched around two years ago and was based only on REST API call integration: basically, the ICO business unit developed a content pack with predefined functions to call the Juniper Contrail controller. Now we have a more open solution based on Heat templates, which come with OpenStack. It is a common, open solution language — a set of commands to orchestrate both OpenStack and Contrail. And since ICO is based on OpenStack, and Contrail integrates with OpenStack too, it's really easy to integrate them using native OpenStack commands. We will see this today. With other solutions, we don't have this OpenStack-based transparency between ICO and the SDN controllers from other vendors; usually we really need REST API integration to get all the features supported by the controller exposed at the orchestrator level. Here, with the use of Heat templates, we have 100% of Contrail features supported via Heat. This is a really valuable differentiator, and when we talk about DevOps, we can facilitate DevOps using an open-standard solution like OpenStack: here we can address business application deployment down to infrastructure deployment in the same workflow, using the same Heat language. This slide shows how it is integrated. On top we have our orchestrator, ICO, and then Contrail provides what we call the Contrail resource plugin, which is included in the Heat engine of OpenStack. When the Heat engine receives a Heat template, it analyzes all the commands. As you can see on the right, some are typical OpenStack commands, like those in the yellow circle, and they are directly managed by the Heat engine itself. Some are Contrail-based commands.
Like those shown in the blue circle. The Heat engine redirects these commands to the Contrail resource plugin, and the Contrail resource plugin then contacts the appropriate function in the Contrail SDN solution. So we only need Heat to make it all work, without any other type of integration between the orchestrator and Contrail. Now I will present the live demo scenario. On the left, we have a client premises — it could also be a branch office — a physical legacy environment connected to a POP router. At T0, we will see that no services have been provisioned, so from the client side, we have no access to the internet, and no access to a collection of services like voice services, et cetera. What we do is connect to the ICO self-service portal. Here we simulate an operator from a service provider, but if the service provider wants to delegate the service to the end user, it can delegate it to the end client's administrator, for example, who will then connect to a dedicated self-service portal where they will find their self-service catalog and the specific, purpose-designed offering developed by the service provider. You just have to open the portal, check some boxes, and press the launch button. When you press the launch button, the following complete IT architecture will be provisioned: ICO will contact OpenStack and Contrail to provision this entire IT infrastructure and the VNFs, represented in the blue square. First, it will request the provisioning of three virtual networks: a private virtual network, a public virtual network, and a DMZ virtual network. This is just an example of what we can do, not a limitation. Then, on the private virtual network will be hosted some services, some VNFs, like the Wi-Fi controller, SIP services (so, voice services), and also a virtual desktop service.
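That dispatch between the two circles can be seen in a single template mixing native and plugin resources. The following is a minimal sketch: the Contrail resource type name is an assumption from one plugin version, and the image, flavor, and zone names are placeholders, not values from the demo.

```yaml
heat_template_version: 2015-04-30

resources:
  # Contrail-based command: the Heat engine hands this resource to the
  # Contrail resource plugin, which calls the Contrail controller.
  private_net:
    type: OS::ContrailV2::VirtualNetwork
    properties:
      name: private

  # Typical OpenStack command: handled natively by the Heat engine,
  # which calls Nova. The VNF attaches to the Contrail network by UUID,
  # which Neutron and Contrail share.
  wifi_controller_vnf:
    type: OS::Nova::Server
    properties:
      name: wifi-controller
      image: wifi-controller-image   # placeholder image name
      flavor: m1.medium              # placeholder flavor
      availability_zone: nova        # choose where the VNF lands
      networks:
        - network: { get_resource: private_net }
```

The `availability_zone` property is also what makes the placement choices shown later in the demo possible: the same workflow can pin security VNFs to a private-cloud zone and the rest to a public one.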
Then it will be connected to the firewall to secure the traffic, which is the vSRX firewall from Juniper. The same for the public network: it will be connected to the internet through a public virtual router. In the blue dashed lines, you can see the overlays. Overlays are created from the internet POP router to the public virtual router to give the client access to the internet. On the other side, the private virtual router will be connected to the client POP router in a virtual CPE dedicated to this client. So now I hope I have the internet connection to show you this for real. I have no more. Excuse me, could you come with the credentials? No, I am not connected to the Wi-Fi. Okay, because I asked for the open one. Okay, I'm connected again, so it's okay. Nice. So what we can do first is check our client VM, to show you that there is no connection at all at the beginning. I'm connecting to my VMware environment — nothing related to OpenStack and Contrail — to simulate a client laptop or PC. Okay, so this is my client VM. I will open a browser on it and test whether I can access the internet, just going to ibm.com. You can see the request is running and I can access nothing. This is the same for all the services we will provision — the Wi-Fi services, the SIP gateway services, the firewall, and DDoS protection: nothing is working at T0. Then I will show you my OpenStack dashboard here, where no stacks have been deployed yet — no items. And on the Juniper controller dashboard, no virtual networks have been created yet, et cetera.
We also developed a graphical view of the underlay and the overlay on the same map, just to show for the demo that the services and virtual networks will be created and will appear here in this zone. Now I will connect to the IBM Cloud Orchestrator dashboard, and in my service catalog I find some offerings I can select. We have categories, corresponding perhaps to the different entities of your company. I will choose this one: deploy or destroy a complete service chain using Heat templates. The first thing it does is request from Contrail and OpenStack which services, which stacks, and which VNFs are already deployed. You can see here a summary of the map we have already seen — nothing is deployed at all — and here, a set of checkboxes for the currently deployed services. Each checkbox is related to a Heat template, so by checking the boxes, I ask ICO to request the provisioning of each component. We divided it into several components. The first is the creation of the basic networks: the public virtual network here, the private virtual network, and the DMZ virtual network; then the firewall service between the private and public networks, which will perform stateful firewalling and network address translation; the IPS service to protect our public web server from the internet; and then the voice services, the Wi-Fi controller, and remote desktop. By checking all the boxes, I am now able to deploy the complete IT infrastructure. By clicking Deploy All, ICO requests the provisioning of all the components. We will quickly switch to the monitoring tool. It will appear in a few seconds, because it has a refresh timer of two seconds. Here, ICO is requesting Contrail to provision the networks, and then it is requesting OpenStack as well; these are related, because Contrail is based on OpenStack. I think I have to refresh. Ah yes, some tasks are running.
Okay, sorry, we missed it. But here are represented two sites. One is our private cloud environment on the left: one compute node geographically based in the south of France. We have deployed on this private cloud some security services — services we don't want to deploy in the public environment. So it will be hosting, and it is already done, the DDoS appliance and the NAT instance, which is the firewall. We keep our security products, our security zone, in our private cloud. And then we deploy the other VNFs, like the web server, SIP gateway, and Wi-Fi controller, in SoftLayer Amsterdam. By using availability zones, we are able to select where we want to deploy each VNF. Now, if I come back to my OpenStack dashboard and press F5, I will be able to see all the stacks I have requested — ah, sorry, I'm not in the right view. So, all the stacks related to the Heat templates have been created; you can see them here. And as you can see in the availability zone column, some are provisioned in SoftLayer Amsterdam and some in the nova zone, which is our private cloud environment. They have been running for one or two minutes, so this is really the provisioning we have just done. There is the same view in Contrail for the virtual networks: we find the three virtual networks — private, DMZ, public — plus management, of course, and two other virtual networks for the DDoS appliance, which is in transparent mode, so two-arm. Now I can go back to my client VM to show you that I can access the internet. So, I'm not lucky today. Nice. It seems something happened with the firewall. But it is very flexible, because we can add or remove provisioning items or VNFs: I will remove the firewall service and update the service, just to relaunch the firewall again. As you can see, the firewall services are not currently deployed, so I will recheck the box and relaunch that specific VNF deployment. It's that fast.
Okay, so I missed it — nobody told me. But it takes two or three minutes for the vSRX to launch, so we can maybe look at the Wi-Fi controller now, and maybe also log on to the vSRX — that's a real virtual network function. The vSRX is not here yet. Okay, now the internet is working. Ah yes, I am late: the internet is now working, and I am able to log in to the vSRX, which is performing stateful firewalling and NAT at this transition, and also to the DDoS instances, just here. If I move to the right, I can now access my public web server. If I press F5 several times to generate some traffic, we are able to see in the DDoS appliance that the traffic coming from the client VM to the public web server is being inspected in real time. So yes: this is my NATed public address from the vSRX, and this is the IP address of the public web server. So this is a real use case — working with some difficulties, but working — and we have provisioned this entire IT infrastructure in one or two minutes. Okay, we have to close. So thanks to you all, and if you have any questions, please come to me and I will answer them afterwards. Thank you.