Hi, good morning. Thanks for attending the session. I have all of 10 minutes, so I'm going to be very quick. Before I get into the demo itself, I'll spend a minute or two on what Juniper's SDN solution is. Basically, it's Contrail. Contrail is open source, and it works with many different orchestration systems. Since this is the OpenStack Summit, the demo we're going to show is based on using OpenStack as the orchestration tool. That's why the session is called Contrail and Orchestration, and orchestration here means OpenStack.

To give you an idea of where Contrail fits into OpenStack, I'll use a very high-level example. OpenStack has many components, so I'll just use a few. You have the compute APIs to talk to the compute nodes, the storage APIs to talk to the storage, and the network part, which connects all the servers and all the networking equipment: routers, virtual appliances, physical appliances. That's where Contrail comes in. It sits in the middle and talks to all the networking devices, and the interface between OpenStack and Contrail is Neutron.

So what components are inside Contrail? Being SDN, we split the planes, so we have basically two components. One of them sits inside the compute node, which we call the vRouter; it replaces the Linux bridge. The second component is the controller. The controller takes care of telling the vRouter how to forward traffic, and when you create a virtual network inside OpenStack, it configures the vRouter as well. All the VMs connect to the vRouter and all the traffic goes through it, so there's a ton of useful, valuable information that you can collect.
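The control-plane/data-plane split described above can be sketched as a toy model: a controller that knows every virtual network and pushes forwarding state down to each vRouter. All class and variable names here are hypothetical, and the real controller-to-vRouter protocol (XMPP in Contrail) is reduced to a method call.

```python
class VRouter:
    """Data plane on a compute node: holds per-network forwarding tables."""
    def __init__(self, name):
        self.name = name
        self.routes = {}  # virtual network -> {vm_name: ip}

    def program(self, network, vm_table):
        # State pushed down by the controller
        self.routes[network] = dict(vm_table)


class Controller:
    """Control plane: tracks virtual networks and configures every vRouter."""
    def __init__(self):
        self.vrouters = []
        self.networks = {}  # network name -> {vm_name: ip}

    def add_vrouter(self, vr):
        self.vrouters.append(vr)

    def create_network(self, name):
        self.networks[name] = {}
        self._sync()

    def launch_vm(self, network, vm_name, ip):
        self.networks[network][vm_name] = ip
        self._sync()

    def _sync(self):
        # Push the updated tables to every vRouter
        for vr in self.vrouters:
            for net, table in self.networks.items():
                vr.program(net, table)


controller = Controller()
vr1 = VRouter("compute-1")
controller.add_vrouter(vr1)
controller.create_network("frontend")
controller.launch_vm("frontend", "web-1", "10.0.1.3")
print(vr1.routes["frontend"]["web-1"])  # -> 10.0.1.3
```

The point of the split is visible in the sketch: the vRouter never makes decisions of its own; it only forwards according to state the controller programmed into it.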
We collect that information and process it with our analytics tool. So that's a very brief idea of what Contrail is. If you need more information, just come to booth C32; we are there.

The lab setup has five servers. The first server hosts the configuration node and OpenStack. We have two control nodes and two compute nodes, which run the vRouter.

So without further ado, I'm going to go right into the demo itself. I'm using RDP to get into the lab in my office, so it's a bit small here. This UI should be very familiar to a lot of you. What I'm going to do is something you typically do in the data center: create virtual networks, put some VMs in each virtual network, and apply policies so that they can talk to each other. Simple stuff. Then I'll show you some of the additional things Contrail can help you with.

Creating a virtual network is pretty straightforward. Just go to the networking section and click create. Give it a name, say, OpenStack front end, and give it an IP address block. That's it. What happened is that through OpenStack and Neutron, the Contrail controller knows it needs to create this virtual network. Once that is done, you can put some VM instances in it. Let me create one more so that I can show you.

Normally when you create a VM, you patch it into a particular VLAN. In our case, what we need to do instead is put the VM into the proper virtual network. There's a set of interfaces that have already been created, so I just pick the interface for this particular VM and launch it. When the instance comes up, it attaches itself to that virtual network. So let's do one more.
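For reference, the create-network clicks above boil down to two Neutron v2.0 API calls. This is a sketch of the request bodies Horizon sends; the field names follow the Neutron API, but the values ("frontend", 10.0.1.0/24) are example placeholders, not the ones used in the demo.

```python
import json

# POST /v2.0/networks -- create the virtual network
network_req = {"network": {"name": "frontend", "admin_state_up": True}}

# POST /v2.0/subnets -- give it an IP address block
subnet_req = {
    "subnet": {
        "network_id": "<id returned by the network call>",
        "ip_version": 4,
        "cidr": "10.0.1.0/24",
    }
}

# A Neutron plugin such as Contrail's receives these requests and
# relays the configuration to the controller.
print(json.dumps(network_req))
```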
What I'm doing now is creating one instance in each virtual network. By default, just like with VLANs, they can't talk to each other unless you route between them. So I'll go into the console of one of the VMs, and you can see that I can't connect to the other side. But I can just apply a policy, and the controller will tell the forwarding devices that these two networks can talk to each other, and the traffic will go through. So, 11.253 — we can't ping the other side yet.

Creating a policy is pretty straightforward. We have the Network Policy tab, and I'll just create a policy. This creates a policy without any rules. Then I go into the policy itself and say that any traffic coming from this source network to this destination network is allowed to pass. So the policy has been created, and now I need to attach it to the networks. By attaching it, we tell the controller that we want to allow these two virtual networks to talk to each other. And if nothing goes wrong, you see the pings starting to come through.

So there's no need for you to go and apply ACLs on a switch, for example, and if the VMs move, to physically go and amend the ACLs. Managing policies is actually very tough inside the data center.

Of course, Contrail brings more than just these features to the table. One other thing: all these VMs come up and each is given some IP, right? But you don't really care about the IP. What you want is to know how to access the system, and usually you give it a meaningful name. That's where virtual DNS becomes important. So I can try to ping the name — and note that I have not configured anything.
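The policy step above can be sketched as a tiny evaluation function. The data model here is hypothetical and much simpler than Contrail's actual policy rules, but it shows the key behavior: with no attached policy, the default between virtual networks is deny, which is why the first ping fails.

```python
def allowed(src_net, dst_net, policies):
    """Return True if any attached policy rule permits src -> dst traffic."""
    for rule in policies:
        if (rule["src"] == src_net
                and rule["dst"] == dst_net
                and rule["action"] == "pass"):
            return True
    return False  # default deny between virtual networks


policies = []
print(allowed("frontend", "backend", policies))  # False: ping fails

# Attach an allow rule, as done in the demo
policies.append({"src": "frontend", "dst": "backend", "action": "pass"})
print(allowed("frontend", "backend", policies))  # True: ping goes through
```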
What Contrail does is that in the vRouter, whenever a VM comes up, it records a mapping between the name you gave the VM and its IP address. Then when you try to do something from the VM, it queries DNS — which is the vRouter — and gets a reply. Very automatic, very straightforward.

We also have a Contrail UI. The controller has REST APIs that you can use, and we use them to build a different UI where you can see more things: how many vRouters there are, how many control nodes, the utilization of each particular vRouter, and so on. A bunch of information.

So let's say you want to do some debugging in the system, some kind of packet capture. Typically you would need to go into the switch and configure port mirroring, and if the mirror needs to span switches, you may have to set up a SPAN session, or configure multiple switches at the same time. What this system helps you do is all of that very seamlessly. You just say you want to mirror traffic — anything that goes from, say, this particular network to a different destination network. What happens is that we launch what we call a service instance, which is basically Wireshark preconfigured inside a Linux virtual instance, and the controller dynamically pipes the traffic from one network to the other. When it's done, if you go back to OpenStack — OK, the last one — you get your traffic. All the ICMP traffic from the ping I did just now is all here. So it makes troubleshooting very easy when something goes wrong in the network. And I've run out of time.
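The virtual DNS behavior described above can be sketched in a few lines: when an instance boots, a hostname-to-IP mapping is recorded, and the vRouter answers the VM's DNS queries from that table. Function and variable names are hypothetical.

```python
dns_table = {}

def register_vm(hostname, ip):
    """Called when an instance comes up: record its name -> IP mapping."""
    dns_table[hostname] = ip

def resolve(hostname):
    """The vRouter answering a DNS query from a VM."""
    return dns_table.get(hostname)


register_vm("web-1", "10.0.1.3")
print(resolve("web-1"))  # -> 10.0.1.3, with no manual DNS setup
```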
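The mirroring rule can be sketched the same way: the controller matches traffic between the two networks and copies it to the analyzer service instance (the preconfigured Wireshark VM in the demo). The packet representation below is a hypothetical stand-in for real packet headers.

```python
def mirror(packets, src_net, dst_net):
    """Return copies of the packets matching the mirror rule,
    destined for the analyzer service instance."""
    return [p for p in packets
            if p["src_net"] == src_net and p["dst_net"] == dst_net]


packets = [
    {"src_net": "frontend", "dst_net": "backend", "proto": "icmp"},
    {"src_net": "backend", "dst_net": "frontend", "proto": "icmp"},
]
captured = mirror(packets, "frontend", "backend")
print(len(captured))  # 1: only frontend -> backend traffic is copied
```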
So if you want to see more demos — we have a lot more useful features — we are at booth C32. Visit us. Thank you.