Okay, let's go ahead and get started. Well, thank you everyone for coming in early on a rainy morning for this exciting session. The topic is bridging virtualization and security with OpenStack and Contrail. The format is: I will be talking for about 15 to 20 minutes, and then I'm going to give a quick demo of some security services that we can dynamically spin up on top of our open SDN controller, OpenContrail, and OpenStack. And then my colleagues, Shree and Aniket, will lead the hands-on session and walk you through exactly how you do service chaining for security services. We welcome you to interact with us: you're welcome to tweet, join our LinkedIn group, and visit our opencontrail.org website. Basically, our SDN controller is integrated with OpenStack and it's completely open sourced, so we welcome developers to participate in the development. In Chinese, the word crisis has two characters. The first character, wei, means a time of danger, and the second character, ji, means opportunity. So the Chinese understand that change is inevitable. And when change happens, it brings both danger and opportunity, and you're either energized or demoralized depending on whether you see the opportunities or you only see the danger. So what's my point here? We are at the forefront of a revolution towards cloud and virtualization, and that brings a lot of complexity and challenges to the way we deploy security. If you were in some of the previous sessions at this summit, you've heard a lot about the kinds of challenges cloud and virtualization have brought to the security architecture, so I'll just list some of them here. First of all, cloud and virtualization were driven by dynamic applications, and they, in turn, enabled applications to be more dynamic. When I say dynamic, what I mean is actually twofold.
The first dimension of dynamic applications is that applications can shrink and grow at a moment's notice. To give you an example: back in 2011, Alec Baldwin was on a plane, and he refused to turn off his smartphone because he wanted to keep playing Zynga's game, Words with Friends. Once the news broke, there was record traffic on Zynga's servers from people trying to play the game. So an application can grow very fast, unexpectedly. The second dimension is the rate of innovation happening in the software development space. With agile development, you're all familiar with DevOps. DevOps teams need to spin up dynamic environments to test and release software several times a day, and these application environments may have different security implications and security requirements. Security that is very static is not going to keep up with the new requirements posed by dynamic applications, so security must change along with the applications. The second challenge is workload mobility. Even within one data center, due to high availability requirements, the need to meet SLAs, and also disaster recovery, workloads may move around within the same data center or even across different data centers. The traditional way of doing security won't follow them: security must be always on in the new era, and it must be in lockstep with your mobile workloads. And last but not least, in a virtualized cloud data center, we are seeing an increasing amount of East-West traffic, which is traffic between your virtual machines, intra-data-center. According to statistics, more than 80% of data center traffic is actually intra-data-center, between your virtual machines, versus less than 20% actually going in and out of your data centers. What that means is that data center security must be more granular.
So now if you only have perimeter-based security, a firewall at the edge of your data center, that won't work, because either you have to divert all your intra-data-center traffic out to the perimeter and back, which causes huge inefficiency, or that traffic simply evades your security protection: it won't go through any firewall or protection at all. Those are the challenges that virtualization and cloud are posing for the security architecture. But we argue that, at the same time, they are bringing huge opportunities for security to do a better job. In order to show you where the opportunities are, I want to step back a little and talk about the key things that enable security to do its job. For security to be effective, we need both context and isolation. By context, I mean application context: the environment the applications are running in. For example, information about the application, whether it's an Oracle database or a web server, the vendor information, the operating system and file system, and also how much storage it needs and what its CPU usage is. With these, you have better knowledge of what the applications require from the architecture, so you can do better security and protection. The other thing that really contributes to good security is isolation. By isolation, I mean that the security function itself should ideally be isolated from the entities it is protecting. With host-based security, for example, when the host is infiltrated, the security can be easily disabled, and if your antivirus software can be easily disabled, it doesn't achieve your purpose of good protection. Traditionally, security has struggled to achieve both. Host-based security has very good context: it understands what applications are running in the environment.
It understands the applications and the file system and everything, but it doesn't have good isolation: as I said, an attacker can relatively easily disable the antivirus software once they control the host. Network-based security, on the other hand, traditionally has very good isolation, because it doesn't run on the same host, the same entity, as the applications it's protecting. It's in the network, so it's isolated from the applications. But it has been challenging for network-based security to get context information, and it does need application context to be more efficient. To give you an example: a UTM system or a firewall might need to go through tens of thousands of signatures, wasting a lot of CPU cycles, to find out which signature an attack actually matches. But if it knows, oh, this is an Oracle server, then it can narrow down the number of signatures it needs to match, maybe down to hundreds instead of tens of thousands. So knowing the application context can make security much more efficient. We argue that in the era of virtualization and cloud, we have all the conditions to do security much better: we can have security that knows the context and also has very good isolation. What do I mean by that? At the host level, with server virtualization, you have a hypervisor layer that is separated from your application environment, which runs in your virtual machines. So if you're protecting your applications, security implemented in the hypervisor layer has very good isolation, and because it's on the host, it already has very good application context. So hypervisor-level security gains isolation: it's harder for attackers to get to the hypervisor level and disable the security.
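The signature-narrowing idea above can be sketched in a few lines. This is a minimal illustration, not any real product's logic; the signature entries and the context fields are invented for the example.

```python
# Hypothetical sketch: use application context to shrink the signature
# set an IDS/firewall must evaluate. All names here are illustrative.

def narrow_signatures(signatures, app_context):
    """Keep only signatures relevant to the applications actually running."""
    apps = set(app_context.get("applications", []))
    return [s for s in signatures
            if s["target_app"] in apps or s["target_app"] == "any"]

signatures = [
    {"id": 1, "target_app": "oracle-db"},
    {"id": 2, "target_app": "iis"},
    {"id": 3, "target_app": "any"},
    {"id": 4, "target_app": "wordpress"},
]

# Context says this host runs only an Oracle database, so only the
# Oracle-specific and generic signatures need to be matched.
relevant = narrow_signatures(signatures, {"applications": ["oracle-db"]})
print([s["id"] for s in relevant])  # [1, 3]
```

In a real deployment the context would come from the orchestrator or hypervisor rather than a hand-built dictionary, but the filtering principle is the same.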
Similarly, for network-based security, traditionally we were challenged to get application context. But with software-defined networking, we have a logically centralized SDN controller that holds the information about all your virtual networks. Along with OpenStack, it provides much better programmability, and users can easily convey their security requirements through a policy-driven framework enabled by SDN and network virtualization. In the next few slides, I'm going to use Contrail and OpenStack as an example of how we make this happen. First, a quick introduction to Contrail. I use Contrail interchangeably with OpenContrail because it's an open-source project. It's an open-source SDN controller and cloud networking virtualization, orchestration, and automation software led by Juniper, and we have seen pretty rapid adoption in the open-source community. To date, we have contributed more than 750,000 lines of code to the open-source community, and we have seen huge adoption among service providers and enterprises alike. We have about 130 proof-of-concept and deployment engagements, and among them, 90% of the top service providers in the world have done proofs of concept with Contrail for their SDN, cloud, and NFV projects. Architecture-wise, at the bottom of the stack we have network virtualization. Basically, this is your plumbing. Traditionally this has been done with VLANs, where you need to deal with broadcast storms, restrictions on the number of VLAN IDs, and a lot of scaling issues. Juniper, along with some other leading SDN vendors, adopted a software overlay approach that removes the restrictions VLANs have, so you can dynamically spin up virtual networks. These virtual networks can also represent the different security requirements your applications have.
So if a group of your applications, or a certain tier of your multi-tier application, has specific security requirements, you can put them in the same virtual machine group or endpoint group so that they can be protected with a dedicated set of policies. At the next layer up, we have a set of virtualized network services. Juniper has traditionally been pretty strong here, and we have a suite of virtualized security services that we can run on our SDN platform. That includes our Firefly Perimeter, which is a firewall product, an anti-DDoS product called DDoS Secure, and also web application security and SSL VPN. We actually have a demo of running them over the Contrail platform. And because Contrail is open source and open standards, a very open platform, we can easily enable third-party security and network services to run on top of Contrail. If you're interested in developing such a service, and we have time at the end, we'll show you how to pull the source tree and build your own dev stack. On top of that, we have the SDN control and configuration layer. This layer is composed of a logically centralized SDN controller that talks to distributed forwarding elements embedded in your compute nodes, which are normally your physical servers. Similar to Open vSwitch, we have a vRouter that runs at the hypervisor level, and we support a suite of hypervisors: KVM, Xen, ESXi, and Hyper-V in the future. What the controller does is accept user configuration. Through OpenStack, you have a self-service portal where you configure virtual networks, and through the Contrail API you can do additional, more network-focused activities, like configuring a service template and doing dynamic service chaining.
The Contrail SDN controller takes all this information, does its processing, and passes it down to the distributed forwarding elements. On top of that, we can integrate with cloud management platforms like OpenStack and CloudStack, and in a service provider environment we can also integrate with their OSS and BSS. One thing that stands out for Contrail is that we have very good statistics and analytics information about the infrastructure. What that does for service provisioning is that, with this rich set of information, you can figure out what additional things you need to do with a service and provision it better. For example, if you provision one virtual machine running a firewall, and suddenly your application footprint increases and you need to increase your firewall footprint, you can detect that through CPU utilization or throughput. That can be fed back to your cloud management system and your SDN controller, and OpenStack can, in turn, spin up more virtual machines and automatically scale your security operation to meet the demands of your application. This is just an architecture diagram of the components in Contrail. I already mentioned some of this. At the compute level, over here... let me see, I actually don't know how to get that little red dot. But these two are my compute nodes, and I can have multiple virtual machines running on top of them. That's the workload I need to protect. And I have a vRouter running at the hypervisor level. One of the differentiators of Contrail is that the vRouter is a layer 3 entity, versus Open vSwitch, which is a layer 2 entity. As a layer 3 entity, we can have much richer functionality.
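The scaling feedback loop described above can be sketched as a tiny decision function: analytics report CPU utilization for the firewall instances, and the orchestrator decides how many instances to run. The thresholds and limits here are assumptions for illustration, not Contrail defaults.

```python
# Illustrative autoscaling decision, assuming analytics deliver recent
# CPU utilization samples (0.0 to 1.0) for the service instances.

def desired_instances(current, cpu_samples, high=0.80, low=0.20, max_instances=8):
    """Return how many firewall instances the orchestrator should run."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high and current < max_instances:
        return current + 1   # overloaded: scale out by one instance
    if avg < low and current > 1:
        return current - 1   # mostly idle: scale in, keep at least one
    return current           # within the comfortable band

print(desired_instances(1, [0.95, 0.90]))  # 2
print(desired_instances(4, [0.05, 0.10]))  # 3
```

A real controller would also debounce these decisions over time so that a momentary spike doesn't cause instance churn.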
We can do load balancing, and even some firewall, security, and NAT functionality at the hypervisor level, without introducing another virtual machine for your routing and security functions, so performance is much better when you need communication between different virtual networks. The vRouters are the distributed forwarding elements, and on top of them we have the SDN controller. As I mentioned at the beginning, we use an overlay approach to realize virtual networks, tenants, and multi-tenancy. We are physical network agnostic: we don't really care what switch fabric you use. It can be Juniper's QFabric, it can be Cisco's UCS, it can be Brocade, or it can be a mixed environment. Because we establish overlay tunnels on top, you don't need to replace your existing data center infrastructure to enjoy the benefits we bring. And we tightly integrate with OpenStack for cloud orchestration. This is a list of some of the key features we support; I already mentioned some of them. At the forwarding element level, we support load balancing and security. Security is actually built into our design, because the virtual networks are inherently secure: virtual machines within the same virtual network can talk to each other, but without any additional work, virtual machines in different networks are completely separated from each other, and they don't share any information unless you establish a policy to connect them, for example through a security service. And then service chaining: we made service chaining really easy, and I'm going to show you a quick demo in a moment. The last slide I have just talks about how we map the logical infrastructure to your physical infrastructure.
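The default-deny behavior between virtual networks can be captured in a few lines. This is a conceptual sketch with made-up network and policy names, not the actual forwarding logic.

```python
# Sketch of the isolation model: traffic inside a virtual network is
# implicitly allowed; traffic between virtual networks needs an explicit
# policy connecting the two. Names are invented for the example.

def allowed(src_net, dst_net, policies):
    """Decide whether a flow is permitted under the default-deny model."""
    if src_net == dst_net:
        return True                        # intra-VN traffic is implicit
    return (src_net, dst_net) in policies  # inter-VN requires a policy

policies = {("web-net", "db-net")}         # e.g. attached via a firewall service

print(allowed("web-net", "web-net", policies))  # True
print(allowed("web-net", "db-net", policies))   # True, policy exists
print(allowed("db-net", "web-net", policies))   # False, no reverse policy
```

In Contrail a policy usually covers both directions and can steer the traffic through a service instance; the one-directional set here just keeps the sketch small.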
In the physical infrastructure, you can have multiple compute nodes with multiple virtual machines running on top of them. The logical infrastructure I want to build is on top. For example, I want to build three virtual networks: the green, the blue, and the yellow. And then I want to connect them with certain security features: I want to connect these two with a firewall, and those with deep packet inspection. So how is that done? These virtual machines can be anywhere in your data center; in our implementation they're not restricted by VLAN clustering or anything like that. What we do is establish overlays, which can be MPLS over GRE, MPLS over UDP, or VXLAN tunnels, to connect these virtual machines together, and then chain the firewall service in between. The advantage of this approach is that we can deal with workload movement. That can be movement of the firewall instance itself, which is a virtual machine, or the workloads you're protecting can move around. The Contrail SDN controller can dynamically update the tunnels based on that movement, because, along with OpenStack, it understands where the workload has moved to, and it can adjust the tunnels to make sure the security service is always on and in lockstep with your workload. The other advantage is auto scaling. With the rich analytics we collect, we can detect whether there is a need to scale your firewall services up or down, and we can do that automatically. And because the vRouter can do load balancing at the hypervisor level, if we scale your firewall service from one instance to four, we can automatically load balance among those four instances, giving you much higher capacity. So I think that's the talking part.
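Two of the mechanisms just described, updating tunnel endpoints when a workload moves and spreading flows across scaled-out service instances, can be sketched together. The class and names are invented for illustration; the real controller tracks far more state.

```python
# Toy model of the controller's view: each VM maps to the compute node
# hosting it (the overlay tunnel endpoint). Moving a VM only remaps the
# endpoint; nothing in the underlay has to be reconfigured.

class OverlayController:
    def __init__(self):
        self.vm_host = {}          # vm name -> compute node

    def place(self, vm, host):
        self.vm_host[vm] = host    # initial placement or live migration

    def tunnel_endpoint(self, vm):
        return self.vm_host[vm]

def pick_instance(flow_hash, instances):
    """vRouter-style load balancing across scaled-out service instances."""
    return instances[flow_hash % len(instances)]

ctl = OverlayController()
ctl.place("fw-1", "compute-1")
ctl.place("fw-1", "compute-2")          # firewall VM migrated
print(ctl.tunnel_endpoint("fw-1"))      # compute-2

print(pick_instance(7, ["fw-1", "fw-2", "fw-3", "fw-4"]))  # fw-4
```

The hash-modulo picker stands in for the per-flow ECMP-style balancing the vRouter performs; real implementations hash the flow five-tuple so a given flow sticks to one instance.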
With virtualization and cloud management, we can easily make security very dynamic, very scalable, and very pervasive in your data center. With that, I'm going to show you a quick demo of our DDoS Secure over Contrail and how it can help you mitigate DDoS attacks. Let me see if I can make this window a little bigger. This is your usual OpenStack interface, and we also have a Contrail UI. They use the same underlying information, so I can use the two interchangeably, but the Contrail UI gives you some more advanced networking functionality. What I'll do is spin up two virtual networks. One is the attacker network, DDoS attacker net; I give it an IP address, and that's how easy it is to create a virtual network. Then I create another one called DDoS target net. This is where my web server, the entity I want to protect, is going to reside. Actually, I made a mistake there, so I'm going to change that later. While I'm in OpenStack, I'll spin up a virtual machine in each of my virtual networks: one is going to be the attacker, and the other is going to be the target I'm going to attack. I put the first in my attacker network, and then I launch another instance to be my target. Now I'll switch to my Contrail UI so I can do dynamic service chaining. Here I'm going to configure a service template. It's going to be a transparent service, meaning a layer 2 service. I choose my service image, and then I'll have three interfaces: the left interface connects to my attacker, the right interface connects to my target, and I have a management interface where I can see the statistics about the DDoS attack. With that service template, I can spin up multiple service instances. I'm going to do one now, a DDoS service instance.
I pick the template I just created, I put my management interface in the public network so I can access it through a browser, and I leave the left and right interfaces as auto-configured. This will take a minute or two; basically, OpenStack is spinning up a virtual machine to run my anti-DDoS software image. After that, I'll set up a policy to... oh, it's actually fast, it's active now. So now I'm going to set up a security policy to direct the traffic from the attacker network to the target network through my DDoS Secure service instance. This is doing the service chaining. I call it DDoS policy. The source network is my attacker network, the destination network is my target network, and I can apply a service: the service instance I just created. Once the policy is created, I apply it to both my attacker and my target network. I select the DDoS policy I just created and save. I removed the one I accidentally added earlier, and I applied the DDoS policy here. Now I go back to my attacker virtual machine. I need to log on to the console and see if I can reach my target so I can issue some attacks. I run ifconfig: my attacker has an IP address of 192.168.123, and my target has an IP address of 101.253. I do a quick ping to make sure I can reach my target. The ping goes through, so now I'm going to launch my attacks. Some of you who are very familiar with security know that nowadays DDoS attacks are getting more and more sophisticated. Traditionally we saw a lot of TCP SYN attacks, flooding-type attacks, but recently there are more and more Slowloris-type attacks that really exploit application weaknesses. For example, they open TCP sockets but never close them, so your web server can quickly run out of resources. I'm going to launch both types of attack.
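The policy built in the UI just now boils down to a simple match-and-steer rule. This is a rough sketch whose field names mimic the UI steps, not a real Contrail API payload.

```python
# Sketch of the demo's service-chaining policy: traffic from the attacker
# network to the target network is steered through the DDoS service
# instance. Structure and names are illustrative only.

policy = {
    "name": "ddos-policy",
    "source_network": "ddos-attacker-net",
    "destination_network": "ddos-target-net",
    "services": ["ddos-secure-instance"],   # the chained service
}

def service_chain(src, dst, policy):
    """Return the services a flow must traverse, if the policy matches."""
    if src == policy["source_network"] and dst == policy["destination_network"]:
        return policy["services"]
    return []

print(service_chain("ddos-attacker-net", "ddos-target-net", policy))
# ['ddos-secure-instance']
print(service_chain("ddos-target-net", "ddos-attacker-net", policy))
# []
```

In the real system the controller compiles such a policy into routes so the vRouters forward matching packets through the service VM's left and right interfaces transparently.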
I use hping3 to launch a TCP SYN flooding attack. The -S flag means it's a TCP SYN attack; I launch it on port 80, which is my web server, with the flood option and the target's IP address, and I put that in the background. Then I'm going to launch a Slowloris attack. This is just a Perl script, slowloris.pl, with DNS and port options. I hope my connection is OK; I think the network connection is somewhat slow, so if I can't complete this demo I can always blame it on the network. I blame the wireless network. OK, now I see some life on the attacker VM. I'm going to go back. The DNS is 192.168.253, the port is 80. As you can see, it's building up sockets on the web server, my target. If we go back to the instances, you can see this is my DDoS service instance, and because I put the management interface in the public network, I can log on to it to see the attack. Let me see the address again: 10.1.3.162. I'll be done in a minute or two, once we see the attacks. This is our DDoS Secure device manager. It has a dashboard that lets you configure DDoS Secure, and it can also show you live attacks that it has detected. From the dashboard, as you can see, we're already seeing traffic spikes at the DDoS Secure device. This one shows the CPU usage, which has also spiked, and this is the attack level, so we've already detected the attack. You can also see that through live incidents. Oh, actually, let me see... worst offenders? No, sorry, live incidents. As you can see, the top one has a very high traffic rate: that's your TCP SYN attack. And this one over here, the second one, has a very low traffic rate: that's your Slowloris attack. The DDoS Secure service detected both. And from here, you can see we're in logging mode, so we're not really mitigating the attacks, we're just logging them.
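The reason both incidents show up despite their very different traffic rates can be sketched with a toy classifier: a SYN flood is volumetric, while Slowloris is low-and-slow, holding many half-open requests at a trickle of packets. The thresholds below are invented for illustration, not DDoS Secure's detection logic.

```python
# Toy classifier contrasting the two attack profiles seen in the demo.
# Thresholds are illustrative assumptions, not product behavior.

def classify(pkts_per_sec, held_open_requests):
    if pkts_per_sec > 10_000:
        return "volumetric (e.g. SYN flood)"
    if held_open_requests > 100:
        return "low-and-slow (e.g. Slowloris)"
    return "normal"

print(classify(50_000, 0))    # the hping3 flood: huge packet rate
print(classify(5, 500))       # the Slowloris script: tiny rate, many sockets
```

The point is that rate-based thresholds alone miss Slowloris; the detector also has to track resource exhaustion, such as sockets held open without completing requests.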
What I can do is configure my DDoS Secure to do mitigation: I put it in defending mode, and then all the attack traffic will be dropped. Now that I have changed to defending mode, you can see the amount of traffic coming into DDoS Secure is still quite large, but the outgoing traffic is going to drop to zero pretty soon, because the DDoS Secure service dropped the attack traffic. So now your target, your web server or whatever application, is protected. That ends my demo, and I'm going to invite my coworkers Shree and Aniket to lead you through a hands-on session where you can do some of these things yourself. Thanks, Chloe. So we'll get into the hands-on part of the session. Each of you should have received two handouts: one goes over the lab exercises we are going to go through today, and the other one has the sandbox information. If you haven't received one, please raise your hand and we'll come and help you; you'll need this. We do these hands-on exercises in a lab, and the lab is behind a VPN. The handout contains the URL you point your browser to in order to connect to that VPN, and then you log in using the login information. There is a little client you'll have to download, a Java applet. Once you log into the VPN, there's a start button; when you click on it, you'll download either a Junos Pulse or a Network Connect applet, and once you download that, you're actually connected to the VPN. Once you're connected, you can point your browser to the 10.10.11.11 or 10.10.11.16 URLs; those are the URLs of the OpenStack and Contrail instances running in the lab. We also have a raffle toward the end of the session: we'll do a drawing among the folks who complete the exercises, and there's an iPad giveaway at the end. So good luck.
I'll spend three or four minutes waiting for everyone to get connected to the VPN. If you have any issues connecting, raise your hand and we can help you get connected. Once we're connected, we'll do a very quick overview of the architecture and then get on to the hands-on as soon as possible. Let me do the same thing on my screen to quickly show you how it should look. This is exactly what I did: I pointed my browser to the OpenLab URL, 63.119.251.102. Chloe has her own credentials to the OpenLab, so she logged in using her credentials; then you hit start, that downloads the Java applet, and that's how you get connected to the VPN. This should be running to tell you that you are connected to the VPN. All right? So let me spend a few minutes on a quick recap of the Contrail architecture. The goal is not to deep dive into the architecture itself, only to set context for the exercises. You can see it's nicely organized into four layers; the four layers you saw earlier in Chloe's slides map to these same four layers. At the top, you have cloud orchestration, and we are all familiar with OpenStack, so the topmost layer is the OpenStack layer, with which you will interface with the system. At the next layer, there are two pieces of software that Contrail provides: the controller and the vRouter software. The controller software is the next layer, and the vRouter is a kernel loadable module that is instantiated in the hypervisor of each compute node that your VMs hang off of. So that's the next layer: the vRouter and the controller software. The controller itself is composed of three different software modules: the control piece, the config piece, and the analytics piece.
The config piece is responsible for translating the high-level, abstract definition of your overlay picture into a lower-level data model, and it hands that data model over to the control piece. The control piece then speaks XMPP to the vRouters and programs them to instantiate that picture in the overlay. You describe the picture of the overlay you want to see using the orchestrator software. Each component of the system also sends a bunch of analytics information, and all of it is dumped into highly available databases. We will see in the Contrail UI how you can retrieve that analytics information, the flow records and also the log messages. Underneath the overlay is the physical underlay. The physical underlay provides any-to-any IP connectivity within your data center. Remember, your virtual machines hang off of these x86 compute nodes, x86 servers, and the underlay provides any-to-any IP reachability between these servers. The only thing Contrail expects from the underlay is this IP reachability; there is no other expectation from the physical underlay. That's the beauty of the Contrail solution. Contrail is a network virtualization platform, and fundamental to network virtualization is this idea of being able to give the tenants of the cloud the illusion of having a logically isolated network, isolated from the rest of the tenants. In a private cloud, a tenant is a department; in a public cloud, a tenant is a customer of the cloud. So fundamental to network virtualization is this illusion of isolation from the rest of the tenants, and the building block of this network isolation is the concept of a virtual network.
In the Contrail world, we build these virtual networks using overlays, as opposed to some other solutions that use VLANs. There are several issues with using VLANs, and we won't go over them here because the focus of this session is the hands-on. So remember that virtual networks are implemented in the Contrail world using overlays. The tunnels you see in the picture, between the vRouters and from the vRouter to the physical gateway, are what is used to implement the overlays. The gateway could be a physical router or a software router; it is used to exit the data center, to go from the virtual to the physical world, or to go from the data center out to the internet. That's the role of the gateway, and that's all we need to know to set the context for the demo and the hands-on exercises. This is another nice slide that shows the virtual networks you are instantiating and the virtual machines you spawn inside these virtual networks. What the logical picture looks like is represented at the top, and how that logical picture actually gets instantiated in the physical world is shown in the bottom diagram. In the top diagram, there are three different virtual networks with some virtual machines hanging off of them, and then there are two services: a firewall service between the green and the blue network, and a DPI service between the blue and the yellow network. These services themselves are also instantiated inside virtual machines. So these are virtualized services, also called network function virtualization, deployed inside virtual machines. You can see all of these virtual machines; there are two compute nodes in this diagram.
All of these virtual machines are spawned on these two compute nodes. Virtual machines belonging to the same network, green VM one and green VM three, you can see, are sitting on two different compute nodes, and a tunnel is established to extend the network from the left host to the right host. The tunnels are used to extend the network, to create the concept of the virtual network. That's the essential idea to take away from this slide. So we saw that demo, and now for the hands-on, this is roughly the agenda of what we'll be going over. The fundamental building block is the virtual network, so the first thing we will see in the hands-on is how you create a virtual network. Then you spawn virtual machines and put the vNICs of those virtual machines in the virtual network you created; that's the next step. These virtual machines will hang off of those virtual networks. The other fundamental building block of network virtualization is the idea of isolation. You will see that within a virtual network, the virtual machines can talk among themselves, but traffic cannot exit the virtual network, and traffic from other virtual networks cannot enter it. In order to allow traffic to exit and enter the virtual network, you have to apply policy, so we will see the creation of a policy and then the attachment of that policy. Now, this policy can be a stateless, ACL-like policy, but we also have richer, more stateful policies that can be applied, and that is what is called service insertion. We will see one example of service insertion; we saw one in Chloe's demo, with DDoS Secure being inserted as a service. In our hands-on exercise, we will see a NAT gateway being inserted.
So Juniper has this virtualized service instance called the Junos Firefly, and we've already pre-programmed this Firefly to perform the NAT function. That image has been added to the Glance repository and is available in your OpenStack instances running in the OpenLab. And so we will take this NAT instance and we will instantiate it between two networks. We will also see the concept of floating IP and how that gets used. The third exercise: we can do the DDoS Secure ourselves. We saw Chloe doing the demonstration, and you can do the exact same demo for yourself; we have the instructions in your handout. And the fourth exercise, finally, is the rich amount of debug and analytics information. How do you get access to that? How do you attach packet analyzers? How do you attach Wireshark to your virtual networks or your virtual machines? The instructions for all these exercises are available in your handouts. So in the first demo, where we will insert the NAT gateway, essentially, let's say there is an enterprise network. When you create a virtual network, you will assign some private IP block to it, and clearly it won't have routes to reach the external internet. But there is another network, a public network which we have pre-created, which has routes to reach the internet. And therefore, for the VMs spawned in the enterprise network, with its private block of IP addresses, to be able to reach the internet, you have to apply the NAT service, because routes to the internet are available only for the public network. So if you apply the NAT service in between the enterprise and the public network, then the VMs in the enterprise network will be able to reach the internet. That's essentially the demo we will do. The public network already exists; we've already created it, and it has routes to the external internet.
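Since the whole exercise hinges on source NAT, here is a minimal conceptual sketch of what a source-NAT table does. This is a toy model with made-up addresses and port numbers, not the Firefly implementation: private source addresses are rewritten to the public address, with a per-flow port mapping so return traffic can be matched back.

```python
def make_snat(public_ip):
    """Build a toy source-NAT function that rewrites outgoing packet headers."""
    table = {}            # (private src ip, src port) -> allocated public port
    next_port = [20000]   # hypothetical starting point for allocated ports

    def translate(src_ip, src_port, dst_ip, dst_port):
        key = (src_ip, src_port)
        if key not in table:
            table[key] = next_port[0]   # allocate a new public port per flow
            next_port[0] += 1
        # the packet leaves with the public source address and mapped port
        return (public_ip, table[key], dst_ip, dst_port)

    return translate

snat = make_snat("198.51.100.6")   # example public address (TEST-NET-2)
print(snat("192.168.11.3", 40000, "8.8.8.8", 53))
# -> ('198.51.100.6', 20000, '8.8.8.8', 53)
print(snat("192.168.11.3", 40000, "8.8.8.8", 53))  # same flow reuses its mapping
```

The real service also translates return traffic and ages out idle mappings; the sketch only shows the outbound rewrite.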
The enterprise network is what we will create. We will spawn one virtual machine inside the enterprise network. We will then apply policy, and then we will insert the Firefly NAT service between these two virtual networks. So all traffic going from the enterprise network will be forced through the NAT service, the NAT will be applied, and by virtue of that NAT service, access to the internet will be enabled. So that's the demo, that's the hands-on we are going to do. As I said, the public network already exists, and the first step is to create the enterprise network. In order to create the enterprise network, you can use either the OpenStack Horizon UI or the Contrail Web UI. So let me introduce you to the two UIs first. In Chloe's demo we also saw both UIs, but for the purpose of this demo, everything to do with spawning of virtual machines we will do from the OpenStack Horizon UI, and everything to do with networking, whether it's creation of networks or application of policy or spawning of service instances, we will do from the Contrail UI. So there are three data centers in the lab that we've created for the purpose of this exercise. I'm connected to one of them, 10.10.11.11, and that's my Horizon dashboard. And then on the same IP address at port 8080, you can connect to the Contrail UI. Now, everybody is familiar with the OpenStack UI, so let me spend half a minute introducing the Contrail UI. There are four tabs in the Contrail UI. There is a Monitor tab: the operational aspects of a Contrail controller can be seen from the Monitor tab. The interesting thing to note here is that there are five servers, and among these five servers there are two vRouters. When I say there are two vRouters, it means there are two compute nodes, and within these compute nodes, two vRouters have been instantiated.
And so on these two compute nodes, all the virtual machines are going to be spawned. And then you saw the Contrail controller comprises three different node types: config, control, and analytics. So that's my controller running on the remaining three servers. Once again, I have five servers in this data center. Two of them are designated as compute nodes, where vRouters are instantiated, and across the remaining three I've distributed my control, config, and analytics nodes to run my controller. So the idea is that the controller is logically centralized and physically distributed. And for scale, redundancy, and high availability, you can instantiate each component, whether it's control, config, or analytics, on as many nodes as you want. So here I have instantiated two control nodes for the purpose of redundancy. And for each of these components and nodes, you can drill down, and it shows you some information about the CPU consumption and the memory. And the color and the size of the bubble have their obvious meaning: if it's green, it means it's all good; if it's pink or red, something is off. So that's the Monitor dashboard. And similarly, you can look at the networks that you've already spawned. So it lists all the networks, and Chloe just created this DDoS attacker net. Look at this nice Web 2.0 UI, which shows you exactly what Chloe did: she created a DDoS attacker network and a DDoS target network, and then she launched, or inserted, the DDoS Secure service instance. And so it shows you a nice pictorial. This is a simple topology, but you can have more complex topologies, and you can easily visualize them in this Monitor UI. Similarly, you have the debug: you can attach packet analyzers from here to your virtual networks.
And then you can inspect the traffic that's traversing your virtual networks. So that's the Monitor tab. Similarly, you have the Configure tab. If you are familiar with the Junos router, there is a config mode and there is an operational mode; the Monitor tab corresponds to the operational mode, and this is the config mode of a typical router. Most of the work we will be doing today will be underneath the Networking and the Services sub-tabs. So you can look at the networks and you can spawn your network over here. So let everyone browse to this sub-tab, the Networks sub-tab under Networking in the Configure tab of the Contrail UI. And then you can create the enterprise network and give it some suffix, maybe your sandbox number, so that you can easily identify your enterprise network. I'm going to call it enterprise-demo. For now, when we are spawning the virtual network, we leave the policy blank, and give it some private IP block. It populates the gateway address. Add that IP block and that's it; hit Save. So as you can see, my enterprise-demo virtual network was created. I'll give a couple of minutes for everyone to spawn your enterprise networks. You give it a slash 24. It's the password, that's the problem; there's an administrator, I'll ask him to increase the number of sessions. So here's what you do: you go to the Configure tab. Under Configure, you browse to Networking. Under Networking, there's Networks. And then on the top right-hand side, there's a Create button. For now, when you create the virtual network, leave the policies blank. Give it some IP block like 192.168.11.0/24. Yes, I know some of you are having problems connecting to the lab; I've already communicated with the administrator to increase the number of sessions for those who are having problems.
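The "populates the gateway address" step can be illustrated with Python's `ipaddress` module. The assumption here (common, but not stated in the session) is that the gateway defaults to the first usable host of the block you enter:

```python
import ipaddress

# The private block typed into the create-network form
block = ipaddress.ip_network("192.168.11.0/24")

# First usable address in the subnet, a common default for the gateway
gateway = next(block.hosts())

print(block.num_addresses, gateway)   # 256 192.168.11.1
```

So for 192.168.11.0/24 you would expect the UI to fill in 192.168.11.1 as the gateway.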
But in the interest of time, we'll have to make progress; for those people who are already connected, we'll make some progress. Exactly, so what we've done is, because there are multiple sandboxes, about 20 sandboxes in each of the DCs, we've split the public block of IP addresses: instead of one /24, we've split that /24 into /29s so that each sandbox has its own public network. It's the same thing; instead of 'public', it's called sandbox-public. So the next step is we go to the Horizon UI and spawn an Ubuntu virtual machine inside your enterprise network. We created the enterprise network, and now we're going to spawn the Ubuntu virtual machine inside this virtual network. So we go to the Horizon UI; make sure you're logged into the same sandbox. The sandbox in Contrail maps to a project or a tenant in the OpenStack UI. So I'm logged into the demo project, and I'll launch an instance from an image. We've pre-populated Glance with an Ubuntu image. I'll call it enterprise-ubuntu-demo. Now for the flavor, I suggest not using tiny and using at least a small, but keep it to a small. And then you have to select a network to put the first vNIC of the virtual machine inside, so select the enterprise network that you just created. What that does is it puts the first NIC inside the enterprise-demo virtual network, and then you simply launch the VM. So let's all do that second step: launch an Ubuntu virtual machine from the Horizon UI and select the enterprise virtual network you just created to put the first vNIC inside. The login information is here. For spawning the Ubuntu virtual machine, use the Ubuntu image, and once you create the virtual machine, you can log into the console. The login for that is just ubuntu / ubuntu. If you're using a different image, the login information is mentioned on your handout.
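The sandbox carve-up described above, one public /24 split into per-sandbox /29s, is easy to check with `ipaddress`. The prefix below is an illustrative example (TEST-NET-3), not the lab's actual public block:

```python
import ipaddress

# One public /24, as in the lab, split into per-sandbox /29 blocks
public = ipaddress.ip_network("203.0.113.0/24")
sandboxes = list(public.subnets(new_prefix=29))

print(len(sandboxes))             # 32 blocks, plenty for ~20 sandboxes per DC
print(sandboxes[0])               # 203.0.113.0/29
print(sandboxes[0].num_addresses) # 8 addresses per sandbox block
```

A /29 leaves only six usable host addresses per sandbox once network and broadcast addresses are excluded, which is enough for a handful of floating IPs each.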
No, use a small. And then you can do an ifconfig and you will see that the eth0 interface has an IP address from the subnet block that you allocated for the virtual network. And we will try to ping Google's public DNS server, and clearly it will not have access; we will not be able to ping 8.8.8.8. So the next step: now we have two separate, isolated virtual networks. There is a public network and there is an enterprise network. The enterprise network has one virtual machine, and the next thing we need to do is apply policy in order to connect these virtual networks. So the next step is to create a policy. Just delete the VM and try spawning another one. So some users do have access to the lab, are not having problems, and are moving ahead. In the interest of time, let me also show what the next steps are on the screen. The next step, because these two networks are isolated, is to create a policy and then apply that policy to the two networks. So that's what I'm going to do. As you can see in the diagram, you first create the policy, and then the next step is to attach the policy to the two networks. So I'm going to do these two steps in the Contrail UI: create the policy and then attach it. I go to my Contrail UI, go to the Configure tab again, and under Networking there is now a Policies sub-tab. I'm going to name it Enterprise Internet Policy and then save it, with the source network enterprise-demo and the destination network demo-public. So this simply creates the policy, and you can see there is a second column called Associated Networks, which is right now blank. So you might wonder why there is a two-step process: first you create the policy and then you attach the policy. This is because the policies may be created by one set of security personnel, and then the application of the policy may be done by another set of administrators.
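The two-step workflow above, create the policy, then attach it to both networks, can be modeled in a few lines of Python. This is a conceptual sketch with illustrative names, not the Contrail API; the point is that a policy has no effect until it is attached to both endpoints:

```python
policies = {}   # policy name -> (source network, destination network)
attached = {}   # network name -> list of attached policy names

def create_policy(name, src_net, dst_net):
    """Step 1: define the policy (e.g. by the security team)."""
    policies[name] = (src_net, dst_net)

def attach_policy(network, name):
    """Step 2: associate the policy with a network (e.g. by an administrator)."""
    attached.setdefault(network, []).append(name)

def connected(net_a, net_b):
    """Two networks exchange traffic only via a shared policy naming both."""
    for p in attached.get(net_a, []):
        if p in attached.get(net_b, []) and set(policies[p]) == {net_a, net_b}:
            return True
    return False

create_policy("enterprise-internet", "enterprise-demo", "demo-public")
print(connected("enterprise-demo", "demo-public"))   # False: created, not attached
attach_policy("enterprise-demo", "enterprise-internet")
attach_policy("demo-public", "enterprise-internet")
print(connected("enterprise-demo", "demo-public"))   # True: attached to both
```

This mirrors the Associated Networks column in the UI: right after creation it is blank, and connectivity appears only once the policy is attached on both sides.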
So that's why there is a clean separation of who creates the policies and who uses those policies and applies them to the networks. So we've created this policy, and right now it's not associated with any networks; you can see the second column is blank. So you go to the networks, and remember when we created the network we left the policy field blank. We go back to the network, edit the network, select the policy that you just created, and apply that policy. Similarly, you go to demo-public. We had applied the firewall policy; I'll now take out the firewall policy and simply apply the Enterprise Internet Policy that I just created. So I've now applied the policy on both networks, and thus I have connected the enterprise network to the public network. But this still is not going to apply NAT to traffic going from the enterprise side if my ping packets are directed to 8.8.8.8. This policy is not going to apply NAT yet. I have to spawn the NAT service instance and then insert that service instance into this policy. So before I do that, let me give a few minutes for everyone to create the policy and then attach the policy to the two virtual networks to connect them. I can see the network that I have created in Contrail. Really? Yeah. In OpenStack? Are you on sandbox one? I created the network in Contrail and I created the policy in Contrail, but I cannot see it there; I only see it in OpenStack. Which is weird. Maybe something. No, it might be that some of the daemons need to be restarted on the server. I was able to access the changes and now I'm on the server. The weird thing is I'm not seeing any of the problems. Okay. So the thing is, both are working off of the same database, so you should be able to see it from both UIs. So there was a question asking me whether there was a map of the topology I've just created.
So you just saw what I did: I created the enterprise virtual network and then applied a policy to connect the enterprise network to the public network. So let's see how that shows up, how it is pictorially represented. Because that's a monitoring kind of functionality, I have to go to the Monitor tab, and under the Monitor tab I'll go to Networking, to the list of networks, and I'll search for my enterprise network which I created in the demo sandbox. enterprise-demo is what I created, and you can see enterprise-demo is connected to demo-public simply by a direct policy; right now there is no service instance between the two virtual networks. So this is the map there was a question about: you go to the Monitor UI, under Networking go to the list of networks, select the network within the sandbox that you are using, and you'll also see the traffic statistics at the bottom. So I'll go back to my slides and go to the next step: creating the policy. The dropdown will list the networks from all the sandboxes, but the networks in your sandbox will not have a fully qualified name; it will just be the name of the virtual network. For all other sandboxes it has a fully qualified name, so it will have the prefix of the sandbox and then the virtual network. So the networks in your own sandbox will not have the sandbox prefix. Yeah, because for our enterprise demo there is no enterprise demo. I found it in my sandbox. Yeah, I found it in my sandbox. So the next step is actually inserting the NAT service, but in order to insert the NAT service, I have to first spawn it: I have to spawn a virtual machine, run the NAT service inside that virtual machine, and then I can insert it into this policy, so that all traffic going from the enterprise network will be subjected to the NAT. In order to do that, there's a two or three step process.
The first thing you have to do is create a template of the service that you are going to spawn. When I say a template of the service, it simply identifies what image is going to be used to spawn the service and some other parameters, like what mode the service is in, whether it's an L2 service or an L3 service, and what interfaces the service is going to have. So you create a service template before spawning an instance off of that template. After you create the service template, you instantiate an instance from it. So the next step is to create a service instance: to do that, you use the service template and then you specify which networks the left and right interfaces are going to be connected to, and if you want the service to also be managed, you will put the management interface into some network as well. And then finally you will edit the policy and insert the service inside the policy. So that's what you will do for the service insertion workflow. I'm going to do these steps: create the service template, create a service instance, and then include it inside a policy. I'll show it in my browser. So I go back to the Configure UI, I go to Networking, I go to the Services section; this time I create a service template. For my NAT service a template is already created, but in any case I'll just create a new one; I'll call it nat-demo-template.
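The template-versus-instance split described above can be summarized with a small data model. The field names here are assumptions for the sketch, not the Contrail schema: the template says what the service is (image, mode, interface set), and the instance says where it plugs in (which networks the interfaces attach to).

```python
from dataclasses import dataclass, field

@dataclass
class ServiceTemplate:
    """What the service is: reusable across many instances."""
    name: str
    image: str    # Glance image used to spawn the service VM
    mode: str     # e.g. "in-network" for a NAT that rewrites packets
    interfaces: list = field(default_factory=lambda: ["management", "left", "right"])

@dataclass
class ServiceInstance:
    """Where the service plugs in: one concrete deployment of a template."""
    name: str
    template: ServiceTemplate
    left_network: str     # network the left interface attaches to
    right_network: str    # network the right interface attaches to

tmpl = ServiceTemplate("nat-demo-template", "nat-service", "in-network")
inst = ServiceInstance("internet-access", tmpl, "enterprise-demo", "demo-public")
print(inst.template.mode, inst.left_network, "->", inst.right_network)
```

The separation is the same reason policies are created and attached in two steps: one team can publish approved templates, and another team instantiates them between specific networks.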
Now, the NAT service is actually going to modify packets; it's going to do the source NAT functionality, and therefore it's an in-network NAT kind of service, so that's the service mode you will select. For the image, you'll simply use the nat-service image. And then there are going to be three interfaces: the service is going to be connected to two networks, the enterprise network and the public network, so it needs two interfaces, a left and a right, and if the service itself needs to be managed, then you will also need a management interface. So let's have three interfaces in the template, and then you save the template. So I just created the nat-demo-template. Once you create the template, you create an instance from it. So you create an instance called NAT instance, or I could give it a different name; I'll call it an internet-access instance, using the nat-demo-template that I just created. Now, in this case the NAT service is already pre-configured, and there is zero configuration or zero management I'm going to have to do to this NAT service, so I leave the management interface auto-configured. I'll put the left interface in my enterprise network, which is my enterprise-demo network, and the right interface I'll put on my public network, so I select demo-public, and then I save it. When I save it, what's going to happen is a virtual machine is going to be spawned with the NAT service image, and it's going to have two interfaces, the left and the right, in the two networks. So while the VM is spawning: I know a few folks are ahead of the session, so if you're done with exercise one, please do raise your hands and I'll come and pick up your business card or name for the Apple iPad drawing. And we are kind of running out of time, so we'll wrap it up in five minutes; we are almost done with the demo. As soon as the instance is spawned, merely spawning the instance and putting it in the two virtual networks does not route traffic
through that service instance. You have to actually include that service instance inside a policy, the policy that we just created. We will go back to the policy, edit it, and then include the application of the NAT service inside that policy. So I'll go to 10.10.11.11, I go to the Policies, where I created this Enterprise Internet Policy; I edit the policy and say Apply Service. It shows me the list of services that are available to be included, so you include the internet-access service that you just created and save it in the policy. The policy is already applied to the two networks, so now if you go back to your Ubuntu virtual machine and try to ping 8.8.8.8, the ping packets will be subjected to the NAT service, source NAT will be applied, and that is how access to the internet is enabled. I think that's pretty much all we wanted to cover in the hands-on, so we will come around and take your business cards for the Apple iPad drawing. I know some of you had connectivity issues; I just wanted to let you know that the lab is available for the next 10 days, so save the sandbox information and you can go back home and try the rest of the exercises at your leisure. Yes? Oh, absolutely. So here's what I did; there are three things: I created a service template, then I instantiated a service instance using that template, and then embedded the service instance inside a policy. To do that, I went back to my Policies tab, edited the policy, and said Apply Service; there's an Apply Service checkbox, and then there is a dropdown that allows me to select the services I want to apply. So, one more minute: if anyone else wants to enter the Apple iPad drawing, please raise your hand. This is 314. What we'll do is, for people who have written their names on white paper, we'll put it on one of our business cards and put a number behind them, so that when you draw there is no partiality. Can you give me a few business cards? So we're going to fully randomize, fully randomize here, try to be as fair as
we can. One last thing: you'll find folks from Juniper Contrail at the summit, so if you have further questions about the product or about any of the features, please feel free to contact any of us; you saw our team members in the room. So while the draw is being done, let me take this opportunity to also flash a few slides about DevStack. Okay, I got the signal to cut off, but I'll quickly flash the slide about DevStack and you can take a look at it. You can browse to that URL, download the DevStack which includes OpenContrail, and run it inside an Ubuntu VM or on your laptop. And we'll be making these slides available; there's this page we've created on Etherpad, it's called Contrail-DevStack. Who wants to volunteer? Oh, just a second. Oh, it's from Cisco: Julie Ann Connery. We're pretty generous.