Okay. So today we are going to talk about network connectivity in a hybrid cloud use case. I'm Vinit Jain, and these are my colleagues, John Kaspersky and Nikol Gupta. I'll start off with the introduction and describe the big picture. John will then describe the network connectivity in detail. And depending on how much time is left, we'll have Nikol cover an aspect of Keystone as well, because when we did this hybrid cloud there were some key aspects we had to take into consideration in order to share the Keystone. So by hybrid cloud, I mean the ability to consume, interconnect, and orchestrate across different types of cloud and also traditional IT. Throughout the multiple sessions we have seen why this is important: you can have your workloads on the best-fit infrastructure, achieve the right balance of risk and speed, and so forth. Various surveys project how hybrid clouds will get adopted at different points in the future. One particular data point is from Gartner, which said that by 2017 half of enterprises will be using full-blown hybrid clouds. So this is obviously an important use case. Diving into the hybrid cloud architecture we support: this is a hybrid cloud between two OpenStack regions. On the left is the off-prem region, which is hosted in our SoftLayer data centers and is based on IBM Cloud OpenStack Services, a managed private cloud that we offer. On the right is the on-prem region in the customer's data center, deployed with a product we offer that makes deployment of OpenStack easy, called IBM Cloud Manager. The key aspects of the use case are that it consists of two regions, on-prem and off-prem. These two are connected securely, they share a common Keystone service, and a common Horizon portal can be used to manage the two regions.
And the deployed VMs have visibility and can communicate across the two regions. So this is the case where you can have tiers of your application deployed across the two regions. Let me describe a little bit about the two stacks that we have in the two regions. First, the off-prem offering that we have. We refer to it as OpenStack dedicated: it's a managed private cloud based on OpenStack. It's a turnkey solution, deployed per client on dedicated infrastructure, so it does not share compute servers or storage; we build it out of dedicated hardware. And it is OpenStack based, so you get access to the OpenStack APIs and Horizon cloud management. It is set up based on the initial size that the client requests from us, and then it's a monthly subscription model; they get billed monthly for as long as they need it, and of course they can grow or shrink it based on their requirements. And this is a managed cloud, an off-prem managed cloud. Clients consume this cloud; IBM manages the OpenStack, the network gateways, the security, the hardware, the storage. All those things are things that we keep running, and we provide the SLA, and of course 24-7 support. A little bit about what is under the hood in this environment: today we are based on Juno. As I said, it comes with the Horizon web console. The hypervisor is KVM. And we do provide a virtualized network in this environment, which allows you to create and delete networks, routers, and policies using Neutron APIs. There is a load balancer as a service offered as part of this cloud. The storage is based on Ceph, and it's a storage cluster dedicated for this cloud. And of course, using the Glance services you can bring your own images and licenses.
Typically corporations have their own license agreements with their favorite OS provider, and they bring their own license. And of course you get Keystone and Ceilometer to do user management and any kind of chargeback in this environment. So that is our off-prem environment in this particular use case. Then a little bit about the customer's on-prem environment, which is in their data center. This is deployed using IBM Cloud Manager, which is software that we offer. Using this piece of software one can pretty quickly get OpenStack running in their data center and get the usual things one looks for in a cloud, like self-service provisioning of VMs, usage metering, and OpenStack APIs. One other thing that we offer here, beyond the x86 support, is support for the other hypervisors that IBM offers. And then we have some optimizations for VM placement and so forth. And of course this is a supported product, so should you run into problems using it, it is supported. So what's included in this software package? It comes with a Chef server that is used to automate the installation, and it allows a lot of flexibility in terms of the topologies that you can install. The way it works is you initially install your Chef deployment server, and from there it takes over, depending on the topologies and the configuration you want, from very simple ones to large, multi-region configurations. All of those are possible using this. So now we have covered, in this hybrid use case, how we get to the off-prem cloud and the on-prem cloud. Now let's take a look at what we mean by this hybrid cloud. There are two OpenStack regions that are set up using the respective methods just described. In this case we do provide a shared Keystone, but going forward, as Keystone federation becomes available,
that is also one of the use cases to look at. Then the on-prem Horizon can be used to manage both regions: you just select which region you want to operate in and do the operation that way. And as part of this hybrid cloud, VMs in each region can talk to each other. So this is the case where applications can be deployed in a split manner across regions, or you can even move them between regions. The VPNs can be route-based or policy-based, and they're basically IPsec site-to-site VPNs. In our particular example we used Vyatta-based gateways, but these concepts should apply to any standard VPN device. So with that I'll hand it over to John, who will get into more details on how we establish the network connectivity. Okay, thank you. The network connectivity that we have in the hybrid cloud scenario here covers two items. One, we want the region-to-region network connectivity between region one and region two, obviously. And we also want VM-to-VM connectivity, so that VMs deployed in region one can communicate with VMs deployed in region two. Here's a more detailed chart showing the two regions. In this example we have just a single controller and a single compute node shown, but you could have multiple controllers, a high availability scenario, multiple compute nodes; for simplicity's sake we just show a single one here. We have a single VM shown on each side, VM one and VM two, and this is what we want to get communicating back and forth. On the left side we have a red private network on the off-site region, and on the right we have a yellow GRE network. On the GRE network, what we're actually going to communicate with is the floating IP address that's assigned to VM two; we're not going to talk directly to the IP addresses behind the floating IP.
If you look down on the chart a little bit, we have the VPN connection set up between the two systems. It's an IPsec VPN, and there are two public IP addresses on the two sides of this connection. For the sake of this presentation we use 23.23.23.23 and 24.24.24.24 as the two public IP addresses, but obviously you wouldn't be using those addresses as such. Beneath that is the virtual tunnel interface; there are two additional addresses that run the tunnel between the two endpoints. Finally, the last thing on this chart I'd like to point out is the routing information at the bottom left and bottom right. Just setting up the VPN between the two sites isn't enough. You also have to set up routing tables: if you run a shared Keystone environment like we were doing here, you want your on-premise site to be able to access the shared Keystone on the off-premise side, so you need to update the routing tables to allow that to happen. You also need to update routing tables so that VM one can talk to VM two and you have connectivity between those two VM-deployed networks. There are basically five steps through this pitch that we'll talk about to get this set up. One, install OpenStack on the off-premise side. Two, configure the VPN on-premise and off-premise so they can communicate over the network. Three, set up the routing tables and routing configuration. Four, install OpenStack on the on-premise side. And five, configure Neutron security groups, because by default your Neutron security groups will block incoming traffic. We're not going to talk about step one or step four here; we assume you can install OpenStack however you want to install it. We're just going to focus on items two, three, and five. First off, a slight disclaimer on configuring the VPN setup. There are some steps in here, and we'll talk about what needs to be done.
But in these steps, we're using public IP addresses between the two endpoints. If you go back to your company and try to do this, you may have firewall rules and security settings in your environment that prevent you from using a public IP address. You will have to work with your network administration to allow that and figure out the best way through it. So don't take these instructions as-is and go back and implement them. In the bottom half of this chart there are two sections. One is the information you'll need to provide to the off-premise site about your side of the VPN, and the other is the information they need to provide to you so you can set up the VPN correctly. We'll go through those in more detail on the next page. On this chart, there's a whole set of values on the far left side, all highlighted. This is sort of the prereq stuff: before you even think about doing the VPN, gather this information so that you can set up the VPN correctly. The first four items deal with the off-premise side. You need to know the public IP address for that gateway. You need to know what virtual tunnel interface the gateway is going to be using. You need to know the IP address that OpenStack is running on off-premise, so you can configure, in our case, a shared Keystone between the two. You need to know what private networks are being supported by the off-premise side, so you can set the routing tables correctly to reach those off-premise networks. Setting up the VPN requires a shared secret between the two networks; the shared secret in this case is "secret". Very complex. On the on-premise side, you have a public IP address, you have your private network that you're going to be providing to the off-premise side, and you have subnets that you're going to be routing to.
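The prerequisite values just described are worth collecting up front. As a sketch, here is the kind of worksheet you might keep as shell variables; all of the addresses, the interface name, and the secret are placeholder values from the example, not real ones:

```shell
#!/bin/sh
# Hypothetical worksheet of values to gather before configuring the VPN.
# Every value here is a placeholder from the talk's example environment.

# Off-premise side (provided to you by the off-prem operator):
OFFPREM_PUBLIC_IP="23.23.23.23"       # off-premise gateway public IP
OFFPREM_VTI="vti0"                    # virtual tunnel interface they will use
OFFPREM_KEYSTONE_IP="10.140.10.5"     # where the shared Keystone runs off-prem
OFFPREM_PRIVATE_NET="10.140.0.0/16"   # private networks behind their gateway

# On-premise side (you provide this to them):
ONPREM_PUBLIC_IP="24.24.24.24"        # on-premise gateway public IP
ONPREM_PRIVATE_NET="192.168.0.0/24"   # private network you expose to off-prem

# Agreed between both sides:
PRESHARED_SECRET="secret"             # IKE pre-shared key (use something stronger)

echo "peer=$OFFPREM_PUBLIC_IP local=$ONPREM_PUBLIC_IP tunnel=$OFFPREM_VTI"
```

Having these in one place makes the handshake with the off-premise organization, and the later routing steps, much less error-prone.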
The very last statement there says Vyatta gateway. The examples on this page and the couple of pages after it are exactly the Vyatta statements that you'll need to run on a Vyatta gateway to configure this. If you're not running Vyatta, the instructions here are still useful because they give you a blueprint of what you need to do. You still need to create the virtual tunnel interface in step one, and you still need to set up the VPN; follow these as the basic steps and apply them to whatever commands are appropriate for whatever gateway device you have. Look at the right side, step one, creating the virtual tunnel interface. You need to have the tunnel set up: basically, you specify what interface you're going to have and what your local tunnel address is going to be. Step two is setting up the basics for the VPN. You have Internet Key Exchange (IKE) and Encapsulating Security Payload (ESP). These settings need to be configured in advance, and they need to be consistent with whatever your off-premise side is going to be using; you have to coordinate with them on what you're going to use and make sure they're consistent between the two. Step three is finally configuring the VPN itself, the site-to-site connection. You need to know your public IP address and their public IP address; set those up, set the Internet Key Exchange, set the Encapsulating Security Payload, set the secret you're going to use, set the virtual tunnel interface. All the Vyatta commands that do that are listed here, and if you Google for other platforms, you can see the commands they have. They're not going to be exactly like this, but the same type of steps will need to be done. Once steps one through four are done and you've saved the configuration, you'll have your local VPN gateway all set up, but you're only half done at that point. You still have to set up the off-premise side.
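The local-side steps just described look roughly like this in Vyatta configuration mode. This is a sketch, not the exact slide contents: the interface names, tunnel addresses, IKE/ESP group names, and cipher choices are assumptions, and the peer/local addresses and secret are the placeholder values from the example.

```shell
# Sketch of the Vyatta configuration for steps 1-3 (placeholder values).
configure

# Step 1: create the virtual tunnel interface with the local tunnel address
set interfaces vti vti0 address 10.254.254.2/30

# Step 2: IKE and ESP groups -- these must match what the off-premise side uses
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec ike-group IKE-HYBRID proposal 1 encryption aes256
set vpn ipsec ike-group IKE-HYBRID proposal 1 hash sha1
set vpn ipsec esp-group ESP-HYBRID proposal 1 encryption aes256
set vpn ipsec esp-group ESP-HYBRID proposal 1 hash sha1

# Step 3: the site-to-site connection itself, bound to the tunnel interface
set vpn ipsec site-to-site peer 23.23.23.23 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 23.23.23.23 authentication pre-shared-secret secret
set vpn ipsec site-to-site peer 23.23.23.23 ike-group IKE-HYBRID
set vpn ipsec site-to-site peer 23.23.23.23 local-address 24.24.24.24
set vpn ipsec site-to-site peer 23.23.23.23 vti bind vti0
set vpn ipsec site-to-site peer 23.23.23.23 vti esp-group ESP-HYBRID

# Step 4: commit and save the configuration
commit
save
```

On another vendor's gateway the commands differ, but the same objects (tunnel interface, IKE proposal, ESP proposal, peer definition) will need to be created.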
When dealing with the off-premise side, typically you don't have access to go in and modify the gateway configuration of the off-premise site, because that's how you're reaching the off-premise; they're not going to allow you to go in and start changing things. So you have to work with that off-premise organization to set the appropriate parameters. You'll be working with them throughout this to set the internal settings and make sure they're consistent. Some of the values that need to be set are listed in the middle there, along with some of the on-premise values you'll need to provide them so they can make sure everything is consistent. One thing I will point out, as sort of a plug for what was mentioned earlier: IBM Cloud OpenStack Services allows you to configure that off-premise gateway automatically using Horizon. They have a Horizon plug-in, and it configures the Vyatta gateway under the covers. There's a screenshot here; there are only two panels that you really go through. On one, you define the virtual tunnel interface and specify the IP addresses, at the very bottom right here. And on the second panel, you define the IPsec connection: you specify the peer address, their public IP address, the Internet Key Exchange, the Encapsulating Security Payload. All of this can be done just through a Horizon plug-in. Once you have the VPN configured, either through IBM's way or through whatever method you're using with whatever your off-premise is, you then need to think about the routing information. You have two regions, you have IP addresses in both regions, and you need to build the routing information between the two. There are two steps for the routing. First, step one here, you need to set up a route to the OpenStack that's running on the off-premise site.
If you want to do a shared Keystone environment, you need to specify: how do I get to that shared Keystone? Where is it located? So you need to set a host route up on your private gateway that says, okay, in order to reach that destination IP address, I need to go across this virtual tunnel interface, and then it knows how to reach that IP address. So you need to set up a default route or a host route for that case. The second item here is setting up connectivity for the virtual-machine-to-virtual-machine communication; you have to set up routing to allow that to happen as well. Here we're setting up a subnet route on that private gateway that says, okay, in order to reach VM one that's deployed over on the off-premise side, send it through this route, which routes it through the virtual tunnel interface and out to the other side. Steps four and five at the bottom here deal with setting up the routes on the controller and compute nodes that you have locally. This step may or may not be needed, depending on how your controller and compute nodes are actually deployed. If your controller and compute nodes are set up with a default route that sends all of their traffic back to the private gateway, then all the traffic is already going there and there's no need to route it someplace else. But if they have multiple interfaces and a default route out a different public interface or someplace else, then you'll need to specify routes as listed in four and five here, to make sure the traffic goes to your private gateway and then out the virtual tunnel interface to the off-premise side. So steps four and five are optional, depending on what your setup really is. Again, I'll note that the statements under one, two, four, and five here are the actual Vyatta commands you'd use to set these routes, and they'll be different in your environment.
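As a sketch of those routing steps, under the assumption that the off-prem private networks are 10.140.0.0/16, the shared Keystone lives at 10.140.10.5, and the on-prem private gateway is 192.168.0.1 (all placeholder values), the Vyatta static routes and the optional node-local route might look like this:

```shell
# Steps 1-2, on the Vyatta private gateway (placeholder addresses):
configure

# Step 1: host route so the on-prem side can reach the shared Keystone
set protocols static interface-route 10.140.10.5/32 next-hop-interface vti0

# Step 2: subnet route so on-prem VMs can reach the off-prem VM networks
set protocols static interface-route 10.140.0.0/16 next-hop-interface vti0

commit
save

# Steps 4-5, on controller/compute nodes -- only needed if their default
# route does NOT already point at the private gateway:
ip route add 10.140.0.0/16 via 192.168.0.1
```

The host route covers the Keystone API traffic; the subnet route covers the VM-to-VM traffic. If the nodes already default-route through the private gateway, the last command is unnecessary.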
Once you have the routes set up, the next thing you want to do is test them, to see if they work and whether you're able to get to the off-premise location. The obvious thing to use is ping. But sometimes the off-premise site might not allow ping to work; they may have rules in place that block ICMP traffic. If you're communicating with a SoftLayer site, which we're doing here, they disable ping traffic; you cannot ping their destination. So we had to rely on the second item under option one: using the curl command, connecting to the HTTP site, connecting to the Horizon website, and getting the HTTP response back. If you've got this all set up and you think everything's supposed to be working, your VPN's up and everything, but you're not getting a response back from the ping, and you're not getting a response back from the HTTP request: in order to debug this, looking at tcpdump output on that virtual tunnel is probably the first place to start. You want to see whether your packets are actually being sent across the tunnel to the off-premise site. If you're not getting any data out of the tunnel, then there's probably something wrong with the local routing setup such that you're not passing the data out of that tunnel, and you want to look back at that. If you are getting data out of the tunnel but not getting any responses back, then chances are the off-premise side has some firewalls in place that are preventing the responses from getting back to you, so you want to work with the off-premise side to figure out why they aren't sending responses back. But again, the first step in all this is making sure the VPN connection is set up correctly, as we did earlier. Once we have the routing set up and the VPN connection set up, we install OpenStack on the on-premise side, we set up the shared Keystone, everything's working fine, and we deploy a VM.
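The reachability check just described, falling back from ping to an HTTP probe because ICMP may be blocked, can be sketched as a small script. The endpoint address is a placeholder for wherever Horizon or Keystone is listening off-premise, and curl is assumed to be installed:

```shell
#!/bin/sh
# Probe the off-premise endpoint over the tunnel. Since ICMP may be blocked
# (e.g. by SoftLayer), check whether we get ANY HTTP status code back.

check_http() {
    # returns success (0) if any HTTP status comes back within the timeout;
    # curl writes "000" when no response at all is received
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1")
    [ "$code" != "000" ]
}

# Placeholder: the shared Keystone / Horizon endpoint on the off-prem side
if check_http "http://10.140.10.5:5000/"; then
    echo "off-premise endpoint reachable"
else
    echo "no response -- check local routing, then watch the tunnel:"
    echo "  tcpdump -ni vti0"
fi
```

If tcpdump shows no packets leaving the tunnel interface, suspect the local routing; if packets leave but nothing comes back, suspect a firewall on the off-premise side.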
We want to get connectivity to the VM, but we still don't have connectivity yet. One common issue in that case is the Neutron security groups. When you deploy something, the Neutron security groups by default will block all traffic to that deployed VM, unless it's coming from another VM deployed in that same region on the same subnet. The security rules need to be updated in order to allow incoming traffic to that deployed VM. The default security groups can be updated using the Neutron CLI commands, and there are two links at the bottom of the page here. They can also be updated using Horizon itself. I tend to find the UI a little easier to understand, so here are a couple of screenshots of how you do it in Horizon; they didn't show up too well. You go into the Access & Security section under the Compute section. In Horizon, you have the options on the left side; for setting security groups, you don't go through Network, you actually go through Compute. Under Compute, go to Access & Security, then Security Groups. Pull up the default security group and manage the rules, and there are four rules listed by default: two inbound rules and two outbound rules, one each for IPv4 traffic and IPv6 traffic. If you look at the very first rule there, it's a little hard to read, but it says allow all inbound IPv4 traffic from any IP protocol on any port, which, hey, that should work. But there's a restriction at the very end that says only do it if the remote security group is default, which means only traffic from VMs deployed with this security group can come in. Since we have two regions here, one of them off-premise, anything coming in from the off-site region wasn't deployed with that security group, so all of its traffic is going to be rejected.
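To let traffic in from the other region, which is not a member of the local default security group, extra ingress rules have to be added. A sketch with the Juno-era neutron CLI, assuming admin credentials are already sourced in the environment and using a placeholder CIDR for the remote region's floating IP range:

```shell
# Allow ICMP (ping) from the remote region into VMs in the default group
neutron security-group-rule-create \
    --direction ingress --protocol icmp \
    --remote-ip-prefix 10.140.0.0/16 default

# Allow SSH (TCP port 22) from the remote region as well
neutron security-group-rule-create \
    --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --remote-ip-prefix 10.140.0.0/16 default
```

Equivalent rules need to be added on both regions so traffic is permitted in each direction.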
And that's why we need to add additional rules for ICMP traffic, for SSH traffic, for whatever traffic you want to allow to your deployed VMs. You need to allow that on both sides so they can communicate. Once that's done, you should have connectivity between the two VMs and be able to ping, SSH, or do whatever you want between the VMs. And that's network connectivity in a quick sweep. Next up, shared Keystone. Thank you, John. So in this environment, since we had a shared Keystone, we had to do some tricks to make sure the on-prem OpenStack cloud could share the Keystone with the off-premise one. And since the off-premise cloud was managed, it had different sets of roles and permissions; there were roles that only an admin could use, which were not available to the customer. So we had to come up with changes to the access roles on the off-premise cloud so that the on-premise cloud users could access it correctly. Basically, the use case was that the on-premise admin users would not have an admin role in the managed cloud. So we ended up creating a new role for the on-premise admins in the shared Keystone, and similarly we added new roles for the on-premise service users. What that allowed us to do in the OpenStack dashboard, as you can see: if you logged in as the on-prem admin and selected region 3, which was the on-premise cloud, you got to see the admin role for yourself, as shown in the top picture there. But if you switched over to region 1, then the admin tab disappeared, so you didn't have access to the admin role in the managed cloud. That was one of the requirements, and that's how we did it. Now, these are a bunch of commands that we ran in Keystone to set that up. The first one is creating all the on-premise users in the managed cloud's Keystone, since it's a shared Keystone.
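The kind of commands being walked through here can be sketched with the Juno-era keystone client. These are not the exact slide commands: the user, tenant, role, and region names, the password, and the URLs are all placeholders.

```shell
# 1. Create the on-prem users in the managed cloud's shared Keystone
keystone user-create --name onprem-admin --pass s3cret

# 2. Separate tenants for the on-prem admin and service users
keystone tenant-create --name onprem-admin-tenant
keystone tenant-create --name onprem-services

# 3. A new role for on-prem admins (distinct from the managed cloud's admin)
keystone role-create --name onprem-admin

# 4. Assign the new role to the on-prem admin user in their tenant
keystone user-role-add --user onprem-admin --role onprem-admin \
    --tenant onprem-admin-tenant

# 5. Register endpoints for the on-prem region so one Horizon sees both,
#    e.g. for the Glance image service (placeholder hostname/region)
keystone endpoint-create --region RegionThree \
    --service-id "$(keystone service-list | awk '/ glance / {print $2}')" \
    --publicurl   http://onprem-controller:9292 \
    --internalurl http://onprem-controller:9292 \
    --adminurl    http://onprem-controller:9292
```

The same endpoint-create step would be repeated for the other on-prem services (Nova, Neutron, and so on) under the on-prem region name.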
And then the second command there is creating the admin and service tenants for the on-prem cloud, so we could separate the users and the services based on where they were coming from. We also added in the different roles: the third command there, create on-prem roles, is creating roles for the different on-prem users. Then we assigned the roles to those users, and finally we created endpoints in Keystone for the on-prem regions. That allowed us to access both regions from a single Horizon dashboard. So those were all the changes to Keystone that we had to do. Thank you. And we had to use this shared Keystone because federation wasn't available at that point; the federation support wasn't developed, and there was no web SSO in Horizon available at that time. So we ended up going the shared Keystone way. With federation, you would still follow the same network connectivity model, but you could try out federation, and then you wouldn't need to create all those roles and the rest; the two Keystones would be distinct from each other. Thank you. Questions? Any questions? Can you speak into the microphone there? So for the VM-to-VM connectivity, you're assuming that all the VMs are going to pick up an address from a floating IP pool to talk to the other side, right? Yes, we're communicating with the floating IP address in this case. Or you're running a flat network, or some network like a provider network where you can get access to the VM directly. But yeah, we're assuming floating IPs. OK. And for the inter-cloud connectivity, you have a blanket rule open for all the floating IP pools; that's how it's going over the IPsec VPN tunnel? Yes, the communication with that floating pool is being sent over that tunnel interface. OK. So you used VPNs across. Did you try, or did you think about, using VXLAN tunnels, given that VXLAN runs over layer three? Yeah.
So we used VPNs because these were two separate cloud regions. And in this particular use case, as a matter of the offering, we provide VPN connectivity regardless of what the other side is; there is no dependency in this case. In this example, the other side was using GRE tunneling, and on the off-prem side we were using virtualized networking based on SDN VE, which is a sort of VXLAN tunneling. So we didn't make any assumptions or dependencies on what the clouds are running as far as networking is concerned. I see. OK. Thank you. Hi. Here, since we have a single Keystone, all the off-site services and users have to authenticate and travel over the VPN. Have you done any performance benchmark, or what's the performance impact between off-premise and on-premise, maybe for authentication, VM creation, or any other OpenStack operation? So the latency does come into the picture, because those flows have to go to Keystone during the API calls. One way around this is to move to PKI tokens; newer releases use PKI tokens, which don't need that many interactions with Keystone. We haven't worked with PKI tokens so far, and there are problems with PKI tokens themselves, so we'll probably think about using federation; I think that would be the route we'll go. But have we done any performance tests on this? We didn't formally do a benchmark or measurements, but what we found from a usability perspective was that it was acceptable. Thank you. Any more questions from the audience? Dynamic routing probably would have been easier. For this case, we were trying to get up and running, and we went with static routes. There weren't a whole lot of endpoints to worry about; we did the one host route and the one subnet route, and that was enough. For a more complicated case, dynamic routing probably would have been a better choice. Also, the two organizations would have to agree to dynamic routing in that case.
That would be an extension. Okay. Thank you for your time.