Hi everyone. I'm here to talk a little bit about connecting external network resources to OpenStack, because that's a need that comes up often. First, a couple of words about where I come from. We run public clouds, basically, in multiple countries, plus some community clouds and private clouds and things like that. So we have lots of different requirements from different customers when it comes to how they want to connect external resources. We also host some physical servers for customers, which means you have to be able to connect the virtual resources to actual physical resources. We mostly run Rocky. We use OpenStack-Ansible on Ubuntu for provisioning OpenStack, and we use Neutron with the ML2 plugin and the Open vSwitch mechanism driver for networking. So what use cases do we have? Of course, there are many different needs for different customers. Some of the connections that are going to be created need to be isolated to specific customers, especially in the public cloud: if a customer needs some special way to connect to other resources, no other customer should be able to use it. Internet access is, of course, another way to connect to external resources. Then there is connecting to existing VLANs, which can be used either to transport data to other locations or just to physical servers in the same data center. Remote locations: a customer might want to connect their OpenStack virtual resources to their own office or their own data center and have ways to communicate even with tenant networks, not just the provider networks. And some customers might bring their own subnet that they have registered through RIPE or another regional Internet registry, so they might need a customer-specific floating IP pool to be able to use addresses from that subnet instead of the generally available floating IP pools. So, floating IP pools.
You can use floating IP pools with both private and public addresses. This is the general way that customers get Internet access, and NAT is performed on the network nodes in this case; the traffic egresses on the network nodes as well. It can be combined with address scopes and subnet pools, and I'll get a little bit more into that in a couple of slides. The pros of floating IP pools are, of course, that they are very widely used, and that it's simple to reserve and move addresses between resources: you just attach and detach the address you need, or you can even keep it reserved for you while it's not attached to any specific resource. Some cons: it requires NAT, which might not be desired. Network nodes might become bottlenecks, as they have to do the routing and NAT, and you can't egress directly from the compute nodes in this case. And the pools are shared between users. So, how do you set up floating IP pools? That is pretty basic. You set them up by creating an external network, specifying the network type, which physical network it should be connected to, and the segmentation ID, usually a VLAN. Then you create a subnet on that network. Also, as I said, you can use address scopes to get routable addresses and mitigate the last con, that it requires NAT. In this case, you create an address scope, then a subnet pool that you use for link networks and a subnet pool that is used for the actual networks that customers can provision. Then you create the external network and make sure that it is in the address scope, which means that the network node's external interface will be on the link network and the internal interface on the tenant network will be in the same address scope, so it won't do any NAT between them.
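As a rough sketch of the commands described above (the physical network name `physnet1`, the VLAN ID, and all CIDRs and resource names are made-up example values, not from the talk):

```shell
# External network backing the floating IP pool (VLAN 100 on physnet1
# is an example; adjust to your deployment).
openstack network create --external \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 100 \
  ext-net

# The subnet's allocation pool is what floating IPs are drawn from.
openstack subnet create --network ext-net \
  --subnet-range 203.0.113.0/24 \
  --allocation-pool start=203.0.113.10,end=203.0.113.200 \
  --no-dhcp \
  ext-subnet

# Optional: an address scope plus subnet pools, so tenant subnets are
# carved from a common routable scope and the router skips NAT.
openstack address scope create --share --ip-version 4 routable-scope
openstack subnet pool create --address-scope routable-scope \
  --pool-prefix 198.51.100.0/24 link-pool
openstack subnet pool create --address-scope routable-scope --share \
  --pool-prefix 10.64.0.0/16 --default-prefix-length 24 tenant-pool
```

Tenant subnets then get a /24 allocated automatically with `openstack subnet create --subnet-pool tenant-pool …`, and because both router interfaces end up in the same address scope, traffic is routed rather than NATed.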
It will also make sure that the routable addresses the customers use don't overlap, because addresses are assigned from the subnet pool. In this case, I use a /16 for the tenant-network subnet pool, and each /24 will be assigned from that pool. Then you can do the same thing for customer-specific floating IP pools. That's basically the same setup, but you add access control by creating RBAC rules with the access_as_external action, which makes it possible to limit the network to certain customer projects. So you can have a customer with their own floating IP pool, with basically the same pros and cons except that it is not shared between customers, so you can use this for specific customer needs. Basically the only difference is the last command, where you do an RBAC create with an access_as_external policy for the specific network, which makes it visible as an external network only to that project. Then we have shared provider networks. With those, you can directly attach virtual machines to provider networks. In this case, you can egress on the compute nodes and do VLAN tagging, for example, directly on the compute nodes, without having to route the traffic through a virtual router on a network node. You also don't need NAT. You could of course put some other enterprise NAT platform in front if you want to, but it's not at all necessary, and it's not within OpenStack. Exactly as in the floating IP case before, you can use RBAC rules to create policies and limit which projects are able to use the shared network. As for pros, this is also widely used, it doesn't require NAT, and you can let the traffic egress directly on the compute nodes, so the network nodes won't be a bottleneck.
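A minimal sketch of the customer-specific pool described above (project ID, VLAN, CIDR and names are placeholder examples):

```shell
# Customer-specific external network: created WITHOUT --external or
# --share, so other projects cannot see or use it by default.
openstack network create \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 200 \
  customer-ext-net
openstack subnet create --network customer-ext-net \
  --subnet-range 192.0.2.0/24 customer-ext-subnet

# The RBAC rule is what makes the network usable as an external
# network, and only for the one target project.
openstack network rbac create \
  --target-project CUSTOMER_PROJECT_ID \
  --action access_as_external \
  --type network \
  customer-ext-net
```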
The con might also be that the traffic egresses directly on the compute nodes, depending on what your network infrastructure looks like. If, for example, you only have access to the VLANs from the network nodes, this might not be possible to do; it depends whether it's a pro or a con for you. It can also be a little less flexible than floating IPs: it's not as easy to reserve and assign specific IP addresses to specific resources. You basically just create a shared network, and if you want to use RBAC, you do the same thing as before but with the access_as_shared action instead of access_as_external. Then we have L2GW, the layer 2 gateway service. It's basically an API to create layer 2 links between physical networks or VLANs and OpenStack tenant networks. In this case, traffic will egress on dedicated L2 gateway servers. It could be combined with other services, of course, but it's a specific service that does the egress, and you don't need NAT here either. The pros: you get your L2 connections centralized in one location, and it's API-driven and pretty much self-service. The cons might be that it's not as widely used, so there's a larger risk of running into bugs, for example, and there aren't as many configuration examples available. Also, the L2 gateways could of course become bottlenecks, as they carry all the L2 connections. There is some configuration you have to do up front, setting up the L2GW agents and so on, but when that is done, you basically just create an L2 connection to a specific interface and a specific segmentation ID on the L2 gateway, and then connect that to a specific network. Then you have VPN as a service. Here you basically create L3 connections between a tenant network and some other IPsec endpoint. It might be another OpenStack VPNaaS instance, or your office IPsec gateway, or a data center gateway: anything that speaks IPsec, basically.
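A sketch of both steps above. The shared-network part uses standard Neutron RBAC; the L2GW part assumes the networking-l2gw CLI plugin is installed, and the device name, interface name, VLAN and project ID are hypothetical examples:

```shell
# Shared provider network, limited to one project with RBAC
# (access_as_shared instead of access_as_external):
openstack network create \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 300 \
  shared-net
openstack network rbac create \
  --target-project CUSTOMER_PROJECT_ID \
  --action access_as_shared \
  --type network \
  shared-net

# L2GW: define the gateway (a device/interface known to the L2GW
# agent), then bridge a tenant network onto VLAN 500 behind it.
openstack l2gw create --device name=hw-switch-1,interface_names=eth3 gw1
openstack l2gw connection create --default-segmentation-id 500 gw1 tenant-net
```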
And it's completely self-service: no setup except enabling VPN as a service is required from the service provider, so it's all done by the customers themselves. The traffic egresses, and the IPsec connection is terminated, on the network nodes inside the router namespace. So a pro there is, of course, that it's encrypted, it's IPsec, and it's fully self-service; the provider is not required to do anything to make it work or to configure it for specific customers. A con might be the encryption itself, in cases where it's not necessary, for example because the network between the two locations is trusted for some reason; it will of course cost you some performance. It's also not quite as stable as many of the other solutions, since it requires the IPsec session to actually be up and running, and there are more moving parts compared to, for example, a direct L2 or L3 connection. Also, the complexity: it has all the complexity of IPsec, so there are lots of knobs to turn to make sure it's configured the same in both locations. And for the customer it might be a con that they can't see the logs on the OpenStack side of things, so it's harder for them to troubleshoot. So, VPN as a service: you set it up by first creating IKE policies and IPsec policies, where you can set all the aspects of the IKE and IPsec negotiations. Then you create the VPN service for the specific router where you want it terminated, and create endpoint groups, which are basically just pointers to the networks that are going to be routed through the IPsec tunnel: one of type subnet, which is your local subnet, and the remote one of type CIDR, as you want to provide the CIDR of the remote network you're connecting to. Then you create the site connection to the peer address of the other end of the IPsec connection.
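The flow above can be sketched with the neutron-vpnaas CLI roughly as follows (router name, subnet ID, peer address, remote CIDR and the PSK are placeholder examples):

```shell
# IKE and IPsec policies; all proposal parameters (encryption,
# lifetimes, PFS, ...) can be tuned here.
openstack vpn ike policy create ike-pol
openstack vpn ipsec policy create ipsec-pol

# VPN service terminated on the customer's router:
openstack vpn service create --router customer-router vpn1

# Endpoint groups: local side by subnet, remote side by CIDR.
openstack vpn endpoint group create --type subnet \
  --value LOCAL_SUBNET_ID local-eps
openstack vpn endpoint group create --type cidr \
  --value 192.168.100.0/24 remote-eps

# Site connection to the peer gateway, with the shared PSK:
openstack vpn ipsec site connection create \
  --vpnservice vpn1 \
  --ikepolicy ike-pol \
  --ipsecpolicy ipsec-pol \
  --peer-address 198.51.100.77 \
  --peer-id 198.51.100.77 \
  --local-endpoint-group local-eps \
  --peer-endpoint-group remote-eps \
  --psk MySharedSecret \
  conn1
```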
And you provide a pre-shared key, which of course has to be the same in both locations. Then we have BGP VPNs. Those are really great if you already have BGP IP VPN infrastructure in place. They can be either L2 or L3, depending on how you set them up, and there is an admin part and a self-service part; I'm getting to that in two slides. In this case, the traffic will egress on the network nodes as well. The pros here: as I said, it integrates very well with existing infrastructure. If you have a BGP IP VPN infrastructure, for example an MPLS network, it is a very easy way to plug OpenStack resources into that infrastructure. It's partly self-service: the part where you set up route targets and the like requires administrator rights, so that's done by the provider, and then the customer can create connections by themselves. The con is that it generally requires BGP VPN infrastructure; you need to have something in place that already speaks this. On the other hand, you could use it directly between, for example, two OpenStack clouds and have them exchange routes directly, but to get everything out of this solution you would probably want a BGP VPN infrastructure in place. If you have that, it's really easy: you just set up and configure the right route targets, connect it to a project, and set a name for it. Then the user can associate networks to the BGP VPN you have created as an admin. This is a little bit outside the scope, but trunking might also be useful in some cases when you're connecting external resources. Basically, you can expose multiple tagged VLANs to a virtual machine instance instead of having to create a separate NIC for each network it's connected to.
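The admin/self-service split described above looks roughly like this, assuming the networking-bgpvpn CLI plugin (ASN, route target, project ID and names are example values):

```shell
# Admin side: create the BGPVPN with the right route targets and
# hand it to the customer's project.
openstack bgpvpn create \
  --type l3 \
  --route-target 64512:100 \
  --project CUSTOMER_PROJECT_ID \
  --name customer-mpls-vpn

# Customer side (self-service): associate a tenant network with it.
openstack bgpvpn network association create customer-mpls-vpn tenant-net
```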
These could be internal tenant networks, or connections to provider networks as well. That is of course useful for firewalls and routers, but it could also be useful in any case where you need access to many VLANs from a single instance and don't want to create multiple NICs. In this example, you create two networks with different provider segments set, a couple of subnets for those, and then ports on both networks. Then you create the trunk from the parent port, which in this case is port0, and with the trunk set command you can add more VLANs by adding subports to the trunk. These get their own segmentation IDs, which will be the VLANs, and in this case I choose to inherit the segmentation type from the networks; you can also set that manually, for example for internal networks that don't have a provider segment ID. Then some considerations that you really have to think about. Of course, you need to make sure you don't get any IP address overlap in any of these cases, and that's true for layer 2 as well as layer 3 connections; even more so with L2 connections, where you have to think harder about address allocation. There may be cases where you connect two layer 2 networks and have to consider where DHCP is coming from: is there a risk that two separate DHCP servers will allocate the same IP addresses to instances on the different sides? In that case it might be a good idea to add an external IPAM to the mix, so you can control the IP address allocations from one central place. Or you could go in the direction of using different allocation pools.
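The trunk example described above can be sketched as follows (VLAN segments, CIDRs and port/trunk names are example values):

```shell
# Two VLAN provider networks (segments 101/102 are examples):
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 101 net-a
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 102 net-b
openstack subnet create --network net-a --subnet-range 10.1.0.0/24 subnet-a
openstack subnet create --network net-b --subnet-range 10.2.0.0/24 subnet-b

# One port per network; port0 becomes the trunk's parent (untagged).
openstack port create --network net-a port0
openstack port create --network net-b port1

# Create the trunk, then add port1 as a tagged subport whose VLAN ID
# is inherited from net-b's provider segment:
openstack network trunk create --parent-port port0 trunk0
openstack network trunk set \
  --subport port=port1,segmentation-type=inherit trunk0

# Boot the instance on port0; traffic for net-b then arrives in the
# guest tagged with VLAN 102.
```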
That means making sure that the networks on the respective sides use the same CIDR but allocate IP addresses from different allocation pools. For example, with a /24, one side allocates only from the first half of the /24 and the other from the second half. In general, you could say that all these parts of OpenStack are very flexible. You can really mix and match, and there are lots of ways to do different things and to connect all kinds of resources in one way or another. In general, these parts behave pretty much like normal physical networking: a virtual router can pretty much be seen as a normal physical router. You can connect ports and interfaces to it, you can do routing, you can do basically everything you can do in a physical router, and you can connect things in all kinds of ways. And as I said, you can really mix and match. You could, for example, take a customer-specific provider network like the one we talked about earlier, attach it to a port on a virtual router, and use it as a link network with a /30 subnet, with one address on the router port and the other address on some physical router or other equipment. Then you can create static routes on the virtual router, terminate this in some other part of your network infrastructure, and transport it elsewhere. This can, of course, be used to connect any tenant network with anything outside. Yeah. Do we have any questions? Yes. Sorry. Yes, you can do HA routers. An HA router is basically a router that lives on multiple network nodes and uses VRRP to fail over between those network nodes if one of them fails. Does that answer your question? Okay. Any other questions? Yes? Which of those methods to connect external networks play well with DVR? Sorry, could you repeat the question?
So, you showed several methods to connect external network resources. Which of them play well with distributed virtual routing? I think you could use pretty much all of them in conjunction with DVR. You do have some cases with NAT where you might need network nodes after all, to do specific NAT translations, but in general you could use all of them with DVR as well. So, if I open my network with a shared provider network to the underlay, is there any way, as an operator, that I can enforce packet rules on traffic that goes from the VMs or the virtual networks into my underlay? Can I say that only these destinations are allowed, or only these source IP addresses are allowed to get out into the underlay? I mean, you do have port security rules that won't allow spoofing. You mean you don't want the customer to be able to spoof addresses? Yes, to spoof addresses, but also to only reach specific targets in my underlay. Say I'm building some kind of database as a service in the underlay, and the VMs have to reach some services there, but I only want them to reach those specific services and not the whole network. For the last part of the question, I would probably use some external infrastructure, or rules directly on the infrastructure that you have; it's not something I would use OpenStack functionality for. Thank you. More questions? Okay, thank you.