I'm Masaki Matsushita. We are the OpenStack R&D team at NTT Communications. Our mission is to provide technical support for our OpenStack-related projects and to contribute to OpenStack. Here is Yuki.

Hi, I'm Yuki. I'm also an engineer at NTT Communications. I'm in charge of operating and managing our OpenStack cloud for verification purposes, so I mainly engage with OpenStack as an operator.

Today, we will talk about a private Swift endpoint. Here is the agenda of this presentation. First, we will explain our background. Then we will introduce a similar solution in AWS called VPC Endpoint. After that, we will discuss our implementation of a private endpoint in OpenStack and some additional operational improvements. Finally, we will summarize the presentation and show some future tasks.

Let's move on to our background. This diagram shows an example of a web application system on a public cloud based on OpenStack. We have Swift behind a load balancer for the API, and they are connected to the API network, the red one. The load balancer is also connected to the external network, the blue one. On the right side is a user tenant. There is a load balancer for users, and web application servers are behind it. The application servers are connected to an internal network, the green one, and they have no router because they don't require direct internet access.

Now, suppose we want to save some log files to Swift from this service. In this environment, users can access Swift through the internet. However, the application servers have no connectivity to Swift in this case, because they have no connectivity to the external network. And we don't want to provide internet access to the servers only to use Swift. So our interest is how to provide access to Swift for servers connected to a closed private network, while preventing internet access. To achieve this, we tried to provide users with a private Swift endpoint. A private Swift endpoint enables servers which have no internet access to use Swift.
We referred to a similar solution in Amazon Web Services, called VPC Endpoint. A VPC endpoint also provides a private endpoint to a closed network in a VPC. A VPC is an isolated section in AWS; it is similar to a tenant in OpenStack. Additionally, Amazon provides a feature to restrict access to S3, called an endpoint policy. We can apply endpoint policies to restrict access to only specified buckets.

Before talking about the private endpoint in OpenStack, let's look at the Amazon solution. This diagram shows user tenants, called VPCs. Users can make several VPCs, so there are VPC 1 and VPC 2. VPC 1 has two subnets, public and private. Virtual machines in the public subnet can access the internet through the internet gateway. On the other hand, virtual machines in the private subnet can't access the internet, because the router has no route to the internet gateway for the private subnet. The same goes for S3, because we access it through the internet gateway: virtual machines in the public subnet can access S3, but virtual machines in the private subnet cannot.

So a VPC endpoint enables users to access S3 from the private subnet. It provides a route only to the S3 API endpoint. The VPC endpoint also provides a very useful feature, the endpoint policy. By applying an endpoint policy, we can restrict access through the VPC endpoint to specified buckets. A bucket is the equivalent of a container in Swift.

Up to here, I talked about the VPC endpoint in Amazon. Now let me talk about how to make a private Swift endpoint in OpenStack. What we want to do is to provide access to Swift without providing internet access. Our basic idea is to provide an additional external network as the API network, the blue one. The API network is not connected to the internet. There is a load balancer as the API endpoint, and a DNS server for the private network. The DNS server for the private network is required to answer DNS queries with the IP address of our private endpoint.
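For comparison, an S3 endpoint policy restricting access to one bucket looks roughly like this. This is a sketch of the AWS policy document format; the bucket name is hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Any request through the VPC endpoint to a bucket not listed in `Resource` is denied, which is the behavior we would like to reproduce for Swift containers later in this talk.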
By doing so, our users can use the endpoint registered in Keystone without considering the real IP address of the private endpoint. To use the DNS server for the private network, users need to set the DNS server on their private subnet. Then the application servers can look up the record, and we can use the private endpoint with the same domain name as the public endpoint. Finally, we can reach the private endpoint by its private IP address. That is the first step of our private Swift endpoint.

Let's consider the pros and cons for OpenStack administrators. This implementation is very easy and simple, because it uses only standard Neutron features. That also means it is easy to introduce private endpoints into existing environments. However, it is the end users' responsibility to change the DNS server for their private subnet.

Let's move on to the pros and cons for users. End users can attach the API network by using the Neutron API. Meanwhile, end users need to change the DNS server for their private subnet. In addition, private IP addresses for the API network are reserved by the provider.

Next, I will turn it over to Yuki.

Thank you, Masaki. Masaki showed the basic idea and the pros and cons of our implementation. As he mentioned, there are some issues, such as the need for an extra DNS server, changing the DNS server the subnet refers to, and so on. In this part, I will show you our challenges to improve these issues. There are three challenges.

Let me start with the first challenge. This challenge is to improve usability for end users regarding DNS settings. Until now, the end user needed to change the DNS server their subnet refers to. So the goal of this challenge is to use some techniques so that users don't need to be conscious of DNS when they use the API network.

First, I'll share some basic knowledge of the dnsmasq that Neutron manages. If dnsmasq is used as the DHCP driver, it provides a DNS function as well as DHCP, so the user can use the DHCP server as a DNS server. But by default, the user can only resolve instance names. Therefore, the user can't resolve public domains, for example, ntt.com.
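The per-subnet DNS change described above is a single client operation. A sketch, where the subnet name and server address are hypothetical:

```shell
# Point the private subnet at the DNS server for the private network
neutron subnet-update private-subnet --dns-nameservers list=true 192.0.2.53

# Or, with the unified OpenStack client:
openstack subnet set --dns-nameserver 192.0.2.53 private-subnet
```

This is exactly the step we would like to remove from the users' responsibility in the challenges that follow.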
Getting off track slightly: if you use versions older than Liberty, you can only resolve names in the template host-<IP-address>.openstacklocal. Getting back on track: in the DHCP agent, we can set a forwarder on dnsmasq in order to resolve unknown domains. If it is set, a query for an unknown name is forwarded to the specified DNS server, and we become able to resolve public domains. But a point to watch out for is that the forwarded query is sent from the same network as the other instances. So if we refer to an outside DNS server, we can't forward from a private network.

So what settings should we change to enable the forwarder? The DHCP agent ini file has a setting option for it, which means we can set the forwarder per tenant or network and manage it through the API. But we want to switch the forwarder according to which external network the router is connected to, so that the user can call the API without changing DNS settings once connected to the API network. So I set up our environment so that we can reach the DNS server dedicated to the API network using the same public IP, because the destination actually reached by the forwarder can then differ for each network. To achieve this, the operator assigns the same public IP as the DNS cache server to the DNS server for the API network, and injects a static route into the tenant router. In this way, the forwarder destination is switched by which external network the router is connected to, and the user becomes free from changing their DNS settings. That concludes the first challenge.

The first challenge achieved that users don't need to be conscious of DNS. But in that solution, we need to inject a static route into each tenant router, and this workload is very heavy. So our next interest is: can we reach the DNS server dedicated to the API network without configuring each tenant router? We worked on this. Let me clarify the problem and review the goal of this challenge. The problem is that we need to inject a static route into every tenant router that calls the OpenStack API via the API network. As the number of tenants grows, this workload becomes very heavy.
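Concretely, the forwarder setting and the per-router static route from the first challenge look roughly like this. This is a sketch: the config file path and option are the standard Neutron ones, but all addresses and the router name are hypothetical:

```shell
# /etc/neutron/dhcp_agent.ini (excerpt): make dnsmasq forward unknown
# names to this server:
#   [DEFAULT]
#   dnsmasq_dns_servers = 198.51.100.53

# First-challenge approach: inject a static route for that DNS IP into
# each tenant router, with the API-network gateway as the next hop
neutron router-update tenant-router \
  --routes type=dict list=true destination=198.51.100.53/32,nexthop=10.1.0.1
```

Because the forwarder IP is the same everywhere, only the static route decides whether the query reaches the public DNS cache or the server dedicated to the API network.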
So the goal of the second challenge is to become free from configuring each tenant router. Against this problem, there are several solutions, for example, automating the static route injection, or using a router outside OpenStack. I adopted the latter solution because it is simple. In this solution, we need a new extra router into which the static route is injected, but we no longer need to inject the static route into each tenant router. All we have to do for a tenant router is to set its default gateway to the new extra router. Then all packets are sent to this extra router, which forwards them to the DNS server for the API network, because this router has the static route. So we become free from configuring each tenant router.

We added the extra router to this diagram; the diagram is what Masaki showed you earlier as our implementation. In this diagram, I put one router to simplify the explanation, but we actually need at least two routers, because a single router would be a single point of failure.

Lastly, I will explain the challenge against the problem that we need to maintain a DNS server dedicated to the API network and its extra records. Let me confirm the problem and the goal. The problem is that in order to call the API via the API network, we need to prepare a DNS server dedicated to the API network. But it is a bother to need an extra DNS server just to call the API. So the goal of the third challenge is to call the API from a private network without an extra DNS server.

This challenge is simple: in the same way as for DNS, we inject a static route for the endpoint IP into the extra router. This enables us to reach the load balancer with its public IP from the private network; until we injected that static route, we were reaching the load balancer via its API-network IP. Now that we can reach the load balancer with the public endpoint IP from the private network, we don't need to prepare the extra DNS server, and we can use the DNS cache instead.
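If the extra router is, say, a Linux box outside OpenStack, the two static routes from the second and third challenges might look like this. This is only a sketch under that assumption; all addresses are hypothetical:

```shell
# On the extra router outside OpenStack:
# reach the DNS cache IP via the API network (second challenge)
ip route add 198.51.100.53/32 via 10.1.0.1

# reach the public Swift endpoint VIP via the API network (third challenge)
ip route add 203.0.113.80/32 via 10.1.0.1
```

Tenant routers then need nothing beyond a default gateway pointing at this box, so no per-tenant configuration remains.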
Up to here, the explanation of the improvement challenges is finished; we improved usability and operability. Now I'll introduce the conclusive architecture, which we are actually operating today. This is the diagram of the conclusive architecture; it is simpler than what we showed at first. From here, I'll explain the API call flow from the DMZ and from the private network.

First, let us check the flow from the DMZ network. The VM queries dnsmasq to resolve the domain, and dnsmasq forwards the query to the DNS cache. Therefore, we can resolve the domain name. Once the VM knows the endpoint IP, it reaches the load balancer via the public network. So we can call the API this way; it's simple. That is all for the DMZ network part.

Let us move to the case of the private network. I'd like to start with how the name is resolved. The VM queries dnsmasq to resolve the domain in the same way as in the DMZ, and dnsmasq tries to forward the query to the DNS cache. The tenant router doesn't know the route to the DNS cache, so it forwards the query to the router outside OpenStack, its default gateway. This router has a static route for the DNS cache, so it forwards the query via the API network. So we can resolve the domain name this way. Once the VM knows the endpoint IP, it tries to reach the load balancer with the public IP. In the same way as for name resolution, the tenant router doesn't have a route to the load balancer, so it forwards the packets to the extra router, and the router outside OpenStack, which has a static route for the load balancer, forwards them via the API network. So the VM in the private network can call the API.

I'll show you a demonstration video. I recorded it using the private endpoint in an OpenStack environment. This diagram is the network topology in Horizon, as seen when the user logs in with the member role. Let me start with an explanation of the demonstration environment. The blue network is the public network that is connected to the internet; as you know, this symbol means that the network is external. The orange network is the API network, and it is also external.
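From a VM in the private network, the two flows described above can be checked roughly like this. A sketch; the endpoint domain and addresses are hypothetical:

```shell
# Name resolution: ask the subnet's dnsmasq (the DHCP port IP),
# which forwards the query to the DNS cache via the extra router
dig swift.example.com @10.2.0.2 +short

# API call: the same public endpoint IP is reachable because the
# extra router forwards it via the API network
curl -I https://swift.example.com/healthcheck
```

If both commands succeed while ordinary internet destinations remain unreachable, the private endpoint is working as intended.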
The network name contains the IP range, because the member role isn't allowed to see the IP range of an external network. There is a router on each external network: one is the internet gateway, the other is the API gateway. The green network is a tenant network that is connected to the internet gateway; we can confirm its address range because it is a tenant network. And this is the load balancer VM. The last network is the private network, which is not connected to any router, and there are three virtual machines on this network.

So let us log in to this VM. We look up the floating IP address first; we can find it on the instance detail page, so I copy it. Then I log in to the load balancer VM by SSH. First, check the internet connection from this VM: we find that this VM has an internet connection. Next, check domain name resolution: the query is successful. Check also the OpenStack endpoint domain: these queries are answered by the dnsmasq that Neutron manages.

We are now in the load balancer VM. Let us log in to the application server VM from the load balancer VM. We could log in to the application server VM. Check the internet connection: this VM doesn't have an internet connection. Check domain name resolution: this VM can't resolve the domain name. Naturally, we also can't resolve the OpenStack endpoint domain, so if we try to access it with the OpenStack client tools, we fail.

In order to access it from the application server VM while restricting the internet connection, we add an interface on the private network to the API gateway. We can see that the interface on the API gateway has been added. Check the internet connection again: as expected, it still can't reach the internet. Check domain name resolution again: this VM has become able to resolve the domain, because it is plugged into the API network. Check also the OpenStack endpoint domain: the name resolves.
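The step of attaching the private subnet to the API gateway in the demo corresponds to a single command. A sketch; the router and subnet names are hypothetical:

```shell
# Plug the private subnet into the API gateway router
openstack router add subnet api-gateway private-subnet

# or, with the older client:
neutron router-interface-add api-gateway private-subnet
```

This is the only action the end user has to take to gain access to the private endpoint.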
And we got the public IP, not the API-network IP. So we try to access it again, and now we can. There is one container, the demo container. I try to upload a file to the demo container; there is no file yet. We upload the OpenRC file to the demo container on Swift as a test, and check again: we find that the OpenRC file has been added to the demo container. Sure enough, we can also see in Horizon that the OpenRC file has been added to the demo container. So we can upload a file to Swift via the API network while restricting the internet connection.

Let's go back to the slides for the session summary, and let me show you our next interest. This time we focused on how to reach Swift from a closed network while restricting internet connectivity. As you saw in the demonstration video, we achieved this without special work by the users. All the users have to do is connect to the API external network, and they can use the same endpoint IP and credentials as via the public network. In order to achieve it, we added the API network, prepared an extra router outside OpenStack, and injected static routes into that extra router. That is the summary.

Now that we can access Swift without internet connectivity, our interest is moving to endpoint policy. Lastly, I will briefly introduce an idea we have already implemented as a proof of concept. The idea is to place a layer-7 firewall in front of the load balancer. We used nginx with ngx_mruby as the layer-7 firewall, and Redis as the database that stores the policy. The mruby code checks the policy in Redis and judges whether the access is allowed or denied. Currently, we can restrict the accessible containers with this solution, but I would like to implement it by utilizing an OpenStack project, so we are now interested in the Congress project. This is our next challenge. If you want to know the details of our temporary workaround, please ask us after this presentation. That is all for our presentation. I would be happy if today's session was helpful.
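As a rough sketch of the decision the layer-7 firewall makes per request: in the real PoC the allowed-container set lives in Redis and the lookup runs inside ngx_mruby, but the logic reduces to a set-membership check. The names here are hypothetical:

```shell
# Allowed containers for this project (in the PoC this set is in Redis)
ALLOWED_CONTAINERS="demo logs"

# Print "allow" if the requested container is in the set, else "deny"
check_container() {
  for c in $ALLOWED_CONTAINERS; do
    if [ "$c" = "$1" ]; then
      echo allow
      return
    fi
  done
  echo deny
}

check_container demo     # prints "allow"
check_container secret   # prints "deny"
```

In the PoC, a "deny" result makes nginx return an error before the request ever reaches the Swift proxy, mirroring the effect of an AWS endpoint policy.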
We also have a booth in the Marketplace, where we are presenting the technology used in our OpenStack-based public cloud. Please feel free to come to our booth. Thank you for your attention. Thank you.