All right. Hello. Good morning, and sorry for the late start. I hope your OpenStack Summit is going well so far. My name is Goutham Pacha Ravi, and I'm joined by my colleague here, Carlos. We both work at Red Hat, and Carlos is also the PTL for OpenStack Manila. We're also joined by my friend here from NetApp, Nahim Souza. We're here to talk to you about multi-tenancy orchestration with OpenStack Manila. There's a lot to pack in, so this is going to be a short presentation with a lot of information. So let's run through it. We'll try to touch upon the ideal way of delivering shared file systems in a secure manner in a shared cloud like OpenStack, and how you can do that with OpenStack Manila. And if you can't — maybe you don't really need that kind of scale in your cloud, or you don't have the ability to do this because you're using a software-defined storage system, or a vendor-provided storage system, that doesn't have the capability to isolate your data this way — how you could approximate it with some configuration in Manila. That's what this presentation is all about. So let me switch. Yes, so shared file systems inherently start with sharing: the idea that you are going to carve out units of data from your storage system and have multiple clients access them concurrently, in a secure manner. That's the whole point of it. So when you're trying to put up something like this as an offering on your cloud, you have to start worrying about how you're going to partition your users, and partition their use of the shared file system, such that they are ships in the night: they are not interfering with each other's workloads, and they're not able to access data that isn't supposed to be accessible to them, and so on. That, I think, is one of the basic tenets that we started out designing Manila around.
So: giving you access to isolated data stores that come with a guarantee that, at the data level, they are going to be isolated. No matter what storage system you're using behind Manila, the concept remains that there cannot be unauthorized access to your data, even if somebody somehow gets access to the storage system through its network or anything of that sort. That data isolation guarantee is always there, no matter the storage system you're using. Now, the harder and more interesting bit is: how do you protect this data as it goes out over the network? And it may not just be about protecting the data and securing its transmission through that network. It could also be that you're running workloads with some expectation of quality of service, and you don't expect somebody else to be sharing the network alongside you who might interfere with your workloads, whether maliciously or even unintentionally. So that's the part about providing a network isolation aspect to this. The ideal scenario — and that's kind of what we began designing the Shared File Systems service around — is that you let your users ask for their shared file systems to be exported on a particular network. And how would Manila do that? It would actually go ahead and create isolated NAS servers, exported directly and plugged directly into these self-service networks that your users are asking for their data to be on. That's the ideal scenario. And then we're going to contrast that with the other scenario, where the OpenStack operator can come up with a bunch of configuration and some rules such that everybody in the cloud plays nicely and kind of mimics this ideal scenario. So the UX in this ideal scenario is that your end users are able to create isolated networks on your cloud.
These could be Neutron self-service networks, or provider networks that they have access to create ports on. They represent that network in Manila as a share network, and they go ahead and use that share network to create a share. And when they create a share, behind the scenes Manila takes care of provisioning an isolated NAS server on that network — plugging in the ports and so on, and only on that network — and representing all of that as export locations in the API. That's the ideal use case. The not-so-ideal case is one that doesn't scale as much, so you're probably OK using it in clouds where the number of tenants is somehow known beforehand, or where you have a permissive tenant trust model: everybody is the same kind of tenant, maybe different departments within one organization, and there is no strict requirement for hard data-path network isolation. An example would be if you're using CephFS. If you're serving native CephFS shares, clients have to have access to the Ceph public network to mount those shares. Your workload is probably running on a virtual machine, a bare-metal node, a container — anywhere, really — and it needs to connect to the Ceph public network in order to talk to the MDS daemons and so on and get access to your Ceph data. So how would you protect a scenario like that? That's what this slide is all about. It's telling you that you can use other concepts in Manila, such as private share types. You would dedicate your Ceph cluster to one or a few of your OpenStack tenants, and protect the share type from being visible and usable by the other tenants on your cloud. And you would provide access to the Ceph public network via Neutron's RBAC rules.
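The ideal, self-service flow described above can be sketched with the manila CLI roughly as follows. The network IDs, share network name, and share type name here are placeholders for illustration, not values from the talk; a driver running with `driver_handles_share_servers = True` is assumed.

```shell
# Placeholders: substitute your own Neutron network and subnet IDs.
NET_ID=<neutron-net-uuid>
SUBNET_ID=<neutron-subnet-uuid>

# 1) Represent the tenant's Neutron network in Manila as a share network.
manila share-network-create \
  --name my-share-net \
  --neutron-net-id "$NET_ID" \
  --neutron-subnet-id "$SUBNET_ID"

# 2) Create a share on that network. Behind the scenes, Manila provisions
#    an isolated NAS server (share server) plugged only into this network.
manila create NFS 10 \
  --name my-share \
  --share-network my-share-net \
  --share-type dhss-true-type   # a share type with DHSS=True (hypothetical name)

# 3) The export locations reflect the NAS server's address on that network.
manila share-export-location-list my-share
```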
So you could create an RBAC policy on the storage network and make sure it is visible and usable only to one tenant, or a few tenants, and so on. And you can repeat this step multiple times if you have different sets of tenants to provide this to. The starting point would be the isolation that the storage system itself provides: in the case of Ceph, that means creating dedicated CephFS file systems, and MDS servers dedicated to each of your tenants or groups of tenants. So that's the breakdown of that configuration. Well, it's a 15-minute presentation, a lot to pack in. We did want to talk more about the ideal scenario and show you some of the features in Manila that let you manage the ideal scenario a little bit better. So in the next few slides, my friends are going to talk to you about the features that are available with share servers — the ideal NAS use case. In the past we gave another talk very similar to this one, and we did not have as many features as we have today. One of them came in API version 2.49. Previously, we weren't able to have Manila manage existing NAS servers: if you wanted Manila to manage the lifecycle of a NAS server, you needed Manila to create it. But with manage and unmanage for share servers, you are able to bring existing workloads on your storage under Manila management, and Manila will then manage their whole lifecycle. The difference from the other driver mode's manage operation is that Manila needs to take over the share server first, and then all of its shares. So this is the feature that lets you bring in existing workloads. And there are a couple of use cases for it: you can move share servers from one tenant to another, or you can simply bring existing share servers under Manila management.
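A rough sketch of the configuration-based approach and the manage operation just described, using the manila and openstack CLIs. The project ID, network name, host, and server identifier are placeholders; the exact values depend on your deployment. Note that native CephFS runs with `driver_handles_share_servers = False`, so the share type below is created with DHSS set to false.

```shell
TENANT_ID=<openstack-project-uuid>

# 1) Create a private share type, invisible to other projects, and grant
#    access only to the chosen project (native CephFS => DHSS=False).
manila type-create --is_public false cephfs-private false
manila type-access-add cephfs-private "$TENANT_ID"

# 2) Make the Ceph public network usable by that project only,
#    via a Neutron RBAC policy ("ceph-public-net" is a hypothetical name).
openstack network rbac create \
  --target-project "$TENANT_ID" \
  --action access_as_shared \
  --type network ceph-public-net

# 3) (API >= 2.49, DHSS=True back ends) Bring an existing NAS server under
#    Manila management; the share server is managed first, then its shares.
manila share-server-manage <backend-host> <share-network> <server-identifier>
```

Steps 1 and 2 would be repeated per tenant (or per group of tenants) that gets a dedicated slice of the storage system.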
So this is available in API version 2.49. The next one is share server migration, which allows you to move one share server, or NAS server, with all of its shared file systems, from one cluster to another. So instead of doing migrations share by share, you can migrate the whole server, with all of its shares, in one migration. It works quite similarly to Manila's share migration, if you know Manila a little bit: it's a two-phase approach. The first phase is the data copy phase, where we copy all of the data from the share server — all of its shares, and its snapshots if the back ends permit. After the data copy is completed, we set the status of the share server to indicate phase one is done, and the administrators can then control when they actually do the switchover. Before issuing the migration start command — before copying any data — you can issue a check to see whether the migration is going to be disruptive, and whether your back end actually supports copying snapshots and so on. So you can know all of that in advance. Then in the second phase, when you do the switchover, if the back end does not support non-disruptive migrations, clients will get disconnected; some back ends allow clients to stay connected in that case. This is available since API version 2.56. Yeah, it's one of the nicer features we have implemented, and a lot of enhancements are coming in release after release. So, over to Nahim. I will talk about two more features that are related to network isolation. The first one is share network security services. Security services basically allow you to manage the authentication and authorization of your users, and the idea here is that you can add a layer of security to your shares through the share network.
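The two-phase migration flow above can be sketched like this. The server ID and destination host are placeholders, and the exact flag names and intermediate status string may vary with your python-manilaclient version, so treat this as a sketch rather than a definitive recipe.

```shell
SERVER_ID=<share-server-uuid>
DEST_HOST=<destination-backend-host>

# 0) Dry-run compatibility check before any data is copied: will the
#    migration be disruptive, can snapshots be preserved, etc.
manila share-server-migration-check "$SERVER_ID" "$DEST_HOST" \
  --preserve-snapshots True --writable True --nondisruptive False

# 1) Phase one: start the data copy of the server and all of its shares.
manila share-server-migration-start "$SERVER_ID" "$DEST_HOST" \
  --preserve-snapshots True

# Poll until the server reports that phase one is done
# (a task state along the lines of 'migration_driver_phase1_done').
manila share-server-show "$SERVER_ID"

# 2) Phase two: the administrator triggers the switchover at a time of
#    their choosing; disruptive only if the back end can't avoid it.
manila share-server-migration-complete "$SERVER_ID"
```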
So you can add a security service to the share network configuration, and use it to provide this authentication layer to your share networks. This received some enhancements in later releases — I believe API version 2.63. Before that feature, it was possible to assign a security service to a share network, but not after the network was already deployed. Now, similarly to share migration, you perform a check operation to see whether the back end is compatible with the update; after that, you can update the security services on a share network that already has a deployed share server. This lets users update security services on networks that are already deployed, and brings more flexibility to users who want to manage share networks. The last feature I'd like to mention quickly is multiple subnets in a single availability zone. The idea here is that administrators can add more than one subnet in the same availability zone, and use that to manage their share networks. Previously this was not possible, and it was a problem when a subnet was full and ran out of possible allocations; now that's been solved, and an administrator can create more subnets when needed. And I think that's all I wanted to mention here. So that's all for our presentation. We're happy to answer any questions. That's all — thank you, everybody.
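The last two features can be sketched with the CLI as follows. The names and IDs are placeholders, and the exact subcommand names for the security-service update are as I recall them from recent python-manilaclient releases, so verify against your client version.

```shell
SHARE_NET=<share-network-name-or-id>
OLD_SS=<current-security-service-id>
NEW_SS=<new-security-service-id>

# (API >= 2.63) Check whether the back end can replace the security
# service on an in-use share network, then apply the update.
manila share-network-security-service-update-check \
  "$SHARE_NET" "$OLD_SS" "$NEW_SS"
manila share-network-security-service-update \
  "$SHARE_NET" "$OLD_SS" "$NEW_SS"

# Multiple subnets per availability zone: when the first subnet runs out
# of allocations, add another subnet to the same share network in the
# same AZ ("nova" is just a common default AZ name).
manila share-network-subnet-create "$SHARE_NET" \
  --availability-zone nova \
  --neutron-net-id <neutron-net-uuid> \
  --neutron-subnet-id <neutron-subnet-uuid>
```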