Hello, I'm Carlos, the current PTL for OpenStack Manila, and I'm delighted to share with you the progress we have made over the last release, the Yoga release, as well as to talk about our plans for this release, the Zed release, and the directions Manila is going to look into.

But first, let's start with a brief overview of what Manila is. Manila is an OpenStack service that provides tenants POSIX-compliant shared and distributed file systems as a service. It has been up and stable for several releases now. It offers support for the most popular NAS protocols, such as NFS, CIFS, GlusterFS, MapRFS, and HDFS, and it supports over 35 storage back-end solutions. Manila is also inherently multi-tenant and secure: it is capable of providing hard network and data path isolation guarantees with the help of tenant-dedicated share servers. Tenants themselves determine who has access to a shared file system, and access can be revoked at any time. Tenants can also integrate their own authentication domains. Manila provides quota controls for billing and for preventing resource exhaustion. Tenant resources are scalable and elastic, so growing and shrinking shared file systems is instant and easy.

Adding to that, Manila's capabilities are rich. They form a flexible model to provide tenants a share service catalog, offering access to storage capabilities that are crucial, such as hard multi-tenancy, data protection, and snapshot capabilities such as reverting in place, mounting, and cloning. These capabilities are discoverable and programmable, not opaque. So think of them as more like compute flavors and Kubernetes storage classes, and less like scheduler hints or Cinder volume types, for example.

After this brief overview, let's go through some highlights of the Yoga release, starting with new features.

The first feature to mention is the addition of multiple subnets in a single availability zone. Since the Yoga release, users can specify more than one share network subnet in any availability zone. This allows scaling the network of the NAS servers that export the shared file systems. The community-maintained Container reference driver and the NetApp ONTAP driver now support modifying subnets on existing share networks. (A quick sketch of this follows just after these highlights.)

Another highlight is that shares can now be soft deleted. Since Yoga there is a sort of recycle bin in Manila: shares can be deleted into this recycle bin, and these soft-deleted shares can be restored, or purged permanently after a configurable amount of time.

Also, cloud administrators are now able to specify affinity filters through scheduler hints to influence the placement of shares and replicas. This is a step forward for provisioning, ensuring that shares and replicas will land on specific back ends. The placement metadata is governed by a policy that protects the scheduler instructions from being manipulated by unprivileged end users.

And last but not least, the work to phase out oslo.rootwrap in favor of oslo.privsep has begun.
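Before moving on to our plans, here are a few quick, hedged sketches of how these Yoga features look from the client side. I'm using the python-manilaclient bindings; the credentials and UUIDs are placeholders, and the exact argument names and microversion (2.70, as far as I recall, for multiple subnets per availability zone) should be double-checked against your deployment. First, adding a second subnet to a share network in an availability zone that already has one:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from manilaclient import client

    # Authenticate against Keystone (placeholder credentials).
    auth = v3.Password(auth_url="http://keystone:5000/v3",
                       username="demo", password="secret",
                       project_name="demo",
                       user_domain_id="default",
                       project_domain_id="default")
    manila = client.Client("2", session=session.Session(auth=auth))

    # Since Yoga, an availability zone may hold more than one subnet per
    # share network, so a second create call for the same AZ should now
    # succeed instead of being rejected.
    subnet = manila.share_network_subnets.create(
        share_network_id="<share-network-uuid>",
        neutron_net_id="<neutron-net-uuid>",
        neutron_subnet_id="<neutron-subnet-uuid>",
        availability_zone="nova")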
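The recycle bin works the same way from the client, continuing with the manila client object from the sketch above. The method names here are my reading of the feature (the API calls the actions soft delete and restore), so treat them as assumptions:

    # Move a share to the recycle bin instead of deleting it outright.
    manila.shares.soft_delete("<share-uuid>")

    # Changed your mind? Restore it from the recycle bin. Otherwise it is
    # purged automatically after the configured retention period.
    manila.shares.restore("<share-uuid>")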
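And the affinity scheduler hints, again continuing with the same client object. The hint key below (same_host, with its counterpart different_host for anti-affinity) reflects my understanding of the feature and is worth verifying:

    # Ask the scheduler to place the new share on the same back end as an
    # existing share; anti-affinity would use "different_host" instead.
    share = manila.shares.create(
        share_proto="NFS",
        size=1,
        name="app-data",
        scheduler_hints={"same_host": "<existing-share-uuid>"})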
Now, let's talk about some possible features and enhancements for the Zed release, starting with the CephFS NFS driver, which is going to receive a couple of updates over the Zed cycle in order to address the scalability and performance of the driver. One of the updates is going to be the ability to request a cephadm-deployed Ganesha. For now, operators will still be able to choose whether to have the cephadm-deployed Ganesha or to deploy it the way they were used to; this is going to be done through a config option. This config option is going to be deprecated in the future, but that's a plan for another cycle.

On snapshot metadata: metadata has been reformulated for Manila. What does that mean? It means that we have an effort to make metadata handling more generic internally, so we will be able to add metadata to even more resources. The groundwork for shares was done over the Yoga cycle, and over the Zed cycle the focus is on snapshots. Shares already had metadata, so there should be no impact on the metadata that is already present. With those changes, we will be able to bring metadata to even more resources. This is also a step toward enhanced usability for operators, making sure they have the means to group, tag, or filter their resources based on characteristics they define. As mentioned, no harm should come to the existing metadata of shares. (A sketch of what this could look like for snapshots appears a little further below.)

Another important topic is that there were a couple of enhancements for UX, covering both the CLI and the Horizon panels. For most of its lifetime, Manila had only its own native client, and after a few requests we started introducing Manila commands to OpenStackClient. Reaching OpenStackClient parity with the native manila client has been a multi-cycle effort for us, and it is getting close to completion, all thanks to the incredible Manila contributors. The plan is to complete the implementation of all commands over the Zed cycle and, at the end of it, add a deprecation warning to the native client. Manila UI also received a couple of enhancements. We had a few interns from North Dakota State University working with us, and we intend to have support for share network subnets at the end of this cycle. We still don't have feature parity with the most recent Manila APIs, but we are making decent progress.

Another ongoing topic is oversubscription improvements, for hardening and resilience. What do we consider oversubscription? If a storage pool supports thin provisioning and reports it to Manila, allowing thinly provisioned shares to be created, it is open to oversubscription. Administrators are able to configure the maximum oversubscription ratio they will accept. The calculations that prevent excessive oversubscription are going to be enhanced to also take into consideration that Manila might not be the only service using the storage, making the estimate more precise and allowing shares to be more evenly balanced across the cloud. Manila will perform an estimate of the allocated capacity in gigabytes and include it in the driver stats reported to the scheduler.
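To illustrate the arithmetic involved, here is a deliberately simplified sketch of the kind of capacity check a scheduler makes for thin provisioning. This is not Manila's actual implementation, and all the names are mine:

    def fits_thin(total_gb, provisioned_gb, max_over_subscription_ratio,
                  requested_gb):
        """Toy thin-provisioning capacity check.

        With thin provisioning, a pool may promise more capacity than it
        physically has, up to total_gb * max_over_subscription_ratio.
        """
        virtual_capacity_gb = total_gb * max_over_subscription_ratio
        return provisioned_gb + requested_gb <= virtual_capacity_gb

    # A 100 GiB pool with a 20x ratio can hold 2000 GiB of promises:
    print(fits_thin(100, 1900, 20.0, 50))   # True:  1950 <= 2000
    print(fits_thin(100, 1990, 20.0, 50))   # False: 2040 >  2000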
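And going back to the snapshot metadata work mentioned a moment ago: once it lands, I would expect usage to mirror share metadata, roughly as below, reusing the client object from the earlier sketches. The snapshot-side method name is an assumption on my part; shares have had set_metadata for a long time, and the Zed work would bring the equivalent to snapshots:

    # Share metadata has existed for a long time:
    manila.shares.set_metadata("<share-uuid>", {"team": "analytics"})

    # Assuming the snapshot bindings mirror the share manager, tagging a
    # snapshot could look like this once the Zed work lands:
    manila.share_snapshots.set_metadata("<snapshot-uuid>",
                                        {"retention": "30d"})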
On RBAC: we started this effort in Manila in the Wallaby cycle, and there has been some good progress. Over the Zed cycle we want to work towards hardening phase one and also implementing phase two of RBAC. For more details, please check the secure RBAC definitions; they are pretty complete and well documented.

On FIPS: FIPS became a community-wide goal. FIPS is the Federal Information Processing Standards, and it has been a requirement for some deployments to ensure that systems will only be using approved, secure libraries and algorithms. Changes are being made to Manila to ensure that Manila, like other OpenStack services, will be using FIPS-compliant libraries. Third-party driver maintainers will need to ensure that their drivers are FIPS compliant on their own, if they want to.

Going through some tech debt items: the migration off oslo.rootwrap is in progress, and we intend to make even more progress over this cycle. We have an open topic to drop python-keystoneclient from python-manilaclient. And last but not least, there are a few generic driver improvements we would like to have: for example, addressing the current limit of 26 attached volumes by adding support for virtio-scsi, and working on online extensions. We would love to have a few volunteers for those topic items, so if you are interested, please reach out to us and we can help you.

There are also a few other areas where we could use your help. Over the past cycles we have had a bunch of internship opportunities, such as Outreachy, and mentoring students from a few universities, for example North Dakota State University, Northeastern University, and Boston University. These were all great exchanges where we managed to get the existing Manila community to interact with the interns and make progress on a few areas we wanted to, for example OpenStackClient, Manila UI, and the OpenStack SDK. If you would like to be a part of it, please let us know and we can help you through the process.

We also need more reviewers to maintain the Manila code base, so if you are interested, please reach out to me or to Goutham. This is an informal mentoring process that has turned out to be very successful in the past, and we will guide you through understanding the code base, reviews, and the community itself.

So yeah, thank you very much. If you have any questions, please direct them to us on IRC in #openstack-manila, or you can reach out through the openstack-discuss mailing list as well.