be posted on YouTube later as well as in the project navigator for each of the different projects. So if you know anyone that missed the meeting or had other things going on, please share the recording with them. I am going to be sharing my screen, but if you have any questions throughout, we have the presenters on, so we'll pause at the end of each project update for those questions. Feel free to either throw those in the chat or, of course, share them verbally after each project update. The project updates are pre-recorded, so we're not really encouraging interjection mid-update just because it'll be a little bit hard for us to hear, but I'll pause at the end of each one to take those questions. So let's go ahead and get started. And I think we're starting with, oh wait, we're starting with Cinder. Here we go.

Hello. Welcome to the Cinder project overview and update. My name's Brian Rosmaita. I'm a Principal Software Engineer at Red Hat. I was the PTL for the Victoria release, and I'm serving again as PTL for Cinder for the upcoming Wallaby release. So what does Cinder do? Well, it's the block storage service. What we do is implement services and libraries that provide on-demand, self-service access to block storage resources. And you can see in the diagram the basic layout. We provide a REST API for our clients to contact, there's a message bus, and there are several different services that comprise Cinder: the scheduler and then the volume managers. So, long story short, we provide software-defined block storage, by abstraction and automation, on top of various traditional back-end block storage devices. If you want a volume for your instance, Cinder is where you get it from.

So what does the Cinder project produce? Well, we produce software in a whole lot of repositories. The cinder repository is where the main Cinder code is stored; that provides the REST API and all the services that make the block storage service work. We also have a library called os-brick, and that's what's used to actually attach volumes. So Nova uses it to attach volumes to any of your instances, and Cinder itself also uses os-brick when it needs to attach a volume to perform some type of service on it. We provide python-cinderclient, which provides Python bindings to the REST API. We also provide the python-brick-cinderclient-ext extension, which allows you to use os-brick to do attachments, but via the command line, for particular applications. We also provide the cinder-tempest-plugin and cinderlib.

All right, so what you're here for, though, is to know what's new in Victoria. Okay, a few things. Microversion 3.61 adds the cluster name to the volume detail response if it's called in an administrative context. So regular end users don't see it, but administrators do, and that can be very helpful when you're troubleshooting. We also got microversion 3.62, which adds a default volume types API and allows management of a default volume type for any particular project. Operators asked us for a way to have particular projects use particular volume types that are tied to a particular back end or a particular storage class or something like that, and this gives you an API by which you can do it (there's a quick sketch of that call right after this part of the update). And we also have improved handling of the Cinder default volume type, and this improved handling has been backported to Ussuri (16.2.0) and to Train (15.4.0) to keep the behavior consistent.
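As a rough illustration of that new default volume types API (microversion 3.62), here is a minimal Python sketch. The endpoint path and payload shape are my recollection of the Block Storage API reference, and the endpoint, token, and IDs are placeholders, so please verify against the API reference before relying on it:

```python
import requests

# Placeholders: substitute your own Cinder endpoint, token, and IDs.
CINDER_ENDPOINT = "https://cloud.example.com:8776/v3"
TOKEN = "gAAAA..."          # a valid Keystone token with sufficient rights
PROJECT_ID = "e1b2c3..."    # project whose default volume type you want to set
VOLUME_TYPE = "fast-ssd"    # name or UUID of an existing volume type

resp = requests.put(
    f"{CINDER_ENDPOINT}/default-types/{PROJECT_ID}",
    headers={
        "X-Auth-Token": TOKEN,
        # Microversion 3.62 is the one that introduced the default-types API.
        "OpenStack-API-Version": "volume 3.62",
    },
    json={"default_type": {"volume_type": VOLUME_TYPE}},
)
resp.raise_for_status()
print(resp.json())
```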
So default volume types have been around for a while, and in Train they were made mandatory, in the sense that Cinder does not allow you to have untyped volumes anymore. You can consult the release notes to see in what way this handling has been improved, but trust me, it has. Also, Zstandard compression support was added to the Cinder backup service. The default is still deflate, or what's known as zlib, but now we also have this very popular modern algorithm, Zstandard, that can be used with the backup service. Also, a couple of new drivers were added: Dell EMC added the PowerStore driver for iSCSI and Fibre Channel, and Hitachi added the HBSD driver for iSCSI and Fibre Channel. In addition to that, many volume drivers added features beyond the Cinder required features, so if you look at the Victoria release notes, you can see a list of what's been added.

Okay, we addressed some security issues also. There was OSSN-0086, Dell EMC ScaleIO/VxFlex OS backend credentials exposure. That was fixed during the Victoria development cycle and then backported as far as Queens. So the vulnerability does not occur in Victoria, because it was fixed before Victoria was released, but we discovered it during the cycle and it's been backported. That's something to be aware of if you use the Dell EMC ScaleIO backend. There's also OSSN-0085: Cinder configuration option can leak secret key from Ceph backend. It only applied to Ceph deployments that were using the rbd_keyring_conf option with Ceph, and that option has been removed in Victoria. So it was deprecated in Ussuri, the OSSN was issued, and then we removed it in Victoria.

Okay, one other thing I just want to bring to your attention. There was an upgrade-to-Ussuri issue that was discovered during the Victoria cycle, but it does not affect the Victoria release. I want to bring it to your attention just so you're aware of it. Now, if you've already successfully upgraded from Train to Ussuri, then there's nothing to worry about, because the problem it causes would not allow you to upgrade; so if you were able to upgrade, you're fine. And if you started with Train, meaning Train was your first OpenStack installation, then you don't have to worry about anything either. But if you upgraded from Stein to Train 15.3 or earlier, and you did not purge your Cinder database before the upgrade (not that you need to purge the Cinder database in general, it just so happened that if you didn't, you ran into this problem), then please read the release notes for Cinder 15.4.0 and for Cinder 16.2.0. There are several ways that you can address this issue, but you need to read through the release notes and decide what's the best way for your particular situation. So just be aware that your upgrade path from Train to Ussuri may require some actions in the Train deployment before you do the upgrade. I just want to make everyone aware of that.

All right, so what's planned for Wallaby? Well, one major thing is we're going to remove version 2 of the Block Storage API. It was deprecated in Pike, and version 3.0 is just like 2.0. Now, why would you use version 3.0 when you can use version 3.62?
That's entirely up to you, but if for some reason you have scripts or something and they're expecting the responses from the version 2 API, you can get something very much like those responses if you specify version 3.0 when you make your requests to the Block Storage API. So consult the Block Storage API reference documents for more information about that, but we will remove version 2 during this cycle. There are some new drivers that have been proposed. Open-E JovianDSS has already merged, so that's a new driver that's guaranteed to come. Ceph iSCSI is most likely going to be delivered; it's very close. And then Kioxia KumoScale is going to be contributing a driver. It's kind of an interesting driver because it uses NVMe-oF, and they're going to make some updates to the os-brick library's handling of NVMe-oF to bring it up to date and also to support KumoScale. So that's going to be interesting. We're also going to be doing the consistent and secure policies initiative. Not too much to say about that, other than we will be consistent with other projects and the policies will be as secure as we can make them. And then there are going to be various internal improvements in Cinder. We have a whole list that we discussed at the Wallaby PTG, the project teams gathering that was just held about a week and a half ago, and if you're interested in seeing what these various internal projects are, you can go to the OpenStack wiki and look for the Cinder Wallaby PTG summary; there's a list of everything we discussed and what we plan to do. And if you want to contact the Cinder team, I've given you this tiny.cc slash cinder info link. It'll take you to our base contributors page, but it gives you a very nice listing of all the repositories the project contributes to and also our various means of communication and what our basic processes are. So it gives you a good idea of what the Cinder team is all about.

All right, and get involved. There are some things that we would like you to do, where we could use some help. For instance, the Cinder documentation could use an analysis by a good information architect, or even just an information architect, or even a high school student could probably do this. Basically we have documentation that's been written by various people, aimed at various audiences, and it's kind of interleaved. We would like to separate out things that are aimed primarily at operators running Cinder, operators configuring Cinder, documents aimed at end users, and documents aimed at developers. We have all of those, and we actually have some pretty good documentation, but it's not always easy to find things because of the way it's organized. So we could use some help from somebody coming up with a nice plan for a good way to organize it. Also, it would be good to make your backend vendors aware that you value Cinder third-party CI and their drivers. It's not easy for the vendors to maintain the third-party CI, as we can see, because the systems are constantly going down and having to be fixed. So it'd be good if you let your backend vendor know that you think it's important that their third-party CI is constantly running on Cinder changes, because it guarantees better quality code. There's always the possibility to add tests to the cinder-tempest-plugin if you're so inclined; you may have run into a scenario that would be good to have tested, and we're always looking for that. And then there's an interesting article that I've been telling people about.
It was written in 2013, I think, but it's still very relevant: 10 ways to contribute to an open source project without writing code. So if you don't want to write code for tests and you don't want to write code for features, there are various other ways that you can contribute to open source projects like Cinder, and I encourage you to check that out. That's all I've got. Thank you very much, and I'll be happy to take questions at the appropriate time. Thank you.

All right. Thank you, Brian, and I see he's in the meeting. So are there any questions for him about the Cinder features he discussed for Victoria as well as what's coming in Wallaby? Well, it looks like he put some communication places on the slides, so if you're watching this after the community meeting or you have questions later, feel free to get in contact with them. But now we're going to go on to the Glance updates. Thank you very much.

Hello everyone. My name is Abhishek Kekane and I'm working with Red Hat as a senior software engineer. I'm here to provide a project overview and update for Glance. I've been serving as the Glance PTL for the last couple of cycles and will be continuing for the Wallaby cycle as well. Basically, I have been associated with Glance since Icehouse, mostly around six years, and was involved in the new image import API, and then some of the new features added to Glance like hidden images, multiple stores support (different kinds of stores, multiple stores for Glance), importing an image to multiple stores, copying an existing image to multiple stores, etc. So in this session we are going to see what Glance is, what's currently going on around Glance, and what the plans are for the current Wallaby cycle of Glance. And at the end, if we have any questions, we'd like to answer those as well.

So let's start. What does Glance do? Glance is the OpenStack image service. Glance provides services and associated libraries, like glance_store, to store, browse, share, distribute, and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions. Basically, Glance is one of the core projects of OpenStack. As I said, it is one of the core projects of OpenStack; it was founded during the Bexar release, which is the second release of OpenStack. The latest user survey indicates that Glance is deployed in 95% of clouds in production or test phases.

New features and enhancements for Victoria. So, Glance now supports multiple stores: you can have combinations of different types of stores, like RBD, file, RBD plus file, etc. And in Ussuri we added features so that we can import a single image into multiple stores at the time of creation, as well as copy an existing image into multiple stores. So, for example, if you are using a pre-Ussuri version, for example Train, and you want to upgrade your cloud to Victoria, then you can copy your existing images into multiple stores using that feature. In Victoria we have worked a little bit on fine-tuning that copy image feature, where we are now allowing copying of unowned images by policy. So an administrator can set this policy in the policy.yaml or policy.json file to allow users to copy images which do not belong to them. So the enhancement in the multiple stores feature is that administrators can now set policy to allow users to copy images owned by other tenants. There is a detailed spec link given here.
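As a rough sketch of what that image copy looks like at the API level, the import call with the copy-image method, here is a minimal Python example. The request body shape is my recollection of the Images API v2 import call, and the endpoint, token, image ID, and store names are placeholders, so treat it as illustrative rather than authoritative:

```python
import requests

# Placeholders: substitute your own Glance endpoint, token, image ID, and store IDs.
GLANCE_ENDPOINT = "https://cloud.example.com:9292/v2"
TOKEN = "gAAAA..."
IMAGE_ID = "6bd7e9f1-..."                      # an existing, active image
TARGET_STORES = ["rbd-fast", "file-archive"]   # store IDs from the deployment's config

resp = requests.post(
    f"{GLANCE_ENDPOINT}/images/{IMAGE_ID}/import",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    # "copy-image" copies an already-active image into additional stores;
    # copying an image you don't own needs the policy mentioned above.
    json={"method": {"name": "copy-image"}, "stores": TARGET_STORES},
)
resp.raise_for_status()   # Glance accepts the request; the copy runs asynchronously
```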
Then we have sparse image upload. Basically, the RBD and filesystem drivers now support sparse image upload, which means ignoring null byte sequences and uploading only the data itself at the given offset, resulting in savings on your storage. There is one config parameter; if you enable that config parameter, then you enable sparse image upload. You can find details in the given spec. Making the Cinder driver compatible with multiple stores: when we added the multiple stores support feature, there was no provision for configuring multiple Cinder backends as a Glance store driver. So in this Victoria cycle we have added this facility where, if Cinder has different backends exposed using volume types, you can configure a different store in Glance for each of the volume types. So now you can have multiple Cinder stores in Glance as well. You can find those details in the given spec. So basically, these are the features and enhancements we have done for Victoria. Apart from that, there are many bug fixes and some small features; for example, the virtual size of the image can now be set (actually, not by you: it is set automatically at the time of creation). This can be used by Nova and Cinder to avoid running heavy operations like qemu-img info to calculate the virtual size on their end. So this kind of feature has been added in Victoria as well. You can go through the Victoria release notes to find detailed information about what we have done in Victoria for Glance; these are just the basic highlights.

Now we will go to possible features and enhancements for Wallaby. This is what we are planning to do for Glance in Wallaby. Image encryption and decryption: users can upload encrypted images to Glance; you can find more information in the given spec. Glance cluster awareness: this is related to HA deployments, for example behind HAProxy, where you have multiple Glance API nodes running. The use of this is that when you have multiple Glance API nodes running and you are using the glance-direct import method to create the image, it is not guaranteed that all your Glance image import calls will go through one API node only. It is possible that your create-image call goes to node A, the staging call goes to node B, and the import call goes to node C. And as the data of your image is on node B and the import call is on node C, it will fail, as it will not find the data to import into the backend. To avoid this situation, we are coming up with Glance cluster awareness, where Glance will know exactly where your staged data is and will divert the import call to that particular node. So that is what we are going to work on. Then, moving cache management under the API. Currently, the cache is managed by admins with a different utility called glance-cache-manage. It is a totally separate client-based tool, which we are planning to move under the API. So we are going to introduce new API endpoints to handle cache-related operations, and those commands will be made available in python-glanceclient as well. Basically, the existing glance-cache-manage tool will be deprecated and removed, and the functionality will be available under the v2 API. Apart from that, we are planning to complete the community goals, to implement a role-based access control system, and bug fixes, if any. Apart from this, you can find the various topics we have discussed at the given etherpad.
That is the PTG etherpad, where you can find the discussion topics from the discussion, as well as the recording of the session. So kindly go through it and let us know if you have any questions. That's it from the current cycle point of view. Now, we definitely need your help. We need more contributors, particularly if the features people want are going to be implemented. At the moment, the Glance team is basically hardly four to five contributors who are trying to implement these new features, and it is hard for us: if any new feature comes in, it is very difficult for us to manage. So if you are interested, and if you are planning to add new features, then kindly contact us and we will help you with whatever you need from us. If you want to contribute, there are lots of opportunities depending on your interest. You can contribute by coding, fixing bugs, and reviewing code (reviewing is also one of the best parts of contribution), improving documentation, improving test coverage, etc. So if you are interested in contributing, or if you are interested in Glance, you can also join the Glance weekly meeting, which happens every Thursday around 1400 UTC in the #openstack-meeting IRC channel. And if you have any questions, you can talk to us on IRC using the #openstack-glance IRC channel, as well as communicate with us on the openstack-discuss mailing list. Yeah, that's it from this project update. Let me know if you have any questions; we will meet in the Q&A session after the presentation. Thank you. Have a good day.

All right, so that was the Glance update for the Victoria and Wallaby cycles. I don't see any questions in the chat, but I did want to give anyone an opportunity to voice their questions now if they have any. Otherwise, we'll move on to Manila. I also want to say we do have another project that came in just during that presentation, so we'll actually be doing a Nova preview at the very end as well. So we still have five more projects, but let's go ahead and transition to the Manila Victoria and Wallaby updates.

My name is Goutham Pacha Ravi and I'm the current PTL for the OpenStack Manila team. I'd like to give you a high-level overview of the project and an update about the things that we've accomplished in the recent Victoria release and our plans for Wallaby. So what is Manila? Manila is a service that seeks to provide OpenStack users the ability to provision and manage the life cycles of POSIX-compliant shared and distributed file systems. It's inherently multi-tenant and secure. It is capable of providing hard network and data path isolation guarantees with the help of tenant-dedicated share servers. Tenants, from the get-go, can determine who has access to a shared file system, and this access can be revoked at any time, in real time. Tenants can integrate their own authentication domains, so think Kerberos, Active Directory, or LDAP. Further, tenant resources are scalable and elastic, so they can grow and shrink shared file systems instantaneously and easily. Manila supports several NAS protocols like NFS, CephFS, CIFS, GlusterFS, HDFS and so on, and it has drivers for over 35 storage systems or solutions. It can make intelligent placement decisions to ensure that you're making optimum use of your shared storage. Manila also provides a flexible model to expose storage system service catalogs to end users in a discoverable and programmable way.
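To make the access-control point above concrete, here is a minimal Python sketch of granting a client access to a share via the Shared File Systems API. The action name, payload fields, and microversion header are written from memory of the API reference, and the endpoint, token, share ID, and CIDR are placeholders; verify against the Manila API reference for your deployment:

```python
import requests

# Placeholders: use the share v2 endpoint from your service catalog (it may
# already include your project ID), plus a valid token and share ID.
MANILA_ENDPOINT = "https://cloud.example.com:8786/v2"
TOKEN = "gAAAA..."
SHARE_ID = "1c2d3e4f-..."

resp = requests.post(
    f"{MANILA_ENDPOINT}/shares/{SHARE_ID}/action",
    headers={
        "X-Auth-Token": TOKEN,
        # Pin an API microversion that your cloud supports.
        "X-OpenStack-Manila-API-Version": "2.51",
    },
    # Grant read/write NFS access to a client subnet; access can later be
    # revoked with the corresponding deny action.
    json={"allow_access": {"access_level": "rw",
                           "access_type": "ip",
                           "access_to": "203.0.113.0/24"}},
)
resp.raise_for_status()
print(resp.json())
```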
So let's preview a few things that the project team accomplished in the Victoria cycle. We added support for share server migration. This is a two-phase design, and administrators can use this feature to facilitate cold and live migration of share servers; they can go within storage pools or even across storage pools and backends. This feature has been implemented in the Container and NetApp storage drivers, and more drivers are to follow in the next few releases. The share replication feature is now generally available. We added this feature as experimental in the Mitaka cycle, and over many cycles we've committed several improvements, many of which are by now well tested and well used. So we no longer consider these APIs experimental; you don't need to include the experimental header to have access to these APIs, and you can use them to plan your load balancing or disaster recovery strategies. We had several driver feature improvements: in the Container driver we added support for share migration, and we added support for adaptive QoS and share server transfer limits in the NetApp driver. The Dell EMC Unity driver now supports a new driver filter, and snapshots are fully supported in the CephFS driver. Several client enhancements were made as well. We continue to improve on our OSC integration: the OSC client now supports interacting with shares, snapshots, access rules, share types, quotas, and resize. We continue to play the catch-up game to complete parity with python-manilaclient. We also added support for user messages in the UI, so users don't need to leave the UI in order to triage asynchronous failures that can happen and that can be reattempted. And we made several improvements to testing and continuous integration throughout this cycle, and I think this reflects in the number of bug fixes that we committed during this cycle, because we added new test cases for several existing file system protocols and file system management modes (for example, the hard multi-tenancy mode), and for exclusively testing the admin interactions against various share backends. We also made many improvements in Manila CSI land. Although its release cycle is not coordinated with the rest of OpenStack (it follows the Kubernetes release cycle), what coincided with the Victoria cycle has been the introduction of new Helm charts, a new OpenShift operator, support for OpenStack availability zones, support for passing in any runtime configuration to make intelligent decisions while mounting shares on the Kubernetes node plugins, and share metadata that can be added to tag the provisioned resources, and so on. All of this can also be used against older versions of OpenStack; that's the way the driver has been implemented.

We just recently concluded our project teams gathering for the upcoming Wallaby cycle, so we have a fair idea of the things we want to accomplish in the current release cycle. First off is virtio-fs. This is a novel file system attachment protocol that's been developed within the Linux kernel, and it's aimed at virtual machines. Now that there is sufficient mainstream adoption in the kernel, it's time to integrate it into OpenStack, and so with this release we aim to provide file system attachments to Nova VMs; with the help of Nova APIs you could do what you can be doing with block devices today.
So, let's say you can execute an attach or a detach through the OpenStack client and expect Nova to interact with Manila to gather all of the attachment info, arbitrate the security and the access rules and so on, mount the file system via the host kernel, and make it available to the guest virtual machine. This should greatly enhance the user experience for Manila and Nova users of shared file systems, and it also provides a more secure way of accessing shared file system drivers in Manila that do not support the hard multi-tenancy guarantees, the network path multi-tenancy guarantees, that some of them do. We're also looking to enhance support for the CephFS drivers in the upcoming cycle. We will be adding support for enhanced snapshot cloning, something that Ceph is backporting to the Ceph Nautilus release upstream, and we're also adding support for the upcoming releases of Ceph, such as Ceph Octopus and Ceph Pacific. We will also be making several RBAC and security improvements: we'll be supporting the reader and admin roles, as well as refreshing policies to support the scope features that have been added to Keystone over the past several cycles. We're also planning to drop the use of rootwrap and provide a more secure and flexible way of privilege escalation via oslo.privsep in this cycle. We also plan to make security services mutable, so users can make any day-two changes to their security services, or even add or remove security services on existing share networks. And we're trying to make the metadata APIs consistent across all user-facing resources in Manila. Of course, we'll continue to keep the momentum on OpenStack Client and OpenStack SDK, and as I said before, we have several new contributors in the form of students that are looking to get involved with OpenStack or open source, and we're helping them help us land this important piece. We're also looking to continue making UI improvements, where we're going to be doing a version catch-up with the Manila API, and in the CSI drivers we're looking to add support for share resize and also try to re-architect the driver to be a multi-protocol driver, so that it makes things easier for day-two management, observability, and other concerns. We have a lot to accomplish in terms of features and bug fixes, so we'd greatly benefit from your help. Should you be interested in contributing, we'd love to have help in several areas: code, maintainership, and documentation. We enjoy bringing new contributors on board, and we're changing some of our processes to make it easier to become a core reviewer. So please get in touch with us if you're interested. Alongside, there are a couple of useful links here for unfinished work that's important to the project team; if you're willing to help, these are great places to start. That said, thank you so much for listening. I highly appreciate your contributions, help, and support in keeping us motivated and for making Manila better with each release.

Awesome, excuse me. So are there any questions around the Manila updates for the Victoria and Wallaby cycles? All right, everyone's quiet today. So with that, we're going to head to Masakari.

Hello everyone, this is Radosław Piliszek speaking for the Masakari project. I'm going to show you some basic information about the Masakari project, and also what has happened in the last release and what we are planning for the next release. For starters, what does Masakari do? Masakari delivers high availability for instances in an OpenStack cloud.
It is implemented in terms of notifications and recovery workflows. Notifications are delivered by monitors, which may in turn rely on external sources of truth like Pacemaker. Now for a little background about the Masakari project. It was founded during the Rocky release of OpenStack. It was previously developed by NTT and open-sourced by them. We had 25 contributors in the Victoria cycle, and we hope to have more during the Wallaby cycle. So why Masakari in the first place? Cloud workloads are not always cloud native, and resilience for such applications may need high availability solutions such as Masakari. This brings the OpenStack platform closer to solutions like oVirt or Proxmox, where you get the HA functionality almost out of the box. Similarly, if you don't control what is running in your cloud and you want to meet your SLAs, you might want to use Masakari to deliver high availability for your customers. Masakari is a simple project in terms of the OpenStack ecosystem. It has only two dependencies: Keystone for authentication and Nova for the virtual machine side. But it gets a little bit more convoluted when we look at the inside of Masakari. From a very high level we can see the core, clients, and monitors making up the Masakari project. In the core we can see the API, which is contacted by users and monitors equally. The API allows you to configure your segments and also to receive notifications, usually from the monitors, but users can also send notifications to it. And there's also the engine, which is the actual workhorse of Masakari; it acts upon the notifications, so it runs those recovery workflows I've been talking about. The other part being clients: it's typical, like in the other OpenStack projects; it's centered around the OpenStack client, the OpenStack SDK, and also the standalone interface, as well as a plugin for the dashboard, so Horizon. And last but not least, the monitors: the interesting part for the detection of the actual failures. There are four kinds of monitors at the moment. The first kind is the instance monitor. It's compatible with libvirt; it has been tested with QEMU and QEMU plus KVM. It can probably work with other libvirt backends, but that hasn't been tested yet. There's also the host monitor, which is integrated with Pacemaker and detects host failures. There's also the process monitor; it monitors the nova-compute process. And the last one is the introspective instance monitor, which is compatible only with libvirt with QEMU and, optionally, KVM, and it looks into the instances to check whether the health status is correct. We finished the Victoria cycle with only one feature, which is the separation of host- and instance-level protection tagging. Basically, before that feature, Masakari treated instances equally whether there was a host or an instance failure; you couldn't, as a user, differentiate between instances that are going to be protected against instance failure and those protected against host failure. Now it is possible. For the Wallaby release we've got a bunch of ideas about what to implement in Masakari. For a summary of those, please visit the link at the top of this slide, and I will now go through the three, I guess, most important ones from the summary. The first one is the evaluation of Pacemaker alternatives: Consul, etcd. Or perhaps "alternative" is not the best word in general, because Pacemaker, Consul, and etcd are very different things. But Masakari uses Pacemaker for the detection of host failures.
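To give a feel for the API and engine described above, here is a rough Python sketch of creating a failover segment, which is the grouping the recovery workflows operate on. The endpoint path, payload fields, and allowed values are written from memory of the Masakari (instance-ha) API, and the endpoint, token, and names are placeholders; please check the API reference before using this:

```python
import requests

# Placeholders: substitute your own Masakari endpoint and token.
MASAKARI_ENDPOINT = "https://cloud.example.com:15868/v1"
TOKEN = "gAAAA..."

# A failover segment groups the compute hosts that are recovered together.
resp = requests.post(
    f"{MASAKARI_ENDPOINT}/segments",
    headers={"X-Auth-Token": TOKEN},
    json={"segment": {
        "name": "az1-ha",
        "recovery_method": "auto",   # other methods exist, e.g. reserved_host
        "service_type": "COMPUTE",
    }},
)
resp.raise_for_status()
print(resp.json())

# Hosts are then added under the segment, and monitors (or users) report
# failures by POSTing to the notifications endpoint, which the engine turns
# into the recovery workflows mentioned above.
```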
Now, Pacemaker actually has its limitations. The most basic limitation is that if you are running Corosync, and if you don't want to run the Pacemaker remote functionality, then you are limited to only 16 nodes, and that's usually too few for a typical cloud. With Pacemaker it can be worked around by using remotes, but the problem with remotes is that they work differently from the basic Corosync stack, and they add additional complexity to the Pacemaker cluster. So Masakari is looking forward to evaluating alternatives in the form of Consul and etcd, which are also able to be used as host state tracking solutions. Another similar and related topic is moving fencing and host status verification closer to Masakari. For now, Masakari is kind of blind: it completely relies on Pacemaker to do its job correctly, and Masakari is unable to verify whether Pacemaker is configured correctly and whether it acted correctly in a particular case. And if that isn't true, if fencing didn't happen, there may be various issues in real operations. For example, if the original host is actually still running and connected to the storage array, you might get broken volumes. What we want to do is evaluate how Ironic could help Masakari here, because basically we need functionality related to controlling a bunch of bare metal hosts. And finally, an unrelated feature: restoring the original state, so the state before Masakari took its actions. For now, when Masakari runs its recovery workflows, then it's done, and it's not really possible for the user to revert what has happened. All the evacuations that were done were done, and that's it. But from time to time, when you restore the hardware to its original glory, you might want to restore the instances that were running there previously, without having to rely on external projects like, for example, Watcher to rebalance your cluster again. And Masakari needs your help. So join us on IRC on the #openstack-masakari channel on Freenode. Attend our every-two-weeks meeting on IRC (I try not to say bi-weekly, because bi-weekly might mean twice a week). Propose and discuss features and enhancements. Report and triage bugs on Launchpad. Review changes. Contribute a blueprint and a spec. Contribute code. Fix a bug or a feature. We welcome any kind of help. Make our patron / logo / hero / soon-burnt-out PTL happy. And thank you very much for your attention. If you have any questions, I'm here to answer them.

All right, are there any questions around the Masakari updates? All right, we'll continue. The next one is Neutron.

I'm Slawek and I've been the Neutron PTL for a few cycles: I started serving as PTL for Neutron in Ussuri, and I continued that through Victoria and now in Wallaby. I work for Red Hat, and you can catch me on IRC, mostly on the #openstack-neutron channel on Freenode, if you need anything related to Neutron. Today I wanted to show you some updates about what the Neutron team achieved in the Victoria cycle. Let's start with some general statistics based on Stackalytics. The Neutron team merged almost 600 patches to Neutron and the Neutron stadium projects during the Victoria cycle. We completed five blueprints and closed almost 170 bugs, and there were more than 100 individual contributors who sent at least one patch to the Neutron project in this last cycle. So as you can see, the project seems to be pretty healthy and stable; we have a bunch of contributors and we are doing a lot of work every release.
And now let's talk a bit about the new features which we introduced in Victoria. First of all, we added support for metadata over IPv6. Neutron is the first project, as far as I know, which provides the metadata service over an IPv6 address. We are using a link-local IPv6 address for that; it is the equivalent of the IPv4 address which every one of you probably knows, 169.254.169.254. Now you can get metadata from the metadata service in an IPv6-only network using this new link-local IP address. As it is a link-local address, you have to specify the interface name in the URL when making the request, but other than that, everything else should work in exactly the same way as in the IPv4 world, and you should be able to get metadata from the metadata service (there's a small sketch of such a request a bit further down). One thing which you should be aware of is that cloud-init, for example, as far as I know, doesn't support metadata over IPv6 yet. So cloud-init will not work with IPv6-only networks for now, but if you have some of your own scripts, or if you want to add your own (I think it's called a datasource) for IPv6 in cloud-init, you can do that, and you can use IPv6 to get metadata. The next thing which we introduced in this release is support for flat networks in DVR, the distributed routers. Previously you could only attach VLAN or tunnel-based networks, like VXLAN or GRE networks, to DVR routers. If you attached a flat network to a DVR router, different strange things could happen, like, for example, duplicate packets sent through the interfaces on this network and things like that. Some more info about that is available in the related bug report, which is linked on this slide. Other features which I want to highlight here are some OVN-related things. Basically, in the Ussuri cycle we merged the OVN backend, the OVN driver, into the Neutron core repository, so OVN became one of the in-tree Neutron drivers instead of being a separate stadium project. But we know that OVN has got some parity gaps, feature parity gaps, compared to ML2/OVS, and we are working hard to close those gaps every cycle. In Victoria we added support for floating IP port forwarding and for router availability zones in the OVN driver. So you can use port forwarding with the OVN backend now, and the OVN driver will also now read availability zone hints from your router and will schedule router ports according to the given availability zones. We also added a couple of new config options which may be pretty important to know from the operator's point of view. The first of those options is keepalived_use_no_track. This one is important if you are using keepalived older than 2.0, because such versions don't know about the no_track option in the config file and will complain if Neutron adds such an option to the keepalived config file. The default value of this option is true, because the newest distributions, like Ubuntu 20.04 or CentOS 8, already ship keepalived 2.x and that should work fine; but if you are using some older distro, or you have your own older keepalived, then it may be required to change this value to false. Another new config option is http_retries, which is a Neutron server config option, and it basically says how many times the Nova or Ironic client used by Neutron should retry API requests which are sent from Neutron to Nova, for example with the notification that a port is provisioned, or things like that.
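Here's that sketch of fetching metadata over IPv6 from inside a guest. It assumes the well-known link-local metadata address (the IPv6 counterpart of 169.254.169.254) and a hypothetical guest interface name, and it shells out to curl because the zone identifier has to be carried in the URL; double-check the address and path against the Neutron and Nova metadata docs for your release:

```python
import subprocess

# Hypothetical guest NIC name; use the interface attached to the IPv6-only
# Neutron network (check `ip link` inside the guest).
IFACE = "ens3"

# fe80::a9fe:a9fe is the link-local IPv6 equivalent of 169.254.169.254.
# Because the address is link-local, the interface (zone id) must be part of
# the URL; %25 is the URL-encoded "%" separator that curl expects.
URL = f"http://[fe80::a9fe:a9fe%25{IFACE}]/openstack/latest/meta_data.json"

result = subprocess.run(["curl", "-s", URL], capture_output=True, text=True, check=True)
print(result.stdout)
```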
Coming back to http_retries: by default, we retry three times in case of a network outage or some other networking issue during these requests sent to Nova, but you can of course change this to some other value. Other, smaller improvements which we added in the Victoria cycle: for example, all Neutron agent processes now have the same process name format as the Neutron server workers, so an agent will be visible under its Neutron agent name, like neutron-dhcp-agent or neutron-l3-agent, and then in parentheses there will be the full original process name, including the interpreter, like /usr/bin/python, and the rest of the process name which you already know. This usually is not really very important for users, but if you have any custom scripts or tools which, for example, rely on the output of the ps command, then you may need to update your tools according to this change. From other things, I can also mention that port DNS assignment now reflects the DNS domain defined on the network or sent by the user in the API; previously it was always based only on the DNS domain value specified in the Neutron config file, and now it can be specified via the API. And last but not least, we also changed the terminology used in our code base; for example, we changed words like master and slave to primary and backup. This is mostly an internal change in the Neutron code, not really very visible for users, but I still think it's important to mention that we made a change like that in the last cycle. And that's all the updates about the Neutron project in the Victoria cycle. Thank you and goodbye.

All right, I know we're running close on time, or we're about to run over on time, but we do still have two more short updates. So, are there any questions around Neutron for this cycle? Awesome, then we're going to go right into telemetry, and then we'll have a short update from Nova at the end.

Hello and welcome to the OpenStack telemetry project overview and update for the Wallaby cycle. My name is Matthias Runge. If you don't know me, I'm a principal software engineer at Red Hat, and I have been around OpenStack since the Grizzly cycle. I've recently been elected as PTL for telemetry, and I am the successor of Rong, who is currently serving as PTL for the projects Murano and Solum. So what does telemetry do? Telemetry is used for gathering metrics and events. It listens on the message bus and captures events like a VM was spawned, a network was created, or a volume has been deleted. It also uses the OpenStack APIs to pull data out of the services, to collect information about usage, for instance memory consumption, on a per-tenant basis. Combined with the service Aodh, and together with Heat, it provides an autoscaling service, for example to scale up or scale down resources. OpenStack telemetry was probably founded during the Grizzly cycle, and it started with only Ceilometer, which was later split into separate components during the Mitaka cycle. Actually, I couldn't find accurate data about this, so please correct me if I'm wrong here. Over the past cycle we had about 20 different contributors. While this number sounds pretty great and sane, I would love to encourage you all to contribute more, because telemetry is an important part of OpenStack, as we see from the latest survey numbers: telemetry is used in 45 percent of production environments, is being tested in 8 percent of deployments, and another 14 percent are considering using it.
Since there are a lot of different names around and under the telemetry umbrella, I have an architecture overview here where you can see that Ceilometer is collecting data. Events are sent to Panko, metrics go to Gnocchi, and Aodh is the alarming component: it repeatedly polls metrics from the Gnocchi API, compares the results with its rules, and will issue a call to Heat if an alarm is triggered. The most notable feature over the past cycle, or past few cycles, is the dynamic pollster system. The idea here is to create or update pollsters on the fly, which wasn't possible before; pollsters are used to pull metrics out via the OpenStack APIs. For the future, we have two major changes in planning. You may have seen discussions around Gnocchi: Gnocchi being supported or not, Gnocchi being scalable or not, etc., etc. Personally, I would like to solve this rather sooner than later. There has been discussion around getting Gnocchi back under the OpenStack umbrella; to be honest, I'm not sure what that would solve compared to Gnocchi being independent. At least the discussion about that was uncontroversial, without a clear outcome. Unfortunately, Heat also stopped testing against Ceilometer and Gnocchi because it caused them too many issues. For the future, I would like to encourage you to contribute to this project. With that, I'm turning to my last slide, and I would like to hear your feedback, hear your pain points, or see use cases. And finally, if you didn't participate in the survey yet, please consider doing so. Thank you.

Awesome. Unfortunately, this presenter wasn't able to join today, but feel free to reach out to them on IRC. I'm going to stop sharing so we can get that last presentation up. I think it's only two minutes for the Nova piece, so I'm going to stop, and then I think Helena is going to share her screen so we can get the Nova update going. Here we go. I don't think the sound is coming through.

Welcome, everyone. My name is Balazs Gibizer, and I am here to give you an update about what the Nova team delivered in the Victoria cycle. But first, a short recap of what Nova is. Nova is the main computing project in OpenStack; it implements creating virtual servers and managing the lifecycle of those servers. In the Victoria cycle, we had 75 individual contributors, we merged around 370 commits, and we implemented 9 blueprints. On the next slide, I will highlight the main changes of the cycle. First, we continued supporting the mixing of pinned (dedicated) and unpinned (shared) CPUs. In Ussuri, we added support for mixing them on the same hypervisor; now in Victoria, you can mix them even in the same Nova server. The next item is actually two features, both targeting Ceph-based Glance configurations. The nova-compute service is now improved to use a direct, and therefore fast, image cloning method from Ceph. Also, Nova now handles Glance multi-store configuration properly during image download. We also added support for virtual Trusted Platform Module devices: in Victoria, you can request such a device to be added to your server via a flavor extra spec or image metadata (a quick sketch of the flavor extra spec follows below). For a long time, Nova supported attaching and detaching Neutron ports to running servers, but attaching a port that is backed by an SR-IOV device was not supported; in Victoria, we now support such an attach. We continued extending the support for Cyborg accelerator devices: now you can rebuild and evacuate servers using accelerators, and we are planning to support even more lifecycle operations in the next release.
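Picking up the vTPM item for a second, here is a rough Python sketch of setting the relevant flavor extra specs through the compute API. The extra spec names (hw:tpm_version, hw:tpm_model) and values are my recollection of the Nova documentation, and the endpoint, token, and flavor ID are placeholders; check the Nova docs for your release (a key manager such as Barbican is also needed for this feature):

```python
import requests

# Placeholders: substitute your own Nova endpoint, token, and flavor ID.
NOVA_ENDPOINT = "https://cloud.example.com:8774/v2.1"
TOKEN = "gAAAA..."
FLAVOR_ID = "d1f2a3b4-..."

resp = requests.post(
    f"{NOVA_ENDPOINT}/flavors/{FLAVOR_ID}/os-extra_specs",
    headers={"X-Auth-Token": TOKEN},
    # Servers booted with this flavor should then get an emulated TPM device.
    json={"extra_specs": {"hw:tpm_version": "2.0", "hw:tpm_model": "tpm-crb"}},
)
resp.raise_for_status()
print(resp.json())
```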
In Victoria, deployers can add a provider configuration file to the nova-compute service to define custom resources. These resources will be reported by Nova to Placement, and they can be requested for your VM via a flavor extra spec; Nova will manage the resource allocation for such resources. Last, I have to mention some deprecations and code removals. The libvirt driver supports multiple hypervisor backends, but the Xen, UML, and LXC backends are unmaintained, so we decided to deprecate these backends; this also means the removal of these backends is expected in the coming cycles. The standalone XenAPI driver was also deprecated a couple of cycles ago, and in Victoria we deleted the code for that driver. That was all I wanted to highlight from the Nova perspective. If you have questions, then reach out to the Nova team on IRC. Thank you for joining.

So those are our OpenStack project updates for the Victoria and Wallaby cycles. If you're a PTL or a core contributor for a project that wasn't shown, there's still time, because all of these videos will be posted in the project navigator, so people who are trying to learn more about a project can see what the latest updates are, learn more about the project, and get involved. With that, are there any remaining questions or feedback before we close the community meeting today? Awesome. We'll be sharing the recording and the slides on the mailing list this afternoon. And like I mentioned this morning on the mailing list, I'm going to circle back with the OpenDev team on their Jitsi instance, in case there's any additional feedback on that. Again, my name is Allison Price, my email is allison at openstack.org, so please reach out to me. I definitely want to evolve this community meeting into what makes sense for the community, so any feedback is welcome. We just want to make sure it's what everyone wants to tune into here. So thanks for joining, and we'll see you out in the community. Bye everyone. Thank you.