Hi everyone, thanks for joining us today. Today we're going to talk about multi-arch support in OpenStack. I'm Rico from EasyStack, and we have Jeremy from Red Hat. If you find anything interesting, you can reach us on IRC or through email. Both Jeremy and I are members of the OpenStack Multi-Arch SIG. Before we start talking about multi-arch support, let's define what multi-arch actually means. Multi-arch is meant to cover all kinds of CPU architectures, including x86 and ARM64. Over the last ten years OpenStack has reached very good support for the x86 platform, and we found there is demand for other architectures as well, but the support in the community is not quite there yet. That's where the idea came from: form a Multi-Arch SIG to drive and encourage these efforts in the OpenStack community and make it, let's say, a better place. That means we plan to work on documentation, building tests, tracking issues, and building out the whole ecosystem for different CPU architectures. We formed this SIG, and we need more hands to help us keep driving things. What we find is that we are currently still short of hands, so we're driving things in slow but steady progress; it would be nice to have more people so we can move faster. It would also be nice if people could donate servers, so we can add those resources to the current community testing gate. We also hold bi-weekly meetings if you want to join us; they're on Tuesdays and should cover your time zone, and if not, find us and we can discuss it. We also have a SIG StoryBoard and documentation, where you'll find more information about the SIG and how to join us. As for what exactly we plan to do and support, there are a lot of things we plan to do.
The very first thing we found would be useful is a CI/CD environment in the OpenStack community that actually allows us to run tests, including unit tests and scenario tests, so we can guarantee the code we ship runs stably and performs well on different CPU architectures. We're also looking at packaging the services specifically for each CPU architecture, so you don't need to do nasty things like converting a lot of packages or trying to see whether they only partly work, which can hurt performance badly if you try. There's also documentation and bug tracking. If we want to support different CPU architectures on top of OpenStack, documentation is essential, because people need to know exactly what they need to configure, what they should be aware of, and what the concepts are. Recording the bugs we find along the way while building the testing or the packaging is also important, because it lets people on other CPU architectures know exactly what to expect when they replace the current CPU architecture with another. Tracking those bugs also lets developers know what the current issues are, so we can more easily get help by sharing the tracking, like stories in StoryBoard, and say we need more hands on these projects. Finally, we need to make sure the deployment tools are able to install on those specific CPU architectures easily. Those are the basics of how we think support for a CPU architecture should be targeted, and you can see that testing is essential to that goal if you want to say you support a different architecture.
The first architecture we want to talk about is ARM64, because that's where we started. Thanks to Linaro, who donated servers to the OpenStack Foundation, OpenStack got to use those servers to build upstream CI, to actually test how ARM performs, so we could start thinking about how to use those servers for unit tests and scenario tests. That was the very beginning of the whole plan: we saw there were resources for ARM64 now, and there was already some work in progress in OpenStack pushing ARM64 support, so we said, hey, if we have this, let's think about what more testing we can do. Thanks to Ian and the infra team, we have a separate pipeline for this called check-arm64, so if you want to run a job on ARM64 you can push it into that pipeline. Just be gentle, because for this pipeline Linaro currently provides only a maximum of about 40 servers. We still need more resources for our CI/CD environment, so if your company is interested in donating servers to the OpenStack Foundation, please do; we will try to make sure those servers are well used, and we could even have more different kinds of servers to do comparisons and cover different testing reports, so everybody will be happy. Right now, with this pipeline and the Linaro servers, we can build CentOS, Debian, and Ubuntu images. So basically we now have a CI/CD environment ready, and we started thinking about what testing we should do; one of the first things we think we need is unit tests.
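As a rough illustration of how a project would use that separate pipeline, here is a sketch of a `.zuul.yaml` fragment. The job name and node label below are placeholders I've made up for the example, not real definitions; check the infra-hosted job and label definitions before copying this:

```yaml
# Illustrative .zuul.yaml fragment -- job and label names are placeholders.
- job:
    name: my-project-unit-tests-arm64
    parent: tox
    # Run on an ARM64 node label backed by the Linaro-donated cloud.
    nodeset:
      nodes:
        - name: primary
          label: ubuntu-bionic-arm64

- project:
    # Jobs listed under check-arm64 run in the separate ARM64 pipeline,
    # so the limited ARM64 capacity never blocks the main check pipeline.
    check-arm64:
      jobs:
        - my-project-unit-tests-arm64
```

Keeping ARM64 jobs in their own pipeline is what makes "be gentle" practical: a backlog on the roughly 40 Linaro nodes only delays ARM64 results, not everyone's regular check results.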
We defined a unit-test job in a project template which runs under check-arm64: a job that uses tox to run Python 3.8 on the ARM64 architecture, on Ubuntu Bionic, and right now that job is still non-voting. We have tried it out by sending patches to some of the core services to run the job, but right now we're not trying to push anything before we can have a proper discussion on exactly what we would like to ask each team to run for unit tests, because we need to make sure the current testing environment is stable and that we can run within the capacity we have. The project template is there: you can add it to run the Python 3 job on ARM64 in your project, but we are still discussing how best to use it. If you add it to your project, it will run the Python 3 job for you, which is currently Python 3.8, in the separate pipeline. We will do more, but we just want to make sure everything is done correctly and we're not overusing those resources. Furthermore, we are planning scenario tests, which run on DevStack. We have the DevStack support patches up, thanks to Kevin, who has been working on that a lot, but the status is still a low success rate. We are still debugging, because there may be performance issues, and some of the tests were not designed for ARM64. We are also facing a failure in the Nova reboot test, but we're still trying to figure out what exactly causes it, and there are timeout and performance issues as well.
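Consuming the template is a one-line addition to a project's Zuul config. The template name below is a guess following the naming pattern of the release-specific ARM64 templates, so verify the exact name in openstack-zuul-jobs before using it:

```yaml
# Illustrative .zuul.yaml fragment -- confirm the template name against
# the current openstack-zuul-jobs definitions.
- project:
    templates:
      # Adds the non-voting ARM64 Python 3 unit-test job (py38 today)
      # to this project's check-arm64 pipeline.
      - openstack-python3-victoria-jobs-arm64
```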
Once we have the DevStack support ready, we should be able to use that job to run project-specific Tempest jobs, functional tests, and scenario tests, to cover the other test scenarios. Right now, most of the testing we want to do with Tempest under the DevStack testing setup is passing; mostly passing, with just some failures we need to figure out, because if we want to deliver this job as a testing template, we need to make sure it is very stable and easy to use. That is what we're still fighting for. We're also thinking about wheel support. Right now we have wheel support to cache and publish Python libraries, and building ARM64 wheel packages as well seems like a very straightforward next step. I don't think we have a proper way to do it yet, but thanks to Ian, who keeps pushing this forward, I think we can look forward to proper wheel packaging for ARM64 environments, added for the services we deliver. With proper packages built for a specific architecture, ARM64 in this case, we can make those packages and Python libraries more stable to use, probably improve performance, and resolve a lot of the issues we're facing. That is also in progress; we probably need to start with the dependencies and then go on to the services we have in OpenStack. On the packaging topic, there is also container support, to auto-build container images. There's a patch for it; it's abandoned, but the discussion should continue, so feel free to join. We have containers using multi-arch to build images, but it's still a work in progress and not merged.
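Once the DevStack support lands, a project could wire up an ARM64 scenario job the same way as the unit-test job. A minimal sketch, assuming a `devstack-tempest` parent job and an ARM64 node label (the job name, label, and timeout value here are illustrative assumptions):

```yaml
# Illustrative .zuul.yaml fragment -- parent job and node label names
# are assumptions; check the devstack repo for the real definitions.
- job:
    name: my-project-tempest-arm64
    parent: devstack-tempest
    # Scenario runs are slow on the current ARM64 nodes, so allow a
    # generous timeout while the performance issues are investigated.
    timeout: 10800
    nodeset:
      nodes:
        - name: controller
          label: ubuntu-bionic-arm64

- project:
    check-arm64:
      jobs:
        - my-project-tempest-arm64
```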
On the Kubernetes side, OpenStack already supports building container images for multiple architectures. On documentation, we have general documentation support, thanks to Jeremy, but we don't yet have full coverage of how everything works. We have some bugs currently being tracked; Kevin filed a lot of them, and Ian as well, so we have more things to keep pushing on. As I mentioned, configuration defaults differ between architectures, and we need to find a way to make that easier to handle, along with the test performance issues. We have other bugs being tracked along the way, so check our StoryBoard, where the stories will lead you to those bugs. So let me hand over to Jeremy for the next architecture.

Thanks, Rico. Next we'll be talking about the status of support for PPC64LE in OpenStack. First, a bit of history: for a long time there has been a team that works on support for PowerVM in Nova, but now we have POWER9 with better support for KVM, and with growing developer interest and OpenPOWER, it makes sense to tackle support for all of OpenStack on Power, not just Nova. That said, these efforts are still at a fairly early stage. The biggest limitation at the moment is that there's no Power hardware in nodepool, so that pretty severely limits what we can do for testing in upstream CI for Power. Given all of that, there has been some notable progress on packaging and other artifacts for Power, namely with RDO and CentOS artifacts. For packages, we have packages for CentOS 7 and 8 for the relevant versions of OpenStack for Power, and actually for ARM as well. For containers, likewise, containers are available for Power, specifically Kolla containers, which TripleO then modifies; TripleO also now has its new type of container, separate from Kolla, which will also support Power. As for other distros and projects, there is no support for Power yet.
We recognize that not every distro will have a user base that uses Power, but if there is interest, we do hope to address it. On the topic of RDO and CentOS, various work was done to add support for Power in TripleO. With most deployment tools, the only difference between architectures is where the content comes from: different package repositories and different container registries. But in the TripleO case there was a desire to add support for deploying a heterogeneous cloud, so that meant pulling content from multiple sources and distinguishing nodes by architecture. Additionally, because TripleO also handles hardware provisioning, we found that the IPMI credentials of the Power hardware we were using for development did not have a username. In the IPMI spec the username is optional, but TripleO, because no one had ever tried it with Power hardware before, had a mandatory username field, so this had to be resolved. Overall, TripleO served as a good case study for what would be needed for the heterogeneous-cloud use case and other multi-architecture considerations, and as work continues in other deployment tools to support multiple architectures, it's good to keep this in mind. I know that Kolla Ansible actually does support ARM already, although I believe that is just the homogeneous use case. Now I'll talk about architectures other than Power and ARM, because the SIG is actually not limited to just those two. For s390x, the IBM mainframe, there has been some work in Kolla, mostly focused on dependency issues and minor packaging issues like that. There's also a team that works on z/VM support in Nova, but in general, to my knowledge, s390x is not really a main focus for anyone in OpenStack at this time. For MIPS64, I do know of one vendor who is working on adding support for that architecture in Nova, specifically in the Nova libvirt driver, so if anyone who is employed by that vendor is watching this presentation, please
reach out to us; we'd love to hear about what you're doing and keep track of it as best we can. On that note, I'll talk about how people can get involved with the Multi-Arch Special Interest Group. Earlier today there was a presentation about the status of ARM for OpenStack that went into quite a bit more detail than we covered today, and I'm sure that recording will be available permanently online. Tomorrow there is a forum session hosted by the Multi-Arch SIG, a birds-of-a-feather session, just to see who is interested in multiple architectures, what sort of priorities the special interest group should set, and for general discussion of anything related to multiple architectures in OpenStack. Then in the following week we have the PTG, with two sessions suitable for various time zones, where we hope to actually hack on some things and, depending on the priorities we set during the forum session, explore various possibilities, connect multiple parties, make as much progress as we can for multiple architectures, and set the stage for future work. I do hope you contribute during those PTG and forum sessions, but if not, SIG members are always available at any time: we watch the mailing list, we have meetings every two weeks suitable for various time zones, and we accept reports of any kind on StoryBoard, be it bugs, requests for enhancements, or anything really. So with that, thank you everyone for watching this pre-recorded content, and thank you, Rico, for your pre-recorded part. I think now we'll turn it over to live Q&A. Thank you.