Hi everyone, thanks for joining us, and welcome to OpenInfra Live, our new weekly hour-long interactive content series sharing production case studies, open source demos, industry conversations, and the latest updates from the global open infrastructure community. This is our third episode, and we have some great content coming up in the next few weeks, so we hope you can join us every Thursday at 14:00 UTC, streaming on YouTube. My name is Kendall Waters Perez, and I will be your emcee for the day, so let's go ahead and get started. Last week we held our ninth Project Teams Gathering. For those who may not be familiar, the Project Teams Gathering, or PTG, is an event organized by the Open Infrastructure Foundation. It provides meeting facilities allowing the various technical community groups working on open infrastructure projects to meet in person, or rather virtually in these times, to exchange ideas and get work done in a productive setting. It lets those various groups discuss their priorities for the upcoming six months, assign work items, iterate quickly on solutions for complex problems, and make fast progress on critical issues. Last week we had 47 project teams meet, consisting of over 500 contributors from over 45 different countries. Today we're going to hear from a few of those teams about what they discussed and worked on during their PTG week. With that, I will hand it over to Fabiano from Kata Containers. Sorry for the bad pronunciation of your name. That's cool. Hello everyone, I am Fabiano Fidêncio. I've been working on Kata Containers for a little bit more than one year now, I have been a member of the Architecture Committee of the project for six months, and this was the first PTG that I've actually been helping to organize as part of Kata Containers. We can go to the next slide. So before talking about this one, we have to talk about the previous PTG, because the previous one was not exactly what we wanted it to be.
We didn't have that many people attending, we didn't have much planning around it, and most likely that happened because of what I think was bad timing. So let me go through this a little bit. The whole thing was happening just after the Kata Containers election, which changed the people who were pretty much steering the project, and, the most important part, it was happening in the middle of the 2.0.0 release, which was one of the most important milestones we had last year. With those two pressing things going on, and a lot of changes happening, we didn't have a lot of time to actually plan the event. What we ended up doing was pretty much having three sessions in different time zones, so we could try to get people from the US and China together, people from Europe and China together, and then people from Europe and the US together. For the first slot, which was US and Asia, we didn't have much attendance, and the afternoon was even worse, so that was something that we were not exactly proud of. We didn't manage to make much progress there, so for this year, for this edition, we were actually considering: shall we do this or shall we not? Then we can go to the next slide. We had a discussion within the team when the email from Kendall came about, saying, hey, we have this virtual PTG, and shall we do something for this edition or not, considering what happened in the previous one? We discussed it within the team, we discussed it as part of the Architecture Committee meeting, and we decided to do this in a totally different format than what we had before. Instead of having three different slots, we said, let's just go with one slot. It's going to be 2 p.m. UTC. I know this is a little bit late for focusing on Asia, but not too late; it's a little bit early for focusing on the Americas, but not too early; and well, in Europe, we are at 2 p.m. here.
So we decided to make this slot three hours, and instead of having presentations as we had before, we decided to just open the floor for the community. We sent some emails, some tweets were sent, we posted some messages in the channels, just asking the community: hey, come here, talk to us. What do you want to know about the project? What do you want to learn about the project? Do you have something that you would like to share with us about the project, some hacks that you are doing that we are not aware of? So I would say this was also not much planning, but at least we had one sort of plan, which was way, way simpler than what we had in the past. And the timing in this situation also helped a lot, because we were not in the middle of a release. Actually, one of the things that we planned was what steps we're going to take for the next release. We were also not in the middle of a transition period between old and new members of the Architecture Committee, for two reasons: the elections happened a little bit before, but the members who got the seats were part of the community already; they were there, they were present already, so not many changes were happening. And this was pretty much the background of how it happened. Next slide, please. So what we did was what we call office hours. Most of the developers just show up there and we can discuss with the community, and we get to know what the community's issues are. During this block of three hours that we were there, the least number of people present at any point was six, and the maximum was around 15 or 16. And we had the participation of people working for companies like Ant Financial, Apple, IBM, Intel, and OVHcloud, which is a new company that joined and started collaborating with Kata Containers only a few weeks before, not even that. And also Red Hat.
I work for Red Hat, by the way, so we were there. And during these three hours, we ended up discussing around 10 different topics, ranging from documentation, to improve the usability of the project — how we could do simple things, right, really simple ways to improve our documentation from the user's point of view, like what they are looking for when they join or when they get to the project's web page, and what should be done in order to improve it — to really technical topics, like how to do GPU passthrough, how to play with node feature discovery, and those kinds of things that help the project be deployed in a cluster, in an environment that is not exactly the default one, right? Because you would be needing GPU passthrough and that kind of stuff. We ended up with around 10 issues open, because for every topic that we discussed, we were asking people: can you please open an issue for this, so we can summarize there the discussion we had here? And I can tell you, it was great. It was the best virtual event that I was part of in the last year. So I can tell you, this format worked quite well for us. Most likely we are going to adopt the same format for the next virtual PTG. And Sunny Cai, who is part of the OpenInfra Foundation, is working on a blog post together with information that I forwarded to her, so you can expect that we are going to publish a blog post about this. And yeah, I guess that's pretty much it from my side. Office hours, that was a plus, and we're going to try to follow the same format for the next one. And that's all from me. Great. Thank you so much, Fabiano. So next up we have Rico Lin to talk about the Multi-Arch SIG. Hi, I'm Rico Lin, and I'm also serving as the Multi-Arch SIG chair. So I guess I'm here to share some of our exciting news from the PTG and the things we discussed there. Could we go to the next slide, please? Next.
So I think we had a very good PTG this previous week; we had some very productive discussions there. But before we get into everything we discussed at the PTG, I must give some thanks to the Open Source Lab folks from Oregon State University, and also thanks to the NREL. These two organizations donate CI resources to OpenStack — I mean, to the Open Infrastructure Foundation — for CPU architectures other than x86, and that's why we can push things forward here in the Multi-Arch SIG. So I thank those organizations that helped by donating. The very first thing we discussed is that we now have a very successful job running in an Arm64 environment to run Tempest tests. That's kind of the highlight we had going into our PTG: to discuss what we can do next, how we can tune the performance, and how we can keep pushing things forward. And I guess that's also something that folks from OpenStack got excited about, because we finally have something that is very useful, and the success of such a job proves that OpenStack can support Arm64 for the communities. We can start to think about more complex tests, so I think that's very good progress we've made. The thing is that the performance might relate to other factors, so that's something we've been discussing in the PTG: can we switch to other scenarios that better reflect user needs, or can we tune the performance by switching the back end? So you can see there were a lot of various discussions. We also keep thinking about how we can have better support in terms of building images, and in terms of having more projects running tests. And we also discussed how we can improve the periodic tests in the current CI environment to include more multi-arch jobs there. But for now, in reality, we currently only have Arm64 environments to run on.
So that would be something we as a community need to push forward: the very first step is to have better Arm64 support. But along the way we also cover other architectures as well, since in reality most of the non-x86 architectures usually share exactly the same issues inside the services. So the very big step that we took is to have better Arm64 support. In our PTG we also discussed the SIG report and how we can take the next step. If you click the link inside the slides, you will go to the Multi-Arch SIG report, which we generated in the previous weeks. We will try to do more in the following months, but we'll see how many resources we get and how many volunteers we have. In the PTG we went through the report and tried to figure out which parts of it would get the most use, and we also fixed some of the titles that I generated. But the thing is that the report is very exciting; we are very thankful for the OpenInfra Foundation's help, and we definitely hope to do another one next month. We also talked about SIG videos; that is something we have as a concept, to see if we can use videos to start sharing more content and user stories. And we also talked about checking the libvirt support with the CPU features. For those who don't know, libvirt is something OpenStack depends on for all those operations that happen inside the services, so it's very essential for us as a Multi-Arch SIG to finally figure out how we can have that support.
And we need to watch what is new there, because we kind of face some of these issues in the wild, which means we have to keep tracking what is going to change. That's the only way we can make sure that multi-arch, across the different kinds of environments, will always be supported as everything keeps moving. And along the way, we also talked about the jobs we ran in the past, and we also get support from other projects to make sure the jobs keep up with whatever the majority of the activity is doing. So I guess that's all I have for my Multi-Arch SIG report. But if you are interested, please join us in our multi-arch meeting; we also have an IRC channel, so you can find us there. Thanks. Thank you, Rico. Next we are going to talk with Greg Waines, and he's going to tell us more about StarlingX and what they talked about. It's Greg Waines here. I'm a new TSC member for StarlingX, and I've actually been with the StarlingX project since its inception. So yeah, I have three slides; maybe you can go to the first slide. I want to take the opportunity, just because StarlingX still struggles with a lot of people not understanding exactly what it is, to do one slide as a brief introduction. So StarlingX provides basically a hybrid cloud platform solution: we provide both a Kubernetes cloud as well as an OpenStack cloud in our deployment. We're also a full, kind of soup-to-nuts solution — fully managed, fully integrated, basically ready to deploy. There's no integration required over and above what StarlingX provides. And the StarlingX software itself is really all the infrastructure management around managing the cloud servers and managing all the cloud software that's integrated into the solution.
And besides the core Kubernetes and OpenStack services, we're also providing a kind of ecosystem of packaged, pre-integrated open source projects that provide integrated services that hosted applications can actually leverage. This includes things like the NGINX ingress controller, Dex, Metrics Server, and others — there's a whole bunch of them that we're continuing to expand and grow. So there's a rich environment for the hosted apps after they deploy themselves. And then a big key feature of StarlingX is that we are optimized for the edge, and there are two key points related to that. First, we scale both large and small: we can deploy on a single-socket Xeon D single server at the edge, as well as the traditional multi-node architecture at the central cloud. The other key edge-enabling capability is that we have a distributed multi-cloud solution that basically manages multiple edge clouds, with a lot of orchestration services to make managing all those edge clouds not operationally prohibitive; we have a lot of orchestration services that automate things across all the edge clouds. Okay, maybe go to the next slide and I'll just chat about the things we chatted about.
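To make the distributed multi-cloud idea concrete, here is a small, hedged sketch — not StarlingX code; the subcloud names and the `apply_update` operation are invented stand-ins — of a central cloud fanning one orchestrated action out across many edge subclouds in parallel, which is why one central site can reasonably manage hundreds of edges:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subcloud inventory; a real system would query the
# central cloud's inventory service instead of hard-coding names.
SUBCLOUDS = [f"edge-{n:03d}" for n in range(200)]

def apply_update(subcloud: str) -> tuple[str, str]:
    # Stand-in for a real orchestration call, e.g. pushing a config
    # or software update to one edge site over its management link.
    return subcloud, "applied"

# The central cloud fans the operation out concurrently, so operating
# 200 subclouds costs roughly the same wall-clock time as operating one.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = dict(pool.map(apply_update, SUBCLOUDS))

assert all(status == "applied" for status in results.values())
```

The point of the sketch is only the fan-out shape: per-subcloud state tracking, retries, and rollout windows are what the real orchestration services add on top.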
So we had a good three days of meetings at the PTG last week. First, we talked about some retrospective stuff: we're about to ship our fifth release in May, so we're at the end of that, and there was a bit of a retrospective on how well some of the features did. Just for information purposes, I listed a few of them here. One positive note from the PTG, compared to the previous year, is that we actually had quite a large feature content for our release 5 — this is only a subset of the features that we added. We added some really nice ones. Edge Worker is a really nice feature that basically enables the integration of any Kubernetes-capable device: it's not running the StarlingX software itself, but as long as the device runs Kubernetes — it can be any processor architecture, any OS, as small a form factor as you want — with the Edge Worker stuff that was done, you can basically integrate it into the StarlingX Kubernetes cluster. And that enables tons of stuff, in the sense that I have a single point of management for my containerized apps across full StarlingX worker nodes versus these Edge Worker nodes, and those Edge Worker nodes can now leverage every service in the StarlingX cluster: they can leverage the Ceph-backed PVCs for persisting data, they can leverage the Metrics Server to auto-scale their apps, and things like that. So that was a really cool feature. Some of the other ones: we added Rook — we currently have a Ceph cluster that's kind of a host-based managed one, and Rook provides a much nicer, much more cloud-native solution for StarlingX. We continue to evolve our ecosystem with Vault, which is a secrets manager, and Portieris, which is a
container image signature check and validation admission controller, and Metrics Server, which is a kind of Kubernetes solution for doing auto-scaling. We added SNMPv3 — we previously supported v2; we support v3 now, which provides better authentication and privacy. We continue to evolve — well, actually, we added certificate management simplifications. We had integrated cert-manager into our product in the previous release so that our hosted apps could leverage it, and now we're starting to take advantage of it in our platform, getting our platform to leverage it. cert-manager is all about enabling the ability to automate the renewal of certificates, because that's an operational headache, and what ends up happening is people put long expiries on certificates, which is not a secure solution; so cert-manager automation is a big thing. A lot of our wireless-solution users were requiring FPGA acceleration for 5G applications, so we added the management of the FPGA loads into our infrastructure management and orchestration across the distributed clouds, and we also added a higher-level abstraction of the Kubernetes upgrade, on top of the stuff that's upstream. And some of the things we're looking to improve on: we definitely had long discussions on supporting new users of StarlingX in the community. We kicked around a number of ideas — mailing lists, IRC, workshops, office hours — definitely a number of items that we want to take action on, because we want to continue to support new users. And in addition to new users, we also want to identify users, both existing and new, who are potential contributors, and work with them to support them. StarlingX is a big project, and it needs support from existing contributors in order to bring in new contributors, so there was a lot of talk about how we get commitment from existing contributors to
support the new ones. Okay, if you go to the next slide — this is just my last slide — we also spent a day talking about what's next in release 6. Everybody knows CentOS is going away. Like I mentioned, StarlingX is a soup-to-nuts solution, so it does include a kernel, and we need to do some work to get off CentOS; we're moving to a Debian solution. There were a lot of talks about this, because it's a big change for StarlingX: we have a build environment that's based on the OS, as well as all the packaging, as well as the practicality that a lot of the StarlingX infrastructure management has got to be tweaked in order to run on the new OS. So it'll be a lot of work for us — probably just prep work in R6, and then we'll move in R7. We're going to do standard up-versioning on things like Kubernetes (I should have had OpenStack in there as well), containerd, etcd, all that stuff. We're going to do some scaling work on the distributed cloud solution: it currently supports 200 edge subclouds, but we're getting pushed by users that are adopting it to go bigger, to a thousand edge clouds. They want the simplicity of one central cloud to manage as many edge subclouds as possible, because of our orchestration services that run at the central cloud. We're going to continue to do certificate management work, getting all our certificates managed in an automated fashion through cert-manager, and some logging work we're doing, as well as some fault management integration into our containerized OpenStack solution. So we actually had a good turnout for the PTG, and I think we're psyched up for the next release. Thank you, Greg; it seems like you were very busy last week. All right, moving on to Ildikó Váncsa to talk about the Edge Computing Group. Thank you, Kendall. So, similarly to Greg, I also wanted to take the approach of introducing
the Edge Computing Group first, in case there are people in the audience who do not know what this group is and what we do. This is a top-level working group that is supported by the OpenInfra Foundation. We look at edge computing as an evolution path that extends data centers out to the edge, and with that it also brings computing power and cloud closer to the end users, let them be humans or machines. So in that sense we don't only focus on the edges themselves, but we look at edge infrastructure as massively distributed systems. What we do on a more daily basis is collect use cases so that we can analyze them to see the requirements and demands towards these edge infrastructures. Based on the information that we collect, we then go and build reference architecture models and see how we can turn them into implementations, and with that we do some testing and evaluation work. We share all our findings with adjacent communities and anyone who's interested in that information, in the form of white papers and other resources that you can find on our wiki page. The links are all on the slides, which you will have available after the show, so you can read the two white papers that the working group has already published, and you can also see all the other resources that we have on the wiki page, including how to get in contact with us, like the weekly meetings or the mailing lists. And now we can jump to the next slide for the short PTG summary that I prepared for you. So, as I mentioned, what we do first is collect use cases. We have, I believe, around 30 or 40 use cases identified already, but they are focused a bit on the telecommunications industry, for instance, because 5G and edge computing go hand in hand, so obviously that was our first focus area as well; when it comes to edge computing, connectivity is really important, because that's what brings you the edge functionality. And with that, now that we have more and more production
deployments in that area, we are looking closely into new use cases in different industry segments. A good example is industrial: with the digital transformation, factory floors are also turning into IT systems, and with that they are becoming a new and exciting edge computing use case that we are looking into — so you will probably hear things like digital twins more and more these days. Another area that we are looking into is broadcasting, and a bit in connection to that, events and stadiums, and how those are also turning into an edge computing use case, providing network access and also broadcasting live streams of the events that are happening either in a stadium or somewhere remote, like a festival, where you don't really have infrastructure built out already — so you have to think about something that you can move there to provide the connectivity, and provide features and functionality to the people attending those more remote events. Another area that has been in discussion for a while, and with COVID happening it is also in focus, is healthcare, and how you can bring doctors to people without them necessarily going into a hospital or a doctor's office. So these are the kinds of new use cases that we are looking into, and with those use cases we also see how the edge computing space is evolving: talking about micro clouds, what a micro cloud looks like, what you need in terms of hardware resources, and also what kinds of infrastructure software services are running on these. After the use case discussion, we went back to the reference architectures topic. To give you a bit of context, I put two diagrams on this slide, so you can take a closer look after the show when you can zoom in. We are looking into these massively distributed systems from the connectivity perspective, and what I mean by that is: what happens if you lose the connection between the central cloud and the edge site? Because in that
case, based on the use case, you may want your edge site to be fully autonomous and have all functionality available there, or maybe it is perfectly enough for you if your workloads keep running and the edge site can then synchronize with the central cloud once the connectivity is back on. So we were looking into the architectural big picture with this mindset and started to build out these two main architectural models: the centralized control plane, as we call it, and the distributed control plane. Greg was talking about StarlingX a bit earlier, and StarlingX is applying the distributed control plane architecture model, for instance. That means you get control services on both the central cloud and the edge sites, which means the edge site has a smaller footprint for workloads, but in return you have all the functionality available even if there's temporarily no connection to the central cloud. And you can see on the other diagram that we have mainly compute and storage services running there, to ensure that the workloads have everything they need and can also store data if needed. So we have some implementation diagrams as well, and work ongoing with StarlingX as well as OpenStack. Following the StarlingX angle, we started to take a closer look at container technologies and container orchestration systems, really trying to understand how you can build these reference architectures using those. So we are looking into both containerized infrastructure services as well as providing everything that containerized workloads need. We also did a small survey — I believe that was during the second session of the event — where we were asking people what option they are choosing when implementing their edge infrastructures: running infrastructure services in containers, or running containers only on top, or how they mix these, to have a better
understanding that we can then turn into testing and evaluation work. And back to storage, which I mentioned earlier: we have a recurring storage discussion during the PTG edge sessions, because storage is still a bit of a conundrum in terms of how much space and other resources you have available on the edge site, how much data you want to move back to the central cloud for further analysis, or move out to somewhere else to store persistently. We were looking into some connected use cases, like how CDNs, content delivery networks, are evolving, as well as video streaming, and how you process a video stream and maybe alter it before it goes live, and really looking into how you can provide storage resources for these. We were also looking into the TripleO distributed compute node architecture during these discussions, and how it is using and providing access to Ceph on the edge sites; that solution follows the centralized control plane model, to connect back to that as well. And last but not least, we also had cross-community collaboration, because we're working a lot with adjacent communities. You can see that we met with Akraino, ETSI MEC, and GSMA, as well as StarlingX. We were really trying to use the opportunity of having a little longer time than just the weekly meetings, those one-hour-long sessions, to learn what these adjacent communities are doing, and also to look into further collaboration steps, like how we can apply the reference architecture models to Akraino blueprints, for instance, or how the ETSI MEC architecture is consuming infrastructure interfaces that the services we are looking into provide. You can find the notes of the PTG on the Etherpad that I linked on the slide, and we will also put the links to the recordings on the wiki page on the previous slide, where you can listen back to all the sessions that we had. And with that, back to you, Kendall. Thanks, Ildikó.
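The autonomy model described in this segment — workloads keep running through a connectivity loss, and the edge site re-synchronizes with the central cloud once the link returns — can be sketched as a toy store-and-forward loop. This is an illustrative sketch only, not code from StarlingX or any project discussed here; the class and method names are invented:

```python
import queue

class EdgeSite:
    """Toy model of an autonomous edge site: workloads keep producing
    results locally while the link to the central cloud is down, and
    the backlog is synchronized once connectivity returns."""

    def __init__(self):
        self.outbox = queue.Queue()   # data produced while disconnected
        self.connected = False

    def record(self, sample):
        # Workloads keep running and recording regardless of connectivity.
        self.outbox.put(sample)

    def sync(self, central_store: list) -> int:
        # Called periodically: drain the local backlog if the link is up.
        if not self.connected:
            return 0
        sent = 0
        while not self.outbox.empty():
            central_store.append(self.outbox.get())
            sent += 1
        return sent

central = []
site = EdgeSite()
for reading in (21.5, 21.7, 22.0):   # produced during an outage
    site.record(reading)
assert site.sync(central) == 0       # link still down: nothing is sent
site.connected = True
assert site.sync(central) == 3       # link restored: backlog is drained
assert central == [21.5, 21.7, 22.0]
```

The contrast with the distributed control plane model is where this logic lives: here only data is deferred, whereas with control services at the edge, management operations also remain available during the outage.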
Next we have Julia Kreger to talk about Ironic. Thank you. So, I guess, as always, one has to wonder — and for context, Ironic is a project that started from within OpenStack, I guess about seven and a half or eight years ago now, and its purpose is to manage bare metal systems at scale, really facilitate the orchestration, and ultimately, I guess, make people's lives better. This is one of those projects that has lots of details and lots of moving parts, and it also means that a lot of the problems we have are very hard; it means that a lot of the time we don't actually reach consensus, but we find the same common words, which is the real power of the PTG. In this past PTG we intentionally ran sessions for both the US-EU time zone and the US-APAC time zone, really trying to build slightly different groups and collect their feedback into one place, and that was really powerful in that we could get validation from one group on what the other group was saying, or maybe some disagreement, and ultimately it really helps to build a better community to spread that out. Among the topics we discussed was where we see ourselves now and where we see ourselves in the next five years. We had some discussions about project leadership, because personally, I've been doing it for three years and I need to hand it off. A lot of this came to the point where we were starting to realize we need to do a bit more outreach and have better communication with our audience, in terms of those that follow us or those that use our software through other projects. One thing a lot of people don't realize is that Ironic is buried inside other projects and you don't see it; it's kind of amusing, because it's almost like the project you never actually hear of until it breaks, which is awesome and scary at the same time. One of the major topics we also hit during the PTG was performance issues. It's a huge pain point for operators running at massive scale, and historically
it's been something that — my apologies, the corgi is expressing displeasure; I feel like this is the first time I've ever been on a live stream that the corgi has disagreed with, so I'll try and wrap this up very quickly — but performance was a major topic, and we dug into it and found a lot of opportunities for things to address, and hopefully we'll be seeing some of that over the next cycle. Additionally, above and beyond the technical work, we really reached a lot of the same words and found opportunities where we could do better outreach and better communication, and we identified things we could do with the ironicbaremetal.org website, which, if you haven't visited, is kind of our simplified landing place, instead of the original OpenStack docs, which are very verbose. I think that's it, and I think the corgi is demanding my attention, unfortunately. Well, thank you, Julia — and corgi. Next we have Stig Telfer to talk about the Scientific SIG. Hey everyone. So I thought I'd start with a little introduction about what the Scientific SIG is all about. Can we go to the first slide, please? So these were actually the founding objectives of the SIG, which was way back at one of the Austin Summits, probably four years ago now. I guess that we're aiming to solve a whole bunch of problems, but mostly the problem that we're really trying to solve is about knowledge, and sharing it, and sharing expertise. So when we look at the objectives, we have this sort of building of objectives around infrastructure, platforms, and applications, but the main one that seems to work particularly well with the Scientific SIG is the social infrastructure. That comes in mostly in the in-real-life, in-person summits, where we gather together, do sessions, do talks, and then usually something social in the evenings. But the other area where it comes in strongly is around advocacy: the Scientific SIG is about advocating for research computing
use cases within the OpenStack community, but also advocating for OpenStack solutions in the wider scientific computing and HPC environments as well. Mostly it's a forum: it's a Slack channel with an open membership, about 120 members in there at the moment. We like to talk and share problems; it's a fairly social and friendly group. Occasionally the SIG will pull together and work together to make something, and the book on the screen here is the major example of that. You can download it for free from the OpenStack website as well: The Crossroads of Cloud and HPC. Next slide, please. So we like to get together and talk, but in a virtual context of course there's more limited scope for doing that. We did our best in the PTG, and we had a couple of sessions where we did a bunch of lightning talks with SIG members. In the talks I've listed here, you can see there's a good range of subjects, covering all the levels of the stack from infrastructure to platforms and workloads, and also a broad range of participants, so we have people talking from academia, from national labs, and from businesses within the scientific computing sector. But you don't have to take my word for it. Next slide, please. The talks are also available: my colleague Martial, our co-founder, recorded the talks, and they've been made available to the Open Infrastructure Foundation, who have put them onto the YouTube channel. So rather than paraphrasing the talks here, I encourage you to go look them up and have a look for the ones that are interesting to you. Next slide, please. So I hope this is a SIG that represents a great bunch of helpful people. It's a fairly vibrant community, and if you're into research computing and making it work in an OpenStack environment, I hope you'll find that this is the SIG for you too. So thank you very much.
Thank you, Stig, and thank you to all the speakers. So now it's time for Q&A. If you have any questions for any of the speakers, please drop those questions in the chat and we will answer them. Terry wants to see the corgi. The corgi occasionally gets posted on YouTube and on Twitter, and for the record, the corgi's name is Gremlin, but we're talking about renaming him Zathras, because it is his lot in life. And yes, that was a Babylon 5 reference, if you didn't get it. All right, well, if there are no more questions, then I just want to let everyone know about next week. We'll have an awesome Open Infra Live episode lined up that I know we're all super excited about. Our own Jonathan Bryce and Mark Collier will be joined by Martin Casado, a VC partner at Andreessen Horowitz and previously co-founder and CTO of Nicira, the original SDN startup. Bruce Davie will also be joining us, a renowned computer scientist who helped create MPLS and co-authored key computer science textbooks used in universities around the world. And last but not least, Amar Padmanabhan, one of the leaders of the Magma open source core networking project and a member of our own Open Infrastructure Foundation board of directors. They will all be discussing the opportunities around connecting the globe, including leveraging open source technologies like Magma, software-based RAN, and OpenStack. So mark your calendars! We hope you are all able to join us next Thursday at 1400 UTC. I want to thank all of our speakers today and all of you for joining. See you next week!