Thanks for joining us, and welcome to OpenInfra Live, the Open Infrastructure Foundation's weekly hour-long interactive content series sharing production cases, open source demos, industry conversations, and the latest updates from the global open infrastructure community. Today marks our 16th episode, and we have some great content coming up in the next few weeks, so we hope you can join us every Thursday at 14:00 UTC, streaming on YouTube. My name is Kendall Nelson and I'll be your host for today. Since we're streaming live, please feel free to drop questions into the comments section as you think of them and we'll answer as many as we can; we'll try to save some time at the end of the episode to cover anything we don't get to during the show. Looking forward to hearing from you and all of your questions.

Today we're joined by Dims and Bob from the Kubernetes Steering Committee, and Jay and Ghanshyam from the OpenStack Technical Committee. So, without further ado, let's jump in. I'm sure this is a topic that has come up more than once for all of you: do any of you actually think that Kubernetes is replacing OpenStack?

Yeah, that question comes up in talks all the time: does Kubernetes replace OpenStack, and which one is better? Because both technologies solve a similar kind of problem, but solve it in different ways, it's a very natural question, and since people keep asking it, it's good to add some clarification. If we look at the use cases of both, it becomes much clearer what the main differences between the two technologies are and how each can be used. Across the software industry, every company has its own solution for a specific problem; there is no single company or single solution that covers everything. The same is true in open source: OpenStack is open infrastructure, and Kubernetes is a container orchestration system. They each have their own use cases, and based on your customers' requirements and what you want to deliver, you can use them separately or together.

I'd like to highlight their combined use cases, because that makes it clearer that they are not comparable and not replacing each other; they are actually complementary. You can run Kubernetes on top of OpenStack, with OpenStack as the cloud provider, or stand up Kubernetes clusters on virtual machines or bare metal using Magnum. And going the other way: we know OpenStack is complex to deploy, manage, and especially upgrade, and Kubernetes can help solve those problems to some extent. If you use Kubernetes to deploy OpenStack, it makes things easier for you. One very clear example is the OpenStack control plane, where we run a lot of services: the Nova API, the Neutron API, and so on. Even within Nova alone you have the API, the scheduler, the conductor, proxy services, and a database. It's complex, and upgrading it is not an easy task. If you split those services out and deploy them as containers managed by Kubernetes, they become much easier to manage, as in the sketch below. So there are a lot of ways to use them together.
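To make that last point concrete, here is a minimal sketch, in Python with the official Kubernetes client, of deploying one OpenStack control-plane service as a Kubernetes Deployment. The image name, namespace, and replica count are illustrative assumptions, not a reference to any particular OpenStack-on-Kubernetes distribution:

```python
# Minimal sketch: run nova-api as a Kubernetes Deployment so the control
# plane can be scaled and upgraded like any other containerized workload.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="nova-api", namespace="openstack"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scale the API tier independently of scheduler/conductor
        selector=client.V1LabelSelector(match_labels={"app": "nova-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nova-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="nova-api",
                    image="registry.example.com/openstack/nova-api:latest",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8774)],  # default nova-api port
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="openstack", body=deployment)
```

A rolling upgrade of that one service then becomes a standard Deployment image update rather than a bespoke OpenStack upgrade procedure.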
And if we can use them together, it's obvious that they are not replacing each other. It's up to you how you get value out of both of them; there are a lot of ways to do it.

Yeah, so we've covered a little bit about running them together. But Dims, I know your team works on SIG Cloud Provider in the Kubernetes community, and also works on cloud-provider-openstack?

Yeah, well, I was the only one doing that, but yes. The way I got started was that I was mostly working in the OpenStack community, and we ended up starting Magnum, which was a way to run Kubernetes on top of OpenStack. Then slowly I drifted toward the question of how we make these two work better together. We took the opportunity when the cloud providers were moving to the external cloud provider model in Kubernetes, and we started this project around cloud-provider-openstack, where we thought about the best way to integrate the two projects: using the base services that are in OpenStack and exposing them to things running in Kubernetes (there's a small sketch of that integration after this exchange). Over a period of time I drifted all the way over, and I'm mostly working on the Kubernetes side now. But this is true of so many people; I've seen a lot of people straddling both ecosystems and dipping their toes one way or another. And there are many other projects happening around bare metal, Cluster API for OpenStack, and things like that. So I definitely agree with Ghanshyam that the projects are complementary, and how you put them together for your own needs differs between companies, people, and teams.

Yeah, awesome; further proof they're not replacing each other. Ghanshyam, Jay, or Bob, did you have anything to add before we move on?

Yeah, I just wanted to add to that based on the experiences I'm seeing with my stakeholders and customers. OpenStack has been around for a while, and, full disclosure, I'm not super experienced with Kubernetes yet; I'm one of those stragglers still coming up the curve. But it seems to me that Kubernetes is very much focused on workloads, on getting them available and giving them places to run, whereas OpenStack's maturity is in the infrastructure realm. With the tools we have, there's nothing else that does infrastructure management with bare metal and so on. That's really where it's not an either-or; it's how they work together, one focusing on managing the infrastructure and making it available, while you get the workload capabilities through Kubernetes. We also have to remember that this journey from the virtualization world to the container world has happened really fast, and a lot of our customers are still trying to catch up. Many of them can't have an either-or environment; they still need traditional virtualized VMs to support their workloads while they migrate to containers.

Yeah, and you still need to run Kubernetes somewhere, and OpenStack is a viable option for that infrastructure. Exactly.

So, hearing from users and operators in both the Kubernetes and OpenStack communities, what are some of the hardest parts of operating these systems, or operating them together?
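Before the panel answers, here is the cloud-provider-openstack sketch Dims referred to, again in Python with the official Kubernetes client. It assumes cloud-provider-openstack is installed and wired up to an OpenStack cloud with a load balancer service such as Octavia; in that setup, creating an ordinary Service of type LoadBalancer causes the provider to provision the load balancer in OpenStack on your behalf:

```python
# Sketch: a plain Kubernetes Service of type LoadBalancer. With
# cloud-provider-openstack running, the external IP is backed by an
# OpenStack (e.g., Octavia) load balancer; no OpenStack API calls appear here.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",                 # fulfilled by the cloud provider
        selector={"app": "web"},             # pods this service fronts
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

That is the general pattern: OpenStack's base services (load balancing, volumes via the Cinder CSI driver, and so on) get consumed through standard Kubernetes objects.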
Right off the bat, with Kubernetes it's just the sheer complexity of it. There are so many moving parts, so many knobs and tunables, that people are overwhelmed by the options. I don't want to say that's by design, but it sort of is, because the entire point of Kubernetes is to be pluggable and to work on a variety of systems. All those little knobs and dials are a requirement to enable that vast amount of portability.

Yeah, to add on to what Bob was saying: you'll always hear the Kubernetes folks say it's a platform for building platforms. The way we think about it is that you don't try to use it out of the box. If a company wants to evaluate and use Kubernetes, get started, customize it to how you do things, and build things on top that you can then provide to your internal audience. That's where it becomes useful. The other slightly different thing between OpenStack and Kubernetes is that we love lots of little clusters, whereas OpenStack is more about how big your cluster is. We like little clusters for doing different things, so you can get multi-tenancy that way instead of the other way. So there are a few differences like that.

Yeah. Jay or Ghanshyam?

I mean, personally, the problems we're trying to solve at Lenovo right now are actually bringing the clusters up: getting these environments up, whether it's OpenStack or Kubernetes, bringing up the infrastructure and deploying those environments. Those are the big challenges. And my selfish interest going forward is that hopefully, as the communities work together, we'll ease the adoption of whatever type of cloud customers want to have.

Yeah. We all have to get there somehow.

In OpenStack, upgrades are the thing we keep hearing about as the most complex part. Upgrading OpenStack is something we should be making easier and more manageable. I think upgrades are the one thing.

Yeah. On the Kubernetes side, a lot of Kubernetes installations are on public clouds, so we have it a bit easier there, because we can rely on the cloud providers doing the hard part. For us, the issues are more around how you provide the same consistent experience across all the clouds. So our challenge is a little different. Bob, do you see that?

I would agree. That's sort of the big sell for Kubernetes: it's the universal abstraction. It doesn't matter whether you're on bare metal or in any one of the cloud providers, it's the same API for managing your workloads wherever they are.

Obviously there are lots of challenges to overcome in both our communities, and very similar ones in both as well. So what would be some really useful additions to the current set of open source infrastructure that would help your users: OpenStack users, users of both tools?

I'll say the very first thing: as we just talked about the issues operators are facing, an upgrade tool in our open infrastructure toolset would make their lives easier. Whenever operators choose a system for their infrastructure cloud, they see all this complexity. One-time deployment complexity and day-to-day management are okay.
But if they see there's a lot of challenge in upgrading, and that whenever there's a new feature they won't be able to upgrade and keep their system current, that's something they consider from the very first moment. I'd like to see an upgrade solution in our open infrastructure set, whether it's integrated within OpenStack, lives outside OpenStack, or is combined with another technology.

A couple of things that are on our mind on the Kubernetes side: first, how do we do something like an LTS, where we don't rush folks to move to newer versions so frequently, and give them skip-level upgrades so they can upgrade every year or every other year instead of a death march every three months? The other one is how we bring people along with us when we move to newer versions of things, whether that's resources, different APIs, or CRDs. How do we get them onto the newer versions? For example, in 1.22 we are removing a bunch of APIs we had deprecated, because the GA APIs are out, but lots of people in the field are still using the beta APIs. Hey, don't do that; please move to the newer ones. So bringing people along is a general challenge I see in both communities.

Go ahead, Bob.

Thinking about the API thing, I'm actually curious whether OpenStack has a similar problem: lots of people have built abstractions on top of Kubernetes, so all these API versions are getting hidden behind Helm and other tools. One of the biggest issues we had back in the 1.16 release, where we removed some deprecated APIs, was that there were something like 360 Helm charts in use that, since they had abstracted all of that away, hadn't actually been updated, and nobody realized it would be a problem until the change shipped. People upgraded their clusters and, boom, things weren't working right. (A sketch of the kind of manifest check that catches this follows below.)

Yeah, I think the API discussion is interesting; it's one of those challenges we've been working through internally. The strength of things like OpenStack and Kubernetes is open APIs that can be used to create new things and to automate in ways that weren't possible in the past. But then you have the challenge that at some point you have to update the APIs to move forward. Just recently, with Cinder trying to remove the v2 API, we were sitting there going, well, we think we can do this. And it got to the point of, okay, we're going to remove it and see who screams, because you don't know who out there is using it under the covers or behind the scenes. So the success is also a pain point: it works, and people don't want to change.

Yeah, APIs are always a complex thing to change. To some extent, what we do in OpenStack is try not to delete old APIs or interfaces; we keep them as long as we can. In our case, for example, there are even some proxy APIs still there that proxy to the volume, network, or image services. We keep the old APIs as much as possible while improving them with new versions. But as Jay mentioned, removing Cinder v2 is not an easy thing. At some point people do get broken and they complain, but there's no better way to handle it from the API side, I think.
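Here is the sketch Bob's Helm story points at: a small Python script that scans rendered manifests (for example, `helm template` output) for API versions removed in the 1.16 and 1.22 releases. The table is an illustrative subset of the real deprecation lists, and dedicated tools exist that do this far more thoroughly:

```python
# Sketch: flag manifests that still use API versions removed in Kubernetes
# 1.16 or 1.22. Run as: python check_apis.py rendered-manifests.yaml
import sys
import yaml  # PyYAML

# Illustrative subset of (apiVersion, kind) pairs and their replacements.
REMOVED = {
    ("extensions/v1beta1", "Deployment"): "removed in 1.16; use apps/v1",
    ("apps/v1beta2", "Deployment"): "removed in 1.16; use apps/v1",
    ("extensions/v1beta1", "Ingress"): "removed in 1.22; use networking.k8s.io/v1",
    ("networking.k8s.io/v1beta1", "Ingress"): "removed in 1.22; use networking.k8s.io/v1",
    ("apiextensions.k8s.io/v1beta1", "CustomResourceDefinition"):
        "removed in 1.22; use apiextensions.k8s.io/v1",
}

for path in sys.argv[1:]:
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not isinstance(doc, dict):
                continue
            key = (doc.get("apiVersion"), doc.get("kind"))
            if key in REMOVED:
                name = doc.get("metadata", {}).get("name", "<unnamed>")
                print(f"{path}: {key[1]}/{name} uses {key[0]} ({REMOVED[key]})")
```

Running a check like this against every chart a cluster consumes, before upgrading, is exactly the step those 360 charts were missing.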
If APIs were designed ideally at the start, that would be good, but I don't think that ever happens; we keep improving. What we mainly care about is not breaking all the users at the same time, so we make API changes with versioning and keep the old thing around as long as possible. But yes, we still face these API challenges in OpenStack.

You communicate the changes and the upcoming deprecations and removals as best as you can, and hope for the best, I guess. So, we touched on this a little already, but do you think that in the future this will all end up being operated by a handful of hyperscale cloud providers, or is there still a place for orgs to run their own infrastructure? What's missing to make it easier and more viable for more organizations to run it themselves, so it doesn't all go to the hyperscalers?

Well, not everybody can run in the public cloud or has a need for hyperscale. I'm seeing a lot of customers in the medical industry, banking, and those sorts of fields who can't have their infrastructure out in the public cloud, so there is a real need for on-prem infrastructure. We had kind of a burst to the public cloud, and people are going, wow, that's expensive, or oops, I can't put this data there, and they're having to back up and ask, okay, how do I do this internally, and how do I do it without breaking all the automation I've already built for the public cloud? So again, it's back to APIs and common interfaces, and making as much as possible reusable and portable.

Yeah, we don't have that problem, because a lot of our users are already on the public clouds. Our challenge is more around how we span both public and private clouds, making sure people can autoscale between them and burst, as Jay was saying. In other words, how do you stitch together multiple clusters across data centers and across clouds? That's where we're seeing a lot of work right now around what we call Cluster API, where you treat "give me a cluster" as an API call: upgrade this cluster, bring down this other cluster (there's a sketch of that just below). We're doing that kind of design right now, and it's just a matter of time before we have at least a few good solutions for stitching all these things together. Because at the end of the day, you want to push down policies that say, hey, I need to tighten security on all my clusters, I want to switch this off or switch this on, things like that. You need a way to assemble the clusters together, do the same things across all of them, and programmatically start things when you need them and stop them when you don't. That's where the newer work is focused: layering on top of Kubernetes itself, not necessarily living inside Kubernetes.

There's also some of this landing in Kubernetes proper that we never had good primitives for in the past. Clusters didn't even have an individual unique identifier, so it was really hard to manage clusters in multiple locations when a cluster itself doesn't have a name. Another one is multi-cluster Services, where, say, you want to advertise a service that lives in one cluster, on-prem or in a cloud, to these other clusters over here.
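Here is the Cluster API sketch Dims mentioned, using the Python client's generic custom-objects API against a management cluster that already has Cluster API and the OpenStack infrastructure provider installed. The group/versions and the OpenStack-specific kind shown are the ones those projects use, but they have changed across releases, so check what your installed release actually serves:

```python
# Sketch: with Cluster API installed in a management cluster, a workload
# cluster is just another custom resource you can create, upgrade, or delete.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the CAPI management cluster
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",  # version varies by CAPI release
    "kind": "Cluster",
    "metadata": {"name": "demo", "namespace": "default"},
    "spec": {
        # Delegate machine/network creation to an infrastructure provider;
        # here the OpenStack provider, so the cluster lands on an OpenStack cloud.
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",  # varies by provider release
            "kind": "OpenStackCluster",
            "name": "demo",
        },
    },
}

api.create_namespaced_custom_object(
    group="cluster.x-k8s.io", version="v1beta1",
    namespace="default", plural="clusters", body=cluster,
)
```

Upgrading or deleting the workload cluster then becomes a patch or delete on that same object, which is the "give me a cluster as an API" idea Dims is describing.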
Various things have been built in the past to add those multi-cluster capabilities externally to Kubernetes, but now they're getting baked more into the core itself. We're also seeing other work, like handling multi-cluster ingress with the Gateway API, to route load to multiple clusters behind a single front end. Lots of things happening.

Yes, obviously lots of API changes and lots of work being done upstream. I know both projects, OpenStack and Kubernetes, see massive amounts of activity, something like 35,000 changes per year, but both seem to struggle with getting basic maintenance done on core Kubernetes and core OpenStack. How do you explain that, and how do you solve it, if you have a solution?

No good solutions as such, Kendall. A lot of people come to the community and say, hey, I want this, I want that, I want this feature, I want this to work differently, this doesn't work for me, I want to change the way it works. But what we're finding is that the pool of people who have to keep everything running, day after day, week after week, release after release, is shrinking a little in both communities. So we have to figure out how to engage the long tail of contributors, and we have to balance the two things. We're far enough down the line now, five or six years, that the original people are no longer there; the design decisions that were made and the code that was written are bit-rotting, and we have to train the new people to the point where they feel comfortable touching it, making changes, and thinking through the same kinds of questions. So setting things up for discoverability, writing things down, making sure we have enough test cases, generally healthy open source practices: getting that in place for the maintainers who are coming in would be the right thing to do, I feel.

That's definitely a good start for us. One thing I think both projects share, though I'll speak more to Kubernetes: these days it's a lot harder to get a feature, or anything you want, merged. Like it or not, Kubernetes is now in a production place, and we've done a lot to ensure that any code merged in is production-ready. We have a process, the production readiness review, that reviews features as they come in. That adds complexity for newer contributors, and one thing Kubernetes has done is shift much more of the work outside of core, where it's easier to get more people involved. But there are still challenges even with that, and with getting the work out to a place where it can actually be worked on more easily.

Just to add on to what Bob was saying: when somebody comes to us with a big feature, we ask them to tell us the smallest change that can be made inside Kubernetes, with call-outs to the things they want to hook into, instead of putting the whole feature in. How do we enable you to build on top of Kubernetes, outside of Kubernetes?
Then, over a period of time, if there's really huge demand from a lot of people, we can move it into the core. We ended up with this approach and have been doing it for a few years now. If you look at the list of projects in CNCF that build on top of Kubernetes, you can see the explosion there. People are doing all kinds of stuff: OPA, gateway implementations, you name it. There are projects doing things on top of Kubernetes, built on the primitives and extension points we have within it. This is good and bad, because what ends up happening is that the people who were earlier working on Kubernetes core are doing things outside it and have less time for the core. So we're enabling the ecosystem to expand like crazy, but we're paying a bit of a price in terms of losing some of their time.

Yeah, that's true. In OpenStack too, as Bob mentioned, getting features done isn't the way it used to be, because contributors are one aspect, but long-term maintenance is also key. If a single feature is specific to a single use case that can be solved in a different way, we tell the proposer so. In OpenStack we have a process of discussing the design first, and as part of the design we discuss the use case for the feature. If we have an alternative, we point it out; and if the proposal is still a good use case for other companies and deployments, we accept it. But yes, it takes longer than it used to. We don't have a perfect solution as a community, but I think the companies using open source, whether Kubernetes or OpenStack, should step up and add more contributors for core maintenance, and even for the infrastructure setup of these communities. We should communicate that to those companies and convince them; how to do that is another challenge, but that's one thing I think we should start, or continue, doing.

You know, it's interesting, because I've been around OpenStack for a while now, we won't say how long, and we went through the "ooh, everybody's got to get their new feature in" phase. It was chaos: is it the right feature? My company wants this, my company wants that. And now, to be honest, it's a little sad to see that we've gotten to a point where people keep asking, "is it dead?" That shows the world's view of it is wrong. It's not dead; a lot of people are using it, and there are still a lot of really interesting challenges to solve. There's more to development than just making something new. There's a challenge people need to see in making what everybody is using stable, and continuing to grow it from that aspect. I would love to see more excitement from the community and from the companies of the world that depend on this, helping to solve some of these tough questions going forward: what are the edge cases we've never figured out? What breaks the gate sometimes? Customers have to be seeing these things too; they're just not coming back and saying, hey, I saw this and I want to help fix it.

For Kubernetes, oh, sorry. Go for it.
For Kubernetes, we have this concept of a KEP, a Kubernetes Enhancement Proposal, which I started touching on when I mentioned the production readiness review a few minutes ago. It's essentially a design proposal that outlines all the work needed for a feature. KEPs tend to go through very lengthy review by the owning SIG, the group that acts as core maintainer for that area, and they'll loop in any other SIGs that might have thoughts on the design. Then we have this entire graduation process, from alpha to beta to stable. In alpha, when a feature first ships, it's not turned on by default, but it can be enabled in a cluster; it's there for end users to flip on, experiment with, and give us feedback on. Speaking of which, we don't get enough feedback. If you want to get involved, please turn on alpha features and give us feedback. We need it.

From alpha to beta, the API can change; that's where we can make big changes, and sometimes there will be multiple alpha APIs. Beta is where the feature actually gets enabled by default in a cluster, and at that point there are some guarantees around it: it's not likely the API will change significantly on its way from beta to GA. I don't want to say it's safe for people to use, but they can start to use it.

Wait, you just said you want them to use it in alpha.

I do; beta is just when it's turned on by default. Alpha is when we can make the most radical changes to the API: "this is awful, why are we doing it this way?", toss it out, release a new alpha. Which has happened. Oh God, was it EndpointSlice? I don't remember; there was one that was just completely tossed out after things turned out to be awful in alpha. That early feedback is really needed. And with all of this, graduation can only move one stage per release, so a feature going from alpha to production takes, at minimum, a year at this point. It cannot go faster: it takes a year of consistent development to go from alpha to GA, and that's the fastest it can possibly go. There isn't a way to bypass that. We do it on purpose, to make sure that whatever makes it to GA is thoroughly tested and stable, has a feature flag so people can turn it off if needed, and, as we've recently required through the production readiness review, exposes the right metrics so you can measure whatever is being introduced.

Yeah. In OpenStack we had the concept of experimental features, especially for APIs, but the challenge we faced was that people started using them in production, and when we changed them, they complained. So we stopped doing experimental features. One question, though, about the alpha-to-beta-to-GA transition: has there ever been a feature where you got no feedback, or negative feedback, and you removed it completely?

Oh, we have a bunch of those.
For example, we are completely redesigning what we called Pod Security Policy, because it was stuck in beta for the longest time and wasn't making progress, and we saw so many problems with it that we said, no, we're going to get rid of it. That's number one.

Yeah. I will say that so many users were interested in that feature that a replacement is being developed. It doesn't necessarily do everything PSPs, Pod Security Policies, did, but it covers the majority of the use cases in a much more maintainable fashion.

And while this was happening, I should tell you that at least three other open source projects came into the same space and said, whatever PSPs can do, you can do with our project: Kyverno and OPA, and there was at least one more. We were so happy: folks, go use one of those projects and let this one go. But we still needed something simpler in the base. So we're starting the process on a new feature that will do some of what PSP did, covering 80% of the cases; for the other 20% we say, go use those other projects built on top of Kubernetes.

That seems like a really good way to organize things: cover the majority with a smaller surface, and let the special-flower use cases be served better by other projects.

Cool. So, we've again covered this a little, but what are some of the biggest focuses and struggles now for the Kubernetes and OpenStack communities, now that both are past the hype curve and moving into the land of mature but still very active open source projects? Bob, did you want to go first?

Yeah, we touched on this a bit earlier. The rate at which new contributors come into Kubernetes has dropped significantly over the past two years. That's partly being past the hype curve and partly the difficulty of contributing to the project. There are a lot of little issues, but the thing is, all the SIGs, all these groups, still have massive amounts of work that needs to get done. One of the strategies Steering has taken recently is the annual report: we ask all our SIGs, all our groups, to report out what they've done over the past year and where they need help. It's not about saying, hey, you're not doing this; it's for them to raise the flag of "we need help here." We published a summary of those annual reports recently; I can drop a link. If you're interested in Kubernetes and helping out, that would be a great place to start. Where I think we struggle the most, though, is the unseen work, the stuff that's hard to incentivize: our testing and infra groups, the core operational stuff. For Kubernetes that's SIG Testing and the K8s Infra working group, who make sure all the lights stay on and we can keep testing and doing everything that's needed in a Kubernetes release. Would you say that's about right, Dims?

Yeah. There are definitely a lot of common struggles, with OpenStack being half a step more mature, I guess, on some level, or maybe just having been around a bit longer; maybe not more mature per se.
But I expect there's a lot of advice the OpenStack community can give to the Kubernetes community, because we've been through a lot of the same struggles, just six, twelve, eighteen months before you. So, Ghanshyam or Jay, do you want to talk about some solutions to these problems?

Yeah. One thing we've faced in OpenStack, as we touched on before, is basic maintenance: keeping up the infrastructure used to build the software. We currently need more maintainers in OpenStack for the infrastructure services we rely on day to day when developing the software. The challenge is that because those services aren't directly in the production path, companies don't realize their benefit; they just think, okay, we get the OpenStack release, we can upgrade it, we can use it, without seeing how the software gets developed and what basic things are needed for that. That's one of the key things. We don't have a solution, but the advice I'd give is that we should have some framework in open source for sustaining the maintainers and resources required for this day-to-day infrastructure, whether we ask the foundation or ask companies to support it. That's something Kubernetes could start thinking about now, to head off issues you may face in the future. One current example: we're at critical health on maintenance of our ELK services, the Elasticsearch-based log processing and Elastic Recheck, which are essential to our day-to-day debugging. Before you reach that kind of critical state, I think you should recruit more companies to give you maintainers and resources.

I'm glad you mentioned Elastic Recheck. For community members and companies who may have infrastructure to offer the OpenStack community, there's an opportunity there. If you don't have coders to contribute, there are other ways to help open source communities, like providing physical resources. There's also an upstream investment opportunities list that's publicly available on the web for OpenStack; if you're looking for ways to get involved in places where you or your developers can really make a difference, that's a good place to start. And it's a community: get out there with people like Dims and Bob and Ghanshyam and me and ask, what can I do to help? We're always happy to talk about that and help get people started.

One question for Dims and Bob. Dims, I know you were involved early in the OpenStack project. Has onboarding and bringing new members into the community been different, as people have moved from one open source community to another?

So, we're trying new and different things. For example, we have some country-specific channels, one for folks from India and cn-dev for folks from China, where we try to organize activities in their time zones. And there are issues where people just getting into the project can help out, so we flag those kinds of issues so newcomers can snatch them up.
Then we're also doing at CNCF things we've done with OpenStack, like mentorship programs: Outreachy, LFX, GSoC, and those kinds of things. So some things we've already done here are being replicated there, plus some additional things. The problem is the learning curve: it takes a while for people to get comfortable and feel productive, and getting somebody to that point is the hard task. One of the things we're thinking is that the funnel has to be big, and then over a period of time, depending on whether they get good mentors and so on, by the time a year or two is out only a few of those people remain. So we target people, try to get them into various initiatives and working groups, and over time see who sticks. Building up the leadership and making sure we have a contributor ladder people can aspire to: those are the kinds of things we're trying to do as well.

I think one nice thing I've seen in the community: originally there was that question of, what are you doing this work for? What is this open source thing, and having to go to your management and explain it. That's one barrier that over time has been removed, a little bit at least. I don't get questions from management anymore about why I'm taking an hour to go sit and talk on a video stream. So that's one good improvement I've seen over time.

On the management and PM side, from the vendors and other contributors, the problem is that they don't want to engage too early, because they think things aren't ready; and then by the time they engage, they realize it doesn't work for them, so they go do something else. We have to point out that paradox: you have to be there, you have to pay in, you have to pay attention to what's happening, and only then can you reap the benefits. So there's a bit of coaching we have to do for the folks who aren't directly involved in day-to-day activities but who, so to say, sign the checks for the people contributing. There's some amount of work to do on that side, and it's a hard thing.

Yeah, that came up for... go ahead. No, go ahead.

The same thing came up when OpenStack tried to add the help-needed list, which is now the upstream investment opportunities list. The very first question was how to explain the benefit to executives. So what we tried was adding, to the items where we're asking for help, an explanation that members or developers can take to their managers or executives: what value it adds as a business. It's not just "I want to do the coding and I want to go there"; you also get something, and that's something you can explain.

One of the recent things we're trying is to add more roles in the SIGs, not just chairs and tech leads, roles where what the person in the role will do is defined. People can come in and say, hey, I want to do that role. Having that kind of thing tells people: okay, this is a three-month commitment, I'm sure I can talk my manager into it. If it's not specified and it's open-ended, it becomes very difficult.
For example, we have a good mentoring program in the release team, where everything is written down: CI signal, how to push the button to cut a release. We've written it down, and we say, look, this is a three-month rotation; there will be a mentor you can shadow, there can be multiple shadows, and next time you can step up as a lead. So they know exactly what to expect and what's expected of them, and there's a checklist and a role handbook telling them what to do. It makes things easier because, well, we have to make friends, right? You make friends, and then you end up doing more things with them.

Yeah, and I think that recent roles idea you mentioned is really good; that's how more and more people can help within a bounded amount of time, instead of being asked to contribute full-time for a full year.

Yeah, and it tells them: okay, I have a set of people I can go talk to. These are the shadows, this is the mentor, and I build a friendship with them and then go do things. Over a period of time you expand the number of people you know, you get to know more of the problems in the community and where you can help, and you naturally end up doing more things. At least, that's the theory. So, hopefully, yeah.

Yeah, I can confirm. I wear a lot of different hats in my communities, OpenStack primarily, but now Kubernetes too; I've gotten involved in more places. The more people you know, the more work you have to do. Open source is built by the people who show up. Basically, yeah. And some of the people you meet are great, so that's good too. Yes, definitely.

Can we get a question from the audience? Here's one: how are OpenStack and Kubernetes dealing with the CentOS to CentOS Stream change? Could this be an opportunity to build an out-of-the-box stack?

At least for Kubernetes, we don't care about that; we distribute binaries. Easy, easy for you. As long as there are cgroups, whether cgroup v1 or cgroup v2, we're happy. The way it works is that we depend on a runtime like containerd or CRI-O, and that in turn depends on runc, so a lot of how containers run is abstracted away from us. We don't have to worry too much about what's happening in the underlying layer. It's more of a problem for OpenStack, where everything depends on exactly what you're running on.

Yeah. There was another question asking whether Kubernetes supports, or is supported on, Rocky Linux. So that would be a no... or a yes, because you're abstracted away from it? I'm sure there are people who have run it on Rocky Linux. If not, that would be a good thing for somebody to try.

Yeah, that came up on the OpenStack mailing list too: does OpenStack run on it, and is any testing done? We don't test it upstream currently, but we said that if anyone is running or testing it, please contribute back documentation of what you tested and what the results were.

Awesome. So we're getting toward the end of the questions we had, but: how can our two communities continue to work together moving forward?

Yeah, I think that already started one or two years back. We have the events; previously they were physical events, and now they're virtual, like this live stream.
So, for example, at the OpenStack PTG, or in steering committee meetings, people go from one community to the other, especially between the TC and the Steering Committee, though it's open to other members too. Continuing to discuss and meet at events is, I think, a great way to share ideas: what challenges the communities are facing, how one solved them, what OpenStack faced and how we solved it or haven't solved it yet. That kind of cross-community collaboration at events is the good way. In OpenStack we have the next PTG scheduled in October, so that will be a good venue, and the Kubernetes Steering Committee is very welcome to join us.

A thought on working together in the future, from somebody who works closely with telcos trying to figure out the edge: OpenStack has had an edge working group for a while now, and we're still trying to figure out what edge is. I'd like to hear thoughts from Dims and Bob: where is Kubernetes going with its edge plays?

On the VNF side, at the CNCF level there's a working group doing CNFs, the container-based equivalent. The way it works is that we have working groups at the CNCF level and at the Kubernetes level, and at this point the ball is with the people doing the CNFs: they're collecting best practices and trying to come up with a conformance program and guidance documents. If there are things that aren't implemented and need to be, features or whatever, they signal that to us, and it gets translated into KEPs; that's how we end up working on it. So we have a process, but so far it feels like they're off on their own and doing okay, so we have to go poke them and see if they need anything more from us. On the edge side itself, one kind of edge is smaller devices, and there are already a few projects in the Kubernetes ecosystem for that: K3s, k0s, and KubeEdge. So there is active research and experimentation going on in that space. The question then is how you bridge the edge devices with a bigger cluster, and we haven't really gone far down that path yet, but I'm sure we'll end up looking at that scenario as well. It touches on the multi-cluster work going on right now; it's all collectively edging that way, eventually.

Yeah, what I think would be interesting, looking to the future, is that we're going to have an explosion of infrastructure that needs to be managed, with IoT devices and things like that. Kubernetes is well placed for running small workloads at the edge, so we should look at ways to combine the infrastructure management we have, through Ironic and so on, to get the environment up and running while Kubernetes manages the workload on top of it. I think there are definitely opportunities to work together on those sorts of things in the future, so, yeah, I'll give you a call, Dims.

See, make friends, get more work. Make friends, get more work. Rekindle old friendships, get more work. I'm going with the theme here. Yeah, exactly.

So, we've had a couple more questions come in from the audience.
Here's one: the combination of CNCF projects (Kubernetes, KubeVirt, and CAPI) somehow confuses our decision making when choosing between OpenStack and Kubernetes. What are your suggestions, or a checklist, for deciding which way to go?

Typically, what I end up answering here is: experiment, try it out. You can't jump into production next week even if I give you a checklist, because your situation is unique, your use case is unique, and some of what KubeVirt offers might not be enough, for example, for your existing image to boot up in KubeVirt. Cluster API is slightly different, so the question is probably: do I use OpenStack to run my VMs, or do I use KubeVirt to run my VMs? I'd say try both; each has its positives and negatives. The thing with KubeVirt is that if you already have a mixed environment, it can be the better fit: if you're doing a lot of things in containers but have that one thing that refuses to budge, you use something like KubeVirt (there's a sketch of what a KubeVirt VM looks like after this exchange). If you're primarily based on virtual machines and want to add Kubernetes on top, there's a different set of things to think about.

On the KubeVirt front, my company recently tried testing it, and it wasn't as stable yet. One thing we noticed is scale: if you're building a large cloud with a lot of heavy workloads, OpenStack can take you further than KubeVirt with a small cluster. Although, as Dims said, experiment with it; it can improve with feedback. It's a different use case, and it depends on how you use it.

Awesome, thank you for all the information; hopefully we helped them out. There's one more, a use-case-specific question: we use Ironic for bare metal provisioning. Any thoughts on similar projects for Kubernetes? Is Metal3 stable for general-purpose use? Any other similar projects on the roadmap?

There's Metal3, Tinkerbell; there's like a dozen of them. I haven't had the chance to kick the tires there, and I know Ironic has been around for a while. But we have Ava with us now; anyway, that's an inside joke. You'd have to look at the list of features in Metal3 and do the comparison, but the basic technologies all three use are probably the same at the base level. So it's more about what you're comfortable with and what you already know. If you're comfortable with Python and you foresee a bunch of small things you'd end up fixing in Ironic, then maybe you should just do Ironic, right?

Yeah, makes sense. Stay with what you're familiar with, whichever community you fit in better with, wherever your use cases lie more closely aligned with what's being focused on, that sort of thing.

For everything there's a learning curve: start in your comfort zone and grow your teams around the technologies and the communities you want to be part of.

You can also de-risk your investment in those projects, if you're concerned about any of them, by actually contributing back to them.

Ah yes, a good pro tip right there, and it's applicable outside of Kubernetes and OpenStack too. All open source projects could use more friends to help take on the work that a finite number of us are doing. If your company depends on it, it's a good idea to have someone in there who knows what's going on.
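For reference, here is the KubeVirt sketch mentioned above: a minimal VirtualMachine object created through the Python client's generic custom-objects API. It assumes KubeVirt is installed in the cluster; the container-disk image is the demo image published by the KubeVirt project, and the resource sizes are arbitrary:

```python
# Sketch: a VM as a Kubernetes custom resource, managed by KubeVirt.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "testvm", "namespace": "default"},
    "spec": {
        "running": False,  # create stopped; start it later
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "root", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "256Mi"}},
                },
                "volumes": [{
                    "name": "root",
                    # Demo Cirros disk image shipped by the KubeVirt project
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="default", plural="virtualmachines", body=vm,
)
```

The contrast with OpenStack is that here the VM is just another object in the Kubernetes API, scheduled alongside containers, which is exactly the "one thing that refuses to budge" scenario described above.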
One more key point on getting involved: don't start a new open source project without looking at all the existing ones first; it may not be worth it. Your problem may already be solved, and you don't need to reinvent the wheel. Just add maintainers to the existing project. Don't just use it for free. Exactly. Yeah, get involved; that's definitely the moral of the story here. Being involved is really the best way to go.

As far as moving forward: to get involved in either the Kubernetes community or the OpenStack community, or if you want to keep participating in the joint discussions between the Kubernetes Steering Committee and the OpenStack Technical Committee, we coordinate, as Ghanshyam mentioned, on a somewhat ad hoc basis. We meet at the public Steering Committee meetings, organized in the steering committee channel on the Kubernetes Slack, and, when we're organizing an upcoming PTG, which we currently are, set for October of this year, we'll be inviting the Kubernetes Steering Committee to come hang out with us at the OpenStack PTG and will coordinate those conversations on IRC in our OpenStack TC channel.

So, thank you for joining us, and thank you to all of our awesome open source governance representatives from both communities: thank you Dims, Bob, Jay, and Ghanshyam. We really appreciate having you here today, sharing your knowledge and opinions. Our next episode, around Ironic, ironically enough, airs in two weeks on August 12th. In the meantime, don't forget you can watch all of our previously aired episodes on openinfra.live. Also, if you have an idea for a future episode, we definitely want to hear from you; you can submit your proposals at ideas.openinfra.live. Geez, there are so many periods I keep wanting to include. There you go, there's the URL. Mark your calendars for Thursday, August 12th at 14:00 UTC to join us for our next episode. Thank you all, and see you next time on OpenInfra Live.

Thanks a lot, Kendall, for running this. Thanks, Kendall. This was great.