Good morning, good afternoon, good evening. Wherever you're hailing from, welcome back to another KBE Insider show. I am Chris Short, host with the most of this thing we call Red Hat live streaming. I'm joined by a special guest host today, Steve Spiker, as well as our special guest, Maciej Szulik. Maciej is a great contributor to the community. I lost the page with all his descriptions, all of a sudden, among all these tabs, hang on. We're going to talk to Maciej today about, you know, being a software engineer at Red Hat, obviously, but also about what it's like being a double SIG lead and his work on the Kubernetes CLI and controllers. But first, the news. Meena, take it away. Welcome back, by the way.

Thank you. Hi, good morning, everyone. Going to talk about some highlights here as I do on all episodes, except I guess Gordon did a really, really good job last month, so I do have a high bar to uphold. But security has been a big issue, I think, with Kubernetes in the last couple of months, and there are a couple of articles that I want to highlight. There's a new tool that wants to save open source from supply chain attacks: Sigstore will make code signing free and easy for software developers, providing an important first line of defense. This is, I think, a really, really good first step in defending against those attacks. And then there's the NSA/CISA Kubernetes hardening guidance, which identifies the common areas of Kubernetes security risk as supply chain, malicious actors, and insider threats. It aims to educate engineers to avoid common misconfiguration issues and safeguard applications. The guidance suggests that supply chain risks are hard to mitigate and can emerge in the container build cycle or in infrastructure provisioning, especially in cloud environments. And then we have Helga Labus coming in, talking about how to secure Kubernetes as it becomes mainstream. This is actually an interview with the CEO of ARMO, who talks about security in Kubernetes systems, what makes them susceptible to cyber attacks, and what organizations should expect when deploying them. He explains that attackers are looking for targets, and they choose their targets by a combination of key parameters: the value of the target and how easy it is to attack. So after talking about all this, obviously we need a way of knowing whether our Kubernetes network security strategy is solid. There are a few critical questions that must be asked to understand where these vulnerabilities persist and where steps need to be taken to ensure adequate protection within your container network. So before you decide anything, ask yourself: Does your network inspection achieve complete visibility? What isn't protected by your security deployment or service mesh? What are the limitations of your existing web application firewall protection? Are you addressing security drift? And how fast can your Kubernetes security mitigate threats? Again, a very, very important theme in the world of Kubernetes right now. I will drop in the links for the specific opinion pieces and news articles I've addressed. And then, moving on, what are the main drivers and challenges of container technology today? Security is obviously one of the main challenges related to application container technology, and it limits adoption.
A lack of internal alignment and experience in Kubernetes management is also named among the key barriers to adoption. The main drivers are that an increasing number of enterprises are opting in for the powerful deployment options and visibility over complex deployments, efficient distribution of workloads across cluster resources, and accelerated software delivery powered by Kubernetes. But despite the challenges, the idea of simplified and automated service delivery continues to drive the deployment of Kubernetes across the world. Then we want to highlight five DevSecOps open source projects. Again, you can go and look at this article to learn more about these projects, but teams that embrace the DevSecOps approach make security an integral part of the entire application life cycle, and these specific open source projects aim to help with that: Clair, Sigstore, KubeLinter, Open Policy Agent with Gatekeeper, and Falco. So I will again drop all of these links into the chat. Feel free to go and check those out. Come back to the KBE news page every week; we have great articles coming up today as well. And giving it back to you, Chris and Steve.

Thank you, Meena. Awesome. And yes, security has been a huge issue lately. Take it easy, Meena. Maciej, who is your daddy and what do you do?

That will depend on the context and everything around it, but I think we can figure it out eventually over the course of the rest of the meeting.

So you're a software engineer here at Red Hat, co-lead of two SIGs. Sounds like you're like me, you wear many hats at Red Hat. And that has to be an interesting challenge.

It is. Let's start with that. I was a SIG CLI lead for almost four years now. And I said "was" because last week I officially stepped down from the SIG CLI chair role. In SIG CLI we have a division of what a chair does and what a tech lead does. So theoretically all of us are wearing both hats, but over time, when we want to give someone else a chance, well, I will be slowly stepping down, but I want to help with the technical side of things. I'm giving away the organizational hat and will keep taking care of the technical stuff, especially since I have a lot of knowledge, most importantly historical knowledge of the decisions that we made over the past years. And I know that the other chairs and tech leads were asking me to stay around to help them, because that historical knowledge is sometimes helpful for resolving conflicts and situations, which come up basically on a daily basis. So that's that. Yeah, the context of why we did things this way or the other way in the past. Have we tried to look into this or something else? And yes, we usually have, but the decision from the past was that, because there was other stuff that might interact, we had to make the hard decision of doing it this way or differently. So it definitely saves a lot of time when I can look it up in my head somehow. I don't know how I manage to keep hold of this much information, but somehow I do.

So how many people, yeah, how many people are you usually interacting with in SIG CLI?

SIG CLI varies, but I think it'll be somewhere between 10 and 20 people that are constantly around. Well, currently we have this many meetings that we are meeting every week, because the official SIG CLI meeting is every other week, and in the other weeks we are doing bug scrubs, and just recently Katrina started doing Kustomize bug scrubs.
So no matter what, on Wednesdays there's usually a SIG CLI meeting, whether that'll be a bug scrub, a Kustomize scrub, or a regular meeting. It's all in the calendar, so you should be good.

Good one. On top of that, you also do work in SIG Apps too, as part of your community participation.

Right, so I was part of SIG Apps basically since the initial days when I started working on Kube, because my original story with Kubernetes started with jobs. Well, at the time it was actually called ScheduledJob. The idea was to add something like cron in Linux; if you're familiar with Linux systems, you know that there's an ability to schedule some task at any given point in time. Initially that was called ScheduledJob, and over time it was renamed to CronJob. That was the original idea. During the initial discussions, we divided that into Jobs and CronJobs. So this is how my story started, I think that was around 2016 or 2017.

Very early on.

Yeah, that was very early on, barely a couple of months after I joined Red Hat, if I remember correctly. But since then I've been jumping the train in a couple of places. And if you look at my contributions, I was looking at what I did, what I touched, and I've touched very different places in the Kube ecosystem. On a daily basis, the team that I'm leading at Red Hat is overlooking SIG CLI, SIG Apps, and also SIG Scheduling. But thankfully I have amazing folks working with me that are handling SIG Scheduling, so I don't have to do it, because my mind would blow if I had to look at a third SIG. But even still, I'm also participating in SIG API Machinery and trying to look into what they are cooking, both from a SIG Apps perspective, because controllers are interacting with the API one way or the other, and because a lot of the primitives that SIG API Machinery is working on are used heavily in both SIG Apps and SIG CLI. So it's somehow natural for me to also follow along with those folks.

Interesting. So hello out there, everybody in the audience. Just wanted to make sure we get a hello in. I was just going to jump in. You mentioned how you started with Kubernetes, and it sounds like part of it was your role at Red Hat. I don't know if you can talk a little bit more about how that got started.

Well, that was an interesting turn of events. I was basically invited to a conference that was happening in southern Poland, pretty close to where I was living, in a very small city nearby. A friend of mine asked me, oh, there's an open source days happening in Bielsko-Biała. That was like seven years ago, in March, if I remember correctly. And I was like, oh yeah, I promised to go with you to a conference a couple of months before but couldn't make it, so yeah, I'll join. Also, that was a time when I was a couple of months past switching from my previous job to the current one at the time, and I was very disappointed by the pick I had made. Like literally after a week of working there, I was like, yeah, that's not the place I want to be.

Been there before. Well, it happens.

And I went to the open source days and I met my wonderful friends and co-workers to this day, Zierka Folta, who's currently in charge of HR in the Czech Republic, and Michał Wojtek. Michał is a staff engineer in OpenShift to this day, and he was my team lead for a very, very long time, and I consider him my friend.
So we started chatting and it was like, oh, it would be so cool to be able to work with you. There was one little issue at the time: OpenShift back then, that was seven years ago, was at version two, which was written in Ruby. And me being a Pythonista, I was like, yeah, that's not my game. So I applied for two positions, actually. One was for OpenStack and the other one was for OpenShift. And because my heart was with Python, I cared more about the OpenStack role than the OpenShift one. It turns out that I didn't get the OpenStack role because I was missing some proper virtual machine knowledge, but I got the OpenShift role. Soon after, OpenShift started working on V3 and we switched from Ruby to Go. And I was like, oh yeah, that's fine, because I had worked with Java before, I had worked with C. So switching from Ruby over to Go was pretty exciting for me, actually. And that's how I landed at Red Hat. I did touch the Ruby code for quite a while. I think I was one of the last people still maintaining V2. I think folks that joined Red Hat after me did not maintain V2 anymore, because they were already jumping into V3 and the Go-based solution. So it was exciting.

OpenShift worked out for you, I mean, worked out for the team overall, so that's great to hear. The one thing that I think is an interesting story, hearing folks like you who work upstream so much in the community, is that there are a lot of pieces there that are kind of hard to pull together. But at the same time, you're managing to deliver a, you can call it downstream, but a thing that's taking those bits from upstream and turning them into a downstream product, and you're dealing with the past releases. So you're working with many versions of Kube, or kubectl, whatever you want to call it, right? And you have to deal with those challenges. So I'd be kind of interested to hear your story there, what that's like dealing with not only the community parts, but also, in some ways, going back and porting forward, all the different challenges or ways of working day to day.

Okay, so before I jump into that one, let me straighten out one thing about "kube cuddle" versus "kube C-T-L." I'm going to use both interchangeably.

Love it.

A couple of years back, if you're familiar, kubectl got a logo. We struggled with it, and talking with Phil and Sean at the time, who were leading SIG CLI, well, still are to this day, we figured out that maybe something like a cuttlefish would work for kubectl, which is cuddling the Kube logo. So if you haven't checked the logo, if you go to kubernetes/kubectl on GitHub, you'll see our logo proudly presented on the front page. But both names, even though we went with the cuttle, both are perfectly okay. I've seen lots of questions and debates, and it's actually something that we are asked almost every single time we talk about SIG CLI during KubeCons, for the past three, four years, I would say.

And now going back to your original question, I must admit that the fact that Kube decided to switch from four to three releases a year was a significant improvement from my point of view, because if you're thinking about just Kube, that was four releases in the past, currently three. But for me, as you said, that means double that. So it was eight releases a year, or six releases currently. Because for me, my life cycle looks like this: I'm done with, let's say, Kubernetes 1.22, which was released a couple of weeks back.
And I'm jumping immediately into OpenShift 4.9, which is based on Kubernetes 1.22 and will be released in a short while. And then immediately I need to jump on the track and start working on 1.23 already, and we will slowly be preparing another version of OpenShift. So we're constantly between feature freezes, literally chasing every single date, whether that's the Kube feature freeze or the OpenShift feature freeze, whether that's the Kube code freeze or the OpenShift code freeze. And juggling those dates is challenging at times, and it sometimes is overwhelming. But thankfully I have an amazing team, both upstream in SIG CLI as well as in SIG Apps, that does a lot of the work and can help me deliver pretty much any single feature, whether that's downstream or upstream.

So backing up a little bit further, when did you get your start in open source? How did you discover open source software in general?

Yeah, so with open source, the story goes back to Python, all the way back to my university years, like 20 years ago or something along those lines. I took a class on Python and I was like, yeah, well, not sure if that's something for me. A couple of months went by and I had an internship, and during the internship I got to work with Python heavily. And that's where I fell in love with Python. And over the years, I figured out, well, the community here, specifically the Python community, was providing this amazing tool for free. So I figured I wanted to give something back. My initial contributions were to Python itself. I think I did a couple of PRs to Python itself, specifically the IMAP and SMTP libraries. Over time, I also helped with bugs.python.org, which is the bug tracker for Python. And I think that was the initial story of how I started with open source. I was doing as much as I could in my free time, and I'm still trying to be active in Python, although life, work, and everything else is not always in line with my willingness to work on Python stuff.

It's interesting. Do you still get to play much with Python these days, or has it been a while?

I wouldn't say that I don't have the time. Every single time I'm working on something simple, where I want to scrape data or somehow analyze data, I'm going to reach for Python. Over the past years, whenever I was preparing some kind of demo for OpenShift, or presenting some ideas, I was always preparing an application, and that application always used Python under the covers, just because I wanted to give it a try. In the past, in the early days when we started shipping V3, I was also involved in Source-to-Image, which is the build technology for OpenShift, and I was the primary owner for the Python builder; I kicked that one off. So I've always tried to use whatever I built and make sure that the experience is actually legit, whether I should improve something and make it better, or whether the UX, simply put, is reasonable for a regular user.

Source-to-Image, I think fondly called S2I. I'm dropping a link to that in the chat if you're not familiar with it, folks. Yeah, the one thing, I want to know if you want to maybe jump back into kind of what's happening in the CLI space around Kubernetes and SIG CLI.
I don't know if you want to spend a little time talking about the plugin model, the exploration around krew, and then the integration with Kustomize and the plugin ecosystem in general. Kind of throwing a broad statement out there, sort of CLI topics. So kind of curious.

Right, so in SIG CLI itself we actually have three main sub-projects that we are overlooking. You did mention Kustomize. And I also mentioned earlier today that we are doing bug scrubs, and one bug scrub is actually tomorrow. It's around 6 p.m. Central European time, which is around noon Eastern, 9 a.m. Pacific. We'll be going through Kustomize bugs. We have a very talented group of people working on Kustomize and pushing it forward, listening to what the customers and users want to have added. There are cases where we are trying to simplify a lot of stuff for Kustomize, because it was a problem for some time that Kustomize kept moving forward, and before that we had decided that we wanted to ship kubectl with Kustomize embedded. The fact that Kustomize went so far ahead with features and capabilities meant we were left in kubectl with a pretty old Kustomize. And the dependencies unfortunately made the problem even harder for us to upgrade. So we had to refactor a little bit of Kustomize to be able to update the version. And Jeff did an amazing job here and worked tirelessly to bring the necessary changes into Kustomize and then update the Kustomize in kubectl. So that's on that end.

The next project is krew and the entire plugin model. That was basically in line with what the majority of Kube was doing. Kube decided that the core would be pretty much closed and we would open up a lot of places where we can inject or add additional capabilities. kubectl wasn't different. We looked at other tools, namely Git and other binaries, at how they implement their own plugin models, and we came up with the current plugin implementation: your plugin just has to have the kubectl- prefix in its name, and that will make it a plugin to kubectl. Out of that, Ahmet and friends figured out that it would be nice to have something to manage the plugins for kubectl. That's how krew got started. It's pretty popular, and we're very happy to have krew on board.

And lastly, a pretty new addition to the SIG CLI sub-projects is kui. It's a project initially started by IBM and driven by Nick especially, which is kind of a wrapper for kubectl. It has much richer capabilities for presenting the output of kubectl commands. So it's like a combination; it's not a web console, but it's definitely a much richer kubectl wrapper than normal kubectl. It has a live preview if, let's say, you start watching pods, and it allows a little bit more freedom around sorting and formatting the output of kubectl commands. So there's pretty interesting stuff there.

And lastly, because you did mention the plugins, during the work that we did on plugins, we extracted a library called cli-runtime, which exists on GitHub under Kubernetes, where we provide authors of plugins with a lot of the primitives for printing data, for reading configuration, et cetera. So that, first of all, your output, your way of doing stuff, is similar to what kubectl does by default, but it also handles a lot of the stuff for you, so you don't have to write anything from scratch when you want to, I don't know, parse kubeconfig, for example.
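To make that plugin convention a bit more concrete, here's a minimal sketch of a kubectl plugin built on cli-runtime. The plugin name kubectl-whoami and its behavior are invented for illustration; what's grounded in the discussion above is the kubectl- prefix convention and the k8s.io/cli-runtime primitives for kubeconfig and flag handling.

```go
// kubectl-whoami: a hypothetical kubectl plugin sketch. Any executable named
// kubectl-<name> on your PATH becomes `kubectl <name>`; this one just prints
// the namespace and API server it would talk to, reusing cli-runtime so it
// behaves like kubectl itself.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
	"k8s.io/cli-runtime/pkg/genericclioptions"
)

func main() {
	// ConfigFlags wires up the standard kubectl flags
	// (--kubeconfig, --context, --namespace, ...) so the plugin feels native.
	configFlags := genericclioptions.NewConfigFlags(true)
	configFlags.AddFlags(pflag.CommandLine)
	pflag.Parse()

	// Resolve the namespace the same way kubectl would.
	ns, _, err := configFlags.ToRawKubeConfigLoader().Namespace()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Build a REST config; a client could be created from this if needed.
	restCfg, err := configFlags.ToRESTConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("namespace: %s\nserver: %s\n", ns, restCfg.Host)
}
```

Dropped on your PATH as an executable named kubectl-whoami, it would be invoked as `kubectl whoami`, and because it reuses the shared ConfigFlags, it honors the same kubeconfig-related flags as kubectl itself.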
Do you have, is there a place listing kind of common or popular plugins that exist? I mean, I'm on the krew website right now. There are 154 plugins. I think there's a sort function, but, okay, you can sort by number of stars. I'll drop it in the chat. I guess, do you have any favorite, or one you use often, or one you want to plug?

My favorite, I would probably call out debug, which originally started as a plugin, but we're currently in the process of pulling debug in as a default command in kubectl. It's a rather lengthy process. If you're a plugin, you can obviously have faster iteration on your releases. If you're in the core, you basically have to follow what Kube does. But on the other hand, something that we've been working on for, I would probably say three years, maybe even four, is trying to simplify the kubectl code so that we can move the entire kubectl code to a separate repository. Even though, if you check out GitHub, you'll notice that kubernetes/kubectl exists as a separate repository, it's actually not the repository where the development happens on a daily basis. Kubernetes has a notion of staging repos. So if you look under the main kubernetes/kubernetes repository, there's a staging directory, and if you drill down, you'll notice there's a kubectl directory. That means we are publishing the contents of that directory into a separate repo. The goal was that we can ensure the libraries used within the staging repo are not using any of the dependencies from the main kubernetes/kubernetes repo, and that eventually we will be publishing from an entirely new repo. We're currently discussing how to do it, because a lot of challenges are still ahead of us around how to release. If you look at how Kube currently releases, it basically publishes all of the artifacts from a single repo. I was talking with SIG Release, I think that was last week or two weeks ago, about us wanting to publish the kubectl code, or basically the kubectl artifacts, from a separate repo. It will take a little bit of time still, but we're hoping that within the next couple of releases, maybe three, we will be able to publish from a separate repo. As soon as we reach the point where we have a separate repo, we will return to the discussion of maybe shipping kubectl faster than Kube itself. Because that was one of the goals: if we move to a separate repo, we will eventually try to cut the cord on releasing. Obviously there are some challenges coming from that, because currently we are required to support plus or minus one version, which is the default policy for all of Kube. If we start, for example, publishing kubectl every month, that means we need to make sure that the support matrix is not plus or minus one, but that we will be supporting four or five releases back and forth. So there are maybe not necessarily a lot of code changes required, but there's a lot of discussion that needs to happen around processes, mostly, for how to proceed with this approach. So there's a lot of work around that, but we're very hopeful with regards to it.

Sounds very promising, and kind of curious, there's a lot of work going on in there. So I appreciate it from you and all of your team and the community members that are doing it.
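A side note on the staging setup described above: because the kubectl code under staging is published to the kubernetes/kubectl repo as the k8s.io/kubectl Go module, the command tree can already be consumed as a library. This is a minimal sketch assuming the entry point as it existed around the 1.22 timeframe; the exact function has shifted between releases, so treat it as illustrative rather than definitive.

```go
// A sketch of embedding kubectl via the published k8s.io/kubectl module.
// The module is published out of the kubernetes/kubernetes staging directory,
// as described above; NewDefaultKubectlCommand is roughly the entry point the
// kubectl binary itself used around the 1.22 era.
package main

import (
	"os"

	"k8s.io/kubectl/pkg/cmd"
)

func main() {
	// Build the full kubectl command tree (get, apply, debug, plugins, ...).
	kubectl := cmd.NewDefaultKubectlCommand()

	// Run it against os.Args, just like the standalone binary would.
	if err := kubectl.Execute(); err != nil {
		os.Exit(1)
	}
}
```

This mirrors what the real kubectl binary's main function does, which is part of why decoupling the repo, and eventually the release cadence, is mostly a process question rather than a code question.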
I was kind of curious, as I think about it, if I wanted to develop a plugin, how would I look at what's available? And I think we talked about that. And what do I do to get started? You talked about this SDK that's available; I didn't know if there are any other tools or recommendations you might have, especially in light of making sure it works across multiple versions of whatever it's interacting with on the Kube back end.

Right. So we also thought about that one, and there are a couple of resources available. First of all, a shameless plug: we did a presentation during one of the past KubeCons about what it takes to write a plugin. If you search for my name and SIG CLI, I'm pretty sure you should be able to find it. I think that might have been Seattle or around that time. Additionally, within the main Kubernetes staging repo, again, we publish a repo that has a sample CLI plugin, and if I remember correctly, the repo is literally called sample-cli-plugin, similar to how there is one for sample-controller and, I think, maybe even a sample API server. So if you go to github.com/kubernetes/sample-cli-plugin, it's a very minimal plugin that allows you to switch namespaces permanently. But most importantly, it shows how to write a plugin, how to reuse the libraries that we ship, the cli-runtime I mentioned before, and how to build on top of the Kubernetes API and client-go to achieve what you need to build your simple plugin.

Popping a link to that into the chat right now for you. Makes sense. Thanks. I guess I was just thinking through some of the other pieces of the CLI. You mentioned Kustomize; is there anything you see coming down the road as other CLI integrations, or core features people are trying to work in, as far as sub-projects?

That's an interesting question. Honestly, I haven't seen anything new in that territory, although at the same time, with this many duties that I'm dealing with, and PRs and approvals and whatnot, both upstream and downstream, I'm not very closely following the whole area of either CLI or controllers. And it sometimes takes me by surprise, but if people show up at either SIG Apps or SIG CLI with something new, then yes, I will be aware. But nothing like that has shown up.

Sounds like a pretty full plate there. The plugin model really allows for anything to happen at this point too. So that's it.

Yes, that's true.

Sounds like that's the right thing to allow there. So, curious a bit more: you talked a little bit about your involvement in SIG Apps in the early days, as far as jobs, and I know you've recently become the co-lead of SIG Apps. So I didn't know if we'd talk a little bit about what's going on and the key things in SIG Apps these days.

Right. So, just as with SIG CLI, where there are a lot of moving pieces going on, there's a lot happening in the SIG Apps area as well. Most importantly, we're trying to align some of the controllers by adding capabilities that were previously only available in other controllers.
For example, the ability to say, oh, during a rollout I want to have this many pods unavailable, which is something we have always had in Deployments or DaemonSets; we're currently adding a similar capability to StatefulSets. Where before StatefulSets would always go one pod at a time, now you will be able to allow a little bit more unavailability, so that, for example, you can move faster with your upgrade. There are other issues that we're overlooking from a SIG Apps point of view as well. The biggest one that we're looking at in the very long term is trying to unify the statuses of all the controllers. If you've ever looked at the controllers, you need to know how to properly read a Deployment status, separately a StatefulSet status, separately a Job or CronJob or any other controller. There is literally close to zero common interface between those. The reason is that each of those controllers was written by a completely different person, so everyone had a different opinion on how the status should look. The biggest downside is that if you're building tools on top of the controllers, you have to write logic that knows: oh, I'm dealing with a Deployment, this is how I should interpret the status; if I'm dealing with a StatefulSet, well, the logic has to be different. So we're trying to figure out a way to combine the current statuses in all the controllers and make them somehow unified, so that you can write just one implementation and have the necessary information: whether your workload is just starting, whether it's progressing, whether it's done or still running. And that will depend; there are various different cases, because if you think about it, most of the workload controllers, so StatefulSets, Deployments, DaemonSets, their end state is that they are running. But if you look at, for example, the batch workloads, where you just run a task and the task has an end, their end state will be completed. So we need to figure out those common statuses somehow and present them in a unified way to users. We're slowly working on an enhancement for Kubernetes where we will try to combine those statuses, and then eventually, slowly over time, we will be implementing it. I'm positive that during the implementation phase, even though we've already spent a couple of weeks or even months looking at the statuses and trying to figure out something reasonable, additional edge cases will pop up and we'll have to modify the initial requirements we set for ourselves.

Yeah, it sounds like a pretty decent task. I've run into this multiple times; we're trying to build experiences around extensions and trying to get the status of what's going on, and it's challenging to write that kind of common tool to roll it all up. One thing I'm kind of curious about: you can say apps, and you can put anything under apps in a sense. So how do you define the scope of what really goes into SIG Apps, or how is it defined? I know you mentioned the workloads aspects of Kubernetes, but I don't know if you can talk a little bit about what all happens there, and when the decision is to keep punting things to other SIGs.

Well, that's a very good question.
So I'll probably refer to the SIG Apps charter. Basically, every special interest group within Kubernetes has its own charter, and as the name says, it lists who the chairs are, what the sub-projects are, what we do, and what our responsibilities are. If you look at the SIG Apps charter, we say that everything from the controllers all the way up to what runs on top of the platform is considered part of SIG Apps. So there are multiple topics that have gone through SIG Apps. Controllers are obviously the primary one, I would call it that way. So whenever you want to discuss any changes, whether to the API or the functionality of one of the core controllers, SIG Apps will be the place. Although some of the controllers are primarily owned by a different SIG; for example, Endpoints or Services are owned by the networking SIG. SIG Apps is mostly the controllers, so all the StatefulSets, DaemonSets, ReplicaSets, replication controllers, all of that, and then everything that runs on top of that. There was a lot of work around, and we still have, a sub-project called Application, which is a grouping primitive for a set of workloads. Just recently we had a very interesting presentation about a higher-level operator, an operator that is overlooking the dependencies between deployments. That was last Monday, if I remember correctly; the recording is up, and it was pretty interesting. And I remember that the people explaining the project mentioned they are currently working on open sourcing the solution, so there's hope that there will be more stuff like that available. If you look to the past, Helm was for a very long time one of the primary topics during SIG Apps calls, along with probably a couple of other topics, and it was sometimes hard or overwhelming. But I think at this point in time, the majority of the SIG Apps calls are devoted to the controllers. If there are no topics, we've started doing bug scrubs and PR scrubs and going through issues, because there's quite a big backlog of issues against the controllers, and we're slowly going through those and trying to make sure that people's voices get heard.

Yeah, I got involved in SIG Apps a long time ago, and we jokingly referred to it in the early days as SIG Helm because of how much Helm dominated as a topic. One of the things I was going to say is, if you look at the GitHub page that talks about SIG Apps, one of the things I think is great about it is the non-goals part of what it describes: it doesn't endorse one particular ecosystem tool, it does not pick which apps run on top of Kubernetes, it does not recommend one way to do things. I think that really helps clarify what the group is there to do. Even to the point of the app definition, I know there are the label recommendations that SIG Apps actually oversees as well. So overall, kudos; it's a complicated topic that I think the SIG has handled pretty well over the years, as far as the scope of applications.

Yeah, exactly.

So we only have a whopping 15 minutes left, and obviously we don't have to take the whole hour if we don't fill it, but what do you think are some unique challenges of Kubernetes today? And for people starting out, the sharp edges they might catch themselves on, potentially, right? Like if they're getting going with Kubernetes.

Hmm, that's a very interesting question. For me personally, I think the biggest issue is the volume of the changes.
And something I'm personally struggling with is how to keep up with all of the changes, all of the requests for reviews. I know that there are multiple issues and pull requests with my name on them. And whenever I'm talking with people during KubeCon or during SIG meetings, I'm always asking them to reach out to me on Slack, because my GitHub notifications in my email are way off the charts. And I'm always promising myself to keep up with those. And it happens every, I don't know, every couple of months: okay, I'm going to clean my inbox, I'm going to go all the way to zero. And then only two, three days later, something happens, there are other topics I need to deal with for two, three days, I neglect those emails, and they pile up very quickly. And then it just goes on for the next couple of weeks, and then I have to go back to the start: oh, I need to go through, I don't know, a couple hundred emails and figure out which I care about and which I don't. So that's unfortunately a big problem for me personally. That's why I'm always asking: if you care about your PR, please reach out to me directly on Slack. I don't mind. And if you haven't heard from me because you pinged me once or twice, ping me again in a week; I will respond. I've had many people ping me literally every week for an extended period of time, like a really extended period of time, and eventually I got to it, and every time I apologized for neglecting it. But I always try; if I don't respond within a week, feel free to ping me, I don't mind. Honestly, I don't mind, because there's life, there are so many things going on in parallel that it just might slip my attention, or I plain forget about stuff. So I don't mind being pinged again and again about PRs or reviews. That's how it works.

Yeah, I couldn't imagine your inbox and the backlog you have there. I used to use some travel dead time to catch up on those things, and I don't have that anymore. But I also don't have travel dead time. So anyway, the thing I was thinking of is kind of that involvement with the community and how you've adjusted, because I think back, I remember in 2019 in San Diego at KubeCon, I think the three of us were probably about the same height, like we're all pretty tall, and we were talking, people walking by, just hanging out there by the exhibit hall. That's such a great way to connect with people. And with KubeCon North America coming up again, I was kind of curious how you've adjusted to engaging with the Kube community in more of a virtual presence.

So, since day one I've been remote at Red Hat, but as we were talking about with Chris, I do enjoy meeting people in person. In the early days, there's a pretty big Red Hat office in the Czech Republic, which is a two, two-and-a-half-hour drive from where I live, and I was visiting the office every single month, just for a day, but that was nice. And even though I'm an introvert and I prefer sitting at home, with just my monitors in front of me, at the same time I do enjoy, from time to time, actually going out and talking to people. So, even though we've been stuck at home for the past almost two years, I do miss KubeCon.
I'll be missing it even more in two weeks; if I remember correctly, at the beginning of October, I can't join folks in LA, but I would love to be around, because it is just nicer to talk with people. I think most of the communication, most of the discussion, can happen, that's not a problem, either through Slack or Zoom; that works. But the bonding part, you know, having a beverage of your choice with a person and just chatting about silly stuff, I don't know, a Disney World trip, visiting parents, about kids, about life in general.

Poor transportation in Detroit.

Poor transportation in Detroit, exactly. I mean, literally anything. And most often, that's not even work or project related. That would make a huge difference. And even though KubeCon did a pretty good job with virtualizing those events, it's not the same. I do miss the interaction where I can stand in front of people and talk with them, ask them questions. Instead of doing any kind of presentation for SIG CLI back in San Diego two years ago, Phil, Sean, and I literally stood there and answered questions about SIG CLI, about everything, for 90 minutes. You can't do something like that with the current pre-recorded sessions for KubeCon. It is not that easy. And even though there are Slacks, there are some virtual chats, it just doesn't work that way. The ability to come up to a person, ask a question, poke them, and, most importantly, have the hallway conversation, that's invaluable. So I do miss that part. And even though I usually attended KubeCon both in North America and in Europe, that was enough for me for a year. I did those two conferences, then DevConf as well in Brno, and I was done for the year, three events. I did more once or twice, but I was like, yeah, that's way too much for me. I'm okay with doing a conference per quarter. That's enough for my introvert character.

So yeah, it's a good point. I'm flying out next Saturday to KubeCon. Yeah, I remember being in San Diego at the contributor summit, and there were all kinds of technical issues. It's much easier to say, hey, you don't know Git? Let's just sit down and run through it real quick; this is the bare minimum you need to know, now you're a good contributor. And having that face-to-face discussion where there's body language and everything else, it's a lot easier to interpret the work that's happening; it is very different. And even a live stream, like we are on right now, it's not quite the same as sitting down at a table with our computers and talking about a problem, right? It's very much a, hey, we're gonna talk, we're gonna work, we'll talk some more, we'll work some more, right? It's not the same flow.

Yeah, exactly.

Yeah, I even have to confess, though, even in San Diego when I was there, I'm not a crowd person either, a bit of an introvert. So the keynote sessions, I don't go wait in line or whatever; I stream them from my room. Yeah, otherwise there I am in San Diego, looking around, squished in between people.

So for me, being an introvert, I actually sat through all of the keynote sessions. And actually, if I remember correctly, the keynote sessions were the only ones I attended, because for the remaining ones, I usually ended up talking with folks in the corridors or in the booth halls, or basically wandering around and eventually just popping in for my session or one or two other sessions.
So the keynotes were the ones where you could actually find me, plus the ones where I was presenting. Otherwise, I usually try to set up my schedule somehow: oh yes, these are the things I want to see. And then I probably don't see, I don't know, like 23% of what I marked as, yeah, I want to check this one out.

Yeah, it's the same here. I get a few sessions in and that feels like about it. Yeah, I spend more time talking to people and figuring things out than I do actually listening to people talk, right?

Exactly, exactly.

Someone just mentioned in chat, Rapscallion Reeves: at least most people at those conferences tend to be introverts too, so you're kind of amongst your own people. And then, yeah, Tanawa 3 points out the acoustics in the exhibition hall in San Diego were awful. If anybody remembers, it was just constantly loud in this big domed space, with music playing in the background and 20,000 of our favorite friends, or whatever, I can't remember the attendance numbers, but it was just loud.

I mean, I remember when we were in London, I think that was the first KubeCon in Europe, and there were like a hundred and something people. We barely had two or three rooms, just a handful of people. Berlin was pretty much similar, and then it just grew exponentially over the years. That was crazy.

Yeah, and this KubeCon, I feel like it's gonna be a little bit like that, a little smaller, more quaint environment compared to the tens of thousands we're used to.

Yeah, but the downside is that we will be missing a lot of the contributors to the core, because for different reasons they're not traveling to the US, or they're not even traveling within the US. I spoke with many people, and it's not as easy, or they are just not willing to do so at this point in time.

Yeah, I was just talking to a friend on Twitter a few minutes before the show started, before we got on air, and it was like, yeah, no, they can't come because they're in the UK or Berlin or whatever. And yeah, the lack of international folks is going to be very obvious. It'll feel very US-based, right? As opposed to an international conference where people kind of gather. But with two minutes left, there are no questions in chat for, oh, there is one question in chat, I do want to ask it, sorry. What's the best way for random people slash developers to help out with Kube projects? I have my opinion, but I'd like to hear yours, or Steve's for that matter.

Honestly, I always say that it's easiest to show up during one of the calls, whether that's SIG CLI or SIG Apps, especially the ones that are going through bug scrubs. So for example, we will be going through the Kustomize bug scrub tomorrow, or there's a SIG CLI bug scrub two weeks from tomorrow. Those are the places where we literally ask: is anyone interested in working on this one? Is anyone here? It's a good place for asking questions. If you're new, even during the SIG Apps or SIG CLI calls, we try to give a little bit of space at the beginning for everyone to introduce themselves, who they are, what they want to work on. So it's perfectly fine to also show up for any of those calls and explain: oh, this is the issue that I've noticed, I want to work on it, can you help me? And just ask questions. There are no dumb questions about any of those topics. So it's the right place to ask. That's usually my simple answer.

Yeah, and that's my answer too, to be honest with you. Steve, I'm curious if you have a different opinion.
Yeah, I think all contributions are very valuable across the board. So, talking about getting started early: looking at some of the setup instructions that might be wrong and helping fix those up, opening bugs for issues you see, for different things that seem a little off. All of those are a good way to get started. I think one time I got started, I wasn't even a Go coder, and I fixed one of the CLI issues just because the newlines were screwed up. I knew how to insert a newline, compile the code, and contribute that back. So it was like, I could just fix it; it was easy enough, and instead of describing the problem, it's like, here you go. So I think little things like that are a good way. It just depends on your context and background.

Yeah, I mean, I started off in SIG Docs very early on and then made my way over to ContribEx, and that's kind of where I live now. That's fine with me, and I feel like that'll work for anybody else too.

Yeah, there are multiple options, so just pick one.

Yeah, and if you really want to talk about it, feel free to ping me on the Kubernetes Slack, folks. Chris Short, wide open there. I'm sure Maciej and Steve would also welcome your questions.

Exactly. Definitely.

And yeah, so thank you, Maciej. Thank you, Steve, for guest hosting. Thank you, audience, for attending. Really appreciate everybody here on this call, on the live stream out there.

Thank you very much. It's good to talk.

Yeah, coming up later today, we have the Call for Code for Racial Justice. We'll be talking about Take Two. It's a project that will help folks analyze data about the justice system in a way that will make people rethink sentencing and things like that. So it should be an interesting call, and lots of fun stuff from the IBM team on the Call for Code side. So thank you, everyone, for joining, and have a great day and stay safe out there.