picture. Where does Kubernetes stand right now? What does the ecosystem look like? What are the major challenges faced by people adopting Kubernetes, and what's really going on in the project itself? And finally, we will also talk a little bit about what is new in Kubernetes 1.24. So let's get started. A little bit about me. My name is Nikhita and I'm a staff engineer at VMware. My day job, luckily, is to work on the Kubernetes project itself, and I've been working on it for over four years now. I'm also a former member of the Kubernetes Steering Committee, which oversees the governance of the whole project. We have different levels and committees and groups in the community, and the Steering Committee is the topmost body, which takes the leadership role for the whole project. I also hold a few other roles in the community, like being a technical lead for the SIG related to automation, policy, and contributor experience, and I'm also a CNCF ambassador. You can find me on GitHub as nikhita and on Twitter as @TheNikhita. Okay, enough about me. Let's get back to Kubernetes and look at some of the recent trends we've seen in the ecosystem. In the CNCF annual survey, we noticed that 96% of organizations are either using Kubernetes or evaluating it, which honestly is an all-time record high since the survey started. Before, Kubernetes was seen as a niche technology that only some companies could use: the bigger or medium-sized companies would use it, while the smaller companies didn't need to. But now it definitely shows that Kubernetes has crossed that adoption threshold to become a mainstream, global technology. It is everywhere. We've also noticed a 67% increase in the number of developers using Kubernetes, and that number is now a whopping 5.6 million developers. Just think: there are almost 5.6 million people like you working on Kubernetes all across the world.
At the same time, another very interesting fact that stood out in the surveys was that a significant portion of back-end developers said they've never heard of Kubernetes and aren't sure what it is. It goes to show that developers may not even realize that many of the popular services they use run Kubernetes under the hood. It's become somewhat like Linux right now, with that same kind of ubiquity. This shows that Kubernetes is finally becoming boring, and honestly that's good for the project and also for the ecosystem, because that's exactly what Kubernetes set out to do. Surely there's more to do, but we are getting there. Along the same lines, Kubernetes is becoming less visible as the technology evolves. Organizations are using serverless, and they're using managed services more intensively than in the past, so users don't necessarily need to know about the underlying container technology anymore. In fact, we even see that 79% of respondents use certified Kubernetes hosted platforms, and 90% of them leverage some form of cloud managed services at some point. We can also see from Kubernetes mentions in social media that the mentions are increasing year on year. The hype is still there; it's not gone yet. The adoption is increasing, and the hype is increasing too. Another very interesting thing to notice is that since Kubernetes is maturing into a mainstream technology, more organizations are slowly moving up the cloud-native stack, leveraging Kubernetes APIs and interfaces. Like I said, Kubernetes has become boring, so they are looking to go to something that's more fun and interesting to work on. This is particularly apparent with container runtimes like containerd and CRI-O (I think all of you must have heard of the containerd project by now), service meshes like Envoy and Linkerd, and monitoring tools like Prometheus.
Given that there is a 43% increase in Prometheus adoption and 53% for Fluentd, it shows that organizations are looking to tap into open source technologies. Prometheus and Fluentd are both open source technologies used to advance observability practices and capabilities. And since almost all of the CNCF projects are open source, we can see that organizations are looking to adopt those and eventually move into open source technologies. One emerging topic in the container space is edge computing. Two out of three edge developers use Kubernetes, and most of this seems to be limited to large and medium-sized organizations right now. IoT, which is the key driver for many edge solutions, is also on a steep trajectory and is expected to grow by approximately 19% per year. This underlying growth of IoT will also continue to propel the total usage of Kubernetes. There are also a lot of exciting challenges in the edge computing space, because you have constraints like memory, CPU, and so on. So if this concept of IoT and edge excites you, I'd highly recommend looking into Kubernetes and the edge computing space. Okay, now that we've talked about how Kubernetes adoption is increasing, let's also touch on some challenges that folks might face while doing so. One big business challenge that organizations face is the lack of expertise. It honestly takes a steep learning curve to build, deploy, and manage Kubernetes effectively. This is mainly due to the lack of operational precedent with it. Even though we've seen that Kubernetes is being adopted across almost 96% of organizations, it is still a new technology compared to others, so there is a lack of operational precedent. To break through these barriers, organizations can take up training programs, including CNCF certifications, and work with partner organizations that have specific expertise and solutions.
Along the same lines, it's also becoming harder for organizations to hire skilled Kubernetes engineers. But hey, there's also an upside: if you are a Kubernetes engineer, it is a very good time for you, because your skills are in high demand. With more organizations moving to the cloud, there seems to be an emerging trend of lack of internal alignment. With multiple stakeholders involved, decision-making around how to integrate and manage Kubernetes can become difficult, so it is important to set a cloud strategy covering aspects like cloud financial management, operations, security, and compliance. Now, scalability. It's kind of ironic that Kubernetes improves scalability, yet I'm saying that one of the common challenges faced by organizations is, in fact, scalability. This is mainly due to the complexity of Kubernetes. Most organizations have a hard time with complex installation and configuration. Honestly, if you've tried doing it from scratch yourself, you know how hard it is. But this problem aggravates when there are multiple clouds, multiple policies, and designated users involved. All of this can affect the expansion or scalability of the organization. So how do you solve this? It can be overcome by using a dependable Kubernetes solution or product. Now let's dive a little deeper into the security aspects, which according to me is the most fun part. Let me start by saying that Kubernetes itself is not some inherently vulnerability-ridden technology. The security issues affecting it are similar to those that affect most other technologies that are relatively new to users. Here are some top security concerns. The first one is possible configuration issues involving container images, namespaces, runtime privileges, or even unnecessary exposure of secrets baked into images, because all of this can lead to risk exposure. Handling misconfigurations is very important.
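To make that misconfiguration point concrete, here is a tiny illustrative check over a pod spec, written as plain Python over a dict rather than against any real Kubernetes client or policy engine. The field names mirror the pod spec schema, but the helper itself is a hypothetical sketch, not a real tool; production setups would use a proper policy engine instead.

```python
# Hypothetical sketch: a static check over a pod spec (as a Python dict)
# for a few of the misconfigurations mentioned above. Real policy engines
# cover far more; this only shows the idea.

def find_misconfigurations(pod_spec):
    """Return a list of warning strings for common risky settings."""
    warnings = []
    for container in pod_spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        sec = container.get("securityContext", {})
        if sec.get("privileged"):
            warnings.append(f"{name}: runs privileged")
        if "limits" not in container.get("resources", {}):
            warnings.append(f"{name}: no resource limits set")
        for env in container.get("env", []):
            # A plain 'value' for a secret-sounding variable suggests a secret
            # baked into the spec instead of a Secret reference.
            if "value" in env and any(
                word in env["name"].lower() for word in ("password", "token", "key")
            ):
                warnings.append(f"{name}: possible hardcoded secret in env var {env['name']}")
    return warnings

# Example pod spec with all three issues present.
spec = {
    "containers": [
        {
            "name": "app",
            "securityContext": {"privileged": True},
            "resources": {},
            "env": [{"name": "DB_PASSWORD", "value": "hunter2"}],
        }
    ]
}
print(find_misconfigurations(spec))
```

Running this flags the privileged container, the missing limits, and the hardcoded credential; the same spec with a Secret reference and limits set would come back clean.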
Now, security vulnerabilities. I think a lot of you know that these are obviously a huge cause of security risk. So what do security vulnerabilities include? They include exploits on containers, like malware installation and privilege escalation, to name a few. These can exist in production, in accessible container registries, and even in third-party admission controllers in Kubernetes clusters. Lastly, there are security concerns where organizations think of security only as an afterthought of the Kubernetes implementation. I know we've all been there, but it is very important that we maintain compliance. There are security compliance requirements for containers, like the CIS benchmarks, that must be taken into account. There are a lot of other compliance requirements too, but I won't go into detail here. Okay, now that we've seen what the ecosystem looks like today, let's dive deeper into the Kubernetes project itself. What are some of the achievements we've made over the last year? I'd say one of the biggest ones is feature maturity and stability. Like I've been saying, Kubernetes has become more boring, and it's been able to get more boring only due to maturity and stability. A lot of the Kubernetes SIGs (I'll be talking about SIGs a lot in this presentation; SIG stands for special interest group, and Kubernetes is divided into a lot of special interest groups, with each SIG responsible for a particular set of components) are continuing to drive long-standing beta features to graduate to stable. Some examples you might have heard of are IPv4/IPv6 dual-stack, which graduated to stable in 1.23, and generic ephemeral inline volumes, which also graduated to stable in 1.23. Now, showing up and sticking around. This sounds like a very simple phrase, but let me explain how important it is: climbing the contributor ladder.
The contributor ladder works like this: you start out as a new contributor, then you contribute a little more and we call you a member. Once you're a member and you're contributing even more, participating in reviews, and other reviewers also trust you as a reviewer, you become a reviewer, and so on. So you basically move up a ladder, and that's called the contributor ladder. One of the biggest challenges we've been facing is that even though the Kubernetes project has a lot of contributors, we don't have folks who are consistently moving up the contributor ladder. Now let's talk about how we've solved this challenge, in some sense. It's still not perfect, though. Okay, coming back to the contributor ladder. Climbing this contributor ladder is a trust-building exercise as much as it is a skills exercise. Sticking around (we use a phrase called "chopping wood and carrying water") is the main formula for growing leaders in the project. You need to stick around, you need to do the actual work, and you need to show up. Now, how did we solve this problem, in a sense? Let me take the example of SIG Docs. It's an amazing example of an intentional contributor ladder growth effort, because they grew their contributor base and their reviewer base in 2021. How did they do that? They introduced a shadow program for pull request handling. There were a lot of pull requests coming in but no one was taking a look at them, and new contributors would get discouraged because their pull requests were not getting reviewed, and it would just lead into a vicious cycle. So they set up a rotation where a designated person would be wrangling these PRs and looking at them on a regular basis. They also set up a shadow program.
So there would be a lead doing this work, and there would be a mentee. You can think of it as a mentor-mentee system, where a lead is responsible for the actual work and a shadow learns from this lead how things need to be done. So they introduced a shadow program for PR wrangling and dedicated more time to being active in the SIG Docs Slack channel and answering questions from new contributors. This eventually helped grow the community. Not only this, they also worked on a leadership transition strategy to bring community members into leadership roles via a specialized six-month group mentorship program. From this, they were able to cultivate leaders for the SIG and some of its subgroups, adding new chairs and tech leads. In fact, one of them is also from India, and we'll see who these people are in a minute. Now, every group in Kubernetes has the responsibility to make sure that we're putting our best foot forward with supply chain security. Security is very important for us. In particular, SIG Release, SIG Auth, and SIG Security worked hand in glove to drive sustained efforts in this area, including artifact signing, compliance with SLSA Level 3 standards, and improving the end user security documentation. So our main goal has been to amp up Kubernetes security. Now, there are plenty of processes, tools, and policies put together over a project's lifecycle that eventually need to be phased out for whatever reason. We've seen that in all organizations, and it is also true for the Kubernetes project. A contributor pain point that we've had in the codebase for a while is Bazel. If you've not used Bazel in the past, be happy for yourself; it is very complicated to use. Essentially, it's a build system. So the crews in SIG Testing and SIG Release put in a lot of time and attention on removing Bazel from core Kubernetes.
It still exists in a few places, not in core Kubernetes but in other parts of the ecosystem, and we're planning on removing it there too. And finally, SIG Windows. It has made amazing progress in growing Windows support in the ecosystem, with efforts like defining operational readiness standards for Windows. Now, these are all the achievements the project has made over the last year. Looking at all of these, there are a few themes or trends. Let's see what they are. One is that we're trying to prioritize quality. Why? There's actually been an increase in regression-related bugs in the last few releases. Many of these regressions are related to two types of changes. One type is changes that add features or fix unrelated bugs in areas of Kubernetes that are too complex and under-tested. Like I said, it's a huge codebase, and there are some areas that are too complex, understaffed, and under-tested, so only a few people know what is actually going on there. Sometimes when another contributor comes along and changes things, it breaks and leads to regressions. The other type is changes that were intended to be mechanical refactors but accidentally ended up modifying behavior. To fix these problems, we are tracking the health of existing components and developing more specific test plans. Also, when we're adding new reviewers, we are being very careful that these reviewers have the necessary experience and can actually review code properly. So we're prioritizing quality above all. The other theme for us is growing independent contributors. Now, what does an independent contributor mean? We have a lot of different kinds of contributors. There are folks who work on Kubernetes as their day job; that's people like me.
And then there's also the independent contributor base: folks who are not paid to work on Kubernetes but do so in their own time because they like it. We're trying to grow the independent contributor base by connecting folks to jobs. If you go to the CNCF jobs website at jobs.cncf.io, the job listings also indicate the percentage of time that the employer would support for upstream activities. For example, if I'm an employer and I'm listing a job there, I would say, okay, if you apply to this job, 50% or 20% of your time can go to working on upstream Kubernetes. That way you can filter which jobs you want to apply for, and this will eventually help grow the contributor base. And that's something that we really, really want right now. Another thing is niche contributor documentation. Like I said, Kubernetes is very huge, with a lot of complex areas, and so its contributor documentation also needs to be big. And it is pretty big. We are starting to take better measures to document more complex areas and keep things up to date. We sadly still need to do more, I think, but we're getting there. Honestly, if you're looking to contribute to Kubernetes, writing documentation is an excellent way to get started. And finally, burnout. It's become an industry-wide problem now, with the pandemic and so many things happening all over the world. I'm sure most of you might have been through burnout at one point or another, unfortunately. But we need to solve it together. There is a mix of reasons why contributors are burning out, and we are still unsure of the exact solution to this problem. But at least we're constantly talking about it. In fact, we also reduced the release cadence, from four releases a year to three, to make things easier and more sustainable for contributors and users. And we are keeping our doors open for contributors to have these discussions with us.
So if you are feeling the symptoms of burnout, or if you feel burnt out already, feel free to reach out to SIG leads or leaders in the Kubernetes community, and we'd be more than happy to talk to you about this. Now, we've also identified some areas of growth, opportunities, or needs. Let me actually take a step back. We're talking about project health, right? So how do you define project health? Some SIGs are led by open source veterans, people who have worked in open source for a long time. They can quickly identify areas of their components that need help, they're able to tell stories about what's flourishing, and they're pretty quick in doing things. So those SIGs have better health compared to some others led by folks who are slightly newer to open source, and that's totally fine. But since different people have different levels of experience, there needs to be a standard way to establish universal indicators of project health, especially in a project as large and diverse as Kubernetes. That is one area where we are still working on defining what project health even means. We do run annual reports, where we ask all SIGs to list some statistics: how many contributors do you have, did you help grow these contributors, what features did you work on, how many features did you graduate to stable, things like that. We're using all of this data to create a summary report, and with it we are trying to gauge the overall project health. But there's still more to do; we're still trying to define the universal indicators of project health. Now, if you've been watching open source news over the last year, supply chain security has made headlines. According to the OpenSSF and other security groups, code reviews are an important piece of prioritizing security.
If you don't do code reviews right, bugs and vulnerabilities can sneak in, so doing code reviews right is very important. But with burnout and other factors, people aren't finding enough time to work on it, and it's been hard for us to grow the number of reviewers. So that's another growth area we've identified: we definitely need more reviewers. We are trying out some strategies right now, but if you have more ideas, feel free to reach out to us in SIG Contributor Experience. Now, only a handful of the most active contributors will tell you that they're working upstream 80 to 100% of the time. And in fact, what the project needs right now is senior people. We have a lot of junior folks joining in, and that's totally fine, that's amazing. But you also need an equal number of senior people helping out in the project. Most senior folks also have day jobs at big companies or small companies, and they're just not able to dedicate time to work on open source because they have a ton of other responsibilities. We're working with the CNCF governing board to see if we can develop some incentives and long-term strategies to fix this. But having fewer full-time folks and fewer senior folks is turning out to be a huge problem for the project. Okay, now finally, let's look briefly at the Kubernetes 1.24 release. 1.24 had a strong focus on beta and stable features, and it involves some pretty major changes. Let's see what those are. One of the biggest changes is that Dockershim has been removed from the kubelet. So from 1.24 onwards, it is recommended that you move to a container runtime like containerd or CRI-O. This change made a lot of noise when it was announced, along the lines of "Kubernetes no longer supports using Docker as a container runtime". But don't panic: this change has no impact at all for most users of Kubernetes.
If you use a managed Kubernetes service, this change should not impact you in any way. If you manage your own cluster and still use Docker as the container runtime, like I said, moving to containerd is important, but it's fairly simple to do, so you don't need to worry too much about it. New beta APIs will also not be enabled in clusters by default. But remember that this applies just to the APIs, and only to new ones. Existing beta APIs and new versions of existing beta APIs will continue to be enabled by default. Another interesting thing: the kubelet offers a new Prometheus metric that allows cluster operators to count out-of-memory events that happen in each container running in their Kubernetes clusters. The best practice is to set memory limits for each container, but when software does not run as expected, it can reach this limit, and when that happens, the kernel kills the faulty process. Finding out exactly what happened is not easy, but this new metric will definitely help with that. Now, no secrets by default for service account tokens. This change sounds scary, but it only impacts Kubernetes users who use the long-lived service account tokens that Kubernetes stores inside secrets. Up to Kubernetes 1.23, creating a service account in a cluster resulted in Kubernetes automatically creating a secret with a token for that service account. This token never expires, which can be useful, but it is also a security issue. Starting with Kubernetes 1.24, these secrets will no longer be created automatically. Now, being able to load a sidecar that checks the health of persistent volumes is a welcome addition. Cluster administrators will be able to react better and faster to events like a persistent volume being deleted outside of Kubernetes. This will absolutely increase the reliability of Kubernetes clusters. I'd also like to take a minute to celebrate some Kubernetes contributors from India.
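Coming back to that new out-of-memory metric for a moment: since the kubelet exposes it in the standard Prometheus text format, tallying events per container is straightforward. Below is a minimal Python sketch; it assumes the metric is named `container_oom_events_total` with a `container` label, and the sample scrape is invented for illustration, so treat both as assumptions rather than a definitive reading of the 1.24 release.

```python
import re

# Invented sample scrape in the Prometheus text exposition format; the metric
# name and labels below are assumptions for illustration.
SAMPLE_SCRAPE = """\
# HELP container_oom_events_total Count of OOM events observed for the container.
# TYPE container_oom_events_total counter
container_oom_events_total{container="api",pod="api-7d9f"} 3
container_oom_events_total{container="worker",pod="worker-1"} 1
"""

def oom_events_by_container(scrape_text, metric="container_oom_events_total"):
    """Sum the counter per 'container' label from a text-format scrape."""
    totals = {}
    for line in scrape_text.splitlines():
        # Skip HELP/TYPE comments and unrelated metrics.
        if line.startswith("#") or not line.startswith(metric):
            continue
        labels_part, value = line.rsplit(" ", 1)
        match = re.search(r'container="([^"]*)"', labels_part)
        if match:
            name = match.group(1)
            totals[name] = totals.get(name, 0.0) + float(value)
    return totals

print(oom_events_by_container(SAMPLE_SCRAPE))  # {'api': 3.0, 'worker': 1.0}
```

In practice an operator would let Prometheus scrape the kubelet and alert on increases in the counter rather than parse text by hand, but the shape of the data is exactly this.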
Dims, another prolific Kubernetes contributor, also did a version of this at KCD Bangalore 2020, and I wanted to do a follow-up on it. While working on this, I realized that we have a lot of new contributors who joined us in the last year. This is in absolutely no particular order, but let us see who they are. First, I'd like to start with Tarupia, who is also from Chennai. He has contributed to the E2E framework and a few Cluster API libraries. Anubha (I could not find a picture of them) has been instrumental in SIG Contributor Experience and SIG Docs. Nikhil has a lot of PRs to his name in kubebuilder. Sainthini, who's also one of my favorite contributors, works on one of the Cluster API providers. Miha works on the Gateway API. Karthik Sharma has contributed to storage and CSI-related projects. David Vrata is also one of my favorite contributors, and he's been taking leadership roles in SIG Contributor Experience. Harsha has contributed to the E2E framework. Lothan, Dipto, Getika, and Ashutosh have worked on Cluster API and its providers. Priyanka Sagu has taken leadership roles in release teams. She's also the enhancements lead, so she is the lead for making sure which features get into 1.25. She is also mentoring a few other folks to take up this leadership role in the next few releases. Azitthi is a very, very prolific contributor from India. Okay, I won't go through all of these names here because they were also talked about in previous KCDs. I would like to give a shout-out to Divya, who is leading SIG Docs as well. But yeah, it's very nice to see so many people contributing from India, and I hope you all consider joining in as well. I'd love to see your faces and your names included in these slides next year. So yeah, if you want to get involved in any specific area (I talked about kubebuilder, I talked about CSI and also Cluster API, so there's a variety of projects), please feel free to reach out to these folks.
I'm also happy to answer any questions myself, so please feel free to reach out to me as well. Yeah, thanks for having me here. I hope you enjoyed the talk, and please feel free to reach out to me with any questions even after the conference. You can find me on Kubernetes Slack if you just search my name. So yeah, thank you again and hope-