Hey everyone, welcome to today's CNCF live webinar. We are gonna get everybody settled in and give folks another minute or so to log in. So pop your name in the chat, tell us where you're watching from, give us a wave, and we'll get started shortly. Love it, very active chat today. Hope y'all are ready with your questions. Okay, I'm gonna go ahead and get us started. Give me one sec to pull out my script. Here we go. Good morning, everyone. I am Libby Schultz. Thank you for joining us. Welcome to today's live webinar with CNCF, the Kubernetes 1.29 Release. We're all very excited. I will be moderating today's webinar. I'm gonna read our code of conduct and hand over to Priyanka Saggu, Nina Polshakova, and Carol Valencia from the Kubernetes 1.29 release team. A few housekeeping items before we get started. During the webinar, you're not able to speak as an attendee, but you've all gotten familiar with our chat box. Please leave your questions there and we'll get to as many as we can. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording will be posted later today to the CNCF online programs page at community.cncf.io under online programs. It's also available via your registration link, and the recording will be on our online programs YouTube playlist. With that, I will hand things over to Priyanka, Nina, and Carol to take it away and get us started.

Hey everyone, and thank you, Libby, for the introduction. I'll start moving through our slides, and thank you and welcome to everyone in the chat. Starting with a round of introductions: my name is Priyanka Saggu. I'm the release lead for the Kubernetes 1.29 release cycle. I work at SUSE as a Kubernetes Integration Engineer.
I am also a technical lead for the Kubernetes Special Interest Group for Contributor Experience and a GitHub admin for the Kubernetes project and subprojects. Nina? Yeah, I can introduce myself too. I'm Nina Polshakova. I'm the Kubernetes 1.29 enhancements lead. I work at solo.io on the Gloo Platform team, and before the 1.29 release I was an enhancements shadow for the 1.27 and 1.28 releases. Okay, I think I can go next. My name is Carol Valencia. I worked on previous releases in other streams, and right now I am the communications lead. I was also a shadow in previous releases, and I think that's all for me.

Thank you, Nina and Carol. And with that, I'll move to the next slide just to say a big thank you to our 1.29 release team. We had 40 people on our release team this time, and I just want to say a big, huge thank you to everyone, every shadow and every lead, for helping me throughout the release and for helping us get a very successful release out. So thank you to our enhancements lead, Nina; release notes lead, Frederico; CI signal lead, Vioam; communications lead, Carol; docs lead, Kat Cosgrove; bug triage lead; all my four release lead shadows, Angelos, Mickey, Dodalpho, and Mejha; and finally our emeritus adviser, Sander. Thank you to everyone.

Also, like every other release previously, we have a theme for 1.29: we are calling the 1.29 release cycle Mandala. Mandala means universe in a few languages, including Hindi, written in Devanagari script. The release logo is actually inspired by an Indian art form called Mandala itself. And special thanks to Mario Jason Braganza, who is also one of our Kubernetes contributors; he helped us create this logo. So thanks a lot. With that, I'll pass to Nina.

Yeah, so first I'm going to give an overview of the Kubernetes release process. For anyone unfamiliar with our release process: for larger enhancements, the enhancement author writes a KEP, which stands for Kubernetes Enhancement Proposal.
And this goes through the enhancement review process. Once the enhancement is reviewed, it needs to get approved and merged into the Kubernetes enhancements repo by the enhancement freeze. Later in the release, there's a separate code freeze date by which all the code changes for that enhancement need to be merged in. So there are two separate freezes in our process: the enhancement freeze and then the code freeze. And go to the next slide.

So in 1.29, we had 49 enhancements graduate, which is very exciting: 19 alpha enhancements, 19 beta enhancements, and 11 stable. At the beginning of the release, there were over 70 enhancements that opted into the release. If you go back, yeah, thanks. But as the enhancement freeze and code freeze deadlines passed, it went down to 49. And this is, for reference, in line with and about the same as the 1.28 release, which had 45 enhancements graduate. So the numbers are about the same for 1.28 and 1.29. If you want more details about our deprecations and removals, if you go back again, one more slide back, the link here, yeah, covers all the upcoming changes in 1.29, and more information about the release process can be found there.

Because we have 49 enhancements, if you go to the next slide, we're only gonna focus on the major themes. So we'll highlight some of the major themes in the release, but not go through every single enhancement, because there are a lot of them. The first enhancement I'm gonna highlight is in-place update of pod resources. This is KEP 1287. For some historical context, this has been a very long-awaited enhancement. I think it was first requested in 2015, and then the discussion and the initial creation started in 2019 as part of the 1.17 release. In 1.29, this feature is staying in alpha. What it means is that currently, in order to change CPU and memory for a pod, you need to restart the pod.
And that's not necessarily something you wanna do in a lot of cases. If the load on the pod increases, you don't want to kill the pod and restart it, you just wanna be able to give it more resources. Or if you see a pod is underutilizing resources, you don't wanna waste the resources, but you also don't wanna kill the pod in order to change that. So this change makes the pod spec's container resources mutable, and this unlocks vertical autoscaling as a use case. If you wanna read more, you can check out the KEP README for 1287 or the enhancement issue itself.

And the next one we have up is, oh, I think we missed, yeah, nftables. So nftables is a mode where kube-proxy can configure packet forwarding rules using nftables instead of iptables. This is aiming to be the successor to iptables, but this feature is, again, only in alpha. It introduces a new kube-proxy mode flag, and because it's an alpha feature, there is limited support: it's only available on Linux nodes, it's not expected to outperform the other modes yet, and it's still under a lot of heavy development. But if you're interested in learning more, you can check out the feature under the enhancements repo, it's KEP 3866, or read the KEP README.

And then the next one we have up is sidecar containers. So sidecar containers are graduating to beta. This is, again, another long-awaited enhancement, and there actually haven't been that many changes since 1.28. Sidecar containers were introduced in 1.28 as a new type of container that starts alongside init containers, runs through the lifecycle of the pod, and doesn't block pod termination. Now that it's graduated to beta, it's actually enabled by default, so a lot more people can try it out. If you're interested in learning more, you can check out, again, the enhancement feature, it's number 753, and read the enhancement KEP itself. With that, I'm gonna hand it off to Carol. Okay, yeah.
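As a rough illustration of the sidecar feature described above (the pod name and images here are hypothetical placeholders, and this assumes the beta SidecarContainers feature gate is on, as it is by default in 1.29): a sidecar is declared as an init container with restartPolicy: Always, for example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo                      # hypothetical example name
spec:
  initContainers:
  - name: log-shipper
    image: example.com/log-shipper:1.0    # placeholder image
    restartPolicy: Always                 # this is what turns an init container into a sidecar
  containers:
  - name: app
    image: example.com/app:1.0            # placeholder image
```

Unlike a regular init container, the log-shipper here keeps running alongside the app container for the life of the pod and does not block the pod from terminating.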
I will continue sharing about our KEPs, and we have this feature, the ReadWriteOncePod access mode for PersistentVolumes, that is now stable, generally available. It was introduced in Kubernetes 1.22. This access mode enables you to restrict volume access to a single pod in the cluster, ensuring that only one pod can write to the volume at a time. And we will also have a feature blog coming out in the next week if you want more details, and we have more details in the references, the KEP and the enhancement issue.

And I think I will go to the next one, that is the reduction of secret-based service account tokens, which is in beta status. I think it's an improvement for security, always trying to remove the token secrets from service accounts. We have this feature gate, LegacyServiceAccountTokenCleanUp, that will be enabled by default. By default, the period is one year: legacy secret-based tokens that have not been used for that time are labeled as invalid, and invalidated secrets that still have not been used are automatically deleted after a further period. This applies to the auto-generated secret-based tokens that Kubernetes used to create for service accounts. So beginning from this release, this legacy service account token cleanup is enabled by default. And yeah, I also put some references to the documentation if you want to know more. For me, it will be to go to the next.

And yeah, we have improvements with the kubelet resource metrics endpoint graduating to generally available. This is good for integrations with Prometheus and, I can imagine, with all the metrics tools that we have. In different ways, we have more improvements around metrics like container CPU usage, memory, and start time in seconds, as you can check in the list. The improvements for general availability are mostly about performance, tracing, and these kinds of features. Yeah. And I think I'll go to the next one, that is with you. Brilliant. I can take that. Thank you, Carol.
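To sketch what the ReadWriteOncePod access mode mentioned above looks like in practice (the claim name and size here are hypothetical), you request it on a PersistentVolumeClaim like any other access mode:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim   # hypothetical example name
spec:
  accessModes:
  - ReadWriteOncePod          # only a single pod in the whole cluster may use this volume
  resources:
    requests:
      storage: 1Gi
```

Compared with the older ReadWriteOnce mode, which restricts a volume to a single node (where multiple pods could still share it), ReadWriteOncePod restricts it to a single pod.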
So the next one is KMS v2 improvements. This is KEP 3299. I'll start with a bit of history on this. Just for context, every API in Kubernetes that lets us write persistent API resource data, for example Secrets or ConfigMaps, supports something called at-rest encryption. For example, if I want to store Secret objects or secret data in etcd, before storing them in etcd, I can encrypt them and then store them in etcd. The kube-apiserver is what helps us with that encryption layer. The kube-apiserver process accepts an argument called --encryption-provider-config; that's what controls how API data will be encrypted, like what kind of encryption we want to do. And this configuration is provided as an API object named EncryptionConfiguration. One of the encryption modes we have available is KMS itself, and for this particular cycle we are talking about KMS v2, which is an improvement on KMS v1. So KMS, Key Management Service, version 2 is an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key, DEK, at the kube-apiserver level.

I'll just quickly cover what is coming new in 1.29, since it went beta in a previous cycle; I don't recall the exact release. In this cycle, 1.29, there is tracing added for these operations: between us as users asking the kube-apiserver to encrypt our data, and then the kube-apiserver asking the KMS v2 plugin to do all that is required to encrypt. So there is tracing now available to trace back what is happening between these encryption and decryption processes. There is also a reference implementation of a KMS v2 plugin added this cycle for anybody who wants to implement a KMS v2 server. This is for testing purposes, not for production. So anybody who just wants to run a KMS v2 plugin server for testing can use this reference implementation.
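To make the envelope encryption idea concrete, here is a minimal toy sketch in Python. It is not the real KMS v2 protocol or API; the class and function names are invented for illustration, and a hash-derived XOR keystream stands in for real authenticated encryption such as AES-GCM:

```python
import hashlib
import os


def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher (stand-in for AES-GCM): XOR data with a SHA-256-derived keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))


class ToyKMSPlugin:
    """Stands in for the external KMS v2 plugin that holds the key-encryption key (KEK)."""

    def __init__(self) -> None:
        self._kek = os.urandom(32)  # never leaves the "plugin"

    def wrap(self, dek: bytes) -> bytes:
        nonce = os.urandom(16)
        return nonce + _keystream_xor(self._kek, nonce, dek)

    def unwrap(self, wrapped: bytes) -> bytes:
        nonce, ct = wrapped[:16], wrapped[16:]
        return _keystream_xor(self._kek, nonce, ct)


def encrypt_resource(kms: ToyKMSPlugin, plaintext: bytes):
    # "kube-apiserver" side: generate a per-write data-encryption key (DEK),
    # encrypt the resource locally, and store the KMS-wrapped DEK alongside it.
    dek = os.urandom(32)
    nonce = os.urandom(16)
    ciphertext = nonce + _keystream_xor(dek, nonce, plaintext)
    return ciphertext, kms.wrap(dek)


def decrypt_resource(kms: ToyKMSPlugin, ciphertext: bytes, wrapped_dek: bytes) -> bytes:
    dek = kms.unwrap(wrapped_dek)  # one round trip to the plugin, not one per byte
    nonce, ct = ciphertext[:16], ciphertext[16:]
    return _keystream_xor(dek, nonce, ct)
```

The point of the scheme, as in KMS v2, is that the bulk encryption happens locally with a per-write DEK, and only the small DEK is sent to the external KMS plugin to be wrapped by the key-encryption key.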
And I think that's all, that's what's coming out of the KMS v2 improvements in 1.29. It went beta in 1.27, so. Okay, yeah. So in 1.27 it was beta, and in 1.29 it's graduating to stable now.

Moving ahead, we have one of the very major changes coming as part of 1.29. This is KEP 2395, and for some people it might be a breaking change, so I'd like to introduce that part as well. We are talking about this KEP called removal of in-tree integrations with cloud providers. A bit of history again: in 2018, the Kubernetes community agreed to form a new SIG called the Cloud Provider Special Interest Group. And the reason, the mission for creating that SIG, was to remove all the cloud provider integrations that are present inside the Kubernetes code base itself. The reason is, as we are getting more and more cloud provider vendors available in the ecosystem, and as the Kubernetes code base is growing every single day, it's getting harder to maintain both of them together in-tree, and it also hinders the release cycles on both sides. So around January 2019-ish, the SIG came up with the official draft of the KEP to start working on the removal of in-tree cloud provider integrations.

What has changed in 1.29 now? Already in 1.26 and 1.27, two of the cloud providers were removed. AWS, I think, was removed in 1.26, and OpenStack, I think, which was still present in the code base, was removed in 1.27. Now three of the cloud provider integrations are still available in the code base: Azure, GCE, and vSphere. So for anybody who is still on any of these three cloud providers, it's going to be a breaking change, and we have a feature blog coming out that's linked at the bottom of the slide. Please read that feature blog. It actually has a few suggestions from SIG Cloud Provider to mitigate the breaking changes, or what needs to be done to actually keep using those cloud providers.
At a basic level, what's happening with this KEP is we are flipping the values of two feature gates, DisableCloudProviders and DisableKubeletCloudCredentialProviders. These feature gates are now just defaulting to true, and the behavior provided through the --cloud-provider command line flag will now only recognize the value external. That means it will not recognize any of the cloud provider integrations that are present internally in the code base. So please, please, if you are on Azure, GCE, or vSphere, read that blog. If you are not on Azure, GCE, vSphere, or AWS and OpenStack, then it's good news: any other cloud provider will start their journey as an external cloud provider, so this won't be a breaking change. The KEP also has more details, so if you want to check more implementation details, they are available in the KEP as well.

Yep, and moving ahead, I have an announcement from the Kubernetes project. Anybody who is currently consuming Linux packages from the Kubernetes project, please move to our community repositories. So anybody who has been consuming Kubernetes Linux packages from apt.kubernetes.io, yum.kubernetes.io, or packages.cloud.google.com: these legacy Linux package repositories will be frozen. Actually, they were frozen starting from September 13, but they will be going away sometime in January 2024. So the suggestion, the recommendation from the Kubernetes project, is to migrate. And where to migrate? Please migrate to our community infrastructure, the community repositories at pkgs.k8s.io. There is more information in the blog linked below.

With that, we are done with our major themes, major changes, and announcements. I'll try to wrap this up by again introducing how our Kubernetes releases work. So every release cycle, we have a new Kubernetes release team that's assembled from returning shadows.
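For reference, migrating on a Debian or Ubuntu system boils down to pointing apt at the community-owned repository. A minimal sketch of the apt source entry, assuming the repository signing key has already been downloaded to the keyring path shown, looks like this (check the official blog post for the exact, current instructions):

```
# /etc/apt/sources.list.d/kubernetes.list (Debian/Ubuntu sketch, v1.29 package stream)
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
```

Note that pkgs.k8s.io has a separate repository per minor version, so you opt into each new minor release (here v1.29) explicitly rather than getting all versions from one repository as with the legacy repos.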
We also accept shadow applications from new contributors who are maybe looking forward to starting their journey with the Kubernetes project, or who are already contributing to the Kubernetes project in other parts. So for anybody who wants to help us with Kubernetes releases, this is the way to go. It's an apprenticeship model, a model where you have mentors available who will walk you through different parts of our release team processes, help you understand them, and eventually help you grow into role lead positions, et cetera. So what we have available as part of this Kubernetes release team shadow program are these five roles: enhancements, release signal, docs, release notes, and communications. We have a role lead for each of these five roles, and each of these five roles accepts, depending on the requirement, four to five shadows every cycle. So if you are interested, there is a link below for the 1.30 shadow application. The shadow application has already started, I believe, a week back, and we are still open for new applications till January 3rd, 2024. So if you are interested, please apply. We are really looking forward to having you work with us on the release team.

Also, please read the handbooks if you want to understand what it is to be part of any of these five roles. These roles actually require different time commitments, and they are active at different phases of the release cycle, so just look out for that as well. We do accept applications regardless, and if life happens, that's perfectly fine, but if you are just looking to understand how much time commitment is required, we have handbooks available. And that would be, I'll paste a link in the chat. Libby, you can share, but I'll call it out: it's github.com/kubernetes/sig-release, and you will find a release-team folder that contains all the information about our different roles.
Every release cycle runs for approximately four months, so that's the time frame we are looking at. Also, there are a few changes coming as part of our release team structure. Earlier we used to have six roles, including bug triage and CI signal. Starting from 1.30, we are merging these two roles based on some feedback and suggestions we received in previous release cycles. The new role is now called release signal. And since this is the first time we are going to have this role, in the 1.30 release cycle we are going to form this particular role, the leads and shadows, from the returning shadows from the CI signal and bug triage teams in 1.29 and previous releases. But all other roles are still open, so please, please apply. And this will change in releases after 1.30, so if anyone is interested in applying to or working on release signal, please look for that from 1.30 onwards. There is more information about why this change is happening in the email linked below. With that, I'll pass to Nina and Carol.

Yeah, so KubeCon EU is coming in 2024. You can go to the CNCF website under events and see when it's happening, but it's happening in Paris, so a very exciting location. Anything anyone wants to add? Nothing for me. I think I'm excited about the last KubeCon. I liked the KCD stand because it was a new space, and that's why I am excited to go to Paris, to try to meet more people related to the KCD events. It was like a new opportunity to meet the organizers from other countries, and it was very nice. Also, if you join the release team, you get to go to the contributor summit. So 1.27 was the first release I was part of, and I ended up going to the contributor summit in Amsterdam, and it was really cool to meet people in person and see everyone in the community. It's always amazing to meet everyone. I met a lot of people from the 1.29 release team during the last KubeCon, and it's always amazing to meet people in person.
And I look forward to meeting a lot of new people at KubeCon EU. With that, if nobody has anything, I want to call out something. In the early part of the presentation, we discussed that we have 49 new features coming out of this cycle. So if you want to check the other, I believe, almost 40 features which we did not discuss during the webinar today, please check out the release notes. You can also check our enhancements tracking board, where all the tracked enhancements are listed. And there are also feature blogs coming this week and the coming week, and I think the week after that as well. So over the next two weeks, you will find a lot of feature blogs coming from 1.29, so please look out for them if you are looking forward to learning about any of the specific features.

With that, thank you for joining us today. Thank you so much, Priyanka, Carol, and Nina. With that, does anyone have any more questions? The slides will be posted, and the recording, you can access them via the registration link you used today, or they will be on CNCF.io under the community online programs tab, as well as on our YouTube playlist for online programs. So that'll be available later today. All right. Everyone have a wonderful holiday season and break, and thank you all. This is exciting stuff, and we will see everyone next year. All right. Thank you, everybody. Have a great holiday.