Hi everyone, welcome to today's CI team functional group update. It's kind of a special one because it's the last one before the summit, so it will be very brief, but it actually covers a bunch of things that we did and that we plan to do in the upcoming weeks. First of all, we discussed collaboratively with the Gitaly team, with Stan and a bunch of other people, about reinforcing the Gitaly team, because Gitaly has a lot of impact on the performance of various parts of GitLab. We came to the conclusion that Ezekiel would be a great addition because of his very good Go knowledge, and also his GitLab Rails knowledge, where he can help with refactoring parts of the GitLab code base. So the Gitaly team was reinforced with Ezekiel starting this week, basically the 9th of October. Thank you, Ezekiel, very much for all your work in the CI/CD team. We shipped a bunch of great features together: we worked on Mattermost, on a lot of backend and performance improvements, on including CI in the usage ping, and a lot of other things. So thank you again for your work and for bringing a lot to the backend performance of GitLab. This move definitely makes things slightly harder for us, but we actually opened two CI/CD positions, backend developer and senior developer, and we are effectively looking for a few people, not only these two; more about hiring slightly later. As for the accomplishments: compared to 10.0, if you click the link to the presentation from the last release, we had way more merge requests and way more issues closed. But 10.1 was very focused on getting the Kubernetes GKE integration out of the door, the first iteration of allowing you to create a GKE cluster, but also a lot of performance and backend improvements: we improved pagination of the registry, we improved deployables, we improved the jobs controller.
We also introduced some functions that we'll be using later to show you information about each step of a job and the performance of each step. We also fixed the banner that was very intrusive; as of yesterday it's a non-intrusive one. This is still not part of 10.1; I think it should be part of RC2 if I'm correct, Philippa can correct that. We also fixed a bunch of different bugs and improved the CE and EE job testing times. But, as always, not everything went right. As for lowlights: monitoring is progressing, but slowly; more about monitoring later. And some things slipped. Maybe the most important one is artifacts, because we had an idea of extending the current design of artifacts, but introducing the concept of multiple artifacts per build forced us to introduce some scalability changes around the story, and we didn't manage to finish that in 10.1. We had kind of assumed that auto-upload would be better done after multiple artifacts per build, but that was artificial blocking: we could have finished auto-upload without waiting for multiple artifacts per build to happen. So this is a slight lowlight from the planning perspective and how we decided to do things, because in terms of artifacts we could have delivered half of the story, but we tried to have the full story covered instead of doing iterations. On the performance side, we merged the migrations and we ran them, but we discovered that we did not fully migrate all stages: we did not migrate the stages that were for external integrations, in this case the Jenkins integration. That forced us to not remove the legacy code from stages, so the pipeline index list doesn't receive the improvement that we planned to have in 10.1.
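For context on the artifacts story above: today each job in `.gitlab-ci.yml` defines a single artifacts archive, which is what the "multiple artifacts per build" work would generalize into separately addressable uploads. A minimal sketch of the current per-job shape (job name, paths, and expiry are made up for illustration):

```yaml
# Hypothetical .gitlab-ci.yml fragment. Today a job gets one artifacts
# archive covering all listed paths; splitting these into multiple,
# individually tracked artifacts per build is the change discussed above.
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - binaries/
      - reports/junit.xml
    expire_in: 1 week
```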
If we talk about 10.2, the direction is that we are following our GKE story. The maybe two main parts of the GKE story are these. The first is that we want to allow you to install apps easily into the Kubernetes cluster. Right now this is basically Helm Tiller and Ingress, which allows you to have automated load balancing, but afterwards we will work on enabling more apps, in this case Prometheus and also GitLab Runner. The second one is more of a cleanup: making Kubernetes basically a top-level part of GitLab, because we want to move the Kubernetes integration out of the integrations page, where it's hidden very deeply, to make it first class. So if you go today to GitLab.com you will see, under CI/CD, the new option Clusters. Today you can only create a GKE cluster, but tomorrow you will actually be able to add your existing Kubernetes cluster and start using everything that we plan as part of this direction. In terms of scalability, security, and improvements, we are mostly following the stories that we started last release and hopefully finishing them. For the biggest supported Runner issue, we want to merge the persistent-connection-close fixes that we started testing some time ago, and that should help with the resilience of GitLab Runner and scheduling builds on Docker. CI on GitLab.com is always quite a big part of my functional group update, because we are also responsible for making sure that it is stable, since it is used by the company but also by our customers. For the past few months Bitcoin miners have continued to be a pain, but it seems that we kind of managed to not be impacted by them. At least we don't see any impact other than maybe increased cost for a limited period of time, and we basically managed to figure out what we have to do, what we have to verify, and how we should block them.
And how we should prevent them from abusing what works today; thanks, Brian, for helping with that on a daily basis. This is more of a joint effort between the CI/CD team and the security team. Among the other minor changes, we are slowly migrating all GitLab Inc. runners to separate accounts. The main reason is to make it easier for us to be aware of the cost, and to split the monitoring of shared runners that are used by the customers from the monitoring of GitLab Inc. runners that are used only by us. We are progressing quite nicely with end-to-end monitoring: last week we enabled Consul and Prometheus, but we had some problems with the amount of metrics that we were scraping. It turns out that the machines we chose for Prometheus were not able to process everything they were scraping, because we create a lot of dynamic machines that are then removed, so in the end we had a lot of metrics stored in memory, which makes things really tough for Prometheus. There are a few improvements we are working on right now: first, we only scrape the important metrics, and second, we made machines properly deregister from Consul service discovery. The other story we are following is the object storage migration on GitLab.com. We have migrated over 25 terabytes of data so far, which seems to be around half of what we store today, so it is quite nice progress. We are right now in the process of doing a cross-check before we proceed further, because in the next migration we will migrate everything up to now, so we would store everything on object storage. We still have new artifacts landing in local storage, but since we are working on the support to upload them directly, that will also come next release. There is also something else that we introduced. Oh my God, the table broke.
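The two Prometheus fixes mentioned above (scrape only the important metrics, deregister dead machines from Consul) can both be expressed in the scrape configuration. A sketch of what that might look like; the Consul address, service name, and metric whitelist are all assumptions, not our actual setup:

```yaml
# Hypothetical prometheus.yml fragment. Targets come from Consul service
# discovery, so machines that deregister stop being scraped; the
# metric_relabel_configs whitelist drops everything else before storage,
# keeping memory usage bounded despite many short-lived autoscaled machines.
scrape_configs:
  - job_name: runner-machines
    consul_sd_configs:
      - server: 'consul.service.internal:8500'   # assumed address
        services: ['runner-machine']             # assumed service name
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: '(node_cpu|node_memory_MemAvailable|ci_.*)'  # example whitelist
        action: keep
```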
So maybe I should be visible here if I fix it like that. Okay, it should be much better. So yesterday we enabled high-CPU droplets for CE and EE testing, which gave a very nice boost to our pipelines. What is important, we enabled that for all CE and EE jobs, so on GitLab.com and on dev.gitlab.org but also for forks, so it is something that is visible system-wide. The outcome is that the workload time of the jobs is, in most cases, around twice as fast as before, and the total time seems to improve by something like 25-35%. But what is more important is the stability. If you look at the results of the past few commits, you see that the variance of the testing times is much more predictable, something that was not true previously. It is mostly because of CPU steal time, something that we are much less likely to hit with the high-CPU droplets, caused by noisy neighbors on the virtual machines, actually on the hypervisors hosting the virtual machines. So please let us know how it is working. If you see any problems with this change, because we enabled it system-wide and we are still not 100% sure that it will be fully stable all the time, if you see something strange, please let us know as soon as possible. The next thing is, as you see, the summit, something that I want to invite you to. I will be organizing a session to discuss what we should change and how we should scale CI to run the next 100 million jobs on GitLab.com. From my perspective, it seems that this is something that we should hit in one or two years from now, somewhere in between, so this still gives us a little time to plan everything ahead. But there are so many different areas to cover that anyone who could jump in, help, or maybe just listen to how the current architecture of CI looks, I would be very glad to host you. Under the link there is a Google Doc.
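To make the steal-time point above concrete: on Linux, the `cpu` line in `/proc/stat` exposes steal as its eighth counter, the time the VM was runnable but the hypervisor gave the physical CPU to a neighbor. A small illustrative sketch (not from the talk; the sample counter values are made up) that computes steal as a share of total CPU time between two samples:

```python
# Sketch: compute the CPU "steal" share from two /proc/stat "cpu" lines.
# Fields after "cpu" are cumulative jiffies: user, nice, system, idle,
# iowait, irq, softirq, steal, ... Steal is index 7 (0-based).

def steal_percent(before: str, after: str) -> float:
    """Return steal time as a percentage of total CPU time between samples."""
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    total = sum(a) - sum(b)          # all CPU time elapsed between samples
    steal = a[7] - b[7]              # delta of the steal counter
    return 100.0 * steal / total

# Made-up samples taken some time apart on a noisy host:
sample_t0 = "cpu 1000 0 500 8000 100 0 50 350"
sample_t1 = "cpu 1800 0 900 14000 150 0 80 1070"
print(round(steal_percent(sample_t0, sample_t1), 1))  # → 9.0
```

A real measurement would read `/proc/stat` twice with a sleep in between; a steal share in the high single digits is already enough to make test timings noticeably erratic.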
This Google Doc is maybe something to start with, because I describe in it how I plan this session to look. It is still not final, but if you have some ideas beforehand, before the session starts, please write down the topics; I will be happy to try to introduce them in the lightning-talk session that I plan to do in the first 15 minutes. The next thing is, of course, hiring. If you know any good developers that want to join GitLab, if you know someone that is great in Go or in Kubernetes, but also someone that is great in Rails and is passionate about tech tools, please reach out, let them know that we are looking for great people to join GitLab, and also let me know too. I saw that there are a bunch of questions, so let's move to the questions. That's correct. Nice. High-CPU droplets. Magic: The Gathering room. Yes, I think so. Okay, thank you very much for attending. See you at UGC and see you at the summit two days from now. Thank you very much. See you on the team call.