Hi everyone. I think we'll wait something like 30 more seconds and then we'll start. Let's start. Hello everyone, welcome to the CI team update. Today is Valentine's Day, which is quite inconvenient. Let's maybe start with the accomplishments. For me the biggest accomplishment is basically the team. We have this group of people; not everyone who is part of the CI team is in this picture. But what is even more important is who we are actually working with on the challenges we are trying to solve. Okay, this was for the YouTube video, live photos. So first of all, since this is Valentine's Day, thank you very much to everyone, and not only the people listed here. If you look at this list, these are people from different teams: production engineers, content managers, support, frontend and backend. If you just take a look, we are working with pretty much everyone, everyone at GitLab. It's really great. It's a really great team effort.

But let's talk a little about the engineering side of the CI team update. With 8.16, we introduced shared runners minutes (it's still named build minutes). The truth is that this will allow us to get GitLab.com shared runners usage under control. We have seen people on GitLab.com using the shared runners for something like 10,000, 20,000, even 50,000 build minutes to run their private projects. We'll be enforcing shared runners minutes probably in the next few days or a week, where private projects will get a pipeline quota allowance, which will be, I believe, 2,000 minutes per month per group. This will also allow us to offer a much better service and much better quality of service.

Mark Pundsack created an issue about killing the builds tab probably half a year ago, or maybe over half a year ago. We've been trying to remove this builds tab for two subsequent months, but we didn't have an alternative for it. With 8.17, we are introducing the mini pipeline graph, something that you are probably already using on GitLab.com. There are still a few improvements coming to the mini graph, for example making it refresh automatically in real time. This is one of the ways to make this information more visible and to make it faster for you to reach the list of failed builds. But this is only the first step, because as the next step we started discussing a way to make the merge request widget more performant, and as part of the merge request widget, to show information about the jobs that failed, including an easy-to-expand job trace view where you could see the last 20 lines of your job. This is what we actually usually look for: when you go to the failed list and click on a failed build, you usually scroll to the bottom to see exactly why it failed. It will be much easier in one of the upcoming releases.

But the truth is that we are also investing a lot of effort, as part of the CI team, into Kubernetes. We work on the terminal; the terminal accessible from GitLab is basically one of the key points of idea to production. We also made the Runner use Kubernetes and work well with autoscaling on GKE. We helped make GitLab run on Kubernetes, and we actually showed that during the Mexico summit.
And some time ago, we introduced this concept of Auto Deploy. Auto Deploy simplifies the building and deployment process of your application, something that will be even more powerful when we introduce this new awesome feature, deploy boards, which I will mention a little later. So much for what we have accomplished; there are also some concerns. But let's look at that from a different side. If we just look at the people that we have right now in the team, we are basically a full-stack team. We have competencies in pretty much everything, and we can effectively work with everyone at GitLab. We know about frontend and backend, the API, the Runner, code design, autoscaling, GitLab CE, review apps, Kubernetes, Docker, caching, DevOps, infrastructure, monitoring and performance. This makes us highly efficient and basically allows us to deliver a lot of new features every release; more about these features in a few minutes. Because we have this flexibility in assigning people, we always have some time to work on technical debt. And as mentioned, we work with a lot of people from different teams. It's really crazy when we think about it that we achieve all of that with 18 members, where five of them are regular members and a few more are joining us to help improve CI and CD.

But the truth is also that we have a lot of ideas about what we can do next and what needs to be improved, and there are too few of us. One of the biggest concerns and challenges for us right now is to make our team bigger. So we are actually hiring: a senior backend developer and a senior product manager. But the truth is that this is just the beginning, because once we have these people who can guide others, we are also looking for a few more backend developers who will make it possible for us to deliver much more than we are delivering now.

You may be asking yourself how you can help us. It's usually very easy, because there is a lot you can help with; we have a lot of ideas for what can be improved. We could probably divide these improvements into three different groups. The first is improvements that help our own case: scalability and performance. We are kind of struggling sometimes, maybe even often, with GitLab.com performance, with CI performance on GitLab.com, but also with the performance of the GitLab CE tests. We sometimes just miss the people who could implement the small improvements to GitLab CI that would, for example, make our testing spend less constant time on things like bundle install (see the sketch below). The second bucket is that we always receive a lot of requests from our customers, people that are paying for GitLab and just want some bugs fixed or some things implemented, but sometimes we push them back because we are either following the vision, or helping our own case, or we just don't have the people to make it happen. And the third area where you could help: we quite often have these small bugs that pretty much anyone, even without any CI experience, could go and try to fix, by themselves or with our guidance. So if you are interested in learning and being a full-stack developer, or being part of this full-stack team, you could just help make this happen.

If I could talk about the plans now, there is one very big thing: right now the CI team is named the CI team, but the truth is that we have an open merge request to rename the CI team to CI/CD.
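As a rough illustration of the kind of small improvement meant here, this is a minimal `.gitlab-ci.yml` sketch for a Ruby project that caches installed gems between builds so that bundle install does less constant work. The job name, cache key and paths are illustrative assumptions, not GitLab CE's actual configuration.

```yaml
rspec:
  # Illustrative cache: keep installed gems between builds so that
  # `bundle install` is mostly a no-op once the cache is warm.
  cache:
    key: "ruby-gems"
    paths:
      - vendor/bundle/
  before_script:
    - bundle install --jobs 4 --path vendor/bundle
  script:
    - bundle exec rspec
```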
We have grown way past CI in the strict meaning of CI. We've been delivering a lot as part of idea to production, a lot as part of the CI/CD vision, which is really great because it allows us to ship features that people are actually using way, way before everyone else. But now, by hiring and introducing new people, we'll also be focusing on making the CI part, GitLab CI rather than CD, more competitive: make it faster, make it better, make it more scalable. Introduce features like, for example, a pipeline scheduler, something that people work around now by calling the API from cron. It's not ideal, it's a workaround, and this is not what people expect from us. They expect us to be much better than everything that is on the market.

If we talk about the 9.0 features, this is one of them: the deploy board. It's basically a natural improvement to the environments and review apps that we introduced some time ago. The deploy board is a really great concept, because when you are deploying your application you will actually see what is happening with it. If, for example, 50 containers are required to run your application, every box there will be live-updated to show whether that container is actually running the latest version of your application or not. It will greatly enhance the visibility of how your application is running right now. Together with the upcoming monitoring features, it will also make it easier to decide what to do with your failed deployments. This is basically another step to make the CD vision much more powerful, to be way ahead of everything that is out there, and to hide the complexity of Kubernetes that we have now, by making this really easy to use and really easy to start with.

If we talk about the CI improvements: for those of you that know the build triggers feature, we will be extending build triggers into pipeline triggers. Pipeline triggers have a description and an owner, and future work will probably build on these fundamentals, because this is basically the first iteration. The next iteration will probably be a separate UI, maybe with the same backend part, to make these pipeline triggers also schedulable. I'm not sure yet how it will look, but if you consider how pipeline triggers work, they are a great foundation for introducing this long-requested feature. I believe people started asking for schedulable pipelines something like a year ago, so I would say that a lot of people will be happy when we finally add that.

But we also think about the scalability problems of GitLab.com. You saw the shared runners minutes, which are shipped now, but we are also shipping a default artifacts expiration. On GitLab.com, this will be set to some date. It's very common that people just don't specify the artifact expiration date, and then we have a graph where the number of artifacts only grows over time, grows exponentially, and we can't really control that. It's a really easy feature to add, but it will actually allow us to enable it on GitLab.com with 9.0 and say that from now on, unless you specify something else, your artifacts will be removed after probably 30 days or seven days. That's still to be decided; you can already set an expiration explicitly per job, as in the sketch below.
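For reference, a minimal sketch of setting an explicit artifacts expiration in `.gitlab-ci.yml` (the job name, paths and duration are just example values):

```yaml
build:
  script:
    - make build
  artifacts:
    paths:
      - dist/
    # Explicit expiration for this job's artifacts; without it, the
    # instance-wide default (e.g. 7 or 30 days on GitLab.com after 9.0)
    # would apply.
    expire_in: 2 weeks
```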
But there is another improvement, something that we've been baking for 9.0, because it is basically a breaking change to your workflow: cache defaults. The cache defaults were very conservative. When I first created them, I assumed that it was more important to make sure the builds don't break, if, for example, you are running different versions of Ruby, than to make caching easy to use and fast. With 9.0, we make the cache always use the default key for everything. You can still override it and make it behave as previously, but the default is flipped: it will be as open as possible instead of as closed as possible (see the sketch below).

Scalability problems and performance: there are a number of things that we are working on as part of the performance improvements. One of them is the paginated environments list. Right now it's not really a widely used view, but since we have big plans for driving the CD vision of GitLab, it will become more troublesome over time, and we decided that the best time to do it is now, before we start suffering. It just can happen that for some big projects with a lot of deployments this view takes many seconds to load instead of loading almost instantly, so it's high time to make it happen. It's pretty much almost finished thanks to the great work of Filipa and Grzegorz. We are also improving something that I have been mentioning for probably three months now, but this is actually a constant improvement: every release we ship another small change that makes the CI API faster and more performant. We think that 9.0 will be the big point, the big change, where we actually introduce long polling. This is something that is being worked on by Kim; the first part of it was already merged some time ago, the second part is almost finished, and after that we will be working on the third part, which actually adds the mechanics for this polling. We'll also be more aggressive in cancelling builds that died for some reason. It just happens that a runner can crash, and then you currently have to wait 24 hours for the build to be considered failed. By introducing the stuck builds worker this will now happen within one hour, and in the future we will try to reduce that even further. Sorry, I cannot find mute.

Runner 9.0: something that was requested by the production team, the production engineers, is to migrate the Runner away from the old CI API to API v4. We are right now in the process of making this change happen. Also, the Kubernetes executor for Runner 9.0 will be considered stable and production-ready, because we are introducing the last missing feature: support for the GitLab Container Registry with private images, something that is right now supported only by the Docker executor, not yet by Kubernetes. It is very important to note that Runner 9.0 will not be compatible with GitLab before 9.0. The other way around, runners from the 1.x branch will still work with GitLab 9.0, which gives customers a grace period for upgrading the Runner to the newest release when they start using the new GitLab.

If we think about the plans in the longer term, I think the biggest challenge that we have to face now is to make all CI views fully real-time. This is being worked on by backend and frontend people a little outside of the CI team right now, and we are just waiting for the findings on how to make it happen.
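To illustrate that flip, here is a minimal sketch. The per-branch key roughly corresponds to the old conservative behaviour, and the variable name shown is the pre-9.0 one; treat the exact keys and paths as assumptions rather than the final defaults.

```yaml
# New 9.0-style default: one cache shared across branches and jobs.
cache:
  paths:
    - vendor/bundle/

# Opting back into the old, conservative behaviour: a separate cache per branch.
test:
  cache:
    key: "$CI_BUILD_REF_NAME"   # per-branch cache key
    paths:
      - vendor/bundle/
  script:
    - bundle exec rake test
```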
But the goal in the end is for any page and graph to be updated when something happens, when it gets updated on the backend, as soon as possible. Because right now it often happens that you have to refresh the view, which is kind of inconvenient and not really 2017, if we look at it. The second goal in the long term is to use Kubernetes. I did test Kubernetes in different scenarios and I had very good findings from using it. So I would say that we'll be creating issues and trying to use GKE with an autoscaled Kubernetes cluster to run as much as possible, probably in the future also running the shared runners on it, if that turns out to be viable; in the longer term, probably Q2 or Q3. Right now, for fetching builds we use time-based polling, which is bad in the end; it's not scalable. When we hire the next engineer who will be working on the Runner, one of his first tasks, or the next one after the first, will be looking at how we can make our Runner use queues, real queues, maybe publish and subscribe or something else. I think this is a very big goal, but a goal that will give us a lot of room for improvements. I believe that this is everything from me. Thank you very much for your attention, and I am very keen to hear the questions now.

"Spotify playlist." "We heard the CI team." "You have to share that pic." Yes, I will share it, Filipa. "Wow, this is huge." "Got to move on the quota." "Deploy boards look cool." We will get feedback from large customers on deploy boards. Deploy boards are for 9.0, and they are Kubernetes only.

Regarding the cache defaults: is it about cache size? No, it's only about how we create the cache and how we fetch it. Right now we create a cache for every branch; it will become the same unified cache for every branch. Today, if you create a new branch, the very conservative cache defaults make the cache for the new branch be created from scratch, not reused, which kind of defeats the purpose of caching, because then you always have this really long first build, which is not ideal.

Marin asked a very good question: if we move all our builds to one type of executor, we might end up having problems for people who are using different executors, so isn't it a good idea to keep running the Docker executor on at least part of the fleet? It's really interesting, because we have a lot of different executors that are supported right now. We never use some of them, but we have a very responsive community that tells us very quickly about anything that is not working. I still agree that we should have some part of the fleet running the bare Docker executor, because this is what is much more widely chosen right now, but it shouldn't stop us from using Kubernetes as much as possible. For our own case right now, for most of the builds we use Docker, for some of them we still use shell, and basically that's it; we don't use Kubernetes for any builds yet at this point.
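As a purely hypothetical sketch of how such a mixed fleet looks from a project's side, individual jobs can be routed to differently registered runners via tags (the tag names here are made up; they depend on how the fleet is registered):

```yaml
unit-tests:
  tags:
    - docker        # picked up by runners registered with the Docker executor
  script:
    - make test

integration-tests:
  tags:
    - kubernetes    # picked up by runners using the Kubernetes executor
  script:
    - make integration
```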
Gabriel Mazetto asks: what are you thinking about for the queues, will you rely on Redis or use something else, like RabbitMQ? I think that it is still too early to answer this question. We have so many different solutions to choose from, but the truth is that we should always be quite sure that we are not introducing another technology into our stack if it's not really needed. As for how it will look and how we will try to make it happen, we will try to evaluate Redis as much as possible and see whether it fulfills our requirements. If it doesn't, then maybe we will look for some other solution, but as of now I think it is still too early. Marin doesn't like RabbitMQ. I think that Marin doesn't like introducing another technology into our stack, and this is usually the biggest hurdle, because it is also a hurdle for our enterprise customers, who would have to maintain these components. Yes, that too. Okay, thank you very much. I'm not seeing any other questions, so see you on the team call.