Hi everyone, I would like to welcome you to today's functional group update for the CI/CD team. Let's start.

The first slide, as always, is something that we are very proud of: our accomplishments. Our biggest accomplishment for 10.0 is Auto DevOps. It was a very big story that involved a lot of people, including the help of Dimitri, Alessio, Mark, Fabio, Philippa, Zyger, me, and Zygos; I believe I mentioned everyone. What is important is that we managed to close this story in a beta phase, which resulted in closing about 14 issues in total. You can click the links and see exactly what we did; those are just the different parts of that story that had to be done.

But that was not the only thing we accomplished in 10.0, because we also moved on a bunch of other things that are very important for CI/CD on GitLab.com, but also for our customers. For example, we introduced Protected Runners, something that was done by Shinya. It is a very clever concept that allows you to limit the usage of your Runner to only your protected branches. It's very clever and very easy to use: you just tick a checkbox on your Runner settings page.

And then something completely different: a lot of cleanup, removals, and deprecations that we finally had an opportunity to do in 10.0. For a very long time we used "GitLab CI Multi Runner" as the name of what is in fact GitLab Runner, and we made this transition happen in this release. From 10.0 we just use "GitLab Runner" everywhere, also for our binary names and all of our documentation, so make sure that you update your bookmarks. We also had a chance to remove and deprecate a few things, like the `types` keyword and legacy triggers. We deprecated Auto Deploy, which is now part of Auto DevOps. And we finally removed the old CI API, which was introduced back when GitLab CI was still a separate product, separate from GitLab, not something that was integrated.

But, as always, not everything goes right, and we did not manage to do everything that we planned. Two issues, failing a job when fetching non-existing dependencies and registering the Runner using the external URL, were deliverables that we did not manage to land in the merge window. We also did not manage to finish some very important Runner parts of the story. We expected that 10.0 would also be the release where Runner closely follows the GitLab release process, where the merge window closes on the 7th and then we do RCs and release together with GitLab, but it did not happen. We are still trying to fix that now. We also had very big plans for production readiness for 10.0, and we only solved part of everything we had planned for that release.

And Bitcoin miners are still annoying. There is great help on this story from Brian, who is looking through logs and blocking them. We have not seen them for a couple of days already, but they strike back every so often, so we need to stay aware.
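One practical note before we move on, since I mentioned removing the old CI API: if you still have scripts pointing at it, the main GitLab v4 API is the replacement. Here is a minimal sketch of reading job statuses through it; the instance URL, project ID, and token are placeholders for illustration, not real values from this update.

```python
# Minimal sketch: reading job statuses through the GitLab v4 API, which
# replaces the removed CI-specific API. URL, project ID, and token below
# are placeholders for illustration only.
import requests

GITLAB = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 42                         # placeholder project ID
TOKEN = "your-personal-access-token"    # placeholder token

resp = requests.get(
    f"{GITLAB}/api/v4/projects/{PROJECT_ID}/jobs",
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=10,
)
resp.raise_for_status()

for job in resp.json():
    # Each job carries its pipeline stage, name, and current status.
    print(job["id"], job["stage"], job["name"], job["status"])
```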
But maybe that is the past; let's talk about what is happening now and what is in the future. So, 10.1: as always, we plan very optimistically. We plan that we will be able to ship a lot, and this is usually what happens. Our direction feature for 10.1 is following our Kubernetes story and our GKE integration: we are trying to implement the ability to create a Kubernetes cluster and hook it up to your GitLab projects, right from GitLab. We also missed the merge window by something like three days on the public HTML artifacts feature that is based on Pages, so we just continue finishing it, and it seems we should have it ready next week. And there is a lot about performance and scalability that we are following every month.

Something that has become more important is CI/CD quotas, something to figure out after the Bitcoin miners issue: what we can actually do to make GitLab prevent this kind of abuse. The CI/CD quotas issue tries to figure out what kind of things we have to implement in GitLab, for GitLab.com, to make this easier for us. If you have some ideas, please click this link; it is confidential because it contains some confidential information.

We are also continuing our work on improving object storage for artifacts. The plan for 10.1 is to make it possible to upload artifacts to object storage automatically; right now it requires manual intervention. And there is the long-requested ability to paginate the registry tag list, something that is a very big problem today if you have a lot of different tags pushed to your registry: the current registry page basically times out when trying to load the information for all that data. I will come back to this with a small code sketch in a moment.

But that is 10.1. On the roadmap, we are following our Kubernetes and GKE direction. We actually have plans up to 10.2 right now, and based on how we manage to finish the 10.2 plans, our direction for the next releases will depend on that. Auto DevOps right now is beta, so we want to gather feedback on what people think about Auto DevOps as a feature and get it to GA, where it is actually considered production-ready. But after Auto DevOps there is a lot of follow-up, and ops features like ops views, where you could manage your applications straight from GitLab. There is improved speed of CI/CD: a very big task that is very important, also for us, covering different ways of caching your stages and different ways to cache your results between stages. UX is becoming more important, so we are waiting on UX research right now to make sure that what we are building actually gives the most benefit to our users. And we are also looking at ways to make GitLab CI on GitLab.com scale to 10 times what is running today. It is still quite a big challenge to actually make that possible, but we are slowly iterating on it and we are getting more and more confident that it can happen in the future.

What about CI on GitLab.com? This is actually interesting, because I mentioned deprecations and removals before. GitLab 10.0 removes support for old Runners, basically the Runners that are 1.x. So if you use a Runner that is below 9.0, it will not work with GitLab.com. We notified our users by mailing them, publishing a blog post, and notifying them a few times on Twitter, but you need to be aware that this can be a problem once we deploy 10.0 RC1 on GitLab.com.
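Coming back to the registry pagination I mentioned a moment ago: the GitLab Container Registry is backed by the plain Docker Registry v2 API, and that API already supports paginated tag listing via the `n` and `last` query parameters. This is just a minimal sketch of what fetching tags page by page looks like against that API; the registry host, repository, and token are placeholders, and the UI work is about doing something like this server-side rather than loading everything at once.

```python
# Hypothetical sketch: listing tags from a Docker Registry (v2 API) one
# page at a time using the `n` and `last` parameters, instead of fetching
# every tag in a single request. Host, repo, and token are placeholders.
import requests

REGISTRY = "https://registry.example.com"  # placeholder
REPO = "mygroup/myproject"                 # placeholder

def iter_tags(page_size=100, token=None):
    """Yield tags page by page from the registry's tags/list endpoint."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    last = None
    while True:
        params = {"n": page_size}
        if last:
            params["last"] = last
        resp = requests.get(f"{REGISTRY}/v2/{REPO}/tags/list",
                            params=params, headers=headers, timeout=10)
        resp.raise_for_status()
        tags = resp.json().get("tags") or []
        yield from tags
        if len(tags) < page_size:
            return  # no more pages
        last = tags[-1]  # continue after the last tag we saw

for tag in iter_tags(page_size=50):
    print(tag)
```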
On Bitcoin miners, as mentioned, and on artifacts object storage: we are working on that now. We had some small issues that we are trying to overcome, but so far we have moved about 3.5 terabytes of data to object storage, and it seems we should be able to move the rest of the data in the upcoming weeks.

And CI production readiness: this was one of the lowlights, something that we were very ambitious about and something that takes quite an amount of time. We are right now at the phase of having the Consul cluster running and getting the production certificate, so we can actually hook Prometheus monitoring up to this Consul cluster and hook in any machine that is being spun up. There are also other parts that we are working on, like automatic cleanup of orphaned droplets on DigitalOcean. Today, for example, we detect when we have a mismatch between the local database and what DigitalOcean reports, and we can remove the hanging droplets; we do remove quite a significant number of them. But sometimes we also fail to create these droplets from the Runner machines, and they still end up preserved on the managers. These machines are still not removed. They are not a significant amount, but cleaning them up would further reduce the cost of using DigitalOcean for CI on GitLab.com.

But CI on GitLab.com is not only what we offer to our users; it is also how it affects CE and EE testing. For example, this is something we have been monitoring for some time, and we introduced a number of different improvements for caching, but it hit us again on the 7th, when we saturated the network connection on our cache server. It seems that the network connection on the cache server is something around 2.5 gigabits per second, roughly 300 megabytes per second shared between all concurrent jobs. So we noticed that our CE and EE testing builds were actually being slowed down by the inability to fetch the cache fast enough to make use of it. The way to overcome this problem, and to not have a problem with the cache server given our scale for CE and EE testing and how we run these jobs, is to try to use a local cache. So we store the cache on the machine and use it for as long as we have that machine, but we don't use the distributed cache server, which today doesn't really give us that much benefit; it mostly creates potential problems with the scale of the cache server, and it increases the time of the caching phase that has to happen in our builds.

But this is not the only improvement that we are planning to help with CE and EE testing. For some time, something that was proposed by the business people from DigitalOcean, and then implemented with the help of Thomas and the guidance of Stan, was to test high CPU machines for CE and EE testing. And it did prove that there is a significant gain by switching from the standard 2 GB machines to machines with high single-core CPU performance, even at the same 2 GB size. We did a bunch of tests on different sizes, and I also ran a Geekbench performance comparison between the c2 high CPU machines and the standard 2 GB ones. The takeaway is that the c2 is twice as fast on a single thread, and this is reflected in the pipelines that we run for CE and EE testing.
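To make the connection between "twice as fast on a single thread" and overall pipeline times concrete, here is a small back-of-envelope sketch, Amdahl-style. The CPU-bound fraction in it is an assumption I picked for illustration, not something we measured on our pipelines.

```python
# Back-of-envelope: how a 2x single-thread speedup translates into total
# pipeline time when only part of a job is CPU-bound. The 0.6 fraction is
# an assumed value for illustration, not a measured one.

def pipeline_speedup(single_thread_speedup, cpu_bound_fraction):
    """Overall speedup when only a fraction of the job benefits."""
    return 1 / ((1 - cpu_bound_fraction) +
                cpu_bound_fraction / single_thread_speedup)

s = pipeline_speedup(single_thread_speedup=2.0, cpu_bound_fraction=0.6)
print(f"overall speedup: {s:.2f}x")        # ~1.43x
print(f"time reduction: {1 - 1 / s:.0%}")  # ~30% shorter pipelines
```

Under that assumed 60% CPU-bound fraction, a 2x faster core works out to roughly 30% shorter pipelines, which lines up with the estimate I mention next; the real fraction of course varies per job.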
And right now we are considering running small-scale tests for every build on dev, to see how big an impact the high CPU machines have on our pipelines and their stability, and on testing time compared to the cost of using these machines. As I said, we plan to start with dev.gitlab.org: any job that runs on dev.gitlab.org in this testing phase will use the c2 machines. Then we are thinking about expanding that to every CE and EE fork and project that runs on dev.gitlab.org. Today it seems reasonable to assume that we should get at least 30% faster pipelines for every tested commit, which is quite a significant number that I believe everyone would like to see.

But now for something different, because from time to time we also check how many builds we run on GitLab.com. The trend you can see is that in January we had something like 200,000 builds run on GitLab.com, and now we are basically hitting 500,000 builds. The blue line represents any runner being used, so it includes the specific runners that people register to their projects, as well as the shared runners and the runners for GitLab CE and EE testing. The distribution today is quite interesting: it seems that about 59% of the builds run on GitLab.com are being processed by us, where 21% is for CE and EE testing and 38% is used by the shared runners. It doesn't tell the full story, because this is just the number of jobs being executed; it says nothing about durations and how many minutes we spent in each of these buckets. But that is also interesting to see, and I will prepare that comparison for the next functional group update.

This also shows how big a challenge it is for us to make sure that we can scale GitLab CI on GitLab.com: these 500,000 builds are not a problem today, but if we assume that in six months we have two million builds, it still should not be a problem. This is why we are considering opening a position and looking for more people who could help us in solving the performance and scalability challenges, but also the many direction challenges that, as you can see, we have. We have started working on the draft requirements for the next CI/CD team members, which very clearly describes our goals and the key competencies that we are looking for, and which will be the base for considering opening this position and looking for more people to help us with that.

I believe that is everything from me. I will now look for questions. Gabier asks: are we planning anything to prevent search engines from using HTML build artifacts? Today we don't have anything planned for that, but if you could join in on the issue and describe what you think we should do, we still have plenty of time to consider it. Remy mentioned, yes, Kim mentioned who. Okay, since I don't see any more questions, thank you very much and see you next time.