Okay, hello everyone. Welcome to the next CI/CD team update, the one for today is mostly about 9.1 and 9.2. So let's start, as usual, with our accomplishments.

I would say that a very nice accomplishment is Fabio joining us; Fabio kind of introduced himself yesterday. So welcome Fabio, good luck with all the challenges that we have in the CI/CD team and the number of things that we are working on. We are looking forward to working with you; I've been kind of discussing with people already.

In 9.1 we worked on shipping a small iteration of Canary Deploys. A Canary Deploy is a way for you to have two versions of your application running in an environment. If you have, for example, production, that production can also have a different version of your application running, something like a staging application that processes part of your load, allowing you to test some of its capabilities. We made these Canary Deployments extensions to Auto Deploy, where we also added support for private images and very simple support for databases. Right now it is only Postgres. It's very basic, very experimental support, and the databases use ephemeral storage, so the data is not persisted, but it actually allows you to test canaries, deploy boards, support for Heroku buildpacks, and databases, all in basically one package.

One of the very cool achievements of 9.1, which we managed with the joint forces of the community, is shipping three awesome features. All three were among the most requested by the community, and it seems that people from the community actually did join and try to build them; we had to join the effort in order to finish them. With 9.1, as discussed before, you can have multiple Docker images per project (there is a small sketch of this below). You can auto-cancel pipelines; we will probably enable auto-cancel pipelines on GitLab CE quite soon. And there are scheduled triggers. Right now the feature is named scheduled triggers, but in 9.2 it will be pipeline schedules. It's a feature that has been requested for something like a year now, a very long time. A very amazing contribution from dosuken, and it allows all of us to build a lot of awesome things on top of it.

But it's not only that. As usual, we focus on a lot of performance, stability, and support improvements, and we spent quite significant time on those. One of these was a four-or-five-months-old request from support and from the community about disabling successful-pipeline notifications by default; with Sean's help we made that possible.

We also improved support for big jobs. It's not uncommon these days to have jobs with a 15-megabyte build log. If you try to open such a build log right now in GitLab CE, your browser will basically explode and you will not see anything; your Chrome will just crash. We kind of made it work. It's more of an incremental improvement than the final solution: we limit the amount of data that we show, rendering only the last 5 KB of the log. You can still download the full log if you really need it, but by default we limit the amount of data being rendered.

And then we had some confusion about caching, whether it works or not. It turned out that we were delivering the wrong message to the user, showing that we were extracting the cache, which was not really true, because we were analyzing that message differently.
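To make the multiple-Docker-images feature a bit more concrete, here is a minimal sketch of a `.gitlab-ci.yml` where each job declares its own image and services; the job names, images, and commands are illustrative:

```yaml
# Minimal sketch: each job uses its own image and services,
# instead of one image shared by the whole project.
test:backend:
  image: ruby:2.3            # illustrative image for this job
  services:
    - postgres:9.6           # extra service container linked to the job
  script:
    - bundle exec rspec

test:frontend:
  image: node:7              # a different image for another job
  script:
    - yarn test
```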
Our effort, something that started like two releases ago, is a lot of focus on making the CI part of GitLab as real-time as possible, and 9.1 is just the next iteration. With 9.1 we have the pipeline list refreshing automatically. There is still some room to improve that later, but it's a very good first iteration. Deploy boards also get these real-time abilities. For now it's partial, but with 9.2 deploy boards will always be updated, so you will always see your canaries and deployments in real time when they happen (a small sketch below shows the labels involved).

But to be fair, as always, not everything went right. One of the things that didn't go right is object storage. Something like a week or a week and a half before the release, we figured out that our approach to this problem was not ideal. In GitLab, in GitLab Rails, we use CarrierWave. CarrierWave is a library that gives us an abstraction for storing data, data that is currently kept on the file system. But CarrierWave is not really designed to store big files, and it's not uncommon to have artifacts that are one or two gigabytes. At the end of the release we discovered that relying solely on CarrierWave to do this upload is basically the wrong approach, because the upload would be done in a Unicorn process, making it possible to Denial-of-Service GitLab, which we cannot accept.

GitLab Runner volume mounts for Kubernetes: this is something that was started like three months ago and that we still haven't shipped, mostly because of focusing on all the other things, and a little because for a very long time we didn't have a very clear explanation of what the proposal is. The lowlight is that we still have to work on it.

Auto Deploy changes, the tweaks that allow you to use canaries, private images, and databases, got pretty much shipped last week. We kind of stretched the deadline because it is a separate repo, but I know that Axel, Joshua, and Mark were working on some final documentation touches on Friday, right before Saturday. Very late, and this is something that should basically not happen.

Another lowlight that struck us is the Runner upgrade process with 9.0. With Runner 9.0 we dropped backward compatibility with previous GitLab releases, and we got some backlash from the community saying that they usually upgrade only the Runner to the latest version. We didn't have a fail-safe mechanism to make the problem with the runners visible to people, so we had to build a compatibility check into the Runner to make it easy for people to discover that they are running an old GitLab with the latest Runner, which is not supported by that version.

CI on GitLab.com: as always, there is a lot happening there. The paid plans are being launched; it's a very big effort by a lot of people, led by Mike from the product team. Free and Bronze users will be limited to 2,000 Runner minutes per group for private projects, and we also have plans with a much bigger allowance. These limits will effectively be enabled on the first of May, which is actually this Thursday, when we will pretty much have this in flight. Currently we are running these CI limits on a small scale on some projects, to have the assurance that everything is working as expected. So if you are maybe interested in joining this testing effort, just give us a notice and we can enable these limits for you.
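On the deploy boards and canaries topic: deploy boards identify the pods of an environment through labels on the Kubernetes objects, with the canary track marked separately. A hedged sketch of the relevant deployment metadata; the names are illustrative:

```yaml
# Hedged sketch: pods carrying "track: canary" can be shown separately
# from stable pods on a deploy board.
apiVersion: extensions/v1beta1   # Deployment API group in this Kubernetes era
kind: Deployment
metadata:
  name: my-app-canary            # illustrative name
spec:
  template:
    metadata:
      labels:
        app: my-app              # ties the pods to the tracked environment
        track: canary            # canary pods; stable pods use "track: stable" or omit it
```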
There is also another story happening: GitLab Runner 9.0. We dropped support for old GitLab releases, but we also introduced a new API, and this new GitLab Runner is basically much more efficient than the previous one. So we are actually looking into deprecating all the old runners and mostly forcing people to upgrade, because the cost of handling the new runners is like 99% lower than the old ones.

If I think about the stability of CI on GitLab.com, my general understanding, or just my visibility into the last month, is that it has been much more stable and we are much more responsive to all potential problems. For example, some of you were having problems with builds stuck on image pulling. This issue is actually solved now; it shipped with Runner 9.0 and GitLab 9.1, because it was related to the API changes that we introduced in GitLab Runner. There is also this "CI cannot connect" issue that you sometimes see on your merge requests. We know when this issue is happening; we are not really sure yet why it is happening, but we actually have some ideas where the cause can be. So this is something that is still not solved, but there is a workaround that allows you to overcome this issue until we figure out the proper solution.

And thanks to Ben and Thomas, we have a crazy amount of new graphs, new monitoring, and new alerting, which is actually what allowed us to discover the stuck-on-image-pulling problem. On the first graph, on the left, you actually see the spike of machines that are stuck; it kind of shows us where the problem is. So this extra monitoring brings a lot of benefit in terms of our response time in solving these problems. Ben introduced the second graph. It's maybe not that interesting if you don't know what is happening there, but it shows, not really the average, more like the 50th and 90th percentile of the time required to provision a new machine. Sometimes we just see that machines take a longer time to provision, so this graph gives us another data point towards a full understanding of our stability deficiencies in CI on GitLab.com, something that is especially important right now, when we are introducing paid plans.

As always, there is a crazy amount of things that we are working on. One of the big things that we have not yet really started working on is this direction of linking pipelines between projects. It's like the first iteration, where, if you click a link, we could show you your dependent pipeline from another project and the status of that pipeline. Pipeline schedules, something that we started last release with this amazing community contribution, still need a lot of work to get out of the alpha phase. So pipeline schedules will get a fully revamped UI, with Dimitri's designs on the frontend and help from Zeger on the backend side (a small sketch below shows how a job could target scheduled pipelines). We also have customers complaining that we kind of broke support for external CI services, so we are trying to make it right in 9.1, to make sure that you can use other CI services without GitLab CI, which has not always been true these days.
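Pipeline schedules pair naturally with a job policy in `.gitlab-ci.yml`. A minimal sketch, assuming an `only: schedules` policy is (or becomes) available; the job name and script are illustrative:

```yaml
# Hedged sketch: a job that runs only in scheduled pipelines,
# e.g. a nightly task, and is skipped on regular pushes.
nightly:cleanup:
  script:
    - ./scripts/cleanup.sh     # illustrative script
  only:
    - schedules
```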
Our continuation of the real-time effort focuses right now on the pipeline graph, but also on the job details view. Right now the job details view has this not very nice behavior: when your job finishes, the whole view has to reload. We hope that will not be the case with 9.2.

There is our effort on actually solving object storage; we have a number of ideas that we are currently implementing. There are also persisted pipeline stages. It's not really something visible from the user perspective, but from the performance perspective it has crazy implications. If we finally manage to do it, and there is a risk that it will not be in 9.2, you will see big improvements in the loading times of all CI-related views, but also in the resiliency of the CI logic, which right now sometimes breaks when we introduce another mechanism for processing your pipelines.

There is also this one-or-two-months-old thing that we are trying to pursue, GCE shared runners, and I hope that we can speed up this effort and finally make it true. We have a lot of pieces already in flight. One of these pieces actually runs every-hour jobs on a custom build image, the base image of the virtual machine, which would have embedded monitoring, an embedded node exporter, and an embedded IDS, so we can be faster in figuring out any abuse, and then hooks every build machine to the Prometheus instance. So basically, with a little more work, we could allow you to see the performance metrics of your builds if you really need that; a sketch of that wiring is below.

Container registry GC: I know customers that are struggling with the amount of data stored in the registry. The garbage collection I made is very experimental, but it turned out to actually work, and I already have people saying that it works great. So I'm just trying to make sure that it goes out of the experimental, alpha phase and can be considered stable and used by customers.

Auto Deploy: currently Auto Deploy is just custom bash scripts. We strive to make it Helm-based. Helm seems like the best approach to tackle these problems, and it then makes adding other services to Auto Deploy much easier; there is a rough sketch of such a job below. I see a number of comments; I will respond to them when I finish.

Things that may not ship: persisted pipeline stages, as I said, have very serious performance implications, but they are also a significant change, and there is a very big risk that we will not have enough time to make sure it is properly tested. We do not want to release something that could actually break pipeline processing, something that basically everyone in the company and a lot of people in the world rely on right now. For between-project pipelines, the first step actually depends on other things, and since we have not yet really started the frontend and backend improvements for that, there is a big risk that it will not ship. One of the problems here is also that next week there are holidays, and we will probably lose 30 or 40% of the team's capacity to holidays, which makes it harder for us.
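On hooking every build machine to Prometheus, as mentioned above: a minimal sketch of a Prometheus scrape configuration that discovers GCE build VMs and scrapes the node exporter embedded in the base image; the project and zone are illustrative assumptions:

```yaml
# Hedged sketch of prometheus.yml: discover autoscaled build machines
# on GCE and scrape the node exporter baked into the base image.
scrape_configs:
  - job_name: ci-build-machines        # illustrative job name
    gce_sd_configs:
      - project: my-gce-project        # illustrative GCE project
        zone: us-east1-d               # illustrative zone
        port: 9100                     # default node exporter port
```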
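And for the Helm direction of Auto Deploy, a rough sketch of what a Helm-based deploy job could look like in `.gitlab-ci.yml`; the image, chart path, and release name are illustrative assumptions, not the actual Auto Deploy template:

```yaml
# Hedged sketch: replace custom bash deploy scripts with a Helm release
# upgrade; adding services (databases, canaries) becomes chart configuration.
deploy:production:
  image: my-registry/helm-kubectl:latest   # hypothetical image with helm installed
  script:
    - helm upgrade --install my-app ./chart --set image.tag=$CI_COMMIT_SHA
  environment:
    name: production
```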
So basically that's it; this is the CI team update. I would like to read the questions now, if we have any.

Kim: do we still get "success" if the pipeline failed before? No, we don't yet have a "recovered pipeline" notification, if this is what you're asking about.

Kirsten: CarrierWave still registers the artifacts but doesn't do the upload anymore? Exactly. The main thing here is that CarrierWave would stay responsible for managing, deleting, and accessing the files, but the whole upload process would be completely asynchronous and out of the bounds of CarrierWave, which actually makes it really cheap to do.

Should we tie to AWS S3 or Minio? It depends on the infrastructure choice. Currently the plan is to support S3 first, maybe with Google Cloud Storage later. But basically S3 is like the standard for object storage; it is very widely used, so we could use both.

On the cross-project pipelines view: thank you, John; the Build team also wants that. Jim: yes, the plan is that we have scheduled pipelines and connected pipelines on our agenda. Scheduled pipelines are right now in the experimental, alpha phase, so this is something you could start using, and connected pipelines are basically joining the game either this release, as a first iteration, or next release.

Do we have any other questions, maybe? Yes, Minio is actually fully compatible with S3, so it is a drop-in solution. So thank you very much. See you at the team meeting.
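To illustrate that last answer: since Minio speaks the S3 API, the same connection settings could point at either backend. A heavily hedged sketch of what an artifacts object-storage configuration could look like once this ships; all keys and values here are illustrative, not a committed interface:

```yaml
# Hypothetical gitlab.yml-style fragment; a fog/S3-style connection
# works for AWS S3 and, via a custom endpoint, for Minio as well.
artifacts:
  enabled: true
  object_store:
    enabled: true
    remote_directory: artifacts        # bucket name
    connection:
      provider: AWS                    # S3-compatible API
      aws_access_key_id: ACCESS_KEY
      aws_secret_access_key: SECRET_KEY
      region: us-east-1
      endpoint: "http://minio.example.com:9000"  # omit for real S3
      path_style: true                 # Minio typically needs path-style URLs
```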