Hi everyone. I'm just testing my equipment, which appears to be working. Okay, can everyone see my screen? I'm in presenter mode, so I can't see the chat. Thanks. Welcome to the functional group update for the build team. I decided to do a slightly different functional group update today because during Cancun I realized that a lot of people don't really know what the build team is doing and what kind of stack we are working through. So I'm going to give a bit of an overview of what we handle on a daily basis, and also an update on what we are preparing for the 9.0 release, or at least the highlights of it.

I'll start with what I would call the mission of the build team, though I think it's really the mission of the whole of GitLab: we want GitLab to be a tool that is very easy to use, so that end users can focus on their own work. For the build team, this translates into abstracting the maintenance complexity away from the end user and keeping the initial installation bar very low. The person who picks up the package should be able to complete the installation within a couple of minutes, with minimal configuration. Once the package is installed, we also want a very low barrier for upgrading to the newest version. This supports our release cycle, which is every month: if upgrading were difficult, people would be discouraged from upgrading, we would have to support more and more older versions, and everything would slow down. We also don't want the end user to have to worry about configuration. GitLab is a complex application, and if the end user had to configure every single little thing necessary to run it, I think most people would be discouraged. So we keep the configuration in one place, and once services are configured, they should run seamlessly without any user intervention.

With that in mind, here are our responsibilities. We build the omnibus GitLab package, which allows the end user to just install GitLab and run it without thinking much about how GitLab looks internally. We ship 19 packages per version, 10 of which are CE and 9 EE, and with all of that we also need to support operating systems that are, let's say, older. I wouldn't call 2010 very old, but given that it's 2017 now, a lot changes in software, so supporting something released in 2010 is a challenge. We also build Docker images, one for CE and one for EE, and we base the Docker image on the omnibus GitLab package; I'll explain why we do it this way a bit later. We also maintain cloud images, for the AWS platform and Azure, and we're working on Google Cloud Platform as well. Those images are basically one-click installations: you go into the cloud provider's interface, click a button, and everything is set up for you. These images are also based on the omnibus GitLab package. On top of that, we maintain packages for container schedulers. The naming varies by platform, but for example we maintain a Kubernetes Helm chart, an application for Red Hat OpenShift, one for Mesosphere, and so on. We use our official Docker images as the base for these. I've separated Pivotal Cloud Foundry here.
It should really go under the container schedulers part, but Pivotal Cloud Foundry is a bit different and we are doing something very special there; still, everything is based on the omnibus GitLab package.

As you have probably noticed, the omnibus GitLab package is at the base of everything we do. Why do we do it this way? To limit the amount of work we would otherwise spend jumping from project to project and from platform to platform. If we have one common base, we can focus on that base and only do the very specific things that are necessary for each platform. That actually allows us to ship to new platforms much faster than it would usually take us, because we don't have to rebuild everything from scratch when we start on a new platform.

That also brings us some challenges. We need to make sure the package works on a bare operating system the same way it works in a Docker image. We also bundle all the dependencies inside the package, to make sure the installation is very simple and easy. That means we need to make sure the software we bundle and ship can compile on both older and newer operating systems. If you hear me say "I don't want to add a dependency", this could be reason number one. I'll use the example of Node.js, which we recently worked on together with the Frontend team: on CentOS 6, Node.js can only be compiled up until version 0.10. After that, it becomes really complicated; it needs a lot of library upgrades and ultimately might not even work. So basically, all the libraries and dependencies we want to ship need to compile on both older and newer operating systems.

Apart from that, we need to keep all the dependencies as up to date as possible. Top priority is always security: we are always looking for security vulnerabilities inside the libraries we ship, and these can sit very deep. It doesn't necessarily have to be, let's say, Postgres itself; it can be one of the dependencies used to build the library. Sometimes we also need new features a library introduces. Again with Node.js: the latest versions work with the Webpack setup Frontend needed, but the older versions basically just don't. So we need to keep those libraries up to date as well. And of course we have a lot of transitive dependencies, which means that through no fault of ours, or even of the direct dependency we ship, something can break somewhere very, very deep in the dependency tree. A recent example is the warnings that support may have seen reported: users were reporting that during a reconfigure run they would see various warnings. It turned out it wasn't our fault, and it wasn't Chef's fault; it was something very, very deep down. One gem that is used by basically every Rails application had a behavior change that affected everyone.
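When that happens, the usual short-term fix is to pin the offending gem until a fix lands upstream. A minimal sketch of what that looks like in a Gemfile; the gem name and version here are purely hypothetical:

    # Gemfile -- pin a misbehaving transitive gem (hypothetical name/version).
    # Declaring it directly forces Bundler to resolve to a known-good
    # version until the upstream fix ships.
    gem 'deep_dependency', '= 1.2.2'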
That brings us to the next challenge, and that is complexity. We need to keep control over the package size. One reason is that the more things we put in the package, the slower it will be to unpack. I don't know if you are aware of this, but our package is around 260 megabytes in size, which is large, but not unusual, or at least not outrageous. I think most of you would be surprised to learn that once we unpack the package, its contents are one gigabyte in size. GitLab and all of its dependencies are one gigabyte. That actually takes time to unpack on certain systems, which we see on GitLab.com, for example. Each dependency we introduce also adds to the build time, and it's really important when we are releasing that the build time is as short as possible.

Of course, upgrading all of these dependencies is a problem of its own. I'll use the example of Postgres. We've been shipping version 9.2 since the very beginning, and now we are at a crossroads. We need to ship Postgres 9.6, which is newer, better, and everything you can imagine, but that also means we need to figure out how to upgrade existing users. Sadly, the Postgres 9.x releases are effectively major versions, and the upgrade between them is a breaking one: a 9.2 data directory is not directly compatible with 9.6. And this is only one dependency; we have to consider many more.

Finally, we also need to make sure the end user has a very seamless experience when setting their configuration. If we can bring complexity back onto ourselves so that the user sets one configuration flag instead of five, we are going to do it.
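A concrete example of that is external_url in /etc/gitlab/gitlab.rb: one value from which the package derives settings for several bundled services. The exact list of derived settings below is from memory, so treat it as a sketch:

    # /etc/gitlab/gitlab.rb
    # One user-facing flag...
    external_url 'https://gitlab.example.com'
    # ...from which reconfigure derives settings the user would otherwise
    # set one by one: the NGINX server name, whether to serve HTTPS, and
    # the host, port, and scheme Rails uses to build URLs.
    # Applied with: sudo gitlab-ctl reconfigure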
Okay. Now that everyone has an understanding of what we actually do, I also want to mention what we are working on towards the GitLab 9.0 release.

We currently have two Raspberry Pi packages, one for Wheezy and one for Jessie, so Raspbian 7 and 8. Because Raspbian 7 is basically no longer maintained, we decided to drop support for that package. We will still have one Raspberry Pi package, just one less to build. However, we are going to introduce a SUSE Linux Enterprise Server 12 package for the Enterprise Edition. Various customers have expressed interest in it, and we are working with SUSE directly; the company is actually helping us build the infrastructure we need to build the package.

One thing I think will be interesting for everyone: the GitLab package is going to automatically attempt an upgrade of the user's Postgres. We've been shipping two versions of Postgres for the past two or three months, I think, and we've been warning users that they can run a single manual command that automatically upgrades everything for them and switches over whatever is necessary. If you've ever tried to upgrade Postgres, it's not that simple; you need to follow quite a lot of steps, but we extracted all of that into one command. However, to make some progress and ensure we ship the latest version of Postgres by default, in 9.0 we are going to force the upgrade for everyone who hasn't upgraded yet. This is going to produce some interesting challenges, but I'm confident everything will work out, and at the very least we are working on rescuing a possible failure in a graceful way that wouldn't leave a user in an inconsistent state.

We are also working on adding a secondary database for GitLab Geo. This is necessary for disaster recovery, and it is going to add another service inside the package. That will also allow us to do a simplified GitLab Geo. Geo is very complicated today, and we want to remove the configuration that customers currently need to do manually and move it behind what are basically feature flags, where we can just set one flag and turn on the things that are necessary and turn off the things that aren't.

Next, we are working with the Prometheus team to enable Prometheus and all of its exporters by default. One of the challenges there: turning everything on by default is the easy part. What's complicated is making sure users can turn everything off if they need to. For example, if they don't have a lot of resources to run the package with, they may need a master switch to turn off all the things they won't be using. That is one of the things we will be working on.
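To make that concrete, the end state we're aiming for looks roughly like this in gitlab.rb; take the exact key names as a sketch, since they may change before this ships:

    # /etc/gitlab/gitlab.rb -- illustrative flags, names may differ at release
    # Simplified Geo: one flag instead of pages of manual configuration
    geo_secondary_role['enable'] = true
    # Monitoring master switch: Prometheus and all of its exporters are on
    # by default, but one flag turns them all off on low-resource machines
    prometheus_monitoring['enable'] = false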
The cloud images I mentioned earlier are currently being built manually; when I say manually, I mean we need to trigger the builds by hand. We are going to automate all of this, and this work is ongoing, first for the AWS images and then moving on to the other ones. We want the same release procedure for the cloud images as we have for our packages and our Docker images. On a similar note, Pivotal Cloud Foundry is in a similar state right now. We are not fully up to date with it; we are lagging behind, but the Pivotal Cloud Foundry GitLab Tile is only one release behind our current minor release, and we are also heading in the direction of automating it. A lot of work was put into the Tile, and that has finally allowed us to start working on features. The first of those is LDAP support for Pivotal Cloud Foundry, which is currently being worked on, and I know it has been one of the most requested features for our Tile.

To add more components and more complexity, we are working together with the production team on shipping a highly available Postgres setup in the package. This work has been going on for the past two and a half, almost three months, and we've had to scrap it and start from the beginning twice now. I think it's going to happen a third time as well, because the production team is changing the infrastructure. GitLab.com is our biggest customer, and we want our customer to be happy, so we follow what they are doing and help them there as well.

There are also some Kubernetes Helm chart improvements happening as part of the Idea to Production demo; namely, we are introducing the CI Runner inside the Helm chart. This is ongoing work that came out of Idea to Production.

And finally, but I wouldn't say least important: improving build times. Our releases are problematic, at least on the package side, when we have to wait 45 minutes for a package to build. We want to cut that down to a standard of at most 25 minutes, because I know a build can be done in 25 minutes, and there is ongoing work that will help us cut it even further. Why do build times matter? If we can do shorter builds, we can also introduce building from branches for everyone, and that will let every engineer working on any feature in GitLab get a built Docker image or package they can quickly spin up to test their change. Call it basically a review app, but for the whole package, for the whole product. This also helps the GitLab QA effort that is happening, the automatic quality assurance work, which is going to be one of our focuses in 2017.

I also want to take this opportunity to quickly mention who is actually working on this, who the team is. Currently the team is six of us: DJ, Gabriel, Jason, Ian, and Balu, together with me, of course. We expect the build team to grow to eight total in 2017. I'm not fully sure yet whether it's going to be two additional senior engineers, or whether we'll go with one senior and one junior; that is still being decided. But we are hiring people with DevOps experience, anyone who sits halfway between a full operations person and a developer. What that actually means is that we need someone with a lot of Chef and Ruby experience, someone who has an idea of containers and container schedulers, and, more and more, we expect the engineer to also have some knowledge of Go, because we now ship a lot of Go components as well.

I think I'm going to finish with that. Thanks for listening, and I'm happy to take any questions you might have. I see a lot of chat now; of course, I didn't see it before, because presenter view doesn't show it.

I see a question asking: what about using Alpine as our base image to make it smaller? That's a good question. We investigated that. The problem is that we still want to keep the Docker image based on the omnibus GitLab package, and that actually prevents us from using Alpine. We would have to do a lot of interesting things to get it running. It's possible, but it would be a huge task, and I'm also not sure how much space we would actually save, given that the package itself is already around 250 megabytes. If you add an Alpine image with all the dependencies we require, it's still going to be around 300 to 350 megabytes, and I'm not really sure that's worth the effort, at least at the moment.

How much warning of the Postgres upgrade will there be? Thanks, Rev. We've been shipping the warnings for the past three releases: every time you ran reconfigure, you got a big chunk of text warning you about the upgrade. We also warned everyone in the blog posts that they can expect this coming. What we are going to attempt is to hook the pg-upgrade command we already have so that it runs automatically for you. That's also why we are investigating how big our customers' databases are, to understand how much of an impact this will have. From our tests, and from the tests other people have tried, anything up to 10 or 15 gigabytes of database is going to be very easy, and we shouldn't see any problems. But if you have a GitLab.com-sized database, things are just not going to work, so we are looking out for that. We are investigating, and we are really hopeful that nothing will break there. But that's also why we are making sure that if something does break, we fall back gracefully to the previous version, so that you can still continue using GitLab and then plan further. We are being very careful about that.
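To sketch the shape of that fallback in Ruby, with the caveat that this is an illustration and not the actual code we're writing, and assuming the revert-pg-upgrade subcommand that pairs with pg-upgrade:

    # Rough sketch of "attempt, then fall back" -- not the real implementation.
    if system('gitlab-ctl', 'pg-upgrade')
      puts 'Bundled PostgreSQL upgraded to the new version'
    else
      # Roll back instead of leaving a half-upgraded, inconsistent instance
      puts 'Upgrade failed, reverting to the previous PostgreSQL version'
      system('gitlab-ctl', 'revert-pg-upgrade')
    end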
Only one thing to add: if you are GitLab.com-sized, the key question is whether you can take the downtime. If you can't take the downtime, just do the upgrade the classical, manual way; let's not get into crazy things. Yep. The thing is, we haven't found anyone who is GitLab.com-sized just yet, or at least no one has told us anything yet, so we'll see. Yes. [Inaudible reply to a question from the chat.] Okay, I think that's pretty much it when it comes to questions. If you have any additional questions or concerns, join us in our #build channel and ask away. Otherwise, see you very soon in the team call. Bye.