So, hi. My name is Joost Evertse. That's the pronunciation. I'm a DevOps engineer, and I'm very excited to be here. It's great to be able to travel here and to see what San Francisco is all about. I already had an interesting day with all kinds of presentations, and I hear a lot of similarities with the story that I'm going to tell. I live in the west of the Netherlands, near the sea, near The Hague, with my wife and two daughters. I'm in my 40s, and I've been in IT since 1998. I've worked at startups and also at several banks and telecom organizations. During 2018, I wrote a book, Mastering GitLab 12, about GitLab 12 and how to use it in your organization. I'm currently in the process of starting a CI/CD training company called CI Academy with a former colleague from ING, because we think there's still much to be trained at big companies about CI/CD. So, while I've worked at startups, I've also worked in enterprise organizations. And I want to tell you today about how we experienced scaling GitLab in a big organization, ING Bank. You've probably heard of it. I want to show you the difference between a big enterprise and a small company. So, first, let me tell you how we scaled GitLab at App7, where I first worked with GitLab. Imagine Cloud7, a telecom services company based in the Netherlands. In 2014, I was part of the engineering team. This company started in the 90s with the management of telecom networks and private branch exchanges, and was looking for new markets. By 2014, we had already switched to supporting managed mobile devices. But now we wanted to go into the app market. The owners of Cloud7 decided to start a subsidiary company called App7. What's in a name? The goal was to begin a real app development agency and to do it as fast and cheap as possible. We had to choose a technology, and we chose web apps with the React framework. At Cloud7, we were used to CVS and SVN as versioning tools. We liked the command line. I'm a Unix guy.
But we hired new front-end developers, and those guys really hated that. They wanted the newest stuff. So, we started to look for a Git-server-based solution. We wanted it to be possible to use it on premise, it had to be open source, and it had to cater to all kinds of users. So, GitLab was our natural choice. It was the best option for us because, of course, you can install it on premise. It has a web interface that most people can learn to use; it's quite easy. And it's open source, free to use, so we could use it for our business. It took a couple of minutes to make the decision to use GitLab, and also only a couple of minutes to install the Omnibus package. It's very easy. Let me give you another example of scaling GitLab in a small company: this is how we use it at CI Academy, the training company that I'm starting. We started out organizing small courses for developers, using the portable desktop you see here. We took it along, and we used Raspberry Pis, you probably know them, for the practical exercises for developers. That works very well for a handful of people, but to handle bigger groups of students, we needed to scale. So, we created an automated learning environment using Kubernetes, and we use AWS EKS in the classroom to scale up for bigger audiences. These examples show how it is really great to start small and use cloud-native technology to scale up your business. And now, for a different ball game: GitLab on an enterprise scale, with thousands of users. We already heard various comments about that today. My story will focus on how we as a GitLab team at ING experienced the growth of GitLab from 2,000 to 12,000 users. But first, some info about ING. It's the biggest bank in the Netherlands, with an annual balance sheet of 845 billion euros. It has a presence all over the world, also in the U.S.
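By the way, that couple-of-minutes Omnibus install I mentioned really is just a few commands. A minimal sketch, assuming a Debian/Ubuntu host, using the standard steps from GitLab's install documentation (the external URL is of course a placeholder for your own):

```shell
# Add the official GitLab CE package repository (script from packages.gitlab.com)
curl -fsSL https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash

# Install GitLab CE; EXTERNAL_URL tells Omnibus what address the instance will serve
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install -y gitlab-ce

# After any later change to /etc/gitlab/gitlab.rb, re-apply the configuration
sudo gitlab-ctl reconfigure
```

That's the whole reason a two-person shop can run its own GitLab: the Omnibus package bundles nginx, Postgres, Redis, and the application in one install.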
In 2015, ING started an agile transformation, where they focused on creating an engineering culture. They wanted to be the Google of banking. How do you scale this culture? They introduced the Spotify model in the organization. It gave more autonomy to teams and introduced all kinds of other interesting ideas. Teams are called squads, for instance. I was in the Code squad; that was the name of our GitLab team. You could fill several presentations about this subject alone, but I won't do that. I will focus on CI/CD, because it's part of the vision, and it included the use of GitLab for all developers. In 2017, I started a temporary assignment at ING again, this time as a CI/CD engineer at the department that was specifically set up to provide CI/CD services to all developers within ING. The department was called CDaaS, an abbreviation for Continuous Delivery as a Service. Here you see the basic overview of the CDaaS pipeline that was offered and that existed in 2017. There have been some changes, but the concept has not changed. You see a Java/Linux flavor, you see the penguin, and a Windows flavor with Visual Studio Team Services. There are different phases, code, build, deploy, etc., and various products are used. GitLab was chosen for the code phase as a code repository and collaboration tool. Then you see a stateless build server with Jenkins to build the code, and the artifacts are stored in the artifact repository. The deployment, test, and release phases used to be handled by Nolio, but in 2018, they switched to a combination of Ansible Tower and Visual Studio Team Services as an orchestrator. In 2015, when CDaaS started, they had to choose a product for the code phase. There were three options for code versioning: GitLab, GitHub Enterprise, and Bitbucket. GitLab was chosen. Why? Feature richness. It had many more features than the other products, and I think that's actually still the case right now.
The authorization model that could be used inside GitLab mapped very nicely onto what ING wanted. Very important for us was the on-premise possibility: we could install GitLab in the data center of ING, and the other products could not. Well, GitHub Enterprise had an option to install an appliance, but it was a black box, and ING didn't want a black box in their network. So, let me tell you a bit about how our teams at ING used to work with building software. I'm taking you back to around 2001, when I worked at ING as a Unix system administrator. My team was responsible for Girotel Online, ING's internet banking, and for mobile banking. This was before agile. It was way before the term DevOps was coined, and you see here that silos existed. Everybody did double work. Communication was terrible. In the end, it made everything very costly, and projects took a long time. This does not scale well. Fast forward to 2015, when CDaaS introduced its Linux and Windows pipelines. It does not take much imagination to see that this is much more efficient. The most important thing I want to show here is that the lines from the teams come together at the pipeline. It means that collaboration was getting better and silos were breaking down. GitLab stimulates this kind of work, and what happened was that teams started inner sourcing. We heard a lot about this in other presentations as well. They shared code and components. It meant that inside of ING, there was more open source software within the company. Communication between developers shifted from email and ticket systems to a Mattermost instance that was installed alongside GitLab. Another effect was that developers started to help each other use our pipeline, and that was a very beneficial effect.
Also, the shared pipeline had APIs that we built for gluing different tools together, and developers started contributing things like monitoring dashboards that used those APIs, and also other automations, like the request or renewal of SSL certificates, for instance. There was a presentation at GitLab Commit London by Fabio Hoeser from Siemens, maybe you've seen it, and it shows this same picture. They had the same evolution, so to speak, at code.siemens.com. Today, I heard several stories where this picture could be used as well. To give you an idea of how we planned to scale GitLab as a team, let's look at some diagrams. These were actually used by my team in sprint planning: we talked them through and made the planning. We had a simple GitLab setup in place in 2017: just a reverse proxy, an application server, and a backend database server. The next step was that we created a copy of this configuration in another data center, so we created a DR setup. We used rsync to replicate files and repositories, and we set up streaming WAL replication to copy data from the production Postgres database to the DR site. The DR site was in standby mode, and we had a switch script that we could flip to the other data center in case of an emergency. We planned to scale further in 2018 to the situation you see here, with multiple database clusters, load balancers, Redis clusters, and shared file systems. We started with pulling Redis out of the picture and creating a Redis cluster. That worked perfectly. We did it in two weeks, I think, and nobody noticed; it got a bit faster, maybe. We then started to test GlusterFS as a shared file system, using a GlusterFS service offered by the infrastructure teams, and we were the first internal customer to use it. The initial design and calculations indicated that performance was comparable to what we already had, so we said, okay, that's cool.
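The rsync-plus-streaming-replication DR setup can be sketched like this. The hostnames and credentials are illustrative, the repository path is the Omnibus default, and the standby.signal / primary_conninfo mechanics assume PostgreSQL 12 or newer (older versions used a recovery.conf file for the same thing):

```shell
# Mirror the Git repositories to the DR site over SSH
# (dr-host is a placeholder for the standby machine)
rsync -az --delete \
  /var/opt/gitlab/git-data/repositories/ \
  dr-host:/var/opt/gitlab/git-data/repositories/

# On the DR database server: point the standby at the production primary.
# PostgreSQL then streams the WAL continuously and replays it, keeping the
# standby read-only and a switch script away from becoming the primary.
echo "primary_conninfo = 'host=prod-db user=replicator'" >> "$PGDATA/postgresql.auto.conf"
touch "$PGDATA/standby.signal"   # presence of this file keeps the server in standby mode
```

Promoting the standby during a failover is then a matter of running `pg_ctl promote` from the switch script.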
But unfortunately, in the summer of 2018, when the whole new environment was set up by those guys from our teams, we noticed that the performance was not good enough. It was actually worse, so we could not migrate to this new GlusterFS solution. So we celebrated our failure with a barbecue at my place. This is my team here, you see, and my barbecue as well. But you've got to eat, right? I'm getting a bit hungry right now, actually, seeing this. After that magnificent event, we had to change to a plan B: we decided to migrate all of our machines to the ING private cloud. What is that ING private cloud? Well, it looks a bit like AWS or a public cloud of your choice: you can get computing, storage, and network resources with self-service provisioning. It had been on our wish list for a long time, but we didn't do it earlier because management was not really into it; they didn't like it because there had been some stability issues in the past with the new private cloud offering. As a team, we had also felt earlier that we could not migrate to this system because our automation tools were not yet good enough. We thought. But now we were out of options, because GlusterFS was not going to cut it, NFS also not, and other options also not. So we decided to go for it, to migrate to this environment in August 2018. We managed to finish the migration in the fall, quite fast, actually, and the big difference with the former situation is that we had all installations done with Ansible, everything, and all the infrastructure components are defined in playbooks, in GitLab, by the way. And it was proven that it worked, because at the end of 2018, we had to scale up with more CPU and RAM, and we had to get many more GitLab runners in there, as you see in the picture. We only changed some parameters in Ansible variables, we ran the deployment scripts with Ansible, and then we had more components working in the ING private cloud.
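That scale-up step really was just a variable change plus a playbook run. A sketch of the idea; the variable and file names here are my own illustration, not the actual ING playbooks:

```shell
# Everything was defined in Ansible, so growing the runner fleet meant bumping
# a few group variables and re-running the same deployment playbooks.
# Illustrative group_vars file for the runner hosts:
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/runners.yml" <<'EOF'
gitlab_runner_count: 24   # was 8; more runners for the growing build load
runner_vm_cpus: 4
runner_vm_memory_mb: 8192
EOF

# Then the unchanged playbooks roll out the extra machines, e.g.:
#   ansible-playbook -i inventory site.yml --limit runners
grep 'gitlab_runner_count' "$WORKDIR/runners.yml"
```

The point is not the specific variables, but that capacity becomes a parameter in version-controlled playbooks instead of a manual server build.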
I left ING at the beginning of 2019, so for me, this infrastructure is the end state. But I heard a month ago that they were already scaling up again with more runners. The number of runners is exploding. So what are the hurdles for pushing GitLab company-wide that we saw? Ironically, GitLab was too successful. If you look at this user growth diagram, we started with under 2,000 users. And when the user base started growing, we had issues with the hardware that GitLab was running on. We couldn't add more CPUs, more RAM, or more storage to it. So that was a problem. The growth of users also meant that we had more support work. We were a three-man team, so remember that; actually, we even started as a two-man team. But we got more support work with more users, and also, because it was a global service, it was 24/7. When we found out that support was going over our heads, we created a first-line support team in Poland with colleagues from ING in Poland. That relieved some pressure from first-line support issues. But still, a lot of the second- and third-line support we had to do ourselves. We were responsible as an autonomous team to manage that. Another effect of being too successful, and of having more users, is that it was getting harder to secure maintenance windows. There was change management in place in the bank, so we had to get sign-off approval for all kinds of changes. And with more users and more departments, it was getting harder to get maintenance windows. So that was quite a hurdle for us. It's politics. There were a lot of reorganizations in the last couple of years at ING, which are not per se bad. But what happens with these reorganizations is that we had a lot of overhead tasks to do for them, and that got on our backlog as well. And the problem was that with all the reorganizations, people got shifted around teams. So we got some knowledge dispersion across different tool teams within the department. Not so smart.
Also, we had new people coming into the GitLab team, and then we had to train them in all the sysadmin tasks around GitLab and how to use it as well. So that was not very handy. And very important: rules and regulations. Of course, ING is a bank, and banking is a regulated industry, but that is actually at odds with innovating and growing. So what does it mean to scale in a heavily regulated environment? Why is it so hard? Well, in 2017 and 2018, at ING, there was still cloud paranoia, even though cloud infrastructure had been around for some time. You have safe harbor agreements, and you have all kinds of technical solutions to keep your data safe and your code trusted. You have to guarantee that your customer data is not being seen by anyone else, and you have to prove it. That was the reason why we did not migrate to the public cloud for scaling in 2018, but to the ING private cloud. Our backlogs could have been full of cool integrations and new features for our department's pipeline, but we also had to do proper risk management. We also had stories on risk and innovation. So there was a trade-off between innovating and being 100% compliant. We actually had to spend 30% of our time officially on risk and compliance work, and in practice, it was even more. And then if you add the support work that also came on top of that, you have very little time left for innovating on the tools. I've heard it before today as well: tool fatigue. You see a lot of different tools and integrations in regulated industries. At banks, there are a lot of legacy applications. Much was invested to build them and to keep them regulatory compliant. It's not easy to scale them in a modern way, and replacing them is also not an option. This is partly because of the law of the handicap of a head start. Does anyone know what that is? I don't think so. I think it's maybe a Dutch thing; it's from a paper by the Dutch historian Jan Romein.
And it means that when you innovate and create a head start, it's very hard to improve on it again later. The example is the mobile banking industry in Kenya. Maybe you've heard of it: M-Pesa. It's been around for some time. There, they could pay with mobile phones 15 years ago, with SMS, and they totally skipped internet banking. They could already pay with a very simple mobile phone, where we in the West could not. I've seen that personally myself in Zambia as well. I'm not sure if you've heard of that country; it's also in Africa. I worked there as a consultant for the Zanaco Bank. That was really cool. And there I witnessed, when I had a look at their backend systems, that they had skipped the mainframe totally. They only have one box, a Java application server, running all core banking services: internet banking, credit cards, everything. And they do it with FLEXCUBE. That's a Java banking application that was built in India, and it's now actually part of Oracle; Oracle has it as a service offering. So that's a bit of an explanation of the law of the handicap of a head start: if you have everything built already, it's very hard to throw it away and start over again. So what does this all boil down to? Enhancing scalability in the enterprise. Can we say something general about this? Well, what are the factors that positively influence scalability? You need executive sponsorship. I've heard this many times today; it's the most important thing. An example of this in our situation was that before 2018, our management didn't want us to migrate to the ING private cloud, so it immediately was not an option anymore for us. But when they backed us, we had the green light. We could fill our backlog with all the stuff needed to migrate, and we could also get help from other departments. Corporate culture: we changed plans after the GlusterFS failure, but that was okay. It's okay to fail. We experimented and we shifted.
If you are afraid to change, you can't scale. You cannot innovate. Long-term thinking may be contrary to agile practices, but organizations, also ING, still move more slowly. Until this changes, don't reorganize every month. Use user involvement. At ING, developers are the biggest advocates for GitLab. We actually used them for support, because we were only a three-man team. When somebody had a problem and posted it on the Mattermost support channel, our channel, other developers started helping as well, sometimes even before I saw it, actually. And they helped us migrate to the ING private cloud as well, the guys from the infrastructure department, and they were GitLab users too. Training programs: very important. You have to keep them up to date and change them, because if you grow and scale your software, you also get different kinds of users, so you have to change your training program accordingly. It's very essential; it's the basis of our new company, actually. Automation: very important. This is the way; I heard this in the earlier presentations again, in a lot of themes. Automation can get the problems with risk and compliance out of the way. If you automate all the risk and compliance work, you have, in our case, 30% more time; you can do more productive things. Over-communicate: well, I think that's obvious, because your users have to know what you're doing. You have to prepare changes and communicate them. We used ServiceNow and we wrote blogs internally. We emailed every stakeholder about the changes that we were going to make, and afterwards, we evaluated them. And for communication with developers, we used the Mattermost channel to announce all kinds of things and changes, and to handle support. Experiments, with an exclamation mark: in our case, we only did one experiment, and I encourage you to do more than one, because you saw that the GlusterFS experiment failed for us.
And we didn't have the time to do more experiments. I really wanted to do more NFS experiments, but there was no time. So: experiments. In my book, I talk more about several scaling scenarios which could be useful in the enterprise, especially using Ansible, which I used at ING, combined with Terraform and the AWS cloud. But it's also applicable to other clouds. Well, this concludes my presentation. And, yeah, thank you.