Oh, my goodness. Last session of the day, I hope all of you have gotten coffee and are warmed up. I bet you've seen a lot of YAML today. All right, well, let's get started. So good afternoon. My name is Don McCaslin. I'm a development platform engineer at Google Cloud. I'm focused on CI/CD and development tooling. Today, I'm filling in for Manisha on the first part of the talk. So there'll be a little bit of reading. Don't worry about it. Today, I'd like to talk about building a software delivery platform on top of Anthos and GitLab, but first, a short introduction to GCP and DevOps. By now, it's a de facto standard that every business is a technology business. And the kind of technology that drives the most business value is software. In fact, we're working at a time when software is defining the competitive landscape of business. And so the speed at which software is delivered is a determinant of the speed at which business value gets realized. DevOps is about improving how we make software. This is the official definition of DevOps from the GCP website. In short, it's an organizational and cultural movement that helps us deliver value to customers in the form of software, faster and more reliably. Or to put it another way, DevOps makes your business better. Many here may already be familiar with the work of the DevOps Research and Assessment group, or DORA. Cool people. DORA is a research group that's been iterating for over five years to understand how DevOps works in the real world. Recently, the DORA team joined Google Cloud. And it's been personally really great for me to learn from them what makes teams successful and to share that knowledge with the community. Their research shows a whole lot of things. But in particular, there are four key metrics. The first is lead time: how long does it take for a code change to actually be delivered to users? Second, what's the deployment frequency, the interval between releases?
Are we measuring velocity in deployments per year, per month, per day? Third, how often do deployments fail and need to be remediated? And fourth, how long does it take on average to make those fixes? What's the mean time to repair? Now, according to the report, there's a huge spread between high performers and low performers in DevOps. The really high performers, we term them the elites, are way ahead of the pack on this. You may have seen these numbers before. I've seen them lots, and I'm always stunned. These are really big numbers. DORA has shown that teams who score well on these measures use technology to deliver better business outcomes. So I think of DevOps as an intersection of three components: tooling, process, and culture. Y'all have been talking about tooling a lot today. Tooling: the DevOps landscape consists of a vast number of cloud-native technologies. Process: DevOps business process incorporates things like systems thinking, shift-left feedback loops, continuous integration and deployment, and continuous monitoring. And culture: DevOps culture is centered around collaboration and teamwork. Over the past 10 to 20 years, organizations of all kinds, startups, enterprises, even the public sector, have embraced the open-source exchange of ideas. We learned from experience that when we solve common problems together, we all benefit. We all get stronger common foundations. And that gives us more time to focus on building our differentiators. And in the spirit of open-source collaboration, we announced the Continuous Delivery Foundation early last year, in collaboration with a number of industry leaders in this space. It's a sister foundation to the CNCF, and Google is a founding member. This foundation is a place to collaborate on best practices, guidelines, and tools for the next generation of CI/CD systems. In DevOps, you could divide the landscape up into five high-level phases, aligned to a generic application lifecycle:
Collaborate, build, test, deploy, and run. At GCP, we have end-to-end tooling to facilitate cloud-native CI/CD with applications such as Cloud Code, Cloud Source Repositories, Cloud Build, Container Registry, and some others. These are tools built in-house for a GCP-native experience. But through our technology partnerships, we also work with the best-of-breed solution vendors that operate in the space to provide our customers the optionality of working with their existing or favorite tooling. Specifically, since we're here today, I'd like to briefly touch on our partnership with GitLab. GitLab has built several integrations in partnership with Google to automate source-to-deploy workflows and land code on GCP services, such as Google Compute Engine. And through its Kubernetes integrations, GitLab has also automated source-to-deploy workflows to land code on GKE, as well as GKE On-Prem. GitLab's CI/CD workflow can also build, test, and deploy applications managed by Cloud Run, which is the GCP serverless platform. These GitLab solutions are all available for use by customers as listings in the GCP Marketplace, which brings us to our latest engagement on Anthos. Anthos is a modernization platform. It's meant for you to develop applications once and run them anywhere. It can run in your on-premises environment. It can run in Google Cloud, and also in other clouds or at the edge. It's software-based. There's no hardware requirement. Infrastructure is abstracted away, so you can focus on building apps and not on managing infrastructure. Anthos is built on open-source platforms, like Kubernetes and Istio, and provides a consistent substrate across heterogeneous environments. So the same software runs on GCP, in your data center, even on another cloud, all with a unified control plane. And this is really interesting when it comes to CI/CD.
For example, you can do your builds in the cloud, scale capacity up and down for those bursty workloads, and then deploy to production servers that run in your closet or in other clouds. And that's a great introduction to what we want to show you today: a software delivery platform built on top of Anthos and our favorite, GitLab. But first, I'd like to thank a few people who couldn't be here today and who poured tons of engineering hours into this demo. Vik, Amir, and Steve have done incredible things with this technology. And we look forward to what they cook up next. Our agenda today is simple. We'll cover why we're doing it, what we're trying to do, and how we did it. With any luck, today we will push to production. Hope I didn't jinx myself just now. So why should you build a platform? I think you've actually been exploring that a lot today. A platform is an abstraction that allows your developers to focus on developing while providing the operations folks the ability to iterate on the underlying layers. It masks the complexity of the infrastructure and deployment from developers. Devs don't need to change their workflow if the infrastructure changes. Ops want to tweak and change and optimize the infra. Many of us have seen this. You start a project. It begins simple. Developers iterate on desktop environments. They share code with version control. Maybe you've built out some different environments in your local data center. Operators are managing the infrastructure with a combination of manual steps, SSH, and orchestration tools, Puppet, or whatever. And maybe you home some of your application parts in the cloud. But now you've got a multi-environment development workflow. And it works. But it takes a lot more effort than it should. And your teams aren't able to change as nimbly as they could because big changes have to be ported across multiple environments.
And frankly, everything you do takes more effort because everyone is having to tightly coordinate across multiple domains of expertise: developers, operators, and security. So you say to yourself, we've had platforms as a service for years. Why don't we just use one of those? Well, it turns out that many people end up in a Goldilocks situation. In 2008, we launched App Engine. And it was amazing for people to easily build and deploy their apps without thinking about scaling and managing ops. But for enterprises, it was too opinionated to fit in their environments. You couldn't write to the file system. You couldn't manage memory, compute, anything. So we launched Compute Engine, which worked for enterprises to start moving workloads with abstractions they were comfortable with. But still, that was a lot of work. And it was too hard to manage. So in 2015, we launched Kubernetes, which is a good in-between. And then people started building platforms on top of Kubernetes. This is the truth of Kubernetes: you don't want your developers becoming Kubernetes experts. Also, by the way, we always invoke Kelsey Hightower's name. I don't know if you all know this fella. We invoke his name in order to ensure a smooth demo. So how can you build a platform on Anthos and GitLab? All right. You're familiar with the stack. At the bottom, there's the infrastructure: networking, compute, storage. On the top, there's your application. In the middle, you build your platform, the abstraction layer. That's version control, deployment management, CI/CD, secrets. And a lot of times, it looks a lot like this. You've got your layers: your users using the app layer, the devs iterating. And in our world, hopefully, Anthos is managing your infrastructure. And Anthos offers a lot of things here. So you can centrally manage. You get a single API for workload management. And you can share methodologies across teams. We're going to go into that more later in the demo.
And you can deploy in multiple environments. So we care about many personas in the development stack. But we're going to focus on three today: developers, security, and operators. They each have individual responsibilities and different requirements. And these are the components of a software delivery platform. These are not all the components, but they're the majority. So developers need app observability, version control, config management, continuous integration, tests somewhere in there. There's continuous delivery, which is a shared platform for operators and developers. Policy management, container registry, orchestration, and infra monitoring are all used by security and operators. And these are the components that we chose for the stack today. You'll notice GitLab in several of those locations. It could probably even operate in a few more, but this is what we've done today. So we're using Anthos Config Management. It's based on Git, Kubernetes, all the good things. And it's aimed at making your security and operations teams happy. By default, the apply loop runs every 15 seconds. ACM recognizes the objects it manages thanks to annotations. And you can start or stop managing objects with ACM by adding or removing these annotations. So let's talk about how we're organizing our application and resources. The things to note here are that we're heavily using namespaces to differentiate our deployments, and we're homing a GitLab CI runner in each cluster to do the deployments that we need. This means that our CI tools only need credentials scoped specifically to the clusters themselves. And we're heavily relying on infrastructure as code. Common theme for today, it seems. Here you see the application source code on the left, along with CI configuration and base Kubernetes config. And then it goes through build steps that hydrate the manifests using Kustomize. And those results get committed with a merge request into the environments repo.
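The hydration step just described can be sketched with a Kustomize overlay like this. The directory layout and the patch file name are illustrative, not the demo's actual repo structure:

```yaml
# overlays/staging/kustomization.yaml (illustrative path)
# Pulls in the shared base manifests and layers staging-specific
# patches on top. Running `kustomize build overlays/staging` emits
# the hydrated manifests that get committed to the environments repo.
resources:
  - ../../base
patchesStrategicMerge:
  - replica-count.yaml   # e.g. run fewer replicas in staging
```

The CI job just runs `kustomize build` per environment and commits the output, so the environments repo always holds fully rendered, reviewable YAML.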
CD tools automatically pick those up and apply them to staging and production. We're using Kustomize for the Kubernetes manifest hydration. Kustomize is different from a templating system. It's a patching system. And you can think of it this way. Docker has the concept of file system overlays. You start with a base image at the bottom, in red, and then you apply changes layer over layer, using different images, until you get all the way up to your last change, with a container layer at the top. And that's your working layer. Kustomize is the same thing with Kubernetes manifests. We have the shared base layer at the bottom, and then we layer in security patches and finally production or staging patches at the top. Dun-dun-ah. I think we just switched to demo time. There was a slide. It said demo time, but we got switched over. This is great. So let's do a demo real fast. Let's assume your developers want to start working on a new application. We built a CLI. A lot of people build CLIs to make life easier. So let's start a new application, except let's call it Hello GitLab 2020. OK, good. So encapsulating your application setup in a CLI is a great way to make sure we're repeatable. I did. Thank you. Props to the audience. OK, anyway, so it's a great way of encapsulating. Hello GitLab Commit. We'll go with the long name. All right, let's see what it's doing. Let's go to our groups. So it starts out by making a GitLab group. Groups are like projects or organizations in other repo management systems. Here we have a group for your new application, Hello GitLab Commit 2020. Notice that we've got two repos, one for application code here, one for the environment. Let's look closer. The repo is pre-populated with everything that we're going to need. It's got a Dockerfile. It's got a templated main.go that doesn't do anything right now, but later it will. It's even got a .gitlab-ci.yml. This is awesome.
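A pre-populated CI file like that can be little more than a pointer at a shared pipeline definition plus a few per-project variables. The project path, template file name, and variable below are assumptions for illustration; `include: project` is real GitLab CI syntax:

```yaml
# .gitlab-ci.yml scaffolded into the application repo (paths hypothetical)
# All of the real pipeline logic lives in the shared template, so the
# platform team can evolve it without touching every app repo.
include:
  - project: 'platform-admins/shared-ci-cd'
    ref: master
    file: '/templates/golang.gitlab-ci.yml'

variables:
  APP_NAME: hello-gitlab-commit-2020
```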
You'll notice that it's relying on pre-populated and shared templates. And it's even got a skaffold.yaml for developers who want to do local development. Let's go look at the other repo. We've got a GitLab environment repo. And this is the home for our hydrated manifests, eventually. We'll show how that works shortly. But for now, let's go back to the CLI. So it's asking for credentials to be generated in our repo right now. And that gives us a little URL to load up. So we'll go over here. We'll load up the URL. And it has instructions. I know what the instructions are. So we're just going to rock through it: set up a deploy token that only has the read_registry scope, create the deploy token. We'll grab the password. We'll tell it what we did. All right, we'll see what it does. It fixes up some runner tags. We've got a pipeline. Yay. Way to go, GitLab. All right, so let's go look at our first GitLab pipeline. And it's running. You know what? We'll come back to, oh, it's almost done. So here, you can actually see the jobs. First, we tested. Then we built. We hydrated the manifests. This is all cool. And then, at the very end, we're pushing, hasn't started yet. We'll come back to that. We're pushing the manifests into the repos. So I tell you what, why don't we, oh, now that it's done, it's succeeded. Let's actually see what it did over here in hello-gitlab-commit-2020. And you're like, oh, hey, where are my manifests? Well, we're using branches. So the staging branch goes to the staging environment. And here, you can see the fully hydrated manifests, really straightforward, easy stuff. And we can see a pipeline that kicked off to push it, to deploy it. So we should have staging now. Let's take a moment real quickly and look back at the other groups. So there's a group here, PlatformAdmins, that was already there, that had the templates that we cared about. So there's the Golang template. You've already seen this.
This is copied directly into the starter repo for our developers. The Golang template env repo, same thing. We have the shared Kustomize bases. This is where developers and operators can actually work together on the deployment YAMLs. Each team can share a part of the YAML files and make changes. And then we've got the shared CI/CD. So different teams will probably draw the line at different places between where the shared part of their CI definition ends and where the customized part begins. But in this case, we put almost all of the logic straight into the shared CI. And then Anthos Config Management has its own pipeline. Here we can look into namespaces and the managed apps. And we can see all of the apps that we are currently configured to build out. And you can see the resulting workloads here in our Kubernetes cluster. Here we've got the old app. Here we've got the new app. We've got GitLab runners running inside the new app. And then I think that's the tale of that. So the last thing we've got to do is get into production with a handy-dandy merge request. So in this case, we would merge request from staging: the latest fix, push to production. Submit the merge request, and it will merge. And let's also be good people and delete the source branch. CI/CD pipelines, pending. And you'll notice here actually that we're deploying to two clusters. All right, while this is going, we'll ask: what did we see here? So we've got a complete software delivery platform that allows developers, operators, and security personnel to iterate over their individual areas of concern, all in code, all with the potential of code reviews, linters, all the good stuff. And this was built using Anthos Config Management, GKE, and GitLab filling in for several of the rows. This is just one way to do it. You've probably seen several already today.
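As a sketch of the ACM side of all this, the managed namespaces we looked at in the demo carry an annotation that marks them as under Anthos Config Management's control. The namespace name here is hypothetical:

```yaml
# A namespace under Anthos Config Management. The `managed` annotation
# is how ACM recognizes objects it owns; removing the annotation tells
# ACM to stop managing the object, as described earlier.
apiVersion: v1
kind: Namespace
metadata:
  name: hello-gitlab-commit-2020
  annotations:
    configmanagement.gke.io/managed: enabled
```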
But the point here is that now that we have the tools for managing infrastructure and policies, building a software delivery platform is easier than it's ever been. And thank you for being here. And the job succeeded, so thank you to the demo spirits. [In response to an audience question about open-sourcing:] Not yet, but it will be before long, actually. We are going to release this as open source. All right, thanks, everybody.