Hi, I'm Darian Ford, Senior Director of Software Engineering at Capital One. I've been with the company for about four years, and I've spent a large portion of that time thinking about our container strategy: how we enable our developers to deliver software while meeting governance and regulatory compliance, and how our container platforms can make the developer experience really, really great without bogging developers down with all of that overhead. Today I'm going to share with you some of the lessons we've learned, how we've achieved some of those goals, and some open-source offerings that we are bringing to the greater community. Let's get to it. If you're already here at KubeCon, I probably don't need to tell you that Kubernetes has emerged as the most popular and widely leveraged container management solution in recent years. Efficiently managing Kubernetes at scale, especially at a highly regulated company, may feel daunting, but it can be done. Today I'm going to speak about how. Let me start by telling you a little bit about Capital One. We're a 25-year-old Fortune 100 company that is still founder-led. When you think about banks, you probably don't expect that, and this is one of the top things we hear from people about why they love to work at Capital One. Our founder-led startup mentality can really be felt in everything we do. We're doing things differently than other banks. Banking is ripe for disruption, and we want to be the ones who disrupt the industry. Capital One was founded on the belief that the banking industry would be revolutionized by information and technology. We are known for being a data and tech pioneer in the financial services industry.
Two decades later, our belief is stronger than ever, and we are seeing the relevance of this focus right now in the pandemic as businesses of all sizes grapple with how to engage customers at a distance through digital channels in the same human and personalized ways they did before. When we think about where the future of banking is headed, we start by looking at how technology is changing our lives. The digital revolution has changed how we communicate, connect with people, shop, travel, manage our health, and manage our money. Digital adoption has only accelerated with COVID-19; we've all seen years of digital adoption condensed down to months, or even just weeks. Capital One has spent the last eight years investing in a technology transformation focused on comprehensively reimagining our culture and talent strategy, changing how we work, and modernizing and hardening our technology infrastructure. In September, we exited our legacy data centers and finished moving to the cloud, solidifying our foundation as the bank of the future. In pioneering new standards, tools, and technologies, we adopted an open-source-first approach to software development and infrastructure. The most challenging part of moving to the cloud for a company in a highly regulated industry is building the technology, the governance structure, and a supportive culture. On the technology side, many financial institutions have structural barriers: legacy systems that have been in use for 20 years. Most banks don't have the technology in place to even send their code outside of the company's corporate firewall. From the governance perspective, it takes time to develop a process where code is reviewed by legal, information security, and leadership, along with other stakeholders, without putting much burden on the developers themselves. And culturally, traditional banks need to challenge perceived risks such as loss of intellectual property, competitiveness, or productivity.
A huge part of going all in on the cloud and digital transformation is investing in our tech associates and giving them the tools they need to excel. We believe in our associates, and we believe in continuously investing in them. One way we invest is to constantly encourage continued learning. To that end, we launched a tech college inside Capital One to support associates' ability to sharpen their skills. Our focus areas include software engineering, security, cloud, mobile, data, agile, and machine learning. In addition to hard skills, the tech college provides training around soft skills as well, since they can be just as important. We also have ambitious goals around training and certifications. For example, we have aspirational targets for the number of engineers who will get AWS certifications, and we're giving associates time to study. Over the course of the summer and into the fall, we've had a series of invest-in-yourself days, one day a month with no meetings, allowing time to focus. We've had numerous certifications completed over the past several months as a result of the program. An unforeseen side effect of our gradual approach to the cloud came when we saw duplication of effort. When you really embrace the cloud and tell developers who are organized into independent agile teams to go make stuff happen, you get various iterations on similar solutions, and a large amount of duplication. Many of these teams were used to throwing solutions over the wall for someone else to run, and yet now they had to own things end to end. We quickly realized that a you-build-it-you-own-it mandate and doubling down on DevOps was the key to keeping delivery rapid but well managed at the same time. Re-architecting our process in this way has allowed us to successfully and efficiently deliver solutions in the cloud.
Another best practice I can share in this area is investing in a single common deployment pipeline ecosystem that allows us to build consistent checks and expectations for getting code into production. We built the solution to address a gap in the industry, and we're already seeing the benefits of more efficient delivery. We've applied similar thinking to our container management strategy. In recent years, we've scaled our usage of Kubernetes and steadily increased the number of containerized applications we have running in production. Harkening back to some of our earlier lessons learned from our cloud journey, we knew we wanted to avoid siloed teams creating disparate solutions to manage their own containerized applications. Instead, we decided to standardize on our own set of additional features across the company that would enable our developers to more efficiently develop and deliver business value. This meant running Kubernetes at enterprise scale in a regulated environment. Earlier this year, we commissioned a study with Forrester Consulting about container adoption and usage in the enterprise. We found that compliance is the number one concern among senior leaders using container management platforms. For this reason, it might surprise you to hear that a bank was an early adopter of Kubernetes. We fully embraced the technology in 2016, less than two years after its initial release. Kubernetes is certainly newer and more cutting-edge than, say, the legacy mainframes you might expect a bank to use, but that doesn't make it inherently insecure. For many companies, fear of introducing new risks can make them feel lost when it comes to exploring and adopting new technologies. Compliance and safety are what people really care about across most industries, not just regulated ones. In many cases, understanding all of the factors you need to evaluate as you build a container management platform is a challenge in and of itself.
We found it was helpful to consider our overall cloud strategy; our culture and talent strategy; the state of our cloud resources, such as whether they were over-provisioned; and our application development speed and scaling requirements. Other important components of a successful container management strategy include the supply chain: how your containers are built, where they are stored, and how they are validated; infrastructure and orchestration: where and how your containers run, how they run together, and how they can access each other; runtime security and policy enforcement: how you ensure your containers do what you want and nothing else; and observability: runtime metrics, debugging, audit logging, tracing, everything you need to run your containers in a well-managed way. There are many benefits to a container orchestration platform. Developers can focus on business outcomes. Logging and metric gathering come for free. Compliance and governance are baked in by default. You get testable workloads, better infrastructure utilization, and higher resiliency. Simply put, a container orchestration platform should meet the needs of your developers as well as your enterprise. Developers need a simple, consistent workflow with an intuitive interface that puts deployed applications at their fingertips for quick and efficient management. The platform should be flexible to develop and deploy applications to, while giving developers the freedom to write code without having to manage the underlying infrastructure. Enterprises need compliance and regulatory control, plus the ability to enforce workflow standardization and reduce the technical complexity placed upon developers. When we think about how to meet these goals and achieve those benefits, it leads us to the makeup of the platform and its components. Two of these components we've already open sourced.
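To make the supply-chain piece concrete, here's a minimal sketch of the kind of validation step a platform can run: checking that every container image in a pod spec is pinned by an immutable sha256 digest rather than a mutable tag. This is my own illustration, not Capital One's actual tooling, and all the names and registries in it are hypothetical.

```python
import re

# Matches image references pinned by digest, e.g.
# "registry.example.com/team/app@sha256:<64 hex chars>"
DIGEST_RE = re.compile(r"^[^@\s]+@sha256:[0-9a-f]{64}$")

def unpinned_images(pod_spec: dict) -> list:
    """Return image references in a pod spec that are NOT pinned by digest.

    Mutable tags like ":latest" can silently change between build and
    deploy; digest pinning means the image that was scanned and approved
    is byte-for-byte the image that runs.
    """
    images = [c.get("image", "") for c in pod_spec.get("containers", [])]
    return [img for img in images if not DIGEST_RE.match(img)]

spec = {
    "containers": [
        {"name": "app", "image": "registry.example.com/team/app@sha256:" + "a" * 64},
        {"name": "sidecar", "image": "registry.example.com/team/proxy:latest"},
    ]
}
print(unpinned_images(spec))  # only the mutable ":latest" reference is flagged
```

A check like this can sit in the pipeline or in an admission step, so validation happens without the developer having to think about it.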
Each of these components solved challenges we encountered around the provisioning and resiliency of Kubernetes clusters. crit is a command-line bootstrapping tool for Kubernetes that decouples the management of etcd, enables multiple nodes to be provisioned simultaneously and correctly regardless of the order in which they come up, and is designed to work with the Cluster API. We open sourced crit in September of this year. e2d is a tool for deploying fully automated etcd clusters. etcd, the powerful backend storage that makes software like Kubernetes possible, has historically required a human operator to perform complex administrative tasks, like initial cluster bootstrapping, cluster membership management, data backups, and disaster recovery. e2d solves many of the challenges we came across. crit and e2d are both part of Critical Stack, our enterprise container orchestration platform that runs on top of Kubernetes. Critical Stack enforces common governance and security controls, enabling teams to efficiently scale containerized applications in the strictest of environments. For developers, Critical Stack is full of valuable functionality. StackApps enables packaging of applications in a way that takes into account all of the Kubernetes objects that are needed. This creates an immutable, verifiable package that can be promoted from one environment to another. Having a running platform that is the default location for workloads means we shift left the whole you-build-it-you-own-it portion of things. Swoll provides eBPF-based metrics gathering and tracing for all Kubernetes workloads, enabling engineers to truly understand the impact of their workloads on the underlying infrastructure. There's also a developer UI which provides fingertip access to your namespaces, containers, logs, and traces, plus an editor with templates to help ease the learning curve of Kubernetes. For the enterprise and platform operators, there's a bunch as well.
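To give a feel for the "immutable, verifiable package" idea, here's a small sketch of one way a bundle digest could work; this is an assumption on my part about the general technique, not StackApps' actual format or API.

```python
import hashlib
import json

def package_digest(manifests: list) -> str:
    """Compute a deterministic digest over a bundle of Kubernetes objects.

    Each object is serialized with sorted keys, and the serializations are
    themselves sorted, so neither dict key order nor object order changes
    the result. The same digest can then be re-checked in every
    environment the package is promoted to.
    """
    parts = sorted(json.dumps(m, sort_keys=True) for m in manifests)
    return "sha256:" + hashlib.sha256("\n".join(parts).encode()).hexdigest()

bundle = [
    {"kind": "Deployment", "metadata": {"name": "app"}},
    {"kind": "Service", "metadata": {"name": "app"}},
]

# Reordering the objects does not change the digest...
assert package_digest(bundle) == package_digest(list(reversed(bundle)))
# ...but changing or dropping any object does.
assert package_digest(bundle) != package_digest(bundle[:1])
```

Comparing a digest like this at deploy time is what gives you the promotion guarantee: what was tested in one environment is exactly what runs in the next.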
There's a user interface which provides enhanced RBAC management and SSO integration, along with audit logging by default. The Cilium CNI is installed as part of the Critical Stack platform by default, along with the ability to edit network policies right in the UI. This gives you multiple layers of network filtering capability. There's also a marketplace enabling approved Helm registries and selectively approved charts for simple single-button deployments into namespaces. We are excited to announce today that Critical Stack is open source. We'd love for you to check us out on GitHub and see how we can help you and your teams with safe, scalable, and efficient container management. Thank you for having me here today, and please enjoy the rest of the conference.