Hello, Virtual KubeCon 2020. I'm Liam Randall, Vice President of Open Source and Emerging Technology at Capital One Bank. Today, I'm here to talk to you about technology transformation and container orchestration in the cloud.

Now, you've probably heard of Capital One before. We're a 25-year-old, founder-led Fortune 100 company with more than 70 million customers and over 50,000 associates, with major operations in 15 US cities, Canada, and the United Kingdom. But you may not know the entire story behind Capital One. Everything we do starts with one simple question: what is the experience that our customers desire? We know that if we can fully embrace that and work backwards from there, we're starting off on the path to success. When we think about what our customers want, they want 24/7, 365, mobile-first applications. They want full-service features, everything they could do in real life, but on the go. They want their data real-time and interactive. They want it personalized, and we're at the dawn of an age where they want it proactive, enriched with intelligence, insight, and recommendations from artificial intelligence and machine learning.

Now, we've spent the last eight years investing in technology transformations focused on comprehensively reimagining our talent and culture, how we work, and our technology infrastructure. When we think about where we work: six years ago, we first started to move to the cloud and exit our legacy data centers. We've spent the last six years building the foundation for the bank of the future in the cloud. This year, Capital One officially exited our last three data centers, and on September 28th, 2020, we became the first U.S. financial institution to exit legacy data centers and go all in on the public cloud. Now, when you think about why cloud, it's not just about how we account for expenditures. For us, it's about agility.
It's about instantly provisioning infrastructure at limitless scale to allow our associates to quickly experiment. Being cloud native isn't just about technology; it's about scalability, being innovative, and increasing our speed and time to market.

Now, a key part of our story is the who. As we began to think deeply about our technology transformation, Capital One recognized early that the winners in banking in the future will be great technology companies with the risk management skills of a leading bank. And with that underlying assumption, we've sought to completely redefine who we are as a company. Over the last eight years, we've comprehensively reimagined our talent, doubling our technology teams to over 11,000 associates, 85% of whom are engineers. And we've gotten them to work building modern technology infrastructure, using modern standards like RESTful APIs and microservices, and rearchitecting our data environment to build the foundation for machine learning. And how these associates work is part of the story, because it's not just an engineering transformation. At Capital One, we know that the best experiences are created through the collaboration of a diverse and inclusive workforce. So in addition to the engineers, we're also investing heavily in product managers, data scientists, and designers. And we've gone beyond just bringing outside talent into our organization. We support and invest heavily in our associates, holding a high bar for talent and ensuring that everyone has both the access to and the allocated time for investing in themselves to sharpen the saw. As an organization, we continue to grow and develop all of our associates. Central to this effort is our own internal tech college, where we provide ongoing training and key certifications for areas of skill focus.

Now, with the right talent in place, when we think about modernizing how we work, we've had to consider how we get people together.
We've moved to agile processes across the entire company, not just technology, co-locating teams so that engineers, product managers, data scientists, and designers are integrated directly into the businesses. We're embedding machine learning across the company, from our call center operations to back office processes, fraud, security, and our digital experiences. And in 2014, we made the declaration to be an open source first company.

Now, you've probably become familiar with a couple of our open source projects. In 2015, we launched one called Hygieia, a DevOps dashboard that enables visualization of DevOps metrics. Hygieia has been adopted by over 160 companies worldwide, including Verizon and Walmart. And in 2016, we launched the phenomenally successful Cloud Custodian, which brings automated governance, compliance, and cost optimization to cloud native environments. Now, more than half of Cloud Custodian's contributions come from outside of Capital One, from engineers working at companies like Amazon, Microsoft, and dozens of others. In August, we successfully donated Cloud Custodian to the Cloud Native Computing Foundation to continue accelerating adoption, build a community, establish standards, and give back to the open source community. Today, I'm proud to share that our in-house Kubernetes distribution, Critical Stack, is going open source. But more on that in a minute, because it's not just about these three particular projects. When we consider that our developers are actively involved in over a hundred different open source projects today, and that we've released an additional 25 other projects with more on the way, there's more to the story here. When you think about why we're so all in on open source and an open source first company: as our technology transformation has progressed, we began to not only consume but contribute to open source software, for a variety of reasons.
To begin with, we recognize that some of the best software in the world is built with the contributions, perspectives, and thoughts of diverse communities: people with different use cases, organizations with different priorities. We want to collaborate on maintenance and bug fixes. We want to crowdsource features and ideas. And most of all, we want to drive widespread adoption of these products and encourage people to commit to standards. It's not just about marketing and community leadership for us; it's also a key way that we recruit and retain talent in our organization.

Now, back to the big open source announcement for today. When we think about what Critical Stack is: it was born as an outside company that we acquired in 2016. I was actually the founder of Critical Stack in 2014, and launched it about 30 days after Kubernetes was released in July of 2014. And as the architecture and landscape have increasingly moved towards microservices, we've started to converge internally on the power and opportunity of Kubernetes. Technologists all across our organization are adopting containerized workloads for standardized application deployments in the cloud. Critical Stack is a container orchestration platform that's built on top of Kubernetes. Like any distribution, it builds upon and includes powerful open source components from across the CNCF landscape. But ultimately, it includes the capabilities that are helpful for implementing common governance and security controls, enabling teams to efficiently scale containerized applications in enterprise environments.

So let's dive in on what some of these features are. Let's start with the intuitive interface. The design goal here is to help developers with Kubernetes so that they can make progress in their lives without having to be Kubernetes experts themselves. Next, we have Crit, which is not just the start of Critical Stack.
It plays a critical role in developing, deploying, and customizing your distribution. It was purpose-built for rapid and complex scripting and cluster customization, and it includes a powerful desktop deployment manager called Cinder that we'll demo in just a minute. Next, we're contributing e2d, which extends etcd to include critical enterprise features, like backup and recovery capabilities. And finally, we'll do a preview of Syswall, a container- and namespace-aware, experimental, and performant eBPF-based metrics and security collection platform. So in summary, Critical Stack allows developers to make progress in their lives with Kubernetes without having to be Kubernetes experts themselves. And it enables enterprises to adopt modern technology with the governance controls already included.

Now, next, we're gonna go ahead and do a demo of Critical Stack. To get us started here, what I've done is I've already executed Cinder here on my laptop with these two commands. The first thing I did was have Cinder deploy a master Kubernetes node, which is preconfigured with a number of popular open source CNCF solutions and integrations for Kubernetes. And then I had Cinder go ahead and add a worker node to this cluster. Let's see what it looks like. Okay, so let's just draw a quick sketch so we understand what we just deployed. Here today, I'm running on my local Mac, and this is my computer; it could be a Linux or perhaps a Windows PC. Now, Docker on the Mac works by actually deploying a virtual machine layer and then running Docker on top of it, so this is provisioned and set up for us; I just have Docker installed here locally. The first command actually deployed a master node for us, so that's our first virtual Kubernetes server. And the second command deployed a sample worker node for us. Now you can scale these resources to whatever you'd like, for whatever sort of testing you want to do.
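As a rough sketch, the two Cinder commands described in the demo setup might look something like this; the exact subcommand names and flags here are assumptions, so check the Cinder documentation in the Critical Stack repositories for the real syntax:

```shell
# Hypothetical sketch of the local Cinder workflow described above.
# Subcommand names and flags are assumptions, not confirmed syntax.

# 1. Deploy a preconfigured master (control-plane) node as a container
#    inside the local Docker VM.
cinder create cluster

# 2. Add a sample worker node to that same local cluster.
cinder create node
```

Because both nodes run as containers on local Docker, you can create and tear down test clusters on a laptop without paying for remote infrastructure.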
But the key thing is that while all this is running, what Cinder has configured for us are two different interfaces to help us make progress here. The first is a simple command shell, where I can query this Kubernetes instance just like I would a remote instance for testing; think of the cost savings that this can bring to your organization for development purposes. I could type kubectl get nodes. And the second thing it does is set up our port forwarding, so that we can have nice browser access to our cluster. Let's go ahead and demo that next.

Okay, so this is on my local machine. I'm running a Mac today, inside of my local Docker here. So inside of the VM on the Mac, in Docker, I can talk to my Kubernetes server. There's a kubeconfig you can download directly, and Cinder will set this up for you. So let's go ahead and run kubectl, get the nodes, and see what this infrastructure looks like. You can see we have two nodes here; they're both up, and it's version 1.18.5 of Kubernetes. Now let's take a look at the pods that are scheduled here. You can see that there are some system service pods, with the Cilium CNI installed and configured; there's the Kubernetes API server, and then there's CoreDNS as well as some other things.

Let's take a look at this infrastructure through a different facet. Let's log into the UI here and take a look at that. Now, you can see right away when I log in that there are some different focus areas for the cluster. Critical Stack is designed to help developers make progress in their lives without having to be experts, but it includes a lot of the common integrations that enterprises need to be successful with Kubernetes, such as SSO, role-based access controls, enterprise logging integrations, and so forth.
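The shell portion of the demo above can be reproduced with standard kubectl commands, assuming Cinder (or any other tool) has already set up a kubeconfig pointing at the local cluster; the outputs noted in comments describe what the demo showed, not guaranteed results:

```shell
# Query the local Cinder-provisioned cluster just like a remote one.
# Assumes a kubeconfig for this cluster is already in place.

kubectl get nodes
# In the demo: two nodes, both Ready, running Kubernetes v1.18.5.

kubectl get pods --all-namespaces
# In the demo: system pods including the Cilium CNI agents,
# the kube-apiserver, and CoreDNS, among others.
```
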
So we can see here that we have two Critical Stack nodes up and running, and that I right away have access to a variety of different namespaces. Now, if you're not too familiar with namespaces, think of a namespace as a virtual partition for Kubernetes: a different logical area where we can organize things. So right now, being in the critical-stack namespace, what we're seeing here are some of the containers running on Critical Stack that help Kubernetes function properly. So let's change over to a different namespace and go ahead and deploy something. Now, there are lots of ways that you can deploy things inside of Critical Stack, and I really encourage you to take some time to play with this powerful tool. But I can simply just type Redis and ask Critical Stack to go ahead and configure and deploy a Redis for me. And you can see that it went out, pulled down the container, found an available node for it here locally (and this is the same way that it would run in the cloud or in your own infrastructure), and it provisioned it, and it's working here. Now, as a developer, I may wanna troubleshoot, query, or inspect this container. Just by clicking on it, I'm brought right into a management interface that lets me not only interact with my workload, but also look at logs, see what cluster events are associated with this particular container, and get a preview of metrics.

There's a lot more to Critical Stack, and I really look forward to sharing more of this in the future. I really encourage you to follow along with us on GitHub, at GitHub forward slash Capital One, or, as always, you can find us at Capital One forward slash open source. KubeCon 2020, thank you so much for your time today. I hope you all have a wonderful and safe virtual KubeCon.