Better? Can you hear me? All right, hi, I'm Jason McGee. I'm the CTO for Cloud Platform at IBM, and I think I'm the only non-database guy on the speaking agenda today, so think of this as your seven-minute break from databases. We're at KubeCon, so you're not allowed to go to anything without Kubernetes coming up at some point in the conversation. So I'm going to talk about Kubernetes and the role it plays in running data workloads at IBM, and why I think it's a good platform for the compute side of building hosted data systems.

By way of background, as a public cloud provider, IBM has a large catalog of hosted database services. Adam spoke this morning about the work we're doing around CouchDB, which is the underpinning for our Cloudant service, and its move to FoundationDB. But we also have a rich catalog of other as-a-service database technologies that we make available to clients to run on our cloud and build their applications on.

Now, we're at KubeCon, and part of the reason there are, I think, 13,000 people here this week is that over the last four-ish years, Kubernetes has emerged as the dominant compute platform for building workloads. At IBM, we made a decision back in 2016 to pivot our entire cloud strategy to leverage Kubernetes as our core platform. We started to build an as-a-service Kubernetes offering as part of our cloud, which we released in the middle of 2017. And then in 2018, we made what I think was a somewhat crazy decision: we were going to move all of our cloud services onto Kubernetes, meaning we would take that same service we make available to clients and use it as the hosting platform for running all of the rest of our cloud services, including all of our data workloads. Many people think of platforms like Kubernetes as being predominantly for stateless, non-data systems.
But we decided we were going to move everything there, and so we started a journey to move a lot of databases and data-backed workloads onto Kubernetes. The service we run them on is IBM Cloud Kubernetes Service, an as-a-service, hosted Kubernetes platform. It provides all of the infrastructure abstraction for the services that run on top of it. It provides multi-zone deployment capabilities, so you automatically get high availability within a region. It sets up all the networking and network routing, secures the workloads, and provides the core execution environment we use to run everything. Over the last two and a half years, we've gone from nothing to running over 19,000 Kubernetes clusters in production, running a vast array of workloads, from simple stateless websites to AI and machine learning environments to very large-scale databases. So at this point, if you looked at IBM Cloud, you'd see all of these different kinds of workloads running on Kubernetes.

Now, why do you care? Let's zoom in for a second and just look at data workloads. Over the last couple of years, we've gained all this experience running Kafka for our Event Streams service, running a whole variety of open source database systems, running our machine learning and Watson AI systems, and running our data warehousing systems, all on top of this platform. So I thought I'd spend just two minutes talking about what I think Kubernetes helps with when it comes to running data workloads like that, and about some of the unique needs data workloads have, from the perspective of the team providing the compute infrastructure underneath. On the positive side, there are a few things. Obviously, one of Kubernetes' key values is infrastructure abstraction.
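The multi-zone high availability described here can be sketched as an ordinary Kubernetes manifest. Here is a minimal example, with all names hypothetical, that uses pod anti-affinity to force a replicated database's pods into different zones, so a zone failure takes out at most one replica:

```yaml
# Sketch: spread a replicated database across zones for in-region HA.
# All names (demo-db, image) are hypothetical stand-ins.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      affinity:
        podAntiAffinity:
          # Require each replica to be scheduled in a different zone.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: demo-db
              topologyKey: topology.kubernetes.io/zone
      containers:
        - name: db
          image: postgres:13  # stand-in for any database image
```

On older clusters the zone label was `failure-domain.beta.kubernetes.io/zone`; newer clusters can also express this with `topologySpreadConstraints`.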
Kubernetes isolates you from the physical infrastructure underneath, allows us to more easily provision and leverage that infrastructure, and therefore lets the data service teams focus on the database orchestration and management logic, the control planes they're building, and the core data management problems. It helps automate availability and provides a model for failure recovery, both within a single zone and across zones, along with a networking model that lets us distribute network traffic into those data systems at scale and recover from component failures as requests route through the system. It also provides a common tool chain that lets the development teams build and iterate more quickly on the data systems they're creating. It does a lot for us around security: simple things like vulnerability scanning of images, and ensuring that you can only update running production systems with good code that passes security audits and vulnerability scans, baking that into the tool chains people use so the average developer doesn't have to spend time thinking about those things. And of course it handles scaling, both scaling within Kubernetes and scaling of the physical infrastructure underneath. That's an area where data workloads, depending on what kind of database we're talking about, deal with scaling differently. Having a variety of state management models in Kubernetes, and a variety of autoscaling approaches, vertical, horizontal, and physical infrastructure scaling, allows us to make the right trade-offs about how to apply resources to that environment. Operators are a newer thing: you can think of an operator as a way to extend Kubernetes itself with new APIs that let you manage the software or system running inside of Kubernetes.
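The vertical scaling approach mentioned above can be expressed declaratively. As one sketch, the upstream Vertical Pod Autoscaler add-on (it is not built into core Kubernetes, and the target names below are hypothetical) can grow a database's CPU and memory requests instead of adding instances:

```yaml
# Sketch: let the cluster resize a database's CPU/memory requests
# rather than adding replicas. Requires the VPA add-on to be
# installed; the StatefulSet name is a hypothetical stand-in.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: demo-db-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: demo-db
  updatePolicy:
    updateMode: "Auto"  # recreate pods with updated resource requests
  resourcePolicy:
    containerPolicies:
      - containerName: db
        maxAllowed:   # cap how far the autoscaler can grow the pod
          cpu: "8"
          memory: 32Gi
```

Because `Auto` mode applies new requests by restarting pods, it pairs naturally with the replicated, anti-affinity-spread deployments described earlier.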
And we've done a ton of work in our database teams building a set of operators that manage the life cycle of a variety of database systems running there. You can go to places like operatorhub.io to get access to those operators and have a control plane for managing a database system running inside of Kubernetes. And of course, this let us put a common architecture in place across a variety of database systems. For example, in our IBM Cloud Databases service we're running well over 10,000 database instances, and all of them use the same operations model, the same execution environment, and the same core architecture, whether it's running Mongo or Postgres or Redis or something else.

Now, running data workloads isn't always the same, of course, as running stateless workloads, and there were some unique needs. I won't cover all of them, but three stuck out to me. One was a higher demand for vertical autoscaling: for some database systems, adding more CPU or memory to the running system is an easier approach to scaling than adding more instances. Kubernetes, especially two years ago, had a pretty nascent vertical autoscaling capability that we had to help mature and evolve to handle these kinds of workloads. Dynamic storage provisioning, of course, is really important. And then there were some unique worker flavors: being able to run on bare metal servers and take hypervisors out of the mix to get IO latency out of the path, being able to use GPUs for a lot of machine learning workloads, and being able to have direct access to things like fast local SSDs. Those were capabilities that Kubernetes was able to surface in a consistent way across the infrastructure to help run these data workloads. So the moral of the story is, one, Kubernetes is actually a proven platform for running data workloads.
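For the dynamic storage provisioning point, the standard Kubernetes mechanism is a PersistentVolumeClaim template that references a StorageClass; the class name below is hypothetical and would map to whatever block or local-SSD provisioner the cloud exposes:

```yaml
# Sketch: each StatefulSet replica gets its own dynamically
# provisioned volume. The storage class name is hypothetical.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: postgres:13  # stand-in for any database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: block-ssd  # hypothetical dynamically provisioned class
        resources:
          requests:
            storage: 100Gi
```

When a replica is created, Kubernetes provisions a fresh volume from the class and binds it to that pod; the volume then follows the pod identity across rescheduling, which is what makes this model workable for databases.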
We, and many other people in the industry, are doing this at scale with mission-critical production data systems. And I think it's the perfect platform for a data service developer, because it lets them focus on building the core database system and not on the mechanics of running compute. Thank you very much.