Thank you. I'm Tim. I work at Google. I spend most of my time working on Kubernetes, GKE, Anthos, and related projects there. Recently, I've been spending a lot of my time thinking about multi-cluster, multi-cloud, multi-environment sorts of use cases: hybrid, multiple clouds, those sorts of things. I don't think I need to sell anybody on why cloud matters, I hope. There are a lot of reasons why people end up in a multi-cloud or a hybrid situation. Just to touch on a few: availability is a real thing. It's 2020, your application shouldn't go down. It's sort of not cool anymore. Scheduled maintenance isn't really a thing. So you really do want availability for your applications. And availability really means replication, geographic distribution, multiple regions, multiple zones. And one of the ways people get that is by moving to multiple clouds. Another related topic there is locality. We're in a worldwide internet era. If your application is serving traffic in Asia, there's no excuse for you to be hitting a back end in the US. Likewise, risk. Cloud is a risk. A lot of you here are enterprise users who are moving into cloud or have moved into cloud. And if you're in charge of that decision, this is a real risk to your business. What happens if the cloud goes down? Or what happens if the cloud decides to change their pricing model, or end-of-life a product that you depend on? Having multiple clouds is a really strong way of mitigating risk for yourself. Likewise, isolation. Many people end up with multiple clusters or multiple environments so that they can prevent cross-contamination. They want to limit their blast radius: make sure that, God forbid, if you fat-finger your update, you only take down some portion of your applications. Also, people have this legacy stuff, these things they built before the cloud native era. And the last one that comes up a lot is acquisitions. You pick up a smaller company.
They're already using Amazon and you're in Google, or they're using on-prem and you're in cloud, or vice versa. Bridging these environments together is becoming increasingly important. Gartner estimates that by 2021, that's next year, 75% of mid- to large-sized organizations will have adopted multi-cloud or hybrid as a strategy. If you do that without a solid plan, you are going to increase your operational costs. You're going to stall your development. You are not going to shorten your cycles; you're going to lengthen your cycles, to borrow from Sid. What do I mean by that, really? Look, moving to cloud is hard. And it can be really, really hard, depending on what your applications are about. Moving to multiple clouds just amplifies that. And yet, this is what is happening, right? 75% by next year. The reasons to move to multiple clouds or to multiple environments are so compelling that people have decided to do it regardless of the risk. So let's dig into the risk of actually using those multiple clouds. What does it mean? Every cloud is different. There's this noise that you have to experience when you load two clouds up. Anybody here ever loaded up multiple cloud consoles and tried to do the same thing in the UIs, right? Or used the APIs, found the same concepts, and mapped them together between clouds? Holy cow, it's a nightmare. From infrastructure as a service, to functions as a service, to platforms as a service, to now containers as a service, there are no standards. Every cloud is different, and all these differences manifest in many ways. There's the UIs and the CLIs, there's the tooling, there's the APIs, and just API style, right? Some are declarative, some are imperative, some are REST, some are gRPC. Some of these things are very deeply rooted, right? In the terminology, the way you use the clouds, the concepts that the clouds offer, the capabilities, the product portfolio, how you do identity between clouds, auth, account management. It goes on and on.
These things are a huge deal. And it's actually worse than that, because all of those differences manifest across the whole product portfolio. So you wanna use VMs, compute, right? Well, what you can do in Amazon is different from what's possible in Google, which is different from what's available in Azure or your on-prem. Networking? Holy cow, I spend most of my time staring at networking problems. Networking between the different clouds is tedious. Storage, load balancing, autoscaling, life cycle management: all of the different capabilities of the clouds are completely different. Now, of course, this impacts how you build your applications and what you can build. The logical conclusion to these problems is abstraction, right? So you start building up abstractions for your company. You wanna hide all of these details. Often, this means you now have a lowest-common-denominator API, right? Whatever capabilities are common across all the platforms, well, that's what you're gonna expose to your staff. Now you have this tower of bespoke abstraction. And I'm gonna tell you, from experience, these abstraction towers are harder to maintain than you think they're going to be. They're always more costly. They require more staff, more time. There are more sneaky, hidden little bugs than you can possibly imagine. And truthfully, as a software engineer, good abstractions are really hard. And, oh, don't forget your on-prem environment, which doesn't have any of these API controls, right? So you're forced to trade off. You lose capabilities in order to get your abstractions. You lose fungibility: you can't move things between clouds the way you want to. You end up with high maintenance costs, high development costs, long cycle times as you move between clouds and try to bring new things online. And just to make that even worse, now that you've worked with these individual clouds, you've gotten locked in accidentally. You didn't do it on purpose.
You didn't really sign up for this; you just accidentally got locked in. You used some facet of your cloud provider that doesn't work on some other cloud provider, and now you're stuck, because you can either go back and spend engineer-years retooling your abstractions and all the things built on top of those abstractions, or you can just live with it. Some features just don't exist in other clouds. Lowest-common-denominator APIs kind of stink. As someone who wrote one, I'll tell you, nobody's really happy with them. And sometimes the differences between the providers are so subtle you don't see them until way after it's too late. Now, to be fair, sometimes lock-in is okay. Sometimes you strategically say, I'm gonna use this thing that only one cloud provider offers because it's really important to me. But I want that to be your choice. It can be a calculated move, and you should leverage it when it makes sense, but you should do it consciously. Accidental lock-in is really the devil. So now you're building abstractions, and you're building up, and you've got a staff of people, and you're teaching them these abstractions. You've got to train your dev and your ops staff. Every cloud has its own certification programs, and there's a reason they have those certification programs: it's damn hard to become a master of those clouds. So now you have to build your own training, your own documentation, your own certification program for your own abstractions so that your staff can become experts on them. And heaven forbid there's a leaky abstraction, and all abstractions are leaky. If you leak a detail from a cloud provider up into your API, now those users are forced to debug down through your API and into the cloud provider. This is really not a great story I'm selling here.
This can take months, years, to build out your own custom platform as a service. You're figuring out your best practices, you're figuring out your patterns, you're figuring out what not to do. Frankly, you're gonna make mistakes, and you have to propagate that information out to your dev and your ops teams. And now you get to train your entire staff, right? Maybe you have 10 people on your staff, and it's not such a big deal. Or maybe you have 10,000 people on your staff, and oh my goodness, you gotta get to work on this. I call this craziness, right? This is nuts, and you're doing it by yourself, because these are bespoke abstractions. What we're really lacking here, the reason I'm here to talk today, something that's very dear to me, is consistency. You don't wanna train your teams on all those different clouds, and you don't wanna customize your applications, so you wanna build these abstractions. Now, I hear you say: didn't I just tell you that abstractions are this terrible thing, and you really ought not do that? No, what I actually said was, they're really hard. They're hard to get right, and they're expensive, especially when you're doing them on your own. That doesn't mean you don't want them, and that doesn't mean you don't need them, but you probably don't wanna be doing it all by yourself. So what you really want, what you're looking for, is standard abstractions: things that other people are using, that work across the industry. We heard a talk this morning on how open standards are being developed using GitLab. Super interesting. You want to know that you're not in this alone, right? You want abstractions that have broad support from multiple environments, multiple clouds, with deep integration that works really well with those clouds.
Ideally, the clouds themselves are participating in those abstractions and in their development. You want abstractions which extend easily, which give you the right hooks to customize them for your own environment, to tap into the events that are happening, so you can pop the seal and do those cloud-provider-specific things without going in and changing all the abstractions. You want examples and documentation. You need to be able to hit Stack Overflow and find the answers to the most common questions. You want layers that make sense, that don't succumb to lowest-common-denominator syndrome. And you want to be able to hire people who know them. Maybe this is the most important point here. Chris made a joke about seven years of Kubernetes experience, but Kubernetes has only been around since 2013. I just lost my notes. Oh, there we go. Kubernetes has only been around since 2013, right? So literally, you can't have seven years of experience. In 2013, there were four people who had experience with it. So it's difficult to hire people who have this sort of industry-leading experience. What I'm really talking about here is the opposite of bespoke, right? It's really standard. Enter Kubernetes. So this is more or less Kubernetes, and I'm sure I was pretty transparent already; if not, maybe my jacket or my shirt or my introduction gave it away. Does anybody not know what Kubernetes is? It's okay. Wow, one person, excellent. Kubernetes is a platform for building and running workloads, for managing containerized applications. It is a highly portable system that runs across different clouds and has abstractions for the most common sorts of patterns in workloads. I'd be happy to go on for hours and hours about what Kubernetes is and show off the demos. If anybody wants to talk afterwards, I'll be around all day. Kubernetes is that abstraction.
Kubernetes has most of the properties that I talked about before, kind of on purpose. It's built by and for the clouds. Google works on it, Amazon works on it, Azure works on it; the Chinese clouds, Alibaba and Tencent, they're all working on Kubernetes. It is supported by all the major clouds, even with managed offerings. You can go to any of the major North American clouds, Amazon, Azure, Google, IBM: they all have managed Kubernetes offerings, so they can take away some of the pain for you. It is used in every major industry sector. At KubeCon this year in San Diego, there was a demonstration of how they got Kubernetes clusters running on an F-16, an in-flight fighter jet, with a cluster of Kubernetes nodes. Let that sink in for just a second. That was cool. I thought that was pretty exciting. It is literally running the accelerator at CERN, right? They're talking about putting it on the space station now. It's being proven in pretty much every industry sector. It's not just for video games anymore. It is also proven to scale. We already support more than 5,000 nodes, and there have been a few blog posts by people pushing it up to 10,000 and even 15,000 nodes. And it is highly extensible by design. We've built a lot of extensibility mechanisms and hooks and dynamic registration and plug-ins into the system so that you can tweak it and bend it to your needs. Kubernetes was built on foundational ideas that came from Google. Before I worked on Kubernetes, I worked on Google's internal system called Borg and its successor called Omega. When Kubernetes came around and Docker landed, we took some of those ideas and we went public with them. We started talking about what we were doing and why, and it really resonated. Those ideas have come forth now in the form of Kubernetes. Some people joke that it's Borg for everybody else.
Actually, I had a crisis a few years back when I thought, what am I gonna do when I leave Google and I don't have Borg anymore? Like, I don't think I can cope with this. Now I don't have to worry about it. Kubernetes is driven by a community. It is not a Google project anymore and the theme this week or today is community. So Kubernetes is a community project. It is driven by thousands and thousands of developers from every time zone, from every continent, except Antarctica, if you're out there, let me know. We cut a release every quarter. We're not quite as good as every month, but we've cut 18 straight releases for 18 straight quarters. We're cutting another one in a couple of weeks. It is constantly growing, new functionality, handling new use cases. It is handling new customers, new scenarios, new installations. Cloud support is growing, strengthening, deepening. This is how you can bridge that multi-cloud problem. Now, you don't need to train your staff on all the different clouds. You don't need to build your bespoke abstraction. You can elevate the conversation, talk about this platform that already exists, right? You can protect your teams from the turmoil below them, hopefully with close to the right abstractions that don't leak too much, but at least if they leak, you're not alone. Your staff can get down to work, right? They need to do their thing without getting stuck in all the details of each environment. Somebody asked on Twitter this morning, what does infrastructure mean? I was looking at it and my response was, infrastructure is all the crap you don't wanna care about, but you need to in order to get your job done, right? It's everything below you. So, hard-earned wisdom, all abstractions eventually break down, right? Kubernetes, we're trying to make a mechanism, a system that can extend, can grow and develop and adapt with people. Some of those abstractions will break down and be replaced over time. 
Already, at sort of five years in, we're starting to see generation-two APIs come in, get more sophisticated, and start to age out the older, less sophisticated APIs, and that's fine. The system's gonna continue to grow and be supported and have this community behind it. Now, Kubernetes does not magically make all your clouds the same, right? Product feature differences and limitations do exist. If you wanna use BigQuery, you have to use Google Cloud. If you wanna use Redshift, you have to use Amazon. That's just the way of things. But Kubernetes has a lot of places where you can pop the hatch, right? You can break portability and trade it off for functionality. But you don't have to, most of the time. One of the things we like to pitch about Kubernetes is that it's low-level enough that you can run just about anything, stateful apps, databases, Kafka, but it's high-level enough that your employees don't get stuck in the quagmire of learning each cloud and debugging through each cloud. One of the things we've also talked about through Kubernetes, through our industry in fact, is this DevOps model, right? Where we make two separate personas, and sometimes we glue them together and sometimes we don't. Your developers can focus on the developer tasks and your ops can focus on the ops tasks. And if you're a true dev-slash-ops shop, you can wear two different hats, so you can think about two different sets of problems at different times. You don't have to conflate the two things together. So you can get this consistent experience across platforms. Now, if you're careful, just a little careful, you can take your application and say, you know what, I'm not happy with cloud A anymore, I wanna go run it on cloud B. And you can take the YAML, everybody loves YAML, take your YAML and you can move it over to the other cluster.
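To make that concrete, here's a minimal sketch of a deliberately portable Deployment. Everything in it (the app name, the image, the registry) is hypothetical, and the point is what it leaves out: no cloud-specific annotations, storage classes, or load-balancer settings.

```yaml
# A deliberately portable Deployment: nothing in here is tied to one cloud.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend        # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: server
        # Hypothetical image; any registry reachable from both clouds works.
        image: registry.example.com/web-frontend:1.4.2
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
```

Moving it is then roughly `kubectl --context cloud-a delete -f app.yaml` followed by `kubectl --context cloud-b apply -f app.yaml`, assuming your kubeconfig has a context for each cluster. The real caveats come from exactly the things this manifest leaves out: image registries, DNS, load balancers, and storage.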
It's not gonna be completely free, it's not gonna just work the minute you do it, but with a reasonable amount of effort, you can actually take that thing and move it over. This is the benefit of systems like containerization, Docker, Kubernetes: you have a plausible story for that, right? And these abstractions offer you most of what you need when you cross those cloud boundaries. If that's not enough, let's talk about Kubernetes as a platform for building other platforms. It's a platform platform. Kubernetes has this extensibility API called Custom Resources. People all over the world are using this to define their own APIs, their own bespoke abstractions. Wait a second, didn't I say you don't want bespoke abstractions? No. Bespoke abstractions are actually great, as long as they are focused on what you want to do and they're not baloney; they're not the crap that you don't want to care about, right? So we find many customers now who are using Kubernetes Custom Resources to build higher-level concepts. One of the customers I work with, I won't name them because I'm not sure how public it is, but they're just down the street, and they're building a giant abstraction that changes the way deployments are done, because they don't like the way Kubernetes' built-in Deployments work. They have their own needs, and they have the staff to justify building their own abstraction, so that's what they've done. People are building these things we call operators. You want to launch a SQL instance? Well, that's cool, you just say "create SQL" and it creates you a new SQL instance. You don't have to worry about: how many pods does that need? How many replicas? What's the replication story? How do I expose those services? What's my DR? All of that is bundled into an operator that builds these custom abstractions on top of Kubernetes.
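As a sketch of what that operator pattern looks like, here is a hypothetical `SQLInstance` API. The group, kind, and fields are invented for illustration; a real operator defines its own schema and ships a controller that watches these objects.

```yaml
# CustomResourceDefinition: teaches the Kubernetes API server a new type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: sqlinstances.db.example.com   # hypothetical group and plural
spec:
  group: db.example.com
  scope: Namespaced
  names:
    kind: SQLInstance
    singular: sqlinstance
    plural: sqlinstances
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              version:   {type: string}
              replicas:  {type: integer}
              storageGB: {type: integer}
---
# What a user actually writes: "create SQL". The operator's controller
# watches these objects and creates the pods, services, and replication
# configuration behind the scenes.
apiVersion: db.example.com/v1alpha1
kind: SQLInstance
metadata:
  name: orders-db
spec:
  version: "12"
  replicas: 3
  storageGB: 100
```

The design choice is the one described above: the user-facing API says only what is wanted, and all of the how (pods, replication, DR) lives inside the operator.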
So we let you do this by defining types on top of the existing API machinery. All the ugly, not-so-fun parts of serving a REST API with consistency, with well-defined API semantics, are taken care of for you. You get all that for free. We didn't intend for Kubernetes to be an API-serving platform, but it just sort of shook out of what we were doing, and it turns out to be really interesting. So now you and your teams can build your own APIs on top of and alongside the Kubernetes APIs, and you can have the best of both worlds. You can have your standard abstractions, and you can have your bespoke abstractions built on top of them. This lets you focus on your business, on your business logic, and get things done. Now, I'm a Kubernetes fanboy. You might be able to tell by how many times I've said it today. But Kubernetes is not an island. Kubernetes is the beginning, the seed, of an ecosystem, and what an ecosystem it is. We are just one piece of a Cambrian-esque explosion of things. If you look at the Cloud Native Computing Foundation's trail map, all the icons are about this big now, because they have to fit hundreds of icons onto one page. There are literally hundreds, if not thousands, of products and projects across GitHub, across startups, many of them represented here, that are aiming to go with Kubernetes. Open source and proprietary, licensable and usable right away today, they solve problems for you, they fill gaps, they work with Kubernetes, and they either build on Custom Resources or build applications on top of Kubernetes, to give you these higher-level controls: security, policy, networking, storage. It's an incredibly packed landscape. It's very difficult to navigate right now. What a wonderful problem to have. I'm so confused by all of the cool things that are happening right now. It's going to settle down over time. That's just the way it works.
Inevitably, 90% of these things will die off or consolidate, and that's okay. But open source is exploding, and in a really good way. Just to list off some of the projects: Istio for networking, Knative for serverless, Cilium for eBPF security and networking, Calico for networking and connectivity, Tekton for CI/CD, Kubebuilder for helping you build custom resources, the Cluster API, which gets you managing clusters at an even higher level. These are just a few of the exciting projects that are happening right now within our ecosystem. Interestingly, there's a whole sector of this ecosystem that is made up of users who are publishing their own tools that they've built on top of Kubernetes. It's not just companies trying to make money off their things or sell you something. It's actually companies who are making this digital-native transformation and coming out with their stuff and saying, look what we did. Would you like to help us? Would you like to participate? Would you like to get what we've got? It's pretty cool. This is a really exciting, interesting time to be in the open source space. So whether you want to go with pure open source, or whether you want to look at startups or other companies to fill these gaps, you have a lot of options. You have a lot of flexibility. The power is really all yours right now. And in my opinion, we are closer to the beginning of this process than we are to the end. This is going to heat up before it cools down. I guess every project creator feels this way, right? But I hope I'm right. This is an exciting time to be part of this. Now, I sound like a commercial for Kubernetes, so maybe a reality check is worthwhile. Kubernetes has some issues, I will admit it. Some of them are being worked on. Some of them are yet to be staffed. Some of them have yet to be discovered. This is the reality of software. We know this. Not every abstraction that we have is leak-proof. We are aware of some of those, and not aware of others.
Sometimes we are our own harshest critics. Kubernetes is very cluster-centric. We are working on building abstractions that span clusters, that span environments, that make it easier for you to use multiple environments. This is where I'm spending all of my time lately. I think we've got the cluster pretty well nailed; now we're spreading out beyond the cluster. A lot of our cluster policies are sort of poorly defined when you start to go into cross-cluster space, so I think this is a very exciting, interesting time to be looking at those things. For example, network policy across clusters. Networking is hard. Networking is one of the hardest problems that infrastructure people have to face, and networking across clusters, across clouds, is extra super-duper challenging. And multi-cluster life cycle management: cluster life cycle management in general is actually pretty nascent, so this is a place where things are growing up. It is still early for multi-cloud and hybrid in the Kubernetes space, but it is growing, right? And I think Kubernetes is pretty well positioned to be that thing that helps you bridge the different environments when you need to, for whatever reasons you need to. There are a million legitimate reasons, and Kubernetes hopefully will be there to help you with it. So, looking to the future, there's a lot of work to be done. Here's my call to action for you: bring me your use cases. Come and talk to me about what it is that you're really trying to do, concretely. You want two clouds? Why? What do you need to happen between them? How do they communicate with each other? What does your network look like? What do you want to happen? Because that's the way that I, as a representative of Kubernetes, can start to bring those ideas into the fray. If you've got the time and the bandwidth, come join one of our SIGs. Come talk to us about your use cases. Come to KubeCon, or come to SIG Multi-cluster, and represent.
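For context on the network policy point above: today's NetworkPolicy API is cluster-scoped, so its selectors can only match pods and namespaces in the cluster where the policy lives. A minimal example, with hypothetical names throughout:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only matches pods in this same cluster
    ports:
    - protocol: TCP
      port: 8080
```

There's no standard way to say "allow traffic from the frontend pods in my cluster in the other cloud", and that cross-cluster gap is exactly the kind of thing being worked on.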
You know, we have some things that we think we understand pretty well: mesh-like, app-to-app communications, I think I understand that case. Ingress, I think I sort of understand that case. But there are a lot of things that I don't understand yet, and that's why I'm asking for you guys to help me. So after this today, I'll be around all day if anybody wants to talk to me. Also, I'm pretty easy to find on GitHub or Twitter or email; it's always the same name. We've got a million topics to talk about. So again, at the risk of being the cheerleader, I think Kubernetes is in a good spot for this. At least there's nothing better. Running apps is hard, and that's your business. I don't want to make your business harder. I want to make your business easier. Running multiple clouds is going to complicate your life. It is going to be hell for you, and I want to try to make it epsilon easier. But I need to understand the use cases. So come and talk to us. If you're looking at multi-cloud and you haven't started looking at Kubernetes, I urge you to take a look at it. It's hard to say that it will be here forever. Who knows? But there are not a whole lot of these sea changes that we see through our careers. This is a pretty big one, and I'm pretty excited to be along for the ride on this. And if you think that we're missing something with Kubernetes, let's talk. Thank you. And we've got just a couple of minutes left, so if anybody wants questions, I'm happy to do questions. Or we can just talk afterwards. There's a hand in the back, too. There's one over here. Well, who shouldn't use Kubernetes? Who should not use Kubernetes? If you don't have a problem, don't try to fix it, right? If you're not feeling the pain that I talked about here, don't worry about it yet. You might eventually, but I am a big proponent of: if it ain't broke, don't fix it.
You need to understand Kubernetes before you jump in. It's a deep well of water. It's pretty nice and warm, but there are some living things down there, and they will occasionally brush past your foot. What's the biggest risk to Kubernetes being the de facto orchestration layer in the next 10 years? The biggest risk, do you mean what might unseat it? The biggest competitor to Kubernetes right now is still homegrown stuff, things that people have done on their own, right? Some of what Kubernetes does is not that complicated to just script and do yourself. To the previous question: if what you have is working, by all means run with it. I don't see any other projects that are really well positioned right now to sort of replace Kubernetes in the Kubernetes space, but they say you never hear the one that kills you. So if something's gonna come along and eat our lunch, I don't know where it is yet. I'm madly scanning the landscape, trying to figure out what that is and why. Some people think it'll be serverless. Maybe it'll be serverless. I'm excited about some of the stuff that's happening in the Kubernetes serverless space, whether that's Knative or OpenFaaS or the other things that are happening. That might be the guy who comes along and changes everything next. Thank you very much. One more? Oh, I'm sorry, did I not have that on the slide? T-H-O-C-K-I-N, at GitHub, at Twitter, or at Google, or LinkedIn, or Speaker Deck, or whatever all the other sites are. Any other questions? Thank you. Oh, one more? Yeah. What are some best practices to monitor Kubernetes resources like pods, ingresses? Best practices to monitor Kubernetes. Monitoring is one of those spaces where everybody has an opinion, and it's hard to come in and say what we think is the best practice, because we have to fit into existing situations. That said, customers I talk with who are coming in sort of semi-greenfield are looking to Prometheus. Prometheus is another CNCF project.
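As a sketch of what that Prometheus approach looks like in practice: Prometheus ships Kubernetes service discovery, so a scrape configuration like the following finds pods dynamically instead of maintaining static target lists. The opt-in annotation shown is a common convention, not something built into Kubernetes.

```yaml
# Fragment of prometheus.yml using Kubernetes service discovery.
scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod                 # discover every pod via the API server
  relabel_configs:
  # Keep only pods annotated prometheus.io/scrape: "true" (a convention).
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # Carry the namespace and pod name through as metric labels.
  - source_labels: [__meta_kubernetes_namespace]
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: pod
```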
It's a really excellent project, also built by former Googlers, modeled on internal Google systems, ironically. It's a great system, and we work really well with Prometheus. If you're running in a cloud, though, you might have a better managed experience by using the different cloud providers' monitoring stacks. So it is pluggable in that way. Thank you very much.