All right, thanks for having us. I'm really excited to talk with you today. We're going to talk about Kubernetes and what it has to do with multi-cloud. But before we dive in, I'd like to introduce ourselves. I'm Ian Chakras, and I'm one of the engineering leads for Anthos and GKE. My team is focused on making it possible to manage and monitor all your Kubernetes infrastructure no matter where it is, whether it's on Google Cloud, other clouds, or on-premises. I've been working with Kubernetes for several years, and I've seen us move through several tech transitions, everything from bare metal to containers and from monoliths to microservices. And I love the velocity that Kubernetes and containers bring us, letting us focus more on our core business. Prior to joining Google, I helped a large SaaS company on their multi-cloud journey, and I'll try to layer in some of the things I learned as part of that early team laying the groundwork for leveraging Azure, AWS, and GCP. Tim?

So I'm Tim. I've been working on this little Kubernetes project for about five years now; I'm one of the handful of people who can actually put five years' experience on my resume. I pay attention mostly to lower-level topics: networking, storage, multi-cluster, those sorts of things. Before Kubernetes, I worked on Google's internal systems, Borg and Omega, which were a lot of the inspiration for what Kubernetes does. One of the things I get to do at Google is sit and listen to customers. I talk to a lot of customers, and I hear the problems that they're having and the pleas for help that they have. And an unfortunate truth about Kubernetes is that for a long time we put our fingers in our ears and pretended that the cluster is the edge of the universe and there's nothing beyond it. And as I listen to more and more customers, I realize that multi-cloud, multi-cluster, hybrid is the reality.
There are lots of reasons why customers are going to end up in multi-cloud situations, and I posit that approximately every non-trivial customer is going to end up in this situation in one form or another. Reasons like geographic locality, whether that's latency, availability, regulatory reasons, or data sovereignty; things like disaster recovery; cloud-specific things, where I want to use this feature of this cloud and that feature of the other cloud; risk management, making sure that you're not stuck; legacy reasons; acquisitions; just lots of business reasons that people are going to end up in this sort of situation. Gartner pointed this out: the projection is that by 2021, 75% of mid-size to large organizations are going to have some sort of hybrid or multi-cloud strategy. Presumably that's why you're all here; you're interested in this particular topic. So the interesting thing, the hard thing, about multiple clouds is the noise. There's so much that is different across clouds. All of these environments are completely different. Whether you're talking about UIs or CLIs, or about support models or how you engage with the product, it's going to be different on every cloud. This is incredibly difficult. Imagine your poor teams who have to learn these environments, whether that's your on-prem VMware environment or Google or Amazon or Azure in the cloud; learning different environments is hard. Learning them to the depth that you need to develop and debug real applications on these clouds is really, really difficult. And it's even worse than that, actually, because the differences between these clouds run so deep. They run down into what the cloud can do: networking capabilities across clouds or across environments are incredibly different and varied.
Storage, autoscaling, lifecycle management: all these things that have real material impact on the way you develop your applications can be total chaos for your staff. You need some consistency. You need something to help you get an environment across these clouds that doesn't force everybody on your team to be aware of the differences between them. So here we are at KubeCon, and I think you know where I'm going with this: Kubernetes, I think, is perfectly positioned to be this environment. Kubernetes was designed to do this from the beginning. We wanted to provide the right abstractions that would let people have a loosely coupled environment, so that they could use CLIs and APIs and tools without getting locked in to their particular cloud provider, without being stuck there. It's OK to make decisions about cloud providers and to use specific cloud provider features when you want to. But it's really, really easy to slide down that slope and realize a year later that you're stuck, because you didn't realize you were making all of these highly coupled decisions. So Tim has laid out why Kubernetes is a good fit for multi-cloud. It really is a platform that is high-level enough to hide most of those variances we see across the different clouds, but low-level enough that you can do anything you need to for your business and your developers. Kubernetes provides abstractions that insulate your teams from some of the mess below and hide the infrastructure complexity associated with multiple clouds. As I mentioned, I was involved in a platform team helping to perform these types of migrations across clouds. We actually took both approaches: a cloud-specific approach for VMs, and a Kubernetes approach for all our container workloads. And I can tell you which one went much more smoothly. Kubernetes is this platform upon which you can build.
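To make that insulation point concrete, here is a minimal sketch in plain Python (not any real tool; the cloud names, registry URL, and node-selector labels are all made up for illustration) of how a platform team might keep one portable application spec and confine per-cloud differences to a tiny overlay:

```python
import copy

# One portable application spec, written once by the app team.
# This is the ordinary Kubernetes Deployment shape, expressed as a dict.
BASE_DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "checkout"},
    "spec": {
        "replicas": 3,
        "template": {
            "spec": {
                "containers": [
                    {"name": "app", "image": "registry.example.com/checkout:1.4.2"}
                ],
            }
        },
    },
}

# Cloud-specific details the platform team maintains centrally.
# These node-selector labels are hypothetical, not real well-known labels.
CLOUD_OVERLAYS = {
    "gcp": {"nodeSelector": {"cloud.example/provider": "gcp"}},
    "aws": {"nodeSelector": {"cloud.example/provider": "aws"}},
    "azure": {"nodeSelector": {"cloud.example/provider": "azure"}},
}


def render(cloud: str) -> dict:
    """Return the manifest for one cloud: the shared base plus a tiny overlay."""
    manifest = copy.deepcopy(BASE_DEPLOYMENT)
    manifest["spec"]["template"]["spec"].update(CLOUD_OVERLAYS[cloud])
    return manifest
```

The application-facing parts (image, replicas, container names) are identical on every cloud; only the overlay differs, so application teams never have to see the per-cloud details.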
And building your own platform for your company and your teams and your developers is a great thing. It allows you to build on top of Kubernetes and provide higher-level abstractions that are really focused on what you or your business is trying to do. This ability to be a platform for teams and for companies is really powerful. It gives you enormous leverage: not only can you build the platform for your teams, but there is an entire ecosystem of people out there in Kubernetes building things that can help you run your business. There are hundreds of these. I went to look at the CNCF page recently just to look at all the different projects, and even just the graduated project list now fills your entire screen, and there are so many others. To mention some of them: if you want a service mesh, you're probably going to be looking at Istio, which provides that higher-level abstraction and value-added features that are independent of where you're going to run it and independent of the lower-level differences within the clouds. You can also look at things for developers like Knative, moving toward letting you deploy a container really easily anywhere Kubernetes is running. And there is this entire ecosystem building infrastructure and applications, whether it's logging and monitoring, or networking, databases, security, storage, policies; lots and lots of different vendors and partners and options and choices, so they can fill in the gaps for anything your business runs into. So Kubernetes gives you leverage by being a platform that actually spans all those clouds. Now, I've painted a pretty rosy picture, but in reality it's still pretty early for multi-cloud and hybrid Kubernetes.
Tim mentioned some of the challenges with the underlying infrastructure that certain teams are going to need to understand. Not all the abstractions in Kubernetes are leak-proof, that is, they don't completely hide the infrastructure. Tim mentioned one in particular: networking. And networking across environments or different clouds, and across different clusters, both of these still remain challenging. So just saying multi-cloud is probably too abstract. We really need to talk about fleshing out more detailed, concrete use cases that need to be explored or solved for. The future here is that there are still a lot of exciting efforts and explorations happening today when it comes to multi-cloud, and Kubernetes is at the core of it. Some of these that are relatively distinct and understandable are things like connecting across all these different environments or across different clusters, or managing policy. Managing the policy of each of the different clouds is a challenging problem, but managing the policy of every one of your clusters becomes more consistent. Doing selective updates and staged deployments: this is, again, something where you get leverage from Kubernetes being in all the different environments, even for the operational aspects of running and managing Kubernetes itself. Now, there are some things that are still harder to do that are really near and dear to our heart at Google, things like disaster recovery scenarios. And Tim mentioned a whole bunch of different reasons why you may be multi-cloud: things like multi-cloud geographic placement of applications, whether for redundancy or for low latency to serve your customers. There are things like bursting across clouds or different regions. These are, again, areas that I think are harder but still ripe for innovation when it comes to Kubernetes.
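The "selective updates, staged deployments" idea can be sketched very simply. This is a hypothetical illustration (the cluster names, wave grouping, and version strings are invented; a real rollout controller would check health metrics between waves):

```python
from typing import Dict, List

# Clusters grouped into rollout waves: a canary cluster first, then the rest.
WAVES: List[List[str]] = [
    ["gcp-us-canary"],               # wave 0: one canary cluster
    ["gcp-us-prod", "aws-eu-prod"],  # wave 1: remaining production clusters
]


def staged_rollout(current: Dict[str, str], new_version: str,
                   healthy: bool = True) -> Dict[str, str]:
    """Advance one wave at a time, halting if a wave turns out unhealthy.

    `current` maps cluster name -> deployed version; returns the new mapping.
    """
    state = dict(current)
    for wave in WAVES:
        for cluster in wave:
            state[cluster] = new_version
        if not healthy:   # in reality: evaluate metrics after each wave
            return state  # halt: later waves keep the old version
    return state
```

Because every cluster speaks the same Kubernetes API regardless of which cloud it runs on, this kind of wave logic stays identical everywhere; only the cluster inventory differs.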
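And to give one concrete flavor of the bursting and cost-aware placement ideas, here is an entirely hypothetical sketch (region names, prices, and the data-locality flags are invented) of choosing where to run a batch job:

```python
from typing import Dict, List, Optional

# Hypothetical per-region offers: cost per vCPU-hour and whether
# the job's input data already lives in that region.
OFFERS: List[Dict] = [
    {"region": "gcp-us-central1", "cost": 0.021, "data_local": True},
    {"region": "aws-us-east-1", "cost": 0.019, "data_local": False},
    {"region": "gcp-europe-west1", "cost": 0.024, "data_local": True},
]


def place_batch_job(require_data_locality: bool) -> Optional[str]:
    """Pick the cheapest region, optionally restricted to data-local ones."""
    candidates = [o for o in OFFERS
                  if o["data_local"] or not require_data_locality]
    if not candidates:
        return None
    return min(candidates, key=lambda o: o["cost"])["region"]
```

The trade-off the talk alludes to shows up directly: relaxing the locality constraint can buy a cheaper region at the price of moving data.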
And you can even get into things like dynamic scheduling for batch workloads. We see a lot of new efforts within this space to take advantage of data locality, of specialized services within particular clouds, or even of cost structures to dynamically schedule your workloads. Now, the fact is that you do still need a platform team that operates all these clusters, but again, Kubernetes is gonna provide these wonderful abstractions so that not everyone needs to focus on the underlying infrastructure. Many of these things are near and dear to our heart at Google, and that is why last year we announced GKE On-Prem, and earlier this year we announced Anthos going GA at Next. Anthos provides a consistent development and operator experience across environments, both multiple clouds as well as on-prem, and it is a modernization platform that goes wherever you need it to be. Wherever your infrastructure is, we bring the cloud to you. Now, I don't wanna turn this into a product pitch, so I'm just gonna say we see these types of trends, whether it's modernization or multi-cloud, as inevitable, and we believe that Kubernetes is the basis and the building block for that change. So that's the end of the talk today. We are here at KubeCon and we'd love to chat with you more about Kubernetes, GKE, or Anthos. We'll be at a bunch of different locations over the course of the event, for example the community lounge or the Google Cloud booth. Perhaps you folks would like to take some questions. Would that be cool? Sure. Awesome. Any questions from the audience? Everybody knows everything there is to know about Kubernetes already? Okay, they proved me wrong. Yeah, can you give a quick overview of the state of KubeFed, federated Kubernetes? Sure. So KubeFed is a project that came out of the SIG Multicluster group. It is a particular twist on how to manage multiple clusters.
The project itself struggled to get adoption, and we talked to customers about it. There are a lot of people who are interested in it, but when they actually get down to brass tacks with it, they find it doesn't quite fit what they need. So the Google perspective on this project is, first of all, the community is amazing, and the community should and can do anything they want and spread out. On the Google side, we're focused on more specific use cases. KubeFed has sort of an overarching model; we're trying to find very specifically and concretely what problems customers are having, what those use cases are, and what user journeys they wanna take, and focus on those. So we've invested instead in projects like KubeMCI, multi-cluster ingress, which lets you configure the Google Cloud load balancer to bring traffic into multiple clusters. This is something that KubeFed struggled with, and struggled with because it's really hard to do in a generic way; it really needs to lean on some of those cloud provider specifics. For the customers of Google, that was a really valuable trade-off, but it still builds on top of Kubernetes using the same Ingress abstraction that everybody else uses. Now we're focusing on the next round of use cases. We're talking to customers, and I'd love to hear from anybody here: what do you try to do across multiple clouds and multiple clusters? What are the problems that you're facing with your apps talking to each other, or finding each other, or needing policy between each other? We're keeping that independent of KubeFed, though we're still in contact with that project. Hi, I'm Arcello from Stanford University. So we have a lot of hardware, for taxation reasons: we actually can recover a lot more money if we buy CapEx rather than cloud resources at the moment. Things are changing, but not yet. So because of that, we do have an interest in Anthos.
We already met with Google, and we were asking for a timeline, a roadmap, for when you guys are gonna allow us to use on-prem resources without the VMware layer. And not only that, also the F5 requirement. I do know there might be some intention to allow maybe KVM or other cheaper platforms to work with, and it'd be great to know if you have any ETAs. Yeah, I don't have an ETA for when we will provide a distribution and management for bare metal, or for not using vSphere. But please come talk to me afterwards and I can connect you with the appropriate product managers and tell you about the offerings that we have today that can help you manage and monitor all your Kubernetes clusters. And with respect to the F5 requirement, we have in pre-release now a sort of bundled load balancer based on Seesaw, another Google project. So you don't need to have that F5 requirement; we've heard you. It's always cool when we hear bare-metal-plus-Kubernetes stories, because I think that's the reality for so many folks, and being able to work with all of those various kinds of compute is so important. Any more questions from the audience? Yes. So one of the things I'm curious about, because you're pretty prominent in the Kubernetes ecosystem: Kubernetes came out of Borg and Omega. I'm super curious whether there are any features in Borg or Omega that are still Google-internal that you think could be put into Kubernetes, that Kubernetes is basically missing, that would take it to the next level in delivering on this whole journey we're talking about today. Unequivocally, yes. Borg has been in development since 2003, so I guess 16 years now. And it has a ton of features that are very custom for the way Google runs, features that would not make any sense for Kubernetes to adopt, that are tailored for Search or Gmail, right? And there's really a half dozen of such applications in the world.
But it also has a ton of features and functionality that I think Kubernetes absolutely should get. And I could probably rattle off a half dozen off the top of my head. It's not that we don't wanna bring them forward. It's a matter of prioritization, how to fit them into the system, getting enough people to do it. Kubernetes is a community project. We need more than just Google doing the development and we have this now. So, a great example that came up just this week since you talked about it is things like high performance computing. People who are very concerned with things like NUMA, right? Well, Borg has some really clever ways of automatically figuring out best NUMA alignments and managing it. And we can actually get really far down the performance optimization curve without any API at all, right? Because we can figure it out automatically. But it's hard, it's hard work. And I would love to help somebody do that work, right? It'll come up eventually. It hasn't been the top priority for the customers that we're dealing with. But since it is a community, if somebody's interested in this work, I'd be happy to help point them at the right papers and shepherd the project through it. Any last question? All right. Well, thank you so much, both Tim and Ian. Thank you.