Okay, hello! Thank you for your patience. My name is Whitney Lee, and I'm a developer advocate at VMware. It's so nice to see your smiling faces. I have a pretty cool job: I get to travel around the world and talk about interesting tech. I'm here today because I developed a presentation about Knative Serving, which I've delivered many times over the past year, and I want to share with you some of the questions and concerns I've heard about Knative along the way.

So that's me. When I give that presentation, I don't use slides. I draw out how Knative Serving works, and I flip back and forth between the drawing and a live demo in the terminal. A finished drawing looks something like this; during the presentation I zoom in on different parts as I draw them. It's a really fun presentation, and it's really fun for me because I get to see people get excited about the technology. Sometimes I'm presenting to developers who are overwhelmed by Kubernetes generally, and when I show them Knative they say, "Oh, I don't have to know all the Kubernetes abstractions? This is great." So I see them get excited, and then I see them ask, "Okay, I want to adopt this. What should I do next?" or "What might be an obstacle to getting this into my organization?" So I get a lot of questions, and I've been saving them all up just for this moment.

First, let me talk about the audience for these talks. Who am I giving these presentations to? As I just mentioned, sometimes it's developers: for example, at JBCNConf in Barcelona, at DevOps UK in London, and at various stops on my company's SpringOne Tour, like Seattle and New York. But I've also given this talk, or talked about Knative and serverless, to an ops audience, and you'll see some of that reflected here. That was at VMware Explore.
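As a rough illustration of the abstraction the talk describes (a hedged sketch, not taken from the talk itself): a single Knative Service manifest stands in for the Deployment, Service, Ingress, and autoscaling objects a developer would otherwise wire up by hand. The name `hello` is a placeholder; the image is Knative's public sample app.

```yaml
# One resource; Knative Serving creates the underlying
# Kubernetes objects (Deployment, Route, Revision, etc.) for you.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                  # placeholder name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```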
Also, Victor and Mauricio, who are up here with me, joined me for an ask-me-anything panel about serverless on Victor's show, the DevOps Toolkit. And this is my own streaming show, Enlightning: Carlos came on and talked with me about Knative Serving, and some of the questions came from that show too. That was a fun time.

I've grouped the questions into larger trends instead of firing off a bunch of them at you, so let's go over the trends quickly. I've gotten questions around observability and debugging, questions about the networking layer, questions about integrations with other tools, and, similarly, questions about how Knative fits into the CI/CD, GitOps, and platform-building story. And then I've also gotten questions around scaling workloads.

So without further ado, let's actually hear some of the questions I've received. First, observability. I'm going to list the questions rather than tell you how I answer them. Basically, I'm very comfortable saying "I don't know" when I don't know, and in that case I send people to the Knative Slack workspace; for now I'll just list them. "Is there a GUI to help me see the differences between revisions at a glance?" I also overheard, "I don't like Knative because it's impossible to debug," so that's a preconception that's out there. "What are your thoughts on using Knative's tracing and telemetry capabilities versus code-level instrumentation?" And finally, "How do I debug Knative when things go wrong?"

Then I also received questions about the networking layer. The first two times I presented, I got comments and questions about Knative being heavy and requiring a full-fledged service mesh.
I now address this as part of my presentation, where I talk about the different networking-layer options, but the community should know that Knative is still very much perceived out there as heavy.

Another question I got is, "Why doesn't everyone use this?" That doesn't sound like a networking question, but bear with me. Mauricio came on my Enlightning show to talk about Knative Eventing, and he said it's his firm belief that Knative should be installed into every Kubernetes cluster. So when I got this question, I quoted Mauricio. But then another audience member said, "I know why you don't use Knative," and I thought, "Oh no, okay, here we go." He said that Knative makes your resources tightly coupled to a specific networking choice, and that choice might limit the choices you can make later in terms of which tools play well with Knative. I talked to him afterward, and he was specifically talking about Flagger: he had tried to make Flagger and Knative work together, and he couldn't, because of the networking choice Knative forced him to make.

That segues perfectly into my next set of questions, which is about integrations with other tools. For this set, instead of reciting the questions, I thought I'd simply name all the tools people are asking about in the context of working with Knative. I'm a relatively new learner myself, so I'm not going to assume you know any of these technologies; I'll say what each one is. Pardon me if I'm repeating something you already know, or something that's obvious, because I don't have a sense of what's obvious; nothing is obvious to me. First, GKE Autopilot clusters. Autopilot gives you a cluster sized exactly for the workload that's running.
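To make the "networking choice" concrete: Knative Serving delegates ingress to a pluggable networking layer (Kourier, Istio, Contour, among others), selected via the `config-network` ConfigMap in the `knative-serving` namespace. A hedged sketch, assuming the lightweight Kourier layer has been installed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Select whichever networking layer is installed. Kourier is the
  # lightweight option for those who find a full service mesh too heavy;
  # Istio and Contour are alternatives with their own ingress classes.
  ingress-class: "kourier.ingress.networking.knative.dev"
```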
So Autopilot has to create a node to run a workload, which means that when you scale from zero, a whole node has to be created, and I got a question about whether that works well with Knative. Also, does KEDA work well with Knative? KEDA stands for Kubernetes Event-driven Autoscaler, so can you use that autoscaler with Knative? Does Argo CD work with Knative? Argo CD is a declarative GitOps continuous-delivery tool for Kubernetes applications. Tekton Pipelines, a cloud native solution for building continuous-integration pipelines. Crossplane, which we talked about today, a way to extend Kubernetes to manage resources that are external to Kubernetes; we know that one works with Knative, because Mauricio and Victor just showed us. And then, as I mentioned before, Flagger, which I've gotten questions about multiple times, not just from the one wonderful person who participated in that conversation. Flagger, if you don't know it, does progressive continuous releases, and it uses metrics to calibrate the way an application is rolled out.

The next group of questions is about how Knative fits into the CI/CD, GitOps, and platform-building story. These are basically all the tools I was just talking about: integration questions with Crossplane, Tekton Pipelines, Argo CD, et cetera. I want to point out that this category definitely has overlap with tooling integration. A question about whether Knative integrates with Crossplane is a question about platform building, in my book; a question about whether it integrates with Tekton Pipelines is a question about CI/CD. Then we have some more straightforward ones, like "How does Knative work with an infrastructure-as-code approach?", "If you're using GitOps, would you have a schema there, too?", and "Does it work with GitLab CI/CD?"
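The progressive-delivery questions (for example, the ones about Flagger) touch a capability Knative Serving has built in: percentage-based traffic splitting between revisions. A hedged sketch with hypothetical service and revision names (`hello`, `hello-v1`, `hello-v2`), using the sample image again:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                    # hypothetical service name
spec:
  template:
    metadata:
      name: hello-v2             # name the new revision explicitly
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "v2"
  traffic:
    # Shift traffic gradually between revisions; a progressive-delivery
    # tool would automate adjusting these percentages over time.
    - revisionName: hello-v1
      percent: 90
    - revisionName: hello-v2
      percent: 10
```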
The final section of questions is about scaling workloads. How do you make sure your cluster has enough resources available should many applications scale from zero at once? How can you measure when it's worth using Knative for the scale-to-zero benefits? Knative itself has resources constantly running, so how do I know it will be a net win to use Knative for serverless? How can you do serverless with spot nodes? And without the cluster autoscaler on the Kubernetes cluster, can you realize the true serverless promise of Knative?

To recap the five trends: questions around observability and debugging, questions about the networking layer, questions about how Knative integrates with other tools in the CNCF landscape, questions about how Knative fits into a GitOps, CI/CD, and platform-building story, and questions about scaling workloads.

At the end of my talks I give action items, so now I have some for you. First, a couple of statements: I point people who are interested in Knative to the knative.dev quickstart tutorial, the one Paul was just talking about, the one he set up. I also point them to the Knative Slack workspace for any questions or adoption concerns. But I wanted to ask you, and I don't expect you to answer publicly now, but maybe find me later: are there any other Knative resources you think it would be valuable for me to point people to when they're learning about Knative Serving for the first time?

And that's the end of my talk. You can find me on Twitter at @wiggitywhitney. If you want to see one of the talks I give, and pick it apart and give me feedback, there's a link right there. That's all I have. Thank you so much, y'all.
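The scaling questions above map to per-revision autoscaling annotations in Knative Serving. A hedged sketch (the service name `bursty-app` is hypothetical) showing how a team might allow scale to zero while capping how far a single app can burst, so that many apps waking at once don't exhaust the cluster:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: bursty-app               # hypothetical name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero
        autoscaling.knative.dev/max-scale: "10"  # cap burst size per app
        autoscaling.knative.dev/target: "50"     # ~50 concurrent requests per pod
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

Setting `min-scale` to `"1"` instead trades the scale-to-zero savings for no cold starts, which is one way to reason about the "is it a net win?" question.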