Yeah, hello. My name is Roland Huss. I'm from Red Hat, working there on OpenShift Serverless, which is the product based on Knative. In the community, I'm part of the TOC and also one of the Knative client working group leads. Yeah, that's me.

I'm Evan Anderson. I've been doing Knative for the last four years or so; I'm currently security working group lead and on the TOC, currently at VMware.

Hi, my name is Whitney Lee. I'm a developer advocate at VMware. I teach and learn cloud concepts from behind my lightboard. I'm here on the panel representing new learners in general. I wrote my first line of code in 2019, and I've made a lot of content lately about the Knative project.

And I'm Sebastien Goasguen. I'm the co-founder of TriggerMesh, and I was involved in Knative almost since the beginning.

Okay, excellent. So I'm Max. I work for IBM, and I'll try to be moderating. The panel is about the past, present and future. I figured I'll start with three questions on those three topics, but I want you to be involved, so after those three questions I will look for audience members with questions and run to you so you can ask your question. So get ready. And I will ask the panelists to limit their answers so that we have time to get to the questions. Okay? All right, very good. And everybody doesn't have to answer every question; if you don't have an answer, that's fine, we'll move to the next one. So the first question, talking about the past, and let's make it fun: do you have a story or an interesting recollection about how Knative started, or maybe something fun about your colleagues, maybe about Matt or whoever? Tell us a story if you have one, or how you started with Knative.

Yeah. So this is a little bit of prehistory.
Ville over there and myself and Brett Clouser at Google were actually working on a next generation of the App Engine Managed VMs project, trying to see if we could move things to Kubernetes. And shortly after that, there was a blog post that Joe Beda ended up highlighting, saying basically that Kubernetes is too hard for developers: hey, if you want to start an application, you need to understand pods and containers and deployments and replica sets and services and horizontal pod autoscalers. All of a sudden, everything you needed to know about your application fell out your ear, because you packed all these networking and distributed-systems concepts in. And we said, hey, I bet we could do something about that. And this is circa 2017, 2018. I think 2017. So it only took two years after Kubernetes. It was, what, Ville, do you remember? Was it September? I see that we started building a proof of concept.

Okay, cool. Anybody else have a story about the history, the past? Okay, sure, I have a story. So I created Kubeless; I think that was December 2016. There were a lot of FaaS solutions in the community space at the time. Fast forward to 2018, I'm contacted by DeWitt Clinton at Google: hey, we're doing something, and so on. At the time I was transitioning from another company, and he said, do you want to be part of the Knative announcement at Google Next? That was July 2018. And I said, sure, but I don't have a company yet. So with my co-founder, we rushed to get a logo, 400 bucks on 99designs. We called each other, we came up with a name. And it was funny: if you go back to YouTube, you see the announcement, and it's like, thank you to IBM and T-Mobile and Pivotal and TriggerMesh, which was not incorporated yet. So that's quite funny. Anybody else?
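As a rough illustration of the abstraction Evan is describing (the image name here is hypothetical), a single Knative Service stands in for the Deployment, Service, Ingress and HorizontalPodAutoscaler a developer would otherwise have to write by hand:

```yaml
# One Knative Service replaces the pile of lower-level objects:
# Knative derives the Deployment, Route and autoscaling from it.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # hypothetical container image
          env:
            - name: TARGET
              value: "World"
```

Applying this with `kubectl apply -f service.yaml` gives you a routable URL and request-driven autoscaling, including scale to zero, without touching the underlying objects.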
Or we'll go to the next one.

Yeah, actually, I joined Knative quite late in the game; I think it was 2020, so some two years ago. I was always fascinated by the ease of the developer experience; this was really great, and I really wanted to be part of that. And I have to say, I was accepted by the community very quickly; there was a very, very nice reception, a very nice community to join. I really enjoyed it, and I can only encourage everybody here to join the Knative community, which is really a great one.

Okay, so the next question is very specific; it's about the present. We are here at the first KnativeCon, and obviously we're now part of the CNCF. Tell me what you're looking forward to the most about being part of the CNCF. Maybe we'll start with you, Whitney: what do you expect the CNCF to do for us? Or maybe to hurt us? No, I don't think so. But you tell me, what does being part of the CNCF mean to you?

Actually, I think this is maybe a good opportunity for someone to explain to me what this milestone means for y'all.

Let me ask Carlos here. So you were instrumental in making Knative part of the CNCF. What does it mean, and how hard was it to put Knative in the CNCF? What are the things we had to do?

Well, convincing Google, I guess. A lot of people actually thought that we were already in the CNCF; a lot of people talk to me and say, I see your logo in the CNCF landscape, and they think that we are in the CNCF, which is a good thing, but it was not real. Moving to the CNCF gives us an umbrella of services and help in terms of legal services, marketing services, infrastructure. We're running infrastructure, thankfully, thanks to Google, which started the project; now we're moving to our own infra, and people like Hippie and Caleb, I think Caleb is here.
Those folks help now, so we grow with the community. I think it also gets us closer to Kubernetes. One of the things that I did, and encourage, is to join and meet people from Kubernetes. I know a lot of folks already have relationships with Kubernetes; I tried to join SIG Release, so I'm building a network there, and that way we can build a better community within CNCF and Knative. So basically there's a lot behind that. But what does it mean to you, for instance?

So what I'm hearing is that it moves you closer to Kubernetes, but also that there's a lot of time, money and effort put into this project, and when it's owned by a private company, that is a little bit dangerous; there's a lot of trust being put in Google. And now the trust is there: it's owned by the community now, and not by an individual entity.

And Google is not evil, so we don't have to worry. Yes, nothing against Google.

Okay, I'll do a quick one before Evan. So for me, moving to the CNCF is really big for Knative. I really thought it would happen much sooner; early 2018 would have been great. The big advantage to me is that users want to know that the project is sustainable, that it's not dominated by one vendor. People want to de-risk their investment in a particular piece of software, right? Putting it in the CNCF doesn't change anything in terms of how you contribute, or even the license, or things like this. But you now know that it's under an umbrella that's really de-risking your investment and your choice of saying, hey, I'm going to use Knative. If I fight with Google, I can ask Red Hat. If I fight with Red Hat, I can go to VMware, and then I can go to TriggerMesh or somebody else.
So it's really de-risking your choice, and broadening the community.

Oh, I was just going to point out one other thing, about how we're all here. We talked at San Diego in 2019 about how it would be cool to do a KnativeCon, but the logistics are so much harder without an umbrella organization where everybody who wants to participate can give some money to a central place to make that happen. So that's one of the things I'm excited about: it's going to be easier to do things as a community under an umbrella like the CNCF.

Yeah, I can also back Sebastien's opinion on that. It's really much easier to increase the adoption of Knative when you're under a neutral foundation. This is, I think, the big win for me and also for users, because they have the security that it's not going away anytime soon, let's say.

Yeah. And, not to dwell on this, but I'll give you a specific example: a lot of the time when somebody wants to do something with IBM, they go to me as if I represent IBM. But I may not be there, or I may be busy with something else. In the case of Knative, Isamal was the person who represented Google for us, but now we'll have an organization that we can go to, with a formal process and so on, right? So you don't have to be the one always there, even though he was great. So thank you.

All right, so last question from me, and then we'll pass it to you, so think about your questions. Looking to the future: what do you think is missing in Knative that you'd love to see? Maybe it's a community thing, maybe it's a specific feature, maybe it's an organizational thing. Let's go to Evan first, since you haven't started.

Oh, geez, I have a huge list.
But I think the big thing that I'm really excited about for the future is that we've been improving and learning how to make this software in the open for the last couple of years. And we've got a big feature list: eventing wants to work on some task flows, security has a big list of stuff to do, and there's neat stuff going on in the networking space, autoscaling and functions. But we've got a framework for doing all of this, for cooperating and coordinating it, and I'm looking forward to that getting better and better.

Okay, I'll give one. One thing I realized fairly recently is that when you look at what people are doing on Lambda, it's not new, but they're really building REST APIs, right? For some time I forgot about this, but they use API Gateway and then they use functions to build a REST API. We can do this in different ways with Knative. But then there is also AsyncAPI, and I want to bounce back on your talk: it's a big, big use case, AsyncAPI. Maybe not so much for what you described, like long-running jobs, but just event-based integration. We work a lot on this, but we have to think deeper about how we support AsyncAPI, asynchronous flows, connections to different things that may be asynchronous, things like this. I think it's going to be a big thing moving forward.

Okay, cool. Whitney, you ready?

Yeah. Technically, when I've given a couple of talks about Knative at conferences, I see interest in duration-based rollout, so I'm excited to hear when that comes to fruition, so I can pass that along. And then, community-wise, I have some experience volunteering with the Kubernetes community.
And on the Knative side, I think there could be more of a welcoming committee, or a really clear place for where to go when you're first getting started, to be able to navigate all the different working groups. Like Maria's presentation earlier: as someone new to the community, I had no idea that group existed, and now I'm really excited to participate in it. So how can we amplify that to someone who's approaching the community and a little intimidated about where to go first?

And Maria has been very good at keeping that running. Every month we have a talk, and it's usually a very good talk. I'll see you all at the next one.

Yeah, very good.

Yeah, actually, I'm also very excited about the more boring things, like security, as mentioned already. These are the things people are really requesting for production-ready workloads, so I hope we will make a big leap forward there in the near future. The other thing I have a passion for is the developer experience. I think Knative gets the abstraction quite exactly right for a developer application, combining everything into this single resource, and we can do even better. I think functions are a very, very promising direction that we are going in, and I really hope to see even more traction there. Yeah, keep pushing that.

If you have not tried kn: Navid and David are going to talk about that, and it helps a lot.

Okay, so who has a question? I'm going to pick some people if nobody has one. Oh, you have a question.

Yeah. So I first heard about Knative when some coworkers in my standup would talk about it, and I was so new to tech that I didn't really understand what it was. I was like, what is that Knative they keep mentioning? And I heard that it's serverless, that it's attached to Istio, and so it's heavy.
And I'm wondering what y'all are doing to help get rid of those stereotypes, because now that I'm deep in the project, I'm like, oh, it doesn't need Istio. And serverless is cool, but I wouldn't even say that's the best thing; the ease of deployment seems like the best thing. So that's my question.

So actually, the Istio thing is kind of funny, because that was one of the first pieces of feedback when we launched, and I feel like we reacted to it pretty quickly. Matt probably remembers the exact timeline, but it was July that we launched, and maybe by February or so the following year we had Ambassador or Gloo or one of the other gateway implementations. I don't know if anyone's done a "here's what it looks like when a request comes through Knative", but we do a substantial amount of processing on that gateway, and I'm excited that the Gateway API is going to finally standardize some of those capabilities. Your standard Kubernetes Ingress is not sufficient for what Knative needs, which is why we have all these different adapter layers. I think that pluggability is really good, but it sounds like that pluggability is not advertised as much as we need, even if it has found its way into the docs.

Yeah, I'll just share a little story, because we're between friends here. Removing the dependency on Istio was very early feedback, and I agree with you, it was addressed very quickly. But the perception that it's still a dependency persists, which is interesting from an almost social perspective. The funny thing is, I remember having a discussion with, I think, Ryan Gregg, the PM. I was telling him, hey, we're running Knative, and we have Istio and the full service mesh.
And that thing is hard to debug and monitor, and there's a huge startup time, and so on. And he said, oh yeah, but Sebastien, in Cloud Run we disabled the mesh. That was very funny to me, because in fact they were just using Istio as an ingress; they didn't use the actual mesh capability. So that's it.

Okay, we have an audience question. And I'll ask the audience: if you have a question, raise your hand after, and I'll come to you.

Hey, it's me again. I'm Michael Gasch, I work at VMware. Can you hear me? Is that okay? Okay. So my question is not so much for Evan, because I've talked to him about this; it's more for the three of you. In Knative, we talk a lot about developer experience and making the developer experience better. My daily work with Knative involves working with administrators, operator kinds of personas, and I realized that a lot of these people don't consider themselves to be developers. So the word itself is a bit overloaded, right? Because I know that Knative doesn't want to be just for developers or business developers; it's for everyone. But conceptually and perceptually, a lot of people think: oh, that's not for me, this is purely for JavaScript or Java developers. So my question is, and especially if you look at AWS Lambda and some of its eventing services, which are heavily used for automation and notification, not necessarily business logic the way we understand business logic, but super successful there: how can we grow the Knative community with these kinds of personas in the future, to get our numbers up on one hand, but also to appeal more to these people?

Yeah, maybe I can answer this first. I think one of the big things that really drives this kind of adoption is having more sources. So thanks for creating tons of sources like that.
And actually, I also think that for Lambda, for example, the big asset is really that there are so many services you can connect to. It's not so much the programming model, how you connect them, but the sheer availability of everything like that. We need to level up on that capability, to have an easier way to create sources and bring things together.

Yeah, I don't think I have anything to add on this. I mean, user experience, to me, is huge these days. And honestly, I would say that Knative is super hard to use, right? And I've been involved since the beginning. When you look at all the APIs, all the constructs, even things that we've done at TriggerMesh: if you take somebody who is new and they try to use it, it's challenging. Lots of things you need to understand: Ingress, CloudEvents, things like this. So I think we really need to do a much better job in terms of developer experience. It's not just onboarding people; it's really, what's their conceptual model when they want to consume something like Knative and when they want to build an application? Right now I think we're falling really short on this front.

Do you have data that you can share, like survey results and so on?

No, no.

Okay, so it's just a feeling, because you said "super hard".

It's not a feeling; it's just even us trying to use things. If you try to build a real application with events and so on, you're going to struggle, right? Writing all the YAML, I mean...

It's okay. Did you use kn? That's what I'm talking about, because we have a solution. I'm not saying it's as good as you need, but we do have a solution, and at some point there's going to be a talk about that. I'm not trying to defend it; I'm just saying that I'd love to know the level. If it's YAML, yes, that's a problem.
But, and it's interesting, even kn, which I totally respect and understand: when you ask somebody who is totally new to start using kn, they need Kubernetes, they need Knative, they need Tekton, they need to connect to a Docker registry, and so on. There are lots of things happening that are very hard for an operator to get on board with.

Okay, that's good feedback. Anybody want to add to this? Otherwise, we have a question from the audience.

I was just going to say that I know Whitney recently came on board with eventing, and so I was hoping maybe you could talk a little bit about your experience trying to figure out what all this stuff was.

So I do a streaming lightboard show, and what I do is, from behind the lightboard, I have a guest come on and teach me a concept, and then I draw it as I understand it. It's a pretty long-format show. So I had Carlos come on and teach me about Knative serving, and I drew it out, and then I had Mauricio come on and teach me about eventing, and I drew it out. Those are some beginner resources to at least get the high-level concepts down. And for me, personally, once I have that mental model in place, the rest of it, actually making use of it, comes together much more easily.

Excellent. Okay. So mention your name.

Hello, guys. Dhruv Desai, working with JPMorgan Chase. So I have a question: the current function model is very HTTP-request-centric, very driven by the HTTP request, right? So I just wanted to understand whether in the future there is scope for job-like support, where you can have batch data transformations and administrative task support as well.

We just had a presentation about that earlier. Yeah, the Benthos one.
That's definitely one thing that you can do, particularly if you're looking to fan out your work a lot. I apologize if I misrepresent Benthos here, but it looked like Benthos was basically queuing up one request for each piece of work and then firing it off at a Knative service. The other thing is, you still have all of Kubernetes out there, so nothing says you have to use only Knative. It's totally fine to reach for a batch/v1 Job or CronJob if that's the right tool. The "K" in Knative really is for Kubernetes, and all of Kubernetes is there. So I don't think we have specific plans for a job-type interface at the moment.

Google announced Cloud Run jobs, actually, so I thought maybe we might have something along similar lines in Knative, in the open source community, as well.

I haven't seen anything. Yeah, I actually share the same feeling as Evan. I really like the focus that Knative has on this particular workload, and nothing prevents you from combining it with other workload types and other frameworks. So I'm more in the camp of: stay focused, and make what you're doing good, and make it well.

Yeah, and shameless plug: at IBM, in our Code Engine, we have a batch job in there, and I think it just uses Kubernetes. So it's basically putting it all together in one place. Okay, other questions? Yes. So we have a question. Say your name and affiliation.

Hi, I'm Anthony from Gamber in the UK. There's a lot of talk around AWS, and obviously Lambda is a huge service, used a lot. You were talking about de-risking. Do people, obviously your huge companies, have talks with AWS about Lambda and how we can try to make sure that the project continues?

Should I put somebody on the spot?
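To make Evan's point above concrete, that batch/v1 sits right next to Knative on the same cluster, a plain Kubernetes Job for a one-off data transformation looks roughly like this (the image, command and input path are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: transform-batch
spec:
  completions: 5         # run five work items to completion
  parallelism: 2         # at most two pods at a time
  backoffLimit: 3        # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never    # Job pods are replaced on failure, not restarted in place
      containers:
        - name: worker
          image: ghcr.io/example/transformer:latest   # hypothetical image
          command: ["/bin/transform", "--input", "/data/raw"]
```

A CronJob simply wraps this template with a schedule; neither requires Knative to be involved at all.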
So we do have an AWS representative here, but let's get an answer here first; I don't want to embarrass him if he doesn't want to.

Yeah, so actually, I would say that for AWS, App Runner is more the equivalent to Knative, because it's container based. Everything in Knative is container based, and we are trying to provide a Lambda-like experience for containers as well. Actually, I think for AWS there's no will, this is my feeling, no need to go to Knative, because they already have their tech, and the same is probably true for Microsoft as well. So that's my impression of why AWS is not so engaged in the community.

So I didn't quite get the question. Basically: should we talk to AWS? How do we engage them? Do you think there's a way?

So, very early on, one use case we worked on was actually function compatibility between Lambda and Knative: if you had a function that you could deploy to Lambda, could you easily deploy it on prem with Knative? That you can do, no problem. And now that Lambda supports containers straight up, I would say it's even easier, right? But overall, to me, when you look at serverless, what it really is, is a fully managed service. It's people not wanting to care about the scalability and the operation of the system; it's fully managed cloud. The interface per se, the function thing, the function interface and the way users consume it, I think is actually a little bit secondary.

Okay, so by chance, or maybe not, we actually have someone here who is a director for AWS serverless.
So let me ask him to make some concrete statements and promises about the future.

No promises. Director for solution architecture for serverless at AWS; I'm here, and I'm interested, for sure. Before AWS, I was at IBM with Carlos and Max, so I have a lot of experience here. All I can say is that we have a lot of customers on AWS that run Kubernetes and use Knative on EKS, or even on ROSA, right? ROSA is another managed OpenShift offering, and they run Knative as well. Obviously, Lambda and its ecosystem are huge. You mentioned the integration, but I think there's always room for conversation, for sure. I'm here all week; let's talk.

I saw that AWS Lambda also recently offered HTTP endpoints directly on Lambdas, without having to go through API Gateway. So yeah, now that he's here, let's talk to him; maybe we provided a little impetus for that. Yeah. Any other questions? Ah, perfect. Yes, let's do it. You don't have to introduce us.

So, I didn't introduce myself: I'm Carlos Santana, I'm on the Knative steering committee and work for IBM. One of the things I'm doing lately is user interviews with admins and operators. We already did a round on developer experience, but one of the things the working group is doing now is finding out: what are the admins doing with Knative? What are the struggles they have? Like Roland said, one piece of feedback is that a lot of folks are using EKS and trying to figure out the best solution for integration with observability. They're asking things like: how do I integrate with Datadog, or with CloudWatch? How do I do monitoring, observability, tracing? So what are your thoughts on that area, talking about the future?

I don't know for Knative specifically, but at TriggerMesh we've made a huge effort to look at all our components and expose metrics that can be scraped with Prometheus.
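A minimal sketch of the kind of Prometheus scraping just mentioned, assuming the Prometheus Operator is installed and that the components' Services carry an `app: triggermesh-component` label and a named `metrics` port (both names are assumptions for illustration, not TriggerMesh's actual labels):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: component-metrics
spec:
  selector:
    matchLabels:
      app: triggermesh-component   # hypothetical label on the components' Services
  endpoints:
    - port: metrics                # named Service port exposing /metrics
      interval: 30s
```

The same pattern applies to any Knative or eventing component that exposes a metrics endpoint; only the selector changes.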
And we do a lot of configuration on Amazon to make sure that all the logs get properly to CloudWatch and that they are ordered properly, so that you can look at your logs and so on. So yes, there is a lot of effort in observability and logging.

Before you go much further into what's coming up there, can you describe to someone new, like me, what's current: what you're currently doing well, and what you hope to improve?

Oh, I've got too many scars here. We're all doing it all badly. Again, I'm looking at Matt over there, because Matt's familiar with a lot of this stuff too. The early work on Knative attempted to sort of split the world using OpenCensus and the Stackdriver libraries, so you could either export to Prometheus through OpenCensus or you could export to Stackdriver, and that code is still a bit of a mess. If someone would like to tilt at that, I'm happy to talk with them afterwards. But yeah, observability is a big problem. One thing that I feel like I don't do well, and I suspect a lot of us who are deep in the project don't do well, is trying to use it as an end user with only end-user permissions. It's super easy to debug things when you have cluster-admin, compared with when you only have permissions in one namespace and maybe some monitoring dashboards. I think we could do better there, and I would love to see someone tackle that.

Any question from the audience? There we go. Plenty of questions if you don't. But let's go for it.

My name is Leo. A small question about the difference, and the benefit, between Knative and the KEDA project. I am using KEDA, and in my use case I have Kafka and a microservice that consumes messages from Kafka. I would like to know what the benefit of using Knative would be in this use case.

Yeah, yeah. Actually, I typically distinguish Knative and KEDA in that Knative is really all about consumption-based scaling driven by HTTP traffic; KEDA is not so good at that.
KEDA is really good for specific scalers where you have, as you mentioned, scaling on Kafka messages within a topic, or on other systems. And it's also pull versus push. Actually, I think they are not really competing against each other; they're complementary. You can combine both of them very nicely to provide a super-duper autoscaling solution for everything. But Knative and KEDA sit on different sides of the spectrum, so to say.

Oh, I've got a couple of other things. If KEDA is working for you, you should keep using it. I'm not going to tell you to go switch to something else because it's the new hotness; use the stuff that works for you. There is an interesting sort of philosophical difference as well between KEDA and Knative. Where Knative came from is: let's improve the developer experience. Oh, and by the way, autoscaling should be part of that, and having a URL that you can reach to access things should be part of that. All those decisions build on each other to reach a point, and when you start trying to take a leg away, you lose more than one piece when you try to pull one piece out. KEDA is very much an add-on to the standard Kubernetes style of things, so you get all the standard Deployment management machinery, which is great if you're comfortable with it. And if that's a lot to learn, I'd say Knative can be somewhat of an easier on-ramp, because we say: here's a service, here's an event source, plug them together, and you've got two things to think about. You don't necessarily have to think about all the warts that Deployment has built up over time.

Sebastien or Whitney, do you want to add anything?

Yeah, really on the Kafka part. I don't think I disagree with Roland and Evan on the scaling; there's no real competition.
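As a sketch of the KEDA side of this comparison (the Deployment, broker, consumer group and topic names are all hypothetical), a ScaledObject that scales a plain Deployment on Kafka consumer lag looks roughly like:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer              # hypothetical Deployment consuming from Kafka
  minReplicaCount: 0               # KEDA can also scale to zero
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka-broker:9092   # hypothetical broker address
        consumerGroup: my-group
        topic: orders
        lagThreshold: "50"         # target lag per replica
```

This pulls on queue depth, whereas a Knative Service scales on the HTTP requests actually arriving at it, which is the push-versus-pull distinction Roland draws.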
But with respect to Kafka: if you're thinking about Knative, it's more about whether you want to move away from Kafka Connect, for example, which is all about defining your sources and sinks into Kafka, and whether you want to do data transformation differently than what you're doing right now with Kafka. With Knative, you're going to be much more declarative, right? Whether that's a big benefit for you or not depends on what you're doing. But I think you have to compare more on the Kafka Connect side of things, rather than on scaling or your use of KEDA, right?

That's a good point. With KEDA, you say: hey, I want to scale this based on the depth of this queue. But there's nothing that forces that to connect back to the application code that is actually reading from that queue. So you could have a different queue name and probably get some really wild scaling results.

Anybody else? No? Oh, I'm burning to ask a question. I'm burning.

Yeah, I just wanted to add a few comments to the observability conversation earlier, which is kind of my perspective as a systems engineer, a DevOps person, as opposed to a software developer. So late last year, I think we did some work on most of our observability stack to make it easier for new users to get started. Apologies, let me back up: last year there were a lot of conversations around gaps in our monitoring stack, so I and a few other people went ahead and created some Grafana dashboards, joined all the dots together, and clarified the documentation on how, when you deploy Knative to a cluster, you get to a place where you can see all the metrics and everything.

I remember that. Yeah.
So I was wondering how the community saw that, and if they still feel that there's an observability gap. Yeah, I think we still have a gap in general communication of new features to the community. We had the community meetings where there was a section where we talked about news in the project itself. And I totally agree that we need to get better as a community at announcing the stuff that we are doing and how people can use it. Having these kinds of regular meetings where people can join and get that information helps, and of course the website could list it maybe more prominently. So yeah, I agree that there's probably still something to do. I don't think I have heard a lot of feedback either way, which as an optimist I sometimes read as meaning that nobody's deeply upset. But it could also mean that no one is using it. I don't know how we measure the difference. If you can figure it out, I would love to know. Please tell me. And there's also the community meeting where you can come and present to people. It's very open. Evan runs it. Usually there's space for a new topic. Yeah. Any other questions from the audience? Hi, I'm from Arctic Finance. Since VMware and OpenShift are both represented here: I'm still struggling with the fact that Knative works really well in the Google Cloud environment, because they have glued it very well onto their infrastructure. But you both provide managed Kubernetes services, so we don't always have control of the scalability of the underlying infrastructure. I can perfectly scale up to a thousand pods, but if I have three worker nodes, it doesn't work out. So how do you both make sure that the developer experience is not harmed by this limit, given that you can both run your platforms on prem and elsewhere?
How do you control your scaling? Even outside Knative: if you're using Google Kubernetes Engine and you don't check the cluster autoscaler box, you also don't get scaling. But yeah, that's a limit. Most people seem to understand that we're not giving you magical extra capacity, although maybe at a large scale you get a little bit of oversubscription capacity that was hard to extract otherwise. But most people seem reasonably comfortable with: I've got three machines, they've each got 12 cores, and if I want to do more than about 30-ish cores of activity, I probably need to buy another machine. Yeah, I think also that for products like OpenShift there's an additional persona that is not needed for Google Cloud Run. This is the operator who is really driving the cluster itself. And of course, the operator needs to think about how to set the limits for Knative, as she or he has to do for any other workload running on OpenShift, for example. So there's no magic silver bullet that helps you with Knative or OpenShift Serverless. This is something where we can only give recommendations on how many services per cluster or per node can run. Otherwise, as you mentioned, there are so many possibilities for how you can operate such clusters that there's no single recipe that makes it completely right for everyone. But one of the nice things with Knative is that all those tools that you use for managing deployments work with Knative, too. So if you just set a quota on each namespace, that works regardless of whether you're using Knative or not. You can use the full machinery of Kubernetes, because it's Kubernetes-based. There's essentially a dark art called capacity planning, which is pretty much what you're asking for.
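The namespace quota approach mentioned here works the same whether the pods come from a plain Deployment or from Knative revisions, because Knative workloads are ordinary pods. A minimal sketch, with hypothetical names and limits:

```yaml
# Caps total resource requests and pod count for everything in the
# namespace, including pods created by Knative's autoscaler.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota        # hypothetical name
  namespace: team-a       # hypothetical namespace
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    pods: "60"            # autoscaling stops creating pods at this ceiling
```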
Yeah, lots of experience, but I don't know if there's any blog post specifically on Knative capacity planning that I've seen. Yeah, do you want to mention something? Good. Yeah, I was just going to respond a little bit differently, which is that when Knative started, one aspect of it was: can we have a better API abstraction for an application in Kubernetes? Because before, you would need to create a Service, a Deployment, an Ingress, and so on. So they said: Knative Service, right, one object, which is then going to create everything. So the mental shift is a little bit different. You're not talking that much about serverless as much as you're looking for a PaaS, right? You're looking for a platform as a service. And a lot of people that turned to Knative, that's what they wanted. They wanted a PaaS. They wanted a new Heroku, to be able to deploy applications better and faster and more easily on Kubernetes. But then we said: hey, it's serverless, we're going to give you autoscaling and things like this. And, you know, with the AWS person in the room, I'm going to say serverless on premise is totally nuts, right? So to your point, you cannot do serverless on prem, because you're not going to be able to scale as much as in the cloud. So, bottom line: you have a PaaS with Knative, and then you have autoscaling that you can tweak, but you'll hit the limit at some point. So it's not really serverless, right? Okay, anybody else wants to add, or any other questions from the audience? Don't be shy. The Chainguard people, creators of the project, were there for most of it, I see, so they don't have any questions. So let's try to close with this question I've been trying to ask. Well, first, let me ask this.
How many people here are brand new to Knative? So you're just here because you just heard about Knative? Okay, so just a couple. How many people have committed to any of the projects in Knative? I know these guys. Yeah. Okay, so a little bit more than 50%. And then the rest, you're just observing. Okay, that's okay. One more question: how many people here represent, say, a consumer? So you're basically consuming Knative? Oh, look at that. Chainguard, love it. And then how many are essentially helping build it because you're going to sell it, right? Like you have some kind of product that's based on it. Okay, not too many. So there's a bit in the middle. All right, so my question, now that you have an understanding of who the audience is: you have a lot of experience, not just with Knative, but a lot of you also in previous communities. So, looking to the future, what are the things that you want to bring from your previous experience to Knative to make it better? And on the other end, if there's something you find Knative does better than the rest, what would you bring from Knative to the other communities? I ask this because I've been in multiple communities, and I've seen this particular community as being fantastic. I think a lot of it has to do with the people that started it. It's not because they're here that I say I like them, but I think they set the tone right for how things should be done. So tell me your perspective. And I'll start with Evan. He's been shaking his head, so he has a lot to say. It actually took a little while for me to think of something. But there have been a couple of changes recently, you know, feature tracks where there's a change proposed to roll out and there are some extra flags to enable it. And everyone's been really accommodating when I pushed back and said: hey, we should do this behavior by default.
Everyone should get this. It shouldn't be: oh, when I want to install Knative, there's the default install, and then if you really want to run it, you have to set these eight or ten extra flags. Doing the right thing by default is something I think we try to do really well. And when we miss, we try to learn from it and fix it. And I really like that in comparison with, you know, I'm working on a containerd change and it looks like maybe it's going to miss two releases because it's hard to move these files on disk forward. And yeah, I think we do better than that. Yeah, it can be very generic. So Evan is answering about release management and all the grunt work that you have to do. If you can pick up things around community, or reaching out to customers, or whatever, bring your experience to bear. Along with the five people in the room who are new to the project: I think when I was first trying to navigate the docs, I was reaching for, how does this relate back to Kubernetes? And I was especially confused by a Knative Service being called a service. I thought it was a Kubernetes Service, and I was like, where's the pod running? I don't understand. I got really hung up on that for a while. So I think it would be nice, when you explain everything, to relate it back to how the traffic moves through and where the application is actually running. I spent a long time trying to figure that out. And then I'll reiterate what I said about the community before: I think maybe having a welcome channel, or a mentor or shadowing program as happens in the Kubernetes community. That's really nice as a way to just meet someone new who's involved in your personal journey getting started with the project. Yeah. And I think, Matt, for a long time you were running something on Fridays, right?
Like, and I think Scott as well. I don't know if it's still running, but they used to have that on Fridays. But now that they're building the next billion-dollar company, you know, I get it. Sebastian? I think the biggest thing is to get as much feedback from users as possible. I think we've been very good on the supply side, the vendors and the developers of the technology, but we really need to bridge the gap with the users so that we can all improve the user experience, the developer experience, the operational experience. So listening, basically: listening to our users. Okay, that's good feedback. Actually, this is the same direction where I really try to bring in my experience: to provide an excellent user experience with the CLI, and also to try to convince people that the CLI is a good thing. Because it's not so easy to convince people who really stick to YAML files and really love that. I think I have some good arguments that it's good, and I want to elaborate on that, and also to level up on the DevOps story, where you can really bridge both worlds, so to say. So yeah. Okay, we have five minutes left, so we can take one more question. Oh, Scott, look at that. Did you have some swag to give us or something? He's the swag meister. I'm going to try to lead the witness a little bit. So, you know, you go to Kubernetes because you want to defer some choice, maybe a platform choice, because you want some sort of portability. You might choose Knative because you want to defer the choice of maybe even Kubernetes, because there's a bunch of these managed services that run Knative. So what is the next choice deferral that we could think about trying to help with? Like, what are people having to lock themselves into today that is in the purview of Knative?
OK, that's a tough one, but it's the messaging platform, is it not? Like your RabbitMQ kind of level stuff. Eventing decouples that, it's my understanding. That's what Mauricio taught me on my show recently. Any other thoughts? Do you agree? Is that true? I think Scott was also asking what's the next thing, like eventing, and I think the function stuff is pretty promising. As in: hey, yes, underneath there's HTTP, underneath there's CloudEvents, but you just see: I get a thing and it's been unwrapped for me. I do my stuff and then I hand it back to you, and you wrap it back up and send it off. And I don't even need to think about what that wrapping looks like. Yeah, what I can see as an extension, the next step for eventing, is really to settle on this idea of an eventing mesh, where you have a comprehensive overview of the whole topology that you have in eventing. Like a single CRD that lists all the triggers and everything else, so that you have a good overview without having to pick up everything on your own and list it. So maybe a read-only custom resource that just represents the topology of your eventing setup. This is something where I can see that users could have an even better experience. If I understood your question properly, I would say the API gateway. It's a big thing; you face it all the time. And right now there's lots of work in ingress and the related objects and so on that is very relevant, but it's a little bit spread out, and I think we need to clarify this. Related to Knative, there's the fact that there's no eventing ingress. And I'll give you a free answer. When I was in research, around 2000, the biggest concern that all the students were going after was this idea of dynamic composition.
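The function model Evan describes, where the platform unwraps the CloudEvent before your code sees it, can be sketched in plain Python without any SDK. This is an illustrative sketch, not Knative's actual implementation; the header prefix follows the CloudEvents HTTP binary-mode binding, and the function and event names are hypothetical:

```python
import json

def unwrap_cloudevent(headers, body):
    """Split a binary-mode CloudEvents HTTP request into (attributes, payload).

    Assumes lowercase header keys; attributes arrive as ce-* headers,
    the payload is the (JSON) request body.
    """
    attributes = {
        k[len("ce-"):]: v
        for k, v in headers.items()
        if k.lower().startswith("ce-")
    }
    payload = json.loads(body) if body else None
    return attributes, payload

def handle(attributes, payload):
    # The function author only sees already-unwrapped data; the platform
    # would wrap the return value back into a CloudEvent on the way out.
    return f"got {attributes.get('type')} with {payload}"
```

Used on a request carrying headers like `ce-id`, `ce-type`, and `ce-source`, `handle` never touches HTTP or envelope details, which is the "I don't need to think about the wrapping" point from the panel.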
So you'd have a bunch of services to, say, for instance, set up the trip to come here: reserve a hotel, and so on. And then how could you dynamically compose them? So you have to have a way to compose the services and then dynamically instantiate that composition with specific ones. And I don't think anybody's doing this right now. Right now you have to kind of code it. You can use serverless stuff where you can build the functions, but you still have to put all the glue together. Can you make it a little bit more dynamic? So that's an idea. Okay, I just remembered, related to functions, something that I've been pointing a couple of people at. If you've ever used Firebase functions, they have a very neat mechanism for specifying what event trigger should trigger your function, all in the same source file. Okay, Firebase, for people who don't know, is an acquisition that Google made, but it's its own product name. So if you just search for Firebase functions, you can see some examples of that. One final thing, which I nearly forgot: I also agree that this composition you mentioned is really one of the next things. With the integration of Serverless Workflow, for example, this could really be the combination with these kinds of high-level abstractions, bringing them together. I think this will be a big thing. The time might be right. Okay, so we only have a few seconds left, really. So I'll just give everybody a chance to say a final word, whether it's a tip you want to give people to get started, or you want to plug something from your company or your work. That's fine. We appreciate your time. So, Roland, go first. Actually, I only want to say: please join us. We are really a great community, there are still exciting things ahead, and we really love contributions. I second this. Yes, Evan, that's it. I'm just so excited to be here today. That's awesome. And thank you all.
He's been stuck in his basement on a treadmill. But my show that I keep talking about is called Enlightning; it's on Tanzu.TV. I have some stickers, so come say hi and I'll give you some stickers and hang out. Excellent. Thank you, Whitney. Thank you. Super happy to be here, part of CNCF. Yes, very happy to be out of the house after, you know, those two and a half years of COVID. And I've got a bunch of TriggerMesh t-shirts in my backpack. Thanks. Excellent. Thank you, everybody. And you have lunch waiting now. But Carlos, you want to say something first? Lunch, and where is it? Yeah, we have lunch and then we return. Let me get the time: 1:30. Oh, 13:30. It's right outside, so you have to go outside for lunch. Thank you for the panel. For those from end user companies: please see me if you're interested in doing an interview. I'm doing interviews so I can schedule you, to get feedback from the community.