Hi, everybody. Good afternoon. If you're from Paris, raise your hand. Good. Hi, Eve. Good to see a couple of Parisians. Welcome. Very excited to be kicking off this panel. We have this title, let's see when they're going to put it on screen, which is From Insanity to Ingenuity: Seven Practical Tips for Navigating the AI Storm in DevOps Evolution. Or we could say, Not Another AI Talk at KubeCon, the alternative title. Good, good, thank you. Yeah, feel free to applaud, all right? Give yourselves a round of applause. But for real, there are 13,000 people attending this conference, and we've been receiving a lot of information about AI. But I think it was actually yesterday in the keynotes that one of the speakers said that they only heard about Kubernetes for the first time a year ago. And we all know how easy it is to learn Kubernetes in a year, right? That being said, I think our community has a lot to show the AI community, which is why we have an amazing panel with us today. We have Lisa, we have Joseph, we have Monica, and we have Eddie, all coming from different backgrounds. I'll have them explain that in a minute. We really want to be practical here, right? The idea is, we're hearing there's lots of hype, there's smoke and mirrors, there's some BS, there are lots of other adjectives that we could use to describe it, which is why I wore this chili pepper shirt: we're going to be having some hot and spicy takes on AI. That being said, before we get started, I would just like our panelists to introduce themselves. Lisa is one of my friends and mentors in the CNCF. She's the reason I became an ambassador, and she put this panel together. So I want to give her a round of applause before we get started. All right? Well, Lisa, can you introduce yourself first so we can get to more of the content in the panel? Go for it. Thank you, Bart. And thank you, everybody, who kindly said yes when I only had to make one phone call all the way around.
So I was so excited, because this is a group of absolute experts. And we do a lot of things in the community. I run the really large San Francisco Bay Area user group, currently called Cloud Native Platforms. I change the name a lot. But it's a big meetup group, a CNCF user group in the Bay Area, and I've been running that for over 10 years. And so I ended up becoming an ambassador about, I don't know, four or five years ago or something. And I do a lot of meetups. And since last October, I think every meetup I've run has either been on AI or on platform engineering, because does anyone want to talk about anything else? And just listening to all these conversations about AI at the meetups really made me think, OK, we need to kind of herd some cats here, get some stuff together, have some really focused conversations. And so I called some of my favorite people and said, let's have a great discussion about AI. Because we were having hallway discussions about this, and I was like, this needs to be on a bigger stage. So I really appreciate you all coming to this. We will leave a little bit of time for questions from you at the end. But I'm Lisa. I'm from the San Francisco Bay Area. If you want to attend one of our meetups, you can hit me up on CNCF Slack or LinkedIn, and I would love to get in touch with you. So, I'm Joseph Sandoval. I work for Adobe with the Ethos team. I'm a principal product manager there, and the things that fall under my purview are things related to Kubernetes as a service, as well as traffic engineering and our API gateways. And like all of you, I'm maybe trying to grok what's really going on with this, how to get started. I'm pretty basic. So hopefully I'll give you some strategies and ideas of how I'm looking at these things with our platform, where I think the opportunities are, and hopefully there are some insights that you can take away. Hi, I'm Monica, and I'm the founder of Xata. And Xata is a database.
And our goal is to make it easier to use. We are building a solution on top of Postgres, and basically the tooling that our users need to make it easier to use, manage, and scale in production. Before, I was in the monitoring space. I was at Elastic, the company behind Elasticsearch. And I'm happy to also share any thoughts about open source, which I have quite a bit of experience in. Thank you. All right, I'm Eddie Wassef. I'm the resident cowboy here on the panel. I'm from Dallas, Texas. I'm the chief architect at Vonage, which is part of Ericsson. So, Vonage and Ericsson: a lot of people don't really know that we're still in business, and we're actually doing a lot of cutting-edge stuff. My role over at Vonage is making sure that we are seeing what's out there, making sure that our communications APIs and our platforms are up to snuff, making sure that we take advantage of the latest and greatest technologies and present them to the company, so that we can make sure that our communications and all of our services are available. Kubernetes fanboy. And I really, really thank you for inviting me up here. I'm really excited to have this talk and hopefully take some of your questions. All right, Eddie. So you mentioned some of the stuff you're working on at Vonage. When it comes to AI, what are your concerns? And what is the potential value that you see it might add? Absolutely. So with AI in general, there are a lot of concerns around where this data is coming from, whether there's any bias one way or the other. And then, when we use AI APIs, is our data going to be leaked? I think that's probably a common concern that a lot of people have: how do I use AI to enrich my product without giving away my customers' private information? So that's probably one of the biggest concerns that we have. On the other side, we're seeing AI being used in building our products, in our operations. And again, where is that bias coming from?
Is it going to be a little bit too eager? Or is the standard a little too high or too low for the products or applications that we're using? Fantastic. Joseph, in the case of Adobe, regarding concerns? I mean, there are always concerns. The great thing about working for a company like Adobe is that we have been really early adopters. We're seeing it across a lot of our products. And it's exciting to see, because I think it's a real democratization of some of the tools that have sometimes been difficult. Now, as far as where I'm at from a platform perspective, we're looking at ways to use it to optimize. But you have to really give it some thought. And I think yesterday during the keynote, Paige Bailey put out some really good guidance. And some of these things are just like, are we in a race to implement these things? Well, a lot of us have been technologists, and we've gone through a lot of these journeys, whether it's through automation tooling or even Kubernetes. And just because you can... I mean, yes, it's great to experiment, but I always advise: start with the easy things. So even in our case, we're looking at areas where, from a platform perspective, we're using it for support in our bots. Can we optimize that user experience for our developers who use our platform? And I think the other area I look at immediately: a lot of us who maybe have the persona of an SRE, or some other person in these environments where you're supporting things, you have runbooks. Just by feeding these things into an LLM, you can capture a lot of that tribal knowledge and start building on it. And that, I think, builds the trust that you're going to need. So there are concerns, but I think there are safe ways to get started. Really like that. And I like how you mentioned that point too: we've seen other challenges before; this is not the first one.
And on top of thinking about the stakeholders and the personas: Lisa and I got to know each other... Have you ever heard of the Data on Kubernetes Community? OK, anybody? Well, once upon a time, this was a significant challenge, right? You should not run stateful workloads on Kubernetes; that was sort of the status quo. And in 2020, I started leading that community and led it for three years, which is where I met Lisa. Lisa, I'd like to know, based on your experience seeing people coming together in a vendor-neutral environment, something that's often provided in contexts such as the CNCF and open source in general, when we're approaching this topic of AI, what are the things that you've seen work in terms of communities tackling challenges, in terms of strategies and ideas that could be implemented when it comes to AI? It's all over the place, which is why I put this panel together. I think Joseph just mentioned LLMs, large language models, which actually makes me tempted to change my name to Lisa Lisa Marie, so I could have the coolest acronym first name on the planet right now and be super trendy. I know you're a music fan from the 80s, so you get that reference. But yeah, no, it's an important conversation. I think what I'm noticing is it's still early days. There are a lot of people that have tried different things. When I put this talk together, I actually used zero AI to write the abstract, and I was told later, when the talk was accepted, by the track chair that it was a differentiator for me because I didn't use AI. It actually made the talk more unique. It showed the individuality. The problem with AI, when everything is crowdsourced, is that you're getting basically what the crowd produces. And so I would encourage a lot of people, and I don't know if this applies to writing lines of code, but it applies to a lot of other areas:
Maybe don't use AI for your first draft. Maybe write the first draft and use AI to enhance and to edit, because most people right now are doing it the other way around. So apply that analogy to whatever you're using AI for, and I am still encouraging people to think it through, because I've heard a few horror stories. I hope Eddie talks to us later about sovereignty, and where the data is coming from and where it lies, and how much that freaks out some of our end users here. But I think we're still early in the conversation. And I think the reason we call this seven practical tips is because we want to make sure that we get more tangible with these conversations, because there are too many open-ended things. Like, we're going to talk about open source in a little bit, and that's a super hot topic that is still very much debated. But I think as much as we can narrow this down and talk to end users and say: what problems are you having? These are the stories I like to showcase in our user group. What did you try? What broke? What did you do to fix it? And then share that knowledge with people; that's what's resonating the most right now. Fantastic. Monica, in your case, what are you and your company trying to get out of AI? What are you hoping to get out of it? Yeah, that's a very good question. And I want to start with this: in the last year I was going to different events where there were VCs, venture capitalists, and basically the biggest advice that they give to companies is that you need to introduce AI into your product. Even if you are a B2B or B2C type of company, in order to survive, you need to have a different way of introducing AI capabilities, in order to make it easier for your users, or to have a different approach in order to differentiate. So AI is a big thing, especially in early-stage companies.
And for us, actually, I think AI is a very important part of really building better products. In our case, for example, we can really build a database that's easier to use, and we can really differentiate from all the other competitors. AI, for example, can help our users by giving advice on when to introduce an index when there are slow queries. It can help you, for example, by giving you advice on how to change your configuration. It can suggest different changes that you can make to your schema, and show how those can affect your production. It can, for example, also generate the data model from a prompt. I think there are so many things that you can do with AI, and I'm really excited, because for us it really helps us move forward fast and also reach the goal of the company: building a database that is easy to use, and changing the perception users have of databases as complicated, difficult to use, difficult to scale. For us, it's an important aspect of the company. Great. And I liked how you mentioned the dichotomy of the VC pressure to mention AI at all costs, and if we walk around the solutions showcase, we'll probably see quite a few organizations that are maybe pumping that message as well. But then, like you said as well, there's the positive side: things that can be helpful to make people's jobs easier. And I'd like to know your opinion on this: is it a blessing or is it a curse? Are there tool sets that are going to enable developers and DBAs to do their jobs better? And are there any things you can mention specifically that are happening in Vonage right now? Sure, I mean, I can give you my insights on that. Whether or not I can give you details about the company, I don't know, I'll probably... There's an NDA that might disagree with you. Yes, yeah. But, you know, it's interesting.
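As a concrete sketch of the index advice Monica describes: assuming Postgres's pg_stat_statements extension as the source of slow-query stats, and a hypothetical ask_llm helper standing in for whatever model API you use, the workflow might look roughly like this (names and thresholds are illustrative, not any product's actual API):

```python
# Sketch of "suggest an index for slow queries." pg_stat_statements is real
# Postgres; ask_llm is a hypothetical stand-in for a model call.

SLOW_MS = 200  # queries slower than this on average are worth a look

def pick_slow_queries(stats, threshold_ms=SLOW_MS):
    """stats: list of (query, mean_exec_ms, calls) rows, e.g. from
    SELECT query, mean_exec_time, calls FROM pg_stat_statements."""
    return [row for row in stats if row[1] > threshold_ms]

def build_index_prompt(slow_rows):
    """Turn the slow rows into a prompt asking the model for index advice."""
    lines = ["Suggest indexes (CREATE INDEX statements) for these slow queries:"]
    for query, mean_ms, calls in slow_rows:
        lines.append(f"- {mean_ms:.0f} ms avg over {calls} calls: {query}")
    return "\n".join(lines)

stats = [
    ("SELECT * FROM orders WHERE customer_id = $1", 850.0, 12000),
    ("SELECT 1", 0.2, 500000),
]
slow = pick_slow_queries(stats)
prompt = build_index_prompt(slow)
# advice = ask_llm(prompt)  # hypothetical model call; review before applying!
```

The point of keeping the model call at the end is that everything before it is plain, auditable data wrangling, and a human still reviews the suggested CREATE INDEX before anything touches production.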
I saw, and I know most of us are on Reddit at some point, a really funny meme about developers in the future, with a sign that said "certified organic handwritten code." You know, since everything is being gen-AI'd these days, it seems like that's the direction we're going. But a lot of the things that I see as valuable for us are, like Monica mentioned, being able to give that opinion based on where you're going, making recommendations. Hey, maybe you're going down the wrong path, right? Maybe you need an index here. Maybe your data model is going to cause issues and friction in the future. What we're really focusing on in the beginning stages, to Joseph's point, is the easy things. We're looking at code gen, log analysis, anomaly detection, things that can help us build our product in a more resilient and scalable way, versus going directly into our product. Now, we're still looking at incorporating AI into the products we deliver to our customers, but that's a little bit more... you've got to make sure you have the quality, you've got to make sure you're taking care of the biases, you've got to make sure all of your I's are dotted and your T's are crossed. Primarily, we're looking at things like K8sGPT; we're looking at tools around trace and log analysis; we're looking at a lot of open source, whether it's licensing or vulnerability analysis and where these things can affect us. And so we're looking at some of these tools early on, before they really get into our IP, and taking that and feeding it back to the developers. So it's a huge world that we're seeing, and we're trying to take in the things that are going to help us without exposing us too much. You want to follow up on that?
Yeah, I think very similarly. Maybe a few of you have caught this at the conference, but there are some interesting projects, like LLMnetes, where one of the first use cases is chaos engineering, and if you're doing chaos engineering, the intent behind it kind of aligns. Other things: I think last year, and this is where I really started thinking about this from a platform perspective, when we were in Amsterdam, ChatGPT had dropped, and there were companies like Kubiya and others coming in that had already done a lot of the early work around guardrails. And I think you can also make the decision that you don't always have to go it alone. There may be cases where a buy approach, going to some company that's really been in that space, may be a good option, so that you do things in a non-destructive manner. And there's a lot of guidance as well. So one thing I didn't mention earlier is that I'm a member of the CNCF End User Technical Advisory Board, and working closely along with us is the team that, since KubeCon Chicago, has released the AI white paper as well as the AI landscape. Really, it's just a starting point. And I think a lot of us are all on this journey, and there are some things that are not fully open and available to us yet. So we're really in a nascent time, where a lot of us, from a community perspective, can lean in and help to provide some of those insights that will help us find those safe entry points. But overall, just like Eddie was mentioning, we're trying to find areas where it's done in a way that saves time. Can we move toil out of a lot of the work that we're doing? And I do see those opportunities. K8sGPT is a good example of that, where a lot of times we have so many... well, tickets come in, needing more information.
And when you can have that query Kubernetes and just do a read to find the information, you can close the gap and shorten that mean time to resolution. I think those things are really highly valuable from that perspective. I think... I mean, most of you here in the audience are engineers, and I'm also trying to ask how our jobs will look with AI in a few years. And, you know, it might be that there won't be any software engineer role in a few years; there is already an "AI software engineer" that was launched recently. So I don't know. I mean, putting the joke aside, I don't think that will happen, in my opinion. I think AI will just be a copilot for software engineers, a way to make their job easier. So I don't think AI will completely replace the software engineer job. I also think that what AI, in my opinion, will bring is that, because the tools will become more intelligent, the life of the developer will be easier, whether that's developers, platform engineers, SREs, and so on. And I think the trend, the big difference, will be that in the future you will be able to run and build a company with fewer resources. Instead of hiring 100 people to build a new product or, for example, a database company, you will need fewer resources. And we already see this nowadays: there are companies that have 10 or 20 people and are valued at one billion. And I think with AI, that will really move faster. So when I started Xata, actually, I was thinking: this is a future where you will be able to build a company with fewer resources. How can we start now, in order to be there in five years, to go after this trend? So that's kind of the initial idea.
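Stepping back to the read-only diagnostic pass Joseph describes, and which K8sGPT does in a much more complete way: the core idea can be sketched in a few lines. The pod data here is hard-coded for illustration; in a real cluster you would fetch it with the Kubernetes API or kubectl get pods -o json, and the thresholds are made up:

```python
# Rough sketch of a read-only "triage" pass over pod status, in the spirit of
# what Joseph describes with K8sGPT. In a real cluster this data would come
# from the Kubernetes API; here it is inlined for illustration.

def triage_pods(pods):
    """pods: list of dicts with name, phase, restarts. Returns human-readable
    findings that could go into a ticket or be fed to an LLM for advice."""
    findings = []
    for pod in pods:
        if pod["phase"] != "Running":
            findings.append(f'{pod["name"]}: stuck in {pod["phase"]}')
        elif pod["restarts"] > 5:
            findings.append(f'{pod["name"]}: {pod["restarts"]} restarts (crash loop?)')
    return findings

pods = [
    {"name": "api-7f9c", "phase": "Running", "restarts": 0},
    {"name": "worker-x2", "phase": "Pending", "restarts": 0},
    {"name": "cache-9zz", "phase": "Running", "restarts": 14},
]
for line in triage_pods(pods):
    print(line)
```

Nothing in this pass mutates the cluster; it is reads all the way down, which is exactly what makes it the kind of safe entry point the panel keeps coming back to.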
But yeah, putting the joke aside, I think there will still be software engineering roles, because someone needs to train those models, someone needs to use those models, and there will always be new tools that we need to have. So maybe the number of jobs will be lower than it is now, but I still think there will be a component. So that's, if you want, what I think the future will be. I absolutely agree with you that the job will still be there, but I think it'll be significantly changed, because it's about the ability of the developer to use AI to their advantage. I think it's probably going to reduce the number of developers that the world has, or needs, but it's also going to be a differentiator in how well they can do their jobs, right? If you can use the tools at your disposal to build a better React application or a Kubernetes operator or a database as a service, that is going to be a new skill that's needed for the software developer, starting last year. You know, you've heard of 10xers, right? Or the unicorn. Now it's a reality. What we called a super developer last year, they can do that with Copilot. They can do that with some of the tools out there today, and that's just going to raise the bar. So I absolutely think it's still going to be needed. I don't entirely agree with the NVIDIA CEO, who said you're never going to need to learn to code. You're going to need to learn to code, but you're also going to need to learn how to use AI to your advantage in all aspects. Now, one of the things we wanted to talk about is what AI can learn and benefit from in terms of what the CNCF can offer: open source standards, open source licensing, and governance. Lisa, do you want to start on that? I'm glad you asked this question.
A little over a year ago... I'm also a founder of a conference called KubeCrash. It's kubecrash.io, and we're about to do our fifth one in April, April 24th. It's virtual; you can all attend. Over a year ago, we did one of them on AI and ML and also zero trust, and we started talking a lot about security elements around AI, and there was a lot of fear, and that part of the conversation hadn't gone far enough yet. A year later, it's there. But what I saw at AI.dev in December, that's a Linux Foundation conference, if you're not aware of it, and they're doing one in Paris, I believe, pretty soon, so you should all go, and then there's going to be another one back in, I think, Seattle. But we were at the one in San Jose in December, and there was a lot of talk about how open AI should be, how open source AI should be. And of course, it's a Linux Foundation conference, so there was a lot of advocacy for it to be very open source. And then there were a lot of talks saying, well, wait, wait, wait: we don't have the standards in place, we don't have the structure, what about the governance? And a lot of fear. So I think we're still not even in the middle, but at the beginning of these conversations. And I just had a hallway talk with Liz Rice about this very thing, and it's like, yeah, we can try to talk about security and how we're going to get biases out of the code and things like that. But if open source is going to be a big part of it, then obviously you need licensing conversations, you need standards conversations. So I think we're just at the beginning there. But what I'm really curious about is this: as a company, as an end user, as a consumer of this technology, how much do you care how open source this all is?
I think all of us do, and I think we're touching on quite a few things here. We're at an open source conference, we all really believe in it, and a lot of us have been on this Kubernetes journey for quite a while. Being in San Francisco, and especially watching the last year, there's a whole other ecosystem that's grown up, and what's considered open there is probably not the same definition as what we would define it as. Are there concerns? There are. And then there's the ethics; some of the things I think about are how these things are trained, the foundational-model aspect of these things. You can obviously see there have been lawsuits related to some of the information that ends up in the model, and you've got to be very cognizant of that. So I think that's where open source will be very interesting. The one great thing about the EU is that, as far as privacy and information go, the EU has always led the United States, and my hope is that there'll be more questioning around that. Because, yes, there's the open source aspect of it: what is that, and what does that license look like? But then also: what data is being used? Is it copyrighted? These are the things that I sometimes think about, and we want to make sure that those things are respected. So those are the areas I think are going to be important, and where the community could play a key role. Monica, would you like to comment a bit about what we once considered open source code, an open source project, as opposed to some of the things we're hearing about in the AI/ML space? Yeah, I think it's important to acknowledge that what an open source model means is not the same as what open source code means.
If you have, for example, a project that is open source, what this means is that you can go read the code, understand what the project is doing, and even fork the code and do your own thing, because you can learn from the previous project. With an open source model, it's different. So what does an open source model mean? Basically, you get the weights. To give an example of what this means: Grok from xAI is basically hundreds of gigabytes of floating-point numbers. So it's not very similar, right? With open source code you can read and understand; here you just get floating-point numbers, and basically this is something that you can play with and see how it goes for your own needs, but you have no idea how it was trained or what kind of data was used. So if you want an analogy: to me, an open source model is similar to a project that is not open source, where you just download the binary and can try it out, but you have no idea how the project was built. There have been lots of discussions in the last year about open source and AI, and I think maybe that will change. Among the top AI leaders, there are some companies that believe in open source; some are open source, some are closed source. I'm personally kind of curious to see how the open source aspect of the models will evolve. Yeah, you know, it's really interesting. Monica, you were saying something in a previous comment about VCs pushing everyone to have some AI flavor in their products, and I think that's causing companies to just rush and do it. And I think open source will be very helpful in letting companies experiment, especially when their legal or compliance team says, hey, you're not shipping this data out to an API. They can now put it on their own systems, and it may not perform, it may not be the best, but it's something that lets them experiment and see how they can actually use AI in their systems.
And I also think there's a bit of confusion; this is the big data or the blockchain of today. AI has been around for a while. There's narrow AI, there's general AI, now there's generative AI; these are all different flavors of AI. And in the data world, for example, we've had narrow AI for a while, where it would look at the behavior and recommend indexes, right? That was a very specific AI that existed for that product. But everybody is really intrigued by generative, because they saw ChatGPT from OpenAI. But when you think about the value that Copilot adds, especially the one on Bing, let's say, the free one: they know who you are, they know your search history, and they've got the internet to add to it. That's adding more value. And companies are like, well, I've got to do that too. I've got data, I've got to figure out a way to get it out there. A lot of times that causes data leakage, that causes issues, and I think the ethical part that you mentioned, Joseph, is very important. And I hope that more and more of the companies that push for open sourcing are going to force the standards, force looking at these models, figuring out where the biases are, figuring out a way to identify biases. And we've seen that evolution, from the beginning, when they just generated something, to generating it while giving you the sources used in the generation and also the reasoning why. That was something the community pushed for, and I think some regulation had something to do with it. But ultimately, it's up to everybody here to push your companies, push your regulators, to say: look, we see the value, and we need to show where the problems can be, through open source contribution, through open source usage, through blogging and talking about it like we are, to really bring that to fruition. Because it's a tremendously powerful tool. I can't live without it anymore, to be honest.
I get these long emails, I send them to ChatGPT and say, just summarize this please, right? And it does, right? I can't live without it anymore. So that's where I see the open source aspect and the ethics around it: it's really just starting, and we really need to be the driver behind it. Great. Now we're getting towards the end, and we're going to have a bit of time for questions. So I want to hear from you first, Joseph: what's going to be your next AI step? What's the next step you're going to be taking from a Kubernetes perspective, as someone who's very active in the CNCF and the work that you're doing? What's going to be happening for you in the next few months? I mean, I think like a lot of us, it's still a time of experimentation, but safe experimentation. I'm always looking, as I was mentioning earlier, for ways to really improve the user experience. Even last year, when I really started to grok what was happening and what you could potentially do with generative AI, I was like, this could potentially be... and this is a product guy talking, so I'm thinking: this could be a user interface I can use to take away some of the friction around a lot of the work that we do with our assets. Because we're a very GitOps-type platform: could we use this as a way to help ease some of the pain of the many PRs we're cutting, have some validation, do a lot of these things? So we looked at a lot of those things initially, but then I think we took a step back and said, let's look at the areas where we could really help our users with support. So I see continual experimentation there. Now, long term, I definitely see this evolving. I see a lot of opportunities in the observability space to really find those needles in the haystack.
And to me, that's the other area where I think it's going to be a real game changer: helping improve some of those experiences with things like OpenTelemetry, and all these areas where you have so many inputs. How can I find these things that are very difficult for someone on the team who is managing so much infrastructure as we scale? So that's me, from a more pragmatic perspective. Monica, what about you? What's next? I mean, I want to say the same thing. We are in an experimentation period, so we're trying to figure out how we can leverage AI more in order to build an easier database for people to use. So... Or are you going to become a VC? Yeah. Yeah. Yeah. I think it's interesting. I don't know, I mentioned this and now you keep bringing it up, but I think for technical companies, B2B companies building something technical, it's easier to adopt AI. The thing about the advice that VCs give is that they usually all have the same advice for their companies, regardless of the type of company. But I saw B2C-type companies struggling, asking other technical founders how they should integrate AI into the product or application they're exposing to their users. They are not very familiar with AI. So I think that's interesting. All right, for me, I'm going to go a little bit out there. I want to build the computer from the Star Trek Enterprise, using generative AI to create API calls. And I know you can program it, but I want it to learn to read our documentation, be able to generate an API call, and do it. Because how cool is it to ask a computer to do something, say, go set whatever parameter on whatever system, and it knows how to do it? Using gen AI, I think that'll be a really cool feature, where you can just talk to your computer like you do Alexa, with any API, and it figures out how to do it. Let us know.
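Eddie's "Star Trek computer" pattern, natural language in, validated API call out, can be sketched roughly as follows. Everything here is invented for illustration: the endpoint spec, the parameter names, and call_llm, which stands in for whatever model drafts the call. The key design choice is validating the model's output against an allow-list before anything executes:

```python
# Sketch of "talk to any API": the model drafts a structured call from the
# docs plus the user's request, and we validate it against a spec before
# dispatching. The spec and endpoint names are hypothetical.
import json

API_SPEC = {
    "set_parameter": {"system", "name", "value"},  # allowed params per endpoint
    "get_status": {"system"},
}

def build_prompt(docs, request):
    """Combine API docs and the user's request into a prompt for the model."""
    return (f"API documentation:\n{docs}\n\n"
            f"User request: {request}\n"
            'Reply with JSON: {"endpoint": ..., "params": {...}}')

def validate_call(raw_json):
    """Reject anything the spec doesn't allow; never execute unchecked output."""
    call = json.loads(raw_json)
    endpoint = call["endpoint"]
    if endpoint not in API_SPEC:
        raise ValueError(f"unknown endpoint: {endpoint}")
    extra = set(call["params"]) - API_SPEC[endpoint]
    if extra:
        raise ValueError(f"unexpected params: {extra}")
    return call

# raw = call_llm(build_prompt(docs, "set the retry limit on billing to 5"))
raw = '{"endpoint": "set_parameter", "params": {"system": "billing", "name": "retry_limit", "value": 5}}'
call = validate_call(raw)  # safe to dispatch only after this check
```

Whatever the model hallucinates, only calls that fit the declared spec make it through, which is what keeps "talk to your computer" from becoming "let your computer do anything."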
That could become another talk. That'd be great. Absolutely, an excuse to get together. Lisa, you organized this panel, so I think it's fair you have the final word. We are going to take time for questions, of course. We've got the five-minute sign, so we saw that. Thank you.

Yeah, I'm just excited to see what we build. I love this community because I love the way we approach challenges, right? I mean, you and I both lived through the era of "you can't run anything stateful on Kubernetes," and I just spent the last six years at two different startups running lots of state on Kubernetes: big applications, cloud native storage, a database company I was at for the last three and a half years. So I love the way we solve problems, and really it's the user community that pushes us to solve these problems. You know, I've dragged lots of companies, DreamWorks, HSBC, on stage to talk about what they built, while at the same time someone next door was saying, you can't do that, don't try that, you'd be silly. So I love this community. I love to see how we're going to tackle this problem, and I think we really just need to keep in mind: how are people trying to use this? Let's talk to the people who are running stuff to see what is really breaking; those are the folks to listen to. I mean, you and I have had so many conversations about government standards, especially in security, and how those are going to dictate how you build things, and that's not always something we think about. We're so narrowly focused when we're on a certain project or in a certain part of this ecosystem that we don't always spend enough time talking to our end users, who are thinking about every single part of this. So just keep those conversations happening. I'll keep showcasing them, and I'm really excited to see what we build.
Yeah, I think a lot of this comes down to seeing the different opportunities where people can get involved, where their questions can be addressed and their concerns taken care of, and treating the subject the same way you would Kubernetes or whatever open source projects you're working on, rather than falling victim to the hype and some of the more negative things we mentioned previously. That being said, I think we can open up the time for questions now, if anyone has one. There are microphones in the front. So, was that Romero? Yeah, just right there.

My question's really quick, about licensing. You talked about Copilot, large language models where you ask for a piece of code and it gives it to you. A lot of times that data is going to come from sources that have many different licenses behind them. From a compliance, legal, or risk management standpoint, for large enterprises deploying this, what's their risk exposure, and how does that get managed in an open source community that's trying to advocate for the use of this software?

Not all at once. So that's actually a really interesting question that's come up a lot, especially around Copilot. And it's one of those things where, well, developers can already go to Stack Overflow or GitHub and copy code; now they're just able to do it a lot more efficiently, a lot quicker. Is that okay? And you know, it's a mixed bag, right? It's still very early. I think Microsoft and GitHub are doing a great job of at least letting us know where the data is being processed and where it's coming from. That helps a little bit, but it's a new frontier, right? And you have to make the decision, for the industry and the company you're in, whether that's a risk you're willing to take on, whether you think it's going to be a breach of any kind of copyright or not.
But for the most part, it's very early. We're getting a lot of tools from the vendors, but at the end of the day, just like anything else, you've got to make the decision for yourself and for your company.

Thank you for the panel, it's super interesting. On the topic of hype, I'd like to hear your thoughts on the role of the CNCF in AI. I live in San Francisco, and there are two very different ecosystems. Do you think the CNCF should care about AI, or is it just riding the hype?

You're in San Francisco; you see the same thing I see. I don't think people realize how huge it is in SF: the GenAI Collective, Lane Space, there are a lot of events happening, a whole world happening. I think we do need to focus on it, and yesterday was a great example. Look how far we've come from Chicago: what we were talking about on that keynote stage about Kubernetes versus what we had yesterday. So the focus is there, but there's more work to be done. That community is huge, and it's quite different from ours. So I think there's alignment, and I'm excited to see how we keep working together.

I'd like that as well. Absolutely the CNCF should get involved, right? Because look at what it did with Kubernetes: it took a project and beat all the incumbents because of the community behind it. It's an unstoppable force. And if we really want to make sure the ethics are taken care of and the algorithms are open sourced, it has to be driven by the community. So I encourage everybody to get involved, start talking about it, learning about it, and let's get the CNCF on it.

Last thing: if anybody can take a picture of our names with us and send it to my LinkedIn, I'd love it. I've never been on a stage like this, and it's really exciting. Thank you very much.