Hello, welcome to SuperCloud 6. I'm the panel host for the AI Founders' Day. I'm a serial entrepreneur and have been an AI executive for a long time. I'm so happy to invite two founders in Silicon Valley to discuss a very interesting topic: AI and developers, right? Is AI going to replace developers? That sort of thing. So first, why don't you introduce yourself a little bit? And tell us not just what you do, but also the journey that got you here. Eli?

Yeah, for sure. My name is Eli Schleifer. I've been in tech 20-plus years. I started out at Microsoft. I had a B2C startup out in the Boston area that I ended up selling to YouTube. I ran that organization inside YouTube for three more years. After that, I went to work on self-driving cars at Uber. I worked there for three years. And it was really at Uber, building self-driving cars, that this started. We had about 600 engineers working inside a giant monorepo, and the day-to-day process of being an engineer in that organization was really hard. We were basically building the underlying operating system of that car, but as staff engineers, my co-founders and I saw it was very hard to actually make any progress inside this project. The developer experience, the tooling that was in place, was really lacking. And coming from Google, where I'd say it's amazingly easy to build software inside google3 (what they've built there is just tremendous efficiency for your engineers), we were really lacking that. And I saw this massive opportunity to say, what if we could bring Google-quality developer tools to the world? That's why we left Uber and started Trunk; we're three years in now. We really focus on building developer experience tools for large teams.

Yeah, for a large team, that's the key. I really like the company's name, Trunk. Because I remember I grew up at VMware, which grew from a very small team to a very large engineering team. And I remember the day that, wow, once we grew the company to a certain level, the trunk was always broken. The main trunk was always broken. So you need a lot of tool chains. You mentioned to me that when you were at Microsoft, it was a little bit of a different world, but Google really advanced the tool chain. Maybe you can share a little bit about that, go a little bit deeper. What are the things that are complex, interesting?

For sure, yeah. So google3, inside their monorepo, they have just amazing automation, right? Really from the point of: I'm writing code, it's going to be tested correctly, it's going to kick off the right set of tests. Tests are going to be flaky; they're going to be identified. You're going to have dashboards for all this stuff. The backend systems and CI are just going to always be running correctly. There are automatic code quality tools in place. And then, of course, a merge queue in place there to make sure that everything merges cleanly, right? All of these things are basic building blocks that we've seen, time and time again, other smaller companies replicating. Engineers leave Google, they go and replicate that. We've seen Shopify and Spotify and just about any company you could think of build different in-house versions of these tools. But these tools really should be commercial grade. They need to be enterprise quality. And that's what we're trying to do. We're trying to basically take those ideas and say, let's make sure we can find all your flaky tests.
Let's make sure we can show you where CI is flaky and burning cycles, and show that to your engineers. So really, empower those code quality, code excellence teams inside large companies to leverage our software and build on top of it. They can then focus on the problems that affect them most and not things that are generalizable to all software engineering.

Cool. We'll get to more about what Trunk does in a little bit. First, Massi, tell us about the journey that got you to starting Metabob.

Yeah, happy to do so. My name is Massimiliano Genta; I just go by Massi, it's a bit simpler. And yeah, we started Metabob about three years ago. That's when the idea started. My background: I have about 15 years of experience in AI development, and a successful company before in the space. However, everything started through open source. I'm very involved in the open source community, part of many communities. A few years ago, my co-founder Avinash and I were part of a community we started called Clice, where initially our goal was to provide a framework to optimize how our open source project ran. Following that, we also wanted to help contributors monetize their work. In order to do so, we developed a technique to look at PRs, look at each commit, and try to identify how likely those commits were to change over time, to need revisions over time, with the goal of identifying the ROI of each commit, what its monetary value is, specifically for big projects. And through that project, which was mainly just for fun, or to help the community, we developed this pipeline. Then we realized it had great potential, because we could predict over time how likely a specific code neighborhood was to change, based on its semantic markers, its data flow, its structural integrity. So there were many inputs that we looked at. At the same time, I was working as an EIR, Entrepreneur in Residence, for NSE, and we were working at the lab at Princeton. So we were also looking to optimize the code review process. As we all know as developers, debugging itself is one of the biggest challenges, and what debugging means is up to interpretation. But for us, it was really about identifying issues: it's really the design of the program, not just writing the code itself. And so we thought the technique we developed through the open source work could be used perfectly for this specific task. That's really how Metabob started. We're still very close with the open source community, as we offer our tool to them for free; we have IDE extensions. But yeah, everything started there.

So from the evolution point of view, early on you were detecting the likelihood that code would need revision. Now Metabob detects the likelihood that a PR has a bug.

Yeah, well, we still do the same. We developed the pipeline, and obviously it's greatly improved over time, as there are many components to it: the way we classify data and the way we train it. We look at the code, but also things outside the code, like the PR itself. We look at code changes, and our main technique helps us predict where problems will occur in specific areas of code, right? And then we use that context to feed into generative AI, into LLMs that we developed, to create explanations and recommendations on how to resolve the issues we detect.
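To make that split concrete, here is a minimal, hypothetical sketch of the general shape Massi describes; it is not Metabob's actual pipeline. It scores code regions by how likely they are to need revision, using toy proxy features and hand-picked weights where the real signal would come from a trained model such as the graph neural network he mentions, and then hands the riskiest regions to an LLM as context for an explanation and a suggested fix. The Region fields, the revision_risk weights, and the source_of and call_llm helpers are all stand-ins.

    from dataclasses import dataclass

    @dataclass
    class Region:
        file: str
        start_line: int
        end_line: int
        past_revisions: int   # how often this code neighborhood changed before
        fan_in: int           # how many other functions depend on it
        complexity: int       # e.g. cyclomatic complexity

    def revision_risk(r: Region) -> float:
        # Toy linear score; the real signal would come from a trained model
        # over code structure and data flow, not fixed weights.
        return 0.5 * r.past_revisions + 0.3 * r.fan_in + 0.2 * r.complexity

    def explain_riskiest(regions, source_of, call_llm, top_k=3):
        # Rank regions by predicted revision risk, then ask an LLM (via the
        # caller-supplied call_llm) to explain and suggest a fix for the top ones.
        ranked = sorted(regions, key=revision_risk, reverse=True)[:top_k]
        reports = []
        for r in ranked:
            prompt = (
                f"This code ({r.file}:{r.start_line}-{r.end_line}) is predicted "
                "to need revision soon. Explain the likely problem and suggest a fix:\n\n"
                + source_of(r)
            )
            reports.append((r, call_llm(prompt)))
        return reports

The point of the split is the one Massi makes: the proprietary work is deciding where to look, and the generative model is only asked to explain and recommend once that context is in hand.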
So over time it has evolved, but the main concept is still the same. We still follow this technique, which is based on several AI methodologies. We use something called a graph neural network to map the data; we use vector databases. But really, what makes us unique is the pipeline we developed.

Right, so let me go one step deeper into the products you guys are offering. You mentioned that you are bringing a Google-quality tool chain, right? But that could mean so many different things. Can you give us, for instance, one or two very specific problems you're solving for developers?

Yeah, so we have an automated code quality tool that is basically gonna run all the right static analysis tools, formatters, and linters on any codebase.

So figuring out the bugs, potentially?

This is more to maintain code quality and conformity, stylistically and idiomatically correct code, right? Your organization says all TypeScript functions have to be written a certain way. We're gonna run ESLint for you and make sure that those rules are applied correctly; we're gonna run ESLint for your TypeScript and golangci-lint for your Golang.

So less about buffer overflow detection, that sort of thing.

Right, those things are also built into static analysis tools, so it really depends on the language. Native languages have a lot more tooling around those kinds of problems. And then again, for larger teams, we have a merge queue solution that's basically gonna make sure that you don't end up with a broken main build, a broken trunk branch. And then we're bringing to market now a flaky test solution. As engineers write code, or generative AI writes code and writes tests, those tests are inherently gonna start going flaky. It's basically the nature of entropy, you might say. And those flaky tests tend to really crush developer productivity. So we're trying to look at that as a holistic problem: how do we give engineers back their time? Because right now the solution is hit the retry button, or automate retries, and both of those are pretty lousy solutions for keeping engineers moving quickly. We're trying to basically make engineering efficient and fun.

So you'd want to detect a flaky test and then yank it out of the system?

Yeah, so even better than yanking it out: if you yank out the test, I'm losing signal, right? And engineers are very loath to just comment out the test. Well, that test was written for a reason. We're actually gonna run the test, and we're gonna quarantine the results. So if it fails, we're gonna say, hey, this failed, but you still get to run it. Because as soon as you turn it off, you're creating greater trouble for yourself.

Got it, got it. So along this line, what is the AI angle in your company or in your company's product?

Yeah, so the AI angle here on flaky tests is really to look at what problems are coming out of these test reports. Can I generalize them? Can I summarize them into something that's easier to consume as an engineer, right?

Get more explainability out of it.

Exactly, because these tools are gonna generate long logs and long stack traces, and we're trying to condense those into something that's uniform. And then looking at it holistically: is this problem similar to these other problems, right? And then say, actually, this is a problem across 30 tests in the last 30 minutes. This is not a problem of this test going flaky; actually, CI is on fire, as we like to say.
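As an illustration of that last idea, here is a minimal sketch with assumed inputs and a made-up threshold, not Trunk's implementation: normalize failure output into a signature and count how many distinct tests share that signature inside a time window. One test repeating a signature looks like flakiness; many different tests sharing one suggests the infrastructure itself is failing.

    import re
    from collections import defaultdict
    from datetime import datetime, timedelta, timezone

    def signature(stack_trace: str) -> str:
        # Collapse volatile details (hex addresses, numbers) so that similar
        # failures map to the same key, then keep the last non-empty line.
        s = re.sub(r"0x[0-9a-fA-F]+", "<addr>", stack_trace)
        s = re.sub(r"\d+", "<n>", s)
        lines = [ln for ln in s.splitlines() if ln.strip()]
        return lines[-1] if lines else s

    def systemic_failures(failures, window=timedelta(minutes=30), threshold=30):
        # failures: iterable of (timestamp, test_id, stack_trace) tuples,
        # with UTC-aware timestamps.
        cutoff = datetime.now(timezone.utc) - window
        groups = defaultdict(set)
        for ts, test_id, trace in failures:
            if ts >= cutoff:
                groups[signature(trace)].add(test_id)
        # One error signature spanning many distinct tests suggests the problem
        # is CI or a dependency (say, new rate limiting), not individual flakiness.
        return {sig: tests for sig, tests in groups.items() if len(tests) >= threshold}

Whatever the actual heuristic, the output is the kind of signal described next: a problem to escalate rather than a test to retry.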
And we need to address this at a higher level, right? Call in the SREs, or call in the DevOps people, to fix this problem. Maybe some external service used to reply at a certain rate, now they have rate limiting, and now all of your tests are gonna be broken.

Do you see a lot more of the tests, by percentage or by volume, being written by generative AI these days, or in the foreseeable future?

I would say that as code grows, we'll have more tests. Everything, especially complicated tests, integration tests, to make sure that all these systems work together, which is really what large, real companies are building: highly integrated systems. When you're doing that, your tests, because they're talking to other systems, are gonna be flaky, and your CI pipeline is gonna be subject to that kind of variability.

But you're saying that you're anticipating that; it's not quite mainstream just yet?

Oh no, it's a huge problem already.

For the core code that engineers write, yes. No, I was talking about gen AI-generated tests.

I would say anything that is generated, tests or code, is gonna be subject to flakiness. Because flakiness can come even just from the interaction of tests, which an AI bot would be very unlikely to understand. If I run this suite in a certain order, there's gonna be a presumption that this file will exist or not exist, and therefore this test will be flaky depending on what random order the test runner picks that day. A lot of these things are not gonna be protected against by a generative system up front. They're gonna keep writing bad tests.

Right, right. So Massi, you were doing AI before gen AI, before AI was red-hot, right? Any perspective to share? And also, how are you thinking about using AI, in particular gen AI, in your product?

Yeah, well, when it comes to gen AI per se, we use gen AI as everybody does, which is for creating content, right? In our core, in Metabob itself, obviously we use AI in the entire pipeline for many reasons, many different things. But gen AI per se is just used to create explanations of what the issues are, after we identify the region of the problem, as well as to provide recommendations to resolve the problems we identify.

Sounds like a very similar usage between you and Eli, right?

Yeah, there are definitely some similarities in some ways, but again, we use AI, not gen AI, in many parts of the product: in the classification technique, to identify code changes, which specific areas of the code are affected and the likelihood that those will change over time, and to map different parts of the code. Our core IP, I would say, is really in identifying the code region where problems will occur. And then what we do is feed that context into the gen AI side of it. So the gen AI for us is really used to create content; on our end, that's code improvements, refactoring improvements, or just explaining where the bugs are and what the issues we identify are. In terms of my take, well, do you want me to talk more about a specific angle? What do you want me to focus on?

Well, one you already discussed, right? Using gen AI for explainability. Do you see gen AI playing roles in that area for your company, or for developers in general? What's your view?

Well, gen AI is great when it comes to, again, creating content, right?
It's learning from inputs, following inputs, and generating output from that. So anything that falls into that, obviously, in our space, is one of the best use cases for gen AI right now; everything that is content-related is. When it comes to coding, you see tools like Copilot obviously being used a great deal, and we leverage LLMs as well for our products. So I definitely see great value in that, and it keeps improving. That said, with plain LLMs alone, I don't believe we'll necessarily be able to do everything just by going the direction things are currently going, which is building larger and larger models. And just by the way LLMs function, they're heavily sensitive to how data is presented to them, how the input is presented to them. That's why I believe adding different techniques to LLMs is gonna be the way to go. That's what we are trying to do, and I'm sure most entrepreneurs in the developer tool space, and in the AI space itself, are figuring that out and trying to add more methodologies on top to solve the problems that LLMs have.

So let me pose a question to the two of you. Is AI, or gen AI, going to replace developers?

You want to go first, or should I? As a developer for many years, I think it's highly unlikely it's gonna replace developers. Highly unlikely. When I think about it historically: let's go all the way back to the 60s, the Apollo era. We had computers that were literally just humans doing calculations, right? All of a sudden computers get better, computer programming languages become more powerful. Now people are writing in assembly, eventually they're writing in C, super low-level languages, and the amount of things you can do grows. Did we have like 50 million engineers at the time? No. And as we got greater and greater languages, was the number all of a sudden reduced? No, we had more and more software engineers, right? Twenty years ago, there were a million software engineers in this country. Now there are four and a half million engineers, right? Still a small percentage of the workforce. The thing we always have to remember about what software engineers do is that it's not writing code; software engineers build product, right? They build things that other people care about. And writing the algorithm is not the hardest part of what engineers do. The hardest part of being a software engineer is not figuring out how to do a for loop or even graph traversal. These are well-understood problems, and Copilot is great at stuff like that. Write me a function that does XYZ, no problem. But: build me a system that interacts and tracks data correctly, stores it in a database in a reliable way, doesn't store so much that the server's gonna fall over, scale that thing, and make sure that you're following GDPR compliance. These are the things that engineers think about. That's a product, that's a solution. This is what engineers actually do in their jobs, right? They're working and thinking about: what is the problem that I'm trying to deal with? Build the mental model and then put that into code. Telling the LLM to do that, I think, will be more the job of the engineer. Like, let me actually make sure that you're gonna do the thing right. I'm gonna be conducting at a higher level. Which makes sense; we keep moving higher and higher up the stack of what the responsibility of the engineer is.
It's gonna be more about building product and shipping product versus thinking about the lines of code.

So you're basically saying that coding is just a smaller portion of the overall developer's job, right? Of course, depending on your seniority, how junior you are, it may be different, but it's still a small portion. For that small portion, some of it could be replaced by Copilot or whatnot, but the rest is not going to be touched much, right? That's kind of what you're saying. From that point of view, do you see a reduction in developer jobs? You mentioned the 4.5 million. Are we going to go back to 1 million, or are we going to keep increasing, in your view?

I think the number of software engineers working on the hardest problems will grow. The software engineers that might right now be fixing up Shopify websites, there will be fewer of those, right? And you see that even with no-code tooling, basically enabling more things to be solved by someone who doesn't have coding experience. But at the end of the day, information technology, robotics, all of these things require people to think hard as engineers. I think we'll have more software engineers. I'm just hoping for higher-quality, higher-functioning software engineering problems.

Very nice.

Yeah, well, I have a very similar take, so I don't want to just repeat what he said, but I do believe, at the end of the day, it's not going to replace developers. You still need inputs to give the AI first. So for what the software developer does, really, AI can be a new language, we can say, right? The function and the task will change over time, for sure. It will speed up and automate some processes, but it won't replace developers, because again, as AI is growing right now, you need the inputs. You always need somebody to provide the inputs and to improve the model, until perhaps we reach AGI, but that might happen a long, long time from now.

We can talk about AGI here, right? When do you think AGI will happen?

Well, not anytime soon, in my opinion.

Why is that?

Well, because in order to achieve AGI, the model cannot just be trained on external data like it is right now. The way LLMs are growing right now is that you just make larger and larger models, but that by itself is not gonna make an AGI, right? It will need many more methodologies implemented on top in order to reach that level. And really, the AI needs to understand the relationship between input and output, which right now they cannot do. And just by the way LLMs are structured, it's not something they will do anytime soon. Think about how the brain works, right? It uses markers and transmitters all around to take in information and to learn from it, to learn from the input and why a specific output occurs, which, again, right now LLMs don't and cannot do. So I'm not saying it's impossible to achieve; I'm sure it will happen at some point. Is it something I'm afraid of right now? No, I am not.

Okay, Eli?

I don't think there's anything magical about human consciousness and intelligence. I think at the end of the day, we're extremely advanced machines. So to say that we would never be able to create a machine that could have intelligence would be, I think, silly or presumptuous.
With enough computing power, you should be able to achieve it.

So when do you think it will get to AGI, in your view?

I couldn't make a prediction on that. I don't work enough in that space to give a conservative opinion.

But do you think that's in the near future, or a little bit further out but it will get there, or, I don't know, it's too far? What's your...

I would say, on a human scale, the very near future, because humans have been around for tens of thousands of years. And if you look at where we were 50 years ago with our understanding of genetics and computing power and all the things that have come since...

So the trajectory you're seeing is kind of on its way to getting there soon.

Yeah, whether it's 20 years, 50 years, 100 years, it is within a reasonable timeframe to achieve such a thing. And then, you know, the sky's the limit, really.

So from that point of view, what's the future of AI, other than developer tools and this? What else are you anticipating? Is there something where you'd say, hey, I want to see that? What is that?

I mean, what I really want to see, what I look forward to, is the reimagining of our cities, right? Right now I live in San Francisco. There are still cars parked everywhere on the street, but I took a Waymo home from dinner last night, and that car disappeared along the way, right? And I think robotics and autonomy, which really are driven by artificial intelligence, will really replace and reimagine the way that our cities look and feel. I think even quadcopters that can move you from San Francisco down to Palo Alto: it means I can jump over the freeway, but it also means I could jump up north and get to nicer outdoor space really quickly without sitting on long roads, right? So I think the way our world looks today is gonna be totally different in 20, 30 years, and I can't wait. I'm very excited to think about what the cities will feel like, what it'll feel like to move around. And AI, I think, will be a tool along the way. So much of what we're talking about here is how we're gonna use these tools to change and make life better, to create more abundance for everybody.

How soon do you think that world will be here? Like, the city will be totally different?

Oh, having worked in self-driving cars, and now living obviously in the test bed, the Petri dish, for self-driving, I think in 10 years these self-driving cars will be everywhere. Everywhere. And it's gonna explode onto the scene. It's gonna feel like a trickle, and people will say this will never happen, and then obviously it'll be like, obviously, this is what it is. It'll be like when the cell phone came out: few people had one, and now it's in everyone's hands.

Cool. I'm looking forward to that too, honestly. I mean, I'm Italian.

Both of you live in the city.

Well, yeah, I am Italian, so a lot of my Italian fellows will critique me for saying that, because sports cars are a big thing there. However, I'm very much looking forward to just being able to work and travel nonstop without any human driver. That's definitely one of the most exciting near-term evolutions that I'm looking forward to.
Aside from that, obviously, when it comes to gen AI, fields like medicine and legal are something we don't see as much right now, simply because they're riskier fields, I would say. The outputs are key there; if AI makes a mistake, it's a huge liability, right? But that's something that most definitely is going to happen in the next couple of years as well, because there is a bureaucracy side of it, and there is also just a tuning part of it, and I've seen firsthand how great the progress we're making is. So in the short term, that's really what I'm very excited about. Right now, most people using generative AI are in the marketing field, generating blogs or any type of content on that side; it's very good at that and it's very easy. Or coding as well: as we discussed, developer tools are a great application for it. Really, anything that involves content can be done by generative AI very well. So again, the spaces that I'm very excited about are definitely the medical field and the legal field. That's something I see in the very short and near future.

Yeah, I mean, that's an interesting point on the marketing side. I think generative AI generates so much stuff that I never, ever want to consume or read, because it's so clearly generated by a robot that has no taste. And I think that's so important. My wife is a writer, and when I think about the impact of art and culture, generative AI generates just whatever someone else already did, regurgitated in another form. What I care about is the story, and whether it's actually something that's gonna resonate. I think about when digital effects were all the rage in Hollywood, and it was like, oh, this movie is great because you can see an alien flying through space. But now we're like, I don't care about that. What's the story? Is it something resonant that I actually care about, that has heart? And it doesn't really matter how great your graphics are in a video game; if there's no good gameplay, you're not gonna play it, right? So this is an interesting point where I think generative AI will be awesome in medical, because at the end of the day, yeah, there's no heart there. I just want the medicine to work right.

More black and white.

I fully agree with that. And that's to my previous point as well: when it comes to everything that requires art or creativity, the greatness of AI will only come when it's able to learn not just from external data, but also from its own inputs and outputs, and the relationship between the two. Until that happens, we will always face the same problem, which is just an AI based on the external data that has been fed into it.

I mean, some of the gap, the not-enough-heart, the looking like it came from a bot, has to do with the model still not being good enough, right? But let's just imagine a world where GPT-5, maybe even 6, is released. The marginal cost for AI, for gen AI, is approaching zero in the next few years. We know that sort of thing will happen. So what will the world look like? Let's come back to the developer world for a second. We talked about things outside the developer; going back to the developer, what does the world look like at that point? As a developer, what should I expect?
Do I do mostly similar things, but with, of course, writing code being automated, or even the way I interact with my product management being very different? How do you think about it?

I think your IC engineer is gonna be spending less time typing out algorithms into VS Code. And I think we'll all be inundated with a whole lot more generated content, which then, hopefully, GPT-6 bots will reject as spam even better. At a certain point, all the stuff that is generated will become noise to us, and what we'll actually try to revert to is: is this actually an email that was sent by you to me, and are we actually in a real conversation? And if not, I'll be like, I don't want this, right? It's basically just more noise in my inbox, because if the generation of email comes at basically a price of zero, which it does, then the email becomes, you know, kind of...

Useless.

Useless. And I think we're gonna see more and more of that. There'll be an information war for your eyeballs: how do you protect your eyeballs and your inbox from things that are just being generated at basically zero cost? So I fear for that future.

Yeah, I will say it's always hard to tell how the future will look in this case, because, well, obviously, if AI becomes more efficient than humans, why do we need humans? I'm joking, obviously. But I think, when it comes to software developers, as we said, I don't see any near future where they won't be needed, because something humans always want to do is improve things. So there's always gonna be maintenance. As we've seen, managing and running a model is definitely time consuming. It's already becoming a commodity when it comes to the cost of it, right? We've seen that you can run an AI company at pretty limited cost; obviously it depends on the scope, but it can already be done, and we've already started seeing that. And it's gonna go down, I'm sure, but I don't see it getting to zero, really, anytime soon either. But yeah, my thought is the same: you still need people to provide the input to the AI, to maintain the AI in a proper way, and to handle anything the AI won't be able to do. Again, when it comes to LLMs themselves, which I guess is the main thing people are talking about, given the direction we are going, I just don't see, first of all, the cost going to zero completely, or them being able to perform certain tasks, because the direction everyone is going is larger and larger models, more and more complex models, right? And LLMs in general work by taking in an input and generating an output based on it. So a lot of the concern in that space I just don't see right now, until we implement additional methodologies on top of LLMs, which is gonna happen; when it's gonna happen, I can't tell you.

I think both of you are a little bit cautious, right? On the one hand, you do anticipate, or hope to see, a world, the city or whatnot, leveraging AI, gen AI, whatever those technologies are. But at the same time, sitting where you are, seeing the developer world, right?
You don't see, suddenly, "I needed 20 developers and now I only need one or two." You don't see that as the trend. Sam Altman said we will see a one-person unicorn at some point, right? But you are saying that, look, if it's a developer-heavy solution company, a lot of that will stay on the same course for a while.

I mean, think about it: the smallest unicorn I could think of is probably Instagram, at that point maybe 20 people, right? So is there a big difference between a 20-person unicorn and a one-person unicorn? For something to be worth a billion dollars on 20 people's effort over a couple years of work, that's not a giant phase change in what we need to build something, right? And also, think about Instagram: it was built on top of Facebook, which was a giant company already powered by thousands of engineers. So even that one engineer, if you have a one-person unicorn, they're built on top of the...

So that one-person unicorn is just one person standing on a giant's shoulders, right? By itself, it doesn't mean a lot.

Right, exactly.

And I'm on the same idea on that.

Cool. Any last words that you want to leave with our audience?

Well, on my end, I always say this is a great time to pursue entrepreneurship, as of today. AI is definitely becoming a commodity; it's accessible to everyone, requiring less and less technical skill to start, and there are great applications it can be used for, so I always encourage it. If you're a developer who wants to save time on debugging, on refactoring, come to Metabob, right? They can come to Metabob, or they can build their own; developers are famous for resolving problems by building their own solutions rather than using existing ones. Now, I'm not saying not to use Metabob, you should use Metabob, but I always encourage people: it's a great time to get into the space as well. We've discussed the consequences of AI and the future of AI; aside from just the technical side of it and software developers, it may impact the way the entire economy is gonna be run. We don't wanna get into that right now, but it's definitely a great time to get into this.

Thank you, Massi. Eli?

Yeah, I think engineers love to build things. For my entire career, I got into software engineering to build something. It wasn't to write code; it was to build a product and build a solution for something. And I think that is just innate in our human spirit, and it's so exciting to see all of these technologies flourish, because it's just more opportunity to build more interesting, compelling things. If you don't have AI, you don't have machine learning, you don't have self-driving cars; these things are required to solve really interesting problems, and that's where it gets interesting: what are the problems that we're actually gonna solve with these technologies? Because at the end of the day, I see generative AI, or AI in general, as a tool. How we're gonna use it, that's what's really exciting to me.

And to add on to that, sorry about that, I started by talking about the open source community and how involved I am, and I wanna end with that as well.
For everyone who wants to start: AI is a great application area, and open source communities are on the rise. As you said, there are many hackathons organized on a weekly basis, and I always encourage people in school, or people who want to become entrepreneurs: the best way of becoming an entrepreneur is by doing hackathons, by learning how to solve problems, and by being part of communities that work towards one goal. I also strongly believe that's the direction AI should go, being open source, and that's definitely something I really wish to see more and more of.

Thank you, Massi. Thank you, Eli. A wonderful conversation about AI and also about developer tools, the tool chain that helps developers be more efficient. It's not just, hey, figuring out the bug, detecting the bug, but also, as you're suggesting, how to get people more involved and engaged through hackathons. I really like your line that, at the end of the day, AI is just a tool. We, the entrepreneurs and companies, care about solutions; AI is just a means to an end. At the end of the day, AI is not going to replace everything, anything we do. It's going to be a wonderful tool. So thank you very much, and thank you for watching SuperCloud 6, AI Founders' Day, and have a wonderful day.

Thank you, Ahwe. Thanks, Ahwe.