Welcome to SuperCloud 4. I'm Howie Xu, Senior Vice President of AI at Palo Alto Networks, and the guest host for this panel, Enterprise Software with Generative AI. With me are three distinguished panelists: Vijay from Microsoft, Corporate Vice President of AI Architecture and Strategy; Warren, the AI leader at Google; and Jayesh, Senior Vice President at Salesforce. Before this panel I actually ran a poll on LinkedIn and Twitter and asked people: what do you think about generative AI, hype or reality? I gave four choices. Predominantly, people believe this generative AI wave is the largest technology platform shift ever. 17% of people feel it is the largest platform shift right now, and 65% feel it is the largest shift but will take some time to mature. So 82% of people are very, very excited about this. At the same time, we all know that beyond all the talk, we want to see more production, more real use cases. So my first question to the panelists is: do you agree this is the largest technology platform shift, and why? And secondly, since the majority believe it will take time, what do you see? Time to do what? Do we need to build more databases, more infrastructure, make a mentality shift, or hire more data scientists? What are we waiting for? Let's start with Vijay. Thank you, it's an honor to be here. It's a complex question you're asking, really two questions: how big a technology shift is it, and what will it take for it to actually have impact? From what I see firsthand, there are four kinds of adoption going on, and I try to measure the trajectory of those adoptions to see how big the wave is today. On one end, I see a lot of people using generative AI to get the most value from their personal information space.
The meetings I go to, my inbox, what I write, and how I react to other people. That is really going well. The second thing I see is people saying: give me the benefit of all the data I have in the enterprise and outside. Things like retrieval-augmented generation, and there are limitations in that, but that's a big wave we see. The third and fourth are very interesting, because typically, early on in a new technology, you don't see this level of sophistication until later. We're seeing companies that are saying: now there's generative AI that can read all the information I used to neglect, my digital exhaust. Can I create a new database, a new data set? That's my competitive edge. An example is a couple of companies; I can mention this because they have a public-good obligation, and this work will come out, I would imagine, Q1 next year. They're taking every molecule that's ever been approved for testing in humans, and they're augmenting and enriching it with every patent, every piece of literature, every assay, every image, everything. But we're not just going to extract fields, we're going to read. In the study, what was the role of this molecule? Was it primary, was it secondary? Were the authors of the study diffident about it? What was the follow-up? So: new data sets, which become competitive edge. I was never in the data aggregation business; I can do it now. And the fourth one, the one where I spend most of my time, and maybe I'm being a bit pretentious in the terminology, is high-cognition, high-consequence reasoning. Drug discovery. Designing an industrial plant while minimizing safety issues by learning from everything else. Keeping track of the exceptions that arise in the course of my operations and learning from how other people mitigated them. These are high-consequence decisions.
So my overall point is, I've never seen a technology shift in which people can play across all four of these spectra: bottom-up and sporadic, team-level, company-level mission, and the frontier where the value is created. To me, this is the first time I'm seeing that kind of wave. Now, how far are we from those realities, especially for the third and fourth examples? All of these can be built today; it's a question of how much effort and what your expectation is. The way I put it to people is this: imagine you had an army of postgraduate interns. They have work ethic, they're happy, they work for you. Would you trust their judgment? If you gave them the information to reason over, absolutely; if you didn't, you shouldn't. So really it's all about scoping the problem, making sure the right information goes into the context, and then there's the culture, as you hinted. It's also learning how to take advantage of it. When search happened, people could have said: why is search returning this website that has apparently nothing to do with what I asked for? Actually, a lot of the value of search was in that, and we learned how to work with it. So we have to learn how to take advantage of this, expand the scope of what our work is: not just my narrow work from the past, but can I do more? That's how I see it. I don't see barriers; things will improve, but I see all four of these happening today, firsthand. Thank you. Speaking of search, there is a company that does search better than everyone else. So from Google's point of view, what do you see? In terms of the platform shift, what are we waiting for? For a lot of the use cases to materialize, what's between now and then? What's your take? Yeah, I think the use cases are here. I think people understand them.
The interesting part for us has been seeing how we've taken this kind of AI power, which used to require understanding coding and hyperparameter optimization, and put it into the hands of people who actually understand the business. The simplest example I have: I was working with a CTO in the UK who was a professional photographer. Give him the ability to just prompt the image models, and he can drive images that I couldn't even imagine, because he understands the space. It's the same thing we see in financial services. If you can put these things in front of business users, you don't really have to know Python anymore; it's natural language. So it's really opening this up to new people who can come and be part of this data world. In the way of inhibitors, data and data hygiene continue to be inhibitors. A prime example for me was a big industrial company that came back and said, hey, your model's hallucinating, it gives us a different answer each time, two different answers. When we looked into it, it turned out that the data corpus they were grounding in had two different answers for the exact same question. So some of the traditional things we've always had to worry about, data hygiene and the like, still end up being important as we move forward. You see things like that still inhibit, just as they did in the past. But it's revolutionary. If you'd asked me last December whether we would ship all this stuff that we've shipped in the last 11 months, I would have told you you're crazy. Every week we see new innovation, and that's the exciting part of it, along with making it easier for people to use. That's a big piece of what we're trying to do. Just so I have clarity:
Are you in the camp that this is the largest platform shift now, or that it is, but it will take time? Which camp are you in? I think it's now, but for it to get into production and major usage, that takes time. It depends on the metabolic rate of the different companies and how quickly they can get these things into production. I do think there's a mind-bending aspect to this, which is that there are still business rules and ROI and things like that you have to figure out, and I think people are still working through some of those. But if you look at the people who are putting these things into production today, they're reaping benefits right now. It's just a question of how widely that spreads. Like any new technology, it takes a while to acclimatize to it. Thank you. I couldn't agree more. I think this is perhaps the biggest technology shift I have seen in my working career, and I'm sure the same is true for all of you. Why is it so big is an interesting question. If you look back at the other waves that came before this AI wave, the internet wave, the move to cloud computing, they're by definition universal, in the sense that there's no one industry they help over another; they're foundational. They sit at a lower level of innovation which every industry can take up. And I think this wave checks all those boxes. It's that foundational: it's about language, about the modalities we as humans use to communicate. So to me it's going to impact every industry, which is what makes it a huge, huge revolution. The other interesting question is where we are in the trajectory. I personally think we're very, very early.
The way technology diffuses to customers, there's a certain rate to it. I haven't seen this kind of rate of diffusion in other technologies before. The internet took a long time before it diffused to everybody; same with cloud computing, which is still diffusing if you ask me. But with AI, at least the whole generative AI movement, the rate of diffusion is amazing. There are lots of early adopters trying it, and as is the case with early adopters, they build as the technology changes. You can see a lot of startups doing that, and a lot of large companies doing it. But I think it will take time for it to materialize in a big, massive, scalable way, because it's both about deriving value from this technology, building the entire solution stack to be able to derive that value, and, frankly, the somewhat cultural aspect of how you interact with these systems. There has been a lot of conversation around: wouldn't it be great if we could talk to our computer? Well, now we can. But it's going to take some time for this computer to be less stochastic than it is right now; it does give two different answers to the same question. So there's a trust issue that needs to be resolved from a human-machine-interface perspective, and there's the solution stack that needs to be built and evolved. But all of these are amazingly exciting journeys for all of us to take. I like your phrase, the rate of diffusion. Like any technology, it takes time. This time around, maybe it's even faster than the last few waves, but still, there is a rate of diffusion, right? Thank you, Jayesh. So let me come back to a few questions I have in mind. Vijay, you talked about four different use cases, amazing things, which you're seeing are achievable even now. I've been looking at Office 365 Copilot for the last couple of months, at how people are using it.
A lot of those use cases are about summarization, that sort of thing. And I wonder when the copilot is going to do more meaningful or more sophisticated things, such as: here's the requirement, I'm going to book the appointment for you, automatically. I don't think we're quite there. A lot of people talk about AI agents. I've looked at AI agents; they're not that reliable. But we all know that enterprise software, and this is an enterprise software panel, needs reliability. So I don't see enough of that yet, which is natural, because this is a probabilistic model; it's not as accurate. So when are we going to have an AI agent, or AI model, or AI solution that does things automatically, boom, it's there? What do you see? So there's a product answer and a technology answer. On the product answer, I will not preempt my colleagues who are shipping Copilot. You have seen the tip of the iceberg, and you will see more. But let's go to the technology part with an example of something that really, truly impresses me. Can you schedule this meeting between the four of us, making sure it's convenient for everyone, but somehow making sure people come to you? Let's imagine that's the bias, and it does that. Well, if there's enough information in your calendar and your previous emails, it should be possible for the AI to do it. But if there's not enough information, at that point you're expecting the AI to give you an answer that you probably wouldn't trust yourself to give. So we've got to be very careful about what we're really looking at. Things that are possible will get delivered; I won't speculate on the timescales. The second part is more about technology.
A lot of work on agents is happening inside the foundation-model producers, of course, and there are also companies well funded to go after agent technology specifically. The question is: if I give an agent my goal, can it figure out a plan, and can it figure out what the constraints are? So the AI needs the ability to do planning. For extreme plans, you send it to a solver with a thousand constraints, but generalized planning, probably feasible, possibly optimal, which is what you and I do every day: will there be planning? Does it have enough common sense? Does it know the difference between generally accepted facts, and we're not talking facts in the philosophical sense, generally accepted facts versus pure speculation? Once you have those technologies, it's up to the people who create products to turn them into an agent or not. The way I look at it, the technology is on that trajectory. You can create an agent to do these things today; the question is how much product effort you're willing to put in, both in creating the technology and in educating the user about the edges. That's how I look at it. Sounds very interesting. Anything to add? Go ahead. No, I agree with you. For me, the part that's super interesting is how we will build interfaces to make sure these reach users, and the element of trust is super important. The products we're building will get adopted as long as you're able to get over a high trust bar. One good example: when Tesla came out with its self-driving capability, something very interesting they did early on, when it wasn't yet operational, was simply showing you what the car sees around it as it drives. As that sensing system got better, it built trust with you as the user that it can see what you can see, so there's confirmation there. And then at some point it sees things that you don't, like your blind spot.
So I think that kind of thoughtfulness needs to come into the whole building process. How do you bring expert humans into the loop at the right time, in a way that lets the whole solution ship? Your example of wouldn't it be great if this could be completely automatically scheduled is great, but would we trust it? If it just showed up on a calendar, would we show up here without talking to Howie? We probably wouldn't. I think that's where the barriers also exist. I think we see it in pieces today; it's just not mature enough for it all to come together and be orchestrated. You can summarize documents; great, that works really well. You can go find specialized legal knowledge: I want all the limitation-of-liability and change-of-control clauses from these 600 contracts. That it can do with high reliability. But when it comes to pulling these things together, I think we're not quite there from a maturity perspective. The other piece: if you look at things like LangChain, there's this view that there's one model to rule them all. When you're talking about agents, I don't think there is one model to rule them all. You end up using things like LangChain to use multiple models, cascading models. Current language models don't do math very well. Should we push hard on fixing that in one big model? Or do we call out to a calculation engine, let it do the math, and feed the result back in? These are the kinds of things where, as we look forward, you're going to see more of an app layer that does the coordination, using things like LangChain to pull them together, because ultimately there are different modalities and different specializations that need to happen, rather than trying to get one big model that does everything.
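The calculator routing Warren describes, sending math to a deterministic engine instead of the model, can be sketched in a few lines. This is a toy illustration, not a real LangChain setup; the router, the safe-expression calculator, and the stubbed model call are all made up for the example.

```python
import ast
import operator

# Deterministic math tool: safely evaluates +, -, *, / expressions.
# A stand-in for the "calculation engine" a real app layer would call out to.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculation_engine(expression: str) -> str:
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expression, mode="eval")))

def language_model(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would hit a model API here.
    return f"[LLM answer to: {prompt}]"

def route(query: str) -> str:
    # Crude router: math-looking queries go to the calculator, everything
    # else goes to the language model.
    if all(c in "0123456789+-*/(). " for c in query):
        return calculation_engine(query)
    return language_model(query)

print(route("12 * (3 + 4)"))  # prints "84", computed, not generated
print(route("Summarize the limitation-of-liability clauses"))
```

The point of the pattern is exactly Warren's: the coordination lives in an app layer, and each specialized task goes to whatever engine does it reliably.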
Yeah, can I just jump in? I really like the way Warren and Jayesh talked about telling the user, as in the Tesla example, what am I seeing, what am I missing. And as Warren said, what are the ways in which we get agents to talk to each other? I do want to give a shout-out, without naming too much, to some of my colleagues in research. Just as LangChain is a pattern that says call this model, pass this information to this other model, reason in this way, there are also chain-of-thought reasoning, evolved self-generated prompts, and so on. There's new research from my research colleagues, right now trending number one and number two on GitHub, about how, if you've got five agents, they talk to each other to accomplish a task. So all these things really are becoming possible, and to me it's a question of: do I have to spend a lot of time prompting and controlling the AI, or does the AI innately know what to do if I just give it a goal and tell it who I am? A lot of these things are now in the realm of possibility, but they will come in stages. Yeah, I think we clearly have a lot of exciting technology. You can see that all four of us are very excited about it, but there are a lot of exciting pieces that maybe aren't together yet, depending on your expectations. So we've talked a lot about the model; let me get to the data. Data is still another potentially missing angle, because a lot of enterprises have their own data. For instance, at Palo Alto Networks we have a lot of proprietary data and customer data. We're not going to hand that data over to be pre-trained into, say, ChatGPT, the generic model. We will have to do something ourselves. We have been working with some of the vendors, including Google, on this topic.
But it turns out that marrying the data, your own data, with the model is not an easy thing, whether through fine-tuning or retrieval-augmented generation. So my question to Warren, since you deal with lots of customers: do you see that complexity? Is it an easy problem? Maybe it's easy because Google knows search better than anyone else, or maybe it's not. What is your recommendation to the companies out there that want to marry their data with a model and then do something amazing? Yeah. We talked a little bit about data hygiene earlier; I won't go back to that, but it's an important piece. I think there are effectively two techniques. You have fine-tuning, and you can do different levels of fine-tuning using different methods, LoRA, PEFT, et cetera. Those are getting better and better, even over the last few months; I've seen the newer versions of LoRA get better when it comes to tuning. But you also have grounding, which is important as well. And you have to step back and ask: how much of this do I need to do? Because the models are relatively elastic and do a pretty good job at many things. It ends up being about how specialized you need to be. Your world of security, or medical, has a certain taxonomy, and in those cases you end up having to bring that in to really get the type of recall and precision you want. So: grounding on your corpus of data, being able to do RAG on your corpus of data and using it as the authority. If I'm asking a question, the model may have an answer, or may think it does, but can you actually use grounding to get it right?
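The grounding-as-authority pattern Warren describes can be sketched minimally: retrieve the most relevant document from your own corpus and instruct the model to answer only from it. The corpus contents, the word-overlap scoring, and the prompt format below are all illustrative; a real system would use embeddings for retrieval.

```python
# Tiny in-memory corpus standing in for an enterprise document store.
corpus = {
    "policy-42": "Password rotation is required every 90 days.",
    "policy-17": "All traffic between zones must pass the firewall.",
}

def retrieve(query: str, docs: dict) -> tuple:
    """Pick the document with the most word overlap with the query.
    (A stand-in for embedding-based retrieval.)"""
    q = set(query.lower().split())
    doc_id = max(docs, key=lambda k: len(q & set(docs[k].lower().split())))
    return doc_id, docs[doc_id]

def grounded_prompt(query: str) -> str:
    """Build a prompt that treats the retrieved document as the authority."""
    doc_id, text = retrieve(query, corpus)
    return (f"Answer ONLY from this source ({doc_id}): {text}\n"
            f"Question: {query}")

print(grounded_prompt("How often is password rotation required"))
```

This is also where the data-hygiene point bites: if the corpus itself holds two different answers to the same question, the grounded model will faithfully return both.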
And we're offering different types of grounding, whether it's grounding against immediate information, like I need to know what the weather is, or grounding against your corpus, or grounding against different types of commercial data, something like Bloomberg or Moody's. Those two techniques end up coming together. On tuning, I don't think we've all gotten all the tools yet to make tuning really, really easy, but we're getting much closer, and we see some pretty great results with just fine-tuning. One of the things we did on the research side early on was take a bunch of data and do a full retrain, basically instruction-tuning the model, and then separately do fine-tuning with LoRA. We found less than a percent difference in the accuracy and recall between the two. So we know that fine-tuning is very good. It's a question of making sure your data is clean, knowing what you're trying to get out of it, and knowing what evaluation you're going to use. We now have a bunch of eval tools that let you train and tune models, run them against evals, and see how different models react. That ends up being an important part of the experimentation. So you're saying that on the data angle, people today need to do a lot of kung fu. Do you expect the vendors or the model providers to provide enough tools so that I don't need a lot of kung fu to do this? Well, if you look at where we were and where we are, those are very different places. A year and a half, two years ago, if you said, hey, I want to train this model to understand this type of PDF, to extract information out of tax PDFs or something like that:
To do that, you would have needed tens and tens of thousands of examples to train the model. To do entity extraction against something like that today takes a few hundred. In fact, further research we did showed that beyond about 800 or 900 examples, you don't actually get any better at extracting entities. So we went from a place where I needed millions of pieces of data to get a little bit, to a place where I only need 800 pieces of data to get a much better result. From that non-linear increase in functionality and capability, we're at a very different place than we were a year or two ago. So I think we're definitely on that road. Are there going to be more tools to clean and prepare that data? Absolutely. Some of that is well understood, and some of it people are still figuring out. You answered one of my questions, because some people say: hey, I'm in a small vertical niche, I have a much smaller data set compared to the pre-training data. Is that still going to be effective? I think you're saying the latest technology, the new research, can still make that very meaningful. Yeah, I mean, there are examples in the market, like a fin-serv model that's out there. If you look at what that model was trained on, the vast majority of it is public data and a little bit is private data. There are vast amounts of public data out there which you can use to train models, and adding your little pieces of private data actually takes it to the next level. But I think the fascinating thing is around style. I can give it 100 documents and say, this is the image style I want, this is the language style I want, and that makes a whole difference in what the model actually outputs.
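Back on the entity-extraction point: at the few-hundred-example scale Warren describes, one common approach is to format labeled examples into a few-shot prompt rather than retrain at all. A toy sketch of that prompt assembly; the document texts, field names, and format are all made up for illustration.

```python
# A handful of labeled examples standing in for the few hundred a real
# extraction task might use.
examples = [
    {"text": "Gross income: $84,000 for tax year 2022",
     "entities": {"gross_income": "$84,000", "tax_year": "2022"}},
    {"text": "Gross income: $51,500 for tax year 2021",
     "entities": {"gross_income": "$51,500", "tax_year": "2021"}},
]

def build_extraction_prompt(examples: list, new_text: str) -> str:
    """Format labeled examples plus the new document into a few-shot prompt."""
    lines = ["Extract the entities from each document."]
    for ex in examples:
        lines.append(f"Document: {ex['text']}")
        lines.append(f"Entities: {ex['entities']}")
    lines.append(f"Document: {new_text}")
    lines.append("Entities:")  # the model completes from here
    return "\n".join(lines)

prompt = build_extraction_prompt(examples, "Gross income: $67,200 for tax year 2023")
print(prompt)
```

Whether you do it this way or via light fine-tuning, the shift Warren points at is the same: hundreds of curated examples now carry the weight that tens of thousands used to.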
That style transfer is a real thing; you can do it today. You can make your marketing content actually sound like you without spending a ton of time. But for more complicated things, like answering complicated security questions, it takes time and jiu-jitsu to make it happen. Thank you. So we've talked about the model and the data; now I want to talk about the solution. Jayesh, you're in a more neutral position compared to Microsoft or Google. So what are your considerations: the frontier models, different frontier models, open-source models, open models? From a solution point of view, what have we learned? Great question. Broadly speaking, the point of view we have is that we want to solve a business problem. Say you want to make a sales professional ten times more productive. If that's the problem to be solved, a lot of what we do at Salesforce rests on a deeper understanding of what jobs people do at work and what goals they have for those jobs. And one of the things we've noticed talking to customers is that we're at a point now where the CEO has played around with ChatGPT and loves it, like you said in the back room there. The first thing they did was go get a license for their company, put a few engineers on it, and produce some prototypes, and now they don't quite know where to go next. How do you craft those into products? Products can take two forms: net-new products that generate new revenue for you as a company, or efficiency plays for your employees. In both cases, I think there's a huge solution gap that's going to get filled pretty quickly. We talked about the two methods for getting the data right, the RAG method and the fine-tuning method. Crafting all of these together, tied to the job to be done, requires a full stack to be evolved.
So from a solution perspective, we look at working with a variety of partners on the large language models. We also build our own; our research team has built an amazing model called CodeGen. But there's a whole gap from there to the solution to the problem I was describing earlier. How do you solve it? You need a way to ground in that customer's data. A mental model that I've found works quite well to explain this: the right way to think of a large language model is that it's literally an index into a set of programs that it can run on your behalf. Now you need the right magic incantation to make sure that the right program gets executed; that's the prompt. You can give it more examples of what looks right and what doesn't. But then every program needs inputs; that's the context that's needed, and that's where RAG comes in and plays a really, really important role. These systems are such great multi-task learners that you can go beyond that, too, with the reasoning engines we talked about. One of the things we're working toward is how you bring together APIs, which is where actions are taken, data, which is where context comes from, and this new programming paradigm, so that an application developer doesn't need 400 people to start a Jasper.ai; you can do it with five. Let me just double-click on one thing. You mentioned there are two types of products: one for internal employee productivity, the other outbound, customer-facing, net-new value. For that second one, how much are you worried, or not worried, about the reliability and repeatability of the model's capability? How do you measure the value the customer gets? And then maybe some insight into pricing: how do you think about pricing, if you want to comment on that? All great questions.
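Jayesh's mental model, the model as an index into programs, with the prompt selecting the program and RAG supplying its inputs, can be sketched as below. The program registry, the keyword matching, and the two functions are illustrative stand-ins; in a real system the model itself would do the "indexing".

```python
# Two programs the system knows how to run; these are the APIs where
# actions are actually taken.
def draft_followup_email(context: str) -> str:
    return f"Drafted follow-up using: {context}"

def summarize_account(context: str) -> str:
    return f"Summary of: {context}"

PROGRAMS = {
    "follow up": draft_followup_email,
    "summarize": summarize_account,
}

def select_program(prompt: str):
    """Stand-in for the model 'indexing' into a program from the prompt
    (the magic incantation)."""
    for keyword, program in PROGRAMS.items():
        if keyword in prompt.lower():
            return program
    raise LookupError("no program matched")

def run(prompt: str, retrieved_context: str) -> str:
    program = select_program(prompt)   # the prompt picks the program
    return program(retrieved_context)  # RAG supplies the program's inputs

print(run("Please follow up with Acme", "Acme renewal notes"))
```

The three-way split is the point: the prompt selects, the retrieved data feeds, and the API acts.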
I think our first foray into this is ensuring there's a human in the loop at all times, guiding the system. Even where there's automation, there's at least some sampling frequency with which outputs are actually seen by a human and resolved, and these are expert humans. Take the example of contact centers, call centers: you have a set of trained agents who can teach the system, if you will, like a student-teacher model. Once you've gotten that to a level of sophistication and run your evals, that's a good point to turn it over to some degree of automation for those cases. The things we look at from a metrics perspective, interestingly, have been in the enterprise almost forever, because if you build a software system, you want to learn its efficacy. For example, in contact centers you look at average handle time. If your AI is doing a good job, your average handle times ought to go down. It used to be the case that when handle times went down, customer satisfaction also went down, because the automation wasn't great. For the first time, we're seeing both get better: average handle times going down but customer satisfaction going up, because these automations mean you're not waiting for a human to come answer your simple question. What is the key learning on pricing? I think consumption-based pricing is definitely something this technology will push forward. That said, for really large customers, pricing needs to be easy and predictable, which is hard to do with pure consumption-based pricing. So our focus is giving customers a mix of both: some credits in place so they can get started in a consumption-oriented manner, and some tiers where they get a degree of predictability as they use these systems. Very good, very good. So all of us are very excited about this transformative technology.
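The credits-plus-tiers mix Jayesh describes can be sketched as a simple billing function: usage draws down prepaid credits, overage is billed on consumption, and a tier caps the bill for predictability. All rates, credits, and cap values here are made-up illustrations, not anyone's actual pricing.

```python
def monthly_bill(units_used: float, credits: float,
                 rate: float, tier_cap: float) -> float:
    """Consumption beyond prepaid credits is billed at `rate`,
    with the tier capping the total for predictability."""
    billable_units = max(0.0, units_used - credits)
    return min(billable_units * rate, tier_cap)

# Light usage stays inside the prepaid credits: bill is zero.
print(monthly_bill(units_used=150, credits=200, rate=0.10, tier_cap=25.0))

# Heavy usage is billed on consumption but capped by the tier,
# so a large customer's bill stays predictable.
print(monthly_bill(units_used=500, credits=200, rate=0.10, tier_cap=25.0))
```

The design choice is exactly the trade-off named above: pure consumption pricing tracks value but is unpredictable; the cap trades some upside for a number a large customer can budget.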
My last question, and anyone can answer, in no particular order: what are the killer applications we can imagine from here? What are the things we were not able to do before, but with this wave we are able to do? Actually, when I put that poll on LinkedIn and Twitter, one person left a message saying: hey, you're so bullish about this technology, what would it take for you to do your own startup? So when people think about this, there must be some big killer applications. What do you see? I'm happy to go first, since I'm sitting next to you. Over the last hundred years, we have allowed the people with physical work skills, the people prepared to bring motor skills and brawn and the ability to go to the site, to fall behind. Knowledge work captures the entire pipe of GDP growth. The biggest opportunity I see is uplifting physical work. If I'm a caregiver in a home, make me as good as an in-hospital therapist. I go to Delhi, and it's no sarcasm to say that Delhi discovered the fitness culture relatively recently; there are gyms everywhere, but they don't have the skill sets that we have, for example, in Seattle, where there are people who train the Mariners and people who train the Seahawks. How can I get those skill sets to physical workers? To me, that's really the one. We're very worried about how to preserve the current advantage knowledge workers have, but that's a small fraction of the world's population. And this is the moment of: I didn't go to school, but I'm smart, so feed information to me, feed intelligence to me, allow me to get the job done. That's how I see it. And if you don't mind, I can add a second killer app. The second killer app, which is the first-world killer app, is that I'm seeing businesses ask exactly what Jayesh said: is it an efficiency play or an outbound play?
I'm seeing another thing which may fall in your second category, Jayesh, which is: what is a business I couldn't get into before, but am the best situated to get into? So I'm working with cosmetics companies who are saying, cosmetics are not just meant to make people look good, they really are completely capable of making people feel good and be well. But I don't have the ability to hire and manage 20,000 PhDs like some of the large pharma companies do. So how do I get into this business? And now they're saying, I can get into this business. So you have killer apps for people and you have killer apps for businesses. Thank you.

I can go next. I live in Seattle, but I obviously don't have an NFL physique, so I missed the boat on that one. But I don't know if there's one killer app. I see areas that can definitely be accelerated. Creative work is one piece that I think we tend to underestimate a little bit, but with music models and image models and models that create scripts and outputs and things like that, there's just a massive number of interesting things that can happen there. And I think that as we look around the industry, the whole media industry is gonna change in a huge way. And what's the one app for that? I don't know, but you're putting skills into people's hands that they didn't have before, and the ability for them to create things that they never could. Even a year or two ago, if you saw people doing creative music and visual art with AI, they were coders, right? They had to understand that. That's not true anymore. It's a little bit like the change between analog and digital music, right? Way back when, when I was a failed musician, I used to cut tape, believe it or not, in an actual studio. Well, this is the same type of revolution on the creative side: the ability to create things that you could never create before.
So I think that's one piece. Citizen programmer, citizen musician. I love it. Yeah, I mean, I think it's gonna be a very interesting thing. So for me, that's one of the areas that's most exciting, that I think is really gonna hit with amazing full force. So, yep.

Jayesh. You know, it's hard to pick a killer app, but I'll tell you the one thing that I'm doing. Salesforce. Isn't Salesforce a killer app? Salesforce has always been a killer app. But it's hard to pick. The space is moving so quickly, I think it's hard to pick. Palo Alto Networks. And Palo Alto Networks, and Google, Microsoft. To me, the thing I'm most excited about is this future where everyone will have a personal tutor. Like imagine that: every kid, every child in any grade, all of us will have a personal tutor. Humans learn very differently, and I think personalizing education to uplift and to learn is gonna be the biggest killer app, because it's gonna be so foundational to just move the bell curve one standard deviation, right? Imagine what that means to humanity. That's just amazing. And what does that do for enterprise AI? It turns out, everything. When we come into work, we learn new skills. Imagine a tutor that could teach you, just in time, how to run a marketing campaign as a startup. You're not a marketer, you're a baker. But you learn, just in time, how to do a marketing campaign, how to create great flyers. It teaches you how to close your books at the end of the month, right? Just-in-time learning, personalized tutoring. We are starting to delve into this domain a little bit with copilots and digital assistants and such. But I think this is gonna be the killer platform, if not an app: a platform for learning.

I love it. So it's not just a killer application, but also giving a superpower to each and every person, no matter what profession you are in. And this makes the entire society better.
So I really love it, right? I really believe we're in another age of discovery, right? Once we find a new continent, a new land, what are we going to do? Maybe a new country, maybe a whole new everything is going to be born. Thank you, everyone. Vijay, Warren, Jayesh, this was a fantastic conversation. I'm pretty sure, and I'm hopeful, that we'll come back to this panel and discuss this again, because this is going to take a little more time. Thanks so much for today. Thank you, everyone. Thank you. Thank you. It was great.