Welcome to Supercloud 5, the battle for AI supremacy. I'm Lisa Martin, here in our Palo Alto studios with Savannah Peterson. We're going to have four days of amazing coverage, talking all things AI. Savannah, it's great to be with you for the fifth special edition of Supercloud. I know, I can't believe we're already at five. The Supercloud series started in August of 2022; we weren't even in ChatGPT territory yet, that wouldn't come until November. It's going to be a jam-packed four days, both here in the studio in Palo Alto and from John and Dave in Las Vegas. I am thrilled, and I'm so excited to co-host with you. Same, and I can't believe all the OpenAI drama that's been going on, the controversy of the last 10 days. I'm so excited to see where organizations are taking their customers on the AI journey to really make it a reality, so I'm excited to hear that from our vendors, our partners, and our customers across Palo Alto and Las Vegas. Fantastic, I am too, and I'm curious to see if there are any big predictions for who our winners are going to be, both on the generative side and on the traditional front. Who's going to be powering this? Lots of conversations here as we close 2023 with the most exciting conversation of the year: artificial intelligence. I think our first guests are going to have some exciting things to say. They are. Howie is here, an AI and data executive. Howie, take it away, you've got a great power panel next. Thank you, Lisa. This is actually a fantastic week, because just a year ago ChatGPT was introduced, and especially in Silicon Valley there is so much exciting stuff going on. So with that, I have two distinguished panelists with me, Jerry and Arjun, and I'll let them introduce themselves in a moment. But my first question to you is not just to introduce yourself: you also started a company about a year ago, around the time ChatGPT was introduced. What's the mission? And what's the reason for the name of the company? Especially for you, Jerry: LlamaIndex. Everyone knows about Llama. Did you borrow the name from Llama, from Meta, or what's the story? Great, so just a brief introduction. My name is Jerry, I'm co-founder and CEO of LlamaIndex, and the thing I have to clear up for basically everybody is that we came up with the name the week before the Meta model came out. So LlamaIndex actually has nothing to do with Llama, the model itself. We were sitting on a couch and decided to rebrand from our existing project, GPT Index, to a new name, and we were thinking of a cute animal. We thought of a llama because it has the letters L-L-M in the name of the animal itself. That's what got us to rebrand, and we're really excited about the name. So great minds think alike, cute animals, basically. Yeah, exactly. And then a week later Meta came out with the exact same name. So what's the mission of LlamaIndex? The mission of LlamaIndex is to connect a user's or an organization's knowledge with the power of large language models. ChatGPT knows a lot about a lot of things, but it doesn't know about you, the individual, or you, the company. Our goal is to take all that data that's sitting private to you, for instance your PDF files, your CSVs, your documents, your APIs, and figure out a way for you to operate ChatGPT over all of that data and do all the things you can do with ChatGPT. So basically, a large language model indexing your data to do more wonderful things.
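To make that mission concrete, here is a minimal sketch in the spirit of the LlamaIndex quickstart from that period (late-2023 releases; import paths have since moved under llama_index.core). It assumes an OpenAI API key in the environment, and the "data" folder and question string are hypothetical placeholders.

```python
# Minimal sketch: index a folder of private documents and query them with an LLM.
# Assumes OPENAI_API_KEY is set; "data" and the question below are placeholders.
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()   # load PDFs, text files, CSV exports, etc.
index = VectorStoreIndex.from_documents(documents)      # chunk, embed, and index them

query_engine = index.as_query_engine()                  # retrieval plus LLM synthesis
print(query_engine.query("What does our supplier contract say about late delivery penalties?"))
```

Under the hood this is the same retrieval-augmented pattern the panel returns to throughout the conversation.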
Exactly. We'll get more into that. Arjun. Hi, thank you for having me. I'm Arjun, I'm the co-founder and CEO of Distyl. Our mission is to distill value from technology for large enterprises, and that's really the origin of the name; it's our mission. The technology we focused on to start the company is large language models, and our goal is to elevate the core business processes of every one of our clients, to make them more efficient and more productive and to create new revenue opportunities, leveraging this capability and technology in AI. So I know you guys work with the OpenAI folks a lot, right? What was the reason you started the company a year ago? You started it right before ChatGPT was introduced, right? So what was the background? Yeah, ChatGPT was this really interesting tailwind that happened after we started the company. But the capability that we actually got most excited about was the ability to follow instructions. When InstructGPT came out, that was the aha moment where we realized this wasn't just something you could use to write letters or edit emails, which is great, but also something you could use to give instructions and actually carry out tasks that could have meaningful operational impact at enterprises. And that's when we decided to really form a company around this. ChatGPT just gave us the tailwinds, because then everybody started talking about it. Oh, that makes a lot of sense. So basically the InstructGPT paper was introduced a few months before you started the company, and that was the aha moment that led you to start it, right? That's right. That's a very good story. So with that, this is actually the second installment I wanted to do on the generative AI industry. In the first installment, about a month ago, I talked to executives from Microsoft, Google, and Salesforce and just asked them, hey, how real is generative AI? All of them said this is the biggest platform shift they've seen in their careers, and they've been around the block for 20, 30 years. At the same time, we all know that a year later the enterprise deployment of production copilots is still small scale, I think, to be fair. So what gives? Is that because there is some technology gap? Perhaps that's exactly what you're working on, so maybe we should just dive right in. What is the biggest gap you see when you work with customers? And, from a technology point of view, what's the value you are providing? Well, I think what gets people excited about generative AI in general, and this might be why there's so much enterprise interest, is that the time to value for knowledge automation and extraction is way shorter. You take an unstructured PDF, you dump it into the text window of your ChatGPT browser, and it can just automatically understand what's going on. Or you copy and paste some code into ChatGPT and it understands what's going on. So clearly there's a ton of potential here, and I think people are so excited because they're trying to explore the upper bounds of what this potential has to offer. Of course, there are a lot of gaps too, as you mentioned. A lot of people are trying to build LLM applications these days, mostly as prototypes, and they're finding it hard to productionize. We've written pretty extensively about this. There are a few core issues.
One is hallucination: the LLM itself, given whatever information you feed it, can still produce output that isn't actually grounded in that information. It's a stochastic black box, so there's always some probability that it will fail. The other piece is that a lot of people are building software systems around LLMs, and they're still figuring out the best practices for doing so. For instance, when you combine the LLM not just on its own but with a vector database or with other systems, you add more parameters, and the more parameters you add, the more failure points there are. So people are finding it hard to, one, figure out how to properly... That's because errors compound, right? You have more components. Yeah, they compound, and when you add more parameters to a stochastic system, the entire thing is stochastic. So one, they're finding it hard to figure out how to evaluate things, and two, they're figuring out how to optimize all these parameters for better performance. So you write blogs, LinkedIn posts, and tweets all the time, right? Just give us one or two things you want to share with the audience here, the key things you're passionate about and feel we need to do much better. I could probably talk about this for an hour, but in 10 seconds: the main thing we're excited about, and where we're seeing the most enterprise adoption, is retrieval-augmented generation, basically combining a knowledge base with a language model. This whole paradigm is called retrieval-augmented generation, or RAG for short, and we've been investing pretty much the past six months of effort into it. We're excited to keep making it production ready, and we have a lot of enterprise deployments of our software. So in another 10 seconds, why is RAG hard? Because on paper you just do the retrieval: vector database, embedding model, boom, you retrieve the top-k content. Why is it so hard? This is exactly where the point about adding more parameters to the system comes in, because the moment you add retrieval on top of the language model, all of a sudden you have to think about how your retrieval system works. How do you load in the data, ingest it, parse it, put it into a vector database and embed it, and then how do you actually retrieve it? A lot of current practices are relatively naive. They're doing the most basic stuff: you split every five sentences or so, you use OpenAI embeddings, you do top-k retrieval, and typically we've seen practitioners just fix those settings and then not know where to go from there. So a lot of the failure points aren't due to the LLM; they're due to the selection of parameters at the earlier stages of the process. That's outside of the large language model. Exactly, yeah.
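For reference, here is a deliberately naive sketch of the pipeline Jerry describes: fixed chunks of roughly five sentences, off-the-shelf embeddings, top-k cosine retrieval, and one stuffed prompt. The chunking rule, model names, and k are illustrative defaults rather than recommendations; every one of them is a parameter of the kind he is talking about.

```python
# Naive RAG: chunk, embed, retrieve top-k by cosine similarity, stuff into one prompt.
# Assumes openai>=1.0 and OPENAI_API_KEY; chunk size, models, and k are illustrative.
import math
from openai import OpenAI

client = OpenAI()

def chunk(text: str, sentences_per_chunk: int = 5) -> list[str]:
    # Crude sentence splitting; "every five sentences or so" as described above.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sentences[i:i + sentences_per_chunk])
            for i in range(0, len(sentences), sentences_per_chunk)]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, document_text: str, k: int = 3) -> str:
    chunks = chunk(document_text)
    vectors = embed(chunks)
    q_vec = embed([question])[0]
    top = sorted(zip(chunks, vectors), key=lambda cv: cosine(q_vec, cv[1]), reverse=True)[:k]
    context = "\n---\n".join(c for c, _ in top)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

Swapping the splitter, the embedding model, the value of k, or the prompt changes the system's behavior before the LLM ever sees the question, which is where many of those failure points land.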
And Arjun, anything to add? Yeah, I just want to echo everything Jerry said; he's absolutely right. I think there's this misconception that to build an application with an LLM you just need some documents, some data sources, you connect them to an LLM, and you get magic, voila. In reality, what we have discovered is that it's a little more complex than that, and I'll break it down into two things. Number one is enterprise software engineering, and number two is the large language model itself. The enterprise software engineering part isn't new. It's the set of considerations you need to scale to a large number of users, a large number of workflows, and a large number of data sources, while respecting the access control postures your organization has. The good news is that we've been doing this for decades, but it still needs to be done, so we need to bring those practices over. Big data, but with slightly different requirements, right? Exactly. You don't need those things when you're building a demo or a prototype, but you absolutely need them if you're trying to create value from it. So that's the first consideration, the enterprise software engineering. The second consideration is the LLM itself, which, as Jerry said, is a bit of a stochastic black box. There are all of these techniques and tools you need to make it work the way you expect, ranging from data pre-processing and post-processing to instructions, and these are not well understood. In fact, they're fairly custom. You need to work from the use case backwards to understand the right combination of techniques to make the LLM do what you want for that particular use case. If you get that right, the value creation is massive. But if you think of it as something that automatically works out of the box, you're going to have a good demo and you're going to be disappointed once it goes into production. So those are the two things we think a lot about when working with our clients. So in the past we have been at this point a few times from a technology evolution point of view: you have a new technology, but it's fragile and it takes some engineering work. In the last decade, the way to solve that problem was to put it into the cloud and let someone else deal with it, so that I consume RAG as a service, or consume the engineered product as a service. Do you see that happening? And if not yet, why? Yeah, I can definitely see that happening. As to whether we're there yet, there are some factors around the maturity of the technology and whether there are actually services that can handle some of this at production-ready workloads. But what we're seeing is that a lot of people are coming into the AI space from a variety of backgrounds. There are the tinkerers, people who really want to get deep into writing their own frameworks and prompts and compose their own systems. And then you have people on the other end of the spectrum who just want something to work, right? So there's definitely an opportunity to build services that just make things work out of the box. For instance, if hypothetically there were a RAG as a service and it worked really well, there's a segment of people that would use it. So we don't have RAG as a service today; is that because RAG as a service is so hard? Well, actually, case in point: OpenAI's dev day a few weeks ago, they came out with a retrieval API as part of their Assistants API, right? That is an example of RAG as a service, and there are a few companies doing that. Yeah, GPTs essentially are RAG as a service; you just upload a file. There are some limitations, right?
You can upload up to 20 documents or whatnot. Right, and we did a quick benchmark, and it performs roughly on par with, or maybe a little worse than, a basic LlamaIndex setup with some tweaking of the settings. You mean the naive version. Yeah, exactly. So there's definitely room to get better, and I guess the point is there's definitely room for RAG as a service. There's a lot of interest in it, but I think it will take a few months for the technology to mature enough to handle the performance requirements of some of these use cases. So Arjun, what do you see as the complexity of working with customers, Fortune 500 customers in particular in your case? The complexity of getting RAG done right. Yeah, absolutely. I think it's important to understand how to address the different needs you have, so I want to split your requirements into two kinds. One is workflows that don't require a lot of customization and aren't necessarily unique to you. For those, your SaaS vendor is going to incorporate AI, and that's probably the best way for you to get value from artificial intelligence. The thing we really think about is the entire set of use cases and workflows that are categorically unique to you, and if you're a large enterprise, that's actually what differentiates you as a company. So what does it mean to incorporate artificial intelligence into those? Our biggest learning is that it requires you to work backwards from what your workflows are and what your company does, and that informs the architectural decisions about what your AI stack should look like. It's different for a company that has predominantly structured data, it's different for a company that predominantly works with unstructured data, and it's slightly different for a company that has a very large knowledge-worker workforce. But work from the use case backwards. Once you understand that, you can decompose it into software engineering best practices that allow you to scale up, and LLM best practices, again working backwards from the choice of use cases in a first-class way. So the way I heard it from you is that we are still early, we're still figuring out the typical design patterns by working backwards, and we are not at the age of cookie-cutter RAG as a service yet, because we need to understand the design patterns first; perhaps in the future we'll have RAG as a service, but not yet. I'm very optimistic and hopeful that increasing parts of the stack are going to get standardized, but where we are today, for it to work for you with the reliability requirements you have, you need to work from the use case backwards. So you and I had this conversation before, and you mentioned that working on this part of the system is both art and science, right? Can you elaborate a little for our audience, the art versus the science and engineering part you see? Yeah, absolutely. The science part is the techniques and best practices that are well understood and the same everywhere, and these largely fall into the bucket of software engineering. We all know how to do CI/CD; by the way, LLMs also require that, because you need very high iteration speed. So how do you do this in a safe and secure way? This is very much science, well understood: let's just do what works. Similar things for role-based access controls.
Where the art really comes in is the entire set of techniques around how you deal with the data, how you write the instructions, how you do data minimization, and how you do post-processing after inference. There's a little bit of art in picking the right components and piecing them together in a way that really works for the requirements of the use case and the users trying to use it. That's still very much art today, because we're learning more about it. We have knowledge of the tools, but how they come together as a Lego set for different users and different clients is still very much different. Yep. So we talked a lot about RAG. What about fine-tuning? We haven't touched that word, but people talk about fine-tuning a lot. You work with customers; what do you see? Do we even need fine-tuning, and is fine-tuning mature enough to even be brought to the table today? What's the status? Yeah, that's definitely the other buzzword in the AI, or gen AI, space right now. Besides RAG, a lot of customers and users are thinking about fine-tuning. I would say no one has reached a conclusive answer yet, even the people doing fine-tuning. The reason is, I recently talked to a few users who basically said that even if they spend a lot of effort fine-tuning, say, GPT-3.5 or GPT-4, by the time the next model comes out they're just a little below, or matching, the performance of whatever that next model will be. So some of this does feel like a potentially temporary effort as models get better and costs come down, and generally, if you believe in an exponential growth curve of model capabilities over time, the requirements for fine-tuning go down, right? That said, practically, a lot of people are doing fine-tuning, and the reason is that with the current set of models it lets you squeeze out better performance for less cost. There are certain specialized types of tasks where you can fine-tune something much smaller and much cheaper rather than just using GPT-4 or GPT-3.5. For instance, for a classification problem or a routing problem, a lot of times you could just fine-tune embeddings or a classifier as opposed to an actual LLM. Among the people doing fine-tuning, there are two buckets. There are people tinkering around because they really like machine learning, and there are people in the enterprise who are legitimately fine-tuning to bring their costs down and performance up. But typically that stage happens after some initial prototype demonstration of value, and that typically occurs without fine-tuning, just because it's way easier to set up. If you can just take GPT-4 or 3.5 or Llama 2 and wire together a prototype, it's way easier to do without actually training the model itself. A lot of times fine-tuning is this optional optimization step that people are still trying to figure out. Cool.
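As a rough illustration of that last point, a narrow, specialized task like routing can often be handled by a small supervised classifier trained on labeled examples rather than by an LLM call. The sketch below uses TF-IDF features and logistic regression purely to stay self-contained; in practice the same idea is often applied on top of the embedding vectors already used for retrieval, and the example queries and labels are hypothetical.

```python
# A cheap routing classifier for a narrow task, instead of an LLM on the hot path.
# TF-IDF plus logistic regression keep this self-contained; embeddings work the same way.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled routing data: which backend should handle each query?
queries = [
    "total invoice amount for supplier 4512 last quarter",            # structured data
    "average delivery delay by region in 2023",                       # structured data
    "what does the master services agreement say about liability?",   # documents
    "summarize the escalation policy in the support handbook",        # documents
]
routes = ["sql", "sql", "docs", "docs"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(queries, routes)  # in practice: hundreds or thousands of labeled examples

# Route new queries cheaply, with no LLM call required.
print(router.predict(["average delivery delay for supplier 4512"]))
```

A model like this is far cheaper to serve than an LLM and easy to retrain as the routing taxonomy changes.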
Hey, Arjun, we'd love to hear your thoughts, but I also wanted to step back a little bit. Jerry talked about his view on fine-tuning, and you mentioned to me that some of your customers don't even use RAG, they just use traditional information retrieval. Maybe give us a zoomed-out view: what are the different ways, RAG versus fine-tuning versus the traditional approaches, and what do you see with Fortune 500 customers? Yeah, absolutely. One mental model I've found particularly helpful is to think about this through the lens of what serves accuracy best. When you're fine-tuning, much the same way as when you're training a model, there is information loss. So the question you're really asking is... You change the weights of the model in the case of fine-tuning. Exactly, and what you're also doing in the process is that not all of the information you give it is actually being learned by the model. Which is why it's not necessarily the best way to teach the model facts, right? What we have found is that the information loss from fine-tuning is larger than the accuracy gains, compared with treating it as an information retrieval problem outside of the large language model itself. Will that be the case forever? Probably not, I don't know. But what is very much the case today is: decompose the problem into two parts. One is the large language model problem, and there it's really good at planning, picking the right tools, synthesizing, summarizing. Summarizing, exactly. Then treat the actual data problem as an information retrieval problem, which could involve SQL in the case of structured data, RAG in the case of unstructured data, or information extraction to create structured data or knowledge graphs, which you then complement with the large language model. We have really good techniques for doing high-reliability, predictable information retrieval; we should be leveraging those, and we should be using the LLMs where we know they work today. It's going to be a work in progress to get fine-tuning to the point where you can trust it for information as well. Where fine-tuning is giving us value today is when you want to teach the model things like brand, culture, and style, things that aren't necessarily fact based. Or output format. Exactly, exactly. Where you have good guidelines and you can give it the necessary information to learn things at a stylistic level. So what both of you are saying is that RAG, or text-to-SQL, or those kinds of mechanisms already give you pretty good data augmentation. You don't always have to use fine-tuning; in some isolated cases you see the benefits of fine-tuning, but not across the board. And, Jerry, you mentioned that when the model updates you may see a regression, so there are limitations with fine-tuning. Yeah, there are a variety of reasons, but at a high level one big reason out of many is honestly just UX. It takes a lot longer to do anything by training the model versus just using it. That's also part of the reason why GPT-3, GPT-4, and ChatGPT are so popular as APIs: any developer can just call an API, get something back, and build a software application with it, versus having to collect a dataset, get human labels, train the model, tune the hyperparameters, and then finally get something that kind of works for the use case. Fine-tuning is basically that training process, reflective of traditional machine learning, so it can be very powerful, I'm sure, but it also just takes a lot longer. And I think that's why a lot of people are getting into this space without training at all.
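A rough sketch of the decomposition Arjun describes above, under illustrative assumptions: structured questions go through ordinary SQL, unstructured ones through retrieval (here a crude keyword lookup standing in for the vector search sketched earlier), and the LLM is reserved for synthesizing an answer from the retrieved evidence. The table, documents, model name, and prompts are all hypothetical.

```python
# Keep the LLM for synthesis; treat data access as classic information retrieval.
# Assumes openai>=1.0 and OPENAI_API_KEY; data, schema, and prompts are illustrative.
import sqlite3
from openai import OpenAI

client = OpenAI()

# Toy structured store; in practice, the operational database or warehouse.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, supplier TEXT, amount REAL, shipped_on TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
               [(1, "Acme", 1200.0, "2023-10-02"), (2, "Globex", 480.5, "2023-11-15")])

# Toy unstructured store; in practice, the vector retrieval from the earlier RAG sketch.
DOCUMENTS = [
    "The master services agreement caps liability at the fees paid in the prior 12 months.",
    "Suppliers must confirm purchase orders within five business days.",
]

def ask_llm(prompt: str) -> str:
    resp = client.chat.completions.create(model="gpt-4",
                                          messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def answer_structured(question: str, sql: str) -> str:
    rows = db.execute(sql).fetchall()  # deterministic, auditable retrieval
    return ask_llm(f"Question: {question}\nSQL rows: {rows}\nAnswer concisely from the rows only.")

def answer_unstructured(question: str) -> str:
    words = set(question.lower().split())
    passages = sorted(DOCUMENTS, key=lambda d: len(words & set(d.lower().split())),
                      reverse=True)[:1]  # stand-in for top-k vector retrieval
    return ask_llm(f"Question: {question}\nPassages: {passages}\nAnswer using only the passages.")

print(answer_structured("How much have we spent with Acme?",
                        "SELECT SUM(amount) FROM orders WHERE supplier = 'Acme'"))
print(answer_unstructured("How quickly must suppliers confirm purchase orders?"))
```

The choice between the two paths can be made by a router like the classifier sketched earlier, or by the LLM itself as a planning step.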
So before we wrap up on the technology side, there's one topic I can't help but ask about: hallucination. Other than RAG or fine-tuning, are there other ways to deal with hallucination? That's half of the question. The other half is, what about evaluation? Because, to your point, there's some art here; maybe we need to evaluate the product very differently. So maybe start with... Yeah. I think what you're really getting at is how you think about the reliability of these systems. Yes. It's useful to decompose reliability into two parts. One is software engineering reliability and the second is LLM reliability. For software engineering reliability we have known best practices, and we should just continue using them. So that's number one. Number two is large language model reliability, and we think about this in a few different ways. First, you need evaluation data. An interesting fun fact: we used to have test and validation sets in traditional machine learning because you needed training data to build the model in the first place, so machine learning engineers would just set some of it aside. Hold it out. Exactly. So there was this forcing function to make sure you always had it. The thing with large language models is that because you're getting a pre-trained model, you don't strictly need it, but I would strongly encourage people to make sure they always have it. So that's number one. Number two is that you need to be collecting feedback. With all of the techniques I talked about earlier, you can go from something that works well as a demo to something that has 85 percent accuracy and can work well with a human in the loop. But how do you go from 85 to 95 or 99? You need to be collecting feedback, and this boils down to feedback along a few dimensions. Number one, user feedback. Number two, actual logs of which questions are getting asked over and over again, which you can use to identify patterns. And number three, observability analytics, to understand which tasks are performing well, which model is performing well, and which has low latency, and this can help inform and improve your adaptive routing techniques. So collecting feedback is a very, very important part of actually improving the reliability and accuracy of these systems at large.
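A minimal sketch of those two habits, under stated assumptions: keep a small held-out evaluation set even though nothing is being trained, score the retrieval step separately from the final answer, and log every interaction so user feedback and recurring questions can be analyzed later. The eval cases, pass criteria, and the answer_fn and retrieve_fn hooks are hypothetical placeholders for whatever pipeline is being tested.

```python
# Held-out evaluation plus feedback logging for an LLM application.
# EVAL_SET, the pass criteria, and answer_fn / retrieve_fn are illustrative.
import json
import time

EVAL_SET = [
    {"question": "What is the liability cap in the MSA?",
     "must_retrieve": "liability", "must_contain": "12 months"},
    {"question": "How fast must suppliers confirm a purchase order?",
     "must_retrieve": "purchase orders", "must_contain": "five business days"},
]

def evaluate(answer_fn, retrieve_fn) -> dict:
    retrieval_hits, answer_hits = 0, 0
    for case in EVAL_SET:
        # Score retrieval separately from the final answer, so a failure can be
        # attributed to the retriever or to the LLM step.
        passages = retrieve_fn(case["question"])
        if any(case["must_retrieve"] in p.lower() for p in passages):
            retrieval_hits += 1
        if case["must_contain"].lower() in answer_fn(case["question"]).lower():
            answer_hits += 1
    n = len(EVAL_SET)
    return {"retrieval_hit_rate": retrieval_hits / n, "answer_accuracy": answer_hits / n}

def log_interaction(question: str, answer: str, user_rating=None) -> None:
    # Append-only log: raw material for feedback loops, recurring-question
    # analysis, and latency and quality dashboards.
    with open("interactions.jsonl", "a") as f:
        f.write(json.dumps({"ts": time.time(), "question": question,
                            "answer": answer, "rating": user_rating}) + "\n")
```

Running the same evaluation after every prompt, retrieval, or model change helps regressions show up before users see them.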
Let me ask you one question, because you're delivering value to Fortune 500 customers. There are probably 50 evaluation startups out there. Are there too many? Are there too few? On the one hand I've heard it's so hard to do evaluation; on the other hand I've heard there are 50 startups doing evaluation. What's your take? It's a really good question. What we have found is that you need to work from the use case backwards. There are some really interesting frameworks that people have come up with, but they aren't settled patterns and standardized choices yet. What has really worked for us is starting with the use case and working backwards, because we're still at the stage with LLM reliability where the problem selection is what informs the evaluations. So what I'm hearing is that it's not a solved problem yet. Yes, right. Jerry, this is a topic you're passionate about too. Yeah, I basically agree with everything Arjun said; maybe just a few points to add. One is that, along the lines of there being a lot of people from different backgrounds getting into the space, especially in the developer community, some of them have experience training models and validating on evaluation datasets, and others don't, especially if they don't have a traditional data science or machine learning background. I do think pretty much everybody building production-grade ML systems these days should have some view towards evals. If you don't have that experience, you should at least learn some of the basics, because understanding how stochastic machines operate requires some notion of how to do evaluation; the way you test a machine learning system is different from writing a unit test or an integration test for software. So that's one point I would add, and it's something that requires a lot of education: best practices, YouTube tutorials, blog posts, those kinds of things. We're investing a decent amount of effort there, both first party and through our evaluation partners. The second piece is that some emerging practices for standardized evaluation metrics are probably starting to appear, but as Arjun said, there's a ton of customization you need to make sure the metrics reflect your use case best. In terms of standardized metrics, if you're building RAG, for instance, one basic thing is that you should evaluate the quality of your retrieval system, and that is actually completely separate from the LLM. You can evaluate that on its own; evaluation of ranking systems has been around for 10 to 20 years, right? Cool, very good. So before we wrap up the session, there are two topics I wanted to discuss real quick. One is use cases. We discussed technology, and many people know about use cases like customer service and documents, those kinds of things. Arjun, you work with Fortune 500 companies. What use cases do you see that people don't talk about much, but that you see as having huge potential for gen AI? Yeah, I want to start by saying something I think everybody intuitively understands but doesn't say, so I'll just say it: the perfect GPT-4 application is yet to be discovered, and I think it's important to be honest about that. With that said, the first instinct a lot of people have is customer service. Why? Because people have heard about it, and ChatGPT gives you a prior, a lens to think about it through. But I think that's the tip of the iceberg, things like email drafting or customer service or marketing copy. Where I think the real enterprise value is, and where we're seeing people really get bottom-line and top-line benefits, is when you start plugging it into operations. The questions there are: how do you make your supply chain more effective, because I can more effectively retrieve information and make a supply chain analyst more productive? I can do better decision making as a result. I can improve the standardization of the choices made across an entire fleet, because historically people were just making decisions in an idiosyncratic way.
Now the AI is able to retrieve the right information in a standardized way and create standardization across a workforce. This not only improves productivity and consistency, it also frees up people's time to up-level and do value-added workflows. In the case of one client, for example, we freed up enough time from reactive work that they could actually be proactive and say, hey, I see you haven't ordered this other product for a while, do you want to talk about that? Which is not even something they had time for before, because they were stuck finding the right data for two weeks. So what you're describing is plugging gen AI into the guts of the enterprise, into how the enterprise works, not just the documents and the customer service. That's pretty cool. So my last question to both of you: you started your companies about a year ago. After almost a year of the journey, what's the biggest learning? Yeah, there have been a lot of learnings. I would say one has probably been the need to be adaptive. Things change so fast, every week. We refresh our processes every week or every two weeks. It's always a good thing if after a month you realize that the current thing you're doing isn't working, because that means things are changing and you're probably growing as a company. The things we did when it was just two people, me and my co-founder, are different from the things now with nine people on the team. Especially in a fast-moving space, it's important to always re-evaluate your priorities. Wow, you still only have nine people at LlamaIndex? Everyone knows about LlamaIndex; I didn't realize it was only nine people. We doubled in size in the past month. Cool, cool. Okay, last but not least. I think the most important thing we have learned from working with our clients is what it actually takes to create successful experiments and adoption. I want to start with a counterexample of what we've seen fail. We've worked with many clients where a consultant came in and showed up with a menu of 50 use cases, which were very top-down because they had studied your 10-K and were treating it like a management consulting exercise. The honest reality is that most of those are terrible gen AI use cases, and most of them are not really going to work. What we have found works most effectively at our clients is actually asking within, because the answers are usually inside your organization, and things need to be bottom-up. The best organizations definitely have some top-down guidance in terms of governance: what kinds of data you can use with AI, what you can't, what kinds of use cases are allowed, just governance boundary conditions. But the sourcing and experimentation is a very bottom-up effort that involves the business users, because ultimately they understand their pain points best, and the perfect GPT-4 applications have yet to be discovered. You're not going to discover them by studying a 10-K somewhere. You're going to discover them by really understanding what the pain points are.
And what this practically means, by the way, is something software companies have already learned, but I think everybody should probably hear it: the product manager is very important, because a product manager's fundamental job is to understand and be the voice of the customer. And number two, designers are very important. So what we do is a lot of desk-side sessions to try to understand the user's pain and really work backwards from that. We think more of that is what's necessary to create value, especially in the zero-to-one phase of value creation for AI. Thank you, Arjun, and thank you, Jerry, for giving our audience a glimpse into the technology, the use cases, and, last but not least, the way we should work with customers to deliver the value of gen AI to them. Thank you very much. Thank you.