I'm going to start by introducing Robert Nishihara. Robert is the co-founder and CEO of Anyscale. As you may know, Robert is also one of the creators of Ray, a great project; I'm actually a big fan of it. It's a distributed framework for scaling Python and deep learning applications. Today, Robert is going to discuss how LLMs are changing the way we organize and do AI. So please help me welcome Robert Nishihara.

Thank you. I'm Robert, one of the co-founders and CEO of Anyscale. Over the past few years, we've been in the exciting position of getting to work with hundreds of organizations using Ray, our open source software, to power AI workloads: from training GPT-4, to powering deep learning at Uber, to powering AI platforms at companies like Pinterest, Spotify, and Samsara. Over the past year, the way these organizations have gone about adopting AI and thinking about AI has changed significantly due to LLMs, and I want to talk a little bit about what we've seen there.

First of all, AI has become far more capable over the past year. The kinds of applications people are building today are qualitatively different from the applications many of them were building one year ago. But as AI has become more capable, the challenges have grown as well. I want to share a few of these challenges.

First is scale. When we started Anyscale, when we started Ray, our bet was that there would be a growing need for scale, for distributed computing to power AI workloads. That may not have been obvious at the time, but today, scale is a fact of life. We see it in models getting larger and no longer fitting on a single GPU, and in data getting larger, especially as we've moved from text alone to multimodal data: images and video. We see organizations routinely using dozens, hundreds, or even thousands of GPUs to power AI across their business. And as scale has become a fact of life, cost efficiency is now top of mind for nearly all of the organizations we work with.

But beyond scale and cost, one of the challenges increasingly facing organizations adopting AI is future readiness. They know that AI is moving faster than they've ever seen before. Many of these organizations have been practicing AI for a long time, since before deep learning; they've been using AI for over a decade. And they remember going through the transition where they were using classical machine learning models, random forests, decision trees, linear models, and then deep learning became a thing. They needed to migrate, to adopt deep learning capabilities, to make sure their business stayed competitive. But in that transition, many of them found that they couldn't do deep learning out of the box. It's far more data-intensive, and it requires GPUs. So in order to actually adopt deep learning and roll it out throughout their businesses, they had to significantly extend their AI platform capabilities and migrate many of their AI platforms to support deep learning. That was a long journey for a lot of businesses. Now the businesses that have gone through this migration are in a similar boat as generative AI takes over, as organizations and departments throughout their companies start to want to use LLMs and other generative models.
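To make the scale point concrete, here is a minimal sketch of the pattern Ray enables: fanning inference work out across many GPUs with ordinary Python. The worker class, the stand-in model, and the toy data below are illustrative assumptions, not production code.

```python
# A minimal sketch of scaling inference across GPUs with Ray.
# The "model" is a stand-in; in practice you would load real weights.
import ray

ray.init()  # connect to a cluster, or start a local one

@ray.remote(num_gpus=1)  # Ray reserves one GPU per actor
class InferenceWorker:
    def __init__(self):
        # Placeholder for loading model weights onto the reserved GPU.
        self.model = lambda batch: [text.upper() for text in batch]

    def predict(self, batch):
        return self.model(batch)

# One actor per GPU; Ray schedules them across the cluster.
workers = [InferenceWorker.remote() for _ in range(4)]
batches = [["scale is", "a fact"], ["of life"]] * 2
futures = [workers[i % len(workers)].predict.remote(b)
           for i, b in enumerate(batches)]
print(ray.get(futures))  # gather results from all workers
```

Drop `num_gpus=1` to try this on a CPU-only machine; the same code then scales to a cluster by pointing `ray.init()` at it and raising the worker count.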
And as we see this, many of these organizations that have been investing in AI for a long time find themselves lacking the capabilities to fully take advantage of LLMs. And they know this is not the end. Even as they work to extend these capabilities to adopt LLMs throughout their business, they know AI is going to continue to change. One year from now, we won't just be using LLMs, we'll be using multimodal models. Text data is not actually that big. Once we start working with image data and video data to routinely power chatbots and applications like that, the infrastructure challenges are going to become far harder and far more data-intensive. So there are more challenges coming down the road.

Today we're largely using GPUs made by NVIDIA. But those GPUs are getting better and better, their architectures are changing, and there's a burgeoning ecosystem of other hardware accelerators. So many organizations are preparing to have more optionality a year from now, to be able to take advantage of a diverse hardware ecosystem.

The AI applications people are building are also getting far more complex. For example, we see many organizations starting to use not just one LLM but multiple LLMs, with many LLM calls powering individual applications. And as they do this, techniques like model routing, which automatically selects among models and sends each query to the right model, are becoming more and more important.

So these are some of the challenges we see organizations thinking through. And as organizations adopt generative AI and LLMs, we generally see that happening in two phases. This happens not as one phase followed by the second across the whole organization, but on a per-use-case basis.

The starting point is typically about building and learning. In contrast, when people were doing AI, say, two years ago, the starting point was typically to curate a dataset, use it to train a model, and iterate on the model. If the model got good enough, you could then start to use it to build a feature or ship a product. We're reversing that now. The starting point is to build a product or a feature, ship it, iterate on it, and validate that the feature makes sense, that it does the thing you want it to do. At this point, there's a premium on velocity, on iteration. We see companies needing to over-invest, to invest heavily upfront, in evaluation. How do I know if the chatbot is doing a good job? That's extremely hard to answer. If I swap one model in for another, did my application get better? It can be hard to tell.

So this is one phase. At this point, companies are typically figuring out their overall strategy toward LLMs. Do they bet on OpenAI? Do they bet on open models? Do they train some of these models in-house? Do they learn how to fine-tune or customize models for their different applications? Many organizations are pursuing a hybrid strategy here: they use a mixture of frontier models from leading foundation model providers like OpenAI, and they also learn to customize and fine-tune models as they transition into the second phase of their journey. But in this first phase, what really matters is experimentation speed, iteration speed, and validating product-market fit.
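As a concrete illustration of the model routing mentioned above, here is a hedged sketch: a trivial heuristic router that sends easy queries to a small, cheap model and hard ones to a frontier model. The model names, the heuristic, and the `call_llm` helper are all illustrative assumptions; real routers are often learned classifiers rather than hand-written rules.

```python
# A toy sketch of model routing: pick a model per query, then dispatch.

def route(query: str) -> str:
    # Illustrative heuristic: long or code-heavy queries go to the
    # stronger (and more expensive) model; everything else stays cheap.
    hard = len(query) > 500 or "```" in query
    return "frontier-model" if hard else "small-fine-tuned-model"

def call_llm(model: str, query: str) -> str:
    # Stand-in for a real API client; returns a canned string here.
    return f"[{model}] response to: {query[:40]}"

def answer(query: str) -> str:
    return call_llm(route(query), query)

print(answer("What is the capital of France?"))
```

The interesting design questions live inside `route`: what signal to use (query length, topic, a small classifier model), and how to trade answer quality against cost and latency per query.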
Once you've validated that the product or feature you're building makes sense, we see organizations double down and really invest. At this point, you often go beyond the capabilities you get from existing APIs. You want to customize models more. You might start doing pre-training. You might start using a whole lot of different models, and using model routing. You start to think about scale, reliability, latency, and continuing to improve quality. What matters now is a broader platform with the flexibility to meet your needs, both current and future. I'm going to adapt this application to use multimodal models: do I have the capabilities to do that?

So these are the two phases we see organizations go through. Many organizations are currently in the build-and-learn phase, and some of the early adopters are just now getting to the point where they've found product-market fit for the applications and products they're building, and they need to scale them.

As organizations go through this journey, the dimensions they care about when evaluating LLMs continue to change, and this will keep changing rapidly over the coming years. Today, many organizations are in the process of getting their LLM applications to work at all. LLMs only became widely adopted over the past year; the capabilities only just became good enough to support many of the use cases people want to build. And because LLMs just became capable enough to enable these applications, quality is paramount. The top consideration when people evaluate models today, in the prototyping phase, is quality.

Now, what we're seeing is that models are getting way better. You have GPT-4, and you're going to have GPT-5; you have Llama 2, and you're going to have Llama 3. Models are going to continue to get better. And as they do, for many applications, people are finding that the models are becoming good enough. Once the models are good enough, the dimensions of comparison can shift dramatically and rapidly. Instead of caring primarily about quality, people shift to caring primarily about cost, latency, and other aspects of the user experience, or about dimensions like privacy and customizability. So as organizations go through their journey of adopting LLMs, and as the landscape changes and models continue to improve, the dimensions people use to evaluate LLMs keep changing, and changing rapidly.

We've seen fine-tuning become pervasive as a technique, with more and more organizations starting to fine-tune. We've seen fine-tuned Llama models outperform GPT-4, and do it at a fraction of the cost. Fine-tuning is a knob organizations can turn to improve their AI applications and reduce costs once they get to the point of really making these applications real.

The last point I want to make is simply that there's been a massive expansion over the past year in who is even doing AI at the organizations we speak with. One year ago, many of the people building AI applications were machine learning experts. Today, they're developers, people who care about building products, not about models. And this has placed a premium on usability. People want to treat AI more and more as a black box.
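For a sense of what that fine-tuning knob looks like in practice, here is a minimal sketch using Hugging Face transformers and peft with LoRA adapters. The model name, dataset file, and hyperparameters are illustrative assumptions, not the setup behind the Llama results mentioned above.

```python
# A minimal LoRA fine-tuning sketch; illustrative, not a recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Inject low-rank adapters so only a small fraction of weights train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Any JSONL file of {"text": ...} records stands in for your own data.
data = load_dataset("json", data_files="train.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-lora", num_train_epochs=1,
                           per_device_train_batch_size=2,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-lora")  # saves just the small adapter weights
```

Because only the adapter weights are trained and saved, the artifact is typically megabytes rather than the full multi-gigabyte checkpoint, which is part of why fine-tuning has become such an accessible cost and quality lever.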
And that's a fantastic thing, because it means far more people are able to build AI applications, and the number of AI applications and use cases we see will continue to grow dramatically. So this is the right time to be in AI. We're incredibly excited about the changes we've been seeing and the ones we're going to continue to see over the next year. Thank you so much.