Welcome everyone to theCUBE's continuing coverage of SuperCloud 5, the Battle for AI Supremacy. I'm your host, Rebecca Knight. I'd like to welcome two guests to theCUBE right now. Harry Alt, he is the EVP of Corporate Development and Partnerships at Datastax, and Mona Chata, she is the Director of Technology Infrastructure Partnerships at AWS. Welcome, Harry and Mona. Thank you. Great to be here, huge fan of theCUBE. Thank you, Rebecca. Excellent. We're talking about all things gen AI. Harry, I want to start with you. Tell our viewers a little bit about Datastax. It's a database management solutions company. Tell our viewers a little bit more about what you do there and how Datastax is currently focusing its efforts. Perfect, love to do it. So Datastax was founded a little over a decade ago to be an enterprise distribution of an open source technology called Apache Cassandra. And roughly seven, eight years ago, during the mobile app boom, enterprises were looking for an always-on distributed operational data store to power the applications that were gonna drive their business and their customer experience. And so many of them selected this open source technology. So most of the apps we all use on our phones, if you used Uber to get to work or to run an errand, if you used Apple today, whether it's Apple Pay or iCloud, JPMorgan banking, Verizon, Capital One, all of these companies use Apache Cassandra as the operational data store to power these applications. And the really interesting element about bringing our partnership together and taking it to the next level with AWS is that the overwhelming majority of these customers are running AI in production today. It's just not generative AI. It's predictive AI.
It's taking unique machine learning models, deploying those in forms of recommendation engines, feature stores that provide really advanced personalization that make us love the Marriott Bonvoy app experience or the Delta Airlines app experience, and sort of enable that application to be always on and serve the customer. So our partnership is evolving now to this next level of generative AI. And we've been on this journey as Datastax for the last four years. We've done four acquisitions. We've been very focused on developer-centric experience with our cloud product and AWS. And we have a purpose-built platform that's already running predictive AI, and we'll talk more about how we with AWS are now bringing generative AI with radical simplicity for the developers and operators of those systems. Yes, exactly. I want to dig more into this new integration. Mona, I want to talk to you about GenAI, which has been the buzzword of 2023 and shows no sign of slowing down next year and in the years to come. I'd love for you to share some insight about how AWS is approaching the GenAI wave. Yeah, and really our... Well, just to sort of echo some of Harry's points there is that, look, AI has been around for decades and Amazon has been doing AI and has incorporated AI into its applications for decades. And it manifests in Amazon Go, you can see it in Alexa, even on the Amazon website, just really being able to create better experiences for our end customers. And we've always sort of incorporated that. And now, as Harry also mentions, generative AI kind of takes it to the next level. And that also manifests in a lot of the partnerships that we have, including the one that we have with Datastax with the new SCA that we've put in place. It's just adding that extra layer to it.
But really our strategy, the AWS sort of strategy for generative AI, has really been rooted in ensuring that we innovate rapidly, to deliver a comprehensive set of services and programs that really help customers accelerate their innovation with generative AI, and doing it in a cost-effective and secure way. And I think that that's really important. And so the way that we look at generative AI is almost in like three mega layers. The first one being that you have to run your models, whether it's these large language models or any other foundational models, with the right power, with the right compute, right? And in a cost-effective way. And so that's why AWS developed AWS Trainium, which powers Amazon EC2 instances that really bring that efficiency to allow people to run their large language models and train them in the most cost-effective way. The other set of chipsets that we have is AWS Inferentia. And that really leverages sort of that custom design where you can take your machine learning models and run them on Inferentia. And what we found is that a lot of other providers of large language models, like Anthropic, for example, are actually incorporating AWS Inferentia and Trainium into their large language model builds. So it's just a very easy way to now build, train and deploy your future LLM models. So that's sort of the first layer, it's all around compute and having that power available to run these foundational models. The middle layer is really, you can kind of think about it as, almost like LLMs, or foundational models, as a service, right? And then being able to embed those into your generative AI applications. So that's Amazon Bedrock. And that is something where, Datastax and us as part of our SCA agreement, that's where Datastax has integrated with Amazon Bedrock.
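To make the middle layer concrete, here is a minimal sketch of what invoking a foundation model through Bedrock can look like. This is a hand-rolled illustration, not the integration discussed in the interview: the model ID, parameter names, and prompt are illustrative, so check the Bedrock documentation for the exact request schema of the model you choose.

```python
import json

def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the request for a hypothetical Bedrock text-generation call."""
    # Parameter names follow the Anthropic-on-Bedrock style, but treat
    # them as illustrative; each model family has its own body schema.
    body = {
        "prompt": prompt,
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    }
    return {
        "modelId": "anthropic.claude-v2",  # illustrative model ID
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

request = build_invoke_request("Summarize our Q3 support tickets.")
# In a real application this request would be passed to the Bedrock
# runtime client, e.g.:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**request)
```

The point of the layer is exactly this shape: the application only assembles a prompt and a small JSON body, and Bedrock fronts the model behind a single managed API.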
And that's a fully managed AI service that includes our large language models like Titan, but also third-party models such as Anthropic's Claude, Stability AI, AI21 Labs and Cohere. And there are others that are integrating with it. And so that sort of middle layer is kind of important to then build upon and run those large language models and embed those into applications via APIs in a very simple way. And then the last layer is really about development tools and having applications that run these large language models or any sort of foundational models. And that's sort of manifested in Amazon CodeWhisperer, which is really your coding companion, right? And it's a key development tool to really accelerate code creation for any generative AI application. And it's trained on billions of lines of code so that you can generate code suggestions that range from snippets to fully functional lines of code in real time. So those are sort of the, I would say, operational aspects of it. So that's on the services side, but then you gotta think about the program side of it. And one of the key things that I don't think we talk about enough is that you gotta give people the skill sets. You gotta have the programs in place, right? That allow developers, as Harry mentioned, to take over and be able to leverage and use these applications and large language models, to be able to build with them, right? And so that's why what we've done is we've built a Generative AI Innovation Center.
And what that does is it really provides experts in AI, technical experts, that really help you develop your application, really help you think about how you want to develop a business around it, think about what the business value is, and then just really help you figure out how you wanna leverage some of the applications, some of the tools that we have, the services, and any third-party applications, like Datastax's AstraDB, in your generative AI applications. And then we also have things that we announced recently, Amazon corporate-wide, around AI Ready. So again, training 2 million people by 2025, so that they have the skill sets that they need in order to not just build, but also just have the knowledge of what it takes to create and to innovate with generative AI. And then we also have an AWS generative AI scholarship that we're providing to high school students, right? And 50,000 high school students is really our target. And you gotta start young, right? We gotta start with the knowledge and the education as quickly as possible. So we just don't want generative AI to be in the hands of a few, but of many, so that, you know, everyone can ultimately benefit from this sort of transformative technology that's kind of been here for a while, but has now gone to the next level. Right, people in the industry, as you said, have been working on this for decades, but it's really sort of captured the public imagination this year with the launch of ChatGPT. Harry, as you've said, Datastax and AWS have worked together for a long time, but this year, I think you said, you sort of took your relationship to the next level with this integration with Amazon Bedrock. Can you talk a little bit more about what you see as the primary benefits of this integration?
Yeah, I think just to riff a little bit off of what Mona said, you know, if you listened to what she commented on, it's all around developer empowerment and developer simplicity of bringing GenAI apps to the companies that they work for, to change the trajectory of those companies, and to do it at low cost of failure, to do it with velocity, to do it with simplicity. And so before I get into the agreement, what we are seeing is a fundamental shift of our partnership that is less enterprise migration oriented and much more new app generation oriented, catering to that developer community. And so that means a lot of heavy lift behind the scenes that our engineering teams have worked on around radically simplifying adoption through APIs. Most developers are not going to want to work with the CQL shell to do their new generative AI app. So having APIs in the language that they prefer, having rich sample frameworks and application starter kits as a part of the experience, developers want to work in a product-led, cloud-native type approach. And so with AWS, we're providing that with AstraDB on AWS, and the integration at the SCA level with Bedrock and SageMaker again simplifies that capability, not just from a Bedrock Python notebook, a quick pip install via SageMaker, but actually the larger ecosystem, Mona referenced other alternative foundation models and LLMs. We work actively with LangChain, as an example, to simplify that onboarding of the system in a retrieval-augmented generation (RAG) application deployment. So our partnership being taken to the next level is a few things. One, it's becoming developer focused. Two, it's becoming more and more product led, in terms of developers don't want to talk to salespeople necessarily. They just want to go. They want to have a cloud-native experience. And so we're doing that in our product integrations.
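The RAG flow described here, retrieve relevant context from a vector store and hand it to an LLM, can be sketched in a few lines. The embeddings below are hand-made toy vectors; in practice an embedding model produces them and a vector store such as AstraDB holds and indexes them.

```python
import math

# Toy document embeddings (in reality: produced by an embedding model
# and stored in a vector database, not hard-coded).
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "account security": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query vector, keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector close to "refund policy"; the retrieved context would
# then be stuffed into the prompt sent to the LLM.
context = retrieve([0.8, 0.2, 0.1])
prompt = f"Answer using this context: {context}\nQuestion: How do refunds work?"
```

This is the whole trick behind the "simplifying adoption" point: the developer-facing surface is just embed, retrieve, and prompt; the heavy lifting lives in the vector store and the model API.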
And then I think lastly, in our partnership in the past, we have not had a large developer go-to-market focus. And I'll just give you one example. Myself and several of our technical leads will be in Bangalore, India in two weeks' time. AWS is hosting a Gen AI conclave. There'll be roughly 1,400 folks there. Very developer-focused event. We're doing dev days before that event with a lot of our customers that are in production with predictive AI already, to help foster that community, bringing people on board and showing them the power of this technology and how easy it is to adopt with our partnership. I think that community is such an important element of this. Turning to you, Mona. You mentioned the strategic collaboration agreement, the SCA, which seems incredibly relevant and pertinent to the times we're living in right now, in particular the objectives that you set out in terms of how AWS intends to work with its partners. Can you talk a little bit about how you collaborate with your partners in terms of AI advancement? And I'm talking here about your partners in consumer packaged goods companies, utilities, governments. How do you work together to make sure you are all speaking the same language? Yeah, that's a good question. And I think that with this agreement that we have, we've been doing strategic collaboration agreements with multiple partners. And I think part of that is to really create transformative solutions for our end customers. And I think at Amazon, you've heard this a lot of times and we'll continue to say this, we always work backwards from our customers. And as we do, and Harry can attest to this, right? Like whenever we are developing any sort of integration that we have, whether it was with AstraDB and Amazon Bedrock, we're always looking at, well, ultimately, what's the best experience for our end customers?
And so that's what we anchor all of our sort of strategic collaboration agreements on, as well as just regular sort of go-to-market initiatives. And really what we're trying to do is we're working with partners that want to create those sort of transformative experiences for end customers and really help to transform their end applications and integrate generative AI solutions into that. Whether it's our services or other third-party LLMs or other solutions that they need, such as Datastax, in order to build their generative AI solutions. So one of the things that we look at is we look at it from sort of three levers, if you will. It's co-build, co-market and co-sell. And so the co-build part is always when we talk about integrations with AWS services as well as with the ISV solutions, that's kind of how we create, that's like the product, right? And focusing on what that product fit is for the end customer. That's sort of the co-build. And the co-market is we really think very thoughtfully about the market part, right? How do we go to market together? And it's not about throwing money at this at all. That's an easy way out. The real thing is you can have a ton of money and not know what the hell to do with it. So this is really, we're very focused on how do we wanna market to our end customers together? We do that in a very thoughtful way, and I think we've done this with Datastax, right? With AstraDB integrated with Amazon Bedrock. And that has really made the difference with our end customers. And as a result of it, we now have end customers across different industries that are interested in figuring out how they partner with us, with this partnership as well as with other AWS partnerships.
I think the other thing that we do is that we also look at those user experiences, making sure they're good, but then we also have a co-sell component of it, which is where our field teams come in and where we have that interlock with our field organizations to ensure that, since they're sort of the front lines to customers, they understand what the solution is, what the advantages are for the end customers, and ultimately how this is gonna benefit them in the long run, and what this really means to them in building their generative AI applications. And one of the things that we heard from customers is, I have a lot of data and I really need my applications to be customized with that data and the other sort of tools that they have. And so enabling data as your differentiator is one of our key components as we're sort of partnering on generative AI solutions with our partners. And doing that securely, I think, is really critical. And so that's why we've developed things like Agents for Bedrock, to really help those customers and those partners build the application so that it's very specific to the business, to your data, and ultimately to the end customers. So that's sort of how we've encompassed it. And then ultimately these all get integrated into a strategic collaboration agreement, similar to what we did with Datastax. Harry, I want to piggyback a little bit off of what Mona was talking about in terms of the customer focus, or to use Amazon parlance, customer obsession. Do you have any specific case studies or use cases of customers, in terms of how they're using Datastax and AWS to build with GenAI and come up with some cool initiatives for their organizations? Yeah, yeah, so let me just kind of dovetail on a couple of things that Mona mentioned. So much of this comes back to the developer experience.
And so one of the things in our strategic collaboration agreement that we did was we provided a free tier for developers to get their vector databases started, to get their operational data stores started at no cost. And we see hundreds of those vector databases created on a weekly basis. So we're getting the velocity in that community, as you mentioned, which is an exciting element of the partnership. And then on top of that, we have thousands of these customers that already have their applications running on Datastax or on Apache Cassandra. And from a vector data store perspective, it makes complete sense for them to run vector in that same operational data store that's certified, proven, supported in the company. And so these dynamics create a catalyst for us to have the engagements. And that's what the partnership's all about. And we will take Mona's money, we won't say no to that. But it is a big part of it that we are both investing in this and we're excited about it. So from an example of customer deployments, what's interesting right now is we are in the early innings, or early phases, of GenAI adoption. The types of prototypes and applications that people are adopting are largely: let me take static data, do vector, have it connect to an LLM through RAG, and then provide some type of output. So we see things like co-pilots, virtual assistants or chatbots that are using vector and LLMs to go to market. And we're excited about that. We're engaging with customers, but it really is just the tip of the spear of the potential of this. And I think in many ways, because of the type of applications that Datastax is involved in, our partnership is looking at the next level of generative AI: how do I bring advanced agents into that real-time operational data store and that real-time experience.
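The real-time pattern sketched here, writing fresh events into the same store the application queries, can be illustrated with a toy in-memory vector store. A real deployment would use an operational database with vector support (Cassandra or AstraDB in this context); the class, IDs and vectors below are purely illustrative.

```python
import math

class VectorStore:
    """A toy in-memory vector store; stands in for an operational
    database with vector search support."""

    def __init__(self):
        self.items = {}  # item id -> (embedding vector, payload)

    def upsert(self, item_id, vector, payload):
        # New events (e.g. a support call from minutes ago) are written
        # into the same store the application already queries.
        self.items[item_id] = (vector, payload)

    def query(self, vector, k=1):
        """Return the payloads of the k most similar stored items."""
        def score(entry):
            v, _ = entry[1]
            dot = sum(x * y for x, y in zip(vector, v))
            na = math.sqrt(sum(x * x for x in vector))
            nb = math.sqrt(sum(x * x for x in v))
            return dot / (na * nb)
        ranked = sorted(self.items.items(), key=score, reverse=True)
        return [payload for _, (_, payload) in ranked[:k]]

store = VectorStore()
store.upsert("kb-1", [1.0, 0.0], "static knowledge-base article")
# A call transcript from three minutes ago is upserted immediately,
# so the very next query can retrieve it:
store.upsert("call-123", [0.0, 1.0], "fresh call transcript")
latest = store.query([0.1, 0.9])
```

The design point is that ingestion and retrieval share one store: there is no batch re-index step between an event happening and it becoming queryable context for the LLM.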
So things like hyper-personalization, really advanced dynamic fraud detection for banks, real-time supply chain decision-making via vector and LLMs, helping support desks not just capture static data via generative AI, but capture real-time data. So you have the call that somebody made three minutes ago, that is now re-incorporated back into that operational and vector data store for querying capabilities. So we're getting started with some easy use cases, a lot of things around basic search capabilities, but where the potential of this is, and where we're gonna engage with our joint customers, is super exciting, and it's gonna change the way everybody has application experiences on a daily basis going forward. Excellent, great note to end on. Harry, Mona, thank you both so much for coming on theCUBE. Thank you. Great being with you. Thank you, Mona. Thank you. I'll see you later. See you at re:Invent. Definitely. Stay tuned for more of theCUBE's live coverage of SuperCloud 5. I'm your host, Rebecca Knight. You're watching theCUBE, the leader in enterprise technology coverage.