Hello, and welcome to SiliconANGLE News with a breaking story: Amazon Web Services is expanding its relationship with Hugging Face. I'm John Furrier, SiliconANGLE reporter, founder, and co-host of theCUBE. With me is Swami Sivasubramanian, Vice President of Database, Analytics and Machine Learning at AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on and taking the time.

Hey, John, pleasure to be here.

You know, we've had many conversations on theCUBE over the years. We've watched Amazon move fast into large-scale machine learning, and SageMaker became a smashing success. Obviously you've been at this for a while. Now, with ChatGPT and OpenAI, there's a lot of buzz going mainstream. It takes AI from behind the curtain, inside the ropes of the industry, to a mainstream audience. So this is a big moment for the industry, and I want to get your perspective, because your news with Hugging Face looks like another telltale sign that we're about to tip over into a new phase of accelerated growth, one that makes AI application-aware, application-centric, more programmable, with more API access. What's the big news with AWS and Hugging Face? What's going on with this announcement?

Yeah, first of all, we're very excited to announce our expanded collaboration with Hugging Face. As you know, I consider Hugging Face the GitHub for machine learning. With this partnership, Hugging Face and AWS will be able to democratize AI for a broad range of developers, not just a handful of deep AI startups. We can now accelerate the training, fine-tuning, and deployment of these large language models and vision models from Hugging Face in the cloud.
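In outline, the workflow Swami describes (start from a pretrained Hugging Face model, fine-tune it for one task, then deploy it in the cloud) might look like the sketch below. This is an illustrative sketch only, not the real SageMaker SDK: the function name, model ID, instance types, and hyperparameters are assumptions added for clarity, not details from the announcement.

```python
# Illustrative sketch of the fine-tune-and-deploy flow described in the
# interview. All names, instance types, and hyperparameters here are
# assumptions; this is not an actual AWS API.

def build_finetune_job(model_id: str, cost_optimized: bool = True) -> dict:
    """Return a hypothetical job spec: fine-tune a pretrained Hugging Face
    model for a specific use case instead of training an LLM from scratch,
    and serve it on cost-efficient hardware at production scale."""
    return {
        "base_model": model_id,            # start from a pretrained model
        "task": "summarization",           # fine-tune for one concrete use case
        # Hypothetical instance choices: AWS's purpose-built chips
        # (Trainium for training, Inferentia for inference) when cost
        # efficiency matters, general-purpose GPUs otherwise.
        "train_instance": "ml.trn1.2xlarge" if cost_optimized else "ml.p4d.24xlarge",
        "serve_instance": "ml.inf2.xlarge" if cost_optimized else "ml.g5.xlarge",
        "hyperparameters": {"epochs": 3, "learning_rate": 5e-5},  # illustrative
    }

job = build_finetune_job("google/flan-t5-base")
```

The point of the sketch is the shape of the decision: the developer supplies a pretrained model and a task, and the platform's job is to make the training and serving hardware choices cost-efficient.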
For broader context, step back and look at the customer problem we're trying to solve with this announcement. These foundation models are now used to create a huge number of applications: text summarization, question answering, search, image generation, creative work, and more. That's what we're seeing in these ChatGPT-style applications. But there's a broad range of enterprise use cases that we don't even talk about, because these transformative generative AI capabilities and models are not available to millions of developers. Training these LLMs from scratch can be very expensive, time-consuming, and requires deep expertise. More importantly, customers don't need generic models; they need models fine-tuned for their specific use cases. And one of the biggest complaints we hear is that when they try to use these models for real production use cases, they are incredibly expensive to train and incredibly expensive to run for inference at production scale. Unlike web-search-style applications, where the margins can be really high, in enterprise production use cases you want efficiency at scale. That's where Hugging Face, by integrating natively with Trainium and Inferentia, can handle cost-efficient training and inference at scale; I'll deep-dive on that. And by teaming up on the SageMaker front, the time it takes to build and fine-tune these models is also coming down. That's what makes this partnership very unique, so I'm very excited.

I want to get into the time savings and the cost savings on training and inference; it's a huge issue. But before we get into that, how long have you been working with Hugging Face? I know there's a previous relationship, and this is an expansion of it.
Can you comment on what's different between what's happened before and now?

We've had a great relationship with Hugging Face over the past few years, during which they have made their models available to run on AWS. In fact, many of our customers even used their BLOOM project. BLOOM, for context, is their open-source project that built a GPT-3-style model. With this expanded collaboration, Hugging Face has selected AWS for the next generation of its generative AI models, building on their highly successful BLOOM project. And the nice thing is that through direct integration with Trainium and Inferentia, you get cost savings in a really significant way. For instance, Trn1 can provide up to 50% cost-to-train savings, and Inferentia can deliver up to 60% better cost and 4x higher throughput. So as they train that next generation of generative AI models, those models are going to be accessible to all developers, and a lot cheaper to run as well. That's what makes this moment really exciting, because we can't democratize AI unless we make it broadly accessible, cost-efficient, and easy to program and use.

Okay, thanks Swami, really appreciate it. Swami is a CUBE alumni and Vice President of Database, Analytics and Machine Learning at Amazon Web Services, breaking down the Hugging Face announcement, a relationship he called "the GitHub of machine learning." This is the beginning of what will be a continuing competitive battle with Microsoft, which is partnering with OpenAI. Amazon has been doing this for years; they've got Alexa, and they know what they're doing. It's going to be very interesting to see how this all plays out. You're watching SiliconANGLE News, breaking it here. I'm John Furrier, host of theCUBE. Thanks for watching.