Welcome back everyone. This is theCUBE's coverage in Las Vegas for re:Invent, AWS's annual user conference. Our 11th CUBE year at re:Invent. I'm John Furrier, host. Dave Vellante, my co-host, is downstairs in the Sugarcane with MongoDB, another set we're doing down there. This is the biggest editorial coverage ever of a re:Invent. You're seeing everything on SiliconANGLE and theCUBE, live streaming from Palo Alto for our SuperCloud 5 special edition, just for this one generative AI week. I've called it the battle for AI supremacy. Adam Selipsky delivered a mega keynote this morning about how Amazon is going to be not only a leader, but an extender of the value of the mission of AWS. Game-changing event here. I think for Amazon it was a very offensive, positive keynote, and I've got a guest here to unpack all of it: Bratin Saha, VP of AI and Machine Learning Services at AWS, a CUBE alumni who was on earlier as a preview. Bratin, great to see you.

Thank you. Thank you for having me. It's always great talking to you.

So Adam had a keynote and, you know, he could have been defensive, okay? And I've been saying all along, Amazon can't play defense, especially when people say they weren't in AI before Microsoft and others. And everyone who knows Amazon knows that there's a deep pedigree of database, AI, machine learning. Generative AI is just a new thing, but it's an important trend. So I'm glad to see him lay out the keynote, and he's got great products out there. He's showing some real meat on the bone on content and products. We saw data and storage get innovated and reinvented. He laid out the three-layer stack. So much action. I mean, what does it mean? Take us through the implications from an AI perspective, what Adam's keynote delivers.

Look, as you said, John, we have a very long heritage and a rich heritage in machine learning. We have been doing it for longer than anyone else.
And what Adam laid out today was a very comprehensive picture, the three-layer stack that he talked about. It starts with the infrastructure, where we have our GPUs, but we also have our custom processors, Trainium and Inferentia, and you get significant performance benefits and cost-performance benefits from those. Then there's SageMaker, which provides you the end-to-end software infrastructure for building and training. So there are some customers who say, you know what, I want to go in and build my own models or deploy my own models; they can use that. Then you have customers who say, no, I just want to use models that are provided by Amazon or others. For them we have Bedrock, and that provides you the most choice there is out there of state-of-the-art models, along with other capabilities we talked about today like agents and RAG knowledge bases and so on. And then you have the set of applications at the top. You know, we launched Amazon Q today. We think it's going to transform the way employees interact with their data. So you have Q for business users, you have Q for Connect, which is contact center, and you have Q for AWS. So I think it's a really good set, and it just shows that we'll keep on innovating. You know, we had HealthScribe, we had CodeWhisperer, now we have Q, and we'll just keep innovating.

So I want to just get this out of the way, because I want to make sure there's context for the next couple of questions. Your title is VP of AI and Machine Learning Services. What specifically are you overseeing in your function?

So, we talked about the three layers of the stack, and it's across those three layers. There's the SageMaker and the infrastructure portions. Then there is Q. Then there are our investments in health AI, in contact centers, personalization, the no-code tooling that we have in SageMaker Canvas. Then our industrial AI, which is Monitron.
Then we have the AI at the edge and cameras, and you're overseeing all that product. So Q's under you.

Amazon Q, so Amazon Q for business, that was also launched today. That was, you know, the one where customers can ask questions.

The developer one we've seen out there, but there's no IDE support yet on that one. So Q had two components to it: the developer piece, and then the user piece. So the user piece was the new piece. That was the one. So that's the one Matt Wood gave the demo of.

Yes, yes, that's the one.

Okay, got it. Okay, so he mentioned vector embeddings is what makes all the reasoning happen. We're going to hear that tomorrow in the keynote. You don't need to answer that. I think the answer's yes.

In Amazon Q, when a user comes in and asks a query, then, you know, it has to have access to the company's information to be able to get that answer. And there we use a number of techniques to get it quickly and efficiently and accurately.

One of the things coming out of the three-layer stack is obviously the middle layer, which is the LLM foundation model layer. I call that middle layer middleware, because I like to think of it simply as infrastructure, middleware, app: an oversimplification, but a useful mental model. A lot of action is going on there, so I wanted to get your thoughts. I've been thinking a lot since our last interview about the role of data and how data helps AI, but also how generative AI helps data, because there's going to be this symbiotic relationship and flywheel between data and generative AI. What's the vision of that? Because again, you're hitting all three layers of the stack. That means data's got to work up and down and across.

Yes. So, you know, we think having a robust data service is essential for customers, and this predates gen AI. Any machine learning system we have built requires data as a critical ingredient.
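Saha's description of Q answering a user's query over company information is, in essence, the retrieval-augmented generation (RAG) pattern mentioned in the keynote: retrieve the most relevant internal document, then feed it to the model alongside the question. A minimal illustrative sketch of that flow, where a toy bag-of-words similarity stands in for a real embedding model (all document text and function names here are hypothetical):

```python
# Sketch of the RAG flow: embed documents, retrieve the closest match to the
# user's query, and prepend it to the prompt. The word-count "embedding" is a
# toy stand-in for a real dense embedding model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts (real systems use dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    # Pick the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the prompt with the retrieved context before calling the model.
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Our vacation policy grants 20 days of paid leave per year.",
    "The quarterly sales report is due on the first Monday of the month.",
]
print(build_prompt("How many days of paid vacation do we get?", docs))
```

In production the embedding, the vector index, and the generation step would each be a managed service call; this only shows why the model can answer from company data it was never trained on.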
So there, I think at AWS we really have the most comprehensive set of services for storing, querying, and analyzing your data, and then data labeling services, where we have, you know, SageMaker Ground Truth. So I think all of this is critical if you have to build generative AI or any kind of AI. The other way around, you are also seeing gen AI used for data: synthetic data generation. There are many situations where it's hard to get a lot of data, and then you can use synthetic data generation there as well. So I do think there's going to be that flywheel, and also, as customers use our services more, we are going to keep refining things. And, you know, the fact that we have more customers than anyone else actually helps us have that flywheel.

Let me ask you a question. You brought up synthetic data, because that's been a big conversation in the industry. Some people may or may not understand specifically what that is, where it's helpful, and where it's developing. So what is the role of synthetic data? I see the edge as a key area, as the edge needs more data, and why move data around if you can get synthetic data? But the goal of synthetic data is what? What's the purpose, and where is it developing in terms of use cases?

You know, there are many situations. If you're doing machine learning, you need a lot of training data. There are many situations where you can get it, but there are many other situations where it's hard to get. I'll give you one example. Let's say you want to do defect detection. Now in defect detection, you don't have a lot of defects, so it's hard to get real data. But then you can use simulated data, where you're simulating various kinds of defects, to train a machine learning model. So in those kinds of situations where it's hard to get data, you can use synthetic data.

In that case, no one wants to build defects just for the sake of having defects.
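The defect-detection example above can be sketched in a few lines: real defects are rare, so you simulate them by injecting an anomaly into clean images and use the result to balance a training set. Everything here (shapes, intensity values, the "scratch" pattern) is illustrative, not a real industrial pipeline:

```python
# Sketch of synthetic data generation for defect detection: clean images are
# easy to collect, defects are rare, so we simulate defects to balance the set.
import random

def clean_image(size: int = 8) -> list[list[float]]:
    # A clean part: low-intensity pixels with mild sensor noise.
    return [[random.uniform(0.0, 0.1) for _ in range(size)] for _ in range(size)]

def inject_defect(img: list[list[float]]) -> list[list[float]]:
    # Simulate a defect as a high-intensity horizontal "scratch".
    row = random.randrange(len(img))
    defective = [r[:] for r in img]
    defective[row] = [0.9] * len(img[0])
    return defective

def synthetic_dataset(n: int, defect_ratio: float = 0.5):
    # Balance the training set: half clean, half simulated defects.
    data = []
    for i in range(n):
        img = clean_image()
        if i < n * defect_ratio:
            data.append((inject_defect(img), 1))  # label 1 = defect
        else:
            data.append((img, 0))                 # label 0 = clean
    return data

data = synthetic_dataset(10)
print(sum(label for _, label in data), "synthetic defects out of", len(data))
```

A classifier trained on this balanced set sees enough "defects" to learn the pattern, which is the point Saha makes: you get the rare class without ever manufacturing a real defect.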
So you get more visibility into multiple scenarios. Is that going to enter us into a new era of digital twins? Because if that goes forward, we're going to have a lot of simulation.

Simulations are going to be important as well. You can imagine a situation where a lot of simulations are happening for various activities. Synthetic data generation can actually be one part of that, but another part of it is just simulating a physical process.

You know, you've got to be pretty excited about Q. I can imagine that was pretty monumental, probably been worked on for a while. It's a very innovative product. It's really game-changing. Build the CUBE app for me. I can't wait to get our business transforming and reinventing. But one thing that jumped out at me, and I want to get your thoughts on this, because this is the theme I saw in the keynote and see as the new next-gen AWS: the game-changer is cost and performance. Whether it's GPU stacking on a rack, with the interconnects around them, to make it faster but also more energy efficient with the chips. And then on the Q side, the demo of migrating 1,000 Java apps in two days. I kind of fell out of my chair at that moment, because you go, okay, that's a Herculean task to do that fast. And then the teaser that .NET to Linux will wipe out license costs for the customer. So we're in a new era where you're seeing step-function cost savings and massive productivity gains at the same time. Can you scope the order of magnitude for us, like where this is going? Because, I mean, that's pretty massive; that's a no-brainer.

You know, I think generative AI will help us in every task we do in the long run. Maybe not even in the long run; in the medium run, it's going to come in and get integrated into everything we do. Software is going to be a big one.
You know, there's the migration, the translation that you talked about. There's also code generation with CodeWhisperer, right? And we have done studies where we have seen developers get up to 50% more productive. So all of these, I mean, I can imagine test generation going there. All of these are going to see very step-function changes. And the three things you mentioned are going to be key, you know: cost, accuracy, performance, latency. All of these are going to be key.

Yeah, and I like how the NVLink, we're bringing that together. Well, I've got you here. One of the things that Adam mentioned was he sees a lot of success up and down the customer base. Small and medium-sized businesses, the highlight of the guitar players with the passport thing, that was pretty cool, as well as the big leaders like Salesforce. Be a consultant for us now, since we have you here. TheCUBE, we're changing, we have data. What's your advice to us? How do we change and transform? Do we hire more engineers? How would we take advantage, if you're giving us free consulting in real time? How do we take the next step? Because we see advantages.

So we look at the customer persona and what you want to get done. If you're trying to build scalable applications using foundation models, then Bedrock is the easiest way to get started. I would use one of those models, and then, you know, you hook it up with your data sources and off you go. Now if you want to get to a higher level of abstraction, like I can imagine in your company, you want your employees to be using Q. In those cases, you probably don't want to necessarily build it yourself; you just want to go over there.
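For the "Bedrock is the easiest way to get started" advice, the first concrete step is building a model request. A hedged sketch of what that looks like for an Amazon Titan text model: the body format below follows the documented Titan text schema, but model IDs, parameter names, and limits should be checked against the current Bedrock documentation before use.

```python
# Build the JSON request body for an Amazon Titan text model on Bedrock.
# The schema here (inputText / textGenerationConfig) follows AWS's documented
# Titan text format; verify against current Bedrock docs before relying on it.
import json

def titan_request(prompt: str, max_tokens: int = 256, temperature: float = 0.2) -> str:
    body = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    }
    return json.dumps(body)

# With AWS credentials configured, the actual call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId="amazon.titan-text-express-v1",
#                              body=titan_request("Summarize our Q3 results."))
print(titan_request("Summarize our Q3 results."))
```

Swapping in a different provider (Anthropic, AI21, Cohere) means changing the model ID and the body schema, which is exactly the "choice" point made later in the interview.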
So what I would say is it's really important to get started, and it's really important to test these things out and then, you know, integrate them into your workflows, because they are going to change the way things are done.

That's great feedback. I want to get your thoughts on Matt Wood's demo. Three steps, and the third was just go, so really two steps. In every major inflection point that I've lived through in my career, PC and web, every time it was embryonic, then there was massive adoption of the concepts. Some naysayers said, oh, it's never going to be real, the web's a toy, it's too slow. Everything was poo-pooed about the worldwide web, the internet. And then it grows fast. In all those areas things got faster, but the successful players made it simpler and reduced the steps it takes to do stuff, made it simpler and intuitive. Is this where you guys see this going? Because that's the simplicity with the Q demo: click, click, you're in, and step three is you're actioning value.

Ease of use is very important to us. You know, it is pretty fundamental to our design process, and we do UX designs pretty early. We look at the mock-ups. Is this easy enough for the customer? Are we adding friction to it? Because if we don't do that, customers get frustrated. So not just, you know, performance and low cost and latency, which are certainly important, but making things easy is important as well. And there are multiple ways of making things easy. There's the aspect you mentioned, which is we make it easy to set up. There's, of course, easy to use as well, but there's also a no-code aspect, you know, where we have no-code products like SageMaker Canvas and so on to make things easy for customers. So you said it right: ease of use is really important to us.

I love the race-to-the-top comment by the Anthropic founder. I like that philosophy.
But also, after that was over, you guys went into the Titan model. So you've got Anthropic, which is a relationship we've reported on and that was validated at the keynote. It's got some chip optimization, which is going to be advantageous to your customers. And then the Titan model came out. They talked about, and I love hearing this in a keynote, fine-tuning, RAG, which is Retrieval-Augmented Generation, and continued pre-training. I mean, that's pretty techy. So how is Titan going? Anthropic is not an Amazon company; it's just one model of many. Titan's the Amazon model. What can we expect from Amazon to innovate on going forward with Titan? And how does that fit into the selection? Is it more just your proprietary model, or does it work with certain databases? How should we think about Titan?

We will continue to keep innovating in this space and continue to provide choice to our customers, right? So I think you will continue to see more of these models being developed by us. At the same time, we'll continue to partner very closely with Anthropic. Like, you know, their models will be available earlier on Bedrock, as was said today. And with the other providers we have, you know, AI21, Stability, Cohere, and others. And then on SageMaker JumpStart, you have the richest collection of other models. So I think we are going to continue to innovate on Titan while at the same time taking a lot of state-of-the-art models and giving them to customers on Bedrock, just making sure customers get what they need. And that, I think, is really important. As we build these systems ourselves, and you've seen this also, it's really important to have a variety of models that you can choose from.

Yeah, the choice was a great home run in the keynote. That's a winning hand. You never bet against choice. In my opinion, open and choice always wins.
I think that's going to be a long gameplay. I'm sure that's going to pay off. We'll look like predictors; that's easy to predict. Nitro was a big part of things, the NVLink with NVIDIA. Now at the infrastructure layer, what's going on there for AI and ML? Because I wasn't expecting Jensen to be on stage, because you guys weren't included in the DGX launch that they had, but that brings to the table not just GPUs. So you're a customer on one hand, and they have their thing, but you're pulling it together around the chips. And we talked about this on theCUBE when you were last on, about end to end, and now I see where you were going with that. There's more there than just buying GPUs.

Oh yeah. You know, when you are setting up these clusters for generative AI, you're talking of tens of thousands of nodes, tens of thousands of instances. These have to communicate with each other. You have to take the weights and parameters and data and send them out, and then overlap the communication and the computation. So it's just not chips. There is that, then there is the interconnect, then there's the bandwidth of the interconnect. You know, we have EFA networking over there. Then we have our own custom chips. We of course have GPUs. Now we have, you know, the NVIDIA DGX Cloud. I think, you know, we are providing the most performant infrastructure from a hardware perspective to our customers.

I've got to ask you this. I was just with Swami again. I know he's got a database background. He's famous for DynamoDB, and he came up from an intern. Now he's a big dog at Amazon. The data business is changing where the DNA in the industry comes from. You know, when I went to college, one of my tracks was databases. It was databases, and that's what you did: schemas. Unstructured kind of wasn't around. The object store didn't exist. Now it's not about databases.
I mean, you've got to know databases, but the data industry, the data core competency, the data skill is more like an operating-system kind of problem, or systems architecture. What's your view on this? Because a lot of people are transitioning into this platform engineering meets data, moving into the developer world where, just like infrastructure as code, you've got data as code, you've got developers shifting left with data. We hear guardrails; you announced a guardrails product here today. So developers will be shifting data into their pipelines. What is the future of data careers? It's not just databases anymore; it's more like architecting platforms.

I think so. Customers ultimately want actionable insights from the data. So having a robust data platform is always going to be important. And having clean data, and data pre-processing, data post-processing, and data quality as part of it, will be important. Now what will happen is, as people are going about their work, they want actionable insights as they're doing the work. And so you're going to see a lot of this infrastructure, the machinery for providing actionable insights, kind of get pulled into the work. And that is where, you know, the data and the AI and the generative AI come together.

That's a great observation. In essence, latency is now redefined: not just packet latency, but latency to value, the more you're engaging and iterating, or inferring and using inference and other things.

How quickly am I able to surface a good actionable insight that makes a difference to what you're doing?

Well, this is a low-latency conversation we're having here, and a high-bandwidth one as well. Bratin, thank you for coming on theCUBE. We appreciate your insights. It's always great to have you on. It's like a master class in AI. And thank you for your perspective and for taking time out of your busy schedule.

Thank you for having me. It's always a pleasure talking to you.

You're always riffing on theCUBE.
We're riffing, but this is important stuff. The game has changed, and data and AI are going to work together. The synthesis of data and AI, how data helps gen AI and how gen AI helps data, is going to be a new flywheel, and it's going to be a whole other game changer. And theCUBE's got it for you. We've got a ton of coverage coming up at theCUBE: articles on SiliconANGLE and theCUBE.net, tons of videos. This is our biggest re:Invent ever in our 11-year history. Thanks for watching. Back to the studio for our special SuperCloud 5 event. We'll be back with more after this short break.