Hello everyone, welcome back to theCUBE's coverage. We're on location at re:Invent, Amazon Web Services' annual user conference, our 11th year of CUBE live coverage, bringing you team coverage this year, bigger than ever for SiliconANGLE and theCUBE. Our entire team's expanding, we're all here. We're getting more interviews, more editorial content, more signal than ever before, as AWS continues to thunder along. I'm John Furrier, host of theCUBE, with Dave Vellante. We also have Rob Hof, Mark Albertson, and Shelly Kramer, our new analyst. She just joined the team, check her out. Dave, we are rocking on the 11th year of theCUBE, and Adam Selipsky on stage, his third keynote. And I've got to say, he crushed it. And of course, we had the preview, we nailed it. Although he didn't reveal it, he kind of connected the dots, but I thought we put it together beautifully. And I thought it was a very strong keynote, notably on point, the pace was great, but with serious jabs at Microsoft. And I think the undertone was: we are not going to let you, or anyone else, say that we're behind in AI. We're going to flex it. To all the naysayers out there: we're different, we're reinventing, this is who we are at AWS. To all the pundits, all the analysts out there saying they're behind, I heard a great keynote where he's basically saying, we're Amazon, trust us, look at what we've done, our track record, and how we've had the experience to establish a reinvention philosophy and playbook. And he laid down the goods. I thought it was a great keynote. It started out a little slow: here are our customers across all verticals, we're number one in everything, all the top people work with us. And then he went into generative AI and the GPUs with NVIDIA, home run. So there's a lot to unpack here, but I thought it was an exceptional keynote, probably his best ever, given what was on the table, the risks that Amazon was facing with the public perception.
I think the gauntlet has been laid down. There's no question, this was his best keynote ever. And John, first of all, it's great to be with you doing the editorial here inside the Sugarcane. Thanks to MongoDB for giving us this space, supporting our editorial with an in-kind contribution. Love that. You absolutely nailed the preview. If you read John Furrier's preview of AWS re:Invent this year, it laid out all the themes. I will say this, John, no question, it was his best keynote ever. He was relaxed, he was really, really solid. But I do think he buried the lede. To me, the most important part of this presentation, and the strongest part of his keynote, was when he basically said, look, you can trust Amazon with your generative AI. Privacy, trust, security is central to what we do. It's built in, it's designed in. And after the OpenAI meltdown, I thought leading with storage was like, okay, great, they're going to lead with object storage; I guess that's because it went back to the beginning of time for AWS. But I thought by far that trust message was the strongest part of the presentation. He did say a number of times, John, "we're the only cloud provider that has..." I don't know, I've got to do some research on that. For example, as you pointed out in your article, Microsoft does use other LLMs, Llama 2, for example. But there's no question that Amazon is laying down the gauntlet and saying we have the greatest optionality, and we heard that from Pfizer. I thought the Pfizer speaker was extremely strong; we heard from her the importance of having that LLM optionality. And we know that from our own experience. Yeah, and the other thing that jumped out, I thought, was notable. I'll get into the Jensen interaction, but in person you could really see the vibe, and I have a lot to comment on that. Yeah, me too, but you were there in person; I was watching in my room.
But I think the revolutionary aspect of some of the demos and some of the products they announced is significant, and I'll tell you why. Some of the things they demoed, with the use of AI at the scale that they have and the unique Nitro capabilities, are going to give them unmatched performance and cost reduction. The Java application porting at the top of the stack, that alone is going to drive cost savings and developer productivity. There are so many areas where Amazon's going to address the cost problem, whether it's energy and the power envelopes, or price performance for application developers and their time. So what jumped out at me was this whole message: cost with us is less, performance is higher. This is the classic price-performance battle, Dave, right now. And I think the battle for AI supremacy, our featured show we're running this week out of our Palo Alto studio, is going to be all about price performance. We're going back to speeds and feeds. Hardware matters, Dave. And you're going to start to see a conversation where everybody's talking about that now. Once again, we were ahead of the game on that one; we called it two years ago. I'm sure others will take credit for it on their shows as well, but that's not the case. That's going to be a big driver. And again, back to Selipsky: the risk at this event, and you pointed it out on our CUBE pod, is that Amazon gets the last word of the year in terms of events. The risk of this show was if Adam didn't hit a home run. It reminds me of that Pat Gelsinger keynote at VMworld that one year when he was kind of on the hot seat. He hit a home run, changed the game, did the deal with AWS with Andy Jassy. That changed the game. This was a similar moment for Adam Selipsky, because he set it up beautifully.
He came out, talked about the future, talked about the customers, but when he got into making the case for why AWS, he simply said, we work backwards from the customers, we think differently. We're kind of a quirky culture. He didn't say that, but that's the word that they use. Jassy, Adam, Amazon, it's a quirky, innovative culture, and they're going to stick to their knitting, Dave. And then he was like, hey, this is who we are, we're always reinventing. And what they want to do is give people access to the technology that they have. And I think there's an opportunity for them to be the supercomputer for the cloud. They can be the supercloud, Dave, because if you look at what's going on in the announcements, I thought that GPU silicon was going to be a war between their custom silicon and NVIDIA. I was blown away that Jensen came on stage, because with DGX Cloud, Amazon was not included. Remember, I mentioned that when they did that. So now he's on stage. Amazon's infrastructure absolutely is the glue layer for scale around GPUs. And the fact that they're dropping mega GPU love, DGX on AWS, it's NVIDIA's AI factory, they called it: 16,384 GPUs connected as one giant supercomputer. That's 65 exaflops, equivalent to 3,000 supercomputers, essentially. And then NVLink, at one terabyte per second, is going to be a key linchpin as they start creating these clusters. Is that InfiniBand? NVLink: 32 Grace Hopper superchips connected as one unit, with Nitro. And you called this with Nitro. I've got to say, the Nitro innovation is the enabler. And that's going to be Amazon's differentiation: their ability to use their existing tech as this kind of connective tissue between chips. And like we said in the post, it's not just the chips, it's what's around them. And you're going to see Amazon just have the best cloud, the supercloud of the market. So that NVIDIA thing caught me by surprise. I thought they were kind of posturing, but...
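A quick sanity check on the cluster figures quoted above. This is a back-of-the-envelope sketch; the assumption that the 65-exaflop number refers to low-precision (FP8-class) throughput, as is typical for GPU marketing figures, is mine, not from the keynote.

```python
# Back-of-the-envelope check on the cluster figures quoted above.
# Assumes (my assumption) the 65 exaflops is low-precision throughput.

TOTAL_GPUS = 16_384            # GPUs in the full cluster, as quoted
TOTAL_EXAFLOPS = 65            # quoted aggregate throughput
GPUS_PER_NVLINK_UNIT = 32      # Grace Hopper superchips per NVLink-connected unit

flops_per_gpu = TOTAL_EXAFLOPS * 1e18 / TOTAL_GPUS
nvlink_units = TOTAL_GPUS // GPUS_PER_NVLINK_UNIT

print(f"~{flops_per_gpu / 1e15:.1f} petaflops per superchip")
print(f"{nvlink_units} NVLink-connected units of 32")
```

The numbers hang together: about 4 petaflops per superchip, and 512 of those 32-way NVLink units making up the full 16,384-GPU cluster.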
I thought it was a little bit awkward. I mean, you were there, but see, I've seen Jensen probably seven or eight times this year at various shows, so the guy is just on message. At Dell, John, he was talking about AI on laptops. He's a chameleon. At Snowflake, in the same jacket, he's awesome; he was talking about supercharging Snowflake data. At HPE, he's talking about on-prem. At Ignite, he was talking about the DGX cloud. Adam didn't have, well, he's kind of a geek, right? So Adam didn't have that long-term relationship like Jeff Clarke does. I mean, they've been hanging out forever doing laptops together and gaming. So that was, I thought, a little bit awkward. But nonetheless, the fact that they're putting the DGX cloud inside of Amazon is a huge, huge move. But it's essentially a me-too to what Microsoft just announced. Well, it's a me-too from a Jensen standpoint. He's selling more GPUs. He's an arms dealer. So he's an arms dealer, but he's kind of right, Amazon's got the best product. So to me, I've been following the DGX cloud and I think it's a separate play. It's a supercloud play for NVIDIA, to be the connective tissue between all clouds. But NVIDIA doesn't have Nitro yet. So up on stage, I was really close to the stage, I could see the interaction. They didn't really look at each other much. Jensen also, when Adam set him up to do the plug for their deal, went into sales mode, pitching NVIDIA basically. So he kind of was, prompting him, just enough: yeah, I'm addressing the deal, but don't forget NVIDIA's got an AI factory, NVIDIA's got this, NVIDIA's got that, and we bring that to you. So it was kind of like two chieftains going head to head: wait, it's my show. He'd do it, get the plug in there. Jensen was playing ball. I mean, he's like, yeah, sure.
No, he played ball, but when you squint through it, you see NVIDIA up there, like, look at us. Yeah, it was a little awkward. We know you're doing custom silicon, we're going to do our thing. You're a big customer, you know, frenemies or whatever you want to call it. There was definitely an interaction there, but keep buying. But he can't diss AWS for its future strategy, because he's selling so many GPUs. And when I talked to Adam one-on-one, it was very clear. And then he laid out the graph showing the 13-year history between the two companies. So what you're seeing with Jensen is that they're a huge customer of NVIDIA. So it's all this kind of posturing. I've seen this in other ways: people always have hedges going on. Maybe NVIDIA DGX is a hedge to get them to buy more, or, hey, we're going to go to other clouds. And I think if you're an arms dealer, let the best cloud win. And if they've got super-duper Nitro and some high-speed interconnect for the networking, and can make those clusters super fast, advantage AWS. Good for them. I mean, they had to get Jensen here because he was on stage with Satya at Ignite; that was just a week before. So he had to do that. I want to go through the announcements real quick. S3 Express One Zone, that's high-performance object storage. High-performance object storage has been around for a while; I'm glad they got to it. Graviton4, big shock, fourth generation. 30% faster, blah, blah, blah, maybe even better for databases, 45% faster for Java. One of the things Graviton is not great for is old Java code. So they talked about their Q, which is their copilot, being able to regenerate and migrate Java code. That's going to be huge for a lot of that legacy Java code. They also announced Trainium2. He talked about their AI stack; you laid that out in your piece. He talked about Q, which is essentially their copilot. He didn't use the term copilot.
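Those speedup claims translate directly into the price-performance argument above. A rough sketch: assuming (hypothetically, this is my assumption, not something AWS stated) that instance pricing stays roughly flat across generations, a throughput speedup becomes a cost drop per unit of work.

```python
# Rough price-performance math on the Graviton4 claims quoted above.
# Assumes instance pricing stays roughly flat across generations,
# so a speedup translates directly into cost-per-job savings.

def cost_reduction(speedup: float) -> float:
    """Fractional cost drop for the same work, given a throughput speedup."""
    return 1 - 1 / speedup

general = cost_reduction(1.30)   # the "30% faster" general claim
java = cost_reduction(1.45)      # the "45% faster for Java" claim

print(f"general compute: ~{general:.0%} cheaper per unit of work")
print(f"Java workloads: ~{java:.0%} cheaper per unit of work")
```

Under that assumption, "30% faster" means roughly 23% less spend for the same work, and "45% faster for Java" roughly 31% less, which is exactly the cost narrative the keynote was pushing.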
But that's interesting, and I want to talk about that a little bit and get your impression of it. But look, Microsoft led with Copilot. It's the copilot era, it's here. You know, again, that was kind of a "we have copilots too," but I really liked their emphasis on AI safety, privacy, and trust. The demo of Q for business users was phenomenal. The Q for developers was phenomenal. The fact that they can migrate code; the IDE support's not fully there yet, but they've got to have that. So, okay, code for developers, check. The business side was killer. The demo that Matt Wood gave was, I thought, very strong. Two steps, you're done. Connect your data, 40 applications out of the box: Google, Salesforce, all the stuff they use. There is a "but" here, John, right? Because the perception, of course, before the OpenAI meltdown was that Microsoft and OpenAI had the lead, and they did. But one of the things Matt Wood talked about was the semantic knowledge getting into the system via vector embeddings. And the issue right now for Amazon is that those vector embeddings are stovepiped by the different data stores. Now, Microsoft alluded to their knowledge graph; they've been talking about knowledge graphs forever. But the point is, all these copilots are going to have to interact with a coherent semantic layer that has consistent information across the platform, from all the data stores: structured, unstructured, semi-structured, different query types. So you don't have to move data, and you don't have to worry about what's the real single version of the truth. The reason that's important is that if you want copilots to actually take action for you, you've got to trust the data. So to me, the real race in AI is the data. And you've talked about this: it's really all about the data, and the quality of the data is going to determine the quality of your AI.
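To make the "coherent semantic layer" point concrete, here's a toy sketch: one retrieval function that scores vector embeddings across several siloed stores through a single index, so the copilot never has to know where the data lives. The store names, documents, and vectors are invented for illustration.

```python
# Toy sketch of a unified semantic layer over stovepiped data stores.
# All names and vectors below are hypothetical, for illustration only.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Pretend each siloed data store produced its own vector embeddings.
stores = {
    "warehouse": {"q3 revenue report": [0.9, 0.1, 0.0]},
    "wiki":      {"onboarding guide":  [0.1, 0.8, 0.2]},
    "tickets":   {"billing outage":    [0.0, 0.2, 0.9]},
}

def search(query_vec, top_k=1):
    # The unified layer scores documents across ALL stores at once.
    scored = [
        (cosine(query_vec, vec), store, doc)
        for store, docs in stores.items()
        for doc, vec in docs.items()
    ]
    return sorted(scored, reverse=True)[:top_k]

print(search([0.85, 0.15, 0.05]))  # surfaces the warehouse document
```

The hard part in practice is not the scoring, it's keeping those per-store embeddings and their metadata consistent, which is exactly the stovepipe problem being described.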
So the data platform, and Adam talked to you about this, he said, our customers are in great shape because of the data platforms. They still have some work to do to make all those data platforms coherent across data zones. Swami's keynote is tomorrow, so my guess is there will be some dry powder available for Swami. I'm going to say there are still going to be some to-dos. Of course, yeah. I think their metadata strategy is still very stovepiped. They need to address that, and that's something that, I don't think you're going to see it tomorrow, but I think you'll see it within a year. I think you'll see announcements tomorrow, but like Adam said, we're going to reimagine things. And the fact that they have that Q, the way it's designed, it does all that reasoning for you, that's a good step. I'm imagining the data management strategy that they have on their data platform, because he told me that. I said, is data management upside down? He goes, the script will flip. And that means we'll probably see some changes on zero-ETL, which he referenced again here, and I'm sure we'll hear about tomorrow. But the data pipelines and the data strategies have to be enabled to handle that narrative. And the narrative at the keynote, which really put the bow on the keynote, is that he laid out the three-layer stack, and this is how the labeling was on the slide. Layer one: infrastructure for foundation model training and inference. That's compute, storage and networking, all the horsepower. Layer two: tools to build with LLMs and other foundation models. That's basically Bedrock; Anthropic came on stage, nothing new there. And then layer three: apps that leverage foundation models.
Okay, what's that layer? Let's unpack that a little bit, because clearly Microsoft has those apps, right? They've got the full stack from silicon, and we can talk about their silicon strategy, all the way up to their collaborative applications. I think that Microsoft is going to take that strategy because that's their only card to play. They've got their built-in applications. It's a strong card. They've got market share. It's their only card, but it's a hell of a card. It's a great card for them, because they have data, they have existing stuff, and AI, the copilot, makes those apps better and sexier. And actually, they're at risk of being displaced. If you can port a thousand Java apps in two days, you can also port Microsoft. And by the way, Adam announced that they're going to have a .NET-to-Linux conversion. Yeah, of course. A lot of applications. There's a lot of Windows running on Amazon. How many license fees does that save? How much money is that going to save a customer? Port all your .NET to Linux, free. Boom, instant money. It's going to rain money for IT, so that's key. And layer three is going to be all about who writes the apps. So Amazon's card is ecosystem. And so we're going to hear the narrative come out again: Amazon's competing with the ecosystem. I'm sure you'll see some announcements come out that are going to be threatening to some companies. What do you mean by that? They're not going to be developing collaborative apps, are they? Well, Redshift came out, and they compete with Snowflake. Yeah, but sure, that's infrastructure. Here's my take. That's database. Yeah, that's still infrastructure. Amazon's got middleware. Amazon's got better infrastructure. There's no question about it.
I thought Adam did a really good job of countering, because Satya, if you listened to his keynote at Ignite, talked about, we have more data centers than anybody in the world. And Adam said, yeah, but we have three AZs for every region, data centers at a distance, up to 100 kilometers apart. So that's huge. He was throwing a dart at Microsoft, and at Cloudflare. Yeah, Cloudflare is a supercloud player. But so Amazon's got better infrastructure. They're going back to their roots of undifferentiated heavy lifting. And look, you're right, the only card that Microsoft has to play is, we've got apps. Yeah, and their chips are generations behind, so they can't compete there. Let's talk about that. Okay, so they announced two chips: Maia, basically for AI training and inference, and Cobalt, a general-purpose cloud chip, that's their Graviton. They also announced their version of Nitro. It was definitely a copycat of what Amazon has announced. The question is, because it's ARM-based, and ARM is a well-documented standard, does that allow a company like Microsoft, with a lot of resources, to catch up to Amazon? Or is it a case where there's no compression algorithm for experience? Is the ARM standard that compression algorithm? David Floyer thinks yes. Yeah, I do too. I think it's an opportunity, and that's why the emphasis around Nitro is critical. It's going to be all about chips and design. We talked about this when I interviewed Andy Bechtolsheim in 2018; he was the Rembrandt of the motherboard. In laying out a motherboard, you've got the size of the board as a constraint, and here in the cloud, power is the constraint. What Jensen announced on stage with the GPUs is an architectural change to provide supercomputing using connective tissue, like Nitro and other interconnects, around the processors and around the chips.
That is huge. So I can build an ARM chip, but I can't architect low-latency, real-time data pipelining. So Microsoft has to build that. And Microsoft is doing that, I mean, they are building out their own Nitro, they're building out their own fiber optics. But this is going to be it, Dave: the customers will vote with their wallets. If Amazon's got an alternative to OpenAI, and their shit's better, and based on what he's showing out there you can save money and energy, that's hard cash. Their infrastructure's better. There's no question their infrastructure's better. But what Microsoft's doing is making it so easy: don't worry about the infrastructure down below, it's good enough. I mean, it's their playbook. Good enough versus non-differentiated heavy lifting with the best infrastructure. But here's what I'll say, and I want to make a point. Satya, in his keynote, basically said that Maia, essentially their cloud chip, is the highest-performance chip in the business. So they leapfrogged Graviton3. Guess what? It lasted a week. Graviton4 just leapfrogged you. So now the game begins. And we're going to see if there's an advantage to AWS having such a huge lead over the years. They started shipping Graviton in 2018, and they've been working with Annapurna since 2015. We're going to see whether or not that lead is sustainable. Yeah. The chips will be one of those arms-dealer kinds of deals. We'll see them leapfrogging each other, and Amazon's got a clear lead. So Microsoft will be catching up, that's obvious. The thing it's going to come down to is security, right? Security was brought up multiple times. You saw that agent technology; they're going to integrate CloudWatch, CloudTrail and SOC compliance. That's huge. If you're building an app, having that kind of cloud infrastructure is going to be an advantage over Microsoft. And I think that's where they'll slam Microsoft, on the security side.
But if you're a developer or a business, you're going to have that now. I will say, from an app standpoint, you asked the question, I think Amazon's strategy is clear to me now. They're going to create an enablement layer just under the app layer, above layer two. So at the top of layer two, you see an abstraction to enable apps to be built, and they let the customers build their own apps. So you're seeing this conversation be around proprietary assets: your data is your intellectual property. You're going to have complete security within your own instance. You can bring models in. Adam specifically told me that the model comes into your own instance. They bring the model to your data, and nothing ever leaves. You're working with that model and building your own, interacting with each other. No one model rules the world, he said, and he brought that up again here. It's the key point. Foundation models and LLMs at layer two are the new middle layer, Dave. That's what we've been saying on theCUBE, and it's playing out beautifully. And so if you're Amazon going up against Microsoft, Amazon has to use the infrastructure advantage, performance and speed, and provide value to customers. And I'm telling you, price-performance cost is going to be huge. Energy cost alone. Standing up GPUs on your own on bare metal, great. But the energy bill: how many NVIDIA GPUs can you put in a rack before the power envelope is blown out? No question, power is the new motherboard. Here's the question I have. So Adam was really doubling down on the LLM optionality. I feel like that's something, because all these LLM vendors, John, are arms dealers. They're just like Jensen, right? They want everybody to use their LLMs. Now, of course, OpenAI is like the iPhone and AT&T when the iPhone first came out. They got the exclusive, but everybody else is an arms dealer, and eventually OpenAI is going to be an arms dealer.
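On the rack question, a hedged back-of-the-envelope: 700 W is the published board power of an H100-class SXM accelerator, but the rack power budgets and the 30% host/cooling overhead factor below are illustrative assumptions, not vendor figures.

```python
# Hedged back-of-the-envelope on GPUs per rack vs. the power envelope.
# GPU_WATTS is the published H100 SXM board power; OVERHEAD and the
# rack budgets are illustrative assumptions for this sketch.

GPU_WATTS = 700
OVERHEAD = 1.3          # CPUs, NICs, fans, power conversion (assumed)

def gpus_per_rack(rack_kw: float) -> int:
    """How many GPUs fit in a rack's power budget, with overhead."""
    return int(rack_kw * 1000 / (GPU_WATTS * OVERHEAD))

for rack_kw in (15, 30, 60):   # typical / dense / liquid-cooled (assumed)
    print(f"{rack_kw} kW rack -> ~{gpus_per_rack(rack_kw)} GPUs")
```

Even under generous assumptions, a conventional 15 kW enterprise rack holds only a handful of these accelerators, which is why power, cooling, and the density question dominate the bare-metal-versus-cloud math.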
So my point is, it's not hard to replicate that optionality. Where it gets interesting is what you wrote about in your piece: the interaction between, for instance, Anthropic and the chips. Can that relationship between AWS and an Anthropic, or other LLM vendors, make the chips better through tighter integration? And of course, you're seeing Google complain about the Anthropic relationship: we have the relationship too. So it's really interesting. What are your thoughts on that integration between the LLM and the silicon? Is that an advantage for AWS? Yeah. So first of all, Dario Amodei was on stage from Anthropic, and he said a few things. Adam set it up with like three areas: customizing models, use of proprietary data, and fine-tuning and other features. And then they gave some examples: we're being used across all these verticals. When the Anthropic CEO came on, he said, we're building enterprise features. Because the whole question is, who is Anthropic really working with? He said, we picked Amazon for mission-critical workloads. What the hell does that mean? Yeah, what does that mean? It's like an official statement: I'm using you for that, and I'm going to use Google for this. I'm playing with everybody. They're an arms dealer too. So it's the world of arms dealing right now in technology, and Anthropic is one. But Amazon's like, no, no, explain why you're on Amazon. Adam made him explain it, and he said three things. One, we're going to use Amazon's cloud to train our stuff, besides mission-critical workloads. Two, be part of Bedrock. And three, hardware and silicon optimization. He actually said it on stage. So they are definitely working with Amazon to give them an advantage on the silicon. And their mission-critical workloads, I think, are essentially their core product. That's my guess, and I'm giving you examples.
Does a company like IBM have an even greater advantage, with their own LLMs and their own silicon, because they control both? Yeah, or in the case of Amazon, with Titan and their own silicon. Well, here's the problem, and we talked about it on theCUBE: the leapfrogging isn't just in the chip game, it's happening with the LLMs. There's faster copycatting going on in LLMs. So the question is, is Anthropic the right horse for these guys, number one? And number two, if you're going to have unique features on AWS tied to Anthropic, did they bet on the right horse? Because IBM might say, hey, we've got arms deals of our own and we have our own knowledge, and we're going to bring that to the table. And they do. And the same with Google. So these generative AI factories that are emerging are going to be a big deal. I see Google, AWS and IBM as three who have that advantage, because of two things: they have their own LLMs and they're designing their own custom chips. So they can do that tighter integration, to the extent that the Anthropic deal actually has meat on the bone. And I think it does, because of the level of investment and what you heard on stage today. It's not as good as owning the team internally; it's like the old VCE model, we'll bolt on a little storage, a little compute, and a little networking. But it's tighter integration. Look at the success that Oracle has had with that hardware-software integration. So I think AWS, Google and IBM have the advantage there over Microsoft. I think that keynote validated, obviously, the articles we wrote and the access we had to Adam and the executives here. Where the dots connected in my mind is, if I'm a customer, do I really care that there's a lot of proprietary, amazing cost and speed stuff going on underneath?
I really don't care. It's like Intel processors back in the old days: do I care that Amazon's got all this cool Nitro stuff that saves me energy and gives high performance? I get all that cost benefit. I love that. But I don't care how, as long as you're not closed when it comes to running an ecosystem. So I think there's a game-changing shift happening, where the cloud players are the new global infrastructure, like hardware. And it's now systematically built: cluster configurations, connective tissue, laying out geographic locations, regions, Local Zones. So the physical transit and transport of data is going to be rolled out. Okay, now you're getting into some really interesting areas, because of the power law we suggested: some of this stuff is going to be done on-prem. Is that going to be done in Local Zones? Is it going to be done in Outposts, or on Dell and/or HPE infrastructure? This is why I like the DGX from NVIDIA, because someone has to build the connective tissue to where the data is, whether that's Outposts or something else. But it's got to be physically proximate. I think if you have physical data on a premises or at the edge, you're going to have to have inference there, and it's still got to talk to the cloud. So to me, that's distributed computing. And I don't think Amazon will own all the customers. There will have to be an independent third party. Maybe it's NVIDIA, maybe it's VMware. We don't know. Dell, HPE, I don't know about Lenovo, maybe. Dave, we're kicking off two sets here at AWS re:Invent. theCUBE is here, our 11th year. We are on the ground with our team coverage. We're in the MongoDB Sugarcane activation, the Emerald Lounge, that's what they're calling this. Emerald Lounge, that's my VIP pass. We're also going to be on stage in the press area; I'm going up there. Shelly Kramer and Dave are here. We've got Sarbjeet here. All of our family and friends are here.
George Gilbert. Thank you all, from AWS re:Invent. Stay with us all day for live coverage. We're sending it back to the studio; we've got a great team up there with an amazing panel and tons of content. It'll be raining SiliconANGLE and theCUBE coverage all week, from Monday, yesterday, through Thursday. Stay with us and keep in touch; let us know what you like. We'll see you on the social media channels: X, Threads, Facebook, LinkedIn. Check out the special reports too. We've got so much content. SiliconANGLE, I think, has the definitive word on what's going on; SiliconANGLE has the best coverage. Looking around at the other sites, they've got one or two articles; I think we've got like 20 flowing out now, and tons of videos. So stay with us and keep watching theCUBE. We'd love for you to join us in this front-row seat to the innovation, and we love having you along for the ride. Thanks for watching. We'll be back to you guys in the studio.