Hi, this is your host, Swapnil Bhartiya, and welcome to TFiR: Let's Talk. Today we have with us, once again, Rob Hirschfeld, CEO and co-founder of RackN. It's great to have you back on the show.

Swapnil, it's a pleasure. Thank you.

Today we are going to talk about AWS re:Invent, since you were there. Let's quickly talk about how different this year's event was compared to last year's.

re:Invent this year felt like we were getting back to 2019 levels, which is a big deal. In some ways I felt the show was bigger than in 2019, although not bigger from an Amazon or AWS perspective, but bigger from a reseller and partner perspective. This was the first year I've ever seen them literally take over. For people who haven't been to that area of Vegas, there's a row of restaurants right outside the conference floor, and those restaurants got booked 100% by the top vendors in the area. And those vendors were not necessarily the typical ones; they were representing data storage, analytics, logging, consulting. So it's an interesting mix when the partners use re:Invent as their big investment, their big networking, their big customer and sales event. That's a change from past re:Invents, where this is now really the dominant sales opportunity for a lot of these companies. So I feel like that's an important shift: Amazon is moving from being the thing people talk about to the partnerships, the channel, the ecosystem being a dominant part of the discourse.

What was the major, or most dominant, theme this year? Was it AI, gen AI, or something else? Were people getting tired of talking about AI?

Yeah, there was that amount of AI. The challenge isn't that people are tired of talking about it. What people are getting a little exhausted by is the number of fish hooks in the water, if you will, or the amount of froth in the coffee about this topic.
So it's very clear that AI is disrupting every single product, service, and industry, but nobody is entirely clear how. Amazon is widely regarded as playing catch-up here, and part of the way they're doing that is by putting AI capabilities into every product they have. It's a good strategy, but it's a bit confusing. What ends up happening is that AI becomes sort of this white noise in the middle of everything, and it's hard to tell if anything is that real. Interestingly, walking the floor, there's a significant lag between when the floor and booth messaging gets approved and when it all gets built. So from a booth-messaging perspective, AI was not as dominant as it was, say, on the billboards outside, which Amazon doesn't have the same control over.

From RackN's perspective, how was this event?

You know, this event is a continuation of the theme of the cloud being the dominant component. RackN really helps companies do self-managed infrastructure, and normalize and use standard best practices for infrastructure they buy and own. So in some ways you might consider it anti-cloud; we consider it post-cloud. And the companies that are doing really well in cloud are typically our best customers. What we see from that: there wasn't as much discussion of Outposts. They're definitely there and present, but broadly, these get-off-the-cloud efforts, not just by Amazon but by the other cloud providers too, are interesting but aren't seeing the traction that we'd want to see. Amazon is definitely embracing hybrid cloud to a much larger extent and recognizing that a lot of workloads are struggling to move into cloud infrastructure. So that's been a component of this. Although at the same time, I think they would tell you that they've sort of won the messaging battle, and they're not spending as much energy convincing people that they should move to cloud.
They're really talking about how they take advantage of that movement and that infrastructure. And they're doing some amazing work, right? They definitely do have some really great infrastructure available. Training compute is definitely always a factor at this point, meaning a lot of the training being done, a lot of the AI/ML workloads, still require GPUs, and specifically NVIDIA GPUs, and there was a lot of partnership and activity around that. However, Amazon is also working to build its own chips and its own inferencing, and like all of the cloud infrastructure providers, it has some degree of trying to find alternatives to NVIDIA and the CUDA libraries that are so dominant in this field. It's just very expensive and hard to acquire gear. So from RackN's perspective, we see cloud-native technologies and techniques as a core capability for people accelerating how they consume infrastructure. What we're looking at is whether Amazon is doing things that are really only possible in Amazon. Sometimes there's a very strong yes, and sometimes these techniques are not that differentiated.

Of course, we know that AWS is one of the biggest players; they say the internet literally runs on AWS. But are there any areas, especially when it comes to AI or generative AI, and if you look at the Google event or the Ignite event, where you see, hey, this is what Amazon is waking up to, where they are kind of lagging behind but seem to be picking things up? Or is this a bus they have missed?

I think it's too early to really say they've missed any bus. The AI craze, and this is one of the reasons for my comment about the floor not reflecting AI as much yet, is still very young and early in the process.
You know, they don't have the same foundational models, or access to the foundational models; they're building partnerships for that. But I think that's actually an acceptable approach at the moment, for people to be looking at which models they want to use and how they're doing it. It's not clear, and this is one of the things that came through in a lot of conversations I had on site, how much companies need to build their own models at the moment. The overriding story I find is that when companies do think about building their own models, they're very concerned about the provenance of the data, owning the data, and how that data gets moved around. It takes a lot of specialized compute and a lot of east-west traffic, meaning very high-bandwidth interconnect for those systems, and it's not clear at the moment how people can build and run that infrastructure themselves in Amazon or any cloud specifically. So we're going to see some real stress on the cloud infrastructures if they try to take on those types of workloads. What's not clear, though, is whether people really need to build their own models, or to what extent they'll need to build big models. Right now, the expectation is that the major cloud providers will offer inferencing based on an LLM, and they'll have to have one of those. Amazon has the resources to buy or partner, just like Microsoft has been partnering very deeply with OpenAI, and Google is sort of doing it themselves, while Facebook is sitting out there with their own models and looking for partners also. One of the maybe not surprising, but one of the distinctive, partnerships at AWS re:Invent was IBM, showing up very strongly as a partner in this AI space.
They're building watsonx on AWS, and there's also been some discussion about potential quantum efforts around some of these models from an IBM perspective, because they have a leadership position there. I didn't see or hear much discussion of quantum specifically at re:Invent, but that might be something that comes up in the next couple of years.

What about hardware? Because you folks at RackN like hardware. Any hardware-related strategy or announcements, or where do you see things trending from AWS's perspective?

Not that we saw. AWS is still doing a really good job with their Arm-based infrastructure, and we get very excited about that, because I think Arm has a lot of potential. But realistically, from the announcements I was seeing, and maybe one slipped by me, we're not seeing a ton of hardware excitement in AWS specifically. I think there's a lot of interesting infrastructure work going on outside of re:Invent and outside of the cloud vendors, where people are looking at CXL as a very interesting technology over the next couple of years. And there are new architectures coming out: Amazon's pushing hard, Google's pushing hard, AMD is pushing hard on alternatives to the GPUs from NVIDIA. So it's really a question of when we start getting some alternate inferencing technologies and alternate inferencing libraries. The cloud providers have a significant interest in building that, just like many other providers do. I think there's definitely a challenge coming in how the cloud providers make their inferencing technology more accessible and more available. I don't think there's a simple answer yet; it's going to take time for people to adapt to other libraries. But cost will play a factor in that, and that's going to be a big part of owning that type of infrastructure.

This may or may not be related to RackN, but what are your thoughts on Q?

The Q Star pieces, OpenAI's Q Star, that gets interesting; again, it's something that's more rumor than actual implementation. There is an interesting answer, from a Q perspective, in how people are figuring out how to use AI/ML. What we're struggling with at the moment is that there's a huge amount of progress and change and capability and ideas, but turning those ideas into real enterprise value and integrating them into your applications is very tricky, and most people don't have the experts to do that well. So I think we have a real challenge: a ton of AI things going on, and knowing how to actually execute on any of them is really, really difficult.

Before we wrap this up, any closing thoughts you have about this event? And, though it may be too early to say, what would you expect at the next event?

You know, this event felt a degree of normal, which is good. I don't feel like there was, except for AI, an overriding theme or challenge. I do think that, as we look at normalizing how people consume and use cloud infrastructure services (cost, patterns, controls, compliance, and governance), those are things that are still major challenges for users, and places where AWS is working to improve the experience. Hopefully next year we'll see a lot more building of governance and compliance pieces. And I expect that if we don't see that coming from the clouds themselves, we're going to start seeing it come in from a vendor perspective in a much bigger way. Potentially using AI to do a lot more governance and compliance.
So that's still my regular theme for these cloud conferences: they have a ton of new stuff coming in and still not that much governance and control. And when we talk to enterprises, when RackN talks to enterprises, a lot of what they're missing is that control. The innovation they want to do is blocked because they're not able to govern the work they're doing.

Rob, thank you so much for taking the time out today and giving us a great analysis of re:Invent. Thanks for your time today, and I look forward to talking to you again soon.

Thank you. Thanks a lot. This was my pleasure.