From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. Generative AI has created a new mandate in enterprise tech with significant implications for all companies generally and AWS specifically. Amazon's powerful playbook, based on agility, developer choice, power, scale, reliability and security, now has to evolve to accommodate simplicity and coherence for mainstream customers. This imperative came into clear focus at AWS re:Invent 2023. AWS continues to innovate at a very fast pace, but must now do so in a changing customer environment that increasingly values direct user productivity gains through software. How AWS is navigating this challenge is our topic today. Hello and welcome to this special episode of theCUBE Research Insights powered by ETR. I'm joined today by George Gilbert, principal analyst at theCUBE Research, and in this Breaking Analysis we're going to share our take on Amazon's strategy in the nascent Gen AI era, the chess moves that it's making to maintain leadership, and the challenges of doing so as a $90 billion giant in a fast-moving market. Let's start with the backdrop of this challenging macro environment. This chart from ETR shows what's called net score granularity for AWS across nearly 1,000 AWS customers in ETR's quarterly surveys of about 1,700 customers. Net score, remember, is a measure of spending momentum. Essentially, it tracks the percent of customers that are spending more, that's the green on this chart, and nets out those spending less, that's the red. That blue line shows the net score metric, with a very strong 2020 and 2021 but continued deceleration in 2022 and into 2023. That shows the macro pressures that Amazon and most vendors have faced. 
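To make the net score metric concrete, here's a toy calculation using the simplified definition above (percent of customers spending more minus percent spending less). ETR's actual methodology has more spending categories, so treat this as an illustration only:

```python
# Toy net score calculation. ETR's real methodology has more spending
# categories; this simplified version (percent spending more minus
# percent spending less) is illustration only.
def net_score(spending: list[str]) -> float:
    """spending: one label per surveyed customer ('more', 'flat', 'less')."""
    n = len(spending)
    pct_more = sum(1 for s in spending if s == "more") / n
    pct_less = sum(1 for s in spending if s == "less") / n
    return round((pct_more - pct_less) * 100, 1)

# 45% of customers spending more, 15% spending less -> net score of 30
sample = ["more"] * 45 + ["flat"] * 40 + ["less"] * 15
print(net_score(sample))  # 30.0
```

A decelerating blue line on the chart simply means this number shrinking quarter over quarter, even while the installed base (the yellow line) keeps growing.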
That yellow line is the relative market presence in the data set, which has continued to grow for AWS, but like most companies, the combination of macro spending headwinds and cost optimization challenges has shifted spending patterns. Nonetheless, as we reported, AWS growth rates held up sequentially last quarter and we expect a re-acceleration of growth in Q4. Now George, most companies are seeing these trends; obviously Microsoft and NVIDIA are two obvious exceptions. First, thanks for coming on with me. What are your thoughts on this? Well, we're seeing maybe a deceleration because there's a certain spending bucket that's being reallocated dramatically. The business is becoming much more capital intensive, so we're seeing a huge shift in data center spend towards NVIDIA gear, because it's not just the GPUs, it's the whole clusters. And then Microsoft is drafting off that, and we'll talk about that more relative to AWS. Chasing those big chips. Okay, let's tee up the topics that we're going to talk about today. AWS went back to basics, really the roots of infrastructure, this year. There was even a big focus in Peter DeSantis' talk on Monday night on synchronizing clocks across systems down to literally microseconds. But a big, big focus on silicon with Graviton4 and Trainium2, obviously compute, new high-performance S3 object storage, networking, security, always a focus, and a lot of discussion on database, especially doubling down on zero-ETL and expanding zero-ETL to more data stores. We're going to talk about the AWS Gen AI stack. That was a big, big emphasis of both Adam Selipsky's talk and Swami's talk. I'm going to give you our take on how it came about and our hypothesis about the relationship with NVIDIA, the moves that AWS had to make there. Now the stack has three layers. Infrastructure for AI, then Bedrock, which is the abstraction layer to enable LLM optionality. 
And then the third layer is the applications, which AWS showcased in the form of Q, which is essentially their co-pilot. So we're going to discuss the data imperative for customers, what that means for AWS, and what in our view AWS must do beyond the innovations that they announced here this week. And then we're going to net out our view of where AWS needs to focus going forward. So George, it was striking to see the focus on infrastructure. They really made the point that, hey, we are an infrastructure player. What's your take on that? So, Selipsky had really not emphasized infrastructure in his keynotes in the last couple of years, which were his first couple of years. And I think there are a couple of potential stories behind that. One is, they're emphasizing the fact that they have had great infrastructure and they need to remind customers and partners that they were the original hardware platform. But it was more than that: they architected their hardware to be composable so that they could do things that, as I like to say, if Microsoft even dreamed of, they'd have to wake up and apologize. Like, they could assemble an EMC-like SAN on demand so that you could bring on board a legacy SAP system that needs a SAN, or an Oracle database. They could assemble a supercomputer on demand. Microsoft, if they had to do this, would have to hardwire this stuff together. But their software was not composable. Their software was designed for choice and power, as you emphasized in the beginning. The problem now is the market is shifting, accelerated by Gen AI, so that the axis of competition is software development productivity, and productivity is built on composability. Composability means the software pieces need to be designed to fit together. And we'll come back to that. Let me stop you there. 
The point being that the software that Amazon writes, and they have a lot of software developers, obviously, is designed to really get the most out of the hardware. I think you called it light up the hardware, exploit that hardware. That was really the focus, as opposed to being able to compose solutions for end consumers. Yeah, actually, you're articulating it better than I did, which is that their mindset was a hardware shop: we make this great hardware, we architected it for composability, and then we built all this software to light it up and give customers all this choice, but we never architected the software to fit together because that would have slowed us down. Anyway. That's the challenge today. Yeah. Now, the point worth making is, Microsoft was just the opposite mindset. They were a software shop that built and architected for composability from the start. It all came together at Ignite two weeks ago where they showed, really, at the top of the stack, Copilot Studio, low code, no code, with 1,200 connectors, meaning 1,200 building blocks of Azure services, Office, Dynamics, all assembled and orchestrated by an LLM agent, an OpenAI agent. But the problem for Microsoft was their hardware was always just a place to run their software. So Amazon's out there emphasizing our hardware is really good, because two weeks ago Microsoft made it clear their software is way better. Okay, so what you also saw at Ignite is Satya Nadella talked about how they have more data centers around the world than anybody else. AWS CEO Adam Selipsky counterpunched, which I thought was a very strong point and I'm glad he made it, and it's shown on this slide, where basically he was showing that their regions contain three availability zones that are up to 100 kilometers apart. So they're essentially running at synchronous speeds, and if there's a fire in one or a flood in one, they can fail over to another one. 
And then you can see in the upper right of this chart, it basically says, you know, for the other cloud providers, Microsoft obviously, a region is basically a data center. One data center. One AZ. Okay, you lose that one, you're in trouble. And they say, okay, well, you can fail over to another data center, but it's probably done at asynchronous distances, and so it's a far inferior infrastructure, and I think most CIOs and IT folks understand this. So anything you would add there? Just that it feels like Amazon's trying to emphasize and remind people our hardware is great, and even if you're going to be running third-party software, it runs better on our hardware. I think they say it differently, right? They say our services are running on the world-class infrastructure that we built. But you're right, it's more of a hardware mindset versus a software mindset; Microsoft's coming at it from a different direction. So George, when Adam talked about the industries, he actually led off with financial services, which ironically was the industry that was most fearful of cloud in the earlier days and then became one of their top industries. But healthcare as well, same thing with HIPAA standards, obviously very popular in the cloud now, manufacturing, automotive, and later on in the conversation he brought in IT and telco. But the first announcement went back to the beginning, back to S3 in 2006, which was the beginning of AWS cloud time, and they announced high-performance S3 in the form of S3 Express One Zone, as you see here in this screenshot of Adam Selipsky's announcement. It's basically a fast object store. I mean, S3 revolutionized cloud storage, George, and high-performance object storage has been around for a while, I mean Pure announced it years ago, but AWS has it now and they're lowering the cost for customers. What do you make of the starting point of the keynote announcements being here? 
Again, it goes back to: you can't divorce the infrastructure from the software, because if you have a better hardware architecture you can build better software. Like Snowflake was able to build Snowflake because it was in the cloud and could separate compute from storage, and Teradata and Oracle couldn't. So what they're saying is, now with fast object storage you can treat it like a cache without trying to jury-rig a third-party cache. Okay, let's transition and talk about the NVIDIA relationship. This was actually a big deal. They made a very strong case showing the timeline of firsts, first to use GPUs, and the timeline of how they were ahead of the market with NVIDIA deployments. And Jensen is like a brilliant chameleon at these shows. He talked about the fact that AWS has deployed 3,000 NVIDIA supercomputers, and AWS announced support for DGX Cloud, which rumor had it, and reports had it, that initially they resisted, and that may be why they were maybe not getting as much allocation of the high-performance GPUs. Microsoft last week at Ignite announced DGX Cloud, so AWS had to counter with that. But George, there's more to the story here and a theory that you have. I wonder if you could explain why it's so important that AWS lean into NVIDIA when, again, by all reports, it was initially reticent to do so for DGX Cloud, for example. So once upon a time, supposedly, politics was a backroom thing, and the power brokers in the backroom with their cigars, playing poker, would decide who's going to be mayor, who's going to be president, whatever. Jensen is now that power broker. And he's having a big time. And his chips, his poker chips, are actually not just the GPU chips; it's his forcing the customers to buy whole systems, because he designed these systems, which are really data-center-sized clusters, that were optimized for training the large language models. 
And even though AWS had the best hardware for years and years, it wasn't quite right for building large language models. It was more for smaller... General purpose workloads. Yes, and so here's the conundrum. Microsoft was so far behind in hardware, they outsourced all the new accelerated infrastructure to NVIDIA, and they appear to account for like 25% of NVIDIA's revenue. They're by far their biggest customer, and they're going out and buying capacity at Oracle, Lambda, CoreWeave. So their capex looks like it might grow 55% next year. I mean, it's a monstrous number. So NVIDIA is bundling a lot of additional functionality beyond the GPUs to customers. And if you don't want that additional functionality, if you don't want those data-center-sized clusters, which AWS doesn't really want because they have their own cluster architecture, which was pretty good... so NVIDIA is like, if you don't want our data center clusters, you're not getting much of a GPU allocation. But they still need each other, because AWS needs to say, well, we do have the DGX clusters, we might not have a lot but we have them, and we do have the GPUs. And NVIDIA needs to make sure that their DGX clusters and their GPU chips are here, because there's Google, you know, off in the wings with its own chips and clusters. So they both need each other, but they're not embracing each other. Yeah, they had to do this to really... Optics. Optics, but as well, the impressive thing about AWS is that they actually can do this. I liken it in a way to VMware Cloud on AWS, where AWS said, okay, we're going to build a bare metal service for you and now we can run VMware on top. So people are going to be excited about this. To the extent that they can get capacity, people are going to be running models. 
Remember, these clusters that you're talking about are essentially supercomputers. They're $250,000 a pop, they weigh probably, you know, 70, 80 pounds, and there are hundreds and hundreds of parts in them. I mean, these are really complicated systems. And so AWS, yes, optics, but also they say they're customer obsessed; they've got to go where the customer goes. I know, but they're just not getting a big allocation from NVIDIA. But we don't know whether or not there was a quid pro quo here. Hey, you announce DGX and we'll give you a bigger allocation. We don't know, right? We don't know how long that allocation lasts, or do we? Well, there is one industry analyst, SemiAnalysis, that has as good inside information as anyone, and they know who's getting what allocations because they know the whole supply chain. Sometimes those allocations can change, though, as you well know. Okay, let's talk about the stack, Amazon's Gen AI stack. And there are three layers of that stack, as we said earlier, as we're showing here. The bottom layer is core infrastructure, and Nitro is the enabler. Remember, Nitro is that virtualization engine, that NIC, if you will, but a very lightweight virtualization engine to accommodate silicon diversity around your GPUs, Trainium, Inferentia, Intel, AMD, you name it, they've got it all. And then ML tools like SageMaker and the entire AWS best-in-class infrastructure stack, repurposed for Gen AI. The middle layer is Bedrock, where Amazon is focusing on LLM optionality. We're going to talk about that quite a bit, with the narrative that there's not going to be one model to rule them all, which by the way we agree with, but there's more to that story in our view and we'll share that with you. And finally, the application layer up top, which was showcased in the form of Q and, as you see on this slide, CodeWhisperer, which was announced earlier this year. So George, what's your take on the overall stack, and then we'll double-click into Bedrock. 
Okay, so the overall stack is coherent and it's compelling. They needed Bedrock and it looks like it's coming together. We talked about how they're paying Anthropic to actually train models on the Amazon native infrastructure, but there will be some amount of NVIDIA infrastructure too. So that's the base level. And Nitro was that revolution four years ago that gave them the hardware composability that you talked about. They just didn't do the equivalent for the software, the equivalent composability. So let's drill into Bedrock. Yeah, let's double-click into Bedrock if we can. So this slide here shows Adam laying out the various LLMs. Simply put, Bedrock is AWS' fully managed service that offers access to a lot of different foundation models, as shown on this slide: AI21 Labs; Amazon's own, which is called Titan; Anthropic, which is the company that Amazon's really betting on, they're putting a big bet on that and we'll talk about that more; Cohere; Meta's Llama 2, which many, many folks are using; and Stability AI is another one. These are available through an API in Bedrock, along with other developer tools to build Gen AI applications. So why was it so important, George, for AWS to take this optionality approach to LLMs? Okay, so first, they needed optionality because they didn't have a frontier model of their own that they could build on. Well, they had Titan. But that's not... when I say frontier model, I mean one of the ones that follows the scaling laws, where there's this golden ratio of compute, model size in parameters, and training data. And as you move generations, each generation is generally 10 times bigger on each dimension, and incredible new capabilities emerge. It's an emergent phenomenon. And that's what OpenAI brought. Microsoft didn't have this either, but that's why they invested in OpenAI. They saw this coming. I think their first investment in OpenAI was 2019. 
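The optionality being described shows up concretely in how each Bedrock model family expects a differently shaped request behind one common invoke-style API. The payload shapes below are an illustrative sketch of that pattern, not an authoritative reference; the real schemas are provider-specific and evolve over time:

```python
import json

# Illustrative request builders for different Bedrock model families.
# The exact payload schemas are provider-specific and change over time,
# so treat these shapes as a sketch of the pattern, not a reference.
def build_request(model_id: str, prompt: str, max_tokens: int = 256) -> dict:
    """Return a modelId/body pair for an InvokeModel-style call."""
    if model_id.startswith("anthropic."):
        body = {
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }
    elif model_id.startswith("amazon.titan"):
        body = {
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": max_tokens},
        }
    else:
        # Llama 2, Cohere, AI21, Stability each have their own schema
        body = {"prompt": prompt, "max_gen_len": max_tokens}
    return {"modelId": model_id, "body": json.dumps(body)}

# Same call site, different model: the optionality Bedrock is selling
req = build_request("anthropic.claude-v2", "Summarize our incident reports")
```

The point of the managed service is that swapping the model ID is the only change the application makes; Bedrock absorbs the per-provider differences.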
And so, okay, they've been on this for a while. And Google was on it, but was reluctant to apply it to search because it radically destroyed the cost structure. So there was a business model reluctance. And then the Anthropic guys spun out of OpenAI, so they're really the third frontier foundation model. Now, the reason for this optionality: it's not just for Amazon customers. With this slide you're showing, Amazon's saying, and this was part of their hardware mentality, we have this great hardware, we have all the models except OpenAI's, we run them all. Yeah. And that's what you need. Yes. But your point being, there's more to the story. Amazon software needed a frontier model to apply to their own internal needs. To Q, for example, which we'll talk about. And the Titan model... Amazon wasn't taking LLMs seriously until like this year. Andy Jassy even said on one of the earnings calls, I think like six months ago, this stuff didn't get interesting until this year. Well, I think most companies hadn't really paid attention until this year. Now 100% are paying attention. Yeah, but they got caught. What percent were paying attention in 2022? A very, very small slice. But here's the thing. If you're a software vendor, it's not enough to say, oh, okay, now we've got a frontier model on our platform, because it takes years of building the frontier model into the functionality of your tools. The frontier models are not really end-user products. They're enabling technologies. So it's like SAP can't say, oh, well, relational databases have come along and now we're just going to stand up R/3 in place of R/2. I don't even know if R/2 was on a relational database. But the point is, it takes years of work to build this into your own functionality. And the reason I say this is, when you look at Bedrock and the deal for Anthropic, it wasn't just so that they could offer it to Amazon customers. 
Which brings us to Q. Before we get to Q, essentially what you're saying is they needed their own version of an OpenAI-class relationship for their own internal purposes. So before we get to Q, I want to underscore some of these points and look at some ETR engagement and sentiment analysis. The data is specific to AI and Gen AI models; that's what this chart shows. It's from ETR's Emerging Technology Survey of privately held companies. The vertical axis measures net sentiment, which is intent to engage, and the horizontal axis is mindshare. So look where OpenAI is. It's so far up to the right, it's literally off the charts. And the separation from the pack is notable. Because Microsoft locked in with OpenAI and had the exclusive, AWS had to counter with not only LLM optionality, to your point, in Bedrock, but a tighter relationship with Anthropic. So they made an initial investment of $1.25 billion that could go as high as $4 billion over time. And some of that is in-kind contribution; in other words, Anthropic is going to be paying AWS to use their services. But as part of the deal, AWS is going to be the preferred cloud for Anthropic to train its models, and Anthropic is going to bring its tribal knowledge to help AWS customize silicon for Gen AI. So the distance between OpenAI and the pack is significant, but this market's moving so fast. AWS, like everyone as you pointed out, was caught off guard by OpenAI's impact on the market and has responded pretty quickly. Is it going to be enough, in your view? Yes, the question is timing. How long? Because in software, there are these trade-offs between how much time you have to anticipate a problem so you can rework your architecture to build in a new capability. So let's take Q, for example. Q is meant to essentially accelerate software development. Well, let's bring up Amazon Q. 
Let me just bring up the slides. Basically, the slide says we have co-pilots too. Those are our words, not Amazon's, but that's what they're basically saying with Amazon Q. Here you see Swami giving his presentation. Q is the top layer of that stack and it's AWS's answer to co-pilots and workplace assistants. And we believe it was initially trained on Amazon's Titan, which is its internal foundation model, but Amazon has some work to do here. At the analyst summit this year, a lot of us pushed on what exactly is under the covers powering Q, and it's very clear AWS intends Bedrock to be that answer. They basically said, you don't have to worry about that, it's all Bedrock. But George, based on your research and our hypothesis, AWS has to retool Q to make it run as a world-class co-pilot by leveraging Anthropic and other LLMs. Your take, please. Okay, so I got a bit under the covers on Bedrock by talking to some of the product folks at the booth, beyond the analyst day. And Bedrock looks like it's maturing pretty well, and it's competitive with Azure AI. It just went GA, I think, in September, right? Well, GA in September, October. Okay. But recently. And we've seen the correlation between GA and uptake and adoption be pretty substantial. But the issue isn't Bedrock. It's what LLM have you been building on, and how long have you been building? So... That's relative to Q here. Yes. Okay, so Q is meant to help you through the whole software development lifecycle, because as we said, we're now moving to accelerated software development and operational productivity, which means this stuff's got to work together and it's got to be dramatically easier than it has been. So again, back to Bedrock. The problem isn't with Bedrock. When you talk to them about how they use Bedrock, it calls out to these Amazon services. 
You wrap them in a little bit of code and the LLM says, okay, I'm going to call this, then I'm going to call that, then I'm going to call that. The problem, going back to this theme of composability, is that those things they're calling don't really talk to each other all that well. They have different permissions models. They don't store data in a common data store, so you've got to pipe data between them. Then you've got to transform the data, which means you're back to chewing gum and baling wire. So again, that's not a Bedrock issue. Now, in the case of Q on the software development lifecycle, let's say, again, they were using Titan or something derived from it in CodeWhisperer. We talked about the scaling laws and how emergent capabilities grow with every 10x improvement in each generation. One of the hardest capabilities to emerge in each generation is coding assistance. And if you were never doing LLM research, or weren't taking it seriously, and you slapped together something in the last 12 months and said, oh, we do code gen, it's not going to be competitive with what Google was doing or Microsoft was doing. And that's one reason they need to re-host on Anthropic. That can be fixed pretty quickly. Yeah, I would think by next re:Invent we're going to have to see that. Oh, sure. But there's a more difficult one, which is they were going around telling their largest customers, oh, now with LLMs we're going to be able to read in all the error information, the operational information about how your applications and infrastructure are running, and we're going to actually tell you what's wrong and either tell you how to fix it or fix it for you. This is the observability piece of it, yeah. Yes, yes. So I went and took a look in there, and observability is all about having a ton of contextual information, logs, metrics, events, traces, from all your software and your infrastructure. 
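The composability gap described here, services that don't share a data store, schema, or permissions model, can be sketched with hypothetical stand-in services. None of these names are real AWS APIs; the per-hop transform in `pipe()` is the point:

```python
# Hypothetical stand-in services with mismatched data models; the glue
# code in pipe() is the point, not the services themselves.
def fetch_orders(customer_id: str) -> list[dict]:
    # Service A returns records keyed its own way, in cents
    return [{"order_id": "o-1", "amount_cents": 1250}]

def score_risk(payload: dict) -> float:
    # Service B expects a different shape, in dollars
    return 0.1 if payload["amount_dollars"] < 100 else 0.8

def pipe(orders: list[dict]) -> list[float]:
    """The chewing gum and baling wire: per-hop transforms the developer
    writes because the services weren't designed to compose."""
    scores = []
    for o in orders:
        transformed = {"amount_dollars": o["amount_cents"] / 100}
        scores.append(score_risk(transformed))
    return scores

print(pipe(fetch_orders("c-42")))  # [0.1]
```

An LLM planner can decide the call order, but every one of these hand-written transforms is glue code that composable services would have made unnecessary.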
And you have to make sense out of all that information. There's a lot of information, and you've got to tie it together. So there's the semantics of it and then there's the amount of it. And when they were using Titan, which was this primitive model, or one of their in-house things, what's called the context window, the amount of stuff you can stick in the prompt, is really small. And not only that, they weren't taking context from everywhere. They were like, if there's a network problem, we'll go look in the network log and we'll put the network log error into the prompt. So Q today, what they showed is really a demo. It's a demo. And they've got to extend that. They've got to really retool it for Anthropic and other LLMs to make it deliver on its promises. That one will take longer. You think it'll take longer than a year? Well, it's a journey. It's a journey. But the coding one they can fix, I think, pretty quickly. Yeah. Okay. So another piece of this that we want to talk about is the data, and AWS's data opportunities and challenges. Here you see a slide that, again, Swami showed. AWS's data opportunities and challenges: those are our words, that wasn't on the AWS slide. The quality of data is ultimately going to determine the competitive advantage that customers can get from Gen AI. And Amazon has had a very strong track record of developing databases and other tools that are fit for purpose, we'll call it, aimed at specific use cases and application classes: Aurora for transactions, Redshift for analytics, Kinesis for streaming data, and so forth. Many, many data stores, I don't know, 13, 14 data stores. And this has served them well. 
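The context-window constraint described above can be sketched as a packing problem: only so many log lines fit in the prompt, so something has to choose. The token counting below is a crude whitespace approximation, purely illustrative:

```python
# Packing log lines into a small prompt window. Token counting here is
# a crude whitespace approximation, purely illustrative.
def approx_tokens(text: str) -> int:
    return len(text.split())

def pack_context(log_lines: list[str], budget: int) -> list[str]:
    """Keep the most recent lines that fit within the window budget."""
    picked: list[str] = []
    used = 0
    for line in reversed(log_lines):  # newest lines last, walk backwards
        cost = approx_tokens(line)
        if used + cost > budget:
            break
        picked.append(line)
        used += cost
    return list(reversed(picked))

logs = ["net: link up", "net: packet loss on eth0", "app: timeout calling db"]
print(pack_context(logs, budget=8))  # only the newest line fits
```

With a small window you get exactly the behavior described: one log's error in the prompt and everything else left out, which is why a larger-context frontier model matters for the observability use case.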
The challenge now, in a world where Gen AI becomes the productivity driver, is how AWS will present a unified front with its disparate data platforms, offering developers coherent data elements and metadata across all the services so that builders can quickly develop intelligent apps infused with AI. George, explain the technical challenges that AWS has in doing so. You teed this up, and this is a case study in the challenges of trying to get to composability. So the operational databases, the Auroras and RDS, DynamoDB, all these different apps, you can't unify all of them on one database. Maybe when we had one app in SAP that was possible, and even then it didn't really work that great. So we kicked the can down to the analytics side, and we were designing a historical system of truth. Originally it was centralized, eventually it'll be federated, but you put all your application data in this coherent repository, which means all your analytic engines need to talk to that one coherent repository. And as we've talked about, we're on this road to intelligent data apps, where you put more and more metadata about that system of truth so that eventually all the intelligence that was in your application logic is stored with the data, and then you have these building blocks, like Amazon, like Uber, you know, fares, riders, drivers, you match them; those are building blocks. Okay. Today, DataZone, for example, has the business metadata. Glue maybe has the technical metadata and runs on a different data store. So the metadata, the data about the data, is in different places today. It gets even more challenging, because even the analytic engines have their own data stores. So Redshift data is in Redshift. They say you can talk to S3, but it's really slow and you can only do four joins. It's limited. Yeah, and then there's S3, and you can run Athena SQL on it and you can run Spark on EMR on it. 
But the OpenSearch, the observability stuff, is its own data store. So Amazon's building data lakes for security, for supply chain, but then they're not using all their analytic engines on them. So they need to get to that common data foundation. They need an analog to the security data lake within their data platform. It's just harder because it's such a wider scope. Right, okay. And the key point to make on that is, in the age of AI, you program your AI with data. So you need a coherent and organized data estate. That's the key point. That's the foundation. All right, let's wrap with some summary thoughts on where we think AWS has to focus in the days, months and years ahead. We want to stress that we don't see this as a winner-takes-all market, like so many tech markets. AWS has got the best infrastructure. It's got a winning hand. Microsoft's got its software prowess, and both can do well in our view. Microsoft frankly has a better margin model because it's a software company, but AWS is going to thrive with its infrastructure excellence. It's shown over the years an ability to execute, but we're laying out some of the challenges that it has to focus on. Gen AI is really changing all aspects of software development, George, through the entire lifecycle. Can you explain the implications of that? So to net it out, Amazon created the cloud market because it catered to the technically sophisticated who wanted choice and power. Amazon did not build a platform as a service. They built infrastructure as a service with a lot of services that you could mix and match. It was the Unix ethos: give me a lot of choice and give me power. And as we now enter the mainstream customer market, those customers don't have the skill, the inclination, or the time to try and bolt all this stuff together. They want productivity, and productivity is built on convenience and simplicity. And Gen AI accelerated that. And so that's the shift. 
All right, coming back to our key points here. Number three: how does AWS make its building blocks simpler for customers to consume in composable pieces? I think we hit on that pretty hard. Microsoft is retooling its entire software approach to do this; as a software company, that's their focus. AWS, while they write tons of software, a lot of it is designed to make the hardware run at peak performance. Light it up, as George was saying. Microsoft treats infrastructure as kind of a necessary layer, but its infrastructure is not the main focus; it's not really best of breed. A key challenge for Amazon is to rationalize its data platform, as we said, in a coherent, consistent, and simpler manner, the way it's done for security and supply chain with its data lakes for those areas. And finally, AWS has to move fast to retool Q on Anthropic and other LLMs to take Q beyond being a cool demo. It's a journey, as you say, George. And AWS, as I pointed out, has consistently shown an ability to execute year after year. They face a big challenge. Can they pull it off, in your view? Oh yeah, I don't think there's any doubt. I think the question is, we've got two ecosystems. There's the relatively closed ecosystem, and not that it closes out others, but the Microsoft one is, we'll put all the pieces together for you. It would be analogous to the AS/400. Amazon's is the UNIX ecosystem with Sun, Oracle, EMC, Veritas, although it's much bigger now. The question is, how many of the pieces will come from Amazon? George, thanks a lot for your time. It was a great analysis, as usual. All right, that's it for now. I want to thank the team here at re:Invent. Brendan, Christian, Kony, Jay, Sheila, thanks for all your help in letting us use your set during this break. Alex Morrison and Ken Schiffman are actually in Barcelona, but they'll be helping us as they always do with the podcast. 
Kristen Martin and Cheryl Knight help get the word out on social media, and Rob Hof is our editor-in-chief over at SiliconANGLE.com. Don't forget, all these Breaking Analysis episodes are available as podcasts. Just search Breaking Analysis podcast, and please subscribe. I publish each week on wikibon.com, which is rebranding to thecuberesearch.com; there's more to be revealed shortly there. And check out siliconangle.com for all the news. If you want to email me, it's david.vellante@siliconangle.com. I'm getting inundated with prediction requests; send them my way, I print them all out, I have a big stack, and we go through them all, I promise. You can reach out @dvellante if you want to DM me, or check us out on the LinkedIn posts. And don't forget to check out etr.ai. We've got a broadened relationship with ETR, we're going to market together, and we're providing access to their data platform through our research services. So if you want to know more about that, let us know. Thanks for watching theCUBE Research Insights powered by ETR. We'll see you next time on Breaking Analysis.