From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.

With Watson 1.0, IBM deviated from the Silicon Valley mantra of "fail fast," as it took nearly a decade for the company to pivot off of its original vision. In our view, the opposite dynamic is in play today with Watson 2.0, i.e. Watson X. IBM's deep research in AI and the lessons from its previous mistakes have positioned the company to be a major player in the generative AI era. Specifically, in our opinion, IBM has a leading technological foundation, a robust and rapidly evolving AI stack, a strong hybrid cloud position thanks to Red Hat, a rapidly evolving ecosystem, and a consulting organization with deep domain expertise to apply AI in industry-specific use cases.

Hello and welcome to this week's theCUBE Research Insights, powered by ETR. In this Breaking Analysis, we share our takeaways and perspectives from a recent trip to IBM's research headquarters. To do so, we collaborate with analyst friends in theCUBE Collective: Sanjeev Mohan, Tony Baer and Merv Adrian. We'll also share some relevant ETR spending data to frame the conversation.

This past week, about 50 top analysts spent the day at the Thomas J. Watson Research Center in Yorktown Heights, New York. It is a beautiful facility and an American gem of core scientific research, and it serves as the headquarters of IBM Research. The innovations from this facility over the decades have spanned mathematics, physical science, computer science and other domains, resulting in semiconductor breakthroughs, software innovations and supercomputing. It's a mainspring of IBM's quantum computing research and has spawned several Nobel Prize winners. The event was headlined by IBM's head of research, Dario Gil, along with Rob Thomas, head of software and go-to-market.
As I say, about 50 analysts attended for a full day of briefings on gen AI, infrastructure, semiconductor advancements and quantum computing. For today's Breaking Analysis, I'm going to set the stage with some spending data and my perspectives, and then we'll cut to the conversation with Sanjeev, Tony and Merv, which took place during and immediately after the event.

First, I want to review the top-level macro picture in enterprise tech spending, because there seemed to be some confusion amongst the analysts about the actual data. If you follow this program, you've seen this ETR data format previously. It plots spending momentum on the vertical axis using a proprietary metric called net score, which reflects the net percent of customers in the survey (an N of roughly 1,700 IT decision makers) that are spending more within a sector. The horizontal axis represents the penetration of a sector within the survey. That's called pervasion, and it's a proxy for market presence. The red dotted line at 40% on the Y axis represents a highly elevated level of momentum.

Here's the story this picture is telling us. During the pandemic, as the squiggly lines show, spending on ML and AI peaked. As we exited the isolation economy, we saw momentum on AI spending decelerate. Then, one month before ChatGPT was announced, the sector bottomed and the trend line reset upward. Because top-line IT budgets are not growing dramatically, a majority of AI initiatives (we estimate around two thirds) are funded by taking money from other sector buckets. We're highlighting cloud in this graphic, and you can see its downward trajectory as cloud optimization kicked in over the last several quarters. But other sectors face the same headwinds. It's striking to see the momentum that companies get from rapid feature acceleration and quality announcements in gen AI.
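To make the metric concrete, here's a minimal sketch of how a net score and pervasion calculation might look. This is an illustration of the general idea only; the bucket names and the simple arithmetic are assumptions for the sketch, not ETR's exact methodology.

```python
# Hypothetical sketch of ETR-style survey metrics.
# Each respondent's spending intent falls in one of five buckets;
# net score is the share spending more minus the share spending less.
from collections import Counter

def net_score(responses):
    """responses: list of strings drawn from
    'adoption', 'increase', 'flat', 'decrease', 'replacement'."""
    counts = Counter(responses)
    n = len(responses)
    spending_more = counts["adoption"] + counts["increase"]
    spending_less = counts["decrease"] + counts["replacement"]
    return 100.0 * (spending_more - spending_less) / n

def pervasion(vendor_n, survey_n):
    """Share of the survey citing the vendor: a proxy for market presence."""
    return 100.0 * vendor_n / survey_n

# 100 hypothetical responses for one sector
responses = (["adoption"] * 10 + ["increase"] * 30 + ["flat"] * 40
             + ["decrease"] * 15 + ["replacement"] * 5)
print(net_score(responses))   # (10+30) - (15+5) = 20 -> 20.0
print(pervasion(100, 1700))   # ~5.9% of a 1,700-respondent survey
```

Under this reading, a vendor clears the "magic" 40% line when the customers spending more outnumber those spending less by 40 points net.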
Here's a chart from the August ETR survey that shows which players' products are getting traction in gen AI. The blue bars indicate then-current actual usage and the yellow bars indicate planned usage. You can see Microsoft and OpenAI are the clear leaders, with Google next with Vertex AI, and then IBM's Watson X.AI. "Other" likely includes platforms like Anthropic. The point is that Watson X is clearly in the mix, and IBM's focus is on hybrid cloud, with Red Hat leading that story, and on AI, now with Watson X. Note that the survey was taken just three months after IBM's Watson X announcement at its Think conference in May, and roughly the same 90-day window applied to Google Vertex AI, which was also announced in May. The point is the pace of innovation is fast and adoption is following pretty close to GA. Spending is happening. AWS' Bedrock just went GA last month, and we expect measurable impacts in the fourth quarter.

Now here's a similar graphic, with the Y axis showing spending velocity and the X axis representing market presence for specific AI platforms. The table insert informs the placement of the dots; its two columns are essentially net score and the N in the survey. Let's start in the upper right and notice the positions of OpenAI and Microsoft. They have net scores of 78% and 67%, respectively. Those figures are off the charts, very strongly above that 40% magic line, and they support the narrative that these companies are clearly winning in the gen AI market. The cloud players are also top of mind, as are Anthropic and Databricks with its machine learning and data science heritage. Now, while IBM is showing much lower on both dimensions, look where the company was in April of this year, just one month prior to its announcement of Watson X, which again was in May.
They went from a negative net score, meaning spending was flat to down for a higher percentage of customers than it was up, to a strongly upward-trending 15.8%. That is meaningful, and their net score surpassed Oracle's in the last survey. Interestingly, when you dig into the data, one of the reasons for that uptick is the percentage of new customers: the new logos being added to the Watson X platform jumped from 3.6%, relatively low single digits, in the July 2022 survey to 13% in the October 2023 survey. The other really notable point is that churn dropped from about 16% to 7% over that same period. Remember, legacy Watson 1.0 is still in these figures, so they're working off some of the managed decline there and transitioning the base to Watson X. The point is that quality announcements with strong features are coming fast and furious, and they're catalysts for gen AI platform adoption. We'll talk about that in the analyst roundtable.

So with that as background, let's go into it. These are the seven areas we're going to focus on. First, digging into the top takeaways, we'll look at the Watson X stack, going from silicon all the way up to SaaS. Then we'll evaluate the impact on two ecosystems: the analytics partners (remember, Tony, Sanjeev and Merv are hardcore data and analytics folks, so we're going to focus on that) and the gen AI partners, going all the way up the stack. We'll look at any surprises from the event, and then we'll finish with commentary on the roadmap and the top challenges that IBM must overcome in the market. One area we don't dive into too extensively in the conversation, because it was mostly under NDA, is quantum computing.
Now, I'm not deep into quantum, but I'll say it's starting to feel like it's getting more real, and IBM is betting big on quantum and taking a very long-term view of the technology. My take is that conventional computing is going to be here for decades. The big question we think about here at theCUBE Research is how far existing computing models can keep getting better: people figure out better mathematics and better physics with existing silicon, and innovators keep finding new ways to solve problems irrespective of quantum. So the crossover point is really fuzzy right now, but it's something we're going to start paying closer attention to.

Okay, with that, let's take a look at the analyst discussion. In this episode, we're partnering with our friends in the broader CUBE Collective community: Sanjeev Mohan, founder of SanjMo and creator of the It Depends podcast, and Tony Baer, who's with dbInsight and a longtime friend of theCUBE. Tony's part of a group of outstanding data analysts that we refer to as the data gang. And of course Merv Adrian, a former Gartner analyst and now an independent analyst with the company he formed, IT Market Strategy. Merv had to catch a plane, so we prerecorded his thoughts and we'll work them into the conversation. We're here at the Thomas J. Watson research facility in Yorktown Heights, New York, a beautiful facility. I'm Dave Vellante with theCUBE Research, formerly Wikibon, and we're going to share our thoughts and opinions on IBM's progress in the areas of AI, data and research, maybe a little bit on quantum, and its ability to commercialize and monetize its impressive research capabilities.
And guys, as you know, IBM launched its Watson X portfolio in May of 2023 at its Think conference and continues to evolve the platform with things like Watson X.AI and Watson X.Data, leveraging open standards like Apache Iceberg, partnering with key players, and evolving its roadmap into governance and beyond. So let's get right into it. What are your top takeaways? Sanjeev, kick it off and then we'll go to Tony.

Thank you so much for having us. It is absolutely wonderful to be at such a prestigious location. I have so many takeaways from what we saw today, but I'll stick to only one. My main one is that the speed at which IBM is moving is breathtaking. Watson X was only announced in May 2023; by June, Watson X.AI and Watson X.Data had already gone GA. In September, Code Assistant went GA. And today, in November 2023, Watson X.Governance has been announced. This collaboration between research and the business is something that we have never seen before, and I feel that IBM has literally shrunk the time between what they research and what they commercialize.

Thank you for that. Tony, what's your main takeaway or takeaways?

I'll follow that up. One thing that I never really correlated until I actually got to this facility: it was about a dozen years ago, right after Jeopardy, where we saw the original Watson, maybe we can call it now Watson 1.0, showcasing what it could do on a huge national TV game show. But the thing that really hit me over the head today, and I think what Sanjeev is talking about is a good result of this, is that IBM learned plenty of lessons from the first Watson, which is: let's not build something so unique and so complicated; we need to get product out to market, we need to make this easy to use, we need to integrate the parts.
And in many cases, they're actually building on what they built before. A lot of the Watson X portfolio comes from the Watson tools. So I think they really have learned there. And I will take the liberty of saying, why don't we call this Watson 2.0?

Yeah, thank you for that, Tony. Let's go to Merv and get his take, which we prerecorded. Play Merv, and then we'll come back, talk a little bit about my takeaways, and move on.

Well, the biggest thing, I think, is that IBM is an overnight sensation after 20 years of working on AI technology. And the big story here, I think, is the focus on immediate, actionable business outcomes. They did a lot of pure science for many, many years; we all know how commercially successful that was. But now they've focused on a deliverable set of technologies to create, deploy and optimize things that are designed to solve real-world business problems for some of the largest companies in the world, who need it desperately.

Okay, for me, picking up on what Tony referred to as Watson 2.0: the term embarrassment of riches applies, because this company has a lot of riches. And to your guys' point, I think they got it right this time. With the original Watson, they were trying to make it do things that it wasn't necessarily intended to do, but they built up a lot of expertise and a lot of learnings, and they're really compressing that time to innovation, which is critical in this day and age, obviously. But now let's think about Watson X from a stack standpoint: chips and infrastructure, foundation models and LLMs, the whole machine learning platform that IBM has, and of course data and analytics, all the way up to apps and SaaS. Let's start with Merv and listen to how he thinks about the Watson X stack.

Well, it's been a while since we've looked at the value of a truly deep, vertically integrated manufacturer here.
And to your point, it goes all the way from the silicon up through the models, through the data collection, through the data extraction from assets that aren't really used today, to the creation of commercially viable models that are not oversized the way the open source projects are, but are tuned and sized appropriately for performance and, very importantly, are indemnifiable. All of that sits inside a development environment that knows how to use it, deploy it, version it, and provide the apps associated with it. And then on top of all that is a consulting organization that's actively engaged in delivering that technology package for specific business problems for users today. That's all of the pieces coming together at the same time, not at varying stages of maturity or preparation; everything we saw today is being done somewhere right now by a paying customer.

All right, thank you, Merv. Tony, give us your thoughts on the stack.

Well, for one thing, IBM has obviously thought from the hardware level up to, I don't want to call it the virtualization layer, but essentially the OpenShift layer, the orchestration layer. And that's actually a very unique asset that they have. Then at the top is what I'll call, for lack of a better term, the application layer, which would basically be Watson X and where all the data lives. So from that standpoint, it's a fairly comprehensive vision. It includes pieces that are in various stages of being filled out. And it gives IBM, and we'll talk about this in a bit, an opportunity to put a lot of pieces together. Where I think IBM is really differentiated from, say, AWS or Google, for that matter, is that AWS and Google will make no bones about the fact that they're not going to know their customers' applications or their customers' vertical businesses.
I mean, yes, Google and AWS, I believe, have some horizontal call-center automation options that use AI and are now using generative AI. And I just have a really good feeling, and it's nothing that I can really point to at the moment, but if you look at IBM's history of close engagement with customers, it has a very significant systems integration and consulting business, 160,000 people in that business. They're very familiar with customer business processes. So what I'm really hoping, and what I think IBM really has the chance to do compared to its rivals or frenemies in the hyperscaler world, is to take AI and go further in terms of verticalization, in terms of matching up the right foundation models with, say, some prescriptive types of data sets. I think IBM has a lot of potential there. So that, to me, is the thing that really sticks out.

Yeah, I mean, IBM's always had pretty significant data chops. What's relatively new is the impact of Red Hat as a linchpin. When you think about IBM's mantra of hybrid cloud and AI, OpenShift, containers and Linux are going to be everywhere. Linux is going to follow the data to the edge, and obviously containerizing workloads enables hybrid cloud. But Sanjeev, anything you'd add to that?

Yes, I want to say that if you look at the layers of IBM's AI stack, the layers are not any different from what others do, but there are certain nuances that make them different. For example, at the hardware layer, they can run on NVIDIA; they can run on Intel, AMD, even Google TPUs. Then, in the layer above, we talked about OpenShift and also Linux, but they also have Ansible, which they inherited from Red Hat. So the Ansible Automation Platform is another key component of that layer.
And then as you go up the stack, they've got the whole governance layer; they have their own databases, their own data integration, master data management, so they have all of that. They have the SDKs, the APIs, and then at the very top they have the Assistant, which allows them to do code generation. So that's how they think of the entire stack.

Okay, thank you for that. Let's talk about the impact of the overall ecosystem. IBM's ecosystem, like its stack, spans semiconductor partnerships to hyperscalers and SaaS ISVs, and of course its internal consulting arm and external partnerships with consulting organizations. Sanjeev, why don't you kick this off? How do you think about the evolution of IBM's ecosystem?

So they have partnerships with the major cloud providers, with SAP, Salesforce, Adobe. They're actually working very closely together; we learned a lot today about Adobe Firefly and how much collaboration is taking place. But what really excites me is their partnership with Samsung. The reason I say that is because IBM is rethinking its entire AI-first chip, which is currently at five nanometers and is going to even two nanometers, and they have two partnerships. Samsung is the one that has a foundry and builds their chips, but now they are also partnering very closely with a Japanese company called Rapidus, and they're developing a two-nanometer chip. So I see that as a very exciting part of their partnership story.

Yeah, I'm glad you brought that up, because when IBM jettisoned its foundry, spun off the microelectronics division, people thought, oh, they're getting out of the semiconductor business. Far from it. They changed the business model, leveraging external foundries. Samsung is obviously the number two foundry in the world behind TSMC, but with IBM's partnership it's moving along that curve very rapidly, I think significantly driving innovation.
Tony, what would you add to the ecosystem conversation?

Yeah, a couple of things. I'm going to first continue on Sanjeev's thoughts about Samsung. With IBM, you very much see this as a classic licensing play. And the fact is that there is a ready market out there, because NVIDIA can't supply everybody. So we are seeing kind of a race here. For instance, Amazon has developed Trainium and Inferentia, and Google has its TPUs. The interesting one is Microsoft, which just partnered with Oracle to piggyback on Oracle's NVIDIA superclusters. I wouldn't necessarily classify that as going to a second source; the fact is that all of the big hyperscalers have a huge appetite for these types of chips, and for anybody else to get their hands on them, the best they can do at this point is look for some sort of secondary aftermarket, in other words, buying time that corporations have already contracted on H100s. Where I see IBM playing here is that, through licensing and working with partners like Samsung, it becomes a key part of the supply chain for these advanced AI chips. The other thing is that as we start looking at generative AI, it's going to require lots of different types of foundation models, and there may be a need for different types of chips specialized for different classes of foundation models. So I see huge potential in that.

The one other partnership I want to focus on is AWS. I was here almost exactly five years ago, and I saw Arvind Krishna at that time, before he became CEO, and he outlined IBM's hybrid cloud strategy.
Until that point, IBM had been trying to say, it's IBM Cloud. And at that point, IBM was saying: we realize where the market has gone; we will still very much have our cloud, but we also realize that our customers are going to be using other clouds that have already scaled. And so in the past year or year and a half, IBM has really ramped up its relationship with AWS. It's to the point where I want to see a lot of these Watson X SaaS services, which are initially premiering on-prem, in IBM Cloud, or in hybrid cloud on OpenShift, I should say, go to AWS. I think there's very strong potential there. It's going to be a really strong route to market for IBM.

Thank you for that. And then you think about some of the advantages that IBM has in terms of its leadership in quantum. Now, quantum is still years away from having a measurable market impact, but IBM is building a quantum ecosystem as well. A lot of times people will use the Nvidia CUDA analogy: you've got to have a programming environment that will run applications, and IBM is well on the path there. But as I say, it's many, many years away, although it could be a linchpin of IBM's future business. Let's hear from Merv. Give us your... go ahead.

Yeah, I just want to add, since we were talking about foundation models specifically, that another very exciting partnership I liked from what I saw today is between IBM and Hugging Face. It's a bidirectional, very tight integration. You can go into IBM's studio, which is the front end, and you can run a native model from IBM called Granite, which is fully trained by IBM, and they've published all the data sources, all the training and optimization; everything is out there in public.
Or you can pick a Hugging Face model from its collection of models. Vice versa, you can go to Hugging Face and select Granite. So that's a very good integration they've done.

Great call out. And I tweeted some ETR data on Hugging Face adoption during the session today. Like many ML/AI platforms, it had very high momentum during the pandemic, and then it started to wane. Then, a month after ChatGPT was announced, it started on a new trajectory, and you're seeing that Hugging Face is obviously one of those partners you want to do business with. Let's hear from Merv. Merv, give us your take on the ecosystem, please.

Partnerships are something that IBM has historically been very good at, and one of the most intriguing moments of discussion today was the story about their semiconductor factory: the work they've been doing with Samsung, the work they're doing with New York State, building a new fab for the new generation of hardware that's going to be used for a bunch of this stuff. They're partnering with almost everybody. And I say "almost" advisedly, because it was notable that we heard Google's name mentioned very, very rarely today, and Oracle's name mentioned very, very rarely as well. But we heard AWS, we heard Microsoft, we heard SAP: the usual suspects, with a couple of notable, at least for the moment, omissions. They said it was a work in progress, so it's not clear whether they just haven't closed the loop yet, but right now those guys are not in sight. That was a little interesting to me.

Okay, thank you, Merv. All right, Tony, let's start with you on the next series of questions. Is there anything you heard today that surprised you? Personally, I was struck by Dario Gil's commentary on the steps that IBM has taken to actually turn research into commercial opportunities.
This is something IBM, I think, struggled mightily with in the 2010s, but he had very specific details: organizational details, the process, the mindset, collaboration, silo-busting across multiple research organizations. And then there was Rob Thomas' commentary on focus, where the product folks do nothing but product. They're not doing marketing, they're not doing go-to-market, which used to be kind of one-third, one-third, one-third; they're focused on product. So that was both surprising and refreshing to me. But Tony, anything that surprised you?

A couple of things. I'll continue along that line for a moment: I think a lot of this comes from the top. IBM, for the first time in many generations, has a product person at the top in Arvind Krishna. And with him you've seen a kind of back-to-basics at IBM: let's look at what we do best, become more of a product company, and become more focused. Out of that, of course, comes what Dario was talking about. Each of the labs used to be its own fiefdom; they each had their own specialty. You'd go to Toronto for this, Almaden for that, and the Watson lab here in Yorktown Heights, New York, for something else. Now they're putting it together; it's almost like an ERP of research, in that they're now operating from the same sheet of music. They may still have their specialties, but at least there's now more transparency, more of what I guess we'd call inter-process communication. So I would agree with you; that change, I think, was very exceptional. And as I said, I think a lot of this stems from the fact that they've had a change of leadership that is much more feet-on-the-ground.
But the other thing that really surprised me was the uptake, the ramp-up of Watson X. This product was only announced in May, and yet when we saw all those logos out there... that much appetite in the customer base is unusual for any new IBM product line, even one just going to early release. And I'm not going to attribute it totally to this, but it sort of reflects what we were just talking about with research and with focus. Take Think: in previous years it was just all over the place, but Think in Orlando was very focused, and they stayed focused on two messages, two themes. One was hybrid cloud, the other was AI, and they stuck to it. So I think with IBM being more focused, it's given its customers more confidence to buy into this product, that it won't necessarily become an orphan. And so, as I said, the rapid ramp-up of logos on Watson X surprised the pants off me.

And the data confirms that. If you go back to my tweets on 11/9, the day we were actually here at IBM, I shared some ETR data on spending momentum on the Watson platform compared to a year ago. It's still nowhere near where the big cloud players are, but they made a substantial, meaningful move. Now, one quarter doesn't a trend make, but it's something that we're watching closely. Let's bring in Merv and get his take. Anything surprise you, Merv?
The change in attitude was a bit of a surprise, and it was very notable at the very beginning of the day, when Rob Thomas spoke specifically with Dario, the software organization talking to the research organization, about how they jointly made the decision to invest in this technology when it wasn't necessarily all that exciting to a lot of people. And of course, they'd had their issues with how well Watson had gone. So they made that commitment, but Rob also made the very specific decision to have the people in the software organization work on delivering product, not owning marketing anymore. That was controversial, and he's not entirely sure yet how successful it will be, but it's a renewed focus by IBM on commercial delivery, which frankly has been absent. We've heard an awful lot of stuff that sounded really good as data science, as IT research, but there is an enormous amount of focus right now on stuff their customers can use to solve business problems. And it's a convergence of trends in the marketplace; they could be uniquely well positioned for this moment. That's pretty exciting. It's the first time I've been this enthusiastic about them in a while.

Okay, thank you, Merv. Sanjeev, let's go to you. Any surprises that you'd care to share?

Yeah, I am delighted to see this new IBM; it's refreshing. I feel that for the first time in a very long time there's a sense of urgency, a sense of focus. And what I was quite amused to see is how IBM is leveraging a lot of its acquisitions from the past. To be very honest, in the past, when I used to look at IBM, there would be a FileNet group or a Cast Iron group, all kinds of groups, and they never talked to each other. But today it was quite interesting to see that when you use the AI code assistant or something and the code... oh wow, the lights went out. The lights just went off. Turn the lights on. Yeah. You know, your camera and your... there you go. Yeah. This is real time, baby. Real time.
Yeah, so what I found very interesting is that as soon as the AI makes a recommendation, say you ask the AI to write you a COBOL program to do a certain task and the code assistant does that, or you ask it to translate the code into Java and it gives you recommendations, those recommendations can then be made actionable through Instana, which, like I said earlier, is another one of their acquisitions, to do application performance monitoring. You can use IBM's Databand acquisition to do data observability. So all these pieces are now coming together, and I thought that was very, very good.

Nice. Okay, let's press on in the dark here. Merv, how would you describe IBM's data and analytics ecosystem? Let's talk about the positives, the negatives, the challenges. We're talking here about the data catalog, the databases, which of course are Merv's wheelhouse, integration tools, governance, data ops. Let's hear from Merv.

Yeah, let's start with what's been sorely missing. Their openness story is admirable, and their willingness to work in a hybrid cloud environment is also admirable, but neither is particularly unique. And of course they've been good with structured data and even some unstructured data. One of the most impressive things I saw today was watching one of these models extract data from a photograph of a form that was filled out by hand. The demo was extraordinary, because they didn't just show us the magic that happened at the end; they also showed us the data stream that the system extracted from the document and fed to the model for analysis. And you could see that there's stuff going on that people aren't talking about, which is: what do we do with that? How do we get that data out of those things? It reminds me of Steve Martin's old bit: you can make a million dollars and pay no taxes. It's easy, let me tell you how.
First, get a million dollars. Well, hang on, let's go back to that part. That's the hard part. How do we get this data that's not suitable for consumption into a form that's consumable by these models? We've made the models good. Now, how do we give them the data? This is a really core competency, and not very many people are talking about it. So that's one. The other one is the amount of time and effort that IBM has put into trust, into governance, into PII recognition, all that stuff. And the fact that as a result of all that effort, they are willing to indemnify their customers, to offer them models that they are indemnifying. I haven't heard that word in any of the other conversations I'm having about this technology. And you think about who their customers are, and the fact that those customers are going to have to sit down in front of regulators and not just say, yes, we've got explainable AI, but in fact give them a report that says, in the words those regulators want to hear, how they got to the decision to lend Donald Trump $50 million more than his property was worth. You know, that's a big question. Okay, Sanjeev, what's your take on IBM's analytics ecosystem? So their ecosystem consists of some databases like Db2, it consists of the governance pieces that I've talked about a little bit. I didn't see any BI stuff from them, though, maybe I missed that part. But I think, like I said earlier, they're leveraging all their existing pieces. Their lakehouse is very interesting because they're using Iceberg. So watsonx.data is their lakehouse. They are relying a lot on Cloud Pak for Data. So what we saw today is actually the next iteration of Cloud Pak for Data. Yeah, okay, thank you. And Tony, why don't you close out this section? Yeah, I'm not quite sure that what we saw was the next iteration of Cloud Pak for Data.
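The form demo Merv describes, where an OCR data stream is extracted from a handwritten document and fed to a model for analysis, can be sketched in miniature. This is a hypothetical illustration, not IBM's pipeline; the "Label: value" OCR format, the field names and the prompt wording are all assumptions:

```python
import json

def extract_fields(ocr_lines):
    """Turn raw OCR output ('Label: value' lines) into a clean dict.

    A toy stand-in for the extraction stream shown in the demo; real
    systems add layout analysis, confidence scores and validation.
    """
    record = {}
    for line in ocr_lines:
        if ":" not in line:
            continue  # skip stray marks the OCR picked up
        key, _, value = line.partition(":")
        record[key.strip().lower().replace(" ", "_")] = value.strip()
    return record

def build_prompt(record):
    """Serialize the structured record so a model can reason over it."""
    return ("Analyze this form submission and flag anything unusual:\n"
            + json.dumps(record, indent=2))

ocr_lines = [
    "Applicant Name: J. Smith",
    "Loan Amount: 50,000",
    "~~illegible smudge~~",   # OCR noise with no field separator
    "Date: 2023-11-01",
]
record = extract_fields(ocr_lines)
print(build_prompt(record))
```

The interesting engineering, as Merv notes, is everything upstream of the prompt: getting handwriting into a trustworthy data stream before anything reaches the model.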
I think that right now, and this is what I've been on IBM's case about, is that they really need to bring watsonx.data and Cloud Pak for Data together. I didn't really hear that today, and that's actually one of the things on my wish list. I really do want to see that. But I did a whole study of the lakehouse market ecosystem back in Q1 of this year, and one of my predictions was that IBM was going to support Iceberg, as Iceberg has become essentially the de facto standard open table format for the lakehouse. So my prediction was vindicated there. And in fact, they've implemented Iceberg with a few interesting little twists, in terms of being able to locally cache data and do certain things in a very distributed fashion. So I think IBM has taken some of their innate capabilities and put them to good use with Iceberg. They're not just any other Iceberg implementation. That being said, as I said, I do want to see some real product convergence with Cloud Pak for Data, which is not just data warehouse. Cloud Pak for Data also included, basically, AI ops tools, it also included analytics tools; it actually subsumed a lot of the old Cognos analytics, as a matter of fact, taking that to the cloud. I didn't see that today. It's something that I do still want to see from IBM. Yeah, so here we are with the, it depends on the podcast, theCUBE after dark. TheCUBE after dark. It's a belated Halloween special. Okay, the lights came back. So there you go. Great, listen, audio is the most important thing, as you guys know; we turn these into podcasts and that's where we get a lot of listens. But a couple more questions for you guys. Let's talk about the external ecosystem of partners, specifically gen AI partners like Hugging Face.
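Tony's Iceberg point is worth making concrete before moving to partners. What makes an open table format like Iceberg attractive for a lakehouse is that a table is a chain of immutable snapshots, which is what enables consistent reads, time travel, and the kind of distributed caching he mentions. The sketch below is a conceptual illustration of that idea in Python, not the actual Iceberg API, and the file names are invented:

```python
from dataclasses import dataclass, field

# Conceptual sketch of an Iceberg-style table: state lives in a chain of
# immutable snapshots, each listing the data files visible at that point.

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: int
    data_files: tuple  # immutable set of file paths for this version

@dataclass
class Table:
    snapshots: list = field(default_factory=list)

    def append(self, new_files):
        # A commit never mutates old state; it adds a new snapshot that
        # extends the parent's file list.
        parent = self.snapshots[-1].data_files if self.snapshots else ()
        snap = Snapshot(len(self.snapshots), parent + tuple(new_files))
        self.snapshots.append(snap)
        return snap

    def scan(self, snapshot_id=None):
        """Read a consistent view: latest by default, or time travel."""
        sid = snapshot_id if snapshot_id is not None else len(self.snapshots) - 1
        return self.snapshots[sid].data_files

t = Table()
t.append(["part-000.parquet"])
t.append(["part-001.parquet"])
print(t.scan())     # latest view: both files
print(t.scan(0))    # time travel: only the first file
```

Because snapshots are immutable, any reader, including a local cache, can pin a snapshot ID and get a consistent view while writers keep committing, which is the property IBM's distributed-caching twist builds on.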
We talked a little bit about the use of LangChain and so forth. So let's get into your thoughts on IBM's partner strategy as it relates to gen AI and, broadly, that category. Sanjeev, you go first. Yeah, by the way, you know, LangChain and a lot of these partnerships are very interesting. The thought that crossed my mind was that the space is moving so fast that I wonder how fresh these partnerships remain. For example, OpenAI introduced techniques to bring data in, and the context size increased a lot; OpenAI's context size this week went from 32K to 128K. Granite's size is 32K, which was actually the largest when they announced it, but now it's not. Some of the LangChain and LlamaIndex capabilities are now being brought into the model itself. So it's a complete moving target in my mind. It's great that they have these partnerships, and they should, but literally tomorrow they may need new partnerships that they don't even know exist today. Great, thank you. Tony, your take. Yeah, it's really all about foundation models. Right now we're going through a Cambrian explosion of foundation models, which at some point will rationalize out. But the fact is, what we've found is that general purpose large language models, LLMs, are just not going to be an efficient way to solve most business problems. And in the long run, we also have to start looking at the carbon footprint of all this. So a lot of this is, through the school of hard knocks, we're going to learn which models are more suited to which classes of problems, and which work with which types of data sets. And there probably will be some vertical industry ones. That being said, and this is not to criticize IBM.
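Sanjeev's 32K-versus-128K observation is ultimately about token budgets: whether documents can be stuffed directly into the context window or must first pass through a retrieval step, which is the job LangChain and LlamaIndex have filled. Here is a minimal sketch of that decision, where the four-characters-per-token heuristic and the greedy selection are stand-ins for real tokenizers and relevance ranking:

```python
# Rough token budgeting: decide whether documents fit in a model's context
# window or whether retrieval (RAG) must select a subset first.

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def plan_context(docs, window_tokens, reserved_for_answer=1024):
    budget = window_tokens - reserved_for_answer
    total = sum(rough_tokens(d) for d in docs)
    if total <= budget:
        return "stuff", docs  # everything fits: no retrieval needed
    # Otherwise greedily keep docs until the budget is exhausted, a
    # stand-in for a real relevance-ranked retrieval step.
    kept, used = [], 0
    for d in docs:
        t = rough_tokens(d)
        if used + t > budget:
            break
        kept.append(d)
        used += t
    return "retrieve", kept

docs = ["x" * 200_000, "y" * 200_000]  # roughly 50K tokens each
print(plan_context(docs, 32_000)[0])   # 32K window: retrieval required
print(plan_context(docs, 128_000)[0])  # 128K window: can stuff it all
```

As windows grow, the point at which retrieval becomes mandatory keeps moving, which is exactly why Sanjeev calls the partnership landscape a moving target.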
I think everybody is trying to partner with anybody and everybody. The actual makeup of the partnerships between the different hyperscalers and IBM and others will vary a bit. For instance, Oracle invested in Cohere; Google and Amazon are both placing bets on Anthropic; and of course Microsoft is in a tight bear hug with OpenAI, which, I'm not sure, maybe goes the other way around in terms of who's really driving that relationship. So qualitatively, I don't see a lot of difference in the partnerships. It's not a criticism; we're still at early stages. You gave some examples of how each of them is changing, but essentially their capabilities are, in many instances, just going to keep changing. What's kind of interesting, of course, is that OpenAI just had its developer conference, and part of that is they laid down the gauntlet: we're going to start to really underprice, because essentially we have the volume. So it's going to be a very interesting battle there. But I think in the long run it's going to be won in the long tail, which is basically finding the right foundation model. And at that point we will see some consolidation. And I think this is where IBM, with its vertical industry expertise and its consulting, where it's on the ground with clients, can get prescriptive. And I think that's really where the value of the partnerships is going to be. It's not going to be that IBM is partnered with Hugging Face, just like Amazon is, just like Google is.
I think the difference is going to be where the rubber meets the road, which is IBM being able to have the right foundation models for the specific classes of, let's say, vertical industry problems of its clients. So a couple of comments there. If you look at the spending data, OpenAI and Microsoft have by far the most spending momentum. Anthropic also has pretty high momentum, though not nearly as prominent. Certainly Google with Vertex AI is showing up. And then one that's not showing up in the spending data yet is Llama 2 from Meta. But by my sources, the indications are, to your point, Tony, about the industry specificity and the domain specificity, perhaps as much as 50% or more of the downloads of Llama 2 are going on-prem, or at least to organizations that have a major data center presence. You know, Bedrock from Amazon just went GA; you mentioned Cohere and Oracle, they've got momentum; but right now OpenAI and Anthropic are the two big ones. And of course Bedrock has a lot of momentum, but it's more a suite of optionality. Let's get Merv's take here and then we'll wrap up. I think this is going back to IBM doing what IBM does best. They identify technologies early and, where appropriate, they invest, even buy the vendors involved. And then they deploy it as part of their stack. And generally speaking, they're not closing it off when they do it. The world is turning to Ansible for deployment of things; Red Hat acquiring that was a very smart move. Their work with Hugging Face is really interesting, and they'll be able to allow their customers to play with stuff. None of Hugging Face's models are indemnifiable just yet, but the ones that IBM partners with them on and delivers will be.
And so it's a little bit of the best of both worlds. I think IBM knows how to work an ecosystem, but they also know how to take stuff that's open, that they have invested in and worked on themselves and released to market, and make it appropriate for industrial-grade use in the most mission-critical applications. Last question, and it's kind of a two-part question, so we'll combine them: part A, part B. First, what do you think about IBM's roadmap? And second, what do you see as IBM's key challenges? So first the roadmap, and then the key challenges. Well, the roadmap was largely missing in action today, unfortunately. I think we got to see a good roadmap up to the present and maybe six months out, at least on the software side. There wasn't much talked about beyond the first half of next year, and I would have liked to know a little bit more about that. Now, that may in part be derivative of the fact that they're going to be guided by actual deployment experiences. A lot of this is going to be: what did they ask us to build? Now we've delivered it. And hey, by the way, that's resellable: we've got a model for this business process that we developed for one customer, and it won't be available only to that customer. That bodes very well for the future. IBM has had a relevance problem for a long time. A lot of commercial buyers are looking to the exciting, hot young techs in the market, right? As opposed to the old stalwarts. On the other hand, though, IBM's customers are the largest industrial and governmental organizations in the world, with very intractable problems. And they have a deliverable here that is well suited to those requirements. And that is, and will be, a lucrative market. We've been looking at transformation, at migration, at the evolution of the technical debt of large organizations for a long time. And lots of people have taken a run at this.
IBM is in a position to service the largest piles of technical debt in the world to the advantage of their customers. And we may be on the verge of one of the largest waves of migration and transformation of legacy software assets, because they're not liabilities; they are performing assets. But the transformation of those into a new environment, at a higher rate of speed and with indemnification, sounds pretty good to a lot of Wall Street firms that have thousands of bespoke applications they wrote 30 years ago. That could be a dramatic wave, as significant as when C arrived on Wall Street. Okay, last question. Can I say something a bit contrary and far-reaching? So Dave and Tony, the way I see this partnership, this whole ecosystem developing: look at what Databricks did. When they started, what did they do? They only did Apache Spark. That's all. And from Apache Spark, they then went into all the other nooks and crannies of the data flow. Yeah, and they just built out the entire ecosystem. Snowflake started by building a better cloud-native data warehouse, and then they added container services and Snowpark, Python and all of that. So now they're very similar. IBM, we are seeing, is taking its industry focus and the fact that it has a 160,000-strong consulting organization, and now that they have their own foundation models, they've built up the rest of the ecosystem, or they're building it. I feel with OpenAI we are going to see the same thing. OpenAI is going to say, and this is my imagination, we started with foundation models and now we're going to build out the ETL version of AI and the analytics version, and we're going to have yet another player in our space that does not exist yet, and that will be OpenAI. Well, I can certainly see where they might go into horizontal tools, but again, I think where IBM is going to have the big advantage is in the verticals. Everyone will have their strengths.
Read Ben Thompson's assessment of OpenAI's launch. It was actually really well done and pretty interesting. Last question, and we've got to wrap, guys, so they're going to kick us out of here. Two-part question: what do you think of IBM's roadmap from what you saw today, part A; and part B, what do you see as IBM's key challenges? Let's actually start with Merv, then we'll go to Tony, and then Sanjeev, you bring us home. Merv, let's get your take. Okay, Tony, bring it on. I think the big challenge is putting all the pieces together. I mean, I was impressed with the roadmap, and basically you gave me the clue that IBM is going to get a lot more vertical. I can't say one way or the other, but it gave me a really strong hint that it was going to go that way. And that was really what impressed me, the new kind of potential there. But again, what's really important on the roadmap is that IBM has a lot of assets. As I said, the obvious example that sticks out is Cloud Pak for Data and watsonx.data. I'll throw another one out there, which is governance. I am waiting for the point where somebody finally cracks how to converge model governance, AI governance and data governance. The fact is, the two are intertwined, because if it looks like a model is going off the rails, the question is: was the model fine but the data starting to drift, or vice versa, or some combination of both? And from that standpoint, I'm really interested to see what IBM is going to do with their Manta acquisition; I think they're going to be doing a briefing on this next week. It provides a very key source of data lineage, which, if you're tracking models and tracking data, tracking the lineage of both, is going to be a key ingredient.
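Tony's question, "was the model fine but the data starting to drift?", can be made concrete with a simple distribution check on a model's input feature. The z-score test, the threshold of 3, and all the numbers below are illustrative; production governance stacks use richer tests such as PSI or Kolmogorov-Smirnov:

```python
import statistics

# Toy data-drift check: compare a feature's live distribution against its
# training baseline using a z-score on the mean. If the inputs have moved,
# a degrading model may be fine; it's the data that changed.

def mean_shift_z(baseline, live):
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / (len(live) ** 0.5)  # standard error of the live mean
    return abs(statistics.mean(live) - mu) / se

def data_drifted(baseline, live, threshold=3.0):
    return mean_shift_z(baseline, live) > threshold

baseline = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]
stable   = [99, 101, 100, 102, 98, 100]
shifted  = [115, 118, 117, 116, 119, 114]

print(data_drifted(baseline, stable))   # distribution unchanged
print(data_drifted(baseline, shifted))  # input data has drifted
```

Tying a check like this to data lineage, which is Manta's role, is what lets a governance tool say not just that drift occurred but which upstream source introduced it.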
I want to see that integrated into watsonx.governance. And we talked a little bit about the importance of governance, Tony, you and I, last night at dinner. That is one of the big blockers to investment in this space. Well, thank you for that. And then Sanjeev, you bring us home now. All right, so first of all, I don't think IBM shared with us a very clear roadmap, so some of this is just my own learning from today. Vector databases were not mentioned, but when I asked around, I was told they're coming, so that's on the roadmap. One statement from Rob Thomas really stood out for me: he categorically said, our models do not hallucinate. And I think what he means is that because these models are so curated for certain use cases, they are very confident in the quality of their models' answers. However, models are based on probability, so they will hallucinate if you expand the use cases. And I think IBM will discover, as their models get deployed more and more, that hallucination problems will show up. The last thing I want to say is that I was asking about PowerPC; there was so much talk about their own chips, and nobody wanted to even talk about PowerPC, the chipset they used to have. So it's all about the open chips from commercial parties. But I promise you, a lot of the expertise they're using today in AI chips is coming from Power. Power is actually quite different, you know, in the way it does memory management; Power just didn't have the volume that x86 had, or of course Arm. Well, we took the tour of the Watson lab and looked at the quantum machine. Someone actually asked, have you taken that experience from mainframes and applied it to doing all this ultra-cooling with the quantum systems? And the answer was an unqualified yes. No doubt we are kind of back to the mainframe future; that pendulum swings. And to me, I'll just close, the big challenge is really awareness of the scope of IBM's capabilities.
And really its ability to execute on translating that research and development into the income statement. I want to see that cycle get compressed and, at the same time, really be productive and measurable. Any vendor that's listening to this or watching this video and thinking that they also are going to go to market with their own AI story, I'm sorry, it's too late. You become yet another me-too if that's how you lead. IBM has an advantage because it has a whole stack, and it has had it for decades. So the future belongs to the organizations and vendors that have the complete story to tell, not just, you know, I also do generative AI. Everybody's doing generative AI today. You know, I've got to say, Sanjeev, Tony and Merv have become great friends of theCUBE Collective, and we're really grateful for their collaboration. It was a little challenging with the lights and with Merv having to leave early. Really appreciate these guys spending some time and contributing to theCUBE and our community. Tony, by the way, just wrote a great piece on SiliconANGLE reminding us that there's other AI beyond gen AI, so check that out. Okay, that's it for now. I want to thank Alex Myerson and Ken Schiffman. Alex is on production and manages the podcast; Ken Schiffman also does production. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at siliconangle.com. And remember, all these episodes are available as podcasts; wherever you listen, just search Breaking Analysis podcast. They publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. And by all means, check out ETR.ai, the best survey data in the enterprise tech business. Also check out thecubeai.com; it is now out for public consumption. It is essentially ask theCUBE. Love your feedback.
This is Dave Vellante for theCUBE Research Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.