From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. OpenAI, the company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional $10 billion into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first-mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal and the compute power that it can deploy to run its system. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT and debate the premise that the company's early entry into the space may not confer winner-take-all advantage to OpenAI. And to do so, we welcome Cube collaborator and alum Sarbjeet Johal, and John Furrier, co-host of theCUBE. Great to see you, Sarbjeet. John, really appreciate you guys coming on the program. Great to be on. Okay, so what is ChatGPT? Well, actually, we asked ChatGPT: what is ChatGPT? Here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It can be fine-tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it to give it to me in 50 words or less. How did it do? Anything you'd add? Yeah, I think it did well. It's a large language model, like previous models, but it started applying the Transformer mechanism to focus on the prompt you have given it, and also on the answer it gave you in the first sentence or two, and then introspect on what it has already said to you and build on that. So it's a sort of self-focus, if you will. The Transformers help the large language models do that.
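The attention mechanism described above, each token in the prompt and the model's own prior output being weighted against every other token, can be sketched as scaled dot-product self-attention. A minimal illustration in Python with NumPy, using random untrained weights (a real model learns these projection matrices; the shapes and values here are purely illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X          : (seq_len, d_model) token embeddings (prompt + output so far)
    Wq, Wk, Wv : (d_model, d_k) projection matrices (learned in a real model)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # every token scores every token
    weights = softmax(scores, axis=-1)       # each row is a distribution over tokens
    return weights @ V, weights              # weighted mix of the value vectors

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))      # 5 toy "tokens"
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)             # (5, 4)
print(weights.sum(axis=-1))  # each row sums to ~1.0
```

The attention weights are what let the model "introspect" on what it has already said: each position attends back over the whole sequence rather than just the most recent word.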
So to your point, it's a large language model, and GPT stands for Generative Pre-trained Transformer. And if you put the definition back up there again, up on the screen, let's see it. Okay, it actually missed the word "large." So one of the problems with ChatGPT is it's not always accurate. It's actually a large language model, but it just says state-of-the-art language model. And if you look at Google, Google has dominated AI for many years, and they're well known as being the best at this, and apparently Google has their own large language model, LLM, in play, and has been holding it back from release because of the backlash on accuracy. Just like in that example you showed, it's a great point: they got it almost right, but they missed the key word. You know what's funny about that, John? I had previously asked it in my prompt to give it to me in less than 100 words, and it was too long. I said it was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it gave me really different answers both times. But it's still pretty amazing for those of you who haven't played with it yet, and one of the best examples I saw was Sam Charrington from the TWIML AI podcast, and I stumbled on this thanks to Brian Gracely, who was talking about it on one of his Cloudcasts. Basically, what Sam did is he prompted ChatGPT to interview ChatGPT. He simply gave the system the prompts, then he ran the questions and answers through an avatar builder and sped it up so it didn't sound like a machine, and voila, it was amazing. So, John, is ChatGPT going to take over as a Cube host? Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug them into ChatGPT, add it to our notes, and say, is this good enough for you? Let's ask the real question. So I think, you know, there's a lot of heavy lifting that gets done.
I think ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier, it gets most of it right, so it's directionally correct, and it feels like an answer, but it's not 100% accurate, and I think that's where people are seeing value in it. Writing marketing copy, brainstorming a guest list, a gift list for somebody, write me some lyrics to a song, give me a thesis about healthcare policy in the United States, it'll do a bang-up job, and then you go in and massage it, so it can do three quarters of the work. That's why schools are kind of freaking out about plagiarism, and that's why Microsoft put $10 billion into it, because why wouldn't this be a feature of Word, or the OS, to help it do stuff on behalf of the user? So linguistically it's a beautiful thing: you can input a string and get a good answer, and it's not a search result. And we're going to get your take on Microsoft, but it kind of levels the playing field. ChatGPT writes better than I do, Sarbjeet, and you have some good examples too. You mentioned Reed Hastings. Yeah, I was listening to Reed Hastings' fireside chat with ChatGPT, and the answers were coming out in voice format, and it was amazing. He was having a very philosophical kind of talk with ChatGPT, with longer sentences, like he was going on, just like we're talking, for almost two minutes, and then ChatGPT was answering. It was like a one-sentence question and then a lot of answer from ChatGPT. And yeah, you're right.
This is what I believe now, and I've been thinking deeply about this since yesterday when we talked about doing this segment. The data is fed into the model, and it can be current data as well, but I think models like ChatGPT, and other companies will have those too, are democratizing intelligence, but they're not creating intelligence. Not yet, definitely not yet, I can say that. They will give you all the finite answers, like, okay, how do you do this for loop in Java versus C#, and as a programmer you can use that. But they can't write a new algorithm, a new search algorithm, for you. They cannot create secret code for you to have a competitive advantage. Not yet. But did Google do that today? Google? No one really can. The reasoning side of the data, we talked about this at our Supercloud event with Zhamak Dehghani, who is now CEO of Nextdata. This next wave of data intelligence is going to come from entrepreneurs who are probably cross-discipline, computer science plus some other discipline, but there are going to be new things, for example, metadata and data. It's hard to do reasoning like a human being, so that needs more data to train itself. So I think the first generation of the training for the large language models they have is a corpus of text, which is why it reads like blog posts, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training and on the data sets. And again, it's only as good as the data you can get, and proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher, will benefit from this. Large FinTech companies, anyone with large proprietary data will
probably be a big winner on this generative AI wave, because it will just eat that up and turn it back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly, productivity is going to be off the charts for vanilla content, and the internet is going to get swarmed with vanilla content. So if you're in the content business and you're an original content producer of any kind, you're going to be not vanilla, you're going to be better. So I think there's so much at play, Dave. Well, I think the playing field has been raised, so we... Raised, or leveled? Yeah, and leveled, to an extent. So it's not like only a few people, as consumers of AI, will have an advantage and others cannot have it. It will be democratized, I'm sure about that. But take the example of the calculator. When the calculator came in, a lot of people couldn't do math anymore because the calculator was there, right? So it's a similar sort of moment, like a calculator for the next level. But again... I see it more like an open source dynamic, because if you think about what ChatGPT is doing, you do a query and it comes from somewhere. The value of a post from ChatGPT is just a reuse by the AI; the original content still comes from a human. So if I lay out a paragraph from ChatGPT, it may be off on some facts, so I check the facts, it saves me maybe an hour of writing, and then I write a killer two or three sentences of sharp original thinking or critical analysis. I've then taken that body of work, open source content, and laid something on top of it. But Sarbjeet's example is a good one, because with calculators kids don't do math as well anymore. The slide rule, remember we had slide rules as kids? Remember when we first started using Waze? We were the minority and you had an advantage over other drivers. Now Waze is like social traffic navigation, everybody has it, you know...
All the back roads are crowded. They're crowded, exactly. All right, listen, what about this notion that futurist Roy Amara put forth, Amara's law, which we're showing here: we tend to overestimate the effect of technology in the short run and underestimate it in the long run. Is that the case, do you think, with ChatGPT? What do you think, Sarbjeet? I think that's true, actually. We're going to debate this. When people see the results from ChatGPT, they say, what the heck, it can do this? But then if you use it more and more, and ask a similar question, not the same question, it gives you the same answer, like it's reading from the same bucket of text. In that Reed Hastings interview with ChatGPT, you will see that in a couple of segments. It sounds so boring that ChatGPT is coming out with the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will go through this hype cycle and get realistic about it. And in the long term, I think it's a great thing. In the short term, it's not something that will take away jobs. You're saying it's not... No, I think the question was, is it hyped up in the short term and underestimated in the long term? That's what I think he said. That's what he said. I think that's wrong in this case, because ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, and I have been comparing it, to the internet: the web, the web browser, Mosaic and Netscape Navigator. I mean, I still clearly remember the days of seeing Navigator for the first time. Oh wow. And there weren't many sites you could go to. Everyone typed in, you know, cars.com. Wasn't that overestimated, overhyped at the beginning, and underestimated in the long term? It was underestimated in the long run.
But that's Amara's law. That's what it is. No, the overestimated part? Overestimated near term, underestimated long term. Overhyped near term, underestimated long term. Got it, right? Well, yeah, okay, so then I would agree. We were off the charts about the internet in the early days, and it actually exceeded our expectations. Well, there were people who were poo-pooing it early on. When the browser came out, people were like, oh, the web's a toy for kids. I mean, in 1995, the web was a joke, right? Then in '96, you had online populations growing. So you had structural changes going on around the browser and the internet population. And then that replaced other things: direct mail, other business activities that were once analog then went to the web, kind of read-only, as we always talk about. So I think that's a moment where, long term, the smart money and the smart industry experts all get the long game. In this case, there's more poo-pooing in the short term: it's not a big deal, it's just AI. I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, no, no, this is next-gen. This is different and it's only going to get better. So I think people are estimating a big long game on this one. You're saying it's bifurcated. There are those who say, okay. All right, let's get to the heart of the premise and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company? If you look at the history of the technology industry, it's littered with first-mover failures. Altair, IBM, Tandy, Commodore, even Apple. They were really early in the PC game. They took a back seat to Dell, who came on the scene years later with a better business model. Netscape, which we were just talking about, was all the rage in Silicon Valley with the first browser, drove up all the housing prices out here. AltaVista was the first search engine to really, you know, index full text.
Owned by DEC. I mean, Digital. Owned by Digital, and Compaq bought it. Okay, of course, there's an aside. Digital, they wanted to showcase their hardware, right? Their supercomputer stuff. And then Friendster and MySpace came before Facebook, and the iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes, like AWS in cloud. You could say the smartphone. Well, I know, and we can parse this. So we'll debate it. Twitter, you could argue, had first-mover advantage. You kind of gave me that one, John. Bitcoin and crypto clearly had first-mover advantage and sustained it. Guys, will OpenAI make it to the list on the right with ChatGPT? What do you think? I think categorically, as a company, it probably won't, but as a category, I think what they're doing will. So OpenAI as a company, they get funding, and there are power dynamics involved. Microsoft put a billion dollars in early on, and now they're reportedly ponying up $10 billion more. With the browsers, Microsoft had competitive advantage over Netscape, used monopoly power, and was convicted by the Department of Justice for killing Netscape with their monopoly. Netscape should have won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace-and-extend Microsoft power here makes OpenAI vulnerable as a one-vendor solution. So OpenAI as a company might not make the list, but the category of what this is, large language model AI, probably will be on the right-hand side. We're going to come back to the government intervention and maybe do some comparisons. But what are your thoughts on this premise, that ChatGPT's early entry into the market will not confer competitive advantage to OpenAI? Do you agree with that?
I agree with that, actually, because Google has been at it and they have been holding back, as John said, because of the scrutiny from the feds, right? And privacy too. And privacy and accuracy as well. But I think Sam Altman and company, those guys, right? They have put this out there in a hasty way, because it makes mistakes and there are a lot of questions around where the content is coming from. You saw that in your example: it just stole the content without your permission, you know? Yeah, just a quick aside. It quotes on people's behalf and those quotes are wrong. So there's a lot of false information it's putting out there. So it's a very vulnerable thing to do, what Sam Altman did. So even though it'll get better, others will compete. Just a side note, the term which Reid Hoffman used was, he said it's an experimental launch. Like, you know, it's... It's pretty damn good. It is clever, and according to Sam, it's more than clever. It's good. It's awesome if you haven't used it yet. I mean, you read what it writes and you go, this thing writes so well. It writes so much better than you do. The human emotion drives that too. I think that's a big thing. I want to add one more, a last one. Okay, so he's still holding back. He's conducting quite a few interviews. If you want to get the gist of it, there's an interview with StrictlyVC, an interview from yesterday with Sam Altman. Listen to that one. It's eye-opening where they want to take it. But the last point I want to make is that Satya Nadella yesterday did an interview with the Wall Street Journal. I think he was doing... You were not impressed. I was not impressed, because he was pushing it too much. Sam Altman is holding back, so there's less backlash. He's got 10 billion reasons to push. I think he... And he just laid off 10,000 people. Hey, ChatGPT, find me a job. You know?
He's overselling it to an extent that I think will backfire on Microsoft, and he's over-promising a lot of stuff right now. I don't know why he's so jittery about all these things. And he did the same thing during Ignite as well. He said, oh, the AI will write code for you, and this and that. You call him... I'm probably going to stop there. Yeah, he's got a lot of hyperbole. All right, Sarbjeet. Let's go ahead. Well, can I weigh in on the whole Microsoft thing, whether OpenAI... Here's my take on this. I think it's more like the browser moment to me, because I can relate to that experience with ChatGPT personally, emotionally, when I saw it. And I remember vividly, ah, the moment I saw it: this is obviously the future, anything else in the old world is dead, websites are going to be everywhere. It was just an instant dot-connection for me and a lot of other smart people who saw this. A lot of people, by the way, didn't see it. Some said the web's a toy. The company I was working for at the time, Hewlett-Packard, they could have been in. They had invented HTML stuff. So all this was there, and they just passed. The web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was that online web usage was growing, the online user population. That was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable if they leverage the category they're creating and can get the scale. And if they can get the scale, just like Twitter, which failed so many times but still hung around. It wasn't a product that was always successful, right? I mean, it should have been. You're right. It was terrible. We kept coming back. The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor.
They could be the Netscape Navigator without the anti-competitive behavior of somebody else. So to me, they have the pole position. They have an opportunity. And if they don't execute, then there's opportunity for others. There are not a lot of barriers to entry vis-a-vis, say, the CapEx of a cloud company like AWS. You can't replicate that, many have tried, but I think you can replicate OpenAI. And we're going to talk about that. Okay, real quick, I want to bring in some ETR data. This isn't an ETR-heavy segment, only because the coverage is so new. But they do cover AI. So basically what we're seeing here is a slide where the vertical axis is Net Score, which is a measure of spending momentum, and the horizontal axis is presence in the data set. Think of it as market presence. And in the insert right there, you can see how the dots are plotted, the two columns. But the key point we want to make: there's a bunch of companies on the left, like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway we've got. Whoa, IBM is way low. Yeah, IBM's low. Actually, bring that back up. But then you see Oracle, who actually is injecting it. I guess that's the other point: you're not necessarily going to go buy AI and build your own AI. It's going to be there, and Salesforce is going to embed it into its platform, the SaaS companies. You're going to purchase AI, not necessarily build it, though some companies obviously will. To quote IBM's general manager Rob Thomas: you can't have AI without IA, information architecture. If you have an information architecture, you then can power AI. Yesterday, David Flynn from Hammerspace was on our Supercloud event.
He was pointing out that the relationship of storage, where you store things, also impacts the data and its addressability. And Zhamak from Nextdata, she was pointing out the same thing. So the data problem factors into all this too, Dave. So you've got the big cloud and internet giants all poised to go after this opportunity. Microsoft is investing up to $10 billion. Google's Code Red, which was the headline in the New York Times. Of course, Apple is there. And there are several alternatives in the market today: models like Chinchilla and BLOOM, there's a company, Jasper, and several others. And then Lina Khan looms large, and the governments around the world, EU, US, China, are all taking notice before the market really coalesces around a single player. You know, John, you mentioned Netscape. The US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, eh, we're going to be selling software in the enterprise anyway. And then the company just dissipated. But it looks like the US government, especially with Lina Khan, is changing the definition of antitrust and what the cause is to go after people. And they're much more aggressive. It's only what, two years ago that... Yeah, the problem I have with federal oversight is this: they're always late to the game and they're slow to catch up. In other words, they're working on stuff that should have been solved a year and a half, two years ago, around some of the social networks hiding behind some of the rules around the open web back in the day. And I think... But they're like 15 years late to that. And now they've got this new thing on top of it. So I just worry about them getting their fingers in it. But it's only been two years, you know? OpenAI, two years? No, no, no, they're still fighting the other battle. The problem with government is that they're going to label big tech as an evil thing, like pharma.
It's like big tobacco; they want to kill big tech. So I think big tech is getting a seriously bad rap. And I think anything the government does that shades darkness on tech is politically motivated. In most cases, you can almost look at everything, and my 80-20 rule is in play here: 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. Well, market forces have always been the determining factor of success. The governments have pretty much failed. I mean, you look at IBM's antitrust case, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in, but, like I said, they missed the web, right? And so they were hanging onto Windows. So I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here? You weren't impressed with Nadella. Where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think? They are saying that they will weave this into their products: Office products, productivity, and also writing code, developer productivity as well. That's a big play for them. But coming back to your antitrust comments, right? I believe your comment was that the feds were 10 or 15 years late before, but now they're only two years behind. But things are moving very fast now compared to how they used to move. So two years is like 10 years? Yeah, two years is like 10 years. That's the point I want to make. This thing is going like wildfire. Any new tech comes in... I think they're going after the distribution channels. Lina Khan has commented time and again that the marketplace model is something she wants to have some grip on.
Cloud marketplaces are a kind of monopolistic play, in a way. I don't see this... I don't see the chat AI... You told me it's not Bing. You had an interesting comment. No, no, first of all, this is great for Microsoft. If you're Microsoft, because Microsoft doesn't have the AI chops that Google has, right? Google has so much core competency in how they run their search, how they run their back ends, their cloud. Even though they don't get a lot of cloud market share in the enterprise, they've got a kick-ass cloud because they needed one. They invented SRE. I mean, Google's development and engineering chops are off the scale, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS, in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product that they can render into not only Bing, but productivity software, helping people write papers, PowerPoint. Also, don't forget the cloud. AI can super-help there. We had this conversation at our Supercloud event, where AI is going to do a lot of the heavy lifting around understanding observability, managing service meshes, managing microservices, turning applications on and off, and maybe even writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. Combined with their R&D budgets, they can then turbocharge more research and build on it. So I think this gives them a card in the game. Google may have pole position with AI, but this puts Microsoft right in the game. And they already have a lot of stuff going on, but everything gets lifted up: security, cloud, productivity suite, everything. What's under the hood at Google? And why aren't they talking about it? I mean, they've got to be freaked out about this. No? Or do they have kind of a magic bullet? I think they have the chops, definitely. A magic bullet, I don't know where they are as compared to the ChatGPT-3 or -4 models, Dave.
But if you look at the online activity and the videos put out there by Google folks, Google technology folks, those are the accounts you should look at if you're looking. They have published all these techniques that ChatGPT-3 has used. They have been talking about them for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, at the beginning of the segment, anybody who has more data and the capacity to process that data, and Google has both, I think they will win this. Obviously, living in Palo Alto, where the Google founders are and Google's headquarters is down the road, we have inside information on some of the thinking, and this hasn't been reported by any outlet yet. What I'm hearing from my sources is that Google has it. They don't want to release it for many reasons. One, it might screw up their search monopoly. Two, they're worried about accuracy, because Google will get sued, since a lot of people are jumping on this ChatGPT, oh, it does everything for me, when it's clearly not 100% accurate all the time. And Lina Khan is looming, so Google's being careful. Yeah, so Google's just like, this could be a third rail. But the first thing you said is the concern. Well, no, what they will do is a Waymo kind of thing. Will they spin out a separate company? They're doing that. Discussions are happening. They're going to spin out a separate company and put it over there, saying, this is AI, and search is over there. Don't touch that search, because that's where all the revenue is. Okay, so that's how they deal with the innovator's dilemma. What's the business model here? I mean, it's not advertising, right? Do you charge for a query? How do you make money at this? It's a good question.
I mean, my thinking is, first of all, it's cool to type stuff in and see a paper get written, or a blog post, or a marketing slogan for this or that, or written code. I think the API side of the business will be critical. And Howie Xu, I know you're going to reference some of his comments from yesterday at Supercloud, I think this brings a whole other user interface into technology consumption. The business model is not yet clear, but it will probably be some sort of API and developer environment, or just a straight-up free consumer product with some sort of freemium back-end thing for business. And he was saying too, natural language is the way you're going to interact with the system. I think it's APIs. It's APIs, APIs, APIs, because for the people who are cooking up these models, it takes a lot of compute power to train them, and for inference as well. Somebody did the analysis on how many cents a Google search costs Google versus how many cents a ChatGPT query costs. It's 100x or something like that. You can take a look at that. 100x on which side? You're saying two orders of magnitude more expensive for ChatGPT? Much more, yeah. And for Google... It's very expensive. So Google's got the data, they've got the infrastructure, and you're saying they've got the cost advantage. No, no, actually, a Google search is a simple query, but these models are trying to compose the answers, and they're going through a lot more data, versus already-indexed data, you know. Sorry, let me clarify. You're saying that Google's version of ChatGPT is more efficient? No, I'm saying Google search results, as we're used to them today, are cheaper. Will that confer advantage to Google's large language model? It will, because there's deep science there. Google... I don't think Google search is running a large language model on their search. It's keyword search, you know. What's the weather in Santa Cruz? Or, what's the weather going to be?
Or, you know, how do I find this? Now they have done a smart job of doing some things with those queries: autocomplete, redirect navigation. But it's not entity-based. It's not like, hey, what's Dave Vellante thinking this week in Breaking Analysis? ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it, there'll maybe be some clips. Well, I've got to tell you, I asked ChatGPT. I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, and I want you to write a 750-word blog. I never input the transcript. It wrote a 750-word blog. It attributed quotes to him, and it just pulled a bunch of stuff and said, okay, here it is. It talked about supercloud. It defined supercloud. It made shit up. Wow. It was a big lie. It was fraudulent, but it still blew me away. Again, vanilla content and inaccurate content. So we are going to see a surge of misinformation on steroids, but I call it vanilla content. Wow, that's just so boring. There are so many dangers. Make your point. Okay, so on consumption: how do you consume this thing? As humans, we are consuming it and getting nicely, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side, bad actors can take hold of it and create fake content, and we'll have fake intelligence out there. So that's one thing. The second thing is, we as humans are consuming this as language. We read it, we listen to it, whatever format we consume it in. But the ultimate usage will be when machines can take that output from the likes of GPT and take actions based on it. The robots can work. A robot could paint your house, we were talking about that, right? Right now, we can't do that. So the data has to be ingested by the machines.
It has to be digestible by the machines, and the machines cannot digest unorganized data right now. We will get better on the ingestion side as well. So we are getting better. Data, reasoning, insights and action. I like that model. Paint my house. So, okay. By the way, that would be drones that are coming in, painting your house. It wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course, it's no surprise the venture capitalists are lining up to eat at the trough, as I like to say. You referenced this earlier, John. Let's hear what AI expert Howie Xu said at the Supercloud event about what it takes to clone ChatGPT. Please play the clip. One of the VCs actually asked me the other day, hey, how much money do I need to spend, to invest, to get another shop to the OpenAI sort of level? You know, I did the math. $100 million is the order of magnitude I came up with, right? Not a billion, not 10 million, right? So a hundred million. Guys, a hundred million dollars. That's an astoundingly low figure. What do you make of that? I was in that interview with you. I think you said a hundred million or so. But in the hundreds of millions, not a billion. You were trying to get him up. You were like, hundreds of millions. He's like, hey, not 10, not a billion. But first of all, Howie, he's an expert in machine learning. He's at Zscaler; he's a machine learning, AI guy. But he comes from VMware. His technology pedigree is really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us, and he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than, say, the CapEx required to compete with AWS. Clearly the CapEx spending to build all the tech to run a cloud, and in some cases apps too, is the same kind of thing, but I think it's not that hard. But am I right about that? You don't need a huge sales force either.
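Sarbjit's point above, that machines can only act on model output once it is digestible and structured, can be sketched as a small guardrail: prompt the model to answer in JSON, then validate before triggering any action. Everything here is hypothetical, including the action names and the stubbed model reply; no real model is being called.

```python
import json

# Hypothetical whitelist of actions a downstream machine is allowed to run.
ALLOWED_ACTIONS = {"paint_house", "summarize", "none"}

def parse_action(model_reply):
    """Turn free-form model output into a machine-executable action,
    rejecting anything that is not well-formed, whitelisted JSON."""
    try:
        data = json.loads(model_reply)
    except json.JSONDecodeError:
        # Free-form prose: a human can read it, a machine cannot act on it.
        return {"action": "none", "reason": "unstructured output"}
    if not isinstance(data, dict) or data.get("action") not in ALLOWED_ACTIONS:
        return {"action": "none", "reason": "unknown action"}
    return data

# A stubbed, well-behaved model reply:
reply = '{"action": "paint_house", "color": "white"}'
print(parse_action(reply)["action"])  # paint_house
```

The design point mirrors the conversation: the same text a human happily reads is rejected here, because "the machines cannot digest unorganized data," so the structured path is the one that leads to actions.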
If the product's good, it will sell. This is a new era. The better mousetrap will win. These are the new economics in software. Because you look at the amount of money, Lacework, Snyk, Snowflake, Databricks, look at the amount of money they've raised. I mean, it's like a billion dollars or more before they get to IPO, because they need promotion, they need go-to-market. You don't need that. OpenAI has been working on this for five-plus years. It wasn't born yesterday. It took a lot of years to get going. And Sam is depositioning all the success because he's trying to manage expectations, to your point, Sarbjit. Earlier, it's like, whoa, whoa, settle down, everybody. It's not that great. Because he doesn't want to fall into that hero trap and then get taken down. So it may take 100 million, or 150, or 200 million to train the model, but for the inference machine, it will take a lot more, I believe. So imagine. Go ahead, go ahead. Because it consumes a lot more compute cycles and a certain level of storage and everything, which they already have. So the compute is different: to train the model is one cost, but to run the business is a different cost, because I think 100 million can go into just training the model. Well, there's a flywheel too. Yeah. Running the business. It's an interesting number, but there's also context to it. So here, 100 million, spend it, you get there, but you've got to factor in that the way companies win these days is critical mass, scale, hitting a flywheel. If they can keep that flywheel of value they've got going and get better, you can almost imagine a marketplace where, hey, we have proprietary data. We're SiliconANGLE and theCUBE. We have proprietary content, CUBE videos, transcripts. Well, wouldn't it be great if someone in a marketplace could sell a module for us? We'd buy that, like Amazon's marketplace and things like that.
So if they can get a marketplace going where you can tap into data sets that may be proprietary, you can start to see this become bigger. And so I think the key barrier to entry is going to be success. I'll give you an example: Reddit. Reddit is successful, and it's hard to copy, not because of the software. They built a moat. Because you can take Reddit's open source software and try to compete. They built a moat with their community: their community, their scale, their user expectations. Twitter, we referenced earlier, that thing should have gone under in the first two years, but there was such a great emotional product that people would tolerate the fail whale, and then it was a whole nother thing. Then a plane landed on the Hudson and it was over. I think verticals, a lot of verticals, will build applications using these models: for lawyers, for doctors, for scientists, for content creators. You'll have many hundreds of millions of dollars of investment seeping out. If you had to put odds on it that OpenAI is going to be the leader, maybe not a winner-take-all leader, but like Amazon in cloud. They're not winner-take-all. These aren't necessarily winner-take-all markets. It's not necessarily a zero-sum game, but let's call it winner-take-most. What odds would you give that OpenAI, 10 years from now, will be in that position? Like on a scale of zero to 10 or anything? Yeah, it's like a horse race. Three to one, two to one, even money, 10 to one, 50 to one. Maybe two to one? Two to one, that's pretty low odds. That's basically saying they're the favorite. They're the front-runner. Would you agree with that? I'd say four to one. Yeah, I was going to say, I'm like a five-to-one, seven-to-one type of person, because I'm a skeptic, you know, there's so much competition, but... I think they're definitely the leader. I mean, you've got to think. Oh, there's no question. I mean, there's no question about it. The question is, can they execute?
They're not Friendster, is what you're saying. They're not Friendster. They're more like Twitter and Reddit, where they have momentum if they can execute on the product side. And if they don't stumble on that, they will continue to have the lead. If they stay neutral, as Sam has been saying: hey, Microsoft is one of our partners. If you look at their company model, how they have structured the company, they're going to pay back the investors, Microsoft being the biggest one, up to a certain number of years; they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to, I don't know who they give it to, like a non-profit or something. Okay, the odds are dropping. It's very... It's a shock. That's the point, though. Actually, they might have done that to fend off the criticism, but it's very interesting to see the model they have adopted. The wild card in all this, my last word on this, is that if there's a shift in how developers and data come together, again, we have conversations around the future of data, supercloud and meshes versus how the coding world works with data. How that evolves will also dictate, as a wild card, a potential shift in the landscape around how developers use machine learning or AI-like techniques to code into their apps. That's fantastic insight. I can't thank you guys enough for your time. On the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjit for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here. I'm jealous. And Noah, I really appreciate it. Chuck, Andrew Frick and Cameron: Andrew Frick on switching, Cameron on the video lake. Great job. And Alex Myerson, who's on production and manages the podcast for us. Ken Schiffman as well.
Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE. He does some great editing. Thanks to all. Remember, all these episodes are available as podcasts. All you've got to do is search "Breaking Analysis podcast" wherever you listen. We publish each week on wikibon.com and siliconangle.com. Want to get in touch? Email me directly at david.vellante at siliconangle.com, or DM me at dvellante, or comment on our LinkedIn posts. And by all means, check out etr.ai. They've got really great survey data on the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next time on Breaking Analysis.