Welcome back to SuperCloud 3. We are live in Palo Alto for our third edition of SuperCloud, where we break down the next-gen cloud: multiple environments, multiple clouds, edge, data. This is security plus AI as the focus. Data's been the theme throughout. I'm John Furrier with Dave Vellante. Sarbjit Johal's here, CUBE Collective, part of our team, member of our influencer group that we hang out with at all the events, and certainly a distinguished CUBE alumni. Great to see you coming on to do the analyst section of what we've been talking about, SuperCloud. So listen, we're going to analyze. Dave, good to see you. Sarbjit, thanks a lot. Thank you. Okay, let's analyze. So SuperCloud 3, security plus AI. Dave, this is the continuation. More and more momentum. We just had Doug Merritt come out of retirement. He's now the CEO of Aviatrix. He's going to take them past $100 million, take it public. He did that with Splunk, a data company. Jay Chaudhry of Zscaler, absolutely killing it, one of the highest-performing SaaS companies on the planet. They're at the top of the ranks of this new generation of companies. It's a pro game, and look at the speed of acceleration, just the acceleration of what's going on in SuperCloud. The big clouds, they're not going away. They don't have to die to bring in multi-cloud interoperability. They're continuing to do great: Amazon Web Services, Azure. News today, Microsoft teaming up on the AI side with OpenAI. Moves are being made in the cloud. The edge is developing. What's the analysis? I think that, I mean, the SuperCloud trend was happening, right? We saw that, but prior to ChatGPT. ChatGPT created this awakening, and I think it's accelerated the intensity within these specific sectors. So we talked to Doug Merritt about how the granularity is actually going to get greater and the focus is going to get greater.
So those companies, in my view, the ones that really lean in aggressively, are going to extend their position. You're seeing that with Microsoft. You heard that with Zscaler. We heard that with Cloudflare. We heard that with VMware, right? They're in a strong position, they're investing, and their goal is to get stronger. Now, having said that, you've got these interesting disruptions. The other piece of that is, I don't think they know yet what to do with generative AI, right? And it's like Jeff Jonas said: it's good, it's amazing, but it gives you different answers every time. So how do they apply that in security and other areas? They're still trying to figure that out. So there's a lot of opportunity for structure. I want to get into that in a second. Sarbjit, let's get to you first on the top-level cloud players, SuperCloud 3, security plus AI. I think the time compression, as Doug said, that's a huge thing, right? So everybody's trying to rush in and put solutions out there. Right now, the usage of generative models or LLMs is at design time, when you're cooking up stuff, right? When you're planning for it, when you're coding, that's so obvious for us, right? But the problem is, how do you apply that at runtime? Because at runtime, you need consistency and accuracy. At design time, you still have a human in the picture, but at runtime, you want to take the human out of the picture, right? So that's the challenge. I think to take that problem out of LLMs, there will be domain-specific LLMs, which will be more precise, and they will give you, I would say, not the same answer maybe, but perhaps almost the same answer every time. So the domain-specific LLMs will emerge. That's my view. You know, I think the acceleration is a great point. I want to get you guys' reaction on this, because what came out of Jay Chaudhry's interview, and we heard from Kit and others, and we love sports analogies, so I'll just say it.
It's a pro game, and to talk sports, the speed of the game is so different than college. In football, for instance, security is a speed game and it's a pro game, and that leaves this whole democratization question on the table. Interesting, because how do you democratize a game that fast? Okay, and we saw that in the data world. So in security, you cannot compete if you don't have that game, because the defense levels and requirements are so high in security, it's going to be very challenging. So the question is, does AI accelerate and open up the bottom end of the feeder system to bring up talent? Because what we're seeing from all the pros here on theCUBE talking about SuperCloud 3 is that security is a pro game. The speed is at a level where you've got to be a certain kind of athlete to play. This is a big issue, and that means startups might look different. Do you have to do more work on the front end to even get in the game? Dave, what's your take on this? First of all, I think you've got to be cross-cloud. You've got to be SuperCloud in security. That to me is table stakes. The second thing is you absolutely have to apply AI, whether it's generative AI or other machine intelligence, to that corpus of data that you have, and those that have the best data are going to win. And I think that during the pandemic, we had this flood of venture capital come into the security space. Not all those guys are going to survive. I mean, you do see companies like Wiz pop up in the data. We saw this at RSA; all you had to do was look at the line outside their party, right? Practitioners are enthused about them. But then you have companies popping up specializing in edge security. Is that the right model? Do we need another stovepipe? Or is it so unique and so different that actually you do need a best-of-breed stovepipe? So you think you have to be cross-cloud. You have to be applying AI. And you do have to be best of breed at something.
And as Jay Chaudhry was saying, if you try to stretch that too thin, it's like when you're rolling pizza dough: roll it too thin and you get a hole in it, and that creates problems. So if you don't have the game, you can't compete. That's what you're basically saying. If you don't have the game, you can't compete. The pro level is definitely different than college. This is not JV. It seems like in the software world, in the x86 world right now, it's a championship round. AI is the championship round, before we start, you know, the next round with supercomputing and a different paradigm and shift. But in this one... I don't think they want that round. Because what we are trying to do is, we used to write logic. That's what coders did, right? We put brains into machines. But now machines will learn from data. So I think it's sort of toward the end game of this, you know, if you are... Putting logic into data. Yeah, yeah. Data flipping. And data fusion too. Data fusion's come up too. All right, so let's get into the generative AI, because there's a hype cycle, and you've got spending data that matches it. That's a unique thing we've never seen before, where the hype cycle is strong and the spending momentum is aligned with it. That's not normal. Okay, we'll get to that in a second. I also want to bring up machine learning. I don't know if you remember, Dave, go back to 2011, 2012, when we were doing theCUBE. At that time, ML engineers were out in the market, and it was well known that Google was paying up to $2 to $3 million per person in acqui-hires if a company had machine learning engineers on staff. So, you know... Now it's GPUs. Okay, but machine learning has been around for a decade, hardcore. So a lot of this generative AI is the discussion.
It's generating new things, but it's based upon supervised machine learning, not unsupervised. So, you know, a lot of people come in, all the top companies like Zscaler, the cloud natives, all of them. But we've been doing ML for a while. So that is kind of AI. So it's a different kind of AI scene. Generative AI is super hyped, and it's new, but it's machine learning too. It's data. I think it comes back to the design time versus the runtime, right? So we have been doing machine-learning kind of work for so long. I used to work at Visa in '96, and whenever you swiped your card, we ran an algo very quickly, within milliseconds, IBM TPF technology behind the scenes, super fast, right? So it was like, we know your spending pattern. If your amount is an anomaly, we will stop you. You have to call somebody to get the approval, right? So it's nothing new. But with LLMs, which are part of generative AI, generative and LLMs are a little different, right? The whole idea is to take the corpus of data and get the intelligence out of it. But the problem is that it's not accurate all the time. It cooks things up, it makes things up, right? There's hallucination and whatever. It gives you different answers multiple times too. Yeah, what you're trying to do now is, okay: you know everything, but don't say this, say this. We're trying to tame it, if you will. Like in the SQL world we have a NOT clause: give me all of this, but not that. It's also a lot less controlled. It was distributed; that Transaction Processing Facility example you gave was distributed. You might have had some terminal, an NCR terminal at the edge, but today you've got cell phones, you've got machines. So it's kind of this wild, wild West. You don't have everything RACF'd, everything controlled inside the mainframe. You've got all kinds of different standards, different open standards.
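That runtime swipe check Sarbjit describes can be sketched in a few lines. This is a toy stand-in, not Visa's actual system: a simple z-score test against the cardholder's recent amounts, where the threshold and features are illustrative assumptions (real systems use far richer signals than amount alone).

```python
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a swipe whose amount deviates sharply from the cardholder's pattern.

    Toy sketch of a millisecond-scale runtime anomaly check; the z-score
    threshold is an illustrative assumption, not a production rule.
    """
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: anything different is anomalous
    return abs(amount - mu) / sigma > z_threshold

# Typical grocery-sized purchases, then a sudden large one.
swipes = [42.0, 38.5, 51.0, 45.2, 40.0, 47.3]
print(is_anomalous(swipes, 44.0))   # in pattern -> False
print(is_anomalous(swipes, 950.0))  # way outside pattern -> True
```

The point of the example is the design-time/runtime split: the rule is authored once at design time, but at runtime it must return the same deterministic answer for the same swipe, which is exactly what today's LLMs don't guarantee.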
And so it's a lot harder to actually control what's happening throughout the network. Yeah, and another thing that's very important: Jerry Chen revised his blog about the new, new moats, right? I have been talking about these three types of systems: systems of record, systems of differentiation, and systems of innovation. And he used that system-demarcation methodology in his analysis as well. I think systems of record are very close to what we need from a regulator's point of view. We have accounting in place and we have regulators; they look at your books and you can't just cook up stuff, right? So the systems of record, I think, will not be going anywhere near LLMs anytime soon. I think systems of innovation will very quickly, and systems of engagement, or systems of differentiation, will flirt with these models pretty soon. And they are already doing that in many ways. So I think that brings in another aspect, another way of looking at the problem, right? That is consumer versus business, right? On the B2C side, people are like, oh my God, this is cool stuff. Personally we are very pleasantly shocked. This is productivity, all that stuff. But when you go to the business side, then you're like, hold on, right? It's always been harder. If you look at the search business, I remember back in the day when Google was getting into the business with a search engine for the internet: it was easier to do search on public information than inside a company, where you had structured data, different databases. So I think the B2B market is interesting because, one, you have confidential information, you have different infrastructure, and this brings up the conversation we've been having around, okay, how do you define your value, and the data value is key?
So Dave, when we talk about the SuperCloud, love to get both of your perspectives: when you think about digital transformation, I'm an enterprise, I'm a B2B company, I look different from an AI perspective than, say, a consumer company that's going to be a search engine like a Bing or whatever; OpenAI, it's public. The private side is going to use LLMs differently because you have to operationalize it. So the question, I guess, for you guys is: how should companies think about operationalizing the SuperCloud security and AI story? Because we've heard some people say low-hanging fruit: configuration, some automation. But observability comes up. How much data do we have that we haven't harvested before? I don't always... But this is the key. I think it's got to start with the data, because the language model itself is going to be a commodity, right? I can get it from Amazon, I can get it from Google, I can get it from Microsoft, I can get it from OpenAI, I can get it from open source, whatever. It's the data, and that's where you're going to get a moat. And so you've really got the old Rob Thomas line: you can't have AI without IA. You've really got to get your data... Architecture. Architecture, your data models together. That should be the question you're asking: what is our most valuable data and how can we take advantage of it? You've got to start there. You're referring to Rob Thomas from IBM, who, what, seven, eight years ago was saying IA, information architecture, before AI. Yeah, you can't have AI without IA, and you can't have AI without a data architecture. You agree with that? I agree with it. Actually, George earlier today mentioned that they are ingesting multiple LLMs into their security operations. That's what they do, right? They specialize in security. So I think, just like cloud, we are in a multi-cloud world right now. In AI, we will be in a multi-LLM world.
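The "multi-LLM world" idea can be made concrete with a small routing sketch. Everything here is hypothetical: the provider names, handlers, and routing policy are illustrative assumptions, showing only the shape of the argument that the models are swappable commodities while the routing policy and your data are the moat.

```python
# Toy "multi-LLM" router: commodity models behind a domain-based routing
# policy. The handlers below are stand-ins, not real provider APIs.

def general_model(prompt):
    # Placeholder for a general-purpose commodity LLM call.
    return f"[general] {prompt}"

def security_model(prompt):
    # Placeholder for a domain-specific, security-tuned model call.
    return f"[security-tuned] {prompt}"

ROUTES = {
    "security": security_model,  # SecOps queries go to the specialist
    "default": general_model,    # everything else to the generalist
}

def route(prompt, domain="default"):
    """Pick a model by domain; unknown domains fall back to the default."""
    handler = ROUTES.get(domain, ROUTES["default"])
    return handler(prompt)

print(route("summarize this incident report", domain="security"))
```

Swapping `security_model` for a different vendor's model changes nothing upstream, which is the commodity point Dave and Sarbjit are both making.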
So as you said, LLMs will be a commodity, and he said that too, and I believe in that. Well, we talk about LLMs, which stands for large language models, but there are also foundation models, because language is text. You've got multimodal, which is text, audio, and video. So computer vision has foundation models. That's not an LLM, that's vision. So just to get the semantics right: LLM is for text and language, but computer vision is when you're looking at, say, a picture and someone's climbing a fence. I interviewed a company that detects people climbing fences; that's someone breaking into a yard or something. But you know what's interesting? If you listen to Ilya from OpenAI, the interview he did, that fireside chat with Jensen, he will tell you two things. He said that scale was underestimated, the importance of scale and having all this data. The second was vision, that vision actually dramatically increased the accuracy of LLMs. So they kind of go hand in hand. And I think one of the things Jeff Jonas, I think, nailed, and people talk about it, but I think we don't fully understand it yet, is we are going to replace this with this. Yes, yeah. We'll be talking to him. That's a huge difference. John Chambers said that on theCUBE in our podcast. He did. Voice will be authenticating things, be part of access. I don't think people have really grokked that yet. Yeah, talking about security ops, right? Security operations. Right now we are looking at the logs. People are sitting there, looking at the monitors and what's going through and all that stuff; things are changing gradually. But in the future we'll be talking to these machines. Let's say we see a DDoS coming from China or some national adversary, or even a friendly country, or from within the country, right? So it's like, block any traffic coming from Arizona or something, because you know somebody's DDoSing from there.
So you can just talk to machines, and then they will take action behind the scenes, because they know what you're trying to do and just map it to action. Is that going to be generative? This is the question I have. Is that going to be generative AI? Is it, you know, say: tell me what's the signature of this DDoS? Where's it coming from? Is it North Korea? Is it Iran? Is it China? Is it Russia? Okay, and then what to do about it? Block, okay. Are you going to use generative AI for that? Because it's generative, so it's sort of guessing. What's it generating? A solution? It's generating the next word, essentially. Yeah. I thought about that when I was sitting here, jotting down some notes. I used to work at EMC and we had a very sophisticated sniffer; actually we called it a CMDB, but it was not. Anyway, long story, right? So we were looking at every nth packet on the network, because we couldn't look at every packet. So you were sampling? Yeah, we were sampling. And in between my two jobs, I did a stint at the Port of Oakland, ran a trucking company for a while, and figured out that with all the containers coming from China, you can't inspect every container in a year, right? So at the port they'd pick every 50th container or so and look through that, almost at random, right? And it's like security at, you know, the Tel Aviv airport; they do it differently. So everybody will do these things differently, but they will pick every nth packet, or pick by how the packet looks, or by the intensity of the workload, how critical it is, yeah. Once you get that packet, now you want to analyze what it is, what's the signature of the payload, right? Then you can give it to the LLMs and the generative-AI kind of mechanisms. Find out, what is this? Is this malware?
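The sample-then-analyze pipeline Sarbjit describes, every nth packet handed to a classifier, can be sketched minimally. The classifier here is a deliberate toy (a byte-marker check standing in for the LLM-based analysis step), and the traffic and marker are invented for illustration.

```python
def sample_every_nth(packets, n=50):
    """Yield every nth packet: the port-inspection style of sampling,
    because you can't inspect everything."""
    for i, pkt in enumerate(packets, start=1):
        if i % n == 0:
            yield pkt

def classify(payload):
    # Stand-in for the LLM/model analysis step in the transcript; a real
    # pipeline would send the payload to a security model here.
    return "malware" if b"\xde\xad" in payload else "benign"

# Synthetic traffic: 200 small packets, one planted bad one at position 100.
traffic = [bytes([i % 256]) * 4 for i in range(1, 201)]
traffic[99] = b"\xde\xad\xbe\xef"

verdicts = [classify(p) for p in sample_every_nth(traffic, n=50)]
print(verdicts)  # 4 sampled packets out of 200
```

Note the trade-off the port analogy makes explicit: sampling every 50th item keeps inspection cheap, but a bad packet that never lands on a sampled position slips through, which is why Sarbjit mentions biasing the pick by how the packet looks or how critical the workload is.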
Who's attacking us? So these packets are coming in, boom, boom, boom, right? And you pick some random packets and give them to these sophisticated LLM-based security models and say, tell me what this is. So are you going to get roughly the same answer each time? You're saying you are. If you have the right controls. Because then there's the other thing that Ilya said, and people debate this; in fact, Jeff Jonas was like, nah, it's generative. Ilya said, look, ChatGPT wasn't designed to be that consistent, that accurate, but it will evolve there over time. And others are saying, eh, maybe not, maybe it's a different machine intelligence. Depends on the data. Their reasoning is based upon only supervised, not unsupervised, which changes the dimension of how accurate the reasoning is. But I think also the idea of prompting is going to change how we fuse data. So to me, what I'm hearing people talk about, and my vision, would be that the fusion of data, the interplay between data sets, is going to be a new thing that's outside the scope of the traditional way of organizing data in a warehouse or cloud. Because if data is free to move around and interact, that's where the generative AI could really kick in. Here's what I think. I think because it's probabilistic, the way in which gen AI models, at least ChatGPT, are going to approach it is, when there's a lack of confidence, it's going to communicate that to the prompter and/or ask the prompter questions: can you tell me more? It really doesn't do that today. It doesn't have those types of... That's when people talk about guardrails; that's the type of guardrail I would imagine, that it is sort of self-monitoring, and then it's forcing you to give it more information so that it doesn't give you incorrect answers. That's the key, incorrect answers; accuracy is important in operations, right?
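Dave's imagined guardrail, the model flagging low confidence and asking for more context instead of answering, is easy to sketch. The confidence score, threshold, and wording below are all illustrative assumptions; today's models don't natively expose a calibrated confidence like this.

```python
def guarded_answer(answer, confidence, threshold=0.8):
    """Sketch of a self-monitoring guardrail: when confidence is below
    threshold, surface a clarifying question instead of a shaky answer.
    The confidence value and 0.8 threshold are illustrative assumptions.
    """
    if confidence >= threshold:
        return answer
    return "I'm not confident yet -- can you tell me more about the context?"

# Same proposed action, two confidence levels.
print(guarded_answer("Block traffic from that source.", confidence=0.93))
print(guarded_answer("Block traffic from that source.", confidence=0.41))
```

In a security-operations setting, that second branch is the difference between an automated block and a human-in-the-loop escalation, which is exactly the runtime-accuracy concern raised earlier in the conversation.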
So I believe, let's take an analogy: when we go to universities and colleges, some people specialize in certain areas. There are stringent guidelines in some areas; some areas are wide open: humanities, art, and science domains, right? So just think of these LLMs as individuals for a minute. Some people know a lot of things about a lot of things, right? They're humanities majors, communications, right? You guys especially, right? You know a lot of things about a lot of things. But there are some people who just, day in, day out, bang their heads against how atoms work, how genetics works, and all that; they specialize in it. So the information coming from them is more accurate compared to somebody who knows many things about many things, right? So I think models will be the same way. Specialized models will have more accuracy. It's all relative at that point, because it's still an LLM, but I believe we will have security models for different types of industries, and industry models themselves, like for regulators. Kind of like Wikipedia. Actually, Wikipedia on technical and mathematics topics and stuff like that is actually very accurate, because you have experts that really know it, who change those pages and make sure they're accurate. Yeah, but the problem is it's... Curation-based. Yeah, right. Sarbjit, final question for you. Security plus AI is the theme for SuperCloud 3. A couple of questions to end this out. Where do you think the SuperCloud momentum is right now? And for SuperCloud 3, AI and security, what's your analysis, what's your final take? SuperCloud momentum, and then the second question, security plus AI, what's the critical message there?
I think the SuperCloud momentum is that we have to make multi-cloud work, and to make that work, we need another abstraction layer, and that's why we see the rise of supercloud players, companies like Cloudflare and Snowflake, all these companies built on top of cloud. And now the cloud providers are having second thoughts: should we play in that, should we go one layer above? Amazon is having this sort of identity crisis, I believe. They are saying, oh, we are the builders' cloud, but on the other side, Microsoft is saying, we will give you these apps, AI apps. Actually, a lot of experts say that ChatGPT is an LLM app. It's not the LLM itself, it's an app, right? So they have an app there. These people are saying, we will let you build LLMs, or we have one of our own. So there's a lot of uncertainty there. So I usually say that technology is like medicine, right? Every pill has a side effect, and so will AI. And there's over-the-counter AI, which is these public LLMs, and there's prescription AI, just like medicines, made just for you, because you have certain kinds of symptoms that you're trying to fix as a company. So we will have a sort of staggered AI adoption. Of course, security is a huge problem, because the bad guys have access to the same tools the good guys have, and the bad guys can break more rules than the good guys can. In many ways, even the good guys look bad these days: Sam Altman going around the world, saying, oh, it's a very dangerous thing, it's a very dangerous thing, but he's cooking it up on the fly. A very tricky situation. I think that's a hedge too. Sarbjit, thanks for coming on. Dave, great analysis. Okay, wrapping up that section, we're going to have live coverage coming in, remote live from Dell Technologies' CTO. I'm John Furrier, with Dave Vellante and Sarbjit Johal here.
Breaking down SuperCloud 3 with theCUBE Collective. We'll be right back.