Hi everybody, welcome back to SuperCloud 4. We're here live in our Palo Alto studio. I'm Dave Vellante with my co-host Rob Strechay. Rob, we've been going all day, and we're going to continue tomorrow; with so much demand for SuperCloud content, we decided to bleed into day two. Lior Gavish is here, he's the co-founder and CTO of Monte Carlo, a company focused on data observability, and he's joined by David Linthicum of Deloitte. Dave just wrote a book, An Insider's Guide to Cloud Computing, which just surpassed 10,000 copies. Congratulations.

Thank you.

Get a copy on Amazon, and welcome to theCUBE Studios. Thanks so much for coming on.

Thank you guys for having us.

So why'd you write this book? You've written a lot of books. Why this one?

A lot of my clients were telling me that they really needed to understand the story behind the story. In other words, what are the secrets behind cloud computing? What works and what doesn't? And they seemed to be getting a lot of that detail from communicating with industry insiders, things like that. So I figured I'd write an insider's guide to cloud computing to reveal what's working out there, what's not, how to make investments, how to leverage technology like generative AI in an effective way, with many chapters on how to build a supercloud and deploy it in your enterprise to save money and mitigate complexity. It was a book I thought needed to be written, so I wrote it.

Obviously, you see all the data. David and I were talking earlier. It's like, cloud, it's less expensive; cloud, it's more expensive. Yes, it's true, and it depends, right? It's horses for courses, as they like to say in Britain. But what's your perspective on all this evolution? We're going to get into the AI piece, but the cloud is kind of mature now, isn't it?

Yeah, absolutely. From my perspective, I'm struggling to think of a company that hasn't adopted the cloud. So I think the question is more what goes into the cloud and what doesn't. There are a lot of factors that go into that decision, cost being one of them, but also agility and flexibility and the ability to scale. And so I see companies making those decisions and gradually getting workloads into the cloud.

Yes, very few companies are not in the cloud. You're right, everybody's in the cloud. But at the same time, the data we have suggest that only about 20% of customers are all-in on cloud. So it's a hybrid world, right? And I'm sure you talked about this in your book, but I'd love your perspective on it. Maybe it's not an equilibrium per se, but it's more of a balanced approach. People are rethinking where to put stuff, right?

Yeah, it used to be a very polarizing approach. People were either all cloud, cloud only, where you'd get that edict from the top, or they were never going to use a cloud, depending on who you were talking to. And the reality is they're finding it's going to be a balanced approach. Some workloads and data sets should reside in the cloud because they're going to be optimized for the cloud. Some should still reside on premises. Some should reside on edge computing. And really we're getting to this ubiquitous computing model: we're going to run everything everywhere and be able to find that processing and leverage it, and leverage the data, where it resides.
So we've kind of come to an agreement that there is no agreement, that there isn't going to be a single cloud to use. And your ability to leverage abstraction and automation to mitigate that complexity, because it's obviously going to become a more complex environment, is really where people are focused right now. They want to leverage the platform that has the most optimization for their workloads and for their data sets, and do so in a very realistic way, in a way that's going to scale, and in a way that's going to return the most value back to the business.

Sounds simple, doesn't it? But to get to this simplicity, as you're saying, David, it's very complex. You love complexity, I'm sure.

Yeah.

Because that's what you guys do. How is AI changing the game generally? I know that's sort of a bromide, but how is it changing how you approach the market?

For us, AI is a huge accelerator. We're an observability company; we basically help companies understand the health and the reliability of their data systems. From our perspective, the promise of GenAI is amazing, right? There's so much good stuff that enterprises can do with generative AI, and I think the key to that is getting enterprise data integrated with AI. There are lots of problems you can solve with public information, coding being one of them; generative AI models are very good at generating code. But if you think about a lot of the problems that enterprises are trying to solve, either internally or for their customers, that's going to require access to proprietary data. It's not going to be based solely on information that's been out there and used to train those foundation models. Which means that enterprises are going to need to augment AI with their internal data through a RAG architecture, or they're going to have to fine-tune their models and basically help the models become more knowledgeable about specific domains that are relevant to the enterprise. And from our perspective, that's very exciting. We want to help companies enable that. These systems are pretty complex, and whether you fine-tune or use RAG, you're building a lot of data pipelines that feed information from data found across the enterprise into the customer-facing application. And Monte Carlo is there for our customers, to help them make those data pipelines work, and work reliably, and be trusted, because for the adoption of AI you have to make these things trusted and reliable, right? The first year of AI was about hallucinations and all kinds of issues that emerged with AI, and we're there to help mitigate some of those things and make AI more powerful for the enterprise.

And I think you just hit on a really powerful point. It's the transparency as you build data products on top of all of this data, be it an AI-based product, an analytics-based product, or an ML-based product, as the case may be. It's really about where the data came from, its lineage, the quality of that data, how do I get to it? Because I think you guys had a blog a couple of weeks ago on different use cases. And the one I like to use is, hey, my finance team is looking to build an LLM to basically build their 10-Qs every quarter. It's very structured, it's very fixed, I know what data needs to come in there, I know how to go and build it. That seems like one of those contained use cases. What are you both seeing as the starting use cases where people are really inserting themselves here?
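[Editor's note: to make the RAG pattern Lior contrasts with fine-tuning concrete, here is a minimal sketch in Python. The embed and complete functions are hypothetical stand-ins for whatever embedding model and LLM provider an enterprise actually uses; the point is that proprietary documents are retrieved at question time and injected into the prompt, rather than baked into the model's weights.]

```python
# Minimal RAG sketch: retrieve relevant internal documents, then ground
# the model's answer in them. `embed` and `complete` are hypothetical
# stand-ins for a real embedding model and a real LLM API.
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashing embedding; swap in a real embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def complete(prompt: str) -> str:
    """Placeholder for a call to a hosted or self-hosted LLM."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

# 1. Index proprietary documents the foundation model never saw.
docs = [
    "Q3 churn rose 4% in the enterprise segment after the pricing change.",
    "The Berlin warehouse ships all EU orders; cutoff is 14:00 CET.",
]
index = [(d, embed(d)) for d in docs]

def answer(question: str, top_k: int = 1) -> str:
    # 2. Retrieve the most similar documents (cosine similarity).
    q = embed(question)
    ranked = sorted(index, key=lambda pair: -float(pair[1] @ q))
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # 3. Augment the prompt with retrieved context instead of fine-tuning.
    return complete(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(answer("Why did churn go up last quarter?"))
```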
I think it's a matter of solving operational problems, the things we couldn't automate before. We're getting to that last bastion of the automation that digital transformation was all about. There were about 20 to 30% of processes that we just couldn't digitize, just couldn't automate. But they're kind of busy work: they're repeatable, but they need certain amounts of expertise and knowledge to run. So if we're able to automate that last mile, then I think we're eliminating a lot of the inefficiencies in the existing manual systems. And so companies are looking at those processes as something they can automate, whether it's a sales automation system, the ability to automate a factory floor, or the ability to automate a supply chain. A lot of stuff is still done manually, and those things are ripe to be automated with generative AI, and other AI systems for that matter.

So who gets disrupted? Don't answer that; more specific question. Take something like RPA. If you're a point product in RPA, I would think you would want to sell the company or move on and find a new path, as an example. It's like being a storage admin back in the day, right? Doing LUNs. So that's an example where there was a reasonably heavy lift to put in RPA, and now GenAI can do a lot of that stuff. Are you seeing a lot of examples like that? And what does that mean for organizations in terms of what they do with their processes, how they reskill? This is a big, complicated question for a lot of companies. How do they go about doing that?

So, unfortunately, there aren't a lot...

You're deferring that to me? No, I'm kidding. Go ahead.

No, please, no, please.

You know, you're absolutely right. The job changes quite a bit, right? In the same way that if you used to be a database administrator, with the advent of Snowflake you had to go and do other things, it's the same thing right now. Even with coding: generative AI is making that, and a lot of other tasks, a lot easier. And the bad news is that there's no one out there who knows how to do this, right? There are very few people out there with experience of how to use GenAI to do X or Y. But that's also the opportunity, right?

Yes, good news in a way, right?

Right, it's good news. We all have to learn and adapt it to our own work. Some of the early use cases we've seen be extremely successful are around, again, coding and content creation. We've seen a bunch of success with that, and there's a lot more coming; I think it's the tip of the iceberg. So far, we've seen mostly foundation models doing fairly basic manipulation of mostly public information. The next generation is those models being informed and knowledgeable about the enterprise's own data, and kind of unlocking unstructured data for the enterprise. So far, if you're an enterprise that's using data, you're mostly using structured, tabular information. Now you can suddenly look into textual data, images, videos, and voice, and analyze it and use it in ways that haven't been done before. And yes, you need to learn a new skill set, whether it's prompt engineering, how to build RAG architectures, or how to fine-tune models, and people have to learn that. Most of the enterprises I've seen so far basically put together a tiger team, a small team.
It's usually someone reporting to the CTO or whatnot, and they basically brainstorm 70 or 150 or whatever different ideas for how that particular company can use generative AI. Then they go ahead and try to prototype and implement some of those things and create proofs of concept that sometimes get handed off to teams in the company, who go and productize them and implement them in customer-facing products.

I'd like to pick your brains on the intersection of cloud and AI. Andrew, I wonder if we could cue up the power law. We took some liberties with the concept of power laws, but basically the idea is: the vertical axis, when we bring it up, is size of model, the horizontal axis is model specificity, and it's a long tail. And you've got open source and third-party models pulling the torso up. One of the taglines you hear a lot is, we're going to bring the AI to the data. Well, the data is everywhere. The data is in the cloud, the data is on-prem, the data is at the edge. So here's my question. Is the on-prem experience cloud-like enough today that that abstraction layer exists? Will they have the tools, the innovation that the cloud guys have? And then what about the edge, that really long tail? How will that play out? Will it be yet another silo? Will they be interconnected? What's your vision on that, David?

Yeah, I think that's a great question. I think the reality is that the on-premises systems we have available to us now may have more value...

Excuse me, sorry to interrupt, David, but if you could bring that up. So you can see here, I won't go into it too deeply, but it's that middle part with the specialized AI; that's the model specificity. And then the edge, you started to address that. There are governance issues, there are IP leakage concerns. The consumers are driving the big models up on the left: the Googles, the Amazons, the NVIDIAs, whose heritage actually was largely gaming, and OpenAI, of course, with ChatGPT. But then there are lots of opportunities, and this is not meant to represent the dollar opportunity; I think the dollars could be much, much bigger in the long tail. So I apologize for the interruption, but back to that question.

Yeah, I think that's a great chart. And ultimately, if we're going to do a specialized AI that's localized for us within the business, there may not be a reason to store it in the cloud unless there's some financial or optimization benefit to doing that. In many instances, that's not going to be there. The price of hardware within data centers has dropped tremendously in the last 10 years, and that's a very compelling reason to keep a lot of stuff on premises. So unless there's a compelling reason to put this stuff in the cloud, that's where it's going to reside. Edge systems as well: the ability to put something within a robotic system, because we're looking for the AI that's closest to where the data's going to get gathered, an intelligent edge-based system, may make that the preferred platform moving forward. Just like we said a few minutes ago, we're moving to this ubiquitous computing model where everything's not going to be on one platform. It's going to be at the edge, it's going to be on premises, it's going to be on my wristwatch, mobile phone, car, all these sorts of things. We've got to be able to leverage it where it exists.
And I think that AI is going to be much the same thing. We don't have to run it within a cloud environment. We can run it on premises, we can run it in the cloud, we can run it at the edge, and it can be very effective. Again, it's the what and the why: what does it need to do, what's the purpose of it? And then we create the bespoke solution. I'm highly optimistic we can make that happen.

Yeah, are you seeing this in your customers and where they're storing their data? Because that's what you do, right? You help them build those products.

Yeah, I'll take the opposing perspective. I'm going to argue that a lot of AI is actually going to happen in the cloud because of the economies of scale. There's no way that any individual company can run something that's as advanced and as scalable and as flexible as an Azure or an AWS or a GCP or an OpenAI can. These are people who have armies of engineers and operations people thinking about this problem day in and day out. It's going to be hard for any enterprise to do that. And some will, for good reason; the Department of Defense is probably not going to run its AI models on public cloud. But you're going to have to have a really, really good reason to go on-prem, and for the vast majority of the world, I'm not sure it's there.

Well, let's debate this a little bit, because you can make the case that all the innovation is happening in the cloud. They've got optionality of LLMs, they've got startups coming in, and Google's making some big moves. Okay, it's alluring. The flip side is that people are concerned about IP leakage, and you're not going to run a Tesla inference in the cloud. So very clearly, real-time inferencing at the edge is going to be done on Arm processors, not likely in the cloud. Maybe they'll move the cloud to the edge. Any thoughts on that?

Yeah, I mean, he's right. There's a compelling reason to leverage the cloud, and I'm not saying we should disallow moving into the cloud. That's always going to be a good option, but we have to do the analysis to figure out whether it's optimized for the particular workload, the LLM that's going to run on the particular system. In many instances, it's going to be contraindicated to run in the cloud, because we're able to get more value, a lot more value in some instances, out of running it in a single private data center. There has to be a reason to do that; you have to have that compelling reason. I get it: all this other stuff is co-located in the cloud. That's the reason you put AI systems in the cloud, and that's why, the majority of the time, AI systems end up in the cloud. But the reality is we have to consider all the assets at our disposal, and we have to look at the optimization of these solutions and their ability to bring the most value back to the business. That's the metric. And I think in many instances we're missing that. Some people are just moving things automatically to whatever platform they think everybody, or the cool kids, are moving to. And that's absolutely the wrong choice. We're going to end up having to push those things back, repatriation in many instances, and put them back on the platforms that are able to provide the most value, because we made the mistake of putting them on a platform that wasn't optimized for that particular workload, that particular LLM, that particular dataset.
But this is obviously good news for the cloud guys, right? They're going to get a tailwind there. And I'd say it's good news for Arm; they're going to kick ass at the edge. But when you listen to, say, the Dell financial analyst meeting I was at, and the HPE one, they're presenting it as, hey, this is great news for us too, right? So is it a tide that lifts all ships?

I think it is. And I think it's in between what you both have said, right? The exact thing is that things are going to be able to run on CPUs; it's not like you're going to have to have all GPUs. And some CPUs are going to have co-processors, GPU-driven types of co-processors.

And NPUs and accelerators.

Yeah, I think you'll have a lot of specialization, especially for inference, because to your point, inference at the edge is where it makes sense: co-located with the people, or the data, the machine data, that's actually being used to make those decisions. If you're running a factory and your factory becomes disconnected from the internet or from your cloud, you don't want that factory to stop if your machines are running and using inference. And where the models get trained and you need 10,000 GPUs to go train the model, it makes a lot of sense to go to the cloud and do it up there. I think it becomes economies of scale.

Maybe. And what size model? Doesn't OpenAI run a lot of its training in a data center in Iowa, because it's too damn expensive to do in the cloud?

I would say they do it in a lot of different places, in addition to the money they got out of Azure, and the credits and stuff like that. But I would say there's a lot happening in Iowa.

Or, like Andy was saying, some of the specialized cloud players, like CoreWeave and others, are gaining. So I think, again, it's a cloud operating model, and this is where you get to the point where you're going to have it all over the place, and you're going to need DevOps or platform engineering to connect all of these things together. Because sometimes you're going to read data from one cloud into another cloud. I had customers at a previous company that were collecting all their data into their data warehouse and their data lake inside of AWS, and then they were copying that data, transforming it, and putting it into BigQuery on Google. And it was costing a phenomenal amount of money to move the data out of Amazon. But they had certain reasons that made it make cost sense to do that on BigQuery, because they had built out an AI system on top of BigQuery, and BigQuery is awesome and it served their needs. But you will get better at it. I mean, NVIDIA has been building a layer that would basically abstract away the cloud vendors.

Yeah, they wanted to do it in Amazon. Amazon said no, right? Is that right?

Well, they did it anyway, right? Now you can hit an API and run it whether you want to run it on Amazon or Oracle.

Right. Do you care where the data is? Would you prefer it to be in the cloud, or does it not matter to you? It can be at the edge, it can be on-prem.

From our perspective, we are where our customers are, which is primarily in the cloud, and so we focus on the cloud as well. We obviously care a lot about where the data is, because we build deep integrations and we deeply understand the systems that we're helping our customers monitor and observe.
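[Editor's note: a minimal sketch of the edge-inference pattern Rob describes, where a factory keeps making decisions from a locally cached model and only syncs updated weights from the cloud when a connection happens to be available. LocalModel and fetch_weights_from_cloud are hypothetical names for illustration, not any vendor's API.]

```python
# Edge-inference sketch: inference runs locally, co-located with the
# machine data; cloud connectivity is best-effort and only used to
# pull a newly trained model, so a disconnect never halts the factory.

class LocalModel:
    def __init__(self, weights: float):
        self.weights = weights

    def infer(self, sensor_reading: float) -> str:
        # Trivial stand-in for on-device inference (e.g., on an Arm NPU).
        return "halt" if sensor_reading * self.weights > 1.0 else "run"

def fetch_weights_from_cloud() -> float:
    # In reality: download a newly trained model artifact. Raising here
    # simulates the factory losing its connection to the cloud.
    raise ConnectionError("cloud unreachable")

model = LocalModel(weights=0.5)  # last known-good model, cached at the edge

def control_loop(readings):
    global model
    for reading in readings:
        try:
            model = LocalModel(fetch_weights_from_cloud())  # best-effort sync
        except ConnectionError:
            pass  # disconnected: keep running on the cached model
        print(reading, "->", model.infer(reading))

control_loop([0.4, 2.6, 1.9])
```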
But ultimately, we build what our customers use, and so where they are is where we build.

Is that on-prem experience, though, that cloud operating model experience, close enough today, David? Because you talk to guys like Adam Selipsky, and he says, that's not cloud; we're cloud. He makes that statement, I'm paraphrasing obviously. But of course, when you talk to the folks that have on-prem heritage, they say, no, it's a cloud operating model and we can basically do what the cloud guys do. What's the truth?

It can be, if you do the right things, with the right tools and technology on the on-premises system to make it seem like the cloud, and certainly there are private clouds you can leverage as well. Those are, by the way, making a comeback now. However, there are reasons to move into a public cloud provider because of the co-location of all the other IP and services they have there. And he's absolutely right: they're going to have a better uptime record, a better ability to do that and maintain it over time. But the reality is, how much does that mean to you? Does it mean $20 million a year? That's when people suddenly love this operating model and figure out how to make it work, right? And I think that's what we're coming down to. A lot of these people are making very tough decisions about where these things should run, and it really comes down to money. We did an extraordinarily bad job in the last 10 years of optimizing workloads on the cloud, so people got these huge cloud bills in 2022, as one research paper after another showed how much more expensive this was than we thought. And people are responding to that. They're looking for alternatives that won't get them into the financial dire straits they got into by moving very quickly into the cloud during the pandemic.

You know, there was a study ETR did last summer. Remember, you saw this data, on which component of your cloud bill is most concerning. And the answer surprised me: it was database. I thought it would be compute. I look at our cloud bill, and it's all compute, a little bit of storage in there. But it was database. And I wonder, maybe it was a heavy Snowflake influence because they bundle it in, I'm really not sure, but it surprised me, because from the revenue flow you'd think it's mostly compute.

Yeah, I think it has to do with a lot of the different pieces in that stack that come together and get abstracted, because you still have to pay the compute bill for those databases, and the networking bill, and all the other bills. When you start to see that layer cake built up, it becomes very expensive. But to your point, I think people look at it and go, there are open source technologies and data mesh technologies I can now deploy that give me a cloud-like data platform that can stretch on-prem and into the cloud. And maybe that's good enough to bring certain data together, because the data at the end of the day has weight. It has gravity. It wants to sit where it is rather than making copies. And I think a big piece of why database is the most expensive is that they're doing transformations, and when they do those transformations, they're not really being efficient about it, and it's driving a lot of compute.

Yes, it drives a lot of compute and it stores a lot more data.
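[Editor's note: a toy version of the kind of check a data observability tool runs against a pipeline, in the spirit of what Lior describes, flagging a table whose daily load volume deviates sharply from its recent history, since runaway volume from inefficient transformations often shows up later as a runaway warehouse bill. This is a sketch with made-up numbers, not Monte Carlo's actual implementation.]

```python
# Volume-anomaly sketch: compare today's row count against the recent
# history of daily loads and flag statistical outliers for investigation.
import statistics

def volume_anomaly(daily_row_counts, today_count, z_threshold=3.0):
    """Return True if today's load volume is a statistical outlier."""
    mean = statistics.mean(daily_row_counts)
    stdev = statistics.stdev(daily_row_counts)
    if stdev == 0:
        return today_count != mean
    z = abs(today_count - mean) / stdev
    return z > z_threshold

history = [98_000, 102_000, 99_500, 101_200, 100_300]  # hypothetical daily loads
print(volume_anomaly(history, 100_800))  # False: a normal day
print(volume_anomaly(history, 410_000))  # True: investigate the pipeline
```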
And I think one of the unintended consequences of AI is that way more data is the output. And then what do you do with all that data? Because again, that's going to factor into: do I keep it on premises, do I move it back on premises once the model is trained to consolidate some of that cost, or is it cheap enough in the cloud to keep it there?

All this way more data, it's like music to your ears.

We love it. No, I think you're right. But I think things are going to balance out a little bit. Until 2022, 2023, we were in a growth cycle, and everybody just built and built and built for growth, and nobody really looked at the bill too much. Then the economy changed and we all started taking a closer look at our bills. One alternative is to go off the cloud, go into private cloud, or change the infrastructure you run on. But what we also see companies doing is optimizing the workloads, right? There are many years of workloads that were built without an eye to efficiency or optimization, and they can be optimized. The cloud vendors are not oblivious to this either; they're taking proactive measures to help customers reduce cost and changing their pricing. We're helping our customers too. Observability is a big part of optimization; we've helped our customers reduce their Snowflake and Databricks bills quite a bit. So I think we'll see things evening out as the macroeconomics force companies to take a closer look at how they use data and how they spend money, and we're all for it. It's going to make everybody better.

You're right, it's a good point. We kind of forget, things have been so AI crazy, but the first half of 2023, and the second half of 2022 into the fourth quarter, were all about optimization. Actually, Amazon and others have been at that for a long, long time; the bet is, hey, if we can help them optimize, we can keep them here, right? So it's going to be interesting to see: Amazon announces earnings tonight after the close, in about an hour and 15 minutes. I don't think you're going to see a big uptick just yet from AI, but in Q4 we might start to see that. And if we don't, then I think people are going to start to get a little bit concerned and say, okay, show me the money, right?

Absolutely. I think right now it's anybody's game. People are implementing at their own speeds. There's a lot of talk about AI going on; what's actually happening in terms of consuming cloud cycles and database cycles in the cloud remains to be seen. Right now I think there's a lot of experimentation going on, proofs of concept, prototypes. People aren't getting as far as you'd think based on the hype we see in the press. So I would push them to say, let's go a little slower to go faster, and let's make sure we're making the right decisions about where to platform this stuff and where to make the bets. My big concern right now is that people are building systems using generative AI that should not be built using generative AI. I saw this throughout my career: in the early days of AI, that's why it became too expensive and got pushed off for a while. We're having the same sort of thing now. We think we can do everything with this technology. It certainly can add value, but again, how much is it worth to you?
And your ability to put resources into certain things, the innovative bets and investments you need to make in your company, have to be completely strategic now. You can't make a mistake; you make a mistake and you find yourself outside the market.

Yeah, it really comes back to that business case, something I'm sure you both spend a lot of time on. Guys, thanks so much for coming on SuperCloud 4, really appreciate it. Okay, up next: Jeff Boudreau was just named the chief AI officer at Dell. Dell is nearly a $100 billion company, and they don't have a chief data officer. So we sat down and talked about some of the organizational implications, how Dell is using AI internally and externally, and what Jeff Boudreau is seeing with regard to this role in other organizations, in financial services and healthcare. So keep it right there. Dave Vellante, Rob Strechay, and John Furrier will be back. Watch this, and we'll come back with our live program right after this conversation with Jeff Boudreau at Dell. Thanks for watching.