All right, welcome back to the exec event here at Randy Seidl's house in Massachusetts, the Sales Acceleration event, theCUBE After Dark. I'm here with Ed Walsh, the CEO of ChaosSearch. Ed, good friend of theCUBE, good to see you, my friend. Thanks for coming on.

So we just wrote this piece, George Gilbert and I; we dropped it this weekend. It's called Getting Ready for the Sixth Data Platform. We basically said, okay, there are five really prominent data platforms, with deference to Oracle, IBM, and others we didn't include: the big three clouds, Snowflake, and Databricks. And we're saying, look, they're really not set up for this new über real-time environment, the digital twin of your business. And we had Ryan Blue on, who's the creator of Iceberg, basically saying, look, these platforms could morph. They're becoming more accepting of open table formats, which, by the way, could disrupt them. So I want to ask you, because you're in the field, you see all this stuff: when you talk to customers, what's missing in today's data platforms?

Good question. We might not know all the answers, but what customers are telling us, and I can give you a couple of examples, is that those are platforms for long-term retention. What they're looking for is how to get closer to real time. How do they get that telemetry, the event logs? The machine-generated data is coming at them so fast and furious, and typically all those platforms you just mentioned have pipelines associated with them. It's the old data-prep issue, right? IDC says it's usually 80% of the problem. Now it's 90%, because the tools are easier, but the data prep, the data pipelines, are the exact same problem. The tools are getting better, don't get me wrong.
But the net-net is they're saying, okay, that's fine for what I'm trying to do, but how do I marry the two? And they are looking to marry them. So we're a complement to these platforms. What we do well is the streaming data, the telemetry data coming in, and making it instantly available to you. The concept is, we made S3 or GCS a hot database: literally stream into any of your access points for GCS or Amazon worldwide, and it's in your dashboard in a minute, with queries in seconds. That's something you can't get from those other platforms, mostly on the streaming-data aspect. So that's where we're seeing most of the pull, the demand. And then I guess the other thing, and you saw this with the Splunk acquisition, is the idea of bringing observability and security data lakes together. Both are streaming data, both coming at you with the four V's of data, and typically they're not able to do that in the other platforms.

Yeah, so I want to ask you about that. You're saying that for the streaming data in particular, you can leave the data in place on S3, and that's going to be less expensive for all that activity, maybe not the full data pipeline. But you're hearing from customers that, well, it's getting expensive to do all this inside of Snowflake; we want to do some of the batch work outside of Snowflake, and you're seeing the streaming work now as well for certain workloads.

Sure. And then the other thing is, and we'll talk Gen AI, because you have to toss that into everything, people are looking to leverage it. What they used to do, like Databricks on the MLOps, the AIOps side, we do see it. And I was curious what you're seeing: with the LLM technology, people are saying, why do I have 18 different data lakes? Why do I have all this different ML and AI when I can look at what I can do with these LLMs?
Now they're going to bring in private LLMs, they're going to go from public to private, but that's causing people to look at architecture differently, because, okay, with all those platforms, are you going to go across six different platforms and have six different Gen AI platforms? It becomes hard.

I mean, we're definitely seeing a trend toward unifying all the different storage types and data types, but you're right, the power of these LLMs is amazing, yet it's really confusing to people, right? There are all these choices. I know in our particular case, we built theCUBE AI recently, and we got to an MVP in a month for tens of thousands of dollars or less. I mean, it's quite amazing. And I was at UiPath last week, and it's a two-edged sword for those guys, right? On the one hand, a lot of the low-end work, like making clips and pushing them out to social media, we did with our own tools; we didn't need RPA to do that, we don't need end-to-end automation. But at the same time, if you really do need end-to-end automation, you're going to need a horizontal platform. So it's both, but I think it's hard to predict right now. It's very disruptive, and there are so many choices out there.

So we're just following clients. The first move was using public LLMs, so OpenAI-type things. Now, we do telemetry, so security and observability. You can't share that with a model.

No matter what you do.

No, you can't let the LLM vendor have access to that data in any way. So even before that, we'll show you how to use a public model, and then we see people using, and they're using different terminologies, private LLMs, or SLMs, subject-matter LLMs. I don't know if those are the right terms, but that's what people are using with us.

Yeah, we've called it domain-specific.

Okay, but they're looking for their own. They're going to take these models, and think of what happened with Meta, right?
It somehow leaked out there. Now you have this model that they spent a gazillion dollars building, and you can leverage it and optimize it for your own environment. So we see that you're going to optimize the application and then the model, but there has to be innovation at the data layer under the model. What we're able to do is bring in all that data. Our superpower is getting all that data streaming in easily, without big teams doing pipelines and whatnot, having it streaming, and now having your model go after it. And we can do public models. And by the way, without the public model seeing anything but schema, it's amazing what you can do for these different use cases. What LLMs are strong at is helping the humans. So natural language, and a model knows a lot of these security-type exposures; they're not all things that happened in the last 18 months. Think of all that history you're able to draw on really quickly. And then we quickly see clients moving to their own LLMs. But it's our ability to actually orchestrate the application and the model together, instead of just using them: what everyone's doing is integrating at the data layer, they're not innovating there. What we're doing is we have this order-of-magnitude better data model for cost-effectiveness. Now we're orchestrating what you can do with LLMs, public or private, with your application. And that unleashes things.

Where does the innovation need to be, in your view, in that data layer?

It's a couple of things, right? Once you do the LLMs, once you have AI, you want more data, right? But right now, with the current data models, because of the complexity and cost of bringing data in, no one keeps even the simplest things, like logs. Look at the hack at Microsoft with the whole State Department: if you didn't have the logs, you couldn't go after it. So, very basically, you should have all your logs, but people don't. And then they do a pipeline of logs and they throw out data, and that's the wrong way to do it.
Because it's too expensive?

It's too expensive to put in. So what we have is an order-of-magnitude benefit. I can give you a couple of examples of what people did, but every one of our customers goes from, I'm at four terabytes a day, to now I'm keeping 38; on Black Friday, 300 or 400, and I'm still saving 58%. What we didn't do is integrate the backend; we innovated the backend. It's a purpose-built database with six patents. We make S3 or GCS a database, and because of that we pick up their attributes. So there needs to be something fundamental; it's not just putting a format on S3 or GCS. Yes, it's cheaper, right? Everyone knows that, but at some volume you just can't keep it all; with us you can. So one, you need to be able to cost-effectively keep it all. The second thing is the pipeline, the ingest. If your database is S3 or GCS, you land the data, we understand the schema, and you set it and forget it. That's not the case for any of these other platforms; that's a lot of time and complexity.

So you mentioned earlier the Splunk acquisition by Cisco, a $28 billion acquisition. It wasn't a big surprise. Like I said, it wasn't a well-kept secret; we knew it was coming. We knew a year earlier Cisco tried to buy Splunk for, I think, $20 billion. They ended up paying $28 billion. Maybe Gary Steele still got a little cleanup worth $8 billion; nice job. But what do you make of that consolidation in the observability space?

I love it.

Why? Why do you love it? Why are you happy? I mean, Jeremy Burton was saying the same thing.

Okay, so if you look at it, Cisco just picked up high-quality SaaS revenue. Very few acquisitions let you pick up that size of, again, high-quality, high-margin revenue. So to me it made all the sense in the world for the financials. Cisco's trying to do that as a business: get more on the software side, more repeatable revenue.
And they really want to do this full-stack observability, and there's a lot between the cup and the lip to get there, right? And the thing is, if you look at Splunk, it does a lot of that, and they're bringing enterprise clients. So to me, for the technology, the revenue quality, and the customer base, it makes all the sense in the world. Now, underneath it, it's the same question: do you innovate on the data layer? Well, less so, right? I'm not giving away anything proprietary about Splunk, but yeah, they need to do some innovation on the data layer. Imagine, again, if you look at the architectures we're doing, we're saving 58% on the backend. Imagine what that would do for Splunk's margins. Now, Splunk's margins are unbelievable if you look at them, right?

Well, yeah, that's why it's going to be accretive to Cisco.

So I love it. And also, it brings awareness. It's like, hey, listen, you need all this streaming data in. They do that well. Notice, you didn't name them as one of the big platforms. They do this streaming well, just like we do; we just do it an order of magnitude more cost-effectively.

Yeah, and I mentioned Jeremy Burton; he did this post, it was like old guard, new guard. Jeremy's very clever in that regard. But, all right, Ed, thanks so much for taking some time with us on theCUBE. I'll let you go back to the party. Appreciate your time.

As always, thank you.

You're very welcome.

All right, keep it right there for more action from the Sales Acceleration exec event in Massachusetts. We'll be right back.