Hey everyone, good afternoon. Welcome back to theCUBE's live coverage of day two of Google Cloud Next. This is our second full day of coverage here. Lisa Martin with Dustin Kirkland. We've got an OG CUBE alumni back with us, and we're going to have a great conversation about databases, GenAI, why real-time AI is the new norm, and a great customer story. Please welcome back Raj Verma, CEO of SingleStore. Great to have you back, Raj. Great to see you, Lisa. Yeah, absolutely, always. Kanan is here as well, chief architect at LiveRamp. Hi guys, thank you for joining Dustin and me on theCUBE today. Raj, we go way back. We do? We do, we do. That doesn't mean we're getting older, but don't tell anyone. No one's listening. My kids do keep reminding me, but yeah. I know. Give us a little lay of the land. SingleStore, you've been there about four years now. What's going on? What are some of the exciting things you have cooking? Yeah, Lisa, as you know, I've been a big proponent of real-time. We do think that the world needs to be real-time, especially in the service economy that we live in. You and I go back to TIBCO days, where we were propagating the same mantra. And what was fascinating about SingleStore now, and MemSQL back then, was the fact that in the database world there wasn't a technology that could bridge the gap between transactional data and analytical data. So our vision has been, and continues to be, that the future of databases is a platform where you can transact with very high fidelity and reason with data, without moving data, and put it in the right context in a hybrid, multi-cloud environment with millisecond response times. And if you can do that, then that gives you actionable real-time insights to do whatever it is: risk, personalization, saving kids' lives, or what have you. So that's really our vision. 
You know, one of the things you and I talked about, I think it was early during COVID right around the rebrand, was that real-time access isn't a nice-to-have anymore for businesses in any industry. You mentioned some great ones. Thorn, I know, is a great customer of SingleStore's, but retailers on the consumer side too. We just have this expectation that we can always be connected and get whatever we want, but it needs to be real-time, it needs to be relevant. You say real-time is the new norm for everybody. Talk about why that is and how SingleStore is really leading the charge there. Yeah, I do think that, I mean, just think of what you've done since you woke up this morning in terms of the service providers you used. I took an Uber, I listened to an audiobook, I'm going to probably buy something on Amazon, or at least my wife did. And all of that without real-time doesn't work. So the fact is, whether you like it or not, the world is real-time, and our expectations of real-time are very real, right? And they weren't so five years ago. In fact, that's one of the topics I cover in my upcoming book, Time Is Now. We talk about the fact that AI is not a new thing. I mean, I'm a computer science engineer, and we were doing AI 30 years ago, when I was eight, of course. What has made AI and real-time actually possible is the amount of information in the world and the compute power, and here we are at Google, one of the biggest compute providers and arguably one of the better ones. So really, the three-step process, as we lay out in the book, is information, there's tons and tons of it; then putting that in the right context; and then the choice as to what to do with it. That choice element always calls for new-age leadership, 100%, and I cover that in detail. However, putting information in context in real-time gives you the knowledge to make proper choices. 
And that's what we stand for. And I'd like to double-click into real-time. What kind of latencies are we talking about here, and is public cloud suitable for all of those use cases, or are we going to end up seeing some of that data decisioning and inferencing happening at the edge to hit those latency tolerances? Excellent question. The latency we are talking about is very low single-digit milliseconds. So it is sub-10 milliseconds. Correct. In fact, one Fortune 5 company's SLA with us is not to exceed 20 milliseconds of latency, and they're one of the biggest data shops in the world. You ask a very interesting question, and it's also very close to Kanan's heart: the entire need for speed of compute through information, which is really the GPU and LLM conversation. Right. I do think that the future of AI is with GPUs. Now, will the clouds incorporate GPUs as part of their stack? I just see that as the next move in the cloud. For sure. The world is moving towards GPUs, there is no doubt about that, and SingleStore is extremely well suited and available for GPU processing. Excellent. Kanan, let's bring you into the conversation. Give the audience a little bit of a backstory on LiveRamp. I know you guys have a great story and a lot of strong, substantive outcomes, but tell us what you do, and then let's double-click into what you're doing with SingleStore. I'm Kanan, chief architect of LiveRamp. LiveRamp is a data collaboration platform. We live in customer data, data from different customer touchpoints, and we make it useful by connecting it together. And we let you collaborate with your partners and customers using that data. So that is our main collaboration platform. We have nearly two decades of experience in the space, so we have rich partnerships built over many years, and we've developed relationships and all that good stuff, which is like wine: it gets better with age. 
Whereas technology is not like that. Technology we have to reinvent every few years to stay on top of it. The knowledge and the data are durable, but the technology we have to reinvent. So that's what we are doing right now: we are reinventing ourselves with new technology like SingleStore and other new pieces of the stack, upgrading to solve the problems we have now. And I understand that LiveRamp gets two terabytes per second of read/write on an object store with SingleStore. Talk about that and why that's so crucial to LiveRamp's business. Yeah, I joined LiveRamp about three years back, and we started looking at the data very deeply. LiveRamp's data is mostly structured data, but we'd been using Hadoop-style processing to scale it, because that was the only thing available at that point in time. With structured data, we wanted to scale with a really good database, so we started evaluating databases, and that's when SingleStore came along. Around the same time, SingleStore started supporting object store, and Google had good object store support with GCS, which gave us multiple terabytes per second of throughput. It all came together really well for us to solve our problems. So with the combination of GCS as the object store, SingleStore as the database, and our data, we took a batch process that was taking 20-plus hours and brought it down to seconds. That was a big change, and we started moving from batch processing to close to real-time processing. While we were doing that, SingleStore introduced vector support, a vector approach. That gave us a different angle: we could handle our transactional data and analytical data with the same database, and now we could convert it to vectors. 
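The vector approach Kanan describes boils down to similarity search over embeddings stored alongside the rows. As a rough illustration only, with made-up row IDs and tiny three-dimensional embeddings rather than LiveRamp's actual data or SingleStore's API, a brute-force version looks like this:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical table: row ID -> embedding vector (real embeddings
# would have hundreds or thousands of dimensions).
rows = {
    "cust_1": [0.9, 0.1, 0.0],
    "cust_2": [0.0, 1.0, 0.2],
    "cust_3": [0.8, 0.2, 0.1],
}

def top_k(query, k=2):
    # Rank every row by similarity to the query vector; keep the top k.
    ranked = sorted(rows, key=lambda rid: cosine_similarity(query, rows[rid]),
                    reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.0, 0.0]))  # the rows most similar to the query embedding
```

A database with native vector support runs this kind of ranking next to the transactional rows themselves, usually with indexes rather than a full scan, which is what removes the need to copy data into a separate analytical system.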
With the vector data, we were able to train models and bring them to life. In 2019, I said at Apple's FoundationDB Summit that we need a data store to support micro models; to build micro models, we need a good data store. I talked about it in 2019, and SingleStore solves it now. It's a dream come true, in a way. Because LLMs and the bigger models are huge data models, but to solve enterprise problems we need micro models, models specific to that enterprise and its problems. To do that, we need to convert our data into vectors and train our models. That's where SingleStore comes into play: we can move our analytical data and transactional data into vectors and train our models with GPU support. You know what, we'll have the fastest and biggest graph in the world. Yeah, you've described what sounds like a very complex architecture, but that's actually quite common when you start dealing with scale: the structured data, the unstructured data, multiple different database implementations in there. But you also gave us a bit of what you want out of the next generation. So keep tugging on that thread. Tell us a bit more about your vision of where this evolves. So basically, now the analytical data is there, the transactional data is there, and they're combined together. That's very good; we need one database to handle both, and now we've moved it to vectors. But even when the data is in vectors, GPU memory and CPU memory are still isolated. This is the database closest to solving that problem, because they already swap between memory and storage very efficiently. 
With the partnership they have with NVIDIA, my dream is for them to do the same swapping between GPU, memory, and disk, which would solve the whole problem. So the way you're going to meet in the middle between the CPU and the GPU is SingleStore. Yeah. And it's a native in-memory database, right? They've already done it with memory, so they are the closest one to solving that for the GPU. Well, that's my vision; I want to solve the whole problem. Well, thank you. That's great. Raj, bringing you back into the conversation. Talk a little bit about some of the technology requirements LiveRamp had when they came to SingleStore and said, this is what we've been doing, it's not working, we've got a lot of customer demand. What were some of those technology requirements under the hood that SingleStore delivers that no other competitor can? Yeah, I do think that Kanan probably knows SingleStore as well as any of our engineers do. I mean, that's really the beauty of the infrastructure world. One of the things that you touched upon, Dustin, was that the architecture for the new age seems very complex. That's true. And one of the core beliefs at SingleStore, apart from performance and the other stuff that I'm going to get into, is that complexity cannot scale. The only thing that scales is simplicity. Now, simplicity doesn't mean you ignore complexity; it means you conquer it. What we at SingleStore believe is that you can retire between three and seven databases from your architecture by bringing in SingleStore, hence the name, and you were part of the rebranding: a single store for your data. Now, that doesn't mean the world only needs one database. Sure, it needs more than one, but a vast majority of data workloads can be managed by SingleStore. 
So that was one of the big ones, and I don't mean to speak on Kanan's behalf, but as he articulates, that solved, or will solve, a lot of the complexity for LiveRamp. Apart from that, there is always this tug of war between performance and price tag, right? Everyone likes performance, but at what cost? Due to our unique storage architecture, we span memory, disk, and object store, so you can get low single-digit millisecond response times as well as the lowest TCO for your business. That, fundamentally, is the value proposition of SingleStore, which was thankfully very well recognized by Kanan and then by his team. So we're very grateful for their support. I don't want to go too deep, and I'm sure we can read the white papers on it, but just tell us a little bit about the performance of SingleStore. I'm super interested in how fast that GPU can put and get data. So one of our esteemed customers, wink wink, has been able to reduce their response time from 44 hours to 18 seconds. Oh wow. So that's what we are talking about: nearly two days down to seconds. And I know someone who can talk more about it, but yeah, that's the sort of time we're looking at. But I'll give you another example. We have a very large telecommunications provider that takes all these reports from the telemetry on their switches and infrastructure. They had a report they called the big report, and it would run for 12 hours and then time out, because it just couldn't process the information. They have converted that into a real-time dashboard using SingleStore. So you're talking about three to four orders of magnitude difference in performance. Can you elaborate on that? I think the spotlight is on you. So one of the main things is that we are an innovation company. We want to innovate, so we wanted an innovative partner. That's what we looked for in SingleStore. 
You know, we wanted an equally innovative partner so that we can change the world from our side. So that's one of the reasons. And when you look at the details, as you asked: it checked all the boxes we wanted. It has a good SQL interface, a MySQL-compatible interface to the database, and now they've shipped a MongoDB-compatible interface, and they're developing other interfaces, like Arrow. So we have solid interfaces to get to the data. And the storage goes from memory to disk, NVMe and solid-state disks, and then to object store. That object store support is what made the difference for an analytical database; that changed the world, actually. The rest of the world has yet to fully realize it, but we are happy to be out in front, solving the bigger problems, because now the database is bottomless, yet we can still use the power of memory and real-time processing in an efficient way. They are the best at swapping the data between those tiers very efficiently. That's how we are getting millisecond performance on what we are doing: beyond the I/O level, we were able to cache the data and get that performance in the current system. Kanan, a question for you. From LiveRamp's perspective, what are some of the additional data collaboration opportunities that GenAI is going to deliver, and how is SingleStore a catalyst in that? So basically we can use GenAI in different places. One is letting our customers run queries against our data; to analyze the data, we are providing a GenAI layer over their interface. That is one part of it, and that is the simplest part, I think, because it's a well-known use case. Everybody knows about it. It's the easy, low-hanging fruit. 
But the hard part is using our graph data, converting our graph into vector data, and building models against it to answer some of the questions we are trying to answer today with basic joins and basic table lookups. Using a customized model is going to be the future. That is where we can change the way we are working and the way the world is working. So that is the biggest change we are expecting, and that's where we are partnering with both Google on the Vertex AI side and SingleStore on the vector DB side. We are combining them to try to solve a bigger problem now. And we do integrate with Vertex seamlessly. Oh, nice. Excellent. Raj, last question for you. There have been a lot of challenges in building modern GenAI applications. How does SingleStore solve this for your customers? Yeah, we actually have a fairly simple mantra. If you've ignored your data estate for decades, AI is not the silver bullet that's going to fix it. Fundamentally, your ability to hop onto this train of AI is steeped in how well you have looked after your data estate. If it's complex, if it has multiple data stores, if the data is very fragmented, of course you're going to find it harder. A lot of the customers who've done the hard work have created their own, what we call, contextual store, which is a single store where you bring contextual data in, and then that contextual data can train your LLMs, like Kanan and I were talking about right before the show, to provide that unique context of your organization to what I call industrial LLMs, and they can then respond and be the ambassador of AI for your company. I like that. That's really where we shine. Also, speed is extremely important in AI, and SingleStore has always stood for speed, speed at reasonable, economical cost. That's really our mantra. And that's good, love it. 
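The "contextual store" pattern Raj describes, retrieving your own data and handing it to an LLM as context, can be sketched minimally as follows. The rows and the crude word-overlap scoring are purely illustrative stand-ins; a real deployment would use vector embeddings and similarity search against the database, as discussed earlier in the conversation:

```python
# Hypothetical contextual store: a few rows of an organization's own data.
contextual_store = [
    "Order 1042 shipped to Austin on 2023-08-14.",
    "Customer Acme Corp has a platinum support tier.",
    "Return policy: 30 days for unopened items.",
]

def score(question, doc):
    # Crude relevance signal: count shared lowercase words. A real system
    # would embed both texts and use vector similarity instead.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question, k=2):
    # Retrieve the k most relevant rows and prepend them as context,
    # so a general-purpose LLM answers with organization-specific facts.
    top = sorted(contextual_store, key=lambda d: score(question, d),
                 reverse=True)[:k]
    context = "\n".join(top)
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the return policy for unopened items?"))
```

The point of the pattern is that the model itself stays generic; the freshness and specificity come from whatever the store returns at query time, which is why low-latency retrieval matters.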
Raj and Kanan, thank you so much for joining us and sharing how SingleStore and LiveRamp are working together, how you're using GenAI, and the data collaboration benefits for your customers and all of SingleStore's data customers. We really appreciate your insights and your time. Thank you for having us; we appreciate your time. For our guests and for Dustin Kirkland, I'm Lisa Martin. You're watching theCUBE live. This is day two of our coverage of Google Cloud Next '23. Up next, our analyst panel comes together to give you a great breakdown of day two. Stick around, we'll be right back.