Welcome back everyone, live coverage here, SuperCloud Six in Palo Alto. I'm John Furrier, your host of theCUBE with Dave Vellante, extracting the signal from the noise. This is the AI innovators segment of SuperCloud; SuperCloud Six is the theme here. I've got a great guest, Raj Verma, CEO of SingleStore, veteran in the industry, CUBE alumni. Raj, great to see you. Thanks for coming in for the special edition of AI innovators. Thank you for having me, John. It's always fun to be with Dave and you. I wanted to bring you in because we were due for an update, and we wanted SingleStore again to continue on the success you guys are having over there. But you've been in the industry through a couple of cycles where there have been inflection points in your career. We chatted about this last time we were together, around the October timeframe. And gen AI was still hitting hard then, but now it's kind of continuing to thunder away in terms of a value proposition. It's a pretty special inflection point. You were at Davos, I saw you there. You've been traveling the world. You're talking to a lot of people out there. This inflection point is different. It's bigger, but it's going to enable a lot of change, a lot of wealth creation, a lot of opportunity for folks and companies. And it changes the game a bit because the buyers have a different environment now. They're end to end. There's no real IT department anymore. The personas are changing, the world is changing. What is your view of gen AI as you look back at your career and as you sit at SingleStore now? You have a unique view. What's your vision of how gen AI is playing out? Yeah, you know, I was fortunate enough, John, that I went through a few trends in my career, you know, 30 years in the IT industry. Fortunately, I started when I was eight. So it's, you know, I remember selling PCs and printers as my first job. And then the internet happened and I started getting emails.
I'm like, holy hell, what is this, you know? And then the entire cloud revolution. And on the side, while I did not participate in it, I was the recipient of the entire mobility and communication and networking trend and evolution, which has really changed the world for us. The thing that I find about this latest evolution trend in AI is that it's the culmination of all the previous trends into one. So it's no surprise that it is going to be bigger than all of them put together. There's no ifs and buts about it. I do think that one of the best definitions that I got about AI and what we can hope from it is: the easy would get automated, right? So yes, the job disruption aspect is going to be real. The hard would become easy and the impossible would become possible. And if you look at AI across these three trends, the amount of innovation that you're going to have over the course of the next decade or so is going to be more than we've had in any other decade by an order of magnitude. Will it create some issues, societal issues, in that some percentage of the bottom segment of IT jobs would disappear and would never come back? However, there'll be more additions than subtractions in that field, is my view of it. But yes, it'll be the biggest opportunity we have seen. What were the Davos conversations like? Just curious because I wasn't there this year. What's the reaction from the general world around AI? Honestly, from the reports we're seeing, Europe is a little bit more regulation oriented. I won't say Wild West, but in Silicon Valley we want to see more innovation. Where's the balance? What are you hearing? Yeah, Davos, I've been going to Davos for over a decade now, and for the first time ever, it was a very singular theme, which was AI or die. I paraphrase, but everything about Davos was AI. And in fact, at Davos, or the World Economic Forum, in over a decade I have never seen them just jump in with both feet on a technology prospect.
And I saw the change happen on Monday as people were warming up at the reception, et cetera. The CEOs were talking about, well, it was blockchain a couple of years ago, it was cryptocurrency or what have you maybe a year before that, and now it's AI, ha ha ha ha, as they sipped on their cocktails. But by Wednesday evening, it was very clear: oh my God, I called my C-suite and said, the minute I'm back, I need an update as to what's going on with our AI trends. Now, so that was the excitement. So I came back from Davos and I was very, very clear that the investment in AI over the course of the next 18 months is going to be unprecedented. There are no ifs and buts about that. There is another aspect to it which was cautionary, which is the regulations around AI so that it doesn't fall into the hands of bad actors. And by the way, the fact really is, John, when you democratize anything, by the nature of democratizing something, you are putting it in the hands of 99% good, 1% or 0.1% bad, and you just have to deal with it. Or else you don't democratize it, and I'm still in favor of democratizing it. So the point I want to make, though, is that regulations will play a huge part in ensuring that AI is adopted in the way that we hope it is adopted. And the three centers of regulation would be America, the EU, and China. And what was very interesting when you talk about regulations is that no one is disagreeing; all three of them don't want the next world war to be started by machines. We do not want our identity to be hacked and taken over by a machine. None of that. There's agreement there. The disagreement is around cultural aspects which are thousands of years in the making. Brad Smith, the Microsoft president, actually put it really well. He said, you know, if you see the manifestos from the EU and China, it's the difference between Aristotle and Confucius.
So it is really that societal difference and how you want to regulate; essentially what it means is, how do you want to regulate society? It's very different in America, very different in the EU and in China. But I think we are extremely close to having a global regulation. The fact that it's even talking about society at this point, to me, is a great dot to connect. I think it's a bridge a little too far at this point, but the fact that it's being realized is interesting, because that means full validation to your point earlier, Dave. Well, I want to go back to something you said about it combining all the waves of the past. So I sort of wrote down: the PC was about the microprocessor, and so you've clearly got silicon playing a huge role today. The web, network effects; we've never seen anything like the adoption of OpenAI. You know, cloud turned the data center into an API. Now we're turning the technology into natural language interfaces. You know, social was all about data. You've got a CAPEX build-out like what you didn't see for blockchain; you had pockets. And then you also introduced regulation, which is now way earlier in the cycle than it was, for instance, when the FTC or DOJ realized that Microsoft was going to put Netscape out of business and came in, you know, whatever, eight years too late. So now all these forces are colliding at the same time, like we've never seen before. It's quite remarkable. It is quite remarkable. And there are actually two thoughts on this, Dave. One is, remember when mobile phones were two years into it? Because we are in the second year of really the AI revolution. That was 2009. Smartphones or mobile phones? Mobile phones. Oh, mobile phones. Mobile phones, now we go way back, right? So remember those Motorola brick-size phones, and you know. The phone bags.
Phone bags, and by the way, you know, the battery life was 45 minutes and you put it next to your thing and your ear would turn maroon, you know? At least mine would. And so in the second year of this technology trend, where we are with AI, Dave, we are far further along than we were with any of the previous evolutions. In cloud, we hadn't sorted out the data privacy and the controls and all the rest of it. With the internet, again, the monopoly effects weren't sorted out. I do give credit to the EU in terms of GDPR and the privacy leadership that they've taken; that regulation around data privacy is going to form the foundation of a global regulation. And so yeah, I mean, I think most technology CEOs would agree that we have all fretted and fumed about GDPR at some instance in our professional careers. But overall, and I never thought the day would come when I would say this, I think it will benefit us in the long run. On your customer base with SingleStore, you guys have a lot of discussions around databases and data. Where are their heads at now in terms of the challenges and opportunities they face? Because if we're talking about societal impact, you've got to look at the enterprise impact now. They've got to be looking at gen AI as, one, validated directionally. And then how do they start thinking about projects? And yeah, there's some experimentation. We heard here all day today, again, reinforcing vector embeddings as a great way to start indexing content, at least to get context in a new way for RAG, retrieval-augmented generation, and to bring in other enrichment data sources to enrich data. We've heard that theme, but it all comes back down to storage. Where do you store it? How do you access it? Do the applications have access? So we're seeing this kind of, I won't say reset, but a refactoring of storage and data access from a gen AI perspective.
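The vector embedding and RAG flow described above can be sketched end to end in a few lines. Everything below is an illustrative assumption: the bag-of-words `embed` function stands in for a real embedding model, the in-memory list stands in for a vector store, and the prompt format is made up for the sketch.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector.
    A real system would call an embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """The 'R' in RAG: return the k documents most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """The 'A' in RAG: augment the query with retrieved context
    before it is sent to an LLM for generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Order 1234 shipped from Palo Alto on Monday.",
    "Our refund policy allows returns within 30 days.",
    "SuperCloud Six is a conference about cloud and AI.",
]
print(build_prompt("When did order 1234 ship?", corpus))
```

A production pipeline would swap `embed` for a real embedding model and `retrieve` for an indexed vector search, but the shape is the same: embed the query, retrieve the top-k most similar documents, and augment the prompt before it reaches the LLM.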
What do you see from your vision enabling that level of disruption, in a good way, for those next apps? What are the conversations with customers? Yeah, when you've been in an industry as long as I have been, and by the way, I'm a computer science engineer by training, and I remember as I was graduating way back then, AI or robotics was a field I was very interested in because it just spoke to my math mind. And at that time, if you broke down AI, it was really about storage and compute. So if you have enough data and fast enough compute, you can actually build models which have a larger sample size than the small-sample models, which by virtue of their sample size are inaccurate. So fundamentally, you have to put that as the basis on which AI is built: data and compute. Everything else, as they say, is detail. Now, there is no dearth of data in this world. In fact, by the time I finish this interview, the data created in the world would be more than the data created in the entire year of 2005. I mean, that's the speed at which data is being created. Compute: you're seeing the fastest uptake in market capitalization, in one day $270 billion or thereabouts, in Nvidia. There's a reason for that. There is a need for compute for AI to prosper. So those are the two foundational blocks. Now, let's keep compute separate for a second. It's not my domain expertise, but from a data perspective, I've spent 25 years in data. When I was at TIBCO, we were doing predictive AI. In fact, we wrote the book on it, The Power to Predict. Now, we were naturally predicting on a very small sample size, which was essentially confined to the enterprise's context of information and data. Now, the big difference between predictive and generative, the big difference at a basic level, is the sample size that they are using to train models. OpenAI's sample size is orders of magnitude bigger than what we did with predictive analytics. So fundamentally, that's the difference.
Now, if you were to then say that LLMs have democratized and given enough sample size to organizations to be able to use RAG, et cetera, to use that in an enterprise or consumer or their customer context, that's fantastic. So if we've established those two blocks, then the third block, which is where we come in: we've been doing vectors for six years at SingleStore. So vectors are not a new thing. The sample size increasing then creates the need for, as you said, vector indices and all the rest of it, because speed becomes a huge factor. Now, when we are talking speed, we are talking about single-digit millisecond speed. We are talking about petabyte scale. And we are talking about, when you're talking about scaling, complexity never scales, only simplicity does. So the three tenets that you really have are speed, scale, and simplicity. If those are the three tenets of AI, then you would need an application development platform which has millisecond response time, has petabyte scale, and has simplicity. And I'd like to touch upon simplicity a little bit. The fact is, if you went to any enterprise which is a Fortune 2000 company and asked how many databases they have to build applications, the answer is going to be between 50 and 100. So if you're going to use 20 databases to build an application, by design you have 400, or 399, n squared minus one, points of failure. If you have 399 points of failure, you're never going to be effective in AI. You could try whichever magic wand you want. So unless you simplify your data estate and ensure that you have a contextual store which is effectively feeding your RAG onto an LLM, you aren't going to be effective. So if I was to advise an enterprise, I'd say, yeah, go for the low-hanging fruit, the chatbots and the conversational AIs and all the rest of it.
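The back-of-the-envelope arithmetic in that point can be checked directly. Reading the figure as n squared minus one potential points of failure for n databases is one plausible reading of the numbers quoted, not a formula stated precisely in the conversation:

```python
def failure_points(n_databases: int) -> int:
    """One reading of the 'n squared minus one' complexity claim:
    with n databases wired into an application, count n*n - 1
    potential points of failure."""
    return n_databases ** 2 - 1

# The example from the conversation: 20 databases behind one application.
print(failure_points(20))  # 399

# The failure surface shrinks quadratically as the estate simplifies.
for n in (20, 10, 5, 1):
    print(f"{n:>2} databases -> {failure_points(n):>3} points of failure")
```

Whatever the exact formula, the takeaway is the quadratic shape: halving the number of databases cuts the failure surface by roughly four times, which is the simplicity argument in numeric form.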
But if you are playing the long or even the medium-term game on AI, unless you simplify your data estate and rationalize it and get rid of redundancies in terms of the data assets that you have, you will never succeed in the long term in AI. How do you do that and get the scale? Okay, because that's what I'm trying to, as you talk about that, I'm thinking, okay, simplifying makes sense. Then I start thinking, wait a minute, but I need scale. And I think in the early days of TIBCO, by the way, maybe you don't know that TIBCO actually was one of the first pioneers in this concept of horizontally scaling data, and the TIBCO bus was well documented as one of the best things that happened. But again, inside a constrained enterprise system, how do you get that level of scale and get the simplicity? Is it knowledge graphs? Is it vector embeddings, a combination of multiple things? Is it a data model? I mean, what are companies doing? What's that equation? I just can't quite see it. Yeah, and forgive me, I get passionate about this topic. Oh, we enjoy it; it's a masterclass here. No, no, no, you're very kind. I think scale has two aspects to it. Scale is scale of data, all right? So you need petabytes, tens and hundreds of petabytes, exabytes of data to really have that. Now, not all data is equal. There's hot data, there's warm data, and there's cold data, right? The most contextual data is the hot and the warm data, and the historical context is in the cold data, right? By the way, I love the data warehouses and all the rest of the technologies, but they are the cold data. They are the last resort for data, right? So whilst they provide a lot of historical context, the immediate real-time context is not provided by data warehouses. Yeah, they're like, they're historical systems of record, essentially. Historical systems of record. And it's not called a glacier. Well, no, frozen in time. See?
That's archived. And by the way, I love some of the data warehouse stores, like Snowflake and Google BigQuery; we partner with them. However, they have a different value proposition. It's models, what have we learned, and all the rest of it. And they have their importance in technology. How do you capture hot data and warm data without breaking the bank, and marry it with historical data? That is really what operating at scale, from a data perspective, means. Now, fortunately for us, we made certain architectural decisions at SingleStore which provide us a three-tier storage architecture: in-memory, disk, and then object store. So you can manage your hot data in memory, your warm data on disk, and then as it cools, it goes to object store. So you get hot and warm data at the best TCO. And that's really why we called it SingleStore. You guys use gen AI to help manage those tiers? Well, actually, the feature that managed those tiers when we were called MemSQL was SingleStore. The name of the feature was SingleStore. And I thought it was so applicable to our mission, our vision, and what the future of databases would be that we named the company SingleStore. So absolutely relevant question, and yes, the answer is yes. There is another aspect to scale, which then goes on to simplicity. The scale has to be scale of various kinds of data: transactional, analytical, and then the vector and the contextualization. Remember, no one talks about the fact that the old way of looking at application development, having a transactional system and an analytical system, is just so archaic. The future of databases will be a database where you can have the transactional and analytical systems in one. Hence SingleStore, right? That simplicity, married with the scale that I was talking about, hot, warm, and cold data, is the recipe for AI. So can we stay on this for a second? For sure. You're absolutely right. And you see the historical systems of record.
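The three-tier architecture described there, hot data in memory, warm data on disk, cold data in object store, can be sketched as a simple age-based tiering policy. The class, the thresholds, and the demotion rule below are illustrative assumptions for the concept, not SingleStore's actual implementation:

```python
import time

# Illustrative tiering policy: records demote from memory to disk to
# object store as they age. The thresholds are made-up numbers for the
# sketch, not any product's actual defaults.
HOT_SECONDS = 60      # stays in memory while this fresh
WARM_SECONDS = 3600   # stays on local disk until this old

class TieredStore:
    def __init__(self):
        self.memory, self.disk, self.object_store = {}, {}, {}

    def put(self, key, value):
        """New writes always land in the hot (in-memory) tier."""
        self.memory[key] = (value, time.time())

    def get(self, key):
        """Reads check tiers hot to cold, mirroring expected latency."""
        for tier in (self.memory, self.disk, self.object_store):
            if key in tier:
                return tier[key][0]
        return None

    def age_out(self, now=None):
        """Demote records whose age has crossed a tier threshold."""
        now = time.time() if now is None else now
        for key, (value, ts) in list(self.memory.items()):
            if now - ts > HOT_SECONDS:
                self.disk[key] = self.memory.pop(key)
        for key, (value, ts) in list(self.disk.items()):
            if now - ts > WARM_SECONDS:
                self.object_store[key] = self.disk.pop(key)

store = TieredStore()
store.put("order:1234", {"status": "shipped"})
store.age_out(now=time.time() + 7200)  # fast-forward two hours
print(store.get("order:1234"))         # still readable, from the cold tier
```

In a real system the demotion would be driven by access frequency and cost targets rather than fixed timers, but the principle is the same: reads stay fast for contextual data while historical data settles into the cheapest tier.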
Snowflake, Databricks taking a different path, but they're all trying to get to transactions. Whether it's Unistore, or Mongo maybe the reverse, trying to bring analytics to their system-of-record database. So you see those worlds coming together. We've seen some examples of organizations doing very big memory; you've got memory tiering bringing those together. So where do you see that going? Because we just had Uber in here and we used the metaphor Uber for all, where you have a digital representation of your business, people, places, things, riders, drivers, ETAs, destinations, et cetera, in a single system that is not moving data from the transaction system into the analytic system, analyzing it and then pushing it back. It's actually a system of agency that takes action in real time. Do you see that as feasible technically, and how do you play? So, you know, I'm going to say. Without 3,000 Uber engineers. Yeah. So I should have my marketing department just basically write down exactly what you said and put it on a SingleStore brochure, because that's exactly what the SingleStore technology is. We have the ability to transact with high fidelity and reason with data, without moving data, in a hybrid, multi-cloud environment, with millisecond response times. So that is SingleStore. You are spot on that for the future of databases, the war will be won by someone with that vision, whether it's us or someone else. And by the way, it's further validated, if you really look, by Amazon and even Google, which are talking about stitching various kinds of databases together through zero ETL, all right? The fundamental basis of that comes back to speed, scale, and simplicity. The problem is, when you stitch with zero ETL, you still have two different databases. Data is still moving, and when data moves, bad things happen, and in the cloud world it's also expensive.
And I'd like to address one other point that you asked, because by the way, guys, I love talking to you because you're so informed and it's so awesome. It's a real conversation. The fact is, if you really look at the islands of computing, and we talk about Snowflake and Databricks because they've really taken, for the right reasons, the oxygen out of the data conversation, because they've executed so brilliantly, right? The three kinds of swim lanes that they are trying to dominate are the data warehouse or the data repository, the ML/AI ops, yeah? And in the middle is what you said, the application store, or a database on which you can write applications, which is what the Snowflake vision of Unistore is. Now, both of them have, to whatever varied degree, the bookends: the data warehouse, or the Delta Lake as Databricks has, and the ML/AI ops. The transactional layer, where you can actually have applications take advantage of both, is where the battle is, and that's essentially the layer that we provide. Mongo, you're right, they are more transactional, and they are limited in their analytical capability. Also understand that data in the transactional stores is very transitory, and hence if you want to go into the consumption model, which is the most rewarded model, OLTP is probably not the place you want to be. So I do think that there isn't going to be an 800-pound gorilla in this field. There are going to be a few players, at least for the next five or seven years. There's going to be consolidation in our sort of industry. I do think there are going to be a number of acquisitions that'll happen over the course of the next 18 months to three years to fulfill that vision of the ML/AI ops, the application development layer, and then the Delta Lake or the data warehouse. It's interesting to hear how you describe SingleStore. Thank you for that. But I agree with you, there won't be one to rule them all.
But it feels like today we're setting up for a number of partial solutions. Like you look at what Amazon is doing, you're right. It's like, they've got metadata in some data store with Glue and they've got a different one for DataZone, and you look at what you mentioned, Snowflake and Databricks; they're all these sorts of stovepipes. The metadata is still very separate, and it seems like, when you listen to Satya talk about AI actually taking action, to do that, you have to trust it. You have to have a single source of truth. Single source of truth. To your earlier point, John, how can you do that at scale? How can you not have a unified data model and scale? It hasn't been possible. And so what's your vision for a unified metadata model as we transition from a sort of application-centric world to a data-centric world? Yeah, I think the applications would, fundamentally the difference is, when I learned programming way back when, it was the if-then-else loop, right? So we were feeding data into the application logic, right? So there was an application logic and we fed data into it, and then the application did what it did. The model-driven applications are slightly different, where the data drives the logic of that application. So that's fundamentally the shift that's happened in applications. However, you have to have the ability to transact, as I said, with high fidelity, with what I'm really referring to as ACID compliance, et cetera, where there is an indelible signature of a transaction happening. Otherwise, there's no fidelity to that transaction. And how you store that transaction closest to the analytical or the reasoning layer is the seminal challenge, right? You can do it by moving data in a deeply constrained, structured-for-purpose pipeline, which is what zero ETL is, or have a database by design that lets you transact and analyze without moving data.
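The idea of transacting and analyzing on the same data, with no pipeline in between, can be shown with a minimal sketch. SQLite is used here only because it is built into Python; it illustrates one store serving both workloads, not SingleStore's distributed architecture:

```python
import sqlite3

# Minimal sketch: one store serving both transactional writes and
# analytical reads, with no data movement between systems.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)

# Transactional side: an ACID write (the 'with' block is one transaction).
with db:
    db.executemany(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        [("alice", 40.0), ("bob", 25.0), ("alice", 35.0)],
    )

# Analytical side: the same rows are immediately queryable in aggregate,
# with no ETL hop into a separate analytical system.
top = db.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
).fetchall()
print(top)  # [('alice', 75.0), ('bob', 25.0)]
```

The analytical query sees the write the instant the transaction commits, which is the property the zero-ETL stitching approaches try to approximate across two separate systems.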
So for example, in SingleStore, and it's on the web, you can actually see transactions happening on the left side of your screen, and based on those transactions, the next likely action changing on the right side of the screen in real time. Same data. Same data. And what that gives you is where interactive analytics come into play, which is the stepping stone towards generative AI. So I'm glad we are actually talking about real problems facing gen AI rather than the vanity issues, which are probably three years from now. How do you focus on the hygiene of your data estate? That is the seminal challenge that people should be thinking about today if they are thinking AI. And that hygiene factor is exactly what you refer to. How do you close the distance between transactions and analytics, right? Without having to use 30 different databases, because that creates complexity, and complexity doesn't scale. And without scale, you don't have it. So you can do it with a lot of memory. That's one approach. We see that with, for instance, MySQL HeatWave. Big, mongo (no pun intended) memory. You have taken a tiered approach. Others are trying to rethink how you lay out data on a disk or on flash. Can you explain how you see SingleStore participating? Yeah, I think we've got the advantage that we don't have, and I hate the term legacy because I think some of my partners like Microsoft and Google and others get offended by that, so let's just say they have a heritage, right? SQL Server is not distributed, and Microsoft sells $14 billion of it. No one in their right mind should be buying SQL Server. I mean, again, that's just the truth. It's just there. Some of my best friends work on it and they're very bright engineers. But how can... Cash cow, because it's... Exactly right, exactly right. And call it what it is, right? It's not distributed. Can you imagine?
So every time you want to upgrade, you have to go buy a bigger machine on Azure, which is really the business Microsoft wants to promote. So yeah, you can be a visionary and talk about whatever else you want. However, your business model says something completely different from what the visionary conversations are. That's such a lie, right? Snowflake and Databricks do not have another business to rely on but software. So unlike Microsoft and the CSP providers, us data providers have no option: if we fail in data, we, as a company, fail, right? So there is some integrity of thought when it comes to data from our side, right? We could differ, and that's okay. However, the fact is that if our data strategy doesn't succeed, we don't succeed. And I think all of us agree that there shouldn't be data movement between databases or data centers. No question. Not even data centers, but data repositories, because that is not beneficial to anyone. And that's why Snowflake, again, with the Unistore aspect, and Databricks, are trying to integrate all of that into a unified stack. And I once got great advice that when you pitch your company, never say it's the Uber for this or the Groupon for that or the iPhone of something. However, the fact really is, where the data companies are going is the smartphone, the iPhone concept, where you will have an integrated device of sorts, where you have transactions, analytics, and contextualization of your data in one store. And hopefully when we are sitting here after a nice round of golf with John, three or five years from now, we'll say, remember when we were still debating whether there should be one store or more? So we're playing golf in three years? Or this weekend. Next week, Friday. Just to follow up on that point, do you build intelligent apps on top of that store? All right, that's the key. Yes, so that's a very important point. Yes. The fact is, a little bit of the oxygen got taken out of the room with the vector-only databases, right?
The Pinecones and the Milvuses of the world. I haven't had greater conviction about anything in my life than the conviction I have that the vector-only databases will never survive. It's a feature. It's a feature set, right? And I think they're facing the heat in year two or two and a half, with their revenues, you know, facing a steep decline. And the reason for that is that the incumbent databases, whether it's me or Mongo or Snowflake or whatever else, are going to have vector ability. Now, vector-only databases might have a three-month, six-month sort of advantage on us vendors from a vector search indices perspective. We'll catch up. Yeah, a better feature, but you're going to buy a separate database. Right, and then you're adding complexity, and then you are screwing around with scaling, because simplicity scales. And so the incumbent databases' vector stores will be the way companies go, right? And I talk to some of the biggest Fortune 500 companies. They are not using a vector-only database, right? So the application development on generative AI will happen on a general-purpose database with vector capability. There are no ifs and buts about it. That sits on top of a data repository, a Delta Lake or a Snowflake or, you know, whichever, Redshift or what have you. That is going to be the architecture, and it connects onto the LLMs and has the RAG functionality that it enables. Well, Raj, it's been great to have you here for this fireside chat. I appreciate you spending your valuable time coming into theCUBE studios to speak with us on this AI innovators segment. Great to have you on. And what's new with you? What's next, off to another trip? You're doing a lot of media; noticing a lot of public presence. What's new, what's next for you? What are you doing? Yeah, no, it was a fairly cathartic experience writing a book, John. Time Is Now, and I am still uncomfortable plugging it. So it's just, the name is Time Is Now.
It really encapsulates three parts in the book. One is, you know, just the people who influenced me in my life, mostly women; I'm a product of women, as I say. The second is how fortunate I was to be born in this time, where I went on this technology ride from selling printers and PCs in India, climbing up stairs and lugging PCs, to now talking about generative AI. And the last part of the book just talks about the fact that the culmination of everything that I have been through, serendipitously, the pointy end of the spear, is now AI. And then I have a few recommendations in terms of how organizations can approach AI. Every time you have an inflection point, and this one is bigger than any of the ones in the past, it creates wealth creation opportunities, entrepreneurial opportunities, new market opportunities, life-changing, society-changing opportunities, and the time is now. Yeah, I'll just leave you with one thought. You know, I was talking to Van Jones, whom I've become slightly close with over the years, and he says some of the biggest changes happen in the world when, and I hope I can remember the four pillars that he talks about, education, finance, which is New York, the federal government, right, and Hollywood connect on something, because that is really when the biggest change happens, and especially for the underprivileged and the vulnerable. And AI is that one glue that binds these four agents together. So, and I think especially for the vulnerable and the underserved in society, it's going to be a huge thing. Whenever the game changes, it's time to level up and tear down those sacred cows that were once institutional blockers for people who couldn't advance. Appreciate you coming on theCUBE. Of course, we bring all the action we can as fast as we can here on theCUBE in our SuperCloud studio. SuperCloud Six is ongoing and we're going to continue live coverage. I'm John Furrier with Dave Vellante. Stay with us for more AI innovators after this short break.