Welcome back everyone, live CUBE coverage here in Chicago. We're here for KubeCon + CloudNativeCon, part of the CNCF, and we've never missed a KubeCon; theCUBE has been there from the beginning, documenting all the action in the arena with a front-row seat. I'm John Furrier, your host, Rob Strechay here with me. Savannah Peterson's been on theCUBE. Joe Peterson will be here tomorrow. Rob, great to see you. We wind down day two. We've got a great special guest, Joel Inman. He's the CEO of compute.ai, a CUBE alumni who also presented at our SuperCloud 4 event recently in Palo Alto. Joel, great to see you again. Great to see you. We've got SuperCloud 5 coming up. Normally we don't have them this close together, but because of Amazon Web Services' annual user conference, we're doing a special edition, coming off the heels of KubeCon, Microsoft Ignite, and OpenAI's developer day, which was a huge success yesterday. The Amazon show is going to be the battle for AI supremacy. AI is all anyone is talking about. And you're smiling because I know why, because you guys have a product coming out and it's being developed. Compute.ai is your company name. It's also your URL. It speaks for itself. We love more compute. It should be like oxygen, as Vikram said on theCUBE at SuperCloud 4. But here at KubeCon, this is the audience that actually gets into hardware and infrastructure to build platform engineering environments. A sweet spot for you guys, actually. That's a long-winded intro, but welcome to theCUBE. Yeah, thank you for having me. And I love talking to you guys because it's a continuing conversation. You guys are evolving your thoughts. You're doing research. You're putting out different thought pieces, and we feel like we're able to join in and come right along with you. In terms of what we do and where we're focused, we're building the compute engine for AI. 
It's all in the name, right? And the way we see the world is that AI/ML is going to drive the future of compute needs. It's going to be machine-generated SQL, and it's going to require a scale that we've never seen before. So we're implementing efficiencies within that compute layer to be able to handle that demand. This is a great topic we're going to unpack here. Before we do that, I just want to mention to the folks that you guys are a startup. Self-funded. Some angel. Not entirely self-funded, but pretty much not the traditional VC-backed company. It's a tough market right now. What's it like being an early stage startup right now? Because it's a buyer's market on the venture side, but there's huge enablement on the AI side. It's a huge wave coming. What's it like? You know, I think there's good conditions, there's bad conditions, but sometimes when you have a strong product, a strong team and a strong vision, those are the factors that really transcend market conditions, and finding the right capital, visionary capital that pairs with your vision, is essential. So it is a lot like matchmaking. But funding is just the lowest common denominator. Everybody needs money, but who are you going to partner with? Who's going to take you to the next level? Who really understands your vision, is going to introduce you to the right people within their network, who's going to help inform your team and grow your team and really be right alongside you as you grow? That's what we're looking for in investors. What are the opportunities for entrepreneurs out there that want to go get venture capital? Because you don't really need that much to get in the game now, but to scale, you'll probably have to do an institutional round of financing through a venture capital firm. Although again, a lot is changing in that market, but still, you need capital to grow. What are the opportunities out there for entrepreneurs? 
Well, I think the opportunity is to have confidence in your vision and just to continue to press: who do you know, who can you introduce me to, to find the right people that are going to back you? All right, let's unpack the compute, because you mentioned something earlier. I heard you before we came on camera, talking with another entrepreneur and influencers here, around how you see the next chapter after the current one we're in. Because we're in this euphoric moment of ChatGPT; OpenAI just had their developer day, you're starting to see that grow, and by the way, they're lowering costs and opening up the context windows, so you're starting to see that progress. What's next, after we get through this first wave of AI, and where do you see compute being the key there? Yeah, great question. So I love all the attention that AI is getting through ChatGPT and LLMs, right? I mean, the cool things you can do with it: you can cheat on your high school essays, you can manufacture marketing terms out of thin air. But there's a whole world beyond generative AI, and when you're looking at the AI implementation of the future, Enterprise AI is going to come with all sorts of different models. And I'm hearing a lot of people say, I was there when, I was there before ChatGPT. Because the reality is we have a community of data scientists that have been working on this for over a decade. They've been training their models, and training their models, and training their models. If you look at the Databricks survey from the summer, one in three models are now being operationalized, okay? So the way we view it is that it's time to move from training your models into production of your models, into implementing those use cases. And generative AI can't be applied to certain workloads, so I'm thinking of credit card fraud detection right now. 
When you have that generative aspect of AI, that's kind of a license for the AI to come up with some hallucinations, which you absolutely cannot have, so it's more like an inferential AI. So the bottom line is, as AI starts to infiltrate our various infrastructure, that's the second wave; that's where Enterprise AI is really going to come up. And where do you see the unlimited compute opportunity? Because that's kind of your narrative right now, talking about this idea of unlimited compute, like oxygen: it's going to be plentiful for the problem set, which is data. Where do you see that fitting in? What problem are you going to solve? So the way that everybody is thinking about compute right now is that AI is incredibly compute hungry. We know that, right? We're buying as many GPUs as we possibly can, they're flying off the shelves, we're even having to rent them; there are startups that are taking GPUs and renting them by the minute and by the second, right? But as you move into the fabric of our infrastructure, you're really going to see GPUs and CPUs working together, and they're going to work together in a common SQL environment. So you have our data platforms, you have Spark, Presto, Trino; they all can invoke models using UDFs, they can all connect right to that AI. You have BigQuery, you can go right from your trained model into a production model. And when you're in production, you need SQL, and you need to be able to operationalize that model at cloud scale. 
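The model-as-UDF pattern described here, a trained model invoked directly from SQL, can be sketched with a toy example. This is a minimal illustration using SQLite's UDF registration rather than Spark, Presto or Trino; the `fraud_score` function, the `txns` table and the scoring rule are hypothetical stand-ins for a real trained model.

```python
import sqlite3

# Hypothetical stand-in for a trained fraud model: any callable that
# scores a transaction. A real deployment would load a trained model here.
def fraud_score(amount, merchant_risk):
    # Toy inferential rule: large amounts at risky merchants score high.
    return min(1.0, (amount / 10_000.0) * merchant_risk)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER, amount REAL, merchant_risk REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?, ?)",
                 [(1, 50.0, 0.1), (2, 9000.0, 0.9)])

# Register the model as a SQL UDF -- the same pattern engines like
# Spark/Presto/Trino expose for calling models from a query.
conn.create_function("fraud_score", 2, fraud_score)

flagged = conn.execute(
    "SELECT id FROM txns WHERE fraud_score(amount, merchant_risk) > 0.5"
).fetchall()
print(flagged)  # [(2,)] -- only the high-risk transaction is flagged
```

The point of the pattern is that inference happens inside the query itself, so operationalizing the model is just a matter of running SQL at scale.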
Is that how you see compute and data really coming together? Because there's a lot of people taking different approaches to the platform, be it from a storage-up perspective, be it from, like you said, the data warehouse or data lake down perspective. And then the apps, the data apps, are being built on top of those different data platforms, and AI being, as far as I can tell, not a data product but more a data app, where it's using multiple data products. Maybe I have one for the CFO's office and maybe I have one for HR; we call them SLMs, segmented or specific language models, for those. Is that how you're seeing this, kind of more than one data platform for the right type of AI? Is that where you're seeing it? When you think about the platform required for AI, I kind of touched on it but I didn't complete the sentence, so let me complete the sentence. You need GPUs and CPUs working together. You need to feed that AI/ML model structured data, you need to feed it unstructured and semi-structured data. So you're going to have vector databases, you're going to have relational databases, you're going to have GPUs, you're going to have CPUs, and providing the compute infrastructure for that is going to require a SQL-based framework that is able to scale. The output on the backend of many of these AI/ML models is machine-generated SQL. So we're seeing these use cases in business intelligence where it's spitting out complex SQL automatically. And then it's going to layer deeper: you're infusing AI into those BI tools, and that's going to generate 1,000 times more SQL in the future. The complexity of that SQL requires compute efficiency, and that's where we come in. So we're providing that compute efficiency to scale in a platform that hosts AI/ML models, vector databases and relational databases together. 
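The "machine-generated SQL" idea, a BI or AI layer emitting queries from a higher-level request rather than a human writing them, can be sketched as a minimal template generator. The function name, the request shape, and the `sales` schema here are all hypothetical; real BI/AI layers generate far more complex SQL, which is exactly the compute-efficiency problem being described.

```python
def generate_sql(table, metrics, group_by=None, limit=None):
    """Emit a SQL query from a structured request, the way a BI/AI layer
    might. Identifiers are assumed to be pre-validated (no quoting here)."""
    # Aggregate each (function, column) pair, prefixed by any grouping columns.
    cols = list(group_by or []) + [
        f"{fn}({col}) AS {fn.lower()}_{col}" for fn, col in metrics
    ]
    sql = f"SELECT {', '.join(cols)} FROM {table}"
    if group_by:
        sql += f" GROUP BY {', '.join(group_by)}"
    if limit:
        sql += f" LIMIT {limit}"
    return sql

print(generate_sql("sales", [("SUM", "revenue")], group_by=["region"], limit=10))
# SELECT region, SUM(revenue) AS sum_revenue FROM sales GROUP BY region LIMIT 10
```

A layer like this can emit thousands of such queries per dashboard refresh, which is why the query side dominates the compute bill.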
Where do you see the integration happening? Because one of the things that we're also seeing is the large language models have all this training built in. Why not use it there, integrate that into my data infrastructure, which might have proprietary data or zero tolerance for hallucinations or any potential miscue? So this idea that you've got to blend them together is a big topic, and people are looking. What's your vision on how you see that playing out? Because this world is going to be dealing with a lot of data, machine-generated data, whether it's SQL or other stuff; you're going to see dynamic things happen. You know, I think it's a broad question. I think you break it down into pieces and you say, okay, well, what is my use case for the AI? What do I need it to do? And you have these LLMs that have a corpus of public data, publicly available data. Essentially what that is is scrubbing the internet, taking the entire internet and saying, we're going to feed it back to you in ways that make sense to you. But other sorts of use cases require private infrastructure. So I think you're going to see that mirroring the cloud infrastructure versus on-premises infrastructure discussion. Do I need it to be private? Do I need it to be in a colo facility? Yeah, I think that's what we're seeing, and we have this idea of a power law that John, myself, Dave Vellante and George have worked on. And I think part of what you see is, again, that the size of the models, for those public ones that are scraping the internet and things like that, is massive, and they're up kind of at the front and the top. And then the power law comes, and there's this long tail that's kind of being pulled upwards from a size perspective, but the number of them and the specificity of them is very long. 
Like, once they're specific to telco, finance; for instance, I use the example of building 10-Ks as the CFO. That would actually be a really good job for an LLM, or an SLM, to go do very specifically, but I want to keep that data private because it's my financial data. Are you starting to see people engaging with you that say, hey, I'm looking for this compute for this, and I need the security and the kind of walled garden, as it were? Yeah, oh yeah, absolutely. I was on the phone with someone earlier today who was doing credit card fraud detection. That's why it kind of pops to mind, right? And his issue is, well, we need to keep this data absolutely private, absolutely secure. Can you imagine the implications of people's credit card information leaking out? I mean, yes, you can, because it's happened before. But if AI gets a hold of it, it's a whole different ball of wax. So there's absolutely the use case, and I appreciate that article you did on the power law because it's a beautiful graph. It shows that the long tail of vertical-specific AI, that's going to translate into on-prem data centers. Not everything is hosted, not everything is open. So we're going to have a mix and match of solutions that are necessary. The way that we're looking at it is compute is compute. And when you have GPUs paired with CPUs, you're really going to need to get the most out of that platform. I've got to ask you about data management. We've got a couple minutes left. I want to understand the implications of how the data management market changes with this future. Data formats have been a big discussion: proprietary formats, open formats. Databricks introduced concepts that were kind of mind-blowing, Rob. Remember, you've got Iceberg and Parquet out there now. Data management clearly is changing. What's the impact, and the order of magnitude of importance, of these formats? Can you scope that for us and how you see that? 
Sure. So the proliferation of Iceberg and Parquet, and the standardization on Iceberg and Parquet, is fantastic for data lakes and cloud data warehouses and everybody in the Spark ecosystem in particular: Spark, Presto and Trino. But it's much more profound than formatting. And the way that we view this at compute.ai is that, for the first time in history, you're able to separate transactional workloads, so your typical DDL and DML, from queries. And queries are the ones that are taking up 80% of our compute requirements. So what we've done is we've built a query engine and... 80% is coming from the query side, you said? 80% of compute or more is coming from the query side, and that's DQL. Okay, by separating that, you can build a dramatically simplified engine that is focused on queries only. The time to market is fantastic. It's fast, anybody can do this. What we bring to the table is to say, aha, now this is possible: we can build something that is far more efficient, utilizing the hardware resources that are already there, in a way that has never been done before. Joel, really appreciate you coming on theCUBE and sharing the vision. I love the URL, compute.ai. I think it's a great name, it speaks for itself. I love the idea of compute as oxygen. Rob, you can't get enough bandwidth and you can't get enough compute, and now GPUs. The world wants more power and more compute. So great job, great mission. For the last minute we have, put in a quick commercial for what you're doing. What's your status as a company? Are you looking for funding? Are you knocking on doors? Are you waiting? For the people who are curious about how to engage with you, whether it's a potential investor or customer: are you open for business? Give the quick pitch. Yeah, okay, so yes, we're open for business. I joined the company about five months ago. Before then it was just the founding team of engineers, the four engineers. 
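The DDL/DML-versus-DQL separation described above can be sketched as a simple statement classifier that routes queries to a dedicated query-only engine while transactional statements stay on the existing platform. The keyword lists and engine names here are illustrative, not a full SQL grammar or compute.ai's actual routing logic.

```python
# Statement classes: data definition, data manipulation, and queries.
# Keyword sets are illustrative; a real router would parse properly.
DDL = {"CREATE", "ALTER", "DROP", "TRUNCATE"}
DML = {"INSERT", "UPDATE", "DELETE", "MERGE"}
DQL = {"SELECT", "WITH"}

def classify(statement: str) -> str:
    # Classify by the leading keyword of the statement.
    keyword = statement.lstrip().split(None, 1)[0].upper()
    if keyword in DDL:
        return "ddl"
    if keyword in DML:
        return "dml"
    if keyword in DQL:
        return "dql"
    return "other"

def route(statement: str) -> str:
    # Queries (the bulk of the compute load) go to the simplified
    # query-only engine; DDL/DML stays on the transactional platform.
    return "query-engine" if classify(statement) == "dql" else "transactional-engine"

print(route("SELECT * FROM txns"))           # query-engine
print(route("INSERT INTO txns VALUES (1)"))  # transactional-engine
```

Because the query path never has to handle writes or schema changes, the engine behind it can be dramatically simpler, which is the efficiency argument being made.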
So there are only five of us in the company now, but we're getting a lot of traction. We're empowering channel partners right now. We're also speaking with customers and engaging with customers. The quick commercial on us is the compute efficiency that we're seeing: we no longer require memory over-provisioning. So we are dramatically shrinking the memory footprint that is required for compute, and when we do that, we balance the core-to-memory ratio. We run CPUs at much higher efficiency, and we're getting a 10x performance improvement while reducing infrastructure by 5x at the same time. In terms of the stage of the company, we have early capital, and, I mean, in my job we're always raising capital. We're always in those conversations, and as I mentioned at the beginning, we're looking for those partners to really share in the vision. What kind of investor profile are you looking for? If someone's watching, is there a certain category or mindset or background affinity that you're looking for to have that match? Obviously you guys are, I won't say aging out; you guys are systems guys, and you never age out with systems. You're not the young 20-something-year-olds. I mean, Rob, when you're starting a company you're competing against 20-something-year-olds. This is the big challenge. What's going on? We're looking for big thinkers. We're looking for big visionaries who see the world the same way we do. This machine-generated SQL is going to rule the world, and we're providing the compute for it. Joel, great to see you, and again, congratulations on the venture. Love the idea, we'll be tracking it. Thanks for sharing. Again, AI is just beginning: the picks and shovels, then it goes into mining the data, and again, compute will be a big part of it. Of course theCUBE is bringing you all the compute action and content from KubeCon. We'll be back as we start to wrap up the day. Stay with us, we'll be right back after this short break.