Welcome back, everyone, here at SuperCloud 6, live in Palo Alto, California. I'm John Furrier, with Dave Vellante. AI Innovators is the topic of this session, and we've got a live remote guest, Neil Serebryany, CEO and founder of CalypsoAI. Neil, thanks for coming in remote live here in Palo Alto.

Thanks for having me, John. Looking forward to the conversation.

You know, you guys have been doing this for a couple of years now, and the big generative AI wave has come in. Everyone is talking about how you secure the large language models as foundation models become super important, in how data is organized but also in the applications themselves. So you've got fast-moving application development around the data, with the crown jewels of companies and their IP, and the infrastructure is changing as well. You've got a lot of moving parts in progress. This is the top conversation. How do you see the whole security paradigm in AI developing?

Yeah, thanks for the question, John. Great one. It's actually really simple. There are a lot more organizations that are leveraging generative AI than there are organizations building their own foundation model. And when you get to the leveraging side of the house, it becomes a question of how you support them from an access control perspective, from a policy-based access management perspective, how you enable them from an input/output security control perspective, and how you exist in a world where there are more and more models. One of the really big macro trends we've seen in AI for the last 10 years is increasing simplicity. From the use of PyTorch, TensorFlow, and Keras 10 years ago, to AutoML platforms like Amazon SageMaker, Dataiku, or DataRobot, to now being able to prompt-engineer a generative model, it's become easier and easier for anyone and everyone inside an enterprise to use generative models. And what becomes more and more important as you have more and more models is how you control how those models are orchestrated, how you control how those models are used, and how you control the input and output data.

Right, so you've got this trend in the industry now where, I think like Amazon's "right tool for the right job," LLM optionality is the watchword. These vendors appear to be well-funded. Whether or not they'll all be here 10 years from now, we'll see; most likely there'll be some consolidation. But nonetheless, people are out there experimenting, and there's got to be a lot of diversity in those LLMs. What are you seeing there? How are you helping customers build trust given the variability of the results of those LLMs, whether it's hallucinations, whether it's data quality, et cetera?

Yeah, that's a really good question. For us at Calypso, we're a completely agnostic platform that takes in internal RAG models, internal open-source models, fine-tuned models, as well as any external model, whether we're talking about OpenAI or Cohere or Adept or Microsoft. We are completely agnostic because we believe that ultimately the enterprise should decide what the best model is for a task, and there is greater and greater variance in which model is best for a given task. The best model for sentiment analysis of customer success tickets is very, very different from the best model for doing factory QA in computer vision. And so for us, what it really comes down to is, one, being completely agnostic; two, being there on the input and output side of the house; and three, supporting the use and orchestration of multiple models.
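To make that per-task model choice concrete, here is a minimal sketch of what task-based routing could look like. The registry entries, task names, and model names are hypothetical illustrations, not CalypsoAI's actual API; the point is that the enterprise, not the platform, decides which model owns which task.

```python
# Minimal sketch of task-based model routing (hypothetical, illustrative only).
from dataclasses import dataclass


@dataclass
class ModelEntry:
    name: str            # model identifier
    provider: str        # e.g. "openai", "cohere", "internal"
    suited_tasks: set[str]  # tasks this model is considered best for


# Hypothetical registry: the enterprise assigns models to tasks.
REGISTRY = [
    ModelEntry("gpt-4", "openai", {"summarization", "code"}),
    ModelEntry("internal-sentiment-ft", "internal", {"sentiment"}),
    ModelEntry("vision-qa-v2", "internal", {"factory_qa"}),
]


def route(task: str) -> ModelEntry:
    """Return the first registered model suited to the given task."""
    for entry in REGISTRY:
        if task in entry.suited_tasks:
            return entry
    raise LookupError(f"no model registered for task {task!r}")


if __name__ == "__main__":
    print(route("sentiment").name)   # -> internal-sentiment-ft
    print(route("factory_qa").name)  # -> vision-qa-v2
```

Because the registry is plain data, swapping models per task is a configuration change rather than a code change, which is the practical meaning of being model-agnostic.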
I want to get your thoughts on hallucinations, the number one conversation on the consumer side of AI. People see ChatGPT, they see the answers. As we pointed out in our SuperCloud 6 opening earlier this morning, there's a rise of proprietary specialty models that companies are starting to implement. So you start to see the mix of models, but hallucinations are a big concern. How do you see hallucinations being managed going forward? Is it going to be a new way to check, the equivalent of a compile step, like, hey, that's a hallucination, let's fix that? Is it going to be models checking models, some kind of model interaction? What is the future of hallucination management, of minimizing bad responses? Does it come from more reasoning? What's your vision on how hallucinations get managed and avoided?

So we're going to increasingly see compound models. Rather than just having a single transformer-architecture model, we're going to pair transformer models with retrieval-augmented generation (RAG), along with access to information sources. And as we pair these models with the open internet or other trustworthy data sources, we should start to see different facets of the broader hallucination challenge solved. It's a really, really important challenge for every enterprise thinking about the journey from internal use cases all the way to external use cases, which is where every enterprise is trying to go.

And obviously, as you pointed out, a key stumbling block. Neil, take a minute to explain what Calypso does. What's your company's North Star, its mission and vision? And what are you offering as a solution? What are you specifically working on?

So CalypsoAI's vision is to enable every enterprise to build on top of generative AI. The way to think about us is as an orchestration and security platform. At a base level, we provide the policy-based access controls and the model orchestration solution you need to assign specific models to specific apps and specific users. On top of that, we layer on security policy controls, and we layer on model management and model orchestration controls, making sure that if your instance of OpenAI is slow, we can automatically redirect you to a model that might be a better fit for the task, and do so in a secure manner. You can think of us as the one-stop shop for your GenAI development.
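That automatic redirect is easy to picture as a latency budget with an ordered fallback list. Here is a minimal sketch under that assumption; `call_model` is a simulated stand-in for a real provider SDK call, and none of this reflects CalypsoAI's actual implementation.

```python
# Minimal sketch of latency-based fallback across model endpoints
# (hypothetical, illustrative only).
import random
import time


def call_model(name: str, prompt: str, timeout_s: float) -> str:
    """Simulated model call; a real version would hit the provider's API."""
    latency = random.uniform(0.1, 1.5)  # pretend network/inference time
    if latency > timeout_s:
        raise TimeoutError(f"{name} exceeded {timeout_s}s budget")
    time.sleep(latency)
    return f"[{name}] answer to: {prompt}"


def generate_with_fallback(prompt: str, models: list[str],
                           timeout_s: float = 1.0) -> str:
    """Try each model in order; redirect to the next if a call is too slow."""
    for name in models:
        try:
            return call_model(name, prompt, timeout_s)
        except TimeoutError:
            continue  # this instance is slow; fall through to the next model
    raise RuntimeError("all configured models timed out")


if __name__ == "__main__":
    print(generate_with_fallback(
        "Summarize our Q3 support tickets.",
        ["openai-primary", "cohere-backup", "internal-llm"],
    ))
```

A production version would also run the security policy checks on both the prompt and the response around each call, so that the redirect never bypasses the input/output controls.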
So if we go back to 2012, we first saw social media being used to get out the vote. In 2016, obviously, a lot more subterfuge and negative sentiment was fomented. And then of course 2020 was like nothing we've ever seen before. 2024 is going to introduce quality deepfakes for the first time. How much of a problem do you see that being, not only for the election but in general? And can you help solve that problem?

It's going to be a massive problem, and for a couple of reasons. One, we're getting to multimodality from an underlying model perspective, meaning models can take in and output different types of data, whether images or videos, and create synthetic images, video, and audio that are incredibly lifelike. And two, we're seeing a larger and larger percentage of the content on the open internet being generated by other generative AI models. So we're getting into a do-loop of data generated by one model being used for training or fine-tuning or embedding of another model, thereby leading to more and more false information. Solutions like ours are a key part of being able to actually verify whether information is trustworthy. We have a feature that allows enterprises, as they get responses from generative AI models, to verify whether a response is true and to build their own internal taxonomy of trustworthy information on the generative AI side of the house.

Are trustworthy AI and explainable AI, two of the hottest conversations, doable today? And if so, where are they reliable and stable, and what areas are still evolving and need attention?

Full explainable AI isn't possible, and isn't going to be possible, from a pure scientific perspective. There are way too many parameters in one of these models to ever have a full explanation. The best science I've seen actually uses more sophisticated generative models to explain simpler generative models, in terms of understanding how they make decisions. I think what will happen over time is we'll get to a societal-level understanding of the pros and cons of generative AI, and we'll better understand what we have to do from a workplace design and workplace integration perspective in order to actually leverage these models.

Neil, I want to thank you for coming in remotely live with theCUBE. We appreciate it. For the last 30 seconds we have left, put in a plug for the company. What are you going to do next? Are you hiring, are you fundraising, how many people do you have? Put a commercial out there for you guys.

Yeah, of course. So at CalypsoAI, we're looking to secure every enterprise that wants to leverage generative AI, and to enable every one of those enterprises. We have a compound solution for securing and enabling the enterprise, including visibility across all of your models and orchestration across the models you're working with. We're growing rapidly, hiring on the go-to-market side of the house as well as on the engineering side. We're just under 50 people today, growing very, very quickly, and expecting to double headcount. We look forward to it.

Thank you very much, and congratulations. We look forward to keeping in touch. And again, thanks for coming in remotely live with theCUBE. We appreciate your contribution to SuperCloud 6.

Yeah, thanks for having me.

Okay, we'll be back with more of SuperCloud 6, AI Innovators. We've got the founders, and we've got big companies in here who are leading the charge and changing the game with new infrastructure and new solutions. We'll be right back.
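The response-verification idea Neil describes, checking whether a model's answer is true and growing an internal taxonomy of trusted claims, can be pictured with a small sketch. Everything below is a hypothetical illustration, not CalypsoAI's product: the review step stands in for whatever human or stronger-model check an enterprise would actually use.

```python
# Minimal sketch of response verification with a growing trust taxonomy
# (hypothetical, illustrative only).

VERIFIED: dict[str, bool] = {}  # claim text -> verdict (the "taxonomy")


def review(claim: str) -> bool:
    """Stand-in review step: a person, or a stronger model, renders a verdict."""
    answer = input(f"Is this claim true? {claim!r} [y/n] ")
    return answer.strip().lower().startswith("y")


def check_response(claim: str) -> bool:
    """Consult the taxonomy first; only escalate unseen claims for review."""
    if claim not in VERIFIED:
        VERIFIED[claim] = review(claim)  # record the verdict for reuse
    return VERIFIED[claim]


if __name__ == "__main__":
    model_output = "Our Q3 churn rate was 4.2%."
    print("trusted" if check_response(model_output) else "flagged")
    # A repeat of the same claim is answered from the taxonomy, not re-reviewed.
    print("trusted" if check_response(model_output) else "flagged")
```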