All right, welcome back to the SuperCloud 6 AI Innovators segment. I'm John Furrier, host of theCUBE, here in Palo Alto, California. This episode of SuperCloud is focused on the innovators in AI: the people making it happen, the newsmakers, the builders, the creators, the practitioners. This segment features Alessya Visnjic, co-founder and CEO of WhyLabs, whylabs.ai, a well-funded company doing some very innovative things around our favorite topics: observability for AI, governance for AI, and how you scale AI. Welcome to theCUBE. Thank you so much for having me, John, really excited to be here. Alessya, you guys are doing some really cool work. I want to go over real quickly what you do, but the MLOps market is becoming AIOps, and you guys are in the middle of the Gen AI wave, right on that growth curve. We covered the news when LangKit, the open source project you released, came out in 2023, really creating that momentum. You're a former Amazonian from the ML team back in the day, so your roots are there. As you come into this next wave, Gen AI opens up massive growth as the applications start to come out. You're starting to see a lot more activity around setting up infrastructure and standing up applications, and people are starting to see some point use cases and point solutions out there. Everyone's going there, but now you've got to stand up and run stuff, so you have to build the infrastructure and have the operations. Here we go again: another operational challenge, yet a huge opportunity, and you guys are in the middle of it. Take us through your vision for AI and what WhyLabs does. Yeah, it's indeed a very, very exciting time for the entire industry and community. With the excitement around Gen AI and its potentially transformative nature for every enterprise, the operations, and getting value out of these amazing technologies, are top of mind for every enterprise.
So to talk about how the operations are transforming, I want to start with the big picture. If we think about the enterprise AI stack and what happened in the last year with the introduction of LLMs, we can see how the AI stack is very rapidly evolving. There are a few things happening. The model building has completely changed. Previously, a lot of resources were needed to build a model, and data science teams were created around each particular task: personalization, marketing, et cetera. Now, with foundation models becoming more and more common across different use cases, there's less building and more tuning, and the power of creating AI applications is in the hands of any developer. It's no longer closed in within your data science or ML team, so it's becoming a lot more accessible to build AI-powered, LLM-powered applications. The deployment part became easier over the past few quarters, I would say, because the big cloud companies and infrastructure providers invested an immense amount of money into making deployment as easy as possible, and new tools have launched, like LangChain, that make the entire application building much, much easier. However, the resulting application is incredibly complex, really hard to operate, and no longer has the visibility we want to see when we're talking about observability for traditional, non-LLM, non-AI applications. So it's easier to build AI, but the resulting applications are a lot more complex and a lot less transparent. That's where we're stuck right now, and the role of WhyLabs here is to simplify the operations part.
We believe that the main blocker to wide enterprise AI adoption is the ability to standardize operations: starting with observability, then going into security, which enables guardrails, then simplifying debugging and gathering more data for optimizing and improving the model. So WhyLabs brings all of those activities around operating, which start with observability, together. We saw the release of LangKit last year, which has gotten amazing traction; we're doing around 30,000 downloads a month, and major Fortune 500s are using LangKit in their LLM applications to enable transparency. LangKit is the starting point for that. This year we've been expanding rapidly, introducing new products like guardrails and tracing of complex LLM applications to further improve that observability and control. So I would say it's a super exciting time. Every Fortune 500 enterprise has some kind of OKR to launch a Gen AI-powered application, or multiple, on their roadmap this year. Operations are still hard, but with the products we're putting forward, we believe they'll become increasingly easier, and the community has gathered together around making sure that LLMs can be adopted rapidly and safely, which I think is on everybody's mind. It's one of those things where the CEO or the execs say we need to get into that business immediately, we're going to be left out in the cold, we're going to be on the wrong side of history. So there's a little bit of hype there, and I love that hype; however, the hype is matching with reality, and we're starting to see benefits. So the question I have for you is, I'm just curious: are you guys more developer focused, with the open source growing bottoms up, getting in with the developer teams, or are you targeting much more of an enterprise ops team, or both? Because obviously with LangKit, the Datadogs of the world come in, start from the developer and go up.
We also believe that developers will dictate the standards, so that's important. So the question is: are you doing both, or leaning more into the open source to seed the base and organically grow the market? So we're doing both, and the reason is that, first of all, developers are dictating what gets adopted, and it's very important to have open source tools that give developers the power to evaluate, decide, and assess whether the tool fits their needs. However, it's equally important to have an enterprise motion in place, because when it comes to enabling observability for AI applications, the deployment involves touching a company's most proprietary, most confidential data, which is the data that goes through the model inference, through the AI inference. For that process to go smoothly, we want to make sure that as we're introducing safety and control into our applications, we're not taking safety away from the overall IT team's policies. So we have to engage very closely with the security team, and that becomes an enterprise motion. Essentially we're helping align all of the stakeholders, from hands-on-keyboard developers who need to evaluate the tools all the way to CISOs, because security is a super top-of-mind topic for everyone, and now with LLMs, CISOs get involved in the purchasing decision as well. So in a way it's a very top-down motion with lots of stakeholders, but this is the state of where we're at right now, and I think things are evolving pretty rapidly. I do think that eventually the space is going to evolve very similarly to how Datadog took observability to market, which is bottoms up, through standards created by practitioners and through tools adopted first by practitioners.
What's interesting and notable about what you just mentioned is the highlighting of both the organic motion and the pre-existing assets that you have to touch: proprietary and probably sensitive data, but also strategic data and pre-existing apps, because Gen AI brings a whole new level of scale and benefit to that. So it's not like, hey, go build an app and we'll roll it out and test it out. This is bringing it to Main Street; you've got to do both. You've got to get the developers comfortable with the technology, but also figure out how you build it. So I think that speaks to the opportunity clearly; that's why Gen AI is going to be huge. The question I have for you is: do you see this becoming a whole new infrastructure? Does AI have its own operational operating system? Is it going to be another abstraction layer? We hear neural nets, we see graph databases, you're starting to see a lot more LLMs interacting with other small language models, some call them proprietary or sovereign models, whatever you want to call them, and you now have model integration happening. So you're starting to see this wave of, hmm, it may look different. So the question is, do you see that happening? And if you do, how do you run observability on an infrastructure that's not yet defined, or is being retooled in real time? Ooh, this is a tough question, and I think it's tough not just for me but for our entire community to answer. You're bubbling up a very important topic, which is that the infrastructure is evolving rapidly and it hasn't finished evolving. It's going to keep evolving this year and next year, and probably a few years from now. So how do you adapt to that? I think the first thing is, it's too early to tell whether there's going to be a completely separate AI IT organization that comes to exist. I'm not sure it's smart to separate those two concerns, because at the end of the day, it's all software.
So my bet would be that essentially the IT organization is going to evolve, and everybody is going to get very AI smart. Every IT practitioner will be as comfortable using and operating the AI side of the technology stack as they are operating containerized deployments or microservices. We've been evolving through what software means: we started with software being something pretty static on the box under your desk, and moved to software that is ever-evolving, which is what you experience with an LLM application. We have adapted through all of those cycles, like moving to the cloud and moving to containerized deployments, and IT got caught up and expanded its capabilities. So I don't think AI is going to introduce anything more revolutionary than what the IT organization has been going through in its evolution so far. But I do think the IT organization at enterprises needs to adapt: they need to become very comfortable both with the tools that AI technology is bringing into the IT infrastructure world and with AI technology as a whole. How do you operate it? What are some gotchas? How is observability for AI applications different from observability for application performance monitoring, for example? Getting smart about that, I think, is the most important thing, and something everybody should have started doing yesterday. Alessya, your point about the software is key, because I think that's the end game, right? It's all software. So AI will bring new capabilities in here; you're going to have probabilistic software, some adaptive, some smart AI, so people will be operating with humans and AI working together, and that's been discussed a lot. So I think it's still an open question we're going to keep monitoring, but the question is: what does the infrastructure look like?
Now that was a lead-in to the next question, which is observability, because generative AI is generating new things. You see that today with current LLMs, whether it's a hallucination or the answer being different every time. How do you log that and capture the good answers? So let's just say you've got prompting with responses, which we see today, and then you've got reasoning, which is inference based. So you've got responses from prompts and more prompts, and then you've got reasoning, a little deeper thinking. As those things come together, that's going to be an observability challenge; let's call that down the road. How do companies today start taking baby steps to figure out their observability posture? Is it gettable? Are they in position? What's your vision on the roadmap of observability, knowing that if something pops out a good answer, you want to save it or log it, and then see whether it happens again? This is an open question, because I can see a lot of complexity there. Yes, and this is exactly the type of challenge that we're tackling. So first I'm going to talk a little more abstractly about what organizations are doing today when it comes to getting smart about observability for LLM applications, and gen AI applications in general. There are two angles to it. There's the quality angle, right? The user sends a prompt, and the application comes back with a response. Is this a good response or not? How do you decide that? Is this a response that you can even return back to the user? Because if we're talking about highly regulated industries, for example, your chatbot cannot return an answer to a question about any kind of legal advice, medical advice, marriage advice; you don't want to get into any kind of legal gray area there. So how do you control what your LLM can respond to, what kind of questions the LLM can answer, and how do you control the quality?
That's one area of operations that I think enterprises are grappling with. The second area is security. LLMs and gen AI applications open up a whole new set of security challenges that we haven't solved before. Those include: how do you identify prompt injections, jailbreaks, or any kind of adversarial engagement from the user side with your LLM application? There, I would say the OWASP Top 10 for Large Language Model Applications has been leading the way with recommendations of what can be tracked. So given these two areas, what do organizations practically do today? I would say the focus right now is figuring out what to measure and how to measure it consistently. If we're talking about quality and the performance of the LLMs, oftentimes enterprises want to measure the chance of hallucinations, the relevance of the responses to the prompts, and particular topics that are discussed within both the prompts and the responses, measuring and tracking that over time. So with WhyLabs, one of the things we do is we have what are called guardrails, which allow organizations to define policies and then track prompts and responses with respect to those defined policies. Practically, I'll give you a very, very simple example. Given a prompt, let's say we're talking about topics: we would extract the types of topics that are covered in the prompt. The interesting topics could be, for example, a legal conversation, a health advice conversation, or a discount conversation, which is top of mind for airlines lately. So we would extract that information, which would essentially decorate or label the prompt with these policy tags, and we'll do the same thing for responses.
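The tag-and-track flow described here could be sketched roughly like this in Python. Everything in the sketch is an illustrative assumption, not the actual WhyLabs or LangKit API: real guardrails use language models and classifiers to detect topics, whereas this toy version uses simple keyword matching, and the topic names, function names, and sample conversations are all made up for illustration.

```python
# Toy sketch of a guardrail flow: label prompts/responses with policy
# topics, then mine the labeled records for policy violations.
# Keyword matching stands in for real topic classification.

POLICY_TOPICS = {
    "legal_advice": {"lawsuit", "contract", "attorney", "sue"},
    "medical_advice": {"diagnosis", "prescription", "symptoms"},
    "discounts": {"refund", "discount", "voucher"},
}

def tag_text(text: str) -> set[str]:
    """Label a blob of text with the policy topics it touches."""
    words = set(text.lower().split())
    return {topic for topic, kws in POLICY_TOPICS.items() if words & kws}

def tag_record(prompt: str, response: str) -> dict:
    """Decorate a prompt/response pair with policy tags."""
    return {
        "prompt": prompt,
        "response": response,
        "prompt_tags": tag_text(prompt),
        "response_tags": tag_text(response),
    }

def violations(records: list[dict], prohibited: set[str]) -> list[dict]:
    """Mine tagged records for responses touching prohibited topics."""
    return [r for r in records if r["response_tags"] & prohibited]

# Two hypothetical chatbot exchanges, tagged at logging time.
records = [
    tag_record("Can I get a discount on my fare?",
               "Yes, a 10% voucher has been applied."),
    tag_record("Should I sue the airline?",
               "You may want to consult an attorney about a lawsuit."),
]

# Analytics pass: alert on responses that stray into legal advice.
flagged = violations(records, prohibited={"legal_advice"})
```

Once every prompt and response carries tags like these, the free-text blobs become structured records that standard observability tooling can filter, aggregate, and alert on, which is the point being made in the interview.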
So now our users have a set of prompts, basically blobs of text, and responses, also blobs of text, which are a lot more consumable by observability analytics because they're labeled with tags that describe whether they contain information prohibited by the policies, or information that the operations team wants to watch. That turns them into essentially a really big database that you can then mine for tags, run various types of alerts and analytics on, and use to further improve your customer experience. That's the focus we have today at WhyLabs with some of the new tools that we're bringing to market. Well, I really appreciate your time; I know you're busy running a company, building a company. The industry needs a platform-agnostic AI monitoring solution that can run across multiple clouds and the edge, no matter where the models are, and I think that's going to be key, especially as models start to work with each other. I really appreciate you taking the time. In the last minute we have, put in a plug for the company. What are you guys doing now? You're hiring; talk about the funding, give a quick commercial for WhyLabs. Absolutely. So WhyLabs is growing continuously. One of the big hiring focus areas is the go-to-market team. We're always looking for amazing solutions architects and salespeople, and if you have experience with AI, that's an extra gold star on our side. Most importantly, we are an open source, standard-creating company. The feedback from the community on the standards that we have, with whylogs being the standard for traditional predictive ML and LangKit being the standard for generative applications, and the community engagement on that, is the way we grow and bring further benefits, not just to the enterprises that are buying WhyLabs but to the entire community.
So we'd love community engagement and discussions on how we take operations and observability to the next level, for a world that has a lot of generative AI applications. Alessya, thank you for being part of theCUBE's AI Innovators show. We appreciate the work that you do; it's going to be evolving fast, so we need that observability, and this generative AI market is going to be hot. So thanks for taking the time. Excellent. Thank you so much. Okay, we'll be back with more SuperCloud 6 coverage here in Palo Alto. I'm John Furrier, your host. Stay with us. We'll be right back.