Hello, and welcome to this CUBE Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. We're excited to have Adam Wenchel, who's the co-founder and CEO. Thanks for joining us today. Appreciate it. Yeah, thanks for having me on, John. Looking forward to the conversation. I've got to say, it's been an exciting world in AI, or artificial intelligence, just the explosion of interest, kind of in the mainstream with the language models, which people don't fully understand yet, but they're seeing the benefits of some of the hype around OpenAI, which kind of wakes everyone up to, oh, I get it now. And then, of course, the pessimism comes and all the skeptics are out there. But this breakthrough in the generative AI field is just awesome. It's really a shift. It's a wave. We've been calling it probably the biggest inflection point, bigger than the others combined, in what this can do from a software standpoint, applications. I mean, all aspects of what we used to know as the computing industry, software industry, hardware are completely going to get turbocharged. So we're obviously totally bullish on this thing. So this is really interesting. So the first question I've got to ask you: what's your take? Because you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's behind the explosion of interest? What are you seeing right now? Yeah, I mean, it's amazing. For starters, I've been in AI for over 20 years, and just seeing this amount of excitement and growth, and, like you said, the inflection point we've hit in the last six months, has just been amazing. And what we're seeing is that people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, right? 
With ChatGPT and other breakthroughs, and the amount of activity and the amount of new systems we're seeing hitting production already, so soon after that, is just unlike anything we've ever seen. So it's pretty awesome. And these language models can be applied in so many different business contexts that the amount of value being created is, again, unprecedented compared to anything before. Adam, you know, you've been in this for a while. So it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, the former New York Times journalist, and we were talking about how there's been a lot of work done on ethics. So it's not like it's new. There's a lot of stuff that's been baking over many, many years, and, you know, decades. And now everyone wakes up in this season. So I think that is a key point. I want to get into some of your observations. But before we get into it, I want you to explain, for the folks watching, just so we can get a definition on the record: What's an LLM? What's a foundation model? And what's generative AI? Can you just quickly explain the three things there? Yeah, absolutely. So LLM stands for large language model. As the name implies, it's a large language model that's been trained on a huge amount of data, typically pulled from the internet. And it's a general-purpose language model that can be built on top of for all sorts of different things. That includes traditional NLP tasks like, you know, document classification and sentiment understanding. But the thing that's gotten people really excited is its use for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques. They've been around for a while. But what's changed is just this new class of models that's based on, you know, new architectures. 
They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there are a number of companies that are making them accessible to everyone so that you can build on top. So that's the other big thing: this kind of access to models that can power generative tasks has been democratized, you know, in the last few months. And it's just opening up all these new possibilities. And then the third one you mentioned, foundation models, is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. We've actually seen this for a while in the computer vision world. People have been building on top of pre-trained computer vision models for a while for, you know, image classification, object detection. That's something we've had customers doing for three or four years already. And so, like you said, there are antecedents to everything that's happened. It's not entirely new, but it does feel like a step change. Yeah, I did ask ChatGPT to give me a riveting introduction to you. And it gave me an interesting one, and if we have time, I'll read it. It's kind of fun. You'll get a kick out of it. "Ladies and gentlemen, today we're privileged to have Adam Wenchel, founder of Arthur, who's going to talk about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that. It's kind of funny. It was good. So anyway, this is what people see. And this is why I think it's exciting, because I think people are going to start refactoring what they do. 
And I've been saying this on theCUBE now for about a couple of months: you know, there's a scene in Moneyball where Billy Beane sits down with the Red Sox owner, and the Red Sox owner says, if people aren't rebuilding their teams on your model, they're going to be dinosaurs. And it reminds me of what's happening right now. And I think everyone I talk to in the business sphere is looking at this, and they're connecting the dots, and they're saying, if we don't rebuild our business with this new wave, we're going to be out of business, because there's so much efficiency. There's so much automation, not like DevOps automation, but the generative tasks that will free up the intellect of people. Just the simple things, like do an intro or do this for me, write some code, you know, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision. Again, another huge field where, with 5G and other things coming on, it's going to accelerate. What do you say to people when they're leaning toward, "I need to rethink my business"? Yeah, it's 100% accurate. And what's been amazing to watch the last few months is the speed at which, and the urgency with which, companies like Microsoft and Google and others are actually racing to do that rethinking of their business. And those companies, which are large and haven't always been the fastest-moving companies, are working around the clock. And the pace at which they're rolling out LLMs across their suites of products is just phenomenal to watch. And it's not just the large tech companies either. I mean, look at the number of startups. Every week, a couple of new startups get in touch with us for help with their LLMs. 
And there's just a huge amount of venture capital flowing into it right now, because everyone realizes the opportunity for transforming legal and healthcare and content creation and all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. And the cloud scale, obviously the horizontal scalability of the cloud, brings us to another level. We've been seeing data infrastructure since the Hadoop days, when big data was coined. Now you're seeing this kind of bear fruit. Now you have vertical specialization where data shines, and large language models are all set up perfectly for this piece. And as you mentioned, you've been doing it for a long time. Let's take a step back. I want to get into how you started the company and what drove you to start it, because as an entrepreneur, you probably saw this opportunity before other people did, like, hey, this is finally it, it's here. Can you share the origination story of what you guys came up with? How you started it? What was the motivation? Take us through that origination story. Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA. But it wasn't really until 2015, 2016, when my previous company was acquired by Capital One, that I started working there. And shortly after I joined, I was asked to start their AI team and scale it up. And for the first time, I actually had production models that we were working with at scale, right? And so there were hundreds of millions of dollars of business revenue, and certainly a big group of customers, that were impacted by the way these models acted. And so it got me hyper-aware of the issues that come up when you get models into production. I think people who are earlier in their AI maturity look at that as a finish line, but it's really just the beginning. 
And there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, and, if they're impacting people, make sure they're not biased. And at that time, there really weren't any tools that existed to do this. There wasn't open source, there wasn't anything. And so after a few years there, I started talking to other people in the industry, and there was a really clear theme that this needed to be addressed. And so I joined up with my co-founder, John Dickerson, who was on the faculty at the University of Maryland, and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using it for now? Where's the action? What do the customers look like? What do prospects look like? Obviously, you mentioned production. This has been the theme. It's not like people woke up one day and said, hey, I'm going to put stuff into production. This has kind of been happening. There are companies that have been doing this at scale, and then there's a whole follower model coming on, mainstream enterprises and businesses. So the early adopters are there now in production. What do you guys do? I mean, because I can imagine that just driving the car off the lot is not enough; you've got to manage operations. I mean, that's a big thing. So talk about the value proposition and how you guys make money. Yeah, so what we do starts at the point when you go to validate these models ahead of deploying them in production, right? So you want to make sure that if you're going to be upgrading a model, if you're going to be replacing what's currently in production, you've proven that it's going to perform well, that it's going to perform ethically, and that you can explain what it's doing. 
And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model, day-to-day babysitting, as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer didn't make a change in an upstream data system. There are all sorts of reasons why the world changes, and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say, infrastructure monitoring or application monitoring, and we bring that to your AI systems. And that way, if there ever is an issue, it's not weeks or months until you find it, and you find it before it has an effect on your P&L and your balance sheet, which, too often before teams had tools like Arthur, was the way issues were detected. You know, I was talking to Swami at Amazon, who I've known for a long time, for 13 years, been on theCUBE multiple times. And I watched Amazon try to pick up that steam with SageMaker about six years ago. And so much has happened since then. He and I were talking about this wave, and I brought up this analogy of how, when cloud started, it was, hey, I don't need a data center, because when I did one of my startups at that time, my choice was to put a box in the colo and get all the configuration done before I could write a line of code. So the cloud became the benefit for that: you can stand up stuff quickly, and then it grew from there. Here it's kind of the same dynamic. You don't want to have to provision a large language model or do all this heavy lifting. So you're seeing companies coming out there saying you can get started faster; there's a new way to get it going. So it's kind of the same vibe of eliminating that heavy lifting. How do you look at that? Because this seems to be a wave that's going to be coming in. 
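The day-to-day drift "babysitting" Adam describes, comparing a live feature distribution against its training baseline, is often automated with a score like the population stability index (PSI). The sketch below is a minimal illustration of that general technique, not Arthur's actual product code; the 0.1 and 0.25 thresholds are conventional rules of thumb, not values from the interview.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index: a common drift score comparing a
    baseline feature distribution to live production traffic."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth empty buckets so the log term stays defined.
        return [(c + 1e-6) / len(values) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Baseline training data vs. shifted production data.
baseline = [x / 100 for x in range(100)]
shifted = [0.5 + x / 200 for x in range(100)]
print(psi(baseline, baseline) < 0.1)    # same distribution: low PSI -> True
print(psi(baseline, shifted) > 0.25)    # shifted distribution: flagged -> True
```

In a monitoring setup, a check like this would run on a schedule per feature, replacing the manual checking-in the interview mentions.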
And how do you guys help companies who are going to move quickly and start developing? Yeah, so in this gold-rush-mentality race to get these models into production, companies are starting to see more examples and evidence that there are a lot of risks that go along with it. Either your model, your system, says things that are just wrong, whether it's hallucination or just making things up. There are lots of examples; if you go on Twitter and the news, you can read about those, as well as times when there could be toxic content coming out of these things. And so there are a lot of risks that you need to think about and be thoughtful about when you're deploying these systems, but you need to balance that with the business imperative of getting these things into production and really transforming your business. And so that's where we help people. We say, go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. Let's frame the challenge for companies now. Obviously there are the people doing large-scale production, and then you have companies maybe as small as us who have large linguistic databases of transcripts, for example, right? So what are customers doing? And why are they deploying AI right now? Is it a speed game? Is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? Yeah, absolutely. So I mean, we're seeing, across a bunch of different verticals, there are leaders who have really started to solve this puzzle of getting AI models into production quickly and being able to iterate on them quickly. 
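The "guardrails" idea raised above, checking an LLM's output before it ever reaches a user, can be illustrated with a toy filter. This is only a sketch of the concept, not anything Arthur ships; the banned terms and length cap are invented examples.

```python
def guardrail(response: str, banned_terms, max_len=1000):
    """Minimal output guardrail: reject a generated response that
    contains a banned term or is suspiciously long, before it is
    shown to an end user."""
    lowered = response.lower()
    for term in banned_terms:
        if term in lowered:
            return False, f"blocked: contains banned term '{term}'"
    if len(response) > max_len:
        return False, "blocked: response exceeds length limit"
    return True, "ok"

# A benign answer passes; a leaky one is blocked.
ok, _ = guardrail("Our refund policy is 30 days.", ["ssn", "password"])
blocked, why = guardrail("Sure, the admin password is hunter2.",
                         ["ssn", "password"])
print(ok, blocked)  # True False
```

Real guardrail layers add classifiers for toxicity and hallucination rather than simple string matching, but the placement in the pipeline, between the model and the user, is the same.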
And I think those are the ones that realize that imperative you mentioned earlier about how transformational this technology is. And a lot of times, even the CEOs or the boards are very personally driving this sense of urgency around it. And so that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that data scientists aren't encumbered by having to hunt down data and get access to it, and they're not encumbered by having to stand up new platforms every time they want to deploy an AI system; that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into, compared to five or six years ago, when I was building at a top-10 US bank. At that point you really had to build almost everything yourself, and that's not the case now. And so it's really nice to have things like, as you mentioned, AWS SageMaker and a whole host of other tools that can really accelerate things. What's your profile customer? Is it someone who already has a team? Or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will? How do you align with that customer value proposition? Do people have to be built out with a team in place, or is it pre-production, or can you start with people who are just getting going? Yeah, people do start using it pre-production for validation. But I think a lot of our customers do have a team going, and they're either close to putting something into production or about to. It's everything from large enterprises that have really complicated setups, with dozens of models running all over doing all sorts of use cases, to tech startups that are very focused on a single problem, but that problem is the lifeblood of the company. And so they need to guarantee that it works well. 
And we make it really easy to get started. Especially if you're using one of the common model development platforms, you can just kind of turnkey get going and make sure you have a nice feedback loop. So then when your models are out there, it's pointing out areas where the model is performing well, areas where it's performing less well, giving you that feedback so you can make improvements, whether that's in training data or featurization work or algorithm selection. Depending on the symptoms, there are a number of things you can do to increase performance over time, and we help guide people on that journey. So Adam, I have to ask, since you have such a great customer base, and they're smart and they've got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff into production; it's not like it's a test, it's not like it's early. So as the next wave of fast followers comes, how do you see that coming online? What's your vision for that? How do you see companies that are just waking up out of the freeze of old IT, where they've got cloud but they're not yet there? What do you see in the market? I see you're on the front end now with the top people really nailing AI and working hard. What's the... Yeah, I think a lot of these tools are becoming, every year they get easier, more accessible, easier to use. And so, you know, even as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, and they have their own kind of data. And so you can use these foundation models, which have just been trained on generic data. They're a great starting point, a great accelerant. 
But then, you know, in most cases, you're either going to want to create a model or fine-tune a model using data that really comes from your particular customers, the people you serve, so that it reflects that and takes that into account. And so I do think the size of that market is expanding, and it's broadening as these tools become easier to use and as the knowledge about how to build these systems becomes more widespread. Talk about the customer base you have now. What's the makeup? What size are they? Give us a little taste of the customer base you've got there. What do they look like? Obviously, Capital One, we know very well where you were there. They were large scale, with a lot of data, from fraud detection to all kinds of cool stuff. What do your customers now look like? Yeah, so we have a variety, but I would say one area where we're really strong is, you know, several of the top 10 U.S. banks. It's no surprise that that's a strength for us. But we also have, you know, Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductors and electronics. What we find is that in any of these major verticals, there are typically, you know, one, two, three companies that are really leading the charge, and, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well, who we love working with just because they're really pushing the boundaries technologically. And so they provide great feedback and make sure that, you know, we're continuing to innovate and staying abreast of everything that's going on. 
You know, in these early markets, even when the hyperscalers were coming online, they had to build everything themselves. They're like the alphas out there building it. This is going to be a big wave again as the fast followers come in. And so when you look at the scale, what advice would you give folks out there right now who want to tee it up? And what's your secret sauce that will help them get there? Yeah, I think the secret to teeing it up is just to dive in and start. There's not really a secret. I think it's amazing how accessible these are. I mean, there are all sorts of ways to access LLMs, either via API access or, you know, downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things, here's something you want to take a look at, here are some potential remedies for it. We can help guide you through that. And that way, when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see, and B, you're able to really make it better in a much faster way with that type of feedback loop. It's interesting. We've been kind of riffing on this supercloud idea, because it was just a different name than multi-cloud, and you see apps like Snowflake built on top of AWS without even spending any CapEx. You just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call it AI apps, because I think there's a different distinction there. MLOps and AI apps seem a little bit old, almost a few years back. How do you view that? Because everyone's like, is this AIOps? And they're like, no, not kind of, but not really. 
How would you answer when someone just shoots from the hip, hey, Adam, aren't you doing AIOps? Do you say, yes, we are? Do you say, yes, but we do it differently? Because it doesn't seem like it's the same old AIOps. What's your answer? Yeah, it's a good question. AIOps has been a term that was co-opted for other things, and MLOps also has, you know, people who use it for different meanings. So I like the term just AI infrastructure. I think it describes it really well and succinctly. But you guys are doing the ops. I mean, that's the kind of ironic thing. It's like the next level. It's like next-gen ops, but you don't want to be putting it in that bucket. Yeah, no, it is a very operationally focused platform that we have; it fires alerts. If you're familiar with the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a data science operations center, where, across all your models, you might have hundreds of models running across your organization, or you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. You know, one of the things I think is interesting, if you don't mind commenting on it, is the aspect of scale. Scale is huge, and now you have scale in production. What's your reaction when people say, you know, how does scale impact this? Yeah, scale is huge. I think, look, the highest-leverage business areas to apply these to are generally going to be the ones at the biggest scale, right? And I think that's one of the advantages we have. Several of us come from enterprise backgrounds, and we're used to doing things enterprise grade at scale. 
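The "data science operations center" described above, alerts fired when a monitored metric breaches a threshold, then worked like a case, might look something like this in miniature. The class, metric names, and thresholds are invented for illustration; this is not Arthur's platform code.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    model: str
    metric: str
    value: float
    status: str = "open"

class ModelOpsCenter:
    """Toy data science operations center: watch metrics across many
    models, fire alerts on threshold breaches, and work each case."""

    def __init__(self, thresholds):
        self.thresholds = thresholds  # metric name -> max allowed value
        self.alerts = []

    def ingest(self, model, metric, value):
        # Fire an alert only when a monitored metric breaches its limit.
        if metric in self.thresholds and value > self.thresholds[metric]:
            self.alerts.append(Alert(model, metric, value))

    def resolve(self, alert):
        alert.status = "resolved"

ops = ModelOpsCenter({"drift_psi": 0.25, "error_rate": 0.05})
ops.ingest("fraud-model", "drift_psi", 0.40)   # breach: fires an alert
ops.ingest("fraud-model", "error_rate", 0.01)  # within bounds: no alert
print(len(ops.alerts))            # 1
ops.resolve(ops.alerts[0])        # work the case, then close it
print(ops.alerts[0].status)       # resolved
```

The security-operations analogy from the interview maps directly: detection (ingest), triage (the open alert queue), and resolution or escalation (resolve).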
And so, you know, we're seeing more and more companies. I think they started out deploying AI in important, but not necessarily crown-jewel, areas of their business, but now, you know, they're deploying AI right in the heart of things. And yeah, the scale that some of our companies are operating at is pretty impressive. Well, super exciting. Great to have you on, and congratulations. I've got a final question for you, just a random one. What are you most excited about right now? Because, I mean, you've got to be pretty pumped with the way the world is going. And again, I think it's just the beginning. What's your personal view? How do you feel right now? Yeah, the thing I'm really excited about for the next couple of years, and you touched on it a little bit earlier, is the convergence of AI and AI systems with businesses turning into AI-native businesses. And so, you know, as you get further along this transformation curve with AI, it turns out that the better the performance of your AI systems, the better the performance of your business, because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things we work on a lot with our customers is to take these really esoteric data science notions of performance and tie them to their business KPIs. So that way, you know, it's kind of like the operating system for running your AI-native business. And we're starting to see more and more companies get farther along that maturity curve and start to think that way, which is really exciting. I love "AI-native." I haven't heard any startup yet say AI-first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks: we're an AI-first company. It's going to be a great run. Adam, congratulations on your success to you and the team. 
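Tying an "esoteric data science notion of performance" to a business KPI, as described above, can be as simple as converting a metric delta into dollars. The numbers below are entirely hypothetical, chosen only to show the shape of the calculation.

```python
def fraud_savings(transactions, fraud_rate, avg_loss,
                  recall_old, recall_new):
    """Translate a data-science metric (recall on fraud cases) into a
    business KPI (dollars saved per period): extra fraud caught times
    the average loss per incident."""
    fraud_cases = transactions * fraud_rate
    extra_caught = fraud_cases * (recall_new - recall_old)
    return extra_caught * avg_loss

# Hypothetical numbers: 1M transactions/month, 0.1% fraud rate,
# $500 average loss, recall improved from 0.80 to 0.85.
print(round(fraud_savings(1_000_000, 0.001, 500, 0.80, 0.85), 2))  # 25000.0
```

A dashboard built this way lets a business owner read "recall went up five points" as "about $25K/month saved," which is the KPI linkage the interview describes.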
Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask to do an interview directly. That sounds good; then I can go hang out on the beach, right? So that's good. Thanks for coming on. I really appreciate the conversation. Super exciting, really important area, and you guys are doing great work. Thanks for coming on. Yeah, thanks, John. Okay, this has been a CUBE Conversation. I'm John Furrier here in Palo Alto. AI is going next gen. This is legit. This is going to a whole other level. It's going to open up huge opportunities for startups. It's going to open up opportunities for investors, and the value to users and the experience will come in ways I think no one can foresee. So keep an eye out for more coverage on siliconangle.com and thecube.net. Thanks for watching.