Welcome back to day two of theCUBE's coverage of re:MARS in Las Vegas, Amazon re:MARS. It's part of what they call the re: series at Amazon. re:Invent is their big show, re:Inforce is the security show, and re:MARS is the new emerging show for machine learning, automation, robotics, and space. The confluence of machine learning powering a new industrial age and inflection point. I'm John Furrier, host of theCUBE, here to break it down for another day of wall-to-wall coverage. We've got a great guest here, a CUBE alumni from our AWS Startup Showcase: Krishna Gade, founder and CEO of Fiddler.ai. Welcome back to theCUBE. Good to see you. Great to see you, John. In person this time; we did the remote one before. Absolutely, great to be here. I always love being part of these interviews and talking more about what we're doing. Well, you guys have a lot of good street cred, a lot of good word of mouth around the quality of your product and the work you're doing. A lot of folks that I admire and trust in the AI and machine learning area say great things about you. A lot going on. You're a growing company, kind of like a startup on a rocket ship getting ready to go, pun intended here at the space event. What's going on with you guys? You're here, and machine learning is the centerpiece of it. Swami gave the keynote here on day two, and it really is an inflection point. Machine learning is now ready, it's scaling, and look at some of the examples they were showing with the workloads and the data sets they're tapping into. You've got CodeWhisperer, which they announced. You've got trust and bias now being addressed. We're hitting a new level in ML: ML operations, ML modeling, ML workloads for developers. Absolutely. I think machine learning has now become operational software. A lot of companies are investing millions and billions of dollars in creating teams to operationalize machine learning-based products, and that's the exciting part.
I think the thing that is very exciting for us is we are helping those teams observe how those machine learning applications are working so that they can build trust into them. Because, as Swami was alluding to today, without actually building trust into AI it's really hard to have your business users use it in their business workflows, and that's where we are excited about bringing that trust and visibility factor into machine learning. A lot of us know what you guys are doing in the AWS ecosystem, and now you're expanding here. Take a minute to explain what Fiddler's doing for the folks in the space that are in discovery mode, trying to understand who's got what, because like Swami said on stage, it's a full-time job to keep up on all the machine learning activities and tool sets and platforms. Take a minute to explain what Fiddler's doing, then we can get into some good questions. Absolutely. As enterprises take on the operationalization of machine learning models, one of the key problems they run into is a lack of visibility into how those models perform. For example, let's say I'm a bank trying to introduce credit risk scoring models using machine learning. How do I know when my model is rejecting someone's loan, when my model is accepting someone's loan, and why is it doing it? This is basically what makes machine learning a complex thing to implement and operationalize. Without this visibility you cannot build trust and actually use it in your business. What we provide with Fiddler is we actually open up this black box and help our customers really understand how those models work. For example: how is my model doing? Is it working accurately or not? Why is it actually rejecting someone's loan application? We provide both fine-grained and coarse-grained insights so our customers can deploy machine learning in a safe and trustworthy manner. Who is your customer?
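To make the "opening up the black box" idea concrete, here is a minimal sketch of per-prediction explainability for a loan model. Everything in it is invented for illustration (feature names, weights, baseline values); it is not Fiddler's actual method, just the simplest case, where a linear model's prediction decomposes exactly into per-feature contributions.

```python
import numpy as np

# Hypothetical linear credit-risk model. Weights, bias, and the baseline
# (an "average applicant") are stand-ins, not real learned values.
FEATURES = ["income", "debt_ratio", "credit_history_len"]
WEIGHTS = np.array([0.8, -1.5, 0.4])
BIAS = -0.2
BASELINE = np.array([0.5, 0.3, 0.4])  # average applicant, normalized features

def score(x: np.ndarray) -> float:
    """Probability the loan is approved (logistic over a linear score)."""
    return float(1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS))))

def attributions(x: np.ndarray) -> dict:
    """Per-feature contribution relative to the baseline applicant.

    For a linear model, weight * (x_i - baseline_i) exactly decomposes
    the change in the raw score; non-linear models need techniques
    such as SHAP to get a comparable decomposition.
    """
    return dict(zip(FEATURES, WEIGHTS * (x - BASELINE)))

applicant = np.array([0.2, 0.7, 0.3])  # low income, high debt ratio
print(f"approval probability: {score(applicant):.2f}")
for name, contrib in attributions(applicant).items():
    print(f"  {name:20s} {contrib:+.2f}")  # debt_ratio dominates the rejection
```

The answer to "why was this loan rejected" falls out of the attribution dict: here the high debt ratio contributes the largest negative amount to the score.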
Who are you targeting? What percentage is the data engineer? Is it data science? Is it the CISO? Is it all of the above? Our customer is the data scientist and the machine learning engineer. We usually talk to teams that have a few models running in production. That's basically our sweet spot: they're looking for a single pane of glass to see which models are running in production, how they're performing, and how they're affecting their business metrics. So we typically engage with a head of data science or head of machine learning that has a few machine learning engineers and data scientists. Okay, so for those of you watching, if you're into this, you can go check it out. It's good to learn. I want to get your thoughts on some trends that I see emerging, and I want to get your reaction to those. Number one, we're seeing cloud scale now, and integration is a big part of things. So time to value was brought up on stage today. Swami mentioned time to value and showed a benchmark where they got it down to four hours, while some other teams were taking eight weeks. Where are we on the progression of time to value? Can you scope that for me? I mean, it depends on the company. For example, when we work with banks, the time to operationalize a model can actually take months because of all the regulatory procedures they have to go through. They have to get the models reviewed by model validators and model risk management teams, and then they audit those models. They then have to ship those models and constantly monitor them. So it's a very long process for them. And even for non-regulated sectors, if you do not have the right tools and processes in place, operationalizing machine learning models can take a long time. With tools like Fiddler, what we are enabling is basically compressing that lifecycle.
We're helping them automate model monitoring and explainability so that they can ship models faster. You get velocity in terms of shipping models. For example, one of the growing FinTech companies that started with us last year began with six models in production. Now they're running about 36 models in production. So within a year they were able to grow 10x. That is basically what we are trying to see. Here's another issue. We're at re:MARS. So first of all, you've got a great product and a lot of market to grow into. But here you've got space. Anyone coming out of a college or university PhD program, if they're into aero, they're going to be here. This is where they are. Now you have a new core competency with machine learning, not just the engineering that you see in the space or aerospace area. You have a new engineering. I go back to the old days, my parents' days: there was Fortran. You used Fortran; it was the lingua franca to manage the equipment. A little throwback to the old school. But now machine learning is a companion, a first-class citizen to the hardware, and in fact some will say more important. I mean, the machine learning model is the new software artifact. It is going into production in a big way. And I think it has two things that are different compared to traditional software. Number one, unlike traditional software, it's a black box. You cannot read a machine learning model's code and see why it's making those predictions. Number two, it's a stochastic entity. What that means is its predictive power can wane over time, so it needs to be constantly monitored and constantly refreshed so that it's actually working as intended. Those are the two main things you need to take care of, and if you can do that, then machine learning can give you a huge amount of ROI. There is some practitioner kind of craft to it.
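The point about predictive power waning is usually detected by comparing the distribution the model saw at training time against live traffic. A common drift signal is the Population Stability Index (PSI); the sketch below is a generic illustration of the idea (the thresholds are industry rules of thumb, not Fiddler's specific method).

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample (e.g. the
    training distribution) and live traffic. Larger values mean the
    live distribution has drifted further from the reference."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live_scores = rng.normal(0.5, 1.0, 10_000)   # live traffic, mean has shifted

drift = psi(train_scores, live_scores)
print(f"PSI = {drift:.3f}")
if drift > 0.2:  # > 0.2 is a common rule of thumb for significant drift
    print("significant drift detected: consider retraining")
```

Running a check like this continuously against every input feature and the model's output score is the mechanical core of the "constantly monitored, constantly refreshed" loop described above.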
Like you said, you've got to know when to refresh, what data sets to bring in, what to stay away from, certainly when you get to bias, but I'll get to that in a second. My next question is really along the lines of software. So if you believe that open source will dominate the software business, which I do, and most people won't argue, I think you would agree with that, right? Open source is driving everything. If everything's open source, where's the differentiation coming from? So if I'm a startup entrepreneur, or I'm a project manager working on the next Artemis mission, I've got open source. Okay, there are definitely security issues here. I don't want to talk about shift left right now, but okay, open source is everything. Where's the differentiation? Where do I have the proprietary edge? That's a great question, right? So I used to work in tech companies before Fiddler. When I worked at Facebook, we would build everything in-house. We would not even use a lot of open source software. So there are companies like that that build everything in-house. And then I also worked at companies like Twitter and Pinterest, which actually used a lot of open source, right? So the thing is, it depends on the maturity of the organization. If you're a Facebook or a Google, you can build a lot of things in-house. If you're a modern tech company, you probably leverage open source. But there are lots of other companies in the world that still don't have the talent pool to take things from open source and productionize them. And that's where the opportunity for startups comes in: we can commercialize these things, create a great enterprise experience, and operationalize things so that they don't have to do it in-house. That's the advantage of working with them.
I don't want to get all operating-system theory with you on the stage here, but I have to ask you the next question, and I totally agree with you, by the way. That's the way to go. There are not a lot of people out there with that expertise. That's just statistical, and that's going to get better. Data engineering is really narrow; it's like the SRE of data. That's a new role emerging. Okay, all these things are happening. So if open source is there, integration is a huge deal, and you're starting to see the rise of a lot of MSPs, managed service providers: I run Kubernetes clusters, I do this, that, and the other thing. So what's your reaction to the growth of the integration side of the business and this role of new services coming from third parties? Yeah, absolutely. I think one of the big challenges for a Chief Data Officer or someone like a CTO is how they devise this infrastructure architecture with homegrown components, open source components, or vendor components, and how they integrate them. You know, when I used to run data engineering at Pinterest, we had to devise a data architecture combining all of these things and create something that actually flows very nicely, right? And this is why- And if you didn't do it right, it would break. Absolutely. And this is why it's important for us at Fiddler to make sure that Fiddler can integrate with all varieties of ML platforms. Today, a lot of our customers build machine learning models on SageMaker, so Fiddler integrates nicely with SageMaker so that they get a seamless experience monitoring their models. Yeah, I mean, this might not be the right phrase for it, but I think data engineering as a service is really what I see you guys doing, as well as other things. You're providing all that- ML engineering as a service. ML engineering as a service. Well, it's hard. I mean, it's the hard stuff here, but that has to enable.
So you, as a business entrepreneur, have to create a multiple of value proposition for your customers. What's your vision on that? What is that value? It has to be a multiple of at least five to ten. I mean, the value is simple, right? If you have to operationalize machine learning, you need visibility into how these things work. If your CTO or Chief Data Officer is asking how is my model working and how is it affecting my business, you need to be able to show them a dashboard of how it's working, right? A data scientist today struggles to do this. They have to manually generate a report, manually do this analysis. What Fiddler is doing for them is basically reducing their work so that they can automate these things. They can still focus on the core aspects of model building and data preparation, and the boring aspect of monitoring the models and creating reports around the models is automated for them. Yeah, you guys have a great business. I think there's a lot of great future there, and it's only going to get bigger. Again, the TAM is going to expand as the rising tide comes in. I want to ask you, while we're on that topic of rising tides: Dave Vellante and I, since re:Invent last year, have been kicking around this term that we made up called supercloud. And supercloud was a word that came out of these clouds that were not Amazon hyperscalers. So Snowflake, Goldman Sachs, Capital One, you name it. They're building massive proprietary value on top of the CapEx of Amazon. Jerry Chen at Greylock calls it castles in the cloud. You can create these moats. So this is a phenomenon, right? You land on one and then you go to the others. So the strategy is everyone goes to Amazon first and then hits Azure and GCP. That then creates this kind of multi-cloud. So, okay, supercloud's kind of happening. It's a thing. Charles Fitzgerald will disagree. He has a platform where he's against the term.
I get why, but he's off base a little bit. We can't wait to debate him on that. So superclouds are happening. But now what do I do about multi-cloud? Because now I understand multi-cloud: I have this on that cloud, and integrating across clouds is a very difficult thing. Right, right, right. If I'm Snowflake or whatever, hey, I'll go to Azure, more TAM expansion, more market. But are people actually working together? Are we there yet, where it's like, okay, I'm going to re-operationalize this code base over here? I mean, the reality of it is, the enterprise wants optionality, right? I think they don't want to be locked into one particular cloud vendor or one particular software. And therefore you actually have a situation, a multi-cloud scenario, where they want to have some workloads in Amazon and some workloads in Azure. And this is an opportunity for startups like us because we are cloud agnostic. We can monitor models wherever you have them. A lot of our customers have some of their models running in their data centers and some of their models running in Amazon, and so we can provide a universal single pane of glass. We can basically connect all of that data and actually showcase it. I think this is an opportunity for startups to combine the data streams coming from various different clouds and give them a single pane of experience. That way, the questions of where is your data, where are my models running, which cloud are they in, are all abstracted out from the customer. Because at the end of the day, enterprises want optionality, and we are in this multi-cloud situation. Yeah, I mean, this reminds me of the interoperability days back when I was growing up in the business. Everything was interoperability and OSI, and the standards came out. But what's your opinion on openness? Okay, there's a knee-jerk reaction right now in the market to silo your data for governance, whatever reasons.
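The "universal single pane of glass" amounts to normalizing metric feeds from models running in different environments into one ranked view. Here is a toy sketch of that idea; the record shape, field names, and thresholds are all invented for illustration and are not Fiddler's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ModelMetric:
    """One monitoring record for a deployed model (fields are made up)."""
    model: str
    environment: str  # e.g. "aws", "azure", "on-prem"
    accuracy: float
    drift: float

def single_pane(feeds: list) -> list:
    """Merge per-environment metric feeds into one view, sorted so the
    models most in need of attention (highest drift) come first. Where
    each model physically runs is abstracted away from the viewer."""
    rows = [m for feed in feeds for m in feed]
    return sorted(rows, key=lambda m: m.drift, reverse=True)

# One feed per environment; in practice these would come from
# environment-specific collectors.
aws_feed = [ModelMetric("credit-risk-v3", "aws", 0.91, 0.05)]
onprem_feed = [ModelMetric("fraud-v1", "on-prem", 0.88, 0.31)]

for m in single_pane([aws_feed, onprem_feed]):
    flag = "DRIFT" if m.drift > 0.2 else "ok"
    print(f"{m.model:15s} {m.environment:8s} acc={m.accuracy:.2f} {flag}")
```

The design point is that the aggregation layer, not the viewer, knows which cloud each feed came from, which is what makes the vendor cloud-agnostic from the customer's perspective.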
But yet machine learning gurus and experts will say, hey, if you want horizontal scalability and the best machine learning models, you've got to have access to data, and fast, in real time or near real time. That's the antithesis of siloing. So what's the solution? Yeah. Customers control the data plane and have a control plane that's... what do customers do? It's a big challenge. Yeah, absolutely. I think there are multiple different architectures that have emerged. We've seen vendors like us deploy completely on-prem, right? And they still do it; we still do it for some customers. Then you had this managed cloud experience, where you just abstract out the entire operations from the customer. And now you have this hybrid experience, where they split the control plane and data plane. So you preserve the privacy of the customer from the data perspective, but you still control the infrastructure, right? I don't think there's a right answer. It depends on the product that you're trying to build. Databricks is able to do this control plane and data plane split really well. I've seen some other tools that have not done it really well. So I think it all depends upon- Well, what about Snowflake? I think they have a- Sorry, correct. They have a managed cloud service, right? So predominantly that's their business. So I think it all depends on what you're trying to do: what's your go-to-market, which customers you're talking to, what does your product architecture look like? From Fiddler's perspective, today we have chosen to either go completely on-prem or provide a managed cloud service. That's actually simpler for us than splitting the control plane. So it's customer choice. Exactly. However you want to use Fiddler: go on-prem, no problem, or cloud. Right, or cloud, yeah. You'll deploy and work across whatever observability space you want to. That's right, that's right. Okay, yeah, so that's a big challenge.
All right, what's the big observation from your standpoint? You've been on the hyperscaler side in your journey: Facebook, Pinterest. Back then you built everything, because no one else had software for you. But now everybody wants to be a hyperscaler, yet there's a huge CapEx advantage. What should someone do? If you're a big enterprise, obviously I could be big insurance, I could be financial services, oil and gas, whatever vertical, and I want a supercloud. What do I do? Right. I think the biggest advantage enterprises have today is a plethora of tools. When I worked on machine learning way back at Microsoft on Bing search, we had to build everything: training platforms, deployment platforms, experimentation platforms, how we monitor those models. Everything had to be homegrown. A lot of open source also did not exist at the time. Today the enterprise has this advantage: they're sitting on a goldmine of tools. Obviously there's probably a little bit of tool fatigue as well, deciding which tools to select. There's plenty of tools available. Exactly, right? And then there are services available for you. So now you need to make smarter choices to cobble this together and create a workflow for your engineers. And you can really get started quite fast, and actually get on par with some of these modern tech companies. That is the advantage that a lot of enterprises are seeing. If you were going to be the CTO or CEO of a big transformation, knowing what you know, well, you just brought up the killer point about why it's such a great time right now. You've got the platforms, the open source, and the tooling to essentially reset everything. So if you're going to throw everything out and start fresh, you're basically building a system architecture. It's a complete reset. That's doable. How fast do you think you could do that for, say, a large enterprise?
See, I think if you set aside the organizational processes and whatever friction comes with them, from a technology perspective it's pretty fast. You can devise a data architecture today with tools like Kafka, Snowflake, and Redshift; you can devise a data architecture very cleanly right from day one and actually implement it at scale. And then once you have accumulated enough data and can extract more value from it, you can go and implement your MLOps workflow on top of it. And I think this is where tools like Fiddler can help as well. So I would start with looking at data. Do we have centralization of data? Do we have governance around data? Do we have analytics around data? And then get into machine learning operations. Krishna, always great to have you on theCUBE. You're a great master class guest. Obviously great success with your company. Been there, done that, and doing it again. I've got to ask you, since you just brought up the whole reset: what is the superhero persona right now? Because it used to be the full stack developer. And then I coined, and it didn't go over very well on theCUBE, the half stack developer, because nobody wants to be a half stack at anything. Half sounds worse than full. But cloud is essentially half a stack. I mean, you've got infrastructure, you've got tools. Now you're talking about a persona that's going to reset, look at tools, make selections, build an architecture, build an operating environment, distributed computing, operating. Who is that person? What does that persona look like? I mean, I think the superhero persona today is the ML engineer. I'm just really surprised how much is put on an ML engineer to do these days. You know, when I entered the industry as a software engineer, I had three or four things in my job to do: I write code, I test it, I deploy it, I'm done.
Today as an ML engineer, I need to worry about my data: how do I collect it? I need to clean the data, I need to train my models, I need to experiment with them, I need to deploy them, I need to make sure they're working once they're deployed. And you've got to do all the DevOps behind it. And all the DevOps behind it. So I'm working half time as a data scientist, half time as a software engineer, and half time as a DevOps engineer. A cloud rocket tech. It's like a heroic job. And I think this is why these jobs are now really hard jobs, and people want to be machine learning engineers more and more. And they're paid commensurately as well. And this is where I think an opportunity for tools like Fiddler exists as well, because we can help those ML engineers do their jobs better. Thanks for coming on theCUBE. Great to see you. We're here at re:MARS. Great to see you again. And congratulations on being on the AWS Startup Showcase; we're in year two, episode four coming up. We'll have to have you back on. Krishna, great to see you. Thanks for coming on. Okay, it's theCUBE's coverage here at re:MARS, John Furrier bringing all the signal from all the noise here. Not a lot of noise at this event; it's very small, very intimate, a little bit different, but all on point with space, machine learning, robotics, and the future of industrial. We'll be back with more coverage after this short break. Thank you, John.