Good afternoon, everyone. We're so happy you're here, watching theCUBE live on day one of our coverage of Google Cloud Next 23 from San Francisco, at Moscone Center specifically. Lisa Martin here with Dustin Kirkland. We're going to have a great conversation next about the value of generative AI. If you happened to catch the keynote this morning, there were a lot of famous characters on stage and lots of news on AI and gen AI. We've got Ted Kwartler here, field CTO of DataRobot. Welcome.

Thank you for having me.

And we've also got Arvind, Vice President of Data Science and Analytics at Gannett, USA Today Network. Welcome, guys, great to have you. Speaking of the news, let's talk about the news. Were you in the keynote this morning?

No, actually I had to be at the booth, answering questions about generative AI.

All the news, and Ted, you were working. Talk a little bit about, give us a glimpse into the value of gen AI. What does that look like? Lots of news from Google this morning.

Lots of good news, absolutely. But I think organizations really have to focus on how to go from innovation, and thinking about where it could fit, to actually delivering value to the enterprise. I think that focus is really taking shape now and will continue to do so over the next 12 months or so.

For us, I think it's about being flexible. New models are coming out all the time, and we want to make sure we stay flexible and avoid the technical debt of building in only one direction.

Everyone around the globe is talking about gen AI, we can't not talk about it, and the potential is huge, but there are also some barriers to success. Ted, talk about some of those barriers and how DataRobot can help customers really start dialing down or removing them.

Yeah, absolutely.
And I think the truth of the matter is there are 140 or more open-source LLMs, plus a lot of proprietary LLMs. They're all great, but you need the ability to evaluate them, right? And I think the monitoring and governance aspect is where a place like DataRobot really shines. We can help measure toxicity, cost, truthfulness, a bunch of different dimensions, all on one platform, no matter which LLM you choose.

Arvind, tell us a little bit about your company and what you guys are doing, and then we'll get into the partnership.

Yeah, at Gannett we own USA Today, which is our national publishing outlet. Along with that, we also own more than 200 local publishing outlets across the US and UK.

So maybe connect the dots all the way from the end user, the customer, the subscriber, to a USA Today property. Walk that all the way back to DataRobot and then ultimately Google Cloud, and show us how all this AI technology is impacting the lives of presumably millions of people, right?

Yeah, that's a great question. As Ted pointed out, we have a problem of plenty. Gen AI is a rapidly evolving field, and there are thousands of ways to put together an end-to-end application. At Gannett, we want to be really thoughtful and measured about how we leverage generative AI, and we want to proceed in a way that's responsible and safe for our end consumers. So from an application standpoint, we're prioritizing use cases that integrate the nuances of generative AI into the end-to-end workflow of an application while always giving the newsroom, the editor, the last say. Having a human in the loop is critical, especially at this stage in the evolution of generative AI.
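The evaluation Ted describes, scoring any LLM's output along dimensions like toxicity and cost so different models can be compared side by side, might be sketched roughly like this. This is not DataRobot's actual API; the scorers here are deliberately toy stand-ins (real deployments use trained classifiers or guard models), and all names are illustrative:

```python
def evaluate_response(model_name, response, scorers):
    """Return per-dimension scores for one model's response,
    so candidate LLMs can be compared on the same yardstick."""
    return {"model": model_name,
            "scores": {dim: fn(response) for dim, fn in scorers.items()}}

# Toy scoring functions, one per evaluation dimension.
scorers = {
    # Keyword match as a stand-in for a real toxicity classifier.
    "toxicity": lambda text: float(any(w in text.lower() for w in ("hate", "idiot"))),
    # Word count as a crude proxy for per-token cost.
    "token_cost": lambda text: len(text.split()) * 0.0001,
}

results = [
    evaluate_response("llm-a", "A polite, well-grounded summary.", scorers),
    evaluate_response("llm-b", "Only an idiot would ask that.", scorers),
]
safest = min(results, key=lambda r: r["scores"]["toxicity"])
print(safest["model"])  # llm-a
```

Because the scorers are plain functions, swapping in a different LLM only changes the responses being scored, not the harness, which is the point of evaluating many models on one platform.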
So immediately before the consumer, we'll have our editor looking at the output generated by a generative AI application, with the ability to edit it or approve it, and to send feedback on whether the application works well or not. And then there are so many steps in the process: getting the right LLM on board, development, maintenance of the model, monitoring the model, adding the governance layer.

Well, I have to think about some of what we heard today from Google in the keynote, some of the new features around attributions, being able to connect directly back. When you ask a generative AI a question, how do you ensure that it's not hallucinating, not just making something up? That's got to be super important in your industry too.

Yeah, absolutely. We have the ability to provide a confidence score with a model, and you want to know what words in your query are driving the response, and what words from the vector database are most relevant to the response. And then of course you want a citation if possible, to say we believe this summarization came from this chunk or from this article. Especially in media, what is truth in our interpretation of live events? We've seen that that is super impactful on so many different dimensions, so it's important that we have those types of guardrails in place. Go ahead, sorry.

It's powerful to have the citations of how an output came about, and also to enable the human in the loop to provide feedback, to enable learning and fine-tuning of the model. It's critical.

It is critical. Arvind, talk a little bit about why you chose to work with DataRobot and Google. You had me at the confidence factor, because that's one of the huge challenges.

Yeah, absolutely.
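The attribution Ted describes, tracing a generated summary back to the supporting chunk and attaching a confidence score, can be sketched as follows. This is an illustrative toy, not any vendor's implementation: simple word overlap stands in for real embedding similarity against a vector database, and the source names are made up:

```python
def cite_answer(query, chunks):
    """Pick the stored chunk whose words best overlap the query and return it
    as a citation; the overlap ratio doubles as a crude confidence score.
    Production systems use embedding similarity in a vector database instead."""
    q_words = set(query.lower().split())

    def overlap(chunk):
        return len(q_words & set(chunk["text"].lower().split())) / max(len(q_words), 1)

    best = max(chunks, key=overlap)
    return {"citation": best["source"], "confidence": round(overlap(best), 2)}

chunks = [
    {"source": "article-101", "text": "the city council approved the new budget"},
    {"source": "article-202", "text": "local team wins the championship game"},
]
print(cite_answer("what budget did the city council approve", chunks))
```

A low confidence score here is exactly the guardrail signal: it tells the editor in the loop that the output may not be well grounded in any source article and deserves a closer look before approval.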
I worry about it, right. But what were the deciding factors that really told you DataRobot is the right technology, with Google Cloud, to do it?

Yeah, so first of all, Google is a strategic partner of ours. Google is a unified platform that has data and AI, so it's a one-stop shop for us, and a lot of our machine learning models, the predictive AI models, we've been building on Vertex AI. And DataRobot is really flexible in the sense that they can fit into any step of that end-to-end process and help us with automation of those models. They also help us extend the feature set of a model; for example, with an existing model like forecasting or propensity modeling, we can leverage DataRobot just to provide the monitoring and observability aspects of that model. So it's pretty synergistic how we work with Google and DataRobot.

Talk a little bit, Ted, about the confidence factor, we're going to go back to that in a minute, because it sounds to me like that may be an element of differentiation for DataRobot. Can you talk a little bit about that from the "why DataRobot" perspective, and how that might be one of those obvious no-brainers?

Yeah, so over the last decade, DataRobot has really pioneered a lot of MLOps and governance, and we always thought that what we wanted to do was build stable, enterprise-grade models on the predictive, good old-fashioned AI side, right? So along come LLMs. Language models are a couple of years old, but particularly with these instruct models, these chat models, we thought, rather than jump into a very crowded space building our own LLMs, where the technology is going to iterate and be nascent, we should be really focused on how you monitor and make enterprises feel comfortable with this type of technology.

Yeah, so let me clarify that. There are a lot of LLMs out there.

For sure.
More every day. Are you creating your own, or?

We're not. We want you to bring the best-of-breed tool and use any vector database. What I will say is we have very tight integrations with the likes of BigQuery, where we can do a lot of the feature engineering, and we can monitor Vertex models, we can monitor PaLM and Med-PaLM, we have demos of Med-PaLM and others. So we want people to use the tools best suited for their task, but you want the ability to measure cost, toxicity, and confidence of the output, right?

Arvind, you guys have been working together for a few months now, I believe. What are some of the outcomes or benefits that Gannett, USA Today Network, has gleaned so far from this partnership?

Yeah, with DataRobot we've already automated multiple steps in the machine learning lifecycle for hundreds of our models, mostly in the realm of predictive AI as of now. What that does is create efficiencies and save time for my team of data scientists, with steps like data pre-processing, model building, governance of those models, and measuring their performance all automated. So we can let the data scientists go do what they're really passionate about, build more, and build more partnerships within our organization, rather than just putting their heads down and coding all the way through. Those automations really help. And we believe that partnership can extend to the generative AI realm as well. We've already started talking about partnering on a proof of value of sorts, to start building generative AI applications together.

Yeah, and along those lines, how much of it is the learning versus the inferencing? Are you taking advantage of both, customizing the content? How are you packaging that up for those end users and subscribers?
So we are, like I mentioned, really thoughtful about which use cases to prioritize. For now, we're prioritizing use cases that generate efficiencies for our teams, so our teams can reinvest that time into adding more value for our audiences and creating richer content experiences. It's all about enabling our teams to do what they do faster and more efficiently, and to be more confident about the output. So those are the kinds of use cases we're prioritizing now.

Share a little bit from the audience perspective. We're so demanding, right? We want relevant content, we want personalized content, we want to believe it, we want it updated in real time. Share a little bit, Arvind, about the cultural perspective at Gannett and USA Today, which have been around for a long time. The cultural appetite clearly is there to go in the generative AI direction. A lot of companies are dipping a toe in the water, not really sure where to go, and there's always a challenge with a storied organization. Talk about the appetite at your organization to go in the AI direction, to really serve the audience and give them what they want.

Yeah, we're one of those companies dipping a toe in the water; we're still pretty much in the experimentation phase of that journey as well. We're trying to strike the right balance between moving fast and being sure about how we're going about it, because, like you said, getting our customers to trust us with the content they're consuming is really critical for us. Which is, again, why for now we're prioritizing use cases that help create efficiencies within a workflow rather than use cases that directly engage with the audiences. We're really strategic about the use cases we prioritize.

Ted, talk a little bit about healthcare. I think you guys made some announcements recently.

Yeah.
What are some of the key use cases where DataRobot is helping healthcare organizations adopt AI?

Yeah, absolutely. So think about healthcare: it's this space where you have a dialogue with your doctor or care provider in which you're processing a lot of information, and sometimes you don't even understand all of it, right? It's a great use case for summarization. So we actually announced something with one of our customers where you can capture the audio of a patient dialogue, have it summarized, and put it into the doctor's notes. That improves the doctor's throughput, which means they can see more patients or spend more time with you. And you also then have the ability for advocacy, because now I can understand it. I can ask for it to be explained in a way that I understand: what is this drug, what are the side effects? You start building out these very specialized language summarizations, and I think there's real value to patient care there.

Yeah, so in that scenario, and I'm asking this as a former product manager, who do you look at as your customer? Is it the patient receiving the care? Or is it indirect, is it B2B2C? Are you selling that to the doctors and the hospitals?

What I'll say, as someone who used to run our AI ethics group, is that I would actually focus on the person who's most impacted. That's the patient. If you nail that, and it comes with the right efficiencies, the others will fall into place. If you go after "I'm going to sell into the healthcare space" without thinking about the patient, you could miss the mark and actually do patient harm, from an AI and responsible AI perspective. That's the worst-case scenario.

Right, and so, tying this back to Gannett and USA Today, you certainly look at the content consumer, that subscriber, as the end customer, and you're serving all of that together.
If you build for that person and they're happy, and in that case they may not be happy, it's a medical examination, right? But if their needs are met in a way that's ethically sound, and you can feel confident in the summarization, then I think it will pay dividends for the patient, the doctors, and the healthcare providers.

I think it's all about centering everything around driving value for that end consumer, right? It's as simple as that. You don't want to miss the mark, I would say.

No, but you bring up a great point about who you're selling to. To your point, you need to understand the use case from the customer's perspective, the patient's perspective, the subscriber's perspective, the banking consumer's perspective, and really understand what problems they have that we can help solve, and how we then work with the organization to apply technologies like gen AI and Vertex AI to meet the needs of the ultimate consumer.

Yeah, absolutely, and I think generative AI and these chatbots have shown that the average consumer wants to interact with machine learning technology in an intuitive and contextualized way. Predictive models have been out there for years, language models have been out there for years, but putting a chat interface on top, and the hyper-growth of users that followed, shows that that's the way to really interact with people. And when DataRobot does a lot of its very sensitive machine learning work with our customers, we often start with an impact assessment, and we don't just talk about the impact to the business; we talk about the people who are most at risk, what their impact is, and how we mitigate that risk.

That's a great viewpoint. What's next for Gannett, USA Today Network, from an AI journey perspective, Arvind?
So we've already implemented a couple of generative AI use cases where we're still testing and learning about the opportunities and pitfalls of leveraging a powerful and rapidly evolving technology like generative AI. And we have a long list of potential use cases that we've scored against our ability to pull them off, how much value they drive for the subscriber, how feasible they are, and so on. So we already have a prioritized list of use cases where we'll continue experimenting and keep learning by doing. And that's where partners like Google and DataRobot come in; we lean on them to execute on that vision.

Sounds like a very symbiotic, strategic partnership. Last question, Ted, for you, take us out. What are some of the things we can expect from DataRobot in the next, say, six to 12 months? Any sneak peeks you can give us?

Well, there are a few things I'm very excited about. One would be our generative AI playground, so people can take the same prompt, choose different models, evaluate the responses, and really feel comfortable that this is the right one. And I think we're going to see a further merger of predictive and generative together. So not just giving you a prediction that Arvind is 82% likely to re-subscribe at USA Today, but why is that? I can start contextualizing that by combining it with an LLM, so it's not just the point prediction. I think we're going to see this merger of technologies very soon.

Cool stuff. Guys, thank you so much for joining Dustin and me on the program and sharing the use case, what you're doing to more than dip your toe in the gen AI water, and how DataRobot and Google are facilitators.

It was a pleasure.

We appreciate your insights and your time.

Thank you. Thank you so much for having me.

Our pleasure, guys. For our guests and Dustin Kirkland, I'm Lisa Martin.
You're watching theCUBE live, day one coverage of Google Cloud Next 23, live from Moscone Center. Stick around, our next guest joins our analysts in just a minute.