from theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM.

Hi everybody, this is Dave Vellante of theCUBE, and you're watching our coverage of the IBM digital event experience, the multi-day program, tons of content, and it's our pleasure to be able to bring in experts, practitioners, customers and partners. Sriram Raghavan is here, he's the Vice President of IBM Research AI. Sriram, thanks so much for coming on theCUBE.

Thank you, pleasure to be here.

I love this title, I love the role, it's great work if you're qualified for it. So tell us a little bit about your role and your background. You came out of Stanford, you had the pleasure of hanging out in South San Jose at the Almaden lab, a beautiful place to create, but give us a little background.

Absolutely, yeah. So let me start with the present: what do I do now? My role is responsible for AI strategy-setting and execution in IBM Research across our global footprint, all our labs worldwide that are working in AI. I also work closely with the commercial parts of IBM, our software and services businesses, that take the AI innovation from IBM Research to market, into our products. That's the second part of what I do. And where did I begin life in IBM? As you said, I began life at our Almaden Research Center up in San Jose, up in the hills. Beautiful; I still think it's the best view I've had. I spent many years there doing work at the intersection of AI, large-scale data management and NLP, went back to India, where I was running the India lab for a few years, and now I'm back here in New York, running AI strategy for IBM.

That's awesome. Let's talk a little bit about the AI landscape. IBM has always made it clear that you're not doing consumer AI, you're really trying to help businesses. But how do you look at the landscape?
It's a great question, and it's one of those things that we constantly measure, ourselves and with our partners and sellers. You've probably heard us talk about the cloud journey: look, only about 20% of workloads are in the cloud, 80% are still waiting. For AI, that number is even less. Of course it varies; depending on who you ask and the precise use case, AI adoption is anywhere from 4% to 30%. But I think it's more important to look at where this is going directionally, and it's very, very clear that adoption is rising and the value is getting better appreciated. More important still, there is broader recognition, awareness and interest in knowing that to get value out of AI, you start with where AI begins, which is data. So the story around having a solid enterprise information architecture as the base on which to drive AI is starting to happen. As the investments in data platforms, in making your data ready for AI, start to come through, we're definitely seeing that adoption increase. And the second imperative businesses look for, obviously, is the tools and the skills to scale AI. It can't take me months and months to build an AI model; I've got to accelerate it. And then comes operationalizing it. But this is happening, and the upward trajectory is very, very clear.

We've been talking a lot on theCUBE over the last couple of years about how the innovation engine of our industry is no longer Moore's Law; it's a combination of data, you just talked about data, applying machine intelligence to that data, and being able to scale it across clouds, on-prem, wherever the data lives. So having said that, you've had a journey. You started out kind of playing Jeopardy, if you will; it was a very narrow use case, and you're expanding that use case. I wonder if you could talk about that journey, specifically in the context of your vision.
Yeah, so let me step back. For IBM Research AI, when I think about our strategy and vision, we think of it in two parts. One part is the evolution of the science and techniques behind AI itself. And you said it, right? From narrow, bespoke AI, where all it can do is the one thing it's really trained for, and it takes a large amount of data and a lot of compute power, to the techniques and the innovation that let AI learn from one use case to another, be less data-hungry, less resource-hungry, be more trustworthy and explainable. We call that the journey from narrow to broad AI, and one part of our strategy as scientists and technologists is the innovation to make that happen.

But as you said, as people involved in making AI work in the enterprise, the IBM Research AI vision would be incomplete without the second part, which is: what are the challenges in scaling and operationalizing AI? It isn't sufficient that I can tell you AI can do this. How do I make AI do this so that you get the right ROI, the investment relative to the return makes sense, and you can scale and operationalize it? So we took both of these imperatives, the evolution of AI itself and the need to scale and operationalize it, together with the things that make scaling and operationalizing hard: data challenges, which we talked about, skills challenges, and the fact that in enterprises you have to govern and manage AI. We put that together into an AI agenda that is really about advancing, trusting and scaling AI. Advancing is the piece of pushing the boundary, moving AI from narrow to broad. Trusting is building AI which is trustworthy and explainable, whose behavior you can control, understand and make sense of, and all of the technology that goes with that. And scaling AI is where we address the problem of how to reduce the time and cost of data prep.
How do I reduce the time for model tweaking and engineering? How do I make sure that, when something changes in the data, a model you built today lets you quickly close the loop and improve it? All of those things, think of them as day-two operations of AI, are part of our scaling AI strategy. So advancing, trusting, scaling: those are the three big mantras around which we think about AI.

Yeah, so I've been doing a little work around this notion of data ops, essentially DevOps applied to data and the data pipeline. And I had a great conversation recently with Inderpal Bhandari, IBM's Global Chief Data Officer, and he explained to me how, first of all, customers will tell you it's very hard to operationalize AI. He and his team took that challenge on themselves and have had some great success. And we all know the problem: AI has to wait for the data, has to wait for the data to be cleansed and wrangled. Can AI actually help with that part of the problem, compressing that?

A hundred percent. In fact, the way we think of the automation and scaling story is what we call the AI-for-AI story: AI in service of helping you build the AI, right? And I think of it really in three parts. The first is AI for data automation, or data ops: AI used for better discovery, better cleansing, better curation, faster linking, quality assessment, using AI to do all of those data tasks you have to do. The second part is using AI to automatically figure out the best model. That's AI for data science automation: feature engineering, hyperparameter optimization. Why should a data scientist spend weeks and months experimenting if AI can accelerate that from weeks to a matter of hours? That's data science automation. And then comes the important part, which is operations automation.
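The data science automation Sriram describes, letting software search model configurations instead of a data scientist hand-tuning for weeks, can be illustrated with a minimal random hyperparameter search. This is a toy sketch of the general idea, not IBM's AutoAI; the objective function here is a made-up stand-in for a real train-and-validate run:

```python
import random

# Stand-in for a model's validation error as a function of its
# hyperparameters; in practice each call would train and score a model.
# This toy surface is minimized near lr = 0.1, depth = 6.
def validation_error(lr, depth):
    return (lr - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

def random_search(n_trials, seed=0):
    """Automated hyperparameter search: sample configurations at random
    and keep the best one, instead of tuning one run at a time by hand."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(2, 12)}
        err = validation_error(cfg["lr"], cfg["depth"])
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

best_cfg, best_err = random_search(n_trials=200)
print(best_cfg, best_err)
```

Two hundred automated trials finish in milliseconds here; with real model training the same loop parallelizes across machines, which is the "weeks to hours" compression being described.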
I put a model into an application: how do I monitor its behavior? If the data it's seeing is different from the data it was trained on, how do I quickly detect that? A lot of the work from Research that went into the Watson OpenScale offering is really addressing that operational side. So AI for data automation, AI for data science automation, and AI to help automate the production operation of AI: that's the way we break the problem out.

So I always like to ask folks who are deep into R&D how their work ultimately translates into commercial products and offerings, because ultimately you've got to make money to fund more R&D. Can you talk a little bit about how you do that and what your focus is there?

Yeah, it's a great question, and I'm going to use a few examples. But let me say at the outset, this is a very, very close partnership between the research side of AI and our portfolio, where we're constantly both drawing problems from the offerings and building the capabilities that go into them. A lot of our work, much of our work in AI automation that we were just talking about, is part of Watson Studio, Watson Machine Learning and Watson OpenScale. In fact, OpenScale came out of Research's work in trusted AI and is now a centerpiece of that portfolio. Let me give a very different example. We have a very, very strong portfolio and focus in NLP, natural language processing. And this directly goes into capabilities in Watson Assistant, which is our system for conversational customer support, and Watson Discovery, which is about helping enterprises understand unstructured data. A great example of that is the work in Project Debater, which was a grand challenge in Research to build a machine that can debate. Now, look, we weren't looking to go sell a debating machine, but what we built as part of doing that were advances in NLP that are all making their way into Assistant and Discovery.
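The operations-automation piece mentioned above, detecting when the data a deployed model sees has drifted away from its training data, can be sketched with a population stability index (PSI), a common drift statistic. This is a toy illustration of the idea, not how Watson OpenScale actually implements drift monitoring:

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population stability index between training-time ("expected") and
    production ("actual") values of one feature. A common rule of thumb:
    PSI < 0.1 stable, 0.1 to 0.25 moderate drift, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # eps avoids log(0) for empty bins
        return [c / len(values) + eps for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]            # roughly uniform on [0, 1)
same = [i / 100 + 0.001 for i in range(100)]     # near-identical distribution
shifted = [0.5 + i / 200 for i in range(100)]    # mass moved to the upper half
print(psi(train, same), psi(train, shifted))
```

A monitoring loop would compute this per feature on each batch of production traffic and alert, or trigger retraining, when the index crosses a threshold; that is the "close the loop" automation described earlier.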
And we actually announced, earlier this year, a set of capabilities around trust, advanced summarization and deeper sentiment analysis. These make their way into Assistant and Discovery, but are born out of Research innovation in solving a grand problem like building a debater. That's just an example of how that journey from research to product happens.

Yeah, I mean, the Debater documentary, I've seen some of that. It's actually quite astounding what you're doing there. It sounds like you're taking natural language and turning it into complex queries with data science and AI. It's quite amazing.

Yes, and you will see that documentary, by the way, on channel seven at the Think event. The documentary about how Debater happened, featuring behind-the-scenes interviews with the scientists who created it, was actually featured last month at the Copenhagen International Documentary Festival. So I'll invite viewers to go to channel seven, and to data and AI tech on demand, to take a look at it.

Yeah, you should take a look at it. It's actually quite astounding and amazing. Sriram, what are you working on these days? What are the exciting projects, what's your focus area today?

Look, I think there are three imperatives that we're really focused on. One is really the NLP question we were talking about. NLP in the enterprise: look, text is the language of business, right? Text is the way businesses communicate with each other, with their partners, with the entire world. So we're helping machines understand language, but in an enterprise context, recognizing that data in the enterprise lives in complex documents, unstructured documents and email, and in conversations with customers. We're really pushing the boundary on how all our customers and clients can make sense of this vast volume of unstructured data by pushing the advances of NLP.
That's one focus area. The second focus area: we talked about trust and how important that is, and we've done amazing work in monitoring and explainability. We're really focused now on the emerging area of causality: using causality to explain, right? The model made this decision because the model believes this causes that. It's a beautiful way to do it. And the third big focus continues to be on automation. So NLP, trust and automation, those are the three big focus areas for us.

So how far do you think we can take AI? I know it's a topic of conversation, but from your perspective, deep in the research, how far can it go? And maybe, how far should it go?

Look, let me answer it this way. I think the art of the possible is enormous, but we are at an inflection point for the next wave of AI, this narrow-to-broad journey we talked about. Look, that narrow-to-broad journey is not a one-week, one-year thing; we're talking about a decade of innovation. But I think we are at a point where we're going to see a wave of AI that we like to call neurosymbolic AI, which brings together two fundamentally different approaches to building intelligent systems. One approach is what we call knowledge-driven: understand data, understand concepts, logically reason about them. We human beings do that, and that was really the way AI was born. The more recent, last couple of decades of AI was data-driven machine learning: give me vast volumes of data, and I'll use neural techniques, deep learning, to learn from it. We're at a point where we're going to bring both of them together, because you can't build trustworthy, explainable systems using only one of them, and you can't get away without using all of the data that you have to make sense of things. So neurosymbolic AI is, I think, going to be the linchpin of how we advance AI and make it more powerful and trusted.
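The neurosymbolic combination described above, explicit knowledge-driven rules working alongside a data-driven learned score, can be shown in miniature. This is a hypothetical loan-screening toy, not IBM's neurosymbolic research; the rule names, fields and scoring function are all invented for illustration:

```python
# Toy neurosymbolic decision: symbolic rules (knowledge-driven) fire
# first and force an outcome with a readable reason; otherwise a learned
# score (stand-in for a trained neural model) decides.

def learned_risk_score(income, debt):
    # Stand-in for a trained model's output in [0, 1]; higher = riskier.
    return max(0.0, min(1.0, debt / (income + 1e-9)))

RULES = [
    # (name, predicate on the applicant, forced decision)
    ("no_income", lambda a: a["income"] <= 0, "reject"),
    ("tiny_debt", lambda a: a["debt"] < 100, "approve"),
]

def decide(applicant, threshold=0.5):
    """Return (outcome, explanation). Rules give hard, explainable
    constraints; the learned score covers the cases rules don't."""
    for name, predicate, outcome in RULES:
        if predicate(applicant):
            return outcome, f"rule:{name}"
    score = learned_risk_score(applicant["income"], applicant["debt"])
    outcome = "reject" if score > threshold else "approve"
    return outcome, f"model_score:{score:.2f}"

print(decide({"income": 0, "debt": 5000}))      # symbolic rule decides
print(decide({"income": 80000, "debt": 20000})) # learned score decides
```

The point of the combination is visible in the second return value: every decision carries either a named rule or a score, which is exactly the trust-and-explainability motivation for bringing the two approaches together.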
So are you living your childhood dream here?

Oh, I've always been fascinated; you can't find a technology person who hasn't dreamed of building intelligent machines. And here I have a job where I can work across our worldwide set of 3,000-plus researchers to think, brainstorm and set strategy for AI, and then, most importantly, not to forget what you talked about, to move it into our portfolio so it actually makes a difference for our clients. I think it's a dream job, and I'm having a lot of fun.

Well, Sriram, it's been great having you on theCUBE. It's a lot of fun interviewing folks like you, and I feel a little bit smarter just talking to you.

Thanks so much for the opportunity to be here.

All right, thank you for watching, everybody. You're watching theCUBE's coverage of IBM Think 2020. This is Dave Vellante. We'll be right back after this short break.