Live from Las Vegas, it's theCUBE. Covering IBM Think 2018, brought to you by IBM. Hello, I'm John Furrier. We're here in theCUBE Studios at IBM Think 2018 at Mandalay Bay in Las Vegas. We're extracting the signal from the noise, talking to all the executives, customers, and thought leaders inside the community of IBM and theCUBE. Our next guest is Ritika Gunnar, who's the VP of Product for Watson and AI, cloud data platforms, all the goodness of the product side. Welcome to theCUBE. Thank you, great to be here again. So we love talking to the product people, because we want to know what the product strategy is, what's available, what the hottest features are. Obviously we've been talking about, and these are our words, Ginni introduced the innovation sandwich. She did. The data's in the middle, and you have blockchain and AI on both sides of it. This is really the future. This is where you're going to see automation. This is where you're going to see efficiencies being created, inefficiencies being abstracted away. Obviously blockchain has got more of an infrastructure, futuristic piece to it; AI is in play now, machine learning. You've got cloud underneath it all. How has the product morphed? What is the product today? We've heard of Watson, World of Watson in the past. You've got Watson for this, you've got Watson for IoT. What is the current offering? What's the product? Can you take a minute just to explain what, semantically, it is? Sure. I'll start off by saying, what is Watson? Watson is AI for smarter business. I want to start there. Because Watson is equal to, how do we really get AI infused in our enterprise organizations? And that is the core foundation of what Watson is. You heard a couple of announcements at the conference this week about what we're doing with Watson Studio, which is about providing that framework for what it means to infuse AI in our clients' applications.
And you talked about machine learning, but it's not just about machine learning anymore. It really is about how we pair machine learning, which is about tweaking and tuning single algorithms, with what we're doing with deep learning. And that's one of the core components of what we're doing with Watson Studio: how do we make AI truly accessible, not just machine learning but deep learning, and infuse those in our client environments really seamlessly? And so the deep learning as a service piece of what we're doing in the studio was a big part of the announcements this week, because it allows our clients to have deep learning in a very accessible way. And there were a few things that we announced with deep learning as a service. We said, look, just like with predictive analytics, where we have capabilities that easily allow you to democratize that to knowledge workers and business analysts by adding drag-and-drop capabilities, we can do the same thing with deep learning. So we have taken a lot of things that have come from our research area and started putting those into the product to really bring about enterprise capabilities for deep learning, but in a really de-skilled way. Yeah, and also to remind the folks, there's a platform involved here. So, maybe you can answer this: has it been re-platformed, or is it just the platformization of existing stuff? Because there's certainly demand. I mean, TensorFlow at Google showed that there's demand for machine learning libraries, with deep learning behind them. You've got Amazon Web Services with SageMaker touting everything. The as-a-service model for AI is definitely in demand. So talk about the platform piece underneath. What is it? How does it get rendered? And then we'll come back and talk about the user consumption side.
So it definitely is not a re-platformization. Okay, good. You recall what we have done, with a focus initially on data science and machine learning. And the number one thing that we did was support open source and open frameworks. So it's not just one framework, like TensorFlow, but it's about what we can do with TensorFlow, Keras, PyTorch, and Caffe, and being able to use all of our builders' favorite open source frameworks in a way where we can add additional value on top of them, and help builders accelerate what it means to actually have it in the enterprise and what it means to actually de-skill that for the organization. So we started there. But really, if you look at where Watson has focused on the APIs and the API services, it's bringing together those capabilities of what we're doing with unstructured pre-trained services, allowing clients to bring the structured and unstructured together on one platform, and adding the deep learning as a service capabilities, which is truly differentiating. Well, I think the important point there, just to amplify for the people, is that it's not just your version of the tools for the data; you're looking at bringing data in from anywhere your customer wants it, and that's super critical. You don't want to ignore data. You've got to have access to the data that matters. Yeah, I think one of the other critical pieces that we're talking about here is, data without AI is meaningless, and AI without data is really not very useful or very accurate. So having both of them in a yin-yang, and bringing them together as we're doing in Watson Studio, is extremely important.
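As a rough illustration of the "tweaking and tuning single algorithms" side of machine learning that Gunnar contrasts with deep learning, here is a minimal logistic-regression training loop in plain Python. This is a generic sketch, not Watson Studio code; in practice you would reach for one of the frameworks named above (TensorFlow, Keras, PyTorch, Caffe), and the data and hyperparameters here are illustrative.

```python
import math

def train_logistic(data, labels, lr=0.5, epochs=200):
    """Fit a single logistic-regression model by stochastic gradient descent.

    data: list of feature lists; labels: list of 0/1 ints.
    This is the classic "single algorithm" style of machine learning:
    one model, a couple of knobs (lr, epochs) to tweak and tune.
    """
    n = len(data[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # gradient of the log loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector with the fitted weights."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Tiny toy dataset: label is 1 when the first feature dominates the second.
X = [[2.0, 0.1], [1.5, 0.3], [0.2, 1.8], [0.1, 2.2]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
preds = [predict(w, b, x) for x in X]
```

A deep learning framework generalizes this same loop to many stacked layers, which is why a studio that hosts all the frameworks side by side can cover both ends of the spectrum.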
And the other thing, I want to get now to the user side, the consumption side. You mentioned making it easier, but one of the things we've been hearing, and it's been the theme in the hallways and certainly on theCUBE here, is that bad data equals bad AI. Bad data equals bad AI. So it's not just about bolting AI on; you really have to take a holistic approach and a hygiene approach to the data, and understand where the data is contextually relevant to the application. Talk about that. It's kind of nuanced, but break it down. What's your reaction to that? And how do you talk to customers? Okay, look, you want to do AI? Here's the playbook. How do you explain that in a very simple way? Well, you've heard of the AI ladder, making your data ready for AI. This is a really important concept, because you need to be able to have trust in the data that you have, relevancy in the data that you have. And so it is about not just the connectivity to that data, but can you start having curated, enriched data that is really valuable, that's accurate, that you can trust, that you can leverage? And so it becomes not just about the data, but about the governance and the self-service capabilities that you can have in and around that data. And then it is about the machine learning and the deep learning characteristics that you can put on there. But all three of those components are absolutely essential. And what we're seeing is, it's not even about the data that you have within the firewall of your organization. It's about what you're doing to really augment that with external data. And that's another area where having pre-trained, enriched data sets, with what we're doing with the Watson data kits, is extremely important: industry-specific data. Well, you know my pet peeve is always, I love data, I'm a data geek. I love innovation, I love data-driven. But you can't have good data without good human interaction. The human component is critical.
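The AI-ladder point above, that data must be trusted and curated before models are trained on it, can be made concrete with a simple validation gate that runs before any modeling step. This is a hypothetical sketch; the field names, checks, and the 95% threshold are illustrative, not an IBM API.

```python
def validate_record(rec):
    """Return a list of problems with one data record (empty list = clean)."""
    problems = []
    if rec.get("customer_id") in (None, ""):
        problems.append("missing customer_id")
    age = rec.get("age")
    if age is None or not (0 <= age <= 120):
        problems.append("age out of range")
    return problems

def curate(records, min_clean_ratio=0.95):
    """Split records into clean vs. rejected and gate on overall quality.

    Training proceeds only if enough of the dataset passes the checks:
    the "bad data equals bad AI" rule applied before modeling.
    """
    clean = [r for r in records if not validate_record(r)]
    ratio = len(clean) / len(records) if records else 0.0
    return clean, ratio, ratio >= min_clean_ratio

records = [
    {"customer_id": "a1", "age": 34},
    {"customer_id": "a2", "age": 51},
    {"customer_id": "", "age": 29},     # fails: missing id
    {"customer_id": "a4", "age": 200},  # fails: impossible age
]
clean, ratio, ok = curate(records)  # here only half the data is clean, so ok is False
```

Governance tooling adds lineage, access control, and enrichment on top of checks like these, but the gating idea is the same: measure trust in the data before you let a model learn from it.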
And certainly we're seeing trends where startups like Alation, which we've interviewed, are taking a social approach to data, where they're looking at it like, you don't need to be a data geek or a data scientist. The average business person is creating the value, and especially with blockchain, as we were just saying on theCUBE, it's the business model innovations that are the new intellectual property. And the technology can be enabled and managed appropriately. This is where the value is. So what's the human component? Is it that you want to know who's using the data, and why they're using the data? Do I share the data? Can you leverage other people's data? This is kind of a melting pot. What's the human piece of it? It truly is about enabling more people access to what it means to infuse AI in their organization. So when I said it's not about re-platforming but about expanding, you know we started with the data scientist, and we're adding to that the application developer. But the third piece of that is, how do you get the knowledge worker, the subject matter expert, the person who understands the actual machine or equipment that needs to be inspected, how do you get them to start customizing models without having to know anything about the data science element? That's extremely important. Because I can auto-tag and auto-classify stuff and use AI to get them started. But there is that human element of not needing to be a data scientist, but still having input into that AI. And that's a very beautiful thing. You know, it's interesting. In the security industry, you've seen groups, birds of a feather flocking together, where they share hacks. And it's a super important community aspect. Data has that now, and with AI you get the AI ladder, but this points to AI literacy within the organization. So you're seeing people saying, hey, we need AI literacy, not coding per se, but how do we manage data?
But it's also understanding who within your peer group is evolving. So you're seeing now a whole formation of a user base out there, users who want to know who their birds of a feather are, flocking together. This is now a social gamification opportunity, because they're growing together. What are your thoughts on that? There are two things there, I would say. First is, you know, we often go to the technology. And as a product person, I just spoke to you a lot about the technology. But what we find in talking to our clients is that it really is about helping them with the skills, the culture, the process transformation that needs to happen within the organization to break down the boundaries and the silos that exist, to truly get AI in the organization. That's the first thing. The second is, when you think about AI and what it means to actually infuse AI into enterprise organizations, there's an ethics component to this. There are ethics and bias components which you need to detect and mitigate. And those are real problems. And by the way, IBM, especially with the work that we're doing within Watson, with the work that we're doing in research, we're taking this on front and center, and it is extremely important to what we do. And you guys used to talk about that as cognitive, but I think you're so right on. I think this is such a progressive topic. I'd love to do a deeper dive on it, but really, you nailed it. Data has to have a consensus algorithm built into it, meaning you need to have, this is why I brought up the social dynamic, because I'm seeing people within organizations address regulatory issues, legal issues, ethical and societal issues all together, and it requires a group, not just one person, to synthesize it. And that means diversity, diverse groups from different places and experiences, whether it's an expert here, a user there, all coming together. This is not really talked about much. How are you guys approaching it? I think it will be more.
It will, you think so? Absolutely, it will be more. What do you see from customers? You've done a lot of client meetings. Are they talking about this? Or are they still more in the, how do I stand up AI and get AI literacy, stage? They are starting to talk about it, because look, imagine if you train your model on bad data. You then actually have bias in your model, and that means the accuracy of that model is not where you need it to be if you're going to run it in an enterprise organization. So being able to do things like detect it and proactively mitigate it are at the forefront. And by the way, this is where our teams are really focusing on what we can do to further the AI practice in the enterprise. And it is where we really believe that the ethics part of this is so important for that enterprise, or smarter business, component. Iterating through the quality of the data is really good. Okay, so now, I was talking to Rob Thomas about data containers. We were kind of nerding out on Kubernetes and all that good stuff. You can almost imagine Kubernetes and containers making data really easy to move around and manage effectively with software. But I mentioned consensus in the sense of understanding the quality of the data and understanding the impact of the data. When you say consensus, the first thing that jumps into my mind is blockchain, cryptocurrency. So is there a tokenization economics model in data somewhere? Because the best stuff going on in blockchain and cryptocurrency, technically and in terms of impact, is the changing of the economics, the changing of the technical architectures. So you almost can say, hmm. You can actually see over time that there is a business model that puts more value not just on the data and the data assets themselves, but on the models and the insights that are actually created from the AI assets themselves.
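Gunnar's earlier point, that a model trained on skewed data carries bias into its accuracy, can be checked mechanically: score the model separately per subgroup and flag large gaps. This is a simplified sketch, not IBM's bias-detection tooling; the group names, toy model, and the 10% gap threshold are illustrative.

```python
def accuracy_by_group(examples, predict):
    """examples: list of (features, group, label) tuples.

    Returns {group: accuracy}, so skew against any one group is visible
    instead of being averaged away in a single overall accuracy number.
    """
    hits, totals = {}, {}
    for features, group, label in examples:
        totals[group] = totals.get(group, 0) + 1
        if predict(features) == label:
            hits[group] = hits.get(group, 0) + 1
    return {g: hits.get(g, 0) / totals[g] for g in totals}

def flag_bias(acc_by_group, max_gap=0.1):
    """Flag the model if per-group accuracy differs by more than max_gap."""
    vals = list(acc_by_group.values())
    return (max(vals) - min(vals)) > max_gap

# A deliberately skewed "model" that always predicts 1: it looks perfect
# on group_a but only gets half of group_b right.
examples = [
    ([0.9], "group_a", 1), ([0.8], "group_a", 1),
    ([0.1], "group_b", 0), ([0.7], "group_b", 1),
]
acc = accuracy_by_group(examples, lambda features: 1)
biased = flag_bias(acc)
```

Detecting the gap is the easy half; mitigation (rebalancing training data, reweighting, retraining) is where the harder enterprise work described in the interview lives.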
I do believe that that is a transformation, just like what we're seeing in blockchain and the kind of cryptocurrency that exists within there, and where the value is. We will see that same shift within data and AI. Well, you know, we're really interested in exploring this, and if you guys have any input on it, we'd love to get more access to thought leaders around the relationship people and things have to data. Obviously, internet of things is one piece, but the human relationship to data, you're seeing it play out in real time. Uber had its first death this week. That was tragic, the first self-driving car fatality. You're seeing Facebook really get handed huge negative press on the fact that they mismanaged data that was optimized for advertising, not user experience. So you start to see a shift, an evolution, where people are starting to recognize the role of the human in their data and in other people's data. This is a big topic. It's a huge topic, and I think we'll see a lot more of it in the weeks and months and years ahead. I mean, I think it becomes a really important point as to how we start to really innovate in and around not just the data but the AI that we apply to it, and then the implications of it and what it means. If the data's not right, if the algorithms aren't right, if the bias is there, there are big implications for society and for the environment as a whole. Well, I really appreciate you taking the time to speak with us, and I know you're busy. My final question: share some color commentary on IBM Think this week, the event, your reaction to it, obviously it's massive, and also the customer conversations you've had. You told me that you've been in client briefings and meetings. What are they talking about? What are they asking for? What are some of the low-hanging-fruit use cases? Where's the starting point? Where are people jumping in? Can you share any data you have on that?
Oh, I can share. That's a fully loaded question, like ten questions all in one. But the Think conference has been great, in terms of, when you think about the problems that we're trying to solve with AI, it's not AI alone, right? It actually is integrated in with things like data and the systems, with how we integrate that in a hybrid way across what we're doing on-premises, in private cloud, and in public cloud. So actually having a forum where we're talking about all of that together in a unified manner has been great. That's feedback I've heard from many customers and many analysts, and in general, from an IBM perspective, I believe it has been extremely valuable. The types of questions I'm hearing and the types of conversations we're having are ones where clients want to be able to innovate and really do horizon-three type things: what are the things that they should be doing in horizon one, horizon two, and horizon three when it comes to AI and how they treat their data? And this is really important because- What's horizon one, two, and three? So, horizon one, those are things that you should be doing immediately to get immediate value in your business. Horizon two is kind of midterm, like 18 to 24 months out, and 24-plus months out is horizon three. And so when you think about an AI journey, what does your AI journey really look like in terms of what you should be doing in the immediate term? Small, quick wins. What are the kinds of projects that will pan out in a year? And what are the two-to-three-year projects we should be doing? Those are the most frequent conversations I've been having with a lot of our clients: what is that AI journey we should be thinking about? What are the projects right now? How do we work with you on the projects right now, in H1 and H2?
What are the things we can start incubating that are longer term? And these are extremely transformational in nature. It's, what do we do to really automate self-driving, not just cars, but what we do for trains, and what we do to really revolutionize certain industries and professions. And how does your product roadmap map to your horizons? Can you share a little about the priorities on the roadmap? I know you don't want to share a lot of competitive information, but can you give an anecdote, or at least a trajectory of what the priorities are and some guiding principles? I hinted at some of it, but I talked only about the studio during this discussion, and the studio is just one prong of a three-pronged approach that we have in Watson. The studio really is about laying a foundation for how we get AI into our enterprises for the builders. It's a place where builders go to create, build, and deploy machine learning and deep learning models, and to be able to do so in a de-skilled way. On top of that, as you know, we've done thousands of engagements, and we know the most comprehensive ways that clients are trying to use Watson and AI in their organizations. So, taking our learnings from that, we're starting to harden those into applications so that clients can easily infuse them into their businesses. So we have capabilities like Watson Assistant, which was announced this week at the conference, that really help clients with pre-existing skills, like a customer care solution, and then how you can extend it to other industries like automotive or hospitality or retail. So we're working not just within Watson but within broader IBM to bring solutions like that. We've also talked about compliance. Every organization has a regulatory or compliance or legal department that deals with SOWs, legal documents, technical documents.
How do you then start making sure that you're adhering to the types of regulations or legal requirements that you have on those documents? Compare and Comply actually uses a lot of the Watson technologies to be able to do that. And scaling this out, in terms of how clients are really using AI in their business, is the other point where Watson will absolutely focus going forward. That's awesome. Ritika, thank you for coming on theCUBE and sharing the awesome work, and again, the good work across IBM and also outside in the industry. The more data, the better the potential. Absolutely. Well, thanks for sharing the data. We're putting the data out there for you. theCUBE is one big data machine. We're data-driven. We love doing these interviews, and of course, getting the experts and the product folks on theCUBE is certainly important to us. I'm John Furrier, with more coverage from IBM Think after this short break.