Live from San Francisco, it's theCUBE. Covering IBM Think 2019, brought to you by IBM. Hello everyone, welcome back. Live here in San Francisco, it's theCUBE's coverage of IBM Think 2019. I'm John Furrier with Stu Miniman. Stu, we're on our fourth of four days, the sun's shining. They shut down Howard Street here for IBM. Big event for IBM in San Francisco, not Las Vegas. A lot of great cloud action, a lot of great AI and data developers. Great story, good to see you again. Our next two guests: Julie Lockner, Director of Offering Management Portfolio Operations at IBM Data Plus AI, great to see you. Thank you, it's great to see you too. Thanks for coming on. And Jay Lindbergh, Director of Offering Management, IBM Data Plus AI, thanks for coming on. Thank you guys, great to see you. So we've chatted many times at events about the role of data, so we're religious about data. Data flows through our blood, but IBM has put it all together now. All the reorgs are over; the table is set for IBM. The data path is clear, it's part of applications. Feeding the apps, AI is the key workload inside the application. This is now a fully set-up group. Give us the update, what's the focus? Yeah, it's really exciting, because if you think about it, before we were called IBM Analytics, and that really is only a part of what we do. Now that we're Data Plus AI, that means that not only are we responsible for delivering data assets and the technology that supports those data assets to our customers, but for infusing AI not only into the technologies that we have, but also helping them build applications so they can infuse AI into their business processes. It's pretty broad. I mean, data is very much a broad swath of things, from analytics to wrangling data, setting things up, cataloging them. Take me through how you guys set this up, how you present it to the marketplace, how clients engage with it, because it's pretty broad. Sure, sure. 
But it needs to be specific, take us through the methodology. So you've probably heard a lot of people today talk about the ladder to AI. This is IBM's view of how we explain our clients' journey towards AI. It really starts at the bottom rung of the ladder where we've got the, sorry, the collection of information: collect your data. Once you've collected your data, you move up to the next rung, which is organize. And this is really where all the governance stuff comes in. This is how we can provide a view across that data, understand that data, provide trust in that data, and then serve that up to the consumers of that information so they can actually use it in AI. That's where all the data science capabilities come in, allowing people to actually be able to consume that information. So the bottom set is really all the hard and heavy lifting that data scientists actually don't want to do. Writing algorithms, the collecting, the ingesting of data from any source. That's the bottom. And then talk about that next layer up from the collection. So collect is the physical assets, or the collection of the data that you're going to be using for AI. If you don't get that foundation right, nothing else really makes sense; you have to have the data first. The piece in the middle that Jay was referring to is called organize. Our whole divisions are actually organized around these rungs of the ladder to AI: collect, organize, analyze, infuse. On the organize side, as Jay was mentioning, it's all about inventorying the data assets, knowing what data you have, then providing data quality rules and governance and compliance-type offerings that allow organizations to not just know your data and trust your data, but then make it available so you can use your data. And the users are those data scientists, the analytics teams, and the operational organizations that need to be able to build their solutions on top of trusted data. So where does the catalog fit in? 
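The collect, organize, analyze, infuse progression described above can be sketched as a simple pipeline. The stage names come from the conversation; the functions, sample records, and threshold below are purely illustrative, not any IBM API.

```python
# Illustrative sketch of the "ladder to AI" rungs: collect, organize,
# analyze, infuse. Stage names come from the discussion; everything
# else here is hypothetical.

def collect(sources):
    """Gather raw records from every source system into one list."""
    return [record for source in sources for record in source]

def organize(records):
    """Catalog and govern: keep only records passing a quality rule."""
    return [r for r in records if r.get("value") is not None]

def analyze(records):
    """A stand-in for data science: compute a simple aggregate."""
    values = [r["value"] for r in records]
    return sum(values) / len(values)

def infuse(score, threshold=50):
    """Embed the analytic result in a business decision."""
    return "alert" if score > threshold else "ok"

sources = [
    [{"id": 1, "value": 40}, {"id": 2, "value": None}],  # on-prem system
    [{"id": 3, "value": 70}],                            # cloud data lake
]
decision = infuse(analyze(organize(collect(sources))))
print(decision)  # average of 40 and 70 is 55 -> "alert"
```

The point of the ordering is the one made in the interview: each rung consumes the output of the rung below it, so skipping collect or organize leaves analyze and infuse with nothing trustworthy to work on.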
Which level does that come into? Yeah, so think of the data catalog as the DNS for data. It's the way in which you can provide a full view of all of your information, whether it's structured information, unstructured information, data you've got on-prem in your data lake, or data you've got in a cloud somewhere. That's in the organize layer, right? That's all in the organize layer. So if you can collect that information, you can then provide capabilities that allow you to understand the quality of that data and know where that data's come from. And then finally, if you serve that up inside a compelling, business-friendly experience so that a data scientist can go to one place, quickly make a decision on whether that's the right data for them, and go and be productive by building a data science model, then we're really able to move the needle on making those data science organizations efficient, allowing them to build better models to transform their business. Yeah, and a big part of what makes Amazon successful is that they know where all their products are, from the vendor to when it shows up on the doorstep. What the catalog provides is really a similar capability, I would call it inventory management of your data assets, where we know where the data came from, its source in that collect layer, who's transformed it, who's accessed it, whether they're even allowed to see it. So data privacy policies are part of that, and then being able to serve up that data to those users. Being able to see that whole end-to-end lineage is a key critical point of the ladder to AI, especially when you start to think about things like bias detection, which is a big part of the analyze layer. You know, one of the things we've been digging into on theCUBE is, you know, is data the next flywheel of innovation? 
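The "DNS for data" and inventory-management analogies can be sketched as a tiny in-memory catalog that resolves a dataset name to its physical location and appends lineage events. The class, field names, and sample entries are hypothetical, chosen only to illustrate the concept.

```python
# A toy data catalog: resolves dataset names to locations (the "DNS
# for data" analogy) and tracks lineage events (who sourced,
# transformed, or accessed each asset). All names are hypothetical.

class Catalog:
    def __init__(self):
        self.assets = {}

    def register(self, name, location, sensitive=False):
        """Inventory a data asset wherever it physically lives."""
        self.assets[name] = {
            "location": location,
            "sensitive": sensitive,
            "lineage": [("registered", location)],
        }

    def resolve(self, name):
        """Like a DNS lookup: logical name in, physical location out."""
        return self.assets[name]["location"]

    def record(self, name, event, actor):
        """Append a lineage event so end-to-end history is auditable."""
        self.assets[name]["lineage"].append((event, actor))

catalog = Catalog()
catalog.register("patients", "s3://lake/patients.parquet", sensitive=True)
catalog.record("patients", "transformed", "etl-job-7")
catalog.record("patients", "accessed", "data-scientist-a")
print(catalog.resolve("patients"))   # s3://lake/patients.parquet
print(catalog.assets["patients"]["lineage"])  # full audit trail
```

The lineage list is what makes the bias-detection point possible: when a model's output looks skewed, you can walk the trail back to the exact source and transformations that fed it.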
You talk about, you know, it used to be, I just had my information; many years ago we started talking about, okay, I need to be able to access all that other information. We hear things like, you know, 80% of the data out there isn't really searchable today. So how do you see data, data gravity, all those pieces as the next flywheel of innovation? Yeah, I think it's key, right? I mean, we've talked a lot about how, you know, you can't do AI without information architecture, right? And it's absolutely true. And getting that view of that data in a single location, so it is like the DNS of the internet, right? So you know exactly where to search, you can get hold of that data, and then you've got tools that give you self-service access to actually get hold of the data without any need of support from IT to get access to it. It's really key to that. Yeah, but to the point you were just asking about data gravity, right? I mean, being able to do this where the data resides. So for example, we have a lot of customers that have been through mergers and acquisitions. Some teams have a lot of data assets that are on-premises, others have large data lakes in AWS or Azure. How do you inventory those assets and really have a view of what you have available across that landscape? Part of what we've been focusing on this year is making our technology work across all of those clouds, right? And having a single view of your assets while knowing where each one resides. Yeah, so Julie, this environment is a bit more complicated than the old data warehousing days, or even what we were looking at with big data and Hadoop and all those pieces. Isn't that the truth? Help explain why we're actually going to be able to get the information, leverage it, and drive new business value out of data today when we've struggled so many times in the past. Well, I think the biggest thing that's changed is the adoption of DevOps, right? 
And when I say adoption of DevOps, I mean things like containerization, Docker containers, Kubernetes: the ability to provision data assets very quickly, no matter where they are, and build these very quick, value-producing applications based on AI, on artificial intelligence APIs, is what's allowing us to take advantage of this multi-cloud landscape. If you didn't have that DevOps foundation, you'd still be building ETL jobs and data warehouses, and that was 20 years ago, right? Today, it's much more about these microservices-based architectures building up these data services. Well, that's the key point, and the infuse part of the stack, I think, or ladder. Ladder. The ladder to success is key, because you're seeing that applications with data native to the app have to have certain characteristics, whether it's a real-time health care app or a retail app. We had the retail folks on earlier, and it's like, oh my God, this now has to be addressable very fast. The old fenced-off data warehouse, even that data, pull it over, you need sub-second latency, or a millisecond. So this is now a requirement. That's right. So how are people getting there? What are some use cases? Sure, I'll start with health care because you brought that up. I mean, one of the big use cases for technology that we provide is really around taking information that might be real-time or batch data and providing the ability to analyze that data very quickly, in real time, to the point where you can predict when someone might potentially have a cardiac arrest. Right, in yesterday's keynote that Rob Thomas presented, a demonstration showed the ability to take data from a wearable device, combine it with data that's sitting in an Amazon MySQL database, be able to predict who is the most at risk of having a potential cardiac arrest, and then present that to a call center of cardiologists. 
So this company that we work with, iCure, really took that entire stack, collect, organize, analyze, infuse, and built an application in a matter of six weeks. Now, the most compelling part is we were able to build the solution, inventory their data assets, tie it to the industry model, the healthcare industry model, and predict when someone might potentially be at risk. You got that demo on you, the little device? Oh, I do, of course I do. Come on, bring it out. I know, I know. So here it is. This is called a Braveheart Life Sensor, and essentially it's a Bluetooth device. If I put it on, it'll start capturing biometric information about my heart, an ECG, and on Valentine's Day, right? My heart to yours. Happy Valentine's Day to my husband, of course. The ability to capture all this data here on the device and stream it to an AI engine that can then immediately classify whether or not someone has an anomaly in their ECG signal: you couldn't do that without having a complete ladder-to-AI capability. So real-time telemetry from the heart. I'd say timing's important if you're about to have a heart attack. Yeah. Pretty important. And that's a great example of, you mentioned the speed, it's all about being able to capture that data in whatever form it's coming in, understand what that data is, know if you can trust that data, and then put it in the hands of the individuals who can do something valuable with the analysis from that data. Yeah, you have to be able to trust it. So you brought up bias in data earlier, so I'm going to bring that up in the context of this now. This is just one example of wearables, Fitbits, all kinds of things happening in healthcare, retail, all kinds of edge, real-time. One issue is bias in data, and the other one's privacy, because now you have a new kind of data source going into the cloud. Exactly. And so this fits into what part of the ladder? The ladder needs a secure piece. 
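The wearable-to-AI flow described above, stream readings in and classify each one immediately, might look roughly like this. A real deployment would use a trained model; the crude threshold rule, function names, and sample numbers below are all made up for illustration.

```python
# Sketch of streaming classification of wearable ECG-style readings.
# A real system would use a trained model; this threshold rule is a
# stand-in, and all names and numbers are illustrative.

def classify_reading(heart_rate_bpm, rr_interval_ms):
    """Flag a single reading as anomalous using a crude rule of thumb."""
    if heart_rate_bpm > 140 or heart_rate_bpm < 40:
        return "anomaly"
    if rr_interval_ms > 1200:   # unusually long gap between beats
        return "anomaly"
    return "normal"

def triage(stream):
    """Classify each streamed reading and collect the ones that should
    be escalated to the call center of cardiologists."""
    return [r for r in stream if classify_reading(*r) == "anomaly"]

# (heart rate in bpm, R-R interval in ms), as they arrive off the device
stream = [(72, 830), (155, 390), (68, 880), (38, 1500)]
alerts = triage(stream)
print(alerts)  # the two out-of-range readings
```

The design point matches the interview: classification happens per reading as data streams in, rather than after a batch load, which is why latency matters so much in this use case.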
Yeah, it does. So that really falls into the organize piece of the ladder, the governance aspects around it. If you're going to make data available for self-service, you've still got to ensure that that data's protected and that you're not going to go and break any kind of regulatory law around that data. So we can actually use technology now to understand what that data is, whether it contains sensitive information like credit card numbers, and expose that information out to those consumers, yet still mask the key elements that should be protected. And that's really important, because data science is a hugely inefficient business, all right? Data scientists are spending too much time looking for information. And worse than that, they often don't have all the information they need, because certain information has to be protected. But what we can do now is expose information that wasn't previously available, but protect just the key parts of it, so we're still ensuring it's safe. That's a really key point. It's the classic iceberg, right? What you see: oh, data science is going to change our game, our business. And then they realize what's underneath the water: all this setup, incompatible data, dirty data, data cleaning, and then all of a sudden it just doesn't work, right? This is the reality. You guys are seeing this, you see that? Yeah, absolutely. I think we're only just at the beginning of the crest of a wave here. I think organizations know they want to get to AI. The ladder to AI really helps explain and helps them understand how they can get there. And we're able then to solve that through our technology and help them get there and drive those efficiencies that they need. Yeah, just to add to that, right? I mean, now that there are more data assets available, you can't manually classify, tag, and inventory all that data and determine whether or not it contains sensitive data. 
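Exposing a record while masking only the protected fields, as described above, can be sketched like this. The field names, the set of sensitive fields, and the last-four-characters masking rule are illustrative assumptions, not a description of any product's behavior.

```python
# Sketch of field-level masking: the record stays usable for data
# science, but protected values are redacted. The field names and the
# masking rule are illustrative assumptions.

SENSITIVE_FIELDS = {"credit_card", "ssn"}

def mask_value(value):
    """Keep the last four characters, hide the rest."""
    s = str(value)
    return "*" * max(len(s) - 4, 0) + s[-4:]

def mask_record(record):
    """Return the record with only the sensitive fields redacted."""
    return {
        k: mask_value(v) if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

row = {"name": "A. Patient", "credit_card": "4111111111111111", "age": 54}
print(mask_record(row))
# {'name': 'A. Patient', 'credit_card': '************1111', 'age': 54}
```

This is the pattern Jay describes: the data scientist can see that the record exists and use its non-sensitive fields, without ever seeing the protected values themselves.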
And that's where infusing machine learning into our products has really allowed our customers to automate the process. I mentioned the only way that we were able to deploy this application in six weeks is because we used a lot of the embedded machine learning to identify the patient data that was considered sensitive and tag it as patient data. And then when the data scientists were actually building the models in that same environment, it was masked. So they knew that they had access to the data, but they weren't allowed to see it. It's perfectly timed, especially with the HIMSS conference this week as well. You were talking about this there. Great use case with healthcare. Yeah, well, I'd love to hear you speak about the ecosystem being built around this. Everything's open APIs, I'm guessing, and what kind of partners are- Yeah, I mean, Jay, talk a little bit about OMRS. Yeah, so one of the key things we're doing is ensuring that we're able to keep this stuff open. We don't want to create a proprietary system. We're already big supporters of open source, as you know, at IBM. One of the things that we're heavily invested in is our open metadata strategy. Open metadata is part of the open-source ODPi Foundation's Project Egeria, which defines a standard for common metadata interchange. And what that means is that any of these metadata systems that adopt this standard can freely share and interchange metadata across that landscape, so that wherever your data is, whichever systems it's stored in, wherever that metadata is harvested, it can be part of that network and share that metadata across those systems. I'd like to get your thoughts on something, Julie. You've been on the analyst side, you know, before IBM. Jay, if you could weigh in on this too, it'd be great. 
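The open-metadata idea, a common format any catalog can emit and ingest so metadata flows across systems, can be sketched as a shared record schema and a round trip through it. This is only an illustration of the interchange concept; the function names and record fields are invented here and are not the actual Egeria API.

```python
# Illustration of metadata interchange: two toy catalogs share assets
# through a common record format. This sketches the idea behind open
# metadata; it is not the Egeria API.

import json

def export_metadata(assets):
    """Serialize catalog entries into a common interchange format."""
    return json.dumps([
        {"name": n, "location": a["location"], "owner": a["owner"]}
        for n, a in assets.items()
    ])

def import_metadata(payload):
    """Rebuild catalog entries from the shared format."""
    return {
        rec["name"]: {"location": rec["location"], "owner": rec["owner"]}
        for rec in json.loads(payload)
    }

catalog_a = {"sales": {"location": "db2://prod/sales", "owner": "finance"}}
payload = export_metadata(catalog_a)   # what one system publishes
catalog_b = import_metadata(payload)   # what another system ingests
print(catalog_b == catalog_a)  # True: metadata survives the round trip
```

Because both sides agree on the record shape rather than on each other's internals, any catalog that speaks the shared format can join the network, which is the openness point being made in the interview.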
Here, we see all the trends, we go to all the events, and one of the things that's popping up that's clear within the IBM ecosystem, because you guys have a lot of business customers, is that a new kind of business app developer is coming in. We've seen data science highlight the citizen data scientist, so if data is code, part of the application, and all the ladder stuff kind of falls into place, that means we're going to see new kinds of applications. So how are you guys looking at this? This is kind of not the cloud-native, hardcore DevOps developer. It's the person that says, hey, I can innovate a business model. I see a business model innovation that's not so much about building technology; it's about using insight and a unique formula or algorithm to tweak something. There's not a lot of programming involved, because with cloud and cloud private, all these backend systems, that's an ecosystem partner opportunity for you guys. But it's not your classic ISV, so there's a new breed of business apps that we see coming. Your thoughts on this? Yeah, it's almost like taking business process optimization as a discipline and turning it into micro applications. You want to be able to leverage data that's available and accessible, be able to insert that particular artificial intelligence or machine learning algorithm to optimize that business process, and then get out of the way. Because if you try to reinvent your entire business process, culture typically gets in the way of some of these things. But it's an application value, because there's value creation here, right? So you were talking about, is this a new kind of genre of developer? Or is this just a? It really is. I mean, if you take the citizen data science example that you mentioned earlier, it's really about lowering the entry point to that technology. How can you allow individuals with lower levels of skills to actually get in and be productive and create something valuable? 
It shouldn't be a practice that's reserved for the hardcore developer anymore. It's about lowering the entry point with a set of tools. One of the things we have in Watson Studio, for example, our data science platform, is just that. It's about providing wizards and walkthroughs to allow people to develop productive models very easily without needing hardcore coding skills. Yeah, but I also think that in order for these value-added applications to be built, the data has to be business-ready, right? That's how you accelerate these application development life cycles. That's how you get the new class of application developers productive: by making sure that they start with a business-ready foundation. So how are you guys going to go after this new market? What's the marketing strategy? Again, this is forward, pioneering kind of stuff happening. What's the strategy? How are you going to enable this? What's the plan? Well, there are two parts to it. One is, as Jay was mentioning with the open metadata repository services, our key strategy is embedding the catalog everywhere and anywhere we can. We believe that having that open metadata exchange allows us to open up access to metadata across these applications. So really, first and foremost is making sure that we can catalog and inventory data assets that might not necessarily be in the IBM cloud or in IBM products. That's really the first step. The second step, I would say, is really taking all of our capabilities, making them microservices-enabled from the ground up, delivering them through Docker containers, and making sure that they can port across whatever cloud deployment model our customers want to execute on, and being able to optimize the runtime engines, whether it's data integration, data movement, or data virtualization, based on the data gravity that you had mentioned. This is a whole new developer program opportunity to bring to the market. 
Absolutely, yeah. I mean, I think there is a huge opportunity from an education perspective to help our customers build these applications, but it starts with understanding the data assets, understanding what they can do with them, and using the self-service-type tools that Jay was referring to. Yeah, and all of that underpinned with trust. If you don't trust your data, the data scientist is not going to know whether or not they're using the right thing. So the ladder is a great way for people to figure out where they are. Like looking in the mirror for the organization. How early is this? What inning are we in? How do you guys see the progression? How far along are we? Obviously you have some data examples. Some people are doing it end to end. What does the maturity look like? What's the uptake? Yeah, go ahead, Jay. So I think we're at the beginning of the crest of a wave, right? As I say, there's been a lot of discussion so far. Even if you compare this year's conference to last year's, a lot of the discussion last year was, what's possible with AI? This year's conference is much more about, what are we doing with AI, right? And I think we're now getting to the point where people can actually start to be productive and really start to change their business through that. Yeah, and just to add to that, I mean, the ladder to AI was introduced last year and it has gained so much adoption in the marketplace and with our customers. They're actually organizing their business that way. So the collect divisions, the database teams, are now expanding to Hadoop and Cloudera and Hortonworks and Mongo. They're organizing their data governance teams around the organize pillar, where they're doing things like data integration and data replication. So I feel like the maturity of this ladder to AI is really enabling our customers to achieve it much faster than they have before. 
I was talking to Dave Vellante about this, and we're seeing that, you know, we've been covering IBM all 10 years, since it's the 10th year of theCUBE, and watching the progression. The past couple of years have been kind of setting the table. Everyone seems to be pumped, and it makes sense. Everything's hanging together. It's now one group: it's not this group does data and that group does something else. It's all data and AI, all analytics, all Watson. It's all smart, and the ladder just allows you to understand where a customer is and then fill in those solutions. And we mentioned the emphasis on open source. You know, it allows our customers to take an inventory of what they have, internal IBM assets and external open source, so that they can actually start to architect their information architecture using the same kind of analogy. An opportunity for developers, too. Great. Julie, thanks for coming on. Jay, appreciate it. Thank you so much for the opportunity. Happy Valentine's Day. Happy Valentine's Day. We're theCUBE. I'm John Furrier, live in San Francisco at the Moscone Center. They shut down the whole street, Howard Street. Huge event, 30,000 people. We'll be back with more day four coverage after this short break.