Live from San Francisco, it's theCUBE. Covering IBM Think 2019, brought to you by IBM.

Welcome back everyone, this is theCUBE's live coverage in San Francisco at Moscone Center for IBM Think 2019. I'm John Furrier, with Dave Vellante. Dave, it's been AI, it's been cloud, it's been data, changing the game. We've got two great guests here: Murali Nemani, CMO of ScienceLogic (your CEO has been on theCUBE before), and Joe Damassa, who's the VP of strategy and offerings for hybrid cloud services at IBM. Thanks for joining us, appreciate it. Welcome to theCUBE.

Thank you guys, thank you.

So, day four of four days of coverage. You can see the messaging settling, the feedback settling. AI clearly front and center, the role of data in that, and then cloud scale across multiple capabilities. Obviously on-premise, multi-cloud already exists, and software's changing all of this. So AI impacting operations is key. How do you guys work together? What's the relationship between ScienceLogic and IBM? Can you take a minute to explain that?

I think, clearly, given the hybrid nature of what we're dealing with and the complexity of it, it's all going to be about the data. Software is great, but it's about software that collects the data, analyzes the data, and gives you the insights so you can actually automate and create value for our clients. So it's really this marriage: it's a technology, but it's a technology that allows us to get access to the data so we can make change. It's all about the data.

A lot of what IBM has been doing is building the analytics engines, and Watson, and so forth. And our partnership has really been about building the data, and the data lake, and the real-time aspects of collecting and preparing that data so that you can get really interesting outcomes out of it.
So, when you think about predictive models, when you think about the way that data can be applied to things like anomaly detection that ultimately accelerate and automate operations, that's where the relationship really starts taking hold.

So you guys specialize in AIOps and IT operations as it transforms with scale and data, which means you need machine learning, you need automation, which is the DevOps ethos. Operations is: don't go down, right? Stay up and running, high availability. So on the cloud services side, talk about where the rubber's meeting the road from a customer standpoint, because the cultural shift from IT service management and IT operations has been this manual process, some software here and there. Older processes change a little bit, but this is a new game. Talk about how you guys are engaging the customers.

Well, part of it, it's interesting: when you step back and you stop breathing your own exhaust in terms of pushing what you're trying to sell, and you listen to your customers, what we're hearing is that they all understand the destination. They understand moving to the cloud, they understand the value it's going to bring. They're having a hard time getting started. It's, how do I start the journey? I've got all of this estate and these traditional IT operations capabilities, it's got to move, how do I modernize it? How do I make it portable across different environments? And so when you step back, we basically said, hey, you need the portability of the platform. So what we're doing with Red Hat, what we're doing with IBM Cloud Private, creates that portable, containerized ability to take our existing workloads and start moving them, right? And then the other thing the clients need are the services. Who's going to advise me on which workloads should move and which shouldn't? Most of these efforts fail because you moved the wrong things.
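To make the anomaly-detection idea mentioned above concrete, here's a minimal sketch, not ScienceLogic's or IBM's actual method, just an illustration, of flagging a metric sample that drifts several standard deviations away from its recent history:

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=30, threshold=3.0):
    """Return a checker that flags a metric sample as anomalous when it
    deviates more than `threshold` standard deviations from the recent mean."""
    history = deque(maxlen=window)

    def check(value):
        # Need at least two prior samples before we can judge anything.
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return check

# Steady CPU readings, then a sudden spike.
detect = make_anomaly_detector(window=10, threshold=3.0)
readings = [50, 51, 49, 50, 52, 48, 50, 51, 49, 95]
flags = [detect(r) for r in readings]  # only the final spike is flagged
```

Real AIOps platforms layer far more on top of this (seasonality, multivariate models, learned baselines), but the core idea is the same: learn what "normal" looks like from the stream itself, then automate the response when a sample falls outside it.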
How do you manage that? How do you build it? And then when you're done and I've got this hybrid, complex environment, how do I actually get insights into it and the data I need to operationalize it? How do I do IT ops when I don't own everything within the four walls of my data center?

Now, are you guys going to market together? Do you sell each other's products? What's the relationship between ScienceLogic and IBM? Is it a partnership? Is it joint development? Can you explain a little more about how you work together?

Well, we're one of the largest services providers in the industry. So as we bring our products, our technologies, and our capabilities to market, we bring ScienceLogic into those deals. We use ScienceLogic in our services so that we can actually deliver the value to our clients. So it is sort of a co-development, joint partnership, plus also a go-to-market.

So you use that as a tool to do discovery and identify the data, and the data we're talking about is everything I need to know about my IT operations, my applications, the dependencies. Maybe you could describe a little bit more about that.

Think about one of the things Joe was mentioning: today the workloads are shifting. You're going from, let's say, performance monitoring and management platforms that you need to evolve to incorporate new technologies like containers and microservices and serverless architectures. That's one area: how do the tool sets fundamentally evolve to support the latest technologies being deployed? Second is, how do you consolidate the set of tools you're managing, because you're adopting cloud-based technologies or new capabilities, so you get consolidation there. And the third is these workloads that are now migrating out of your private cloud or private data center into public clouds, right?
And on that workload migration, I think it was Forrester saying about 20% of total workloads are currently in some sort of public cloud environment. So there's a lot of work to do in terms of getting to that tipping point where workloads are truly in a multi-cloud, hybrid cloud. So as IBM accelerates that transition, with their core competencies in helping these large enterprises make it, you need a common, manageable environment, that common visibility across those workloads. That's at the heart of what we're pulling together. And the data sets happen to be coming either from the application layer, from the log management systems, or from a service desk in terms of CMDB-based data sets. And we're building a data lake that ultimately allows you to see across these heterogeneous systems.

It could be service requests. That really touches the business process. So you can now start to map the value and how changes can affect that value, right?

Yeah, exactly. I mean, what's interesting about ScienceLogic as a partner is the breadth of their platform in terms of the different things it can monitor, the depth, the ability to go into containers and understand what the applications are doing, and the scale in terms of the types of devices. So when you think about the types of devices we're going to have to manage, everything from sensors in an Internet of Things environment, to routers, to sophisticated servers and applications that could be running anywhere, you need the flexibility of the platform they have in order to deliver that.

And I think that's a key point. I mean, you talk about containers and Kubernetes. We heard your CEO, Ginni Rometty, mention Kubernetes on stage, twice. Six months ago, who knew what Kubernetes was? Now it's mainstream.
So this is showing what's going on in the industry, which is the decomposition of on-premise. With Cloud Private, you're giving them the ability to use containers to manage their existing stuff, do that work, and then have the extension to public cloud, whatever public cloud. This gives them more modern capabilities. So the question is, this changes the game, we know that. But how does it change AIOps, and what does it mean? So I guess the first question is, what is AIOps? And what does this new architecture, on-premise with Cloud Private plus full public cloud, look like in AIOps 2.0?

So for me, it's a very simple definition. It's really using algorithmic mechanisms toward automating operations. It's a very simplistic way of looking at it, but ultimately the end game is to automate operations, because you need to move at the pace of business and at machine speed. And if you want to move at machine speed, you can't throw enough humans at this problem, right? Because of the pace of change, the ephemerality of the workloads spinning up and spinning down. We have a bank as a customer that spins up containers every 90 seconds and then tears them down. Humans just can't keep up with that real-time state of change, or understand the topological relationships between the application layer and the underlying infrastructure so that you can truly understand service health. Because when an application degrades in performance, the biggest issue is a war-room scenario where everyone's saying, it's not me, it's not me, because everyone's green on their own dashboards. So how do you get that connective tissue running all the way through? That's the problem.

Well, it's not only the change, it's also the velocity of data coming off that exhaust. The changes in services are throwing off tons of data, and you need machines to help. I mean, that's kind of the key.

Exactly, yeah.
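That "connective tissue" problem, rolling low-level component health up to a service-level view so the war room can see what's actually degraded, can be sketched roughly like this (the topology, names, and health states are all invented for illustration):

```python
# Hypothetical topology: each service lists the components it depends on.
topology = {
    "checkout-app": ["payment-svc", "inventory-svc"],
    "payment-svc": ["db-primary"],
    "inventory-svc": ["db-replica", "cache"],
}

# Health of individual components as reported by their own monitors.
# Every team's own dashboard is "green" except one low-level replica.
component_health = {
    "payment-svc": "healthy",
    "inventory-svc": "healthy",
    "db-primary": "healthy",
    "db-replica": "degraded",
    "cache": "healthy",
}

def service_health(service):
    """Worst health across a service and everything it transitively depends on."""
    severity = {"healthy": 0, "degraded": 1, "down": 2}
    worst = component_health.get(service, "healthy")
    for dep in topology.get(service, []):
        child = service_health(dep)
        if severity[child] > severity[worst]:
            worst = child
    return worst

# The replica issue surfaces at the top-level service, ending the
# "it's not me, it's not me" round-robin.
print(service_health("checkout-app"))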
I would add to that. I think the definition of AIOps is evolving. What we're coming from is more fit-for-purpose analytics, right? I have this problem, I'm going to collect this data, I'm going to put these automations in place to address it. We need to take a data-model approach that says, how do I ingest all of this data? Even at the start, when you're looking at which workloads to move and you're doing discovery and assessment of workloads, that data should go into a data lake that can be used later, when you're actually doing the operations and management of those workloads. So what data do we collect at every stage of the migration and the transformation, including the operational data? And then how do we perform analytics on it and get the true insights? I think we're just scratching the surface of applying true AI, because it's all been very narrow-cast, narrow focus. I have this problem, I collect this data, I can automate this server. It needs to move much beyond that.

And services are turning on and off so fast there's a non-deterministic angle here. And you've got state, non-determinism. I mean, those are hard technical computer science problems to solve. You can't just put a process around it and say, oh, yeah, done.

Well, that's back to the scalability of the platform, the ability in real time to be monitoring and looking at that data and then doing something.

Now, humans aren't completely removed from the equation, right? So I'm interested in how the humans are digesting and visualizing all this data, especially at this speed. Is there a visualization component? How is that all evolving?

Yeah, I think that's part of the biggest challenge. Humans, A, have to be the ones that analyze what's coming and say what it means when you haven't already algorithmically built it into your automation technology, right? And then they also have to be the ones that train the system.
So one of the things we're experimenting with, I wouldn't say struggling with, is how best to visualize this, right? We do some things now. We've got a hybrid cloud management platform, and we're teaming with the product guys. It's the ability to have, in fact, four consoles. One for consumption: how do I consume services from Amazon, IBM Cloud, on-premise? One for deployment: in a DevOps model, how do I fulfill that very quickly? An operational console, right? And then cost and asset management. So you can, at a glance, say, oh, I've got a big Hadoop cluster that's been spun up, I'm paying $100,000 for it, and it's at zero utilization. So how do you visualize that? So you can say, oh, I need to put a rule in that if somebody spins something up on IBM Cloud and they're not using it, I either shut it down, or I send messages out, or I put governance on top of it. So it's putting business rules and logic in, in addition to visualization, to help automate.

And Ginni talked about this in her keynote, efficiency versus innovation, around how to manage that. And this is where the scale comes in, because if you know that something's working and you want to double down on it, you can automate that away and move the humans to something else. This is where AIOps, I think, is going to change the category. I mean, there's the Gartner Magic Quadrant for IT operations; AI potentially decimates that.

Yeah, there's this argument that you had these nice quadrants, or let's say nicely defined market segments. You had NPMD, ITSM, ITOM, you have APM. What's happening in this world of AIOps is that none of those demarcations really fit anymore, because you're seeing the convergence of all of it. And then the other transition that's happening is this movement from classic ops, to dev and ops, to DevOps, and now to DevSecOps.
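The governance rule described earlier, flag a resource that is expensive but sitting idle so it can be shut down or escalated, might look something like this in code; the resource fields, names, and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CloudResource:
    name: str
    monthly_cost: float       # dollars per month
    cpu_utilization: float    # average over the billing window, 0.0 to 1.0

def apply_idle_policy(resources, max_cost=1000.0, min_utilization=0.05):
    """Return the names of resources that cost more than `max_cost` per month
    while running below `min_utilization`: candidates for shutdown or an
    owner notification."""
    return [r.name for r in resources
            if r.monthly_cost > max_cost and r.cpu_utilization < min_utilization]

fleet = [
    CloudResource("hadoop-cluster-07", monthly_cost=100_000, cpu_utilization=0.0),
    CloudResource("web-frontend", monthly_cost=2_500, cpu_utilization=0.61),
]
print(apply_idle_policy(fleet))  # flags only the idle Hadoop cluster
```

In practice this kind of rule would run continuously against live billing and utilization feeds rather than a static list, but the shape, a business rule evaluated over collected operational data, is the same.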
You're trying to get worlds to converge. And so when we talk about the data and being able to build data models, those data models need to converge across those domains. So a lot of the work we do is collect data sets from log management, from service desk and service management, from APM, et cetera, and then build that data model in real time.

So you're kind of building an über-CMDB, right? Do most of your clients have a single CMDB? Probably not, right? So this is sort of a new guidepost, isn't it?

Yeah, part of it is there are these data puddles, if you will. Data exists in a lot of different places. How do you bring them together so you can federate different data sources, different catalogs, into a common platform? Because if a user is trying to decide, okay, should I spin this up in this environment or that one, you want the full catalog of capabilities: what's on-premise in your CMDB system with the legacy environment, plus the catalogs that may exist on Amazon, Azure, et cetera, and you want data across all of that.

It seems that everything's a data problem now, and the data is being embedded into the applications, which are then workloads defining the infrastructure architecture, whether it's a single cloud, multi-cloud, whatever the resource is. So we had JPMorgan Chase's top data geek on, and she was talking about having models for the models, and IBM's been talking about this concept of reasoning around the data. This is why I always liked the cognition angle of cognitive, because that's not just math. Math is math; you do math on supervised machine learning and known processes to be efficient, but the cognition and the reasoning really help get at that data set, right? So can you guys react to that? Is everything a data problem? Is that how you should look at it, and how does reasoning fit into all this?

Well, I mean, that's back to your point about what the human's role is in this, right?
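The "data puddles" federation described above, pulling records about the same configuration item from a CMDB, an APM tool, and a service desk into one view, can be sketched like this (the source names, keys, and fields are all hypothetical):

```python
from collections import defaultdict

# Illustrative extracts from three separate "data puddles",
# all keyed on a shared configuration-item identifier.
cmdb = [{"ci": "app-42", "owner": "payments-team"}]
apm = [{"ci": "app-42", "p95_latency_ms": 830}]
service_desk = [{"ci": "app-42", "open_incidents": 3}]

def federate(*sources):
    """Merge records from heterogeneous sources into one view per CI."""
    merged = defaultdict(dict)
    for source in sources:
        for record in source:
            merged[record["ci"]].update(record)
    return dict(merged)

view = federate(cmdb, apm, service_desk)
# view["app-42"] now combines ownership, performance, and incident data
```

The hard parts in the real world are exactly what the sketch elides: the sources rarely agree on an identifier, so entity resolution and reconciliation do most of the work before a merge like this is possible.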
So in our services business, we're moving from primarily labor-based, with tools to make people more efficient, to the technology doing the work. But the humans then have to say, when the technology gets stumped, what does that mean? How do I train it better? How do I take my domain expertise? How do I do the deep analytics to tell me how to solve those problems in the future? So the role changes. I think Ginni talks about it in terms of new-collar workers. These are data scientists, people who understand the dynamics of working with the different data. The data models need to get built, and they are guiding and affecting the automation, right?

I thought your CDO was on theCUBE talking about taking away the heavy lifting, and Rob Thomas, the GM of the Data and AI team, was also on. I think he really nailed it. If you guys can take away the heavy lifting of the setup work, then the data scientists are actually there to do the reasoning, or to help assist in managing what's going on and putting guardrails around whatever the business policy is.

Today, I think it's a Gartner stat, about 79% of data scientists, and these are PhDs that are highly valuable, spend their time collecting, preparing, and cleansing those data models, right? So you're not really applying that PhD-level knowledge toward solving a problem; you're just trying to make sense of the data. So one, do you have a holistic enough view? Two, is there a way to automate those things, so you can then apply the human aspects toward the things Joe was talking about? That's a big part of what we're coming together in the market for.

Well, guys, thanks for the insight. Thanks for coming on. Great topic.
I think we could talk for an hour or two on a panel about the cultural shift, because you mentioned the "sec" in there, with ops and devs; it's a melting pot, and it's a cultural shift. I think that topic's worth following up on. But I'd like you guys to get a quick plug in for what you work on. I know you've got an event coming up. What's your pitch? Give a quick plug.

So we've got our Symposium, which is our big user conference. It's in April, the 23rd to the 25th, in downtown Washington, DC, Cherry Blossom Festival season, at the Ritz-Carlton. We'll have theCUBE there as well, so we're looking forward to it. A lot of great energy to be carried over.

We love going to the district. You guys have a great vision. Joe, give the plug for some of the services you're doing. Just give an update on what you guys are up to.

Yeah, I mean, we're investing in the technology. We're fully on board with the containerization we talked about. We're putting together a services portfolio. I think Ginni mentioned that we're taking a whole bunch of capability across IBM, Global Technology Services, Global Business Services, and really coalescing it into about 23 offerings to help customers advise on cloud, move to cloud, build for cloud, and manage on cloud. And then you've seen the announcements here about what we're doing around the multi-cloud management system, those four consoles I talked about. How do we help put a gearbox in place to manage the complexity of the hybrid environment our customers are dealing with?

It seems IBM's got clear visibility into what's happening with cloud. Cloud Private, I think, is a really big announcement. It's not talked about enough at the show, and I always mention it as the key linchpin.
But you see cloud, multiple clouds, hybrid cloud, and you've got AI, and you've got partnerships and ecosystem. Now it's execution time, right?

Exactly, and frankly, that's the challenge, right? You used to be able to manage it all within your four walls. Your SAP instances were in the data center, your servers were in the data center, your middleware was in the data center. Now I've got my applications running on Salesforce.com as software as a service, I've got three or four different infrastructure-as-a-service providers, and I still have the legacy I've got to deal with. I mean, the integration problems are just tremendous.

Joe Damassa, VP of strategy at IBM Hybrid Cloud, and Murali Nemani, CMO of ScienceLogic, bringing AI operations and hybrid cloud to theCUBE. Bringing you all the day-four coverage, I'm John Furrier, with Dave Vellante. It's all about cloud, AI, and developers, all happening here in San Francisco this week. Stay with us, more after this short break.