Hi, I'm Peter Burris, and welcome to another Wikibon Action Item. Once again, we're broadcasting from our beautiful Palo Alto studios, theCUBE Studios, and this week we've got another great group: David Floyer in the studio with me, along with George Gilbert, and on the phone we've got Jim Kobielus and Ralph Finos. Hey guys.

Hi there.

So we're going to talk about something that's going to become a big issue. It's only now starting to emerge, and that is: what will be the roadmap to automation? Automation is going to be absolutely crucial to the success of IT in the future, and to the success of any digital business. At its core, many people have presumed that automation was about reducing labor: introducing software and other technologies that could effectively substitute for administrative, operator, and related labor. And while that is absolutely a feature of what we're talking about, the bigger issue is that we cannot conceive of more complex workloads, capable of providing better customer experiences, superior operations, and all the other things that digital business ultimately wants to achieve, if we don't have a capability for simplifying how those underlying resources get put together, configured, organized, orchestrated, and sustained. So the other part of automation is to allow much more work to be performed on the same resources, much faster. That's the basis for how we think about plasticity: the ability to reconfigure resources very quickly.

Now, the challenge is that this industry, the IT industry, has always used standards as a weapon. We use standards as a basis for creating ecosystems, or scale, or mass. Even for something like mainframes, where there weren't hundreds of millions of potential users, IBM was successful at using standards as a basis for driving their costs down and delivering a superior product.
That's clearly what Microsoft and Intel did many years ago: they achieved that kind of scale by driving more and more volume of the technology, and they won. Along the way, though, each generation has featured a significant amount of competition over how those interfaces came together and how they worked. And this is going to be the mother of all standards-oriented competitions. How does one automation framework fit together with another? One may create value in a way that serves another automation framework, but ultimately, for many companies, this is a way of creating more scale on their platform, more volume on that platform. So this notion of how automation is going to evolve is going to be crucially important. David Floyer, are APIs going to be enough to solve this problem?

No, is the short answer. This is a very complex problem, and I think it's worthwhile spending a minute just on what the component parts are that need to be brought together. We're going to have a multi-cloud environment: multiple private clouds, multiple public clouds, and they've got to work together in some way. And you've got the edge as well. So you've got a huge amount of data across all of these different areas. And automation and orchestration across all of that, as you said, is not just about efficiency; it's about making it work, making it able to function and to be available. So all of the difficult issues of availability, of security, of compliance come down to getting this whole environment to work together, through a set of APIs, yes, but a lot, lot more than that. And in particular, when you think about it, volume of data is critical, and so is who has access to that data.

Now, why is that?

Because if you're dealing with AI, and you're dealing with any form of automation like this, the more data you have, the better your models are.
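David's point that more data yields better models can be illustrated with even the simplest possible "model": estimating a mean from noisy telemetry. This is a hypothetical sketch, not anything from the discussion; all numbers, names, and parameters are invented for illustration.

```python
# Illustrative only: estimation error shrinks as the amount of data grows,
# which is the core reason access to data volume matters for automation models.
import random

def avg_estimation_error(n_samples: int, trials: int = 200) -> float:
    """Average absolute error when estimating a known mean (5.0)
    from n_samples noisy observations, averaged over many trials."""
    rng = random.Random(42)  # fixed seed for reproducibility
    total_error = 0.0
    for _ in range(trials):
        samples = [rng.gauss(5.0, 2.0) for _ in range(n_samples)]
        estimate = sum(samples) / n_samples
        total_error += abs(estimate - 5.0)
    return total_error / trials

# Error falls roughly as 1/sqrt(n): more telemetry, tighter models.
errors = {n: avg_estimation_error(n) for n in (10, 100, 1000)}
```

The same dynamic holds, far more dramatically, for the pattern-recognition models discussed here: whoever accumulates the most operational data trains the best models.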
And if you can increase that amount of data, as Google shows every day, you will maintain that control.

So you said something really important, because the implied assumption, and obviously it's a major feature of what's going on, is that we've been talking about doing more automation for a long time. What's different this time is the availability of AI and machine learning, for example, as a basis for recognizing patterns and taking remedial action, or taking predictive action to avoid the need for remedial action. And it's the availability of that data that's going to improve the quality of those models. Now, George, you've done a lot of work around this and the whole notion of ML for ITOM. If there are two ways that we're looking at it right now, what are the two ways?

So, at the two ends of the spectrum: one is, I want to see end to end what's going on across my private cloud or clouds, as well as across different applications in different public clouds. But that's very difficult. You get end-to-end visibility, but you have to relax a lot of assumptions about what's where. That's called breadth-first. The pro is end-to-end visibility; the con is you don't know how all the pieces fit together quite as well, so you get less fidelity in terms of diagnosing root causes.

So you're trying to optimize at a macro level while recognizing you can't optimize at a micro level.

Right. Now, the other end of the spectrum is depth-first, where you constrain the set of workloads and services that you're building, so that you know what they are and how they fit together. Then the models based on the data you collect there can become so rich that you have very, very high-fidelity root cause determination, which allows you to make very precise recommendations, or even do automated remediation.
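George's depth-first idea, in which a constrained set of services with a fully known topology yields precise root-cause determination, can be sketched roughly as follows. The service names, baselines, and dependency graph here are invented for illustration; a real system would learn these from collected data rather than hard-code them.

```python
# Hypothetical sketch of "depth-first" diagnosis: because the service
# topology is known and constrained, an anomaly can be traced upstream
# to a precise root cause rather than just observed end to end.

# Known dependency graph: service -> services it depends on.
DEPENDS_ON = {
    "web": ["api"],
    "api": ["db", "cache"],
    "db": [],
    "cache": [],
}

def anomalous(latencies_ms: dict, baseline_ms: dict, factor: float = 3.0) -> set:
    """Flag services whose current latency exceeds factor x baseline."""
    return {s for s, v in latencies_ms.items() if v > factor * baseline_ms[s]}

def root_causes(anomalies: set) -> set:
    """A service is a root cause if none of its own dependencies
    are themselves anomalous (i.e., nothing upstream explains it)."""
    return {
        s for s in anomalies
        if not any(dep in anomalies for dep in DEPENDS_ON[s])
    }

baseline = {"web": 50, "api": 30, "db": 10, "cache": 2}
current  = {"web": 400, "api": 350, "db": 90, "cache": 2}
bad = anomalous(current, baseline)   # web, api, and db all look slow...
print(root_causes(bad))              # ...but db is the upstream culprit
```

The precision comes entirely from the constrained, known topology; a breadth-first view across arbitrary clouds cannot assume a graph like `DEPENDS_ON`, which is exactly the fidelity trade-off described above.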
What we haven't figured out how to do yet is marry the depth-first with the breadth-first, so that you have multiple focused depth-first domains. That's very tricky.

Now, if you think about how the industry has evolved, we wrote some stuff about what I call the iron triangle, which is basically a very tight relationship between three parties. First, the specialists in a technology: the people who were responsible for a particular asset, be it storage, or the system, or the network. Second, the vendors who provided a lot of the knowledge about how that asset worked, and therefore made that specialist more or less successful and competent. And third, the automation technology that the vendor ultimately provided. Now, that was not automation technology associated with AI or anything along those lines. It was out of the box: buy our tool, and this is how you're going to automate various workflows, or scripts, or whatever else it might be. And every effort to break that triangle has been met with screaming, because, well, you're now breaking my automation routines. So the depth-first approach, even without ML, has been the way we've done it historically. But David, you're talking about something different. It's the availability of the data that starts to change that. So are we going to start seeing new compacts put in place between users and vendors and OEMs and a lot of these other folks? And it sounds like it's going to be about access to the data.

Absolutely. So let's start at the bottom. You've got people who have a particular component, whatever that component is. It might be storage, it might be networking. They have products in that area which will be collecting data, and for their particular area they will need to provide a degree of automation, a degree of capability. They need to do two things: do that optimization, and also provide data to other people.
So they have to have an OEM agreement, not just for the equipment they provide, but for the data they're going to give and the data they're going to get back: the anonymization of the data going up, for example, and the availability of data coming back to help themselves.

So contracts effectively mean that you're going to have to negotiate value capture on the data side as well as on the revenue side.

Absolutely. Historically, our ability to do contracting has been built around individual products, and we're pretty good at that. We can say: you will buy this product, I'm delivering you the value, and the utility of that product is up to you. When we go to service contracts, we get a somewhat different kind of arrangement. Now it's ongoing, continuous delivery, but for the most part a lot of those service contracts have been predicated on known-in-advance classes of function, like Salesforce, for example, or the SaaS business generally, where you're able to write a contract that says: over time, you will have access to the service. When we start talking about some of this automation, though, we're talking about something ongoing but highly bespoke, and potentially highly divergent over a relatively short period of time, so you have a hard time writing contracts that prescribe the range of behaviors and promise how those behaviors are actually going to perform. I don't think we're there yet. What do you guys think?

No way. Especially when we think about real time.

Yeah, it has to be real time to get to the end point of automating the actual reply, the actual action that you take. That's where you have to get to; anything less won't be sufficient. I think it's a very interesting area, this contracts area. If you think about solutions for it, I would go straight toward blockchain-type architectures and the dynamic blockchain contracts that would have to be put in place.

They're not real time. The contracts aren't real time.
The contracts will never be real time, but the access to the data, and the understanding of what data is required, those will be real time.

Yeah, we'll see. I mean, with Ethereum, what, every 12 seconds everything gets updated? To me that's real time enough. It's not going to solve the problem of the very edge, but it's certainly sufficient to solve the problem of contracts.

Okay. And I would add to that: in addition to having all this data available, let's go back 10, 20 years and look at Cisco. A lot of their differentiation, and what entrenched them, was sort of universal familiarity with their admin interfaces. And they might not expose APIs in a way that would make them common across their competitors. But if you had data from them and a constrained number of other providers, around which you would build, let's say, these modern big data applications: if you constrain the problem, you can get to depth-first.

Yeah, Cisco is a great example. It's an archetype for what I said earlier, that notion of an iron triangle. You had Cisco admins who were certified to run Cisco gear, and therefore had a strong incentive to ensure that more Cisco gear was purchased, utilizing a Cisco command-line interface that did incorporate a fair amount of automation for that Cisco gear. And it was almost impossible for a lot of companies to penetrate that tight arrangement between the certified Cisco admin, the Cisco gear, and the CLI. The exact same thing happened with Oracle: the Oracle admin skill set was pervasive within large enterprises.

It happened with everybody. But there's a difference. The only reason it didn't happen with the IBM mainframe, David... well, it did happen, but governments stepped in and said this violates antitrust. And IBM was forced by law, by court decree, to open up those interfaces.

That's true.
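The blockchain-contract idea David raises can be illustrated, in miniature, with a tamper-evident, hash-chained log of data-sharing grants. This is a loose sketch of the mechanism only; the field names and parties are hypothetical, and a real deployment would run on an actual chain (such as Ethereum) with digital signatures rather than a local list.

```python
# Illustrative only: each data-sharing grant is chained to the hash of the
# previous entry, so retroactively editing any agreed term is detectable.
import hashlib
import json

def add_grant(ledger: list, grant: dict) -> None:
    """Append a data-sharing grant, chained to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"grant": grant, "prev": prev_hash}
    # Hash is computed over the canonical JSON of grant + prev link.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)

def verify(ledger: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {"grant": entry["grant"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
add_grant(ledger, {"from": "StorageVendor", "to": "PlatformOEM",
                   "data": "anonymized telemetry", "derivative_use": False})
assert verify(ledger)
ledger[0]["grant"]["derivative_use"] = True   # quietly tampering with terms...
assert not verify(ledger)                     # ...is detected
```

This matches the point made above: the contract itself need not be real time, but once recorded it becomes an agreed, auditable baseline for who may access which data, and for what derivative uses.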
Are we going to see the same type of thing?

I think it's very interesting to look a little bit ahead at the shape of this market. People like Amazon are going to have IaaS, and they're going to be running applications. They are going to go for the, which way around is it, the breadth way of doing things. They're going to be broad, end to end, but they will go deep in individual components, and they will put together their own type of thing for their services. Equally, other players like Dell, for example, have a lot of different products, a lot of different components, in a lot of different areas. They have to go piece by piece and put together a consortium of suppliers: storage suppliers, chip suppliers, and assemble that from the outside. It's going to have to be a different type of solution that they put together. HP will have the same issue there. And then there are people like CA, for example, where we will see an opportunity for them to come in again with great products, overseeing the whole of all of this data coming in.

Oh, sure. Absolutely.

So there are a lot of players who could be in this area. Microsoft I missed out; of course, they have the two ends that they can combine together. They may have an advantage that nobody else has, because they're strong in both places.

But Jim Kobielus, let me check, are you there now? Do we have Jim back?

Can you hear me?

I can barely hear you, Jim. Can we bring Jim's volume up a little bit? So Jim, I asked a question earlier about the tooling for AI. We know how to get data, how to build the models, and how to apply the models in a broad-brush way. And we're certainly starting to see that happen within the IT operations management world, the ITOM world.
But we don't yet know how we're going to write contracts that are capable of anticipating and putting in place a regime that really describes the limits of data sharing, the limits of derivative use, et cetera. I argued, and here in the studio we generally agreed, that we still haven't figured that out, and that this is going to be one of the places where the tension, at least in the B2B world, between data availability, derivative use, where you capture value, and where those profit pools go, is going to be significant. But I want to get your take. Has the AI community started figuring out how we're going to contractually handle obligations around data, data use, data sharing, and derivative use?

The short answer is no, they have not. The longer answer is... can you hear me, first of all?

Barely.

Okay, can I keep talking? Okay. The short answer is no, the AI community has not addressed those IP protection issues. But there is a growing push in the AI community to leverage blockchain for such requirements: blockchains to store smart contracts related to downstream utilization of data and derivative models. But that's extraordinarily early in its development, in terms of insight in the AI community, and in the blockchain community as well. In fact, one of the posts I'm working on right now looks at a company called 8Base that's actually using blockchain to store all of those assets, those artifacts, for the development life cycle, along with the smart contracts to drive those downstream uses. So what I'm saying is that lots of smart people like yourselves are thinking about these problems, but there's no consensus in the AI community for how to manage all those rights downstream.

All right. So very quickly, Ralph Finos, if you're there, I want to get your perspective on what this means for markets and market leadership. What do you think?
How is this going to impact who the leaders are? Who's likely to continue to grow and gain even more strength? What are your thoughts?

Yeah, I think my perspective in the near term is to focus on simplification and to focus on depth, because you can get payback for that kind of work, and it simplifies the overall picture. When you're not going broad, you've got less of a problem linking all these things together. So I'm going to take the Shaker kind of perspective on the world: make things simple, and focus there. I think the complexity of what we're talking about for breadth is too difficult to handle at this point in time. I don't see it happening anytime in the near future. Although there are some companies, like Splunk, for example, that are doing a decent job of presenting more of a breadth approach; but they're not going deep into the various elements.

So George, really quick, last word to you.

I disagree on that one. Splunk built a platform originally that was breadth-first. They built all these forwarders, essentially, which could understand the formats of the output of all sorts of different devices and services. But then they started building what they call curated experiences, which are the equivalent of what we call depth-first. They're doing it for IT service management. They're doing it for what's called user behavior analytics, which is a way of tracking bad actors or bad devices on a network. And they're going to be pumping out more of those. What's not clear yet is how they're going to integrate them, so that, say, IT service management understands security and vice versa.

And I think that's one of the key things, George: when we think about the roadmap, it's probable that security is going to be one of the first things that gets addressed here.
And again, it's not just security from a perimeter standpoint. Some people are calling it a software-based perimeter. Our perspective is that the data is going to go everywhere, so ultimately, how do you sustain a zero-trust world where you know your data is going to be out in the clear? What are you going to do about it?

All right, so let's wrap this one up. Jim Kobielus, let's give you the first action item. Jim, action item.

Action item: for automation, follow the stack of assets that drive automation, and figure out your overall architecture for sharing out those assets. I think the core asset will remain orchestration models. I don't think predictive models and AI are a huge piece of the overall automation pie in terms of the logic. So focus on building out, protecting, sharing, and reusing your orchestration models. Those are critically important in any domain, end to end, or in specific automation domains.

David Floyer, action item.

My action item is to acknowledge that the days of building your own automation yourself, around a whole lot of piece parts you've put together, are over. You won't have access to sufficient data. So enterprises must take a broad view of getting data: getting components that have data, giving them data, making contracts with people to give them data, with masking or whatever it is, and becoming part of a broader scheme that will allow them to meet the automation requirements of the 21st century.

Ralph Finos, action item.

Yeah, again, I would reiterate the importance of keeping it simple: taking care of the depth questions and moving forward from there. The complexity is enormous.

George Gilbert, action item.

I say start where customers always start with a new technology, which is a constrained environment, like a pilot. And there are two areas that are potentially high return. One is big data, where there's been a multi-vendor component mix, and a mess.
So you take that, you constrain it, and you make it a depth-first approach in the cloud, where there is data to manage it. And the second one is security, where we now have more and more trained applications just for that. I say don't start with a platform; start with those solutions, and then start adding more solutions around that.

All right, great. So here's our overall action item. The question of the roadmap to automation is crucial for multiple reasons, but one of the most important is that it's inconceivable to us how a business could institute even more complex applications if we don't have a way of improving the degree of automation in the underlying infrastructure. How this is going to play out, we're not exactly sure, but we do think there are a few principles that are going to be important, and that users have to focus on.

Number one is data. Be very clear that there is value in your data, both to you and to your suppliers. As you think about writing contracts, don't write contracts that are focused on a product; focus on that product as a service over time, where you are sharing data back and forth in addition to getting some return out of whatever assets you've put in place. And make sure that the negotiations specifically acknowledge the value of that data to your suppliers as well.

Number two, there is certainly going to be a scale question here, a volume question. A lot of the new approaches to this notion of automation are going to come out of the cloud vendors. Once again, the cloud vendors are articulating what the overall model is going to look like, what that cloud experience is going to look like. And then it's going to be a challenge to the other suppliers who provide the on-premises, true private cloud, and edge orientation, where the data sometimes must live.
It's not something those suppliers do just because they want to; they do it because the data requires it, to be able to reflect that cloud operating model. And expect, ultimately, that your suppliers are also going to have to have very clear contractual relationships with the cloud players, and with each other, for how that data gets shared. Ultimately, however, we think it's crucially important that any CIO recognize that the existing environment they have right now is not converged. The existing environment today remains operators, suppliers of technology, and suppliers of automation capabilities. And breaking that up is going to be crucial, not only to achieving automation objectives, but to achieving converged infrastructure, hyper-converged infrastructure, and multi-cloud arrangements, including private cloud, true private cloud, and the public cloud itself. And this is going to be a management challenge. It goes way beyond products and technology, to how you think about your shop being organized, how you institutionalize the work the business requires, and therefore what you identify as the tasks that will be first to be automated. Our expectation: security is going to be early on. Why? Because your CEO and your board of directors are going to demand it. So think about how automation can be improved and enhanced through a security lens, but do so in a way that ensures that, over time, you can bring new capabilities on, extending a depth-first approach to at least the breadth you need within your shop and within your digital business to achieve the success and results that you want.

Okay, once again, I want to thank David Floyer and George Gilbert, here in the studio with us, and on the phone, Ralph Finos and Jim Kobielus. We couldn't get Neil Raden in today; sorry, Neil. I am Peter Burris, and this has been an Action Item. Talk to you again soon.