Hi, and welcome to this year's annual Wikibon Predictions. This is our 2018 version. Last year, we had a very successful webinar describing what we thought was going to happen in 2017 and beyond, and we've assembled the team to do the same thing again this year. I'm very excited to be joined by the folks listed here on the screen. My name is Peter Burris, and with me is David Floyer. Jim Kobielus is remote. George Gilbert's here in our Palo Alto studio with me. Neil Raden is remote. David Vellante's here in the studio with me. And Stuart Miniman is back in our Marlborough office. So thank you, analysts, for attending. And we look forward to a great teleconference today. Now, what we're going to do over the course of the next 45 minutes or so is hit about 13 of the 22 predictions that we have for the coming year. So if you have additional questions, I want to reinforce this. If you have additional questions or things that don't get answered, if you're a client, give us a call, reach out to us. We'll leave you with the contact information at the end of the session. But to start things off, we just want to make sure that everybody understands where we're coming from and let you know who Wikibon is. So Wikibon is a company that starts with the idea that what's important is to research communities. Communities are where the action is. Communities are where the change is happening, and it's communities where the trends are being established. And so we use digital technologies like theCUBE, CrowdChat, and others to really ensure that we are surfacing the best ideas that are in a community and making them available to our clients so that they can be more successful in their endeavors. When we do that, our focus has always been on a very simple premise. And that is that we're moving to an era of digital business. For many people, digital business can mean virtually anything. For us, it means something very specific. 
To us, the difference between business and digital business is data. A digital business uses data to differentially create and keep a customer. So borrowing from what Peter Drucker said, if the goal of business is to create and keep customers, the goal of digital business is to use data to do that. And that's going to inform an enormous number of conversations, decisions and strategies over the next few years. We specifically believe that all businesses are going to have to establish what we regard as the five core digital business capabilities. First, they're going to have to put in place concrete approaches to turning more data into work. It's not enough to just accrete data, to capture data, or to move data around. You have to be very purposeful and planful in how you establish the means by which you turn that data into work so that you can create and keep more customers. Secondly, it's absolutely essential that we build out three core technology capabilities. The first is doing a better job of capturing data, and IoT and People, or the Internet of Things and People, mobile computing, for example, is going to be a crucial feature of that. You then have to, once you capture that data, turn it into value. And we think this is the essence of what big data, and in many respects AI, is going to be all about. And then once you have the possibility, kind of the potential energy of that data in place, then you have to turn it into kinetic energy and generate work in your business through what we call systems of agency. Now, all of this is made possible by a significant transformation that happens to be coterminous with this transition to digital business. And that is the emergence of the cloud. The technology industry has always been defined by the problems it was able to solve, catalyzed by the characteristics of the technology that made it possible to solve them. 
And cloud is crucial to almost all of the new types of problems that we're going to solve. So these are the five digital business capabilities where we're going to have our predictions. Let's start, first and foremost, with this notion of turning more data into work. So our first prediction relates to how data governance is likely to change on a global basis. If we believe that we need to turn more data into work, well, businesses haven't generally adopted many of the principles associated with those practices. They haven't optimized to do that better. They haven't elevated those concepts within the business as broadly and successfully as they should. We think that's going to change, in part, with the emergence of GDPR, or the General Data Protection Regulation. It goes into full effect in May 2018. A lot has been written about it, a lot has been talked about, but our core observation, ultimately, is that the dictates associated with GDPR are going to elevate the conversation on a global basis. And it mandates something that's now called the data protection officer. We're going to talk about that in a second, Dave Vellante. But it is going to have real teeth. So we were talking with one chief privacy officer, not too long ago, who suggested that had the Equifax breach occurred under the rules of GDPR, the actual fines that would have been levied would have been in excess of $160 billion, which is a little bit more than the $0 that has been fined thus far. Now, we've seen new bills introduced in Congress, but ultimately our observation in our conversations with a lot of chief privacy officers or data protection officers is that in the B2B world, GDPR is going to strongly influence not just how businesses behave regarding data in Europe, but on a global basis. 
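For context on why GDPR has "real teeth": Article 83 of the regulation caps administrative fines for the most serious infringements at the greater of 20 million euros or 4% of worldwide annual turnover for the preceding financial year. A minimal sketch of that upper-bound calculation follows; the turnover figure in the example is invented for illustration, and the $160 billion Equifax estimate quoted above was one privacy officer's own projection, not the output of this formula.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Article 83 administrative fine for the most
    serious infringements: the greater of EUR 20 million or 4% of
    worldwide annual turnover for the preceding financial year."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A hypothetical company with EUR 3 billion in annual turnover
# faces a ceiling of roughly EUR 120 million:
ceiling = max_gdpr_fine(3_000_000_000)

# A smaller firm with EUR 100 million in turnover still faces the
# EUR 20 million floor, since 4% of turnover would be lower:
small_firm_ceiling = max_gdpr_fine(100_000_000)
```

Note that this is only the statutory ceiling; actual fines are set by supervisory authorities based on the nature, gravity and duration of the infringement.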
Now that has an enormous implication, Dave Vellante, because it certainly suggests this notion of a data protection officer is something now we've got another potential chief here. How do we think that's going to organize itself over the course of the next few years? Well, thank you, Peter. There are a lot of chiefs in the house and sometimes it gets confusing. There's the CIO, there's the CDO, and that's either chief digital officer or chief data officer. There's the CSO, could be strategy, sometimes that could be security. There's the CPO, is that privacy or product? As I say, it gets confusing sometimes. On theCUBE, we've talked to all of these roles, so we wanted to try to add some clarity to that. First thing we want to say is the CIO, the chief information officer, that role is not going away. A lot of people predict that; we think that's nonsense. They will continue to have a critical role. Digital transformations are the priority in organizations, and so the chief digital officer is evolving from more than just a strategy role to much more of an operational role. Generally speaking, these chiefs tend, in our observation, to report to the chief operating officer, president, COO. And we see the chief digital officer taking on increasing operational responsibility, aligning with the COO and getting incremental responsibility that's more operational in nature. So the prediction really is that the chief digital officer is going to emerge as a charismatic leader amongst these chiefs. And by 2022, nearly 50% of organizations will position the chief digital officer in a more prominent role than the CIO, the CISO, the CDO and the CPO. Those will still be critical roles. The CIO will be an enabler. The chief information security officer has a huge role, obviously, to play, especially in terms of making security a team sport and not just falling on IT's shoulders or the security team's shoulders. 
The chief data officer, who really emerged from a records and data management role in many cases, particularly within regulated industries, will still be responsible for data architecture and data access, working very closely with the emerging chief privacy officer and maybe even the chief data protection officer. Those roles will be pretty closely aligned. So again, these roles remain critical, but the chief digital officer we see is increasing in prominence. Great, thank you very much, Dave. So when we think about these two activities, what we're really describing is, over the course of the next few years, we strongly believe that data will be regarded more as an asset within business, and we'll certainly see resources and management devoted to it. Now that leads to the next set of questions. As data becomes an asset, the pressure to acquire data becomes that much more acute. We believe strongly that IoT has an enormous implication, longer term, as a basis for thinking about how data gets acquired. Now, operational technology has been in place for a long time. We're not limiting ourselves just to operational technology when we talk about this. We're really talking about the full range of devices that are going to provide and extend information and digital services out to consumers, out to the edge, out to a number of other places. So let's start here. Neil Raden, when we start talking about this notion of how the edge is going to have an impact in thinking about digital business design, what are we really talking about? What are going to be some of the key issues that really define those network choices? Neil, are you on mute? All right, I'll jump in and take this one. 
So if we can go back to this slide, we believe very strongly, ultimately, that over the course of the next few years, edge analytics are going to be an increasingly important feature overall of how technology decisions get made, how technology or digital business gets conceived, and even ultimately, how business gets defined. Now, David Floyer has done a significant amount of work in this domain, and we've provided that key finding on the right-hand side. And what it shows is that if you take a look at a stylized edge-based application, and you presume that all the data moves back to a centralized cloud, you're going to increase your costs dramatically over a three-year period. Now that motivates the need, ultimately, for an approach that brings greater autonomy, greater intelligence down to the edge itself. And we think that ultimately, IoT and edge analytics become increasingly synonymous. The challenge, though, is that as we evolve, while there is pressure to keep more of the data at the edge, ultimately a lot of the data exhaust may someday come to be regarded as valuable data. And so as a consequence of that, there's still a countervailing pressure to try to move all data, not at the moment of automation, but for modeling and integration purposes, back to some other location. The thing that's going to determine that is the rate at which the cost of moving the data around goes down. And our expectation over the next few years is that some of the big cloud suppliers, Amazon, Google and others, that are building out significant networks to facilitate their business services may in fact have as great an impact on the common carriers as they have had on any server or other infrastructure company. 
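David Floyer's key finding can be illustrated with a toy cost model. Every rate and volume below is a hypothetical placeholder, not Wikibon's actual research figure; the point is only the structure of the tradeoff: per-gigabyte transfer and cloud processing charges compound over three years of operation, while edge capacity is largely a fixed up-front cost.

```python
def three_year_cost(gb_per_day, frac_to_cloud, transfer_cost_per_gb,
                    edge_capex, cloud_compute_per_gb):
    """Illustrative three-year cost of an edge application that splits
    processing between local edge hardware and a centralized cloud.
    All inputs are invented for illustration."""
    days = 3 * 365
    gb_moved = gb_per_day * frac_to_cloud * days
    # Fixed edge investment plus recurring transfer and cloud charges.
    return edge_capex + gb_moved * (transfer_cost_per_gb + cloud_compute_per_gb)

# Hauling everything to the cloud vs. keeping 95% of processing at the
# edge (10 TB/day of sensor data, made-up per-GB rates, made-up capex):
all_cloud = three_year_cost(10_000, 1.0, 0.05, 0, 0.02)
mostly_edge = three_year_cost(10_000, 0.05, 0.05, 150_000, 0.02)
```

Under these invented numbers the all-cloud design costs several times the mostly-edge design, which is the shape of the finding on the slide: the recurring movement cost, not the compute itself, dominates.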
So our prediction over the next few years is watch what Amazon and Google do as they try to drive costs down inside their networks, because that will have an impact on how much data moves from the edge back to the cloud. It won't have an impact necessarily on the need for automation at the edge, because latency doesn't change, but it will have a cost impact. Now that leads to a second consideration, and the second consideration is ultimately that when we talk about greater autonomy at the edge, we need to think about how that's going to play out. Jim Kobielus, can you? You have to search for Jim and unmute him. Really? All right. So Jim Kobielus, why don't you, gracious, I apologize everybody, we're having an issue here. Here we go. Jim, there you go, Jim. Thanks a lot, Peter. Little glitch there; Neil is also available. Yeah, so what we're seeing at Wikibon is that more and more application development involves AI, and more and more of the AI involves deployment of those models, deep learning, machine learning and so forth, to the edges of the Internet of Things and People, and much of that AI will be operating autonomously with little or no round-tripping back to the cloud. In fact, what we're seeing is that about a quarter of AI development projects in 2018 will involve autonomous edge deployment. What that involves is that more and more of those AI applications will be bespoke. They'll be one-of-a-kind, unique or unprecedented applications. And what that means is that there are a lot of different deployment scenarios within which organizations will need to use new forms of learning to be able to ready those AI applications to do their jobs effectively, be it doing predictions, real-time guiding of an autonomous vehicle and so forth. Reinforcement learning is the core of many of these kinds of projects, especially those that involve robotics. 
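The "learning by doing" that reinforcement learning provides, maximizing a cumulative reward rather than fitting to a labeled training set, can be sketched with a minimal epsilon-greedy bandit agent. This is an illustrative toy with made-up reward values, not a production edge deployment: the agent starts with no data at all and improves its action-value estimates purely by acting and observing.

```python
import random

def run_bandit(true_means, episodes=5000, eps=0.1, seed=0):
    """Minimal epsilon-greedy multi-armed bandit. The agent has no
    training data; it learns which action pays best purely by trying
    actions and accumulating reward, updating an incremental mean
    estimate q[a] for each action a."""
    rng = random.Random(seed)
    q = [0.0] * len(true_means)   # estimated value of each action
    n = [0] * len(true_means)     # times each action has been taken
    total = 0.0                   # cumulative reward
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(true_means))                   # explore
        else:
            a = max(range(len(true_means)), key=q.__getitem__)   # exploit
        r = true_means[a] + rng.gauss(0, 0.1)  # noisy reward signal
        n[a] += 1
        q[a] += (r - q[a]) / n[a]              # incremental mean update
        total += r
    return q, total

# Three actions whose (hidden) mean rewards are 0.1, 0.5 and 0.9;
# the agent should converge on preferring the third action.
q, total = run_bandit([0.1, 0.5, 0.9])
```

Real edge RL, as in robotics or autonomous vehicles, adds state, function approximation and safety constraints, but the core loop of act, observe reward, update estimates is the same.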
So really, software is eating the world, and the biggest bites are being taken at the edge, and much of that is AI, much of that is autonomous, where there is little or no tolerance for round-trip latency. You need adaptive components, AI-infused components, that can learn by doing from environmental variables and can adapt their own algorithms to take the right action. So these trends will have far-reaching impacts on application development in 2018. The new developer really is a data scientist at heart. They're going to have to tap into a new range of sources of data, especially edge-sourced data from the sensors on those devices. They're going to need new types of training and testing, especially reinforcement learning, which doesn't involve training data so much as it involves being able to build an algorithm that can learn to maximize what's called a cumulative reward function. And that training can happen adaptively, in real time, at the edge, and so forth. So really, much of this will be bespoke in the sense that every edge device, increasingly, will have its own set of parameters and its own set of objective functions that will need to be optimized. So that's one of the leading trends in development we see in the coming year. Back to you, Peter. Excellent, Jim. Thank you very much. So we're going to find out now if I've successfully unmuted everybody by going on to the next question, which is: how are you going to create value from data? So we've gone through a couple of trends, and we have multiple others, about what's going to happen at the edge, but as we think about how we're going to create value from data, Neil Raden, have we successfully unmuted you? I'm here, thank you. There you go. Um, boy. You know, the problem is that data science emerged rapidly out of sort of a perfect storm of big data and cloud computing and so forth. 
And people who had been involved in quantitative methods rapidly glommed onto the title because, let's face it, it was very glamorous and paid very well, but there weren't really good best practices. So what we have in data science is a pretty wide field of things that are called data science. My opinion is that the true data scientists are people who are scientists and are involved in developing new or improving algorithms, as opposed to prepping data and applying models. So the whole field really generated very quickly, really in just a few years. To me, there was what I call generation zero, which was more like data prep and model management all done manually. And it wasn't really sustainable in most organizations, for obvious reasons. In generation one, some vendors stepped up with toolkits or workbenches or whatever for data scientists and made it a little better. And generation two, which is what we're going to see in 2018, is the need for data scientists to no longer prep data, or at least not spend very much time on it, and not to do model management, because the software will not only manage the progression of the models but even recommend them and generate them and select the data and so forth. So it's in for a very big change. And I think what you're going to see is that the ranks of data scientists are going to bifurcate: old style, let me sit down and write some spaghetti code in R or Java or something, and those that use these advanced toolkits to really get the work done. That's great, Neil. And of course, when we start talking about getting the work done, we are becoming increasingly dependent upon tools, aren't we, George? But the tool marketplace for data science, for big data, has been very fragmented and fractured, and hasn't necessarily focused on solving the problems of the data scientists but in many respects on the problems that the tools themselves have. 
What's going to happen in the coming year, when we start thinking about Neil's prescription, is that as the tools improve, what's going to happen to the tools? Okay, so the big thing that we see is that what Neil was talking about is partly a symptom of a product issue and a go-to-market issue. The product issue was that we had a lot of best-of-breed products that weren't all designed to fit together. In the broader big data space, that's the same issue that we faced more narrowly with on-prem Hadoop, where we were trying to fit together a bunch of open source packages, which imposed an admin and developer burden. More broadly, what Neil is talking about is richer end-to-end tools that handle everything from ingest all the way to the operationalization and feedback of the models. But part of what has to go on here is that with these open source tools, the price points and the functional footprints that many of the vendors are supporting right now can't feed an enterprise sales force. Everyone talks, with their open source business models, about land-and-expand and inside sales. But the problem is, once you want to go to wide deployment in an enterprise, you still need someone negotiating commercial terms at a senior level, and you still need the technical people fitting the tools into a broader architecture. And most of the vendors we have who are open source vendors today don't have either the product breadth or the deal size to support a traditional enterprise software account executive, who typically carried a million and a half to two million dollar quota every year. So we see consolidation, and the consolidation again driven by the need for simplicity for the admins and the developers, and for business model reasons, to support an enterprise sales force. 
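Neil's "generation two" idea, where the software recommends, generates and selects models instead of the data scientist doing it manually, can be sketched as a toy automated model-selection loop. The candidate models and the data here are invented for illustration; real tooling in this vein searches far larger model and hyperparameter spaces, but the principle of ranking candidates by held-out validation error is the same.

```python
def fit_constant(xs, ys):
    # Baseline candidate: always predict the training mean.
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(candidates, train, valid):
    """Toy 'generation two' tooling: fit every candidate on the training
    set and return the one with the lowest validation error, so model
    selection is automated rather than hand-managed."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    fitted = [(name, fit([x for x, _ in train], [y for _, y in train]))
              for name, fit in candidates]
    return min(fitted, key=lambda nf: mse(nf[1], valid))

# Synthetic data drawn from y = 2x + 1; the tooling should pick 'linear'.
train = [(x, 2 * x + 1) for x in range(10)]
valid = [(x, 2 * x + 1) for x in range(10, 15)]
best_name, best_model = auto_select(
    [("constant", fit_constant), ("linear", fit_linear)], train, valid)
```

The data scientist's job then shifts from writing the spaghetti code to defining the candidate space and validating the winner.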
All right, so what we're going to see happen in the course of the coming year is a lot of specialization and recognition of what data science is, what the practices are, how it's going to work, supported by an increasing quality of tools, and a lot of tool vendors are going to be left behind. Now the third kind of notion here, for those core technology capabilities, is that we still have to act based on data. The good news is that big data is starting to show some returns, in part because of some of the things that AI and other technology is capable of doing, but we have to move beyond just creating the potential for work; we have to turn that into work, and that's what we mean ultimately by this notion of systems of agency. The idea is that data-driven applications will increasingly act on behalf of a brand, on behalf of a company, and building those systems out is going to be crucial; it's going to require a whole new set of disciplines and expertise. So when we think about what's going to be required, it always starts with this notion of AI. A lot of folks are presuming, however, that AI is going to be relatively easy to build or relatively easy to put together. We have a different opinion, George. What do we think is going to happen as these next few years unfold, related to AI adoption in large enterprises? Okay, so let's go back to the lessons we learned from the big data era, the "let's put a data lake in place" approach that was at the top of everyone's agenda for several years. The expectation was it was going to cure cancer, taste like chocolate and cost a dollar, and it didn't quite work out that way, partly because we again had a burden on the administrator of so many tools that weren't all designed to fit together, even though they were distributed together, and then on the data scientists, the people who had to take all this data that wasn't carefully curated yet and turn it into advanced analytics and machine learning models. 
We have many of the same problems now with the tool sets that are becoming more integrated, but at lower levels. This is partly what Neil Raden was just talking about. What we have to recognize is something that we've seen all along, I mean since the beginning of corporate computing: we have different levels of abstraction. At the very bottom, when you're dealing with things like TensorFlow or MXNet, that's not for mainstream enterprises; that's for the big, sophisticated tech companies who are building new algorithms on those frameworks. There's a level above that where you're using, say, a Spark cluster and the machine learning built into that, which is slightly more accessible. But when we talk about mainstream enterprises taking advantage of AI, the low-hanging fruit is for them to use the pre-trained models that the public cloud vendors have created with all the consumer data on speech, image recognition and natural language processing. Some of those capabilities can then be further combined into applications like managing a contact center, and we'll see more from the likes of Amazon: recommendation engines, fulfillment optimization, pricing optimization. So our expectation, ultimately, George, is that we're going to see a lot of AI adoption happen through existing applications, because the software vendors that are capable of acquiring talent, experimenting and creating value are going to be where a lot of the talent ends up. So Neil, we have an example of that. Give us an example of what we think is going to happen in 2018 when we start thinking about exploiting AI in applications. Neil's on mute again. Let me see if I can unmute him. All right, Neil, are you there? All right, Neil. Okay, good, thank you. 
I think it's fairly clear that the application of what's called advanced analytics and data science, and even machine learning, is rapidly becoming commonplace in organizations, not just at the bottom of the triangle here. But I like the example of Salesforce.com. What they've done with Einstein is they've made machine learning, and I guess you could say AI applications, available to their customer base. And why is that a good thing? Because their customer base already has a giant database of clean data that they can use. So you're going to see a huge number of applications being built with Einstein against Salesforce.com data. But there's another thing to consider, and that is that a long time ago, Salesforce.com built connectors to a zillion kinds of external data. So if you're a Salesforce.com customer using Einstein, you're going to be able to use those advanced tools without knowing anything about how to train a machine learning model, and start to build those things. And I think that they're going to lead the industry in that sense. That's going to push their revenue next year to, I don't know, $11 billion or $12 billion. Great, thanks, Neil. All right, so when we think about further evidence of this and further impacts, we ultimately have to consider some of the challenges associated with how we're going to create application value continually from this tooling. And that leads to the idea that one of the cobbler's children that's going to benefit from AI will in fact be the development organization. Jim, what's our prediction for how auto programming impacts development? Just a second, Jim. All right, Jim. Thank you very much, Peter. Yeah, auto programming, like I said, is the centerpiece of enterprise application development going forward. We have had a little bit of code generation for a while, but that really understates the scope of auto programming as it's evolving. 
Within 2018, what we're going to see is that machine learning driven code generation approaches will come to the forefront of innovation. We're seeing a lot of activity among industry and research institutions looking to use ML to drive the productivity of developers for all kinds of applications. We're also seeing a fair amount of what's called RPA, robotic process automation. The way they differ is that ML delivers code generation from what I call the inside out, meaning creating reams of code that are geared to and optimized for a particular application scenario, versus RPA, which takes an outside-in approach, essentially the evolution of screen scraping, inferring the underlying code needed for applications of various sorts from the external artifacts: the screens and the flow of interactions and clicks and so forth for a given application. What we're going to see is that ML and RPA will complement each other in the next generation of auto programming capabilities. And so really, application development tedium is one of the enemies of productivity. This is a lot of very detailed, painstaking work, and what developers need are better, more nuanced and more adaptive auto programming tools to be able to build code at the pace that's absolutely necessary for this new environment of cloud computing. So really, AI-related technologies can be applied, and are being applied, to application development productivity challenges of all sorts. AI is fundamental to RPA as well. We're seeing a fair number of the vendors in that space incorporate ML-driven OCR and natural language processing and screen scraping and so forth into their core tools, to be able to quickly build up the logic needed to drive the very much outside-in automation of fairly complex orchestration scenarios. 
In 2018 we'll see more of these technologies come together, but they're not a silver bullet, because fundamentally, organizations that are considering going deeply into auto programming are going to have to factor AI into their overall plans. They're going to need to get knowledgeable about AI. They're going to need to bring more AI specialists into their core development teams to be able to select from the growing range of tools that are out there for RPA and ML-driven auto programming. But really, what we're seeing is that the data scientists who have been the fundamental developers of AI are coming into the core of development teams, tools and skills in organizations, and they're going to be fundamental to this whole trend in 2018 and beyond. As AI gets proven out in auto programming, these developers will then be able to evangelize the core utility of AI in a variety of other back-end but critically important investments that organizations will be making in 2018 and beyond, especially in IT operations and management; AI is big in that area as well. Back to you there, Peter. Yeah, we'll come to that a little bit later in the presentation, Jim. That's a crucial point, but the other thing we want to note here, regarding ultimately how folks will create value out of these technologies, is to consider the simple question of, okay, how much will developers need to know about infrastructure? And one of the big things we see happening is this notion of serverless. Jim, why don't you take us through why we think serverless is going to have a significant impact on the industry, at least from a developer perspective and a developer productivity perspective? Yeah, serverless is really having a big impact already, and has for the last several years now. 
Now, many in the development world are familiar with AWS Lambda, which is really the groundbreaking public cloud service that incorporates serverless capabilities. Essentially it's an abstraction layer that enables developers to build event-driven, stateless code, to build microservices that execute in a cloud environment without having to worry about the underlying management of containers and virtual machines and so forth. So in many ways serverless is a simplification strategy for developers. They don't have to worry about the annoying plumbing; they need to worry about the code, of course, what are called Lambda functions or functional methods and so forth. Functional programming has been around for quite a while, but now it's coming to the fore in this new era of serverless environments. What we're predicting for 2018 is that more than 50% of new microservices deployed in the public cloud will be deployed in serverless environments. There's AWS, Microsoft has Azure Functions, IBM has their own, Google has their own. And there's a variety of open source code bases for private deployment of serverless environments that we're seeing evolving and beginning to mature in 2018. They all involve functional programming, which, when coupled with a serverless cloud, enables greater scale and speed in terms of development. And it's very agile-friendly in the sense that you can quickly gin up a functionally programmed serverless microservice in a hurry without having to manage state and so forth. It's very DevOps-friendly in the very real sense that it's far faster than having to build and manage and tune containers and VMs and so forth. So it can enable a more real-time, rapid and iterative development pipeline going forward in cloud computing. 
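Jim's description can be made concrete with the typical shape of an AWS-Lambda-style handler in Python: a stateless function the platform invokes once per triggering event (HTTP request, queue message, IoT signal), with all of the underlying containers and VMs managed for you. The event fields below are made up for illustration; when run locally, the "event" is just a dictionary.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: event-driven, stateless code.
    The developer supplies only this function; the serverless platform
    handles provisioning, scaling and teardown. The 'name' field and
    the response shape here are illustrative, not a fixed contract."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# Locally we can invoke it directly; in a real serverless environment
# the platform calls handler() for each incoming event.
response = handler({"name": "Wikibon"}, None)
```

Because the function holds no state between invocations, the platform can scale it to zero when idle and fan it out under load, which is exactly the management burden the developer no longer carries.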
And really, fundamentally, what serverless is doing is pushing more of these Lambda functions to the edge. If you were at AWS re:Invent last week or the week before, you noticed that AWS is putting a big push on putting Lambda functions at the edge and in devices for the IoT. I think in 2018 we're going to see pretty much the entire cloud arena, everybody, push more of the serverless functional programming to edge devices. It's just a simplification strategy that actually is a powerful tool for speeding up the development metabolism. All right, so Jim, let me jump in here and say that we've now introduced some of these benefits and really highlighted the role that the cloud's going to play. So let's turn our attention to this question of cloud optimization. And Stu, I'm going to ask you to start us off by talking about what we mean by true private cloud, and ultimately our prediction for private cloud. Why don't you take us through what we think is going to happen in this world of true private cloud? Sure, Peter, thanks a lot. So when we at Wikibon launched the true private cloud terminology, which was about two years ago now, it was in some ways the coming together of a lot of trends, similar to things that George and Neil and Jim have been talking about. So it is nothing new to say that we needed to simplify the IT stack. We all know the tried and true discussion that way too much of the budget is spent kind of keeping the lights on, or what we like to call running the business. If you squint through this beautiful chart that we have on here, a big piece of this is operational support, and a big piece of this is operational staffing; that's where we need to be able to make a significant change. 
What we've been really excited about, and what led us to this initial market segment and what we're continuing to see good growth on, is the move from traditional, really siloed infrastructure to infrastructure that is software-based. You want IT to really be able to focus on the applications and services that they're running. And our focus for 2018 is, of course, the central point: it's the data that matters here. The whole reason we have infrastructure is to be able to run applications. And one of the key determinants of where and what I use is the data, and how I can not only store that data but actually gain value from it. Something we've talked about time and again. That is a major determining factor as to whether I'm building this in a public cloud, or doing it in my core, or whether it's something that's going to live on the edge. So what we're saying here with true private cloud is that not only are we going to simplify our environment; it's really the operational model that we've talked about. We often say the line that cloud is not a destination but an operational model. So true private cloud gives me some of the feel and management capability that I had in the public cloud. As I said, it's not just virtualization, it's much more than that: how can I start getting services? And one of the extensions is that true private cloud does not live in isolation when we have the core public cloud and edge deployments. I need to think about the operational models, where data lives, what processing happens in each of these environments, and what data will need to move between them. And of course there are fundamental laws of physics that we need to consider in that. So the prediction, of course, is that, well, we know how much gear and focus has been on the traditional data center, and true private cloud helps that transformation to modernization.
And a big focus is that many of these applications we've been talking about, and uses of data sets, are starting to come into these true private cloud environments. So, we've had discussions of Spark, there are modern databases, and there are going to be many reasons why these might live in the private cloud environment. Therefore that's somewhere we're going to see tremendous growth and a lot of focus. And we're seeing a new wave of companies focusing on this, to deliver solutions that will do more than just a step function for infrastructure or get us outside of our silos, but really help us deliver on those cloud-native applications, where we pull in things like what Jim was talking about with serverless and the like. All right, so Stu, what that suggests ultimately is that data is going to dictate that everything's not going to end up in centralized public clouds, because of latency, cost, data governance, and IP protection reasons, and there will be some others. At bare minimum, that means that in most large enterprises we're going to have at least a couple of clouds. Talk to us about what this impact of multi-cloud is going to look like over the course of the next few years. Yeah, critical point there, Peter, because, right, unfortunately we don't have one solution. There's nobody that we run into who says, oh, I just do a single environment; it'd be great if we only had one application to worry about. But as you've shown in this lovely diagram here, we all use lots of SaaS, and increasingly Oracle, Microsoft, and Salesforce are all pushing everybody to multiple SaaS environments. That has major impacts on my security and where my data lives. Public cloud, no doubt, is growing by leaps and bounds, and many customers are choosing applications to live in different places.
So just as in data centers, I would look at it from an application standpoint and build up what I need. Often, you know, Amazon is doing phenomenally, but maybe there are things that I'm doing with Azure, maybe there are things that I'm doing with Google or others, as well as with my service providers, for locality, for specialized services; there are reasons why people are doing it. And what customers would love is an operational model that can actually span between those. So we are very early in trying to attack this multi-cloud environment. There's everything from licensing to security to, just operationally, how do I manage those? And a piece of this that we're touching on in this prediction is that Kubernetes actually can be a key enabler for that cloud-native environment. As Jim talked about with serverless, what we'd really like is for our developers to be able to focus on building their application and not think as much about the underlying infrastructure, whether that be racks of servers that I built myself or public cloud infrastructure. So we really want to think more at the data and application level; it's SaaS and PaaS as the model, and Kubernetes holds the promise to solve a piece of this puzzle. Now, Kubernetes is by no means a silver bullet for everything that we need, but it absolutely is doing very well. Our team was at the Linux Foundation's CNCF show, KubeCon, last week, and there is broad adoption from over 40 of the leading providers; even Amazon is now a piece of it, and even Salesforce signed up to the CNCF. So Kubernetes is allowing me to manage multi-cloud workflows, and therefore the prediction we have here, Peter, is that 60% of development teams will be building and sustaining multi-cloud applications with Kubernetes as a foundational component of that.
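Stu's point that data, not infrastructure, drives where a workload lands can be sketched in a few lines. This is a toy illustration only: the cluster names, data sets, and placement policy are invented for the example, and real multi-cloud schedulers built on Kubernetes are far richer than this.

```python
# Toy placement logic: pick the cluster that already holds a workload's
# data, echoing the "data dictates placement" argument. Moving compute
# to the data avoids cross-cloud latency and egress cost.
CLUSTERS = {
    "aws-us-east":  {"data_sets": {"clickstream"}},
    "azure-eu":     {"data_sets": {"customer-master"}},
    "on-prem-core": {"data_sets": {"erp", "customer-master"}},
}

def place_workload(required_data):
    """Return the first cluster that already holds every required data
    set; None means no single cluster qualifies and data must move."""
    for name, info in CLUSTERS.items():
        if required_data <= info["data_sets"]:
            return name
    return None
```

For example, a workload needing both `erp` and `customer-master` lands on the (hypothetical) `on-prem-core` cluster, while one needing `erp` plus `clickstream` has no single home and forces a data-movement decision.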
That's excellent, Stu. But when we think about it, the hardware technologies, especially because of the opportunities associated with true private cloud, are also going to evolve; there will be enough money here to sustain that investment. David Floyer, we do see another architecture on the horizon where, for certain classes of workloads, we will be able to collapse and replicate many of these things in an economical, practical way on premise. We call that UniGrid, and NVMe over fabric is a crucial feature of UniGrid. Absolutely. So NVMe over fabric, or NVMe-oF, takes NVMe, which is out there as storage, and turns it into a system framework. It's a major, major change in system architecture. We call this UniGrid, and it's going to be a focus of our research in 2018. Early vendors are already out there; this is the fastest movement from early standards into products that we've seen. You can see on the chart that IBM has come out with NVMe over fabric connecting FlashSystem 900 storage to its POWER9 systems. NetApp has the EF570. A lot of other companies are out there: Mellanox with high-speed networks, Excelero with a major part of the storage software. And it's going to be used in particular with things like AI. So, what are the drivers and benefits of this architecture? The key is that data is the bottleneck for applications. We've talked about data; the amount of data is key to making applications more effective and higher value. NVMe and NVMe over fabrics allow data to be accessed in microseconds as opposed to milliseconds, and they allow gigabytes of data per second as opposed to megabytes of data per second. They also allow thousands of processors to access all of the data at very, very low latencies, and that gives us amazing parallelism. So, what it's about is disaggregation of storage, network, and processors.
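David's microseconds-versus-milliseconds point can be put in rough numbers. The figures below are illustrative round numbers, not vendor specifications: roughly 100 microseconds for an access over an NVMe fabric versus several milliseconds for a traditional networked disk, and gigabytes versus hundreds of megabytes per second of bandwidth.

```python
# Back-of-envelope comparison of traditional networked storage vs
# NVMe over fabric, using illustrative round numbers.
disk_latency_s   = 5e-3    # ~5 ms per access for traditional storage
nvmeof_latency_s = 100e-6  # ~100 us per access over an NVMe fabric
disk_bw   = 200e6          # ~200 MB/s
nvmeof_bw = 5e9            # ~5 GB/s

latency_speedup   = disk_latency_s / nvmeof_latency_s  # 50x faster access
bandwidth_speedup = nvmeof_bw / disk_bw                # 25x more throughput
print(f"latency improves {latency_speedup:.0f}x, "
      f"bandwidth improves {bandwidth_speedup:.0f}x")
```

Even with these conservative assumptions, the order-of-magnitude jump is what makes it practical for thousands of processors to share one pool of disaggregated storage.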
There are some huge benefits from that, not least of which is that you save about 50% of the processor capacity; you get it back because you don't have to do storage and networking on it. You save on stranded storage, and you save on stranded processor and networking capabilities. So, overall, it's going to be cheaper. But more importantly, it becomes a basis for delivering systems of intelligence. Systems of intelligence bring together systems of record, the traditional systems, not rewriting them but attaching them to real-time analytics and real-time AI, and blend those two together, because you've got all of that additional data you can bring to bear on a particular problem. Now, systems themselves have pretty well reached the limit of human management. So one of the great benefits of UniGrid is to have a single metadata layer across all of that data, all of those processes, all those infrastructure elements, and the applications themselves. What that leads to is a huge potential to improve automation of the data center and the application of AI to operations: operational AI. So, George, this sounds like it's going to be one of the key areas where we'll see AI be practically adopted within business. What do we think is going to happen here as we think about the role that AI is going to play in IT operations management? Well, if we go back to the analogy with big data, which we thought was going to cure cancer, taste like chocolate, and cost a dollar, it turned out that the most widespread application of big data was to offload ETL from expensive data warehouses. And what we expect is that the first widespread application of AI will be embedded in applications for horizontal use, where Neil mentioned Salesforce and the ability to use Einstein to access Salesforce data and connected data.
Now, the applications we're building are so complex that, as Stu mentioned with the operational model of a true private cloud, it's actually not just the legacy stuff that's sucking up all the admin overhead. It's the complexity of the new applications, and the stringency of the SLAs means that we would have to turn millions of people into admins, like the old prediction that, as the telephone network grew, everyone was going to have to become an operator. The only way we get past this is if we apply machine learning to IT ops and application performance management. The key here is that the models can learn how the infrastructure is laid out and how it interoperates. They can also learn how all the application services and middleware behave, independently and with each other, and how they tie in with the infrastructure. The reason that's important is because all of a sudden you can get very high-fidelity root cause analysis. With the old management technology, if you had an underlying problem, you'd have a whole storm of alerts, because there was no reliable way to triangulate on, or triage, the root cause. What's critical is that if you have high-fidelity root cause analysis, you can have really precise recommendations for remediation, or automated remediation, which is something people will get comfortable with over time; that's not going to happen right away. But this is critical, and this is also the first large-scale application of not just machine learning but machine data. And so this topology of collecting widely disparate machine data, applying models, and then reconfiguring the software is training wheels for IoT apps, where you're going to have it far more distributed and be actuating devices instead of software. That's great, George. So let me sum up, and then we'll take some questions.
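George's root-cause point, that one upstream fault should explain a downstream alert storm, can be sketched in miniature. This is a heavily simplified illustration: the three-tier topology, the metric values, and the fixed z-score threshold are all assumptions, where real AIOps systems learn both the topology and the models from machine data.

```python
# Toy root-cause analysis: flag anomalous components, then use an
# (assumed) dependency graph so one upstream fault explains the
# downstream alerts, suppressing the classic alert storm.
from statistics import mean, stdev

DEPENDS_ON = {"web": ["app"], "app": ["db"], "db": []}  # assumed topology

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it sits more than `threshold` standard
    deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > threshold

def root_causes(anomalous):
    """A component is a root cause only if none of its dependencies
    are also anomalous; downstream alerts are treated as symptoms."""
    return [c for c in anomalous
            if not any(dep in anomalous for dep in DEPENDS_ON.get(c, []))]

history = [10, 11, 9, 10, 12, 10, 11, 9]   # normal latency samples, ms
# Suppose every tier suddenly reports ~80 ms: all three look anomalous,
# but only the db, with no anomalous dependency, is the root cause.
anomalous = {c for c in DEPENDS_ON if is_anomalous(history, 80)}
print(root_causes(anomalous))  # ['db']
```

The suppression step is the "high fidelity" part: instead of three alerts, the operator, or an automated remediation, gets one precise target.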
So very quickly, here are the action items that we have out of this overall session; and again, we have another 15 or so predictions that we didn't get to today. One is, as we said, digital business is the use of data assets to compete, and ultimately this notion is starting to diffuse rapidly. We're seeing it on the Cube, we're seeing it on the CrowdChats, we're seeing it in the inquiries with our customers. We believe that users need to start preparing for even more business scrutiny over their technology management. For example, something very simple, and David Floyer, you and I have talked about this extensively in our weekly Action Item research meeting: the idea of backing up and restoring a system. In a digital business world, it's no longer just about backing up and restoring a system or application; it's about restoring the entire business. That's going to require greater business scrutiny over technology management. It's going to lead to new organizational structures, new challenges of adopting systems, et cetera. But ultimately our observation is that data is going to dictate technology directions across the board. Whether we talk about how businesses evolve and the roles that technology takes in business, or about the key digital business capabilities of capturing data, turning it into value, and then turning it into work, or about how we think about cloud architectures and which organization of cloud resources we're going to utilize, it all comes back to the role that data is going to play in helping us drive decisions. The last action item we want to put here before we get to the questions is: clients, if we don't get to your question right now, contact us. Send us an inquiry at support@siliconangle.freshdesk.com and we'll respond to you as fast as we can over the course of the next day or two to try to answer your question.
All right, Dave Vellante, you've been collecting some questions here. Why don't we see if we can't take a couple of them before we close out? Yeah, we've got about five or six minutes, and in the chat room Jim Kobielus has been awesome helping out, so there's a lot of detail answered there. The first question was: are there too many chiefs? And I guess, yeah, there's some title inflation. My comment there would be that titles are cheap; results aren't. So if you're creating chief X officers just to check a box, you're probably wasting money. You've got to give them clear roles, and I think each of these chiefs has clear roles to the extent that they are empowered. Another comment that came up: we don't want Hadoop spaghetti soup all over again. True that. Are we at risk of having Hadoop spaghetti soup as the centricity of big data moves from Hadoop to AI and ML and deep learning? Well, my answer is that we are at risk of that, but there's customer pressure and vendor economic pressure to start consolidating. And we'll also see something we didn't see in the on-prem big data era with the cloud vendors: they're just going to start making it easier to use some of the key services together. That's just natural. And I'll speak for Neil on this one too, very quickly: the idea ultimately is that as the discipline starts to mature, we won't have people who probably aren't really capable of doing some of this data science stuff running around and buying a tool to try to supplement their knowledge and their experience. That's going to be another factor that I think ultimately leads to clarity in how we utilize these tools as we move into an AI-oriented world. Okay, Jim is on mute, so if you wouldn't mind unmuting him. There was a question: isn't ML a more informative way of describing AI? Jim, when you and I were in our Boston studio I asked a similar question. AI is sort of the uber category. Machine learning is math.
Deep learning is more sophisticated math. You have a detailed answer in the chat, but maybe you could give a brief summary. Sure, sure. I don't want to be too pedantic here, but deep learning is essentially more hierarchical, deeper stacks of neural network layers, able to infer higher-level abstractions from data: face recognition, sentiment analysis, and so forth. Machine learning is the broader phenomenon; it's simply a lot of different approaches for distilling patterns and correlations from the data itself. What we've seen in the last five, six, ten years is that the neural network approaches to AI have come to the forefront and are in fact the core of the marketplace and the state of the art. AI is an ancient paradigm, older than probably you or me, that for the longest time was rules-based systems, expert systems. Those haven't gone away. The new era of AI we see as a combination of statistical approaches as well as rules-based approaches, and possibly even orchestration-based approaches, like graph models, for building the broader context for AI for a variety of applications, especially distributed edge applications. Okay, thank you. And then another question slash comment: AI, like graphics in 1985, will move from a separate category to a core part of all apps, AI-infused apps. Again, Jim, you have a very detailed answer in the chat room, but maybe you could give a summary version. Yes. The most disruptive applications we see across the world, enterprise, consumer, and so forth, these days involve AI at their heart: machine learning, and more often than not neural networking. I wouldn't say every single application is doing AI, but the ones that are really blazing the trail in terms of changing the fabric of our lives, most of them have AI at their heart. That will continue as the state of the art of AI continues to advance.
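Jim's "deeper stacks of layers" description can be illustrated with a tiny forward pass. This is a toy sketch: the weights are arbitrary made-up numbers (a real network learns them from data), and it exists only to show that "deep" means the same layer computation repeated, each layer building on the abstractions of the one below.

```python
# A toy deep network forward pass in pure Python: "deep learning" as
# repeated stacks of the same dense-layer computation.
import math

def layer(inputs, weights):
    """One dense layer: weighted sums pushed through a sigmoid."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def forward(x, stack):
    for weights in stack:   # "deeper" simply means more of these
        x = layer(x, weights)
    return x

# Two hidden layers plus an output layer, with made-up weights.
stack = [
    [[0.5, -0.2], [0.1, 0.8]],  # layer 1: 2 inputs -> 2 units
    [[1.0, -1.0], [0.3, 0.7]],  # layer 2: 2 -> 2
    [[0.9, 0.4]],               # output layer: 2 -> 1
]
out = forward([1.0, 2.0], stack)
print(out)  # a single value between 0 and 1
```

Machine learning in the broad sense covers any such pattern-distilling procedure; the deep-learning subset is this particular layered, hierarchical shape.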
So really, one of the things we've been saying in our research at Wikibon is that data scientists, or those skills and tools, are the nucleus of the next-generation application developer, really in every sphere of our lives. Great. A quick comment: we will be sending out these slides to all participants, and we'll be posting these slides, so thank you, Kim, for that question. And very importantly, Dave, over the course of the next few days most of our predictions docs will be posted up on Wikibon, and we'll do a summary of everything that we've talked about here. So now the questions are coming through fast and furious, but let me just try to rapid-fire here, because we've only got about a minute left. On the true private cloud definition, suffice it to say that we have a detailed definition that we can share, but essentially it's substantially mimicking the public cloud experience on-prem. The way we like to say it is bringing the cloud operating model to your data, versus trying to force-fit your business into the cloud. So we've got detailed definitions there that, frankly, are evolving. What about PaaS? There was a question about PaaS. I think we have a prediction on it in one of our appendices, but maybe a quick word on PaaS. Yeah, a very quick word on PaaS is that there's been an enormous amount of effort put into the idea of the PaaS marketplace. Cloud Foundry and others suggested that a PaaS market would evolve because you'd want to have mobility, migration, and portability for these large cloud applications. We're not seeing that happen, necessarily. But what we are seeing is that developers are increasingly becoming a force in dictating and driving cloud decision-making, and developers will start biasing their choices toward the platforms that demonstrate that they have the best developer experience. So do we call it PaaS, or do we call it something else?
Providing the best developer experience is going to be really important to the future of the cloud market. All right, great. And then George, George Gilbert, you'll follow up with George O and that other question we need some clarification on. There's a question, really, David, I think it's for you: will persistent DIMMs emerge first on public clouds? Almost certainly. Public clouds are where everything is going first, and when we talked about UniGrid, that's where it's going first. The NVMe over fabrics architecture is going to be in public clouds, and it has the same sort of benefits there. And NVDIMMs will, again, develop pretty rapidly as a part of NVMe over fabrics. Okay, we're out of time. We'll look through the chat and follow up with any other questions. Peter, back to you. Great, thanks very much, Dave. So once again, we want to thank everybody here who has participated in the webinar today. I feel like Han Solo in saying it wasn't my fault, but having said that, nonetheless, I apologize to Neil Raden and everybody who had to deal with us finding and unmuting people. But we hope you've gotten a lot out of today's conversation. Look for those additional pieces of research on Wikibon that pertain to these specific predictions and each of the different things we've been talking about. And by all means, contact support@siliconangle.freshdesk.com if you have an additional question; we will follow up with as many as we can from the significant list that's starting to queue up. So thank you very much. This closes out our webinar. We appreciate your time, and we look forward to working with you more in 2018.