Hello, Wikibon community, and welcome to our 2017 predictions for the technology industry. We're very excited to be able to do this today. This is one of the first times that Wikibon has undertaken something like this. I've been here since about April 2016, and it's certainly the first time that I've been part of a gathering like this with so many members of the Wikibon community. Today I'm joined by Dave Vellante, who is our co-CEO. I'm the Chief Research Officer here, and you can see me there on the left. This is from being on theCUBE at Big Data NYC this past September. And there's Dave on the right-hand side. Dave, you want to say hi? Hi, everybody, welcome. So there are a few things that we're going to do here. The first thing to note is that we've got a couple of relatively simple webinar housekeeping items. The first is that everyone is muted. There is a Q&A option: hit that tab and a window will pop up where you can ask questions. So if you hear anything that requires an answer, something we haven't covered, or something you'd like to hear again, by all means hit that window, ask a question, and we'll do our best to get back to you. If you're a Wikibon customer, we'll follow up with you shortly after the call to make sure you get your question answered. If, however, you want to chat with other members of the community or with either Dave or myself, or you want to comment, there's also a chat option. On some of the toolbars, it's listed under the More button, so if you go to the More button you can probably find chat there. Finally, we're also recording the webinar, and we will turn this into a Wikibon deliverable for the overall community. So we're excited to be doing this. Now, Dave, one of the things that we note in this slide is that we have theCUBE in the lower left-hand corner. Why don't you take us through a little bit about who we are and what we're doing? 
OK, great. Thanks, Peter. So I think many of you or most of you know SiliconANGLE Media Inc. is the umbrella company. And underneath SiliconANGLE, we have three brands. There's the Wikibon research brand, which was started in the 2007 timeframe and is a community of IT practitioners. TheCUBE is what some people call the ESPN of tech. We'll do 100 events this year, and we extensively use theCUBE as a data-gathering mechanism and a way to communicate to our community. We've got big shows coming up pretty much every week, including AWS re:Invent and HPE Discover in London. So we cover the world and cover technology, particularly in the enterprise. And then, of course, there's the SiliconANGLE publishing team. It was founded by my co-CEO, John Furrier, and Rob Hof, formerly of BusinessWeek, now leads that team. So those are the three main brands. We've got a new website coming out this month on SiliconANGLE, so we're really excited about that. And I just want to thank the community for all your feedback and participation. So Peter, back to you. Thanks, Dave. So what you're going to hear today is what the analyst team here at Wikibon has pulled together as some of the most interesting things that we think are going to happen over the next two years. Wikibon has been known for looking at disruptive technologies, and so while we'll focus from a practical standpoint on 2017, we do go further out. What is the overarching theme? Well, the overarching theme of our research and our conversations with the community is very simple: put more data to work. The industry has developed incredible tools to gather data, to do analysis on data, to have applications use data, and to store data; I could go on with that list. But the data tends to be quite segmented and quite siloed to a particular application, a particular group, or a particular activity. 
And the goal of digital business, in very simple terms, is to find ways to turn that data into an asset so that it can be applied to other forms of work. That data could include customer data, operational data, financial data, virtually any data that we can imagine. And the number of sources that we're going to have over the next few years is going to be astronomical. Now, what we want to do is find ways that data can be used, almost like energy in a physical sense, to dramatically improve the quality of the work that a firm produces, whether from an engagement standpoint, a customer experience standpoint, agile operations, or, increasingly, automation. So that's the underlying theme, and as we go through all of these predictions, that theme will come out and we'll reinforce that message during the course of the session. So how are we going to do this? First, we're going to present six predictions that focus on 2017. And those six predictions are going to answer crucial questions that we're getting from the community. The first one is: what's driving system architecture? Are there new use cases, new applications, new considerations that are going to influence not only how technology companies create the systems and the storage and the networking and the database and middleware and the applications, but also how users are going to evolve the way they think about investing? The second one is: do microprocessor options matter? For over 20 years now, we've pretty much focused on one limited class of microprocessors, the x86 architecture. But will these new workloads drive opportunities or options for new microprocessors? We have to worry about that. Thirdly, all this data has to be stored somewhere. Are we going to continue to store it all, and only on HDDs, or are other technologies going to come into vogue? 
Fourthly, in the 2017 time frame, we see the cloud; a lot's happening, professional developers have flocked to it, and enterprises are starting to move to it in a big way. What does it mean to code in the cloud? What kind of challenges are we going to face? Are they technological? Are they organizational, institutional? Are they sourcing? Related to that, obviously, Amazon's had enormous momentum over the past few years. Do we expect that to continue? Is everybody else going to continue to play catch-up? And the last question for 2017 that we think is going to be very important is this notion of big data complexity. Big data has promised big things and, quite frankly, excepting some limited cases, has been a little bit underwhelming, as some would argue this last election showed. Now, after those six predictions we're going to move to 2022, where we'll focus on three predictions. One is: what is the new IT mandate? Is there a new IT mandate? Is it going to be the same old, same old, or is IT going to be asked to do new things? Secondly, when we think about the internet of things, and we think about augmented reality or virtual reality or some of these other new ways of engaging people, is that going to drive new classes of applications? And then finally, after years of investing heavily in mobile applications, in mobile websites, and any number of other things, and thinking that there was this tight linkage where mobile equaled digital engagement, we're starting to see that maybe that's breaking, and we have to ask the question: is that all there is to digital engagement, or is there something else on the horizon that we're going to have to do? For the last prediction, looking out to 2027, we're going to take a stab here and ask: will we all work for AI? So these are the questions that we hear frequently from our clients and from our community. These are the predictions we're going to attend to and address. If you have others, let us know. 
If there are other things you want us to focus on, let us know. But here's where we're starting. All right, so let's start with 2017. What's driving system architecture? Our prediction for 2017 regarding this is very simple: IoT edge use cases begin shaping decisions in system and application architecture. Now, on the right-hand side, if you look at that chart, you can see a very, very important result of a piece of research that David Floyer recently did. It shows three-year costs for the IoT edge options. On the left: moving all the data into the cloud over a normal data communications circuit. In the middle: moving that data into a central location using cellular network technologies, which have different performance and security attributes. And then finally: keeping 95% of the data at the edge and processing it locally. And you can see that the costs overwhelmingly favor being smarter about how we design these applications and keeping more of that data local. In fact, we think that so long as data communications costs remain what they are, there's going to be an irrevocable pressure to alter key application architectures and ways of thinking to keep more of that processing at the edge. The first point to note here is that it means data doesn't tend to move to the center as much as many are predicting; rather, the cloud moves to the edge. The reason for that is that data movement isn't free. That means we're going to have even more distributed, highly autonomous apps that nonetheless have to be managed in ways that sustain the firm's behavior in a branded, consistent way. 
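The three-option comparison described above can be illustrated with a rough back-of-envelope model. This is a minimal sketch: the data volumes, per-GB transport prices, and edge-processing cost below are purely hypothetical assumptions for illustration, not figures from the Wikibon research.

```python
# Rough three-year cost sketch of the three IoT edge options discussed above.
# Every number here is an illustrative assumption, not a published figure.

def three_year_cost(gb_per_day, frac_sent_upstream, transport_cost_per_gb,
                    edge_cost_per_gb, years=3):
    """Total cost = transport for the data moved upstream
    plus local processing for the data kept at the edge."""
    total_gb = gb_per_day * 365 * years
    moved = total_gb * frac_sent_upstream
    kept = total_gb - moved
    return moved * transport_cost_per_gb + kept * edge_cost_per_gb

GB_PER_DAY = 1000  # hypothetical sensor fleet

# Option 1: ship everything to the cloud over a standard data circuit.
cloud_only = three_year_cost(GB_PER_DAY, 1.0,
                             transport_cost_per_gb=0.10, edge_cost_per_gb=0.0)
# Option 2: ship everything over cellular (pricier per GB).
cellular = three_year_cost(GB_PER_DAY, 1.0,
                           transport_cost_per_gb=0.25, edge_cost_per_gb=0.0)
# Option 3: keep 95% of the data at the edge and process it locally.
edge_heavy = three_year_cost(GB_PER_DAY, 0.05,
                             transport_cost_per_gb=0.10, edge_cost_per_gb=0.02)

for name, cost in [("cloud only", cloud_only), ("cellular", cellular),
                   ("95% at edge", edge_heavy)]:
    print(f"{name:12s} ${cost:,.0f}")
```

Even with these made-up prices, the edge-heavy option comes out at roughly a quarter of the cloud-only cost, which is directionally consistent with the chart's conclusion that transport cost, not compute, dominates the decision.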
And very importantly, because these apps are going to tend to be distributed and autonomous, close to the data, it really means that there are going to be a lot of operational technology players impacting the key decisions we're going to see made, as new technologies get built by vendors and new application architectures get deployed by users. So Peter, let me just add to that. I think the key takeaway there, as you mentioned, and I just don't want it to get lost, is that 95% of the data, we're predicting, will stay at the edge. That's a much larger figure than I've seen from other firms or other commentary. And that's substantial. That's significant. It says the data is not going to move. It's probably going to sit on flash, and the analytics will be done at the edge, as opposed to that first bar of being cloud-only. That 95% figure has been debated; it's somewhat controversial, but that's where we are today. I just wanted to point that out. Yeah, that's a great point, Dave. And the one thing to note here that's very important is that this is partly driven by the cost of telecommunications or data communications, but there are also physical realities that have to be addressed. So physics, the round-trip times because of the speed of light, the need for greater autonomy and automation at the edge, OT and the decisions and characteristics there: all of these will contribute strongly to this notion that the edge is increasingly going to drive application architectures and new technologies. So what's going to power those technologies? What's going to be behind those technologies? Let's start by looking at the CPUs. Do microprocessor options matter? 
Well, our prediction is that evolving workloads, edge and big data (under which, for now, we would put AI and machine learning and cognitive, almost as application forms), create an opening for new microprocessor technologies, which are going to start grabbing market share from x86 servers in the next few years: 2% to 3% next year in 2017. And we can see a scenario where that number easily goes to double digits in the next three or four years. Now, these microprocessors are going to come from multiple sources, but the factors driving this are, first off, the unbelievable explosion in devices, which is just going to require more processing power all over the place. And that processing power has to become much more cost-effective and much more tuned specifically to serving those types of devices. Data volumes and data complexity are another reason. Consumer economics is clearly driving a lot of these factors, has been for a number of years, and is going to continue to do so: we will see new ARM-based processors, and GPUs for big data apps, which have the advantage of also being supported by many of the consumer applications out there driving this new trend. Now, the other two factors: Moore's Law is not out of room, and we don't want to suggest that, but it's not the factor that it used to be. We can't presume that we're going to get double the performance out of a single class of technology every year or so, crowding out any and all other types of microprocessors. So there's just not as much headroom, and there's now an opportunity to go at these new workloads with more specialized technology. And the final factor is that the legacy software issue is never going to go away. It's a big issue and it's going to remain a big issue. But these new workloads are going to create so much net new value in digital business settings. 
We believe that it will moderate the degree to which legacy software keeps a hold on the server marketplace. So we expect a lot of ARM-based servers that are lower cost, tuned, and specialized, supporting different types of apps; a lot of significant opportunity for GPUs for big data apps, which do a great job running those kinds of graph-based data models; and a lot of room still for RISC in prepackaged HCI solutions, which we call single managed entities and others call appliances. So we see a lot of room for new microprocessors in the marketplace over the next few years. I guess I'll add to that, and I'll be brief just in the interest of time. The industry has marched to the cadence of Moore's Law, as we know, for many, many decades, and that's been the fundamental source of innovation. We see the innovation curve shifting and changing to become combinatorial: a combination of technologies, Peter mentioned GPUs, certainly visualization, AI, machine learning, deep learning, graph databases, combining to be the fundamental driver of innovation going forward. So the answer here is yes, they matter. Workloads are obviously the key. Great, Dave. So let's go to the next one. We talked about CPUs; now let's talk about HDDs and, more broadly, storage. The prediction is that anything in the data center that physically moves gets less useful and loses share of wallet. Now, clearly that includes tape, but now it's starting to include HDDs. In our overall enterprise storage systems revenue forecast, which is going to be published very, very shortly, we've identified that we think the revenue attributable to HDD-based enterprise storage systems is going to drop over the next few years while flash-based enterprise storage system revenue rises dramatically. Now, we're talking about storage system revenue here, Dave. We're not just talking about the HDDs themselves. 
The HDD market itself continues to grow, perhaps not as fast, partly because even as the performance side of the HDD market starts to fade a bit, replaced by flash, the bulk-volume part of the HDD marketplace starts to substitute for tape. So why is this happening? One of the main reasons is that storage systems revenue is very strongly influenced by software, and those software revenues are being bundled into the flash-based systems. Now, there are a couple of reasons for this. First off, as we've predicted for quite some time, we do see a flash-only data center option on the horizon; it's coming well into focus. Number two, the good news is that prices of flash-based products are starting to come down and are in sight of HDD-based products at the performance level. But very importantly, and here's one of the key notions of the value of data and finding new ways to increase the use of data: flash, our research shows, offers superior business value precisely because you can make so many copies of the data and have a single set of data serve so many different applications and so many users, at scales that just aren't possible with traditional HDD-based enterprise storage systems. Now, this applies to labor too, doesn't it? Yeah, so a couple of points here. Yes, labor is one of those areas that Peter's talking about being in jeopardy. We see about $200 billion over the next 10 years shifting from what we often refer to as non-differentiated IT labor (provisioning, network configuration, laying cable, et cetera), shifting from where it is today, in services and on-prem IT labor, to vendor R&D or the cloud. So that's a very important point. I just wanted to add some color to what you were talking about before: when you talked about HDD revenue continuing to grow, I think you were talking specifically about the enterprise in this storage systems view. 
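The copy-sharing arithmetic behind that business-value claim can be sketched in a few lines. The per-TB prices and copy counts below are illustrative assumptions for the sake of the sketch, not Wikibon's forecast figures:

```python
# Why sharing one physical flash copy across many consumers changes the
# economics. Prices per TB and copy counts are illustrative assumptions.

def cost_per_logical_copy(media_cost_per_tb, physical_copies, logical_copies):
    """Effective cost of serving one logical copy of 1 TB of data."""
    return media_cost_per_tb * physical_copies / logical_copies

# HDD: performance limits mean each consumer tends to get its own
# physical copy of the data.
hdd = cost_per_logical_copy(media_cost_per_tb=30,
                            physical_copies=10, logical_copies=10)

# Flash: one physical copy can serve many consumers via snapshots and
# clones, because random-read throughput is high enough to share it.
flash = cost_per_logical_copy(media_cost_per_tb=150,
                              physical_copies=1, logical_copies=10)

print(f"HDD per logical copy:   ${hdd:.2f}")
print(f"Flash per logical copy: ${flash:.2f}")
```

In this toy model, even at a 5x media-price premium, sharing one physical flash copy across ten logical consumers makes the effective cost per served copy half that of HDD, which is the shape of the argument Peter and Dave are making.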
And then the other thing I want to add is that Peter referenced the business value of flash. As many of you know, David Floyer and Wikibon predicted very early on the impact that flash would have on spinning disk, not only because of cost, related to compression and deduplication, but also this notion that Peter's talking about of data sharing: the ability of development organizations to use the same data and minimize the number of copies. Now, the thing to watch here, and kind of the wild card, is the hyperscale model. Hyperscalers, as we know, are consuming many, many exabytes and petabytes of data, and they do things differently than is done in the enterprise. So that's something we're watching very closely: how the hyperscale model mimics, or doesn't mimic, what has traditionally occurred in the enterprise, and how that will affect adoption of both flash and spinning disk. But as Peter said, we'll be releasing this data very shortly and you'll be able to dig into it with us. And very importantly, Dave, in response to one of the comments in the chat: we're not talking about duplication of data everywhere. We're talking about the ability to provide logical, effective copies from single data sources, precisely so that you can drive a lot more throughput. So that's the HDD story. Now let's turn to this notion of coding the cloud. What are we going to do with coding the cloud? Well, our prediction is that the new cloud development stack, which is centered on containers and APIs, matures rapidly, but institutional habits in development constrain change. Now, why do we say that? I want to draw your attention to the graphic on the right-hand side. This is what we think the mature, or maturing, cloud development stack looks like. As you can see, it includes a lot of notions of containers and a lot of notions of other types of technologies. 
We'll see APIs interspersed right here as a primary way of getting to some of these container-based application services, microservices, et cetera. But this same exact chart could be mapped back to SOA from 10 years ago, and even to some of the distributed computing environments that were put forward 20 years ago. The challenge here is that a sizable percentage, and we're estimating about 80%, of in-house development is still set up to work the old way. And so long as development organizations are structured to build monolithic apps or take care of monolithic apps, they will tend to create monolithic apps with whatever technologies are available to them. So while we see these stacks becoming more in vogue and more in use, we may not see shops in 2017 being able to take full advantage of them, precisely because the institutional work forms are going to change more slowly. Now, big data will partly contravene these habits. Why? Because big data is going to require quite different development approaches, because of the complexity associated with building analytic pipelines, managing data, figuring out how to move things from here to there, et cetera. There's a very, very complex data movement that takes place within big data applications. And some of these new application services, like cognitive, et cetera, will require some new ways of thinking about how development gets done. So there will be a contravening force here, which we'll get to shortly. But the last point is that, ultimately, we think time-value metrics are going to be key. As KPIs move away from project costs and taking care of the money, et cetera, and move more towards speed; as agile starts to assert itself; as organizations start not only to build part of the development organization around agile, but agile also starts to bleed into other management functions, even finance; then we'll start to see these new technologies really assert themselves and have a big impact. 
So I would add to that this notion of the iron triangle: these embedded processes of people, process, and technology, where, as we all know, the people and process are the hardest to change. I'm interested, Peter, in your thoughts: you hear a lot about waterfall versus agile. How will that affect organizations in terms of their ability to adopt some of these new methodologies like agile and scrum? Well, the thing we're saying is that the technology is going to happen fast. The agile processes are being well adopted and are certainly being used in development, but I have had lots of conversations with CIOs, for example, over the last year and a half to two years, where they observed that they're having a very difficult time reconciling the impedance mismatch between agile development and non-agile budgeting. So a lot of that still has to be worked out. It's going to be tied back to how we think about the value of data, to be sure. But ultimately, again, it comes back to this notion of people. If the organization is not set up to fully take advantage of these new classes of technologies, if it's set up to deliver and maintain more monolithic applications, then that's what's going to tend to get built, that's what the organization's going to tend to have, and that's going to undermine some of the new value propositions that these technologies put forward. Well, what about the cloud? What kind of momentum does Amazon have? Our prediction for 2017 is that Amazon's going to have yet another banner year, but customers are going to start demanding a simplicity reset. Now, theCUBE is going to be at AWS re:Invent; John Furrier and Stu Miniman are going to be up there, I believe. And we're very excited; there's a lot of buzz happening about re:Invent. So follow us there through theCUBE at re:Invent. 
But what I've done on the right-hand side is excerpt a piece of Wikibon research. What we did is an analysis of all of the AWS case studies put forward on the AWS website about how people are using AWS. There are well over 650, at least there were when we looked, and we examined about two-thirds of them. And here's what we came up with: somewhere in the vicinity of 80% or so of those cases are tied back to firms that we might regard as professional software delivery organizations, whether they're SaaS companies, business services firms, or providers of games or social networks. There's a smaller piece of the pie that's dedicated to traditional enterprise-class customers, and that's a growing and important piece; we're not diminishing it at all. But the broad array of this pie chart is folks who are relatively able to hire the people, apply the skills, and devote the time necessary to learn some of the more complex of the 75-plus Amazon services that are now available. For the other part of the marketplace, the part that's moving into Amazon now, the promise of Amazon is that it's simple and straightforward, and it is, certainly more so than other options. But we anticipate that Amazon's going to have to work even harder to simplify things as it tries to attract more of that enterprise crowd. It's true that the flexibility of Amazon is spawning complexity. We expect to see new tools; in fact, there are new tools on the market from companies like Apptio, for example, for handling and managing AWS billing and services. And our CIOs are telling us that those are actually very helpful and very powerful in managing those relationships. But the big issue here is that other folks, like VMware, have done research suggesting the average shop has two to three big cloud relationships. That makes a lot of sense to us. 
As we start adding hybrid cloud into this, and the complexities of inter-cloud communication and inter-cloud orchestration start to become very real, that's going to add even more complexity overall. So I'd add to that. In terms of Amazon momentum, obviously, those of you who follow what I write know I've been covering this for quite some time, but to me, the marginal economics of Amazon's model continue to be increasingly attractive. You can see it in the operating profits: Amazon's GAAP operating profits for AWS are in the mid-20s, 25, 26%. Just to give you a sense, EMC was an incredibly profitable company, and its GAAP operating profits are in the teens. Amazon's non-GAAP operating profits are in the 30% range. So it's an incredibly profitable business, and the more it grows, the more profitable it gets. Having said that, we agree with what Peter's saying in terms of complexity. Think about API creep in Amazon, and the different proprietary APIs for each of the data services, whether it's Kinesis or EC2 or S3 or DynamoDB or EMR, et cetera. So the data complexity and the complexity of the data pipeline are growing. And I think that opens the door for the on-prem folks to at least mimic the public cloud experience to as great a degree as possible. And you're seeing companies do that, certainly in their marketing, and starting to do it in the solutions that they're delivering. So by no means are we saying Amazon takes over the world; despite the momentum, there's a window open for those that can mimic, to a large extent, the public cloud capabilities. Yeah, very important point, Dave. And as we said earlier, we do expect to see the cloud move closer to the edge, and that includes on-prem in a managed way, as opposed to presuming that everything ends up in the cloud. Physics has something to say about that, as do some of the costs of data movement. All right, so we've got one more 2017 prediction, and you can probably guess what it is. 
We've spent a lot of years and have a pretty significant practice in big data, and we've been pretty aggressive about publishing what we think is going to happen, and what is happening, in big data over the last year or so. One of the reasons we think Amazon is going to increase is precisely because we think it's going to become a bigger target for big data. Why? Because big data complexity is a serious concern in many organizations today. Now, it's a serious concern because the bespoke nature of the tools that are out there, many of which are individually extremely good, means that shops are spending an enormous amount of time just managing the underlying technology, and not as much time as they need to learning how to solve big data problems, doing a great job of piloting applications, and demonstrating to the business that the financial returns are there. The bespoke big data toolsets are multi-sourced, and shops need to cobble them together from a lot of different technology suppliers; there are a lot of uncoordinated software and hardware updates that dramatically drive up the cost of on-prem administration, a lot of conflicting commitments, both from the business as well as from the suppliers, and very, very complex contracts. And as a result, we think that's been one of the primary reasons why there have been so many pilot failures and why big data has not taken off the way it probably should have. We think, however, that in 2017, and here's our prediction, we're going to see failure rates for big data pilots drop by 50% as big vendors, IBM, Microsoft, AWS, and Google among the key players (and we'll see if Oracle gets onto that list), bring pre-packaged, problem-based analytic pipelines to market. And that's what we mean by this concept of big data single managed entities. 
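To make the "pre-packaged, problem-based analytic pipeline" idea concrete, here is a toy sketch in which the composition machinery is owned by the package and only the use-case-specific stages are supplied by the shop. The stage names and data shapes are hypothetical illustrations, not any vendor's actual offering:

```python
from functools import reduce

# Toy sketch of a "single managed entity" analytic pipeline: the stages are
# wired together ahead of time, and the shop supplies only the
# use-case-specific pieces. All stage names here are hypothetical.

def make_pipeline(*stages):
    """Compose stages left to right into one callable pipeline."""
    return lambda data: reduce(lambda d, stage: stage(d), stages, data)

# Use-case-specific pieces a shop might plug in:
def ingest(records):
    """Drop malformed (None) records on the way in."""
    return [r for r in records if r is not None]

def transform(records):
    """Derive a weighted value per record."""
    return [{"value": r["value"] * r["weight"]} for r in records]

def aggregate(records):
    """Roll the records up into one number for the business."""
    return sum(r["value"] for r in records)

pipeline = make_pipeline(ingest, transform, aggregate)

raw = [{"value": 10, "weight": 0.5}, None, {"value": 4, "weight": 2.0}]
result = pipeline(raw)
print(result)  # weighted total across the valid records
```

The point of the sketch is that the plumbing (composition, ordering, data handoff between stages) belongs to the package rather than the shop, which is exactly where the prediction says the big vendors will remove complexity.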
The idea is that a vendor can pull together all the various elements necessary to provide the underlying infrastructure, so that a shop can focus more time on making sure it understands the use case, understands how to go get the data necessary to serve that use case, and understands how to pilot and deploy the application, because the underlying hardware and system software is pre-packaged and ready to use. Now, we think the SMEs that are going to be most successful will be the ones that are not predicated only on proprietary software but utilize a lot of open source software. The ones we see that are most successful today are in fact combining the pre-packaging of technology with access to the enormous value that the open source market continues to build as it constructs new tools and delivers them with big data applications. Ultimately, and you've heard this before from us, time to value becomes the focus. Similarly for development, and we think that's one of the convergences we have here: we need big data apps, or app patterns, to start to solidify. George Gilbert has done some leading-edge research on what some of those application patterns are going to be, how those application patterns are going to drive analytic pipeline decisions, and, very important, the process of building out the business capabilities necessary to deliver repeatable big data services to the business. Now, very importantly, some of these app patterns are going to look like machine learning, cognitive, and AI in many respects; all of these are part of this use-case-to-app trend that we see. So we think that big data is kind of an umbrella for all of those different technology classes. There's going to be a lot of marketing done that tries to differentiate machine learning, cognitive, and AI. Technically there are some differences, but from our perspective, they're all part of the effort of trying to ensure that we can pull together the technology in a simpler way so that it can be applied to complex business problems more easily. One more point I'll note, Dave, is that, and you were just at World of Watson, I'd love to get your comments on this, one of the more successful single managed entities out there is in fact Watson from IBM, and it's actually a set of services, not just a device that you buy. Yeah, so a couple of comments there. One is that you can see the complexity in the market data. We've been covering big data markets for a long time now, and two things stood out when we started covering this. One is that software as a percentage of total revenue is much lower than you would expect in most markets, and that's because of the open source contribution and the multi-year collapse that we've seen in infrastructure software pricing, largely due to open source and cloud. The other piece is professional services, which have dominated spending within big data because of the complexity. I think you're right: when you look at what happened at World of Watson and what IBM and others are trying to do, and your prediction there, it's about putting together a full end-to-end data pipeline to do ingest and data wrangling, collaboration between data scientists, data engineers, application developers, and data quality people, and then bringing in the analytics piece. Essentially, what many companies, IBM included, have done is cobble together sets of tools and layer on a way to interact with those tools. So the integration has been slow in coming, but that's where the market is headed, so that we can actually build commercial off-the-shelf applications. There's been a lack of those applications. I remember, probably four years ago, Mike Olson at Hadoop World predicted that it would be the year of the big data app, and it still has not happened. And until it does, that complexity is going to reign. 
Yeah, and so again, as we said earlier, we anticipate that the need for developers to become more a part of the big data ecosystem and the need for developers to get more value out of some of the other new cloud stacks are going to come together and reinforce each other over the course of the next 24 to 36 months. So those are our 2017 predictions. Now let's take a look at our 2022 predictions, and we've got three. The first one is we do think a new IT mandate is on the horizon, consistent with all the trends we talked about: new ways of thinking about infrastructure and application architecture based on the realities of the edge; new ways of thinking about how application developers need to participate in the value creation activities of big data; new ways of organizing to take greater advantage of the new processes and new technologies for development. We think very strongly that IT organizations will organize work to generate greater value from data assets by engineering proximity of applications and data. What do we mean by that? Well, proximity can mean physical proximity, but we also mean it in terms of governance, tool similarity, and infrastructure commonality. We think that over the next four to five years we'll see a lot of effort to increase the proximity of data assets, not only from the standpoint of the raw data, but also from an infrastructure, governance, skill set, et cetera standpoint, so that we can actually do a better job of, again, generating more work out of our data by finding new and interesting ways of weaving together systems-of-record data, big analytics, IoT, and a lot of other new application forms we see on the horizon, including one that I'll talk about in a second. Data value becomes a hot topic.
We're going to have to do a better job as a community of talking about how data is valuable, how it creates value in the business, and how it has to be thought of as a source of value in building out systems. We talked earlier about the notion of people, process, and technology; we have to add data to that. Data needs to be an asset that gets consumed as we think about how business changes. So data value is going to become a hot topic, and it's something we're focused on as to what it means. We think, as Dave mentioned earlier, it's going to catalyze a lot of true private cloud solutions for legacy applications. Dave, I know you'll want to talk in a second about what this might mean, for example, things like the recent Amazon-VMware announcement. But it also means that strategic sourcing becomes a reality. The idea of just going after the cheapest solution, which, don't get me wrong, or don't get us wrong, is not going to go away. But it means that increasingly we're going to focus on new sourcing arrangements that facilitate creating greater proximity for those crucial assets that make our shop run. Okay, a couple of thoughts there, Peter. You know, there was a lot of talk a couple of years ago, and it's slowly beginning to happen, of bringing transaction and analytics systems together. What that oftentimes means is somebody takes their mainframe for the transactions and sticks an InfiniBand pipe into an Exadata. I don't think that's what everybody envisioned when we started to discuss that theme. So that's happening slowly, but it's something that we're watching. This notion of data value, and shifting from really a process economy to a data, or insight, economy, is something that's also occurring. You're seeing the emergence of the chief data officer, and our research shows that there are five things a chief data officer must do as they get started.
The first is to understand data value and how data contributes to the monetization of their company. So not monetizing the data per se, and I think that's a mistake a lot of people made early on, trying to figure out how to sell their data, but really to understand how data contributes value to your organization. The second piece is access to that data: who gets access to that data and what data sources you have. And the third is the quality and trust of that data, and those are sequential things that our research shows a chief data officer has to do. And then the other, parallel items are the relationship with the line of business and reskilling. Those are complicated issues for most organizations to undertake, and something that's going to take many, many years to play out; the vast majority of customers that we talk to say they're data driven but really aren't. And so the one other thing I want to mention today is that we did some research, for example, on the VMware-Amazon relationship, and the reason why we were positive on it is quite simple: it provides a path for VMware's customers with their legacy applications running under VMware to move those applications, and the data associated with those applications, if they choose to, closer to some of the new big data applications that are showing up in Amazon. So there's an example of this notion of making applications and data more proximate based on physics, based on governance, based on overall tooling and skilling. And we anticipate that that's going to become a new design center for a lot of shops over the course of the next few years. Now, coming to this notion of a new design center, the next thing we want to note is that IoT, the Internet of Things, plus augmented reality is going to have an impact on the marketplace.
We've gotten very excited about IoT simply by thinking about the things, but our perspective is, increasingly, we have to recognize that people are always going to be a major feature, and perhaps the greatest value-creating feature, of these systems. Augmented reality is going to emerge as a crucial actuator for the Internet of Things and People, and that's kind of what we mean: augmented reality becomes an actuator for people, as will chatbots and other types of technologies. Now, an actuator in an IoT sense is the device or set of capabilities that takes the results of models and actually turns them into real-world behavior. So if we think about this virtuous cycle that we have on the right-hand side, these are the three capabilities that we think firms are going to have to build out. You're going to have to build out an Internet of Things and People that is capable of capturing data and turning analog data into digital data so that it can be moved into these big data applications, again with machine learning and AI and cognitive being part of that, or underneath that umbrella, so that from that data we can build more models, more insights, more software that then translates into what we're calling systems of enaction. Not inaction; enaction. Businesses still serve customers, and these systems of enaction are going to generate real-world outcomes from these models and insights, and those real-world outcomes will certainly be associated with things, but they will also be associated with human beings and people. And as a consequence of this, we think it's so powerful, and maybe so important, that over the course of the next five years we anticipate we will see a new set of disciplines focused on social discovery. Historically in this industry we've been very focused on turning insights or discovery about physics into hardware.
Well, over the next few years, and Dave mentioned moving from the process economy to some new economy, we're going to see an enormous investment in understanding the social dynamics of how people work together and turning that into software: not just how accountants do things, but how customers and enterprises come together to make markets happen. And through that social discovery, create these systems of enaction so that businesses can successfully attend to and deliver the promises and the predictions that they're making through the other parts of their big data applications. So Peter, you've pointed out many times that that's the big change relative to processes: historically in the IT business, we've known what the processes are. The technology was sort of unknown and mysterious. That's flipped. It's now really the process that's the unknown piece. That's the mysterious part. The technology is pretty well understood. I think, as it relates to what you're talking about here with IoT and AR, what the practitioners that are struggling with this tell us is, first of all, there's so much analog data that people are trying to digitize. The other piece is there's a limited budget that folks have, and they're trying to figure out: do I spend it on getting more data, and will that increase my observation space, or do I spend it on better models and improving and iterating on my models? And that's a trade-off that people have to make. And of course the answer is both, but how those funds are allocated is something that organizations are really trying to better understand. There's a lot of trial and error going on, because obviously more data, in theory anyway, means you can make better decisions, but it's that iteration of the model, that trial and error and constant improvement, and both of those take significant resources, and budgets are still tight. Very true, Dave.
And in fact, George Gilbert's research with the community is starting to demonstrate that more of the value is going to come from the models as opposed to the raw data. We need the raw data to get to the models, but more of the value is going to come from the models. So that's where we think more people are going to focus their time and attention: the value will be in the insights and the models. But to go back to your point, where do you put your money? Well, you've got to run these pilots, you've got to keep up with competitors, you've got to serve customers better, so you're going to have to build all these models, sometimes in a bespoke way. But George is publishing an enormous amount of research right now that's very valuable to a lot of our community members. It really shows how those analytic pipelines, or the capabilities associated with them, are starting to become better understood, so that we can actually start gaining experience, generating efficiencies, and generating scale out of those analytic pipelines. And that's going to be a major feature underlying this basic trend. Now, this notion of people is really crucial, because as we think about the move to the Internet of Things and People, we have to ask ourselves: has digital engagement really fully considered what it means to engage people throughout their customer journey? And the answer is no, it hasn't. We believe that by 2022, IT will be given greater responsibility for management of demand chains, working to unify customer journey designs and operations across all engagement functions. By engagement functions we mean marketing, sales, products, service, and fulfillment. That doesn't mean that they all report to IT. We don't mean that at all.
But it means that IT is going to have to, again, find ways to apply data from all these different sources so that it can in fact simplify and unify and bring together consistent design and operations, so that all of these functions can be successful and support reorganization if necessary, because the underlying systems provide that degree of unity and focus on customer success. Now, this is in strong opposition to the prediction made a few years ago that marketing was going to emerge as the be-all and end-all and was going to spend more than IT. That was silly, it hasn't happened, and you'd have to redefine marketing very aggressively to see it actually happening. But when we think about this notion of putting more data to work, the first thing we know, and this is what all the digital natives have shown us, is that data can transform a product into a service. That is the basis for a lot of the new business models we're talking about, a lot of these digital-native business models and the successes that they've had, and we think it's going to be a primary feature of the IT mandate to help the business understand how more data can be put toward transforming products into services. It also means, at a tactical level, that mobile applications have been way too focused on solving the seller's problem. We want to caution folks: don't presume that because your mobile application has gotten lost in some online store somewhere, that means digital engagement's a failure. No, it means that you have to focus digital engagement on providing value throughout the customer journey, and not just from the problem to the solution where the transaction for money takes place. Too many mobile applications have been focused in a limited way on the marketer's problem within the business of trying to generate awareness and demand. Mobile has to be applied in a coherent and comprehensive way across the entire journey.
And ultimately, I hate to say this, but we think collaboration's going to make a comeback: collaboration to serve customers, so the business can collaborate better inside, but in support of serving the customers. A major, major feature of what we think is going to happen over the course of the next couple of years. Well, I think the key point there is there are many mobile apps that we love and utilize, but there are a lot that aren't so great. And the point that we've made to the community quite often is that it used to be that the brands had all the power; they had all the information. There was an asymmetry of information. The customer, the consumer, didn't really know much about pricing. The web obviously has leveled that playing field. And what many brands are trying to do is recreate that asymmetry, and maybe they got over their skis a little bit before providing value to the customer. And I think your point is that to the extent that you can provide value to that customer, that information advantage will come back to you, but you can't start with that information advantage. Great point, Dave, but it also means that IT needs to look at the entire journey and see transactions in the discover, evaluate, buy, apply, use, and fix phases throughout that entire journey, and find ways of designing systems that provide value to customers at all times and in all places. So the demand chain notion, which historically has been focused on trying to optimize the value that the buyer gets in the buy process in a cost-effective way, has to be applied to the entire engagement lifecycle. All right, so that's 2022. Now let's take a crack at our big prediction for 2027. And so, on a lot of people's minds: will we all work for AI? There have been a lot of studies done over the course of the past year and a half suggesting, for example, that 47% of jobs are going to go away. And that's not even the high end.
Actually, folks have suggested much more over the next 10 to 15 years. Now, if you take a look at the right-hand side, you see a robot Thinker. You may not know this, but when Rodin first sculpted The Thinker, what he was envisioning was someone looking down into the levels of hell as described by Dante. And I think a lot of people would agree that the notion of no work is a hell for a lot of people. We don't think it's going to happen in the way that most folks do. We believe that AI technology advances will far outpace the social advances. Some tasks will be totally replaced, but most jobs will only be partially replaced. We have to draw a clear distinction between the idea that a job performs only this or that task and the idea that a job, an individual, an employee, is part of a complex community that ensures a business is capable of serving customers. It doesn't mean we're not going to see more automation, but automation is going to focus mostly on replacing tasks. And to the degree that the tasks of a particular job are replaced, then that job will be replaced. But ultimately there's going to be a lot of social friction that gates how fast this happens. And one of the key reasons for social friction is something known in behavioral economics as loss aversion: people are more afraid of losing something than they are of gaining something. And whether it's a union, or regulations, or any number of other factors, that's going to gate the rate at which this notion that AI crushes employment occurs. AI will tend to complement, not substitute for, labor. And that's been a feature of technology for years. It doesn't, again, mean that some tasks, and some task sets that align closely with entire jobs, won't be replaced. There will be people put out of work as a consequence of this. But for the most part, you will see AI tend to complement, not fully substitute for, most jobs.
Now, this also creates a new design consideration. Historically, as technologists, we've looked at what can be done with technology and we've asked, can we do it? And if the answer is yes, we tend to go off and do it. Now we need to start asking ourselves, should we do it? And this is not just a moral imperative; it has other implications as well. Consider, for example, the remarkably bad impact that a lot of automated call centers have had on customer service from a customer experience standpoint. This has to become one of the features of how we think about bringing together, in these systems of enaction, all the different functions that are responsible for serving a customer: asking ourselves, yes, we can do it from a technical standpoint, but should we do it from a customer experience, a community relations, and even a cultural imperative standpoint as we move forward? Okay, I'll be brief, because we're wrapping up here. Well, first of all, machines have always replaced humans, but largely with physical tasks. Now we're seeing that occur with cognitive tasks. People are concerned, as Peter said. The middle class is obviously under fire. The median income in the United States has dropped from $55,000 in 1999 to just about $50,000 today. So something's going on, and clearly you can look around, whether it's at an airport with kiosks or billboards, and see electronic machines and cognitive functions replacing human functions. Having said that, we're sanguine, because the story I'll tell is that the greatest chess player in the world is not a machine. When Deep Blue beat Garry Kasparov, what Kasparov did is he started a competition in which human chess players collaborated with machines to beat the machine, and they succeeded at that. So again, I come back to this combination of technologies; combinatorial technologies are really what's going to drive the innovation curve over the next, we think, 20 to 50 years.
So it's something that is far out there in terms of our predictions, but it's also something that is relevant to society and obviously the technology industry. So thank you, everybody. We have one more slide, a conclusion slide, so let me hit these really quickly. And before I do so, let me note that our big data analyst is George Gilbert. George Gilbert, G-I-L-B-E-R-T. All right, so very quickly. The tech architecture question: IoT is going to have a major effect on how we think about architecture in the future. Microprocessor options: yep, microprocessor options are going to have an impact in the marketplace. HDDs: for the performance side of storage, flash is coming on strong. Going to the cloud: yes, the technologies are great, but development has to change its habits. Amazon momentum: absolutely going to continue. Big data complexity: it's bad, and we have to find ways to make it simpler so that we can focus more on the outcomes and the results as opposed to the infrastructure and the tooling. For 2022: your IT mandate is to drive the value of that data; get more value out of your data. The Internet of Things and People is going to become the proper way of thinking about how these new systems of enaction work, and we anticipate that demand chain management is going to be crucial to extending the idea of digital engagement. Will we all work for AI? As Dave just mentioned, there's going to be dislocation, there are going to be tasks that are replaced, but not by 2027. All right, so thank you very much for your time today. Here's how you can contact Dave and myself. We will be publishing the slides and the broadcast. Wikibon is going to deliver three coordinated predictions pieces over the course of the next two days, so look for that. Go up to SiliconANGLE; we're up there a fair amount. Follow us on Twitter, and we want to thank you very much for staying with us during the course of this session. Have a great day.