From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. Moore's law is dead, right? Think again. Massive improvements in processing power, combined with data and AI, will completely change the way we think about designing hardware, writing software, and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true, and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge, and what this means for system architectures and industry transformations in the coming decade. Moore's law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's law. But our friend Patrick Moorhead got it right when he said, quote, Moore's law, by the strictest definition of doubling chip densities every two years, isn't happening anymore, end quote. And you know what, that's true, he's absolutely correct. But he couched that statement by saying "by the strictest definition," and he did that for a reason, because he's smart enough to know that the chip industry is masterful at workarounds. Here's proof that the death of Moore's law, by its strictest definition, is largely irrelevant. My colleague David Floyer and I were hard at work this week, and here's the result. The fact is that the historical outcome of Moore's law is actually accelerating, and quite dramatically.
This graphic digs into the progression of Apple's SoC, system on chip, developments from the A9, culminating with the A14, a 5-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time, for three processor types: the CPU, which we measure here in terahertz, that's the blue line, which you can hardly even see; the GPU, the orange line, measured in trillions of floating point operations per second; and the NPU, the neural processing unit, measured in trillions of operations per second, which is that exploding gray area. Now historically, we always rushed out to buy the latest and greatest PC because the newer models had faster cycles, more gigahertz. Moore's law would double that performance every 24 months, which equates to about 40% annually. CPU performance growth has now moderated, down to roughly 30% annual improvement. So technically speaking, Moore's law as we know it is dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's even higher than that, because with these three processor types we're not even counting the impact of the DSPs and accelerator components of Apple's system on a chip, which would push this figure even higher. Apple's A14, shown on the right-hand side here, is quite amazing. It's got a 64-bit architecture, many, many cores, and a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, the types of AI that we show here continue to evolve: facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired, and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack?
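The growth-rate arithmetic above is worth making explicit: doubling every 24 months works out to roughly 41% per year, the "about 40% annually" figure, while a sustained 118% annual pace doubles in well under a year. A quick sketch of the math:

```python
import math

def annual_rate_from_doubling(years_to_double: float) -> float:
    """Annual growth rate implied by doubling every `years_to_double` years."""
    return 2 ** (1 / years_to_double) - 1

# Classic Moore's law: densities double every 2 years, i.e. ~41% per year,
# the "about 40% annually" figure cited in this segment.
moores = annual_rate_from_doubling(2.0)
print(f"Doubling every 24 months ~= {moores:.0%} per year")

# Conversely, a sustained 118%-per-year improvement doubles in under a year.
years_to_double = math.log(2) / math.log(1 + 1.18)
print(f"118%/yr doubles in ~{years_to_double:.2f} years")
```

The numbers here simply restate the figures quoted in the segment; nothing new is being measured.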
Well, we recently reported Satya Nadella's epic quote that we've now reached peak centralization. So this graphic paints a picture that's quite telling. We just shared that processing power is exploding, and the costs consequently are dropping like a rock. Apple's A14 costs the company approximately $50 per chip. ARM, at its V9 announcement, said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption, and they said this chip will cost a buck. A dollar to shave 10% off your refrigerator's electricity bill. It's just astounding. A look at where the expensive bottlenecks are: it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e. wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now, with custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99% of it, is going to stay at the edge. We love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves something like five minutes of data, but some data will occasionally go back to the cloud to train AI models. We're going to come back to that. But this picture says, if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding. Cisco is already designing its own chips. But Dell; HPE, which used to do a lot of its own custom silicon; Pure Storage; NetApp, the list goes on and on and on. Either you're going to start designing custom silicon, or you're going to get disrupted, in our view.
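The edge-data pattern described above, keeping only a short rolling window locally and uploading selectively, can be sketched roughly as follows. This is purely an illustration: the class name, the five-minute window, and the trigger flag are assumptions made for the sketch, not any vendor's actual design.

```python
from collections import deque
import time

class EdgeRecorder:
    """Keep only the last `window_s` seconds of sensor frames locally, and
    flag interesting frames for later upload to the cloud for retraining.
    Purely illustrative -- not any vendor's actual design."""

    def __init__(self, window_s: float = 300.0):  # ~5 minutes
        self.window_s = window_s
        self.buffer = deque()    # (timestamp, frame) pairs
        self.upload_queue = []   # frames worth sending back to the cloud

    def record(self, frame, interesting: bool = False, now=None):
        t = time.time() if now is None else now
        self.buffer.append((t, frame))
        if interesting:          # e.g. an animal ran into the road
            self.upload_queue.append(frame)
        # Drop anything older than the rolling window.
        while self.buffer and t - self.buffer[0][0] > self.window_s:
            self.buffer.popleft()
```

The key property is that ordinary frames age out of the local buffer and are never transmitted; only the flagged events travel back for model training.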
AWS, Google, and Microsoft are all doing it for a reason. As is IBM. And as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected, and bringing to bear this processing power that we're talking about to create new capabilities like we've never seen before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly, in his book Seeing Digital, author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence, and says that there's nothing artificial about machine intelligence, just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting nonetheless; words matter. We hear a lot about machine learning and deep learning, and think of them as subsets of AI. Machine learning applies algorithms and code to data to get, quote unquote, smarter, to make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now, deep learning is a more advanced type of machine learning that uses more complex math. But the point that we want to make here is that today, much of the activity in AI is around building and training models, and this is mostly happening in the cloud. We think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model that we were just talking about: taking real-time data from sensors, processing that data locally, applying the training that was developed in the cloud, and making micro-adjustments in real time. So let's take an example. Again, we love Tesla examples.
Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road conditions, angles of the tires, tire wear, tire pressure, all this data, and it keeps testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car, gets data from sensors, and makes these micro-adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time because there's so much of it; it just can't push it all back to the cloud. It can, however, selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data because they notice that there are a lot of accidents in New England in certain months. So maybe Tesla takes that snapshot and sends it back to the cloud, combines it with other data from other parts of the country or other regions of New England, and perfects that model further to improve safety. This is just one example of thousands and thousands that are going to develop in the coming decade. And I want to talk about how we see this evolving over time. Inference is where we think the value is. That's where the rubber meets the road, so to speak, based on the previous example. Now, this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today, and how these applications will mature over time as inference becomes more and more mainstream. The opportunities for AI inference at the edge and in IoT are enormous, and we think that over time, 95% of that spending is going to go to inference, where it's probably only 5% today.
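To make the modeling-versus-inference split concrete, here is a deliberately toy sketch: a trivial one-weight "model" is fit on pooled historical data (the cloud side), then the frozen weight is applied to live sensor readings (the edge side). The function names and the linear rule are illustrative assumptions; a real system would use far richer models, but the division of labor is the same.

```python
# Toy split between cloud training and edge inference. The "model" is a
# single coefficient; the point is the division of labor, not the math.

def train_in_cloud(samples):
    """Cloud side: iterate over historical (speed, steering_adjustment)
    pairs and fit one coefficient by averaging the ratios."""
    ratios = [adj / speed for speed, adj in samples if speed > 0]
    return sum(ratios) / len(ratios)   # the "trained model": one weight

def infer_at_edge(weight, sensor_speed):
    """Edge side: apply the frozen model to a live sensor reading and
    return a micro-adjustment in real time, no round trip to the cloud."""
    return weight * sensor_speed

# Training happens once, on pooled fleet data...
model = train_in_cloud([(10, 2.0), (20, 4.0), (30, 6.0)])
# ...inference then runs locally on each car's own sensor readings.
print(infer_at_edge(model, 25))   # -> 5.0
```

Retraining fits the picture too: the flagged snapshots sent back from the edge become new `samples` for the next cloud training pass.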
Now, today's modeling workloads are pretty prevalent in things like fraud, ad tech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better over time. Now, in the middle here, we show the industries, which are all going to be transformed by these trends. One of the points that Moschella made in his book is that historically, vertical industries have been pretty stovepiped. They have their own stacks: sales and marketing, engineering, supply chains, et cetera. And experts within those industries tend to stay within those industries, and they're largely insulated from disruption from other industries, unless maybe they were part of a supply chain. But today you see all kinds of cross-industry activity: Amazon entering grocery and media, Apple in finance and potentially getting into EVs, Tesla eyeing insurance. There are many, many examples of tech giants crossing traditional industry boundaries, and the reason is data. They have the data, and they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain and continue to improve. Blockchain performance today is not great, and it's very overhead intensive with all that encryption, but as these platforms take advantage of this new processing power and better software and AI, they could very well disrupt traditional payment systems. And again, there are so many examples here. But what I want to do now is dig into enterprise AI a bit. Just a quick reminder: we showed this last week in our ARM V9 post. This is data from ETR. The vertical axis is net score, which is a measure of spending momentum. The horizontal axis is market share, or pervasiveness in the data set.
The red line at 40% is a subjective anchor that we use; anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; frankly, it's an adjacency to AI. And you could even argue so is cloud, which is where all the ML action is taking place today. But that will change, we think, as we just described, because data is going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph: spending velocity on the vertical axis, market share on the horizontal axis. Microsoft, AWS, and Google, the big cloud players, of course dominate AI and machine learning. Facebook's not on here; Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, and they have the scale. And as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here; they're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left; they have an extremely high net score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI; they're super high on the Y axis. Dataiku helps create machine learning based apps. C3 AI, you're hearing a lot more about them; Tom Siebel's involved in that company. It's an enterprise AI firm, and they're running a lot of ads now about doing AI in a responsible way. That kind of real enterprise AI has sort of always been IBM Watson's calling card. There's SAP with Leonardo, Salesforce with Einstein. And IBM Watson is right there, just at the 40% line. You see Oracle is there as well.
They're embedding machine intelligence with their self-driving database, as they call it; that's machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it. You don't have to be an expert at AI. The hard part is going to be how and where to apply AI, and the simplest answer there is: follow the data. There's so much more to the story, but we have to leave it there for now, and I want to summarize. We have been pounding the table that the post-x86 era is here. It's a function of volume: ARM wafer volumes are 10x those of x86. Pat Gelsinger understands this. That's why he made that big announcement; he's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors can't be overstated. And today we've quantified something that we really haven't seen before: the actual performance improvements that we're seeing in processing today are far outstripping anything we've seen historically. Forget Moore's law being dead; that's irrelevant. The original finding is being blown away this decade. And who knows what the future holds with quantum computing. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from consumer use cases first. Apple continues to lead the way, and Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing that deep integration. You see this with Oracle; it's kind of a really good example of an iPhone for the enterprise, if you will.
It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with ARM's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside on Pat Gelsinger: he may face the massive challenges that we laid out a couple of posts ago in our Intel Breaking Analysis, but he is right on, in our view, that semiconductor demand is increasing with no end in sight. We don't think we're going to see the ebbs and flows we've seen in the past, those boom and bust cycles for semiconductors. We just think that prices are coming down, the market's elastic, and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI. Rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI, and you're going to be applying it. Data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and he's right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio, in storage and networking, and you'll be increasingly bringing compute to the data. That data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely reimagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible, and you're going to be writing new applications to take advantage of this and use AI to change the world, literally.
You'll have to figure out how to get access to the most relevant data. You'll have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah, well, that's a whole other topic. I think for now we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about, and we should protect it. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay, that's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind it. He's done some great work there. Remember, these episodes are all available as podcasts wherever you listen. Just search Breaking Analysis Podcast, and please subscribe to the series. We appreciate that. Check out ETR's website at etr.plus. We also publish a full report with more detail every week on wikibon.com and siliconangle.com, so check that out. You can get in touch with me at david.vellante@siliconangle.com, you can DM me on Twitter @dvellante, or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights, powered by ETR. Stay safe, be well, and we'll see you next time.