From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We are still getting through the year 2020. It's still the year of COVID, and there's no end in sight, I think, until we get to a vaccine. That said, we're really excited to have one of our favorite guests. We haven't had him on for a while; I haven't talked to him for a long time. He used to, I think, hold the record for the most CUBE appearances of probably any CUBE alumni. We're excited to have him joining us from his house in Palo Alto: Bill Schmarzo. You know him as the Dean of Big Data. He's got more titles: he's the Chief Innovation Officer at Hitachi Vantara. We used to call him the Dean of Big Data kind of for fun. Well, Bill goes out and writes a bunch of books, and now he teaches at the University of San Francisco School of Management as an executive fellow. He's an honorary professor at NUI Galway. I think he likes to go to that side of the pond. And he's a many-time author now. Go check out his author profile on Amazon: the Big Data MBA, The Art of Thinking Like a Data Scientist, and another big data kind of workbook. Bill, great to see you.

Thanks, Jeff. You know, I miss my time on theCUBE. These conversations have always been great. We've always kind of poked around the edges of things. A lot of our conversations have always been, I thought, very leading edge. And the title Dean of Big Data is courtesy of theCUBE. You guys were the first ones to give me that name, out of one of the very first Strata conferences, where you dubbed me the Dean of Big Data because I taught a class there called the Big Data MBA. And look what's happened since then.

I love it. It's all on you guys. I love it. And we've outlasted Strata. Strata doesn't exist as a conference anymore. So, you know, part of that, I think, is because big data is now everywhere, right?
It's not the standalone thing. But there's a topic, and I'm holding in my hands a paper that you worked on with a colleague, Dr. Mouwafac Sidaoui, talking about: what is the value of data? What is the economic value of data? This is a topic that's been thrown around quite a bit. I think you list a total of 28 reference sources in this document, so it's a well-researched piece of material, but it's a really challenging problem. So before we get into the details: from your position, having done this for a long time and in what you're doing today (you used to travel every single week to go out and visit customers, actually do implementations, and really help people think these through), when you think about the value, the economic value, how did you start to frame that to make sense of it and make it a manageable problem to attack?

So Jeff, the research project was eye-opening for me. One of the advantages of being a professor is that you have access to all these very smart, very motivated, very free research resources. And one of the problems that I've wrestled with as long as I've been in this industry is: how do you figure out what data is worth? So what I did is I took these research students and I sicced them on this problem. I said, I want you to do some research. Help me understand, you know, what is the value of data? I've seen all these different papers and analysts and consulting firms talk about it, but nobody's really got this thing licked. So we launched this research project at USF, Professor Mouwafac Sidaoui and I together. And we were bumping along the same old path that everyone else took, which hinged on: how do we get data on our balance sheet? That was always the motivation, because as a company, we're worth so much more because our data is so valuable, so how do I get it on the balance sheet? And so we were headed down that path, trying to figure out how you get it on the balance sheet.
And then one of my research students comes up to me and she says, Professor Schmarzo, data is kind of an unusual asset. I said, what do you mean? She goes, well, if you think about data as an asset, it never depletes, it never wears out, and the same data set can be used across an unlimited number of use cases at a marginal cost equal to zero. And when she said that, it's like, holy crap, the light bulb went off. It's like, wait a second, I've been thinking about this entirely wrong for the last 30-some years of my life in this space. I've had the wrong frame. I kept thinking about this as an accounting conversation, and accounting determines valuation based on what somebody's willing to pay. If you go back to Adam Smith's 1776 The Wealth of Nations, he talks about valuation techniques. One of the valuation techniques he talks about is value in exchange: the value of an asset is what someone's willing to pay you for it. So the value of this bottle of water is what someone's willing to pay you for it. Everybody fixates on this value-in-exchange methodology. That's how you put it on the balance sheet, that's how you run depreciation schedules, that dictates everything. But Adam Smith also talked about, in that book, another valuation methodology: value in use, which is an economics conversation, not an accounting conversation. And when I realized that my frame was wrong... yeah, I had the right book. I had Adam Smith, I had The Wealth of Nations, I had all that good stuff, but I hadn't read the whole book. I had missed this whole concept of economic value, where value is determined not by how much someone's willing to pay you for it, but by the value you can derive by using it. So Jeff, when that person made that comment, the entire research project, and I've got to tell you, my entire life, did a total 180. Just a total 180-degree change in how I was thinking about data as an asset.

Right.
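The distinction Schmarzo draws here can be made concrete with a little arithmetic. The sketch below (a minimal Python illustration with entirely invented numbers; none of these figures come from the paper) contrasts a one-time value-in-exchange price with the value in use that accumulates as the same data set is reused across use cases at near-zero marginal cost:

```python
# Hypothetical numbers for illustration only -- not from the paper.
# Value in exchange: what a buyer would pay for the raw data set, once.
value_in_exchange = 250_000  # assumed one-time sale price

# Value in use: the same data set applied across many use cases.
# Because data never depletes, each reuse adds value at ~zero marginal cost.
use_case_value = {
    "customer retention": 1_200_000,
    "demand forecasting": 800_000,
    "fraud detection": 450_000,
}
one_time_curation_cost = 300_000  # clean/transform the data once

value_in_use = sum(use_case_value.values()) - one_time_curation_cost

print(value_in_use)                       # 2150000
print(value_in_use > value_in_exchange)   # True
```

The point is structural rather than numerical: because reuse is nearly free, value in use scales with the number of use cases, while value in exchange is captured exactly once.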
Well, Bill, it's funny though, because that's kind of captured in finance versus accounting, right? It's really hard to put data in an accounting framework, because as you said, it's not like a regular asset. You can use it a lot of times, you can use it across lots of use cases, and it doesn't degrade over time. In fact, it used to be a liability, because you had to buy all this hardware and software to maintain it.

Yes.

But if you look at the finance side, if you look at the pure-play internet companies like Google, like Facebook, like Amazon, and you look at their valuation, right? We used to have this thing, we still have this thing, called goodwill, which was kind of this capture between what the market established the value of the company to be and what was reflected when you summed up all the assets on the balance sheet. And you had this leftover thing where you could just plug in goodwill. I would hypothesize that for these big giant tech companies, the market has baked in the value of the data, has kind of put a present value on it over a long period of time and multiple projects, and we see it captured probably in goodwill versus being called out as an individual balance sheet item.

So, I don't know accounting, I'm not an accountant, thank God, right? And I know that goodwill is one of those things I remember from my MBA program: when you buy a company and you look at the value you paid versus what it was worth on the books, the difference gets stuck into this category called goodwill, because no one knew how to figure it out. So if the company at book value was a billion dollars but you paid five billion for it, well, you're not an idiot, so that four billion extra you paid must be goodwill. And they stick it in goodwill.
And I think there's actually a way that goodwill gets depreciated as well. So it could be that, but I'm staying totally away from the accounting framework. I think that's distracting; trying to work within the GAAP rules is more of an inhibitor. And we talk about the Googles of the world and the Facebooks of the world and the Netflixes of the world and the Amazons, companies that are great at monetizing data. Well, they're great at monetizing data because they're not selling it, they're using it. Google is using their data to dominate search, right? Netflix is using it to be the leader in on-demand video. It's how they use all the data, how they use the insights about their customers, their products and their operations, to really drive new sources of value. So to me, when you start thinking about it from an economics perspective: for example, why is the same car that I buy and that an Uber driver buys more valuable to the Uber driver than it is to me? Well, the bottom line is the Uber driver is going to use that car to generate value, right? That $40,000 car they bought is worth a lot more because they're going to use it to generate value. For me, it sits in the driveway and the birds poop on it. So it's this value-in-use concept. And by the way, most organizations really struggle with this value-in-use concept. When you talk to them about data monetization, they think about the chief data officer trying to sell data, knocking on doors, shaking their tin cup, saying, buy my data. No, no one wants your data. Your data is more valuable for how you use it to drive your operations than it is to sell to somebody else.

Right, right. Well, one of the other things that's really important from an economics concept is scarcity, right?
And a whole lot of economics is driven around scarcity: how do you price for scarcity so that the market evens out and the price matches up to the supply? What's interesting about the data concept is that there is no scarcity anymore, and as you've outlined, everyone has giant numbers going up and to the right in terms of the quantity of data there is and is going to be. But what you point out very eloquently in this paper is that the scarcity is around the resources to actually do the work on the data to get the value out of it. And I think there's this interesting step function from raw data, which has really no value in and of itself, right? Until you start to apply some concepts to it, you start to analyze it, and most importantly, you have some context by which you're doing all this analysis to then drive that value. And I thought a really interesting part of this paper was that it gets beyond the arguing we're kind of doing here and gets into specifics, where you can measure value around a specific business objective, and not only that, but then the investment of the resources on top of the data to extract the value that drives your business process forward. So it's a really different way to think about scarcity: not the data per se, but the ability to do something with it.

You're spot on, Jeff, because organizations don't fail because of a lack of use cases. They fail because they have too many. So how do you prioritize, right? Scarcity is not an issue on the data side, but it is an issue on the people side. You don't have unlimited data scientists, right? So how do you prioritize and focus on those opportunities that are most important? I'll tell you, that's not a data science conversation, that's a business conversation, right? Figuring out how you align the organization to identify and focus on those use cases that are most important.
In the paper, we go through several different use cases using Chipotle as an example. The reason why I picked Chipotle is because, well, I like Chipotle, so I could go there and write it off as research, right? But think about the number of use cases where a company like Chipotle, or any other company, can leverage its data to drive its key business initiatives and key operational use cases. It's almost unbounded, which, by the way, is a huge challenge. In fact, I think part of the problem we see with a lot of organizations is that because they do such a poor job of prioritizing and focusing, they try to solve the entire problem in one fell swoop, right? It's like the old ERP big-bang projects: well, I'm just going to spend $20 million to buy this analytic capability from company X, I'm going to install it, and then magic's going to happen.

And then magic's going to happen. Right, right.

And then magic's going to happen, right? And magic never happens. We get crickets instead, because the biggest challenge isn't around how do I leverage the data. It's about where do I start? What problems do I go after? And how do I make sure the organization is bought in to, use case by use case, building out your data and analytics architecture and capabilities?

Yeah, and you start backwards from really specific business objectives in the use cases that you outline here, right? I want to increase my average ticket by X. I want to increase my frequency of visits by X. I want to increase the number of items per order from X to 1.2X or 1.3X. So from there you get a nice big revenue number that you can plan around, then work backwards into the amount of effort it takes, and then you can determine: is this a good investment or not? So it's a really different way to get back to the value of the data, and more importantly, the analytics and the work to actually extract the information.
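The "work backwards from a specific objective" approach lends itself to a back-of-envelope calculation. Here is a minimal sketch, using invented numbers (the paper's actual Chipotle figures are not reproduced here), of sizing one use case before committing resources to it:

```python
# Back-of-envelope sizing of a single use case. All figures are
# hypothetical assumptions, not numbers from the paper.
visits_per_year = 2_000_000
avg_ticket = 10.00        # dollars per visit
ticket_lift = 0.2         # target: grow average ticket from X to 1.2X

# Expected annual revenue lift if the target is hit.
revenue_lift = visits_per_year * avg_ticket * ticket_lift

# Estimated cost of the data science work, engineering, and rollout.
implementation_cost = 1_500_000

roi = (revenue_lift - implementation_cost) / implementation_cost
print(round(roi, 2))  # prints 1.67
```

With a concrete revenue target and a cost estimate, "is this a good investment?" becomes a comparison of numbers rather than a debate, which is exactly what makes use cases comparable and prioritizable.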
The data and analytic technologies available to us, and their very composable nature, allow us to take this use-case-by-use-case approach. I can build out my data lake one use case at a time. I don't need to stuff 25 data sources into my data lake and hope that some of them are valuable. I can use the first use case to say, oh, I need these three data sources to solve that use case; I'm going to put those three data sources in the data lake. I'm going to go through the entire curation process of making sure the data has been transformed and cleansed and aligned and enriched, the metadata, the governance, all that kind of stuff. But I'm going to do that use case by use case, because the use case can tell me which data sources are most important for that given situation. And I can build up my data lake and my analytics one use case at a time, and there is a huge impact when you do it that way. Let me throw in something that's not really covered in the paper, but is very much covered in the new book I'm working on, which is: in knowledge-based industries, the economies of learning are more powerful than the economies of scale.

Now think about that. Say that again, say that again.

Yeah, the economies of learning are more powerful than the economies of scale. What that means is, what I learn on the first use case that I build out, I can apply to the second use case, the third use case, the fourth use case. So when I put my data into my data lake for my first use case (the paper covers this), once it's in my data lake, the cost of reusing that data in my second, third and fourth use cases is basically zero; the marginal cost is zero. So I get this ability to learn which data sets are most important and to reapply that learning across the organization. This learning concept: I learn use case by use case.
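The "marginal cost of reuse is zero" dynamic can be sketched in a few lines. In this illustration (use-case names, data sources, and the onboarding cost are all hypothetical), each use case needs a handful of sources, but sources already curated for earlier use cases are reused for free, so the marginal cost of each new use case shrinks:

```python
# Sketch: building the data lake one use case at a time. Sources already
# curated for earlier use cases are reused at ~zero marginal cost.
onboard_cost_per_source = 50_000  # assumed one-time curation cost

use_cases = [
    {"pos_transactions", "loyalty", "weather"},
    {"pos_transactions", "loyalty", "social"},           # reuses 2 of 3
    {"pos_transactions", "weather", "social", "supply"}, # reuses 3 of 4
]

curated = set()        # sources already in the data lake
marginal_new = []      # new sources onboarded per use case
for i, needed in enumerate(use_cases, start=1):
    new = needed - curated
    marginal_new.append(len(new))
    curated |= needed
    print(f"use case {i}: {len(new)} new sources, "
          f"marginal cost ${len(new) * onboard_cost_per_source:,}")
```

Running this prints 3 new sources for the first use case, then 1, then 1: the marginal cost curve flattens exactly as described, because the first use case pays for the data the later ones reuse.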
I don't have to take a big economies-of-scale approach and start with 25 data sets, of which only three or four might be useful, while incurring the overhead for all those other unimportant data sets because I didn't take the time to figure out what my most important use cases are and what data I need to support them.

Right. I mean, should people even think of the data per se, or should they really readjust their thinking around the application of the data? Because the data in and of itself means nothing, right? 55: is that fast or slow? Is that old or young? Well, it depends on a whole lot of things. Am I walking, or am I in a brand-new Corvette? So it's funny to me that the data in and of itself really doesn't have any value. It doesn't provide any direction into a decision, or a higher-order predictive analytic, until you start to manipulate it. So is data even the right discussion? Should we really be talking about the capabilities to do stuff with it, and get people focused on that?

So Jeff, there are so many points to hit on there. The application of the data is what's of value. And theCUBE, you guys used to be famous for talking about separating signal from noise, right? Well, how do you know in your data set what's signal and what's noise? The use case will tell you, right? If you don't know the use case, you have no way of figuring out what's important. One of the things that I still rail against, and it still happens: somebody will walk up to my data science team and say, here's some data, tell me what's interesting in it. Well, how do you separate signal from noise if you don't know the use case? So I think you're spot on, Jeff. The way to think about this is: don't become data-driven, become value-driven. And value is driven from the use case, the application, the use of the data to solve that particular use case.
So organizations get fixated on being data-driven. I hate the term data-driven. It's as if there's some sort of frigging magic from having data. No, data is of no value. It's how you use it to derive the customer, product and operational insights that drive value.

Right. So there's an interesting step function, and we talk about it all the time. You're out in the weeds working with Chipotle to increase their average ticket by 1.2X; we talk more here kind of conceptually. And one of the great conceptual holy grails within a data-driven economy is working up this step function, and you've talked about it here: from descriptive to diagnostic to predictive, and then the holy grail, right, prescriptive, where we're way ahead of the curve. This comes up in tons of stuff around unscheduled maintenance, and there are a lot of specific applications. But do you think we spend too much time shooting for that fourth-order impact instead of focusing on the small wins?

Well, you certainly have to build your way there. I don't think you can get to prescriptive without doing predictive, and you can't do predictive without doing descriptive, and such. But let me throw a really interesting one at you, Jeff. I think there's even one beyond prescriptive, as we talk more and more about autonomous: autonomous analytics, right? And one of the things the paper talked about that didn't click with me at the time was this idea of orphaned analytics. You and I kind of talked about this before the call. One thing we had noticed in the research was that a lot of these very mature organizations, who had advanced from the retrospective analytics of BI to the descriptive to the predictive to the prescriptive, were building one-off analytics to solve a problem and getting value from them, right? But they never reused those analytics. They were done one-off and then thrown away.
These organizations were so good at data science and analytics that it was easier for them to just build from scratch than to dig around and try to find something that was never actually built to be reused. And so this whole idea of orphaned analytics didn't really occur to me, didn't make any sense to me, until I read this quote from Elon Musk. Elon Musk made this statement: I believe that when you buy a Tesla, you're buying an asset that appreciates in value, not depreciates, through usage. I was thinking, wait a second, what does that mean? He believes you're buying an asset that appreciates, not depreciates, in value. And of course, the first response everybody had was, oh, it's like a 1964½ Mustang, you know? It's rare, so everybody's going to want these things. So buy one, stick it in your garage, and 20 years later you bring it out and it's worth more money. No, no, there are 600,000 of these things roaming around the streets, you know? They're not rare. What he meant is that he is building an autonomous asset. The more it's used, the more valuable it's getting: the more reliable, the more efficient, the more predictive, the more safe this asset is getting. So there is this level beyond prescriptive, where we can think about how we leverage artificial intelligence, reinforcement learning, deep learning, to build assets that get smarter the more they are used. That's beyond prescriptive. That's an environment where these things are learning, in many cases with minimal or no human intervention. That's the real aha moment. That's what I missed with orphaned analytics, and why it's important to build analytics that can be reused over and over again: because every time you use those analytics in a different use case, they get smarter. They get more valuable. They get more predictive.
To me, that's the aha moment that blew my mind. I realized I had missed that in the paper entirely, and it took me basically two years to realize I had missed the most important part of the paper.

Right. Well, it's an interesting take, really, on why the valuation, I would argue, is reflected in Tesla, which is a function of the data. And there's a phenomenal video, if you've never seen it, from their autonomous vehicle day. It might be a year or so old, and he's got his number one engineers from, I think, the microprocessor group, the computer vision group, as well as the autonomous driving group. And there are a couple of really great concepts I want to follow up on from what you said. One is that they have this thing called the fleet. To your point, there are hundreds of thousands of these things, if they haven't hit a million, that are calling home, reporting home every day, as to exactly how everyone took the northbound 101 on-ramp off of University Avenue. How fast did they go? What line did they take? What g-forces did they take? And every one of those cars feeds into the system, so that when they do the autonomous update, not only are they using all the regular things they would use to map out that 101 northbound entry, but they've got the data from all the cars that have been doing it. And when that other autonomous car hit the pedestrian in Phoenix a couple of years ago, which was not good: sad, it killed a person, a dark, tough situation. But we were doing an autonomous vehicle show, and a guy made a really interesting point, right? When something like that happens, typically, if I was in a car wreck or you were in a car wreck, hopefully not, you know, I learn, the person we hit learns, maybe a couple of witnesses learn, maybe the inspector. But nobody else learns.
But now, in this economy, every single person can learn from every single experience, with every vehicle contributing data within that fleet. To your point, it's just an order-of-magnitude different way to think about things.

Think about it: a 1% improvement compounded 365 times equals, I think, a 38X improvement. The power of 1% improvements over these 600,000-plus cars that are learning. By the way, even when the autonomous FSD, the full self-driving module, isn't turned on, it runs in shadow mode. So it's learning from the human drivers, the human overlords. It's constantly learning. And not only do they collect all this data; I did a little research, I pulled down some of their job ads, and they've built a giant simulator, right? Every night they're basically simulating billions and billions more driven miles because of that simulator. And think about all the data he's capturing as these cars are riding on the road. By the way, they don't use LiDAR, they use video, right? So he's driving by malls; he knows how many cars are in the mall. He's driving down roads; he knows how old the cars are and which ones should be replaced. He's sitting on this incredible wealth of data. If anybody could simulate what's going on in the world and figure out how to get out of this COVID problem, it's probably Elon Musk, with the data he's captured courtesy of all those cars.

Yeah, yeah, it's really interesting, and we're seeing it now. There's a new autonomous drone out, the Skydio, and they just announced their commercial product, right? And again, it completely changes the way you think about how you use that tool, because you've just eliminated the complexity of driving. I don't want to drive it; I want to tell it what to do.
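For the record, the compounding arithmetic behind that "38X" figure checks out, and it's worth seeing how small the daily step is:

```python
# 1% better every day, compounded over a year:
# (1.01)**365 is roughly 37.8, i.e. the ~38X figure quoted above.
yearly_gain = 1.01 ** 365
print(round(yearly_gain, 1))  # 37.8
```

That is the whole point of fleet learning: no single day's improvement is dramatic, but the compounding across days (and across hundreds of thousands of cars) is.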
And so you're seeing this whole application for the Air Force and for companies, around things like measuring piles of coal and measuring these huge assets that need volumetric measurement, that these things can go and map out, and farming, et cetera, et cetera. So the autonomy piece, that's really insightful. I want to shift gears a little bit, Bill, and talk about some theories you had in here about thinking of data as an asset, data as a currency, data as monetization. How should people think of it? Because I don't think currency is very good; it's really not an exchange of value that we're doing, like a classic asset. And I think the data-as-oil metaphor is horrible, right? To your point, oil gets burned up once and can't be used again; data can be used over and over and over. It's basically like feedstock for all kinds of stuff, but the feedstock never goes away. So is that even the right way to think about it? Do we really need to shift the conversation, get past the idea of data, and get much more into the idea of information: actionable information, useful information that, oh, by the way, happens to be powered by data under the covers?

Yeah, good question, Jeff. Data is an asset in the same way that a human is an asset. But just having humans in your company doesn't drive value; it's how you use those humans. And so it's really, again, the application of the data around the use cases. So I still think data is an asset, but I'm not fixated on putting it on my balance sheet. The minute you talk about putting it on a balance sheet, I immediately put blinders on; it inhibits what I can do. I want to think about this as an asset that I can use to drive value: value to my customers. So I'm trying to learn more about my customers' tendencies and propensities and interests and passions, and trying to learn the same thing about my cars' behaviors and tendencies, and my operations' tendencies.
And so I do think data is an asset, but it's a latent asset, in the sense that it has potential value but actually has no value per se sitting on a balance sheet. So I think it's an asset, but I worry about the accounting concept immediately hijacking what we can do with it. To me, the value of data comes from how it interacts with other assets. So maybe data itself is not so much an asset as it is fuel for driving the value of assets. It fuels my use cases. It fuels my ability to retain and get more out of my customers. It fuels my ability to predict when my products are going to break down, and even to have products that self-monitor, self-diagnose and self-heal. So data is an asset, but it's only a latent asset, in the sense that it sits there and doesn't have any value until you actually put something to it and shock it into action.

So let's shift gears a little bit and stop talking about the data and talk about the human factors, because you said one of the challenges is people trying to bite off more than they can chew. We have the role of chief data officer now, and to your point, maybe that mucks things up more than it helps. But in all the customer cases that you've worked on, is there a consistent pattern of behavior, personality, or type of project that enables some people to grab those resources to apply to their data and have successful projects? Because to your point, right, there's too much data and there are too many projects. You talk a lot about prioritization, but there are a lot of assumptions in the prioritization model: that you know a whole lot of things, especially if you're comparing project A over in group A with project B in group B, and the two may not really know the economics across that. But for an individual person who sees the potential, what advice do you give them?
What kind of characteristics do you see, either in the type of project, the type of boss, or the type of individual, that really lend themselves to a higher probability of a successful outcome?

So first off, you need to find somebody who has a vision for how they want to use the data: not just collect it, but how they're going to try to change the fortunes of the organization. So it always takes a visionary. It may not be the CEO; it might be the head of marketing or the head of logistics, or it could be a CIO or a chief data officer as well. But you've got to find somebody who says, we have this latent asset that we could be doing more with, and we have a series of organizational problems and challenges against which I could apply this asset, and I need to be the matchmaker that brings these together. Now, the most powerful tool I've found for marrying the latent capabilities of data with all the revenue-generating opportunities on the application side (because there's a countless number of them) is design thinking. The reason why I think design thinking is so important is that one of the things design thinking does a great job of is giving everybody a voice in the process of identifying, validating, valuing and prioritizing the use cases you're going to go after. Let me say that again: the challenge organizations have is identifying, validating, valuing and prioritizing the use cases they want to go after. Design thinking is a marvelous tool for driving organizational alignment around where we're going to start, what's going to be next, why we're going to start there, and how we're going to bring everybody together. Big data and data science projects don't die because of technology failure. Most of them die because of passive-aggressive behaviors in the organization, because you didn't bring everybody into the process.
Everybody's voice didn't get a chance to be heard, and that one person whose voice didn't get heard, they're going to get you. They may own a certain piece of data, they may own something, but they're just lying in wait, waiting for their chance to come up and snag you. So what you've got to do is proactively bring these people together. This is part of our value engineering process. We have a value engineering process around envisioning, where we bring all these people together. We help them understand how data in itself is a latent asset, but how it can be used, from an economics perspective, to drive all this value. We get them all fired up on how it can solve any one of these use cases, but you've got to start with one. And you've got to embrace this idea that I can build out my data and analytic capabilities one use case at a time. The first use case I go after and solve makes my second one easier, which makes my third one easier, right? When you start going use case by use case, two really magical things happen. Number one, your marginal costs flatten. Because you're building out your data lake one use case at a time, bringing all the important data sources into that data lake one use case at a time, at some point you've got most of the important data you need and you don't need to add another data source; you've got what you need, so your marginal costs start to flatten. And by the way, if you build your analytics as composable, reusable, continuously learning analytic assets, not as orphaned analytics, pretty soon you have all the analytics you need as well, so your marginal costs flatten. Effect number two is that, because I have the data and analytics, I can accelerate time to value and I can de-risk projects as I go, use case by use case.
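The "identifying, validating, valuing and prioritizing" step where everybody gets a voice can be sketched as a simple scoring exercise. In this minimal illustration (the stakeholders, use cases, and scores are all invented; this is one possible mechanic, not the value engineering process itself), each stakeholder scores every candidate use case on business value and feasibility, and averaging the votes yields a ranked roadmap:

```python
# Minimal sketch of a stakeholder-voting prioritization matrix.
# All names and scores below are hypothetical.
from statistics import mean

# stakeholder -> {use_case: (business_value, feasibility)}, each 1-10
votes = {
    "marketing":  {"increase avg ticket": (9, 7), "reduce churn": (8, 5)},
    "operations": {"increase avg ticket": (7, 8), "reduce churn": (6, 4)},
    "finance":    {"increase avg ticket": (8, 8), "reduce churn": (9, 3)},
}

use_cases = {uc for scores in votes.values() for uc in scores}

# Rank by the average of value x feasibility across every stakeholder,
# so each voice contributes equally to the roadmap ordering.
ranked = sorted(
    use_cases,
    key=lambda uc: mean(v * f for v, f in
                        (scores[uc] for scores in votes.values())),
    reverse=True,
)
print(ranked[0])  # the use case to start with
```

The numbers matter less than the mechanic: because every stakeholder's vote is in the ranking, nobody is left lying in wait to torpedo the project later.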
And so then the biggest challenge becomes not the data and analytics; it's getting all the business stakeholders to agree on, here's the roadmap we're going to go after. This one goes first because it helps to drive the value of the second and third ones, and then this one drives that one, and you create a whole roadmap of how the data and analytics ripple through to drive value across all these use cases at a marginal cost approaching zero. So should we have chief design thinking officers instead of chief data officers to really move the data process along? I mean, I first heard about design thinking years ago, actually interviewing Dan Gordon from Gordon Biersch. He had just hired a couple of Stanford grads, I think that's where they pioneered it, and they were doing some work about introducing, I think it was a new apple-based alcoholic beverage, Apple Slider. And they talked a lot about it, and it's pretty interesting. But I mean, are you seeing design thinking proliferate into the organizations that you work with, either formally as design thinking or as some derivation of it that pulls in some of those attributes you highlighted that are so key to success? So I think we're seeing the birth of this new role that's marrying the capabilities of design thinking with the capabilities of data and analytics, and we're calling this dude or dudette the chief innovation officer. Surprise. And I've got to tell you a little story. So I have a very experienced design thinker on my team. All of our data science projects have a design thinker on them. Every one of our data science projects has a design thinker, because the nature of how you build and successfully execute a data science project models almost exactly how design thinking works. I've written several papers on it. It's marvelous the way design thinking and data science are different sides of the same coin.
But my respect for design thinking took a major shot in the arm, a major boost, when the design thinker on my team, whose name is John Morley, introduced me to a senior data scientist at Google, and I bought him coffee. This was back before I even joined Hitachi Vantara. And I said, so tell me the secret to Google's data science success. You guys are marvelous. You're doing things that no one else is even contemplating. What's your key to success? And he giggles and he laughs. He goes, design thinking. I go, what the hell is design thinking? I'd never even heard of this stupid thing before. He goes, I'll make a deal with you. Friday afternoon, let's pop over to Stanford's d.school and I'll teach you about design thinking. So I went with him on a Friday to the d.school, the design school over at Stanford. And I was blown away, not just by how design thinking was used to ideate and to explore, but by how powerful that concept is when you marry it with data science. What is data science in the simplest sense? Data science is about identifying the variables and metrics that might be better predictors of performance. It's that "might" phrase that's the real key. And who are the people who have the best insights into what variables or metrics or KPIs you might want to test? It ain't the data scientists. It's the subject matter experts on the business side. And when you use design thinking to bring the subject matter experts and the data scientists together, all kinds of magic stuff happens. It's unbelievable how well it works. And all of our projects leverage design thinking. Our whole value engineering process is built around marrying design thinking with data science, around prioritization, around these concepts that all ideas are worthy of consideration and all voices need to be heard, and the idea of how you embrace ambiguity and diversity of perspectives to drive innovation.
It's marvelous, but I feel like I'm a lone voice out in the wilderness crying out, yeah. Tesla gets it, Google gets it, Apple gets it, Facebook gets it, but most other organizations in the world don't think like that. They think design thinking is this foo-foo thing. Oh yeah, they're going to bring people together and sing kumbaya. It's like, no, I'm not singing kumbaya. I'm picking their brains, because they're going to help make your data science team much more effective in knowing what problems to go after and how to measure success and progress. Maybe that's the next dean title for the next 10 years, the Dean of Design Thinking instead of the Dean of Data Science. And who knows, maybe they're one and the same. Well, Bill, that's super insightful. I mean, it's so validated and supported by the trends that we see all over the place, just in terms of democratization, right? Democratization of the tools, more people having access to the data, more opinions, more perspectives, more people having the ability to manipulate the data and basically experiment, and that does drive better business outcomes, and it's so consistent. If I could add one thing, Jeff, I think that what's really powerful about design thinking is when I think about what's happening with artificial intelligence, or AI. There are all these conversations about, oh, AI is going to wipe out all these jobs, it's going to take all these jobs away. And what we're actually finding is that if we think about machine learning driven by AI and human empowerment driven by design thinking, we're seeing the opportunity to exploit these economies of learning at the front lines, where every customer engagement, every operational execution is an opportunity to gather not only more data, but more learnings, to empower the humans at the front lines of the organization to constantly be seeking to try different things, to explore and to learn from each of these engagements.
To me, AI is incredibly powerful, and I think about it as a source of driving more learning, a continuously learning and continuously adapting organization, where it's not just the machines that are doing this, but the humans who have been empowered to do that. And chapter nine in my new book, Jeff, is all about human empowerment, because nothing you do with AI is going to matter a squat if you don't have empowered teams who know how to take and leverage that continuous learning opportunity at the front lines of customer and operational engagement. Bill, I couldn't have said it better. I think we'll leave it there. That's a great close. When is the next book coming out? So today I do my second-to-last final review, then it goes back to the editor and she does her review, and we start looking at formatting. So I think we're probably four to six weeks out. Okay. Well, thank you so much. Congratulations on all the success. I just love how the Dean is really the Dean now, teaching all over the world, sharing the knowledge and attacking some of these big problems, and like all great economics problems, often the answer is not economics at all. You really have to twist the lens and not think of it in that construct. Exactly. All right, Bill. Thanks again and have a great week. Thanks, Jeff. All right. He's Bill Schmarzo. I'm Jeff Frick. You're watching theCUBE. Thanks for watching. We'll see you next time.