Welcome to The Get, the podcast for enterprise leaders delivering timely insights for today's global economy and tomorrow's competitive advantage. I'm your host, Chris Caine, president of the Center for Global Enterprise. And today we sit down with two renowned business leaders to discuss how generative AI will redefine the future of work: Michael Spence, Nobel Laureate, former Dean of the Stanford Graduate School of Business, and author of the forthcoming book Permacrisis: A Plan to Fix a Fractured World; and Doug Haynes, managing partner of Neurius Research Group and a former leading partner at McKinsey and Company. Mike and Doug, welcome back to The Get, and thanks for being with us today. Generative AI has broken onto the business and economic scene lately as one of the most promising and provocative technologies, perhaps, of the last 15 years. As it advances, there is a lot of focus and, quite honestly, concern about how it will redefine the employment landscape for workers and for businesses. While new technologies and automation have traditionally, over time, displaced lower-level jobs, many feel generative AI is quite different and that it will reshape the future of work for all workers at all levels. Knowledge and creative workers in particular stand to experience both the challenges and the opportunities. Generative AI has the potential to automate certain aspects of knowledge work, such as software development, data analysis, and content generation. But it can also empower knowledge workers by providing powerful tools to enhance their productivity, their creativity, and their decision making. CEOs and business leaders will have the opportunity, and perhaps even the obligation, to explore the potential of generative AI as a tool that can redefine and reorganize work.
From understanding the technology's capabilities to addressing ethical considerations, companies will need to navigate this landscape strategically, exploring best practices for effectively deploying generative AI while ensuring a human-centric approach that values employee well-being and collaboration. Mike, perhaps we can start with you. From a macroeconomic perspective, how do you see generative AI impacting the overall economy, various sectors, the nature of work, and workers themselves? That's a very large question, Chris, but I'll take a shot at parts of it, and Doug can take over from there. To me, the revolutionary aspects of generative AI as it's emerged in the last few months, although the research was done before that, are, one, it has this capacity for seamless domain switching, which basically no previous AIs had. Two, it has enormous scope, because it's trained essentially on the entire internet. And finally, it's very accessible; that is, you don't need technical training to interact with the large language models. You can just ask it a question, and maybe get a little better at prompts to get the response you want. From a macroeconomic perspective, in the best-case scenario, I can see generative AI providing the technological underpinnings for a really powerful surge in productivity, pretty much globally. That would reverse two decades of declining productivity growth and relieve numerous supply-side pressures that we're all aware of, having to do with labor shortages, supply chain disruptions of a variety of kinds, aging populations, and so on. And if it does that, then it'll take away some of the supply-side contribution to the inflationary pressures we've been experiencing. Overall, it could be really quite dramatic. So when is this likely to happen? Not in the next couple of years, because we're in a period of intense experimentation and innovation, and nobody I know thinks they have a complete picture of where we're going to emerge.
But I think in the second half of the decade, there's a reasonable chance that we'll see these effects, if it's properly used to enhance service quality in a wide range of sectors like healthcare and education, as well as in standard business processes, customer service, and a host of others. It will also likely increase the transparency, and therefore ultimately the performance, of highly complex systems that are at present partially opaque, like the global supply chains that CGE has studied and contributed to. All of this will be explored in the next few years by lots of entrepreneurs. Sure, we'll have hype, we'll have excess valuations, we'll have irrational exuberance and so on, but then I think we'll start to see these things. The last thing I'll say by way of introduction, Chris, is that we're going to see lots of different applications. I think the one that's most interesting is what I call the powerful digital assistant model, where the AIs are used mainly as a complement rather than a substitute for labor. There's a certain amount of preliminary research that suggests this. The AIs capture and encapsulate a whole lot of accumulated experience, and when delivered properly to people like customer service agents, there's a leveling-up effect: the least experienced get the biggest kick in terms of performance. We're going to see AIs producing first drafts of everything from doctors' reports to hospital reports to first drafts of software. Is it going to be full-throated automation? I don't think so, at least not for the foreseeable future. There'll be elements of that, but I like the phrase first draft. I think a human's going to check it. After all, these are prediction machines. They're fallible; they make mistakes and get lost. And so I think, for our executives, using these things as complements rather than substitutes for labor is going to be the dominant, and wise, use of the technology.
So let me stop there and turn it over to Doug. Thanks, Mike. Doug, from a macroeconomic perspective, any thoughts and reflections? I'm going to ask you about the micro implications of this because I know you work with organizations deeply and talk about productivity and work at the firm level, but from the macroeconomic perspective, any thoughts to add before we get to the micro? So I think the macro and the micro actually converge here. What we're going to experience as a society and an economy is what companies are going to experience individually. I think it's important to start with a distinction about what generative AI is adding, because a lot of people are presuming that the things they see, for example, ChatGPT doing are all attributable to ChatGPT itself. But what generative AI does is create a much better human interface, and that opens up access to other forms of technology, for example, computational AI. A lot of people who are accessing computational AI through a generative AI interface like ChatGPT assume that it's all the generative AI that's doing the work, but it's not. The effect of that, though, is that the democratization of access to advanced computing technologies is going to have an effect at companies, but it's also going to have an effect on the general economy and society. One of the things Mike said is that it has this leveling-up effect. We're going to get a leveling-up effect where many more of us are accessing advanced computational capabilities in our day-to-day lives and in our work lives. And I do think it is potentially a big unlock for productivity in the economy generally.
So access was a topic of one of our previous Get episodes on AI, and the distinguishing factor, the historical factor, that was offered was that because access is now at the personal level and no longer just at the institutional level, the speed at which this will transform society will be dramatic, something that we haven't seen before. In fact, Mike was talking about how we're in a period of experimentation. You both know, and some of our listeners know, that we have a program at CGE called the African Women Entrepreneurship Cooperative, which has 1,200 women business owners from across all 54 African countries. We recently did a generative AI activity with them called the generative AI scavenger hunt, where we asked them to go search, use generative AI, and apply it to their businesses instantaneously. And the experience was remarkable. We have a readout on that, and we're happy to provide it to anybody who is interested. The areas of graphic design and of how to make an effective query were the ones that really resonated firsthand with the women who participated. Doug, let me go to the micro perspective for a second. If you were still at McKinsey, or leading any of the various companies that you've managed, from a CEO perspective, what would you be doing right now to understand the implications of generative AI for your organization, and in particular for how work gets done? Well, the last part of your question I think is the most powerful. The first thing to do is to take a process of the business today, whatever that business is and whatever the process is, and break it down into a series of decision steps. As you stated in the very first minutes of this discussion, generative AI is going to have an effect on higher-end roles, on more highly educated, more advanced roles that have more training, more development, and so on, over time. Those roles are all processes, right?
A series of decisions that are linked together to lead you to a certain place. If I'm running a company of any kind, I would look at our core value stream, look at those processes as they lay out, and ask ourselves where access to more sophisticated, more powerful computational decision making would make us better. And then, when I spot that place in the process at a much more granular level, that's where I'm going to do my experimentation in the company on using a generative AI application to make that technology broadly accessible. Let me use a specific example, and I'll go to the investing industry. There have been a lot of articles written, in fact there was one just a few weeks ago, about using ChatGPT to pick stocks. The query or prompt given to ChatGPT was: construct a diversified portfolio with a high expected return. It took a very aggregated view, like let's have ChatGPT replace the entire investment industry's activity in one prompt. That's never going to produce anything particularly valuable. But here's an application I was actually talking to somebody about earlier this morning that could be really valuable. If I'm a really big investment firm, I can provide my investment professionals, let's say my analysts, with tremendous tools that make it possible for them to access all sorts of data, all sorts of publicly available information, and very quickly answer question after question about how a stock might be affected by anything from the war between Russia and Ukraine to how inflation rates are going to change. I can get answers into the hands of my investment professionals very quickly because, if I'm a really large investment organization, I've spent millions of dollars providing that tooling. But if I'm a midsize investment fund, I don't have any of those tools.
And what generative AI will do is essentially let that midsize fund have the same information as the largest funds, and it will have this leveling effect, as Mike described it earlier, leveling up. I'm going to take the analysts in the midsize fund and level them up so that they're working with the same information as the analysts in a multi-billion-dollar fund. In every industry, that's exactly how I would be thinking about it. Mike, you advise many companies, both directly and indirectly. Where do you see companies starting the experimentation process with generative AI, and are there particular areas that you see companies flocking to first? Given the range of things that companies do and the potential applications of this, I find it hard to generalize. But let me try to respond by building a little bit on what Doug just said. I think especially for the small and medium-sized firms that don't have the resources to build their own computational applications, generative AI is just a wonderful interface. And so what I see happening is that a number of companies will look at the processes Doug described and see if generative AI can help. But probably, at least as I see it, more important, what you're going to see is that the relatively small number of players who have the computing power to build a generative AI model on the full internet are going to license it. And if things go well, they're going to license it with an API. And then a whole bunch of people are going to start building applications for specific use cases. So if I were a small or medium-sized business, in addition to just experimenting with the user interface, I'd be starting to scan and look for those people who are building the verticals, if you like, on top of the generative AI interface that are potential value creators in my business.
At that level, it's hard to generalize, because what goes on in software will be different from what goes on in customer service and a whole bunch of other applications. But I think that's an important model, and policies that ensure very broad access, lots of entrepreneurial activity, and the ability to license, at reasonable cost, the generative AI platform on which these things are built are, I think, part of the future I hope to see. So Doug, we were talking before about the differentiating characteristic of generative AI, which is that it is now being accessed by individuals. Does that translate into businesses as well? Do you see small and medium businesses having just as much of an opportunity to advance their competency, their growth, and their competitive advantage as large enterprises? The answer is not exactly, and the nuance is around the application. Mike just described a minute ago what we would call a vertical large language model, and a vertical large language model would be delivered by a company. It's usually not the user accessing the AI capability directly, but a company that has been formed around solving particular problems for a particular industry. So for example, there's a company out there, Jasper, that does marketing copy; it helps you write marketing copy more efficiently. Harvey AI is another example, designed for law practices: its prompts are very easy to use, and if I were a paralegal or an early law associate, it would be a great research accelerator for me. Those sorts of vertical applications are going to be accessible to mid-sized companies, and there will be a great leveling between large and smaller enterprises with those vertical applications. In fact, the strength of the value proposition of those vertical application companies is that leveling, leveling the playing field.
There are other applications, though, that really are only going to be valuable for large companies. These would be the bespoke models, often built to either heavily augment or replace labor. So for example, imagine that we're running a large retail bank and we've got frontline call center operators or frontline support desk operators in the hundreds. It is worth developing customized applications to heavily augment or even replace that labor, because we have so many people in those roles that we can monetize it. But if I'm a small company and I've only got a handful of help desk operators or a handful of frontline people, the return on investment isn't there to build that bespoke application. So it's going to split a little bit based on the nature of the application. Mike also mentioned earlier what I refer to as horizontal applications, which are these co-pilots. I think you called them assistants, but some people refer to them as co-pilots. These are tools that simply enhance my personal ability to be productive. If I were to go back to the investment industry and look at an equity analyst, a big part of my time as an equity analyst is spent transferring information from one format into a tabular format that I can put into a model I can work on. Well, there's a lot of co-pilot-style AI, even tools that are broadly available today, like the new Copilot in Microsoft Excel, that can just do that for me. And frankly, it'll do it instantly and do a better job of it than I would. That kind of horizontal application is going to be available to almost everybody. So what I'm taking from this part of our conversation is that this is an opportunity for small and midsize businesses along with large enterprises, but the approach will be different based upon your own resources and the ability to understand where the leverage points are, where the return is for you.
Okay, perhaps we could talk a little bit about global adoption now and how you both see the adoption of generative AI playing out across different geographies and countries. Are there specific factors, Mike, maybe we can start with you, that you think will influence the outcome for how countries, and companies within those countries, will be able to take advantage of generative AI? There are a number of things I think need to happen that I'd mention. One, there's a lot of concern about these things. Sometimes people are afraid for their jobs and whatnot. And there's a lot of experience that Doug and others who have held leadership positions have with introducing new technology that's potentially threatening. So a lot of attention is going to have to go into communication and being clear about what you're doing. At a more macro level, business, government, and the research community need to collaborate in generating what I think of as widely accepted norms and practices with respect to appropriate and inappropriate uses of data, which is already at issue. Eventually we'll have rules and regulations, and the trick is not to promulgate them so quickly that you stifle the innovation process. There's another thing I think the research community needs to guard against, and maybe the business community as well. It's what Erik Brynjolfsson calls the Turing trap. There is a very strong tendency to measure AIs against human performance and to declare victory when they soar past human performance, whether it's image recognition or taking the LSAT and so on. But that mindset produces a bias, and the bias it produces is in the direction of automation: once you pass the humans, then you replace them, right? And what Doug and I have been saying is that that's not the best, or the right, way to use AI. Hence Erik's term, the Turing trap.
We need everybody to think clearly about the incentives and appropriate uses that lead us in the direction of augmentation, as opposed to focusing solely on automation. That's not to exclude partial automation in lots of applications, but there's a real potential bias there. The third thing I'll mention, and I'll do it quickly, repeating myself slightly: you can count on the fingers of two hands the entities that have the computing power to generate the large language models, that is, to train them, not just use them. So they have a collective kind of monopoly on this, and maybe there's enough competition among the mega platforms, Meta, Microsoft, Google, Alibaba, Tencent, etc., that access is going to turn out to be just fine. But there's some due diligence needed in making sure. If we really want the benefits we talked about before, Doug and Chris, we want to be alert to potential blockages, whether they're on cost or access or other things. And finally, McKinsey has done numerous studies on previous rounds of digital innovation and documented very clearly that a typical pattern is a huge dispersion across companies and sectors in terms of adoption, right? So tech and finance tend to get high scores, and retail and healthcare get terrible scores, and so on. I think some serious business and policy thinking needs to go into why we get this kind of dispersed adoption and whether we can reduce the dispersion and level up the adoption process, because we all benefit in the end from that. So those are the things I'd mention on a kind of short list. I'm sure there are many more. Great, thanks, Mike. Doug, any thoughts about specific factors that will influence outcomes, ones that countries, governments, and leaders in the business community ought to be thinking about in order to maximize the benefits and minimize the risks across a globe that will have access to this new, promising, and yet provocative technology?
Well, I'm going to come at it from a business point of view and end with how governments will think about it. I think this is an area where governments are going to have very different responses around the acceptability of the use of AI by businesses to serve consumers, or maybe in some cases businesses serving other businesses, though governments tend to get more engaged with the relationship between businesses and consumers. And the two areas that businesses worry about will, I think, play out in the form of whether and how the applications get regulated. The two things businesses worry about are reliability and accountability. Reliability is: if I use generative AI to solve a problem, do I get the answer that I would get from an expert human user, and do I get an answer that's grounded in fact, or is it potentially at risk of being grounded in information that the AI can't discern as factual or not factual? Businesses worry about augmenting tasks and workflows if they can't be confident that the answers produced will be reliable, fact-based, and will follow the pattern of expertise they would expect. Accountability also matters. With a human being, if somebody makes a decision, I know who made the decision, and if something goes wrong down the road, I know how to trace it back and find that decision maker. Without some form of tagging in AI workflows, the decision sort of emerges, but then it's lost in the ether. And I might be backtracking because I'm trying to improve outcomes and I'm in a continuous learning process; I might be backtracking for legal liability; I might be backtracking to determine attribution, so I can determine who should be promoted and how people should be rewarded.
And most businesses really want AI usage to be tagged, and tagged in a reliable way, so that you can build on the decision as opposed to having the decision appear out of thin air. The reason I said that reliability and accountability will ultimately wash back to governments looking at regulating this, or protecting consumers from it, is that the public internet right now has a certain amount of content that is generated by generative AI. Estimates are that by 2025 that could be as much as 50 percent of all content accessible online. That is going to create a lot of concern. I believe it's going to create a lot of concern on the part of governments trying to determine whether their society is being systematically misinformed, either by outside interests or by businesses. And we already know that governments have a lot of anxiety over the access to and consumption of personal information by technologies. That's an input worry: are these businesses capturing personal information? We're about to move to an output worry on the part of governments: what are these technologies telling people? How are they guiding people's decision making? How are they influencing the thinking and behavior of people in our country? I think this notion of reliability and accountability of information is going to be a big deal for regulation. So the accountability point is excellent. As you were talking, I was thinking about information that is not tagged and is therefore less accountable. It's like the quintessential black box: something came out, but nobody knows what went in or how the black box operated. So let me turn to how we close. First of all, thank you both for being part of this conversation today. But now comes the toughest part. At the end of every one of our episodes, we like to close with a last minute or so for our listeners and offer them one piece of strategic advice or insight to consider.
And we call it, as you both know, our emerging critical issues moment. But today we're going to change it up, and we're going to go right to some accountability on your part, which is: in one word or one phrase, what would you advise your child or grandchild to do in a generative AI era? So Doug, we'll start with you, and then Mike, we'll come to you. I would say: emphasize human judgment and insight. I never change on this, Chris and Doug, and it's consistent with what Doug said: a huge range of things that people study can become contributions. So my advice is always to do what you're passionate about, so that when you get up in the morning, you're really excited about getting on with it. Great. For our listeners, you heard it here first: tell your kids and grandkids. Mike, Doug, thank you very much for your time today and your insights. We really appreciate you being a part of The Get community and for coming back again. You've been listening to The Get, sponsored by the Center for Global Enterprise, celebrating 10 years of convening global enterprise leaders around the most important business transformation issues.