Welcome to the podcast for enterprise leaders delivering timely insights for today's global economy and tomorrow's competitive advantage. I'm your host, Chris Kane, president of the Center for Global Enterprise. And in today's episode, we continue our conversation about generative AI and its potential to fundamentally change how we live and deliver economic growth. Today we sit down with two experienced public and private sector leaders to discuss how governments can strike the important balance between supporting innovation and enacting responsible regulation so that generative AI can deliver benefits to society and minimize risks. Tom Daschle is the former U.S. Senator from South Dakota and U.S. Senate Majority and Minority Leader, and currently Chairman of the Daschle Group. And David Beier is a San Francisco-based venture capitalist, a former senior executive at Amgen and Genentech, and former Chief of Domestic Policy for Vice President Al Gore. Tom and David, welcome and thank you for being with us today. There is both excitement and worry about the power of generative AI. Public officials have begun discussing what role government regulation should play in shaping its uses. We know from experience that governments and public policy can create, modify, and eliminate markets overnight if they so choose. How governments choose to deal with generative AI will directly influence private sector decisions regarding investment and market opportunities. Already, there are calls in the U.S. to establish a new federal agency to oversee generative AI applications. New York City passed a law requiring companies to disclose the use of AI in hiring. Meanwhile, China recently published new generative AI rules, and other regulatory proposals are under consideration around the globe, including in Australia, Canada, the E.U., and the U.K. The concept of guardrails has been used to discuss AI regulatory models.
The idea is that policy delineates the values society holds true, and AI uses are given direction, but not command-and-control regulation, creating space for AI innovation to reveal new contributions to economies and societies. Business leaders will need to participate, along with policymakers and other stakeholders, in discussions around what guardrails make the most immediate sense if countries are to hit the sweet spot of balancing innovative economic growth and responsible societal behavior. Tom, perhaps we could start with you. Given your experience, in your mind, what is the best analogy for the policy changes ahead regarding generative AI? Some have suggested the analogies of nuclear energy and weapons, biotechnology, and the internet, but from your experience, what are your thoughts on the policy changes that we see, and how have we dealt with them in the past? Well, Chris, I think my best answer is all of the above and none of the above. There are certainly things we can do and learn from the past transformational developments we've experienced, the ones you've noted, and we should all understand that we can avoid making some of the mistakes that we made as we considered those other developments, but they may not be applicable here. AI is actually world-altering. It's unlike anything we've ever dealt with before. Other issues like the ones you mentioned have a long history, but here we're actually starting from scratch. We're not even sure which policy questions we ought to be asking right now. So we have that whole challenge, I think, as we consider AI. Rather than learning from past mistakes, I think relying too much on old models may not be particularly helpful. So as we start from scratch when it comes to public policy, I think Senate Majority Leader Schumer recently laid it out about right, actually. He argues that there ought to be a two-part action plan: first, develop a framework, and then develop a process.
And I agree with his assertion that the framework should recognize at least two major goals. First, innovation has to be the North Star. The U.S., as we all know, has long been the leader in innovation. I understand that we had over 590,000 U.S. patents granted in 2021 alone, and 60 of the top 100 companies worldwide are still American. Secondly, he argues that guardrails are essential, and I couldn't agree more emphatically. If guardrails are not in place, we actually could stifle or even halt innovation. So there seems to be a growing consensus that a good framework has four primary components. First, clearly, security. We need to do everything we can to install guardrails that make sure no one uses our AI advances for illicit or bad purposes. We also have to ensure that we consider security for America's workforce. Globalization, I think, is a cautionary tale. We all know that story. Congress was way too slow, and I have to admit some guilt myself here, to aid Americans who lost their jobs due to globalization over the past decades. Well, writers, drivers, and many others could actually be next. Second, accountability. We need to ensure privacy is protected. We need to make sure that certain practices are out of bounds. Third, we need to protect our democratic foundations. I'm involved with a number of organizations dedicated to democracy, and the misinformation that we're experiencing right now is a major threat to democracy. We have to ensure that people can engage in democracy without outside influence. And finally, we need to ensure that we have explainability. That may be our biggest challenge, and I think here transparency is so key. We need to develop a system that is simple and understandable to the average user. Just to close on your good question, the process here is essential as well. And I think we need to ask four basic questions.
What is the proper balance between collaboration and competition among the entities developing AI? How do we address federal intervention, and how much should there be? What is the proper balance between private AI and open AI? And how do we ensure innovation and competition is open to everyone, not just the big companies? So as we consider past experience, and the models you suggest are the ones that I think we ought to note, let's learn from our past mistakes, but then let's focus on the proper framework and process for creating the new global technological infrastructure. That's great. Thank you, Tom. There are a number of things you said that have been touched on in our two previous episodes, looking at different dimensions of generative AI. I will come back to those later. But David, any thoughts about the analogies and policy change challenges that you foresee? Thank you, Chris. Thank you for hosting the podcast. First, maybe I could start by laying out a definition of artificial intelligence. This is something that I was involved in writing in 2018 for a California state commission, and we found that artificial intelligence is machines or computers that can sense, reason, act, and adapt like a human being. And as Tom suggested, that's a profound technological breakthrough. And I also agree with Tom that it is useful to look at previous examples; biotechnology and the internet are things that I know reasonably well, having worked on internet policy in the Clinton White House, and having been part of the biotechnology industry and government regulation of that industry for more than 20 years. And so let me focus a little bit on what we can learn from biotechnology. It's the case that both academia and the private sector engaged in pauses in research when it was appropriate to prevent abuses.
That kind of activity may be appropriate in the context of artificial intelligence with respect to some uses in policing, for example, or in the defense context. Authorization to use weapons may need rules to prevent algorithms from acting without adequate human supervision. So I think looking at the previous case studies is useful, but it also points out a structural question. What is the appropriate role for companies, and for CEOs and leaders within companies? What do they need to know? When do they need to act? And how do they act relative to the government? Another analogy, Chris, you referenced is nuclear power. That was largely a government-driven activity. And one could argue it was appropriate with respect to defense uses, but it may have had a stifling effect with respect to civilian energy creation. So I think there are lessons, and we shouldn't ignore those lessons as we go forward. So a couple of the themes that you both have touched on have given rise to points in our previous conversations. Tom, one is about accountability, and a second is around access to this technology, a breakthrough technology that is so different from many of the other breakthrough technologies that we have been likening it to. For instance, in a previous episode, we had a discussion around what makes generative AI so unique and so different: it is accessible at the personal level to each and every individual who wants to use it. Whereas with previous technologies, there were either institutions or gatekeepers that provided access, and therefore the pace of distribution was slower, because it was more limited in the number of people who were experimenting with it. I mean, even the internet as it started really wasn't fully accessible until smartphones came into being. That's a long time after the internet was actually deployed, because companies and institutions like government were able to be the gatekeepers in some respects.
So those legal frameworks that were developed previously to deal with these breakthrough technologies, which seemingly were always behind the technology advancement curve, had fewer people to consider in the regulatory model. I'm just wondering, Tom and David, whether you think the existing legal frameworks today are sufficient to deal with the unique capabilities and characteristics of generative AI. David, maybe we could start with you, and then Tom, I'll come to you. The blunt answer, Chris, is I don't think in the United States that we have a sufficient regulatory governance system in place. And in part, we are substantially behind other political jurisdictions. The European Union, for example, appointed an expert committee on AI, I believe in 2017, and the parliament has just this summer come up with a pretty comprehensive framework. I'm not talking about the merits; we can talk about that later. But the fact that they've gone through and studied and educated themselves about potential uses and abuses of artificial intelligence puts them ahead in some ways in the regulatory scheme. Scientifically and technologically, I think the United States is substantially ahead of all other countries, with the exception, perhaps, of China in some respects. So I think we have a lot of work to do. I agree with Tom that Senator Schumer's framework is a good place to start. The Biden White House put out an excellent white paper. But we're just at the beginning of this process. We need to frame the questions with greater precision, and think through the use cases for generative AI in a variety of industries and contexts. Until we do that hard work, it's going to be very difficult to think through exactly what the structure of a solution set looks like. Tom? Well, Chris, I agree completely with what David has just said. This transformation is going to be bigger and broader than anything we've experienced in all of human history.
So we're going to need a new regulatory framework. There's no question about that. We ought to be taking into account what other countries are doing. David mentioned the Biden administration; I think they've got it about right. They actually propose five criteria for achieving the right balance in this new regulatory framework, and I think it's a good start. The first is safe and effective systems. We need to identify concerns, risks, and potential impacts. We should be sure that systems are subjected to pre-deployment testing, something we probably don't do as much as we should sometimes. They have to be designed to proactively protect us from harms stemming from unintended consequences. And there should be an independent evaluation that confirms that the system is safe. So security is going to be a big component of this regulatory question, as we've already discussed. Second, we have to be aware that there is the potential for discrimination by algorithm here. Systems should be designed and used to be equitable. Algorithmic discrimination occurs, obviously, when automated systems lead to unjustified different treatment. This has been dismaying for some people, and we've already seen some examples of it. We've got to ensure that isn't going to happen as we design this infrastructure. An independent evaluation is going to be critical. Third, we need to ensure data privacy. One should be protected from violations of privacy, and data collection has to conform to reasonable expectations. We've got to make that a high priority. Fourth, let's prioritize notice and explanation. One should know that an automated system is being used, and one should know how and why an outcome impacting you is determined by an automated system. And then fifth and finally, we've got to make sure that there are human alternatives. We should be able to opt out of automated systems.
We should have access to timely human consideration and fallback if an automated system fails. Those seem to me to be the criteria as we put this new framework in place. Tom, you talked about a period of experimentation, a learning process that we're going through. And I think one of the strengths from a regulatory perspective that the United States has had is that it's been perhaps more accepting of the time that it takes for experimentation to provide us with lessons of what to do from a public policy standpoint than some other jurisdictions, which just make regulatory decisions and move forward. There are also areas around identification; we had a conversation the other day about tagging. And basically, to your point about opting in or out of the use of generative AI, it will be really important for businesses to know where the source of information is coming from. Because you don't want to make investment decisions and operational decisions based upon bad inputs that generative AI, which is basically an aggregation mechanism, delivers to you. So your idea about having the individual know what the inputs were is equally important for businesses as they adopt the tools of generative AI. So let's shift the focus a little bit to the enterprise, or business, usage of this powerful and provocative technology. What would you each ask CEOs and business leaders listening today to do to help policymakers take the most appropriate action over the next 12 months? Tom, how about if we start with you? Well, Chris, I have an acronym that I like to use in questions like this. I would recommend that every CEO listening consider a plate of RICE. RICE is my acronym, and it starts with R for resilience. I think we can almost guarantee that in the turbulent times ahead there are going to be a lot of setbacks, mistakes, and disappointments.
So we've got to be resilient. We've got to be able to bounce back. We're going to make mistakes. We've got to recognize that we can learn from those mistakes and move forward. But resilience is really going to be critical for a CEO. Secondly, the I is innovation. I've already said how much I believe it has to be the North Star. We've always benefited from innovation, and I think we need it now more than ever. We've got to be innovative not just in the product but in the practice. We've got to be willing to think out of the box more than ever. So innovation is really key. Third, my C is coordination. This is always going to require a tremendous amount of public-private cooperation and partnership. So coordinating at all levels, internally and externally, between the private and public sectors is key. Coordination is absolutely imperative. And my E is engagement. This isn't a time for any CEO to sit back and be a spectator. Interest and involvement in public policy is absolutely critical. We need their engagement personally, corporately, and in every other way we can think of to ensure that we get the maximum degree of participation as we work through these challenging times. So David, do you like rice? Yes. I would focus first on what technical information a CEO needs to have about artificial intelligence. There was an article in the Stanford Business Review which I think captures it pretty well: CEOs and boards of directors need to know enough to frame the value issues and to pose the right questions for setting safeguards. And the value issue point, I think, is important to underline. If you're going to augment human knowledge and human productivity, you need to have a human-centric way of thinking about artificial intelligence. That means you have to value the individual, whether it's your employee, your supplier, or your customer. That means they all have to be educated about the opportunities and the risks of artificial intelligence.
And people need to be trained, especially in the employment context, in how to use it effectively to augment and improve their work. And lastly, one of the challenges of economic growth over the last several hundred years has been that productivity gains have not been equally shared with workers. Tom talked about that before. One of the opportunities here is to get the balance right from a CEO point of view, so that employees and future employees think of the enterprise as helping them grow as human beings and advance their values and their economic well-being, and not just as an economic output of increased productivity. So Tom, you talked about coordination. You also talked about explainability. And we seem to be living at a time when coordination only takes place among people who agree with each other. And this clearly is not one of those topics that is well defined. I'm wondering whether the public and the private sector, and let's personalize it, let's say legislators in Congress or in parliaments around the world, and CEOs, can coordinate to a degree that will make this opportunity and risk explainable to people. What will it take to meet your explainability criteria? Because with the access being so personalized and so distributed around the world, there will be lots of points of view, which is good for experimentation and learning if people are open to learning. But the two points you've mentioned that I think link so directly and importantly together are coordination and explainability. What do you see as useful over the next 12 months or so to make progress on the explainability aspect? Well, Chris, I think the most important thing in explainability is to make sure we are all on the same page when it comes to what it is we want to explain, and how we can agree on the principles of AI that we want as part of our message. That's the way to begin successfully getting that message out.
What is it we want to share, and how can we simplify that message in a way that most people can understand? I think oftentimes policymakers get all wrapped up in jargon, and I must admit my own fault here; I'm guilty of it myself. I start talking about legislation in terms of bill numbers and all the kinds of legislative rhetoric that we tend to rely upon, using acronyms, as even I've done today, and I think that sort of falls flat when trying to communicate with the rest of the country or the world. And so I think we've got to keep it simple. We've got to make sure we've got a consensus on what it is we're trying to explain and ensure that simplicity is repeated. One of the things that politicians oftentimes fail to do as well is understand the value of repetition. We've got to repeat it over and over and over again just to make sure that it actually does get the traction we want it to as we try to work through these elaborate and complicated challenges of explanation going forward. So David, over the course of your career, both in the public sector and the private sector, you've worked on a lot of tricky and complicated issues, so explainability was always going to be an important part of your output. Maybe talk a little bit about explainability. What were the challenges that you encountered when you were working in the private sector and biotechnology, and what allowed the technology to become more understandable, so that the benefits of the breakthroughs you were working on became more evident? That's a great question, Chris. Part of it is understanding the audience. If you're talking to CEOs and senior executives, they need to understand that self-regulation, that is, how companies behave between one company and another, or between a company and its customers, is a form of governance. And in order to do that effectively, you have to communicate why the human condition is going to be advanced by whatever you're doing.
And there are two good examples just from recent press accounts. Salesforce has put out some accepted uses of its technology; they call them trusted principles. Another company put out, I think just yesterday, something they call an AI nutrition guide. Think of the nutrition guide on the back of a cereal box. And what those two companies are doing is saying: here's where our data comes from, here's how we analyze it, here's how we think it should be used, and here are some uses which we find unacceptable, and we will not sell to you if you're going to use our technology for these purposes. Discrimination would be one; violation of human or privacy rights would be another. Setting those conditions in the commercial marketplace can be a very strong step in aiding government in understanding how much the private sector can contribute and the limitations of self-regulation. So I think explainability, coming from both the creators of AI and the users of AI across multiple industries, can help people be less fearful of the technology and more accepting of how it can help them in their daily lives. If we get to a situation where people are afraid of robots or self-driving cars, and we end up in a situation where people don't want to use that technology or want the government to ban it, then I think we're going to miss an opportunity to dramatically improve human lives where routine tasks can be done by algorithm. And the best example of that is doctors, nurses, and other health providers, who ought to provide care and not spend 60% of their time filling out forms on electronic health records. We have all seen countries compete and collaborate over various issues that have come up. And yet some countries use new technologies as leverage to attract investment, while others use them as a way of slowing down their competitors when they think their particular country is behind. And we've talked a little bit about the positioning around generative AI today.
Tom, do you think that we will see different countries collaborating or competing over generative AI and how it should be regulated? And if you do, what nations do you see as first movers or leaders in bringing forth regulatory strategies that would either help them achieve their objectives for economic development and investment or slow down the competition? Well, I think we're likely to see both competition and cooperation, Chris. I don't think there's any doubt. China is probably the best example of that. We're going to have to recognize that they will continue to be a very critical competitor, but we also rely on China, of course, for a lot of the resources we use for our own technological advancement. And so out of necessity there has to be some cooperation. We've seen Russia's intervention in technology, and especially its effects on democracy, and a lot of other examples. So you're going to continue to see this delicate tension, and maybe I should say not-so-delicate tension sometimes, between competitors and those who would be interested in cooperating. I think we can look to the West and some of the Asia-Pacific countries as leaders in technological innovation. There are plenty of innovative approaches now being contemplated and employed, frankly, and I think that by and large is good. I don't think we ought to limit ourselves, however, just to countries. I think you're going to see private sector engagement, maybe even actors, legal and illegal, with their own agendas, that are also going to have to be considered as we look at the future of both competition and cooperation. It's going to be an ever-evolving process, and I think we've got to be aware and, again, as I say, very resilient and innovative as we take on the challenges of both competition and cooperation.
David, are there any particular countries that, as a business executive, you would be paying attention to, given the regulatory proclivities that countries may have? Because those are clearly going to influence investment decisions and market opportunities. Given limited resources, I would focus on only three. China, for being less regulatory than we would likely accept as a democracy. The European Union, which is likely more regulatory, more prescriptive, and more bureaucratic than what is likely to be adopted in the United States. Their current plan has an artificial intelligence office and an artificial intelligence council. It's based on a deep set of complicated, hard-to-explain regulations and prior approval for certain kinds of technology to be adopted. I don't think that's a route we're likely to go down. And then the United States, where I think there's a big tension, and I don't mean this in a political way, but one of the challenges is the delegation of authority from Congress, after it passes a law, to executive branch agencies. Some people call that the deep state. In the case of artificial intelligence, detailed analysis of how AI is going to apply to food or drugs or energy is going to have to take place at the agency level, and there's no way Congress can get to that level of detail. So I think in some ways the best protection the United States has, in its form of government, is the multiple layers, starting with self-regulation by companies. Then states have an opportunity to regulate some things that are not central to interstate commerce. Litigation is a way of regulating conduct that's obviously used more in the United States than elsewhere. But at the federal level, let me just make a plug for not adopting a single overarching artificial intelligence agency. I think that would be a mistake. However, as the Biden administration points out in their white paper, coordination between different agencies on different use cases is going to require a set of principles.
And the ones that Tom outlined, both from Senator Schumer and from the White House, are, I think, a good set of organizing principles for that kind of central regulatory coordination in the White House. Well, thank you both very much. I appreciate you sharing your ideas and thoughts. This is an important factor for businesses around the world, because governments do shape, create, and eliminate markets. And before we close, we always like to use the last minute or so to get your insights into one strategic thing that you believe business leaders should be putting on their radar today. In one word or one phrase, we call it our emerging critical issues moment. Can you tell us one issue that you think a CEO or a business leader must be putting on their radar, that you see coming over the horizon? David, why don't we start with you, and we'll let Tom close things up. One word would be foresight. Plan ahead as best you can, because you have seen nothing yet. Chris, I would say the state of our democracy. We've got to be concerned about the fragility of democracy in our country and around the world today. That affects CEOs more profoundly than they probably fully appreciate. Two very good thoughts and recommendations. I want to thank you both for your time and your insight today. It's been great having you. This is the third episode that we've devoted to generative AI and its implications for our society and globe, as well as business. You've been listening to this podcast, sponsored by the Center for Global Enterprise, celebrating 10 years of convening global enterprise leaders around the most important business transformation issues.