Welcome to the DEF CON Ethics Village again, and it is our honor to have with us Commissioner Rohit Chopra of the Federal Trade Commission. Welcome, Commissioner. Thanks, Andrea, for having me. No, thank you for taking the time to join us. So before we get started with nerdy substantive questions, could you tell us a bit about your own career? Somewhere out there in the wilds of the internet, there may be a teenager whose dream is to be an FTC commissioner. How does that teenager fulfill the dream of growing up to be like you? Well, you know, it's really easy for me to pinpoint what totally changed my thinking on the world. And that was the subprime mortgage crisis. The absolute failure of not only big Wall Street institutions, but also the failure of our regulators to actually take action to avoid massive pain for so many people, was a wake-up call to me. I saw really clearly that regulators in Washington were willing to be spoon-fed, you know, tall tales about market efficiency and how everything is self-correcting. But in reality, many of these large, powerful firms had weapons of destruction, and they could cause damage while not really getting hurt much themselves, even getting bailouts. So that really taught me about the need to jump into the fray and fix this broken culture. And you know, I'm the first non-lawyer in about a quarter century at the FTC. I'm one of the first to have a technologist on my team. So we are trying to do things differently, because it's clear the status quo is so fundamentally broken. So with that, let's start talking about some of the things that are challenging and how the FTC is working to address them. The first issue: let's talk a bit about supply chains. Supply chain integrity is something that is near and dear to the heart of every security professional. 
And it's also something that's been in the news a lot, whether it's Kaspersky's software being viewed as suspect for use in government operations, or the most recent conversations about whether TikTok should be limited in its access to US consumer markets. You also recently concurred in a notice of proposed rulemaking around Made in USA labeling. Could you tell us about that concurrence, and talk to us about some of these broader issues around supply chain management and their impact on the integrity and security of software? Yeah, sure. So just to take a step back, I think everyone is really asking hard questions about how mergers and a lot of our policies, including offshoring and our trade agreements, have actually undermined the resilience of our supply chains and how we produce goods and services. It's obviously been in full view in the context of PPE and responding to the COVID crisis. It's obviously top of mind for me when it comes to needed pharmaceuticals, especially generic drugs, but it's also really come up in the context of our agricultural supply chain. We saw how epidemics that took place in certain meatpacking facilities were having ripple effects. And what it really represents is, I think, a tension between the concept of efficiency and the value of resilience, you know? Many of you watching will know this well: systems are only resilient when they have backstops, when there are redundancies. And I think a lot of our economic policy in America has really been in favor of consolidating to one single source. And when that single source is disrupted, boy, is there chaos and panic. You know, in the telecommunications and semiconductor sector, we saw many years ago how one natural disaster near Taiwan led to OEMs in the United States being unable to produce for an extended period of time, because their supply chains were so dependent on that region. 
And I think we're really rethinking a lot of that in the US: what does our healthcare supply chain look like, and really, what does our technology and telecommunications supply chain look like? There have obviously been serious geopolitical tensions with respect to the treatment of Huawei in communications infrastructure in Europe and around the world. And I do think there are some serious questions we have to tackle when it comes, one, to this issue of resilience and redundancies and backstops, and two, to whether we're really able to trace the components of a supply chain, particularly in the digital sector. We really have some lack of understanding about where those supply chains share data, and the extent to which it could be shared with, potentially, an authoritarian government. So there are a lot of live issues. You mentioned the Made in USA question. I've really been critical of the government's no-money, no-fault settlement approach when it comes to Made in USA claims. I think this is totally a slap in the face to many honest businesses in the United States, or frankly any honest business that doesn't lie about where their products and services are made. Because when someone can lie about it and get a market advantage, and then settle it for no money and no admission of liability and really nothing except some paperwork and a promise not to do it again, that's not a penalty, that's an incentive. So we put forth a proposal a year ago to create new penalties for this. I'm really happy to see that we're pushing forward with it. 
And I think there are a lot of open questions in software and digital services about how we think about country-of-origin claims, but I do know that our law protects against those who lie about it, and I think we're gonna need to develop our thinking around that, especially as it relates to technology around election security and other things that have such fundamental and big consequences for our democracy and our economy. So there's a lot there. Obviously, right now there are major discussions about TikTok. Previously, the FTC sanctioned TikTok for violating children's privacy. I thought the violations were totally unacceptable, and I think there's more discussion that you're hearing, frankly across political views, around the national security implications of the TikTok corporate structure and whether the existing corporate structure should be permitted to operate as is in the US. To be clear, of course we want to allow a broad ecosystem of apps and ways for people to connect, but their corporate structure may need to be modified if there are serious national security implications around the ownership structure and how data is being handled, transferred or potentially abused without user knowledge. So you recently wrote that algorithmic discrimination is a silent pickpocket, a lovely turn of phrase, that robs consumers of opportunities and forecloses their access to various services. What are the sorts of concerns that you're thinking about in the context of algorithmic transparency and discrimination, and how do we think about creating a more transparent system of algorithms that preserves information integrity for consumers and gives them maximum fair access to various consumer markets? Yeah, so I think there's a myth out there that algorithms take the bias out of things, that humans are the ones that are biased and therefore the algorithms will be unbiased, and I think that is really such a dangerous point of view. 
The reality is that these biases are built into these algorithms, often by the data sets that they are trained on. We know that that is how machine learning, artificial intelligence and algorithmic decision making work, and we know that they can have a disparate impact on certain groups of individuals, and I'm worried that some algorithmic decision making reinforces discrimination rather than taking discrimination out of it. So one of the things that I have been writing and speaking about, to build more support around, is using the FTC's prohibition on unfair practices, and there's a legal framework for that, to combat discriminatory algorithms. Discrimination, I believe, is an unfair practice under the FTC Act. We currently have some existing laws that we can use to tackle discriminatory algorithms, whether it's our laws protecting against housing discrimination, employment discrimination or credit discrimination, and I do believe that the FTC Act could be a place for a broad anti-discrimination protection when it comes to algorithms. So I'll say this: in many cases there will be algorithmic discrimination, but I would like to see more of the individuals and companies that deploy those algorithms do what many other responsible entities do, and do some basic due diligence and testing to see what some of the impacts might be. I mean, to deploy some of this and not understand how it can cause harm and damage, I just think is so blind. So there's a lot more to do on this, but I do think that this moment we're living in is such a reminder. And it's not just facial recognition, it's not just biometric identification, it's all sorts of automated decision making that I fear can be weaponized, or can be used as a tool to exclude rather than to be a fair form of commercial activity. Particularly as databases get merged with other databases and repurposed. 
So along those lines, but shifting gears a little bit: it's been an exciting week in terms of congressional testimony. I'm sure that you heard about or watched the CEO testimony of big tech companies this week, or last week I guess, in Congress. Would you have any impressions that you'd like to share with us about the kinds of core FTC concerns that you saw reflected in the testimony, and some observations about what was asked or what was not asked? Yeah, well, I wanna be mindful about my comments, because every single one of those witnesses represents a company that is under an FTC order for one reason or another. Obviously, I had major objections to the FTC settlement with Facebook. I thought that it was completely not the right way to do things, especially without even deposing or collecting the documents in the custody of Mark Zuckerberg, and then awarding a release of liability for him and other top executives. But let's just say this: I think that it's important for top business decision makers, including CEOs, to have to answer questions under oath when there is scrutiny about whether their business practices are lawful. Too often I see a mentality in Washington, including at the FTC, that goes after small businesses, app developers and others who don't have a big corporation standing behind them to defend them. Those individuals get put on the hot seat. They often get named in law enforcement actions, while too often the largest firms, or the individuals at the top who call the shots, completely evade any scrutiny or accountability. And I just think that's fundamentally wrong. And it goes back to something I said earlier about the subprime mortgage crisis a decade ago. We really didn't see many executives face serious scrutiny or accountability for the conduct of their institutions, while in the savings and loan scandal a generation ago, we saw dozens, if not hundreds, of bank executives, from smaller banks in many cases, locked up. 
So we need to remind ourselves that the law does not have an exemption for the CEOs of the largest corporations on the planet, and obtaining their sworn testimony, whether it be in a congressional investigation or a federal law enforcement investigation or a state AG investigation, I think is critical. And they should not be able to lobby their way out of that. So continuing along those lines, you recently tweeted that until we address Big Tech's business models, any effort to combat disinformation is like shoveling snow during a blizzard, another vivid image. Which aspects of Big Tech's business model specifically concern you? Well, there's a lot there. I mean, I hear so many responses from some of our largest firms about what they're doing on everything from fake reviews to harmful content, and often the response is to hire more people to do more checking. And I think that misses the point. We're having some of these debates, and there's a lot of discussion around some of the ad boycotts with respect to hateful content. My view is that we're always going to have content that we disagree with or that is hateful. In some ways that is the price of living in an open society. The question is, to what extent are companies profiting from that? Are they profiting from amplifying it, from making it more prominent, from knowing, in their own research or machine learning or artificial intelligence, that it is this type of divisive content that is keeping us more hooked and more engaged, because that's what makes us click and therefore makes them more money? Until we really look at some of the business incentives, I don't think we're ever really gonna be able to fix some of it, and they can hire all the people they want, but some of it is just gonna be too little, too late. So I'd really like us to diagnose a lot of the problems we see on the platforms by looking at their core business incentives, and whether they even make money by fixing it. 
They make a lot of claims that it's better for them to fix it, but I don't always see that to be the case. Sometimes I see their own financial interests come first, including even when it comes to bots and disinformation. So take the issue in the context of elections. We hear a lot about what they're doing, but let's be honest with ourselves. There are billions of fake accounts. There are so many false accounts that are propagating and amplifying intrusive disinformation, and often that is connected to state and non-state actors. So the question is, how are some of the platforms actually benefiting from these fake accounts and bots? It may actually be helping them when it comes to juicing their ad metrics, or making it look like more people are clicking so that they can charge advertisers. There's really very little transparency in this, and I think we need to do much more to really understand what those business incentives are and how they are actually deciding to curate and amplify content. There's a lot of talk about political bias and all this. I'm really focused on the business incentives: what makes them money, why certain types of content are amplified, and what we need to do in order to fix that, rather than hearing a bunch of PR stories about what they're doing to clean up their act. The other half of that question might be the micro-targeting of consumers. Is micro-targeting of consumers a problem in your mind? I do think it's a problem. I think that we have not fully appreciated that in normal advertising markets, you advertise to a demographic, but in today's online advertising markets, you can advertise to a single individual, and we know that this can be weaponized for the purposes of disinformation and manipulation. 
In fact, there have been reports by NATO, and even reports for the Department of Homeland Security, in Australia and in Europe, about how these tools can be weaponized to influence certain individuals, and when you can essentially provide a list of individuals and send a message that follows them across the digital world, that is a recipe for manipulation. And look, we saw this in the Cambridge Analytica scandal, and that scandal, I think, was just a small-scale reflection of Facebook's overall business model. And I'm not so sure that the behavioral advertising model, the surveillance-based advertising model that is tied to an individual person, can really coexist with our views on an open internet and a fair digital marketplace. So we'll need to look at everything from the legal immunities they enjoy, and maybe they shouldn't have some of those legal immunities if they have certain business models, as well as how we correct the core business incentive that drives some of the massive intrusions in privacy and security. You previously hinted at this issue, but could you talk with us a bit more about how you see disinformation and misinformation management happening on social networks and other platforms, and whether you feel that it's meeting expectations or whether more needs to be done? I don't. As for the idea that there is just going to be more content moderation, I have to tell you, one, I don't think it's gonna work. Two, I have serious concerns that this is being adjudicated by the tech platforms themselves. It's an enormous amount of power they hold. And I think some of those issues need to be decided through democratic governance, rather than by a few people who control the massive platforms making whatever decisions they choose. And I think ultimately they will make certain decisions that are in their shareholders' best interest and their own best interest, not necessarily in society's best interest. 
I think we have to change our mindset completely when it comes to abuse and misuse of data. And I know that many of you who think about ethics and the role of technology in society know that data can be weaponized. I mean, just take a look at some of the massive intrusions: Equifax, Anthem, Marriott. These were not done in order to open up credit cards in consumers' names. We believe, according to what we're hearing from government enforcement, that these were connected to state and non-state actors, and almost all of them, or the majority of them, are related to essentially Chinese actors pursuing their own state's objectives. So we have to start thinking about this as a national security issue rather than as a narrow identity theft issue. The remedy is not really credit monitoring for individuals. The remedy is much different. And we have to quickly accept that the harms are not just to our pocketbooks, but to our safety and security as well. Following up on that, but switching gears a bit, let's talk about anti-competitive conduct. Do you have concerns about the level of competition that's currently occurring, or rather not occurring, in some technology spaces? I do. I really think that the existing way that many platform businesses gain scale is often through a model of domination, where they have deep pockets and can lose a lot of money in order to create network effects. So think ride share, think food delivery. There are so many examples. I worry that that is a model that ultimately monetizes itself not through a set of open protocols where there's dynamic competition, but through essentially imposing more rents and regulations on every side of the market, whether it's drivers and passengers, whether it's restaurants and their customers. We have to really think hard about the way that venture capital and Wall Street incentives work. 
I'm also really concerned about the app economy, the extent to which app developers face major gatekeepers who can impose their own regulations and commissions. And ultimately, sometimes they may even be able to use developers' data to determine what new apps they might offer as a competitor, or what new products or services they might offer. The original conception of an open internet was really low barriers to entry without gatekeepers. And I'm concerned that venture capital investment doesn't go to startups that could just be squashed by a major incumbent. Ultimately, I think that reduces overall innovation. It slows us down, and it's not good for our digital progress. So I am worried about a slowdown in innovation because of the rent seeking and gatekeeper activities and choke points that many people raise concerns about in today's digital marketplaces. Following up on that question, one of the key dynamics that happens from the venture capital standpoint is a reliance on consumer information databases as a way to monetize businesses that may be otherwise losing money. How do you analyze the dynamics around the mergers of consumer databases, particularly in vertical integration contexts? And how does that connect with the short-term return-on-investment pressure that so many VC firms face and pass on to the startups that they work with? It's a great question. Before I took office on the commission, the FTC closed an investigation into Amazon and Whole Foods. And while I was not on the commission at the time, my observation was that a lot of deals like this are for the purposes of acquiring more consumer data in order to price discriminate, in order to essentially create more and more reliance on a particular platform. There's obviously been a lot of discussion around the world about Google's proposed acquisition of Fitbit. There are lots and lots of data acquisitions that I think ultimately could reduce competition. 
In both directions, in some ways: the target could have grown into a bigger challenger, and on the other side, the acquisition may lead the buyer to shut down its own internal investments in building a rival. And we need to really analyze that in multiple directions. I really supported an effort by the commission to issue orders to Google, Facebook, Microsoft, Apple and Amazon to report to us about all of their acquisitions that did not meet the merger reporting requirements, so that we can understand whether we are even monitoring mergers and acquisitions appropriately. And I really believe that in the private equity space and in many acquisition strategies, we are seeing the roll-up of many small companies that may not meet merger reporting thresholds, but that can be used ultimately to reduce competition and innovation and exclude potential rivals. And it's a real concern that we all need to be thinking about, ultimately with the goal of making sure that our US markets are the friendliest when it comes to new market entrants, because ultimately that is what is going to drive the progress, not coddling a few politically connected behemoths. So the last question, which is a bit open-ended, is sort of a hopes and dreams question. How do you see the FTC, and how do you hope the FTC will use its authority going forward in the next five years to further its mission of consumer protection and encouraging market competition? Yeah, I could go on about that forever, but just a couple of things. One, I think our enforcement remedies, when it comes to law-breaking and when it comes to mergers, need to be more surgical, and it can't just be, okay, no money, no fault, don't do it again, in cases of egregious illegal conduct. Our remedies also have to address market structure. 
I mean, if we're gonna challenge consummated mergers that reduce competition in violation of the law, unscrambling that egg is gonna require real thinking, particularly in tech and digital markets, and we're gonna need to think with engineers and developers and experts about what is the right remedy. Is it gonna require cloning? Is it gonna require open licensing? Is it gonna require spin-offs? Is it gonna require elimination of certain conflicts of interest within a business model? The list goes on and on, but we're gonna need to be data-driven, not press-driven. We're gonna need to be analytical and not act on hunches. And I think ultimately that's gonna require diversifying a lot of the skill sets at the FTC and other regulators. It's also gonna require us to unearth more information about existing business practices. We're gonna need to use our authorities to conduct more industry-wide investigations. We're gonna need to set real targets to make sure that barriers to entry are low and that there is not a lot of harm from exclusionary tactics, whether by digital gatekeepers or others in the economy. And that's just as real when it comes to other key sectors, whether it be the pharmaceutical industry or retail, but we have to bring that lens of data-driven analysis and broad thinking about remedies, where we include broad participation from the public. That's just really imperative to me. And I think there's gonna be a new bipartisan consensus built around this, one that ultimately really wants to hold bad actors accountable and wants to clean up market structures so that anyone with a great idea can get in the game. And I hope that over the long term we will be able to move the FTC and a lot of our agencies in that direction. As a bonus question, is there anything that I should have asked you about that I failed to ask you about? 
Well, I think it's really important for all of us to think about our individual role when it comes to the response to and recovery from COVID-19 and the resulting economic crisis. We are all going to need to have an all-hands-on-deck mentality about what every government agency can be doing better, and what every community can be doing to really respond to this and recover. And I'm so worried that our economy is gonna look so different, in a way that might wipe out a lot of local businesses, that might make it even harder to break into markets, and that ultimately leaves an economy and society where cataclysmic events like these have disproportionate negative effects on the least fortunate among us, and overwhelmingly or disproportionately affect people of color and others who may face discrimination. So I know that there's a lot we're doing in the digital world and in the tech community to think more about civil rights and to think about fair markets. But this economic crisis and this pandemic, I think, put it all in full view: we all need to step up, and every single sector of the economy and society needs to be thinking differently about it. Thank you so much, Commissioner Chopra, for taking the time to talk with us. And I realized I failed to introduce myself at the beginning; I was too excited to interview you. I'm Andrea Matwyshyn from Penn State, and we've been fortunate enough to talk with Commissioner Rohit Chopra. Please stay tuned for a live question and answer session with the commissioner. Thanks. Thanks, Andrea.