Good morning, good afternoon from Dublin, and a very warm welcome to our webinar on the digital transformation and consumer vulnerability. My name is Joyce O'Connor and I chair the digital group here at the IIEA, the Institute of International and European Affairs. It's a great pleasure today to tell you that this is the first of a series of webinars organized as part of a collaboration between the IIEA and the Economic Regulators Network, the ERN, in Ireland. The ERN is composed of the Competition and Consumer Protection Commission, the Commission for Regulation of Utilities, the Broadcasting Authority of Ireland, the Central Bank of Ireland, the National Transport Authority, the Commission for Aviation Regulation and the Commission for Communications Regulation. And I'm delighted to be joined here today by a stellar international panel of expert speakers: Dr Jennifer King from the US, Professor Bruno Liebhaberg from Brussels, Dr Stephen Unger from the UK and Jeremy Godfrey from Ireland. You're all very welcome and we really appreciate you being with us today. I know, Jennifer, you're at an early start, at the beginning of your day, while we're more or less at the end of ours, so thank you all very much for being with us today. We appreciate very much that you've given your time to join us. We will begin our event with a keynote address from Dr Jennifer King, who will speak for 15 minutes; the rest of our panel will then speak for 10 minutes each. After their presentations, we will have a discussion and Q&A, using the Q&A function at the bottom of your screen. I will come to you for your questions after the presentations. Perhaps you might consider sending in your questions during the presentations, and join the discussion on Twitter using the handle @iiea. Today's webinar is live streamed on YouTube. A reminder also that today's presentation and Q&A are on the record, and our webinar will end at approximately 5.15 Dublin time. 
Now, digital technology has become central to almost every aspect of consumers' lives. Indeed, all of our lives have been affected, particularly during COVID and the pandemic. The focus of this webinar is to discuss the digital transformation and how it impacts on consumer vulnerability. Navigating the issue of consumer vulnerability and protection in our digital era requires regulators to develop a common understanding of the landscape in which the digital economy will thrive while mitigating the risks surrounding consumers' fundamental rights. Our speakers will address these key issues: the challenges and opportunities posed by emerging technologies like AI-driven applications, robotics and the IoT; the increasing role of data as an economic asset; the role of the metaverse; and the central position of a small number of companies, the key online platforms, and their relationship with consumer vulnerability. Topics that will arise are the efficacy of consent; dark patterns, or manipulative interfaces; personal data and how it can be used to a consumer's detriment; platform dominance and the lack of effective data portability; the role of competition versus the role of regulation; and indeed much more. So today's keynote speaker is Dr. Jennifer King, a recognized expert and scholar in information privacy. She is the Privacy and Data Policy Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence and the former Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School. Very welcome again, Jennifer, and we look forward to your presentation. Jennifer will set the scene and, among other things, will discuss what it is about being online that makes consumers vulnerable, the increasing role of data and personal data as an economic asset, and dark patterns or manipulative interfaces, which can include interfaces for quitting subscriptions or indeed declining to give consent. Dr. King, we look forward to your presentation. 
Thank you so much. Good morning from California, good afternoon from Ireland. I'm still getting going here, so I really appreciate the invitation to open this event today. As an information scientist and researcher, my work focuses on both information privacy and manipulation in digital interfaces, and I must say that over the past decade these two issues have become more prominent in many of our lives, certainly more than I maybe even anticipated myself, and certainly not less, as, as you noted, most aspects of our lives are mediated by digital technology. Before we discuss consumer vulnerability in depth today, I want to take my time to lay the groundwork for the discussion by talking about the factors that can make consumers uniquely vulnerable in online environments as opposed to offline environments. Manipulation and deception are nothing new in the consumer context, but for the purposes of our discussion I want to put aside cases of flagrant deception or fraud, since those are often not contestable: you can often recognize fraud when you see it. What I think is motivating the concerns of this panel today is the sense that legitimate enterprises are engaging more often in unique and sophisticated forms of online manipulation that tip the scales in their favor at the expense of consumers. So first, what makes consumers in online environments vulnerable as opposed to those offline? I'm going to walk through several examples. Most of this work is drawn from the Stanford researcher B. J. Fogg, who actually developed the field of online persuasion almost 20 years ago. So, interestingly, there's nothing new under the sun, if you want to think about it that way. And while Fogg didn't advocate for the unethical use of his work, unfortunately one cannot say the same of many of his former students and disciples, many of whom have come from Stanford and gone on to form fairly large companies. 
So his insights about online persuasion tactics have been very widely adopted by industry. And in many cases, these cross the line from persuasion, which we define as an attempt to change attitudes, behaviors, or both, into outright coercion and manipulation. Okay, so how has the exploitation of consumer vulnerabilities proliferated in the online sphere? To begin, unlike offline commerce, online commerce is available to us at times and places previously impossible. And while we haven't yet reached a world where instantaneous delivery by drones is possible, although my understanding is that's actually being tested here in California, so we'll see how quickly that develops, online services are more accessible to us than ever before. If you combine that high availability with buying experiences that increasingly lack friction, there are simply more opportunities for us as consumers to make impulsive decisions, or to make decisions where we're not necessarily in our best frame of mind. The context of deciding during the daytime to go to the shops is a much different one from browsing an online shop at 11pm from the comfort of your couch. Those different contexts put us in different settings, essentially. Of course, marketers and designers know this. And so they use research findings about human decision making, as well as about how we consume and process information online, to take advantage of known biases and shortcuts, also called heuristics, in our decision making processes, and to design interfaces that capitalize on these flaws. This is where we see the area of dark patterns emerging: design patterns that are essentially built with those heuristics and biases in mind, to take advantage of instances where we make decisions impulsively or are just more easily tripped up. 
The ability of online platforms to study consumers at scale and to further refine those designs increases purchases and conversions, which has led to this proliferation of dark design patterns, also called manipulative design. To be clear, many of these insights are derived directly from industry research and practices, and not necessarily from academia. Basically, they're part of this immense online experimental lab that we all more or less unwittingly participate in when we use websites and mobile apps. Every time you click on a link on a large platform, that behavior is observed, it's recorded, and it's used to fuel more intelligence about how and why people make particular decisions. Our behavior is observed, recorded, and analyzed in exquisite detail, and that yields population-level insights about why we click on things, when we click on things, and potentially what motivates us in doing so. But to be clear, this population-level data is really only part of the story. Because not only do companies have substantial intelligence about consumer behavior at large, they also have substantial data about us all as individuals and how we conduct ourselves online. So while you might be engaging in that late-night couch shopping, you can be targeted not only on the basis of that context, that it's late at night and you might be more vulnerable to something than you would be at another time of day, but also on factors that are unique to you as a consumer. It's well established that we as people respond to personalized offers more readily than to generic ones. And being able to tailor ads and offers that are unique just to you is a very powerful tool, especially when companies claim to have the ability to make predictions about wants and needs that you may not even be consciously aware of having. 
So taking stock of all this for a moment, I'm painting a picture of what I would consider a very unlevel playing field between consumers and companies in the digital sphere. On one level, you might consider this to be highly persuasive, but not necessarily manipulative or coercive. So yes, companies know what makes us buy. They may even have an extra power boost in knowing what specifically makes me click on something as opposed to you. But the case for persuasion really relies on free will, this idea that you may find an offer attractive, you may find it relevant, but you have the ability to say no. And even if you may be more susceptible to these persuasions at different points in the day or based on other factors, the key defense here is that you are not forced to buy something. You still made that choice. You still had the control, even if the company had the unfair advantage. What concerns me in this space is that this is not necessarily true any longer: in many cases you're being forced or shamed or coerced or manipulated into actually saying yes. This is where we cross the line from persuasion into manipulation. And this is where, again, digital companies have the advantage, for not only can they take advantage of all the facets I've mentioned so far, they also control the design of the online environment, the interface. The journey that users take through the online experience, every aspect of that, is controlled and designed by the company presenting it. Through design, we can be tunneled from one point to another and presented only the options that a company wants us to consider. We can be shown information material to those decisions, such as charges or fees, at the last possible moment, and generally have our experience steered in a direction that benefits the trader or the platform over the consumer's wants and needs. 
You can imagine this through the difference with walking through a physical store, where, yes, it's a constructed environment, and yes, you may be funneled from one place to another. I think of IKEA as one of the universal examples of that. You can obviously be funneled through all of IKEA, or you can take the shortcuts. But the important part is that you know where the shortcuts are if you've been there enough, and you have the ability to actually make that choice. In many of these online environments, that simply doesn't exist. And so companies can siphon consent from users without giving them any true understanding of what they are consenting to, or of the longer-term consequences of doing so. Through design, companies can engage in what we call operant conditioning, which rewards us for repetitive behaviors that benefit them. It reinforces specific choices over time, and it shifts the baseline of what we expect from our online experiences and of how we consider conducting ourselves online. I want to note, though, that not all consumers are equally vulnerable. We know that factors such as education, fluency in the site's or your country's primary language, technical skill or experience, and age, particularly for the youngest and the oldest in our populations, all have an impact on how we perceive and conduct ourselves in these environments. However, what I have found is that regulators have generally paid very little attention to those factors, outside of explicit disabilities or of considering children, for example. But when we consider consent, for example, I would argue that it's pretty egregious that we haven't yet focused on how to make consent work broadly for everyone. 
Instead, we focus very much on this kind of imaginary average consumer or reasonable consumer, rather than trying to think about how you might dissect the consent experience into something that somebody with limited English skills, say, or limited education may understand. So when we think about these vulnerabilities, I would argue that we really need to consider how to make online services work for everyone, and not just for people with inherent advantages in these environments. And I want to close by talking for a moment about digital manipulation and artificial intelligence. When it comes to this space, and dark patterns in particular, there may be some forms of these things that we can educate people to avoid. That's a matter of building awareness and skills and education so that people don't fall victim to them as frequently. And I've seen from some of my own work, and from research by others, that there are some types of dark patterns that people do learn to recognize and resist, not universally, but some, I think, are easier than others. But much as we now inhabit a data ecosystem where our data is largely out of our personal control, without more direct action from policymakers I think we will similarly see dark patterns continue to proliferate throughout the digital ecosystem. I did recently co-author an article published by Tech Policy Press, which is a U.S.-based website, where I walked through the DMA's and the DSA's impact on dark patterns, and I welcome folks to take a look at that. While I think that both of those pieces of regulation are definitely a step forward within the EU, I don't think that they are actually sufficient to entirely curb the problem of dark patterns. And there are many forms of manipulative design that I would argue don't yield easily to education and general awareness, and that are going to snag even the most sophisticated consumers among us. 
So we are very likely to see machine learning and artificial intelligence play a substantial role in this area of digital manipulation. Right now, I'm most concerned not about machine learning and AI being used to categorize our past behavior, because a lot of what I've talked about so far is intelligence based on past actions, past observations, and things that you've actively done online, which then get recorded and analyzed and used to profile you, but about the ways in which these tools are going to be used to try to predict our future behavior, and to try to classify and potentially condition us based on those predictions. One of the concerning things about that practice is that while you may recognize when you've been mis-targeted for something, why did I get an ad for this vacation, I don't want to go to this place, for example, it is a much harder world where you're simply not being shown things at all, based on the assumptions that an algorithmic system may make about you. You don't necessarily notice the absence. So this is the area where, again, we've had these assumptions about free will and my ability to decline something or to make a rational choice about it; in a world where we're being sorted and only shown things based on what an algorithm predicts about us, you have to try to figure out what you're not seeing. And that is an order of magnitude harder problem, I think, than the one of being misclassified. So our ability to audit these types of systems, and to get explainability and correction from them, is going to be a really critical question, again, to try to understand why I might be predicted to do something by a system that is inscrutable. It comes down to this question of there being an absence rather than a presence, which I think is a much harder area to negotiate. 
So with that, I will pause my remarks. I look forward to the larger discussion, and I'm happy to elaborate on any of these points in our group discussion. Thank you so much.

Thank you very much, Jennifer, and thank you for your excellent presentation, setting out, as you said, the groundwork. Something we may come to is all that you highlighted in terms of us being in an online experimental lab. But perhaps the key question, and I think we can come to it in the questions, is that whole issue of building awareness and education among the general public, among citizens. I think it's a critical one, and perhaps one that I would really like our panel to discuss later on. So thank you very much for that, Jennifer. Our next speaker is Professor Bruno Liebhaberg. Bruno, you're very welcome back to the IIEA. I think it was back in February 2021 that we last saw you, so you're particularly welcome again. Bruno, are you coming on? I don't quite see you.

I'm here.

Great, Bruno. It's lovely to see you again. You're very welcome. Bruno is the Director General of the Centre on Regulation in Europe, based in Brussels. He was also the first chair of the EU Observatory on the Online Platform Economy. And Bruno, your presentation will focus, I think, on the power of online platform companies in citizens' lives, in the economy and in society. Thank you very much, and we look forward to your presentation, Bruno.

Good afternoon, and thank you. Thank you very much, Joyce. Thank you to the IIEA for this invitation. It's indeed the second time that I have the pleasure to talk on your platform, and it's always a treat, in particular with my great colleagues this afternoon. I'd like to make a few comments on this issue of consumer vulnerability in the digital world. 
I think that while the digital transformation is introducing improvements to business-to-consumer practices, consumers are now having to deal with ever more complex choice environments, where firms can, and I think my predecessor Jennifer said something like that, optimize these choice environments for their own ends. One of the biggest challenges for consumers in a data-driven environment, I think, is digital asymmetry: the power imbalance in the markets between consumers and data-empowered traders. And perhaps we can stay just one minute on that, because digital asymmetry has at least three dimensions. The first one is architectural, structural: it's rooted in control of the choice architecture of the service and access to data, and in the related difficulty of verifying compliant use of data in the supply chain. The second dimension of digital asymmetry is relational, because the bargaining power of the consumer is low, and they may either accept or leave, with very, very limited alternatives. And the third aspect of digital asymmetry is knowledge-based, as the trader benefits from detailed insight about the consumer, while the consumer knows or understands very little of how the trader and the service operate. So when we think about the drivers of digital regulation, the first thing which comes to mind is the realization that markets don't self-correct, and the second is that platforms may not self-regulate. But I think, and this afternoon gives us the opportunity to talk about that, that there's a significant third driver, and that is that even if markets function well, they will not do so for everyone. A quite large number of people, a quite large number of consumers, will remain vulnerable. Initially, one thought that vulnerable consumers were a small subset of the population with identifiable characteristics. 
And the challenge was to find them and target interventions to protect them, and to help them make better choices, whilst allowing the rest of the population to continue to engage freely with markets. The big questions were, and I think they're still valid: number one, whether you can make markets work better for vulnerable customers, that is, by getting them more engaged or informed, or whether you accept that the market is unimprovable and then have measures to protect vulnerable consumers from its worst effects. Second point, how many of us are actually vulnerable, and in what circumstances? This changes very much because the context is different; when we think about the energy price hikes, it's clear that the population is much larger. And the last point was how to preserve the benefits of the market whilst intervening. We're still at a relatively early stage in understanding all this. My colleague Amelia Fletcher, who's a CERRE Research Fellow and also a non-executive director at the UK Financial Conduct Authority and the CMA, is a leading thinker working on these questions. I refer you to her work; she also helped me to prepare these notes. But now, focusing again on consumer vulnerability in digital, there are a number of points I would like to make. The first is digital exclusion, the inability of people to access markets at all, because, for instance, of a lack of broadband connection, or an unwillingness to engage at all. This means that vulnerable people remain trapped in the more costly, less efficient, perhaps less diverse, analog world. And similarly, digital illiteracy is an issue. When I mention it, you immediately realize that, of course, we have to care as much as possible about children when we think digital. But children are not the only category which is vulnerable: a vast part of the elderly population is concerned as well, and can therefore fall into the category of vulnerable. 
Second issue: once people can access markets, we realize, as I hinted a minute ago, that we are all vulnerable to the effects of the choice architecture which firms employ to distort our decision-making processes. So our conception of what vulnerability is and who is vulnerable is likely to change, potentially quite radically. Third issue: digital gives us more choices than before. But that doesn't mean that consumers will necessarily be better off, if the choices are illusory or can be manipulated. Fourth, some people react to digital content differently from others, or we react differently in different contexts. Again, how do we protect those that are vulnerable and could be harmed, while retaining the free flow of information for the rest? The DSA and the DMA, the Digital Services Act and the Digital Markets Act of the European Union, are both directed at these issues. The DMA's focus on fairness in the terms of trade between digital platforms and consumers, including vulnerable consumers, is a good hook for further action. By the way, the DMA and the DSA just complement a series of other initiatives which have been launched by the European Commission within the framework of the new consumer agenda in 2020. That agenda recognized that new technologies and data-driven practices may limit the effectiveness of current consumer protection rules designed to protect consumers in the digital environment. It stressed also the need to ensure, and that's a fundamental point, equal fairness online and offline. And last May the Commission launched a new targeted fitness check of EU consumer law which focuses on digital fairness. This will look, among other things, at consumers' vulnerabilities. 
It will look at dark patterns, as explained by Jennifer, those deceptive design patterns, tricks used in websites and apps that make you do things that you really didn't mean to, like buying or signing up for something; at personalization practices; influencer marketing; contract cancellations; subscription service contracts; etc. Since my voice is starting to leave me, I will make three final points on the implementation and enforcement of the new rules. Implementation of the rules that I just described will require from regulators new expertise around technical issues such as A/B testing and algorithmic decision-making, excuse me, to understand, anticipate and remedy the myriad ways in which consumers can be put at a disadvantage in the online economy. And the potential is there for regulators to use the same techniques to assess measures to protect consumers, or to influence how they make choices. We know that regulators are beginning to acquire resources and organize themselves; DG COMP is considering a chief technology officer, and it is said that the DMA will lead to a significant recruitment of new digital expertise. Second conclusion: whatever upgrade regulators grant to their structures and processes, effective implementation will require very close cooperation between regulators and the firms and the platforms, and we should not underestimate that. And finally, regarding enforcement, there's an issue about institutional design. Should enforcement be centralized, as provided for now in the DMA and the DSA? Or should it be like the current system in EU consumer law, where you have coordination by the country of consumption? I'm agnostic on that, not totally, but I think both systems can work. What will not work, or is not desirable in my view, and I will leave you on that, is country-of-origin coordination. 
We see its limits with the GDPR, and I think that country-of-origin coordination for consumer protection in digital areas would provide a very limited incentive to efficiency. Thank you very much.

Thank you very much, Bruno. I think you've added significantly to our discussion, raising again the issue of the complexity of the environment and all that goes with it, but also raising issues about the expertise of regulators, how they can add to that expertise and perhaps use the same techniques as the platforms are doing, and how important close cooperation between regulators will be. And of course, that question of centralization under the DSA and DMA: will that happen? You've left us with a lot to discuss and think about. And I have to say we really appreciate you being with us today, Bruno. I know you're suffering from COVID and its effects, so we really appreciate you taking the time to be with us under the circumstances. We wish you all the best for a speedy recovery; take good care, and thanks very much again for being with us.

Thank you very much. Thank you.

Our next speaker, then, is Dr. Stephen Unger, who is the UK Chair of the International Institute of Communications. He was previously Chief Technology Officer of Ofcom and Vice-Chair of the Body of European Regulators for Electronic Communications. Stephen will explore the area of regulation in the digital sector. He will look at the history of telecoms policy and ask what lessons can be learned and what can be applied to digital regulation. He will explore the practical design of regulation and how regulators should address these issues. And I think, Stephen, you'll be discussing examples such as the Online Safety Bill in the UK, and the implications of applying AI principles such as transparency and explainability. So Stephen, we look forward to your presentation. 
Thanks very much, Joyce, and I'll keep it brief, because this is a topic we could easily cover all afternoon. Actually, can I first commend the Institute for promoting international dialogue on this topic. One of my roles while I was at Ofcom was leading our international engagement, and I must say that despite Brexit, or perhaps because of it, I feel very strongly that the UK, Ireland, the EU and the US have to work together on these topics to find some common ground. And events like this are what that means in practice. My core argument actually builds on Bruno's closing comments, and it's that we really now need to focus on practicalities. A great deal has been said and written about the problems we're trying to solve in the field of digital regulation, and that's certainly complex, but I think an even greater challenge is asking what can practically be done to address those problems. As part of that assessment of practicality, we obviously need to remember that the internet does deliver some pretty neat services to consumers, and they shouldn't be collateral damage. As Joyce notes, I come at these issues building on my experience as a traditional regulator of telecoms and media. I spent about 20 years at Ofcom, and as if that wasn't enough, I'm currently writing a book about the last 200 years of telecoms regulation. One recurring theme is that it's always been much easier to find problems than to fix them. There have been some successful interventions during the last 200 years, but also several well-intentioned interventions that have made things worse. There have, for example, been several occasions when regulating companies as if they were natural monopolies has simply reinforced their monopoly power. And I can think of several other occasions when we tried to ensure that consumers were better informed by requiring providers to make complex information available to them, only to find that we'd simply confused them further. 
So the practical design of interventions is really important. There's a nice quote I'd like to give from Stephen Littlechild, who in 1983, a while ago now, provided a report to the UK government on how to protect consumers from the monopoly power held by the newly privatised BT. He recommended a particular approach to price controls, which has since been widely adopted, but he also set out this warning. What he said is that regulation is essentially a means of preventing the worst excesses of monopoly; it is not a substitute for competition, it is a means of holding the fort until competition arrives. He was actually a bit naive about how long it would take for competition to arrive. I think back then we all thought it would be quite soon; it's still not arrived, but anyway. I still have great sympathy for the positioning in that statement. It demonstrates humility about how much can be achieved through rulemaking. And that does give me cause to worry about where we are now. In both the EU and the UK, we've basically moved very quickly, in a very small number of years, from having almost no regulation at all for digital platforms to having very detailed and prescriptive regulation. And my worry is that that rush to judgment risks that we've got it wrong. You know, we spent 200 years on telecoms regulation and we've still not got it quite right. It will take a little while to understand how to do this well. Let me take a couple of examples. As Joyce mentioned, I'd like to touch on what's happening in the UK at the moment with the Online Safety Bill. This is a major piece of legislation that until about yesterday was going through the UK Parliament. It's currently been suspended while we work out who's leading our country, but in any case, I think my remarks are still valid. The bill is a bit of a monster. It's got about 200 different clauses, many of them complex and several ambiguous. 
It's got about 15 different schedules after those 200 clauses. And behind the bill is a fundamental contradiction, in that the stated purpose of the government in introducing the bill is to make the UK the safest place in the world to be online while defending free expression. And it seems to me pretty clear that those two goals are simply incompatible. So I think a more realistic target would be to focus on the most harmful content. We need to understand what we're most worried about. We need to understand why that type of content is posted in the first place. We need to do what we can to address the incentives that lead to it being posted. We need to improve takedown processes, and all the rest of it. But we need to do it in a targeted way. And at the same time, I do think we need to accept that the internet will never be completely safe. Trying to make it so through detailed rule books will do more harm than good. It will compromise freedom of expression. It will stifle innovation. And it risks creating a regulatory burden on platforms that will actually just reinforce the position of the currently dominant platforms. So just think about the unintended consequences. The other example I'd like to give is the regulation of AI, where there's been a lot of discussion about the principles which should be applied. My concern is that those principles are almost uniformly difficult to disagree with in principle, but several feel to me impossible to deliver in practice. If I could take one example, it's often been suggested that the use of AI should be transparent or explainable. Now, another interesting lesson from the history of regulation is that we've often tried to apply information remedies to help consumers understand what's happening, and they've often been ineffective because consumers just don't make use of the information they're given. 
That's been the case in the past for relatively simple regulatory problems, such as dialling a premium rate telephone number, for example, or applying for a loan from a bank. And it will certainly be the case for something as technically complex as the use of AI. So in this case, I think what we have to do is make sure that appropriate mechanisms are in place to ensure accountability, and those need to take account of the specific risks that apply for each individual application of AI. So, for example, if AI is used as part of the fly-by-wire control system of an airplane, then I certainly want to know that someone competent will be held accountable for my safety, and I certainly want to know that person will be overseen by an independent regulator. But I don't expect to be told how that fly-by-wire control system works when I board the plane, any more than I expect to be told how a jet engine works. And to take another example, if AI is used to recommend what movie I should watch, I probably don't need a regulator. I can probably be trusted to make up my own mind whether to accept the recommendations I'm given, without oversight from anybody else. So again, we just need to get practical. In conclusion, I think we need to treat digital regulation as a journey. As I've said before, the journey we took on telecoms regulation took a very long time, a couple of hundred years. We need to be a bit patient with digital regulation. We need to deal with the worst excesses of the market as best we can. But we also need to be willing to adapt and learn as we go. We shouldn't pretend that we know now what the end point is. And with that, I'll stop. Thank you very much, Stephen. I think that was a very interesting and practical presentation about key issues. And I think your point that it's easier to find problems than to fix them is one that perhaps we should think about. 
And also that it is about learning and the journey, because if we think about technologies, particularly emerging technologies, we don't know what they are going to be in the long run. So all the points you made about regulation, I think, are important, and they bring us back to that point of understanding enough to know and not to have unintended consequences. I think that's really very, very important. So thank you very much for raising those issues. Our final speaker now is Jeremy Godfrey. He is the chair of the Competition and Consumer Protection Commission (CCPC) in Ireland. He has over 30 years' experience in public service and in the telecoms and IT sector. He was a partner in PA Consulting and worked in the UK with the Department of Trade and Industry, and in Hong Kong with Hong Kong Telecom and the Cable and Wireless group. Jeremy will explore the CCPC's experience and reflections in this area and address what makes consumers particularly vulnerable. I think, Jeremy, you will speak from a reflection on what has come to the CCPC. You will also describe e-commerce and consumer vulnerability, digital detriment rather than physical detriment, and how our data is used, giving us practical examples. Jeremy, we look forward to your presentation. Thank you very much, Joyce. And thank you to the IIEA for inviting me to join this panel and meet some old friends on it. I'm going to start off, I think, just echoing something that Bruno had said and I think Jennifer also talked about, which is that I'm glad this is about consumer vulnerability rather than about vulnerable consumers, because in the right context we are all vulnerable and we all need to be protected. So I'm going to talk first of all about some categories of consumer detriment that we've come across, then talk about some stuff that might be done, and then I'm going to take my usual privilege of disagreeing with Steve about something. But I'll start off with some categories of consumer detriment. 
I'm going to start with the more boring and work up to the more interesting types. In terms of detriment we see in the online world, one category is old-fashioned types of consumer detriment that are exacerbated in digital markets. An example that we've seen particularly in Ireland came as a result of Brexit: Irish consumers buy a lot of products from the UK online. I think they're among the biggest cross-border buyers in the EU, and a lot of it comes from the UK. One thing that has happened is that some traders do not disclose that they are now selling from a third country and that customs duties will be payable when products arrive. That was a huge issue shortly after the Trade and Cooperation Agreement came into force. And we also see some unscrupulous traders using .ie websites to pretend that they're in Ireland, and people then get stung by that. Another example comes from our product safety perspective. In the old world, one of the ways that we prevented unsafe products coming onto the market was through inspections of container loads of products arriving at the ports. Now a lot of products arrive in small packets, and I think there's been some research in other markets suggesting that 60 to 80% of cosmetics, for example, which is a very popular product to buy online, don't comply with European safety standards when they arrive. The second category is to do with digital content and services. Until quite recently, normal consumer protection rules didn't apply to purchases of digital content and services. 
So you'd have things where somebody would have subscribed to a digital streaming service and then the content changed out of all recognition and they weren't allowed out of their contract, or we've seen issues with people buying products like in-app coins where the pricing transparency rules haven't been followed, or people haven't been able to get their money back in the 14 days they should have to change their minds. I'll talk a little bit about how that's changing. Then I think we come to the types of manipulative practices that Jennifer talked about so well. I must say, I came across this for the first time about 18 months ago, and I noticed that in Europe, in the Unfair Commercial Practices Directive, persuasive practices are allowed, misleading practices are not allowed, coercive practices are not allowed, but there's no mention at all of manipulative practices. So a practice can be honest, in the sense that it's not lying to you, and it can be not coercive, but it interferes with your free will by abusing your heuristics and your cognitive biases, and I'll talk a little bit about that. One other thing I will mention that we've come across: a heuristic we often use is just to take a recommendation from someone we trust. So the role of influencers online is quite important, and they don't just influence the people who see them; the people who see the influencers then influence their friends and family, so their impact is magnified in that way. We've actually been involved in a collective pan-European action in relation to TikTok, and one of the aspects of that was how paid influencers are identified on that platform in a way that people will recognise. And the last thing I'll mention as a category of detriment is an insight from our product safety responsibilities. 
We are also a market surveillance authority for a wide range of consumer goods, including under something called the General Product Safety Directive. That's quite interesting, because it just contains one obligation, which is: you shall not place an unsafe product on the market. But there is no general service safety directive, no equivalent for services. And when I think about what constitutes the safety of an online service, it's not just about the harmful content. There is a myriad of features of a service: how is identity and age verification done, how do recommendation algorithms work, how is content moderated and taken down. So assessing the safety of a service is a more holistic matter, and, as Steve said, 200 clauses in an online safety bill may actually still not address the question of whether the service as a whole is holistically safe. So I think there is obviously a link. The CCPC is not a content regulator, but looking at safety through a consumer protection lens, I think, actually does produce some interesting insights. Let me come on to what can be done about this. One category of stuff is obviously updating the regulatory framework, and in Europe some of that's already been done. For example, it is now the case, and in Ireland it will be transposed with a bill that should be passed in the autumn, that digital content and services will have the same or equivalent consumer protection applied to them as physical goods and other services. Then there's the Digital Services Act, and I'm very interested in Jennifer's paper about this. The one thing that I got really excited about in the Digital Services Act was that it contained the word manipulation, and it talks about manipulative interface design. That again may not be the whole of dark patterns, but I think it's a really big step forward, although personally I would like to see a general prohibition on manipulative practices, not just on manipulative interface design. There's a lot of stuff in the DSA that starts to address some of these problems. In Ireland we have an Online Safety and Media Regulation Bill, which I haven't counted the clauses of, Steve, but I'm sure by weight it will at least be comparable with the UK bill. A couple of other things: the DMA was mentioned, and competition law was mentioned. In competition law we have this concept of abuse of dominance, and there are two types of abuse that the experts will often categorise. There's exclusionary abuse, which is where you try to prevent competitors coming into the market, and then there's exploitative abuse. A lot of the discussion around the DMA has actually been around exclusionary abuse, so there is, I think, still scope to look at exploitative abuse. And it's quite interesting how you can see the same practice, say around consent for the use of data, through different lenses. It can be a GDPR issue: did you give consent, and was it obtained properly? It can be a consumer protection issue: is the consent that you've been asked for an unfair contract term, was it explained to you properly, did it violate your rights as a consumer? And we've also seen some of the competition authorities in Europe say, well, actually, it could be an abuse of dominance issue: you're so dominant, and I think Bruno mentioned you haven't got a choice to go anywhere else, so is asking you to consent for your data to be used in this way actually an exploitative abuse, the digital equivalent of charging an unconscionably high price in a more traditional market? But I think we all recognise the limitations of regulation, so I think ethical behaviour by traders, although I've got no idea how we promote that, is something that we should not forget. And the one thing I would say about ethics is that if a company adopts ethics 
by design, as opposed to compliance by design, then it will certainly do compliance by design, but it will not be spending a lot of time working out exactly where the compliance boundary is. So ethics by design may be an advantage, not only in your corporate reputation but in fact in enabling faster innovation. Then there's the whole consumer education and consumer inclusion piece that's been talked about. And lastly, a little bit on regulatory enforcement: the skills and knowledge that are needed are certainly different and hard to acquire, but I think there's a lot of need for collaboration between enforcers. So there's collaboration across national boundaries, both within the single market of the EU but also between Ireland and the UK, and between EU countries and China and so forth. I think Bruno was very brave to come to Ireland and talk about the country of origin principle, which is definitely a hot potato here. But the other thing I'd say is there's also a need for collaboration between regulators in different fields. Consumer regulators, competition regulators, data protection regulators and content regulators all need to collaborate closely, because the same behaviour can often be looked at helpfully through those different lenses, and we certainly need to arrange between ourselves that we don't just point consumers in all sorts of different directions whenever they complain. And finally, I'm going to disagree with Steve about one thing, which is his movie recommendation. The one thing I would say about it is: what if you have been manipulated into choosing a movie, if it wasn't your free will? Now, wasting your time watching a movie may not be a lot of harm, but I think it is an example. I mean, if the way that movie recommendation was made was somehow manipulative, and, as Jennifer said, what about the movie that wasn't recommended to you, that they tried to keep away from you? I would not be quite so sanguine 
as to say that even with something as harmless as that, anything should go. And with that I will close my remarks. Thank you very much, Jeremy, and I see Stephen isn't too upset by your level of disagreement there. But you used the word collaboration on a number of levels, and I think it's a theme that has come through: collaboration between regulators, between countries, between the US, Europe and different member states is absolutely critical, as are consumer education, awareness, skills and development. Opening up the discussion now into questions, I'd like to start with that broad issue of consumer awareness and education. A lot of research shows that there is reluctance among employers, citizens, employees and indeed politicians to embrace new technology when they don't know anything about it. I think your initiative in the ERN is great, because it's actually starting that process of communication and understanding, but what do you think we can do for citizens in society to help create awareness of all the issues that you have raised, in a sense giving them an overview, not lecturing them, but an overview of the choices that they make and the implications? Jennifer, since you raised it at the beginning, perhaps we might start with you, and I'd ask everybody else, Stephen and Jeremy, to come in then. Thank you. I was actually just thinking it's too bad that Bruno had to drop off, because I thought some of his remarks on this point were very good, especially with regard to AI and transparency and decision making. I mean, I agree with him, but, for example in the AI context, leaning in on explainability is probably not going to work. It might be useful for regulators and experts like myself for trying to assess systems, but for the average consumer I'm not very optimistic about it. And I come from the US, where we're at a different extreme, with very little regulation in the space, and 
consumers are really left to fend for themselves. It's an interesting one to consider: where there are some actual rules of the road in place, consumers don't have the burden of trying to manage all the weight of trying to understand these systems. Just to say, I come from this more extreme perspective, where my response is often just: how? Because there are just too many practices, baseline practices, that I wouldn't expect consumers to deal with. But thinking in the EU context more broadly, and looking at consent perhaps: the GDPR obviously gave some design guidance on consent, but very minimal, and that was maybe a good example of trying to think through how you actually obtain consent. I don't think the standard is, quote-unquote, informed consent, that somebody has to really understand the implications of what they're doing, but certainly trying to explain to someone the longer-term implications of what they could do is, I think, part of what makes this such a hard problem, because a single disclosure made at this point in time cannot capture what happens down the road. So I guess it's a hard nut to crack, and certainly there are just going to be practices that I think are fundamentally not explainable. But that said, thinking about the consent context again, there really hasn't been much work in trying to think through how you would dissect it in a way that's more broadly consumer-friendly for different groups; I think it was Bruno who mentioned that as well. I worked on a project with the World Economic Forum where we dove into these issues, and I do think one area that maybe holds some promise is a shift towards not relying again on information, in the sense of trying to get people to read things and understand things, but on a more automated consent system that can take into account your preferences and help 
just negotiate these decisions for you, without you having to actually navigate and do it yourself. Honestly, I think that's going to be one of the most useful ways forward, again depending on a world where some of this is already taken off the table, unlike in the US, where there's a lot more burden, I think, on consumers to have to understand and navigate these spaces. Thanks very much, Jennifer. Stephen and Jeremy, would you like to come in on that? Sure, and I think we're probably all in a broadly similar place. I agree with everything that's been said in principle, and then I worry about practicality. If I could give one historical example: one of the things we used to worry about at OFCOM was people dialling premium rate telephone numbers, those are numbers that start with 09, and getting charged a lot of money for dialling those numbers. We found it very difficult to explain to consumers that if you dial 09 you're going to get a big bill, and that's not a complex message. So there is a really big challenge here, and if we can crack it, then job done. Actually, I think the way into it is through more research on how consumers actually behave. The issue for me is starting at the consumer end: listening to consumers, listening to what type of information consumers want rather than what type of information we want to give them, and understanding what types of information will change behaviour. A few years ago behavioural economics was very trendy, and, you know, became a bit too trendy, but it had the right idea. It was about researching what information will actually nudge people's behaviour, rather than what type of information will just get them confused. So I think it's about us listening and then designing remedies, interventions, which take account of what consumers actually want to be able to use. And by the way, there will be times where that's just not going to be effective. I mean, 
the reason I gave the two examples I gave was because I think there are going to be some instances where things are so complex and safety of life is at stake that you just have to regulate, you just have to check that something is safe; you can't deliver the outcomes you need any other way. For other times, maybe you can find other ways of doing it; the movie example is a case in point. But you've got to work out what it is consumers want. Jeremy, would you like to add to that? I'll just add two very quick things. One is that I 100% agree about the limitations of information, of saying to people, here's some information, make a wise choice. The second thing I would say about education is it needs to start in primary school. This is not something we're going to be able to bolt on. Digital literacy, understanding the world, media literacy: we're going to need to look right from the beginning at how we teach people how they interact with the world. Thanks very much for that. And just before I go to some other questions, I was interested that all of you mentioned collaboration, and particularly international collaboration. How do you think the EU, the US and the UK can cooperate further on the challenges that are happening? Are there mechanisms for that to happen in a formal or indeed an informal way? Jeremy, I might start with you there. OK, well, I think actually there's two kinds of collaboration, as I said: there's the international, and there's the collaboration between regulators in adjacent fields. Actually, Joyce, you mentioned the ERN, the Economic Regulators Network. One of the things the ERN spawned was something we called the digital regulators group. That is a number of us within the ERN, so the communications regulator, the Broadcasting Authority of Ireland, the CCPC, and then also the Data Protection Commissioner. We're all together, and we're very conscious of the need 
to collaborate, that we might all end up investigating the same sorts of behaviour, that maybe we need to have some legal basis for sharing information, and so forth. And if we all compete with each other to recruit digital experts into our field, then we will not only be beyond the public sector pay scales, we will be well outside them. So those are issues. In terms of collaboration within the EU, of course, you have a single market with the same rules, so collaboration is extremely important, and we have some mechanisms for it. I mean, the DSA, I think, does actually say that for the very biggest platforms the Commission will be the regulator, so I think you get collaboration on enforcement cases. Internationally, it's certainly good to have sessions like this, and other sessions where people learn from each other, and the greater the extent to which we approximate our approaches, the better. People talk about the Brussels effect, whereby EU regulation is sometimes something that companies, if they have to comply with it in Europe, think they might as well comply with in other markets as well. Whether that will quite apply to this sort of regulation, I know Steve is looking very sceptical about, but certainly, I think one of the questions that was posed was what impact regulation has on innovation. If regulatory regimes are very different in different markets, that is clearly going to be more costly for people, but we don't have an international institutional mechanism for requiring people to do the same thing, and of course each jurisdiction has its own political processes to go through. So I think really the best we will get is information sharing at the policy level, and we will sometimes have joint investigations. I mean, that does happen in 
the competition field: you do get a joint investigation between regulators in the EU and, say, the UK or America if there's some big multinational involved in an anti-competitive practice in multiple jurisdictions, and people do coordinate. I could see that happening here to some extent; we might end up with some kind of joint investigations, but it will need agreements to enable that to happen. Jennifer, what do you think? Would the Trade and Technology Council be involved, particularly around privacy and data protection, or are there any other mechanisms that you see from California for that level of international cooperation? Right, I've been thinking about your question, because I feel like all roads here lead to the Federal Trade Commission, primarily, at least at the federal level. I'm not working for the Federal Trade Commission, and I don't know the extent to which they collaborate with their counterpart regulators over the Atlantic. But, interestingly, we do have a new office in California, really our first state-level data protection regulator in the entire US, if you want to think of it in those terms, and so that might be one area where we begin to see potential cross-collaboration. A lot of these cases are brought by US state attorneys general, who I know collaborate with each other, but I'm not sure how much those folks also reach outside the boundaries of the US. So it will be interesting to see if our California office is an interesting example. Great, thank you very much. I have a question here now, going back to dark patterns, from Robert Morek, who's the chair of ComReg, and he asks: are the dangers of dark patterns with regard to content on social networks worse compared with what these dark patterns can do on e-commerce platforms? I'm happy to start that one, and just to say, I would 
say this is an apples-to-oranges comparison; they're not precisely the same things, and I don't know if one would be more dangerous than the other, because they have different effects. On social networks, the dark patterns that have been identified in those contexts are often around information disclosure. So, especially if you're focusing on the platform itself, whether it's a Facebook or a TikTok or what have you, the concern is about what you're sharing with the platform, not as much about what you might be sharing with other people, other members who are using the platform; usually it's a question of information disclosure. Whereas in the e-commerce context it's much more a question of the harms I might suffer, primarily a loss of money, right, by being scammed and defrauded, overpaying, for example, or having been forced towards one choice or another. So I would think of them in different categories, honestly, in terms of their effects. OK, thanks. Would you like to come in, Stephen or Jeremy? I'll come in briefly. I'm with Jennifer on this. For me, the financial effects I almost worry about less, simply because it's slightly familiar territory, actually, sort of Jeremy's point: that's a familiar type of concern on the internet, and you would hope that we can repurpose some existing tools to deal with it. Whereas the types of harms we're seeing resulting on social media, I mean, if I think about my kids and the world they're growing up in, those are things I really worry about. Yeah. The only thing I would add is that data collected from your behaviour on social media platforms can then be used in dark patterns on e-commerce platforms, and so the more data people have, the darker the pattern can be. So in some ways there's also a link between the two. 
Thanks very much. I've got a question here from Mary Brown of University College Dublin, an IIEA member, and she asks: is there a place for business stakeholders in the formation of regulation and collaboration? Stephen, I see your head nodding there, so we'll start with you. Sure, I feel strongly there is, because it seems to me on quite a few of these issues there is some alignment between what we're trying to achieve and what those platforms that have a brand to protect also want to achieve. That doesn't mean we agree; I mean, I take the point that they'll design their interfaces to guide people to buy products and so on, so our intentions aren't perfectly aligned. But there are quite a few areas of harm which they also need to deal with, if only for reputational reasons, and they understand their platforms much better than we do. So if we don't work with them in those areas, I think we're missing a very big trick. Yeah, and I suppose that's part of the Digital Services Act as well, involving all the stakeholders. Jeremy and Jennifer, would you like to add anything more to that? 
I'll just say that I have talked to some companies in relation to the questions of dark patterns, and mostly what I'm hearing is: tell us what to do; what is the line between manipulation, coercion and just persuasion? So what I take from that is there is a need for clarity. I obviously didn't go into depth on the dark patterns issue in my opening, but there is often a kind of I-know-it-when-I-see-it level of dark patterns, where I don't think there's a lot of contesting that they're manipulative or potentially deceptive, but there is a grey area that needs to be navigated. And I think companies do need clarity to try to understand what the line is between simply persuading, appealing to someone's reason, and manipulating. Is that what Stephen mentioned as unintended consequences, or is it just a lack of understanding of the science behind what's happening? I would say on that that we're at a stage where you can test anything. It might be that you make a design change where there's an unintended consequence, but we're well within the world where design researchers can absolutely test any design change you make to try to understand its impact. So if you're seeing a trend, it can be investigated, and I don't think that's necessarily an excuse. Potentially, at scale, it can be harder to track those things, but ultimately I still think it's something that can be assessed. And because we're coming up against time, two more questions. Laura Cunningham from the international unit in Cumbergast asks: what role will trusted flaggers and trusted service providers have in protecting consumers online? Will these two groups be a focus for digital regulatory cooperation? 
Yes, I might take that one, and I'll also comment briefly on the previous question. My experience as a regulator is that businesses have a very important role to play, but you also often get told, "we want principles-based regulation, don't tie us down with a lot of detail and 200 clauses", while at the same time, "we want to know exactly what's permitted and what's not permitted". That's another big difficulty. Part of the answer relates to the grey areas Jennifer talked about: don't go into the grey areas if you're worried; stick to the white areas and you'll be grand. This is, I think, a very difficult issue, but as Steve says, absolutely rightly, only the platforms, only the businesses, actually know what their platforms do and what the effects are. So I think there is a role for high-level principles as well as for some detailed rules.

On trusted flaggers and trusted service providers: trusted flaggers are a concept in the Digital Services Act. The Act essentially says that if any illegal content, content that is illegal for any reason, is drawn to the attention of a platform, the platform is obliged to take it down. That could be anything from an advert for an unsafe product, to a foreigner intervening in an election outside the election spending rules, to people naming a rape victim online. All of these are illegal content, and it's going to be impossible for any regulator to read all the content online and spot what's legal and what's illegal. So I think the trusted flaggers are there as people whose raising of an issue should be prioritised, because that's their role. Yes, I think they will have an important role. Trusted service providers are more to do with regulating
digital identity, so I think that's a very important task, but it may be a bit perpendicular to the issues we've been talking about today.

Stephen, very quickly. I'm nervous about the trusted flaggers concept, because my worry is that we end up listening to the people who speak loudly, and they're not necessarily the people who know what they're saying, so I think we need to tread carefully. And is there a way to address that issue, Stephen? I'm not sure. I think over time you can work out who you can really trust, but it takes time; it's another journey.

Okay, just the final question, and I think it was raised throughout the presentations. It concerns the significant risk to innovation resulting from current or potential policy measures intended to address consumer vulnerability. What impact do you think these will have on innovation and development, particularly for small and medium-sized firms in the economy? Stephen, go on.

Well, for me the simple one is just the burden of regulation. We often talk on this side of the Atlantic about a European Google. If I were starting up a search company and I faced the set of obligations set out in the OSB, the bill that's going through the UK parliament at the moment, I'm not sure I'd start a business in the UK, simple as that.

Any other comments? A couple of comments, maybe. One is that I'm interested in competition. As a competition authority, we often look at regulations and ask: does this inhibit competition? If everyone's innovation is slowed down a bit because of the need to comply, but competition isn't inhibited, that's one thing; as Steve was effectively saying, something that prevents market entry does inhibit competition. I would tend to look at it from that point of view. And some of these regulations, like the Digital Markets Act, provide
opportunities for small companies to innovate, by dealing with the possibility of large companies excluding startups. The last thing I would say is that not all innovation is good. Where do you think these dark patterns came from? They were innovations, all right; they're just not good ones. So I think we shouldn't be seduced by people saying, "oh, you're going to have an impact on innovation"; sometimes that's exactly the impact we want to have.

Jennifer, would you like to have the last word on this one? I literally live in the Wild West, both geographically and conceptually, where you try to have these discussions with US companies, and it is very much an absolutist, zero-regulation perspective; they have come very grudgingly to the table even to accept that we need some rules of the road. So it's harder for me to weigh in on this, because, coming from that more absolutist world, I think we absolutely need more clarity, more rules of the road, and more of an emphasis on safety. That's one of the things that runs through my concerns with AI right now: we may see a lot of experimentation that ends up having real harms for people, harms that are completely unanticipated because proper attention just hasn't been paid to thinking through the possible outcomes of the innovations you're coming up with. That's where I feel the American style of innovation may really lead us down the wrong road.

Well, thank you very much. Time has caught up with us; it's gone very quickly. I just looked at my watch, and we're a little bit over time, but not too bad. I'd like to thank all our presenters, Jennifer, Steve, Jeremy and Bruno, for your really interesting presentations. It shows the power of that
international collaboration and discussion, which I think was highlighted today. This is the beginning of the ERN series here, and I think you've set very high standards; it's great to see this series starting, so thank you very much for that. These were all excellent presentations, and you brought up interesting ideas. I'd like to thank our audience for their questions and for being with us today, and our team: Lorcan Malali on the production side and Seamus Allen, our digital policy researcher, who brought everything together. I hope we'll meet again. Thank you very much again for being with us. Goodbye from Dublin, and we hope there will be a next time. Thank you very much. Enjoy your evening, and enjoy your day, Jennifer. Have a nice day.