and pause. Hello everybody. Let's try that again. Hello everybody. It is a beautiful day outside and it is a great privilege to have you inside with us for the Berkman Klein Center's event, Seven Fellows Predict the Future. Thank you for joining us today. We are here to celebrate the Berkman Klein Center's fellows from this year, the 2022-2023 cohort, and the research that they've conducted this year. Before we begin I want to note that we have two cameras recording today's session. It will live on the Berkman Klein site in perpetuity, as long as the internet exists or the Berkman Klein Center exists after this. So I'm Rebecca Tabasky. I'm the director of community at the Berkman Klein Center, a place that I've called home for the past 17 years. And this event makes me so proud and so happy. This is our first cohort of fellows to join us physically at the center since Harvard opened up after the pandemic began. They've helped us to break in our new Lewis office, to break things, to find what's broken, what needs to be broken, what's fixable, what works. They've helped us to reflect on and redefine our fellows program as the center entered its 20th year. This is also the first cohort of fellows selected and designed with Sue Hendrickson, our executive director at the Berkman Klein Center, whom we also celebrate tonight. And let me pass the mic to Sue to give a welcome to you all. Thank you all for coming out. I'm so excited to see so many people and to celebrate such an extraordinary collection of fellows. One of the things that's most awesome about this job is the ability to bring in extraordinary people with extraordinary ideas and help them actualize those ideas and take them out into the world and launch them in new ways. We've had this group exploring issues of digital transformation, intellectual property, migration, communication, the ways that technology and society interact. 
It's been an extraordinary collection of thought leadership, an amazing group of people, and I'm just honored to be here and looking forward to their reflections and the ways that they're going to change the world, both today and in the future. So as ever, we weren't exactly sure what this year would hold for the center and for our fellows as we re-entered the realm of the physical in the midst of this pandemic, and we never really know how our fellows will vibe with one another when we select them. There's a little bit of alchemy, a little bit of guessing, a little bit of hope that we inject into the process, trusting in all of you, trusting in the fellows to work with one another, to learn from one another. It's one of the scariest and one of the most joyous parts of my job to take this leap of faith with all of you. One of the things that we do know is that we assembled a remarkable group of people who are committed to the public interest, both in this fellows cohort and in the other cohorts that we design and bring together at the Berkman Klein Center. That's really crucial for us, and I remember speaking during our kickoff about fighting fascism and the responsibility that we all hold, with the privileges that we have, to study systems of power and how they use technology. Similarly, I remember speaking about the power that individuals and communities hold when they come together, learn from one another, and speak up through things like fellowship, through things like this event. I'm so excited to hear from the fellows tonight as they help us understand the ways that we can all play roles in designing the future that we want. So without further ado, we're gonna start. First up we will welcome Elizabeth Dubois, who's an associate professor and university research chair in politics, communication, and technology at the University of Ottawa. 
She hosts the Wonks and War Rooms podcast, which you should sign up for, subscribe to, give five stars, all the good things. Elizabeth. All right, so today I'm going to talk about social media influencers in politics and election campaigns, and I'm going to convince you that this is something we need to be worried about. Social media influencers are a new type of independent third-party endorser who shape audience attitudes through blogs, tweets, and other uses of social media. Political social media influencers do all that, but, you know, for politics. There are influencers who create their entire online brand, their entire persona, around being political, around having political opinions. There are other influencers who talk about politics only here and there. Their brand isn't actually to do with politics, but at certain moments it comes up as part of what they do. Influencers can be big or small. We have micro influencers, as is the case in all three of these news articles here, but there are also mega influencers who look a lot more like celebrities, and nano influencers who look a lot more like people like you and me, who might have a very small but committed following of friends and family who pay attention to us. The appeal of using social media influencers in campaigns is pretty clear. From a reach perspective, social media influencers can give a campaign access to a wider variety of folks, often folks that they couldn't access on their own. Influencers also typically have a very detailed and nuanced understanding of their particular fan base, which means that using influencers can be very akin to doing micro-targeting. There's also something special about the relationship influencers have with their followers and fans. These are people who have cultivated relationships, and so there are social dynamics, parasocial interactions, and sometimes that celebrity status. 
When we're thinking about why we should pay attention to political influencers, I want to highlight how important it is that we understand how different political actors act within our democratic systems. It's important to know how these new actors are interacting, and it's important to know that social media influencers can be used for pro- and counter-democratic things. Get-out-the-vote campaigns and disinformation campaigns are equally affected by these kinds of strategies. So my first thought was, what kinds of laws and regulations actually exist? If it's important for us to understand these political actors' role in the system, maybe we should figure out whether or not our existing regulatory approaches are equipped to deal with them. The answer is, like, not really. What we see when we look at the existing laws and regulations is that they're divided into two main groups, and there are only very few of them. One is around election advertising and election spending. It's all about payment. If money isn't exchanged, it probably isn't being counted. And then the other is in terms of consumer protection and competition law. Here the idea is that influencers more broadly are subject to disclosure requirements and enforcement, and that applies to influencers regardless of whether an election is happening. I want to suggest that those approaches that focus on influencers as advertisers are too limited for a political context. In a political context we need to think about a bunch of different kinds of roles that these influencers can play. So, celebrity endorsers: influencers who have a big enough audience look a lot like celebrity endorsers, and so then maybe we think we should treat influencers just like them. But the problem is celebrities rarely make, or rarely accept, financial contributions from campaigns, or payment, because they're celebrities and they don't need to. But influencers do want to make money. It's their job, that's how they set up that gig. 
And so in that case, if an influencer is being paid, they are much more clearly like an advertiser. And influencer marketing is a whole thing outside of politics that we do need to pay attention to. The thing is, there are specifics about political campaigns that we need to think about. For example, we now have political advertisement registries that most social media platforms and a number of other websites have to create. But right now it's unclear whether or not most of those repositories would actually include social media influencer marketing and endorsements, because of the technical backing behind those systems. Then we can think of the ways that influencers might be involved in campaigns that don't involve payment. When we're thinking about campaigns, volunteers make up a huge component of any political party's campaign and a lot of advocacy and activism campaigns. We think of door knocking, we think of phone banking, we think of hosting a backyard barbecue. All of these are instances where influencers might come into play. Those are usually run by volunteers who have been trained, they've been fed lines, they've been told what messages to communicate and which ones are important for the party at the moment. Influencers can be taught the same things. And in an online context, for example when people had to campaign in the middle of COVID, relational influencer campaigns started to replace things like door knocking, and we started to see campaigns reaching out to their volunteer base and saying, can you share messages on social media, can you send text messages, can you send emails to all of your friends and family. And then of course there are folks who don't necessarily have a huge audience base, but they care about politics a whole bunch. The average Joes who also use social media and other communication tools to share their political ideas and information. 
These folks are called opinion leaders in political communication research, and they're thought of as average folks who care more about politics than other people do. They consume a lot of political information and then they share it out with their friends, family, and everyday associates. And those people can be really powerful at communicating political messages, because they have this shared understanding, these shared experiences; they can curate information in a way that somebody who doesn't know someone personally just can't do. And so they're potentially really powerful in a political system, but when we're focusing just on payment or on how big your audience is, we totally miss their political role. So when we're thinking about social media influencers, I want you to think about their political role, not just about how big their audience is. Thinking about how big their audience is is absolutely the most common way to do it right now. It makes sense: social media influencer marketing has been the most popular approach, and marketing agencies like quantifiable things. Follower numbers are quantifiable, but follower numbers are very dependent on the platform you're talking about, the particular jurisdiction you're talking about, the language you're talking about, and in politics all of those things and a lot of other nuanced contextual information matter a whole bunch. So what's next? Well, I think that we need to update these categorization approaches. I think we can't keep defaulting to follower count, and we need to be thinking about the political roles that these individuals can play regardless of their audience size, and importantly, thinking about the social and personal influence that they might be employing. The existing frameworks for understanding social media influencers focus on payment and advertising. Sometimes that is useful, sometimes that is important. 
If we're going to do that, we need better definitions of what counts as payment. We need to think more about what in-kind donations look like, and how we actually measure that amount in a reasonable way in different contexts. We also need to set up structures to actually incorporate these kinds of advertisements, so we need to ask things like: hey, maybe our political advertising registries need updates in terms of what needs to be included. And if we think that's true, then tech companies have a bunch of work to do, because technically that is a different kind of process. But I want to leave off on the bigger picture. I think we need to go beyond identifying these different people based on their audiences and based on their advertisements, and think about the roles that they are playing. The way that we do that is by asking how they are able to influence. What are the impacts of their different kinds of communication strategies? And then we try to map those out and understand that the idea of the social media influencer is very broad. So the last thing I want to add here is about that idea of the social media influencer. I've spent a lot of time today talking about that, but in reality social media influencers right now are acting on Discord and WhatsApp and Patreon, and they have email newsletters, and they also have events in person and online. It's not just about what they do on social media, even if that's how we often conceptualize them and define them. So my predictions? Well, I think it's about to get messy. We're going to see more innovation in terms of integrating influencers into campaigns, both partisan campaigns and activist and advocacy campaigns, and so hopefully what I'm leaving you with today is a bit of a spark to think through what we do next. And I'll just end off with my last 15 seconds here with a call to check out my podcast, Wonks and War Rooms, where political communication theory meets on-the-ground strategy, with your host, Elizabeth Dubois. 
You can find us anywhere you get your podcasts, and you can also check out the website polcommtech.com, where you can find out more about my research and the research that my lab does. Thank you all so much for your time. Faith, my dear Faith. Faith Majekolagbe is an assistant professor at the University of Alberta Faculty of Law. Her research focuses on public interest issues in intellectual property and technology law, including issues related to access to knowledge and development. We'll hear from her next. Miss Faith. Good evening everyone. Today I will be talking about what I believe could be the future of the global copyright system. So in the last year at Berkman I have been looking at public interest activities as they relate to the global copyright system, and I will be talking about those today. To provide a bit of context for today's talk, I'll take you through the past, the present, and what I believe should be the future of the global copyright system. The global copyright system came into being with the conclusion of the Berne Convention for the Protection of Literary and Artistic Works in 1886. The Berne Convention also created what we now call the Berne Union, and the Berne Convention set out to create minimum standards for copyright protection in countries within the Berne Union and to ensure that creative works are protected in all countries globally, beyond the shores of the country where a particular work originated. The original signatories to the Berne Convention were Belgium, France, Germany, Haiti, Italy, Liberia, Spain, Switzerland and the UK. As you can see, most of these countries are today developed countries, and only a handful of developing countries were part of the Berne Convention when it was concluded. And developing countries did not join the conclusion of the Berne Convention at the time, and had no desire to join it. 
Most developing countries nevertheless quickly became part of the Berne Convention, because they were co-opted into the system by their colonial masters, without their consent. They just kind of adopted the Berne Convention because the colonial masters extended it to those territories. And so, even though the Berne Convention would not benefit these countries at the time, because it wasn't beneficial for them to have strong copyright protection, it was more beneficial for them to copy from foreign works and imitate foreign works, and to develop their own cultural heritage and their own systems of creative works and have access to useful knowledge, they were made to protect works of their own countries and also works from foreign countries at the time. Interestingly, the United States did not join the Berne Convention until a century later. Why? Because the United States was smart and knew that it would benefit more from being able to copy, use, and access works from other jurisdictions than from protecting works of other jurisdictions early on. So it was necessary for it to develop up to a stage where it could absorb the cost of strong copyright protection. So as of May 2023 there are now 181 states or countries that are parties to the Berne Convention, and two-thirds of these are developing countries, even though, again, developing countries do not necessarily benefit from a strong copyright system. Now, given that the implications of copyright protection vary from country to country, according to the developmental stage a country is in and also according to its development needs and priorities, within the global copyright system there has been a constant clash over what global copyright policy should look like and what it should prioritize, and this clash has primarily been between developed countries and developing countries. 
Developed countries believe that we should prioritize strong copyright norms, whereas developing countries believe that we should prioritize norms that promote access to knowledge in those countries. But the dominance of developed countries, mind you, not in their numbers but in their political strength, economic dominance, and global influence, has seen strong copyright rules dominate the international copyright system, rather than norms that facilitate access to knowledge. Beyond the Berne Convention, which originally expanded the scope of copyright beyond a particular local territory to the global system, we have seen the conclusion of the Agreement on Trade-Related Aspects of Intellectual Property, which we often call the TRIPS Agreement, and also the WIPO internet treaties, that is, the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty. Now, all of these treaties expand the scope of copyright protection and also burden developing countries with minimum standards that they must comply with in protecting works emanating from their own countries and works emanating from outside their countries. And this has been done with little or no concession for developing countries, with little or no flexibilities that provide for access to knowledge in these countries. The only concession we've seen is the Berne Appendix, and the Berne Appendix has been criticized by lots of scholars, myself included, as unworkable, as bureaucratic and burdensome, and it has yielded no benefit whatsoever for developing countries. Now, since the start of the 21st century, developing countries have started using their numerical strength in the membership of the World Intellectual Property Organization to push their own agenda. In 2004 we saw developing countries present a proposal before WIPO for a development agenda that would integrate a more holistic idea of development within the intellectual property organization. 
Now, a more holistic idea of development would view development beyond economic growth, and would also integrate human development priorities, like access to knowledge and access to education, within the norm-setting activities of the World Intellectual Property Organization. In 2007 the WIPO Development Agenda was adopted, and it is now being implemented. On the back of the WIPO Development Agenda, the first thing we saw in the international copyright system was the Marrakesh Treaty to facilitate access to published works for persons who are blind, visually impaired, or otherwise print disabled. Prior to this time, persons who are blind, visually impaired, or otherwise print disabled could not easily create accessible format copies for their use without getting the permission of copyright owners, and oftentimes copyright owners do not create markets that supply accessible format copies either. Now, with the introduction of the Marrakesh Treaty, which was fought for by developing countries, organizations working with this group of beneficiaries, and even this group of beneficiaries themselves, can now create accessible format copies for the purpose of accessing works in the way that others would access works, without necessarily facing the constraints of copyright law. So this was a new gain, fought for basically by developing countries, added to the international copyright system. Now, developing countries have continued their work of trying to find solutions to the access conundrum that we face within the global copyright system, and in March 2023 we saw the African Group present a proposal to the World Intellectual Property Organization that it should continue working towards an international treaty on copyright limitations and exceptions. 
Now, this treaty is significant in the sense that, without limitations and exceptions on the scope of copyright protection, what we would continue to have is an expansion of the scope of copyright protection without flexibilities that limit that scope and exempt certain uses, public interest uses, from it. Now, one may wonder why developing countries are currently prioritizing access over protection, whereas developed countries are prioritizing protection over access. Well, the reasons are not far-fetched. In terms of creativity and innovation, we see that unlimited grants of exclusive rights, without appropriate corresponding limitations and exceptions, have significant adverse implications for creativity and innovation. The strengthening of copyright over time, if untamed, would adversely affect the way we create and innovate by raising the cost of borrowing from and accessing the materials that are the building blocks for creativity and innovation. In terms of access to knowledge and education, which again is integral to societal progress and human development, this would largely be constrained where the dissemination and use of knowledge materials are largely subject to private monopoly. And having an educated population, developing countries know as well as developed countries, will increase human capital in most countries, because a more educated population would be a more productive population. 
And lastly, a point that is often overlooked is the fact that an economy that is well developed has the capacity to absorb the cost of copyright protection, whereas an economy that is still developing does not have as much capacity to absorb that cost, and until you have that capacity, you will not be able to absorb the cost of copyright protection. That capacity, interestingly, comes from being able to access knowledge and being able to grow on your own terms as a country, which is what the United States and China did very well within the copyright system. So as developing countries continue to ascend in global copyright politics, they will continue to play a huge role in determining what the policies that shape the global copyright system will look like, and the future they are working towards is a future where access to knowledge will take center stage, and also a future where an international copyright treaty on limitations and exceptions will emerge. This is a project that not only developing countries are committed to; scholars like me are also committed to advancing it, to making the case for access to knowledge to take center stage and for an international treaty on copyright limitations and exceptions to emerge. Thank you very much for listening. Thank you, Faith. Juliana Castro-Varón, designer of our poster for tonight's event, writer, technologist, founder of the award-winning publisher Cita Press, feminist, wonderful human. Come up here, tell us some things, show us what you've built. I want to start this presentation with a story. William Mumler was born around 200 years ago here in Massachusetts, actually not too far from Boston. He was a photographer, and a man whose story can be told two different ways. The first one is to say that he was an entrepreneur and an artist. 
Mumler ran a thriving business selling pictures, pictures of people with their relatives, with their loved ones. He also created a unique way of developing images that would land him in the history books. The other way you could tell the story of Mumler is by saying that he was a crook and a cheater. Mumler, well, he was a young man by the time the Civil War broke out in the U.S. That means that he was alive in the late 1800s, a moment in which many people had lost their loved ones to the war. And that meant that, as with everything, every time that something horrible happens, there's space for profit. And he did profit. Mumler ran a thriving business selling pictures of ghosts. One of his most famous images is this one of Mary Todd Lincoln and, you know, Lincoln's ghost. At the time, this image looked very, very real. And people just wanted to believe that it was indeed real. Photography by 1872 is relatively new; it's been only a couple of decades since the moment people were able to see cameras for the very first time. It was not far-fetched to imagine that this camera was also able to take pictures of spirits. Today, if I tell you that this is indeed evidence of ghosts, I will have a very hard time convincing the people in this room, even if you believe in ghosts, which you could. That is because the knowledge gap has closed, the gap between technology and literacy. What we know about cameras, and what we know about ghosts, maybe, has changed from the 1800s to now. I've spent the last year here at the BKC studying the history of photographic manipulation, specifically the kind used to deceive people and to tell lies and to profit. And I believe that looking at the tropes of the past can help us tackle the risks of the future, specifically the ones of artificial intelligence. And so, because I encountered dozens of examples, I decided to put them in context, and I did what I enjoy doing best. 
I built a website. This is a website that is real, and you are able to visit it right now, but I'm going to walk you through it. Don't go into your computers. I found a number of tropes that repeat over time. What are those tropes? First, make yourself look better. Use image manipulation to look taller, more independent, more beautiful. King George VI was erased from a picture of the Canadian Prime Minister just so that he would look more influential by standing alone with the Queen. A couple of years later, Benito Mussolini had the vanity, I guess, of erasing the horse handler. You can see it over there. I love this big, big screen. He got the horse handler erased so that he would look heroic, independent, able to handle the horse himself like a big boy. Trope number two of the history of image manipulation: use manipulation to make others look bad. This is a fun one. In 1950, a couple of senators were in a little kind of vengeance fight, and Republican Senator Joseph McCarthy, or allegedly his campaign, had Senator Tydings, the thinner man on the left, composited together with the man who was back then the head of the Communist Party, in an attempt to drive the narrative that he was, of course, a Communist, something that we all know to be horrible. Now, the third trope in the history of image manipulation: use images to perpetuate racist standards of beauty, and unattainable, sick body expectations, specifically for women. In 1989, TV Guide put the head of Oprah on the body of Ann-Margret. As you can see here, they colored the image and put it on the cover. They, of course, had permission from neither of the women. And this marked the trend of the '90s and 2000s, of not only the pressure for beauty and thinness, but of using image manipulation to sell something. In this case, to sell magazines. Right now, in 2023, we are juggling a whole new set of issues. 
We have beauty filters that are embedded in Zoom calls and that automatically make everybody look beautiful, quote unquote beautiful. But I believe these motifs are present all throughout, up until the present. Take Lalaland. Lalaland is a tech startup. So Lalaland, the company, not La La Land, the movie, is a European tech startup that started in 2019. And they promised, and I quote, to be able to offer clients the promise of a more inclusive, sustainable, and digitally enabled brand. How? By offering a set of models that people were able to hire, with very little money, to make their presence more diverse. So for example, if you sell t-shirts, you could hire Lalaland so that a diverse group of people would look like they are wearing your t-shirts. Now the catch is that, and you know where I'm going with this, these people are not real. They are artificially created, all of them. We have, of course, the trope of using image manipulation to drive a narrative of how beauty is supposed to look, specifically for non-white bodies in this case. But we also have the narrative of William Mumler, the one in which, on one side, Lalaland is for many a company that has made money for their investors. A company that has many clients; Levi's is one of their famous clients. A successful company who saw a problem and tackled it. For others, specifically models of color or people who have lost their jobs to this, they are a fraud, a crook and a cheater of a company, one that is profiting off the promise of diversity while not hiring the labor of people of color. The other thing that is important here is that this image, unlike Mumler's images, looks quite real to us. The knowledge gap has not closed just yet. You can now visit ourimagesreal.com and play with this. I believe literacy matters. I want to make a case for graphic design. 
I have been going through the three timelines that you can find on the website: darkroom photography, which is not taught in our schools anymore; Photoshop and digital technologies; and artificial intelligence. I encountered everything from vengeance, to horror, to humor, to weirdness, and I invite you to take a look, because I think you might enjoy it too. Thank you so much. You leave us with such a reading list and, you know, a tick list of places you have to go and check out and read and explore. We're so proud of this project. We're proud of you, Julie. Florian Martin-Bariteau, FMB. He's an associate professor of law and university research chair in technology and society at the University of Ottawa. He's a technologist and a creative turned legal scholar. His research focuses on technology law, ethics, and policy, with a special interest in AI, blockchain, quantum technologies, cybersecurity, whistleblowers, and intellectual property. FMB, take it away. Good evening, everybody. So, Northern Mockingbirds are known to whistle in the darkest spaces. And to discuss a better future, to think of a better future, I want to share with you what I've been working on at BKC this year, looking into the important role of people working within the tech industry, as well as public interest security researchers, who have been looking into the dark spaces of AI and blowing the whistle on a variety of issues. Our lives are now digital, mediated, and governed by algorithms. While promising considerable benefits, recent developments in AI and obscure algorithms have raised multiple legal, political, and ethical issues, issues with respect to fundamental rights and the protection of those brave enough to disclose these issues. The development of private AI systems across society and within government is causing a shift of power to new structures beyond the control of existing governance and accountability frameworks. 
These new governance structures bring concerns for democracy and democratic freedoms to the forefront. The ubiquity of these systems has facilitated a socio-technical reality where the design of algorithms is dictated by the choices of technologists and system architects rather than the collective will and needs of society. There is a lack of transparency, oversight, and accountability in how AI systems are developed and how they can be used to the detriment of society. This is reinforced by increasing copyright and trade secret protection, and by inaction with respect to anti-competitive behaviors, which have created a digital ecosystem governed by a few major players. Several whistleblowers and former employees have unveiled massive evidence of deliberate, malicious, and harmful choices that negatively impacted society. Failed internal governance models, flawed design choices, and the lack of proper internal feedback loops have enabled hate speech, spread misinformation, and promoted insurrections. These shortcomings have also amplified, again and again, harms to already marginalized communities. But this socio-technical reality was not inevitable. Governance frameworks and practices must evolve to rebalance information asymmetries from corporations toward the public. We have tried several tools, for instance for AI. One thing is for sure: ethical guidelines and ethical standards are so far ineffective. Some jurisdictions have adopted and/or proposed legal frameworks. Legal frameworks are very complex to design and enforce, notably due to informational asymmetry. To deal with this obscurity, the latest legislative conversations in Canada, the US, and the EU have pushed towards more transparency and accountability obligations. These requirements are a welcome initial step toward accountability. At least there would be some sort of oversight. Yet that kind of transparency isn't without its perils.
First, it offloads the duty of awareness onto society, citizens, consumers. We need to be careful not to create an information overload that would transfer the burden to individuals and offer a get-out-of-jail card to industry. Second, it rarely provides actual, useful disclosure. And actually, it often obfuscates malicious practices. The limited information released to meet the regulatory requirement is often unveiled as a corporate communication exercise, merged with red tape, with layers of revision by an array of lawyers (and I'm one, so I can say so) for compliance purposes. The recent vogue of transparency reports hasn't successfully mitigated the most pressing issues and concerns. Although, don't get me wrong, we need strong AI regulation with strong transparency and accountability standards. We need well-equipped regulators with teeth and auditing powers, especially in Canada, as well as the staff and budget to lead such investigations and sanctions. But in our toolkit for responsible AI, we need more. We must empower and protect workers who are familiar with the technical reality and design choices, those who were at the decision table, at the design table. I believe that a future of responsible AI will come from adequate protection for whistleblowers and security researchers. Whistleblowers are linked to some of the greatest scandals of our modern society. Concerned citizens have disclosed confidential information to the public to prevent or expose political, social, financial, technological, or health wrongdoing. Extensive research has demonstrated that whistleblowing is, in both the public and the private sectors, one of, if not the most common way of unveiling wrongdoing. From Christopher Wylie to Frances Haugen to Sophie Zhang, whistleblowers have revealed recent technological scandals, and shown that those kinds of disclosures are essential to protect our digital safety.
Whistleblowers are essential to reveal major issues within the tech industry, but they lack protection, while they take tremendous personal risks to alert authorities and the public. And in my research, I have identified policy, legal, and digital literacy gaps. One note, though: for sure, there must be a fair balance between public disclosure and the protection of trade secrets. Yet I do believe that the public interest should trump the financial interests of a privileged few. Legal frameworks must evolve to better protect the public rather than reinforcing algorithmic power asymmetries through criminal law and trade secrets. We need legislative, criminal, and civil protection for whistleblowers. Only a few states in the U.S. and countries around the world provide such protection within the private sector. And even where they do, the frameworks are extremely restrictive in terms of who is protected, for what kind of disclosure, to whom, for what. And even where there is some civil exemption, there is often a criminal statute, or some IP and trade secret law, that actually prohibits such disclosure. In addition, we also need to incentivize companies to have internal processes in place. It is often the case that workers must go public because the internal processes failed. Lately, with respect to generative AI, teams have been disbanded after flagging that a model was maybe too early to be released, that more extensive validation was needed to avoid harming society, because it was playing against the financial interests of the company. And so I am working on developing such legal frameworks and governance schemes. But beyond the workers within the companies, we also need to protect people outside of those companies. Most jurisdictions lack proper protection for well-intentioned security researchers to proceed with the testing and disclosure of vulnerabilities.
Modeled on security bug bounties, we now have bias bounties and other sorts of programs aiming at tackling bias and issues in AI, including one co-organized by one of the new BKC Responsible AI fellows, Rumman Chowdhury. This is good, and we need more of those, but they often happen in controlled environments and with the agreement of the companies. We need to make sure that those experts who have the knowledge and the will to help society are able to look at the systems and models that are not willingly opened to them, and we need them to be protected when they want to disclose the issues. For sure, it's very complex to design, but it's not impossible. Belgium just did it with coordinated vulnerability disclosure, for example. And so I'm also working on drafting law reform proposals for such safeguards. So, for a better digital future, I believe that trade secrets and NDAs should be limited by the public interest. You know, it was one of the biggest issues that delayed the disclosures that led to the Me Too movement. This is again one of the biggest issues for responsible AI. And in North America, we cannot go backward: following the USMCA, Canada, for example, has been forced to pass new criminal legislation to protect trade secrets without any exemption. As a result, most AI systems are criminally protected by a veil of obscurity, and it appears urgent to correct this oversight. And so my prediction, like Elizabeth's, is that things are not going to go in a better direction regarding AI systems. We need to make sure that we have a full toolkit to protect society, as our socio-technical systems increasingly permeate every facet of our existence, influencing every aspect of our lives and the very essence of our citizenry. For a better digital future, we need whistleblower and security researcher protections. We need them to become imperative. They are imperative and critical for our society. Thank you so much, and let's join the fight. Let's join the fight. Sorry. We're about halfway done.
Just a little bit past. Did anybody learn anything about their futures from their hands? Feeling all right? Woo! Does anybody want to join me? Patrick welcomes anybody to read his poem at the reception after the event. We're also going to have a chance to ask questions of the fellows. Dr. Ashley Lee is a computer scientist turned social scientist who studies AI, tech, politics, and social movements. Her current work examines the implications of technology design and use for democracy and social equality, focusing on youth in marginalized communities. Ashley! Thank you for being here. My name is Ashley Lee, and my talk today is very much about celebrating the here and now, though I could predict the future with all these crystal balls around as well. I'm a computer scientist turned social scientist, and one big area of my research is about politics and ethics with technology. And I'm really interested in these questions about, you know, how do we design technologies that center the experiences and concerns of young people in marginalized communities? And one big part of that question is asking how we empower next-generation technologists and citizens to co-create a pluriverse where multiple worlds can not just coexist but thrive. This question is important because, in spite of all the potential of technology to advance the public interest, recent events have highlighted some of the challenges that tech has posed for democracy and global civil society, both in terms of its design and its use. So, for instance, you can think of the Cambridge Analytica scandal and many other similar incidents around the world, or the spread of hate speech that has escalated to mass-scale violence, such as in the case of the Rohingya genocide. And movements like Black Lives Matter are targeted by disinformation campaigns all the time, which are used to sow doubt and confusion and division among civil society actors.
And then there are all these new concerns around black-box algorithms that are increasingly governing all areas of our lives, such as whether we get a job or a loan, or how much we pay for health insurance, things like that. And so these are all very important questions that, you know, many of us are working on here at the Berkman Klein Center, as digital platforms come to cover our everyday lives, all the facets of our life. So I started my career as a software engineer after studying computer science at Stanford. And when I was in college, tech ethics wasn't a big area of focus. For a long time, algorithms were primarily thought of in terms of efficiency and accuracy, and the assumption was that computer science is a value-neutral, non-political field. Yet these events that I just highlighted for you demonstrate that technology is inherently political. The processes of making and deploying technology products are political, right? So who's designing? Who's at the table? Who gets to code, who gets to set the agenda? These are all value-laden questions that are inherently political and raise difficult questions about political power and resource allocation. Yet until recently, there hasn't been formal training for computer scientists on the social and ethical dimensions of computing, right? And this has changed rapidly in recent years. There's now what we might even call a movement for ethical tech, and computer science departments and other related departments around the country and around the world are introducing computer science ethics programs and initiatives at their schools. So, for instance, Stanford has started a computer science ethics program over the last couple of years. Harvard also has a computer science ethics program. So in my own research, I've been exploring how universities can better prepare next-generation technologists to navigate ethical challenges at work.
And this involves asking questions not only about how technologists can navigate ethical challenges at work, but also asking big-picture questions about, you know, the workforce and the work environment, right? So right now, the default path for many CS grads is from the university to big tech and industry, right? So why, right? Even as public interest tech continues to rise. And there are all these questions around, you know, diversity: how do we attract and retain emerging technologists from underrepresented groups, right? And the questions around tech ethics are embedded within macro-level questions around the power that's concentrated in Silicon Valley and other tech hubs around the country and in other, mostly Western, countries, and the power that they exert globally. So in my research, I've been working together with student researchers here at Harvard and at Stanford, the two institutions that I've been affiliated with recently, to look for ways to empower emerging technologists and to center other people's perspectives. And so I want to acknowledge the contributions of my co-researchers. And in addition to this kind of participatory action research, the project also has classic social science components, which I won't go into in too much detail at the moment, but I'm happy to talk about that during the reception. So one of the things that we're really doing as part of this project is constructing a set of teaching case studies that college students can use, either on their own or in classrooms, to discuss tech ethics issues. And these teaching cases address difficult real-world challenges that professionals are grappling with in their everyday work life, right? And here are some of the Berkman Klein Center colleagues who are contributing their teaching cases, some of whom are here today, some of whom you'll actually hear from.
So a quality that I admire about the work of these case authors is that they're bringing deep lived knowledge and experience of the challenges that their own communities are facing. And they're really trying, you know; these are difficult questions that don't have clear answers, right? Given all the power structures and the ways in which things are set up, right? So these are ongoing questions that we are trying to engage students with through teaching cases, right? And this is especially important when we consider how technology has gone wrong when people are trying to solve other people's problems, right? And further, technology and ethics pedagogy today has a tendency to center on U.S. and Western contexts. Our goal with these teaching cases is to broaden the horizon, to include international and transnational contexts, and to center the perspectives of communities that are often left out of important debates around technology and society. So ultimately, if we are to work towards a world where many worlds fit, right, as the Zapatistas say, we need to be able to recruit and retain a diverse workforce and cultivate inclusive work environments that can sustain them. And tech ethics pedagogy needs to reflect the diversity of our world. So that's the kind of future that I would like to see. And in doing this work, I'm building on the work of many others, including colleagues and friends and mentors, many of whom are here today. And so I want to take a moment to celebrate their work, the work of my colleagues and changemakers around the world. So now let me close with a few acknowledgments. I'm incredibly grateful to the Berkman Klein Center for giving me the platform to do this work. And I would like to thank especially Beka, Nadia, Patrick, Sia, Liz, Sue, and the entire BKC team for your support throughout my fellowship.
And I would also like to acknowledge colleagues at Stanford, including Stanford PACS and the Ethics Center, and at the Harvard Ash Center, for providing ongoing support of this work as well. So finally, in case I didn't mention, this work is going to continue on, and if you're interested in getting involved in various capacities, I would love to hear from you. Thank you. What a good call to action, Ashley. Thank you so much. It's been a real privilege. Marta! Marta, come on up here. Marta is a lawyer with extensive experience in public sector reforms. In Ukraine, she has supported state institutions, civil society, and international organizations to improve the quality of governance. At BKC, Marta has been studying success drivers for digital transformation in education, aiming to enhance human capital development. Marta, let's hear from you. Thank you so very much, everyone. And thank you so much for being here. Thank you so much. So, as you can see today, I want to talk about human capital and the role of digital education. Let me start with digital transformation, which is now not just seen as a competitive advantage for employment opportunities or economic growth in general, but as the key driver, I believe, for addressing many issues and for meeting people's needs. And when I'm talking about issues, I mean cybersecurity, disinformation, inequalities, but also the ever-amplified impact of climate change. At the same time, as we embrace digital transformation, it is crucial that we think about technologies with great caution because, let's be honest, they provide great opportunities but also carry great risks. And here, I would like all of us to think about two principal questions. First, how do we engage with digital technology not just effectively, but also confidently and responsibly, for different purposes?
And then the second question: how do we ensure that digital technology can actually uphold human rights, the rule of law, and democracy, while giving everyone the ability to enjoy the benefits of digital transformation? So the question at hand is how do we make sure that digital transformation is successful? And I believe one of the key pillars for that is digital competence, which in turn, as you can see, consists of three elements: not just digital skills, but also digital knowledge and digital attitudes. I believe that digital competence should not just be limited to proficiency in how we use digital technology; we also need to understand what the benefits and the limitations of digital technology are, and how we can use it in an ethical and confident manner. So how can we effectively develop and boost digital competence for everyone? And here I believe the education sector has a decisive role to play. But in order to meet those high expectations, we need to change how the education sector functions, so that it can fully include all individuals in what is now, let's face it, a digitally driven society. So here at BKC, I've been examining the digital transformation of many countries with respect to transformations of the education sector, and I have identified several factors that I believe are crucial to include in the national strategies of those countries that want to digitally transform their education sector and ensure that everyone can be equipped with the necessary level of digital skills. So when we think about digital education, many of us just picture the application of different digital technologies in delivering education. And in this picture you can see a Ukrainian professor who is now bravely defending Ukraine's independence, but in this picture he is not conducting any military operation. He is actually delivering a lecture to his students over Zoom.
Or let's take another example: extended reality, which is now crucial in Ukrainian aviation training because it simulates real flight and provides an opportunity to build the skills that are necessary for pilots more effectively and without any risks to their safety. So the conclusion is that there are many digital technologies that provide many opportunities. And I have to be honest, I wanted to include more photos of Ukrainians, to whom I just want to say that I owe everything, who are now fighting for Ukraine, but I'm very short on time so I have to move on. And the main thing is to realize that the application of digital technologies in education is only one aspect; other factors have to be considered for successful implementation. So what are the factors? First: digital infrastructure. In Norway, it actually consists of three elements. For the first element, we need to think about the deployment of networks to all educational institutions, but also to all households. The second element is about the deployment of physical infrastructure and equipment, which, and this is very important, needs to comply with security rules, maintenance rules, and data protection rules. And the third element I found particularly interesting, because Norway is now investing in cloud technologies. They want to come up with an open and multi-disciplinary environment to provide all interested parties, including researchers, companies, and individuals, with the opportunity to find and reuse data, publish articles, and have access to different tools and services. While the availability of and access to digital technologies are crucial, institutions also have a role to play. We need to make changes, systemic changes, to encourage the purposeful use of digital technologies. So the next factor is about building digital capacity in all educational institutions. What does that mean?
It means optimizing and integrating digital technologies, and here it's very important: not just in learning and teaching, but also in assessment and other operational activities. So in Ireland, they highlighted two important elements. One of them is digital leadership, where they require all digital leaders to explain, and then continuously communicate, the vision and the objectives: why digital technology is necessary and how it will benefit the educational institution. And the second element is about institutional development plans for achieving the set objectives and then addressing any challenges that were identified. Another crucial factor of digital capacity is about digital competencies, but here it relates very specifically to all educators and to all staff who work at the educational institution. Most European countries have already adopted strategies through which they want to build an effective system for initial training and ongoing professional development. And Sweden is even now in the process of launching a new national professional program for all school leaders and all teachers in the entire country. In addition to that, what is important is digital educational content, which in my favorite digital country, Estonia, includes curriculum, smart learning resources, but also digital pedagogy. I would just like to briefly note that Estonia is known for many things when it comes to digital transformation, but in particular, with respect to the digital transformation of education, they are brilliant in public-private collaboration. During the pandemic, they worked closely with over 70 organizations and companies to provide free access to digital education solutions, and now those solutions have become an integral part of their national education system, which is awesome.
Another example comes from my home country, Ukraine. For those who don't know, Ukraine is now on a mission to become one of the most digitally friendly countries, and it is doing it through a now award-winning e-government platform called Diia, which in English means "action." But following the full-scale Russian invasion, they decided that they needed to do more with respect to digital education, so they decided to set up a separate platform called Diia Digital Education. The main goal is to equip everyone, of all ages, with the necessary level of digital skills. And of course, there is a lot of work to be done ahead. There will be lots of experimentation, lots of uncertainties and challenges, but what is clear is a strong commitment that we need to optimize technology for a thriving future, one that serves people and where the public interest drives innovation. Finally, my last factor that I think is crucial is the concept of lifelong learning, which basically means that digital education should not be limited to a certain group of people or to a certain phase of life. It should be accessible in various contexts and at different levels, regardless of age or the learning environment that people may be in. So at this point, I know many of you are thinking: okay, what about the risks, the challenges, the downsides? And I won't lie, there will be many. Just to name a few: the digital divide, inequality, gender imbalance, skill gaps and mismatches, labor shortages, then data protection issues, surveillance, cybersecurity. I mean, there will be lots. But to mitigate, and hopefully eradicate, some of those risks, what will be important is that the entire process of digital transformation of education is guided by internationally recognized human rights and principles, backed up by the respective national regulations. So, as I need to wrap up, because I feel like I'm already talking too much, I really like this picture. When we are talking and thinking about the transformation of digital
education, it will take a whole-system and multi-stakeholder approach, with continuous collaboration, adaptation, and monitoring. And that's where I believe we all can come in. Because when you're thinking, "okay, what can I do, and I really don't know what's happening," there are several ways: you can not only support Ukraine on its way to the digital transformation of education, but you can also help your own country, or you can help with building digital capacity within your own organization, or, as a last resort, you can actually start with yourself and try to improve your own digital competencies. When I'm thinking about the future, I know that many of us are very concerned about it, but the fact is that the future is being created now, in the present, and it is created by those people who actually see opportunities and then take action. And I believe that the transformation of digital education is one such opportunity. From my professional experience, I know that reforms are never easy, but they are achievable, and I know that education is one of the reforms that is, without any doubt, necessary. So what is important is to keep moving forward. And this picture is actually one where I want to thank BKC very much for the fact that the initial steps in that direction for my country were taken here. I'm incredibly honored to be part of this community, and I hope we can all become champions of human capital through the power of digital education together. Thank you so much. Ukraine's finest! We stand with you, Marta; we stand with Ukraine, with people on the move all over, with people who are repressed, with people who are fighting the good fights. Thank you for being with us this year. That brings us to artificial borders, and it brings us to Petra Molnar, who is a lawyer and an anthropologist. She co-runs the Refugee Law Lab at York University and the Migration and Technology Monitor, and is writing her first book, Artificial Borders. Petra.
Like a wound in the landscape, a rusty border wall cuts across Arizona, along El Camino del Diablo, or the Devil's Highway. You can drive up to it and touch it, the rust staining your hand for the rest of the day. Parts of the wall are also painted black so that they absorb the scorching sun of the Sonora, making it painful to touch and to climb. Once the pride and joy of the Trump administration, this wall is once again the epicenter of a growing political row. Last Thursday, President Biden lifted the Trump administration's COVID-era Title 42 regulation, which prevented people from exercising their internationally protected right to asylum. But what comes now, however, is the introduction of even more hard-line policies, making it even more difficult for people, undergirded by a growing commitment to a virtual smart border that extends far beyond this physical frontier. Today, there are millions of people on the move due to the forces of colonialism, imperialism, conflict, instability, environmental factors, and economic reasons. But at every point of a person's migration journey, they are impacted by risky, unregulated technology used to control movement and manage migration.
I have spent the last six years now tracking how new technologies of border management, surveillance, automated decision-making, and various experimental projects are playing out in people's lives. And in order to tell this global story of power and violence, and innovation and contestation, I rely on the sometimes uneasy mix of law and anthropology. It's a slow and trauma-informed way of thinking, one which requires years of being present in a space in order to begin unraveling at least some of the strands of power and privilege and story and memory that make up the spaces where people's lives unfold. And knowledge production and storytelling are also deeply political acts, ones that I do not engage in without constant reflection and a recommitment towards building a world without violent technological regimes. Because I've had the privilege to see, time and again, across different contexts from Palestine to Greece to the US-Mexico border, that an already violent border policy is often sharpened through the use of digital technologies developed for the purposes of border control. These technologies separate families, push people into life-threatening terrain, and exacerbate the historical and systemic discrimination that is a daily reality for people on the move. From robodogs to AI lie detectors to drone surveillance to high-tech refugee camps, borders have increasingly become a testing ground for new technologies, because they are places where regulation is deliberately limited and where anything goes. A frontier attitude really does inform the development and deployment of surveillance at the expense of people's lives. I would like to share one particular vignette with you to illustrate the impact on real people. Since 2020, before coming to BKC, I had been mostly based in Greece and along the edges of Europe, one of the major frontier sites of the EU and a testing ground for much of this border management technology. The region of Evros is what separates Greece from Turkey.
Almost as a pilgrimage, when I'm in the region, I make the drive down to Sidiro, a small village near the town of Soufli. It's really beautiful, flanked by golden poplar trees that cut through the landscape. And outside the small village lies a cemetery of mostly unmarked graves. Three or four have some inscriptions that were sent to the local imam, who takes care of this final resting place, by the families of the deceased. And the last time I was there, I touched the fresh earth on one of these graves. It was that of a young woman who died on the 2nd of February 2021, only a few months before my visit. Amal was born in 1993, and decayed bouquets of yellow roses were still on top of her grave, a remnant perhaps of an act of recognition and an act of care. But while I was kneeling down paying my respects, something caught my eye, something green and putrid: standing water in holes hollowed out of the earth. Three open graves await. This is the region where experimental new technology is playing out: drone surveillance, sound cannons that emit piercing shrieks, various algorithmic motion-detection risk assessments, and even virtual reality glasses for the border guards to wear. All of these technologies have profound impacts on people's human rights and civil liberties: from privacy rights being impacted when data is shared with repressive governments, or when international organizations share such data, to, of course, freedom of movement and the right to seek asylum, to one's right to life, liberty, and security of the person when various border violence regimes are practiced and aided through surveillance technology. But what is really clear is that in the opaque and discretionary world of border enforcement, these are structures that are underpinned by intersecting systemic racism and historical discrimination against people migrating. These technological impacts are very real, but what's extremely troubling is that there are virtually no governance mechanisms in place to
regulate the development and deployment of these high-risk technological experiments. What I argue, and see time and time again, is that this lack of governance is very deliberate, because it allows the border to be the ultimate testing ground, a high-risk laboratory, and it allows for projects that would simply not be allowed in other spaces. Imagine if you had to sit in front of an AI lie detector at your doctor's office, or if RoboDogs were deployed in your local grocery store. But why does this matter? Because these technologies are becoming normalized beyond borders. Just a few weeks ago, New York City's police department proudly unveiled their newest arsenal of RoboDogs that will be running around the streets of New York, one even painted with spots like a Dalmatian. Very cute. But when and why have we decided these are the appropriate tools to use in our society, particularly when we know that there are inadequate governance and accountability mechanisms in place for when things go wrong? Whose perspectives matter when talking about innovation, and which priorities take precedence? There's also the money factor: there's big money to be made in the development and selling of high-risk technology. Why does the private sector, time and again, get to determine what we innovate on and why, in often really problematic partnerships with the public sector? Whose priorities really matter when we choose to create violent sound cannons, RoboDogs, or AI lie detectors, instead of using AI to root out racist border guards? It's because technology perpetuates power differentials in society, and unfortunately the viewpoints of those most affected are routinely excluded from the discussion. At the end of the day, this conversation isn't really just about technology. It's about these broader questions: questions around which communities get to participate in conversations around proposed innovation, and which groups of people become the testing grounds for
tech experiments. So, not to leave us on a very depressing note: how do we build a different world? I want to share a little example of a project that we've been putting together called the Migration and Technology Monitor. It's an archive, a platform, and a community that actively decenters so-called Global North narratives and pushes resources into communities who are mobile. You can check out our archive; it's available in Arabic, Spanish, French, and English, hopefully Dari soon as well. But I want to highlight one particular element of this work, and that is our newly launched fellowship program for people on the move. We have Veronica, Neri, Weil, Rajendra, and Simon joining us this year for our first-ever cohort. They are people on the move, in situations of displacement, who want to tell their own stories on border surveillance. They will be doing original reporting on surveillance applications at the US-Mexico border, looking at using WhatsApp as a way to communicate with refugees in Venezuela, and creating a memory scroll archive in Uganda, really re-centering the power of people on the move to tell their own stories. Because ultimately it is this deep, slow, and grounded community work that can act as a method of resistance to the vast power differentials that are inherent in the development and deployment of technology. And for me, it's also a commitment to this community work that allows us to build a safer kind of world, like the BKC community has done for me this year. Thank you to you all.

In the spirit of fortune-telling, I had planned to join us this evening in a giant magic eight ball costume. A giant one, big, round, 3D. It had a thing with batteries and a fan that would have kept it big and round. But they didn't send that. Instead, they sent these green shoes. Why? Unclear. They're not my size, which is unfortunate, because I would wear them otherwise. I'm sure the magic eight ball would have predicted what just happened: that seven brilliant people got up here and shared things
that we learned from, that made us feel, that made us think, that helped us to predict futures that we want to belong to and wouldn't want to change; technology that would work, some that wouldn't work, some that made us laugh. But these green shoes, they're not ruby slippers, but they'll remind me that there's no place like BKC, and there's no fellows like you. Thank you so much to you all, and thank you all for coming and joining us tonight. It's still light out, but we invite you to join us on the fifth floor, in the Berkman Klein Center offices, where there's food, where there are snacks, where the fellows will be. There's a beautiful porch. Come and sit on it, get your sun, get a drink, get a snack, and continue to imagine and to predict the world we want to live in. Thank you all.