I'm sure there'll be a lot to discuss as part of this session. Welcome all, and thank you for coming to the Bearing Witness, Seeking Justice Conference. This is very exciting. I know it's been in the plan for a long time, and it's really exciting to see this come to fruition. I'm Ian Condry. I'm a professor in Comparative Media Studies/Writing. I study cultural anthropology and Japanese pop culture, and I'm very interested in the transformations of contemporary media. And it's my pleasure and honor to introduce Sam Gregory. We'll be here for about an hour. Sam will talk for about 40 minutes, and we'll have some time for question and answer. There are mics on either side of the stage for asking questions when the talk is over. Sam Gregory is an award-winning technologist and advocate. He is director of programs, strategy and innovation at Witness, which you can find at witness.org. This site helps people use video and technology to defend human rights. He's an expert on smartphone witnessing, new forms of misinformation and disinformation, including deepfakes, as well as innovations in preserving authenticity and evidence. Can't wait to hear how you preserve authenticity and evidence, but I think it's great. He directs the overall programmatic vision of Witness and leads the technology threats and opportunities program, focused on early influence on emerging technologies that will impact the communities that Witness serves. He directs the Prepare, Don't Panic initiative at Witness, focused on better, globally inclusive preparedness for malicious usages of synthetic media, including disinformation and deepfakes, as well as working on standards and authenticity infrastructure. He's co-author of the Content Authenticity Initiative white paper. He's worked on impactful campaigns worldwide, particularly in Latin America and Asia.
He also works in advocacy, contributing to changes in policy, practice and law. He is quoted frequently in major media worldwide and publishes widely on technology and human rights, including in Journalism, the Journal of Human Rights Practice, and American Anthropologist, among other places. He's spoken at Davos and the White House. He was a 2010 Rockefeller Bellagio resident working on the future of video activism and a 2012–17 Young Global Leader of the World Economic Forum. Sam is on the Technology Advisory Board of the International Criminal Court. He's co-chaired the Partnership on AI's Expert Group on AI and the Media, and he's taught at Harvard the first graduate-level course on participatory media and human rights advocacy, at the Kennedy School. Founded after the Rodney King incident, Witness has 30 years of experience in 100-plus countries, supporting critical uses of video to secure accountability, reaching millions of people with skills and tools, engaging technology giants on the negative and positive impacts of their products, and maximizing civic participation via visual and social media. Please join me in welcoming Sam Gregory. Thank you, Ian. That was a very long introduction. If I'd realized that, I would have edited it a lot. Thank you very much. So, very glad to be here today to talk about the idea of proactively fortifying the truth in video witnessing and how we might foster increasingly resilient witnessing in the face of human rights abuses. It may be obvious to this audience, but why should we proactively fortify the truth? We must do this because video is simultaneously everywhere and everything in our daily lives. YouTube is a search engine.
TikTok is a source of information and inspiration, as well as a key source of human rights evidence. Yet the visual medium is also being simultaneously undermined by the changes around us, including a general decline in trust in a shared reality, as well as specific shifts in relation to video and image production, such as the advent of deepfakes. And we should do this because witnessing is mundane, yet simultaneously incredibly courageous. Yet too much of the time, witness accounts are easily dismissed, easily claimed as false, or simply perpetuate narratives that tire and exhaust the witnesses, who must provide yet more proof, yet again, of systemic injustices, as a number of the participants of this conference, of course, discuss in their work. We must do this because even when video contributes to accountability or justice, it faces increasing hurdles of credibility. We need to do this because of the looming presence of deepfakes and other forms of hybrid and synthesized reality, and the way these in fact buttress the existing skepticism and hostility of those whom witness accounts identify as responsible, and of audiences who choose and want to ignore and deny. So I want to talk about two areas today. First, I want to share some ideas and perspectives on the intersection of human rights and video technologies, on the basis of a recent strategic review I've been leading at the human rights, video and technology network Witness. And then I want to talk somewhat speculatively about practical steps we can take to fortify the truth across a pipeline of video witnessing, and emphasize the need to think ahead and proactively and equitably prepare for a future of more pervasive, omnipresent video, but also significant, emergent, reinforced and exacerbated challenges to this video as proof and witnessing. I'll ground this in the broader work of Witness, as well as the learning in a particular area of work I focus on and research.
That is Witness's Prepare, Don't Panic initiative, which focuses on supporting a globally inclusive response to malicious uses of deepfakes and on supporting innovations in media trust, provenance and authenticity. Before I jump into this, a short background on Witness. We're a global human rights network founded in the aftermath of the Rodney King incident on the premise that video in the hands of human rights defenders is a powerful tool. We've evolved with the trajectory of video and human rights over the past 30 years. 30 years ago at Witness, we were handing out video cameras with tapes to human rights groups and just starting to create videos alongside documentary filmmakers using their footage. 20 years ago, we were training NGOs to create short-form advocacy films and just starting to experiment with streaming video platforms. 15 years ago, we were running nuanced, campaign-driven advocacy using video and launching a global online hub for human rights footage at the same time as YouTube; we were doing early experimentation with mobile video and advocacy to a young YouTube on content moderation. 10 years ago, we started developing a curriculum for emerging video-as-evidence principles from Syria that's now used globally and took its most recent iteration in the form of solidarity sharing of practices from Yemen to communities facing rights violations in Ukraine. Also 10 years ago, and this relates to the theme of the talk today, we began collaborating to build tools for proving the location where video or photos were shot, and the first mobile tools for blurring faces for visual anonymity in photos and video. Five years ago, we started working extensively on the evolution of new forms of video manipulation such as deepfakes and, in tandem, the idea of provenance infrastructure that can show you where and how a video or photo was created and edited.
We doubled down on key focuses in providing the best possible guidance and support to communities facing war crimes, land loss and state violence, including, of course, significant work in the US on those issues. So currently I have colleagues on five continents who work in a combination of grassroots training and listening; deep collaborative work on specific projects that enable progress and learning on human rights issues and the use of video; and then broad-based sharing of good practices between similar communities of struggle. I also lead a particular strand focused on proactive and early work on emerging technologies and technical infrastructures that will set the terms for human rights video witnessing, such as preparation for deepfakes, authenticity and provenance, and emerging audiovisual technologies like AI-based production, for example, text-to-video or augmented reality. With that introduction shared, let me begin by sharing some perspectives on the landscape, informed by the recent review we've been conducting at Witness. First, we know that there are ongoing constants in the challenges of human rights witnessing. These dilemmas recur for almost every community Witness works with: the challenges of creating trust and being trusted; the dynamics of visibility and safety; and the ongoing contingent choices that must be repeatedly assessed and are often out of the individual's control. And of course there is the ever-present question of whether witnessing makes a difference on the terms that the communities who create, share and participate in human rights videography want, and how the global and local dynamics of power shape that. And we must be purposeful in understanding on whom the act of witnessing places reasonable and unreasonable, ethical and practical burdens. Witness declaratively centers in its work the communities who face human rights violations and the human rights defenders who work with them.
When we do this, we often must clearly recognize, acknowledge and uphold the anger and frustration of people forced to expose the obvious, in many cases to present images of dehumanization and abuse that reflect systemic racism, ingrained discrimination and human rights abuses. Why is it upon them to present more evidence? The frequent disillusionment about the ability of video witnessing to make a difference is a reality. We must also be clear that to film and then to share is not an obligation, but a choice that, as much as possible, they can make. Within our work, we're also increasingly focused on the obligations of what we call distant witnessing: that is, mediated witnessing via screens, which can range from observing a Telegram channel from Ukraine, to commenting on a live stream from a crisis, to engaging in in-depth OSINT, or open source intelligence, activity around a potential war crime. This mediated witnessing, which can lean into the game-like sense of spectatorship that Lilie Chouliaraki and others call out as a characteristic of contemporary activism, carries significant ethical obligations that are often not sufficiently acknowledged. So I want to talk about three significant challenges, opportunities and contradictions that shape the landscape we see. Two of these are broad technological and societal shifts, and one is within the wider activism space where Witness operates. First, as I've mentioned already in the focal theme of my talk, there is an increasing centrality of video and the audiovisual experience in society, and a parallel attack on the integrity of video embedded in a broader crisis of trust. Video is growing in prominence in culture and society as short-form video, video chatting and live video become more and more central, and is embedded in other technologies of creation and distribution, of course, such as social media, private messaging and the mobile internet.
There's also, and I think it's important for us to grapple with this, an evolution starting to happen in what audiovisual experience means. In the coming decade, this is likely to include even more of what we understand as video in a range of short-form, long-form and live, as well as augmented AI-based editing and text-to-video, virtual reality, augmented reality and other technologies of mixed reality and of immersion and co-presence. Most of these technologies are being developed with little sense of human rights and little input from those with the most to lose or gain in using them in human rights struggles. YouTube long since stopped offering statistics on the amount of video shared on its platform as it went into hundreds of hours per minute, but this volume and visibility of video is both an opportunity for more people to be heard and a challenge to the communities we center and serve, drowning out their voices, narratives and documentation and creating security risks from an exposure which leads to little benefit. We anticipate that the trust basis of video is going to be consistently and increasingly under threat. Deepfakes and the threat of deepfakes, broader claims of misinformation and disinformation, and recourse to claims of misinformation and disinformation distort truths. We know that this makes it essential to emphasize the importance of creating trustworthy video and to center and strengthen the voices and narratives of vulnerable communities, whose accounts are otherwise even more easily dismissed. It's from this necessity that I draw the talk's title on fortifying the truth and the speculative section I'll end with. Secondly, we exist in a landscape characterized by stark threats to human rights, from rising authoritarianism, and particularly smart digital authoritarianism, to increasing populism and both legislative and extrajudicial threats to privacy, free expression and free assembly.
Long-established human rights strategies such as "name and shame" no longer work well, and many state actors' commitments to human rights are shaky and in some cases increasingly hostile and violent towards human rights defenders. There are challenges to both human rights and the traditional human rights approach, as well as opportunities rooted in these institutional failures, geopolitical changes and societal shifts. There is also a concomitant, linked disillusionment, which I mentioned already, with the impact of the images we see of human rights abuses. There are the legitimate questions of what purpose it serves, as scholars of Black witnessing particularly, like Dr. Richardson, have emphasized, to see another person humiliated, killed or harmed, or another image of a war crime, when the fear is that key actors will never be held accountable by justice mechanisms. There is the concern that human rights mechanisms are totally inadequate to protect the witnesses, particularly in the context of pervasive digital authoritarianism, surveillance and the suppression of online space. As my colleagues in Nigeria note, prolific video from the #EndSARS movement in Nigeria provided a tool for targeting protesters and individuals, as well as discouraging people from participating in protests for fear of the harm they would face from the police. And of course, mis- and disinformation and coordinated attacks on trust are not limited to the scenarios around video that I note but are a broader phenomenon that is completely inadequately characterized by the catchphrase of "fake news", and indeed by the words misinformation and disinformation. On the brighter side, perhaps, many more people and activists see a role for themselves in human rights movements. In Witness's work we see people uniting around issues such as climate justice, participating in new tactics such as OSINT, or open source investigation work, or indeed, of course, utilizing the tools they have available, like mobile phones.
And finally in this landscape review, just to note that the human rights field, or the human rights video field, is growing. There are multiple directions in which the field is growing, and a diversifying field of organizations and movements working in it. This includes many more global organizations and technical support organizations doing work ranging from the OSINT work of a Bellingcat or an Amnesty Digital Verification Corps, to the visual investigations and forensic architecture of groups like SITU Research or the eponymous Forensic Architecture, and on to the mass archiving work of groups like Mnemonic and its Syrian Archive. There are more institutions and entities endeavoring to ensure that video leads to accountability. Now, where to next, given this landscape? Let me continue with perhaps something of a call to arms, or perhaps preferably a call to cameras, or, parenthetically, a call to datasets, as we increasingly think about what video actually means. What does resilient witnessing and fortifying the truth look like in this context? And here I'm reminded of both the frame with which we've approached our work on deepfakes and trust at Witness, of "prepare, don't panic", but also the rather less reassuring story of the frog and the slowly heating water: if we are not proactive in addressing the risks of our truths being undermined, then given these trends, the most vulnerable and important witnesses will be even more disadvantaged in five years' time or 10 years' time. By nature, of course, what I'm about to say is partial, both deliberately, to open up this discussion and in the interest of time, and more importantly by my own incompetence and omission, and I will welcome the ideas that folks will share in the question and answers and in the discussion following this.
I'm going to use the analogy of a pipeline or trajectory of use of video that includes filming, storytelling, watching, analyzing, sharing and advocacy, as well as preservation, reflecting that resilient witnessing should be seen as holistic: not at the point of capture only, but as a process. First, let's consider the act of filming and how filming practices may need to shift. In five years' time, as these women film with their smartphones, or perhaps, a little more speculatively, with their AR glasses, what do we need to think about? All of the indications in what I've just said push us to consider how we strengthen the integrity of the media from the very moment the filming is happening. As I mentioned earlier, about five years ago Witness began to purposefully emphasize work on preparing for changes in video manipulation and trust. We grounded our advocacy in listening to both the communities we work with but also adjacent communities of journalism and activism. We organized a series of global convenings to better understand what these communities perceived as threats and needed solutions around deepfakes and media manipulation. From this, one clarion call we heard from activists, journalists and human rights defenders was this: that their work was impacted by both the turn to a post-truth and relativist discussion and also by the rhetoric and the reality of emergent technosocial ideas and solutions around audiovisual manipulation. They worried about the capacity of deepfakes to target them with non-consensual sexual images as well as to falsify evidence. But also, and this is what we've been seeing most frequently, the ability to use the plausible claim that anything could be faked to exercise what is known as the liar's dividend and claim real footage is false. So we can anticipate that challenges to the reality of what is filmed will grow and grow. Then, practically, what might we do in response?
For instance, Witness produces guidelines for activists who want to capture video that can prove war crimes in places like Ukraine and Yemen. In this guidance, we emphasize the need for 360-degree filming and for filming from different points of a crime scene. These strategies confront the existing reality, which is not new, as Heather indicated in her presentation earlier, that people rightly question what happens outside the camera frame. This doesn't, however, necessarily address the corollary question of what happens before the cameras are switched on and what happens after they're switched off. But filming practices may need to evolve further. Let's make an assumption that current video deepfakes do better with face-forward images. So perhaps we need to focus in the short run on capturing profile shots of people to deter deepfakery. Though this will be very contingent guidance, as we know that deepfake algorithmic research improves rapidly and will compensate for these generative failures soon, and indeed that filming generically in this new manner creates exactly the training data for the algorithms to support them to improve. We know that deepfakery works less well at imitating the idiosyncrasies of any individual's vocal cords and the gestures we make as we speak. We might develop an approach focused on interviewing that really makes sure to capture the visual vocabulary of gesture that is unique to any individual. Less dramatically, one approach that is already embedded in the practices of voting coverage and candidate opposition research is the constant filming of political candidates by both opposition researchers as well as friendly sources, in order to maintain a constant record to look back at, and, of course, another database of deepfake training data. Then consider that in the future we will have the ability to create not only single deepfakes but multiple convincing images of the same event.
Technologies such as the so-called NeRF (neural radiance field) recreations allow the making of three-dimensional images from 2D images, where you can pick one viewpoint, then another, then another. We will need to consider how this influences our choices to have multiple cameras recording the same scene, as well as how live streaming is used for an ongoing record. At the technical level, Witness has been deeply invested in early engagement on what we characterize as authenticity and provenance infrastructure. Think of these as technical processes to understand where a video or photo came from, with varying degrees of granularity, to understand how it was edited, and then how it was shared and by what entity. These approaches are rapidly moving from niche to mainstream: at the camera level, for tracking of edits and changes to videos more broadly, and for maintaining the integrity of finished media productions. This infrastructure has a growing prominence, reflecting both the needs of mainstream media to resist brand hijacking as well as industry and public concerns about mis- and disinformation. These emergent infrastructures include a range of camera-based capture apps; a coalition called the C2PA, the Coalition for Content Provenance and Authenticity, that recently launched a standard; the Content Authenticity Initiative from Adobe; and a myriad of independent, often web3-oriented projects. And they do have the capacity to enhance trust in vulnerable witnesses speaking out in both news and evidentiary contexts. This is one of the reasons why human rights groups were in fact among the first to build tools to enhance the ability to, for example, add rich metadata and hashing to videos and photos shot of violations. Tools such as ProofMode and eyeWitness to Atrocities were first created close to a decade ago.
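The underlying mechanics of capture-time hashing and metadata can be sketched in a few lines. This is a minimal illustration, not the actual ProofMode or eyeWitness implementation: it hashes the media bytes, bundles the hash with capture metadata, and signs the bundle so later tampering is detectable. The device key, field names and use of an HMAC are simplifying assumptions; real tools use public-key signatures, often backed by hardware keys.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared device key, for illustration only. Real capture
# tools sign with a per-device private key rather than a shared secret.
DEVICE_KEY = b"example-device-key"

def capture_manifest(media_bytes: bytes, lat: float, lon: float) -> dict:
    """Bundle a content hash with capture metadata and sign the bundle,
    so that later substitution of the file, or edits to the metadata,
    become detectable."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": lat, "lon": lon},
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the media and re-check the signature over the metadata."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return ok_sig and ok_hash
```

A verifier who re-hashes the file and re-checks the signature can detect either a swapped file or altered metadata; though, as discussed below, the same metadata trail can also expose the filmer.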
These tools provide, for example, opportunities for witnesses to provide what Ella McPherson terms "verification subsidies" to distant witnesses who are watching, sharing or analyzing footage. However, these emerging authenticity infrastructures can also be threats unless they are carefully calibrated to protect the ability of frontline witnesses to be found, seen, believed and acted in solidarity with, rather than serving to compromise this capacity. These potential compromises arise from both the intended and unintended consequences of introducing a technical infrastructure as a response to a complex sociotechnical issue. It is possible that the standards of proof around media will increase, both in desirable and opt-in ways, but also in terms of a ratchet effect that discriminates against media produced with older technologies without the latest affordances. As they move from niche to mainstream, these emergent infrastructures raise questions around who can participate, because of technology access or security concerns; about how to evaluate these signals of trust; and about the potential harms to privacy and to vulnerable groups and individuals using them in repressive contexts of data surveillance. Provenance as a tool to fortify truths risks being weaponized against vulnerable witnesses and civic journalists unless key concerns are integrated into infrastructure proposals as well as implementation, and combined with ongoing media literacy and vigilance on implementation. The data trail these tools create may be used against those witnesses, or the absence of that trail can be weaponized to undermine credibility. This is particularly concerning given that even under optimal circumstances, activists, journalists and witnesses have to make complex judgments about visibility against obscurity that are heightened by the growing corporate and state capacity to surveil.
One other area of concern is to ensure that these types of signals are not integrated as a criterion of journalism, as we see for example in some countries with fake news laws, or of "valid content" within the increasingly securitized fake news laws proliferating worldwide. It's in this light that Witness has carefully and purposefully engaged from early on in many of these processes, to push for these harms to be addressed at the standards level, not left to the vagaries and failings of how any individual product works. We will likely be grappling into the next decade with the implications of these types of infrastructure and tools: how they protect the integrity of footage from the point of filming and help us understand the increasing mix of synthetic media with reality, and yet how to do this without the related harms. Such investment in the options to provide better provenance and better video, which carry with them the risk of generating yet more data to fuel deepfakes and yet more data for surveillance, also requires us to invest in tools available at multiple levels for anonymity. This is both visual anonymity from the point of view of the filmer, to nimbly blur and redact faces and other features while filming, while preserving the provenance information (the two are not mutually exclusive), but also from the point of view of the filmed; this concern for bystander privacy has been one of the central early discussions around augmented reality and smart glasses. They also require us to push for better ways to opt out of data harvesting for deepfakery, for example with purposeful ways an individual could pollute their own personal image data pool, or other ways to opt out of participation in datasets. Alongside these technical and sociotechnical considerations, there are legal dimensions to how we fortify the truth at the point of filming.
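As a rough illustration of how visual anonymity and provenance can coexist, here is a minimal, hypothetical sketch: it irreversibly pixelates a face region of a frame, then hashes the redacted result so a provenance record can attest that the blur was applied at capture rather than as a later manipulation. The function names and the grayscale list-of-lists frame format are illustrative assumptions, not drawn from any specific tool.

```python
import hashlib

def pixelate_region(frame, top, left, height, width, block=4):
    """Replace a rectangular region of a 2D grayscale frame (a list of
    lists of 0-255 ints) with coarse average-valued blocks, destroying
    identifying detail while leaving the rest of the frame untouched."""
    out = [row[:] for row in frame]
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            vals = [frame[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

def redaction_record(frame) -> str:
    """Hash the redacted frame so a signed provenance trail can commit
    to the blurred version, never the identifiable original."""
    flat = bytes(v for row in frame for v in row)
    return hashlib.sha256(flat).hexdigest()
```

The key design point is that the provenance commitment is computed over the already redacted frame, so the trail proves integrity without ever containing the identifiable pixels.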
The most notable is the need to fight for a global norm that the right to record is a fundamental part of both free expression as well as free assembly. This is because the way many people will share their experiences is via the camera on their phone, and the way many people will experience those realities and join or assemble with others in virtual shared spaces is via social media and live streams. The US is rare in having some robust protections on the books for the right to record, but it is an exception, and again we should be well aware of the bias and violence that occurs around attempting to exercise this right and in its aftermath, and of the ongoing efforts to push back on it at the legislative level. Most recently, a law proposed in Arizona was struck down and is now being pushed through again. Not all witnessing is, of course, documentation. Narrative video advocacy was the foundation on which Witness was built and continues to be a key part of our understanding of a diversified civilian capacity to hold power to account. Yet our strategies here too must evolve to fortify the truth. Continuing to develop our existing narrative practices is a part of this. My colleagues at Witness in Latin America have pioneered work on how to draw on a range of mainstream and counter-cultural approaches to narrative, such as social listening, culture jamming and hacking, and then bring them closer to the spirit of the land rights struggles and media collectives they work with there. However, in the next five years we'll need to fight a narrative battle on terrain that is even more muddy and slippery. Narrative competition is of course not new, but one characteristic of contemporary corporate and state misinformation and disinformation is the idea, first popularized to describe Russian actions in Ukraine in 2014, of floods of falsehood.
This is a morass of internally inconsistent narratives, stories and flashes of documentation that do not craft a consistent worldview but fuel a lack of confidence that you can believe anything. In contrast, video advocacy for human rights often operates on a fundamentally different proposition, focusing on consistency, ethical persuasion and engaging an audience to action as a goal, and as ethical actors we have fewer options for emulating or learning from the narrative floods of falsehood in an appropriate manner. More positively, there's an opportunity to lean into new storytelling modes. Our storytelling, and the watching and sharing that I will come to in a moment, will likely occur both in emerging formats of augmented reality but also, more mundanely, in the resurgent remix formats of TikTok, where manufacture is made visible in the stitch, the audio you add, or the effect. These formats allow for possibilities that hark back to ideas from earlier denizens of these MIT hallways, such as Henry Jenkins, of a renewed participatory remix culture, as well as a dark side of distortion. As human rights defenders, we should lean into this, understanding that documentation and narrative may look different in a TikTok-like environment, and be inspired by this. New storytelling may also be at our fingertips in much more complex ways. The increasing ease of text-to-image and now text-to-video tools, which take a text prompt and turn it into AI-generated images and videos, makes for heady promises about the creative potential for anyone to name what they want to see and then see it made. And these advances will also be integrated into more conventional editing and production approaches. This was a demo released yesterday of a two-and-a-half-minute video created from text. There are powerful activism futures to imagine here. A recent signal of the future is this.
It is a project that conjures up climate futures for a specific address, using the ability of text-to-image tools to depict undesirable, climate-impacted futures. But there are also fundamental questions of how much of our search engine content, and of the images we encounter, even of apparently real scenes, will be computer generated in the future, and how we will discern one from the other. It's one of the reasons that at Witness we've been leaning into the importance of building disclosure of media creation and production processes into both camera-made and dataset-driven videos and photos, so you can see how an image was filmed or synthesized, what the prompt was that made it, what the underlying model was, et cetera. New storytelling will also draw on old forms of storytelling to power, like parody and satire. I'd like here to highlight collaborative work with the Co-Creation Studio of the Open Documentary Lab here at MIT around the areas of deepfakes and satire and parody. When we look at the realities of usages of deepfakes to date in personal attacks and political activism, the most prominent usages, alongside the omnipresent non-consensual sexual images and claims of deepfakes used to dismiss real footage, are in photorealistic and memetic satire. In our "Just Joking" strand of work, we look at how photorealistic satire, and photorealistic deception and malicious attacks masquerading as satire and gaslighting their viewers, raise key questions about how you disclose the presence of synthetic media in your content, how we think about consent, and the possible limits of what it is permissible to deepfake. And new storytelling does not need to be about showing it all.
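Returning briefly to the disclosure point above: the kind of creation-disclosure record described might, very schematically, look like the following. The field names are hypothetical and are not drawn from the C2PA standard or any shipping tool; the point is simply that the record names the model and prompt behind a synthesized image, so a viewer or platform can discern synthesized imagery from camera-captured footage.

```python
import json

def synthesis_disclosure(prompt: str, model: str, edits=None) -> str:
    """A minimal, hypothetical disclosure record for AI-generated media.
    It declares the media as synthetic and names the generating model
    and prompt; field names are illustrative, not standardized."""
    record = {
        "media_type": "synthetic",
        "generator": {"model": model, "prompt": prompt},
        "edits": edits or [],
    }
    return json.dumps(record, indent=2)
```

Such a record only helps if it travels with the media and is itself integrity-protected, for instance by the signing approach sketched earlier.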
I'm proud of a project from a few years ago at Witness called Capturing Hate, which took on a challenge: the lack of national data in the US on attacks on transgender and gender non-conforming people, and a horrifying reality that YouTube and other sites were full of perpetrator and bystander videos filmed to celebrate assaults in public on trans and gender non-conforming individuals. Rather than showing those videos, the project used them as a source of visual data to make the case, in multiple ways, for the scope and scale of this problem. This type of storytelling also reflects the ways we'll need to improve our presentation of civilian witnessing and perpetrator video in more traditional human rights settings, reflecting how databases tell stories of systemic violence rather than individual incidents, and how visual presentations using multiple camera viewpoints, 3D visualization and timelines will evolve. These visual trajectories will need to consider immersive multi-source curation, packaging context of abuse, various forms of videos, different eyewitness sources and narration, to provide a better way to bring a judge or fact finder into the scene of the crime. But we must do that without creating a CSI effect of misleading and escalated expectations of presentation for every human rights crime scene. And ultimately, all of these new modes of storytelling need to prove their worth in asserting human rights truths. In the face of fundamental concerns from those most impacted by human rights violations, we need to work out how video can make a difference, built through examples of it being used effectively for justice and truth prevailing. Part of this also means protecting and reinforcing the venues where video is used for accountability, ensuring its legal admissibility, and ensuring those spaces and those decision makers keep up with the potentials and pitfalls of video. The literacy of investigators is as important as the capacities and bravery of civilian witnesses.
The absurdities of how judicial settings currently handle video need not be located further afield than here in the US; consider, for example, the absence of technical literacy in the recent high-profile trial of Kyle Rittenhouse. We'll need to grapple with the potential implosion point of the established, or to some outdated, foundations of legal fact finding. Eyewitness testimony and hearsay rules have been shoehorned into a digital age, and we will need to reconcile robust open source evidence with the value of unreliable eyewitness accounts and issues of human memory. Videos are mainly made to be watched, be it by humans or by machines. This watching and then sharing, and I'll focus more here on the humans, is likely to evolve with implications for human rights witnessing. I'll pick just two dimensions to spotlight: first, the consequences of a remix-oriented and participatory technological turn, and secondly what we might call the forensic turn of deep scrutiny of media. First, the remix dimension: based, as I noted, on the direction of evolution of platforms like TikTok as well as of emerging technologies of augmented reality, our watching and sharing will likely occur in formats that lean into remix and manipulation and into layers of engagement and information. There are complex human rights ethics around remix as a medium of human rights engagement and distant witnessing which we will have to grapple with and work out how we reconcile with the original intent of the human rights document. Another mode of looking at a video is not as a plaything to remix but as an object to analyze and to which we direct our skepticism. This forensic turn in our analysis of the video we watch, the idea that we should scrutinize, analyze, geolocate, contrast, read the pixels, is already taking place in both positive and detrimental ways.
The positive ways can be seen in the growing field of collaborative verification that characterizes communities like Bellingcat and the so-called OSINT world. At the same time, there's a critical question of how we bring those skills and communities of analysis closer to the diversity of human rights contexts and defenders globally, so that a neglected conflict in Cameroon receives the same support from a circle of distant witnesses providing verification of their accounts from neighboring cities, towns and countries as does a conflict in the heart of Europe, and so that the ethical decisions about analysis and usage are grounded in community scrutiny. And the downside of the forensic turn is the excessive scrutiny that we subject media to: the down-the-rabbit-hole inclination to, for instance, spot a deep fake, which raises the level of skepticism about true media as well, often without providing much help on actually spotting the falsehood. We must also think about who has access to the tools and skills for detecting more complex media manipulation. As journalists, activists and human rights defenders in Witness convenings noted, and as the spread of deep fakes and supposed deep fakes now highlights, when it comes to the availability of tools as well as media forensics capacities, or mechanisms for escalating the most complex cases for expert review, these are not evenly, equitably or appropriately distributed globally, across rights issues and in other ways; nor, more broadly, is the capacity to do meaningful forensic analysis of video. There is a detection equity gap in access to media forensics capacity, let alone emerging tools for detecting deep fakes. We will also need to think about the realities of live and real-time witnessing. A few years ago, I led a project at Witness focused on what we described as co-presence for good.
In the Mobilized Us initiative we looked at how to use live video streams and asynchronous recorded videos for engaged storytelling, working with a media collective in a favela community confronting violence in Rio. This was combined with the routing of particular task-based opportunities for action to people watching in solidarity, either via these live streams or the other videos: for example, to translate or contextualize or amplify attention on an issue. It was an imperfect project and in fact revealed many fault lines, some of which Jacob referred to earlier, that will only get harder: the vicarious viewing of haters and spectators, the security risks in live streaming, the challenges of securing audiences for synchronous broadcasts. But it also pointed to the possibilities of virtual audiences acting with and in alignment with frontline witnesses to take actions that force-multiplied their capacity to act and could fortify their truth. Let me end by talking about the final stage in that pipeline I articulated. There will be an ever greater volume of testimonial, documentary and narrative accounts in five years, and each account will be under pressure of truth and trust at the point of creation as well as down the line. Preserving critical truths captured on video, both for usage the next day and generations later, is both easier and harder than it has ever been. Cloud storage and cheaper offline storage, as well as the promise of the decentralized web, suggest that it will be cheaper and easier to store a range of copies of media in distributed networks and with redundancy. Meanwhile critical videos disappear at the blink of an algorithmic eye from commercial media platforms like Facebook and YouTube and Twitter, compromising the sources of video evidence before they can even be archived.
But all of these technical opportunities are irrelevant if we're not also finding ways to ground them deeply in community control, and if we don't recognize that archiving is an act of power that makes decisions about what is preserved. Too much of the current hype around web3 and human rights footage, a place where discussions of authenticity and preservation sit in the popular imagination and the popular media, focuses on naive promises of transparency and immutability and access to content rather than on nuanced values of selectivity, community control and redaction to protect the vulnerable, and by those most impacted by the footage. We need to help shape this emerging socio-technical infrastructure in that direction rather than toward the too-naive dreams of venture capitalists. More mundanely, and this has been a significant focus of Witness in recent years, we should assume that the weaponization of internet shutdowns, used repeatedly to target potential and actual dissent from Myanmar to Ethiopia and beyond, will continue to grow and be deployed in ever more nuanced and targeted ways to suppress the production, distribution and preservation of videos. This has a powerful impact on the sharing of critical videos. Just yesterday, the Washington Post noted that the number of protest videos coming from a Telegram account that regularly posts and circulates clips from the current protests in Iran had dropped in apparent correlation with the throttling of internet connectivity, with the number falling from around 80 new clips on September 21st to just 40 the day after. Again, to revert to technology infrastructure, we need to invest in resilient peer-to-peer approaches alongside mainstream infrastructure that allow for documentation and advocacy and sharing to occur even when the internet is stripped away from people as a tool of communication. As I wrap up this talk, let me leave you with maybe three calls to action around fortifying the truth.
First, a call to center the voices and needs of people facing human rights abuses, of course in their own narratives, but also as the people whose voices should be loudest in shaping the infrastructure of trust. One outcome of that is that we should focus less on misinformation and disinformation, the assumption that accounts are false, and more on enhancing trustworthy and critical voices. Second, a call to proactivity in fortifying the truth at the level of infrastructure as well as everyday practices. We need to recognize the fundamental importance of a more diversified human rights witnessing using the tools of today like video, but also the underlying and growing threats to that. Third, a call to collaboration. A proactive approach to fortifying the truth requires us to think about collaboration between human rights movements, international networks, the individuals developing new ways to visualize information, and the developers who think they are building the next layer of trust for the internet but have no experience of the reality of human rights. Often fortifying the truth will be about breaking down gatekeeping barriers as much as about technical barriers. With that, I thank you for your time and I look forward to questions, discussions, feedback. Thank you. Thank you, Sam. That was quite amazing and a lot to think about. I encourage people to come up; there are two microphones on either side. I'll take the opportunity to ask you a first question. It's really quite fascinating thinking about authenticity and provenance, and I've been really quite struck by all the metadata that, if I just take a photo on my phone, how much it knows: where I am, the time, all these things that I don't remember turning on.
I don't remember saying, oh yeah, this is great, please do it, but it also strikes me that it kind of captures a moment of this surveillance development where it's sold to me as convenience, that I can look it up. I think for a long time we've been worried about our AI robot overlords, but as I look at Alexa and my remote control that responds to me, what I see instead is that it's control by my servants. They're serving me, and in the interest of serving me, I've accepted a level of surveillance. So I guess what I'm wondering about is how do we think about some of that kind of contradiction in there? I thought we'd be fighting against Big Brother, and instead I'm buying a Big Brother so I can listen to the radio without getting up to turn on the station. So I think it's quite remarkable, and I'm sure you've thought about this, but it's a little bit about convenience, metadata and surveillance, and how it's this contradictory mix of our AI servants helping us but then really increasing the dystopian future that I thought I'd be running away from. Yeah, there's lots. Maybe too much in there, so just pick something, pick something. So I think there are so many different ways you could address that. I think there's a convenience side which actually also applies to a human rights advocate, right? There is a convenience of how you might be able to share more trustworthy information that is more trusted. I think the critical element that we've been pointing to, and it's been how we've been thinking about our engagement with these developments and this infrastructure, is that it has to be much more visible, and this is the challenge: the desire to make it invisible is squarely there, when in fact, in order to make it useful to a human rights advocate to make a decision, for example, I wanna include this data but not this data, it has to be visible, right?
And it also has to be about it's not all or nothing, and I think this is one of the things we've really grappled with in these infrastructures, actually arguing from very early on that, for example, redaction, removing data, is not a hostile act. Redaction is a part of the media generation process, but in order to redact you need to know it's there to begin with, right? So understanding how convenience can serve human rights activists, but also how it has to be linked to this transparency, is perhaps where I would start on that, yeah. And please, Michael, are you ready? And I ask the people who are asking a question to please introduce themselves. I'm Mike Fisher. I teach in the science, technology and society program here at MIT. I wanna ask two quick questions about things that you mentioned in passing. First of all, to thank you for this terrific overview, and to say to Ian, those of us who have experience working in police states always assume that surveillance is there and will get worse. Having said that, the overview that you gave us was as abstract as it needed to be, and I wonder if you could drill down and give us an example of something that you mentioned in passing and that I happen to be personally very invested in, and that is: how is Witness, or how might Witness be able to help in a fluid situation like Iran today? And the second question has to do with your last chapter on preservation. A lot of the preservation that gets talked about, and that Jacob talked about earlier regarding Brazil, is localized, maybe community controlled in the best circumstances, but is short-term, and given that technology is changing so rapidly, how do you think about the longer-term accessibility of preserved material, given that material disappears so quickly, particularly if it's digital? So I will give an analogy to Iran, because Witness is not currently working there extensively.
One of the principles for us is that we have to do work that's grounded in long-term relationships, and we don't have extensive long-term relationships in Iran, but I'll give an analogous example, which is Myanmar. The way we would approach this, and again, I'm talking from a Witness organizational way of thinking about how we would do that, is we think on multiple levels. One is how do you support the ongoing lawyers and documenters who are trying to do things like gather robust evidence, for example, of the violations that take place? For Myanmar, I'll just take us back to February last year, when the military coup took us very much into a context that we often see in our work, which we would describe as a surge response: suddenly you have lots of new participants in a movement who are using their cameras and filming, which raises all these questions about safety and security, about do they know what to film, do they know the risks of sharing. And we see it constantly, right? You get new participants entering because they're in the middle of a protest movement or something else, and they're not necessarily the established activists, and so there we tend to focus very much on very simple guidelines that help you make a decision. You saw a decision tree there around should I share, right? And in fact, often we're telling people not to share, right? Because in fact, we've seen that the sort of default assumption driven by social media that you should share straightaway can really backfire, right? So, first question: should I share? Probably not, pause, right? That sort of guidance. It's connected to the work where we take a sort of short-term and long-term approach, right? The infrastructure stuff is not relevant to Myanmar now, but it'll be relevant to a Myanmar context 10 years down the line.
So for example, we'll bring the discussion about how videos are being preserved, or highlight the authenticity questions, into these infrastructure questions. So that's maybe just an answer there. On preservation, yeah, archives are the hardest, right? And digital archives, well, all archives are hard, right? All tape degrades, all digital archives need to be renewed. Working out how you do that is one of the things we have the hardest time with, honestly, and what we're grappling with right now is how to do that with limited resources. Archiving takes resources, it takes capacity and skills. So how you do that is actually one of the thorniest challenges we face. I think the moving closer to community control, though, applies across the work, and I'll draw an analogy around mis- and disinformation. Many of my colleagues who are working in the African context have been looking a lot at how you push mis- and disinfo responses closer to communities, because often information is shared rapidly and has a relatively short half-life. And in that case, and I didn't mention it here, one of the big pushes we've had in the tech company context is for much better availability, for example, of intuitive reverse image search, right? One of the biggest problems you have is what's called a shallow fake: a recycled image that gets moved from one context to another. It's actually quite hard to work out where it came from in, say, a WhatsApp group or on YouTube, and the companies, for example, could make it much easier for that to be intuitive to see in a platform. I'll say easier, not much easier. But yeah, they could certainly do it. And they might not want to do it either. Well, there's a political will question. Absolutely, there's a political will question, but could they do it technically? Should they do it? And could we make them do it? Yes, it's all three, I think. Hi, thank you very much for your talk.
Can I take this off? Okay. So I have... I'm sorry, could I ask you to introduce yourself too? Oh yes, sorry. I'm Hamid Razana Syria. I'm from Fordham University, and I'm also originally from Iran, so I have a related question too. Considering that the vast majority of these videos are distributed and preserved on social media, what do you think about this constant recent push toward content moderation on social media by the governments here and in Europe, where even Human Rights Watch came out with a report against the German government's criminalization of social media content, considering that the main victims are actually the people who are the victims of injustice when it comes to these kinds of moderation? We saw that on Facebook Palestinians get censored a lot, and recently there was a report that came out that the Islamic Republic government in Iran has infiltrated the Instagram moderation team. So a lot of our videos from Iran are getting taken down, and I'm just thinking that if this push for moderation wasn't happening in the US and Europe, we wouldn't have this problem right now. So I'm wondering, what do you think about this? You're naming a problem that pretty much every country globally faces, particularly if you're in the majority world. I'll break it down into the three sort of bits that I think are the problems, and how people, including Witness, are trying to address them. One is that content moderation is inadequately resourced and poorly done by companies outside the global North, so that Facebook and Instagram are not doing it well even without an infiltrator. This is just systemic underinvestment which we need to constantly push back on, and the recent push on Instagram and Meta led by Iranian activists is a good example.
The problem is that every time it repeats the same thing, and it's incredibly draining for activists to keep doing this, because both the global activists and the local activists know that this will recur in the next conflict, in the next context. The second is the poorly designed laws by European governments that set precedents that contain implicit discrimination, right? So the NetzDG tends to target Muslim, Arabic-speaking content probably more than white supremacist content, right? And those types of laws can also be emulated, which is the other risk we see: the emulation of laws as well as norm setting. The third is there is some content that should come down, right? Generally you don't wanna have incredibly violent content, beheadings, circulating on a consumer platform, and there's a legitimate reason why a Facebook or Instagram or a Meta might want to take that down. The question then becomes, are they preserving it? And there's a big push for what are known as evidence lockers, which would be a much more systematic, transparent way to preserve the videos that get taken down by content moderators, but also by AI, because a lot of the videos are not even really seen by humans. They're identified by classifiers and they get taken down. So there's not even a chance that, say, an Iranian activist could grab that video and say, I wanna hold onto that as proof of an incredibly violent shooting of a young woman in a protest. Hi, I'm Valerie. I'm a doctoral candidate at Harvard, where I study the history of photography and policing, and specifically the use of police bodycams today. And I was struck by the fact that the metadata that you're working toward incorporating in civilian videos is quite similar to metadata that has recently been added to police bodycams: the provenance chain, the time stamping, the location stamping, all of this.
And it seems to me that police or state actors and civilians have been in a kind of dialectical relationship for a century now, with escalating tactics of surveillance, verification, et cetera, that is moving toward a near-total, perfect recording. Both groups are striving toward this and kind of egging each other on. And we also know that as we add more metadata and we collect more videos, we need more cloud storage, which is incredibly environmentally harmful. So I'm seeing this kind of strategic escalation on both sides that seems quite necessary, because it's what we're caught up in, but it also seems quite damaging. And so my question is, what kind of larger political or ideological struggles need to accompany this to potentially get us out of this cycle, if you see the need to escape this cycle? I think you described the problem so eloquently, that dynamic that is happening. And I think it's a painful awareness that we're competing in that dialectic, because there's also the risk, and this is what we describe as the ratchet effect, that it excludes people who can't compete in it for a whole range of reasons. So you have to be very cautious about it. As for a way out of that cycle, and the other thing I wanna appreciate from what you noted is, I wanna be careful, and I frame this as moving forward, not being sort of trapped in a presentism around what we're talking about. These are dynamics that have existed before, right? We're not just starting it. It's not 2022 and everything suddenly starts. I think maybe one place, and I guess it's ideologically trying to work out how you do it, is, from the perspective we're coming from: how are you reinforcing the credibility and the capacity of civilian witnesses in these settings, given they are disadvantaged, right? And they are increasingly likely to be disadvantaged by the ways in which it is possible to manipulate perceived reality.
And so we have to invest in supporting them to do that, or have to engage in supporting them to do that, because they're already disadvantaged by the long-term trajectory. We know that their accounts are dismissed, right? This is the historical reality, and the change is increasingly in that direction. I don't know how to resolve that, and I appreciate the question. One of the things I would note is something we did as part of the Coalition for Content Provenance and Authenticity, something quite unusual in that space, which is a very comprehensive threats and harms assessment early on that tried to look really broadly at what this meant. So it didn't just say, you know, people will be able to prove true or false. It looked at things like the climate implications of this. How would you mitigate this? How would you think about access to livelihoods and journalists? Would they be, you know... So I think it's really important we have those conversations early on, because sometimes it can turn into a discussion just purely focused on one bit of it, when in fact there's a far greater circle, but yeah. Perfect, hi. My name is Nancy and I'm coming from the Harvard Graduate School of Education, and I'm here with some equity and inclusion fellows. My question is coming as an educator: what lessons can we teach children, those who will be living the realities of the climate crisis fallout, about how to use these technologies responsibly? Thank you for the question. I don't have an answer in response to the climate crisis, but I wanna name that one of the things that Witness is centering in its next five years of work is actually how we think about these tools in the context of climate justice and how youth are using them globally to express that.
So I don't have an easy answer right now, but we're trying to understand that, and if students in your class and others are really trying to grapple with this, part of what we're doing now, and it's always how we engage in this, is to think how do we listen to a broad range of people who are already trying to use these tools in an appropriate way, right? This doesn't start from zero. We have to listen first to what we're hearing around it. More broadly as educators, thinking about what are the literacies we need as witnesses, I think there are witnessing literacies that we should be helping people think through that are really about: should I film this, should I share this, who should I ask, who needs protection if I'm gonna share this. And we can see that via, obviously, the implications for vulnerable witnesses who get thrust into this; take any of the prominent witnesses to police killings in the US in the last years, right? The repercussions have been very extreme in many cases. So I think thinking about witnessing literacies as part of the toolkit of young people, not to encourage them to run out and film human rights violations, but simply because the camera is their tool for showing their realities, is important, yeah. All right, so Channing Sherman writes, and there's another one, I'm gonna sort of combine them: are there any real-world examples of what newsrooms are doing to combat deep fakes? And also, do you know of any deep fakes that news organizations were able to catch before they gained traction? So, sort of success stories. Can I ask, was it... oh, were able to catch. I thought you were gonna say weren't, and I was gonna be calling out someone. It's really, it's an equity question again, right? So the Washington Post and the New York Times do really well. They have people who work on media forensics. We've been involved in a number of cases globally.
I can think of one last year where, you know, journalists in Myanmar had no capacity, and this is not a slight on journalists in Myanmar. It's a reflection of who's invested in journalism training there, who's providing resources, who's giving them opportunities in this, and the underlying state of the journalism industry. So it really varies. There are good practices that are coming out of newsrooms, but they tend to be very much aligned with who has money and resources, and not necessarily aligned with where the most vulnerable people face deep fakes, which may actually not be the high-profile politicians but women in public life across the world. One of the things I would say is there have not been a lot of deep fakes, and I'm glad of that, right? I'm someone who focuses on deep fakes, and I'm very excited that there have not been a lot of deep fakes that have created political deception. I think an example where the newsrooms did really well, and everyone did really well, is actually a tale that deceives us. It's the tale of President Zelensky. There was a deep fake of Zelensky in March this year, and it was very rapidly debunked. Newsrooms patted themselves on the back, Facebook patted itself on the back, but it was a very bad deep fake. The Ukrainian public had been warned that there was gonna be a deep fake of him doing this, and then he promptly turns up on Telegram with his 4.6 million followers and says, it's me, right? So it's a terrible example, and what it does is it makes people think it's gonna be easy. But it's easy if you're the president of Ukraine, if you have national newsrooms and Facebook wants to fix this quickly; it's not easy if you're a civil society activist or a journalist in Myanmar or Georgia or, you know, Alabama, yeah. Wonderful. Thank you so much. The conference goes on, Sam will be around. Please join me in thanking Sam Gregory.