I'm Arash of the Berkman Klein Center. I'm a senior researcher there working on projects related to the ethics and governance of artificial intelligence. And very, very pleased to welcome Nathaniel Raymond, who is the director of the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative. He's here with his colleagues from the Harvard Humanitarian Initiative as well. Today you're gonna hear a little bit more about the work of HHI, particularly on the ethics, governance, and legal questions of emerging technologies in a humanitarian context. Nathaniel has had about 15 years of experience working as a humanitarian aid worker and a human rights researcher in this field, and he's really driving these initiatives and efforts at the university and globally with the networks that he works with. Anyways, I'm gonna turn it over to Nathaniel, and he's gonna talk for about 25 minutes, and then we're really eager for you to engage in discussion with us. So please do have your questions ready for that. Welcome, Nathaniel.

I just wanna thank our colleagues at Berkman Klein for putting this together. I wanna thank you all for showing up, and I wanna thank Harvard Catering for giving you a reason besides me to attend today. So basically where I wanna start is assuming that you know nothing about the word humanitarian. Often when we say humanitarian, it means doing good stuff for people. But humanitarian actually has a very specific legal meaning. And so what is it? Does anyone know who this guy is? Take a guess. Yeah, of course you know, because you work for me. Anybody else? This is Henri Dunant. He was a Swiss businessman, and he went on a carriage ride in 1859 and happened to see this. This is the Battle of Solferino. And back in the 19th century, people would go and actually have picnics and sit on a hill and watch the battles.
And at the end of this particular battle, Henri Dunant alighted from his carriage and saw a landscape littered with dead and wounded soldiers. And he was so affected by the fact that there was no mechanism for caring for the dead and the wounded that he wrote a book called A Memory of Solferino. Do you know what that caused to happen? Anybody? Yeah, the Geneva Conventions and the International Committee of the Red Cross came out of Henri Dunant getting out of a carriage. It is from the beginning of international humanitarian law that we have the beginning of what we now define as humanitarianism: the idea that humanitarians abide by the law of Geneva and abide by four principles. Humanity, independence, neutrality, and impartiality. That idea, not just doing good stuff, is what makes a humanitarian a humanitarian in international law: that their actions are based on need. So we zoom forward in time to 1969, 1970, a crucial moment in the development of modern humanitarian response. Classic image of 70s and 80s humanitarianism: starving children in Africa. During the Biafra secession from Nigeria, the state of Biafra was basically blockaded by the Nigerian government, and by some estimates upwards of a million people starved to death or died from starvation-related illnesses. This was a crucial moment because it was the rise of the international humanitarian NGO. In particular, one NGO came out of that. Does anyone know what MSF is? What's MSF? Médecins Sans Frontières, Doctors Without Borders. So as we move into the 70s, there is, in the context of conflict, an increasing role of non-governmental organizations playing a neutral, supposedly neutral, humanitarian actor role to meet need when war prevents the delivery of assistance. Another critical moment, and unfortunately more pictures of starving children in Africa. They do know it's Christmas, but will they ever get there? A reference to what?
I'm gonna test, for you youngins in the room, your knowledge of 1980s pop culture. Yeah, Live Aid. The We Are the World super-famine of 1984 in Ethiopia. Out of the crisis in Biafra and the crisis in Ethiopia, you have the beginning not just of the emergence of the international NGO but of something else: technical and ethical standards for professional conduct. One of our big ones coming out of the famines of the 80s is about how we use imagery, or don't use imagery, of starving people, in particular the bloated-belly children of Ethiopia and Biafra. But something else was happening, which is the beginning of a realization in the field, particularly around the feeding operations in Ethiopia, that assistance delivery wasn't enough, that there was fundamentally a human rights issue and a tension between the responsibility of humanitarians to provide assistance but also to protect a population. And sometimes, to get humanitarian access, and as someone who operated in Ethiopia, this is still a problem today, you have to make certain compromises on advocacy and protection to access a population to give assistance. This all comes to a head very quickly in this: 1994. So if there was a big bang moment after Henri Dunant, this is one of the other big bangs. You've got World War II and the modern Geneva Conventions, but following that, it's Goma. The Rwanda genocide happens. Génocidaires, people engaged in the genocide, and civilians run into neighboring countries, including the Democratic Republic of Congo. All the NGOs come in to assist. There's no coordination. There's no minimum technical standard for water provision in particular. So you have some people setting up their water operations where the TV cameras are in one village, but over here in another camp, there's nobody. So cholera breaks out and spreads like wildfire. This was a catastrophe. And in many ways, it was a catastrophe related to two problems.
One, we didn't have coordination, and we had a lot of people who didn't know what they were doing, just doing stuff to get donations. The other thing that happened is that we had people coming in and re-arming on the DRC side and then going back into Rwanda to kill. And so out of that, we get two critical things. The Sphere standards, which are minimum technical standards for the provision of humanitarian aid, all the way down to how far apart you gotta put tents to have a fire break in case there's a cooking fire. And this thing, the Humanitarian Charter, which articulates the beginning of what we call the rights-based approach, or the RBA. Okay, so you've just gotten a crash course in humanitarianism, yay. You know everything there is to know in five to 10 minutes about the history of humanitarianism. Let's set that aside for a moment and talk about, okay, that's how we got here. Where are we now? Now we are in another crucial moment, a moment that I often call the digital Goma. Thinking about the Goma crisis, we are now in an equal crisis in terms of rights and in terms of minimum technical standards. But this time, it's about data and it's about ICTs, information communication technologies. And before I launch into part two of this talk, I just wanna say: everything I'm about to say to you, even if you don't do anything ever again with humanitarian response, will be relevant in any other issue you look at related to data, ICT, and law. So bear that in mind, it's gonna have relevance even if you never think about Syria. So getting personal for a second, I was here. When this photo was taken, I was somewhere over by that water tower. This is Biloxi in 2005, after Katrina made landfall in Mississippi. As Solferino was for Dunant, Biloxi was my Solferino. Three days after landfall, I'm there with Oxfam America. And we are in a primarily African-American neighborhood down on Division Street by the Baptist Church, and there is no aid in that primarily African-American community.
Go over to the primarily Caucasian community uptown, they got generators, they've got the big Tide washing machine truck. Have you ever seen those? They're pretty impressive. Lot of washing machines. But they also had something else. What do you think they had that the African-American community didn't? Cell phones. And what we began to realize in that response is that we were seeing digital invisibility, that the assessment for that response was only as good as the networked connectivity of the population. And that there was a racial and economic disparity in terms of how federal and state authorities were seeing that population, based on whether or not they had barriers to their connectivity. For me, everything I've done in my life since then was really related to that moment. That was the big bang. Where suddenly we were seeing invisible populations affected by digital modalities, and hyper-visible populations. That's only gotten worse. And it's only gotten worse to the point where now our use of ICTs and data is causing what we call secondary big data disasters. The solutions that were supposed to, from a responder-centric perspective, make things better have in many ways actually potentially made things worse. This is another big bang moment: 2007, 2008, the Kenya election. Ushahidi, which means witness. Ushahidi is deployed, one of the first crisis mapping crowdsourcing platforms. And so as we had the rise of the NGO in Biafra, now we have the rise of the VTO, the voluntary technical organization. You can hop on your laptop and save a life, blah, blah, blah. The fact of the matter is that buried in this innovation is a broader crisis. In the 20th century, our ethics were built around personally identifiable information, PII. So HIPAA, right? Everything you know about protection of personal data comes out of the PII tradition. But now, going back to this map, this isn't an individual, right? These are incidents.
This is now DII, demographically identifiable information. And you can abide by law, regulation, duty of care, and liability on PII and still kill people with DII data, sometimes even more effectively and faster, through negligence, and not violate any of the extant ethics. As we think about artificial intelligence, as we think about big data, DII is the fuel and the tank of the 21st century as it relates to AI, as it relates to social media. So how do we retrofit, as humanitarians, our conception of ethics and law that was built around hiding the registration number, hiding the name? When now, it's about, are we targeting whole populations? So I'm not just gonna throw stones in a glass house, I'm gonna put myself on trial too. In 2010, my colleagues and I came to the Harvard Humanitarian Initiative to do this. And you're like, what's that, a Jackson Pollock painting? No, this is the burned village of Maker Abior in the contested Abyei region between Sudan and South Sudan. This was arson, allegedly. How did we know that? Can anyone tell me? Those dark spots are burned houses called tukuls. How do we know it was arson? Because the ground between the houses is not burned. We call that selection, suggesting intentionality, unlike a wildfire. So we monitored with money from George Clooney and much of the cast of Ocean's 11, I kid you not: Don Cheadle, Matt Damon, Brad Pitt. Most interesting funder I'm ever gonna have. Take that, Ford Foundation. Anyhow, we used high resolution satellite imagery analysis combined with ground reports to detect and document threats to civilians in near real time. What we were doing was we were in search of a magical holy grail, what my colleague Kristin Sandvik, a Harvard Law School graduate, and I, in a recent article, called the ambient protective effect: the idea that the collection and release of data would in four potential ways change the calculus of events in a conflict situation. That we would, in effect, say: smile, you're on camera.
First, we would interdict the perpetrator response by deterring them from attacking based on a perception of liability. Second, we would provide civilians with early warning information to cause them to make better, faster, safer decisions. Third, we would pressure governments by putting out there, through satellite imagery, information that otherwise would not be available to the public, shaming them. And fourth, we would motivate public pressure for governments to act. Now, the ambient protective effect may exist. It may not. Scientifically, at this point, based on the limited data we have, I can say that to date there is actually no evidence that an ambient protective effect exists in the case of deployment of information in conflict to somehow put a ring of fire around a population. In fact, in cases such as Ebola, in cases such as satellite deployment in Darfur, we are seeing the beginning of qualitative and quantitative evidence that we may be mutating these complex, dangerous environments in ways we don't understand. So that's why we did something radical. We stopped. We didn't know what we were doing. Not in terms of satellites, not in terms of, quote, humanitarianism, but we didn't know some fundamental things about law, about rights, and about technical standards and our duty of care responsibility. So, I love this image. I could show you this image all day, and I'm being respectful of time here. This is a family that has gotten out of the Mediterranean fleeing Syria. It is a boy with a tablet taking a picture of his family as a proof-of-life photo. If you wanna know the world we live in right now, this is it. And it's being marked by the impact of digitally networked displaced populations and diasporas, fundamentally changing the nature of how emergencies unfold, how information is transmitted, and what information itself may be now. How many of you have used WhatsApp?
Okay, the largest app sale in history, I believe $16 billion, to Facebook, right? Get ready to have your minds blown. The plurality of active daily users at the time of the sale were people running for their lives without a home. You had the largest refugee population since World War II help capitalize the user base of one of the biggest app sales in history. That's where we live now. Anyone know who these guys are? Anyone watch Netflix? These are the White Helmets. They are humanitarian actors in Syria. It appears they're being targeted by misinformation campaigns, including potentially botnets. So the question is, what does humanitarian space, in terms of Henri Dunant, right, mean now? Is there such a thing as humanitarian cyberspace? We can debate it, we can have a symposium on it, but the international humanitarian law doctrine that you learn here at Harvard Law doesn't give you a clear answer. So now we head into the homestretch. The radical work that we are doing at the Signal Program is to suggest that information is now aid. In terms of the Universal Declaration of Human Rights, we often think about information as freedom of speech, as opinion. But what happens when it's not about Article 19, freedom of opinion and expression, but it's about Article 3? When it's about life, liberty, and security of person. These men are in Hungary. They're Syrian men. They're looking to get into Western Europe. They're using phones to try to find their way. They're engaged in information networks with other displaced people. We are studying how this is happening. We call it teledemography. Our colleague Danny Poole, who's a PhD student at the Harvard School of Public Health, did breakthrough research this past January in the Ritsona camp in Greece, looking at gender and economic disparities in cell phone use and access between men and women, and began to correlate that with rates of depression.
The science of the future in humanitarian response is understanding information as a basic need necessary for the sustainment of life. Freak out, okay? Because that fundamentally is changing how we think about the UDHR, how we think about Geneva, how we think about information no longer as expression but as survival. The world you all inherit as lawyers will be informed by a debate about what I just said. So, wrapping up. Last year we took the no innovation challenge. We decided as a lab to stop innovating. We realized the big innovation wasn't technology, it was rights. And that we were doing things that were fundamentally experimental in nature, fundamentally, in many cases, extralegal, without clarity about our assumptions and aspirations for the tech, doing things with populations that might win us a Nobel Peace Prize in Africa but could get us arrested in Cambridge. And so the question was, how were we looking at stuff? The promise of technology came first, and we started looking down the wrong end of the telescope, right, and it shrunk the people we were looking at. The actual affected population became smaller. We decided to flip it. What happens if you look down the other end of the telescope with rights, then standards and ethics, then think about how you operationalize it, then think about what technology can do? The people come into focus. So out of this process in the no innovation challenge, we developed the Signal Code, available at a store near you, signalcode.org, or wherever podcasts are sold. No, I heard that on NPR during the pledge drive. I always wanted to say that. Okay, so in the Signal Code, we identified five rights. And these are not new rights; we're translating rights that already exist. That there is a right to information as humanitarian assistance. That there's a right to protection in how information is provided during disaster.
That there is a right to a basic standard of privacy and data security. That people have a right to data agency, and that means meaningful consent. It means informed notification, informed participation. Not just consent, because sometimes consent doesn't apply. And then we found that there's a right to redress and rectification when data is wrong or when it causes harm. And all of these rights are displayed in a loop because they're interconnected. You can't have one right without the others. You can't have redress if you don't have privacy. You can't have information if you don't have agency. So what's the elephant in the room? The big question in the field right now is, do we need a fifth Geneva Convention for the age of cyber warfare? Well, this guy, Brad Smith, the president and chief legal officer of Microsoft, said in January in a blog post that really came out of left field that it was time for a fifth Geneva Convention and that corporations should be at the table. Where we are going now, I'm not gonna give you an answer on the fifth Geneva Convention, but I'm gonna show you an iPhone case of Picasso's Guernica. Where we are going now is into either a golden age of new law and new regulation, or a post-normative world: the disintegration of the normative frameworks forged from the 19th century into the 20th, out of World War II, and a time when, because of data, the word humanitarianism may not mean anything anymore. What I can tell you is that we must answer these questions. What rights apply? How are they enshrined? What is a cyber war crime? Is it different from a war crime? Because the Guernicas of today and tomorrow will happen on the battlefield, but they will also happen on social networks and on cell phones. So we are at a pivot point, much like Henri Dunant, much like Eleanor Roosevelt and the drafters of the UDHR after World War II. We have a choice.
As we face what I call the Third World War, which is a war on trust, as we saw in the 2016 election, our institutions and our norms are under attack. The question is, do we develop new norms to support them, and how, and who gets to do that? And what's the consequence if we don't? You will decide. Thank you.

When you put up the picture of Biloxi, you said the problem was differential access to ICT services, so that basically the population that didn't have the cell phone access at the time wasn't being served because nobody knew they were there. Then you put up the map of Kenya and you said there was a similar issue that resulted from that observatory project, but I didn't quite understand how. Was that again a matter of basically the attention going where the ICT was, and places that didn't have the ICT not getting attention, so the observatory's work wasn't doing what it intended to do?

That's a very good question. I didn't make that as clear as I should have. I put up Ushahidi as trying to express the past 10 years of practice by voluntary technical organizations in crowdsourced mapping in one slide, which is never advised. The importance of the Ushahidi slide is that it was like Chuck Yeager breaking the sound barrier in the Bell X-1. That was the Chuck Yeager moment for crisis mapping, and for good and for bad it set off the idea that somehow the challenge for humanitarians was how do we connect to the information ecosystem of the affected population, and that that was inherently a positive, that we were not exposing them to new threats, we were mitigating old ones.

Thanks. So what was the problem in Kenya?

The problem in Kenya was the question of what it means, and we faced this in Sudan and South Sudan, when you put out information for supposed blanket warning or documentation: how are you also providing actionable intel to the perpetrators?
And we're seeing this again and again in the case of Syria right now, which is anecdotal, suppositional evidence that putting out a barrel bombing, for example, on Twitter may change the tactics, techniques, and procedures of the perpetrator. We can even step back and think about how providing that information can increase the speed and ease of demographic targeting. And in the old days of human rights abuses, you had to drive out and see the population in the burned village. But as I say all the time, new ways of seeing create new ways of being blind. There are ways we can be gamed through false incidents, and that has happened. And there are ways in which, as we saw unfortunately with anecdotal evidence in Libya, populations crying for help with no organized way to assist them can cause that population to get hit before anyone can get there.

So Natty, real quick, with Ushahidi too, there was a big component of that that was voluntary spontaneous reporting. Yes. Could you put the mic up to your mouth? Oh, great. So a big component of Ushahidi was that there was a spontaneous reporting component to it. It relied on volunteers who would then use their cell phones to log incidents. So it did require connectivity. So there is digital invisibility potential built into the product from the get-go.

Absolutely, and so we're talking about what happens when you deploy modalities where the response is contingent on the connectivity of a population, and what we call digital distortion distorts your operational picture of the population. Okay, some other folks, yep.

Hi, thanks for the talk. I also work in the international development and humanitarian response sector, and I appreciated your non-innovation challenge, because there's a lot of rhetoric in the sector that it's not all about the technology and we shouldn't have ICT-driven solutions.
Yet donors and NGOs are constantly responding to or initiating technology-driven challenges and making pilots that don't go anywhere or are irresponsibly designed. So my question to you is, what is it gonna take for the international development and humanitarian response community to really embrace your thesis, which is to invert that funnel that you have and take a different approach?

Yeah, and that's a great question. For me it's so frustrating, because all we're saying is go back to the future. In terms of this new technology, we can't be techno-exceptionalist. We must integrate it the way we did Plumpy'Nut, the way we did any feeding or nutritional program. There's nothing different here because it has electricity and ones and zeros. We must start with rights, go to our ethical obligations, then the minimum technical standards. To answer your question directly, back to the slide I showed of Goma: what it's gonna take is a digital Goma, and we have little ones happening all the time. But due to the political economy you talked about, where there is an incentive for us to engage in the innovation narrative, to chase the data philanthropy, the donor-funded partnership, the idea of responder-centric efficiency, or the idea of an ambient protective or ambient assistance effect on a population, without any evidence, by the way, until it blows up and there's a cost, we're probably gonna keep doing it, because the money's coming. The fact of the matter is that as we do it, we're not asking the critical questions. We're engaging in what Evgeny Morozov calls solutionism, the idea that we just have to find the right technical solution for the problem. We're at a moment where the number of non-permissive environments that we're operating in due to conflict is spiraling at a rate we have not seen in a while. We are dealing, as I said before, with the largest displacement since World War II.
So the issue is, these are in many cases political problems, problems of political will, and we're trying to find a silver bullet in ones and zeros: that if we just, hey, if we just have artificial intelligence, we can change war, injustice, and suffering. The fact of the matter is that that is a very dangerous line to take when it may be undermining the international humanitarian law pillars. Back to: what does an impartial server look like? What does a neutral data philanthropy deployment look like? How do we assess need for a population as it relates to information? I think it's gonna come down to, and it's beginning to happen, there are leaders in UN OCHA, the UN Office for the Coordination of Humanitarian Affairs, and there are the beginnings of voices on the donor level saying, okay, we are at a new moment. The question is, can we get there without a tragedy equal to Goma? I don't know. Yes, sir.

Great talk. So I work in cybersecurity, but from a completely different, national and international, point of view. You used this term cyber warfare in one of your last few slides. Could you explain a little bit about how you're conceptualizing this idea? There are so many different ways to think about this, right? Because many of these conflicts are happening in zones where governments are involved in fighting their own people in many cases. And I'm trying to understand what you mean by cybersecurity, and how do we get to that point? Because unless we can understand, what are the limitations? I mean, the technical barriers, for instance: in many of these places, if people don't have cell phone access, it's because the first thing an opposing government does is try to go after the technical systems, the wireless systems.

So great, great question. And I'm going to respond with two questions.
I'm not going to fundamentally answer your question, because we don't yet doctrinally have an answer to it, and I was just out in San Francisco with a lot of folks who were thinking really hard about this, including the president of the International Committee of the Red Cross. Where we are now, to answer your question, we have to do Zen archery, not aim at the target yet. Which is, and this is to all of you, is there a lawyer in the house? What we need now is an official commentary on how current international humanitarian law answers your question. And there will be multiple answers to that question. In some cases, cyber warfare is just warfare. In some cases, in misinformation campaigns, it may be something else. And in other cases, we may be looking at combinations of misinformation and cyber kinetics, aka cyber modalities being used to neutralize certain systems. But the underlying issue here is two things. One, we need that commentary on what we have now in terms of how it applies to the laws of warfare. But the other part, included in that and separate, is a theory of harm. And so we say over and over again that we don't yet have a theory of harm about what these technologies do in certain environments to certain populations, to the point where we can fully articulate the legal duty of care. So the trick is commentary on current IHL, combined with research to show evidence on a current theory of harm to inform that interpretation through the commentary. Not the most exciting answer, but the issue is, all the time I have cybersecurity folks saying, hey, just tell us, we can pop in, we can do it. And the issue is, to begin with, we need to answer some core questions. What is protected infrastructure, especially when the same cell grid for a hospital may be the same cell grid for anti-aircraft? Those are the types of issues we have to figure out. Next question. I've answered all the questions in the world, yes.
Hi, thank you for the talk, is this working? I was wondering, when you were showing the lenses and flipping them to start with rights, what's your take on the cultural approach to those rights, and different perspectives on those cultural approaches to those rights? And my second question was about the speed of creating these policies and reaching consensus, considering that technology is changing so fast. So I feel like we're always in a race against time. Thank you.

Both great questions. Fundamentally, and we can engage in a larger critique of humanitarianism, it is rooted in a rights-based approach that's fundamentally a Western normative framework stemming from the UDHR and Western traditions, including the Geneva Conventions. So that exists, that's a fact. The question here, as we begin to think about the next generation of digital and data age related rights, is really not just what are the rights, but what is the table, and who is going to be allowed to gather around it? And I quote him all the time, I'll quote him again now: Frederick Douglass said power concedes nothing without a demand. We are now at a very interesting moment where, from places in Syria back to the WhatsApp example, from Syria to the 2016 election, from the Rohingya to Harvey and Irma in Texas and Florida, we have people being affected by the absence of moment-specific regulation and legal interpretation as it relates to data equity, as it relates to digital rights. And so on one hand, we have a high potentiality for a movement popping up in a bunch of different places to catalyze into a larger global demand for this type of regulation. On the other hand, it's happening in many balkanized ways, and the question is who, and to what degree, should catalyze that? I am just an academic. But on the underlying question here about the speed of technological change, I think we often get lost in that mirage.
The more we study the rights issues at the Signal Program on Human Security and Technology, the more we realize: don't get taken in by the magician's magic trick. I find technology increasingly boring. And what I mean by that is that we're looking at, yes, constant disruption, to the point where disruption, the absence of normal, is the normal, right? Based on the speed of innovation. But there's another theory I would lay on the table, which is: we can choose to get caught in the maelstrom of disruption, or there's another choice, in what we call absorption. In business theory, absorptive capacity is the capacity by which systems, in this case firms, take new information and incorporate it into their decision-making on new products and new markets; they've been thinking about how firms do that for 30 years. Well, now the question is, what is the absorptive capacity that societies need to absorb disruption in a way that is not traumatic for norms and the things that norms and institutions protect? So the key: we're never gonna slow down technological innovation. That's not the point. The point is beginning to get a science and a theory of the absorptive capacity required by societies, not just firms, to begin to absorb disruption in a normative way. If we don't, the disruption will put us in a reactive posture forever, and the expense of that will be our normative frameworks. Last question, gentleman in the back.

Thank you so much for this presentation. I'm just curious whether the challenges in the situation in Puerto Rico have created any new technology or anything like that. And you know the challenges. You're talking about people in the mountains with no electricity, all the towers down or anything like that, and no connection, road-wise and bridge-wise and things like that. So have we come up with anything that will help the situation out there?

The best innovation I've seen in humanitarian response has come from the frontline national staff.
It has come from the people themselves, in many cases occurring separately from what's happening at HQ or academic institutions, in terms of connectivity, in terms of being able to see and know and talk to populations in non-permissive environments. When I was in Pakistan and Tajikistan in 2001, after 9/11, I literally had a suitcase this big for our sat phone. It was a $10,000 deposit and carrying around what was like a tuba. And now I can have the same capacity here, and this capacity is rapidly becoming more and more available. But I want to caution, we have focused too much on that being the thing. So back to my point on teledemography. That is only as good as our ability to see the gaps. And right now, we actually don't have very good science on how tech does and does not affect the humanitarian status of a population. So we're designing basically prescriptive intervention without a science for diagnosis. And so yes, we have lots of stuff, and the stuff is giving birth to more and more stuff, stuff upon stuff, turtles all the way down. But we need a science so that we're not a slave to the stuff, so that we have an organizing theory by which we triage what stuff to design. Man in the turtleneck in the back. Sure, and I'm just gonna add to that that there's also perhaps an irony in that, at least anecdotally, we're hearing one of the best technologies is about a hundred years old: ham radio. Shortwave is proving to be one of the better ways to connect with the islands when they get cut off. So yeah, the takeaway is everyone get your ham radio license today. Okay, we have a little more time if you have any more questions or comments. I have a question, moderator's privilege over here. Yeah, please. I guess my questions are around transparency and trust.
Humanitarianism is a really sensitive topic, because you're going out and doing some of the work that we wish we could do, while instead we may be here at a presentation on it, and maybe donating money and giving it with that hope. So for instance, just anecdotally, I'm from Houston, I grew up there, and I live here now. During Hurricane Harvey, the network age that we live in was amazing in how quickly information could get out. If someone needed to be rescued from a home, you could send that information right away to a number of organizations on the ground that were active. But there was a lot of skepticism among folks who wanted to give money to larger organizations, or to donate to or support the Red Cross, for instance, because they didn't know how that money would actually get to folks on the ground, and I'm seeing it more and more. I don't know how widespread this may be, but I was just curious about your take on transparency, especially as we move into this information age, or have already arrived there, with more and more information. How do we know how it's being used? What role do you think humanitarian organizations have in making sure they remain transparent, and what does that look like with more and more technology? Great question, and go Astros, by the way. Thank you. So this is a rare opportunity, I get to say two things. Academic research institutions have never been more important, and I'm not just tooting my own horn. Right now, the service we are providing through the creation of generalizable knowledge is a capacity that is not available within humanitarian organizations: developing the criteria by which their use of information should be held to account and should be transparent. They're not going to develop that themselves.
It really falls to institutions like Harvard and our colleague institutions, playing a participatory action research role; accountability and transparency would be worse if we weren't engaged. But the second thing is that you here at Harvard Law have an incredible opportunity. I frankly think we need coders and hackers significantly less than we need lawyers. We desperately need you all in this august institution, and your colleagues, to begin to articulate the problems and the gaps in where we live now. So back to accountability. You had so much good stuff in your question, and you touched on another thing: our colleague Feng Greenwood, who is here with us today, was just down in Harvey and Irma looking at the use of unmanned aerial systems, aka drones. We're looking at environments where traditional big-organization response is now contending with the voluntary technical organization, but also with spontaneous responses from community-based organizations. And that's always been part of humanitarian response, that's always been part of humanitarianism, but now there is, here's the disruption word, a disruption in the traditional humanitarian business model, where those CBOs and those spontaneous responses, as we saw in places like Houston and elsewhere, can now reach audiences, engage donors, and communicate with beneficiaries in ways they couldn't before. There are good parts to that, parts that challenge the big organizations to do better, to compete and to partner.
There are dangers, because now we are entering an environment where, increasingly because of technology, the centrality of professional standards and ethics for that accountability has been degraded. It's been degraded through this rapid explosion of some people engaged in humanitarian response who don't know anything about the principles, don't know anything about Geneva, et cetera. And you may say, great, that's disruptive. But the question is, how do we ensure equity in needs-based response? So it goes back to Biloxi. In the white community, I saw organizations everywhere, and they were there helping the churches, and that was great, but they weren't there helping based on the need of the whole population. So the real challenge to accountability is to keep aid needs-based, right? Because it's when we delink from that that we have gone to a very bad place. Well, thank you, and get the cookies while they last. Thank you so much, Nathaniel.