Good morning, everyone. My name is Braxton Bridgers, and I'm a millennial public policy fellow working with the research security program here at New America. On behalf of my cohort and program, thank you for attending our spring symposium. A quick bit of housekeeping before we begin. Directly after this session, we will have lunch provided. I'm really excited about that. I don't know about you guys. So please feel free to grab some food and mix and mingle with us millennial fellows and New America staff. My cohort and I are really excited to host discussions around some of the most pressing issues that our generation and future generations are expected to face. One such issue is identifying the role of big data and technology in the policymaking process. Whether it be building government service platforms with a user-centered design or analyzing trends in user behavior on social media platforms to counter violent and extremist rhetoric, both big data and technology have the ability to spark the creation of innovative and effective policies. During this session, our expert panelists will explore ways in which data and technology can be used to strengthen policy, as well as examine the challenges associated with data- and technology-driven policymaking. We'll have conversations in three rounds. Our first discussion features millennial fellow Emma Coleman in conversation with public interest technology and Open Technology Institute fellow Dipayan Ghosh. Emma and Dipayan will discuss the utility of having technologists at the policymaking table, drawing from Dipayan's experience as a technologist in the public policy space. Next, millennial fellow Spandana Singh and New America International Security Fellow Evanna Hu will discuss the challenges in countering violent extremism online. 
And last but not least, millennial fellow Dylan Rosine will host a conversation with New America Cyber Security Fellow Robert Lord and public interest technology fellow Sonia Sokhar around the benefits and risks of healthcare data, with a focus on patient privacy and security. But before we dive into these important conversations, Cecilia Muñoz, Vice President of Public Interest Technology at New America, will provide remarks regarding the role of data and technology in policy. Before joining New America, Cecilia served on President Obama's senior staff, first as Director of Intergovernmental Affairs for three years, followed by five years as Director of the Domestic Policy Council. Prior to her work in government she served for 20 years at the National Council of La Raza, now UnidosUS, the nation's largest Hispanic policy and advocacy organization, where she was Senior Vice President for the Office of Research, Advocacy and Legislation. She received a MacArthur Fellowship in 2000 for her work on immigration and civil rights, and serves on the boards of the Open Society and Kresge Foundations, as well as the non-profit United to Protect Democracy. Please join me in welcoming to the stage Cecilia Muñoz. Thank you very much, Braxton. Still morning, right? Good morning everybody. I'm really excited to do this. I'm really excited to be here, in part because I'm pretty thrilled about the Millennial Fellows Program here at New America. Emma Coleman, who you will hear from in a moment, has been working in particular with the public interest tech team. But as I have gotten to know the fellows and the work that they're doing across our components, it's just quite an extraordinary group of people and, more importantly, an inspiring group of people. And I am just one of many across the institution who have been really inspired by their presence here and the work that we get to do together. So thank you for that. And thank you, all of you, for being here. 
So as you heard, I'm a civil rights person. I spent 20 years in the civil rights movement before I went into government. And it's the combination of those experiences which brought me to doing public interest technology at New America. And I'm just going to outline that for a minute to sort of set up the panels which are going to come. But that context is really relevant, because I don't start out thinking of myself as a technologist. That's something I've discovered as I have learned about how this works, and I learned about it from the experience of being in government. At the Domestic Policy Council, I sat at the intersection between the sort of policy nerds, which is what I am, the policy teams working on a variety of issues, solving a variety of different problems, and this thing that we created, the U.S. Digital Service, where we brought in hotshot technologists from Silicon Valley. We recruited them to do two-year tours of duty, sort of Peace Corps style, to help us in the work of solving public problems. And I got to help place these tech teams, which seemed to have magical powers because they knew stuff that I didn't feel like I knew, with policy nerds like me, who didn't really get why we needed to sit down with a product developer and an engineer. And when you got these different kinds of problem solvers sitting at a table together trying to solve problems, it was not just magical to sort of watch it happen, it was transformative. And the insight that I picked up from that experience, coming from the world that I come from, is that the civil rights world that I come from, the NGO world that many of us support and that we are indeed part of here at New America, we need the same capacity. We need those same transformative skills to solve the public problems that we try to solve. 
So I came out of that experience feeling like the way in the future that we're going to be protecting voting rights, the way that we're going to be solving homelessness, the way we're going to be addressing income disparities in the future, all of those are going to involve data and all of those are going to involve technology. And in most cases we just haven't figured out how that's going to work yet. But we must, because we count on the institutions of civil society in this country to help us identify when disparities are happening and to help us identify what to do about them, right? That's what the civil rights world is about, that's what it's for. We pass statutes that give us the tools to hopefully get in front of discrimination before it happens, but certainly to address it after it happens. Those tools are imperfect tools, but they are sacred, they're important. The way we protect voting rights mostly in this country is through litigation on the Voting Rights Act, which was passed 50 years ago and which has been weakened by the Supreme Court recently. And it was never a perfect tool, and it's not going to become a more perfect tool in the future. We need additional tools. And we live at a time in which technology is transforming everything about the way that we live and work. We're in this moment where we're in conversations about the future of work. We are in conversations about the strength and the capacity of the platforms that probably everybody in this room uses to advance our connection to people, but also to potentially undermine and corrode some of the institutions of our democracy. Everything is on the table, everything is changing. And the question that we're asking here at New America, and with partners around the country, is: how do we make sure we are leveraging those tools to solve our public problems? 
To make sure that they are in the hands of the institutions that we count on to identify when there's a problem and to help us steer toward solutions to those problems. So just to give you one of the many negative examples that are circulating: we know, for example, that we can perpetuate some of our biases through algorithms, some of the things which create inequalities and lead to disparities, and that we may end up creating structures which perpetuate some of the things which have created inequities in this country. Again, we count on civil rights institutions, we count on the institutions of civil society, to help us figure out when that's happening and to help us fix it, or, better still, to help us get in front of it before it happens. The civil rights institutions largely are not living and working and operating in a tech environment. They don't necessarily have those skill sets on their teams. They are thinking about the problems, but they're not necessarily leveraging the right skill sets to help get in front of them and to help us resolve them. That's something that we want to change by building tech capacity into that work. But I also want to make sure to give you a positive example, because we are starting to go down the road of thinking about this in terms of the potential dangers, and those are very real, and it's important to understand what they are. But it is also true that we can be leveraging these same tools to solve public problems. So, a modest example from the city of New Orleans, which leveraged data. They figured out that you can use data to predict where and when a house fire is likely to happen, in the city of New Orleans or anywhere in this country. You can figure that out through zoning, through the kinds of buildings that you have, and through past history of house fires. 
There is data that you can leverage to figure out which parts of the city are more susceptible to a house fire, which is not only costly in terms of property but costs us lives. And by using that data, they transformed what fire departments across the country usually do, which is outreach to get people smoke detectors. So the typical way of doing outreach is to kind of get out there, use access to media, go out and speak to schools, hopefully try to find the people who need this information and who need smoke detectors, and hopefully get them into the hands of the right folks. But by using data, the fire department was able to target neighborhoods where deadly house fires were more likely to happen. And what did they do? Their outreach strategy was to go knock on those doors and help people install the smoke detectors. And in the first quarter after instituting that policy, they were able to prevent what could have been a deadly house fire. They got the information, the smoke detector worked, and they were able to stop the fire before people died. So we can be leveraging data and technology affirmatively to get in front of problems before they happen as well. There is as much positive potential as there is negative potential, but the thing we have to do is make sure that there are technologists working in these fields, working in city governments, working for the federal government, and working on the teams with NGOs. There are lots of good people in the tech world who are smart. They are problem solvers, they are thinkers, and they believe they have the solutions to potentially every problem, and maybe they do. But what I can tell you for sure is that they don't always fully understand the problems that they would be seeking to solve. Right? There are lots of people who think you can build an app that can solve any problem. I'm pretty sure that's not how we're going to solve our problems. 
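The targeting approach described above, scoring neighborhoods on fire-risk factors and sending door-knocking teams to the highest-risk areas first, can be sketched in a few lines of Python. Every feature, weight, and data point below is hypothetical and purely for illustration; this is not the model New Orleans actually used.

```python
# Illustrative sketch of data-driven outreach targeting: score each
# neighborhood on fire-risk factors, then rank them so door-knocking
# teams visit the highest-risk areas first. All features, weights,
# and numbers here are made up for illustration.

# Hypothetical per-neighborhood features:
# (house fires in past 5 years, share of older housing, share of homes without detectors)
neighborhoods = {
    "Neighborhood A": (12, 0.80, 0.45),
    "Neighborhood B": (3, 0.30, 0.20),
    "Neighborhood C": (9, 0.65, 0.50),
    "Neighborhood D": (1, 0.10, 0.05),
}

def risk_score(past_fires, old_housing_share, no_detector_share):
    """Weighted sum of risk factors; the weights are invented for this sketch."""
    return 0.5 * past_fires + 4.0 * old_housing_share + 6.0 * no_detector_share

# Rank neighborhoods from highest to lowest estimated risk.
ranked = sorted(neighborhoods, key=lambda n: risk_score(*neighborhoods[n]), reverse=True)

print(ranked)  # → ['Neighborhood A', 'Neighborhood C', 'Neighborhood B', 'Neighborhood D']
```

A real deployment would replace the hand-set weights with a model trained on historical fire data, but the outreach logic, rank and visit the top of the list first, stays the same.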
But I have witnessed what happens when you sit technologists down with the doers at the Department of Education to create policies that are transformative, and tools that have the potential to transform the way people access higher education in this country. I've seen that happen, and I have seen the light in the eyes of technologists who realize how much meaning there is in public service and how extraordinarily rewarding it is to be tackling some of the big public problems that we face. So what we are hoping to create here is a field of public interest technology, so that somebody who is deciding how they're going to pursue a career, who is interested in data, maybe interested in engineering, can be thinking about getting that training so that they can go solve homelessness, or getting that training so that they can be addressing disparities in the healthcare system. That's the world that we're trying to create here. I feel incredibly lucky to have been able to bring that passion to building a team of public interest technologists here at New America, and a team that's working to build a field of public interest technology. You're going to meet some members of that team, including our millennial fellow, Emma Coleman, in some really fascinating and important discussions about how we leverage these tools, what kind of public problems we need to be attacking, and how we're going to be building this world together. So I couldn't be more excited to have all of you here, and on with the conversation. Thank you. Well, thank you for that introduction, Cecilia, and thank you, Braxton, as well. Hi everyone, my name is Emma Coleman and I'm the millennial fellow in the public interest technology program. I'm joined today on stage by Dipayan Ghosh, who is both a public interest technology fellow and an Open Technology Institute fellow as well. And he focuses mainly on privacy, security, and civil rights policy. 
So before Dipayan came to New America, he worked at both Facebook and at the White House as a technology policy advisor. And his most recent work has been Digital Deceit, examining the technologies behind precision propaganda and political disinformation on the internet, in which he explored election meddling and disinformation campaigns across social media. So thank you for being here, Dipayan. Thank you so much. Yeah, so I'd love to start off our conversation. You've worked in this space for a long time. Why is there an urgency now to incorporate technologists into the policymaking process? Well, I think, as Cecilia was just saying, we are seeing technology really come to the fore across society. When we are trying to read the latest news, we turn to our phone. When we try to hail a cab, we turn to Uber or Lyft. When we try to find the nearest gas station, we're turning to an app again. And so it has created so much opportunity, economic opportunity, across the board, and has created the platform that we really turn to every day, for every single thing that we try to do. But I think at the same time, as Cecilia was also saying, the use of technology across society has really raised a lot of concerns, and has raised a lot of really difficult tensions that we need to navigate in the way forward. And so I think technologists have to increasingly be at the table, both to figure out ways to leverage technology to its highest potential, but also to help remind us that there is a darker side, an underbelly, that we need to address and really try to work around, so that many of the tensions that we've seen thus far can be avoided in the future. And I know that you've worked in several capacities as an advisor on these types of things, and now you're doing more research and writing. What would be the best way to go about incorporating technologists? Is it to bring them into elected officials' offices? 
Is it to teach elected officials how to even understand some of the more complicated elements of the technology that we have now, such as machine learning or algorithmic discrimination or anything like that? I think it's all of it. I think you raise a couple of really great examples. We have the TechCongress initiative, which aims to actually place smart technologists into offices on the Hill, and we have several colleagues of ours that are doing exactly that in leading offices on the Hill. That is important, because we saw just the other day a set of hearings where I think members tried their best to understand all the issues at hand. But of course sometimes there are gaps, and they are not technologists all the time; in fact, very few are technologists. It helps to have a staffer who is a technologist, who can raise these important issues in the right way, and can understand these issues and navigate around them. So I think the goal is really to bring that expertise closer to the decision-making table. Now, everybody has constraints. Of course members on the Hill have constraints: they have a set budget and they can't hire everyone in the world. And of course I'm sure they would love to have a civil rights expert and a technology expert and a health policy expert, and many of those are roles that are filled in many offices. But everybody has constraints, so I think we just need to figure out ways to bring technologists as close to the decision-making process as possible, especially given technology's really central role across business functions and our livelihoods and our social life. And I know now you are working on an anthology around machine learning to sort of educate policymakers. Could you talk a little bit more about that? Yeah, this is really about one of the issues that Cecilia had mentioned, that of algorithmic discrimination and algorithmic bias, and, as a related issue, addressing ethics as artificial intelligence comes to the fore. 
Algorithms are used across society, in almost every app that we use, in every decision-making process that the business world engages in, whether it's determinations of creditworthiness, or routing a car through traffic most effectively, or making decisions about federal housing and public policy. These are all decisions that are increasingly driven by algorithms. And as artificial intelligence comes to the fore, those algorithms are getting smarter and smarter, and being trained on more and more historical data, to drive basically the corporate sector and the government, whoever is housing those algorithms and trying to implement them to greater efficiency, that is, to greater profit. What we need to make sure, though, is of course that as those algorithms are being developed and implemented, we stay grounded and we respect civil rights law, we respect ethical frameworks, whatever the context might be, and don't make decisions that can lead to enormous disparities across society in ways that can really be damaging and hurtful to many, many people. Because that has happened in the past, it really needs to stop, and I think we need to bring ethics into the development of algorithms in a much bigger, much more concerted way, starting from that decision-making table. And so this anthology is really about trying to raise some of the biggest ideas across the country and around the world in this area, and give experts a voice as to how artificial intelligence and algorithms raise ethical concerns in different spaces, and what can be done about those issues. And your other recent paper, Digital Deceit, looks at the tools that are used to spread disinformation. You suggest the resulting ad tech policy should center on three main areas: election law, privacy regulation, and consumer protection law. When you have these global platforms like Facebook, and you're asking for these sorts of changes, how do policy changes in one country affect such a broad problem? 
Does it need to be that technologists across the world, or in multiple countries, are coming together to work on these policies? Yeah, absolutely, you're spot on in that analysis. These are global platforms, they raise global societal tensions, and so we need global collaboration to address those problems. I think in this case, with that paper, which I collaborated on with Ben Scott, a senior advisor at New America, our goal is really to make one central point, which is that the leading digital platforms of the day, whether it's Facebook or Google or Twitter or any of the other leading internet companies, are really premised on one function, which contributes to more than 95 percent of their revenue: targeted advertising. It's a space that doesn't really have much regulation right now in the United States. We have the Federal Trade Commission, which is charged to try to police it, but they don't have much authority in the space. And there aren't any other federal agencies that have much authority, and so the industry has almost grown, like a plant through photosynthesis, toward the light of the practices that can bring them the highest profit margin. And that whole business model is really premised on creating extremely compelling services that are borderline addictive, collecting as much data as possible about individual people through those services, and creating opaque algorithms to try to target ads at those people. And because there's no regulation in this space, they've grown in this direction in a huge way. For example, Facebook's revenues 10 years ago were three billion dollars, and today they're forty-three billion dollars. And around that time, a few years ago, is when they introduced all their tremendous ad targeting technologies. I think that the solution really lies in understanding that business model, both here and around the world. 
And trying to address it by listening to the global community's concerns. And that's something that the industry is trying to do more and more. But it really lies in global collaboration, with Europe, with Japan and Korea, with India, with Brazil and Argentina, as they also consider data protection regimes. There is a big conversation that's starting now on Capitol Hill about a data protection regime here as well. And I think moving forward this will really require global collaboration, because regulation in the United States does not necessarily mean regulation in India or Brazil. And these are global platforms, they're not just U.S. companies anymore. And as you mentioned, Facebook has seen incredible growth over the past 10 years that, you know, no one would have necessarily expected at the beginning. How do we end up creating policies that are agile enough to adjust for how fast new services tend to come on the market? I love this question, because in the U.S. we actually have a very rigid regulatory regime. The two agencies that oversee tech and telecom are the Federal Communications Commission and the Federal Trade Commission. And there is almost a waterfall effect where, depending on who is in power, Democrats or Republicans, the regulatory regime swings to one side or the other. When the politics changes it moves back to step one, and when it changes again it moves back to step two. There's really no certainty. There's no consistency. And we just really need more agile regulation that can address the harms and the benefits that the sector generates and creates. I think it will require more advocacy, it will require more events like the Cambridge Analytica news. Frankly, it will require public sentiment shifting to understand that this is how the business model works, and this is how it can create tensions across society. 
And as such, we need to rethink the way that we oversee the industry. Now, that's not to say that we need to implement strict and stringent regulations against everybody. But we do need to give the government a little bit more agency in understanding how the sector works and in addressing it. And how have you seen that change over time, both during your time at Facebook and during your time at the White House? Honestly, I haven't seen the regulatory regime change all that much. I think the biggest example of an attempt at changing it was with the net neutrality announcement. Briefly, net neutrality is the idea of treating every bit that travels over the Internet the same, so you can't engage in paid prioritization or blocking or throttling of content, and you need transparency on the Internet connection. Those are some of the elements of President Obama's idea for what net neutrality should look like. And what he actually advocated was that telecom providers should be reclassified in how the Federal Communications Commission considers them, as providers of a more utility-like service rather than an information service. And he advocated that the Federal Communications Commission's chairman at the time, Tom Wheeler, actually propose that they do get reclassified. That was rolled back under the Trump FCC pretty quickly. And that, I think, was the biggest attempt that we've seen at trying to legislate or regulate this. There have been a lot of legislative efforts, but they haven't really moved forward through committee either. So I see it as: on the Hill, it's a lot of advocacy trying to bring public sentiment a little bit further. And in the administration, you know, I see attempts one way or the other to try to have an impact on regulation. But I think, again, we just need to think about the public sentiment. 
There needs to be better public education on how technology works to get us over that hump of political gridlock. And how do you think the most recent big news stories around tech policy, so net neutrality and then Zuckerberg's testimony before Congress, have impacted public sentiment, and how will that in turn impact the regulatory side? Well, we just saw, I think in the past day, Facebook's revenue grow by 63 percent over the quarter. So, you know, I think public sentiment is still pretty high in general. It's almost like a filter bubble, as Cecilia was alluding to. There's a conversation that's happening amongst DC tech policy people, but there needs to be a broader sense of how things should change for the sector, and of how the government should be more agile in responding to it. I think that the public sentiment has shifted a little bit, but only a little bit in general, because we might think that it's shifted a lot based on everything that we read in the New York Times and everything that's being said in different forums about the sector. But I think the average American consumer still uses Facebook every day, and still will for the foreseeable future, and has a high affinity for all these platforms. As a matter of fact, I think it was just reported through a Wall Street Journal poll last week that most Americans think that we have enough regulation of the sector already, or that we have too much regulation of the sector. So I think it'll require a little bit more. And as we sort of create these policies and move forward, how do we create that collaboration between the public sector and the private sector, between the government and these large companies, when so often they're demonized in the news, for justifiable reasons, but still in a way where we need to have them in this collaborative process? 
Yeah, I mean, they are demonized in news reports. We have to remember, not to cast aspersions or anything, but we do have to remember that the business model of tech implicates the business model of journalism and news. And so there's a hard tension between those two industries right now, and so a lot of the reports that we might see might not always tell both sides of the story as well as they could. Many of them do, but many of them don't. And so I think we just need a better representation of technologists, and everybody in the sector, sorry, in the universe of this conversation, all the stakeholders at play, we need them at the table. Facebook, Google, Twitter, they are global platforms, and there are many minor and major implications of any public policy change that they might make, and we don't always see it unless we are, you know, working there or talking to folks who are there. It's often hard to see how a change in the company's privacy policy might implicate the global usership in ways that we might not have seen. And once we, you know, hopefully have these evolved policies that better protect consumers, what does the technologist's place in enforcement look like? Well, in enforcement we actually have a couple of good examples. One I'll just mention is the CTO at the Federal Trade Commission. We've had a series of really brilliant people in that role, from Harvard professors to Princeton professors, as the chief technology officer. 
Actually, I think in this case it was not the CTO; the title was actually the principal technologist, though I might be wrong about that. But some really amazing people have had that role, and it is an important role, because the Federal Trade Commission, which is headed by a chairman or chairwoman and has four other commissioners who all vote on a proceeding, or a regulation moving forward, or an enforcement action, for example, they are typically lawyers or policy experts, but don't have a broad or deep understanding of technology. And of course technology is defining a lot of the agency's priorities now, because of implications around privacy and security and communication. And so the principal technologist at the agency advises the commission and the chairperson, and that really helps the agency be incisive and understand industry practices to a much greater degree than it otherwise would be able to. And having that person internally really helps, because it allows them to trust the recommendations that are coming forward, as opposed to having a narrow influence actor telling the agency what it should or should not do. So I think that is one good example of how bringing a technologist into an agency or into an organization can drive a discussion as to what priorities it should take on. Well, thank you so much for your insight, and we'll welcome the next panel onto the stage. Thanks, everyone. Thank you so much for being here. My name is Spandana, and I'm a public policy fellow at New America's Open Technology Institute. I'm very excited to be joined here today by Evanna Hu. Evanna is a New America fellow in the international security program, and she's also the CEO and a partner at Omelas. Omelas is a machine learning and data analytics firm that seeks to automate, quantify, and standardize approaches to countering violent extremism, or CVE. 
Evanna has done a lot of really awesome work around the world, and it's kind of hard to condense it all, but just a couple of amazing points: she's interviewed a number of former extremists around the world, from groups like Hezbollah, ISIS, al-Qaeda, and the Taliban, and she's also participated in a number of de-radicalization programs with neo-Nazis in Scandinavia. So I'm really excited to be joined by her today, and today we're going to be talking about how we can improve the space of CVE by using data. To give you a little bit of background on this conversation: through the fellowship, I've been writing a paper that looks at the challenges involved when it comes to evaluating CVE approaches that are implemented by technology companies, and I've found that a lot of it is related to a lack of metrics, a lack of data, and a lack of clear definitions in the space. As you know, companies like Facebook and Twitter and Google have come under a lot of pressure over the last couple of years to take down content, and to do it more quickly, and this has sometimes translated into legislation. At the beginning of 2018, for example, Germany instituted the Network Enforcement Act, which mandates that companies have to remove harmful content, which includes extremist content, within 24 hours of it being flagged; otherwise they will face fines. And so there's a lot of growing pressure, but there's no real proof that these approaches work, and so this is sort of what we're going to be talking about today. So, to kick it off, Evanna: you've worked in the CVE space for a really long time, and you've worked at this intersection of CVE, data, and technology. Can you talk a little bit about the challenges that you face when it comes to evaluating CVE programs, and how Omelas has worked with this? 
Yeah, so I'll start with the challenges. The first is that it's really hard to measure the lack of something, right? Because CVE comes left of boom, right before stuff blows up. With the Pentagon, where your mission is lethality, you can say a drone took out 5 bad guys and 10 civilians, and because of that X, Y, Z happened with the network. But with CVE, it's really hard to say, oh yeah, because of the work that we did, this person decided not to join ISIS or Al-Qaeda or a neo-Nazi group, unless it's timed so well that you have managed to actually disrupt a plot that was happening, and then you can count the plot, right? So I think that's the first challenge. The second is the mentality in the space: government is not used to measuring anything, and when they do, they look at click-through rates, shares, and likes. I don't actually know what any of those things mean, because we don't know if they lead to any kind of change in behavior; they're just numbers. There's an interesting stat that 70% of people who share links on Facebook, with their friends or on their newsfeed, don't actually read the article; they just read the title. That also causes other problems, like fake news and all that stuff, but that's a different conversation. So the way that we're looking at it at Omelas is, we said, okay, how do you actually measure the radicalization level of groups of people? We really don't go down to the individual level, because people change and people are fundamentally irrational, especially when they're emotional, so we look at the group data. What we're saying is: right now, the way that these things are measured is someone's walking around with a clipboard in Kabul, or even within the Somali-American communities in the U.S.
and they're asking, how do you feel about joining al-Shabaab? Most people are not going to say, yes, I really want to join al-Shabaab, right? They're going to tell you what they think you want to hear, but that's basically lies. Then we started thinking: okay, online, people tend to be more truthful, and they're probably a bit more radical online than they are in real life, because there's a screen separating them from the real-world environment. But that probably gives us a better data source than walking around with a clipboard, or polling, or any of the other options that we have so far. So we started to look at: okay, what are the narratives that they're talking about? Can we actually start to match the online behaviors of people against the online behaviors of known violent extremists? These are people who, you know, we don't care about extremists; we care about the people who are going to become violent extremists. So we started to use machine learning and created this really cool algorithm that gives you a score at the end, and that's what we use to measure. So when you're sending out a counter-messaging campaign, or when you want to look at the local narratives that people are saying about the U.S.
or about ISIS or any of the groups, you can just use our dashboard and organically track what they're saying online.

And when we talk about metrics and definitions in the space, this is something that's come up in the conversation quite recently, over the last couple of years, but companies and organizations that work in CVE haven't really implemented it enough. Just as a couple of examples: this week Facebook put up a blog post that provided an update on their CVE and takedown work, and in it they said they were still working on defining their metrics. They're defining metrics for programs they've run for years now, and it's kind of concerning that they don't actually know how to measure this. Similarly, a couple of months ago some of the major internet platforms testified in DC about their CVE work, and the Twitter representative, asked about metrics, just gave a vague answer along the lines of, we're working on it. So it seems like they've spent a lot of money and a lot of resources to create these programs, but haven't really paid attention to what actually works. What do you think are the challenges associated with establishing metrics in this space? Why are there no metrics right now? Is it just that people don't care or haven't really thought about it, or are there actual hurdles to creating them, whether for taking down content or for CVE in general?

Okay. I think for a lot of the tech giants it's not really in their interest; they're basically meeting the threshold that the government has put in front of them. And speaking very frankly, if you have watched the Facebook hearings with Mark Zuckerberg, it's glaringly clear that our policymakers at the highest level don't understand technology. They don't understand technology, not even talking about AI, which got thrown around like I don't know what; they did not understand it. And the hearing actually missed the entire point of why
Mark Zuckerberg was called: that was a national security threat, and they made it into something completely different. So I think it's technical illiteracy on behalf of the policymakers, and because of that, the metrics that they're giving to these tech companies are just not sufficient. So it's not that metrics don't exist; they just really suck.

Amen to that. So when we think about companies and governments operating on unproven CVE programs, can you talk a little bit about the consequences this can have on people, both online and offline?

I think the worst consequence is backlash, meaning that CVE programs are typically done with very good intentions, implemented by people who really care, and yet there's a backlash effect that further radicalizes the exact group of people they're trying to help. DARPA did a study on the effectiveness of the Redirect Method done by Jigsaw, which was Google Ideas. This program has been written up by every single newspaper as the way forward in tackling online radicalization, but the study found that there was a backlash effect, and that it further radicalized people who received the content through Google. We don't really talk about that either. You see the same thing in places like Pakistan, and that's why a lot of successful CVE programs are not branded as CVE; they're branded under community resilience or agricultural livelihood or something completely different, even though the actual mission of the program is to decrease the number of push factors.

Definitely. And when you talk about content takedowns, a lot of the criticism is that running an unproven content takedown strategy can actually censor legitimate voices, and that pushes them to be further marginalized and radicalized. Actually, can I add something to that?
So it's interesting, because I actually sat on the same stage as the UK Home Secretary, who has been super gung-ho about taking down content, and I asked a question. I said, you know, taking down content is getting at the topical way we're trying to fix a problem, but it doesn't fix the underlying problems of why people in the UK are actually joining these groups. She didn't like the question very much. But I think taking down content also has the unintended consequence that human rights abuses and war crimes actually get scrubbed off the internet, because from an AI perspective they look the same. I have talked to the International Criminal Court, and I've talked to a lot of the non-profits that are gathering evidence against ISIS to try to prosecute them if a war tribunal actually happens, and their biggest concern is, you know, YouTube is taking down all the evidence, so what do we actually have now to prosecute anyone?

Definitely. A number of these platforms also focus on the big groups, the Islamic State and Al-Qaeda, but there's a lot of concern because these are obviously not the only extremist groups that exist. Some experts have said that by only focusing on these groups, and only strategically dedicating your resources to them, you're letting smaller groups gain a presence online and engage, and in a world where there's no ISIS or Al-Qaeda anymore, they would become ISIS 2.0 and ISIS 3.0. So can you talk a little bit more about that? Do you think companies should be more strategic in their approaches? Do you think they need to dedicate all their resources only to these big groups?

I think the big groups are easy, because there's not a lot of controversy about the fact that they're terrorist groups. But if you look at other groups, like the PKK: if you're Turkey, it's a terrorist group; for anyone else, it's just a guerrilla group, right?
And so I think it is much easier to focus on groups like ISIS, and to a certain extent some neo-Nazi material has been taken down, but yeah, they're definitely missing the boat, even with Al-Qaeda. With all this attention turned to ISIS for the past couple of years, what we have seen on our platform is that Al-Qaeda propaganda has risen significantly. Telegram channels for ISIS have been taken down, but for Al-Qaeda in Syria, Hay'at Tahrir al-Sham, any of their affiliates, it's like the wild west, because they know they're not going to be taken down.

Yeah, definitely. And so when we think about companies reporting data on their CVE work: when it comes to content takedowns, the main way they do this is through transparency reports or blog post updates, but when we talk about counter-narratives, like Jigsaw's project that you mentioned, there's not a lot of data disclosure going on in that realm. How do you think companies can improve on that? Do you think they should issue transparency reports for these kinds of programs as well, or do you think companies shouldn't be the main sources of data, and we should rely on things like Omelas instead to do that kind of evaluation?

I think the incentive structure is a little bit skewed right now. You can say, I've taken down 10,000 pieces of content or 10,000 accounts, but we don't really know what that means, because it could also just mean that ISIS immediately created another 15,000, right? So what's the actual environment like? I think you do need someone objective, who's not getting paid by either the government or the tech companies, to basically act as a special auditor, an independent audit, something like that. And then going back to Jigsaw: what's interesting to me is that until maybe a year or two ago, the most sophisticated metric we had that was widely accepted within counter-narratives was the click-through rate, and Jigsaw is Google's internal think tank
but if you're a client for Google advertising, like Expedia or anyone like that, and you walked in to the client with a click-through rate, they would literally laugh you out of the room. We have a lot of people who came from Google advertising working for my company, and they specifically said, if you don't have anything more sophisticated than, you know, the lifetime value over the average customer acquisition cost of a campaign, there's no way you're going to land that account, and you will most likely be fired. So you have this very interesting juxtaposition of expectations.

Okay, great. I think we're almost out of time, so as a last question: what do you think the next steps in this space are? How can we use data to make CVE more meaningful, and who should be the real stakeholders doing this?

I actually want to see civil society step up a little bit. We do some work with civil society organizations who get money from the US government, because they're more credible voices than the government itself, and I really want to see them get skin in the game in terms of helping us come up with these metrics, and also become a little more literate in talking about them, because we have to do a lot of education with them first. Then, in terms of the government: the Global Engagement Center at State finally got its money, so it can now actually do something. Instead of leaving monitoring and evaluation as this last thing that needs to get done, I want them to start talking about M&E right when they're thinking about giving vendors money to create the content, because it's really hard to run M&E if you don't have a baseline at the beginning. I think that nuance still needs to be further emphasized.

Great. With that, I think that's all the questions we have, so thank you very much for being here.

Good afternoon, everybody. I think enough time has
passed since Cecilia's good morning that we can actually say good afternoon safely. I have the somewhat, and sometimes unenviable, task of being the moderator of the session right before lunch, but I am actually very excited to be joined by two colleagues today for a really exciting discussion on healthcare data and technology. My name is Dylan Rosine. I'm the Millennial Public Policy Fellow in the Cybersecurity Initiative here at New America, and in my time here I've had the great fortune of working on a number of projects. Last fall, I helped deliver a report to the United Nations Secretary-General and his Chief Executives Board on various normative entry points for the UN in shaping norms around frontier technologies, and most recently, on the topic of conversation today, I've been working on developing a series of policy recommendations around healthcare cybersecurity. So I'm really excited to be joined by two folks today who are going to talk a little bit about that, and generally about the promises and perils of healthcare data and technology.

I'm joined by Sonia Sarkar here on my left. Sonia is a Public Interest Technology Fellow here at New America. She's also the former Chief Policy and Engagement Officer for the Baltimore City Health Department, and her work is focused on identifying the role of technology in facilitating connections between various parts of the health sector, particularly through an equity lens. To Sonia's left we have Robert Lord, who is one of our Cybersecurity Initiative Fellows here at New America and, I might add, has been really helpful in leading the development of some of the policy recommendations we're working on for the healthcare cybersecurity policy report, which he may speak to a little bit. In addition to that, Robert is a co-founder and the president of Protenus, which is an analytics platform that leverages artificial intelligence to detect data breaches in healthcare, and he's a leading entrepreneur and
thinker in the fields of artificial intelligence, cybersecurity, healthcare analytics, and data privacy. So I'm very excited to be engaged in conversation with them about the one key question we're going to be addressing on this panel today: how can the proliferation of healthcare data and technology both improve patient health outcomes and expose patients to new risks, and what can we do about it?

Just for brief background, for those of you who are hearing about some of these ideas for the first time today: the growth of new healthcare technologies in the past few years has been unprecedented. After the HITECH Act was passed by Congress in 2009, we saw a rapid transition away from paper-based records to electronic records in the healthcare system. To give you a sense of how fast that transition occurred: in 2008, only 9% of hospitals were using even the most basic electronic health record system, but by 2015 that number had soared to 96% using certified EHR technologies. A really fast transition. In addition, we've seen connected medical devices growing at an unprecedented rate, including things like infusion pumps and pacemakers, which are connecting to each other and to your cell phone, making the possibilities of some of these new technologies very exciting.

So with that, I want to turn to Sonia to talk a little bit more about some of those exciting developments, and the possibilities for leveraging these health technologies to deliver better patient health outcomes while in the clinic, but also before patients even step foot in the clinic in the first place.

Great, thank you, Dylan. I too am working through hunger, so I understand what it's like to be on the panel right before lunch, but I really appreciate the conversation. One of the things that I'm very interested in, as Dylan was referring to, is how we think about health as beyond just the clinic walls or
the hospital walls: how do we leverage technology to really address the fact that about 90% of what actually impacts our health outcomes doesn't take place through the traditional medical care system as we think about it, but has to do with where we go to work, how we get around, and what we eat? One of the things I'm really excited about, and I get to play tech cheerleader today, which is not a role I'm often in, is the ways in which we can leverage not only the EHR, which has proliferated widely and is really being used both inside the clinic and to connect to systems outside of it, but also other types of technologies that enable us to identify what social needs a patient might have. So you can imagine, in a clinic in East Baltimore, where there are significant health disparities, being able to keep a patient out of the emergency room because it's been identified that maybe they're struggling to put food on the table at the end of the month. We're able to get them enrolled in a food assistance program; they're able to connect with a local urban garden in their neighborhood; and they're able to join food advocacy efforts in the community that are actually focusing on why those food deserts and those food disparities exist in the first place, which are tied to all sorts of policy issues. In that vein, you can imagine that a screening tool that identifies that food insecurity in the first place, or a module in the electronic medical record that shows that the patient has the food need, followed by an automated referral from the clinic to an actual food bank or to that community garden, could start to bring the patient along this full stream of health and social services that's really relevant to their health as a person, and not just as a patient receiving medical treatment. So I think there are a lot of exciting pieces there just
in terms of the pure technology. The other area where I think there is great room to consider how technology could be a force for good is how it lifts up the voice of the patient: as a patient is receiving care, as they're interfacing with the healthcare system, as they're collecting data on what's out there in the community and what types of services are available, being able to voice their own opinions about how those services are or are not meeting their needs, and then thinking about how that information gets back to the healthcare system so that institutions can be advocates for some of those services as well.

Just before we pivot to Robert to talk about the flip side of that, could you give an example of something you've seen in Baltimore that works to empower those patients to take their health data and really go out there and advocate for themselves?

Yeah, absolutely. When I was at the Baltimore City Health Department, we were very lucky to be an award recipient from the Center for Medicare and Medicaid Innovation for a program called Accountable Health Communities: basically the idea that we should be able to identify patients' social needs, refer them to resources in the community or to government programs that address some of those needs, and then loop back to the healthcare providers and the healthcare system, so that this information about their social history becomes part of the standard intake, part of the standard quality of care in terms of how healthcare is delivered. One of the things we were very excited about on the patient side was that many of the community organizations we started to talk to said, we want to know how we can have access to some of this aggregate data as well. If there are hundreds of patients in Baltimore City being identified as not having access to affordable housing, that's not particularly new
news in Baltimore City for the people who do work around that. But if you're able to show, both on the healthcare side and on the housing agency side, how many of those referrals are actually getting met, what it looks like to come off and on the housing wait list, et cetera, then you actually start to get some traction, and you gain some unlikely allies in the form of clinics and hospitals who may not have been actively involved in housing advocacy work, but now are, by way of the technology and the systems they're engaging with.

Thanks, Sonia. Now, Robert, I want to come to you to talk a little bit about the flip side of that: some of the perils, perhaps, of introducing new technology systems and new platforms, and how those technologies can introduce new privacy and security concerns that could actually negatively impact patient health in some ways. Could you speak to that a bit?

Absolutely, I'm happy to be the tech boogeyman today, a little bit. I was actually just at a talk where we were right after lunch, so now you guys are hangry, whereas before I was dealing with people in a food coma; it's a nice switch for me personally. I think you can best illustrate this point with a story. For a little bit of context: before my co-founder Nick and I started Protenus, I was a medical student, also in East Baltimore, and one of my real clinical interests was working with HIV-positive patients. I worked in an HIV clinic for a lot of my clerkship time, and one of the things I noticed there was that a lot of patients were very, very hesitant to be forthcoming. They would ask questions like, where are you putting this data? Who can see this information? When you're putting that into the computer, where is it going? That type of thing. It really made me think; I heard it a few times, and then heard it over and over again. I also saw this with psychiatric patients, so people who had sensitive diagnoses of some sort or
another. I really began to dig into that question and ask: how are we protecting this data? What's the current state of this information? You start to realize two really terrifying things when you just scratch the surface of that problem. The first is that the basic cybersecurity hygiene and protections we have in healthcare are probably roughly five to ten years behind industries with similarly sensitive data; really, I would say, terrifying structures with regard to how we protect that data against what you might think of as a traditional external, network-based attack. This is improving in some ways, but for a little bit of context: while comparable industries probably spend about 8% of their budgets on this type of problem, in healthcare most places budget about half a percent. So a pretty bad situation; those numbers vary and they're difficult to get, but that gives you a little bit of context. The other thing I began to realize, which will be intuitive if any of you happen to be clinicians or people who work in a medical record, is that the real problem for a lot of health privacy is insiders: people who already have access to the electronic health record. In healthcare, we've got two really big problems and trends. One is that we are opening up the data to all these different sources; we're creating all these new linkages between individuals and between institutions; we've got health information exchanges and increasingly interoperable systems in our communities. And that's really great; as a student I saw all the time how important it was to be able to get the complete medical record. But simultaneously, there are no controls over who can access that information in almost any case. When I was a medical student, if a, let's just say, high-ranking DC VIP came into my institution, there would be nothing, essentially, that stopped me from taking a look at that person's record, let alone anything for anyone to
know that I was even in that record. So you can imagine: this was me as a first-year medical student, even as a volunteer actually working there in many cases, and this is essentially every hospital in the United States; there's no exception. In fact, the place I was at was really advanced in this regard, and this was still the case. So what ended up happening was my co-founder and I started to realize, hey, we think there's a much better way to do this. Along that path: I used to be what's, I guess, called a quant at a hedge fund, Bridgewater Associates; he used to be in the intelligence community and is a former Green Beret. And we said, look, in finance and in national security this is done very differently, and that's how we ended up tackling that problem. Just to give a little bit of context on what you see on the ground: things are getting better, but we've still got a long way to go, and that's some of the work that Dylan and I and Ian are working on right now, along with many others at New America. Where we're focused right now is not just how we stem the gaps and put band-aids on things, but how we actively articulate a more proactive vision for where our industry should be in five years, and have that be a more constructive look at the future.

One of the questions that strikes me, thinking about how much access everyone in the ecosystem has, is: why? Why would a first-year medical student have access to, say, the record of a VIP DC insider?

Yeah, it's a great question, and it comes down to two things: one's a really good reason, and one's not so good. One is that there's a real culture of open collaboration in healthcare, which stems from two areas. First, it's a very collaborative, very exploratory, I think very academic community, especially at many medical centers, and so people want that openness and teaching. But second, you've got the emergency-situation problem, which is that if you don't have access to an allergy when someone comes into the ED and you've got to
push a particular drug, and you don't know whether that drug will kill the person, then blocking someone with traditional, say, role-based access controls is a lethal decision made from a cybersecurity capacity. So essentially, healthcare systems decided: look, we'd rather have the insider threat than have that person die over what was literally just looking up a piece of data. Certainly the technology is there, no one would deny that, but the structures to use it are really hard, and that gets to the second piece, which is that we frankly just don't understand healthcare workflows well enough to permission people appropriately. What does a nurse really do? They could be inpatient, outpatient, research; they could be in an oncology ward; they could be in the OR. All of those involve completely different contexts and different types of patients whose electronic health records they're accessing, and if you think about it, you probably have the equivalent of millions of different roles in a healthcare system, even if there are only tens of thousands of people. So a lot of your basic security paradigms for protecting data inside an institution just completely fall apart, which is why more advanced, behavior-based analytics are coming about as an alternative to that approach.

It's clear to me, hearing from both of you, that there's a balance to be struck in taking advantage of these technologies to improve patient health outcomes while being mindful of the risks they introduce. So rather than have one of you play the cheerleader and one of you play the boogeyman, can you think through critically where that balance is, and how we can be mindful of the risks while also leveraging the technologies at the same time?

Yeah. I think one of the things that really resonates about your story, Robert, is that as we were planning for the Accountable Health Communities work, we had convened a coalition of multiple social service providers and multiple healthcare providers
and often, I think, when you're in the direct work of patient care delivery or service delivery, there's a sense that the more you know about the person in front of you, the better you'll be able to do your work. So if you can have the whole set of data that could possibly be out there, more is better, and that becomes the overriding philosophy that runs the design of the intervention. But as we were having a conversation, there was a representative from House of Ruth Maryland, which is an organization that does a lot of work around domestic violence and is very invested in defining what it looks like to share information safely, and in how to protect women from potentially being exposed to former or current abusers within a system. She asked the same question you did, which is really: why are we collecting this data, and how are we ensuring that the right levels of protection are in place so that the right data is getting to the right people? So as we think about issues of consent, and about making things available only when they're needed, for me there's a very large incentive to actually talk to patients and stakeholders directly, find out from them what they would view as appropriate, and give that equal weight with the subject matter experts or technical experts who are often in these rooms.

I would really echo that patient involvement side of things. I would maybe even take it a step further and say that I think we need to start having mechanisms of transparency, where consumers of healthcare, I should say, should really be able to understand what the cybersecurity and privacy posture is of the institutions that they're going to entrust their data with.
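The permissioning problem Lord describes earlier, where a clinician's legitimate data needs depend on job, care setting, patient type, and situation, can be sketched with a toy combinatorial count. All of the category names and sizes below are invented for illustration; real systems have far more dimensions and far more values per dimension.

```python
from itertools import product

# Invented, deliberately small dimensions along which a clinician's
# legitimate access needs can vary.
job_titles = ["nurse", "physician", "resident", "researcher"]
settings = ["inpatient", "outpatient", "oncology", "OR", "research"]
patient_types = ["routine", "sensitive-diagnosis", "VIP", "emergency"]
shifts = ["day", "night", "on-call"]

# A static role-based model effectively needs one role per combination.
roles_needed = len(list(product(job_titles, settings, patient_types, shifts)))
print(roles_needed)  # 4 * 5 * 4 * 3 = 240 distinct access contexts
```

Even four small dimensions yield 240 distinct contexts; with realistic numbers of wards, services, and situations, the count grows multiplicatively toward the "millions of roles" Lord mentions, which is why static role-based access control falls apart in hospitals.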
And if we don't have that transparency, you're basically... I mean, everyone here has gone into some form of doctor's office, and basically they say, here, let me give you a stack of ten papers that you're going to sign, essentially giving away all the rights to some of the most sensitive pieces of data in your life. Unless you've undergone a top-secret SCI clearance, a medical record is basically the most invasive thing I can think of for many individuals. So when you think about that, it's all about: okay, what are you going to give me back, institution? What technologies are you using, what cultural elements are you using, to protect my information? I think that is a huge piece that we can implement. A second thing, and I see this a lot, having been a clinical researcher as well: we always talk about AI and all these other technologies and buzzwords that have lost all meaning to me at this point, having been in the field for a decade now, in terms of how we're using the data to improve outcomes and do more clinically focused analytics. But a lot of those same tools can also be used to protect data in a variety of ways, and I think we just need a parallel track of investment and thoughtfulness about how we're using these sophisticated techniques to defend our institutions, as well as using them to implement improvements to patient care, which should come first, I agree, but it really does have to happen in parallel.
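Lord's point about turning the same analytic tools toward defense can be illustrated with a minimal behavior-based sketch: instead of a static permission check, compare each record access against the user's own history and surface outliers for review. Everything here, the log format, the 5% threshold, and the names, is invented for illustration and is not a description of Protenus's actual method.

```python
from collections import Counter, defaultdict

def build_baseline(access_log):
    """Tally how often each user has historically accessed each ward."""
    baseline = defaultdict(Counter)
    for user, ward in access_log:
        baseline[user][ward] += 1
    return baseline

def is_anomalous(baseline, user, ward, min_share=0.05):
    """Flag an access when the ward is rare (or unseen) in the user's history."""
    history = baseline.get(user)
    if not history:
        return True  # no history at all: surface for human review
    return history[ward] / sum(history.values()) < min_share

# Invented access log: (user, ward) pairs.
log = [("nurse_a", "oncology")] * 95 + [("nurse_a", "ER")] * 5
baseline = build_baseline(log)
print(is_anomalous(baseline, "nurse_a", "oncology"))    # False: her usual ward
print(is_anomalous(baseline, "nurse_a", "cardiology"))  # True: never seen before
```

Unlike a hard block, a flag like this supports after-the-fact review, preserving emergency "break the glass" access while still deterring snooping.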
So it's looking like we have time for about one more question. Unfortunately there's no Q&A for this panel, but we're going to go to lunch, and I encourage you all to stick around and ask those questions of all of us, certainly, but also of the others who came before. The final question I have relates back to Emma Coleman and Dipayan Ghosh's panel about bringing technologists to the table. I'm wondering if you can speak a little more concretely about bringing technologists into the healthcare space and encouraging those collaborative dialogues between patients, technologists, and policymakers: how do we leverage public-private conversations to really deliver the best outcomes? And maybe, Sonia or Robert, if you've seen an example of that in your own work, can you speak to what that looks like in helping deliver the best patient health outcomes?

I think it's a great question. We were talking earlier about the fact that oftentimes there isn't a great understanding among technologists of healthcare workflows. I'd flip that and say that, similarly, there's often a lack of understanding of technology in general, and certainly of the way technology gets incorporated into actual healthcare workflows, among public health professionals and healthcare providers themselves. So one of the things I have found to be incredibly useful is to think about how to bring technologists to the table in a mode of learning, and not just in a mode of saying, we're the people who know how to design and code and we're here to fix your problems, which can often be the positioning of these types of conversations. Instead, the framing should be that everyone's got different assets they're bringing to the table, and I've found that to be incredibly useful.
I'm not a technologist myself, but I really enjoyed getting to learn from the other technologists of the Public Interest Technology fellowship, who have helped me break down some of the pieces of the policy questions I'm looking at and then think about how technology could be applied to them.

Yeah, you know, having been in health IT specifically for about the last five years, I always think about it as really a two-part problem, and I think it requires bridging the gap between two mindsets. On the healthcare side there is very much a culture of no that has emerged around technology: let's protect patients, so we can't do something new; we're going to do it the way we've always done it. This is beginning to shift in some ways, but we need to start thinking about the long term, which is that if we're really going to protect our populations and the long-term health of our nation, we can't do it the way we've always done it. So I think we need to shift from no all the time to yes, but let's be thoughtful, and that's a really important cultural change. Simultaneously, I have a lot of friends on a certain coast who, when they go into healthcare, are often just focused on: let me disrupt whatever I can, I'm here to disrupt, disruption as a service, today, right now. I was talking to a healthcare CIO at a conference, and he said the last thing he wants to hear is that someone is going to disrupt his hospital. That is just not a word you want in a clinical workflow. Does a surgeon want disruption in their OR? No. So I think we have to start to think about the nomenclature we use, and about the thoughtfulness with which we enter that space as technologists, being respectful both of the cultural norms and of the unique challenges, because there are elements of that that are definitely like playing the entrepreneurship game on expert mode, or the policy
game, or other pieces of that. I think we both need to come to that understanding and concordance, and it's events like this that are helping to build those bridges, which are just so, so important.

Well, I hope you'll join me in thanking Sonia Sarkar and Robert Lord for being here. Thank you, Dylan. Thanks, everyone, and congratulate yourselves on making it to lunch. I believe lunch is served out in the lobby, so save those questions; we'll be back in here at 1:30. Thanks.