So thank you very much, and we have a great panel for this discussion on tomorrow's technology and what it means for cybersecurity. And I thought an interesting starting point would be to go back in history, because it's almost exactly 50 years ago that two Pentagon-funded researchers, and they weren't computer scientists, they were actually psychologists by training, J.C.R. Licklider and Robert Taylor, wrote a paper that literally changed the world. The title of it was "The Computer as a Communication Device." And in it, they floated the then-crazy idea that computers were not just for calculating, but that they could be used as communication devices. And not just that idea; they went on to say that it would revolutionize everything from creating new kinds of jobs, to building new what they called, quote, "interactive communities," to giving people a new sense of place, what they described as, quote, being "online." And then they went on to say that as long as this technology was made available to the masses, quote, "surely the boon to humankind would be beyond measure," end quote. This is all before we actually have the internet or ARPANET itself. But the one thing they didn't predict is what this conference is about: the idea that a computer would not just be used as a communication device, but as a miscommunication device, one that is used and misused for everything from, as we've heard today, crime to politics to war. And so what I thought would be interesting is to ask our panel of experts to play a little bit like Taylor and Licklider and project the future for us, but one-up them by going into what the bad side might be.
And so I'm going to kick off by asking each of them to identify a technology, and it can be something in the near term or maybe in the long term, that they think will shape the world of computing in the future, but also go a step further and explore one threat, and it might be a new actor, a vector, a type of attack, whatever, one threat that we'll also see that's new and emerging in this space. And so first I'm going to turn to Michael Daly, who's Chief Technology Officer at Raytheon, to lay out what you see in this space. Sure. Well, we've been seeing an emergence of metamaterials, you know, nanomaterials, and these have all been driving lower power consumption, plus energy generation, plus a reduction in size, a massive increase in storage. So what I think that's going to do is end up creating a bunch of very small sensors and effectors. In fact, we were speaking earlier about an article I read this morning about a new crystalline structure on the molecular scale that, when exposed to blue light, actually flips its orientation and changes shape, and so it can act as something like a flagellum. The point to me is that these are going to create a bunch of very, very small devices that will float in our bodies or be injected into our bodies, be part of us, and that, of course, creates quite a landscape for cyber problems, from simple espionage-type problems if I can... Let's stop. What would be one use of this before we get to the attack side? So a positive use of it would be to bring chemicals into your body, drugs to help you solve a problem that you're having, and to be able to communicate back outside your body exactly what it is doing, and maybe even receive updates about a new methodology in your body, like spend your time in your bowel instead of in your brain and put the drug there. So, attack side. This sounds so great and awesome, robots all inside my body. What could be wrong with that? Yeah, exactly right.
So there's the obvious thing where you've got an attack vector to kill somebody, so that one's a little too obvious. The one that I think will be more prevalent is using these devices for espionage. It could be somebody acting deliberately, an insider threat, somebody who has devices that you're just not going to see going through a scanner of some kind, who can then go into environments. And then you could imagine it actually being a threat that the person didn't know anything about: somebody put something into their food, they brought it into an environment, it collected information about that environment, and then those devices, when they left the secured space, were able to transmit the data out. So next we're going to hear from Yasmin Green, who's director of research and development at Jigsaw, which is in essence the think tank for Google's parent company, Alphabet. Thanks, Peter. That was really interesting. I'm going to talk about something which might seem less innovative from a technology perspective, but is innovative in terms of how the technology is applied. What I've been thinking about a lot recently is a concept that we're calling the digital insurgency. Typically, insurgencies in the physical domain are tough because they're costly and they're risky and they're unpopular and they can be attributed, which means that there will be retaliation. And the internet is a game changer for those things, because politically motivated states and non-state actors can use online personas, be they fake or real or cyborgs, to influence a population, which could be their own or a population abroad. And their goals for doing that might be general propaganda to change opinions; it might be to change the actions of populations, or to persecute individuals, to censor them and monopolize information at home. I want to take a second to tell an anecdote which I think is interesting about how this is applied, and then we'll go to threats.
So I've been interviewing people in the fake news space recently, fake news and fake personas, and trying to understand... So can you tell us who is in the fake news space? Because I keep hearing different opinions on that. The guy down the street, he's got some ideas. So in all seriousness, when you're trying to identify who to interview... Fake news is such a fuzzy term, I'm a little loath to even use it, because it's a catch-all for anything we don't like, especially anything the other guy says. But actually, last week I was interviewing on stage at South by Southwest two people. One guy was an entrepreneur in fake news. He's got like a business empire, he pays his mortgage, he makes six figures a week peddling fake news. And another guy, I was just telling you this, is the state representative for the 15th congressional district of Georgia. Very popular, very impressive, especially because there are only 14 congressional districts in Georgia. So he's a really popular politician online, only online, and he doesn't make a cent. He has very strong political views and he has a following. So one guy was political and not lucrative, and the other guy was very lucrative and not political. The point now is I was just going to give a story from the fake news business guy. I said, tell me about an article that was popular. His main vehicle was this outlet, not a real paper, an outlet called the National Report. And I said, tell me an example of something that's popular. And he said, well, you know, in America we care a lot about what other people spend their food stamps on. And I was like, okay. And he said, you know, people say they spend their food stamps on soda or something, or they spend their food stamps on steak. And I was like, okay. And he said, so I wrote an article about a marijuana dispensary in Colorado that accepts food stamps. And I was like, oh. And he said, and it went viral.
There were like six million hits. He was watching his site blow up, and every time someone visits his site he makes money, so that's really good for him. And I said, wow. He said, yeah. You know, I even set up an online marijuana dispensary to make the story seem authentic, so that he could link to it. He had his staff weigh in on the site and post things like, yes, they accepted my food stamps. This place doesn't exist, okay? There was so much traction around it that the Colorado Legislature then passed legislation prohibiting food stamps from being used for marijuana. Okay, so you kind of see the end to end. For this guy, it's part recreational, part commercial. But then you think: how easy would it be for a state to take this same machinery and apply it to fabricating a cause for war? So, you know, I can show you the images. I can even show you the video. I can show you a first person speaking, all fabricated. Use my networks on social media to propagate a message, and now the population supports us going to war with another country. So I think it's a really serious threat. I think we may have seen some manifestations of it already in the U.S. Over the last year, we spent a lot of time previously worrying about cyber attacks on voting booths to overwrite the genuine popular vote, or hacks of the stock market to create a lot of hysteria and disruption. And there's a hypothesis that there was a different type of cyber attack on the U.S. population around the presidential election. But I think we're going to see much more of it going forward, and it's going to have more serious outcomes, well, it was already a serious outcome. But we're going to see it applied to causing countries to go to war. Some additional thoughts on why now. Why is the internet a game changer for this now? It's various trends that are mutually reinforcing.
So one is around the diffusion of publishing and the fact that there's no longer a monopoly on credibility. When I started working for Google 11 years ago, Google didn't own YouTube, there was no Blogger, no Sites, there wasn't even Android. And now everyone is their own publishing platform and TV station. Back then, Google was your window to the web. People's homepage was Google, some people Yahoo, most people Google. And if you didn't rank highly in the search results, no one would find out about you. Fast forward a decade, and it's through social networks, predominantly, that we get our news, importantly. And then on the side of bots, and you know a lot about this already, Peter, but on the side of bots, they're cheaper and more sophisticated. We're not at the point of really autonomous social chat bots now, as we've seen with some high-profile examples of chat bots being gamed and kind of turning out to be like misogynist Nazi trolls instead of seeming like normal people. But it won't be too long until we see mass autonomous chat bots. And right now we're already seeing humans and software being deployed at the things that they each do best, which is software doing the more automated stuff, like creating accounts or reposting, and humans adding the authentic flavor. One other really interesting, tiny anecdote: we got hold of a Russian troll farm handbook, which is exactly what you'd think it would be. It's really incredible. It's like a manual for a call center, and it has all the kinds of headlines that are in the Kremlin's interest, aligned with their domestic or foreign policy goals, and then it has the script, and then it has the assets to link to on the internet that support that. So, you know, NATO is terrible and it's gonna be dismantled, it's useless, and, you know, Nemtsov was murdered by the CIA.
You see these all playing out, and then you have the staff going in and implementing them online. And I think the interesting thing for us to think about for this type of warfare, versus kinetic and versus, like, cyber war 1.0, is that it requires useful idiots to work. The states are not trying to populate an online movement wholly with their own actors and personas, because that would just be an echo chamber; you're not really changing anything across the population. So they're going for a seed and fertilizer strategy, which is: let me get a small number of influential, well-embedded personas that can start online movements that are carried by the rest of us. Yeah. So next we're gonna hear from David Weinstein, who's Chief Technology Officer for the state of New Jersey and also a New America Cybersecurity Fellow. Thanks, Peter. So I think it's really helpful to frame this discussion in the context of the advent of the internet, right? We always talk about how it was developed without security in mind and was strictly a communications medium, and over the course of years, individuals, sometimes nefarious, sometimes otherwise, devised ways to exploit this channel. I'm actually fairly optimistic that in the next five, 10, 15, 20 years we won't find ourselves in the same position we're in now with something like the internet, just because we have more people thinking about this. We have a more robust security research community, one that traditionally was limited to just the academic community and now is in the for-profit world as well. So I actually come at this from a more optimistic angle than others. But one way that I think about this moving forward is that every new technology that is introduced into the global commons, into society, will be hackable, right? So that's kind of the starting point for the discussion, I think. Everything is hackable today; everything that's introduced tomorrow will be hackable.
The question is, how do we manage that reality? So when I think about the future of technology, there's two things that really need to be part of the conversation, and excuse me because they're not nearly as sexy as what was just discussed by the two panelists. One is cloud computing and one is artificial intelligence, right? And with cloud computing, the cloud is gonna be more and more prevalent in our society moving forward. And in many cases, it will actually be, to use the same terminology, less hackable than a lot of the IT infrastructure that we as enterprises, as governments, as individuals manage ourselves. Rob Joyce, the incoming cyber czar for the Trump administration and former director of NSA's Tailored Access Operations unit, said the cloud is just a fancy word for someone else's computer, and I think that's dead on. But there's truth in the notion that in more cases than not, that someone else is better at managing your security than you. So that begs the question: if we're reducing the likelihood of hacking by moving to the cloud, what's the risk? And that's where I think we really need to focus, and it gets into the notion of insider threat. So the vector for getting into someone else's computer, so to speak, or at least the one that we've seen play out more and more, is insider threat. Most of the time we think of insider threat in the context of an Edward Snowden, the user who's able to leverage their access or their privileges to exploit the system. Moving forward, I think we're gonna be thinking about it more and more in the context of the software developer, the manufacturer on the supply chain, a little more removed from the actual user who's just removing documents or data from the system. We're getting better at using technology to mitigate that risk. I think we need to focus more on other aspects of the insider threat.
So that's one piece, on cloud, that I think we really need to think about moving forward. And the other piece, which I'll just mention briefly, and I'm sure we could talk about, is AI and how we secure this whole notion of artificial intelligence. Not necessarily from a purely technical perspective, but how we manage over time the data and the processes that feed into artificial intelligence technology, such that we can trust those systems, we can trust that technology. So that's where I think we're gonna see more and more innovation on the threat side: things like, and it was mentioned at the outset by Nate Fick, fileless intrusions, the injection of code into memory, the manipulation of seemingly legitimate software for nefarious purposes. Endpoint protection, particularly for endpoints that are involved in the development of AI systems, will be a big space moving forward, to avoid the manipulation of the platforms that ultimately we as end users will come to increasingly rely on and trust. So we've heard about some major changes, but also potentially major threats. We have everything from a redefinition and miniaturization of human-machine interfacing, to influence operations maybe becoming even more scaled and intelligent than what we see today, to the shift to the cloud bringing arguably more security but a redefinition of insider threats. So today, you might have heard, there's a congressional hearing going on, and the way hearings are structured is they lay out big important topics and then say, give me the quick 30-second answer. So on each one of these, I wanna hear the 30-second answer on what Congress can do now, one thing it can do now, about the problem set that you laid out for the future. So I think that we're going to have to invest in the technology and the policies and the developers to help segment the internet.
Right now we're operating with a big, flat internet. Everyone's all connected, and if I give my grandmother a tablet computer, she's open to attacks from anybody on planet Earth, and only half of the people in the world are on the internet right now. So double that, and then give everybody five devices, 50 devices, and that changes things. But let's get specific to the miniaturization. I mean, you spooked me out with this. Yeah, I know. So what I'm thinking is that what we have to do is create, with maybe software-defined networking or whatever comes after that, slices of the internet where there's stronger authentication for the individual devices, so that they can be more strongly attributed, and where we can separate the traffic so that that tablet, or the device that's sitting inside my body, whatever it is, does not have to be exposed to the entire planet Earth but rather is only talking to the devices that are enrolled within that slice and strongly authenticated. What do we do about influence operations 2.0? I would say, in the same way that many people in this room think that in order to understand what's happening with geopolitics you need to understand technology, the reverse is true too. In order to understand technology, you have to understand what's happening with geopolitics. And in the tech sector, we spend a lot of time anticipating and analyzing malicious activity that has a criminal motive, and really only a fraction of the time on the same type of activity that has a political motive, and I think it's because it's just less organic and familiar to those groups. So I think that some kind of communication channel or briefing system from the government to the tech sector, around the state of the threats as they see them and their understanding of the political motives of allies and adversaries alike, would help the tech sector design the best tech solutions.
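[Editor's note: the "slices" idea described above can be sketched in code. This is a toy illustration only, assuming a hypothetical enrollment registry; the class and method names are invented, not a real SDN API. The point is simply that a device's identity is strongly attributed, and traffic is permitted only between devices enrolled in the same slice.]

```python
# Toy sketch of internet "slices": devices enroll into a named slice under a
# strong identity, and a forwarding decision allows traffic only between
# members of the same slice. All names here are hypothetical.

import hashlib


class SliceRegistry:
    """Tracks which authenticated device identities belong to which slice."""

    def __init__(self):
        self._slices = {}  # slice name -> set of device identity digests

    @staticmethod
    def identity(device_secret: str) -> str:
        # Stand-in for strong attribution, e.g. a certificate fingerprint.
        return hashlib.sha256(device_secret.encode()).hexdigest()

    def enroll(self, slice_name: str, device_secret: str) -> None:
        self._slices.setdefault(slice_name, set()).add(self.identity(device_secret))

    def may_talk(self, slice_name: str, secret_a: str, secret_b: str) -> bool:
        # Both endpoints must be enrolled members of the slice.
        members = self._slices.get(slice_name, set())
        return {self.identity(secret_a), self.identity(secret_b)} <= members


registry = SliceRegistry()
registry.enroll("medical-implants", "pacemaker-key")
registry.enroll("medical-implants", "clinic-gateway-key")

# The implant can reach its clinic gateway, but not an arbitrary host.
print(registry.may_talk("medical-implants", "pacemaker-key", "clinic-gateway-key"))  # True
print(registry.may_talk("medical-implants", "pacemaker-key", "random-host-key"))     # False
```

The design choice Daly is pointing at is exactly this default-deny posture: rather than the flat internet's "reachable by anyone," a device in a slice is reachable only by enrolled, attributable peers.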
So we're rapidly approaching, if not already in, a world in which the defensive perimeter is marked by identity, not by the physical or logical network of an organization or of a system. Lawmakers need to understand that, public policy makers need to understand that, and they need to think about the policy implications, specifically the role of government in securing a domain in which the perimeter is defined by identity, right? That raises lots of public policy questions that, if we're really gonna balance security and privacy responsibly, we need to accept as the new norm. But at a really basic level, and I confess I snuck out to watch a little bit of the Comey hearing, lawmakers need to get really smart on this, and fast. Okay, but we say that, and we've been saying that for years and years; some of us even wrote a book on it. But you work in this space. You say they need to, so what can be done? Well, there are great training courses and lots of good books on the subject that they need to get smart on. I mean, to be honest, they need to prioritize it higher as a legislative priority, and over time it will become more familiar to them and their staffs. But it's not prioritized, at least from my vantage point, at the level that it should be. Let's open this up to the audience. Please raise your hand and wait for the mic to come to you, and all questions end with a question mark as well. Right there, yeah. Tim Ridout, a non-resident fellow at GMF. I was wondering, especially because healthcare is a priority right now for Congress and America, whether it's the Choice Card for veterans... and I have a friend who is in town, he's a neuroscientist and his wife works at the VA.
Anyway, the point is: whether it's telemedicine or otherwise, how can this be used in the healthcare sector to deliver things more efficiently, to provide better care at lower cost, especially given the disparity between doctors in cities with great hospitals versus rural areas, things like that? Who wants to tackle that? In many ways it seems parallel to the broader discussion of convenience versus security. But your business in some ways connects to this. It does. Well, I think that the new technologies are gonna greatly enhance medicine for everyone because of that lowering cost of sensors. So instead of you having to go and get an MRI, where there's one machine in your neighborhood and it maybe circulates between four different hospitals in your area and you have to schedule it, if there can be smaller sensors that are more cost effective and pervasive through your body, constantly feeding data back, you're going to be able to get cheaper healthcare sooner in the cycle of whatever your problem is. That's a short answer, I suppose. But I think that you're gonna find a lot more sensors that are part of our lives. Another example was a toilet that's actually actively measuring your effluent and telling your doctor whether or not you're having a problem, in advance, so that you don't have to wait for that once-a-year test. So when that happens, it'll be better. I'm gonna weigh in on that as well, not on that topic, but I think we can see another interesting... You can have a stand-up toilet here, it doesn't have to be... I'm gonna jump from that. We can see an interesting parallel lesson to what you were saying earlier in the Anthem breach, where on one hand it's a healthcare company, and the information that was taken was personal; it's the largest such breach, personal information ranging from addresses and income data to healthcare data.
And yet the alleged attacker was a state government that was using it for an espionage-gathering activity. So all of this expansion of convenience and health data, all the way down to the personal level, actually raises the pool of big data that a foreign government could take. And in turn, to what you asked me, it opens up a whole new area of influence operations as well. You deal with this, though; much of healthcare in America is not a federal government issue, it's a state government issue. In New Jersey, how are you thinking about the current and future issues of cybersecurity as they relate to healthcare? Yeah, so it's a great point. One of the phrases that we continue to repeat across state government in New Jersey is that state governments are in fact a pretty target-rich environment for not just hacktivists or criminal hackers, but nation-state actors. We have the same types of data that the federal government has, data that has been targeted in the past. So we deal with it in many of the same ways that the federal government is attempting to deal with it, and that Fortune 100 companies deal with it, which is we implement an enterprise risk management strategy. That sounds really good, but what does that really mean, right? For the most part, it boils down, at a basic level, to prioritizing the criticality of data and systems, making sure that those systems are locked down as much as possible through a combination of people, process, and technology, and then spreading whatever resources are left across the rest of the enterprise. I think there was a really important point made earlier that this isn't so much about managing risk anymore, it's really about managing value, and for us, our value proposition is to the citizens, in our case of New Jersey.
So we deliver that value proposition by prioritizing those assets that we are the custodian of, the ones that connect to citizens' lives, and making sure that we can protect that data: not only the confidentiality of the data, but also its integrity and availability. So it's a big problem for state governments. To be honest, we have really federated, large IT enterprises, we're fiscally constrained, and we can't always compete on the human capital side. So we do have to get creative, and it starts with prioritizing what's most important and working down the value chain from there. I think we've got a question right here. Anasai. David, I was very interested in your identity as being part of the perimeter, but I wanna kind of turn that to what Yasmin talked about with trolls. You know, trolling as an influence operation requires fake identities, fake, all kinds of fake things. Have you grappled with the question of how we can take on and deny, let's say, the right of access to traditional media and other media or information outlets by going to the heart of who is who? I did a study of comments on a New York Times article on Putin's wealth and those who have benefited from Putin's wealth. The number of comments that came in after that front-page Sunday story was five times greater than for any other story of that nature, and it was quite clear from reading them that they had a repetition of certain themes, moral equivalency and other things. Are you dealing with this, trying to get your hands around how we combat this from a geopolitical perspective? Yeah, thanks for that question, because I do actually wanna spend a second talking about how we can get ahead of this threat other than having Congress brief the tech sector, Silicon Valley. Our group, Jigsaw, its aim is to understand these dynamics so that we can develop technological means to detect them and disrupt them, and the main hypothesis in this area of networked propaganda is about diversity.
So on one hand you have organic campaigns, which have many people doing many things, and on the other hand you have coordinated campaigns, which have a few people trying to manifest the illusion of many people doing many things. And because they're constrained in time and energy and creativity, they won't be able to manifest the same amount of diversity as organic campaigns. To be concrete: looking at the temporal dimension, organic movements have a timeline that looks very organic. This is a little simplification of it, but you'll see it goes like this: there's a spike and then it trickles down. When there's a state-sponsored campaign, it's kind of like that, and then they're onto the next topic that they're acting on. In terms of semantics, the three of us might all think that the New America cyber conference is a great thing, and we might have all tweeted today, #NewAmCyber, can't wait for our panel, and we might tweet that several times today. But you'll tweet about your son, I'll tweet about the weather, you'll tweet about a New Jersey sports team, yeah. So it's actually pretty difficult. Normal people do tweet about burgers and things that seem mundane, and it's very difficult for bots, or actually even for somebody who's working a 12-hour shift in St. Petersburg and has to manage 20 profiles, to keep up with the diversity that you'd see on the semantic side. And then also in terms of network shape: you'll find that the people who participate in the hashtag for this conference, some of us may follow each other, we'll have like a loose ball of interconnectedness, but when there are state-sponsored campaigns, the organic and the inorganic look different, which is really good. It's really promising for the ability to identify markers and then build systems to demote state-sponsored activity.
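[Editor's note: the semantic-diversity heuristic Green describes can be sketched simply. This is an illustrative toy, not Jigsaw's actual detection system; the function names and the 0.5 threshold are invented for the example. The idea is just that coordinated accounts tend to repeat the same wording, so the fraction of distinct messages across a campaign is low.]

```python
# Rough sketch of a semantic-diversity marker for spotting coordinated
# campaigns: organic crowds produce varied text, scripted accounts repeat it.

def semantic_diversity(posts):
    """Fraction of posts that are unique strings: 1.0 means all different."""
    if not posts:
        return 0.0
    return len(set(posts)) / len(posts)


def looks_coordinated(posts, threshold=0.5):
    # Flag a campaign whose accounts mostly repeat near-identical text.
    # The threshold is arbitrary here; a real system would calibrate it.
    return semantic_diversity(posts) < threshold


organic = [
    "can't wait for our panel #NewAmCyber",
    "great burger for lunch, then #NewAmCyber",
    "go Devils! see you at #NewAmCyber",
]
scripted = ["NATO is useless #NewAmCyber"] * 5 + ["NATO is useless! #NewAmCyber"]

print(semantic_diversity(organic))   # 1.0
print(looks_coordinated(organic))    # False
print(looks_coordinated(scripted))   # True
```

A real system would combine this with the temporal and network-shape markers she mentions (spike-and-decay timelines, follower-graph structure), since any single signal is easy for an adversary to game.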
But who decides which of these activities is legitimate or not? Given that we've seen state-sponsored ones pushed by Russia, we've seen state-sponsored ones pushed by US allies like Israel, and we've seen political campaigns on both the right and the left hire bots, how do we think about the legitimacy of it versus the tool? It's a really good question. For technology companies, and definitely for Jigsaw, it has to be about the tactic. You have to come out and have a position on the tactic; it can't be on the politics or on the ideology. So for example, we built a tool over the last few years to mitigate distributed denial of service attacks, flooding a website with traffic so that you take it offline. We have to be against that. It doesn't matter who's doing it. It doesn't matter if it's a group whose ideology we agree with. In the case of Russia going into Crimea, we protected sites on both sides, Russian and Ukrainian, because we're against the tactic. Manipulating social media conversations, deceptive content, fake personas: that's not good for the conversation, that's not good for the internet, and we come out against that tactic. Okay, we have time for one last question. So, right here in the front. So I'm curious about this information operations issue, just to talk about it a second longer. The notion of influence operations has dominated a lot of the discussion in the last year, and the question I would have, for all three of you, given that you come from very different stakeholders, is who you would want to see addressing both the tactics and the content of these sorts of strategies?
Yeah, I'll go first, because it just builds on what I was saying, which is I think you should leave the technology development to the tech sector and not have too much influence from government there, because for most of these things that we're talking about, malicious activity is bad for business for the tech sector. They don't want it. They don't want ISIS on their sites. They don't want a bunch of fake actors on their site doing malicious things. They don't want criminal networks. So I would help them understand how these tactics are being applied for political ends, but let them develop the defenses. Corporate side? Well, I think that governments are gonna have a part to play in that, because some of the information that they collect is from the intelligence community and about critical infrastructure and other things. So I think that they have to have a part in the story. Yeah, I agree, I agree. It's tough. I don't think it's necessarily clean cut. It needs to be a multidisciplinary, multi-institutional approach. Government certainly has a role in verifying certain information for intrinsically governmental purposes. And to the degree that government can be a trusted stakeholder in that transaction, that's great, but I think we also have to accept that other players bring capabilities to bear and can be equally legitimate, and we've seen that play out over the last few years. Yeah, influence operations are tricky, because there's this matter of privacy and anonymity that's important, and freedom of speech, but there are nonetheless protected areas of speech and protected operations, like, for instance, an election. An election is a critical part of society, so if someone is conducting influence operations to undermine an election, then I would say the government has a stake and needs to step up. Other countries have done that in the past, where they say, well, we're not gonna permit advertising in, let's say, the last two weeks of a campaign.
And so they can have freedom of speech, but then some restrictions in order to prevent these types of influence operations. The model I've pushed is an expanded version of the Active Measures Working Group from the Cold War, which was an interagency group designed to identify Soviet influence operations and then push back against them, so it involved everything from the intelligence community to the State Department. The modern version of that would have these entities but also expand outwards to include key technology companies, media companies, and the like. And the most important part of it is actually an important phrase that you used: it's not just to identify the campaigns, it's to debunk the work of the useful idiots who make them possible, the insider threats that actually create the most influence. So this has been a great panel in terms of looking toward this future space, and please join me in a round of applause for the folks who joined us here.