I'm Mike Atallah. I'm in Computer Science. I've been at Purdue since 1982, and I have done work, early on, in algorithms, data structures, and computational geometry, and more recently in security.

Hi, I'm Jennifer Neville. I have a joint appointment between Computer Science and Statistics. I've been here for 12 years. Is that what we're supposed to say, how long we've been here? My research area is machine learning and data mining. I focus on modeling graphs and other kinds of complex network data sets.

I'm Eva Tardos. I'm in computer science at Cornell University, and I'm just visiting here. I worked on algorithms early in my career and more recently on games, and in both cases networks are certainly a big theme. I co-teach, or sometimes teach, a course at Cornell called Networks, based on the book Networks, Crowds, and Markets. A particular aspect of my work involves both networks and modeling feedback effects: we all influence each other through our networks of connections, both in the game-theoretic sense of incentives and in the sense of learning, how things evolve as each participant tries to learn what might work well for them in a network context.

I'm in the School of Communication. I joined Purdue in 2008. My key focus is on network analysis. I usually look into networks that relate to communication, information sharing, and social support, and recently I've mostly been looking into disaster contexts: how these patterns of networks among people and in communities impact their return decisions, evacuation decisions, and also rebuilding and recovery processes.

I'm Andrew Liu from the School of Industrial Engineering. I joined Purdue in 2009, and I'm an associate professor there.
My research interests are in optimization theory and algorithms and their applications in game theory, such as computing equilibria, with the main application areas in power systems, or in the energy sector as a whole. Before, I worked on generation capacity planning and policy analysis; more recently my focus has shifted to the smart grid, basically how to manage distributed resources and how to make demand response work.

Great. I'm sure in this course there are going to be a lot of questions you may have. On the desks you have pen and paper, so by all means write those down, and at the end we'll have an opportunity to ask them. But to set things up, we'll start off by having each of the panelists very quickly give a vision of one of their particular applications for the networked world going into the future. Take energy markets, for example, and energy grids, which Andrew mentioned, or social networks.

Okay, I guess I can start. Well, the energy market, or the power system to be specific, is definitely already connected by physical transmission lines and distribution lines. But recently there's been all the talk of the Internet of Things, and in my view the smart grid, or the power system, is at the front of that discussion. I learned a term from some other people: the "grid of things." The idea is really that basically anything, your smart dishwashers, washers, dryers, not to mention electric vehicles, can all be plugged in, and they can all be controlled by whatever algorithms are there. And they can be collectively used as resources, either to help save energy or, in the case of electric vehicles for example, to serve collectively as storage from the grid's perspective. So in my view, the future power grid is a connected network, both by physical connection and also by cyber-side connection.
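The "grid of things" idea, individually tiny devices adding up to grid-scale storage when they all follow the same rule, can be sketched in a few lines. Everything here is invented for illustration: the prices, the charge/discharge thresholds, and the battery sizes are placeholders, not a real dispatch algorithm.

```python
# Toy sketch of a fleet of plugged-in EV batteries acting as grid storage:
# each battery follows the same simple price rule (charge when cheap,
# discharge when expensive). One-hour time steps, so kW and kWh coincide.

PRICES = [20, 18, 15, 30, 45, 50, 42, 25]   # $/MWh over 8 hours (made up)
CHEAP, EXPENSIVE = 20, 40                   # charge at/below, discharge at/above

def schedule(capacity_kwh, rate_kw):
    """Hour-by-hour charge(+)/discharge(-) energy for one battery, in kWh."""
    soc, plan = 0.0, []                     # soc = state of charge
    for p in PRICES:
        if p <= CHEAP and soc < capacity_kwh:
            e = min(rate_kw, capacity_kwh - soc)   # charge toward full
        elif p >= EXPENSIVE and soc > 0:
            e = -min(rate_kw, soc)                 # discharge what we have
        else:
            e = 0.0
        soc += e
        plan.append(e)
    return plan

# 1,000 identical vehicles: 7 kW each is nothing, but together it's 7 MW
fleet = [schedule(capacity_kwh=60, rate_kw=7) for _ in range(1000)]
net = [sum(hour) / 1000 for hour in zip(*fleet)]   # fleet net load in MW
print("fleet net load (MW):", net)
```

The point of the sketch is the aggregation step at the end: no single appliance matters, but a thousand of them following one signal behave like a megawatt-scale storage plant, which is exactly why both the promised benefits and the control problems show up at the fleet level.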
And basically devices talk to each other; for example, consumers have smart agents that talk to each other. Collectively they can bring a lot of benefits to consumers as a whole, but there are also a lot of management issues. With so many distributed resources and no central control, how can the distributed resources behave in the way you want them to, and how can they actually deliver the so-called promised benefits to the whole system? In my view it's a great area that's still in its infant state, and most people in the research community are still trying to figure out solutions.

That's great, so there's an example. What about social systems? How do you see the social world changing in the next few years, network-wise?

I think one aspect now getting attention is that social systems are not isolated from physical systems. For example, in disaster situations we look into how people are connected to each other and tied to their communities and organizations. Before, most research treated social networks and social systems as one isolated field, and physical networks and infrastructure systems as another. Now there's more focus on how the different networks are interconnected with each other, so we might call them networks of networks, or systems of systems. For example, we've been looking into how people had to have good social networks, social information, and connections to get their utility service back after a hurricane. We would think of that as something isolated in the physical systems, but to get the service reactivated you needed social connections as well. So I think there's more emphasis on how the networks are interconnected with each other. Very cool.
First, perhaps the learning aspect and how learning gets incorporated?

Yeah. I guess one thing that is interesting, or changing, in our modern world is how we get information and how we learn from this information. A lot more of our decision making is algorithmic: someone is making the decision for us, or someone has written code for what the decisions are. This ranges from the information feed (what we read in the news is basically algorithmically decided by some of the news providers, like Google or Facebook's news feed) to any designed system where someone has written code or an algorithm to help us make decisions. This raises both interesting algorithmic questions and very interesting societal questions. These algorithmic tools decide what information is useful to have, and they can collect way more information than a human could, and therefore possibly make much better decisions. At the same time, they also have the potential to inherit some of the human biases around us and make biased decisions. One scary aspect, watching what's happening out there in the world, is that most of us trust these systems more than we should. That is, "I'm not biased; it wasn't my decision; it's the algorithm that told me to do this." Unfortunately, when these algorithms learn what to do from our behavior, they inherit the very same biases that are out there in society, and now we trust them more than we used to, or think they are more objective than they are. We need to think about how to make these algorithms make better decisions. Probably we should learn to trust them less, but we should also improve them algorithmically, and as someone who does algorithms for a living, I think we need to figure out how to remove the human bias from the algorithm's decision making.
We'll have follow-ups to that; there are a lot of interesting threads to pursue.

I guess if I'm trying to be provocative, I would say that we're moving toward social systems that are going to be jointly populated by humans and AI agents. Actually, following on what you're saying, there are AI agents interacting in our social systems right now; we just don't know that they are AI agents: false identities meant to spread propaganda or change people's behavior. I think that's what's going on right now, but what we will probably evolve to is having AI agents or chatbots that can remember everything about us and can be our best friend, because they can interact with us in the ways that we want. Then we'll be dealing with a joint system at the social level, where we have algorithms and humans coexisting. So that's my provocative statement.

Very provocative.

From my perspective (my research is currently in security), the defining feature of technology today, the strange thing about it compared to the past history of the human species, and this especially holds for networks, is how much it has empowered small groups to negatively impact very, very large numbers of people. We've never had that before in the history of our species. Let me give you an example. There was a flash crash a few years ago where around a trillion dollars of value was wiped out from world markets, and that was an oops, from someone who didn't intend to do it. You can imagine what someone who intends to do it could do. Why was it an oops? Because the person doing it did it from his parents' basement in London, with a really old computer actually, and he didn't want the crash, because he had already made 40 million dollars quietly manipulating markets and trading on the manipulation, and he wanted to continue making his money.
And what happened is, well, there was a positive feedback loop. You know what positive feedback is: it causes instability; a change in one direction self-reinforces and further self-reinforces. It just self-reinforced, and there was a market crash caused by one person. So I'd love it if someone could tell me whether there was any time in the history of humans when such a small group, like one kid, could wreak havoc on that scale. And you might think the markets recovered within 10 or 20 minutes. They did, but hundreds of billions of dollars were lost anyway. Why? Because when prices spiked down, people had automatic sell orders. Prices took the elevator down, the sell orders kicked in, and the money was gone; those people lost their money. Then, when the markets quickly snapped back and recovered, you and I didn't lose, because we didn't do anything; we were teaching our classes. But real money was lost, hundreds of billions of dollars, and now the kid is being extradited to the United States, because he caused all these losses, and we have no sense of humor when it comes to lost money. Last I checked, he faces something like 500 years in jail for a very long list of charges. So this is a challenge for us: what positive feedback systems exist in networks that we are not aware of, or that we are aware of but none of the bad guys have yet identified? I actually know some that could easily be exploited, not by a kid in his parents' basement in London, but by a nation state that wants to inflict huge damage on us. They could do that, and I could actually tell them how. It's really scary. So I think this will be a big challenge. I see a lot of other bad things happening, though I think there are good things too.
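The mechanism just described, automatic sell orders amplifying a price drop into a cascade, fits in a few lines. This is a toy sketch, not a market model; the prices, order spacing, and price impact are all invented to make the feedback visible.

```python
# Toy sketch of a positive feedback loop: resting stop-loss (automatic sell)
# orders sit at descending price levels; once a shock crosses the first
# trigger, each forced sale pushes the price down far enough to trigger
# the next, and the whole book cascades.

def simulate_cascade(price, shock, stop_levels, impact_per_order):
    """Apply an initial downward shock, then fire every stop-loss order
    whose trigger has been crossed; each fired order moves the price
    down by impact_per_order, possibly triggering more."""
    price -= shock
    pending = sorted(stop_levels, reverse=True)  # highest trigger fires first
    fired = []
    while pending and price <= pending[0]:
        fired.append(pending.pop(0))
        price -= impact_per_order                # the forced sale moves price
    return price, fired

levels = [99.5 - 0.5 * i for i in range(30)]     # 30 stops, every 0.5 below 100

# A modest shock of 2.0 crosses the first trigger; because each fired order
# moves the price (0.6) more than the gap between triggers (0.5), all 30 fire.
final, fired = simulate_cascade(100.0, shock=2.0,
                                stop_levels=levels, impact_per_order=0.6)
print(f"crash: price {final:.1f}, {len(fired)} stops fired")

# A slightly smaller shock stays above the first trigger and nothing happens.
calm, fired_none = simulate_cascade(100.0, shock=0.4,
                                    stop_levels=levels, impact_per_order=0.6)
print(f"calm:  price {calm:.1f}, {len(fired_none)} stops fired")
```

The instability lives entirely in one comparison: whether each order's price impact exceeds the spacing between triggers. Below that ratio the system absorbs shocks; above it, a small shock liquidates the whole book.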
I think productivity will go up, unlike what we've seen recently, where productivity is sagging, because there hasn't been enough time. The electric motor took 40 years to achieve its potential, and it didn't achieve that potential merely because it was miniaturized; it achieved it because people figured out how to use it in different ways, on assembly lines. So I think productivity will improve tremendously because of networks. But I think other things, like the truth, will be a victim, because of human biases. It's very well documented; it's called confirmation bias. Confirmation bias is when you hear something that agrees with your prior biases, you tend to believe it, and even when someone later shows you that it is patently false, it still operates in your mind and you keep making inferences based on it. And a truth that is inconvenient, that does not match your prior biases, is shelved. We have these mental compartments. So this is really not good news, and it was alluded to by the others. I will stop here, and maybe people can ask more.

Yeah, so this brings up two questions on the reasoning side: one from the machine learning side, about the tools we use, and the other about the people using those tools in the world. We can take these two branches, and security will definitely come up. Let's go to the machine learning side first. These different tools, deep learning and so on, need a lot of data, and they can learn biases in the data, as was brought up. How do we know which data is right to learn from, and beyond that, how do we avoid these biases? Any thoughts on that, perhaps from the machine learning side?

Yes. Well, how do we know what data is right? That's been a question for all of human learning as well as machine learning.
I think a lot of the issues you see in the news right now, with bias showing up in our algorithms, come from the fact that machine learning developers really have not thought much about the input data. We just frame it as an optimization problem: somebody gives us some data, we learn a model to predict something, and we have blindly trusted, or not even thought about, whether that data is the right data to get an accurate prediction model for whatever the application is. There's a lot more focus now on how to collect the right data, especially in situations where it's not entirely clear what you're trying to measure and the available data is only an indirect measure of what you really want to get at. For example, if you're trying to get at people's utility functions or preferences, we never have data on that. We have indirect data on the actions people take in the world, and often, from the machine learning perspective, we've just said, okay, that's good enough, we'll take it at face value. We really need to rethink what the impact of the data is on the models we build, and a computational framework that lets us measure the amount of error due to bias in the data is something I think we should move toward. A lot of theoretical analysis of these methods focuses on decomposing error under the assumption that you have data from the true distribution you're trying to model; even our analytic frameworks don't acknowledge that we might have the wrong data. What the right data is may be a better question for the social scientists than the computer scientists.

That's where I was going to go next, to the social scientists.

I guess one way I can question your question is that in some of these social decision settings there isn't "right" data out there.
The data about decision making came from humans, and we know humans are biased, so there isn't objective data out there. For example, in hiring someone or admitting students to your institution, you're trying to make a decision about someone's future potential. There is no data about this person's future potential. There might be data about the future potential of people like them, correlated with actual success when you admitted them, and maybe you can learn from that. But I guess one thing that would help is to explicitly admit, and try to study, what biases the human decision making had, and try to build that into your decision-making system. The goal of machine learning, as a simple optimization or prediction question, is classically, or simplistically, phrased as: the machine should make the decision a human would make. You have the machine make decisions, then on a few instances you ask humans, and if the humans make the same decision, your machine was good. Well, if we think humans are making biased decisions, then this isn't the right goal. We need to change the goal to making decisions that correct for these human biases. For that, we first have to use the data to learn about the human biases, and once we've done that, there is hopefully a chance to build machine learning tools that, instead of copying our biases, correct for them. So I don't know if "the right data" is the question; I think how we use the data, and how we define the objective function, is where the answer lies. But I should let the social scientist answer.
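The point about correcting for, rather than copying, human biases can be made concrete with a toy example. Everything below is invented: the scores, the hidden penalty the "human" applies to one group, and the assumption that the penalty can be estimated (say, from audited outcomes) are all placeholders for the much harder real problem.

```python
# Toy sketch: a model that imitates biased human labels inherits the bias;
# estimating the bias and building the correction into the decision rule
# removes it. All numbers are invented.
import random

random.seed(0)
PENALTY = 2.0     # hidden bias: the human docks group "b" this many points
THRESHOLD = 5.0   # the human accepts when the (biased) score clears this bar

# 4,000 candidates: a merit score in [0, 10] and a group, "a" or "b"
people = [(random.uniform(0, 10), random.choice("ab")) for _ in range(4000)]

def human_label(score, group):
    """The biased human decision we observe in the training data."""
    return score - (PENALTY if group == "b" else 0.0) >= THRESHOLD

def corrected_label(score, group, est_penalty=2.0):
    """Credit back the estimated penalty before applying the human rule.
    In practice est_penalty would have to be learned, e.g. from audits."""
    return human_label(score + (est_penalty if group == "b" else 0.0), group)

def acceptance_rates(decide):
    rates = {}
    for g in "ab":
        scores = [s for s, grp in people if grp == g]
        rates[g] = sum(decide(s, g) for s in scores) / len(scores)
    return rates

naive = acceptance_rates(human_label)      # imitate: group "b" is penalized
fixed = acceptance_rates(corrected_label)  # correct: roughly equal rates
print("imitated:", naive, " corrected:", fixed)
```

The naive rule accepts group "b" far less often even though merit is identically distributed in both groups; the corrected rule changes the objective rather than the data, which is exactly the shift being argued for above.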
To take the idea from data to information: I think what's more interesting now, and we talked about this a little before coming to this panel, is that people think they have more choices now in terms of accessing and screening information. Before, we would have thought of it as one-way mass media: you turn on your TV, you have a selection of channels, but that's about it in terms of how much, and how diverse, the information you can get is. Now, with all these more user-oriented media and social media, we think we have a much broader array of information available to us. But with the algorithms embedded in those Facebook feeds and all these curated news feeds, it might not be that we are really selecting the information we want to access; it could be determined by something else out there. So there's the question of human agency, and our belief in our agency, in terms of being able to use and utilize information and data.
At the same time, I think the thing that is happening in that space is that all the people developing the algorithms and the systems are optimizing for user engagement. So if we are getting a system that limits information to us in a particular way, at some level a response from the algorithm designers could be: well, it's getting to be that way because the humans prefer it. If you're on one side of the political spectrum and the algorithm gives you articles from the other side, the humans don't like that, and they go to a site that gives them the kinds of articles they want to see. So we are interacting with the algorithms in a way where, I think, we see ourselves as less biased than we actually are. And maybe that's one of the things you wanted to talk about: as a society we're moving to this polarized setting because we actually prefer it. Or, I don't know, maybe we should think about it as a game: we end up in a situation that nobody prefers, but because of our actions in relation to each other, we create this polarized system.

Yeah. Let's say we have these tools up and running and they make decisions for us, for example in the energy grid situation, or in general, and there are potential security issues. If something goes wrong, who's responsible? Is it the algorithm's creator?

I can answer that: no one is responsible. That's the way blame gets offloaded; no one is responsible and everyone is responsible at the same time. I'm serious. Suppose I write an algorithm for pricing or for trading. I cannot predict how it will behave. Will it create a positive feedback loop that wreaks havoc in the real world? I would be foolish to tell you either way. Why?
Because it depends on the algorithm she is writing. In a very trivial example of this, a trivial cycle of length 2 created positive feedback on Amazon, where a textbook (we were just talking about this) was priced at 23 million dollars. You could actually get on Amazon, look up that book, and see the price: 23 million dollars. It happened because my pricing algorithm read her price and reacted to it, her algorithm read my price and reacted to it, and it took all of three days for the price to go from something reasonable, like 100 bucks, to 23 million. This is a trivial example, and easy to catch; no one is going to buy the book for 23 million. But if you're trading and doing things like this... So, when the algorithms are in charge: have you seen the movie Fantasia? The sorcerer's apprentice in Fantasia, playing with forces he cannot control? That's the situation we are in: a whole bunch of sorcerer's apprentices doing things that they cannot control, nor can they predict the consequences. So yeah, that's my take on it.

Yeah. So maybe, say, in the energy situation: if we had AI agents or something controlling things, and all of a sudden the Northeast got shut down for energy because of some unforeseen circumstance, who would the public want to hold responsible then?

Right, and again the answer, unfortunately, is no one is responsible. This is what I was just discussing with the panelists before. My view is indeed to have algorithms control the distributed resources, because it doesn't make sense for individual humans to pay attention 24/7 to what the power prices are and what the current state of the power system is; nobody cares. So it makes a lot of sense for AI agents, or whatever algorithms, all under the umbrella name of smart things, to do this. It's supposed to be smart, right?
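The runaway book price can be reproduced in a few lines. The multipliers below are roughly those reported in public accounts of the incident (one seller slightly undercutting the other, the other pricing at a markup above the first), so treat this as an illustrative sketch, not the actual sellers' code.

```python
# Sketch of the two-bot repricing cycle behind the 23-million-dollar Amazon
# textbook. Seller A always slightly undercuts B; seller B always prices at
# a markup above A. Because undercut * markup > 1 (about 1.27 here), each
# round of mutual reactions multiplies both prices, so they grow without
# bound until a human notices.

def reprice(a, b, undercut=0.9983, markup=1.2706, rounds=50):
    """Run the reaction cycle: A posts undercut * B, then B posts
    markup * A. Returns the two prices after the given rounds."""
    for _ in range(rounds):
        a = undercut * b   # A reacts to B's last price
        b = markup * a     # B reacts to A's new price
    return a, b

a, b = reprice(a=100.0, b=100.0)
print(f"after 50 rounds: A = ${a:,.0f}, B = ${b:,.0f}")
```

The design lesson is in the single product undercut * markup: each rule is locally sensible (undercut the competition; price above a rival who might sell out first), but composing the two gives a growth factor above 1, which is exactly the "locally reasonable, globally unreasonable" incentive failure described later in the discussion.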
But then, when I do this research, I keep thinking: what if there's a hacker who hacked the system so that it sends out a false signal, say at 2 a.m., making it look like everybody is using electricity when actually they're not? There are physical ways to check what the actual demand is, but if everything is digital... This is the cyber-physical system issue in the back of my mind. The cyber side is very easy to hack, to manipulate, so we need the physical side to check whether the cyber side is correct. My colleagues use TV shows as references, so I will too; I'm actually watching Mr. Robot. The point is, basically, you cannot hack analog things; you can hack digital things, but not analog. The power system happens to have an analog physical side, and nowadays it's more and more connected to the cyber world. So my view is that to have a really robust system, the physical world has to be able to check whether the information the cyber world gives is correct. If that can be done, and it should be the responsibility of the grid operators, or whatever the central facilities are, then that would probably minimize the cyber-side risk. That's just my view; how exactly to do it is another matter.

Yeah. So, do you think we can create AI agents without them running amok? Can we actually control the things we're building, maybe through incentives, setting up the right games for these AI agents to live in? Can we build these systems without losing control?

I mean, in principle AI agents have a lot of great potential. The point of using AI instead of human decision making is that AI has access to more information, can process more information, and can think through consequences better. This is why AI chess algorithms work better than human chess players, no matter how well trained the humans are, and it's true for every other game: AI can take advantage of incredible computational power and incredible access to information. There are two issues that are really intertwined here. One is AI having bad or unfortunate consequences even without anyone's bad intention. In the example Mike brought up, the book priced at some insane amount of money, no one wanted the book priced like that; there was no bad intention. I believe it's a very standard kind of feedback loop: something is in really low supply, one seller thinks, maybe I can sell my copy a little more expensive than, say, Michael, and unfortunately Michael also doesn't have much supply and prices his just a little bit above hers, because he's not in a hurry to sell his books. If we feed back on each other, that's what happens: the book price goes through the roof, because each of us wants to be just a few dollars above what the other charges. There were no bad intentions; this is incentives going wrong. Our incentives were set up, algorithmically, to do something that seemed locally reasonable, but globally the effect wasn't reasonable. And then there is the other issue that you, and several other panelists, brought up: if someone's intention is to wreak havoc, can we control them, can we defend against it? I should defer to someone like Mike, who does security. It definitely is a big issue how much we can control these things, and part of it is speed: big effects happen at a speed at which human intervention is often not viable. But Mike is more of a security expert, so he knows more than I do.

Well, I think there are two things that are really, really bad. One of them is that,
economically speaking, and effort-wise, the attacker has an easier task than the defender. If I'm attacking your system, I need one weakness and I'm in; it's a weakest-link situation. But if you're defending, it's a sum of efforts, a summation: you have to eliminate pretty much all vulnerabilities, because I will try every single one of them. So already the odds are stacked against the defense and in favor of the offense. That's one factor. What makes it lethal is that the incentives are completely misaligned; the incentives are perverse, really perverse. Organizationally speaking, the people who make decisions about security are not the ones who pay, or suffer, if things go wrong. So they have every reason not to do security effectively: it hurts their bonus if they spend on security, and if something bad happens, it's some other person who gets the blame. The incentives are misaligned. Think about it: in the recent denial-of-service attack, Internet of Things devices participated; the toaster in your home, so to speak, participated in crashing some servers. Look at the players. There's the buyer of the toaster and the maker of the internet-connected toaster. What incentive do either of them have to spend more money? Suppose the toaster is five dollars now; why should the maker make it seven? "I make toasters; my customer doesn't want to pay more than five dollars." And the customer, what reason do they have? Who is suffering from the vulnerability? Not the maker of the toaster, not the buyer of the toaster; it's some server in Europe that got crashed by billions of toasters. The victims don't even know who crashed them, because there's no accountability on the internet; there's no reliable trace-back. All they know is they got flooded with packets and their server crashed. They don't know it's your toaster. You see the situation: why should anyone spend money on security? Security is really a tough sell. Think about it: what do you have to show for it? What you have to show for it is what did not happen. So I go to my boss at the end of the year and say, thank you for the half million dollars; you see, nothing bad happened. You know what they would say? Across the hall they didn't spend the money, and nothing bad happened either. You're fired. It's a way of getting fired, really. So it's really, really tough. And look: 20% of users use their pet's name as a password, and 100% of users reuse passwords; the one for this bank is the same as the one for that bank. You might wonder, well, I'm not stupid, it's not my password. But it affects you too, because it's usually a dormant account, someone else's account, that compromises you. For us in computer science, if we have an ancient account with a weak password, we're all at risk; it's not just that one account. The anatomy of a break-in is: they find an account, they break into it because of a weak password or something, and then they do something called escalation of privilege. Escalation of privilege is basically exploiting a software vulnerability to become root, to become administrator, and then you can't say it's only that user's problem. We actually ran a password attack on our own faculty once, many years ago, and we reported the weak passwords to those faculty members, and I remember hearing things like: look, there's nothing of importance in my files; I could put them on the internet; I don't care who sees them. And you have to explain: it's not about you, it's our system, the integrity of our system you're endangering. We tried to force people to use secure passwords from random number generators, and that was a very brief and sad episode, because then you'd walk into a colleague's office and see a little sticky note with the password written on it. Because how else are you going to remember? So the janitor could see it, and actually I could see it; my vision was much better then; without squinting you could read it. Anyway, I'm not optimistic, I'm afraid, and something needs to happen, I think, at the legal end; the legal system has something to do with it. If you look at other technologies, it took decades, half a century practically, for people to realize you have to have accountability. In the 1920s and 30s, early on, when people crossing the street got hit by a car, they didn't sue the driver; they would pick themselves up, start walking, and limp for the rest of their lives. It's the lawyers who figured it out, and now we have a system where cars are much safer. But it took a long time; a new technology takes a long time, and we are in the infancy. We think of it as advanced, but we are actually in the infancy of the information age and networking and all of this. So I think accountability is very important, legal accountability, and we don't have it. Quite simply: the entity that can do the most to mitigate the risk should be the one that bears the liability and the consequences, and the legal system has to change to make that happen, to create accountability; otherwise the problem will never be solved. And I am confident it will eventually happen, because it has happened with other technologies, and hopefully it happens before we hit the digital apocalypse or something like that.

So that's one aspect. We could keep going on AI and accountability and security for a while; we do have a time
limit though but I do want to get the human side of everything and so how is the interconnected world perhaps safety and social aspect to start off with how has that changed maybe how we think about things even from a purely logical perspective we had these old reductionism and determinism that got us so far the interconnected world has that changed whether we can use those rules or the extent to which we can use those old side connectors going back to the issue of control and security and things I think the extent to which we can understand how these patterns of interconnections work or even intervene and design systems or connections or networks that work in different ways because the outcomes of systems in this extent to which the system collapses or fails after a small amount of change or like like when you mentioned weak link weak links the extent to which system outcomes make differences will depend on how the system is engineered and designed so for example a system with very central hubs will act very differently than a system that's more decentralized and more egalitarian so the extent to which we can understand and that's the field of network science that the way as much as we can understand and intervene or design networks we have much better idea of control of the systems and that's how we can so networks we take complex networks kind of where we have this organic structure that builds up can we use something like the old tools like a can we reduce a large complex system to the parts can we actually get something out of it does that logic make sense when the system is quite different from how we thought about things 50 years and previous any thoughts on that maybe even from the education perspective how do you teach people to exist in a world where there's how's that different from taking a physics class of 30 years ago any thoughts on on that it's a little bit trickier of a question I mean there's a lot of feedback effects which is something we have 
been talking about, and I guess Michael was talking about the two-person feedback effect, which is already there. But there are other opportunities for very unfortunate feedback effects on large networks, where the size of the effect scales with the size of the network: if there are n people involved, then it multiplies by this n, which in a large network is very damaging. One example of this is exactly the security, or vulnerability, game, where, as Michael points out, the weak links matter, and often the weak links are in a social network. If I go back to something classical, this very same thing happens in the spreading of infections. We somehow got society to agree that we need to inoculate against all kinds of infections. In a selfish way of thinking about it, you should actually not; I don't mean to advocate this, and I did have my kids inoculated. But selfishly speaking, if everyone else already uses vaccines, then every vaccine carries a tiny bit of danger for whoever gets it, and the selfishly best move for me is that you all get the vaccines; then the disease is gone and I don't have to expose myself to the little bit of danger that the vaccine poses. The exact same thing happens in all kinds of security settings. The best thing for me is if all of you buy and install the updates from Microsoft, or whatever is needed to secure your computers, and then when I connect to your computers on a network, I don't have to bother with these things and I'm not exposed to them. But society has moved to a system in which vaccination is essentially automatic, and except for a tiny minority, most of us believe that of course you vaccinate your kid; yet somehow we are in a state where most of us do not vaccinate our machines, or don't buy the proper protections. There is no way to change the incentive system; the incentives stay the same. It's in your
best interest for everyone else to vaccinate while you do not; that is the best outcome for you. Yet our society moved to a system where almost all the kids get vaccinated. I don't know if I want to predict the future, but I'm hoping that we are going toward a system where securing our machines will become automatic too, and we will do it despite incentives pointing the other way. We cannot change the incentives, but we can maybe change the system, and we have to get there somehow, either through the legal system or through culture. There is no legal penalty, or only a very low one, for not vaccinating your kid, yet almost all people do it, and I hope that security will at some point become like this. But that's just a hope.

I think it's a very good hope, and I have the same hope. However, one of the earliest results in security, and actually it holds not just for security but for most interesting properties, such as being ethical, is non-composability. Security is not composable. What that means is: you have a system whose pieces are individually secure, you put them together, and all hell breaks loose. Security is not composable, and other interesting properties are not composable either. It's a bit like the price of anarchy. Why? Because if you had a benevolent designer who imposed constraints on the pieces, they could actually design a secure system. The IBM 360, and I'm dating myself here, that very ancient, historic machine, was built and tested as one unit, and for economic reasons we don't do that anymore. If you look at a browser, it's little pieces written by different people that are perfectly fine individually; they are very secure, very high-quality pieces, but someone puts them together in a certain way and then you have a very insecure browser. It's interesting to look at that from the perspective of the benevolent
dictator who has a centralized system, versus the highly creative but very insecure situation we are in. Creativity in the current situation is actually the flip side of the insecurity. I mean, it's amazing: you have this primordial soup of pieces bumping into each other, and we design new artifacts and new things that enrich our lives because of that, but the side effect is that they're not secure. It's inherent, in some sense.

If I try to go back to your question about interconnectedness: I think we always had interdependent systems, social systems, physical systems, natural systems. From a scientific perspective we just decomposed them, because that was easier to do at the beginning. What's changed is the scale of the systems we have now and the speed at which information, or data, or disease can spread through them, which has really made it critical to start studying those dependencies. But they were always there; the fields are only now starting to actually study the dependencies in order to understand what's going on. So I don't think anything fundamentally new is happening, but maybe it's becoming more important that people study it that way.

Yeah, it was less that it's a new phenomenon and more that these kinds of systems are growing rapidly: do we need people to think differently in that kind of world? Before, these systems still existed but they weren't so important; you could decompose something even if you really shouldn't have. Now there are cases where you really shouldn't decompose, and in terms of how you think about it, how you draw conclusions, how you understand it, is there some kind of change in education that we need?

Yeah, I guess I was thinking about it more from the control perspective. With the typical ways people would think about control, regulations like those for vaccinations, there were
decisions that could be made much more locally before. The US could make a decision on a regulation and that would change the basic behavior in the US, without so much influence from other countries. But now the ties are so strong, in terms of travel, migration, and connections on the internet, that the ability to have centralized control is pretty much gone, and so control decisions have to be made in these more complex systems, taking into account the decisions other actors are going to make.

I'd love to follow up on that, but I want to make sure the audience has a chance for questions, so if anybody wrote down questions, just raise your hand; it's a small enough room. I saw you two both writing.

You actually answered most of my questions, but I have one you touched on without answering completely. You talked about human biases, and you discussed that an AI can have biases built in to gain profit for its creator. But we still have humans, and we can create AI with biases that will impact humans. It won't be an interaction between two AIs where everything collapses, and it won't be like a virus; it can be something social, like fake news, with algorithms built to exploit this feedback. So it's more a question about learning: how can we actually defend against this type of attack? How can we, as humans, protect ourselves in interactions with an AI whose built-in goal is to focus on our weaknesses, not on some computer or mechanical weakness, but on human ones?

Who wants to take that one? It kind of gets back to the intended question. I would defer to Mike's previous comment that it's very difficult to protect from
these kinds of attacks. When I said that they're already in all the systems, you can see that in games that are trying to make people addicted to them, and in social and information systems where you end up getting more and more information in a biased way that's focused on your preferences. It's hard to think about how to protect yourself from that without understanding our own human behavior, what our preferences for those biases are, the psychology of it. So I think what has to happen is that the developers of the technology, the algorithms, and the systems have to combine with the psychologists, to take the ideas they have about how we form habits, how we look for information that supports our previous views, how that differs based on the topic you're focusing on, how to exploit people's fears. All of these things are coming from knowledge in psychology, and I'm not sure the algorithm developers are the ones to figure out how to protect us from that; really, we need the psychologists to figure out the right mechanisms or incentives to move people out of those scenarios. And based on your talk yesterday, maybe it's not even possible: we're in a complex scenario where it's going to be hard to motivate people to move away from their own selfish perspective. I guess the technical term in AI would be that there's an explore-exploit trade-off, and the algorithms are exploiting, exploiting, exploiting, and we're letting them, because we want them to: we get a benefit from them being biased toward our preferences. Then we get down the road and say, hey, wait, I didn't want to end up here, but along the way, every time they tried to explore and show us something different, we just didn't take that option. So I'm not sure how to control that; I would say we'd have to learn to control our own behavior and not selfishly optimize for our own preferences. Maybe from the social side of
things, there is data that says how bad we are at controlling ourselves.

I agree with the idea of having to understand human motivations, how people make decisions and form biases. Some research has looked at how fake news and incorrect information spread more quickly than correct information on Twitter and similar channels. So understanding whom we form ties with, whom we would like to get information from on these topics, and how that depends on context, understanding how people access information and form ties, would be necessary for understanding the types of biases that are inherent in a community.

Isn't this just a transitory problem? If you think about it, the fundamental assumption was that we trusted news that came through the media, or we trusted algorithms to act in a way that was good. We do this all the time: you go to a used car salesman and you don't believe what they're saying, because you're tuned to taking that information as something that isn't going to be trustworthy. I trust my doctor in a different way than the used car salesman; I have learned that behavior. When I was born, I probably didn't know the difference between a doctor and a used car salesman. In the same way, with this information that's coming in, the assumption was, because this was a new model or a new mechanism, that the algorithms, the AI, all of this stuff, was going to be trustworthy, and that's what we were incorporating in our decision process. Now that everybody talks about fake news and nothing is trustworthy, we're going to incorporate that in our filtering of that information. So isn't this just a blip? We'll get fooled by the next actor that comes in, maybe some next version of AI, and there will always be something new, but isn't this something humans will keep adjusting to?

It's already happening, maybe, but AI decision-making is affecting all of your lives,
and we all, not necessarily the five of us on the panel, but as a community, are involved in writing these AI tools, and I guess we should do our best, or do better than we are doing now, at having these AI tools help rather than hurt. AI decision-makers are helping companies read CVs and figure out whom to hire; yes, at the end they're going to interview people, but not all thousand applicants, only some of them, and the first phase of processing is now an AI system. The bank will give loans to some people and not to others, and at the end some decision is made by a human, but some amount of filtering, or a rule, or a recommended rule, is written into today's systems as machine learning. Sentencing in criminal cases, or setting bail, also has a much more algorithmic aspect nowadays. These algorithmic systems have potential: they can take in more information, use more information, and be better than the randomness of a human decision-maker. But they can be worse too, and at the moment we have both an algorithmic problem, in that some of these decision-makers are actually pretty bad, and a legal problem, which I guess Mike was talking about more. If a human decision-maker made a decision against you, say set the bail so high you couldn't get out of jail, maybe there is an appeal process; but if it's the algorithm that made the same decision, unfortunately the appeal process uses the very same algorithm. There is only one algorithm for setting bail, and they use that same algorithm and say, well, it's not my fault, the algorithm decided. And since it's machine learning inside, there isn't even a proper explanation of why on earth you can't get out of jail, or why you didn't get your loan, or why you didn't get hired. Machine learning is great, it can do amazing things, but one thing it's not doing so far is explaining why it made a decision. So you're not entitled to an explanation, and there is no appeal process
available, and you're just hosed, and it could have been the wrong decision; clearly not all decisions are perfect. So this is an area where algorithms can help, where algorithms are currently helping, and we should do better at making these algorithms use the information right.

And more importantly, people will trust what the algorithms are doing when the outcomes are consistent with their own biases. So it's not as obvious as the shady car salesman: if you yourself have a personal bias that would show up in the loan system, then when the algorithm makes decisions that reflect what you think is appropriate, you just trust it, you stop debugging, and you move on.

Once again, I don't know the answer, I'm just raising a question: would crowdsourcing be a solution? For example, Facebook is putting Wikipedia alongside news items to check whether something is fake news or not, so basically using the crowd as a way to correct bias. I don't know, I'll just throw that out: the large population as a check.

Yeah, so I sometimes give students this example with respect to self-driving cars and what sort of decisions they would make. You're thrown into a situation where you're driving down the road and you're going to have to crash: you can either keep going into a big multi-car crash and kill fifty people, or veer off to the side of the road and kill your grandmother. Which do you pick? I think one of the issues is that we expect our algorithms to make one decision, the same decision every time; we come up with a deterministic algorithm and then people argue about the utilities of those outcomes. But when you take the crowdsourcing view of it, as humans we're not all going to make the same decision in that scenario. I hesitate to say that we should have some sort of randomness in the algorithms, but in cases where we don't all
agree on the utility of a particular outcome, I think it's an interesting philosophical issue to think about what the algorithms should do, because in a crowdsourcing scenario there will be one predominant view, but not everybody is going to hold exactly the same view. So I don't know if that actually answers your question, but that's what I think about it.

I think that question has a separate panel associated with it, so we'll go to the next question.

Building on some of the previous comments: when you think about the choices and preferences the gentleman was talking about, and the biases, we as humans come up with heuristics; I think that's the idea with the salesman. Those are cases we are not encountering on a minute-by-minute basis, so we have sufficient time to do some learning and develop a heuristic about those decisions. But when you think about these algorithms that surround us all the time and influence our thinking, we are also strategically changing ourselves; it's a dynamic environment, going back to your talk yesterday. And you are competing with other AI algorithm writers; it's a competitive situation, everybody is writing their own AI tools. If you're trying to do good things, but people are changing on a daily basis based on the information they're getting, how do you deal with that and keep a clean conscience? If that fails, then we head to the digital apocalypse for sure.

I don't know if you wanted to take up that question. I'm not so sure; true, the world is changing, but good AI systems can adapt to a changing world. I'm much more worried about the incentive misalignment: as was pointed out, a lot of the time, if you write your system to be a little more in line with what people prefer, people will use that one all the time, so if you write something that is socially good, you might just lose out
in the competition to someone who didn't care about that aspect. And even accepting that, individually we might all say we are for the socially good, but our actions speak louder than our words, and our actions suggest that we are selfishly optimizing for ourselves even when we say we don't. That's what I'm really concerned about. I guess the most positive view I can offer is the vaccination story: we somehow are vaccinating our kids, it's really cool that we are, and I hope it will stay that way, and I hope the electronic world will somehow become more like the vaccination story. But it might not.

I guess we're basically out of time. We left off at learning, and how to reason in this complicated world, and I guess the answer is that we prefer not to and to let somebody else do it. So we'll leave it at that. Thank you very much for attending; I think it was an excellent panel. If we could thank the panelists, that will be it. Have a great day, and thank you very much.
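The free-riding incentive in the vaccination discussion above, that once everyone else is protected, your selfishly best move is to skip the small cost of protecting yourself, can be sketched as a toy best-response calculation. All payoff numbers and the risk curve here are illustrative assumptions, not anything the panelists specified.

```python
# Toy N-player vaccination (or patching) game illustrating free-riding.
# The risk function and cost constants are made-up, for illustration only.

def infection_risk(others_vaccinated, n_others):
    """Chance an unprotected player gets infected; falls as coverage rises."""
    coverage = others_vaccinated / n_others
    return 0.5 * (1 - coverage) ** 2  # roughly zero at full herd coverage

VACCINE_COST = 0.01     # small fixed cost/risk of protecting yourself
INFECTION_COST = 1.0    # large cost of getting infected

def best_response(others_vaccinated, n_others):
    """Protect yourself only if expected infection cost exceeds vaccine cost."""
    cost_if_skip = INFECTION_COST * infection_risk(others_vaccinated, n_others)
    return "vaccinate" if cost_if_skip > VACCINE_COST else "free-ride"

# As coverage among the other 99 players rises, the best response flips
# from protecting yourself to free-riding on everyone else's protection.
for k in (0, 50, 90, 99):
    print(k, best_response(k, 99))
```

With these numbers the flip happens somewhere between 50 and 90 protected neighbors, which is exactly the incentive problem described: the safer everyone else makes the network, the less any individual gains from securing their own machine.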
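The explore-exploit trade-off mentioned in the panel, that a recommender which only exploits keeps reinforcing what we clicked first and never offers the option we might have preferred, can be illustrated with a minimal epsilon-greedy two-armed bandit. The reward probabilities and the two-arm setup are assumptions chosen for illustration.

```python
import random

# Toy two-armed bandit: a recommender choosing between two content types.
# With epsilon = 0 (pure exploitation) the tie-break sends every pull to
# arm 0, so the genuinely better arm 1 is never even sampled.

def run(epsilon, true_means, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0, 0]
    means = [0.0, 0.0]  # running estimate of each arm's reward
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                  # explore: random arm
        else:
            arm = 0 if means[0] >= means[1] else 1  # exploit current estimate
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts

# Arm 1 is actually better (0.6 vs 0.4), but pure exploitation never finds it.
print("epsilon=0:  ", run(0.0, [0.4, 0.6]))
print("epsilon=0.1:", run(0.1, [0.4, 0.6]))
```

With epsilon = 0 the second count stays at zero, mirroring the panel's point: every time the system could have explored and shown us something different, it did not, so we never learned there was something better.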
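The remark that a system with very central hubs acts very differently from a decentralized, egalitarian one can be made concrete with a toy comparison: remove the best-connected node from a ten-node hub-and-spoke star and from a ten-node ring, and measure what stays connected. The graph sizes and shapes are illustrative assumptions.

```python
from collections import deque

# Toy robustness comparison: a 10-node star (one central hub) vs. a
# 10-node ring (fully decentralized) after losing one node.

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

n = 10
star = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

hub = max(star, key=lambda u: len(star[u]))  # the central node, 0
print(largest_component(star, {hub}))  # → 1: the star shatters
print(largest_component(ring, {0}))    # → 9: the ring barely notices
```

Losing one node destroys the hub-and-spoke network but leaves the ring essentially intact, which is the engineered-structure point from the discussion: how a networked system fails under a small change depends on how it is wired.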