Our next speaker will now focus especially on the risks that threaten a modern network society as a result of security gaps in information technology. And here, too, we could not wish for a better expert. David Basin is full professor and soon, I guess, dean at the Department of Computer Science at ETH, where he heads the information security group. His research focuses on information security, in particular on foundations, methods and tools for modeling, building and validating secure and reliable systems. He is editor-in-chief of the ACM Transactions on Privacy and Security and of Springer's book series on Information Security and Cryptography. He is also the founding director of the Zurich Information Security Center, which he led from 2003 to 2011. I suggested earlier on that, as beautiful as their words always sound, philosophers often don't help us much to understand the real world. However, I would like to expressly exclude the Harvard philosopher Daniel Klein from this accusation. His main thesis is that complex phenomena are often best understood with a joke. I take the liberty of trying out this method again to call our second speaker onto the stage. How can we best control complex systems today, and what are the risks? Perhaps the joke about the interaction of man and machine in a modern airliner, a very complex means of transportation, will give us some interesting insights. The joke goes like this. The passengers in an airliner are waiting impatiently for takeoff. The airliner is full down to the last seat. Suddenly two men in pilots' uniforms board at the rear entrance. Both are wearing dark glasses. One is led by a guide dog, and the other struggles with a long white stick to find his way along the aisle to the cockpit, into which both disappear. Nervous scraps of conversation, anxious laughter in the cabin. Everyone is hoping for a sign that this is just a joke. But the engines are already starting.
The plane rolls onto the runway and accelerates for takeoff. The passengers in the window seats press their faces against the glass and see that the runway ends directly at the open sea and that the airliner, although getting faster and faster, has not yet lifted an inch. Just as they reach the end of the runway, all the window-seat passengers begin to scream loudly. But at that very moment, the plane takes off gently, the screaming in the cabin stops, and the plane flies peacefully towards the clouds. In the cockpit, one pilot turns to the other and says: you know, Bob, one day the passengers will scream too late and we'll all die. I don't know what you think, but personally, despite my skepticism about the anonymous algorithms increasingly controlling my life, in this situation I would prefer to rely on the airplane's autopilot. Ladies and gentlemen, please welcome our next speaker with an applause as deafening as the concerted revving of all four engines of an Airbus A380, so that he knows he need not spare us anything when describing the serious IT risks in our modern society, since we have already fastened our seat belts. Professor David Basin, ladies and gentlemen. Thank you very much. It's a pleasure to be here. So in my talk, I'll be looking at information security: not only the question of what, but also the why and the how, and, in the end, its relevance for financial advisors and investors. If we look at the world today, it's driven by cyber-physical systems. This is the case in many areas of society: for example, finance, communication, energy distribution, entertainment, and warfare, as we just learned. There we have systems calling the shots. These are IT systems that are networked. They work together. We have systems of systems. These systems, moreover, sense the world and react to the world, causing effects in the world. Think cyber-physical. Unfortunately, we're increasingly less secure.
These systems are increasingly complex, and they're embedded in other systems, which are embedded in the world. And I'll have a lot to say about that. And moreover, there are dramatic changes in the threat landscape. If we go back far enough, we have attackers who are sometimes cutely called script kiddies. And indeed, back in the 80s and 90s, people would hack computers for fun. They didn't even realize the damage they could cause. In the early 2000s, we saw the first really malicious worms that would try to do damage. In the meantime, attacking has become very professional and attackers are very skilled. The best attackers are nation-state adversaries. I like to call them NSAs. But they're not just the Americans. The Chinese and the Russians are also very good at this. The Iranians are quite good at this, too. And these adversaries are very skilled, very professional, very well-funded. They have tremendous resources and tremendous patience. I'll come back to them. Let's first look at complexity. So systems are complex. Here I have various systems ranging from pacemakers to cars to platoons of cars to self-driving systems, et cetera. Let's take a look at one of the simpler systems we have here, a pacemaker. A typical pacemaker has about 100,000 lines of code in it. But that's not all. We have systems of systems. So if I have a pacemaker, my pacemaker talks to my telephone, so I can see, for example, data about the therapy I'm receiving. My telephone talks perhaps to the cloud of the pacemaker company, so they can collect data and do analytics. Furthermore, there's probably a PC in some hospital, perhaps with a 10-year-old operating system that hasn't been patched for years, from which the doctor can, via the cloud and my cell phone, reprogram my therapy. So although the pacemaker itself is relatively simple, with 100,000 lines, we have a system of systems, and if you can attack any of these systems, you can kill the patient.
With cars, the situation is a little bit more complicated. A typical car has about 100,000,000 lines of code. But it doesn't stop there. Again, we have systems of systems. So in a typical self-driving car infrastructure in the future, you will have the cars speaking with each other. They'll be speaking with roadside infrastructure. They'll be speaking with the car manufacturer, who might do software and firmware updates on the fly. Again, if any one of these components has bugs, and trust me, they will, then attackers can take over your cars. The situation, of course, is similar in other domains. Think military, for example. These concerns are not just hypothetical. It really is the case that the attackers are not just good and professional, but they're doing what they can. To begin with, everything is being hacked. Believe it or not, there are hackers who specialize in breaking into implantable medical devices. Examples are not just pacemakers, but also cochlear implants, insulin pumps and the like. As a result, if you're a person who is politically exposed, or for that matter if you care about your life, you might consider disabling the wireless interface. That was the case with Dick Cheney. Attackers are very good at specializing in other areas as well. There are attackers who specialize in attacking automobiles: how to get malware onto the bus of your car to infect the embedded systems in your car. Not just to steal it, but really to take over your car. Another example would be in the military domain, attacking drones. One reason this is possible, of course, is that drones are not isolated systems. Here too you have systems of systems. They depend on GPS signals, and thus on satellites, to get their location. It turns out it's remarkably easy to spoof GPS signals, and you can convince a drone that it's anywhere you would like it to be and have it fly into buildings or crash into the ground. It's not just that everything is being hacked. Everyone is being hacked.
So nation-state attackers are doing this. Here I have just a couple of examples of the Syrians and the Chinese attacking the U.S. Of course, the U.S. has also carried out attacks themselves. Stuxnet is a good example, where America and Israel set back the Iranian nuclear capabilities a number of years by attacking some of their plants. Everybody is doing it. The Chinese are very good at it. They try out their malware, typically, on Taiwan. The Russians are good at it. Ukraine is their playground. This has almost become daily business these days. So it really is the case, I believe, that the next war will be fought in cyberspace. Moreover, everybody is an interesting target. Of course, politicians are interesting. There was spying, for example, sorry about this, on Angela Merkel by the Americans for a number of years. Of course, the British are in the game, too. There's widespread hacking within Europe by the British. But the interesting thing is it's not just famous politicians. Data on all of us is being collected. We know a lot from the Snowden revelations: the NSA is pretty much collecting all the data it can from computer networks, communication networks, and the like. Not just metadata, who is talking with whom, but the actual data themselves. They're storing it in their NSA cloud, and they can use it later on when they have need to. There's a question of whom we can trust. Many people say: well, I'm unsure about my own infrastructure, I will outsource to the cloud. But cloud providers aren't necessarily secure. On the one hand, they can be hacked from the outside. You may have applications running in the cloud, but the applications themselves may be insecure. Or it may be that there's no reason to trust the cloud service providers. We know from the Snowden revelations, for example, that in America the NSA worked with all of the major IT manufacturers, whether we're talking about Microsoft, Google, Sun, etc.
And those that they couldn't or wouldn't work with, they would simply attack and compromise. Perhaps the one cloud which is safe is the NSA's cloud. They haven't outsourced the storage of their data to an external cloud provider. They built their own internal cloud, which is highly secure. Maybe this is something that companies should keep in mind. Can this be solved by politicians? Can this be solved contractually? Well, there have been attempts to do this, but unfortunately, they don't work very well. For example, one month after this agreement was signed, the Americans realized that, of course, the Chinese were still carrying out their cyber attacks. And probably they, the Chinese, had been doing it all along. Okay, so that's a little bit of what is happening out there in terms of security. We'll look shortly at what's happening in terms of privacy. But I'd like to ask the question of why. How did we get here? I'd like to give you a couple of explanations. Let's go back in history. In the good old days, you had a user and a computer: a single-user system. Think back to the early 1980s, for example. Then, shortly after that, you had multi-user systems. People would share a computer, and at that point, users had to worry about each other. Some of the other users could be adversaries. They could be attackers. Then came client-server systems. You would have your machine as a client and use servers, typically within your own IT infrastructure, but reached over the internet. And unfortunately, of course, the internet is a very hostile place. It's very easy to tap traffic, modify traffic, etc. So although you had your servers, if you were trying to reach your servers from outside of your intranet, then you were very liable to be attacked. More recently, of course, there is the cloud. There are problems in both settings.
Where you have your own internal computers, you need to worry about them being hacked not only from the outside, but also from the inside. Insider threats are a huge problem, for example, for banks today. As for the cloud, these clouds may be run by service providers or even governments. But as the Snowden revelations showed, there's no reason to believe that your security interests are their security interests. If you're interesting enough, you may be compromised. Finally, an additional problem is that what used to be our safe computer is now typically, in an enterprise environment, bring your own device: your iPhone or your iPad or whatever. And these are notoriously difficult to secure, because typically they're administered by people who are not experts in security. So, summing up some of the sources of insecurity today: we've seen complexity. Complexity was one problem. But another problem is virtualization and trust. We trust parties that may not be trustworthy. We're virtualizing pretty much everything: our networking, our storage, our management, our infrastructure, and it may be that the parties who are handling this are not really worthy of our trust. So here in Switzerland, I'm a little bit concerned. We use hardware made in the US or China. Our operating systems are made in the US. Our applications, indeed most of the things on our software stack, are made in the US. We're using cloud services typically provided in the US. It's quite clear that we, here in Switzerland, do not have much sovereignty. If the US government wants access to our data, if they want access to our endpoints, it's very easy for them to compromise this. Note that the situation is not symmetric: the Americans do not need to have so much fear about what's happening here in Switzerland. Finally, when we look at root causes, there's a third that I would like to pull out, and that is that the systems we use are socio-technical systems.
These systems include humans, and even if you get the IT right, unfortunately humans usually get it wrong. So wetware, not just hardware and software, but humans, is very easy to hack. Here's an example: Brennan, who used to be the CIA director, had his personal email account, where by the way he kept confidential documents, hacked by a 15-year-old. Now you might say surely the director of the CIA is smarter than a 15-year-old, and you would be right. The hacker didn't have to socially engineer Brennan himself. The way he did it is he socially engineered and hacked Verizon. He pretended to be an employee of Verizon. He got information from Verizon that allowed him to do identity theft: things like PINs and the last four digits of the bank card. And then he called AOL, where Brennan had his account, and convinced AOL, by answering security questions using the stolen information, that he had lost the ability to access his account, and the account was restored for the hacker. So these are the problems that we face today, and here are some of the reasons for them. There's a separate kind of problem that's closely related to security, and that's privacy. I'd also like to touch on that. Where in the world are we today? Here I have a beautiful graphic that shows different privacy data breaches, where, if you can read the fine print, you see how many millions of records were affected. For example, here, at Friend Finder Networks, over 412 million records were compromised. I suppose here in the financial service area you know about Mossack Fonseca and all of the records that were compromised. Here I have, on the right of it, the reasons for these, and these are just typical weaknesses. If software has one weakness where you can exploit a vulnerability like a buffer overflow, or things haven't been configured correctly, then you can typically compromise servers and steal all of their data. This can have a huge impact, of course, for the people whose data is stolen.
If, for example, your social security information is stolen in America, or your financial information is stolen, it's not just that the attacker knows things about your finances and your person; typically that's enough to start stealing your identity. Another aspect is AI, which was also mentioned in the last talk. We are being hacked by AI, and to some extent by the companies we trust with our data, even if they are not externally hacked. We're being somehow attacked by the AI that they and their partners use. Here is a classic example that I'm sure many of you are aware of, because it was quite recent: Facebook and Cambridge Analytica. What happened there was that Cambridge Analytica had an app that harvested the data of 87 million Facebook users. What was particularly unpleasant about this is that it didn't just collect the data of the users who were using the app, who had somehow consented to this and perhaps even read the fine print that their data was being collected, but also the data of all of their friends. This is an example we've heard of, but Cambridge Analytica wasn't the only one doing this. It was in the economic interest of Facebook to move fast and break things. That was their slogan. To partner with companies in ways that they could make money, and data protection was a secondary concern. Why is this a problem? With this you can basically hack people. The Cambridge Analytica data, combined with users' Facebook likes and publicly available demographic information, is very useful for profiling people, for understanding their political interests and their political direction. For example, one can build models on this data: if a person is neurotic, then target ads at them that emphasize security. If a person is conscientious, then when they're surfing they get ads that emphasize responsibility. If they're open, then target ads for personal growth.
There is the belief that these models were likely used by Trump's political campaign to influence the outcome of the election. We are literally being hacked. What is at risk? As financial analysts, you think about risk. On the one hand, we have our critical infrastructures at risk: energy distribution, transportation, military, communication, etc. But there's more at stake. Our central values are at risk: our privacy, our ability to have things that are precious to ourselves that we can somehow protect. In fact, these days, one of the most personal parts of ourselves is now going online: our genetic information. We're increasingly being sequenced, and that goes into genetic databases. And if those databases can be hacked, then people can learn the most personal things about you as an organism, you as an entity. So I find that very scary. Democracy is at risk. Elections are being hacked. Voters are being manipulated. We can end up with governments that aren't what the people want. Finally, sovereignty is an issue. You may or may not know this, but if Donald Trump were to wake up today, or tomorrow, and have a bad day, he could cut off the internet in Switzerland in at least seven different ways. He could just hit a kill switch. The complete internet would be shut off. It would be a major disaster. We do not have control over this. Let me, in my final couple of minutes, summarize with some discussion points, which I think are perhaps interesting from the financial perspective. The first point, which is a rather depressing one, is that security and privacy are probably worse than you think. I say probably because if you're in the security profession, you've learned to be incredibly paranoid and depressed about this. But maybe in the financial profession, you're not there yet. As I've emphasized, we have technical ecosystems, we have systems of systems, they're complex, and moreover, a little dirty secret is that the way we build our systems today is very immature.
We really don't understand how to build secure systems that are bug-free and cannot be hacked. Then there's a point about the lack of economic and political incentives in the post-Snowden world. Companies like Facebook want to monetize your data. Other companies want to get their apps out quickly. Governments don't have an incentive for security; they have an incentive for insecurity. If insecure applications are built out there and you find a zero-day exploit, that's as good as a nuclear-tipped weapon. The situation is degrading. Everybody speaks about digitalization; it just means more processes and services in the real world will be vulnerable. And on top of this, hackers are getting better. The second point is that this can make or break companies. A typical example of this would be Barings Bank, which went bankrupt due to rogue trading, which was basically due to improper controls on IT systems. And I have some other examples here of recent hacks that have been major problems for companies. Moreover, even in traditional companies, say those that make pacemakers or elevators and things like this, you typically don't think about IT security, but if you don't get it right, it can be the downfall of your company. If you make a pacemaker, and your pacemaker data and communications are controlled by some cloud-based infrastructure, and a bad person can hack into your cloud, then it will be literally like in a science fiction movie, where they can press a button and all wearers of your pacemaker will die on the spot. It's not a joke. What does this mean from the business and investment side? It means we have to rethink things with security in mind. First of all, security and privacy risks must be adequately measured and weighed. It means that when you think of investing in a company or buying a company, part of the due diligence is not just a financial due diligence but a risk-based due diligence, and security should be a central part of this.
I also think that, from the standpoint of governing and managing companies, it requires a new way of thinking about security. In a certain sense, you shouldn't have corporate secrets, because all secrets will be leaked eventually. You should run your companies in a hyper-transparent way, assuming that everything is known to everybody. The next point is, of course, that you have to be prepared for the worst. You need pre- and post-breach preparedness. Part of having a company that uses digitalization means preparing for this, and investing in companies requires understanding how well they're prepared. The next point is skepticism towards new technologies and snake oil. I've given one example here, and I'm happy to talk about it later, or at the break, with whoever would like, but please don't fall for "don't worry, we're secure, we use the blockchain." A final point, as a note of hope, is that research advances can lead to a more secure world. In my group, together with other groups, we're actually working on a more secure internet that solves many of the problems we have today with denial-of-service attacks and the like, and it's actually not snake oil; it actually exists and can be used. I'm also happy to talk with people about that and explain how we can get to a better future. Thank you. Professor David Basin, ladies and gentlemen.