All right, hello. Today we'll be listening to the talk "Technologies for and against digital sovereignty", with just two of the three promised speakers, but we'll still have enough academic firepower. We have Professor Volker Grassmuck and Rüdiger Weis from the Berlin University of Technology; Stefan Lucks could unfortunately not make it, even though he has a very cool university. Give a big hand to our speakers, please.

Is the mic on? Yes. Thank you very much for the friendly introduction. As was mentioned, this is joint work between me, Stefan Lucks and Volker Grassmuck. As you may have noticed, "Technologies for and against digital sovereignty" is a bit of a weird title for a talk. It is also the title of a study put out to tender by the Federal Ministry for Consumer Protection: they wanted a project on digital sovereignty, we handed in a proposal, and surprisingly we won the bid. Today my colleague Volker and I will shed some light on a couple of aspects of it. We have a big bunch of slides and will be glossing over them fairly quickly, but we'll point to the details, and we'd like to thank the ministry: the study will be published in spring, where everything can be read in more detail.

A lot of things escalated in 2016, as you've noticed. Since we've been looking at the whole affair for about ten years, there are a couple of things important enough to warrant discussion right now. One of these is the security of the Internet of Things. I've recently found myself taking positions that I haven't taken in the last 20 years or so; the state of security has become precarious enough that towards Microsoft, for example, I've become a bit friendlier, and some people in the audience might not like that.

We'd also like to discuss the sociological implications, how we manage technology. We can try to predict some technological developments and their effects on society, and we can work on how technology allows us to structure society. But first we have to present the technical situation, and that's not that funny. Or, as Bruce Schneier put it: it's a bit like the end of playtime. I usually refer to the Security Nightmares talks by Ron and Frank, which go back to the past millennium, and my point in referring to them is one simple conclusion: computers can be hacked. If I connect something to a network, it can be hacked; systems connected to a network can be hacked. Which leads us to the conclusion that an intelligent thermostat is a full computer connected to the internet, and people on the outside could be interested in it.

And again, as Bruce Schneier has said, there is a big failure of the market here. The embedded devices we have sold in great numbers at small prices will stick around in the system for a long time, without anyone realising that they are attackable systems. So now we have these nightmarish security disasters: 900,000 Telekom customers were knocked off the net because of a programming error, a quite banal fault. We just have to face the fact that there are millions of devices out there that can be used for attacks. You probably don't have to fear a drone crashing into your fridge any time soon. But if your evil fridge participates in an attack against US military systems, then according to the current state of the law, that's an act of war.
And the US government has the right to react to that. We have placed great trust in the US government for decades, but this is a point where we need to rethink some things.

The next three slides, which might not look that bad, have cost me a couple of nights of sleep. Because we need something like a warranty regime, a liability shift to the manufacturer, that will impede the hacking of systems. We have formulated this as minimally as possible. First, we need a definition of the lifetime of devices, and we need guarantees that these devices receive security updates during that lifetime.

Secondly, we need to deal with the fact that a thing that costs 34 euros, an intelligent thermostat, might be used for decades. It's a very short-lived market, so manufacturers will disappear. This is not philosophy: we have had the case of the Chinese cameras whose manufacturer vanished from the market. We need to work around the fact that manufacturers will disappear while systems stick around that we cannot update, cannot make secure. So what we need is a well-defined lifetime, and manufacturers need to have the motivation to choose it appropriately. We also need the assurance that systems are released as open source if the manufacturer can no longer update them.

Finally, the third and most critical point: because these companies may disappear really quickly, we have come to the conclusion that we need source code escrow from the manufacturer. If the company goes bankrupt, there needs to be an assurance that the source code is published as open source, so the community and others get the chance to fix bugs that might still be present. Open source really is a part of personal sovereignty, because otherwise we will not know what could happen to our coffee maker. And when you use these devices: we don't have to worry just about US drones, we also have to worry about denial of service, and we need some sort of defensive mechanism. It will be extremely unfunny when an attack is based on our heating systems, when a denial of service affects people's ability to heat their homes.

We're now moving on to specific discussions, and I'm also going to vent a little about Microsoft. I'll talk about a "security feature bypass vulnerability". What actually happened here? An attacker with physical access to a system could violate the integrity of the system: deactivate Secure Boot, prevent the validity check in BitLocker, and disable the device encryption security features. So basically everything, if I'm not mistaken. This was blacklisted, but problems are still creeping up. For now, I just want to point out that there are companies building closed systems with mistakes in them, where one tiny packet of data can stop the whole security architecture from working.

Because of that, I want to say that we need to act: the trusted computing architecture, for example, builds on 2048-bit key lengths, which is not enough for long-term security. So we need alternatives; that was my last recommendation on this point.

Microsoft has reacted fairly cleverly here. In previous discussions I have argued about this: Microsoft storing the data in the US was not acceptable, and they realized that. So they built a cloud in Germany, which unfortunately has not been very successful so far.
This was backed by the CEO of Microsoft himself: they built a massive data centre in eastern Germany. These alternative trust anchors should give companies the possibility of not having to trust the traditional CAs, and not having to trust the government. And that, I think, is a very interesting point. We have had good experiences with distributed trust. Jumping into the technology: we have techniques with which we can share trust, zero-knowledge things, secret-sharing things. Those are not just academic proposals; this is done in practice for the DNS root key, in the key ceremonies. It's working. We as cryptographers can invent, and point to, technologies that you can use for such applications.

Coming to a creepy quiz. Eric Schmidt explained in 2011 that they had built a technology that is fantastic, but it's the only technology Google ever produced and then said: this is too creepy, we're not going to use it. Do you have an idea what that could be? (Silence from the audience.) I wouldn't have known either. I found it a bit weird: a technology that works great, and Google isn't using it because it creeps them out. The subject is facial recognition, and Google considers it creepy. A government, for example, can film protests and the people in them are recognized; or at festival entrances you're filmed and your face is recognized in HD. Such technologies, which can be used by secret services today, will be used by hackers a year later and by the normal populace a couple of years after that. In Russia, for example, an artist showed that you can recognize any person. And what happened on the internet: on the imageboard 2chan, people took pornographic movies and sex-worker pages and stalked and de-anonymized the people on those pages. That was a violation of their privacy. This is a danger for homosexual demonstrations, for normal protests. It's a technology where you just have to sit down and think for a moment, and you will realize: it is creepy.

Looking at current developments, I'd like to point to a brilliant article by Anna Bezelli, who pointed out that for transsexual sex workers, for example, anonymity can be life-saving. Participation in sex work can be met with draconian punishments, and in a transsexual context it can be even more dangerous and punished even harder.

Because of that, we've come to one clear position. I don't want to go into a political discussion that will lead nowhere. But if we want to protect people, then we have to understand that we will always lose databases; even with rules about this, we will always lose databases. (Applause.) And because of this, we have to think like hackers: if data, once lost, could put people's lives at risk, then we can't store it. We can't save it. We haven't made it that simple yet, but it was important for us to say it: the pink lists of homosexuals, or lists of sex workers, were last relevant under National Socialism, and such technologies can endanger real, living persons.

And they endanger anonymity. We could potentially help there. There are anonymization methods: many people go somewhere and receive an official document certifying something, and the people issuing it need not know to whom they issued that certification. There's also the point of the Ausweispflicht, the obligation to identify yourself with a document: such documents could be issued pseudonymously.
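To make the secret-sharing technique mentioned above concrete, here is a minimal sketch of Shamir's scheme in Python. The field size and the 3-of-5 parameters are illustrative choices; a real deployment, like the DNS root key ceremony, would use a vetted implementation rather than this toy.

```python
# Minimal Shamir secret sharing sketch: any `threshold` of the shares
# recover the secret; fewer shares reveal nothing about it.
import secrets

PRIME = 2**127 - 1  # the prime field we compute in (toy choice)

def make_shares(secret: int, threshold: int, n_shares: int):
    # Random polynomial of degree threshold-1, secret as the constant term
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 yields the constant term, the secret
    result = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        result = (result + yi * num * pow(den, -1, PRIME)) % PRIME
    return result

shares = make_shares(secret=123456789, threshold=3, n_shares=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```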
The countermeasure usually proposed, not uploading pictures of yourself to social media, fails because other people upload your pictures to social media. So anonymous ID documents carrying a picture are not anonymous any more, and we have to figure that out.

There's another recommendation for cases where masses of data are collected. That's problematic, because people cannot be protected well enough by the courts and the system alone. But there are technologies, for example differential privacy, that could help. That's a technique that lets you gather statistical data while still maintaining a meaningful protection of privacy. There's the work of Cynthia Dwork at Microsoft. It's a very powerful concept with very beautiful mathematics, not as complicated as number theory. We will provide concrete parameters in our study that we recommend for use with differential privacy: whoever processes personal data would do well to use it. And we have to give a big shout-out to Microsoft here, they have done excellent work. It's ten years old now, and the maths is powerful but not that far away from normal maths; it's not research maths still in development, it's something we can really understand. We give recommendations in our study towards using differential privacy.

Finally, moving on to the positive things: what can we do with cryptography? What else can we do with cryptography? Anonymous attestation is a combination of blind signatures and zero-knowledge techniques. If we combine these two, and I'll come back to that, then in trusted computing we can guarantee that a system has certain security properties without giving away the identity of the system.

Blind signatures, to explain them again: the magic is that Alice can blind a document and hand it to Bob to sign. Bob signs the blinded version, and with some extra knowledge Alice can turn the signature on the blinded version into a signature on the unblinded document. Bob, who created the signature, cannot determine which document it was or for whom he signed it. I worked with one of my master's students on blind ECC signatures, blind signatures over elliptic curves for OpenPGP, and he proposed a very nice hack for working that into OpenPGP. The talk unfortunately wasn't accepted at this congress, so I just had to sneak it in here; it was presented at the OpenTech conference, and if you're interested you can check out the source code. We are, of course, open to comments and feedback. I do have critical remarks about elliptic curves, but there are scenarios where they can be used quite well.

Zero-knowledge protocols, to touch on them briefly, are the idea that we can prove we possess some knowledge without giving that knowledge away. I forgot to mention the dates: blind signatures date from 1982. They were patented, but those patents have expired by now. So please do have a look at this, blind signatures 1982, zero knowledge 1983; they can be used regardless of the patents, because the patents have expired.

Now, trusted computing: direct anonymous attestation. There's an analysis by Stefan and me from, I think, 2004, and there have been further developments since. Another pointer on blind signatures and identity management: there is great work by Stefan Brands at Microsoft. Do have a look at that.
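Since blind signatures carry a lot of weight here, a minimal sketch of Chaum's 1982 construction with textbook RSA: toy key size, no padding or hashing, purely to show the blind, sign, and unblind steps.

```python
# Blind RSA signature sketch: Bob signs without seeing the message.
import math
import secrets

# Bob's RSA key (toy primes; real keys are 2048+ bits)
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 424242  # Alice's message, encoded as a number < n

# Alice blinds with a random r coprime to n: m' = m * r^e mod n
while True:
    r = secrets.randbelow(n)
    if r > 1 and math.gcd(r, n) == 1:
        break
blinded = m * pow(r, e, n) % n

# Bob signs the blinded value; he learns nothing about m
blind_sig = pow(blinded, d, n)

# Alice unblinds: (m^d * r) / r = m^d mod n, a valid signature on m
sig = blind_sig * pow(r, -1, n) % n
assert pow(sig, e, n) == m  # verifiable with Bob's public key alone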
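And as a rough illustration of differential privacy's core tool, the Laplace mechanism from Dwork's work, a minimal sketch. The epsilon here is an arbitrary example, not one of the parameter recommendations the study will make.

```python
# Laplace mechanism sketch: answer a counting query with calibrated noise.
import random

def laplace_noise(scale: float) -> float:
    # The difference of two iid exponentials is Laplace-distributed
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes it by at most 1, so Laplace(1/epsilon) noise gives epsilon-DP.
    return true_count + laplace_noise(1.0 / epsilon)

# e.g. publish how many customers share some sensitive attribute
print(dp_count(true_count=1302, epsilon=0.5))
```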
It's maths that allows us to protect people, and it's a thing where I invite people to try and have a look at it. The maths is quite understandable, and you can really make a difference and improve the world.

The final two slides. I'm a great fan of hash functions. Last year we had this great talk by DJB and Tanja Lange about post-quantum cryptography, and what I took away from it, what I really understood, were Merkle signatures based on hash functions. It is fascinating that these are safe even against quantum attacks. We can also take some ideas from Bitcoin and the blockchain, to drop that buzzword, and use hash functions as a building block. We have understood them, and we only need to rely on quite weak assumptions: the collision resistance of hashes is a valuable building block for us.

And the good thing about Bitcoin is that its people are good cryptographers, and they know that they do not know everything. They often do things twice: they hash, and hash again, which is basically Russian space engineering. I don't mean that disrespectfully; they know where the weak points are, they do it twice for more security, and that way they don't have to explain why, they just add another layer. And that's a good thing.

Finally, this short slide, one that I pushed pretty hard at the ministry. There is software that is system-relevant: if it breaks, we are in trouble. That includes, for one, TrueCrypt and VeraCrypt. And I want to talk about GnuPG: it's not just that nice tool with which you can sign things, the entire update management in Linux is built on it. And if we realize that Linux controls large parts of the operating-system market, we realize how important this is: if that breaks away, the whole world cannot run updates securely any more.

Finally, before I hand over to Volker, who will speak about scoring, which is a very interesting topic, I want to point once more to the words of Edward Snowden. Cryptography is maths, not black magic: things we have understood, things we have discussed. We do not have all of the maths behind it; even I, after 30 years of studying it, do not understand all of it. But we need to see what possibilities there are. Politicians cannot guarantee any more that we are not being listened to. But if we use end-to-end encryption, then we are no longer that vulnerable to dragnet surveillance: they would have to attack every single one of us individually. That being said, encryption as the standard mode of communication for everything is what we need. And I repeat the invitation to academics: we need to implement it, and we need to actively research it. And for all the people who see social problems related to distributed trust, to confirming things without touching the privacy of the subject whose attributes you confirm: there is a lot going on there and a lot of room for improvement. So please participate. (Applause.)
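For the hash-based signatures Rüdiger described, a minimal sketch of a Lamport one-time signature, the building block that Merkle trees combine into many-time keys. One key pair must sign only a single message; the parameters are the textbook ones, not a production scheme.

```python
# Lamport one-time signature sketch, built purely from a hash function.
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per message bit, hence strictly one-time
    return [pair[bit] for pair, bit in zip(sk, msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pair[bit]
               for s, pair, bit in zip(sig, pk, msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"update-1.2.tar.gz")
assert verify(pk, b"update-1.2.tar.gz", sig)
```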
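And the "hash and hash again" pattern he mentions is exactly as small as it sounds: Bitcoin's double SHA-256, one extra layer as a hedge against weaknesses, such as length extension, in a single application of the hash.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style belt and braces: hash the hash
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(sha256d(b"hello").hex())
```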
So I'm going to try to get through my talk quickly. No clapping. It was supposed to be 20 minutes on scoring, but we don't quite have 20 minutes, and I imagine there will be a lot of discussion after Rüdiger's talk. So I'd recommend the scoring presentation I'll be doing on the 3rd of January in Berlin. And now I'm asking: what does the stage manager think? Should we go right to questions and answers? We actually have time; we don't need to make half an hour of Q&A out of it. We have 20 minutes... we have half an hour to go still, so I think we have about 15 minutes left. So one of us is going to start with this, either him or me. Sorry, sorry for the horrible start.

Okay. First, to clear up an obvious misunderstanding: no, this is not about getting tips for scoring at the party tonight. This is about statistics. Let me start from where we are right now: risk minimization and safety are the central questions. To minimize risks and control them is one of the key concerns of the current economy, so we need to think about future possibilities here. "Predictions are difficult, especially when they concern the future": written by either an author or a physicist, Mark Twain or Niels Bohr.

We want to talk about how information from the past is used to predict the future. We know that things like credit scores and insurance scores are used to predict the future, but it is not an easy thing to decide whether you can make decisions based on that, based on large amounts of data from previous events. Regression analysis is used to check whether old data accurately correlates with what actually happens later: if events in the past actually occurred with the probability the model assigned to them, the data model is robust. The point is to find out not just whether the model was accurate, but how accurate and how effective it was. So if you make a decision about an individual based on data: how accurate is that? Can we compute a risk score from it that allows someone to decide whether to take a risk with this person, and how much effort is needed to insure it?

Normally these calculation methods are secret, but here is an example describing a scorecard in the US: having a checking and a savings account is worth points, 39 or 31 points in this example, and your age is a factor as well. These are some examples of the different places where such information is used for decisions: predictive policing uses it, as does scoring the risk of tax violations. In China, a social credit system is being built that is supposed to be mandatory by 2020, and it also ties into insurance.

In the second half of the 19th century, the first measuring instruments for body size appeared, and body weight has been used since about 1860 as a mechanism to measure risk. In 2004, four authors used three different mechanisms to measure traffic risk. In motor insurance, for example, there are rebates for people with new cars, for people who drive little, for people who complete a safety training; on the other hand, there are surcharges for old cars, old people, or cars that are driven in Eastern European countries. But the study shows that only a few of these aspects are relevant predictors.
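A minimal sketch of how such an additive scorecard works. The point values and the cut-off here are invented for illustration, since the real tables are secret, but the mechanics are just this simple.

```python
# Additive scorecard sketch: sum up points per attribute, compare to cut-off.
def risk_score(has_checking: bool, has_savings: bool, age: int) -> int:
    points = 0
    if has_checking and has_savings:
        points += 39      # hypothetical value for holding both accounts
    elif has_checking or has_savings:
        points += 31      # hypothetical value for holding one
    if age >= 50:         # hypothetical age bands
        points += 25
    elif age >= 30:
        points += 15
    return points

applicant = risk_score(has_checking=True, has_savings=True, age=34)  # 54
print("approve" if applicant >= 50 else "decline")
```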
Since the 19th century, lots of things have changed. Things didn't have IP addresses back then. Now we carry computation devices with us at all times, with lots and lots of sensors, and more are being added as I speak. On social media we now share lots of data freely. Then we have big data, cyber and the cloud sharing even more data, because all the data we leave behind can be used for scoring. A change in mentality helps this along: in the 80s there were movements against censuses, and now there are blogs celebrating post-privacy and things like that. And there is quantified self, for example, where we see the scale again.

Here is one concrete example, a car insurance called BonusDrive by Allianz. It's targeted at young people, people who do not yet have a credit or insurance history. They agree to have their driving behaviour measured and receive rebates of up to 40% in exchange. Here we see the data streams that a car equipped with this publishes. Octo Telematics, for example, builds and runs these systems for more than 60 insurance companies in 23 countries, among them Allianz. They add a black box to the car with acceleration sensors, a SIM card and so on, and these sensors track geodata, direction, braking, turns, crashes, duration and distance of the drive, and transmit all of it to Octo. In addition, a lot of other things are promised; for example, a stolen car can easily be found.

Generali uses something similar in its health programme, for health and life insurance. What's important here: the feedback isn't given only to the corporation, but also to the individuals. That way they can set health goals for themselves and use a tracker to see how they are approaching those goals. So it's not just about measurement; it's also about a valuation of the measurements. Gamification is a strategy for this: a Sparkasse insurance in Germany, for example, started a "driver of the month", the customers were largely happy with it, and it came to about 100 euros per month. People then competed to reach the high score of the month.

The new trend is to use Facebook posts to try to improve the terms when you insure your first car. The idea is to get an offer for insurance; to do this, you'd give Admiral access to your Facebook account. Admiral would then go through your history and attempt to find out whether you are well organized and thoughtful, using locations and other signals to determine whether you were eligible: using your character data, essentially, to estimate the risk you pose. They attempted to start this in November, but Facebook pulled their ability to do it. Facebook wouldn't give Admiral access to the data, despite the fact that this had already been tested with Admiral for several months. I'll say a little about the reason for that in a moment.
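A minimal sketch of how a black box like Octo's might condense raw sensor events into a driving score. The weights and the 0-to-100 scale are assumptions for illustration, not the actual model, which is not public.

```python
# Telematics driving score sketch: penalize risky events per distance driven.
def driving_score(harsh_brakes: int, sharp_turns: int,
                  night_km: float, total_km: float) -> float:
    if total_km == 0:
        return 100.0
    per_100km = 100.0 / total_km             # normalize events by distance
    penalty = (5.0 * harsh_brakes + 3.0 * sharp_turns) * per_100km
    penalty += 10.0 * (night_km / total_km)  # share of night-time driving
    return max(0.0, 100.0 - penalty)

# e.g. 4 harsh brakes, 6 sharp turns over 800 km, 120 km of it at night
print(round(driving_score(4, 6, 120.0, 800.0), 1))  # 93.8
```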
Instead, Admiral has done a small survey, but again using the Facebook login. The reasoning given was objectivity and fairness: they felt there could well be a correlation between the Facebook information and the decision to offer an insurance quote. "Leading academic institutions" are cited here in the press release; which leading institutions those are, however, is not said. So one may doubt that there was an academically tested mechanism behind it. This is in contrast to Schufa, the German credit-scoring agency, which is in theory free from subjectivity.

Here is a discrimination chart that attempts to map the different attributes and how they tie into discrimination. An average of 70 Facebook likes is enough to determine your attitudes, based on what you have actually selected or liked on Facebook.

Fairness. Scoring, the argument goes, is designed to protect users from being needlessly charged, an important argument. Some people behave riskily and some don't, and we don't want the people who don't behave riskily to effectively pay for the ones who do: that would be a punishment of the less risky, and this, the argument says, should be prevented. In effect, cross-subsidy payments by some groups are ruled out.

Moving on to data minimization, one of the basic principles of privacy. The more data we have, the more precise scoring can be done, so scoring is basically the exact opposite of data minimization. Purpose limitation is lost as well: the data in social networks is not generated as input for a scoring algorithm or for price determination, it is produced for social networking. But because data is "the oil of the 21st century", we see this reuse; everyone is calling for data wealth and data use instead of data protection. This rhetoric of data protection as a hindrance to innovation needs to be countered, and we need a clear commitment to privacy and to the protection of the basic right to privacy on the net.

We need to negotiate borders. The Facebook platform policy, for example, is what stopped Admiral: Facebook prevented Admiral from reading data for the insurance by pointing to that policy, under which data obtained from Facebook must not be used "to make decisions about eligibility, including whether or not to approve or reject applications". The probable background is not that Facebook has suddenly become privacy-conscious, but that they presumably want to use this data themselves. And there are implications for free speech bordering on all of this.

Opt-in is often invoked as a basic principle of data protection. Here is a cookie banner of the kind visible on basically any site: often you can either accept the use of cookies, because the company was forced to place the banner there, or leave the site. That doesn't gain us much. Opt-in is powerful, but it is not a perfect solution; only about 30 percent make use of it. So even if data is voluntarily disclosed, we need to ensure that privacy is still preserved, for example with differential privacy, as proposed before. And we should have independent agencies certifying that data-protecting solutions actually are trustworthy and not just claimed to be.
Finally, data quality: wrong and outdated data are a problem. All companies that hold data about me need to tell me, so that I can, you know, negotiate that. For scoring, that means the probabilities calculated about me over the past six months have to be disclosed to me, and I need to know in detail which data has been used to calculate a score.

Algorithms. Consumers often have the feeling of being entirely transparent to organizations that are completely opaque to them. We know algorithms can introduce systematic errors, and we cannot do anything about it. Merkel said in Munich that, in her personal opinion, algorithms should be more transparent. Her spokesman then corrected this: they do not want Google or Facebook to open their crown jewels, but in principle, in the big picture, how these algorithms work should be published. There are discussions that there should be some mechanism for identifying how algorithms come to a result. But companies say: we cannot open these algorithms, because then they are open to manipulation, and other people could steal our business model. This speaks to a real tension; think of the manipulation in the subprime market in the US. Users then have to rely on auditing. What we demand is that algorithms that affect what consumers are offered should be audited in camera, behind closed doors, in secret. People who create such algorithms have assured me that it is in principle possible to discover how the parts of an algorithm work together and to determine, more or less, whether a systematic error is being introduced.

With regard to Facebook, I mentioned drawing borders. The European Court of Justice has done the same, ruling that equality is such a fundamental value that gender differences must not be used to determine insurance premiums. What we see is that women live longer, so from a purely actuarial perspective their premiums should be different. But we cannot do it, because it would violate equality. If we as a society hold values like equality high enough, we can counteract economic arguments in this way. There should also be the possibility to check what decisions algorithms have made, and to compare them against logbooks.

Now to scoring of health and home data. Before a risk score is calculated and given to a company, data should only be passed on once it has been aggregated, rather than as separate individual pieces. For electricity at home, for example, rather than passing up all the data for every device, you pass on aggregated information in 15-minute increments, or averages over a year, instead of the tiny individual data about individual device usage. This is the sort of processing that can be protected with differential privacy.
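A minimal sketch of that aggregation step: per-device meter readings are collapsed into 15-minute household totals before anything is passed on, so individual device usage never leaves the home.

```python
# Smart-meter aggregation sketch: bucket consumption into 15-minute windows.
from collections import defaultdict

def aggregate(readings, window_s: int = 15 * 60):
    """readings: iterable of (timestamp_s, device_id, watt_hours)."""
    totals = defaultdict(float)
    for ts, _device, wh in readings:      # the device identity is dropped here
        totals[ts - ts % window_s] += wh  # window start time as the key
    return dict(totals)

readings = [(1000, "fridge", 12.0), (1400, "kettle", 30.0), (2000, "tv", 8.0)]
print(aggregate(readings))  # {900: 42.0, 1800: 8.0}
```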
But who decides this? At a local bank branch, where you know the local customer, someone could actually decide against the score. But in the larger market we normally decide by algorithm. So what are we going to do with neural nets and other automated decision-making methods, the idea being that everyone, in theory, pays an individual price based on their individual score?

I mentioned fairness. What will actually happen with this price differentiation is that the risk minimizers end up in the telematics tariffs, the normal users stay in the non-telematics tariffs, and the price there rises accordingly, with the result that the poor pay more: for food, for insurance, for living. David Caplovitz saw that in 1963, it has not changed to this day, and it will continue with scoring. People who cannot afford a smartphone or a fitness device, who cannot afford a new car, who were raised on bad food: they will pay more. And that is systematic, because the symptoms of being poor are correlated with higher risk.

The last slide: the free choice of tariff needs to be assured, even for people who do not want to be tracked and scored. That is not the solution to everything, but it is a very important step. And there is the unconditional basic income that a lot of people have called for, for example Siemens boss Joe Kaeser, who says it will be inevitable. (Applause.)

And with that, I believe, we move to the Q&A.

So we're going to do questions and answers now; we have about 15 minutes. Please step up to the microphones, and if you are watching out in the cybersphere, you can ask questions via the internet. Microphone number two, go ahead.

Question: Many thanks for this wonderful presentation, with a lot of interesting ideas that I have never seen put together like this, and thanks for the chance to ask a question. We heard about some wonderful crypto possibilities in the first part. But there is the question of manipulation of the platform, whether someone can undermine all of that by breaking the platform underneath, a toxic platform. Would it make more sense, perhaps at the government level, to mandate or support the separation of high-value systems from potentially toxic systems? And, to keep to the five-minute limit, the second thing I see as a problem: we have all these great data sets that are aggregated, but the moment I put four data sets together, I can de-anonymize people in half a minute. Is there an approach to this, do we have possibilities against combining data sets?

Answer: It is visible that since the election of Trump there is more reflection in our ministries, apparently, and we have to say that quite clearly; I don't mean it in a mocking way. Part of our government, or the people within it, seem to have held the philosophy that we can trust the Americans in principle, and these people are now starting to change their minds. I am interested to see what happens there. What I have seen is that some hardware manufacturers have allowed, or have been forced to allow, an NSA backdoor, with gag orders preventing people from talking about it. The computing industry has tried to fight that with warrant canaries. But this is a point where system-relevant software needs to be secured: core routers and boot systems, for example, should be built in a deterministic, publicly verifiable fashion, and one way to achieve that is open source. I want to support development in that direction.

To the second question: I do not have a solution. The anonymization that is being asked for in data protection everywhere, we do know that it is quite easy to peel those layers of anonymization away.
And I'm torn: as a sociologist, I would love to be able to use all of that data, but I also see the dangers, and I would like to do it in a responsible way. One final point: differential privacy is one of the very promising approaches in the right direction, and we have to look at it in a more concrete way. There are steps going in that direction; it's not easy, and it's not a final solution to the problem. As I had on the slides: some loss of privacy cannot be avoided, but it can be bounded, and in our study we will actually propose concrete epsilon values for the parameters.

And that was it. (Applause.) A round of applause for both speakers, please. Thank you very much for listening to the talk.