This was a German talk, which I'll translate into English. My name is Seba Liss. This will be about what happens when the great TPM and the great Internet aren't so great after all, and what can go wrong if your keys are stored, or if you want to initialize your keys, on such a TPM. Professors Dr. Weis and Forler are now going to help you with that. Please give them a warm round of applause.

Right, thank you for that nice introduction, and thank you to the many people who came. This will be a talk in which my colleague Forler and I have been somewhat overtaken by reality. It was supposed to be a talk about how you make cryptography more robust even in a hostile environment. I learned a new buzzword for that: resilient cryptography. That was the planned title, but in the face of current developments we will have to give it some context. To begin with, the good news: scientifically, strong cryptography is not breakable at the moment, even for strong secret services. What we were fortunately able to learn through the Snowden revelations is that even the cryptographers at the NSA were just using normal tools, and in light of the large amounts of money that went in there, there were surprisingly few surprises. So, as Bruce Schneier put it: trust the math, encryption is your friend. That is a fairly good situation, until you take it into the real world. We cryptographers can really work a kind of magic: with a few bits that you can store securely, you can defend against all kinds of attacks. The problem is, if those few bits aren't safe, then there is nothing we can do. And if you look at it from a model-theoretic point of view, there has to be something that the authorized communication partners have in common that is different from everyone else.
And that must be those few bits that have to be stored securely. You can't ask for any less; that is what cryptographers can do. So you have to tell people: please look after those few bits. And that is exactly what is now exploding all around us in an epic way. There was this very interesting talk here, and I won't go into too much detail because of it, but the title is just too nice: "How to Hack a Turned-off Computer", about the fact that you can run arbitrary code in the Intel Management Engine. That engine is quite an interesting thing, particularly for my research, because thanks to the Intel Management Engine I can now answer that annoying question of when Minix will finally be ready for the desktop. I can give people a friendly grin and say: since 2012 it has been in your desktop, and you haven't noticed. This Management Engine really does run Minix 3, an academic project which for years and years almost no one was using. I am actually a member of the Minix steering committee, and we were sad about the lack of participation at the last conference, until we noticed that the Minix we developed there really is on all Intel platforms. So much for the condescending question: Minix is now running on more machines than Windows and Linux put together. That is a funny detail, but speaking of funny details, the main problem is that if this hasn't been done well, all kinds of security holes can arise. The fact is they took Minix, which is under a free license, but not the GPL, a different license, so the modifications do not need to be published. Minix itself is quite well suited for security, but the modifications may not be known. It is a microkernel architecture, and there are very few known attack vectors. I don't know how the exact code development went, but at one point it was below 10,000 lines of code.
And as an operating-system professor, that is quite a nice situation: you can explain the whole thing quite easily. Linux has about 10 or 20 million lines of code, Windows about 100 million. There is a rule of thumb that every 1,000 lines of code will yield one error. For Linux that gives you about 20,000 errors, and for Minix maybe a handful, which you could actually look at. And another small rule: if you program in C, you can expect any mistake to cause real pain. The set of things that you can safely ignore in C is much smaller than in other languages. So the fact that Minix is so small that you can easily explain it in one lecture is directly linked to its security. Next to that problem at the lowest level, another thing that we have been following over the years is the TPM debate. And again, there is something rather bizarre here. RSA is being used, and if you know talks I've given at the CCC, you know that I like RSA quite a bit, and there are a few reasons for that. There are hard proofs of security, it is mathematics that is well understood, and the implementation is fairly short, so there is not so much that you can get wrong. And because this is still some kind of hacker conference, allow me a few formulas. We need prime numbers, 1,024-bit-long prime numbers, and we multiply two of them. The first question, of course: do these even exist? Yes, enough of them. There is a formula, the prime number theorem, that tells you how many primes to expect in a certain range: pi(x) is approximately x / ln(x). So roughly every 710th number in that range will be prime (ln 2^1024 is about 710), which gives you on the order of 2^1014 primes to choose from. Now, you should always be wary when there is an approximation sign between two terms, because what you should read into it is that there is a hidden factor involved. And because that makes people uneasy, we checked: the lower estimate pi(x) > x / ln(x) is valid for all x ≥ 17.
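The density estimate above can be checked numerically. This is a small sketch of the prime-number-theorem arithmetic, nothing from the talk's slides:

```python
import math

def prime_density(bits: int) -> float:
    """Approximate fraction of primes among numbers of the given bit length,
    via the prime number theorem: pi(x) ~ x / ln(x), so density ~ 1/ln(x)."""
    return 1.0 / (bits * math.log(2))

# Roughly one in 710 numbers near 2^1024 is prime.
d = prime_density(1024)
print(round(1 / d))  # 710
```

So among 1024-bit numbers you expect about 2^1024 / 710, on the order of 2^1014 primes, which is why running out of them is not the problem.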
And that is what you should use here. Now, that was a mathematical joke; thank you to those who got it. Back to the hacker world: what does this look like in a practical implementation? You get random numbers and first test them by trial division with the standard sieve, using the primes up to 4,999. The second stage is a Fermat test; you can look at Wikipedia to see what that is. And the third one, Miller-Rabin, is a bit more elaborate because it keeps shifting bits back and forth. But that is the complete recipe: you take random numbers, you run these tests, and then with high probability you have found a prime number. You multiply two of them, and you're done. What could go wrong? Well, remember, we are in a number space with on the order of 2^1014 primes, so nothing could really go wrong—unless you suddenly think: why don't we try fast prime generation? The first thought should be: rather not. Then you look at the literature, and look at it for two days, and then you decide: no, really not. And in practice, this is really where practice beats computer science: TPM systems manage with one central key, the storage root key, which is generated just once in the lifetime of the device. Only once. So there is no necessity whatsoever to use unnecessarily insecure fast generation methods. But this is what is being done. And on top of that, many of these keys have a key length of at most 2,048 bits. And now this is a cheap total break. We've had this other talk about it; the main idea is Coppersmith's from 1996, and it wasn't really modified much. What I'm really grateful for is that there is a price list attached in that other talk. I don't know what this is in Bitcoin, but for short keys these are fractions of US cents, and for 2,048 bits the upper threshold of the cost for breaking TPM keys is $944.
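The three-stage pipeline just described can be sketched as follows. This is a minimal illustration of the approach (trial division, Fermat, Miller-Rabin), not any vendor's actual code; the round counts are my assumptions:

```python
import random

def small_primes(limit=5000):
    """Sieve of Eratosthenes: the primes below `limit` for trial division."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(limit) if sieve[i]]

SMALL = small_primes()

def fermat_test(n, rounds=5):
    """Fermat test: a^(n-1) == 1 (mod n) for random bases a."""
    return all(pow(random.randrange(2, n - 1), n - 1, n) == 1
               for _ in range(rounds))

def miller_rabin(n, rounds=40):
    """Miller-Rabin: the 'bit shifting' stage -- write n-1 = d * 2^s."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Draw random odd candidates and run the three tests in order."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # top bit set, odd
        if all(n % p for p in SMALL) and fermat_test(n) and miller_rabin(n):
            return n

p = random_prime(256)  # use 1024 bits in practice
```

The point of the talk stands: with this straightforward method there is no structure in the primes. The ROCA problem arose precisely because the "fast" generator deviated from it.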
Now, who of you sees something ultra-bizarre here? Yes. Someone please explain to me, after the third beer, how 3,072-bit keys can have a cost on the order of two to the 26th, while 4,096-bit keys have the vastly lower cost of something times ten to the ninth. So it's not really easy to exceed that 3,072-bit cost. Now, about other things that can also go wrong: here's another attack, by Hanno Böck and other authors, which showed very nicely that this is still possible, and that several vendors were vulnerable, among them the most popular web services, including Facebook and PayPal—that, I think, is the company that does something with money, right? So again, this is a point where you really have to ask yourself. And again: Coppersmith 1996, Bleichenbacher 1998. You see, I like old attacks, and rest assured there will be even older things coming up during the course of this talk. But this is quite painful. Now, we should also say what we actually recommend. There's no way around it: you need open-source booting. UEFI has to go; coreboot, Libreboot, and U-Boot, or something like that, are required. And that has to happen quite quickly, because otherwise we as cryptographers are defenseless: if you take away the foundation from under us, there is nothing we can do in terms of magic either. NERF, that's the acronym; Google is pushing that strongly. Now let's come to the topic that I would really like to play with, which is robust cryptography. All right, I will tell you a few things from the engineering point of view, but of course they are mathematically well founded. First thing: XOR is your friend. That means if you take multiple sources of random numbers and XOR them, the good properties stack. So if you have 15 bad random number sources and one good one and you XOR them all, you still have the properties of the good one. You have to take a few things into consideration here,
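The XOR combiner can be sketched in a few lines. A minimal sketch, assuming the sources are independent (the "things to take into consideration": a source that can see the others' output breaks the argument):

```python
import secrets

def bad_source(n: int) -> bytes:
    """A deliberately terrible 'RNG': always returns zero bytes."""
    return bytes(n)

def good_source(n: int) -> bytes:
    return secrets.token_bytes(n)

def combined(n: int, sources) -> bytes:
    """XOR the outputs of several generators. If the sources are independent,
    the result is at least as unpredictable as the best single source."""
    out = bytes(n)
    for src in sources:
        out = bytes(a ^ b for a, b in zip(out, src(n)))
    return out

# 15 bad sources and one good one still yield good randomness:
key = combined(32, [bad_source] * 15 + [good_source])
```

An attacker who controls the bad sources learns nothing, because XORing known values onto uniform bits leaves them uniform.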
But broadly, XOR is your friend; we can assume that. Second thing: you can hash twice. I have been doing that for many years. In Bitcoin, they also hash twice before the mining process starts, but they don't do it quite right: they are missing an XOR, so they don't get as much security as they could with XOR and double hashing. Also, longer key lengths are generally a good recommendation—though if you remember the previous slide, not universally better, but generally better. A historical example: the CryptoPhone, a phone that Snowden also likes to use. Here you see the design that was created in 2003, and even at that time, when computers were a bit slower, we were able to do 4,096-bit Diffie-Hellman. We hashed once, and before the key derivation we hashed again. To say it a bit more compactly: we used SHA-2 at the time. If the hash function has a collision problem and you just hash twice in a row, that does not really help. But sometimes it's just one bit: Bitcoin would be much more secure if they flipped a bit after the first hashing. It is a bit scary to think that one bit can have that much of a consequence; I will say more about that later. And here again, XOR is your friend: at the time we used AES and another random number generator and coupled them with XOR. This has the nice property that attackers then have to break both random number generators. I'm going over this quickly here, but it is basically true. So let's talk about hash functions again. I want to advertise SHA-512 here: in my opinion, you should use SHA-512 rather than SHA-256. You can always cut off: if you only need 256 bits, you just cut the other 256 bits off. If you look at the standard, SHA-256 is basically, with a small modification, SHA-512 with the bits cut off.
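The double-hash variant suggested here is easy to illustrate. Bitcoin uses plain SHA256(SHA256(x)); the talk's suggestion is to modify the intermediate digest before the second pass. Which bit gets flipped is my assumption for the sketch:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style: hash the hash, with no modification in between."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def double_sha256_flip(data: bytes) -> bytes:
    """Variant suggested in the talk: flip one bit of the intermediate
    digest before hashing again (the choice of bit is an assumption here)."""
    inner = bytearray(hashlib.sha256(data).digest())
    inner[-1] ^= 1  # flip the lowest bit
    return hashlib.sha256(bytes(inner)).digest()

h1 = double_sha256(b"hello")
h2 = double_sha256_flip(b"hello")
```

The one-bit change makes the second pass operate on a different input family, which is the speaker's argument for better collision properties.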
Nicely randomized bits are always useful. So if you have 512 bits, you also have a few random bits left lying around, and then you can think "XOR is your friend" again and maybe use those bits in the end. Also, SHA-512 is very fast, especially on 64-bit platforms; it's basically as fast there as SHA-256 is on old 32-bit platforms. Sorry—OK, we'll deal with this shortly. Longer curves, another remark. I believe that the group around D. J. Bernstein did this work on small curves: he took the 256-bit ECC curves, looked at them, and had interesting results. He looked at longer curves as well, but not with the kind of intensity that may be needed. And I really believe that if you like elliptic curves, then 256 bits is very close to the limit where you have to say it could get problematic—in particular because, in contrast to the situation of RSA with quantum computers, I have to say that elliptic curves at smaller key lengths will fall rather earlier than the corresponding RSA operations. All right, now Christian Forler, my esteemed colleague, will give you some constructive tips on how you can use existing systems in a better way. And it's actually quite surprising: we looked at things again and thought that there are some new ideas concerning TLS, which look quite good at first, particularly if you compare them against various attacks. So the work that Christian is going to present is practically relevant to a high degree. Hi. So, this will be fairly technical; you'll have to bear with me. This is about the right use of AES. The thing is that the makers of all these various products advertise security by saying "we use AES". That's not the worst idea you could have, but normally you don't find out how they use it, and there are lots of things that you can do wrong here.
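The truncation trick is straightforward. One caveat worth adding as a hedge: plain truncation is not the same as the standardized SHA-512/256, which also changes the initialization vector, so this sketch only illustrates the "leftover bits" idea:

```python
import hashlib

def sha512_truncated(data: bytes):
    """Hash with SHA-512 and split the 64-byte digest: the first 32 bytes
    serve as a 256-bit hash, the rest are leftover well-mixed bits.
    Note: this is plain truncation, not the standardized SHA-512/256."""
    d = hashlib.sha512(data).digest()
    return d[:32], d[32:]

h, leftover = sha512_truncated(b"example")
```

On a 64-bit CPU this costs about the same as one SHA-256 call would on an old 32-bit CPU, and you get 256 spare randomized bits for free.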
So if someone claims they use AES, you should ask them how it is being used, and often you then find indications that something was not done correctly. So let's have a look. AES is not a magic weapon; it's just a block cipher. You put a plaintext block in and get a ciphertext block out, and the block size is 128 bits, so 16 bytes. That alone doesn't really help, because normally you have messages that are larger than 16 bytes—think of files or network packets. So the question is: how do you encrypt a longer plaintext? You divide it into blocks and work on these blocks one after the other, right? That is the usual strategy. So let's have a look at that. If you do it naively, this will clearly go wrong: you put each block through AES independently—this is ECB mode—but then equal plaintext blocks lead to equal ciphertext blocks. You've all heard about this: structure is preserved, and if structure is preserved, that is extremely bad. It is exactly not what you want, because you want cryptography to scramble your structure. If you don't do that, you get the well-known example from Wikipedia. This is AES-encrypted; it's "military-grade security", as the advertisements would say. And people really do advertise software with military-grade security and do things like this. That is quite bad. Now, the other thing that you can do wrong: we all know you can build a cipher stream with AES. Not a bad idea at first—you all learned that the one-time pad is something that works, so you try to emulate it. And there is this simple way: you take a counter, encrypt it, and the result is used as the cipher stream and XORed with the plaintext. But this is stateless, so it is deterministic: each time you run it, you get the same cipher stream. And as you know, using a one-time pad more than once is a very bad idea.
So you would have to change the key each time you want to run a new AES encryption, and that again is quite impractical. And if you forget to change the key, with all these methods you don't really notice it at first. You see different images, the structure is gone, white noise, looks nice. The problem is that you have the same cipher stream in both, so if you XOR the two results, the cipher stream cancels out, and this is what you get. As you can imagine, it's not so hard to take this XOR of two images and extract the two clear images from it. Even a secret service could do that—even one that is not well funded—or students, as a homework task in their bachelor studies; it's not that difficult. So how does it really work? You need state. If you have one key and don't want to encrypt the same way every time, you need a nonce, and that is the state: you don't set your counter to one, but to that nonce. So now you have this idea: each time you encrypt, you just randomly grab another nonce. You always get a new cipher stream, which is the advantage: a new cipher stream means you use it only that one time. The whole thing is safe as long as no nonce is ever repeated. So far, so good. The requirement is: do not repeat the nonce. That is fairly critical. There are various nonce-based encryption schemes, and the thing to remember is: do not repeat the nonce. And the second problem is that this does not ensure the integrity of the message. If you encrypt this way, it's quite clever, but if you know the format, you can manipulate the ciphertext—say, change an amount or the recipient address—and maybe that's not what you want. Exactly, and that's why there is authenticated encryption. As cryptographers, of course, we have had the idea for a long time that you don't just want to ensure the confidentiality of a message, but also its integrity. That is the big aim. And how does such a scheme work?
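The two-images attack just described can be demonstrated in a few lines. A toy sketch: since only the standard library is assumed here, SHA-256 stands in for AES as the keystream PRF; the cancellation works identically with real AES-CTR:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream (SHA-256 as a stand-in for AES)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"\x00" * 8
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))  # same nonce reused: fatal

# The keystream cancels out; an attacker learns p1 XOR p2 without the key.
assert xor(c1, c2) == xor(p1, p2)
```

From `p1 XOR p2`, two structured plaintexts (like images) can be separated with standard techniques, exactly as in the slide.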
You have the nonce again, and you have the plaintext P, and what comes out is not just the ciphertext but also a checksum. If the plaintext changes, not only the ciphertext changes but also the checksum—that's fairly nice. Decryption is very easy again: you feed in such a nonce-ciphertext pair and get back the plaintext, great. And the big thing is: if you manipulate the ciphertext, you get an error. You learn that the plaintext is no longer valid, and that is exactly what you want. There are two very famous schemes for this. There is the Galois/Counter Mode, GCM, widespread in industry; TLS and others use it. The great thing is that it is super fast; the disadvantage is that it's a bit fragile, and that's what we'll look at. The other scheme, the one I can recommend, is OCB. So GCM is what industry and TLS use, but if you have the choice, OCB is what you should use. The problem again: if the nonce is repeated, the whole thing breaks down. I took a look and checked how bad the situation is once a nonce is repeated, and it's always completely broken—you can always break the whole thing in real time, confidentiality and integrity. So that's not what you want: if you repeat the nonce, you have a real catastrophe on your hands. From the design point of view this is not good, because the requirement—"make sure this state never repeats"—doesn't sound that bad to a cryptographer or a mathematician, but in practice it is a real problem. Let's have a closer look; a bit of GCM bashing. It's extremely fragile; you really have to take care if you use it. If you use GCM, use only 96-bit nonces, because with other nonce lengths the nonce is run through the hash function and you can get collisions—and that hash function has some weaknesses there.
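The (nonce, plaintext) → (ciphertext, checksum) interface can be sketched as a toy encrypt-then-MAC construction. This is not GCM or OCB: HMAC plays the checksum, a hash-based stream plays AES, and a real design would derive separate encryption and MAC keys — all assumptions for the sketch:

```python
import hashlib, hmac

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, plaintext: bytes):
    """Encrypt, then compute a checksum (tag) over nonce and ciphertext."""
    c = bytes(p ^ k for p, k in zip(plaintext, _stream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + c, hashlib.sha256).digest()
    return c, tag

def open_(key: bytes, nonce: bytes, c: bytes, tag: bytes) -> bytes:
    """Verify the checksum first; any manipulation yields an error."""
    expect = hmac.new(key, nonce + c, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("forged or corrupted ciphertext")
    return bytes(x ^ k for x, k in zip(c, _stream(key, nonce, len(c))))

key = b"K" * 32
c, t = seal(key, b"nonce-01", b"pay 100 EUR to Alice")
assert open_(key, b"nonce-01", c, t) == b"pay 100 EUR to Alice"
```

Flipping a single ciphertext bit makes `open_` raise instead of silently returning a manipulated amount, which is exactly the property the talk is after.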
And then, shortly after GCM came out, Niels Ferguson—who works for Microsoft—complained and said: the problem is, if I shorten the checksum, the whole thing becomes extremely insecure. So shortening the checksum is a bad idea; that is a strange property that you don't want. The other thing that was found: with nonce repetition you can actually recover a key. GCM has two keys, one for encryption and one for integrity, for the checksum, and the key that you recover is the one for the checksum—but still, that is very bad. You don't want a key to be recoverable just because you made an error. So it is fragile. And there are some weak keys as well. GCM is the Galois/Counter Mode, and "Galois" stands for multiplication: there is a multiplication taking place. As you can imagine, multiplication by zero is a bad idea: any message times zero is zero, and a checksum that is independent of the message is useless. So there are some more weaknesses there. When this came out, Niels Ferguson looked at it and wrote a paper that said: I can't really recommend this. If you have no other choice, then do use GCM, but please, please never shorten the checksum. Exactly. And the second thing: RFC 4106 describes how IPsec uses GCM, and it says an implementation has to support the full checksum—I think it's not impossible to implement all this—but it also says the checksum may be shortened to eight or twelve bytes, and that is a bad idea. If you only have eight bytes, security is much lower than you would imagine. So, something you can do: if you're an admin and you're using GCM with shortened checksums, deactivate that feature. It helps. Now let's go back to nonce reuse, after I've talked about the downsides of GCM. Is nonce repetition actually relevant in practice? There was a discussion about this when it was published.
I will give three examples; I think it is problematic. Programmers make mistakes, and the people who design these things also make mistakes, and there are three telling examples. Microsoft didn't do this correctly in Word and Excel. Another vendor made the same mistake. And the newest one, this year, was WPA2, where the same mistake was made. You see this again and again: even the big companies, standards bodies, and committees make mistakes here, so there are problems, and there will be problems again and again in the future, I think. The other thing: if you look at all these app stores and the software in them—many people use apps—roughly every tenth app that uses AES effectively uses the random number generator from the XKCD cartoon that just returns 4. So this is a structural problem. The other source is misuse or errors in deployment. For example, you can clone virtual machines, and then nonces can repeat; or you do not have enough entropy—none of this is funny. The good news is that this problem was actually solved by cryptographers back in 2006. There is a published paper, and the scheme is called SIV. The cool thing is that arbitrarily long nonces are supported. So if you encrypt packets, you can use the header as the nonce and the payload as the plaintext—great. That is something we really should do. The scheme has been introduced, there's a paper, no reason not to use it. But there is one weakness. SIV is nearly idiot-proof—you can repeat nonces—but it's not completely foolproof. So I worked on a different method that strengthens this. Why is SIV not idiot-proof? Because the cryptographic checksum still has to be validated, and you might fail to do that—remember Apple. If this can happen to Apple, it can happen to other companies too.
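The SIV idea can be sketched with stdlib primitives. A toy illustration, not the standardized AES-SIV: HMAC plays the PRF and a hash-based stream plays the cipher. The tag is computed over header and plaintext and doubles as the IV, so repeating "nonces" only ever reveals that two complete messages were identical:

```python
import hashlib, hmac

def _stream(key: bytes, iv: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def siv_seal(mac_key: bytes, enc_key: bytes, header: bytes, plaintext: bytes):
    """The tag is a PRF over (header, plaintext) and serves as the IV."""
    iv = hmac.new(mac_key, header + b"\x00" + plaintext, hashlib.sha256).digest()[:16]
    c = bytes(p ^ k for p, k in zip(plaintext, _stream(enc_key, iv, len(plaintext))))
    return iv, c

def siv_open(mac_key: bytes, enc_key: bytes, header: bytes, iv: bytes, c: bytes):
    """Decrypt, recompute the tag, and reject on any mismatch."""
    p = bytes(x ^ k for x, k in zip(c, _stream(enc_key, iv, len(c))))
    expect = hmac.new(mac_key, header + b"\x00" + p, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(iv, expect):
        raise ValueError("invalid ciphertext")
    return p

mk, ek = b"m" * 32, b"e" * 32
iv, c = siv_seal(mk, ek, b"hdr", b"secret payload")
assert siv_open(mk, ek, b"hdr", iv, c) == b"secret payload"
```

Because encryption is deterministic in (header, plaintext), a repeated nonce can no longer cancel keystreams as in the two-images attack; the remaining pitfall is exactly the one named in the talk, forgetting to check the tag.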
So we built it in a way that even if you get the verification wrong, you will notice, because the scheme is constructed so that the plaintext, as soon as the ciphertext is manipulated, always comes out as white noise. The idea is that as soon as you parse the input, you notice that it is white noise and not valid input, and the software throws an exception or crashes or something. Right, I'll skip the details of how this is done. Like the MRAE schemes it builds on, it has to pass over the data twice, and that is a disadvantage: it takes a bit longer than other methods; it's not that efficient. And of course it can happen that you have real-time demands that do not allow this. So what should you do if it's technically impossible? Well, don't panic, because there are solutions; I was part of developing them. The answer is robust AE schemes. They used to be called nonce-misuse-resistant schemes, and we renamed them "robust" because there was a kind of conflict within the crypto scene. When we built this in 2012, the scheme was called McOE-G. The point here: if you repeat the nonce, integrity is still fully maintained, and so is confidentiality—depending on how bad the error is, it may still largely hold. If you repeat the nonce, an attacker can only see whether two plaintexts share a common prefix. That is something we could not get rid of if you pass over the plaintext only once. The idea then progressed, and there was this CAESAR competition, exactly. The CAESAR competition is looking for a successor to GCM; D. J. Bernstein chairs it. We had a candidate in there too—I was involved in that—but we didn't make it to the third round, unfortunately, maybe because we were a bit too heavyweight and not performant enough. But what is still in the race, and what I can recommend, is COLM, a merger of AES-COPA and ELmD, which have joined forces and become a third-round candidate.
And another thing I can recommend in the CAESAR competition is AEZ, a misuse-resistant AE scheme. So these are the two candidates I can recommend; let's take a look at them. Well, they are provably secure: you get a proof of security with an explicit security bound, and if you look at it, you can see it is a birthday bound. There are these bounds everywhere: q is the number of messages you encrypt, L is the total number of blocks you encrypt, and t is the time. What you see here is that you can encrypt roughly 100 terabytes with the same key, and then you should change the key. And of course it's only secure if AES is secure—who would have thought, since the whole thing is built on AES. The idea is that this is super strong, and also super heavyweight, because the upper lane is just a checksum over the plaintext, the lower lane is a checksum over the ciphertext, and in between you have the encryption. So the cryptographic checksum covers plaintext and ciphertext. And the other cool thing: if you manipulate something, say at block number two, then from block two onward you get white noise. So if you get the verification wrong, you get a lot of white noise, and that way it would probably become apparent. So again there is protection against verification going wrong. These schemes are not always very easy to implement, but I'm almost through now. So, what you really should do: never, ever repeat a nonce—that is a huge catastrophe, a real accident. Exactly. And the other question is: how should I encrypt my data? You have four categories. The first one is the strongest, the last one the weakest, and you should check whether you can use one of the category-one schemes.
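The "roughly 100 terabytes per key" figure follows from the birthday bound. A sketch of the arithmetic, assuming the dominant term is L²/2¹²⁸ for L 16-byte blocks (the exact constants differ per scheme) and picking 2⁻⁴⁰ as the tolerated distinguishing advantage:

```python
BLOCK_BITS = 128  # AES block size

def max_blocks(max_advantage_log2: float) -> float:
    """Largest number of 16-byte blocks L such that the birthday term
    L^2 / 2^128 stays below 2^max_advantage_log2."""
    return 2 ** ((BLOCK_BITS + max_advantage_log2) / 2)

# Keeping the advantage below 2^-40 allows about 2^44 blocks = 2^48 bytes,
# i.e. a few hundred terabytes per key -- the order of magnitude on the slide.
terabytes = max_blocks(-40) * 16 / 2 ** 40
print(round(terabytes))  # 256
```

Tightening the tolerated advantage shrinks the data limit by the square root, which is why "change the key after ~100 TB" is the practical rule of thumb.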
If not, then category two, then three, and only last category four. So all the encryption modes that you commonly know would be in the last, least secure category, and you have to see and make sure that these get upgraded into the third, second, or even first category, so that encryption becomes more robust and stronger. Okay, thanks a lot. Your quote? Yes, I do have another quote. The most important message of all: please, please talk to cryptographers. You all build your software and it all falls apart because you don't talk to us. Both of us—our email addresses and phone numbers are on the internet, so you can find them; you can talk to us. You can also come to Berlin and talk to us over a cup of coffee, right? Again, thanks a lot. I would like to come back to that, to make it very, very clear: this really is a relevant problem in practice. Look at TLS: they think they are being clever—we don't use any schemes without authentication, and we enforce GCM, the Galois/Counter Mode. That is fine, except for the fact that if the nonce is repeated, this is catastrophically weaker than any of the methods in the fourth category. So cryptography is comparatively hard. So again, what Christian said I would like to repeat: the cryptography community has a strong scientific part that publishes a lot, so you can look things up and verify whether your ideas really are good. And yes, talk to cryptographers. The reason this room is so full is perhaps that we said we would talk about blockchain next. Here, again, I would like to say: talk to cryptographers. It would have been kind of nice if they had done that. We are looking for cooperation partners, including virtual ones, and we want to get a bit better when it comes to distributed real-time systems. So if you have any tips on what kind of research institutions there might be, come to us with that information too. So, a few words about Bitcoin. I did look at it a few years ago.
Maybe I should have bought Bitcoin at the time instead of evaluating it for security. Well, no, maybe I shouldn't have. I am an old-fashioned professor; my opinion is that if the leading capitalist newspapers are asking how secure something is, then the answer is probably: not. Let me put it like this: the cryptography in Bitcoin would make a "satisfactory" bachelor thesis, and that's about it. As with a satisfactory bachelor thesis, I usually point in the direction of the real problem. It's good amateur cryptography. They hash twice in a row, but in my opinion you should flip a bit after the first hash and then hash again, which would have the consequence that the collision properties would be better. As it is, the double hashing is not really relevant, at least not against strong adversaries. The good thing is that they do not give out the public keys directly. So at least it looks like they listened in on a few hacker conferences, noticed that their crypto is not that good, and tried to make it more robust. That's kind of nice, but if the young people had talked to cryptographers, we would have had some nice implications. It's kind of funny: when everyone started mining with specialized hardware, the Bitcoin main author's answer was, it's all distributed, et cetera, et cetera, but we should have a gentlemen's agreement that we only mine on CPUs. Oh boy. It's a concept by anarcho-capitalists, designed so that there is no central point and everything is distributed, and, without spoiling too much, this did not work out perfectly. The consequence is that a lot of energy is wasted. At the start people said, well, calm down, it's not that much yet. But the thing is that the energy consumption rises linearly. It's turning into a real ecological disaster.
And some people then say: okay, let's just use hydro power for the mining. But the overlap between the crypto-coin scene and the ecology movement is fairly small, and it really would have been helpful if they had talked to cryptographers. There is a measure that would have helped to keep this democratic, and it is closely linked to the question of password hashing: the idea that recomputing a hashed password should be very painful in terms of computing power and memory, as a proof of work. To show that this idea is not completely gone from the world: scrypt. I think Litecoin, one of the larger cryptocurrencies, uses scrypt. But they didn't talk to cryptographers either about what the parameters should be, with the result that now, with a bit of delay, ASICs are coming, and you have the same problem there that you have with Bitcoin. I get a kind of crisis of confidence here: if they had talked to us, we would have told them that no comprehensive changes were needed—changing one parameter would have been enough. That would have kept it a democratic affair, not something where power is needlessly burned and only huge companies or plants can play along. Mathematicians who know these numbers are kind of horrified by that. So it would be good to talk to cryptographers and set the parameters right. Then there are the Password Hashing Competition winners: Argon2d—there is a typo on the slide; you actually reintroduce typos through versioning problems; it is supposed to be a "d", Argon2d—and Catena-Dragonfly. Catena is where Christian himself is on board, together with Stefan Lucks, and Catena-Dragonfly is one of the instantiations that particularly addresses these problems. This is how it looks; you see a bit here, but even on a large display there is not so much to see. The nice thing is the work of Forler et al.: there are well-founded mathematical security bounds.
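The "one parameter" point can be made concrete with scrypt itself, which is in Python's standard library. Litecoin's commonly cited parameters (N=1024, r=1, p=1) need only about 128 KiB of memory, which is what made ASICs cheap; the stronger parameter choice below is my illustrative assumption, not an official recommendation:

```python
import hashlib

def scrypt_memory_bytes(n: int, r: int) -> int:
    """Approximate working memory scrypt needs: 128 * r * N bytes."""
    return 128 * r * n

# Litecoin-style parameters: trivially small memory, ASIC-friendly.
print(scrypt_memory_bytes(1024, 1))      # 131072 bytes = 128 KiB
# A memory-hard choice: one parameter change, vastly more memory.
print(scrypt_memory_bytes(2 ** 15, 8))   # 33554432 bytes = 32 MiB

weak = hashlib.scrypt(b"password", salt=b"salt", n=1024, r=1, p=1)
hard = hashlib.scrypt(b"password", salt=b"salt", n=2 ** 15, r=8, p=1,
                      maxmem=64 * 1024 * 1024)
```

Raising N (a power of two) is the single knob that makes special-purpose hardware pay the same memory bill as everyone else, which is the democratizing effect the talk is asking for.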
So you could use this, or other sound password hashing, and you would have mathematically provable bounds: no one with specialized hardware could take the whole thing over. As I said, set one number correctly in this whole construction and we would have had democracy. By not talking to cryptographers and instead setting up gentlemen's agreements, you get this obscene amount of energy being burned. I really am keen on hash functions, but hashing everything and then throwing it all away cannot be the right idea. Proof of work, again: it would have been helpful either to talk to cryptographers, or the ladies and gentlemen of Litecoin should have set their parameters properly. My opinion is that we have to get away from this useless heating, and that is why, in the next few months, together with a PhD candidate, I will work on a useful proof of work, a proof of useful work. There are a few ideas here. Storage is one candidate. Network services are another; that is a thought I had about anonymizing web traffic: perhaps do not rely only on the Tor project, which has its main financing from the US Department of Defense. I would like a construction where routers offer Tor-like network services and at the same time do some mining and finance themselves this way. And if you are allowed to dream: the router that you put up somewhere in the third world runs services as long as the sun is shining, and when the sun is gone it uses electricity but pays for it with the work it has done. A dream of a completely autonomous system. I would like to hold on to that. And to take a step back: I really am not keen on mining being 2^73 operations that are simply turned into heat. That does not make sense. There are a lot of questions to ask here, so this is a useful PhD topic, and I am an old-fashioned professor: if it does not revolutionize the world, it will still make a good PhD thesis. But the revolution, for now, will happen within the range of my duties, which is a nice perspective to have as an aging professor. Just to repeat once again: if the ladies and gentlemen who did this had looked at password hashing, if the Litecoin people had set their parameters correctly, had simply realized how much power lies in one parameter, you would have a better world, at least in this virtual world. So it does have consequences. First, I would like a proof of useful, socially useful work. And second, if you do use the old proof of work, keep it democratic: use password hashing with hard, brutally proven security limits.

Okay, let's come to the end. "We should do something" is of course a motto here at this congress, and the magic of crypto is that on the area of one of your fingernails, or in a few lines of code, you can produce something that secret services cannot break. That does have a kind of magic, doesn't it? But you have to look into it, and people implement it badly, such as what Infineon did with the TPM chips. Oh, we had one slide here that I think I'll keep: the TPM chips are one of the main examples. The thing is, the Federal Office for Information Security certified this, but that was not enough. If we had been able to look at the code, a talented PhD candidate would have taken only a few hours to cry out and say something. So at the lowest level we urgently, urgently need open source: we need software that prevents back doors and makes it easy to find errors. The people at the federal security agency are good people, but they did not find the problem. It is one of the few state institutions that I trust more than others, but here they had an epic failure. To remind you: those Infineon chips are in all these Google Chromebooks that a whole generation of pupils in America is now using. It just failed them all. So obviously it only works if it is public, and again I can point to my esteemed
colleagues Daniel Bernstein, Tanja Lange and Nadia Heninger. In their talk, one of the highlights was when they showed the submissions to the post-quantum competition: within the first evening they just had fun with the whole thing, and about ten percent of the submissions they broke within a few hours. So let's understand that cryptography is so much better done in public than behind closed doors. And this is really not an ideological point: the code has to be readable. It should reasonably be under an open-source license, but at the very least it must be readable; the community must be able to look into it. Thank you.

The last part of my talk is a little bit preachy, but allow me that. Apparently we mathematicians adopt brand-new methods immediately. As a first example I want to mention the colleague Heiko Stamer, who gave a very nice talk about distributed key generation. This is one of those things where, in a world in which you can trust very few people, mathematics gives you a way to describe trust, and you can do this well with cryptography. That was a very recent talk from 2017, using very recent methods, from 1987. You can laugh; that is some time ago. And, also on this slide, blind signatures, where I did something with a master's student of mine based on the blind signatures of David Chaum, which are from 1983. And we only took this much time because the patent finally expired. Jokes aside: not everything we do, elliptic curves for instance, is very hard mathematics, and if you look at it, it is solvable. There is a whole canon of things where you only have to ask a cryptographer one short question and you will be pointed in the right direction. It is not like we do idiot-proof cryptography; idiot-proof is a relative term. An embedded device does not necessarily have a good random-number source, so we just want to be as robust as possible. Another message: Galois/Counter Mode. In principle it is awesome, but in the real world nonces can get reused, and then the confidentiality and the integrity of the whole system are in danger. So that is not a good thing. Let's end with the good words of Edward Snowden: cryptography is not a dark art, but a basic protection. We have to implement it, and we have to do active research on it. And so many years after Snowden, I have to say: cryptography is what protects us from the barbarism of the secret services; politics did not really show good results during the last years. Thank you.

Sorry, so the message was: talk to cryptographers. But unfortunately time is up, so, as said, you can ask your questions over a cup of coffee, unfortunately not in here. But again, another round of applause for our two speakers, please.