Okay, good. So now that I've said all these things, I want to talk a little bit about what governments are actually asking for, because I think it's complicated, and I don't think even governments fully understand what they're requesting. So let's be clear. When it comes to messaging (and I'm going to keep this entire talk on the subject of messaging), their preference, obviously, is that the best kind of encrypted messaging service is one that doesn't exist: one where the messages are not encrypted, they're sent in plaintext, and they're retained in long-term stores at providers, so that with a warrant police can show up and say, "give me all the messages sent between these people," and the problem is solved. That seems to be a very, very strong desire. Or at least, if we can't have no encryption, we can slow down the deployment of new encryption.

Failing that, there is a strong push to develop what are called key escrow systems. This is a very technical group of people, I think, but just to be clear, let's define this. It means that every time we send a message, there is some master encryption key, which could be held by the government (although the US does not seem interested in holding keys), but more likely held by the provider, a company like Facebook or Apple, and that master key will decrypt, on demand, whatever message needs to be decrypted. That seems to be the general push. I just want to stress how scary that requirement is: if I have a master key sitting somewhere that can decrypt criminals' messages, then there's nothing technically to prevent that same master key from decrypting everybody's messages. So if that key is stolen, everything is cleartext to the person who stole it. This is a very, very strong requirement.
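To make the escrow idea concrete, here is a minimal toy sketch of the structure being asked for. Everything here is illustrative: the "cipher" is a toy HMAC-based keystream (a real system would use something like AES-GCM), and the names are invented. The point is only the shape of the design: a per-message session key, wrapped under one master key that opens everything.

```python
import os, hmac, hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data with HMAC-SHA256(key, nonce || counter) blocks.
    # Illustration only, not a real encryption scheme.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hmac.new(key, nonce + i.to_bytes(4, "big"), hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# The single master key, held by the provider (or a government agency).
ESCROW_MASTER_KEY = os.urandom(32)

def send_message(plaintext: bytes):
    session_key = os.urandom(32)   # fresh key for this one message
    nonce = os.urandom(16)
    ciphertext = keystream_xor(session_key, nonce, plaintext)
    # The escrow copy: the session key, wrapped under the master key and
    # attached to the message, so whoever holds the master key can unwrap it.
    wrap_nonce = os.urandom(16)
    wrapped_key = keystream_xor(ESCROW_MASTER_KEY, wrap_nonce, session_key)
    return (nonce, ciphertext, wrap_nonce, wrapped_key)

def escrow_decrypt(msg):
    nonce, ciphertext, wrap_nonce, wrapped_key = msg
    session_key = keystream_xor(ESCROW_MASTER_KEY, wrap_nonce, wrapped_key)
    return keystream_xor(session_key, nonce, ciphertext)

msg = send_message(b"meet at noon")
assert escrow_decrypt(msg) == b"meet at noon"
# Note: ESCROW_MASTER_KEY opens *every* message, not just one target's.
```

The last comment is the whole problem: nothing in this structure distinguishes the criminal's messages from anyone else's, so whoever steals `ESCROW_MASTER_KEY` gets everything.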
The more interesting current request, one that is actually getting a lot of traction, unfortunately, in the United States, is a very hard shift. Governments have mostly been asking for what they call "exceptional access": the ability to gain access to messages when they have a warrant. "Exceptional" means this is not the normal case. We don't want to read your messages every day; if you show up with a warrant, it will happen very occasionally, as the exception to the normal message flow. However, very recently an entire group of governments, and in particular here in the US the Attorney General, signed on to a new requirement that comes kind of out of nowhere for us, and it basically demands real-time content scanning. What real-time content scanning means is that we can look at every single message that you send through the system and check it for certain properties, and the most common property mentioned in the US is: is this an example of what we call CSAM, child sexual abuse media?

This is a very powerful request, because none of us (I have kids) wants this material to exist. None of us likes the idea that this kind of media is flowing through these systems, and of course the idea that end-to-end encryption might facilitate the transmission of this kind of media is horrifying; it brings out an emotional reaction in everybody. But from a technical point of view, it's really important to understand the difference between the two things governments are asking for: exceptional access versus real-time content scanning. The first is an exceptional capability that is not used except in occasional, very serious legal instances, when a court has granted a warrant.
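A sketch of what "check every message for certain properties" means in the simplest possible form. This is a hypothetical: real deployments (PhotoDNA and similar) use perceptual hashes so that re-encoded images still match, while this toy uses exact SHA-256 matching, and the blocklist contents are invented.

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes. Real systems use
# perceptual hashes, not exact SHA-256; this sketch keeps only the workflow.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_outbound(attachment: bytes) -> bool:
    """Return True if the attachment should be flagged.

    The key property of the 'real-time scanning' ask: this function runs
    on EVERY message anyone sends, not only on warranted targets."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES

assert scan_outbound(b"known-bad-image-bytes") is True
assert scan_outbound(b"holiday photo") is False
```

Even in this toy, the architectural difference from exceptional access is visible: the scanning hook sits in the path of every message, whereas a warrant-based capability sits outside the normal flow entirely.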
Real-time content scanning is a ubiquitous workflow: every single message is being checked. Maybe not every message is being flagged, if the system works well, but every message is being scanned, and from a technical point of view that's a very, very big difference. There are some governments who I think are being a little bit more reasonable, in the sense that they're asking for more targeted wiretapping and eavesdropping. The UK has made a proposal along those lines, called the "ghost users" proposal, which GCHQ in the UK came out with, so that's a little bit more reasonable, and we'll come back to it in a second. And of course I mentioned traceability previously; that's kind of the new one for me. Very hard, or, well, potentially hard, and we'll talk about why that is.

I won't spend much time on key escrow, because I think most of you really do understand it, but I want to present it from the government's point of view. This is what governments want: they don't care about key escrow, they don't care about key storage or any of the details. They have basically been demanding, at least in the US, that companies should be able to produce plaintext on demand. In the 1990s, when Clipper was the proposal, that system was actually very, very carefully developed so that US government agencies would hold the master keys. There would be a process to split the keys across multiple agencies, and everything was done centrally by the government with certain protections in place. Various proposals that have been made recently do not include that protection; they basically say, hey, industry, go off and figure it out. Which means, of course, that you can imagine Apple, which is very well funded, could probably build something more reasonable. But you can imagine a lot of other, very small companies that are going to be figuring this out from scratch, and are going to do it very poorly.
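The Clipper-era idea of splitting master keys across multiple agencies can be sketched with plain XOR secret splitting. To be clear, this is a generic construction, not the actual Clipper key-splitting procedure; it just shows the property the 1990s design aimed for, that no single agency's share reveals anything about the key.

```python
import os

def split_key(master_key: bytes, n_agencies: int):
    """XOR-based secret splitting: ALL n shares are required to rebuild the key.
    Each share alone is uniformly random, independent of the master key."""
    shares = [os.urandom(len(master_key)) for _ in range(n_agencies - 1)]
    last = master_key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_shares(shares):
    key = bytes(len(shares[0]))  # start from all zeros
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

master = os.urandom(32)
shares = split_key(master, 2)          # e.g. NIST and Treasury in Clipper's case
assert combine_shares(shares) == master
```

This is exactly the protection the speaker notes is missing from recent proposals: without a mandated splitting scheme, each small provider improvises its own key storage.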
And so this is not necessarily an easy ask. Obviously, some of those companies will say, well, we just can't afford to do encryption at all if this is the requirement; that's a very likely outcome. But generally speaking, what it means is that somehow they're going to have to encrypt session keys for messaging under some kind of escrow key, which is going to be stored somewhere, typically on a messaging provider's hardware security module. And we're not really sure how they're going to do that; it's a very, very hard thing to do.

It's hard because the US government tried to do it once, and the result was the Clipper chip, and it was broken, not necessarily in its key escrow cryptography, but in its design. That design, created by the US National Security Agency, had a field with a 16-bit message authentication code, and Matt Blaze at Bell Labs was able to figure out that he could use it to bypass the key escrow function on this chip relatively easily. Partly as a result of that design failure, support for this kind of government-designed key escrow collapsed. Now, this is the US National Security Agency; whatever you may think about them, they're supposed to be good at designing encryption technology. So the fact that they failed on that aspect of the Clipper chip was a good indication that maybe key escrow is not such an easy problem to solve. And if we had deployed it, it would have been really bad for software security, and so on.

We also have a really interesting example of how badly key escrow can go wrong, even if you protect the master keys. I like to cite this to government officials, at least here in the US, because none of them have heard of it. Many of you, or some of you, may have heard that in the 2000s (this came out with the Snowden slides) the US government created what we believe to be a backdoored pseudorandom number generator called Dual EC DRBG.
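Blaze's attack exploited the fact that the chip's escrow field (the LEAF) was validated with only a 16-bit checksum, so a forged field passes after roughly 2^16 random tries. The sketch below models only the sizes involved: the checksum function is a stand-in (truncated HMAC-SHA256, not Skipjack's actual construction), and the attacker model is simplified to brute-forcing toward a known checksum value.

```python
import os, hmac, hashlib

def leaf_checksum(session_key: bytes, leaf_body: bytes) -> int:
    # Stand-in for Clipper's 16-bit LEAF checksum. Only the 16-bit output
    # size matches the real design; the function itself is illustrative.
    digest = hmac.new(session_key, leaf_body, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big")

def forge_leaf(session_key: bytes, target_checksum: int) -> bytes:
    # Blaze-style brute force: try random garbage bodies until one happens
    # to carry a valid 16-bit checksum. Expected work: about 2**16 tries.
    while True:
        candidate = os.urandom(16)
        if leaf_checksum(session_key, candidate) == target_checksum:
            return candidate

key = os.urandom(16)
honest = b"honest LEAF with escrowed key"
target = leaf_checksum(key, honest)
forged = forge_leaf(key, target)
assert leaf_checksum(key, forged) == target  # passes the 16-bit check...
assert forged != honest                      # ...but escrows nothing useful
```

A 16-bit check means an average of about 32,768 guesses, which is trivial even on 1990s hardware; the result is a chip that encrypts normally while shipping a useless escrow field.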
There's a long story here that I'm trying to cover very quickly, but we believe this algorithm had a covert backdoor that was put into the standard. And for reasons we still don't fully understand, this algorithm wound up inside Juniper NetScreen firewalls in about 2008, and it continued to be there through about 2012, and actually 2015 and beyond. In 2015, a group of non-US hackers, we believe (the current rumor is possibly Chinese state-sponsored hackers), were able to get into Juniper's NetScreen code base. And what they did was really alarming: they took this possibly backdoored algorithm and replaced a single 32-byte field with a value that they had generated themselves. We don't know how they generated it, but clearly they did this for a reason. And what we know about that (in fact, we did some of the research just to verify this) is that this creates an opportunity for anyone who holds the corresponding secret key, in other words the hackers themselves, to decrypt any VPN connection made by these firewalls. So essentially they took our backdoor (and by "our" I mean, to be very specific, the NSA's highly likely backdoor) and turned it into their own backdoor.

I think this illustrates the risk. There are so many risks around key escrow systems, and Dual EC DRBG is one example of how dangerous they are: you cannot hold these systems secure against very sophisticated state-sponsored attackers, and this shows us one of the possible outcomes. I don't want to praise other backdoor approaches, but a few years ago GCHQ in the UK came up with their own approach, and I'm not in love with it, but at least it gives you an idea of what's possible. It was an alternative approach to creating some kind of backdoor into messaging systems, and what they identified is really valuable: they pointed out that most messaging systems have a server.
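The Dual EC trapdoor, and why swapping one 32-byte constant re-keys the backdoor, can be shown with a toy analogue. The real generator works over elliptic-curve points and truncates its outputs; this sketch keeps only the algebra, using modular exponentiation in place of point multiplication, with invented parameters.

```python
# Toy multiplicative-group analogue of Dual EC DRBG. Real Dual EC uses
# elliptic-curve points P and Q with output truncation; here "points" are
# group elements mod a prime, which preserves the trapdoor structure only.
P_MOD = 2**127 - 1   # a Mersenne prime modulus, purely illustrative

Q = 3                # the public constant an attacker can swap out
d = 123456789        # the trapdoor: whoever picks Q's partner knows d
P = pow(Q, d, P_MOD)  # designer sets P = Q^d (in real Dual EC, P = d*Q)

def dual_ec_step(state: int):
    output = pow(Q, state, P_MOD)       # what the generator emits publicly
    next_state = pow(P, state, P_MOD)   # internal state update
    return output, next_state

# A victim generates two outputs from a secret seed.
seed = 987654321
out1, state2 = dual_ec_step(seed)
out2, _ = dual_ec_step(state2)

# An attacker who sees only out1 but knows d computes:
#   out1^d = Q^(seed*d) = (Q^d)^seed = P^seed = the next internal state.
recovered_state = pow(out1, d, P_MOD)
assert recovered_state == state2
predicted, _ = dual_ec_step(recovered_state)
assert predicted == out2   # every future output is now predictable
```

This is exactly what the NetScreen intruders did: by replacing the constant `Q` with one whose trapdoor `d` they had generated themselves, they inherited the ability to recover generator state, and with it the VPN keys, from passively observed traffic.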
And that server's job is, when I want to talk to Bob, I go to the server and I say: give me Bob's public key. This is called the identity subsystem. Its job is really to keep track of who's part of the system, along with their public encryption keys, and to give them to me when I need them. All of the systems that we're familiar with, Apple iMessage, WhatsApp, Signal, have a server like this; they all use a centralized server to distribute public keys. And when you go and try talking to a new user, you're trusting that server to do the right thing; you're trusting it to not be compromised. These systems have some kind of key fingerprints you can verify, but nobody does.

The idea GCHQ proposed is: well, here's the weak point in all these systems, so let's attack the weak point. What they want to do is subvert that identity management system, subvert that server, to insert either wrong keys or new users. So when I try to have a private conversation with Bob, it's not just me and Bob; it's me, Bob, and maybe the FBI. This seems very simple, and of course the question is, why doesn't my app tell me that the FBI is present? But the problem is that they're also proposing we change the app so that it doesn't reveal facts about who's part of the conversation. Once you're doing that, pretty much anything is possible. But I will give GCHQ credit: at least in this case, what they've identified is a system that does not have the same vulnerability as the key escrow approach. If you steal the keys from GCHQ, you won't be able to decrypt everyone's messages; you'll have to actively target a particular user, and that's not always so trivial. So this is at least a slightly lighter approach, even if I think it's unworkable.

Finally, I want to briefly talk about tracing, this idea of tracing different people's text forwards or attachment forwards, and what's notable about it.
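A skeleton of the ghost-user idea, with everything hypothetical: the key server, the user records, and the "agency key" are invented names, and actual key wrapping is elided. The only point shown is where the attack lives: in the key lookup, not in the cryptography.

```python
# Hypothetical identity server state. In the ghost-user proposal, the server
# silently appends an extra device key when answering a targeted lookup.
user_keys = {"bob": ["bob-phone-key", "bob-laptop-key"]}
GHOST_KEY = "agency-device-key"

def lookup_keys(user: str, ghosted: bool = False):
    keys = list(user_keys[user])
    if ghosted:
        keys.append(GHOST_KEY)  # the proposal also requires hiding this in the UI
    return keys

def send(user: str, ghosted: bool = False):
    # A real client would generate a session key and wrap it once per device
    # key returned; here we just record who ends up able to read the message.
    return {device: "wrapped-session-key" for device in lookup_keys(user, ghosted)}

normal = send("bob")
tapped = send("bob", ghosted=True)
assert GHOST_KEY not in normal
assert GHOST_KEY in tapped   # one extra, silent recipient on this conversation
```

Notice the property the speaker credits GCHQ for: there is no master key in this picture. Compromising the server lets you target future conversations of specific users, but does not retroactively decrypt everyone's stored traffic.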
What's really notable about the problems in tracing is that it's just a problem we don't quite know how to solve. There is an article about an Indian government proposal to do some kind of hash-based tracing, and I think the idea is basically: if somebody forwards an attachment and it becomes very, very popular, we can find out who started it, or we can go back and see the path by which it became viral. I've seen some research that proposes other ways to do this. It's possible that WhatsApp already has some way to track forwards; it's a little bit difficult to tell. But it's not necessarily the case that we know how to do this, because the situation is sort of fluid.

Some people have proposed tracing approaches that are privacy-preserving, in that you can only trace messages if they achieve large-scale virality. But the problem is, we don't know what a message is when it's just starting out. If an attachment is sent out by one person and reaches 10 people, and then goes to 100 people, and then to a million people, at what point does it become traceable? Is it always traceable, from the point where I send it to my 10 friends? Or does it become traceable only at the point where it reaches a million, and then somehow, when it reaches a million people, can we go back in time and identify who sent the original to those first 10?

I'm not saying these problems can't be solved. I'm saying we need a lot of research to really figure out what is being asked for, if we don't want to just build a system that traces everybody's messages everywhere. And this is an example, I think, of what governments are asking for. I'm enthusiastic about solving problems. I'm uncomfortable with the idea of governments saying we need a system today, when we researchers don't even know how to build that system.
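A naive version of hash-based forward tracing, in the spirit of (but not taken from) the Indian proposal, makes the dilemma concrete. The threshold and data structures here are invented for illustration.

```python
import hashlib
from collections import defaultdict

# Hypothetical provider-side trace log: content hash -> ordered sender list.
# The "privacy-preserving" rule: only reveal a trace above a virality threshold.
VIRALITY_THRESHOLD = 3
forward_log = defaultdict(list)

def record_forward(sender: str, attachment: bytes):
    digest = hashlib.sha256(attachment).hexdigest()
    forward_log[digest].append(sender)

def trace_origin(attachment: bytes):
    digest = hashlib.sha256(attachment).hexdigest()
    senders = forward_log[digest]
    if len(senders) < VIRALITY_THRESHOLD:
        return None          # below threshold: no trace revealed
    return senders[0]        # above threshold: first recorded sender

meme = b"some viral attachment"
for user in ["alice", "bob", "carol"]:
    record_forward(user, meme)
assert trace_origin(meme) == "alice"
assert trace_origin(b"fresh, unpopular attachment") is None
```

And here is the speaker's objection, visible in the code: to answer "who started it?" after a message goes viral, `forward_log` must record every forward from everyone from the very first send, whether or not the threshold is ever reached. The threshold gates the reveal, not the surveillance, and we don't yet have research showing how to do better.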