Hi, thank you very much for coming. My name is Gerv Markham. I work at Mozilla, where my title is Policy Engineer. I also have a long-term cough, so if I cough a bit, don't worry about it. If I actually collapse on the ground, feel free to call an ambulance; anything short of that, I will recover momentarily. Today I'm going to be talking about the Mozilla root program, which is our program for managing certificate authorities: a little about how it works, what we do and how we do it. Then in the second half of the talk I'll go through some particular incidents that have happened in the CA world, the policy responses we have made to those incidents, and possible future work, and then hopefully we'll have some time for questions. I'm going to try to speak clearly, if quite fast, but if I'm going too fast for you, wave at me and let me know.

As background: if you want to establish a connection over an untrusted network to a particular person on the other side, there are roughly three ways you can do it. The first is out of band: you give them a phone call or send them a letter to exchange some information that you can use to identify them when you make a connection over the untrusted network, so you can make sure you're actually connecting to the person you hope to connect to. The second way is called trust on first use, or TOFU, which is what out-of-band often ends up being in practice: you connect for the first time over the untrusted network, you get through to somebody, you hope that's the right person, and then each time you connect after that, you check that you're connecting to the same person you connected to last time.
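The TOFU idea can be sketched in a few lines. This is a toy illustration, not a real TLS client: it pins a SHA-256 fingerprint of whatever public key it sees on first contact, much the way SSH pins host keys, and complains if a later connection presents a different one. The host names and key bytes are hypothetical.

```python
import hashlib

# Toy trust-on-first-use store: host -> pinned key fingerprint.
pins = {}

def fingerprint(public_key_bytes):
    """SHA-256 fingerprint of a (hypothetical) serialized public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def connect(host, presented_key):
    fp = fingerprint(presented_key)
    if host not in pins:
        pins[host] = fp          # first use: trust it and remember it
        return "trusted-on-first-use"
    if pins[host] == fp:
        return "ok"              # same key as last time
    return "MISMATCH"            # possible man-in-the-middle (or a key change)
```

Note that the scheme gives no guarantee at all about the very first connection; that is exactly the gap a trusted third party is meant to close.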
But the third way is to have a trusted third party: somebody you trust, and somebody the other side trusts, who can issue tokens of identity. When you get the token of identity back from the person you connect to, you check whether it was correctly issued by the party you trust, and if so you assume you're making the right connection to the person you want. On the internet, at least, trusted third parties are most commonly implemented using a system of digital certificates. The advantage of this system is that minimal user interaction is required. If I want to connect securely to a new website I've never connected to before, I type the address into my browser with an https:// on the front and it connects me up, and assuming my browser doesn't set off warning alarms, I can be pretty confident I'm connecting to the right place. Now, you can argue about the various different systems and their pros and cons, but I would say that the fact that your grandmother or grandfather, someone who is not technologically savvy, can make secure connections without having to know anything about computer security is the reason why this is the primary system used for secure web connections, and it's used in many other environments as well. Because if your grandparent of some sort had to phone up their bank and get the bank to read them the SHA-256 fingerprint, which they then had to manually verify against the key used in the connection they were establishing, what would happen is that no one would ever bother to do that, and lots of people's data would be stolen. So I'd say that's the main advantage of certificate systems. But today is not about whether we should replace them with something else; today is about how we deal with the world as we have it today.
So, if you are going to have trusted third parties helping you establish your secure connections, then you need a list of the third parties that you trust, and that is called a root program. You can have root programs for trusting people for server connections; you can have root programs for trusting the signing of code, so you know who wrote the code you're going to run on your computer; you can have root programs for the signing of documents, or for email. Some root programs cover more than one thing. The Mozilla root program, for example, covers server communication and it covers email. It used to cover code signing, but we found that no one really cared, so we stopped doing that.

Who has one? Well, all of the operating system vendors have a trusted root store built in: Microsoft has one, Apple has one, Google has one for Android. Oracle has one for Java; Adobe has one, although it's focused on document signing; and Mozilla has one for Firefox, because unlike Chrome, which delegates these decisions to the operating system (although it may make some tweaks to them), Firefox uses its own built-in store of roots. So a root CA which is trusted in Firefox is trusted on every operating system: we have consistency across Firefox across the OSes. The reason there is more than one of these root programs is that it's probably not a great idea for one single world authority to determine who everybody in the world trusts. People want to make different decisions about who they think is trustworthy and who they think is not, and in fact what you get with your operating system or your browser is a default set, to which you can add people, or even remove them if you feel they are not worthy of your trust.
The issue that causes, of course, is that for technical and historical reasons, when a website presents a certificate to anybody who connects to it, normally (except in special circumstances) it can only present one certificate. Therefore it wants to present a certificate that as many as possible of the people trying to connect to it will trust, because otherwise some proportion of its would-be customers will get error messages and be scared away from using the website. This gives CAs, certificate authorities, the problem of ubiquity: they would like to be in everybody's trust store, they would like to be trusted by everyone, and they would like that to have worked its way through the ecosystem, so that even old versions of Windows or old versions of Mac OS or old versions of Firefox have that trust. So if you're setting up as a certificate authority, unless you can find some way of bootstrapping yourself — and we'll come to that — you have a very long lead time before your trust is ubiquitous enough among all the different clients that people will even want to use your certificates.

Why is Mozilla's root program different? We run our root program in an open and transparent manner. We feel it's very important for the internet that at least one root program is run in a way that listens to and takes account of the views of the internet community; where the processes and procedures are open, and people can suggest modifications and improvements to the way we do things; where considering adding someone to the trust list is an open discussion process; and where, when we're considering taking someone off, whether that is the right reaction to some perceived crime or error or mistake is also something we discuss. That is how our root program is different, and in fact it has influenced the community of root programs to be more open than they used to be, certainly.
So, just a little terminology, to make sure we're all on the same page about certificates when I talk about them later. Certificates come in hierarchies. They start with the roots at the top, which are embedded in the browser or the operating system or whatever other trusting thing. Normally you want your root certificates locked up in a bank vault somewhere, because you really don't want them misused; so what happens is that the roots sign what are called intermediate certificates, of which there may be a number, and those are the ones you keep in your data center. They in turn sign the end-entity certificates which you hand out to people who own websites, or who own email addresses, or who want to sign code, or whatever it is that you're doing. So that's roots, intermediates, and end-entity certificates. And if you're a new CA who wants to get into the market, you might pay another CA a very large amount of money for what's called a cross-certificate, which is where their root signs your root, so that the stuff you're issuing is trusted in older browsers which don't have your root built in, because there's an alternate path up to theirs.

What policy tools do we have for managing our interactions with certificate authorities and managing this trusted list? Well, every certificate authority has, for each root, what's called a CPS and a CP: two documents, the difference between which is a bit geeky and technical, but which together define the processes that a CA will use when issuing certificates — we're going to issue them this way, to this sort of person, we're going to keep our data centers like this, and so on and so forth. There is also an international organization called WebTrust, and a European organization called ETSI, which produce standards for how certificate authorities should behave.
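The hierarchy just described — roots signing intermediates signing end-entity certificates, with cross-certificates giving alternate paths upward — amounts to a path-building exercise, which can be sketched like this. The certificates here are just hypothetical (subject, issuer) name pairs; a real client also verifies signatures, validity dates, and constraints at every step.

```python
# Each "certificate" is (subject, issuer). A cross-certificate is simply
# another (subject, issuer) pair giving an alternate path upward.
certs = [
    ("example.com", "NewCA Intermediate"),   # end-entity certificate
    ("NewCA Intermediate", "NewCA Root"),
    ("NewCA Root", "NewCA Root"),            # self-signed root
    ("NewCA Root", "OldCA Root"),            # cross-certificate from OldCA
    ("OldCA Root", "OldCA Root"),
]

def find_path(subject, trusted_roots, seen=frozenset()):
    """Return one subject -> trusted-root chain, or None if none exists."""
    if subject in trusted_roots:
        return [subject]
    if subject in seen:
        return None
    for subj, issuer in certs:
        if subj == subject and issuer != subject:   # skip self-signed loops
            rest = find_path(issuer, trusted_roots, seen | {subject})
            if rest is not None:
                return [subject] + rest
    return None
```

An old client that only trusts "OldCA Root" still reaches example.com via the cross-certificate, which is exactly why a new CA pays for one; a new client with "NewCA Root" built in uses the shorter path.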
Often those WebTrust and ETSI standards amount to "you should do what you say in your CP and CPS, and we'll check that you do", with rather less than we would like about whether the things you say you're going to do are actually the right things to be doing; but nevertheless, those standards exist. An organization called the CA/Browser Forum, which is an informal collection of the major CAs and the major root stores, also produces a couple of standards. And all of these standards are audited: every year a CA has an auditor from an audit firm come and visit and check that the things they're doing match the requirements in all of these documents. The reason they have to do that — the reason those documents and those audits have teeth — is that each root program has a policy saying: if you want to be in our trusted list, you have to have this sort of audit and this sort of audit and this sort of audit, and you need to show us the documentation that proves you've had them and that you passed. So root program policy defines what we expect of CAs, but also the audits they have to have, and therefore someone goes onto their premises and, to some degree at least, checks that they're doing all the things they're supposed to do. One could argue about how effective those checks are, but these are the tools we have.

The other policy tool we have is changing the user interface of our product, or its capabilities. In Firefox, at least, we can eliminate older algorithms, or deprecate them, or put warnings in for them, to try to encourage people to stop using them. And when Firefox connects to a secure site, there are effectively only two states it can be in.
There's a state which uses what's called Extended Validation, where the actual name of the business has been carefully checked by the CA, and we put that in the URL bar, normally in green, so it says "this is B&Q" or "this is PayPal" or "this is Amazon.com Inc" at the top; and then there's everything else. Now, CAs have other graduated levels of validation that they manage to sell to people, but those don't change the browser UI, so I'm not quite sure why people pay for the extra levels; but then, I'm not in a CA's marketing department. So those, in broad brush, are the policy tools we have for managing the behavior of certificate authorities in our root program.

Mozilla is keen to use our power — CAs want to be in our store so their certificates will be trusted by Firefox, so they'll be able to sell them, so people will be able to connect to things using Firefox — and that power allows us to drive improvements in the certificate system and in the security of the web. Some of the previous improvements we've managed to drive, either by acting unilaterally, by acting with other browsers, or by acting through the CA/Browser Forum, include the following. The first thing the CA/Browser Forum did was the Extended Validation standard. You can argue about the value of having information about the physical location and address and so on of a company in a certificate, but if that information is valuable to you, then you definitely want it to be correct. It used to be the case that CAs were doing things like "yes, I'll accept a dodgily-faxed copy of some phone bill to show that this is your address". Some CAs were trying to do better, but of course, because doing better costs more, there was a race-to-the-bottom problem. So the CA/Browser Forum got together and produced a set of what we think are minimum standards for the vetting of actual real-life identity, and called it Extended Validation.
Many CAs still use lower standards, but we don't trust those enough to display that information in our browser; Extended Validation is what we think is necessary to have a real-world identity written into a certificate. Those standards were started in 2007 and finished around 2009. Another thing we did was eliminate the use of 1024-bit RSA keys. We had to drive them out by first stopping CAs issuing end-entity certificates like that, then getting them to roll over all their intermediates, and then getting them to stop using 1024-bit roots, because it was becoming clear that a well-motivated state-level attacker with a large amount of computing power — mentioning no names — might be getting close to factoring 1024-bit keys. And if they could factor the key of one 1024-bit root that was still in our root store, they could generate as many certificates under it as they wanted, and our browser would trust them. So it was definitely time to move away from that, but it was a multi-year effort.

Around 2010 or 2011 — we'll get to exactly when and why — the CA/Browser Forum produced a document called the Baseline Requirements, which is a set of things everybody has to do for the issuance of any certificate. Extended Validation applied only to certificates containing real-world identity; the Baseline Requirements apply to anything you issue. So that was definitely a way of driving up the lower bound of certificate authority behavior and making sure everyone did things that were at least vaguely sane.
Then there's the world of intermediates. We know about the roots, because they're in our trust store, but a CA can issue any number of intermediates, and you don't necessarily see all of those or know what they're doing. So we came up with a policy a few years ago where CAs had to either disclose all their intermediates to us in a database, so we knew about them, or technically constrain them so that they couldn't issue for random sites on the internet. That was a policy that we drove. Recently there have been improvements, and much more rigor, in domain validation methods: the methods of determining whether somebody owns example.com or paypal.com or gerv.net or whatever. Obviously, the key thing a CA does before issuing a certificate is check that the person they're giving the certificate to actually owns the domain names in it; that's the most basic, most important thing they do. But exactly how they were doing it was a wide variety of methods, with quite a lot of slop in the definitions, so we came up with best practices for all those styles of method, documented them, and they are now in the Baseline Requirements. We've also driven non-policy improvements through changes to Firefox, primarily in the use of cryptographic algorithms.

As for upcoming things we're hoping to do through this root program power: there's a standard called Certification Authority Authorization, or CAA, where you put in your DNS a list of the certificate authorities you want to allow for your domain. This is a way of trying to solve the weakest-link problem: if there are 60 trusted CAs and one of them sucks, that one can issue certificates for your domain and you can't do anything about it.
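CAA can be illustrated with the DNS record itself plus the check a CA is expected to perform before issuing. The records below are made-up examples in the standard zone-file format, and the Python function is a deliberately simplified version of the real algorithm, which also climbs the DNS tree to parent domains and handles wildcard and "issuewild" cases:

```python
# Hypothetical CAA records, as they might appear in a zone file:
#   example.com.  IN  CAA  0 issue "letsencrypt.org"
#   example.com.  IN  CAA  0 issue "comodoca.com"
caa_issue = {"example.com": ["letsencrypt.org", "comodoca.com"]}

def ca_may_issue(domain, ca_domain):
    """Simplified CAA check: with no CAA records, any CA may issue;
    otherwise only the CAs named in 'issue' tags may."""
    allowed = caa_issue.get(domain)
    if allowed is None:
        return True            # no CAA record: no restriction
    return ca_domain in allowed
```

The weakest-link point is visible here: without the record, `ca_may_issue` says yes to everybody, which is exactly today's default.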
CAA is an attempt to solve that problem by allowing sites to say "no, I only want certificates from Symantec", or "only Comodo or DigiCert", or "only Let's Encrypt". The trouble, of course, is that this hasn't really taken off, because CAs aren't yet required to read and abide by that information, and they're not very keen to do so, for various reasons; but we want to make it mandatory. We're trying to push that through the CA/Browser Forum now, but if we don't succeed, we may well take unilateral action and require it anyway. Google has come up with a system called Certificate Transparency, which is a way of trying to make sure that CAs publicly disclose every certificate they issue. It's not fully implemented yet, so it's only used for a subset of certificates, but it's already been extremely useful in finding problem issuances — bad things that CAs have been doing. Mozilla has some issues with the scalability of that system, and with how you can get certain guarantees that the log servers are actually going to behave properly, so we're proposing some modifications to it; that process is ongoing, but we may well end up doing something like it.

And, for reasons we will come to, we're feeling that the audit process is not as good as it could be. What happens, of course, is that a CA employs an audit firm to do its audits, and that audit firm would love to be employed again the following year, and so is keen to give the CA a clean bill of health. And that relationship carries various professional obligations which mean the audit firm can't really tell us anything about what happened, apart from the fact that the CA passed. We think this kind of sucks, and that we need more transparency, so we're trying to work out how to improve those mechanisms.
And recently, as we'll come to, we stopped accepting audits from one particular audit firm, which we hope will galvanize audit firms generally to make sure they're doing a good job. So that's a broad and very quick overview of the root program, what we do, and the kinds of things we're trying to achieve with it. Now I want to talk about some of the incidents that have happened in the past five years in the certificate authority space, which have led to changes both technical and policy.

In 2011, a large CA called Comodo had what's called a registration authority, or RA, in Italy. An RA is basically a firm, normally in a particular country, which knows how to check that a business is a business, and knows how to do publicity and advertising there. They get business for you and do some of the checking, and then they hand the information over to you for you to issue the certificate. Unfortunately, the security of the account they were logging into at Comodo to issue certificates was very bad. A person calling himself ComodoHacker found out the password and issued a bunch of certificates for major websites, which he then boasted about. This also revealed that Comodo were keeping their root certificates online: instead of the intermediate arrangement we were just talking about, they had their root certificates in the servers that were issuing the certificates, and were issuing directly off those. ComodoHacker may or may not have been from Iran. Comodo claimed that he was, but that may just be because they wanted to big up their attacker as a state-level adversary; not sure. So what did we do? The result was that we required that all issuances, or pretty much all issuances, had to be via an intermediate, which means CAs can keep the root certificate offline and reduce the risk of compromise of the root key.
We also said that any account at the CA that an RA can log into, and that can cause certificate issuance, has to have two-factor authentication. That was in 2011.

Also in 2011, in a still very famous incident, a Dutch CA called DigiNotar got comprehensively pwned, by somebody working for the Iranian government. The exact scope of the compromise is unknown, but it was massive. They completely failed to notify any of the root programs that they were in trouble, and instead tried to revoke the certificates they found that were dodgy and cover it up. But what actually happened was that certificates issued from the DigiNotar roots were used to man-in-the-middle people in Iran. The company which did the forensic analysis of DigiNotar produced this video. When you use a certificate, your browser will often go back to the CA and ask "is this certificate still OK?", using a protocol called OCSP; and if you can look at those logs, you can geolocate the IP addresses to find out where in the world people are using that certificate from. (We're really quite washed out here, aren't we? Can we get some of these lights off briefly? Is it up here? There we go. Have you found it? Found it. OK, that's better.) The little red dots are people using the certificate. See where they all are? The others, we think, are probably VPN or Tor exit nodes and that kind of thing. But it seemed pretty clear that the government of Iran was using these certificates to man-in-the-middle tens or hundreds of thousands of people, and we have no idea how many people's security was compromised, or how many people got into serious physical trouble because of this. When I first saw this video, it made me cry. What did we do? Well, we distrusted the entirety of DigiNotar's organization and all of the root certificates they had control of.
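A side note on OCSP, since it just came up: a responder that merely keeps a blacklist of revoked serial numbers will answer "good" even for a forged certificate it has never seen, whereas one backed by a database of everything the CA actually issued can answer "unknown". A toy contrast, with invented serial numbers:

```python
# Sketch of two responder designs. A blacklist-only responder says "good"
# for serial numbers it has never heard of; a whitelist-backed one does not.
revoked = {"0x1D"}                  # serials known to be revoked
issued = {"0x1A", "0x1B", "0x1D"}   # every serial the CA actually issued

def ocsp_blacklist(serial):
    return "revoked" if serial in revoked else "good"

def ocsp_whitelist(serial):
    if serial not in issued:
        return "unknown"            # never issued: e.g. forged by an attacker
    return "revoked" if serial in revoked else "good"
```

A forged serial like "0xFF" gets "good" from the first design and "unknown" from the second, which matters when an attacker is minting certificates the CA knows nothing about.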
Some of the roots, though, they were managing for the Dutch government, and the Dutch government did an initial investigation and said "no, our stuff is fine, it's only the other stuff; you can leave ours". But it turned out that was rubbish, and so eventually we distrusted theirs as well. The Baseline Requirements had already been under way in the CA/Browser Forum, but this definitely lit a fire under them and got them going. And because of what we just saw: all of those requests were coming in asking "is this certificate any good?", and the service was saying "yeah, it's fine" about certificates it had never heard of. OCSP responders were, in a sense, powered by lists of bad certificates, and so they said "it's fine" for anything that wasn't on their list of bad certificates. We decided this was completely ridiculous, and that OCSP servers had to be properly backed by a database of good certificates, so we told CAs that they needed to completely rework how their OCSP servers operated so that they didn't do something so stupid. The last thing is that the CA/Browser Forum produced something called the Network Security Guidelines, which in hindsight were a bit of a knee-jerk reaction. They said what seemed to be vaguely good things to do about network security in 2011; but in 2016, a lot of what you should do is different, this document hasn't changed, no one in the CA/Browser Forum really has the expertise to update it, and so it's actually now a bit of a drag on good certificate-authority network security practice. We have to figure out what to do about that. But that's something that happened at the time.

Also in 2011, a company called Digicert Sdn. Bhd. — not to be confused with the much bigger DigiCert in the US — was a Malaysian subordinate CA of Entrust: Entrust had issued them an intermediate certificate, which they were then using to issue to their customers.
They decided it would be great to have some certificates with 512-bit keys, which you can factor in about three minutes on your coffee pot; with no key-usage information, which means the certificates could be used for email or servers or anything else you wanted; and with no way of revoking them. Well done, Digicert Sdn. Bhd. We decided to distrust them completely, and at around the same time the Baseline Requirements were published, which, in case it wasn't already obvious, require you to put information in your certificates about how to revoke them and what they should be used for.

In 2012, a company called Trustwave, an American commercial CA, issued an intermediate certificate to a company called Walgreens, who used it for man-in-the-middling everyone on their corporate network. Now, in one sense that's sort of OK, because the people inside the network had agreed to it; in another sense it's really dangerous, because if that intermediate certificate leaks, someone can use it for man-in-the-middling anyone else in the world. We decided this sort of thing was too dangerous to allow in the public PKI, and because we hadn't taken a strong stance against it beforehand, we didn't actually sanction Trustwave; but we told all the CAs that this had to stop, and gave them two months to cut it out and get everyone onto some other solution. So: no more man-in-the-middling under public roots, even for people's internal networks, because we can't trust you to keep the keys safe. Then in 2013, when the French government CA did exactly the same thing, we constrained them to .fr and about a dozen other very small top-level domains which are French dependencies. It turns out they decided, as a result of this incident, that they didn't really want to be a publicly trusted CA at all, so they are moving away from this hierarchy, and it's soon going to be removed entirely, at least from Firefox.
Then in 2014 there was an incident, but it didn't apply to us: it involved a CA that was only in Microsoft's store, so we dodged it that year. In 2015, a Chinese CA called CNNIC issued a man-in-the-middle-capable certificate to a Middle Eastern company called MCS Holdings, in violation of its own Certification Practice Statement. It didn't disclose this intermediate as it was supposed to — we had the intermediate-disclosure rules in place by that point. MCS had no PKI practices to speak of; we had no idea what they were doing with this certificate. They could have had it just stuck on a server in their data center with no protection whatsoever. We decided this was so entirely ridiculous that we distrusted the entire CA, although we did say they could reapply after a year; in fact they did reapply last October, and they're going through the re-application process to be re-included.

2016 was WoSign and StartCom, which I'm going to talk about a little more, because it's a really interesting detective story and I very much enjoyed detecting it. WoSign is a Chinese CA and StartCom is an Israeli CA. The incident started when people from various places, including some of our friends at Google, reported various problems with WoSign. When there were three or four problems, we decided we should make a list; we made a list, other people reported other things, and the list got really quite long. WoSign were trying to deal with and respond to these, but one particular item on the list was particularly concerning, and that was SHA-1 backdating. The background to this is that SHA-1 is a cryptographic hash algorithm which has long been showing its age, and so there has been an industry-wide plan to eliminate its use.
One of the key dates in that plan was the 1st of January 2016, the date after which no certificate authority was permitted to issue certificates using the SHA-1 hash algorithm as part of their cryptographic construction. The trouble is that certificates don't have an issuance date in them. They have two dates: a notBefore date, the date you're not supposed to use the certificate before — a start date, in other words — and a notAfter date, which is effectively an expiry date. But both of those pieces of information are controlled by the CA, and there is no requirement, and no way you could enforce a requirement, that the CA make the notBefore date the date of issuance. In fact, there are sometimes technical reasons why you have to vary it a little. So SHA-1 was forbidden for certificates issued after a certain date, but it is of course technically possible for a CA to backdate its certificates to make it look like they were issued before the ban came into force, thereby working around the code that Chrome, and for a while Firefox, had in them saying "we don't trust SHA-1 certificates issued after this date". We thought WoSign had been doing exactly that.

Why did we think that? Well, this is a graph of WoSign's SHA-1 issuances for the three months leading up to deadline day, when they weren't supposed to do it any more. It turned out, by looking at various fields in the certificates — certificates have patterns, fingerprints, little features that tell you they were issued by a particular system or in a particular way — that they had two types of SHA-1 issuance, shown here in green and orange. The green ones, we think, were probably issued by an automated system: they were issued on every day of the week, and in quite big volume. The orange ones, we think, were probably issued manually: they were issued only between Monday and Friday, in much smaller quantities.
Well, almost only between Monday and Friday. Occasionally, it seems, someone came in on a Saturday and issued the odd certificate; and on one particular Sunday, there were 62 certificates issued. That's two Sundays before the end of the year, the 20th of December. We thought it seemed a little surprising that they would come in on a Sunday and issue so many certificates at once, so we looked at those certificates in comparison to the other orange ones, and in particular at the time of day they were issued. The orange ones, you can see — this is UTC — were issued during the day in China, with a break for lunch. The 62 were issued at random times throughout that particular Sunday. So we have two options. One is that the employees of WoSign, perhaps because they were under some sort of pressure, came in on one Sunday at midnight and worked all the way through to the next midnight, issuing SHA-1 certificates before the deadline. The other is that they had a template with this backdate in it, and were using that template to issue certificates in 2016 which they were claiming had been issued in 2015. We thought the latter was rather more likely. And then Certificate Transparency provided cryptographic evidence of the backdating of six of the 62 certificates, though not the rest.
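The check being gamed, and the way Certificate Transparency exposes the gaming, can both be sketched. Assumptions to be clear about: the 1 January 2016 cutoff is real, but the four-day slack, the timestamps, and the string-based algorithm name are all made up for illustration — real clients key off the certificate's signature-algorithm identifier, and a CT log timestamp only damns a certificate that was logged at issuance time.

```python
from datetime import datetime, timedelta

SHA1_BAN = datetime(2016, 1, 1)

def accept_sha1_cert(not_before):
    """The browser-side rule being worked around: SHA-1 certificates are
    rejected if they claim issuance on or after the ban date. A backdated
    notBefore slips straight past this check."""
    return not_before < SHA1_BAN

def looks_backdated(not_before, ct_log_time, slack=timedelta(days=4)):
    """A CT log's signed timestamp records when the log actually saw the
    certificate, and the CA cannot forge it. For a certificate logged at
    issuance, a notBefore far earlier than the log time is suspicious."""
    return ct_log_time - not_before > slack
```

So a certificate claiming notBefore of 20 December 2015 sails through the date check; but if a log only saw it weeks into 2016, the backdating is cryptographically on the record.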
So the Certificate Transparency evidence was further confirmation that they'd been doing this. The other thing they did was buy this Israeli CA called StartCom. Mozilla has a clause in our root policy saying that if a CA changes control, you have to tell us, because trust is not transitive: just because we trust company A doesn't mean we trust company B when they buy company A. So you have to at least tell us that it has happened. Not only did they not tell us, they explicitly denied that it had happened, until we went to the British, Israeli and Hong Kong company registries and traced the chain of ownership, to show that the Israeli company was now owned by a British company, which was owned by a Hong Kong company, which was owned by WoSign, which was itself owned by a large Chinese IT conglomerate that also has a browser, Qihoo 360. The problem was that we then found, again using Certificate Transparency data, that the style of misissuance we saw in those graphs — those 62 certificates — had a very specific fingerprint, and we found a misissuance done by StartCom with the same fingerprint, for an Australian payments company called Tyro: they had issued two certificates to this company, again backdated. We took that as evidence that the bad practices at WoSign had imported themselves into StartCom, and that it was therefore reasonable to treat the two as one. So what we did was distrust both CAs entirely from the 21st of October 2016. StartCom got an opportunity to be readmitted after six months if they could change their management so that they were no longer managed by WoSign, along with various other changes, because they had also moved over to WoSign's issuance systems, in which we had no confidence because the code quality was terrible. So StartCom had a bunch of work to do, but could possibly apply for readmittance in April.
WoSign can apply for readmittance after a year, but basically has to rewrite their entire infrastructure and so on. But the other thing that we did, and this may be presaging things to come, is that we determined that some of the things WoSign had done wrong should have been spotted by their auditors. So for example, they had out-of-date software on their issuing computers, and one of the things that's supposed to be checked is that you're applying security patches within six months of the time they're issued, and some of the software they were running was five years out of date. There were other things too: vast tranches of certificates had been issued with the wrong fields in them. That wasn't a technical or a security problem, but it's the sort of thing an audit should have picked up, and their auditors, Ernst & Young Hong Kong, had given them a clean audit, although I hear that this fact caused some disquiet among other auditors and among other bits of Ernst & Young, but that's what happened. And so we decided that we were no longer going to accept audits from that particular branch of Ernst & Young, although of course if we later find more bits of Ernst & Young causing audit troubles then that might get wider, but that's what we did. We hope this will have an effect, because it's the first time this has happened; we did not, for example, and maybe in hindsight we should have done, stop accepting audits from DigiNotar's auditors, despite the fact that a lot of the things they were doing were completely terrible once they were investigated. But this time we did. So, 2017, and this is ongoing at the moment, so we'll see what happens: a very large CA called Symantec discovered some problems with an RA (registration authority) in Korea called CrossCert.
So what happened was someone was looking through the certificate transparency data and they found a bunch of certificates, supposedly issued by Symantec, that had the word "test" in the organization field instead of the name of an actual company, which again is not necessarily a security problem in itself, but isn't what you're supposed to do. And they said, what about these? And Symantec said, oh yes, those were issued by CrossCert, we'll go and talk to CrossCert about this. And it seems like the more they talked to CrossCert, the more problems they found. So there were 12 certificates that they were concerned about, and they said, just show us the audit logs for these, to show that you properly checked that these domains were owned, and it seems CrossCert couldn't produce them. The story is still emerging. And again, this may also raise questions about CrossCert's auditor, and whether CrossCert should have got a clean bill of health from their auditor. So this is an emerging situation which we'll have to watch, but this might well be 2017's big thing, because (and I'm still asking Symantec about this) I don't think it's necessarily possible to tell when a certificate was issued via CrossCert and when it wasn't, which means that the blast radius for this could be really quite large. So watch this space. There you go. So those are some things that happened and what we did about them, as well as some stuff about our root program, and I'm glad we have 10 minutes left; I'm very happy to take questions. Please, sir. Yes, you in the blue. I'll tell you, well, I'm sure you know that there's a trend towards more man-in-the-middle in the browsers, because in some jurisdictions ISPs are actually required to provide parental control filtering, and so they are going for... Can you speak a little slower? I'm not quite following. Yeah.
In certain jurisdictions, such as the United Kingdom now, ISPs are required to provide parental control filtering, which means that they are going for DNS filtering solutions that then redirect, for example, HTTPS connections towards forbidden websites to their own proxies, with a page that says "this is blocked" and so on, which doesn't work unless you can man-in-the-middle the certificate and create a fraudulent one. And this is going to be a growing trend, because more and more countries may require this, making it compulsory by law. So did you ever consider this problem? Because unless you have a root certificate in the browser that allows the man-in-the-middle to create a valid page, what the users actually get is a nasty security exception in the browser, and this is a terrible user experience for everyone. I don't know how to solve this, but I think we should talk about it. And another question: do you really think, given all these problems with the CA system, that it can really be run securely? If just one certificate authority is compromised, everyone is affected, so does this really work? So, two very different questions. The first question was that some governments are now requiring parental controls, which sort of requires man-in-the-middle, and how do you accommodate that technically? And the second question was basically that if one of the CAs is bad, everybody's stuffed; what do you do about that? To answer the second question: things like CAA, and mandatory use of CAA, will hopefully go some way towards that. It means that hopefully the CAs will have systems that automatically check, in a non-overridable way (and this is why we want to make it mandatory), whether they're allowed to issue for a particular domain, and issuance will be automatically blocked if they're not, even if some evil person is at the controls. So we hope that that will deal a little bit with the weakest-link problem.
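The CAA check described above is, in essence, a domain owner publishing in DNS which CAs may issue for that domain. A simplified sketch of the record evaluation (the real rules, including tree climbing up the DNS hierarchy, wildcards, and the separate `issuewild` tag, are in RFC 8659; this condenses them for illustration):

```python
def ca_allowed(caa_records: list[tuple[str, str]], ca_domain: str) -> bool:
    """Evaluate CAA 'issue' records: may ca_domain issue for this name?

    caa_records: (tag, value) pairs as found in DNS, e.g.
    [("issue", "letsencrypt.org")]. Simplified: if any 'issue' records
    exist, the CA must be named in one of them; the special value ";"
    forbids all issuance; no records means no restriction.
    """
    issue = [value.strip() for tag, value in caa_records if tag == "issue"]
    if not issue:
        return True  # domain publishes no CAA restriction
    return ca_domain in issue

print(ca_allowed([("issue", "letsencrypt.org")], "letsencrypt.org"))  # True
print(ca_allowed([("issue", "letsencrypt.org")], "evil-ca.example"))  # False
print(ca_allowed([("issue", ";")], "any-ca.example"))                 # False
```

The point of making such a check mandatory and non-overridable is that it runs before every issuance, so even a compromised or coerced operator cannot issue for a domain that has excluded that CA.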
Certificate transparency, which Google are doing, will also make it easier to spot when bad certificates are issued, because they will have to be publicly logged in order to be trusted, and then you can notice that they're there. So those are a couple of things that people are doing to try and avoid the "one CA is bad, therefore we're all stuffed" problem. In terms of man-in-the-middle, and legitimate man-in-the-middle by, say, companies, or perhaps for parental control reasons, I think that having the browser specifically add features to allow man-in-the-middling is probably not the direction we want to go down. I think that you can already allow man-in-the-middle by installing your own root and then generating certificates under that root, and I think that's probably safer than your browser shipping with some kind of built-in man-in-the-middle feature. Another question? Yes? You are getting man-in-the-middled, though, if you are behind an HTTPS proxy; basically you have man-in-the-middle in an enterprise in this case. Many enterprises have this. Yeah. OK, and the question is then about ubiquity. So, if I correctly understood, on Windows, if I run Chrome or I run Firefox, for different websites I can get different results, red or green, so different things. So that's the ubiquity question, and then the related question is: why a separate root program instead of cross-referencing? So the first question was: you get different results in Chrome and Firefox on Windows, and what about that? And secondly, why is there more than one root program?
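The "install your own root" approach mentioned above comes down to one thing: a client's trust decision is simply whether a certificate chain terminates in a root that is in its local trust store. A purely illustrative pure-Python sketch (the names are invented for the example):

```python
# Illustrative only: roots and certs are stood in for by plain strings.
trust_store = {"builtin-root-A", "builtin-root-B"}   # ships with the client
proxy_chain = ["proxy-leaf", "enterprise-root"]      # issued by the proxy

def chain_trusted(chain: list[str], store: set[str]) -> bool:
    """A chain is trusted when its last element is an installed root."""
    return chain[-1] in store

print(chain_trusted(proxy_chain, trust_store))       # False: warning page

trust_store.add("enterprise-root")                   # admin installs the root
print(chain_trusted(proxy_chain, trust_store))       # True: no warning
```

This is why an enterprise or filtering proxy needs no special browser feature: once the administrator installs the proxy's root locally, certificates the proxy mints validate normally, and the decision stays under the local administrator's control rather than being built into the browser for everyone.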
Well, by default you can get different results in Chrome and Firefox if there's a disparity between the Windows root store and the Firefox root store, but Firefox does now, quite recently, have an option that you can switch on which allows it to trust manually added certificates in the Windows root store. That means not the ones that Microsoft adds, but if you've added one as a domain administrator or as a company, Firefox will find that and also trust it, so it makes it easier to use Firefox in enterprise environments in those circumstances. But why do we have multiple root stores? Why do we not just say, OK, on Windows Firefox will trust Microsoft's root store, on Mac it will trust Apple's root store, and on Linux it will trust... oh, hang on a minute. Well, there's one reason right there, because most Linux distributions use our root store, and so if we stopped doing a root store they would end up having to copy Microsoft or Apple, and having either Microsoft or Apple determine, in a non-transparent way, the list of certificate authorities that everybody in the world trusts seems to me a definite step backwards for web security. You can ask the other root store programs why they have their own program, and it's certainly true, I think, that many of them are in a sense following Mozilla's lead in terms of who they admit, because for a commercial company a root program is just a cost center, right? It's a necessary evil that you have to do. You may want to keep it in order to exercise some forms of control, but it's not really something you can make money at; charging CAs for inclusion will give you a bit of money (we don't do that), but not really very much money. It's just a hassle, right?
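For reference, the Firefox option mentioned here is, as I understand it, controlled by a single preference; a sketch of enabling it via a prefs file (preference name taken from memory of the Firefox enterprise-roots feature, so verify against current documentation):

```js
// user.js: let Firefox also trust CA certificates that an administrator
// has manually added to the operating system root store.
pref("security.enterprise_roots.enabled", true);
```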
But the reason our root program exists, at least, is because we think at least one root program needs to exist which is run openly and transparently, and we think we can use it both to drive openness and transparency in other root programs and to drive up the general security of the web, using that power that we have. Yes? Thank you for the talk. I'm wondering if you could elaborate a bit more about what you want to see changed in CT, and also, another question: can you elaborate on how eIDAS will maybe change the root program for Mozilla? Yes. So on the matter of CT, some people who understand the technology much better than me are currently producing a paper about the changes that we'd like to see. They put an initial post on the trans mailing list, which is the IETF mailing list for CT and things like it, about some of the concerns that they had, and they've been asked to go away and write up the changes that they'd like to see, and that's what they're doing. So exactly how those changes work technically is not my bag, and you'll have to wait until they explain more about the problems that they're having. The second question was about eIDAS, which is an EU regulation for regulating certificate authorities and certificate authority lists. I have to be slightly careful what I say about eIDAS, because some of my opinions about eIDAS are probably not all that printable, or wise to broadcast, but I think that the people who legislated eIDAS didn't really understand the difference between the SSL certificate market and other sorts of certificate market, like document signing and code signing and even email, and therefore the regulations are not a good fit. The eIDAS people would like Mozilla to just decide that we're going to trust anybody they decide to trust, so if the EU trusts somebody, Mozilla should obviously trust them. We, and other root programs to be fair, pushed back fairly strongly against that suggestion, and so we don't think that that's going to happen. We
think that the best way of managing it is: if they want to have their own trusted lists, they can have their own trusted lists and use them for whatever they want, but the best way to avoid problems is just to make sure everyone on their trusted lists has been through our process and has also got onto our trusted list, and then there's in practice no disparity and so there'll be no difficulty. We're very keen to make it easier and simpler for people to go through the necessary process, without making the process less rigorous, and we're happy to work with people from eIDAS or anywhere else on doing that. But it's certainly not the case that we're going to agree to trust anybody just because some other entity says, here's a list of people to trust, off you go. We think that, first of all, that reduces our ability to drive positive change, because someone can say: no, I don't have to do what you say, Mozilla, because I can just get on their list and then you've promised to trust them, so up yours. Not very helpful. So we don't think eIDAS will have much effect in practice on our root program. Two questions: one here and then one there. OK, thank you. Hi. Well, it's been a long time since I've been playing with this, so: the company that owns the root program, where is it located? The company that owns the root program?
Well, there's a root program run by Mozilla, yes. So it's owned by a company, I guess? Well, Mozilla is a non-profit organization, a US 501(c)(3), so it's based in the US. OK, so some of the organizations issuing certificates in the United States are under United States law, and it's possible that the United States administration could demand the keys of a root certificate? Well, that's true of certificate authorities in any jurisdiction. If a CA is based in a particular jurisdiction, it's possible that the government will come knocking with their national security letter, or whatever it is, and say, we'd like a copy of your root keys, please. That is certainly true, but it's not a problem that's specific to our root program, or to any root program, because the government can always come knocking and asking you for things. But the advantage of the certificate system is at least that certificate transparency might be able to help with that in some cases. Also, if the government does do that and starts issuing certificates for man-in-the-middle attacks, they are handing out, with every connection, cryptographic evidence of what they've done, which means that if the client is smart enough (and making smart clients which detect this kind of thing is an ongoing area of research) it can grab that cryptographic evidence, submit it or publicize it, and we can go: OK, certificate authority based in Kublaqistan, you've clearly been issuing dodgy certificates; it doesn't matter if the government made you do it, we're not trusting you any more. So there is always cryptographic evidence when that happens; the trick is capturing it. And that's a better situation, I think, than other systems where it's much harder to prove that the government has snuck in and stolen some keys. We have seen that the power of the CAs is, I think, too big. They have a big potential for abuse, and they did it and they will do it. So why is there not a bigger drive, like, from
Mozilla, for example, as an NGO, to combine CA-based systems with, for example, trust on first use, so that the role of the CAs can be reduced, for example to bootstrapping only the first contact? So that would be one idea; why is no one pursuing it? Well, it would be a whole other talk to go through the different trade-offs and user interface issues that there are with the different alternatives to the CA system as it works today, but very briefly, one of the big problems with the trust-on-first-use system is how you deal with key change. In a CA system, if a key changes, nobody knows, nobody cares, right? Because the new key is signed by the same authority as the old key, and it's still signed, and you still trust the authority, so it's fine. In a trust-on-first-use system, a key compromise is indistinguishable from some guy just deciding to roll over his key. You know: I think I need a new key, I'm going to get a new key. The browser goes, oh, there's a new key; well, what do I do about that? It could be that something bad has happened, or it could be that something bad hasn't happened. So that's a very brief answer to that problem, and it would be great to discuss it further, but we really don't have time here. Yeah, well, in fact we're out of time, but Gerv, thank you so much.
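The key-change ambiguity described in that closing answer can be made concrete in a few lines. An illustrative trust-on-first-use sketch (names invented for the example): the client pins the first key it sees, and any later change produces the same signal whether it was a routine rollover or an attack.

```python
# Illustrative TOFU client state: hostname -> pinned key fingerprint.
pins: dict[str, str] = {}

def tofu_check(host: str, key_fp: str) -> str:
    """Pin on first use; on later connections, compare against the pin."""
    if host not in pins:
        pins[host] = key_fp
        return "first-use: pinned"
    if pins[host] == key_fp:
        return "ok: key matches pin"
    # Here is the problem: a legitimate rollover and a man-in-the-middle
    # attack look identical to the client.
    return "WARNING: key changed (rollover or compromise?)"

print(tofu_check("bank.example", "aa:bb"))  # first-use: pinned
print(tofu_check("bank.example", "aa:bb"))  # ok: key matches pin
print(tofu_check("bank.example", "cc:dd"))  # WARNING: key changed ...
```

In a CA system the third case is resolved by checking that the new key is certified by a trusted authority; pure TOFU has no such tiebreaker, which is the trade-off the speaker is pointing at.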