But as of the end of November, RIM had published a blog post saying that they were expecting to leave, so we'll see how that goes. Back in East Asia, Korea has implemented something to protect children called Smart Sheriff. That's one implementation, but it comes out of a policy to add additional filtering: mobile devices are required to have this child protection software. And there was a report this fall, after the implementation in the spring, saying that it actually opens up a bunch of security issues and a bunch of surveillance issues on the devices it's installed on. So there's an ongoing conversation about how to do this, and about what the right trade-off is between one kind of safety and the other. Not to be outdone, North Korea has added a Squid proxy. Cool. So from the European side, I would like to mention a new so-called censorship trend, also known as gambling censorship. Apparently governments have decided that they need to better regulate gambling and betting resources on the Internet, and they are in a race to push more and more regulations and policies to block, filter, and otherwise reduce access to these resources. Unfortunately, there is a lot of mistaken blocking: various cases of overblocking, as well as collateral damage cases like inconsistent DNS mail exchange records, or TLS timeouts on websites hosted on the same IP. That's quite common in shared hosting environments, where you share the same IP with a lot of other websites. What I would like to mention here is a report from an organization in Switzerland for the European Commission. It's more than 1,500 pages, from 2006. What you see here is a table that lists countries from left to right: on the left are the stricter countries with gambling and betting regulation policies, and towards the right they have less and less regulation.
This has changed already; for instance, the UK on the far right side actually does some extreme blocking, which I'm going to mention later in our slides. I would like to point out a case we did some work on just before this summer: the case of Greece. We did an extensive study on quite a lot of ISPs; actually we covered most of them. What's happening in Greece is that the ISPs have somehow started blocking at will. They just decided they needed to do this regulation (okay, they were forced by the gaming commission to do it), but as you can see in this table, actually this figure covering most Greek ISPs, they are inconsistent in their blocking. Red represents a site that has been blocked; the far right column is the total number of sites blocked in this snapshot of the blacklist; and green is the resources, the entities, that haven't been blocked yet. I can't cover the whole research on Greece, but I'm going to give some examples of what happened during and after this. Mostly they have done DNS blocking; some companies decided not to administer this well and issued HTTP 403 errors; and in some cases, like Vodafone, they have done DPI. So what users used to experience was this blocking page. This is quite important if you consider that Greece was considered to have quite liberal views on the internet: you could still download some torrents; there was some IP blocking, but in other words, it was not such extensive filtering. Most of the ISPs were serving a page like the one you see right now, where users were being landed. What is actually happening now is that the ISPs are redirecting them to the gaming commission website. As a reminder, the gaming commission is the
commission that issues the blacklist, enforces the blacklist, and makes laws, one of which is to punish, to consider it a crime, to even access these forbidden web resources. So users actually experience a block page like this right now, with a generic data-center stock image suggesting that everything is fine. What is quite interesting to mention, and it hasn't been mentioned yet, and I think it makes sense to mention it at some point (we will show this right now), is that since the Greek ISPs implemented this filtering infrastructure, they have started using the same infrastructure to block other resources, so-called illegal content. As you can see, there is an encoded domain name; it's in Greek. For those of you who can read HTTP, it's quite simple: the body is null, there's nothing there, and it's returning HTTP status 403. No block page, nothing. This has been done by Vodafone; this specific segment was done by Vodafone in June 2015, and they have used the Blue Coat web proxy for DPI. Moving on, another interesting case: the United Kingdom. We could actually have two whole presentations on the United Kingdom, but we have only one, and we cannot mention everything that has happened in the last years. But something quite important that happened in 2015 is that the government has started pressuring ISPs a lot to deploy filters. The difference with the UK is that they don't really have a gaming commission, or any commission, that says what should be blocked and maintains a blacklist. It's the ISPs that decide what needs to be blocked, filtered, and censored. So you can find cases where, you know, a dentist's website has been blocked, some bars have been blocked, or websites containing the keyword sex have been blocked. A lot of false positives.
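The "silent" 403 just described (empty body, no block page) suggests a simple detection heuristic. This is a rough illustrative sketch, not OONI's or any deployed system's actual detection logic:

```python
def looks_like_silent_block(status_code, body):
    """Heuristic: flag responses resembling the 'silent' blocks seen in
    Greece, i.e. an HTTP 403 with an empty body and no block page at all.
    A rough sketch for illustration, not any deployed detection logic."""
    if status_code != 403:
        return False               # ordinary responses, or other errors
    return len(body.strip()) == 0  # a legitimate 403 usually carries some HTML
```

In practice you would also want to compare against a known-good fetch of the same resource, since some origin servers do return bare 403s themselves.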
What is quite important to mention is the Blocked project from the Open Rights Group. They have done some impressive work monitoring all of this, across all or most ISPs in the UK. They have used the Alexa top 100K, and these results are from yesterday's snapshot: they found that out of those 100K, around 21,000 websites, resources, are being blocked by strict filters, and around 11,000 websites are blocked by default filters. Let me make clear what a default filter is: when you become a subscriber or buy a new mobile number in the UK, it comes with this default filter, so you actually need to opt out. You need to say, hey, you know, I want to watch porn, I want an uncensored internet connection. This is the latest, yesterday's snapshot from their page; you can actually click through from the slides, which we are going to publish online. They also present, per provider, the percentage of sites that have been blocked, and some discrepancies. So I would like to move to a different region now, the Middle East, and continue with Saudi Arabia. This is an excerpt from an OONI report, from an HTTP request test; I will explain more later. What is happening here in the case of Saudi Arabia is that they have blocked Arab Times. This happened in 2015, and they just serve a sort of "this page is blocked" page, with an indication that this resource has been blocked by WireFilter. I did some searching and found out what WireFilter is: it seems to be a web proxy, DPI, whatever, which supposedly, according to their website, should be used in good ways and, as they call it, for "a worry-free internet experience". Continuing with Iraq: does anyone know what normally happens at the end of July, or specifically what happened on the 27th of July 2015?
Okay, so during this time there are normally the student exams in Iraq, and what happened between 2 and 5, like 7 to 10 local time, as you will see from this BGP data flow, is that Iraq decided to actually stop the internet for a bit, you know, so the students could have worry-free exams without cheating, and then everyone moved on with their exams. Another interesting finding we got by scraping the OONI reports was from the HTTP invalid request line test: there were some instances of a Squid proxy, a web proxy, being used, as mentioned here in this report. Moving on with Iran, and specifically a very small segment of what happened in Iran; as I mentioned, it's a very short presentation and we need to cover a lot of things. Iran has been blocking Telegram and many other things, but what's interesting in this case is that on October 20th, users started wondering on social media, hey, what's happening, is it blocked or not? And then the officials said, no, actually, it wasn't blocked; there were some disturbances, technical issues, that somehow brought this site, Telegram, offline. And now I'm handing the question over to the audience: what's happening in the rest of the world? So, yes, in the West, one of the main things that's happened is trying to understand how to deal with this rise of Daesh, or ISIL, or whatever we're going to call it at the moment. And there's this recognition in politics that, you know, suddenly there's bad stuff on the Internet again.
And we need to think about this. From one side, there's this idea that's generally in the political mindset right now of, well, you know, it's not censorship if Twitter solves this problem, and maybe we should talk to Bill Gates and he'll close the Internet for us. And there actually does seem to be a response from Silicon Valley and these big companies to try and police their content more. We've also seen that, as the Twitter agreement with the EU sort of fell apart over the summer, all of these content providers are again taking a closer look at the management of their networks and their user-generated content. So you're seeing things like Twitter suspending ISIS-affiliated accounts, and similar things on Facebook and elsewhere, right? And this actually exposes a big hole we've got, which is that many of these companies have transparency reports: we can see when a government asks for subscriber information, we can see when they ask for explicit content to be removed, and we can see when other companies file copyright notices. But there's really nothing there about how much is being taken down by Twitter itself. So we don't have great insight. Taking this full circle, we actually do have these technologies. We use them for sites like Weibo, the Chinese equivalent of Twitter: there's an open source project called FreeWeibo that monitors the Chinese social network, finds popular content that is subsequently removed, and reposts it for posterity. This is open source; I'd recommend that someone do this for some of the Western media as well, so that we can see what's happening. And I guess it's worth also pointing out that a lot of this stuff is happening on not-totally-public websites, right? As we get into Facebook and WhatsApp groups and Telegram and all of these content providers that are having to wrestle with this, a lot of this content is not public.
It's within small groups. So it's going to be very hard for us to understand what sort of management policies are in place and what's going on. So that's our whirlwind tour of things we thought were worth mentioning; there's tons more. What we want to talk about in the second half of this is: how do we know all that? What are the projects and tools we've got that help us see when there are disturbances, and what perspectives can we get? So, we mentioned that initial map; that comes from a group called Freedom House, and they publish a report each year called Freedom on the Net. What that looks like is that they write up a long narrative for each country, and then they have analysts code it as free, somewhat free, or not free. They do this, and there's another group called the OpenNet Initiative that also does this. So you get maybe a few categories, like how free is it on the political spectrum, how free is access to circumvention tools. But what you don't get is the data backing that up. You don't get the list of specific sites; you don't get when things were blocked. Because all of that is potentially harmful, right? If I publish all of the data that I'm using to make my analysis, whoever is managing that network can potentially go back and figure out who it was who ran those measurements, and potentially say, hey, knock it off. So this is something the measurement community has been struggling with for a while now, and we're making progress. We're going to start with the big player in the room. So yeah, how do we measure interference, then? OONI. Disclosure: I'm an OONI developer. OONI, also known as the Open Observatory of Network Interference, has been around for some years. It has some peer-reviewed proofs of concept. It has been used to analyze, in general, most of the things that you saw during my presentation. It's free software.
You could help by adding specific test modules. Long story short, I can show you roughly what the Open Observatory looks like. It's a measurement platform that has a backend, and clients that submit reports. A new data pipeline is being developed; there is an API that is going to be online soon and will give users the ability to explore reports in a different way; and there is the part that manages all this, like storing the reports. Because what actually happens with OONI is that it stores the reports publicly. It allows researchers and interested users to search and explore through the reports. A much simpler example of a test, the test mentioned in the Saudi Arabia case, is the HTTP request test. I don't have the time to go into detail on it, but I will give you a short idea of how it works. It sends two HTTP GET requests to the resource we'd like to probe. One goes through our connection, which is potentially tampered with, filtered, or censored; the other goes through another connection, like Tor. Then some kind of diff happens, and some other kinds of comparisons are checked, in order to see whether this resource has been blocked or not. There are many, many more things to say, but please come and find me later if you want more, or read the website, and help out if you want. Another thing that happened, and has been in the works over the last year, is a Raspberry Pi distribution to increase the vantage points of OONI and provide more reports from different parts of the world, in an easy way. This is Lepidopter. Moving on with ICLab: it's a project that has been developed at Stony Brook University. I can't even pronounce the name. So, yeah, they have built a sort of closed system for making network measurements.
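The control-versus-experiment diff behind the HTTP request test described above can be sketched roughly like this. It's a simplified illustration of the idea, not OONI's actual implementation, and the field names are made up:

```python
def http_requests_verdict(experiment, control, size_tolerance=0.3):
    """Compare a response fetched over the local (possibly filtered)
    network against one fetched over an unfiltered path such as Tor.
    Returns a rough verdict string. Simplified sketch of the idea
    behind OONI's HTTP request test, not its real heuristics."""
    if experiment.get("failure") and not control.get("failure"):
        return "blocked"          # local fetch failed, control succeeded
    if experiment.get("status_code") != control.get("status_code"):
        return "suspicious"       # e.g. 403 locally vs 200 over Tor
    exp_len = len(experiment.get("body", ""))
    ctrl_len = len(control.get("body", ""))
    if ctrl_len and abs(exp_len - ctrl_len) / ctrl_len > size_tolerance:
        return "suspicious"       # body sizes differ beyond tolerance
    return "consistent"
```

The size tolerance is there because dynamic pages legitimately differ between fetches; the real test uses several such comparisons (headers, body similarity) rather than a single threshold.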
Another nice example of collecting network measurements, although unfortunately not a censorship-oriented collection, is RIPE Atlas. RIPE Atlas has a bunch of nodes out there, but it's funded by the RIPE NCC, and as you know, the RIPE NCC has members like Turkey and other organizations that would not like to be associated with any reports that reflect negatively on them. Cool. So, in addition to these sorts of distributed platforms, these being a few examples of places where you've got probes inside networks, we can learn a lot of stuff by looking at specific domains. There are platforms like Tor that provide data. On the Tor Metrics site I can look at how many users are connecting from Bangladesh, and I can verify that report we showed earlier, that there was something weird going on from November to December: while Facebook was blocked, we see a corresponding spike in the number of Tor users, right? So they are providing this data. Likewise there are a few other services. Google, for example, shows how many connections it's getting from different countries, and they actually label the data themselves and say, we believe there was a disturbance in these places at these times. And depending on how that data is presented, you have different levels of ability to trust it. I can't actually go in and verify that Google is telling me the truth here; I have a better ability to do that with Tor. Beyond this, one of the older projects in this space is crowdsourcing. Herdict came out of the Berkman Center a while ago, and it basically asks users who visit: it shows them a website and says, can you or can you not access this site? And they check yes or no. That gives you somewhat noisy data, because now you have to trust the users. You know, is it their connection? Is it something else?
You don't get the same insight into why something didn't work as you would if you were running test code. But you can learn a lot. Then, focusing on specific countries, you've got things like GreatFire, which basically runs its own infrastructure, in China specifically, and keeps tabs on whether popular sites are accessible there or not. Switching over to mobile interference: we're entering a world where cell phones are gaining importance, and we've found ourselves having a harder time understanding exactly what it means to interfere there. There are a few projects. These range from Netalyzr, which has been around for a while; it's a connection analysis and diagnostics tool that lets you see things like: is my mobile network preventing me from opening connections on specific ports, and is there noticeable filtering, or are there proxies? One of the projects that came out in 2015 is something called Asia Chats, again out of Citizen Lab. They looked into a bunch of the popular chat networks, WeChat, KakaoTalk, LINE, looked for built-in censorship lists (are there words that you can't type over these things?), and looked at the other management policies going on in those specific apps. We're moving from a world where what we care about is the open web and, you know, a domain, to what is happening within a specific closed app ecosystem. The equivalent of OONI as we move into mobile is a library, with corresponding iPhone and Android apps, around a project called Measurement Kit, which is trying to reproduce some of these connectivity tests looking at DNS and HTTP, the ones we have run from OONI, but now in a mobile environment. The other thing that's gaining popularity is trying to think about what we can measure externally.
As we've been struggling with this question of putting things in networks and what risks we're exposing people to, there was a paper and project that came out around the end of last year, beginning of this year, called Encore, which brings in the idea that I can embed the favicon, the little image, for Twitter or for Facebook in my page and see if it loads for my visitors. So if all of my visitors from a specific country fail to load the Twitter favicon, we learn something: these visitors aren't actually cooperating, they're not running a probe or a measurement, but we can still learn things from them. Similar to that, we have side channel work, which has also been used for some of the Tor reachability studies. The idea is that even with two remote servers, neither of which I control, I can use packet spoofing and TCP side-channel leaks to learn whether they can talk to each other or not. So now, without those users cooperating (they're not running any probing software), I as a remote observer can potentially learn about connectivity or IP-level blocks. We're also still coming to terms with a world where the IPv4 space is small. You've got ZMap (there's a talk on mail security stuff that Zakir will give at 32C3), and they've just added something called Censys, which looks at what's running in different places. One of the projects that I'm involved with is called Satellite, which looks at open DNS resolvers and what sort of DNS consistency we can learn from a single external vantage point. We find roughly 8 million open DNS resolvers that are willing to resolve domains for you, and so you ask all of them for popular domains, and you see whether they're willing to resolve them, or whether you get weird responses back.
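The consistency check just described can be sketched in miniature: ask many resolvers for the same domain, and flag the ones whose answers deviate from the dominant answer. This is a toy sketch of the idea, not Satellite's actual algorithm (which also groups answers by AS and shared hosting infrastructure). All the IPs in the example are placeholders:

```python
from collections import Counter

def flag_deviant_resolvers(answers, threshold=0.5):
    """answers maps resolver IP -> the IP it resolved a domain to,
    collected from many open resolvers. Flag resolvers whose answer
    deviates from the dominant one, provided a clear consensus exists."""
    counts = Counter(answers.values())
    common_ip, common_n = counts.most_common(1)[0]
    if common_n / len(answers) < threshold:
        return []   # no clear consensus, so nothing can be called deviant
    return sorted(r for r, ip in answers.items() if ip != common_ip)
```

A deviant answer is only a signal, not proof of censorship: CDNs legitimately return different IPs in different regions, which is exactly why the real system needs the AS-level grouping.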
And so now, without anything except these hopefully infrastructural DNS resolvers, I can understand if there's some sort of country- or ISP-level policy going on. Cool. Right. So we're going to shortly cover a bit about governance. What has actually happened is that the HTTP status code 451 is going to be accepted. That's a status code to be used for websites that have been filtered, blocked, or censored for legal reasons. I hope that this will not be passed, because this is somehow acknowledging the problem and saying, okay, we can censor and block the internet; I would rather protest for a freer internet. What else happened in terms of governance is the forming of a group called the Human Rights Protocol Considerations Research Group, which is exploring the relationship between protocols and human rights, with a focus on the rights to freedom of expression and freedom of assembly. And now Will is going to... Cool. I guess the point of this HRPC group is that they're hoping to amend policy on future IETF standards to include a section expressing the implications for freedom of expression and freedom of assembly of new protocols as they come in. So this is something that we should be thinking about as we start standardizing protocols: what are the implications? Are we doing this in a way that is going to support these universal human rights? The other major thing in the measurement community space is that the Open Tech Fund, a branch of the U.S. government under a few layers, got a big injection of something like 25 million dollars to support circumvention and censorship measurement work. A bunch of projects have gotten money over the last year, and that's continuing now. So there's been this injection of excitement and people working on it as a result. It will be interesting to see how long that continues, but it's really boosted a lot of stuff.
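Returning to status code 451 for a moment: a minimal sketch of what serving such a response could look like, using Python's standard library. The server, the body text, and the authority URL in the Link header are all made-up placeholders; the Link header itself follows the suggestion in RFC 7725:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request
import urllib.error

class LegalBlockHandler(BaseHTTPRequestHandler):
    """Toy server returning HTTP 451 for every GET request."""
    def do_GET(self):
        body = b"Unavailable For Legal Reasons"
        self.send_response(451)
        # RFC 7725 suggests naming the blocking authority; placeholder URL
        self.send_header("Link", '<https://authority.example>; rel="blocked-by"')
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

def fetch_status(url):
    """Return the HTTP status code for url, including error statuses."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
```

From a measurement point of view, a client seeing 451 at least knows the failure was a deliberate legal block rather than an outage, which is the argument made in favor of the code.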
But at the same time, the community is struggling with trying to understand what it means that so much of this work is now funded by the U.S. So it's not all roses, although I think you've already come to notice that. There's a lot of stuff that the tools out there aren't doing a good job of measuring, just to give you some examples. In addition to blocking, which is pretty binary (I can see if I get the wrong result or no result), what happens if I just get the result really slowly? My automatic test is going to have a hard time coming up with those heuristics, and we really haven't written tests that do a good job of recognizing that a connection is merely throttled yet isn't actually going to be usable, and of differentiating "accessible but really slow" from normal. We're still pretty far behind on platform self-censorship: our ability to understand what sort of management and removal of content platforms are doing by themselves, unilaterally. And as the Greece example points out, a lot of the time policies are not implemented at a country level, which is how we try to think about it and how those maps we've been showing present it; really it's ISPs and specific regions, so we need to tear this data apart and get down to the specifics: how is it going to work for these home users in this place? That's not a question we have a good way to answer right now. We also don't have a good handle on the social impacts. Do people notice this? What does it mean to them? How often are people hitting block pages? Is there other stuff that people are choosing not to visit, even though it's not explicitly blocked, because they feel unsafe on the internet because of the policy? These are things that we really haven't even begun to explore.
On the academic side, there's been a big conversation about ethics that has continued to emerge in the last year. Part of that is trying to understand when it's okay to make measurements. If I get abuse complaints telling me to knock it off, and it's someone who owns an organization, sure, it seems like I probably shouldn't be doing something they don't want me to do. But what happens if it's a country that tells me to knock it off? Do I just not measure their country? Where are these boundaries? For our more traditional measurements inside of countries, what does it mean to get informed consent? Do the users who are running probes, or who are taking measurements on these systems, understand what they're getting themselves into? Do we as researchers even know what they're getting into, and are we able to tell them those risks in a way that lets them give informed consent? There haven't been a huge number of retributions, or anyone getting in trouble, but we don't know what could happen. We don't know how technically savvy the people who would get them in trouble are, and whether they understand the nuances; we don't understand the nuances ourselves. So these conversations are happening, and there are a bunch of ethicists involved. Weirdly, it was the US Department of Homeland Security that published something called the Menlo Report, on ethical principles guiding information and communication technology research, in 2012. What that document says, to some extent, is that if you can't get informed consent, you should be practicing risk minimization: you shouldn't be doing anything unnecessary in order to get the measurements that you need. So now, what is least risk? What measurements do we need? Where are these boundaries? We're still drawing them; there aren't clear lines yet on any of this stuff. But we had the first workshop on ethics in August, at the SIGCOMM conference.
So this is, I think, at the forefront of a lot of people's minds: what is okay? What is safe? Are we doing a good job of both collecting the measurement data that we need and doing it in a good way? So we're going to end with the pitch of what you can do, how you can help. You can measure, right? You can collect the data. You can work on the code; a lot of these projects are open source. We'll put these slides up, linked from the talk, so that you can click on all of them and find the project that you're most excited about. You can skip the whole measurement thing and work directly on circumvention. Pluggable transports are gaining standardization; there's a nice website, pluggabletransports.info, that talks about the spec, which is pretty easy to implement, so you can make your own different way of connecting. And you can have the conversation, in your local jurisdiction or on the internet as a whole, to advocate for transparency if nothing else, so that you can understand what is blocked and figure out where your community is comfortable drawing that line. So, we'll be around to answer questions. Thank you. Thank you. So I assume there'll be some questions. Okay, for questions, please line up; there are one, two, three, four mics here, and we'll also take some questions from the internet. The first gentleman is lined up. Go ahead; please talk into the microphone so we get it on tape. Thanks. As you're probably aware, more and more countries which we do not usually think of as restrictive are now blocking something, and I suppose it's not only because of the attacks. So I'd like to ask for advice on advocacy. In my country there have recently been some blocks implemented because of lobbyists. I've been trying, on behalf of a civil movement, to advocate against doing that, and what I get in return is this:
"Here's the thing: we need to block this." So if I give them something that can minimize collateral damage, they're okay with it. But to my mind, that implies deep packet inspection. So is there a recommended way, if we are okay with blocking this one small thing but we don't want to block the whole IP or the whole domain? Do we go DPI, or is there some magic pill? I don't think there are any magic pills, unfortunately. I mean, there are a few things. One is, whenever you get this sort of thing, you need to focus first on transparency, because that's something where it's much easier to win public opinion, right? There's always going to be a debate about where the line between acceptable and unacceptable content is. But I think most people are going to be pretty happy to say that we need a public record and public knowledge of what is blocked, so that we can have the conversation, so that we can audit that what's being blocked is what we've decided is worth blocking. So the first thing is: if these regulations are coming without the ability for people to inspect and understand what's happening, that's something controversial that you can push back on, and that's worth doing. In terms of whether you should be blocking full IPs, and the collateral damage question, that's hard; it depends a lot. I think we are moving towards a world with more encryption, through Let's Encrypt and some of these things, and so what you could imagine is that if you're doing things like just taking the SNI or specific domains and trying to block those, that's something that technology is potentially going to surpass in the next three to five years. We'll enter a world where that sort of block isn't going to be nearly as effective. Hopefully we'll get to a world with encrypted DNS and with good encryption in some of these newer HTTP-like protocols, where you can't do these easy DPI fixes.
So for boxes like that, the hope is that they become obsolete faster, whereas with IP blocks, that's damage that's harder to rectify. That's my personal opinion. Okay, thank you. Now back there, at mic number five. Is that a question? Okay, please talk into the mic. Thanks for taking my question. I'm going to ask about the funding that you mentioned. You mentioned that the American government funds much, if not most or all, of the work you do on censorship research. I'd like you to talk more about that. What do you make of it? How do you think your work fits into America's foreign policy objectives, if it does in any way? Sure. So I guess what I'll say is that there are other sources of funding too. A bunch of this work happens through academics who get funded through grants, through the university grant process, rather than directly through governmental funding. I think it's reasonable to see this as in line with U.S. foreign policy: they would generally like there to be that freedom of expression. That's something that that group at least, Radio Free Asia and the Broadcasting Board of Governors, is pretty much in favor of. And I think for the projects where the incentives align, it's not like you're agreeing to do anything for them, right? What it's potentially doing is shifting the work. What we need to be careful about is that the work we're doing isn't getting shifted purely towards circumvention and away from something like surveillance, which is maybe less exciting or less funded. That's where we need to be careful. But I think that the specific things they're funding are all great projects that I'm happy to see getting that injection of support. And the call that I would make is that we should have more people also stepping up to fund this.
I think that's something the Open Tech Fund would also like to see. This isn't sustainable from a single government, because the optics don't necessarily look great, and also: why should one government get to make those decisions? The way we counter that is not by refusing to take the money, but by saying, hey, maybe Germany should be giving more money; maybe the other countries that care about these same freedom-of-expression issues should also be stepping up to help us.

What's the mix of private versus government funding right now, off the top of your head?

I honestly have no idea. There are a bunch of companies, a bunch of VPN providers and the like, making money from users; they won't tell you how much they make, but they're pretty big players in the space. OTF does publish that their government grant is 25 million dollars, so we have that number, but I don't know how big the overall space is. Okay.

We have a question from the internet here. Yes, this is a question from Janix and Hamid: what's bad about signalling censorship with error code 451? The page will be blocked nonetheless, and it's good to know what happened, right?

Okay. I think that by having a status code that recognizes censorship, we acknowledge, as I mentioned in the presentation, that censorship is taken for granted, so much so that we assign it a status code.
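For context on the code being debated: HTTP 451 "Unavailable For Legal Reasons" is defined in RFC 7725, which also suggests a `Link` header with `rel="blocked-by"` naming the entity imposing the block. A minimal sketch of a host emitting it might look like this; the blocked paths and the block-order URL are made up for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LegalBlockHandler(BaseHTTPRequestHandler):
    # Hypothetical paths a regulator has ordered blocked.
    BLOCKED = {"/casino", "/bets"}

    def do_GET(self):
        if self.path in self.BLOCKED:
            self.send_response(451)  # Unavailable For Legal Reasons (RFC 7725)
            # RFC 7725 suggests identifying the blocking entity via Link.
            self.send_header(
                "Link",
                '<https://authority.example/block-order>; rel="blocked-by"',
            )
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Unavailable For Legal Reasons\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ok\n")

    def log_message(self, *args):  # keep the example quiet
        pass

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8451), LegalBlockHandler).serve_forever()
```

The speaker's objection is not that the mechanism is unclear, but that only "legal" blocks would ever carry this label, while extralegal blocking continues unlabeled.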
In my opinion, instead of assigning a new status code to something that shouldn't happen in the first place, the internet should simply not be censored, so censorship shouldn't be written into HTTP. If censors want to block, they will do it one way or another; but changing the spec to acknowledge that we should have a blocking code, no, I don't think that's the way to go. I don't have an alternative to propose, but why do we need alternatives for blocking the internet? They already block without a status code, so why also give them the legitimacy of one? Sites returning the status code become the "legally" blocked ones, and as I mentioned in the presentation, resources will keep being blocked both legally and illegally: 451 just gives the legal blocks an official label while the illegal blocking continues. Okay, thank you.

Number two, you go.

Thank you for the talk. Talking into the mic, please. I have a question about the gambling censorship, because this seems like the beginning of a general kind of censorship. Do you have any idea whether anywhere in Europe it was possible to get this removed by legal measures, lawful measures, something like that?
Well, most of this blocking is quite trivial. They were doing DNS blocking, so you could just change your resolver and get around the issue. Many of these sites can also be reached through Tor; the ISPs are only trying to block locally, and not doing it particularly well. Let's say they are trying to stop the non-power users.

So, I come from Bulgaria, and there they switched from plain DNS blocking to IP blocking, and they've been doing that pretty thoroughly for the sites they were ordered to block, which results in a lot of collateral damage. We know that any kind of VPN gets around it, but it creates a very bad precedent for everything else: right now it's gambling sites, then the infrastructure exists, and then it's anything else. So are you aware of anyone working to stop this at a political level?

We should speak to these people; we should point out: hey, you are destroying the internet, you can't just start blocking it like this. Finding your local groups that care about internet freedom and working with them, especially on instances of collateral damage, is a great place for advocacy: look at these sites that are being blocked for no good reason; this isn't a cool thing to be doing; we are overstepping and losing transparency. These are all great arguments.

Thank you. Okay, thank you. Number six, you go.

Hi, thank you very much for an interesting overview. I wondered if you would like to share some thoughts on proposals to combine censorship measurement with censorship circumvention. There have been some recent proposals for tools that would bind the two, and I just wondered what your thoughts were on that.
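The "just change your resolver" workaround mentioned above is easy to see on the wire: a blocked domain resolves differently (or not at all) through the ISP's resolver than through an open one. A minimal sketch, assuming a plain UDP A-record query in RFC 1035 wire format; the domain and resolver addresses in the comments are placeholders, not real measurement targets.

```python
import socket

def build_query(domain: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query (RFC 1035 wire format)."""
    header = (
        qid.to_bytes(2, "big")    # transaction ID
        + b"\x01\x00"             # flags: standard query, recursion desired
        + b"\x00\x01"             # QDCOUNT = 1
        + b"\x00\x00" * 3         # ANCOUNT / NSCOUNT / ARCOUNT = 0
    )
    qname = b"".join(
        len(label).to_bytes(1, "big") + label.encode("ascii")
        for label in domain.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + b"\x00\x01" + b"\x00\x01"  # QTYPE=A, QCLASS=IN

def resolve(resolver_ip: str, domain: str, timeout: float = 3.0) -> bytes:
    """Send the query over UDP/53 and return the raw response bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(domain), (resolver_ip, 53))
        return s.recvfrom(512)[0]

# Comparing the ISP resolver with an open resolver (addresses are examples):
# isp_answer  = resolve("192.0.2.53", "betting.example")
# open_answer = resolve("9.9.9.9",    "betting.example")
# Differing answer sections suggest the ISP resolver is lying:
# print("answers differ:", isp_answer[12:] != open_answer[12:])
```

This is also why DNS-only blocking mostly stops non-power users: nothing forces a client to use the ISP's resolver, which is exactly the gap that pushed Bulgaria, per the questioner, towards IP blocking.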
Yeah, so there was a paper that came out about using circumvention tools as a measurement platform. Again, you have exactly the same issues about what risks the users are facing, and now it's maybe a harder argument that you didn't know what was going on: you know this is a user who is actively worried about something and trying to get around something. So are you comfortable probing the sites that are blocked, which are exactly the ones being watched for? What sort of danger are you getting your user into? At the same time, there are plenty of tools already doing this: a set of tools that do fallback, where they first try to make the request directly, and if that doesn't work they run it through a proxy. They're already exposing the user to that risk, so you might as well collect the data. Then it's just a question of what anonymity and protection that collected data carries, so that if it becomes public, and you should expect data to leak to the people you don't want to have it, it doesn't get anyone in trouble. Because it's much easier to seize a whole database of all the users in a country who tried to visit these bad sites than to measure them individually on the network. And unfortunately, some of these circumvention tools do collect data about their users without knowing what to do with it.

Thank you. Okay. Another question from the internet, and please talk into your mic.

All right, this is a question from Janix; it's a two-parter. There are a lot of methods for filter evasion: SSH tunnels, ICMP tunnels, DNS tunnels. That being said, is any government able to effectively censor or block experienced users who are able to use these tunnels?
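The fallback pattern described in that answer, try directly, fall back to a proxy, and treat the direct failure as a measurement, can be sketched as follows. Everything here is hypothetical scaffolding: the fetcher callables and the `Measurement` record are stand-ins, and a real tool would have to decide carefully which fields (if any) are safe to retain, given that the record itself is the data that could leak.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Measurement:
    """One fallback event; note this is exactly the sensitive data
    the answer warns about retaining."""
    url: str
    direct_ok: bool
    error: Optional[str]   # exception class name on direct failure
    used_proxy: bool

def fetch_with_fallback(
    url: str,
    fetch_direct: Callable[[str], bytes],
    fetch_via_proxy: Callable[[str], bytes],
) -> Tuple[bytes, Measurement]:
    """Try the direct path first; on any failure, fall back to the proxy
    and record the failure as a possible censorship observation."""
    try:
        body = fetch_direct(url)
        return body, Measurement(url, True, None, False)
    except Exception as exc:
        body = fetch_via_proxy(url)
        return body, Measurement(url, False, type(exc).__name__, True)
```

The point of the sketch is that the measurement comes for free: the tool was going to fall back anyway, so the only new decision is how the `Measurement` records are anonymized and stored.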
And is it too paranoid to believe that a government might intentionally allow some protocols it can decode, because it wants to inspect what people are doing?

It is not too paranoid to expect that. There are documented instances of proxies and other services run by governments to get a sense of what users are doing. If you are looking for a technically competent adversary, you're probably looking at China. We've seen different behaviors from that network, and because of how big it is and how diverse its ISPs are, you see different things and get mixed reports. But there's a sense that at least some of the time, unusually long-lived connections that don't look like TLS get throttled or become flaky, and that affects a lot of these tunneling solutions. We still need more work on making them reliable: maybe you use IP diversity, maybe you use lots of short-lived connections, or UDP; there are a bunch of things out there to try. But there's nothing yet that everyone feels is both reliable and hard for a technically competent adversary to block.

Okay. There's somebody waiting on number five. Go ahead, talking into the mic.

Yes. I think it was around six months ago that a website archiving politicians' deleted tweets was forced to shut down; essentially, the politicians strong-armed it into shutting down. My question is: what kind of implementation or capabilities would you suggest to prevent that kind of strong-arming from happening again, for archiving suppressed tweets and things like that?

Depending on what threats you are worried about, our community knows a number of technologies. We have hidden services, if you want to run something where the IP address of the server is not known; so if you feel the infrastructure is in danger, there are ways to run the site anonymously.
You can look at the different legal jurisdictions and pick one where the same pressure can't be applied; if the site is hosted in a different country, it becomes much harder. And you can find a set of lawyers: if you believe this is legal, which in a lot of places it is, you just need the right representation, so that you don't give in and can show that this actually is an instance of free speech.

If I remember correctly, they actually strong-armed Twitter into revoking the developer access of the website in question.

Then it becomes things like FreeWeibo, which crawl and scrape the site, and you can't expect the platform to be happy about that. You can potentially raise a stink and get the platform to reverse those sorts of policies, or you can try to show the value of the system so they realize it's worth having, in the same way that Chilling Effects archives DMCA take-down requests. It may be worth having that sort of transparency and trying to convince the platforms. But at the same time, even without platform support, there's value in doing this sort of thing.

Thank you. Thank you. We have a tight schedule, but we're taking the last question from number four. Is there anybody else? You'll have to come up front afterwards. Number four, go ahead.

Just a short question. You mentioned apps very briefly. There is this debate, especially around Apple, that they have a very particular way of deciding what is a good app and what isn't. Is anybody out there measuring app-store takedowns, or don't you consider this censorship because it comes from a company and not from a state?

That absolutely seems valuable. I don't know of any projects out there yet doing a good job of that, so I'd love to hear about one.
And again, there have been reports of the Google Play Store entering China, which is going to involve some sort of management of which apps are allowed, and keeping track of that is super valuable. Maybe someone could do that, because I think it's quite important today. I agree; if you want to do this, please come and find us.

Okay, we'll close this talk now because we're tight on schedule. Thank you very much. Let's have a last round of applause. Thanks.