All right. It's my very big pleasure to introduce Roya Ensafi to you. She's going to talk about Censored Planet, a global censorship observatory. I'm personally very interested in learning more about this project. It sounds like it's going to be very important. So please welcome Roya with a huge warm round of applause.

Thank you. It's wonderful to finally make it to CCC. I had submitted joint talks with several of my friends over the past years, and the visa stuff never worked out. This year I applied for a conference in August, and that visa worked for coming to CCC. My name is Roya Ensafi, and I'm a professor at the University of Michigan. My research focuses on security and privacy, with the goal of protecting users from adversarial networks. So basically I investigate network interference. And somebody is interfering right now. Damn it. What the heck? Okay. Cool. I'm good. Oh no, I'm not. In my lab we develop techniques and systems to detect network interference, often at scale, apply these frameworks and tools to understand the behavior of the actors doing the interference, and use this understanding to come up with defenses. Today I'm going to talk about a project that is very dear to my heart, one that I spent six years working on. In this talk I'm going to talk about internet censorship, and by that I mean any action that prevents users' access to the requested content. We have seen an alarming level of censorship happening all around the world. While previously only a few countries were capable of using deep packet inspection to tamper with user traffic, thanks to the commercialization of these DPIs, many countries are now actually messing with users' data. From the first moment users type CNN.com in their browsers, their traffic is subject to some level of interference by different actors.
First, for example, the DNS query, where the mapping between the domain and the IP where the content lives can be manipulated. For example, the DNS answer can be a dead IP where the content is not there. If the DNS succeeds, then the user and the server are going to establish a connection, a TCP handshake, and that can be easily blocked. If that succeeds, then the user and the server start actually sending the data back and forth. And there is enough cleartext, be the traffic encrypted or not, that a DPI can detect a sensitive keyword and send reset packets to both ends to basically shut down the connection. Before I forget, let me tell you and emphasize that it's not just governments and the policies imposed on ISPs that lead to censorship. Actually, the server side, which provides the data, is also blocking users, especially if they are located in regions that don't provide any revenue. We recently investigated this issue of server-side blocking in depth and provide more details about what role CDNs actually play. Imagine now how many users, how many ISPs, how many transit networks, and how many websites we have, each of which is going to have its own policy for how to block users' access. Moreover, censorship changes from time to time, region to region, country to country, and for that reason many researchers, including me, have been interested in collecting data about censorship globally and continuously. Well, I grew up under severe censorship, be it by the university, the government, or, more frustrating, the server side, and I genuinely believe that censorship takes away opportunities and degrades human dignity. And it's not just China, Bahrain, or Turkey that do internet censorship. Actually, with DPI it's become cheaper and cheaper, and many governments are following their lead.
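These interference points map directly onto a client-side reachability check. Here is a minimal sketch of the first two stages, my own illustration of the idea rather than any deployed tool; the third stage, fetching content with a sensitive keyword, is what triggers keyword-based DPI.

```python
import socket

def check_dns(domain):
    """Stage 1: can we resolve the domain at all, and to which IPs?

    A censor can poison this step by returning a dead or wrong IP."""
    try:
        infos = socket.getaddrinfo(domain, None)
    except socket.gaierror:
        return None                       # resolution failed outright
    return sorted({info[4][0] for info in infos})

def check_tcp(ip, port, timeout=3.0):
    """Stage 2: does the TCP handshake complete?

    A censor can drop SYNs or SYN-ACKs to block this step."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against the local machine, which should always pass stage 1:
print(check_dns("localhost"))
```

A real measurement would run these stages against test domains from many vantage points and compare where each stage starts failing.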
As a result, the internet is becoming more and more balkanized, and users around the world are soon going to have very different pictures of what the internet is. We need to collect data to know what is being censored, how it's being censored, where it's being censored, and for how long. This data can then be used to bring transparency and accountability to governments or private companies that practice internet censorship. It can help us know where circumvention, where defenses, need to be deployed. It can help users around the world know what their governments are up to, and, more important, provide valid, good data for policy makers to come up with good policies. Existing research already shows that if we can provide this data to users, they act of their own will to ensure internet freedom. For many years my goal has been to come up with a censorship weather map, where we can actually see changes in censorship over time and how some countries differ from others, and do that continuously and for the whole world. Building such a map wasn't possible with the internet measurement techniques we had at the time. At the time, and even with the common techniques we use now, the measurement methods for internet censorship often work by deploying software, or giving a customized Raspberry Pi, to either the client or the server side, and from there measuring what happens between clients and servers. Well, this approach has a lot of limitations. For example, there are not that many volunteers around the whole world eager to download a piece of software and run it. Second, the data collected this way is often not continuous, because the user's connection can die for a variety of reasons, or users might lose interest in keeping the software running.
And therefore we end up with sparse data, where we cannot establish a good baseline for internet censorship studies. Moreover, measuring sensitive domains often creates risks for local collaborators and might end with their government's retaliation. These risks are not hypothetical. When the Arab Spring was happening, I was approached by many colleagues to recruit local friends and colleagues in the Middle East to collect measurement data. It was a very interesting time to capture the behavior of the network, and a most dangerous time for the locals and volunteers collecting it. My painting actually expresses what I felt at the time. I can't imagine asking people on the ground to help at these times of unrest. In my opinion, conspiring to collect data against a government's interest can be seen as an act of treason, and these governments are often unpredictable, so it exposes these volunteers to severe risk. While no one has yet been arrested for measuring internet censorship, as far as we know, and I don't know how we could even know that at a global scale, I think the clouds are on the horizon. I'm still appalled at how the Turkish government used their surveillance data at the time of the coup attempt and tracked down and detained hundreds of users, because there was traffic between them and ByLock, a messenger app that was used by the coup plotters. These things happen. Before I continue, if you know OONI, you might ask: how does OONI manage these risks? Well, with a great level of effort. And if you don't know OONI, OONI is a global community of volunteers that collects data about censorship around the world. First and foremost, they provide their volunteers with a very honest consent process, telling them: hey, if you run this software, anybody who is monitoring your traffic knows what you're up to. They also go out of their way to give these volunteers the freedom to choose which websites they want to test and what data they want to push.
They establish great relationships with local activist organizations in these countries. Well, now that I've proven to you guys that I am a supporter of OONI, and I'm actually friends with most of them, I want to emphasize that I still believe that consistent, continuous, and global data about censorship requires a new approach, one that doesn't need volunteers' help. I became obsessed with solving this problem. What if we could measure whether a client anywhere in the world can talk to a server without being close to either of them, from here, from the University of Michigan, and see whether the two hosts can talk to each other, globally, remotely, and off-path? When I asked people about this, honestly, everybody was like: you don't know what you're talking about, it's really, really challenging. Well, they were right, the challenges are there, and I'm going to walk you through them. We have a vast number of IP addresses that respond to unsolicited packets. This means they speak to the world, and they blindly follow the TCP/IP protocols. So the question becomes: how can I leverage these subtle properties of TCP/IP to detect whether two hosts can talk to each other? Well, Spooky scan is a technique that Jed Crandall from the University of New Mexico and I developed that uses TCP/IP side channels to detect whether two remote hosts can establish a TCP handshake or not, and if not, in which direction the packets are being dropped, off-path and remotely. I'm going to start telling you how this works. First, I have to cover some background. Any connection that is based on TCP, one of our basic communication protocols, needs to establish a TCP handshake. So basically you send a SYN, and in the IP header of the packet you send there is a field called identification, the IP ID. This field is used for fragmentation purposes, and I'm going to use this field a lot in the rest of the talk.
After the server receives the SYN, it's going to send a SYN-ACK back, with another IP ID in it, and then if I want to establish a connection I send an ACK, otherwise I send a reset. Part of the protocol says that if you send an unsolicited SYN-ACK packet to a machine, port open or closed, it's going to send you a reset, telling you: what the heck, you're sending me a SYN-ACK, I never sent you a SYN. Another part says that if you send a SYN packet to a machine with the port open, eager to establish a connection, it will send you a SYN-ACK. If you then don't respond, because TCP is reliable, it's going to retransmit the SYN-ACK multiple times, depending on the operating system: three times, five times, you name it. Spooky scan requires some basic characteristics. For example, the client, the vantage point that we are interested in, should maintain a global counter for the IP ID. It means that when it wants to send a packet out, no matter who it's sending the packet to, this IP ID is a shared resource, and it's incremented by one. So just by watching the IP ID changes, you can see how noisy a machine is, how much traffic a machine is sending out. A server should have a port open, let's say port 80 for a web server, and be willing to establish connections. And the measurement machine, me, should be able to spoof packets, meaning sending packets with a source IP different from my own machine's. To be able to do that, you need to talk to your upstream network and ask them not to drop those packets. All of these requirements I could satisfy with a little bit of effort. Spooky scan starts with the measurement machine sending a SYN-ACK packet to one of these clients with a global IP ID. At this point, let's say the value is 7,000. The client is going to send back a reset, following the protocol, revealing to me the value of its IP ID. In the next step, I send a spoofed SYN packet to the server, using the client's IP as the source. The server sends a SYN-ACK to the client.
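The shared IP ID counter behaves roughly like this toy model. This is my own simulation for illustration, not Censored Planet's code: every packet the host sends, to anyone, bumps one global counter, so probing the counter twice tells you how many packets the host sent in between.

```python
class IdleHost:
    """Toy model of a host with a single, global IP ID counter."""

    def __init__(self, ipid=7000):
        self.ipid = ipid

    def send_packet(self):
        # Every outgoing packet, regardless of destination,
        # increments the shared IP ID counter by one.
        self.ipid = (self.ipid + 1) & 0xFFFF  # 16-bit field, wraps around
        return self.ipid

    def probe(self):
        # Our unsolicited SYN-ACK elicits a RST; the RST is itself an
        # outgoing packet, and its IP ID reveals the counter's value.
        return self.send_packet()


host = IdleHost(ipid=7000)
first = host.probe()             # 7001: our probe's RST

for _ in range(3):               # host talks to unrelated peers
    host.send_packet()

second = host.probe()            # 7005
background = second - first - 1  # subtract our own probe's RST
print(background)                # 3 packets of background traffic
```

Everything Spooky scan infers comes from reading this one counter from the outside.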
Again, the client is going to send a reset back, and its IP ID is incremented by one. So the next time I query the IP ID, I'm going to see a jump of two. In a noiseless model, I then know that this machine talked to the server. If I query it again, I won't see any extra jump. So: delta two, then delta one. Now imagine there is a firewall that blocks the SYN-ACKs going from the server to the client. Well, it doesn't matter how much traffic I send, it's not going to get there. So the deltas I see are one, one. In the third case, when packets are dropped from the client to the server, my spoofed SYN gets there, the SYN-ACK gets to the client, the client sends the reset back, but the reset doesn't reach the server. So the server retransmits the SYN-ACK multiple times, and as a result there are more resets. So the jumps I would see are, let's say, two, two. Let me put them all together. You have three cases: blocking in one direction, no blocking, and blocking in the other direction, and you see different jumps, different deltas. So it's detectable. Yes, yes, in a noiseless model. I know the clients talk to many others, and the IP ID is going to change for a variety of reasons. I call all of that noise, and this is how we deal with it. We can amplify the signal: instead of sending one spoofed packet, we can send n. And because packets can get dropped for a variety of reasons, we need to repeat the measurement. Here is some data from a Spooky scan where I used the following probing method: for 30 seconds I query the IP ID, and then for another 30 seconds I also send five spoofed packets per second. These are machines, clients, in Azerbaijan, China, and the United States, and we wanted to check whether they could reach a Tor relay that we had in Sweden. You can see the different jumps, the different level shifts, that you observe in the second phase.
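In the idealized noiseless model, the three cases reduce to comparing the two IP ID deltas. A minimal sketch of that decision rule, my own illustration of the cases just described:

```python
def classify(delta_during_spoof, delta_after_spoof, n_spoofed=1):
    """Classify blocking direction from two consecutive IP ID deltas.

    Each delta is measured between two of our own probes, so a delta of 1
    means the client sent nothing but the RST answering our probe.
    """
    during, after = delta_during_spoof, delta_after_spoof
    if during == 1 + n_spoofed and after == 1:
        # Client saw the server's SYN-ACK and answered with one RST
        # per spoofed SYN: no blocking in either direction.
        return "no blocking"
    if during == 1 and after == 1:
        # The SYN-ACKs never reached the client: inbound path blocked.
        return "server-to-client blocked"
    if during > 1 + n_spoofed or after > 1:
        # The client's RSTs never reached the server, so the server
        # retransmits SYN-ACKs and the client keeps answering.
        return "client-to-server blocked"
    return "inconclusive"

print(classify(2, 1))  # no blocking
print(classify(1, 1))  # server-to-client blocked
print(classify(2, 2))  # client-to-server blocked
```

With noise, these exact equalities break down, which is exactly why the talk moves on to amplification and statistics next.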
Just by visually looking at it, or by using an autoregressive moving average model, ARMA, you can actually detect that. But there is an insight here, which is that not all clients have the same level of noise. For some of them, especially these ones, you could easily detect the shift after only five seconds of IP ID queries and then five seconds of spoofing. So in the follow-up work we tried to use this insight to come up with a scalable and efficient technique that can be used globally. That technique is called Augur. Augur adapts the probing method: first, for four seconds, it queries the IP ID; then, in one second, it sends ten spoofed packets. It looks at the IP ID acceleration, the second derivative, and checks whether we see a sudden jump at the time of perturbation, when we did the spoofing. How confident are we that that jump is the result of our own spoofed packets? Well, I'm not confident. Run it again. I think so. Run it again, until you have sufficient confidence. It turns out there is a statistical method called sequential hypothesis testing that can be used to gradually improve our confidence about the case we are detecting. I'm going to give you a very, very rough overview of how this works. For sequential hypothesis testing, we need to define a random variable, and we use the IP ID acceleration at the time of perturbation, one or zero based on whether you see a jump or not. We also need to calculate some empirical priors, known probabilities: looking at everything, what would be the probability that you see a jump when there is actually no blocking, and so on. After we put all this together, we can formalize an algorithm: run a trial, update the sequence of values for the random variable, then check whether that sequence of values belongs to the distribution where blocking happens or not, and what the likelihood of that is.
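Sequential hypothesis testing of this kind can be sketched as Wald's sequential probability ratio test. The priors below are made-up numbers for illustration, not Augur's measured values:

```python
import math

def sprt(jumps, p_jump_open=0.9, p_jump_blocked=0.1, alpha=0.01, beta=0.01):
    """Decide 'open' vs 'blocked' from a stream of binary trials.

    jumps: iterable of 0/1 observations, 1 meaning the IP ID accelerated
    at the moment of our spoofed perturbation.
    """
    upper = math.log((1 - beta) / alpha)   # cross this: accept "open"
    lower = math.log(beta / (1 - alpha))   # cross this: accept "blocked"
    llr = 0.0                              # running log-likelihood ratio
    trial = 0
    for trial, saw_jump in enumerate(jumps, start=1):
        if saw_jump:
            llr += math.log(p_jump_open / p_jump_blocked)
        else:
            llr += math.log((1 - p_jump_open) / (1 - p_jump_blocked))
        if llr >= upper:
            return "open", trial       # confident the packets got through
        if llr <= lower:
            return "blocked", trial    # confident they were dropped
    return "undecided", trial

print(sprt([1, 1, 1, 1, 1]))   # ('open', 3)
print(sprt([0, 0, 0, 0, 0]))   # ('blocked', 3)
```

The appeal is exactly what the talk describes: noisy vantage points automatically get more trials, quiet ones terminate after just a few.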
If you are confident, if you reach a level that we are satisfied with, then we call it a case. So, putting all this together, this is how Augur works. We scan the whole IPv4 space, find machines with global IP IDs, and then we apply some constraints: is it a stable machine, is it too noisy, or does it have noise we can deal with? We also need to figure out which websites we are interested in testing reachability towards, and which countries we care about. After we decide on all the input, we run a scheduler, making sure that no client and server are under measurement at the same time in a conflicting way, because they would mess up each other's detection. Then we use our statistical analysis to call the case and summarize the results. I started by saying that the common methods have limitations, for example coverage, continuity, and ethics. When it comes to coverage, there are more than 22 million global IP ID machines. These are Windows XP and its predecessors, FreeBSD, for example. Compare that to one successful prior project, RIPE Atlas, which has around 10,000 probes deployed globally. When it comes to continuity, we don't depend on end users, so this is much more reliable. By not asking volunteers for help, we are already reducing the risk, because there are no users conspiring against their governments to collect this data. But our approach is not zero risk either. It has a different kind of risk: the client and the server are exchanging SYN-ACKs and resets without either of them giving consent, and we don't want to ask for consent, because if we do, the same dilemma exists. We would be back to the same situation as asking volunteers. So to deal with that and reduce the risk further, we don't use end-user IPs. We actually use routers two hops back from the end hosts, which with high probability are infrastructure machines, and use those as vantage points. Even with this harsh constraint, we still have 53,000 global IP ID routers.
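Going back to the scheduler for a second: the constraint that no client and no server appears in two concurrent measurements can be sketched as a greedy batcher. This is my own simplification for illustration, not Augur's actual scheduler:

```python
def schedule(pairs):
    """Group (client, server) pairs into rounds so that no client and no
    server appears twice within the same round; concurrent measurements
    of the same host would perturb each other's IP ID signal."""
    rounds = []
    for client, server in pairs:
        for rnd in rounds:
            # Place the pair in the first round with no conflict.
            if all(c != client and s != server for c, s in rnd):
                rnd.append((client, server))
                break
        else:
            # Every existing round conflicts: open a new one.
            rounds.append([(client, server)])
    return rounds

pairs = [("c1", "s1"), ("c1", "s2"), ("c2", "s1"), ("c2", "s2")]
for rnd in schedule(pairs):
    print(rnd)
```

Each round can then run in parallel, and the full pair list is covered over successive rounds.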
To test the framework, to see whether it works, we chose two thousand of these global IP ID machines, uniformly selected from all the countries where we had vantage points. We selected websites from the Citizen Lab test lists. Citizen Lab is a research organization at the University of Toronto that crowdsources lists of websites that are potentially blocked or potentially sensitive. We also used thousands of websites from the Alexa top 10K, and then we kept Augur running for 17 days and collected the data. One of the challenges in validating Augur was: what is the ground truth? What should we see that makes sense? This is the biggest and most fundamental challenge for internet censorship measurement anyway. The first approach is leaning on intuition: no client should show blocking towards all the websites; no server should show blocking for the bulk of our clients; and if anything like that happens, we just discard it. We should see more bias towards the sensitive domains than towards the ones that are merely popular. And so on. We also hoped to replicate the anecdotes, the reports that are out there. We did all of that, and that's how we validated Augur. So in the end, Augur is a system that is scalable, efficient, and ethical, and can be used to detect TCP/IP blocking continuously. Yes, I know, that's just TCP/IP. What about the other layers? Can we measure them remotely as well? Let me focus on DNS. You might ask: is there a way to remotely detect DNS poisoning or manipulation? Let's think it out loud. From now on, I'm going to give just the highlights of the papers we worked on, for lack of time. Well, if we scan the whole IPv4 space, we find a lot of open DNS resolvers, meaning they are open to anybody sending them a query to resolve. And these open DNS resolvers can be used as vantage points.
We can use open DNS resolvers in different ISPs around the world to see whether DNS queries are poisoned or not. But wait, we need to make sure they don't belong to end users. So we came up with a lot of checks to make sure these open DNS resolvers are organizational, belong to the ISP, or are infrastructure. After we do that, we start sending our queries to these open DNS resolvers, say in an ISP in Bahrain, for all the domains we are interested in, and capture what we receive, what IPs we get back. The challenge then is to detect what a wrong answer is. So we had to come up with a set of heuristics. For example: is the response we received equal to the reply we got from our control measurements, where we know the content is there and nothing is blocked or poisoned? We can also look at the IP we received and see whether it serves a valid HTTPS certificate, with or without SNI, Server Name Indication. And so on and so forth. So we came up with lots of heuristics to detect wrong answers. The result of all these efforts ended up being a project called Satellite, which was started by Will Scott. I'm sure he's in the audience somewhere, a great friend of mine, a very good supporter of Censored Planet. He has been selfless, and it has been a miracle that I had the opportunity and fortune to meet him. So we have Satellite, and Satellite automates all the steps I just told you about. For this work, we combined techniques developed in both projects, and we kept the name Satellite out of seniority; we're sticking with the name. So how much coverage does Satellite have? If you scan IPv4, you end up with 4.2 million open DNS resolvers, in every country and territory. We actually need to be sure about them, and for that reason we put a harsh condition on them: let's only use the ones whose valid PTR records follow a certain regular expression.
Basically, let's just use the open DNS resolvers that are name servers, or at least whose PTR records suggest that. This is a really harsh constraint. Actually, my students have been adding more and more regular expressions for the ones we are sure are organizational. But for now, even being this harsh, we have 40K open DNS resolvers in almost 169 countries, I guess. Censorship happens at other layers as well. How do we deal with those remotely, with a remote side channel? In particular, what about HTTP traffic, or disruption that can happen during the TLS handshake? I hate water. Oh, no. It's well documented that many DPIs, especially the Great Firewall of China, monitor the traffic, and when they see a sensitive keyword, like Falun Gong, they act: they drop traffic or send a reset. And as I mentioned earlier, there is enough cleartext everywhere; even in the TLS handshake, the SNI is in cleartext. For a long time, I was trying to come up with a way of detecting application-layer blocking using these fancy side channels. Like, how can I detect anything when the client and server first need to establish a TCP handshake? How can the side channel jump in and then detect the rest? We were lucky enough to end up at a protocol called Echo. It's a protocol designed in 1983, for testing purposes. It's a debugging tool, basically, a predecessor to ping. After you establish a TCP handshake to port 7, whatever you send to an Echo server on port 7, it echoes back. Now think about it: how can we use Echo servers to detect application-layer blocking? Well, when no blocking is happening, let's say I have an Echo server in the U.S. and my measurement machine at the University of Michigan. I establish a TCP handshake and I send a GET request containing a sensitive keyword. I'm going to get back the same thing I sent. But now, let's put a DPI in the middle that is triggered by it.
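Both Satellite filtering steps can be sketched together: a PTR-name filter and the wrong-answer heuristics described a moment ago. The regular expression and the heuristic cascade below are my own illustrations of the kind of rules involved, not Satellite's actual rule set:

```python
import re

# Hypothetical pattern for PTR names that look like name servers;
# the real rule set is a longer, curated list of expressions.
NAMESERVER_RE = re.compile(r"(^|[.-])(ns|dns|resolver)\d*[.-]", re.IGNORECASE)

def looks_like_nameserver(ptr_name):
    """Keep resolvers whose reverse-DNS name suggests infrastructure."""
    return bool(NAMESERVER_RE.search(ptr_name))

def looks_poisoned(test_ips, control_ips, cert_valid_for_domain):
    """Heuristic cascade for flagging a resolver's answer as manipulated."""
    # Any overlap with our trusted control resolution vouches for it.
    if set(test_ips) & set(control_ips):
        return False
    # CDNs hand out different IPs per region, so a non-matching IP that
    # still serves a valid HTTPS cert for the domain is also fine.
    if cert_valid_for_domain:
        return False
    # No heuristic could vouch for the answer: flag it as anomalous.
    return True

print(looks_like_nameserver("ns1.example.net"))                    # True
print(looks_like_nameserver("pool-95-2-3-4.dynamic.example.net"))  # False
print(looks_poisoned(["1.2.3.4"], ["1.2.3.4"], False))             # False
print(looks_poisoned(["10.0.0.1"], ["151.101.1.67"], False))       # True
```

The important design property is that every heuristic only ever clears an answer; an answer is flagged only when nothing can vouch for it.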
Well, for sure, I'm going to receive either a reset first or something else. So we can come up with an algorithm that uses Echo servers to detect disruptions at the application layer: basically, keyword blocking and URL blocking. The result of this is a tool called Quack. Quack uses Echo servers to detect, in a scalable and safe way, whether keywords are being blocked around the world. What we do is first scan the whole IPv4 space; we find 47K Echo servers running around the world. Then we need to check that they don't belong to end users, and that was a very challenging part, because there is no clear signal. Around 90% of them are infrastructure, but there is still some portion we don't know about. So what we do is look at the Freedom House reports and the countries that are rated not free or partially free, as they call it. This is around 50 countries. For those, we randomly select the servers we want and we use Nmap's OS detection, and if it tells us it's a server, a switch, and so on, we use those. So with the help of so many collaborators, after almost six years, we ended up with three systems that can capture TCP/IP blocking, DNS manipulation, and application-layer blocking, using infrastructure and organizational machines. While it used to be a dream, a vision, that we could come up with a censorship weather map and collect this data continuously, thanks to the help of a lot of people, especially my students, Will, and other collaborators, we now have Censored Planet. Censored Planet collects semi-weekly snapshots of internet censorship using our vantage points at all these layers and provides this data in raw format on our website. We also provide some visualizations for people to see how many vantage points we have in each country, and so on. Of course, this is just the beginning of Censored Planet.
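The Echo check itself is simple to sketch with plain sockets. This is a self-contained illustration of the idea, my own code rather than Quack's: a probe that classifies what came back, plus a toy local echo server so you can try it without touching a real vantage point.

```python
import socket
import threading

def echo_probe(host, port, payload, timeout=3.0):
    """Send payload to an Echo-style service and classify the outcome."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(payload)
            data = b""
            while len(data) < len(payload):
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
    except ConnectionResetError:
        return "reset"      # a DPI box may have injected a RST mid-stream
    except (socket.timeout, OSError):
        return "timeout"    # packets dropped or the host is unreachable
    return "echoed" if data == payload else "mismatch"

def start_local_echo_server():
    """Toy stand-in for a real Echo server; echoes one connection."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(65536))
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port

port = start_local_echo_server()
request = b"GET /?q=sensitive-keyword HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(echo_probe("127.0.0.1", port, request))  # echoed
```

In a real measurement, anything other than "echoed", repeated consistently for the sensitive payload but not for a benign control payload, is the interesting signal.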
We launched this in August, and we have been collecting data for almost four months, and we have a long way to go. We have users right now, among them organizations, using our data and helping us debug by pointing out things that don't make sense. If any of you end up using this data, please share your feedback with us; we are very responsive and happy to change things. Not as much as OONI, though; they have a collective of very dedicated people participating. So now that we have Censored Planet, let me show you how it can help when there is a political situation going on. You all must remember, around October, when Jamal Khashoggi, the Washington Post reporter, disappeared, killed at the Saudi consulate in Turkey. When this was happening there was a lot of media attention, and this news, especially two weeks in, became internationally widespread. Censored Planet didn't know this event was going to happen, but we had been collecting data semi-weekly for 2,000 domains or so. So we went back and we checked Saudi Arabia: did we see anything interesting? And yes, we saw, for example, that two weeks in, around October 16, the censorship related to the domains in the news and media categories doubled. And let me emphasize, we didn't just see blocked versus not blocked over the whole country; not all countries have homogeneous censorship. We saw it in multiple of the ISPs where we had vantage points. Actually, I freaked out when one of the activists in Saudi Arabia told us: I don't see this. And we asked: which ISP are you in? And it wasn't an ISP where we had a vantage point. So we were looking for hints: is anybody else seeing what we were seeing? And we ended up finding another measurement project that also saw, around October 16, that the numbers in whatever they were testing doubled or tripled as well.
So something was going on two weeks in, when the news broke. Let me emphasize, the news media I'm talking about are the global news media we checked, like the LA Times, Fox News, and so on. But we also checked Arab News, which, as the activists told us, is a Saudi propaganda newspaper, and in one of the ISPs it was being poisoned. So again, censorship measurement is a very complex problem. So where are we heading? Having said all that about side channels and the techniques that help us collect this data, I also have to say that the data we collect doesn't capture the full picture of internet censorship. I mean, having root access on a volunteer's machine to do detailed tests is powerful. So in the next step, in the next year, one of our goals is to join forces with OONI to integrate the data from remote and local measurements, to provide the best of both worlds. We have also been thinking a lot about what would be a good visualization tool that doesn't end up misrepresenting internet censorship. I literally hate that one. Hate it. The numbers of vantage points in different countries are not equal. We don't know whether all the vantage points the data comes from are in one ISP or spread over all the ISPs. And then the test domains are defined based on some Western values of freedom of expression. I believe in all of them, but still, culture and economy might play a role there. And then we put colors on the map, rank the countries, call some countries awful, and don't give full attention to the others. So something needs to change, and it's on the horizon to think about this more deeply. We want more statistical tools to spot when patterns change. We want to be able to compare countries. For example, Telegram was being blocked in Russia, if you remember, with millions of IPs being blocked. If you don't know about it, go to my friends' talk about Russia.
You're going to learn a lot there. Anyway, when Russia was blocking Telegram, I said to everyone: I bet in the following months some other governments are going to jump in and block Telegram as well. And that's actually what we heard, rumors like that. So we need to be able to detect that automatically. Overall, I want to develop an empirical science of internet censorship, based on rich data, with the help of all of you. Censored Planet is now being maintained by a group of dedicated students, great friends of mine, and it needs engineers and political scientists to jump on our data and help us bring meaning to what we are collecting. So if you are a good engineer, or a political scientist, or a dedicated person who wants to change the world, reach out to me. As a reference for those of you interested, these are the publications my talk was based on. And now I'm open to questions.

All right, perfect. Thank you so much, Roya. We have some time for questions. If you have a question in the room, please go to one of the room microphones, one, two, three, four, and five in the very back. And if you're watching the stream, you can ask questions to the Signal Angel via IRC or Twitter, and we'll make sure those get relayed to the speaker. So let's just go ahead and start with mic two, please.

Hey, great talk. Do you worry that by publishing your methods as well as your data, you're going to get a response from governments that are censoring things, such that it makes it more difficult for you to monitor what's being censored? Or has that already happened?

It hasn't happened. We have control measurements to be able to detect that. But that's a really good question, and it often comes up after I present. I can tell you, based on my experience, it's really hard to synchronize all the ISPs in all the countries to react to the SYN-ACKs and resets that I'm sending.
For example, Augur uses unsolicited packets, and for governments to block those, there would be a lot of collateral damage. You might say: well, Roya, they're going to block the IP of the University of Michigan, the spoofing machine. We have measures for that; we have multiple places where I actually have a backup if that happens. But overall, this is a global-scale measurement, and even in one country, across multiple ISPs, it's really hard to synchronize blocking something and maintaining that. So it is something we keep in mind and think about, but as of now, it's not a worry.

All right, then let's go over to mic one.

Thank you. I wondered, it's kind of similar to this question: what if you are measuring from a country that is blocking? Do you also distribute the measurements over several countries?

Absolutely. Every snapshot that we collect is from all the vantage points we have in certain countries, and a portion of the vantage points in countries like China or the US, because they have millions of candidates and we only need thousands. So basically, each snapshot, which takes us three days, collects data from all of the vantage points. And let's say somebody is reacting to us: we have benign domains that we check as well, for example a domain like example.com or a random domain. If we see something going on there, we double-check. But good point, because right now our effort involves a lot of manual labor, and we're trying to automate everything. So it's still a challenge.

Thank you. All right, then let's go to mic three.

Hi. Have you measured how much IP ID randomization breaks your probes?

Oh, this is also a really good one. Let me give a shout-out to antirez; he's the guy who in 1998 discovered the IP ID side channel and published something that I ended up reading. So, for example, Linux, or Ubuntu in the newest versions, they randomize it.
But there are still all these legacy operating systems, like Windows XP and earlier, or FreeBSD, that still have a global IP ID. One argument that often comes up is: what if all these machines get updated to newer operating systems that don't maintain a global IP ID? I can tell you that we have come up with another side channel that works for now. But my gut feeling is that if this didn't change from 1998 until now, with everybody saying that a global IP ID counter is a horrible idea, it's not going to change in the coming five years. So we're good.

Thank you. Okay, then let's move on to mic four.

When the system was being introduced, I was wondering: does detecting blockage between the client and the server necessarily indicate censorship? Since you were talking about validation: if it turns out there is a false alarm, what do you think could be the potential cause?

You're absolutely right, and I try to emphasize that what we end up collecting can be seen as a disruption: something didn't work, the SYN-ACK or the RST got disrupted. Is that censorship, or could it be a random packet drop? The way to establish confidence is to aggregate the results: do we see this blocking at multiple routers within that country, or within that AS? Because if one of them dropped a packet by accident, what about the others? This is another point I'm very concerned about: most of the reports and anecdotes we read are based on one VPN or one vantage point in the country, and then a lot of conclusions get drawn from that. You can always ask: well, this vantage point might be subject to many things other than government censorship. Also, I emphasize that censorship, as I use the term in this talk, is any action that stops users from getting to the requested content.
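The aggregation step described above can be sketched as follows. This is my own toy illustration, not the project's pipeline, and the thresholds are hypothetical parameters: a single failed measurement is treated as noise, and an AS is flagged only when enough independent vantage points agree.

```python
from collections import defaultdict

def flag_disruptions(measurements, min_vps=5, min_fail_frac=0.5):
    """measurements: list of (asn, vantage_point, ok) tuples.

    Flag an AS as disrupted only if it has at least `min_vps`
    vantage points and at least `min_fail_frac` of them observed
    a failure -- one dropped packet at one router proves nothing.
    """
    by_as = defaultdict(list)
    for asn, vp, ok in measurements:
        by_as[asn].append(ok)
    flagged = []
    for asn, results in by_as.items():
        fails = results.count(False)
        if len(results) >= min_vps and fails / len(results) >= min_fail_frac:
            flagged.append(asn)
    return sorted(flagged)

# Five of six vantage points in AS 100 fail: flagged.
# A single random drop in AS 200 does not count.
data = ([(100, f"vp{i}", False) for i in range(5)] + [(100, "vp5", True)]
        + [(200, "vp6", False)] + [(200, f"vp{i}", True) for i in range(7, 12)])
print(flag_disruptions(data))  # [100]
```

A real pipeline would also aggregate over time and across countries, but the core design choice is the same: no conclusion from a single vantage point.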
I'm trying to get away from semantics where intention is implied. But great question.

All right, then let's go back to mic one.

Hi, you mentioned that you have a team of students working on all of these frameworks. I was wondering if your frameworks are open source or available online for collaboration, and if so, where those resources would be?

So the data is open; the code hasn't been. One reason is that I have low confidence about sharing code. I'm friends with Philipp Winter and David Fifield; these people are pro open source and they constantly blame me for that. But it really requires confidence to share code, so we are working on it. At least for Quack, I think the code can very easily be shared. For Augur, we spent a heck of a lot of time making the code production-ready. And for Satellite, I think that is also ready. I can share them personally with you, but before sharing with the world, I want to have another person audit them and make sure we're not exposing something. It's just me being a little bit conservative. But if you send me an email, I'm happy to send you the code.

Thank you. All right, then move to mic two.

Thanks again for sharing your great vision. I find it really fascinating, although I'm not really a data scientist. My question is: did you find any use for your approaches in the spread of the Internet of Things? I understood that you use routers to make queries, but did you send, and maybe receive back, any data from washing machines or toasters? I know that being ethical and trying not to use end-user machines limits your access a lot.

But that's our goal. We are going to stick with things that don't belong to end users, so it's all routers and organizational machines. I want to make sure that whatever we're using belongs to an entity that can protect itself if something goes wrong. They can just say: hey, this is a freaking router.
It receives and sends so many things anyway. A volunteer, on the other hand, might not be able to make that defense, because their machine would look like it was deliberately collecting the data. But good question. I wish I could, but I won't cross that line.

All right. I don't see any more questions in the room right now, but we have one from the internet. So please, Signal Angel.

Yeah, actually a question from Kuldi585: I was in an African country where the internet has been completely shut down. How can I quickly and safely inform others about the shutdown?

While I think local users' input is highly, highly needed, and they can use social media like Twitter to report it, there is also a project called IODA. It's a project at CAIDA, at UC San Diego in the US, and Philipp Winter, Alberto, and Alistair are working on it. They remotely keep track of shutdowns and publish them. If you look at IODA on Twitter, you can see their live feed of where shutdowns happen. I haven't talked about how to reach the users, tell them what we see, or incorporate user feedback. We are working with a group of researchers who have already developed tools to collect such reports from Twitter, and we use that as some level of ground truth; but IODA does such a great job that I haven't felt the need to duplicate it.

All right, unless the Signal Angel has another question. Nope.

Can I add one thing? I was listening to a talk about how Iranians versus Arabs were sympathetic toward the Boston bombing in the United States, and a lot of assumptions and conclusions were drawn, along the lines of, and I'm completely paraphrasing because I don't remember exactly, "Iranians don't care because they didn't tweet as much." Their input data was a bunch of tweets from around the time of the Boston bombing.
After the talk was over, I said: you know that in that country Twitter is blocked, and so many people couldn't tweet.

All right, that concludes our Q&A. So thanks so much, Roya. Thank you.