My research question is how to measure the level of surveillance in a society by constructing a surveillance barometer. Let me tell you how this idea came about. The German Federal Constitutional Court made a kind of offhand remark in its decision on data retention: the legislator has to keep an eye on the general level of surveillance in society. Looking at the literature on the topic, I was at first quite skeptical, because no one had come up with an operational idea of how to put such a calculation to work. Much ink had been spilled on the topic, but nothing really conclusive came out of it.

But then I was reminded of a small episode from the 1990s. The legislator introduced an obligation for public prosecutors to report the number of telecommunications surveillance measures to a public agency, where the figures are visible to the general public. The first year the numbers were reported, they were picked up by the national press, because there were quite stark discrepancies between the numbers of telecommunications surveillance measures in the different German states. There is still a discussion about why we have these discrepancies between the states. And I thought: can we take it from there and upscale this idea to construct what I would call a surveillance barometer for Germany, by including not only telecommunications surveillance but, more generally, all surveillance measures of the security agencies?

For our method we made a key decision. We do not focus on the amount of data collected in data pools. We want to identify the data pools, but what we want to measure is the access that security agencies have to them. Starting from there, a six-step procedure developed for how we can measure the surveillance level with regard to these data pools.

The first step is to identify the data pools. We are interested in data pools which contain data on basically everybody in a society. That is not limited to data pools run by state agencies but also includes data collected by private actors.

In a second step we look at the access powers that security agencies have to these data pools. This requires an intricate legal analysis of the powers the agencies have, and a nice by-product of this procedure is that we get a kind of cartography of the security agencies' powers with regard to certain data pools.

The third step consists in getting the numbers of how often security agencies made use of their powers. In the beginning, the data seemed really scarce. We had the telecommunications surveillance data, but there seemed to be little systematic data on other surveillance measures. Lately, however, in the last one or two years, reporting obligations have been introduced in most of the German state police codes, and now we get much more data from the police agencies than we expected at the beginning of the project, because the police now have to report at least the numbers for some of the most important surveillance measures.

Not all surveillance measures are equal; they vary considerably in intensity. The fourth step is necessary because one data access is not like another. There are different types of access and different kinds of data that can be accessed. For example, if your health data is accessed, that is much more severe than if some metadata of your telecommunications is accessed.

So we had to develop a matrix for evaluating and weighing the different types of powers that security agencies use. We took the criteria for this matrix from the decisions of our Federal Constitutional Court on how it weighs the proportionality of security measures, and thus a matrix evolved that has more than a dozen criteria.

The fifth step consists in integrating the empirical numbers on the amount of access the security agencies took with the intensity values that we calculated for the different measures. We are working with two different models of integration: one gives more weight to the sheer amount of surveillance measures, the other gives more weight to the intensity of the measures, and we have to see how this plays out when we go through the different measures that we have under evaluation.

The final step is to aggregate the data from the different surveillance tools into one general surveillance barometer. That is, we integrate all the data that we have on the different surveillance measures into one surveillance score, and that is how we arrive at a general surveillance calculation for German society.
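To make the integration and aggregation steps more tangible, here is a minimal sketch of how usage counts and intensity weights could be combined into a single score. The measure names, figures and the two weighting functions are purely illustrative assumptions for this sketch; they are not the actual values from our matrix or our models.

```python
# Illustrative sketch: combining usage counts with intensity weights.
# All figures and weighting choices below are hypothetical placeholders,
# not the project's real matrix values or reported numbers.

# Hypothetical yearly usage counts per surveillance measure
counts = {
    "telecom_content": 5_000,
    "telecom_metadata": 20_000,
    "online_search": 5,
}

# Hypothetical intensity weights derived from a proportionality matrix
# (scaled 0..1, higher = more intrusive)
intensity = {
    "telecom_content": 0.8,
    "telecom_metadata": 0.3,
    "online_search": 1.0,
}

def score_amount_weighted(counts, intensity):
    """Model A: the sheer number of measures dominates the score."""
    return sum(n * intensity[m] for m, n in counts.items())

def score_intensity_weighted(counts, intensity, damping=0.5):
    """Model B: intensity dominates; counts enter only with damped growth
    (square root), so many low-intensity accesses do not swamp the score."""
    return sum((n ** damping) * intensity[m] for m, n in counts.items())

if __name__ == "__main__":
    print("Model A (amount-weighted):   ", round(score_amount_weighted(counts, intensity), 1))
    print("Model B (intensity-weighted):", round(score_intensity_weighted(counts, intensity), 1))
```

Whatever weighting is chosen, a real barometer would have to make these modelling choices explicit and keep them stable over time, so that scores remain comparable across years, agencies and states.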
Well, the key finding is that something like this can be done. We can construct an instrument that allows us to get a better picture of the general surveillance level in a society, and that allows us to analyze the surveillance level with respect to different surveillance instruments, different agencies, different states and different points in time.

Even now, where we have only processed limited data, we can already see certain things that we would otherwise not see. For example, since the telecommunications data reaches back a couple of years, we can see that the general level of telecommunications surveillance stays just about the same, but there is a significant shift between the surveillance of content data and of metadata. This can of course be explained by the spread of encryption in social media services, which makes it harder and harder for security agencies to access the content of telecommunications. On the other hand, more and more metadata is produced and can be harvested by the security agencies, because people communicate more and more electronically, especially in social networks.

The main relevance of having an instrument such as a surveillance barometer is that it would give societies, for the first time, an insight into their actual surveillance practice. This would allow different reactions and open up different possibilities. On the one hand, it can raise critical questions: why, for example, is the general public in certain states six times more likely to have their telecommunications surveilled than in other states? Why is that so? There needs to be an explanation for this.

On the other hand, it can also de-escalate certain public discussions. I remember when the online search, where the state introduces malware into your computer system and thus surveils your computer activities, was first introduced, there was a very heated discussion. If you talked to people, at least some of them, when they heard a funny sound in their computer or their smartphone, thought it must be infected by the state malware. Whereas when you look at the actual numbers, they ranged in the low single digits.
So having an empirically validated picture of the actual surveillance practices can inform public discussions, but also the legislator when it wants to introduce new surveillance measures, the courts when they have to judge the proportionality of certain measures, and the security agencies themselves, because for them discrepancies in the numbers might raise interesting questions and might make them look for, and develop, best practices on the basis of such an analysis.

So far our project has a purely conceptual character. It wants to show that such an instrument can be built. The next step would be to feed it with all the data that we can get our hands on. The instrument would then have to be put on a permanent basis by an institution that delivers these numbers and integrates them into the models each and every year. One could also think of upscaling the model beyond the borders of our nation state to other states, so that an inter-state comparison of surveillance levels would become possible. A natural candidate for such an upscaling would be the European Union and its member states, so that we could get an impression of the surveillance level not only within Germany but also within other European member states and the European Union as a whole.