We have a really interesting topic: the mapping of surveillance cameras. The talk is given by Martin, who built this project. It also touches on the face recognition tests at a train station in Berlin, which got quite a lot of media coverage. He's going to talk about the automatic mapping of surveillance cameras. And now, Martin, take it away.

Thanks. I'd invite you to look at this picture, from the London transport authority. It has a 1984 look, but sadly it's not satire or a parody advertisement. The British sometimes make it easy to talk about surveillance: someone did a freedom of information request and learned that the poster was designed by the Central Illustration Agency, which sounds like a department of the Ministry of Truth. If you ever need a cover for a dystopian book, it apparently costs 2,000 pounds there. The Brits are far ahead when it comes to surveillance; this poster is almost 15 years old, I think. Slowly, video surveillance has become common in Germany as well. A previous Interior Minister, and then Seehofer, ran a pilot project at the Berlin Südkreuz train station, from August 2017 to July 2018. Three companies were involved. They surveilled the escalators and the entrance. The test subjects were given Amazon gift cards as a reward; each person had to carry a Bluetooth beacon. Berlin Südkreuz is an S-Bahn station, and the beacons tracked who was actually in the station. The test subjects handed in high-quality photos of themselves, and the systems tried to find them through the cameras. Someone asked how this project was designed, and I'd invite you to read this quote. It says, more or less, that the project was to be declared successfully concluded independent of any actual, measured success.
And if they talk about fundamental rights violations connected with camera surveillance like that, they don't even care about the results, whether this actually helps police agencies. As mentioned, with about 90,000 people at the train station, you would have at least about 6,000 false alarms every day, and you'd have to sort those out. I was quite angry and thought: I'll try to do something against it. This is what it looked like during the test: if you didn't want to get scanned, you had to go on the left, and if you were part of the project or didn't care, you could go on the right. So during the test project you could circumvent it. Here too, the British are ahead of us. If you look at where the surveillance cameras are and map them, like I did many times, there is the "See it. Say it. Sorted." security campaign, and you are asked to report anyone who seems to be mapping cameras. So in Germany you at least have the choice; in England you are suspicious. In England you can't choose: if you don't want your face recorded, you shouldn't complain, and you shouldn't scream at the police. There are other options. If you know where the cameras are, you can simply turn your head to the side, or 15 degrees up or down; it's mentioned in the reports that the recognition rate is significantly lower then. The police have found a solution, from their perspective: place advertisements and signs in the cameras' direction, so that people look up and can be recognized better. So you can turn away. The reports complain that people walk down the stairs and move a lot, and that's not good for facial recognition. But if you want to turn away, or prevent being recorded, you need to know where the cameras are. And that's this project. To summarize: we already hand over biometric data for the personal ID card, and it doesn't just stay on the ID card.
All police agencies are allowed to look at the data from the ID card. So the idea is: we turn away, we walk down the stairs, we wear masks, hopefully. And on the right is a blanket with a certain pattern that makes the system misfire: it recognizes 30 faces where there is only a single person. It's some form of countermeasure, and you can look at the talk about this from 30C3. Those were the hard problems. Our problem is simpler. If we want to find where the cameras are, they all look almost the same: a grey box with a lens. We are always looking for the same thing; only the background differs: on a pole, behind trees, on the roof. So we know what we have to look for, and we need a database. OpenStreetMap is an option for this. I'm not really well versed in it, but there are nodes, and you can put them together to describe a house or a parking spot. On the left is a very popular OpenStreetMap editor, and these separate nodes make up a building. Used on its own, a node can also represent a separate object. For cameras there is the tag man_made=surveillance, and on the right you can see which properties it can have. There is also an interesting project, Surveillance under Surveillance. The surveillance camera data is in OpenStreetMap, but most maps don't render it; this project does that for you. This, for example, is the inner city of Hamburg around the main train station. The circles show the range of the cameras. It's difficult at a train station with several levels, which is why it overlaps a little. It's an interesting project; you can look at it under the URL up here. The cameras can also be broken down by country: you can see Germany, France, the USA. The USA has 11,300 cameras in the database. There are probably a lot more, but nobody has entered them into OpenStreetMap yet. So there is still a lot to do.
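To make the tagging concrete: a single camera node in OpenStreetMap carries key-value tags. The keys below follow the documented man_made=surveillance scheme; the actual values are invented for illustration.

```python
# Illustrative tags for one OpenStreetMap surveillance-camera node.
# Tag keys follow the man_made=surveillance scheme from the OSM wiki;
# the values below are made up for this example.
camera_node = {
    "man_made": "surveillance",
    "surveillance": "public",        # public / outdoor / indoor
    "surveillance:type": "camera",
    "camera:type": "fixed",          # fixed / dome / panning
    "camera:mount": "pole",
    "camera:direction": "135",       # compass bearing the lens points at
    "camera:angle": "30",            # tilt below the horizontal
}
```

Tools like Surveillance under Surveillance read exactly these keys to draw the range circles mentioned above.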
And that's what I'm trying to do. Now let's look at the data split by time; this is for Berlin. You can see a spike in 2012. This is just an estimate, and it shows that the data can be fairly old. There's also an API to read the data, the Overpass API, and I use it where I need it. It's documented in the OpenStreetMap wiki. There's also Overpass Turbo, which visualizes the results. You can get them as JSON or XML, depending on what you need for your own projects. The API is public and allows a couple of requests a day. Now we have an idea of where we want to store the data, so let's get to the main part, the part of the project that collects the data. There are three options. You can do it manually. You can use Vespucci, an OpenStreetMap editor for Android phones: you create the cameras on the map and enter the individual values like angle, direction and so on, and this data will later be shown. And now we come to the part of the project shown in the center, the Android app that I developed. Its central piece is a self-trained object recognition based on TensorFlow, built on the TensorFlow example app from 2018 or 2019, and trained on the material that I have collected over time. So basically I would be the person the Brits would accuse of doing shady things, taking photos of cameras at train stations. I want to show you a demo. This is the Android app. You walk alongside the camera; the app determines the position of the phone and thereby the position of the camera. It tries to determine the type of camera, and there's also the option to do it manually, or to change the position if it didn't work 100% correctly. You can create training images, and in an earlier version of the app you could also upload them, which improves the object recognition.
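Going back to the Overpass API: as a rough sketch of what reading camera nodes looks like (this is my own minimal example, not the project's actual code; the query follows Overpass QL syntax and the response shape is the standard Overpass JSON):

```python
import json

# Build an Overpass QL query for surveillance cameras in a bounding box.
# Overpass expects the box as (south, west, north, east).
def build_query(south, west, north, east):
    return (
        "[out:json][timeout:25];"
        f'node["man_made"="surveillance"]({south},{west},{north},{east});'
        "out body;"
    )

# Overpass returns a JSON object with an "elements" list; each node
# element carries "lat", "lon" and an optional "tags" dict.
def parse_cameras(payload):
    data = json.loads(payload)
    return [
        (el["lat"], el["lon"], el.get("tags", {}))
        for el in data["elements"]
        if el["type"] == "node"
    ]

# A tiny hand-written response, standing in for a real API reply:
sample = (
    '{"elements": [{"type": "node", "id": 1,'
    ' "lat": 52.5, "lon": 13.4,'
    ' "tags": {"man_made": "surveillance"}}]}'
)
cams = parse_cameras(sample)
```

You would send the query string to a public Overpass endpoint and feed the response body into `parse_cameras`; Overpass Turbo runs the same kind of query interactively.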
You can also filter based on camera types; this is all based on the object recognition. This is the newer editor that I have included: you can adjust the individual types, direction, angle and which zone is being surveilled. Later you can export all the data as a CSV file and import it into an OpenStreetMap editor. All of these parts of the project work without an internet connection. The object recognition runs on the phone, and you can download the OpenStreetMap data and save it on the phone, so the entire mapping and screening can happen without an internet connection, in case you don't want to create any mobile phone location data while doing this. It might be better not to leave digital traces, even though you're not doing anything evil. I also included support for group activities. You can say the first person does this quadrant and the second person does that quadrant, so you can split up the work. Then you meet at the end of the day, share the data and upload it. That's also a bit of protection: not everyone uploads individually; you only upload the data at the end of the day, in a café. Also, if the software doesn't recognize a camera, you can take a photo, export it and potentially send it to me. The other part of the project is the 360-degree camera, which is semi-professional hardware, about 300 euros. The next component is the Coral Dev Board Mini, which has a TPU, a tensor processing unit, useful for machine learning. You can compile your object recognition programs for this device and then run all of it offline, without internet access. You can mount the camera on a helmet, use your e-scooter or whatever vehicle you want, travel through the city and try to capture cameras. The two red parts at the top help to determine the position.
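The CSV export mentioned above could look roughly like this; the column names are my assumption, and the app's real export format may differ:

```python
import csv
import io

# Assumed column layout for the export; the app's actual CSV
# columns may be named differently.
FIELDS = ["lat", "lon", "camera:type", "camera:direction", "camera:angle"]

def export_csv(cameras):
    # cameras: list of dicts keyed by FIELDS
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(cameras)
    return buf.getvalue()

row = {"lat": 50.0011, "lon": 8.2589, "camera:type": "dome",
       "camera:direction": 135, "camera:angle": 30}
output = export_csv([row])
```

A file like this can then be reviewed and turned into OpenStreetMap edits with a regular editor.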
This is what the Android app does anyway, but in this case without the risk of anyone else logging where you are. These parts are connected through different pins to a GPS board, and the position is logged whenever a photo of the cameras is taken. The idea behind that: you get these 360-degree photos from the two fisheye lenses, and you see the roof. Of course it's not that round in reality, so you have to do some image editing before you can start the object recognition: you cut out crops of 45 or 60 degrees and warp them back into their original shape. I'll show you this a bit quicker. Now you see that the roof becomes straight, and then you also see the camera. Each image is processed by the Dev Board Mini, which has a really nice processor. The model is a MobileNet V1; it takes about 25 to 26 milliseconds to process an image, so you get roughly 40 frames per second for processing all these images. What costs time is the splitting and transforming of the pictures, because we only have small CPU cores without many gigahertz. This is something that could be improved. I have only worked on this project up to this prototype; in recent times I haven't found much time to work on it further. So we have the possibility of getting the data stored in OpenStreetMap. Now we also have to do something with the data, and I developed the following. This is a PCB with an ESP32 and a GPS chip, which can determine the position. It takes a couple of minutes until it finds the position based on the satellites, and then it gives you the position with just a few meters of accuracy. You can use a battery, and you can make it go to sleep if you don't move it for a while. The nicest thing is the dynamic NFC tag, these two golden parts that connect to the antenna. The NFC chip is interesting: you can pass a string into a function, and then read the data from the NFC tag.
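The splitting step described above can be approximated very crudely by slicing column ranges out of the equirectangular panorama; a real dewarp does a perspective (gnomonic) reprojection, so treat this only as a sketch of the bookkeeping:

```python
# Crude sketch of carving an equirectangular panorama into crops for
# the object detector. In an equirectangular image the horizontal axis
# maps linearly to yaw, so a crop is just a column range; real
# dewarping additionally reprojects each crop to a flat perspective.
def crop_columns(width, yaw_deg, fov_deg):
    # width: panorama width in pixels, covering the full 360 degrees.
    px_per_deg = width / 360.0
    center = (yaw_deg % 360.0) * px_per_deg
    half = fov_deg / 2.0 * px_per_deg
    # The range may wrap around the image seam, hence the modulo.
    return int(center - half) % width, int(center + half) % width

# Eight 45-degree crops cover the full horizon once:
crops = [crop_columns(3600, yaw, 45) for yaw in range(0, 360, 45)]
```

This column arithmetic is cheap; as noted in the talk, the expensive part in practice is the per-crop warping on small CPU cores.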
The camera positions are stored on an SD card, and the device compares its position with the positions of cameras in your surroundings. You have to load them onto the SD card first, but then the device works without an active connection; you talk to it just via near-field communication, with your phone. I'll show you a video once again. This is the movement activation. If the GPS module already has a satellite fix, it only needs about 10 seconds to reconnect, and you can see how the tag is built up. You can communicate all kinds of things this way. At the end of the day, the device gives you a kind of surveillance report: you can access your positions and all the cameras in that area, and see which cameras on OpenStreetMap might have seen you. You can click on them. This is on Android again, and it can show which angle the cameras have, if the data is correctly entered in OpenStreetMap. You can download the cameras in the area directly from OpenStreetMap. Here are more screenshots. On the left I waited for the bus at the main train station in Mainz, and on the right I was in the bus; it even worked there, although it's not really helpful to have a GPS receiver inside a large metal box. This is almost it; these are the two parts of the project. There's some more. I have in-browser validation software: after you have been out mapping, the app exports a CSV file, and in the browser you can validate it. It checks which data already exists; if it doesn't, you approve it, it's exported again, and you can input it into the OpenStreetMap editor. There's also the backend where you could upload training data. I don't have that running anymore, because I've focused more on mapping as a group, exporting, and uploading jointly with one account. And now a couple of questions, or requests, to you: how you can help. If you have access to 360-degree scanners: lots of cities have private companies drive through their streets with them.
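The position comparison the device performs can be sketched as a plain great-circle distance check against the camera list from the SD card; the firmware's actual matching logic isn't shown in the talk, so this is an assumption:

```python
import math

# Great-circle (haversine) distance in metres between two lat/lon points.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# "Which cameras might have seen me": every camera within radius_m.
def cameras_nearby(lat, lon, cameras, radius_m=50.0):
    # cameras: list of (lat, lon) tuples from the OSM extract
    return [c for c in cameras if haversine_m(lat, lon, c[0], c[1]) <= radius_m]

cams = [(50.0011, 8.2589), (50.0100, 8.2600)]
near = cameras_nearby(50.0012, 8.2590, cams)
```

A fuller version would also use camera:direction and camera:angle to test whether you were inside the viewing cone, not just within range.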
Building departments use these 360-degree scanners, even with a laser distance meter, and a company can go through the city with a car. Berlin, for example, has done that; one company is doing it. You can look it up through a freedom of information request, and there are contracts you might ask for. The data is mostly only available for internal use, but if you work at a data protection agency and have an interest in mapping surveillance cameras, you might suggest getting access to that data; I would think they could get access if they wanted to. You could also simply pay attention to cameras and input them into OpenStreetMap. I can work with that, and so can anyone else who wants to. If you're interested: together with other organizations, like Reclaim Your Face, I'll try to do mapping parties this summer, if that's possible without an infection risk. You can follow Reclaim Your Face on Twitter. If you do printed circuit board design, you can look at my boards and tell me what I did wrong. And if you work in an area like event filming and have access to a professional 360-degree camera, it would be interesting to put it onto the roof of a car, make a couple of rounds through a city and analyze the data afterwards, and do other things for OpenStreetMap too. These are the ways you could help me. You can also look at everything I've used for my presentation on GitHub. On Vimeo you can watch all the demonstration videos; there are also more current demonstrations for the current versions of the Android app. You can follow me on Twitter if you want updates on this project, or want to know where the mapping parties take place. You can see this as crowdfunding; you can look at this board again. And if you have questions, I'm here. Thank you for your attention.
Okay, thank you very much for this nice, very technical talk; I'm always happy to join these. We have some questions from the audience, and I would like to start with them. You said that you stopped the uploading of training data. Did you look at the uploaded training data? How do you ensure that people didn't upload cat pictures?

I thought about that. On the server that handles the upload, I would have run a higher-performance object recognition, or a classification of the picture; there are performance differences between the object recognition on your phone and what a server can run, and you can see them in the accuracy. I would have checked whether someone uploaded cats or videos, and everything the object recognition did not recognize as a camera, I would have looked at manually. And if someone had uploaded cat pictures, then I would have just looked at those.

Is the Android app available on F-Droid? And is it also possible to run it without extra hardware?

Without extra hardware, yes. It's not on F-Droid, and it's not on Google Play. I would probably rebuild this app entirely, because the object recognition and the machine learning APIs have changed; they have become much simpler in the last three years. I used the example app, threw out everything I didn't need and put my own material in. But now that the TensorFlow APIs have become a lot simpler, for complexity reasons I would do it completely again, rebuild it completely, and then publish it for regular Android. You can also build it yourself from GitHub; that's also a possibility.

Did you try to use public images? For example, photos with position data that are uploaded to Google, especially Google Street View, or maybe pictures from other social networks? There are already many of those 360-degree pictures.
And these often come with location data.

I looked at Google Street View, definitely. Through the public Google API you only get low resolution, and that is not sufficient for my use case. And the license: you cannot use the images without Google's approval. Google has all the data in higher quality, but limits public access. With user-uploaded 360-degree pictures, I don't know how the licensing works, whether it's determined by the user or by Google. I didn't look at those images; I think they're more about nice views, not the inner city, maybe a little around the train stations. I'm not really sure they're relevant for finding the positions of surveillance cameras, but it might be worth looking into the licensing, and I'll probably do that. Yes, there are many different image sources; using and analyzing those is a good idea.

So with the IFG requests, do we actually get any information about the duration of storage of the data?

Most cameras are in the area of train stations, and I think they are marked as such there. Private operators, like restaurants, may surveil their own area, and those are also mostly labeled properly, though there's a lot of private use that isn't. As for the storage duration: you can see the contracts, but they are redacted a lot of the time. I didn't do any freedom of information requests myself, but the storage duration should be mentioned. They say that faces and license plates are censored before anything is published; if you do a lot of work, you could probably still find the people you are targeting, but I think the protection is in place. The storage duration should be in the contracts; you should be able to find it out.

And there were questions about the links: where can we get the OpenStreetMap data and also the tools?

You can get everything from GitHub.
All parts of the project are there. For the data, there's the Overpass API; you should look into the documentation of OpenStreetMap, it's well documented. The tag you're looking for is man_made=surveillance, all lowercase with an underscore. I didn't include it in the slides, but you can find it on OpenStreetMap. I think the project Surveillance under Surveillance also explains really well how they extract the data. You can simply ask me again if you have problems; that's not an issue. The website is unsurv.org. It's not that nice, but you can look at it. Or you just start with GitHub; that's the more important part.

Someone asks about road-ice detection: have you found cases where cameras are only used to detect ice on the road surface, where the reason for the surveillance was really that specific?

With traffic you have to distinguish: in certain cases you just want to count cars and nothing else. Road-surface ice detection I haven't seen, but it's probably not the worst use case; you can point the camera straight at the ground, and then you won't see anything else, so it wouldn't surveil a public space. So that's actually a really interesting application.

Okay, thanks. What can you do with the data? What uses can you make of it besides evading the cameras?

I thought of a research direction: take a group of people that is representative of the population, split them however you want, by income, whatever, give them trackers to carry in their pockets, and look at who would be disproportionately affected by surveillance measures. You can imagine what the result would be, and then think about whether the current political situation is good. I have hopes that biometric surveillance won't be rolled out Europe-wide.
That can change, of course, at any time. But if this were rolled out everywhere, you could think about routing: with a slider you input how many cameras you are willing to pass on your route, more or less. I would build such a routing service to have a possibility of being caught by as few cameras as possible. It might be really complicated to do, but it's a possibility. I haven't done it yet, but it's being discussed.

Another question from the audience, and also a remark: really nice project, and I'm really looking forward to the mapping parties.

Gladly, if you want to take part. Usually we do this locally; there are small OpenStreetMap communities that are responsible locally, and the local CCC organizations are probably also a good contact point. And then we could meet. That would be great.

And you can also get the hardware tracker through crowdfunding, right?

You can find it on the web page, or on Twitter, or if you search for unsurv.

Many thanks. Those were all the questions from the audience, and I think we should now proceed to the extended Q&A. Many thanks again. Thank you for your attention.
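The camera-aware routing idea from the Q&A could be sketched as a weighted shortest-path search, with the slider value acting as a per-camera penalty on each street segment. Everything here, the graph shape, the penalty parameter and the numbers, is invented for illustration; no such service exists yet.

```python
import heapq

# Dijkstra over a street graph where each edge carries its length and
# how many mapped cameras cover it. Edge cost = length + penalty * cameras,
# so penalty=0 gives the plain shortest path and a large penalty makes
# the router detour around surveilled segments.
def route(graph, start, goal, penalty):
    # graph: {node: [(neighbour, length_m, n_cameras), ...]}
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, length, cams in graph.get(u, []):
            nd = d + length + penalty * cams
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk the predecessor chain back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

graph = {
    "A": [("B", 100, 2), ("C", 150, 0)],  # A-B is short but has 2 cameras
    "B": [("D", 100, 0)],
    "C": [("D", 150, 0)],
}
shortest = route(graph, "A", "D", penalty=0)    # ignores cameras
avoiding = route(graph, "A", "D", penalty=100)  # detours around them
```

The camera counts per edge would come from the same OpenStreetMap data the rest of the project produces.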