Now, let's listen to what Adam Harvey has to say. He's an artist living in Berlin, and he studied in New York City. Actually, hands up in the audience: how many of you backed his Kickstarter campaign for RF-signal-blocking phone cases? Anyone? Okay. So Adam is going to talk about retail surveillance and retail counter-surveillance, the 50 most unwanted and most wanted things in surveillance and counter-surveillance. All right. Thank you, everyone, for coming here. This is my first CCC. In the title of this presentation, what I originally wanted to talk about was 50 companies doing retail surveillance, and to come up with 50 ways of doing a retail model for counter-surveillance. But what I found as I began working on the presentation is that it's relentless. There are literally thousands of companies, and it's a nonstop battle to keep up with all the different retail surveillance tactics and modify your life to adjust your privacy settings accordingly. So I've changed the presentation a little to focus on one aspect of that: I've narrowed it down to photography, or computer vision, and even within computer vision, down to facial analysis. For me, this started when I worked as a photographer. When I moved to New York City about 12 years ago, I worked as a photographer, and I came across some quotes from Susan Sontag. As I read her book On Photography, it really influenced my perspective on the power of a camera to capture, to possess, to turn people into objects that you can possess and whose narrative you can control. I think that's very clear when you look at aggressive paparazzi behavior, such as these photographers attacking Britney Spears. But it's a little less clear how that narrative unfolds over time online, for photos that are posted to the internet, to social media, and so on.
To talk about computer vision, I want to introduce a little of its history, because it's not entirely new. Around 1963 is the first recorded instance I've come across, from a recently declassified CIA memo proposing a simplified face recognition machine. Ultimately, in 1963 the technology was not quite where it needed to be to build a robust, accurate face recognition system. But over the next few decades, and especially in 1969, three Japanese researchers made a lot of progress and were able to detect the first human face with computer vision. I like to think of these first face detections, which look a little like a head of broccoli or a light bulb, as the first cave paintings: the first time a computer was able to understand, in a very primitive way, what it is to appear human. Throughout the 70s and 80s, computer vision made only moderate gains. Then in the 1990s came a program called FERET, funded, of course, by the military, the Department of Defense. It was a feasibility study to determine whether facial recognition could play a significant role in law enforcement and in identifying enemy combatants at a distance. This set into motion what Paul Virilio calls the logistics of military perception. And in 2001, a breakthrough came out: the Viola-Jones algorithm. Viola-Jones was unique because it was very efficient and offered enough accuracy that the cost-benefit analysis finally worked: a very lightweight computer vision system that could be put onto embedded hardware at very low cost with a decent frame rate. What that set into motion, from 2001 on, was the ubiquity of face detection appearing on all sorts of devices. It really changed the model for where you could put computer vision. You didn't need a giant computer to do it; you only needed a very lightweight, small embedded system.
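The efficiency the speaker credits to Viola-Jones comes largely from the integral image, which lets any rectangular Haar-like feature be evaluated in constant time. A minimal sketch of that trick, my own illustration in Python with NumPy, not code from the talk:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, padded with a zero row/column
    so rectangle sums can be read off with four lookups."""
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x),
    computed in constant time from the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar_feature(ii, y, x, h, w):
    """A simple two-rectangle Haar-like feature: left half minus right half.
    Viola-Jones cascades combine thousands of such features."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.ones((24, 24), dtype=np.uint8)       # dummy 24x24 detection window
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 24, 24))             # 576: every pixel is 1
print(two_rect_haar_feature(ii, 4, 4, 8, 8))  # 0: both halves identical
```

A real cascade chains thousands of these features and rejects most windows after a few cheap checks, which is what made embedded, real-time detection feasible.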
And of course, that brought a lot of problems for privacy. Now you have computers that can recognize faces. You can extract those faces, you can begin to do facial analysis, and it doesn't cost a lot of money; you can do it on very cheap, low-cost hardware. So as OpenCV and this face detection algorithm began to propagate through culture, what was appearing around 2008 was a real push toward computational photography: using cameras, almost without a human in the loop, to recognize people and extract knowledge about them. The same way Susan Sontag talked about people using a camera to extract knowledge and possess people as objects, it became very clear around 2005 to 2010 that this was going to be the future narrative: computers will be extracting the narratives, labeling us and tagging us. So in 2010, I worked on a project that's all about modulating your appearance to reduce your confidence score to a computer vision algorithm. The project is called CV Dazzle, and it exploits vulnerabilities in the face detection profiles with hair and makeup. By doing hair and makeup in a certain location, you decrease the confidence score that a face will be detected. Here's what it looks like when you run a test. I'll probably speed this up if I can, but you can watch, in a very slowed-down version, how face detection works: it's really just reading an image from left to right, like a book. You can see the results of the algorithm on the left, a very high confidence score for the face. Now, if we fast-forward to the end, what we see is a near-zero confidence score, with one misplaced rectangle, compared to the very high confidence score on the left.
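The "reading an image left to right like a book" description is a sliding-window scan. Here is a toy sketch of that loop, assuming NumPy; the brightness-based `score_fn` is a hypothetical stand-in for a real cascade classifier, not anything the speaker used:

```python
import numpy as np

def scan(img, win=8, step=4, score_fn=None, thresh=0.5):
    """Raster-scan an image left to right, top to bottom (like reading a
    book), scoring each window and keeping those above a confidence
    threshold."""
    hits = []
    h, w = img.shape
    for y in range(0, h - win + 1, step):        # top to bottom
        for x in range(0, w - win + 1, step):    # left to right
            s = score_fn(img[y:y + win, x:x + win])
            if s >= thresh:
                hits.append((x, y, s))
    return hits

# Hypothetical stand-in for a face classifier: "confidence" is just the
# window's mean brightness. A real detector runs a cascade of Haar features.
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                            # one bright "face-like" patch
hits = scan(img, win=8, step=4, score_fn=lambda w: w.mean(), thresh=0.9)
print(hits)  # only the window exactly covering the bright patch survives
```

CV Dazzle works by pushing the window scores below the detector's threshold, so the face never makes it into the hit list.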
Another way to look at that is to use what's called a saliency map, a kind of heat map for where the computer vision algorithm is looking, what it found interesting and salient in an image, so you can go back and see where the computer vision eyes have been. Recently, the researcher Vojtěch Fridrich at the University of West Bohemia ran a study on CV Dazzle to determine how effective it was, or whether it was effective at all. What he found is not that it's effective 100% of the time, which I don't think should be a requirement for a camouflage. Camouflage is often misunderstood as a Harry Potter invisibility cloak, when camouflage is actually about optimizing the way that you appear: reducing visibility, moving between different parts of the electromagnetic spectrum, possibly just for a brief moment, to evade observation. Achieving 100% would of course be great, but I don't think that should be a requirement for the way we think about camouflage. The results of his analysis showed that the most effective pattern is the one that covers the nose bridge area. That's one of the biggest vulnerabilities of OpenCV's face detectors, and the result was about a 69% reduction in detection. Now, compare that to World War I Dazzle camouflage, the original Dazzle. It's been debated whether it was effective at all, but Roy Behrens, a camouflage historian, has said that no, in fact, Dazzle was evaluated, and it was about 50% effective. You could say 50% is not great for camouflage, but if you avoid one out of every two torpedoes that would explode your ship, that's pretty great. Since I worked on the project, it's kind of taken on a life of its own, appearing on the TV show Elementary. People have taken the hints from what I posted online and reinterpreted them in their own way, which sometimes turns out great, sometimes turns out very interesting.
But I'm happy to see, overall, that people are experimenting with this idea of just appearing in a new way, and I think it can also be very playful. After that project, which ran from 2010 to 2013, I became very aware of, and concerned about, a different type of imaging: thermal surveillance. It doesn't relate as much to retail yet, but thermal is becoming very cheap. Ten years ago, 640 by 480 would cost you $20,000; today, 320 by 240 costs $200. So the price has changed the way we use this technology, and thermal is becoming more and more of a consumer-level technology. What you see here is a way to block it. This is more of the Harry Potter invisibility cloak kind of technology: a silver-plated metal fabric fashioned into Islamic dress, as an anti-drone hijab and an anti-drone burqa. The idea of the burqa is that it reinterprets religious dress in an era of mass surveillance: instead of creating a separation between man and God, it creates a separation between man and drone. Now, this is a test for you, like that game photo hunt. There are four people, and you can see their heat signatures very clearly, but there's a fifth person wearing the anti-drone burqa. It's really hard to see because of the projection, so I'll give you the benefit of the doubt here and play the animation; it becomes very clear when there's motion. But without motion, you can see that the visibility is near zero when wearing the anti-drone burqa. You can see the legs, and that's intentional. Now what you'll see is somebody walking out of a store. This is in the winter, so this is actually quite a high temperature differential, and this person is just glowing. I approach these projects in a playful way, but they also touch on some very serious issues of national security and surveillance. And what I can't predict is who will find them threatening or interesting after I release them.
One of the people who found it interesting was the Air Force General Counsel at the Pentagon, who tweeted the project. The other one is kind of funny: a request for an internal-use-only publication from a three-letter agency, asking for permission. Well, I'm never going to see it, I guess. So, working in this area, there's a lot of uncertainty about the way people will perceive these projects. Asked for a comment, the NSA declined, obviously, to say anything about it. But it always makes me wonder where the line is in doing this kind of artwork. How far can you really go before you've gone too far? And I think you should just go further. So I've taken the idea of these garments and created a kind of shop called the Privacy Gift Shop, where I try to further these ideas, using the store to normalize them through commerce. I think Holly Herndon said it best in a talk she gave: using pop culture as a carrier signal to transmit these ideas. And I think commerce and the gift shop can have a friendly, normalizing effect on the otherwise kind of terrifying discourse around national security. Originally, I did want to talk about these broader topics, so let me point you to some great talks here that allow me to expand on facial recognition and computer vision instead: Wolfie Christl's talk about corporate surveillance, and another interesting talk about the ultrasound tracking ecosystem. These are all part of the corporate retail surveillance infrastructure. I'll just add two things to that. With Wi-Fi, you can now detect emotions with wireless signals. I don't think this is worth spending too much time on, because it's very easy to block: you can build a kind of jammer to create a lot of noise in the Wi-Fi spectrum and disrupt the emotion-detection signals.
Another company to highlight is IndoorAtlas, which uses the geomagnetic signature, measuring the field strength in gauss on your phone's digital magnetometer, to get two-meter positioning resolution within an indoor retail environment. But that's also very easy to block with a sheet magnet. Put a sheet magnet or magnetic shielding material on your phone case, or even a small piece of metal, and it will change that gauss signature enough to throw it off. So these are both very easy technologies to circumvent. What I want to talk about, then, is computer vision, and modulating your appearance to minimize the damage to your privacy. As I was preparing this, I came across... As designers, we're influenced by everything, but what better element to pull from than something that's evolved over time? Birds have shorter wingspans today because they need to be more aerodynamic to avoid being hit by cars; that's something that's happened in the last hundred years. And I thought this was a great metaphor for thinking through technology: we also need to evolve, like the birds. The problem is, that's not really true. Lockheed Martin based that on a 2013 study about, I think, cliff swallows, and it's a bit misleading, to put it nicely, because the study is very narrow, looking only at Nebraska, at birds that lived near traffic in a bridge structure. To make that extrapolation from the data... I think the actual metaphor is that we're over-interpreting statistics to create a lot of hype and oversell technologies, misleading us about their true cost to society. So that's the real metaphor of the hawk. And now we finally get to more of the computer vision part. To put the scale in reference, of how much data can be gleaned from a very small amount of visual information, we start at the scale of one pixel: a one-by-one transparent pixel, and this is the most popular image in the world in terms of the number of times it's been downloaded and displayed.
Of course, you can't see it, because it's transparent. The only purpose of this image is to collect information about you. This pixel lives on Google.com, lives all over the ad ecosystem, and can be represented in 43 bytes, so it's the most lightweight image. But I think this might be the better metaphor for what images are today: an image, in some ways, has become a shell for data collection and surveillance. When we move up and fill in that square, we have 256 different grayscale values. That number increases very quickly as we increase the size of the image: 4 billion possible combinations in a 2-by-2 grayscale image. Go to 4 by 4, and now we're at 3.4 times 10 to the 38. Go to 7 by 6, and now we have enough information to do facial recognition: 7 by 6 at 256 gray values is enough to do 95%-accurate facial recognition on the AT&T faces dataset. Granted, the AT&T faces dataset is all white guys, and it's not a very large dataset. But even with a larger dataset, called FaceScrub, you only get an 18% performance reduction compared to the original faces when you pixelate them down to 14 by 14 pixels. At only 12 by 16 pixels, you can build an encoder to do scene recognition and activity recognition. What you do in this case is train your neural network on low-resolution data, and then, instead of interpreting a 640-by-480 image, you downscale it and use the knowledge learned at 12 by 16 to interpret it. So we have a very large amount of information in a very small image space. Now we go up to 20 by 20. Here we have 256 to the 400th, a very large dimensional space. The next four images are the optimal activations of OpenCV's Haar cascades. If you were to ask the algorithm to describe the perfect face, these are the faces the algorithm would most want to see: they activate it maximally, with a very high confidence score. The different profiles are called alt, alt tree, and default frontal face. Now we go up to 100 by 100.
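The counts in this scale-up are easy to check: an n-pixel image with 256 gray levels has 256^n possible states. A quick verification in Python:

```python
# Number of distinct 8-bit grayscale images at each size mentioned in the talk.
def combinations(w, h, levels=256):
    return levels ** (w * h)

print(combinations(1, 1))               # 256 values for a single pixel
print(combinations(2, 2))               # 4294967296, about 4 billion
print(f"{combinations(4, 4):.1e}")      # 3.4e+38 for a 4x4 image
print(len(str(combinations(20, 20))))   # 256**400 is a 964-digit number
```

The 20-by-20 case already defines a space far larger than anything enumerable, which is why detectors learn statistical features rather than memorizing faces.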
What can we do at 100 by 100? I have a feeling I'll run out of time talking about everything we can do. 100 by 100 is 2.5% of one Instagram photo. And as we've seen going up from one pixel, even at one pixel there's a lot of information. You have 100% unique separability of the digits zero to nine in Times New Roman, and 97% unique separability when you reduce each character to one pixel. What that means is you can take redacted text, run it through a genetic algorithm or a Markov chain, and ascertain what that text was from a merely 2-by-6-pixel pixelated image. Pixelation is not really redaction; it's simply a reduction, and sometimes that's enough information to tell you what you want to know. Every day, about 370 million photos are uploaded to the Facebook family, Instagram and Facebook, and what happens to all those photos, which contain a lot of faces? A lot of those faces are larger than 100 pixels. Every face uploaded to Instagram or Facebook is analyzed and possessed, and knowledge is extracted, not only from the face, but also from the metadata around that region. I want to do a very quick survey of some of the companies and what they're doing with that data, focusing on the most pernicious ones. Faception is a company out of Israel that scores your image on whether you look like a terrorist, or a poker player, or a bingo player; you could be an academic researcher, you could have a high IQ, and they know all of this just by looking at your face. The idea is that your face is somehow linked to your DNA, and that your physical traits can describe who you are, what you know, and your performance as a human. That sounds kind of crazy, like Francis Galton and eugenics, inferring someone's capabilities purely from physical traits. But they're not alone.
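The claim that pixelation is a reduction, not a redaction, can be sketched in a few lines: mosaic averaging shrinks the information, but two different glyphs usually stay distinguishable, which is what a genetic-algorithm or Markov-chain attack exploits. A minimal NumPy illustration of my own; the "glyphs" here are hypothetical bar patterns, not real text:

```python
import numpy as np

def pixelate(img, block):
    """Replace each block-by-block tile with its mean (mosaic 'redaction')."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Two hypothetical 8x8 "glyphs": a left vertical bar and a top horizontal bar.
a = np.zeros((8, 8)); a[:, :2] = 1.0
b = np.zeros((8, 8)); b[:2, :] = 1.0

pa, pb = pixelate(a, 4), pixelate(b, 4)   # down to 2x2 each
# Even at 2x2 the averages differ, so an attacker who pixelates every
# candidate glyph the same way can still tell which one was redacted.
print(pa.tolist(), pb.tolist())
```

Real attacks match the observed mosaic against pixelated renderings of every possible character sequence; the averages leak enough to rank the candidates.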
Some of the scores I mentioned: high IQ, extrovert; and another thing they claim to tell, purely by looking at your face, is whether you would be a good brand promoter. Another research group is looking at predicting criminality using lip curvature, eye inner-corner distance, and nose-mouth angle. What they find is that criminal and non-criminal face images "populate two quite distinctive manifolds." Again, they're not alone; a lot of other researchers are looking at how to take that very small 100-by-100-pixel amount of data and turn it into insights that could be used for marketing. Here we have a long list, including how trustworthy you are, sociable, typical, or weird. What all this reminds me of is Francis Galton and eugenics, and the real criminals in these cases would be the people perpetrating this idea, not the people being looked at. Still, there are some interesting things to learn from this; by learning the ways you're being looked at, you can begin to modify, to game the system, of course. One of them: if you have a wider mouth, you're more likely to be chosen as a leader, a CEO of a company; but conversely, if you have a narrower mouth, you're going to do better at an NGO, says this paper. Just by looking at the relationships between these 100-by-100-pixel face regions, the spatial information, you can determine who is the most important person in an image. You can ascertain somebody's pulse, if you have video, by amplifying the green channel. Again, still within 100 by 100 pixels. I don't think I have time to tell this great story, but Jetpac was a company that analyzed every public pixel of Instagram and then sold that technology to Google. Instagram is, of course, Facebook, so it's brilliant if you can sell Facebook to Google. What they did was take information, again, from the facial region, and build a guidebook. What's a cool place to go? Oh, the place with a lot of hipster mustaches.
Where's a place to pick up girls? Wherever photos of people with lipstick are geotagged. That's the idea of their product, and that's what happens to photos that are uploaded to Instagram. But beyond that, you can also begin to predict economic behavior purely from one photo: with computer vision algorithms, you can predict decision-making capability about 20% better than a human can. I'm going to have to move briefly through this, but here's an example of what it looks like to impose a lot of algorithms on top of an image, a selfie. I like to say a selfie contains more information than a photo someone else takes of you, because you not only have the face, you have all of the metadata and relational information around it. A few companies, Kairos, Emotient, Clarifai, Affectiva, are operating in this space, using some of the attributes I mentioned in those earlier papers, as well as this list of about 78 attributes that you can extract from a face and 47 knowledge points that you can then infer from those attributes. So, what to do with all this information contained in a very small 2.5% of one Instagram photo? Well, as I looked at in an earlier project, you can change the way that you appear. But in camouflage you can think about the figure-and-ground relationship, and I think there's also an opportunity to begin to modify the ground, the things that appear behind you and next to you, which can also interfere with a computer vision algorithm's confidence score. That new project is called HyperFace. What it's doing is taking those maximal activations, from a more traditional classifier like OpenCV's Viola-Jones, and here is a heat map of the most important areas of the face for two profiles, and then just giving the algorithm what it wants, overloading it: oversaturating an area with faces to divert the gaze of the computer vision algorithm.
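The "oversaturating an area with faces" idea can be sketched as simple tiling: repeat a patch that a detector scores highly until the whole surface is full of candidate faces. A hypothetical NumPy sketch of my own; the patch below is a crude stand-in, not HyperFace's actual maximal activation:

```python
import numpy as np

def hyperface_pattern(patch, reps=(6, 8)):
    """Tile a highly-activating 'face-like' patch into a textile-sized
    pattern, flooding a detector's sliding window with candidate faces."""
    return np.tile(patch, reps)

# Hypothetical 20x20 stand-in for an optimal Haar-cascade activation:
# a dark "eye" band over a darker "mouth" region, the crude contrast
# arrangement a frontal-face cascade responds to.
patch = np.full((20, 20), 0.8)
patch[5:9, 3:17] = 0.2     # dark horizontal "eye" band
patch[13:16, 6:14] = 0.4   # darker "mouth" region
pattern = hyperface_pattern(patch)
print(pattern.shape)  # (120, 160): 48 face-like candidates in one cloth
```

Every tile is another plausible face for the detector, so the true face competes with dozens of decoys for the algorithm's attention.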
An early prototype looked something like this, maybe just a little bit spooky. What you see here are all the maximal activations overlaid, and when you put this through a computer vision face detector, you get about 1,200 possible face detections from it. Not faces, but confidence mappings. You can refine that a little to create more of the CADPAT, pixelated-camouflage look. And you can do something similar for a neural network, to activate the face neuron of a neural network. Then you can combine these to create, for this project, textile patterns that could hopefully be used to modify the environment around you, whether it's on somebody next to you, or you're wearing it, maybe around your head, or in some new way. That project is a collaboration with Hyphen-Labs in New York City for their new work, NeuroSpeculative AfroFeminism, NSAF, and it will come out in January. So, I'm probably done, but I like to end on this slide, which was introduced to me by my friend Richard Rees. It shows a scene from 100 years ago in New York. If you look at this photo, everybody's wearing a hat. If you look around the room, nobody's wearing a hat, right? So 100 years from now, we're going to have a similar transformation of fashion in the way that we appear. What will that look like? Hopefully it will look like appearing in a way where we optimize for our personal privacy, or... yeah, optimize according to the settings of mass surveillance. And I'll end on that. Thank you.