I'm going to talk about lots of things this morning. Yesterday morning's keynote was interesting, the stuff about history. We have much to learn from history, obviously. My favourite historian, a man called Tony Judt, wrote that the past is not empty: it's got stuff in it. You can't just wander around pretending it's not there, because you'll bump into the furniture and hurt yourself. He also wrote that arguing about history was like yelling at football on the television: they can't hear you, they don't care, and it doesn't make any difference anyway. I prefer yelling at the present; that's where I tend to do my shouting. Those lessons from history are fantastically important, but we can only use them if we understand when to apply them, if we understand the landscape of the present in which those strategies could be employed. I frequently get the feeling that, let alone trying to understand the future, we have almost no idea what's going on right now. We live in a world that's deeply complex, structurally complex, intentionally complex and apparently incoherent, one that produces constant strange cognitive dissonances: strange combinations of events that appear to be only tangentially or very deeply interrelated in ways we can't entirely tell, that we feel intuitively must be related to things we're doing but can't see. The scope of these actions has become so large, so vast, so global and so interconnected that it eludes individual view, whether it's financial markets or the rule of law or human interactions with technology. They span such vast areas that we can barely fit them into a single mind. That's the world we live in. We need new strategies in order to deal with that. I'm going to talk a bit about those today. This is a tesseract. It's one of my favourite symbols for this kind of thing.
It's a structure named by the Victorian British mathematician Charles Howard Hinton. Here's some history. It's a representation of a four-dimensional object in three-dimensional space. What the tesseract also shows us is that four-dimensional objects cast stable shadows down into the dimension beneath them. In this case it's a three-dimensional object casting a two-dimensional shadow, but four-dimensional objects also cast stable shadows into the third dimension, which means that we can read these complex structures if we know exactly where to look for them. But their full structure remains entirely invisible to us, just as so much of our contemporary technology remains entirely invisible to us. We can, at best, track the edges of it. This is what I mean by invisible technologies, by the intangible things all around us that we don't see. This is a bin in London. They put these bins in about a year ago, all around the city, around the business area in the centre of London, and they got very excited about them. God knows why, because they had these screens on them. They're recycling bins with television screens on them, and this is supposed to be amazing. Because brilliant, more ads. I don't care. But it was a big thing, and they are slightly connected: they had live travel data about the state of the Underground on them and stuff like this. It was kind of just more crap in the environment, but okay, that's the kind of thing that people do. But the significance of this as an invisible technology, where this thing suddenly becomes deeply interesting but also completely intangible, is what was revealed just last week about these bins: they're spying on us. It turns out that the company that put in these bins fitted them with passive Wi-Fi sensors. So that essentially they are... As you walk along... I have set off an alarm. This is good. Did you do something about that?
They don't want me talking about the bins, I'm afraid. Thank you very much, people behind the curtain. The thing about these bins is that as you walk past them, your phone, if you've got a smartphone, or a laptop, or anything that uses Wi-Fi, is constantly looking for Wi-Fi networks. So it's sending out its MAC address, which is a unique ID for every single device. It's sending that out, and the Wi-Fi access points are pinging back to it and saying, yes, I'm here, you can connect to me. In this case, they're not. In this case, these bins have devices inside them that just listen for that MAC address, and then they store it. It's basically like web cookies in the real world. Sort of, except it's not really like that either. That's how the newspapers have described it, but it's not really like that, because these leave no trace for us. There's no way of knowing, on your end, that this thing has recorded you walking past it. There are about 100 of these bins around central London. Only about 20 of them had this put in, and they trialled it for about a week, and then the company released all the data, saying, look at this amazing thing we've done. And everyone went, excuse me, you've done what? That is not cool. And they're probably going to be banned. So it was all weird. It's weird on multiple levels. It's weird that people would do that and think it's okay, and that's a whole subject around how working in technology, or what technology allows us to do, somehow sidesteps certain necessary ethical considerations. It also speaks to the fact that we don't know what to do about it. And this is the bit where I get more worried. I actually went out last weekend and chalked all these bins with little signs saying, this thing is addressing you. But I didn't even know what to put on them.
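To make the mechanism concrete: a passive listener needs nothing more than to log the source MAC address of each probe-request frame it overhears and note when it was heard. This is a toy sketch of that bookkeeping, not the bins' actual software; the data format and function name are my own.

```python
from collections import defaultdict

def log_probe_requests(frames):
    """Group overheard Wi-Fi probe-request frames by device MAC address.

    Each frame is a (timestamp, mac_address) pair, as a passive sensor
    would record it. Returns a dict mapping each MAC to the sorted list
    of times it was seen, which is all the data needed to reconstruct
    when a given phone walked past.
    """
    sightings = defaultdict(list)
    for timestamp, mac in frames:
        sightings[mac].append(timestamp)
    return {mac: sorted(times) for mac, times in sightings.items()}

# A phone broadcasts these probes whether or not it ever connects:
frames = [
    (101, "a4:5e:60:01:02:03"),
    (102, "d0:03:4b:aa:bb:cc"),
    (340, "a4:5e:60:01:02:03"),  # the same phone, passing again later
]
seen = log_probe_requests(frames)
```

The asymmetry is the whole point: the phone transmits these frames as part of normal Wi-Fi operation, so the listener never has to respond, and nothing on your device records that this happened.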
I started putting eyes with little Wi-Fi symbols and stuff like this, but we don't have a symbol for: this object is spying on you through a part of the electromagnetic spectrum that you can't even see. We don't have a language for describing this. And I sort of worry that we're just going to ban them rather than addressing this, because if we ban them, then other companies will just do similar things. They'll do workarounds, rather than us working out how we talk about this thing. How we talk about this thing is important. How we understand these technologies is key, because they infest the world. They're entirely intertwined with it in all these ways. I'm going to talk about some of the ways they're intertwined. I'm going to talk about shopping and robots. This is a factory, or rather a warehouse, in Pennsylvania. These things on the bottom, these little orange things, are called Kiva robots. They're one of the more interesting types of warehousing assistant you can buy. It costs a few million dollars to install these little guys in your warehouse, but if you've got a big enough warehouse, they make a huge difference. What they do is they have that whole central area of the warehouse stacked with these stacks of shelves. When an order comes in, the little robots scurry off, go under one of these lines of shelving, pick it up, and bring it to the side, where there are some people putting stuff into boxes. It's this nice little collaborative thing. There's this brilliant interview, I think it was in Business Week or Wired or something, where the reporter goes to the warehouse and meets the manager, and they're walking through it. He says, this is the robot area. The reporter notices that the guy is sort of checking as they walk along, and he's like, are we okay here? He's like, yeah, yeah, they're usually fine. They move quite fast, kind of like this.
But it's an area, a space, in which humans are negotiating the rules of sharing space with automated systems, because we've physicalised them into these objects. These are instantiations of packing and shipping processes. We've put them into little boxes, and therefore we can see them and interact with them. But the processes they engender then spread out further. This is an Amazon warehouse in Rugeley in Staffordshire, England. It's built on the site of a former coal mine, which tells you quite a lot about the last 20 years of industrial history. It doesn't use robots; it uses exclusively people. But it uses people in a very interesting way, because of the way Amazon has developed its warehousing and storage practices based on things like using robots. And that is: nothing in this warehouse is stored as a human would store it. Amazon uses a system called chaotic storage, which algorithmically analyses the demand for different objects and the interrelated demands on pairings of different things, and then organises its warehouses according to that. If a human organised this space, the books would be over here, from author A to Z, and then the DVDs here, and the white goods there. No, that's not how warehouses like this work. That's deeply inefficient, because no one wants the books from A to Z. They want a book, and then they want a DVD, and so on. So you scatter it all over the place, which is hugely more efficient, although it means the space is totally unnavigable to a human being. You would not be able to find anything in this space on your own at all. Which means the people who work in it use devices that tell them constantly where they need to go for the next thing. It requires technological mediation and augmentation in order to navigate this space. It's impossible to do so otherwise.
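A toy model makes the logic clear: incoming items take whichever slot happens to be free, the shelving order has no human meaning, and only the software index can turn an order into a walking route. This is a hypothetical sketch, not Amazon's actual system; the class and method names are mine.

```python
import random

class ChaoticWarehouse:
    """Toy model of chaotic storage: incoming items take any free slot,
    and only the software index knows where anything ended up."""

    def __init__(self, n_slots, seed=0):
        slots = list(range(n_slots))
        random.Random(seed).shuffle(slots)   # no humanly meaningful order
        self.free = slots
        self.index = {}                      # item -> slot: the only map there is

    def store(self, item):
        slot = self.free.pop()               # any free slot will do
        self.index[item] = slot
        return slot

    def pick_route(self, order):
        # The handheld terminal's job: turn an order into slot visits.
        return [self.index[item] for item in order]

warehouse = ChaoticWarehouse(n_slots=1000)
for item in ["book", "dvd", "kettle"]:
    warehouse.store(item)
route = warehouse.pick_route(["book", "kettle"])
```

The `pick_route` method is standing in for the wearable terminal: lose the index and the building is just shelves of unfindable objects, which is exactly the dependence described above.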
And that makes this space into what human geographers call a code space. It's a really fascinating and really important concept, and this is the canonical example of a code space: an airport departure area, a space most of us are reasonably familiar with. We're familiar with it and we know how to interact with it: you arrive, you present your credentials, they are entered into a system, you are hopefully given permission to proceed, and you move through this space and onwards. But if you've ever been in an airport when the system's gone down, you know quite how catastrophic that is. It's not just the immediate system that fails; it's the entire architecture. This ceases to be a modern, functioning traveller experience. It turns into a big shed full of angry people. Everything fails, because what's really being done here isn't just keeping the rain off your head. It's processing everything through. This space is co-produced by architecture and software. That's what makes it a code space in this geographical sense. Code spaces, it turns out, are kind of everywhere. The term is largely used in architectural and geographical theory; they talk about spaces like this, or the warehouses. But for me, the concept of a code space is far more widely applicable than that. This is one of my favourite quotes of all time. This is a picture of the ENIAC. It was one of the first computers, built at the University of Pennsylvania between 1943 and 1946. It was one of those proper old mainframe things: a couple of rooms filled with valves and wires and all this kind of thing. There's this beautiful quote from a mathematician who worked there, called Harry Reed. Harry Reed said that the ENIAC was, strangely, a very personal computer. Now we think of a personal computer as something that you carry around with you. The ENIAC was a computer that you lived inside.
It was a couple of rooms in size, and you had to burrow your way through it and explore it. The ENIAC was one of the first computers. Computation is now everywhere. It is layered over everything, and we are living inside that computer as well. The internet stretches entirely around the surface of the planet. The functions of computation are carried out both locally and globally in this thing that we call the cloud, which is actually massive sheds full of more computers. It extends all the way up to these satellites. This is the GPS network. Every time you use Google Maps on your phone, you're talking to satellites 20,000 kilometres up in space. That is a vast infrastructure, a superstructure of computation, that we are all now living inside. It's a structure, but not one that we can understand in purely physical dimensions. We have to look at where that code space extends out, who it involves and how it crosses over things. Zoom back in to those code spaces, those coded workers. This is a terminal, a wearable computer of the type worn by the employees of the Amazon warehouse. This one is used by Tesco, which is the UK's largest supermarket. There was a thing last year where it turned out they were forcing all their warehouse workers in Dublin to wear these things, and this machine has a dual function: it gives them instructions on what they need to find and pack, like the Amazon workers, and it also monitors their break times and movements, so that analytics are now being taken and applied down to the human level, monitoring when people take toilet breaks, monitoring how long their lunch breaks are. This device allows for that greater efficiency and for greater control. There's a whole bunch of labour aspects to this, not least that far less training is required to use these kinds of things, which means you can employ lower-wage workers on shorter-term contracts, or practically no contracts at all.
You can employ people who don't necessarily have English as their first language, or whatever the local language is, so there's less incentive to educate people. It lowers the barrier to work; it reduces people to these processes. It also atomises them. If you don't have short breaks, if you don't have long breaks, if you're constantly on the move, walking around a warehouse, and, in the case of Tesco, being given points or having them deducted depending on that, you don't have time to stop and talk about unionising. There's a whole bunch of stuff there. There's a direct technological effect. That extends back out to us again as well. These are remote control devices. If you've got a supermarket app on your phone, if you've got a shopping app, you're part of that particular segment of the code space as well. You're remotely controlling those workers almost directly. There's an automatic link from you pressing that button to someone in a warehouse getting a command that sends them around this thing. We've automated that entire bit of the process; that entire thing has become intangible, and we're involved in it as well. It stretches back to the manufacture of these objects as well. This is the canonical image of the iPhone worker. This was a photo of an unknown Foxconn worker that was found on someone's iPhone when it was delivered. It's brilliant. It's wonderful. It's this sudden glimpse into the blank, allegedly magical place from which all our extraordinary contemporary objects emerge. Many of our contemporary objects come with this sheen and patina of magic. That's the word that iPhone launches always use. It's not magic. It involves very large open-cast mines, a huge amount of trading, very large factories in China, and people. There are people involved at every stage of this. When you get little views into stuff like this, these are the weak signals of a legible system.
Technology is so tuned to make aspects like this invisible to us that it prevents us from being critical about it, essentially. Things like this are tiny glimpses into the system that makes it all possible, reminding us, like Harry Reed in the ENIAC, that there's always a person somewhere inside the box. We're involved as well. This is the original person inside the box. How many of you know what this is? Is that of interest? This is the Mechanical Turk. The original Mechanical Turk, not Amazon's one, though there are interesting parallels. The original Mechanical Turk was an 18th-century automaton. It was built by Wolfgang von Kempelen, who was a bit of an impresario, and he declared he'd made an automaton that could play chess. This was a robot dressed up to look like a Turkish man that played very good chess. It toured the courts of Europe for about 40 years, wowing people, showing off this extraordinary wonder of the age. Of course, it was revealed after some time that there was always a person inside the box underneath. It was incredibly cleverly constructed. There are these panels on the front of it. Inside, someone would sit on a little sliding stool, and when one panel opened, they would slide to one side and some clockwork panels would slide in, and then slide back again. Bizarre as it seems, it evaded detection for almost 40 years. Inside the box was a string of identifiable people, usually ageing chess masters, usually drunk and usually broke. It's a really sad line of people that were stuck inside this thing for decades. But what's almost more interesting came later.
The Amazon Mechanical Turk, for those who don't know it, is a system by which you can upload tasks to Amazon, to the cloud: tasks which are easily broken down into smaller steps, things like tagging images or proofreading or copy-editing, specific tasks which are actually quite hard for computers to do and very easy for people to do. And the cloud it gets distributed to is cheap humans. It's people all over the world who will do tiny, tiny tasks for very small amounts of money. Amazon's Mechanical Turk takes that work and splits it up; it puts the humans inside the box and lets you address them computationally; it turns them into processes. The thing that gets me about the original Mechanical Turk is that, yeah, it turned out to be a trick, it turned out not to be a robot, but it didn't really matter. It still enchanted people; it still made them wonder. People's mental model of it was so powerful that it transformed their ideas of what was possible. For me, that stands for the fact that our perceptions of technology, our understanding of it, can often be as important as, if not more important than, the true capabilities of the technology itself. As we build this world, we build it on the basis of what we think about how these things work, and not necessarily how they actually do, certainly when it comes to things like politics. But there's a brilliant moment that comes, and that has come in certain areas, when we stop competing with these technologies and start collaborating with them. This is the very famous moment when Deep Blue finally defeated Kasparov. This is the moment when we finally go: chess, this thing that we've built up as the pinnacle of human intellectual achievement, and computers are now better at it than us. That's that.
For a while, chess was in this kind of horrible mental collapse: we're screwed, we're done, let's just wait for the cyborg overlords. Then interesting things started happening. If you look at computer chess now (and chess is still not a solved problem in computational terms), the really interesting work is actually happening around what's called centaur chess, where teams of humans and computers play against each other. Because it turns out that while the best computer will now completely wipe the floor with even our best grandmasters, a human collaborating with a computer will wipe the floor with a far more advanced computer, because there are complementary strategies at play. I think that's a model to think about all the time, but it's interesting to watch how these technologies are infiltrating sport in particular, because of the strange debates they raise. Many of you have probably seen this system in various forms in various sports, depending on what you're interested in. It's called Hawk-Eye. It's used a lot in tennis, so you might have seen it there. It's a system that models, based on a number of cameras, the position of the ball and predicts where it's going to land. So it decides whether the ball is in or out. Boom. The thing about this that I think is fascinating is that it's not real, right? It's a prediction. This is an algorithmic process that looks at the path of the ball and says where it will be. When it's close, when you see that little shadow of the ball on the ground, that's not real. That's what the computer thinks has probably happened. It's probably more accurate at doing that than a human is, but it's not perfect. It has this aura of being right because it's a technology, but it's not the same thing as truth, and that's important.
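The point that the replay is a prediction rather than a measurement can be shown with a very small calculation. The sketch below is emphatically not Hawk-Eye's algorithm (which fuses many high-speed cameras and models bounce and spin); it's a bare-bones illustration of the same idea: take two camera fixes of a ball in flight, and project forward under gravity to a landing point. Any noise in the fixes flows straight into the predicted mark on the ground.

```python
import math

def predict_landing(p1, p2, g=9.81):
    """Extrapolate a ball's landing point from two camera fixes.

    p1 and p2 are (time, x, height) observations in seconds and metres.
    We recover the velocity at the first fix, then solve the ballistic
    equation for when the height returns to zero. The answer is a best
    guess, not a measurement: error in the fixes shifts the predicted
    landing spot.
    """
    (t1, x1, y1), (t2, x2, y2) = p1, p2
    dt = t2 - t1
    vx = (x2 - x1) / dt                 # horizontal velocity
    vy = (y2 - y1) / dt + 0.5 * g * dt  # vertical velocity at the first fix
    # Time until the height returns to zero: solve y1 + vy*t - g*t^2/2 = 0.
    t_land = (vy + math.sqrt(vy * vy + 2 * g * y1)) / g
    return x1 + vx * t_land

# Two fixes of a ball falling from 1 m while moving at 10 m/s:
landing_x = predict_landing((0.0, 0.0, 1.0), (0.1, 1.0, 0.95095))
```

Perturb either fix by a centimetre and the landing point moves too; the little shadow on the court is the output of an inference like this, with an error bar the broadcast never shows.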
Hawk-Eye was actually pioneered for cricket, which is a deeply silly sport, but quite a lot of fun if you're into it. Cricket is even more ridiculous in this regard, because the major things it's predicting are things like this: whether, even though the ball hit the batsman, it would have gone on to hit the stumps behind him if he hadn't been there. There's a huge amount of variation in that, but it's important to work out. This has been going on within cricket for a few years now. This whole system was designed to solve a ridiculous Victorian problem that we invented because of this ridiculous sport. Cricket, weirdly, as one of the oldest and most ridiculous and formal of sports, has become incredibly high tech. It's kind of brilliant. In addition to Hawk-Eye, which is an incredibly complex vision-sensing system based on military image recognition technology, they now also use infrared cameras. These check whether the batsman actually touched the ball, by seeing if there's a momentary infrared hotspot on the ball. The system is actually called Hot Spot. Again, this is a camera that was developed for military night vision that's now being used to decide a sporting question. They have another thing called the Snickometer, which is a tiny high-frequency microphone hidden inside the stumps, which listens for the little click as the ball clips the bat. The whole thing is wired. You're looking out at this field of men dressed in white and looking serious, and lovely green grass, and an English summer's day, and you're actually looking at a sensor grid, an intensely surveilled computational space, which to me is kind of extraordinary. The weird side effect of this is that discussions that aren't happening anywhere else are happening up here, with blokes like this. This is Henry Blofeld, who's an English cricket commentator. He's, depending on taste, either hilarious or awful. He's in his 70s, at least. He's been commentating on this for years.
He's incredibly posh. He's like the most English person you can imagine. And yet he and his colleagues in the commentary box are having what to me sounds like one of the most advanced and nuanced debates currently happening in public about the extent of human versus technological agency, because they're talking about a system that's embedded in the world, in a set of rules that we've come to understand, that's visible, that's become concretised. Things like the Amazon warehouses and so on are not like that. No one sees those spaces. No one in those spaces has the power to articulate them, and we have no critical discourse for describing them. In sport, suddenly, this whole debate becomes visible. Visibility is incredibly key to this. We have to make these things visible, and then we have to embed them in systems so that they become legible, so that we can talk about them. I think it's brilliant, bizarre but brilliant, that there's this incredibly deep conversation about the philosophy of our human technological future happening on Test Match Special. It concerns me that, once again, it's a bunch of old white dudes doing it, and not artists and engineers and technologists and politicians and a whole bunch of other people who should be having this debate, who can effect change in it. That's a thing we need to remedy. It's happening. The sport thing is getting wider and wider. This is the last sport slide, I promise, because they've just started to introduce this into football, so that's suddenly taking the debate to a far wider audience. Goal-line technology: they've decided in the English Premier League to use Hawk-Eye, the same as in cricket and tennis, to decide when the ball has crossed the line, and the referee wears a little watch that says GOAL in big letters when it goes across. So that's good. You'd think you couldn't really argue about that in football, except people can. Its introduction into football has extended that debate much further.
Particularly in the person of this man, who I like to think of as the last human crusader against the robot overlords. This is Sepp Blatter, the head of FIFA, who fought a long campaign against goal-line technology being introduced into football, because in his view, the referee is part of the game. If the referee makes a mistake, that's because sport is a human endeavour and subject to human fallibilities. You can be as upset about it as you like, but that is the nature of humanity, of life, of the world, all of that. And Blatter's strongly held belief is that that should be maintained, that those things shouldn't be engineered and augmented out. And this is a debate that we need to be having all over. It's a debate that applies to politics and to the shops again. Do we augment, augment, augment, because we can, because it makes for greater efficiency? Or do we maybe think about human fallibility and frailty and so on and so forth? It's weird that it's being done by Sepp Blatter, but it is. The other thing that happened to Sepp Blatter, and this is the link that I made, you might have seen this a while back: Sepp Blatter's Twitter feed got hijacked by the Syrian Electronic Army. The Syrian Electronic Army are a hacker group who appear to be on the side of Bashar al-Assad in the Syrian conflict against the rebels, and they've been going around hijacking all kinds of weird Twitter accounts and leaving strange messages. They claimed Sepp Blatter was resigning, and I like this: it's the robot hacker army rising up into battle with Sepp Blatter. They should perhaps have known that he was possibly a supporter of theirs. But it digs into one of the many ways in which these different worlds, retail, shopping, warfare, all lap over each other because of the technologies involved. This is a picture of Eric Schmidt; he's the chairman of Google. This is his Twitter avatar, in fact.
It's a weird one; he's wearing a flak jacket. Which sort of bothered me when I first noticed it. I was like, that's a weird way to present yourself. That seems to be of some kind of significance. Why would you be doing that? It turns out, as far as I've been able to establish, this photo was taken on a trip to Iraq in 2009, where Schmidt promised to digitise the entire contents of what was then left of the National Museum in Baghdad. If you remember, the National Museum was vandalised and looted heavily after the invasion. But Google has now promised that it's going to do something about it. As far as I'm aware, it hasn't yet. I applaud any effort to preserve cultural objects. I remain deeply concerned about that being done by private companies who may or may not have an interest in this stuff. But it turns out that Google's involvement in contemporary warfare is rather deeper than simply protecting cultural treasures. At a conference I was at recently, Schmidt was talking about his new book. He said some crazy things. He basically gave a long talk in which he presented technology as a neutral good, essentially: as something that could be rolled out, that would be of benefit to everyone, that would be a good thing without any qualms, any worries, that you could just give people technologies and it would make stuff better. The rather terrifying example he gave of this, and this is a direct quote, was that if everyone in Rwanda had had cell phones, there wouldn't have been a genocide. Which is not only terrifying, but just straight-up wrong. But it's this idea that purely making visible, that by broadcasting, by allowing people to see things, we can change them. It's an ethos that extends through all our forms of contemporary social media and so on and so forth. It's also the same ethos that governments have when they surveil: by making visible, by seeing, we can act. Google is actually heavily involved in international politics.
They lobby politicians heavily. They have a think tank called Google Ideas, which has been providing technological assistance to, in this particular case, the rebels in Syria. Google is picking sides in wars, which is an interesting thing, particularly if you also think about how many governments use Gmail, use Google Docs, et cetera, et cetera. This idea that by making something visible we can affect it is all very well, but it's not true. These are images from satellites owned by DigitalGlobe, part-owned by Google. This is the digging and filling-in of a mass grave in Daraya, in Syria, a couple of months ago. Images captured from space. We have access to this imagery. It doesn't necessarily allow us to do anything about it, which raises the question of who's operating these things, who's providing them, where they're going. So try and make these things visible. This is a project of mine called Dronestagram. Dronestagram takes records created by journalists who collate information about drone strikes, particularly the CIA drone strikes in Pakistan and Yemen, the ones outside the purview of declared wars, and then it tries to find the landscapes of those places on Google Maps. This isn't the exact location, because the exact locations are basically unknown; it should be within 10 or 20 kilometres, I think. I go and find these locations on Google Maps and I post them to Instagram, because that's where you go to get your daily dose of reality. That's where you go to see through other people's eyes. That's what our social media is supposed to be doing: bringing us together, giving us this empathy. We've developed these technologies that allow us to see through satellites, to be able to take out your mobile phone and look through them, and we've already become bored with it. It's already banal. We use it for finding the local shop, or a restaurant for dinner. We've given ourselves this kind of all-seeing god power, and we use it for very small things.
There's this kind of break between the uses of technology, as we use it socially, for business and so on and so forth, and its capabilities in the world, which I think are far greater than anything we're currently doing with it. I think that's, again, a problem of communication, a problem of debate, a problem of how we're articulating this stuff. It was strange: when I first started this project, it got an interesting, quite strong media reaction. It was covered in lots of press, starting with the tech press, because obviously it's Instagram, so it's a social media story, but it expanded a bit beyond that. And I got this really deep sense of the total inability of our media to have any meaningful discussion about technology, because they simply don't have the words or the concepts for dealing with it. They don't know what Instagram is, they don't know what the devices are, they don't know how these images are gathered by satellites; they have no technological understanding. These are the people who are supposed to be helping us navigate this world. We need better alternatives to that. It's that thing where anything you actually know about is in the news and you realise that they know nothing, which is very terrifying, but it extends not just to things you know about; it's kind of everything, including lots more stuff around the Syrian conflict. This is from Danish television. This was a report on the situation in Damascus a couple of months ago. The image in the background is from Assassin's Creed. It's not the real Damascus. Likewise, this is from BBC News, another report on Syria. This was a piece about Amnesty International and the debates the UN Security Council was having about the action that should be taken in Syria. That's the logo of the United Nations Space Command from Halo.
I'm assuming both of these are intern, Google-Image-search-related mistakes, but you start to see this sort of thing happening all around. I'm not going to talk about that, although it's good. I'm going to talk about this. I think I've got five minutes, right? All right. Let's go back to Syria again quickly, while I'm still in Syria. This is a tank built by Syrian rebels from plans they found on the internet. This is how it's controlled on the inside: using the mechanics and technologies of a first-person shooter to run this thing. This is an intensely technologically augmented conflict that's going on at the moment. This is a giant catapult being used in the suburbs of Damascus by rebels. The significant thing is what's going on here: this guy is filming it, because the Syrian conflict is one that's being run almost entirely on YouTube. There's a reason for that. Funding is coming from the Gulf states to set up rebel groups and buy arms for them, and in order to prove that the money is being well spent, they're asked to video it and put it back onto YouTube again. The result of this is more strange effects, like this one. This is a guy called Eliot Higgins, a British man who's become an expert in the weapons flows around the Syrian conflict, because he's watching those YouTube videos, seeing the weapons that are being used by the rebels, and tracking back to where they come from. His work analysing the weapons flows around the conflict, which he sees only through YouTube, is being cited in United Nations and humanitarian reports. This is an opportunity to use those same technologies to read back into the conflict again. To be able to say: this is a deeply complex system, one that's incredibly hard to read, but by understanding some of the mechanics of it, some of the flows, you can start to see the edges of it. Technology, in all of these cases, is not a neutral good. It's a tool, right?
Technology is the concretisation, the instantiation, of human politics and desires. It's deep and it's complex and it's hard to see the edges of it, but if you can start to see the pattern, you can kind of render it legible. That's what I was concerned about. I was concerned that this stuff was so invisible, so illegible, that there was no chance of us getting a handle on it, that it would remain always out of sight. But what I realised, through examples like the Higgins thing, is that by just doing the research, by paying attention, by looking at this stuff, technology conversely also renders things visible. Because in order to make something into a technology, you have to write it down, right? You have to code it, you have to type the thing in, and at some level that makes it readable, that makes it approachable and understandable. The title of this talk, which you may have seen at the beginning, was Naked Lunch, which is a quote from William Burroughs, and the definition of the naked lunch is the moment when everybody sees what is at the end of every fork, the moment of total clarity. And so while technology is making stuff more complex and more invisible and often illegible, it's also forcing it into view. The things we were talking about yesterday, if you were at the extraordinary session on online harassment, around which there's been an explosion of attention: that's not something that's just happened because of the internet. It's something that's been latent in our societies for so long, but it's now made so powerfully visible, and we have this moment in which we can actually maybe do something about it. That applies to all of these issues. It's our responsibility to build tools that render these things legible, to pay enough attention to them, and to become literate in them ourselves, so that we can do something useful with them. Thank you very much. Thank you. We're not going to have time for a lot of questions.
I just want to say it's such an honour to be in the presence of all of these speakers and be allowed to engage with all of these thoughts. Thank you. I have an immediate follow-up question, just on terminology. I do realise this is a consequence of belonging to a specific generation and being interested in these things, but twice you used the terms robot overlords and cyborg overlords as a sort of shorthand for a technologically dangerous future. I'm not sure from this talk where you stand on that. Do you in fact fear, not the robot overlords, but the technologies that we're making now? No. I don't fear the technologies that we're making. I fear some of the uses to which they're put. I happily bandy around terms like robot overlords and cyborg overlords because, as you said, they're a useful shorthand for those things. But let's always bear in mind that all of this stuff is metaphors: metaphors for objects, for computers, for people typing into those computers. And what we actually need, really badly, are better metaphors. All our current ones for this stuff are deeply broken and not useful anymore. OK, because we don't have time, I'm just going to make an executive decision, which is that I'm going to ask one more question, and you will be around for the day and maybe even tomorrow, so you can catch James and talk more about this, and talk to each other about this as well. So I think the one question that we have time for is this: we need better words. Where do we start? If we go back to the micro-actions of Saint Gallup yesterday, what's the one micro-action we can take with us from this to work? I'm sorry, but I'm not going to do that. This is not a game for micro-actions, it's a game for macro-actions. It's a game for bigger understanding. There's no one little thing here. In that case, where do we start? Where's the door, or the first page? The door is asking questions. The door is having the debate.
The door is, when presented with images that seem complex, when presented with, for example, the example on the last slide, something like the NSA spying revelations, which is a vast and complex and systematic thing: ask questions about it. Don't get carried away in the soap opera of what's happening to Edward Snowden, or what's happening on individual levels, but look at it structurally and ask how it's meant to be understood. What we need to do is develop a literacy in systems. Systems literacy is the literacy of the 21st century. It's the most important one, and that's a long battle. But immediately, just start asking questions, asking how and why these things have come about, rather than just what happened. But as you say, this is the whole world that we live in; it is physically encircled by these systems, and we are not currently within a paradigm or a terminology or an understanding where we can break that down into coherent things. You seem to be ahead of the curve relative to most of us. Where did you start? Just by doing this, just by asking questions of those images. If I see a picture of Eric Schmidt wearing a flak jacket, I go: right, where was that taken? Why? How? What's the background to this? Those things are trackable. What does this mean? What are we doing with it? It's all possible; we have all of this information at our fingertips. We just need to make better use of it. Thank you. Your book is coming out this fall. No, it's not. No, it's not? No. Your...? No book. No book. I was hoping. Then I'm sorry. Look at my website. I'm sorry, it's the speaker this afternoon. I was hopefully projecting that maybe you could write it. In that case, I'll get a book out. No book. A blog. I'm sorry. Thank you so much. Yeah.