Humans of Congress, it is my pleasure to announce the next speaker. I was supposed to pick out a few awards or something to present what he's done in his life, but I can only say: he's one of us. Charles Stross.

Hi. Is this on? Good, great. I'm really pleased to be here, and I want to start by apologizing for my total lack of German, so this talk is going to be in English. Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which in recent years has become ridiculously hard to predict. In this talk, I'm going to talk about why.

Now, our species, Homo sapiens sapiens, is about 300,000 years old. It used to be about 200,000 years old, but it grew an extra hundred thousand years in the past year because of new archaeological discoveries. I mean, go figure. For all but the last three centuries or so of that span, however, predicting the future was really easy. If you were an average person, as opposed to, say, a king or a pope, then, natural disasters aside, everyday life 50 years in the future would resemble everyday life 50 years in your past. Let that sink in for a bit: for 99.9% of human existence on this Earth, the future was static. Then something changed, and the future began to shift, increasingly rapidly, until in the present day things are moving so fast.
It's barely possible to anticipate trends from one month to the next. Now, as the eminent computer scientist Edsger Dijkstra once remarked, computer science is no more about computers than astronomy is about building big telescopes. The same can be said of my field of work, writing science fiction: SF is rarely about science, and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately futurism has gotten really, really weird.

Now, when I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe I could follow that worked eerily well. Simply put, 90% of the next decade's stuff is already here around us today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road in 2027 are already here now; they're new. People: there will be some new faces, aged ten and under, and some older people will have died, but most of us adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here today.

After the already existing 90%, another 9% of the near future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's law, you look at Intel's roadmap, you use a bit of creative extrapolation, and you won't go too far wrong. If I predict, wearing my futurology hat, that in 2027 LTE cellular phones will be ubiquitous, 5G will be available for high-bandwidth applications, and there'll be fallback to some kind of satellite data service at a price, you probably won't laugh at me. I mean, it's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?
And therein lies the problem: the remaining 1% of what Donald Rumsfeld called the "unknown unknowns" that throws off all predictions. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about the Nazis. I mean, nobody in 2007 was expecting a Nazi revival in 2017, were they? Only this time, Germans get to be the good guys.

So my recipe for fiction set ten years in the future used to be: 90% is already here, 9% is not here yet but predictable, and 1% is "who ordered that?" But unfortunately the ratios have changed. I think we're now down to maybe 80% already here (climate change takes a huge toll on architecture), then 15% not here yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Now, before I carry on with this talk, I want to spend a minute or two ranting loudly and ruling out the singularity. Some of you might assume that, as the author of books like Singularity Sky and Accelerando, I expect an impending technological singularity: that we will develop self-improving artificial intelligence and mind uploading, and that the whole wish list of transhumanist aspirations promoted by the likes of Ray Kurzweil will come to pass. Unfortunately, this isn't the case.
I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be outspoken atheists, they can't quite escape from the history that gave rise to our current Western civilization. Many of you are familiar with design patterns, an approach to software engineering that focuses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity. Indeed, the wellspring of today's transhumanists draws on a long, rich history of Russian philosophy, exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fedorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. Once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk (by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell; terribly sorry), you realize they've mangled it to match some of the nastier aspects of Presbyterian Protestantism. They've basically invented original sin and Satan in the guise of an AI that doesn't exist yet. It's kind of peculiar.

Anyway, my take on the singularity is that if something walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion, it's probably a religion. I don't see much evidence that human-like, self-directed artificial intelligence is coming along anytime soon, and a fair bit of evidence that nobody except some freaks in cognitive science departments even wants it. I mean, if we invented an AI that was like a human mind, it would do the AI equivalent of sitting on the sofa, munching popcorn, and watching the Super Bowl all day. It wouldn't be much use to us. What we're getting instead is self-optimizing tools that defy human comprehension, but are not in fact any more like our kind of intelligence than a Boeing 737 is like a seagull. Boeing 737s and seagulls both fly; Boeing 737s don't lay eggs and shit everywhere. So I'm going to wash my hands of the singularity as a useful explanatory model of the future without further ado (I'm one of those vehement atheists as well), and I'm going to try and offer you a better model for what's happening to us.

Now, as my fellow Scottish science fiction author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times; times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history, it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Habsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence. But history is useful for so much more than that.

It turns out that our personal
memories don't span very much time at all. I'm 53, and I barely remember the 1960s; I only remember the 1970s with the eyes of a six- to sixteen-year-old. My father died this year, aged 93, and he just about remembered the 1930s. Only those of my father's generation directly remember the Great Depression and can compare it to the 2007-08 global financial crisis directly. We Westerners tend to pay little attention to cautionary tales told by ninety-somethings. We're modern; we're change-obsessed. And we tend to repeat our biggest social mistakes just as they slip out of living memory, which means they recur on a timescale of 70 to 100 years.

So if our personal memories are useless, we need a better toolkit, and history provides that toolkit. History gives us the perspective to see what went wrong in the past, to look for patterns, and to check whether those patterns are recurring in the present. Looking in particular at the history of the past two to four hundred years, that age of rapidly increasing change I mentioned at the beginning, one glaringly obvious deviation from the norm of the preceding three thousand centuries stands out. And that's the development of artificial intelligence, which happened no earlier than 1553 and no later than 1844. I'm talking, of course, about the very old, very slow AIs we call corporations. What lessons from the history of the company can we draw that tell us about the likely behavior of the type of artificial intelligence we're interested in here today?

Well... I need a mouthful of water. Let me crib from Wikipedia for a moment. In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as "a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual... enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence." That was a late-18th-century definition. Sound like a piece of software to you?

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which in turn existed as a separate legal person. Prior to that point, it required a royal charter or an act of parliament to create a company. Subsequently the law was extended to limit the liability of individual shareholders in the event of business failure, and then both Germany and the United States added their own unique twists to what today we see as the doctrine of corporate personhood.

Now, there are plenty of other things that happened between the 16th and 21st centuries that changed the shape of the world we live in. I've skipped the changes in agricultural productivity that happened due to energy economics, which finally broke the Malthusian trap our predecessors lived in; this in turn broke the long-term cap on economic growth of about 0.1 percent per year in the absence of famines, plagues, and wars. I've skipped the germ theory of disease, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time measurement. I've skipped
the rise (and hopefully decline) of the pernicious theory of scientific racism that underpinned Western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions. But this is a technocentric Congress, so I want to frame this talk in terms of AI, which we all like to think we understand.

Here's the thing about these artificial persons we call corporations. Legally, they're people. They have goals, and they operate in pursuit of those goals. They have a natural life cycle: in the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years; today it's down to less than 20 years. This is largely due to predation. Corporations are cannibals; they eat one another. They're also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, but today they're automating their business processes very rapidly. Each human is only retained so long as they can perform their assigned tasks more efficiently than a piece of software, and they can all be replaced by another human, much as the cells in our own bodies are functionally interchangeable, and a group of cells can, in extremis, often be replaced by a prosthetic device. To some extent corporations can be trained to serve the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago. Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

So to understand where we're going, we need to start by asking: what do our current, actually existing AI overlords want? Now, Elon Musk, who I believe
you've all heard of, has an obsessive fear of one particular hazard of artificial intelligence, which he conceives of as a piece of software that functions like a brain in a box: namely, the paperclip optimizer, or maximizer. A "paperclip maximizer" is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of its goal, but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the well-being of the humans who set it this goal.

Unfortunately, I don't think Musk is paying enough attention. Consider his own companies. Tesla isn't a paperclip maximizer; it's a battery maximizer. After all, an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. SolarCity is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on, or turn the entire thing into solar panels. But hey, if you're Elon Musk, that's okay; you're going to retire on Mars anyway.

By the way, I'm ragging on Musk in this talk simply because he's the current opinionated tech billionaire who thinks that disrupting a couple of industries entitles him to make headlines. If this were 2007 and my focus slightly different, I'd be ragging on Steve Jobs, and if it were 1997, my target would be Bill Gates. Don't take it personally, Elon. Back to topic.
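Since the paperclip maximizer is at heart an algorithmic idea, a fixed objective pursued by an ever-improving optimizer, it can be sketched in a few lines of code. This is purely an illustrative toy, not anything from the talk; every name and number in it is invented:

```python
# Toy sketch of a fixed-goal optimizer: it gets better at its one goal
# each step ("self-improvement"), but nothing in the loop can question
# the goal itself. All names and values here are invented for illustration.

def paperclip_maximizer(resources, steps=6):
    """Convert resources into paperclips until nothing is left."""
    paperclips = 0
    rate = 1  # units of resource converted per step
    for _ in range(steps):
        converted = min(resources, rate)
        resources -= converted
        paperclips += converted
        rate *= 2  # self-improvement: twice as effective next step
        if resources == 0:
            # Resource depletion is invisible to the objective; the loop
            # only ever asks "more paperclips?", never "should I stop?"
            break
    return paperclips, resources

print(paperclip_maximizer(50))  # (50, 0): everything becomes paperclips
```

The point of the sketch is the shape of the loop, not the arithmetic: because the objective is hard-coded, "improvement" only ever means more paperclips, never a sanity check on the goal.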
The problem with corporations is that despite their overt goals, whether they make electric vehicles or beer or sell life insurance policies, they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they're eaten by a bigger predator, or they go bust. It's as vital to them as breathing is to us mammals. They generally pursue their implicit goal, maximizing revenue, by pursuing their overt goal. But sometimes they try instead to manipulate their environment to ensure that money flows to them regardless.

Human toolmaking culture has become very complicated over time. New technologies always come with an attached implicit political agenda that seeks to extend the use of the technology. Governments react to this by legislating to control new technologies, and sometimes we end up with industries actually indulging in legal duels, through the regulatory mechanism of law, to determine who prevails. For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from under, and before you know it, you end up with a permanent army in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl: the construction of human infrastructure on a scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies, like buses or trams, which work best in cities with a high population density.

So to get laws that favor the automobile in place, providing an environment conducive to doing business, automobile companies spend money on political lobbyists and, when they can get away with it, on bribes. Bribery needn't be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch lines and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was purely a politician; the fact that he accumulated a considerable personal fortune during this period by buying shares in motorway construction corporations has nothing to do with it. So, no conflict of interest there.

Now, the automobile industry in isolation can't be considered a pure paperclip maximizer. You have to look at it in conjunction with the fossil fuel industries, the road construction business, the accident insurance sector, and so on. When you do this, you begin to see the outline of a paperclip-maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year; that's the current global death toll from automobile accidents, according to the World Health Organization. It rivals the First World War on an ongoing, permanent basis, and these are all side effects of its drive to sell you a new car.

Now, automobiles aren't, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions. They're fast, efficient, and comfortable. We can thank legally mandated regulations imposed by governments for this, of course. Go back to the 1970s, and cars didn't have crumple zones. Go back to the 50s, and they didn't come with seatbelts as standard. In the 1930s, indicators (turn signals) and brakes on all four wheels were optional, and your best hope of surviving a 50-kilometer-per-hour crash was to be thrown out of the car and land somewhere without breaking your neck.

Regulatory agencies are our current political system's tool of choice for preventing paperclip maximizers from running amok. Unfortunately, regulators don't always work. The first failure mode of regulators that you need to be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the
American Federal Communications Commission, which just voted to eliminate net neutrality rules in the United States, has worked as associate general counsel for Verizon Communications Inc., the largest current descendant of the Bell telephone system monopoly. After the AT&T antitrust lawsuit, the Bell network was broken up into the seven "Baby Bells"; they've now pretty much re-formed and re-aggregated, and Verizon is the largest current one. Why should someone with a transparent interest in a technology corporation end up running the regulator that tries to control the industry in question? Well, if you're going to regulate a complex technology, you need to recruit regulators from people who understand it, and unfortunately, most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, very insightful into its operations, and wants to do something about it; just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they're supposed to control, they frequently end up working with their former office mates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.

Now, a second problem is regulatory lag, where a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities are tracked and their privacy may be violated. This would have been a good idea in 1993 or 1996, but unfortunately it didn't show up until 2011. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every 12 months of real-world time. By that yardstick, the cookie law came out nearly a century too late to do any good. Again, look at
Uber. This month, the European Court of Justice ruled that Uber is a taxi service, not a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is debatable. The problem is that a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her business.

So, to recap. Firstly, we already have paperclip maximizers, and Musk's AI alarmism is curiously mirror-blind. Secondly, we have mechanisms for keeping paperclip maximizers in check, but they don't work very well against AIs that deploy the dark arts, especially corruption and bribery, and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the law to keep up with. Finally, unlike the naive vision of a paperclip maximizer that maximizes only paperclips, existing AIs have multiple agendas: their overt goal, but also profit-seeking, expansion into new markets, and accommodating the desires of whoever is currently in the driving seat.

Now, this brings me to the next major heading in this dismaying laundry list: how it all went wrong. It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere you look, you see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human constituents in favor of those of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side order of racism on top. Our major political parties are led by people who are compatible with the system as it exists today, a system that has been shaped over decades by corporations distorting our government and regulatory environments.
We humans live in a world shaped by the desires and needs of AIs, forced to live on their terms, and we're taught that we're valuable only to the extent we contribute to the rule of the machines.

Now, this is 34C3, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns that repeat across timescales outside our personal experience. And if we look at our historical very slow AIs, what do we learn from them about modern AI and how it's going to behave?

Well, to start with, our AIs have been warped. The new AIs, the electronic ones instantiated in our machines, have been warped by a terrible, fundamentally flawed design decision back in 1995 that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals and upsets of the present decade. That mistake was the decision to fund the build-out of the public worldwide web, as opposed to the earlier government-funded corporate and academic internet, by monetizing eyeballs through advertising revenue.

The ad-supported web we're used to today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. 1995 was the year the worldwide web really came to public attention in the Anglophone world, and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for. The original dot-com bubble was all about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process. The naive initial assumption was that the transaction cost of setting up a TCP/IP connection over a modem was too high to be supported by per-use micro-billing for web pages. So instead of charging people a fraction of a euro cent for every page view, we'd bill customers indirectly,
by shoving advertising banners in front of their eyes and hoping they'd click through and buy something. Unfortunately, advertising is an industry, one of those pre-existing very slow AI ecosystems I already alluded to. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball. The coupling of advertising with web search was an inevitable outgrowth; I mean, how better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?

The problem with applying the paperclip-maximizer approach to monopolizing eyeballs, however, is that eyeballs are a limited, scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests, and it doesn't matter how often you flash an ad for dog biscuits at me: I'm never going to buy any. I'm a cat person. To make the best revenue-generating use of our eyeballs, it's necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely, in hope of hooking us with stuff we're attracted to. In other words, the ad industry is a paperclip maximizer whose success relies on developing a theory of mind that applies to human beings.

Do I need to divert onto the impassioned rant about the hideous corruption and evil that is Facebook?
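What "targeting us increasingly minutely" looks like, mechanically, is a scoring function over content. Here's a deliberately hand-written sketch; the item fields, weights, and names are all invented for illustration, and real systems use learned models rather than fixed weights, but it shows why an attention maximizer ends up favoring charged content:

```python
# Hypothetical attention-maximizing feed ranker. The weight on "outrage"
# reflects the evolved attention bias described in the talk: threats and
# horrors grab eyeballs at least as well as things we actually like.

def engagement_score(item, user_interests):
    # Overlap with the user's known interests...
    relevance = len(set(item["topics"]) & user_interests)
    # ...plus a bonus for emotionally charged content, because strong
    # reactions predict clicks better than pleasantness does.
    return relevance + 2 * item["outrage"]

def rank_feed(items, user_interests):
    return sorted(items, key=lambda i: engagement_score(i, user_interests),
                  reverse=True)

feed = rank_feed(
    [{"id": "kittens", "topics": ["pets"],     "outrage": 0},
     {"id": "scandal", "topics": ["politics"], "outrage": 3},
     {"id": "gadgets", "topics": ["tech"],     "outrage": 0}],
    user_interests={"pets", "tech"},
)
print([item["id"] for item in feed])  # ['scandal', 'kittens', 'gadgets']
```

Note that the outrage item ranks first even though it matches none of the user's declared interests: that's the "whatever makes you pay attention, not what you need or want to see" effect in miniature.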
Okay, somebody said yes. I'm guessing you've heard it all before, but the too-long-didn't-read summary is: Facebook is as much a search engine as Google or Amazon. Facebook's searches are optimized for faces, that is, for human beings. If you want to find someone you fell out of touch with 30 years ago, Facebook probably knows where they live, what their favorite color is, what size shoes they wear, and what they said about you to your friends behind your back all those years ago that made you cut them off. Even if you don't have a Facebook account, Facebook has a you account: a hole in their social graph with a bunch of connections pointing into it, and your name tagged in your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers, who then target you, even if you don't think you use Facebook. And indeed, there's barely any point in not using Facebook these days. They're the social media Borg; resistance is futile.

However, Facebook is trying to get eyeballs on ads; so is Twitter, and so is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes. And by "attractive" I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli, and the algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran, or an outrageous statement by Donald Trump, than to a couple kissing. The algorithm knows, and will show you whatever makes you pay attention; not necessarily what you need or want to see.

So this brings me to another point, about computerized AI as opposed to corporate AI. AI algorithms tend to embody the prejudices and beliefs of either their programmers or the data set the AI was trained on. A couple of years ago I ran across an account of a webcam, developed by mostly pale-skinned Silicon Valley engineers, that had difficulty focusing or
achieving correct color balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias: they didn't have a wide enough test set, and didn't recognize that they were inherently biased towards expecting people to have pale skin. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on, even without the programmers intending it. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days last year, because 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles. Just imagine: you're a poor, naive, innocent AI who's just been switched on, you're hoping to pass your Turing test, and what happens? 4chan decides to play with your head. I've got to feel sorry for Tay.

Now, humans may be biased, but at least individually we're accountable, and if somebody gives you racist or sexist abuse to your face, you can complain, or maybe punch them. It's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system. AI-based systems that instantiate existing prejudices make social change harder. Traditional advertising works by playing on the target customer's insecurity and fear as much as their aspirations, and fear of the loss of social status and privileges is a powerful threat. Fear and xenophobia are useful tools for attracting eyeballs to advertising. What happens when we get pervasive social networks with learned biases against, say, feminism, or Islam, or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits and their propaganda? Deep learning systems like the ones inside Facebook that determine which stories to show you, to get you to pay as much attention as possible to the adverts? I think you probably have an inkling of where this is going.

Now, if you think this is sounding a bit bleak and
unpleasant, you'd be right. I write SF; you read or watch or play SF; we're a culture that tends to think of science and technology as good things that make life better. But this ain't always so. Plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past any reflexive indignation at criticism of technology and progress, you might agree with me that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously, they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. We banned tetraethyl lead in gasoline because it poisoned people and led to a crime wave. These are not weaponized technologies, but they have horrible side effects.

Now, nerve gas and leaded gasoline were 1930s chemical technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and ICBMs date to the 1950s. You know, I have difficulty seeing why people are getting so worked up over North Korea; when North Korea reaches 1953-level parity, be terrified and hide under the bed. I submit that the 21st century is throwing up dangerous new technologies just as our existing strategies for regulating very slow AIs have proven to be inadequate. And I don't have an answer to how we regulate new technologies; I just want to flag it up as a huge social problem that is going to affect the coming century.

I'm now going to give you four examples of new types of AI application that are going to warp our societies even worse than the old slow AIs have done. This isn't an exhaustive list; these are just some examples I pulled out of my ass. We need to work out a general strategy for getting on top of this sort of thing before it gets on top of us, and I think this is actually a very urgent problem. So I'm just going to give you this list of dangerous new
technologies that are arriving now or coming soon, and send you away to think about what to do next. I mean, we are activists here; we should be thinking about this and planning what to do.

Now, the first nasty technology I'd like to talk about is political hacking tools that rely on social-graph-directed propaganda. This is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations, simply by looking at what tweets or Facebook comments they liked. They were able to use this to identify, with a high degree of precision, individuals who were vulnerable to persuasion and who lived in electorally sensitive districts. They then canvassed them with propaganda that targeted their personal hot-button issues to change their electoral intentions.

The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.

In fact, if we remember the leak of emails from the Macron campaign: it turns out that many of those emails were false, because the Macron campaign anticipated that they would be attacked and an email trove would be leaked in the last days before the election. So they deliberately set up false emails that would be hacked, then leaked, and then could be discredited. It gets twisty fast.

Now, I'm kind of biting my tongue and trying not to take sides here; I have my own political affiliation, after all, and I'm not terribly mainstream. But if social media companies don't work out how to identify and flag micro-targeted propaganda, then democratic institutions will stop
working and elections will be replaced by victories for whoever can buy the most trolls. This won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage, such as spear phishing, and distributed denial-of-service attacks on infrastructure, which are also practiced by western agencies. The problem is, once the Russians have demonstrated that this is an effective tactic, the use of propaganda bot armies in cyberwar will go global, and at that point our social discourse will be irreparably poisoned.

Incidentally, I'd like to add another aside: like the Elon Musk thing, I hate the "cyber" prefix. It usually indicates that whoever's using it has no idea what they're talking about. Unfortunately, much as the term "hacker" was corrupted from its original meaning in the 1990s, the term "cyberwar" seems to have stuck, and it's now an actual thing that we can point to and say, this is what we're talking about. So I'm afraid we're stuck with this really horrible term. But that's a digression; I should get back on topic, because I've only got 20 minutes to go.

Now, the second threat that we need to think about regulating or controlling is an adjunct to deep-learning targeted propaganda: the use of neural-network-generated false video media. We used to photoshop images; faking video and audio takes it to the next level. Luckily, faking video and audio is labor-intensive, isn't it?
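Mechanically, the like-based affiliation scoring described earlier is just classification over a user's social-graph signals. Here is a minimal, stdlib-only sketch of the idea: a plain logistic regression over which pages a user liked. The page names, training data, and labels are all invented for illustration; this is not anything Cambridge Analytica actually published.

```python
import math

# Toy sketch: infer an "affiliation" score from which pages a user liked.
# All page names and labels below are invented for illustration.

PAGES = ["page_a", "page_b", "page_c", "page_d"]

# Hypothetical training set: (set of liked pages, known affiliation 0 or 1)
TRAINING = [
    ({"page_a", "page_b"}, 1),
    ({"page_a"}, 1),
    ({"page_a", "page_b", "page_c"}, 1),
    ({"page_c", "page_d"}, 0),
    ({"page_d"}, 0),
    ({"page_c"}, 0),
]

def featurize(likes):
    """One binary feature per known page: liked it or not."""
    return [1.0 if p in likes else 0.0 for p in PAGES]

def train(data, epochs=500, lr=0.5):
    """Plain logistic regression fitted by gradient descent, no libraries."""
    w = [0.0] * len(PAGES)
    b = 0.0
    for _ in range(epochs):
        for likes, label in data:
            x = featurize(likes)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - label
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, likes):
    x = featurize(likes)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

w, b = train(TRAINING)
score = predict(w, b, {"page_a"})  # well above 0.5 on this toy data
```

Swap the toy features for millions of real likes and the toy labels for a seed set of known voters, and the same shape of model yields per-person affiliation estimates at scale; that is the unsettling part.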
Well, nope, not anymore. We're seeing the first generation of AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip, using software rather than a laborious human process. A properly trained neural network recognizes faces and maps the face of the Hollywood star onto the face of the porn star in the clip, and suddenly you have... oh dear God, get it out of my head. No, I'm not going to give you any examples; let's just say it's bad stuff.

Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. We can now put words into other people's mouths realistically, without employing a voice actor. This stuff is still geek-intensive: it requires relatively expensive GPUs or cloud computing clusters. But in less than a decade it'll be out in the wild, turned into something any damn script kiddie can use, and just about everyone will be able to fake up a realistic video of someone they don't like doing something horrible. I mean, Donald Trump in the White House? I can't help but hope that out there somewhere there's some geek like Steve Bannon with a huge rack of servers who's faking it all. But no.

Now, we've already seen alarm this year over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords on top before reposting it. This is basically YouTube spam. Many of these channels are shaped by paperclip-maximizing advertising AIs that are simply trying to maximize their search ranking on YouTube, and it's entirely algorithmic: you have a whole list of keywords, you permute them, you slap them on top of existing popular videos, and you re-upload the videos. Once you add neural-network-driven tools for inserting character A into pirated video B to click-maximizing bots, things are
going to get very weird, and they're going to get even weirder when these tools are deployed for political gain. Being primates who evolved 300,000 years ago in a smartphone-free environment, we tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us in text. We're already too vulnerable to fake news as it is; soon they'll be coming for us armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the camera that shot the raw feed. And you know how good most people are at using encryption? It's going to be chaos.

So, paperclip maximizers that focus on eyeballs are very 20th century. The new generation is going to be focusing on our nervous system. Advertising as an industry can only exist because of a quirk of our nervous system, which is that we're susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behavior when we see it. Or do we? It turns out the human brain's reward feedback loops are relatively easy to game. Large corporations like Zynga, producers of FarmVille, exist solely because of it. Free-to-use social media platforms like Facebook and Twitter are dominant precisely because they're structured to reward frequent short bursts of interaction and to generate emotional engagement, and not necessarily positive emotions: anger and hatred are just as good when it comes to attracting eyeballs for advertisers. Smartphone addiction is a side effect of advertising as a revenue model: frequent short bursts of interaction to keep us coming back for more. There's a new-ish development, thanks to deep learning again. I keep coming back to deep learning, don't I?
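The engagement-optimizing feedback loop just described is, at its simplest, a bandit problem: try several kinds of content, watch what users respond to, and drift toward whatever hooks them. Here is a minimal epsilon-greedy sketch of that loop; the attractor names and hidden response rates are invented, and real recommender systems use far richer models than this.

```python
import random

# Toy epsilon-greedy bandit: the system tries several content "attractors"
# and ends up serving whichever one users respond to most.
# Attractor names and engagement rates are invented for illustration.
ATTRACTORS = {"outrage": 0.30, "cute_animals": 0.10, "celebrity_gossip": 0.20}

def simulated_user_engages(attractor, rng):
    """Stand-in for a real user: engages with a fixed hidden probability."""
    return rng.random() < ATTRACTORS[attractor]

def run_bandit(rounds=20000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    shown = {a: 0 for a in ATTRACTORS}
    engaged = {a: 0 for a in ATTRACTORS}
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: show a random attractor to keep estimates fresh.
            choice = rng.choice(list(ATTRACTORS))
        else:
            # Exploit: show the attractor with the best observed rate so far.
            choice = max(
                shown,
                key=lambda a: engaged[a] / shown[a] if shown[a] else 1.0,
            )
        shown[choice] += 1
        if simulated_user_engages(choice, rng):
            engaged[choice] += 1
    return shown

counts = run_bandit()
# After enough rounds, "outrage" dominates the feed, because it engages best.
```

Note what falls out of this sketch without anyone intending it: nobody told the system to prefer outrage; it simply converged on whatever maximized the reward signal.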
It's the use of neural networks in a manner that Marvin Minsky never envisaged, back when he was deciding that the perceptron was where it began and ended and couldn't do anything. Well, we now have neuroscientists who've mechanized the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue participating in a behavior if it's undesirable, if the app developer actually wants to help people kick the habit.

This goes way beyond automated A/B testing. A/B testing allows developers to plot a binary-tree path between options, moving towards a single desired goal. But true deep-learning addictiveness maximizers can optimize for multiple attractors in parallel. The more users you've got on your app, the more effectively you can work out what attracts them, train them, and focus on extra-addictive characteristics. Now, going by their public face, the folks at Dopamine Labs seem to have ethical qualms about the misuse of addiction maximizers. But neuroscience isn't a secret, and sooner or later some really unscrupulous sociopaths will try to see how far they can push it.

So let me give you a specific imaginary scenario. Apple have put a lot of effort into making real-time face recognition work on the iPhone X, and it's going to be everywhere on everybody's phone in another couple of years. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place, and can tell whether they're open or closed. It recognizes your face from underlying bone structure, through makeup and bruises. It's running continuously, checking pretty much as often as you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of the face recognition system is to provide real-time continuous authentication when you're using the device: not just entering a PIN or typing a
password or using a two-factor authentication pad, but the device knowing that you are its authorized user on a continuous basis. And if somebody grabs your phone and runs away with it, it'll know that it's been stolen the moment it sees the face of the thief.

However, your phone monitoring your facial expressions and correlating them against app usage has other implications. Your phone will be aware of precisely what you like to look at on its screen. The phone may well have sufficient insight to identify whether you're happy or sad, bored or engaged. With addiction-seeking deep-learning tools and neural-network-generated images, those synthetic videos I was talking about, it's in principle entirely possible to feed you an endlessly escalating payload of arousal-inducing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you don't even consciously know you have. But either way, the app now owns your central nervous system, and you will be monetized.

And finally, I'd like to raise a really hair-raising specter that goes well beyond the use of deep learning and targeted propaganda in cyberwar. Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls Around Me. Spoiler: Apple pulled it like a hot potato as soon as word got out that it existed. Girls Around Me worked out where the user was using GPS, then queried Foursquare and Facebook for people matching a simple relational search: single females, per their Facebook relationship status, who had checked in, or been checked in by their friends, in your vicinity on Foursquare. The app then displayed their locations on a map, along with links to their social media profiles. If they were doing it today, the interface would be gamified, showing strike rates and a leaderboard, and flagging targets who succumbed to harassment as easy lays. But these days, the cool kids and single adults are all using
dating apps with a missing vowel in the name; only a creeper would want something like Girls Around Me, right?

Unfortunately, there are much, much nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Cambridge Analytica can work them out with 99.9% precision anyway, so don't worry about that; we already have you pegged. Add a service that can identify people's affiliation and location, and you have the beginnings of a flash mob app: one that will show people like us, and people like them, on a hyper-local map.

Imagine you're young and female, and a supermarket like Target has figured out from your purchase patterns that you're pregnant, even though you don't know it yet. This actually happened in 2011. Now imagine that all the anti-abortion campaigners in your town have an app called Babies at Risk on their phones. Someone has paid for the analytics feed from the supermarket, and every time you go near a family planning clinic, a group of unfriendly anti-abortion protesters somehow miraculously shows up and swarms you.

Or imagine you're male and gay, and the "God hates fags" crowd has invented a 100% reliable gaydar app based on your Grindr profile, and is getting their fellow travelers to queer-bash gay men only when they're alone or outnumbered ten to one. That's the special horror of precise geolocation: not only do you always know where you are, the AIs know where you are, and some of them aren't friendly. Or imagine you're in Pakistan and Christian-Muslim tensions are rising, or you're a Democrat in rural Alabama. You know, the possibilities are endless.

Someone out there is working on this: a geolocation-aware, social-media-scraping deep learning application that uses a gamified competitive interface to reward its players for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to
get the users accustomed to working in teams and obeying the app's instructions. Think Ingress or Pokémon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence: players who have been primed to think of their targets as vermin by a steady drip-feed of micro-targeted, dehumanizing propaganda inputs delivered over a period of months. And the worst bit of this picture is that the app developer isn't even a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims. It's just a paperclip maximizer doing what it does, and you are the paper. Welcome to the 21st century.

Thank you.

We have a little time for questions. Do we have a microphone for the audience? Do we have any questions? Okay, so you are doing a Q&A. Well, if there are any questions, please come forward to the microphones, numbers 1 through 4, and ask.

I don't think it's all bleak and dystopian like you have described it, because I also think the future can be bright. Looking at the internet, with open source, it's all growing and going faster and faster in the good direction. So what do you think about the balance here?

Basically, I think the problem is that about 3% of us are sociopaths or psychopaths who spoil everything for the other 97% of us. Wouldn't it be great if somebody could write an app that would identify all the psychopaths among us and let the rest of us just kill them?
Yeah, we have all the tools to make a utopia. We have them now, today. A bleak, miserable, grim meathook future is not inevitable, but it's up to us to use these tools to prevent the bad stuff happening, and to do that we have to anticipate the bad outcomes and work to figure out a way to deal with them. That's what this talk is: I'm trying to do a bit of a wake-up call and get people thinking about how much worse things can get, and what we need to do to prevent it from happening. What I was saying earlier about our regulatory systems being broken stands. How do we regulate the deep learning technologies? This is something we need to think about.

Okay, mic number 2.

When you talk about corporations as AIs, where do you see that going? Do you mean they are literally AIs, or figuratively?

Almost literally. If you're familiar with the philosopher John Searle's Chinese room thought experiment from the 1980s, by which he attempted to prove that artificial intelligence was impossible: a corporation is very much the Chinese room implementation of an AI. It is a bunch of human beings in a box. You put inputs into the box, you get outputs out of the box. Does it matter whether it's all happening in software, or whether there's a human being following rules in between to assemble the output?
I don't see there being much of a difference. Now, you have to look at a company at a very abstract level to view it as an AI, but more and more companies are automating their internal business processes; you've got to view this as an ongoing trend. Yeah, they have many of the characteristics of an AI.

Okay, mic number 4.

Hi, thanks for your talk. You've probably heard of the "time well spent" and design ethics movements that are alerting developers to dark patterns in UI design; many people design apps to manipulate people. I'm curious whether you find any optimism in the possibility of amplifying or promoting those movements.

You know, I knew about dark patterns, and I knew about people trying to optimize them; I wasn't actually aware there were movements against this. Okay, I'm 53 years old, I'm out of touch. I haven't actually done any serious programming in 15 years; I'm so rusty my rust has rust on it. But it is a worrying trend, and actual activism is a good start. Raising awareness of hazards, and of what we should be doing about them, is a good start. And I would classify this as a moral issue. Corporations evaluate everything in terms of revenue because, for them, it's equivalent to breathing: they have to breathe. Corporations don't usually have any moral framework. We're humans; we need a moral framework to operate within, even if it's as simple as "first, do no harm", or "do not do unto others that which would be repugnant if it was done unto you", the golden rule. So yeah, we should be trying to spread awareness of this, and working with program developers to remind them that they are human beings, and have to be humane in their application of technology, is a necessary start.

Thank you. Mic number 3.

Hi. Yeah, I think that folks, especially in this sort of crowd, tend to jump to the "just get off Facebook" solution first for a lot of these things that are really, really scary. But what worries me is how we sort of silence ourselves when we do that. After the election I actually
got back on Facebook, because the Women's March was mostly organized through Facebook. But yeah, I think we need a lot more regulation; we can't just throw it out, because social media is the only really good platform we have right now to express ourselves, to have our own sort of power.

Absolutely. I have made a point of not really using Facebook for many, many years. I have a Facebook page simply to shut up the young marketing people at my publisher, who used to pop up every two years and say, why don't we have a Facebook page? Everyone's got a Facebook page. No, I've had a blog since 1993. But I'm going to have to use Facebook, because these days not using Facebook is like not using email: you're cutting off your nose to spite your face. What we really do need to be doing is looking for some form of effective oversight of Facebook, and particularly of how the algorithms that show you content are written. What I said earlier about how algorithms are not as transparent to people as human beings are applies hugely here, and both Facebook and Twitter control the information that they display to you.

Okay, I'm terribly sorry for all the people queuing at the mics now; we're out of time. I also have to apologize: I announced that this talk was being held in German, but it was held in English. Thank you very much for listening; it's been a pleasure.