This is a talk I gave recently at a panel discussion held at Zeeba Design for Tech Fluence in Portland, Oregon, titled "AI as a Two-Edged Sword."

Ever since the very first computers, humans have been trying to make them think like us; we've been trying to make intelligence in the machine. This has taken two paths, and here I'm dramatically oversimplifying. One path has been to emulate the biology of the brain: what do dendrites and axons and all the different parts of neurons actually do? So we created artificial neurons that have inputs, processing or threshold functions, and outputs. The other group of people were working on logic: what does the formal logic of decision-making look like? If x, then y, except when z. The first group ended up being called neural networks; the second, expert systems.

Neural networks learn from big data sets. They are very calculation-intensive, which turned out to be a bit of a problem early on because computers weren't that fast. In the right domain they can give you amazing results, but because they've got all these funny equations in the middle, there's no real audit trail: they can't tell you why they decided something. Expert systems take knowledge engineers who interview domain experts to create explicit rules that can be put into the system. Expert systems are mainstream in many areas: predictive maintenance on machinery, credit evaluation, claims processing in insurance. They have been doing the work for several decades. Expert systems are nice because they leave a nice audit trail; you can tell why they did something.

In 1969, two of the experts in the field, Marvin Minsky and Seymour Papert, wrote the book Perceptrons and basically put the kibosh on neural networks.
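To make the two paths concrete, here is a minimal sketch in Python. The weights and the claims rule are made up for illustration; they are not from any real system. The first function is a single threshold neuron of the kind described above, and the second is an explicit expert-system rule of the "if x then y, except when z" variety:

```python
# Path 1: a minimal artificial neuron -- weighted inputs, a threshold
# function, one output. The weights here are chosen by hand; learning
# those weights from data is what neural networks actually do.
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Path 2: an explicit expert-system rule -- "if x then y, except when z".
# This hypothetical insurance rule is the kind of thing a knowledge
# engineer would elicit from a domain expert.
def approve_claim(amount_ok, policy_active, fraud_flag):
    if amount_ok and policy_active and not fraud_flag:
        return "approve"
    return "refer to human"

print(neuron([1, 1], [0.6, 0.6], 1.0))  # both inputs on -> 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # one input on  -> 0
print(approve_claim(True, True, False))  # -> approve
```

Notice the audit-trail difference right in the sketch: the rule tells you exactly why it referred a claim, while the neuron's answer is buried in the arithmetic of its weights.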
They were wrong in the long run, because they were thinking about neural networks with one hidden layer, one layer in between the inputs and outputs. For that case they could, mathematically, kind of show that this was probably a dead end. But magic happened after that.

In fact, in 1987, when I got into the field of artificial intelligence, I joined a little retainer-based market research firm called New Science Associates in South Norwalk, Connecticut. This is a call-out to my New Science buddies. Later, in 1991, I created a service at New Science called Continuous Information Environments, which covered all these technologies. They may look familiar, but remember, this is '91; the internet becomes sort of open and public around 1994-95. So this is all happening before the inner tubes show up, and you can kind of see the shape of it. Pretty interesting stuff.

Today is 2017, 30 years later, and a lot has happened. Now the umbrella term I use for this whole area is machine intelligence. Other people call it artificial intelligence, but to me that tends to lean toward the expert-system side. Neural networks have become deep learning, because now we have many hidden layers of artificial neurons. We've come across, or created, huge data sets, so we can feed in a lot of data to teach these networks what's going on. And the hardware has had a great boost: from standard-issue central processing units, to graphics processing units borrowed from graphics engines, to the tensor processing units Google just announced and put online. You can actually go use them yourself through their TensorFlow system.

We are seeing breakthrough performance in deep learning. Google recently swapped out the back end of Google Translate, which used to rely on conventional parsing and semantic evaluation of languages to translate. For better or worse, they found data sets from expert translators.
So they had a lot of very good data to train their neural networks with. They switched in the new translation engine and suddenly had a big bump in translation accuracy, including continuity at the paragraph level, where the output no longer looked like a series of disjointed phrases and sentences glued together.

Deep learning is good in narrow domains. The AlphaGo deep learning system, which just beat the top-rated Go player in the world, cannot play chess. It cannot come in out of the rain. It knows only how to play Go. In the meantime, expert systems got much better too, but not as much as deep learning, which really had some breakthroughs for the reasons I just gave. Kevin Kelly talks about this whole shift as "cognifying": take whatever it is you're looking at and add AI to it. That's what we're going to see, a lot of it. Machine intelligence will pervade everything.

One of the goals of researchers in this area is something called artificial general intelligence, as opposed to narrow domains. What if you had a system that was good at most everything, that kind of emulated humans? I'm not holding my breath for this one, but I don't count it out completely. Other people are looking at superintelligent systems that are actually far smarter than us. And then, you know, buyer beware, or creator beware, caveat factor, I don't know how you'd say that exactly. We may, in fact, be in trouble down the road, heading toward things like the Singularity.

So if we look ahead with artificial intelligence, or machine intelligence, from 2017 today to 2037 on the horizon, and if you'll allow me to be a little extreme again, I see two different futures materializing: a utopia and a dystopia. The utopia looks like a world of abundance and efficiency. If you ever watched the Jetsons cartoons way back when, or if you watched Joaquin Phoenix fall in love with his AI companion in Her, you've sort of seen this.
Well, it's a life of leisure, and perhaps the Singularity in a good sense, like we passed through some loophole into a life that's really quite great. One of many communities that have played with this future is called fully automated luxury communism, and you can run with that.

On the dystopia side, there are more than enough science fiction plots to look at: Terminator with Skynet, Elysium, Westworld. I can barely watch Black Mirror, because its futures are dystopian and so close to the present that they feel like they're going to happen next month. If you've read The Circle or watched the recent movie, it's not Faulkner, but it definitely asks: what if Facebook basically wins everything, and we're always rating everybody and under constant surveillance? WALL-E is, in fact, an interesting dystopian future. And there's a dystopian Singularity too, where we go through this little pinhole and suddenly emerge into a world where humans are worth nothing, where there's scarcity of everything, where there are no jobs to make money, where there's no meaning because there are no jobs, where we have no connection to the earth, and where we're under constant surveillance. I can just as easily see this side happening. The utopian front says, oh, don't worry, jobs always appear; in the past, when there was a technological revolution, there was always a new set of jobs; AI, by the way, won't be so bad, it's really going to help us; and, you know, privacy is so 1960s.

So at this point I queried the audience. I said, where do you stand? Do you lean toward utopia? Not a perfect utopia, but more toward "this is going to be okay." Do you lean toward dystopia, where this is not going to be so hot? Or are you undecided, because it's just so confusing or you haven't made up your mind? It was about a third, a third, a third. And I land where this star is: more toward dystopia.
I think, for reasons I'll tell you right now, that we're really heading toward a disaster scenario, and we don't have a lot of systems in place to guide us back toward some kind of safe territory. I'm going to blame two four-letter acronyms, CMMC and PPPP, for much of the difference between heading toward utopia and heading toward dystopia.

The first one, CMMC, is my abbreviation for consumer mass-market capitalism, which is the flavor of capitalism we are in right this minute. It is the full rendition of consumerism, brought to market with advertising and all these different kinds of things, and it's only one flavor of capitalism, by the way. The image on the right there is my brain. I'm taking you over to it right now. This is actually my brain: a piece of software called TheBrain that I've been using for 20 years, feeding it all these kinds of things. In here we can find different variants of capitalism. There's compassionate capitalism, which the SAVA Foundation has written about, along with some interesting books from Marc Benioff and a variety of others. Then I can go back up to variants and pick something like regenerative capitalism, a really interesting positive vision of using regeneration, drawn from ecology and biomimicry, to create a kind of capitalism that regenerates the economy. Pretty interesting.

So consumer mass-market capitalism is the one particular version we are stuck in today. It has turned us from citizens into mere consumers. My job as a consumer is just to choose between the Cheerios and the Cocoa Puffs, the Trump and the Hillary, not to actually get involved and help figure out how to do governance locally, not to make Cheerios and share them out, not to save seeds. None of those things are my job as a mere consumer.
Notice that the language of advertising, which is the engine of consumer mass-market capitalism, is the language of war. We launch ad campaigns against target demographics. We pay by the impression, hoping to achieve market penetration. We send flights of messages against those demographics; they may as well be missiles. We pay bounties for hot leads, all those sorts of things.

And in the US in particular, we have a very cruel social contract. The Trump administration, backed by the full force and might of the conservative movement, is, for strange reasons, busy shredding whatever social safety net we used to have, on the excuse that people should have more motivation to get up and work, with not much realization that poverty in this country in particular, though certainly in others, is a dismal trap. The people who get low-interest loans are the people who don't need money and already have a lot of it. Money is very expensive for people at the bottom of the pyramid, and everything else is hard too: the time they spend commuting to wherever their work might be, on transit that costs too much, all of that is incredibly expensive to them. So we're seeing increasing income inequality, a real disjunction between the 1% and the 99%, which boiled over some years ago in Occupy Wall Street, has submerged again, and I think is going to come back boiling onto the streets at some point.

One way to look at consumer mass-market capitalism's effects on the world is to look at where the money comes from that fuels Facebook and Google. That money is from advertising, with intense Wall Street pressure to keep it going. How long can this play out? It's interesting, because I like Google. I'm pretty much all in: I have a Google phone, I use Google all the time, and I'm on Facebook a lot, giving up a lot of my information.
But there's clearly a Faustian bargain we are all making in doing so. There are other names for consumer mass-market capitalism, or other models that go along with it. Surveillance capitalism: we're always being watched, and all of our information is being hoovered up. The stalker economy: this information, once vacuumed up, remixed, and tied to other databases, is used to manipulate us into buying more stuff we don't actually need. We have no privacy, no place to go, no place to hide. The scene in Minority Report where Tom Cruise's character walks through the mall trying to get to the precogs is pretty interesting, because with technology available today, we could do this. We could do face recognition, or pick up the phone in his pocket, which is sending out three different radio signals, or at least do iris recognition, whatever it might be, and then pitch him exactly whatever his profile tells us he might need to buy. We're not that far away from this.

Jobs in the middle of all this are truly at risk. In the US, 27% of jobs are driving jobs: everything from long-haul trucking right down to taxis, including local delivery and a series of others. This will probably move first in different areas, but there are already autonomous vehicles on the roads today. Uber recently tested short-haul trucking with a truck that drove 150 miles between two cities in Pennsylvania by itself. This is happening, I think, much more rapidly than we expect. And driving is a well-known occupation in the US.

Retail is another one. Department stores are doing everything they can to get rid of full-time employees, as are all other corporations. Amazon has hit the full world of retail and is destroying everything except massage parlors, exercise gyms, and restaurants; pretty much every other kind of retail has a really hard time staying alive. Ironically, Amazon is busy creating some new bricks-and-mortar stores.
It's not just blue-collar work that's going away; thinking jobs are at risk too. Legal jobs: if you're doing discovery in a lawsuit, you are much better off feeding all the documents to legal software, which will do a better job than your paralegals. Writing jobs: a news story about a sports event, bad weather, or an accident is just as easily written by software as by a human, and it's a lot cheaper.

As I said, full-time work is just disappearing. We're entering the gig economy. The sharing economy has been very helpful in providing people a little life raft in difficult times, but it has also disaggregated work, making it hard to get benefits and keep enough income coming in. As a result, we have the precariat, the precarious proletariat, a neologism that I think is very indicative of where we're heading.

It's not so much that jobs are at risk as that tasks are at risk. I think there's an important distinction here, because a lot of the research on how many jobs automation will take over looks at the full job, and if the full job can't be knocked off, it doesn't count. But if your job entails six different tasks and four of them are easily automatable, your job may be gone, or it may just be reconfigured. If you're very fortunate, you'll go up the food chain to a more interesting job. If you're not, you'll be on the street, and there are very few systems that will actually help you get a job again. The education system at this point is more or less a joke: it's expensive, it's time-consuming, you can't discharge its debt even through bankruptcy, and it isn't training us for the things we actually need to understand.

There's a two-curve problem that Ian Morrison presented many years ago. It builds on the notion of the S-curve.
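The S-curve is usually modeled with the logistic function, and the two-curve problem is just two of these offset in time. A minimal sketch, with illustrative parameters rather than anything fitted to real adoption data:

```python
import math

# The classic S-curve: slow start, rapid middle, saturation at capacity.
# All parameters here are illustrative, not fitted to historical data.
def s_curve(t, capacity=1.0, rate=1.0, midpoint=0.0):
    """Logistic function: adoption level of a technology at time t."""
    return capacity / (1 + math.exp(-rate * (t - midpoint)))

# Two offset curves, in the spirit of horses giving way to automobiles:
# the first flattens out just as the second begins to rise.
years = range(1880, 1961, 20)
first_curve  = [round(s_curve(y, capacity=100, rate=0.15, midpoint=1900)) for y in years]
second_curve = [round(s_curve(y, capacity=100, rate=0.15, midpoint=1930)) for y in years]
```

The dangerous stretch of the two-curve problem is the gap where the first curve has flattened but the second has barely started to climb.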
So the horse and carriage has its own S-curve, which begins long ago when we started domesticating horses and then attaching carriages and wheels to them. It peaks somewhere around 1915, which is apparently peak horse, and then it goes down as the second S-curve, automobiles, comes up. The interesting place in the two-curve problem is when you're leaping, or trying to leap, from one curve to the next. And leaping here means entire societies, and incumbent corporations trying to stay in business, all of those. In revolution after revolution (electrification, the telegraph, the internet, smartphones), you can plot these charts. And the second curve always starts generating jobs; jobs come in with the second curve. In the case of horses and carriages giving way to automobiles, as the blacksmiths go away, along with the people who cleaned up after the horses in the streets of cities, we get road builders and gas stations and car repair, right?

Well, I hate to be really grim about this, but from my perspective, software is like a flesh-eating bacterium. In none of these other revolutions was there some entity present that was busy eating work and just vanishing it, taking it away completely. Look at it this way: humans get tired, they get sick, they want raises, they get cranky. Sometimes they're actually incompetent. And then they need to retire, hopefully in comfort, so they and we need to save for their retirement, et cetera. Software doesn't do any of that. It just keeps getting better, it keeps getting cheaper, and it's perfectly replicable. The only thing standing in the way of software eating everything is people strongly protecting intellectual property and saying, no, you can't use my software, you have to pay a whole bunch of money for it, which ironically may be one of our few salvations here. So what if this time there's no second curve, or the second curve is really shallow? It doesn't employ a lot of people.
It employs people in very flimsy ways. It doesn't generate enough revenue for all the expenses people still have under this very cruel social contract. This is why my star was way over to the right, on dystopia.

The second four-letter acronym I'm going to blame for a lot of our situation is PPPP: the pale patriarchal penis people. I'm almost one of them; I'm one of the pale penis people, but I don't really believe in patriarchal command-and-control systems. The problem is the algorithm culture: the algorithms are being written by people who've been told to create things for efficiency, scale, and profit. They've thrown away, or set aside, all notions of meaning, purpose, relationship, trust, and society, all the squishy terms; those don't really figure into these algorithms. And the companies that have basically captured all of our attention, and are reselling it, are doing exactly this. The mechanisms at the heart of consumer mass-market capitalism are driving us toward this dystopia, and the mechanisms of governance in the background that might be our safety net are in fact shredding that safety net as we speak. It's a pretty nasty situation.

So what are some of our key levers? What might we do at this point? One notion is that we might hack capitalism: turn consumer mass-market capitalism into some better notion. Some people are floating the idea of a guaranteed basic income, or universal basic income, or one of its several other names: the idea that you give some amount of money to people so that at least they can keep a roof over their heads and put some food on the table for their families. I'm not a huge fan of guaranteed basic income; I think there are probably other ways to do this. But I do think capitalism is savable, and many people, like the regenerative capitalism movement and others, are working on this.
They're just not getting that far, because Wall Street still has all the money and puts all the pressure on everybody. A second thing to do might be to wake up and diversify tech architects, meaning keep them from all being pale patriarchal penis people, and wake them up to the moral and ethical implications of the things they're doing. I forgot to mention that I'm now reading Cory Doctorow's book Walkaway, a lovely science fiction novel about people who are walking away from what he calls "default." Default is more or less what I'm describing as consumer mass-market capitalism run amok, with intellectual property protection and violence rampant. The people walking away from it are walking into a world of abundance and shared resources. The book does a very nice job of exploring what that might look like and how it works, although the fights between default and the walkaways are not that pretty to read.

The third thing we might do, I call designing from trust. This is a whole separate talk; I gave one for the AIGA a couple of years ago where I described design from trust, and you can see that and other links here. Design from trust basically says: most of the institutions we're living inside right now, how we design cities and where we put traffic lights, how we teach our children, how we manage employees, all these things are designed from mistrust of the average human. And it turns out there are dozens upon dozens of people and movements who have discovered how to design from trust. I'm not inventing this from whole cloth; I'm saying it exists. If we paid attention to those examples, we could use them to revamp and improve capitalism, and maybe make our way toward something better. So there's my GAIN talk (the AIGA talk was for GAIN). There's my TEDx talk on the left and, on the bottom,
there's my brain, which you can browse for free at jerrysbrain.com, or find in the App Store. If you have an iPad, it looks really pretty; the app is called Jerry's Brain and has my face on the icon. Thanks very much for listening. I appreciate your attention. Send me feedback through Twitter or whatever other means you like; you can Google my name and find me easily. Thanks.