I was asked to share a bit about what's going on with AI trends and how it might affect the evolution of our species, our relationship with technology, and even what it means to be human. So this is ambitiously called how artificial machine intelligence might even be a path towards Human 2.0, whatever that might mean. Quick introduction about what I do. I'm a venture capitalist at Mayfield Fund, one of the first venture capital firms in the world. We backed companies such as Atari, Amgen, and Genentech. We're now in the fourth generation of our leadership, manage about $3 billion total, and have invested in some pretty interesting companies, some of which you might have heard of: Lyft, if you've ever taken a Lyft, SolarCity, Marketo, ClassPass, some fun things like that. So it's enterprise companies and consumer companies. I lead our consumer practice, looking at consumer internet and mobile technologies. We're investing out of a $400 million current fund, and we're usually investing really early; half the time, it's people with just an idea. We're currently managing about 60 companies, and we also get a look at what's happening in India and China through our teams there.

So, some key trends. Why is AI bubbling up these days? Some really key enablers. There are going to be 10 billion plus mobile connected devices by 2018. That's more than the number of people on the planet. Each one of these has more computing power than the computers that got the Apollo rockets to the moon. Internet of Things: everything in your house, around your house, on you, near you, is going to get connected to the internet. It's going to grow eyes, ears, a voice, a brain, all these sorts of things. They're going to be on all the time, watching, learning, seeing, observing. From your fridge to your toilet, yes, your toilet, to your toaster, to everything else.
And there'll be a lot of interesting new applications, and some that are, frankly, downright scary. What's interesting is that the startups doing this aren't even sure what the applications are, but it's so cheap to connect devices to the internet with the same chips in your phone that you can assume anything and everything will get a cloud connection.

Another big change: we've had smartphones for a while now. There are something like 40,000 apps submitted every year, but guess how many apps most people download after they get their phone. The net new number of downloads is usually zero. Once you have your apps set on your front page, you just don't play with them anymore. So instead, everybody's using chat. Everybody's on Messenger, WhatsApp, WeChat; Facebook Messenger is a big one. In other words, we're saying messaging is the new app. All the things that you normally do on the internet and in apps are getting done in messaging today, which is kind of interesting. Soon you'll be asking questions, doing search, getting recommendations, buying things, booking a Lyft or an Uber, all within chat. China has already had this with WeChat, and we're seeing this in the US as well.

So what's going on is that, by using all these apps, you're creating a digital footprint. There's this notion of digital exhaust: all of the signals, every site you surf, every Amazon purchase you make, every Spotify track you listen to. There's so much data exhaust on what you do. This digital footprint is almost like a perfect data selfie of all the things you do, your preferences, your behavior. It's kind of interesting because the behavior is sometimes very different from what you think of yourself as. Have you ever seen online dating profiles? People always say everybody likes puppies, we all like long walks on the beach, we all say A, B, and C that we do.
But if you actually look at the data exhaust of what you really do, it'd be a far different picture than how you describe yourself. So where are we headed with all this data? I studied engineering control systems when I was younger, and the big thing about controlling any system is that, step one, you have to be able to measure it. Now that we have these phones everywhere, and we've got tracking pixels and analytics on every website you see and every app, we measure everything. You can see everything that you do. So we've got the measure part done. Second, we're now in the stage of understanding it, comprehending it. Machine learning, analytics systems, all of the software out there is quickly making sense of the things that are being measured. And if you can measure and comprehend something, theoretically you can control it. You can have feedback loops that tune it, optimize it, do better recommendations, do better curation for you. And then with that, theoretically, you can really improve it, turbocharge it, make it better, faster, bigger, more efficient.

I've always been a proponent of this notion of the quantified self. This was a bunch of geeky people years ago: we would measure everything we ate, the number of steps we took, our heartbeat, all this stuff, just to get an understanding of what our bodies were doing. That was really fringe before, but now it's become mainstream. How many of you have tried a Fitbit? I'm sure many have seen and tried them. That used to be kind of a fringe behavior; people used to think, why do I care about the number of steps I take? But the number of people interested in their data is getting bigger and bigger. And beyond just understanding your body, perhaps we're moving towards this notion of the quantified life. All aspects of your life are now measurable and potentially optimizable.
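As an aside, that measure, comprehend, control, improve loop can be sketched in a few lines of Python. This is the textbook proportional-feedback pattern applied to a made-up daily screen-time metric; the metric name, target, and gain are all invented for illustration, not any real product:

```python
def measure(state):
    # Step 1: measure -- read the current value of the metric.
    return state["screen_minutes"]

def comprehend(observed, target):
    # Step 2: comprehend -- turn the raw measurement into an error signal.
    return observed - target

def control(state, error, gain=0.5):
    # Step 3: control -- nudge the metric against the error
    # (a proportional controller, the simplest feedback loop).
    state["screen_minutes"] -= gain * error
    return state

state = {"screen_minutes": 200.0}  # where we start
target = 120.0                     # where the optimizer wants us

for _ in range(20):  # each iteration is one "day" of feedback
    error = comprehend(measure(state), target)
    state = control(state, error)

print(round(state["screen_minutes"], 2))  # converges toward 120.0
```

The improve step is just running the loop again with a better model of the error, which is exactly what recommendation and curation systems do at scale.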
Whether it's work, the number of emails you put out, your response time: this is an extension of how people used to measure factory workers, the amount of time it took to screw in a rivet or something like that. We can now see and measure exactly what people are working on, all the time. You could even quantify your kids' play and entertainment. Back to the notion of the four-year-old and the tablet: there are parental modes now where you can see exactly the sites, the activities, the games that your kids are using, and you can break it down minute by minute. Amazon measures everything you buy and tunes it all the time, so we have easily quantified commerce and consumption. So many sites measure your every click, your every move; they can even measure your attention and your intention on sites like the New York Times or YouTube. We're now in the attention economy, and we're measuring everybody's attention all the time, because that's the one thing that's truly scarce: the share of your mind and your attention each day. I joke about this, but people used to say you can't measure love. Turns out you actually can. There are so many algorithms out there; folks have hacked Match.com and eHarmony, and the Tinder folks are magical about this. You can actually do quantified love. You can know the number of people you need to look at; you can tell by the different signals how long you look at somebody before you swipe left or right; you can mathematically predict when things will be matches now. And so this gets to a really interesting notion that with data, maybe you can measure and quantify everything. We're going to have the quantified home, quantified pet, quantified baby, quantified parenting, quantified family. Soon you can pretty much get insights into every aspect of your life: sports, learning, even creativity. People used to say, well, creativity, that's completely abstract. But it's not.
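For instance, a crude "creative fingerprint" can be as simple as counting which chord tends to follow which across a song catalog. The two songs below are made up, and a real system would parse actual recordings or lead sheets; this is only a sketch of the idea:

```python
from collections import Counter, defaultdict

# Hypothetical catalog: each song is reduced to its chord sequence.
catalog = [
    ["C", "G", "Am", "F", "C", "G", "F", "C"],
    ["Am", "F", "C", "G", "Am", "F", "C", "G"],
]

# Count chord-to-chord transitions across the whole catalog.
transitions = defaultdict(Counter)
for song in catalog:
    for prev, nxt in zip(song, song[1:]):
        transitions[prev][nxt] += 1

# The "fingerprint": the most likely next chord after each chord.
fingerprint = {chord: counts.most_common(1)[0][0]
               for chord, counts in transitions.items()}

print(fingerprint["C"])  # in this toy catalog, C is usually followed by G
```

Mashing up two artists would then mean blending two such transition tables; a real model would of course use far richer features than chord symbols.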
You can measure the output of your different creative habits, the time you put into them. As a musician myself, I'm waiting for the day when I can feed every song I've ever written into a machine learning algorithm. With enough song samples, it will know my favorite chord progressions, my preferences on cadences, key signatures, whatnot. It'll create almost a digital thumbprint of my musical style. And if you were to do the same with the Beatles catalog, then I could say, hey, take my fingerprint and mash it up with the Beatles. What would Tim times the Beatles look like? Spit out 100 copies of that, tell me which ones are interesting, and I will claim ownership of that, authorship of that.

So we are moving from this notion of quantified to AI- and machine-assisted. Imagine all the aspects of your life that could potentially have AI assistance, augmentation, and optimization: your health, travel, work, environment, even your spiritual life, your entertainment life. A few definitions here. How is this going to affect your life in the next few years? Remember we talked about chat, how everybody's doing messaging now? We now have the rise of chatbots, these agents. There were some screenshots of some of these things before, where you are conversing with these little digital agents helping you with things. And then there's true AI. But right now, we're in the world of chat apps. So within Facebook Messenger or iMessage or whatnot, we've got chat interfaces. A lot of the time these are human agents who are able to amplify the amount of work they do with these automated chat interfaces, much faster than being on the phone or writing manual emails. It allows human agents to support more customers 24/7. Next, we've got bots, which are a little more automated. They're kind of like automation of existing flowcharts. In a lot of jobs, you have standard tasks you do.
You have to follow a standard workflow. The more standardized it is, the more automatable it is by these bots. So there's a new term. A sci-fi author friend of mine, Rob Reid, said he thinks we're in the era of centaurs. I said, what do you mean, Rob? He said, well, a centaur is half horse, half human. What if it's humans plus AI, doing stuff better than what just humans could do before? That's kind of what we have today: humans augmented by AI to make better decisions or catch blind spots. My favorite example: my brother is a radiation oncologist at Stanford Hospital. There are now AI machine learning systems that can help radiologists spot errors in the way they assess radiology charts. So it's kind of like human decision support. Humans are really good at patterns, but sometimes we ignore edge cases or can't keep up with the latest learnings, and that's what AI can be good at. So it's not replacing the human radiologist; it's like a second-opinion machine for you.

And then, of course, where we go next is true AI. This combines things like natural language processing, natural language understanding, machine learning, and then the real mark of AI, deep learning. Deep learning is fascinating. This is what enabled the Google DeepDream algorithm to create its own images and do image recognition, and enabled AlphaGo to beat world Go champions. And what I think is a little bit scary about it is that, with the way some of these neural networks work, they learn things from vast pools of data, but we don't know why. So imagine a system that you can't explain. You can't trace how it got there. You can't work backwards and debug it. It just gets to a bunch of answers magically, and you don't know why. So this is something that is very scary for folks.
And thank goodness we have researchers working on debuggable neural nets and AI, but imagine someday a career known as AI forensics. Why did that self-driving car run over that cat? How did it get to that conclusion? Things like that, which today, honestly, for some of these systems we can't quite understand why they think the way they do. Just some resources for you, if you want to learn more: there are some great places like bot.watch or slackbotlist.com. The number of these AI algorithms and bots is growing so fast that it's literally hard to keep track of them.

So this is affecting your life right now, because it's something universal we've all seen before. How many of you have been through these lines, waiting, waiting, waiting? This is actually where a lot of these AI services are going to hit our lives every day. Here's an example of one of the first little chatbot apps, called Magic. Basically, by iMessage or text message, you'd text anything, saying, hey, I need a plane ticket. And it would literally walk through a bunch of steps to get you what you needed. A cheap flight, a better bag, a Christmas gift for your mom, whatever it was. The thing is, though, for this generation one, it was actually humans behind the scenes typing it. But nowadays, there's more and more artificial intelligence and automation going on. And the point is, with this concierge on Messenger, you don't know if you're talking to a human or an AI, or maybe it's both. We've even seen the first AI coaches and therapists that are as good as human coaches, clinically proven, and reimbursable by insurance. Lark is one of the first; it helps with diabetes management. And it's proven that this guardian angel in your pocket, messaging you throughout the day, is as effective as, if not more effective than, a human therapist you might see once in a while. So much so that it's reimbursable and prescribable now.
So what will every business need to be good at? Every business is going to have to know how to message you directly, instead of just mailing you things. It's going to have to know how to message with you over iMessage or WhatsApp for everything from customer support to sales to service. They're going to be collecting tons of proprietary data on you. They're going to know your location, payment info, history, identity, preferences, all those sorts of things. That's potentially a nightmare if it gets leaked or lost. These systems have to work seamlessly with other services and platforms through APIs. For example, they'll probably integrate with Google Maps, Gmail, Spotify, all sorts of other services. So soon machines are talking to machines automatically, talking about you and the things they could be doing for you. There's one saying in the world of artificial intelligence: context is king. The more you know about someone, the more value you have and can deliver, and it starts with user identity and understanding transaction history.

But what's interesting in the arms race of AI is that it's all about these proprietary algorithms and how to train them. This leads me to a really interesting realization I had. In the era of artificial intelligence, the new natural resources, the scarce resources, the new oil and gold, are actually proprietary data and AI talent. What I mean by that: AI is only as good as the data you train it on, and you usually need tons of data. Who has the biggest data pools in the world? Basically, there are five companies: Google, Amazon, Facebook, Microsoft, and Apple. In China, it's BAT: Baidu, Alibaba, Tencent. They have the biggest data pools. They're sort of the data equivalent of superpowers. Not even governments can catch up to the amounts of proprietary data they have, and it's almost impossible to replicate.
This forces what we jokingly call the slow kids, all the companies you know about, PG&E, Comcast, Time Warner, et cetera, to try to catch up. They're doing so by trying to buy startup companies. Every industry, healthcare, retailers, banks, et cetera, is struggling to keep up, because that amount of proprietary data is hard to catch up with. More importantly, if you have that data, you have to be able to mine it. That's the other part of this challenge. There are only six or seven universities in the world that turn out world-class AI talent, and every single one of those teams is getting snapped up by Google, Amazon, Facebook, et cetera. These are the new rock stars. If you have a PhD from a respected place like the University of Montreal under a rock-star professor like Yoshua Bengio, you probably command $2 million just to be acquired by a Facebook or an Apple. That's more than NBA player salaries. You can generally say that AI talent are the new rock stars out there. My biggest fear: what if power consolidates to these fewer than ten players around the world? The thing about the AI arms race is that, like the old saying, the rich get richer. Here, with the data and the talent, you exponentially start to outpace your competition.

Sidebar question: what do you suggest your kids major in for the post-AI world? Is it AI, data science? Those are the obvious ones. But it might be other things too: design, behavioral economics. Believe it or not, I'm going to steer my four-year-old into some side studies in things like philosophy and ethics, because AI is just a technology, a really powerful one; it's just a tool. And the only thing that matters with it is what intention and energy you bring to it. AI could be used for some amazing things. It could be used to destroy systems pretty quickly too. So the notion of AI ethics and data ethics will be a whole new hot field that's probably not even being taught in schools yet.
Where I'm hopeful this will help us, in terms of AI helping humanity: remember we talked about Lark and this notion of an angel on your shoulder? What if we had human coaches and AI coaches as your personal coach and concierge in all aspects of your life? And it would have data and dashboards that it could show you. Just like you might outsource your fitness life to a personal trainer, and maybe in time your food life to a personal nutritionist, why wouldn't we do the same for all the things that you really don't want to worry about: your financial life, maybe even your information diet?

Just a quick poll. Have you ever been down an Instagram rat hole, or a Facebook rat hole, where you're just scrolling and scrolling out of habit, and then 45 minutes later you realize you just spent 45 minutes on cat GIFs? And then you feel really bad about it, because you'll never get those 45 minutes back. This is something I worry a lot about, because traditional media, all the books in this library, have stopping cues. Stopping cues are things that tell us we're done: the end of a chapter, the end of a page, the end of a song, the end of a playlist. All the apps today are designed for addiction, infinite scroll, infinite session lengths, which drive more engagement so advertisers can be charged more. So we've taken away the stopping cues, and we've actually created a set of digital opiates. These are digital forms of drugs that are hacking our addiction systems. So we're going to need AI and technology to help us self-regulate against our own technologies.

Did any of you see this movie? This is Her, with Joaquin Phoenix and Scarlett Johansson. And this is one possibility for how this can go. Imagine the computer personalized to you, in your ear, helping you, guiding you, being your mirror, being your soulmate, helping advise you on all the different choices you have to make through your life. But why? Why does any of this stuff matter?
Why are we building this stuff? This really isn't anything new. Throughout human history, we've always tried to build systems and frameworks to understand one thing, which is really to understand ourselves. I think this is the world's second-oldest profession: the oracle, the fortune teller, the sage, the seer. Why am I here? What am I meant to do? What does this stuff mean? What's in these tea leaves? It's all about understanding yourself. Where I think we're going, and this is a bit of an out-there prediction, is that better machine intelligence will help us achieve better human intelligence, and maybe better humans. So this is a newer field we think of as transformative technology: could we wield AI to help make us more human, not less? This involves things like sensor tech, like wearables. Soon we'll have hearables, ingestibles, all sorts of things that give us data about ourselves. Data technologies: big data, real-time analytics on ourselves. Gene tech: CRISPR, where you can do gene editing and even hack our own biology. Chemical tech, neurotech, all sorts of technologies that could basically lead us into a brave new world. We will potentially see the rise of transhumans, basically folks with AI, almost like neural prosthetics, that can help us think about things. Are we headed towards a collective consciousness? If you've ever read Yuval Harari's Homo Deus, that's one of his predictions: that we're headed towards a mass networked, augmented human consciousness. There's also this notion that if everything is measured and everything is transparent, could the death of privacy be an ideal outcome? There is one saying: only bad stuff happens in secret. This will probably never truly happen, because the issue with privacy is that not everybody has equal access to data and openness. But if we did, it'd be kind of an interesting thing. We're also in the attention economy.
I think we're moving to the story economy, and I'll talk about that in a bit. My biggest fear is that we might have a new cybernetic divide. The first digital divide was the difference between who had access to the internet and who didn't. What if a certain set of early adopters gets access to AI and these augmentative technologies? What if they leapfrog other folks so quickly that it creates a whole new divide in terms of who has access to these transformative technologies? For some interesting previews of what this could look like, I mentioned Yuval Harari's Homo Deus. Ramez Naam wrote a wonderful book called Nexus. It's a sci-fi thriller, like the next Matrix, and a wonderful preview into what could be. Now, you've seen virtual reality and augmented reality. My hope is this could help us be more empathetic. If you could literally walk a mile in someone else's shoes, would you feel more empathy towards them? The flip side: if you could tune your reality and make everything look the way you want, why would you ever get out of it? So there could be a whole new future where we need VR addiction clinics. There's always a white cloud and a dark cloud to how these technologies could be used.

So what's the point of all this? Did you ever hear the parable of the million typewriting monkeys? If you put a million monkeys in front of a million typewriters and let them type for a million years, eventually they will come up with all the works of Shakespeare. It actually ties to one of my favorite stories, and because we're in a library, I had to share this with you. Jorge Luis Borges wrote a short story called The Library of Babel. Wonderful story; you can google it. In it, imagine an infinite library of identical chambers, each with an identical number of shelves and an identical number of books, each book with the same number of pages.
In this library, there's one book whose contents are nothing but A, A, A, A, A, A, A, A. The next book would be A, A, A, A, A, A, A, B. On down the line, until there's one book that says Z, Z, Z, Z, Z, Z. Contained in this library is every possible combination of all letters, period. Therefore, it contains every possible story in every possible language. It is the possibility space of all possible scenarios. One could argue the Big Bang is nothing more than the universe trying to map out its own Library of Babel, mapping out every possible combination of planets, stars, and configurations to see what kind of life comes out of it. Similarly, the Cambrian explosion of life was every possible combination of DNA combining to see which one percent would crawl out of the sludge, right? It was running every possible scenario.

So, something I think a lot about: the most beautiful success stories often begin with a lot of experimentation, pain, and failure. You know how pearls are made, right? Basically, oysters have this irritation their whole life, and they deal with it by forming a protective coating and polishing it shiny, creating something really beautiful around it. It's very similar to a lot of great, wonderful human success stories: they face tremendous trauma or challenge and find a superpower in response to it. What I think is really going on is that we've got human storytelling, which is us in our lives facing these traumas and challenges and finding our own superpowers through them. But now we've got the data to capture our stories, connect them with other stories, and learn from each other. It used to be that your whole life, back in the day, was summed up in one eulogy. You would say you were born in 1832, you died in 1874, you had two kids, and you did this. Now, remember we talked about that data footprint, that data selfie? You could relive and know every single moment of every action you did.
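As an aside, the Library of Babel described above is, mechanically, just exhaustive enumeration of a combinatorial space. A miniature version, using a two-letter alphabet and three-letter "books" instead of a full alphabet and full-length volumes, shows both the ordering and why the real library would be unimaginably large:

```python
from itertools import product

alphabet = "AB"   # Borges uses a full alphabet; two letters keep this tiny
book_length = 3   # and real books are rather longer than three characters

# Enumerate every possible book, in the A,A,A ... B,B,B order
# described above.
books = ["".join(letters) for letters in product(alphabet, repeat=book_length)]

print(books[0])    # the all-A book: "AAA"
print(books[-1])   # the last book:  "BBB"
print(len(books))  # 2**3 = 8; a 26-letter alphabet already gives 26**3 = 17,576
```

The count grows as alphabet size to the power of book length, which is why even modest books make the full library astronomically bigger than the universe.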
You'd have the sum totality of a person's behavior captured in data, the ultimate data portrait of that person, the complete story. You'd be able to search it, cross-tabulate it, see who was similar and what the differences were. In other words, you're kind of mapping out the Library of Babel, right? So I have this more abstract notion that maybe mankind plus machine is God, where my definition of God could be globally omniscient data. Or, as one saying I've heard goes, the difference between man and God is that God has perfect data. With AI and all these tools, we're getting towards more and more transparent, complete data. What happens when we do reach that state, where we have more of this perfect data? Maybe AI is helping us accelerate the mapping out of human stories to build out this Library of Babel. This is something we've never had before, and it's a wonderfully exciting time. But I think these questions are more important than ever, because these changes happen faster than expected. And when they do, we sometimes can't recognize the changes that come until it's too late.

So this is a lot of what I spend my time thinking about. And it gets at the notion of why we are building the technologies we do. I used to not think about that, because the answer used to be: to make money, to build businesses. But the truth is, if we're creating technology, and the nature of technology is to automate and take away jobs, are we also building technologies and companies to create new work, to help people level up, to retrain them? And as up to 45% of jobs get taken away, what is it that humans will do? The prior speaker said it: people don't exist just for a paycheck. We need a sense of meaning. And so that deeper meaning is one of the big unanswered questions that we need to wrestle with now. So I'll sum it up with those thoughts. I hope that was helpful as a view of what's going on with AI.
So, yeah, I guess there's a short Q&A session. I would love to get your reactions.

How does one keep up with all the new things that keep coming out in exponential form?

I don't know. One might say you need an AI just to keep up with it. And actually, believe it or not, that is one of the potential advantages. I talked to my brother, like I said, who's a doctor. And he said no human doctor can possibly read all the new medical journal publications. If you had a bot reading those and helping you summarize them, it could be what we think of as a neural prosthetic. What if it's learning for me and tapping me on the shoulder, decision support? So the next time he goes to read a chart, it might say, yeah, that does look like that, but this new publication just came out which says it could look like this, too. So it could be that, as data grows faster than humans can keep up with, we need data tools to keep up with the data.

Just to follow up on that: I've always dreamed about being able to put on a headset, go to sleep, and wake up knowing a foreign language, let's say. Something like that.

Yeah. There are more and more technologies like that, using binaural beats and all sorts of things while you sleep to train the brain, and we're starting to get more science behind it. It's a very promising area. Thank you.

Hi. A little while ago, Microsoft had that bot that came out, and within 24 hours it went from being this sort of naive thing to this horribly racist, homophobic bot. And they discovered fairly quickly, as you've pointed out, that the artificial intelligence is only as good as the information that goes into it.

Exactly.

So how, in this new age of artificial intelligence, do you prevent that type of event from occurring, when you can't necessarily foresee that type of thing?

I love this question. This is the thing I'm thinking about the most these days.
Remember back before food companies revealed their nutritional ingredients? We needed the FDA and others to help regulate that. We will soon need some sort of regulation or transparency around data, because if you feed AI the wrong data, you can get it to do all sorts of crazy things. There will even be a new form of malignant attack called data hacks: you could purposely feed the wrong data to an algorithm to get it to whatever conclusion you want. And that's a whole new field. People aren't even thinking about it; security companies haven't even recognized it. So we'll need a combination of regulatory bodies that understand this stuff, and we'll need industry to be more transparent about it. I think someday somebody needs to write the equivalent of a data bill of rights. So, for example, if my life is judged by some black-box AI or algorithm, I deserve the right to know how that AI works. And if you use some neural net and you can't tell me how it works because it's too complicated, that's not good enough. Similarly, data itself is never unbiased. The way it's collected, the way it's collated, the way it's cleaned up always has inherent bias. So we'll also need notions of data transparency and data quality.

The other issue I have with artificial intelligence is, for example, when you use the Google search engine, gradually it just starts feeding you the things it thinks you want to know, to the exclusion of things you might need to know. And I feel it's now become very difficult to see the things that I need to know, because it doesn't want to serve them to me. How do we get beyond that, when obviously all of these entities want to feed us things we'd be interested in, to try to get us to buy things?

Exactly. And that might come back to the regulatory part. Left to their own devices, what do Facebook, Google, et cetera, want? They want more money.
They want to feed you more things they think you want, right, to get more transactions. But that excludes all the things that you might want to be looking at. More importantly, they're not showing you how they score you. So I have this notion of algorithmic transparency. I would love for Google to be able to show me: how do you score me? What does my data model look like? And may I have some input into that, please? May I retune this so I don't just see cat ads, or whatever it is? So I think, as a population, we need to start pushing for more transparency into the data sets about ourselves, the algorithms, and what's being collected on us.

Hi. I totally agree with what you just said about transparency and openness around this stuff. But rather than saying it's the government's job to regulate this, I'm interested, coming from a VC, in what role VCs can play in the way they invest: saying, we're not going to invest in you unless you subscribe to this bill of rights, or whatever it might be.

It's something we're becoming more mindful of these days. And I actually do believe, ultimately, there is karma in business. Maybe you could say Uber is an example of that. Something I've learned as a VC is that a startup company is really only an extension of its founders' values, integrity, and personality. So as board members, it's our job to keep track of that and make sure the intentions are in the right place. And just as manufacturing companies all have ISO standards and things like that, we are going to need more of these ethics checks as well. So something I'm trying to do more and more, even as a VC myself, is to ask questions like: I observe X, Y, and Z that the company is doing; what's the intention behind that? Because a lot of times it comes down to intention, right?
There will probably be more scrutiny of terms of service, too: are you being fair with your users and explaining what you're doing? So that's going to be the tension. Making money is usually about proprietary data and closing things off, but earning trust, fairness, and integrity is about openness and transparency. That fundamental tension is something founders, as well as board directors and investors, have to be more mindful of.

So with the rise of AI, and I wanted to take this in a different direction, how do you think that plays out on the global scale, as far as military matters and warfare? We're entering an age where the States, Europe, all these countries have their sources of power, their networks, connected to their tanks and machines. How does this advancement in AI affect the potential risk of weaponizing it, of deploying artificial intelligences that can constantly hack networks more efficiently than any programmer ever could? How do you think that actually plays out in five or ten years?

That's a whole new dimension of cyber warfare, for one. There are two things I've been thinking about. One is, as we talked about with the Internet of Things, all these devices getting connected, whether it's tanks or missiles or blenders, are actually a lot more vulnerable than people think. I've got some friends who work at Underwriters Laboratories; they're supposed to certify consumer devices. Do you know most connected cars are completely unsecured and unprotected? You could basically hijack a lot of the data feeds on connected cars. You'd think the military would be better about some of these things.
But a lot of the suppliers to jet planes and the like are often commercial-grade too, so there are probably going to be a lot of vulnerabilities that can be exploited even in military-grade equipment. So you've got hacker risk there. The second piece I've been thinking a lot about is that we're going to have all new kinds of data hackers and, basically, false data pools. We've kind of seen it with fake news already, right? So imagine: what if future elections were won by charismatic public figures armed with mass AI segmentation engines like Cambridge Analytica or Pinpoint Predictive, backed by shadow billionaires like the Mercers? You could basically steer public opinion by how you segment people and feed them the exact news story you want. Many voters are single-issue voters. What if I knew that at scale, and could promise every voter the one thing they wanted to hear, at scale, over channels like Facebook Messenger? So that's yet another type of digital warfare: I don't want to call it brainwashing, but you could steer the perceptions of an entire population and turn them against something or other, even through social media. That's yet another form of weaponizable information. There's a wonderful book called Weapons of Math Destruction that's worth reading; it outlines what could happen there. Thank you.

Hello. I'm just thinking in terms of: what is the role and power of average citizens in this brave new world? As a statistician, I worry that statistical literacy is really poor, and we have a saying that not everything that can be counted counts, and not everything that counts can be counted. You're talking about data literacy, and something even bigger than that. That's fine when you're talking about corporations and big institutions, but what happens to average people?

It's something I worry a lot about.
Some of these nuances are so esoteric that even people in the field don't get them, right? How would you explain this to average people, especially when you have things like fake news and things that can be hacked as well? Something I once heard is that when people are presented with so much nuance and complexity, it's just easier not to believe or try; it's easier to tune it out. Which is why some just resort to: we're in a post-fact society, that's it, I just want to hear from the people I want to hear from. I don't want to believe that, but I know there will be some folks for whom, faced with all these technologies and numbers and stats and nuances, it's just too much. So I'm wondering whether we'll need non-profits that represent AI technology on behalf of people who don't have access to it. There could be whole new NGOs formed for that, because we've got to democratize access to this stuff too. But I'm wrestling with that one myself. I don't know how I'll teach my four-year-old about this stuff someday.

I have a follow-up question on that. Because this is Digital Inclusion Week, I have to ask: do you think AI contributes to the digital divide, as people who have less access to the technology get worse off? Or does it contribute to digital inclusion, because, for example, people who don't have the tech skills to type can use audio systems to navigate a website or their cell phone? What's your take on that?

It's both, because again, AI is just technology, and technology is just a tool; what matters is the intentions we use it for. There are examples where technologies enable people to do things they couldn't before.
There are a lot of workers in third-world countries who now do work by SMS on old, handed-down feature phones; that created employment they never could have had before. My hope is that we will have equivalents of that in the AI world too. I'll give you a quick example. One new type of future job in the AI economy, believe it or not, is data preparation and data cleanup, basically tag review. What I mean by that is: remember how we said AI is only as good as the data you train it on? Most of the work in AI is actually just processing the data, making sure it's properly collated, that it's not repetitive, that it's classified correctly. It's almost like a janitorial job; we're basically data janitors, right? And then the other kind of job: AI is just making a bunch of guesses, and somebody has to grade the guesses. So we're actually going to be doing judgment calls, quality assurance, for a lot of these things. That is one potential type of new work. But I think it comes down, again, to our intentions as researchers, technologists, investors, and business growers. We've got to keep thinking about whether this benefits society, net-net; otherwise power is just going to keep accruing to these major platforms.

I have a question. Obviously, for all the reasons you went through, there are lots of reasons for VC investment in the development of AI from a corporate standpoint. But in the AI industry there are concerns that, as AI gets better and better, one day it'll be as smart as a human, and there's nothing stopping it from becoming infinitely smarter than anyone. Is there any VC interest in looking that far into the future, at what's controlling or potentially containing superhuman AI?

Yeah, it is more and more a thing we're looking at. Until now, VC interest has been: how do we use AI to make human workers faster and smarter? We're getting to the point now where there are certain
types of jobs that are completely automatable, starting with customer support, and then analysts who look through logs; any kind of rote, routine job is automatable. Then we get to the next level: when does AI come up with conclusions humans could not have? That's when you have things like these feed-forward neural networks. If you look up Google's DeepDream algorithm, it's creating imagery no human could possibly have created before; it's almost like watching computers dream. So we're now entering an era where computers and AI are coming up with solutions and formulations no human would ever have thought of, and that could unlock things. But it's almost like encountering an alien species, because when AI hits these new levels of intelligence, it won't be the same as human intelligence; it might not be an intelligence we even recognize. Remember, because these neural nets train themselves on big pools of data, they reach conclusions that are sometimes stunning, and we don't know how. So that's the thing we're wondering about: if we don't understand them but can still utilize them, there's danger in that too.

Yeah, just to clarify, as the next step to that: if the assumptions or conclusions the AI comes to are detrimental to themselves, to other AI systems, or even to us, is anyone looking at VC investment into solutions, preparing for, or even considering the potential for that situation?

Yeah, it's something we're thinking more about, thank goodness. Certain thought leaders like Elon Musk and Jeff Bezos have been pushing for open AI consortiums, because one of the best ways to combat the interests of just private groups is more open source, more transparency, making things publicly available. We'd like to see more of that continue as well.

Hi, I was just wondering your opinion on social enterprises. You mentioned that people are looking for
a mission, and that entrepreneurs with missions are actually solving real problems, not just going after profit. What's your opinion as a VC on that, and where do you see your fund in the future? I see people saying that maybe every VC fund in the world will someday be a social-enterprise VC, so I'm just wondering your opinion on that.

We are starting to see that. There are more investment groups that have a double-bottom-line or positive-impact focus, like DBL Partners, Omidyar Network, and others that blend in the social-impact part. It is something I think about too now: is this a business that also does good? And what do I measure good by? Does it create jobs? Does it bring net happiness to folks? Is it helping people? That sort of thing. I think it's a great question too, because if AI helps us reach an era of abundance, then we can have things like universal basic income. If that frees us up from just having to get through survival, maybe we have more bandwidth to focus on our passions, what we might be better at, and maybe, more importantly, how we help each other. So that's my hope: can AI help us offload the monotony of just surviving through the day? If it does, it frees you up to pursue more things. I mentioned one scenario before, post-UBI. In one scenario, did you see WALL-E, everyone's living in the space cruiser, basically bored all the time. Or is it like ancient Greece, where there was some form of UBI and people were freed to focus on advances in philosophy, art, and athletics? That's the paradise scenario, and I'd like to believe in that one.

We live in a country where we can't get basic health care passed, so the idea of a universal basic income just sounds like a pie-in-the-sky thing. With that in mind, for that large number of people who aren't tech savvy but still need a paycheck, what jobs
are going to be left over after artificial intelligence takes away the jobs that the non-tech-savvy can occupy?

We're still believers that health care, and jobs where humans are helping and supporting each other, will still be quite in demand. We're seeing one shift even with autonomous cars: you'll still often need a human there, because they might not be driving, but they'll serve as a concierge for other things, whether it's delivering services, greeting people, or just being a friendly face. It's a really, really tough question, and I'm still wondering about it a lot, because there was a TED talk recently, I just saw it, that said when horses were the key unit of labor, there were 22 million horses worldwide at peak population; now there are three million, because we just don't need them anymore. So there is that question, and it's very haunting: will we someday not need as many human beings on the planet? Potentially. There's one argument that says as you increase quality of life, people naturally stop having as many kids, because there's just more and better stuff to do. We've seen that in countries like Japan and others. That's one possible argument for how the population-growth question might take care of itself, if we improve quality of life for everybody universally. But I do agree with you: I'm really worried about access to these things for everyone. The problem with technology is that it's never universally distributed. The author William Gibson once said the future is already here, it's just not evenly distributed, meaning certain people get access to it first. So it takes an active effort to democratize it. That's our job here; we create a lot of this stuff, so we need to always focus on how it helps the Rust Belt, how it helps people in Detroit as well. But it requires us being mindful about this, instead of just focusing on making money.

All right, everybody, can we have a hand for Tim Chang, please?