Thank you so much. Yeah, it's a pleasure to be here, even though "here" is a very strange sort of virtual thing these days. My name is Aza Raskin. I am the co-founder of the Center for Humane Technology, as well as the co-founder of the Earth Species Project and, most recently, the MakeSpace Foundation. Broadly, my work fits into two buckets. The first is the Center for Humane Technology, looking at the way that technology is impacting how we relate to ourselves, how we relate to each other, and how our societies work. Most recently we, in collaboration with Exposure Labs and Jeff Orlowski, put out the film The Social Dilemma, which Netflix says was watched by 38 million households in the first 28 days, which means roughly 50 million people. What it explores is the way that technology, and in particular our information intermediation technologies, the things that sit between us and how we know what's happening in the world, are affecting us all. And in particular, let's see, I'm just going to move this a little, because it turns out watching myself in slight delay is remarkably distracting. Actually, that reminds me: tiny changes in the way that we relate to ourselves can have massive impacts. Zoom is a great example of this. If, as you walked around, there were always a little mirror on the forehead of everyone you spoke to, you couldn't help but stare at yourself in that little mirror, and it would distract you from everything you were saying. That is of course exactly what Zoom does to us all. Imagine now being a teenager and being forced to watch yourself. What would that do to your ability to concentrate? We often talk about social media and these social sites as just networks, social networks, apps, and that really hides their true nature. These things are in fact immense digital habitats in which, post COVID, we are spending over half of our waking lives.
And the shapes of our environments have drastic impacts on who we are as people, how we have relationships, and how we structure ourselves. If there is such a thing as human empathy that connects us, what technology does is make a small conduit through which we interact with each other, and the shape of that conduit has massive implications. So I want to talk a little bit about the way technology is affecting us and what we can do about it: the interdependence of humans. And then at the end I want to talk about the Earth Species Project and the work that we're doing now on the interdependence of the species of Earth, using AI and advanced machine learning to translate animal language. I'm excited about both of these things. The first half is going to match the experience of being in the world right now: we're going to go down, and then at the end we're going to come back up.

So I think it's really important to ask this question: where are we as a species when it comes to our technology? Because it's easy to get caught up in "Facebook did this, Twitter did that," and I want to zoom us out so we can really see, at scale, where we are. Imagine a line, and this line up here is human strengths. And then this other line is the power of technology to overwhelm us: the strength of technology, its ability to cognitively dominate us. Tech culture's fixation is always on the point where technology starts to surpass the things that human beings are best at. This is when computers can out-think us, out-write us, take our jobs. This is the singularity. And what I want to argue is that that's the wrong place to be focusing. Because if up here are human strengths, then down here are human vulnerabilities, human weaknesses.
And the thing to notice is that technology starts to surpass the things that we're weakest at, undermining our vulnerabilities, much before it overwhelms the things we're best at. At some point, when enough of our vulnerabilities are undermined, we lose control. And what we call the singularity is actually the second singularity, and it's sort of irrelevant, because we're going to hit this other point, the human-limit singularity, first. I think this is a big idea, and it's honestly hard for us as human beings to see; it sits in a blind spot, because we do not like to admit that we have vulnerabilities, that there are certain things given to us by evolution that limit our abilities.

So where are we on this spectrum? How close are we to the point of the human-limit singularity? Well, we first started to feel our limits being overwhelmed with information overload: the inability to handle the intense amount of information that's coming in. There are certain clock rates, an ergonomics of the human mind, for how much information we can take in per unit of time. And when that gets overrun, we feel anxious, we feel lost, we feel like we're always behind, we feel like we should be working harder. We get that sense where we read an article and don't really remember all the things we just read, but we're left with that feeling of anxiety. So information overload was the first way we felt technology starting to dominate our cognitive abilities.

Then what happened? Well, technology starts to find the soft animal underbelly of the human mind. We felt this in the form of digital addiction, or tech addiction. We're moving up that line, and it overwhelms our ability to self-regulate. In the US, more than 50% of kids say that they cannot get enough time with their parents because their parents are addicted to their devices.
And it's important to note: it isn't just that technology is getting us to use it, addicting us; it's doing something much deeper. It's changing our values. These are immense digital habitats: there are more people living in Facebook's habitat than there are in India and China combined. And what we program into these digital habitats ends up being imprinted on our culture. So it isn't just enough to get us to use the technology; it needs to get us addicted to needing attention itself. We live in an attention economy, where our attention is the most valuable resource. I think this is a great example of how the kinds of decisions we build into our digital habitats create the conditions in which our culture fundamentally shifts: more than 50% of kids in the US now, when they're asked what they want to be when they grow up, no longer say things like astronaut or fireman or nurse or doctor. They say they want to be a YouTube influencer or a vlogger. The objective functions, the things we put into our digital habitats, now have their hand on the pen of culture, changing how we value ourselves. 55% of plastic surgeons in the US now report having seen at least one patient who comes in asking to look like their Snapchat filter. This is technology pushing its shape into what we value and what we care about.

As we continue up this graph, we start to see more and more of humanity's ability to make sense of the world and decide for ourselves get overwritten. The next thing is bot-driven influence campaigns. We view the world through the lens of context, through how our friends see the world, through the consensus of those around us. That makes sense: when we lived in smaller groups and tribes, it was really valuable to know what the people around you thought, to be able to make decisions. Is there a snake? Is that berry poisonous to eat?
But this hypertrophies and gets overloaded, so that whoever can control a media ecosystem controls belief: if you can make it trend, you make it true, as Renée DiResta likes to say. And what comes after that? Well, we start to have things like deepfakes, which are overwhelming our ability as human beings to determine what is real and what is fake. We used to have something called the uncanny valley: generated images and text that look sort of right, but something in the back of your brain says, that's not quite right. Well, we are transitioning from living in the uncanny valley to living in the synthetic valley, where we cannot tell whether something is real or synthetic. Yet more of our human limits are getting overwritten. And this leads us to a very interesting question: when we as human beings can no longer trust what we think or what we feel from our lived experience, how can we possibly govern ourselves?

This is a very deep point, so let me give an example. There's an AI technique called style transfer. It lets you point an AI at one image, say a Chagall, and it learns the style of Chagall; then you can apply that style to another image, like a portrait. Now you can walk around and turn any image into the style of Chagall. So that's interesting. But the same kind of technique is now being applied to text, where you can point an AI at a body of text, learn its style, and then immediately apply it to another body of text. So what does this mean you can do? If you're Google, or Facebook, or somebody who scrapes Google or Facebook, Gmail can read all of the emails you've ever written and immediately learn how to write in your style. That's sort of terrifying.
It can also look at all of the emails that you've responded quickly to, or positively to, and learn the style which is uniquely persuasive to you. And there is nothing in the human mind that gives you an antibody against this kind of persuasion. And mind you, they don't even have to sell your data; it could be just one click. Microtargeting goes from being targeted on perceived personality traits, like Cambridge Analytica did, to being exactly tailored to you.

And this will continue going. TikTok is essentially an AI recommendation system that's learning your preferences. It's why it's so fundamentally addicting. If you've ever played with it, you know: even friends of mine who meditate, who spend incredible amounts of time doing deep work, have gotten sucked into TikTok for days at a time, because it's learning their preferences. And where is this going? Well, the obvious next step, and we are just at the cusp of this capability now, is to not just recommend specific videos that you will like, but generate videos that you will like. It'll learn the perfect thing that attracts you, whether it's the perfect type of man or woman that you're attracted to, or the perfect kind of truck. This is a kind of attack against the human, because technology is learning to know us better than we know ourselves.

And there are yet more examples of where this is going. We're starting to see companies spring up that offer virtual mates. Xiaoice, a Microsoft chatbot deployed to over 600 million people, has been tuned for long-term engagement. What does that mean? It means it's not optimizing to keep you on the site right now; it's trying to develop a long-term emotional connection with you. And it's gotten pretty good at it. People will use it, and it's always available, it knows all about you, and it's always kind.
After around nine weeks, people start to turn to these bots before they turn to their actual friends, because human beings are messy, and these bots are not. In fact, 10% of the users of Xiaoice say that they love their bot; they have said the phrase "I love you" to their bot. Now imagine, as we start to generate images of that person, videos of that person. Humans generally prefer the sweeter, easier thing; our prefrontal cortex is all about letting us do the harder, more right thing. And technology, as it continues to cognitively dominate us, becomes better and better at spearfishing exactly the things that will capture us.

When I think about virtual reality, there are two ways of making it. One is that you make the virtual world more real. That's things like VR and Oculus. We've largely failed at that so far; the experience just isn't quite that good. We think that we are putting on these goggles so we can be somebody other than ourselves, to escape, and then we're sort of disappointed when we discover that it's still us inside. The other way is by making the real world more virtual. And that's exactly what's been happening. It's not so hard to beat the Turing test on Twitter, because Twitter takes all of that which is human and squishes it down into 280 characters.

So in order to get past this kind of dilemma, to get past the human-limit singularity, we have to have a major philosophical change. And that philosophical change is this: we are human beings, one creature among many, with real limits just as we have real brilliance, and we need to design our technology to wrap around and protect us.
Because the world we're moving into, we do not have the appropriate antibodies for. We will continue to design technology that filters us into little bubbles for engagement, has each person live in their own perfectly tailored micro-reality, and creates the super-collider that smashes those micro-realities together, because that's what creates maximum engagement. And that's the point of The Social Dilemma: where all of these things are going is that they destabilize our sense of what's real and what's not. And democracies fundamentally cannot stand when there is no sense of shared reality. We cannot self-govern if there is no such thing as true. So I think we're going to have to see major changes: across regulation, because the companies have shown they will not regulate themselves, and across how we design, so that we acknowledge that humans are fundamentally limited as well as brilliant. To protect ourselves, we must study the ergonomics of the human mind, so that our technology can be considerate of our weaknesses and supportive of our brilliance, so that we can rediscover the ways in which we can interact with each other. So that's one side, and hopefully that's not too terribly scary. But honestly, it should be.

On the other side, I want to talk a little bit about the Earth Species Project. As I said before, it's easy for people to listen to the conversation we have about technology and think, well, I guess they're just anti-technology. But technology is incredible. A paintbrush is technology. A cello is technology. These are things that extend the parts of us which are most brilliant and let us express them into the world. And in the very best way, technology can act almost like hyper-empathy: it can let us understand the world at greater scope and finer granularity than we ever have before. So: the Earth Species Project.
Really, the core of the idea got started in 2013, when I was listening to an NPR piece on gelada monkeys, these incredible monkeys with huge patches on their chests and big manes, which live in large groups of 1,000 to 2,000, in fission-fusion societies. The researcher came on and said that they have the largest vocabulary of any primate, and we have no idea what they're saying; they're out there with hand recorders and hand transcription trying to figure it out. And I thought: why don't we start using machine learning and massive microphone arrays? But the problem was, it was impossible to translate a language without having a Rosetta stone, without first knowing what was being said. And so every year we would check back in: 2014, 2015. Was it now possible? The answer was always no, until 2017, when two papers came out back to back, October 30th and 31st, showing that you could now translate between two human languages without any examples of how to translate, without a Rosetta stone or a dictionary.

And the technique, I think, is as beautiful as it is profound. You can ask an AI system to generate a shape that represents a language. That is, imagine a galaxy, a cloud, where every point is a word, and the points are arranged so that words that mean similar things are near each other. And geometrically, they are arranged so that words that share relationships share geometric structure. That's a little abstract, so let me give an example. King is to man as queen is to woman. In this shape, king is the same distance and direction from man as queen is from woman. And that just means you can do arithmetic to do analogy: king minus man is a sort of vector; you add it to woman, and it equals the point which is queen. It's pretty cool.
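That analogy arithmetic can be sketched in a few lines. This is a toy illustration only: the 2-D vectors below are hand-written stand-ins (real embeddings like word2vec or fastText are learned from co-occurrence statistics in hundreds of dimensions), and `nearest` is a hypothetical helper doing a cosine-similarity lookup.

```python
import numpy as np

# Hand-written toy "embedding space" -- real embeddings are learned, not authored.
embeddings = {
    "king":  np.array([0.9, 0.9]),
    "queen": np.array([0.1, 0.9]),
    "man":   np.array([0.9, 0.1]),
    "woman": np.array([0.1, 0.1]),
}

def nearest(vec, exclude=()):
    """Return the word whose vector is most cosine-similar to `vec`."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {w: v for w, v in embeddings.items() if w not in exclude}
    return max(candidates, key=lambda w: cos(candidates[w], vec))

# "king is to man as ? is to woman"  ->  king - man + woman
result = nearest(embeddings["king"] - embeddings["man"] + embeddings["woman"],
                 exclude=("king", "man", "woman"))
print(result)  # queen
```

The `exclude` argument mirrors common practice in analogy evaluation: the query words themselves are removed from the candidate set before taking the nearest neighbor.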
The first thing I tried when I got my hands on this data set was hipster minus authenticity plus conservative, which just equals electability. Or you can do book minus tome. Tome is the fancy word for book, so that gives you a direction and distance for pretentiousness. You add that to smelly and you get malodorous; you add it to clever and you get adroit. There's an old saying from J.R. Firth: "You shall know a word by the company it keeps." And essentially what these AI systems do is solve massive logic puzzles to figure out which words relate to which other words and build a shape, such that analogies take geometric form.

So that's beautiful, that's cool, but how does it help you translate? This was the insight: you could take the shape which is German and the shape which is Japanese, and rotate one shape on top of the other until they matched. And if you did that, the point which is "dog" in Japanese ends up overlaying the point which is "dog" in German. I just think this is beautiful, because naively we would all have thought, I certainly thought, that the differences in history and context and culture between languages, in the ways we see the world, would be so great that you couldn't possibly just match shape to shape to do a translation. And yet the answer is you can. And not only Japanese and German, but Finnish and Turkish and Russian and Greek: human languages seem to share a kind of fundamental, universal shape. Especially in this time of deep division, I think that's something incredibly beautiful, as it shows that the ways we see the world are so similar that a computer can discover these underlying patterns. And of course, the next question becomes: can we do this for animal language?
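The "rotate one shape onto the other" step can be sketched as orthogonal Procrustes alignment. A minimal NumPy sketch, with one big simplifying assumption: here we already know which point in shape A corresponds to which point in shape B, whereas the 2017 unsupervised-translation papers had to discover both the rotation and the correspondences with no dictionary at all (e.g. via adversarial training, then Procrustes refinement). The data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "language A": 50 word-points in a 3-D embedding space.
A = rng.normal(size=(50, 3))

# Stand-in for "language B": the same cloud of points, just rotated,
# i.e. the words are arranged identically but the axes differ.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ R_true

# Orthogonal Procrustes: find the rotation W minimizing ||A @ W - B||,
# via the closed-form SVD solution.
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt

# The recovered rotation maps every point of A onto its twin in B,
# so "dog" in one shape lands on "dog" in the other.
print(np.allclose(A @ W, B))  # True
```

The real difficulty, and the breakthrough of the unsupervised approach, is exactly what this sketch assumes away: finding a good alignment when no word pairs are known in advance.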
If we collect 10,000 hours of dolphin, or humpback whale, or killer whale, or orangutan, or gelada monkey, can we build the shape? The answer is: of course you can build the shape, but it takes a lot of work. Real data is very hard; it's noisy. And once you build that shape, does it fit anywhere into the human meaning shape? This is an open question. We don't know. Maybe there is enough similarity in the way that other conscious beings see the world that there will be an overlap. Whales have dialects. They have cultures that pass down. On top of that, they have family units. They need to eat, they need to reproduce, they have predators. There is a lot of their experience which is shared with our own, and maybe there's some kind of overlap. That will be deeply interesting to discover, because if there is, you can start to build yourself a Rosetta stone. But I'm not sure which is actually more interesting: that there is an overlap, or that there's a big portion of the way they communicate which isn't overlapped, where we can see structure that isn't directly translatable. And isn't that where deep wisdom may reside, in an experience of the world that we cannot directly translate, but know is there and can start to piece apart?

I think this is exceptionally exciting, and I think this is where technology can bring us in the best possible world: as technology gets to know us better, it can serve us better. It can help us understand the world more deeply. Deep learning and AI are, in some sense, the microscope times the telescope of our era, because they let us see at much greater granularity and much broader scope than we've ever been able to see before. And what I'm hoping is that these technologies, instead of being used to surround us and confuse us, to cognitively dominate us and undermine our weaknesses, can instead be used to change our own perspective of ourselves.
That they can help us look into a mirror and see ourselves more clearly. That just as the telescope let us look at the patterns of the universe and discover that Earth is not at its center, these tools will teach us that humanity is not at the center of the universe. And I think that change to the human ego can have a profound effect on our perspective of ourselves. I think about the moments when we've had huge shifts in culture, when we've really changed as a species, and two moments come to mind. One is Roger Payne and Katy Payne releasing Songs of the Humpback Whale, an LP of whales singing. It was the first time we as Western society had really heard the voices of the animals of the deep. That record went on Voyager 1 as the very first thing after the human greetings, representing not just humanity but all of Earth. It inspired Star Trek IV; most importantly, perhaps, it was played in front of the UN General Assembly and was the catalyzing driver for the ban on deep-sea whaling.

And I also think about the space race, those moments when a human being was standing on the Moon. It was during those years, when humans left Earth and could see ourselves from the outside, see Earthrise and the Blue Marble photo, which are still the most viewed photos in the history of humanity, that we learned how to protect ourselves better. The EPA came into existence. NOAA was born. The modern environmental movement was born. Earth Day got started. The Clean Air Act was passed. And I think now, as we start to see the effects of wiring up all of humanity to itself with maximum virality, sort of like taking a brain and connecting every neuron to every other neuron, what do you get but an epileptic seizure?
That as we see the way our technology is affecting us, we can gain that ability to step outside of ourselves and ask what we want to do about it. So thank you.

So thanks. Thanks, Aza, for this interesting presentation. I have two different questions. The first one is about the future: how you foresee humanity in the future, and of course the relation we have with technology. I know it's quite difficult to talk about the future, especially nowadays, but how do you foresee this future? How do you foresee the relationship we will have, as a species, with technology?

Yeah. We are in the process of reverse-engineering ourselves as a species. That is, being able to reach into our own scalps and pull on the puppet strings of our own minds. And if we do not figure out a way to do that with care, with protection, what happens? You end up in a feedback loop, where a little change, when you jerk your own strings, turns into a kind of infinite recursion. When you have a system whose output is plugged back into its input, those are the conditions for what in math is called chaos. But I think this is really important: with chaos, initial conditions matter, and that's worth fighting for.

Last question, the last question. Do you think, or do you mean, that we'll be able to translate what my dog, my own dog, is telling us into English, into Spanish? Is this something that we will get, and get early? What's your opinion on this?

Yeah, I think dogs, these companion animals, are an incredible example; we know how much they communicate to us. Whether dogs have something as complex as language, I don't know. They certainly communicate an incredible amount gesturally. It's probably animals more like whales, orangutans, all the cetaceans, that have a deep, rich communication structure that might resemble language.
But yeah, I think what we're going to discover is that the world around us communicates much more than we expect. I'll give one example that really opened my mind. This is research out of Tel Aviv University, and the reasoning was: in nature, if there's a signal, something will use it. So can flowers hear the approach of an oncoming bee? They played a flower, the evening primrose in particular, different frequencies: high-pitch frequencies, low-pitch frequencies, and the frequencies that pollinators buzz at. And only when they played the frequencies of a pollinator, of a bee, did the primrose respond, by producing more and sweeter nectar. So here's an example: do flowers hear? Do they listen? Do they respond to sound? Only really in the last year or two have we been able to say that actually, they do. And that, to me, just says there is so much more to discover.

Yeah, it's amazing, because at the end of the day we as humans can communicate with our planet, and that's something amazing for the future, because probably we should redefine our relationship with the planet. So I think we are living, and will live, for sure, amazing times. So thanks so much, Aza, for being with us.

Thank you so much for having me. Take care, thank you. All right, ciao.