Welcome everyone to today's RadicalxChange conversation between Yuval Noah Harari and Audrey Tang. I'm Puja Ohlhaver, and I'm very humbled to be part of this conversation. The title of today's conversation is "To Be Hacked or Not to Be Hacked: The Future of Identity, Work, and Democracy." Joining me from Israel is Yuval. He is a gay historian and author of three wildly popular books, which you've probably heard of or read. The first one is Sapiens, which is a history of our distant past. The second one is Homo Deus, which is about our distant future. And the most recent book is 21 Lessons for the 21st Century, which is about today. Also joining me is Audrey Tang from Taiwan. Audrey is the first digital minister of Taiwan and also the first transgender member of the Taiwanese cabinet. Audrey is also an artist, a hacktivist, and an anarchist. So, given that we're still in Pride Month, I'd like to start the conversation today with gender identity, and in particular ask both of you about the process of self-discovery of your own gender identities and how that has influenced your view of technology. Yuval, we'll start with you.

So the process of realizing that I'm gay and coming out really shaped my attitude, not just towards technology, but towards science and history in general. It first made me realize how little people can know about themselves. I realized that I was gay only when I was 21. And for years, I mean, I look back at the time when I was 15 or 16, and I just really can't understand it. It should have been extremely obvious that I was attracted to boys and not to girls. I think I'm an intelligent person; I should have figured it out, but I just didn't. There was a kind of split in my mind, that I didn't know this about myself. And this is why today, when I look at the development of new surveillance technology, one of the things that most interests me is what happens when somebody out there can know me better than I know myself.
I'm quite sure that if Facebook or TikTok or whatever had existed when I was a teenager, they could have found out I was gay in like two seconds, long, long before I knew that about myself. And what does it mean to live in a world where a corporation or a government can know something so important about me that I don't know about myself? This is one of the big questions I have today about technology and its impact on politics and society.

Audrey, can you tell us about your process?

Certainly. I will first acknowledge that Taiwan is one of the very few, maybe the only jurisdiction in the world to hold a physical parade a couple of days ago, and yesterday too, of the gay pride and transgender and LGBTIQ rights march. In Taiwan, it's only been one year since we legalized marriage equality, the first in Asia, by adopting a very innovative way of legalizing the bylaws but not the in-laws, meaning that in the legal code we make sure that when a same-sex couple weds, they wed as individuals with all the same rights and duties, but not as families, because in Mandarin we have eight different names for aunts and uncles, and these are not affected. And this signifies something that I felt very personally when I was a teenager, because my natural testosterone level is that of maybe an 80-year-old male human being, meaning that when I was 13 or 14 years old, I was somewhere between the average male adolescent and the average female adolescent testosterone levels. And so I was very lucky at the time to have discovered the Web, the Internet, and a lot of gender non-binary and genderqueer people, who informed me that even though I may be alone in my neighborhood of, say, 100 people, even if this is just 1 in 100 or 1 in 1,000, that means there are millions of us on the Internet who can form a non-binary support group, to make sure that our lived experiences can be shared freely.
Later on, when I was 24 years old, I went through a second puberty, the female puberty, which lasted another two or three years. And so it enables me, as someone who works as a politician, to make sure that I understand and empathize with all the different sides, because in my mind there is no half of the world that's different from me. I can empathize with other people's lived experiences because I've been through the different puberties as well. So when we legalized marriage equality, I think it really was an innovation when we discovered this intergenerational way of reconciling our different positions: the older generation relying on family and more group values, community values, and the younger generation on more individualistic values. In our legal code, we make sure that we respect those traditions in a transcultural way. So I often translate the official name of our country as a "transcultural republic of citizens," and that also constitutes my main work.

Great, thank you. Audrey, do you worry about the situation Yuval described, where technology can know ourselves better than we know ourselves, and before we know ourselves?

A lot of my work is to ensure that the social sector, in RadicalxChange terms a data coalition or a data cooperative, owns the means of production, in this case the production of data. That is to say, if people produce data in a way that is passive, that enables a surveillance state or surveillance capitalism, and that will lead to the scenario that Yuval has very much articulated in his worries. But the social sector, that is to say ordinary citizens, can understand what they're collecting. For example, in Taiwan, the leading contact tracing technology, the winner of our coronavirus hackathon, collects people's whereabouts, their temperature, their symptoms and so on, but it never transmits them anywhere. It keeps them strictly within their phone and not anywhere else.
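The pattern Audrey is describing, data that never leaves the phone, shared only through a minimal, single-use disclosure, can be sketched roughly as follows. This is a hypothetical illustration, not the actual Taiwanese app: the class name, field names, and the token scheme are all invented for the sketch.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class LocalTraceStore:
    """All records stay on the device; nothing is uploaded to any server."""
    records: list = field(default_factory=list)   # visits kept only on this phone
    _tokens: dict = field(default_factory=dict)   # one-time tokens -> filtered views

    def log_visit(self, place, date, temperature, symptoms):
        # Full detail is stored locally, including things a contact tracer
        # never needs to see.
        self.records.append({"place": place, "date": date,
                             "temperature": temperature, "symptoms": symptoms})

    def make_one_time_link(self, since_date):
        # Generate a single-use token exposing only the fields a medical
        # officer needs, for the relevant window, and nothing else.
        view = [{"place": r["place"], "date": r["date"], "symptoms": r["symptoms"]}
                for r in self.records if r["date"] >= since_date]
        token = secrets.token_urlsafe(16)
        self._tokens[token] = view
        return token

    def redeem(self, token):
        # The link works exactly once; afterwards the data is gone again.
        return self._tokens.pop(token, None)
```

A contact tracer who redeems the token gets only the filtered view; a second attempt returns nothing, so the data never accumulates outside the phone.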
And when the contact tracers, the medical officers, come to investigate, it generates a one-time link that contains exactly the information contact tracing needs, without divulging any private details about their friends and families, as would often be revealed in a traditional contact tracing interview. This is just a very simple example, but it shows the autonomy people have when they own their own data, when they share it only with their most intimate and trusted friends and families. Together, this intersectional data collaborative can prove much more powerful than any forced "please install this application" technology that a state or a multinational company can impose on society. And so I would argue that Taiwan's successful counter-pandemic effort is based on this kind of social-sector collaborative that owns the data and does not store it in the quote-unquote cloud, but rather only on each other's personal devices.

Maybe if I can say something about that. I definitely don't believe in technological determinism. I don't think that the kind of surveillance capitalism or surveillance totalitarianism that we see developing in some countries, whether it's in the US or in China, is an inevitable outcome of the current technological breakthroughs. I think it's a big danger. The biggest danger really is the rise of a new kind of totalitarianism that we have never seen before in history, simply because it's now technically feasible to follow everybody all the time. Even in the darkest moments of the 20th century, in Stalin's Russia or in Mao's China, it was simply technically impossible to follow everybody all the time and to know them better than they know themselves. If you need a government agent, a policeman, a KGB agent to follow everybody 24 hours a day, you don't have enough agents. And even if you have all the agents, they just produce paper reports about what you do.
Somebody needs to read the reports and analyze them, and that's impossible. Now it's becoming technically feasible, because you don't need human agents: you have all the sensors and cameras and microphones. And you don't need human analysts: you have AI and machine learning and so forth. So it is becoming a possibility, but it's not inevitable. I think if we take the right actions, like what's being done in Taiwan, we can prevent this dystopian scenario from happening. And we saw it in the 20th century: you can use the same technology to build completely different kinds of regimes. You just need to look at South Korea and North Korea, same people, same geography, same history, same culture, using the same technology in completely different ways.

But an even deeper question is what happens if, let's say, we succeed in preventing the rise of digital dictatorships, where some government follows us all the time and knows everything about us. What happens if the data really is collected in a responsible and secure way, and it serves us and not the government or some big corporation? Still, the deep philosophical question is that even in this situation, authority is likely to shift away from humans to algorithms in the most important decisions of our life, like what to study, or where to work, or whom to marry. It's not that some all-knowing government forces me to do something. It's just that I know the algorithm knows me far better than I know myself and can make recommendations for me, and I increasingly just rely on what the algorithm tells me. It improves all the time. The algorithm doesn't need to be perfect; it just needs to be better on average than me in making these decisions, and gradually the authority will shift. Philosophically, I think this is the really big question of our time.
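Harari's "better on average" point is a statistical one: a small per-decision edge compounds over a lifetime of choices. A toy simulation makes this concrete; the accuracy numbers (55% for the human, 70% for the algorithm) are invented for illustration only.

```python
import random

def simulate(n_decisions=1000, human_acc=0.55, algo_acc=0.70, seed=0):
    """Count how often each decision-maker is right over many binary choices.

    The accuracies are made-up numbers; the point is only that a modest
    per-decision edge becomes a large, visible gap over many decisions.
    """
    rng = random.Random(seed)
    human = sum(rng.random() < human_acc for _ in range(n_decisions))
    algo = sum(rng.random() < algo_acc for _ in range(n_decisions))
    return human, algo

human, algo = simulate()
# Over 1000 decisions the algorithm's modest edge becomes hard to ignore,
# which is the mechanism behind the gradual shift of authority.
print(f"human right {human} times, algorithm right {algo} times")
```

The algorithm never needs to be perfect or even very good; it only needs the gap to be experienced often enough that deferring to it starts to feel rational.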
Even if we prevent the dystopian scenario of digital dictatorships, how do we deal with democratic algorithms that serve us but still know us better than we know ourselves?

Maybe we can ignore the outline, and I will just comment on that viewpoint. For the sake of brevity, I'm just going to say "code," but when I say code, please think "algorithm." Code is having exactly the kind of impact Yuval describes, because code is like law, but it's not a law of text; it's like a law of physics in cyberspace, because code determines what can happen and what cannot happen. Technically it cannot happen, or it takes a lot of effort, like being a professional hacker, to make it happen. For most people it cannot happen, because it's pre-regulated by the code. And so code also regulates what is transparent. For example, code can make the state transparent to the citizen, as Taiwan does, or it can make citizens transparent to the state, as the PRC does, and things like that. Every time we deploy code as part of our society, it establishes a normativity that tells us what's legal, what's even thinkable, by design, just like physics. You cannot even think, "oh, I'm going to violate a law of physics today," because that's just not how the world works. That is a very different position from our current text-based normativity, where you can do civil disobedience: you can fully occupy the parliament, as we did in 2014, and then argue that it's legal and convince the judges. The impact, as Yuval said, is that whenever we deploy code, we must have the same kind of access to justice, the same kind of access to open futures, to different interpretations. Either the interpretation is agreed by social norms, which would be a positive impact, or it is set by a few people and basically restricts everybody else's imagination, which would be a negative social impact, even if it is set not by one or two actors but by tens of thousands of programmers.
That is still a kind of restriction, and to me also a negative social impact.

Yeah, I think it's an extremely important point, this comparison between code and physics, because a lot of people today still don't get the enormous power of coders to actually shape reality. So yes, coders can't change E = mc², we can't change that, but social reality is increasingly constructed by these codes. It starts with a very, very simple thing, which existed even in the old days: you go to a government ministry and you fill out some form, and somebody decided that on the form you have to check male or female, and these are the only two options, and in order to submit your application or your form or whatever, you have to tick one. And because somebody, some functionary, decided that the form will have only these two options, this is now your reality.

I often tick both, by the way.

Yeah, but then again, in some systems you can't just tick both. If it's paper... paper is a good example, because paper, in a way, is still more enabling. If you're creative, you get this government form on paper and you tick both boxes, wonderful. But if it's on a computer, then somebody coded the form in such a way that no, you can only tick once, and unless you tick, it doesn't go on to the next screen or whatever. So this is now your reality, and maybe it was some 22-year-old guy in California who did it without even thinking too deeply that he was making this deep philosophical and ethical and political decision that will have an impact on the lives of people all over the world.
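The difference between a form that forecloses options and one that keeps them open really does come down to a few lines of validation code. A minimal sketch, with the field names and option lists invented for illustration:

```python
# A coded form is "law" in the sense that invalid input simply cannot proceed.
BINARY_FORM = {"gender": {"options": ["male", "female"], "max_choices": 1}}

# The same form with the constraint rewritten: more options, free text
# permitted, and no rule against ticking more than one box.
OPEN_FORM = {"gender": {"options": ["male", "female", "non-binary", "other"],
                        "max_choices": None, "allow_free_text": True}}

def validate(form, field_name, choices):
    """Return True if the submission can move on to the next screen."""
    spec = form[field_name]
    if any(c not in spec["options"] for c in choices):
        # Unknown answers pass only if the form allows free text.
        return spec.get("allow_free_text", False)
    max_choices = spec["max_choices"]
    return max_choices is None or len(choices) <= max_choices

# Under the binary form, ticking both boxes is simply not a thinkable input;
# under the open form, the same act is ordinary.
assert not validate(BINARY_FORM, "gender", ["male", "female"])
assert validate(OPEN_FORM, "gender", ["male", "female"])
```

Nothing about the hardware changed between the two schemas; one dictionary entry is the entire "law."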
And you can see it when you input an emoji, those very abstract symbols, like in the movie Arrival, that we all use to communicate now. For a very long time, the default emojis were all male, and you had to pass through a gender selector for one to look like a woman. Only in very recent years, in the last year or two, did the multinationals and the Unicode Consortium, the standard-setters, the code makers, start to say: no, the default person, the face laughing with tears of joy, needs to look gender-neutral by default, and if you want it to look like a boy or like a girl, you have to do additional work, and it must be the same amount of work to make it look like a boy as like a girl. I think that is the kind of norm I'm talking about. If the code makers don't allow for future interpretations, if the maker of the checkboxes doesn't allow for an "other" or non-binary choice, which, by the way, Taiwan provides for people arriving at our airports when they fill out the health check form, then you have to rely on civic hackers, meaning people who imagine different civic futures, to patch it at hand. However, Taiwan is the only jurisdiction in Asia that has complete freedom of assembly, of speech, and so on, so civic hackers will not be punished unduly. In every other place in Asia, just as same-sex marriage is not possible, this kind of civic hacking can often get people in trouble. And to me, that reflects how much a society is willing to look at its algorithmic code as being as flexible as its legal code, with a due process of change.
This issue of the difference between natural law that shapes our life and the rules that we invent is one of the main themes of history. Of course, every culture, every religion claims that their rules, their laws, are the laws of nature, and that those who break the law are doing something unnatural. But this is obviously wrong, because, as you said, if a law is really natural, you simply cannot break it. So if some religion comes and says that for two men to love one another, or for two women to get married to one another, is unnatural, this is by definition wrong. A real natural law, like "you can't move faster than the speed of light," you simply can't break; it's not up to you. Obviously, biology and physics enable two women to love each other or to have sex with one another; it's only human code which says no, no, no, this is wrong, we don't want to allow it.

And you know, the good thing, in a way, about computer code is that in many cases, even though computer code of course has a lot of biases inside it, either programmed intentionally by human engineers or programmed unintentionally, it can in essence be corrected much more easily. If a human being has a bias against, say, gay people or against black people, you can discover that this person or this system has a bias, and you can explain it to people, and people can even agree, and that will still not be enough to change the bias, because the bias comes from some place far deeper than our conscious intelligence; it comes from our subconscious. Now, with computer code, you can say that computers don't have a subconscious: if you find where in the code the bias is encoded, you can change that. In a way, it's much easier to make computer code gay-friendly or LGBT-friendly than to make a human being change their big biases. So this is an interesting point.
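Harari's point, that a bias encoded in software sits in an inspectable, editable place, can be made concrete with a deliberately crude toy scorer. The feature names and weights below are invented; no real system is this simple.

```python
# A toy eligibility scorer with a bias written directly into its weights.
# Unlike a human subconscious, the offending term is a visible, named line.
BIASED_WEIGHTS = {"income": 0.5, "credit_history": 0.5,
                  "is_married_same_sex": -0.8}

# The "fix" is an audit plus a one-line change: drop the protected attribute.
FAIR_WEIGHTS = {k: v for k, v in BIASED_WEIGHTS.items()
                if k != "is_married_same_sex"}

def score(applicant, weights):
    """Weighted sum over whatever features the weights mention."""
    return sum(weights[k] * applicant.get(k, 0) for k in weights)

applicant = {"income": 1.0, "credit_history": 1.0, "is_married_same_sex": 1}
# Same person, same data: removing the encoded bias changes the outcome,
# with no need to retrain anyone's subconscious.
print(score(applicant, BIASED_WEIGHTS))  # penalized
print(score(applicant, FAIR_WEIGHTS))    # not penalized
```

Real fairness auditing is much harder than deleting one feature, since other features can act as proxies for the protected one, but the contrast with human bias stands: the term is locatable and editable.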
Audrey, you mentioned in a previous talk, during the recent COVID crisis, this example of pink masks, and how civic technology facilitated gender mainstreaming and could actually go deep into our biases. Can you tell us a little bit about that example?

Certainly. In Taiwan, we say that social innovation and our pandemic response system are based on the three pillars of fast, fair, and fun. The fast part is a collective intelligence system that literally relies on the most ancient communication technology, the landline. Anyone with a telephone, smart or not, can dial 1922, which is a simple toll-free landline number, and tell whatever they want to the Central Epidemic Command Center, the CECC. One day in April, there was a young boy whose mask rations all happened to be pink, because we rationed medical masks at the time, and when you ration masks you don't get to pick the color. And so he was afraid to go to school, saying, "my classmates may bully me for wearing a pink medical mask." I think one of his family called 1922 to tell the CECC of this problem, and the very next day, in our daily live-streamed press conference where the CECC answers all the journalists' questions, you could see every medical officer, regardless of gender, wearing a pink medical mask. That immediately gained national popularity: the avatars of famous people and famous pages turned pink, and pink suddenly became the most hip color. And so that teaches something about gender mainstreaming, and I think it just made everybody a little bit more transgender, which I think is a good idea. The point here is that people feel they have a stake in the norm, with just a simple phone call and regardless of age; that boy probably isn't of legal age, probably isn't 18 years old. Just through this simple phone call, convincing in a very natural manner and appealing to the CECC's idea of "masks for all": if a few boys don't
wear a mask because it's pink, then it actually is a public health threat to everybody else as well. And because of that, the CECC took on this gender mainstreaming role very quickly, within 24 hours. This fast iteration cycle, this agile response, then makes the social sector stronger and more robust, because instead of waiting for a command from the command center, everybody can actually just participate in the code making. Basically, wearing a mask is a kind of code, and what this code signifies in Taiwan is that it protects me from my own hands: I'm taking care of my own health, I'm washing my hands properly, and wearing a mask reminds me of that and also reminds other people to protect themselves as well. That idea has a higher R value than other ideas, for example "wear a mask to protect others, to respect others" and so on, and pink medical masks just add to the hip factor of wearing a mask. So all together, this increases the R value of ideas, of memes, even more than before, and the CECC is in charge of amplifying those pro-social ideas. This is what I mean by a republic of citizens.

It seems, Audrey, that your view is to take technology and use code to assist us, right, assistive intelligence. And Yuval, you worry, correct me if I'm wrong, it seems like you worry about code codifying our existing biases. Audrey, your solution seems to be a participatory framework combined with fast iterations. Is that how you would characterize the solution?

Yeah, definitely. And it also must be fair and also fun, which I will get to later. But the fast part, yes, is essential, because if the government only responds with what we call patches, right, fixes to the system, every year, or even every four years in the case of votes and elections, which is like three bits uploaded every four years, then there's just not sufficient signal to correct previously biased or wrong code. But if everybody can very freely fork, that is to say,
develop alternate visions, and also merge within a 24-hour cycle, then something magical happens: it enables the few civic technologists to become like civil engineers, because their work will then be used by over half the population, which gives these code makers the same kind of role as the highway makers, the road makers and so on, but with the additional benefit that everybody is able to imagine different futures. And if it gets rough consensus, that is to say, if a lot of people can live with it, then it just turns into the overall new reality for the society in a very rapid fashion, like from pink being sissy to pink being very hip and cool in literally just 24 hours.

I think that the main issue for me, again from a historical perspective, is that democracy gives authority to the desires and feelings of people. This is the ultimate authority in a democracy, and I completely agree that letting people voice their desires and feelings just once in four years is certainly not enough; it's not efficient. But the big challenge we are facing, and will increasingly face in the 21st century, is that now there is the technology to hack human beings, and therefore also to increasingly manipulate their desires and emotions. Of course, throughout history, kings and emperors and prophets and religions always tried to get inside people's minds, understand what's happening there, and manipulate it, and we saw in history mass movements of manipulation, like the totalitarian movements of the 20th century. But ultimately it was inefficient, not only because they didn't have the technology that I discussed earlier to really follow everybody all the time; the main obstacle was simply the lack of biological knowledge. Humans didn't understand human biology, the human brain, well enough to really understand what's happening there. So in the end, humans remained like black boxes, and even somebody like Stalin or Mao or Hitler couldn't really figure out what's happening there.
And now it's not just the breakthrough in computer science; at the same time, the breakthroughs in the biological sciences are opening up this black box. They are enabling us to hack human beings, to understand what's happening inside, and therefore to open completely new ways of manipulation. And once you have something like that, the ability to manipulate at scale the desires and emotions and feelings of millions of people, then simply having a faster iteration of feedback is not necessarily enough. Again, the full ability to hack human beings is still in the future; we are still not there yet. But even what's been happening in the last few years is alarming. You have all these algorithms and apps and devices, and what they are really about is hacking human beings. You have the smartest people in the world working on this problem of how to push our emotional buttons. You have the big corporations, and they say, "look, people are spending 30 minutes a day on our app, on our device, on our platform; we want them to spend one hour; this is your mission for this year." They take the smartest people in the world and give them this task: how to hijack people's attention and keep them on the platform. And these smartest people in the world discovered how to press our emotional buttons, the fear button, the hate button, the greed button, because this is the easiest way to grab people's attention. And looking to the future, again, the threat of a rising dictatorship, a new kind of dictatorship, is a big one, but even if we avoid that, how to deal with the new tools for hacking the human brain, the human mind, that's the really big question. Again, taking the example I began this interview with: if I think about myself when I was, say, 14, and this algorithm analyzes my behavior, let's say it analyzes where my eyes go, like I walk down the beach and the algorithm analyzes whether I focus on cute guys or cute girls, or it analyzes what happens to my eyes when I watch videos or television or
whatever, and it discovers that I like boys more than girls, and it tells it to me, or it uses it to manipulate me in some way. So, you know, if it's a bad manipulation, like Coca-Cola using this knowledge to sell me something I don't need, they show me commercials with sexy guys so I buy the product and I don't know why, then they are using it against me. But the really big issue is, what if the algorithm isn't malign? It's not working in the service of some corporation, and I don't know this about myself, but the algorithm knows it. There is a kind of imbalance here, and what happens then? Should it tell me that I'm gay? Should it kind of expose me slowly to different content that will enable me to realize this about myself? What is the proper kind of relationship with this kind of entity? I'll say one more thing: we had this kind of entity throughout history, in a way, a mother or a teacher. My mother is somebody who, when I was 14, maybe she didn't know I was gay, but she knew a lot of things about me that I didn't realize, and my mother had my best interests in mind when thinking how to use this information about me. And we have thousands of years of experience in building this kind of beneficial parent-child relationship. Now we are suddenly creating a completely new kind of entity that actually knows far more about me than even my mother, and we have no cultural or historical traditions about what kind of relationship I should have with my AI mentor that has all this information about me. I don't want to say this is dystopian or utopian; it's just fascinating, as a historian, to think what kind of relationships will emerge out of this new technology.

Yes, to this point, actually, there were two points: one is the lack of accountability, that was the Coca-Cola example, and one is value alignment, which is the "all watched over by machines of loving grace" question. The first point is easier to address. Taiwan, in our previous presidential election, managed to establish a norm through a completely independent branch of the
government, called the Control Yuan, the control branch, that makes campaign donations and campaign expenses radically transparent, meaning the raw data is published for independent journalists to analyze. And they've been doing this because we, the civic hackers, have been petitioning for this, even doing acts of civil disobedience for it. And when we really started doing that, back in the mayoral election in 2018, we discovered that there was a large chunk missing: the social media advertisements. These were not reported as campaign donations, nor as expenses, and many of them maybe came from outside Taiwan; we don't know, it really is an unaccountable black box. And we read, of course, the reports about how some foreign powers interfered with other countries' elections using hyper-precision targeting technology, exactly the kind that Yuval described: it predicts, in a micro-targeted way, what people's hidden fears and hopes are, caters to those fears and hopes, and targets this very tiny slice of people, trying to persuade them to go vote, or to avoid a certain kind of candidate, or to perform some kind of emotional manipulation. And so we told all the multinationals: okay, look at our Control Yuan; this radical transparency is the Taiwanese norm, and you have two choices. You can either publish your real-time advertisement library, just as our Control Yuan does, in radical transparency, so that people engaging in such dark manipulations will be named and shamed, or you can simply not run political and social advertisements during our election season. Your choice. And we did not pass a law for that; we basically just let them know there would be social sanctions if they violated the Control Yuan norm, our election norm. And so Facebook decided to radically open their ads library, while Google and Twitter and so on simply refused to run political advertisements during our election. So that is a very neat example on the accountability issue, which is, to me, the more minor issue. The value
alignment issue is much larger. Our mothers, fathers, and community members may all offer interpretations, that is to say, their advice to us, which of course has our best interest in mind but is nevertheless colored by their life experiences. And even though those interpretations may be valid, to a growing teenager they also foreclose certain other possibilities, because that's the power of interpretation. And so to me, a way to free ourselves from this value alignment issue is just to have, as a norm, multiple interpretations. Just as you could have many human assistants, each perfectly value-aligned to you, who give accountable explanations whenever they make some decision that is not in your best interest, you would have those different assistants compare notes, and if one of them consistently does things that are not value-aligned to you, at least you have the other assistants to warn you about it. And I think this plurality, instead of a singularity, vision is what I have written into my own job description. Instead of "user experience," we need to think about "human experience." I know some other industry that also uses the term "user"; when you only care about the time that people spend addicted to your technology, when you use the term "user," it's a zero-sum game of attention and time. But if you think of the total human experience, then these different interpretations may add to one another and eventually liberate one from a singular vision of oneself.

I agree that one way to deal with this issue is to have this plurality of actors and viewpoints. And when people think about algorithms taking over, they fear most for democracies, and democracies are vulnerable in things like election manipulation. But people don't realize that in the long run, when you talk about algorithms really taking over, not in the science fiction way of the robots rebelling and trying to kill us, but of algorithms actually gathering more and more power to themselves, then even if you still have a human
president or a human prime minister, actually all the important decisions are being taken by an algorithm that the prime minister cannot even understand. The algorithm comes to the PM and says, "look, there is a huge financial crisis about to happen and we must do this, but I can't explain to you why, because your brain just can't analyze all the data that I have gathered." So even though the PM is still officially the one in charge, actually it's the algorithm running the show. And this is increasingly happening, and the funny thing is that dictatorships are actually far more vulnerable to this kind of algorithmic takeover. If you think about, say, the PRC: I often think that if I had time to write a science fiction novel or movie about algorithms taking over, my favorite setting would actually be, let's say, the communist party of the PRC. Imagine what happens if the party gives algorithms increasing control over the appointment of lower-ranking officials. Not the people on the Politburo, that's too political, too complicated, but let's say appointing all the officials in the local cities and branches and so forth is increasingly done by an algorithm that constantly follows members of the communist party, collects data, analyzes it, and learns from experience. And very soon nobody in the communist party actually understands why the algorithm is deciding to appoint this person or to advance that person, but they trust the algorithm. And very soon the algorithm basically takes over the party, and some day the Politburo wakes up and says, "no, no, no, it's gone too far, we've lost control," but it's too late for them, because the algorithm has already appointed all the lower-ranking members. And this kind of algorithmic takeover, which I think is far, far more likely than the science fiction scenarios of a robot rebellion, can actually happen far more easily in an authoritarian regime than in a democratic one. The only ingredient missing
is for the people higher up to develop enough trust in the algorithm. In a democracy, you need to convince millions of people to trust the algorithm in order for it to take over; in an authoritarian regime, you just need to convince a handful of people, who are already primed to accept this kind of logic that there is somebody who collects all the information and knows best. That's just one possible scenario, but the really deep problem of value alignment is that even if you have a democracy and you have many players, as the algorithms get to know us better and as we listen to them in more and more decisions in life, they increasingly also control our own values. That's especially true if they accompany us from an early age. I'm now 44, so my values have been shaped by decades of experience, and if an algorithm now increasingly makes decisions for me, the algorithm will still find it difficult to change my core values. But if you start with a baby or a young child, and more and more decisions about the life of that child are taken by an AI mentor (again, not an evil mentor that actually serves some corporation, but a mentor which is supposedly really serving the interests of that child), it learns on the way, it changes, and you trust the algorithm, but you don't really know where these decisions are coming from. A human can't go over all the data and understand it, and these decisions also shape the values of the child as she or he is growing up. So again, I don't have a fixed opinion about it. I don't think it's dystopia or utopia; I just think it's a completely new kind of situation that, as a historian, fascinates me. One of the ideas in RadicalxChange is data dignity, and we alluded to it, and to its architecture, earlier at the start of the conversation. I think one of the things to consider here is how we can architect algorithms so that we have this plurality, and data dignity, I think, is useful here. The idea of data dignity is that you separate
control and use of the data. Once you separate those two, the monopoly and monopsony on data that big tech and big governments have breaks down. If you separate control and use, then you can imagine lots of different data cooperatives or collectives which can accept or reject algorithms, and a plurality of algorithms on top of a plurality of these collectives which we choose from. Audrey, do you think that that's a compelling vision of the future that could solve some of the problems which you've always worried about? The idea of a single mentor presupposes a kind of linear development path, and as a junior high school dropout, I have no personal experience with linear paths. I hear people have attended this thing called university. But in any case, what I'm trying to get to is that in Taiwan, our new curriculum, starting last year, invites the children to set their own projects to solve structural problems through problem-based learning, together with teachers. It could be institutional, it could be in the community college, it could be in local elderly learning groups, indigenous language circles, and so on. These are the different circles that a child who cares, for example, about climate change can reach out to, the various circles interested in that thing, instead of relying on the textbook teaching them the truths and facts about climate science, which doesn't make sense unless you have a compelling motivation to understand and solve the problem, right?
So this idea of self-motivated learning is at the core of our new curriculum, and it comes after decades of alternative and experimental education, homeschooling, all sorts of different education experiments in Taiwan, all of them legal. Up to 10% of Taiwanese young people have been able to choose their own curriculum, free of the official one, for the past decade or so. After we learned from what worked and what didn't, we decided that this kind of autonomous joining of the circles that tackle the same problem is the best way to free ourselves from the individual-to-individual competition which tends to dominate East Asian educational thought. Once you get trapped into that linear growth, then of course you will have first place, second place, third place, as if on the same runner's track. But if you're attracted by a systemic problem that you seek to solve, then you basically choose your own course, and you win at the starting point, I guess. Then you meet other people who are also forming new constellations from all the different disciplines and all the different cultures, in a transcultural way. In this way, everybody you meet will be from a very different culture, and they probably won't agree with you about their worldviews, and their algorithms, if they empower themselves with augmented intelligence systems, will probably have very different values. And then the child will be able to form, in a sense, their own constellation. I think this is where the RadicalxChange idea of intersectional data really shines. In the sense of oneself, you can define me as really just a bunch of hashtags. The more dimensions I explore, of course, the more unique this combination is, but in the end it is just the plurality of hashtags that I associate myself with, and through them I curate the kind of data that's useful to these different ideas or different values, all the while remaining true to my own chosen combination, this constellation. That, I think, is at the core of the data dignity idea.
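The separation of control from use that data dignity proposes can be sketched in a few lines of code. This is only an illustrative sketch: the names (`DataCooperative`, `Member`, the hashtag fields) are invented for this example and do not come from any real data-dignity implementation discussed in the conversation.

```python
# Sketch: a data cooperative that holds *control* (members and their
# self-chosen hashtag constellations) and grants *use* per algorithm.
# All names here are illustrative assumptions, not a real system.
from dataclasses import dataclass, field


@dataclass
class Member:
    name: str
    hashtags: set          # the "constellation" of self-chosen tags
    records: list = field(default_factory=list)


class DataCooperative:
    """Control of data stays with the collective; use is granted
    per algorithm, and can be rejected or revoked."""

    def __init__(self):
        self.members = []
        self.approved = set()   # ids of algorithms the co-op accepts

    def join(self, member):
        self.members.append(member)

    def review(self, algorithm_id, accept):
        # The collective, not a platform, decides which algorithms run.
        if accept:
            self.approved.add(algorithm_id)
        else:
            self.approved.discard(algorithm_id)

    def run(self, algorithm_id, fn):
        # Use is only possible after control has been granted.
        if algorithm_id not in self.approved:
            raise PermissionError(f"{algorithm_id} rejected by the co-op")
        return [fn(m) for m in self.members]


coop = DataCooperative()
coop.join(Member("a", {"climate", "music"}))
coop.join(Member("b", {"climate", "civic-tech"}))
coop.review("tag-counter", accept=True)
counts = coop.run("tag-counter", lambda m: len(m.hashtags))
```

A plurality of such cooperatives, each reviewing algorithms on its own terms, is one way to read the "many value-aligned assistants comparing notes" idea from earlier in the conversation.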
I think that the idea of an AI mentor doesn't imply a single trajectory or a particular value system. Just the opposite: it can actually encourage exploration and a wide breadth of interests, even more than traditional education systems. Think about something like, I don't know, music. Let's say I have a particular musical taste. One vision of the algorithmic sidekick, or the AI mentor, is that the AI learns what I like and just gives me more and more of that, and kind of imprisons me in the cocoon, the prison, of my own previous biases and opinions. But the opposite view is: no, because it knows me so well, it also knows the best way to expose me to, say, new musical tastes. It can even calculate that, say, 10% of the music it gives me should be from genres or traditions that I myself would never think of trying (it knows that if you try too much, it backfires), and it can also know the best moment in the day or the week when I'm most open to new experiences. In the traditional way of school, you go to music class; music class is every Tuesday at 11 o'clock, that's it, and this is when you are supposed to be exposed to new kinds of music. But maybe on Tuesday at 11 o'clock I'm very tired, or I'm concerned about something, and this is the worst moment to try and introduce me to jazz or to Indonesian gamelan music. The AI will know that actually at 7 o'clock in the evening on that particular day I'm much more open, so it will try then. In this way, hacking human beings does not necessarily mean imprisoning them in their own previous preferences and biases; it can lead to unprecedented variety and exploration. So it can really go in a lot of different ways. I completely agree. When I said linear progression, I merely referred to the kind of singular pronoun you're using to refer to the AI, the "it" instead of "they," which also could be singular. But anyway, what I'm trying to say is that, for example, my
personal phone is a feature phone. It doesn't even have a touch screen, so I don't get addicted to it. I don't know about you, but I find touch screens very addictive, and I don't really like being addicted to that surface. Being a feature phone, it deliberately restricts my input bandwidth to the device, so that the device will probably never have sufficient bits about my preferences to make the kind of exploratory judgments or interpretations that you've just described, which is, in technical terms, a blended volition of my different moments, or across my communities, and so on. When it tries to wildly guess my preferences, extrapolating my volition, because my input bits to it are so low, it invariably gets it very wrong, hilariously wrong, and so I will not pay much attention to it. So this is kind of like wearing a medical mask. It protects; I'm not talking about biological germs and viruses, I'm using it as an analogy. I use, for example, the Facebook Feed Eradicator, which is like a mental medical mask. If you install this plugin, it removes from the Facebook app the parts of the feed that are autonomous. That is to say, if you intentionally do something, that's still possible: you can still do searches, view live streams, or whatever. But all the unpredictable parts, the ones that push your emotional or dopamine or whatever buttons, disappear, replaced with a Zen saying, or an Adler saying, or whatever saying. What I'm trying to say is that before society developed a norm of counter-spam, of people flagging things as spam, there were add-ons, for lack of a better name, things that you could install by yourself like personal protective equipment. Eventually people figured out the norms around spam, and nowadays we don't worry about spam that much, because we understand that our attention is too precious to give to the spammers. So I think it is either a vicious cycle of you giving it more attention, so the scammers have more bits to work with, or, if you deny
them the initial contact, then you protect yourself and your own community from the ripple effects. Once the infodemic has an R value under one, these bad ideas, or malign ideas, will not spread. And even so-called pro-social ideas, if they are hacking into our automatic systems, will be kept away, because we will have sufficient room to breathe with our own conscious systems, which is the human mind's moderation system, in theory. So I'm going to switch gears a little bit in the conversation and, given the current COVID crisis, shift over to talk about some of the global problems that we're worried about. Yuval, you have three on your list: AI, climate change, and nuclear weapons. I'm not sure if you've added pandemics to that list. But one of the remarkable things about this crisis has been Taiwan's exceptional performance in suppressing the virus without a lockdown and without community spread. It is a national narrative and a success for Taiwan, but it's a global problem. So Audrey, my question for you is: what was the narrative of Taiwan's success? Do you think it was a shared nationalist identity that pulled the Taiwanese together, or do you think it was a shared participation in solving the problem? And what can we extrapolate around the world to replicate the success?
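Audrey's earlier point, that an infodemic with an R value under one dies out, can be illustrated with a minimal branching-process simulation. This is a sketch only: the model, the R values, and the generation counts are illustrative assumptions, not figures from the conversation.

```python
# Minimal branching-process sketch: each "infected" sharer passes a rumor
# on to a Poisson-distributed number of new people with mean R.
# Parameters are made up for illustration.
import math
import random


def spread(r, generations=20, seed=0):
    """Return the total number of people ever reached, starting from one sharer."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm; adequate for small lam.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= threshold:
                return k - 1

    active, total = 1, 1
    for _ in range(generations):
        new = sum(poisson(r) for _ in range(active))
        total += new
        active = new
        if active == 0:   # the rumor has died out
            break
    return total


below_one = spread(0.8)   # R < 1: the rumor tends to fizzle out
above_one = spread(1.5)   # R > 1: it tends to keep growing
```

Averaged over many runs, totals with R below one stay bounded (roughly 1/(1-R) people), while totals with R above one grow with each generation, which is the intuition behind denying malign ideas that initial contact.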
There are actually two crises at the same time. There is the pandemic, which is the biological one, and then there is the anxiety and fear and outrage and conspiracy theories and panic buying that are collectively referred to as the infodemic. A good analogy is that with those conspiracy theories, if you do not put out the vaccines of the mind, that is to say, deliberate, intentional communication materials of basic scientific understanding, then people suffer from an epistemic void. They don't really know what's going on, and they tend to fill it in with whatever their mental projection is, which tends to divide people more and make things even worse. So in Taiwan, very early on, we established the fast, fair, fun principles. On fast: anyone who cares enough to ask anything about the counter-pandemic strategy can call 1922 and get their questions answered, or journalists will ask their questions in a much more elaborate way at the daily 2 p.m. briefing. But even that is not sufficient to quell people's fears, for example about the lack of protective gear and protective masks. Very early on we had panic buying of medical masks, when they were first distributed in convenience stores and pharmacies, and a civic technologist by the name of Howard Wu in Tainan City developed a very simple idea. He coded a map on which he invited his friends and families to report which parts of the city still had masks in stock, so you could see that the green dots were the stores that still had masks in stock and the red ones were those that had run out. Just by this very simple gesture, he made sure people could self-report where they would spend less time queuing needlessly, and queue fruitfully instead. But he didn't anticipate that this would get national press attention, and so he very quickly had to shut the website down, because he owed Google, whose Maps API he had used, 20,000 US dollars after just two days. And so he
had to shut it down. But one of the people using his app was me, and so I talked to our premier and said: we need to trust citizens with open data. I think this is one of the most interesting things in Taiwan's history of building open data and open APIs. When we switched to rationing the masks through the pharmacies, anyone could use their single-payer national health card, which covers 99.9% of the Taiwanese population, to get those masks at a nearby pharmacy, and there is a machine-to-machine system that publishes the stock level of every pharmacy every 30 seconds. More than 100 different tools were developed by civic technologists on top of it: maps, chatbots, voice assistants, and so on. It became essentially a distributed ledger, one you can only update every 30 seconds, but with no possibility of going back in time to change the numbers. When people queuing in line finally got their three masks per week at the time, or nine masks per two weeks later on, they expected, after a couple of minutes, to see on their phone that the stock level of that pharmacy had depleted by nine, or ten if they were a child. If it didn't deplete, or rather increased, then they would just call 1922 right there and report the anomaly to the CECC. I spend the time to talk about this in detail because it captures what I think is at the root of the idea of a data collaborative: everybody holding each other accountable, everybody ensuring that there really is a fair distribution, because they can witness it for themselves. Independent analysts can write more dashboards to show there is oversupply in certain areas and undersupply in others, or that there are people who work very long hours and so cannot collect at a pharmacy, so we have to work with convenience stores that are open 24 hours a day. It also became an international narrative, because after we developed this code, which is all open source by the way, people in South Korea used the Taiwan model to convince their government that publishing the numbers every week, or every day at
the end of the day, is not enough: you have to provide a real-time API, just like Taiwan did. And so the first mask rationing availability map in South Korea was written by Fin-Jeong-Kyeong in Taiwan, even though he doesn't speak Korean; he speaks JavaScript, which is what's important here. This enabled a new breed of civic technologists who work as civil engineers, because more than half the population, over 10 million people, used their work. So fairness of all kinds, I think, is at its core. At this moment, more than 90% of people have used our mask rationing system, and the remaining ten percent, maybe they already had plenty of masks in storage before the pandemic; they can also use that app to dedicate their uncollected quota to international friends for humanitarian aid. You can actually see the names of 300,000 people in Taiwan who dedicated more than 5 million medical masks internationally. And when high-level officials started wearing those masks printed with "Made in Taiwan," people approached us and said: we also want the blueprint, so that we can automate the production of masks in a small automated factory, and we shared the blueprints with many other jurisdictions as well. So the fairness is definitely not just national; it is really an international perspective. That's the fair pillar of fast, fair, and fun. Taiwan, on climate change, which is another obvious global externality, has also had an interesting social innovation. Audrey, can you maybe tell us a little bit about the distributed sensors operated by citizens and run on a distributed ledger? Is that a solution we can also replicate? Definitely. The mask map was able to be prototyped so quickly because there was already another map, called the air map, in place. People in Taiwan voluntarily join this distributed ledger and basically dedicate their school, company, or whatever to measure the climate, to measure for example air pollution levels and so on, and upload
it to the Civil IoT system, which is powered by distributed ledger technology. What this means is that if you live in a place with some air pollution and you want to know whether it's from mobile, immobile, or overseas sources, you reach not for the 100 or so very high-precision weather stations in the country, but rather to your primary schools, and to interesting observations by even high schoolers in their data stewardship process, using those very cheap (less than 100 US dollars) AirBoxes connected to the 4G network, which is 16 US dollars per month for unlimited data everywhere in Taiwan, because we have broadband as a human right. All of this enables a kind of collective intelligence that contributes to climate science. And because at least one copy of the ledger is in the national high-speed computing center, the NCHC, which runs a top-20 supercomputer in the world (top 10 if you count the energy and carbon footprint), this supercomputer can take any junior high schooler's code, and if it's better code to predict the air pollution, or to predict the climate model and things like that, it automatically gets access to the entire Civil IoT system, without the junior high schooler having to download any data to their personal computer. This, in a sense, is democratizing even very basic things like climate research and climate science to the citizen scientists. It's another application of our fairness principle, and it's why we could get a mask map running so quickly. Yuval, does this make you more optimistic about these global challenges that we face? Yeah, I think that with many of these global challenges, the solution has to be global, but of course it's rooted, it's based, in individual countries. The most important thing is not to fall into the trap of thinking that there is a contradiction between nationalism and globalism, and that we need to choose between them.
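Stepping back to the mask map for a moment: the citizen-side accountability check Audrey describes, where a buyer expects the published stock to drop by exactly their purchase within a couple of minutes, can be sketched in a few lines. This is an illustrative sketch only; the CSV field names and snapshot data are assumptions, not the real published schema.

```python
# Sketch of the citizen-side anomaly check against a 30-second open-data
# feed of pharmacy mask stock. Field names ("pharmacy_id", "adult_masks")
# and the sample snapshots are made up for illustration.
import csv
import io

SNAPSHOT_BEFORE = """pharmacy_id,adult_masks
A123,480
B456,120
"""

SNAPSHOT_AFTER = """pharmacy_id,adult_masks
A123,471
B456,120
"""


def parse(snapshot):
    """Parse one CSV snapshot into {pharmacy_id: stock}."""
    reader = csv.DictReader(io.StringIO(snapshot))
    return {row["pharmacy_id"]: int(row["adult_masks"]) for row in reader}


def purchase_reflected(before, after, pharmacy, bought):
    """True if the published stock dropped by at least the amount bought,
    i.e. the feed is consistent with the buyer's own purchase."""
    b, a = parse(before), parse(after)
    return b[pharmacy] - a[pharmacy] >= bought


# A buyer of 9 masks at A123 sees the stock drop from 480 to 471: consistent.
ok = purchase_reflected(SNAPSHOT_BEFORE, SNAPSHOT_AFTER, "A123", 9)
# A buyer at B456 sees no change: an anomaly worth reporting (e.g. to 1922).
anomaly = not purchase_reflected(SNAPSHOT_BEFORE, SNAPSHOT_AFTER, "B456", 9)
```

Because every consumer of the feed can run a check like this independently, the 30-second append-only publication acts much like the distributed ledger Audrey describes: fairness becomes something anyone can verify, not something to be taken on trust.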
There is no contradiction. Nationalism is about loving your compatriots, not about hating foreigners, and in many situations, like in this pandemic or with global climate change, if you really love your compatriots and you want to take care of them, you have to cooperate with foreigners. So to be a good nationalist, you also have to be a globalist; there is no contradiction, and I think we are seeing it with initiatives like the one we just heard about in Taiwan. Another very important thing: some people think that to deal with these emergencies, whether the pandemic or global climate change, we need some kind of authoritarian regime that will tell everybody what to do, because otherwise there is no way to reach a consensus. But the example of Taiwan proves the opposite, and not just Taiwan. In this pandemic, it's true that the authoritarian PRC has dealt with the epidemic better than the democratic USA, but these are not the only examples. Many of the countries which have dealt with the epidemic best, whether in East Asia, like Taiwan and South Korea, or New Zealand or Greece or Germany, are democracies, because generally a well-informed and self-motivated population is far more efficient than an ignorant and policed population. With a well-informed population, in democratic countries, instead of wasting resources on policing the people, you can actually benefit from their initiative, and this is the best way forward. We are actually coming to the end of our time here, so I think I will just ask a final question, and that is about narratives for the future. Yuval, you are a medieval historian looking at the big picture, and Audrey, you are a technologist hacking the present. How do we develop a new shared story for the future, without erasing our unique individual attributes and differences, to solve these global problems? Are we ushering in a new renaissance, and what's the narrative of that renaissance? Audrey, I'll start with you. Well, certainly, for many people who worry about Taiwan's future,
I will start with the island. People often ask me: where is Taiwan going? Where are we going as a nation, as a country? And I often say that it's very predictable: the tip of Taiwan, the peak of Taiwan in the indigenous language, or the Jade Mountain, grows every year by 2 centimeters, sometimes 3 centimeters. So we are growing towards the sky; we are growing skyward. That's the geological answer. But why does the peak of Taiwan grow? Because we are caught between the Eurasian Plate on one side and the Philippine Sea Plate on the other, and they bump into each other all the time, causing endless earthquakes. Because of that, we have learned to make our buildings resilient to the earthquakes, but also our ideascape resilient. In Taiwan, you can get a lot of people, not just academics but everyday practitioners, arguing for a PRC-style authoritarian control of the data; you can get people arguing equally strongly for a social infrastructure, GDPR-style, protecting a person's privacy interests against the surveilling state, from the European viewpoint; or you can get people arguing, again very strongly, from the US-based viewpoint of basically an asset, an oil-extraction-based idea around data, and so on. All of these ideas co-exist in Taiwan, and just as in the very beginning, when I said we legalized marriage equality by legalizing the bylaws and not the in-laws, we always manage to find innovations that capture the common values out of the different positions. That, I think, is the true vision of sustainability: working for the benefit of Homo sapiens seven generations down the line, because that's what matters, and what doesn't matter is the kind of zero-sum games people play at this present point using their own viewpoints. Taiwan benefits from those plural viewpoints, each with their own AI sidekicks, I'm sure, and that actually frees us from a single dominating, overarching narrative, the same way as when we have more than 20 national languages. So I actually think that this Taiwan model is not
confined to Taiwan. We can see many similarly minded people who look past zero-sum games in various different ways. RadicalxChange, for example, works with market power for social benefit, or the other way around, you don't know, but the idea is that it looks past the traditional divides, the false dichotomies, and sees them rather as different dimensions, so that you can develop on both dimensions and reach a higher plane of existence, if you will. I think that is also humanity's future: we will benefit from the plurality of civilizations and indeed grow skyward. I would say that humans are storytelling animals. We rule this planet because we are the only animal, as far as we know, that can create imaginary stories and believe them, and this is the key to cooperation among humans. We cooperate because we believe in imaginary stories about gods and nations and money, even though these things exist only in our imagination, only in our own minds. And this is not bad; this is the bedrock of almost everything we do. Money obviously has no objective value; it's only in our minds that money has value, in contrast to, say, a banana, which has an objective value: I can eat it and it sustains me. But that's not bad; without money we couldn't have trade networks like we have today. The key thing is to create stories that serve us without being enslaved by them. The danger humans constantly face is that they come up with some big story to help organize society, and then they forget it's just a story we invented; they get trapped in it, and they start harming themselves or others in the name of the story. Think about something simple, like a game, like football. Obviously we invented football, and it's fun, nothing bad about it, but if you start beating up or killing people because you lost a game, then that's a problem. It's the same when we look to the future. We need to create new stories to unite humankind, but we have to be extremely careful to remember
that it's all done in order to alleviate suffering. I would say that the test is reality: reality is still there, behind all the codes and all the stories, and I would define reality by suffering. If you want to know whether something is real or not, whether the hero of your story, be it a nation you believe in, or some god, or a corporation, or whatever, is real, ask whether it can suffer. A nation cannot suffer. Money can't suffer: when the dollar loses its value, it doesn't suffer. Computers too: code, as far as we know, doesn't suffer. So whatever story we create in the 21st century in order to deal with the new challenges, we should constantly ask ourselves this question: who actually suffers? If we remember that everything we do is in order to alleviate that suffering, then we are on safe ground. That's a very powerful way to interpret it, which is very enlightening. And, as an oyster vegan, I will not go into a debate of whether oysters are real or not, of whether they can suffer or not. But I think what really makes sense is to empower the people, and by people I mean any being that can suffer: to empower the people closest to the suffering. If we keep coding to empower the people who are closest to the pain, who are indeed suffering, then I would argue that they become hackers, in the civic-hacking sense: they need not be restrained by their biology, because it's Pride Month after all, or by their social standing, or even by other old stories that people merely repeat but do not co-create. Liberated from those old stories, they become story weavers who can then determine a better destiny for everyone, in the sapient kind of way. But if we concentrate power in the people who are feeling the least suffering, people who are already enjoying too much of a hedonistic lifestyle, then we are in real danger, because even though hedonism is not zero-sum, it tends to reinforce itself into a self-trapping cycle. So I would also say that to
hack or to be hacked is not the question at the individual level; rather, it is a question at the societal level. And just as we keep looking at the Gini index, we can look at a code weavers and story weavers index: of how much the individuals who are closest to the pain and suffering can co-create the norms and the code that we live by. I fully support that; that's a very, very good way of putting it. Wonderful. Thank you, Yuval; thank you, Audrey. Audrey, you have this beautiful quote, which is sort of your own story. Would you mind sharing it? The one on "the singularity is near," but I won't spoil it. Sure, sure. OK, alright. So yeah, it's my job description, actually. Three and a half years ago, when I first became digital minister, people often confused digital with IT, information technology, or ICT, adding communication technology. I kept telling people that technology is about talking to machines, and digital is about forming new possibilities in societies, but it's hard to tell those two apart, so I wrote a poem, or a prayer really, as my job description. I will read it at Puja's request, and it goes like this:

When we see the Internet of Things, let's make it an Internet of Beings.
When we see Virtual Reality, let's make it a Shared Reality.
When we see Machine Learning, let's make it Collaborative Learning.
When we see User Experience, let's make it about Human Experience.
And whenever we hear that the Singularity is Near, let us always remember: the Plurality is Here.

Very beautiful. Thank you very much. Thank you, Audrey; thank you, Yuval. I hope this has been enlightening for both of you; it's certainly been an honor and an experience for me, and I wish you both a wonderful end to Pride Month. Yes, live long and prosper.